AppEngine Flexible with custom runtime - logging challenge

Let's say my custom runtime uses a container with a bash process in it.
Dockerfile snippet:
ADD crontab /etc/cron.d/zip-splitter
RUN crontab /etc/cron.d/zip-splitter
RUN chmod 0644 /etc/cron.d/zip-splitter
CMD ["/var/local/zip-splitter/entry.sh"]
In entry.sh I have:
#!/bin/bash
#
echo "Starting cron in the background"
cron -f -L 0 &
#
# Respond to liveness & readiness checks from AppEngine
#
echo "Starting gunicorn"
cd /var/local/zip-splitter && gunicorn -b :8080 main:app
The trouble I am having is with the jobs scheduled by cron: how do I get stdout/stderr from those jobs to reach my GCP console logs?
I have tried:
- piping stdout & stderr to the Linux "logger" command
- directing stdout & stderr to local files in the container under /var/log
- using "gcloud logs" (couldn't get nice log lines)
Thanks in advance.
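One pattern worth trying in Docker-based runtimes (a sketch only; job.sh and the schedule below are placeholders) is to point each cron entry's stdout/stderr at the file descriptors of PID 1, whose output the Flex logging agent already collects from the container:
# /etc/cron.d/zip-splitter (sketch): send job output to the container's
# main process, so it reaches the GCP console logs like gunicorn's does.
*/5 * * * * root /var/local/zip-splitter/job.sh >> /proc/1/fd/1 2>> /proc/1/fd/2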

Related

Plesk-Scheduled-Tasks reporting "No such file or directory"

I have a working CentOS/Plesk (18.0.40 Update #1) environment running Plesk-Scheduled-Tasks with no problems, and a new machine that should be a duplicate of it (Plesk 18.0.42 Update #1) that is failing to run the Plesk-Scheduled-Tasks (reporting "No such file or directory" on all the tasks I have added).
To eliminate as many permissions factors as possible, I am testing a scriptless task that just runs "whoami": it works on the original machine but shows a "-: whoami: command not found" error on the new one.
Note that I am declaring tasks at the domain level. If I add a top-level task (where it prompts for the system user), it can use root and therefore works, but I do not want these tasks to run under root.
Clicking "Run Now" gives the following:
Hiho.
Scheduled tasks, and also shell access if it's enabled for your subscription, mostly run chrooted, so you only have a minimal set of commands available there.
If you open your subscription via an FTP client you should see a bin folder in there. The bin folder contains all the commands you are able to use in the chrooted shell.
Example on one of my subscriptions:
bash cat chmod cp curl du false grep groups gunzip gzip head id less ln ls
mkdir more mv pwd rm rmdir scp sh tail tar touch true unrar unzip vi wget
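Since id appears in that chroot bin list while whoami does not, a scheduled task can get the same information without leaving the chroot (a sketch):
# Inside the chrooted shell whoami is missing, but id is available:
id -un   # prints the effective username, equivalent to whoami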

Bash BG Process in Subshell: Display Output and Also Pipe to Command (Docker CMD Startup Shell Script)

Direct Question
I am starting a command in a subshell, in the background. How can I pipe its output to a foreground process while also displaying it on-screen?
( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started"
This command works, but I cannot see the output of sqlservr on-screen.
It is critical that the script waits for SQL Server to print "Service Broker manager has started" before proceeding.
Use Case and Source Code
This is the current setup. Files are abbreviated.
--- Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
CMD ["/bin/bash", "dockerrun.sh"]
--- dockerrun.sh
# start DB and wait for it to be up.
( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started"
# restore the DB from backup file
~/sqlpackage/sqlpackage #args
# keep this script running, otherwise Docker will stop the container.
while sleep 1000; do :; done
If you want to keep the user informed, you can use tee to copy all the output to stderr:
( /opt/mssql/bin/sqlservr | tee /dev/stderr & ) |
grep -q "Service Broker manager has started"

Mounting a GCS bucket on AppEngine Flexible Environment

I am trying to mount a GCS bucket on AppEngine Flexible Environment app using gcsfuse.
My Dockerfile includes the following:
# gscfuse setup
RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud.sdk.list
RUN echo "deb http://packages.cloud.google.com/apt gcsfuse-jessie main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN wget -qO- https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update && apt-get install -y --no-install-recommends google-cloud-sdk gcsfuse strace
RUN gcsfuse --implicit-dirs my_bucket my_dir
I took most of this from here. It's pretty much just the standard way to install gcsfuse, plus --no-install-recommends.
If I start the app this way, it does not mount the bucket. This was not too surprising to me, since mounting didn't seem like a supported feature of the flexible environment.
Here is the confusing part. If I run gcloud app instances ssh "<instance>", then run container_exec gaeapp /bin/bash, then gcsfuse my_bucket my_dir works fine.
However, if I run gcloud app instances ssh "<instance>" --container gaeapp, then gcsfuse my_bucket my_dir fails with this error:
fusermount: failed to open /dev/fuse: Operation not permitted
This is the same error I get if I run gcsfuse as a subprocess in my main.py.
Based on this unresolved thread, I ran strace -f and saw the exact same problem as that user did, an EPERM issue.
[pid 59] open("/dev/fuse", O_RDWR) = -1 EPERM (Operation not permitted)
Whichever way I log into the container (or if I run a subprocess from main.py), I am user root. If I run export then I do see different vars, so there is some difference in what's being run, but everything else looks the same to me.
Other suggestions I've seen include using the gcsfuse flags -o allow_other and -o allow_root. These did not work.
There may be a clue in the fact that if I try to run umount on a login that cannot run gcsfuse, it says "must be superuser to unmount", even though I am root.
It seems like there is probably some security setting that I do not understand. However, since I could in theory get main.py to trigger an external program to log in and run gcsfuse for me, it seems like there should be a way to get it to work without having to do that.
RUN commands create a new layer in your image, so you're actually running gcsfuse during image creation rather than at container start, which the Flex build system doesn't like.
I'm not sure why shelling out in the application didn't work. You could try 'sudo'ing it in the Python subprocess, or push it out of the application code by adding the gcsfuse setup to the ENTRYPOINT in the Dockerfile.
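A sketch of that ENTRYPOINT approach (entry.sh is a hypothetical wrapper, and python main.py stands in for the real start command; note the mount will still fail unless the container can open /dev/fuse, which Flex may not allow):
#!/bin/bash
# entry.sh: mount the bucket at container start rather than at image
# build time, then hand off to the app. Wire it up in the Dockerfile
# with: ENTRYPOINT ["/entry.sh"]
gcsfuse --implicit-dirs my_bucket my_dir
exec python main.py   # placeholder for the real app start command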

Using psexec.exe in jenkins, handle is invalid

I am using Jenkins on a Windows 7 system. I would like to use it to execute a batch script on a remote Windows system; the script will be used to flash a development board and run some tests. I came across psexec.exe, which works well from a command prompt window: I can connect and run the script without any issues. But when I try to have Jenkins do it, I get the following output:
PsExec v2.11 - Execute processes remotely
Copyright (C) 2001-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
The handle is invalid.
Connecting to ABCDEFG...
Couldn't access ABCDEFG:
Connecting to ABCDEFG...
Build step 'Execute Windows batch command' marked build as failure
The command I am using in both cases is:
psexec.exe \\ABCDEFG -u "DOMAIN\username" -p "password" "C:\test.bat"
The user associated with username has administrator privileges on the remote system (ABCDEFG is not the real name of the system).
Can anyone help me figure out why it is not working through Jenkins? Or, is there an easier/better way to execute a batch script on a remote Windows system through Jenkins?
Thanks to all your help, especially Technext, I have a solution.
I needed to run "services.msc", find "Jenkins", right-click it, and go to "Properties". Once the Properties window appeared, I clicked the "Stop" button to stop Jenkins, opened the "Log On" tab, entered my username and password (the username I used when running through the command prompt), and started Jenkins again. That got rid of the "handle is invalid" message in Jenkins.
Update:
A better solution was to go to the remote system that psexec.exe connects to, open Control Panel > User Accounts > Give other users access to this computer, click "Add...", and type in the username and domain Jenkins uses to run its commands (to find this, open Jenkins in a browser window, go to Manage Jenkins > System Information and look for USERNAME and USERDOMAIN under Environment Variables). Make sure you give it Administrator rights, then click OK. Now psexec.exe shouldn't have the "handle is invalid" issue.
Sorry, I don't have enough reputation to comment, but is the single backslash a typo? Since
The handle is invalid.
probably means that the computer address is invalid. Try
psexec.exe \\ABCDEFG -u "DOMAIN\username" -p "password" "C:\test.bat"
Notice the two backslashes to access a locally mapped computer.
Otherwise, if that does not work, I recommend the @file syntax:
psexec.exe @servername.txt -u "DOMAIN\username" -p "password" "C:\test.bat"
where servername.txt is a text file containing only the server names, one per line. The file parameter handles the backslash formatting for you.
Example servername.txt:
ABCDEFG
COMPUTER2
EDIT: some quick googling also found that it can be related to Windows security.
Check that a simple restart of the remote machine doesn't solve the problem. Also, adding the parameters -h and -accepteula may help. Modified command:
psexec.exe \\ABCDEFG -u "DOMAIN\username" -p "password" -h -accepteula "C:\test.bat"
I execute the code below from a Jenkins pipeline Groovy script to connect a dynamically created VM as a resource on the Jenkins master with 4 executors. You can change the number of executors based on your requirement.
bat label: 'ConnectResource', script: """
@echo off
C:\\apps\\tools\\psexec \\\\${machine_ip} -u ${machine_ip}\\${machine_username} -p ${machine_password} -accepteula -d -h -i 1 cmd.exe /c "cd C:\\apps\\jenkins\\ & java -jar C:\\apps\\jenkins\\swarm.jar -master http://pnlv6s540:8080 -username ${jenkins_user_name} -password ${jenkins_user_password} -name ${machine_ip}_${BUILD_NUMBER} -labels ${machine_ip}_${BUILD_NUMBER} -deleteExistingClients -disableClientsUniqueId -executors 4" & ping 127.0.0.1 -n 60 > nul
"""

How to bind an app to a user when they connect to the server with a terminal

My app has now been packaged as a product, which will be sold with a PC with a Linux system installed. I will create a new user account for the customers, but I want to bind an interface-like app to that user, so that when my customers log in via terminals the selected app runs automatically, and when the connection ends, the app quits the same way.
I know, maybe this can be implemented programmatically... but...
Do you have any suggestions?
Thanks in advance, all help appreciated.
As mentioned by AProgrammer, you can run your app as the user's shell or launch it from the profile, as in this example:
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
# if running bash
if [ -n "$BASH_VERSION" ]; then
# include .bashrc if it exists
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
fi
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
PATH="$HOME/bin:$PATH"
fi
# run your app here
exec myapp
If you have your app started by xinetd then you can have it start up on connect. On disconnect your app will be sent a SIGHUP, so you can catch that and shut down.
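A minimal xinetd service entry for that approach might look like this (a sketch; the service name, port, and paths are placeholders):
# /etc/xinetd.d/myapp (sketch): start myapp for each incoming terminal
# connection as the customer user; per the answer above, the app should
# catch SIGHUP to shut down cleanly on disconnect.
service myapp
{
    type        = UNLISTED
    port        = 5555
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = customer
    server      = /usr/local/bin/myapp
    disable     = no
}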
The program executed at terminal login is the user's shell, as determined by a field in /etc/passwd. You could either install your program as the shell, or arrange for your program to be executed by the shell's start-up scripts (~/.profile or ~/.cshrc, depending on the shell).
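And a sketch of the login-shell route (the path /usr/local/bin/myapp and the user name customer are placeholders):
# Register the app as a permitted login shell, then assign it to the
# customer account. When the customer logs in over a terminal the app
# starts; when the app exits, the session ends.
echo /usr/local/bin/myapp >> /etc/shells
usermod -s /usr/local/bin/myapp customer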
