GCP: if __name__ == '__main__' NOT WORKING - google-app-engine

I am trying to run a Python application on Google Cloud Run on GCP and the code includes an
if __name__ == '__main__':
statement. For some reason, the code after this statement does not run and
print(__name__ == '__main__')
returns 'False' while
print(__name__)
returns 'main'.
When I run the code in a Jupyter notebook,
print(__name__ == '__main__')
returns 'True' while
print(__name__)
returns '__main__'.
Why does (i) print(__name__ == '__main__') return 'False' and (ii) print(__name__) return 'main' when the code is run on Google Cloud Run? And how does one fix this issue?

The value of __name__ depends on how the file is being used. Consider the following file named test.py:
# test.py
print(__name__)
If we run the file directly, we get:
$ python test.py
__main__
This will print "__main__" because the interpreter is running the file as the main program.
Instead, if we import the file:
$ python -c "import test"
test
This will print test as that's the name of the module we're importing.
I'm guessing that you have your Cloud Run application in a file named main.py, and that you're using gunicorn as an HTTP server. So when you specify something like:
CMD exec gunicorn --bind :$PORT main:app
This is telling gunicorn to import the app variable from the main.py file, and thus __name__ will always be "main" in that file.
Depending on what you're doing in this if __name__ == "__main__" block, you could eliminate this check entirely and run the code inside it whenever the file is imported instead.

Before executing code, the Python interpreter reads the source file and defines a few special global variables, one of which is __name__.
If the interpreter is running that module (the source file) as the main program, it sets __name__ to the value "__main__".
If the file is being imported from another module, __name__ is instead set to that module's name.
A module is a file containing Python definitions and statements; the file name is the module name with the suffix .py appended, and the module's name is available through the __name__ global variable.
The section on modules in the Python tutorial has examples illustrating this process.
This would explain why the check returns True when you run the code from a Jupyter notebook but False when the application runs on Cloud Run: in the notebook the code executes as the main program, whereas on Cloud Run it is imported as a module.
Could you please specify how the script is being run and provide the code, as requested by @John Hanley?

Related

Icinga2 check_file_age always returns File not found

I'm using check_file_age command with service created in icinga2 director. It always returns file not found.
FILE_AGE CRITICAL: File not found - /root/last-backup
The file exists on the server and the check returns OK if run in a terminal:
~ '/usr/lib/nagios/plugins/check_file_age' '-c' '95000' '-f' '/root/last-backup' '-w' '90000'
FILE_AGE OK: /root/last-backup is 70052 seconds old and 11 bytes | age=70052s;90000;95000 size=11B;0;0;0
If I check the debug.log, the command returns exit code 2.
The problem is one of permissions: the agent is not able to access the file.
Move your file to a different folder that the agent can read, e.g. /var/backups/last-backup.

Save an environment variable without using /bin/bash

I have a program that creates a environment variable called $EGG with this code
memcpy(buff,"EGG=",4);
putenv(buff);
system("/bin/bash");
The value of buff is used to create the environment variable, which I then use as $EGG. But to use it, I apparently must call system("/bin/bash"); otherwise I don't find my $EGG variable.
Is there a way to save my environment variable without calling /bin/bash?
Short answer: You cannot modify an existing environment the way you try.
Background: The program you use to create the environment variable EGG in gets its own environment on start-up (typically as copy of the environment of the process that starts the program). Inside the program's very own environment EGG is created.
When a program ends, its environment is gone as well, and with it everything that was created there.
To modify an environment programmatically do not write a (C) program but use a script.
Using bash this could look like this:
#!/bin/bash
export EGG=4
save this as, for example, set_egg_to_4.sh. Note that simply executing the script (./set_egg_to_4.sh) runs it in a subshell whose environment vanishes when it exits, so the variable would not survive; instead, source it so it runs in your current shell:
$ source set_egg_to_4.sh
(or equivalently: . set_egg_to_4.sh). Then test for EGG:
$ echo $EGG
4
To "permanently" set an environment variable, add it to your .bash_profile file.
export EGG=4
This file is sourced each time you start a login session, so EGG is added to your environment every time. Any process that inherits (directly or indirectly) from that shell will also have EGG in its environment.
There may be other files on your system that are sourced at startup, so that a variable set there is available to all processes regardless of user; one such file is /etc/environment.

how to Change the file Attribute from read-only to read/write using Python?

I have a bunch of files with read-only status. It is not possible to change the status manually, so I was looking for code that could change the status of all the files in one folder from read-only to read/write. However, I can't find the right approach. Is Python helpful in this case?
Thanks in advance,
I don't see the need for Python here. You could simply type the following command in a terminal:
chmod -R 777 foldername
(777 is just an example; in practice, choose the least permissive mode that works.)
If you want to stick with Python, run the following in the interpreter:
import subprocess
subprocess.call(['chmod', '-R', '777', 'foldername'])

Passing commandline arguments to QPython

I am running a simple client-server program written in python, on my android phone using QPython and QPython3. I need to pass some commandline parameters. How do I do that?
I found a couple of way of running a script that I imported from my Linux laptop.
If I put frets.py in the script3 directory, and create this script in the same directory:
import sys, os

dir = '/storage/emulated/0/com.hipipal.qpyplus/scripts3/'
os.chdir(dir)

def callfrets(val):
    os.system(sys.executable + " frets.py " + val)

while True:
    val = input('$:')
    if val:
        callfrets(val)
    else:
        break
I can run the program with the same commandline inputs that I used in Linux, getting output on the console. Just invoke this script from the editor or the programs menu.
I also found (after getting some argparse errors) that I can get to a usable Linux shell by quitting the Python console with sys.exit(1):
import sys
sys.exit(1)
drops me into the shell with the / directory. Changing directory
cd /storage/emulated/0/Download # or to the scripts3 directory
lets me run that original script directly
python frets.py -a ...
This shell has the necessary permissions and $PATH (/data/data/com.hipipal.qpy3/files/bin).
(I had problems getting this working on my phone, but updating Qpython3 took care of that.)
Just write a wrapper script which gets the parameters and passes them to the real script using a function like execfile, and put the script into /sdcard/com.hipipal.qpyplus/scripts or /sdcard/com.hipipal.qpyplus/scripts3 (for qpython3).
Then you can see the script in scripts when clicking the start button.

Tipfy: "NotFound: 404" when accessing multi-auth example locally

I am using the Tipfy framework ( tipfy.org ) on the Google App Engine. I would like to extend the multi-auth example ( http://tipfy-auth.appspot.com/ ).
To try the example, I installed Tipfy.
The *hello_world* app is accessible through the browser if I run the local server.
Then I added the multi-auth app in a second directory called multi_auth, added it in the config.py *apps_installed* list (removed hello_world) and reloaded the page.
I get the following output:
Traceback (most recent call last):
  File "/home/ideaglobe/ideabox/tipfy/project/multiauthapp/distlib/tipfy/__init__.py", line 442, in wsgi_app
    response = self.handle_exception(request, e)
  File "/home/ideaglobe/ideabox/tipfy/project/multiauthapp/distlib/tipfy/__init__.py", line 430, in wsgi_app
    rv = self.dispatch(request)
  File "/home/ideaglobe/ideabox/tipfy/project/multiauthapp/distlib/tipfy/__init__.py", line 547, in dispatch
    raise request.routing_exception
NotFound: 404: Not Found
Inspecting the failing frame in the interactive debugger shows:
>>> dump()
Local variables in frame:
self     <tipfy.Tipfy object at 0x9d7f22c>
request  <Request 'http://localhost:8080/' [GET]>
Obviously, the handler is not found, but why? Where can I set which app should be loaded?
I would be glad about a hint.
I just did the same process last night successfully:
Be sure to have downloaded all the extensions needed by the example using buildout
Copy the multi-auth config.py file to the app root overwriting the original one.
Copy all the files from multi-auth static and templates folders to the app root static and templates folders
Be sure that config.py has 'apps.multi_auth' in the apps_installed list (Python module names use underscores, not hyphens, matching the multi_auth directory).
Did you define the rules for the request handlers as described in the documentation?
http://www.tipfy.org/wiki/extensions/auth/#authentication-endpoints
