Configuring the remote_api for App Engine on Python 2.7, I need to set up the configuration calls that create and configure the database call stubs, so that I don't have to replicate the configuration call in every REST resource and handler. The code I want to have is something similar to this:
def configure_remote_api():
    try:
        from google.appengine.ext.remote_api import remote_api_stub
        remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func, 'myapp.appspot.com')
    except ImportError:
        pass
What I want is to set this up so it is called from one place and doesn't have to be replicated all over the application code, not even as calls to configure_remote_api(). This way, we can keep our codebase clean and get automatic remote_api use whenever we develop locally. How can I do this?
Maybe you can put the call in appengine_config.py. That usually gets loaded pretty early on. (But please check.)
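For example, a minimal sketch, assuming the configure_remote_api() shown above lives in a module of your own (the module path myapp.config is hypothetical):

appengine_config.py

# appengine_config.py is loaded early on each new instance (see above),
# so the remote_api stubs get configured once, in a single place.
from myapp.config import configure_remote_api  # hypothetical module path

configure_remote_api()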
I have been racking my brains on this for a few weeks now, trying different Google Cloud service offerings, but I can't seem to find the proper one.
I have a Python script with dependencies etc. that I have containerized, pushed, and deployed to GCR.
The script is a bot that connects to an external websocket, perpetually receiving signals, and then does other processing via API against another external service.
What would be the best service offering from Google Cloud to run this?
So far, I've tried:
GCR Web Service - requires a listening service (:8080), which I do not provide in this use case, and it scales your service down when there is no traffic, so no go.
GCR Job Service - seems like the next ideal option (no HTTP port requirement). However, since the script (my entry point) doesn't 'return' upon launch unless it quits, the job only lets it run for a minute or so until the Jobs API declares it 'failed' - basically, it launches my entry point, which just executes the script as if it were running locally, and my script isn't meant to return anything.
To try and get around this, I went Google's recommended way and built a main.py with their standard boilerplate as a wrapper to act as a launcher for the actual script. I did this via a simple subprocess.Popen, using their sample main.py as shown below.
main.py
import json
import os
import sys
import subprocess

# Retrieve Job-defined env vars
TASK_INDEX = os.getenv("CLOUD_RUN_TASK_INDEX", 0)
TASK_ATTEMPT = os.getenv("CLOUD_RUN_TASK_ATTEMPT", 0)

# Define main script
def main():
    print(f"Starting Task #{TASK_INDEX}, Attempt #{TASK_ATTEMPT}...")
    subprocess.Popen(["python3", "myscript.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(f"Completed Task #{TASK_INDEX}.")

# Start script
if __name__ == "__main__":
    try:
        main()
    except Exception as err:
        message = f"Task #{TASK_INDEX}, " \
            + f"Attempt #{TASK_ATTEMPT} failed: {str(err)}"
        print(json.dumps({"message": message, "severity": "ERROR"}))
        sys.exit(1)  # Retry Job Task by exiting the process
My thinking was that this would allow the job to execute my script and mark the job as completed while the actual script remains running. Also, since subprocess.Popen sets its stdout and stderr to PIPE, I assumed the output would get caught by Google's logging and I would see it.
The job runs and is marked as succeeded; however, I see no indication of the actual script executing anywhere.
I had a similar issue with Google Cloud Functions. Jobs seemed like an ideal option since I can run them on the scheduler to make sure the script launches, say, every hour (my script uses a lock file so it doesn't run again if it's already running).
Am I just missing the point on how these cloud services run?
Are offerings like Google Cloud Run jobs/functions etc. meant only to execute jobs that return and quit until launched again on whatever schedule?
Do I need to consider Google Compute Engine as an option for this use case, that is, a fully running VM instead of stateless/serverless options?
I am trying to do this in a containerized, scalable-as-needed fashion, to both make my project portable and minimize costs as much as possible given the always-running nature of the job.
Lastly, I know services like PythonAnywhere, and I'm sure others, make this kind of thing easier, but I would like to learn how to do this via standard cloud offerings like GCR, AWS, etc.
Thanks for any insight/advice!
Cloud Run's best fit is serving HTTP REST APIs (stateless services). There are also Jobs, currently in beta.
One of the top features of Cloud Run is that it scales to 0 when there are no requests to your service (your service instance gets totally destroyed).
If your bot needs to stay alive "for ever", Run is not for you (even though you can configure Run to keep at least one instance alive).
I would consider App Engine or Compute Engine instead.
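As a side note on the wrapper above: subprocess.Popen returns immediately, and stdout=PIPE/stderr=PIPE sends the child's output into pipes that nothing ever reads, so it never reaches the container's stdout and therefore never reaches Cloud Logging. Worse, when the wrapper (the container's main process) exits, the container is torn down and the detached child is killed with it. Here is a minimal sketch of a wrapper that waits for the child and lets it inherit stdout/stderr (same myscript.py entry point as above; the task is still bounded by the Job's timeout, so this does not make Jobs suitable for a perpetual bot):

main.py

import subprocess
import sys

def main():
    # No PIPE: the child inherits this process's stdout/stderr,
    # so its output shows up in Cloud Logging. run() blocks until exit.
    result = subprocess.run(["python3", "myscript.py"])
    # Propagate the child's exit code so the Job is marked accordingly.
    sys.exit(result.returncode)

if __name__ == "__main__":
    main()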
I'm getting this error on App Engine using Flask to make a Slack bot. It happens whenever I send a POST request from Slackbot.
Unfortunately, the URL provided in the error is a dead link. How do I go about using sockets instead of URLFetch?
/base/data/home/apps/[REDACTED]/lib/requests/packages/urllib3/contrib/appengine.py:115:
AppEnginePlatformWarning: urllib3 is using URLFetch on Google App
Engine sandbox instead of sockets. To use sockets directly instead of
URLFetch see https://urllib3.readthedocs.io/en/latest/contrib.html.
As detailed in Google's sockets documentation, sockets can be used by setting the GAE_USE_SOCKETS_HTTPLIB environment variable. This feature seems to be available only on paid apps, and it impacts billing.
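A sketch of what that looks like on the Python 2.7 runtime, set in app.yaml (the docs use the placeholder value 'anything'; the value itself isn't significant):

app.yaml

env_variables:
  # Switch httplib/urllib3 from URLFetch to real sockets (billed apps only).
  GAE_USE_SOCKETS_HTTPLIB : 'anything'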
Though the error you posted gets logged as an Error in App Engine, this thread suggests (see reply #8) that the error is actually meant as a warning, which the text "AppEnginePlatformWarning" seems to suggest anyway.
The comment block on the source page for appengine.py is also instructive.
You didn't post any information about your implementation, but on Google App Engine Standard edition, using URLFetch via the AppEngineManager should be just fine, though you will get the error.
You can use the following to silence this:
import warnings
import urllib3.contrib.appengine
warnings.filterwarnings('ignore', r'urllib3 is using URLFetch', urllib3.contrib.appengine.AppEnginePlatformWarning)
For me, it turned out that the presence of the requests_toolbelt dependency in my project was the problem: it somehow forced the requests library to use urllib3, which requires URLFetch to be present and otherwise raises an AppEnginePlatformError. As suggested in the App Engine docs, monkey-patching requests with requests_toolbelt forces the former to use URLFetch, which is no longer supported by GAE in a Python 3 runtime.
The solution was to remove requests_toolbelt from my requirements.txt file.
I'm building a webapp in Go that requires authentication. I'd like to run local tests using appengine/aetest that validate the authentication behavior. However, I do not see any way to create an aetest.Context with a dummy user. Am I missing something?
I had a similar issue with the Python SDK. The gist of the solution is to bypass authentication when tests run locally.
You should have access to the [web] app object at test setup time - create a user object and save it into the app (or wherever your get_current_user() method will check).
This will let you unit test all application functions except authentication itself. For the latter part you can deploy your latest changes as an unpublished app version, then test authentication, and if all works, publish the version.
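On the Python side, a minimal sketch of faking a logged-in user with the SDK's testbed (the email and id values are made up for illustration):

import unittest

from google.appengine.api import users
from google.appengine.ext import testbed


class AuthBehaviorTest(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # Populate the env vars that users.get_current_user() reads.
        self.testbed.setup_env(
            user_email='dummy@example.com',
            user_id='1234567890',
            user_is_admin='0',
            overwrite=True)

    def tearDown(self):
        self.testbed.deactivate()

    def test_current_user_is_dummy(self):
        self.assertEqual('dummy@example.com', users.get_current_user().email())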
I've discovered some header values that seem to do the trick. appengine/user/user_dev.go has the following:
X-AppEngine-Internal-User-Email
X-AppEngine-Internal-User-Federated-Identity
X-AppEngine-Internal-User-Federated-Provider
X-AppEngine-Internal-User-Id
X-AppEngine-Internal-User-Is-Admin
If I set those headers on the Context's Request when doing in-process tests, things seem to work as expected. If I set the headers on a request that I create separately, things are less successful, since the 'user.Current()' call consults the Context's Request.
These headers might work in a Python environment as well.
I am using the warmup service to carry out precaching etc. The request gets called with self.request.host prefixed with a version of the app.
All other handler requests are coming with the expected host name for the app.
So if the app name is myapp - then all requests are called with self.request.host set to myapp.appspot.com, whereas for "_ah/warmup" call it is getting set to nnn.myapp.appspot.com.
My code expects self.request.host to always be 'myapp.appspot.com'. Is this by design, or am I missing something?
Thanks.
I think this is by design because the warmup service is for a specific version. All other requests are going straight to your main app URL, which is just "aliased" to whatever version happens to be the default version at the time.
By the way, it is documented that you can access all deployed versions of your app by prefixing the version number to the domain name, so you should be aware that any users could access any version if they know about this, and if you haven't taken countermeasures! So you should definitely support this - it's an official feature of App Engine.
And is there a way to send a request directly to that server?
Actually there is a way and it can be useful for pushing new data out to all instances of an application.
from google.appengine.api import modules
instance_id = modules.get_current_instance_id()
ref: GAE Modules Docs
For what purpose?
If you want to test different versions, you can use traffic splitting https://developers.google.com/appengine/docs/adminconsole/trafficsplitting
That is different versions though, and not a specific instance.
No, there isn't.
Usually when someone asks something like this, they're headed in the wrong direction on App Engine. Frontend servers get started and shut down all the time. If you are designing anything that relies on a particular instance, you're doing it wrong. You need to design requests that work no matter which instance they hit.
Consider using backends if you must do that.
I use Python and a datetime stamp to identify an instance. This instance id is set by appengine_config.py. To signal other instances I use a flag in memcache, which is checked by the __init__ of my webapp2 request handler.
I signal other instances to flush the Jinja environment and reload dynamic Python code, because I could not find another way.
Here is an example of such a memcache flag, signalling that all dynamic modules should be reloaded, set by the instance with id '2012-12-26 16:39:50.072000':

{u'_all': {u'dyn_reloads_dt': datetime.datetime(2012, 12, 26, 16, 39, 59, 120000),
           u'setter_instance': '2012-12-26 16:39:50.072000'}}
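A rough sketch of the pattern (the key names mirror the flag above; the reload helper is hypothetical):

import datetime

import webapp2
from google.appengine.api import memcache

# Module load time doubles as this instance's id (set via appengine_config.py
# in my setup); each instance imports the module at a different moment.
INSTANCE_ID = str(datetime.datetime.utcnow())
_last_reload = datetime.datetime.utcnow()


def signal_reload():
    # Ask every instance to reload dynamic code on its next request.
    memcache.set('_all', {'dyn_reloads_dt': datetime.datetime.utcnow(),
                          'setter_instance': INSTANCE_ID})


class BaseHandler(webapp2.RequestHandler):
    def __init__(self, request=None, response=None):
        super(BaseHandler, self).__init__(request, response)
        global _last_reload
        flag = memcache.get('_all')
        if flag and flag['dyn_reloads_dt'] > _last_reload:
            _last_reload = flag['dyn_reloads_dt']
            reload_dynamic_modules()  # hypothetical reload helper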
And I starred the feature request from Ibrahim Arief.
With the advent of Modules, you can get the current instance id in a more elegant way:
ModulesServiceFactory.getModulesService().getCurrentInstanceId()
Also, according to this doc, you can route requests specifically to a particular instance by using a URL like
http://instance.version.module.app-id.appspot.com
Note that you need to replace the dots with -dot- to suppress the SSL certificate warning your web client may be complaining about:
http://instance-dot-version-dot-module-dot-app-id.appspot.com
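In the Python runtime you can assemble such a host from the modules API; a sketch (note that addressing a specific instance directly generally only works for modules with manual or basic scaling):

from google.appengine.api import app_identity, modules

def current_instance_host():
    # e.g. '0-dot-v1-dot-default-dot-my-app.appspot.com'
    return '-dot-'.join([
        modules.get_current_instance_id(),
        modules.get_current_version_name(),
        modules.get_current_module_name(),
        app_identity.get_default_version_hostname(),
    ])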