I've got automatic scaling enabled in my App Engine flexible (PHP) app.yaml file. Is there a gcloud or Console way to figure out the machine class of the instances that have been assigned to my service?
I know that I can set the resources I need in terms of disk and RAM in my yaml file, but those are minimums; the actual allocated machines may differ. I'm looking to determine what was allocated once the machine is running.
There's a public feature request already open for this, but it isn't yet available.
As a workaround, you can use the Google API Explorer to call the Stackdriver Logging API v2 logging.entries.list endpoint. You should get the relevant information by entering the following details:
fields: entries/protoPayload
Request body: filter: resource.type="gce_instance_template" AND protoPayload.methodName="v1.compute.instanceTemplates.insert" AND protoPayload.resourceName:"/global/instanceTemplates/aef-"
projectIds: <YOUR_PROJECT_IDS>
You'll see the required information under "machineType".
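If you prefer to script the same lookup, here is a minimal sketch using the google-cloud-logging Python client. The client library, the project ID placeholder, and the exact payload layout are assumptions; the filter is the one given above.

from google.cloud import logging

client = logging.Client(project='YOUR_PROJECT_ID')
# Same filter as in the API Explorer workaround above.
log_filter = (
    'resource.type="gce_instance_template" '
    'AND protoPayload.methodName="v1.compute.instanceTemplates.insert" '
    'AND protoPayload.resourceName:"/global/instanceTemplates/aef-"'
)

for entry in client.list_entries(filter_=log_filter):
    # Inspect the payload for "machineType" under the template properties.
    print(entry.payload)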
How do I get the App Engine service name and version in the GAE flexible environment from my Java code at runtime in the Java 8/Jetty runtime?
I need the service and version to populate the ServiceContext info in Stackdriver Error Reporting on GCP. https://cloud.google.com/error-reporting/docs/formatting-error-messages
I am the Stackdriver Error Reporting product manager.
To answer your question:
While I cannot find a clear documentation page for it, it seems that the environment variables GAE_MODULE_NAME and GAE_MODULE_VERSION contain the data you are looking for.
However, we recently changed how errors are processed on the App Engine flexible environment: service name and version are now extracted automatically and are not needed in the log entry payload. The serviceContext field is now optional on GAE Flex.
The formatting error messages page should be updated in the coming days to reflect this change.
According to the current documentation, the environment variables GAE_SERVICE and GAE_VERSION should be used.
It is also possible to get the instance ID with the GAE_INSTANCE environment variable.
https://cloud.google.com/appengine/docs/flexible/java/migrating#modules
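For illustration, reading those variables looks like this in Python (the question itself is about Java, where System.getenv is the equivalent); a minimal sketch:

import os

# Set by the flexible environment runtime; fall back to the older
# GAE_MODULE_* names mentioned above if the newer ones are absent.
service = os.environ.get('GAE_SERVICE') or os.environ.get('GAE_MODULE_NAME')
version = os.environ.get('GAE_VERSION') or os.environ.get('GAE_MODULE_VERSION')
instance = os.environ.get('GAE_INSTANCE')

service_context = {'service': service, 'version': version}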
I like that I can use the Logs API (described here: https://cloud.google.com/appengine/docs/java/logs/) to programmatically access and display app & request logs as I see fit--it's great.
Now that I'm using Managed VMs on App Engine, I can see in the Admin Console Logs Viewer that there are a ton of additional logs--including, in my case, a custom log which I found I could include in the viewer (described here: https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#logging).
My question is: Is there any way I can use the Logs API (or other pipelines already built?) to access these logs? My Managed VM module includes several components which could produce logs that I want to view:
App logs -- I can get these! No problem here.
Custom log files created by background processes I kick off in _ah/start (like "my_custom_1.log" in the screenshot)
STDERR & STDOUT from my background processes
Relevant Managed VM logs (e.g. for when an instance was restarted due to bad health... other system events like normal restarts?)
Basically I want "the total picture" at the instance level. Has anyone tried to tame Managed VMs in this way with success? I'm not looking forward to rolling my own solution, and I wouldn't even know where to start on the problem of capturing STDERR and STDOUT. Any help appreciated.
There is a difference between App Engine logging and Google Cloud logging. Some of the Managed VM logs go to both, but much of the output goes only to Cloud logging.
Until recently there was not an API to read Cloud logs, only to write them. However, there is a new v2 beta API: https://cloud.google.com/logging/docs/api/introduction_v2
To work at the instance level, entries in Cloud logging should have metadata set to denote which VM they came from; see the sketch after this list. Both of these values vary per VM in my logs:
compute.googleapis.com/resource_name
compute.googleapis.com/resource_id
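A minimal sketch of that instance-level filtering with the v2 API via the google-cloud-logging Python client. The client library, the project ID and VM name placeholders, and the quoted-label filter syntax are assumptions; the label names come from the list above.

from google.cloud import logging

client = logging.Client(project='YOUR_PROJECT_ID')
# Keep only entries whose metadata names the VM we care about.
log_filter = 'labels."compute.googleapis.com/resource_name"="YOUR_VM_NAME"'

for entry in client.list_entries(filter_=log_filter):
    print(entry.timestamp, entry.payload)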
Consider an image (avatar) uploader to Google Cloud Storage which starts from the user's web browser, passes through a Go App Engine instance that handles standard compression/cropping etc., and then stores the resulting image as an object in Cloud Storage.
How can I ensure that the App Engine instance isn't overloaded by too much (or bad) data? In other words, I think I'm asking two questions (or possibly not):
How can I limit the amount of data allowed to be sent to an appengine instance in a single request, or is there already a default safe limit?
How can I validate the data to make sure it's proper jpg/png/gif before attempting to process it with standard go image libraries?
All App Engine requests are limited to 32MB.
You can check the size of the file being uploaded before the upload starts.
You can verify the file's mime-type and only allow correct files to be uploaded.
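To make those two checks concrete, here is a sketch in Python (the language used elsewhere on this page; the question itself is about Go). The 32MB figure comes from the answer above; the helper name and error handling are illustrative assumptions.

import imghdr

MAX_UPLOAD_BYTES = 32 * 1024 * 1024  # App Engine per-request limit

def validate_upload(data):
    # Reject oversized payloads before doing any image work.
    if len(data) > MAX_UPLOAD_BYTES:
        raise ValueError('upload too large')
    # imghdr inspects the magic bytes instead of trusting the
    # client-supplied mime-type.
    kind = imghdr.what(None, h=data)
    if kind not in ('jpeg', 'png', 'gif'):
        raise ValueError('unsupported image type: %r' % kind)
    return kind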
I'm following the guidelines and updating my code to use the new Cloud Storage API in GAE. I do need to set the Cache-Control headers; previously it was easy:
files.gs.create(filename, mime_type='image/png', acl='public-read', cache_control='public, max-age=100000, must-revalidate')
BUT, with the new API, the guidelines say that "cache_control" is not available...
I get this error when I try to put cache_control inside the options:
ValueError: option cache_control is not supported.
I tried with Cache-Control and got the same error...
As usual, the documentation of the new API is not good.
Can someone show me how to set the cache headers in the new Cloud Storage API using Python? If that's not possible, can I still use the old API for my project?
Thanks.
You are right. As documented here, the open function only supports x-goog-acl and x-goog-meta headers.
Cache control is likely to be added in the near future to make migration easier. Please note that the main value of the GCS client lib is buffered reads, buffered resumable writes, and automatic retries to overcome transient errors. Many other simple REST operations on GCS (e.g. cache, file copy, create bucket...) can already be done via the Google API Client. The "downside" of the Google API Client is that, since it doesn't come directly from/for App Engine, it does not have dev_appserver support.
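For example, setting Cache-Control through the JSON API with the Google API Client for Python might look like this. A minimal sketch: the bucket and object names are placeholders, and authentication setup (default credentials) is omitted.

from googleapiclient import discovery

service = discovery.build('storage', 'v1')
# objects().patch changes only the metadata fields present in the body.
service.objects().patch(
    bucket='YOUR_BUCKET',
    object='avatar.png',
    body={'cacheControl': 'public, max-age=100000, must-revalidate'},
).execute()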
To access a remote datastore locally using the original dev_appserver I would set --default_partition=s as mentioned here
In March 2013 Google made devappserver2 the default development server, and it does not support --default_partition, resulting in the original, dreaded:
BadRequestError: app s~appname cannot access app dev~appname's data
It appears that the first few requests are served correctly with
os.environ["APPLICATION_ID"] == 's~appname'
Then a subsequent request results in a call to /_ah/warmup and then
os.environ["APPLICATION_ID"] == 'dev~appname'
The docs specifically mention related topics, but appear geared to dev_appserver, here:
Warning! Do not get the App ID from the environment variable. The development server simulates the production App Engine service. One way in which it does this is to prepend a string (dev~) to the APPLICATION_ID environment variable, which is similar to the string prepended in production for applications using the High Replication Datastore. You can modify this behavior with the --default_partition flag, choosing a value of "" to match the master-slave option in production. Google recommends always getting the application ID using the get_application_id() method, and never using the APPLICATION_ID environment variable.
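Per that warning, the recommended lookup is get_application_id() (a minimal sketch):

from google.appengine.api import app_identity

# Returns the application ID without any "dev~"/"s~" partition prefix.
app_id = app_identity.get_application_id()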
You can do the following dirty little trick:
import os
from google.appengine.datastore.entity_pb import Reference

DEV = os.environ['SERVER_SOFTWARE'].startswith('Development')

def myApp(*args):
    # Rewrite the dev server's "dev~" prefix to the production-style "s~".
    return os.environ['APPLICATION_ID'].replace("dev~", "s~")

if DEV:
    # Monkey-patch so entity references report the production partition.
    Reference.app = myApp