Total CPU utilization of running App Engine Flexible instances - google-app-engine

I need to make decisions in an external system based on the current CPU utilization of my App Engine Flexible service. I can see the exact values / metrics I need to use in the dashboard charting in my Google Cloud Console, but I don't see a direct, easy way to get this information from something like a gcloud command.
I also need to know the count of running instances, but I think I can use gcloud app instances list -s default to get a list of my running instances in the default service, and then I can use a count of lines approach to get this info easily. I intend to make a python function which returns a tuple like (instance_count, cpu_utilization).
I'd appreciate it if anyone can point me to an easy way to get this. I am currently exploring the Stackdriver Monitoring service to get this same information, but so far it looks super-complicated to me.

You can use the gcloud app instances list -s default command to get the list of running instances, as you said. To retrieve CPU utilization, have a look at the Python Client for Stackdriver Monitoring. To list available metric types:
from google.cloud import monitoring

client = monitoring.Client()
for descriptor in client.list_metric_descriptors():
    print(descriptor.type)
Metric descriptors are described here. To display utilization across your GCE instances during the last five minutes:
metric = 'compute.googleapis.com/instance/cpu/utilization'
query = client.query(metric, minutes=5)
print(query.as_dataframe())
Do not forget to add google-cloud-monitoring==0.28.1 to “requirements.txt” before installing it.
Check this code, which runs locally for me:
import logging

from flask import Flask
from google.cloud import monitoring as mon

app = Flask(__name__)

@app.route('/')
def list_metric_descriptors():
    """Return all metric descriptors"""
    # Instantiate client
    client = mon.Client()
    for descriptor in client.list_metric_descriptors():
        print(descriptor.type)
    return descriptor.type

@app.route('/CPU')
def cpuUtilization():
    """Return CPU utilization"""
    client = mon.Client()
    metric = 'compute.googleapis.com/instance/cpu/utilization'
    query = client.query(metric, minutes=5)
    print(type(query.as_dataframe()))
    print(query.as_dataframe())
    data = str(query.as_dataframe())
    return data

@app.errorhandler(500)
def server_error(e):
    logging.exception('An error occurred during a request.')
    return """
    An internal error occurred: <pre>{}</pre>
    See logs for full stacktrace.
    """.format(e), 500

if __name__ == '__main__':
    # This is used when running locally. Gunicorn is used to run the
    # application on Google App Engine. See entrypoint in app.yaml.
    app.run(host='127.0.0.1', port=8080, debug=True)
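To get closer to the (instance_count, cpu_utilization) tuple asked about in the question, here is a minimal sketch under the same google-cloud-monitoring==0.28.1 API (pandas is required for as_dataframe()); counting gcloud output lines and averaging the per-instance series into one number are assumptions you may want to adapt:

import subprocess
from google.cloud import monitoring

def instances_and_cpu(service='default', minutes=5):
    """Sketch: return (instance_count, mean_cpu_utilization) for a service."""
    # Count running instances: one gcloud output line per instance, plus a header line.
    out = subprocess.check_output(
        ['gcloud', 'app', 'instances', 'list', '-s', service]).decode()
    instance_count = max(len(out.strip().splitlines()) - 1, 0)

    # Average the per-instance CPU utilization series over the last few minutes.
    client = monitoring.Client()
    query = client.query('compute.googleapis.com/instance/cpu/utilization',
                         minutes=minutes)
    df = query.as_dataframe()
    cpu_utilization = float(df.mean().mean()) if not df.empty else 0.0

    return instance_count, cpu_utilization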

Related

Deploying a python bot script on Google Cloud Run (GCR)

I have been racking my brains on this for a few weeks now, trying different variations from Google Cloud service offerings but can't seem to find the proper one.
I have a python script with dependencies etc. that I have containerized, pushed, and deployed to GCR.
The script is a bot that connects to an external websocket receiving signals perpetually to then do other processing via API against another external service.
What would be the best service offering from Google Cloud to run this?
So far, I've tried:
GCR Web Service - requires a listening service (:8080), which I do not provide in this use case, and it scales your service down when there is no traffic, so no go.
GCR Job Service - seems like the next ideal option (no HTTP port requirement). However, since the script (my entry point) doesn't 'return' unless it quits, the job only lets it run for a minute or so until the Jobs API declares it 'failed' - basically, it launches my entry point, which executes the script as if it were running locally, and my script isn't meant to return anything.
To try and get around this, I went Google's recommended way and built a main.py with their standard boilerplate, acting as a wrapper/launcher for the actual script. I did this via a simple subprocess.Popen, using their sample main.py as shown below.
main.py
import json
import os
import sys
import subprocess

# Retrieve Job-defined env vars
TASK_INDEX = os.getenv("CLOUD_RUN_TASK_INDEX", 0)
TASK_ATTEMPT = os.getenv("CLOUD_RUN_TASK_ATTEMPT", 0)

# Define main script
def main():
    print(f"Starting Task #{TASK_INDEX}, Attempt #{TASK_ATTEMPT}...")
    subprocess.Popen(["python3", "myscript.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(f"Completed Task #{TASK_INDEX}.")

# Start script
if __name__ == "__main__":
    try:
        main()
    except Exception as err:
        message = f"Task #{TASK_INDEX}, " \
            + f"Attempt #{TASK_ATTEMPT} failed: {str(err)}"
        print(json.dumps({"message": message, "severity": "ERROR"}))
        sys.exit(1)  # Retry Job Task by exiting the process
My thinking was that this would let the job execute my script and be marked as completed while the actual script kept running. Also, since subprocess.Popen sets its stdout and stderr to PIPE, I expected the output to be caught by Google's logging so I would see it.
The job runs and is marked as succeeded; however, I see no indication anywhere of the actual script executing.
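For reference, output sent to a PIPE is only visible if the parent process reads it and writes it somewhere Cloud Logging watches (the container's own stdout/stderr). A purely hypothetical variant of the wrapper that forwards the child's output - note it also waits for the child, so the Job would stay 'running' for as long as the script does:

import subprocess
import sys

def main():
    # Start the bot and merge stderr into stdout so a single stream carries everything.
    proc = subprocess.Popen(["python3", "myscript.py"],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            text=True)
    # Forward each line to this process's stdout, which Cloud Logging does capture.
    for line in proc.stdout:
        sys.stdout.write(line)
        sys.stdout.flush()
    sys.exit(proc.wait())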
I had a similar issue with Google Cloud Functions. Jobs seemed like an ideal option since I can use their scheduler to make sure it launches after, say, every hour (my script uses a lock file so it doesn't run again if already running).
Am I just missing the point on how these cloud services run?
Are offerings like Google Cloud Run jobs/functions, etc. meant to execute jobs that return and quit until launched again by whatever schedules them?
Do I need to consider Google Compute Engine as an option for this use case, that is, a fully running VM instead of stateless/serverless options?
I am trying to use this in a containerized, scalable as needed, fashion to both make my project portable and minimize costs as much as possible given the always running nature of the job.
Lastly, I know services like pythonanywhere as I am sure others, make this kinda stuff easier, but I would like to learn how to do this via standard cloud offerings like GCR, AWS, etc.
thanks for any insight / advice!
Cloud Run's best fit is serving HTTP REST APIs (stateless services). There are also Jobs, currently in beta.
One of the top features of Run is that it scales to 0 when there are no requests to your service (your service instance gets totally destroyed).
If your bot needs to stay alive "for ever", Run is not for you (even though you can configure Run to keep at least one instance alive).
I would instead consider App Engine or Compute Engine.

How to run Google App Engine app indefinitely

I successfully deployed a twitter screenshot bot on Google App Engine.
This is my first time deploying.
First thing I noticed was that the app didn't start running until I clicked the link.
When I did, the app worked successfully (replied to tweets with screenshots) as long as the tab was loading and open.
When I closed the tab, the bot stopped working.
Also, in the cloud shell log, I saw:
Handling signal: term
[INFO] Worker exiting (pid 18)
This behaviour surprises me as I expect it to keep running on Google's servers indefinitely.
My bot works by streaming with the Twitter API. Also, the "worker exiting" line above surprises me.
Here is the relevant code:
def get_stream(set):
    global servecount
    with requests.get(f"https://api.twitter.com/2/tweets/search/stream?tweet.fields=id,author_id&user.fields=id,username&expansions=author_id,referenced_tweets.id", auth=bearer_oauth, stream=True) as response:
        print(response.status_code)
        if response.status_code == 429:
            print(f"returned code 429, waiting for 60 seconds to try again")
            print(response.text)
            time.sleep(60)
            return
        if response.status_code != 200:
            raise Exception(
                f"Cannot get stream (HTTP {response.status_code}): {response.text}"
            )
        for response_line in response.iter_lines():
            if response_line:
                json_response = json.loads(response_line)
                print(json.dumps(json_response, indent=4))
                if json_response['data']['referenced_tweets'][0]['type'] != "replied_to":
                    print(f"that was a {json_response['data']['referenced_tweets'][0]['type']} tweet not a reply. Moving on.")
                    continue
                uname = json_response['includes']['users'][0]['username']
                tid = json_response['data']['id']
                reply_tid = json_response['includes']['tweets'][0]['id']
                or_uid = json_response['includes']['tweets'][0]['author_id']
                print(uname, tid, reply_tid, or_uid)
                followers = api.get_follower_ids(user_id='1509540822815055881')
                uid = int(json_response['data']['author_id'])
                if uid not in followers:
                    try:
                        client.create_tweet(text=f"{uname}, you need to follow me first :)\nPlease follow and retry. \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                    except:
                        print("tweet failed")
                    continue
                mid = getmedia(uname, reply_tid)
                #try:
                client.create_tweet(text=f"{uname}, here is your screenshot: \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                #print(f"served {servecount} users with screenshot")
                #servecount += 1
                #except:
                #    print("tweet failed")
                editlogger()

def main():
    servecount, tries = 1, 1
    rules = get_rules()
    delete = delete_all_rules(rules)
    set = set_rules(delete)
    while True:
        print(f"starting try: {tries}")
        get_stream(set)
        tries += 1
If this is important, my app.yaml file has only one line:
runtime: python38
and I deployed the app from cloud shell with gcloud app deploy app.yaml
What can I do?
I have searched and can't seem to find a solution. Also, this is my first time successfully deploying an app.
Thank you.
Google App Engine works on demand, i.e. when it receives an HTTP(S) request.
Neither warmup requests nor min_instances > 0 will meet your needs. A warmup tries to 'start up' an instance before your requests come in. min_instances > 0 simply says not to kill the instance, but you still need an HTTP request to invoke the service (which is what you did by opening a browser tab and entering your App's URL).
You may ask - since you've 'started up' the instance by opening a browser tab, why doesn't it keep running afterwards? The answer is that every request to a Google App Engine (Standard) app must complete within 1 to 10 minutes, depending on the type of scaling your App is using (see the documentation). For Google App Engine Flexible, the timeout goes up to 60 minutes. This tells you that your service will time out after at most 10 minutes on GAE Standard or 60 minutes on GAE Flexible.
I think the best solution for you on GCP is to use Google Compute Engine (GCE). Spin up a virtual server (pick the lowest configuration so you can stick within the free tier). If you use GCE, it means you spin up a Virtual Machine (VM), deploy your code to it and kick off your code. Your code then runs continuously.
App Engine works on demand, i.e. it will only be up if there are requests to the app (this is why the app works when you click on the URL). While you can set one instance to be "running all the time" (min_instances), that would be an anti-pattern for what you want to accomplish with App Engine. Please read How Instances are Managed.
Looking at your code, you're pulling data from Twitter every minute, so the best option for you is Cloud Scheduler + Cloud Functions.
Cloud Scheduler will call your Function, which will check if there is data to process; if not, the process terminates. This will help you save costs because instead of having something running all the time, the function only runs for the time it needs.
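As an illustration only, an HTTP-triggered Cloud Function that Cloud Scheduler could hit on a schedule might look roughly like this; fetch_pending_work() and process() are hypothetical stand-ins for whatever "check if there is data to process" means for your bot:

def handle_schedule(request):
    """Entry point for an HTTP-triggered Cloud Function invoked by Cloud Scheduler."""
    pending = fetch_pending_work()     # hypothetical helper: e.g. look up new mentions
    if not pending:
        return 'nothing to do', 200    # function returns and stops consuming resources
    for item in pending:
        process(item)                  # hypothetical helper: reply, screenshot, etc.
    return f'processed {len(pending)} items', 200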
On the other hand, I'm not an expert with the Twitter API, but if there is a way for Twitter to call your function directly instead of you pulling data from Twitter, that would be even better, since you can optimize your costs and the function will only run when there is data to process instead of checking every n minutes.
As advice, first review all the options you have in GCP (or whichever provider you'll use), then choose the best one for your use case. Just selecting one that works with your programming language will not necessarily work as you expect, as in this case.

400-500ms latency for simple Google Drive API file query from App Engine

I'm running a Python 3.7 App Engine application in the Standard Environment. In general my time-to-first-byte for requests to this application range from 70-150ms when doing simple server-side rendering of text. My app runs in the us-central region, I'm testing from California.
I've been benchmarking requests to Google Drive API v3. I have a request which simply takes the drive file ID in the URL and returns some metadata about the file. The simplified code looks like:
from googleapiclient.discovery import build

def get(file_id: str):
    credentials = oauth2.get_credentials()
    service = build("drive", "v3", cache_discovery=False, credentials=credentials)
    # .execute() performs the actual HTTP call to the Drive API
    data = service.files().get(fileId=file_id, fields=",".join(FIELDS), supportsTeamDrives=True).execute()
    return json.dumps(data)
My understanding is that this request should be making a single request to the Drive API. I'm seeing 400-500ms time-to-first-byte coming back from the server. That implies ~300ms to get data from the Drive API. This seems quite high to me.
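One way to confirm the ~300ms figure is to time the Drive call in isolation, separately from credential and service setup; a rough sketch reusing the question's oauth2 and FIELDS placeholders:

import time
from googleapiclient.discovery import build

def timed_get(file_id: str):
    credentials = oauth2.get_credentials()
    service = build("drive", "v3", cache_discovery=False, credentials=credentials)
    request = service.files().get(fileId=file_id, fields=",".join(FIELDS), supportsTeamDrives=True)
    start = time.monotonic()
    data = request.execute()  # only the Drive API round trip is measured here
    print("files.get took %.0f ms" % ((time.monotonic() - start) * 1000))
    return data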
My questions:
* Is this kind of per-API-call latency common for the Google Drive API?
* If this is not common, what is common?
* What steps, if any, could I take to reduce the amount of time talking to the Drive API?

First Datastore query always slow

I have a Python 3.7 project which communicates with Datastore using the google.cloud.ndb library.
I've noticed that the first request when an instance is brought up is always an order of magnitude (several seconds) slower than subsequent ones. This is true even when running locally with an emulated Datastore. I've verified that the delay is due to the first ndb.Key(...).get() which gets run. Presumably the Datastore connection takes some time to set up?
Has anyone found a way to reduce this delay?
Code example:
from flask import Flask
from google.cloud import ndb
import time

client = ndb.Client()

def ndb_wsgi_middleware(wsgi_app):
    def middleware(environ, start_response):
        with client.context():
            return wsgi_app(environ, start_response)
    return middleware

app = Flask(__name__)
app.wsgi_app = ndb_wsgi_middleware(app.wsgi_app)

@app.route('/main')
def main():
    now_ts = time.time()
    org = ndb.Key(Org, 1).get()
    print('Finished get in %f' % (time.time() - now_ts))
    return 'Does not exist' if org is None else 'Exists'

class Org(ndb.Model):
    pass

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
Output after 2 localhost:8080/main fetches from browser (using the local datastore emulator brought up by the command gcloud beta emulators datastore start):
Finished get in 2.043116
127.0.0.1 - - [09/Oct/2019 22:41:49] "GET /main HTTP/1.1" 200 -
Finished get in 0.001995
127.0.0.1 - - [09/Oct/2019 22:41:56] "GET /main HTTP/1.1" 200 -
The reason you are experiencing this behavior is that, unless configured otherwise, all App Engine applications shut down their instances when idle. This means that when they receive a new request, they must take time to spin back up before responding, resulting in a higher response time than they would have otherwise. This is by design, to avoid taking up resources and generating charges unnecessarily, as instances are charged by the minute while they are running.
You can avoid this by using warm-up requests, a special type of loading request that prepares an instance before live requests are made.
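A minimal sketch of what a warm-up handler could look like for the Flask app above, assuming warmup requests are enabled for the app (inbound_services: warmup in app.yaml); the handler simply performs a throwaway ndb get so that live requests skip the slow first call:

@app.route('/_ah/warmup')
def warmup():
    # App Engine calls /_ah/warmup before routing live traffic to a new instance,
    # so the expensive first Datastore call happens here instead of in a user request.
    ndb.Key(Org, 1).get()
    return '', 200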
Another option would be to set up automatic scaling on your application and set a value for minimum idle instances. If you do, at least one instance will be running at all times, so no loading requests are needed. However, as these instances are constantly running, this will incur additional charges.

GAE - Python 3.7 - How to log?

I have a google app engine project in python 3.7 in which I would like to write some logs.
I am used to program in app engine python 2.7 and I was using the simple code:
logging.info('hi there!')
to write any log onto the google cloud log console.
That command above now doesn't work anymore and it says:
logging has no attribute 'info'
I searched and found this possible new code:
from flask import Flask
from google.cloud import logging

app = Flask(__name__)

@app.route('/l')
def hello():
    logging_client = logging.Client()
    log_name = LOG_NAME
    logger = logging_client.logger(LOG_NAME)
    text = 'Hello, world!'
    logger.log_text(text, severity='CRITICAL')
    return text
The code above doesn't give any error in the Stackdriver report page, BUT it displays nothing at all in the log page.
So how can I write a log for my app engine project in python3.7?
The second generation standard environment (which includes python 3.7) is IMHO closer to the flexible environment than to the first generation standard environment (which includes python 2.7).
Many of the APIs which had customized versions for the 1st generation (maintained by the GAE team) were not (or at least not yet) ported to the 2nd generation if the respective functionality was more or less covered by alternate, more generic approaches already in use in the flexible environment (most of them based on services developed and maintained by teams other than the GAE one).
You'll notice the similarity between many of the service sections in these 2 migration guides (which led me to the above summary conclusion):
Understanding differences between Python 2 and Python 3 on the App Engine standard environment
Migrating Services from the Standard Environment to the Flexible Environment (i.e 1st generation standard to flexible)
Logging is one of the services listed in both guides. The 1st generation used a customized version of the standard python logging library (that was before Stackdriver became a standalone service). For the 2nd generation logging was simply delegated to using the now generally available Stackdriver logging service (which is what the snippet you shown comes from). From Logging (in the 1st guide):
Request logs are no longer automatically correlated but will still appear in Stackdriver Logging. Use the Stackdriver Logging client libraries to implement your desired logging behavior.
The code snippet you show corresponds, indeed, to Stackdriver Logging. But you seem to be using the client library directly. I don't know if this is a problem (GAE is often a bit different), but maybe you could also try using the standard Python logging instead:
Connecting the library to Python logging:
To send all log entries to Stackdriver by attaching the Stackdriver Logging handler to the Python root logger, use the setup_logging helper method:
# Imports the Google Cloud client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Connects the logger to the root logging handler; by default this captures
# all logs at INFO level and higher
client.setup_logging()
Using the Python root logger:
Once the handler is attached, any logs at, by default, INFO level or higher which are emitted in your application will be sent to Stackdriver Logging:
# Imports Python standard library logging
import logging
# The data to log
text = 'Hello, world!'
# Emits the data using the standard logging module
logging.warn(text)
There are some GAE-specific notes in there as well (but I'm not sure if they also cover the 2nd generation standard env):
Google App Engine grants the Logs Writer role by default.
The Stackdriver Logging library for Python can be used without needing to explicitly provide credentials.
Stackdriver Logging is automatically enabled for App Engine applications. No additional setup is required.
Note: Logs written to stdout and stderr are automatically sent to Stackdriver Logging for you, without needing to use the Stackdriver Logging library for Python.
Maybe worth noting that viewing the logs would likely be different as well outside the 1st generation standard env (where app logs would be neatly correlated with the request logs).
And there's also the Using Stackdriver Logging in App Engine apps guide. It doesn't specifically mention the 2nd generation standard env (so it might need an update) but has good hints for the flexible environment which might be useful. For example the Linking app logs and requests section might be of interest if the missing request logs correlation has anything to do with it.
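For what the "Linking app logs and requests" section describes, one common pattern in the 2nd generation runtimes is to emit structured JSON on stdout carrying the trace ID from the X-Cloud-Trace-Context header; this is a hedged sketch (the GOOGLE_CLOUD_PROJECT env var and whether your environment parses these fields are assumptions worth verifying):

import json
import os
import sys
from flask import Flask, request

app = Flask(__name__)

@app.route('/traced')
def traced():
    # The header looks like "TRACE_ID/SPAN_ID;o=1"; only the trace ID part is needed.
    trace_header = request.headers.get('X-Cloud-Trace-Context', '')
    trace_id = trace_header.split('/')[0] if trace_header else None
    project = os.getenv('GOOGLE_CLOUD_PROJECT', 'my-project')
    entry = {'message': 'Hello from a correlated log entry', 'severity': 'INFO'}
    if trace_id:
        entry['logging.googleapis.com/trace'] = 'projects/%s/traces/%s' % (project, trace_id)
    print(json.dumps(entry))  # structured JSON on stdout; Stackdriver parses the special keys
    sys.stdout.flush()
    return 'logged'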
Even though logging works differently in Python 2.7 and 3.7, the same method of logging provided in Reading and Writing Application Logs in Python 2.7 should work for Python 3.7 as well since logs written to stdout and stderr will still appear in the Stackdriver Logging.
import logging

logging.getLogger().setLevel(logging.DEBUG)
logging.debug('This is a debug message')
logging.getLogger().setLevel(logging.INFO)
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')
# logging.warn is deprecated
logging.warn('This is a warning message')
Alternatively, using the Flask app logger:

import logging

app.logger.setLevel(logging.ERROR)
app.logger.error('This is an error message')
However, log entries are no longer automatically correlated with requests as in Python 2.7, which is why you see them in plain text. I have created a feature request to address this which you can follow here.
