How to set deadline for BigQuery on Google App Engine - google-app-engine

I have a Google App Engine program that calls BigQuery for data.
The query usually takes 3 - 4.5 seconds and is fine but sometimes takes over five seconds and throws this error:
DeadlineExceededError: The API call urlfetch.Fetch() took too long to respond and was cancelled.
This article shows the deadlines and the different kinds of deadline errors.
Is there a way to set the deadline for a BigQuery job to be above 5 seconds? Could not find it in the BigQuery API docs.

BigQuery queries are fast, but they often take longer than the default App Engine urlfetch timeout. The BigQuery API is asynchronous, so you need to break the work into API calls that each complete in under 5 seconds.
For this situation, I would use the App Engine Task Queue:
1. Make a call to the BigQuery API to insert your job. This returns a JobID.
2. Place a task on the App Engine task queue to check the status of the BigQuery job at that ID.
3. If the BigQuery job status is not "DONE", place a new task on the queue to check it again.
4. If the status is "DONE", then make a call using urlfetch to retrieve the results.
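The check-and-requeue loop in the steps above is a small state machine. Here is a runnable sketch in plain Python, where `get_job_status`, `enqueue_task`, and `fetch_results` are placeholders standing in for the BigQuery jobs().get call, the Task Queue API, and jobs().getQueryResults (none of these names are real APIs):

```python
# Sketch of the check-and-requeue pattern (names are placeholders, not the
# real BigQuery/Task Queue APIs). Each invocation does one short API call
# and either finishes or schedules itself again, so no single request
# exceeds the 5 s urlfetch deadline.
def check_job(job_id, get_job_status, enqueue_task, fetch_results):
    status = get_job_status(job_id)        # one short jobs().get call
    if status != 'DONE':
        # Not finished: schedule another check instead of blocking here.
        enqueue_task(lambda: check_job(job_id, get_job_status,
                                       enqueue_task, fetch_results))
        return None
    return fetch_results(job_id)           # e.g. jobs().getQueryResults
```

Each enqueued task is itself a request that stays well under the deadline, while the overall job can take as long as it needs.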

Note: I would go with Michael's suggestion, since that is the most robust. I just wanted to point out that you can increase the urlfetch timeout to up to 60 seconds, which should be enough time for most queries to complete.
How to set timeout for urlfetch in Google App Engine?

I was unable to get the urlfetch.set_default_fetch_deadline() method to apply to the BigQuery API, but I was able to increase the timeout when authorizing the BigQuery session as follows:
from apiclient.discovery import build
from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials

credentials = ServiceAccountCredentials.from_json_keyfile_dict(credentials_dict, scopes)
# Create an authorized session and set the URL fetch timeout.
http_auth = credentials.authorize(Http(timeout=60))
# Build the service.
service = build(service_name, version, http=http_auth)
# Make the query.
request = service.jobs().query(body=query_body).execute()
Or with an asynchronous approach using jobs().insert:
import time

# NOTE: jobs().insert() and jobs().get() also require projectId;
# project_id is assumed to be defined alongside query_body.
query_response = service.jobs().insert(projectId=project_id,
                                       body=query_body).execute()
big_query_job_id = query_response['jobReference']['jobId']
# Poll the jobs().get endpoint until the job is complete.
while True:
    job_status_response = service.jobs() \
        .get(projectId=project_id, jobId=big_query_job_id).execute()
    if job_status_response['status']['state'] == 'DONE':
        break
    time.sleep(1)
results_response = service.jobs() \
    .getQueryResults(**query_params) \
    .execute()
We ended up going with an approach similar to what Michael suggests above; however, even when using the asynchronous call, the getQueryResults method (paginated with a small maxResults parameter) was timing out on URL Fetch, throwing the error posted in the question.
So, to increase the URL Fetch timeout for BigQuery on App Engine, set the timeout accordingly when authorizing your session.

To issue HTTP requests on App Engine you can use urllib, urllib2, httplib, or urlfetch. However, no matter which library you choose, App Engine will perform the HTTP request using its URL Fetch service.
The googleapiclient library uses httplib2. It looks like httplib2.Http passes its timeout through to urlfetch. Since that timeout has a default value of None, urlfetch sets the deadline of the request to 5 seconds no matter what you set with urlfetch.set_default_fetch_deadline.
Under the covers, httplib2 uses the socket library for HTTP requests.
To set the timeout you can do the following:
import socket
socket.setdefaulttimeout(30)
You should also be able to do this but I haven't tested it:
http = httplib2.Http(timeout=30)
If you don't have existing code to time the request, you can wrap your query like so:
import time
start_query = time.time()
<your query code>
end_query = time.time()
print(end_query - start_query)

This is one way to solve BigQuery timeouts in App Engine for Go: simply set TimeoutMs on your queries to well below 5000. The default timeout for BigQuery queries is 10000 ms, which is over the default 5-second deadline for outgoing requests in App Engine.
The gotcha is that the timeout must be set both in the initial request, bigquery.service.Jobs.Query(…), and in the subsequent bigquery.service.Jobs.GetQueryResults(…) which you use to poll the query results.
Example:
query := &gbigquery.QueryRequest{
    DefaultDataset: &gbigquery.DatasetReference{
        DatasetId: "mydatasetid",
        ProjectId: "myprojectid",
    },
    Kind:      "json",
    Query:     "<insert query here>",
    TimeoutMs: 3000, // <- important!
}
queryResponse, err := bigquery.service.Jobs.Query("myprojectid", query).Do()
// handle err, then determine if queryResponse is a completed job and if not start to poll
queryResponseResults, err := bigquery.service.Jobs.
    GetQueryResults("myprojectid", queryResponse.JobReference.JobId).
    TimeoutMs(DefaultTimeoutMS). // <- important!
    Do()
// handle err, then determine if queryResponseResults is a completed job and if not continue to poll
The nice thing about this is that you maintain the default request deadline for the overall request (60 s for normal requests and 10 min for tasks and cron jobs) while avoiding setting the deadline for outgoing requests to some arbitrarily large value.

Related

How to run Google App Engine app indefinitely

I successfully deployed a twitter screenshot bot on Google App Engine.
This is my first time deploying.
First thing I noticed was that the app didn't start running until I clicked the link.
When I did, the app worked successfully (replied to tweets with screenshots) as long as the tab was loading and open.
When I closed the tab, the bot stopped working.
Also, in the cloud shell log, I saw:
Handling signal: term
[INFO] Worker exiting (pid 18)
This behaviour surprises me, as I expected it to keep running on Google's servers indefinitely.
My bot works by streaming with Twitter api. Also the "worker exiting" line above surprises me.
Here is the relevant code:
def get_stream(set):
    global servecount
    with requests.get(f"https://api.twitter.com/2/tweets/search/stream?tweet.fields=id,author_id&user.fields=id,username&expansions=author_id,referenced_tweets.id", auth=bearer_oauth, stream=True) as response:
        print(response.status_code)
        if response.status_code == 429:
            print(f"returned code 429, waiting for 60 seconds to try again")
            print(response.text)
            time.sleep(60)
            return
        if response.status_code != 200:
            raise Exception(
                f"Cannot get stream (HTTP {response.status_code}): {response.text}"
            )
        for response_line in response.iter_lines():
            if response_line:
                json_response = json.loads(response_line)
                print(json.dumps(json_response, indent=4))
                if json_response['data']['referenced_tweets'][0]['type'] != "replied_to":
                    print(f"that was a {json_response['data']['referenced_tweets'][0]['type']} tweet not a reply. Moving on.")
                    continue
                uname = json_response['includes']['users'][0]['username']
                tid = json_response['data']['id']
                reply_tid = json_response['includes']['tweets'][0]['id']
                or_uid = json_response['includes']['tweets'][0]['author_id']
                print(uname, tid, reply_tid, or_uid)
                followers = api.get_follower_ids(user_id='1509540822815055881')
                uid = int(json_response['data']['author_id'])
                if uid not in followers:
                    try:
                        client.create_tweet(text=f"{uname}, you need to follow me first :)\nPlease follow and retry. \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                    except:
                        print("tweet failed")
                    continue
                mid = getmedia(uname, reply_tid)
                #try:
                client.create_tweet(text=f"{uname}, here is your screenshot: \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                #print(f"served {servecount} users with screenshot")
                #servecount += 1
                #except:
                #    print("tweet failed")
                editlogger()

def main():
    servecount, tries = 1, 1
    rules = get_rules()
    delete = delete_all_rules(rules)
    set = set_rules(delete)
    while True:
        print(f"starting try: {tries}")
        get_stream(set)
        tries += 1
If this is important, my app.yaml file has only one line:
runtime: python38
and I deployed the app from cloud shell with gcloud app deploy app.yaml
What can I do?
I have searched and can't seem to find a solution. Also, this is my first time deploying an app successfully.
Thank you.
Google App Engine works on demand, i.e. it runs only when it receives an HTTP(S) request.
Neither warmup requests nor min_instances > 0 will meet your needs. A warmup tries to start up an instance before your requests come in. min_instances > 0 simply says not to kill the instance, but you still need an HTTP request to invoke the service (which is what you did by opening a browser tab and entering your app's URL).
You may ask: since you've started up the instance by opening a browser tab, why doesn't it keep running afterwards? The answer is that every request to a Google App Engine (Standard) app must complete within 1 to 10 minutes, depending on the type of scaling your app is using (see the documentation). For Google App Engine Flexible, the timeout goes up to 60 minutes. This tells you that your service will time out after at most 10 minutes on GAE Standard or 60 minutes on GAE Flexible.
I think the best solution for you on GCP is to use Google Compute Engine (GCE). Spin up a virtual server (pick the lowest configuration so you can stick within the free tier). If you use GCE, it means you spin up a Virtual Machine (VM), deploy your code to it and kick off your code. Your code then runs continuously.
App Engine works on demand, i.e. it will only be up if there are requests to the app (this is why the app works when you click the URL). While you can set one instance to be "running all the time" (min_instances), that would be an anti-pattern for what you want to accomplish with App Engine. Please read How Instances are Managed.
Looking at your code, you're pulling data from Twitter every minute, so the best option for you is Cloud Scheduler + Cloud Functions.
Cloud Scheduler will call your function, and the function will check whether there is data to process; if not, the process terminates. This helps you save costs: instead of having something running all the time, the function only runs for the time it actually needs.
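The Scheduler-driven function described here could look roughly like this sketch (`fetch_pending` and `process` are hypothetical stand-ins for the Twitter API calls, not real functions):

```python
# Sketch of a Cloud Function body for this pattern (hypothetical names,
# not the real Twitter API): do one bounded unit of work, then exit so
# nothing stays running (and billing) between Scheduler invocations.
def make_handler(fetch_pending, process):
    def handler(request=None):
        tweets = fetch_pending()          # e.g. mentions since the last run
        if not tweets:
            return 'nothing to do'        # terminate quickly when idle
        for t in tweets:
            process(t)                    # e.g. reply with a screenshot
        return 'processed %d' % len(tweets)
    return handler
```

The key design point is that each invocation is finite: the function does its work and returns, rather than holding a stream open like the original bot.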
On the other hand, I'm not an expert with the Twitter API, but if there is a way for Twitter to call your function directly instead of you pulling data from Twitter, that would be even better: you optimize costs further, and the function only runs when there is data to process instead of checking every n minutes.
As advice: first review all the options you have in GCP (or whichever provider you use), then choose the best one for your use case. Simply selecting one that works with your programming language will not necessarily behave as you expect, as in this case.

First Datastore query always slow

I have a Python 3.7 project which communicates with Datastore using the google.cloud.ndb library.
I've noticed that the first request when an instance is brought up is always an order of magnitude (several seconds) slower than subsequent ones. This is true even running locally with an emulated Datastore. I've verified that the delay is due to the first ndb.Key(...).get() which gets run. Presumably the Datastore connection takes some time to set up?
Has anyone found a way to reduce this delay?
Code example:
from flask import Flask
from google.cloud import ndb
import time

client = ndb.Client()

def ndb_wsgi_middleware(wsgi_app):
    def middleware(environ, start_response):
        with client.context():
            return wsgi_app(environ, start_response)
    return middleware

app = Flask(__name__)
app.wsgi_app = ndb_wsgi_middleware(app.wsgi_app)

@app.route('/main')
def main():
    now_ts = time.time()
    org = ndb.Key(Org, 1).get()
    print('Finished get in %f' % (time.time() - now_ts))
    return 'Does not exist' if org is None else 'Exists'

class Org(ndb.Model):
    pass

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
Output after 2 localhost:8080/main fetches from browser (using the local datastore emulator brought up by the command gcloud beta emulators datastore start):
Finished get in 2.043116
127.0.0.1 - - [09/Oct/2019 22:41:49] "GET /main HTTP/1.1" 200 -
Finished get in 0.001995
127.0.0.1 - - [09/Oct/2019 22:41:56] "GET /main HTTP/1.1" 200 -
The reason you are experiencing this behavior is that, unless configured otherwise, App Engine applications shut down their instances when idle. This means that when they receive a new request, they must take time to spin back up before responding, resulting in a higher response time than they would have otherwise. This is by design, to avoid taking up resources and generating charges unnecessarily, as instances are billed by the minute while they are running.
You can avoid this using warm-up requests, a special type of loading request that prepares an instance before live requests are made.
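A warm-up handler can be as small as doing one throwaway read, so the expensive first connection happens before live traffic arrives. A framework-agnostic sketch (`do_cheap_read` is a placeholder for something like the ndb.Key(Org, 1).get() in the question; the handler would be wired to the /_ah/warmup route):

```python
# Sketch of a /_ah/warmup handler (do_cheap_read is a placeholder for
# e.g. ndb.Key(Org, 1).get()). The goal is to pay the one-time
# connection/auth setup cost before live requests arrive.
def make_warmup_handler(do_cheap_read):
    def warmup():
        try:
            do_cheap_read()      # forces client/channel initialization
        except Exception:
            pass                 # a warm-up should never fail the instance
        return '', 200
    return warmup
```

The swallow-everything except clause is deliberate: a warm-up exists only for its side effect, and a failure there should not mark the instance unhealthy.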
Another option would be to set up automatic scaling on your application using modules, and set a value for minimum idle instances. If you set this, at least one instance will be running at all times, so no loading requests are ever needed. However, as that instance is constantly running, this will incur additional charges.

Deadline exceeded while waiting for HTTP response, Python, Google App Engine

I am using a cron job to download a web page and save it, using Google App Engine.
After 5 seconds I get a Deadline exceeded error.
How can I avoid the error? Searching around this site, I see I can extend the time limit for urlfetch. But it isn't urlfetch that is causing the issue; it is the fact that I can't run the task for over 5 seconds.
For example, I have tried this fix, but it only works if I am running the page myself, not via a cron job:
HTTPException: Deadline exceeded while waiting for HTTP response from URL: #deadline
Cron jobs can run for a maximum of 10 minutes, but that doesn't mean a URLFetch can't time out before then. The default timeout for URLFetch is 5 seconds, but you can raise this.
"You can set a deadline for a request, the most amount of time the service will wait for a response. By default, the deadline for a fetch is 5 seconds. The maximum deadline is 60 seconds for HTTP requests and 10 minutes for task queue and cron job requests. When using the URLConnection interface, the service uses the connection timeout (setConnectTimeout()) plus the read timeout (setReadTimeout()) as the deadline."
see: https://developers.google.com/appengine/docs/java/urlfetch/?csw=1#Requests
"A cron job will invoke a URL, using an HTTP GET request, at a given time of day. An HTTP request invoked by cron can run for up to 10 minutes, but is subject to the same limits as other HTTP requests."
see: https://developers.google.com/appengine/docs/python/config/cron
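In code, raising the per-fetch deadline inside a cron handler might look like the commented lines below; they are shown as comments because `google.appengine.api.urlfetch` only exists inside the App Engine sandbox. The runnable helper underneath is purely illustrative (not part of any SDK) and just encodes the documented maxima quoted above:

```python
# Raising the fetch deadline inside a cron handler (sandbox-only, so
# shown as comments):
#
#   from google.appengine.api import urlfetch
#   urlfetch.set_default_fetch_deadline(600)    # affects later fetches
#   result = urlfetch.fetch(url, deadline=600)  # or set it per call
#
# The documented maxima, as a small hypothetical helper:
MAX_DEADLINE_HTTP = 60        # seconds, for normal HTTP requests
MAX_DEADLINE_TASK_CRON = 600  # seconds, for task queue and cron requests

def effective_deadline(requested, from_cron=False):
    """Clamp a requested deadline to the documented URL Fetch maximum."""
    limit = MAX_DEADLINE_TASK_CRON if from_cron else MAX_DEADLINE_HTTP
    return min(requested, limit)
```

So a fetch from a cron handler may legitimately ask for up to 10 minutes, while the same call from a normal request is capped at 60 seconds.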
Furthermore, check out this older Stack Overflow question with a very similar issue:
If Google App Engine cron jobs have a 10 minute limit, then why do I get a DeadlineExceededError after the normal 30 seconds?

SocketTimeoutException is frustrating on Google App Engine (Java). Workaround needed

I am connecting to a 3rd-party server using the following WSDL:
http://webservices.ticketvala.com/axis2/services/WSTicketvala?wsdl
I am using JAX-WS to generate client code and call the relevant method on the 3rd-party server. The 3rd-party server may take 15-25 seconds to send a response.
It works fine on Tomcat.
Now when I deploy this to GAE 1.5.3, I often get a SocketTimeoutException in less than 10 seconds. Sometimes it is successful, even taking 20 seconds. I want to know why it fails so often, and whether there is any workaround to increase this response deadline / avoid the SocketTimeoutException for good.
Similarly, I have another RESTful service at http://ticketgoose.com/bookbustickets/TGWSStationNameResponseAction.do?" +
"event=getStationDetails&password=123456&userId=ctshubws
I am connecting to it through java.net.URL, and many times I get a TimeoutException. How can I raise this timeout limit to more than 30 seconds?
Thanks
Deepak
No user-initiated request can take more than 30 seconds to complete in Google App Engine:
http://code.google.com/intl/en/appengine/docs/java/runtime.html#The_Request_Timer
And an HTTP request to an external URL in a user-initiated request can't take more than 10 seconds to complete:
http://code.google.com/intl/en/appengine/docs/java/urlfetch/overview.html#Requests
If you need to do the overall work in more than 30 seconds and can do it in background (not needing to return the response directly via HTTP), use Task Queues. Note that the simplest way to do background work with tasks is to use the DeferredTask. And the best part: HTTP requests to external URLs on tasks can take up to 10 minutes to complete.

Is long polling possible in Google App Engine?

I need to make an application that polls the server often, but GAE has limitations on requests, so making a lot of requests could be very costly. Is it possible to use long polling and make requests wait up to the maximum of 30 seconds for changes?
Google App Engine has a new feature, the Channel API, with which you can build a good realtime application.
Another solution is to use a third-party comet server like mochiweb or twisted with an iframe pattern.
Client1, waiting for an event:
client1 --Iframe Pattern--> Erlang/Mochiweb (HTTP long polling)
Client2, sending a message:
client2 --XhrIo--> AppEngine --UrlFetch--> Erlang/Mochiweb
To use mochiweb with the comet pattern, Richard Jones has written a good write-up (search Google for: Richard Jones A Million-user Comet Application).
We've tried implementing a Comet-like long-polling solution on App Engine, with mixed results.
def wait_for_update(request, blob):
    """
    Wait for blob update, if wait option specified in query string.
    Otherwise, return 304 Not Modified.
    """
    wait = request.GET.get('wait', '')
    if not wait.isdigit():
        return blob
    start = time.time()
    deadline = start + int(wait)
    original_sha1 = blob.sha1
    try:
        while time.time() < deadline:
            # Sleep one or two seconds.
            elapsed = time.time() - start
            time.sleep(1 if elapsed < 7 else 2)
            # Try to read updated blob from memcache.
            logging.info("Checking memcache for blob update after %.1fs",
                         elapsed)
            blob = Blob.cache_get_by_key_name(request.key_name)
            # Detect changes.
            if blob is None or blob.sha1 != original_sha1:
                break
    except DeadlineExceededError:
        logging.info("Caught DeadlineExceededError after %.1fs",
                     time.time() - start)
    return blob
The problem I'm seeing is that requests following a long-polling one are getting serialized (synchronized) behind the long-polling request. I can look at a trace in Chrome and see a timeline like this:
1. Request 1 sent. GET (un-modified) blob (wait until changed).
2. Request 2 sent. Modify the blob.
3. After the full timeout, Request 1 returns (data unmodified).
4. Request 2 gets processed on the server, and returns success.
I've used Wireshark and Chrome's timeline to confirm that I AM sending the modification request to the server on a distinct TCP connection from the long-polling one. So this synchronization must be happening on the App Engine production server. Google doesn't document this detail of the server behavior, as far as I know.
I think waiting for the channel API is the best hope we have of getting good real-time behavior from App Engine.
I don't think long polling is possible. The default request timeout for Google App Engine is 30 seconds.
With long polling, if the message takes more than 30 seconds to generate, the request will fail.
You are probably better off using short polling.
Another approach is to "simulate" long polling within the 30-second limit. To do this, if a message does not arrive within, say, 20 seconds, the server sends a "token" message instead of a normal message, requiring the client to consume it and connect again.
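The token-based simulation described above can be sketched as a runnable state machine; `get_message` is a placeholder for checking memcache or the datastore, and the clock/sleep parameters exist only so the loop can be exercised without real waiting:

```python
# Runnable sketch of the simulated long poll described above (names are
# illustrative, not a GAE API): wait for a message, but give up well
# before the platform's 30 s request limit and return a "token" so the
# client knows to reconnect immediately.
import time

REQUEST_LIMIT = 30          # platform limit on a request, in seconds
SAFETY_MARGIN = 10          # give up 10 s early, i.e. after ~20 s

def poll_for_message(get_message, timeout=REQUEST_LIMIT - SAFETY_MARGIN,
                     clock=time.time, sleep=time.sleep):
    """Return ('message', m) if one arrives in time, else ('token', None)."""
    deadline = clock() + timeout
    while clock() < deadline:
        m = get_message()       # stand-in for a memcache/datastore check
        if m is not None:
            return ('message', m)
        sleep(1)
    return ('token', None)      # no message: tell the client to reconnect
```

The safety margin is the important design choice: by answering before the platform kills the request, the server stays in control of the reconnect cycle instead of surfacing DeadlineExceededError to the client.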
There seems to be a feature request (and it's accepted) on Google App Engine for long polling.
