Why am I seeing a "Deadline exceeded" HTTPException when making an HTTP request from within a deferred task (and only there) on App Engine? I'm setting a timeout of 540 seconds when I make the request using urllib2 (see below), yet the request times out after around 180 seconds. The same request works fine locally using the Cloud SDK and Djangae.
import urllib2

req = urllib2.Request(
    settings.ENDPOINT,
    json_data,
    {
        'Content-Type': 'application/json',
        'X-API-KEY': settings.SOME_KEY,
    }
)
response = urllib2.urlopen(req, timeout=settings.SOME_TIMEOUT)
UPDATE:
I've also tried setting the global google.appengine.api.urlfetch timeout to 540 via set_default_fetch_deadline without success.
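For reference, the shape of that attempt (a sketch; placing it right before the request is the only assumption):

from google.appengine.api import urlfetch

# Raise urlfetch's global default deadline (in seconds) before making the request.
urlfetch.set_default_fetch_deadline(540)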
I get these warnings all the time in my code:
/base/data/home/apps/s~my-project-id/version-id/lib/urllib3/contrib/appengine.py:256:
AppEnginePlatformWarning: URLFetch does not support granular timeout
settings, reverting to total or default URLFetch timeout.
It's possible that urllib2 hits the same issue but simply does not log a warning the way urllib3 does.
EDIT:
I just noticed you said you also tried to set urlfetch's global timeout. The Java docs say that
The maximum deadline is 60 seconds for HTTP(S) requests
For some reason this is absent from the Python docs, but my guess would be that regardless of what you set the timeout to, you are being capped by some internal maximum-timeout value.
https://cloud.google.com/appengine/docs/standard/java/outbound-requests#request_timeouts
Related
I have 2 services: one hosted in Google App Engine and one hosted in Cloud Run.
I use urlfetch (Python 2) imported from google.appengine.api in GAE to call APIs provided by the Cloud Run.
Occasionally a few (like <10 per week) DeadlineExceededError errors show up like this:
Deadline exceeded while waiting for HTTP response from URL
But over the past few days the error has suddenly been occurring frequently (like ~40 per day). Not sure if it is due to Christmas peak traffic or something else.
I've checked the Cloud Run Load Balancer logs, and it turned out the requests never reached the Load Balancer.
Has anyone encountered similar issue before? Is anything wrong with GAE urlfetch?
I found a conversation which is similar, but the suggestion there was just to handle the error...
I wonder what I can do to mitigate the issue. Many thanks.
Update 1
Checked again and found that some requests from App Engine did show up in the Cloud Run Load Balancer logs, but the timing is weird:
e.g.
Logs from GAE project
10:36:24.706 send request
10:36:29.648 deadline exceeded
Logs from Cloud Run project
10:36:35.742 reached load balancer
10:36:49.289 finished processing
Not sure why it took so long for the request to reach the Load Balancer...
Update 2
I am using GAE Standard located in US with the following settings:
runtime: python27
api_version: 1
threadsafe: true
automatic_scaling:
  max_pending_latency: 5s
inbound_services:
- warmup
- channel_presence
builtins:
- appstats: on
- remote_api: on
- deferred: on
...
The Cloud Run hosted API gateway I was trying to call is located in Asia. In front of it there is a Google Load Balancer whose type is HTTP(S) (classic).
Update 3
I wrote a simple script that directly calls the Cloud Run endpoint using axios (with its timeout set to 5s) periodically. After a while some requests timed out. I checked the logs in my Cloud Run project and found 2 different phenomena:
For request A, pretty much like what I mentioned in Update 1, logs were found for both Load Balancer and Cloud Run revision.
The Cloud Run revision log time minus the Load Balancer log time was > 5s, so I think this is an expected timeout.
But for request B, no logs were found at all.
So I guess the problem lies with neither urlfetch nor GAE?
Deadline exceeded while waiting for HTTP response from URL is actually a DeadlineExceededError. The URL was not fetched because the deadline was exceeded. This can occur with either the client-supplied deadline (which you would need to change), or the system default if the client does not supply a deadline parameter.
When you are making an HTTP request, App Engine maps this request to URLFetch. URLFetch has its own deadline that is configurable. See the URLFetch documentation.
You can set a deadline for each URLFetch request. By default, the deadline for a fetch is 5 seconds. You can change this default by:
Including the following appengine.api.urlfetch.defaultDeadline setting in your appengine-web.xml configuration file. Specify the timeout in seconds:
<system-properties>
    <property name="appengine.api.urlfetch.defaultDeadline" value="10"/>
</system-properties>
You can also adjust the default deadline by using the urlfetch.set_default_fetch_deadline() function. This function stores the new default deadline on a thread-local variable, so it must be set for each request, for example, in a custom middleware.
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(45)
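A minimal sketch of such a middleware (Django-style; the class name is hypothetical):

from google.appengine.api import urlfetch

class UrlfetchDeadlineMiddleware(object):
    def process_request(self, request):
        # The default deadline is thread-local, so re-apply it on every request.
        urlfetch.set_default_fetch_deadline(45)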
If your Cloud Run service is processing long requests, you can increase the request timeout. If your service doesn't return a response within the time specified, the request ends and the service returns an HTTP 504 error.
Update the timeoutSeconds attribute in the service's YAML file as follows:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: SERVICE
spec:
  template:
    spec:
      containers:
      - image: IMAGE
      timeoutSeconds: VALUE
OR
You can update the request timeout for a given revision at any time by using the following command:
gcloud run services update [SERVICE] --timeout=[TIMEOUT]
If requests are terminating earlier with error code 503, you might need to update the request timeout setting for your language framework:
Node.js developers might need to update the server.timeout property via server.setTimeout (use server.setTimeout(0) to achieve an unlimited timeout), depending on the version you are using.
Python developers need to update Gunicorn's default timeout (a sketch follows this list).
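For example, a minimal gunicorn.conf.py sketch (the worker counts are placeholder values):

# gunicorn.conf.py
# Gunicorn kills workers that are unresponsive for `timeout` seconds (default 30).
# 0 disables the worker timeout, so Cloud Run's request timeout governs instead.
timeout = 0
workers = 1
threads = 8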
I am using Mule 4 and trying to integrate a third-party API (confidential). It works from Postman and returns responses within 1 second.
When I wrote a request connector for the same API in Mule, it kept giving a timeout exception.
I increased the response timeout to 2 minutes but still got the same error, i.e. timeout exceeded.
Please help.
EDIT 1:
I was able to reproduce this issue in Postman. Postman adds a Connection: keep-alive header by default; when this header is present the API responds within seconds, but when it is missing the API gives a timeout error.
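A quick way to reproduce this outside Postman (a sketch; the URL and payload are placeholders):

import requests

# With keep-alive (requests also sends this by default): responds in seconds.
requests.post('https://api.example.com/endpoint', json={'ping': 1},
              headers={'Connection': 'keep-alive'}, timeout=10)

# Forcing the connection closed reproduces the timeout.
requests.post('https://api.example.com/endpoint', json={'ping': 1},
              headers={'Connection': 'close'}, timeout=10)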
You are not really providing many details of the issue, so I can only give some generic guidelines:
Ensure that there is network connectivity. If you are testing Postman and Mule from the same computer it is probably not an issue.
Ensure host, port and certificates (if using TLS) are the same. Pointing an HTTPS request to port 80 could cause a timeout sometimes.
Enable HTTP wire logging in Mule and compare against Postman's generated HTTP code to identify any significant difference between the requests. The headers, URI, and body should be the same, except perhaps for a few headers, depending on the API. Differing requests are usually the main cause of behavior differences between Postman and Mule.
I have an application that runs into problems when 100+ items are sent to my Node.js backend for processing. The entire request can take up to 3 minutes due to external API call limits per minute.
I've tried both axios and superagent, but both hit a timeout at 1-2 minutes: the frontend errors with net::ERR_EMPTY_RESPONSE (axios) or Error: Timeout exceeded at Request.push.RequestBase (superagent), yet my backend continues to process the jobs and succeeds.
In the Express backend I've opened up the timeout to 10 minutes, following the advice here: Nodejs and express server closes connection after 2 minutes.
I'm asking for advice, as my only remaining idea is to break up the items on my frontend and send many smaller requests to get the job done.
Thanks in advance for any help or advice.
With axios you can set your own timeout. Just initialize an instance as the entry point:
const api = axios.create({
  baseURL: apiURL,
  timeout: 10 * 60 * 1000, // whatever time you want
});
and just use it like:
api.get()
api.post()
...
Our GAE python application communicates with BigQuery using the Google Api Client for Python (currently we use version 1.3.1) with the GAE-specific authentication helpers. Very often we get a socket error while communicating with BigQuery.
More specifically, we build a python Google API client as follows
1. bq_scope = 'https://www.googleapis.com/auth/bigquery'
2. credentials = AppAssertionCredentials(scope=bq_scope)
3. http = credentials.authorize(httplib2.Http())
4. bq_service = build('bigquery', 'v2', http=http)
We then interact with the BQ service and get the following error
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/gae_override/httplib.py", line 536, in getresponse
'An error occured while connecting to the server: %s' % e)
error: An error occured while connecting to the server: Unable to fetch URL: [api url...]
The error raised is of type google.appengine.api.remote_socket._remote_socket_error.error, not an exception that wraps the error.
Initially we thought that it might be timeout-related, so we also tried setting a timeout altering line 3 in the above snippet to
3. http = credentials.authorize(httplib2.Http(timeout=60))
However, according to the client library's log output, the API call crashes in less than 1 second, and explicitly setting the timeout did not change the system's behavior.
Note that the error occurs in various API calls, not just a single one, and usually on very light operations; for example, we often see the error while polling BQ for a job status, and rarely on data fetching. When we re-run the operation, the system works.
Any idea why this might happen and -perhaps- a best-practice to handle it?
All HTTP(S) requests on App Engine are routed through the urlfetch service.
Beneath that, the Google API Client for Python uses httplib2 to make HTTP(S) requests, and under the covers that library uses socket.
Since the error is coming from socket you might try to set the timeout there.
import socket
timeout = 30
socket.setdefaulttimeout(timeout)
Continuing up the stack, httplib2 falls back to the socket-level default timeout when no timeout parameter is passed explicitly.
http://httplib2.readthedocs.io/en/latest/libhttplib2.html
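A sketch of passing the timeout explicitly at the httplib2 level instead (the same pattern as line 3 of the question):

import httplib2

# An explicit timeout here overrides the socket-level default.
http = httplib2.Http(timeout=30)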
Moving further up the stack you can set the timeout and retries for BigQuery.
timeout = 30000  # timeoutMs for the BigQuery job, in milliseconds
num_retries = 5
query_request = bigquery_service.jobs()
query_data = {
    'query': query_var,
    'timeoutMs': timeout,
}
And finally you can set the timeout for urlfetch.
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(30)
If you believe it's timeout related you might want to test each library / level to make sure the timeout is being passed correctly. You can also use a basic timer to see the results.
import logging
import time

start_query = time.time()
query_response = query_request.query(
    projectId='<project_name>',
    body=query_data).execute(num_retries=num_retries)
end_query = time.time()
logging.info(end_query - start_query)
There are dozens of questions about timeout and deadline exceeded for GAE and BigQuery on this site so I wouldn't be surprised if you're hitting something weird.
Good luck!
We are using the developer's Python guide with the Python data 2.15 library, as per the example stated for App Engine.
createSite("test site one", description="test site one", source_site=("https://sites.google.com/feeds/site/googleappsforus.com/google-site-template"))
We are getting an unpredictable response every time we use it.
Exception: HTTPException: Deadline exceeded while waiting for HTTP response from URL: https://sites.google.com/feeds/site/googleappsforyou.com
Did someone experience the same issue? Is it AppEngine or Sites API related?
Regards,
Deadline exceeded while waiting for HTTP response from URL is actually a DeadlineExceededError. When you are making an HTTP request, App Engine maps this request to URLFetch. URLFetch has its own deadline that is configurable. See the URLFetch documentation.
It seems that your client library catches DeadlineExceededError and throws HTTPException. Your client library either passes a deadline to URLFetch (which you would need to change) or the default deadline is insufficient for your request.
Try setting the default URLFetch deadline like this:
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(45)
Also make sure to check out Dealing with DeadlineExceededErrors in the official Docs.
Keep in mind that any end-user-initiated request must complete within 60 seconds or it will encounter a DeadlineExceededError. See App Engine Python Runtime Request Timer.
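If the call genuinely needs longer than 60 seconds, a common pattern is to push the work into a task queue, where requests get a 10-minute deadline. A minimal sketch using the deferred library (create_site_task is a hypothetical helper):

from google.appengine.ext import deferred

def create_site_task(title):
    # The long-running createSite call goes here.
    pass

# Enqueue the work; the task runs with a 10-minute request deadline.
deferred.defer(create_site_task, 'test site one')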
The accepted solution did not work for me with recent versions of httplib2 and googleapiclient. The problem appears to be that httplib2.Http passes its timeout argument all the way through to urlfetch. Since it has a default value of None, urlfetch sets the limit for that request to 5s irrespective of whatever default you set with urlfetch.set_default_fetch_deadline. It appears that you have a few options.
First option -- You can explicitly pass an instance of httplib2.Http around:
http = httplib2.Http(timeout=30)
...
client.method().execute(http=http)
Second option: you can set the default value using sockets [1]:
import socket
socket.setdefaulttimeout(30)
Third option: you can tell App Engine to use sockets for requests [2]. To do that, modify app.yaml:
env_variables:
  GAE_USE_SOCKETS_HTTPLIB: 'anyvalue'
[1] This might only work for paid apps, since the socket API might not be present for unpaid apps...
[2] I'm almost certain this will only work for paid apps, since the socket API won't be functional for unpaid apps...