Prematurely time out an API request on a Vercel deploy - reactjs

I know there's an execution timeout for serverless functions on Vercel. Is there a way for me to set a timeout manually so I don't exceed this limit?
For example, I am on a hobby account with a 5-second timeout limit. In my custom API route, I am looking for a way to "stop" the request before the 5-second timer expires and just return an error response to my client, e.g. res.status(500).json({ error: "some error message" });
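One common pattern is to race the real work against a manual timer and answer before the platform deadline; here is a minimal sketch, assuming a Next.js API route and a hypothetical doWork() helper for the slow operation:

// pages/api/example.js - answer early by racing the work against a 4-second budget (sketch)
export default async function handler(req, res) {
  let timer;
  const timeLimit = new Promise((_, reject) => {
    // Leave headroom under the 5-second platform limit.
    timer = setTimeout(() => reject(new Error('timed out')), 4000);
  });
  try {
    const result = await Promise.race([doWork(req), timeLimit]); // doWork() is a placeholder
    res.status(200).json(result);
  } catch (err) {
    res.status(500).json({ error: 'some error message' });
  } finally {
    clearTimeout(timer);
  }
}

Note that Promise.race only stops the response from waiting; the underlying work keeps running until the function instance is torn down.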

Related

Azure Logic Apps Response HTTP Action Timing-Out After 60 Seconds

I have a very simple Azure Logic App that makes a REST call to an SAP web server and translates the response JSON before sending a response back to the caller of the Logic App. What is baffling me is that when the SAP call takes just over 1 minute, the Response action throws this error:
ActionResponseTimedOut. The execution of template action 'Response' is failed: the client application timed out waiting for a response from service. This means that workflow took longer to respond than the alloted timeout value. The connection maintained between the client application and service will be closed and client application will get an HTTP status code 504 Gateway Timeout.
According to Microsoft documentation, the time-out for HTTP calls is supposed to be 120 seconds (2 minutes). Unless the Logic App history display is completely wrong, the entire Logic App never takes anywhere near 120 seconds to complete; it keeps failing at just over 60 seconds.
The SAP GET CustomerCredit action shown in the sample below is a Logic Apps Custom Connector, not the built-in SAP action. The Logic App is the current production version, not a preview version.
Am I doing something wrong? I'd be fine if the Logic App actually timed out after 2 minutes, but a 1-minute time-out is a bit extreme.
I don't know why your Logic App shows the ActionResponseTimedOut error even though it doesn't execute for more than 120 seconds. I tested it on my side and it works fine when the execution time is less than 120 seconds. Here is a workaround which may help with your problem.
1. Click "..." --> "Settings" on your "Response" action.
2. Enable "Asynchronous Response".
3. Then, when you request the URL of the logic app, it will respond with 202 Accepted immediately, and in the response headers you will find a "Location" URL.
4. Request the URL in "Location": it will respond with the result of the logic app once the workflow has completed; if the workflow hasn't completed yet, it will still respond with 202 (see the polling sketch below).
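From client code, that polling flow might look roughly like the following sketch (the fetch API and the 2-second retry delay are assumptions, not part of the Logic Apps setup):

// Trigger the logic app, then poll the Location URL until the workflow completes (sketch).
async function callLogicApp(triggerUrl) {
  const first = await fetch(triggerUrl, { method: 'POST' });
  if (first.status !== 202) return first.json(); // completed synchronously
  const location = first.headers.get('location'); // status-polling URL
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, 2000)); // wait before re-polling
    const poll = await fetch(location);
    if (poll.status !== 202) return poll.json(); // workflow finished
  }
}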

Logic Apps: HTTP trigger and response time out

I have a pretty simple Logic App that contains just 3 actions: it has an HTTP trigger, then it gets some data from a SQL server and returns an HTTP response with the SQL data.
Sometimes it takes 30-50 seconds to get the data from SQL, but in the meantime the Logic App responds to the caller with a timeout error.
The execution of template action 'Response_2' is failed: the client application timed out waiting for a response from service. This means that workflow took longer to respond than the alloted timeout value. The connection maintained between the client application and service will be closed and client application will get an HTTP status code 504 Gateway Timeout.
Any idea how to increase the allowed time for the response?
You can turn on the Asynchronous Response option in the Settings of the Response action. When your logic app runs longer than the time limit, the caller will first receive a 202 Accepted response whose headers contain a Location URL. You can request that Location URL: if the logic app is still running it will return 202 again, and once its status is Succeeded it will return the results you want. You can refer to the official documentation.

Advice when working with long running post requests from a React app to NodeJS Backend

I have an application that runs into problems when 100+ items are sent to my Node.js backend for processing. The entire request can take up to 3 minutes due to per-minute call limits on an external API.
I've tried both axios and superagent, but both hit a timeout at 1-2 minutes: the frontend errors with net::ERR_EMPTY_RESPONSE under axios and Error: Timeout exceeded at Request.push.RequestBase under superagent, yet my backend continues to process the jobs and succeeds.
In the Express backend I've opened the timeout up to 10 minutes, following the advice in "Nodejs and express server closes connection after 2 minutes" (a sketch of that change appears after this question).
I'm asking for advice, as my only next thought would be to break up the work on my frontend and send many smaller requests to get the job done.
Thanks in advance for any help or advice.
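For reference, the server-side change mentioned above usually amounts to raising the HTTP server's socket timeout; a sketch, assuming a plain Express app (the port is arbitrary):

// Raise the Node/Express server timeout to 10 minutes (sketch).
const express = require('express');
const app = express();
// ... routes go here ...
const server = app.listen(3000);
server.setTimeout(10 * 60 * 1000); // older Node versions defaulted to 2 minutes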
With axios you can set your own timeout. Just initialize an instance as the entry point:
const axios = require('axios');

const api = axios.create({
  baseURL: apiURL, // your API's base URL
  timeout: 10 * 60 * 1000, // whatever time you want (here, 10 minutes)
});
and just use it like:
api.get()
api.post()
...
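If raising timeouts remains fragile (intermediate proxies can enforce their own limits), the batching idea from the question might look roughly like this, reusing the api instance above; the /process endpoint and chunk size are assumptions:

// Send the items in small batches so each request finishes well under any timeout (sketch).
async function processInChunks(items, chunkSize = 10) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    const res = await api.post('/process', { items: chunk }); // hypothetical endpoint
    results.push(...res.data); // assumes the endpoint returns an array
  }
  return results;
}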

Azure Scheduler Long Running HTTP Action

We have an Azure Function that is called from the Azure Scheduler. It is a long-running function, and we are using the Durable Functions framework. It returns 202 Accepted to the Azure Scheduler, along with the Location for the callback to check the status of the function. For some reason, it seems the Azure Scheduler is not calling back. Based on the "Limits" documentation, you are to implement the "HTTP asynchronous protocols" if you have an action that takes longer than 1 minute, which is what we have done. We have verified that the response headers include Location and Retry-After with values.
From Docs:
There’s a static (not configurable) request timeout of 60 seconds for HTTP actions. For longer running operations, follow HTTP asynchronous protocols; for example, return a 202 immediately but continue working in the background.
Has anyone else encountered this behavior?
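For comparison, the server side of that protocol (return 202 immediately, keep working, expose a status URL) can be sketched as follows; the endpoints, the in-memory jobs map, and the doWork() helper are assumptions for illustration, not the Durable Functions API:

// Minimal HTTP asynchronous pattern: accept work, return 202 + Location, serve status (sketch).
const express = require('express');
const crypto = require('crypto');
const app = express();
const jobs = new Map(); // in-memory job store (not durable; for illustration only)

app.post('/start', (req, res) => {
  const id = crypto.randomUUID();
  jobs.set(id, { done: false, result: null });
  doWork().then((result) => jobs.set(id, { done: true, result })); // background work
  res.status(202).set('Location', `/status/${id}`).set('Retry-After', '10').end();
});

app.get('/status/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).end();
  if (!job.done) return res.status(202).set('Retry-After', '10').end();
  return res.status(200).json(job.result);
});

app.listen(3000);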

GAE App gets socket errors when communicating with BigQuery

Our GAE Python application communicates with BigQuery using the Google API Client for Python (currently we use version 1.3.1) with the GAE-specific authentication helpers. Very often we get a socket error while communicating with BigQuery.
More specifically, we build a Python Google API client as follows:
1. bq_scope = 'https://www.googleapis.com/auth/bigquery'
2. credentials = AppAssertionCredentials(scope=bq_scope)
3. http = credentials.authorize(httplib2.Http())
4. bq_service = build('bigquery', 'v2', http=http)
We then interact with the BQ service and get the following error:
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/gae_override/httplib.py", line 536, in getresponse
'An error occured while connecting to the server: %s' % e)
error: An error occured while connecting to the server: Unable to fetch URL: [api url...]
The error raised is of type google.appengine.api.remote_socket._remote_socket_error.error, not an exception that wraps the error.
Initially we thought that it might be timeout-related, so we also tried setting a timeout altering line 3 in the above snippet to
3. http = credentials.authorize(httplib2.Http(timeout=60))
However, according to the log output of client library the API call takes less than 1 second to crash and explicitly setting the timeout did not change the system behavior.
Note that the error occurs in various API calls, not just a single one, and usually this happens on very light operations, for example we often see the error while polling BQ for a job status and rarely on data fetching. When we re-run the operation, the system works.
Any idea why this might happen and -perhaps- a best-practice to handle it?
On App Engine, all HTTP(S) requests are routed through the urlfetch service.
Beneath that, the Google API Client for Python uses httplib2 to make HTTP(S) requests, and under the covers that library uses socket.
Since the error is coming from socket, you might try setting the timeout there.
import socket
timeout = 30  # seconds
socket.setdefaulttimeout(timeout)
Moving up the stack, httplib2 will pick up the socket-level default timeout; see http://httplib2.readthedocs.io/en/latest/libhttplib2.html.
Moving further up the stack you can set the timeout and retries for BigQuery.
timeout = 30000  # BigQuery's timeoutMs is expressed in milliseconds
num_retries = 5
query_request = bigquery_service.jobs()
query_data = {
    'query': query_var,
    'timeoutMs': timeout,
}
And finally you can set the timeout for urlfetch.
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(30)
If you believe it's timeout related you might want to test each library / level to make sure the timeout is being passed correctly. You can also use a basic timer to see the results.
import time
import logging

start_query = time.time()
query_response = query_request.query(
    projectId='<project_name>',
    body=query_data).execute(num_retries=num_retries)
end_query = time.time()
logging.info(end_query - start_query)
There are dozens of questions about timeout and deadline exceeded for GAE and BigQuery on this site so I wouldn't be surprised if you're hitting something weird.
Good luck!
