I have a pretty simple Logic App that contains just three actions. It has an HTTP trigger, then it gets some data from SQL Server and returns an HTTP response with the SQL data.
Sometimes it takes 30-50 seconds to get the data from SQL, but in the meantime the Logic App responds to the caller with a timeout error:
The execution of template action 'Response_2' is failed: the client application timed out waiting for a response from service. This means that workflow took longer to respond than the alloted timeout value. The connection maintained between the client application and service will be closed and client application will get an HTTP status code 504 Gateway Timeout.
Any idea how to increase the allowed time for the response?
You can turn on Asynchronous Response in the Settings of the Response action.
When your logic app runs longer than the response time limit, the caller will first receive an HTTP 202 (Accepted) status code.
That response contains a Location header.
You can request the Location URL; if your logic app is still running, it will return 202 again.
If the status of your logic app is Succeeded, it will return the results you want.
You can refer to this official document.
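If it helps, here is a minimal polling sketch of this pattern (Python with the requests library; the trigger URL and the fallback wait time are placeholders, not from the original answer):

import time
import requests

# Placeholder Logic App trigger URL; use your workflow's HTTP POST URL
resp = requests.post("https://prod-00.westus.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?...")

if resp.status_code == 202:
    poll_url = resp.headers["Location"]              # URL to poll for the final result
    while True:
        result = requests.get(poll_url)
        if result.status_code != 202:                # 200 once the run has succeeded
            print(result.status_code, result.text)
            break
        # Wait for the suggested Retry-After interval if present, otherwise a few seconds
        time.sleep(int(result.headers.get("Retry-After", 5)))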
I am using fetch in a NodeJS application. Technically, I have a ReactJS front-end calling the NodeJS backend (as a proxy), and then the proxy calls out to backend services on a different domain.
However, from logging errors from consumers (I haven't been able to reproduce this issue myself) I see that a lot of these proxy calls (using fetch) throw an error that just says Network Request Failed, which is of no help. Some context:
This only occurs on a subset of all total calls (let's say 5% of traffic)
Users that encounter this error can often make the same call again some time later (next couple minutes/hours/days) and it will go through
From Application Insights, I can see no correlation between browsers, locations, etc
Calls often return fast, like < 100 ms
All calls are HTTPS, none are HTTP
We have a fetch polyfill from fetch-ponyfill that takes over if fetch is not available (Internet Explorer). I tested this package itself and the calls went through fine. This error also occurs on browsers that do support fetch, so I don't think the polyfill is the cause.
Fetch settings for all requests
Method is set per request, but I've seen it fail on different types (GET, POST, etc)
Mode is set to 'same-origin'. I thought this was odd, since we were sending a request from one domain to another, but I tried to set it differently and it didn't affect anything. Also, why would some requests work for some, but not for others?
Body is set per request, based on the data being sent.
Headers is usually just Accept and Content-Type, both set to JSON.
I have tried researching this topic before, but most posts I found referenced React native applications running on iOS, where you have to set some security permissions in the plist file to allow HTTP requests or something to do with transport security.
I have implemented logging at specific points in Application Insights, and I can see that fetch() was called, but then() was never reached; it went straight to .catch(). So it's not even reaching the code that parses the response (we parse the JSON response and call other functions, but as I said, it doesn't even get that far).
Which is also odd: the request never comes back, yet it fails (often) within 100 ms.
My suspicions:
Some consumers have some sort of add-on for their browser that is messing with the request. Although I run uBlock Origin and HTTPS Everywhere and have not seen this error, I'm not sure what else could be modifying requests in a way that would make them fail immediately.
The call goes through, which then reaches an Azure Application Gateway, which might fail for some reason (too many connected clients, not enough ports, etc) and returns a response that immediately fails the fetch call without running the .then() on the response.
For #2, I remember I had traced a network call that failed and returned Network Request Failed: it made it through the proxy -> made it through the Application Gateway -> hit the backend services -> the backend services sent a response. I am currently requesting access to the backend service logs to verify this on some more recent calls (last time I did this, it was through a screen share with a backend developer) and hopefully clear up the path back to the client (the ReactJS application). I do remember, though, that it made it to the backend services successfully.
So I'm honestly not sure what's going on here. Does anyone have any insight?
Based on your excellent description and detective work, it's clear that the problem is between your Node app and the other domain. The other domain is throwing an error and your proxy has no choice but to say that there's an error on the server. That's why it's always throwing a 500-series error, the Network Request Failed error that you're seeing.
It's an intermittent problem, so the error is inconsistent. It's a waste of your time to continue to look at the browser because the problem will have been created beyond that, either in your proxy translating that request or on the remote server. You have to find that error.
Here's what I'd do...
Implement brute-force logging in your Node app. You can use Bunyan, Winston, or just require('fs') and write out to a file when an error occurs, then look at the results. Only log when the response code from the other server is in the 400 or 500 range. Log the request object and the response object.
Something like this with Bunyan:
const bunyan = require('bunyan');
const log = bunyan.createLogger({ name: 'proxy' });

fetch(urlToRemoteServer)
  .then(res => {
    // Log the upstream response details when the other server answers with 4xx/5xx
    if (!res.ok) log.warn({ request: req, status: res.status, statusText: res.statusText });
    return res.json();
  })
  .then(data => whateverElseYoureDoing(data))
  .catch(err => {
    // A network-level failure: there is no upstream response object to inspect here
    log.error({ request: req, err: err });
  });
where res is the response we just got from the other domain and req is our request to them (both come from your surrounding proxy code).
The logs on your Azure server will then have the entire request and response. From those you can find commonalities and (🤞) the cause of the problem.
I have a very simple Azure Logic App that makes a REST call to an SAP web server and translates the response JSON before sending a response back to the caller of the Logic App. What is baffling me is that when the SAP call takes just over 1 minute, the Response action throws this error:
ActionResponseTimedOut. The execution of template action 'Response' is
failed: the client application timed out waiting for a response from
service. This means that workflow took longer to respond than the
alloted timeout value. The connection maintained between the client
application and service will be closed and client application will get
an HTTP status code 504 Gateway Timeout.
According to Microsoft documentation, the timeout for HTTP calls is supposed to be 120 seconds (2 minutes). Unless the Logic App run history display is completely wrong, the entire Logic App never takes anywhere near 120 seconds to complete; it keeps failing at just over 60 seconds.
The SAP GET CustomerCredit action shown in the sample below is a Logic Apps custom connector, not the built-in SAP action. The Logic App is the current production version, not a preview version.
Am I doing something wrong? I'd be fine if the Logic App actually timed-out after 2 minutes, but a 1 minute time-out is a bit extreme.
I don't know why your logic app shows the ActionResponseTimedOut error even though it doesn't execute for more than 120 seconds. I tested it on my side and it works fine when the execution time is less than 120 seconds. Here is a workaround which may help with your problem.
1. Click "..." --> "Settings" on your "Response" action.
2. Enable "Asynchronous Response".
3. Then when you request the URL of the logic app, it will respond with 202 Accepted immediately, and in the headers of that response you can find a Location URL.
4. Request the URL in Location; it will respond with the result of the logic app if the workflow has completed. If the workflow hasn't completed yet, it will still respond with 202.
I have a React app on the front end (client), calling an API provided by a Flask back end (server) via the axios package.
Both client and server are running locally. Client: localhost:3000. Server: localhost:5000
The problem is: after many requests, the server stops receiving requests from the client.
Here is a picture of the received requests, captured on the back end:
As you can see, after some successful requests, the React app gets stuck with pending requests.
The latest request, 127.0.0.1 - - [11/Jun/2020 09:34:39] "GET /posts HTTP/1.1", is an error 500, but in the Network tab of Chrome the request is still pending, so I don't know whether the server received that request or not. Nothing is shown in the Chrome console, and no error is printed in the back-end terminal window (I have some lines of code to print errors in the back end), just the error 500.
What am I doing wrong? If this question is still confusing, please comment below and I can update it with more info. Thank you!
The fact that your server is reporting a 500 error, but the request on the client is still pending makes me think that something is wrong on the server side.
To verify this, you can try manually calling raise Exception() in one of your endpoints. Then instead of seeing the error resolve through the react app, you can try calling your endpoint with curl, or a client like Postman. If you are able to see a 500 error there, then the error is likely in your React app. If that request does not resolve, then the error is probably in the server.
Is an error response being sent to the client? The way to do this in Flask is to use error handlers.
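For reference, a minimal sketch of a Flask error handler (the route and message below are placeholders, not taken from your app), so an unhandled exception still produces an explicit 500 response instead of leaving the client hanging:

from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_unexpected_error(err):
    # Log the full traceback server-side, then return a real response to the client
    app.logger.exception(err)
    return jsonify({"error": "internal server error"}), 500

@app.route("/posts")
def posts():
    raise Exception("manual test of the error handler")  # the raise Exception() test suggested above

Calling the endpoint with curl or Postman should then return the JSON 500 body rather than hanging.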
Thank you for helping me.
I debugged my server and found that two requests from the React app were being served by one cursor (I used a single connector cursor for the whole connection, so when two requests arrived at the same moment, the cursor did not know how to serve both results).
Solution: I replaced the two requests with one request to a new API endpoint, which returns the results of both original requests.
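An alternative to merging the requests is to give each request its own connection and cursor; here is a rough sketch assuming the back end uses mysql.connector (the connection details and table are placeholders):

from flask import Flask, jsonify
import mysql.connector

app = Flask(__name__)

def get_connection():
    # One connection per request, so two concurrent handlers never share a cursor
    return mysql.connector.connect(host="localhost", user="app", password="...", database="blog")

@app.route("/posts")
def posts():
    conn = get_connection()
    try:
        cur = conn.cursor(dictionary=True)
        cur.execute("SELECT id, title FROM posts")
        return jsonify(cur.fetchall())
    finally:
        conn.close()

A connection pool would avoid the per-request connect cost, but the key point is that a cursor should not be shared between concurrent requests.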
In my Shopify store I have set up an order-creation webhook. The webhook points to a CakePHP action URL which receives the data from the webhook as follows:
$content = file_get_contents ( "php://input" );
After that it saves this order data to the app database as:
$data = json_decode($content, true); // decode the JSON payload (assumed; $data was not defined in the original snippet)
$orderData = array('order' => $data['order_number'], 'details' => $content);
$orders = new Order();
$orders->saveAll($orderData);
Now the issue is that for each single order created, the webhook is getting invoked multiple times. Although it performs the necessary action on the first attempt, Shopify is not able to tell that the call succeeded and keeps invoking it again and again until the retry limit is reached. After the limit is reached, the webhook is deleted from the store.
My question is: do we need to send any type of status or response after the webhook handler performs the necessary action? It is not very clear from the Shopify webhook documentation; they state that webhook call success is determined by an HTTP 200 status. How can I check what status my webhook endpoint returns? And how can I make sure, through my app code, that Shopify is informed of the webhook's success so it does not invoke further calls?
Yes, you need to send a 200 response to Shopify within a short time (5 seconds). Otherwise, Shopify will retry the request shortly afterwards.
The official guide suggests that you store the webhook data and process it with a queue, a thread, or whatever approach you prefer. After that, you return a 200 response to Shopify immediately.
IMO, if there are many webhook requests being sent to you, it's better to separate the webhook receiver from your app server. You can do that with AWS Lambda or Docker Swarm so that the webhook requests won't break your app server.
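Not CakePHP, but here is a minimal sketch of that acknowledge-then-queue shape (Python/Flask chosen purely for illustration; the route name and worker are placeholders):

import queue
import threading
from flask import Flask, request

app = Flask(__name__)
work = queue.Queue()

def worker():
    while True:
        payload = work.get()
        # ... save the order to the database here, outside the webhook request cycle ...
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

@app.route("/webhooks/orders-create", methods=["POST"])
def orders_create():
    work.put(request.get_data())   # stash the raw payload for later processing
    return "", 200                 # acknowledge immediately so Shopify stops retrying

The exact mechanics differ in CakePHP, but the idea is the same: acknowledge with a 200 as fast as possible and defer the heavy work.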
Sources: Shopify's documentation on the webhook time limit, and a guide to handling webhooks with AWS Lambda.
Just to clarify for others, you have to explicitly return a 2XX HTTP code or it'll retry 19 times over 48 hours, then delete your webhook if it exceeds that.
I created an App Engine backend to serve HTTP requests for a long-running process. The backend works as expected when the query references a small input, but times out when the input is large. The query parameter is the URL of an App Engine Blobstore blob, which is the input data for the backend process. I thought the whole point of using App Engine backends was to avoid the timeout restrictions that App Engine frontends have. How can I avoid getting a timeout?
I call the backend like this, setting the connection timeout length to infinite:
HttpURLConnection connection = (HttpURLConnection) (new URL(url + "?" + query).openConnection());
connection.setRequestProperty("Accept-Charset", charset);
connection.setRequestMethod("GET");
connection.setConnectTimeout(0); // 0 means no connect timeout
connection.connect();
InputStream in = connection.getInputStream();
String json = "";                // accumulate the response body
int ch;
while ((ch = in.read()) != -1)
    json = json + String.valueOf((char) ch);
System.out.println("Response Message is: " + json);
connection.disconnect();
The traceback (edited for anonymity) is:
Uncaught exception from servlet
java.net.SocketTimeoutException: Timeout while fetching URL: http://my-backend.myapp.appspot.com/somemethod?someparameter=AMIfv97IBE43y1pFaLNSKO1hAH1U4cpB45dc756FzVAyifPner8_TCJbg1pPMwMulsGnObJTgiC2I6G6CdWpSrH8TrRBO9x8BG_No26AM9LmGSkcbQZiilhC_-KGLx17mrS6QOLsUm3JFY88h8TnFNer5N6-cl0iKA
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:142)
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:43)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:417)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:296)
at org.someorg.server.HUDXML3UploadService.doPost(SomeService.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
As you can see, I'm not getting a DeadlineExceededException, so I think something other than Google's request limits is causing the timeout, which also makes this a different issue from similar Stack Overflow posts on the topic.
I humbly thank you for any insights.
Update 2/19/2012: I see what's going on, I think. I should be able to have the client wait indefinitely, using a GWT async handler (or any other client-side async framework), for any client request to complete, so I don't think that is the problem. The problem is that the file upload calls the _ah/upload App Engine system endpoint, which (once the blob is stored in the Blobstore) calls the upload service's doPost backend to process the blob. The client request to _ah/upload is what is timing out, because the backend doesn't return in a timely fashion. To make this timeout go away, I attempted to make the _ah/upload service itself a public backend accessible via http://backend_name.project_name.appspot.com/_ah/upload, but I don't think Google allows a system service (like _ah/upload) to be run as a backend. My next approach is to have _ah/upload return immediately after triggering the backend processing, and then call another service to get the original response I wanted, after processing is finished.
The solution was to start the backend process as a task, add it to the task queue, and return a response to the client instead of waiting for the backend task (which can take a long time) to finish. If I could have assigned _ah/upload to a backend, that would also have solved the problem, since the client's async handler could wait forever for the backend to finish, but I do not think Google permits assigning system servlets to backends. The client now has to poll for the persisted result of the backend process, as Paul C mentioned, since tasks cannot respond like a normal servlet.
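For what it's worth, a rough sketch of that enqueue-and-return-early shape on the legacy App Engine Python runtime (the question is Java, but the pattern is the same; the handler paths and parameter names are made up):

import webapp2
from google.appengine.api import taskqueue

class StartProcessingHandler(webapp2.RequestHandler):
    def post(self):
        blob_key = self.request.get('blob_key')  # placeholder parameter name
        # Hand the long-running work to the task queue instead of doing it inline
        taskqueue.add(url='/tasks/process-blob', params={'blob_key': blob_key})
        # Return immediately; the client polls elsewhere for the persisted result
        self.response.set_status(202)

class ProcessBlobHandler(webapp2.RequestHandler):
    def post(self):
        blob_key = self.request.get('blob_key')
        # ... do the long-running processing here and persist the result for the client to poll ...

app = webapp2.WSGIApplication([
    ('/start-processing', StartProcessingHandler),
    ('/tasks/process-blob', ProcessBlobHandler),
])

The task handler then writes its result somewhere the client can poll, which matches the polling approach described above.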