I have a Logic App that calls an API endpoint through APIM. When I call the endpoint from the Logic App using the webhook connector, the webhook keeps running and does not resume the Logic App.
However, if I try the same API endpoint through Postman, it responds within 3-5 seconds.
Below is the screenshot.
Not sure if I am missing anything.
This is outside the intended use of the HTTP Webhook action:
https://learn.microsoft.com/en-us/azure/connectors/connectors-native-webhook
You'll want to use the regular HTTP action (a plain POST) instead; you can extend the timeout in its settings to accommodate your long-running HTTP call
(click the three dots in the action's top-right corner, then use the "Action Timeout" setting).
You could also try disabling the asynchronous behavior:
https://learn.microsoft.com/en-us/azure/connectors/connectors-native-http#avoid-http-timeouts-for-long-running-tasks
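For reference, here is roughly what both of those options look like in the workflow's code view. The action name and URI are placeholders, "limit.timeout" corresponds to the Action Timeout setting, and "operationOptions": "DisableAsyncPattern" is the option from the second link; they're shown together only to indicate where each property lives, so double-check against your own code view:

"Call_APIM_endpoint": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://your-apim-instance.azure-api.net/your-api/operation"
    },
    "runAfter": {},
    "limit": {
        "timeout": "PT10M"
    },
    "operationOptions": "DisableAsyncPattern"
}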
If the logic app still times out, you will probably need to design a different solution.
I am using fetch in a NodeJS application. Technically, I have a ReactJS front-end calling the NodeJS backend (as a proxy), and then the proxy calls out to backend services on a different domain.
However, from logging errors from consumers (I haven't been able to reproduce this issue myself) I see that a lot of these proxy calls (using fetch) throw an error that just says Network Request Failed, which is of no help. Some context:
This only occurs on a subset of all total calls (let's say 5% of traffic)
Users that encounter this error can often make the same call again some time later (within the next couple of minutes/hours/days) and it will go through
From Application Insights, I can see no correlation between browsers, locations, etc
Calls often return fast, like < 100 ms
All calls are HTTPS, none are HTTP
We have a fetch polyfill from fetch-ponyfill that takes over if fetch is not available (Internet Explorer). I tested this package itself and the calls went through fine. I should also mention that this error does occur on browsers that do support fetch, so I don't think this is the cause.
Fetch settings for all requests
Method is set per request, but I've seen it fail on different types (GET, POST, etc)
Mode is set to 'same-origin'. I thought this was odd, since we were sending a request from one domain to another, but I tried to set it differently and it didn't affect anything. Also, why would some requests work for some, but not for others?
Body is set per request, based on the data being sent.
Headers are usually just Accept and Content-Type, both set to application/json.
I have tried researching this topic before, but most posts I found referenced React Native applications running on iOS, where you have to set some security permissions in the plist file to allow HTTP requests, or something to do with App Transport Security.
I have implemented logging at specific points in Application Insights, and I can see that fetch() was called, but .then() was never reached; it went straight to the .catch(). So it's not even reaching the code that parses the response, because apparently no response came back (we then parse the JSON response and call other functions, but as I said, it doesn't even reach this point).
Which is also odd: the response never comes back, yet the call often fails within 100 ms.
My suspicions:
Some consumers have some sort of add-on for their browser that is messing with the request. Although I run with uBlock Origin and HTTPS Everywhere, I have not seen this error. I'm not sure what else could be modifying requests in a way that would make them fail immediately.
The call goes through, then reaches an Azure Application Gateway, which might fail for some reason (too many connected clients, not enough ports, etc.) and return a response that immediately fails the fetch call without running the .then() on the response.
For #2, I remember I had traced a network call that failed and returned Network Request Failed: it made it through the proxy -> made it through the Application Gateway -> hit the backend services -> and the backend services sent a response. I am currently requesting access to the backend service logs in order to verify this on some more recent calls (last time I did this, it was through a screen share with a backend developer), and hopefully trace the path back to the client (the ReactJS application). I do remember, though, that it made it to the backend services successfully.
So I'm honestly not sure what's going on here. Does anyone have any insight?
Based on your excellent description and detective work, it's clear that the problem is between your Node app and the other domain. The other domain is throwing an error and your proxy has no choice but to say that there's an error on the server. That's why it's always throwing a 500-series error, the Network Request Failed error that you're seeing.
It's an intermittent problem, so the error is inconsistent. It's a waste of your time to continue to look at the browser because the problem will have been created beyond that, either in your proxy translating that request or on the remote server. You have to find that error.
Here's what I'd do...
Implement brute-force logging in your Node app. You can use Bunyan or Winston, or just require('fs') and write out to a file when an error occurs. Then look at the results. Only log when the response code from the other server is in the 400 or 500 range. Log the request object and the response object.
Something like this with Bunyan:
const bunyan = require('bunyan');
const log = bunyan.createLogger({name: 'proxy'});
fetch(urlToRemoteServer)
  .then(res => {
    // res is the response we just got from the other domain; fetch doesn't reject on 400/500, so log those here
    if (!res.ok) log.info({request: req, response: {status: res.status, statusText: res.statusText}});
    return res.json();
  })
  .then(data => whateverElseYoureDoing(data))
  .catch(err => {
    // Network-level failure: no response came back, so log our request to them and the error
    log.info({request: req, err: err});
  });
where the res in this case is the response we just got from the other domain and req is our request to them.
The logs on your Azure server will then have the entire request and response. From this you can find commonalities and (🤞) the cause of the problem.
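If you also want the browser to see something more useful than a generic failure, a rough Express-style sketch of the proxy route could pass the remote status straight through. The route path, urlToRemoteServer and the logger here are illustrative, not taken from your app:

const express = require('express');
const bunyan = require('bunyan');
const log = bunyan.createLogger({ name: 'proxy' });
const urlToRemoteServer = 'https://remote.example.com/api'; // placeholder
const app = express();
app.use(express.json());

app.post('/api/proxy', async (req, res) => {
  try {
    // Forward the client's payload to the remote domain (global fetch on Node 18+, or node-fetch)
    const upstream = await fetch(urlToRemoteServer, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
      body: JSON.stringify(req.body),
    });
    if (!upstream.ok) {
      // The remote domain answered, but with a 400/500: log it for the commonality hunt
      log.warn({ url: urlToRemoteServer, status: upstream.status });
    }
    // Pass the remote status and body straight through to the browser
    res.status(upstream.status).send(await upstream.text());
  } catch (err) {
    // Nothing came back at all from the remote domain; this is the case that surfaces as Network Request Failed
    log.error({ url: urlToRemoteServer, err: err });
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

With the status passed through, the browser's fetch resolves instead of rejecting, so the real upstream status shows up in your client-side logging rather than a bare failure.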
My app architecture is: Slack events -> API Gateway -> Lambda -> does some operation and returns a .png file generated with numpy and matplotlib.
When I deal with just text output in Lambda it works fine, but when I deal with file uploads it behaves strangely.
It uploads the file to Slack using the files.upload method, and then after a minute my Lambda gets triggered again and ends up uploading another file.
Is it because Slack returns an HTTP response for the files.upload method and somehow my app catches that and runs again?
It would be of great help. The Slack events are identical, with no difference between them, but I am really not sure why my Lambda gets invoked again. I verified the request IDs and they are different; even at API Gateway there are two different request IDs, but I made the request only once... it drives me crazy.
I found a way to do this. With the help of this article https://aws.amazon.com/premiumsupport/knowledge-center/custom-headers-api-gateway-lambda/ I added the HTTP headers (the client header information) in API Gateway and pass them through to Lambda. In Lambda I then catch the retry events from Slack, which carry the X-Slack-Retry-Num header, and return 200 immediately for those.
if 'X-Slack-Retry-Num' in output['headers']:
    # Slack is retrying an event it thinks failed, so acknowledge it right away
    slk_retry = output['headers']['X-Slack-Retry-Num']
    return 200
else:
    # This is the first delivery of the event: run the actual upload logic here
    ...
Logic Apps support calling other Logic Apps with a special action.
They support something they call the "asynchronous pattern" through an option where the called Logic App returns a 202 (Accepted) and the calling Logic App will implicitly poll the same trigger URL until completion is signalled using 200 (OK).
How are you supposed to implement this pattern in the called Logic App? I can send a response, but once that's happened, I can't send a response again. Or can I? If so, how? How do you specify the polling URL?
In the Response action's settings, you can enable 'Asynchronous Response'.
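For context, once 'Asynchronous Response' is enabled the calling side handles the polling for you; the underlying handshake is just the standard 202/Location/200 pattern. Hand-rolled, it would look roughly like this (the URLs, header name and 5-second delay are illustrative):

async function callAsyncWorkflow(triggerUrl, payload) {
  // Kick off the called workflow
  let res = await fetch(triggerUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  // The called workflow answers 202 (Accepted) and advertises a status URL in the Location header
  const statusUrl = res.headers.get('location');
  while (res.status === 202) {
    await new Promise(resolve => setTimeout(resolve, 5000)); // wait, then poll again
    res = await fetch(statusUrl);
  }
  // 200 (OK) signals completion; the body carries the final result
  return res.json();
}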
In my Shopify store I have set up an order-creation webhook. The webhook points to a CakePHP action URL, which receives the data from the webhook as follows:
$content = file_get_contents("php://input");
After that, it saves this order data to the app database:
$data = json_decode($content, true); // assuming the webhook body is decoded from JSON somewhere like this
$orderData = array('order' => $data['order_number'], 'details' => $content);
$orders = new Order();
$orders->saveAll($orderData);
Now the issue is that for each single order created, the webhook is invoked multiple times. Although it performs the necessary action on the first attempt, Shopify is not able to recognize the call as successful and keeps invoking the webhook again and again until the retry limit is reached. After the limit is reached, the webhook is deleted from the store.
My question is: do we need to send any type of status or response after the webhook call performs the necessary action? It is not very clear from the Shopify webhook documentation; they state that webhook call success is determined by HTTP status 200. How can I check what status is returned by a webhook call? And how can I make sure that Shopify is informed of the webhook's success through my app code, so that it does not invoke the webhook again?
Yes, you need to send a 200 response to Shopify within a short time (5 seconds). Otherwise, Shopify will resend the request shortly afterwards.
The official guide suggests that you store the webhook data and process it later with a queue, a thread, or whatever you prefer, and return a 200 response to Shopify immediately.
IMO, if many webhook requests are being sent to you, it's better to separate the webhook receiver from your app server. You can do that with AWS Lambda or Docker Swarm so that the webhook requests won't overwhelm your app server.
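A minimal sketch of that kind of separate receiver on AWS Lambda (behind API Gateway) might look like the following; the SQS queue and environment variable are assumptions, not something Shopify prescribes:

// Store the raw webhook payload on a queue, then acknowledge Shopify immediately
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');
const sqs = new SQSClient({});

exports.handler = async (event) => {
  // Hand the order payload off for later processing so the response stays fast
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.ORDER_QUEUE_URL, // assumed environment variable
    MessageBody: event.body,
  }));
  // A quick 200 is all Shopify needs to count the delivery as successful
  return { statusCode: 200 };
};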
Sources: the Shopify documentation on the webhook time limit, and an article on handling webhooks with AWS Lambda.
Just to clarify for others, you have to explicitly return a 2XX HTTP code or it'll retry 19 times over 48 hours, then delete your webhook if it exceeds that.
I have a Silverlight app that uses a simple Login.aspx page. I have all the basic ASP.NET config and it works great for page requests when sessions expire or are missing. But the Silverlight service requests are not page requests; they expect application/msbin serialized data. So when these requests arrive for an expired session, they are redirected to the login page, which they follow, and they eventually end up swallowing HTML content (the login page markup). Of course that ends with a content/parsing error, as I would expect.
So my question is: what must I do to have the Silverlight service responses somehow redirect the browser when the server finds the session has expired?
I've written this by hand in JavaScript before and had to have the AJAX response handler detect a custom header so it could do a document.location = newPath. Something along those lines would be nice.
Also, I'm not interested in other solutions I've read about for keeping the session alive with no-op pings, and I would prefer not to implement timers and a custom session manager inside the client. I'm hoping I've missed a setting somewhere.
I had exactly the same problem. Unfortunately it is not a simple setting. I documented a possible solution here. After a session timeout, a call to the RIA services returns a DomainServiceException, which is handled in the central unhandled-exception handler. If the user is not authenticated, I just redirect to the main page.