In my Shopify store, I have set up an order creation webhook. The webhook points to a CakePHP action URL, which receives the data from the webhook as follows:
$content = file_get_contents("php://input");
After that, the action saves the order data to the app database:
$orderData = array('order' => $data['order_number'], 'details' => $content);
$orders = new Order();
$orders->saveAll($orderData);
Now the issue is that for each order created, the webhook is invoked multiple times. Although my action does the necessary work on the first attempt, Shopify does not register the call as successful and keeps invoking the webhook again and again until the retry limit is reached, after which the webhook is deleted from the store.
My question is: do I need to send some kind of status or response back after the webhook call performs the necessary action? It is not very clear from the Shopify webhook documentation, which states that webhook success is determined by an HTTP 200 status. How can I check what status is returned to a webhook call, and how can I make sure my app code informs Shopify of success so that it does not invoke the webhook again?
Yes, you need to send a 200 response to Shopify within a short time (5 seconds). Otherwise, Shopify will retry the request shortly afterwards.
The official guide suggests that you store the webhook data, return a 200 response to Shopify immediately, and process the data later with a queue, a thread, or whatever way you prefer.
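As a rough, framework-agnostic sketch of that pattern (plain PHP; storeRawWebhook() is a hypothetical helper standing in for your own storage code, such as the saveAll() call in the question):

<?php
// Minimal webhook receiver: persist the raw payload, acknowledge with HTTP 200
// right away, and leave the real processing to a later job.
$payload = file_get_contents('php://input');

// Hypothetical helper standing in for your own persistence
// (a table insert, a queue push, the CakePHP saveAll() from the question, ...).
storeRawWebhook($payload);

// Tell Shopify the delivery succeeded before doing anything slow.
http_response_code(200);
echo 'OK';

// Anything expensive (parsing the order, calling other APIs, sending emails, ...)
// should run later, e.g. in a cron job or queue worker that reads the stored payloads.

In CakePHP terms the same idea applies: make sure the action itself just stores the body and returns a plain 200 response, and defer any heavy work.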
IMO, if many webhook requests are being sent to you, it's better to separate the webhook receiver from your app server. You can do that with AWS Lambda or Docker Swarm so that webhook traffic won't take down your app server.
Sources: Shopify's documentation on the webhook time limit, and a guide on handling webhooks with AWS Lambda.
Just to clarify for others: you have to explicitly return a 2XX HTTP status code, or Shopify will retry 19 times over 48 hours and then delete your webhook if it still hasn't succeeded.
I have a logic app that calls an API endpoint through APIM. When I call the endpoint in the logic app using the webhook connector, the webhook keeps running and does not resume the logic app. If I try the same API endpoint through Postman, it responds within 3-5 seconds.
Not sure if I am missing anything.
This is outside the intended use of the HTTP Webhook action.
https://learn.microsoft.com/en-us/azure/connectors/connectors-native-webhook
You'll want to use the regular HTTP action instead; you can extend its timeout in the settings to accommodate your long-running HTTP call (click the triple dots in the top-right of the action, then use the "Action Timeout" setting).
You could also try disabling asynchronous behavior:
https://learn.microsoft.com/en-us/azure/connectors/connectors-native-http#avoid-http-timeouts-for-long-running-tasks
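For reference, in the Logic App's code view those two settings end up on the HTTP action roughly like the sketch below. The action name, URI, and timeout value are placeholders, and the exact property names can vary between Logic Apps schema versions, so treat this as an illustration rather than a definitive schema:

"Call_my_API": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://my-apim-instance.azure-api.net/my/endpoint"
    },
    "limit": {
        "timeout": "PT10M"
    },
    "operationOptions": "DisableAsyncPattern",
    "runAfter": {}
}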
If the logic app still times out, you will probably need to design a different solution.
I have a request that queries my service and sits inside a tryMax block.
The access token used to authenticate requests expires every five minutes; it is generated at the beginning of the simulation run and saved as ${token}.
Is there a way, within the tryMax, to send another token-generation request that updates the expired ${token} (the Authorization header value) when the response code is 401 or the response body indicates the request was not authenticated, and then retry the request before tryMax moves on to the next iteration?
I have tried saving the status code as a session attribute, but the token-refresh request is not sent and the token doesn't update. I have tried a .doIf after the request's exec, putting the doIf inside its own exec, and even playing around with transformResponse, all with no success.
Any suggestions on how to approach this?
You can do something like what is outlined in Gatling (performance test): how to perform task in background every x-minutes.
However, is this really the scenario you want to model? How does the client you are simulating handle a 401? The approach you are proposing only works if the client is in charge of manually handling its own token refreshes.
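If that is the model you want, here is a minimal sketch of one way to wire it up in the Gatling Scala DSL. The base URL, endpoint paths, the "lastStatus" session key, and the JSON path for the token are all placeholders for your own setup:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ReAuthSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://my-service.example.com") // placeholder base URL

  // Fetches a fresh token and overwrites ${token} in the session.
  val fetchToken = exec(
    http("refresh token")
      .post("/oauth/token") // placeholder auth endpoint
      .formParam("grant_type", "client_credentials")
      .check(jsonPath("$.access_token").saveAs("token"))
  )

  // The protected call: save the status so we can react to a 401, and treat
  // 401 as "not a failure yet" so the doIf below gets a chance to run.
  val protectedCall = exec(
    http("my service")
      .get("/api/resource") // placeholder endpoint
      .header("Authorization", "Bearer ${token}")
      .check(status.saveAs("lastStatus"))
      .check(status.in(200, 401))
  )

  val scn = scenario("with re-auth")
    .exec(fetchToken)
    .tryMax(3) {
      exec(protectedCall)
        .doIf(session => session("lastStatus").asOption[Int].contains(401)) {
          // Token expired: refresh it and retry within the same tryMax attempt.
          exec(fetchToken).exec(protectedCall)
        }
    }

  setUp(scn.inject(atOnceUsers(1)).protocols(httpProtocol))
}

Note that because the sketch treats 401 as a passing check, a second consecutive 401 will not fail the attempt; give the retried request a stricter check (for example status.is(200)) if you want tryMax to keep retrying in that case.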
Is it possible to receive direct messages on behalf of a Slack bot via POST requests to a certain domain?
I want to have an endpoint in Google App Engine that receives incoming direct messages from Slack via POST requests, and posts messages back via the API. Is it possible?
You can use the new Events API: create a bot, subscribe to message.im events, and set your endpoint as the callback URL.
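For illustration, such an App Engine endpoint could look roughly like the Java servlet sketch below: it answers Slack's url_verification challenge, then replies to direct messages via chat.postMessage through URL Fetch. The bot token and reply text are placeholders, Gson is assumed to be on the classpath, and Slack request-signature verification is omitted:

// SlackEventsServlet.java -- sketch of a GAE endpoint for the Slack Events API.
import com.google.appengine.api.urlfetch.HTTPHeader;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.net.URL;
import java.util.stream.Collectors;

public class SlackEventsServlet extends HttpServlet {

  private static final String BOT_TOKEN = "xoxb-your-bot-token"; // placeholder

  @Override
  public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String raw = req.getReader().lines().collect(Collectors.joining("\n"));
    JsonObject body = JsonParser.parseString(raw).getAsJsonObject(); // Gson 2.8.6+

    // Slack sends a one-time url_verification challenge when you register the endpoint.
    if ("url_verification".equals(body.get("type").getAsString())) {
      resp.setContentType("text/plain");
      resp.getWriter().write(body.get("challenge").getAsString());
      return;
    }

    // Direct messages arrive as event_callback payloads carrying a "message" event.
    if ("event_callback".equals(body.get("type").getAsString())) {
      JsonObject event = body.getAsJsonObject("event");
      if ("message".equals(event.get("type").getAsString())
          && event.has("text") && !event.has("bot_id")) {
        postMessage(event.get("channel").getAsString(),
            "You said: " + event.get("text").getAsString());
      }
    }

    // Acknowledge quickly so Slack does not retry the delivery.
    resp.setStatus(HttpServletResponse.SC_OK);
  }

  // Posts a reply with chat.postMessage via App Engine's URL Fetch service.
  private void postMessage(String channel, String text) throws IOException {
    JsonObject payload = new JsonObject();
    payload.addProperty("channel", channel);
    payload.addProperty("text", text);

    HTTPRequest post = new HTTPRequest(
        new URL("https://slack.com/api/chat.postMessage"), HTTPMethod.POST);
    post.addHeader(new HTTPHeader("Content-Type", "application/json; charset=utf-8"));
    post.addHeader(new HTTPHeader("Authorization", "Bearer " + BOT_TOKEN));
    post.setPayload(payload.toString().getBytes("UTF-8"));
    URLFetchServiceFactory.getURLFetchService().fetch(post);
  }
}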
You just need to set up an "outgoing webhook" in Slack and point it to whatever endpoint you need on your GAE server. To respond, just use an "incoming webhook" so the answer goes back into Slack.
Is there a way to send an HTTP request (to a specific URL) each time I receive an email (Google account), with the content of the received email, using Google App Engine?
As per your question, it seems that you already have an Incoming Email Handler in your App Engine application.
If that is the case, then in the incoming email handler you can parse the message and, if it meets your condition for invoking the HTTP request, make the call there. You can use the URL Fetch service for that.
One design decision you might want to make is whether to keep all your URL Fetch code inside the incoming email handler or to hand that work off to a Task Queue. In the latter case, the handler creates a task whenever an email comes in, and the task's handler then takes care of one or more things, including invoking the HTTP request you want.
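To illustrate the Task Queue variant, here is a rough Java sketch for App Engine standard. The /tasks/forward-email worker URL and the https://example.com/receive-email target URL are placeholders, and multipart email bodies are not handled:

// IncomingMailServlet.java -- mapped to /_ah/mail/* in web.xml; runs on each received email.
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.internet.MimeMessage;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Properties;

public class IncomingMailServlet extends HttpServlet {
  @Override
  public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    try {
      MimeMessage message = new MimeMessage(
          Session.getDefaultInstance(new Properties(), null), req.getInputStream());
      String subject = message.getSubject();
      Object content = message.getContent();
      String body = (content instanceof String) ? (String) content : ""; // multipart handling omitted

      // Hand the outbound HTTP call to a push queue so the mail handler returns quickly.
      Queue queue = QueueFactory.getDefaultQueue();
      queue.add(TaskOptions.Builder.withUrl("/tasks/forward-email") // hypothetical worker URL
          .param("subject", subject == null ? "" : subject)
          .param("body", body));
    } catch (MessagingException e) {
      throw new IOException(e);
    }
  }
}

// ForwardEmailServlet.java -- worker behind /tasks/forward-email; makes the URL Fetch call.
import com.google.appengine.api.urlfetch.HTTPHeader;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.net.URL;
import java.net.URLEncoder;

public class ForwardEmailServlet extends HttpServlet {
  @Override
  public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    HTTPRequest out = new HTTPRequest(
        new URL("https://example.com/receive-email"), HTTPMethod.POST); // placeholder target
    out.addHeader(new HTTPHeader("Content-Type", "application/x-www-form-urlencoded"));
    String payload = "subject=" + URLEncoder.encode(req.getParameter("subject"), "UTF-8")
        + "&body=" + URLEncoder.encode(req.getParameter("body"), "UTF-8");
    out.setPayload(payload.getBytes("UTF-8"));
    URLFetchServiceFactory.getURLFetchService().fetch(out);
  }
}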
I created an App Engine backend to serve HTTP requests for a long-running process. The backend process works as expected when the query references an input of small size, but times out when the input size is large. The query parameter is the URL of an App Engine Blobstore blob, which is the input data for the backend process. I thought the whole point of using App Engine backends was to avoid the timeout restrictions that App Engine frontends have. How can I avoid the timeout?
I call the backend like this, setting the connection timeout length to infinite:
HttpURLConnection connection = (HttpURLConnection)(new URL(url + "?" + query).openConnection());
connection.setRequestProperty("Accept-Charset", charset);
connection.setRequestMethod("GET");
connection.setConnectTimeout(0);
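// Note: on App Engine this call goes out through the URL Fetch service (see the
// traceback below), which caps the request deadline regardless of how large a
// timeout is requested here.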
connection.connect();
InputStream in = connection.getInputStream();
int ch;
while ((ch = in.read()) != -1)
json = json + String.valueOf((char) ch);
System.out.println("Response Message is: " + json);
connection.disconnect();
The traceback (edited for anonymity) is:
Uncaught exception from servlet
java.net.SocketTimeoutException: Timeout while fetching URL: http://my-backend.myapp.appspot.com/somemethod?someparameter=AMIfv97IBE43y1pFaLNSKO1hAH1U4cpB45dc756FzVAyifPner8_TCJbg1pPMwMulsGnObJTgiC2I6G6CdWpSrH8TrRBO9x8BG_No26AM9LmGSkcbQZiilhC_-KGLx17mrS6QOLsUm3JFY88h8TnFNer5N6-cl0iKA
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.convertApplicationException(URLFetchServiceImpl.java:142)
at com.google.appengine.api.urlfetch.URLFetchServiceImpl.fetch(URLFetchServiceImpl.java:43)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.fetchResponse(URLFetchServiceStreamHandler.java:417)
at com.google.apphosting.utils.security.urlfetch.URLFetchServiceStreamHandler$Connection.getInputStream(URLFetchServiceStreamHandler.java:296)
at org.someorg.server.HUDXML3UploadService.doPost(SomeService.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
As you can see, I'm not getting a DeadlineExceededException, so I think something other than Google's limits is causing the timeout, which also makes this a different issue from similar Stack Overflow posts on the topic.
I humbly thank you for any insights.
Update 2/19/2012: I see what's going on, I think. The client should be able to wait indefinitely, using a GWT async handler (or any other client-side async framework) for the request to complete, so I don't think that is the problem. The problem is that the file upload calls the _ah/upload App Engine system endpoint, which then, once the blob is stored in the Blobstore, calls the upload service's doPost handler to process the blob. The client request to _ah/upload is what is timing out, because the backend doesn't return in a timely fashion. To make this timeout go away, I attempted to make the _ah/upload service itself a public backend accessible via http://backend_name.project_name.appspot.com/_ah/upload, but I don't think Google allows a system service (like _ah/upload) to be run as a backend. My next approach is to have _ah/upload return immediately after triggering the backend processing, and then call another service to get the original response I wanted once processing is finished.
The solution was to start the backend process as a task, adding it to the task queue and returning a response to the client instead of waiting for the backend task (which can take a long time) to finish. If I could have assigned _ah/upload to a backend, that would also have solved the problem, since the client's async handler could wait forever for the backend to finish, but I do not think Google permits assigning system servlets to backends. The client now has to poll for the persisted results of the backend processing, as Paul C mentioned, since tasks cannot respond like a normal servlet.
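For anyone hitting the same wall, a minimal Java sketch of that final shape is below. The upload field name, worker URL, and backend name are placeholders:

// UploadCallbackServlet.java -- handles the Blobstore upload callback, queues the heavy
// work, and returns immediately so neither /_ah/upload nor the client request times out.
import com.google.appengine.api.backends.BackendServiceFactory;
import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.List;
import java.util.Map;

public class UploadCallbackServlet extends HttpServlet {

  private final BlobstoreService blobstore = BlobstoreServiceFactory.getBlobstoreService();

  @Override
  public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    // /_ah/upload has already stored the blob; we only pick up its key here.
    Map<String, List<BlobKey>> uploads = blobstore.getUploads(req);
    BlobKey blobKey = uploads.get("file").get(0); // "file" is the hypothetical upload field name

    // Enqueue the long-running processing and route the task to the backend.
    String backendHost =
        BackendServiceFactory.getBackendService().getBackendAddress("my-backend"); // placeholder
    QueueFactory.getDefaultQueue().add(
        TaskOptions.Builder.withUrl("/tasks/process-blob") // hypothetical worker servlet
            .param("blobKey", blobKey.getKeyString())
            .header("Host", backendHost));

    // Respond right away; the client later polls another endpoint for the persisted result.
    resp.setContentType("application/json");
    resp.getWriter().write("{\"status\":\"queued\"}");
  }
}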