How to handle failed skill events? - alexa

I'm implementing skill lifecycle events using "Skill Events". Going through the docs, I can't find anything that says what I should respond with for these events. The closest I found was:
Alexa will attempt to redeliver events if an acknowledgement is not
sent by the skill service, for up to one hour. If the skill service
receives an event, and the skill service sends an acknowledgment in
response, this event must then be managed by the skill service. In
either case, the skill service cannot, at a later time, retrieve past
events from Alexa.
Source
What does that imply: an empty 200 response? What should I do if something fails? Should I return a 200 status with a formatted error, similar to Alexa's ErrorResponse?
As the skill event data schema is different from typical Alexa events, I presume the expected response is different too.

So far, just from experimenting with the responses: if I return an empty 200 response, Alexa treats the request as acknowledged and doesn't send it again.
If something fails, I respond with a 400 status and a plain-text error message; Alexa then redelivers the request later.
Also be sure to save the timestamp from the AlexaSkillEvent.SkillEnabled or AlexaSkillEvent.SkillAccountLinked request along with the user, so you can check whether redelivered events are still valid if something goes wrong.
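The acknowledgement contract described above can be sketched as a plain handler. Note this is a sketch of the observed behaviour, not an Alexa SDK API; `processEvent` is a hypothetical stand-in for your own logic:

```javascript
// An empty 200 acknowledges the event and stops redelivery;
// a non-2xx response makes Alexa retry delivery for up to one hour.
function handleSkillEvent(event, processEvent) {
  try {
    processEvent(event);
    // Empty 200 body: Alexa treats this as an acknowledgement.
    return { statusCode: 200, body: "" };
  } catch (err) {
    // Non-2xx: Alexa will redeliver the event later.
    return { statusCode: 400, body: String(err.message || err) };
  }
}
```

With this contract, a transient failure simply surfaces as a 400, and the event comes back on the next delivery attempt.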

Related

For Cloud Run triggered from PubSub, when is the right time to send ACK for the request message?

I was building a service that runs on Cloud Run that is triggered by PubSub through EventArc.
Pub/Sub guarantees at-least-once delivery, and it retries each time the acknowledgement deadline expires. This deadline is set in the subscription's settings.
There are two points at which the service could send an acknowledgement when it receives a Pub/Sub request (which arrives as a POST request):
At the beginning, as soon as the request is received. The service would then continue to process the request at its own pace. However, this article points out that:
When an application running on Cloud Run finishes handling a request, the container instance's access to CPU will be disabled or severely limited. Therefore, you should not start background threads or routines that run outside the scope of the request handlers.
So sending a response at the beginning may not be an option.
After the request has been processed by the service. Depending on what the service does, we cannot always predict how long processing will take, so we cannot set the acknowledgement deadline correctly, resulting in Pub/Sub retries and duplicate requests.
So what is the best practice here? Is there a better way to handle this?
Best practice is generally to ack a message once the processing is complete. In addition to the Cloud Run limitation you linked, consider that if the endpoint acked a message immediately upon receipt and then an error occurred in processing it, your application could lose that message.
To minimize duplicates, you can set the ack deadline to an upper bound of the processing time. (If your endpoint ends up processing messages faster than this, the ack deadline won’t rate-limit incoming messages.) If the 600s deadline is not sufficient, you could consider writing the message to some persistent storage and then acking it. Then, a separate worker can asynchronously process the messages from persistent storage.
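The "persist first, ack, then process asynchronously" pattern mentioned above can be sketched as follows. These are stand-in functions, not Pub/Sub client APIs; the `store` here is a plain array standing in for durable storage:

```javascript
// The endpoint only writes the message to durable storage and acks;
// a separate worker drains the store at its own pace, entirely
// outside the Pub/Sub ack deadline.
function receiveAndAck(message, store) {
  store.push(message);        // persist before acknowledging
  return { statusCode: 204 }; // ack: Pub/Sub will not redeliver
}

function drain(store, handler) {
  // Worker loop: process persisted messages one at a time.
  let processed = 0;
  while (store.length > 0) {
    handler(store.shift());
    processed += 1;
  }
  return processed;
}
```

In a real deployment the store would be something durable (Cloud Tasks, Firestore, a database table), since an in-memory array is lost when the instance is recycled.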
Since you are concerned that you might not be able to set the correct acknowledgement deadline, you can call modify_ack_deadline() in your code to dynamically extend the deadline while processing is still running. You can refer to this document for sample code implementations.
Be aware that the maximum acknowledgement deadline is 600 seconds, so make sure your processing in Cloud Run does not exceed that limit.
Acknowledgements do not apply to Cloud Run, because acks are for "pull subscriptions" where a process is continuously pulling the Cloud PubSub API.
To get events from PubSub into Cloud Run, you use "push subscriptions" where PubSub makes an HTTP request to Cloud Run, and waits for it to finish.
In this push scenario, Pub/Sub already knows it made a request to you (you received the event), so it does not need a separate acknowledgement of receipt. However, if your service returns a faulty response code (e.g. HTTP 500), Pub/Sub will make another request to retry (and the retry behaviour is configurable on the push subscription itself).
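In the push model, the HTTP status code itself is the acknowledgement. A minimal sketch of that contract (the handler name is an assumption; the body shape follows the documented push message format):

```javascript
// Pub/Sub push delivers the message as a POST body of the form
// { message: { data: <base64>, messageId: ... }, subscription: ... }.
// Any 2xx response counts as an ack; anything else triggers a retry.
function handlePush(body, process) {
  try {
    const payload = Buffer.from(body.message.data, 'base64').toString('utf8');
    process(payload);
    return 204; // success: Pub/Sub considers the message delivered
  } catch (err) {
    return 500; // failure: Pub/Sub will redeliver per the retry policy
  }
}
```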

Cancelling Requests in AngularJS

I'm wondering if it's possible to manage the following scenario:
I'm using Laravel as the API and AngularJS as the frontend. I have a feature that lets the user search a list of customers. The problem is that I send a request to the Laravel API for every letter the user types, and sometimes a later request gets its response back faster than an earlier one.
So the final result displayed to the user is the response to the first request, which is wrong because the user has already finished typing the entire customer name.
My objective is this:
If the user types the first letter, send a request to the API.
If the user types a second letter and the previous response has not yet been received, cancel the previous request and send a new one.
If the user types a third letter and the second request's response has already arrived, just send a new request; if it has not, cancel the pending request and send a new one.
I'm not sure if my example is clear, but have seen similar behaviour before on many websites.
I found this: Cancelling $http request, but it looks like this is a bad practice.
How can I do this? Any clue will be really appreciated. Thanks in advance!
Your requirement can be met quite easily by setting a boolean variable to true right before the API call starts and back to false when it responds; you can then use this variable to check whether a request is already in progress.
However, I think what you want is actually a little different than what you describe. It makes more sense to wait until the user stops typing before sending the request. In Angular you can do that quite easily with the debounce setting:
<input ng-model="params.q" ng-change="doApiCall()" ng-model-options="{ debounce: 500 }" type="text">
Edit: I now see that your requirement is a little different: you want to cancel the running request. There may be ways to cancel in-flight AJAX calls, but you can't rely on that in every case. What is certainly doable is to queue the calls so that each new request starts right after the previous one finishes.
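Rather than truly aborting the underlying request, a simpler guard against the out-of-order problem is to tag each request with a sequence number and drop any response that is no longer the latest. A framework-agnostic sketch (all names are made up for illustration):

```javascript
// Each call to `send` gets an increasing sequence number; `deliver`
// applies a response only if it belongs to the most recent request,
// so a slow early response can never overwrite a later one.
function createLatestOnly(onResult) {
  let latest = 0;
  return {
    send() {
      latest += 1;
      return latest; // caller associates this id with the HTTP request
    },
    deliver(id, data) {
      if (id === latest) onResult(data); // stale responses are dropped
    },
  };
}
```

In AngularJS you would call send() before each $http call and deliver() inside its .then handler; actual cancellation (via $http's timeout config) then becomes an optimization rather than a correctness requirement.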

Salesforce Outbound Messages

I used postbin.org to test the workflow outbound message service. I specified criteria like "Account Name not equal to null", but when I create a record on the Account object, I get
org.xml.sax.SAXParseException: Content is not allowed in prolog.
in the Delivery Failure Reason. I don't know how to troubleshoot this. Please let me know. Thanks in advance.
The outbound messaging feature requires that your listener send back a well-formed SOAP message as its HTTP response, indicating that it successfully processed the message. It seems unlikely that postbin.org sends that response, so outbound messaging reports a delivery failure, and the message will be retried later.
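For reference, the acknowledgement Salesforce expects is a SOAP envelope along these lines (namespace as documented for outbound messaging; confirm against the WSDL Salesforce generates for your org):

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>true</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>
```

Returning Ack as false (or anything malformed, as postbin does) causes the delivery to be treated as failed and retried.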

Google Channel API sends a message to all clients

I created a working Google Channel API setup and now I would like to send a message to all clients.
I have two servlets. The first creates the channel and tells the client its user ID and token. The second is called via an HTTP POST and should send the message.
To send a message to a client, I use:
channelService.sendMessage(new ChannelMessage(channelUserId, "This is a server message!"));
This sends the message to just one client. How can I send it to all of them?
Do I have to store every ID I used to create a channel and send the message once per ID? And how can I pass the IDs to the second servlet?
With the Channel API it is not possible to create one channel and then have many subscribers to it. The server creates a unique channel for each individual JavaScript client, so if two clients use the same client ID, the messages will be received by only one of them.
If you want to send the same message to multiple clients, in short, you will have to keep a track of active clients and send the same message to all of them.
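The "track active clients and loop over them" approach can be sketched like this. The sender is injected so the loop is framework-independent; in your servlet you would pass a lambda that calls channelService.sendMessage(new ChannelMessage(id, text)), matching the snippet in the question:

```java
import java.util.List;
import java.util.function.BiConsumer;

public class Broadcaster {
    // Sketch: clientIds holds every id you created a channel for.
    // In practice, persist these ids in the datastore or memcache so
    // both servlets can see them; an in-memory list only works on a
    // single instance.
    public static void broadcast(List<String> clientIds, String text,
                                 BiConsumer<String, String> send) {
        for (String id : clientIds) {
            // e.g. channelService.sendMessage(new ChannelMessage(id, text))
            send.accept(id, text);
        }
    }
}
```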
If that approach sounds scary and messy, consider using PubNub for your push notification messages, where you can easily create one channel and have many subscribers. To make it run on Google App Engine is not that hard, since they support almost any platform or device.
I know this is an old question, but I just finished an open source project that uses the Channel API to implement a publish/subscribe model, i.e. you can have multiple users subscribe to a single topic, and then all those subscribers will be notified when anyone publishes a message to the topic. It also has some nice features like automatic message persistence if desired, and "return receipts", where a subscriber can be notified whenever OTHER subscribers receive that message. See https://github.com/adevine/gaewebpubsub#gae-web-pubsub. Licensed under Apache 2.0 license.

Guarding against missed messages in AppEngine Channel API

In the AppEngine Channel API, channels automatically close after 2 hours. We are handling this by rejoining the channel in the onError event.
Is there a chance the messages could get missed if they are sent while the channel is reconnecting?
Our scenario: We have an appointment scheduling system where appointments are booked elsewhere through an API. We use the channel to display new appointments on the schedule as they arrive. But I'm concerned that some appointments could get missed if they are booked during the time when a channel is closed and reconnected. Does the Channel API guard against this?
A little bit of background: the "client id" in the Channel API is used to create a transient XMPP endpoint. A given client id will always map to the same transient endpoint. So when you re-connect using a token to a channel created with the same client id, you are reconnecting to the same endpoint. Because of this you might see behavior where your client gets messages sent before recreating the channel. But there are no guarantees and we don't actively queue messages when they're sent to a channel with no listening clients.
In your case, could you return an up-to-date list of appointments as part of the same response that returns a new token?
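That suggestion amounts to pairing every new token with a full snapshot, so anything missed while the channel was down is covered by the snapshot and channel messages only need to cover later changes. A sketch of the endpoint's shape (createToken and listAppointments are hypothetical stand-ins, not Channel API calls):

```javascript
// On reconnect, return both a fresh channel token and the current,
// authoritative list of appointments in one response.
function reconnect(clientId, createToken, listAppointments) {
  const token = createToken(clientId);     // new channel for this client
  const appointments = listAppointments(); // snapshot covering any gap
  return { token, appointments };
}
```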
You're not 'reconnecting' the channel, you're creating an entirely new one - so yes, messages could be missed. You should get an exception if you try to send a message to a closed channel, however.
