Alexa - response to AudioPlayer.PlaybackStarted causes error

In my Alexa skill, I'm streaming an MP3 file through the AudioPlayer directive.
As the file starts playing, I get:
{
  "type": "AudioPlayer.PlaybackStarted",
  "requestId": "requestId",
  "timestamp": "2018-02-28T13:17:54Z",
  "locale": "en-US",
  "token": "tokenstring",
  "offsetInMilliseconds": 0
}
My service does not generate a response to this event, but I receive this error immediately:
{
  "type": "System.ExceptionEncountered",
  "requestId": "requestId",
  "timestamp": "2018-02-28T13:17:55Z",
  "locale": "en-US",
  "error": {
    "type": "INVALID_RESPONSE",
    "message": "An exception occurred while dispatching the request to the skill."
  },
  "cause": {
    "requestId": "amzn1.echo-api.request.8492b40e-1698-409f-8bed-61dc1f3de663"
  }
}
The documentation says I don't have to respond to this event, but is there something mandatory I need to send back to Alexa? Maybe an HTTP status code?

I found your question while looking for the same answer myself.
I also found some notes in the Amazon Developer forums that a change made some time back now requires a response, but that the documentation hadn't been fully updated...
I've added the snippet below and it's cleared up the issue for me.
if (event.request.type == 'AudioPlayer.PlaybackStarted' || event.request.type == 'AudioPlayer.PlaybackStopped') {
  response = {
    "version": "1.0",
    "response": {
      "shouldEndSession": true
    }
  };
}
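For context, here is a minimal sketch of how that check might sit inside a classic Node.js Lambda handler. The handler shape and callback usage are assumptions on my part, not something from your post:

// Hypothetical Lambda entry point - names are illustrative only.
exports.handler = function (event, context, callback) {
  if (event.request.type === 'AudioPlayer.PlaybackStarted' ||
      event.request.type === 'AudioPlayer.PlaybackStopped') {
    // Acknowledge the event with an empty response so Alexa
    // doesn't report INVALID_RESPONSE.
    callback(null, {
      "version": "1.0",
      "response": {
        "shouldEndSession": true
      }
    });
    return;
  }
  // ...handle LaunchRequest, IntentRequest, etc. as usual
};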
Hope that helps.


Reading a message from Service Bus

I have a logic app that is triggered when there is a message in a Service Bus queue. The message is published to the Service Bus from a DevOps pipeline using the "PublishToAzureServiceBus" task as a JSON message, or from the pipeline webhook.
But I'm having an issue converting the message from the Service Bus back to the original JSON format; I'm not able to get a valid JSON object. It arrives with some serialization content prepended.
I have tried base64-decoding and then converting to JSON, but without success.
Below is what the content of the message looks like.
Any pointers on how to solve this?
Sample message sent
{
  "id": "76a187f3-c154-4e60-b8bc-c0b754e54191",
  "eventType": "build.complete",
  "publisherId": "tfs",
  "message": {
    "text": "Build 20220605.8 succeeded"
  },
  "detailedMessage": {
    "text": "Build 20220605.8 succeeded"
  },
  "resource": {
    "uri": "vstfs:///Build/Build/288",
    "id": 288,
    "buildNumber": "20220605.8",
    "url": "https://dev.azure.com/*******/_apis/build/Builds/288",
    "startTime": "2022-06-05T14:47:01.1846966Z",
    "finishTime": "2022-06-05T14:47:16.7602096Z",
    "reason": "manual",
    "status": "succeeded",
    "drop": {},
    "log": {},
    "sourceGetVersion": "LG:refs/heads/main:********",
    "lastChangedBy": {
      "displayName": "Microsoft.VisualStudio.Services.TFS",
      "id": "00000000-0000-0000-0000-000000000000",
      "uniqueName": "***************"
    },
    "retainIndefinitely": false,
    "definition": {
      "definitionType": "xaml",
      "id": 20,
      "name": "getReleaseFile",
      "url": "https://dev.azure.com/************/_apis/build/Definitions/20"
    },
    "requests": [
      {
        "id": 288,
        "url": "https://dev.azure.com/B*****/**********/_apis/build/Requests/288",
        "requestedFor": {
          "displayName": "B*****.sag",
          "id": "*******",
          "uniqueName": "B**********"
        }
      }
    ]
  },
  "resourceVersion": "1.0",
  "resourceContainers": {
    "collection": {
      "id": "*******",
      "baseUrl": "https://dev.azure.com/B*****/"
    },
    "account": {
      "id": "******",
      "baseUrl": "https://dev.azure.com/B*****/"
    },
    "project": {
      "id": "**********",
      "baseUrl": "https://dev.azure.com/B*****/"
    }
  },
  "createdDate": "2022-06-05T14:47:28.6089499Z"
}
Message received
#string3http://schemas.microsoft.com/2003/10/Serialization/�q{"id":"****","eventType":"build.complete","publisherId":"tfs","message":{"text":"Build 20220605.8 succeeded"},"detailedMessage":{"text":"Build 20220605.8 succeeded"},"resource":{"uri":"vstfs:///Build/Build/288","id":288,"buildNumber":"20220605.8","url":"https://dev.azure.com/*****/********/_apis/build/Builds/288","startTime":"2022-06-05T14:47:01.1846966Z","finishTime":"2022-06-05T14:47:16.7602096Z","reason":"manual","status":"succeeded","drop":{},"log":{},"sourceGetVersion":"LG:refs/heads/main:f0b1a1d2bd047454066cf21dc4d4c710bca4e1d7","lastChangedBy":{"displayName":"Microsoft.VisualStudio.Services.TFS","id":"00000000-0000-0000-0000-000000000000","uniqueName":"******"},"retainIndefinitely":false,"definition":{"definitionType":"xaml","id":20,"name":"getReleaseFile","url":"https://dev.azure.com/******/_apis/build/Definitions/20"},"requests":[{"id":288,"url":"https://dev.azure.com/*****/******/_apis/build/Requests/288","requestedFor":{"displayName":"baharul.sag","id":"******","uniqueName":"baharul.*****"}}]},"resourceVersion":"1.0","resourceContainers":{"collection":{"id":"3*****","baseUrl":"https://dev.azure.com/*****/"},"account":{"id":"******","baseUrl":"https://dev.azure.com/*****/"},"project":{"id":"*******","baseUrl":"https://dev.azure.com/*****/"}},"createdDate":"2022-06-05T14:47:28.6089499Z"}
When reading the message from the Service Bus in peek mode, I can see that <#string3http://schemas.microsoft.com/2003/10/Serialization/��> is prepended to the JSON string.
Publish using PublishToAzureServiceBus from Azure pipeline.
Publish from Azure DevOps project webhook
I believe what is happening is that you have two different types of serialisation in the body of the brokered message created by the PublishToAzureServiceBus task. This is because the brokered message only supports binary content.
So the JSON is initially serialised as a binary string using the data contract serialiser.
How do you solve this? Do the following before passing the body to your JSON deserialiser - unfortunately the logic app isn't doing this:
// Read the raw body of the brokered message as bytes rather than as a string
byte[] messageContent = brokeredMessage.GetBody<byte[]>();
// Decode the bytes as UTF-8 to recover the original JSON text
string messageContentStr = Encoding.UTF8.GetString(messageContent);
I probably wouldn't use a logic app to do the reading of the message, because to insert C# like I suggest you're going to need to call an Azure Function or similar. I'd create an Azure Function to read your messages as above.
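If you'd rather not use C#, the same idea can be sketched as a queue-triggered Azure Function in JavaScript. This is only a sketch under an assumption taken from the message shown above: that the usable JSON sits between the first '{' and the last '}' of the DataContract-serialised body. The binding name mySbMsg is illustrative.

// Hypothetical Service Bus queue-triggered Azure Function (JavaScript).
module.exports = async function (context, mySbMsg) {
  // The body may arrive as a Buffer or a string depending on how it was published.
  const raw = Buffer.isBuffer(mySbMsg) ? mySbMsg.toString('utf8') : String(mySbMsg);
  // Assumption: strip the serialization preamble by keeping only the outermost {...}.
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  const payload = JSON.parse(raw.substring(start, end + 1));
  context.log('Build status:', payload.resource.status);
};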

Alexa Smart Home "Failed to Retrieve State"

I am playing with a sample Alexa Smart Home skill - I am not talking to any real hardware or back-end, just trying to get the message flow working. I have set up a simple switch/plug/light that only supports turning On/Off, and I have account linking working and the skill enabled. When I look at it via the Alexa app on my phone or the web (with debug enabled), it always says the device isn't responding, or "Failed to Retrieve State". I can definitely see the messages in CloudWatch, as follows.
Any idea why I keep getting such a response?
Request:
"directive": {
"endpoint": {
"cookie": {},
"endpointId": "endpoint-003",
"scope": {
"token": "<<<SUPRESSING>>",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "<<SHORTENED>>",
"messageId": "50397414-bb9d-412f-8a2c-15669978ab64",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Response:
{
  "context": {
    "properties": [
      {
        "name": "connectivity",
        "namespace": "Alexa.EndpointHealth",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "OK"
      },
      {
        "name": "powerState",
        "namespace": "Alexa.PowerController",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "ON"
      }
    ]
  },
  "event": {
    "endpoint": {
      "endpointId": "endpoint-003",
      "scope": {
        "token": "Alexa-access-token",
        "type": "BearerToken"
      }
    },
    "header": {
      "correlationToken": "<<SHORTENED>>",
      "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
      "name": "Response",
      "namespace": "Alexa",
      "payloadVersion": "3"
    },
    "payload": {}
  }
}
Problem was: the "name" in my response header should have been "StateReport". "Response" is only used for things that set/change values.
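For reference, a sketch of the corrected header only (all other fields unchanged from the response above):
"header": {
  "correlationToken": "<<SHORTENED>>",
  "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
  "name": "StateReport",
  "namespace": "Alexa",
  "payloadVersion": "3"
}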
My general advice is to always verify that THREE things are good:
Initial "Discovery"
"Response" messages
General "ReportState" queries.
By this I mean:
Anything you advertised in "discovery" had better be reported in the other ("ReportState") messages. If you advertise a "PowerController" and your ReportStates don't contain a status for it, you'll either not see the status, or it'll keep retrying forever (continuing to look for it), or you might get some sort of error.
If you CHANGED your discovery payload, make sure that you really removed and re-discovered the devices, and that the states (above) for the new additions/removals are okay.
Always make sure that "EndpointHealth" is being reported.

Google smart home actions: implementing an offline response

I have implemented Google smart home actions for a plug device.
I can control the plug on/off with no problems, but when I implement the offline status response for EXECUTE and QUERY, it seems Google Assistant never gets the status; it always says "OK, turning on/off the plug". My response JSON from the Cloud Function log is below:
Execute:
{
  "requestId": "847886417406301663",
  "payload": {
    "commands": {
      "ids": [
        "T90197200015"
      ],
      "status": "ERROR",
      "errorCode": "deviceOffline",
      "online": false
    }
  }
}
Query:
{
  "requestId": "11887439270473779795",
  "payload": {
    "devices": {
      "T90197200015": {
        "errorCode": "deviceOffline",
        "status": "ERROR",
        "online": false
      }
    }
  }
}
Is the format correct? Why doesn't Google Assistant get the offline status?
BR,
Jack
Yes, that format is correct. The device may be generally offline, but sending "status": "ERROR" and an errorCode applies more contextually to the specific request and response.

Unable to debug "There was a problem with the requested skill's response"

Background: I am working through a flashcard Skill that enables Alexa to ask basic questions about a programming language. The user can choose between Ruby, Python or JS.
The progression goes:
LaunchRequest welcomes the user, then asks for their language preference
User responds, causing SetLanguageIntent to trigger
A question is then asked of the user
However, I am unable to get past the SetLanguageIntent without encountering "There was a problem with the requested skill's response".
Here is the dialogue:
As one can see from the response, SetLanguageIntent is activated properly with the slot ruby also matching correctly.
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.743f750e-96d9-4ef9-aeba-e0aec2e45afb",
"timestamp": "2018-09-12T13:35:25Z",
"locale": "en-US",
"intent": {
"name": "SetMyLanguageIntent",
"confirmationStatus": "NONE",
"slots": {
"language": {
"name": "language",
"value": "ruby",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "amzn1.er-authority.echo-sdk.amzn1.ask.skill.ee34487d-d343-4deb-ab6c-193777c92aa8.languages",
"status": {
"code": "ER_SUCCESS_MATCH"
},
"values": [
{
"value": {
"name": "ruby",
"id": "58e53d1324eef6265fdb97b08ed9aadf"
}
}
]
}
]
},
"confirmationStatus": "NONE"
}
}
}
However, at this point the error message "There was a problem with the requested skill's response" always appears. There are no errors reported on CloudWatch Logs.
For reference, here is the SetLanguageIntent code. As noted in the comment, the test "Okay" should at least have been said. However, it does not get executed.
'SetMyLanguageIntent': function() {
  this.response.speak('Okay'); // this should at least have been said
  this.attributes.flashcards.currentLanguage = this.event.request.intent.slots.languages.value;
  var currentLanguage = this.attributes.flashcards.currentLanguage;
  this.response
    .speak('Okay, I will ask you some questions about ' +
      currentLanguage + '. Here is your first question. ' +
      AskQuestion(this.attributes))
    .listen(AskQuestion(this.attributes));
  this.emit(':responseReady');
},
Any help is much appreciated!
Edit: updated with slot names
I see a problem with your code, and it is probably why you're having the issue. You are trying to access an undefined property: this.event.request.intent.slots.languages.value. The mistake is the word languages; it should be language.
So the way to access the slot value should be:
this.event.request.intent.slots.language.value
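Applied to the handler above, a corrected version might look like the sketch below (the standalone speak('Okay') test line is dropped, since only the final speak/listen pair is emitted):

'SetMyLanguageIntent': function() {
  // Read the slot by its declared name, "language" (not "languages")
  this.attributes.flashcards.currentLanguage = this.event.request.intent.slots.language.value;
  var currentLanguage = this.attributes.flashcards.currentLanguage;
  this.response
    .speak('Okay, I will ask you some questions about ' +
      currentLanguage + '. Here is your first question. ' +
      AskQuestion(this.attributes))
    .listen(AskQuestion(this.attributes));
  this.emit(':responseReady');
},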
I fixed the same problem for myself by simply adding a cardRenderer() to my response, like this:
this.response.speak("my text").cardRenderer("title", "some content", "image");
In the test simulator's device log, I noticed the card renderer showing up where it said the skill responded with a failure. So I took a shot and it worked. Hopefully this fixes it for you too!

Alexa skill Rest API

Can we use a REST API instead of Lambda? I'm asking because we have the request, we know what Alexa accepts as a response, and we know that it is a POST, so we could connect all of these into a REST API. The whole project is based on Jax-RS, so we want to have it all in one place, without using Lambda or anything else. Not that Lambda isn't great.
So the request that Alexa passes to Lambda is:
{
  "session": {
    "sessionId": "SessionId.a82f0b92-3650-4d45-8f12-e030ffc10894",
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.8f35038e-13ac-4327-8e4f-e5df52dc1432"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.AFP3ZWPOS2BGJR7OWJZ3DHPKMOMNWY4AY66FUR7ILBWANIHQN73QGGUEQZ7YXOLC7NYVD3JPUAHAGUS4ZFXJ6ZMS4EHO2CJFPWFLWLYZLDP7S227ADI54A2ZMLZLDO5CXSIB47ELNY54S2M7FDNJFHTSU67B7HB3UZUN6OUUR5BYS3UBRSIPBG4IWRLHUN36NXDYBWUM3NMQZRA"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.bfdb3c27-028b-4224-977a-558129808e9a",
    "timestamp": "2016-07-11T17:52:55Z",
    "intent": {
      "name": "HelloWorldIntent",
      "slots": {}
    },
    "locale": "en-US"
  },
  "version": "1.0"
}
Response:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Hello World!"
    },
    "card": {
      "content": "Hello World!",
      "title": "Greeter",
      "type": "Simple"
    },
    "shouldEndSession": true
  },
  "sessionAttributes": {}
}
Sure you can. In fact, when you are creating your skill in the Alexa Developer Portal, you have that option. The caveat is that you will need to manage your own TLS certificate and will have to make sure that the latency/responsiveness is decent based on the location of your users.
If you would like to explore this further, you can use Amazon's Java code examples. They can be found at: https://github.com/amzn/alexa-skills-kit-java.
You can definitely set up a RESTful service API for use with Alexa.
And, if you set it up in Azure, you don't even need to create your own certificate.
You can use a REST API as the endpoint for Alexa skills. The APIs will be invoked in the following manner:
[Configured_URL]/alexa/[intent]
where [Configured_URL] is the URL endpoint configured on the Amazon site for invocation and [intent] is the name of the intent.
You should host your service accordingly.
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-an-alexa-skill-as-a-web-service
https://iwritecrappycode.wordpress.com/2016/04/01/create-an-alexa-skill-in-node-js-and-hosting-it-on-heroku/
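For illustration, here is a minimal sketch of a self-hosted endpoint using Node.js and Express (an assumption on my part; a Jax-RS resource would follow the same request/response shape). A real skill endpoint must also validate the Alexa request signature and timestamp, which is omitted here:

// Hypothetical self-hosted Alexa endpoint; HTTPS and request verification omitted.
const express = require('express');
const app = express();
app.use(express.json());

// Alexa sends every request for the skill as a POST to the configured URL.
app.post('/alexa', (req, res) => {
  res.json({
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: 'Hello World!' },
      card: { content: 'Hello World!', title: 'Greeter', type: 'Simple' },
      shouldEndSession: true
    },
    sessionAttributes: {}
  });
});

app.listen(3000);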
