Alexa skill REST API

Can we use a REST API instead of Lambda? I'm asking because we receive the request, we know what Alexa accepts as a response, and we know it arrives as a POST, so we should be able to wire all of that into a REST API. The whole project is based on JAX-RS, so we want to have it all in one place, without using Lambda or anything else. Not that Lambda isn't great.
The request that Alexa passes to Lambda is:
{
  "session": {
    "sessionId": "SessionId.a82f0b92-3650-4d45-8f12-e030ffc10894",
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.8f35038e-13ac-4327-8e4f-e5df52dc1432"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.AFP3ZWPOS2BGJR7OWJZ3DHPKMOMNWY4AY66FUR7ILBWANIHQN73QGGUEQZ7YXOLC7NYVD3JPUAHAGUS4ZFXJ6ZMS4EHO2CJFPWFLWLYZLDP7S227ADI54A2ZMLZLDO5CXSIB47ELNY54S2M7FDNJFHTSU67B7HB3UZUN6OUUR5BYS3UBRSIPBG4IWRLHUN36NXDYBWUM3NMQZRA"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.bfdb3c27-028b-4224-977a-558129808e9a",
    "timestamp": "2016-07-11T17:52:55Z",
    "intent": {
      "name": "HelloWorldIntent",
      "slots": {}
    },
    "locale": "en-US"
  },
  "version": "1.0"
}
Response:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Hello World!"
    },
    "card": {
      "content": "Hello World!",
      "title": "Greeter",
      "type": "Simple"
    },
    "shouldEndSession": true
  },
  "sessionAttributes": {}
}

Sure you can. In fact, when you are creating your skill in the Alexa Developer Portal, you have that option. The caveat is that you will need to manage your own TLS certificate and will have to make sure that the latency/responsiveness is decent based on the location of your users.
If you would like to explore this further, you can use Amazon's Java code examples. They can be found at: https://github.com/amzn/alexa-skills-kit-java.
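For a JAX-RS project like the one described in the question, the wiring could look roughly like the sketch below. This is only an illustration: the class and path names are made up, it assumes a Jackson JSON provider is registered so JsonNode can be used as the entity type, and it omits the Signature/SignatureCertChainUrl certificate validation and timestamp checks that Amazon requires before a self-hosted skill can be certified.

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Hypothetical endpoint: configure https://your-host/alexa as the skill's
// HTTPS endpoint in the developer portal.
@Path("/alexa")
public class AlexaSkillResource {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public JsonNode handle(JsonNode envelope) {
        // NOTE: a real skill must first verify the Signature /
        // SignatureCertChainUrl headers and the request timestamp.
        String type = envelope.path("request").path("type").asText();

        ObjectNode out = MAPPER.createObjectNode();
        out.put("version", "1.0");
        ObjectNode response = out.putObject("response");

        if ("IntentRequest".equals(type) || "LaunchRequest".equals(type)) {
            ObjectNode speech = response.putObject("outputSpeech");
            speech.put("type", "PlainText");
            speech.put("text", "Hello World!");
        }
        response.put("shouldEndSession", true);
        out.putObject("sessionAttributes");
        return out;
    }
}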

You can definitely set up a RESTful service API for use with Alexa.
And, if you set it up in Azure, you don't even need to create your own certificate.

You can use a REST API as the endpoint for an Alexa skill. The APIs will be invoked in the following manner:
[Configured_URL]/alexa/[intent]
where [Configured_URL] is the URL endpoint configured on the Amazon site for invocation
and [intent] is the name of the intent.
You should host your service accordingly; a JAX-RS sketch of this routing follows the links below.
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-an-alexa-skill-as-a-web-service
https://iwritecrappycode.wordpress.com/2016/04/01/create-an-alexa-skill-in-node-js-and-hosting-it-on-heroku/
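If your hosting framework really does route one URL per intent as described above, the JAX-RS mapping could look like this sketch. Names are illustrative only; note that a stock Alexa custom skill POSTs every request to the single configured endpoint and carries the intent name in the JSON body, so per-intent paths are a convention of the hosting framework rather than something Alexa itself requires.

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical per-intent routing, mirroring [Configured_URL]/alexa/[intent].
@Path("/alexa/{intent}")
public class IntentRouter {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public String route(@PathParam("intent") String intent, String requestBody) {
        // The raw JSON envelope arrives in requestBody; dispatch on the
        // intent name taken from the URL.
        if ("HelloWorldIntent".equals(intent)) {
            return "{\"version\":\"1.0\",\"response\":{"
                 + "\"outputSpeech\":{\"type\":\"PlainText\",\"text\":\"Hello World!\"},"
                 + "\"shouldEndSession\":true}}";
        }
        return "{\"version\":\"1.0\",\"response\":{\"shouldEndSession\":true}}";
    }
}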

Related

Locale ignored in APLA Alexa Developer Console

I'm new to developing skills with Alexa. I've followed the Build Multi-turn Skills with Alexa Conversations tutorial up to module 3.
Because I want to develop a skill only for German users, I've changed the language settings of my skill in the Alexa developer console to support only German.
I changed the APLA code from the tutorial's "edit audio response" step to this:
{
  "type": "APLA",
  "version": "0.8",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Selector",
      "strategy": "randomItem",
      "items": [
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen bei meiner App"
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen."
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'en-US'}",
          "content": "Welcome."
        }
      ]
    }
  }
}
At the bottom of the console I see that my locale is set to German, but when I preview the APLA above, the audio player always says "Welcome." with the English voice; the other two options are never triggered. What am I missing here?
The audio response tool doesn't take into account the language of the website.
There is no way to test the environment.alexaLocale condition in this tool.
To test it, update the code of your skill and test it either on the Test tab of your skill in the developer console or directly on a real device. I just tested with your code: it works perfectly. Just not in the audio tool.

Why is Google App Engine throwing access forbidden errors?

Could really use some help here. I have a GAE NodeJS app in the standard environment. Until a few days ago (09/23) it was running just fine; it would respond to requests as expected, etc.
Today, the app responds with 403s when I try to make any request to my appspot URL. I'm 100% certain this is not a code issue, because if I deploy the same code to GAE in another project, it works fine. Furthermore, the only firewall rule is a wildcard that allows all traffic.
Edit: adding the only relevant-looking log entry I see from the project:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {},
    "authenticationInfo": {
      "principalEmail": "address@domain.com"
    },
    "requestMetadata": {
      "callerIp": "x.x.x.x",
      "requestAttributes": {
        "time": "2021-09-23T15:04:05.198927Z",
        "auth": {}
      },
      "destinationAttributes": {}
    },
    "serviceName": "appengine.googleapis.com",
    "methodName": "google.appengine.v1.Services.UpdateService",
    "authorizationInfo": [
      {
        "resource": "apps/my-google-cloud-project-id/services/default",
        "permission": "appengine.services.update",
        "granted": true,
        "resourceAttributes": {}
      }
    ],
    "resourceName": "apps/my-google-cloud-project-id/services/default",
    "serviceData": {
      "@type": "type.googleapis.com/google.appengine.v1.AuditData",
      "updateService": {
        "request": {
          "name": "apps/my-google-cloud-project-id/services/default",
          "service": {
            "networkSettings": {
              "ingressTrafficAllowed": "INGRESS_TRAFFIC_ALLOWED_INTERNAL_AND_LB"
            }
          },
          "updateMask": "networkSettings"
        }
      }
    },
    "resourceLocation": {
      "currentLocations": [
        "us-east1"
      ]
    }
  },
  "insertId": "an-id",
  "resource": {
    "type": "gae_app",
    "labels": {
      "project_id": "my-google-cloud-project-id",
      "zone": "",
      "module_id": "default",
      "version_id": ""
    }
  },
  "timestamp": "2021-09-23T15:04:05.131761Z",
  "severity": "NOTICE",
  "logName": "projects/my-google-cloud-project-id/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "some-operation-uuid",
    "producer": "appengine.googleapis.com/admin",
    "first": true
  },
  "receiveTimestamp": "2021-09-23T15:04:05.495890906Z"
}
I don't recall making this change, and I'm not sure what the ingressTrafficAllowed value was before.
Somehow the ingress setting on the GAE service got changed. I believe that issue was fixed by going to GCP console > App Engine > Services > selecting the affected service(s) > Edit ingress setting at the top, and choosing the appropriate value (the equivalent CLI is gcloud app services update <service> --ingress=all).
I say I believe this fixed the issue because I was still getting 403s on my appspot URL after doing it, and ultimately I ended up deleting and re-creating the project from scratch, which got everything working again. Clearly there was some misconfiguration somewhere in my project, but GCP does not make it easy to diagnose what the issue might be.

Alexa Smart Home "Failed to Retrieve State"

I am playing with a sample Alexa Smart Home skill - I am not talking to any real hardware or back-end, just trying to get the message flow working. I have set up a simple switch/plug/light that supports just turning on/off, and I have account linking working and the skill enabled. When I look at it via the Alexa app on my phone or the web (with debug enabled), it always says the device isn't responding, or "Failed to Retrieve State". I can definitely see the messages in CloudWatch, as follows.
Any idea why I'd consistently be getting such a response?
Request:
"directive": {
"endpoint": {
"cookie": {},
"endpointId": "endpoint-003",
"scope": {
"token": "<<<SUPRESSING>>",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "<<SHORTENED>>",
"messageId": "50397414-bb9d-412f-8a2c-15669978ab64",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Response:
{
  "context": {
    "properties": [
      {
        "name": "connectivity",
        "namespace": "Alexa.EndpointHealth",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "OK"
      },
      {
        "name": "powerState",
        "namespace": "Alexa.PowerController",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "ON"
      }
    ]
  },
  "event": {
    "endpoint": {
      "endpointId": "endpoint-003",
      "scope": {
        "token": "Alexa-access-token",
        "type": "BearerToken"
      }
    },
    "header": {
      "correlationToken": "<<SHORTENED>>",
      "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
      "name": "Response",
      "namespace": "Alexa",
      "payloadVersion": "3"
    },
    "payload": {}
  }
}
Problem was: the "name" in my header response should have been "StateReport" (the event that answers a ReportState directive). "Response" is only used for things that set/change values.
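For illustration, a corrected reply header might be assembled like the following sketch (Jackson-based; everything except the namespace, name, and payloadVersion values is a placeholder):

import java.util.UUID;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class StateReportHeader {
    public static ObjectNode build(String correlationToken) {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode header = mapper.createObjectNode();
        header.put("namespace", "Alexa");
        // "StateReport", not "Response": "Response" is reserved for
        // directives that actually set or change values.
        header.put("name", "StateReport");
        header.put("payloadVersion", "3");
        header.put("messageId", UUID.randomUUID().toString());
        // Echo back the correlationToken from the incoming ReportState directive.
        header.put("correlationToken", correlationToken);
        return header;
    }
}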
My general advice is to always verify that THREE things are good:
Initial "Discovery"
"Response" messages
General "ReportState" queries.
By this - I mean that:
Anything you advertised in "discovery" had better be reported in your other ("ReportState") messages. If you advertise a "PowerController" and your StateReports don't contain a status for it, you'll either not see the status, or Alexa will keep retrying forever (continuing to look for it), or you might get some sort of error.
If you CHANGED your discovery payload, make sure that you really removed and re-discovered the devices, and that the states (above) for the new additions/removals are okay.
Always make sure that "EndpointHealth" is being reported.

IBM Cloud Functions action took too long to respond in IBM Watson chat dialog

Hi, I am creating a chatbot. I developed an IBM Cloud Functions action in IBM Cloud.
This is the dialog node JSON that calls the action:
{
  "context": {
    "my_creds": {
      "user": "ssssssssssssssssss",
      "password": "sssssssssssssssssssssss"
    }
  },
  "output": {
    "generic": [
      {
        "values": [
          {
            "text": ""
          }
        ],
        "response_type": "text",
        "selection_policy": "sequential"
      }
    ]
  },
  "actions": [
    {
      "name": "ssssssssssss/user-detail",
      "type": "server",
      "parameters": {
        "name": "<?input.text?>",
        "lastname": "<?input.text?>"
      },
      "credentials": "$my_creds",
      "result_variable": "$my_result"
    }
  ]
}
My user-detail action gives a response when I invoke it directly.
But when I check the output with my chatbot, I get "execution of cloud functions action took too long".
There is currently a 5 second limitation on processing time for a cloud function being called from a dialog node. If your process will need longer than this, you'll need to do it client side through your application layer.
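As a minimal sketch of the client-side approach, using Java's built-in HTTP client: rather than letting the dialog node call the function, the application layer detects when the long-running lookup is needed and invokes the action itself, where the 5-second cap does not apply. The URL follows the usual IBM Cloud Functions (OpenWhisk) REST convention; the region, namespace, action name, credentials, and parameter values are all placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.Base64;

public class UserDetailClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint in the usual OpenWhisk REST form:
        // POST /api/v1/namespaces/{namespace}/actions/{action}?blocking=true&result=true
        String url = "https://us-south.functions.cloud.ibm.com/api/v1/namespaces"
                   + "/my-namespace/actions/user-detail?blocking=true&result=true";
        String auth = Base64.getEncoder().encodeToString(
                "apikey:PLACEHOLDER".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                // No 5-second cap here; pick a timeout your app can tolerate.
                .timeout(Duration.ofSeconds(30))
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"name\":\"john\",\"lastname\":\"doe\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}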

Alexa - response to AudioPlayer.PlaybackStarted causes error

In my Alexa skill, I'm playing an MP3 file through an AudioPlayer directive.
As the file starts playing, I get:
{
  "type": "AudioPlayer.PlaybackStarted",
  "requestId": "requestId",
  "timestamp": "2018-02-28T13:17:54Z",
  "locale": "en-US",
  "token": "tokenstring",
  "offsetInMilliseconds": 0
}
My service does not generate a response to this event, but I receive this error immediately:
{
  "type": "System.ExceptionEncountered",
  "requestId": "requestId",
  "timestamp": "2018-02-28T13:17:55Z",
  "locale": "en-US",
  "error": {
    "type": "INVALID_RESPONSE",
    "message": "An exception occurred while dispatching the request to the skill."
  },
  "cause": {
    "requestId": "amzn1.echo-api.request.8492b40e-1698-409f-8bed-61dc1f3de663"
  }
}
The documentation says I don't have to respond to this event, but is there something mandatory I need to send back to Alexa? Maybe an HTTP status?
I found your question while looking for the same answer myself.
I also found some notes in the Amazon developer forums that changes were made some time back to require a response, but that the documentation hadn't been fully updated.
I've added the code below and it has cleared up the issue for me.
if (event.request.type == 'AudioPlayer.PlaybackStarted' ||
    event.request.type == 'AudioPlayer.PlaybackStopped') {
    // An empty-but-valid response body is enough to acknowledge these events.
    response = {
        "version": "1.0",
        "response": {
            "shouldEndSession": true
        }
    };
}
Hope that helps.
