Locale ignored in APLA Alexa Developer Console - alexa

I'm new to developing skills with Alexa. I've followed the Build Multi-turn Skills with Alexa Conversations tutorial up to module 3.
Because I want to develop a skill only for German users, I changed the language settings of my skill in the Alexa developer console so that it only supports German.
I changed the APLA code from the tutorial in the "edit audio response" tool to this:
{
  "type": "APLA",
  "version": "0.8",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Selector",
      "strategy": "randomItem",
      "items": [
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen bei meiner App"
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen."
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'en-US'}",
          "content": "Welcome."
        }
      ]
    }
  }
}
At the bottom of the console I can see that my locale is set to German, but when I preview the APLA document above, the audio player always says "Welcome." in the English voice; the two German options are never triggered. What am I missing here?

The audio response tool doesn't take the locale of the website into account, and there is no way to test the environment.alexaLocale condition in this tool.
To test it, update the code of your skill and try it either on the Test tab of your skill in the developer console or directly on a real device. I just tested with your code and it works perfectly, just not in the audio tool.
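If you want to exercise the locale condition end to end, a minimal handler sketch could return the document from your skill code (shown here with ask-sdk-core for Node.js; the handler name and the welcome-apla.json file are placeholders, not part of the tutorial code):

```typescript
// Minimal sketch: return the APLA document above from a LaunchRequest handler
// so the ${environment.alexaLocale} conditions are evaluated against the
// locale of the device or Test tab session.
import * as Alexa from 'ask-sdk-core';

const welcomeDoc = require('./welcome-apla.json'); // the document shown above

const LaunchRequestHandler = {
  canHandle(handlerInput: Alexa.HandlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput: Alexa.HandlerInput) {
    return handlerInput.responseBuilder
      .addDirective({
        type: 'Alexa.Presentation.APLA.RenderDocument',
        token: 'welcome-token',
        document: welcomeDoc,
        datasources: {},
      })
      .getResponse();
  },
};
```

With the skill locale set to German, a request from the Test tab or a German device carries "de-DE", so one of the two German Speech items is selected.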

Related

How to play 2 audio files in one response

I want to play a personal greeting (mp3) in my Alexa skill's launch intent and start an audio stream directly after the first mp3 is finished. I tried it with the responseBuilder and addAudioPlayerPlayDirective, but you can only add one directive to a response, and I have 2 files to play one after another. Does anyone have an idea how to solve this?
AudioPlayer is better suited to long-form audio like meditations or songs. Once your skill starts the audio player, the custom skill session ends and your users can't do what they could normally do while in your skill.
It sounds more like you just want to play some shorter audio clips to greet your users. If that's the case and your audio files meet the requirements, APL for Audio (APLA) may be a better solution for you.
Here's an example directive your skill can include in its response to play two audio files, one after another, then have Alexa say something.
{
  "type": "Alexa.Presentation.APLA.RenderDocument",
  "token": "developer-provided-string",
  "document": {
    "type": "APLA",
    "version": "0.91",
    "mainTemplate": {
      "parameters": [
        "payload"
      ],
      "item": {
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "soundbank://soundlibrary/ui/gameshow/amzn_ui_sfx_gameshow_intro_01",
            "filters": [
              {
                "type": "Volume",
                "amount": "20%"
              },
              {
                "type": "FadeIn",
                "duration": 1000
              }
            ]
          },
          {
            "type": "Audio",
            "source": "soundbank://soundlibrary/alarms/beeps_and_bloops/bell_01"
          },
          {
            "type": "Speech",
            "content": "Hello world!"
          }
        ]
      }
    }
  }
}
If what you really need is the AudioPlayer, have your skill issue a directive to start the first clip. Then add a handler to your skill that captures the PlaybackNearlyFinishedRequest event and returns another Play directive to queue up the next audio clip.
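For completeness, here is a rough sketch of such a handler with ask-sdk-core; the stream URLs and tokens are placeholders:

```typescript
// Sketch: when the first clip is nearly finished, enqueue the second one.
import * as Alexa from 'ask-sdk-core';

const PlaybackNearlyFinishedHandler = {
  canHandle(handlerInput: Alexa.HandlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope)
      === 'AudioPlayer.PlaybackNearlyFinished';
  },
  handle(handlerInput: Alexa.HandlerInput) {
    // expectedPreviousToken must match the token of the clip currently playing.
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective(
        'ENQUEUE',                                   // play behavior
        'https://example.com/audio/second-clip.mp3', // placeholder stream URL
        'second-clip-token',                         // token for the new clip
        0,                                           // offset in milliseconds
        'first-clip-token'                           // token of the current clip
      )
      .getResponse();
  },
};
```

Note that AudioPlayer streams must be served over HTTPS from a publicly reachable host.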

I tried inserting a document in MongoDB but got "Insert not permitted while document contains errors"

I tried inserting a document in MongoDB, but I received an error saying "Insert not permitted while document contains errors", and I still can't find where the error is in my code. Please help.
I don't know if there's a new version of MongoDB, or perhaps a new way of writing code to insert documents, but I've tried everything I can to locate the error in this code and still couldn't.
I also checked whether I used the curly brackets or square brackets incorrectly; it looks fine to me, but I'm not sure. Hopefully someone can help me check it out.
[
{
"images": [
{
"public_id": "nextjs_media/pb8fnxyickqqe9krov82",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605263280/nextjs_media/pb8fnxyickqqe9krov82.jpg"
},
{
"public_id": "nextjs_media/irfwxjz56x4xa6pdwoks",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605263281/nextjs_media/irfwxjz56x4xa6pdwoks.jpg"
}
],
"checked": false,
"inStock": 500,
"sold": 0,
"title": "animal",
"price": 5,
"description": "How to and tutorial videos of cool CSS effect, Web Design ideas,JavaScript libraries, Node.",
"content": "Welcome to our channel Dev AT. Here you can learn web designing, UI/UX designing, html css tutorials, css animations and css effects, javascript and jquery tutorials and related so on.",
"category": "5faa35a88fdff228384d51d8"
},
{
"images": [
{
"public_id": "nextjs_media/jdi9qo0oiinwik8uxzxn",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605278590/nextjs_media/jdi9qo0oiinwik8uxzxn.jpg"
},
{
"public_id": "nextjs_media/k2pjwtpzolcieioacnu2",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605278591/nextjs_media/k2pjwtpzolcieioacnu2.jpg"
},
{
"public_id": "nextjs_media/qbh6auephsy5leaapsu1",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605278592/nextjs_media/qbh6auephsy5leaapsu1.jpg"
},
{
"public_id": "nextjs_media/gnsgrxorl5utlnxygjn6",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605278594/nextjs_media/gnsgrxorl5utlnxygjn6.jpg"
},
{
"public_id": "nextjs_media/w8qj2rlrhh1es8wxhcui",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605278596/nextjs_media/w8qj2rlrhh1es8wxhcui.jpg"
}
],
"checked": false,
"inStock": 300,
"sold": 10,
"title": "wedding invitation",
"price": 5,
"description": "How to and tutorial videos of cool CSS effect, Web Design ideas,JavaScript libraries, Node.",
"content": "Welcome to our channel Dev AT. Here you can learn web designing, UI/UX designing, html css tutorials, css animations and css effects, javascript and jquery tutorials and related so on.",
"category": "5faa35b58fdff228384d51da"
},
{
"images": [
{
"public_id": "nextjs_media/u8qltexka25minj2rj46",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605318879/nextjs_media/u8qltexka25minj2rj46.jpg"
},
{
"public_id": "nextjs_media/wb5osprab71emsxp3ibm",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605318910/nextjs_media/wb5osprab71emsxp3ibm.jpg"
},
{
"public_id": "nextjs_media/nelvbtwdbk1vjvhufort",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605318911/nextjs_media/nelvbtwdbk1vjvhufort.jpg"
},
{
"public_id": "nextjs_media/bnyeto9vaz40yfts92we",
"url": "https://res.cloudinary.com/devatchannel/image/upload/v1605318913/nextjs_media/bnyeto9vaz40yfts92we.jpg"
}
],
"checked": false,
"inStock": 153,
"sold": 5,
"title": "laptop",
"price": 25,
"description": "How to and tutorial videos of cool CSS effect, Web Design ideas,JavaScript libraries, Node.",
"content": "Welcome to our channel Dev AT. Here you can learn web designing, UI/UX designing, html css tutorials, css animations and css effects, javascript and jquery tutorials and related so on.",
"category": "5faa35a88fdff228384d51d8"
}
]
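One way to sanity-check the array outside of the GUI is to insert it with the MongoDB Node.js driver (or mongosh); if insertMany accepts it, the JSON itself is valid and the problem lies in how it is being pasted into the tool. This is only a sketch; the connection string, database, and collection names are placeholders:

```typescript
// Sketch: bulk-insert the array above using the official MongoDB Node.js driver.
import { MongoClient } from 'mongodb';

const products = require('./products.json'); // the array shown above

async function run() {
  const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
  try {
    await client.connect();
    const collection = client.db('shop').collection('products');
    // The payload is an array of documents, so insertMany is the right call;
    // insertOne expects a single document, not an array.
    const result = await collection.insertMany(products);
    console.log(`Inserted ${result.insertedCount} documents`);
  } finally {
    await client.close();
  }
}

run().catch(console.error);
```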

Alexa Smart Home "Failed to Retrieve State"

I am playing with a sample Alexa Smart Home skill. I am not talking to any real hardware or back-end, just trying to get the message flow working. I have set up a simple switch/plug/light that just supports turning on/off, and I have account linking working and the skill enabled. When I look at it via the Alexa app on my phone or the web (with debug enabled), it always says the device isn't responding, or "Failed to Retrieve State". I can definitely see the messages in CloudWatch, as follows.
Any idea why I'd consistently be getting such a response?
Request:
"directive": {
"endpoint": {
"cookie": {},
"endpointId": "endpoint-003",
"scope": {
"token": "<<<SUPRESSING>>",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "<<SHORTENED>>",
"messageId": "50397414-bb9d-412f-8a2c-15669978ab64",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Response:
{
  "context": {
    "properties": [
      {
        "name": "connectivity",
        "namespace": "Alexa.EndpointHealth",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "OK"
      },
      {
        "name": "powerState",
        "namespace": "Alexa.PowerController",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "ON"
      }
    ]
  },
  "event": {
    "endpoint": {
      "endpointId": "endpoint-003",
      "scope": {
        "token": "Alexa-access-token",
        "type": "BearerToken"
      }
    },
    "header": {
      "correlationToken": "<<SHORTENED>>",
      "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
      "name": "Response",
      "namespace": "Alexa",
      "payloadVersion": "3"
    },
    "payload": {}
  }
}
Problem was: the "name" in my response header should have been "StateReport" instead of "Response". "Response" is only used for directives that set/change values.
My general advice is to always verify that THREE things are good:
Initial "Discovery"
"Response" messages
General "ReportState" queries.
By this - I mean that:
Anything you advertised in "discovery" had better be reported in your state ("StateReport") messages. If you advertise a "PowerController" and your state reports don't contain a status for it, you'll either not see the status, it'll keep retrying forever (continuing to look for it), or you might get some sort of error.
If you CHANGED your discovery payload, make sure that you really removed and re-discovered the devices, and that the states (above) for the new additions/removals are okay.
Always make sure that "EndpointHealth" is being reported.
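As a rough sketch (not the original poster's code), the ReportState branch of a Smart Home Lambda handler could look something like this; the endpoint IDs and state values are placeholders:

```typescript
// Sketch: answer an Alexa.ReportState directive with a StateReport event.
export const handler = async (event: any) => {
  const header = event.directive.header;

  if (header.namespace === 'Alexa' && header.name === 'ReportState') {
    const now = new Date().toISOString();
    return {
      context: {
        properties: [
          {
            namespace: 'Alexa.PowerController',
            name: 'powerState',
            value: 'ON',                      // placeholder state
            timeOfSample: now,
            uncertaintyInMilliseconds: 0,
          },
          {
            namespace: 'Alexa.EndpointHealth',
            name: 'connectivity',
            value: { value: 'OK' },
            timeOfSample: now,
            uncertaintyInMilliseconds: 0,
          },
        ],
      },
      event: {
        header: {
          namespace: 'Alexa',
          name: 'StateReport',                // not "Response", which is for set/change directives
          payloadVersion: '3',
          messageId: `${header.messageId}-R`, // placeholder; should be a fresh unique ID
          correlationToken: header.correlationToken,
        },
        endpoint: { endpointId: event.directive.endpoint.endpointId },
        payload: {},
      },
    };
  }

  // ... handle Discovery, PowerController directives, etc.
};
```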

Error code: InvalidIntentSamplePhraseSlot -

I got the error InvalidIntentSamplePhraseSlot when I built the model using the new skills console.
The full error message is
Sample utterance "AddBookmarkIntent i am at {pageno} of {mybook}" in intent "AddBookmarkIntent" cannot include both a phrase slot and another intent slot. Error code: InvalidIntentSamplePhraseSlot -
where {pageno} is AMAZON.NUMBER and {mybook} is AMAZON.SearchQuery
What is the error about and how can I solve it?
Edit: added the JSON for the intent:
{
  "name": "AddBookmarkIntent",
  "slots": [
    {
      "name": "mybook",
      "type": "AMAZON.SearchQuery"
    },
    {
      "name": "pageno",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "i am at {pageno} of the book {mybook}",
    "save page {pageno} to the book {mybook}",
    "save page {pageno} to {mybook}",
    "i am at {pageno} of {mybook}"
  ]
}
It's not allowed to have a slot of type AMAZON.SearchQuery in the same utterance as another slot, in your case AMAZON.NUMBER.
Mark one of the slots as required and ask for them separately.
A little example:
Create the intent and put in the utterances and slots:
"intents": [
{
"name": "AddBookmarkIntent",
"samples": [
"I am at {pageno}"
],
"slots": [
{
"name": "mybook",
"type": "AMAZON.SearchQuery",
"samples": [
"For {mybook}"
]
},
{
"name": "pageno",
"type": "AMAZON.NUMBER"
}
]
}
Mark the specific slot as required so Alexa will automatically ask for it:
"dialog": {
"intents": [
{
"name": "AddBookmarkIntent",
"confirmationRequired": false,
"prompts": {},
"slots": [
{
"name": "mybook",
"type": "AMAZON.SearchQuery",
"elicitationRequired": true,
"confirmationRequired": false,
"prompts": {
"elicitation": "Elicit.Intent-AddBookmarkIntent.IntentSlot-mybook"
}
}
]
}
]
}
and create the prompts to ask for the slot:
"prompts": [
{
"id": "Elicit.Intent-AddBookmarkIntent.IntentSlot-mybook",
"variations": [
{
"type": "PlainText",
"value": "For which book you like to save the page?"
}
]
}
]
This is probably much easier with the Skill Builder (beta) UI rather than the JSON editor, because it will automatically create the JSON in the background.
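If you handle the intent yourself rather than relying on auto-delegation, the backend can keep handing the turn back to Alexa until all required slots are filled. Here is a minimal sketch with ask-sdk-core; the handler name and the confirmation speech are placeholders:

```typescript
// Sketch: delegate to the dialog model until both slots are filled.
import * as Alexa from 'ask-sdk-core';

const AddBookmarkIntentHandler = {
  canHandle(handlerInput: Alexa.HandlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AddBookmarkIntent';
  },
  handle(handlerInput: Alexa.HandlerInput) {
    // While required slots are missing, hand the turn back to Alexa so the
    // elicitation prompt defined in the dialog model is spoken.
    if (Alexa.getDialogState(handlerInput.requestEnvelope) !== 'COMPLETED') {
      return handlerInput.responseBuilder
        .addDelegateDirective()
        .getResponse();
    }

    const pageno = Alexa.getSlotValue(handlerInput.requestEnvelope, 'pageno');
    const mybook = Alexa.getSlotValue(handlerInput.requestEnvelope, 'mybook');

    return handlerInput.responseBuilder
      .speak(`Saved page ${pageno} for ${mybook}.`)
      .getResponse();
  },
};
```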
The error is telling you that you have an intent name in your sample utterance, where there should only be slots, and it looks like you do.
"AddBookmarkIntent i am at {pageno} of {mybook}"
"AddBookmarkIntent" shouldn't actually be inside of the utterance. So turn your utterance into:
"i am at {pageno} of {mybook}"
I know that some of the documents show examples of sample utterances with the intent name first, but those pages have a big warning near the top. So you have to be careful about which documents you read and follow, based on which way you are building your Alexa skill. Follow the Skill Builder documentation if you are using the Skill Builder.
Unfortunately, it seems like an utterance can only reference one "phrase" slot type.
For your specific case, there is now a non-phrase slot type AMAZON.Book in public beta; if you use that instead of AMAZON.SearchQuery, it might work.
Src: https://developer.amazon.com/en-US/docs/alexa/custom-skills/slot-type-reference.html

Alexa skill REST API

Can we use a REST API instead of using Lambda? The reason I'm asking is that we have the request, we know what Alexa accepts as a response, and we know that it is a POST, so we could wire all of this up as a REST API. The whole project is based on JAX-RS, so we want to have it all in one place, without using Lambda or anything else. Not that Lambda isn't great.
This is the request that Alexa passes to Lambda:
{
  "session": {
    "sessionId": "SessionId.a82f0b92-3650-4d45-8f12-e030ffc10894",
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.8f35038e-13ac-4327-8e4f-e5df52dc1432"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.AFP3ZWPOS2BGJR7OWJZ3DHPKMOMNWY4AY66FUR7ILBWANIHQN73QGGUEQZ7YXOLC7NYVD3JPUAHAGUS4ZFXJ6ZMS4EHO2CJFPWFLWLYZLDP7S227ADI54A2ZMLZLDO5CXSIB47ELNY54S2M7FDNJFHTSU67B7HB3UZUN6OUUR5BYS3UBRSIPBG4IWRLHUN36NXDYBWUM3NMQZRA"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.bfdb3c27-028b-4224-977a-558129808e9a",
    "timestamp": "2016-07-11T17:52:55Z",
    "intent": {
      "name": "HelloWorldIntent",
      "slots": {}
    },
    "locale": "en-US"
  },
  "version": "1.0"
}
Response:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Hello World!"
    },
    "card": {
      "content": "Hello World!",
      "title": "Greeter",
      "type": "Simple"
    },
    "shouldEndSession": true
  },
  "sessionAttributes": {}
}
Sure you can. In fact, when you are creating your skill in the Alexa Developer Portal, you have that option. The caveat is that you will need to manage your own TLS certificate and will have to make sure that the latency/responsiveness is decent based on the location of your users.
If you would like to explore this further, you can use Amazon's Java code examples. They can be found at: https://github.com/amzn/alexa-skills-kit-java.
You can definitely set up a RESTful service API for use with Alexa.
And, if you set it up in Azure, you don't even need to create your own certificate.
You can use a REST API as the endpoint for Alexa skills. The API will be invoked in the following manner:
[Configured_URL]/alexa/[intent]
where [Configured_URL] is the URL endpoint configured on the Amazon site for invoking the skill, and [intent] is the name of the intent. You should host your service accordingly.
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-an-alexa-skill-as-a-web-service
https://iwritecrappycode.wordpress.com/2016/04/01/create-an-alexa-skill-in-node-js-and-hosting-it-on-heroku/
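To make the shape of such a self-hosted endpoint concrete, here is a minimal sketch in TypeScript with Express (for brevity; the same structure carries over to a JAX-RS resource). Request signature verification, which Amazon requires before certification, is omitted here:

```typescript
// Sketch: a self-hosted skill endpoint that accepts Alexa's POSTed request
// envelope and returns the response format shown above.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/alexa', (req, res) => {
  const request = req.body?.request;

  if (request?.type === 'IntentRequest' && request.intent.name === 'HelloWorldIntent') {
    res.json({
      version: '1.0',
      sessionAttributes: {},
      response: {
        outputSpeech: { type: 'PlainText', text: 'Hello World!' },
        card: { type: 'Simple', title: 'Greeter', content: 'Hello World!' },
        shouldEndSession: true,
      },
    });
    return;
  }

  // Fallback for LaunchRequest and anything else.
  res.json({
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: 'Welcome!' },
      shouldEndSession: false,
    },
  });
});

// The endpoint must be served over HTTPS with a valid certificate when it is
// configured in the developer console.
app.listen(3000);
```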
