I got the error code InvalidIntentSamplePhraseSlot when I built the model using the new skills console.
The full error message is:
Sample utterance "AddBookmarkIntent i am at {pageno} of {mybook}" in intent "AddBookmarkIntent" cannot include both a phrase slot and another intent slot. Error code: InvalidIntentSamplePhraseSlot
where {pageno} is AMAZON.NUMBER and {mybook} is AMAZON.SearchQuery
What is the error about and how can I solve it?
Edit: added the JSON for the intent:
{
  "name": "AddBookmarkIntent",
  "slots": [
    {
      "name": "mybook",
      "type": "AMAZON.SearchQuery"
    },
    {
      "name": "pageno",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "i am at {pageno} of the book {mybook}",
    "save page {pageno} to the book {mybook}",
    "save page {pageno} to {mybook}",
    "i am at {pageno} of {mybook}"
  ]
}
It's not allowed to have a slot of type AMAZON.SearchQuery in the same utterance as another slot, in your case AMAZON.NUMBER.
Mark one of the slots as required and ask for it separately.
A little example:
Create the intent and put in the utterances and slots:
"intents": [
{
"name": "AddBookmarkIntent",
"samples": [
"I am at {pageno}"
],
"slots": [
{
"name": "mybook",
"type": "AMAZON.SearchQuery",
"samples": [
"For {mybook}"
]
},
{
"name": "pageno",
"type": "AMAZON.NUMBER"
}
]
}
Mark the specific slot as required so Alexa will automatically ask for it:
"dialog": {
"intents": [
{
"name": "AddBookmarkIntent",
"confirmationRequired": false,
"prompts": {},
"slots": [
{
"name": "mybook",
"type": "AMAZON.SearchQuery",
"elicitationRequired": true,
"confirmationRequired": false,
"prompts": {
"elicitation": "Elicit.Intent-AddBookmarkIntent.IntentSlot-mybook"
}
}
]
}
]
}
and create the prompts to ask for the slot:
"prompts": [
{
"id": "Elicit.Intent-AddBookmarkIntent.IntentSlot-mybook",
"variations": [
{
"type": "PlainText",
"value": "For which book you like to save the page?"
}
]
}
]
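If you are not using auto delegation, your handler can keep handing the dialog back to Alexa until the required slot is filled. An untested sketch with the ASK SDK v2 for Node.js (handler name and speech text are just examples):

const AddBookmarkIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' && request.intent.name === 'AddBookmarkIntent';
  },
  handle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    if (request.dialogState !== 'COMPLETED') {
      // Let Alexa elicit the missing "mybook" slot using the prompt defined above.
      return handlerInput.responseBuilder
        .addDelegateDirective(request.intent)
        .getResponse();
    }
    const { pageno, mybook } = request.intent.slots;
    return handlerInput.responseBuilder
      .speak(`Saving page ${pageno.value} for ${mybook.value}.`)
      .getResponse();
  }
};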
This is probably much easier with the Skill Builder (beta) than with the JSON editor, because it creates the JSON automatically in the background.
The error is telling you that you have an intent name in your sample utterance, which should only contain slots and ordinary words, and it looks like you do:
"AddBookmarkIntent i am at {pageno} of {mybook}"
"AddBookmarkIntent" shouldn't actually be inside of the utterance. So turn your utterance into:
"i am at {pageno} of {mybook}"
I know that some of the documents show examples of sample utterances with the intent name first, such as here. But that page has a big warning near the top.
So you have to be careful about which documents you read and follow based on which way you are building your Alexa Skill.
Follow this if you are using the Skill Builder.
Unfortunately, it seems an utterance can only reference one "phrase" slot type.
For your specific case, it does look like there is now a non-phrase slot type, AMAZON.Book, in public beta; if you use that instead of AMAZON.SearchQuery it might work.
Source: https://developer.amazon.com/en-US/docs/alexa/custom-skills/slot-type-reference.html
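For illustration, the slot definitions in the question would then just swap the type (a sketch; check that AMAZON.Book is available for your locale, since it is in beta):

"slots": [
  {
    "name": "mybook",
    "type": "AMAZON.Book"
  },
  {
    "name": "pageno",
    "type": "AMAZON.NUMBER"
  }
]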
Related
I want to play a personal greeting (an mp3) in my Alexa skill's launch intent and start an audio stream directly when the first mp3 is finished. I tried it with responseBuilder and addAudioPlayerPlayDirective, but you can only add one such directive to a response, and I have two files to play one after another. Does anyone have an idea how to solve this?
AudioPlayer is better suited to long-form audio like meditations or songs. Once your skill starts the audio player, the custom skill session ends and your users can't do what they could normally do while in your skill.
It sounds more like you just want to play some shorter audio clips to greet your users. If that's the case and your audio files meet the requirements, APL for Audio may be a better solution for you.
Here's an example directive your skill can include in its response to play two audio files, one after another, then have Alexa say something.
{
  "type": "Alexa.Presentation.APLA.RenderDocument",
  "token": "developer-provided-string",
  "document": {
    "type": "APLA",
    "version": "0.91",
    "mainTemplate": {
      "parameters": [
        "payload"
      ],
      "item": {
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "soundbank://soundlibrary/ui/gameshow/amzn_ui_sfx_gameshow_intro_01",
            "filters": [
              {
                "type": "Volume",
                "amount": "20%"
              },
              {
                "type": "FadeIn",
                "duration": 1000
              }
            ]
          },
          {
            "type": "Audio",
            "source": "soundbank://soundlibrary/alarms/beeps_and_bloops/bell_01"
          },
          {
            "type": "Speech",
            "content": "Hello world!"
          }
        ]
      }
    }
  }
}
If what you really need is the AudioPlayer, have your skill issue a directive to start the first clip. Then add a handler to your skill to capture the AudioPlayer.PlaybackNearlyFinished request. In that handler, return another directive to queue up the next audio clip.
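An untested sketch of such a handler with the ASK SDK v2 for Node.js (the stream URL and tokens are placeholders):

const PlaybackNearlyFinishedHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'AudioPlayer.PlaybackNearlyFinished';
  },
  handle(handlerInput) {
    // Token of the clip that is about to finish playing.
    const previousToken = handlerInput.requestEnvelope.request.token;
    // Queue the second clip behind it; 'ENQUEUE' lets the current clip finish first.
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective(
        'ENQUEUE',
        'https://example.com/audio/second-clip.mp3', // placeholder URL
        'second-clip-token',
        0,
        previousToken
      )
      .getResponse();
  }
};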
I've returned to try to make some Data Studio custom JavaScript.
So I started off with the template-type settings and basic JS. The manifest is listing correctly - Data Studio sees the custom item.
It took a long time for it to be authorised.
However, on adding the custom JS, the console is reporting a load of errors.
First: data.0.type is not a valid config.
Second: data.0.elements.data.0.type is not a valid config.
JSON:
{
  "data": [
    {
      "id": "idtestviz",
      "label": "Dimension Element Heading",
      "type": "DIMENSION"
    }
  ],
  "style": [
    {
      "id": "idtestvizstyles",
      "label": "Test Styles",
      "elements": [
        {
          "id": "idtestvizfontcolor",
          "label": "Font Colour",
          "defaultValue": "#FFFF00"
        }
      ]
    }
  ]
}
It did have options in before; same error.
And it appears to be the same as in https://developers.google.com/datastudio/visualization/define-config
It is also erroring with 'is already used in the config'
and that data.0.elements.style.0.elements.0.type is a required field that cannot be found.
It seems like there are more checks that need to be done.
Is there a validator for the JSON etc. before running, or has something updated on Google's side that their documentation hasn't caught up with yet?
Or, the more likely aspect, I'm missing some critical stuff...
Regards
Vince
Re-checked my JSON config against a previous one that works and noted some errors in the objects (the data fields need to be nested inside an elements array, and each element needs a type). Corrected those, and the JSON errors in the console have gone away.
JS errors remain - working on those... closing this question.
{
  "data": [
    {
      "id": "test_viz_data",
      "label": "Test Viz Data",
      "elements": [
        {
          "id": "text_viz_dimensions",
          "label": "Dimension Element Heading",
          "type": "DIMENSION",
          "options": {
            "min": 1,
            "max": 1
          }
        },
        {
          "id": "test_metrics",
          "label": "Metric fields",
          "type": "METRIC",
          "options": {
            "min": 1,
            "max": 1
          }
        }
      ]
    }
  ],
  "style": [
    {
      "id": "idstyles",
      "label": "Test Styles",
      "elements": [
        {
          "id": "idfontcolor",
          "label": "Font Colour",
          "type": "FONT_COLOR",
          "defaultValue": "#FFFF00"
        }
      ]
    }
  ],
  "interactions": []
}
I am playing with a sample Alexa Smart Home skill - I am not talking to any real hardware or back end, just trying to get the message flow working. I have set up a simple switch/plug/light that just supports turning on/off, I have account linking working, and the skill is enabled. When I look at it via the Alexa app on the phone or web (with debug enabled), it always says the device isn't responding, or "Failed to Retrieve State". I can definitely see the messages in CloudWatch as follows.
Any idea why I'd be chronically getting such a response?
Request:
"directive": {
"endpoint": {
"cookie": {},
"endpointId": "endpoint-003",
"scope": {
"token": "<<<SUPRESSING>>",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "<<SHORTENED>>",
"messageId": "50397414-bb9d-412f-8a2c-15669978ab64",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Response:
{
  "context": {
    "properties": [
      {
        "name": "connectivity",
        "namespace": "Alexa.EndpointHealth",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "OK"
      },
      {
        "name": "powerState",
        "namespace": "Alexa.PowerController",
        "timeOfSample": "2020-06-29T16:49:59.00Z",
        "uncertaintyInMilliseconds": 0,
        "value": "ON"
      }
    ]
  },
  "event": {
    "endpoint": {
      "endpointId": "endpoint-003",
      "scope": {
        "token": "Alexa-access-token",
        "type": "BearerToken"
      }
    },
    "header": {
      "correlationToken": "<<SHORTENED>>",
      "messageId": "7a8b9a71-adda-41b8-acba-4d3855374845",
      "name": "Response",
      "namespace": "Alexa",
      "payloadVersion": "3"
    },
    "payload": {}
  }
}
Problem was: the "name" in my response header should have been "StateReport", which is the reply to a ReportState directive. "Response" is only used for things that set/change values.
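For reference, a sketch of the corrected event header (message ID and correlation token are placeholders; the correlation token must be copied from the incoming ReportState directive, and the endpoint, payload and context stay as in the response above):

{
  "event": {
    "header": {
      "namespace": "Alexa",
      "name": "StateReport",
      "messageId": "<unique message id>",
      "correlationToken": "<copied from the ReportState directive>",
      "payloadVersion": "3"
    }
  }
}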
My general advice is to always verify that THREE things are good:
Initial "Discovery"
"Response" messages
General "ReportState" queries.
By this I mean that:
Anything you advertised in "discovery" had better be reported in your other ("ReportState") messages. If you advertise a "PowerController" and your state reports don't contain a status for it, you'll either not see the status, or it'll keep retrying forever (continuing to look for it), or you might get some sort of error.
If you CHANGED your discovery stuff, make sure that you really removed the devices, re-discovered them, and that the states (above) for the new additions/removals are okay.
Always make sure that "EndpointHealth" is being reported.
I'm building a simple Guess Who skill game for Alexa. I have two intents right now: GenderIntent and HairColorIntent.
GenderIntent has a custom slot to handle gender and related synonyms such as mapping "boy" and "man" to "Male". This is working great. It returns a resolution within the slot. Exactly what I need.
HairColorIntent has a predefined Amazon slot, AMAZON.Color. This is not working great as it never returns a resolution regardless of the color supplied.
Here is my model for GenderIntent and HairColorIntent:
{
  "name": "GenderIntent",
  "samples": [
    "are you a {Gender}"
  ],
  "slots": [
    {
      "name": "Gender",
      "type": "GENDER_TYPES",
      "samples": []
    }
  ]
},
{
  "name": "HairColorIntent",
  "samples": [
    "is your hair {HairColor}",
    "do you have {HairColor} hair"
  ],
  "slots": [
    {
      "name": "HairColor",
      "type": "AMAZON.Color"
    }
  ]
}
GenderIntent returns the following slot WITH resolutions:
{
  "Gender": {
    "name": "Gender",
    "value": "male",
    "resolutions": {
      "resolutionsPerAuthority": [
        {
          "authority": "amzn1.er-authority.echo-sdk.amzn1.ask.skill.2ed972f4-1c5a-4cc1-8fd7-3f440f5b8968.GENDER_TYPES",
          "status": {
            "code": "ER_SUCCESS_MATCH"
          },
          "values": [
            {
              "value": {
                "name": "Male",
                "id": "63889cfb9d3cbe05d1bd2be5cc9953fd"
              }
            }
          ]
        }
      ]
    },
    "confirmationStatus": "NONE",
    "source": "USER"
  }
}
HairColorIntent returns the following WITHOUT resolutions:
{
  "HairColor": {
    "name": "HairColor",
    "value": "brown",
    "confirmationStatus": "NONE",
    "source": "USER"
  }
}
I'd like HairColorIntent's HairColor slot to return the resolution. What am I doing wrong?
Resolutions are only returned if you use synonyms in your slot type.
I'm not exactly sure how you handle it in your code; in Node.js, for example, it would be:
handlerInput.requestEnvelope.request.intent.slots.Gender.resolutions.resolutionsPerAuthority[0].values[0].value.name
If you do not use synonyms (for example, for the HairColor slot), you can get the value simply with handlerInput.requestEnvelope.request.intent.slots.HairColor.value
Working with predefined slot types, this should work well in your code. If you want custom slot types to also return resolutions whether or not you actually use synonyms, you can simply add the value itself as a synonym and it will return the full resolution tree.
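For example (an untested Node.js sketch with the ASK SDK v2), a small helper that falls back to the raw value when no resolution is present:

function resolvedSlotValue(handlerInput, slotName) {
  const slot = handlerInput.requestEnvelope.request.intent.slots[slotName];
  const authorities = (slot.resolutions && slot.resolutions.resolutionsPerAuthority) || [];
  const match = authorities.find((r) => r.status.code === 'ER_SUCCESS_MATCH');
  // Fall back to the spoken value when there is no resolution (e.g. AMAZON.Color).
  return match ? match.values[0].value.name : slot.value;
}

// resolvedSlotValue(handlerInput, 'Gender')    -> "Male"
// resolvedSlotValue(handlerInput, 'HairColor') -> "brown"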
Hope that answered your question.
I am currently writing an Alexa spelling skill that asks for the spelling of different words and checks the user's input against the data we have. I created the intents and slots below to do the expected work:
Intents:
SpellingIntent - asks a random word from the list of words
AnswerIntent - validates the user's input
Slots:
Words - to keep track of all the words
Spellings - spellings of the words in dot-separated format
For example, if the word is apple, then the Spellings slot would have a.p.p.l.e
My skill works fine if the user spells the word correctly, but if the user misspells the word then the event never reaches my AnswerIntent for validation.
I researched this and found that Amazon deprecated the AMAZON.LITERAL built-in slot type, which would capture any word spoken by the user, and that I should use AMAZON.SearchQuery instead. But I can't work out how to get the event fired to my AnswerIntent no matter what the user says.
Could anyone help me figure this out?
I haven't tested this, but I would suggest trying the following as a solution.
1 - Stop using AMAZON.SearchQuery; instead, define a custom slot type like the below:
{
  "types": [
    {
      "name": "LETTER",
      "values": [
        {
          "name": {
            "value": "a",
            "synonyms": []
          }
        },
        {
          "name": {
            "value": "b",
            "synonyms": []
          }
        },
        {
          "name": {
            "value": "c",
            "synonyms": []
          }
        },
        // ... and so on
      ]
    }
  ]
}
2 - Redefine your AnswerIntent to accept varying numbers of the LETTER slot, like the below:
{
  "intents": [
    {
      "name": "AnswerIntent",
      "slots": [
        {
          "name": "LetterOne",
          "type": "LETTER"
        },
        {
          "name": "LetterTwo",
          "type": "LETTER"
        },
        {
          "name": "LetterThree",
          "type": "LETTER"
        },
        // ... and so on
      ],
      "samples": [
        "{LetterOne}",
        "{LetterOne} {LetterTwo}",
        "{LetterOne} {LetterTwo} {LetterThree}",
        // ... and so on
      ]
    }
  ]
}
In theory, this setup should trigger the AnswerIntent any time a user speaks a sequence of letters. You should then be able to collect the letters passed through in the slot values and compare them against the correct spelling.
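An untested sketch of the handler side with the ASK SDK v2 for Node.js (the slot names match the model above; the currentWord session attribute is just an assumed place where you stored the word currently being spelled):

const AnswerIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' && request.intent.name === 'AnswerIntent';
  },
  handle(handlerInput) {
    const slots = handlerInput.requestEnvelope.request.intent.slots;
    // Join whichever letter slots were filled, in order, into the spelled word.
    const spelled = ['LetterOne', 'LetterTwo', 'LetterThree'] // ... and so on
      .map((name) => (slots[name] && slots[name].value) || '')
      .join('')
      .toLowerCase();
    const expected = handlerInput.attributesManager.getSessionAttributes().currentWord;
    const speech = spelled === expected
      ? 'That is correct!'
      : `Sorry, that is not how you spell ${expected}.`;
    return handlerInput.responseBuilder.speak(speech).getResponse();
  }
};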
As a potential additional step, you could try adding synonyms to the slot values for phonetically matching words, such as the below. Then access the slots in your code via the resolved (canonical) value.
{
  "name": {
    "value": "b",
    "synonyms": [
      "be",
      "bee"
    ]
  }
}