First, I should clarify that I have the built-in Fallback intent set up properly, and it triggers every time I issue a rubbish command.
The problem occurs when I issue a command that is defined in a custom slot. For example, I have a custom slot type with the following slot values defined: apple, tomato, onion.
This slot type is added in a custom intent with the following utterances: Information about {food}, info about {food}.
So, in the intent utterances, I have not defined an utterance for the slot alone, i.e. just {food}.
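For reference, the relevant part of the interaction model (inside languageModel) looks roughly like this; the intent and type names here are placeholders, not the real ones:

    {
      "intents": [
        {
          "name": "FoodInfoIntent",
          "slots": [{ "name": "food", "type": "FoodType" }],
          "samples": ["information about {food}", "info about {food}"]
        },
        { "name": "AMAZON.FallbackIntent", "samples": [] }
      ],
      "types": [
        {
          "name": "FoodType",
          "values": [
            { "name": { "value": "apple" } },
            { "name": { "value": "tomato" } },
            { "name": { "value": "onion" } }
          ]
        }
      ]
    }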
Now, when Alexa is expecting a command, if I say just "apple", it gives an error tone and stops the skill without any error information. I realized that this happens when Alexa can't map the utterance to any intent, but I cannot figure out why it is not triggering the Fallback intent when this happens.
I opened the Utterance Profiler and tried the command there, and all I get is that no intent is triggered by that utterance. For other utterances, for example "orange", which is not defined in the slot type, it maps to the Fallback intent.
Does anyone have any ideas how I can solve this?
I am working on an Alexa Skill and am having trouble getting Alexa to understand my voice input. As a result, the utterances are not properly matched with the required slots, and Alexa keeps re-asking or getting stuck.
Here are some examples (intended phrase: what Alexa understood):
affirm: f.m., a from
Speedbird: Speedboard, speaker, speed but, speed bird, spirit, speedbath
wind: windies (wind is), when is home (wind is calm)
runway 03: runway sarah three
takeoff: the cough
Is there any way to train Alexa to understand me properly? Or should I just add all of these "false" utterances as sample utterances so Alexa will match my intents properly?
Thanks for any help!
There is no way to train Alexa's underlying language understanding itself.
Yes, as you wrote: I would just add these false utterances as sample utterances for your intent.
This also seems to be what Amazon recommends:
"...might show you that sometimes Alexa misunderstands the word "mocha" as "milk." To mitigate this issue, you can map an utterance directly to an Alexa intent to help improve Alexa's understanding within your skill. ..."
"...two common ways to improve ASR accuracy are to map an intent value or a slot value to a failing utterance..."
Maybe have another person give it a try, to see whether their speech is recognized the same way as yours.
Word-Only Slots
If you're still struggling with this, try adding more variations to your slot values (synonyms are an option if you have specific misinterpretations that keep repeating). Consider adding synonyms like speed bird for Speedbird (and take off for takeoff). Non-standard word slots will not resolve as accurately as common words, so if you break Speedbird into two common words, Alexa should recognize the slot more reliably. Information about synonyms is here:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/define-synonyms-and-ids-for-slot-type-values-entity-resolution.html#sample-slot-type-definition-and-intentrequest
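For example, a slot type definition along those lines might look like this (a sketch; the type name and ID are placeholders):

    {
      "name": "CallsignType",
      "values": [
        {
          "id": "SPEEDBIRD",
          "name": {
            "value": "Speedbird",
            "synonyms": ["speed bird", "speedboard", "speed but"]
          }
        }
      ]
    }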
Once you've done this, you'll want to grab the canonical value of the slot, not the interpreted value (e.g. you want Speedbird not speedboard).
To see an example, scroll to the very last JSON code block. The scenario described in this request is that the user said the word track, which is a synonym for the slot value song, in their request. You'll see the MediaType value is track (what the user said), but if you take a look at the resolutions object, inside the values array, the first value object is the actual slot value song (what you want), associated with the synonym.
This Stack Overflow question goes into a little more detail on how to get that value:
How do I get the canonical slot value out of an Alexa request
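In code, pulling the canonical value out of the raw request JSON might look something like this (a minimal sketch, no SDK assumed; the function name is made up):

    // Return the canonical slot value if entity resolution matched,
    // otherwise fall back to the raw interpretation.
    function getCanonicalSlotValue(request, slotName) {
      const slot = request.intent.slots[slotName];
      if (!slot) return undefined;
      const authorities =
        (slot.resolutions && slot.resolutions.resolutionsPerAuthority) || [];
      for (const authority of authorities) {
        // ER_SUCCESS_MATCH: the spoken value or a synonym matched a slot value.
        if (authority.status.code === 'ER_SUCCESS_MATCH') {
          return authority.values[0].value.name; // e.g. "Speedbird"
        }
      }
      return slot.value; // e.g. "speedboard"
    }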
Word and Number Slots
In the case of the "runway 03" example, consider breaking this into two different slots, e.g. {RunwaySlot : Custom} {Number : AMAZON.NUMBER}. You'll have better luck splitting these more complex utterances across slots. The same is true for an example like "red airplane": break it into two slots, {Color : AMAZON.Color} {VehicleSlot : Custom}.
https://developer.amazon.com/en-US/docs/alexa/custom-skills/slot-type-reference.html#number
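An intent using such a slot split might be declared roughly like this (the intent name, type name, and carrier phrase are placeholders):

    {
      "name": "RunwayIntent",
      "slots": [
        { "name": "RunwaySlot", "type": "RunwayType" },
        { "name": "Number", "type": "AMAZON.NUMBER" }
      ],
      "samples": ["taxi to {RunwaySlot} {Number}", "{RunwaySlot} {Number}"]
    }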
I have a skill in Alexa, Cortana and Google, and in each case there is a concept of terminating the flow after speaking the result, or keeping the mic open to continue the flow. The skill logic lives mostly in an HTTP API call that returns the information to speak and display, plus a flag indicating whether or not to continue the conversation.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the JavaScript file and returns a JSON object containing three elements: speech (the text to speak, possibly SSML); displayText (the text to display to the user); and shouldEndSession (true or false).
The action calls the JavaScript code with type Search and a collect segment. It then outputs the JSON object mentioned above. This all works fine, except that I don't know how to handle shouldEndSession. Is this done in the action, perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But something like "Hi Bixby, ask car repair about replacing my alternator". In this case, the response may be "I need to know what model car you have. What model car?". The user would then say "Toyota" and then Bixby would complete the dialog with the answer or maybe ask for more info.
I'd appreciate some pointers.
Thanks
I think this can easily be done in Bixby with an input prompt when a required input is missing. You can also build an input-view to enhance the user experience.
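As a rough sketch in the Bixby modeling language (all names here are made up, not from your capsule), marking the input Required is what makes Bixby prompt for it instead of ending the conversation:

    action (GetRepairInfo) {
      type (Search)
      collect {
        input (carModel) {
          type (CarModel)
          // Required with no default: Bixby keeps the conversation open
          // and prompts the user (via your input-view) until it is filled.
          min (Required) max (One)
        }
      }
      output (RepairInfo)
    }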
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby enabled Samsung device, check our marketplace for ideas and inspirations.
How can I make Alexa prompt me (e.g. "You want weather for which city again?") if I missed a parameter (a slot, i.e. city_name) in my utterance?
I am making a skill which tells me the weather for a city. I have utterances defined and it works all right, but when I don't give a city name (city_name is also the only slot in my intent), it goes directly to the stop intent and gives my message "Alexa cannot help you with this".
For my slot (city_name) I've even checked "Is this slot required to fulfill the intent?" and filled in the Alexa prompts and user utterances, but it still doesn't work.
Use Alexa dialogs. The dialog model automatically fills all the required slots by re-prompting the user for any slot value they have missed. A dialog has three states: STARTED, IN_PROGRESS, and COMPLETED. The dialog state becomes COMPLETED only once all the required slot values are filled. Watch the tutorial here
You can use the different Dialog interface directives to ask the user for the information you need to fulfill their request. Whenever there is a user interaction with your skill, you get a request at your backend with the mapped intent and its filled (or unfilled) slots. Even if you use a dialog model and have filled in all the prompts and utterances for each slot, you still have to respond with an appropriate directive to continue the dialog.
There are three ways in which you can handle a dialog model.
1. Delegating to Alexa
You can use the Dialog.Delegate directive to let Alexa determine the next step in the dialog; Alexa then uses the prompts you have defined in the dialog model to elicit slot values, confirm slot values, or confirm the entire intent.
If you have unfilled slots, just return a delegate directive and Alexa will use the prompts defined in the interaction model to fill them. As long as the dialogState property is not COMPLETED, you can continue delegating to Alexa.
Once the conversation is complete, the incoming IntentRequest has a dialogState of COMPLETED. All required information is now available in the intent's slot values.
Note: with the Dialog.Delegate directive you cannot send outputSpeech or a reprompt from your code; the prompts defined in the interaction model are used instead. Also, the COMPLETED status is only reached when you use Dialog.Delegate.
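A minimal handler sketch with the ASK SDK v2 for Node.js (the slot name city_name is from the weather question above; the intent and handler names are assumed):

    const Alexa = require('ask-sdk-core');

    const GetWeatherHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetWeatherIntent';
      },
      handle(handlerInput) {
        const request = handlerInput.requestEnvelope.request;
        if (request.dialogState !== 'COMPLETED') {
          // Let Alexa elicit the missing required slots using the
          // prompts defined in the interaction model.
          return handlerInput.responseBuilder
            .addDelegateDirective()
            .getResponse();
        }
        // All required slots are filled once the dialog is COMPLETED.
        const city = request.intent.slots.city_name.value;
        return handlerInput.responseBuilder
          .speak(`Here is the weather for ${city}.`)
          .getResponse();
      },
    };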
2. Control the Dialog
In each turn of the conversation you can take control and ask for what you need, rather than delegating to Alexa. This is useful especially when you want slots filled in a particular order, when you want to confirm slots as you go, when whether a slot is required is dynamic in nature, and so on.
You can use Dialog.ElicitSlot directive to ask for a particular slot, Dialog.ConfirmSlot to confirm a particular slot and Dialog.ConfirmIntent to confirm an intent itself.
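For example, explicitly asking for one slot might look like this (slot name from the weather example above; the prompt wording is up to you):

    // Take control: ask for a specific slot instead of delegating.
    return handlerInput.responseBuilder
      .speak('Which city do you want the weather for?')
      .reprompt('For which city?')
      .addElicitSlotDirective('city_name')
      .getResponse();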
3. Combining both
When you receive an intent request you can return a delegate directive or any other directive as you wish. Even if you delegate, at any point you can take over the dialog rather than continuing to delegate to Alexa.
More on the different directives here
Sample Interactions:
1. Using delegate directive here
2. Using ElicitSlot directive here
3. Using ConfirmSlot directive here
4. Using ConfirmIntent directive here
I have implemented a multi-turn dialog for Alexa. The HelpIntent provides different help texts depending on the state of the dialog. After the user has triggered the HelpIntent and been presented with the help text, I want to elicit a specific slot with the ElicitSlotDirective.
This seems not to be supported, since you can only elicit slots of the current intent, and the HelpIntent does not have slots.
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs/issues/162
My question now is: How can I return to my multi-turn dialog and elicit a specific slot after the user triggered the HelpIntent?
You can now use intent chaining to elicit a slot from a different Intent. For example:
return handlerInput.responseBuilder
  .speak('Which drink would you like?')
  // Chain into OrderIntent and elicit its "drink" slot, even though
  // the current request was for a different intent (e.g. the HelpIntent).
  .addDirective({
    type: 'Dialog.ElicitSlot',
    slotToElicit: 'drink',
    updatedIntent: {
      name: 'OrderIntent',
      confirmationStatus: 'NONE'
    }
  })
  .getResponse();
See this blog post.
The documentation states that:
Implementing the built-in intents is recommended, but optional.
I recommend that you define your own HelpIntent with utterances that overlap those of the AMAZON.HelpIntent, but with the slot types you need.
In this case, your service receives an IntentRequest for MyHelpIntent, even though these phrases overlap with the built-in AMAZON.HelpIntent.
The documentation also states that this practice is not recommended, because the built-in intent may have better coverage of sample utterances, and that it is better practice to extend the built-in intents. But (stupidly, on Amazon's part) the HelpIntent does not support slots, so the only way is a custom help intent.
I don't see a way to use Dialog Directives with the built-in Intents.
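For illustration, such a custom help intent might be declared in the interaction model roughly like this (the slot and sample utterances are placeholders):

    {
      "name": "MyHelpIntent",
      "slots": [{ "name": "drink", "type": "DrinkType" }],
      "samples": ["help", "help me", "what can I say"]
    }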
Here's a convoluted workaround that might work (there's no straightforward way right now, Nov 2018); the steps are sketched in code after the list:
On every loop of the multi-turn dialog, save your dialog-based intent in the session attributes (the whole intent object; you can use the intent.name as the key)
On every triggered intent (even the HelpIntent), save the intent name in a lastIntent session attribute (to keep track of the previous intent name)
The user triggers help and you're now in the HelpIntent. After you provide your help message, append a question at the end that will lead the user to say something that triggers your dialog-based intent again
Do the following steps when you are in the dialog-based intent, and only if lastIntent was the HelpIntent (set in the previous step):
Load the most recent intent data from the session attributes and, in it, delete the slot value and resolutions of the slot you want to elicit (alternatively, if you want to start from scratch, you can delete the remaining slot values too; up to you)
Replace the current intent with the modified intent from the previous step
Emit a Dialog.Delegate directive with the current intent (your model needs to flag the slot you want to elicit with elicitationRequired set to true)
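A rough sketch of those steps with the ASK SDK v2 (the intent name OrderIntent and slot name drink are placeholders; the same lastIntent bookkeeping also has to happen in the HelpIntent handler):

    const Alexa = require('ask-sdk-core');

    const DialogIntentHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'OrderIntent';
      },
      handle(handlerInput) {
        const attrs = handlerInput.attributesManager.getSessionAttributes();
        const request = handlerInput.requestEnvelope.request;

        // Coming back from help: restore the saved intent and clear the
        // slot we want to elicit again.
        if (attrs.lastIntent === 'AMAZON.HelpIntent' && attrs[request.intent.name]) {
          const saved = attrs[request.intent.name];
          delete saved.slots.drink.value;
          delete saved.slots.drink.resolutions;
          request.intent = saved;
        }

        // Remember this turn's intent (whole object) and name for the next turn.
        attrs[request.intent.name] = request.intent;
        attrs.lastIntent = request.intent.name;
        handlerInput.attributesManager.setSessionAttributes(attrs);

        // Delegate with the modified intent so Alexa re-prompts for the
        // cleared slot (elicitationRequired must be true in the dialog model).
        return handlerInput.responseBuilder
          .addDelegateDirective(request.intent)
          .getResponse();
      }
    };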
Let's say I have a skill with two custom intents, 'FirstIntent' and 'SecondIntent'. SecondIntent also has a required slot, 'reqSlot'.
Now I would like to sequence my intents: after my skill sends the FirstIntent response, I would like Alexa to send a request with SecondIntent and a directive to elicit reqSlot, without the user invoking it.
They say here, under the parameter 'updatedIntent':
"Note that you cannot change intents when returning a Dialog directive, so the intent name and set of slots must match the intent sent to your skill."
Is this generally possible, or did anyone figure out a workaround for this scenario?
Thanks :)
There are ways to handle this.
You can try:
When you send your first response, it must set the shouldEndSession flag to false.
The end of your first response's output speech should lead the user into invoking the second response. For example: 'Say telephone number followed by your number'.
This way the user doesn't need to explicitly invoke your skill to get to the next intent.
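A sketch of such a first response (the wording is just an example):

    // End of the FirstIntent handler: keep the session open and steer
    // the user toward the utterance that triggers SecondIntent.
    return handlerInput.responseBuilder
      .speak('Done. Now say "telephone number" followed by your number.')
      .reprompt('Say "telephone number" followed by your number.')
      .withShouldEndSession(false)
      .getResponse();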
It is not currently possible to cause Alexa to start speaking without a user first having spoken to it.
So for example, I cannot create a skill that will announce to my wife that "Ron is on his way home" whenever I leave work.
I can however, create a skill that my wife can ask "Is Ron on his way home", and it can respond with the answer.
Also, the new notifications feature allows a skill to post a notification, but this just causes the Alexa device to light up its ring to indicate that a notification is waiting. A user must still ask Alexa to read it. In the example I cite above, that might be OK.
A lot of us would love for Alexa to be able to spontaneously start talking, but Amazon has not enabled that. Can you just imagine the opportunity for advertising abuse that functionality might enable? Nothing like sitting down watching TV and having Alexa start talking, "Hey, wouldn't some Wonder Popcorn taste great about now? We're having a sale..."