How to fire the FallbackIntent even if the user's dialogue falls into some other intent - Alexa

I am developing a skill and everything is working well. I have set up my utterances so that if the user says something unrelated, I send it to the FallbackIntent. One of my utterances contains a {name} slot, so the user can speak any name, but I have also defined a fixed list of allowed names. If the user says one of the defined names, everything works great, and if they say something unrelated like "what is the weather in Chicago", it goes to the FallbackIntent as expected. The problem is that if the user speaks a name that is not in my list, it still lands in the defined intent. What I want is that if the user says a valid name that is not in my defined list, it is also redirected to the FallbackIntent. Is there any way I can call an intent under a given condition? I am using PHP.

When you define a custom slot, Alexa takes its values only as samples. Values that are not in the slot-value list will also be passed to you, and with respect to your intent those slot values are valid, hence that intent is triggered.
The solution is to validate the slot values in your backend and return an appropriate response.
In your case, if you get any name other than the ones you have defined, respond with an error or give the FallbackIntent's response (a sketch follows the quoted documentation below).
When you create a custom slot type, a key concept to understand is that this is training data for Alexa's NLP (natural language processing). The values you provide are NOT a strict enum or array that limit what the user can say. This has two implications: 1) words and phrases not in your slot values will be passed to you, 2) your code needs to perform any validation you require if what's said is unknown.
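Here is a minimal sketch of that backend check, written for the Node.js ASK SDK v2 (the same logic applies in PHP); the intent name NameIntent, the slot name name, and the allowed-name list are placeholders, not taken from the question:

const Alexa = require('ask-sdk-core');

// Hypothetical allowed names; in the question these are the values defined on the custom slot
const ALLOWED_NAMES = ['alice', 'bob', 'carol'];

const NameIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'NameIntent';
  },
  handle(handlerInput) {
    const name = (Alexa.getSlotValue(handlerInput.requestEnvelope, 'name') || '').toLowerCase();
    if (!ALLOWED_NAMES.includes(name)) {
      // Not in the defined list: answer the way your FallbackIntent would
      return handlerInput.responseBuilder
        .speak("Sorry, I don't know that name. Please choose one of the names I listed.")
        .reprompt('Which name would you like?')
        .getResponse();
    }
    return handlerInput.responseBuilder
      .speak(`Okay, ${name}.`)
      .getResponse();
  },
};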

Related

Alexa Skill - Programmatically Enable/Disable Slot Matching

I have an Alexa skill that at one point asks for names, and at another point asks for numbers. The names are being mapped to a slot of type AMAZON.FirstName and the numbers to a slot of type AMAZON.NUMBER. The problem is that Alexa is aggressively interpreting even number values as names. (e.g. Saying "eight" is likely to be cast as the name "Tate.")
From what I can tell, Dialog Delegation is useful only if you know exactly how many of each type you need to capture. But in my case there are a variable number of times I will need to capture a name, so I can't just fill that slot once and be done with it.
Ideally I would like a way to programmatically turn the slots on and off. So when I prompt the user for a name, any utterance CAN ONLY BE MAPPED TO A NAME or else rejected (obviously HELP and EXIT, etc would still work). And then when I ask for a number, any utterance WILL ONLY BE MAPPED TO A NUMBER, it won't even try to cast it into the type AMAZON.FirstName.
Is there any way to achieve that? Or are there any other workarounds for scenarios like this?
I'd change the approach you're taking. You have a great tool for validation before even getting to the code itself: click the slot you're trying to validate, then click on the "Validations" tab.
Right there you can add either one or two rules. If you use one rule, add "not within a set of values" and type "one", "two", etc., so those numeric values are kept out of your name slot.
If you use the two-rule validation, add "Value within slot type's slot values"; that way you will only accept values that are inside the AMAZON.FirstName slot type.
You don't really need to enable or disable a slot; you can simply take both in the same utterance. Just make sure you validate your slots properly, and that way you'll keep invalid data from getting into your skill :) (a rough interaction-model excerpt follows the link below)
Read more: https://developer.amazon.com/es-mx/docs/custom-skills/validate-slot-values.html
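These console settings end up in the skill's interaction model JSON. A rough excerpt of what the dialog section can look like with a one-rule validation of that kind; the intent name, slot name, prompt id, and prompt text here are hypothetical:

"dialog": {
  "intents": [
    {
      "name": "GetNameIntent",
      "slots": [
        {
          "name": "firstName",
          "type": "AMAZON.FirstName",
          "validations": [
            {
              "type": "isNotInSet",
              "values": ["one", "two", "three", "four", "five"],
              "prompt": "Slot.Validation.firstName.notANumber"
            }
          ]
        }
      ]
    }
  ]
},
"prompts": [
  {
    "id": "Slot.Validation.firstName.notANumber",
    "variations": [
      { "type": "PlainText", "value": "That sounds like a number. Please tell me a name." }
    ]
  }
]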

Alexa: How to get the user request's text in the default fallback intent?

I have an utterance like
"what is {my_slot}"
and "my_slot" is having values like Java, Oracle, PHP
Suppose user ask for a question "What is js", which is not in my slot value, Alexa's defaultfallback is triggerd in the above situtaion.
How to get the text message / slot value which is given by the user?
The slot value samples that you provide are treated as training data, so values that are not in your list will also be returned to you.
When you create a custom slot type, a key concept to understand is that this is training data for Alexa's NLP (natural language processing). The values you provide are NOT a strict enum or array that limit what the user can say. This has two implications: 1) words and phrases not in your slot values will be passed to you, 2) your code needs to perform any validation you require if what's said is unknown.
If your interaction model design is correct, then when the user says "What is js" that intent will be triggered and the slot value "js" will be passed to you. You should then validate the slot value and respond with an appropriate response.
AMAZON.FallbackIntent is triggered only when there is no appropriate intent mapping for the user utterance.
If your utterance "What is js" is triggering AMAZON.FallbackIntent, update or change your interaction model so that it maps to the associated intent and slot correctly, then validate "js" in your backend (a sketch follows).

Adding a slot changes the intent

I have created a dialog that checks for an intent and an entity to trigger the response, and I have also added slots to capture the missing entities. But when the user enters the slot value, it changes the intent and therefore changes the final response. I have tried adding a context variable and deleting it after the response, but it gets deleted before the response and I get an empty context variable in the response.
For example, I added a slot to capture missing color values in an intent, say 'looking', where the color values are like 'I, G, H', and there is also an intent, say 'Goodbye', which is also trained on values like 'G or H'. So when a user fills the slot value with 'G or H', it overrides the previous intent 'looking' with 'Goodbye' and my final response changes. What is the best way to handle this kind of flow?
The current intent is based on the latest utterance from the end user. So when someone types a follow-up to a slot, the intent will change, and this is intended.
A common confusion is that this impacts the dialog tree, because when you test it in "Try it out" you see the intent change. Unless your dialog tree is explicitly looking for the intent after the slot, it has no impact whatsoever.
If you do need it to stay the same, you can send the intent object back in your context (a sketch follows). This stops Watson Assistant from trying to detect the intent again.
The danger here is that what you send back might not reflect what the user actually entered. For example, they may say something that should trigger the handler of a slot, and forcing the intent will disable that ability.
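A rough illustration of that idea with the Watson Assistant v1 Node SDK (ibm-watson); the workspace ID, credentials, and the 'looking' intent below are placeholders, and the essential point is simply that an intents array passed on the request is used instead of running intent detection on the new utterance:

const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const assistant = new AssistantV1({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: 'YOUR_APIKEY' }),
  serviceUrl: 'YOUR_SERVICE_URL',
});

// "previous" is the /message response from the turn that started slot filling.
// Re-sending its intents pins the conversation to #looking while the user
// answers the slot prompt with "G" or "H".
async function answerSlotPrompt(previous, userText) {
  const response = await assistant.message({
    workspaceId: 'YOUR_WORKSPACE_ID',
    input: { text: userText },
    context: previous.result.context,
    intents: previous.result.intents, // e.g. [{ intent: 'looking', confidence: 1 }]
  });
  return response.result;
}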

Elicit Slot from within HelpIntent in Alexa

I have implemented a multi-turn dialog for Alexa. The HelpIntent provides different help texts depending on the state of the dialog. After the user has triggered the HelpIntent and has been given the help text, I want to elicit a specific slot with the ElicitSlot directive.
This does not seem to be supported, since you can only elicit slots of the current intent, and the HelpIntent does not have slots.
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs/issues/162
My question now is: How can I return to my multi-turn dialog and elicit a specific slot after the user triggered the HelpIntent?
You can now use intent chaining to elicit a slot from a different Intent. For example:
return handlerInput.responseBuilder
  .speak('Which drink would you like?') // you supply the prompt when eliciting the slot
  .addDirective({
    type: 'Dialog.ElicitSlot',
    slotToElicit: 'drink',
    updatedIntent: {
      name: 'OrderIntent',
      confirmationStatus: 'NONE'
    }
  })
  .getResponse();
See this blog post.
The documentation states that:
Implementing the built-in intents is recommended, but optional.
I recommend that you define your own MyHelpIntent with utterances that overlap the AMAZON.HelpIntent's, but with the slot types you need (a rough interaction-model excerpt follows this answer).
In this case, your service receives an IntentRequest for MyHelpIntent, even though these phrases overlap with the built-in AMAZON.HelpIntent.
The documentation also states that this practice is not recommended, because the built-in intent may have better coverage of sample utterances, and that it is better practice to extend the built-in intents. But, frustratingly, the HelpIntent does not support slots, so the only way is a custom help intent.
I don't see a way to use Dialog Directives with the built-in Intents.
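For concreteness, such a custom help intent might look roughly like this in the languageModel section of the interaction model; the intent name, slot, and sample phrases are illustrative only, and the slot does not need to appear in the samples, it just has to be declared so it can be elicited later:

{
  "name": "MyHelpIntent",
  "slots": [
    { "name": "drink", "type": "DrinkType" }
  ],
  "samples": [
    "help",
    "help me",
    "what can i do",
    "i need help"
  ]
}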
Here's a convoluted workaround that might work (there's no straightforward way right now, Nov 2018); a rough code sketch follows these steps:
On every loop of the multi-turn dialog, save your dialog-based intent in the session attributes (the whole intent object; you can use the intent.name as the key).
On every triggered intent (even the HelpIntent), save the intent name in a lastIntent session attribute, to keep track of the previous intent name.
The user triggers help and you're now in the HelpIntent. After you provide your help message, append a question at the end that will cause the user to say something that triggers your dialog-based intent again.
Do the following steps when you are in the dialog-based intent, and only if lastIntent was the HelpIntent (from the previous step):
Load the most recent intent data from the session attributes and delete the value and resolutions of the slot you want to elicit (alternatively, if you want to start from scratch, you can delete the remaining slot values too; up to you).
Replace the current intent with the modified intent from the previous step.
Emit a Dialog.Delegate directive with the current intent (your dialog model needs to flag the slot you want to elicit with elicitationRequired set to true).
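A rough ASK SDK v2 sketch of those last steps, reusing the OrderIntent / drink names from the earlier snippet as placeholders:

// Inside the OrderIntent handler
const attrs = handlerInput.attributesManager.getSessionAttributes();

if (attrs.lastIntent === 'AMAZON.HelpIntent' && attrs.OrderIntent) {
  // Returning from help: load the saved intent and clear the slot to re-elicit
  const savedIntent = attrs.OrderIntent;
  if (savedIntent.slots && savedIntent.slots.drink) {
    delete savedIntent.slots.drink.value;
    delete savedIntent.slots.drink.resolutions;
  }
  // Delegate back to the dialog model with the modified intent;
  // 'drink' must have elicitationRequired set to true in the dialog model
  return handlerInput.responseBuilder
    .addDelegateDirective(savedIntent)
    .getResponse();
}

// Normal dialog loop: remember this intent and its name for the next turn
attrs.OrderIntent = handlerInput.requestEnvelope.request.intent;
attrs.lastIntent = handlerInput.requestEnvelope.request.intent.name;
handlerInput.attributesManager.setSessionAttributes(attrs);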

Amazon Alexa: slot types

Can anyone tell me which slot type can capture long sentences in the English (India) language? In English (US) I use AMAZON.StreetAddress for this purpose. Thanks.
Consider using a custom slot type. According to Amazon, use of the AMAZON.LITERAL type is discouraged, and a custom slot type is recommended instead. Usually for custom slot types you specify a set of sample values; however, based on your use case, it sounds like you want a match that is as close as possible to catching all possible inputs, which is Scenario #3 from this Amazon Alexa blog article. As the article says for the "catch all" scenario:
If you use the same training data that you would have used for LITERAL, you'll get the same results.
IMO, of special importance is the last paragraph regarding Scenario #3.
If you're still not getting the results, try setting the CatchAll values to around twenty 2 to 8 word random phrases (from a random word generator – be really random). When the user says something that matches your other utterances, those intents will still be sent. When it doesn't match any of those, it will fall to the CatchAll slot. If you go this route, you're going to lose accuracy because you're not taking full advantage of Alexa's NLP, so you'll need to test heavily.
Hope that helps.
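For concreteness, a catch-all slot along those lines might look like this in the interaction model JSON; the intent name, type name, and the (deliberately random) sample phrases are made up for illustration, and the blog suggests around twenty of them rather than the three shown here:

{
  "interactionModel": {
    "languageModel": {
      "intents": [
        {
          "name": "CatchAllIntent",
          "slots": [ { "name": "any", "type": "CatchAllType" } ],
          "samples": [ "{any}" ]
        }
      ],
      "types": [
        {
          "name": "CatchAllType",
          "values": [
            { "name": { "value": "fuzzy lamp orbits the quiet harbor" } },
            { "name": { "value": "seven crooked pianos argue about rain" } },
            { "name": { "value": "a velvet comet naps under the stairs" } }
          ]
        }
      ]
    }
  }
}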
