I have defined #out_of_scope intent in the conversation workspace and added several user examples. Now when I ask a question that is not specified as an example, the JSON returned has #out_of_scope intent with a very low confidence score.
Why is that? Does this mean that if a user asks a question which is not in the list of specified examples, it will not be mapped to the #out_of_scope intent?
Thank you,
Sandhya
Technically #out_of_scope is just a label. If the user input is not similar to your sample data, it will not be classified with high confidence. The real benefit of the out-of-scope class is that it goes a long way toward making sure your other classes are not highly confident against any "noise".
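For what it's worth, here is a minimal sketch of how an application could act on that behaviour, assuming a Node.js app that reads the intents array from the message response (the 0.5 threshold is an arbitrary assumption, not something the service prescribes):

// Minimal sketch: route on the top intent, treating anything with low
// confidence (or explicitly classified as out_of_scope) as a fallback.
function routeIntent(response) {
  const top = response.intents && response.intents[0];

  // Threshold is a tunable assumption, not a Watson default.
  if (!top || top.confidence < 0.5 || top.intent === 'out_of_scope') {
    return 'fallback';
  }
  return top.intent;
}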
I am working on an Alexa Skill and am having trouble getting Alexa to understand my voice input. As a result, the utterances are not properly matched with the required slots, and Alexa keeps re-asking or getting stuck.
Here are some examples:
affirm: f.m., a from
Speedbird: Speedboard, speaker, speed but, speed bird, spirit, speedbath
wind: windies (wind is), when is home (wind is calm)
runway 03: runway sarah three
takeoff: the cough
Is there any way to train Alexa to understand me properly? Or should I just add all these "false" utterances as sample utterances so Alexa will match my intents properly?
Thanks for any help!
There is no way to train Alexa's underlying language understanding itself.
Yes, as you wrote: I would just add these false utterances as sample utterances for your intents.
This also seems to be what Amazon recommends:
...might show you that sometimes Alexa misunderstands the word "mocha" as "milk." To mitigate this issue, you can map an utterance directly to an Alexa intent to help improve Alexa's understanding within your skill. ....
two common ways to improve ASR accuracy are to map an intent value or a slot value to a failing utterance
Maybe have another person give it a try, to see whether their speech is recognized the same way as yours.
Word-Only Slots
If you're still struggling with this, you should try adding more variations to your slot values (synonyms are an option if you have specific misinterpretations that keep repeating). Consider adding synonyms like speed bird for Speedbird (and take off for takeoff). Non-standard word slots will not resolve as accurately as common words. By breaking Speedbird into two words, Alexa should recognize the slot more successfully. Information about synonyms is here:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/define-synonyms-and-ids-for-slot-type-values-entity-resolution.html#sample-slot-type-definition-and-intentrequest
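As a rough sketch, a slot type definition with synonyms could look something like the snippet below (the slot type name CallsignSlotType and the exact synonym list are just assumptions based on your examples):

{
  "types": [
    {
      "name": "CallsignSlotType",
      "values": [
        {
          "name": {
            "value": "Speedbird",
            "synonyms": ["speed bird", "speedboard", "speed but"]
          }
        }
      ]
    }
  ]
}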
Once you've done this, you'll want to grab the canonical value of the slot, not the interpreted value (e.g. you want Speedbird not speedboard).
To see an example, scroll to the very last JSON code block. The scenario described in that request is that the user said the word track, which is a synonym for the slot value song. You'll see the MediaType value is track (what the user said), but if you look at the resolutions object, inside the values array, the first value object is the actual slot value song (what you want) associated with the synonym.
This Stack Overflow question goes into a little more detail on how to get that value:
How do I get the canonical slot value out of an Alexa request
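In code, that lookup could be sketched roughly like this (the slot name is a placeholder, and it assumes the standard entity-resolution request shape used by the ASK SDK v2 for Node.js):

// Rough sketch: return the canonical (resolved) value of a slot,
// falling back to the raw spoken value if nothing was resolved.
function getCanonicalSlotValue(handlerInput, slotName) {
  const slot = handlerInput.requestEnvelope.request.intent.slots[slotName];
  const authorities = slot
    && slot.resolutions
    && slot.resolutions.resolutionsPerAuthority;

  if (authorities && authorities[0].status.code === 'ER_SUCCESS_MATCH') {
    // e.g. "Speedbird" even if Alexa heard "speedboard"
    return authorities[0].values[0].value.name;
  }
  return slot && slot.value; // what Alexa heard, unresolved
}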
Word and Number Slots
In the case of the "runway 03" example, consider breaking it into two different slots, e.g. {RunwaySlot : Custom} {Number : AMAZON.NUMBER}. You'll have better luck with these more complex slots. The same is true for an example like "red airplane": break it into two slots, {Color : AMAZON.Color} {VehicleSlot : Custom}.
https://developer.amazon.com/en-US/docs/alexa/custom-skills/slot-type-reference.html#number
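A sketch of how that two-slot split might look in the interaction model (the intent name, slot names and sample utterances below are made up for illustration):

{
  "intents": [
    {
      "name": "RunwayIntent",
      "slots": [
        { "name": "RunwaySlot", "type": "RunwaySlotType" },
        { "name": "Number", "type": "AMAZON.NUMBER" }
      ],
      "samples": [
        "request takeoff {RunwaySlot} {Number}",
        "ready for departure {RunwaySlot} {Number}"
      ]
    }
  ]
}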
I'm using Watson Assistant (Plus) and I'm struggling with the correct usage of entities inside intent examples. First of all, inside the web UI I can't find any trace of what is mentioned in the documentation about entity suggestions and entity annotation inside intent examples (we are on the Frankfurt server).
I have many intents in one skill and I decided to use entity mentions in intent examples. Having no trace of a simplified way to add an entity inside a single example, I wrote it directly inside the phrase.
From "What I need to activate MySpecificService ABC ?" to "What I need to activate #services:(MySpecificService ABC)", the same syntax used in dialog nodes.
I have used this method extensively in my skill, following the documentation.
My problems start here: Assistant refuses to detect the right intent when I try it.
If I ask "What I need to activate MyService Name?" the assistant detects a totally wrong intent, with low confidence (0.40 or less), and the correct intent does not appear as the 2nd or 3rd intent either (it does correctly detect the entity).
There are no similar examples using exactly #services:(MySpecificService ABC) in other intents, but I have used other references to #services or #services:(otherservice name) in other intents.
I have read the documentation many times, googled around, and watched videos, but nothing. Evidently I've misunderstood something.
Can you help me?
Intents are the actions/verbs that the user is trying to achieve. In this case, an intent could be the activation itself (no matter what the user is trying to activate).
So you should write different examples of an activation question:
"How can I activate my service?", "I need to activate this service", etc.
The entities are the objects/substantives. In your case, services.
So in your dialog, if you need the assistant to detect the intent + entity, create a node with the condition #activation && @services:MySpecificService
Be aware that if you have several nodes in your dialog, their order will impact the way that your assistant analyzes the input. If the #activation && @services node is before the #activation && @services:MySpecificService node, the first one will be triggered, as "MySpecificService" is one of the @services values.
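As an illustration (not taken from your workspace), two such nodes might look roughly like this in the workspace JSON, with the more specific condition ordered first; node names and texts are made up:

{
  "dialog_nodes": [
    {
      "dialog_node": "activate_specific",
      "conditions": "#activation && @services:MySpecificService",
      "output": {
        "text": {
          "values": ["Here is how to activate MySpecificService."],
          "selection_policy": "sequential"
        }
      }
    },
    {
      "dialog_node": "activate_any",
      "previous_sibling": "activate_specific",
      "conditions": "#activation && @services",
      "output": {
        "text": {
          "values": ["Which service would you like to activate?"],
          "selection_policy": "sequential"
        }
      }
    }
  ]
}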
Hope that this helps!
I'm dealing with entities in intents as well, and I think we're also on the Frankfurt server.
Since you're on the Frankfurt server, I'm pretty sure the reason you're not seeing the annotation options is that you're using the German language.
Annotation as mentioned in the documentation is only available for the English language (unfortunately).
kr
I am developing an app and everything is working well. There is one condition where I have set the utterances, but if the user says something else, I throw it to the FallbackIntent. One of my utterances contains {name}, so the user can speak any name, but I have also defined the range of names the user is allowed to use. My problem: if the user chooses one of the defined names, everything works great, and if the user says something unrelated like "what is the weather in Chicago", it goes to the FallbackIntent as well. But if the user speaks a name that is not in the list, it still goes to the defined intent. What I want is that if the user says something valid but not one of my defined names, it is also redirected to the FallbackIntent. Is there any way I can call an intent under a given condition? I am using PHP.
When you define a custom slot, Alexa takes its values as samples. So values which are not in the slot value list will also be passed to you, and with respect to your intent, those slot values are valid, hence that intent is triggered.
The solution is to validate the slot values at your backend and return an appropriate response.
In your case, if you get any name other than those you have defined, respond with an error or give the FallbackIntent's response.
When you create a custom slot type, a key concept to understand is that this is training data for Alexa's NLP (natural language processing). The values you provide are NOT a strict enum or array that limit what the user can say. This has two implications: 1) words and phrases not in your slot values will be passed to you, 2) your code needs to perform any validation you require if what's said is unknown.
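You mention PHP, but here is the idea sketched in Node.js with the ASK SDK v2 (the intent name, slot name and allowed list below are placeholders):

const Alexa = require('ask-sdk-core');

// Placeholder list of the names your skill actually supports.
const ALLOWED_NAMES = ['alice', 'bob', 'carol'];

const NameIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'NameIntent';
  },
  handle(handlerInput) {
    const spoken = Alexa.getSlotValue(handlerInput.requestEnvelope, 'name');

    // Validate the slot value yourself: Alexa will pass through names
    // that are not in your custom slot's value list.
    if (!spoken || !ALLOWED_NAMES.includes(spoken.toLowerCase())) {
      return handlerInput.responseBuilder
        .speak("Sorry, I don't know that name. Which name would you like?")
        .reprompt('Which name would you like?')
        .getResponse();
    }

    return handlerInput.responseBuilder
      .speak(`Okay, ${spoken}.`)
      .getResponse();
  }
};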
I have implemented a multi-turn dialog for Alexa. The HelpIntent provides different help texts depending on the state of the dialog. After the user has triggered the HelpIntent and been presented with the help text, I want to elicit a specific slot with the ElicitSlotDirective.
Now this seems not to be supported, since you can only elicit slots of the current intent, and the HelpIntent does not have slots.
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs/issues/162
My question now is: How can I return to my multi-turn dialog and elicit a specific slot after the user triggered the HelpIntent?
You can now use intent chaining to elicit a slot from a different Intent. For example:
// Chained from another handler (e.g. your HelpIntent handler) on
// handlerInput.responseBuilder: elicit the 'drink' slot of OrderIntent.
.addDirective({
  type: 'Dialog.ElicitSlot',
  slotToElicit: 'drink',
  updatedIntent: {
    name: 'OrderIntent',
    confirmationStatus: 'NONE'
  }
})
See this blog post.
The documentation states that:
Implementing the built-in intents is recommended, but optional.
I recommend that you define your own custom help intent (e.g. MyHelpIntent) with utterances that overlap those of AMAZON.HelpIntent, but with the slot types you need.
In this case, your service receives an IntentRequest for MyHelpIntent, even though these phrases overlap with the built-in AMAZON.HelpIntent.
The documentation also states that this practice is not recommended, because the built-in intent may have better coverage of sample utterances, and that it is better practice to extend the built-in intents. But (stupidly enough on Amazon's part) the HelpIntent does not support slots, so the only way is a custom help intent.
I don't see a way to use Dialog Directives with the built-in Intents.
Here's a convoluted workaround that might work (there's no straightforward way right now, Nov 2018); a rough code sketch follows the steps below:
On every loop of the multi-turn dialog, save your dialog-based intent in the session attributes (the whole intent object; you can use the intent.name as the key)
On every triggered intent (even the HelpIntent), save the intent name in a lastIntent session attribute (to keep track of the previous intent name)
The user triggers help and you're now in the HelpIntent. After you provide your help message, append a question at the end that will cause the user to say something that triggers your dialog-based intent again
Do the following steps when you are in the dialog-based intent and only if the lastIntent was the HelpIntent (from the previous step):
Load the most recent intent data from the session attributes and, in it, delete the slot value and resolutions of the slot you want to elicit (alternatively, if you want to start from scratch, you can delete the remaining slot values too, up to you)
Replace the current intent with the modified intent from the previous step
Emit a Dialog.Delegate directive with the current intent (your model needs to flag the slot you want to elicit with elicitationRequired set to true)
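A rough sketch of those steps with the ASK SDK v2 for Node.js (intent and slot names such as OrderIntent and drink are assumptions, error handling is omitted, and your HelpIntent handler would set lastIntent in the same way):

const Alexa = require('ask-sdk-core');

const OrderIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'OrderIntent';
  },
  handle(handlerInput) {
    const attributes = handlerInput.attributesManager.getSessionAttributes();
    let intent = handlerInput.requestEnvelope.request.intent;

    // Coming back from the HelpIntent: restore the saved dialog intent and
    // clear the slot we want to elicit again.
    if (attributes.lastIntent === 'AMAZON.HelpIntent'
        && attributes.savedIntents
        && attributes.savedIntents[intent.name]) {
      intent = attributes.savedIntents[intent.name];
      delete intent.slots.drink.value;        // slot to re-elicit (assumed name)
      delete intent.slots.drink.resolutions;
    }

    // Save the current dialog intent and remember which intent ran last.
    attributes.savedIntents = attributes.savedIntents || {};
    attributes.savedIntents[intent.name] = intent;
    attributes.lastIntent = intent.name;
    handlerInput.attributesManager.setSessionAttributes(attributes);

    // Delegate back to the dialog model with the (possibly modified) intent;
    // the slot marked elicitationRequired in the model will then be elicited.
    return handlerInput.responseBuilder
      .addDelegateDirective(intent)
      .getResponse();
  }
};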
For example: If the user writes in the Watson Conversation Service:
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
How can you know that the user doesn't want to have a pool, but would love to live in a condo?
This is a good question and yeah this is a bit tricky...
Currently your best bet is to provide as many examples as possible of the utterances that should be classified as a particular intent as training examples for that intent - the more examples you provide, the more robust the NLU (natural language understanding) will be.
Having said that, note that using examples such as:
"I would want to have a pool in my new house, but I wouldn't love to live in a Condo"
for intent-pool and
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
for intent-condo will make the system correctly classify these sentences, but the confidence difference between them might be quite small (because they are quite similar when you look just at the text).
So the question here is whether it is worth making the system classify such intents out of the box, or instead training the system on simpler examples and using some form of disambiguation if you see that the top N intents have small confidence differences.
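Such a disambiguation step could be sketched like this (the 0.1 gap and the prompt wording are arbitrary assumptions; the intents array returned by the service is sorted by confidence):

// Sketch only: decide whether to route on the top intent or ask the user.
function pickIntent(intents) {
  if (!intents || intents.length === 0) {
    return { action: 'fallback' };
  }
  const [first, second] = intents;

  // If the top two intents are too close to call, ask instead of guessing.
  if (second && first.confidence - second.confidence < 0.1) {
    return {
      action: 'disambiguate',
      prompt: `Did you mean ${first.intent} or ${second.intent}?`
    };
  }
  return { action: 'route', intent: first.intent };
}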
Sergio, in this case, you can test all the valid conditions with peer nodes (continue from), and for the negative case (the else branch) you can use the condition "true".
Try using the intents to determine the flow and the entities to define the conditions.
See more: https://www.ibm.com/watson/developercloud/doc/conversation/tutorial_basic.shtml
PS: you can get the value of an entity using:
This is a typical scenario of multiple intents in the Conversation service. Every time the user says something, the top 10 intents are identified. You can change your dialog node's JSON like this to see all the intents:
{
  "output": {
    "text": {
      "values": [
        "<? intents ?>"
      ],
      "selection_policy": "sequential"
    }
  }
}
For example, when the user makes a statement that triggers two intents, you'll see that intents[0].confidence and intents[1].confidence will both be pretty high, which means that Conversation identified both intents in the user's text.
But there is a major limitation as of now: there is no guaranteed order for the identified intents. That is, if you have said
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo", there is no guarantee that the intent "would_not_want" will be intents[0].intent and the intent "would_want" will be intents[1].intent. So it will be a bit hard to implement this scenario with high accuracy in your application.
This is now easily possible in Watson Assistant. You can do this by creating contextual entities.
In your intent examples, you mark the related words and flag them to the entity you define. The contextual entities will then learn the structure of the sentence. This will not only understand what you have flagged, but also detect entities you haven't flagged.
In the example below, ingredients have been tagged as wanted and not wanted.
When you run it, you get this.
Full example here: https://sodoherty.ai/2018/07/24/negation-annotation/