I have several intents in my skill, including some AMAZON built-in intents. Three of these are AMAZON.FallbackIntent, AMAZON.NoIntent and AMAZON.YesIntent.
When I say "yes", AMAZON.YesIntent is triggered, but when I say "no", AMAZON.FallbackIntent is triggered instead. So I added "no" as an additional utterance to AMAZON.NoIntent, but "no" is still routed to AMAZON.FallbackIntent.
Anyone know why this happens and how to fix it?
You might have an AMAZON.LITERAL or AMAZON.SearchQuery slot in one of your intents, which can interfere with AMAZON.NoIntent. Try removing that slot (or the intent that contains it); AMAZON.NoIntent should then work fine. A similar issue was discussed in the forums.
The reason AMAZON.NoIntent didn't work was that I had another intent whose utterances were partially similar or identical to those of AMAZON.NoIntent. That intent was never actually matched, which made it harder to figure out that it was the one causing the issue.
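For illustration, here is a minimal sketch of the kind of interaction model that can cause this; the custom intent name and its sample utterances are hypothetical:

```json
{
  "intents": [
    { "name": "AMAZON.YesIntent", "samples": [] },
    { "name": "AMAZON.NoIntent", "samples": ["no"] },
    { "name": "AMAZON.FallbackIntent", "samples": [] },
    {
      "name": "DeclineOfferIntent",
      "samples": ["no", "no thanks", "no not now"]
    }
  ]
}
```

Because DeclineOfferIntent's samples overlap with AMAZON.NoIntent, a plain "no" matches neither intent cleanly and can fall through to AMAZON.FallbackIntent; removing or rewording the overlapping samples fixes the routing.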
I'm using Watson Assistant (Plus) and I'm struggling with the correct usage of entities inside intent examples. First of all, in the web UI I can find no trace of what the documentation mentions about entity suggestions and entity annotation inside intent examples (we are on the Frankfurt server).
I have many intents in one skill, and I decided to use entity mentions in intent examples. Since I found no simplified way to add an entity inside a single example, I wrote it directly into the phrase.
From "What I need to activate MySpecificService ABC ?" to "What I need to activate #services:(MySpecificService ABC)", the same syntax used in dialog nodes.
I have used this method extensively throughout my skill, following the documentation.
My problems start here: Assistant refuses to detect the right intent when I try it.
If I ask "What I need to activate MyService Name?", the assistant detects a totally wrong intent with low confidence (0.40 or less), and the correct intent does not appear even as the 2nd or 3rd candidate (the entity, however, is detected correctly).
No other intent has examples using exactly #services:(MySpecificService ABC), but I did use other references to #services or #services:(otherservice name) in other intents.
I have read the documentation many times, googled around, and watched videos, but nothing. Evidently I've misunderstood something.
Can you help me?
Intents are the actions/verbs that the user is trying to achieve. In this case, an intent could be the activation itself (regardless of what the user is trying to activate).
So you should write different examples of an activation question:
"How can I activate my service?", "I need to activate this service", etc.
The entities are the objects/substantives. In your case, services.
So in your dialog, if you need the assistant to detect the intent plus the entity, create a node with the condition #activation && @services:MySpecificService (note that entities are referenced with @, while intents use #).
Be aware that if you have several nodes in your dialog, their order will impact the way your assistant analyzes the input. If the #activation && @services node comes before the #activation && @services:MySpecificService node, the first one will be triggered, since "MySpecificService" is one of the @services values.
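Sketching the node order (using the intent and entity names from above; the layout itself is illustrative):

```
1. #activation && @services:MySpecificService   <- most specific, evaluated first
2. #activation && @services                     <- any other recognized service
3. #activation                                  <- activation question, no service detected
```

Putting the most specific condition first ensures the generic @services node doesn't swallow the input.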
Hope that this helps!
I'm dealing with entities in intents as well, and I think we're also on the Frankfurt server.
Since you're on the Frankfurt server, I'm pretty sure the reason you're not seeing the annotation options is that you're using the German language.
Annotation as mentioned in the documentation is unfortunately only available for English.
Kind regards
I have the utterances below in my intent:
What is the parameter value for today ?
What's the parameter value for today ?
What is a parameter value ?
Whats a parameter value ?
Now, if I ask Alexa "what was my parameter value for today?", it doesn't understand.
So I wonder: do I need to add all possible utterances, with every tense, verb, phrasing and article in mind?
Alexa's NLP is very good, and from my experience I can say that it's constantly improving. The utterances you mentioned above can be minimized to just one or two. Ideally, you should give some 10 to 15 variants or more that cover the specified user intention, not duplicates of the same utterance.
Anyway, I tried your utterances as they are and got the intent recognized.
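To illustrate the "variants, not duplicates" point, a sample set might look like this (the phrasings are just examples):

```
what is the parameter value for today
what was my parameter value today
tell me today's parameter value
give me the current parameter value
I want to know the parameter value
```

Each line covers a genuinely different phrasing of the same intention, rather than minor spelling variants of one sentence.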
I want to write a quiz/interview game where the flow is like this:
"Alexa, start Movie Trivia."
Welcome to Movie Trivia. Do you need to hear the rules?
"No."
What category would you like to play? Comedy, drama, or animation?
"Comedy."
Question 1. In what year was Star Wars released? A, 1970. B, 1977. C, 1980.
"B."
Correct. Your score is 1. Question 2...
I managed to write spaghetti code to accomplish this, with lots of 'if session.attributes.category', 'if session.attributes.needsRules' and so on; three pages of nested if-elsing.
I'm using Node and the official Alexa SDK, so I read its documentation cover to cover, but it's quite confusing and broken in places (examples that haven't worked since June, instructions for old UIs and so on). My question is: what kind of flow is 'correct'/traditional for something like this?
In the code I was writing, I used elicitSlot a lot, which is nice because it lets me listen solely for the things I expect to hear (e.g. answerType "A", "B", "C"). But elicitSlot leads to you re-triggering the same intent. So would it be a matter of having each intent check if a slot is filled, and if not, speak a question and elicit that slot, and if so, set a session attribute and then forward to a different intent?
That seems sloppy. Maybe the solution is to define an askingRulesState, askingCategoryState, askingQuestionState, etc, each with only a single handler. But states with only a single handler seems... wrong?
If I'm going to ask the user a question like "What category would you like to play?", does that mean I need to create a SetCategoryIntent? And if so, how would I prevent the user from triggering that intent except when I want them to?
I realise this is a bit of a big vague question but it's really difficult for me to boil it down to something smaller and clearer, since my issue is that the flow in general is really disorienting to me. I'd appreciate even the smallest tip!
You might have a look here; this will handle a lot of the if-elses and elicitSlot calls you wrote. For the questions, you will indeed probably have to create a state, so you can check whether you are in the question-asking state or just in the set-up state. This will help your skill determine what it has to do. (Don't forget to ALWAYS set this state back, because Alexa can sometimes be tricky if you don't.) You can find more info over here. This also looks like a pretty good example of what you are trying to make.
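As a rough sketch of the state-handler approach with the older alexa-sdk for Node (the state names, prompts and handler bodies are made up for illustration):

```js
const Alexa = require('alexa-sdk');

const states = {
  ASKING_RULES: '_ASKING_RULES',
  ASKING_CATEGORY: '_ASKING_CATEGORY'
};

const newSessionHandlers = {
  'LaunchRequest': function () {
    this.handler.state = states.ASKING_RULES;
    this.emit(':ask', 'Welcome to Movie Trivia. Do you need to hear the rules?',
      'Do you need to hear the rules?');
  }
};

// These intents are only reachable while the session is in ASKING_RULES.
const askingRulesHandlers = Alexa.CreateStateHandler(states.ASKING_RULES, {
  'AMAZON.YesIntent': function () {
    this.handler.state = states.ASKING_CATEGORY;
    this.emit(':ask', 'Here are the rules... What category would you like to play?',
      'What category would you like to play?');
  },
  'AMAZON.NoIntent': function () {
    this.handler.state = states.ASKING_CATEGORY;
    this.emit(':ask', 'What category would you like to play? Comedy, drama, or animation?',
      'Comedy, drama, or animation?');
  }
});

// SetCategoryIntent only fires in the ASKING_CATEGORY state.
const askingCategoryHandlers = Alexa.CreateStateHandler(states.ASKING_CATEGORY, {
  'SetCategoryIntent': function () {
    this.attributes.category = this.event.request.intent.slots.category.value;
    this.emit(':ask', 'Question 1. In what year was Star Wars released? A, 1970. B, 1977. C, 1980.',
      'A, 1970. B, 1977. C, 1980.');
  }
});

exports.handler = function (event, context) {
  const alexa = Alexa.handler(event, context);
  alexa.registerHandlers(newSessionHandlers, askingRulesHandlers, askingCategoryHandlers);
  alexa.execute();
};
```

Because SetCategoryIntent is only registered inside the ASKING_CATEGORY state handler, the user can't trigger it at any other point, which also answers the "how do I prevent the user from triggering that intent" part of the question.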
Hope this helps you forward a bit.
Let's say I have a skill with two custom intents, 'FirstIntent' and 'SecondIntent'. SecondIntent also has a required slot, 'reqSlot'.
Now I would like to sequence my intents. After my skill has sent the FirstIntent response, I would like Alexa to send a request with SecondIntent and a directive to elicit reqSlot, without the user invoking it.
The documentation says here, about the parameter 'updatedIntent':
"Note that you cannot change intents when returning a Dialog directive, so the intent name and set of slots must match the intent sent to your skill."
Is this generally possible, or has anyone figured out a workaround for this scenario?
Thanks :)
There are ways to handle this.
You can try:
When you send your first response, it must set the shouldEndSession flag to false.
The end of your first response's output speech should lead the user into invoking the second intent. For example: 'Say telephone number followed by your number'.
This way the user doesn't need to explicitly invoke your skill to get to the next intent.
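For instance, with the Node alexa-sdk, emitting ':ask' keeps the session open (shouldEndSession: false), and the closing prompt steers the user toward the second intent; the prompt wording here is just an example:

```js
// Inside the FirstIntent handler (alexa-sdk v1 style):
// ':ask' keeps the session open, i.e. shouldEndSession = false,
// so the user's next utterance comes straight back to the skill.
this.emit(':ask',
  'First part done. Now say telephone number followed by your number.',
  'Please say telephone number followed by your number.');
```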
It is not currently possible to cause Alexa to start speaking without a user first having spoken to it.
So for example, I cannot create a skill that will announce to my wife that "Ron is on his way home" whenever I leave work.
I can however, create a skill that my wife can ask "Is Ron on his way home", and it can respond with the answer.
Also, the new notifications feature allows a skill to post a notification, but this just causes the Alexa device to light up its ring to indicate that a notification is waiting. A user must still ask Alexa to read it. In the example I cite above, that might be OK.
A lot of us would love for Alexa to be able to spontaneously start talking, but Amazon has not enabled that. Can you just imagine the opportunity for advertising abuse that functionality might enable? Nothing like sitting down watching TV and having Alexa start talking, "Hey, wouldn't some Wonder Popcorn taste great about now? We're having a sale..."
Alexa just doesn't understand the word 'postpaid', and I've tried it a million times in my skill. I also tried "Alexa, Simon says postpaid", but it repeats something other than postpaid; I don't know why. My sample utterance is like this: "what is the {type} sales", and the type slot has custom values such as "postpaid".
I've looked at AMAZON.LITERAL but didn't quite understand whether it would help in my case. Any workaround would be helpful; thanks in advance.
What does Alexa think you said? Maybe you can use that in your intent as well: your code can check for whatever that transcription is and replace it with "postpaid".
This is a bit of a hack, but it may work for you until Amazon provides us with a way to fine-tune input.
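A sketch of that hack in the skill's Node handler; the misheard variants here are guesses you'd replace with whatever Alexa actually transcribes:

```js
// Map common mishearings to the canonical slot value.
const TYPE_ALIASES = {
  'post paid': 'postpaid',
  'post-paid': 'postpaid',
  'post pay': 'postpaid'
};

function normalizeType(rawValue) {
  if (!rawValue) return undefined;
  const key = rawValue.toLowerCase().trim();
  return TYPE_ALIASES[key] || key;
}

// e.g. inside the intent handler:
// const type = normalizeType(this.event.request.intent.slots.type.value);
```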
Alexa will not always restrict the transcription of a slot to the given values, especially if you have a large list of possible values. Whether you use a custom value list or AMAZON.LITERAL, your best bet may be to check whether the identified value is in fact one of the values in your list and use it if so; otherwise, you can use a phonetic matching/similarity algorithm to select the closest value.
Hit me up if you need example code (in Python in my case)
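The offer above is for Python; as a rough sketch of the same idea in Node (using plain edit distance rather than a true phonetic algorithm, to match the rest of this thread):

```js
// Classic Levenshtein edit distance between two strings.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Return the known slot value closest to what Alexa heard.
function closestValue(heard, knownValues) {
  const h = heard.toLowerCase();
  if (knownValues.includes(h)) return h; // exact match, use as-is
  let best = knownValues[0];
  let bestDist = Infinity;
  for (const v of knownValues) {
    const d = editDistance(h, v);
    if (d < bestDist) { bestDist = d; best = v; }
  }
  return best;
}

// closestValue('post paid', ['postpaid', 'prepaid']) -> 'postpaid'
```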
This feels simplistic, but have you tried breaking postpaid into two words?
{type} == "post paid"
Slots can contain multi-word utterances. Perhaps Alexa will recognize the two distinct morphemes.
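If you go this route, you could also keep "postpaid" as the canonical value and add "post paid" as a synonym via entity resolution, so your code still receives a single ID. A sketch of the slot type definition (the type name SALES_TYPE is made up):

```json
{
  "name": "SALES_TYPE",
  "values": [
    {
      "id": "postpaid",
      "name": {
        "value": "postpaid",
        "synonyms": ["post paid", "post pay"]
      }
    },
    {
      "id": "prepaid",
      "name": { "value": "prepaid", "synonyms": ["pre paid"] }
    }
  ]
}
```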