Alexa skill: 2 intents and an open slot

Is it possible to get Alexa to react correctly when my sample utterances consist of nothing but open slots, like this?
MyNameIsIntent {firstname}
ProjektIntent {projekt}
Right now she always goes back to MyNameIsIntent, even when she has just asked the question for ProjektIntent.
The bad conversation:
Alexa: Welcome, what is your name?
Me: Frank
Alexa: Hi Frank, tell me what project is interesting for you?
Me: Teleshoping
Alexa: Hi Teleshoping, tell me what project is interesting for you?
I'm totally new to Alexa. Could you please give me a tip on whether it is possible for Alexa to answer the right question? I tried to do it with session attributes, but it doesn't work.

An utterance file like that won't work. Alexa uses the text of the utterance - the stuff aside from the slot name, of which you have none - to choose an intent.
So, unfortunately you can't design your interactions to allow for a response of just 'Teleshoping'. Instead, the user would have to say something like 'I'm interested in Teleshoping' (you would need to tell them to say it like that), and the sample utterance would be 'I'm interested in {projekt}'.

You can use states to react differently to the same intents depending on the state of your application.
See https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs#making-skill-state-management-simpler for the official documentation.
You create a dictionary of StateHandlers like this (in Node.js 4.3):
var Alexa = require('alexa-sdk'); // alexa-sdk v1

var askNameStateHandlers = Alexa.CreateStateHandler("askNameState", {
    "MyIntent": function () { /* ... */ }
    // more handlers ...
});
Then you register your handlers like this, assuming you have one dictionary of stateless handlers, defaultHandlers, and two dictionaries of handlers for specific states, askNameStateHandlers and askProjectStateHandlers:
exports.handler = function (event, context, callback) {
    var alexa = Alexa.handler(event, context);
    alexa.registerHandlers(defaultHandlers, askNameStateHandlers, askProjectStateHandlers);
    alexa.execute();
};
To change the state of your application, simply assign it like this inside a handler function:
this.handler.state = "askNameState";
The alexa-sdk will then take care of calling the right handler depending on your application state.
There is also a full example implementation at https://github.com/alexa/skill-sample-nodejs-highlowgame
Note that this way you will only have one intent, MyIntent, that accepts the answers to both questions; which of your functions handles the result is decided by the application state alone.
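Putting the pieces together, here is a minimal sketch of how the state handlers could drive the name/project conversation from the question. The slot name answer and the state names are assumptions for illustration, not part of the original skill:

var Alexa = require('alexa-sdk');

var defaultHandlers = {
    "LaunchRequest": function () {
        // Start the conversation in the ask-name state.
        this.handler.state = "askNameState";
        this.emit(':ask', 'Welcome, what is your name?', 'What is your name?');
    }
};

var askNameStateHandlers = Alexa.CreateStateHandler("askNameState", {
    "MyIntent": function () {
        var name = this.event.request.intent.slots.answer.value;
        this.attributes.firstname = name;
        // Switch state so the next answer is treated as a project.
        this.handler.state = "askProjectState";
        this.emit(':ask', 'Hi ' + name + ', tell me what project is interesting for you?');
    }
});

var askProjectStateHandlers = Alexa.CreateStateHandler("askProjectState", {
    "MyIntent": function () {
        var projekt = this.event.request.intent.slots.answer.value;
        this.emit(':tell', 'Great, I will remember that you are interested in ' + projekt + '.');
    }
});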

And just to add a little further clarification to Tom's answer, Alexa can't tell if the word "teleshoping" is your first name or project.
If you use longer utterances, as Tom mentions, such as "my name is {firstname}" and "I want to {projekt}", then Alexa will have no trouble telling them apart.
You can also help Alexa tell the difference by filling in the slot values, but this assumes that you know what the possible projekt values will be.
firstname can use the built-in AMAZON.US_FIRST_NAME slot type.
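For illustration, here is a hedged sketch of the relevant interaction model fragment; the custom slot type ProjektType and its values are assumptions, not from the question:

{
  "intents": [
    {
      "name": "MyNameIsIntent",
      "slots": [{ "name": "firstname", "type": "AMAZON.US_FIRST_NAME" }],
      "samples": ["my name is {firstname}"]
    },
    {
      "name": "ProjektIntent",
      "slots": [{ "name": "projekt", "type": "ProjektType" }],
      "samples": ["I am interested in {projekt}", "I want to {projekt}"]
    }
  ],
  "types": [
    {
      "name": "ProjektType",
      "values": [{ "name": { "value": "Teleshoping" } }]
    }
  ]
}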

Related

Alexa having trouble understanding my voice input

I am working on an Alexa skill and am having trouble getting Alexa to understand my voice input. As a result, the utterances are not properly matched with the required slots... and Alexa keeps re-asking or getting stuck.
Here are some examples (intended word: what Alexa heard):
affirm: f.m., a from
Speedbird: Speedboard, speaker, speed but, speed bird, spirit, speedbath
wind: windies (wind is), when is home (wind is calm)
runway 03: runway sarah three
takeoff: the cough
Is there any way to train Alexa to understand me properly? Or should I just add all these "false" utterances as sample utterances so Alexa will match my intents properly?
Thanks for any help!
There is no way to train Alexa's underlying language understanding itself.
Yes, as you wrote: I would just add these false utterances as matches for your intent.
This also seems to be what Amazon recommends:
...might show you that sometimes Alexa misunderstands the word "mocha" as "milk." To mitigate this issue, you can map an utterance directly to an Alexa intent to help improve Alexa's understanding within your skill. ... two common ways to improve ASR accuracy are to map an intent value or a slot value to a failing utterance
Maybe have another person give it a try, to see whether their speech is recognized the same way as yours.
Word-Only Slots
If you're still struggling with this, try adding more variations to your slot values (synonyms are an option if you have specific misinterpretations that keep repeating). Consider adding synonyms like speed bird for Speedbird (and take off for takeoff). Non-standard words in slots will not resolve as accurately as common words; by breaking Speedbird into two words, Alexa should recognize the slot more reliably. Information about synonyms is here:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/define-synonyms-and-ids-for-slot-type-values-entity-resolution.html#sample-slot-type-definition-and-intentrequest
Once you've done this, you'll want to grab the canonical value of the slot, not the interpreted value (e.g. you want Speedbird not speedboard).
To see an example, scroll to the very last JSON code block. The scenario described in this request is that the user said the word track, which is a synonym for the slot value song. You'll see the MediaType value is track (what the user said), but if you look at the resolutions object, inside the values array, the first value object is the actual slot value song (what you want), associated with the synonym.
This Stack Overflow question goes into a little more detail on how to get that value:
How do I get the canonical slot value out of an Alexa request
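A minimal sketch of reading the canonical value (this follows the ASK SDK v2 request envelope shape; the helper name getCanonicalSlotValue is hypothetical):

// Prefer the canonical value from entity resolution over the raw value.
function getCanonicalSlotValue(handlerInput, slotName) {
    const slot = handlerInput.requestEnvelope.request.intent.slots[slotName];
    const authorities = slot.resolutions
        && slot.resolutions.resolutionsPerAuthority;
    if (authorities && authorities[0].status.code === 'ER_SUCCESS_MATCH') {
        // The first matched value carries the canonical slot value,
        // e.g. 'Speedbird' when the user said 'speed bird'.
        return authorities[0].values[0].value.name;
    }
    // No resolution match: fall back to what Alexa heard.
    return slot.value;
}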
Word and Number Slots
In the case of the "runway 03" example, consider breaking this into two different slots, e.g. {RunwaySlot : Custom} {Number : AMAZON.NUMBER}. You'll have better luck with these more complex slots. The same is true for an example like "red airplane": you'll want to break it into two slots, {Color : AMAZON.Color} {VehicleSlot : Custom}.
https://developer.amazon.com/en-US/docs/alexa/custom-skills/slot-type-reference.html#number
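A hedged fragment of what such an intent could look like in the interaction model (the intent name RunwayIntent and the custom type RunwayWords are made up for illustration):

{
  "name": "RunwayIntent",
  "slots": [
    { "name": "RunwaySlot", "type": "RunwayWords" },
    { "name": "Number", "type": "AMAZON.NUMBER" }
  ],
  "samples": [
    "{RunwaySlot} {Number}",
    "cleared for takeoff {RunwaySlot} {Number}"
  ]
}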

Alexa Skills, make AMAZON.FallbackIntent return empty string

I am trying to make a mock interview skill on Alexa where the skill asks the user a question, for example: "tell me about your background and experiences".
The user would give an answer, and when the user is done answering, he/she can say "next question" to get the next question.
So "next question" is really the only intent the app is waiting to hear. The problem is when the user is giving an answer for example:
"My name is Bob, I am from New York, I studied biology, etc.",
the session is still live, and Alexa obviously doesn't understand the intent so AMAZON.FallbackIntent gets triggered.
Is there a way to just return an empty string when AMAZON.FallbackIntent gets called so the mock interview session doesn't get disrupted?
Thank you!
It sounds like you need to control the session and constrain the user.
IMO Alexa has a lot of trouble with long user utterances. The problem really stems from the interaction model and the unpredictability of what a user will say. This blog post sheds some light on VUI issues (https://medium.com/hackernoon/lessons-learned-moving-from-web-to-voice-development-35daa1d301db). tl;dr - you have to maintain state and context.
One approach you can take is to ask the user specific questions. "What is your name?" should map to one intent and update the session/persistence with the slot value. Then you respond with the next question you want the user to answer (e.g. "Where do you live?", "What university did you attend?"), having another intent ready to handle that slot value. You have to realize that users can say anything at any point in your Alexa skill session, and your skill has to handle it.
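A minimal sketch of that pattern (ASK SDK v2; the intent name NameIntent, the slot name name, and the question text are assumptions for illustration):

const Alexa = require('ask-sdk-core');

const NameIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'NameIntent';
    },
    handle(handlerInput) {
        const attributes = handlerInput.attributesManager.getSessionAttributes();
        // Store the answer so later turns keep their context.
        attributes.name = Alexa.getSlotValue(handlerInput.requestEnvelope, 'name');
        handlerInput.attributesManager.setSessionAttributes(attributes);
        // Immediately ask the next constrained question and keep the mic open.
        return handlerInput.responseBuilder
            .speak('Thanks, ' + attributes.name + '. Where do you live?')
            .reprompt('Where do you live?')
            .getResponse();
    },
};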
Here's an Amazon Developer blog post that can help you better understand dialog management and slot confirmation: https://developer.amazon.com/blogs/alexa/post/3a23c045-b568-4a6a-8a8c-fd5511a08053/build-advanced-alexa-skills-confirm-what-customers-want-with-dialog-management

Porting an Alexa Skill - completing or continuing the dialog

I have a skill on Alexa, Cortana, and Google, and in each case there is a concept of either terminating the flow after speaking the result or keeping the mic open to continue the flow. The skill mostly consists of an HTTP API call that returns the information to speak and display, plus a flag for whether or not to continue the conversation.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the JavaScript file and returns a JSON object containing three elements: speech (the text to speak, possibly SSML); displayText (the text to display to the user); and shouldEndSession (true or false).
The action calls the JavaScript code with type Search and a collect segment. It then outputs the JSON object mentioned above. This all works fine except I don't know how to handle the shouldEndSession. Is this done in the action perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But something like "Hi Bixby, ask car repair about replacing my alternator". In this case, the response may be "I need to know what model car you have. What model car?". The user would then say "Toyota" and then Bixby would complete the dialog with the answer or maybe ask for more info.
I'd appreciate some pointers.
Thanks
I think this can easily be done in Bixby with an input prompt when a required input is missing. You can build an input-view to further enhance the user experience.
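As a very rough illustration (all names here are hypothetical, not from the question): marking an input as Required in the action model makes Bixby prompt for it automatically when the user's utterance didn't provide it:

action (FindRepairInfo) {
  type (Search)
  collect {
    input (carModel) {
      type (CarModel)
      // min (Required) makes Bixby ask "What model car?" on its own
      // whenever carModel is missing from the request.
      min (Required) max (One)
    }
  }
  output (RepairInfo)
}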
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby enabled Samsung device, check our marketplace for ideas and inspirations.

Elicit Slot from within HelpIntent in Alexa

I have implemented a multi-turn dialog for Alexa. The HelpIntent provides different help texts depending on the state of the dialog. After the user has triggered the HelpIntent and has been presented with the help text, I want to elicit a specific slot with the ElicitSlotDirective.
Now this seems not to be supported, since you can only elicit slots of the current intent, and the HelpIntent does not have slots.
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs/issues/162
My question now is: How can I return to my multi-turn dialog and elicit a specific slot after the user triggered the HelpIntent?
You can now use intent chaining to elicit a slot from a different intent. For example, from a HelpIntent handler (ASK SDK v2; the speech line here is illustrative):
return handlerInput.responseBuilder
    .speak('Sure. Which drink would you like?')
    .addDirective({
        type: 'Dialog.ElicitSlot',
        slotToElicit: 'drink',
        updatedIntent: {
            name: 'OrderIntent',
            confirmationStatus: 'NONE'
        }
    })
    .getResponse();
See this blog post.
The documentation states that:
Implementing the built-in intents is recommended, but optional.
I recommend that you define your own help intent with utterances overlapping those of AMAZON.HelpIntent, but with the slot types you need.
In this case, your service receives an IntentRequest for MyHelpIntent, even though these phrases overlap with the built-in AMAZON.HelpIntent.
The documentation also notes that this practice is not recommended, because the built-in intent may have better coverage of sample utterances, and that it is better practice to extend the built-in intents. But (stupidly, on Amazon's part) the HelpIntent does not support slots, so the only way is a custom help intent.
I don't see a way to use Dialog Directives with the built-in Intents.
Here's a convoluted workaround that might work (there's no straightforward way right now, Nov 2018):
1. On every loop of the multi-turn dialog, save your dialog-based intent in the session attributes (the whole intent object; you can use intent.name as the key).
2. On every triggered intent (even the HelpIntent), save the intent name in a lastIntent session attribute, to keep track of the previous intent's name.
3. The user triggers help and you're now in the HelpIntent. After you provide your help message, append a question at the end that will lead the user to say something that triggers your dialog-based intent again.
4. Back in the dialog-based intent, and only if lastIntent was HelpIntent, load the most recent intent data from the session attributes and, in it, delete the slot value and resolutions of the slot you want to elicit (alternatively, if you want to start from scratch, you can delete the remaining slot values too; up to you).
5. Replace the current intent with the modified intent from the previous step.
6. Emit a DialogDelegate with the current intent (your model needs to flag the slot you want to elicit with elicitationRequired set to true).
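A rough sketch of those steps with alexa-sdk v1; the intent name OrderIntent and the slot name drink are hypothetical:

'AMAZON.HelpIntent': function () {
    this.attributes.lastIntent = 'AMAZON.HelpIntent';
    this.emit(':ask', 'Here is some help. Now, which drink would you like?');
},
'OrderIntent': function () {
    var attrs = this.attributes;
    var intent = this.event.request.intent;
    if (attrs.lastIntent === 'AMAZON.HelpIntent' && attrs.OrderIntent) {
        // Coming back from help: restore the saved intent and clear the
        // slot we want Alexa to elicit again.
        var saved = attrs.OrderIntent;
        delete saved.slots.drink.value;
        delete saved.slots.drink.resolutions;
        this.event.request.intent = saved;
        attrs.lastIntent = 'OrderIntent';
        // Delegate with the modified intent; the interaction model must
        // mark 'drink' with elicitationRequired set to true.
        this.emit(':delegate', saved);
        return;
    }
    // Normal loop: save the whole intent object and remember its name.
    attrs.OrderIntent = intent;
    attrs.lastIntent = 'OrderIntent';
    this.emit(':delegate');
},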

IntentRequest triggered by Response - without user-invocation

Let's say I have a skill with 2 custom intents, 'FirstIntent' and 'SecondIntent'. SecondIntent also has a required slot, 'reqSlot'.
Now I would like to sequence my intents: after my skill has sent the FirstIntent response, I would like Alexa to send a request with SecondIntent and a directive to elicit reqSlot, without the user invoking it.
They say here, about the parameter 'updatedIntent':
"Note that you cannot change intents when returning a Dialog directive, so the intent name and set of slots must match the intent sent to your skill."
Is this generally possible, or has anyone figured out a workaround for this scenario?
Thanks :)
There are ways to handle this.
You can try:
When you send your first response it must set the shouldEndSession flag to false.
The end of your first response's output speech should lead the user into invoking the second intent. For example: 'Say telephone number followed by your number'.
This way the user doesn't need to explicitly invoke your skill to get to the next intent.
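A hedged sketch of that first response (ASK SDK v2; the speech text is only an example):

// Keep the session open and steer the user toward the second intent.
// Supplying a reprompt implies shouldEndSession: false, so the mic reopens.
return handlerInput.responseBuilder
    .speak('Okay, saved. Now say telephone number followed by your number.')
    .reprompt('Say telephone number followed by your number.')
    .getResponse();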
It is not currently possible to cause Alexa to start speaking without a user first having spoken to it.
So for example, I cannot create a skill that will announce to my wife that "Ron is on his way home" whenever I leave work.
I can however, create a skill that my wife can ask "Is Ron on his way home", and it can respond with the answer.
Also, the new notifications feature allows a skill to post a notification, but this just causes the Alexa device to light up its ring to indicate that a notification is waiting. A user must still ask Alexa to read it. In the example I cite above, that might be OK.
A lot of us would love for Alexa to be able to spontaneously start talking, but Amazon has not enabled that. Can you just imagine the opportunity for advertising abuse that functionality might enable? Nothing like sitting down watching TV and having Alexa start talking, "Hey, wouldn't some Wonder Popcorn taste great about now? We're having a sale..."
