Porting an Alexa Skill - completing or continuing the dialog

I have a skill on Alexa, Cortana and Google, and in each case there is a concept of either terminating the flow after speaking the result or keeping the mic open to continue the conversation. Most of the skill's logic lives in an HTTP API call that returns the information to speak and display, plus a flag indicating whether or not to continue the conversation.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the JavaScript file and returns a JSON object containing three elements: speech (the text to speak, possibly SSML), displayText (the text to display to the user), and shouldEndSession (true or false).
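For reference, a minimal sketch of that payload and how it might be mapped onto an Alexa-style response envelope (the field names mirror the ones described above; buildAlexaResponse is a hypothetical helper, not code from the question):

```javascript
// Hypothetical API payload, matching the three elements described above.
const apiResult = {
  speech: "<speak>Your tires should be rotated every 5,000 miles.</speak>",
  displayText: "Rotate tires every 5,000 miles.",
  shouldEndSession: true
};

// Sketch: map the payload onto an Alexa-style response envelope.
// The shouldEndSession flag passes straight through.
function buildAlexaResponse(result) {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "SSML", ssml: result.speech },
      card: { type: "Simple", title: "Car Repair", content: result.displayText },
      shouldEndSession: result.shouldEndSession
    }
  };
}
```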
The action calls the JavaScript code with type Search and a collect segment, and then outputs the JSON object mentioned above. This all works fine, except that I don't know how to handle shouldEndSession. Is this done in the action, perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But for something like "Hi Bixby, ask car repair about replacing my alternator", the response might be "I need to know what model car you have. What model car?". The user would then say "Toyota", and Bixby would either complete the dialog with the answer or ask for more information.
I'd appreciate some pointers.
Thanks

I think this can easily be done in Bixby with an input prompt, which fires when a required input is missing. You can also build an input-view to further enhance the user experience.
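As a rough sketch of what that looks like in an action model (all of the names below are illustrative, not taken from the question), marking an input as required is what triggers Bixby's prompt:

```
action (FindRepairInfo) {
  type (Search)
  collect {
    input (carModel) {
      type (CarModel)
      // min (Required) makes Bixby prompt the user when this value
      // is missing; an input-view controls how the prompt is presented.
      min (Required) max (One)
    }
  }
  output (RepairInfo)
}
```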
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby-enabled Samsung device, check our marketplace for ideas and inspiration.

Related

Amazon Alexa Skills - How to Add a Loop in the Skill?

I want to create a skill that's a simple game, where first the user launches the skill with its invocation name and then Alexa asks a question, "Shall I roll the dice?" If the user answers "Yes," it rolls the dice, and says the result. Then Alexa asks again, "Shall I roll the dice?" If "Yes," do the same thing. This is the main loop I'm talking about, and it'll continue until the user answers "No" or "Quit" to this question.
I just can't figure out how to add the loop, or where it should go. I've looked at tutorials and videos, and nothing I've found mentions a loop, which I find really odd. But I'm a noob at this.
Any help would be awesome. I've been wanting to do this skill for so long but just am stuck on this loop thing.
I recommend you take some time to understand how a skill works, and then develop a quiz skill from this doc.
You will then have a better understanding of how a request is made to the Alexa service and how a response is returned, as well as the logic behind intents and how slots work.
An Alexa skill is like a card game. The player can select any card at any time. Each card has its own function and is triggered by voice.
So when the skill first asks the user for Shall I roll the dice?, the user will say either yes or no.
If the user says yes, it will then go to your AMAZON.YesIntent,
If the user says no, it will then go to your AMAZON.NoIntent.
But you also need to make sure that the user can also say:
Stop > AMAZON.StopIntent
Anything else, such as "cheese" > AMAZON.FallbackIntent
By doing the quiz skill cited above, you will understand how to build your interaction model effectively.
A loop is straightforward: if the user replies yes, then in your intent handler for AMAZON.YesIntent you trigger the same function, which injects the prompt Shall I roll the dice? into the response builder.
Keep in mind that a user can also ask to repeat. Think of a skill as a personal assistant, not a voicemail. There are many ways to phrase Shall I roll the dice? so it doesn't sound robotic; try implementing several possible response values for a better overall customer experience.
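Stripped of the SDK plumbing, the "loop" is just the same prompt re-emitted from the yes handler. A minimal sketch (the handler names follow the intents above, but the respond helper and dispatch table are stand-ins for the real ASK SDK, not its actual API):

```javascript
// Stand-in for the SDK's response builder: real skills build this
// via the ASK SDK, but the shape of the decision is the same.
function respond(speech, endSession) {
  return { speech, shouldEndSession: endSession };
}

const PROMPT = "Shall I roll the dice?";

const handlers = {
  LaunchRequest: () => respond(PROMPT, false),
  "AMAZON.YesIntent": () => {
    const roll = Math.floor(Math.random() * 6) + 1;
    // Speak the result, then re-ask the same question: this is the loop.
    return respond(`You rolled a ${roll}. ${PROMPT}`, false);
  },
  "AMAZON.NoIntent": () => respond("Okay, goodbye!", true),
  "AMAZON.StopIntent": () => respond("Goodbye!", true)
};

// Each user turn dispatches to one handler; answering yes keeps the
// session open, so Alexa listens for the next yes/no.
```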

IBM Watson Assistant: How do I have the chatbot repeat a response until it recognizes what the user is saying?

I am building a chatbot that needs to be able to have long, branching conversations with users. Its purpose is to engage the user for long periods of time. One of the problems I'm running into is how to handle unrelated responses from a user in the middle of a dialog tree without "resetting" the entire conversation.
For example, let's say they have the following conversation:
Chatbot: Do you like vanilla or chocolate ice cream?
User: Vanilla
Chatbot: (recognizes "vanilla" and responds with appropriate child node) Great! Would you like chocolate or caramel on top?
User: Caramel
Chatbot: (recognizes "caramel" and responds with appropriate child node) That sounds delicious! Do you prefer sprinkles or whipped cream?
User: I would like a cherry!
At that point, my problem is that the chatbot triggers the "anything_else" response and says something like "I didn't understand that," which means that if the user wants to continue the conversation about ice cream, he has to start from the very beginning.
I'm very new to using IBM Watson Assistant, but I did as much research as I could and wasn't able to find anything. Any advice or help would be appreciated! So far the only idea I've had is an "anything_else" option for every single dialog node that could jump back to the next node up, but that sounds extremely complicated and time-consuming. I was wondering whether there is an easier way to just have the chatbot repeat whatever question it is asking until it gets a response that triggers one of the child nodes.
EDIT: It may be helpful to add that what I'm trying to do here is "funnel" the user down certain conversation paths.
In the anything_else node, you can enable "return after digression", which will go back to the previous node and fulfil your requirement.
There is an "Anything else" option that acts as a fallback when the chatbot fails to recognize the intent.
You can take a look at the documentation here.
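If you export the workspace JSON, that toggle shows up as a digression field on the node. Roughly like this (a fragment in the v1 workspace format, shown only as an illustration; the node name and text are made up):

```
{
  "dialog_node": "Anything else",
  "conditions": "anything_else",
  "output": {
    "text": { "values": ["I didn't understand that."] }
  },
  "digress_in": "returns"
}
```

With "digress_in": "returns", the dialog comes back to the interrupted node after the fallback response, so the ice-cream questions are not reset.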

IntentRequest triggered by Response - without user-invocation

Let's say I have a skill with two custom intents, 'FirstIntent' and 'SecondIntent'. SecondIntent also has a required slot, 'reqSlot'.
Now I would like to sequence my intents. After my skill sends the FirstIntent response, I would like Alexa to send a request with SecondIntent and a directive to elicit reqSlot, without the user invoking it.
The docs say here, for the parameter 'updatedIntent':
"Note that you cannot change intents when returning a Dialog directive, so the intent name and set of slots must match the intent sent to your skill."
Is this generally possible, or did anyone figure out a workaround for this scenario?
Thanks :)
There are ways to handle this.
You can try:
When you send your first response, it must set the shouldEndSession flag to false.
The end of your first response's output speech should lead the user into invoking the second intent. For example: 'Say telephone number followed by your number.'
This way the user doesn't need to explicitly invoke your skill to get to the next intent.
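As an illustration, the first response's envelope might look like this (built here as a plain object; the field names follow the standard Alexa response format, and the wording is just the example above):

```javascript
// Sketch of the first intent's response: the output speech leads the
// user into the utterance that triggers the second intent, and the
// session stays open so their next words go straight back to the skill.
const firstResponse = {
  version: "1.0",
  response: {
    outputSpeech: {
      type: "PlainText",
      text: "Say 'telephone number' followed by your number."
    },
    // false keeps the microphone open after Alexa finishes speaking.
    shouldEndSession: false
  }
};
```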
It is not currently possible to cause Alexa to start speaking without a user first having spoken to it.
So for example, I cannot create a skill that will announce to my wife that "Ron is on his way home" whenever I leave work.
I can however, create a skill that my wife can ask "Is Ron on his way home", and it can respond with the answer.
Also, the new notifications feature allows a skill to post a notification, but this just causes the Alexa device to light up its circular ring to indicate that a notification is waiting. A user must still ask Alexa to read it. In the example I cite above, that might be OK.
A lot of us would love for Alexa to be able to spontaneously start talking, but Amazon has not enabled that. Can you just imagine the opportunity for advertising abuse that functionality might enable? Nothing like sitting down watching TV and having Alexa start talking, "Hey, wouldn't some Wonder Popcorn taste great about now? We're having a sale..."

WATSON conversation to fetch real time data from Rest API

We are creating a bot using Watson that will provide the rate of food materials to the end user, along with their availability. To fetch the availability, we need to call a REST API with the food details, which in turn will give us the status.
So here I wanted to know how we can call a REST API from Watson to feed data into the conversation.
In this case, you can use Watson Conversation and create intents with responses based on the food materials.
You'll use a context variable to capture the food the user types, and your application code will do something with this value, in this case providing the status.
You can create one entity with all food values and capture the value in a context variable like this:
{
  "context": {
    "foodValue": "<? #foodtype ?>"
  }
}
Inside your app, debug the returned value; you'll see an array if the user types more than one food value.
With these values you can check availability and return something to the user. I can't show a concrete example because you didn't specify which language you're using.
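The idea is language-independent: treat the context value as possibly a single string or an array before looking anything up. A JavaScript sketch (the function and variable names are illustrative):

```javascript
// The context variable may hold one food value (a string) or
// several (an array). Normalize to an array before processing.
function normalizeFoods(foodValue) {
  if (foodValue == null) return [];
  return Array.isArray(foodValue) ? foodValue : [foodValue];
}

// The app can then call its availability REST API once per food value.
```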
How to use context variable: click here.
See the official documentation for calling the API.
See the official documentation about Conversation Service.
Check out the Weather example project from IBM Developer: it gets the city the user typed and does something with that data in the app, in this case returning the weather.

Alexa skill 2 intents and open slot

Is it possible for Alexa to react correctly when my sample utterances contain only open slots, like this?
MyNameIsIntent {firstname}
ProjektIntent {projekt}
Right now she always goes back to MyNameIsIntent, even when she asks a question for ProjektIntent.
The bad conversation:
Alexa: Welcome, what is your name?
Me: Frank
Alexa Hi Frank, Tell me what project is interesting for you?
Me: Teleshoping
Alexa: Hi Teleshoping, Tell me what project is interesting for you?
I'm totally new to Alexa; could you please give me a tip on how Alexa can answer the right question? I tried to do it with session attributes but it doesn't work.
An utterance file like that won't work. Alexa uses the text of the utterance - the stuff aside from the slot name, of which you have none - to choose an intent.
So, unfortunately you can't design your interactions to allow for a response of just 'Teleshoping'. Instead, the user would have to say something like 'I'm interested in Teleshoping' (you would need to tell them to say it like that), and the sample utterance would be 'I'm interested in {projekt}'.
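In the interaction model, that looks roughly like the fragment below (skill-builder JSON format; ProjectType is an illustrative custom slot type, and the sample phrases are the ones suggested above):

```
{
  "intents": [
    {
      "name": "MyNameIsIntent",
      "slots": [{ "name": "firstname", "type": "AMAZON.US_FIRST_NAME" }],
      "samples": ["my name is {firstname}"]
    },
    {
      "name": "ProjektIntent",
      "slots": [{ "name": "projekt", "type": "ProjectType" }],
      "samples": ["I'm interested in {projekt}"]
    }
  ]
}
```

The carrier phrases around each slot are what let Alexa pick the right intent.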
You can use states to react differently to the same intents depending on the state of your application.
See https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs#making-skill-state-management-simpler for the official documentation.
You create a dictionary of StateHandlers like this (in Node.js 4.3):
var askNameStateHandlers = Alexa.CreateStateHandler("askNameState", {
    "MyIntent": function () { /* ... */ }
    // more handlers ...
});
Then you register your handlers like this, assuming you have one dictionary of state-less handlers, defaultHandlers, and two dictionaries of handlers for specific states, askNameStateHandlers and askProjectStateHandlers:
exports.handler = function(event, context, callback) {
    var alexa = Alexa.handler(event, context);
    alexa.registerHandlers(defaultHandlers, askNameStateHandlers, askProjectStateHandlers);
    alexa.execute();
};
To change the state of your application, simply assign it like this inside a handler function:
this.handler.state = "askNameState";
The alexa-sdk will then take care of calling the right handler depending on your application state.
There is also a full example implementation at https://github.com/alexa/skill-sample-nodejs-highlowgame
Note that this way you will only have one intent, MyIntent, which accepts the answers to both questions, deciding which of your functions should handle the result based on the application state alone.
And just to add a little further clarification to Tom's answer: Alexa can't tell whether the word "Teleshoping" is your first name or your project.
If you use longer utterances as Tom mentions, such as "my name is {firstname}" and "I want to {projekt}" then Alexa will have no trouble discerning.
You can also help Alexa tell the difference by filling in the slot values, but this assumes that you know what the possible projekt values will be.
firstname can use the built-in AMAZON.US_FIRST_NAME slot type.
