I have a workflow that asks a series of yes/no questions. Using the built-in AMAZON.YesIntent and AMAZON.NoIntent, how can I reuse them across multiple questions?
Here is an example of the flowchart.
I created a state called "Injury" in order to have different handlers for this flow. When the user says "No" to the first question, AMAZON.NoIntent emits the BurnIntent question. At this point, if the user says "No" again, it loops back to BurnIntent. How can I determine, inside the Yes and No intent handlers, which intent to move on to? Is there a way to track which question I'm on so I can decide which intent to emit?
One way would be to keep the state, or the current question, in self.attributes. These are session attributes, a kind of session variable shared between intents and cleared once the user's session with the skill ends.
For example, you can store the last question you asked the user with self.attributes["lastQuestionID"] = questionId, or a current "level". Then, once your Yes/No intent is triggered, you can read that value and decide what to do next.
My assumption is that you are using the Node.js SDK, but I'm pretty sure there is something similar for other languages.
You can also read a little bit more about state management in the Tips on State Management at Three Different Levels article.
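As a rough illustration of the idea, here is a plain-Python sketch with a dict standing in for the SDK's session attributes; the question ids and the QUESTIONS/NEXT tables are made up for this example:

```python
# Simplified sketch (no SDK): the dict plays the role of session
# attributes, which persist across turns within one session.

QUESTIONS = {
    "injury": "Were you injured?",
    "burn": "Is it a burn?",
    "cut": "Is it a cut?",
}

# Hypothetical routing table: (last question asked, answer) -> next question
NEXT = {
    ("injury", "no"): "burn",
    ("burn", "no"): "cut",
}

def ask(session_attributes, question_id):
    """Record which question we are asking before emitting it."""
    session_attributes["lastQuestionID"] = question_id
    return QUESTIONS[question_id]

def handle_yes_no(session_attributes, answer):
    """AMAZON.YesIntent / AMAZON.NoIntent both land here; branch on
    the stored question id instead of writing one intent per question."""
    last = session_attributes.get("lastQuestionID")
    next_question = NEXT.get((last, answer))
    if next_question is None:
        return "Okay, let's continue."
    return ask(session_attributes, next_question)
```

With this shape, a "No" after the injury question asks the burn question, a "No" after that asks the cut question, and so on, all through the same two Yes/No handlers.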
An alternative, if your flow isn't too big, is to create a custom yes/no slot type and use it for each question.
This post explains how to do it.
Related
I'm new to Alexa Conversations - still learning. My challenge: In the Alexa Conversation Dialog, I'm trying to enable a skill to ask to play music in a certain room. For example, a user might ask to play Prince in the Kitchen or they might ask to play Let's Go Crazy in the Bedroom or they might ask to play Dua Lipa Radio in the Bathroom. In each case, I need to prompt the user to ask them if the request is an Artist, a Song, a Playlist or a Station. Currently I'm prompting the user and saving their answer in a custom variable called MusicType.
How do I now take the answer and convert it to a different API parameter? In this case I'd want to take MusicType and set it to PlayListName in the API. I don't see how to take the values out of variables and associate them with something else. Help?
I tried using the Inform Args section, but that only keeps saving the variable - it seems conditional logic is needed here?
With the Alexa Conversations Description Language you can add conditional logic, as you already suggested, which can then use further expressions to handle the variable content and/or invoke the API in different ways.
But be aware that this feature is currently in beta (it might change without notice) and is not supported in the UI, so it has to be set up through the CLI on your side.
In an Alexa skill, I have an intent for ordering an ice cream. I know I can check some information in an API or in whatever function I have built in my Python code, but only if that function is literally atomic.
If I have slots asking for more information, I cannot check API info or call my own functions, because having slots in Alexa is not like having slots in a chatbot, for example. There is no "dialog tree" or anything similar.
The ice cream example is simple, but I wanted to try this first before moving on to something more complex.
This would be the very basic format of the intent:
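(The original snippet was not included in the post; a hypothetical minimal shape, sketched in plain Python without the ASK SDK boilerplate and with made-up slot and function names, might look like this:)

```python
# Hypothetical minimal intent handler: slots is a dict of slot name
# -> value as filled so far by the interaction model.
def order_ice_cream_handler(slots):
    flavour = slots.get("flavour")
    if flavour is None:
        # Ask Alexa to elicit the missing slot from the user
        return {"elicit": "flavour",
                "speech": "Which flavour would you like?"}
    return {"speech": f"One {flavour} ice cream, coming up."}
```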
But what I want is to get some information during this execution, inside the intent!
I have searched everywhere, and I have not found anything useful. I asked the Alexa Developer support, and they told me to check Dialog. This was also part of the answer:
The skill interaction model depends completely on the slots defined in your skill. You can make it more dynamic depending on the number of slot values provided by the user, but it will always come from the interaction model already defined in your skill.
You can't create a dynamic response based on the data you are getting from APIs or from the skill.
But seeing other Alexa skills that are deployed, I guess it should be possible!
What I thought of doing is several functions, like:
Get the ice cream: only that - getting an ice cream flavour, identifying the flavour in Python and calling the API, then saving the flavour in a slot/DynamoDB.
Add something: every time the user wants to add something (like whipped cream), call this function, checking the API depending on what the user wants to add.
This feels a bit dirty in my opinion, and it makes the user lose some conversational flow with Alexa, but it might do the job. Is there any way to do it directly in only one function (even though it is atomic)?
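For what it's worth, the split-intent idea described above can be sketched as follows; the dict stands in for a DynamoDB table, and check_stock is a hypothetical placeholder for the real API call:

```python
# Sketch of the split-intent approach: each intent handler is small
# ("atomic"), calls the API, and persists state between turns.

ORDER_STORE = {}  # user_id -> order state; DynamoDB in a real skill

def check_stock(item):
    """Stand-in for the real API lookup."""
    return item in {"vanilla", "chocolate", "whipped cream"}

def get_ice_cream_intent(user_id, flavour):
    if not check_stock(flavour):
        return f"Sorry, we are out of {flavour}."
    ORDER_STORE[user_id] = {"flavour": flavour, "extras": []}
    return f"{flavour} it is. Anything else?"

def add_something_intent(user_id, extra):
    order = ORDER_STORE.get(user_id)
    if order is None:
        return "Let's pick a flavour first."
    if not check_stock(extra):
        return f"Sorry, no {extra} today."
    order["extras"].append(extra)
    return f"Added {extra}."
```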
I have written a smart speaker app for Google Home using DialogFlow, and am now in the process of porting it over to Alexa.
One of the fundamental differences seems to be the inability to easily trigger follow-up intents. For example, I have a dialog that asks the user a series of questions, one after the other, before providing a result based on the answers provided. e.g. ({slot types})
Do you like a low maintenance or working garden? {low maintenance}{working}
Do you like a garden you can relax in? {yes/no}
Would you like to grow vegetables in your garden? {yes/no}
This is easy to achieve using DialogFlow follow-up intents, but I have no clue where to start with Alexa, and there don't seem to be many examples out there. All I can find focuses on slot filling for a single dialog.
I am using my own API service to serve results (vs Lambda).
Can anybody recommend a way of achieving this in an Alexa Skill?
I managed to achieve this by adding a single utterance with three individual slots, one for each of the answers required:
inspire me {InspireMaintenance} {InspireRelax} {InspireVeg}
These slots are all backed by one slot type - Custom_YesNo - which has Yes and No values plus synonyms. My C# service then checks each of these required slots, and where one is missing it triggers the relevant question as a response. Once all slots are filled, it provides the answer.
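The check the service performs can be sketched like this (the original is in C#; this is an illustrative Python version, with the question texts taken from above):

```python
# Find the first unfilled slot and re-ask its question; once all
# three are filled, return the final answer instead.

QUESTIONS = {
    "InspireMaintenance": "Do you like a low maintenance or working garden?",
    "InspireRelax": "Do you like a garden you can relax in?",
    "InspireVeg": "Would you like to grow vegetables in your garden?",
}

def next_prompt(slots):
    """slots maps slot name -> value (None while unfilled)."""
    for name, question in QUESTIONS.items():
        if slots.get(name) is None:
            return ("elicit", name, question)
    return ("answer", None, "Here is your garden recommendation.")
```

Each turn, Alexa re-enters the same intent with whichever slots are now filled, and the service elicits the next missing one.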
Not as intuitive as Dialogflow, and it requires code to achieve what can be done without code in DF, but at least it works :)
I'm looking to build an event management tool and want to give users the possibility to report disabilities.
I need a predictable input referring to an existing disability or handicap, so that the event organizers and the software can know about it.
In a wonderful world, I'd like to use the id or label to redirect the organizer to a help notice, and I want it to be internationalizable.
I don't know why my question was downvoted - is it something to do with Stack Overflow rules, or the question itself?
Anyway, I found I could start answering my question here:
https://bioportal.bioontology.org/ontologies/ICF
I'm having some trouble with intents on API.AI.
I have an intent - let's call it intent01 - aimed at handling any generic info request about some services (e.g. "I would like to know more about your services" and so on). It replies to the user explaining the services and asking whether they want more details about service1 or service2.
I then created 3 intents (intent01.1, intent01.2, intent01.3) to handle the possible user replies to intent01 ("I want to know more about service1", "I want to know more about service2", or "no interest"), because each of them has to provide a different answer. They are linked to the parent intent using the context.
I also wanted to handle a possible direct user question such as "I want to know more about service1", so I created a different intent (intent02), which provides exactly the same answer as intent01.1.
This solution doesn't seem very scalable - does anyone know of a best practice to avoid duplicating intents in such a situation?
Thank you for your time
Stefano
Please see here - I think it resolves your issue. Regards