Can we elicit a slot of a different intent in Alexa?

If our backend receives a request for, say, AMAZON.YesIntent or any other custom intent, can we elicit a slot of a different intent than the triggered one in the response?
Ex:
...
user: Yes
(AMAZON.YesIntent is mapped)
Alexa: Which city do you want to stay in?
(elicit slot of another intent)
...

Yes, this is now possible; see the update section below.
Originally, you couldn't: only an updated intent of the same type could be sent with a Dialog.ElicitSlot directive. The documentation noted:
Note that you cannot change intents when returning a Dialog directive,
so the intent name and set of slots must match the intent sent to your
skill.
If you try, you will receive an "Invalid Directive" card and the error message "There was a problem with the requested skill's response."
Update (April 8th, 2019)
Use the updatedIntent object to specify the new intent whose slot should be elicited. When you update the Intent object originally sent to your skill with the new updatedIntent, include all of the slots, including any empty slots you are not changing.
{
  "type": "Dialog.ElicitSlot",
  "slotToElicit": "slotOfSomeOtherIntent",
  "updatedIntent": {
    "name": "SomeOtherIntent",
    "confirmationStatus": "NONE",
    "slots": {
      "slotOfSomeOtherIntent": {
        "name": "slotOfSomeOtherIntent",
        "value": "string",
        "resolutions": {},
        "confirmationStatus": "NONE"
      }
    }
  }
}
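For concreteness, here is a minimal handler sketch that sends this directive, assuming the ASK SDK v2 for Node.js; the intent and slot names are the same placeholders as in the JSON above:

import * as Alexa from 'ask-sdk-core';

// Responds to AMAZON.YesIntent but elicits a slot belonging to a different
// intent by passing an updatedIntent along with the ElicitSlot directive.
const YesIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.YesIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Which city do you want to stay in?')
      // updatedIntent must list all of SomeOtherIntent's slots, even the empty ones.
      .addElicitSlotDirective('slotOfSomeOtherIntent', {
        name: 'SomeOtherIntent',
        confirmationStatus: 'NONE',
        slots: {
          slotOfSomeOtherIntent: {
            name: 'slotOfSomeOtherIntent',
            confirmationStatus: 'NONE',
          },
        },
      })
      .getResponse();
  },
};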
Read more about Change the intent or update slot values during the dialog
Read more about ElicitSlot Directive

Related

Alexa model evaluation works great but intent is never called in the simulator or on an Alexa device

I'm struggling to build my Alexa interaction model. My application is used for requesting live data from a smart home device. All I basically do is call my server API with a username and password, and I get a value in return. My interaction model works perfectly for requesting the parameters; for example, I can say "Temperature" and it works fine across all testing devices. For that intent I have a custom RequestType.
However, for setting up the username and password I need to use a built-in slot type: AMAZON.NUMBER. As I only need numbers for my credentials, this should work fine in theory.
I have an interaction model set up which works perfectly fine when I press "Evaluate Model" in the Alexa developer console. However, once I go to the Test simulator or to my real Alexa device, it's impossible to call the intent. It always calls one of my other intents (I can see this in the request JSON).
Here's how the intent looks:
{
  "name": "SetupUsername",
  "slots": [
    {
      "name": "username",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "my user id is {username}",
    "username to {username}",
    "set my username to {username}",
    "set username to {username}",
    "user {username}",
    "my username is {username}",
    "username equals {username}"
  ]
}
Whatever I say or type in the simulator, I cannot call this intent. I have no overlaps with other intents. Does anyone see an issue here?
Thank you in advance
EDIT: I just realized that if you want to do account linking on Alexa you need to implement OAuth2. Maybe my intents are never called because they want to prevent me from implementing my own authentication?
UPDATE:
This is the intent that is usually called instead; it's my init intent. So, for example, if I say "my username is 12345", the following intent gets called:
UPDATE 2:
Here is my full interaction model.
(HelpIntent and SetPassword are only for testing purposes, they don't make sense right now)
It's impossible to call SetupUsername with any of the samples in my model.
You need to build the interaction model; saving is not enough.
When you develop your interaction model, you have to save it AND build it. Otherwise only the model evaluation will work (see the documentation about it).
Then, when you test in the test console, you should see in the JSON Input panel, at the bottom, which intent was called:
"request": {
"type": "IntentRequest",
"requestId": "xxxx",
"locale": "en-US",
"timestamp": "2021-10-20T14:38:59Z",
"intent": {
"name": "HelloWorldIntent", <----------- HERE
"confirmationStatus": "NONE"
}
}
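If you also want to see on the backend side which intent actually arrived, a simple request interceptor can log it. This is a sketch assuming the ASK SDK v2 for Node.js and is not part of the original answer:

import * as Alexa from 'ask-sdk-core';

// Logs every incoming request so you can compare what the simulator sends
// with what you expect from your interaction model.
const LoggingRequestInterceptor: Alexa.RequestInterceptor = {
  process(handlerInput) {
    console.log('Incoming request:',
      JSON.stringify(handlerInput.requestEnvelope.request, null, 2));
  },
};

// Registered when building the skill, e.g.:
// Alexa.SkillBuilders.custom().addRequestInterceptors(LoggingRequestInterceptor).lambda();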

How can I get the matched utterance text from an Alexa request instead of the intent name?

I have created an Alexa skill which I want to connect to my chatbot. When I ask Alexa a question, only the intent name comes in the request, but I also want the utterance text. Is it possible to get that utterance?
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.480ebab4-cd67-418e-b67f-eb8a00b74020",
"timestamp": "2020-02-13T06:55:52Z",
"locale": "en-US",
"intent": {
"name": "ask_utterance",
"confirmationStatus": "NONE"
}
}
This is the request. I am correctly getting the intent name, but I want the utterance text, which I will then send to my chatbot. Is it possible to do that?
No, you can't (as of now) get the complete utterance.
The only thing close to a complete utterance is the AMAZON.SearchQuery slot.
Otherwise, you will obtain only slot values.
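For example, if you add an AMAZON.SearchQuery slot (say a slot named query with a carrier-phrase sample such as "tell my bot {query}"), you can read the captured text in your handler and forward it to the chatbot. This is a sketch assuming the ASK SDK v2 for Node.js; the intent name ask_utterance comes from the request above, while the query slot is hypothetical:

import * as Alexa from 'ask-sdk-core';

const AskUtteranceHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'ask_utterance';
  },
  handle(handlerInput) {
    // The SearchQuery slot holds the free-form text spoken after the carrier phrase,
    // which is the closest you can get to the raw utterance.
    const spokenText = Alexa.getSlotValue(handlerInput.requestEnvelope, 'query');
    // ...send spokenText to the chatbot here...
    return handlerInput.responseBuilder
      .speak(`You said: ${spokenText}`)
      .getResponse();
  },
};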

Can I determine whether an Alexa Request was triggered by a routine or a user?

I have a need to differentiate between an explicit request and a request from a routine.
Here is an example. Let's say I am controlling a smart light. The light is able to detect occupancy.
If a user comes into the room and says "turn on the light", it will check occupancy and turn off.
However, if the user creates a scheduled routine to turn the light on, we should disable the occupancy check.
I don't see anything in the documentation for the TurnOn Directive that would indicate the source of the request.
Is there an indicator that I missed? Can I add some indicator? Or has anyone used a different approach to accomplish similar functionality?
The official response from Amazon is that you can't tell the difference. Here is a recent response from Amazon's Alexa developer forum: https://forums.developer.amazon.com/questions/218340/skills-invoking-routines.html
That said, you will generally see additional fields in the launch request if it is launched from a Routine:
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.abunchofnumbers",
"timestamp": "2020-01-18T22:27:01Z",
"locale": "en-US",
"target": {
"path": "AMAZON.Launch",
"address": "amzn1.ask.skill.abunchofnumbers"
},
"metadata": {
"referrer": "amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler"
},
"body": {},
"payload": {},
"targetURI": "alexa://amzn1.ask.skill.abunchofnumbers/AMAZON.Launch",
"launchRequestType": "FOLLOW_LINK_WITH_RESULT",
"shouldLinkResultBeReturned": true
}
The target, metadata, body, payload, targetURI, and launchRequestType fields are generally not found when a user launches a skill with their voice. HOWEVER, I do not believe the existence of these fields is unique to being launched by an Alexa Routine. I suspect you'll find them if the skill was launched when, for example, Alexa asks, "Hey, since you like the Blind Monkey skill, would you like to try Blind Pig?" and you say "yes."
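If you decide to rely on this heuristic anyway, the check boils down to looking for those extra fields on the request. Here is a hedged sketch in TypeScript; the field names are taken from the JSON above, are undocumented, and may change or appear for other non-voice launch paths:

// Heuristic only: these fields are not documented and are not guaranteed to
// mean "launched from a Routine".
function looksLikeRoutineLaunch(request: any): boolean {
  return request.type === 'LaunchRequest'
    && (request.metadata?.referrer !== undefined
      || request.launchRequestType === 'FOLLOW_LINK_WITH_RESULT');
}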

Passing a selected button value from FB Messenger to Dialogflow

I am failing to understand the simple passing of parameters to and fro via webhook. I am trying to build a simple bot using Dialogflow and FB Messenger. I have a requirement to show two buttons to the end user to pick a cake type. I am able to show the options using the custom response below:
{
  "facebook": {
    "attachment": {
      "type": "template",
      "payload": {
        "template_type": "button",
        "text": "What kind of cake would you like?",
        "buttons": [
          {
            "type": "postback",
            "payload": "witheggs",
            "title": "Contain Eggs"
          },
          {
            "type": "postback",
            "payload": "noeggs",
            "title": "Eggless"
          }
        ]
      }
    }
  }
}
Once the user taps one of the two buttons, how do I set it to some variable in Dialogflow and then ask the next set of questions?
I guess you're missing a few steps here. Before I explain what to do, be sure you know what a postback is: when a postback button is tapped, its payload text is sent as a user query to dialogflow.com.
Step 1: I created an intent with a custom payload as follows:
Step 2: Now I created a new intent whose "user says" phrase is noeggs, matching the postback payload noeggs from the previous image.
Step 3: Save and test it in FB Messenger.
So basically, what happens here is: when you click the Eggless button, the postback noeggs is sent as a user query to dialogflow.com, where an intent matches the user says phrase noeggs and sends the response back.
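If you would rather handle the choice in your own webhook instead of a static intent response, the tapped payload arrives as the query text of a normal Dialogflow request. Here is a sketch assuming the Dialogflow ES webhook format and Express; the route, port, and the intent name ChooseEgglessCake are made up for illustration:

import express from 'express';

const app = express();
app.use(express.json());

app.post('/dialogflow-webhook', (req, res) => {
  // For a postback button, queryText is the button's payload, e.g. "noeggs".
  const queryText: string = req.body.queryResult?.queryText;
  const intentName: string = req.body.queryResult?.intent?.displayName;

  if (intentName === 'ChooseEgglessCake') {
    // Store queryText (e.g. in an output context parameter) and ask the next question.
    res.json({ fulfillmentText: 'Great, an eggless cake. What size would you like?' });
  } else {
    res.json({ fulfillmentText: `You picked: ${queryText}` });
  }
});

app.listen(3000);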

Multiple answers for a node in IBM Watson Conversation service

How can I specify multiple answers for a specific dialog node? For example, when the user asks "What is your name?", my VA should reply "My name is Alexander", "You can call me Alexander", or "Friends call me Alex".
Maybe the Conversation service must return a code, and the application then checks the code and chooses a random answer from an answers DB.
For the dialog node that gives the response, select advanced mode and change it to this:
{
  "output": {
    "text": {
      "values": [
        "My name is Alexander.",
        "You can call me Alexander",
        "Friends call me Alex"
      ],
      "selection_policy": "random",
      "append": false
    }
  }
}

Resources