Using sessionAttributes in an Alexa Skill

I am building an Alexa skill and am not quite sure if I am using sessionAttributes correctly. I know sessionAttributes are used to carry a session's data forward to the next invocation.
So I have these two intents:
1) ListToDoItem
In this intent my skill will look into a database and list out the to-do items stored there. After listing the items, Alexa will go on to say "do you want me to list detailed info on these to-do items?". To handle this, I pass the items retrieved in the previous turn as sessionAttributes. When asked to list detailed info on the items, I extract the previously forwarded sessionAttributes and compose a detailed speech response.
So for this intent I have two sample utterances:
list my to-do items
yes
The utterance 'yes' will be used so that the sessionAttributes can be extracted to create a detailed speech response.
2) ListDoneItems
This intent will be used to list out completed items. It is similar to the previous intent, the only difference being that this one lists completed items.
For this intent I will have two sample utterances:
list my completed items
yes
Like before, it has a 'yes' to generate a detailed speech response based on the sessionAttributes.
But the problem I have is that when I reply 'yes' to the ListDoneItems intent's "Do you want me to list the completed items?", the next intent request generated is of type ListToDoItem instead of ListDoneItems, even though I have set shouldEndSession to false in my skill response. This is happening because there is a crossover between the sample utterances of my intents. So is having the same utterance in different intents wrong? How should I design the interaction model to create a multi-turn dialog that uses sessionAttributes?

I think this will be of use to someone searching for an answer.
Basically, you should not include phrases for your re-prompts in the sample utterances; i.e. in my case I should not add 'yes' as an utterance. Instead I should be using AMAZON.YesIntent.
When using AMAZON.YesIntent, you should maintain a state machine in your sessionAttributes pointing to the last invoked intent. For example, if two or more of your intents have a possible case where the user's response could invoke a YesIntent, you should store the last invoked intent's name and the associated session data in the sessionAttributes. Then, in the function which handles the YesIntent, check the state of your previous invocation and delegate control to the corresponding intent handler.
In my case I will store the previously invoked intent's name as a key and its associated data as its value in session.attributes:
"session": {
"new":"false",
"sessionId": "sessionId",
"application": {
"applicationId": "applicationId"
},
"attributes": {
"PreviousIntent": {
"PreviousIntentData"
}
}
In the function which handles the YesIntent, check for the previous state (session.attributes.PreviousIntent) and delegate control to the function which handles that intent.
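For example, with the Node.js alexa-sdk the YesIntent handler could look roughly like this (the detailed-response handler names here are made up for illustration):
'AMAZON.YesIntent': function () {
    // this.attributes maps to session.attributes in the request
    const previous = this.attributes.PreviousIntent;
    if (previous === 'ListToDoItem') {
        this.emit('DetailedToDoItems');   // hypothetical handler
    } else if (previous === 'ListDoneItems') {
        this.emit('DetailedDoneItems');   // hypothetical handler
    } else {
        this.emit(':ask', "Sorry, I'm not sure what you said yes to.",
            'You can say, list my to-do items.');
    }
}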

Related

How to get the number (length) of slot values for Alexa skill

I am trying to generate a random response for my Alexa Skill. I have set it up as:
Intent = myIntent
Slot = mySlot
Slot Type = mySlotType
Slot Values = {A,B,C,D} //the ids are unique numbers 1 - 4
When the user says a word such as A, the skill uses it to create a response. Now I want to add a case for 'random'.
So Slot Values = {random,A,B,C,D}. // ID for random is 0
When the user says random, I want to randomly choose from the other slot values and use that to create a response.
Can a slot value's ID be used to return the slot value itself?
Does anybody know a good way to do this? I am a novice, so excuse any obvious oversights.
This might be a workaround for your problem. You can get the JSON structure of your interaction model and use it as a constant in your Lambda index.js file. I usually use this tool for generating backend code for my skill: https://s3.amazonaws.com/webappvui/skillcode/v2/index.html
When you generate the code through this tool, you'll see that the generated code also includes the whole interaction model as a constant. Since you will have the whole JSON schema of the interaction model at your disposal, you can perform any action on it.
Note: If you don't know where to get the JSON schema of your interaction model, scroll down on the Build tab of your skill in the developer console; you'll find a JSON Editor entry in the left navigation. It will give you the JSON schema of your interaction model.
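As a rough sketch of that idea (the file name and slot type name are assumptions; the object shape follows the JSON editor's schema):
// interactionModel.json is the schema copied from the JSON editor
const model = require('./interactionModel.json');

function randomSlotValue(slotTypeName) {
    const types = model.interactionModel.languageModel.types;
    const slotType = types.find(t => t.name === slotTypeName);
    // Leave out the 'random' entry itself (id "0" in the question)
    const values = slotType.values.filter(v => v.id !== '0');
    const pick = values[Math.floor(Math.random() * values.length)];
    return pick.name.value;
}

console.log(randomSlotValue('mySlotType')); // e.g. "B"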
Alternatively, you can make mySlot optional in the intent definition: for example, add a few utterances that don't contain the slot at all. On the backend you can then check whether the slot is filled; if it is not, you generate a random answer.
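A minimal sketch of that approach with the Node.js alexa-sdk (intent, slot and value names are taken from the question; the handler itself is illustrative):
'myIntent': function () {
    const slot = this.event.request.intent.slots.mySlot;
    let choice = slot && slot.value;
    // Slot empty (utterance without the slot) or the user said "random"
    if (!choice || choice.toLowerCase() === 'random') {
        const values = ['A', 'B', 'C', 'D'];
        choice = values[Math.floor(Math.random() * values.length)];
    }
    this.emit(':tell', 'You got ' + choice);
}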

Ask user for input from LaunchIntent

I'm writing a skill in Node.js 8. I have an intent set up with slots, and it works properly if I say
Ask {skill name} to {utterance}.
I'd like to design my skill so that the user can just say
Open {skill Name}
and on opening it will ask them for input that will then be handled and passed to the intent. I've seen multiple people say that you can't do this, but I've used two skills today that did exactly this. I'm just looking for the correct syntax to do it.
I have:
'LaunchRequest': function() {
    this.response.speak("What note would you like?");
    this.emit(':responseReady');
}
Which seems like it should work, but I'm pretty new to JS and Alexa.
Yes, it is possible.
When a user opens your skill, you can give a welcome message followed by a question.
Ex:
[user] : open note skill
[Alexa] : Welcome to note skill. What note would you like?
----------<Alexa will wait for users input>--------
[user] : ABC note.
[Alexa] : <response>
In order for Alexa to wait for the user's input after the welcome message, you need to keep the session alive. The session is kept alive based on the shouldEndSession parameter in the response; for any request, if it's not provided, shouldEndSession defaults to true. In your case, the response to the LaunchRequest should have shouldEndSession set to false. Only then does the session remain open, and the user can continue the interaction.
Ex:
'LaunchRequest': function() {
    const speech = "Welcome to note skill. What note would you like?";
    const reprompt = "What note would you like?";
    this.emit(':ask', speech, reprompt);
}
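For reference, the ':ask' emit builds a response roughly like the following, with shouldEndSession set to false (trimmed to the relevant fields):
{
    "version": "1.0",
    "response": {
        "outputSpeech": {
            "type": "SSML",
            "ssml": "<speak>Welcome to note skill. What note would you like?</speak>"
        },
        "reprompt": {
            "outputSpeech": {
                "type": "SSML",
                "ssml": "<speak>What note would you like?</speak>"
            }
        },
        "shouldEndSession": false
    }
}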
Read this answer to know more about how you can keep the session alive using ask-nodejs-sdk.
Using Dialog Model
Another way to achieve this is to use Dialog directives. Dialog directives help you fill and validate slot values easily. You can use the directives to ask the user for the information you need to fulfill their request.
More information on Dialog directives here
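A minimal sketch of delegating the dialog with the v1 alexa-sdk (the intent and slot names are made up; the required slots and prompts would be defined in the interaction model):
'CreateNoteIntent': function () {
    if (this.event.request.dialogState !== 'COMPLETED') {
        // Let Alexa keep prompting for any required slots
        this.emit(':delegate');
    } else {
        const note = this.event.request.intent.slots.note.value;
        this.emit(':tell', 'Your ' + note + ' note is ready.');
    }
}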

List in custom Alexa skills

I'm new to Alexa Skills but meanwhile I've read tons of information and tutorials.
Fortunately, I'm currently able to create my own custom skill (based on PHP) on my own server, and it already works using different intents, utterances, slots, etc.
Now I want Alexa to read a list of items (sent via JSON) in PlainText, but I can't find any information on how to do this.
I assume there are two options (please correct me if I'm wrong):
Sending a JSON answer including one item - Alexa reads this item - the user says e.g. "next" - Alexa requests the next item from my server - my server sends the next JSON answer ... and so on.
Sending a JSON answer including all items in an array - Alexa reads each item one after another.
I'm not sure which solution is possible or how to implement it.
So, can anyone help me on this or point me to some information?
Both ways are possible and which one to choose depends on what you are listing.
Using AMAZON.NextIntent
If a single list item includes an item name and some details about it, then reading it all out in one go won't be a good user experience. In this case you can use AMAZON.NextIntent to handle the user's "next" request.
When the user asks for the list, give the first item in your response and keep track of the item index in the response's sessionAttributes. You can also set a STATE attribute, so that you can check this state in the AMAZON.NextIntent handler before you give the next item.
"sessionAttributes": {
"index": 1,
"STATE": "LIST_ITEMS"
}
When the user says "next", check whether the state is LIST_ITEMS and, based on the index, give the next item from your list; then increment the index in the sessionAttributes.
More on sessionAttributes and Response Parameters here
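A minimal sketch of that flow with the Node.js alexa-sdk (the question's backend is PHP, but the idea carries over directly; this assumes the list itself was also stored in the sessionAttributes when first requested):
'AMAZON.NextIntent': function () {
    const items = this.attributes.items || [];
    const index = this.attributes.index || 0;
    if (this.attributes.STATE === 'LIST_ITEMS' && index < items.length) {
        this.attributes.index = index + 1; // move on to the next item
        this.emit(':ask', items[index], 'Say next to hear the next item.');
    } else {
        this.emit(':tell', 'That was the last item.');
    }
}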
Now, if your items are just names, then you can read them one after the other.
In both these solutions it is always good to use SSML rather than PlainText. SSML gives you more control over how your response is spoken.
More on SSML here
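For example, if the items are just names, a single SSML response with short pauses between them sounds much more natural than one flat PlainText string (the items here are placeholders):
const items = ['buy milk', 'call the bank', 'water the plants'];
const speech = '<speak>Your to-do items are: '
    + items.join('<break time="500ms"/> ')
    + '</speak>';
// Return this with outputSpeech type "SSML" instead of "PlainText"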

Alexa custom skill not giving answers to the default queries

I have created a custom skill for Alexa, and I am able to get responses from it. But I'm facing one issue.
I have created a simple skill with the invocation name FullName and an intent FullNameIntent, which has an utterance like "What is my full name"; it returns my full name. But for the next query, if I ask "what is the time now", it again answers with my full name only.
My expected answer is the current time as the response.
You will have to create a separate intent and sample utterances for 'what is the time now'. As you didn't mention anything about an intent for the time, I believe you don't have an intent and sample utterances for it.
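For example, the interaction model would need an entry like this alongside FullNameIntent (the intent name is made up; the backend then needs a matching handler that returns the time):
{
    "name": "CurrentTimeIntent",
    "slots": [],
    "samples": [
        "what is the time now",
        "what time is it"
    ]
}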

How can I set the approvers for all steps in an approval process?

I have an approval process with three steps, all of which are set to Assigned Approver = Manually Chosen. When the user submits the record for approval, I'd like to have Apex code determine who the three approvers are. However, I don't see a way to hook into the approve request submission.
If I submit the approval with Apex using Approval.process(), I can set the initial (and only the initial) approver with ProcessSubmitRequest.setNextApproverIds(). This call leads you to believe you can specify multiple approvers, since it takes an array of Ids, but the array can only have one element, or else a runtime error occurs.
Once I know what the first approver's response is, I can use Apex to submit her response and, again, set the immediately next approver by passing a ProcessWorkitemRequest instance to Approval.process(). An important note here is that the approver must not approve via the standard UI. Instead, they must do something that invokes the Apex code so that we can set who the next approver should be. A trigger on the object under review, or a custom button plus a VF page, could be used to invoke the Apex.
My main question is: how can I make sure that the user does not use the standard approval buttons? They appear in the Approvals related list and on the Salesforce home screen, and may appear in other places as well. Again, if they use the standard submit and approve buttons, I don't have any way to hook in to set the next approver.
We ran into a similar issue a while back and solved it by creating custom lookup fields to certain users. For example, if we wanted to route an approval request up to a Director and then a VP, we added Director__c and MarketVP__c fields to the object. These fields were populated in code by climbing the role hierarchy whenever a request was submitted. Our approval process's steps then chose who the assignee would be based on the values in these fields (first step would be assigned to Related User: Director and the second step would be assigned to Related User: Market VP, etc.).
To get around the standard approval button issue (we had other reasons for hiding it), we just hid that from their homepage layouts and built our own VF page and included it in a custom homepage component. This component functioned as an inbox with links to any records that were pending the user's approval. All user interaction with the approval objects was handled through other VF pages with their own Approve and Reject buttons. I don't know if the objects you're submitting to the approval process even use VF pages, so this may not be feasible for your situation.
A lot of customization for something that shouldn't need it, I know. Might not be the answer you're looking for, but hopefully it's some food for thought.
