Alexa Conversations: Converting from a Variable to an API Parameter?

I'm new to Alexa Conversations - still learning. My challenge: in the Alexa Conversations dialog, I'm trying to enable a skill to play music in a certain room. For example, a user might ask to play Prince in the Kitchen, or Let's Go Crazy in the Bedroom, or Dua Lipa Radio in the Bathroom. In each case, I need to prompt the user to clarify whether the request is an Artist, a Song, a Playlist, or a Station. Currently I'm prompting the user and saving their answer in a custom variable called MusicType.
How do I now take that answer and map it to a different API parameter? In this case I'd want to take MusicType and set it as PlayListName in the API. I don't see how to take the values out of variables and associate them with something else. Help?
I tried using the Inform Args section, but that only keeps saving the variable; it seems conditional logic is needed here?

With the Alexa Conversations Description Language (ACDL) you can add the conditional logic you already suspected you need; its expressions let you branch on the variable's content and/or invoke the API in different ways.
Be aware, though, that the feature is currently in Beta (it might change without notice) and is not supported in the developer console UI, so you have to author it through the ASK CLI.

Related

Alexa without Alexa NLU

Is it possible to use an Alexa skill with a custom NLU instead of Alexa's own? I need this because Alexa doesn't provide context, which is crucial for my business needs. For example, I would be satisfied if I could get the raw user input from the skill.
It is up to the developer to provide context. Check out the AttributesManager in the SDK.
https://developer.amazon.com/en-US/docs/alexa/alexa-skills-kit-sdk-for-nodejs/manage-attributes.html
You can set session or persistent attributes to "remember" the context. For example, I use session attributes and set a variable called "last_asked" to store which Yes or No question they were last asked, so the skill has the context of those generic answers.
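For illustration, the same idea in the ASK SDK for Python, which exposes the same AttributesManager concept as the Node.js SDK linked above. The "last_asked" key comes from the answer; the helper names are made up:

from ask_sdk_core.handler_input import HandlerInput

def remember_question(handler_input: HandlerInput, question_id: str) -> None:
    # Session attributes persist across turns within a single session.
    attrs = handler_input.attributes_manager.session_attributes
    attrs["last_asked"] = question_id

def last_question(handler_input: HandlerInput) -> str:
    # Returns which Yes/No question the user was last asked, if any.
    return handler_input.attributes_manager.session_attributes.get("last_asked", "")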
I don't know what exactly you mean by "user input".
If you want to have:
the audio of what the user is speaking - no, that's not possible.
the complete text of what the user said (speech-to-text) - yes, that's possible. Use a Custom Slot Type with some sample variances.
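A sketch of that speech-to-text route in the ASK SDK for Python. The intent and slot names here are assumptions; the "text" slot would use a broad custom slot type (or AMAZON.SearchQuery) in the interaction model:

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name, get_slot_value
from ask_sdk_model import Response

class CatchAllIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("CatchAllIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        # The recognized text (never the audio) arrives as the slot's value.
        text = get_slot_value(handler_input, "text") or ""
        return (handler_input.response_builder
                .speak("I heard: {}".format(text))
                .response)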
What exactly do you mean by "doesn't provide context"? If the user is already known in your system, you could use account linking.

Checking info in API in an Alexa Skill

In an Alexa Skill, I have an intent for ordering an ice cream. I know I can check some information in an API, or in whatever function I have built inside my Python code, but only if that function is literally atomic.
If I have slots asking for more information, I can't check API info or call my own functions in between, because having slots in Alexa is not like having slots in a chatbot, for example; there is no "dialog tree" or anything similar.
The ice cream example is simple, but I want to try it first before moving on to something more complex.
This would be the very basic format of the intent:
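(The original snippet isn't included in the post; presumably something along these lines in the ASK SDK for Python, with the intent and slot names being guesses:)

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name, get_slot_value
from ask_sdk_model import Response

class OrderIceCreamIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("OrderIceCreamIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        # Alexa fills the slot via the interaction model, then hands over here.
        flavour = get_slot_value(handler_input, "flavour")
        speech = "One {} ice cream, coming up.".format(flavour)
        return handler_input.response_builder.speak(speech).response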
But what I want is to fetch some information during this execution, inside the intent!
I have searched everywhere, and I have not found anything useful. I asked the Alexa Developer support, and they told me to check Dialog. This was also part of the answer:
The skill interaction model depends entirely on the slots defined in your skill. You can make it more dynamic depending on the number of slot values provided by the user, but it will always come from the interaction model already defined in your skill.
You can't create a dynamic response based on the data you are getting from APIs or from the skill.
But seeing other Alexa skills that are deployed, I guess it should be possible!
What I thought of doing is several functions, like:
Get the ice cream: only that, getting an ice cream flavour; identifying the flavour in Python, calling the API, and saving the flavour in a slot/DynamoDB.
Add something: every time the user wants to add something (like whipped cream), call this function, checking the API depending on what the user wants to add.
This is a bit dirty in my opinion, and makes the user lose a bit of flow with Alexa, but it might do the job. Is there any way to do it directly in only one function (even though it is atomic)?
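For what it's worth, the per-turn checks can live in a single handler: let the same handler run on every dialog turn, call the external check as soon as the relevant slot is filled, and hand control back to Alexa for the remaining slots. A sketch with the ASK SDK for Python, where check_flavour_available is a hypothetical stand-in for the real API call:

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name, get_slot_value
from ask_sdk_model import DialogState, Response
from ask_sdk_model.dialog import DelegateDirective, ElicitSlotDirective

def check_flavour_available(flavour):
    # Placeholder for the real availability check against an API.
    return flavour in {"vanilla", "chocolate"}

class OrderIceCreamIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("OrderIceCreamIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        request = handler_input.request_envelope.request
        flavour = get_slot_value(handler_input, "flavour")

        # As soon as the flavour slot is filled, check it against the API,
        # even though the rest of the dialog isn't finished yet.
        if flavour and not check_flavour_available(flavour):
            return (handler_input.response_builder
                    .speak("Sorry, we're out of {}. Which other flavour?".format(flavour))
                    .add_directive(ElicitSlotDirective(slot_to_elicit="flavour"))
                    .response)

        if request.dialog_state != DialogState.COMPLETED:
            # Let Alexa keep eliciting the remaining slots.
            return (handler_input.response_builder
                    .add_directive(DelegateDirective())
                    .response)

        return (handler_input.response_builder
                .speak("One {} ice cream, coming up.".format(flavour))
                .response)

This assumes auto-delegation is disabled for the intent, so the skill sees each turn of the dialog rather than only the completed result.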

Alexa Skill with Multiple Yes No Questions

I have a workflow that asks different Yes/No questions. Using the Amazon built-in Yes and No intents, how can I use them for multiple questions?
Here is an example of the flowchart.
I created a state called "Injury" in order to have different handlers for this flow. When the user says "No" to the first question, the AMAZON.NoIntent emits the BurnIntent question. At this point, if the user says "No", it loops back to the BurnIntent. How can I determine inside the Yes and No intents which intent to move on to? Is there a way to track which question I'm on to determine which intent to emit?
One way would be to keep the state or the current question in self.attributes. That is a kind of session storage which is shared between intents and wiped once the user stops using the skill.
For example, you can store the last question you asked the user with self.attributes["lastQuestionID"] = questionId, or a current "level". Then, once your Yes/No intent has been triggered, you can use that value to decide what to do next.
My assumption is that you are using Node.js SDK. But I'm pretty sure there is something similar for other languages.
You can also read a little bit more about state management in the Tips on State Management at Three Different Levels article.
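The same idea sketched with the ASK SDK for Python (the question IDs and wording below are made up to mirror the Injury/Burn flow in the question):

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response

# For each question ID: what to say and ask next after a yes or a no.
QUESTION_FLOW = {
    "injury": {"yes": ("call_911", "Call 911 now."),
               "no": ("burn", "Is it a burn?")},
    "burn": {"yes": ("treat_burn", "Run cool water over the burn."),
             "no": ("injury", "Is it an injury?")},
}

class YesNoIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return (is_intent_name("AMAZON.YesIntent")(handler_input)
                or is_intent_name("AMAZON.NoIntent")(handler_input))

    def handle(self, handler_input: HandlerInput) -> Response:
        attrs = handler_input.attributes_manager.session_attributes
        answer = ("yes" if is_intent_name("AMAZON.YesIntent")(handler_input)
                  else "no")
        # Which question was the user answering? Default to the first one.
        current = attrs.get("lastQuestionID", "injury")
        next_id, speech = QUESTION_FLOW.get(current, QUESTION_FLOW["injury"])[answer]
        attrs["lastQuestionID"] = next_id  # remember where we are now
        return handler_input.response_builder.speak(speech).ask(speech).response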
An alternative is to make custom yes/no slots and use them for each question, if your flow isn't too big.
This post explains how to do it.

Is it possible to restrict Alexa or an AVS device to a custom skill?

Is it possible to restrict an AVS device (a device running Alexa) to a single skill? If I built an AI skill and have it running on a device, is it possible to keep the experience inside the custom skill so I don't have to keep saying "Alexa, open ..."?
One trick you can do with AVS is to prepend every single request with a sound clip equivalent to: "ask to ..." It's definitely a hack, but I was able to use it with some success.
See my write-up here: https://www.linkedin.com/pulse/adding-context-alexa-will-blaschko-ma
The relevant parts (in case the link goes away):
Regular voice commands don't carry any extra information about the user, but I wanted to find a way to tack metadata onto the voice commands, and so I did just that: glued it right onto the end of the command and updated my intents to know what the new structure would be.
...
In addition to facial recognition, voice recognition could help identify users, but let's not stop there. Any amount of context can be added to a request based on available local data.
"Find frozen yogurt nearby" could silently become "Alexa open Yelp and find frozen yogurt near 1st and Pine, Seattle" using some built-in geolocation in the device (a phone, in this case).
I also use something similar in my open source Android Alexa library to send prerecorded commands: https://github.com/willblaschko/AlexaAndroid
I think you are looking for AWS Lex, which allows you to write Alexa-like skills without the rest of the Alexa feature set.
http://docs.aws.amazon.com/lex/latest/dg/what-is.html

Can you launch an Amazon Echo (Alexa) skill with just the name of the skill? No other connecting words

Is it possible to launch an Alexa App with just its name? This is similar to when you ask it what the weather is.
"Alexa, weather"
However I would like to be able to say
"Alex, weather in Chicago" and have it return that value
I can't seem to get the app to launch without a connecting word. Things like ask, open, tell would count as a connecting word.
I have searched the documentation but can't find mention of it, however there are apps in the app store that do this.
It is documented in the first item here.
I've verified that this works with my own skill. One thing I've noticed is that Alexa's speech recognition is much worse when invoked in this manner presumably because it requires matching against a greater set of possible words. I have to really enunciate in a quiet room to get Alexa to recognize my invocation name in this context.
When developing a custom skill you have to use the connecting words, e.g. "Alexa, ask {your invocation name} to do something."
If you want to pass a variable, you have to specify the sample utterances:
OneshotTideIntent get high tide
OneshotTideIntent get high tide for {City} {State}
Then you handle the cases in your code where the user does not provide these values. For examples, see https://github.com/amzn/alexa-skills-kit-js
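The linked samples are JavaScript; the missing-slot handling looks roughly like this in the ASK SDK for Python (the handler and prompt wording are illustrative):

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name, get_slot_value
from ask_sdk_model import Response

class OneshotTideIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("OneshotTideIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        city = get_slot_value(handler_input, "City")
        state = get_slot_value(handler_input, "State")
        if not city or not state:
            # The user said only "get high tide"; re-prompt instead of guessing.
            return (handler_input.response_builder
                    .speak("For which city and state would you like the tide?")
                    .ask("Which city and state?")
                    .response)
        return (handler_input.response_builder
                .speak("Getting high tide for {}, {}.".format(city, state))
                .response)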
When writing the example phrases you use the following construct:
"Alexa, [connecting word] [your invocation name], [sample utterance]". As far as I have noticed she is rather picky and you have to be exact when invoking custom skill (the voice recognition works way better with built in skills)
EDIT: Launching a skill without a connecting word is possible when developing a "smart home" skill.
