Alexa Skill: Invocation name: How to catch the skill STARTING PHRASE?

I'm working on my upcoming music player skill.
I'd like it to behave differently depending on how the skill is invoked, for example:
1. start the audio streaming immediately:
Play MyMusicSkillName
2. just introduce the skill's behaviour with a welcome/help message:
Open MyMusicSkillName
My question is:
Is there a way to distinguish the invocation VERB, i.e. to get the invocation starting phrase (and tell the two apart in the LaunchRequest event)?
Any ideas?
See documentation: https://developer.amazon.com/docs/custom-skills/understanding-how-users-invoke-custom-skills.html#invoking-a-skill-with-no-specific-request-no-intent
Ask: Alexa, Ask Daily Horoscopes
Begin: Alexa, Begin Trivia Master
Launch: Alexa, Launch Car Fu
Load: Alexa, Load Daily Horoscopes
Open: Alexa, Open Daily Horoscopes
Play: Alexa, Play Trivia Master
Play the game: Alexa, Play the game Trivia Master
Resume: Alexa, Resume Trivia Master
Run: Alexa, Run Daily Horoscopes
Start: Alexa, Start Daily Horoscopes
Start playing the game: Alexa, Start playing the game Trivia Master
Tell: Alexa, Tell Daily Horoscopes
Use: Alexa, Use Daily Horoscopes

Unfortunately, the LaunchRequest does not provide any way to differentiate how the user opened your skill.
However, you can differentiate between a new user and one who has used the skill before by saving something in your database, such as a last_played audio item. Then, when you handle the LaunchRequest, look up that user's ID in your database: if they have a last_played entry, automatically continue playing; if it is a new user, give the introduction/welcome message instead.
Returning users who say "Open" instead of "Play" because they want the introduction probably just want to be reminded of what to do or what options the skill has, which you should handle in the HelpIntent anyway. So if your skill auto-plays for them, they can naturally ask it a question that triggers the HelpIntent.
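For illustration, here is a minimal sketch of such a LaunchRequest handler using the ASK SDK for Python. The persistent attribute last_played, the stream fields, and the spoken prompts are assumptions made for the example, not anything defined by the skill above.

    # Sketch of a LaunchRequest handler (ASK SDK for Python). It assumes a
    # persistence adapter (e.g. DynamoDB) is configured on the SkillBuilder so
    # that persistent_attributes can hold a "last_played" entry per user.
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_request_type
    from ask_sdk_model.interfaces.audioplayer import (
        AudioItem, PlayBehavior, PlayDirective, Stream)


    class LaunchRequestHandler(AbstractRequestHandler):
        def can_handle(self, handler_input):
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            attrs = handler_input.attributes_manager.persistent_attributes
            last_played = attrs.get("last_played")  # hypothetical attribute

            if last_played:
                # Returning user: resume streaming where they left off.
                directive = PlayDirective(
                    play_behavior=PlayBehavior.REPLACE_ALL,
                    audio_item=AudioItem(stream=Stream(
                        token=last_played["token"],
                        url=last_played["url"],
                        offset_in_milliseconds=last_played.get("offset", 0))))
                return (handler_input.response_builder
                        .add_directive(directive)
                        .set_should_end_session(True)
                        .response)

            # New user: give the welcome/help style introduction instead.
            return (handler_input.response_builder
                    .speak("Welcome to My Music Skill. Say play to start listening.")
                    .ask("What would you like to hear?")
                    .response)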

Related

Alexa Conversation: Converting from a Variable to an API Parameter?

I'm new to Alexa Conversations - still learning. My challenge: In the Alexa Conversation Dialog, I'm trying to enable a skill to ask to play music in a certain room. For example, a user might ask to play Prince in the Kitchen or they might ask to play Let's Go Crazy in the Bedroom or they might ask to play Dua Lipa Radio in the Bathroom. In each case, I need to prompt the user to ask them if the request is an Artist, a Song, a Playlist or a Station. Currently I'm prompting the user and saving their answer in a custom variable called MusicType.
How do I now take the answer and convert it to a different API parameter? In this case I'd want to take MusicType and set it to PlayListName in the API. I don't see how to take the values out of variables and then associate them with something else. Help?
I tried using the Inform Args section, but that only keeps saving the variable - it seems conditional logic is needed here?
With the Alexa Conversations Description Language (ACDL) you can add the conditional logic you already suggested yourself, and then use further expressions to handle the variable content and/or invoke the API in different ways.
But be aware that the feature is currently in beta (it might change without notice) and is not supported in the UI, so it has to be done through the CLI on your side.

Alexa custom skill for a device without invocation name (not a smart home skill)

I have created an Alexa custom skill that I am using to control various devices in my house. I am using a custom skill rather than implementing the smart home skills as I want to be able to support non-standard utterances. For instance, I can ask
Alexa, ask [invocation] what is the brightness of the porch lights right now?
Everything with the custom skill works really well, except that I don't want to have to say the invocation name. I'd prefer to interact with the porch lights as if they were a discovered smart home device, like:
Alexa, what is the brightness of the porch lights right now?
This seems to be the purpose of canFulfillIntent. I have implemented this interface in Python (perhaps incorrectly), but Alexa always responds: "Sorry, I didn't find a device named porch lights".
Is what I am asking possible? And if it is, does anyone have a Python example? My reading is that, while this is the purpose of canFulfillIntent, it does not function like this yet (hence why there are two ecobee skills, for instance).
It is not possible. Every time you ask for brightness and similar things, Alexa will assume you are referring to a smart home device, so your custom skill will not be invoked - hence the two ecobee skills. This video might help you if you decide to go with a smart home skill.
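For reference, the documented response shape for a CanFulfillIntentRequest looks roughly like the sketch below, written as a bare Python Lambda handler (no SDK); the "Device" slot name and the spoken fallback text are assumptions. As the answer says, even a correct response does not make Alexa route device-style utterances like the brightness question to a custom skill.

    # Minimal sketch of answering a CanFulfillIntentRequest without the SDK.
    # The canFulfillIntent block tells Alexa whether the skill can understand
    # and fulfil the intent and each of its slots.
    def lambda_handler(event, context):
        request = event["request"]

        if request["type"] == "CanFulfillIntentRequest":
            return {
                "version": "1.0",
                "response": {
                    "canFulfillIntent": {
                        "canFulfill": "YES",
                        "slots": {
                            # "Device" is a hypothetical slot name.
                            "Device": {"canUnderstand": "YES", "canFulfill": "YES"}
                        },
                    }
                },
            }

        # Normal requests (LaunchRequest, IntentRequest, ...) are handled here.
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {
                    "type": "PlainText",
                    "text": "The porch lights are at fifty percent.",
                },
                "shouldEndSession": True,
            },
        }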

Is it possible to restrict Alexa or an AVS device to a custom skill?

Is it possible to restrict an AVS device (a device running Alexa) to a single skill? So if I built an AI skill and have it running on a device, is it possible to keep the experience inside the custom skill, so I don't have to keep saying "Alexa, open ..."?
One trick you can do with AVS is to prepend every single request with a sound clip equivalent to: "ask to ..." It's definitely a hack, but I was able to use it with some success.
See my write-up here: https://www.linkedin.com/pulse/adding-context-alexa-will-blaschko-ma
The relevant parts (in case the link goes away):
Regular voice commands don't carry any extra information about the user, but I wanted to find a way to tack on metadata to the voice commands, and so I did just that--glued it right onto the end of the command and updated my intents to know what the new structure would be.
...
In addition to facial recognition, voice recognition could help identify users, but let's not stop there. Any amount of context can be added to a request based on available local data.
"Find frozen yogurt nearby" could silently become "Alexa open Yelp and find frozen yogurt near 1st and Pine, Seattle" using some built-in geolocation in the device (phone, in this case).
I also use something similar in my open source Android Alexa library to send prerecorded commands: https://github.com/willblaschko/AlexaAndroid
I think you are looking for Amazon Lex, which allows you to write Alexa-like skills without the rest of the Alexa feature set.
http://docs.aws.amazon.com/lex/latest/dg/what-is.html

Does an Alexa skill need to have an invocation name?

I would like to build a skill to help those with ADHD. I'd like to teach Alexa to automatically do certain tasks, such as reminding them to do certain things based on information in their calendar.
I want Alexa to help the person with ADHD manage simple daily tasks, rather than burden them with remembering to even ask for the reminder.
Example:
Rather than saying, "Alexa, I'm going to school. Ask schoolPlanner what I should bring?"
they could just say, "Bye Alexa, I'm going to school."
And after looking at their calendar or something, Alexa would respond, "Did you remember your phone and book bag?"
A possible solution is in beta right now, for English skills only: https://developer.amazon.com/docs/custom-skills/understand-name-free-interaction-for-custom-skills.html
This is not possible at this time:
https://forums.developer.amazon.com/questions/61953/invoking-custom-skill-without-ask-and-invocation-n.html

Can you launch an Amazon Echo (Alexa) skill with just the name of the skill? No other connecting words

Is it possible to launch an Alexa App with just its name? This is similar to when you ask it what the weather is.
"Alexa, weather"
However I would like to be able to say
"Alex, weather in Chicago" and have it return that value
I can't seem to get the app to launch without a connecting word. Things like ask, open, tell would count as a connecting word.
I have searched the documentation but can't find mention of it, however there are apps in the app store that do this.
It is documented in the first item here.
I've verified that this works with my own skill. One thing I've noticed is that Alexa's speech recognition is much worse when invoked in this manner presumably because it requires matching against a greater set of possible words. I have to really enunciate in a quiet room to get Alexa to recognize my invocation name in this context.
When developing a custom skill you have to use the connecting words, e.g. "Alexa, ask [your invocation name] to do something."
If you want to pass a variable, you have to specify the sample utterances:
OneshotTideIntent get high tide
OneshotTideIntent get high tide for {City} {State}
Then you handle the cases in your code where the user does not provide these values. For examples, see https://github.com/amzn/alexa-skills-kit-js
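The linked samples are in JavaScript; as a rough Python equivalent (ASK SDK for Python, with prompts that are purely illustrative), handling the missing {City} and {State} slots could look like this:

    # Sketch of handling OneshotTideIntent when the optional slots are missing.
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import get_slot, is_intent_name


    class OneshotTideIntentHandler(AbstractRequestHandler):
        def can_handle(self, handler_input):
            return is_intent_name("OneshotTideIntent")(handler_input)

        def handle(self, handler_input):
            city_slot = get_slot(handler_input, "City")
            state_slot = get_slot(handler_input, "State")
            city = city_slot.value if city_slot else None
            state = state_slot.value if state_slot else None

            if not city or not state:
                # The user did not provide the values: re-prompt instead of failing.
                return (handler_input.response_builder
                        .speak("For which city and state would you like the high tide?")
                        .ask("Please tell me a city and state.")
                        .response)

            speech = "Getting the high tide for {}, {}.".format(city, state)
            return handler_input.response_builder.speak(speech).response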
When writing the example phrases you use the following construct:
"Alexa, [connecting word] [your invocation name], [sample utterance]". As far as I have noticed, she is rather picky and you have to be exact when invoking a custom skill (the voice recognition works much better with built-in skills).
EDIT: launching a skill without a connecting word is possible when developing a "smart home" skill.
