How to use Alexa.Discovery interface with Custom Skill

Currently, I am using my custom skill to set the temperature of my thermostat.
First I speak the invocation command: "Alexa, open Heat Control"
Then I speak to set the temperature: "Set bedroom temperature to 25"
But I want to discover and add my thermostat in the Devices section of Alexa,
so that I don't have to speak the invocation command every time.
How can I use Alexa.Discovery with my custom skill?

Related

How to change persona voice with audio track?

I want to create a voice-changing function for a meeting application using WebRTC.
I have two candidate solutions:
Solution 1: Use MediaStreamAudioSourceNode and connect nodes to create an audio filter:
https://github.com/mdn/webaudio-examples#stream-source-buffer
But I can't control the gender of the voice.
Solution 2: Use speech-to-text followed by text-to-speech.
I use speech-to-text to get the content of the speaker's words.
After that, I send the text to the other members in the meeting and use text-to-speech to recreate the voice.
But the transmission is slow and inaccurate in terms of content.
Do you know any AI or library that supports this?

Creating a new voice for Alexa within my skill

Is it possible to change the voice Alexa is using within my skill?
i.e. the user asks:
"Alexa, ask Car Washing when the next available appointment is"
and Alexa responds with a voice that matches my car-washing brand?
Yes, it is possible.
You can use SSML tags in the output speech response of your skill to achieve this.
You can:
whisper
put emphasis on a word or phrase
use different languages like French, Spanish, etc.
use different voices, and more
For Example
<speak>
Here's a surprise you did not expect.
<voice name="Kendra"><lang xml:lang="en-US">I want to tell you a secret.</lang></voice>
<voice name="Brian"><lang xml:lang="en-GB">Your secret is safe with me!</lang></voice>
<voice name="Kendra"><lang xml:lang="en-US">I am not a real human.</lang></voice>
Can you believe it?
</speak>
Learn more about using SSML tags in the Alexa SSML reference documentation.
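To actually deliver SSML like the example above, the skill's response must declare its outputSpeech type as "SSML" rather than "PlainText". Here is a minimal sketch of a helper that wraps an SSML string in an Alexa response body; the function name is my own, but the response shape follows the Alexa Skills Kit JSON format:

```python
def build_ssml_response(ssml: str) -> dict:
    """Wrap an SSML string in a minimal Alexa skill response body."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "SSML",  # "SSML" instead of "PlainText" enables the tags
                "ssml": ssml,
            },
            "shouldEndSession": True,
        },
    }

response = build_ssml_response(
    '<speak><voice name="Brian">'
    '<lang xml:lang="en-GB">Your secret is safe with me!</lang>'
    "</voice></speak>"
)
```

The SSML string must be wrapped in a single `<speak>` root element, or Alexa will reject the response.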

Allow users to issue multiple commands to Alexa after using the launch phrase once

If every intent requires the invocation name of the skill, then what is the purpose of the launch phrase?
For example, this would make sense:
"Alexa, open adventure game" (launch phrase)
"move forward" (intent)
"pick up item" (intent)
"close adventure game" (exit)
But from what I've seen you have to do this:
"Alexa, open adventure game" (launch phrase)
"Alexa, ask adventure game to move forward" (intent)
"Alexa, ask adventure game to pick up item" (intent)
"Alexa, close adventure game" (exit)
My real issue here is that the dialog structure of "Alexa, ask {invocation_name} to {utterance}" is too bloated, and it's hard to see from the documentation how to get around this. I'm hoping I'm missing something in how the launch phrase works that will allow my users to issue commands more naturally.
You don't have to use the invocation name every time if you keep the session alive. To do that, include the shouldEndSession parameter in your response JSON and set it to false. Alexa ends the session when shouldEndSession is set to true or when it's not present.
For example, for a LaunchRequest, give a response like this:
{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Launch phrase here"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "PlainText",
        "text": "re-prompt phrase here"
      }
    },
    "shouldEndSession": false
  }
}
Note that shouldEndSession is false, and this keeps the session alive. The user can then just say "move forward" without using the invocation name.
When you actually want to end the session, set shouldEndSession to true (for example, in AMAZON.StopIntent).
Note: Even if you set shouldEndSession to false, the default Alexa session timeout is 8 seconds and it is not configurable.
Every intent does not require the invocation name.
The following is very much possible:
"Alexa, open adventure game" (launch phrase)
"move forward" (intent)
"pick up item" (intent)
"close adventure game" (exit)
Have you had a problem with a skill, or is it just something you noticed while reading the sample utterances in the skills store?
If you are having a problem with a skill, it may be that the session is timing out, and the developer has provided no reprompt. In this case, the wake word + invocation (Alexa, open adventure game) has to be said to launch the skill again.
If you are just going by the examples you've given (the second set), you have to understand that each utterance is not just an intent utterance, it is a launch+intent utterance. That means each of these examples is meant for launching a skill and triggering an intent. So basically your example is:
"Alexa, open adventure game" (launch phrase)
"Alexa, ask adventure game to move forward" (launch phrase + intent)
"Alexa, ask adventure game to pick up item" (launch phrase + intent)
"Alexa, close adventure game" (launch phrase + exit)
These utterances are not meant to be said one after the other, they are all standalone commands.
Once a skill has been launched, you don't need to repeat the invocation again and again as long as the session is active.

Alexa echo show highlight text

I am creating a skill for Echo Show and was able to display text using BodyTemplate1. Now I want to highlight each word while Alexa utters those words. Does anyone know how to do this?
Not available yet. You can vote up the feature request, though.
This is available now using APL (Alexa Presentation Language). The SpeakText directive will accomplish what you want to do.
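As a rough sketch of what that directive looks like: SpeakText is an APL command sent via an ExecuteCommands directive after the APL document has been rendered. The token ("pagerToken") and componentId ("bodyText") below are hypothetical names that must match the skill's own APL document; check the APL documentation for the exact command properties.

```python
# Sketch of an Alexa.Presentation.APL.ExecuteCommands directive that runs
# the APL SpeakText command against a text component in a rendered document.
speak_text_directive = {
    "type": "Alexa.Presentation.APL.ExecuteCommands",
    "token": "pagerToken",          # must match the RenderDocument token
    "commands": [
        {
            "type": "SpeakText",
            "componentId": "bodyText",   # the text component to read aloud
            "highlightMode": "line",     # highlight line by line while speaking
            "align": "center",
        }
    ],
}
```

The target component also needs speech output bound in the APL document itself for the karaoke-style highlighting to work.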

How to change params like Language or Voice on IBM project-intu setup on a Raspberry?

I was able to set up project-intu successfully on my Raspberry Pi, and it works great with the default dialog, which is in English.
Now I want to use my own Conversation workspace, which is in Spanish, and change the default voice from en-US_MichaelVoice to es-ES_LauraVoice, but I can't find which file I should update. I tried updating the bin/raspi/etc/profile/body.json file, but it didn't work.
Change m_Gender and m_Language in the SpeakingAgent configuration; it will automatically select a voice based on those values.
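As an illustrative config fragment only: the m_Gender and m_Language field names come from the answer above, but the surrounding structure of the SpeakingAgent entry in body.json is not shown here, so treat this as a sketch of the two values that would select a Spanish female voice such as es-ES_LauraVoice:

```json
{
  "Type_": "SpeakingAgent",
  "m_Gender": "female",
  "m_Language": "es-ES"
}
```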