How can I add a custom mode in an Alexa smart home skill?

I am implementing a smart home skill in which the user can change the mode. I found that Alexa supports a few built-in modes, and for user-defined modes there is a CUSTOM mode mechanism. The documentation says the following JSON needs to be set:
{
  "name": "thermostatMode",
  "value": {
    "value": "CUSTOM",
    "customName": "VENDOR_HEAT_COOL"
  }
}
Question: where exactly do we need to set the above JSON?
I tried adding it under the Alexa.ThermostatController interface of the device discovery response, but it is not working.

The Thermostat Controller only supports the modes below. If you have custom modes, it is better to use a generic controller such as "ModeController", where you can provide the custom mode details in the discovery response (a rough sketch follows the documentation link below).
AUTO: Indicates automatic heating or cooling based on the current temperature and the setpoint.
COOL: Indicates cooling mode.
HEAT: Indicates heating mode.
ECO: Indicates economy mode.
OFF: Indicates that heating and cooling are off, but the device might still have power.
More info about ModeController:
https://developer.amazon.com/en-US/docs/alexa/device-apis/alexa-modecontroller.html
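For illustration only, a ModeController capability entry in the Discover.Response might look roughly like the sketch below; the instance name, friendly names, and mode value are placeholders, not values from the question.
{
  "type": "AlexaInterface",
  "interface": "Alexa.ModeController",
  "instance": "Thermostat.VendorMode",
  "version": "3",
  "properties": {
    "supported": [ { "name": "mode" } ],
    "retrievable": true,
    "proactivelyReported": true
  },
  "capabilityResources": {
    "friendlyNames": [
      { "@type": "text", "value": { "text": "vendor mode", "locale": "en-US" } }
    ]
  },
  "configuration": {
    "ordered": false,
    "supportedModes": [
      {
        "value": "VendorMode.HeatCool",
        "modeResources": {
          "friendlyNames": [
            { "@type": "text", "value": { "text": "heat cool", "locale": "en-US" } }
          ]
        }
      }
    ]
  }
}
Each entry in supportedModes pairs a value your skill understands with friendly names that Alexa matches against the user's utterance.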

Related

Testing Alexa Skills with audio playback - is this expected behavior?

I am making an audio skill using the audio player template with the source code from the official Amazon repo.
I have also followed the instructions and added the required PlayAudio intent with the required utterances.
I am using EchoSim to test my Skill. This is the JSON from SpeechSynthesizer.Speak:
{
  "directive": {
    "header": {
      "dialogRequestId": "dialogRequestId-d2e37caa-98b6-4aec-99b1-d24298e422d5",
      "namespace": "SpeechSynthesizer",
      "name": "Speak",
      "messageId": "43150bc3-5fe1-44f0-aeea-fbec4808a4ce"
    },
    "payload": {
      "url": "cid:GlobalDomain_ActionableAbandon_52324515-eee3-4232-b9e4-19edeab556c5_1919623608",
      "token": "amzn1.as-ct.v1.#ACRI#GlobalDomain_ActionableAbandon_52324515-eee3-4232-b9e4-19edeab556c5",
      "format": "AUDIO_MPEG"
    }
  }
}
My problem is: this links to an MP3 audio file, but no audio is playing. I was wondering whether this is indeed the correct response I should be getting and it works this way simply because I am not testing on a device, or whether there is anything I should modify?
Any insight is much appreciated.
The most common issue with the AudioPlayer interface is the strict audio requirements, and that looks like the cause of your problem. The link provided by Amod is for SSML, not the AudioPlayer. Make sure to follow all the requirements for the audio stream:
The audio file must be hosted at an Internet-accessible HTTPS endpoint on port 443.
The web server must present a valid and trusted SSL certificate. Self-signed certificates are not allowed (really important). Many content hosting services provide this. For example, you could host your files at a service such as Amazon Simple Storage Service (Amazon S3), an Amazon Web Services offering.
If the stream is a playlist container that references additional streams, each stream within the playlist must also be hosted at an Internet-accessible HTTPS endpoint on port 443 with a valid and trusted SSL certificate.
The supported formats for the audio file include AAC/MP4, MP3, PLS, M3U/M3U8, and HLS. Bitrates: 16 kbps to 384 kbps.
This information can be found in the official documentation below:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html#audio-stream-requirements
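Assuming the stream meets the requirements above, a minimal custom-skill response that starts playback with an AudioPlayer.Play directive looks roughly like the sketch below; the URL and token are placeholders, not values from the question.
{
  "version": "1.0",
  "response": {
    "shouldEndSession": true,
    "directives": [
      {
        "type": "AudioPlayer.Play",
        "playBehavior": "REPLACE_ALL",
        "audioItem": {
          "stream": {
            "url": "https://example.com/audio/episode-1.mp3",
            "token": "episode-1",
            "offsetInMilliseconds": 0
          }
        }
      }
    ]
  }
}
The directives array sits in the same response envelope as outputSpeech, and playBehavior REPLACE_ALL replaces any current stream and starts the new one immediately.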

How to make Alexa 'sing' jingle bells?

I'm creating a seasonal Alexa skill, where there will be intents such as 'how many sleeps till Christmas', 'am I on the good list' etc; and I'd also like an intent to ask Alexa to sing Jingle Bells. The key part is making her sing it.
In my skill, for the singJingleBells intent, I output the lyrics for Jingle Bells as the speech response, but Alexa reads the lyrics (as expected, if I'm honest).
I've discovered there is a (presumably official Amazon) skill that makes her sing Jingle Bells: you can say "Alexa, sing Jingle Bells".
I would like my skill to do the same.
I'm guessing the Amazon skill does it with SSML phonemes or, more likely, a pre-recorded MP3 via either an SSML audio tag or an SSML speechcon interjection.
Is there any way to discover/capture the output response of the Amazon skill so that I can understand (and copy!) the way it does it?
Using Steve's idea, I can use the console on echosim.io to capture the SpeechSynthesizer directive. Not sure if this gets me any closer?
{
  "directive": {
    "header": {
      "dialogRequestId": "dialogRequestId-6688b290-80d3-4111-a29d-4c60c6d47c31",
      "namespace": "SpeechSynthesizer",
      "name": "Speak",
      "messageId": "c5771361-2a80-4b00-beb6-22a783a7c504"
    },
    "payload": {
      "url": "cid:b438a3ea-d337-4c5f-b719-816e429ed473#Alexa3P:1.0/2017/11/06/20/94a9a7c4112b44568bff10df69d30825/01:18::TNIH_2V.f000372f-b147-4bea-81fb-4c2e7de67334ZXV/0_359577804",
      "token": "amzn1.as-ct.v1.Domain:Application:Knowledge#ACRI#b438a3ea-d337-4c5f-b719-816e429ed473#Alexa3P:1.0/2017/11/06/20/94a9a7c4112b44568bff10df69d30825/01:18::TNIH_2V.f000372f-b147-4bea-81fb-4c2e7de67334ZXV/0",
      "format": "AUDIO_MPEG"
    }
  }
}
If I understand correctly, you want to get the Alexa audio output into an .mp3 file (or some other format) so that it can be played back again in a custom skill.
If that's the goal, you'll need to use the Alexa Voice Service (AVS) and more specifically the SpeechSynthesizer Interface to get the audio output that you'd then use in your custom skill response.
So, you'll be using both the Alexa Skills Kit (for the skill) and the Alexa Voice Service (AVS) to get the audio.
You can play an audio clip of 'Jingle Bells' using the SSML audio tag. A maximum of five audio tags can be used in a single output response.
The audio clip must meet the following requirements.
The MP3 must be hosted at an Internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the MP3 file must present a valid, trusted SSL certificate. Self-signed certificates cannot be used.
The MP3 must not contain any customer-specific or other sensitive information.
The MP3 must be a valid MP3 file (MPEG version 2).
The audio file cannot be longer than ninety (90) seconds.
The bit rate must be 48 kbps. Note that this bit rate gives a good result when used with spoken content, but is generally not a high enough quality for music.
The sample rate must be 16000 Hz.
Refer to the SSML audio tag documentation for more clarity.
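As a rough sketch (the file URL is a placeholder), the audio tag is embedded in an SSML outputSpeech response like this:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>Here it comes. <audio src=\"https://example.com/audio/jingle-bells.mp3\"/> Merry Christmas!</speak>"
    },
    "shouldEndSession": true
  }
}
The hosted MP3 still has to meet the constraints listed above (48 kbps, 16000 Hz, 90 seconds or less); otherwise the clip will typically fail to play.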

How does wakeword work in alexa voice service javaclient sample?

I found the wording "wakewordAgentEnabled" in the Alexa Voice Service javaclient sample. When I run the program and the Android companion app, it shows a "Listen" button that works properly, but how do I call the wake word "Hey Alexa" instead of using the "Listen" button?
Actually, I would like to use the wake-word logic in the Android app, so there is no need to click a button.
Does the sample support a wake word?
Does it need to work together with Kitt-AI Snowboy?
From what I understand (I work in the Alexa org at Amazon), the reason the Echo can respond to wake words ('Alexa', 'Amazon', and 'Echo') is actually hardware in the device that opens up the connection. To get this on another device such as an Android phone, you would need to be listening constantly, converting speech to text, and validating the text for the wake word, which would be very resource intensive and a large power drain. To reduce that drain, it is just a button that opens the connection.

Refresh rate for insert/update for subscriptions? Rate, quantity, etc.

I have questions related to pushing messages to a user.
Here is the use-case.
A user is walking inside a wifi-enabled warehouse, and we would like to use the glasses to send critical information and warnings about the components in that building that require the user to interact with the component(s).
We have used push notifications on Android devices with OK results, but with a live HUD I would like faster updates.
Basically we will send something like this to the user
{
  "html": "<article>\n <section>\n <strong class=\"red\">ALERT </strong>13:10 device ABCD tolerance failure. \n </p>\n </section>\n</article>\n",
  "notification": {
    "level": "DEFAULT"
  }
}
How quickly can we get the information to the device?
What is the update rate? If we see an alert from a machine, how quickly can we refresh the user on its status?
Is there some type of flood protection that would cause us grief?
I assume the native API will have more options, such as polling or some type of custom subscription service that we could use for faster updates than Google's service. Is this correct?
Thanks
Nick
This is not something that is expected to be done with the Mirror API. The GDK is where you would want to do this and they are taking feature requests. You might want to add your use case to this thread:
https://code.google.com/p/google-glass-api/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Type%20Status%20Priority%20Owner%20Component%20Summary&groupby=&sort=&id=75
To answer some of your other questions:
1 - Mirror API card pushes happen within seconds
2 - Seconds
3 - You are currently limited to 1000 card pushes a day per developer account, so that would be shared across all your users
4 - Currently there is no supported way to do that
As a final thought, if you really want to do this without official support, you could watch this video, which shows you how to run "normal" Android APKs on Glass. It is a presentation from Google I/O 2013:
http://www.youtube.com/watch?v=OPethpwuYEk
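For context on how those card pushes are made: each card is a single timeline insert over HTTPS (an authorized POST to https://www.googleapis.com/mirror/v1/timeline per the Mirror API documentation), with a body along the lines of the question's payload:
{
  "html": "<article><section><strong class=\"red\">ALERT</strong> 13:10 device ABCD tolerance failure.</section></article>",
  "notification": {
    "level": "DEFAULT"
  }
}
Each such insert counts against the per-developer daily quota mentioned above.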

Google Wave Robot - Change to Wave in response to an external event

I am writing a Google Wave robot that allows users to "manage a wave". I plan to have a configuration page on my website. When the configuration is changed, ideally all waves to which the robot was added by this user should change immediately (or at least the next time someone views the wave). What is the best way of doing this?
Apparently, "a robot cannot contact Wave directly; it can only respond to wave-related events and cron events". If I decide to go the cron route, how quickly can I update the Wave?
Please check out the new Robot API v2 - it enables robots to actively push information into Wave.
As far as I know, cron does not work at the moment (2009-11-28) either. You could add a gadget that changes its internal state - for example, one second after the wave loads, using a timer - and listen for DOCUMENT_CHANGED or BLIP_SUBMITTED (lower volume) events. But unless your bot comes with a gadget anyway, this is of course not very pretty.
=HC
