Alexa cannot recognize the utterance or send the right intent

I'm currently developing an Alexa skill. I created a custom intent called "PlayNextChannelIntent" with utterances like "next channel" and "next feed"; however, Alexa always routes my request to the built-in Next intent (which I use for other actions). I then changed my utterances to "forward channel" and "plus channel", but now Alexa cannot recognize my voice at all and no intent is sent. How can I fix this problem?

Related

Amazon Alexa doesn't open the skill I developed (on the device)

I have a problem with Amazon Alexa.
I have started developing a small skill in the Alexa Developer Console.
Everything works perfectly when I test it in that console, but when I tell an Alexa device to open my skill, it tells me "I don't know about that".
I don't understand why; the email address is the same for the Developer Console and for the device. I'm sure the invocation name is correct. I tried disabling and re-enabling the skill from the Alexa app, but it still doesn't work.
Any ideas? Thank you!
It could be an internal recognition issue with the invocation name.
What I recommend you do:
Change the invocation name to something very simple, e.g. "test four"
Save the model and build the model
Try "open test four" in the developer console
Try "open test four" on the device
If "open test four" doesn't work on the device, log in again on the Alexa app with the same email as the developer console and resync the device. Make sure the skill is enabled.
If it works with "test four", it means the invocation name you chose previously is not properly recognized by Alexa. Keep it as simple as possible, or ask support to improve its recognition.

Alexa testing invocation name - failed submission

I have got a question on testing the invocation of a custom skill in Alexa.
My invocation name is, let's say, "merry christmas". If I type or speak it in the test section of the Alexa Skills console, LaunchRequest is triggered and the conversation starts.
As soon as I submit the skill for certification, Amazon correctly tests it with a phrase like "Alexa, launch merry christmas". In this case the LaunchRequest is not triggered.
How can I test the whole invocation phrase in the console? Is there any way to debug why the LaunchRequest is not triggered? Does it trigger a different intent rather than "LaunchRequest"?
Thanks
I do not believe there is a way to test the complete invocation phrase.
To debug this issue, log the incoming request at the app root, before your intent handlers are executed. This will let you see whether you are receiving any request at all, and if so, what it is instead of a LaunchRequest.
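As a sketch of that logging, assuming the ASK SDK v2 for Node.js (the helper name `describeRequest` is mine), a global request interceptor can report every incoming request before any handler runs:

```javascript
// Pure helper (hypothetical name): summarize an incoming Alexa request
// envelope so the log shows what actually arrived.
function describeRequest(envelope) {
  const request = envelope && envelope.request;
  if (!request) return 'no request in envelope';
  // IntentRequests carry an intent name; other types (LaunchRequest,
  // SessionEndedRequest, ...) are identified by type alone.
  return request.type === 'IntentRequest'
    ? `IntentRequest: ${request.intent.name}`
    : request.type;
}

// With the ASK SDK v2 this would typically be wired up as a global
// request interceptor, e.g.:
//
//   Alexa.SkillBuilders.custom()
//     .addRequestInterceptors({
//       process(handlerInput) {
//         console.log(describeRequest(handlerInput.requestEnvelope));
//       }
//     })
//     // ...intent handlers...
//     .lambda();
```

If the log shows nothing at all, the request never reached your endpoint; if it shows a different type or intent, you know what to handle instead.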

How to handle failed skill events?

I'm implementing skill lifecycle events using "Skill Events". Going through the docs, I can't find anything that mentions what I should respond with for these events. The closest I found was:
Alexa will attempt to redeliver events if an acknowledgement is not
sent by the skill service, for up to one hour. If the skill service
receives an event, and the skill service sends an acknowledgment in
response, this event must then be managed by the skill service. In
either case, the skill service cannot, at a later time, retrieve past
events from Alexa.
Source
What does that imply: an empty 200 response? What should I do if something fails? Should I return a 200 status with a formatted error similar to an Alexa ErrorResponse?
Since the skill event data schema is different from typical Alexa events, I presume the response is different too.
So far, just by experimenting with the responses: if I return an empty 200 response, Alexa understands that I acknowledged the request and doesn't send it again.
If something fails, I respond with a 400 status and a plain-text error message; I then receive the request again later.
Also, be sure to save the timestamp from the AlexaSkillEvent.SkillEnabled or AlexaSkillEvent.SkillAccountLinked request with the user, so you can validate whether repeatedly sent events are valid if something isn't right.
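A minimal sketch of that acknowledgement logic, assuming a custom HTTPS endpoint rather than Lambda (the handler name and the `store` interface are mine):

```javascript
// Sketch: acknowledge Alexa skill events with an empty 200; return a 400
// on failure so Alexa redelivers the event (for up to one hour).
function handleSkillEvent(body, res, store) {
  try {
    const { type, timestamp } = JSON.parse(body).request;
    if (type === 'AlexaSkillEvent.SkillEnabled' ||
        type === 'AlexaSkillEvent.SkillAccountLinked') {
      // Persist the timestamp so redelivered duplicates can be detected.
      store.save(type, timestamp);
    }
    res.writeHead(200); // empty 200 = acknowledged; no redelivery
    res.end();
  } catch (err) {
    res.writeHead(400, { 'Content-Type': 'text/plain' });
    res.end(String(err)); // not acknowledged; Alexa will retry
  }
}
```

The key design point is that the status code is the acknowledgement: the body of the 200 can be empty, and anything non-2xx signals Alexa to redeliver.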

How to make Alexa 'sing' jingle bells?

I'm creating a seasonal Alexa skill with intents such as 'how many sleeps till Christmas', 'am I on the good list', etc., and I'd also like an intent that asks Alexa to sing Jingle Bells. The key part is making her sing it.
In my skill, for the singJingleBells intent, I output the lyrics of Jingle Bells as the speech response, but Alexa just reads the lyrics (as expected, if I'm honest).
I've discovered there is a (presumably official Amazon) skill to make her sing Jingle Bells: you can say "Alexa, sing Jingle Bells".
I would like my skill to do the same.
I'm guessing the Amazon skill does it with SSML phonemes, or, more likely, a pre-recorded MP3 via either an SSML audio tag or an SSML speechcon interjection.
Is there any way to discover/capture the output response of the Amazon skill so that I can understand (and copy!) the way it does it?
Using Steve's idea, I can use the console on echosim.io to capture the SpeechSynthesizer directive. Not sure if this gets me any closer:
{
  "directive": {
    "header": {
      "dialogRequestId": "dialogRequestId-6688b290-80d3-4111-a29d-4c60c6d47c31",
      "namespace": "SpeechSynthesizer",
      "name": "Speak",
      "messageId": "c5771361-2a80-4b00-beb6-22a783a7c504"
    },
    "payload": {
      "url": "cid:b438a3ea-d337-4c5f-b719-816e429ed473#Alexa3P:1.0/2017/11/06/20/94a9a7c4112b44568bff10df69d30825/01:18::TNIH_2V.f000372f-b147-4bea-81fb-4c2e7de67334ZXV/0_359577804",
      "token": "amzn1.as-ct.v1.Domain:Application:Knowledge#ACRI#b438a3ea-d337-4c5f-b719-816e429ed473#Alexa3P:1.0/2017/11/06/20/94a9a7c4112b44568bff10df69d30825/01:18::TNIH_2V.f000372f-b147-4bea-81fb-4c2e7de67334ZXV/0",
      "format": "AUDIO_MPEG"
    }
  }
}
If I understand correctly, you want to get the Alexa audio output into an .mp3 file (or some other format) so that it can be played back again in a custom skill.
If that's the goal, you'll need to use the Alexa Voice Service (AVS) and more specifically the SpeechSynthesizer Interface to get the audio output that you'd then use in your custom skill response.
So, you'll be using both the Alexa Skills Kit (for the skill) and the Alexa Voice Service (AVS) to get the audio.
You can play an audio clip of 'Jingle Bells' using the SSML audio tag. A maximum of five audio tags can be used in a single output response.
The audio clip must meet the following requirements:
The MP3 must be hosted at an Internet-accessible HTTPS endpoint. HTTPS is required, and the domain hosting the MP3 file must present a valid, trusted SSL certificate. Self-signed certificates cannot be used.
The MP3 must not contain any customer-specific or other sensitive information.
The MP3 must be a valid MP3 file (MPEG version 2).
The audio file cannot be longer than ninety (90) seconds.
The bit rate must be 48 kbps. Note that this bit rate gives a good result when used with spoken content, but is generally not a high enough quality for music.
The sample rate must be 16000 Hz.
Refer to the Audio Tag documentation for more clarity.
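Putting that together, here is a sketch of the raw skill response JSON that would play the clip (the function name and MP3 URL are placeholders; your own file must meet the constraints above):

```javascript
// Sketch: build an SSML response that plays a hosted MP3 instead of
// reading lyrics aloud. The URL must be HTTPS with a trusted certificate,
// and the file must be 48 kbps, 16000 Hz, and at most 90 seconds long.
function jingleBellsResponse(mp3Url) {
  return {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'SSML',
        // At most five <audio> tags are allowed per response.
        ssml: `<speak>Here you go! <audio src="${mp3Url}" /></speak>`
      },
      shouldEndSession: true
    }
  };
}
```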

Alexa Skills Kit SDK - increase timeout of skill

I am building an Alexa cooking tutorial skill using the Alexa Skills Kit SDK for Node.js. I save each cooking step to the DB, so if the skill times out, the user can reopen the skill and continue where they left off.
The problem is that users are annoyed at having to keep reopening the skill; people work at different speeds. Is it possible to keep the skill open, or increase the timeout, while I wait for the user to complete the step and then say "Alexa, next step"?
I tried increasing the Lambda timeout; it made no difference.
I have been trying to do this for quite a while. There have been several responses on the Amazon developer forums from folks at Amazon (for example, this response) stating that the approximate 8-10 second timeout is not configurable.
The following solution is a bit of a hack and not recommended, but may serve your purpose.
Just modify your response like below:
<speak>
Tell recipe step here.
<audio src="<-- Hosted silent mp3 file URL -->" />
</speak>
You can add a silent MP3 file to your response; your skill will stay open for the duration of that MP3.
But to interrupt Alexa in the middle of this response, the user will have to say "Alexa, next step" instead of just "Next step".
There is also a Progressive Response API you could call to send interim speech while your skill prepares its final response.
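A sketch of that Progressive Response call (the directive shape follows the Progressive Response API; the helper name and values are mine):

```javascript
// Sketch: build the body for a Progressive Response. It is POSTed to
// `${apiEndpoint}/v1/directives`, using the incoming request's
// apiAccessToken as a Bearer token, while the skill is still working.
function buildProgressiveDirective(requestId, speech) {
  return {
    header: { requestId },                          // from the incoming request
    directive: { type: 'VoicePlayer.Speak', speech }
  };
}

// Note: this keeps the user informed during a slow response; it does not
// extend the roughly 8-second listening window after Alexa finishes speaking.
```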
