I don't understand how Alexa handles unmatched utterances. There is an AMAZON.FallbackIntent, but it is not available in every locale. So for my French skills, after submission Amazon reported:
While interacting with your skill we determined that, in certain cases, your skill does not respond with a clear and audible response to some of the inputs provided. We do not allow skills to respond in an inaudible or unclear manner to customers.
Upon providing long irrelevant input to the skill in an open session, the skill should end the session post 8 seconds or provide a verbal prompt understanding that the audio input is irrelevant. Expected Behavior: Your skill should respond in an audible and clear manner to all customer input by providing instructions on what to do next.
I don't understand how to handle this feedback.
UPDATE: FallbackIntent is now also available in French, Italian, Spanish and Japanese
AMAZON.FallbackIntent is only available for English and German locales. The certification feedback means that you're not handling out-of-domain utterances gracefully (this can happen even on skills where the locale supports the fallback intent).
If the feedback in question is for the French version of your skill (i.e. you're handling this correctly in the English version thanks to the fallback intent's availability), your only option until the fallback intent becomes available in other locales is to follow the advice I'm giving here:
How to fallback into error when reprompt is processed in Alexa Skill
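The gist of that advice: make sure every request path, including errors and unrecognized input, ends in a clear spoken response with a reprompt. A minimal sketch using the ASK SDK v2 for Node.js (ask-sdk-core); the handler name and the prompt wording are my own:

```javascript
const Alexa = require('ask-sdk-core');

// Catch-all error handler: whatever goes wrong, the skill still answers
// with clear, audible guidance instead of staying silent.
const ErrorHandler = {
  canHandle() {
    return true; // handle every otherwise-unhandled error
  },
  handle(handlerInput, error) {
    console.error(`Error handled: ${error.message}`);
    const speech = 'Sorry, I did not understand that. You can say help to hear what I can do.';
    return handlerInput.responseBuilder
      .speak(speech)
      .reprompt(speech)
      .getResponse();
  },
};
```

Because the response keeps the session open with a reprompt, Alexa replays the prompt after about eight seconds of silence and then closes the session on its own, which matches the behavior the certification note describes.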
I'm having trouble avoiding the FallbackIntent. I'm writing a skill that labels different days Red Days or Blue Days. Using the dev console's Utterance Profiler, I keep getting the fallback intent when I ask "is today a red day". I have a specific sample of that question in my interaction model, so I don't understand why my intent is not getting identified. Does anyone have a suggestion?
Specifying the skill name does not help.
Saving is not enough; you missed the "Build Model" button. The model has to be built before the utterance profiler can evaluate your utterances.
After building, you can see it work properly.
Also, I recommend keeping AMAZON.FallbackIntent in case your user says something not defined in your interaction model. You might want to catch that.
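For illustration, a minimal fallback handler sketch using the ASK SDK v2 for Node.js; the prompt wording (built around the red day / blue day example from the question) is an assumption:

```javascript
const Alexa = require('ask-sdk-core');

// Fires when Alexa maps an utterance to AMAZON.FallbackIntent,
// i.e. the user said something outside the interaction model.
const FallbackIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.FallbackIntent';
  },
  handle(handlerInput) {
    const speech = 'Sorry, I did not get that. You can ask me whether today is a red day or a blue day.';
    return handlerInput.responseBuilder
      .speak(speech)
      .reprompt(speech)
      .getResponse();
  },
};
```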
I am developing a Google smart home action for interacting with our security systems. Besides the system's original arming/disarming capabilities, it also controls relays for switches, outlets, and lights, as well as thermostats for heating or cooling systems and some other peripherals.
Is there a place where I can find a sort of utterance reference (at least in English or in any Western European language)?
My company is Italian (as am I), and I am struggling to test the action (which is currently in DRAFT): most of the commands must be inferred from the examples in the official types & traits documentation.
But that documentation is far from complete, and it's in English only. Most of the time I try to invoke the action with a guessed Italian translation of the English examples, but it's not working or not understood by the assistant.
I also need to create a user manual for our customers in the available languages (I currently speak Italian and English only), and I need to give examples to the translators.
I cannot find anything anywhere.
Is there anything of the sort? Even a partial/incomplete list would do, to begin with.
Our internal team is constantly working on updating the public documentation for Smart Home devices with sample utterances for controlling and interacting with them in the list of languages supported by the traits.
Right now we have provided utterances for devices in en-US, de-DE and fr-FR. More information about the sample utterances for the Window device type (as an example) can be found here:
https://developers.google.com/assistant/smarthome/guides/window#fr-fr
Meanwhile you can use Google Translate to convert phrases to other languages, and see if the translations are working for you.
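Note that the phrases the Assistant accepts are derived from the device type and traits declared in your SYNC response, so the trait documentation is effectively the grammar reference. A minimal sketch of such a declaration with the actions-on-google Node.js library; the device id, name, and user id are made up:

```javascript
const { smarthome } = require('actions-on-google');

const app = smarthome();

// The declared type and traits determine which commands the Assistant
// understands for this device (OnOff enables "turn on the garden lights").
app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // hypothetical account id
    devices: [{
      id: 'relay-1',
      type: 'action.devices.types.SWITCH',
      traits: ['action.devices.traits.OnOff'],
      name: { name: 'garden lights' },
      willReportState: false,
    }],
  },
}));
```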
Is it possible to make Alexa listen for a French word from an English Alexa skill?
When we make Alexa speak we can choose the language for example:
<speak>
Welcome to Paris. <voice name="Celine"><lang xml:lang="fr-FR">Bienvenue à Paris</lang></voice>
</speak>
I want the user to be able to repeat "Bienvenue à Paris" and Alexa to understand this utterance.
Alexa can support multiple languages. I would follow these instructions to add French to your skill so she can understand the sentence "Bienvenue à Paris":
https://developer.amazon.com/docs/custom-skills/develop-skills-in-multiple-languages.html#add-a-language-to-an-existing-skill
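Each locale you add gets its own interaction model, so the French samples live in the fr-FR model. A hypothetical snippet; the invocation name, intent name, and sample utterance are made up for illustration:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "mon guide de paris",
      "intents": [
        {
          "name": "RepeatPhraseIntent",
          "slots": [],
          "samples": [
            "bienvenue à paris"
          ]
        }
      ]
    }
  }
}
```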
Hope it helps
Ester
"Learn French by smartio life" is doing that; I have no clue how.
A user learning basic French does not want to have to configure his Alexa in French.
I've been thinking about creating utterances in English that would sound like the expected French (that would quickly become very difficult to do), or using a phonetic utterance (I don't think that exists).
Sorry if this is hard to read; I am using a translation tool.
I'd like Alexa to run one of my custom skills on a regular schedule, but how do I do it?
Example: alexa xxx at 6 o'clock
I want to make a custom skill that runs every day at the time I say.
You'll need to design the skill's interaction model so that it responds to invocations that combine your skill's invocation name with a specific intent, in one of the formats specified by Amazon.
Possibilities for your example:
"Alexa, ask Awesome Skill to set an alarm at 6 o'clock."
"Alexa, set an alarm at 6 o'clock using Awesome Skill."
As for the functionality after the user schedules something at 6 o'clock, you'll need to look into the Notifications API and its limitations.
Is it possible to launch an Alexa App with just its name? This is similar to when you ask it what the weather is.
"Alexa, weather"
However I would like to be able to say
"Alex, weather in Chicago" and have it return that value
I can't seem to get the app to launch without a connecting word. Things like ask, open, and tell count as connecting words.
I have searched the documentation but can't find any mention of it; however, there are apps in the app store that do this.
It is documented in the first item here.
I've verified that this works with my own skill. One thing I've noticed is that Alexa's speech recognition is much worse when invoked in this manner presumably because it requires matching against a greater set of possible words. I have to really enunciate in a quiet room to get Alexa to recognize my invocation name in this context.
When developing a custom skill you have to use the connecting words, e.g. "Alexa, ask {your invocation name} to do something."
If you want to pass a variable, you have to specify the sample utterances:
OneshotTideIntent get high tide
OneshotTideIntent get high tide for {City} {State}
Then you handle the cases in your code where the user does not provide these values. For examples see https://github.com/amzn/alexa-skills-kit-js
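For illustration, a sketch of that missing-value handling with the ASK SDK v2 for Node.js (the linked repository's samples use an older SDK); the handler name and prompts are my own:

```javascript
const Alexa = require('ask-sdk-core');

const OneshotTideIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'OneshotTideIntent';
  },
  handle(handlerInput) {
    const city = Alexa.getSlotValue(handlerInput.requestEnvelope, 'City');
    if (!city) {
      // The user said "get high tide" without a city: ask for the slot.
      return handlerInput.responseBuilder
        .speak('For which city would you like tide information?')
        .reprompt('Which city?')
        .addElicitSlotDirective('City')
        .getResponse();
    }
    return handlerInput.responseBuilder
      .speak(`Getting high tide for ${city}.`)
      .getResponse();
  },
};
```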
When writing the example phrases you use the following construct:
"Alexa, [connecting word] [your invocation name], [sample utterance]". As far as I have noticed she is rather picky and you have to be exact when invoking custom skill (the voice recognition works way better with built in skills)
EDIT: launching a skill without a connecting word is possible when developing a "smart home" skill.