Alexa skill having trouble avoiding the FallbackIntent

I'm having trouble avoiding the FallbackIntent. I'm writing a skill that labels different days Red Days or Blue Days. Using the dev console's Utterance Profiler, I keep getting the FallbackIntent when I ask "is today a red day". I have a sample utterance for exactly that question in my interaction model, so I don't understand why my intent is not getting identified. Does anyone have a suggestion?
Specifying the skill name does not help.

Saving is not enough; you also need to click the "Build Model" button so the utterance profiler can evaluate your utterances against the current model.
After building, you should see it resolve to the correct intent.
Also, I recommend keeping AMAZON.FallbackIntent in case your user says something not defined in your interaction model; you might want to catch that.
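For reference, catching those off-model utterances amounts to adding a handler for AMAZON.FallbackIntent. A minimal sketch in TypeScript with ask-sdk-core; the reply text and reprompt here are illustrative, not from the original thread:

import * as Alexa from 'ask-sdk-core';

// Minimal AMAZON.FallbackIntent handler; fires when the user says something
// the interaction model does not cover.
const FallbackIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.FallbackIntent';
  },
  handle(handlerInput) {
    // Steer the user back toward an utterance the model does cover.
    return handlerInput.responseBuilder
      .speak("Sorry, I didn't catch that. You can ask me whether today is a red day or a blue day.")
      .reprompt('Try asking: is today a red day?')
      .getResponse();
  },
};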

Related

Alexa Skill Trigger Follow Up Intent

I have written a smart speaker app for Google Home using DialogFlow, and am now in the process of porting it over to Alexa.
One of the fundamental differences seems to be the inability to easily trigger follow-up intents. For example, I have a dialog that asks the user a series of questions, one after the other, before providing a result based on the answers provided. e.g. ({slot types})
Do you like a low maintenance or working garden? {low maintenance}{working}
Do you like a garden you can relax in? {yes/no}
Would you like to grow vegetables in your garden? {yes/no}
This is easy to achieve using DialogFlow follow-up intents, but I have no clue where to start with Alexa, and there don't seem to be many examples out there. All I can find focuses on slot filling for a single dialog.
I am using my own API service to serve results (vs Lambda).
Can anybody recommend a way of achieving this in an Alexa Skill?
I managed to achieve this by adding a single utterance with three individual slots, one for each of the answers required:
inspire me {InspireMaintenance} {InspireRelax} {InspireVeg}
These slots are backed by one slot type, Custom_YesNo, which has Yes and No values plus synonyms. My C# service then checks each of these required slots, and where one is missing it returns the relevant question as a response. Once all slots are filled, it provides the answer.
Not as intuitive as Dialogflow and requires code to achieve what can be done without code in DF, but at least it works :)
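That ask-the-missing-slot step can also be expressed with a Dialog.ElicitSlot directive. A hedged sketch in TypeScript with ask-sdk-core rather than the author's C# service; the intent name InspireIntent is an assumption, while the slot names and questions mirror the ones above:

import * as Alexa from 'ask-sdk-core';
import { IntentRequest } from 'ask-sdk-model';

const InspireIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'InspireIntent';
  },
  handle(handlerInput) {
    const intent = (handlerInput.requestEnvelope.request as IntentRequest).intent;
    const slots = intent.slots || {};
    // Walk the slots in order; elicit the first one the user has not answered yet.
    const questions: Array<[string, string]> = [
      ['InspireMaintenance', 'Do you like a low maintenance or working garden?'],
      ['InspireRelax', 'Do you like a garden you can relax in?'],
      ['InspireVeg', 'Would you like to grow vegetables in your garden?'],
    ];
    for (const [slotName, question] of questions) {
      if (!slots[slotName] || !slots[slotName].value) {
        return handlerInput.responseBuilder
          .speak(question)
          .addElicitSlotDirective(slotName) // re-opens the dialog for this slot
          .getResponse();
      }
    }
    // All three slots filled: compute and return the recommendation.
    return handlerInput.responseBuilder
      .speak('Based on your answers, here is your garden suggestion...')
      .getResponse();
  },
};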

WitAI - Your app may fail to handle this story properly

I've been playing with WitAI recently to see what it can do, and I had built a user flow of about five or six questions, with a couple of Yes/No/I-don't-know branches, etc. I've set up a couple of "bot executes" options, with the hope that in the future I will write some functions on my side to do data processing, etc.
But on every save now, I see a red light on the left side saying "Your app may fail to handle this story properly".
I've rebuilt a user flow with no bot executes and no branches, and I've trained the bot on each sentence to expect, but I still see this. I'm really confused, as there are no fields highlighted red to suggest that the bot doesn't understand that line. Can anyone help out here?

Does an Alexa skill need to have an invocation name?

I would like to build a skill to help those with ADHD. I'd like to teach Alexa to automatically do certain tasks, such as reminding them to do certain things based on information in their calendar.
I want Alexa to help the person with ADHD manage simple daily tasks, rather than burden them with remembering to even ask for the reminder.
Example:
Rather than saying, "Alexa I'm going to school. Ask schoolPlanner what I should bring?"
They could just say "Bye Alexa, I'm going to school"
And after checking their calendar, Alexa would respond, "Did you remember your phone and book bag?"
A possible solution is in beta right now, for English skills only: https://developer.amazon.com/docs/custom-skills/understand-name-free-interaction-for-custom-skills.html
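That beta feature builds on the CanFulfillIntentRequest signal, which lets Alexa probe whether a skill can handle an utterance that omits the invocation name. A hedged sketch of such a handler in TypeScript; the withCanFulfillIntent helper and the always-"YES" answer are assumptions about the SDK's canfulfill support, so check the linked docs for the exact response shape:

import * as Alexa from 'ask-sdk-core';

// Hedged sketch of a CanFulfillIntentRequest handler for name-free
// interaction; a real handler would inspect the probed intent and slots
// before answering YES, NO, or MAYBE.
const CanFulfillRequestHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'CanFulfillIntentRequest';
  },
  handle(handlerInput) {
    // Tell Alexa this skill can answer without being invoked by name.
    return handlerInput.responseBuilder
      .withCanFulfillIntent({ canFulfill: 'YES' })
      .getResponse();
  },
};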
This is not possible at this time:
https://forums.developer.amazon.com/questions/61953/invoking-custom-skill-without-ask-and-invocation-n.html

HTTP service returning wrong intent despite correct training

I have a strange situation. For some utterances, LUIS has been trained to return the GetGenericResponse intent, e.g. "thank you", "you are nice", etc.
But in the JSON, LUIS is returning the wrong intent (GetBotIntroduction) for them, even after manually clicking the "Train" button and republishing the service.
Am I missing something here?
I had posted this question on LUIS MSDN forums as well. Someone from Microsoft responded with a solution that worked for me.
LUIS uses a machine learning model to make its predictions, and in some edge cases when two intents are similar it can get confused on certain utterances even if they are labelled as one intent or the other. To fix this issue you need only add one or two more labels to "GetEducationHelp" that are similar to "learning about ai", such as "I am learning about ai". Once you retrain after adding that label, the model should learn to distinguish between both intents sufficiently.
https://social.msdn.microsoft.com/Forums/azure/en-US/75ea0e86-a4d0-4aa6-bfaa-054d899079a4/http-endpoint-returning-wrong-intent-despite-correct-training?forum=LUIS
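To confirm what the published endpoint actually returns after retraining, you can query it directly and inspect topScoringIntent in the JSON. A hedged sketch in TypeScript against the LUIS v2 endpoint format; the region, app ID, and key are placeholders:

// Query a published LUIS v2 endpoint and print the top-scoring intent.
const region = 'westus';
const appId = '<your-app-id>';
const key = '<your-endpoint-key>';

async function topIntent(utterance: string): Promise<void> {
  const url = `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}`
    + `?subscription-key=${key}&verbose=true&q=${encodeURIComponent(utterance)}`;
  const res = await fetch(url);
  const body = await res.json();
  // topScoringIntent carries the winning intent and its confidence score.
  console.log(`${body.query} -> ${body.topScoringIntent.intent} (${body.topScoringIntent.score})`);
}

topIntent('thank you');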

Can you launch an Amazon Echo (Alexa) skill with just the name of the skill? No other connecting words

Is it possible to launch an Alexa App with just its name? This is similar to when you ask it what the weather is.
"Alexa, weather"
However I would like to be able to say
"Alex, weather in Chicago" and have it return that value
I can't seem to get the app to launch without a connecting word. Things like ask, open, tell would count as a connecting word.
I have searched the documentation but can't find mention of it, however there are apps in the app store that do this.
It is documented in the first item here.
I've verified that this works with my own skill. One thing I've noticed is that Alexa's speech recognition is much worse when invoked in this manner presumably because it requires matching against a greater set of possible words. I have to really enunciate in a quiet room to get Alexa to recognize my invocation name in this context.
When developing a custom skill you have to use connecting words, e.g. "Alexa, ask [your invocation name] to do something."
If you want to pass a variable, you have to specify sample utterances:
OneshotTideIntent get high tide
OneshotTideIntent get high tide for {City} {State}
Then you handle cases in your code when user does not provide these values. For examples see https://github.com/amzn/alexa-skills-kit-js
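A minimal sketch of that missing-value check, in TypeScript with ask-sdk-core rather than the older alexa-skills-kit-js samples; the default city is an illustrative assumption, and the slot names follow the utterances above:

import { IntentRequest } from 'ask-sdk-model';

// Read the optional {City} slot; when the user says only "get high tide",
// the slot exists in the request but carries no value.
function getTideCity(request: IntentRequest): string {
  const slots = request.intent.slots || {};
  const city = slots.City && slots.City.value;
  // Fall back to a default (or elicit the slot) when no value was given.
  return city || 'Seattle';
}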
When writing the example phrases you use the following construct:
"Alexa, [connecting word] [your invocation name], [sample utterance]". As far as I have noticed, she is rather picky and you have to be exact when invoking a custom skill (the voice recognition works much better with built-in skills).
EDIT: Launching a skill without a connecting word is possible when developing a "smart home" skill.
