Why are my sentences returning my intent even though they are not in the utterance list of my intent? - alexa

We are developing a skill whose invocation name is "call onstar".
I have an intent, "CallOnStarIntent", with the following utterances:
"switch to onstar",
"access onstar emergency",
"access onstar advisor",
"access onstar",
"connect to onstar emergency",
"connect to onstar advisor",
"connect to onstar",
"i want to use onstar",
"open onstar",
"call onstar emergency",
"call onstar advisor",
"call onstar",
"use onstar",
"start onstar",
"onstar information",
"onstar services",
"onstar please",
"onstar emergency",
"onstar advisor"
These are the listed utterances, and they work fine: when I try an utterance like "call square" I get AMAZON.FallbackIntent, as expected. But when I try utterances like "ping onstar", "play onstar", or any utterance that contains the word onstar, it returns CallOnStarIntent.
Does anyone know why this is happening?
Thanks in advance.

Your sample utterances are processed by a machine learning algorithm that builds a model which will also match similar utterances, so this is normal (your extra utterances appear to be similar enough for the model to decide there's a match). However, there are two things you can do to make the matching more precise:
You can extend the sample utterances of AMAZON.FallbackIntent to include the ones where you don't want a match (e.g. "ping onstar").
You can raise the sensitivity tuning of AMAZON.FallbackIntent to HIGH so that matching out-of-domain utterances becomes more aggressive.
From the Alexa developer docs:
"You can extend AMAZON.FallbackIntent with more utterances. Add utterances when you identify a small number of utterances that invoke custom intents, but should invoke AMAZON.FallbackIntent instead. For large number of utterances that route incorrectly, consider adjusting AMAZON.FallbackIntent sensitivity instead."
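Concretely, extending AMAZON.FallbackIntent means giving that intent its own samples array in the interaction model. A sketch using the out-of-domain phrases from the question (the exact list is up to you):

```json
{
  "name": "AMAZON.FallbackIntent",
  "samples": [
    "ping onstar",
    "play onstar"
  ]
}
```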
To adjust the AMAZON.FallbackIntent sensitivity to HIGH you can use either the ASK CLI or JSON Editor to update the interactionModel.languageModel.modelConfiguration.fallbackIntentSensitivity.level setting in the JSON for your interaction model. Set fallbackIntentSensitivity.level to HIGH, MEDIUM, or LOW.
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "...",
      "intents": [],
      "types": [],
      "modelConfiguration": {
        "fallbackIntentSensitivity": {
          "level": "HIGH"
        }
      }
    },
    "dialog": {},
    "prompts": []
  }
}
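If you keep the interaction-model JSON in version control, the same change can be scripted instead of edited by hand. A minimal Python sketch, assuming you load and dump the model yourself (the function name and example model are illustrative, not part of the ASK toolchain):

```python
import json

def set_fallback_sensitivity(model: dict, level: str = "HIGH") -> dict:
    """Set interactionModel.languageModel.modelConfiguration.
    fallbackIntentSensitivity.level in an interaction-model dict."""
    if level not in ("HIGH", "MEDIUM", "LOW"):
        raise ValueError("level must be HIGH, MEDIUM, or LOW")
    language_model = (model.setdefault("interactionModel", {})
                           .setdefault("languageModel", {}))
    language_model.setdefault("modelConfiguration", {})[
        "fallbackIntentSensitivity"] = {"level": level}
    return model

# Example: a stripped-down model, as exported from the developer console.
model = {"interactionModel": {"languageModel": {"invocationName": "call onstar"}}}
updated = set_fallback_sensitivity(model, "HIGH")
print(json.dumps(updated, indent=2))
```

You would then push the updated JSON with the ASK CLI or paste it into the JSON Editor.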

The list of utterances for an intent should not be seen as a closed set of values, like an enumeration in a programming language.
They are only samples used to train your Alexa skill, as described in the documentation page about best practices for sample utterances:
"Alexa also attempts to generalize based on the samples you provide to interpret spoken phrases that differ in minor ways from the samples specified."

Related

Alexa model evaluation works great but intent is never called in simulator or alexa device

I'm struggling to build my Alexa interaction model. My application is used for requesting live data from a smart home device. All I basically do is call my server API with a username & password, and I get a value in return. My interaction model works perfectly for requesting the parameters; for example, I can say "Temperature" and it works fine across all testing devices. For that intent I have a custom RequestType.
However, for setting up the username & password I need to use a built-in slot type: AMAZON.NUMBER. As I only need numbers for my credentials, this should work perfectly fine in theory.
I have an interaction model set up which works perfectly when I press "Evaluate Model" in the Alexa developer console. However, once I go to Test in the simulator, or to my real Alexa device, it's absolutely impossible to call the intent. It always calls one or more other intents instead (I can see this in the request JSON).
Here's how the intent looks:
{
  "name": "SetupUsername",
  "slots": [
    {
      "name": "username",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "my user id is {username}",
    "username to {username}",
    "set my username to {username}",
    "set username to {username}",
    "user {username}",
    "my username is {username}",
    "username equals {username}"
  ]
}
Whatever I say or type in the simulator, I cannot call this intent. I have no overlaps with other intents. Does anyone see an issue here?
Thank you in advance.
EDIT: I just realized that if you want to do account linking on Alexa you need to implement OAuth2 - maybe my intents are never called because Amazon wants to keep me from implementing my own authentication?
UPDATE:
This is the intent that is usually called instead - it's my init intent. So, for example, if I say "my username is 12345", the following intent is called:
UPDATE 2:
Here is my full interaction model.
(HelpIntent and SetPassword are only for testing purposes; they don't make sense right now.)
It's impossible to call SetupUsername with any of the samples in my model.
You need to build the interaction model; saving is not enough.
When you develop your interaction model, you have to save it AND build it. Otherwise only model evaluation will work (see the documentation about it).
Then when you test in the test console, you should see in the JSON Input, at the bottom, which intent was called:
"request": {
  "type": "IntentRequest",
  "requestId": "xxxx",
  "locale": "en-US",
  "timestamp": "2021-10-20T14:38:59Z",
  "intent": {
    "name": "HelloWorldIntent", <----------- HERE
    "confirmationStatus": "NONE"
  }
}
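To automate that check, a small helper can pull the intent name out of the request envelope your endpoint receives. A minimal Python sketch (the helper name is mine, not part of the ASK SDK):

```python
from typing import Optional

def matched_intent(event: dict) -> Optional[str]:
    """Return the intent name from an Alexa IntentRequest,
    or None for other request types (e.g. LaunchRequest)."""
    request = event.get("request", {})
    if request.get("type") != "IntentRequest":
        return None
    return request.get("intent", {}).get("name")

# Example envelope, shaped like the JSON Input shown in the test console.
event = {
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "HelloWorldIntent", "confirmationStatus": "NONE"},
    }
}
print(matched_intent(event))  # HelloWorldIntent
```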

Google Smart Home Toggles Trait mysterious utterances

I'm struggling to complete the development for a SmartHome action on our security panels, involving different trait implementations (including ArmDisarm, Power, Thermostats, etc.).
One specific problem is related to Toggles Trait.
I need to accept commands to enable or disable intrusion sensor bypass/exclusion.
I've added to the SYNC response the following block, for instance, for a window sensor in the kitchen:
{
  'id': '...some device id...',
  'name': {'name': 'Window Sensor'},
  'roomHint': 'Kitchen',
  'type': 'action.devices.types.SENSOR',
  'traits': ['action.devices.traits.Toggles'],
  'willReportState': true,
  'attributes': {
    'commandOnlyToggles': false,
    'queryOnlyToggles': false,
    'availableToggles': [
      {
        'name': 'bypass',
        'name_values': [
          {'name_synonym': ['bypass', 'bypassed', 'exclusion'], 'lang': 'en'},
          {'name_synonym': ['escluso', 'bypass', 'esclusa', 'esclusione'], 'lang': 'it'}
        ]
      }
    ]
  }
}
I was able to trigger the EXECUTE intent by saying
"Turn on bypass on Window Sensor" (although very unnatural).
I was able to trigger the QUERY intent by saying
"Is bypass on Window Sensor?" (even more unnatural).
These two utterances were found somewhere in a remote corner of a blog.
My problem is with Italian language (and also other western EU languages such as French/Spanish/German).
The EXECUTE intent seems to be triggered by this utterance (I bet no Italian speaker will ever say anything like that):
"Attiva escluso su Sensore Finestra"
(in this example the name provided in the SYNC request was translated from "Window Sensor" to "Sensore Finestra" when running in the context of an Italian linked account).
However, I was not able to find the utterance for the QUERY request. I've tried everything that could make some sense, but the QUERY intent never gets triggered, and the assistant redirects me to a simple web search.
Why is there such a mystery over utterances? The sample English utterances in the assistant docs are very limited, and most of the time it's difficult to guess their counterparts in specific languages; furthermore, no one from AOG has ever been able to give me any information on this topic.
It's been more than a year now that I've been trying to create a reference guide for utterances to include in our device user manual, but still with no luck.
Can any one of you point me to some reference?
Or is there anything wrong with my SYNC data?
You can file a bug on the public tracker and include the QUERY utterances you have attempted. Since the EXECUTE intents seem to work, it may just be a bug in the backend grammar that isn't triggering.
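While waiting on that, it's also worth double-checking the QUERY side of the fulfillment: for the Toggles trait, the device state is reported under currentToggleSettings. A hedged sketch of what a QUERY response payload could look like for the sensor above (the ids and values are placeholders, not taken from the question):

```json
{
  "requestId": "...request id from the QUERY intent...",
  "payload": {
    "devices": {
      "...some device id...": {
        "status": "SUCCESS",
        "online": true,
        "currentToggleSettings": {
          "bypass": true
        }
      }
    }
  }
}
```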

How can I get matched utterance text from Alexa request instead of intent name?

I have created an Alexa skill which I want to communicate with my chatbot. When I ask Alexa a question, only the intent name comes in the request, but I also want the utterance text. Is it possible to get that utterance?
"request": {
  "type": "IntentRequest",
  "requestId": "amzn1.echo-api.request.480ebab4-cd67-418e-b67f-eb8a00b74020",
  "timestamp": "2020-02-13T06:55:52Z",
  "locale": "en-US",
  "intent": {
    "name": "ask_utterance",
    "confirmationStatus": "NONE"
  }
}
This is the request. I am correctly getting the intent name, but I want the utterance text, which I will then send to my chatbot. Is it possible to do that?
No, you can't (as of now) get the complete utterance.
The only thing that comes close to a complete utterance is the AMAZON.SearchQuery slot.
Otherwise, you will obtain only slot values.
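If you do add an AMAZON.SearchQuery slot to a catch-all intent, the captured text arrives as an ordinary slot value in the request. A minimal Python sketch (the intent name "ForwardToChatbotIntent" and the slot name "query" are made up for illustration):

```python
from typing import Optional

def slot_value(event: dict, slot_name: str) -> Optional[str]:
    """Return the value Alexa captured for a slot, or None if absent."""
    slots = event.get("request", {}).get("intent", {}).get("slots", {})
    return slots.get(slot_name, {}).get("value")

# Hypothetical request for an intent with an AMAZON.SearchQuery slot named "query".
event = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "ForwardToChatbotIntent",
            "slots": {"query": {"name": "query",
                                "value": "what is the weather today"}},
        },
    }
}
print(slot_value(event, "query"))  # what is the weather today
```

The returned string is what you would forward to the chatbot backend.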

Can I determine whether an Alexa Request was triggered by a routine or a user?

I have a need to differentiate between an explicit request and a request from a routine.
Here is an example. Let's say I am controlling a smart light. The light is able to detect occupancy.
If a user comes into the room and says "turn on the light", it will check occupancy and turn off.
However, if the user creates a scheduled routine to turn the light on, we should disable the occupancy check.
I don't see anything in the documentation for the TurnOn Directive that would indicate the source of the request.
Is there an indicator that I missed? Can I add some indicator? Or has anyone used a different approach to accomplish similar functionality?
The official response from Amazon is that you can't tell the difference. Here is a recent response from Amazon's Alexa developer forum: https://forums.developer.amazon.com/questions/218340/skills-invoking-routines.html
That said, you will generally see additional fields in the LaunchRequest if it is launched from a routine:
"request": {
  "type": "LaunchRequest",
  "requestId": "amzn1.echo-api.request.abunchofnumbers",
  "timestamp": "2020-01-18T22:27:01Z",
  "locale": "en-US",
  "target": {
    "path": "AMAZON.Launch",
    "address": "amzn1.ask.skill.abunchofnumbers"
  },
  "metadata": {
    "referrer": "amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler"
  },
  "body": {},
  "payload": {},
  "targetURI": "alexa://amzn1.ask.skill.abunchofnumbers/AMAZON.Launch",
  "launchRequestType": "FOLLOW_LINK_WITH_RESULT",
  "shouldLinkResultBeReturned": true
}
The target, metadata, body, payload, targetURI, and launchRequestType fields are generally not present when a user launches a skill with their voice. HOWEVER, I do not believe the presence of these fields is unique to being launched by an Alexa routine. I suspect you'll also find them if the skill was launched when, for example, Alexa asks, "Hey, since you like the Blind Monkey skill, would you like to try Blind Pig?" and you say "yes."
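If you decide to rely on this heuristic anyway (knowing Amazon doesn't guarantee it), checking for those extra fields is straightforward. A sketch, with the field list taken from the example above and a function name of my own:

```python
ROUTINE_HINT_FIELDS = {"target", "metadata", "body", "payload",
                       "targetURI", "launchRequestType"}

def looks_like_routine_launch(event: dict) -> bool:
    """Heuristic only: these extra fields often show up when the skill is
    launched by a Routine, but their presence is NOT guaranteed to mean that."""
    request = event.get("request", {})
    if request.get("type") != "LaunchRequest":
        return False
    return bool(ROUTINE_HINT_FIELDS & set(request))

routine_event = {"request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.abunchofnumbers",
    "launchRequestType": "FOLLOW_LINK_WITH_RESULT",
    "target": {"path": "AMAZON.Launch"},
}}
voice_event = {"request": {"type": "LaunchRequest",
                           "requestId": "amzn1.echo-api.request.other"}}
print(looks_like_routine_launch(routine_event))  # True
print(looks_like_routine_launch(voice_event))    # False
```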

Alexa skill Interaction Model unsure about custom slot

I bought an Amazon Echo in the hope of having it send commands to my HTPC.
I found and set up the following, which uses Alexa with EventGhost:
http://www.eventghost.org/forum/viewtopic.php?f=2&t=7429&sid=c3d48a675d6d5674b25a35f4850bc920
The original poster used "literal" in the skill intent, which I found no longer works. After reading through the whole thread I saw that you need to create a custom slot type.
Here is the skill setup.
Intent schema
{
  "intents": [
    {
      "intent": "Run",
      "slots": [
        {
          "name": "Action",
          "type": "Commands"
        }
      ]
    }
  ]
}
Custom Slot Types
Commands
cleanup
clean up
move movies
move downloads
move cartoons
move the cartoons
move the downloads
move the downloaded movies
play
pause
stop
Sample Utterances
Run {Action}
What I want to do is say:
"Alexa tell/ask (Invocation Name) to clean up"
or
"Alexa tell/ask (Invocation Name) to move movies"
I typed the custom slot in what I believe is the correct format, based on my web searching.
The problem is that when I run it through Alexa, it sometimes hits EventGhost slightly wrong.
How can I fine-tune it? Or do I have the skill set up wrong?
The setup above looks fine; an Alexa skill gets better as you train it with more samples.
However, I think you may have made a typo: your sample utterance corresponds to "Alexa tell/ask (Invocation Name) clean up", but you are asking "Alexa tell/ask (Invocation Name) to clean up", with the extra word "to". If that is not a typo, please remove the word "to".
During speech recognition, the word "to" will tend to be combined with your commands.
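If dropping "to" from the spoken phrase isn't an option, another pragmatic workaround is to normalize the recognized {Action} value before forwarding it to EventGhost. A minimal Python sketch (the command table mirrors the custom slot values above; the helper itself is hypothetical):

```python
from typing import Optional

# Command table copied from the Commands custom slot values.
COMMANDS = {
    "cleanup", "clean up", "move movies", "move downloads", "move cartoons",
    "move the cartoons", "move the downloads", "move the downloaded movies",
    "play", "pause", "stop",
}

def normalize_action(raw: str) -> Optional[str]:
    """Map the recognized {Action} slot value onto a known command,
    tolerating case drift and a leading "to " picked up from the utterance."""
    value = raw.strip().lower()
    if value.startswith("to "):
        value = value[3:]
    return value if value in COMMANDS else None

print(normalize_action("to clean up"))  # clean up
print(normalize_action("Move Movies"))  # move movies
```

Anything that normalizes to None can be rejected instead of being sent to EventGhost as a garbled command.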
