Alexa model evaluation works, but the intent is never called in the simulator or on an Alexa device

I'm struggling to build my Alexa interaction model. My application requests live data from a smart home device: all I do is call my server API with a username and password, and I get a value in return. My interaction model works perfectly for requesting the parameters; for example, I can say "Temperature" and it works fine across all testing devices. For that intent I use a custom request type.
However, for setting up the username and password I need to use a built-in slot type: AMAZON.NUMBER. Since my credentials consist only of numbers, this should work fine in theory.
My interaction model works perfectly when I press "Evaluate Model" in the Alexa developer console. However, once I go to the Test tab in the simulator, or to my real Alexa device, it's impossible to call the intent; one of the other intents is always called instead (I can see this in the request JSON).
Here's how the intent looks:
{
  "name": "SetupUsername",
  "slots": [
    {
      "name": "username",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "my user id is {username}",
    "username to {username}",
    "set my username to {username}",
    "set username to {username}",
    "user {username}",
    "my username is {username}",
    "username equals {username}"
  ]
}
Whatever I say or type in the simulator, I cannot call this intent. There are no overlapping samples from other intents. Does anyone see an issue here?
Thank you in advance.
EDIT: I just realized that if you want to do account linking on Alexa you need to implement OAuth2. Maybe my intents are never called because Amazon wants to keep me from implementing my own authentication?
UPDATE:
This is the intent that is usually called instead; it's my init intent. So, for example, if I say "my username is 12345", the following intent is called:
UPDATE 2:
Here is my full interaction model.
(HelpIntent and SetPassword are only for testing purposes; they don't make sense right now.)
It's impossible to call SetupUsername with any of the samples in my model.

You need to build the interaction model. Saving is not enough.
When you develop your interaction model, you have to save it AND build it. Otherwise, only model evaluation will work (see the documentation on building the interaction model).
Then, when you test in the test console, you should see in the JSON Input, at the bottom, which intent was called:
"request": {
"type": "IntentRequest",
"requestId": "xxxx",
"locale": "en-US",
"timestamp": "2021-10-20T14:38:59Z",
"intent": {
"name": "HelloWorldIntent", <----------- HERE
"confirmationStatus": "NONE"
}
}
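For a quick sanity check inside a handler, the called intent can be read straight from the request envelope. A minimal sketch using a plain dict rather than the ASK SDK; the event shape follows the JSON above:

```python
def get_called_intent(event):
    """Return the intent name from an Alexa request envelope, or None
    for non-intent requests such as LaunchRequest."""
    request = event.get("request", {})
    if request.get("type") != "IntentRequest":
        return None
    return request.get("intent", {}).get("name")

# Example envelope shaped like the JSON above
event = {
    "request": {
        "type": "IntentRequest",
        "requestId": "xxxx",
        "locale": "en-US",
        "intent": {"name": "HelloWorldIntent", "confirmationStatus": "NONE"},
    }
}
print(get_called_intent(event))  # HelloWorldIntent
```

Logging this value on every invocation makes it obvious when the simulator routes an utterance to an unexpected intent.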

Related

Can I determine whether an Alexa Request was triggered by a routine or a user?

I have a need to differentiate between an explicit request and a request from a routine.
Here is an example. Let's say I am controlling a smart light. The light is able to detect occupancy.
If a user comes into the room and says "turn on the light", it will check occupancy and turn off.
However, if the user creates a scheduled routine to turn the light on, we should disable the occupancy check.
I don't see anything in the documentation for the TurnOn Directive that would indicate the source of the request.
Is there an indicator that I missed? Can I add some indicator? Or has anyone used a different approach to accomplish similar functionality?
The official response from Amazon is that you can't tell the difference. Here is a recent response from Amazon's Alexa developer forum: https://forums.developer.amazon.com/questions/218340/skills-invoking-routines.html
That said, you will generally see additional fields in the launch request if it is launched from a Routine:
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.abunchofnumbers",
"timestamp": "2020-01-18T22:27:01Z",
"locale": "en-US",
"target": {
"path": "AMAZON.Launch",
"address": "amzn1.ask.skill.abunchofnumbers"
},
"metadata": {
"referrer": "amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler"
},
"body": {},
"payload": {},
"targetURI": "alexa://amzn1.ask.skill.abunchofnumbers/AMAZON.Launch",
"launchRequestType": "FOLLOW_LINK_WITH_RESULT",
"shouldLinkResultBeReturned": true
}
The target, metadata, body, payload, targetURI, and launchRequestType fields are generally not found when a user launches a skill with their voice. HOWEVER, I do not believe the presence of these fields is unique to launches from an Alexa Routine. I suspect you'll also find them if the skill was launched when, for example, Alexa asks, "Hey, since you like the Blind Monkey skill, would you like to try Blind Pig?" and you say "yes."
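The observation above can be turned into a best-effort heuristic: check a LaunchRequest for the extra fields that voice launches usually lack. This is a guess, not an official API, and as noted these fields may also appear for other programmatic launches:

```python
# Fields that tend to show up on Routine-style launches but not voice launches
ROUTINE_HINT_FIELDS = {"target", "metadata", "body", "payload",
                       "targetURI", "launchRequestType"}

def looks_like_routine_launch(event):
    """Heuristically guess whether a LaunchRequest came from a Routine,
    based on extra fields that voice launches usually lack."""
    request = event.get("request", {})
    if request.get("type") != "LaunchRequest":
        return False
    return any(field in request for field in ROUTINE_HINT_FIELDS)
```

Because Amazon does not guarantee these fields, treat the result as a hint (e.g. to skip the occupancy check) rather than a reliable signal.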

"locale" information missing in Alexa HouseholdList event request / How to get multi language support in event handler?

I have successfully integrated the AlexaHouseholdListEvent events into my Node.js AWS Lambda based skill. Now I am trying to use language translation, as for the usual intents/requests.
Unfortunately, in the household list event the "request" does not contain locale information, and instead of a translated string I get just the identifier repeated back when using t(). See the example below. I cannot get the locale information from the received event and would have to fall back to English, which is blocking me from starting the skill certification process.
If you need further information - feel free to ask. I am more than happy to provide more details if needed.
Any advice? Help is appreciated!
Why do I have no locale information as part of the event?
Why is t() not working as expected (just like for normal intents)?
How could I translate in the event handler based on the origin locale?
My event request:
"request": {
"type": "AlexaHouseholdListEvent.ItemsCreated",
"requestId": "4a3d1715-e9b3-4980-a6eb-e4047ac40907",
"timestamp": "2018-03-12T11:20:13Z",
"eventCreationTime": "2018-03-12T11:20:13Z",
"eventPublishingTime": "2018-03-12T11:20:13Z",
"body": {
"listId": "YW16bjEuYWNjb3VudC5BRVlQT1hTQ0MyNlRQUU5RUzZITExKN0xNUUlBLVNIT1BQSU5HX0lURU0= ",
"listItemIds": [
"fbcd3b22-7954-4c9a-826a-8a7322ffe57c"
]
}
},
My translation usage:
this.t('MY_STRING_IDENTIFIER')
My result (in the ItemsCreated event handler):
MY_STRING_IDENTIFIER
Expected result (as for other requests):
"This is my translated text"

How should Watson Conversation say good afternoon based on the time zone?

If the user logins to the website in morning, watson would say Good Morning!
If the user logins to the website in afternoon, watson would say Good afternoon!
If the user logins to the website in evening, watson would say Good Evening!
I've written it like this:
{
  "conditions": "now().before('12:00:00')",
  "output": {
    "text": {
      "values": [ "Good morning!" ]
    }
  }
}
But after closing the JSON editor, the code changes to this:
{
  "output": {
    "text": {
      "values": [
        "Good morning!"
      ]
    }
  }
}
Can anyone please say what the solution is? Please provide the entire code for
["good morning", "good afternoon", "good evening"]
You can't define conditions in the JSON editor, so it deletes any field that is not part of the schema.
You can set the condition within the tooling UI in the IF statement section: just paste in your condition there. As the functionality has recently changed, you will need to do the following:
On the Welcome node, click the "Customise" cog and select "Allow multiple responses".
Now set your conditions on each response part.
If you are using the workspace API, then I recommend exporting your workspace to see how a node block is correctly structured. Alternatively, you can check the API spec:
https://www.ibm.com/watson/developercloud/conversation/api/v1/#create_workspace
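When creating nodes through the workspace API, the condition lives on the dialog node itself rather than inside the output block. A sketch of the node structure, built as Python dicts; the node IDs are hypothetical, and the field names follow the Conversation v1 workspace schema:

```python
def greeting_node(node_id, condition, text):
    """Build a dialog-node dict with a condition, in the shape the
    workspace API expects (condition sits beside output, not inside it)."""
    return {
        "dialog_node": node_id,          # hypothetical node ID
        "conditions": condition,
        "output": {"text": {"values": [text]}},
    }

# One node per time-of-day greeting, evaluated top to bottom
nodes = [
    greeting_node("greet_morning", "now().before('12:00:00')", "Good morning!"),
    greeting_node("greet_afternoon", "now().before('18:00:00')", "Good afternoon!"),
    greeting_node("greet_evening", "anything_else", "Good evening!"),
]
```

Exporting an existing workspace, as suggested above, is the safest way to confirm the exact structure before posting nodes like these.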

Alexa skill interaction model: unsure about custom slot

I bought an Amazon Echo in the hope of having it send commands to my HTPC.
I found and set up the following, which uses Alexa with EventGhost:
http://www.eventghost.org/forum/viewtopic.php?f=2&t=7429&sid=c3d48a675d6d5674b25a35f4850bc920
The original poster used "literal" in the skill intent, which I found doesn't work anymore. After reading through the whole thread I saw that you need to create a custom slot type.
Here is the skill setup.
Intent schema
{
  "intents": [
    {
      "intent": "Run",
      "slots": [
        {
          "name": "Action",
          "type": "Commands"
        }
      ]
    }
  ]
}
Custom Slot Types
Commands
cleanup
clean up
move movies
move downloads
move cartoons
move the cartoons
move the downloads
move the downloaded movies
play
pause
stop
Sample Utterances
Run {Action}
What I want to do is say:
"Alexa, tell/ask (Invocation Name) to clean up"
or
"Alexa, tell/ask (Invocation Name) to move movies"
I typed in the custom slot in what I believe is the correct format, based on my web searching.
The problem is that when I run it through Alexa, it sometimes hits EventGhost slightly wrong.
How can I fine-tune it? Or do I have the skill set up wrong?
The setup above looks fine; an Alexa skill gets better as you train it more.
However, I think you may have made a typo.
Your sample utterance matches "Alexa, tell/ask (Invocation Name) clean up", but you say "Alexa, tell/ask (Invocation Name) to clean up" with the extra word "to". If this is not a typo, please remove the word "to",
because during speech recognition the word "to" will tend to be combined with your commands.
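If the leading "to" cannot be avoided in natural phrasing, another defensive option is to normalize the recognized slot value before forwarding it to EventGhost. A hypothetical helper, not part of the original setup, using the slot values from the question:

```python
# Slot values from the Commands custom slot type above
KNOWN_COMMANDS = {
    "cleanup", "clean up", "move movies", "move downloads",
    "move cartoons", "move the cartoons", "move the downloads",
    "move the downloaded movies", "play", "pause", "stop",
}

def normalize_action(slot_value):
    """Strip a stray leading 'to' and reject values outside the slot list."""
    action = slot_value.strip().lower()
    if action.startswith("to "):
        action = action[3:]
    return action if action in KNOWN_COMMANDS else None
```

Rejecting unknown values keeps a slightly mis-heard utterance from triggering the wrong EventGhost macro.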

Multiple answers for a node in IBM Watson Conversation service

How can I specify multiple answers for a specific dialog node? For example, when the user asks "What is your name?", my VA should reply "My name is Alexander", or "You can call me Alexander", or "Friends call me Alex".
Maybe the Conversation service should return a code, and the application checks the code and chooses a random answer from an answer database.
For the dialog node that gives the response, select advanced mode and change it to this:
{
  "output": {
    "text": {
      "values": [
        "My name is Alexander.",
        "You can call me Alexander",
        "Friends call me Alex"
      ],
      "selection_policy": "random",
      "append": false
    }
  }
}
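The built-in selection_policy makes client-side logic unnecessary here, but the questioner's alternative (the service returning a code, with the application picking a random answer) can be sketched as follows. The code name and answer table are hypothetical:

```python
import random

ANSWERS = {  # hypothetical code-to-answers table kept in the application
    "ASK_NAME": [
        "My name is Alexander.",
        "You can call me Alexander",
        "Friends call me Alex",
    ],
}

def pick_answer(code):
    """Choose a random response for a code returned by the Conversation service."""
    return random.choice(ANSWERS[code])
```

Keeping answers in the workspace with "selection_policy": "random" is simpler, though; the application-side approach only pays off when answers must vary by data the service cannot see.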
