"locale" information missing in Alexa HouseholdList event request / How to get multi language support in event handler? - alexa

I have successfully integrated Alexa HouseholdList events into my Node.js AWS Lambda based skill. Now I am trying to use language translation just as I do for ordinary intents/requests.
Unfortunately, in the HouseholdList event the "request" object does not contain locale information, so instead of a translated string I just get the identifier back when calling t(). See the example below. Since I cannot read the locale from the received event, I would have to fall back to English, which is blocking me from starting the skill certification process.
If you need further information - feel free to ask. I am more than happy to provide more details if needed.
Any advice? Help is appreciated!
Why do I have no locale information as part of the event?
Why is t() not working as expected (just like for normal intents)?
How could I translate in the event handler based on the origin locale?
My event request:
"request": {
"type": "AlexaHouseholdListEvent.ItemsCreated",
"requestId": "4a3d1715-e9b3-4980-a6eb-e4047ac40907",
"timestamp": "2018-03-12T11:20:13Z",
"eventCreationTime": "2018-03-12T11:20:13Z",
"eventPublishingTime": "2018-03-12T11:20:13Z",
"body": {
"listId": "YW16bjEuYWNjb3VudC5BRVlQT1hTQ0MyNlRQUU5RUzZITExKN0xNUUlBLVNIT1BQSU5HX0lURU0= ",
"listItemIds": [
"fbcd3b22-7954-4c9a-826a-8a7322ffe57c"
]
}
},
My translation usage:
this.t('MY_STRING_IDENTIFIER')
My result (in the ItemsCreated event handler):
MY_STRING_IDENTIFIER
Expected result (as for other requests):
"This is my translated text"

Related

Azure Logic Apps and Microsoft Forms - Get field descriptors

I have a Logic App that retrieves the responses submitted by the users through Microsoft Forms.
When I look at the Logic App run, I can see the descriptor for each field (MuleSoft, IoT & Integration, Encuesta de tecnologías, ...).
But in "Show raw outputs" I can't see those descriptors; I only get an identifier for each field (rcb6ccf0fc9e44f74b44fa2715fec4f27, ...).
How can I retrieve those descriptors?
The solution is to add a 'Send an HTTP request to SharePoint' action to get the details of the form.
The Site Address is: https://forms.office.com
The Method is: GET
The Uri is: /formapi/api/forms('')?$select=id,title,questions&$expand=questions($expand=choices)
This returns a JSON with all the questions and for each question the ID, Title and more info about the question.
We can then loop through these questions and, with each ID, extract the corresponding response from Microsoft Forms:
"foreach": "@body('Send_an_HTTP_request_to_SharePoint')['questions']"
And Compose the result:
"Compose": {
"inputs": {
"Id": "#{items('For_each')['id']}",
"Name": "#items('For_each')['title']",
"Value": "#{body('Get_response_details')[item()['id']]}"
},
"runAfter": {},
"type": "Compose"
}
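Outside of Logic Apps, the same join can be sketched in plain JavaScript, which may make the data flow clearer. This is only a sketch: FORM_ID, token, and rawOutputs are placeholders (rawOutputs stands in for the body of "Get response details", keyed by question id); the /formapi endpoint and field names are taken from the answer above.

// Plain JavaScript sketch of the same lookup (Node 18+ for global fetch).
async function getLabelledResponses(token, rawOutputs) {
  const res = await fetch(
    "https://forms.office.com/formapi/api/forms('FORM_ID')?$select=id,title,questions&$expand=questions($expand=choices)",
    { headers: { Authorization: `Bearer ${token}` } } // token: placeholder credential
  );
  const form = await res.json();
  // Map each raw response field to its human-readable question title.
  return form.questions.map((q) => ({
    Id: q.id,
    Name: q.title,
    Value: rawOutputs[q.id],
  }));
}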
These are field identifiers. You can retrieve them directly from the Dynamic content of Get response details.
Alternatively, you can build your own JSON body (in your case from Get response details) with the Compose connector.

Alexa model evaluation works great but intent is never called in simulator or alexa device

I'm struggling to build my Alexa interaction model. My application is used for requesting live data from a smart home device. All I do is basically call my server API with a username & password, and I get a value in return. My interaction model works perfectly for requesting the parameters; for example, I can say "Temperature" and it works fine across all testing devices. For that intent I use a custom RequestType.
However, for setting up the username & password I need to use a built-in slot type: AMAZON.NUMBER. As I only need numbers for my credentials, this should work perfectly fine in theory.
My interaction model works perfectly when I press "Evaluate Model" in the Alexa developer console. However, once I go to Test in the simulator or on my real Alexa device, it's impossible to invoke the intent. It always calls one or more other intents instead (I can see this in the request JSON).
Here's how the intent looks:
{
  "name": "SetupUsername",
  "slots": [
    {
      "name": "username",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "my user id is {username}",
    "username to {username}",
    "set my username to {username}",
    "set username to {username}",
    "user {username}",
    "my username is {username}",
    "username equals {username}"
  ]
}
Whatever I say or type in the simulator, I cannot invoke this intent. I have no overlaps with other intents. Does anyone see an issue here?
Thank you in advance.
EDIT: I just realized that if you want to do account linking on Alexa you need to implement OAuth2 - maybe my intents are never called because Alexa wants to bypass me implementing my own authentication?
UPDATE:
So, for example, if I say "my username is 12345", my init intent is called instead.
UPDATE 2:
Here is my full interaction model.
(HelpIntent and SetPassword are only for testing purposes; they don't make sense right now.)
It's impossible to invoke SetupUsername with any of the samples in my model.
You need to build the interaction model; saving is not enough.
When you develop your interaction model, you have to save it AND build it. Otherwise only the model evaluation will work (see the documentation about it).
Then when you test in the test console, you should see in the JSON Input, at the bottom, which intent was called:
"request": {
"type": "IntentRequest",
"requestId": "xxxx",
"locale": "en-US",
"timestamp": "2021-10-20T14:38:59Z",
"intent": {
"name": "HelloWorldIntent", <----------- HERE
"confirmationStatus": "NONE"
}
}
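To see the resolved intent server-side as well (in CloudWatch rather than only in the simulator's JSON view), you can log it from a request interceptor. A minimal sketch, assuming the ASK SDK v2 for Node.js; the interceptor name and the commented-out handler registration are illustrative:

const Alexa = require('ask-sdk-core');

// Log every incoming request type and intent so misrouted utterances
// are visible in the Lambda logs.
const LoggingInterceptor = {
  process(handlerInput) {
    const { request } = handlerInput.requestEnvelope;
    console.log('Request type:', request.type);
    if (request.type === 'IntentRequest') {
      console.log('Resolved intent:', request.intent.name);
    }
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestInterceptors(LoggingInterceptor)
  // .addRequestHandlers(...) // your existing handlers go here
  .lambda();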

Google Smart Home Toggles Trait mysterious utterances

I'm struggling to complete the development of a SmartHome action for our security panels, involving different trait implementations (including ArmDisarm, Power, Thermostats, etc.).
One specific problem is related to Toggles Trait.
I need to accept commands to enable or disable intrusion sensor bypass/exclusion.
I've added to the SYNC response the following block, for instance, for a window sensor in the kitchen:
{
  'id': '...some device id...',
  'name': {'name': 'Window Sensor'},
  'roomHint': 'Kitchen',
  'type': 'action.devices.types.SENSOR',
  'traits': ['action.devices.traits.Toggles'],
  'willReportState': true,
  'attributes': {
    'commandOnlyToggles': false,
    'queryOnlyToggles': false,
    'availableToggles': [
      {
        'name': 'bypass',
        'name_values': [
          {'name_synonym': ['bypass', 'bypassed', 'exclusion'], 'lang': 'en'},
          {'name_synonym': ['escluso', 'bypass', 'esclusa', 'esclusione'], 'lang': 'it'}
        ]
      }
    ]
  }
}
I was able to trigger the EXECUTE intent by saying
"Turn on bypass on Window Sensor" (although very unnatural).
I was able to trigger the QUERY intent by saying
"Is bypass on Window Sensor?" (even more unnatural).
These two utterances were found somewhere in a remote corner of a blog.
My problem is with Italian language (and also other western EU languages such as French/Spanish/German).
The EXECUTE intent seems to be triggered by this utterance (I bet no Italian speaker would ever say anything like that):
"Attiva escluso su Sensore Finestra"
(in this example the name provided in the SYNC request was translated from "Window Sensor" to "Sensore Finestra" when running in the context of an Italian linked account).
However, I was not able to find the utterance for the QUERY request. I've tried everything that could make some sense, but the QUERY intent never gets triggered, and the Assistant redirects me to a simple web search.
Why is there such a mystery around utterances? The sample English utterances in the Assistant docs are very limited, and most of the time it's difficult to guess their counterparts in specific languages; furthermore, no one from AoG has ever been able to give me any information on this topic.
It's been more than a year now that I've been trying to create a reference guide of utterances to include in our device user manual, but still with no luck.
Can anyone point me to some reference?
Or is there anything wrong with my SYNC data?
You can file a bug on the public tracker and include the QUERYs you have attempted. Since the execution intents seem to work, it may just be a bug in the backend grammar that isn't triggering.
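For completeness, the QUERY side also has to report the toggle's state under currentToggleSettings, keyed by the toggle name declared in SYNC; if that state is missing, the Assistant has nothing to answer with. A minimal sketch with the actions-on-google library, where the device id and the state value are placeholders:

const { smarthome } = require('actions-on-google');

const app = smarthome();

// QUERY responses report each toggle's state under currentToggleSettings,
// keyed by the toggle name declared in SYNC ('bypass' here).
app.onQuery(async (body) => ({
  requestId: body.requestId,
  payload: {
    devices: {
      'window-sensor-id': {   // placeholder: the id from your SYNC response
        online: true,
        status: 'SUCCESS',
        currentToggleSettings: {
          bypass: true,       // placeholder: read from your panel state
        },
      },
    },
  },
}));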

Can I determine whether an Alexa Request was triggered by a routine or a user?

I have a need to differentiate between an explicit request and a request from a routine.
Here is an example. Let's say I am controlling a smart light. The light is able to detect occupancy.
If a user comes into the room and says "turn on the light", the light will check occupancy and turn off when no one is present.
However, if the user creates a scheduled routine to turn the light on, we should disable the occupancy check.
I don't see anything in the documentation for the TurnOn Directive that would indicate the source of the request.
Is there an indicator that I missed? Can I add some indicator? Or has anyone used a different approach to accomplish similar functionality?
The official response from Amazon is that you can't tell the difference. Here is a recent response from Amazon's Alexa developer forum: https://forums.developer.amazon.com/questions/218340/skills-invoking-routines.html
That said, you will generally see additional fields in the launch request if it is launched from a Routine:
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.abunchofnumbers",
"timestamp": "2020-01-18T22:27:01Z",
"locale": "en-US",
"target": {
"path": "AMAZON.Launch",
"address": "amzn1.ask.skill.abunchofnumbers"
},
"metadata": {
"referrer": "amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler"
},
"body": {},
"payload": {},
"targetURI": "alexa://amzn1.ask.skill.abunchofnumbers/AMAZON.Launch",
"launchRequestType": "FOLLOW_LINK_WITH_RESULT",
"shouldLinkResultBeReturned": true
}
The target, metadata, body, payload, targetURI, and launchRequestType fields are generally not present when a user launches a skill with their voice. HOWEVER, I do not believe the presence of these fields is unique to launches from an Alexa routine. I suspect you'll also find them if the skill was launched when, for example, Alexa asks, "Hey, since you like the Blind Monkey skill, would you like to try Blind Pig?" and you say "yes."
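If you decide to rely on that heuristic anyway (unofficial and potentially fragile, per the caveat above), the check itself is trivial; a sketch:

// Heuristic only: these fields are undocumented, and as noted above
// their presence is probably not unique to routine-triggered requests.
function looksRoutineTriggered(request) {
  return Boolean(request.metadata || request.targetURI || request.launchRequestType);
}

// usage inside a Lambda handler:
// const fromRoutine = looksRoutineTriggered(event.request);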

"Resource not found for the segment" using Graph subscription beta

I'm using Microsoft Graph to get calendar events with application permissions, and it works perfectly. Now I'm trying to set up a subscription to listen for event changes. The normal v1.0 endpoint does not support this, but the beta endpoint, at least according to its description, does.
From the page: https://graph.microsoft.io/en-us/docs/api-reference/beta/api/subscription_post_subscriptions
"Note: the /beta endpoint allows Application permissions as a preview feature for most resources."
So I tried this with the URL:
https://graph.microsoft.com/beta/subscriptions
Sending in a json object like this:
{
  "changeType": "created,updated,deleted",
  "notificationUrl": "https%3A%2F%2FXXX.com%2Fo365.php",
  "resource": "%2Fusers%2Fooom%40xxx.com%2Fevents",
  "clientState": "1486588355561",
  "expirationDateTime": "2018-11-20T18:23:45.9356913Z"
}
Doing this I get the result:
{
  "error": {
    "code": "BadRequest",
    "message": "Resource not found for the segment '/users/room#xxx.com/events'.",
    "innerError": {
      "request-id": "d9ca58b1-ee1f-4072-81d5-0f1a25dcdd45",
      "date": "2017-02-08T21:26:51"
    }
  }
}
I have tried all kinds of combinations in the resource but can't get it to work.
Does anybody have an idea of how to do this?
The values of the JSON properties don't need to be URL-encoded. The resource also doesn't need a leading '/' before "users" (although this isn't what is causing your issue).
Try changing your JSON to:
{
  "changeType": "created,updated,deleted",
  "notificationUrl": "https://myurl.com/o365.php",
  "resource": "users/person@myemail.com/events",
  "clientState": "1486588355561",
  "expirationDateTime": "2018-11-20T18:23:45.9356913Z"
}
Feel free to respond back if this doesn't address the issue.
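A quick way to verify the corrected body is to POST it directly from Node.js. A minimal sketch (Node 18+ for global fetch); GRAPH_TOKEN is a placeholder for an app-only access token acquired elsewhere, and the URL/values mirror the answer above:

// Sketch: create the subscription against the beta endpoint.
async function createSubscription() {
  const res = await fetch('https://graph.microsoft.com/beta/subscriptions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GRAPH_TOKEN}`, // placeholder token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      changeType: 'created,updated,deleted',
      notificationUrl: 'https://myurl.com/o365.php',
      resource: 'users/person@myemail.com/events',
      clientState: '1486588355561',
      expirationDateTime: '2018-11-20T18:23:45.9356913Z',
    }),
  });
  console.log(res.status, await res.json());
}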
