Google Smart Home Toggles Trait mysterious utterances - google-smart-home

I'm struggling to complete development of a Smart Home Action for our security panels, involving different trait implementations (including ArmDisarm, Power, Thermostats, etc.).
One specific problem is related to the Toggles trait.
I need to accept commands to enable or disable intrusion sensor bypass/exclusion.
For instance, for a window sensor in the kitchen, I've added the following block to the SYNC response:
{
  "id": "...some device id...",
  "name": { "name": "Window Sensor" },
  "roomHint": "Kitchen",
  "type": "action.devices.types.SENSOR",
  "traits": ["action.devices.traits.Toggles"],
  "willReportState": true,
  "attributes": {
    "commandOnlyToggles": false,
    "queryOnlyToggles": false,
    "availableToggles": [
      {
        "name": "bypass",
        "name_values": [
          { "name_synonym": ["bypass", "bypassed", "exclusion"], "lang": "en" },
          { "name_synonym": ["escluso", "bypass", "esclusa", "esclusione"], "lang": "it" }
        ]
      }
    ]
  }
}
I was able to trigger the EXECUTE intent by saying
"Turn on bypass on Window Sensor" (although very unnatural).
I was able to trigger the QUERY intent by saying
"Is bypass on Window Sensor?" (even more unnatural).
These two utterances were found somewhere in a remote corner of a blog.
My problem is with Italian language (and also other western EU languages such as French/Spanish/German).
The EXECUTE intent seems to be triggered by this utterance (I bet no Italian speaker would ever say anything like that):
"Attiva escluso su Sensore Finestra" (literally, "Activate excluded on Window Sensor")
(in this example the name provided in the SYNC response was translated from "Window Sensor" to "Sensore Finestra" when running in the context of an Italian linked account).
However, I was not able to find the utterance for the QUERY request. I've tried everything that could make some sense, but the QUERY intent never gets triggered, and the Assistant redirects me to a simple web search.
Why is there such a mystery around utterances? The sample English utterances in the Assistant docs are very limited, and most of the time it's difficult to guess their counterparts in specific languages; furthermore, no one from AOG has ever been able to give me any information on this topic.
I've been trying for more than a year now to create a reference guide of utterances to include in our device user manual, but still with no luck.
Can any one of you point me to some reference?
Or is there anything wrong with my SYNC data?

You can file a bug on the public tracker and include the QUERY utterances you have attempted. Since the EXECUTE intents seem to work, it may just be a bug in the backend grammar that isn't triggering the QUERY.
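For reference, a sketch based on the standard Toggles trait documentation (not your exact traffic): when the EXECUTE utterance does fire, the execution block your fulfillment receives should look roughly like this:
{
  "command": "action.devices.commands.SetToggles",
  "params": {
    "updateToggleSettings": { "bypass": true }
  }
}
and in the QUERY response the toggle state is reported per device under currentToggleSettings, e.g. { "online": true, "currentToggleSettings": { "bypass": true } }. If the EXECUTE side already produces this, the grammar issue is likely on Google's side rather than in your SYNC data.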

Related

action.devices.commands.OpenClose trait in Google Smart Home actions does not work properly

When I give the command "Open the device 30 percent", the data received by my intent fulfillment is:
{
  "command": "action.devices.commands.OpenClose",
  "params": {
    "followUpToken": "00f38e7b45edbc12fafce49c23568896b7feea58a8a4ba873f31abad7db96de28a25389a2987c7f8deff41afcb25fdffb2b81fe2",
    "openPercent": 100
  }
}
As shown above, the "openPercent" is not correctly interpreted. But if I give the command "Close the device 70 percent", the data received by my intent fulfillment is:
{
  "command": "action.devices.commands.OpenClose",
  "params": {
    "followUpToken": "00f38e7b4588ad650859efe30a46d7dcb565e3a7eea257919678d5cda32fd769f290298ec8c9d40d0eb3a1b52b0063921823a39d",
    "openPercent": 30
  }
}
So we can see that "openPercent" is correctly interpreted for this command.
Just wondering what causes the action.devices.commands.OpenClose command to work only partially.
There are two main possibilities for what might be going wrong with the device trait here: your utterance might not be interpreted correctly by Google's systems, or your device definition might be set up in a way that makes Google send you a discrete fully-open intent.
To check if the issue is with the interpretation, you can try typing out the command as well as speaking it via voice, to see if Google sends you different values with the execution intent. Please also try other grammar, such as "Open the device 30 percent", "Open the device", "Set the device to 50 open", "Partially open the device 60 percent", to see if any of these help. It is possible that your intent to open is always recognized as "100%".
The second possibility is that your device definition might lead Google to send you discrete states of fully open and fully closed (or one of these, as in your case). To troubleshoot, please check your SYNC response for any potential issues in how you define the trait attributes.
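One specific attribute worth checking (an example, not taken from your question): the OpenClose trait has a discreteOnlyOpenClose attribute, and if your SYNC response sets it to true, Google will only send fully-open or fully-closed values. Make sure it is false or omitted if you want percentages:
"attributes": {
  "discreteOnlyOpenClose": false
}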

Can I determine whether an Alexa Request was triggered by a routine or a user?

I have a need to differentiate between an explicit request and a request from a routine.
Here is an example. Let's say I am controlling a smart light. The light is able to detect occupancy.
If a user comes into the room and says "turn on the light", it will check occupancy and turn off.
However, if the user creates a scheduled routine to turn the light on, we should disable the occupancy check.
I don't see anything in the documentation for the TurnOn Directive that would indicate the source of the request.
Is there an indicator that I missed? Can I add some indicator? Or has anyone used a different approach to accomplish similar functionality?
The official response from Amazon is that you can't tell the difference. Here is a recent response from Amazon's Alexa developer forum: https://forums.developer.amazon.com/questions/218340/skills-invoking-routines.html
That said, you will generally see additional fields in the launch request if it is launched from a Routine:
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.abunchofnumbers",
"timestamp": "2020-01-18T22:27:01Z",
"locale": "en-US",
"target": {
"path": "AMAZON.Launch",
"address": "amzn1.ask.skill.abunchofnumbers"
},
"metadata": {
"referrer": "amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler"
},
"body": {},
"payload": {},
"targetURI": "alexa://amzn1.ask.skill.abunchofnumbers/AMAZON.Launch",
"launchRequestType": "FOLLOW_LINK_WITH_RESULT",
"shouldLinkResultBeReturned": true
}
The target, metadata, body, payload, targetURI, and launchRequestType fields are generally not found when a user launches a skill with their voice. HOWEVER, I do not believe the existence of these fields is unique to being launched by an Alexa Routine. I suspect you'll find them if the skill was launched when, for example, Alexa asks, "Hey, since you like the Blind Monkey skill, would you like to try Blind Pig?" and you say "yes."
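If you do decide to rely on that heuristic anyway, a minimal sketch of the check in Node.js (heuristic only, per the caveat above; the function name is mine):
// Heuristic: these fields are observed on routine launches but are not
// documented as being unique to routines.
function looksLikeRoutineLaunch(request) {
  return request.type === 'LaunchRequest'
    && request.metadata
    && request.metadata.referrer === 'amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler';
}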

Discord.js bot Custom presence with image

Currently trying to use setPresence for my bot, and I can get the name but no extra details for the 2nd line and no images.
bot.user.setPresence({
  game: {
    name: "Ready to brawl!",
    application_id: 'the id',
    details: "These sharks are ready for a fight.",
    type: 0
  },
  assets: {
    large_image: "large",
    large_text: "Do not jump into that tank...",
    small_image: "small",
    small_text: "c!help"
  },
  status: "dnd"
});
So what shows up is:
My bot is on DND and shows "Playing Ready to brawl!", but nothing else. The details part doesn't show up and there's no large or small image.
I've used a custom presence application before, so I assumed you needed your own Discord application on the developer site, so I made one. Its name is exactly the same as the "name" part, and I put its ID in the application_id part. (I'm not sure if it's bad to share the ID, so I excluded it.)
Before this I tried using large_image and small_image as plain inputs and gave them links, but neither that nor this application approach worked.
So if I'm seriously messing something up here, help would be appreciated.
That's normal: you actually can't use Rich Presence with a bot account. Maybe someday Discord will allow it.
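What a bot account can set is the activity name/type and the status. A minimal sketch, assuming discord.js v11 (the same game-based API your snippet uses):
// Only name, type and status take effect for bots; details, application_id
// and assets are ignored.
bot.user.setPresence({
  game: {
    name: "Ready to brawl!",
    type: 0 // 0 = Playing
  },
  status: "dnd"
});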

Errors with multi turn dialog in alexa skills

I have created a skill with the name "BuyDog" and its invocation name is "dog app".
So that should mean I can use the intents defined inside only after the invocation name is heard. (Is that correct?)
Then I have defined the intents with slots as:
"what is {dog} price."
"Tell me the price of {dog}."
where the slot {dog} is of slot type "DogType". I have marked this slot as required to fulfill the intent.
Then I have added the endpoint to an AWS Lambda function, where I have used the blueprint code of the factskills project in Node.js and made a few minor changes just to see it working.
const GET_DOG_PRICE_MESSAGE = "Here's your pricing: ";
const data = [
  'You need to pay $2000.',
  'You need to pay Rs2000.',
  'You need to pay $5000.',
  'You need to pay INR 3000.',
];
const handlers = {
  //some handlers.......................
  'DogIntent': function () {
    const factArr = data;
    const factIndex = Math.floor(Math.random() * factArr.length);
    const randomFact = factArr[factIndex];
    const speechOutput = GET_DOG_PRICE_MESSAGE + randomFact;
    // speak the response (alexa-sdk v1)
    this.emit(':tell', speechOutput);
  },
  //some handlers.......................
};
As per the above code, I was expecting that when
I say: "Alexa, open dog app"
it should just be ready to listen to the intent "what is {dog} price." and the other one. Instead, it says a random string from the Node.js code's data[] array. I was expecting this response only after the intent was spoken, since the slot is required for the intent to be complete.
And when
I say: "open the dog app and Tell me the price of XXXX."
It asks for "which breed" (that is my defined question) But it just works fine and show the pricing
Alexa says: "Here's your pricing: You need to pay $5000."
(or other value from the data array) for any XXXX (i.e. dog or not dog type).
Why is alexa not confirming the word is in slot set or not?
And when
I say: "open the dog bark".
I expected Alexa not to understand the question, but it gave me a fact about barking. WHY? How did that happen?
Does Alexa have a default set of skills, like searching Google/Amazon, etc.?
I am so confused. Please help me understand what is going on.
Without having your full code to see exactly what is happening and provide code answers, I hope an explanation of your problems/questions will point you in the right direction.
1. Launching Skill
I say: "Alexa open dog app"
It should just be ready to listen to the intent...
You are expecting Alexa to just listen, but actually Alexa opens your skill and expects you to have a generic welcome response at this point. Alexa will send a LaunchRequest to your Lambda. This is different from an IntentRequest, so you can determine this by checking request.type. Usually found with:
this.event.request.type === 'LaunchRequest'
I suggest you add some logging to your Lambda and use CloudWatch to see the incoming request from Alexa:
console.log("ALEXA REQUEST= " + JSON.stringify(event))
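While you are adding that, a minimal sketch of a welcome response in the same alexa-sdk v1 style as your handlers (the wording is hypothetical):
'LaunchRequest': function () {
  // Greet the user and keep the session open so they can ask for a price
  this.emit(':ask', 'Welcome to Buy Dog. Which breed would you like the price for?',
    'You can say, what is the labrador price.');
},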
2. Slot Value Recognition
I say: "open the dog app and Tell me the price of XXXX."
Why is alexa not confirming the word is in slot set or not?
Alexa does not limit a slot to the slot values set in the slotType. The values you give the slotType are used as a guide, but other values are also accepted.
It is up to you, in your Lambda Function, to validate those slot values to make sure they are set to a value you accept. There are many ways to do this, so just start by detecting what the slot has been filled with. Usually found with:
this.event.request.intent.slots.{slotName}.value;
If you choose to set up synonyms in the slotType, then Alexa will also provide her recommended slot value resolutions. For example, you could include "Rotty" as a synonym for "Rottweiler", and Alexa will fill the slot with "Rotty" but also suggest you resolve it to "Rottweiler".
var resolutionsArray = this.event.request.intent.slots.{slotName}.resolutions.resolutionsPerAuthority;
Again, use console.log and CloudWatch to view the slot values that Alexa accepts and fills.
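A minimal validation sketch in the same alexa-sdk v1 style (the accepted breed list and prompt wording are hypothetical):
'DogIntent': function () {
  const acceptedDogs = ['labrador', 'rottweiler', 'beagle']; // hypothetical list
  const dog = this.event.request.intent.slots.dog.value;
  if (!dog || acceptedDogs.indexOf(dog.toLowerCase()) === -1) {
    // Slot empty or not a breed you price: re-prompt instead of answering
    this.emit(':ask', 'Which breed would you like the price for?');
    return;
  }
  // ...look up and emit the price for the validated breed...
},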
3. Purposefully Fail to Launch Skill
I say: "open the dog bark".
I expected alexa to not understand the question but it gave me a fact about barking.
You must be doing this outside of your skill, where Alexa will take any input and try to match it to an enabled skill, or handle it with her best guess of default abilities.
Alexa does have default built-in abilities (not skills, really) to answer general questions and just be fun and friendly. You can see what she can do on her own here: Alexa - Things To Try
So my guess is, Alexa figured you were asking something about dog barks, and so provided an answer. You can try to ask her "What is a dog bark" and see if she responds with the exact same as "open the dog bark", just to confirm these suspicions.
To really understand developing an Alexa skill you should spend the time to get very familiar with this documentation:
Alexa Request and Response JSON Formats
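For orientation, a trimmed sketch of the part of an IntentRequest that carries the slot (values are hypothetical; see the documentation above for the full format):
"request": {
  "type": "IntentRequest",
  "intent": {
    "name": "DogIntent",
    "slots": {
      "dog": { "name": "dog", "value": "labrador" }
    }
  }
}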
You didn't post a lot of your code, so it's hard to tell exactly what you meant, but usually to handle incomplete dialogs you can add an incomplete-intent handler like this:
const IncompleteDogsIntentHandler = {
  // Occurs when the required slots are not filled
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'DogIntent'
      && handlerInput.requestEnvelope.request.dialogState !== 'COMPLETED';
  },
  async handle(handlerInput) {
    // Delegate back to Alexa so she prompts for the missing slot(s)
    return handlerInput.responseBuilder
      .addDelegateDirective(handlerInput.requestEnvelope.request.intent)
      .getResponse();
  }
};
You add this handler right above your actual handler, usually in the index.js file of your Lambda.
This might not fix all your issues, but it will help you handle the case when a user doesn't mention a dog.
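For completeness, a sketch of how the handlers might be registered, assuming the ASK SDK v2 used in this snippet (the other handler names are hypothetical):
const Alexa = require('ask-sdk-core');

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(
    LaunchRequestHandler,        // hypothetical welcome handler
    IncompleteDogsIntentHandler, // must be registered before the completed handler
    DogsIntentHandler            // hypothetical handler for the completed dialog
  )
  .lambda();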

How to display a file when a user requires it in Watson-Conversation?

I would like to know how to display a file when a user types something.
Ex: Show me the course details
Output: The file (PDF format), which is on my PC, gets displayed.
Basically, you need to know how Watson Conversation works: it is an API for creating intents, entities and your dialog flow.
Your application receives all the dialog nodes in the return from the API, and you create conditions to know whether the user asked something like "Show me the course details".
I recommend you create an intent like #aboutCourse and give Watson examples so it knows when the user is asking something with this purpose.
Something like:
Watson says: Hi! How can I help you?
User: Please show me the course details
Watson will recognize your intent and respond with whatever you put in the node with the intent condition #aboutCourse.
Make sure the user really wants this with:
Watson says: You really want to know details about the course?
User: yes / ok // or something to confirm
Or you can add an intent confidence level to this node condition, like: intents[0].confidence >= 0.75
Then your code will check if the intent is #aboutCourse and the user confirmed with #yes, and do something in your application.
Or you can create a context variable too, because, depending on your node flow, the recognized intent can change at every turn as Watson tries to recognize what the user wants.
In your dialog flow, you create a context variable and check whether the user says yes, like:
{
  "context": {
    "courseConfirm": "<? #yes ?>" // create one intent with confirm examples and value equal to yes
  },
  "output": {
    "text": {
      "values": [
        "Ok, you say #yes. I'll check, one moment."
      ],
      "selection_policy": "sequential"
    }
  }
}
And within your application:
function updateMessage(input, response) {
  if (response.context.courseConfirm == 'yes') {
    // do something in your application, e.g. return or display the course file
  }
}
Or you can create a function inside my example, like in this answer.
Note: this code example is based on the conversation-simple project from IBM Developers, but you would do something like my example with the same logic: get the return from the API and do something with it in your application.
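For the original question of displaying a PDF, a minimal sketch of the application side (assuming a Node.js/Express front end and a local course-details.pdf; both are my assumptions, not part of the conversation-simple project):
const express = require('express');
const path = require('path');
const app = express();

// Serve the course PDF so the chat UI can link to or embed it
// when response.context.courseConfirm is 'yes'
app.get('/course-details', (req, res) => {
  res.sendFile(path.join(__dirname, 'course-details.pdf'));
});

app.listen(3000);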
