Thermostat touch control is available for one action, but not for another action

I'm developing a Smart Home action for Actions on Google. I have two Smart Home actions with draft status. Each action returns an identical SYNC response that includes a thermostat device, as shown below.
{
  "payload": {
    "agentUserId": "1234",
    "devices": [
      {
        "id": "my-test-device-id",
        "type": "action.devices.types.THERMOSTAT",
        "deviceInfo": {
          "model": "L",
          "manufacturer": "L",
          "hwVersion": "1.0.0",
          "swVersion": "2.0.0"
        },
        "traits": [
          "action.devices.traits.TemperatureSetting"
        ],
        "willReportState": false,
        "name": {
          "name": "My AC"
        },
        "attributes": {
          "thermostatTemperatureUnit": "C",
          "availableThermostatModes": [
            "off",
            "heat",
            "cool",
            "on"
          ]
        }
      }
    ]
  },
  "requestId": "1695631778966374749"
}
When I link these actions in the Google Home app on a Google Pixel 3 (Android 11), the thermostat's touch control is available for one action, but not the other. When touch control is unavailable, the thermostat gets a gear icon instead. When I press that icon, the "Device settings" screen appears and I cannot change the temperature. What could be the cause of this difference?
Screenshots (tested on Google Pixel 3, Android 11): thermostat with touch controls; thermostat without touch controls (gear icon only).

I have come to the conclusion that my Google project ID is receiving special treatment from Google that disables thermostat touch control in my smart home action.
Specifically, if the project ID is prefixed with nature-remo-smart-home, then thermostat touch control is not available. However, if I create a new project with a different prefix, touch control is available.

Related

How to play 2 audio files in one response

I want to play a personal greeting (mp3) in my Alexa skill's launch intent and start an audio stream directly when the first mp3 is finished. I tried it with responseBuilder and addAudioPlayerPlayDirective, but you can add only one such directive to a response, and I have 2 files to play one after another. Does anyone have an idea how to solve this?
AudioPlayer is better suited to long-form audio like meditations or songs. Once your skill starts the audio player, the custom skill session ends and your users can't do what they could normally do while in your skill.
It sounds more like you just want to play some shorter audio clips to greet your users. If that's the case and your audio files meet the requirements, APL for Audio (APLA) may be a better solution for you.
Here's an example directive your skill can include in its response to play two audio files, one after the other, and then have Alexa say something.
{
  "type": "Alexa.Presentation.APLA.RenderDocument",
  "token": "developer-provided-string",
  "document": {
    "type": "APLA",
    "version": "0.91",
    "mainTemplate": {
      "parameters": [
        "payload"
      ],
      "item": {
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "soundbank://soundlibrary/ui/gameshow/amzn_ui_sfx_gameshow_intro_01",
            "filters": [
              {
                "type": "Volume",
                "amount": "20%"
              },
              {
                "type": "FadeIn",
                "duration": 1000
              }
            ]
          },
          {
            "type": "Audio",
            "source": "soundbank://soundlibrary/alarms/beeps_and_bloops/bell_01"
          },
          {
            "type": "Speech",
            "content": "Hello world!"
          }
        ]
      }
    }
  }
}
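If you build the skill with the ASK SDK for Node.js, the directive can be attached to a response with addDirective. Below is a minimal sketch of a launch request handler doing that; the aplaDocument.json import is a hypothetical local copy of the document shown above, not something the SDK provides.

// TypeScript sketch using ask-sdk-core; aplaDocument.json is assumed to hold the APLA document above.
import * as Alexa from 'ask-sdk-core';
import aplaDocument from './aplaDocument.json';

const LaunchRequestHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .addDirective({
        type: 'Alexa.Presentation.APLA.RenderDocument',
        token: 'developer-provided-string',
        document: aplaDocument,
        datasources: {}
      })
      .getResponse();
  }
};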
If what you really need is the AudioPlayer, have your skill issue a Play directive to start the first clip. Then add a handler to your skill to capture the AudioPlayer.PlaybackNearlyFinished request. In that handler, return another Play directive that queues up the next audio clip.
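A minimal sketch of such a handler with the ASK SDK for Node.js follows; the URL and tokens are placeholders, not real resources.

// TypeScript sketch using ask-sdk-core; STREAM_URL and the tokens are hypothetical placeholders.
import * as Alexa from 'ask-sdk-core';

const STREAM_URL = 'https://example.com/stream.mp3';

const PlaybackNearlyFinishedHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'AudioPlayer.PlaybackNearlyFinished';
  },
  handle(handlerInput) {
    // Queue the next clip behind the one that is about to finish.
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective(
        'ENQUEUE',        // playBehavior
        STREAM_URL,       // url of the next clip
        'stream-token',   // token for the queued clip
        0,                // offsetInMilliseconds
        'greeting-token'  // expectedPreviousToken: token of the clip currently playing
      )
      .getResponse();
  }
};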

Locale ignored in APLA Alexa Developer Console

I'm new to developing skills for Alexa. I've followed the Build Multi-turn Skills Tutorial with Alexa Conversations up to module 3.
Because I want to develop a skill only for German users, I've changed the language settings of my skill in the Alexa developer console to support only German.
I changed the APLA code from the tutorial, using the "edit audio response" editor, to this:
{
  "type": "APLA",
  "version": "0.8",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Selector",
      "strategy": "randomItem",
      "items": [
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen bei meiner App"
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen."
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'en-US'}",
          "content": "Welcome."
        }
      ]
    }
  }
}
At the bottom of the console I see that my locale is set to German, but when I preview the APLA above, the audio player always says "Welcome." with the English voice; the other two options are never triggered. What am I missing here?
The audio response tool doesn't take into account the language of the website.
There is no way to test the environment.alexaLocale condition in this tool.
To test it, update the code of your skill and test it either on the Test tab of your skill in the developer console or directly on a real device. I just tested with your code and it works perfectly, just not in the audio tool.
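If you want to double-check which branch should fire while testing there, one option is to log the locale your skill actually receives; the ${environment.alexaLocale} conditions in the document are evaluated against that request's locale. A minimal sketch, assuming the ASK SDK for Node.js:

// TypeScript sketch using ask-sdk-core.
import * as Alexa from 'ask-sdk-core';

function logRequestLocale(handlerInput: Alexa.HandlerInput): void {
  // A German device or simulator session prints "de-DE", which is the value
  // the APLA Selector compares against in its "when" conditions.
  const locale = Alexa.getLocale(handlerInput.requestEnvelope);
  console.log(`Request locale: ${locale}`);
}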

How does Google Smart Home determine channelNumber for action.devices.commands.selectChannel?

1. Created a Google Smart Home Action.
2. Implemented a device with:
a. deviceType = action.devices.types.SETTOP
b. deviceTrait = action.devices.traits.Channel
3. The device is successfully discovered and added to the Google Home app's Home Graph.
4. The user sends the command: "Ok Google, change to ESPN"
5. I receive the following JSON at the fulfillment URL:
{
  "requestId": "[RequestId GUID]",
  "inputs": [{
    "intent": "action.devices.EXECUTE",
    "payload": {
      "commands": [{
        "devices": [{
          "id": "[SettopBox device Id]"
        }],
        "execution": [{
          "command": "action.devices.commands.selectChannel",
          "params": {
            "channelCode": "espn",
            "channelName": "ESPN",
            "channelNumber": "206"
          }
        }]
      }]
    }
  }]
}
Questions:
1. How does Google Smart Home determine the "channelNumber" value for "ESPN"? The user's command was "Ok Google, change to ESPN", which does not contain any information about the channel number.
2. If a provider was set automatically, is there a setting in Google Home or Google Assistant to change this provider?
The number of a channel for the Channel trait is provided in the SYNC response along with any relevant labels.
{
  "availableChannels": [
    {
      "key": "ktvu2",
      "names": [
        "Fox",
        "KTVU"
      ],
      "number": "2"
    },
    {
      "key": "abc1",
      "names": [
        "ABC",
        "ABC East"
      ],
      "number": "4-11"
    }
  ]
}
As shown in the snippet, the channel number comes from the service. It is up to the developer of the integration how these numbers are determined, whether from a cable provider or over-the-air listings. The field is optional, so a service without channel numbers can still work by channel name.
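On the fulfillment side, the selectChannel params shown in the question can then be matched back to the same channel list that SYNC declared. Here is a minimal sketch of that lookup; tuneToChannel and AVAILABLE_CHANNELS are hypothetical stand-ins for however your integration actually controls the set-top box.

// TypeScript sketch; tuneToChannel and AVAILABLE_CHANNELS are illustrative placeholders.
interface AvailableChannel {
  key: string;
  names: string[];
  number?: string; // optional, mirrors the SYNC attribute
}

const AVAILABLE_CHANNELS: AvailableChannel[] = [
  { key: 'espn', names: ['ESPN'], number: '206' }
];

declare function tuneToChannel(channel: AvailableChannel): Promise<void>;

// Params as delivered with action.devices.commands.selectChannel.
async function handleSelectChannel(params: {
  channelCode?: string;
  channelName?: string;
  channelNumber?: string;
}): Promise<void> {
  // channelCode matches the "key" declared in SYNC; channelNumber is echoed
  // back from the same SYNC data when it was provided.
  const channel = AVAILABLE_CHANNELS.find(
    (c) =>
      c.key === params.channelCode ||
      (params.channelNumber !== undefined && c.number === params.channelNumber)
  );
  if (!channel) {
    throw new Error('Unknown channel'); // map to an EXECUTE error status in practice
  }
  await tuneToChannel(channel);
}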

Azure Indoor Maps not rendering

I'm trying to upload a floor plan to Azure Indoor Maps. It was uploaded using Postman and I got the tilesetId, but when I provide the tilesetId in the Azure Indoor Maps sample, it does not render the image in the HTML file. When I use the sample zip file provided by Azure, it works fine.
I'm following the article as shown in Azure Indoor Maps.
(Screenshot: AutoCAD settings)
Below is the manifest file:
{
  "version": "1.1",
  "directoryInfo": {
    "name": "Digital Twins Testing Building",
    "streetAddress": "Contoso Way",
    "unit": "1",
    "locality": "Eastside",
    "postalCode": "00000",
    "adminDivisions": [
      "Contoso City",
      "Contoso State",
      "United States"
    ],
    "hoursOfOperation": "Mo-Fr 08:00-17:00 open",
    "phone": "1 (425) 555-1234",
    "website": "www.contoso.com",
    "nonPublic": false,
    "anchorLatitude": 33.44277,
    "anchorLongitude": -112.072754,
    "anchorHeightAboveSeaLevel": 1000,
    "defaultLevelVerticalExtent": 2
  },
  "buildingLevels": {
    "levels": [{
      "levelName": "Ground Level",
      "ordinal": 0,
      "verticalExtent": 5,
      "filename": "./GroundLevelFloorPlan.dwg"
    }]
  },
  "georeference": {
    "lat": 33.44277,
    "lon": -112.072754,
    "angle": 0
  },
  "dwgLayers": {
    "exterior": [
      "exterior"
    ],
    "unit": [
      "unit"
    ]
  }
}
From the manifest, I see you specified loading only the exterior and unit layers and didn't pass the label layer, which is what brings in labels and lets you add more properties for the units (or zones). If you don't see the map, I would suggest checking the conversion results (see here), which is always a good practice. Another good way to troubleshoot is to review the content of the dataset via the WFS API, for example the units via https://atlas.microsoft.com/wfs/datasets//collections/unit/items?api-version=1.0&subscription-key={{subcriptionkey}}
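For that WFS check, here is a minimal sketch of the request using Node 18+ fetch; DATASET_ID and SUBSCRIPTION_KEY are placeholders for your own dataset id and Azure Maps key.

// TypeScript sketch; DATASET_ID and SUBSCRIPTION_KEY are placeholders.
const DATASET_ID = '<your-dataset-id>';
const SUBSCRIPTION_KEY = '<your-subscription-key>';

async function listUnits(): Promise<void> {
  const url =
    `https://atlas.microsoft.com/wfs/datasets/${DATASET_ID}/collections/unit/items` +
    `?api-version=1.0&subscription-key=${SUBSCRIPTION_KEY}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`WFS request failed: ${response.status}`);
  }
  // If the conversion succeeded, this GeoJSON FeatureCollection should contain
  // one feature per unit drawn in the DWG "unit" layer.
  console.log(JSON.stringify(await response.json(), null, 2));
}

listUnits().catch(console.error);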

Cannot Access WebExtension APIs

I have the following manifest.json:
{
  "manifest_version": 2,
  "name": "Application Name",
  "version": "1.0",
  "description": "blah blah blah",
  "icons": {
    "48": "icons/icon-48.png",
    "96": "icons/icon-96.png"
  },
  "permissions": [
    "activeTab",
    "tabs",
    "history",
    "storage"
  ],
  "browser_action": {
    "default_icon": "icons/icon-32.png",
    "default_title": "Title",
    "default_popup": "popup/popup.html"
  },
  "content_scripts": [{
    "matches": [
      "<all_urls>"
    ],
    "js": [
      "content_scripts/script1.js",
      "content_scripts/script2.js"
    ]
  }]
}
I have access to the storage API (browser.storage is defined) in my content scripts, but both the history and tabs APIs (browser.history and browser.tabs) are undefined. Am I missing something in the manifest to get access to these permissions?
One of the few WebExtension APIs that are available to content scripts is browser.storage. Most WebExtension APIs can only be accessed from a background script. Using message passing, you can still call those APIs from a content script (you are basically calling a function in the background script from within the content script). Please see the example on this page: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/API/runtime/sendMessage
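For illustration, a minimal sketch of that pattern; it assumes the manifest also declares a background script (the file names below are just examples).

// content_scripts/script1.js (sketch): ask the background script to do the work.
browser.runtime.sendMessage({ type: 'get-active-tab' }).then((tab) => {
  console.log('Active tab URL:', tab && tab.url);
});

// background.js (sketch): background scripts do have access to browser.tabs and browser.history.
browser.runtime.onMessage.addListener((message) => {
  if (message.type === 'get-active-tab') {
    // Returning a promise sends its resolved value back to the content script.
    return browser.tabs.query({ active: true, currentWindow: true }).then((tabs) => tabs[0]);
  }
});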
See also Firefox WebExtention API: TypeError: browser.browserAction is undefined for a similar problem.
