Skill under development not available on all Alexa devices

I am building a piggy bank skill for my kids (no plans to publish it yet). After completing development, the skill immediately became available in the Alexa app on my phone and on one of the two Alexa devices I own (both Echo Shows). Whenever I ask the non-functioning Echo Show to open Piggy Bank, it replies with "I can't do that, but for other ideas you can say, Alexa, open Amazon Kids." What could make a skill under development available on one device but not on another?

Answering my own question in case someone else runs into the same issue. It took me many hours to figure out the problem; I even built a second skill and performed a full factory reset. The clue that ultimately led me down the right track was the Amazon Kids part of Alexa's response. I had Amazon Kids enabled on that device because it is in our family room, and I had re-enabled it right away after the factory reset. After disabling Amazon Kids, the skill under development started working as intended.

Related

Making an energy recovery ventilator - practical to make it Matter enabled?

I posted something much like this on the openHAB and Home Assistant forums too; I will decide what to do based on what I hear.
I am trying to produce an open source Energy Recovery Ventilator, and software is not my forte.
I frankly find the sheer variety and quantity of buzzwords and subsystems in the home automation sphere difficult to navigate. I am unclear on why exactly things have to be so complicated... anyway.
I am using a Raspberry Pi Pico running MicroPython. Do you think it would be practical to make it appear to a Matter hub as, basically, a fan with several different modes? Maybe also report back some info so the user can see status updates, etc.?
What I want, basically, is to allow it to be controlled by a hub, which may be running on a phone or someone's PC, so the hub's user interface can be used to turn the device off and on, or up and down, on a schedule, and so it can be connected to other devices like a CO2 detector, a smart switch, etc.
Sooner or later I will need, possibly with the help of module(s) running on the Pico that cache data (like schedule data), get the time, and so on, a dictionary that the rest of the system interfaces to. The main loop consults the dictionary to determine behaviour at any given moment. The hub checks what time of day it is, etc., and sends that info along.
Is this sort of thing doable?
I tried to look into making the thing Alexa-compatible and, ye gads, it would take me months to get that stuff working. They make everything so complicated.
I found some stuff for ESP32 devices, like ESPHome, but it is not practical to use as a module in a larger system. MQTT looks like it could play an important role, but it doesn't quite get me there, and for some reason Alexa, Google Home, etc. still can't really talk to MQTT devices very well, especially when it comes to device setup. Basically, envision a little hardware device that just serves up some fields and takes back some fields, then appears as a device in Google Home's app, etc. I need that, but as a software module that runs on a Pico (roughly the pattern sketched below). Is it practical to roll this, or is it going to be an ungainly undertaking?
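As a rough illustration of the "shared dictionary the main loop consults" pattern described in the question, here is a minimal MicroPython sketch for a Pico W, using the umqtt.simple client as one possible transport. The broker address, topic names, and field layout are invented for illustration, Wi-Fi setup is omitted, and the Matter or voice-assistant side would live on the hub or a bridge rather than on the Pico.

# Minimal sketch of the shared control dictionary, with MQTT as one possible transport.
# Assumes a Pico W with the umqtt.simple package installed; broker address, topics and
# field names below are made up for illustration.
import json
import time
from umqtt.simple import MQTTClient

state = {"mode": "auto", "fan_speed": 1}  # the dictionary the main loop consults


def on_message(topic, msg):
    # The hub publishes JSON like {"fan_speed": 3}; merge it into the shared dict.
    try:
        state.update(json.loads(msg))
    except ValueError:
        pass  # ignore malformed payloads


def apply_fan_speed(speed):
    # Placeholder: drive PWM / relay outputs here.
    pass


# (Wi-Fi setup omitted for brevity.)
client = MQTTClient("erv-pico", "192.168.1.10")  # hypothetical broker address
client.set_callback(on_message)
client.connect()
client.subscribe(b"erv/set")

while True:
    client.check_msg()                     # non-blocking: pull any pending commands
    apply_fan_speed(state["fan_speed"])    # main loop acts on the dictionary
    client.publish(b"erv/status", json.dumps(state).encode())  # report status back to the hub
    time.sleep(1)

Whether the hub side is Home Assistant, openHAB, or a Matter bridge, the Pico-side code stays the same: everything it knows is in one dictionary, and the transport only reads and writes that dictionary.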

Can an Alexa skill keep the microphone always on?

I have been a C/C# developer for many years but haven't written any Alexa apps. I would like to write a skill to listen for baby babble (NOT WORDS) and respond in different ways. I would like my Alexa skill to keep the microphone ALWAYS on (similar to how "Alexa, Guard" works) because babies speak randomly.
Is there some sample code I can look at?
You can't do this.
There is no way to keep the microphone open with a custom skill.
When you activate the skill ("open my skill"), Alexa starts talking and then, when she stops, you have only 8 + 8 seconds to speak; otherwise, the session is closed.
So you have 8 seconds; then, if you don't speak, there is a re-prompt phrase asking you to say something, and another 8 seconds in which you can speak.
If you still don't, the session is closed.
There is a way to keep the session open for more than 8 seconds (playing some muted audio, for example), but the problem remains, because you can only speak after that "music" has finished.
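To make the timing concrete, here is a minimal sketch of where the re-prompt fits in a custom skill, using the Python ASK SDK (ask-sdk-core); the handler and speech strings are illustrative, not a working baby-babble detector.

# Minimal sketch with the Python ASK SDK (ask-sdk-core); handler and strings are illustrative.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type


class LaunchRequestHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Hi. Say something."
        reprompt = "Are you still there? Say something."
        # speak() is played first, then Alexa listens for roughly 8 seconds; ask() sets
        # the re-prompt that is played before the final 8-second window. After that the
        # session closes, and no response option keeps the microphone open indefinitely.
        return (handler_input.response_builder
                .speak(speech)
                .ask(reprompt)
                .response)


sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
lambda_handler = sb.lambda_handler()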

Developing an Alexa skill with custom wait time after re-prompt

I am trying to develop an Alexa skill with a custom delay time. Currently, whenever a user asks a question, Alexa responds to it and waits for 8 seconds. After this, there is re-prompt speech (if present) and Alexa again waits for 8 seconds. This 16-second wait is followed by session closure.
I want the re-prompt to stay active even if the user does not say anything before the 16-second timeout. Is that possible?
This is not currently possible, because Alexa will wait for a maximum of eight seconds before closing the session. You can add a re-prompt to remind the user that a response is required to continue the skill interaction, but you are not allowed to leave the session open for an undefined period of time. This lets the user ask for different skills and first-party features without having to close a skill manually every time.
As with natural conversation, if the Alexa service thinks a question asked is misunderstood or confusing, re-prompts allow Alexa to clarify and reformulate a question to get the answer Alexa is seeking. Shorten a re-prompt for brevity when a customer is familiar enough with the context of a conversation that they won’t need the entire prompt again immediately. The key is that you provide enough information to guide the customer, understanding that you are essentially 8 seconds away from losing that connection if they don’t know how to answer. While re-prompts must be understandable, they provide an opportunity to expand on the initial request to get the conversation moving.
You can find additional info here:
https://developer.amazon.com/en-US/docs/alexa/alexa-design/available.html

Google Play Game Services - Real Time Multiplayer - How to get the delay

I am using the Google Play Game Services real-time multiplayer API to add a multiplayer feature to my mobile games. The engine I am using is Unity3D, but I believe my question has nothing to do with Unity, so that is not important.
What I would like to know is the delay of the messages received over the internet, so I can make my games smooth and synchronized.
I know that in other APIs, like Photon, you can easily find the delay of the message being received, but I can't seem to find it in the Google Play Game Services API.
Is there any way to know the delay of received messages in the Google Play Game Services API?
Thank you for your time!
Determining the latency of messages is a bit complex in the case of the Google real-time multiplayer API, since the connections are peer-to-peer, so most of the data travels directly from one player to the other (see https://developers.google.com/games/services/common/concepts/realtimeMultiplayer#messaging for details).
The short answer is that you can estimate it yourself by adding sequence numbers to the messages and then exchanging the time difference each client experienced between messages. I recommend measuring several messages and message sizes, and not keeping too much history, since conditions will change. Something like the average time between messages over the last 30-100 messages, and then plan for the slowest link (a rough sketch of this is below).
To make a good real-time game, you really should assume the latency is variable (sometimes it is low, sometimes high), and it is always longer than you want :)
You might want to check out https://gamedev.stackexchange.com/questions/58450/mobile-multiplayer-games-and-coping-with-high-latency which has a good discussion on how to handle this situation.
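As a rough illustration of the sequence-number approach described above, here is a small, self-contained Python sketch of a rolling estimator. The class name, window size, and message fields are invented for illustration; it compares the sender's spacing between messages with the spacing observed on arrival, so the two devices' clocks never need to be synchronised, and it reports relative delay (jitter) rather than absolute one-way latency.

import time
from collections import deque


class LatencyTracker:
    # Rolling relative-delay (jitter) estimate from sequence-numbered messages:
    # compare the sender's spacing between messages with the spacing seen on arrival.
    def __init__(self, window=50):            # 30-100 samples, as suggested above
        self.deltas = deque(maxlen=window)
        self.prev = None                       # (seq, sent_ms, recv_ms) of last message

    def on_message(self, seq, sent_ms):
        recv_ms = time.monotonic() * 1000.0
        if self.prev is not None and seq == self.prev[0] + 1:
            send_gap = sent_ms - self.prev[1]      # spacing chosen by the sender
            recv_gap = recv_ms - self.prev[2]      # spacing we actually observed
            self.deltas.append(recv_gap - send_gap)  # > 0 means the link slowed down
        self.prev = (seq, sent_ms, recv_ms)

    def average_ms(self):
        return sum(self.deltas) / len(self.deltas) if self.deltas else 0.0

    def worst_ms(self):
        return max(self.deltas) if self.deltas else 0.0

Each message would carry its sequence number and the sender's timestamp; on receipt you call on_message(seq, sent_ms), and you would typically size your buffering and interpolation off worst_ms() rather than the average.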

WebRTC without WebRTC

My problem is this...
I have two sites, one acting as an "Admin" site, the other as a general "User" site. I need to broadcast live audio from the "Admin" site to all clients of the "User" site. I need to do this with <1 sec of latency.
Some restrictions include:
No install on "User" machines (the idea being the whole thing sits on the web)
If there needs to be a 3rd party plugin then Silverlight is preferred*
Any help much appreciated here
*I have tried IceCast with a Flash client, IIS Smooth Streaming, and internet radio, all of which give us a latency of >5 secs.
Have you tried Flash with a server like Red5? You're generally going to get subsecond latency (though not much less than that), as it's designed for realtime communications. There's a learning curve with Flex and ActionScript, but if you're at all familiar with XAML, you can pick it up from the sample apps that come with Red5 pretty quickly.
Failing that, if there aren't too many clients, you can use one of the two real-time peer-to-peer solutions out there, namely Flash over RTMFP or WebRTC over JSEP/ICE/RTP. If you can ensure that all the clients are using Chrome, then WebRTC is probably your best bet. If you can ensure that they're not using Chrome, then Flash is a good choice. The current Flash Pepper client on Chrome is buggy up the wazoo when it comes to audio processing, and no sign of a fix in sight. (It doesn't support echo cancellation, and the volume of the audio goes up and down horribly.) So if you're using Flash, steer clear of recording and broadcasting your audio on Chrome. And I wouldn't recommend either approach if you have more than half a dozen clients - the number of audio streams is gonna overwhelm your "Admin" browser pretty quickly, I think. Better to push that out to something like a Red5 server.
Silverlight is a bad choice for more reasons than I can count. I'm saying this as a guy who spent several years trying to implement a realtime communication solution on Silverlight. Don't do it.
