I have searched for how to implement Alexa on the Raspberry Pi, but everywhere I only found examples of the sample app.
I am looking to create an actual product using Alexa. Can someone tell me whether the AVS Device SDK is the right choice?
And how can I change its wake word once I install it?
I was searching for this too and found the issue below, but it didn't make things clear:
https://github.com/alexa/avs-device-sdk/issues/610
Thanks.
The AVS Device SDK is the right place to start.
Right now, according to the documentation, the only supported wake word is "Alexa".
There's a tutorial for building an Alexa prototype on a Raspberry Pi:
https://developer.amazon.com/docs/alexa-voice-service/required-hardware.html
And @Rubbal is correct that "Alexa" is currently the only supported wake word for third-party Alexa developers.
I've been looking at the SPIFFS file system project on https://github.com/pellepl/spiffs and it seems to have a pretty large user community. I've been looking at the wiki pages on integration and configuration, which seem to be well documented. I didn't see a user forum, though; is there one?
My question pertains to the actual formatting of the flash device. In the examples shown on the wiki pages I didn't see example code for formatting the flash. There's a description, but it doesn't show an example of how to use the API.
I'm hoping someone might know of an example of how to use the APIs. Any help is greatly appreciated.
I'm all set. I took another look at the APIs and figured out how to use them to mount and format the SPI flash to initialize it. I ran through the test example and it works; it went better than expected. (For anyone else who lands here: as far as I can tell from the wiki, SPIFFS_format must be called while the filesystem is unmounted, so the usual sequence is to try SPIFFS_mount, and if it fails with "not a filesystem", call SPIFFS_unmount, SPIFFS_format, and then SPIFFS_mount again.)
I'm planning an experiment where we will set up a Google Assistant or Alexa device and see how people interact with voice assistants in a certain environment. It's basically a Wizard of Oz experiment (https://en.wikipedia.org/wiki/Wizard_of_Oz_experiment). Is it possible to intercept the voice commands before they get passed to the Assistant or Alexa? That would let me decide whether to handle the user input myself or let Google/Alexa handle it.
Will you be using a purchased "original" device, or will you build it yourself using, e.g., a Raspberry Pi?
For the former this won't be possible out of the box. However, I recently stumbled upon an article describing a new device that achieves something that might help you: it lets you "reprogram" the activation word for Alexa and Google Assistant. The article mentions that the device's hardware is a Raspberry Pi, so I guess you could build something similar yourself. That was also the first idea that came to my mind.
I would imagine something like this:
On your Raspberry Pi you have a script (I guess Python would be easiest) that listens for the wake word, e.g. "Alexa", and also records the speech that follows. However, you don't have Alexa itself running yet, so it doesn't get triggered. Your script also contains the logic for deciding when to pass the command on to Alexa and what to do with it otherwise. When it decides that the command should be passed on, the script starts Alexa and replays the recording, triggering it the same way the user would have triggered it in the first place. A rough sketch of such a script is below.
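Here is a minimal, untested sketch of that gating script in Python. I'm assuming the Picovoice Porcupine engine ("alexa" is one of its built-in keywords) and PyAudio for capture; the recording, decision and replay steps are placeholders you would have to implement yourself:

    import struct

    import pvporcupine
    import pyaudio

    # "alexa" ships as a built-in keyword; recent Porcupine versions also
    # require a (free) access key from the Picovoice console.
    porcupine = pvporcupine.create(access_key="YOUR_ACCESS_KEY",
                                   keywords=["alexa"])

    pa = pyaudio.PyAudio()
    stream = pa.open(rate=porcupine.sample_rate, channels=1,
                     format=pyaudio.paInt16, input=True,
                     frames_per_buffer=porcupine.frame_length)

    while True:
        frame = stream.read(porcupine.frame_length)
        pcm = struct.unpack_from("h" * porcupine.frame_length, frame)
        if porcupine.process(pcm) >= 0:
            # Wake word detected: record the utterance that follows, then
            # either handle it yourself (Wizard of Oz) or replay it to a
            # freshly started Alexa instance.
            handle_utterance(stream)  # placeholder for your own logic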
Another idea would be to use two microphones: one for your script and one for Alexa, with your script being able to mute/unmute them (a sketch of this follows, too).
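On Linux, the muting could be done by toggling the ALSA capture switch with amixer. Treat the control name ("Capture") and the card numbers as assumptions; they vary by hardware:

    import subprocess

    def set_mic(card, enabled):
        # Toggle the ALSA capture switch of one sound card. "Capture" is the
        # usual control name for USB mics; check `amixer -c <card> scontrols`.
        subprocess.run(["amixer", "-c", str(card), "set", "Capture",
                        "cap" if enabled else "nocap"], check=True)

    set_mic(0, True)    # card 0: the mic feeding your script
    set_mic(1, False)   # card 1: Alexa's mic, muted until you decide to forward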
Please take into account that these are just spontaneous ideas. It's entirely possible that I've missed something and this wouldn't work. But until somebody who has done this before comes along, I'd give it a try!
By implementing "canFulfillIntentRequest", can I launch my custom intent without saying the skill's name while it is already playing audio? For instance, instead of saying:
"Alexa, ask <inovation name> to get me the latest on China"
can I say?
"Alexa, get me the latest on China"
Any help will be highly appreciated.
If the skill is invoked and is already playing the audio, then it depends on the shouldEndSession variable: if shouldEndSession is set to True while playing the audio file, then no; otherwise yes. You could ask at the end of the audio file whether the user wants to hear more, or something similar.
And if the skill is not invoked, then it is not possible.
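For reference, a CanFulfillIntentRequest is answered with a canFulfillIntent object rather than normal speech output. Here is a minimal sketch of such a handler in Python, assuming a raw Lambda-style handler and a hypothetical "Topic" slot:

    def lambda_handler(event, context):
        # Answer CanFulfillIntentRequest so Alexa can consider this skill for
        # name-free requests like "Alexa, get me the latest on China".
        if event["request"]["type"] == "CanFulfillIntentRequest":
            return {
                "version": "1.0",
                "response": {
                    "canFulfillIntent": {
                        "canFulfill": "YES",
                        "slots": {
                            # "Topic" is a hypothetical slot name for this sketch.
                            "Topic": {"canUnderstand": "YES", "canFulfill": "YES"},
                        },
                    },
                },
            }
        # ... normal request handling goes here; keep shouldEndSession False in
        # those responses if the session should stay open for follow-ups.

Note that implementing this only makes the skill eligible for name-free invocation; Alexa still decides at runtime whether to route such a request to your skill.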
Question 1:
I want to use a fingerprint scanner in Codename One. Can anybody tell me whether it is available in Codename One or not? If yes, how do I use it, and if not, how can I code it in Codename One?
Question 2:
How do I get the maximum amount of device info in Codename One, such as the Android version, the phone model, and other details?
Thanks,
No. Fingerprint scanning isn't available at this time.
You can use native interfaces to integrate native device functionality; check out this quick video and the advanced section in the developer guide.
Device information is available via Display.getProperty() as well as some other methods in that class. Notice that if you request things such as the UDID you will get a permission prompt.
I am very new at this and am trying to get an understanding of it. I have read a lot on the DroneKit-Python site trying to figure out exactly how I can communicate with the drone.
The drone I am currently using is an Iris+.
I have looked around and there is software that already provides this, but I want to be able to control it and more.
I want to set waypoints, tell it to fly to the given waypoints, and have it keep going to them. I also want it to be able to arm itself, which is in the example, and to override the safety mechanisms.
Here is the basic idea of what I am trying to use it for: have it fly up at a certain time, go to waypoints 1, 2, 3, 1, etc., and then after X amount of time or on low battery return to the launch point and land.
I have found plenty of code that does what I need, though I don't know if it will work, and more importantly I don't even know how to start programming for this. Maybe I have the wrong approach?
I kind of want this to be a light API, so that in the future I can make a simple UI on my phone, enter some coordinates to give it waypoints, and that is it. I know there is software out there that already does this, but I want to remove the need to touch the drone. I want it to start and end autonomously.
If anyone could provide some info, that would be greatly appreciated.
Assuming you have no companion computer (the Iris+ does not by default), that you are OK with running a ground station app (so you won't be out of range to send the "end mission on time expiry" command), and that driving the behaviour from your phone is important, I would look at DroneKit-Android.
Some notes:
You're going to have to touch the drone at some point to attach the batteries.
You can arm the device from DroneKit.
You can override the safety mechanisms from a script. I hope you have a lot of money to pay for the new drones you're going to have to buy when they crash, and for all the litigation from injured people and damaged property (in other words, "don't do it").
The default behaviour on low battery is to return the device to launch (RTL). This is configurable.
Setting a time is more problematic. You can have a timer in a script that then sends return-to-launch, but the script needs to be connected to the UAV. This means that either you have to be running it in a connected ground station (which might potentially be out of range) or on a companion computer. There's a sketch of such a script at the end of this answer.
The Iris+ does not have a companion computer. You would have to install one or connect from a ground control station.
DroneKit-Python runs on Linux, Mac OS X or Windows. You can't just run it on an ordinary phone, though you could find some other mechanism to send messages/scripts to an instance of it running on a companion computer.
DroneKit-Android runs on Android. We do have a planned iOS version too. In theory these could run on a companion computer, but in practice they are currently only used as ground stations.
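For completeness, here is a rough, untested DroneKit-Python sketch of the mission described in the question (arm, take off, loop over the waypoints, then RTL on a timer or low battery). The connection string, coordinates, timings and battery threshold are placeholders, and a real script would want is_armable checks and proper distance-to-waypoint logic:

    import time

    from dronekit import connect, VehicleMode, LocationGlobalRelative

    # Placeholder endpoint; use your telemetry radio or SITL connection string.
    vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

    # Arm in GUIDED mode and take off to 20 m.
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:
        time.sleep(1)
    vehicle.simple_takeoff(20)
    while vehicle.location.global_relative_frame.alt < 19:
        time.sleep(1)

    # Placeholder waypoints (lat, lon, altitude relative to home in m).
    waypoints = [
        LocationGlobalRelative(-35.3632, 149.1652, 20),
        LocationGlobalRelative(-35.3629, 149.1649, 20),
        LocationGlobalRelative(-35.3627, 149.1654, 20),
    ]

    deadline = time.time() + 10 * 60  # end the mission after 10 minutes

    for wp in waypoints + [waypoints[0]]:  # visit 1, 2, 3, then back to 1
        vehicle.simple_goto(wp)
        time.sleep(30)  # crude pacing; better: check distance to the target
        if time.time() > deadline or (vehicle.battery.level or 100) < 30:
            break

    vehicle.mode = VehicleMode("RTL")  # return to launch and land
    vehicle.close()

Remember, though, that this only works for as long as the machine running the script stays connected to the vehicle.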