Is there any way to switch the language for skills? It seems to me that Alexa directly understands German as well as English, but when using Alexa through a skill, English is the preferred language, so slots are not being recognized properly.
When you create a skill you can choose a "default language" next to your "Skill name".
When your skill is already created you can set the Availability of your skill.
Maybe your device itself is not configured to the correct language?
Did you try the "Alexa Simulator" in the developer console? There you can change the language as well to test your skill.
You should have a look at the JSON input to see what is written for "locale".
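For example, here is a minimal sketch (assuming a raw AWS Lambda backend written in Python, without the ASK SDK) that simply echoes the locale from the request envelope, so you can see which interaction model was actually hit. In the Python ASK SDK, the same value is available as handler_input.request_envelope.request.locale.

```python
# Minimal sketch of a raw Lambda handler that reports the locale Alexa
# sent with the request, e.g. "de-DE" or "en-US".
def lambda_handler(event, context):
    locale = event["request"]["locale"]
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"This request used the locale {locale}."
            },
            "shouldEndSession": True
        }
    }
```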
Here are some more details: https://developer.amazon.com/docs/custom-skills/develop-skills-in-multiple-languages.html
Yes, you can add a language from your build screen > Language Settings, add sample utterances for that particular language, and test different languages via Test > change language.
I'm planning on doing an experiment where we will set up a Google Assistant or Alexa device and see how people interact with voice assistants in a certain environment. It's basically a Wizard of Oz experiment (https://en.wikipedia.org/wiki/Wizard_of_Oz_experiment). Is it possible to intercept the voice commands before they get passed to the Assistant or Alexa? This could help me decide whether I want to handle the user input myself or let Google/Alexa handle it.
Will you be using a purchased "original" device or will you use, e.g., a Raspberry Pi and build it yourself?
For the former this won't be possible out of the box. However, I recently stumbled upon an article describing a new device that might help you: it allows you to "reprogram" the activation word for Alexa and Google Assistant. The article mentions that the device's hardware is a Raspberry Pi, so I guess you could build something similar yourself. That was also the first idea that came to my mind.
I would imagine something like this:
On your Raspberry Pi you have a script (Python would probably be easiest) that listens for the wake word, e.g. "Alexa", and also records the voice that follows. However, Alexa itself is not running at this point, so it doesn't get triggered. Your script also contains the logic for deciding when to pass the command on to Alexa and what to do with it otherwise. When it decides that the command should be passed on, the script starts Alexa and replays the recording, thus triggering it the same way the users would have triggered it in the first place.
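A very rough sketch of that loop in Python, assuming the SpeechRecognition and pocketsphinx packages for offline keyword spotting; forward_to_alexa() and the "weather" rule are placeholders you would replace with your own Alexa client and your Wizard-of-Oz logic:

```python
import speech_recognition as sr

WAKE_WORD = "alexa"

def forward_to_alexa(audio: sr.AudioData) -> None:
    # Placeholder: start your Alexa client and replay the recorded audio to it.
    raise NotImplementedError

def handle_yourself(text: str) -> None:
    # Placeholder: your experiment's own handling of intercepted commands.
    print(f"Intercepted command: {text}")

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)                      # record one utterance
        try:
            text = recognizer.recognize_sphinx(audio).lower()  # offline speech-to-text
        except sr.UnknownValueError:
            continue
        if WAKE_WORD in text:
            # Decide: hand the command to Alexa, or keep it for the experiment.
            if "weather" in text:                              # arbitrary example rule
                forward_to_alexa(audio)
            else:
                handle_yourself(text)
```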
Another idea would be to use two microphones, one for your script and one for Alexa, with your script having the ability to mute/unmute them.
Please take into account that these are just spontaneous ideas. It's entirely possible that I've missed something and this wouldn't work. But until somebody who has done this before comes along, I'd give it a try!
I've written my first Alexa skill and it appears to work fine in the simulator provided in the developer console; however, when I try to launch it on my Echo Dot, it doesn't appear to work. I am from Canada and have therefore added the English (CA) version to the interaction model as well. Unfortunately, it doesn't seem to recognize it; it just plays a short two-tone beep sound.
Resolved. I forgot to set up my Echo Dot again with my developer account (I was previously using a regular account).
By implementing "CanFulfillIntentRequest", can I launch my custom intent without saying the skill name while it is already playing audio? For example, instead of saying:
"Alexa, ask <inovation name> to get me the latest on China"
can I say:
"Alexa, get me the latest on China"
Any help will be highly appreciated.
If the skill is invoked and is already playing the audio, then it depends on the shouldEndSession variable. If shouldEndSession is set to true while playing the audio file, then no; otherwise yes. You could, for instance, ask at the end of the audio file whether the user wants to hear more.
And if the skill is not invoked, then it is not possible.
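For illustration, a minimal Python sketch of building such a response, with shouldEndSession left false so the session stays open for a follow-up (the speech text is just an example):

```python
# Sketch: keep the session open so the user can answer without repeating
# the invocation name. keep_session_open=False would end the session instead.
def build_response(speech_text: str, keep_session_open: bool) -> dict:
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": not keep_session_open,
        },
    }

# At the end of the audio, ask a question and leave the session open:
response = build_response("Do you want to hear more?", keep_session_open=True)
```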
All the Cortana bots seem to be sandboxed with limited capabilities. Is there any way I can write system files or change the system configuration through these bots?
For that kind of control you should create a UWP project and use the Voice Command Definition API. Cortana skills are good for ordering a pizza or booking a flight, i.e. something that needs to be done over the internet with a third-party API.
Not without a Win32 Application that communicates with the Cortana Skill via the web.
What you can do is write scripts and put them in a folder under C:\ProgramData\Microsoft\Windows\Start Menu\Programs\
Then you can run them with "Hey Cortana, open custom script".
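As a rough illustration only, assuming Python is installed and the script (or a shortcut to it) can be launched from that Start Menu folder; the file name, target path, and setting below are made up:

```python
# custom script.py -- placed (or linked via a shortcut) under
# C:\ProgramData\Microsoft\Windows\Start Menu\Programs\
# so "Hey Cortana, open custom script" can launch it.
from pathlib import Path

CONFIG = Path(r"C:\ProgramData\MyApp\settings.ini")  # example target only

def main() -> None:
    # Do the kind of local, system-level work a sandboxed Cortana skill can't.
    CONFIG.parent.mkdir(parents=True, exist_ok=True)
    CONFIG.write_text("[display]\nnight_mode = on\n")
    print(f"Wrote {CONFIG}")

if __name__ == "__main__":
    main()
```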
I have the Locale module enabled, but I don't see any options to have the e-mails (Account activation, Welcome, Password recovery) translated.
How do I send e-mails in the user's preferred language?
Just in case someone needs it: you need to download the Internationalization module (http://drupal.org/project/i18n) and enable the Variable translation submodule. Then follow the instructions here: http://drupal.org/node/1113374. To enter the translation (or another version for another language), switch your language to the one you want the translation for and put the other-language version into the field where the English version was.
If the language you want to use is enabled on the site, you should be able to translate the message text in the Translate interface. Go to Configuration -> Translate interface. Under the Translate tab, type in the first few words of the message. Find the text you want and add a translation.