I was asked to write a Skill in which users have to ask for a specific product.
Examples:
Alexa, what is the price of the Fujifilm Instax Mini 7S Camera?
Alexa, how many TOSTITOS® Tortilla Chips are left?
Question: Can I add custom brands / product names to Alexa's vocabulary?
I guess Amazon can recognize some products, but what about new brands that Amazon hasn't seen yet?
If this can't be done, there's no point in building that Skill.
Is there any alternative? Did Google solve this problem in Google Home?
Thanks!
Alexa Skills have some predefined slot types that could help you: https://developer.amazon.com/docs/custom-skills/slot-type-reference.html#list-slot-types. I'm not sure how specific you need to be, given your example of the Fujifilm Instax Mini 7S Camera, but this would still be the easiest way.
If you need more, or different, brands, that same link also covers how to define a slot type with custom values; essentially, you define those in the developer console where you configure your Skill.
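To make the custom-values approach concrete, here is a sketch of what such a slot type and intent could look like in the interaction model JSON you edit in the developer console. The type name `PRODUCT_NAME`, the intent name `GetPriceIntent`, and the sample values are illustrative, not official names.

```python
import json

# Hypothetical custom slot type holding brand/product names.
product_slot_type = {
    "name": "PRODUCT_NAME",
    "values": [
        {"name": {"value": "Fujifilm Instax Mini 7S Camera"}},
        {"name": {"value": "TOSTITOS Tortilla Chips",
                  "synonyms": ["Tostitos", "Tostitos chips"]}},
    ],
}

# An intent whose sample utterance references the custom slot.
price_intent = {
    "name": "GetPriceIntent",
    "slots": [{"name": "product", "type": "PRODUCT_NAME"}],
    "samples": ["what is the price of the {product}"],
}

print(json.dumps({"types": [product_slot_type],
                  "intents": [price_intent]}, indent=2))
```

New brands are added by extending the `values` list (or by updating the slot type through the developer console) rather than retraining anything.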
I have created an Alexa custom skill that I am using to control various devices in my house. I am using a custom skill rather than implementing the smart home skills as I want to be able to support non-standard utterances. For instance, I can ask
Alexa, ask [invocation] what is the brightness of the porch lights right now?
Everything with the custom skill works really well, except that I don't want to have to say the invocation name. I'd prefer to interact with porch lights as if they were a discovered smart home skills device, like:
Alexa, what is the brightness of the porch lights right now?
This seems to be the purpose of CanFulfillIntentRequest. I have implemented this interface in Python (perhaps incorrectly), but Alexa always responds: "Sorry, I didn't find a device named porch lights".
Is what I am asking possible? And if it is, does anyone have a Python example? My reading is that while this is the purpose of CanFulfillIntentRequest, it does not function like this yet (hence the two ecobee skills, for instance).
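For reference, a minimal Python sketch of the documented CanFulfillIntentRequest response shape (the handler wiring and slot names here are illustrative; this shows the JSON structure, not a guarantee that it will intercept smart-home-style utterances):

```python
def handle_can_fulfill(request_envelope: dict) -> dict:
    """Build a CanFulfillIntentRequest response.

    Reports YES/MAYBE for the intent and canUnderstand/canFulfill
    for each slot, per the documented response format.
    """
    intent = request_envelope["request"]["intent"]
    slots = intent.get("slots", {})
    slot_report = {
        name: {"canUnderstand": "YES", "canFulfill": "YES"}
        for name in slots
    }
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": "YES" if slots else "MAYBE",
                "slots": slot_report,
            }
        },
    }
```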
It is not possible. Every time you ask for brightness and similar things, Alexa will assume you are referring to a smart home device, and your skill will not be invoked. Hence the two ecobee skills. This video might help you if you go for a smart home skill.
I have written a smart speaker app for Google Home using DialogFlow, and am now in the process of porting it over to Alexa.
One of the fundamental differences seems to be the inability to easily trigger follow-up intents. For example, I have a dialog that asks the user a series of questions, one after the other, before providing a result based on the answers provided. e.g. ({slot types})
Do you like a low maintenance or working garden? {low maintenance}{working}
Do you like a garden you can relax in? {yes/no}
Would you like to grow vegetables in your garden? {yes/no}
This is easy to achieve using Dialogflow follow-up intents, but I have no clue where to start with Alexa, and there don't seem to be many examples out there. All I can find focuses on slot filling for a single dialog.
I am using my own API service to serve results (vs Lambda).
Can anybody recommend a way of achieving this in an Alexa Skill?
I managed to achieve this by adding a single utterance with three individual slots, one for each of the answers required:
inspire me {InspireMaintenance} {InspireRelax} {InspireVeg}
These slots are backed by a single slot type, Custom_YesNo, which has Yes and No values plus synonyms. My C# service then checks each of these required slots, and where one is missing, it triggers the relevant question as a response. Once all slots are filled, it provides the answer.
Not as intuitive as Dialogflow and requires code to achieve what can be done without code in DF, but at least it works :)
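The slot-checking logic described above can be sketched as follows (the answer's service is C#; this is a Python rendering of the same idea, with the question text taken from the example dialog):

```python
# Map each required slot to the question that elicits it.
QUESTIONS = {
    "InspireMaintenance": "Do you like a low maintenance or working garden?",
    "InspireRelax": "Do you like a garden you can relax in?",
    "InspireVeg": "Would you like to grow vegetables in your garden?",
}

def next_response(filled_slots: dict) -> str:
    """Return the question for the first unfilled slot,
    or the final answer once every slot has a value."""
    for slot_name, question in QUESTIONS.items():
        if not filled_slots.get(slot_name):
            return question
    return "Here is your garden recommendation..."
```

Each turn, the service re-checks the slots and either asks the next missing question or delivers the result, which is what makes a single multi-slot utterance behave like a chain of follow-up intents.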
I'm trying to restrict an echo dot to allow only my skill to be invoked on the device. This is a publicly available device so I don't want users making Alexa repeat what they say or set alarms, or basically just using any other skill than the ones I allow. How would I go about restricting access to all other skills?
I've tried going into my Amazon account to see if there were any settings I could manage, but I had no luck. Is it simply not possible to do this right now?
Thanks for any help.
For this use case you may want to look into Alexa for Business: https://aws.amazon.com/alexaforbusiness/
Is it possible to launch an Alexa App with just its name? This is similar to when you ask it what the weather is.
"Alexa, weather"
However I would like to be able to say
"Alexa, weather in Chicago" and have it return that value
I can't seem to get the app to launch without a connecting word. Words like ask, open, and tell count as connecting words.
I have searched the documentation but can't find mention of it, however there are apps in the app store that do this.
It is documented in the first item here.
I've verified that this works with my own skill. One thing I've noticed is that Alexa's speech recognition is much worse when invoked in this manner, presumably because it requires matching against a greater set of possible words. I have to really enunciate in a quiet room to get Alexa to recognize my invocation name in this context.
When developing a custom skill you have to use connecting words, e.g. "Alexa, ask [your invocation name] to do something."
If you want to pass a variable, you have to specify the sample utterances:
OneshotTideIntent get high tide
OneshotTideIntent get high tide for {City} {State}
Then you handle cases in your code when user does not provide these values. For examples see https://github.com/amzn/alexa-skills-kit-js
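Handling the case where the user omits `{City}` and `{State}` could look like this in Python (the intent and slot names follow the tide example above; the prompt and response strings are illustrative):

```python
def handle_oneshot_tide(intent: dict) -> str:
    """Respond to OneshotTideIntent, eliciting the city if it is missing."""
    slots = intent.get("slots", {})
    city = slots.get("City", {}).get("value")
    state = slots.get("State", {}).get("value")
    if not city:
        # User said "get high tide" with no location: ask a follow-up.
        return "Which city would you like tide information for?"
    location = f"{city}, {state}" if state else city
    return f"Getting high tide for {location}."
```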
When writing the example phrases you use the following construct:
"Alexa, [connecting word] [your invocation name], [sample utterance]". As far as I have noticed, she is rather picky and you have to be exact when invoking a custom skill (voice recognition works much better with built-in skills).
EDIT: launching a skill without a connecting word is possible when developing a "smart home" skill
I want to train NLC in such a way that:
If I give an input such as "Sharpies", "Cakes", or "iPhone6", it should return "order" as the intent.
But it's not working for all products. The "order" intent should be returned for any product name: I want to train NLC on a few product names and have it work for all products (dynamically).
Since we have thousands of products, how can I get the intent "order" for all of them without adding every product to the ".csv" (I don't want to hard-code all the product names)?
Can you please help me retrieve the correct intent when any dynamic product name is given as input to NLC?
What you are trying to do is not what NLC is intended for.
The purpose of an intent is to understand what the end user is trying to achieve, not which products/keywords may appear in a sentence.
For example "I want to buy an iPhone" vs "I want to unlock my iPhone". Both mention iPhone but have two very different intents. In this case with training, you can distinguish between wanting to purchase, vs wanting to unlock.
One option you can try is looking at the Alchemy API entity extraction.
Another option is to use Watson Explorer Studio, but you will need Watson Explorer to get it. There is also Watson Knowledge Studio coming soon, which, like WEX-Studio, allows you to build custom annotators. You can use these annotators with UIMA to parse your text.
So you could easily build something to understand that "I don't want to buy an iPhone" is not the same as "I want to buy an iPhone", and have it extract iPhone as a product.
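The suggested split can be sketched as follows: let the classifier decide the intent ("order" vs. something else) and use a separate product list (a gazetteer) to extract which product was mentioned, so new products never require retraining. The product list and the keyword stand-in for the classifier are illustrative, not how NLC actually works internally:

```python
# Illustrative product gazetteer; in practice this would come from a catalog.
PRODUCTS = {"sharpies", "cakes", "iphone6"}

def extract_product(text: str):
    """Return the first known product mentioned in the text, if any."""
    for token in text.lower().split():
        if token in PRODUCTS:
            return token
    return None

def classify(text: str) -> str:
    """Stand-in for an NLC call: the real service classifies the whole
    sentence; here a keyword check merely illustrates the intent side."""
    return "order" if "buy" in text.lower() else "other"
```

Adding a new product is then a one-line change to the gazetteer, while the intent classifier stays trained on sentence patterns like "I want to buy ..." rather than on product names.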
There is an old, unsupported free version of WEX-Studio called LanguageWare, if you want to see whether that can help. That site contains manuals and videos. Here is a video I made that gives an example of how you would use it.