What are the rules for building user phrases and Google responses (OnOff and Cook traits)? - google-smart-home

The current documentation doesn't fully describe the rules for how a user can build phrases to trigger an operation, or the possible answers. Could you please provide the following:
for the "action.devices.traits.OnOff" trait:
the full set of phrases that a user can use to trigger turning on/off, OR the rules for building them;
possible response phrases from Google Assistant if turning on/off was started successfully, OR the rules for building them.
for the "action.devices.traits.Cook" trait (for two parameter combinations: cookingMode + foodPreset, OR cookingMode + foodPreset + quantity + unit (ounces)):
the full set of phrases that a user can use to trigger a cook operation, OR the rules for building them;
possible response phrases from Google Assistant if the cook operation was started successfully, OR the rules for building them;
the full set of phrases that a user can use to cancel a cook operation, OR the rules for building them;
possible response phrases from Google Assistant if cancellation of the cook operation was started successfully, OR the rules for building them.
what additional words can the user add when framing these phrases for the two traits? For example, "me", "please", "my new {foodPreset}", "a cup of {foodPreset}" ("cup" is not a "unit"), and any other words and phrases. What are the rules for this?
are there any recommendations for the "foodPreset" parameter (number of words, word complexity)?

There are no strict rules. Traits are triggered through natural language processing, so you can expect any relevant phrase to work. The documentation provides example phrases for OnOff and Cook, but matching isn't limited to those examples.
Responses are likewise based on good voice design and natural language, so there are no strict rules for what to expect, and both requests and responses may change as the platform continues to evolve. The NLP system can extract meaning from longer statements, so variations like "turn on the light", "turn on the light please", and "please turn on the light for me" should all match.
As for foodPreset, the key can be whatever makes sense for your service. The synonyms should be fairly varied and include every way an individual might refer to that food item.
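For context, the phrases users can say for Cook are shaped by what you declare in your SYNC response. A rough sketch of a foodPresets declaration with varied synonyms might look like this (field names follow my reading of the Cook trait reference; all values are illustrative):

```json
{
  "attributes": {
    "supportedCookingModes": ["COOK", "BOIL"],
    "foodPresets": [
      {
        "food_preset_name": "white_rice",
        "supported_units": ["CUPS", "OUNCES"],
        "food_synonyms": [
          {
            "synonym": ["white rice", "rice", "long-grain rice"],
            "lang": "en"
          }
        ]
      }
    ]
  }
}
```

The broader the synonym list, the more naturally phrased requests will resolve to that preset.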

Related

How to call multiple intents at once when the slot value is shared by multiple intents?

For example, I have a skill that tells the user about food interactions with medications. The user tells Alexa which medication they want to know about.
Each intent corresponds to a different medication interaction (e.g., grapefruit juice, antacids, etc.). Under the GrapefruitDrugs slot-type, I have several slot values (all of them are drug names). When the GrapefruitIntent is called, it gives the user a warning about this interaction. The problem is that I have other slot-types that share the same drugs and their warnings also need to be communicated to the user.
'Drug X', however, will interact with both grapefruit juice AND antacids. So the slot-type AntacidDrugs also has 'Drug X' listed as a slot value. I want both intents to be called.
How do I make sure that both intents get called? I've looked into chaining intents, however I have yet to see an example other than one that links an intent to the LaunchRequest.
Intents exist to tell your webhook what kind of sentence it has received. The webhook can run on Lambda or on your own custom server (e.g. Java, PHP, Node.js ...); either way, that is where your code and logic live. There you know what state the conversation is in and how a given intent should be interpreted.
So in some conversation states you will react the same way to two different intents, while in other cases you might interpret the same intent in two different ways.
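To make that concrete, here is a minimal Python sketch (all names hypothetical) of a webhook that keys its reaction on conversation state plus intent, so two intents can share one handler:

```python
# Minimal sketch (hypothetical names): the webhook tracks conversation
# state and decides how to react to each incoming intent.
def warn_about_drug(drug):
    return f"Warning: {drug} has known interactions."

def handle(state, intent, slots):
    # In the "asking_about_drug" state, two different intents are
    # handled identically: both produce an interaction warning.
    handlers = {
        ("asking_about_drug", "GrapefruitIntent"): warn_about_drug,
        ("asking_about_drug", "AntacidIntent"): warn_about_drug,
    }
    handler = handlers.get((state, intent))
    if handler is None:
        return "Sorry, I didn't understand that."
    return handler(slots["drug"])

print(handle("asking_about_drug", "GrapefruitIntent", {"drug": "Drug X"}))
# prints "Warning: Drug X has known interactions."
```

Because the dispatch key is (state, intent), you are free to merge or split behaviour per state rather than per intent.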

Watson Assistant - entity reference in Intents - I need to understand what I'm missing

I'm using Watson Assistant (Plus) and I'm struggling with the correct usage of entities inside intent examples. First of all, in the web UI I can't find any trace of what the documentation describes about entity suggestions and entity annotation inside intent examples (we are on the Frankfurt server).
I have many intents in one skill and I decided to use entity mentions in intent examples. Seeing no simplified way to add an entity inside a single example, I wrote it directly into the phrase:
from "What do I need to activate MySpecificService ABC?" to "What do I need to activate #services:(MySpecificService ABC)?", the same syntax used in dialog nodes.
I have used this method throughout my skill, per the documentation.
My problems start here: Assistant refuses to detect the right intent when I try it.
If I ask "What do I need to activate MyService Name?", the assistant detects a totally wrong intent with low confidence (0.40 or less), and the correct intent does not appear as the 2nd or 3rd intent either (it does correctly detect the entity).
No other intent uses exactly #services:(MySpecificService ABC) in its examples, but I have used other references to #services or #services:(other service name) in other intents.
I've read the documentation many times, googled around, and watched videos, but nothing. Evidently I've misunderstood something.
Can you help me?
Intents are the actions/verbs that the user is trying to achieve. In this case, the intent could be the activation itself (no matter what the user is trying to activate).
So you should write different examples of an activation question:
"How can I activate my service?", "I need to activate this service", etc.
The entities are the objects/nouns. In your case, services.
So if you need the assistant to detect intent + entity in your dialog, create a node with the condition #activation && @services:MySpecificService.
Be aware that if you have several nodes in your dialog, their order affects how your assistant analyzes the input. If the #activation && @services node comes before the #activation && @services:MySpecificService node, the first one will be triggered, since "MySpecificService" is one of the @services values.
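The ordering behaviour can be illustrated with a tiny first-match sketch (a simplified Python model, not Watson's actual engine):

```python
# Simplified model of dialog node evaluation: the first node whose
# condition holds wins, so node order matters.
def first_matching_node(nodes, intents, entities):
    for name, condition in nodes:
        if condition(intents, entities):
            return name
    return None

# Hypothetical conditions mirroring #activation && @services and
# #activation && @services:MySpecificService.
activation_any = lambda i, e: "activation" in i and "services" in e
activation_specific = lambda i, e: ("activation" in i
                                    and e.get("services") == "MySpecificService")

# Generic node first: the specific node is never reached.
nodes = [("generic", activation_any), ("specific", activation_specific)]
print(first_matching_node(nodes, {"activation"}, {"services": "MySpecificService"}))
# prints "generic"

# Specific node first: it matches as intended.
nodes = [("specific", activation_specific), ("generic", activation_any)]
print(first_matching_node(nodes, {"activation"}, {"services": "MySpecificService"}))
# prints "specific"
```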
Hope that this helps!
I'm dealing with entities in intents as well, and I think we're also on the Frankfurt server.
Since you're on the Frankfurt server, I'm pretty sure the reason you're not seeing the annotation options is that you're using the German language.
Annotation as mentioned in the documentation is only available for English (unfortunately).
kr

Amazon Alexa: slot types

Can anyone tell me which slot type to use for an intent that can have long sentences in the English (India) language? In English (US) I use AMAZON.StreetAddress for this purpose. Thanks.
Consider using a custom slot type. According to Amazon, use of the AMAZON.LITERAL type is discouraged, and a custom slot type is recommended instead. Usually for custom slot types, you specify a set of sample values. However, based on your use case, it sounds like you want a match that comes as close as possible to catching all possible inputs, which is Scenario #3 from this Amazon Alexa blog article. As the article's content for the "Catch All" scenario mentions:
If you use the same training data that you would have used for LITERAL, you'll get the same results.
IMO, of special importance is the last paragraph regarding Scenario #3.
If you're still not getting the results, try setting the CatchAll values to around twenty 2-to-8-word random phrases (from a random word generator – be really random). When the user says something that matches your other utterances, those intents will still be sent. When it doesn't match any of those, it will fall to the CatchAll slot. If you go this route, you're going to lose accuracy because you're not taking full advantage of Alexa's NLP, so you'll need to test heavily.
Hope that helps.
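Following the article's "Catch All" advice, an interaction-model fragment might look roughly like this (intent, slot, and type names are made up; the sample values are deliberately random phrases):

```json
{
  "intents": [
    {
      "name": "CatchAllIntent",
      "slots": [{ "name": "CatchAll", "type": "CatchAllSlotType" }],
      "samples": ["{CatchAll}"]
    }
  ],
  "types": [
    {
      "name": "CatchAllSlotType",
      "values": [
        { "name": { "value": "quantum banjo elephant drizzle" } },
        { "name": { "value": "velvet tundra pickle astronaut cargo" } }
      ]
    }
  ]
}
```

The randomness of the sample values is what keeps the slot from biasing recognition toward any particular phrasing.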

How can I determine negative answers using Watson Conversation

For example, if the user writes to the Watson Conversation service:
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
how can you tell that the user doesn't want a pool but would love to live in a condo?
This is a good question, and yeah, this is a bit tricky...
Currently your best bet is to provide as many examples as possible of the utterances that should be classified as a particular intent as training examples for that intent – the more examples you provide, the more robust the NLU (natural language understanding) will be.
Having said that, note that using examples such as:
"I would want to have a pool in my new house, but I wouldn't love to live in a Condo"
for intent-pool and
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
for intent-condo will make the system classify these sentences correctly, but the confidence difference between them might be quite small (because the sentences are quite similar when you look just at the text).
So the question here is whether it is worth making the system classify such intents out of the box, or instead training it on simpler examples and using some form of disambiguation when the top N intents have small confidence differences.
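The disambiguation idea can be sketched like this (the threshold is arbitrary; the list mimics Watson's intents array of intent/confidence pairs):

```python
# Sketch: if the top two intents are too close in confidence, return
# None so the dialog can ask a clarifying question instead of guessing.
def pick_intent(intents, min_gap=0.15):
    ranked = sorted(intents, key=lambda x: x["confidence"], reverse=True)
    if len(ranked) > 1 and ranked[0]["confidence"] - ranked[1]["confidence"] < min_gap:
        return None  # too ambiguous: trigger disambiguation
    return ranked[0]["intent"]

print(pick_intent([{"intent": "intent-pool", "confidence": 0.55},
                   {"intent": "intent-condo", "confidence": 0.52}]))
# prints None (gap of 0.03 is below the 0.15 threshold)
print(pick_intent([{"intent": "intent-condo", "confidence": 0.90},
                   {"intent": "intent-pool", "confidence": 0.30}]))
# prints intent-condo
```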
Sergio, in this case you can test all the valid conditions with peer nodes ("continue from"), and for the negative case (the else) you can use "true" as the condition.
Try using intents to drive the flow and entities to define conditions.
See more: https://www.ibm.com/watson/developercloud/doc/conversation/tutorial_basic.shtml
PS: you can get the value of an entity in a text response with the @entity shorthand.
This is a typical multi-intent scenario in the Conversation service. Every time the user says something, the top 10 intents are identified. You can change your dialog response in the JSON editor like this to see all of them:
{
  "output": {
    "text": {
      "values": [
        "<? intents ?>"
      ],
      "selection_policy": "sequential"
    }
  }
}
For example, when the user makes a statement that triggers two intents, you'll see that intents[0].confidence and intents[1].confidence are both pretty high, which means Conversation identified both intents in the user's text.
But there is a major limitation as of now: there is no guaranteed order for the identified intents. That is, if you say
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo", there is no guarantee that the intent "would_not_want" will be intents[0].intent and "would_want" will be intents[1].intent. So it will be a bit hard to implement this scenario with high accuracy in your application.
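One way to cope with the unordered list is to act on every identified intent above a confidence cutoff rather than relying on positions. A Python sketch (cutoff and intent names are illustrative):

```python
# Select every intent that clears a confidence threshold, sorted by
# name so downstream logic doesn't depend on Watson's ordering.
def confident_intents(intents, threshold=0.5):
    return sorted(i["intent"] for i in intents if i["confidence"] >= threshold)

response = [{"intent": "would_want", "confidence": 0.81},
            {"intent": "would_not_want", "confidence": 0.78},
            {"intent": "greeting", "confidence": 0.05}]
print(confident_intents(response))
# prints ['would_not_want', 'would_want']
```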
This is now easily possible in Watson Assistant. You can do this by creating contextual entities.
In your intent examples, you mark the related words and flag them to the entity you define. The contextual entities will then learn the structure of the sentence. This will not only understand what you have flagged, but also detect entities you haven't flagged.
For example, ingredients can be tagged as wanted and not wanted; when you run it, both kinds are detected accordingly.
Full example here: https://sodoherty.ai/2018/07/24/negation-annotation/

Any business examples of using Markov chains?

What business cases are there for using Markov chains? I've seen the toy example of a Markov chain applied to someone's blog to generate a fake post. I'd like some practical examples, though – e.g. uses in business, prediction of the stock market, or the like...
Edit: Thanks to all who gave examples, I upvoted each one as they were all useful.
Edit2: I selected the answer with the most detail as the accepted answer. All answers I upvoted.
The obvious one: Google's PageRank.
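For a feel of how PageRank works, here is a minimal power-iteration sketch on a three-page toy graph (purely illustrative):

```python
# Minimal PageRank power iteration: repeatedly redistribute rank along
# outgoing links, with a damping factor for random jumps.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += damping * rank[page] / len(outgoing)
        rank = new
    return rank

# A links to B and C; B links to C; C links back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # prints C (it collects links from A and B)
```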
Hidden Markov models are based on a Markov chain and extensively used in speech recognition and especially bioinformatics.
I've seen spam email that was clearly generated using a Markov chain -- certainly that qualifies as a "business use". :)
There is a class of optimization methods based on Markov Chain Monte Carlo (MCMC) methods. These have been applied to a wide variety of practical problems, for example signal & image processing applications to data segmentation and classification. Speech & image recognition, time series analysis, lots of similar examples come out of computer vision and pattern recognition.
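As a toy illustration of the MCMC family, here is a minimal Metropolis-Hastings random-walk sampler targeting a standard normal (a sketch, not a production sampler):

```python
import math
import random

# Random-walk Metropolis-Hastings: propose a nearby point, accept it
# with probability min(1, p(proposal)/p(current)).
def metropolis(log_density, start, steps, step_size=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = start, []
    for _ in range(steps):
        proposal = x + rng.gauss(0, step_size)
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log-density -x^2/2 (up to a constant).
samples = metropolis(lambda v: -v * v / 2, start=0.0, steps=5000)
print(round(sum(samples) / len(samples), 2))  # sample mean, near 0
```

The same accept/reject skeleton underlies the segmentation and classification applications mentioned above; only the target density changes.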
We use log-file chain-analysis to derive and promote secondary and tertiary links to otherwise-unrelated documents in our help-system (a collection of 10m docs).
This is especially helpful in bridging otherwise separate taxonomies. e.g. SQL docs vs. IIS docs.
I know AccessData uses them in their forensic password-cracking tools. It lets you explore the more likely password phrases first, resulting in faster password recovery (on average).
Markov chains are used by search companies like Bing to infer the relevance of documents from the sequence of clicks users make on the results page. The underlying user behaviour in a typical query session is modeled as a Markov chain, with particular behaviours as state transitions...
For example, if a document is relevant, the user may still examine more documents (but with a smaller probability); if it is not, he may examine more documents (with a much larger probability).
There are some commercial ray-tracing systems that implement Metropolis Light Transport (invented by Eric Veach, who basically applied Metropolis-Hastings to ray tracing), and Bi-Directional and Importance-Sampling Path Tracers also use Markov chains.
These terms are googlable; I've omitted further explanation for the sake of this thread.
We plan to use it for predictive text entry on a handheld device for data entry in an industrial environment. In a situation with a reasonable vocabulary size, transitions to the next word can be suggested based on frequency. Our initial testing suggests that this will work well for our needs.
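The frequency-based suggestion idea can be sketched in a few lines (corpus and vocabulary here are made up):

```python
from collections import Counter, defaultdict

# Count word-to-word transitions in a corpus, then suggest the most
# frequent followers of the current word.
def build_model(corpus):
    model = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def suggest(model, word, n=2):
    return [w for w, _ in model[word].most_common(n)]

model = build_model("check tank level check tank pressure check valve status")
print(suggest(model, "check"))  # prints ['tank', 'valve']
```

With a reasonable vocabulary size, this table stays small enough for a handheld device, which is what makes the approach attractive for industrial data entry.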
IBM has CELM. Check out this link:
http://www.research.ibm.com/journal/rd/513/labbi.pdf
I recently stumbled on a blog example of using markov chains for creating test data...
http://github.com/emelski/code.melski.net/blob/master/markov/main.cpp
A Markov model is a way of describing a process that goes through a series of states.
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but depends on some other data on that sequence).
Common applications include:
Crypt-analysis, Speech recognition, Part-of-speech tagging, Machine translation, Stock Prediction, Gene prediction, Alignment of bio-sequences, Gesture Recognition, Activity recognition, Detecting browsing pattern of a user on a website.
Markov chains can be used to simulate user interaction, e.g. when browsing a service.
A friend of mine wrote a plagiarism-detection system using Markov chains as his diploma thesis (he said the input data had to be whole books for it to succeed).
It may not be very 'business', but Markov chains can be used to generate fictitious geographical and person names, especially in RPG games.
Markov chains are used in life insurance, particularly in the permanent disability model. There are 3 states:
0 - The life is healthy
1 - The life becomes disabled
2 - The life dies
In a permanent disability model the insurer may pay some sort of benefit if the insured becomes disabled and/or the life insurance benefit when the insured dies. The insurance company would then likely run a monte carlo simulation based on this Markov Chain to determine the likely cost of providing such an insurance.
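Such a Monte Carlo run can be sketched as follows (all transition probabilities and benefit amounts are invented for illustration):

```python
import random

# Three-state permanent disability chain: 0 healthy, 1 disabled, 2 dead.
P = {  # yearly transition probabilities (illustrative values)
    0: {0: 0.90, 1: 0.07, 2: 0.03},
    1: {1: 0.85, 2: 0.15},          # disability is permanent: no return to 0
    2: {2: 1.0},
}
DISABILITY_BENEFIT, DEATH_BENEFIT = 10_000, 100_000

def simulate_cost(years, rng):
    state, cost = 0, 0.0
    for _ in range(years):
        states, probs = zip(*P[state].items())
        state = rng.choices(states, probs)[0]
        if state == 1:
            cost += DISABILITY_BENEFIT   # annual benefit while disabled
        elif state == 2:
            cost += DEATH_BENEFIT        # one-off death benefit
            break
    return cost

rng = random.Random(42)
runs = [simulate_cost(30, rng) for _ in range(2000)]
print(round(sum(runs) / len(runs), 2))  # estimated expected cost per policy
```

Averaging many such paths estimates the expected cost of the cover, which is essentially what the insurer's Monte Carlo simulation does at scale.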
