How to parse out a preferred name in Watson Assistant? - ibm-watson

Watson asks "What is your name?" The user responds, "My friends call me Dick, my colleagues call me Richard, but you can call me Rich." or "I prefer Steve, but most people call me Steven." or "I prefer Bill over William and Will." How can I get Watson to recognize the preferred names of Rich, Steve, and Bill?

The easiest way: change your question to prompt a unique answer.
Can I have your first name, please?
What is your last name?
What would you like me to call you?
The technical way: you can try using NLU or contextual entities combined with the @sys-person system entity. But the work involved may not be worth the trouble for something that may not happen often.
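If you do go the technical route, here is a minimal sketch of the extraction side using the ibm-watson Node SDK, assuming the @sys-person system entity is enabled for the skill; extractNames is a hypothetical helper, and deciding which of several names is the preferred one would still need custom logic (e.g. a contextual entity trained on phrases like "call me X"):

import type AssistantV1 from 'ibm-watson/assistant/v1';

// Collect every name Watson spotted in the user's reply.
// Assumes the @sys-person system entity is enabled on the skill.
function extractNames(result: AssistantV1.MessageResponse): string[] {
  return (result.entities || [])
    .filter(e => e.entity === 'sys-person')
    .map(e => e.value);
}

// "My friends call me Dick, but you can call me Rich." -> ['Dick', 'Rich'];
// choosing the *preferred* one from that list is still up to your code.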

Related

Watson Assistant - entity reference in Intents - I need to understand what I'm missing

I'm using Watson Assistant (Plus) and I'm struggling with the correct usage of entities inside intent examples. First of all, inside the web UI I can't find any trace of what the documentation mentions about entity suggestions and entity annotation inside intent examples (we are on the Frankfurt server).
I have many intents in one skill and I decided to use entity mentions in intent examples. Since I found no sign of a simplified way to add an entity inside a single example, I wrote it directly inside the phrase.
I went from "What I need to activate MySpecificService ABC ?" to "What I need to activate #services:(MySpecificService ABC)", the same syntax used in dialog nodes.
I have used this method extensively across my skill, following the documentation.
My problems start here: the assistant refuses to detect the right intent when I try it.
If I ask "What I need to activate MyService Name?", the assistant detects a totally wrong intent with low confidence (0.40 or less), and the correct intent does not appear even as the 2nd or 3rd intent (it does correctly detect the entity).
There are no similar examples using exactly #services:(MySpecificService ABC) in other intents, but I have used other references to #services or #services:(otherservice name) in other intents.
I have read the documentation many times, googled around, and watched videos, but nothing. Evidently I've misunderstood something.
Can you help me?
Intents are the actions/verbs that the user is trying to achieve. In this case, an intent could be the activation itself (no matter what the user is trying to activate).
So you should write different examples of an activation question:
"How can I activate my service?", "I need to activate this service", etc.
Entities are the objects/nouns. In your case, services.
So in your dialog, if you need the assistant to detect the intent plus the entity, create a node with the condition #activation && @services:MySpecificService.
Be aware that if you have several nodes in your dialog, their order will impact the way your assistant analyzes the input. If the #activation && @services node comes before the #activation && @services:MySpecificService node, the first one will be triggered, since "MySpecificService" is one of the @services values.
Hope that this helps!
I'm dealing with entities in intents as well, and I think we're also on the Frankfurt server.
Since you're on the Frankfurt server, I'm pretty sure the reason you're not seeing the annotation options is that you're using the German language.
Annotation as mentioned in the documentation is only available for the English language (unfortunately).
Kind regards

Alexa Skills, make AMAZON.FallbackIntent return empty string

I am trying to make a mock interview skill on Alexa where the skill asks the user a question, for example: "Tell me about your background and experiences".
The user would give an answer, and when the user is done answering, he/she can say "next question" to get the next question.
So "next question" is really the only intent the app is waiting to hear. The problem is when the user is giving an answer for example:
"My name is Bob, I am from New York, I studied biology, etc.",
the session is still live, and Alexa obviously doesn't understand the intent so AMAZON.FallbackIntent gets triggered.
Is there a way to just return an empty string when AMAZON.FallbackIntent gets called so the mock interview session doesn't get disrupted?
Thank you!
It sounds like you need to control the session and constrain the user.
IMO Alexa has a lot of trouble with long user utterances. The problem really stems from the interaction model and the unpredictability of what a user will say. This blog post sheds some light on VUI issues (https://medium.com/hackernoon/lessons-learned-moving-from-web-to-voice-development-35daa1d301db). tl;dr - you have to maintain state and context.
One approach you can take is to ask the user specific questions. "What is your name?" should map to one intent and update the session/persistence with the slot value. Then you respond with the next question you want the user to answer (e.g. "Where do you live?", "What university did you attend?"), having another intent ready to handle that slot value. You have to realize that users can say anything at any point in your Alexa skill session, and your skill has to handle it.
Here's an Amazon Developer blog post that can help you better understand dialog management and slot confirmation: https://developer.amazon.com/blogs/alexa/post/3a23c045-b568-4a6a-8a8c-fd5511a08053/build-advanced-alexa-skills-confirm-what-customers-want-with-dialog-management
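To answer the literal question: here is a sketch with the ASK SDK v2 for Node.js of a fallback handler that speaks nothing and only sets a reprompt, which is about the quietest response Alexa allows while keeping the session open (the reprompt text is an assumption):

import * as Alexa from 'ask-sdk-core';

const QuietFallbackHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.FallbackIntent';
  },
  handle(handlerInput) {
    // Speak nothing; the reprompt keeps the session open and only
    // plays after a stretch of silence from the user.
    return handlerInput.responseBuilder
      .speak('')
      .reprompt('Say "next question" when you are ready.')
      .getResponse();
  },
};

Keep in mind that Alexa still only opens the microphone for roughly eight seconds at a time, so a long free-form answer will keep landing in this handler; the sketch quiets the disruption rather than removing the constraint.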

Using apostrophes for Alexa slots

I am developing an Alexa skill where I have a slot for names of fruits. However, if I say something like "What is apple's cost", where the slot value has an apostrophe, Alexa does not seem to recognize the apostrophe. A workaround is to say something like "What is the cost of an apple", but that would not be the best customer experience.
How can I make Alexa understand slot value with apostrophes? Any help is appreciated.
I think this is what you are looking for.
Create Intents, Utterances, and Slots (Rules for Sample Utterances)
If the word for a slot value may have apostrophes indicating the possessive, or any other similar punctuation (such as periods or hyphens), include those within the brackets defining the slot. Do not add 's after the closing bracket. For example: ...
My friend, the apostrophe might be handled internally by the speech recognition system, but it will never recognize an apostrophe in real time.
The good news is that you don't need the apostrophe. Think about it: Alexa only recognizes what the customer says, without capital letters or special characters. So if the customer says "What is apple's cost", Alexa recognizes it as "what is apples cost". This is a problem that should be handled server-side, because you only need to understand what the customer meant. You could implement a server-side string matching function using the Levenshtein algorithm, as in the sketch below.
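For instance, a minimal sketch of that server-side matching; the one-edit threshold and the fruit list are assumptions you would tune:

// Classic dynamic-programming Levenshtein edit distance.
function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Strip case and punctuation, then accept anything within one edit, so
// the "apples" that Alexa heard matches the "apple's" you expected.
function bestMatch(heard: string, known: string[]): string | undefined {
  const clean = (s: string) => s.toLowerCase().replace(/[^a-z ]/g, '');
  return known.find(k => levenshtein(clean(heard), clean(k)) <= 1);
}

bestMatch('apples', ["apple's", 'banana']); // -> "apple's"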

How can I determine negative answers using Watson Conversation?

For example, if the user writes to the Watson Conversation service:
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
How can you know that the user doesn't want to have a pool, but would love to live in a condo?
This is a good question and yeah this is a bit tricky...
Currently your best bet is to provide as many examples of the utterances that should be classified as a particular intent as training examples for that intent: the more examples you provide, the more robust the NLU (natural language understanding) will be.
Having said that, note that using examples such as:
"I would want to have a pool in my new house, but I wouldn't love to live in a Condo"
for intent-pool and
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
for intent-condo will make the system correctly classify these sentences, but the confidence difference between them might be quite small (because they are quite similar when you look just at the text).
So the question here is whether it is worth making the system classify such intents out of the box, or instead training the system on simpler examples and using some form of disambiguation if you see that the top N intents have small confidence differences.
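As a sketch of that disambiguation idea: trust the top intent only when its confidence clearly beats the runner-up (the 0.15 margin here is an arbitrary assumption):

interface RuntimeIntent { intent: string; confidence: number; }

// Return the top intent, or null when the top two are so close that
// the app should ask the user a clarifying question instead.
function pickIntent(intents: RuntimeIntent[], minMargin = 0.15): string | null {
  if (intents.length === 0) return null;
  if (intents.length === 1) return intents[0].intent;
  return intents[0].confidence - intents[1].confidence >= minMargin
    ? intents[0].intent
    : null; // too close to call: disambiguate
}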
Sergio, in this case, you can test all the valid conditions with peer nodes (continue from), and for your negative case (like an else) you can use "true" as the condition.
Try using the intents to determine the flow and the entities to define conditions.
See more: https://www.ibm.com/watson/developercloud/doc/conversation/tutorial_basic.shtml
PS: you can get the value of an entity in the output using an expression such as <? @services ?>.
This is a typical multi-intent scenario in the Conversation service. Every time the user says something, the top 10 intents are identified. You can edit the dialog node JSON like this to see all the intents.
{
  "output": {
    "text": {
      "values": [
        "<? intents ?>"
      ],
      "selection_policy": "sequential"
    }
  }
}
For example, when the user makes a statement that triggers two intents, you'll see that intents[0].confidence and intents[1].confidence will both be pretty high, which means that Conversation identified both intents in the user's text.
But there is a major limitation as of now: there is no guaranteed order for the identified intents. That is, if you say
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo", there is no guarantee that the intent "would_not_want" will be intents[0].intent and the intent "would_want" will be intents[1].intent. So it will be a bit hard to implement this scenario with high accuracy in your application.
This is now easily possible in Watson Assistant. You can do this by creating contextual entities.
In your intent examples, you annotate the relevant words and map them to the entity you define. The contextual entities will then learn the structure of the sentence. This will not only understand what you have flagged, but also detect entities you haven't flagged.
In the example from the post below, ingredients are tagged as wanted and not wanted, and when you run it, new ingredients are labeled the same way.
Full example here: https://sodoherty.ai/2018/07/24/negation-annotation/

Alexa skill does not recognize my correct answer

I wrote a simple Q&A Alexa skill which asks the user to guess a planet's name based on its properties.
Questions are like "Which is the brightest planet in the solar system?"
When the user responds "Venus", Alexa says that the answer is incorrect, even though the correct answer is Venus.
I am not sure why it can't recognize it.
There are a few places things can be going wrong.
1) Just because the user said it doesn't mean that's what Alexa heard. Did you confirm in the companion app that Alexa heard the word "venus"? Did you try the simulator and type in "Venus"? That would bypass the parsing of your speech.
2) How are you testing the answer? Alexa typically returns things in lower case, since there is no casing in spoken language. Venus is a proper name, so I'm not sure whether it would come back in upper or lower case. Either way, if you are using a case-sensitive string compare, you need to make sure the cases match, or else use a case-insensitive string comparison (see the sketch below).
3) How are you recognizing the answer? Do you have a separate intent for "Venus"? Do you have a slot for it? Do you use a LITERAL with multiple utterances for examples? Do you use a custom slot? Each of these will return the results in different ways. The best option is to use a custom slot.
4) Have you checked your log files? What is your code actually receiving from Alexa? If your code doesn't print it, add extra log statements to see what your code is getting, and what you are doing with it.
You have not given enough information in your question to answer it definitively. Hopefully the above will give you ideas for working out the answer yourself, or will prompt you to update your question with better information.
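On point 2, a tiny sketch of the case-insensitive comparison (the slot value and the expected answer are hypothetical):

// Normalize both sides before comparing, so 'Venus' matches 'venus'.
function isCorrectAnswer(heard: string | undefined, expected: string): boolean {
  return (heard ?? '').trim().toLowerCase() === expected.trim().toLowerCase();
}

isCorrectAnswer('Venus', 'venus'); // -> true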
