Request confirmation on lower confidence levels in Watson Conversation - ibm-watson

I know how to set conditions based on intent match and confidence level. I would like to proceed with a flow if the confidence is above a certain threshold, and request confirmation if it is in a mid-range before proceeding with the flow.
I can do this by doubling the nodes:
Create one node that matches at high confidence
Create a node underneath that matches at a lower level. If I get confirmation, route back to the first node, bypassing the condition.
Is there a better pattern that doesn't duplicate all nodes?

I'm a bit confused by your question, but I believe you want to set a condition based on the intent and the confidence of that intent. Right?
Well, you can do this with code, letting Watson handle only the intent recognition and report the confidence. Or you can create a single node in the conversation, for example:
if intents[0].confidence < 0.75
Response: I did not understand your question.
Or, in code, with a condition for each intent, like:
if (intents[0].intent === 'requestPizza' && intents[0].confidence >= 0.75) {
  data.output.text[0] = "Hey, do you want to request a pizza, or do you want to know how to request a pizza?";
}
See an example from IBM Developers using Node.js.
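To answer the original question more directly, here is a minimal sketch of the confirmation pattern with sibling nodes in the workspace JSON. The node names, the #requestPizza and #yes intents, and the 0.75 / 0.40 thresholds are all hypothetical. Only the confirmation node is added; on a "yes" it jumps straight to the high-confidence node's response (selector "body"), bypassing that node's condition, so the flow itself is not duplicated. Node order matters: the high-confidence node must sit above the mid-range one.
"dialog_nodes": [
  {
    "dialog_node": "order_pizza",
    "conditions": "#requestPizza && intents[0].confidence >= 0.75",
    "output": { "text": "Sure, let's order a pizza." }
  },
  {
    "dialog_node": "order_pizza_confirm",
    "conditions": "#requestPizza && intents[0].confidence >= 0.40",
    "output": { "text": "It sounds like you want to order a pizza. Is that right?" }
  },
  {
    "dialog_node": "order_pizza_yes",
    "parent": "order_pizza_confirm",
    "conditions": "#yes",
    "next_step": {
      "behavior": "jump_to",
      "selector": "body",
      "dialog_node": "order_pizza"
    }
  }
]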

Related

Problem with dialog nodes and intents in Watson Assistant

I'm using IBM Watson Assistant for creating a chatbot. I'm using the web interface with the intents, entities and dialog flow/tree (I don't know what it is called; I'm just calling it the web interface). I have four problems and hope that someone can help with them.
1) I have created two intents: #how_are_you with the example "How are you?" and #feeling_good with the example "I'm good". Of course I have many more examples for these two intents. In the dialog I now have a parent node looking for #feeling_good and a child node looking for #how_are_you (skipping user input in between). When a user now inputs the sentence "I'm good. How are you?", only #feeling_good is triggered but not #how_are_you. How can I trigger both intents with only one user input?
2) I would like to have one node in the dialog which waits for, say, 100s and then sends another message to the user. Waiting is no problem (using pause), but how can I make it so that the message is only sent after the 100s if the user did not send another message during the waiting period? That means when the user sends a message, the waiting node should be canceled.
3) I have a node which checks for a certain intent. When the intent does not match, I jump back to the parent node. The problem is that the text from the parent node is repeated each time. How can I prevent this repetition when jumping back?
4) The last question is perhaps a bit more tricky. I would like to define an array of the numbers [1,2,3,4,5]. Then one node should sample a random number without replacement from that array (e.g. 2), i.e. the remaining array is then [1,3,4,5]. After some time another node should pick another number at random from the array (say 4). And so on. How can this be implemented? I know about variables (e.g. $var) but I don't know how to represent arrays and sample random numbers.
Thank you so much for your answers in advance. And happy new year to everybody.
1) In Watson Assistant, the intent with the highest confidence is always used first, so processing multiple intents triggered by one sentence is tricky. The "best" solution is to use a composite intent - #HELLO_HOW_ARE_YOU. Alternatively, you can create conditions that check whether the first two intents returned are a combination of #HELLO and #HOW_ARE_YOU.
2) Waiting and sending messages due to inactivity should ideally be handled by the client implementing the chat console in your interface. WA is not well suited for these types of operations. While there is some support, a better way to handle this is to have your client application, when it detects inactivity, send something that will be mapped to #INACTIVITY_INTENT, and WA will respond with the message coupled with that intent.
3) Don't jump to the node itself; jump to the first child of that node and use "wait for user input".
4) This is possible. The WA expression language supports getting a random number, getting the size of an array, and removing elements from an array; see the sketch after this answer.
E.g. <? $array.remove(new Random().nextInt($array.size())) ?>
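To expand on 4), here is a minimal sketch using context variables. It assumes the pool is initialized in an earlier node, that getRandomItem() and removeValue() behave as documented (with removeValue() returning the updated array), and that the context variables within a node are evaluated top to bottom; if that last assumption does not hold, split the two assignments across two nodes.
Node that initializes the pool (context portion of the node JSON):
{
  "context": {
    "numbers": [1, 2, 3, 4, 5]
  }
}
Node that draws one number without replacement:
{
  "context": {
    "picked": "<? $numbers.getRandomItem() ?>",
    "numbers": "<? $numbers.removeValue($picked) ?>"
  },
  "output": {
    "text": "I picked $picked."
  }
}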

Why does the Fallback Intent not get called if you say ONE random word?

Whenever I go inside the skill and say one completely random word, the Fallback Intent is not triggered. The Echo will just emit a sound, and the Alexa simulator shows nothing. But I know for a fact that I am still inside the skill and the session has not yet ended, since if I say an utterance that is mapped to a certain intent without including the word Alexa, it responds correctly. BUT, if I say TWO completely random words, the Fallback Intent is triggered. For example (this is already inside the skill), if I say the word "pizza" it just responds with that weird noise and stays in the current session. But if I say the words "pizza pie" it maps to the Fallback Intent.
I have observed this behavior in a skill that has many custom intents each having many utterances configured. But when I tried inputting the word "pizza" to a skill with only 3 custom intents, the Fallback intent works fine.
If, when you say the out-of-domain word, you get a reprompt and then an end of session, it means that Alexa assigned a very low confidence level to the mapping of that utterance to an intent. And this also applies to the fallback intent!
Every time you build your model, an out-of-domain model for fallback is built in parallel. That model is supposed to catch out-of-domain utterances, but it's not perfect. Only utterances with a high confidence of matching the fallback model will be routed to the fallback intent. This is by design (for the current version), meaning that not all utterances will trigger fallback even when fallback is the best candidate. So what you're seeing here is an utterance that generates a low confidence for fallback (fallback is the best candidate, but the confidence is too low). As fallback gets better it will become more effective at capturing these cases. A rather awkward solution (which defeats the purpose of fallback, I guess) would be to extend fallback with sample one-word utterances similar to the ones you're trying. Hope this helps...
Update: FallbackIntent sensitivity tuning was added recently, so now, if you set it to high in the voice interaction model, it will work as you expected!
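For reference, a sketch of where that setting lives in the skill's interaction model JSON; the invocation name and intent list are placeholders, and only the modelConfiguration part is the point here:
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my demo skill",
      "modelConfiguration": {
        "fallbackIntentSensitivity": {
          "level": "HIGH"
        }
      },
      "intents": [
        { "name": "AMAZON.FallbackIntent", "samples": [] }
      ]
    }
  }
}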
I think I just found the solution...
If any intent has single words as sample utterances, Alexa (by mistake, obviously) tends to match a single-word interaction to one of those intents!
As far as I can see, Alexa uses the terms and the term count to calculate the statistical match with the user interaction... wow!
Hope it helps you guys!

IBM Watson Assistant: How do I have the chatbot repeat a response until it recognizes what the user is saying?

I am building a chatbot that needs to be able to have long, branching conversations with users. Its purpose is to be able to engage the user for long periods of time. One of the problems that I'm running into is how to handle unrelated responses from a user in the middle of a dialogue tree without "resetting" the entire conversation.
For example, let's say they have the following conversation:
Chatbot: Do you like vanilla or chocolate ice cream?
User: Vanilla
Chatbot: (recognizes "vanilla" and responds with appropriate child node) Great! Would you like chocolate or caramel on top?
User: Caramel
Chatbot: (recognizes "caramel" and responds with appropriate child node) That sounds delicious! Do you prefer sprinkles or whipped cream?
User: I would like a cherry!
At that point, my problem is that the chatbot triggers the "anything_else" response and says something like "I didn't understand that," which means that if the user wants to continue the conversation about ice cream, he has to start from the very beginning.
I'm very new to using IBM Watson assistant, but I did as much research as I could and I wasn't able to find anything. Any advice or help would be appreciated! So far the only idea I had was to have an "anything_else" option for every single dialogue node that could jump back to the next node up. But that sounds extremely complicated and time consuming. I was wondering if there was an easier way to just have the chatbot repeat whatever question it is asking until it gets a response that triggers one of the child nodes.
EDIT: It may be helpful to add that what I'm trying to do here is "funnel" the user down certain conversation paths.
In the anything_else node, you can enable "return after digression", which will go back to the previous node; that fulfils your requirement.
There is an Anything Else option that acts as a fallback when the chatbot fails to recognize the intent.
You can take a look at the documentation here.
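If you do go with the per-node fallback idea from the question, here is a sketch of what one such child node could look like in the workspace JSON (the node names and the re-prompt text are hypothetical). It re-asks the current question and then jumps back to its parent to wait for input, so the rest of the tree is untouched:
{
  "dialog_node": "toppings_fallback",
  "parent": "ask_toppings",
  "conditions": "anything_else",
  "output": {
    "text": "Sorry, I didn't catch that. Do you prefer sprinkles or whipped cream?"
  },
  "next_step": {
    "behavior": "jump_to",
    "selector": "user_input",
    "dialog_node": "ask_toppings"
  }
}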

How to set a level of confidence for Watson Conversation?

I would like to understand how to create a way to redirect the conversation to the anything_else node when the confidence is lower than an established limit.
I am creating a node triggered by intents[0].confidence < 0.5 that jumps to the anything_else answer.
So if I enter a value like "huaiuhsuskunwku", it is recognized as the #greetings intent and redirected to that intent's node.
Any idea why it is recognizing it as a greeting in the first place?
And how can I configure it properly?
Two things here:
1a. Before the newest API was released (it is still beta), we used what is called a relational classifier. Meaning that it checks all the available classes and does its best to fit the input into the most similar one. So I would assume you have relatively few intents, and each intent has only a handful of samples. There are too many features in the algorithm to point to one specifically, but it's finding some features that make it think the input is part of that class.
1b. You should create a class for off-topic that just includes a bunch of things you don't want to respond to. This essentially helps balance out the existing classes so it knows the input is NOT one of your main classes. You won't need any dialog nodes for this; the off-topic class simply helps the input fall through to anything_else, as you want.
2. Just this week we have released an update to the API. This changes it to an absolute classifier, so scoring is handled differently now. Each class is evaluated on its own. We have also included a built-in off-topic handler to help weed out gibberish like this. See the docs here:
https://www.ibm.com/watson/developercloud/doc/conversation/release-notes.html
Watson evaluates dialog nodes from top to bottom, so there may be two cases.
Your greeting node is above the one you created for routing to anything_else, and your query ("huaiuhsuskunwku") had a confidence >= 0.20 for the #greetings intent. In this case, just move your greetings dialog below the node you created.
If your greeting dialog is below the node you created for routing to anything_else, the given condition (confidence < 0.5) failed and that dialog was therefore skipped. In this case, check the confidence of that query in the "Try it" window and adjust the confidence value in the dialog accordingly.
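Putting the ordering advice together, here is a sketch of a low-confidence catch node that sits above the intent nodes (the 0.5 threshold comes from the question; the node name and response text are hypothetical). Because it is evaluated first, the #greetings node below it only fires when the confidence is at least 0.5:
{
  "dialog_node": "low_confidence",
  "conditions": "intents.size() > 0 && intents[0].confidence < 0.5",
  "output": {
    "text": "I'm not sure I understood that. Could you rephrase?"
  }
}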

How can I determine negative answers using Watson Conversation?

For example: If the user writes in the Watson Conversation Service:
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
How can you know that the user doesn't want to have a pool, but would love to live in a Condo?
This is a good question and yeah this is a bit tricky...
Currently your best bet is to provide as many examples as possible of the utterances that should be classified as a particular intent as training examples for that intent - the more examples you provide, the more robust the NLU (natural language understanding) will be.
Having said that, note that using examples such as:
"I would want to have a pool in my new house, but I wouldn't love to live in a Condo"
for intent-pool and
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
for intent-condo will make the system correctly classify these sentences, but the confidence difference between them might be quite small (because they are quite similar when you look just at the text).
So the question here is whether it is worth making the system classify such intents out of the box, or instead training the system on simpler examples and using some form of disambiguation if you see that the top N intents have small confidence differences.
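A sketch of that disambiguation idea as a dialog node condition: it fires when the top two intents are too close to call (the 0.10 margin is an arbitrary example), so the bot can ask a clarifying question instead of guessing:
intents.size() >= 2 && (intents[0].confidence - intents[1].confidence) < 0.10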
Sergio, in this case you can test all the valid conditions with peer nodes ("continue from"), and for the negative case (the else) you can use "true".
Try using the intents to determine the flow and the entities to define the conditions.
See more: https://www.ibm.com/watson/developercloud/doc/conversation/tutorial_basic.shtml
PS: you can get the value of an entity using, for example, <? entities[0].value ?>.
This is a typical scenario of multiple intents in the Conversation service. Every time the user says something, the top 10 intents are identified. You can change your dialog node's JSON like this to see all intents:
{
  "output": {
    "text": {
      "values": [
        "<? intents ?>"
      ],
      "selection_policy": "sequential"
    }
  }
}
For example, when the user makes a statement that triggers two intents, you'll see that intents[0].confidence and intents[1].confidence will both be pretty high, which means that Conversation identified both intents from the user text.
But there is a major limitation as of now: there is no guaranteed order for the identified intents. I.e. if you say
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo", there is no guarantee that the intent "would_not_want" will be intents[0].intent and the intent "would_want" will be intents[1].intent. So it will be a bit hard to implement this scenario with high accuracy in your application.
This is now easily possible in Watson Assistant. You can do this by creating contextual entities.
In your intent examples, you mark the relevant text and flag it as the entity you define. The contextual entities will then learn the structure of the sentence. This will not only understand what you have flagged, but also detect entities you haven't flagged.
In the example in the linked post, ingredients have been tagged as wanted and not wanted; when you run it, Watson reports which entities fall into each category.
Full example here: https://sodoherty.ai/2018/07/24/negation-annotation/
