I would like to understand how to redirect the conversation to the anything_else node when the confidence is lower than an established threshold.
I am creating a node triggered by intents[0].confidence < 0.5 that jumps to the anything_else response.
But if I enter a value like "huaiuhsuskunwku", it is recognized as the #greetings intent and redirected to that node instead.
Any idea why it is recognizing it as a greeting in the first place?
And how can I configure it properly?
Two things here:
1a. Before the newest API was released, which is still beta, we used what is called a relational classifier. It checks all the available classes and does its best to fit the input into the most similar one. So I would assume you have relatively few intents, and each intent has only a handful of examples. There are too many features in the algorithm to point to one specifically, but it is finding some features that make it think the input belongs to that class.
1b. You should create an off-topic class that just includes a bunch of things you don't want to respond to. This essentially helps balance out the existing classes so the service knows the input is NOT one of your main classes. You won't need any dialog nodes for this; the off-topic class simply helps the input fall through to anything_else, as you want.
2. Just this week we released an update to the API. This changes it to an absolute classifier, so scoring is handled differently now: each class is evaluated on its own. We have also included a built-in off-topic handler to help weed out gibberish like this. See the docs here:
https://www.ibm.com/watson/developercloud/doc/conversation/release-notes.html
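For illustration, here is a minimal sketch of how application code could inspect the message response and fall back when the classifier returns nothing or only a low-confidence intent. The response shape (an intents array whose entries carry intent and confidence) is what the service returns; the 0.5 threshold and the fallback wording are placeholders of my own.

    // Sketch: client-side fallback when no intent clears a confidence threshold.
    var LOW_CONFIDENCE = 0.5; // mirrors the threshold used in the dialog condition

    function pickReply(response) {
      var intents = response.intents || [];
      // With the newer (absolute) classifier, gibberish may come back with an
      // empty intents array; otherwise check the top intent's confidence.
      if (intents.length === 0 || intents[0].confidence < LOW_CONFIDENCE) {
        return "Sorry, I didn't understand that. Could you rephrase?";
      }
      return response.output.text.join(' ');
    }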
Watson evaluates dialog nodes from top to bottom, so there are two possible cases.
Case 1: Your greetings node sits above the node you created for routing to anything_else, and your query ("huaiuhsuskunwku") scored a confidence of at least 0.20 for the #greetings intent. In this case, simply move your greetings node below the node you created.
Case 2: Your greetings node is already below the node you created for routing to anything_else, but the given condition (confidence < 0.5) evaluated to false, so that node was skipped. In this case, check the confidence for that query in the 'Try it out' panel and adjust the threshold in the node condition accordingly.
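As a sketch of the first case, the node ordering could look like this. The field names roughly follow the workspace JSON export; the node IDs and response texts are made up for illustration, and only the ordering and the conditions matter here.

    // Illustration only: the low-confidence node sits above #greetings, so it is
    // evaluated first and catches gibberish before a weak #greetings match fires.
    var dialogNodes = [
      { dialog_node: 'low_confidence', conditions: 'intents[0].confidence < 0.5',
        output: { text: { values: ["Sorry, I didn't get that. Could you rephrase?"] } } },
      { dialog_node: 'greetings', conditions: '#greetings',
        output: { text: { values: ['Hello! How can I help you?'] } } },
      { dialog_node: 'fallback', conditions: 'anything_else',
        output: { text: { values: ["I didn't understand your request."] } } }
    ];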
I'm using Watson Assistant (Plus) and I'm struggling with the correct usage of entities inside intent examples. First of all, in the web UI I can't find any trace of what the documentation mentions about entity suggestions and entity annotation inside intent examples (we are on the Frankfurt server).
I have many intents in one skill and decided to use entity mentions in intent examples. Since I found no trace of a simplified way to add an entity inside a single example, I wrote it directly into the phrase.
From "What I need to activate MySpecificService ABC ?" to "What I need to activate #services:(MySpecificService ABC)", the same syntax used in dialog nodes.
I have used this method extensively in my skill, following the documentation.
My problems start here: the assistant refuses to detect the right intent when I try it.
If I ask "What I need to activate MyService Name?", the assistant detects a totally wrong intent with low confidence (0.40 or less), and the correct intent does not appear as the 2nd or 3rd intent either (it does correctly detect the entity).
There are no similar examples using exactly #services:(MySpecificService ABC) in other intents, but I did use other references to #services or #services:(otherservice name) in other intents.
I have read the documentation many times, googled around, and watched videos, but nothing. Evidently I've misunderstood something.
Can you help me?
Intents are the actions/verbs that the user is trying to achieve. In this case, an intent could be the activation itself (no matter what they are trying to activate).
So you should write different examples of an activation question:
"How can I activate my service?", "I need to activate this service", etc.
Entities are the objects/nouns. In your case, services.
So in your dialog, if you need the assistant to detect the intent plus the entity, create a node with the condition #activation && @services:MySpecificService (entities are referenced with @ in dialog conditions, while # refers to intents).
Be aware that if you have several nodes in your dialog, their order will impact the way your assistant analyzes the input. If the #activation && @services node is placed before the #activation && @services:MySpecificService node, the first one will be triggered, since "MySpecificService" is one of the @services values.
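As a quick sketch, the rule of thumb is simply to put the more specific condition above the more general one; the condition strings below are illustrative, and evaluation stops at the first node whose condition matches.

    // Order matters: Watson stops at the first matching node, top to bottom.
    var nodeConditionsInOrder = [
      '#activation && @services:MySpecificService', // specific service handled first
      '#activation && @services',                   // generic activation for any other service
      'anything_else'                               // fallback
    ];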
Hope that this helps!
I'm dealing with entities in intents as well, and I think we're also on the Frankfurt server.
Since you're on the Frankfurt server, I'm pretty sure the reason you're not seeing the annotation options is that you're using the German language.
Annotation as mentioned in the documentation is only available for the English language (unfortunately).
Kind regards
I am building a chatbot that needs to be able to have long, branching conversations with users. Its purpose is to engage the user for long periods of time. One of the problems I'm running into is how to handle unrelated responses from a user in the middle of a dialogue tree without "resetting" the entire conversation.
For example, let's say they have the following conversation:
Chatbot: Do you like vanilla or chocolate ice cream?
User: Vanilla
Chatbot: (recognizes "vanilla" and responds with appropriate child node) Great! Would you like chocolate or caramel on top?
User: Caramel
Chatbot: (recognizes "caramel" and responds with appropriate child node) That sounds delicious! Do you prefer sprinkles or whipped cream?
User: I would like a cherry!
At that point, my problem is that the chatbot triggers the "anything_else" response and says something like "I didn't understand that," which means that if the user wants to continue the conversation about ice cream, they have to start from the very beginning.
I'm very new to using IBM Watson Assistant, but I did as much research as I could and wasn't able to find anything. Any advice or help would be appreciated! So far the only idea I've had is to add an "anything_else" option to every single dialogue node that jumps back to the node above it. But that sounds extremely complicated and time-consuming. I was wondering if there is an easier way to have the chatbot simply repeat whatever question it is asking until it gets a response that triggers one of the child nodes.
EDIT: It may be helpful to add that what I'm trying to do here is "funnel" the user down certain conversation paths.
On the anything_else node, you can enable "return after digression", which will go back to the previous node; that fulfils your requirement.
There is an "Anything else" option that acts as a fallback when the chatbot fails to recognize the intent.
You can take a look at the documentation here.
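If it helps, this is roughly how that setting shows up in the skill's JSON export as far as I recall; treat the field names, especially digress_in, as an assumption to double-check against the documentation rather than a definitive reference.

    // Assumption: field names as I remember them from the skill JSON export.
    var anythingElseNode = {
      dialog_node: 'anything_else_node',
      conditions: 'anything_else',
      digress_in: 'returns', // the "Return after digression" checkbox: go back to where the user left off
      output: { text: { values: ["I didn't catch that. Let's get back to where we were."] } }
    };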
I know how to set conditions based on intent match and confidence level. I would like to proceed with a flow if the confidence is above a certain threshold, and request confirmation if it is in a mid-range before proceeding with the flow.
I can do this by doubling the nodes:
Create one node that matches at high confidence
Create a node underneath that matches at a lower level. If I get confirmation, route back to the first node, bypassing the condition.
Is there a better pattern that doesn't duplicate all nodes?
I'm a bit confused by your question, but I believe you want to make a condition based on the intent and its confidence. Right?
Well, I believe you can do this in code, letting Watson handle only the intent classification and report the confidence. Or you can create a single node in the conversation such as:
    Condition: intents[0].confidence < 0.75
    Response: I did not understand your question.
Or, in code, with a check for each condition and intent, like:
    if (intents[0].intent === 'requestPizza' && intents[0].confidence >= 0.75) {
      data.output.text[0] = "Hey, do you want to request a pizza, or do you want to know how to request a pizza?";
    }
See an example from IBM Developers using Node.js.
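If you'd rather not duplicate nodes at all, here is a sketch of the same pattern handled in client/orchestration code. The thresholds, the action names, and the reply texts are placeholders of my own; only the intents[0].intent and intents[0].confidence fields come from the service response.

    // Sketch: three confidence bands handled outside the dialog tree.
    var HIGH = 0.75; // proceed without asking
    var MID = 0.5;   // ask the user to confirm first

    function route(response) {
      var top = (response.intents && response.intents[0]) || null;
      if (!top || top.confidence < MID) {
        return { action: 'fallback', text: 'I did not understand your question.' };
      }
      if (top.confidence < HIGH) {
        // Mid-range confidence: request confirmation before proceeding with the flow.
        return { action: 'confirm', text: 'Did you mean: ' + top.intent + '?' };
      }
      return { action: 'proceed', intent: top.intent };
    }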
SITUATION:
I have an application where I have to issue a gift coupon (or something of that kind) when the user reaches a certain score, say 'x'.
I want to create a coupon with a unique QR code at the time the user reaches the score 'x', so that they can download it to their iPhone and use it. Once it is used, the coupon should be invalidated. This applies to any user of the application: a coupon is created once the score is reached and deleted or invalidated once it is used.
ISSUE:
I'm not able to figure out how to create a coupon every time a user reaches the score. Of course, I did go through a lot of documentation and links like http://www.raywenderlich.com/20734/beginning-passbook-part-1. I also tried using pass-source, but a valid account requires you to pay a minimum of about $8.
As suggested in the raywenderlich tutorials, I can create passes, but they aren't created through the application.
Also, I didn't see any method by which we can be notified when a user uses their issued coupon, so that we can invalidate it.
Am I missing something here?
"Using" a QR code on a coupon means it is scanned by something else. That something else has to take responsibility to report the activity back to you, so you could then update the pass with an "Expired" flag in your database, re-sign and rebuild the pass, issue the push notification so that it would eventually update on the device. You'd also probably want that scanner-thingie to check with you to see that the code is valid before accepting it. So, yeah, not Apple's problem.
I've got a WinForms client-server app that displays various offers in a list. Every user (client) has a "rating". An offer consists of various data including a minimum and maximum rating. If a user's rating does not fall in that interval, he should not be able to take the offer.
Of course I could just perform some server filtering and send a list of offers prefiltered for each user to the client application. But that would surely, and rightfully, lead to confused requests "Why isn't this offer showing up? I know it exists, it shows up on [other user]'s screen."
How should I handle this? My favorite solution so far is to grey out the offer and add a tooltip "You can't take this offer because your rating is too high/low" while displaying greyed-out offers at the bottom of the list to leave the actually valid offers easily visible on top of the list.
A disabled option tells the user:
The action is possible.
Just not right now.
But the user can make it possible.
Unless there is some simple action the user can do to change his or her rating (e.g., by selecting some other controls in the same window), do not use disabling and do not show the offers. Disabling may confuse some users who will then hunt around the window for something to do to enable those offers. It’s a great idea to use a tooltip to explain disabled objects, but that’s not a standard and not all users will think to hover the mouse over a disabled option (Why should they? It’s disabled).
Including offers users can't have, even when disabled, clutters your display, forces more scrolling, and distracts users from the offers you actually want them to consider. Furthermore, showing unavailable offers can come across as taunting ("ha, ha, your rating isn't high enough") and may diminish the perceived value of the available offers by comparison, resulting in lower user satisfaction.
It seems unlikely to me that users are going to go around comparing the offers on their windows, but maybe you have user research saying they do. In any case, you should label the list of offers to make the criteria clear (e.g., “Offers available at your rating” may be sufficient).
If you want to encourage users to increase their ratings, then maybe include something advertising the benefits of an improved rating. For example, you could have text like "Improve your rating by four points and get five additional offers," where "Improve your rating" and "five additional offers" are links. The first link tells users how to improve their rating, while the second lists the offers as a motivator. The latter link should only be there if the offers will still be available once the user actually succeeds in getting four more points.
That sounds like a good way to do it.
As a slight improvement, if it makes sense for your application, you might consider including the actual numbers in the tooltip, e.g.:
This offer requires a rating between 5 and 8. Your rating is 4.