Watson Assistant - entity reference in Intents - I need to understand what I'm missing

I'm using Watson Assistant (Plus) and I'm struggling with the correct usage of entities inside intent examples. First of all, inside the web UI I can't find any trace of what the documentation mentions about entity suggestions and entity annotation inside intent examples (we are on the Frankfurt server).
I have many intents in one skill and I decided to use entity mentions in intent examples. Having found no simplified way to add an entity inside a single example, I wrote it directly inside the phrase.
From "What I need to activate MySpecificService ABC ?" to "What I need to activate #services:(MySpecificService ABC)", the same syntax used in dialog nodes.
I have used this method extensively throughout my skill, following the documentation.
My problems start here: the assistant refuses to detect the right intent when I try it.
If I ask "What I need to activate MyService Name?", the assistant detects a totally wrong intent with low confidence (0.40 or less), and the correct intent does not appear as the 2nd or 3rd intent either (it does detect the entity correctly).
There are no similar examples using exactly #services:(MySpecificService ABC) in other intents, but I have used other references to #services or #services:(otherservice name) in other intents.
I have read the documentation many times, googled around, and watched videos, but nothing. Evidently I've misunderstood something.
Can you help me?

Intents are the actions/verbs that the user is trying to achieve. In this case, an intent could be the activation itself (no matter what they are trying to activate).
So you should write different examples of an activation question:
"How can I activate my service?", "I need to activate this service", etc.
Entities are the objects/nouns. In your case, services.
So in your dialog, if you need the assistant to detect the intent plus the entity, create a node with the condition #activation && @services:MySpecificService.
Be aware that if you have several nodes in your dialog, their order will affect how your assistant analyzes the input. If the #activation && @services node comes before the #activation && @services:MySpecificService node, the first one will be triggered, because "MySpecificService" is one of the @services values.
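A minimal sketch of how those two nodes might look in the workspace JSON, assuming the entity value from the question and placeholder node names and responses (the node order you see in the editor is what previous_sibling encodes):
{
  "dialog_nodes": [
    {
      "dialog_node": "activate_specific_service",
      "conditions": "#activation && @services:(MySpecificService ABC)",
      "output": { "text": "Here is what you need to activate MySpecificService ABC." }
    },
    {
      "dialog_node": "activate_any_service",
      "conditions": "#activation && @services",
      "previous_sibling": "activate_specific_service",
      "output": { "text": "Which service would you like to activate?" }
    }
  ]
}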
Hope that this helps!

I'm dealing with entities in intents as well, and I think we're also on the Frankfurt server.
Since you're on the Frankfurt server, I'm pretty sure the reason you're not seeing the annotation options is that you're using the German language.
Annotation as mentioned in the documentation is only available for English (unfortunately).
Kind regards

Related

Reference in B2C_1A_TrustFrameworkExtensions missing in Identity Experience Framework examples

I'm getting an error when uploading my customized policy, which is based on Microsoft's SocialAccounts example ([tenant] is a placeholder I added):
Policy "B2C_1A_TrustFrameworkExtensions" of tenant "[tenant].onmicrosoft.com" makes a reference to ClaimType with id "client_id" but neither the policy nor any of its base policies contain such an element
I've done some customization to the file, including adding local account signon, but comparing copies of TrustFrameworkExtensions.xml in the examples, I can't see where this element is defined. It is not defined in TrustFrameworkBase.xml, which is where I would expect it.
I figured it out, although it doesn't make sense to me. Hopefully this helps someone else running into the same issue.
The TrustFrameworkBase.xml is not the same in each scenario. When the Microsoft documentation said not to modify it, I assumed that meant the "base" was always the same. The implication of this design is: if you try to mix and match between scenarios, then you also need to find the supporting pieces in TrustFrameworkBase.xml and move them into your extensions document. It also means that if Microsoft provides an update to their reference policies and you want to update, you need to remember which one you implemented originally, and potentially which other ones you pulled from, or do a line-by-line comparison. Not the end of the world, but also not how I'd design an inheritance structure.
This also explains why I had to work through previous validation errors, including missing <DisplayName> and <Protocol> elements in the <TechnicalProfile> element.
Yes - I agree that is a problem.
My suggestion is always to use the "SocialAndLocalAccountsWithMfa" scenario as the sample.
That way you will always have the correct attributes and you know which one to use if there is an update.
It's easy enough to comment out the MFA stuff in the user journeys if you don't want it.
There is one exception. If you want to use "username" instead of "email", the reads/writes etc. are only in the username sample.

Eliciting states and counties with Alexa

I have a skill that elicits a U.S. state and county from the user and then retrieves some data. The backend is working fine, but I am concerned about how to structure the conversation. So far, I have created an intent called GetInfoIntent, which has two custom slots, state_name and county_name.
There are about 3,000 U.S. counties, with many duplicate names. It seems silly to me that I am asking for a county without first narrowing down by state. Another way I can think of to structure the conversation is to have 50 intents: GetNewHampshireInfo, GetCaliforniaInfo, etc. If I did it this way, I'd need a custom slot type for each state, like nh_counties, ca_counties, etc.
This must be a pretty generic problem. Is there a standard approach, or best practice, I can use?
My (not necessarily best-practice) tips:
Use a single slot for a single data type. That is, have only one slot for a four-digit number, even if you use it in more than one place for two different things in the skill.
Use as few intents as you need, no more, no less. You certainly can and should break up the back-end code with helper code, but try not to break the intents into too many small pieces. That can make it harder for Alexa to choose the intended intent.
Keep it voice focused: how would you ask in a conversation? Voice-first development is always the way to go.
For slot filling, I think it is fine to ask for both state and county.
If the match is not correct, ask for confirmation.
Another option is to skip automatic slot filling within the Alexa skill and use the dialog interface instead: ask for the county first, and only when it matches more than one state and is ambiguous, continue the dialog to fill the state.
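For instance, a rough sketch of an interaction model that keeps a single GetInfoIntent with both slots; the invocation name, the custom COUNTY_LIST type, its values, and the sample utterances are placeholder assumptions, while AMAZON.US_STATE is a built-in slot type. The back end could then return a Dialog.ElicitSlot directive for state_name only when the spoken county is ambiguous:
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "county facts",
      "intents": [
        {
          "name": "GetInfoIntent",
          "slots": [
            { "name": "county_name", "type": "COUNTY_LIST" },
            { "name": "state_name", "type": "AMAZON.US_STATE" }
          ],
          "samples": [
            "tell me about {county_name} county",
            "tell me about {county_name} county in {state_name}"
          ]
        }
      ],
      "types": [
        {
          "name": "COUNTY_LIST",
          "values": [
            { "name": { "value": "Clark" } },
            { "name": { "value": "Washington" } }
          ]
        }
      ]
    }
  }
}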
Even if you did have 50 separate intents, you really never want to have two slots that can be filled by the same word. For example, having a mo_counties slot and a ky_counties slot where "Clark" satisfies both is ambiguous and can cause unneeded difficulty.
So for someone looking for the "best practice": I have learned that there isn't one yet (and maybe never will be). Do what makes sense for the conversation, and keep it as simple as it needs to be, but no simpler, on the back end.
I also find it helpful to find a non-developer to test your conversation flow.
This wasn't really technical and is all opinion, but that is a lot of what Alexa development is. I would suggest the Tuesday Alexa office hours at https://www.twitch.tv/amazonalexa; they are very helpful and you can ask questions about things like this.

IBM Watson Assistant: How do I have the chatbot repeat a response until it recognizes what the user is saying?

I am building a chatbot that needs to be able to have long, branching conversations with users. Its purpose is to engage the user for long periods of time. One of the problems that I'm running into is how to handle unrelated responses from a user in the middle of a dialogue tree without "resetting" the entire conversation.
For example, let's say they have the following conversation:
Chatbot: Do you like vanilla or chocolate ice cream?
User: Vanilla
Chatbot: (recognizes "vanilla" and responds with appropriate child node) Great! Would you like chocolate or caramel on top?
User: Caramel
Chatbot: (recognizes "caramel" and responds with appropriate child node) That sounds delicious! Do you prefer sprinkles or whipped cream?
User: I would like a cherry!
At that point, my problem is that the chatbot triggers the "anything_else" response and says something like "I didn't understand that," which means that if the user wants to continue the conversation about ice cream, they have to start from the very beginning.
I'm very new to using IBM Watson assistant, but I did as much research as I could and I wasn't able to find anything. Any advice or help would be appreciated! So far the only idea I had was to have an "anything_else" option for every single dialogue node that could jump back to the next node up. But that sounds extremely complicated and time consuming. I was wondering if there was an easier way to just have the chatbot repeat whatever question it is asking until it gets a response that triggers one of the child nodes.
EDIT: It may be helpful to add that what I'm trying to do here is "funnel" the user down certain conversation paths.
In the anything_else node, you can enable "return after digression", which will go back to the previous node; that fulfils your requirement.
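A rough sketch of how that can look in the dialog JSON, assuming the catch-all is a root node; "digress_in": "returns" is the setting that corresponds to returning after the digression, and the response text is a placeholder:
{
  "dialog_node": "Anything else",
  "conditions": "anything_else",
  "digress_in": "returns",
  "output": { "text": "Sorry, I didn't understand that. Let's get back to where we were." }
}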
There is an "Anything else" option that acts as a fallback when the chatbot fails to recognize the intent.
You can take a look at the documentation here.

How to set a level of confidence for Watson Conversation?

I would like to understand how to create a way to redirect the conversation to the anything_else node when the confidence is lower than an established limit.
I am creating a node triggered by intents[0].confidence < 0.5 that jumps to the anything_else answer.
So if I enter a value like "huaiuhsuskunwku", it is recognized as the intent #greetings and redirected to that intent's node.
Any idea why it is recognizing it as a greeting in the first place?
And how can I configure it properly?
Two things here:
1a. Before the newest API was released, which is still beta, we used what is called a relational classifier. Meaning it checks all the available classes and does its best to fit the input into the most similar one. So I would assume you have relatively few intents, and each intent has only a handful of samples. There are too many features in the algorithm to point to one specifically, but it's finding some features that make it think the input is part of that class.
1b. You should create a class for off-topic input that just includes a bunch of things you don't want to respond to. This essentially helps balance out the existing classes so the classifier knows the input is NOT one of your main classes. You won't need any dialog nodes for this; the off-topic class simply helps the input fall through to anything_else, as you want (a sketch of such an intent follows after the docs link below).
2. Just this week we have released an update to the API. This changes it to an absolute classifier, so scoring is handled differently now: each class is evaluated on its own. We have also included a built-in off-topic handler to help weed out gibberish like this. See the docs here:
https://www.ibm.com/watson/developercloud/doc/conversation/release-notes.html
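A rough sketch of such an off-topic intent in the workspace JSON (the intent name and example texts are placeholders):
{
  "intents": [
    {
      "intent": "off_topic",
      "description": "Utterances the bot should not try to answer",
      "examples": [
        { "text": "what is the weather like today" },
        { "text": "tell me a joke" },
        { "text": "who won the game last night" }
      ]
    }
  ]
}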
Watson evaluates dialog nodes from top to bottom, so there are two cases.
1. Your greetings node is above the node you created for routing to anything_else, and your query ("huaiuhsuskunwku") had a confidence >= 0.20 for the #greetings intent. In this case, just move your greetings node below the node you created.
2. Your greetings node is below the node you created for routing to anything_else, but the given condition (confidence < 0.5) failed and that node was therefore skipped. In this case, check the confidence of that query in the "Try it out" panel and adjust the confidence value in the condition accordingly.
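A minimal sketch of case 1 after the fix, with the low-confidence node above the greetings node (node names and response texts are placeholders):
{
  "dialog_nodes": [
    {
      "dialog_node": "low_confidence",
      "conditions": "intents.size() == 0 || intents[0].confidence < 0.5",
      "output": { "text": "Sorry, I didn't get that. Could you rephrase?" }
    },
    {
      "dialog_node": "greetings",
      "conditions": "#greetings",
      "previous_sibling": "low_confidence",
      "output": { "text": "Hello! How can I help you?" }
    }
  ]
}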

How can I determine negative answers using Watson Conversation?

For example: If the user writes in the Watson Conversation Service:
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
How can you know that the user doesn't want to have a pool, but would love to live in a condo?
This is a good question and yeah this is a bit tricky...
Currently your best bet is to provide as many examples of the utterances that should be classified as a particular intent as training examples for that intent - the more examples you provide, the more robust the NLU (natural language understanding) will be.
Having said that, note that using examples such as:
"I would want to have a pool in my new house, but I wouldn't love to live in a Condo"
for intent-pool and
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo"
for intent-condo will make the system correctly classify these sentences, but the confidence difference between them might be quite small (because they are quite similar when you look just at the text).
So the question here is whether it is worth making the system classify such intents out of the box, or instead training the system on simpler examples and using some form of disambiguation when you see that the top N intents have small confidence differences.
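In workspace-JSON terms, that training data would look something like this (the intent names come from the example above; the shorter extra examples are illustrative placeholders):
{
  "intents": [
    {
      "intent": "intent-pool",
      "examples": [
        { "text": "I would want to have a pool in my new house, but I wouldn't love to live in a Condo" },
        { "text": "I want a house with a pool" }
      ]
    },
    {
      "intent": "intent-condo",
      "examples": [
        { "text": "I wouldn't want to have a pool in my new house, but I would love to live in a Condo" },
        { "text": "I would love to live in a Condo" }
      ]
    }
  ]
}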
Sergio, in this case you can test all the valid conditions with peer nodes (using "continue from"), and for the negative case (like an else) you can use a node with the condition "true".
Try using the intents to determine the flow and the entities to define the conditions.
See more: https://www.ibm.com/watson/developercloud/doc/conversation/tutorial_basic.shtml
PS: you can get the value of the entity using:
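A minimal sketch of one way to do that, assuming an entity named @services and the <? ... ?> expression syntax that also appears in the next answer:
{
  "output": {
    "text": {
      "values": [
        "You selected <? @services ?>"
      ],
      "selection_policy": "sequential"
    }
  }
}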
This is a typical scenario of multiple intents in the Conversation service. Every time the user says something, the top 10 intents are identified. You can change your dialog node's JSON like this to see all the intents:
{
  "output": {
    "text": {
      "values": [
        "<? intents ?>"
      ],
      "selection_policy": "sequential"
    }
  }
}
For example, when the user makes a statement that triggers two intents, you'll see that intents[0].confidence and intents[1].confidence will both be pretty high, which means that Conversation identified both intents in the user's text.
But there is a major limitation as of now: there is no guaranteed order for the identified intents. That is, if the user says
"I wouldn't want to have a pool in my new house, but I would love to live in a Condo", there is no guarantee that the intent "would_not_want" will be intents[0].intent and the intent "would_want" will be intents[1].intent. So it will be a bit hard to implement this scenario with high accuracy in your application.
This is now easily possible in Watson Assistant. You can do this by creating contextual entities.
In your intent examples, you mark the related text and flag it as the entity you define. The contextual entities will then learn the structure of the sentence. This will not only understand what you have flagged, but also detect entities you haven't flagged.
For example, ingredients can be tagged as wanted and not wanted; when you run it, both are detected correctly.
Full example here: https://sodoherty.ai/2018/07/24/negation-annotation/
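For reference, this is roughly what an annotated intent example looks like in the workspace JSON; the intent name, the entity names (wanted/notwanted) and the zero-based character offsets in location are assumptions modelled on the linked example:
{
  "intent": "order_pizza",
  "examples": [
    {
      "text": "I want extra cheese but no olives",
      "mentions": [
        { "entity": "wanted", "location": [7, 19] },
        { "entity": "notwanted", "location": [24, 33] }
      ]
    }
  ]
}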
