How to make Watson disregard number words (#sys-number) - ibm-watson

How can I make Watson Conversation ignore a number written as a word in a sentence, so it isn't converted to #sys-number?
Ex.: I want to give you A call.
In this case, Watson interprets the 'A' as '1', but I don't want that.

When you send this example as a message, Watson Conversation shows you what it understood (intents, entities, etc.).
You can use the .literal property to show the number in its original written form, and you can still use #sys-number to get the numeric value in another answer, or in your application with custom code.
Example:
I want to give you #sys-number.literal call.
In this case, you can check in this table how the system entity #sys-number works.
Obs.: You can also disable #sys-number under System Entities, and Watson won't try to identify numbers inside your conversation.
See the official documentation about system entities in Watson Conversation.

I am not seeing the same behavior. I have sys-number turned on, but it is not matching the letter A.

Related

LUIS: Adding patterns to intents is not taking any effect

I followed what is described in the tutorial
I first added a Pattern.any entity
Next, I added a pattern for the required intent
I had already created an intent like shown and now I click on train
When I test, the intent is not hit
Any idea what's missing?
TL;DR: Read patterns doc and improve your entity detection.
The Issue
The problem with the example you have posted here is that LUIS is failing to actually detect the command_params entity, so it cannot match any of the three patterns you have shown.
As stated in Add common pattern template utterance formats to improve predictions:
In order for a pattern to be matched to an utterance, first the entities within the utterance have to match the entities in the template utterance. This means the entities have to have enough examples in example utterances with a high degree of prediction before patterns with entities are successful. However, the template doesn't help predict entities, only intents.
While patterns allow you to provide fewer example utterances, if the entities are not detected, the pattern does not match.
So you need to work on building out your command_params entity to make it detectable before using a pattern.
Your entity
I'm not convinced Pattern.any is the correct entity type for you to use, as it's an entity type meant for values of variable length, for example values that are extremely long.
I don't know what type of values your entity can evaluate to, but I suspect it would probably be better to go the route of creating a simple entity + phrase list (uses machine-learning) or a list entity if the entity values are a known set (exact pattern matching), depending on your command params values.
Update: there are also regex entities, which may work for you. (Again, I don't know what your entity values could be, so it's hard to point to exactly the right entity type to use.)
Additionally, if you need help with understanding how to improve entity detection in general, see this StackOverflow answer.
The patterns are extremely literal. If the part of the phrase does not match exactly, the intent won't get recognized. (Note: you can add these phrases to the intent directly, instead of in the pattern, in which case it will recognize the intent but not the entities. Can be helpful if you have a dialog to prompt users for the missing entities.)
In your case, the way you have the pattern written, you would need to write command create $mytest, which should recognize the intent as well as the entity mytest. Since you did not include the $ character in your test, neither the intent nor the entity was recognized.
You do have the ability to mark a character as optional via brackets [], though I've had mixed success with this. Your phrases are specific enough that it may work in your case. So instead you could make your patterns like command create [$]command_params where both command create $mytest and command create mytest would work and have the correct entity. Do note that if someone types something like command create $mytest please, it's going to pick up the entire phrase mytest please as your entity. (If anyone knows how to create a pattern that avoids this, that would be fantastic!).
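The optional-character and greedy-capture behavior described above can be mimicked with plain regular expressions. This is only an analogy for illustration, not how LUIS matches patterns internally:

```python
import re

# Analogy for a pattern like "command create [$]command_params", where the
# "$" is optional. A greedy capture mirrors the over-capture described above.
greedy = re.compile(r"^command create \$?(.+)$")
print(greedy.match("command create $mytest").group(1))         # mytest
print(greedy.match("command create $mytest please").group(1))  # mytest please

# Restricting the capture to a single token avoids swallowing trailing words,
# at the cost of rejecting multi-word parameters outright.
single = re.compile(r"^command create \$?(\S+)$")
print(single.match("command create $mytest please"))           # None
```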

What expression can I use in Watson Assistant for opening times on certain dates?

I have a Watson bot I'm trying to program for reserving tables. I'd like to know what expression I could use to implement my opening hours.
For example the restaurant has the following hours:
Monday-Friday 11:30AM until 10:30PM; the last reservation is at 9:30PM.
Saturday-Sunday 5PM until 10:30PM
I don't want Watson to take reservations outside those hours. How can I implement this in slots?
You can use methods of the expression language to evaluate the input.
For example, a condition to check whether it is a valid weekday reservation could be:
#sys-date.reformatDateTime('u')<6 AND #sys-time.before('21:30:01') AND #sys-time.after('11:29:59')
I would not recommend doing the check in slots.
It is easier to do the check after slot filling.
If the reservation is not valid, you can offer the client the option to simply try again.
I don't think there's any way to do this in Watson Assistant directly. You can do conditional evaluation (check if a number is greater than or less than another number), but your needs are a little more complex (with time involved, and even dates too).
I'd suggest handling your reservation validation process externally using the webhook feature. Collect your reservation date and time, and then send those to your webhook as parameters. The webhook can then respond with a confirmation that the reservation is OK, or it could reject it (and provide a reason). When your dialog node that calls the webhook gets the response, if it sees a rejection based on operating hours, it could inform the user that they need to select a time that the restaurant is open, remind them of the hours, and then go back to the node that collects the reservation info.
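As a sketch of what such a webhook-side check could look like (assuming, for illustration, that the 9:30 PM last-reservation cutoff also applies on weekends, which the question doesn't specify):

```python
from datetime import datetime, time

# Bookable windows from the question: Mon-Fri from 11:30 AM, Sat-Sun from 5 PM,
# with the last reservation assumed to be 9:30 PM in both cases.
HOURS = {
    "weekday": (time(11, 30), time(21, 30)),
    "weekend": (time(17, 0), time(21, 30)),
}

def reservation_allowed(dt: datetime) -> bool:
    """Return True if the requested datetime falls inside bookable hours."""
    earliest, latest = HOURS["weekday" if dt.weekday() < 5 else "weekend"]
    return earliest <= dt.time() <= latest

print(reservation_allowed(datetime(2023, 6, 5, 12, 0)))   # Monday noon: True
print(reservation_allowed(datetime(2023, 6, 5, 22, 0)))   # Monday 10 PM: False
print(reservation_allowed(datetime(2023, 6, 10, 16, 0)))  # Saturday 4 PM: False
```

The webhook would run this check and return an accept/reject flag plus a reason, which the dialog node can use to re-prompt the user.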

Watson generic word from utterance

Trying to create a set of entities & intents for things of the ilk "describe <something>" or "tell me about <something>" or "list instances of <something>". The <something>s are not known in advance. Consequently, I cannot exhaustively list the possible values for the entity.
My impression from (albeit very little) use and from the documentation is that the conversation API isn't good at this type of thing. Experience thus far says that it will recognize things that match the examples given for some entity, but I haven't seen that it can generalize to something like
describe #target
show me instances of #target
without knowing the set of values for #target.
What am I missing?
Based on your example, you can combine intents and entities for this purpose.
I think it's a good practice.
Like Daniel said, you can create one intent with examples of asking about something, for instance these examples within your #describeAbout intent:
Describe about
Can you please describe
Can you please explain about
List instances of
etc...
And create one entity like #typesDescribe with your values, for example:
Paper
Love
Fruits
After Watson has trained on your examples, create a dialog flow with the condition:
if #describeAbout AND #typesDescribe:Paper
Response:
#typesDescribe (will show the value Paper) is a thin material produced by pressing together moist fibres of cellulose pulp derived from wood, rags or grasses, and drying them into flexible sheets.
And if the confidence for your intent or entity is low, you can add one more condition checking against the confidence level that you want.
Obs.: You can also create a flow with just the condition #describeAbout, whose response asks the user what they want to know about, and then create flows for the various #typesDescribe:value conditions, for example.
Which services are you talking about? NLC is able to do this, and so is Conversation, by using wildcards. Either one can be trained to recognize intents with wildcard values in their training data. Just use an asterisk ("*") for the wildcard.
You don't have to train Conversation with every possible utterance; it learns from its training data. So if you provided the service a series of utterances like "describe apples", "describe oranges", "describe fireflies", and "describe astrophysics", and then associated all of them with an intent of "#provide_description", the Conversation service would indicate this intent for requests like "describe math".
Please also try to use real utterances for your training. I am not sure that your users will speak in two word sentences all of the time. Provide enough training data for each intent so the service is able to learn the various different ways people express the same intents.

How to get a trained Watson natural language classifier to NOT pick up a class?

When using the nice demo at http://watson-on-classifier.mybluemix.net, you sometimes get the answer "Sorry, I don't understand the question. Please try to rephrase it." when your question is not related to any of the supported themes.
I don't understand how to do this using Watson Natural Language Classifier: it seems to me that whatever the input, it chooses one of the classes it has been trained on... How do you reject some inputs as "does not match any of the classes with enough confidence"?
Thanks for your help.
Roughly speaking, what NLC does behind the scenes (I guess) is to try to correlate one statement with another based on concepts parsed from the input text and calculated using some ontology, so it can find synonyms or concepts that are "kind of" or "part of" other concepts.
So, in order to have a rejection, I can see three possible cases:
the entry has no correlation to any of the data used in the classifier because the concepts are too far from the concepts of the training data, in the ontology
the entry has equal correlation to more than one category, so the system can't tell if it belongs to one or another
the entry has correlation with one category, but the confidence level is too low, so it does not satisfy some threshold defined by the system
NLC will always return answers in order of confidence. The system has been set up so that if intents fall below a certain level of confidence, it will not return an answer.
This is defined by the person writing the application.
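In application code, that threshold check could look like the sketch below. The 0.5 value is an arbitrary illustrative choice, and the exact response shape is assumed; the classifier returns classes sorted by confidence, and the application rejects a low-confidence top result:

```python
CONFIDENCE_THRESHOLD = 0.5  # chosen by the application, not by the service

def pick_class(classes):
    """classes: list of {"class_name", "confidence"} dicts, sorted by
    confidence descending, as returned by the classifier."""
    top = classes[0]
    if top["confidence"] < CONFIDENCE_THRESHOLD:
        return None  # triggers the "Sorry, I don't understand" fallback
    return top["class_name"]

print(pick_class([{"class_name": "billing", "confidence": 0.91},
                  {"class_name": "weather", "confidence": 0.06}]))  # billing
print(pick_class([{"class_name": "billing", "confidence": 0.34},
                  {"class_name": "weather", "confidence": 0.33}]))  # None
```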

Dynamic Multiple Choice (Like a Wizard) - How would you design it? (e.g. Schema, AI model, etc.)

This question can probably be broken up into multiple questions, but here goes...
In essence, I'd like to allow users to type in what they would like to do and provide a wizard-like interface to ask for information which is missing to complete a requested query. For example, let's say a user types: "What is the weather like in Springfield?"
We recognize the user is interested in weather, but it could be Springfield, IL or a Springfield in another state. A follow-up question would be:
What Springfield did you want weather for?
1 - Springfield, IL
2 - Springfield, WI
You can probably think of a million examples where a request is missing key data or is ambiguous. Assume the gist of what the user wants can be understood, but there are missing pieces of data required to complete the request.
Perhaps you can take it as far back as asking what the user wants to do and "leading" them to a query.
This is not AI in the sense of taking any input and truly understanding it. I'm not referring to having some way to hold a conversation with a user. It's about inferring what a user wants, checking to see if there is an applicable service to be provided, identifying the inputs needed and overlaying that on top of what's missing from the request, then asking the user for the remaining information. That's it! :-)
How would you want to store the information about services? How would you go about determining what was missing from the input data?
My thoughts:
Use regular expressions to identify clear pieces of information. These will be matched to the parameters of a service. Figure out which parameters do not have matching data and look up the associated question for those parameters. Ask those questions and capture the answers. Re-run the service, passing in the newly captured data. These would be more free-form questions.
For multiple choice, identify the ambiguity and search for potential matches ranked in order of likelihood (add in user history/preferences to help decide). Provide the top 3 as choices.
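A minimal sketch of the regex-plus-questions idea above (the service name, regexes, and questions are all hypothetical):

```python
import re

# Each service lists its parameters: a name, a regex that detects it in the
# utterance, and the follow-up question to ask when it is missing.
SERVICE_PARAMS = {
    "weather": [
        ("city", re.compile(r"in ([A-Za-z ]+?)(?:,|\?|$)"), "Which city did you mean?"),
        ("state", re.compile(r",\s*([A-Z]{2})"), "Which state is that in?"),
    ],
}

def missing_questions(service, utterance):
    """Return (captured parameters, questions for whatever is still missing)."""
    filled, questions = {}, []
    for name, pattern, question in SERVICE_PARAMS[service]:
        match = pattern.search(utterance)
        if match:
            filled[name] = match.group(1)
        else:
            questions.append(question)
    return filled, questions

filled, questions = missing_questions("weather", "What is the weather like in Springfield?")
print(filled)     # {'city': 'Springfield'}
print(questions)  # ['Which state is that in?']
```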
Thoughts appreciated.
Cheers,
Henry
This is not AI in the sense of taking any input and truly understanding it.
It most certainly is! You follow this up by stating exactly that:
I'm not referring to having some way to hold a conversation with a user. It's about inferring what a user wants, checking to see if there is an applicable service to be provided, identifying the inputs needed and overlaying that on top of what's missing from the request, then asking the user for the remaining information. That's it! :-)
Inference is at the heart of many topics in AI. What did the user mean? What did the user want? What information should I fetch? How do I parse that information and decide what the answer is?
You're essentially trying to design a state-of-the-art AI system that uses a combination of NLP techniques to parse natural language queries, and then (maybe) a learning algorithm to determine how to perform the search, possibly hitting a knowledge base, or maybe Google (which also requires a process to parse the returned data to find the answer).
If there is any way you can constrain how input is entered (i.e. how the query is asked), that will help. But then you'll essentially be building a Web form... which has been done a million times over.
In short, you're attempting to create a very complex system, but you explicitly don't want to use any of the relevant techniques. If you're attempting to use regexes to do all of this, good luck to you. Because that's one heck of a deep and dark rabbit hole into which I would not want to fall.
But if you insist, start by finding a good book on NLP, because that's where you'll have to start anyway.
