Passing variables into Watson Dialog - ibm-watson

In many situations, it may be helpful to pass known information (e.g. the user's name, to present a personalized greeting) into a new Watson Dialog conversation so as to avoid asking the user redundant or unnecessary questions. Looking at the API documentation, I don't see a way to do that. Is there a best-practice method for passing variables into a Watson Dialog conversation?

In the Dialog service a variable is part of a profile that you create to store information that users provide during conversations.
The following code shows an example of a profile variable that saves the user's name.
<variables>
  <var_folder name="username">
    <var name="username" type="TEXT" description="The user's name."></var>
  </var_folder>
</variables>
In your scenario you will set this variable by calling:
PUT /v1/dialogs/{dialog_id}/profile
with:
{
  "client_id": 4435,
  "name_values": [
    {
      "name": "username",
      "value": "Bruce Wayne"
    }
  ]
}
Don't forget to replace {dialog_id} and {client_id}.
We have an API Explorer that lets you try out the APIs: Dialog API Explorer.
You can also read more about this in this tutorial.
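For example, a minimal Node.js sketch of that PUT call (the host, path, and basic-auth credentials are placeholders based on the v1 Dialog endpoint; substitute your own service credentials and dialog_id):
const https = require('https');

// Placeholder values - replace with your own dialog_id, client_id and credentials.
const dialogId = 'YOUR_DIALOG_ID';
const body = JSON.stringify({
  client_id: 4435,
  name_values: [{ name: 'username', value: 'Bruce Wayne' }]
});

const req = https.request({
  hostname: 'gateway.watsonplatform.net',
  path: `/dialog/api/v1/dialogs/${dialogId}/profile`,
  method: 'PUT',
  auth: 'SERVICE_USERNAME:SERVICE_PASSWORD',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, res => console.log('Profile update status:', res.statusCode));

req.on('error', console.error);
req.write(body);
req.end();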

It should also be noted that if you leave the client_id out, one is allocated for you. You can then pass it into the start-conversation call to make sure that the profile is picked up. I found this useful where I have welcome messages into which I want to embed profile variables, e.g. "Hello "
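Continuing the sketch above, starting the conversation with that client_id might look like this (the conversation endpoint and form parameters are assumptions based on the v1 Dialog API; verify them in the API Explorer):
const querystring = require('querystring');

// Start the conversation with the client_id whose profile was just set.
const convBody = querystring.stringify({ client_id: 4435, input: '' });

const convReq = https.request({
  hostname: 'gateway.watsonplatform.net',
  path: `/dialog/api/v1/dialogs/${dialogId}/conversation`,
  method: 'POST',
  auth: 'SERVICE_USERNAME:SERVICE_PASSWORD',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
}, res => {
  let data = '';
  res.on('data', chunk => (data += chunk));
  // The welcome message in the response should have the profile variable resolved.
  res.on('end', () => console.log(data));
});

convReq.on('error', console.error);
convReq.write(convBody);
convReq.end();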

Related

Ask user for input from LaunchIntent

I'm writing a skill in Node JS 8. I have an intent set up with slots and it works properly if I say
Ask {skill name} to {utterance}.
I'd like to design my skill so that the user can just say
Open {skill Name}
and on opening it will ask them for input that will then be handled and passed to the intent. I've seen multiple people say that you can't do this. But I've used 2 skills today that did exactly this. I'm just looking for the correct syntax to do this.
I have:
'LaunchRequest': function() {
  this.response.speak("What note would you like?");
  this.emit(':responseReady');
}
Which seems like it should work, but I'm pretty new to JS and Alexa.
Yes, it is possible.
When the user opens your skill, you can give a welcome message followed by a question.
Ex:
[user] : open note skill
[Alexa] : Welcome to note skill. What note would you like?
----------<Alexa will wait for the user's input>--------
[user] : ABC note.
[Alexa] : <response>
In order for Alexa to wait for the user's input after it says the welcome message, you need to keep the session alive. The session is kept alive based on the shouldEndSession parameter in the response. For any request, if it is not provided, shouldEndSession defaults to true. In your case, the response to LaunchRequest should have shouldEndSession set to false. Only then does the session remain open and the user can continue the interaction.
Ex:
'LaunchRequest': function() {
  const speech = "Welcome to note skill. What note would you like?";
  const reprompt = "What note would you like?";
  this.emit(':ask', speech, reprompt);
}
Read this answer to know more about how you can keep the session alive using ask-nodejs-sdk.
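For reference, the :ask emit above produces a response shaped roughly like the following; the key detail is shouldEndSession being false (shown here with PlainText output, though the SDK may emit SSML):
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to note skill. What note would you like?"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "PlainText",
        "text": "What note would you like?"
      }
    },
    "shouldEndSession": false
  }
}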
Using Dialog Model
Another way to achieve this is to use Dialog directives. Dialog directives help you fill and validate slot values easily. You can use the directives to ask the user for the information you need to fulfill their request.
More information on Dialog directives here
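For example, with the ASK SDK v2 for Node.js (a newer SDK than the one used in the snippets above) you can delegate slot filling to Alexa's dialog model; the intent name NoteIntent here is just an illustrative assumption:
// Delegates to the dialog model until all required slots are filled.
const InProgressNoteIntentHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'NoteIntent'
      && request.dialogState !== 'COMPLETED';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .addDelegateDirective(handlerInput.requestEnvelope.request.intent)
      .getResponse();
  }
};
Register the handler with your skill builder as usual; Alexa then uses the prompts defined in your dialog model to elicit each missing slot.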

How to get Watson Assistant to capture multiple entities in context variables

I'm trying to create a Watson chatbot and I'm running into this issue.
I'm making a chatbot that helps people find organizations that provide food, shelter, drug treatment, etc.
I have a dialog node that asks the user what service they're looking for and storing it as a $service context variable.
This works well if the user says something like "I want food" as "food" gets stored into $service.
But say for instance a user says something like "I want food and drug treatment." I want Watson to then be able to store both of these variables as context variables.
How do I do that?
It's quite simple.
Just use
"service":<?#entityname.values?>
It will store all the values of this entity in the service array.
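As a sketch, in the dialog node's JSON editor this could look like the following; the SpEL projection entities['service'].![value] is one documented way to collect every recognized value of the entity (use whichever expression form works in your workspace):
{
  "context": {
    "service": "<? entities['service'].![value] ?>"
  }
}
With this in place, an input like "I want food and drug treatment" should leave $service holding both values, assuming both are defined as values of the entity.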

Alexa Skill - How to get complete text of statement asked to alexa

I am creating an Alexa skill, I have coded several custom and default intents and they are working fine.
Now I want to write a fallback intent wherein I want to get the exact statement asked/sent to Alexa skill, is there a way wherein we may get the entire question string/text that has been asked to Alexa skill. I know we can get slot values and intent information, but I need the entire text statement sent to skill.
Thanks
Well, I had faced the same issue. After trying several methods, I got the complete text of the statement asked to Alexa.
You have to make the following setup in your Alexa skill (you can choose the intent name, slot name, and slot type as per your need):
Setting up Intent
Setting up custom slot type
After setting up your Alexa skill, invoke it, keep some response for the launch request, and say anything you want; you can catch the entire text as shown here:
"intent": {
"name": "sample",
"confirmationStatus": "NONE",
"slots": {
"sentence": {
"name": "sentence",
"value": "hello, how are you?",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "xxxxxxx",
"status": {
"code": "xxxxxxx"
}
}
]
},
"confirmationStatus": "NONE",
"source": "USER"
}
}
}
Note: with this method you will need to handle utterances properly if there is more than one intent.
There's no way to get the whole utterance straight from a top-level intent. Right now the closest you can get is a custom slot of type AMAZON.SearchQuery (not a custom type as suggested in another answer), but you will have to define an anchor phrase in your utterance that goes before the slot. For example, you would define an utterance like:
search {query}
where query is a slot of type AMAZON.SearchQuery.
The anchor search in the utterance is mandatory (a requirement of the SearchQuery type), so as long as the user starts the utterance by saying search, anything that follows will be captured, which is pretty close to what you want to achieve.
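In the interaction model, that intent would be an entry like the following in the intents array (the intent name CatchAllIntent is just an illustrative placeholder):
{
  "name": "CatchAllIntent",
  "slots": [
    {
      "name": "query",
      "type": "AMAZON.SearchQuery"
    }
  ],
  "samples": [
    "search {query}"
  ]
}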
Having said that, there is one indirect way to approximate capturing the whole utterance (filtered by NLU) with AMAZON.SearchQuery, but only as part of an ongoing dialog using Dialog Management. If you're engaging in a dialog of this kind, where Alexa automatically uses defined prompts to solicit slot information, you can define an utterance that is a single isolated slot of type AMAZON.SearchQuery with no anchor. Example:
Alexa: Ok, I will create a reminder for you. Please tell me the text of the reminder
User: Pick up the kids from school
Alexa: Ok. I will remind you to Pick up the kids from school
In the example above, Alexa detects that the user wants to create a reminder but there's no reminder text set up, so it elicits the slot. When you, as a developer, define the prompts that Alexa uses to ask for the slot, you also define the possible responses. In this case you can define a response utterance that is just:
{query}
and capture the whole thing the user says in response to the prompt, e.g. "pick up the kids from school".
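A sketch of how that can look in the interaction model, abbreviated to the relevant parts (the intent name CreateReminderIntent, the intent-level sample, and the prompt id are illustrative assumptions):
{
  "interactionModel": {
    "languageModel": {
      "intents": [
        {
          "name": "CreateReminderIntent",
          "samples": ["create a reminder"],
          "slots": [
            {
              "name": "query",
              "type": "AMAZON.SearchQuery",
              "samples": ["{query}"]
            }
          ]
        }
      ]
    },
    "dialog": {
      "intents": [
        {
          "name": "CreateReminderIntent",
          "slots": [
            {
              "name": "query",
              "type": "AMAZON.SearchQuery",
              "elicitationRequired": true,
              "prompts": { "elicitation": "Elicit.Slot.query" }
            }
          ]
        }
      ]
    },
    "prompts": [
      {
        "id": "Elicit.Slot.query",
        "variations": [
          { "type": "PlainText", "value": "Please tell me the text of the reminder" }
        ]
      }
    ]
  }
}
The bare "{query}" slot-level sample is what lets the user's entire reply to the elicitation prompt be captured into the slot.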
The English US language has a Slot Type called AMAZON.LITERAL that lets you capture the exact phrase or sentence used (depending on how it's used in your utterance). This Slot Type, however, isn't available in other regions.
Amazon also doesn't recommend using it:
Although you can submit new and updated English (US) skills with AMAZON.LITERAL, custom slot types provide better accuracy than AMAZON.LITERAL in most cases. Therefore, we recommend that you consider migrating to custom slot types if possible. Note that AMAZON.LITERAL is not supported for any language other than English (US).
See: https://developer.amazon.com/docs/custom-skills/literal-slot-type-reference.html
There once was a slot type called AMAZON.LITERAL that was allowed in specific regions. However, it has since been deprecated (or removed).
There is however another solution to this problem using custom slots.
Let's say we are creating a food-ordering skill on Alexa, something like Zomato or Yelp for Alexa. Let us give the skill the invocation name robert.
So first we make a list of the type of statements which are going to be made. You can skip this step if your skill isn't this specific. However, this just helps you define the type of statements your skill might expect to encounter.
Alexa order robert to send me a chicken steak with mashed potatoes.
Alexa ask robert to recommend me some good Indian restaurants near me.
Alexa please tell robert to rate Restaurant XYZ's recent delivery with a single star.
After we have made a list of statements we store them in a csv file.
We go ahead and click on the Add button beside Slot Types.
Give your Custom Slot Type a name.
Now once you are done with this, come up with the list of constructs in which your skill can be invoked. Some of them have been given below.
Alexa ask robert to ...
Alexa make robert ...
Alexa order robert to ...
Alexa tell robert to ...
The three dots (...) represent the actual part of the order/statement. This is the text you are interested in extracting. For example, in the statement
Alexa ask Robert to send me a bucket of chicken nuggets.
you would be interested in extracting only "send me a bucket of chicken nuggets".
Now Amazon classifies statements based on intent. There are five default, predefined intents for welcome, cancelling, help, and other basic functionality. We go ahead and create a custom intent for dealing with the mainstream statements that will primarily be used to interact with our skill.
Under the new Custom Intent Window, at the bottom of the page is the space to add slots which will be used in your intent. We add our previously created custom slot and name it literal. (You can name it anything)
The custom slot, literal in our case, holds the text we want extracted from the user's statements.
Now we go ahead and replace the three dots (...) in the list of constructs with {literal} and add it to the list of sample utterances.
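As a sketch, the resulting custom intent in the interaction model could look like this (InteractionIntent matches the intent name in the request JSON below, and CatchAllPhrase is an assumed name for the custom slot type; note that the wake word and invocation name are handled by Alexa and are not part of the sample utterances themselves):
{
  "name": "InteractionIntent",
  "slots": [
    {
      "name": "literal",
      "type": "CatchAllPhrase"
    }
  ],
  "samples": [
    "{literal}"
  ]
}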
For the statement
Alexa order robert to send me a chicken steak with mashed potatoes.
The JSON of the incoming request would then contain a section like this for the custom intent, with the custom slot holding the captured text.
"request": {
"type": "IntentRequest",
"requestId": "",
"timestamp": "2019-01-01T19:37:17Z",
"locale": "en-IN",
"intent": {
"name": "InteractionIntent",
"confirmationStatus": "NONE",
"slots": {
"literal": {
"name": "literal",
"value": "to send me a chicken steak with mashed potatoes.",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "",
"status": {
"code": ""
}
}
]
},
"confirmationStatus": "NONE",
"source": "USER"
}
}
}
}
Under the slots subsection of the custom intent we have our literal slot, whose value gives us the text of the user's speech.
"slots": {
"literal": {
"name": "literal",
"value": "to send me a chicken steak with mashed potatoes."

Coinbase payment iframe switching api versions

I am setting up my site to use the Coinbase iframe for accepting payments.
I am testing using the sandbox.
Sometimes when I make a payment the callback to my server takes the form:
{
  "order": {
    "id": "YDWALXJW",
    "uuid": "2a6de442-be7b-5517-9b49-f00908460115",
    "resource_path": "/v2/orders/2a6de442-be7b-5517-9b49-f00908460115",
    "metadata": null,
    "created_at": "2015-12-06T16:58:02-08:00",
    "status": "completed",
    ...
and other times it looks like this:
{
  "id": "f08d1f11-27f9-5be2-87fd-e086d1b67cab",
  "type": "wallet:orders:paid",
  "data": {
    "resource": {
      "id": "309a20df-a8e6-532d-9a2b-3ce5ea754d6d",
      "code": "52N6TG58",
      "type": "order",
      ...
I realize this is probably just API v1 vs v2, but I don't understand why it seems to be randomly switching back and forth. Any ideas on how to make it use only v2?
Thanks.
Most likely you've entered the same URL as both a Notifications (v2) and Callback (v1) URL.
This is easy to do, given that there are 3 different places in the UI where you can provide either or both of the callback/notification URLs:
Merchant Settings Page
Your API Key's Edit form
The Merchant Tools Generator
You'll receive a POST message for each place you've entered this URL. (I was able to get 5 unique POSTs in my testing!)
The right place to include the URL depends on your situation:
If you just want merchant notifications (paid orders, mispaid orders and payouts), put it in the Merchant settings page.
If you are building an app with functionality beyond merchant tools, and want a broader set of wallet notifications, put it in your API Key's Edit form.
For Merchants I would generally not recommend entering the URL for a button generated via option 3. Based on the title of your question, I'm guessing this is your situation.
You won't be able to view or edit this setting in the future. If you're re-using a static button that you've previously generated, and think that you've included a URL there which you'd like removed, you'll need to replace the button by generating a new one.
I hope that helps!
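Until the duplicate URL is removed, one defensive option is to tell the two payload shapes apart in your handler and ignore the v1 callbacks. A minimal sketch (Express and the route path are assumptions, not part of Coinbase's API):
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical webhook endpoint; use whatever URL you registered with Coinbase.
app.post('/coinbase/webhook', (req, res) => {
  const body = req.body;
  if (body.type && body.type.startsWith('wallet:')) {
    // v2 notification (e.g. "wallet:orders:paid") - the order is under data.resource.
    console.log('v2 notification:', body.type, body.data.resource.id);
  } else if (body.order) {
    // Legacy v1 callback - ignore it here, and remove the v1 Callback URL in settings.
    console.log('ignoring v1 callback for order', body.order.id);
  }
  res.sendStatus(200);
});

app.listen(3000);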

How can I combine location-based services with Passbook in iOS 6?

I see this in the Apple documentation:
Passbook can use location and time data to launch the passes when the app believes you will want to use them.
So can I send a push notification when my customers approach a particular location? If yes, how can I do it?
Is there any tutorial online?
Thx
In your pass.json file, add the locations key as well:
"locations" : [
{"latitude" : 8.5682,
"longitude" : 76.87349,
"relevantText" : "Store nearby on 3rd and Main."}
],
Make sure the iPhone's general location settings are turned ON.
Go through Apple's programming guide:
https://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/PassKit_PG/Chapters/Creating.html#//apple_ref/doc/uid/TP40012195-CH4-SW53
In short, No. This is because Passbook does not make any callbacks when a location alert is triggered.
Apple refers to location alerts as 'passive alerts'. Their purpose is to recognize that the user may need the pass and to make it available to them. There is no direct or indirect data that will tell us that a location alert has been triggered, and certainly nothing that could trigger a push notification.
