We are creating a bot using Watson that will provide the rate of food materials to the end user, along with their availability. In order to fetch the availability, we need to call a REST API with the food details, which in turn will provide us the status.
So here I wanted to know how we can call a REST API from Watson to fetch (feed) data into the conversation.
In this case, you can use Watson Conversation and create the intents with responses based on the food materials.
You'll use a context variable to capture the food the user types, and your application code will do something with this value; in this case, providing the status.
You can create one entity with all the food values, and capture the value in a context variable like this:
{
  "context": {
    "foodValue": "<? @foodtype ?>"
  }
}
Inside your app, debug the response: you'll see an array if the user types more than one food value.
With these values you can check availability and return something to the user. I can't show a full example because you haven't specified which language you're using.
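If your application code happens to be Node.js, a rough sketch could look like the following; the availability endpoint, its response shape, and the reply callback are assumptions for illustration, not a fixed API:

const request = require('request');

// Sketch only: assumes `response` is the JSON returned by the Conversation
// message call, with the context variable set as shown above.
function handleConversationResponse(response, reply) {
  // foodValue may be a single string, or an array if the user typed several foods
  const foods = [].concat(response.context.foodValue || []);

  // Hypothetical availability endpoint; replace with your own REST API
  request.get(
    { url: 'https://example.com/api/availability', qs: { foods: foods.join(',') }, json: true },
    (err, res, body) => {
      if (err) { return reply('Sorry, I could not check availability right now.'); }
      reply(`Status: ${body.status}`); // e.g. "available" / "out of stock"
    }
  );
}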
See the documentation on how to use context variables.
See the official documentation for calling the API.
See the official documentation about the Conversation service.
Check out the Weather example project from IBM Developer: it gets the city the user typed and uses that data in the app, in this case to return the weather.
For example, I have a skill that tells the user about food interactions with medications. The user tells Alexa which medication they want to know about.
Each intent corresponds to a different medication interaction (e.g., grapefruit juice, antacids, etc.). Under the GrapefruitDrugs slot-type, I have several slot values (all of them are drug names). When the GrapefruitIntent is called, it gives the user a warning about this interaction. The problem is that I have other slot-types that share the same drugs and their warnings also need to be communicated to the user.
'Drug X', however, will interact with both grapefruit juice AND antacids. So the slot-type AntacidDrugs also has 'Drug X' listed as a slot value. I want both intents to be called.
How do I make sure that both intents get called? I've looked into chaining intents; however, I have yet to see an example other than one that links an intent to the LaunchRequest.
Intents are there just to indicate what kind of sentence you have received on your webhook. The webhook can be on Lambda or on your custom server (e.g. Java, PHP, Node.js ...). In either case, this is where your code and logic live. Here you know what state your conversation is in and how a given intent should be interpreted.
So in some conversation states you will react the same way to two intents, while in other cases you might interpret the same intent in two different ways.
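To make that concrete for the Alexa case from the question: with the ASK SDK v2 for Node.js, a single handler can claim both intents and answer based on the drug, roughly like this (AntacidIntent and lookUpInteractions are illustrative names, not taken from the question):

const Alexa = require('ask-sdk-core');

// One handler can react to several intents; the logic decides what to say.
const DrugInteractionHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && ['GrapefruitIntent', 'AntacidIntent']
        .includes(Alexa.getIntentName(handlerInput.requestEnvelope));
  },
  handle(handlerInput) {
    const slots = handlerInput.requestEnvelope.request.intent.slots || {};
    // Whichever slot carried the drug name (GrapefruitDrugs or AntacidDrugs)
    const drug = (slots.GrapefruitDrugs && slots.GrapefruitDrugs.value)
      || (slots.AntacidDrugs && slots.AntacidDrugs.value);

    // Hypothetical lookup that returns every interaction warning for this drug,
    // regardless of which intent happened to fire
    const warnings = lookUpInteractions(drug);
    return handlerInput.responseBuilder
      .speak(warnings.join(' '))
      .getResponse();
  }
};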
I have a skill in Alexa, Cortana and Google, and in each case there is a concept of terminating the flow after speaking the result or keeping the mic open to continue the flow. The skill mostly consists of an HTTP API call that returns the information to speak and display, and a flag indicating whether to continue the conversation or not.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the JavaScript file and returns a JSON object containing three elements: speech (the text to speak, possibly SSML); displayText (the text to display to the user); and shouldEndSession (true or false).
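For reference, the JSON object described above might look like this (the values are made up for illustration):

{
  "speech": "<speak>You need four new tires.</speak>",
  "displayText": "You need four new tires.",
  "shouldEndSession": false
}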
The action calls the JavaScript code with type Search and a collect segment. It then outputs the JSON object mentioned above. This all works fine, except that I don't know how to handle shouldEndSession. Is this done in the action, perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But something like "Hi Bixby, ask car repair about replacing my alternator". In this case, the response may be "I need to know what model car you have. What model car?". The user would then say "Toyota" and then Bixby would complete the dialog with the answer or maybe ask for more info.
I'd appreciate some pointers.
Thanks
I think this can easily be done in Bixby with an input prompt when a required input is missing. You can build an input-view to enhance the user experience.
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby enabled Samsung device, check our marketplace for ideas and inspirations.
My interaction has to be executed in two steps. In the first step, Alexa will give some information to the user. On the basis of that information, Alexa will ask the user whether to take another action (the answer is yes/no).
So is it possible to invoke one intent and, on the basis of the previous intent's reply through a reprompt, call another intent of the Alexa app?
If so, how can I do that through the Alexa Node.js SDK v2?
Use Case:
My app connects with a third-party API which requires Alexa to use account linking. The scenario is that my account is linked and I have a valid access token. Here's how the conversation will go:
User: Alexa, ask "a" to get me overdue invoices.
Alexa: You have "x" number of invoices. Do you want to send payment reminders?
User: Yes.
Alexa: Your request to send reminders has been registered. Anything else I can help?
User: No Thanks.
So for this, I have to communicate with an external API two times. One, while getting overdue invoices, and two, while sending reminders.
From your example, I'm guessing you have invoices as one intent, then reminders as another intent. If reminders is only ever used immediately after invoices, then I'd make them a single intent. But if you want users to create reminders at any point, or if you have multiple intents that could flow into the reminders intent, then separate intents can work.
Check out:
How to Pass a new Intent
Each of the Dialog directives includes an updatedIntent property that can take an Intent object. Use this to:
Trigger a dialog on a completely different intent. For example, after completing the dialog for a BookFlight intent, you could return Dialog.Delegate with updatedIntent set to BookRentalCar to start a new dialog.
...
When you use updatedIntent to change to a different intent, the directive acts against the new intent instead of the original:
...
When you use updatedIntent to set or change data on the original intent, make sure that the intent name and full set of slots matches the intent sent to your skill.
So to use your example, this is what you would do:
User: Alexa, ask SkillName to get me overdue invoices.
Trigger invoices intent.
Make API call, get # invoices.
Return Dialog.Delegate or Dialog.ElicitSlot or Dialog.ConfirmIntent with updatedIntent set to reminders intent. (include any and all slots, must be a full intent object)
Set the outputSpeech to "You have X number of invoices. Do you want to send payment reminders?"
Alexa: You have X number of invoices. Do you want to send payment reminders?
User: Yes.
Depending on which Dialog Directive you've chosen to use, it will return the user's answer differently. Either filling a slot you've prepared, or in a confirmationStatus, or AMAZON.YesIntent.
Check for the correct one of those, and make your reminders API call if confirmed.
Return fulfilled intent with output message:
Alexa: Your request to send reminders has been registered. Anything else I can help?
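A rough sketch of those steps with the Alexa Node.js SDK v2 follows; the intent names, slot layout, and the getOverdueInvoices/sendReminders helpers are illustrative stand-ins for your skill and API, not the actual implementation:

const Alexa = require('ask-sdk-core');

const InvoicesIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'InvoicesIntent';
  },
  async handle(handlerInput) {
    const count = await getOverdueInvoices(); // hypothetical first API call

    // Hand the dialog over to the reminders intent and ask for confirmation.
    // updatedIntent must be a full intent object (name, confirmationStatus, slots).
    return handlerInput.responseBuilder
      .addConfirmIntentDirective({
        name: 'RemindersIntent',
        confirmationStatus: 'NONE',
        slots: {}
      })
      .speak(`You have ${count} overdue invoices. Do you want to send payment reminders?`)
      .getResponse();
  }
};

const RemindersIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'RemindersIntent';
  },
  async handle(handlerInput) {
    const intent = handlerInput.requestEnvelope.request.intent;
    if (intent.confirmationStatus === 'CONFIRMED') {
      await sendReminders(); // hypothetical second API call
      return handlerInput.responseBuilder
        .speak('Your request to send reminders has been registered. Anything else I can help with?')
        .reprompt('Anything else I can help with?')
        .getResponse();
    }
    return handlerInput.responseBuilder
      .speak('Okay, no reminders sent.')
      .getResponse();
  }
};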
I built a Conversation dialog model that works perfectly when tested in the www.ibmwatsonconversation.com workspace.
However, when I use the API to call the same workspace from my web app, the response given through the API is not the same.
Below is the flow:
Intent 1
Intent 2 -> Entity 1
Intent 3
Intended behavior:
1. Ask a question with intent 2, get reply from the intent 2 node.
2. Enter entity 1, get reply from the entity 1 node.
Actual behavior (only from the API):
1. Ask a question with intent 2, get reply from the intent 2 node.
2. Enter entity 1, get reply from intent 1.
The most likely cause for this is that you are not passing back the context object at every call. Conversation is stateless, so without the context object it can't determine where you are, and will default to root.
Your first call will create the context object, and you can keep passing that back.
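For reference, a minimal Node.js sketch with the watson-developer-cloud SDK (Conversation V1); the credentials and workspace ID are placeholders, and the single shared context variable is a simplification — in a real app, store the context per user/session:

const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: 'YOUR_USERNAME',
  password: 'YOUR_PASSWORD',
  version_date: '2017-05-26'
});

let lastContext = {}; // keep this per user/session in a real app

function sendToWatson(text, callback) {
  conversation.message({
    workspace_id: 'YOUR_WORKSPACE_ID',
    input: { text: text },
    context: lastContext            // pass the previous context back on every call
  }, (err, response) => {
    if (err) { return callback(err); }
    lastContext = response.context; // save the returned context for the next turn
    callback(null, response.output.text.join(' '));
  });
}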
If this isn't the issue, you need to supply a demo of the issue with dummy data, or a screenshot of your dialog flow (related part only).
I want to be able to allow my writers to see how much traffic their articles are getting. I can do this in Google Analytics, but I can't figure out how to share this data with them without giving them access to all the data. So I was thinking of adding another analytics service that would insert a unique code for each author on their articles. I already have the GA code and the Quantcast code, so I don't want to bog down my site much more. Should I use a pixel tracker or a JavaScript tracker?
UPDATE: Here is the code I use in analytics to track my authors.
try {
  var pageTracker = _gat._getTracker("UA-xxxxxxx-x");
  pageTracker._trackPageview();
<?php if ( is_singular()) { ?>
  // On single posts, also record an event carrying the author ID
  pageTracker._trackEvent('Authors','viewed','<?php the_author_meta('ID'); ?>');
<?php } ?>
} catch(err) {}
You could use a custom field to track the writers by a unique ID that they probably already have. Then you could use GA's API to pull data where the custom field value equals that unique ID, and display it in their profile or wherever you want them to see it.
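If you keep the event tracking shown in the question's update, one way to pull those numbers per author is the Google Analytics Reporting API. A rough Node.js sketch with the googleapis client follows; the view ID, key file, date range, and the assumption that your property can be queried through the v4 Reporting API are all placeholders to adapt:

const { google } = require('googleapis');

async function eventsForAuthor(authorId) {
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json', // hypothetical service-account key
    scopes: ['https://www.googleapis.com/auth/analytics.readonly']
  });
  const reporting = google.analyticsreporting({ version: 'v4', auth });

  const res = await reporting.reports.batchGet({
    requestBody: {
      reportRequests: [{
        viewId: 'YOUR_VIEW_ID',
        dateRanges: [{ startDate: '30daysAgo', endDate: 'today' }],
        metrics: [{ expression: 'ga:totalEvents' }],
        dimensions: [{ name: 'ga:pagePath' }],
        // Matches the _trackEvent('Authors', 'viewed', authorId) call from the question
        dimensionFilterClauses: [{
          filters: [{
            dimensionName: 'ga:eventLabel',
            operator: 'EXACT',
            expressions: [String(authorId)]
          }]
        }]
      }]
    }
  });
  return res.data.reports[0].data.rows || [];
}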
One option would be to use a server-local Redis instance and use the PHP Redis library to increment a local counter using the author ID and article IDs.
For example, if in Redis you use a sorted set with the author ID as the key, and the article ID (or however you identify an article) as a member that you increment using ZINCRBY on each page load, you'll have the data readily available and under your control. You could then have a PHP page that pulls the author's data from Redis and displays it in whatever format you need. For example, you could build a table showing them traffic for each of their articles, or make pretty graphs to display it. You could extend the above to do per-day traffic (for example) by using a key structure of "AUTHORID:YYYY-MM-DD" instead of just the author ID.
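The answer above refers to the PHP Redis library; the same idea sketched in Node.js with the node-redis (v4) client, using the sorted-set scheme just described, might look roughly like this:

const { createClient } = require('redis');

const redis = createClient();       // server-local instance, default localhost:6379
const ready = redis.connect();      // node-redis v4: connect once at startup

// On each article load: bump the article's counter inside the author's sorted set
async function recordHit(authorId, articleId) {
  await ready;
  await redis.zIncrBy(`author:${authorId}`, 1, String(articleId));
}

// For the author's stats page: all of their articles with hit counts
async function hitsForAuthor(authorId) {
  await ready;
  return redis.zRangeWithScores(`author:${authorId}`, 0, -1);
}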
The hit penalty for tracking this is much lower than reaching out to an external site; it should be on the order of single-digit milliseconds. Even if your Redis instance were elsewhere, the response times should still be lower than external tracking. I know you are using GA, but this is a simple-to-implement method you could consider.
This depends slightly on how many authors you have and your level of involvement. The main approaches I would use are:
Create a separate view per author and filter in his/her traffic
Use a Google Docs plugin to pull down the author's data and share it
Use the API to pull down the relevant information
Happy to give more specifics if you could give more detail on what you want.