Create a dialog node that allows for document upload in Watson Assistant - ibm-watson

I have created a chatbot using IBM Watson Assistant and I am trying to find a way to let the end user upload documents through the Watson API. Has anyone else tried to achieve this before?

The Watson service only accepts text, which it then tries to classify and respond to. Your application layer will have to either process the document into some form of JSON string, or simply collect it and handle it however you want, and then send some kind of indicator to Watson so the conversation can move on.
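For illustration, here is a minimal sketch of that application-layer approach, assuming Express with multer for the upload and the ibm-watson v2 SDK. The route, the "document uploaded" trigger text, and the uploaded_file context variable are hypothetical and would need to match your own dialog design:

```js
const express = require('express');
const multer = require('multer');
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');

const app = express();
const upload = multer({ dest: 'uploads/' }); // store the raw file locally

const assistant = new AssistantV2({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY }),
  serviceUrl: process.env.ASSISTANT_URL,
});

app.post('/upload', upload.single('document'), async (req, res) => {
  // 1. Do whatever you need with the file (store it, parse it, etc.).
  //    req.file.path points at the uploaded document.

  // 2. Tell Watson the upload happened so the dialog can move on.
  //    The trigger text and context variable below are hypothetical.
  const response = await assistant.message({
    assistantId: process.env.ASSISTANT_ID,
    sessionId: req.body.sessionId, // multer also parses ordinary form fields
    input: { message_type: 'text', text: 'document uploaded' },
    context: {
      skills: {
        'main skill': { user_defined: { uploaded_file: req.file.originalname } },
      },
    },
  });
  res.json(response.result);
});

app.listen(3000);
```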

Related

Connect IBM Speech to Text service to IBM Watson Assistant

I'm using the IBM Speech to Text service (STT) and I want to connect it to IBM Watson Assistant (WA) Plus Plan so users can ask questions in speech instead of text only.
What I want is a microphone icon in the chat window; after clicking it, a user can talk and ask a question.
I searched the documentation on how to connect STT to WA, but the only thing I found is how to connect STT to WA through a voice telephone line.
Any help, please?
Thanks
With the Watson Assistant web chat, you can connect it to both TTS and STT services.
For TTS, the short explanation is to use the receive event that is fired whenever web chat receives a message. You can send the message to your TTS service to speak the desired text.
For STT, you'll need to add a button of some sort to the UI. You are a little limited here: you won't be able to put a microphone icon inside the input field, but you can put one directly above it using one of the writeableElements (beforeInputElement being the most appropriate). Once the button is clicked, you'll make a call to your STT service. When it returns the appropriate text, you can use the send method to send the text to WA (a sketch follows the links below).
We even have a complete tutorial showing you how to get all the pieces working together: https://github.com/watson-developer-cloud/assistant-toolkit/tree/master/integrations/webchat/examples/speech-and-text
And links to the relevant documentation:
https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-instance-methods#writeableelements
https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-events#receive
https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-instance-methods#send
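Putting those pieces together, a rough sketch of both hooks might look like the following. This assumes the standard web chat embed snippet; speakText() and transcribe() are placeholders for your own TTS/STT service calls, and the ID fields come from your web chat setup:

```js
// Goes in the page that embeds web chat.
window.watsonAssistantChatOptions = {
  integrationID: 'YOUR_INTEGRATION_ID',
  region: 'YOUR_REGION',
  serviceInstanceID: 'YOUR_SERVICE_INSTANCE_ID',
  onLoad: async (instance) => {
    // TTS: speak every text response web chat receives.
    instance.on({
      type: 'receive',
      handler: (event) => {
        for (const item of event.data.output.generic || []) {
          if (item.response_type === 'text') {
            speakText(item.text); // placeholder for your TTS call
          }
        }
      },
    });

    // STT: mount a microphone button just above the input field.
    const micButton = document.createElement('button');
    micButton.textContent = '🎤';
    micButton.addEventListener('click', async () => {
      const text = await transcribe(); // placeholder for your STT call
      instance.send({ input: { message_type: 'text', text } });
    });
    instance.writeableElements.beforeInputElement.appendChild(micButton);

    await instance.render();
  },
};
```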

How to use the Watson Assistant "listLogs" API function for a versioned workspace?

How can I use the Watson Assistant "listLogs" API function to list the user conversations from a specific assistant? We have one skill linked to three assistants for our environments (DEV/TEST/PROD), and I want to retrieve the conversations from the PROD assistant only. What filters do I have to use?
What I already tried:
When using the listLogs function with just the workspace_id parameter, it returns an empty list.
When using the listAllLogs function with a filter parameter (language::de,workspace_id::my-workspace-id), the resulting list is empty as well.
When using the listAllLogs function with another filter parameter (language::de,meta.conversation.assistant_and_skill_reference::"my-assistant-id:main+skill"), again an empty list is returned.
As the skill is used in live chatbot, there are thousands of logged conversations, all visible in the Analytics tab of the Watson Assistant console, so the data is definitely there.
UPDATE: The output from the Watson Conversation Tool is empty as well.
I finally got the information from IBM Support that this is currently not possible.

Retrieve a chat log in Watson Assistant

I'm using Watson Assistant and a Cloud Function in a basic chatbot. How can I retrieve the chat log of a specific conversation via the Cloud Function (Node.js)? I'd like to offer this as a user-facing feature: for example, if the user types "Chat Log", Watson Assistant sends their chat log back to them (via the Cloud Function). Thanks.
If you are using the v1 version of Watson Assistant (WA), you can get the logs via this API: https://cloud.ibm.com/apidocs/assistant/assistant-v1#listlogs
If you are using the v2 version of Watson Assistant (WA), you can get the logs via this API: https://cloud.ibm.com/apidocs/assistant/assistant-v2#listlogs
In both versions you will find a filter parameter, which can be set to the current chat conversation ID or session ID to get the chat log.
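As a concrete illustration, a v1 call with such a filter might look like this with the ibm-watson Node SDK; the environment variables and the conversation ID are placeholders your application would supply:

```js
const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const assistant = new AssistantV1({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY }),
  serviceUrl: process.env.ASSISTANT_URL,
});

async function getChatLog(workspaceId, conversationId) {
  // Narrow the logs to a single conversation via the filter parameter.
  const { result } = await assistant.listLogs({
    workspaceId,
    filter: `response.context.conversation_id::${conversationId}`,
    sort: 'request_timestamp',
  });
  return result.logs; // one entry per request/response exchange
}
```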
There is a REST API that lets you get user conversations; however, it returns all users' conversations, so you would need to implement some kind of proxy that filters these logs anyway.
For this particular use case, I believe it would be best to log the conversations into a separate database where the data is organized by user ID (a sketch follows below). First, separate results for a particular user are easy to retrieve that way; second, in IBM Cloud, Watson Assistant chat logs are kept for only 30 days, which might not be enough for this kind of functionality.
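A minimal sketch of that logging approach, assuming MongoDB via the official Node driver; logExchange() is a hypothetical helper your orchestration layer would call after every message round trip:

```js
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_URI);
const logs = client.db('chatbot').collection('chat_logs');

async function logExchange(userId, userText, watsonResponse) {
  await logs.insertOne({
    userId, // organize the data by user ID
    userText,
    watsonText: watsonResponse.output.generic?.[0]?.text,
    timestamp: new Date(),
  });
}

// Later: fetch one user's full chat log, unconstrained by the 30-day limit.
const getChatLog = (userId) =>
  logs.find({ userId }).sort({ timestamp: 1 }).toArray();
```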

Employee calls in and gives trip information to be saved in a database

I would like to code something up where my employees can call in and Watson asks them the important questions; they just tell Watson the information, and Watson then outputs that information in CSV, XLS, or a similar format, possibly even into a database.
It seems that I should be able to do this because of the way it can converse with people through messenger etc.
I know it is probably a three-pronged approach.
Ideas?
@Florentino DeLaguna, in this case you can use the Conversation service and the Text to Speech and Speech to Text APIs from IBM Watson. Here are the options you can use for that:
In theory, you would have to build an application that integrates with an IVR (using Asterisk, for example), convert the speech to text, send that text to the Conversation service, and then transform the Conversation response into voice and send it back to the IVR (a sketch of this round trip follows the API links below). In practice, there are some conversational problems, especially on the Speech to Text side. For the return voice, though, you can apply effects using IBM Watson Text to Speech (faster or slower voices, pause control, expressive emotion, ...).
Note: IVR audio is narrowband (8 kHz), while most Speech to Text services only accept broadband (16 kHz).
Note II: Your app (Asterisk, for example) needs to be able to consume a REST API and/or use WebSockets so that it can invoke the Watson Speech to Text service.
Another option is to route a call out of Asterisk to the new IBM Voice Gateway, which is a SIP endpoint that fronts a Watson self-service agent by orchestrating Speech to Text, Text to Speech, and the Watson Conversation service. You can think of IBM Voice Gateway as a standalone, cognitive IVR system. Go here for more details.
Another potential option is to use MRCP. IBM has a services solution that will allow you to reach the Watson STT and TTS engines using MRCP. Not sure if Asterisk supports MRCP but that is typically how traditional IVRs integrate with ASRs.
Important: options 2 and 3 were answered by another person; see the official answer.
See more about these APIs:
Speech to Text
Text to Speech
Conversation
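As a rough illustration of that STT -> Conversation -> TTS round trip, here is a condensed sketch using the ibm-watson Node SDK. askConversation() stands in for your Conversation service call, and your telephony layer (Asterisk/the IVR) would supply the caller audio and play back the result:

```js
const fs = require('fs');
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const TextToSpeechV1 = require('ibm-watson/text-to-speech/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const stt = new SpeechToTextV1({
  authenticator: new IamAuthenticator({ apikey: process.env.STT_APIKEY }),
});
const tts = new TextToSpeechV1({
  authenticator: new IamAuthenticator({ apikey: process.env.TTS_APIKEY }),
});

async function handleCallerUtterance(audioPath) {
  // 1. Transcribe the caller audio. Note the narrowband model: IVR audio
  //    is 8 kHz, as mentioned above.
  const { result } = await stt.recognize({
    audio: fs.createReadStream(audioPath),
    contentType: 'audio/wav',
    model: 'en-US_NarrowbandModel',
  });
  const text = result.results[0]?.alternatives[0]?.transcript || '';

  // 2. Send the text to the Conversation service (placeholder call),
  //    then synthesize its reply for playback on the line.
  const replyText = await askConversation(text); // your Conversation call
  const synth = await tts.synthesize({
    text: replyText,
    voice: 'en-US_MichaelV3Voice',
    accept: 'audio/wav',
  });
  return synth.result; // readable stream of the reply audio
}
```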
Have a look at the Voximal solution; it integrates all the Speech to Text cloud APIs (and Text to Speech) as an Asterisk application through a standard VoiceXML browser.
Everything is integrated in the VoiceXML interpreter: you get the full text result of the transcription, and you can push it to a chatbot to detect the user's intent and pick up dynamic parameters like date, number, city, and more... for example by using api.ai.
Voximal supports STT from Google, Microsoft, and IBM/Watson (and soon Amazon).
The three APIs listed by Sayuri are embedded in the solution.

Implementing a search option in a MEAN stack application

Can someone please suggest how to implement a search feature in an application built using AngularJS, Node.js, and MongoDB? The feature should work like this: when a user enters the letter "a", all book names starting with "a" should be fetched from the database and displayed in a dropdown (e.g., the tags dropdown on Stack Overflow).
Any suggestion and help?
You can have a web service running such that Angular makes a REST API call to it whenever someone types a letter in the search box.
The web service code should handle querying the database and sending back the results.
You can use a cache to make it faster.
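A minimal sketch of such a suggestion endpoint, assuming Express and the official MongoDB driver; the database, collection, and field names are examples only:

```js
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient(process.env.MONGODB_URI);
const books = client.db('library').collection('books');

// GET /search?q=a  ->  book names starting with "a"
app.get('/search', async (req, res) => {
  const prefix = (req.query.q || '').trim();
  if (!prefix) return res.json([]);

  // Anchored, case-insensitive prefix match; an index on "name" keeps a
  // ^-anchored regex fast enough for type-ahead. In production, escape
  // regex metacharacters in user input before building the pattern.
  const results = await books
    .find({ name: new RegExp('^' + prefix, 'i') })
    .project({ name: 1, _id: 0 })
    .limit(10)
    .toArray();
  res.json(results.map((b) => b.name));
});

app.listen(3000);
```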
