I have created a Watson chatbot with Conversation, STT, and TTS services from the Node.js code below:
https://github.com/mirkorodriguez/ibm-watson-conversation-full
I have used my English-language Conversation, STT, and TTS services. I am able to get the proper responses back from Conversation, but when I click on the microphone and speak in English, the application takes the input as Spanish and responds back in Spanish. Please let me know what needs to be changed to get English input instead of Spanish. Below is a screenshot of the same.
That will be because the sample you are using has hard-coded the language as Spanish. The sample you are pointing at looks like a clone of the Standard Car Dashboard demo, which is in English, so all you need to do is go back to the original sample.
If you do want to change the language, how to do so is explained in this forum entry: https://developer.ibm.com/answers/questions/447834/change-the-text-to-speech-voice-to-frensh-in-the-c.html?childToView=447992#answer-447992
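If you would rather just fix your fork, here is a hedged sketch of where the language is typically hard-coded in samples like this (the exact parameter names depend on the sample's code; the model and voice identifiers below are standard Watson STT/TTS names):

```javascript
// Hypothetical sketch: where a sample like this usually hard-codes the language.
// Switching both the STT model and the TTS voice to English variants should make
// the app both hear and answer in English.
const sttParams = {
  model: 'es-ES_BroadbandModel', // change to 'en-US_BroadbandModel'
};
const ttsParams = {
  voice: 'es-ES_LauraVoice',     // change to 'en-US_AllisonVoice'
};
```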
I'm using the IBM Speech to Text (STT) service and I want to connect it to the IBM Watson Assistant (WA) Plus plan to allow users to ask questions by speech instead of text only.
What I want is a microphone icon in the chat window; after clicking this microphone icon, a user can talk and ask a question.
I looked through the documentation on how to connect STT to WA; however, the only thing I found is how to connect STT to WA through a voice telephone line.
Any help, please?
Thanks
You can connect the Watson Assistant web chat to both TTS and STT services.
For TTS, the short explanation is to use the receive event that is fired whenever web chat receives a message. You can send the message to your TTS service to speak the desired text.
For STT, you'll need to add a button of some sort to the UI. You are a little limited here: you won't be able to put a microphone icon inside the input field, but you can put one directly above the input field using one of the writeableElements (beforeInputElement being the most appropriate). Once the button is clicked, you'll make a call to your STT service. When it returns the appropriate text, you can use the send method to send the text to WA; see the sketch after the links below.
We even have a complete tutorial showing you how to get all the pieces working together: https://github.com/watson-developer-cloud/assistant-toolkit/tree/master/integrations/webchat/examples/speech-and-text
And links to the relevant documentation:
https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-instance-methods#writeableelements
https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-events#receive
https://web-chat.global.assistant.watson.cloud.ibm.com/docs.html?to=api-instance-methods#send
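Putting those pieces together, here is a minimal sketch, assuming the web chat instance methods linked above and a direct browser-side call to the STT /v1/recognize endpoint. The integration IDs, service URL, and API key are placeholders; in a real deployment you would proxy the STT call through your own server rather than exposing the key in the browser:

```javascript
const STT_URL = 'YOUR_STT_SERVICE_URL';   // placeholder: from your STT credentials
const STT_APIKEY = 'YOUR_STT_APIKEY';     // placeholder: proxy server-side in production

function speakWithTTS(text) {
  // Placeholder: call your Text to Speech service here and play the audio.
}

window.watsonAssistantChatOptions = {
  integrationID: 'YOUR_INTEGRATION_ID',          // placeholder
  region: 'us-south',                            // placeholder
  serviceInstanceID: 'YOUR_SERVICE_INSTANCE_ID', // placeholder
  onLoad: (instance) => {
    // STT side: add a microphone button above the input field using the
    // beforeInputElement writeable element.
    const micButton = document.createElement('button');
    micButton.textContent = '🎤';
    instance.writeableElements.beforeInputElement.appendChild(micButton);

    micButton.addEventListener('click', async () => {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
      const chunks = [];
      recorder.ondataavailable = (e) => chunks.push(e.data);
      recorder.onstop = async () => {
        stream.getTracks().forEach((t) => t.stop());
        // Send the recorded clip to Speech to Text (audio/webm is a supported format).
        const resp = await fetch(STT_URL + '/v1/recognize', {
          method: 'POST',
          headers: {
            Authorization: 'Basic ' + btoa('apikey:' + STT_APIKEY),
            'Content-Type': 'audio/webm',
          },
          body: new Blob(chunks, { type: 'audio/webm' }),
        });
        const stt = await resp.json();
        const text = stt.results?.[0]?.alternatives?.[0]?.transcript;
        // Hand the transcript to Watson Assistant as if the user had typed it.
        if (text) instance.send({ input: { text } });
      };
      recorder.start();
      setTimeout(() => recorder.stop(), 5000); // record a 5-second clip
    });

    // TTS side: the 'receive' event fires for each message web chat receives;
    // forward the text to your TTS service.
    instance.on({
      type: 'receive',
      handler: (event) => {
        const item = event.data.output.generic && event.data.output.generic[0];
        if (item && item.response_type === 'text') speakWithTTS(item.text);
      },
    });

    instance.render();
  },
};
```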
I am on the IBM Watson Plus (Trial) plan. I just created a very simple assistant and added a Dialog skill. However, when I want to try my chatbot, I get the following message at the top of the screen:
Watson is training. Results in the try out panel may not reflect the latest edits.
No matter what I type, I get this message:
I didn't understand. Can you try again?
I removed my Watson resource and created a new one; however, I still have the same issue.
I tried the "Customer Care Sample Skill" as well, but I am still facing the same issue. Can you please help me?
I need to send a list of questions to be asked of Alexa, and get the responses in the form of text (not voice). The responses must be exactly the same as what the Alexa app provides. All the documentation/examples I've seen so far are for the case where the goal is to create a new Alexa skill, which is not my goal.
I'd appreciate it if someone could point me to an example or documentation for what I'm trying to accomplish.
Alexa responses and requests are in a JSON format. It's easier to see the interaction when you're testing a skill, so even though it's not your end goal, it would probably be useful to create a dummy skill and test it in the Alexa Developer Console.
If you're trying to make a chatbot-type project, you would create an API Gateway endpoint for your Alexa skill's Lambda function and then use that with Dialogflow and its text-based integrations such as Messenger, web demo, Twilio, etc.
Here is an example in the documentation of what a response and request looks like:
https://developer.amazon.com/docs/custom-skills/request-and-response-json-reference.html
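For reference, a minimal response body follows this shape (the values below are placeholders; see the reference above for the full schema). The outputSpeech.text field carries the same text the Alexa app displays:

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Today's weather is sunny with a high of 75."
    },
    "shouldEndSession": true
  }
}
```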
Does anyone have a working example of code for a Dialogflow bot where the user can switch from an English Facebook bot to a French one?
EDIT: Ideally, the bot would initially engage in English, then prompt the user to select a language as the first intent, and then switch over to the other language. Is there a way to do this? Do I have to re-initiate an instance of the Dialogflow (api.ai) body with the updated lang variable? Or can I send a response with "lang" in the JSON to switch?
I would like to code something up where my employees can call in and Watson will ask them the important questions; they can just tell Watson the information, and Watson then outputs that information into a CSV, XLS, or similar format, possibly even a database.
It seems that I should be able to do this because of the way it can converse with people through Messenger, etc.
I know it is probably a three-pronged approach.
Ideas?
@Florentino DeLaguna, in this case you can use the Conversation service and the Text to Speech and Speech to Text APIs from IBM Watson. These are the options you can use for that:
In theory, you would have to build an application that integrates with an IVR (using Asterisk, for example), convert the speech to text, send that text to the Conversation service, and then transform the Conversation response into voice and send it back to the IVR. In practice, there are some conversational problems, especially on the Speech to Text side. For the return voice, you can apply some effects using IBM Watson Text to Speech (faster and slower voices, control of pauses, adding expressiveness, ...).
Note: IVR audio is narrowband (8 kHz), and most Speech to Text services only accept broadband (16 kHz).
Note II: Your app (like Asterisk) needs to be able to consume a REST API and/or use WebSockets so that it can invoke the Watson Speech to Text service.
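For illustration, a minimal sketch of that round trip in Node.js, assuming the (older) watson-developer-cloud SDK and placeholder credentials and workspace ID. Note that Watson STT does offer narrowband models for 8 kHz telephony audio:

```javascript
const fs = require('fs');
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');
const ConversationV1 = require('watson-developer-cloud/conversation/v1');
const TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');

const speechToText = new SpeechToTextV1({ username: 'USER', password: 'PASS' });
const conversation = new ConversationV1({
  username: 'USER',
  password: 'PASS',
  version_date: '2017-05-26',
});
const textToSpeech = new TextToSpeechV1({ username: 'USER', password: 'PASS' });

// 1. Transcribe the caller's audio. The narrowband model matches the
// 8 kHz telephony audio produced by the IVR.
speechToText.recognize(
  {
    audio: fs.createReadStream('caller.wav'), // audio captured by Asterisk
    content_type: 'audio/wav',
    model: 'en-US_NarrowbandModel',
  },
  (err, stt) => {
    if (err) throw err;
    const text = stt.results[0].alternatives[0].transcript;

    // 2. Send the transcript to the Conversation workspace.
    conversation.message(
      { workspace_id: 'YOUR_WORKSPACE_ID', input: { text: text } },
      (err, conv) => {
        if (err) throw err;
        const reply = conv.output.text.join(' ');

        // 3. Synthesize the reply and hand the audio file back to the IVR.
        textToSpeech
          .synthesize({ text: reply, voice: 'en-US_AllisonVoice', accept: 'audio/wav' })
          .pipe(fs.createWriteStream('reply.wav'));
      }
    );
  }
);
```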
Another option is to route a call out of Asterisk to the new IBM Voice Gateway, which is a SIP endpoint that fronts a Watson self-service agent by orchestrating Speech to Text, Text to Speech, and the Watson Conversation service. You can think of IBM Voice Gateway as a standalone, cognitive IVR system. See the IBM Voice Gateway documentation for more details.
Another potential option is to use MRCP. IBM has a services solution that will allow you to reach the Watson STT and TTS engines using MRCP. Not sure if Asterisk supports MRCP but that is typically how traditional IVRs integrate with ASRs.
Important: options 2 and 3 were answered by another person; see the official answer.
See more about these APIs:
Speech to Text
Text to Speech
Conversation
Have a look at the Voximal solution; it integrates the Speech to Text cloud APIs (and Text to Speech) as an Asterisk application through a standard VoiceXML browser.
Everything is integrated in the VoiceXML interpreter: you get the full text result of the transcription, and you can push it to a chatbot to detect the user's intent and pick up dynamic parameters like dates, numbers, cities, and more, for example by using api.ai.
Voximal supports STT from Google, Microsoft, and IBM/Watson (and soon Amazon).
The three APIs listed by Sayuri are embedded in the solution.
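As a rough illustration of that last step, here is a minimal sketch, assuming the api.ai v1 REST API of that era: it pushes the transcription returned by the VoiceXML browser to api.ai to extract the intent and parameters. The client access token and session ID are placeholders:

```javascript
const https = require('https');

// Send the full-text transcription to api.ai and return the detected intent.
function detectIntent(transcript, sessionId, callback) {
  const body = JSON.stringify({
    query: transcript,    // transcription produced by the STT engine
    lang: 'en',           // must match the api.ai agent's language
    sessionId: sessionId, // keeps conversation context per caller
  });
  const req = https.request(
    {
      hostname: 'api.api.ai',
      path: '/v1/query?v=20150910',
      method: 'POST',
      headers: {
        Authorization: 'Bearer CLIENT_ACCESS_TOKEN', // placeholder token
        'Content-Type': 'application/json',
      },
    },
    (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => {
        const result = JSON.parse(data).result;
        // result.metadata.intentName holds the detected intent;
        // result.parameters holds extracted values (dates, numbers, cities, ...).
        callback(null, result);
      });
    }
  );
  req.on('error', callback);
  req.end(body);
}
```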