I am building a conversational IVR with Watson Assistant. How can I make it play on-hold music, or on-hold text-to-speech?
The whole idea of a chatbot is to get away from "dead space" and Muzak. What is it that you are trying to do?
Related
I'm working on an application where we want to try a robot voice for user interactions instead of the current Speech Services standard voices.
That would make the application more exciting since our bot will be talking to kids.
The application will speak Brazilian Portuguese.
Questions:
Is there a built-in language model that would accomplish that for pt-BR?
If not, would it be possible to customize the standard voice via SSML or C#?
Suggestions are also welcome!
You can look into using espeak to generate a robot-sounding voice. You can also do it in SSML using the "range" attribute of the prosody element. Currently only the Microsoft engines (Azure cloud, SAPI5, and WinRT's Windows.Media.Speech) support the "range" attribute.
Example:
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="pt-BR">
  <prosody pitch="x-low" range="-100%">All your base are belong to us</prosody>
</speak>
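If you are driving the synthesis from code, here is a minimal sketch using the JavaScript/TypeScript Speech SDK (the question mentions C#, but the C# SDK mirrors these calls almost one-to-one). The key, region, and voice name are placeholders/assumptions; check the current list of pt-BR voices:

import * as sdk from "microsoft-cognitiveservices-speech-sdk";

// Placeholders: substitute your own Azure Speech key and region.
const speechConfig = sdk.SpeechConfig.fromSubscription("YOUR_KEY", "YOUR_REGION");
const synthesizer = new sdk.SpeechSynthesizer(speechConfig);

// The same "range" trick as above, wrapped in a full SSML document.
// The voice name is an assumption; pick any pt-BR voice you like.
const ssml = `<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="pt-BR">
  <voice name="pt-BR-FranciscaNeural">
    <prosody pitch="x-low" range="-100%">Olá, tudo bem?</prosody>
  </voice>
</speak>`;

synthesizer.speakSsmlAsync(
  ssml,
  result => {
    // result.audioData holds the synthesized audio on success.
    synthesizer.close();
  },
  error => {
    console.error(error);
    synthesizer.close();
  }
);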
I am new to Ionic and am going to build an app that shows information about works of art. The data would be text, pictures, audio, and video. I have two questions:
1. Which database should I choose on the server side for storing this data?
I have done some searching on Google, and it looks like SQLite and MongoDB are popular choices, but I am not sure which one fits my case better.
2. What API can I use for streaming videos from the server?
The questions may be quite basic, but any guidance would be helpful. Thank you.
I have some devices I want to give to my clients, e.g. to take home.
Basically, I want them to be able to ask the device (like an Echo Dot):
Ask MYAPP, what song is Number One
Ask MYAPP, what song is Number Two
.. etc
and then it reads the name of a song.
My questions (I have never worked with Alexa or Amazon services before):
How long will it take to get it certified?
Do I need to get it certified?
Is there an issue with playing a song?
I don't own a device; can I test it well enough without owning one?
Is the Alexa Skills Kit API easy, so I can get this done rather quickly, or is it difficult to get started?
What's a good place to help get me started? I quickly looked at creating a skill and the procedure seems heavyweight. Is there maybe a forum or chat where the gurus hang out?
How long will it take to get it certified? - Once you submit the skill, it will take a maximum of 7 business days to get certified (most of my apps were certified in 2 days). Please read the certification checklist here.
Do I need to get it certified? - Yes, it must be certified for your skill to be available in the Amazon Alexa skill store. If it is not in the skill store, other people cannot enable it on their devices and it will be available only in your account. To test the skill you don't need certification, as you can try it from your own Amazon account.
Is there an issue with playing a song? - You can play any audio file, but the current limit on audio length is 90 seconds. Please read more here.
I don't own a device, can I test it well enough without owning one? - You don't need a device to test it. You can use Echosim (https://echosim.io/) to test your skill. Alternatively, you can use a Raspberry Pi, since you can set it up as an Alexa-enabled device.
Is the Alexa Skills Kit API easy, so I can get this done rather quickly, or is it difficult to get started? - It is very easy to do. Trust me, I learned it and created a skill in about a week.
What's a good place to help get me started? - First you need an Amazon developer account (I believe you already have one). Below are links to simple end-to-end samples:
https://developer.amazon.com/alexa-skills-kit/alexa-skill-quick-start-tutorial
https://developer.amazon.com/blogs/post/Tx3DVGG0K0TPUGQ/New-Alexa-Skills-Kit-Template:-Step-by-Step-Guide-to-Build-a-Fact-Skill
There are a couple of courses available on Udemy as well.
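To give a feel for how little code a skill like this needs, here is a minimal sketch of an intent handler using the Node.js/TypeScript ASK SDK (ask-sdk-core). The intent name SongIntent and the song lookup are hypothetical placeholders:

import * as Alexa from 'ask-sdk-core';

// Hypothetical intent that answers "what song is Number One", etc.
const SongIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SongIntent';
  },
  handle(handlerInput) {
    // Placeholder lookup; a real skill would read the slot value
    // and fetch the matching song title.
    const songTitle = 'Some Song Title';
    // An <audio> clip could also be embedded in the speech here,
    // subject to the SSML audio length limit mentioned above.
    return handlerInput.responseBuilder
      .speak(`The number one song is ${songTitle}`)
      .getResponse();
  },
};

// Lambda entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(SongIntentHandler)
  .lambda();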
Since this question is referenced as related by some current alexa-skill questions, I'd like to give some updates on the different points where Amazon has improved the Alexa environment in the 5 years since the initial answers:
Do I need to get it certified? Besides publishing a skill to all Alexa users, there are some other possibilities. You could add further users to your account (but be aware that they then see all your skills and, depending on their roles, might also make changes). Another option for skill-level access is beta testing, but this is very limited. The last option is Alexa for Business, where a skill can be distributed to the devices of an organization - this is quite complex, but it offers additional context and the option to limit accessibility of the skill to just the organization.
Is there an issue with playing a song? Besides integrating the audio in SSML, you have the AudioPlayer directive, but be aware that while your audio length then has no limits, you leave the skill session. With Alexa Presentation Language Audio (APL-A) you keep the dialog session and have more audio capabilities than with SSML, but you still face length limits. Staying inside the skill with no audio length limits is possible using the APL (Video) Player component at size 0, but this limits your skill to devices with screens.
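For reference, a minimal sketch of issuing the AudioPlayer directive with ask-sdk-core; the intent name, stream URL, and token are placeholders:

import * as Alexa from 'ask-sdk-core';

const PlaySongHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'PlaySongIntent';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      // REPLACE_ALL clears the queue; the URL must be HTTPS; the token
      // identifies this stream in later playback events. Sending this
      // directive ends the interactive skill session, as noted above.
      .addAudioPlayerPlayDirective(
        'REPLACE_ALL',
        'https://example.com/song.mp3', // placeholder stream URL
        'song-token-1',                 // placeholder token
        0                               // offset in milliseconds
      )
      .getResponse();
  },
};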
I don't own a device, can I test it well enough without owning one? The previous answer is no longer valid, since echosim.io has been offline since April 5, 2021. But nowadays the development console has a very good simulator. Additionally, you can use a local simulator for testing with Visual Studio Code & the ASK Toolkit.
Is the Alexa skill API easy so I can get this done rather quickly, or is it difficult to get started? In the last few years, Amazon has extended the options for how to build & host a skill. With Alexa-hosted skills you do not need to care about AWS, or about connecting the Alexa cloud to AWS or your own hosting solution, and you can make use of all skill features. If you need simpler logic, you could use Alexa Blueprints, which cover the logic while you just provide the content (if you find a blueprint matching the logic you need) - by the way, this is also an additional option for the certification question, since a blueprint is normally just for your account, and you can share your blueprint instance with others, too.
I want to implement a feature where you can scan a real-world image with your phone, generate a feature code from the image, and then upload it to a cloud service. If the cloud service's database contains this code, you can download something related to the image. My main problem: I need a system or cloud service that can identify the images. I don't want to do too much of this myself, so is there an existing cloud service that supports this? Free or paid are both fine.
Microsoft has recently launched a new set of machine-learning APIs called "Project Oxford" that includes functionality for face detection and recognition, speech recognition and synthesis, vision, and natural-language understanding.
Face APIs provide state-of-the-art algorithms to process face images, such as face detection with gender and age prediction, recognition, alignment, and other application-level features. For more information, see Project Oxford at www.projectoxford.ai/face.
Related links:
http://azure.microsoft.com/en-in/marketplace/partners/faceapis/faceapis/
http://www.codeproject.com/Articles/989752/Integrate-Windows-Azure-Face-APIs-in-a-Cplusplus-a
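As a rough illustration of how these APIs are called over REST, here is a sketch in TypeScript. The endpoint, region, and key handling are assumptions based on the Face detection API (Project Oxford was later folded into Azure Cognitive Services), so check the current documentation:

// Minimal sketch: detect faces in an image by URL.
// Assumptions: a Cognitive Services key and a westus-region endpoint.
const endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect";

async function detectFaces(imageUrl: string, key: string) {
  const res = await fetch(`${endpoint}?returnFaceAttributes=age,gender`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": key, // your API key
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  if (!res.ok) throw new Error(`Face API error: ${res.status}`);
  return res.json(); // array of detected faces with attributes
}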
I'm looking to learn about running my own Google Wave server. There are videos on how to set it up and get it running on the command line, but my question is: okay, where do you go from there? How do you take this service that is running on the command line and apply it to the web? Is there documentation on doing just that?
I have looked at the embedded API, but I do not think that's what I want. I'd also love for the frontend to be built in PHP; would anyone have any idea how to make PHP communicate with Wave?
Thanks,
Matt Mueller
Okay, y'all. I emailed a few of the key Google Wave developers, and surprisingly one of them responded! Here's what he said:
"Thanks for contacting me.
Unfortunately there's still a big gap
between the code we have opened so far
and building a UI. The conversation
model describes how to interpret a
wave as a conversation but we have yet
to open up the code that does that (we
will though!). So it would be a big
challenge at the moment."
So we can only wait, I suppose!