I am looking for documentation or a GitHub repo to build an HTML5 (non-gaming) skill for Alexa.
The Web API for Games is currently the only Alexa Skills Kit API that exposes HTML5 on compatible devices, and its documentation specifically says it is approved only for games.
https://developer.amazon.com/en-US/docs/alexa/web-api-for-games/understand-alexa-web-api-for-games.html#requirements-for-the-skill-and-web-app
Related
I don't have much experience with Alexa Voice Service, but I hope someone can help me with this scenario.
In the past I have only used the Raspberry Pi example for Alexa Voice Service.
That example had the limitation that Amazon Music was not supported.
Other third-party apps that use Alexa Voice Service (e.g. Reverb) are restricted as well and can't use Amazon Music.
As far as I know, this limitation exists because of licensing issues.
I now have a use case to develop either a desktop app or a web app that should interact with Alexa but also be able to play Amazon Music.
My question: does anyone have experience using Alexa Voice Service AND getting access to Amazon Music? (Maybe there is an option to pay for a licence, etc.)
Or is Alexa Voice Service restricted from Amazon Music in general?
Kind Regards
Stefan
Alexa Voice Service can certainly be used with Amazon Music (and several other music providers). You need to get approval from Amazon for the music service you want to use. You can find more details by going to your AVS dashboard and looking at the Entertainment tab for the product on which you want to enable music.
We have implemented our Alexa skill as a Node.js application, but the application is hosted on a subdomain such as https://test.example.com/.
Does Alexa support subdomains?
It shouldn't matter that your endpoint is on a subdomain, as long as it meets the requirements Alexa specifies (HTTPS on port 443 with a certificate that is valid for that domain).
For more information, check the Alexa documentation.
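If it helps, here's a minimal sketch of a self-hosted skill endpoint served from a subdomain, assuming the ask-sdk-core, ask-sdk-express-adapter and express packages and a reverse proxy that terminates TLS for https://test.example.com (the path and port below are just placeholders):

```
// Minimal self-hosted Alexa skill endpoint, reachable e.g. as
// https://test.example.com/alexa once a reverse proxy with a valid
// certificate for the subdomain forwards traffic to this process.
import express from 'express';
import { SkillBuilders, RequestHandler } from 'ask-sdk-core';
import { ExpressAdapter } from 'ask-sdk-express-adapter';

const LaunchRequestHandler: RequestHandler = {
  canHandle: (input) => input.requestEnvelope.request.type === 'LaunchRequest',
  handle: (input) =>
    input.responseBuilder.speak('Hello from a subdomain!').getResponse(),
};

const skill = SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler)
  .create();

// Verify request signatures and timestamps, as Alexa requires for
// self-hosted HTTPS endpoints. Don't put a JSON body parser in front of
// the adapter; it needs the raw body to verify the signature.
const adapter = new ExpressAdapter(skill, true, true);

const app = express();
app.post('/alexa', adapter.getRequestHandlers());
app.listen(3000); // proxied as https://test.example.com/alexa
```

The skill's endpoint URI in the developer console then simply points at that subdomain URL.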
Is there a Google Assistant API guide or tutorial? I cannot find anything relevant with these keywords. There seems to be an Android app integration guide, but I want to integrate with my cloud service, not an Android app.
I see that IFTTT has connected Google Assistant to several services, so I want to add some intents for my custom service.
I have built an Alexa app using the Alexa Skills Kit to handle my custom intents, and I want to find something similar in the Google Assistant developer playground, but I have no clue.
Thanks!
The Google Assistant API was officially launched by Google for Windows, Mac, and Linux, which means you can get Google Assistant on those platforms.
If you wish to create voice applications for Google Assistant, which are called Google Actions, you will have to rely on the developer's guide posted here.
There is also an introductory course on Udemy covering the same material.
I personally have used Dialogflow, with Firebase for the backend, and have published a few apps to the store.
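For a sense of what that looks like, here is a minimal fulfillment sketch, assuming the actions-on-google and firebase-functions packages and a Dialogflow agent with a custom intent named play_music (that intent name is made up for illustration):

```
// Dialogflow fulfillment for a Google Assistant action, deployed as a
// Firebase Cloud Function.
import { dialogflow } from 'actions-on-google';
import * as functions from 'firebase-functions';

const app = dialogflow();

// Handle the default welcome intent that Dialogflow creates for every agent.
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Hi! What would you like me to do?');
});

// A custom intent you would define yourself in the Dialogflow console.
app.intent('play_music', (conv) => {
  conv.close('Okay, starting the music.');
});

// Point the Dialogflow fulfillment webhook URL at this deployed function.
export const fulfillment = functions.https.onRequest(app);
```

You then enable webhook fulfillment for those intents in the Dialogflow console and set the fulfillment URL to the deployed function.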
Google Assistant will open its SDK to developers this December.
There's been a quite a lot of development on supporting the developer ecosystem, including the release of the Google Assistant SDK, app templates, and the ability to host and edit your integration via Firebase Functions.
For some code samples, see:
Conversational Components for Google Assistant
DialogFlow (previously Api.ai) v2 Samples
I'm building a voice-activated AI system for my home. À la Echo, I want to be able to start streaming music on my Android host when I say "play some rock". I can handle the AI part, but I need a web API endpoint to start streaming music.
Here's the Web API Endpoint Reference. These Web API endpoints give external applications access to the Spotify catalog and user data. There are also some Web API Code Examples & Libraries, and here's Working With Playlists.
Also have a look at this repo, a Go wrapper for Spotify's Web API that aims to support every task listed in the Web API Endpoint Reference.
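For the "play some rock" case specifically, here's a rough sketch of starting playback through the Web API's player endpoint, assuming you already have an OAuth access token with the user-modify-playback-state scope and an active Spotify Connect device (the playlist URI and token variable are placeholders):

```
// Start playback on the user's active device via PUT /v1/me/player/play.
const ACCESS_TOKEN = process.env.SPOTIFY_TOKEN!; // however you obtain your token

async function playSomeRock(): Promise<void> {
  const response = await fetch('https://api.spotify.com/v1/me/player/play', {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      'Content-Type': 'application/json',
    },
    // Any playlist/album/artist context URI works; substitute your own.
    body: JSON.stringify({ context_uri: 'spotify:playlist:<your-rock-playlist-id>' }),
  });

  if (!response.ok) {
    throw new Error(`Spotify playback request failed: ${response.status}`);
  }
}

playSomeRock().catch(console.error);
```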
Hope this helps!
The Mopidy music server can stream music from Spotify and offers lots of API options for clients.
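If you go that route, here's a rough sketch of driving Mopidy from your own code over its JSON-RPC HTTP API, assuming Mopidy is running locally with the HTTP and Spotify extensions enabled (the port is Mopidy's default and the track URI is a placeholder):

```
// Queue a track and start playback through Mopidy's JSON-RPC endpoint
// (http://localhost:6680/mopidy/rpc by default).
async function mopidyCall(method: string, params: object = {}): Promise<unknown> {
  const response = await fetch('http://localhost:6680/mopidy/rpc', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
  });
  const { result } = await response.json();
  return result;
}

async function playSomething(): Promise<void> {
  // Add any URI that Mopidy's backends understand, then start playback.
  await mopidyCall('core.tracklist.add', { uris: ['spotify:track:<some-track-id>'] });
  await mopidyCall('core.playback.play');
}

playSomething().catch(console.error);
```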
I'm planning to use Sencha 2.0 as my platform for mobile app development, and I'm planning to use speech recognition in the app. Is there a speech recognition API that will work well across both iOS and Android?
To my knowledge, the answer is no.
Most speech recognition apps on smartphones do the speech processing on servers. Google provides built-in speech recognition through the Speech Input API for Android. On the client, this API records the user's speech, sends it to a Google server for analysis, and returns the recognized text. Google provides this service to Android apps for free. Some folks have reverse engineered the speech recognition service Google provides for Chrome, if you want an idea of how it works.
Today, Apple's iOS does not include a comparable API. There is hope that in the future Apple will expose an API that lets third-party apps leverage the Siri servers, but today it does not. So building speech-enabled apps for the iPhone requires deploying or contracting speech recognition services. Nuance, iSpeech, and others offer iOS SDKs for speech recognition in mobile apps.
Others on Stack Overflow have discussed using PocketSphinx as a client-based speech recognition engine, but I have no experience with that.
Though I suppose it is possible for Sencha or PhoneGap to provide a common API for speech recognition, since there is no standard or free speech recognition solution for the iPhone, it seems unlikely that these frameworks will be able to solve this complex problem. Perhaps if Apple exposes Siri in its SDK, a client framework could provide a common solution for iPhone and Android.