What are the AI features of Watson Assistant? - artificial-intelligence

It is hard to tell from IBM's documentation, but which features of Watson Assistant are actually based on AI?

The features based on NLU (Natural Language Understanding), known in the AI field as NLP (Natural Language Processing), are the AI-based ones. Using this service you can implement a chatbot or other AI-based applications.
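As a concrete illustration, here is a minimal sketch of exercising that NLU layer through the ibm-watson Python SDK; the API key, service URL and assistant ID are placeholders for your own instance:

    from ibm_watson import AssistantV2
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholders: substitute your own credentials and assistant ID.
    authenticator = IAMAuthenticator('YOUR_API_KEY')
    assistant = AssistantV2(version='2021-06-14', authenticator=authenticator)
    assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

    session = assistant.create_session(assistant_id='YOUR_ASSISTANT_ID').get_result()
    response = assistant.message(
        assistant_id='YOUR_ASSISTANT_ID',
        session_id=session['session_id'],
        input={'message_type': 'text', 'text': 'What are your opening hours?'},
    ).get_result()

    # The NLU layer returns the recognized intents and entities with confidence scores.
    print(response['output']['intents'])
    print(response['output']['entities'])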

Related

Amazon Alexa Programming Language

Does anyone know the actual programming language(s) used to develop Amazon Alexa itself, not the skills? I have been searching online, but the answers I'm seeing are all related to developing Alexa skills.
I suppose there is no single language.
First of all, there is all the natural language processing (NLP) and machine learning work.
Most probably it is delivered by this team: https://www.amazon.jobs/en/landing_pages/amazon-aachen-development-center. See the job descriptions there; they are looking for experience in "Java, C++ or Python".
You can also look at the job descriptions of the Cambridge team: https://amazon.jobs/de/landing_pages/Cambridge. There are also some short videos.
The overall Alexa Information team (https://www.amazon.jobs/en/teams/alexa-information) is looking for similar languages; see the job descriptions there.

Understanding speech recognition of Amazon Alexa and Google Assistant

In my experience I have observed the following outcomes, which may not be correct; I would like your opinions on them.
When I design an interaction model in Alexa, Alexa's speech recognition seems to adapt to changes in the language model.
In the case of Google Assistant (I am using Dialogflow here), changes to the language model do not seem to help with speech recognition. It appears to use Google's first guess when resolving speech to text.
If you use entities, then those are passed on to the Assistant platform to help with speech biasing.
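For example, with the google-cloud-dialogflow Python client you can define a developer entity whose values then become known vocabulary that the platform can bias recognition toward, which is the behaviour described above. The project ID and entity values here are made-up placeholders:

    from google.cloud import dialogflow

    PROJECT_ID = 'my-agent-project'  # hypothetical; use your own agent's project ID

    client = dialogflow.EntityTypesClient()
    parent = f'projects/{PROJECT_ID}/agent'

    # A custom entity type; its values and synonyms give the platform
    # a vocabulary it can bias speech recognition toward.
    drink = dialogflow.EntityType(
        display_name='drink',
        kind=dialogflow.EntityType.Kind.KIND_MAP,
        entities=[
            dialogflow.EntityType.Entity(value='latte', synonyms=['latte', 'caffe latte']),
            dialogflow.EntityType.Entity(value='mocha', synonyms=['mocha']),
        ],
    )
    client.create_entity_type(parent=parent, entity_type=drink)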

Using a Watson knowledge-studio model directly

Has anyone ever tried to use the model generated by Watson Knowledge Studio outside of the AlchemyLanguage API?
Or do I always need to upload the model to Knowledge Studio and from then on talk to the API?
Though I have always used my Knowledge Studio-based models with Natural Language Understanding, I believe it is possible to deploy these models to Discovery and Watson Explorer as well for text extraction.
Check this documentation for details on how to deploy the model to the different components.
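For the Natural Language Understanding route, a minimal sketch with the ibm-watson Python SDK looks like the following; the credentials and the Knowledge Studio model ID are placeholders for your own deployment:

    from ibm_watson import NaturalLanguageUnderstandingV1
    from ibm_watson.natural_language_understanding_v1 import (
        Features, EntitiesOptions, RelationsOptions)
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    authenticator = IAMAuthenticator('YOUR_API_KEY')
    nlu = NaturalLanguageUnderstandingV1(version='2021-08-01', authenticator=authenticator)
    nlu.set_service_url('YOUR_SERVICE_URL')

    # Point entity and relation extraction at the custom model deployed from
    # Watson Knowledge Studio instead of the built-in one.
    result = nlu.analyze(
        text='Some text from your domain.',
        features=Features(
            entities=EntitiesOptions(model='YOUR_WKS_MODEL_ID'),
            relations=RelationsOptions(model='YOUR_WKS_MODEL_ID'),
        ),
    ).get_result()
    print(result)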

AI assistant programming guidance needed

Can anyone tell me or guide me in programming an AI assistant, something like Jarvis or Google Assistant, which has both online and offline voice recognition capability?
I am new to AI, so I have tried many tutorials but am still not able to understand or build one. I also don't know where or how to start; I really need help.
To be frank, natural language processing is one of the most complex and technically difficult fields in computer science. So if you want to make your own version of Google Assistant, it would help to have an advanced degree in AI, a million dollars in research funding, and your own team of engineers.
That being said, a chatbot makes for a really fun hobby project. For now, try not to worry about online and offline voice recognition capability. Make a text-based chatbot that handles basic conversation (see the sketch below). You can always add more capability later, and you'll probably have your bearings by then and know what to do.
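A minimal sketch of such a text-based chatbot in plain Python; the keywords and canned replies are just illustrative:

    # A tiny rule-based chatbot: match keywords, return canned replies.
    RESPONSES = {
        'hello': 'Hi there! How can I help?',
        'weather': "I can't check the weather yet, but that would be a fun feature to add.",
        'bye': 'Goodbye!',
    }

    def reply(message: str) -> str:
        text = message.lower()
        for keyword, answer in RESPONSES.items():
            if keyword in text:
                return answer
        return "Sorry, I don't understand that yet."

    if __name__ == '__main__':
        while True:
            user = input('You: ')
            print('Bot:', reply(user))
            if 'bye' in user.lower():
                break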
A good place to start may be Microsoft's Bot Framework. I've never used it myself, but its goal is to take some of the technologies behind the likes of Google Assistant and Jarvis and make them available to the everyday developer. It seems to fit your use case, and as a Microsoft product it will (probably) have some documentation or tutorials to get you started.
There are a couple of options to get started.
First off, try to build a bot using C# as a native Windows application. Microsoft has great documentation for this, and there are a couple of great tutorials on YouTube.
You can also try api.ai to build a bot. It's a bit less hands-on, but a good way to get started.
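As an illustration, querying an api.ai (now Dialogflow) agent from Python can look like this sketch; the project and session IDs are hypothetical placeholders:

    from google.cloud import dialogflow

    def detect_intent(project_id: str, session_id: str, text: str) -> str:
        """Send one text message to a Dialogflow agent and return its reply."""
        session_client = dialogflow.SessionsClient()
        session = session_client.session_path(project_id, session_id)
        query_input = dialogflow.QueryInput(
            text=dialogflow.TextInput(text=text, language_code='en-US'))
        response = session_client.detect_intent(session=session, query_input=query_input)
        return response.query_result.fulfillment_text

    print(detect_intent('my-agent-project', 'session-1', 'hello'))  # hypothetical IDs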
To really try doing everything yourself, learn a bit of machine learning first. Google has great YouTube tutorials on the topic (a small worked example follows).
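For a first taste of the machine-learning side, here is a toy intent classifier built with scikit-learn; the training phrases and intent labels are made up for illustration:

    # A toy intent classifier: TF-IDF features plus logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    phrases = [
        'hello there', 'hi, how are you', 'good morning',
        'what is the weather today', 'will it rain tomorrow',
        'goodbye', 'see you later',
    ]
    intents = ['greet', 'greet', 'greet', 'weather', 'weather', 'bye', 'bye']

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(phrases, intents)

    print(model.predict(['hey, good morning']))  # -> ['greet']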
Try:
C# bot on Windows
Google machine learning
The best choice to start with is api.ai. It is simple to learn and integrate, and it has a good response time. I have tried most of the chatbot engines, applying them to natural language over the phone to build voice assistants (Voximal). An important factor in this case is the response time: if you plan to integrate a lot of complex data, the response time will increase, and remember that you need to add the duration of the SpeechToText and TextToSpeech steps too...
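To make the latency point concrete, here is a back-of-the-envelope sketch; every number is an assumption, not a measurement:

    # Illustrative latency budget for one voice-assistant turn (assumed values, seconds).
    speech_to_text = 0.8     # transcribing the caller's utterance
    intent_resolution = 0.3  # the bot engine resolving the intent
    backend_fetch = 1.5      # querying external data sources
    text_to_speech = 0.7     # synthesizing the spoken reply

    total = speech_to_text + intent_resolution + backend_fetch + text_to_speech
    print(f'The caller waits about {total:.1f}s per turn')  # ~3.3s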
You can use my project as an inspiration: it is a personal A.I. assistant that runs on Windows 10/11 (maybe even 8, not tested). For natural language processing it uses tokenization and content analysis with association against set parameters, and it uses both offline and online speech recognition. It can search content on Amazon, Google, Google Images, Google News, Netflix, Wikipedia and eBay. It can open and close multiple applications, and it can also navigate the Windows settings menu to any page or sub-section.
The project is here: https://github.com/CSharpTeoMan911/Eva

Guidelines for LBS Mobile application development

I need some help! I am planning to develop an LBS (location-based services) mobile application which finds the nearest things based on GPS data from the phone.
1. What are the best free and (preferably) open source technologies for development?
2. What programming language should be used to develop such an application?
3. What points should be considered?
I need a general overview of the requirements for planning; I am interested in a general understanding of the data, tools, and frameworks required to accomplish the job.
The future-proof way to write your application is to use web technologies.
iPhone and Android devices already support the W3C Geolocation API, with more on the way.
I recommend you take a look at the sample at http://geo.webvm.net/ to get started.
On Symbian phones, you can access location information via C++, Java (when JSR-256 is implemented) and probably Python.
You might also want to look into the Qt runtime, as that is the new technology to use for Symbian development.
To begin with Symbian application development, start with the Foundation's developer wiki.
Both Stack Overflow and Forum Nokia contain information about how to use JSR-256.
Relevant plug: there is a whole chapter on LBS in Quick Recipes on Symbian OS.
