Is it possible to automate the process of educating Watson? - ibm-watson

We want to make the most of Watson in our project. For this purpose we have to educate Watson with all project documents, user manuals, etc., and then possibly use the Watson Conversation API. But this seems to be a manual process of creating entities, intents, and then dialogs. Is there a way Watson can identify potential entities and intents on its own, or some way of automating or fast-tracking the process of identifying entities and intents and creating dialogs?
Can RPA (Robotic Process Automation) tools such as WinAuto or BluePrism be used for this purpose? Please suggest.
Thanks.

Depending on what you're seeking to do, Watson Conversation Service may not be the right answer for your use case.
What you describe sounds more like a case for Watson Discovery Service, possibly combined with Watson Knowledge Studio.
You can read more about it here: https://www.ibm.com/blogs/watson/2016/12/watson-discovery-service-understand-data-scale-less-effort/?spMailingID=27478889&spUserID=MTY4Mzc0OTEyOTIxS0&spJobID=960800840&spReportId=OTYwODAwODQwS0
And secondly on the integration:
https://www.ibm.com/watson/developercloud/doc/wks/wks_mapublish.shtml

The API documentation states that methods are available to create all artefacts (workspaces, entities, intents, dialog nodes, etc.):
https://www.ibm.com/watson/developercloud/conversation/api/v1/?curl#update_workspace
But the SDKs don't support this yet (at least the Java SDK doesn't).
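As a workaround until the SDKs catch up, the REST endpoint can be called directly. Below is a minimal sketch, assuming the documented v1 "create workspace" call with basic-auth service credentials; the version date, workspace name, and the sample intent/entity are placeholders, and in a real pipeline the JSON body would be generated from your documents rather than hard-coded.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Sketch: create a Conversation workspace with one intent and one entity
    // through the raw REST API instead of the SDK. Credentials, version date,
    // and the intent/entity content are placeholders.
    public class CreateWorkspace {
        public static void main(String[] args) throws Exception {
            String user = System.getenv("CONVERSATION_USERNAME");   // service credentials
            String pass = System.getenv("CONVERSATION_PASSWORD");
            String auth = Base64.getEncoder()
                    .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));

            String body = "{"
                    + "\"name\": \"project-docs\","
                    + "\"intents\": [{\"intent\": \"reset_password\","
                    + "  \"examples\": [{\"text\": \"How do I reset my password?\"}]}],"
                    + "\"entities\": [{\"entity\": \"manual\","
                    + "  \"values\": [{\"value\": \"user manual\"}]}]"
                    + "}";

            URL url = new URL("https://gateway.watsonplatform.net/conversation/api"
                    + "/v1/workspaces?version=2017-05-26");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Content-Type", "application/json");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            // A 201 response means the workspace (and its intents/entities) was created.
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

The heavy lifting (extracting candidate intents and entities from the manuals) still has to happen upstream, which is where Discovery or Knowledge Studio fits in.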

Related

IBM Visual Recognition: How do I back up a custom classifier?

The IBM Visual Recognition classifier is simple to use and works well. However, custom classifier creation is expensive ($0.10/image) and time-consuming. Accidentally deleting a custom classifier puts any workflow using that classifier at risk. There is no obvious way in the API or dashboard to download, duplicate, or lock a custom classifier. This is a concern for production use.
How can I back up a custom classifier created using IBM Watson Visual Recognition? This question went unanswered on IBM's developer forum and I am hoping someone from IBM can provide guidance here.
Thank you!
There is no API method to back up or download a custom classifier. However, if you still have the training data, you can train a new classifier with the same data and get the same performance again, just with a different classifier_id. This would incur additional expense, as you noted.
We recognize the usefulness of this idea as a feature and will take it into consideration for development.
In the meantime, if a classifier is accidentally deleted or changed, please file a Bluemix support ticket; we may be able to help.
It's currently not possible to back up or download a custom classifier (the API only lets you create, update, and delete them).
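Since the only real "backup" today is the training data itself, one pragmatic approach is to keep the example zips under version control and script the re-creation. The sketch below posts them to the v3 "create classifier" endpoint; the endpoint URL, version date, form-field names, API key handling, class name, and file paths are assumptions based on the public documentation, not a supported backup mechanism.

    import java.io.BufferedReader;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Sketch: re-create a custom classifier from the original training zips.
    public class RecreateClassifier {
        public static void main(String[] args) throws IOException {
            String apiKey = System.getenv("VR_API_KEY");     // placeholder credential handling
            String boundary = "----vr-" + System.currentTimeMillis();
            URL url = new URL("https://gateway-a.watsonplatform.net/visual-recognition/api"
                    + "/v3/classifiers?version=2016-05-20&api_key=" + apiKey);

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

            try (DataOutputStream out = new DataOutputStream(conn.getOutputStream())) {
                writeField(out, boundary, "name", "my_classifier");                        // same name as before
                writeFile(out, boundary, "dogs_positive_examples", Paths.get("dogs.zip")); // one zip per class
                writeFile(out, boundary, "negative_examples", Paths.get("not_dogs.zip"));
                out.writeBytes("--" + boundary + "--\r\n");
            }

            System.out.println("HTTP " + conn.getResponseCode());
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                in.lines().forEach(System.out::println);   // response includes the new classifier_id
            }
        }

        private static void writeField(DataOutputStream out, String boundary,
                                       String name, String value) throws IOException {
            out.writeBytes("--" + boundary + "\r\nContent-Disposition: form-data; name=\""
                    + name + "\"\r\n\r\n" + value + "\r\n");
        }

        private static void writeFile(DataOutputStream out, String boundary,
                                      String name, Path zip) throws IOException {
            out.writeBytes("--" + boundary + "\r\nContent-Disposition: form-data; name=\"" + name
                    + "\"; filename=\"" + zip.getFileName() + "\"\r\nContent-Type: application/zip\r\n\r\n");
            out.write(Files.readAllBytes(zip));
            out.writeBytes("\r\n");
        }
    }

Remember that the re-created classifier gets a new classifier_id, so any workflow referencing the old id has to be updated.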

IBM Watson service for identifying a particular characteristic such as "helpfulness" in a person's tweets or Facebook posts?

I am currently exploring three services for identifying whether a person's tweets or Facebook posts are helpful or not:
Personality Insights
Natural Language Understanding
Discovery
Will I need to write my own wrapper around these services to identify the helpfulness characteristic, or is there another way to just query and get a result?
Can anyone please advise which service I should use for this task?
Thanks
As Neil says, it all depends on how you define helpfulness.
Discovery:
If you want to use Discovery, you need some corpus to get the data from, and you can narrow down to the data you want with filters. Discovery uses data analysis combined with cognitive intuition to take your unstructured data and enrich it so you can discover the information you need.
Personality:
If you want to use Personality Insights, it helps you understand personality characteristics, needs, and values in written text. The service uses linguistic analytics to infer individuals' intrinsic personality characteristics, including Big Five, Needs, and Values, from digital communications such as email, text messages, tweets, and forum posts.
Watson Knowledge Studio:
If you want to work with models for tweets, you can use WKS (Watson Knowledge Studio). This service provides easy-to-use tools for annotating unstructured domain literature and uses those annotations to create a custom machine-learning model that understands the language of the domain. The accuracy of the model improves through iterative testing, ultimately resulting in an algorithm that can learn from the patterns it sees and recognize those patterns in large collections of new documents. For example, if you want a model that understands cars, you simply give WKS some annotated examples.
It all depends on how you define helpfulness: whether it is helpfulness in general, helpfulness in answering a question, etc.
For Personality Insights, have a look at https://www.ibm.com/watson/developercloud/doc/personality-insights/models.html which has all the traits, as well as what they mean. The closest trait to helpfulness is probably Conscientiousness.
Neil
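To make the Personality Insights suggestion concrete, here is a rough, hypothetical sketch of requesting a profile for a batch of a user's posts. The endpoint, version date, and credential handling are assumptions based on the public docs; in practice the input needs a substantial amount of text (a few hundred posts), and the conscientiousness trait mentioned above appears in the "personality" section of the returned JSON.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Sketch: send a batch of posts to Personality Insights and print the profile JSON.
    public class ProfilePosts {
        public static void main(String[] args) throws Exception {
            String user = System.getenv("PI_USERNAME");   // placeholder service credentials
            String pass = System.getenv("PI_PASSWORD");
            String auth = Base64.getEncoder()
                    .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));

            // In practice, concatenate a few hundred of the user's tweets/posts here;
            // the service needs far more text than this toy example to produce a profile.
            String text = String.join(" ",
                    "Happy to help anyone getting started with the API.",
                    "Here is a step-by-step guide I wrote for the team.",
                    "Let me know if you get stuck and I will walk you through it.");

            URL url = new URL("https://gateway.watsonplatform.net/personality-insights/api"
                    + "/v3/profile?version=2016-10-20");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Content-Type", "text/plain");
            conn.setRequestProperty("Accept", "application/json");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(text.getBytes(StandardCharsets.UTF_8));
            }
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                // Look for the conscientiousness trait and its percentile in the output.
                in.lines().forEach(System.out::println);
            }
        }
    }

You would still need a small wrapper to map the trait score onto your own "helpful / not helpful" threshold, since the service itself has no notion of helpfulness.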

How to build a cloud application and keep portability intact?

Please check the answer and comments on my previous question in order to get a better understanding of my situation. If I use Google Datastore on App Engine, my application will be tightly coupled to it and hence lose portability.
I'm working on Android and will be using a backend that resides in the cloud, so I need client-cloud communication. How do I build an application while maintaining portability? What design patterns and architectural patterns should I be using?
Should I use a broker pattern? I'm perplexed.
Google App Engine provides JPA-based interfaces for its datastore. As long as you are writing your code against the JPA APIs, it will be easy to port it to other datastores (Hibernate, for example, also implements JPA).
I would ensure that the vendor-specific code doesn't percolate beyond a thin layer that sits just above the vendor's APIs. That way, when I have to move to a different vendor, I know exactly which part of the code would be impacted.
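A hypothetical sketch of that "thin layer" idea: the rest of the application talks only to a small repository class built on standard JPA, so swapping the App Engine datastore for, say, Hibernate on MySQL means changing the persistence configuration and, at most, this one class. The entity and class names below are illustrative, not from the original question.

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Persistence;

    // Plain JPA entity: nothing here mentions App Engine or any other vendor.
    @Entity
    class Note {
        @Id @GeneratedValue
        Long id;
        String text;
    }

    // The "thin layer": the only place that knows which persistence unit
    // (and therefore which provider/vendor) is in use.
    class NoteRepository {
        private final EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("app");   // provider chosen in persistence.xml

        public void save(Note note) {
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                em.persist(note);
                em.getTransaction().commit();
            } finally {
                em.close();
            }
        }

        public Note find(Long id) {
            EntityManager em = emf.createEntityManager();
            try {
                return em.find(Note.class, id);
            } finally {
                em.close();
            }
        }
    }

The caveat in the next answer still applies: a schema that leans on Datastore-specific behaviour (indexes, entity groups) won't port this cleanly no matter how standard the API calls are.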
If you really want to avoid portability issues, use Google Cloud SQL instead. If you use the Datastore, then unless it has a trivial structure you will not be able to port it trivially, even if you use pure JPA/JDO, because those were really not meant for NoSQL. Google has its particularities with indexes, etc.
Of course, SQL is more expensive and has size limits.
In order to maintain portability for my application, I've chosen Restlet, which offers RESTful web APIs, over Endpoints. Restlet handles the communication between server and client.
Moreover, it does not lock my application into a particular vendor.
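For reference, the kind of minimal Restlet setup this refers to looks roughly like the following, adapted from Restlet's first-steps style: a single resource served over plain HTTP that any client (including Android) can call, with nothing in it tied to a particular cloud vendor. The port and resource content are placeholders.

    import org.restlet.Server;
    import org.restlet.data.Protocol;
    import org.restlet.resource.Get;
    import org.restlet.resource.ServerResource;

    // One RESTful resource exposed by the backend.
    public class HelloResource extends ServerResource {

        @Get("txt")
        public String represent() {
            return "hello from the backend";
        }

        public static void main(String[] args) throws Exception {
            // Standalone HTTP server on port 8182 serving this resource.
            new Server(Protocol.HTTP, 8182, HelloResource.class).start();
        }
    }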

Core understanding of what Salesforce is

Firstly, I apologise if this is a ridiculously simple question to answer, but it has been bothering me for a while.
I am trying to understand what Salesforce actually is, in technical terms. I have read the website's documentation and the Wikipedia page, but I am trying to understand what's behind all the fluffy terminology.
My understanding is that Salesforce is a cloud-based database which stores a very high volume of information, and that all Salesforce apps consist of scripts that query this database and model the data in different ways depending on the intended application. Is this correct?
Thanks!
Software as a Service (SaaS)
With traditional software, to get a program you need to download it, install it, configure it, and so on. If your system has a lot of users, it's very hard to configure and support every single-user installation.
Imagine that you improve the application and release a new version, for example. You need to update every instance.
With the SaaS model you have a shared web application that does the same thing as the old downloadable one, but it's much easier to support, because ideally there is just one instance of it.
Salesforce is a company that provides its own system via the SaaS model, but not only that: it is also a platform for developing new applications.

How to integrate various applications and provide a common interface to access their data?

We have a few different applications, each storing its own data, and we need one common service that provides access to all of this data.
By the applications I mean, for example, Atlassian Jira, Confluence, SVN, Git, LDAP, a few internal MySQL databases, etc. Some of them offer a SOAP API, a REST API, or various command-line clients; for some you have to access the database directly to get the data.
What we want is a common REST API interface to access all possible data sources. Of course, we have to solve authentication and authorization, caching, and many more tasks.
It seems that something like an ESB (Enterprise Service Bus) and EIP (Enterprise Integration Patterns) is the answer to our needs.
For a start, we are playing with and actually digging into Apache Camel. It's not a full EIP stack, it's "just" an integration framework, but I guess it's good enough for us right now.
My question is: what do you think about this solution? Are we on the right track?
Thanks!
Camel has a lot of connectors, so that would be a great start.
If you are afraid it is too thin, then take a look at Apache ServiceMix, which provides a deployment (OSGi) container for Camel routes (and other things). Camel comes bundled within the standard ServiceMix release out of the box.
The hard task is probably to design the generic API good enough to cover your use cases.
A Git repo and a database are very different; can this really be made generic? Do you only want to access "text" data, or something else?
I like the approach with Camel nonetheless, since it's rather generic and flexible in these kinds of scenarios, which is what you will need.
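To make the Camel suggestion concrete, here is a hypothetical route sketch: one HTTP endpoint in front, delegating to connector-specific routes behind it (an SQL query here, but Jira, LDAP, Git, etc. would each get their own route using the corresponding Camel component). The component URIs and the "projectsDb" datasource are placeholders for illustration, and the JSON marshalling assumes a JSON data format library (e.g. camel-jackson) is on the classpath.

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.main.Main;

    // Sketch of a "common REST interface" built from Camel routes.
    public class IntegrationRoutes extends RouteBuilder {

        @Override
        public void configure() {
            // Common entry point for clients.
            from("jetty:http://0.0.0.0:8080/api/projects")
                // Delegate to the connector-specific route.
                .to("direct:projectsFromDb");

            // One backing system: an internal MySQL database.
            // The "projectsDb" DataSource must be bound in the Camel registry.
            from("direct:projectsFromDb")
                .to("sql:select id, name from projects?dataSource=#projectsDb")
                .marshal().json();
        }

        public static void main(String[] args) throws Exception {
            Main main = new Main();
            main.addRouteBuilder(new IntegrationRoutes());
            main.run(args);
        }
    }

Cross-cutting concerns like authentication, authorization, and caching would sit either in front of the jetty endpoint or as additional processors on each route.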
