We are trying to implement a natural language search function using the IBM Watson Concept Insights (CI) service. We want the user to be able to type a question in natural language and have the service return the appropriate document(s) from a CI corpus. We are using CI rather than the Watson QA service to avoid the need for training and to keep Watson infrastructure costs down (i.e. to avoid needing a dedicated instance of Watson for each corpus/use case).
We are able to build the necessary corpus through the CI API but we are not sure which APIs to use in what order to accomplish the most precise/accurate query possible.
Our initial thought was to:
Accept the user's natural language question and POST that text string to the "Identifies concepts in a piece of text" API (listed 6th from the bottom in the CI API Reference document) to get back a list of concepts related to the question.
Then do a GET using the “Performs a conceptual search within a corpus” API (listed 3rd from the bottom in the CI API Reference document) to get a list of related documents back from the corpus.
The first question - is this the right way to go about achieving our objective described in the first paragraph of this post? Should we be combining the CI APIs differently or using multiple Watson services together to achieve the objective?
If our initial approach is the right one, then we are finding that when we submit a simple question (e.g. “How can I repair MySQL database corruption”) to the “Identifies concepts in a piece of text” API we are not getting a comprehensive list of associated concepts back. For example:
curl -u userid:password -k -d "How can I repair MySQL database corruption" https://gateway.watsonplatform.net/concept-insights-beta/api/v1/graph/wikipedia/en-20120601?func=annotateText
returns:
[{"concept":"/graph/wikipedia/en-20120601/MySQL","coords":[[17,22]],"weight":0.85504603}]
Yet clearly there are other concepts associated with the example question (repair, corruption, database, etc.).
In another example we just submitted the text “repair” to the “Identifies concepts in a piece of text” API:
curl -u userid:password -k -d "repair" https://gateway.watsonplatform.net/concept-insights-beta/api/v1/graph/wikipedia/en-20120601?func=annotateText
and it returned:
[{"concept":"/graph/wikipedia/en-20120601/Repair","coords":[[0,6]],"weight":0.65392953}]
It seems that we should have gotten back the "Repair" concept from the first example also. Why would the API return the "Repair" concept when we submit "repair" alone but not when we submit the text "How can I repair MySQL database corruption", which also includes the word "repair"?
Please advise as to the best way to implement a natural language search function based on the Watson Concept Insights service (perhaps in combination with other services if appropriate).
Thank you very much for your question and my apologies for being so late in answering it.
The first question - is this the right way to go about achieving our objective described in the first paragraph of this post? Should we be combining the CI APIs differently or using multiple Watson services together to achieve the objective?
Doing the steps above would be a natural way to accomplish what you want to do. Please note, however, that the "annotate text" API currently uses exactly the same technology that the system uses for connecting documents in your corpus to concepts in the core knowledge graph, and as such it is "paragraph" oriented rather than individual-question oriented. To be more precise, extracting concepts from a smaller piece of text is generally more difficult than from a larger piece, because the larger piece carries more context that can be used to make the right choices. Given this observation, and given its paragraph focus, the annotate text API takes the more conservative route.
Having said that, the /v2 API that we now have does improve the speed and quality of the concept extraction technology, so you may have more success using it to extract topics from natural language questions. Here's what I would do/watch out for:
1) Clearly display to the user what CI extracted from the natural language input. Our APIs give you a way to retrieve a short abstract per concept, which can be used to explain to a user what a concept means; do use that (see the sketch after this list).
2) Give the user the ability to eliminate a concept from the extracted concept list (strike it out).
3) Since the concepts in Concept Insights currently correspond roughly to the notion of "topics", there is no way to deduce more abstract intent (for example, if the key to the meaning of a question is a verb or an adjective rather than a noun, Concept Insights would be a poor way to deduce it). Watson does have technology oriented towards question answering, as you pointed out (the natural language classifier being one component of that), so I would take a look at that.
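Putting 1) and 2) together, here is a minimal Node.js sketch of the flow. The annotateText endpoint is copied from the question above; the conceptual-search path, its parameters, and the corpus name are hypothetical placeholders, so check the current API reference (and prefer the /v2 equivalents):

const https = require('https');

function request(method, path, body) {
  return new Promise((resolve, reject) => {
    const req = https.request({
      host: 'gateway.watsonplatform.net',
      path: path,
      method: method,
      auth: 'userid:password'
    }, res => {
      let data = '';
      res.on('data', chunk => (data += chunk));
      res.on('end', () => resolve(JSON.parse(data)));
    });
    req.on('error', reject);
    if (body) req.write(body);
    req.end();
  });
}

async function conceptualSearch(question) {
  // Step 1: extract concepts from the natural language question.
  const annotations = await request('POST',
    '/concept-insights-beta/api/v1/graph/wikipedia/en-20120601?func=annotateText',
    question);

  // Step 2: display each concept (plus its abstract) to the user and let
  // them strike concepts out; here we simply keep everything.
  const kept = annotations.map(a => a.concept);

  // Step 3: run the conceptual search over the corpus. This path and its
  // 'ids' parameter are assumptions, not the documented API.
  return request('GET',
    '/concept-insights-beta/api/v1/corpora/mycorpus?func=semanticSearch&ids=' +
    encodeURIComponent(JSON.stringify(kept)));
}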
Yet clearly there are other concepts associated with the example question (repair, corruption, database, etc.).
The answer to this and the rest of the posted question is, in a sense, above: our intention was to provide a technology first for "larger text", which, as I explained, is an easier task. Between when this question was first posted and today we did introduce new annotation technology (/v2), so I would encourage the reader to see whether it performs a little better.
For the longer term, we do intend to give the user a formal way to specify context for a general application, so that the chances of extracting relevant concepts increase. We also plan to let the user specify custom concepts, as it has been observed that some topics of interest to users are impossible to match in our current design because they are not in Wikipedia.
I'm trying to implement a collaborative canvas on which many people can draw freehand or with specific shape tools.
The server has been developed in Node.js and the client in Angular 1 (and I am pretty new to both).
I must use a consensus algorithm so that it always shows the same state to all the users.
I'm seriously in trouble with it, since I cannot find a proper tutorial on its use. I have been looking at and studying Paxos implementations, but it seems like Raft is more widely used in practice.
Any suggestions? I would really appreciate it.
Writing a distributed system is not an easy task[1], so I'd recommend using an existing strongly consistent one instead of implementing one from scratch. The usual suspects are ZooKeeper, Consul, etcd, and Atomix/Copycat. Some of them offer Node.js clients:
https://github.com/alexguan/node-zookeeper-client
https://www.npmjs.com/package/consul
https://github.com/stianeikeland/node-etcd
I've personally never used any of them from Node.js though, so I won't comment on the maturity of the clients.
If you insist on implementing consensus on your own, then Raft should be easier to understand; the paper is surprisingly accessible: https://raft.github.io/raft.pdf. There are also some Node.js implementations, but again, I haven't used them, so it is hard to recommend any particular one. The Gaggle README contains an example, and Skiff has an integration test which documents its usage.
Taking a step back, I'm not sure that distributed consensus is what you need here. It seems like you have multiple clients and a single server, so you can probably use a centralized data store. The problem domain is not really that distributed either: shapes can be overlaid one on top of the other in the order they are received by the server, i.e. FIFO (imagine multiple people writing on the same whiteboard; the last one wins). The challenge is with concurrent modifications of existing shapes, but maybe you can fall back to last-change-wins or first-change-wins there, as sketched below.
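A minimal sketch of that centralized approach, assuming the 'ws' WebSocket package (npm install ws) and treating each drawing operation as an opaque message:

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const history = []; // the ordered log of drawing operations

wss.on('connection', ws => {
  // Replay the log so a new client converges to the same canvas.
  history.forEach(op => ws.send(op));

  ws.on('message', op => {
    history.push(op); // arrival order at the server is the canonical order
    wss.clients.forEach(client => {
      if (client.readyState === WebSocket.OPEN) client.send(op);
    });
  });
});

The server is the single source of truth: whatever order operations arrive in is the order every client sees, which is exactly the "last one wins" whiteboard behaviour.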
Another interesting avenue to explore here would be Conflict-free Replicated Data Types (CRDTs). Folks at GitHub used them to implement collaborative "pair" programming in Atom. See the Atom Teletype blog post; their implementation may also be useful, as collaborative editing seems to be exactly the problem you are trying to solve.
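To make the CRDT idea concrete, here is the simplest possible one, a grow-only set (G-Set). Real shape editing would need richer types (e.g. a last-writer-wins register per shape), but the convergence property is the same: merge is a set union, which is commutative, associative and idempotent, so replicas converge without coordination.

class GSet {
  constructor() { this.items = new Set(); }
  add(item) { this.items.add(item); }
  merge(other) { other.items.forEach(item => this.items.add(item)); }
}

const a = new GSet(); a.add('circle-1'); // replica on client A
const b = new GSet(); b.add('rect-7');   // replica on client B
a.merge(b); b.merge(a);
// Both replicas now hold {circle-1, rect-7}, no coordination needed.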
Hope this helps.
[1] Take a look at the Jepsen series https://jepsen.io/analyses where Kyle Kingsbury tests various failure conditions of distributed data stores.
Try reading Understanding Paxos. It's geared towards software developers rather than an academic audience. For this particular application you may also be interested in the Multi-Paxos Example Application referenced by the article. It's intended to help illustrate the concepts behind the consensus algorithm, and it sounds like it's almost exactly what you need for this application. Raft and most Multi-Paxos designs tend to get bogged down with an overabundance of accumulated history, which generates a new set of problems to deal with beyond simple consistency. An initial prototype could easily send the full state of the drawing on each update and ignore the history issue entirely, which is what the example application does. Later optimizations could reduce network overhead.
I want to design a semantic search engine for my final-year Master's project. I have been doing a fair amount of reading, both casually on the web and in academic papers, so I am not a total noob in this field.
My aim is to build a semantic search engine which parses HTML content into its equivalent RDF triples, stores the triples in a triplestore, and then tries to respond to the query fired using SPARQL. I want to do something out of the box, unlike the other students, so I decided to build a semantic search engine.
Right now I have a running search engine using Solr which performs keyword search; what I want to do is semantic search. I know some open source tools regarding Web 3.0 but am not sure whether they will be compatible with Solr or not.
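For example, here is a rough sketch of how I imagine the query step, assuming a triplestore that exposes a standard SPARQL HTTP endpoint (the host, port, dataset path, and triple URIs below are placeholders):

const http = require('http');
const querystring = require('querystring');

const sparql = `
  SELECT ?doc WHERE {
    ?doc <http://example.org/mentions> <http://example.org/topic/MySQL> .
  }`;

http.get({
  host: 'localhost',
  port: 3030, // e.g. a local Fuseki instance
  path: '/dataset/sparql?' + querystring.stringify({ query: sparql }),
  headers: { Accept: 'application/sparql-results+json' } // standard SPARQL protocol
}, res => {
  let data = '';
  res.on('data', c => (data += c));
  res.on('end', () => console.log(JSON.parse(data).results.bindings));
});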
So, can you please provide me some help with building this?
Thanks.
Regards
Although it sounds harsh: you will not be able to capture everything.
You need a lot of data. Of course, there already is a lot of data arranged in formats like OWL and RDF which you may use (e.g. WordNet, YAGO, GeoNames, etc.), but although these datasets are huge, they only cover very small portions of a possible discourse universe.
Developing a good semantic search takes a lot of resources and brain power. Projects like, for example, KompParse at the German Research Center for Artificial Intelligence, which focus only on a small part of human conversation (gossip or buying furniture), have been running for several years with several employees by now and are still just "ok".
Understanding semantics has already been implemented in different search engines; take Google, for example, or Wolfram Alpha. So this topic might not even be as "out of the box" as you think.
So I will go with user723630 and strongly advise you to focus on a smaller topic. You will still achieve a lot, but you will not get frustrated.
I have recently completed Hartl's Rails tutorial and have devised a little app that I would like to try to build out. I would like to understand whether I would need to learn and integrate a separate database (such as PostgreSQL) into my Rails app in order to model the functions that involve matching users based on shared qualities.
For example, if user a and user z both indicate that they like Chinese food as well as spicy meatballs, they would be "matched" with each other, and either user could initiate a "friendship request". However, if a were to like the food items described, but user b liked spicy meatballs and tapas, a and b would not be matched with each other (because they only have one like in common... or some other simple matching logic :) ). Of course, the actual model may be slightly more complicated than this; there may be other factors like "music" or "sport", and a matching process would consider certain qualities superior and weigh them more heavily than others.
My questions are:
Would Rails be sufficient to handle the slightly complex database described above, where users enter their info, and a "black box" model would then spit out matches? Are there any resources that I could look into to learn more about databases in Rails?
If not, which type of database would you recommend that I learn more about? Would PostgreSQL be a good candidate?
Thanks!
Edit -
For clarification, my question stems from the topic discussed in this part of the tutorial, where it notes that working with Rails lets one "barely ever have to think about how Rails stores data". I was wondering whether this would still be the case when the database needs to perform some type of work, such as matching, on its own.
Wow wow wow. Firstly, Rails isn't a database. Models are reflections of database tables and a structured way to view records as instance variables.
So yes, you need a database. I recommend you start off with SQLite, since it's nice and small and the data lives as a file within your app. Then upgrade to MySQL later on (NB: this is generic advice without knowing what projects you might do next).
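As for the matching "black box", that's ordinary application logic sitting on top of whichever database you choose; any database Rails supports can store the likes. A toy sketch of the weighted-overlap idea, written here in plain JavaScript since the logic is the same in any language (the categories, weights, and threshold are made up):

const WEIGHTS = { food: 1, music: 2, sport: 1 }; // music counts double, say
const THRESHOLD = 2; // minimum weighted overlap to propose a match

function matchScore(userA, userB) {
  let score = 0;
  for (const category of Object.keys(WEIGHTS)) {
    const shared = userA[category].filter(like => userB[category].includes(like));
    score += shared.length * WEIGHTS[category];
  }
  return score;
}

const a = { food: ['chinese', 'spicy meatballs'], music: [], sport: [] };
const z = { food: ['chinese', 'spicy meatballs'], music: [], sport: [] };
console.log(matchScore(a, z) >= THRESHOLD); // true -> propose the match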
Resources:
I highly recommend Railscasts.com
Start with this one and work your way up:
http://railscasts.com/episodes/310-getting-started-with-rails
When creating an AI talking bot, what kind of design should I use? Should it be one function or multiple modules? Should it have classes?
Understanding language is complicated, so the goal you need to determine first is what aspect of language you want to understand.
An AI must be able to understand what the person says to it, then relate it to what it already knows, and then generate a legitimate response.
These three steps can all be thought of as nearly independent, so you need to address each on its own.
The brain, the world's best language processor, uses a neural network, but that's not likely to work well for you.
A logic-based proof-solving system, where facts that follow from other facts are derived, would probably work best, and I know of at least one system that uses that approach fairly effectively.
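One way to reflect those three nearly independent steps in a design is three small modules/classes behind narrow interfaces, so each can evolve on its own. A toy sketch in JavaScript (not a working NLP system; the names are made up):

class Parser {
  // Step 1: turn raw text into something structured.
  parse(text) { return { words: text.toLowerCase().split(/\s+/) }; }
}

class KnowledgeBase {
  // Step 2: relate the parsed input to what the bot already knows.
  constructor() { this.facts = new Set(['the sky is blue']); }
  relate(parsed) {
    return [...this.facts].filter(fact =>
      parsed.words.some(word => fact.includes(word)));
  }
}

class Responder {
  // Step 3: generate a legitimate response from the related facts.
  respond(facts) {
    return facts.length ? 'I know that ' + facts[0] + '.' : 'Tell me more.';
  }
}

const parser = new Parser(), kb = new KnowledgeBase(), responder = new Responder();
console.log(responder.respond(kb.relate(parser.parse('Is the sky blue?'))));
// -> "I know that the sky is blue."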
I'd start with an existing AI program (like the famous Eliza) and run its output through a speech synthesizer.
Some source for Eliza is available here. One open source speech synthesizer is FreeTTS.
If you're using a language other than Java, there are similar candidate AI bots and text-to-speech libraries out there.
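To give a flavor of how far simple pattern/response rules can take a first pass, here is a toy in the spirit of Eliza (not the actual source linked above, which also does things like pronoun swapping):

const rules = [
  [/I need (.*)/i, 'Why do you need $1?'],
  [/I am (.*)/i, 'How long have you been $1?'],
  [/.*/, 'Please tell me more.'] // catch-all fallback
];

function reply(input) {
  for (const [pattern, response] of rules) {
    const match = input.match(pattern);
    if (match) return response.replace('$1', match[1]);
  }
}

console.log(reply('I am stuck on my design'));
// -> "How long have you been stuck on my design?"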
I've started to do some work in this space using this open source project called Talkify:
https://github.com/manthanhd/talkify
It is a bot framework intended to help orchestrate the flow of information between bot providers like Microsoft (Skype), Facebook (Messenger), etc. and your backend services. The framework doesn't really provide implementations for the bot providers yet, but it does provide hooks into its natural language recognition engine.
The built-in natural language recognition library can be used to classify sentences into topics, which you can then map to skill functions.
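The general pattern is: classify a sentence to a topic, then dispatch to the skill function registered for that topic. A generic sketch of that pattern (not Talkify's actual API; see its README for the real calls):

const skills = {
  greeting: () => 'Hello! How can I help?',
  weather: () => 'It looks sunny today.'
};

function classify(sentence) {
  // Stand-in for a real classifier: naive keyword matching.
  if (/\b(hi|hello|hey)\b/i.test(sentence)) return 'greeting';
  if (/\b(weather|rain|sunny)\b/i.test(sentence)) return 'weather';
  return 'greeting'; // fall back to something safe
}

console.log(skills[classify('hi there!')]()); // -> "Hello! How can I help?"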
Give it a try! I'd really like people's input to see how it can be improved.
I'm building an application as part of a course project which allows users to rate a place/event.
My basic question is: how should I proceed with this?
In what language should I write my code?
If a user provides a 5-star rating, how do I collect it and put it in the database?
Any guidance on the initial steps would be very helpful, as my knowledge of web services is very weak.
Any help/pointers to more information would be much appreciated.
Your question is overly vague and isn't related to system administration, so it's probably going to get closed once someone comes along with the appropriate privileges. However, as long as it's here...
My basic question is: how should I proceed with this?
If you are completely at a loss you may want to spend some time consulting with your instructor, who may be able to suggest likely sources of information and provide some direction for your work.
In what language should I write my code?
You should write your code in a language with which you are familiar and that allows you to fulfill the requirements of the project. Anything else will needlessly extend the amount of time it takes to complete the project. If your goal is to learn a new language, you face a mind-numbing number of choices. Python, Ruby, and PHP all see wide use in web applications. On Microsoft platforms, .NET is quite popular. You will find proponents and detractors for all of them.
If a user provides a 5-star rating, how do I collect it and put it in the database?
This looks suspiciously like you're asking us to do your assignment for you.