In Solr, I have a document that has id, words (indexed), and raw_text fields. I want to search just the words field, in the following way:
The words are the base forms (infinitives/lemmas) of an article's words (keywords, so to speak). For parsing and lemmatization (stemming) I use another tool, so that's not the point of the question.
E.g., for these articles (texts) the words would be:
1 Yesterday I didn't go to work, because it was holiday.
words: yesterday go work because holiday
2 Tommorrow I am going to work in the morning and in the evening I am going shopping.
words: tommorrow go work morning evening go shop
3 words: go tommorrow work
In a search for "go" I want 2 retrieved first (ranked more relevant) because it has more "go"-s than 1. I also want to use longer queries with a bunch of words and have the articles containing most of them, most times, retrieved first.
E.g., a search for "go tommorrow work" should rank 2 as more relevant than 3, because there are two "go"-s in 2 as opposed to only one in 3.
So the question: how should I store the words, multiValued or just single-valued? And what field type should be used?
Thank you!
A (single-valued) text field would suit you.
The text field type comes with tokenization, stemming, and stop-word analyzers.
Stemming uses heuristics to derive the root of a word. Among other things, it would reduce the words of your articles to roots matching your infinitive forms :-)
Trying it out for your samples (with a few additions):
Original: Yesterday [yesterday's] I didn't go to work [working, workable], because it was holiday [holidays].
Stemmed: Yesterdai yesterdai s I didn t go to work work workabl becaus it wa holidai holidai
Original: Tommorrow I am going [go,going,gone] to work in the morning [mornings] and in the evening I am going shopping [shoppers, shops].
Stemmed: Tommorrow I am go go go gone to work in the morn morn and in the even I am go shop shopper shop
Because it uses heuristics, "workable" does not share a root with "work", and "gone" doesn't share a root with "go". But it's a tradeoff that is much simpler and faster while hardly diminishing result quality.
And "didnt" and "I" are stop words according to this list, so they are automatically eliminated.
If you ever observe unacceptable results too often, take the trouble to integrate WordNet. It has lemmas, parts of speech, and other natural-language goodies.
I'm wondering about using Watson Assistant as a simple tool for informal testing of medical students. I'm a bit confused as to whether this is an appropriate use. I have played around but am quite stuck.
I have a symptom X in mind such that, if the user asks about it, Watson would ask 3 questions sequentially and test the user's responses against some specific terms.
These questions look like:
1. How much water does a 'symptom X' patient drink?
Watson would take their input and compare it against a definition somehow.
2. What are the 3 diseases that can manifest with 'symptom X'?
Watson would then take their input and compare it against the known list.
3. What tests should be run immediately on a patient presenting with 'symptom X'?
Watson would then compare their input to the known list.
Am I way off base with how I am trying to use it?
So far I have set up:
intent = #test_me (e.g. "Can you test me")
entity = @symptom_x (symptom X)
My first dialog node is: if #test_me and @symptom_x ->
'Sure, I can test you on symptom X. I'm going to ask you 3 questions on this.'
Pause.
Response -> how much water does a 'symptom X' patient drink?
Their response would be along the lines of 'more than 100ml/kg/day'
How can I evaluate this response?
Is what I'm trying to do beyond the scope of a chatbot / WA?
The simple way would be to add NLU (Natural Language Understanding) to the solution. If the language is English, NLU by default would pick up the "100ml" as a Quantity, and you can also use the syntax enrichment if you need to apply a different rule when the user writes things like "more".
If there is more complexity to the sentences and the default NLU is not enough, you can train a custom model using WKS (Watson Knowledge Studio) and use it with NLU. The same applies for languages where the default model doesn't give you enough info.
NLU also has some understanding of a good number of medical terms, which seems useful for your solution.
If you need to do it using only Watson Assistant, the only solution I can imagine is to use a regex to get the number and the unit (ml/day/km/etc.). Something like "(\d+)(\w{2})".
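As a rough illustration of that approach (a minimal sketch; the pattern below is an assumption, widened from the one above so that compound units like "ml/kg/day" survive):

    import re

    # An integer followed by a unit that may contain slashes, e.g. '100ml/kg/day'.
    PATTERN = re.compile(r'(\d+)\s*([a-zA-Z]+(?:/[a-zA-Z]+)*)')

    def extract_quantity(answer):
        """Return (number, unit) from a free-text answer, or None."""
        match = PATTERN.search(answer)
        if match is None:
            return None
        return int(match.group(1)), match.group(2)

    print(extract_quantity('more than 100ml/kg/day'))  # (100, 'ml/kg/day')

You could then compare the extracted number and unit against the expected answer in your dialog logic.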
In the DevPost Watson Developer Challenge for Conversational Applications post, I saw that Watson is (maybe) able to analyze the following phrase: "I want to visit Tokyo, Sydney, Manchester, and Reykjavik during a trip that takes 30 days".
Is there a better way to extract that array of locations without having to predefine a maximum number of location variables (i.e. location1 - 5) and manually specify various grammar items like $ (Locations)={location1} * (Locations)={location2} * (Locations)={location3} * (Locations)={location4}, as in the Pizza example dialog? I would like to follow up with a comment such as "That's a lot" if there are more than 4 locations, or "Sure" if fewer.
You could try something like AlchemyAPI or Relationship Extraction to identify all of the locations, and then simply add them to the user profile in Dialog. But today, the best way to do this within a broader conversation is to do it the same way the pizza sample does, as you outlined above.
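For comparison, here is what that extraction step looks like with a general-purpose NER library standing in for Alchemy (a hedged sketch assuming spaCy and its small English model; whether all four cities get tagged depends on the model):

    import spacy

    nlp = spacy.load('en_core_web_sm')
    doc = nlp('I want to visit Tokyo, Sydney, Manchester, and Reykjavik '
              'during a trip that takes 30 days')

    # GPE = geopolitical entity (cities, countries) in spaCy's label scheme.
    locations = [ent.text for ent in doc.ents if ent.label_ == 'GPE']
    print(locations)  # e.g. ['Tokyo', 'Sydney', 'Manchester', 'Reykjavik']
    print("That's a lot" if len(locations) > 4 else 'Sure')

The follow-up comment then becomes a simple length check instead of a fixed set of location1..location5 variables.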
Edit: see my answer below. The problem was in our code. MR works fine; it may have a status-reporting problem, but at least the input readers work fine.
I have run an experiment several times now and I am sure that mapreduce (or DatastoreInputReader) has odd behavior. I suspect this might have something to do with key ranges and splitting them, but that is just my guess.
Anyway, here's the setup we have:
- we have an NDB model called "AdGroup"; when creating new entities of this model, we use the same id returned from AdWords (it's an integer), but we use it as a string: AdGroup(id=str(adgroupId))
- we have 1,163,871 of these entities in our datastore (that's what the "Datastore Admin" page tells us; I know it's not an entirely accurate number, but we don't create/delete adgroups very often, so we can say for sure that the number is 1.1 million or more)
mapreduce is started (from another pipeline) like this:
    yield mapreduce_pipeline.MapreducePipeline(
        job_name='AdGroup-process',
        mapper_spec='process.adgroup_mapper',
        reducer_spec='process.adgroup_reducer',
        input_reader_spec='mapreduce.input_readers.DatastoreInputReader',
        mapper_params={
            'entity_kind': 'model.AdGroup',
            'shard_count': 120,
            'processing_rate': 500,
            'batch_size': 20,
        },
    )
So, I've tried to run this mapreduce several times today without changing anything in the code and without making changes to the datastore. Every time I ran it, the mapper-calls counter had a different value, ranging from 450,000 to 550,000.
Correct me if I'm wrong, but considering that I use the very basic DatastoreInputReader, mapper-calls should be equal to the number of entities, so it should be 1.1 million or more.
Note: the reason why I noticed this issue in the first place is because our marketing guys started complaining that "it's been 4 days after we added new adgroups and they still don't show up in your app!".
Right now, I can think of only one workaround - write all keys of all adgroups into a blobstore file (one per line) and then use BlobstoreLineInputReader. The writing to blob part would have to be written in a way that does not utilize DatastoreInputReader, of course. Should I go with this for now, or can you suggest something better?
Note: I have also tried using DatastoreKeyInputReader with the same code - the results were similar - mapper-calls were between 450,000 and 550,000.
So, finally, the questions. Does it matter how you generate ids for your entities? Is it better to use int ids instead of str ids? In general, what can I do to make it easier for mapreduce to find and map all of my entities?
PS: I'm still in the process of experimenting with this, I might add more details later.
After further investigation we have found that the error was actually in our code. So mapreduce actually works as expected (the mapper is called for every single datastore entity).
Our code was calling some Google services functions that were sometimes failing (the wonderful cryptic ApplicationError messages). Due to these failures, MR tasks were being retried. However, we had set a limit on taskqueue retries. MR did not detect or report this in any way; it was still showing "success" on the status page for all shards. That is why we initially thought that everything was fine with our code and that there was something wrong with the input reader.
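In hindsight, a defensive mapper would have surfaced the problem much earlier. A minimal sketch (call_google_service is a placeholder for the flaky call, not our actual code):

    import logging

    def adgroup_mapper(entity):
        # Catch the failure here instead of letting the task die and be
        # retried until the taskqueue retry limit silently drops it.
        try:
            result = call_google_service(entity)  # placeholder for the flaky call
        except Exception:
            # The entity is skipped, but the failure is at least visible in the logs.
            logging.exception('service call failed for AdGroup %s', entity.key.id())
            return
        # ... normal processing of `result` goes here ...

Re-raise instead of returning if you'd rather keep the retry behavior and just gain the logging.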
There's a movie whose name I can't remember. It's about a carnival or amusement park with a horror house and a bunch of teens who are murdered one by one by something with a clown's mask. I saw this movie about 20 years ago, and its sequel, but can't remember it exactly. (And I also forgot its title.) As a result, I started wondering about how to solve something technical.
Assume that I have a database with the story plot and other data of each and every movie published. (Something like the IMDb.) And I would have an edit field where a user can just enter a description in plain text. The system would then start analysing this text to find the movie(s) that would qualify to this description.
For example (a different movie), I enter this in the edit field: "Some movie about an Egyptian king who attacks a bunch of Indians on horseback, but he's badly wounded and his horse dies as he loses this battle."
The system should then report the movie "Alexander" from 2004 as answer, but possibly a few more. (Even allowing a few errors in the description.)
To create such a system, where a description gets analysed to find a matching record by searching through descriptions, what techniques would I need for something as complex as that? Not that I want to build something like that right now; it's more out of curiosity, in case I ever want to pick up some interesting new project.
(I wanted to award extra points for those who recognise the movie I've mentioned in the beginning. But one Google-attempt later and I found it myself!)
Btw, it's not the search engine itself that interests me, but analysing the description to get to something a search engine will understand! With the example movie, it's human logic that helped me to find the title. (And it's annoying that this movie isn't for sale in the Netherlands.) Human logic will always be a requirement but it's about analysing the user input, which is in the form of a story or description, with possible errors.
You should check out document classification.
A few document classification techniques:
- Naive Bayes classifier
- tf–idf
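As a rough sketch of the tf–idf route (assuming scikit-learn; the two plot summaries are made-up stand-ins for a real plot database):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Stand-in plot database; a real system would index every movie's summary.
    plots = {
        'Alexander (2004)': 'Alexander, king of Macedon, is badly wounded in a '
                            'battle against Indian forces and his horse falls.',
        'Some slasher movie': 'A group of teens is stalked through a carnival '
                              'funhouse by a killer in a clown mask.',
    }
    query = ('Some movie about an Egyptian king who attacks a bunch of Indians '
             'on horseback, but he is badly wounded and his horse dies.')

    vectorizer = TfidfVectorizer(stop_words='english')
    matrix = vectorizer.fit_transform(list(plots.values()) + [query])

    # Compare the query vector (last row) against every plot vector.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    for title, score in sorted(zip(plots, scores), key=lambda pair: -pair[1]):
        print('%.2f  %s' % (score, title))

The query shares "king", "badly", "wounded", and "horse" with the first summary, so it ranks highest, errors in the description notwithstanding.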
From what I can tell from your own comments, Google is the technique to be used. ;-) But, honestly, I think more or less any search engine would do.
Edit: heh, you removed your comment, but I do remember you mentioned Google as the one deserving extra points.
Edit+: well, you mentioned Google again, but I don't want to remove my first edit. ;-)
Pure speculation: would something trivial work, such as taking every word of more than 4 letters in the description ("Egyptian", "Indian", "horse", "battle", etc.) and fuzzy-matching it against a database of such summaries? Perhaps with some normalisation, e.g. king == leader == emperor?
Hmmm... young man, girlfriend, swimming pool, mother, wedding: does that get us to The Graduate? Well, I guess with a small amount of specifics ("Robinson") it might.
You can do lots of interesting stuff with the IMDb keyword search:
http://akas.imdb.com/keyword/carnival/clown/murder/
You can specify multiple keywords; it suggests movies and further keywords that appear in a similar context to the ones you gave.
The data contained in IMDb is publicly available for non-commercial use and can be downloaded as text files. You could build a database from it.
So, I have an autocomplete dropdown with a list of townships. Initially I just had the 20 or so that we had in the database... but recently we have noticed that some of our data lies in other counties... even other states. So the answer to that was to buy one of those databases with all the towns in the US (yes, I know geocoding is the answer, but due to time constraints we are doing this until we have time for that feature).
So, when we had 20-25 towns the autocomplete worked stellarly... now that there are 80,000 it's not as easy.
As I type I am thinking that the best way to do this is to default to this state, so there will be far fewer entries. I will add a state selector to the page that defaults to NJ; then you can pick another state if need be, which will narrow the list down to < 1000. Though I may still have the same issue? Does anyone know of a workaround for an autocomplete with a lot of data?
Should I post teh codez of my webservice?
Are you trying to autocomplete after only 1 character is typed? Maybe wait until 2 or more...?
Also, can you just return the top 10 rows, or something?
Sounds like your application is suffocating on the amount of data being returned and then rendered by the browser.
I assume that your database has the proper indexes, and you don't have a performance problem there.
I would limit the results of your service to no more than, say, 100 results. Users will not look at any more than that anyhow.
I would also only start retrieving the data from the service once 2 or 3 characters are entered, which will further reduce the scope of the query.
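A minimal sketch of that shape of endpoint (assuming Python/Flask and SQLite purely for illustration; the table and file names are made up):

    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route('/towns')
    def towns():
        prefix = request.args.get('q', '')
        if len(prefix) < 2:  # don't hit the database until 2+ characters are typed
            return jsonify([])
        con = sqlite3.connect('towns.db')
        rows = con.execute(
            'SELECT name FROM towns WHERE name LIKE ? ORDER BY name LIMIT 100',
            (prefix + '%',),  # cap the result set at 100 rows
        ).fetchall()
        con.close()
        return jsonify([name for (name,) in rows])

The same two rules (minimum prefix length, hard cap on rows) apply whatever your actual stack is.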
Good Luck!
Stupid question maybe, but... have you checked to make sure you have an index on the town name column? I wouldn't think 80K names should be stressing your database...
I think you're on the right track. Use a series of cascading inputs, State -> County -> Township where each succeeding one grabs the potential population based on the value of the preceding one. Each input would validate against its potential population to avoid spurious inputs. I would suggest caching the intermediate results and querying against them for the autocomplete instead of going all the way back to the database each time.
If you have control of the underlying SQL, you may want to try several "UNION" queries instead of one query with several "OR like" lines in its where clause.
Check out this article on optimizing SQL.
I'd just limit the SQL query with a TOP clause. I also like using a "less than" instead of a like:
    select top 10 name from cities where #partialname < name order by name;
that "Ce" will give you "Cedar Grove" and "Cedar Knolls" but also "Chatham" & "Cherry Hill" so you always get ten.
In LINQ:
    var q = (from c in db.Cities
             where partialname < c.Name
             orderby c.Name
             select c.Name).Take(10);