Full Text Search by prefixed FORMSOF result - sql-server

Is there a way to concatenate a prefix onto all the results of a FORMSOF() lookup when doing a CONTAINSTABLE() query? I work in the nordic ski industry, and we sell "rollerskis" for summer training. As this is a pretty obscure word, the parser doesn't quite give me the right inflectional forms I'd like. Specifically, if I try to run a FORMSOF(INFLECTIONAL,"rollerski"), the parsing function sys.dm_fts_parser returns the following terms (no thesaurus, English language):
{"rollerski", "rollerskiing", "rollerskies", "rollerskied"}
That's close to what I need, but it's notably missing the pluralized rollerskis, which is used throughout our website, most notably in the name of several products and product categories. What I would like to do to get a more accurate list is return all the inflectional forms of "ski" and prefix each of them with "roller". That would give me the following list of terms:
{"rollerski", "rollerskis'", "rollerskis","rollerskiing","rollerskies","rollerskied","rollerski's"}
Is there a way I can achieve this within the CONTAINSTABLE() query?
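CONTAINSTABLE itself offers no way to prefix FORMSOF results, so one workaround is to expand the term list client-side and pass the prefixed forms as an explicit OR list. A minimal Python sketch of that expansion — the inflection list is assumed to come from a prior sys.dm_fts_parser run on FORMSOF(INFLECTIONAL, "ski"), and the Products table / Name column are hypothetical:

```python
# Inflectional forms of "ski", assumed output of sys.dm_fts_parser.
ski_forms = ["ski", "skis", "skiing", "skies", "skied", "ski's", "skis'"]

def build_condition(prefix, forms):
    # Quote each prefixed term and OR them together for CONTAINSTABLE.
    return " OR ".join('"{}{}"'.format(prefix, f) for f in forms)

condition = build_condition("roller", ski_forms)

# Double any apostrophes (e.g. in "rollerski's") when embedding the
# condition in a T-SQL string literal.
sql = ("SELECT k.[KEY], k.RANK "
       "FROM CONTAINSTABLE(Products, Name, '{}') AS k"
       .format(condition.replace("'", "''")))
```

In real code you would pass the condition as a query parameter rather than formatting it into the SQL string; the inline formatting here is only to keep the sketch self-contained.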

Related

IBM Watson retrieve and rank service - boolean operator

I'm writing the CSV file to train a ranker in the Watson Retrieve and Rank service; it has many rows of the form [query, "id_doc", "relevance_score", ...].
I have two questions about the structure of this file:
1. I have to distinguish between two documents, depending on whether or not the query contains the word "not". More specifically:
the body and the title of the first document contain "manager"
the body and the title of the second document contain "not manager"
Thus, if the query is like "I'm a manager. How do I....?" then the first document is correct, but not the second one.
if the query is like "I'm not a manager..." then the second document is correct, but not the first one.
Is there any particular syntax that can be used to write the query in a proper way? Maybe using boolean operator? Is this file the right place to apply this kind of filter?
2. This service also has a web interface to train a ranker. The rating scale used on this site is: 1 -> incorrect answer, 2 -> relevant to the topic but doesn't answer the question, 3 -> good, but can be improved, 4 -> perfect answer.
Is the relevance score used in this file the same one of the web interface?
Thank you!
Is there any particular syntax that can be used to write the query in a proper way? Maybe using boolean operator? Is this file the right place to apply this kind of filter?
As you hinted, this file is not quite the appropriate place for using filters. The training data will be used to figure out what types of lexical overlap features the ranker should pay attention to when trying to optimize the ordering of the search results from Solr (see discussion here for more information: watson retrieve-and-rank - manual ranking).
That said, you can certainly add at least two rows to your training data like so:
The first can have the question text "I'm a manager. How do I do something" along with the corresponding correct doc id and a positive integer relevance label.
The second row can have the question text "I'm not a manager. How do I do something" along with the answering doc id for non-managers and a positive integer relevance label.
With a sufficient number of such examples, hopefully the ranker will learn to pay attention to bigram lexical overlap features. If this is not working, you can certainly play with pre-detecting manager vs. not-manager and applying appropriate filters, but I believe that's done with a separate parameter (fq?), so you might have to modify train.py to pass the filter query appropriately (the default train.py takes the full query and passes it via the q parameter to the /fcselect endpoint).
Is the relevance score used in this file the same one of the web interface?
Not quite, the web interface uses the 1-4 star rating to improve the UI for data collection, but then compresses the star ratings to a smaller relevance label scale when generating the training data for the ranker. I think the compression gives bad answers (i.e. star ratings < 3) a relevance label of 0 and passes the higher star ratings as is so that effectively there are 3 levels of rating (though maybe someone on the UI team can add clarification on the details if need be). It is important for the underlying ranking algorithm that bad answers receive a relevance label of 0.
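The compression described above can be sketched in a couple of lines — note this mapping is speculative (the answer itself hedges on the details), so treat it as an illustration of the idea, not the service's actual behavior:

```python
# Speculative sketch of the star-rating compression: star ratings below 3
# collapse to relevance label 0 (bad answer); higher ratings pass through,
# giving effectively three levels of rating.
def star_to_relevance(stars):
    return 0 if stars < 3 else stars

labels = [star_to_relevance(s) for s in (1, 2, 3, 4)]  # [0, 0, 3, 4]
```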

Solr/Lucene query lemmatization with context

I have successfully implemented a Czech lemmatizer for Lucene. I'm testing it with Solr and it works nicely at index time. But it doesn't work as well for queries, because the query parser doesn't provide the lemmatizer any context (words before or after).
For example, the phrase pila vodu is analyzed differently at index time than at query time. It contains the ambiguous word pila, which could be the noun pila ("saw", e.g. a chainsaw) or a past-tense form of the verb pít ("to drink").
pila vodu ->
Index time: pít voda
Query time: pila voda
.. so the word pila is not found and not highlighted in a document snippet.
This behaviour is documented on the Solr wiki (quoted below), and I can confirm it by debugging my code (only the isolated strings "pila" and "vodu" are passed to the lemmatizer).
... The Lucene QueryParser tokenizes on white space before giving any text to the Analyzer, so if a person searches for the words sea biscit the analyzer will be given the words "sea" and "biscit" separately, ...
So my question is:
Is it possible to somehow change, configure or adapt the query parser so the lemmatizer would see the whole query string, or at least some context of individual words? I would like to have a solution also for different solr query parsers like dismax or edismax.
I know that there is no such issue with phrase queries like "pila vodu" (quotes), but then I would lose the documents without the exact phrase (e.g. documents with "pila víno" or even "pila dobrou vodu").
Edit - trying to explain / answer the following question (thank you @femtoRgon):
If the two terms aren't a phrase, and so don't necessarily come together, then why would they be analyzed in context to one another?
For sure it would be better to analyze only terms that come together. For example, at index time the lemmatizer detects sentences in the input text and analyzes together only words from a single sentence. But how can a similar thing be achieved at query time? Is implementing my own query parser the only option? I quite like the pf2 and pf3 options of the edismax parser; would I have to implement them again in my own parser?
The idea behind this is in fact a bit deeper, because the lemmatizer is doing word-sense disambiguation even for words that have the same lexical base. For example, the word bow has about 7 different senses in English (see Wikipedia), and the lemmatizer distinguishes those senses. So I would like to exploit this potential to make searches more precise -- to return only documents containing the word bow in the concrete sense required by the query. So my question could be extended to: How do I get the correct <lemma;sense> pair for a query term? The lemmatizer is very often able to assign the correct sense when the word is presented in its common context, but it has no chance when there is no context.
Finally, I implemented my own query parser.
It wasn't that difficult thanks to the edismax sources as a guide and a reference implementation. I could easily compare my parser results with the results of edismax...
Solution:
First, I analyze the whole query string together. This gives me the list of "tokens".
There is a little clash with stop words - it is not that easy to get tokens for stop words, as they are omitted by the analyzer, but you can detect them from the PositionIncrementAttribute.
From the "tokens" I construct the query in the same way as edismax does (e.g. creating all 2-token and/or 3-token phrase queries combined in DisjunctionMaxQuery instances).

Searching for words that are contained in other words

Let's say that one of my fields in the index contains the word entrepreneurial. When I search for the word entrepreneur I don't get that document. But entrepreneur* does.
Is there a mode/parameter in which queries search for document that have words that contain a word token in search text?
Another example would be finding a doc that has Matthew when you're looking for Matt.
Thanks
We don't currently have a mode where all input terms are treated as prefixes. You have a few options, depending on what exactly you are looking for:
Set the target searchable field to a language-specific analyzer. This is the nicest option from the linguistics perspective. When you do this, if appropriate for the language, we'll do stemming, which helps with things such as "run" versus "running". It won't help with your specific example of "entrepreneurial", but generally speaking this helps significantly with recall.
Split the search input before sending it to search and append "*" to each term. Depending on your target language this is relatively easy (i.e. if there are spaces) or very hard. Note that prefixes don't mix well with stemming unless you take them into account and search for both (e.g. something like search=aa bb -> (aa | aa*) (bb | bb*)).
Lean on suggestions. This is more of a different angle that may or may not match your scenario. Search suggestions are good at partial/prefix matching and they'll help users land on the right terms. You can read more about this here.
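The second option above (splitting the input and adding prefixes) is straightforward for whitespace-separated languages; a minimal client-side sketch:

```python
# Rewrite each whitespace-separated term as (term | term*) so both the
# stemmed/exact form and the raw prefix can match, per option 2 above.
def prefixify(query):
    return " ".join("({0} | {0}*)".format(t) for t in query.split())

print(prefixify("sea biscit"))  # (sea | sea*) (biscit | biscit*)
```

The rewritten string would then be passed as the search parameter; terms containing reserved query-syntax characters would need escaping first.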
Perhaps this page might be of interest:
https://msdn.microsoft.com/en-us/library/azure/dn798927.aspx
search=[string]
Optional. The text to search for. All searchable fields are searched by
default unless searchFields is specified. When searching searchable fields, the search text itself is tokenized, so multiple terms can be separated by white space (e.g.: search=hello world). To match any term, use * (this can be useful for boolean filter queries). Omitting this parameter has the same effect as setting it to *. See Simple query syntax in Azure Search for specifics on the search syntax.

How can I sort appengine search index results by relevance?

I'm working on a project that uses Google App Engine's text search API to allow users to search for documents that include a words field. I'm sorting using a MatchScorer, which according to the documentation "assigns a score based on term frequency in a document".
When a user enters a query like "business promo", I convert this into a query string that looks like words:business OR words:promo. I would have expected that this would return documents that contain both the words "business" and "promo" before documents that only contain one of the words (since the documentation says it assigns a score based on term frequency in the document). However, I frequently see results that contain only one of the words before documents that contain both.
I've also tried querying using the RescoringMatchScorer, but see the same problem using this scorer.
I've thought about doing separate queries - ones that AND the search terms and ones that OR the search terms - but this would require many queries if the user enters more than two search terms. For example, if I searched for "advanced business solutions", I'd need queries like this to cover all the bases:
words:advanced AND words:business AND words:solutions
words:advanced AND words:business
words:advanced AND words:solutions
words:business AND words:solutions
words:advanced OR words:business OR words:solutions
Does anyone have any hints on how to perform searches that return more relevant results (i.e. more search term matches) before less relevant results?
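For what it's worth, the enumeration of AND/OR combinations above doesn't have to be hand-written; it can be generated for any number of terms (the field name and the largest-first ordering here are just illustrative choices):

```python
from itertools import combinations

# Generate every AND-combination of 2+ terms, largest combinations first,
# plus a final OR query as the catch-all -- the pattern listed above.
def subset_queries(terms, field="words"):
    queries = []
    for size in range(len(terms), 1, -1):
        for combo in combinations(terms, size):
            queries.append(" AND ".join("{}:{}".format(field, t) for t in combo))
    queries.append(" OR ".join("{}:{}".format(field, t) for t in terms))
    return queries

qs = subset_queries(["advanced", "business", "solutions"])
# 1 three-term AND + 3 two-term ANDs + 1 OR = 5 queries
```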
Perhaps it depends on how you interpret the phrase "term frequency". I think you're interpreting it to mean "how many of my search terms appear in the document". But it could also mean "how many times (any of) the search terms appears in each document", and indeed -- at least according to some simple experiments I've done -- the latter seems to be the actual behavior.
For example, a document that contains the word "business" 20 times and never mentions the word "promo" would be scored higher than a document that contains "business" and "promo" only once each. Does that jibe with the behavior you're seeing?
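The difference between the two interpretations can be made concrete with a toy scorer — this is purely illustrative and not the actual MatchScorer implementation:

```python
# Toy scorer for the second interpretation: total occurrences of ANY query
# term in the document, regardless of how many distinct terms match.
def raw_tf_score(doc_words, query_terms):
    return sum(1 for w in doc_words for t in query_terms if w == t)

doc_a = ["business"] * 20           # "business" 20 times, no "promo"
doc_b = ["business", "promo"]       # both terms, once each
query = ["business", "promo"]

# Under this interpretation doc_a outscores doc_b (20 vs. 2), matching
# the behavior described above.
```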

Simple search in App Engine

I want people to be able to search from a title field and a short description field (max 150 characters), so no real full-text search. Mainly they search for keywords, like "salsa" or "club", but I also want them to be able to search for "salsa" and match words like "salsaclub", so at least some form of partial matching.
Would the new Search API be useful for this kind of search, or would I be better off putting all keywords, including possible partial matches, in a list and filter on this list?
Putting all the keywords and partial matches (with some sort of support for stemming, etc.) into a list might work if you limit yourself to a small number of query terms (i.e. 1 or 2); anything more complex will become costly. If you want anything more than one or two terms, I would look at the alternatives.
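The keyword-list approach can be sketched as follows — a hypothetical tokenizer that also stores word prefixes, so a filter on "salsa" hits a document indexed with "salsaclub" (the min_prefix cutoff is an arbitrary choice to keep the list small):

```python
# Index a title/description into a keyword list that includes each word
# plus its prefixes (down to min_prefix characters), enabling partial
# matches such as "salsa" -> "salsaclub" via a simple list-property filter.
def index_tokens(text, min_prefix=4):
    tokens = set()
    for word in text.lower().split():
        tokens.add(word)
        for i in range(min_prefix, len(word)):
            tokens.add(word[:i])
    return tokens

tokens = index_tokens("Salsaclub Amsterdam")
# "salsa" is a stored prefix of "salsaclub", so a filter on "salsa" matches.
```

Note this only covers prefix matches, not arbitrary substrings, and the token list grows with word length, which is part of the cost trade-off mentioned above.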
You haven't said whether you're using Python, Java, Go, or PHP. If Python, have a look at Whoosh for App Engine (https://github.com/tallstreet/Whoosh-AppEngine) or go with the Search API.
