This is my synonyms.txt
file system => filesystem
file set => fileset
version , release
latest, new
content, information
I have changed the synonyms.txt, but the synonyms are not working. Also, please help me with how to define space-separated synonyms, e.g.:
foo bar => foobar
The field type "watson_text_en" used in Retrieve and Rank doesn't have a synonym filter by default. You would need to update your schema.xml by adding that filter to make it available. Here is where and what to add: in your schema.xml, inside the <analyzer> section of that field type, add <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/> to the list of filters.
Depending on your requirements, you can add it to either or both of <analyzer type="index"> and <analyzer type="query">, which tell Solr whether to apply it at indexing and/or query time. Adding it to "index" requires reindexing for the change to take effect, while adding it to "query" does not. The filter list runs in the order you declare it, so you can place this filter before or after certain other filters. For example, if you put it before solr.LowerCaseFilterFactory, it's better to turn on ignoreCase="true", because it will run before everything is transformed to lower case.
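As an illustrative sketch (the surrounding field type definition is abbreviated; your actual watson_text_en definition will contain more analysis components than shown here), the query analyzer could look like:

```xml
<fieldType name="watson_text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- synonym filter placed before lower-casing, hence ignoreCase="true" -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Note that spaces in synonyms.txt entries are written literally, with no escaping needed: a line like `file set => fileset` maps the two-word phrase to the single token.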
Just to note regarding adding the filter into 'Query' - according to the Solr docs, http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory this is a Very Bad Thing to Do.
Related
I created my own core on http://localhost:8983/solr and added some documents so I could query them. But when I query something like "dog", I want documents that contain "pooch" to be returned too. So I want to implement the SVD algorithm to make some improvement to my results.
Since I am new to search engines, all I know is that I can use Mahout to implement SVD, but that seems a bit difficult because I would have to install Maven, Hadoop and Mahout.
Any suggestion will be appreciated.
You can use SynonymGraphFilterFactory
This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Synonym Filter, which produces incorrect graphs for multi-token synonyms.
If you use this filter during indexing, you must follow it with a Flatten Graph Filter to squash tokens on top of one another like the Synonym Filter.
Create a file, e.g. mysynonyms.txt, in the directory your_collection/conf/ and put the synonyms in it with the => sign:
pooch,pup,fido => dog
huge,ginormous,humungous => large
An example schema would be:
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.SynonymGraphFilterFactory" synonyms="mysynonyms.txt"/>
<filter class="solr.FlattenGraphFilterFactory"/> <!-- required on index analyzers after graph filters -->
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.SynonymGraphFilterFactory" synonyms="mysynonyms.txt"/>
</analyzer>
Source : https://cwiki.apache.org/confluence/display/solr/Filter+Descriptions
There is another way to augment your index with terms not in the content. Synonyms are good, as #ashraful says. But there are two other problems you will run into:
words used but not in the synonym list
behavioral search: using other user behavior as a hint to what they are looking for
These require you to augment the index with terms learned from 1) other searches, and 2) user behavior. Mahout's Correlated Cross Occurrence algorithm can help with both. You can set it up to find terms that lead to people reading an item and (if you have something like purchase or other preference data) conversion items that correlate with items in the index. In the second case you would add user conversions to the search query to personalize the results.
A blog about the technique here: http://actionml.com/blog/personalized_search
The page on Mahout docs here: http://mahout.apache.org/users/algorithms/intro-cooccurrence-spark.html
You should also look at word2vec, which will (given the right training data) find that "dog" and "pooch" are synonyms regardless of the synonym list, because it is learned from the data. I'm not sure how you would add word2vec to Solr, but it is integrated into Fusion, the closed-source product of Lucidworks.
I have a Solr index filled with documents, with a field named issuer.
There is a document with issuer=first issuer.
I'm trying to implement matching of two consecutive words. The first word needs to match completely, the second needs to match partially.
What I am trying to achieve is:
I search for something like: issuer:first\ iss*
I expect it to match the document whose issuer is "first issuer" (matching "first iss" as a prefix).
I tried the following solutions but none is working:
issuer:first\ iss* -> returns nothing
issuer:"first iss"* -> returns everything
issuer:(first iss*) -> also returns "issuer first"
Does anybody have a clue on how to achieve the desired result?
My suggestion is to add a shingle-filter-based field type to your schema. Below is a simple definition:
<fieldtype name="shingle">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.ShingleFilterFactory" minShingleSize="2" maxShingleSize="5"/>
</analyzer>
</fieldtype>
You then add another field with this type as shown below:
<field name="issuer_sh" type="shingle" indexed="true" stored="false"/>
At query time, you can issue the following query:
issuer_sh:"first iss*"
The shingleFilter creates n-gram tokens from your text. For instance, if the issuer field contains "first issue", then Solr will create and index the following tokens:
first
issue
first issue
You can't search with wildcards in phrase queries. Without changing how you are indexing (see #ameertawfik's answer), the standard query parser doesn't provide a good way to do this. You can, however, use the surround query parser to search using spans. This query would then look like:
1N(first, iss*)
Keep in mind, surround query parser does not analyze, so 1N(first, iss*) and 1N(First, iss*) will not find the same results.
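With the surround query parser enabled, the request could be sketched like this (using the standard local-params syntax; the field name is the one from the question):

```
q={!surround}issuer:1N(first, iss*)
```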
You could also construct this query using lucene's SpanQueries directly, of course, like:
SpanQuery[] queries = new SpanQuery[2];
queries[0] = new SpanTermQuery(new Term("issuer", "first"));
queries[1] = new SpanMultiTermQueryWrapper<>(new PrefixQuery(new Term("issuer", "iss")));
// slop 0, in order: "first" immediately followed by a term starting with "iss"
Query finalQuery = new SpanNearQuery(queries, 0, true);
I have a Solr setup (1.4) with a text field containing ebook data. The params passed to Solr are:
"hl.fragsize":"0",
"indent":"1",
"hl.simple.pre":"{{{",
"hl.fl":"body_eng",
"hl.maxAnalyzedChars":"-1",
"wt":"json",
"hl":"true",
"rows":"1",
"fl":"ia,body_length,page_count",
"q":"ia:talesofpunjabtol00stee AND PUNJAB",
"q.op":"AND",
"f.body_eng.hl.snippets":"428",
"hl.simple.post":"}}}",
"hl.usePhraseHighlighter":"true"
However, the results show only 20 highlighted occurrences of the word PUNJAB.
I tried "f.body_eng.hl.snippets":"428", but even that isn't working.
body_eng is a big text field, and highlighting only works up to a certain length. I have tried other words as well; in every case, highlighting stops at around 54K characters.
What could be the reason?
First of all: 1.4 is a very old version of Solr. I'm not sure if per field values were supported at that time (Highlighting itself was introduced with Solr 1.3). The default highlighter was changed in 3.1.
You should, however, be able to highlight all occurrences in a field by supplying a large value for hl.maxAnalyzedChars (I'm not sure that -1 will do what you want; the default has historically been 51200 characters, which would roughly match the ~54K cutoff you're seeing). Another option to try is a large hl.maxAnalyzedChars value together with a large hl.fragsize value (use the same large value for both, and not 0).
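For instance, a parameter set along these lines (the values are illustrative, not tuned):

```
hl=true
hl.fl=body_eng
hl.maxAnalyzedChars=10000000
hl.fragsize=100000
f.body_eng.hl.snippets=1000
```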
If you're still unable to get it to work, test it on a more recent version of Solr to see if it's an issue that has already been fixed.
So, after a lot of asking around, it's working now.
The query params were correct; the schema was causing the problem. The change made was:
<filter class="solr.SnowballPorterFilterFactory" language="English" />
was replaced with
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
I'm learning Solr and have become confused trying to figure out ICUCollation: what it does, what it is for, and how to use it. I haven't found any good explanation of this online. The docs appear to say that I need to use ICUCollation and imply that it does magical things for me, but don't explain exactly why or what, or how it integrates with anything else.
Say I have a text field in French and I want stopwords removed, accents, punctuation and case ignored and stemming... how does ICUCollation come into this? Do I set solr.ICUCollationField and locale='fr' and it will do everything else automatically? Or do I set solr.ICUCollationField and then tokenizer and filters on this in addition? Or do I not use solr.ICUCollationField at all because that's for something completely different? And if so, then what?
Collation is the organisation of written information into an order. ICUCollationField (the API documentation also provides a good description) is meant to let you provide locale-aware sorting, as sort order is defined by cultural norms and specific language properties. This is useful to allow different sorting based on those rules, such as the difference between Norwegian and Swedish, where a Swede would order Å before Æ/Ä and Ø/Ö, while a Norwegian would order Æ/Ä, then Ø/Ö, then Å.
Since you usually don't want to sort by a tokenized field (exception: KeywordTokenizer) or a multivalued field, these fields are usually not processed any more than allowing for the sorting / collation to be performed.
There is a case to be made for collation filters for searching as well, as search in practice is just comparison. This means that if you're aiming to search for two words that would be identical when compared in the locale provided, it would be a hit. The tokens indexed will not make any sense when inspected, but as long as the values are reduced to the same token both when indexing and searching, it would work. There's an example of this on the wiki under UnicodeCollation.
Collation does not affect stopwords (StopFilterFactory), accents (ICUFoldingFilterFactory), punctuation, case (LowerCaseFilterFactory or ICUNormalizer2FilterFactory; although if the locale used for sorting is case-aware, collation itself will respect case) or stemming (SnowballPorterFilterFactory). Have a look at the suggested filters for those. Most filters and tokenizers in Solr do very specific tasks, and try to avoid doing "everything and the kitchen sink" in one single filter.
You normally have two or more fields for one text input if you want to do different things like:
search: text analysis
sort: language sensitive / case insensitive sorting
facet: string
For search use something like:
<fieldType name="textFR" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.ICUTokenizerFactory"/>
<filter class="solr.ICUFoldingFilterFactory"/>
<filter class="solr.ElisionFilterFactory"/>
<filter class="solr.KeywordRepeatFilterFactory"/>
<filter class="solr.FrenchLightStemFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
</fieldType>
For sorting use:
<fieldType name="textSortFR" class="solr.ICUCollationField"
locale="fr"
strength="primary" />
or simply:
<fieldType name="textSort" class="solr.ICUCollationField"
locale=""
strength="primary" />
(If you have to support many languages, this should work well enough in most cases.)
Do make use of the Analysis UI in the Solr Admin: open the analysis view for your index, select the field type (e.g. your sort field), add a representative input value in the left text area and a test value in the right field (in the case of sorting, the right-hand value is less interesting, since the sort field is not used for matching).
The output will show you whether:
accents are removed
elisions are removed
lower casing is applied
etc.
For example, if you see that elisions (l'atelier) are not removed (atelier) but you would like to discard them for sorting, you would have to add the elision filter (see the example for the search field type above).
https://cwiki.apache.org/confluence/display/solr/Language+Analysis
I'm trying to execute synonym filtering at query time so that if I search for X, results for Y also show up.
I go to where Solr is being run, edit the .txt file and add X, Y on a new line.
This does not work. I check the schema and I see:
<analyzer type="query">
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
What am I missing?
EDIT
Assessing configuration files
tomcat6/Catalina/localhost seems to point to the correct location
<Context docBase="/data/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true">
<Environment name="solr/home" type="java.lang.String" value="/data/solr" override="true" />
</Context>
Also, in the Solr admin I see this. What does cwd mean?
cwd=/usr/share/tomcat6 SolrHome=/data/solr/
Use the SynonymFilterFactory only at index time, not query time. There are some subtle but well-understood problems with synonyms at query time.
See: http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
After you move synonyms to the index analyzer chain, check that they are working with the Analysis page in the admin UI.
The answer from #Walter Underwood is good, but incomplete.
Whether you use the SynonymFilterFactory at index or query time depends on your default operator.
So, let's say we have a synonym file with this entry:
5,five
If your default operator is OR (which is the default default operator), then you should set up your synonyms on the query filter. This way a query for "5" will be passed to the backend as a query for "5" OR "five", say, and the backend would respond appropriately. At the same time, you can make changes to your synonym file without reindexing, and your index is smaller since it doesn't have to have so many tokens.
However, if you change the default operator to AND, you should set up your synonyms on the index filter instead. If you don't, a query for "5" would go to the backend as "5" AND "five", and it would not match the documents that it's expected to. This, alas, makes the index bigger, and also means new synonyms require complete reindexes.
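As a sketch, an index-time setup for the AND case could look like this (the field type name is illustrative, not from any shipped schema):

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- expand="true": every synonym in a group is indexed, not just the first -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With this arrangement a document containing either "5" or "five" gets both tokens into the index, so a plain query for "5" matches it regardless of the default operator.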
Note: The documentation for this is currently wrong, leaving out all these details.