I have added some documents to my Solr index using a requestHandler and now I am trying to query them from the web UI. I get the correct result when my query parameter is in the format
[field]:[search-term]
but I want to be able to search without phrasing it that way. For example, to search for cat I want to just type "cat" and get the result, rather than "animal:cat".
I am new to Solr, so I am not sure where I am going wrong.
Use the DisMax query parser/handler.
Extract from the DisMax documentation:
The DisMax query parser is designed to process simple phrases (without
complex syntax) entered by users and to search for individual terms
across several fields using different weighting (boosts) based on the
significance of each field. Additional options enable users to
influence the score based on rules specific to each use case
(independent of user input).
In general, the DisMax query parser's interface is more like that of
Google than the interface of the 'standard' Solr request handler. This
similarity makes DisMax the appropriate query parser for many consumer
applications. It accepts a simple syntax, and it rarely produces error
messages.
Also see the DisMax wiki page and the full documentation of the DisMax query parser.
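As a minimal sketch (the collection name mycollection is a placeholder; the field name animal is taken from your example), a dismax request could look like:
http://localhost:8983/solr/mycollection/select?defType=dismax&qf=animal&q=cat
The qf parameter tells dismax which field(s) to search, so you only type cat and never have to write animal:cat. You can also set defType and qf as defaults on your request handler in solrconfig.xml so they do not have to be repeated on every request.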
Related
I have managed to create a dataset using Apache Solr. I have also managed to make queries, such as in this example:
content:(test1 OR test2) OR title:test2
I would now like to search the dataset using an entire string, in a similar fashion to searching on Google. Is the correct approach to keep using OR clauses on the title and content fields for each word within the query, or is there a better way to achieve this? (I am not looking for exact matches, just the most relevant ones.)
You can use dismax or edismax for this, and you can pass phrases with boosts if you have them.
The DisMax query parser is designed to process simple phrases (without
complex syntax) entered by users and to search for individual terms
across several fields using different weighting (boosts) based on the
significance of each field. Additional options enable users to
influence the score based on rules specific to each use case
(independent of user input).
The detailed parameters are described on the Solr DisMax documentation page.
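For example, assuming your fields really are called title and content (adjust the field names and collection to your schema), an edismax query with per-field boosts might look roughly like:
http://localhost:8983/solr/mycollection/select?defType=edismax&qf=title^2+content&q=test1+test2
With qf=title^2 content every term of the raw query string is searched in both fields, and matches in title count twice as much, so there is no need to build content:(test1 OR test2) OR title:test2 by hand. Phrases can be passed in quotes, and the pf/pf2 parameters can additionally boost documents where the query terms appear close together.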
I have gone through the DisMax query parser and the standard query parser and found that the standard query parser handles errors differently and is hence more prone to errors. So in which areas is one more powerful than the other, and what are the specific differences between them?
The key advantage of the standard query parser is that it supports a
robust and fairly intuitive syntax allowing you to create a variety of
structured queries. The largest disadvantage is that it’s very
intolerant of syntax errors, as compared with something like the
DisMax query parser which is designed to throw as few errors as
possible.
The standard query parser is also known as the Lucene query parser, so it expects queries to follow correct syntax.
The DisMax query parser is designed to process simple phrases (without
complex syntax) entered by users and to search for individual terms
across several fields using different weighting (boosts) based on the
significance of each field. Additional options enable users to
influence the score based on rules specific to each use case
(independent of user input).
In general, the DisMax query parser’s interface is more like that of
Google than the interface of the 'lucene' (aka Standard) Solr query
parser. This similarity makes DisMax the appropriate query parser for
many consumer applications. It accepts a simple syntax, and it rarely
produces error messages.
The DisMax query parser supports an extremely simplified subset of the
Lucene QueryParser syntax. As in Lucene, quotes can be used to group
phrases, and +/- can be used to denote mandatory and optional clauses.
All other Lucene query parser special characters (except AND and OR)
are escaped to simplify the user experience. The DisMax query parser
takes responsibility for building a good query from the user’s input
using Boolean clauses containing DisMax queries across fields and
boosts specified by the user. It also lets the Solr administrator
provide additional boosting queries, boosting functions, and filtering
queries to artificially affect the outcome of all searches.
For more information, see the Standard Query Parser documentation at https://lucene.apache.org/solr/guide/7_6/the-standard-query-parser.html and the DisMax documentation at https://lucene.apache.org/solr/guide/7_6/the-dismax-query-parser.html
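As a quick illustration of the difference in error tolerance (the collection and field names below are only placeholders), a request like
http://localhost:8983/solr/mycollection/select?defType=lucene&q=text:(cat
fails with a syntax error because of the unbalanced parenthesis, while
http://localhost:8983/solr/mycollection/select?defType=dismax&qf=text&q=(cat
escapes the stray parenthesis and simply searches the text field for cat.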
I'm building a search query using the edismax parser and specifying the query fields. Sometimes I need to search across multiple collections and sometimes I am just searching against a single collection. In either case, I am generating a single query by specifying the collections parameter in addition to my query fields.
This means that my qf parameter may list fields that do not exist in one or more collections. Normally this isn't a problem and I get back the results I expect (provided I am using the edismax parser). However, I have noticed that if I perform a fuzzy search this way, I am getting inconsistent results.
For example:
http://localhost:8983/solr/activity/select?q=jva040~2&defType=edismax&qf=Code
gives me results with Codes like
RVA010, JAA048, RVA041
but if I issue a query with a non-existent field in the activity collection like
http://localhost:8983/solr/activity/select?q=jva040~2&defType=edismax&qf=Code+Poop
I get results with Codes like
53721ILTHRS-CHFSPMT-2, 53721ILTHRS-CHFSCOS-2, 53721ILTHRS-CHFSNEO--11/2/15
Is this a bug within Solr, or am I constructing this query wrong? I am using Solr version 5.2.1.
What is the simplest way to query Solr for documents that contain text similar to a (longish) passage? This is similar to what Elasticsearch match queries do, or what probabilistic search engines like Indri do by default. It is something between an AND and an OR query: none of the terms is required, but you get documents that contain many of the terms. You can also just pass a passage of raw text to the engine and it returns documents with high term overlap with the passage, without having to parse or tokenize the text in the client. The best option I can see in the Solr query reference is to tokenize the query text myself, insert an OR between each pair of terms, and return the top N results. Is there a more concise way of doing this with Solr?
The answer above is correct. You can choose to find documents similar to another document in the index, similar to a given external URL or similar to some given text. You can choose what field(s) to target and various other parameters. Here's the official Solr Reference Guide documentation page for MLT: https://cwiki.apache.org/confluence/display/solr/MoreLikeThis
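As a rough sketch (assuming the passage text lives in a field called content, that the MoreLikeThis handler is registered at /mlt, and that your Solr version allows stream.body; all of these are assumptions), you can pass the raw passage as a content stream:
http://localhost:8983/solr/mycollection/mlt?mlt.fl=content&mlt.mintf=1&mlt.mindf=1&rows=10&stream.body=your+longish+passage+here
Solr analyzes the passage itself and returns the documents with the highest term overlap in content, so nothing has to be tokenized or OR-joined on the client.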
I would like to run complex queries in Solr 4. If I use Lucene directly, I can search using the XML Query Parser and get the results I need. However, I cannot see how to use the XML Query Parser in Solr.
I need to be able to execute queries with proximity searches, booleans, wildcards, span OR, and phrases (although these can be handled by proximity searches).
Guidance on material on how to proceed also welcome.
Regards
Puneet
As far as I know, it's still a work in progress; more info can be found in their Jira. You can of course use the normal query language, which is also capable of doing pretty complex things, for example:
"a proximity search"~2 AND *wildcards* OR "a phrase"
As you can see, you can search with phrases, boolean operators (AND, OR, ...), spans, proximity, and wildcards. For more information about the query syntax, look at the Lucene documentation. Solr also adds some extra features on top of the Lucene query parser, and more information about that can be found on the Solr wiki.
Solr 4.8 now has the "complexphrase" query parser built in that can construct all sorts of complex proximity queries (i.e. phrase queries with embedded boolean logic and wildcards).
You can use a query URL like:
http://xx.xxx.xx.xx:8983/solr/collectionname/select?indent=on&q={!complexphrase%20inOrder=true}"good*"&wt=json&fl=Category,keywords,ImageID
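To give an idea of the embedded boolean logic mentioned above (the field name name here is only an example, not taken from the collection above), a complexphrase query can also look roughly like:
q={!complexphrase inOrder=true}name:"(john OR jon) smith*"
which matches documents where a term matching smith* follows either john or jon, something the plain phrase syntax cannot express.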