I have a query in Elasticsearch, shown below:
sample code
How can I perform the same using Solr?
You can search with a query string, as shown below:
http://domain/solr/core_name/select?q=*:*&fq=studentId:14466&wt=json&indent=true
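If you are querying from code, the same filter query can be sent with pysolr, for example; the core URL and field name below are copied from the URL above:

import pysolr

solr = pysolr.Solr('http://domain/solr/core_name', timeout=10)
# Equivalent to q=*:*&fq=studentId:14466 -- match everything, then filter.
results = solr.search('*:*', fq='studentId:14466')
for doc in results:
    print(doc)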
You can do a single match against a field in standard Lucene syntax:
q=field:value
Having field names with characters outside of [a-zA-Z0-9] isn't recommended, but you could probably escape them properly, e.g. student\ Id:14466.
I used the icu_tokenizer in a custom analyzer to create a search index for Japanese words, and the index was created successfully. I chose the icu_tokenizer because it works better for Asian languages than the default Azure Search tokenizer.
Now when I query for a string such as 赤城, I see multiple search results (131 in total) from the index. But when I use a wildcard search with the same word, e.g. 赤城* (adding * at the end of the word) or /赤城.*/ (a regex search query), I get 0 results. The weird part is that * seems to work with a single Japanese character: 赤* gives the same number of results as 赤 does. But as soon as the query contains more than one Japanese character, wildcard queries with * stop working and return 0 results. I am testing all of these queries in Search explorer on the Azure portal with queryType=full (Lucene query syntax).
In my application, search terms are normally used as prefix searches, so we append * to the end of the search string to fetch results, but these Lucene wildcard queries just do not work with Japanese characters. Any idea how I can make these prefix queries (with a wildcard * at the end of the search string) work when the search strings are in Japanese?
Any quick help will be much appreciated!!
I tested this with my installation and I can confirm that wildcards only work with Japanese content when you use a Japanese analyzer.
In my example I set up one index with a property Body that does not have a specific analyzer defined. Then I set up another index where Body uses the ja.microsoft language analyzer. The content in both indexes is identical. I then searched for 自動車 (automobile) with a trailing wildcard.
自動車* returns multiple hits from the index using the Japanese analyzer. No hits are returned from the index without a specific analyzer defined.
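For reference, here is a minimal sketch of how such an index could be defined with the azure-search-documents Python SDK; the index and field names are placeholders, and only the ja.microsoft analyzer name comes from the description above:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SimpleField, SearchableField, SearchFieldDataType,
)

# Placeholder service endpoint and admin key.
client = SearchIndexClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"),
)

index = SearchIndex(
    name="docs-ja",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        # The Japanese analyzer is what makes trailing-wildcard queries
        # such as 自動車* behave as expected.
        SearchableField(name="Body", type=SearchFieldDataType.String,
                        analyzer_name="ja.microsoft"),
    ],
)
client.create_or_update_index(index)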
Sorry for the late reply.
Have you tried using one of the Japanese language analyzers? For example, ja.microsoft
Also, if you want to use prefix search, you can try experimenting with the suggester feature which is designed to be efficient for this scenario.
I need to do a Solr search for a string like BEBIL1407-GREEN with a special character (-), but it ignores the - and searches only for BEBIL1407. I need to search for the whole word. I'm using Solr 4.5.1.
Example Query :
q=BEBIL1407-GREEN&qt=select&start=0&rows=24&fq=clg%3A%222%22&fq=isAproved%3A%22Y%22&fl=id
Your question is about searching for BEBIL1407-GREEN but finding BEBIL1407.
You did not post your schema or your query parser.
By default, Solr uses the standard query parser on the field "text" with the field type "text_general".
You can use the Solr Analysis screen to check how a word (from real text) is turned into the corresponding tokens in the index.
With "text_general", the word "BEBIL1407-GREEN" becomes two tokens: "bebil1407" and "green".
The standard parser does support escaping of special characters, which would help if your word started with a hyphen (minus sign). But in this case the tokenizer is most likely the reason for "finding unexpected documents".
Solution:
You can search with a phrase. In this case "BEBIL1407-GREEN" will also find "BEBIL1407 GREEN".
You can use another field type, e.g. one with a WhitespaceTokenizer (see the sketch below).
Hope this helps; otherwise, post your search field and its definition from schema.xml...
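For the second option, a field type along these lines could go into schema.xml; the type and field names below are only illustrative, not taken from your setup:

<!-- Splits only on whitespace, so BEBIL1407-GREEN stays a single (lower-cased) token -->
<fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="product_code" type="text_ws" indexed="true" stored="true"/>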
My requirement is simple.
I need to search with the keyword similar to SQL LIKE.
Currently the search returns results for whole "words" rather than matching partial character sequences.
Example:
Search query: "test"
Expected results: "test%", which gives "test", "tested", "testing", etc.
Actual result: "test"
I found many query suggestions for Solr, but I need the exact mechanism to put this into the conf XML files.
Thanks in advance.
The quick and dirty solution is to use a wildcard in your search query with an asterisk (*), for example: test*
The more proper solution would be to use stemming to remove common word endings when you index and query the data. In the default schema, the text_en_splitting field type would do this for you. Just define your field as text_en_splitting.
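Since the question asks about the conf XML: in schema.xml this just means declaring the search field with that type; the field name here is only a placeholder:

<!-- text_en_splitting ships with the default example schema -->
<field name="description" type="text_en_splitting" indexed="true" stored="true"/>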
Are you building auto-complete?
If so, use Suggester. It's part of Solr, and it does what you're talking about extremely efficiently using either a dictionary file, or a field in your index you've designated.
http://wiki.apache.org/solr/Suggester
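The configuration details differ between Solr versions, but on recent releases the Suggester is wired up in solrconfig.xml roughly as sketched below; the component name, dictionary field, and handler path are placeholders:

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.count">10</str>
    <str name="suggest.dictionary">mySuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>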
I have a solr index with indexed text.
I'd like to query documents that start with a certain term.
I didn't find a way to do that with the lucene or dismax query parser.
Is there a way to do that?
A solution I thought of is to index the strings with a special token at the beginning of each line, e.g. "STARTOFTEXT", and then query for "STARTOFTEXT something".
Is there a nicer solution?
What about making a field in the schema that contains the first word? Then when you build the document you can grab the first word and store it separately from the rest of the text.
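A rough sketch of that idea with pysolr; the core URL, field names, and sample text are made up for illustration:

import pysolr

solr = pysolr.Solr('http://localhost:8983/solr/my_core')

text = "quick brown fox jumps over the lazy dog"
solr.add([{
    'id': 'doc-1',
    'text': text,
    # Store the leading term separately, so q=first_word:quick only
    # matches documents whose text starts with "quick".
    'first_word': text.split()[0] if text.strip() else '',
}])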
Solr newbie here.
I have created a Solr index and written a whole bunch of docs into it. I can see from the Solr admin page that the docs exist and that the schema is fine as well.
But when I perform a search using a test keyword I do not get any results back.
On entering *:* into the query (in the Solr admin page) I get all the results.
However, when I enter any other query (e.g. a term or phrase) I get no results.
I have verified that the field being queried is Indexed and contains the values I am searching for.
So I am confused what I am doing wrong.
Probably you don't have a <defaultSearchField> correctly set up. See this question.
Another possibility: your field is of type string instead of text. String fields, in contrast to text fields, are not analyzed, but stored and indexed verbatim.
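To illustrate the difference in schema.xml terms (the field names here are only examples):

<!-- Matched only verbatim: the query must equal the whole stored value -->
<field name="title_exact" type="string" indexed="true" stored="true"/>
<!-- Tokenized and lower-cased, so individual terms and phrases match -->
<field name="title" type="text_general" indexed="true" stored="true"/>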
I had the same issue with a new setup of Solr 8. The accepted answer is not valid anymore, because the <defaultSearchField> configuration has been deprecated.
As I found no answer to why Solr was not returning results from any field despite the fields being indexed, I consulted the query documentation. What I found is the DisMax query parser:
The DisMax query parser is designed to process simple phrases (without complex syntax) entered by users and to search for individual terms across several fields using different weighting (boosts) based on the significance of each field. Additional options enable users to influence the score based on rules specific to each use case (independent of user input).
In contrast, the default Lucene parser only speaks about searching one field. So I gave DisMax a try and it worked very well!
Query example:
http://localhost:8983/solr/techproducts/select?defType=dismax&q=video
You can also specify which fields to search exactly to prevent unwanted side effects. Multiple fields are separated by spaces which translate to + in URLs:
http://localhost:8983/solr/techproducts/select?defType=dismax&q=video&qf=features+text
Last but not least, give the fields a weight:
http://localhost:8983/solr/techproducts/select?defType=dismax&q=video&qf=features^20.0+text^0.3
If you are using pysolr like I do, you can add those parameters to your search request like this:
import pysolr

# Assumes the techproducts core from the example URLs above.
solr = pysolr.Solr('http://localhost:8983/solr/techproducts')
results = solr.search('search term', **{
    'defType': 'dismax',
    'qf': 'features text',
})
In my case the problem was the format of the query. It seems that my setup, by default, was looking for an exact match against the entire value of the field. So, in order to get results when searching for sit, I had to query *sit*, i.e. use wildcards to get the expected result.
With Solr 4, I had to solve this as per Mauricio's answer by setting type="text_en" on the field.
With Solr 6, use text_general.