I'm having a hard time pinning down why my Solr date range search is not working. I am building on an existing working search, adding two new fields to assist with accommodation search.
I add the following two fields to the schema - The first is effectively an array of dates, and the second is a single value:
<field name="available_checkin_dates" type="date" indexed="true" stored="false" multiValued="true" />
<field name="available_unit_count" type="int" indexed="true" stored="false" />
I verified that the index document was created and sent to Solr with the two fields populated, but the following search terms yield no results:
* AND available_checkin_dates:[* TO NOW]
* AND available_checkin_dates:[NOW TO *]
* AND available_checkin_dates:"2012-08-31T00:00:00.0000000Z"
* AND available_checkin_dates:"2012-08-31T00:00:00Z"
* AND available_unit_count:1
* AND available_unit_count:*
Either I'm using the wrong syntax, or the documents didn't get indexed. I'm having a hard time reading the catalina logs, and I can't find a tool that inspects the actual indexed documents.
Any ideas on how to help me nail this one down? I'm a relative Solr newbie.
Never mind, there was a problem with the auto-commit settings, so the buffer wasn't getting flushed. Documents were being sent with commit set to false, but the auto-commit settings weren't in place to flush once the number of uncommitted documents reached a threshold.
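For anyone hitting the same thing, here is a sketch of the kind of solrconfig.xml settings involved (the thresholds are illustrative, not the values from my setup):

```xml
<!-- solrconfig.xml: flush/commit buffered documents automatically -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs> <!-- commit once this many docs are uncommitted -->
    <maxTime>60000</maxTime> <!-- or after 60 s, whichever comes first -->
  </autoCommit>
</updateHandler>
```

With neither maxDocs nor maxTime configured, documents sent with commit=false just sit in the buffer and never become searchable.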
I have tens of fields defined in my Solr managed-schema; two of them are as below:
<field name="isBookmarked" type="boolean" indexed="true" stored="true" required="false" multiValued="false" />
<field name="bookmarkedPathologists" type="string" indexed="true" stored="true" required="false" multiValued="true" />
Now, I want to set the isBookmarked value to 'true' or 'false' on the fly while querying, depending on whether bookmarkedPathologists has some value.
After that, I'm sorting on the isBookmarked field.
Is this possible? Any help appreciated.
I struggled a lot and finally got lucky and solved my problem with the approach below.
Since on-the-fly updates would need to be committed to Solr before they could show up in a sorted result, my application, which is a Solr client, couldn't sort on updated/dirty values, if any.
So I added a function query to my query criteria: * exists(query({!v='bookmarkedPathologists:patho'})). This takes all (*) results and evaluates, per document, a new on-the-fly field named exists(query({!v='bookmarkedPathologists:patho'})) in the JSON response, like this:
...
"isBookmarked": false,
"bookmarkedPathologists": [
    "patho1"
],
...
"_version_": 1582235372763480000,
"exists(query({!v='bookmarkedPathologists:patho'}))": false
Then I applied the sort order over that same field, i.e. sort=exists(query({!v='bookmarkedPathologists:patho'})) asc.
So Solr returned the response sorted by exists(query({!v='bookmarkedPathologists:patho'})).
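For reference, the relevant request parameters looked roughly like this (the field and query values are from my example; aliasing the pseudo-field in fl is optional):

```
fl=*,exists(query({!v='bookmarkedPathologists:patho'}))
sort=exists(query({!v='bookmarkedPathologists:patho'})) asc
```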
Solr's function queries helped me a lot here; see the Function Queries documentation.
As I understand it, you want to update the field while querying the data.
Solr is written in Java, and you interface with it through REST-like services.
The service for search is at:
/solr/<CollectionName>/select
and the service for updates is at:
/solr/<CollectionName>/update
So you can't do both with the same query.
If you want to update externally (using a separate request), refer to Solr's update documentation.
I am new to Solr and I need to implement a full-text search of some PDF files. The indexing part works out of the box by using bin/post. I can see search results in the admin UI given some queries, though without the matched texts and the context.
Now I am reading this post for the highlighting part. It is for an older version of Solr, from before the managed schema was available. Before fully understanding what it does, I have some questions:
He defined two fields:
<field name="content" type="text_general" indexed="false" stored="true" multiValued="false"/>
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
But why are two fields needed? Can I define a single field
<field name="content" type="text_general" indexed="true" stored="true" multiValued="true"/>
to capture the full text?
How are the fields filled? I don't see the relevant information in TikaEntityProcessor's documentation. The current text extractor should already be Tika (I can see
"x_parsed_by":
["org.apache.tika.parser.DefaultParser","org.apache.tika.parser.pdf.PDFParser"]
in the returned JSON of some queries). But even if I define the fields as he said, I cannot see them as keys in the JSON search results.
The _text_ field seems to be a concatenation of other fields; does it contain the full text? It does not seem to be accessible by default, though.
To be brief, using The Elements of Statistical Learning as an example: how do I highlight the relevant text for the query "SVM"? And if I change the file name to "The Elements of Statistical Learning - Trevor Hastie.pdf" and post it, how do I highlight "Trevor Hastie" for the query "id:Trevor Hastie"?
Before I get started on the questions, let me briefly describe how Solr works. Solr at its core uses Lucene, which, simply put, is a matching engine: it creates inverted indexes of documents by their terms. This means that for each term it keeps a list of the documents containing it, which is what makes it so fast. Getting to your questions:
Solr does not convert your PDF to text itself; it is the update processor configured in the request handler that does it. This can be configured in solrconfig.xml, or you can write your own handler.
Coming back to why there are two fields: simply put, the first one (content) is a stored field, which stores the data as-is. The second one is a copyField target, which receives a copy of the data for each document as configured in schema.xml.
We do this because we can then choose an indexing strategy. For example, we add a lowercase filter factory to the text field so that everything is indexed in lower case; then "Sam" and "sam" return the same results when searched. Or we remove common words such as "a" and "the", which would otherwise unnecessarily increase the index size. Index size matters when you are dealing with millions of records, so you want to be careful about which fields to index in order to better utilise your resources.
The "text" field is a copyField target that collects data from the fields listed in the schema. When searching in general, you then don't need to fire a separate query for each field: everything is copied into the "text" field and you get the result. This is also why it is multiValued: it can store an array of values. content is stored and text is not, and the opposite for indexed, because when you return results to the end user you show them what you saved, not the stripped-down data produced by the filters applied to the text field (stop-word removal, case filters, stemming, etc.).
This is the reason you do not see the "text" field in the search result: it is used internally by Solr.
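To make the two-field setup concrete, here is a minimal schema.xml sketch (field names follow the question; analyzer details omitted):

```xml
<!-- stored as-is, returned to the user -->
<field name="content" type="text_general" indexed="false" stored="true" multiValued="false"/>
<!-- indexed (lowercased, stop-filtered, ...), used for matching only -->
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
<!-- copy the searchable data into the catch-all field -->
<copyField source="content" dest="text"/>
```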
For highlighting see this.
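As a sketch, assuming you make a content field that is both indexed and stored, a highlighting request would look roughly like this (these are standard Solr highlighting parameters; the core name is made up):

```
/solr/mycore/select?q=content:SVM&hl=true&hl.fl=content&hl.snippets=3&hl.fragsize=150
```

The matched fragments come back in a separate highlighting section of the response, keyed by document id.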
For more, there are some great blogs by Yonik and Joel.
Hope this helps. :)
I'm improving an existing search system which is using Solr 3.6
I'm trying to boost search results using following function:
{!boost b=recip(sub(1,floor(strdist("someText",myField,jw))),1000000,1,1)}searchText
searchText - some text that user searches for;
myField - a custom indexed document field; the value can be an empty or non-empty string;
In short, this function divides by 1000001 scores of all search results where myField's value is not equal to someText. In this way results with specified myField's value are on top ordered by their original score.
But in my case the field is there and the value is present in the field, yet the result's score is also divided and the result lands somewhere deep down...
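To make the arithmetic concrete, here is a small Python sketch of what the boost evaluates to (recip(x,m,a,b) = a/(m*x+b), per the Solr function query docs; strdist_value here is just a placeholder number, not an implementation of Jaro-Winkler):

```python
import math

def recip(x, m, a, b):
    # Solr's recip function: a / (m*x + b)
    return a / (m * x + b)

def boost(strdist_value):
    # b=recip(sub(1, floor(strdist(...))), 1000000, 1, 1):
    # strdist is 1.0 only for an exact match; floor() maps any
    # partial similarity to 0, so the boost is either 1 or 1/1000001.
    return recip(1 - math.floor(strdist_value), 1_000_000, 1, 1)
```

An exact match keeps the score (boost of 1); anything else divides it by 1000001, which is why mismatching documents sink to the bottom.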
When I use:
fq=myField:[* TO *]
Solr keeps only the results where the field has a value. So the field is recognized...
There is another legacy string field. When I apply my function using that field, everything works as it should. But when I use my field it fails.
Do you have any ideas of what might be wrong? What should I look for?
Please help. I've spent lots of time without success already, but I'm new to Solr and not able to resolve this issue so far...
Thank you!
I kept asking around and, luckily, I've resolved my problem :)
So the problem was in the declaration of the field in schema.xml.
The legacy field was declared like this:
<field name="searchText" type="string" indexed="true" stored="true" multiValued="false"/>
and my field was declared like this:
<field name="myField" type="text" indexed="true" stored="true" multiValued="false"/>
And that's why my boost function did not work: the "text" type is tokenized, so strdist never gets the whole field value to compare as a single string, while the "string" type keeps the value verbatim.
So I changed the type to "string" and everything works OK now :)
I have a Solr 4.4.0 core configured that contains about 630k documents with an original size of about 10 GB. Each of the fields gets copied to the text field for purposes of queries and highlighting. When I execute a search without highlight, the results come back in about 100 milliseconds, but when highlighting is turned on, the same query takes 10-11 seconds. I also noticed that subsequent queries for the same terms continued to take about the same 10-11 seconds.
My initial configuration of the field was as follows
<field name="text" type="text_general" indexed="true" stored="true"
multiValued="true"
omitNorms="true"
termPositions="true"
termVectors="true"
termOffsets="true" />
The query that is sent is similar to the following
http://solrtest:8983/solr/Incidents/select?q=error+code&fl=id&wt=json&indent=true&hl=true&hl.useFastVectorHighlighter=true
All my research seems to provide no clue as to why the highlight performance is so bad. On a whim, I decided to see if the omitNorms=true attribute could have an effect, I modified the text field, wiped out the data, and reloaded from scratch.
<field name="text" type="text_general" indexed="true" stored="true"
multiValued="true"
termPositions="true"
termVectors="true"
termOffsets="true" />
Oddly enough, this seemed to fix things. The initial query with highlighting took 2-3 seconds with subsequent queries taking less than 100 milliseconds.
However, because we want the omitNorms=true in place, my permanent solution was to have two copies of the "text" field, one with the attribute and one without. The idea was to perform queries against one field and highlighting against the other. So now the schema looks like
<field name="text" type="text_general" indexed="true" stored="true"
multiValued="true"
omitNorms="true"
termPositions="true"
termVectors="true"
termOffsets="true" />
<field name="text2" type="text_general" indexed="true" stored="true"
multiValued="true"
termPositions="true"
termVectors="true"
termOffsets="true" />
And the query is as follows
http://solrtest:8983/solr/Incidents/select?q=error+code&fl=id&wt=json&indent=true&hl=true&hl.fl=text2&hl.useFastVectorHighlighter=true
Again, the data was cleared and reloaded with the same 630k documents but this time the index size is about 17 GB. (As expected since the contents on the "text" field is duplicated.)
The problem is that the performance numbers are back to the original 10-11 seconds on each run. Either the first removal of omitNorms was a fluke, or something else is going on. I have no idea what...
Using jVisualVM to capture a CPU sample shows the following two methods using most of the CPU
org.apache.lucene.search.vectorhighlight.FieldPhraseList.<init>() 8202 ms (72.6%)
org.eclipse.jetty.util.BlockingArrayQueue.poll() 1902 ms (16.8%)
I have seen the init method as low as 54% and the poll number as high as 30%.
Any ideas? Any other places I can look to track down the bottleneck?
Thanks
Update
I have done a bunch of testing with the same dataset but different configurations and here is what I have found...although I do not understand my findings.
Speedy highlighting performance requires that omitNorms not be set to true. (I have no idea what omitNorms and highlighting have to do with one another.)
However, this only seems to hold if both the query and the highlighting are executed against the same field (i.e. df = hl.fl). (Again, no idea why...)
And, a further however: only if it is done against the default text field that exists in the schema.
Here is how I tested -->
Test was against about 525,000 documents
Almost all of the fields were copied to the multi-valued text field
In some tests, almost all of the fields were also copied to a second multi-valued text2 field (this field was identical to text except that it had the opposite omitNorms setting)
Each time the configuration was changed, the Solr instance was stopped, the data folder was deleted, and the instance was started back up
What I found -->
When just the text field was used and omitNorms = true was present, performance was bad (10 second response time)
When just the text field was used and omitNorms = true was not present, performance was great (sub-second response times)
When text did not have omitNorms = true and text2 did, queries with highlighting against text returned in sub-second times; all other combinations resulted in 10-30 second response times.
When text did have omitNorms = true and text2 did not, all combinations of queries with highlighting returned in 7-10 seconds.
I am soooo confused....
I know that this is a bit dated, but I've run into the same issue and wanted to chime in with our approach.
We are indexing text from a bunch of binary docs and need Solr to maintain some metadata about each document as well as its text. Users need to search for docs based on the metadata, run full-text searches within the content, and see highlights and snippets of the relevant content. The performance problem gets worse when the content to highlight or snippet is located deeper within a document (e.g. on page 50 instead of page 2).
Due to poor performance of highlighting, we had to break up each document into multiple solr records. Depending on the length of the content field, we will chop it up into smaller chunks, copy the metadata attributes to each record and assign a per-document unique id to each record. Then at query time, we will search the content field of all these records and group by that unique field we assigned. Since the content field is smaller, Solr will not have to go deep into each content field, plus from an end user standpoint, this is completely transparent; although it does add a bit of indexing overhead for us.
Additionally, if you choose this approach, you may want to consider overlapping the sections a little bit between each "sub document", to ensure that a phrase match at the boundary of two sections still gets properly returned.
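A minimal sketch of the chunking idea in Python (the sizes and names here are made up for illustration; in practice we carry the document's metadata onto every chunk record):

```python
def chunk_text(text, chunk_size, overlap):
    """Split text into overlapping chunks so that a phrase straddling a
    chunk boundary still falls entirely inside at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def to_solr_docs(doc_id, metadata, text, chunk_size=5000, overlap=200):
    # One Solr record per chunk; group on group_id at query time
    # (e.g. group=true&group.field=group_id) to collapse back to documents.
    return [
        dict(metadata, id=f"{doc_id}-{i}", group_id=doc_id, content=chunk)
        for i, chunk in enumerate(chunk_text(text, chunk_size, overlap))
    ]
```

Since each content field is small, the highlighter never has to scan deep into a huge stored value, at the cost of some indexing overhead.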
Hope it helps.
Given: a list of consultants with a list of intervals when they are NOT available:
<consultant>
<id>1</id>
<not-available>
<interval><from>2013-01-01</from><to>2013-01-10</to></interval>
<interval><from>2013-01-20</from><to>2013-01-30</to></interval>
...
</not-available>
</consultant>
...
I'd like to search for consultants that are available (!) for at least X days in a specific interval from STARTDATE to ENDDATE.
Example: Show me all consultants that are available for at least 5 days in the range 2013-01-01 - 2013-02-01 (this would match consultant 1 because he is free from 2013-01-11 to 2013-01-19).
Question 1: What should my Solr document look like?
Question 2: What does the query have to look like?
As general advice: precalculate as much as you can, and store the data that you are querying for rather than the data you receive as input.
Also, use several indexes based on different entities - if you have the liberty to do so, and if the queries would become simpler and more straight forward.
Ok, generalities aside and on to your question.
From your example I take it that you currently store in the index when a consultant is not available, probably because that is what you get as input. But what you want to query is when they are available. So you should think about storing the availability rather than the non-availability.
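As a sketch of that precalculation step in plain Python (names are my own; this assumes the busy intervals lie within the horizon and do not overlap):

```python
from datetime import date, timedelta

ONE_DAY = timedelta(days=1)

def available_intervals(not_available, horizon_start, horizon_end):
    """Invert busy intervals into free (start, end, length_in_days)
    intervals within [horizon_start, horizon_end], ends inclusive."""
    free = []
    cursor = horizon_start
    for busy_from, busy_to in sorted(not_available):
        if busy_from > cursor:
            free.append((cursor, busy_from - ONE_DAY))
        cursor = max(cursor, busy_to + ONE_DAY)
    if cursor <= horizon_end:
        free.append((cursor, horizon_end))
    return [(s, e, (e - s).days + 1) for s, e in free]
```

For consultant 1 this yields a free interval from 2013-01-11 to 2013-01-19 (9 days), which is exactly the kind of record you would index instead of the busy intervals.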
EDIT:
The most straightforward way to query this is to treat the intervals themselves as entities, so that you do not have to resort to special Solr features for querying the start and the end of an interval across two multi-valued fields.
Once you have stored the availability intervals you can also precalculate and store their lengths:
<!-- id of the interval -->
<field name="id" type="int" indexed="true" stored="true" multiValued="false" />
<field name="consultant_id" type="int" indexed="true" stored="true" multiValued="false" />
<!-- make sure that the time is set to 00:00:00 (*/DAY) -->
<field name="interval_start" type="date" indexed="true" stored="true" multiValued="false" />
<!-- make sure that the time is set to 00:00:00 (*/DAY) -->
<field name="interval_end" type="date" indexed="true" stored="true" multiValued="false" />
<field name="interval_length" type="int" indexed="true" stored="true" multiValued="false" />
Your query:
(1.) Optionally, retrieve all intervals that have at least the requested length:
fq=interval_length:[5 TO *]
This is an optional step. You might want to benchmark whether it improves the query performance.
Additionally, you could also filter on certain consultant_ids.
(2.) The essential query is for the interval (use q.alt in case of dismax handler):
q=interval_start:[2013-01-01T00:00:00.000Z TO 2013-02-01T00:00:00.000Z-5DAYS]
interval_end:[2013-01-01T00:00:00.000Z+5DAYS TO 2013-02-01T00:00:00.000Z]
(linebreak added for readability; since both constraints must hold, join the two clauses with AND, or make sure the default operator is AND)
Make sure that you always set the time to the same value. Best is 00:00:00, because that is what /DAY rounds to: http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/util/DateMathParser.html .
The fewer distinct values, the better the caching.
More info:
http://wiki.apache.org/solr/SolrQuerySyntax - Solr Range Query
http://wiki.apache.org/solr/SolrCaching#filterCache - caching of fq filter results
EDIT:
More info on q and fq parameters:
http://wiki.apache.org/solr/CommonQueryParameters
They are handled differently when it comes to caching; that's why I added the other link (see above) in the first place. Use fq for filters that you expect to recur across queries. You can combine multiple fq parameters, while you can specify q only once per request.
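For example (dates resolved for readability; field names are from the schema above):

```
q=interval_start:[2013-01-01T00:00:00.000Z TO 2013-01-27T00:00:00.000Z]
&fq=interval_length:[5 TO *]
&fq=consultant_id:(1 OR 7 OR 42)
```

Each fq result set is cached independently in the filterCache, so recurring filters stay cheap across requests.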
How can I "use several indexes based on different entities"?
Have a look at the multicore feature: http://wiki.apache.org/solr/CoreAdmin
Would it be overkill to save, for each available day, date;num_of_days_to_end_of_interval? That should make querying much simpler.
It depends a bit on how much more data you would be expecting in that case. I'm also not sure it would really help with the query you posted. The date range queries are very flexible and fast; you don't need to avoid them. Just make sure you specify the time as coarsely as you can to allow for caching.