I'm trying to build a Solr query for timecode values with the following format:
run_time:
00:25:00
00:30:00
01:00:00
I'd like to filter a range of timecodes. Everything 30 minutes or under, for example.
I've tried a couple of queries with no success.
run_time:[* TO 00:30:00]
run_time:[* TO 00\:30\:00]
run_time:[* TO *00:31:00Z]
Any thoughts?
See the Solr documentation on date math for more details about date operations:
run_time:[NOW-30MINUTES TO NOW]
Your use case appears to be searching for processes which ran for 30 minutes or less. A date field will not be the right fit here.
A better approach would be to store the run time information in a number field.
run_time:
25
30
60
Then you can use a range query on this field to get the desired results.
runtime:[0 TO 30]
This will fetch records with a runtime between 0 and 30 minutes, inclusive of both 0 and 30.
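A minimal sketch of how that numeric field could be declared in the Solr schema, assuming a recent Solr version where the default pint (IntPointField) type is available and using a hypothetical field name run_time_minutes:
<field name="run_time_minutes" type="pint" indexed="true" stored="true" /> <!-- hypothetical field name; pint assumes the stock IntPointField type definition -->
Index the values as 25, 30, 60 and the filter then becomes run_time_minutes:[0 TO 30].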
Using Solr 8.0.0, with each document holding a start timestamp field and an end timestamp field, how would I query in a way that returns just the duration between these dates? So I would be going for an equation like this:
(Endtime - Starttime) - 500 seconds = 23 seconds over expected duration.
But getting the result across all documents in the collection.
Would this be the subject of a streaming expression? Any example code you can give? I specifically want to keep this calculation load within the SolrCloud.
You can use a function query. The ms function gives you the difference in milliseconds between two dates. You can use sub to subtract 500 seconds from that number.
You can use the frange query parser to filter documents that fall within a given range, which means we end up with something like:
q={!frange l=0}sub(ms(endtime,starttime),500000)
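If you also want the computed value returned for each document rather than only used as a filter, a hedged sketch using a pseudo-field alias in fl (same endtime/starttime field names as above; the alias name over_expected is made up):
q={!frange l=0}sub(ms(endtime,starttime),500000)&fl=id,over_expected:sub(ms(endtime,starttime),500000)
The frange part keeps documents whose value is at least the lower bound l=0, and the fl alias returns the same subtraction, in milliseconds, for every matching document.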
I have some documents inserted in Vespa. All the documents have a particular timestamp (some have 1:00 PM, some have 1:15 PM, etc.). If I query Vespa with the current timestamp and the timestamp 15 minutes later (say 12:55 PM as the current time and 1:10 PM in my query), then Vespa must return the documents whose timestamp (i.e. 1:00 PM) lies between them. How can I achieve this? Please help.
See https://docs.vespa.ai/documentation/query-language.html and the range operator which works on numeric fields.
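A minimal sketch in YQL, assuming the timestamp is stored as epoch seconds in a numeric field named timestamp (hypothetical) and that the two bounds below stand in for 12:55 PM and 1:10 PM:
select * from sources * where range(timestamp, 1589979300, 1589980200);
The range operator matches documents whose field value lies between the lower and upper bound, both inclusive by default.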
I need to apply a range facet to a date field, but the gaps should be in minute intervals, like /10MIN. Solr uses DateMathParser to parse the given gaps, and unfortunately it does not seem to support minute intervals (the minimum supported gap being HOUR). Any ideas to handle this issue?
Thanks in advance.
My bad, my bad. You can easily use "MINUTE" for a minute range facet, e.g. "+10MINUTE" for 10-minute intervals.
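For example, a hedged sketch of the relevant facet parameters, assuming a date field named created_at (hypothetical); in an actual request URL the + signs would have to be encoded as %2B:
facet=true&facet.range=created_at&facet.range.start=NOW/DAY&facet.range.end=NOW/DAY+1DAY&facet.range.gap=+10MINUTE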
What am I doing wrong in this query?
SELECT * FROM TreatmentPlanDetails
WHERE
accountId = 'ag5zfmRvbW9kZW50d2ViMnIRCxIIQWNjb3VudHMYtcjdAQw' AND
status = 'done' AND
category = 'chirurgia orale' AND
setDoneCalendarEventStartTimestamp >= [timestamp for 6 june 2012] AND
setDoneCalendarEventStartTimestamp <= [timestamp for 11 june 2012] AND
deleteStatus = 'notDeleted'
ORDER BY setDoneCalendarEventStartTimestamp ASC
I am not getting any records, and I am sure there are records meeting the WHERE clause conditions. To get the correct records I have to widen the timestamp interval by 1 millisecond. Is that normal? Furthermore, if I modify this query by removing the category filter, I get the correct results. This is definitely weird.
I also asked on google groups, but I got no answer. Anyway, for details:
https://groups.google.com/forum/?fromgroups#!searchin/google-appengine/query/google-appengine/ixPIvmhCS3g/d4OP91yTkrEJ
Let's talk specifically about creating timestamps to go into the query. What code are you using to create the timestamp record? Apparently that's important, because fuzzing with it a little bit affects the query. It may be relevant that in the datastore, timestamps are recorded as integers representing posix timestamps with microseconds, i.e. the number of microseconds since 1/1/1970 UTC (not counting leap seconds). It's also relevant that dates (i.e. without a time) are represented as midnight, i.e. the earliest time on that day. But please show us the exact code. (It may also be important to show the actual content of the record that you're attempting to retrieve.)
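As a hedged illustration of that midnight point, the bounds could be written as GQL datetime literals (assuming legacy App Engine GQL literal syntax); using an exclusive upper bound at midnight of 12 June keeps all of 11 June in range without widening by a millisecond:
SELECT * FROM TreatmentPlanDetails
WHERE setDoneCalendarEventStartTimestamp >= DATETIME('2012-06-06 00:00:00')
AND setDoneCalendarEventStartTimestamp < DATETIME('2012-06-12 00:00:00')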
An aside that is not specific to your question: Entity property names count as part of your storage quota. If this is going to be a huge dataset, you might pay more $$ than you'd like for property names like setDoneCalendarEventStartTimestamp.
Because you write:
if I modify this query by removing the category filter, I am getting the correct results
this probably means that the category property was not indexed at the time you wrote the matching records to the datastore. You have to re-write your records to the datastore if you want them added to the newly created index.
I'm trying to get a Solr range query working. I have a database with over 12 million documents, and I am filtering by a few parameters, for example:
product_category:"category1" AND product_group:"group1" AND product_manu:"manufacturer1"
The query itself returns about 700 documents and executes in two to three seconds on average.
But when I want to add a date range facet to that query (I want to see how many products were added each day for the past x years), it executes in 50 seconds or more. So it seems it would be faster to just retrieve all matching documents and count them manually in Java.
So I guess I must be doing something wrong with faceting?
Here is an example faceted query:
start=0&rows=0&facet.query=productDate%3A[0999-12-26T23%3A36%3A00.000Z+TO+2012-05-22T15%3A58%3A05.232Z]&q=source%3A%22source1%22+AND+productCategory%3A%22category1%22+AND+type%3A%22type1%22&facet=true&facet.limit=-1&facet.sort=count&facet.range=productDate&facet.range.start=NOW%2FDAY-5000DAYS&facet.range.end=NOW%2FDAY%2B1DAY&facet.range.gap=%2B1DAY
My only explanation is that Solr is counting fields on some larger document pool than the 700 documents resulting from the "q=" parameter. Or maybe I should filter documents in another way?
I have tried changing the filterCache size and it works, but it seems like a waste of memory for queries like these. After all, aggregating over 700 documents should be very fast, shouldn't it?
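As a hedged illustration of "filtering documents in another way", the constant filters could be moved out of q and into fq parameters (which Solr caches independently), keeping the same field names as in the query above and the same facet range:
q=*:*&rows=0&fq=source:"source1"&fq=productCategory:"category1"&fq=type:"type1"&facet=true&facet.range=productDate&facet.range.start=NOW/DAY-5000DAYS&facet.range.end=NOW/DAY+1DAY&facet.range.gap=+1DAY
(In an actual request these values would need the same URL encoding as the example above.)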