Grouping results in Solr?

I have a Solr index with documents that look like this (sample query response):
{
  "responseHeader": {
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "q.op": "OR",
      "_": "1673422604341"
    }
  },
  "response": {
    "numFound": 1206,
    "start": 0,
    "numFoundExact": true,
    "docs": [
      {
        "material_name_s": "MaterialName1",
        "company_name_s": "CompanyName1",
        "price_per_lb_value_f": 1.11,
        "received_date_dt": "2015-01-01T00:00:00Z"
      },
      {
        "material_name_s": "MaterialName1",
        "company_name_s": "CompanyName2",
        "price_per_lb_value_f": 2.22,
        "received_date_dt": "2020-01-01T00:00:00Z"
      },
      {
        "material_name_s": "MaterialName1",
        "company_name_s": "CompanyName3",
        "price_per_lb_value_f": 3.33,
        "received_date_dt": "2021-01-01T00:00:00Z"
      },
      {
        "material_name_s": "MaterialName2",
        "company_name_s": "CompanyName1",
        "price_per_lb_value_f": 4.44,
        "received_date_dt": "2016-01-01T00:00:00Z"
      },
      {
        "material_name_s": "MaterialName2",
        "company_name_s": "CompanyName2",
        "price_per_lb_value_f": 5.55,
        "received_date_dt": "2021-01-01T00:00:00Z"
      },
      {
        "material_name_s": "MaterialName2",
        "company_name_s": "CompanyName3",
        "price_per_lb_value_f": 6.66,
        "received_date_dt": "2022-01-01T00:00:00Z"
      }
    ]
  }
}
These are historical prices for different materials from different companies.
I would like to get the lowest price_per_lb_value_f for each material_name_s in the last two years, so the results would look like this:
{
  "response": {
    "numFound": 2,
    "start": 0,
    "numFoundExact": true,
    "docs": [
      {
        "material_name_s": "MaterialName1",
        "company_name_s": "CompanyName3",
        "price_per_lb_value_f": 3.33,
        "received_date_dt": "2021-01-01T00:00:00Z"
      },
      {
        "material_name_s": "MaterialName2",
        "company_name_s": "CompanyName2",
        "price_per_lb_value_f": 5.55,
        "received_date_dt": "2021-01-01T00:00:00Z"
      }
    ]
  }
}
Is this kind of grouping even possible to do with Solr?
I'm a newbie to Solr, so any help would be appreciated.

Yes, grouping is possible in Solr.
You can get the result you want with either of the following queries.
Field collapsing approach (recommended in your case):
https://solr.apache.org/guide/solr/latest/query-guide/collapse-and-expand-results.html
http://localhost:8983/solr/test/select?indent=true&q.op=OR&q=received_date_dt:[NOW-3YEAR%20TO%20*]&fq={!collapse%20field=material_name_s%20min=price_per_lb_value_f}
q=received_date_dt:[NOW-3YEAR TO *] // range query that keeps only documents received in the last three years; with only a two-year window, the documents received on 2021-01-01 would not be returned
fq={!collapse field=material_name_s min=price_per_lb_value_f} // returns only one document per distinct material_name_s value: the one with the minimum price_per_lb_value_f
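If you also want to see the other documents inside each collapsed group, the expand component described on the same documentation page can be enabled. A minimal sketch, assuming the same core and fields (untested):
http://localhost:8983/solr/test/select?indent=true&q=received_date_dt:[NOW-3YEAR%20TO%20*]&fq={!collapse%20field=material_name_s%20min=price_per_lb_value_f}&expand=true&expand.rows=5
expand=true // adds an "expanded" section to the response containing the other documents from each collapsed group
expand.rows=5 // how many expanded documents to return per group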
Grouping approach: https://solr.apache.org/guide/solr/latest/query-guide/result-grouping.html
http://localhost:8983/solr/test/select?indent=true&q.op=OR&q=received_date_dt:[NOW-3YEAR%20TO%20*]&group=true&group.field=material_name_s&group.sort=price_per_lb_value_f%20asc
q:received_date_dt:[NOW-3YEAR TO *] // same filter as before
group:true // enable grouping
group.field:material_name_s // groups by material_name_s
group.sort:price_per_lb_value_f asc // sort each group by the field price_per_lb_value_f in ascending order
group.limit is not specified because its default value of 1 (one document returned per group) is exactly what you need here
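If you prefer a flat docs list like the one in your desired output instead of the nested grouped format, the Result Grouping page also documents group.main. A hedged sketch based on the query above (untested):
http://localhost:8983/solr/test/select?indent=true&q=received_date_dt:[NOW-3YEAR%20TO%20*]&group=true&group.field=material_name_s&group.sort=price_per_lb_value_f%20asc&group.main=true
group.main=true // returns the grouped result as a simple flat list of documents (one per group, given group.limit=1)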

Related

Showing facet result even though not a single result found for solrQuery

I'm working with Solr 8.0.0 and I am facing a problem when using the exclude tag.
My solr query looks like below:
http://localhost:8984/solr/HappyDemo202/select?q=*:*&
rows=6&
start=0&
wt=json&
fq={!tag=CATFACET}cat:((desktops))&
fq={!tag=TAGFACET}tag:((cool))&
fq={!tag=Price}Price:[1200 TO 1245]&
json.facet={CatFacet:{type:terms,field:cat,domain:{excludeTags:CATFACET},limit:-1,sort:{count:desc}},TagsFacet:{ type:terms,field:tag,domain:{excludeTags:TAGFACET},limit:-1,sort:{count:desc}}}
The output of the query looks like this:
{ "responseHeader": {
"status": 0,
"QTime": 0,
"params": {
"q": "*:*",
"json.facet": "{CatFacet:{type:terms,field:cat,domain:{excludeTags:CATFACET},limit:-1,sort:{count:desc}},TagsFacet:{ type:terms,field:tag,domain:{excludeTags:TAGFACET},limit:-1,sort:{count:desc}}}",
"start": "0",
"fq": [
"{!tag=CATFACET}cat:((desktops))",
"{!tag=TAGFACET}tag:((cool))",
"{!tag=Price}Price:[1200 TO 1245]"
],
"rows": "6",
"wt": "json"
}
}, "response": {
"numFound": 0,
"start": 0,
"docs": [] },
"facets": {
"count": 0,
"CatFacet": {
"buckets": []
},
"TagsFacet": {
"buckets": [
{
"val": "new",
"count": 1
},
{
"val": "new1",
"count": 1
}
]
} } }
When you check the output of the query, CatFacet is not showing any facet result because numFound is 0, but TagsFacet is showing two facet results, new and new1. I don't know what is going wrong; TagsFacet should not show those two facet results if numFound is 0.
Can you please suggest what's going wrong? Any help will be appreciated.
You're explicitly asking for the fq on tag to be excluded when computing the tag facet ({excludeTags:TAGFACET}), meaning the facet is counted as if that fq were not there, and without that fq there would be matching documents.
If you want the facets to count only the documents actually being returned, drop the excludeTags value from every facet that should reflect only the documents included in the result set.
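For illustration, this is roughly what the request looks like with the excludeTags removed, so that both facets count only documents matching all the filter queries (a sketch based on the query in the question, untested):
http://localhost:8984/solr/HappyDemo202/select?q=*:*&
rows=6&
start=0&
wt=json&
fq={!tag=CATFACET}cat:((desktops))&
fq={!tag=TAGFACET}tag:((cool))&
fq={!tag=Price}Price:[1200 TO 1245]&
json.facet={CatFacet:{type:terms,field:cat,limit:-1,sort:{count:desc}},TagsFacet:{type:terms,field:tag,limit:-1,sort:{count:desc}}}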

Elasticsearch - value in array filter

I want to filter out all documents which contain a specific value in an array field, i.e. the value is an element of that array field.
To be specific, I want to select all documents whose names field contains test-name; see the example below.
So when I do an empty search with
curl -XGET localhost:9200/test-index/_search
the result is
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 50,
    "max_score": 1,
    "hits": [
      {
        "_index": "test-index",
        "_type": "test",
        "_id": "34873ae4-f394-42ec-b2fc-41736e053c69",
        "_score": 1,
        "_source": {
          "names": [
            "test-name"
          ],
          "age": 100,
          ...
        }
      },
      ...
    ]
  }
}
But in case of a more specific query
curl -XPOST localhost:9200/test-index/_search -d '{
  "query": {
    "bool": {
      "must": {
        "match_all": {}
      },
      "filter": {
        "term": {
          "names": "test-name"
        }
      }
    }
  }
}'
I don't get any results
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}
There are some questions similar to this one, but I cannot get any of their answers to work for me.
System specs: Elasticsearch 5.1.1, Ubuntu 16.04
EDIT
curl -XGET localhost:9200/test-index
...
"names": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
...
That's because the names field is analyzed, so test-name gets indexed as the two tokens test and name.
A term query for test-name will hence not match anything, because term queries are not analyzed. If you use a match query instead, you'll get the document.
If you want to query for the exact value test-name (i.e. the two tokens one after another), then you need to change your names field to the keyword type instead of text.
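For example, a match query against the analyzed names field finds the document, because the query text goes through the same analyzer as the indexed value (a sketch reusing the index name from the question):
curl -XPOST localhost:9200/test-index/_search -d '{
  "query": {
    "match": {
      "names": "test-name"
    }
  }
}'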
UPDATE
According to your mapping, the names field is analyzed; use the names.keyword sub-field instead and it will work, like this:
curl -XPOST localhost:9200/test-index/_search -d '{
  "query": {
    "bool": {
      "must": {
        "match_all": {}
      },
      "filter": {
        "term": {
          "names.keyword": "test-name"
        }
      }
    }
  }
}'

Solr removing the 'e' from ace001 search term

Solr is removing the letter 'e' from search queries.
I'm pretty new to Solr, so I don't really know where to start looking to figure this out, but whenever I send a search query Solr is stripping out the 'e' character.
As you can see here when I try to search for the term ace001:
{
  "responseHeader": {
    "status": 0,
    "QTime": 1,
    "params": {
      "q": "_text:ace001",
      "indent": "true",
      "wt": "json",
      "debugQuery": "true",
      "_": "1478467316690"
    }
  },
  "response": {
    "numFound": 0,
    "start": 0,
    "docs": []
  },
  "debug": {
    "rawquerystring": "_text:ace001",
    "querystring": "_text:ace001",
    "parsedquery": "PhraseQuery(_text:\"ac 001 ac 001\")",
    "parsedquery_toString": "_text:\"ac 001 ac 001\"",
    "explain": {},
    "QParser": "LuceneQParser",
    "timing": {
      "time": 1,
      "prepare": {
        "time": 1,
        "query": {
          "time": 1
        },
        "facet": {
          "time": 0
        },
        "mlt": {
          "time": 0
        },
        "highlight": {
          "time": 0
        },
        "stats": {
          "time": 0
        },
        "spellcheck": {
          "time": 0
        },
        "debug": {
          "time": 0
        }
      },
      "process": {
        "time": 0,
        "query": {
          "time": 0
        },
        "facet": {
          "time": 0
        },
        "mlt": {
          "time": 0
        },
        "highlight": {
          "time": 0
        },
        "stats": {
          "time": 0
        },
        "spellcheck": {
          "time": 0
        },
        "debug": {
          "time": 0
        }
      }
    }
  }
}
Searching for a different term such as 'acb001' doesn't strip the 'b', but I noticed it does separate the numbers from the letters. I want Solr to match the term 'acb001' in the text field.
extract:
"rawquerystring": "_text:acb001",
"querystring": "_text:acb001",
"parsedquery": "PhraseQuery(_text:\"acb 001 acb 001\")",
"parsedquery_toString": "_text:\"acb 001 acb 001\"",
"explain": {},
"QParser": "LuceneQParser",
I would really appreciate some direction on how I can further debug or, ideally, fix this so that searching ace001 returns all occurrences of exactly that term.
Edit:
Schema is standard/default http://pastebin.com/59LbmJUp
This is happening because of solr.PorterStemFilterFactory. Your default search field is htmltext, which has
<filter class="solr.PorterStemFilterFactory"/>
in its query analysis chain.
The Porter stemmer stems the word "ace" to "ac".
You can check it here: https://tartarus.org/martin/PorterStemmer/voc.txt
Search for the word "ace".
Now look at https://tartarus.org/martin/PorterStemmer/output.txt, which lists the corresponding output after stemming; the stemmed form of "ace" is "ac".
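Another way to verify this locally is Solr's field analysis handler, which shows how each filter in the chain transforms the input. A minimal sketch, assuming the default /analysis/field handler is available and using <your-core> as a placeholder for your core name:
# shows the token stream produced for "ace001" by the _text field's index and query analyzers
curl "http://localhost:8983/solr/<your-core>/analysis/field?analysis.fieldname=_text&analysis.fieldvalue=ace001&analysis.query=ace001&wt=json"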
To solve this, remove that filter from both the query and index analysis chains in schema.xml.
You are also using WordDelimiterFilterFactory, which splits words on alphanumeric boundaries; that is why you see "ac" and "001". If you do not want that behaviour, remove that filter from schema.xml as well.
You are using the default schema.xml, which has a lot of filters you might not even need. I would suggest stripping it down to a few filters and then adding filters as you need them, rather than the other way around.

Solr sorting text in Polish

I have Solr 5.2.1 and the following definition of the field that is used for sorting:
<fieldType name="polishSortVarchar" class="solr.ICUCollationField" locale="pl_PL" strength="secondary" />
After reindexing, sorting almost works as I want:
{
  "responseHeader": {
    "status": 0,
    "QTime": 2,
    "params": {
      "fl": "name_varchar",
      "sort": "sort_name_varchar asc",
      "indent": "true",
      "q": "*:*",
      "_": "1454575147254",
      "wt": "json",
      "rows": "10"
    }
  },
  "response": {
    "numFound": 5250,
    "start": 0,
    "docs": [
      {
        "name_varchar": "\"Europą\" na Antarktydę"
      },
      {
        "name_varchar": "1:0 dla Korniszonka"
      },
      {
        "name_varchar": "1001 faktów o roślinach"
      }
    ]
  }
}
As you can see, in first position is a phrase whose first character is ". I want to filter out special characters and sort only by letters (so this phrase would be sorted under 'E').
Anybody?
I couldn't find a solution directly in Solr, so I clean the unnecessary characters during indexing:
// keep only letters, digits, hyphen, space and Polish diacritics in the sort value (e.g. '"Europą" na Antarktydę' becomes 'Europą na Antarktydę')
$sortValue = preg_replace('/[^A-Za-z0-9- zżźćńółęąśŻŹĆĄŚĘŁÓŃ]/u', '', $sortValue);

solr query returning doclist of length 1, despite numfound being greater than 1

When querying Solr with a group-by field, I get a response with "numFound" greater than 1, yet the "docs" attribute only shows one record.
The query is something like:
http://.../solr/.../select?q=*%3A*&fq=...&wt=json&indent=true&group=true&group.field=GroupingField_s&group.ngroups=true
The results are something like:
"grouped": {
"GroupingField_s": {
"matches": 3130,
"ngroups": 283,
"groups": [
{
"groupValue": "1111",
"doclist": {
"numFound": 7,
"start": 0,
"docs": [ {/*only 1 record shown here*/} ]
},
{
"groupValue": "222",
"doclist": {
"numFound": 5,
"start": 0,
"docs": [ {/*only 1 record shown here*/} ]
}, ....
]
}
You'll have to set the group.limit parameter. This defaults to 1.
group.limit (integer): Specifies the number of results to return for each group. The default value is 1.
See Result Grouping.
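For example, appending group.limit to the query from the question (the elided parts of the URL left as-is) returns up to 10 documents per group:
http://.../solr/.../select?q=*%3A*&fq=...&wt=json&indent=true&group=true&group.field=GroupingField_s&group.ngroups=true&group.limit=10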
