I have documents that have an array field, like:
doc1
{
"item_type":"bag",
"color":["red","blue","green","orange"]
}
doc2
{
"item_type":"shirt",
"color":["red"]
}
When I do a multi_match search like:
{
"query": {
"multi_match": {
"query": "red bag",
"type": "cross_fields",
"fields": ["item_type","color"]
}
}
}
doc2 gets a much higher score. I understand that a color field with fewer items gets a higher score, and it gets worse the more colors doc1 has.
So is there a way to ask Elasticsearch to score an array field the same no matter how many items it contains?
If you do not want to account for field length (fieldNorm) during scoring, you can disable norms for the field in the mapping.
For example, the mapping for the documents above would be:
{
"properties": {
"item_type": {
"type": "string"
},
"color": {
"type": "string",
"norms": {
"enabled": false
}
}
}
}
This article from the Elasticsearch Definitive Guide gives good insight into field-length norms.
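Note that "type": "string" and the "norms": {"enabled": false} object are the pre-5.x mapping syntax. On Elasticsearch 5.x and later the equivalent mapping (a sketch, untested against your index) would be:
{
  "properties": {
    "item_type": {
      "type": "text"
    },
    "color": {
      "type": "text",
      "norms": false
    }
  }
}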
I'm on Elasticsearch 6.8.22.
I have multiple users and each one has multiple papers ("valid" or not):
{"name":"Amy",
"papers":[
{"type":"idcard", "country":"fr", "valid":"no"},
{"type":"idcard", "country":"us", "valid":"yes"}
]}
{"name":"Brittany",
"papers":[
{"type":"idcard", "country":"fr", "valid":"no"},
{"type":"idcard", "country":"us", "valid":"no"}
]}
{"name":"Chloe",
"papers":[
{"type":"idcard", "country":"fr", "valid":"yes"},
{"type":"idcard", "country":"us", "valid":"no"}
]}
I'm trying to find only users with a paper that is "valid" for "fr":
{"query": {
"bool": {
"filter": [
{"match":{"papers.valid": "yes"}},
{"match":{"papers.country": "fr"}}
]}}}
It returns Chloe, which is fine (she has a paper which is both "valid" and "fr").
But it also returns Amy, because she has one "valid" paper and another one which is "fr".
This is due to the fact that ES doesn't understand arrays of objects and flattens everything into arrays of values (as far as I understand).
I've tried using "combined term queries" from this link, but I guess that only works for arrays of primitives (not complex objects).
I've seen that I can turn the arrays into nested objects to do what I need, but that seems overcomplicated and would slow down the queries (because of hidden joins).
My question is:
Is there any way I can check whether a document has, in its array of objects, one that matches multiple criteria at the same time?
(Originally, I wanted a query that checks that every entry in "papers" matches the criteria, but that seems impossible, e.g. all papers of type "idcard" must be "valid".)
You need to define papers as a nested field in the mapping; then you can run a nested query on it:
https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html
So if, for example, your mapping is this:
{
"mappings": {
"properties": {
"name": {
"type": "keyword"
},
"papers": {
"type": "nested",
"properties": {
"type": {
"type": "keyword"
},
"country": {
"type": "keyword"
},
"valid": {
"type": "keyword"
}
}
}
}
}
}
then this query will work:
{
"query": {
"nested": {
"path": "papers",
"query": {
"bool": {
"filter": [
{
"term": {
"papers.valid": "yes"
}
},
{
"term": {
"papers.country": "fr"
}
}
]
}
}
}
}
}
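As for the parenthetical at the end of the question (all papers of type "idcard" must be "valid"), one way to express that with the same nested mapping is to exclude any document that has at least one invalid idcard. A sketch (assuming "valid" only ever holds "yes" or "no", as in the sample data):
{
  "query": {
    "bool": {
      "must_not": [
        {
          "nested": {
            "path": "papers",
            "query": {
              "bool": {
                "filter": [
                  { "term": { "papers.type": "idcard" } },
                  { "term": { "papers.valid": "no" } }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
Note that this also matches users who have no "idcard" papers at all; add a positive nested clause if that is not what you want.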
I'm searching for some text in a field.
The problem is that whenever two documents contain all of the search tokens, the document that contains the search tokens more often gets a higher score instead of the shorter document.
My Elasticsearch index contains names of foods, and I want to search for a food in it.
The document structure is like this:
{"text": "NAME OF FOOD"}
Now I have two documents like
1: {"text": "Apple Syrup Apple Apple Syrup Apple Smoczyk's"}
2: {"text": "Apple Apple"}
If I search using this query
{
"query": {
"match": {
"text": {
"query": "Apple"
}
}
}
}
The first document comes first because it contains more occurrences of Apple, which is not the result I expect. It would be better if the second document got a higher score, because it also has Apple in it and is shorter than the first one.
Elasticsearch scoring gives weight to term frequency and field length. In general, shorter fields are scored higher, but term frequency can offset that.
You can use the unique token filter to generate unique tokens for the text. This way, multiple occurrences of the same token will not affect the scoring.
Mapping
{
"mappings": {
"properties": {
"text": {
"type": "text",
"analyzer": "my_analyzer"
}
}
},
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"filter": [
"unique", "lowercase"
]
}
}
}
}
}
Analyze
GET index29/_analyze
{
"text": "Apple Apple",
"analyzer": "my_analyzer"
}
Result
{
"tokens" : [
{
"token" : "apple",
"start_offset" : 0,
"end_offset" : 5,
"type" : "<ALPHANUM>",
"position" : 0
}
]
}
Only a single token is generated even though apple appears twice.
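A match query can then be run as before; repeated tokens in a document no longer increase its term frequency, so the longer document loses that advantage. For example (index name index29 taken from the _analyze call above):
GET index29/_search
{
  "query": {
    "match": {
      "text": {
        "query": "Apple"
      }
    }
  }
}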
I am using Elasticsearch with no modifications whatsoever. This means the mappings, norms, and analyzed/not_analyzed settings are all default config. I have a very small data set of two items for experimentation purposes. The items have several fields but I query only on one, which is a multi-valued/array-of-strings field. The doc looks like this:
{
"_index": "index_profile",
"_type": "items",
"_id": "ega",
"_version": 1,
"found": true,
"_source": {
"clicked": [
"ega"
],
"profile_topics": [
"Twitter",
"Entertainment",
"ESPN",
"Comedy",
"University of Rhode Island",
"Humor",
"Basketball",
"Sports",
"Movies",
"SnapChat",
"Celebrities",
"Rite Aid",
"Education",
"Television",
"Country Music",
"Seattle",
"Beer",
"Hip Hop",
"Actors",
"David Cameron",
... // other topics
],
"id": "ega"
}
}
A sample query is:
GET /index_profile/items/_search
{
"size": 10,
"query": {
"bool": {
"should": [{
"terms": {
"profile_topics": [
"Basketball"
]
}
}]
}
}
}
Again there are only two items and the one listed should match the query because the profile_topics field matches with the "Basketball" term. The other item does not match. I only get a result if I ask for clicked = ega in the should.
With Solr I would probably specify that the fields are multi-valued string arrays and are to have no norms and no analyzer so profile_topics are not stemmed or tokenized since all values should be treated as tokens (even the spaces). Not sure this would solve the problem but it is how I treat similar data on Solr.
I assume I have run afoul of some norm/analyzer/TF-IDF issue; if so, how do I solve this so that even with two items the query will return ega? If possible I'd like to solve this index- or type-wide rather than per field.
Basketball (with a capital B) in terms will not be analyzed. This means that this is exactly the term that will be looked up in the Elasticsearch index.
You say you have the defaults. If so, indexing Basketball under the profile_topics field means that the actual term in the index will be basketball (with a lowercase b), which is the result of the standard analyzer. So either you set profile_topics as not_analyzed, or you search for basketball and not Basketball.
Read this about terms.
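For example, keeping the default (analyzed) mapping, the original query should match once the term is lowercased, along these lines:
GET /index_profile/items/_search
{
  "size": 10,
  "query": {
    "bool": {
      "should": [{
        "terms": {
          "profile_topics": [
            "basketball"
          ]
        }
      }]
    }
  }
}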
Regarding setting all the fields to not_analyzed, you could do that with a dynamic template. Better still, with a template you can do what Logstash does: define a .raw subfield for each string field, where only this subfield is not_analyzed. The original/parent field still holds the analyzed version of the same text, in case you want to use the analyzed field later.
Take a look at this dynamic template. It's the one Logstash is using.
More specifically:
{
"template": "your_indices_name-*",
"mappings": {
"_default_": {
"_all": {
"enabled": true,
"omit_norms": true
},
"dynamic_templates": [
{
"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "string",
"index": "analyzed",
"omit_norms": true,
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
]
}
}
}
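With a template like that in place, an exact-match terms query (or aggregation) can target the not_analyzed subfield and keep the original casing. For instance, against the raw subfield (name taken from the template above):
GET /index_profile/items/_search
{
  "size": 10,
  "query": {
    "bool": {
      "should": [{
        "terms": {
          "profile_topics.raw": [
            "Basketball"
          ]
        }
      }]
    }
  }
}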
I'm using Elasticsearch for this project but a Solr solution might be appropriate too. In the query I'd like to include a portion of a should clause that will return results even if none of the other terms can. This will be used for document popularity. I'll periodically calculate reading popularity and add a float field to each doc with a numeric value.
The idea is to return docs based on terms but when that fails, return popular docs ranked by popularity. These should be ordered by term match scores or magnitude of popularity score.
I realize that I could quantize the popularity and treat it like a tag "hottest", "hotter", "hot"... but would like to use numeric field since the ranking is well defined.
Here is the current form of my data (from fetch by id):
GET /index/docs/ipad
returns a sample object
{
"_index": "index",
"_type": "docs",
"_id": "doc1",
"_version": 1,
"found": true,
"_source": {
"category": ["tablets", "electronics"],
"text": ["buy", "an", "ipad"],
"popularity": 0.95347457,
"id": "doc1"
}
}
Current query format
POST /index/docs/_search
{
"size": 10,
"query": {
"bool": {
"should": [
{"terms": {"text": ["ipad"]}}
],
"must": [
{"terms": {"category": ["electronics"]}}
]
}
}
}
This may seem an odd query format but these are structured objects, not free form text.
Can I add popularity to this query so that it returns items ranked by popularity magnitude along with those returned by the should terms? I'd boost the actual terms above the popularity so they'd be favored.
Note I do not want to boost by popularity; I want to return popular docs if the rest of the query returns nothing.
One approach I can think of is wrapping a match_all filter in a constant_score clause and sorting on _score followed by popularity.
Example:
{
"size": 10,
"query": {
"bool": {
"should": [
{
"terms": {
"text": [
"ipad"
]
}
},
{
"constant_score": {
"filter": {
"match_all": {}
},
"boost": 0
}
}
],
"must": [
{
"terms": {
"category": [
"electronics"
]
}
}
],
"minimum_should_match": 1
}
},
"sort": [
{
"_score": {
"order": "desc"
}
},
{
"popularity": {
"unmapped_type": "double"
}
}
]
}
You want to look into the function score query and a decay function for this.
Here's a gentle intro: https://www.found.no/foundation/function-scoring/
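One possible shape of such a query, as a rough sketch (using field_value_factor rather than a decay function; the missing value and boost_mode here are placeholders, not from the article), which adds popularity on top of the term score:
POST /index/docs/_search
{
  "size": 10,
  "query": {
    "function_score": {
      "query": {
        "bool": {
          "should": [
            {"terms": {"text": ["ipad"]}}
          ],
          "must": [
            {"terms": {"category": ["electronics"]}}
          ]
        }
      },
      "field_value_factor": {
        "field": "popularity",
        "missing": 0.1
      },
      "boost_mode": "sum"
    }
  }
}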
I'm currently trying to do something fancy in Elasticsearch... and it ALMOST works.
Use case: I have to limit the number of results per a certain field to (x) results.
Example: In a result set of restaurants I only want to return two locations per restaurant name. If I search Mexican Food, then I should get (x) Taco Bell hits, (x) Del Taco Hits and (x) El Torito Hits.
The problem: my aggregation currently buckets on partial terms rather than the whole value.
For instance: if I aggregate on company_name, it creates one bucket for taco and another bucket for bell, so Taco Bell might show up in 2 buckets, resulting in (x) * 2 results for that company.
I find it hard to believe that this is the desired behavior. Is there a way to aggregate by the entire search term?
Here's my current aggregation JSON:
"aggs": {
"by_company": {
"terms": {
"field": "company_name"
},
"aggs": {
"first_hit": {
"top_hits": {"size":1, "from": 0}
}
}
}
}
Your help, as always, is greatly appreciated!
Yes. If your "company_name" is just a regular string with the standard analyzer, or whatever analyzer you are using for "company_name" splits the name, then this is the explanation: ES stores "terms", not words or the entire text, unless you tell it to.
Assuming your current analyzer for that field does just what I described above, you need another field - let's call it "raw" - that mirrors your company_name field but stores the company name as is.
This is what I mean:
{
"mappings": {
"test": {
"properties": {
...,
"company_name": {
"type": "multi_field",
"fields": {
"company_name": {
"type": "string" #and whatever you currently have in your mapping for `company_name`
},
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
And in your query, you'll do it like this:
"aggs": {
"by_company": {
"terms": {
"field": "company_name.raw"
},
"aggs": {
"first_hit": {
"top_hits": {"size":1, "from": 0}
}
}
}
}
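Note that the multi_field type shown above is the old (1.x) mapping syntax. On newer versions the same idea is expressed with fields directly on the field; roughly (a 7.x-style sketch without a mapping type, adjust to your version and existing mapping):
{
  "mappings": {
    "properties": {
      "company_name": {
        "type": "text",
        "fields": {
          "raw": {
            "type": "keyword"
          }
        }
      }
    }
  }
}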