In my Solr index I have two different types of items, A and B, with distinct fields foo and bar, respectively, but similar values that I need to group together for faceting.
A:
foo: /* could be "abc", "def" or "ghi" */
B:
bar: /* could be "abc", "ghi", or "jkl" */
It's easy enough to get the facet information for each of these fields separately:
http://myServer:<port>/<SolrPath>/select?q=<query>&facet=true&facet.field=foo&facet.field=bar
Which gives me:
"facet_count": {
"facet_fields": {
"foo": ["abc", 10, "def", 20 "ghi", 30],
"bar": ["abc", 3, "ghi", 8, "jkl", 1]
}
}
Is there a way in Solr to specify that I want the fields A.foo and B.bar to be "lumped together" into the same facet? In other words, I need to make the facet information in the response look like this:
"facet_count": {
"facet_fields": {
"foo": ["abc", 13, "def", 20 "ghi", 38, "jkl", 1]
}
}
No. My advice would be to index the values into a single field. Using copyField directives, this would look like this (in schema.xml):
<copyField source="foo" dest="foobar" />
<copyField source="bar" dest="foobar" />
This would preserve the original foo and bar fields. To get your combined facets, simply facet on the new field:
?q=*:*
&facet=true
&facet.field=foobar
Edit: it might be possible with facet queries, but only if the list of unique values is small and fixed, and you're willing to write a separate facet query for each value. Even then, the results will look different (a count per query instead of an array of field value/count pairs).
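For illustration, a hedged sketch of that facet-query approach, assuming the known values are exactly abc, def, ghi and jkl (spaces would be URL-encoded in a real request):
?q=*:*
&facet=true
&facet.query=foo:abc OR bar:abc
&facet.query=foo:def OR bar:def
&facet.query=foo:ghi OR bar:ghi
&facet.query=foo:jkl OR bar:jkl
The counts then come back under facet_queries, one number per query string, rather than as a single facet_fields array.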
I have documents with an array of event objects:
{
  "events": [
    { "name": "A" },
    { "name": "C" },
    { "name": "D" },
    { "name": "B" },
    { "name": "E" }
  ]
},
{
  "events": [
    { "name": "A" },
    { "name": "B" },
    { "name": "S" },
    { "name": "C" }
  ]
}
In this array, I want to count how many events of a given sequence appear in order, allowing intervening events. For example, looking for the sequence [A,B,C]: with the array [A,x,x,B,x] I should count 2, and with [A,B,x,x,C] I should count 3 (x is just a placeholder for anything else).
I want to summarize this information for all my documents in the shape of an array, with the number of matches for each element. With the previous example that would give me [2,2,1]: 2 matches for A, 2 matches for B, 1 match for C.
My current aggregation is generated in JavaScript and follows this pattern (a rough sketch of one round of these stages is shown after the list):
Match documents with the event array containing A
Slice the array from A to the end of the array
Count the number of documents
Append the count of matching documents to the summarizing array
Match documents with the event array containing B
Slice the array from B to the end of the array
Count the number of documents
etc
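For illustration, a rough sketch of one such round of stages (collection and field names are assumed; the actual playground pipeline may differ):
db.collection.aggregate([
  // 1. Keep only documents whose events array contains "A"
  { $match: { "events.name": "A" } },
  // 2. Slice the events array from the first "A" to the end
  { $project: {
      events: {
        $slice: [
          "$events",
          { $indexOfArray: ["$events.name", "A"] },
          { $size: "$events" }
        ]
      }
  } },
  // 3. Count how many documents made it this far
  { $count: "matchesForA" }
  // 4. ...then repeat the same three stages for "B", "C", and so on
])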
However, when an event does not appear in any of the arrays, this falls short: since no documents match, I have no way to store the summarizing array. For example, with the event arrays [A,x,x,B,x] and [A,B,x,x,C] and trying to match [A,B,D], I would expect [2,2,0], but I get [] because nothing comes up when trying to match D, and the aggregation cannot continue.
Here is the aggregation I'm working with: https://mongoplayground.net/p/rEdQD4FbyC4
Change the matching letter on line 75 of the playground to something not in the array to reproduce the problematic behavior.
So is there a way not to lose my data when there is no match, for example by bypassing aggregation stages? I could not find anything related to bypassing stages in the MongoDB documentation.
Or are you aware of another way of doing this?
We ended up using a $reduce to solve our problem.
The $reduce runs on the events array; for every event we try to match it against the element of the sequence at position "size of the accumulator". If it matches, it is added to the accumulator; otherwise it is not, and so on.
Here is the Mongo playground: https://mongoplayground.net/p/ge4nlFWuLsZ
The sequence we want to match is in the field "sequence"
The matched elements are in the "matching" field
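For illustration, a minimal sketch of that $reduce (the "sequence" and "matching" field names follow the playground; the actual pipeline there may differ in detail):
db.collection.aggregate([
  // The sequence we want to match
  { $addFields: { sequence: ["A", "B", "C"] } },
  { $addFields: {
      matching: {
        $reduce: {
          input: "$events.name",   // event names, in document order
          initialValue: [],
          in: {
            $cond: [
              // does the current event equal the sequence element
              // at position "size of the accumulator so far"?
              { $eq: ["$$this", { $arrayElemAt: ["$sequence", { $size: "$$value" }] }] },
              { $concatArrays: ["$$value", ["$$this"]] },  // match: append it
              "$$value"                                    // no match: keep the accumulator
            ]
          }
        }
      }
  } }
])
The per-document count is then just the $size of matching, and summarizing across all documents is a further $group.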
What are the potential problems with the following design decision?
Suppose you have a MongoDB collection where, for each document, you want to store many documents in one of the embedded fields. Think of it as a kind of one-to-many relationship.
For various reasons an array is to be avoided, meaning the documents in the collection won't look like this:
{
p: 1,
q: [
{ k1: 1, k2 : "p", x: "aaa" },
{ k1: 2, k2 : "b", x: "bbb" }
]
}
Instead, I choose to do the following
{
p: 1,
q: {
KEY1 : { k1: 1, k2 : "a", x: "aaa" },
KEY2 : { k1: 2, k2 : "b", x: "bbb" }
}
}
where KEY1 and KEY2 are unique strings representing the documents {k1: 1, k2 : "a"} and {k1: 2, k2 : "b"}, respectively.
Such a string could be calculated in many ways, as long as the representation is unique. For example, {k1: 1, k2 : "a"} and {k2 : "a", k1: 1} should produce the same string, which should be different from the one for {k1: "1", k2 : "a"}. It should also take into account that the value of some ki could itself be a document.
By the way, I cannot use a hash function for calculating KEY, as I need to store all the documents.
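For illustration, one hypothetical way to compute such a key, by recursively sorting the keys before serializing (the helper name is mine; values containing . or $ would still need escaping because of the field-name restrictions discussed below):
// Canonical string for a document, independent of key order.
function canonicalKey(doc) {
  if (Array.isArray(doc)) {
    return "[" + doc.map(canonicalKey).join(",") + "]";
  }
  if (doc !== null && typeof doc === "object") {
    return "{" + Object.keys(doc).sort()
      .map(function (k) { return JSON.stringify(k) + ":" + canonicalKey(doc[k]); })
      .join(",") + "}";
  }
  return JSON.stringify(doc);  // 1 and "1" serialize differently, as required
}
canonicalKey({ k1: 1, k2: "a" }) === canonicalKey({ k2: "a", k1: 1 });   // true
canonicalKey({ k1: 1, k2: "a" }) === canonicalKey({ k1: "1", k2: "a" }); // false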
(If you are still here: the reason I didn't use an array is that I need atomicity when adding documents to the field q, and I need to modify the field x, although k1 and k2 will not be modified once added to q. This design was based on this question: MongoDB arrays - atomic update or push element. $addToSet only works for whole documents.)
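As a hedged sketch of the kind of atomic, targeted updates this embedded-key design is meant to allow (KEY1 and KEY2 stand for the precomputed key strings; all names are illustrative):
// Modify only x inside the sub-document identified by KEY1, atomically:
db.collection.updateOne(
  { p: 1 },
  { $set: { ["q." + KEY1 + ".x"]: "new value" } }
)
// Add a new sub-document only if its key is not already present:
db.collection.updateOne(
  { p: 1, ["q." + KEY2]: { $exists: false } },
  { $set: { ["q." + KEY2]: { k1: 2, k2: "b", x: "bbb" } } }
)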
Two possible sources of problems:
The number of possible KEYs would grow fast (although in my case it should stay under the thousands).
The keys themselves could be very long strings. Could that degrade performance?
Technical view
About feasibility, the MongoDB documentation only says that field names cannot be _id and cannot include the characters $, . or null. The BSON spec only says a field name should be a Modified UTF-8 string, not including null.
I would have done the following, but MongoDB complained that the keys should be immutable:
{
p: 1,
q: {
{ k1: 1, k2 : "a" } : { x: "aaa" },
{ k1: 2, k2 : "b" } : { x: "bbb" }
}
}
You can, however, use a similar-looking notation with the $group operator in the aggregation framework. But it is only similar notation: you cannot save such things into a collection.
This whole idea, it seems, would not be necessary if the documents {k1: 1, k2 : "a"} were stored directly in the collection, i.e. not embedded. In that case, I would just put a unique index on k1 and k2 and then use update/upsert to insert without repetition. All this overkill is because that cannot be done in an array. Indeed, it seems an array is almost like a collection where the _id is the position in the array. If I'm not wrong in this paragraph, then whatever is representable in the top-level collection should be representable in an embedded document.
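As a sketch of that non-embedded alternative (the collection name items is mine):
db.items.createIndex({ k1: 1, k2: 1 }, { unique: true })
db.items.updateOne(
  { k1: 1, k2: "a" },       // the identity part
  { $set: { x: "aaa" } },   // the mutable part
  { upsert: true }          // insert once, update thereafter, never duplicate
)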
EDIT: What about using a collection instead of embedding?
(Edited after comment by #hd)
My end goal is to have a one-to-many relationship with atomicity, especially while updating the many-side.
Let's explore the idea of having separate documents to represent the one-to-many relationship. It means two collections:
Collection cOne
{
p: 1,
q: _id_in_Many
}
Collection cMany
{
id: ...,
p: 1,
q: { k1: 1, k2 : "p" },
x: "aaa",
}
In this case, I should use a unique index on cMany, {p: 1, q: 1}, plus updateOne with upsert to ensure uniqueness. But index entries have a limit of 1024 bytes, and {k1: ..., k2 : ...} could go beyond it, especially if the values contain UTF-8 strings.
If I instead use a KEY generated as explained above, like this
{
id: ...,
p: 1,
key: KEY1,
k1 : 1,
k2 : "p" ,
x: "aaa",
}
Then the possibility of hitting the 1024-byte limit persists for the index {p: 1, key: 1}. I have to say, I don't expect {k1: ..., k2 : ...} to go far beyond 1 KB. I'm aware of the 16 MB limit per document.
Maybe there is a principled way to keep a collection unique on a field whose values would push the index entries over 1 KB, but I couldn't find it. The MongoDB documentation on upsert says: "To avoid multiple upserts, ensure that the filter fields are uniquely indexed."
In contrast, there seems to be no official restriction on the length of field names, and field assignment should, like any other single-document update, be atomic.
EDIT 2: Are arrays and documents more powerful than Collections?
(Edited after comment by #hd)
Since I haven't found a way to add an arbitrary document to a Collection while preserving uniqueness, we could argue that Documents and Arrays are more powerful, uniqueness-wise, than Collections. Document field names are unique, and Arrays at least support $addToSet, which would be enough if I only had the keys k1 and k2 and not the mutable x.
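For completeness, a small sketch of the $addToSet route just mentioned:
db.collection.updateOne(
  { p: 1 },
  { $addToSet: { q: { k1: 1, k2: "a" } } }  // atomic and duplicate-free
)
// But $addToSet compares whole values, so elements carrying a mutable x
// (e.g. { k1: 1, k2: "a", x: "aaa" }) would be duplicated whenever x differs.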
I have a keyword array field (say f) and I want to filter documents with an exact array (e.g. filter docs with f = [1, 3, 6] exactly, same order and number of terms).
What is the best way of doing this?
Regards
One way to achieve this is to add a script to the query that also checks the number of elements in the array.
The script filter would look something like this:
"filters": [
{
"script": {
"script": "doc['f'].values.length == 3"
}
},
{
"terms": {
"f": [
1,
3,
6
],
"execution": "and"
}
}
]
Hope you get the idea.
I think an even better idea would be to store the array as a string (if there are not many changes to the structure of the graph) and match the string directly. This would be much faster too.
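For instance, with a hypothetical field f_str holding the joined values (e.g. "1_3_6"), the check reduces to a single term filter:
{
  "term": { "f_str": "1_3_6" }
}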
We've got a MongoDB (v2.4) collection that contains time-series snapshots:
{"foo": "bar",
"timeseries": [{"a": 1, "b": 2},
{"a": 2, "b": 3},
...]}
{"foo": "baz",
"timeseries": [{"a": 0, "b": 1},
{"a": 2, "b": 3},
...]}
I need to group all the entries by the foo key, and then sum the a values of the last entry in each document's timeseries array (timeseries[-1].a, as it were), per key. I want to believe there's some combination of $group, $project, and $unwind that can do what I want without having to resort to mapReduce.
Are you looking for something along the lines of:
> db.collection.aggregate([
{$unwind: "$timeseries"},
{$group: {_id: "$_id", foo: {$last: "$foo"},
last_value: {$last: "$timeseries.a"}}},
{$group: { _id: "$foo", total: { $sum: "$last_value" }}}
])
{ "_id" : "baz", "total" : 2 }
{ "_id" : "bar", "total" : 2 }
The $unwind stage produces one document per item in the timeseries array.
After that, documents are grouped back again, keeping only the $last value.
Finally, a second $group stage groups (and sums the values) by the foo field.
As a final note, I don't think this will be time-efficient for very long time series, since MongoDB basically has to iterate over all the items just to reach the last one.
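On newer MongoDB versions (3.2+), where the $arrayElemAt operator is available, a sketch that avoids the $unwind entirely would be:
db.collection.aggregate([
  { $project: { foo: 1, last: { $arrayElemAt: ["$timeseries", -1] } } },
  { $group: { _id: "$foo", total: { $sum: "$last.a" } } }
])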
I need to retrieve documents that contain at least one value inside an array. The structure of my document is:
{ "_id": 3,
"username": "111111",
"name": "XPTO 1",
"codes": [ 2, 4, 5 ],
"available": true }
{ "_id": 4,
"username": "22222",
"name": "XPTO 2",
"codes": [ 3, 5 ],
"available": true }
I need to do a find by "codes": if I search for the value 5, I need to retrieve all documents that contain this value inside their array.
I've tried to use $elemMatch but with no success...
db.user.find({codes: {"$elemMatch": {codes: [2,8]}}}, {"codes":1})
How can I do this?
Thanks in advance.
You can check for values inside an array just like you compare values for an ordinary field.
So you would need to do it like this, without using $elemMatch:
If you want to check whether the array contains the single value 5:
db.user.find({codes: 5}, {codes:1})
This will return all the documents where the codes array contains 5.
If you want to check whether the array contains any value out of a given set of values:
db.user.find({codes: {$in: [2, 8]}}, {codes:1})
This will return the documents where the array contains either 2 or 8.
If you want to check whether the array contains all the values in a list:
db.user.find({codes: {$all: [2, 5]}}, {codes:1})
This will return all the documents where the array contains both 2 and 5.