Grouping in MongoDB and adding to Group

I have a MeterReadings collection that looks like the following.
{
"_id" : ObjectId("5fc768b33561870a262813c6"),
"installedAppId" : "A",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984662,
"u" : "W",
"v" : 100
}
]
}
{
"_id" : ObjectId("5fc768c73561870a262813c7"),
"installedAppId" : "B",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984662,
"u" : "W",
"v" : 200
}
]
}
{
"_id" : ObjectId("5fc768e43561870a262813c8"),
"installedAppId" : "A",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 300
}
]
}
My desired output groups by installedAppId, with the readings from all matching documents collected into a single entry. Shown below is what I am aiming for.
{
"_id" : ObjectId("5fc768b33561870a262813c6"),
"installedAppId" : "A",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984662,
"u" : "W",
"v" : 100
},{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 300
}
]
}
{
"_id" : ObjectId("5fc768c73561870a262813c7"),
"installedAppId" : "B",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984662,
"u" : "W",
"v" : 200
}
]
}
Grouping by installedAppId does return two groups using the data above.
> db.MeterReadings.aggregate([ {$group: {_id: {installedAppId: "$installedAppId"}}} ])
{ "_id" : { "installedAppId" : "B" } }
{ "_id" : { "installedAppId" : "A" } }
Since each reading is different, though, adding readings as another field in the $group _id just returns every document, the same as querying the entire collection.
> db.MeterReadings.aggregate([ {$group: {_id: {installedAppId: "$installedAppId", readings: "$readings"}}} ]).pretty()
{
"_id" : {
"installedAppId" : "A",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 10.2
}
]
}
}
{
"_id" : {
"installedAppId" : "B",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984662,
"u" : "W",
"v" : 10.2
}
]
}
}
{
"_id" : {
"installedAppId" : "A",
"readings" : [
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984662,
"u" : "W",
"v" : 10.2
}
]
}
}
Any help is welcome!

Either $push or $addToSet seems to do the trick; however, each one pushes whole arrays, so the result is an array of arrays. It would be nicer to push everything into one flat array.
addToSet
> db.MeterReadings.aggregate([{$group : {_id : "$installedAppId", readings: {$addToSet : "$readings"}}}]).pretty()
{
"_id" : "B",
"readings" : [
[
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 200
}
]
]
}
{
"_id" : "A",
"readings" : [
[
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 300
}
],
[
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 100
}
]
]
}
push
> db.MeterReadings.aggregate([{$group : {_id : "$installedAppId", readings: {$push: "$readings"}}}]).pretty()
{
"_id" : "B",
"readings" : [
[
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 200
}
]
]
}
{
"_id" : "A",
"readings" : [
[
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 100
}
],
[
{
"n" : "daf43d66-6c3b-4553-80af-6a0b1cf97418:power",
"t" : 1606902984672,
"u" : "W",
"v" : 300
}
]
]
}
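Judging from the outputs above, a common way to get one flat readings array per installedAppId is to $unwind the readings before grouping, so $push receives plain reading objects instead of arrays. A minimal sketch, assuming the MeterReadings collection shown at the top:

```javascript
// $unwind splits each document into one document per reading;
// $group then pushes the plain reading objects into a single flat array.
const pipeline = [
  { $unwind: "$readings" },
  {
    $group: {
      _id: "$installedAppId",
      readings: { $push: "$readings" }
    }
  }
];
// Run it in the shell with: db.MeterReadings.aggregate(pipeline)
```

With the sample data, group "A" would then hold the two reading objects (v: 100 and v: 300) in one flat array rather than in two nested arrays.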

Related

Remove mongo specific nested documents in array for each document

{
"_id" : 123,
"a" : [
{
"b" : 1,
"bb" : 2
},
{
"c" : 2,
"cc" : 3
}
],
"ab" : [
{
"d" : 4,
"dd" : 5
},
{
"e" : 5,
"ee" : 6
}
]
}
I need to remove a specific nested document from an array in each document. The output should look like the following, given the inputs _id: 123 and ab.d = 4:
{
"_id" : 123,
"a" : [
{
"b" : 1,
"bb" : 2
},
{
"c" : 2,
"cc" : 3
}
],
"ab" : [
{
"e" : 5,
"ee" : 6
}
]
}
You are looking for an update with the $pull operator (https://docs.mongodb.com/manual/reference/operator/update/pull/).
In your case:
db.mycollection.update({"_id":123}, {$pull: {"ab":{"d":4}}})
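In newer shells, where update() is deprecated, the same $pull can be issued with updateOne. A sketch using the same hypothetical mycollection name:

```javascript
// The filter selects the target document; $pull removes every element
// of the "ab" array whose "d" field equals 4.
const filter = { _id: 123 };
const update = { $pull: { ab: { d: 4 } } };
// In the shell: db.mycollection.updateOne(filter, update)
```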

Highest value from sub-arrays in documents

I have this requirement, where I have a collection as below:
{
"_id" : 1,
"name" : "sam",
"Array" : [
{ "K" : "A", "V" : 8 },
{ "K" : "B", "V" : 5 },
{ "K" : "C", "V" : 13 }
]
},
{
"_id" : 2,
"name" : "tom",
"Array" : [
{ "K" : "D", "V" : 12 },
{ "K" : "E", "V" : 14 },
{ "K" : "F", "V" : 2 }
]
},
{
"_id" : 3,
"name" : "jim",
"Array" : [
{ "K" : "G", "V" : 9 },
{ "K" : "H", "V" : 4 },
{ "K" : "I", "V" : 2 }
]
}
I would like to run a query that returns, for each _id, the sub-document with the highest "V". In that case I would get:
{ "_id" : 1, "name" : "sam", "Array" : [ { "K" : "C", "V" : 13 } ] }
{ "_id" : 2, "name" : "tom", "Array" : [ { "K" : "E", "V" : 14 } ] }
{ "_id" : 3, "name" : "jim", "Array" : [ { "K" : "G", "V" : 9 } ] }
You can select only the sub-documents where the V field's value is equal to the maximum value in the array, using $filter with the $max operator.
The $addFields pipeline stage is used here to specify all other fields in the document.
db.collection.aggregate([
{
"$addFields":{
"Array":{
"$filter":{
"input":"$Array",
"cond":{
"$eq":[
"$$this.V",
{
"$max":"$Array.V"
}
]
}
}
}
}
}
])

Querying mongo array of embedded documents in aggregation pipeline

I was looking into the different ways of querying an array of embedded documents in the MongoDB aggregation pipeline. MongoDB seems to have limited support for this.
Let's say we have following documents in test collection:
/* 1 */
{
"_id" : ObjectId("59df2c39fbd406137d4290b3"),
"a" : 1.0,
"arr" : [
{
"key": 1,
"sn" : "a",
"org": "A"
}
]
}
/* 2 */
{
"_id" : ObjectId("59df2c47fbd406137d4290b4"),
"a" : 2.0,
"arr" : [
{
"sn" : "b",
"key": 2,
"org": "B"
}
]
}
/* 3 */
{
"_id" : ObjectId("59df2c50fbd406137d4290b5"),
"a" : 3.0,
"arr" : [
{
"key": 3,
"sn" : "c",
"org": "C"
}
]
}
/* 4 */
{
"_id" : ObjectId("59df2c85fbd406137d4290b6"),
"a" : 1.0,
"arr" : [
{
"key": 1,
"sn" : "a",
"org": " A"
}
]
}
/* 5 */
{
"_id" : ObjectId("59df2c9bfbd406137d4290b7"),
"a" : 3.0,
"arr" : [
{
"sn" : "b",
"key": 2,
}
]
}
/* 6 */
{
"_id" : ObjectId("59df2e41fbd406137d4290b8"),
"a" : 4.0,
"arr" : [
{
"sn" : "b",
"key" : 2
}
]
}
/* 7 */
{
"_id" : ObjectId("59df2e5ffbd406137d4290b9"),
"a" : 5.0,
"arr" : [
{
"key" : 2,
"sn" : "b"
},
{
"sn" : "a",
"key" : 1
}
]
}
And I wanted to categorize the above documents based on the "arr.sn" field value using the query below:
db.test.aggregate([{"$addFields": {"Category" : { $switch: {
branches : [
{ case : { $eq : [ "$arr.nm", "a" ] }, then : "Category 1"}
],
default : "No Category"
}}}}])
but the $eq operator is not giving the correct result. If I use the equivalent equality match in the find method, it works:
db.test.find({"arr.sn" : "a"})
I am looking for a way to do it with only a single field, in this case the "arr.sn" field. Is there any way to project that field from the embedded documents in the array?
Any help would be appreciated.
In an aggregation expression, the path "$arr.sn" resolves to an array of values (one per array element), so $eq ends up comparing the string "a" against a whole array and never matches. The query operator eq, by contrast, matches if any element of the array equals the value. (Note also that the pipeline above reads "$arr.nm", while the field is "arr.sn".)
You need $in (aggregation) to test whether a value is in an array.
Something like:
[
{
"$addFields": {
"Category": {
"$switch": {
"branches": [
{
"case": {
"$in": [
"a",
"$arr.sn"
]
},
"then": "Category 1"
}
],
"default": "No Category"
}
}
}
}
]

How to update array of embedded collection field based on another collection embedded field value in MongoDB?

I've searched this forum for my issue below and I'm not able to find a solution.
Inventory collection:
{
"_id" : ObjectId("555b1978af015394d1016374"),
"Billno" : "ABC1",
"Device_id" : "strsdedgfrtg12",
"item" : [
{
"item_id" : 232,
"size" : "S",
"qty" : 25
},
{
"item_id" : 272,
"size" : "M",
"qty" : 5
}
],
"category" : "clothing"
}
inventory_new collection:
{
"_id" : ObjectId("555b1978af015394d1016374"),
"Billno" : "ABC1",
"Device_id" : "strsdedgfrtg12",
"item" : [
{
"item_id" : 232,
"size" : "S",
"qty" : 25
},
{
"item_id" : 272,
"size" : "M",
"qty" : 5000
}
],
"category" : "clothing"
}
Now I have to update the embedded item "qty" in the inventory collection with the "qty" value of the corresponding embedded item in the inventory_new collection. I've tried the code below, but it doesn't work. Please advise.
db.inventory.find().forEach(function (doc1) {
var doc2 = db.inventory_copy.find({ Billno: doc1.Billno},{ Device_id: doc1.Device_id},{ item.item_id: doc1.item.item_id}, {item.qty: 1 });
if (doc2 != null) {
doc1.item.qty = doc2.item.qty;
db.inventory.save(doc1);
}
});
Thanks
Try the following update:
db.inventory.find().forEach(function (doc1) {
// Look up the matching document in inventory_new (not inventory_copy).
// item is an array, so doc1.item.item_id would be undefined and cannot
// be used as a match condition; Billno + Device_id identify the document.
var doc2 = db.inventory_new.findOne(
{
"Billno": doc1.Billno,
"Device_id": doc1.Device_id
}
);
if (doc2 != null) {
doc1.item = doc2.item;
db.inventory.save(doc1);
}
});
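save() is deprecated in newer shells; the same copy can also be written with updateOne. A sketch, where makeSync is a hypothetical helper (not part of the question) that builds the filter and update documents from one inventory_new document:

```javascript
// Build the filter and update for copying the item array of one
// source document into the matching inventory document.
function makeSync(src) {
  return {
    filter: { Billno: src.Billno, Device_id: src.Device_id },
    update: { $set: { item: src.item } }
  };
}
// In the shell:
// db.inventory_new.find().forEach(function (src) {
//   var s = makeSync(src);
//   db.inventory.updateOne(s.filter, s.update);
// });
```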

MongoDB: deleting duplicate documents with arrays

I'm trying to remove all duplicates in a collection with ensureIndex and dropDups, but this method doesn't seem to work with arrays.
For example, if I have a collection that looks like this:
{ "_id" : ObjectId("54d8f889e3fdfe0cd8b769ed"), "field1" : "a", "field2" : [ "a", "b" ] }
{ "_id" : ObjectId("54d8f89be3fdfe0cd8b769ee"), "field1" : "a", "field2" : [ "a", "b" ] }
{ "_id" : ObjectId("54d8f8a3e3fdfe0cd8b769ef"), "field1" : "a", "field2" : [ "a", "c" ] }
{ "_id" : ObjectId("54d8f8abe3fdfe0cd8b769f0"), "field1" : "a", "field2" : [ "b", "a" ] }
{ "_id" : ObjectId("54d8f8c5e3fdfe0cd8b769f1"), "field1" : "b", "field2" : [ "a", "b" ] }
and use ensureIndex like this:
> db.test.ensureIndex({field1: 1, field2: 1}, {unique: true, dropDups: true})
the result would be:
> db.test.find()
{ "_id" : ObjectId("54d8f89be3fdfe0cd8b769ee"), "field1" : "a", "field2" : [ "a", "b" ] }
{ "_id" : ObjectId("54d8f8c5e3fdfe0cd8b769f1"), "field1" : "b", "field2" : [ "a", "b" ] }
Is there a way to do this so that only exact duplicates (in my example collection, only the first or second entry) get deleted?
As far as I know, this feature doesn't work with arrays (and note that the dropDups option was removed in MongoDB 3.0). Is there a particular reason why you can't just use $addToSet when you insert the data?
Check this question, it may help you: MongoDB: Unique index on array element's property
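Since dropDups cannot do this, one alternative sketch is an aggregation that groups on the exact (field1, field2) pair. $group compares the arrays element by element, so ["a","b"] and ["b","a"] stay distinct, which matches the "exact duplicates only" requirement:

```javascript
// Group exact duplicates together and collect the _ids of each group,
// then keep only the groups that actually contain duplicates.
const pipeline = [
  {
    $group: {
      _id: { field1: "$field1", field2: "$field2" },
      ids: { $push: "$_id" },
      count: { $sum: 1 }
    }
  },
  { $match: { count: { $gt: 1 } } }
];
// In the shell, delete everything but the first _id of each group:
// db.test.aggregate(pipeline).forEach(function (g) {
//   db.test.deleteMany({ _id: { $in: g.ids.slice(1) } });
// });
```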
