Update object in array by Id - database

I have a document in MongoDB that stores the books read by a user. I want to be able to update the date_read field given a book_id. How can I do this in a Mongo query? For example, if book_id is 111111, update "date_read" to April 5 2020.
{
    _id: "1456786576454",
    books: [
        {
            "book_id": "132451",
            "date_read": "Jan 20 2020"
        },
        {
            "book_id": "111111",
            "date_read": "Feb 5 2020"
        }
    ]
}
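A sketch of one common approach: the positional operator "$" updates only the array element that matched the query (the collection name "users" here is an assumption, not given in the question):

```javascript
// Find the document whose books array contains book_id 111111,
// then use the positional operator "$" to update that matched element only.
db.users.updateOne(
    { "books.book_id": "111111" },
    { $set: { "books.$.date_read": "April 5 2020" } }
)
```

Note that "$" updates only the first matching array element per document, which is enough here since book_id values are unique within the array.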

Get the difference between data in date range aggregation

I have data that I'm querying with a date range (a from date and a to date):
{
    $match: {
        "Date_stock": {
            $gte: new Date(req.body.fromDate),
            $lt: new Date(req.body.toDate)
        }
    }
},
I'm trying to get the difference between each document's stock and the next one's, like doc 1 stock - doc 2 stock.
I'm using $group to get the data:
{ $group: {
    "stock": { $toInt: "$stock" }
} }
I don't have any related databases involved; each document has a stock number.
How do I subtract one value from another within the group?
Is this possible to do?
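One way to sketch this, assuming MongoDB 5.0+ and a collection named "stocks" (both are assumptions): $setWindowFields with $shift pulls the previous document's stock (in date order) into each document, after which $subtract gives the per-document difference.

```javascript
db.stocks.aggregate([
    {
        $match: {
            "Date_stock": {
                $gte: new Date(req.body.fromDate),
                $lt: new Date(req.body.toDate)
            }
        }
    },
    {
        // Bring the previous document's stock (in date order) into this one.
        $setWindowFields: {
            sortBy: { "Date_stock": 1 },
            output: {
                prevStock: { $shift: { output: "$stock", by: -1 } }
            }
        }
    },
    {
        // prevStock is null for the first document, so diff is null there.
        $project: {
            Date_stock: 1,
            stock: 1,
            diff: { $subtract: [ { $toInt: "$stock" }, { $toInt: "$prevStock" } ] }
        }
    }
])
```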

How to fetch only condition satisfied/match object from an array in mongodb?

I am new to MongoDB.
This is my 'masterpatients' collection. It has many documents; every document contains a 'visits' array, and every visits array contains multiple objects. I want only those objects that match my input.
If the facility matches my input and the visit date falls within the provided range, the query should return only that object, as in the expected output given below.
{
    _id: 5ef59134a3d8d92e580510fe,
    flag: 0,
    name: "emicon_test",
    dob: 2020-06-25T00:00:00.000+00:00,
    visits: [
        {
            visit: 2020-06-09T10:36:10.635+00:00,
            facility: "Atria Lady Lake"
        },
        {
            visit: 2020-05-09T10:36:10.635+00:00,
            facility: "demo"
        }
    ]
}
{
    _id: 5ee3213040f8830e04ff74a8,
    flag: 0,
    name: "xyz",
    dob: 1995-06-25T00:00:00.000+00:00,
    visits: [
        {
            visit: 2020-05-01T10:36:10.635+00:00,
            facility: "pqr"
        },
        {
            visit: 2020-05-15T10:36:10.635+00:00,
            facility: "demo"
        },
        {
            visit: 2020-05-09T10:36:10.635+00:00,
            facility: "efg"
        }
    ]
}
My query input parameters are facility='demo' and a visit date range from 1st May 2020 to 10th May 2020.
Expected output:
{
    _id: 5ef59134a3d8d92e580510fe,
    flag: 0,
    name: "emicon_test",
    dob: 2020-06-25T00:00:00.000+00:00,
    visits: [
        {
            visit: 2020-05-09T10:36:10.635+00:00,
            facility: "demo"
        }
    ]
}
Thanks in advance.
I got an answer.
MasterPatientModel.aggregate([
    { $unwind: "$visits" },
    {
        $match: {
            "visits.visit": {
                $gte: new Date(req.body.facilitySummaryFromdate),
                $lte: new Date(req.body.facilitySummaryTodate)
            },
            "visits.facility": req.body.facilitySummary
        }
    }
])
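An alternative sketch that keeps the original document shape instead of unwinding, using $filter inside a $project stage (field and parameter names are taken from the question):

```javascript
MasterPatientModel.aggregate([
    {
        $project: {
            flag: 1,
            name: 1,
            dob: 1,
            // Keep only the visits matching both the facility and the date range.
            visits: {
                $filter: {
                    input: "$visits",
                    as: "v",
                    cond: {
                        $and: [
                            { $eq: [ "$$v.facility", req.body.facilitySummary ] },
                            { $gte: [ "$$v.visit", new Date(req.body.facilitySummaryFromdate) ] },
                            { $lte: [ "$$v.visit", new Date(req.body.facilitySummaryTodate) ] }
                        ]
                    }
                }
            }
        }
    },
    // Drop documents whose filtered visits array came out empty.
    { $match: { "visits.0": { $exists: true } } }
])
```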
Note that $unwind produces one result document per matching visit rather than the original document shape. If you need the original shape with only the matching visits, you can filter the array server-side with $filter in a $project stage; a plain find() projection with $elemMatch returns at most one matching element. Alternatively, promoting visits to a top-level collection/model also lets you filter by these criteria directly.

Apache solr filter based on array values

I have two documents, shown below:
{
    id: intro.original
    publicationID: TRENTXWB_EM_R1
    hasBeenModifiedBy: [DAL]
    isModificationFor: null
    text: ... intro ...
}
{
    id: intro.dal
    publicationID: TRENTXWB_EM_R1
    hasBeenModifiedBy: []
    isModificationFor: DAL
    text: ... intro ...
}
I need to develop a filter query that checks "hasBeenModifiedBy": if the array contains 'DAL', that dataset has to be omitted (i.e. ignored). So, in this case, we should get the second dataset, which doesn't have "DAL" in its "hasBeenModifiedBy" array.
Please suggest an approach.
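A sketch of one approach: in Solr, a negative filter query excludes documents whose multi-valued field contains the term, and it also matches documents where the field is empty or absent, which is the behavior wanted here:

```
fq=-hasBeenModifiedBy:DAL
```

For example, as part of a full select request (the core name "publications" is an assumption):

```
/solr/publications/select?q=publicationID:TRENTXWB_EM_R1&fq=-hasBeenModifiedBy:DAL
```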

dataweave filter and maxBy on nested array list

I have a list of students and their marks for their respective subjects. I want to filter all students of a specific grade and then find the student who got the maximum marks in a specific subject.
[
    {
        "name": "User 01",
        "grade": 1,
        "schoolName": "school01",
        "marks": {
            "english": 10,
            "math": 30,
            "social": 30
        }
    },
    {
        "name": "User 02",
        "grade": 1,
        "schoolName": "school02",
        "marks": {
            "english": 10,
            "math": 20,
            "social": 30
        }
    }
]
I am able to perform both operations independently. Can someone help me find the student object who got the maximum marks in math in a specific grade?
If I understand your requirement correctly, this script does it. Just change the variables grade and topic to the specific values you are interested in.
Generally speaking, it is always better to provide example outputs, and whatever script you already have, in addition to the input samples, so the context is easier to understand.
%dw 2.0
output application/json
var grade = 1
var topic = "math"
---
flatten(
    payload map ((alumn, order) ->
        alumn.marks pluck ((value, key, index) -> {
            name: alumn.name,
            grade: alumn.grade,
            result: value,
            topic: key
        })
    )
) // restructure the list to one result per element
filter ((item, index) -> item.grade == grade and item.topic == topic) // keep only the requested grade and subject
maxBy ((item) -> item.result) // get the maximum result
I used the script below to achieve it.
%dw 2.0
output application/json
var grade = 1
---
payload
    filter ((item, index) -> item.grade == grade)
    maxBy ($.marks.math as String {format: "000000"}) // zero-pad so string comparison orders numerically

Search a fixed amount of documents over a period of time in MongoDB

We have a database with a lot of documents, and it gets bigger as time goes on. Right now query time isn't a problem, since the data is only about a year old, but the bigger it gets, the longer queries will take if we query everything.
Our idea was to take every nth document: the more documents there are, the more data you leave out, but you still get a good picture of the data over time. However, this is hard to do in Mongo and doesn't seem to work at all, since it still traverses all documents.
Is there a way to set a fixed query time, no matter how many documents there are, or at least to reduce it? It doesn't matter if we lose data overall, as long as we get documents from every time range.
I don't know exactly what your data looks like, but here is an example of what I mean. Let's assume this is the data stored in your database.
/* 1 */
{
    "_id" : ObjectId("59e272e74d8a2fe38b86187d"),
    "name" : "data1",
    "date" : ISODate("2017-11-07T00:00:00.000Z"),
    "number" : 15
}
/* 2 */
{
    "_id" : ObjectId("59e272e74d8a2fe38b86187f"),
    "name" : "data2",
    "date" : ISODate("2017-11-06T00:00:00.000Z"),
    "number" : 19
}
/* 3 */
{
    "_id" : ObjectId("59e272e74d8a2fe38b861881"),
    "name" : "data3",
    "date" : ISODate("2017-10-06T00:00:00.000Z"),
    "number" : 20
}
/* 4 */
{
    "_id" : ObjectId("59e272e74d8a2fe38b861883"),
    "name" : "data4",
    "date" : ISODate("2017-10-05T00:00:00.000Z"),
    "number" : 65
}
I understand you want to compare some values across months or even years, so you could do the following:
db.getCollection('test').aggregate([
    {
        $match: {
            // query on indexed fields
            date: {
                $gte: ISODate("2017-10-05T00:00:00.000Z"),
                $lte: ISODate("2017-11-07T00:00:00.000Z")
            }
        }
    },
    {
        // retrieve the year and month from each document
        // (grouping on both avoids merging the same month from different years)
        $project: {
            _id: 1,
            name: 1,
            date: 1,
            number: 1,
            year: { $year: "$date" },
            month: { $month: "$date" }
        }
    },
    {
        // group by year and month and perform some accumulator operations
        $group: {
            _id: { year: "$year", month: "$month" },
            name: { $addToSet: "$name" },
            dateFrom: { $min: "$date" },
            dateTo: { $max: "$date" },
            number: { $sum: "$number" }
        }
    }
])
I would suggest you save pre-aggregated data: instead of searching through, say, 30 documents per month, you'd only need to read 1 per month. You'd have to aggregate the complete data only once; with the pre-aggregated results stored, you'd only need to run the aggregation for the new data coming in.
Is that maybe what you are looking for?
Also, make sure the fields you query are indexed; otherwise MongoDB has to scan every document in the collection.
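If a bounded result size matters more than exactly which documents come back, $sample is another option worth sketching (collection name taken from the example above): it returns a random subset of a fixed size, so the result stays the same size as the collection grows.

```javascript
db.getCollection('test').aggregate([
    // Restrict to the time range of interest first (uses the index on date).
    { $match: { date: { $gte: ISODate("2017-01-01T00:00:00.000Z") } } },
    // Then randomly pick at most 500 of the matching documents.
    { $sample: { size: 500 } }
])
```

Because the sample is random rather than every nth document, sparse periods can be under-represented; combining $sample with per-month buckets would give more even time coverage.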