I have an array of "states" in my documents:
{
    "_id" : ObjectId("53026de61e30e2525d000004"),
    "states" : [
        {
            "name" : "complete",
            "userId" : ObjectId("52f4576126cd0cbe2f000005"),
            "_id" : ObjectId("53026e16c054fc575d000004")
        },
        {
            "name" : "active",
            "userId" : ObjectId("52f4576126cd0cbe2f000005"),
            "_id" : ObjectId("53026de61e30e2525d000004")
        }
    ]
}
I simply insert a new state at the front of the array whenever there is a new state. The current workaround until MongoDB 2.6 is released is described here: Can you have mongo $push prepend instead of append?
However, I do not want users to be able to save the same state twice in a row. I.e., if it's already "complete", you should not be able to add another "complete" state. Is there a way to check the first element in the array and only insert the new state if it's not the same, in one query/update command to Mongo?
I say one query/update because Mongo does not support transactions, so I don't want to query for the first element in the array and then send a separate update statement; another state could be inserted between my query and my update.
You can qualify your update statement with a query, for example:
db.mydb.states.update({"states.name":{$nin:["newstate"]}},{$addToSet:{"states":{"name":"newstate"}}})
This makes the update a no-op when the query part matches no document. You can additionally add more fields to filter on in the query part.
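Since the question also wants the new state prepended, the guard can be combined with MongoDB 2.6's $push/$position modifier in a single atomic update; querying "states.0.name" restricts the guard to the first element only, rather than the whole array. Below is a plain-JS sketch of that guard logic (the helper name is hypothetical), with the equivalent Mongo update shown in a comment:

```javascript
// Equivalent single atomic MongoDB 2.6+ update:
//   db.docs.update(
//     { _id: docId, "states.0.name": { $ne: newState.name } }, // guard: first element differs
//     { $push: { states: { $each: [newState], $position: 0 } } } // prepend
//   )
function prependStateIfChanged(states, newState) {
  // Refuse only when the most recent (first) state has the same name.
  if (states.length > 0 && states[0].name === newState.name) {
    return states; // no-op, like an update whose query matched nothing
  }
  return [newState].concat(states);
}
```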
I have a Firebase Realtime Database with time/value pairs like the following; there is a 1-minute delay between each object:
"-LhIaB7SP0y-FLb1xFFx" : {
    "time" : 1560475623,
    "value" : 11.614479842990287
},
"-LhIaJ6PjtbX1VHKlwFM" : {
    "time" : 1560475681,
    "value" : 11.642968895431837
},
"-LhIaXbX42k8dmApfztL" : {
    "time" : 1560475741,
    "value" : 11.707783121665505
},
"-LhIaqgYSpUmKbcH1MTN" : {
    "time" : 1560475802,
    "value" : 11.704004474172576
},
"-LhIb-20G9jnx61vNjS-" : {
    "time" : 1560475861,
    "value" : 11.69861155382089
},
"-LhIbDdTEdWrhirbjVRa" : {
    "time" : 1560475921,
    "value" : 11.661539551497276
},
"-LhIbSGKvS2POggUCots" : {
    "time" : 1560475981,
    "value" : 11.581711077020692
}
I can retrieve the data ordered by "time". But I want to filter it down to one value for every 5 minutes, or 1 day, or a week.
this.items = this.db.list(`history/data`, ref => ref.orderByChild("time").limitToLast(1000));
Is there firebase list filtering for that?
The query model of the Firebase Realtime Database works as follow for your code:
It orders the child nodes of the reference by the child you indicate.
It finds the last node in that order, and returns it together with the 999 nodes before it (the last 1000 nodes).
You can add a condition to your query with startAt(), endAt(), and/or equalTo(), to make the database start/end at a specific set of child nodes within the range. But there is no way within these conditions to skip child nodes in the middle. Once the database has found a child node to start returning, it will return all child nodes from there on until the conditions are no longer met.
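If the data set is small enough to fetch, one workaround (not part of the Firebase query API itself) is to retrieve the range and downsample it client-side. A minimal sketch, assuming the snapshot has already been read into an array of {time, value} objects:

```javascript
// Keep the first sample of every bucket (e.g. 300 s = 5 minutes).
// Firebase itself cannot skip nodes in the middle of a range,
// so this filtering has to happen after the data is fetched.
function downsample(samples, bucketSeconds) {
  const seen = new Set();
  return samples.filter(function (s) {
    const bucket = Math.floor(s.time / bucketSeconds);
    if (seen.has(bucket)) return false;
    seen.add(bucket);
    return true;
  });
}
```

The obvious drawback is that all 1000 nodes still travel over the wire; the aggregation-bucket approach below avoids that.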
The simplest way I can think of to implement your requirement is to store the data in the aggregation buckets that you want to query on. So if you want to allow reading the data in a way that gives you the first and last item of every week, and the first and last item of every month, you'd store:
2019w24_first: "-LhIaB7SP0y-FLb1xFFx",
2019w24_last: "-LhIbSGKvS2POggUCots",
2019m06_first: "-LhIaB7SP0y-FLb1xFFx",
2019m06_last: "-LhIbSGKvS2POggUCots"
And then each time when you write the data, you update the relevant aggregates too.
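A sketch of how such bucket keys could be derived from an epoch timestamp in seconds (the "2019w24"/"2019m06" format follows the example above; using ISO-8601 week numbering here is an assumption):

```javascript
// Standard ISO-8601 week number computation (week containing the
// first Thursday of the year is week 1).
function isoWeek(d) {
  const t = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  t.setUTCDate(t.getUTCDate() + 4 - (t.getUTCDay() || 7)); // move to Thursday
  const yearStart = new Date(Date.UTC(t.getUTCFullYear(), 0, 1));
  return Math.ceil(((t - yearStart) / 86400000 + 1) / 7);
}

// Derive aggregation-bucket keys like "2019w24" and "2019m06".
// Note: around New Year the ISO week-year can differ from the
// calendar year; this sketch ignores that edge case.
function bucketKeys(epochSeconds) {
  const d = new Date(epochSeconds * 1000);
  const pad = n => String(n).padStart(2, "0");
  return {
    week: d.getUTCFullYear() + "w" + pad(isoWeek(d)),
    month: d.getUTCFullYear() + "m" + pad(d.getUTCMonth() + 1)
  };
}
```

On each write you would compute these keys and update the `…_first`/`…_last` entries alongside the new sample.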
This sounds incredibly inefficient to folks who come from a relational/SQL background, but it is actually very common in NoSQL databases. By making your write operations do some extra work and storing some duplicate data, your read operations become massively more scalable.
For some more information on these types of data modeling choices, I recommend:
reading NoSQL data modeling
watching Firebase for SQL developers
watching Getting to know Cloud Firestore, which may be for a different Firebase Database, but many of the principles apply equally.
I am using Parse Server, which runs on MongoDB.
Let's say I have collections User and Comment and a join table of user and comment.
User can like a comment, which creates a new record in a join table.
Specifically in Parse Server, join table can be defined using a 'relation' field in the collection.
Now when I want to retrieve all comments, I also need to know, whether each of them is liked by the current user. How can I do this, without doing additional queries?
You might say I could create an array field likers in the Comment table and use $elemMatch, but that doesn't seem like a good idea, because a comment can potentially have thousands of likes.
My idea, but I hope there could be a better solution:
I could create an array field someLikers, a relation (join table) field allLikers and a number field likesCount in Comment table. Then put first 100 likers in both someLikers and allLikers and additional likers only in the allLikers. I would always increment the likesCount.
Then when querying a list of comments, I would implement the call with $elemMatch, which would tell me whether the current user is inside someLikers. When I would get the comments, I would check whether some of the comments have likesCount > 100 AND $elemMatch returned null. If so, I would have to run another query in the join table, looking for those comments and checking (querying by) whether they are liked by the current user.
Is there a better option?
Thanks!
I'd advise against directly accessing MongoDB unless you absolutely have to; after all, the way collections and relations are built is an implementation detail of Parse and in theory could change in the future, breaking your code.
Even though you want to avoid multiple queries, I suggest doing just that (depending on your platform you might even be able to run the two Parse queries in parallel):
The first one is the query on Comment for getting all comments you want to display; assuming you have some kind of Post for which comments can be written, the query would find all comments referencing the current post.
The second query is again on Comment, but this time:
constrained to the comments retrieved in the first query, e.g.: containedIn("objectId", arrayOfCommentIDs)
and constrained to the comments having the current user in their likers relation, e.g.: equalTo("likers", currentUser)
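Merging the two result sets client-side could then look like this (a plain-JS sketch; the function and field names are illustrative, with `objectId` being Parse's standard id field):

```javascript
// comments: results of the first query (all comments for the post).
// likedComments: results of the second query (only the comments the
// current user likes). Annotate each comment with a likedByMe flag.
function annotateLikes(comments, likedComments) {
  const likedIds = new Set(likedComments.map(c => c.objectId));
  return comments.map(c => Object.assign({}, c, {
    likedByMe: likedIds.has(c.objectId)
  }));
}
```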
Well, a join collection is not really a NoSQL way of thinking ;-)
I don't know ParseServer, so below is just based on pure MongoDB.
What I would do is, in the Comment document, use an array of ObjectIds, one for each user who likes the comment.
Sample document layout
{
    "_id" : ObjectId(""),
    "name" : "Comment X",
    "liked" : [
        ObjectId(""),
        ....
    ]
}
Then use an aggregation to get the data. I assume you have the _id of the comment and you know the _id of the user.
The following aggregation returns the comment with a like count and a boolean which indicates the user liked the comment.
db.Comment.aggregate([
    {
        $match: {
            _id : ObjectId("your commentId")
        }
    },
    {
        $project: {
            _id : 1,
            name : 1,
            number_of_likes : { $size : "$liked" },
            user_liked : {
                $gt: [
                    {
                        $size: {
                            $filter: {
                                input: "$liked",
                                as: "like",
                                cond: { $eq: ["$$like", ObjectId("your userId")] }
                            }
                        }
                    },
                    0
                ]
            }
        }
    }
]);
This returns:
{
    "_id" : ObjectId(""),
    "name" : "Comment X",
    "number_of_likes" : NumberInt(7),
    "user_liked" : true
}
Hope this is what you're after.
I have a MEAN stack application.
In my database, a document has a mongo ObjectId like such :
{ "_id" : ObjectId("57e15b1009cb82cafafafd73"), "name" : "Hello", "artist_id" : "world", "year" : "2000" }
But when I load the document in my front end, the _id gets converted to a string, and my object looks like this when logged in the browser:
{ "_id" : "57e15b1009cb82cafafafd73", "name" : "Hello", "artist_id" : "world", "year" : "2000" }
What is annoying is that when I want to modify my database from my front end (update or delete an existing document), I have to convert the _id from string to ObjectId in order to target the document in my database...
So in my node application, I have to systematically massage the _id with new ObjectId(stringId) because the Id's sent by the browser are strings...
Obviously I'm missing out on something.
How can I make things more elegant?
It is normal to convert the string back to an ObjectId using ObjectId(): when a document is sent to the web, it is serialized as a JSON string, so the _id is no longer an ObjectId. When you use an ObjectId taken from a document on the server side, you don't need to convert it.
You always need the ObjectId() constructor when the value you have is a string but the stored _id is an ObjectId.
Which module do you use for MongoDB? Mongoose, I think?
If you are using mongoose then you do not have to convert your _id into ObjectId, just run the query with string.
db.collection.find({_id: "57e15b1009cb82cafafafd73"});
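One way to keep the conversion out of every route is a small helper at the API boundary; a sketch (the helper name is hypothetical, and the 24-hex-character check mirrors the common case handled by the official driver's ObjectId.isValid):

```javascript
// Validate a client-supplied id string before using it in a query.
// With the official mongodb driver you would then wrap the validated
// string: new ObjectId(id).
const OBJECT_ID_PATTERN = /^[0-9a-fA-F]{24}$/;

function isValidObjectIdString(id) {
  return typeof id === "string" && OBJECT_ID_PATTERN.test(id);
}
```

Rejecting malformed ids up front also avoids the exception that new ObjectId() throws on invalid input.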
Thanks for your replies.
So an obvious thing that's going to happen in a MEAN application is that the ObjectIds of the documents will be converted to strings as they are consumed as JSON objects.
That's going to require heavy use of new ObjectId() when writing objects whose ID was generated by Mongo back to the database from the frontend...
Any other option than using Mongoose? I'm very happy using the standard Node Mongo driver so far, apart from this string-conversion issue...
I have a cloudant DB where each document looks like:
{
    "_id": "2015-11-20_attr_00",
    "key": "attr",
    "value": "00",
    "employeeCount": 12,
    "timestamp": "2015-11-20T18:16:05.366Z",
    "epocTimestampMillis": 1448043365366,
    "docType": "attrCounts"
}
For a given attribute there is an employee count. As you can see, I have a record for the same attribute every day. I am trying to create a view or index that will give me the latest record for this attribute. Meaning: if I inserted a record on 2015-10-30 and another on 2015-11-10, the one returned to me should be just the employee count for the record with timestamp 2015-11-10.
I have tried a view, but I am getting all the entries for each attribute, not just the latest. I did not look at indexes because I thought they are not precalculated. I will be querying this from the client side, so having it precalculated (like views are) is important.
Any guidance would be most appreciated. Thank you.
I created a test database you can see here. Just make sure that when you insert your JSON documents into Cloudant (or CouchDB), your timestamps are not strings but JavaScript date objects:
https://examples.cloudant.com/latestdocs/_all_docs?include_docs=true
I built a search index like this (name the design doc "summary" and the search index "latest"):
function (doc) {
    if (doc.docType == "totalEmployeeCounts" && doc.key == "div") {
        index("division", doc.value, {"store": true});
        index("timestamp", doc.timestamp, {"store": true});
    }
}
Then here's a query that will return only the latest record for each division. Note that the limit value will apply to each group, so with limit=1, if there are 4 groups you will get 4 documents not 1.
https://examples.cloudant.com/latestdocs/_design/summary/_search/latest?q=*:*&limit=1&group_field=division&include_docs=true&sort_field=-timestamp
Indexing a timestamp as a string is not recommended.
Reference:
https://cloudant.com/blog/defensive-coding-in-mapindex-functions/#.VvRVxtIrJaT
I had the same problem. I converted the timestamp value to milliseconds (a number) and then indexed that value.
var millis= Date.parse(timestamp);
index("millis",millis,{"store": false});
You can use the same query Raj suggested, but with the "millis" field instead of the timestamp.
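The conversion can be checked locally; note that the millis value below matches the epocTimestampMillis field in the sample document above. A plain Node sketch (the index() function only exists inside a Cloudant/CouchDB design-document map function, so it appears here in a comment):

```javascript
// Convert an ISO-8601 timestamp string to epoch milliseconds.
// Numbers sort correctly in a search index, unlike date strings.
function millisFromTimestamp(timestamp) {
  return Date.parse(timestamp);
}

// Inside the design document, the map function would look like:
// function (doc) {
//   if (doc.docType == "attrCounts") {
//     index("millis", Date.parse(doc.timestamp), {"store": false});
//   }
// }
```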