What's the difference between findAndModify and update in MongoDB?

I'm a little confused by the findAndModify method in MongoDB. What's its advantage over the update method? To me, it seems that it just returns the item first and then updates it. But why do I need to return the item first? I read MongoDB: The Definitive Guide, and it says that findAndModify is handy for manipulating queues and performing other operations that need get-and-set style atomicity. But I didn't understand how it achieves this. Can somebody explain this to me?

If you fetch an item and then update it, there may be an update by another thread between those two steps. If you update an item first and then fetch it, there may be another update in-between and you will get back a different item than what you updated.
Doing it "atomically" means you are guaranteed that you are getting back the exact same item you are updating - i.e. no other operation can happen in between.
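A plain-Python sketch of that guarantee, with a hypothetical Counter class standing in for the server (the lock plays the role of the per-document atomicity MongoDB provides; none of these names are MongoDB API):

```python
import threading

class Counter:
    """Toy stand-in for a document store: increment and read in one atomic step."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def find_and_modify(self):
        # Atomic get-and-set: no other update can interleave between
        # the read-modify-write and the value we hand back.
        with self._lock:
            self._value += 1
            return self._value

counter = Counter()
results = []
results_lock = threading.Lock()

def worker():
    v = counter.find_and_modify()
    with results_lock:
        results.append(v)

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread saw a distinct value; no increment was lost or observed twice.
assert sorted(results) == list(range(1, 101))
```

Each caller is guaranteed to get back the exact value its own increment produced, which is the property a separate fetch-then-update (or update-then-fetch) cannot give you.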

findAndModify returns the document, update does not.
If I understood Dwight Merriman (one of the original authors of MongoDB) correctly, using update to modify a single document (i.e. {multi: false}) is also atomic. Currently, it should also be faster than doing the equivalent update using findAndModify.

From the MongoDB docs (emphasis added):
By default, both operations modify a single document. However, the update() method with its multi option can modify more than one document.
If multiple documents match the update criteria, for findAndModify(), you can specify a sort to provide some measure of control on which document to update.
With the default behavior of the update() method, you cannot specify which single document to update when multiple documents match.
By default, the findAndModify() method returns the pre-modified version of the document. To obtain the updated document, use the new option.
The update() method returns a WriteResult object that contains the status of the operation. To return the updated document, use the find() method. However, other updates may have modified the document between your update and the document retrieval. Also, if the update modified only a single document but multiple documents matched, you will need to use additional logic to identify the updated document.
Before MongoDB 3.2 you cannot specify a write concern to findAndModify() to override the default write concern whereas you can specify a write concern to the update() method since MongoDB 2.6.
When modifying a single document, both findAndModify() and the update() method atomically update the document.

One useful class of use cases is counters and similar operations. For example, take a look at this code (one of the MongoDB tests):
find_and_modify4.js.
Thus, with findAndModify you increment the counter and get its incremented value in one step. Compare: if you (A) perform this operation in two steps and somebody else (B) does the same operation between your steps, then A and B may get the same counter value instead of two different ones (just one example of the possible issues).
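That race can be replayed deterministically in plain Python (a sketch, not MongoDB code; the find_and_modify function below is a hypothetical stand-in for the server-side atomic step):

```python
# Deterministic replay of the two-step race: A and B both read the counter
# before either writes, so they end up with the same "incremented" value.
counter = {"n": 0}

a_read = counter["n"]          # step 1 for A
b_read = counter["n"]          # B sneaks in and reads the same value
counter["n"] = a_read + 1      # step 2 for A
counter["n"] = b_read + 1      # step 2 for B

assert a_read + 1 == b_read + 1 == 1   # both believe the counter is 1
assert counter["n"] == 1               # one increment was lost

# With an atomic increment-and-return, the two updates cannot interleave:
counter = {"n": 0}

def find_and_modify(doc):
    doc["n"] += 1              # read-modify-write as one indivisible step here
    return doc["n"]

assert find_and_modify(counter) == 1   # A's increment
assert find_and_modify(counter) == 2   # B gets a distinct value
```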

This is an old question but an important one and the other answers just led me to more questions until I realized: The two methods are quite similar and in many cases you could use either.
Both findAndModify and update perform atomic changes within a single request, such as incrementing a counter; in fact the <query> and <update> parameters are largely identical.
With both, the atomic change takes place directly on a document matching the query when the server finds it, i.e. an internal write lock is held on that document for the fraction of a millisecond it takes the server to confirm the query matches and apply the update.
There is no system-level write lock or semaphore which a user can acquire. Full stop. MongoDB deliberately doesn't make it easy to check out a document then change it then write it back while somehow preventing others from changing that document in the meantime. (While a developer might think they want that, it's often an anti-pattern in terms of scalability and concurrency ... as a simple example imagine a client acquires the write lock then is killed while holding it. If you really want a write lock, you can make one in the documents and use atomic changes to compare-and-set it, and then determine your own recovery process to deal with abandoned locks, etc. But go with caution if you go that way.)
From what I can tell there are two main ways the methods differ:
If you want a copy of the document as of when your update was made: only findAndModify allows this, returning either the original (default) or the new record after the update, as mentioned. With update you only get a WriteResult, not the document, and of course reading the document immediately before or after your update doesn't guard against another process changing the record in between your read and your update.
If there are potentially multiple matching documents: findAndModify only changes one, and lets you customize the sort to indicate which one should be changed; update can change them all with multi (although it defaults to just one), but does not let you say which one.
Thus HungryCoder's point makes sense: update is more efficient where you can live with its restrictions (e.g. you don't need the document back, or you are changing multiple records). But for many atomic updates you do want the document, and findAndModify is necessary there.

We used findAndModify() for counter operations (inc or dec) and other single-field mutation cases. When migrating our application from Couchbase to MongoDB, I found this API could replace the code that did getAndLock(), modified the content locally, called replace() to save, and then get() again to fetch the updated document. With MongoDB, I just used this single API, which returns the updated document.

Related

Want to capture fields which get updated in Salesforce

I wish to create a generic component which can save the Object Name and field Names with old and new values in a BigObject.
The brute-force approach is: on every update of each object, get the field API names using describe and compare the old and new values of those fields; if a field was modified, insert a record into the new BigObject.
But that will consume a lot of CPU time, and I am looking for a more optimal way to handle this.
Any suggestions are appreciated.
Well, do you have any code written already? Maybe benchmark it and then see what you can optimise instead of overdesigning it from the start... Keep it simple, write a test harness, and then try to optimise (without breaking the unit tests).
Couple random ideas:
You'd be doing that in a trigger? Then your "describe" could happen only once. You don't need to describe every single field; you need only one describe call outside the trigger's main loop.
Set<String> fieldNames = Account.sObjectType.getDescribe().fields.getMap().keyset();
System.debug(fieldNames);
This will get you "only" field names but that's enough. You don't care whether they're picklists or dates or what. Use that with generic sObject.get('fieldNameHere') and it's a good start.
or maybe without describe at all. sObject's getPopulatedFieldsAsMap() will give you cool Map which you can easily iterate & compare.
or JSON.serialize the old & new versions of the object, and if they aren't identical, you know what to do. No idea whether they'll always serialize with the same field order, though, so comparing the maps directly might be safer.
do you really need to hand-craft this field history tracking like that? You get 1M BigObject records of free storage, but it could fill up really easily in a busier SF org, especially if you have workflows, processes, or other triggers that translate to multiple updates (= multiple trigger runs) in the same transaction. Perhaps normal field history tracking + Chatter feed tracking + even Salesforce Shield (it comes with 60 more tracked fields, I think) would be more sensible for your business needs.
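The "compare the two versions field by field" idea from the list above can be sketched in plain Python (not Apex; changed_fields is a hypothetical helper, and the dicts stand in for what getPopulatedFieldsAsMap() would hand you for the old and new record):

```python
def changed_fields(old, new):
    """Return {field: (old_value, new_value)} for every field that differs."""
    fields = set(old) | set(new)
    return {
        f: (old.get(f), new.get(f))
        for f in fields
        if old.get(f) != new.get(f)
    }

# Hypothetical old/new versions of an Account record:
old = {"Name": "Acme", "Phone": "555-0100", "Industry": "Retail"}
new = {"Name": "Acme", "Phone": "555-0199", "Industry": "Retail", "Rating": "Hot"}

diff = changed_fields(old, new)
assert diff == {"Phone": ("555-0100", "555-0199"), "Rating": (None, "Hot")}
```

Only the entries in the diff would need to be written to the BigObject, which is the whole point of comparing maps instead of describing every field.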

What is expected behavior of cloudant PUT with rev for non-existent item?

In a Cloudant database, what is the expected behavior of calling PUT on a document that doesn't exist with a revision defined?
The documentation says:
To update (or create) a document, make a PUT request with the updated
JSON content and the latest _rev value (not needed for creating new
documents) to https://$USERNAME.cloudant.com/$DATABASE/$DOCUMENT_ID.
I had assumed that if I did provide a revision, that the db would detect that it was not a match and reject the request. In my test cases I have inconsistent behavior. Most of the time I get the expected 409, Document update conflict. However, occasionally, the document ends up getting created (201), and assigned the next revision.
My test consists of creating a document and then using that revision to update a different document.
POST https://{url}/{db} {_id: "T1"} - store the returned revision
PUT https://{url}/{db}/T2 {_rev: "<revision from step 1>"}
So if the revision returned was something like 1-79c389ffdbcfe6c33ced242a13f2b6f2, then in the cases where the PUT succeeds, it returns the next revision (like 2-76054ab954c0ef41e9b82f732116154b).
EDIT
If I simplify the test to one step, I can also get different results.
PUT https://{url}/{db}/DoesNotExist {_rev: "1-ffffffffffffffffffffffffffffffff"}
Cloudant is an eventually consistent database, and you're seeing the effects of that. Most of the time the cluster has time to reach a consistent state between your two API calls, and you'll get the expected update conflict. Sometimes you hit the inconsistency window: your first call has not yet been replicated around the cluster, and your second call hits a different node. It's a valuable insight: you can't rely on reading your own writes.
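For reference, here is a single-node toy of the rev check in plain Python, showing the consistent-case behavior the asker expected (TinyStore is a hypothetical class; a real cluster can deviate from this during the inconsistency window, which is exactly the point of the answer above):

```python
import uuid

class TinyStore:
    """Single-node toy of Cloudant's MVCC check on PUT."""
    def __init__(self):
        self.docs = {}  # doc_id -> current rev string

    def put(self, doc_id, rev=None):
        current = self.docs.get(doc_id)
        if current != rev:
            return 409                      # Document update conflict
        gen = 1 if rev is None else int(rev.split("-")[0]) + 1
        self.docs[doc_id] = f"{gen}-{uuid.uuid4().hex}"
        return 201

db = TinyStore()
assert db.put("T1") == 201                                            # create: no rev needed
assert db.put("T2", rev="1-ffffffffffffffffffffffffffffffff") == 409  # rev for a doc that doesn't exist
rev1 = db.docs["T1"]
assert db.put("T1", rev=rev1) == 201                                  # update with matching rev
assert db.docs["T1"].startswith("2-")                                 # next revision generation
```

On a consistent node, a PUT carrying a _rev for a nonexistent document always conflicts; the occasional 201 the asker saw is the eventual-consistency window, not this logic.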

Google AppEngine DataStore consistency

My current understanding of Google AppEngine's High Replication DataStore is the following:
Gets and puts of individual entities are always strongly consistent, i.e. once a put of this entry completes, no later get will ever return a version earlier than the completed put. Or, more precisely, as soon as any one get returns the new version, no later get will ever return the old version again.
Gets and puts of multiple entities are strongly consistent if they belong to the same ancestor group and are performed in a transaction. That is, if two entities are both modified by a put in one transaction and "simultaneously" read with a get in a different transaction, the get will return either the old versions of both entities or the new versions of both, depending on whether the put-transaction has completed at the time of the get; it will never return the old value of one entity and the new value of the other.
Queries with an ancestor filter can be chosen to be strongly or eventually consistent, where a strongly consistent query takes longer to complete, but will always return the "same" version (old or new) of all entities updated in the same transaction in this ancestor group and never some old and some new versions.
Queries that span ancestors are always eventually consistent, i.e. might return an old version of one result entity and a new version of another.
Did I get this right? Is this actually documented anywhere? (I only found some documentation about the query consistency here (between the first and second "Note") and here, but it doesn't talk about gets and puts...)
Yes, you're correct. They just word it slightly differently:
https://developers.google.com/appengine/docs/java/datastore/
Right at the beginning there are five point-form features. The last two describe your question, except that they refer to "reads" instead of "gets".
This probably adds to your confusion, but when they say "read" or "get", they really mean fetching an entity directly, by key or ID. If you call the Python get function with an attribute other than the key or ID, it actually issues a query, which is eventually consistent (unless it's an ancestor query).

solr 4's atomic updates insight

Are atomic updates significantly faster than fetching data from a source, building a whole new document, and indexing it? Basically, I would like to know how exactly Solr's atomic updates work.
It actually reindexes the whole document, see http://wiki.apache.org/solr/Atomic_Updates.
Atomic updates can be faster because they do not involve fetching the current document from Solr first and then reposting the modified document, so you save on network time. Internally, Solr will use the existing values for fields not specified in the atomic update (which is why you need to keep all fields stored).
Atomic updates also help you avoid conflicts, since you need not worry whether somebody else changed the document by the time you post your modified version. This problem could also be handled using optimistic concurrency.
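As a sketch of what such a request looks like, this builds the JSON body you would POST to Solr's /update handler; only the listed fields are changed, and Solr rebuilds the rest of the document from stored values (the field names and the _version_ value here are hypothetical):

```python
import json

# Atomic-update document: each field maps to a modifier instead of a plain value.
atomic_update = [{
    "id": "doc1",
    "price": {"set": 99},          # overwrite a field
    "popularity": {"inc": 1},      # increment a numeric field
    "tags": {"add": "updated"},    # append to a multivalued field
    "_version_": 12345,            # optional: optimistic concurrency check
}]

body = json.dumps(atomic_update)
assert '"set": 99' in body
```

If the optional _version_ is supplied and no longer matches the document's current version, Solr rejects the update, which is the optimistic-concurrency alternative mentioned above.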

Database Supporting Update by Function rather than by Value

Desired behavior:
In Clojure's implementation of Agents, to update an Agent, one does NOT send a new value. One sends a function, which is called on the old value, and the return value is set as the new value of the Agent.
This makes certain things easy: for example, if I have a queue, and I have two concurrent threads that both want to append to the queue (and I don't care which order they append), each thread can just fire off a (fn [q] (conj q new-value)) ... and it just works. Whereas, if it were updating by value, I'd have to do a compare-and-swap of some sort.
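The desired semantics can be sketched in plain Python (FnStore is a hypothetical class, not a real database; the lock stands in for whatever serialization the store would provide):

```python
import threading

class FnStore:
    """Toy document store where updates are functions of the old value."""
    def __init__(self):
        self._docs = {}
        self._lock = threading.Lock()

    def update(self, key, fn, default=None):
        # fn runs under the lock, so concurrent updaters serialize and
        # no compare-and-swap loop is needed on the caller's side.
        with self._lock:
            new = fn(self._docs.get(key, default))
            self._docs[key] = new
            return new

store = FnStore()
# Two callers appending to a queue, order unimportant:
store.update("q", lambda q: q + ["a"], default=[])
store.update("q", lambda q: q + ["b"], default=[])
assert sorted(store._docs["q"]) == ["a", "b"]
```

Each caller just sends a function of the old value, exactly the Agent-style interface the question asks for.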
Question:
Is there any database that supports this type of updating? For example, I was recently looking at MongoDB. However, MongoDB supports only $inc/$dec, and not arbitrary functions for updating the documents.
Thanks!
PS -- I don't need transactions / ACID / BASE / ... all I really want is a simple document store that supports updating via functions rather than values.
MongoDB actually supports many different modifier operations (full list here), and while $inc alone might be limiting, the specific example you give of adding to a queue could be handled using $push or $addToSet (depending on whether you want duplicates to appear in your queue or not, as $addToSet will only add a value if it doesn't already exist).
For more complex cases (supporting compare-and-swap semantics), you can look at the findAndModify command in MongoDB if you don't find something simpler that fits your needs exactly.
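The compare-and-swap idea findAndModify enables can be sketched in plain Python: include the expected old value in the query, so the update applies only if nothing changed in the meantime (find_and_modify here is a hypothetical stand-in for the server-side check, not the MongoDB API itself):

```python
def find_and_modify(doc, query, update):
    """Apply `update` to `doc` only if every field in `query` matches."""
    if all(doc.get(k) == v for k, v in query.items()):
        doc.update(update)
        return doc
    return None  # no match: someone changed the document first

doc = {"_id": 1, "state": "queued"}

# Claim the job only if it is still queued:
assert find_and_modify(doc, {"state": "queued"}, {"state": "running"}) is not None

# A second worker attempting the same claim loses the race:
assert find_and_modify(doc, {"state": "queued"}, {"state": "running"}) is None
```

This is also the core of the queue-manipulation use case from the first question on this page: the query acts as the "compare" and the update as the "swap", in one atomic server-side step.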
