What is CAS in NoSQL and how to use it?

Write operations in Couchbase accept a cas parameter (create and set). The result object returned by any non-fetching query also has a cas property. I Googled a bit and couldn't find a good conceptual article about it.
Could anyone tell me when to use CAS and how to do it? What should the common workflow of using CAS be?
My guess is that we need to fetch the CAS on the first write operation and then pass it along with the next write, updating it each time from the result's CAS. Correct me if I am wrong.

CAS actually stands for check-and-set, and is a method of optimistic locking. A CAS value is associated with each document and is updated whenever the document changes - a bit like a revision ID. The intent is that instead of pessimistically locking a document (with the associated lock overhead), you just read its CAS value and then only perform the write if the CAS matches.
The general use-case is:
Read an existing document, and obtain its current CAS (get_with_cas)
Prepare a new value for that document, assuming no-one else has modified the document (and hence caused the CAS to change).
Write the document using the check_and_set operation, providing the CAS value from (1).
Step (3) will only succeed (perform the write) if the document is unchanged between (1) and (3), i.e. no other user has modified it in the meantime. If (3) does fail, you would typically retry the whole sequence (get_with_cas, modify, check_and_set).
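As a rough illustration, here is what that read-modify-retry loop might look like with the Python SDK (a minimal sketch: method and exception names vary by SDK version; in the 2.x Python SDK, a CAS mismatch on replace raises KeyExistsError):

from couchbase.bucket import Bucket
from couchbase.exceptions import KeyExistsError

bucket = Bucket('couchbase://localhost/default')

def cas_update(key, modify):
    while True:
        rv = bucket.get(key)          # (1) read the document and its current CAS
        new_value = modify(rv.value)  # (2) compute the new value locally
        try:
            # (3) the write succeeds only if the CAS is still the one we read
            bucket.replace(key, new_value, cas=rv.cas)
            return new_value
        except KeyExistsError:
            pass  # someone else modified the document first; retry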
There's a much more detailed description of check-and-set in the Couchbase Developer Guide under Concurrent Document Mutations.

Related

Prevent overwriting keys in Key-Value datastores

A sequence diagram (omitted here) shows two clients, Client_A and Client_B, storing values into a key-value datastore.
The problem I'm trying to solve is how to prevent overwriting keys. The applications (Client_A and Client_B) try to prevent this by checking whether a key exists before storing it. The issue is that if both clients get the same "does not exist" result, either of them can then overwrite the other's value.
What strategy can a database client use to prevent this from happening?
A "key-value store", as it's usually defined, doesn't store duplicate keys at all. If two clients write to the same key, then only one value is stored -- the one from which ever client wrote "latest".
In order to reliably update values in a consistent way (where the new value depends on the old value associated with a key, or even whether or not there was an old value), your key-value store needs to support some kinds of atomic operations other than simple get and set.
Memcached, for example, supports an atomic compare-and-set operation that will only set a value if it hasn't been set by someone else since you read it. Amazon's DynamoDB supports atomic transactions, atomic counters, etc.
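For the specific "create only if absent" case, memcached's add command is enough on its own, since it is atomic on the server. A minimal sketch with the pymemcache client (the client library choice and the key name are just for illustration):

from pymemcache.client.base import Client

client = Client(('localhost', 11211), default_noreply=False)

# add stores the value only if the key does not already exist, and it is
# atomic on the server, so exactly one of two racing clients sees True here.
if not client.add('user:42', b'payload'):
    # another client created the key first; handle the conflict instead
    ...

In a relational database, the equivalent pattern is a transaction that locks the row between the read and the write: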
START TRANSACTION;
SELECT ... FOR UPDATE;
-- take action depending on the result
UPDATE ...;
COMMIT;
The "transaction" makes the pair. SELECT and UPDATE, "atomic".
Write this sort of code for any situation where another connection can sneak in and mess up what you are doing.
Note: The code written here uses MySQL's InnoDB syntax and semantics; adjust accordingly for other brands.

Flink job production readiness - validate UUIDs assigned to all operators

The Flink production readiness checklist (https://ci.apache.org/projects/flink/flink-docs-stable/ops/production_ready.html) suggests assigning UUIDs to all operators. I'm looking for a way to validate that all operators in a given job graph have been assigned UUIDs -- ideally to be used as a pre-deployment check in our CI flow.
We already have a process in place that uses the PackagedProgram class to get a JSON-formatted 'preview plan'. Unfortunately, that does not include any information about the assigned UUIDs (or lack thereof).
Digging into the code behind generating the JSON preview plan (PlanJSONDumpGenerator), I can trace how it visits each of the nodes as a DumpableNode<?>, but from there, I can't find anything that leads me to the definition of the operator with its UUID.
When defining the job (using the DataStream API), the UUID is assigned on a StreamTransformation<T>. Is there any way to connect the data in the PackagedProgram back to the original StreamTransformation<T>s to get the UUID?
Or is there a better approach to doing this type of validation?

Working with accumulated bucket values in Entity Framework

I'm attempting to find design patterns/strategies for working with accumulated bucket values in a database where concurrency can be a problem. I don't know the proper search terms to use to find information on the topic.
Here's my use case (I'm using code-first Entity Framework, so EF-specific advice is welcome):
I have a database table that contains a quantity value. This quantity can be incremented or decremented by multiple clients at the same time. (I call it a "bucket" value because it is a bucket for a bunch of accumulated activity, as opposed to the other strategy where you keep all the activity and calculate the value from it.) I am looking for strategies for ensuring the accuracy of this "bucket" value (within the context of EF), taking into account that multiple clients may attempt to change it simultaneously (concurrency).
The answer "you must track activity and derive your value from that activity" is acceptable, but I want to consider all bucket-centric solutions as well.
I am looking for advice on search terms to use to find good information on this topic as well as specific links.
Edit: You may assume that all activity is relative to the "bucket" value (no clients will be making an absolute change to the value; they will only increment or decrement).
Without directly coding the SQL queries that update the buckets, you would have to use client-side optimistic concurrency. See Entity Framework Optimistic Concurrency Patterns. A client whose update would overwrite another's change gets an exception, after which you can reload the current value and retry. This pattern requires a ROWVERSION column on the target table.
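The resulting retry loop looks roughly like this (sketched in Python with a plain version column and sqlite3 purely to keep it self-contained; in EF the same flow is driven by catching DbUpdateConcurrencyException, and the table and column names here are made up):

import sqlite3

def increment_bucket(conn, bucket_id, delta):
    while True:
        quantity, version = conn.execute(
            "SELECT quantity, version FROM buckets WHERE id = ?",
            (bucket_id,)).fetchone()
        # The UPDATE matches only while the row still carries the version we
        # read; a concurrent writer bumps the version and rowcount comes back 0.
        cur = conn.execute(
            "UPDATE buckets SET quantity = ?, version = version + 1"
            " WHERE id = ? AND version = ?",
            (quantity + delta, bucket_id, version))
        conn.commit()
        if cur.rowcount == 1:
            return  # our write won
        # lost the race; re-read and try again

conn = sqlite3.connect('app.db')
increment_bucket(conn, bucket_id=1, delta=+5)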
If you code the updates in T-SQL, you can write an atomic update, something like:
update foo with (updlock)
set bucket_a = bucket_a + 1
output inserted.*
where id = @id
(The updlock hint isn't strictly necessary in this query, but it is good form any time you want to ensure this kind of isolation.)

solr 4's atomic updates insight

Are atomic updates significantly faster than fetching the data from a source, building a whole new document, and indexing it? Basically, I would like to know how exactly Solr's atomic updates work.
It actually reindexes the whole document, see http://wiki.apache.org/solr/Atomic_Updates.
An atomic update can be faster because it does not involve fetching the current document from Solr first and then reposting the modified document, so you save on network time. Internally, Solr uses the existing values for fields not specified in the atomic update (which is why you need to keep all values as stored).
Atomic updates also help you avoid conflicts, since you need not worry about somebody else changing the document before you post your modified version. That problem could also be dealt with by using optimistic concurrency.
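As a concrete illustration, an atomic update is just an ordinary update request in which the changed fields carry modifiers such as set and inc (a minimal sketch using the requests library; the core name and the fields are made up):

import requests

doc = {
    "id": "book-42",
    "price": {"set": 9.99},    # replace this field's value
    "popularity": {"inc": 1},  # have Solr increment it server-side
}
# Solr fills in every other field from its stored values, then reindexes
# the whole document.
requests.post(
    "http://localhost:8983/solr/mycore/update?commit=true",
    json=[doc],
)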

What's the difference between findAndModify and update in MongoDB?

I'm a little bit confused by the findAndModify method in MongoDB. What's the advantage of it over the update method? For me, it seems that it just returns the item first and then updates it. But why do I need to return the item first? I read MongoDB: The Definitive Guide, and it says that it is handy for manipulating queues and performing other operations that need get-and-set style atomicity. But I didn't understand how it achieves this. Can somebody explain this to me?
If you fetch an item and then update it, there may be an update by another thread between those two steps. If you update an item first and then fetch it, there may be another update in-between and you will get back a different item than what you updated.
Doing it "atomically" means you are guaranteed that you are getting back the exact same item you are updating - i.e. no other operation can happen in between.
findAndModify returns the document, update does not.
If I understood Dwight Merriman (one of the original authors of MongoDB) correctly, using update to modify a single document (i.e. {multi: false}) is also atomic. Currently, it should also be faster than doing the equivalent update using findAndModify.
From the MongoDB docs (emphasis added):
By default, both operations modify a single document. However, the update() method with its multi option can modify more than one document.
If multiple documents match the update criteria, for findAndModify(), you can specify a sort to provide some measure of control on which document to update.
With the default behavior of the update() method, you cannot specify which single document to update when multiple documents match.
By default, findAndModify() method returns the pre-modified version of the document. To obtain the updated document, use the new option.
The update() method returns a WriteResult object that contains the status of the operation. To return the updated document, use the find() method. However, other updates may have modified the document between your update and the document retrieval. Also, if the update modified only a single document but multiple documents matched, you will need to use additional logic to identify the updated document.
Before MongoDB 3.2 you cannot specify a write concern to findAndModify() to override the default write concern whereas you can specify a write concern to the update() method since MongoDB 2.6.
When modifying a single document, both findAndModify() and the update() method atomically update the document.
One useful class of use cases is counters. For example, take a look at this code (one of the MongoDB tests):
find_and_modify4.js.
Thus, with findAndModify you increment the counter and get its incremented value in one step. Compare: if you (A) perform this operation in two steps and somebody else (B) does the same operation between your steps, then A and B may get the same last counter value instead of two different ones (just one example of the possible issues).
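That counter pattern looks like this with PyMongo's find_one_and_update, the driver-level spelling of findAndModify (a minimal sketch; the collection and field names are made up):

from pymongo import MongoClient, ReturnDocument

counters = MongoClient().test.counters

def next_sequence(name):
    # The $inc and the read happen as one atomic server-side operation, so
    # two racing callers can never receive the same value.
    doc = counters.find_one_and_update(
        {"_id": name},
        {"$inc": {"seq": 1}},
        upsert=True,
        return_document=ReturnDocument.AFTER,
    )
    return doc["seq"]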
This is an old question, but an important one, and the other answers just led me to more questions until I realized: the two methods are quite similar, and in many cases you could use either.
Both findAndModify and update perform atomic changes within a single request, such as incrementing a counter; in fact the <query> and <update> parameters are largely identical.
With both, the atomic change takes place directly on a document matching the query when the server finds it, i.e. an internal write lock is held on that document for the fraction of a millisecond that the server takes to confirm the query matches and to apply the update.
There is no system-level write lock or semaphore which a user can acquire. Full stop. MongoDB deliberately doesn't make it easy to check out a document, change it, and write it back while somehow preventing others from changing the document in the meantime. (While a developer might think they want that, it's often an anti-pattern in terms of scalability and concurrency ... as a simple example, imagine a client acquires the write lock and is then killed while holding it. If you really want a write lock, you can make one in the documents themselves and use atomic changes to compare-and-set it, then define your own recovery process to deal with abandoned locks, etc. But go with caution if you go that way.)
From what I can tell there are two main ways the methods differ:
If you want a copy of the document as of when your update was made: only findAndModify allows this, returning either the original (the default) or the new record after the update, as mentioned above; with update you only get a WriteResult, not the document, and of course reading the document immediately before or after doesn't guard against another process changing the record in between your read and your update
If there are potentially multiple matching documents: findAndModify only changes one, and lets you specify a sort to indicate which one should be changed; update can change all of them with multi (although it defaults to just one), but does not let you say which one
Thus it makes sense what HungryCoder says: update is more efficient where you can live with its restrictions (e.g. you don't need the document back, or you are changing multiple records). But for many atomic updates you do want the document, and findAndModify is necessary there.
We used findAndModify() for counter operations (inc or dec) and other single-field mutation cases. Migrating our application from Couchbase to MongoDB, I found that this one API replaced code which did getAndLock(), modified the content locally, called replace() to save, and then get() again to fetch the updated document back. With MongoDB, I just used this single API, which returns the updated document.
