DB concurrency - Checksum vs Timestamp - database

We are in the process of moving to a new database system. The database is of ISAM type, and the API does not provide a way to detect whether a record has been changed by another user.
Therefore I need to implement this functionality on the client side. I am currently calculating a checksum using the before and after record buffers and comparing the result.
My question is, since there is a chance that the same checksum value can be calculated for two different records, would it be better to have a timestamp field instead?
How is record change detection normally handled?
Thank you.

Better than a timestamp, which is unreliable, would be an integer version field that your client code can use to detect concurrent changes in the DB.
This is called "optimistic locking": your transaction doesn't lock any DB resources until it's time to update the DB. At that moment it locks the needed DB resources (e.g., tables), reads the version from the DB, and checks whether it has the expected value. If yes, it is safe to update the database along with the version number. If no, there was a concurrent update and the transaction needs to abort.
Of course, if you end up with a lot of aborts, you need "pessimistic locking" instead, where your app locks the needed resources for the whole transaction. If your DB driver does not support this, you'll need some other shared lock, such as a mutex. This approach decreases throughput in most cases, since concurrent transactions must wait until one transaction frees the locked resources.
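For illustration, here is a minimal sketch of the version-column approach, assuming SQLite and a hypothetical `records` table (names and schema are made up for the example, not part of your API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT, version INTEGER NOT NULL)")
conn.execute("INSERT INTO records (id, data, version) VALUES (1, 'original', 1)")
conn.commit()

def update_record(conn, record_id, new_data, expected_version):
    """Write the record only if nobody has bumped the version since we read it."""
    cur = conn.execute(
        "UPDATE records SET data = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_data, record_id, expected_version),
    )
    conn.commit()
    if cur.rowcount == 0:
        # Zero affected rows means someone else updated the record first.
        raise RuntimeError("Concurrent update detected - reload the record and retry")

# Read the row together with its version...
row = conn.execute("SELECT data, version FROM records WHERE id = 1").fetchone()
# ...and pass that version back when writing, so the update is conditional.
update_record(conn, 1, "changed by us", expected_version=row[1])
```

The same idea works with any ISAM-style store that lets you compare a field before rewriting the record; the version check replaces the checksum comparison and cannot produce false negatives the way a colliding checksum can.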

Related

Simultaneous access to database; keeping data consistent across all connections

I made a database system right here:
(Comments on the normalization are highly appreciated as well - I have a feeling you'll hate me for what I did with tblIsolateSensitivity; tblHAIFile only has a bunch of Boolean fields and foreign keys.)
Let's say we have X terminals accessing the database. X1 edits Patient 01, X2 edits Patient 02, and X3 deletes Patient 01, all at the same time. How can I ensure that the data across the three terminals stays up-to-date and consistent?
At the moment, I am querying the data only when a query needs to be done (i.e., when the user searches for a record, or when the program needs to verify something against a database record), meaning the data is only as fresh as the most recent query the user makes. This makes it difficult to ensure that the data is up-to-date on all terminals. Of course, for deleted entries I have error handling, but for the rest, well...
So, my question is: how do you typically handle this kind of situation? Is there a name for this concept so that I can look it up and read up on it?
From a database design perspective, you should read up on optimistic concurrency and pessimistic concurrency. These are two options for making sure that you either don't have two users modifying the same record at the same time, or at least if you do allow that, the conflict is detected so it can be resolved.
The basic idea behind optimistic concurrency is that you allow multiple users to view and modify the data at the same time, on the assumption that this will be relatively rare. However, before any user writes changes to the data, a check is made to ensure that the underlying data hasn't changed since it was originally read. In some cases you do this manually with a read before update, checking each column value against a cached value. However, that is cumbersome. Some DBMS systems have features that make this simpler. For example, SQL Server has the ROWVERSION (formerly known as TIMESTAMP) data type, which lets you check easily using a single value whether someone else has changed a record since the last time you read it.
The basic idea behind pessimistic concurrency is that you put a lock on a record in the expectation that you're going to change it. While you hold the lock, the DBMS will prevent anyone else from getting their own lock.
The advantage of optimistic concurrency is that it's pretty lightweight, doesn't interfere too much with your application, and lets you manually (or automatically) resolve any conflicts on those rare occasions when they happen. You also don't have to worry about someone reading a record, locking it, and then going home for the weekend.
The advantage of pessimistic concurrency is that it prevents collisions, but it can stop one user from working while they wait for another to finish what they're doing.
From the perspective of notifying users when records change in the background (i.e. they're changed by another user) that isn't a database design feature. It may be a feature of your application logic or of your application's data access layer.

Eventual consistency with both database and message queue records

I have an application where I need to store some data in a database (MySQL, for instance) and then publish some data to a message queue. My problem is: if the application crashes after the write to the database, my data will never be written to the message queue and will be lost (thus the eventual consistency of my system is not guaranteed).
How can I solve this problem ?
In this particular case, the answer is to load the queue data from the database.
That is, you write the messages that need to be queued to the database, in the same transaction that you use to write the data. Then, asynchronously, you read that data from the database, and write it to the queue.
See Reliable Messaging without Distributed Transactions, by Udi Dahan.
If the application crashes, recovery is simple -- during restart, you query the database for all unacknowledged messages, and send them again.
Note that this design really expects the consumers of the messages to be designed for at least once delivery.
I am assuming that you have a lossless message queue, where once you get a confirmation for writing data, the queue is guaranteed to have the record.
Basically, you need a loop with a transaction that can roll back or a status in the database. The pseudo code for a transaction is:
Begin transaction
Insert into database
Write to message queue
When message queue confirms, commit transaction
Personally, I would probably do this with a status:
Insert into database with a status of "pending" (or something like that)
Write to message queue
When message confirms, change status to "committed" (or something like that)
In the case of recovery from failure, you may need to check the message queue to see if any "pending" records were actually written to the queue.
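As an illustration, here is a rough sketch of the status-based variant, assuming a hypothetical `orders` table and a `publish()` function standing in for whatever real queue client you use:

```python
import sqlite3, json

def publish(message: dict) -> None:
    """Placeholder for the real message-queue client; assumed to raise on failure."""
    print("published:", message)

conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS orders (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending')""")
conn.commit()

def save_and_publish(payload: dict) -> None:
    # 1. Insert the row with status 'pending' and commit.
    with conn:
        cur = conn.execute("INSERT INTO orders (payload) VALUES (?)",
                           (json.dumps(payload),))
        order_id = cur.lastrowid
    # 2. Write to the message queue.
    publish({"order_id": order_id, **payload})
    # 3. Once the queue confirms, flip the status to 'committed'.
    with conn:
        conn.execute("UPDATE orders SET status = 'committed' WHERE id = ?", (order_id,))

def recover_after_crash() -> None:
    # Any row still 'pending' may or may not have reached the queue,
    # so re-publish it and rely on idempotent (at-least-once) consumers.
    for order_id, payload in conn.execute(
            "SELECT id, payload FROM orders WHERE status = 'pending'").fetchall():
        publish({"order_id": order_id, **json.loads(payload)})
        with conn:
            conn.execute("UPDATE orders SET status = 'committed' WHERE id = ?", (order_id,))
```

The crash window between steps 2 and 3 is exactly why the consumers must tolerate duplicates.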
I'm afraid those answers (VoiceOfUnreason, Udi Dahan) just sweep the problem under the carpet. The problem under the carpet is: how should the movement of data from the database to the queue be designed so that the message is posted exactly once (without XA)? If you solve that, you can easily extend the concept with any additional business logic.
CAP theorem tells you the limits clearly.
XA transactions are not a 100% bulletproof solution, but they seem to me the best of all the alternatives I have seen.
Adding to what @Gordon Linoff said, assuming durable messaging (something like MSMQ?), the method/handler is going to be transactional, so if it all succeeds, the message will be written to the queue and the data to your view model; if it fails, everything fails...
To mitigate the ID issue you will need to use GUIDs instead of DB-generated keys (if you are using messaging you will need to remove your referential integrity anyway and introduce GUIDs as keys).
One more suggestion: don't update the database, but insert only/upsert (the pending row and then the completed row) and have the reader project the data based on the latest row (for example).
Writing the message as part of the transaction is a good idea, but it has multiple drawbacks if:
a. your database/language does not support transactions
b. transactions are time-consuming operations
c. you cannot afford to wait for the queue's response while responding to your service call
d. your database is already under stress, in which case writing messages will exacerbate the impact of the higher workload
In those cases, the best practice is to use database streams. Most modern databases support streams (DynamoDB, MongoDB, Oracle, etc.). You run a consumer of the database stream that reads from the stream and writes to the queue, invalidates caches, adds to a search indexer, and so on. Once all of these succeed, you mark the stream item as processed.
Pros of this approach
It works in the case of a multi-region deployment with a regional failure (you read from the regional stream and hydrate all of the regional data stores).
No overhead of writing extra records, and no performance bottlenecks from the queue.
You can use this pattern for other downstream consumers as well, such as caching, queuing, and search.
Cons
You may need to call multiple services to construct the appropriate message.
One database stream might not be sufficient to construct the appropriate message.
You must ensure the reliability of your streams (a Redis stream, for example, is not reliable).
NOTE: this approach also does not guarantee exactly-once semantics. The consumer logic should be idempotent and able to handle duplicate messages.
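A hedged sketch of what such a stream consumer might look like; `read_stream_batch`, `publish_to_queue`, `update_search_index`, and `mark_processed` are placeholders for whatever your database stream and downstream clients actually provide:

```python
import time

def read_stream_batch(after_position):
    """Placeholder: read change events from the database stream after a position.
    Each event is assumed to look like {"position": ..., "change": ...}."""
    return []

def publish_to_queue(event):      # placeholder for the real queue client
    ...

def update_search_index(event):   # placeholder for the real search indexer
    ...

def mark_processed(position):     # placeholder: persist the last handled position
    ...

def run_consumer(start_position=0, poll_seconds=1.0):
    position = start_position
    while True:
        for event in read_stream_batch(position):
            # Fan the change out to every downstream system first...
            publish_to_queue(event)
            update_search_index(event)
            # ...and only then record progress, so a crash re-delivers the event
            # (at-least-once; downstream handlers must be idempotent).
            mark_processed(event["position"])
            position = event["position"])
        time.sleep(poll_seconds)
```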

Concurrent editing of same data

I recently came across a case that makes me wonder if I'm a newbie or if something trivial has escaped me.
Suppose I have software, run by many users, that uses a table. When a user logs in to the app, a set of information from the table appears, and he just has to add to or correct some of it and save it. Now, if the software is run by many people, how can I guarantee he is the only one working with that particular record? In other words, how can I know the record is not selected and being worked on by two or more users at the same time? And please, I wouldn't like the answer to be "SELECT FOR UPDATE..."
because from what I've read it has too negative an impact on the database. Thanks to all of you. Keep up the good work.
This is something that is not solved primarily by the database. The database manages isolation and locking for "concurrent transactions". But by the time the records are sent to the client, you usually (and hopefully) have closed the transaction, and you start a new one when the data comes back.
So you have to take care of this yourself.
There are different approaches, the ones that come into my mind are:
optimistic locking strategies (first wins)
pessimistic locking strategies
last wins
Optimistic locking: when storing, you check whether the record has been changed in the meantime, usually by means of a version counter or timestamp. Some ORMs and frameworks help a little with implementing this.
Pessimistic locking: build a mechanism that records that someone has started editing something and does not allow anyone else to edit the same record. Especially in web projects it needs a timeout after which the lock is released anyway.
Last wins: the second person storing the record just overwrites the first changes.
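A rough sketch of the application-level pessimistic variant described above, using a hypothetical `record_locks` table and an assumed 15-minute timeout so abandoned locks expire on their own:

```python
import sqlite3, time

LOCK_TIMEOUT_SECONDS = 15 * 60  # assumption: locks expire after 15 minutes

conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS record_locks (
    record_id  INTEGER PRIMARY KEY,
    locked_by  TEXT NOT NULL,
    locked_at  REAL NOT NULL)""")
conn.commit()

def try_lock(record_id: int, user: str) -> bool:
    """Claim the record for editing; returns False if someone else holds a live lock."""
    now = time.time()
    with conn:
        # Throw away locks whose owners went home for the weekend.
        conn.execute("DELETE FROM record_locks WHERE locked_at < ?",
                     (now - LOCK_TIMEOUT_SECONDS,))
        try:
            conn.execute(
                "INSERT INTO record_locks (record_id, locked_by, locked_at) VALUES (?, ?, ?)",
                (record_id, user, now))
            return True
        except sqlite3.IntegrityError:
            # Primary key already present: the record is locked by someone else.
            return False

def unlock(record_id: int, user: str) -> None:
    with conn:
        conn.execute("DELETE FROM record_locks WHERE record_id = ? AND locked_by = ?",
                     (record_id, user))
```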
... makes me wonder if I'm a newbie ...
That's what happens always when we discover that very common stuff is still not solved by the tools and frameworks we use and we have to solve it over and over again.
Now, if the software he uses is run by many people, how can I guarantee he is the only one working with that particular record?
Ah...
And please, I wouldn't like the answer to be "SELECT FOR UPDATE..." because from what I've read it has too negative an impact on the database.
Who cares? I mean, it is the only way (keep a lock on a row) to guarantee you are the only one who can change it. Yes, this limits throughput, but then this is WHAT YOU WANT.
It is called programming - choosing the right tools for the job. In this case the impact is required because of the requirements.
The alternative - not a guarantee at the database level but at the application server - is an in-memory or in-database locking mechanism (like a table indicating which objects belong to which user).
But if you need to guarantee one record is only used by one person on db level, then you MUST keep a lock around and deal with the impact.
But seriously, most programs avoid this. They deal with it either with optimistic locking (the second user submitting changes gets an error) or with other programmer-level decisions, BECAUSE the cost of such guarantees is ridiculously high.
Oracle is different from SQL server.
In Oracle, when you update a record or data set, the old information is still available, because your uncommitted update is held in the database buffer cache until commit.
Therefore anyone reading the same record will still see the old values.
If the access to that record is a write access, however, it is locked until commit; only then can you write to the same record.
If the locks cannot be resolved, a deadlock pops up.
SQL Server, by contrast, cannot read a record that has been locked for writing, so depending on which query you're running, you might end up locking an entire table.
First, you can separate queries from inserts/updates by using a data-warehouse database, which helps with the slow updates that cause locks.
The next step is to identify what is causing the locks and work through each case separately.
Rebuilding indexes during working hours can cause very nasty locks; push those jobs to after hours.

Replicate a database using snapshots and transaction logs

For learning purposes, I want to write my own database that is able to replicate itself. I have made some progress, but now I am facing a problem that I cannot solve. Suppose I have a database (let's call it the source) that I would like to replicate to another database (let's call it the target).
The basic principle is easy: In the source you don't store actual tables, but instead a log of transactions. It's easy to send over the transaction log to the target, where the database then rebuilds itself. If you want to update the target, you simply request the part of the transaction log that has changed ever since. Basically this is what almost every database does.
While this works, it has one major drawback: If a table already exists for a long time, the transaction log is very long, and hence replicating the table requires lots of time…
To avoid this you can store the current state as well. This means you have an up-to-date snapshot that you can copy fast. Additionally, the target has to subscribe to the transaction log of the source. Once it contains additional entries, the target applies them to its copied table. This works well, too, and it's way better in terms of performance and transferred volume.
But now I am facing a problem: suppose the snapshot is large; then it may happen that changes are made to it while it is being delivered. That means that the copied snapshot contains some old and some new data. Now, how do I get the target database into a consistent state? Even if I know where to start in the transaction log, I either have to apply a change that was already applied to some of the records, or I have to leave it out, in which case the change is not applied at all to some other records.
Of course I could use the serializable isolation level, but then performance drops. Of course I could do what e.g. CouchDB does and remember the current table revision in every record, and keep a copy of every record for every revision. But then the required space grows enormously.
So, what shall I do?
Everything that I was able to find on the web always either relies on the idea of replaying the entire transaction log, or by using a process as in CouchDB which takes up huge amounts of space.
Any ideas?
Your snapshot needs to be consistent, and you need to know at what point (with regard to the tx log) it is consistent. You then apply any transactions that have been committed since that point.
Obtaining a consistent snapshot can be done with exclusive locking, which may delay other transactions from committing, or by using row versions (MVCC).
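A conceptual sketch of that ordering, using a toy in-memory "engine"; `snapshot_with_position` and the log-entry format are invented placeholders for whatever your storage layer provides:

```python
def snapshot_with_position(source):
    """Placeholder: return the snapshot together with the tx-log position it
    corresponds to - captured atomically, e.g. under a brief exclusive lock
    or via MVCC row versions as described above."""
    return dict(source["tables"]), len(source["log"])

def transactions_since(source, position):
    return source["log"][position:]

def apply_transaction(target, tx):
    table, key, value = tx            # toy log-entry format: (table, key, value)
    target["tables"].setdefault(table, {})[key] = value

def replicate(source, target):
    # 1. Copy a consistent snapshot and remember where in the log it was taken.
    snapshot, position = snapshot_with_position(source)
    target["tables"] = snapshot
    # 2. Apply only the transactions committed after that position; nothing is
    #    applied twice and nothing is skipped.
    for tx in transactions_since(source, position):
        apply_transaction(target, tx)
```

The whole trick is that the snapshot and the position are captured together; once you have that pair, replaying the tail of the log is unambiguous.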
Good luck with your project.

Optimistic vs. Pessimistic locking

I understand the differences between optimistic and pessimistic locking. Now, could someone explain to me when I would use either one in general?
And does the answer to this question change depending on whether or not I'm using a stored procedure to perform the query?
But just to check, optimistic means "don't lock the table while reading" and pessimistic means "lock the table while reading."
Optimistic Locking is a strategy where you read a record, take note of a version number (other methods to do this involve dates, timestamps, or checksums/hashes) and check that the version hasn't changed before you write the record back. When you write the record back, you filter the update on the version to make sure it's atomic (i.e. the record hasn't been updated between when you checked the version and when you write it to disk) and update the version in one hit.
If the record is dirty (i.e. different version to yours) you abort the transaction and the user can re-start it.
This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next.
Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks. To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two tier client server application) or an externally available transaction ID that can be used independently of the connection.
In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID. This is how distributed transactions using two-phase commit protocols (such as XA or COM+ Transactions) work.
When dealing with conflicts, you have two options:
You can try to avoid the conflict, and that's what Pessimistic Locking does.
Or, you could allow the conflict to occur, but you need to detect it upon committing your transactions, and that's what Optimistic Locking does.
Now, let's consider the following Lost Update anomaly:
The Lost Update anomaly can happen in the Read Committed isolation level.
In this scenario, Alice believes she can withdraw 40 from her account but does not realize that Bob has just changed the account balance, and now there are only 20 left in the account.
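In the absence of the original diagram, here is a tiny simulation of that sequence; the starting balance of 50 and Bob's withdrawal of 30 are illustrative assumptions, not values from the original example:

```python
# Shared state standing in for the account row.
account = {"balance": 50}   # assumed starting balance

# Both sessions read the balance first (no lock held, Read Committed-style).
alice_sees = account["balance"]   # 50
bob_sees = account["balance"]     # 50

# Bob withdraws 30 and commits first.
account["balance"] = bob_sees - 30   # balance is now 20

# Alice still believes she can withdraw 40 and writes her result back,
# silently overwriting Bob's update: the classic Lost Update.
account["balance"] = alice_sees - 40  # balance is now 10; Bob's withdrawal is lost

print(account["balance"])  # 10, even though 50 - 30 - 40 should not even be allowed
```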
Pessimistic Locking
Pessimistic locking achieves this goal by taking a shared or read lock on the account so Bob is prevented from changing the account.
In this scenario, both Alice and Bob acquire a read lock on the account table row that both users have read. On SQL Server, the database acquires these locks when using Repeatable Read or Serializable.
Because both Alice and Bob have read the account with the PK value of 1, neither of them can change it until one user releases the read lock. This is because a write operation requires a write/exclusive lock acquisition, and shared/read locks prevent write/exclusive locks.
Bob's UPDATE will resume and apply its change only after Alice has committed her transaction and the read lock on the account row has been released. Until Alice releases the read lock, Bob's UPDATE blocks.
Optimistic Locking
Optimistic Locking allows the conflict to occur but detects it upon applying Alice's UPDATE as the version has changed.
This time, we have an additional version column. The version column is incremented every time an UPDATE or DELETE is executed, and it is also used in the WHERE clause of the UPDATE and DELETE statements. For this to work, we need to issue the SELECT and read the current version prior to executing the UPDATE or DELETE, as otherwise, we would not know what version value to pass to the WHERE clause or to increment.
Application-level transactions
Relational database systems emerged in the late '70s and early '80s, when a client would typically connect to a mainframe via a terminal. That's why we still see database systems defining terms such as SESSION settings.
Nowadays, over the Internet, we no longer execute reads and writes in the context of the same database transaction, and ACID is no longer sufficient.
For instance, consider a use case where the data is read in one HTTP request and updated in a later one. Without optimistic locking, there is no way this Lost Update would be caught, even if the database transactions used Serializable, because the reads and writes are executed in separate HTTP requests, and hence in different database transactions.
So, optimistic locking can help you prevent Lost Updates even when using application-level transactions that incorporate the user-think time as well.
Conclusion
Optimistic locking is a very useful technique, and it works just fine even when using less-strict isolation levels, like Read Committed, or when reads and writes are executed in subsequent database transactions.
The downside of optimistic locking is that a rollback will be triggered by the data access framework upon catching an OptimisticLockException, so we lose all the work done so far by the currently executing transaction.
The more contention, the more conflicts, and the greater the chance of aborting transactions. Rollbacks can be costly for the database system as it needs to revert all current pending changes which might involve both table rows and index records.
For this reason, pessimistic locking might be more suitable when conflicts happen frequently, as it reduces the chance of rolling back transactions.
Optimistic locking is used when you don't expect many collisions. It costs less to do a normal operation but if the collision DOES occur you would pay a higher price to resolve it as the transaction is aborted.
Pessimistic locking is used when a collision is anticipated. The transactions which would violate synchronization are simply blocked.
To select the proper locking mechanism, you have to estimate the number of reads and writes and plan accordingly.
Optimistic assumes that nothing's going to change while you're reading it.
Pessimistic assumes that something will and so locks it.
If it's not essential that the data is perfectly read, use optimistic. You might get the odd 'dirty' read - but it's far less likely to result in deadlocks and the like.
Most web applications are fine with dirty reads - on the rare occasion the data doesn't exactly tally the next reload does.
For exact data operations (like in many financial transactions) use pessimistic. It's essential that the data is accurately read, with no un-shown changes - the extra locking overhead is worth it.
Oh, and Microsoft SQL Server defaults to page locking - basically locking the row you're reading and a few on either side. Row locking is more accurate but much slower. It's often worth setting your transactions to read committed or no-lock to avoid deadlocks while reading.
I would think of one more case when pessimistic locking would be a better choice.
For optimistic locking, every participant in data modification must agree to use this kind of locking. But if someone modifies the data without honoring the version column, that spoils the whole idea of optimistic locking.
There are basically two most popular answers. The first one basically says
Optimistic needs a three-tier architecture where you do not necessarily maintain a connection to the database for your session, whereas Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking, but you need a direct connection to the database.
Another answer is
optimistic (versioning) is faster because of no locking but (pessimistic) locking performs better when contention is high and it is better to prevent the work rather than discard it and start over.
or
Optimistic locking works best when you have rare collisions
As it is put on this page.
I created my answer to explain how "keep connection" is related to "low collisions".
To understand which strategy is best for you, think not about the transactions per second your DB handles but about the duration of a single transaction. Normally, you open a transaction, perform the operation, and close the transaction. This is the short, classical transaction ANSI had in mind, and it is fine to get away with locking there. But how do you implement a ticket reservation system where many clients reserve the same rooms/seats at the same time?
You browse the offers and fill in the form with lots of available options and current prices. That takes a lot of time, and the options can become obsolete and all the prices invalid between the moment you start filling in the form and the moment you press the "I agree" button, because there was no lock on the data you accessed and somebody else, more agile, interfered and changed all the prices, so you have to restart with the new prices.
You could instead lock all the options as you read them. This is the pessimistic scenario, and you can see why it sucks: your system can be brought down by a single clown who simply starts a reservation and goes out for a smoke. Nobody can reserve anything before he finishes, and your cash flow drops to zero. That is why optimistic reservations are used in reality: those who dawdle too long have to restart their reservation at the new, higher prices.
In the optimistic approach you have to record all the data that you read (as under Repeatable Read) and come to the commit point with your version of the data (I want to buy shares at the price you displayed in this quote, not the current price). At this point, an ANSI transaction is created, which locks the DB, checks that nothing has changed, and commits/aborts your operation. IMO this is an effective emulation of MVCC, which is also associated with optimistic CC and likewise assumes that your transaction restarts in case of an abort, i.e. you make a new reservation. A transaction here involves human user decisions.
I am far from understanding how to implement MVCC manually, but I think that long-running transactions with the option to restart are the key to understanding the subject. Correct me if I am wrong anywhere. My answer was motivated by this Alex Kuznecov chapter.
In most cases, optimistic locking is more efficient and offers higher performance. When choosing between pessimistic and optimistic locking, consider the following:
Pessimistic locking is useful if there are a lot of updates and relatively high chances of users trying to update data at the same time. For example, if each operation can update a large number of records at a time (the bank might add interest earnings to every account at the end of each month), and two applications are running such operations at the same time, they will have conflicts.
Pessimistic locking is also more appropriate in applications that contain small tables that are frequently updated. In the case of these so-called hotspots, conflicts are so probable that optimistic locking wastes effort in rolling back conflicting transactions.
Optimistic locking is useful if the possibility for conflicts is very low – there are many records but relatively few users, or very few updates and mostly read-type operations.
One use case for optimistic locking is to have your application use the database to allow one of your threads / hosts to 'claim' a task. This is a technique that has come in handy for me on a regular basis.
The best example I can think of is a task queue implemented using a database, with multiple threads claiming tasks concurrently. If a task has status 'Available', 'Claimed', or 'Completed', a db query can say something like "SET status='Claimed' WHERE status='Available'". If multiple threads try to change the status this way, all but the first thread will fail, because by then the data they read is stale.
Note that this is a use case involving only optimistic locking. So as an alternative to saying "Optimistic locking is used when you don't expect many collisions", it can also be used where you expect collisions but want exactly one transaction to succeed.
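A small sketch of that claim pattern, assuming a hypothetical `tasks` table; whichever worker's UPDATE matches first wins, and everyone else sees zero affected rows:

```python
import sqlite3

conn = sqlite3.connect("tasks.db")
conn.execute("""CREATE TABLE IF NOT EXISTS tasks (
    id INTEGER PRIMARY KEY,
    status TEXT NOT NULL DEFAULT 'Available',
    claimed_by TEXT)""")
conn.commit()

def try_claim(task_id: int, worker: str) -> bool:
    """Atomically move the task from 'Available' to 'Claimed' for this worker."""
    with conn:
        cur = conn.execute(
            "UPDATE tasks SET status = 'Claimed', claimed_by = ? "
            "WHERE id = ? AND status = 'Available'",
            (worker, task_id),
        )
    # rowcount == 1 means we won the race; 0 means another worker claimed it first.
    return cur.rowcount == 1
```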
Lot of good things have been said above about optimistic and pessimistic locking.
One important point to consider is as follows:
When using optimistic locking, we need to be careful about how the application will recover from these failures.
Especially in asynchronous, message-driven architectures, this can lead to out-of-order message processing or lost updates.
Failure scenarios need to be thought through.
Let's say that in an e-commerce app, a user wants to place an order. This code will be executed by multiple threads. With pessimistic locking, when we get the data from the DB, we lock it so no other thread can modify it. We process the data, update it, and then commit. After that, we release the lock. The locking duration is long here: we have locked the database record from the beginning until the commit.
With optimistic locking, we get the data and process it without locking, so multiple threads can execute the code concurrently up to that point, which speeds things up. Only when we update do we lock the data, and we have to verify that no other thread updated that record in the meantime. For example, if we had 100 items in inventory, we would update it to 99 (because the code might do quantity = quantity - 1), but if another thread already used 1, it should be 98. We have a race condition here. In that case, we restart the thread and execute the same code from the beginning. But this is an expensive operation: you already got to the end, and now you restart. If race conditions were rare, that would not be a big deal; but if contention is high, a lot of threads have to restart, and we might end up going around in a loop. When contention is high, we should use pessimistic locking.
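A rough sketch of that retry loop, assuming a hypothetical `inventory` table; under heavy contention most iterations become retries, which is exactly when pessimistic locking starts to look better:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item_id INTEGER PRIMARY KEY, quantity INTEGER NOT NULL)")
conn.execute("INSERT INTO inventory VALUES (1, 100)")
conn.commit()

def reserve_one(item_id: int, max_retries: int = 5) -> bool:
    for _ in range(max_retries):
        qty = conn.execute("SELECT quantity FROM inventory WHERE item_id = ?",
                           (item_id,)).fetchone()[0]
        if qty == 0:
            return False  # sold out
        # Optimistic write: only succeeds if the quantity is still what we read.
        with conn:
            cur = conn.execute(
                "UPDATE inventory SET quantity = ? WHERE item_id = ? AND quantity = ?",
                (qty - 1, item_id, qty),
            )
        if cur.rowcount == 1:
            return True   # our update went through
        # Someone else changed the quantity in the meantime: start over.
    return False           # too much contention; caller may fall back to a lock
```

Here the quantity itself plays the role of the version; a dedicated version column works the same way.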
Optimistic locking means an exclusive lock is not used when reading a row, so lost updates and write skew are not prevented. So, use optimistic locking:
If lost update or write skew doesn't occur.
Or, if there are no problems even if lost update or write skew occurs.
Pessimistic locking means an exclusive lock is used when reading a row, so lost updates and write skew are prevented. So, use pessimistic locking:
If lost update or write skew occurs.
Or if there are some problems if lost update or write skew occurs.
In MySQL and PostgreSQL, you can take an exclusive lock with SELECT FOR UPDATE.
You can check my answer for lost update and write skew examples with optimistic locking (without SELECT FOR UPDATE) and pessimistic locking (with SELECT FOR UPDATE) in MySQL.
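For reference, a minimal pessimistic sketch in Python, assuming PostgreSQL with psycopg2 and a hypothetical `accounts` table (the connection string is an assumption; MySQL works the same way with its own driver):

```python
import psycopg2

conn = psycopg2.connect("dbname=shop user=app")  # assumed connection details
try:
    with conn:                      # commits on success, rolls back on error
        with conn.cursor() as cur:
            # The row is exclusively locked until this transaction ends;
            # other SELECT ... FOR UPDATE / UPDATE statements on it will block.
            cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (1,))
            (balance,) = cur.fetchone()
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance - 40, 1))
finally:
    conn.close()
```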
On a more practical note, when updating a distributed system, optimistic locking in the DB may be inadequate to provide the consistency needed across all parts of the distributed system.
For example, in applications built on AWS, it is common to have data in both a DB (e.g. DynamoDB) and storage (e.g. S3). If an update touches both DynamoDB and S3, optimistic locking in DynamoDB could still leave the data in S3 inconsistent. In cases like this, it is probably safer to use a pessimistic lock that is held in DynamoDB until the S3 update is finished. In fact, AWS provides a locking library for this purpose.
Optimistic locking and pessimistic locking are two models for locking data in a database.
Optimistic locking: a record is locked only when changes are committed to the database.
Pessimistic locking: a record is locked while it is being edited.
Note: In both locking models, the lock is released after the changes are committed to the database.
