Can Lost Update happen in read committed isolation level in PostgreSQL?

I have a query like below in PostgreSQL:
UPDATE queue
SET status = 'PROCESSING'
WHERE queue.status = 'WAITING'
  AND queue.id = (SELECT id FROM queue WHERE status = 'WAITING' LIMIT 1)
RETURNING queue.id;
and many workers try to process one piece of work at a time (that's why I have the sub-query with LIMIT 1). After this update, each worker grabs the returned id and processes the work, but sometimes two workers grab the same work and process it twice or more. The isolation level is Read Committed.
My question is: how can I guarantee that each piece of work is processed only once? I know there are many posts about this out there, and I have tried most of the suggestions, but they didn't help:
I have tried SELECT FOR UPDATE, but it caused a deadlock situation.
I have tried pg_try_advisory_xact_lock, but it caused an "out of shared memory" error.
I tried adding AND pg_try_advisory_xact_lock(queue.id) to the outer query's WHERE clause, but ... [?]
Any help would be appreciated.

A lost update won't occur in the situation you describe, but it won't work properly either.
What will happen in the example you've given above is that given (say) 10 workers started simultaneously, all 10 of them will execute the subquery and get the same ID. They will all attempt to lock that ID. One of them will succeed; the others will block on the first one's lock. Once the first backend commits or rolls back, the 9 others will race for the lock. One will get it, re-check the WHERE clause and see that the queue.status test no longer matches, and return without modifying any rows. The same will happen with the other 8. So you used 10 queries to do the work of one query.
If you fail to explicitly check the UPDATE result and see that zero rows were updated you might think you were getting lost updates, but you aren't. You just have a concurrency bug in your application caused by a misunderstanding of the order-of-execution and isolation rules. All that's really happening is that you're effectively serializing your backends so that only one at a time actually makes forward progress.
The only way PostgreSQL could avoid having them all get the same queue item ID would be to serialize them, so it didn't start executing query #2 until query #1 finished. If you want to you can do this by LOCKing the queue table ... but again, you might as well just have one worker then.
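For illustration, a minimal sketch of that table-lock approach, assuming the queue table from the question; it serializes the workers completely, which is exactly why you might as well run only one:

-- Sketch only: EXCLUSIVE MODE blocks every other writer on the table
-- (plain SELECTs still work), so only one worker can claim a job at a time.
BEGIN;
LOCK TABLE queue IN EXCLUSIVE MODE;
UPDATE queue
SET status = 'PROCESSING'
WHERE id = (SELECT id FROM queue WHERE status = 'WAITING' LIMIT 1)
RETURNING id;
COMMIT;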
You can't get around this with advisory locks, not easily anyway. Hacks where you iterated down the queue using non-blocking lock attempts until you got the first lockable item would work, but would be slow and clumsy.
You are attempting to implement a work queue using the RDBMS. This will not work well. It will be slow, it will be painful, and getting it both correct and fast will be very very hard. Don't roll your own. Instead, use a well established, well tested system for reliable task queueing. Look at RabbitMQ, ZeroMQ, Apache ActiveMQ, Celery, etc. There's also PGQ from Skytools, a PostgreSQL-based solution.
Related:
In PostgreSQL, do multiple UPDATES to different rows in the same table having a locking conflict?
Can multiple threads cause duplicate updates on constrained set?
Why do we need message brokers like rabbitmq over a database like postgres?

SKIP LOCKED (added in PostgreSQL 9.5) can be used to implement a queue in PostgreSQL.
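A minimal sketch of that pattern, assuming the queue table from the question:

-- Rows locked by other workers are skipped instead of waited on,
-- so concurrent workers each claim a different job.
UPDATE queue
SET status = 'PROCESSING'
WHERE id = (
    SELECT id
    FROM queue
    WHERE status = 'WAITING'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id;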

In PostgreSQL, a lost update can happen under READ COMMITTED (and READ UNCOMMITTED, which PostgreSQL treats the same as READ COMMITTED), but if you use SELECT ... FOR UPDATE under those levels, the lost update does not happen.
In addition, a lost update cannot happen under REPEATABLE READ or SERIALIZABLE, whether or not you use SELECT ... FOR UPDATE: if a concurrent change would cause one, the transaction fails with a serialization error instead.
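As an illustration, a sketch with a hypothetical accounts(id, balance) table under READ COMMITTED: without FOR UPDATE both sessions can read the same starting balance and the later write silently overwrites the earlier one; with FOR UPDATE the second session blocks on the read and then works from the committed value.

-- Lost update: two sessions both run this, both read balance = 100,
-- both write 90, and one decrement disappears.
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
UPDATE accounts SET balance = 90 WHERE id = 1;
COMMIT;

-- With FOR UPDATE the read takes a row lock, so the second session waits
-- for the first to commit and then computes from the fresh value.
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;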

Related

Prioritizing Transactions in Google AppEngine

Let's say I need to perform two different kinds write operations on a datastore entity that might happen simultaneously, for example:
The client that holds a write-lock on the entry updates the entry's content
The client requests a refresh of the write-lock (updates the lock's expiration time-stamp)
As the content-update operation is only allowed if the client holds the current write-lock, I need to perform the lock-check and the content-write in a transaction (unless there is another way that I am missing?). Also, a lock-refresh must happen in a transaction because the client needs to first be confirmed as the current lock-holder.
The lock-refresh is a very quick operation.
The content-update operation can be quite complex. Think of it as the client sending the server a complicated update-script that the server executes on the content.
Given this, if there is a conflict between those two transactions (should they be executed simultaneously), I would much rather have the lock-refresh operation fail than the complex content-update.
Is there a way that I can "prioritize" the content-update transaction? I don't see anything in the docs and I would imagine that this is not a specific feature, but maybe there is some trick I can use?
For example, what happens if my content-update reads the entry, writes it back with a small modification (without committing the transaction), then performs the lengthy operation and finally writes the result and commits the transaction? Would the first write be applied immediately and cause a simultaneous lock-refresh transaction to fail? Or are all writes kept until the transaction is committed at the end?
Is there such a thing as keeping two transactions open? Or doing an intermediate commit in a transaction?
Clearly, I can just split my content-update into two transactions: The first one sets a "don't mess with this, please!"-flag and the second one (later) writes the changes and clears that flag.
But maybe there is some other trick to achieve this with fewer reads/writes/transactions?
Another thought I had was that there are 3 different "blocks" of data: The current lock-holder (LH), the lock expiration (EX), and the content that is being modified (CO). The lock-refresh operation needs to perform a read of LH and a write to EX in a transaction, while the content-update operation needs to perform a read of LH, a read of CO, and a write of CO in a transaction. Is there a way to break the data apart into three entities and somehow have the transactions span only the needed entities? Since LH is never modified by these two operations, this might help avoid the conflict in the first place?
The datastore uses optimistic concurrency control, which means that a (datastore primitive) transaction does its work and then, at commit time, succeeds only if no one else has committed a conflicting change first. Typically, the app retries the failed transaction with fresh data. There is no way to modify this first-wins behavior.
It might help to know that datastore transactions are strongly consistent, so a client can first commit a lock refresh with a synchronous datastore call, and when that call returns, the client knows for sure whether it obtained or refreshed the lock. The client can then proceed with its update and lock clear. The case you describe where a lock refresh and an update might occur concurrently from the same client sounds avoidable.
I'm assuming you need the lock mechanism to prevent writes from other clients while the lock owner performs multiple datastore primitive transactions. If a client is actually only doing one update before it releases the lock and it can do so within seconds (well before the datastore RPC timeout), you might get by with just a primitive datastore transaction with optimistic concurrency control and retries. But a lock might be a good idea for simple serialization of, say, edits to a record in a user interface, where a user hits an "edit" button and you want to guarantee that the user has some time to prepare and submit changes without the record being changed by someone else. (Whether that's the user experience you want is your decision. :) )

Dealing with race condition in transactional database table

Let me lay the scenario out first. Say you have a database for a business app and one of the things it tracks is inventory. The system says you have 5 screws in stock. Say you needed all 5. The system creates an inventory transaction record for -5. After you commit that transaction, since you know you had 5 before and you pulled out 5, if you sum up all the inventory transaction records for that screw the total should be 0. The problem occurs when two people try to do this at the same time. Say one person wants 4 and the other wants 2. Both client apps check the quantity beforehand and both are told 5. At the exact same time one creates a transaction for -4 and the other for -2. This results in a total inventory quantity of -1, which should never be possible because the system should not allow negative inventory.
How would you solve this if you didn't have a server application to help you? I mention that because a server coordinating the inventory transactions is how I would solve it, but right now our product has no server application; we just have client apps which talk to a Firebird database directly. I'm trying to figure out how to do this with just the client apps and the database. One thing that might help is that Firebird has something called a generator, which is basically an atomic unique-number generator, so you are guaranteed that if you ask Firebird to increment the generator and give you the next number, it will not give anyone else that same number.
My mind was going down the route of trying to create a makeshift record lock using a generator. I thought I could have both clients check a "lock" field on the Item table. If it is null, then no one has a lock. If it is non-null, it is locked, so you need to keep checking back until it is not locked. If there is no lock, you ask the generator for a unique number and store that in the locking field for the Item you want to lock. You commit that transaction, then go back and check whether the Item table's lock field really does contain the number you put there. If it does, you have successfully locked; if it doesn't, someone else was locking it at the same time and you lost the race. Once you are done you null out the lock, and the client that is waiting will then see the null, lock it themselves, and repeat.
I believe this itself has a race condition, though. Trxn1 (transaction 1) checks the lock and finds null. Trxn2 checks the lock and finds null. Trxn1 gets a new lock number from the generator. Trxn2 gets a new lock number from the generator. Trxn1 issues an update of the Item record with its lock where the lock is still null, which it is. Trxn1 commits, then starts a new transaction, verifies that the lock field contains its lock id, and since it does, it knows it has permission to make inventory transactions and starts doing so. Right after Trxn1 checks that it got the lock, Trxn2 commits its own update statement that stored its lock where the lock was null. If Trxn2 executed its update statement before Trxn1 committed the lock, Trxn2 would still see the value as null and the update would occur. If Trxn2's lock commit happens after Trxn1 committed its lock and already verified it, we have a problem: Trxn1 is making changes to the inventory transaction table, while Trxn2 got its lock committed because the lock was still null in its transaction's view when the statement ran. When Trxn2 commits, its update statement overwrites Trxn1's lock, because the null check in the update statement happened before both committed, not at the time of commit. So now both think they have the lock and we will end up with negative inventory.
Can anyone think of a way to solve this short of having a server application with some kind of queueing system (FIFO)? I would prefer if it could all be done via clients "talking to the database" to coordinate this but that may not be possible technically speaking. Sorry If this got a bit wordy :D
Solution Edit:
jtahlborn seems to have the right idea. I somehow didn't realize that Firebird does in fact have row-level locking. Simple select statements (no joins, group by, etc.) can have "with lock" appended to the end of the statement, and any row returned by the statement will be locked until the transaction is committed or rolled back. No one else can obtain a lock on that row or make changes to it. Because I don't want to lock the entire ITEM table while I'm inserting rows into the inventory transaction table, I am going to create a table just for locking that has one column (the ItemID field). Because the second transaction will get an error when it tries to take its own lock, it doesn't matter that I never actually modify anything on the locking table itself; failing to get the lock gives me all the information I need. I will put triggers on insert / delete of the ITEM table so that for every Item record there is also a record in the ITEMLOCK table. Here is the process I'm going to use (a rough SQL sketch follows the steps):
Start a database transaction
Attempt to obtain a lock on the ITEMLOCK row with the ItemID of the Item you want to change
If you can't get the lock, keep trying until the record is unlocked
Once locked, verify that the quantity on hand of that Item is enough to cover what you want to take out; because the client could have old data this might not be the case, and if so it drops out here and messages the user
If sufficient quantity exists, insert your inventory transaction record into the inventory transaction table
Commit the transaction, which in turn releases the lock
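A rough Firebird SQL sketch of the steps above; the ITEMLOCK and INVENTORY_TRANSACTION table and column names are only my guesses from the description, and the :item_id / :qty parameters are placeholders:

-- Step 1: start the transaction (NO WAIT makes a lock conflict fail
-- immediately instead of blocking).
SET TRANSACTION NO WAIT;

-- Steps 2-3: try to lock the row; on conflict this raises an error and
-- the client keeps retrying until it succeeds.
SELECT ItemID FROM ITEMLOCK WHERE ItemID = :item_id WITH LOCK;

-- Step 4: re-check the on-hand quantity now that no one else can change it.
SELECT COALESCE(SUM(Quantity), 0) FROM INVENTORY_TRANSACTION
WHERE ItemID = :item_id;

-- Step 5: if the quantity is sufficient, record the withdrawal.
INSERT INTO INVENTORY_TRANSACTION (ItemID, Quantity) VALUES (:item_id, -:qty);

-- Step 6: commit, which releases the lock.
COMMIT;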
Note: Matthieu M mentioned the FOR UPDATE clause. It is mentioned in the documentation along with the WITH LOCK clause. As I understand it, you can use it when you are locking multiple rows with one statement. I am not one hundred percent sure, but it seems like WITH LOCK alone takes an all-or-nothing approach, while FOR UPDATE locks the rows separately, one at a time. I am not sure what happens if it locked the first 100 records you asked for but couldn't get a lock on the 101st: does it then release the 100 locks you did get? I will need to lock more than one Item at a time, but I do not feel comfortable with FOR UPDATE since I feel like I don't truly understand the difference. I also probably want to know which Item was already locked for user messaging purposes (I'm going to add a timeout so transactions won't wait forever for a lock), so I will be locking one at a time using WITH LOCK.
Note 2: I want to point out to anyone using this in their own code to be careful. I am going to have a very simple loop when waiting for a lock to be released (is it released yet? how about now? now?). If I had a ton of users trying to lock the same row at the same time, this could lead to starvation. Say you have a slow client: that client may always end up with the short end of the stick, because every time the lock was released some other client grabbed it faster than the slow client could. If this happened over and over, that client would effectively never make progress. If I was worried about that I would need a way to figure out who is first in line. In my case, database transactions should be short-lived, we never have more than 50 users (not a cloud system), and it is highly unlikely that they are all using this part of the system at the same time trying to modify the exact same Item's inventory quantity.
The simplest solution is to lock some primary row (like the main "item") and use this as your distributed locking mechanism (assuming your database supports row-level locks, as most modern DBs do).
I recommend reading up about the CAP theorem and how it may be an explanation for the scenario you are describing. EDIT: Having read in more detail, my comment may be of limited use because it seems you already know this and are trying to solve the problem within Firebird.

SQL Server deadlock between two update statements

I have a website that has a very popular forum on it and occasionally throughout the day I see several deadlocks happening between two identical (minus the data within them) update statements on the same forum. I'm not exactly sure why this is happening on this query as there are many other queries on the site that run with high concurrency without issue.
The query is nearly identical between the two processes; the deadlock graph shows it as:
update [Forum] set [DateModified] = @DateModified, [LatestLocalThreadID] = @LatestLocalThreadID where ID = 310
Can anyone shed any light on what could be causing this?
This is because there is a foreign key to ForumThreads that generates an S-lock when you set LatestLocalThreadID (to make sure that the row still exists when the statement completes). A possible fix would be to prefix the update statement with
SELECT *
FROM ForumThreads WITH (XLOCK, ROWLOCK, HOLDLOCK)
WHERE ID = @LatestLocalThreadID
in order to X-lock on that. You can also try UPDLOCK as a less aggressive mode. This can of course cause deadlocks in other places, but it is the best first try.
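Put together, the suggested fix might look roughly like this (the parameter names are the ones from the deadlock graph above):

BEGIN TRANSACTION;

-- Take an exclusive row lock on the thread row first, so the FK validation
-- during the UPDATE cannot end up in a lock cycle with a concurrent update.
SELECT *
FROM ForumThreads WITH (XLOCK, ROWLOCK, HOLDLOCK)
WHERE ID = @LatestLocalThreadID;

UPDATE [Forum]
SET [DateModified] = @DateModified,
    [LatestLocalThreadID] = @LatestLocalThreadID
WHERE ID = 310;

COMMIT TRANSACTION;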
Basically, deadlocks are prevented by always accessing objects (tables, pages, rows) in the same order. In your example there's one process accessing Forum first and ForumThreads second, and another thread doing it vice versa. An update usually searches for the rows to update first, using S-locks during the search. The rows it has identified as needing changes are then locked with X-locks, and then the actual change happens.
The quick and dirty solution might be to BEGIN TRAN, lock the objects in the order you need, do the update, and then COMMIT, which releases the locks again. But this will bring down the overall throughput of your website because of blocking locks.
The better way is to identify the two statements (you might edit your question and add the other one when you find it) and their execution plans. It should be possible to rewrite the transactions so they access all objects in the same order, preventing the deadlock.
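A sketch of the "same order" idea with the tables from the question; the point is simply that every transaction touching both tables agrees on the order (here Forum first, then ForumThreads), so a lock cycle cannot form. The @ForumID parameter and the ForumID column are hypothetical:

BEGIN TRANSACTION;

-- Every code path locks Forum first ...
SELECT * FROM Forum WITH (UPDLOCK, ROWLOCK) WHERE ID = @ForumID;

-- ... and ForumThreads second; two sessions may block each other,
-- but they can never wait on each other in a cycle.
SELECT * FROM ForumThreads WITH (UPDLOCK, ROWLOCK) WHERE ForumID = @ForumID;

-- ... perform the actual updates here ...

COMMIT TRANSACTION;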

Oracle deadlock without explicit locking and read committed isolation level, why?

I get the error ORA-00060: deadlock detected while waiting for resource even though I am not using any explicit table locking and my isolation level is set to READ COMMITTED.
I use multiple threads over the Spring TransactionTemplate with default propagation. In my business logic the data is separated so that two transactions will never have the same set of data. Therefore I don't need SERIALIZABLE.
Why does Oracle detect a deadlock? Deadlocks should be impossible in this setup, or am I missing something? If I'm not missing anything, then my separation algorithm must be wrong, right? Or could there be some other explanation?
Oracle by default does row level locking. You mention using multiple threads. I suspect one thread is locking one row then attempting to lock another which has been locked by another thread. That other thread is then attempting to lock the row the first thread locked. At this point, Oracle will automatically detect a deadlock and break it. The two rows mentioned above could be in the same table or in different tables.
A careful review of what each thread is doing is the starting point. It may be necessary to decide to not run things in parallel, or it may be necessary to use an explicit locking mechanism (select for update for example).
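A sketch of that explicit-locking option; the work_items table, its columns, and the :batch_id bind variable are hypothetical, standing in for whatever rows one thread's business logic owns:

-- Lock every row this thread intends to change up front, before the first
-- modification, so an unexpected overlap with another thread shows up as a
-- wait rather than as a mid-update deadlock.
SELECT id FROM work_items WHERE batch_id = :batch_id FOR UPDATE;

UPDATE work_items SET status = 'DONE' WHERE batch_id = :batch_id;

COMMIT;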
Encountering deadlocks has nothing to do per se with the isolation level. When a row is inserted, updated, or deleted, Oracle locks the row. If you have two transactions running concurrently, each trying to change rows the other has already locked, you can encounter a deadlock. The emphasis is on "can". This generally happens if different types of transactions take locks in a different order, which is a sign of bad transaction design.
As was previously mentioned a trace file is generated on encountering a deadlock. If you look at the trace file, you can determine which two sessions are involved in the deadlock. In addition it also shows the respective SQL statements.

SQL Server Deadlock Fix: Force join order, or automatically retry?

I have a stored procedure that performs a join of TableB to TableA:
SELECT <--- Nested <--- TableA
            Loop   <--
                     |
                     --- TableB
At the same time, in a transaction, rows are inserted into TableA, and then into TableB.
This situation is occasionally causing deadlocks, as the stored procedure select grabs rows from TableB, while the insert adds rows to TableA, and then each wants the other to let go of the other table:
INSERT        SELECT
=========     =========
Lock A        Lock B
Insert A      Select B
Want B        Want A
    ....deadlock....
Logic requires the INSERT to first add rows to A, and then to B, while I personally don't care about the order in which SQL Server performs its join, as long as it joins.
The common recommendation for fixing deadlocks is to ensure that everyone accesses resources in the same order. But in this case SQL Server's optimizer is telling me that the opposite order is "better". I can force another join order and get a worse-performing query.
But should I?
Should I override the optimizer, now and forever, with a join order that I want it to use?
Or should I just trap native error 1205 and resubmit the SELECT statement?
The question isn't how much worse the query might perform when I override the optimizer and force it to do something non-optimal. The question is: is it better to automatically retry than to run worse queries?
It is better to automatically retry on deadlocks. The reason is that you may fix this deadlock only to hit another one later. The behavior may change between SQL Server releases, if the size of the tables changes, if the server hardware specification changes, and even if the load on the server changes. If the deadlock is frequent, you should take active steps to eliminate it (an index is usually the answer), but for rare deadlocks (say every 10 minutes or so), a retry in the application can mask the deadlock. You can retry reads or writes, since the writes are, of course, surrounded by a proper BEGIN TRANSACTION / COMMIT TRANSACTION to keep all write operations atomic and hence safe to retry.
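A minimal T-SQL retry sketch around the insert (the transaction body is a placeholder; THROW needs SQL Server 2012 or later, older versions would re-raise with RAISERROR):

DECLARE @retries int = 0;

WHILE @retries < 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... INSERT into TableA, then into TableB ...

        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries < 2
            SET @retries = @retries + 1;  -- deadlock victim: try again
        ELSE
            THROW;                        -- anything else, or out of retries: re-raise
    END CATCH;
END;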
Another avenue to consider is turning on read committed snapshot. When this is enabled, SELECT will simply not take any locks, yet yield consistent reads.
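Turning it on is a per-database setting (the database name below is a placeholder); once enabled, plain READ COMMITTED readers use row versions instead of shared locks:

-- The ALTER waits for other connections to drain unless a termination
-- clause such as WITH ROLLBACK IMMEDIATE is added.
ALTER DATABASE YourForumDb SET READ_COMMITTED_SNAPSHOT ON;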
To avoid deadlocks, one of the most common recommendations is "to acquire locks in the same order" or "access objects in the same order". Clearly this makes perfect sense, but is it always feasible? Is it always possible? I keep encountering cases when I cannot follow this advice.
If I store an object in one parent table and one or more child ones, I cannot follow this advice at all. When inserting, I need to insert my parent row first. When deleting, I have to do it in the opposite order.
If I use commands that touch multiple tables or multiple rows in one table, then usually I have no control over the order in which locks are acquired (assuming that I am not using hints).
So, in many cases trying to acquire locks in the same order does not prevent all deadlocks, and we need some way of handling deadlocks anyway; we cannot assume that we can eliminate them all. Unless, of course, we serialize all access using Service Broker or sp_getapplock.
When we retry after deadlocks, we are very likely to overwrite other processes' changes. We need to be aware that very likely someone else modified the data we intended to modify. Especially if all the readers run under snapshot isolation, readers cannot be involved in deadlocks, which means that all the parties involved in a deadlock are writers that modified, or attempted to modify, the same data. If we just catch the exception and automatically retry, we can overwrite someone else's changes.
This is called a lost update, and it is usually wrong. Typically the right thing to do after a deadlock is to retry at a much higher level: re-select the data and decide whether to save in the same way the original decision to save was made.
For example, if a user pushed a Save button and the saving transaction was chosen as a deadlock victim, it might be a good idea to re-display the data on the screen as it is after the deadlock.
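One way to do that higher-level retry in SQL Server is an optimistic check against a rowversion column; the Orders table, its columns, and the variables here are purely hypothetical:

DECLARE @OrderID int = 42, @NewAmount money = 100;
DECLARE @CurrentVersion binary(8), @Amount money;

-- After the deadlock, re-read the current state of the row.
SELECT @CurrentVersion = RowVer, @Amount = Amount
FROM Orders WHERE OrderID = @OrderID;

-- ... the application re-decides whether the save still makes sense ...

-- Save only if nobody changed the row since the re-read.
UPDATE Orders
SET Amount = @NewAmount
WHERE OrderID = @OrderID AND RowVer = @CurrentVersion;

IF @@ROWCOUNT = 0
    PRINT 'Row changed again; show the user the fresh data instead of overwriting.';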
Trapping and rerunning can work, but are you sure that the SELECT is always the deadlock victim? If the insert is the deadlock victim, you'll have to be much more careful about retrying.
The easiest solution in this case, I think, is to NOLOCK or READUNCOMMITTED (same thing) your select. People have justifiable concerns about dirty reads, but we've run NOLOCK all over the place for higher concurrency for years and have never had a problem.
I'd also do a little more research into lock semantics. For example, I believe if you set transaction isolation level to snapshot (requires 2005 or later) your problems go away.
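For reference, a sketch of what that looks like; the database name and the join column are placeholders (this is the ALLOW_SNAPSHOT_ISOLATION setting, distinct from READ_COMMITTED_SNAPSHOT mentioned above):

ALTER DATABASE YourForumDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the session that runs the reporting SELECT:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT *
FROM TableA a
JOIN TableB b ON b.TableA_ID = a.ID;  -- join column is a guess
COMMIT TRANSACTION;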
