In my application, I have a couple of threads that execute some logic.
At the end, they add a new row to a table.
Before adding the new row, they check whether a previous entry with the same details already exists. If one is found, they update it instead of adding.
The problem: thread A does the check and sees that no previous entity with the same details exists, but just before it adds the new row, thread B searches the DB for the same entity. Thread B also sees that no such entity exists, so it adds a new row too.
The result is two rows with the same data in the table.
Note: no table key is violated, because each thread gets the next sequence value just before adding the row, and the table key is an ID that is unrelated to the data.
Even if I change the table key to a combination of the data columns, that will prevent two rows with the same data, but it will cause a DB error when the second thread tries to add its row.
Thank you in advance for the help, Roy.
You should be using a queue, possibly a blocking queue. Threads A and B (producers) would add objects to the queue, and another thread C (consumer) would poll the queue, remove the oldest object, and persist it to the DB. This prevents the problem of A and B both wanting to persist equal objects at the same time.
You speak of "rows", so presumably this is a SQL database?
If so, why not just use transactions?
(Unless the threads are sharing a database connection, in which case a mutex might help, but I would prefer to give each thread a separate connection.)
I would recommend avoiding locking in the client layer. Synchronized only works within one process; later you may scale so that your threads run across several JVMs, or indeed several machines.
I would enforce uniqueness in the DB. As you suggest, this will cause an exception for the second inserter; catch that exception and do an update, if that's the business logic you need.
But consider this argument:
When the threads race, either of the following sequences may occur:
A inserts values VA, then B updates them to VB.
B inserts VB, then A updates them to VA.
Since the two threads are racing, either outcome, VA or VB, is equally valid. So you cannot distinguish the second case from "A inserts VA and B simply fails"!
So in fact there may be no need for the "fail and then update" handling at all.
I think this is a job for SQL constraints, namely UNIQUE on the set of columns that hold the data, plus the appropriate error handling.
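For example, a minimal sketch (MyTable, ColumnA, and ColumnB are placeholders for whatever columns hold "the same details"):

-- enforce uniqueness on the data columns themselves
ALTER TABLE MyTable
ADD CONSTRAINT UQ_MyTable_Details UNIQUE (ColumnA, ColumnB);
-- a second concurrent INSERT of the same (ColumnA, ColumnB) now fails with a
-- unique-violation error, which the application catches and turns into an
-- UPDATE (or, per the argument above, simply ignores)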
Most database frameworks (Hibernate in Java, ActiveRecord in Ruby, etc.) have a form of optimistic locking. This means you execute each operation on the assumption that it will succeed without conflict. In the rare case where there is a conflict, you detect it atomically at the point where you perform the database operation, throw an exception or return an error code, and retry the operation in your client code after re-querying.
This is usually implemented using a version number on each record. When a database operation is done, the row is read (including the version number), the client code updates the data, and then saves it back to the database with a WHERE clause specifying the primary key ID and the version number being the same as when it was read. If the version is different, another process has updated the row, and the operation should be retried. Usually this means re-reading the record and applying the operation again to the new data from the other process.
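In SQL terms, the save-back step might look like this (a sketch; items, data, and version are assumed names, not from the answer):

-- @id and @old_version were read earlier along with the data
UPDATE items
SET data = @new_data,
    version = @old_version + 1
WHERE id = @id
  AND version = @old_version;
-- 0 rows affected => another process updated the row first: re-read and retry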
In the case of adding, you would also want a unique index on the table, so the database refuses the duplicate insert; you can handle that in the same retry code.
Pseudo code would look something like this:
do {
    read row from database
    if no row {
        result_code = insert new row with data
    } else {
        result_code = update row with data
    }
} while result_code == conflict_code
The benefit of this is that you don't need complicated synchronization/locking in your client code: each thread executes in isolation and uses the database as the consistency check (which the database is very quick at, and good at). Because you're not locking a shared resource for every operation, the code can run much faster.
It also means you can run multiple separate operating-system processes to split the load, and/or scale the operation over multiple servers, without any code changes to handle conflicts.
You need to wrap the check and the row write in a critical section or mutex.
With a critical section, only one thread at a time can execute the check-and-write, so both threads can't write at once.
With a mutex, the first thread locks the mutex, performs its operations, then unlocks the mutex. The second thread attempts to do the same, but its lock attempt blocks until the first thread releases the mutex.
Specific implementations of critical-section or mutex functionality depend on your platform.
You need to perform the check for existing rows and the subsequent update/insert inside a single transaction.
When you perform your check, you should also acquire an update lock on those records, to indicate that you are going to write to the database based on the information you have just read, and that no one else should be allowed to change it.
In T-SQL (for Microsoft SQL Server):
BEGIN TRANSACTION

SELECT id FROM MyTable WITH (UPDLOCK) WHERE SomeColumn = @SomeValue

-- Perform your update here

COMMIT TRANSACTION
The update lock won't prevent people from reading those records, but it will prevent them from writing anything that might change the output of your SELECT.
Multithreading is always mind-bending ^^.
The main thing is to delimit the critical resources and critical operations.
Critical resource: your table.
Critical operation: not just the add, but the whole check-then-add procedure.
You need to lock access to your table from the beginning of the check until the end of the add.
If a thread attempts the same procedure while another is checking/adding, it waits until that thread finishes its operation. As simple as that.
Related
I have a Node.js program which uses Sequelize to create tables and insert data into them.
In the future we are going to run multiple instances of the program, and we don't want multiple instances to read from the table during startup: only one instance should do the setup, if required, and the other instances shouldn't get any access to the table until the first instance has completed its work.
I have looked at transaction locking, both shared and exclusive, but both seem to still allow reading of the tables, which I don't want.
My requirement is specifically that once a transaction acquires a lock on a table, other transactions shouldn't be able to read from that table until the first one has completed its work. How can I do this?
In MySQL, use LOCK TABLES to lock an entire table.
In PostgreSQL, LOCK TABLE whatever IN ACCESS EXCLUSIVE MODE; does the trick (plain EXCLUSIVE mode still permits SELECTs; ACCESS EXCLUSIVE blocks them, as the manual quote in the next answer explains).
For best results, have your app look for a particular table when it starts. Do something simple and fast, such as SELECT id FROM whatever LIMIT 1;, to probe whether the table exists. If your app gets an exception because the table isn't there, then run
CREATE TABLE whatever ....;
LOCK TABLES whatever WRITE;
from the app creating the table. This blocks access to the table from all instances of your app except the one that holds the LOCK.
Once your table is locked, the initial SELECT I suggested will block for other clients. There's a possible race condition if two clients try to create the table more or less concurrently, but the extra CREATE TABLE will simply throw an exception.
Note: if you LOCK more than one table, and it's possible to run the code from more than one instance of the app, always lock the tables in the same order, or you have the potential for a deadlock.
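Putting the steps together as one sketch (MySQL syntax; the single-column CREATE TABLE is a hypothetical stand-in for the real schema):

-- probe: throws if the table doesn't exist yet
SELECT id FROM whatever LIMIT 1;
-- only on a "table doesn't exist" error:
CREATE TABLE whatever (id INT PRIMARY KEY);
LOCK TABLES whatever WRITE;   -- every other instance now blocks on any access
-- ... perform the one-time setup ...
UNLOCK TABLES;                -- let the other instances proceed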
As documented in the manual, the statement to lock a table is LOCK TABLE ...
If you lock a table in exclusive mode, no other access is allowed, not even a SELECT. Exclusive mode is the default:
If no lock mode is specified, then ACCESS EXCLUSIVE, the most restrictive mode, is used.
The manual explains the different lock modes:
ACCESS EXCLUSIVE
This mode guarantees that the holder is the only transaction accessing the table in any way.
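A minimal sketch of the startup lock in PostgreSQL, using the table name from the previous answer:

BEGIN;
-- LOCK TABLE must run inside a transaction; ACCESS EXCLUSIVE is the default mode
LOCK TABLE whatever;
-- one-time setup here; no other transaction can even SELECT from the table
COMMIT;  -- the lock is released when the transaction ends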
In a database, we would not want a table to be dropped while we are modifying a row in it. Per my understanding, a read lock on the table plus a write lock on the row should be enough when writing a row (given that a write lock on the table is needed to drop it). Why do we need an intent lock in this case? Many databases use intent locks, which confuses me very much. I think a pthread_rwlock-style scheme should be enough.
I read here that they exist only for performance. Imagine you want to drop a table: you would have to check every row to see whether it is locked, which would be time-consuming, and you would have to lock every row that you checked.
Here's a citation from the blog post:
From a technical perspective the Intent Locks are not really needed by SQL Server. They have to do with performance optimization. Let’s have a look on that in more detail. With an Intent Lock SQL Server just indicates at a higher level within the Lock Hierarchy that you have acquired a Lock somewhere else. A Intent Shared Lock tells SQL Server that there is a Shared Lock somewhere else. A Intent Update or Intent Exclusive Lock does the same, but this time SQL Server knows that there is an Update Lock or an Exclusive Lock somewhere. It is just an indication, nothing more.

But how does that indication help SQL Server with performance optimization? Imagine you want to acquire an Exclusive Lock at the table level. In that case, SQL Server has to know if there is an incompatible lock (like a Shared or Update Lock) somewhere else on a record. Without Intent Locks SQL Server would have to check every record to see if an incompatible lock has been granted.

But with an Intent Shared Lock on the table level, SQL Server knows immediately that a Shared Lock has been granted somewhere else, and therefore an Exclusive Lock can’t be granted at the table level.

That’s the whole reason why Intent Locks exist in SQL Server: to allow efficient checking if an incompatible lock exists somewhere within the Lock Hierarchy. Quite easy, isn’t it?
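(Not part of the quoted post: you can watch SQL Server take these intent locks yourself through the sys.dm_tran_locks view; MyTable and its columns below are hypothetical.)

BEGIN TRANSACTION;
UPDATE MyTable SET SomeColumn = 1 WHERE id = 42;  -- takes an exclusive (X) lock on the key

-- list the locks held by the current session
SELECT resource_type, request_mode
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
-- the result typically includes KEY/X (the row), PAGE/IX, and OBJECT/IX (the table)

ROLLBACK;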
read lock on table + a write lock on row
This would break the meaning of the read lock on the table.
Assume a concurrent SELECT operation, which expects the table to be unmodified while it executes. That operation takes a read lock on the table ... and in your scheme it would succeed. This is bad, as the table is actually being modified while the row is modified.
Instead, the following lock combination is used to modify a row in the table:
IX (Intent eXclusive) on the table + X (eXclusive, similar to a "write lock") on the row
This combination is compatible with (that is, can be executed concurrently with) the modification of another row, but it is incompatible with
S (Shared, similar to a "read lock") on the table
which is what SELECT uses.
A lock compatibility table can be found, for example, on Wikipedia.
One of today's conclusions is that an intent lock can lock a parent node and all of its child nodes in read-only mode in a cheaper and safer way.
Take the example of making a table read-only: how would we lock it with only S and X modes?
If we lock the table in S mode, a user can still modify rows via S(table) + W(row). To avoid that, we would have to lock every row in S mode to make sure no row gets updated. The cost is huge, and there is still a bug: the user can insert new rows. Too expensive, and not safe.
If we lock the table in X mode, how can others read the rows (S on the table + S on the row)? They can't, since MODE_X on the table blocks MODE_S on the table. That's not read-only.
The right solution, with intent locks, is:
Lock the table in MODE_S. That's all!
Any intention to modify a row must first take a MODE_IX lock on the table, and MODE_IX is blocked by MODE_S. This solution is cheap, efficient, and safe!
I am trying to understand how some code with a transaction will behave when run by multiple threads. A unique 5-character ID needs to be inserted per record and must be unique for each "job", which is enforced by a unique index.
We currently have single-threaded code that catches duplicate errors, creates a new random ID, and tries again. However, we are considering moving the ID creation to a later step of processing, which is multithreaded. There is also code in the transaction that uses the random ID; we want to be sure it has the ID that actually went into the record, and that if the code at the end of the transaction fails, the record update does not occur.
What I am trying to understand is whether the transaction can only fail when the update query runs or when another error is thrown by the additional code, or whether it could also fail on commit, after the additional code has run.
Here is the code outline...
until transaction completes successfully
    try
        begin transaction
        create randomid
        update an existing record with randomid
        do something that should only be done once
        commit transaction
    catch duplicate id error
I have run some tests that didn't show a problem, but I'm not sure my tests were sufficient, so I would like to better understand what is going on in this situation.
Also, I assumed the update queries can't happen in parallel, but from my testing it appears that the additional code does run in parallel. Is that always the case, or does it depend on other considerations?
If you have a unique index on a column and you update this column, SQL Server will put a lock on the index (a key-range lock). This prevents all other processes from updating the same record; they have to wait for the lock to be released and are blocked until then. Once the lock is released, the unique index will prevent them from writing a duplicate value.
As the other processes have to wait, depending on your timeout setting and the length of the execution, they may time out. You may wish to handle that in addition to handling the duplicate exception.
If your processes insert different values into the column with the unique index, those operations may happen in parallel, depending on other locks on the table.
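A hedged T-SQL sketch of the outline from the question (Jobs, ShortId, and RecordId are hypothetical names; 2601 and 2627 are SQL Server's duplicate-key error numbers):

DECLARE @recordid INT = 1;        -- hypothetical record to update
DECLARE @randomid CHAR(5), @done BIT = 0;

WHILE @done = 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- create randomid (one hypothetical way to get 5 random characters)
        SET @randomid = LEFT(REPLACE(CONVERT(VARCHAR(36), NEWID()), '-', ''), 5);

        -- the unique index on ShortId makes this throw 2601/2627 on a duplicate
        UPDATE Jobs SET ShortId = @randomid WHERE RecordId = @recordid;

        -- do something that should only be done once

        COMMIT TRANSACTION;
        SET @done = 1;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() NOT IN (2601, 2627) THROW;  -- rethrow non-duplicate errors
        -- duplicate id: loop around and try a new random id
    END CATCH
END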
First I'd like to describe the mechanism of the locking solution I'd like to implement. Basically, an item can be opened in read or write mode; however, if a user opens the item in write mode, no other user should be able to open it in edit mode. The item is a case in a customer service application.
In order to do this I came up with the following: the table will contain a flag which indicates whether an item is checked out for edit, and an "end time" until which the flag is valid. The default value for it is 3 minutes; if no user interaction happens during this time, the flag can be ignored the next time a user tries to open the same item.
On the UI side, I use jQuery to monitor whether the user is active. If he or she is, a periodic AJAX call extends the time frame so he or she can continue working on the item. When the user saves the item, the flag is removed. The end time is necessary to handle situations where the browser crashes, or where the user goes to drink a coffee and leaves the item open for an hour.
So, the question. :) When a user opens the item in edit mode, I first have to read the flag and time values for the item, and if I find them valid (the flag is not set, or it is set but expired), I have to update them with new values.
What transaction isolation level should I use for this in EF, if any? Or should I write stored procedures to handle the select and update in a transaction? If so, what kind of locking method should I use?
You are describing pessimistic locking; there is really no debate about that. There are detailed instructions on what you want to do in the excellent MVC/EF tutorial: http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
There's a chapter early on about pessimistic concurrency.
Optimistic locking is still OK in this case. You can use a timestamp / rowversion column and your flag together. The flag handles your application logic (only a single user can edit the record), and the timestamp avoids the race condition when setting the flag, because only a single thread will be able to read the record and write it back. Any other thread that reads the record concurrently and saves it after the first thread will get a concurrency exception.
If you don't want to use a timestamp, a different transaction isolation level will not help you, because the isolation level doesn't force queries to lock records. You must write the SQL query manually and use the UPDLOCK hint to lock the record while querying, and then execute the update. You can do this in a stored procedure.
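If you go the stored-procedure route, a sketch for SQL Server might look like the following; the Items table and its LockedBy / LockedUntil columns are assumptions, not from the question:

CREATE PROCEDURE dbo.TryCheckOutItem
    @ItemId INT,
    @UserId INT,
    @Success BIT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- UPDLOCK + HOLDLOCK: only one caller at a time can read-then-set the flag
    DECLARE @LockedUntil DATETIME2;
    SELECT @LockedUntil = LockedUntil
    FROM Items WITH (UPDLOCK, HOLDLOCK)
    WHERE ItemId = @ItemId;

    IF @LockedUntil IS NULL OR @LockedUntil < SYSUTCDATETIME()
    BEGIN
        -- free, or the previous editor's 3-minute window expired: take it
        UPDATE Items
        SET LockedBy = @UserId,
            LockedUntil = DATEADD(MINUTE, 3, SYSUTCDATETIME())
        WHERE ItemId = @ItemId;
        SET @Success = 1;
    END
    ELSE
        SET @Success = 0;

    COMMIT TRANSACTION;
END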
The answer below is not a good way to implement pessimistic concurrency. You should not implement this at the application level; the RDBMS has better tools for it.
If you are locking a row in the db, this is by definition pessimistic.
Since you are controlling the pessimistic concurrency at the application level, I don't think it matters which transaction scope EF uses. EF will automatically start a db-level transaction when you call SaveChanges.
To prevent multiple threads from executing the lock / unlock in your app, you can lock the section of code that queries and updates, like so:
public class MyClassThatManagesConcurrency
{
    // the lock object must live inside a type; static so it is shared across instances
    private static readonly object _lock = new object();

    public void MyMethodThatManagesConcurrency()
    {
        lock (_lock)
        {
            // query for the data
            // determine if item should be unlocked
            // dbContext.SaveChanges();
        }
    }
}
With the above, no two threads will ever execute the code inside the lock section at the same time. However, I am not sure why this is necessary. If all you are doing is reading the object and unlocking it when its time has expired, and two threads enter the method at the same time, the item will become unlocked either way.
On the other hand, if your db row for this object has a timestamp column (not a datetime column, but a column for versioning rows), and two threads enter the method at the same time, the second will receive a concurrency exception. But unless you are versioning rows at the db level, I don't think you need to do any locking.
Reply to comment
OK, I get it now; you are right. But you are still locking at the application level, which means it should not matter which db transaction EF chooses. To prevent two users from unlocking the same object, use the C# lock block I posted above.
I have a server application and a database. Multiple instances of the server can run at the same time, but all data comes from the same database (on some servers it is PostgreSQL, in other cases MS SQL Server).
In my application, there is a process that can take hours. I need to ensure that this process is only executed once at a time: if one server is processing, no other server instance can process until the first one has completed.
The process depends on one table (let's call it "ProcessTable"). Before any server starts the hour-long process, it sets a boolean flag in ProcessTable indicating that each needed record is "locked" and being processed (not all records in this table are processed/locked, so I need to specifically mark each record the process needs). When the next server instance comes along while the previous instance is still processing, it sees the boolean flags and throws an exception.
The problem is that two server instances might be activated at nearly the same time. When both check ProcessTable, neither may see any flags set: both servers are in the process of setting the flags, but since neither transaction has committed, neither process sees the locking done by the other. Because the locking mechanism itself may take a few seconds, there is a window of opportunity during which two servers can still process at the same time.
It appears that what I need is a single record in my "Settings" table storing a boolean flag called "LockInProgress". Before a server can lock the needed records in ProcessTable, it must first make sure it has the right to do the locking by checking the "LockInProgress" column in the Settings table.
So my question is: how do I prevent two servers from modifying that LockInProgress column in the Settings table at the same time? Or am I going about this in the wrong manner?
Please note that I need to support both PostgreSQL and MS SQL Server, as some installations use one database and some use the other.
Thanks in advance...
How about obtaining a lock on the record first, and then updating the record to show "locked"? That stops the second instance from acquiring the lock, so its update fails.
The point is to make the lock and the update a single atomic step.
Make a stored procedure that hands out the lock, and run it under the SERIALIZABLE isolation level. This guarantees that one and only one process can get at the resource at any given time.
Note that this means the second process trying to get the lock will block until the first process releases it. Also, if you have to acquire multiple locks in this manner, make sure the design of the process guarantees that the locks are always acquired in the same order. This avoids deadlock situations where two processes each hold a resource while waiting for the other to release its lock.
Unless you can't tolerate your other processes blocking, this is probably easier to implement and more robust than attempting to implement "test and set" semantics.
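A sketch of such a procedure in T-SQL (the Settings table layout follows the question's description; the UPDLOCK hint is an addition so a concurrent caller waits instead of deadlocking; the PostgreSQL version would be analogous):

CREATE PROCEDURE dbo.TryAcquireProcessLock
    @Owner VARCHAR(100),      -- unique identifier for this server instance
    @Acquired BIT OUTPUT
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;

    -- UPDLOCK makes a second concurrent caller wait here; once the first
    -- commits, the second sees the non-'0' value and reports failure
    IF EXISTS (SELECT 1 FROM Settings WITH (UPDLOCK)
               WHERE SettingsKey = 'LockInProgress' AND SettingsValue = '0')
    BEGIN
        UPDATE Settings SET SettingsValue = @Owner
        WHERE SettingsKey = 'LockInProgress';
        SET @Acquired = 1;
    END
    ELSE
        SET @Acquired = 0;

    COMMIT TRANSACTION;
END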
I've been thinking about this, and I think the simplest approach is to just execute a command like this:
update settings set settingsValue = '333' where settingsKey = 'ProcessLock' and settingsValue = '0'
'333' would be a unique value that each server process generates (based on date/time, server name, a random value, etc.).
If no other process has locked the table, settingsValue will equal '0', and the statement will update it.
If another process has already locked the table, the statement becomes a no-op, and nothing gets modified.
I then immediately commit the transaction.
Finally, I re-query the table for the settingsValue; if it is the value I wrote, our lock succeeded and we continue on, otherwise an exception is thrown, etc. When we're done with the lock, we reset the value back to '0'.
Since I'm using the SERIALIZABLE transaction isolation level, I can't see this causing any issues... please correct me if I'm wrong.
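Putting the whole scheme together as plain SQL, portable to both PostgreSQL and SQL Server ('333' stands in for the per-process unique value):

-- acquire: an atomic test-and-set; a no-op if another process holds the lock
UPDATE settings SET settingsValue = '333'
WHERE settingsKey = 'ProcessLock' AND settingsValue = '0';
COMMIT;

-- verify: did we get it?
SELECT settingsValue FROM settings WHERE settingsKey = 'ProcessLock';
-- if the result is '333' we own the lock and can start the hour-long process;
-- otherwise throw the exception

-- release, once the work is done (only reset a lock we still hold)
UPDATE settings SET settingsValue = '0'
WHERE settingsKey = 'ProcessLock' AND settingsValue = '333';
COMMIT;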