Here is my scenario explaining why I need a row lock across transactions:
change the column's value to 5 (in SQL Server)
change the column's value to 5 (in another resource; this can be a file, etc.)
Of course, that is the case when everything goes well. But if any problem occurs while doing the second change, I need to roll back the first change. Also, while the second change is in progress, nobody should be allowed to read or write this row in SQL Server.
So I need to do this (a rough T-SQL sketch follows the list):
lock the row
change the column's value to 5 (in SQL Server)
change the column's value to 5 (in another resource)
if the above change is successfully done
commit the transaction
else
roll back the transaction
unlock the row
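In T-SQL terms, a minimal sketch of that sequence could look like this (table and column names are made up for illustration); the row stays exclusively locked from the UPDATE until COMMIT or ROLLBACK:

BEGIN TRANSACTION;
    -- takes an exclusive row lock that is held until COMMIT/ROLLBACK,
    -- so readers at READ COMMITTED and above wait on this row
    UPDATE dbo.MyTable SET MyColumn = 5 WHERE Id = 1;
    -- ... change the value in the other resource here ...
    -- on failure: ROLLBACK TRANSACTION; -- undoes the SQL Server change
COMMIT TRANSACTION;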
I also need something for the Murphy case: if I cannot reach the database after locking the row (in order to unlock it or roll back), the row should be unlocked automatically after a few seconds.
Is there something in SQL Server that can do this?
Read up on distributed transactions and compensating resource managers. Then you will realize you can do all of that in ONE transaction, managed by your transaction coordinator.
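A sketch of the one-transaction idea in T-SQL, assuming the second resource can be reached through a linked server (the OtherServer name and tables are illustrative; if the other resource is a file, you would enlist a compensating resource manager from client code instead):

-- Both updates enlist in a single distributed transaction via MSDTC;
-- a failure in either one rolls both back
BEGIN DISTRIBUTED TRANSACTION;
    UPDATE dbo.MyTable SET MyColumn = 5 WHERE Id = 1;
    UPDATE OtherServer.OtherDb.dbo.MyTable SET MyColumn = 5 WHERE Id = 1;
COMMIT TRANSACTION;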
Related
I have been asked to check a production issue, for which I need help. I am trying to understand the isolation levels and the different locks available in SQL Server.
I have a table JOB_STATUS with columns job_name (string, primary key), job_status (string), and is_job_locked (string).
Sample data as below:

job_name   job_status   is_job_locked
JOB_A      INACTIVE     N
JOB_B      RUNNING      N
JOB_C      SUCCEEDED    N
JOB_D      RUNNING      N
JOB_E      INACTIVE     N
Multiple processes can update the table at the same time by calling a stored procedure and passing the job_name as an input parameter. It is fine if two different rows are updated by separate processes at the same time.
BUT, two processes should not update the same row at the same time.
Sample update query is as follows:
update JOB_STATUS set is_job_locked='Y' where job_name='JOB_A' and is_job_locked='N';
Here if two processes are updating the same row, then one process should wait for the other one to complete. Also, if the is_job_locked column value has been changed to Y by one process, then the other process should not update it again (which my update statement should handle if the locking works properly).
So how can I do this row-level locking and make sure the update query reads the latest data from the row before updating it, using a stored procedure?
Also, I would like to get a return value indicating whether or not the update query updated the row, so that I can use this value in my further application flow.
RE: "Here if two processes are updating the same row, then one process should wait for the other one to complete. "
That is how locking works in SQL Server. An UPDATE takes an exclusive lock on the row, where "exclusive" means the English meaning of the word: the UPDATE process has excluded (locked out) all other processes while it is running. The other processes must wait for the UPDATE to complete. This includes read processes at transaction isolation levels READ COMMITTED and above. When the UPDATE's lock is released, the next statement can access the value.
If what you are looking for is that two processes cannot change the same row in a single table at the same time, then SQL Server does that for you out of the box, and you do not need to add your own is_job_locked column.
However, an is_job_locked column is typically used to control access beyond a single table. For example, it may be used to prevent a second process from starting a job that is already running: process A would set is_job_locked and then start the job; process B would check the flag before trying to start the job.
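As a minimal sketch, such a stored procedure could be as small as this (the procedure name is made up); the single UPDATE is atomic, so two concurrent callers can never both see is_job_locked = 'N', and @@ROWCOUNT answers the "did I get it?" question from the original post:

CREATE PROCEDURE dbo.TryLockJob
    @job_name VARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE JOB_STATUS
       SET is_job_locked = 'Y'
     WHERE job_name = @job_name
       AND is_job_locked = 'N';
    -- 1 = this caller claimed the job, 0 = another process already holds it
    RETURN @@ROWCOUNT;
END;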
I did not have to use an explicit lock or set any isolation level, as it was a single update query in the stored procedure.
SQL Server only allows one process at a time to update a row; the second process then reads the committed value and does not update it again.
Also, I used @@ROWCOUNT to get the number of rows updated. My issue is solved now.
Thanks for the answers and comments.
I just want to know what happens if a table is being truncated by one connection while another connection is retrieving data from it.
Let's say the first connection is retrieving thousands of rows, and after a few seconds another connection truncates the table. Does the first connection lock the table and prevent the second connection from truncating until the first connection is done, or not?
I know #temp tables can be used to avoid this complicated scenario, but I want to know how SQL Server handles this kind of situation.
Thank you.
Actually, it is the locking mechanism that coordinates the different tasks:
Let's say connection 1 reads tableX and connection 2 also reads tableX. Both connections then hold a shared lock, and shared locks are compatible with each other, so there is no problem.
In another scenario, connection 1 reads tableX, so it holds a shared lock; connection 2 then tries to update or delete rows, so it must wait for connection 1 to finish reading before it can continue.
RE: "Does the first connection lock the table and prevent the second connection from truncating until the first connection is done, or not?"
Yes, exactly so.
It is impossible to truncate a table while another session is selecting from it.
This does not depend on the transaction isolation level, because table truncation needs a Sch-M (schema modification) lock on the table, and this lock is incompatible with all other locks.
Under READ COMMITTED, the first session already holds an IS lock on the table, which conflicts with Sch-M.
Under READ UNCOMMITTED, the first session holds a Sch-S (schema stability) lock on the table, which conflicts with Sch-M.
Under READ COMMITTED SNAPSHOT or SNAPSHOT, the reading session also takes a Sch-S lock, which conflicts with Sch-M. So in every case the TRUNCATE operation waits for the SELECT session to release its table-level lock.
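A quick two-session sketch to observe this behavior (using the tableX name from above; the HOLDLOCK hint just keeps session 1's read locks held so the blocking is easy to see):

-- Session 1: read inside an open transaction, holding the shared locks
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.tableX WITH (HOLDLOCK);
-- leave the transaction open for now

-- Session 2: blocks until session 1 commits or rolls back,
-- because TRUNCATE needs the Sch-M lock
TRUNCATE TABLE dbo.tableX;

-- Session 3 (optional): see the waiting Sch-M request
SELECT request_session_id, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_associated_entity_id = OBJECT_ID('dbo.tableX');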
I have a big problem working with memory-optimized (In-Memory OLTP) tables in SQL Server 2014.
I know that the READPAST hint is not available for memory-optimized tables, but in some scenarios this can hurt performance.
Suppose there are 20 records in one table. Each record has a column LockStatus with an initial value of Wait.
If two consumers want to pick the top(10) records, what happens?
Consumer 1 gets the first 10 records and changes their status to Locked; while it is using them, consumer 2 tries to pick its top(10) but is aborted with:
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
With READPAST, we could tell consumer 2 to pick the second 10 records instead of being aborted.
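For comparison, on a disk-based table this queue pattern is typically written with READPAST plus UPDLOCK, so each consumer skips rows that another consumer has locked instead of waiting or aborting (table and column names are illustrative, modeled on the example above):

-- Each consumer atomically claims up to 10 unclaimed rows,
-- skipping rows currently locked by other consumers
UPDATE TOP (10) q
   SET LockStatus = 'Locked'
OUTPUT inserted.*
FROM dbo.WorkQueue AS q WITH (READPAST, UPDLOCK, ROWLOCK)
WHERE q.LockStatus = 'Wait';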
I have a server application and a database. Multiple instances of the server can run at the same time, but all data comes from the same database (on some servers it is PostgreSQL, in other cases MS SQL Server).
In my application, there is a process that can take hours to run. I need to ensure that only one instance of this process executes at a time: if one server is processing, no other server instance may start processing until the first one has completed.
The process depends on one table (call it ProcessTable). Before any server starts the hour-long process, it sets a boolean flag in ProcessTable indicating that the record is 'locked' and being processed (not all records in this table are processed/locked, so I need to specifically mark each record needed by the process). When the next server instance comes along while the previous instance is still processing, it sees the boolean flags and throws an exception.
The problem is that two server instances might be activated at nearly the same time. When both check ProcessTable, no flags may be set yet: both servers are in the middle of setting the flags, but since neither transaction has committed, neither process sees the locking done by the other. Because the locking step itself may take a few seconds, there is a window of opportunity in which two servers can still end up processing at the same time.
It appears that what I need is a single record in my Settings table storing a boolean flag called LockInProgress. Before a server can lock the needed records in ProcessTable, it must first make sure it has the right to do the locking by checking the LockInProgress column in the Settings table.
So my question is: how do I prevent two servers from modifying that LockInProgress column in the Settings table at the same time? Or am I going about this in the wrong manner?
Please note that I need to support both PostgreSQL and MS SQL Server, as some servers use one database and some use the other.
Thanks in advance...
How about obtaining a lock on the record first and then updating it to show "locked"? That prevents the second instance from acquiring the lock, so its update of the record fails.
The point is to make the lock and the update one atomic step.
Make a stored procedure that hands out the lock, and run it under SERIALIZABLE isolation. This guarantees that one and only one process can get at the resource at any given time.
Note that this means the second process trying to get the lock will block until the first process releases it. Also, if you have to acquire multiple locks in this manner, make sure the design of the process guarantees that the locks are acquired and released in the same order; this avoids deadlocks where two processes each hold a resource while waiting for the other to release its lock.
Unless you cannot tolerate your other processes blocking, this is probably easier to implement and more robust than attempting to implement test-and-set semantics yourself.
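A minimal T-SQL sketch of such a lock-granting procedure (procedure and column names are illustrative; the same compare-and-set idea ports to PostgreSQL):

CREATE PROCEDURE dbo.AcquireProcessLock
    @granted BIT OUTPUT
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
        -- under SERIALIZABLE, this test-and-set cannot interleave
        -- with another caller's test-and-set on the same row
        UPDATE dbo.Settings
           SET LockInProgress = 1
         WHERE SettingName = 'ProcessLock'
           AND LockInProgress = 0;
        -- 1 = lock granted to this caller, 0 = someone else holds it
        SET @granted = CASE WHEN @@ROWCOUNT = 1 THEN 1 ELSE 0 END;
    COMMIT TRANSACTION;
END;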
I've been thinking about this, and I think the simplest way of doing things is to just execute a command like this:
update settings set settingsValue = '333' where settingsKey = 'ProcessLock' and settingsValue = '0'
'333' would be a unique value that each server process generates (based on date/time, server name, a random value, etc.).
If no other process has locked the table, settingsValue will equal '0', and the statement will update it.
If another process has already locked the table, the statement becomes a no-op and nothing gets modified.
I then immediately commit the transaction.
Finally, I re-query the table for the settingsValue; if it holds my value, the lock succeeded and we continue, otherwise an exception is thrown. When we're done with the lock, we reset the value back to '0'.
Since I'm using the SERIALIZABLE isolation level, I can't see this causing any issues... please correct me if I'm wrong.
I've got this process in an ASP.NET application:
Start a connection
Start a transaction
Insert a lot of rows into a table "LoadData" with the SqlBulkCopy class, with a column that contains a specific LoadId.
Call a stored procedure that:
reads the table "LoadData" for the specific LoadId,
for each line does a lot of calculations, which involves reading dozens of tables, and writes the results into a temporary (#temp) table (a process that lasts several minutes),
deletes the lines in "LoadData" for the specific LoadId, and
once everything is done, writes the result into the result table.
Commit the transaction, or roll back if something fails.
My problem is that if two users start the process, the second one has to wait until the first has finished (because the insert seems to take an exclusive lock on the table), and my application sometimes falls into a timeout (and the users are not happy to wait :)).
I'm looking for a way to let the users run everything in parallel, as there is no interaction between them except the last step: writing the result. I think what is blocking me is the inserts/deletes in the "LoadData" table.
I checked the other transaction isolation levels, but it seems that none of them helps me.
What would be perfect would be the ability to release the exclusive lock on the "LoadData" table once the insert is finished, but without ending the transaction (is it possible to force SQL Server to lock only rows and not the table?).
Any suggestion?
Look up SNAPSHOT isolation (SET TRANSACTION ISOLATION LEVEL SNAPSHOT) and the READ_COMMITTED_SNAPSHOT database option in Books Online.
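Concretely, that means turning on row versioning so readers see the last committed version of a row instead of blocking on writers (the database name here is illustrative):

-- Option 1: make the default READ COMMITTED level use row versions
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Option 2: enable snapshot isolation and opt in per session
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

Note that versioning only stops readers from blocking on writers; sessions writing to the same rows still serialize on their exclusive locks.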
Transactions should cover small, fast-executing pieces of SQL/code. They tend to be implemented differently on different platforms. As modifications grow, their locks are taken and may escalate (row, page, table), locking other users out of querying or updating the same data.
Why not forget the transaction and handle processing errors another way? Is your data integrity truly being secured by the transaction, or can you do without it?
If you're sure there is no issue with concurrent operations except the last part, why not start the transaction just before those last statements (whichever ones DO require isolation) and commit immediately after they succeed? Then all the upfront read operations will not block each other.