We have a SQL Server stored procedure.
It selects some rows from a table into a temp table in order to run some data validations.
The next part of the procedure either updates the actual table based on the temp table data or sends back the error status.
Initially selected rows can only be updated once and no further updates are allowed to the same rows.
The problem we are facing: sometimes two threads execute the procedure simultaneously and both pass the initial validation block, because the in-memory temp data has not been processed and committed yet. The second thread is then able to overwrite the first thread's update.
We wrapped the work in a transaction and tried to prevent duplicate inserts and updates by checking the row count affected by the UPDATE and aborting the transaction when it is zero.
I am not sure whether this is correct and optimal.
Also, can we lock rows with a SELECT statement as well?
This has been solved by adding the UPDLOCK hint to the SELECT query inside the transaction.
It locks the selected rows and allows the transaction to proceed in isolation.
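For reference, a minimal sketch of the final pattern; the table and column names (dbo.Orders, OrderId, Status) are illustrative, not the real schema:

BEGIN TRANSACTION;

-- UPDLOCK takes update locks on the selected rows and holds them until the
-- transaction completes, so a concurrent caller blocks here instead of
-- passing the validation step.
SELECT OrderId, Status
INTO   #Pending
FROM   dbo.Orders WITH (UPDLOCK)
WHERE  Status = 'NEW';

-- ... validation logic against #Pending ...

UPDATE o
SET    o.Status = 'PROCESSED'
FROM   dbo.Orders AS o
JOIN   #Pending   AS p ON p.OrderId = o.OrderId
WHERE  o.Status = 'NEW';               -- rows may only be updated once

IF @@ROWCOUNT = 0
    ROLLBACK TRANSACTION;              -- another caller got there first
ELSE
    COMMIT TRANSACTION;

Because the update locks are held until commit, the second caller blocks on the SELECT and then finds no 'NEW' rows left.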
Thanks Everyone for your help.
Related
I have table A and a stored procedure that deletes all data from that table periodically. All queries in the stored procedure are packed into one transaction, but sometimes the stored procedure execution takes up to 5 minutes. Could it be that executing the stored procedure will block inserts into the same table A?
The stored procedure will never be called again until the previous call has been completed.
Will it be different for READ COMMITTED and READ COMMITTED SNAPSHOT ISOLATION?
Yes, a statement like DELETE FROM YourTable; would take out a table lock, blocking all other changes to the table until it is done. I don't think changing the isolation level will help much, unless you put snapshot on the whole database (i.e., Snapshot Isolation).
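If you did want to try the snapshot route, both settings are database-wide; YourDb is a placeholder name, and note that row versioning helps readers rather than the competing writers here:

ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- enables SNAPSHOT isolation
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;    -- READ COMMITTED now reads row versions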
Usually you want to try a different approach for cases like this. Either:
Try breaking the DELETE up into smaller "chunks" so that each chunk takes less time and does not block the entire table (a sketch follows).
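A minimal sketch of the chunking, reusing YourTable from above; the batch size of 5000 is an arbitrary starting point to tune:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- each batch is a short transaction, so locks are held only briefly
    DELETE TOP (5000) FROM YourTable;
    SET @rows = @@ROWCOUNT;
END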
Or, if that is not appropriate: create an empty duplicate of YourTable, then rename YourTable to something like YourTable_deleting and rename the new table to YourTable. Then DELETE (or simply DROP) YourTable_deleting. For example:
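A sketch of the swap; the single Id column below stands in for whatever the real schema is:

CREATE TABLE dbo.YourTable_new (Id int NOT NULL /* ...same columns as YourTable... */);

BEGIN TRANSACTION;
EXEC sp_rename 'dbo.YourTable', 'YourTable_deleting';
EXEC sp_rename 'dbo.YourTable_new', 'YourTable';
COMMIT TRANSACTION;

DROP TABLE dbo.YourTable_deleting;   -- or DELETE from it at leisure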
I have been asked to check a production issue, for which I need help. I am trying to understand the isolation levels and the different locks available in SQL Server.
I have a table JOB_STATUS having columns job_name (string, primary key), job_status (string), is_job_locked (string)
Sample data as below:
job_name | job_status | is_job_locked
---------|------------|--------------
JOB_A    | INACTIVE   | N
JOB_B    | RUNNING    | N
JOB_C    | SUCCEEDED  | N
JOB_D    | RUNNING    | N
JOB_E    | INACTIVE   | N
Multiple processes can update the table at the same time by calling a stored procedure and passing the job_name as an input parameter. It is fine if two different rows are updated by separate processes at the same time.
BUT, two processes should not update the same row at the same time.
Sample update query is as follows:
UPDATE JOB_STATUS SET is_job_locked = 'Y' WHERE job_name = 'JOB_A' AND is_job_locked = 'N';
Here, if two processes are updating the same row, one process should wait for the other to complete. Also, if the is_job_locked value has already been changed to 'Y' by one process, the other process should not update it again (which my UPDATE statement should handle, if the locking is correct).
So how can I do this row-level locking from a stored procedure, and make sure the UPDATE reads the latest data in the row before modifying it?
Also, I would like a return value indicating whether the UPDATE actually changed the row or not, so that I can use it in the rest of the application flow.
RE: "Here if two processes are updating the same row, then one process should wait for the other one to complete. "
That is how locking works in SQL Server. An UPDATE takes an exclusive lock on the row -- where "exclusive" means the English meaning of the word: the UPDATE process has excluded (locked out) all other processes while it is running. The other processes now wait for the UPDATE to complete. This includes READ processes for transaction isolation levels READ COMMITTED and above. When the UPDATE lock is released, then the next statement can access the value.
If what you are looking for is that two processes cannot change the same row in a single table at the same time, then SQL Server does that for you out of the box and you do not need to add your own "is_job_locked" column.
However, typically an is_job_locked column is used to control access beyond a single table. For example, it may be used to prevent a second process from starting a job that is already running. Process A would mark is_job_locked, then start the job. Process B would check the flag before trying to start the job.
I did not have to use an explicit lock or set any isolation level, as it was a single UPDATE query in the stored procedure.
SQL Server only allows one process at a time to update a row; the second process then reads the committed value and does not update it again.
Also, I used @@ROWCOUNT to get the number of rows updated. My issue is solved now.
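For reference, a minimal sketch of what the procedure ended up looking like; the procedure and parameter names are illustrative:

CREATE PROCEDURE dbo.LockJob
    @job_name    varchar(50),
    @rows_locked int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- the UPDATE takes an exclusive row lock; a concurrent caller waits here,
    -- then sees is_job_locked = 'Y' and matches zero rows
    UPDATE JOB_STATUS
    SET    is_job_locked = 'Y'
    WHERE  job_name = @job_name
      AND  is_job_locked = 'N';

    SET @rows_locked = @@ROWCOUNT;   -- 1 = lock acquired, 0 = already locked
END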
Thanks for the answers and comments.
I have a running system where data is periodically inserted into an MS SQL database, and a web application is used to display this data to users.
During the inserts users should be able to keep using the database; unfortunately I can't redesign the whole system right now. Every 2 hours, 40k-80k records are inserted.
Right now the process looks like this:
Temp table is created
Data is inserted into it using plain INSERT statements (parameterized queries or stored procedures should improve the speed).
Data is pumped from temp table to destination table using INSERT INTO MyTable(...) SELECT ... FROM #TempTable
I think this approach is very inefficient. I can see that the insert phase can be improved (bulk insert?), but what about transferring the data from the temp table to the destination table?
This is what we did a few times. Rename your table to TableName_A. Create a view with the original name that selects from that table. Create a second table exactly like the first one (TableName_B) and populate it with the data from the first one. Now set up your import process to populate the table that is not being read by the view, then alter the view to select from that table instead. Total downtime for users: a few seconds. Then repopulate the first table. It is actually easier if you can truncate and repopulate the table, because then you don't need that last step, but that may not be possible if your input data is not a complete refresh. A sketch follows.
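A sketch of the mechanics, with illustrative object names (MyTable, MyTable_A, MyTable_B; MyTable_B is assumed to already exist with the same schema):

-- one-time setup: move the data behind a view
EXEC sp_rename 'dbo.MyTable', 'MyTable_A';
GO
CREATE VIEW dbo.MyTable AS SELECT * FROM dbo.MyTable_A;
GO

-- each import cycle: load the table the view is NOT reading...
INSERT INTO dbo.MyTable_B SELECT * FROM #TempTable;
GO
-- ...then repoint the view; users only notice a momentary switch
ALTER VIEW dbo.MyTable AS SELECT * FROM dbo.MyTable_B;
GO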
You cannot avoid locking when inserting into the table. Even with BULK INSERT this is not possible.
But clients that want to access the table during concurrent INSERT operations can do so by changing the transaction isolation level to READ UNCOMMITTED or by running the SELECT with the WITH (NOLOCK) table hint.
The INSERT command will still lock the table/rows but the SELECT command will then ignore these locks and also read uncommitted entries.
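Two equivalent ways to do that, both of which allow dirty reads (MyTable is a placeholder):

SELECT * FROM MyTable WITH (NOLOCK);

-- or for the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM MyTable;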
I am working with a stored procedure that:
determines the number of rows in the table where the chosenBy column is null
picks one of these rows at random
updates the chosenBy column of this row
returns the row to the client
How do I prevent clients from choosing the same row in situations where they choose at exactly the same time?
I have tried various table hints and isolation levels but just get deadlock exceptions at the client. I just want the second call to wait for a fraction of a second until the first call has completed.
One way of avoiding deadlocks (as indicated in your question title) would be to serialise access to that procedure.
You can do this with sp_getapplock and sp_releaseapplock
See Application Locks (or Mutexes) in SQL Server 2005 for some example code.
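A sketch of how the procedure body might be serialised; the resource name 'ChooseRandomRow' is arbitrary:

BEGIN TRANSACTION;

DECLARE @result int;
EXEC @result = sp_getapplock
     @Resource    = 'ChooseRandomRow',
     @LockMode    = 'Exclusive',
     @LockTimeout = 5000;            -- wait up to 5 seconds instead of deadlocking

IF @result >= 0                      -- 0 = granted, 1 = granted after waiting
BEGIN
    -- ... count rows where chosenBy IS NULL, pick one at random,
    --     update its chosenBy, and select it back to the client ...
    EXEC sp_releaseapplock @Resource = 'ChooseRandomRow';
END

COMMIT TRANSACTION;

With the default @LockOwner of 'Transaction', the lock would also be released automatically at COMMIT or ROLLBACK, so the explicit sp_releaseapplock is belt-and-braces.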
I am using an UPDATE statement in an SSIS Script Task that updates a particular row of a table. This happens in a loop for multiple rows.
Sometimes another application also fires an UPDATE against the same table for a different set of rows, and that application gets a deadlock exception.
How can I avoid this situation? I want both updates to run at the same time, since the row sets being updated are different.
Is there a way to lock only the row that is being updated?
From the MSDN documentation (search for TSQL UPDATE ROWLOCK): besides ROWLOCK there are also PAGLOCK and TABLOCK hints. You can also use transactions if you need to update more than one table. On your reads you may want to use WITH (NOLOCK) if dirty reads are OK (the upside is no read locks).
UPDATE Production.Location WITH (ROWLOCK)
SET CostRate = 100.00
WHERE LocationID = 1;