How to prevent deadlock of table in SQL Server

I have a table where values can be altered by different users, with around 100k rows.
I made a stored procedure that has a begin tran at the start and, at the end, either commits or rolls back the changes depending on the situation.
The problem we're encountering is a lock on that table. For example, while the first user is executing the stored procedure through the system, the other users can't select from the table or execute the stored procedure themselves because the table is locked.
So, is there any way to avoid the lock other than using dirty reads? Or a way to roll back the changes without using begin tran, since that is the main reason the table is locked up?

Yes, as a quick (and somewhat dirty) fix you can enable the SNAPSHOT isolation level for transactions. That will keep readers from being blocked by locks taken inside the transactions.
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
See the SQL Server documentation on snapshot isolation for details.
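If you also want the stored procedure itself to opt into snapshot isolation (rather than relying on READ_COMMITTED_SNAPSHOT alone), a minimal sketch could look like the following; the procedure, table, and column names are placeholders, not taken from the question.
-- Assumes ALLOW_SNAPSHOT_ISOLATION is ON for the database (see above).
-- Hypothetical procedure; table and column names are placeholders.
CREATE PROCEDURE dbo.usp_UpdateMyTable
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRY
        BEGIN TRAN;
        -- Readers elsewhere are not blocked while this runs.
        UPDATE dbo.MyTable
        SET SomeColumn = 1
        WHERE SomeFilter = 1;
        COMMIT TRAN;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        THROW;  -- re-raise, including the snapshot write-conflict error 3960
    END CATCH
END
Note that two sessions updating the same row under SNAPSHOT isolation will still conflict (error 3960), so the existing commit/rollback logic is still needed.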

Related

Concurrency handling stored procedure

My stored procedure is like this.
I'm maintaining a RowVersion in Table A.
Start transaction
Read RowVersion from Table A as rw1
...
Some calculations
...
Read RowVersion from Table A as rw2
Update some tables, including Table A
IF (rw1 == rw2)
    COMMIT
ELSE
    ROLLBACK
Currently I'm using READ COMMITTED as the isolation level, but when the procedure updates Table A, the RowVersion changes as well.
Goal: when two or more users are logged into the system and press a button at the same time (which executes this SP), only the first one should execute the SP; the others should not be allowed to run it.
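If the goal is strictly "only the first caller runs, everybody else is rejected", an application lock is often simpler than comparing RowVersion values. Below is a rough sketch using sp_getapplock; the procedure name, resource name, and the body are invented for illustration.
CREATE PROCEDURE dbo.usp_RunButtonAction   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;

    -- Try to take an exclusive application lock; fail immediately
    -- (@LockTimeout = 0) if another session already holds it.
    DECLARE @rc int;
    EXEC @rc = sp_getapplock
        @Resource    = 'ButtonAction',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 0;

    IF @rc < 0
    BEGIN
        ROLLBACK TRAN;
        RAISERROR('The operation is already running, please try again later.', 16, 1);
        RETURN;
    END

    -- Read RowVersion, do the calculations, update Table A and the other tables here.

    COMMIT TRAN;   -- releases the application lock (owner = Transaction)
END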

Best way to refresh external data in SQL Server

My application uses a lot of data from external sources. This data must be updated regularly.
For each table with external data, I use the following approach when it's time to update:
Create a new table, populate it with data, and create indexes.
Delete the existing table and rename the new one.
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
EXEC ('drop table TableWithExternalData')
EXEC sp_rename 'new_TableWithExternalData', 'TableWithExternalData'
COMMIT
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
While this runs, I can see errors like these in the application logs:
Invalid object name 'dbo.TableWithExternalData'.
or
Invalid object name 'dbo.TableWithExternalData'.
Could not use view or function 'dbo.vSomeView' because of binding errors.
What should I do to update external data without errors?
UPDATE
Reasons for using this method were the following.
It is "official": used by SSDT when generating table change scripts. SSMS uses the same method, but without setting transaction isolation level.
It works very fast, without long-lived locks. Unfortunately, I have no information about changes in the data source, and some of the tables are rather big, so a full join to detect changes, or a simple truncate and insert, is slower.
It is simple to implement and support. In rare cases of schema changes I can just add a column to the newly imported table, and after the scheduled data refresh I have the new column in production.
But it produces runtime errors in the application using the database, so I need some other approach.
UPDATE 2
The slightly modified code below works.
But still the question is: is that the optimal solution for external data refresh?
Reason for the runtime errors: other transactions wait on the object_id, not the object name; after the table is dropped, no object with the old ID exists any more.
Solution: do not drop the old table inside the transaction; rename it instead and drop it after the commit.
More info: http://www.sqlnotes.info/2011/12/02/sp_rename-causes-lock-leaking/
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
EXEC sp_rename 'TableWithExternalData', 'old_TableWithExternalData'
EXEC sp_rename 'new_TableWithExternalData', 'TableWithExternalData'
COMMIT
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
EXEC ('drop table old_TableWithExternalData')

sql server update blocked by another transaction, concurrency in update

Two SPs are executed one after another, and the second one is blocked by the first. They both try to update the same table. The two SPs are as follows:
CREATE PROCEDURE [dbo].[SP1]
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION ImpSchd
        UPDATE Table t1 ...   -- updating a set of [n1, n2, ..., n100] records
    COMMIT TRANSACTION ImpSchd
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
2.
CREATE PROCEDURE [dbo].[SP2]
AS
BEGIN
    UPDATE Table t1 ...   -- updating a set of [n101, n102, ..., n200] records
END
My question is: when SP1 is running under snapshot isolation, why is it blocking SP2 (when both are updating different sets of records)?
If I run the first SP for two different sets of records simultaneously, it works perfectly.
How can I overcome this situation?
If snapshot isolation has to be set in every SP that updates the same table, that would be a much larger change.
If two SPs have to update the same records in a table, how should I handle that (both SPs will update different columns)?
Isolation levels only affect whether reads are blocked, so DML against the same rows is not helped by the isolation level. In this case the update takes IX locks on the table and pages, followed by an X lock on each row it updates. Since you are updating in bulk, the table itself might have been locked due to lock escalation. Hope this helps.
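If lock escalation is indeed the problem, two common mitigations are to turn escalation off for the table or to update in batches small enough to stay under the escalation threshold (roughly 5,000 locks per statement). A sketch, assuming the table is dbo.t1 and using placeholder column names:
-- Option 1: keep SQL Server from escalating row/page locks to a table lock.
ALTER TABLE dbo.t1 SET (LOCK_ESCALATION = DISABLE);

-- Option 2: update in small batches so each statement acquires
-- fewer locks than the escalation threshold.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.t1
    SET SomeColumn = 1              -- placeholder for the real update
    WHERE SomeFilter = 1
      AND SomeColumn <> 1;          -- skip rows already updated
    SET @rows = @@ROWCOUNT;
END
Disabling escalation trades memory for concurrency, so batching is usually the safer first step.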

Truncate Vs Delete and Table Locks SQL Server 2012

Right now I am having an issue with a stored procedure that is locking up when running.
It's a conversion from Sybase.
The stored procedure originally would do
TRUNCATE TABLE appInfo
Then repopulate the data within the same stored procedure, but in SQL Server this seems to be causing locks that block the users.
It's not a high-traffic database.
The change I tried was to do
BEGIN TRAN
DELETE FROM appInfo
COMMIT TRAN
Then repopulate the data, but the users are getting a NO_DATA_FOUND error on this one.
So if I TRUNCATE they get data, but it causes a lock.
If I do a delete, there is no data found.
Does anyone have any insight into this condition and a solution? I was thinking of moving the truncate out into a separate stored procedure called from within the parent procedure, but that might just be pushing the issue down the road rather than actually solving it.
Thanks in advance
When you truncate a table, the entire table is locked (from MSDN https://technet.microsoft.com/en-us/library/ms177570%28v=sql.105%29.aspx: "TRUNCATE TABLE always locks the table and page but not each row."). When you issue a delete, it locks a row, deletes it, then locks the next row and deletes it, and your users keep hitting the table while that is happening. I would go with truncate because it's almost always faster.
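One alternative worth sketching, since SQL Server (unlike some engines) allows TRUNCATE TABLE inside a transaction: wrap the truncate and the reload in a single transaction so readers either wait briefly or keep seeing the old data, instead of finding an empty table. The column list and source table below are placeholders.
BEGIN TRY
    BEGIN TRAN;
    -- TRUNCATE takes a schema-modification lock, so concurrent readers
    -- wait here rather than seeing an empty appInfo table.
    TRUNCATE TABLE dbo.appInfo;

    INSERT INTO dbo.appInfo (Col1, Col2)   -- placeholder column list
    SELECT Col1, Col2
    FROM dbo.appInfoSource;                -- placeholder source of the repopulation

    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;      -- the old data is restored on failure
    THROW;
END CATCH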

TSQL logging inside transaction

I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback
commit
-- end code
You could say to just do the logging before the transaction starts, but that is not as easy because the transaction starts before this s-proc is run (i.e. the code is part of a bigger transaction).
So, in short, is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (e.g. DECLARE @log TABLE (...)), not a #temp table, to hold the log info. Table variables survive a transaction rollback.
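A minimal sketch of the idea (the log table's columns are invented; only the dbo.logtable name comes from the question): rows put into the table variable are still there after the rollback and can be copied into the permanent log table afterwards.
DECLARE @Log TABLE (LogTime datetime2 DEFAULT SYSDATETIME(), Msg varchar(1000));

BEGIN TRAN;
INSERT INTO @Log (Msg) VALUES ('starting main work');
-- [[main code here]]
ROLLBACK TRAN;                            -- the main work is undone, @Log keeps its rows

-- Persist the surviving log entries outside the transaction.
INSERT INTO dbo.logtable (LogTime, Msg)   -- assumed columns on dbo.logtable
SELECT LogTime, Msg FROM @Log;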
I do this one of two ways, depending on my needs at the time. Both involve using variables, which retain their values after a rollback.
1) Declare a variable such as DECLARE @Log varchar(max) and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to it as you go through the transaction. I'll insert this into the log after the commit or the rollback as necessary. I'll usually only insert the @Log value into the real log table when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Create a table variable such as DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)). Insert into it as you progress through your transaction. I like using the OUTPUT clause to insert the actual IDs of rows used in the transaction (and other columns with messages, like 'DELETE item 1234') into this table. I will insert this table's contents into the actual log table after the commit or the rollback as necessary.
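A rough sketch of approach 2, with invented table, column, and key names; the OUTPUT clause writes the affected IDs into the table variable, which keeps its rows even if the transaction is later rolled back.
DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));

BEGIN TRAN;

DELETE FROM dbo.OrderItems                                   -- placeholder target table
OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(20))  -- placeholder key column
INTO @LogTable (RowValue)
WHERE OrderID = 1234;                                        -- placeholder filter

-- ... rest of the transaction ...

ROLLBACK TRAN;                       -- the deletes are undone, @LogTable is not

INSERT INTO dbo.logtable (Msg)       -- assumed column on dbo.logtable, as above
SELECT RowValue FROM @LogTable;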
If the parent transaction rolls back, the logging data will roll back as well; SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. This can open its own connection to the database outside the transaction and insert and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The C# code starts a transaction, then calls an s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc.
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH(NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year-old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour you can use named transactions and savepoints:
begin transaction a
create table #a (i int)
select * from #a          -- #a exists and is empty
save transaction b        -- set a savepoint named b
create table #b (i int)
select * from #a
select * from #b
rollback transaction b    -- roll back to the savepoint: #b is gone, #a remains
select * from #a
rollback transaction a    -- roll back the outer transaction: #a is gone too
In SQL Server, if you want a 'sub-transaction' you should use save transaction xxxx, which works like an Oracle savepoint.
