Does the delete operation block any insert into the same table? - sql-server

I have table A and a stored procedure that periodically deletes all data from that table. All queries in the stored procedure are wrapped in a single transaction, but sometimes the stored procedure takes up to 5 minutes to execute. Could executing the stored procedure block inserts into the same table A?
The stored procedure will never be called again until the previous call has been completed.
Will it be different for READ COMMITTED and READ COMMITTED SNAPSHOT ISOLATION?

Yes, a statement like DELETE FROM YourTable; will take out a table lock, blocking all other changes to the table until it is done. I don't think that changing the isolation level will help much, unless you enable snapshot isolation for the whole database (i.e., Snapshot Isolation).
Usually you want to try a different approach for cases like this. Either:
Try breaking the DELETE up into smaller "chunks" so that each chunk takes less time and does not block the entire table (a sketch of this follows below). Or, if this is not appropriate, then ...
Create an empty duplicate of YourTable, then rename YourTable to something like YourTable_deleting and rename the new table to YourTable. Then DELETE from (or just DROP) YourTable_deleting.
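A minimal sketch of the chunked approach (the batch size of 5000 is an illustrative assumption; tune it for your workload):

-- Delete in small batches; each batch is its own short transaction,
-- so locks are released quickly and inserts can proceed in between.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM YourTable;
    SET @rows = @@ROWCOUNT;
END;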

Related

Sql Server Stored Procedure Concurrency Issue

We have a SQL Server stored procedure.
It selects some rows from a table and puts them into a temp table in order to run some data validations.
The next part of the procedure either updates the actual table based on the temp table data or sends back an error status.
The initially selected rows can only be updated once; no further updates to the same rows are allowed.
The problem we are facing is that sometimes two simultaneous threads execute the procedure at the same time, and both pass the initial validation block because the in-memory temp data has not been processed yet. The second thread is then able to overwrite the first transaction's changes.
We applied a transaction mechanism to prevent duplicate inserts and updates by checking the row count affected by the UPDATE query and aborting the transaction.
I am not sure whether this is correct and optimal.
Also, can we lock rows with a SELECT statement as well?
This has been solved by using UPDLOCK on the SELECT query inside the transaction.
It locks the specific rows and allows the transaction to proceed in isolation.
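A minimal sketch of that pattern (table and column names are illustrative assumptions):

BEGIN TRANSACTION;

-- UPDLOCK takes update locks on the selected rows, so a second
-- session running the same block waits here instead of passing
-- the validation; HOLDLOCK keeps the locks until COMMIT.
SELECT Id, Amount
INTO #Pending
FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
WHERE Status = 'NEW';

-- ... run validations against #Pending ...

UPDATE o
SET o.Status = 'PROCESSED'
FROM dbo.Orders AS o
JOIN #Pending AS p ON p.Id = o.Id;

COMMIT TRANSACTION;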
Thanks Everyone for your help.

We only have SQL Server Standard edition so no access to snapshot functionality. Options?

We only have SQL Server Standard edition, so I can't use the Snapshot functionality. Before spending the time, I just want to know if the following is possible (or if there is a better way), please:
At the end of every month I need to take a snapshot of the month and store it in table B. The following month, take another snapshot and append that snapshot's data to table B. And so on...
Is it possible to create a stored procedure that runs at the end of every month and stores the snapshot data in a temp table A, then, using another stored procedure, take the data from temp table A and append it to table B? The second procedure can then drop table A.
Cheers.
Yes, it is possible.
If I understand you, more or less, this is what you want:
Lock the table
Select everything into a staging table
Move everything from that staging table into your destination
You can lock the entire table (this will prevent changes, but can lead to deadlocks).
INSERT INTO stagingTable (
... -- field list
)
SELECT
... -- field list
FROM
myTable WITH (TABLOCK)
;
TABLOCK places a shared lock on the table, which is released when the statement completes (READ COMMITTED isolation level) or when the transaction is committed/rolled back (SERIALIZABLE).
If you want to keep the lock for the whole transaction, you can add the HOLDLOCK hint as well, which switches the isolation level to SERIALIZABLE for that object, so the lock is released only after COMMIT. Don't forget to start a transaction and commit/roll it back.
You can also use TABLOCKX, which is an exclusive lock preventing all other processes from acquiring a lock on the table or on anything at lower levels (pages, rows, etc.) in the table. This will prevent concurrent reads too!
You can also let SQL Server decide which lock to use (i.e., omit the hint); in this case SQL Server may choose more granular locks (such as page or row locks) instead of locking the whole table.
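Putting this together for the monthly snapshot, a sketch (all table and column names are assumptions; EOMONTH requires SQL Server 2012 or later):

BEGIN TRANSACTION;

-- TABLOCK + HOLDLOCK: shared table lock held until COMMIT,
-- so nothing can change myTable while we copy it.
SELECT Col1, Col2
INTO #StagingA
FROM myTable WITH (TABLOCK, HOLDLOCK);

INSERT INTO TableB (SnapshotMonth, Col1, Col2)
SELECT EOMONTH(GETDATE()), Col1, Col2
FROM #StagingA;

COMMIT TRANSACTION;

DROP TABLE #StagingA;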

What is the fastest way to insert data to MS SQL database without locking it?

I have a running system where data is periodically inserted into an MS SQL DB, and a web application is used to display this data to users.
During data inserts users should be able to continue using the DB; unfortunately, I can't redesign the whole system right now. Every 2 hours 40k-80k records are inserted.
Right now the process looks like this:
Temp table is created
Data is inserted into it using plain INSERT statements (parameterized queries or stored procedures should improve the speed).
Data is pumped from temp table to destination table using INSERT INTO MyTable(...) SELECT ... FROM #TempTable
I think this approach is very inefficient. I see that the insert phase can be improved (bulk insert?), but what about transferring the data from the temp table to the destination?
This is what we did a few times. Rename your table to TableName_A. Create a view that selects from that table. Create a second table exactly like the first one (TableName_B) and populate it with the data from the first. Now set up your import process to populate the table that is not being used by the view, then change the view to point at that table instead. Total downtime for users: a few seconds. Then repopulate the first table. It is actually easier if you can truncate and repopulate the table, because then you don't need that last step, but that may not be possible if your input data is not a complete refresh.
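A sketch of that pattern (all object names, including SomeSource, are assumptions):

-- One-time setup: the application queries the view, not the tables.
CREATE VIEW dbo.MyData AS SELECT Col1, Col2 FROM dbo.TableName_A;
GO
-- Each import cycle: load the table the view is NOT pointing at...
TRUNCATE TABLE dbo.TableName_B;
INSERT INTO dbo.TableName_B (Col1, Col2)
SELECT Col1, Col2 FROM SomeSource;
GO
-- ...then repoint the view; users see only a momentary switch.
ALTER VIEW dbo.MyData AS SELECT Col1, Col2 FROM dbo.TableName_B;
GO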
You cannot avoid locking when inserting into the table. Even with BULK INSERT this is not possible.
But clients that want to access this table during concurrent INSERT operations can do so by changing their transaction isolation level to READ UNCOMMITTED or by executing the SELECT with the WITH (NOLOCK) hint.
The INSERT command will still lock the table/rows, but the SELECT will then ignore these locks and also read uncommitted rows.
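For example (table name is an assumption):

-- Read without waiting for insert locks, at the cost of possibly
-- seeing uncommitted ("dirty") rows.
SELECT Col1, Col2
FROM MyTable WITH (NOLOCK);

-- Equivalent at the session level:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Col1, Col2 FROM MyTable;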

How do I create a stored procedure whose effects cannot be rolled back?

I want to have a stored procedure that inserts a record into tableA and updates record(s) in tableB.
The stored procedure will be called from within a trigger.
I want the inserted records in tableA to exist even if the outermost transaction of the trigger is rolled back.
The records in tableA are linearly linked and I must be able to rebuild the linear connection.
Write access to tableA is only ever through the triggers.
How do I go about this?
What you're looking for are autonomous transactions, and these do not exist in SQL Server today. Please vote / comment on the following items:
http://connect.microsoft.com/SQLServer/feedback/details/296870/add-support-for-autonomous-transactions
http://connect.microsoft.com/SQLServer/feedback/details/324569/add-support-for-true-nested-transactions
What you can consider doing is using xp_cmdshell or CLR to step outside the SQL engine and come back in (those actions can't be rolled back by SQL Server)... but these methods aren't without their own issues.
Another idea is to use INSTEAD OF triggers - you can log/update other tables and then just decide not to proceed with the actual action.
EDIT
And along the lines of @VoodooChild's suggestion, you can use a table variable (@table) to temporarily hold data that you can reference after the rollback - data in a table variable survives a rollback, unlike an insert into a #temp table.
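A minimal sketch of the difference:

DECLARE @Log TABLE (Msg nvarchar(200));

BEGIN TRANSACTION;
INSERT INTO @Log VALUES (N'this row survives the rollback');
-- an INSERT into a #temp table here would be undone
ROLLBACK TRANSACTION;

-- @Log still holds the row; copy it into a permanent table now.
SELECT Msg FROM @Log;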
See this post, Logging messages during a transaction, for a (somewhat convoluted but) effective way of achieving what you want: the insert into the logging table is persisted even if the transaction is rolled back. The method Simon proposes has several advantages: it requires no changes to the caller, it is fast and scalable, and it can be used safely from within a trigger. Simon's example is for logging, but the insert can be for anything.
One way is to create a linked server that points to the local server. Stored procedures executed over a linked server won't be rolled back:
EXEC LinkedServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back'
You can call a remote stored procedure from a trigger.
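A sketch of the loopback setup (the linked-server name is an assumption; the 'remote proc transaction promotion' option requires SQL Server 2008 or later):

-- Loopback linked server pointing at the local instance
DECLARE @srv sysname = CAST(SERVERPROPERTY('ServerName') AS sysname);
EXEC sp_addlinkedserver @server = N'LoopbackServer',
    @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = @srv;
EXEC sp_serveroption N'LoopbackServer', N'rpc out', N'true';

-- Keep calls over this server out of the caller's transaction,
-- so they are not rolled back with it.
EXEC sp_serveroption N'LoopbackServer',
    N'remote proc transaction promotion', N'false';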

SQL Server Isolation Levels - Repeatable Read

I'm having problems getting my head around why this is happening. I'm pretty sure I understand the theory, but something else must be going on that I don't see.
Table A has the following schema:
ID [Primary Key]
Name
Type [Foreign Key]
SprocA sets Isolation Level to Repeatable Read, and Selects rows from Table A that have Type=1. It also updates these rows.
SprocB selects rows from Table A that have Type=2.
Now, given that these are completely different row sets, if I execute both at the same time (with WAITFOR calls to slow things down), SprocB doesn't complete until SprocA does.
I know it's to do with the query on Type, because if I select based on the primary key ID then concurrent access to the table is allowed.
Anyone shed any light?
Cheers
With the isolation level set to REPEATABLE READ, you will hold a shared lock on all data you read until the transaction completes, that is, until you COMMIT or ROLLBACK.
This lowers the concurrency of your application's access to this data. So if your first procedure SELECTs from the table, then calls WAITFOR, then SELECTs again, and so on within a transaction, it will hold the shared lock the entire time, until the transaction commits or the process completes.
If this is a test procedure you are working with, try adding a COMMIT after each SELECT and see if that helps the second procedure run concurrently.
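A sketch of what that looks like:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM TableA WHERE Type = 1;  -- shared locks taken here...
WAITFOR DELAY '00:00:10';             -- ...and still held during the wait
SELECT * FROM TableA WHERE Type = 1;
COMMIT TRANSACTION;                   -- locks released only now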
Good luck!
Kevin
SQL Server uses indexes to take range locks (which is what repeatable reads often use), so if you don't have an index on Type, perhaps it locks the entire table...
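For example (the index name is an assumption):

-- With an index on Type, the locks can land on just the matching
-- keys instead of every row the scan touches.
CREATE INDEX IX_TableA_Type ON TableA (Type);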
The thing to remember is that the locked rows are black boxes to the other process.
You know that SprocA is just reading for type = 1 and that SprocbB is just reading for type = 2.
However, SprocB does not know what SprocA is going to do to those records. Before the transaction is completed, SprocA may update all of the records to type = 2. In that case, SprocB would be working incorrectly if it did not wait for SprocA to complete.
Maintaining concurrency when performing range locks / bulk changes is tough.
