How to execute inserts and updates outside a transaction in T-SQL - sql-server

I have stored procedures in SQL Server T-SQL that are called from .NET within a transaction scope.
Within my stored procedure, I am doing some logging to some auditing tables. I insert a row into the auditing table, and then later on in the transaction fill it up with more information by means of an update.
What I am finding, is that if a few people try the same thing simultaneously, 1 or 2 of them will become transaction deadlock victims. At the moment I am assuming that some kind of locking is occurring when I am inserting into the auditing tables.
I would like to execute the inserts and updates to the auditing tables outside of the transaction I am executing, so that the auditing will occur anyway, even if the transaction rolls back. I was hoping that this might stop any locks occurring, allowing more than one person to execute the procedure at once.
Can anyone help me do this in T-SQL?
Thanks,
Rich
Update: I have since found that the auditing was unrelated to the transaction deadlock, thanks to Josh's suggestion of using SQL Profiler to track down the source of the deadlock.

TransactionScope supports Suppress:
using (TransactionScope scope = new TransactionScope())
{
    // Transactional code...

    // Call a SQL stored procedure (but suppress the transaction)
    using (TransactionScope suppress = new TransactionScope(TransactionScopeOption.Suppress))
    {
        using (SqlConnection conn = new SqlConnection(...))
        {
            conn.Open();
            SqlCommand sqlCommand = conn.CreateCommand();
            sqlCommand.CommandType = CommandType.StoredProcedure;
            sqlCommand.CommandText = "MyStoredProcedure";
            int rows = (int)sqlCommand.ExecuteScalar();
        }
    }
    scope.Complete();
}
But I would have to question why logging/auditing should run outside of the transaction. If the transaction is rolled back, you will still have committed auditing/logging records, and that's probably not what you want.
You haven't provided much information as to how you are logging. Does your audit table have foreign keys pointing back to your main active tables? If so, remove the foreign keys (assuming the audit records only come from 'known' applications).
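If the audit table does have such a foreign key, dropping it is a one-liner; the table and constraint names below are hypothetical:
-- Hypothetical names: drops the FK from the audit table back to the main table.
ALTER TABLE dbo.AuditTable DROP CONSTRAINT FK_AuditTable_MainTable;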

You could save your audits to a table variable (which is not affected by transactions) and then, at the end of your SP (outside the scope of the transaction), insert the rows into the audit table.
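A minimal sketch of that idea, with hypothetical table and column names, assuming the procedure owns the transaction:
DECLARE @Audit TABLE (EventTime datetime, Message varchar(500));

BEGIN TRY
    BEGIN TRANSACTION;

    -- ... the real work goes here ...
    INSERT INTO @Audit (EventTime, Message) VALUES (GETDATE(), 'step 1 done');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- The rollback does not clear the table variable.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
END CATCH;

-- Outside the transaction: persist whatever was collected, even after a rollback.
INSERT INTO dbo.AuditTable (EventTime, Message)
SELECT EventTime, Message FROM @Audit;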
However, it sounds like you are trying to fix the symptoms rather than the problem. You may want to track down the deadlocks and fix them.

I had a similar requirement where I needed to log errors into an errorlog table, but found that rollbacks were wiping them out.
I solved this by copying the previously inserted error records into a table variable, calling ROLLBACK, and then re-inserting the records into the table.
It works like a charm, but the code is messy because it has to be inline. You can't put the ROLLBACK into a stored procedure, otherwise you will get a "Transaction count after EXECUTE..." error.

Why are you updating the auditing table? If you were only doing inserts you might help prevent lock escalations. Also, have you examined the deadlock trace to determine exactly what you were deadlocking on?
You can do this by enabling trace flag 1204 or by running SQL Profiler. This will give you detailed information that will let you know what kind of deadlock you have (locks, threads, parallel, etc.).
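For reference, a quick example of turning the deadlock trace flags on server-wide; the output goes to the SQL Server error log:
-- Trace flag 1204 reports the nodes involved in a deadlock;
-- 1222 (SQL Server 2005+) gives a more detailed, XML-style report.
DBCC TRACEON (1204, -1);
DBCC TRACEON (1222, -1);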
Check out this article on Detecting and Ending Deadlocks.
One other way to do auditing is to decouple it from the business transaction completely by sending all logging events to a queue at the application tier. This minimizes the impact logging has on your business transaction, but it is probably a very large change for an existing application.

Related

NHibernate .SaveOrUpdate: how to tell whether the row was updated or not (i.e. RowCount)

I am updating a column in a SQL table, and I want to check whether the update actually changed a row or whether it was already updated and my query didn't do anything,
the way we can check @@ROWCOUNT in SQL Server.
In my case, I want to update a column named lockForProcessing: if it is already being processed, my query will not affect any rows, which means someone else is already processing it; otherwise I will process it.
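In plain T-SQL, the check I have in mind looks roughly like this (table and key names are hypothetical): claim the row and test @@ROWCOUNT in one statement.
DECLARE @Id int;
SET @Id = 42;   -- example key

UPDATE dbo.WorkItem
SET lockForProcessing = 1
WHERE Id = @Id
  AND lockForProcessing = 0;   -- succeeds only if nobody else has claimed the row

IF @@ROWCOUNT = 1
    PRINT 'This session claimed the row; process it.';
ELSE
    PRINT 'Someone else is already processing it.';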
If I understand you correctly, your problem is a multi-threading / concurrency problem, where the same table may be updated simultaneously.
You may want to have a look at:
Chapter 11. Transactions And Concurrency
The ISession is not threadsafe!
The entity is not stored the moment the code session.SaveOrUpdate() is executed, but typically after transaction.Commit().
Stored and committed are two different things.
The entity is stored after any session.Flush(). Depending on the IsolationLevel, the entity won't be seen by other transactions.
The entity is committed after a transaction.Commit(). A commit also flushes.
Maybe all you need to do is choose the right IsolationLevel when beginning transactions and then read the table row to get the current value:
using (var transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
    var entity = session.Get<MyEntity>(id); // read your row (entity type and id are placeholders)
    transaction.Commit();
}
Maybe it is easier to create some locking or pipeline mechanism in your application code though. Without knowing more about who is accessing the database (other transactions, sessions, processes?) it is hard to answer more precisely.

How do I create a stored procedure whose effects cannot be rolled back?

I want to have a stored procedure that inserts a record into tableA and updates record(s) in tableB.
The stored procedure will be called from within a trigger.
I want the inserted records in tableA to exist even if the outermost transaction of the trigger is rolled back.
The records in tableA are linearly linked and I must be able to rebuild the linear connection.
Write access to tableA is only ever through the triggers.
How do I go about this?
What you're looking for are autonomous transactions, and these do not exist in SQL Server today. Please vote / comment on the following items:
http://connect.microsoft.com/SQLServer/feedback/details/296870/add-support-for-autonomous-transactions
http://connect.microsoft.com/SQLServer/feedback/details/324569/add-support-for-true-nested-transactions
What you can consider doing is using xp_cmdshell or CLR to go outside the SQL engine and come back in (those actions can't be rolled back by SQL Server)... but these methods aren't without their own issues.
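A rough sketch of the xp_cmdshell route (the database, table, and message are placeholders, and xp_cmdshell must be enabled first, which has security implications): the command line opens a brand-new connection, so its insert is not part of the calling transaction.
-- xp_cmdshell is disabled by default; enabling it has security implications.
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;

DECLARE @cmd varchar(1000);
SET @cmd = 'sqlcmd -S . -E -Q "INSERT INTO MyDb.dbo.AuditLog (Message) VALUES (''logged outside the transaction'')"';
EXEC master..xp_cmdshell @cmd, no_output;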
Another idea is to use INSTEAD OF triggers - you can log/update other tables and then just decide not to proceed with the actual action.
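A sketch of that pattern (all names are hypothetical): the trigger writes the log row and only carries out the delete when a guard condition holds, so nothing has to be rolled back at all.
CREATE TRIGGER trg_MainTable_InsteadOfDelete
ON dbo.MainTable
INSTEAD OF DELETE
AS
BEGIN
    -- Log the attempt.
    INSERT INTO dbo.AuditTable (EventTime, Message)
    SELECT GETDATE(), 'Delete attempted for Id ' + CAST(d.Id AS varchar(20))
    FROM deleted d;

    -- Perform the real delete only for rows that pass the guard.
    DELETE m
    FROM dbo.MainTable m
    JOIN deleted d ON d.Id = m.Id
    WHERE m.AllowDelete = 1;   -- hypothetical guard column
END;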
EDIT
And along the lines of VoodooChild's suggestion, you can use a table variable to temporarily hold data that you can reference after the rollback - table variable data survives a rollback, unlike an insert into a #temp table.
See this post, Logging messages during a transaction, for a (somewhat convoluted but) effective way of achieving what you want: the insert into the logging table is persisted even if the transaction is rolled back. The method Simon proposes has several advantages: it requires no changes to the caller, it is fast and scalable, and it can be used safely from within a trigger. Simon's example is for logging, but the insert can be for anything.
One way is to create a linked server that points to the local server. Stored procedures executed over a linked server won't be rolled back:
EXEC LinkedServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back'
You can call a remote stored procedure from a trigger.
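A setup sketch for that loopback linked server (the server name matches the example above, the rest are placeholders); the 'remote proc transaction promotion' option is what keeps the call out of the caller's transaction on SQL Server 2008 and later:
-- Create a linked server that points back at the local instance.
EXEC sp_addlinkedserver
    @server = N'LinkedServer',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'localhost';

-- Allow stored procedure calls over the link.
EXEC sp_serveroption 'LinkedServer', 'rpc out', 'true';

-- SQL Server 2008+: don't promote the remote call into a distributed transaction,
-- so it commits independently of the caller.
EXEC sp_serveroption 'LinkedServer', 'remote proc transaction promotion', 'false';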

Creating an audit table in SQL Server not affected by Transaction rollbacks

I'm new to SQL. I have a large number of stored procedures in my production database. I planned to write an audit table that these stored procedures would use to keep track of changes (the stored procedures would write to this audit table). The issue is that when a transaction rolls back, the rows inserted into the audit table also get rolled back. Is there any way to create a table that is not affected by transaction rollbacks? Any other idea that satisfies my requirement is welcome!
You can't: once a session starts a transaction, all activity on that session is contained inside the transaction.
What you can do is to open a different session, for instance a CLR procedure that connects as an ordinary client (not using the context connection) and audits from this connection.
But auditing actions that roll back is a bit unusual, since you are auditing things that, from the database's perspective, never occurred, and the audit records will conflict with the actual database state.
OK, if you want to know what was rolled back, here is what you do:
Let your existing audit process handle successful inserts.
Put the values for the insert into a table variable in your SP. It is important that it is a table variable and not a temp table. Now, in the catch block for the transaction, perform the rollback. This will not clear the table variable. Then insert into your audit table the values from the table variable. (Add a field to the audit table so you can mark the records as rolled back, and possibly one for the error message.)
We don't do this specifically for auditing but we have done this to record the errors.

Sql Server 2005 - manage concurrency on tables

In an ASP.NET application I have this process:
Start a connection
Start a transaction
Insert a lot of rows into a table "LoadData" with the SqlBulkCopy class, with a column that contains a specific LoadId.
Call a stored procedure that:
Reads the table "LoadData" for the specific LoadId.
For each line, does a lot of calculations that involve reading dozens of tables, and writes the results into a temporary (#temp) table (a process that lasts several minutes).
Deletes the lines in "LoadData" for the specific LoadId.
Once everything is done, writes the result into the result table.
Commit the transaction, or roll back if something fails.
My problem is that if 2 users start the process, the second one has to wait until the first has finished (because the insert seems to put an exclusive lock on the table), and my application sometimes times out (and the users are not happy to wait :) ).
I'm looking for a way to let the users run everything in parallel, since there is no interaction between them except for the last step: writing the result. I think that what is blocking me is the inserts/deletes in the "LoadData" table.
I checked the other transaction isolation levels, but it seems that nothing could help me.
What would be perfect would be to be able to remove the exclusive lock on the "LoadData" table once the insert is finished (is it possible to force SQL Server to lock only rows and not the whole table?), but without ending the transaction.
Any suggestion?
Look up READ COMMITTED SNAPSHOT (the READ_COMMITTED_SNAPSHOT database option) in Books Online.
Transactions should cover small and fast-executing pieces of SQL / code. They have a tendency to be implemented differently on different platforms. They will lock tables and then expand the lock as the modifications grow, thus locking out other users from querying or updating the same row / page / table.
Why not forget the transaction and handle processing errors in another way? Is your data integrity truly being secured by the transaction, or can you do without it?
If you're sure that there is no issue with concurrent operations except for the last part, why not start the transaction just before those last statements (whichever ones actually require isolation) and commit immediately after they succeed? Then all the upfront read operations will not block each other.
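A sketch of that ordering, borrowing the table names from the question (the columns are made up): do the heavy read-only work with no transaction open, and wrap only the final writes.
DECLARE @LoadId int;
SET @LoadId = 123;   -- example load id

-- Heavy read-only work first: no transaction is open, so other loads are not blocked.
SELECT LoadId, SUM(Amount) AS Total   -- Amount/Total are hypothetical columns
INTO #Results
FROM dbo.LoadData
WHERE LoadId = @LoadId
GROUP BY LoadId;

-- Only the final writes need isolation.
BEGIN TRANSACTION;

INSERT INTO dbo.ResultTable (LoadId, Total)
SELECT LoadId, Total FROM #Results;

DELETE FROM dbo.LoadData WHERE LoadId = @LoadId;

COMMIT TRANSACTION;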

Long query prevents inserts

I have a query that runs each night on a table with a bunch of records (200,000+). This application simply iterates over the results (using a DbDataReader in a C# app if that's relevant) and processes each one. The processing is done outside of the database altogether. During the time that the application is iterating over the results I am unable to insert any records into the table that I am querying for. The insert statements just hang and eventually timeout. The inserts are done in completely separate applications.
Does SQL Server lock the table down while a query is being done? This seems like an overly aggressive locking policy. I could understand how there could be a conflict between the query and newly inserted records, but I would be perfectly ok if records inserted after the query started were simply not included in the results.
Any ways to avoid this?
Update:
The WITH (NOLOCK) definitely did the trick. As some of you pointed out, this isn't the cleanest approach. I can't really query everything into memory, given the number of records and the fact that some of the columns in this table are binary (some records are actually about 1 MB of total data).
The other suggestion was to query for batches of records at a time. This isn't a bad idea either, but it does bring up a new issue: database-independent queries. Right now the application can work with a variety of different databases (Oracle, MySQL, Access, etc.). Each database has its own way of limiting the rows returned in a query. But maybe this is better saved for another question?
Back on topic, the "WITH (NOLOCK)" hint is certainly SQL Server specific; is there any way to keep it out of my query (since it would prevent the query from working with other databases)? Maybe I could somehow specify a parameter on the DbCommand object? Or can I specify the locking policy at the database level? That is, change some properties in SQL Server itself that will prevent the table from locking like this by default?
If you're using SQL Server 2005+, how about giving the new MVCC-based snapshot isolation a try? I've had good results with it:
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- YourDatabase is a placeholder
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE YourDatabase SET MULTI_USER;
It will stop readers blocking writers and vice-versa. It eliminates many deadlocks, at very little cost.
It depends what isolation level you are using. You might try doing your selects using the WITH (NOLOCK) hint; that will prevent the read locks, but it also means the data being read might change before the selecting transaction completes.
The first thing you could do is try to add "WITH (NOLOCK)" to any tables you have in your query. This will tame down the locking that SQL Server does. An example of using "NOLOCK" on a join is as follows...
SELECT COUNT(Users.UserID)
FROM Users WITH (NOLOCK)
JOIN UsersInUserGroups WITH (NOLOCK) ON
Users.UserID = UsersInUserGroups.UserID
Another option is to use a DataSet instead of a DataReader. A DataReader is a "fire hose" technique that stays connected to the database while your program is processing, basically handling the result row by row through the hose. A DataSet uses a "disconnected" methodology where all the data is loaded into memory and then the connection is closed. Your program can then loop over the data in memory without having to worry about locking. However, if this is a really large amount of data, there may be memory issues.
Hope this helps.
If you add the WITH (NOLOCK) hint after a table name in the FROM clause it should make sure it doesn't lock, and it doesn't care about reading data that is locked. You might get "out of date" results if you are writing at the same time, but if you don't care about that then you should be fine.
I reckon your best way of avoiding this is to do it in SQL rather than in the application.
You can add a
WAITFOR DELAY '000:00:01'
at the end of each loop iteration to provide time for other processes to run - just make sure that you haven't initiated a TRANSACTION such that all other processes are locked out anyway
The query is performing a table lock, thus the inserts are failing.
It sounds to me like you're keeping a lock on the table while processing the results.
You should instead load them into an array or collection of some sort, and close the database connection.
Then process the array.
In addition, while you're doing your select use either:
WITH(NOLOCK) or WITH(READPAST)
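For example, READPAST simply skips rows that another transaction currently has locked instead of reading them dirty (the table and columns are hypothetical):
-- Returns only rows that are not row-locked by other transactions.
SELECT Id, Payload
FROM dbo.WorkQueue WITH (READPAST)
WHERE Processed = 0;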
I'm not a big fan of using lock hints as you could end up with dirty reads or other weirdness. A couple of other ideas:
Can you break the number of rows down so you don't grab 200k at a time? Is there a way to tell whether you've processed a row - a flag or a timestamp - that you could use in the query? Your query could be 'SELECT TOP 5000 ...', getting a different 5k each time (see the sketch below). Shorter queries mean shorter-lived locks.
If you can use smaller sets of rows I like the DataSet vs. IDataReader idea. You will be loading data into memory and not consuming any SQL locks, but the amount of memory can cause other problems.
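Here is the batching sketch mentioned above, assuming the table has an increasing Id key (all names are hypothetical); the application remembers the highest Id it has processed and asks for the next chunk:
DECLARE @LastId int;
SET @LastId = 0;   -- the application tracks this between batches

-- Each batch holds its locks only for the short time the SELECT runs.
SELECT TOP (5000) Id, Payload
FROM dbo.BigTable
WHERE Id > @LastId           -- or WHERE Processed = 0, if you track a flag
ORDER BY Id;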
-Brian
You should be able to set the isolation level at the .NET level so that you don't have to include the WITH (NOLOCK) hint.
If you want to go with the batching option, you should be able to specify the row count from the .NET level, which would tell the database to return only n records at a time. By setting these options at the .NET level, they should be database independent and work across all the platforms.
