TransactionScope And Function Scope - Are These Connections In-Scope? - sql-server

Suppose you set up a TransactionScope object as illustrated in the Microsoft example here. Now suppose that you need to update a lot of database tables, and you want them all in the scope of the TransactionScope object. Continually nesting SqlConnection and SqlCommand objects 10 deep will create a source code mess. If instead you call other functions which create connections (in your data access layer, for example), will they be within scope of the TransactionScope object?
Example:
' Assume variable "x" is a business object declared and populated with data.
Using scope As New TransactionScope()
    Dal.Foo.SaveProducts(x.Products)
    Dal.Foo.SaveCustomer(x.Customer)
    Dal.Foo.SaveDetails(x.Details)
    ' more DAL calls ...
    Dal.Foo.SaveSomethingElse(x.SomethingElse)
    scope.Complete()
End Using
Assume that each DAL function contains its own using statements for connections. Example:
Public Shared Sub SaveProducts(x As Object)
    Using conn As New SqlConnection("connection string")
        Using cmd As New SqlCommand("stored procedure name", conn)
            With cmd
                ' etc.
            End With
        End Using
    End Using
End Sub

Yes, they will be inside the TransactionScope. What the TransactionScope basically does is to create a Transaction object and set Transaction.Current to that.
In other words, this:
Using scope As New TransactionScope()
    ... blah blah blah ...
End Using
is basically the same as this:
try
{
    // Transaction.Current is a thread-static field
    Transaction.Current = new CommittableTransaction();
    ... blah blah blah ...
}
finally
{
    Transaction.Current.Commit(); // or Rollback(), depending on whether the scope was completed
    Transaction.Current = null;
}
When a SqlConnection is opened, it checks if Transaction.Current (on this thread) is null or not, and if it is not null then it enlists (unless enlist=false in the connection string). So this means that SqlConnection.Open() doesn't know or care if the TransactionScope was opened in this method or a method that called this one.
(Note that if you wanted the SqlConnection in the child methods to NOT be in a transaction, you can make an inner TransactionScope with TransactionScopeOption.Suppress)
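To make that concrete, here is a minimal C# sketch of the same pattern (the method, procedure, and table names are placeholders invented for this example, not anything from the question). The DAL method knows nothing about the TransactionScope, yet its connection enlists automatically; the audit helper opts out with TransactionScopeOption.Suppress:

using System.Data.SqlClient;
using System.Transactions;

static class Dal
{
    public static void SaveSomething(int id)
    {
        using (var conn = new SqlConnection("connection string"))
        using (var cmd = new SqlCommand("dbo.SaveSomething", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Id", id);
            conn.Open();            // enlists in Transaction.Current if one exists
            cmd.ExecuteNonQuery();
        }
    }

    public static void WriteAuditRow(string message)
    {
        // Opt this call out of the ambient transaction.
        using (var suppress = new TransactionScope(TransactionScopeOption.Suppress))
        using (var conn = new SqlConnection("connection string"))
        using (var cmd = new SqlCommand("INSERT INTO AuditLog(Message) VALUES (@m)", conn))
        {
            cmd.Parameters.AddWithValue("@m", message);
            conn.Open();            // NOT enlisted: Transaction.Current is null in this scope
            cmd.ExecuteNonQuery();
            suppress.Complete();
        }
    }
}

class Program
{
    static void Main()
    {
        using (var scope = new TransactionScope())
        {
            Dal.SaveSomething(1);        // rolled back if the scope is never completed
            Dal.WriteAuditRow("saved");  // commits immediately, regardless of the scope
            scope.Complete();
        }
    }
}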

When you create a TransactionScope, all connections you open while the TransactionScope exists join the transaction automatically (they're 'auto enlisted'). So you don't need to pass connection strings around.
You may still want to, because when SQL Server sees different transactions (even if they are all contained by one DTC transaction), it doesn't share locks between them. If you open too many connections and do a lot of reading and writing, you're headed for a deadlock.
Why not put the active connection in some global place and use it?
Some more info after some research. Read this: TransactionScope automatically escalating to MSDTC on some machines?
If you're using SQL Server 2008 (and probably 2012, but not any other database), some magic is done behind the scenes, and if you open two SQL Connections one after the other, they are going to be united into a single SQL transaction, and you're not going to have any locking problem.
However, if you're using a different database, or if you open two connections concurrently, you will get a DTC transaction, which means SQL Server will not manage the locks properly, and you may encounter very unpleasant and unexpected deadlocks.
While it's easy to make sure you're only running on SQL Server 2008, making sure you don't open two connections at the same time is a bit harder. It's very easy to forget it and do something like this:
class MyPersistentObject
{
    public void Persist()
    {
        using (SqlConnection conn = ...)
        {
            conn.Open();
            WriteOurStuff();
            foreach (var child in this.PersistedChildren)
                child.Persist();
            WriteLogMessage();
        }
    }
}
If the child's Persist method opens another connection, your transaction is escalated into a DTC transaction and you're facing potential locking issues.
So I still suggest maintaining the connection in one place and using it through your DAL. It doesn't have to be a simple global static variable; you can create a simple ConnectionManager class with a ConnectionManager.Current property which will hold the current connection. Mark ConnectionManager.Current as [ThreadStatic] and you've solved most of your potential problems. That's exactly how TransactionScope works behind the scenes.
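For illustration, a minimal sketch of what such a ConnectionManager might look like (the class itself is hypothetical; only SqlConnection and [ThreadStatic] are real APIs, and the connection string is a placeholder):

using System;
using System.Data.SqlClient;

public static class ConnectionManager
{
    // One connection per thread, so parallel work never shares a connection.
    [ThreadStatic]
    private static SqlConnection _current;

    public static SqlConnection Current
    {
        get
        {
            if (_current == null)
            {
                _current = new SqlConnection("connection string");
                _current.Open();
            }
            return _current;
        }
    }

    public static void Release()
    {
        if (_current != null)
        {
            _current.Dispose();
            _current = null;
        }
    }
}

Your DAL methods would then use ConnectionManager.Current instead of opening their own connection, and the top-level code would call ConnectionManager.Release() when the unit of work is finished.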

Related

Allow Multiple Transactions on One Connection

I have a console C# program that is accessing a database.
Part of the code is doing some inserts and updates that I want to control with a transaction. This is the part of the code that is handling the business logic.
Another part of the code is doing some inserts and updates that are more system-support logic, which I want to commit immediately on insert. Specifically, this code inserts a row when the program starts and updates that row when it ends. It also logs certain events in the code. I don't want these logged events to go away just because something failed in the business logic.
I tried to do the business logic with SqlCommand like this:
SqlCommand command = new SqlCommand(sql, connection, transaction);
and system logic like this:
SqlCommand command = new SqlCommand(sql, connection);
But I get this error:
ExecuteNonQuery requires the command to have a transaction when
connection assigned to the command is in a pending local transaction.
My goal is to have the business logic commit only upon transaction.Commit() and the system logic to commit immediately.
Can I accomplish that with two separate transactions?
Will I need to open two different connections?
In the end I created two different connections: one for transaction-based IO and another for non-transaction-based IO.
I then used code that looked like this to set the transaction on each command:
transaction = connection.BeginTransaction();
someCommand1.Transaction = transaction;
someCommand2.Transaction = transaction;
Where someCommand1/2 are SqlCommand()s previously created in the code.
This avoids the problem of calling SqlCommand multiple times.
Turned out to be very clean in the code.
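For anyone who lands here later, a hedged sketch of that two-connection pattern (the connection string, table names, and SQL text below are invented placeholders):

using System.Data.SqlClient;

static class TwoConnectionExample
{
    public static void Run(string connStr)
    {
        using (var bizConn = new SqlConnection(connStr))   // business logic: explicit transaction
        using (var logConn = new SqlConnection(connStr))   // system/logging: auto-commit
        {
            bizConn.Open();
            logConn.Open();

            // Logging writes commit immediately, independent of the business transaction.
            using (var logCmd = new SqlCommand(
                "INSERT INTO RunLog (StartedAt) VALUES (GETDATE())", logConn))
                logCmd.ExecuteNonQuery();

            using (var transaction = bizConn.BeginTransaction())
            {
                try
                {
                    // Business-logic commands are all bound to the one transaction.
                    using (var cmd = new SqlCommand(
                        "UPDATE Orders SET Status = 'Processed'", bizConn, transaction))
                        cmd.ExecuteNonQuery();

                    transaction.Commit();   // business changes become visible only here
                }
                catch
                {
                    transaction.Rollback(); // the RunLog row written above still survives
                    throw;
                }
            }
        }
    }
}

The logging connection never has a transaction started on it, so its inserts commit immediately, while every business command is bound to the explicit transaction, which avoids the "pending local transaction" error quoted above.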

Access 2013: could I use a data macro to trigger an MS SQL SP without delaying the execution in Access?

My client uses a multi-user split Access database (i.e. back-end DB on the server, client DB on each PC) as a point-of-sale system as well as for management functions.
He now needs it to update a remote MS SQL database but cannot afford to slow down the Access clients, as customers are waiting. If I add code to each update/append/delete in the Access client DBs to run the SQL SP, it would slow down each transaction too much (I have tried that).
I am wondering whether I could use trigger macros on the back-end Access DB to run the SQL SPs without slowing down the client DB. Would the client DB have to wait for the trigger macro to run before it resumed its work, or would this be a good way to disconnect the client from an SQL update that is taking place on the server?
I have never used trigger macros and it is going to be a lot of work to research and create these on each table in order to test it so, if anyone can answer the above it could save me many hours of (possibly wasted) work!
I am wondering whether I could use trigger macros on the back-end Access DB to run the SQL SPs
Not directly, no. Event-driven Data Macros in Access cannot execute external code (VBA routines, pass-through queries, etc.) and cannot operate on linked tables.
What you could do is use event-driven Data Macros on the main table(s) to gather the information required by the stored procedures and save it in separate "queuing" table(s) in the back-end, then run a scheduled task every few minutes (or so) to call the stored procedures and pass them the "queued" information.
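As a rough illustration, the scheduled task could be as small as the console program sketched below (every name in it, including QueuedUpdates, dbo.PushUpdate, and both connection strings, is invented for the example, so treat it as a sketch of the approach rather than code for your schema):

using System.Collections.Generic;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

class QueueDrainer
{
    static void Main()
    {
        const string accessConn = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\server\share\backend.accdb";
        const string sqlConn = "Server=remoteserver;Database=Pos;Integrated Security=true";

        var queued = new List<object[]>();

        using (var access = new OleDbConnection(accessConn))
        {
            access.Open();

            // 1. Read whatever the data macros queued up since the last run.
            using (var read = new OleDbCommand("SELECT ID, Payload FROM QueuedUpdates", access))
            using (var reader = read.ExecuteReader())
                while (reader.Read())
                    queued.Add(new object[] { reader["ID"], reader["Payload"] });

            // 2. Push each queued row to SQL Server, then remove it from the queue.
            using (var sql = new SqlConnection(sqlConn))
            {
                sql.Open();
                foreach (var row in queued)
                {
                    using (var sp = new SqlCommand("dbo.PushUpdate", sql))
                    {
                        sp.CommandType = CommandType.StoredProcedure;
                        sp.Parameters.AddWithValue("@Payload", row[1]);
                        sp.ExecuteNonQuery();
                    }

                    using (var del = new OleDbCommand("DELETE FROM QueuedUpdates WHERE ID = ?", access))
                    {
                        del.Parameters.AddWithValue("@ID", row[0]);
                        del.ExecuteNonQuery();
                    }
                }
            }
        }
    }
}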
A simpler solution might be to use a linked table to the SQL db, then create and call a VBA function that builds and executes the update query using SQL pass-through on the linked table. This allows the function to return immediately, avoids maintenance and overhead in the SQL db, as well as all the initial setup time. Also, the linked table can retain all connection information, including the username and password, if desired.
STUB of DAO SQLPassthrough call from Data Macro
VBA Function (generic code)
Public Function SetTimeStamp(ByVal RecordID As Variant)
    Dim strSQL As String
    Dim db As DAO.Database
    ' Quote the date literal so SQL Server parses it correctly.
    strSQL = "UPDATE tblName SET tblName.TimeStampField = '" & Now() & "'"
    strSQL = strSQL & " WHERE RecordIDField = " & RecordID
    Set db = CurrentDb()
    db.Execute strSQL, dbSQLPassThrough + dbRunAsync
End Function
To implement in the After Update event, use the SetLocalVar to call the Function
Data Macro (generic code)
If Updated("MyField1") or Updated("MyField2") or Updated("MyField3")
    SetLocalVar
        Name        varTemp
        Expression  =SetTimeStamp([TableName].[RecordIDField])
End If
This will cause the function to execute. It in turn runs the query using the SQL pass-through and asynchronous options, which causes zero slowdown in the Access app. It can easily be modified to pass in the table name and timestamp field name as parameters, so that it may be used on any table, write more fields, etc.
I find the advantage of a single Access function is that if you decide to enhance it, you only need to add the data field(s) to your tables and fix a single function. Also there are no scheduled tasks or queues to maintain, making this a cleaner solution.
Art

Launch stored procedure and continue running it even if disconnected

I have a database where data is processed in some kind of batches, where each batch may contain as many as a million records. I am processing the data in a console application, and when I'm done with a batch, I mark it as Done (to avoid reading it again in case it does not get deleted), delete it, and move on to the next batch.
I have the following simple stored procedure which deletes processed "batches" of data
CREATE PROCEDURE [dbo].[DeleteBatch]
(
    @BatchId bigint
)
AS
SET XACT_ABORT ON
BEGIN TRANSACTION
DELETE FROM table1 WHERE BatchId = @BatchId
DELETE FROM table2 WHERE BatchId = @BatchId
DELETE FROM table3 WHERE BatchId = @BatchId
COMMIT
RETURN @@Error
I am using NHibernate with a command timeout value of 10 minutes, and the DeleteBatch procedure call times out occasionally.
Actually I don't want to wait for DeleteBatch to complete. I have already marked the batch as Done, so I want to go on processing the next batch, or maybe even exit my console application if there are no more pending batches.
I am using Microsoft SQL Express 2012.
Is there any simple solution to tell the SQL server - "launch DeleteBatch and run it asynchronously even if I disconnect, and I don't even need the result of the procedure"?
It would also be great if I could set a lower processing priority for DeleteBatch because other queries are more important than DeleteBatch.
I don't know much about NHibernate, but if you can use ADO.NET in this scenario then you can implement asynchronous database operations easily using the SqlCommand.BeginExecuteNonQuery method in C#. This method starts the process of asynchronously executing a Transact-SQL statement or stored procedure that does not return rows, so that other tasks can run concurrently while the statement is executing.
EDIT: If you really want to exit from your console app before the db operation ends then you will have to manually create threads in your code and perform the db operation in those threads. Now when you close your console app these threads would still be alive, because threads created using System.Threading.Thread are foreground threads by default. But having said that, it is also important to consider how many threads you will create. In your case you would have to assign one thread for each batch. If the number of batches is very large then a large number of threads would need to be created, which would in turn eat a large amount of your CPU resources and could even freeze your OS for a long time.
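A hedged sketch of that thread-based idea follows (the helper name is invented, and note it only defers the wait: the process still stays alive until DeleteBatch finishes, because the worker is a foreground thread):

using System.Data;
using System.Data.SqlClient;
using System.Threading;

static class BatchCleanup
{
    // Starts DeleteBatch on its own thread. Because the thread is a foreground
    // thread, the process will not actually terminate until the call finishes,
    // even if Main() has already returned.
    public static void DeleteBatchInBackground(string connectionString, long batchId)
    {
        var worker = new Thread(() =>
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.DeleteBatch", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 0;                  // no client-side timeout
                cmd.Parameters.AddWithValue("@BatchId", batchId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        });
        worker.IsBackground = false;                     // explicit: keep it a foreground thread
        worker.Start();
    }
}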
Another simple solution I could suggest is to insert the BatchIds into some database table. Create an INSERT TRIGGER on that table. This trigger would then call a stored proc with BatchId as its parameter and would perform the required tasks.
Hope it helps.
What if your console application, instead of trying to delete the batch, just wrote the batch id into a "BatchIdsToDelete" table? Then you could use an agent job running every x minutes/seconds or whatever to delete the top x percent of records for a given batch id, maybe sleeping a little before tackling the next x percent.
Maybe worth having a look at that?
Look at this article, which explains how to do reliable asynchronous procedure execution, code included. It is based on Service Broker.
The problem with trying to use .NET async features (like BeginExecute, Task, etc.) is that the call is unreliable: if the process exits before the procedure completes, the execution is canceled on the server as the session is disconnected.
But you also need to look at the task itself: why is the deletion taking 10+ minutes? Is it blocked by contention? Are you missing indexes on BatchId? Use the Performance Troubleshooting Flowchart.
Late to the party, but if someone else has this problem, use SQLCMD. With Express you are limited in the number of users (I think 2, but it may have changed since the last time I did much with Express). You can have sqlcmd run queries, stored procedures ...
And you can kick off sqlcmd with Windows Scheduler. A script, an Outlook rule ...
I used it to manage like 3 or 4 thousand SQL Server Express instances, with their nightly maintenance scheduled with the Windows Scheduler.
You could also create and run a PowerShell script, it's more versatile and probably a more widely used than sqlcmd.
I needed the same thing. After searching for a long time I found the solution. It's the easiest way:
SqlConnection connection = new SqlConnection();
connection.ConnectionString = "your connection string";
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connection.ConnectionString);
builder.AsynchronousProcessing = true;
SqlConnection newSqlConn = new SqlConnection(builder.ConnectionString);
newSqlConn.Open();
SqlCommand cmd = new SqlCommand(storeProcedureName, newSqlConn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.BeginExecuteNonQuery(null, null);
Ideally the SqlConnection object should take an optional parameter/property, the URL of a web service, be that WCF, WebApi, or something yet to be named, and, if the user wishes, notify the user of execution progress and/or completion status by calling this URL with a well-known message.
Theoretically DbConnection is an extensible object one is free to implement. However, it will take some review of what really can and needs to be done before this approach can be called feasible.

Does SQLite support transactions across multiple databases?

I've done some searching and also read the FAQ on the SQLite site, no luck finding an answer to my question.
It could very well be that my database approach is flawed, but at the moment, I would like to store my data in multiple SQLite3 databases, so that means separate files. I am very worried about data corruption due to my application possibly crashing, or a power outage in the middle of changing data in my tables.
In order to ensure data integrity, I basically need to do this:
begin transaction
modify table(s) in database #1
modify table(s) in database #2
commit, or rollback if error
Is this supported by SQLite? Also, I am using sqlite.net, specifically the latest which is based on SQLite 3.6.23.1.
UPDATE
One more question -- is this something people would usually add to their unit tests? I always unit test databases, but have never had a case like this. And if so, how would you do it? It's almost like you have to pass another parameter to the method like bool test_transaction, and if it's true, throw an exception between database accesses. Then test after the call to make sure the first set of data didn't make it into the other database. But maybe this is something that's covered by the SQLite tests, and should not appear in my test cases.
Yes, transactions work across different SQLite databases, and even between SQLite and SQL Server. I have tried it a couple of times.
Some links and info
From here - Transaction between different data sources.
Since the SQLite ADO.NET 2.0 Provider supports transaction enlistment, not only is it possible to perform a transaction spanning several SQLite data sources, but it can also span other database engines such as SQL Server.
Example:
using (DbConnection cn1 = new SQLiteConnection(" ... "))
using (DbConnection cn2 = new SQLiteConnection(" ... "))
using (DbConnection cn3 = new System.Data.SqlClient.SqlConnection(" ... "))
using (TransactionScope ts = new TransactionScope())
{
    cn1.Open(); cn2.Open(); cn3.Open();
    DoWork1(cn1);
    DoWork2(cn2);
    DoWork3(cn3);
    ts.Complete();
}
How to attach a new database:
SQLiteConnection cnn = new SQLiteConnection("Data Source=C:\\myfirstdatabase.db");
cnn.Open();
using (DbCommand cmd = cnn.CreateCommand())
{
    cmd.CommandText = "ATTACH DATABASE 'c:\\myseconddatabase.db' AS [second]";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "SELECT COUNT(*) FROM main.myfirsttable INNER JOIN second.mysecondtable ON main.myfirsttable.id = second.mysecondtable.myfirstid";
    object o = cmd.ExecuteScalar();
}
Yes, SQLite explicitly supports multi-database transactions (see https://www.sqlite.org/atomiccommit.html#_multi_file_commit for technical details). However, there is a fairly large caveat. If the database file is in WAL mode, then:
Transactions that involve changes against multiple ATTACHed databases
are atomic for each individual database, but are not atomic across all
databases as a set.

Why is my CONTEXT_INFO() empty?

I have a method that sets up my linq data context. Before it returns the DC it calls a stored proc that sets up the CONTEXT_INFO value to identify the current user.
A trigger picks up any changes made and using this context data writes an audit record.
I noticed that my context data in the audit table was blank, so I wrote a simple unit test to step through this process, and I still get nothing. However, if I paste all the Linq-To-SQL statements into a query window, the context data is there.
Looking at a profiler trace, it makes quite a few sp_reset_connection calls in this process. I had understood that these should not have an effect on the CONTEXT_INFO value, though.
So what's going on here?
A Linq to SQL DataContext does not actually hold the connection open when you execute queries, either using query comprehension or ExecuteQuery/ExecuteMethod call, and CONTEXT_INFO only lives in the context of a single connection.
In order to get this to work, you need to manually open the connection on the DataContext using context.Connection.Open() before setting the context_info. Once the connection is already open, successive queries won't auto-close the connection when they're finished.
Note - the technical reason for this is that it invokes ExecuteReader on the IDbCommand with CommandBehavior.CloseConnection set, unless the connection was already open. You can see the same behaviour yourself if you use SqlCommand/IDbCommand objects with the same flag set.
Edit - I guess I should also point out that if the connection is pooled, technically the physical connection is "open" the whole time, but the IDbConnection is still getting closed, which is what causes the connection resets.
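Here is a hedged sketch of that fix with plain LINQ to SQL (dbo.SetContextInfo is a made-up name standing in for whatever stored procedure the question uses to record the current user):

using System.Data.Linq;   // LINQ to SQL

static class AuditContextExample
{
    public static void Run(string connectionString, int currentUserId)
    {
        using (var db = new DataContext(connectionString))
        {
            // Open the connection ourselves so LINQ to SQL reuses this one
            // connection instead of opening/closing one per query, which is
            // what was discarding CONTEXT_INFO between calls.
            db.Connection.Open();

            // Hypothetical stored procedure name; the question's proc that sets
            // CONTEXT_INFO to identify the current user goes here.
            db.ExecuteCommand("EXEC dbo.SetContextInfo {0}", currentUserId);

            // All subsequent queries and SubmitChanges() run on the same open
            // connection, so the audit trigger sees the CONTEXT_INFO set above.
            // ... query and update as usual ...
        }
    }
}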
sp_reset_connection does reset context_info. sp_reset_connection is the procedure called by the client connection pool when recycling a connection, so it appears that you're setting the context on one connection, closing the connection, and expecting the context to be set on a new connection, which is obviously erroneous.
