I am using remote connections to connect a client and a server. After 6 months of working smoothly, a transaction got stuck, probably because the connection was cut while the transaction was running.
How can I prevent a transaction from getting stuck when the connection is lost?
Isn't SQL Server supposed to cancel the transaction if it doesn't finish within some time?
UPDATE:
I am using the default SQL Server isolation level (Read Committed). I tried SET XACT_ABORT ON as suggested, but no luck; the problem remains. This is the sequence of events to replicate the issue:
1. Set a breakpoint in the middle of the transaction and start debugging.
2. Once the transaction reaches the breakpoint, disconnect the computer from the network (simulating an abnormal disconnection).
3. Continue debugging the process and wait for .NET SqlClient to throw the error (no network).
4. Plug the PC back into the network (simulating that the internet connection has returned).
5. SQL Server does not finish or roll back the transaction, therefore the tables used in the first half of the transaction remain locked.
You need to SET XACT_ABORT ON. When XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
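For illustration, a minimal sketch of the pattern (the table and column names are placeholders):
SET XACT_ABORT ON;  -- any run-time error now terminates and rolls back the whole transaction
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42;    -- hypothetical statement
UPDATE dbo.Stock SET Quantity = Quantity - 1 WHERE ItemID = 7;  -- if this fails, both updates roll back
COMMIT TRANSACTION;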
Check out this link for more information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql?view=sql-server-2017
Related
Cannot successfully execute an SSIS package with BEGIN TRAN functionality.
I'm at a loss with an SSIS package I inherited. It contains:
1 Script Task
3 Execute SQL tasks
5 Data flow tasks (each contains a number of merges, lookups, data inserts and other transformations)
1 File System task
All of these are encapsulated in a Foreach loop container. I've been tasked with modifying the package so that if any of the steps within the control/data flow fails, the entire thing is rolled back. Now I've tried two different approaches to accomplish this:
I. Using Distributed Transactions.
I ensured that:
MSDTC was running on target server and executing client (screenshot enclosed)
msdtc.exe was added as an exception to server and client firewall
Inbound and outbound rules were set for both server and client to allow DTC connections.
ForeachLoop Container TransactionLevel: Required
All other tasks TransactionLevel: Supported
My OLEDB Connection has RetainSameConnection set to TRUE and I'm using SQL Server Authentication with Save Password checked
When I execute the package, it fails right after the script task (first step).
After spending an entire week trying to figure out a workaround, I decided to try to accomplish my goal using 3 Execute SQL Tasks (sketched after this list):
BEGIN TRAN before the ForeachLoop Container
COMMIT TRAN after the ForeachLoop Container with a Success Constraint
ROLLBACK TRAN after the ForeachLoop Container with a Failure constraint
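A minimal sketch of what those three tasks would contain (the @@TRANCOUNT guard on the rollback is my own addition, to avoid error 3903 when no transaction is open):
-- Execute SQL Task 1: before the ForeachLoop Container
BEGIN TRANSACTION;
-- Execute SQL Task 2: after the container, on the Success constraint
COMMIT TRANSACTION;
-- Execute SQL Task 3: after the container, on the Failure constraint
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
(All three tasks and the data flows run on the same OLEDB connection with RetainSameConnection set to TRUE, so they share one physical connection.)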
In this case, the ForeachLoop container and all other tasks have TransactionLevel property set to Supported. Now here, the problem is that the package executes up to the fourth data flow task and hangs there forever. After logging into SQL Server and verifying the running sessions, I noticed sys.sp_describe_first_result_set;1 as a headblocker session.
Doing some research, I found it could be related to a few TRUNCATE statements in some of my Data Flow tasks, which could cause a schema lock. I went ahead and changed the ValidateExternalMetadata property to False for all tasks within my data flow and changed my TRUNCATE statements to DELETE statements instead. I re-ran the package and it still hangs in the same spot with the same headblocker.
As an alternative, I tried creating a second OLEDB connection to the same database, assigned that new OLEDB connection to my BEGIN, ROLLBACK and COMMIT SQL tasks with RetainSameConnection set to TRUE, and changed RetainSameConnection to FALSE (and tried it with TRUE as well) in the original OLEDB connection (the one used by the data flow tasks). This worked in the sense that the package appeared to execute (it ran and the COMMIT TRAN task executed fine). Then I ran it again with a forced error to make it fail, and the ROLLBACK TRAN task executed successfully. However, when I queried the affected tables, the transaction hadn't rolled back: all new records were inserted and old ones were updated (the BEGIN TRAN was clearly started on a different connection and hence didn't affect the package's workflow).
I'm not sure what else to try at this point. Any help would be truly appreciated, I'm about to go nuts with this!
P.S. Additionally, "DelayValidation" is set to true on everything, and the SQL Server version is 2012.
To test error handling in an application, I'm looking for a way to let a transaction commit result in an error.
The application is written in C and uses ODBC to talk to a SQL Server 2017 data source. The application starts a database transaction and executes arbitrary SQL (which I can change for the sake of the test). Then it commits the transaction (using ODBC's SQLEndTran()). I want to build a test that verifies the error handling of the commit.
Is there an easy and reliable way to make the commit fail, e.g. by executing some specific SQL script before the commit, or by changing the database or the data source settings?
EDIT / clarification: What I need to fail is the transaction commit itself (specifically, the SQLEndTran() call must complete with an error). SQL executed before that shall complete successfully.
If you are able to time it correctly in a testing framework, you can do a few things:
1. Kill the session from a separate connection in the testing framework (see the sketch after this list).
2. Change firewall configuration to emulate network error.
3. Switch database to single user mode or stop SQL Service.
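For option 1, a minimal sketch in T-SQL (filtering on program_name is an assumption; it must match whatever your test client reports):
-- Run from a separate connection in the testing framework.
DECLARE @spid int;
SELECT @spid = session_id
FROM sys.dm_exec_sessions
WHERE program_name = 'MyTestApp';  -- hypothetical client name
-- KILL only accepts a constant, hence the dynamic SQL.
-- Killing the session rolls back the in-flight transaction, so the
-- subsequent SQLEndTran() commit fails with a connection error.
DECLARE @sql nvarchar(50) = N'KILL ' + CAST(@spid AS nvarchar(10));
EXEC (@sql);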
The easiest way is to force a divide-by-zero error.
declare @SomeVal int = 0
set @SomeVal = 2 / @SomeVal -- raises error 8134: divide by zero error encountered
--EDIT--
Since I guess you want the commit itself to fail, you could simply add a rollback right before the commit. Then the exception would be thrown on the commit statement.
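A minimal sketch of that idea; rolling back first leaves no open transaction, so the commit itself raises error 3902:
BEGIN TRANSACTION;
-- ... the test's SQL statements, all of which succeed ...
ROLLBACK TRANSACTION;  -- injected just before the commit
COMMIT TRANSACTION;    -- fails: the COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION
Whether that error actually surfaces through SQLEndTran() depends on how the driver issues the commit, so verify it in your test harness.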
Section 3.4 of the Postgres documentation covers transactions.
I thought a transaction worked according to the following rules:
The client sends a BEGIN statement to the Database server on a connection. Call this connection “connection_one”.
The client sends whatever queries they want to the Database server. All of these queries are sent via “connection_one”.
If at any time the connection (in this example "connection_one") is lost before a COMMIT statement reaches the database server, the database server rolls back to before the BEGIN statement.
If a COMMIT statement is issued and received by the database server, then the changes are saved and the transaction block has completed.
It looks like the above is not the case, though. My confusion is that it looks like I have to actually issue a ROLLBACK command, and have it reach the database server, in order for partial changes not to be saved. Is this really the case, or am I missing something? If it is, is there any way I can get the above behavior, or is there some reason I would not want it? My concern is what happens if the connection is lost before I am able to ROLLBACK.
Thanks.
I'm using SQL 2000 for my application. My application is using N tables.
My application has a wrapper for SQL Server called Database Server. It runs as a 24/7 Windows service.
I checked the integrity check option in the SQL maintenance plan. One time, while this task was running, one of my tables was locked and was never unlocked.
As a result, my database transaction history was lost.
Please suggest how to solve this problem.
What if you have a client-side command timeout? And the locks are your own locks as a result of the DBCC?
Your code will time out waiting for the DBCC to finish, but any locks it has already issued are not rolled back.
A command timeout tells SQL Server to simply stop processing. To release the locks, you need to either ROLLBACK on the connection or close the connection.
Options:
Use SET XACT_ABORT in the SQL: Do I really need to use “SET XACT_ABORT ON”? (SO)
On client error, try to roll back yourself (literally IF @@TRANCOUNT > 0 ROLLBACK TRAN; see the sketch below)
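A minimal sketch of that guard, sent on the same connection once the client catches the timeout (it is harmless when no transaction is open):
IF @@TRANCOUNT > 0       -- a transaction (and its locks) is still open
ROLLBACK TRANSACTION;    -- releases the locks the timed-out batch left behind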
Trying to understand what SQL Profiler means by emitting "sp_reset_connection".
I see the following: an "exec sp_reset_connection" line followed by BatchStarting and BatchCompleted:
RPC:Completed exec sp_reset_connection
SQL:BatchStarting SELECT [c].[TestID] AS [TestID], [c].[Description] AS [Description] FROM [dbo].[Test] AS [c]
SQL:BatchCompleted SELECT [c].[TestID] AS [TestID], [c].[Description] AS [Description] FROM [dbo].[Test] AS [c]
Basically, does the first line, "exec sp_reset_connection", mean that the whole process (my connection was opened, the select statement was run, then the connection was closed and released back to the pool) just took place? Or is my connection still open?
And why is sp_reset_connection executed before my own select statement? Shouldn't the reset come after the user's SQL?
Is there a way to know in more detail when a connection is opened and closed?
By seeing "exec sp_reset_connection", does that mean my connection was closed?
Like the other answers said, sp_reset_connection indicates that a pooled connection is being reused. Be aware of one particular consequence!
Jimmy Mays' MSDN blog said:
sp_reset_connection does NOT reset the transaction isolation level to the server default from the previous connection's setting.
UPDATE: Starting with SQL 2014, for client drivers with TDS version 7.3 or higher, the transaction isolation levels will be reset back to the default.
ref: SQL Server: Isolation level leaks across pooled connections
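One way to observe the leak from T-SQL (a sketch; sys.dm_exec_sessions reports the level as a number, 2 being the READ COMMITTED default):
-- Run on a pooled connection to see which isolation level it inherited.
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;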
Here is some additional information:
What does sp_reset_connection do?
Data access API layers like ODBC, OLE-DB and System.Data.SqlClient all call the (internal) stored procedure sp_reset_connection when re-using a connection from a connection pool. It does this to reset the state of the connection before it gets re-used; however, nowhere is it documented what things get reset. This article tries to document the parts of the connection that get reset.
sp_reset_connection resets the following aspects of a connection:
All error states and numbers (like @@ERROR)
Stops all EC's (execution contexts) that are child threads of a parent EC executing a parallel query
Waits for any outstanding I/O operations
Frees any held buffers on the server by the connection
Unlocks any buffer resources that are used by the connection
Releases all allocated memory owned by the connection
Clears any work or temporary tables that are created by the connection
Kills all global cursors owned by the connection
Closes any open SQL-XML handles that are open
Deletes any open SQL-XML related work tables
Closes all system tables
Closes all user tables
Drops all temporary objects
Aborts open transactions
Defects from a distributed transaction when enlisted
Decrements the reference count for users in the current database, which releases shared database locks
Frees acquired locks
Releases any acquired handles
Resets all SET options to the default values
Resets the @@ROWCOUNT value
Resets the @@IDENTITY value
Resets any session-level trace options using dbcc traceon()
Resets CONTEXT_INFO to NULL in SQL Server 2005 and newer [not part of the original article]
sp_reset_connection will NOT reset:
Security context, which is why connection pooling matches connections based on the exact connection string
Application roles entered using sp_setapprole, since application roles could not be reverted at all prior to SQL Server 2005. Starting in SQL Server 2005, app roles can be reverted, but only with additional information that is not part of the session. Before closing the connection, application roles need to be manually reverted via sp_unsetapprole using a "cookie" value that is captured when sp_setapprole is executed.
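For reference, the cookie pattern mentioned above looks roughly like this (role name and password are placeholders):
-- Activate the application role and capture the cookie needed to revert it.
DECLARE @cookie varbinary(8000);
EXEC sp_setapprole 'MyAppRole', 'MyAppRolePassword',
@fCreateCookie = true, @cookie = @cookie OUTPUT;
-- ... work performed under the application role ...
-- Revert to the original security context before the connection is pooled again.
EXEC sp_unsetapprole @cookie;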
Note: I am including the list here as I do not want it to be lost in the ever-transient web.
It's an indication that connection pooling is being used (which is a good thing).
Note however:
If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns control the isolation level is reset to the level in effect when the object was invoked. For example, if you set REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns control to the batch.
http://msdn.microsoft.com/en-us/library/ms173763.aspx
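A small sketch of the behavior the quote describes (DBCC USEROPTIONS lists the session's current isolation level):
-- Hypothetical procedure that raises the isolation level internally.
CREATE PROCEDURE dbo.SetSerializable
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- in effect only while this procedure runs
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
EXEC dbo.SetSerializable;
DBCC USEROPTIONS;  -- 'isolation level' shows 'repeatable read' again: the change reverted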