Trying to understand what SQL Profiler means when it emits "sp_reset_connection".
I have the following trace: an "exec sp_reset_connection" line followed by BatchStarting and BatchCompleted events,
RPC:Completed exec sp_reset_connection
SQL:BatchStarting SELECT [c].[TestID] AS [TestID], [c].[Description] AS [Description] FROM [dbo].[Test] AS [c]
SQL:BatchCompleted SELECT [c].[TestID] AS [TestID], [c].[Description] AS [Description] FROM [dbo].[Test] AS [c]
Basically, does the first line, "exec sp_reset_connection", mean that the whole process (my connection was opened, the select statement was run, then the connection was closed and released back to the pool) has already taken place? Or is my connection still open?
And why is sp_reset_connection executed before my own select statement? Shouldn't the reset come after the user's SQL?
Is there a way to see in more detail when a connection is opened and closed?
When I see "exec sp_reset_connection", does that mean my connection was closed?
Like the other answers said, sp_reset_connection indicates that a pooled connection is being reused. Be aware of one particular consequence!
Jimmy Mays' MSDN Blog said:
sp_reset_connection does NOT reset the transaction isolation level to the server default from the previous connection's setting.
UPDATE: Starting with SQL 2014, for client drivers with TDS version 7.3 or higher, the transaction isolation levels will be reset back to the default.
ref: SQL Server: Isolation level leaks across pooled connections
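If you want to verify what a connection handed out by the pool actually inherited, here is a minimal sketch (SQL Server 2005+; 2 = READ COMMITTED, the server default):

    -- Run as the first statement on a freshly pooled connection to see
    -- whether your driver reset the isolation level.
    SELECT transaction_isolation_level
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID;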
Here is some additional information:
What does sp_reset_connection do?
Data access API layers like ODBC, OLE DB, and System.Data.SqlClient all call the (internal) stored procedure sp_reset_connection when re-using a connection from a connection pool. They do this to reset the state of the connection before it gets re-used; however, nowhere is it documented what things get reset. This article tries to document the parts of the connection that get reset.
sp_reset_connection resets the following aspects of a connection:
All error states and numbers (like @@error)
Stops all ECs (execution contexts) that are child threads of a parent EC executing a parallel query
Waits for any outstanding I/O operations to complete
Frees any buffers held on the server by the connection
Unlocks any buffer resources that are used by the connection
Releases all allocated memory owned by the connection
Clears any work or temporary tables that were created by the connection
Kills all global cursors owned by the connection
Closes any open SQL-XML handles
Deletes any open SQL-XML related work tables
Closes all system tables
Closes all user tables
Drops all temporary objects
Aborts open transactions
Defects from a distributed transaction when enlisted
Decrements the reference count for users in the current database, which releases shared database locks
Frees acquired locks
Releases any acquired handles
Resets all SET options to the default values
Resets the @@ROWCOUNT value
Resets the @@IDENTITY value
Resets any session-level trace options set using dbcc traceon()
Resets CONTEXT_INFO to NULL in SQL Server 2005 and newer [not part of the original article]
sp_reset_connection will NOT reset:
Security context, which is why connection pooling matches connections based on the exact connection string
Application roles entered using sp_setapprole, since application roles could not be reverted at all prior to SQL Server 2005. Starting in SQL Server 2005, app roles can be reverted, but only with additional information that is not part of the session: before closing the connection, the application role needs to be manually reverted via sp_unsetapprole, using a "cookie" value that is captured when sp_setapprole is executed.
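For reference, a minimal sketch of that manual revert (the role name and password are placeholders; SQL Server 2005+):

    -- Capture a cookie when activating the application role ...
    DECLARE @cookie varbinary(8000);
    EXEC sp_setapprole 'MyAppRole', 'password',
         @fCreateCookie = true, @cookie = @cookie OUTPUT;
    -- ... and use it to revert the role before the connection
    -- goes back to the pool.
    EXEC sp_unsetapprole @cookie;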
Note: I am including the list here as I do not want it to be lost in the ever-transient web.
It's an indication that connection pooling is being used (which is a good thing).
Note however:
If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns control the isolation level is reset to the level in effect when the object was invoked. For example, if you set REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns control to the batch.
http://msdn.microsoft.com/en-us/library/ms173763.aspx
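A minimal sketch of that revert behavior (the procedure name is illustrative; the DMV reports 3 for REPEATABLE READ and 4 for SERIALIZABLE):

    CREATE PROCEDURE dbo.usp_GoSerializable
    AS
    BEGIN
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        -- Inside the procedure: reports 4 (SERIALIZABLE)
        SELECT transaction_isolation_level
        FROM sys.dm_exec_sessions WHERE session_id = @@SPID;
    END;
    GO

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    EXEC dbo.usp_GoSerializable;
    -- Back in the batch: reports 3 again; the level reverted on return
    SELECT transaction_isolation_level
    FROM sys.dm_exec_sessions WHERE session_id = @@SPID;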
Related
I am using remote connections to connect a client and server. After 6 months of working smoothly, a transaction got stuck, probably because the connection was cut while the transaction was running.
How can I prevent a transaction from getting stuck if the connection is lost?
Isn't SQL Server supposed to cancel the transaction if it doesn't finish within some time?
UPDATE:
I am using the default SQL Server isolation level (READ COMMITTED). I tried SET XACT_ABORT ON as suggested, but no luck; the problem remains. This is the sequence of events to replicate the issue:
Set a breakpoint in the middle of the transaction and start debugging.
Once the transaction reaches the breakpoint, disconnect the computer from the network (simulating an abnormal disconnection).
Continue debugging the process and wait for .NET SqlClient to throw the error (no network).
Plug the PC back into the network (simulating that the connection has returned).
SQL Server does not finish or roll back the transaction, so the tables used in the first half of the transaction remain locked.
You need to SET XACT_ABORT ON: when it is ON and a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back.
Check out this link for more information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql?view=sql-server-2017
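A minimal sketch of the effect (the failing statement and table are illustrative; any run-time error behaves the same way):

    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
        UPDATE dbo.Test SET [Description] = 'updated' WHERE TestID = 1;
        -- A divide-by-zero is a run-time error; with XACT_ABORT ON the whole
        -- transaction is terminated and rolled back, so the UPDATE above is
        -- undone instead of leaving its locks behind.
        SELECT 1 / 0;
    COMMIT TRANSACTION;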
Section 3.4 of the Postgres documentation covers transactions.
I thought a transaction worked according to the following rules:
The client sends a BEGIN statement to the Database server on a connection. Call this connection “connection_one”.
The client sends whatever queries they want to the Database server. All of these queries are sent via “connection_one”.
If at any time the connection (in this example "connection_one") is lost before a COMMIT statement reaches the Database server, the Database server rolls back to the state before the BEGIN statement.
If a COMMIT statement is issued and received by the Database server, then the changes are saved and the transaction block has completed.
It looks like the above is not the case, though. My confusion is that it looks like I have to actually issue a ROLLBACK command, and have it reach the Database Server, in order for partial changes not to be saved. Is this really the case, or am I missing something? If it is the case, is there any way I can get the above behavior, or is there some reason I would not want it? My concern is what happens if the connection is lost before I am able to ROLLBACK.
Thanks.
My application executes many queries, and I am sure that all connections are closed properly. PgAdmin shows many connections have gone "idle in transaction", and eventually the DB becomes unresponsive. Is there a way to find the query that caused a session to become 'idle in transaction'? Or any other tool that can track it? Postgres 8.1 is used.
Edit: A connection pool is used. Also, the 'in transaction' state cleared after a couple of minutes. Then, when a connection is opened, how does this get cleared?
If you check the information in the Postgres documentation regarding this:
idle in transaction (waiting for client inside a BEGIN block), or a command type name such as SELECT. Also, waiting is attached if the server process is presently waiting on a lock held by another server process
I would suggest the following things:
enable logging of "long queries" using the log_min_duration_statement and log_lock_waits options in postgresql.conf, in the Error Reporting and Logging section
check the Lock Management parameters of the postgresql.conf configuration file, deadlock_timeout in particular
check the Lock Monitoring article on the Postgres Wiki and the pg_locks view in Postgres
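On a release as old as 8.1 you can also spot the offending sessions directly in pg_stat_activity; a minimal sketch (8.1 exposes procpid and current_query, and only populates current_query when stats_command_string is enabled):

    -- Lists sessions sitting idle inside an open transaction, with the
    -- backend PID you could investigate or pass to pg_cancel_backend().
    SELECT procpid, datname, usename, query_start
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';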
This is a clear signal that something about closing transactions and closing sessions is wrong in your application. The queries themselves work fine. Check your application for unexpected exceptions, failures, and so on. Some applications are pretty buggy, and usually this is a pretty serious problem: orphaned transactions block VACUUM and block the reuse of connections.
I was assigned to implement an application (in C++) to evaluate pending submissions (a submission is a program solving a given problem). A site (in ASP.NET MVC) posts problems and allows users to submit their answers, then marks the submissions as "pending evaluation" in the database (SQL Server 2008 R2), and that is where my work begins:
I'll have 3 (or maybe more) instances of my application running as services.
Each instance has to check every 2 seconds whether any pending submissions exist in the DB.
If one exists, I retrieve and compile it; after successful compilation I execute it and, finally, check the correctness of the answer. Then I update that submission, setting the results and deleting it from the pending table.
I need to specify in the DB the current status of the pending submission (compiling, running, judging).
The time to evaluate a submission is ~(1-3)s, and the same instance never evaluates more than one submission at the same time.
My problem is: How to connect to the DB server?
I have 3 possible solutions and I need to know which would be better (in order to increase efficiency) and why:
1 - Establish a connection to the DB once, when I instantiate the application, and never close it (close it only when I delete the instance or shut down the server, which theoretically will never happen).
2 - Open a connection every 2s to get the pending submission (if any exists), wait for the full evaluation process to end, set the evaluation results, and then close the connection.
3 - Same as 2, but closing the connection as soon as I retrieve the submission; when the compilation finishes, open it again, update the pending submission's status, and close it; when the execution finishes, open it again, update the status, and close it; finally, when the judging finishes, open it and set the evaluation result.
You don't say what database access library you are using (ODBC, ADO.NET, other?). Opening and closing database connections is a relatively expensive operation. You should be using some sort of connection pooling scheme in your DB access framework: a pool of connections is kept open for a period of time, and when your app opens a connection it gets handed an already-open connection from the pool. That makes it more efficient. Go read about connection pooling for SQL Server.
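As a side note, since several instances will poll the same table, each poll should also claim a submission atomically so that two instances never pick up the same row; a minimal T-SQL sketch, assuming a hypothetical dbo.PendingSubmissions(SubmissionID, Status) table:

    -- Claims one pending row and returns its ID in a single atomic
    -- statement; READPAST skips rows already locked by another instance.
    UPDATE TOP (1) p
    SET    p.Status = 'compiling'
    OUTPUT inserted.SubmissionID
    FROM   dbo.PendingSubmissions AS p WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE  p.Status = 'pending';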
I'm using SQL Server 2000 for my application, and my application uses N tables.
My application has a wrapper for SQL Server called Database Server. It runs as a 24/7 Windows service.
Since I checked the integrity check option in the SQL maintenance plan, at some point while this task was running one of my tables was locked, and it has never been unlocked.
So the history of my database transactions has been lost.
Please provide your suggestions on how to solve this problem.
What if you have a client-side command timeout, and the locks are your own locks resulting from the DBCC?
Your code will time out waiting for the DBCC to finish, but any locks it has already taken are not rolled back.
A command timeout tells SQL Server to simply stop processing. To release the locks you need to either ROLLBACK on the connection or close the connection.
Options:
Use SET XACT_ABORT in the SQL: Do I really need to use “SET XACT_ABORT ON”? (SO)
On client error, try and roll back yourself (literally IF @@TRANCOUNT > 0 ROLLBACK TRAN)
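A minimal sketch of the two options together, kept compatible with SQL Server 2000 (the statements inside the transaction are placeholders):

    -- Option 1: make run-time errors terminate and roll back the transaction.
    SET XACT_ABORT ON;
    BEGIN TRAN;
        -- ... your statements here ...
    COMMIT TRAN;

    -- Option 2: after the client catches a command timeout, send this on the
    -- same connection so any transaction left open is rolled back.
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;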