In an ASP.NET/SQL Server project, we create connections using ADO.NET (SQL authentication), and we are seeing a lot of active connections sitting in "sleeping" / "awaiting command" status.
The code does the following: get a connection from a common function, update the database, commit the transaction, close and dispose the transaction, close the connection.
1) On SQL Server 2008, when our program runs for some time (it updates the database every few seconds), the number of active connections increases dramatically and SQL Server starts refusing new connections (the default maximum is 100).
2) On SQL Server 2005, we see the connections being reused and everything works fine. Our connection count does not go above 15-20.
We found an issue reported to Microsoft against 2008 and conveyed it to the client.
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=383517 - Talks about 2008 not releasing closed connections immediately.
At the client site, we see the same issue on SQL Server 2005 too.
My question is: when the .NET program calls Close() on a connection, how long does SQL Server keep it active?
Thanks a lot for any hints
Regards
Anand
Connections are returned to the pool. If they are not being reused from there and the number of connections keeps increasing, you almost certainly did not clean them up properly. Make use of using blocks for every disposable type (transactions, commands, and so on).
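A minimal sketch of that pattern with System.Data.SqlClient; the connection string, table and column names are placeholders:

using System.Data.SqlClient;

// Open, update, commit and dispose in one scope; Dispose on the connection
// returns the physical connection to the pool even if an exception is thrown.
static void UpdateStatus(string connectionString, int id, string status)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        using (var command = new SqlCommand(
            "UPDATE dbo.Jobs SET Status = @status WHERE Id = @id",
            connection, transaction))
        {
            command.Parameters.AddWithValue("@status", status);
            command.Parameters.AddWithValue("@id", id);
            command.ExecuteNonQuery();
            transaction.Commit();
        }
    } // connection goes back to the pool here
}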
To clear the connection pool you can call this static method:
SqlConnection.ClearAllPools();
This should remove all pooled connections that are not currently in use; the ones still in use remain.
I am using a SQL Server database with Node.js, with a connection pool to perform various queries. When I run sp_who2, I can see that there are almost 20 processes whose status is "sleeping" and whose command is "AWAITING COMMAND".
Should I go ahead and kill these processes? I read in another post that this happens when you create a transaction in SQL Server but never commit or roll it back. I cannot find any place in my application where I fail to commit, or to roll back on error, so I am not sure where these came from.
I have a feeling that leaving those processes there is going to cause query timeout issues in the future. Is there a way to see which query put a session into the sleeping / awaiting command state?
I normally see many sleeping connections. I consider it normal. If you have sleeping connections with open transactions and locks, then you need to investigate. I would try to identify the host and PID holding the lock. In some cases the resolution is a polite talk with the person responsible for not closing their transaction.
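One way to check whether any of those sleeping sessions still hold an open transaction is to join sys.dm_exec_sessions to sys.dm_tran_session_transactions; host_name and host_process_id give you the host and PID mentioned above. A sketch via ADO.NET (the query itself can just as well be run from SSMS or Node.js, and typically needs VIEW SERVER STATE permission):

using System;
using System.Data.SqlClient;

// Lists sleeping sessions that still have an open transaction,
// together with the host name and client process id.
static void ListSleepingSessionsWithOpenTransactions(string connectionString)
{
    const string sql = @"
        SELECT s.session_id, s.host_name, s.host_process_id, s.program_name
        FROM sys.dm_exec_sessions AS s
        JOIN sys.dm_tran_session_transactions AS t ON t.session_id = s.session_id
        WHERE s.status = 'sleeping';";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("spid {0}: {1} (PID {2}) {3}",
                    reader["session_id"], reader["host_name"],
                    reader["host_process_id"], reader["program_name"]);
            }
        }
    }
}

For a specific spid, DBCC INPUTBUFFER(<spid>) shows the last batch that session sent, which is usually as close as you can get to "what query left it sleeping".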
A connection pool is a pool of connections to SQL Server. They will be idle and sleeping unless they are in use. Generally, there is a timeout for the connections in the pool. (For example, if you look at the ODBC control panel, the connection pooling tab will generally show a 60 second timeout. It might also always keep a minimum number of idle connections.) Check if you have a minimum number of idle connections. Once you know your timeout, verify that the connections are timing out as expected...eventually. If not, I would look for a connection leak or a connection pool issue. Is the application releasing the connection when done? Does GC have to run before the connection goes away?
Years ago there was an issue where a connection could go back into the pool with an open transaction. It was not until the connection was being prepared for reuse that it was finally reset. This issue has been fixed.
Another past issue was broken connections. For example, if the SQL Server was rebooted, all idle pooled connections were broken, but this was not detected until a connection was requested again. Each broken connection in the pool had to go through a connection failure timeout before it was replaced. This was a PITA.
I have had this 'issue' for a long time now, and I am really wondering whether it is just me or whether there actually is a way of preventing the following:
UPDATED
In Visual Studio, when using the Server Explorer on a .mdf database in an Entity Framework Code First project, whenever I open the database manually to look at the data of certain tables (clicking on "Show table data"), the database connection seems to stay open in the background even after I explicitly close the connection.
I then get a "... the database is currently in use ..." error when I try to debug afterwards, even though I closed the connection and even after restarting the solution.
Killing all sqlservr.exe processes in Task Manager does the trick.
Note that this is a local solution and a local database (.mdf) that I am using for testing purposes. Nothing and no one else is using this solution.
I am quite sure this is not the behavior it should have, right?
What am I doing wrong, or what can I do to avoid this behavior if it is not the default?
Thank you in advance for any feedback!
Include the "Pooling" flag in the connect string set to false:
Pooling=False
However, this might not be the best option in a production environment:
Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call. (...) SQL Server Connection Pooling (ADO.NET)
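For example, a LocalDB-style connection string for a local .mdf with pooling turned off might look like the following; the data source and file name are placeholders, so adjust them to your own setup:

Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\MyTestDb.mdf;Integrated Security=True;Pooling=False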
I am using SQL Server 2012 with a desktop application as the client. The application gets errors after a period of time with no activity. I googled this issue and all the solutions point me to the AUTO_CLOSE option on the database, but that is already set to false.
I think something is missing in the connection string (ADO extension).
To be honest, if you have long-running connections, you can hit these errors regardless, due to firewalls / routers closing connections, etc. The correct solution is to instantiate a connection when you need it, use it, and release it. With connection pooling, this is not really a performance problem.
If your long-running application is "bursty", it is sometimes convenient to open the connection, do a number of commands, and then, when you go idle, release the connection and wait for the next burst of activity.
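A sketch of that open-per-burst pattern; the connection string, table and column names are placeholders, and thanks to pooling the repeated Open/Dispose calls mostly just check a pooled connection out and back in:

using System.Collections.Generic;
using System.Data.SqlClient;

// Process one burst of work, then let go of the connection until the next burst.
static void ProcessBurst(string connectionString, IEnumerable<int> orderIds)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open(); // usually served from the pool, not a brand new TCP connection

        foreach (var orderId in orderIds)
        {
            using (var command = new SqlCommand(
                "UPDATE dbo.Orders SET Processed = 1 WHERE OrderId = @id", connection))
            {
                command.Parameters.AddWithValue("@id", orderId);
                command.ExecuteNonQuery();
            }
        }
    } // released here; nothing is held open while the application sits idle
}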
For some time now our flagship application has been having mysterious errors. The error message is the generic
[DBNETLIB][ConnectionWrite (send()).]General network error. Check your network documentation.
This is reliably reproduced by leaving the app open for the night and resuming work in the morning. Since it's a backend server app this is a normal scenario.
The funny thing is - we've migrated from SQL Server 7 to 2000 to 2008 and the issue is present on all of them. But what seems to matter is the OS on which we run the app. On WinXP it works fine, on Vista/7 it fails. So the problem is at the client end.
The results of Google on the error message cover a very wide spectrum of different causes (since this is a very generic error) and none of the scenarios found there are similar to ours.
So perhaps someone around here will know what the problem is in our case?
You should be able to reproduce this error condition on demand by:
1. Opening a database connection (in your client application)
2. Unplugging the network cable
3. Plugging network cable back in (wait until the network connection is restored)
4. Using the previously opened connection to query the database
As far as I can tell from experience, client side ADO code is not able to consistently determine if an underlying network connection is actually valid or not. Checking if the database connection is open (in the client code) returns true. However, performing any operations on that connection results in a General network error.
The connection pool appears to be able to determine when a connection goes 'bad' so it never returns a bad connection to the application. It simply opens a new connection instead.
So, if a database connection is kept alive for a long time (used or unused) by the application, the underlying TCP/IP connectivity can get broken.
The bottom line is that database connections should be closed and returned back to the connection pool when not in use.
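To illustrate the point about the client not being able to tell: SqlConnection.State can still report Open after the network link has died, so the only way to find out is to actually run something cheap and rebuild the connection when that fails. A sketch in C#/ADO.NET (the original app may be using classic ADO, but the idea is the same; the connection string is a placeholder), and even this is only a fallback for code that insists on keeping a connection around instead of closing it between uses:

using System;
using System.Data.SqlClient;

// State == Open is not trustworthy after a network drop, so probe with a trivial
// command and replace the connection if the probe fails.
static SqlConnection EnsureUsable(SqlConnection connection, string connectionString)
{
    try
    {
        using (var probe = new SqlCommand("SELECT 1", connection))
        {
            probe.ExecuteScalar(); // throws if the underlying link is actually dead
        }
        return connection;
    }
    catch (Exception ex) when (ex is SqlException || ex is InvalidOperationException)
    {
        connection.Dispose();                       // discard the broken connection
        var fresh = new SqlConnection(connectionString);
        fresh.Open();                               // a new (or pooled, validated) connection
        return fresh;
    }
}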
Edit
Also, depending on the number of clients connecting to the db, not using the connection pool can cause another issue. You may hit the maximum number of sockets open on the server side. This is from memory. Once a connection is closed on the client side, the connection on the server goes into a TIME_WAIT state. By default, the server socket takes about 4 minutes to close, so it is not available to other clients during that time. The bottom line is that there is a limited number of available sockets on the server. Keeping too many connections open can create a problem.
One project I worked on easily hit this socket limit with around 120 users. A new 'feature' was added that absolutely hammered the server, and after a few hours of using the app things would suddenly slow to a crawl for everyone. SQL Server was not closing enough sockets in time for new connection requests. Although there are 65K sockets altogether, only the first 5000 are made available to ADO by default (this is a registry setting, so it can be changed).
The number of sockets in TIME_WAIT state would slowly build up until the OS would not allocate any more. So clients had to wait until server side sockets closed and a new connection could then be created.
Have you tried disabling SNP/TCP Chimney Offload?
Had a similar error. For me it was indirectly caused by mismatched calls to WSACleanup and WSAStartup.
The program called WSACleanup more times than WSAStartup. This would cause a reference counter (somewhere in the sockets library) to reach zero too early.
I think effectively from that moment on all sockets owned by the process are broken.
And this would also kill the SQL client since it uses sockets to 'talk' to the SQL server as well.
I have a client-server app that uses the .NET SqlClient Data Provider to connect to SQL Server - pretty standard stuff. By default, how long must a connection be idle before the connection pooling manager closes it and removes it from the pool? What setting, if any, controls this?
This MSDN document only says
The connection pooler removes a connection from the pool after it has been idle for a long time, or if the pooler detects that the connection with the server has been severed.
A few years ago the answer below described the situation, but it has since changed, so now you can refer to the source code and write up a summary :)
Old answer
This excellent article tells us what we need to know, using reflection to reveal the inner workings of connection pooling.
From how I understand it, 'closed' connections are cleaned up periodically on a semi-random interval. The cleanup process runs somewhere between every 2min and 3min 50s, but it needs to run twice before a 'closed' connection will be properly closed. Therefore after 7min 40s of being 'closed' the underlying sql connection should be properly closed, but it could be as short as 2min. At the time of writing the first connection pool created in a process would always have a timer interval of 3min 10s, so you'd normally see sql connections being closed somewhere between 3min 10s and 6min 20s after you call Close() on the ADO object.
Obviously this uses undocumented code so could change in future - or could even have changed since that article was written.
Please go through this:
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring%28VS.80%29.aspx
The part "The following table lists the valid names for connection pooling values within the ConnectionString." seems to be of interest to you.
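For reference, the pooling-related keywords in that table include Pooling, Min Pool Size, Max Pool Size and Connection Lifetime. A sketch of how they fit into a connection string; server and database names are placeholders, and the values shown are the documented defaults:

Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=True;Pooling=True;Min Pool Size=0;Max Pool Size=100;Connection Lifetime=0

Note that Connection Lifetime caps the total age of a pooled connection (checked when it is returned to the pool) rather than the idle timeout asked about here; as far as I know, the idle cleanup interval itself is not exposed as a connection string keyword.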