Why does my app keep too many sleeping connections open? - sql-server

I have a .net core app which is deployed as a Webjob on Azure. It listens to a topic and according to what it reads it performs CRUD operations on a SQL Server Database (the App uses EF core for that).
The thing is that, as the application runs, the number of open connections increases, and most of them go unused for a long time (even for days).
Is there a way to keep the app from creating so many sleeping connections?
I have tried running my app locally against a local SQL Server (Express) database. When I ran it, it only kept about 3 connections open (with the same number of messages handled as when it is deployed as a WebJob).

Is there a way to keep the app from creating so many sleeping connections?
Yes, although the current behavior is likely fine: those sleeping connections are the connection pool holding connections open for reuse, not a leak. But if you want to change the connection pooling behavior, you can:
The ConnectionString property of the SqlConnection object supports
connection string key/value pairs that can be used to adjust the
behavior of the connection pooling logic. For more information, see
ConnectionString.
SQL Server Connection Pooling
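For instance, a connection string along these lines (the server, database, and credentials are placeholders, not values from the question) caps the pool and retires idle connections sooner; these are standard SqlConnection pooling keywords, but verify the defaults against the ConnectionString documentation:

```
Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=app;Password=...;Pooling=true;Min Pool Size=0;Max Pool Size=20;Connection Lifetime=300
```

Connection Lifetime (in seconds) destroys a pooled connection that has outlived the limit when it is returned to the pool, which directly shortens how long "sleeping" connections linger.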

Related

What is the difference between pooling at Entity Framework vs SQL Server level?

I have a .NET 6 Web API that is hosted on server A. SQL Server is on server B. Both servers are in the same network.
Each endpoint of the Web API makes use of the Entity Framework to query data from the database.
I wanted to enable pooling at the Entity Framework level so that connections are reused. But I'm reading that SQL Server has its own pool of connections anyway. Link: https://learn.microsoft.com/en-us/ef/core/performance/advanced-performance-topics?tabs=with-di%2Cwith-constant#dbcontext-pooling
Note that DbContext pooling is orthogonal to database connection
pooling, which is managed at a lower level in the database driver.
So I want to ask - What is the difference between pooling at Entity Framework vs SQL Server level?
I wanted to enable pooling at the Entity Framework level so that connections are reused
Entity Framework doesn't get involved at the "connections are reused" level. Pooling in that regard is ADO.NET forging, e.g., a TCP connection to a database (a relatively slow, resource-intensive operation) and keeping it open. When old-school code like using var conn = new SqlConnection("connstr here") calls conn.Open(), one of these already-connected connections is leased from the pool and handed to you; you run some queries and then Close (or Dispose, which closes) the connection, but that doesn't actually disconnect from the database; it just returns the connection to the pool.
As noted, EF doesn't get involved in this; pooling has been a thing since long before EF was invented and is active by default unless you've specifically turned it off. EF uses ADO.NET connections like any other app, so it already benefits from connection pooling passively.
The article you're reading is about a different thing being pooled. Typically a DbContext is a lightweight, short-lived object that forms queries to a database; you're supposed to make one, use it for a few queries, and then throw it away.
It's designed for fast creation, but that doesn't mean there aren't minor improvements to be had. If you've identified a situation where you need to wring every last drop of performance out of the system, EF offers a way to dispense with the typical "throw it away and make another" route of ensuring you're using a fresh DbContext: a facility for DbContexts to be pooled, so that rather than being made anew they are reset and reused.
It's unlikely you're in a place where pooling your contexts would make a significant difference. You're asking about enabling connection pooling, but in all likelihood it's already enabled, because you would know if you'd put "Pooling=false" in a connection string.
For more info on connection pooling, see https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling
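To make the distinction concrete, here is a minimal sketch (the context type AppDbContext, the builder variable, and the connection string are placeholder assumptions, not code from the question): connection pooling needs nothing from you, while DbContext pooling is an explicit opt-in at registration time.

```csharp
// ADO.NET connection pooling is on by default; you would only ever
// turn it off explicitly: "Server=...;Database=...;Pooling=false"

// DbContext pooling is opt-in: register with AddDbContextPool instead of
// AddDbContext, and contexts are reset and reused rather than built anew.
builder.Services.AddDbContextPool<AppDbContext>(options =>
    options.UseSqlServer(connectionString));
```

The two are independent: either, both, or neither can be on, which is what the "orthogonal" wording in the docs quoted above means.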

How much CPU overhead does each SQL server connection add? Should I use one connection or multiple connections?

I am writing a server backend for an android app which needs to access a database. My backend application and the SQL server are on the same machine so I don't have to worry about traffic slowing down my application but I don't know how to get user data from the database in the most efficient way.
When a new user connects to the server, a new thread is started for managing that connection. The client sends a few packets for identification but eventually all clients need some online data every t seconds. This t is the same for every client. The clients don't need to request this data every t seconds, instead the server sends it to them. The data itself is on a SQL server and is updated every few seconds.
Now I want to know which one is better (for CPU performance of both my application and the SQL server itself):
1. Create a new connection to the SQL server for each thread and let the thread handle it.
2. Create only one connection to the SQL server and instead keep a list of clients that need online data. Fetch all the needed data from the SQL server every t seconds and then distribute it between the threads so that they can send it to the clients.
There will be at most 300 clients.
EDIT: To clarify, I'm writing my application using C++ and using SQLAPI++ for database connections and I'm not sure if this library actually uses connection pooling.
Still, even if I leave the connection management to the library, the question is: Should I let each thread execute its own commands or have one thread execute them all at once in the form of one command? Does it have a significant impact on performance? (my application or the SQL server)

Integrate Kafka with Odoo

I have an Odoo front end on an AWS EC2 instance, connected to PostgreSQL on the ElephantSQL site with a 15-concurrent-connection limit.
I want to make sure this connection limit will pose no problem, so I want to use Kafka to perform database writes instead of Odoo doing them directly, but I found no resources online to help me out.
Is your issue about Connection Pooling? PostgreSQL includes two implementations of DataSource for JDBC 2 and two for JDBC 3, as shown here.
dataSourceName (String): Every pooling DataSource must have a unique name.
initialConnections (int): The number of database connections to be created when the pool is initialized.
maxConnections (int): The maximum number of open database connections to allow. When more connections are requested, the caller will hang until a connection is returned to the pool.
The pooling implementations do not actually close connections when the client calls the close method, but instead return the connections to a pool of available connections for other clients to use. This avoids any overhead of repeatedly opening and closing connections, and allows a large number of clients to share a small number of database connections.
Additionally, you might want to investigate PgBouncer. PgBouncer is a stable, production-proven connection pooler for PostgreSQL; PostgreSQL itself doesn't even realise PgBouncer is present. PgBouncer can do load handling, connection scheduling/routing, and load balancing. Read more from this blog that shows how to integrate it with Odoo. There are a lot of references from that page.
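As a sketch of what that looks like (the host, database name, and limits below are placeholder assumptions, not values from the question), a minimal pgbouncer.ini puts PgBouncer between Odoo and ElephantSQL and keeps the real server connections under the 15-connection cap:

```ini
[databases]
; Odoo connects to PgBouncer; PgBouncer connects onward to ElephantSQL
odoo = host=your-elephantsql-host.db.elephantsql.com port=5432 dbname=odoo

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = session        ; Odoo generally needs session pooling
max_client_conn = 100      ; client connections PgBouncer will accept
default_pool_size = 10     ; actual server connections, below the 15 limit
```

Odoo is then pointed at port 6432 instead of at the database directly.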
Finally, I would second OneCricketeer's comment, above, on using Amazon RDS, unless you are getting a far better deal with ElephantSQL.
On using Kafka, you have to realise that Odoo is a frontend application that is synchronous with user actions, so you cannot architecturally have a functional system if you put Kafka between Odoo and the database. You would input data and then see it appear minutes later. I exaggerate, but if that is what you really want to do, then by all means invest the time and effort.
Read more from Confluent, the team behind Kafka that came out of LinkedIn, on how they use a solution called Bottled Water to do some cool streams over PostgreSQL; that should be closer to what you want to do.
Do let us know which option you selected and what worked! Keep the community informed.

Is there any restriction on concurrent SQL Server connections posed by IIS server?

Is there any limit on the no. of simultaneous SQL connections by IIS server?
Or is it solely dependent on the memory available?
Or is there any other factor which would impose any restriction on the no. of concurrent SQL connections?
This is controlled via connection pooling, which you can configure through web.config:
http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
If the .NET code running in your web application is properly designed to open connections late and close them immediately after use, the .NET connection pool will handle everything for you transparently.
It is my understanding that if you always use the same connection string, only one connection pool will be created and managed, and this is probably the best way to do it.
Always avoid keeping any SqlConnection object open instead of closing or disposing it immediately.
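The open-late/close-early advice above amounts to this pattern (the query, table name, and connection string are placeholders; this is a sketch, not code from the question):

```csharp
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
{
    conn.Open();                          // leases an existing pooled connection
    var count = (int)cmd.ExecuteScalar(); // do the work
}   // Dispose returns the connection to the pool; nothing is physically closed
```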
Yes there is; see the specs here.
It's 32,767 (SQL Server's maximum number of user connections). If you are hitting it, you're probably not disposing connections.

Connection pooling for a rich client accessing a database directly

I have a legacy WinForms application that connects directly to a SQL Server 2005 database.
There are many client applications open at the same time (several hundreds), so I want to minimize the number of connections to the database.
I can release connections early and often, and keep the timeout value low.
Are there other things I need to consider?
Try to use the same connection string whenever you create a new connection, so .NET will use one connection pool.
Dispose of your connections as soon as possible.
You can set Max Pool Size in the connection string itself to cap the number of active connections.
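Note that the pool is per client process: with several hundred WinForms clients, the database can see up to (clients × per-client pool size) connections. A connection string along these lines (server and database names are placeholders) keeps each client's footprint small and retires idle connections quickly:

```
Server=MyServer;Database=LegacyDb;Integrated Security=true;Max Pool Size=5;Connection Lifetime=60
```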
You should consider introducing a connection pool.
In the Java world you usually get this "for free" with an application server; however, that would be oversized if all you care about is database connection pooling.
The general idea is to have one server-side process open a limited number of parallel connections to the database. You would do this in some sort of "proxy" application (a mini application server of sorts) and reuse the expensive-to-create database connections for the incoming connections from your app, which are cheaper to create and throw away.
Of course this requires some changes on the client side as well, so maybe it is not the ideal solution if you cannot accept that as a precondition.