Making a SQL connection is expensive and slow, so we use concepts like Connection Pools in 3-tier applications.
When using an Azure Function that accesses a SQL database, we have to connect to the database and then execute our logic. Doesn't this make Azure Functions really slow? Doesn't this kill database performance by overusing connections?
Is there a way to use a reusable connection pool in Azure functions?
No, you get connection pooling on Azure Functions, similar to what you get in a "normal" App Service. Function instances are not recreated for each call; multiple subsequent invocations may be served by the same instance, and each App Service Plan instance has its own connection pool.
Of course, if you are under very high load and numerous instances are running in parallel, they will all hit your database at the same time; there is no cross-instance pooling.
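As a rough illustration (the function name, table and app setting below are invented, and this assumes the in-process C# model), an HTTP-triggered function can simply open a SqlConnection per invocation; on a warm instance, Open() usually just leases an already-established connection from that instance's pool:

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Data.SqlClient;

    public static class GetOrderCountFunction
    {
        [FunctionName("GetOrderCount")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
        {
            var connStr = Environment.GetEnvironmentVariable("SqlConnectionString");

            // Opening and closing per invocation is fine: Dispose() returns the
            // physical connection to this instance's pool instead of tearing it down.
            using var conn = new SqlConnection(connStr);
            await conn.OpenAsync();

            using var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn);
            var count = (int)await cmd.ExecuteScalarAsync();
            return new OkObjectResult(count);
        }
    }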
Related
I have a .NET 6 Web API that is hosted on server A. SQL Server is on server B. Both servers are in the same network.
Each endpoint of the Web API makes use of the Entity Framework to query data from the database.
I wanted to enable pooling at the Entity Framework level so that connections are reused. But I'm reading that SQL Server has its own pool of connections anyway. Link: https://learn.microsoft.com/en-us/ef/core/performance/advanced-performance-topics?tabs=with-di%2Cwith-constant#dbcontext-pooling
Note that DbContext pooling is orthogonal to database connection pooling, which is managed at a lower level in the database driver.
So I want to ask - What is the difference between pooling at Entity Framework vs SQL Server level?
I wanted to enable pooling at the Entity Framework level so that connections are reused
Entity Framework doesn't get involved at the "connections are reused" level. Pooling in that regard is the process by which ADO.NET forges e.g. a TCP connection to a database (a relatively long and resource-intensive operation) and keeps it open. When your old-school code like using var conn = new SqlConnection("connstr here") calls conn.Open(), one of these already-connected connections is leased from the pool and handed to you; you do some queries and then Close (or Dispose, which closes) the connection. That doesn't actually disconnect from the database; it just returns the connection to the pool.
As noted, EF doesn't get involved in this; connection pooling has been a thing since long before EF was invented and is active by default unless you've turned it off specifically. EF uses ADO.NET connections like any other app, so it already benefits from connection pooling passively.
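To make that concrete, here is a minimal sketch (connection string and query are made up): the first Open() pays the physical connection cost, and later iterations reuse the connection that Dispose() handed back to the pool.

    using Microsoft.Data.SqlClient; // System.Data.SqlClient behaves the same way

    string connStr = "Server=.;Database=MyDb;Integrated Security=true;"; // placeholder

    for (int i = 0; i < 3; i++)
    {
        // Open() leases an already-connected physical connection from the pool;
        // only the very first iteration pays the TCP/login cost.
        using var conn = new SqlConnection(connStr);
        conn.Open();

        using var cmd = new SqlCommand("SELECT 1", conn);
        cmd.ExecuteScalar();
    } // Dispose() here returns the connection to the pool; it does not disconnect.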
The article you're reading is about a different set of things being pooled. Typically a DbContext is a lightweight, short-lived thing that represents a device that forms queries to a database; you're supposed to make one, use it for a few queries and then throw it away.
It's designed for fast creation, but that doesn't mean there aren't minor improvements to be had. If you've identified a situation where you need to wring every last drop of performance out of the system, EF offers a way to dispense with the typical "throw it away and make another" route of ensuring a fresh DbContext: it provides a facility for DbContexts to be pooled, so rather than being made anew they are reset and reused.
It's unlikely you're in a place where pooling your contexts would make a significant difference. You're asking about enabling connection pooling, but in all likelihood it's already enabled; you would know if you'd put "Pooling=false" in a connection string.
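If you do want to try DbContext pooling anyway, it's opt-in at registration time. A minimal sketch, assuming a hypothetical OrdersContext and a connection string named "Default" (neither is from your post):

    using Microsoft.EntityFrameworkCore;

    var builder = WebApplication.CreateBuilder(args);

    // Swap AddDbContext for AddDbContextPool. Usage is unchanged; when a context is
    // disposed at the end of a request it is reset and returned to the DbContext pool.
    // ADO.NET connection pooling still happens underneath.
    builder.Services.AddDbContextPool<OrdersContext>(options =>
        options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

    var app = builder.Build();
    app.Run();

    // OrdersContext and Order are placeholders for illustration.
    public class Order { public int Id { get; set; } }

    public class OrdersContext : DbContext
    {
        public OrdersContext(DbContextOptions<OrdersContext> options) : base(options) { }
        public DbSet<Order> Orders { get; set; }
    }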
For more info on connection pooling, see https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling
I have a .NET Core app which is deployed as a WebJob on Azure. It listens to a topic and, according to what it reads, performs CRUD operations on a SQL Server database (the app uses EF Core for that).
The thing is that, as the application runs, the number of open connections increases, and most of them are not used for a long time (even for days).
Is there a way to make the app not create so many sleeping connections?
I have tried running my app locally, using a local SQL Server (Express) database. When I ran it, it only kept about 3 connections open (with the same number of messages handled as when it is deployed as a WebJob).
Is there a way to make the app not create so many sleeping connections?
Yes. Likely the current behavior is fine, and it isn't creating too many sleeping connections. But if you want to change the connection pooling behavior, you can:
The ConnectionString property of the SqlConnection object supports connection string key/value pairs that can be used to adjust the behavior of the connection pooling logic. For more information, see ConnectionString.
SQL Server Connection Pooling
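As a rough sketch of what that looks like (the server, database and values are placeholders, not recommendations), the pool can be bounded and idle connections aged out via connection string keywords such as Min Pool Size, Max Pool Size and Connection Lifetime:

    using Microsoft.Data.SqlClient;

    // Illustrative values only; tune them for your workload.
    var connStr =
        "Server=tcp:myserver.database.windows.net;Database=MyDb;" +
        "User ID=app;Password=<secret>;" +
        "Min Pool Size=0;" +          // don't keep connections around when idle
        "Max Pool Size=30;" +         // cap the number of physical connections
        "Connection Lifetime=300;";   // retire pooled connections after ~5 minutes

    using var conn = new SqlConnection(connStr);
    conn.Open(); // pooled according to the settings above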
In SQL Server, what is ENABLE_BROKER?
What is the risk?
SqlTableDependency requires it.
1) https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/sql-server-service-broker?view=sql-server-2017
SQL Server Service Broker provides native support for messaging and queuing applications in the SQL Server Database Engine. This makes it easier for developers to create sophisticated applications that use the Database Engine components to communicate between disparate databases. Developers can use Service Broker to easily build distributed and reliable applications.
2) https://www.sqlservercentral.com/Forums/Topic818423-146-1.aspx
One risk being that if there are already Service Brokers set up to use that DB, they will probably break, and any live connections will be killed and rolled back.
3) Sql Dependency with Service Broker
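For reference, enabling the broker is a one-time T-SQL change. A hedged sketch of running it from C# (the database name and connection string are placeholders); the WITH ROLLBACK IMMEDIATE option is exactly the risk the second link describes, since it drops live sessions and rolls back their transactions so the ALTER can take the exclusive lock it needs:

    using Microsoft.Data.SqlClient;

    // Placeholder connection string; the login needs ALTER permission on the database.
    var connStr = "Server=.;Database=master;Integrated Security=true;";

    using var conn = new SqlConnection(connStr);
    conn.Open();

    // ROLLBACK IMMEDIATE kicks existing sessions off MyDb so the exclusive lock can
    // be taken - this is the "live connections will be killed" risk quoted above.
    using var cmd = new SqlCommand(
        "ALTER DATABASE [MyDb] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;", conn);
    cmd.ExecuteNonQuery();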
It's not strictly necessary, but if you want to be notified of changes you need it; otherwise you can use another approach such as periodic polling.
Be careful using the SqlDependency class to monitor changes in database tables - it has problems with memory leaks.
I hope it helps!
Hypothetical scenario:
I have a database server that has significantly more RAM/CPU than could possibly be used in its current system. Connecting an application server to it, would I get better performance using pooling, with multiple connections that each handle smaller executions, or a single connection with a larger execution?
More importantly, why? I'm having trouble finding any reference material to pull me one way or the other.
I always vote for connection pooling for a couple of reasons.
The pool layer will deal with failures and grab a working connection when you need it.
You can service multiple requests concurrently by using different connections at the same time; a single connection will often block and queue up requests to the database (see the sketch after this list).
Establishing a connection to a database is expensive; pools can do this up front and in the background as needed.
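To make the concurrency point concrete, here is a small sketch (table name and connection string are invented): with pooling, each task leases its own connection from the pool and the queries can run against the server in parallel, whereas funnelling everything through one shared SqlConnection serializes the work, since a SqlConnection runs one command at a time.

    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.Data.SqlClient;

    string connStr = "Server=.;Database=MyDb;Integrated Security=true;"; // placeholder

    // Each task "opens" its own connection, but Open() just leases from the shared
    // pool, so the five queries can execute concurrently.
    var tasks = Enumerable.Range(0, 5).Select(async i =>
    {
        using var conn = new SqlConnection(connStr);
        await conn.OpenAsync();
        using var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn);
        return (int)await cmd.ExecuteScalarAsync();
    });

    int[] counts = await Task.WhenAll(tasks);
    Console.WriteLine(string.Join(", ", counts));
    // With a single shared SqlConnection the same work would run one command after
    // another (and concurrent use would throw unless MARS is enabled).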
There's also a handy discussion in this answer.
I understand that a typical .NET application that accesses a(n SQL Server) database doesn't have to do anything in particular in order to benefit from the connection pooling. Even if an application repeatedly opens and closes database connections, they do get pooled by the framework (assuming that things such as credentials do not change from call to call).
My usage scenario seems to be a bit different. When my service gets instantiated, it opens a database connection once, does some work, closes the connection and returns the result. Then it gets torn down by the WCF, and the next incoming call creates a new instance of the service.
In other words, my service gets instantiated per client call, as in [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]. The service accesses an SQL Server 2008 database. I'm using .NET framework 3.5 SP1.
Does the connection pooling still work in this scenario, or do I need to roll my own connection pool in the form of a singleton or by some other means (IInstanceContextProvider?)? I would rather avoid reinventing the wheel, if possible.
A typical WCF application that accesses a(n SQL Server) database doesn't have to do anything in particular in order to benefit from connection pooling. Even if an application repeatedly opens and closes database connections, they do get pooled by the framework (assuming that things such as credentials do not change from call to call).
The service instancing model creates and tears down an instance of your class, not an entire appdomain. The SqlClient connection pool is per AppDomain, so you'll get your free lunch.
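So a per-call service written the usual way still gets pooling for free. A minimal sketch (IOrderService, the table and the "Default" connection string name are all invented here): each call news up a SqlConnection, but Open() draws on the AppDomain-wide pool.

    using System.Configuration;
    using System.Data.SqlClient;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        int GetOrderCount();
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class OrderService : IOrderService
    {
        public int GetOrderCount()
        {
            // A new service instance per call, but the pool lives at AppDomain level,
            // so this Open() normally reuses an existing physical connection.
            using (var conn = new SqlConnection(
                ConfigurationManager.ConnectionStrings["Default"].ConnectionString))
            {
                conn.Open();
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
                {
                    return (int)cmd.ExecuteScalar();
                }
            }
        }
    }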
Even though this is an old post, I feel it is important to add to it.
ADO.NET database connection pooling does NOT work in per-call WCF services, if you follow the typical scenario (instantiating ADO.NET objects within the service object).
While I do understand the above theory and arguments, they are just that: theory.
A simple Windows Forms application which goes through the open, query, close steps multiple times will show you that the first Open() call takes quite long, such as 2 or 3 seconds, and subsequent calls and queries are fast - the effect of connection pooling.
If you put identical code into a per-call WCF service, you get the 2-3 second delay on EVERY CALL: the first call and all subsequent calls.
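For what it's worth, the measurement being described can be reproduced with something like this (a sketch; the connection string is a placeholder). If pooling is working, only the first iteration should show a noticeable Open() cost:

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    string connStr = "Server=.;Database=MyDb;Integrated Security=true;"; // placeholder

    for (int i = 0; i < 5; i++)
    {
        var sw = Stopwatch.StartNew();
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open(); // slow only on the first pass if pooling is in effect
            new SqlCommand("SELECT 1", conn).ExecuteScalar();
        }
        Console.WriteLine("Call " + i + ": " + sw.ElapsedMilliseconds + " ms");
    }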
Conclusion - ADO.NET database connection pooling does NOT work in per-call WCF services if you do the typical ADO instantiation within the service.
You would have to instantiate ADO objects in a custom service host and add appropriate synchronization code if needed, or live without database connection pooling.