Azure SQL - Timeout reached - sql-server

I have a capacity test running against an S4 Azure database, and I am getting this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I have 500 "users" hitting my site. My connection string is this:
Server=tcp:database.net,1433;Initial Catalog=database-prd;Persist Security Info=False;User ID=username;Password=password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
I have checked my code and what I do is:
Using "using"
using (SqlConnection connection = new SqlConnection(_connectionString))
{
    connection.Open();
    // ... logic
}
Scoped repository
serviceCollection.AddScoped<IRepository, SqlServerRepository>();
I am now thinking of the default Max Pool Size. I haven't set it on the connection string. Should I do this? I have an S4, and the properties are:
Max concurrent sessions: 4800
Max concurrent Workers (requests): 200
according to this: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-dtu-resource-limits-single-databases#standard-service-tier-continued
What should I set the pool size to? Does it even matter? As I understand it, "Max Pool Size" is a client-side setting and it defaults to 100. I could try raising it a bit, to maybe 500 or 800.
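If I set it, I assume the connection string would just need an extra Max Pool Size keyword, something like this (the value of 500 is only an example, not a recommendation):
Server=tcp:database.net,1433;Initial Catalog=database-prd;Persist Security Info=False;User ID=username;Password=password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=500;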
Where it maxes out is on some pretty simple selects:
select p1,p2,p3 from baskets where Id=1234
and the same for the lines. Not too complex. The only complex query I have has 4 or 5 joins, but it is not hit that much.
Does anyone here have some pointers on Max Pool Size? Does it even matter?

Related

Golang - many in use connections

#golang #oracle
I'm trying to understand how the max connections work. Basically I have this DB configuration:
params.MinSessions = 5
params.MaxSessions = 6
params.SessionTimeout = 0
params.WaitTimeout = 5 * time.Second
params.SessionIncrement = 0
params.ConnClass = "GOLANGPOOL"
// Connect!
result, err := sql.Open("godror", params.StringWithPassword())
result.SetMaxIdleConns(0)
However, I can see 242 connections using sql.DB.Stats():
DB Established Open Conn (use + idle): 242
DB Idle Conn: 0
DB In Use Conn: 242
DB Max Idle Closed: 766
DB Max Idle Time Closed: 0
DB Max Lifetime Closed: 0
DB Max Open Conn: 0
DB Wait Count: 0
DB Wait Duration (sec): 0
How is this possible? Shouldn't the limit be 6?
Thanks
In Oracle, connections and sessions are different concepts.
A connection is a network connection to the DB, while a session is an
encapsulation of a user's interaction with the DB...
referring to this book, and to "Relation between Oracle session and connection pool".
Assuming you are using the latest version of the driver, https://github.com/godror/godror,
sql.Open("godror", params.StringWithPassword())
implies the standaloneConnection=0 setting.
The stats you are seeing are from the Go database/sql connection pool. The database/sql package calls the driver's connect method, which in turn tries to get a connection from another pool (OCI maintains it because of the standaloneConnection=0 setting).
The maximum number of outbound connections hasn't exceeded params.MaxSessions; what you are seeing is just the database/sql connection counter (numOpen, ...).
Ideally, you should tune the database/sql pool settings closer to the OCI pool values so that goroutines don't simply block (see the sketch below).
You can check the OCI pool stats by using the GetPoolStats() method from godror.Conn to confirm the real maximum number of outbound connections. There is an example here:
https://github.com/godror/godror/blob/main/z_test.go
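As a rough sketch (the numbers simply mirror the pool settings from the question and are not a recommendation), you could cap the database/sql pool at the OCI pool size and then read the Go-side counters:
db, err := sql.Open("godror", params.StringWithPassword())
if err != nil {
    log.Fatal(err)
}
db.SetMaxOpenConns(6)                  // keep the database/sql pool within MaxSessions (6)
db.SetMaxIdleConns(5)                  // keep a few idle connections for reuse instead of closing them all
db.SetConnMaxIdleTime(5 * time.Minute) // let Go retire idle connections after a while
fmt.Printf("%+v\n", db.Stats())        // these counters describe the Go pool, not OCI sessions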
DB In Use Conn: 242
DB Max Idle Closed: 766
The sum is almost 1000, like the value of this default:
poolMaxSessions=1000
I think you don't have 242 simultaneous connections in use. You have a pool of connections, and the database will limit the number of simultaneous sessions.
You should check how the sql package handles it (it is open source!) and how the specific driver handles it (also open source!), and if necessary open an issue on the driver project:
https://github.com/godror/godror

Operational Error: An Existing connection was forcibly closed by the remote host. (10054)

I am getting this Operational Error periodically, probably when the application has been inactive or idle for long hours. On refreshing the page it vanishes. I am using an mssql+pyodbc connection string ("mssql+pyodbc:///?odbc_connect= ...") in the FormHandlers and DbAuth of Gramex.
How can I keep the connection alive in Gramex?
Screenshot of error
Add pool_pre_ping and pool_recycle parameters.
pool_pre_ping will normally emit SQL equivalent to "SELECT 1" each time a connection is checked out from the pool; if an error is raised that is detected as a "disconnect" situation, the connection will be immediately recycled.
pool_recycle prevents the pool from using a particular connection that has passed a certain age.
e.g. engine = create_engine(connection_string, encoding='utf-8', pool_pre_ping=True, pool_recycle=3600)
Alternatively, you can add these parameters for FormHandler in gramex.yaml. This is required only for the first FormHandler with the connection string.
kwargs:
    url: ...
    table: ...
    pool_pre_ping: True
    pool_recycle: 60

What causes SQL Server high network latency?

I am using this command https://docs.dbatools.io/#Test-DbaNetworkLatency to test network latency with SQL Server 2016, and it gives me a 100 ms network latency result (from the NetworkOnlyTotal output). However, if I ping the SQL Server instance I get only 11 ms. I wonder what causes the extra 90 ms of latency in SQL Server. Is it expected? Or what configuration should I look at?
I tried with the -Count parameter and found that NetworkOnlyTotal doesn't change much, and sometimes even drops. Does this value represent an average?
See the two examples below: one runs the query 1 time, while the other runs the query 10 times. The NetworkOnlyTotal result is actually better for the 10-query run. From its name, it looks like it should be the total time of the 10 requests. So why is the value dropping?
Test-DbaNetworkLatency -SqlCredential $credential -SqlInstance $instance -Count 1
output:
ExecutionCount : 1
Total : 141.55 ms
Average : 141.55 ms
ExecuteOnlyTotal : 69.13 ms
ExecuteOnlyAverage : 69.13 ms
NetworkOnlyTotal : 72.42 ms
Test-DbaNetworkLatency -SqlCredential $credential -SqlInstance $instance -Count 10
output:
ExecutionCount : 10
Total : 180.33 ms
Average : 18.03 ms
ExecuteOnlyTotal : 127.38 ms
ExecuteOnlyAverage : 12.74 ms
NetworkOnlyTotal : 52.95 ms
I wonder what causes the extra 90ms latency in SQL Server. Is it expected?
Probably the one-time connection stuff.
1) Establishing a TCP/IP session
2) Negotiating connection protocol encryption
3) Logging in and creating a session
Try a higher -Count. Establishing a connection and a session takes some time and shouldn't really be counted as "network latency", since clients will hold connections open and reuse them.
The description of the product indicates that "It will then output how long the entire connection and command took, as well as how long only the execution of the command took." Additionally, it says that it will execute the command three times. And the tool will need to take a little time to authenticate the connection with SQL Server. So, it seems reasonable to me.

Occasionally retrieving "connection timed out" errors when querying Postgresql

I get this error every so often when using sqlx with pgx, and I believe it's a configuration error on my end and a DB concept I'm not grasping:
error: 'write tcp [redacted-ip]:[redacted-port]->[redacted-ip]:[redacted-port]: write: connection timed out
This occurs when attempting to read from the database. I init sqlx in the startup phase:
package main

import (
    _ "github.com/jackc/pgx/stdlib"
    "github.com/jmoiron/sqlx"
)

// NewDB attempts to connect to the DB
func NewDB(connectionString string) (*sqlx.DB, error) {
    db, err := sqlx.Connect("pgx", connectionString)
    if err != nil {
        return nil, err
    }
    return db, nil
}
Any structs responsible for interacting with the database have access to this pointer. The majority of them utilize Select or Get, and I understand those return connections to the pool on their own. There are two functions that use Exec, and they only return the result and error at the end of the function.
Other Notes
My Postgres db supports 100 max_connections
I only showed a few active connections at the time of this error
I am not using SetMaxIdleConns or SetMaxOpenConns
Refreshing the page and triggering the request again always works
Any tips on what might be happening here?
EDIT: I did not mention this server is on compose.io, which in turn is hosted on AWS. Is it possible AWS turns these connections into zombies because they've been open for so long and the timeout occurs after unsuccessfully trying them one by one?
With the help of some rough calculations, I've set the maximum lifetime of these connections to 10 minutes. I inserted this code into the init function from the question above to limit the number of open connections, the number of idle connections, and the lifetime of each connection (shown here with 30 seconds).
db.SetConnMaxLifetime(time.Duration(30) * time.Second)
db.SetMaxOpenConns(20)
db.SetMaxIdleConns(20)
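Putting it together, a minimal sketch of the NewDB function from the question with these limits applied (the exact numbers are illustrative and should come from your own calculations):
// NewDB connects and applies conservative pool limits.
func NewDB(connectionString string) (*sqlx.DB, error) {
    db, err := sqlx.Connect("pgx", connectionString)
    if err != nil {
        return nil, err
    }
    db.SetConnMaxLifetime(10 * time.Minute) // recycle connections before idle TCP sessions go stale
    db.SetMaxOpenConns(20)                  // stay well below the Postgres max_connections of 100
    db.SetMaxIdleConns(20)                  // allow every open connection to be kept idle for reuse
    return db, nil
}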
Hopefully this helps someone else.
SELECT * FROM pg_stat_activity; is great for nailing down connections as well.

Difference between setMaxIdle, setMinIdle, setInitialSize and setMaxActive pool properties?

Please explain in detail the pool properties in Tomcat 7 mentioned below, with examples.
What is the use of keeping connections idle?
setMinIdle()
setMaxIdle()
setMaxActive()
setInitialSize()
Considering Apache Tomcat
setMinIdle()
The minimum number of established connections that should be kept in the pool at all times. The connection pool can shrink below this number if validation queries fail and connections get closed. The default value is derived from getInitialSize() (also see setTestWhileIdle(boolean)). The idle pool will not shrink below this value during an eviction run, hence the number of actual connections can be between getMinIdle() and somewhere between getMaxIdle() and getMaxActive().
setMaxIdle()
The maximum number of connections that should be kept in the idle pool if isPoolSweeperEnabled() returns false. If isPoolSweeperEnabled() returns true, then the idle pool can grow up to getMaxActive() and will be shrunk according to the getMinEvictableIdleTimeMillis() setting. Default value is maxActive:100.
setMaxActive()
The maximum number of active connections that can be allocated from
this pool at the same time. The default value is 100
setInitialSize()
Set the number of connections that will be established when the
connection pool is started. Default value is 10. If this value exceeds
setMaxActive(int) it will automatically be lowered.
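For example, here is a minimal sketch of how these properties map onto a Tomcat 7 jdbc-pool Resource in context.xml (the resource name, driver, URL and credentials are placeholders):
<Resource name="jdbc/ExampleDB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/example"
          username="dbuser"
          password="dbpass"
          initialSize="10"
          minIdle="10"
          maxIdle="50"
          maxActive="100"
          testWhileIdle="true"
          minEvictableIdleTimeMillis="60000"/>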
