Please explain in detail the pool properties in Tomcat 7 listed below, with examples.
What is the use of keeping connections idle?
setMinIdle()
setMaxIdle()
setMaxActive()
setInitialSize()
Considering Apache Tomcat
setMinIdle()
The minimum number of established connections that should be kept in
the pool at all times. The pool can shrink below this number if
validation queries fail and connections are closed. The default value
is derived from getInitialSize() (see also setTestWhileIdle(boolean)).
The idle pool will not shrink below this value during an eviction run,
so the actual number of connections ranges between getMinIdle() and
somewhere between getMaxIdle() and getMaxActive().
setMaxIdle()
The maximum number of connections that should be kept in the idle pool
if isPoolSweeperEnabled() returns false. If isPoolSweeperEnabled()
returns true, then the idle pool can grow up to getMaxActive() and is
shrunk according to the getMinEvictableIdleTimeMillis() setting. The
default value is maxActive:100.
setMaxActive()
The maximum number of active connections that can be allocated from
this pool at the same time. The default value is 100
setInitialSize()
Sets the number of connections that will be established when the
connection pool is started. The default value is 10. If this value
exceeds setMaxActive(int), it will automatically be lowered.
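As an illustration, these four properties map directly onto attributes of a Tomcat 7 JDBC pool resource definition. Below is a minimal sketch of a context.xml Resource entry; the resource name, driver class, URL, and credentials are placeholders, not values from the question:

```xml
<Resource name="jdbc/ExampleDB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/exampledb"
          username="dbuser"
          password="dbpass"
          initialSize="10"
          maxActive="100"
          maxIdle="30"
          minIdle="10"
          testWhileIdle="true"
          validationQuery="SELECT 1"
          minEvictableIdleTimeMillis="60000"/>
```

With this configuration the pool opens 10 connections at startup (initialSize), never hands out more than 100 at once (maxActive), trims the idle pool back toward 30 during eviction runs (maxIdle), and keeps at least 10 connections alive (minIdle) as long as they pass validation. Keeping connections idle avoids paying the TCP and authentication handshake cost on every request.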
Related
#golang #oracle
I'm trying to understand how the max connections setting works. Basically I have this db configuration:
params.MinSessions = 5
params.MaxSessions = 6
params.SessionTimeout = 0
params.WaitTimeout = 5 * time.Second
params.SessionIncrement = 0
params.ConnClass = "GOLANGPOOL"
// Connect!
result, err := sql.Open("godror", params.StringWithPassword())
result.SetMaxIdleConns(0)
However I can see 242 connections using sql.DB.Stats:
DB Established Open Conn (use + idle): 242
DB Idle Conn: 0
DB In Use Conn: 242
DB Max Idle Closed: 766
DB Max Idle Time Closed: 0
DB Max Lifetime Closed: 0
DB Max Open Conn: 0
DB Wait Count: 0
DB Wait Duration (sec): 0
How is it possible? The limit shouldn't be 6?
Thanks
In Oracle connections and sessions are different concepts.
A connection is a network connection to the DB, while a session is an
encapsulation of a user's interaction with the DB...
referring to this book, and Relation between Oracle session and connection pool.
Assuming you are using the latest version of the driver, https://github.com/godror/godror,
sql.Open("godror", params.StringWithPassword())
implies the standaloneConnection=0 setting.
The stats you are seeing come from the Go database/sql connection pool. database/sql calls the driver's connect method, which in turn tries to get a connection from another pool (maintained by OCI because of the standaloneConnection=0 setting).
The maximum number of outbound connections hasn't exceeded params.MaxSessions; what you are seeing is just the database/sql connection counter numOpen, ....
Ideally, tune the database/sql pool settings to match the driver pool's values, so that the two pools agree and the goroutines don't just block.
You can check the OCI pool stats using the GetPoolStats()
method from godror.Conn and confirm the real maximum number of outbound connections. Example here:
https://github.com/godror/godror/blob/main/z_test.go
DB In Use Conn: 242
DB Max Idle Closed: 766
The sum is almost 1000, which matches this default:
poolMaxSessions=1000
I think you don't have 242 simultaneous connections in use. You have a pool of connections, and the database will limit the number of simultaneous sessions.
You should check how the sql package handles it (it is open source!) and how the specific driver handles it (also open source!), and if necessary open an issue on the driver project:
https://github.com/godror/godror
I am getting this OperationalError periodically, probably when the application has been inactive or idle for long hours. On refreshing the page it vanishes. I am using an MSSQL pyodbc connection string ("mssql+pyodbc:///?odbc_connect= ...") in the FormHandlers and DbAuth of Gramex.
How can I keep the connection alive in Gramex?
Add pool_pre_ping and pool_recycle parameters.
pool_pre_ping normally emits SQL equivalent to "SELECT 1" each time a connection is checked out from the pool; if an error is raised that is detected as a "disconnect" situation, the connection is immediately recycled.
pool_recycle prevents the pool from using a particular connection that has passed a certain age.
For example:
engine = create_engine(connection_string, encoding='utf-8', pool_pre_ping=True, pool_recycle=3600)
Alternatively, you can add these parameters for FormHandler in gramex.yaml. This is required only for the first FormHandler with the connection string.
kwargs:
    url: ...
    table: ...
    pool_pre_ping: True
    pool_recycle: 60
In the link agent, I came across attributes like maxPropagationDelay and reservationGuardTime. What is the role of these attributes? Where can I find more information about them?
These are parameters of specific LINK protocols.
maxPropagationDelay is used to determine timeouts based on expected round-trip times in the network. If the network is small enough for a single-hop connection between any pair of nodes, set it to a value that depends on the geographical size of your network; otherwise, set it based on the maximum communication range of your modem.
reservationGuardTime is a small extra time that a channel is reserved for, to allow for practical timing jitter of modems. Usually the default value for this provided by the agent will be good enough for most purposes.
The Underwater Networks Handbook, to be released with the upcoming version of UnetStack3, will provide a lot more guidance on many of these parameters and on how to set up various types of networks using UnetStack.
You can access more information about any parameters of any of the Agents in UnetStack using the help command. For the Link Agent, you'll see this in UnetStack 1.4.
> help link
link - access to link agent
Examples:
link // access parameters
link.maxRetries = 5 // set maximum retries for reliable delivery
link << new DatagramReq(to: 2, data: [1,2,3], reliability: true)
// send reliable datagram
Parameters:
MTU - maximum data transfer size
maxRetries - maximum retries for reliable delivery
reservationGuardTime - guard period (s)
maxPropagationDelay - maximum propagation delay (s)
dataChannel - channel to use for data frames (0 = control, 1 = data)
reservationGuardTime is the additional guard time added to the frame duration when reserving a channel (using MAC), to ensure channel reservations have some delay between them so the nodes are able to react.
maxPropagationDelay is used to estimate the maximum time that an acknowledgement to a request (or a series of requests, if fragmentation is needed) might take, and is used to set timeouts for transmissions or to make channel reservations (if using a MAC). Depending on your simulation/setup, you can change this number to the longest one-way travel time between two nodes that can communicate.
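The choice of maxPropagationDelay comes down to simple arithmetic: one-way delay is range divided by the speed of sound in water. A small sketch, assuming a nominal sound speed of about 1500 m/s (the true value varies with temperature, salinity and depth):

```python
# Sketch: picking maxPropagationDelay for a single-hop underwater network.
# Assumption: nominal sound speed in water of ~1500 m/s.
SOUND_SPEED_M_PER_S = 1500.0

def max_propagation_delay(max_range_m: float) -> float:
    """Worst-case one-way acoustic propagation delay, in seconds."""
    return max_range_m / SOUND_SPEED_M_PER_S

# A network whose farthest communicating pair of nodes is 3 km apart:
print(max_propagation_delay(3000.0))  # 2.0
```

So for a 3 km network you would set maxPropagationDelay to about 2 s; timeouts derived from it then cover the round trip plus processing.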
I have a capacity test running on a S4 Azure database, and I am getting this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I have 500 "users" hitting my site. My connection string is this:
Server=tcp:database.net,1433;Initial Catalog=database-prd;Persist Security Info=False;User ID=username;Password=password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
I have checked my code and what I do is:
Using "using"
using (SqlConnection connection = new SqlConnection(_connectionString))
{
connection.Open();
//... logic
}
Scoped repository
serviceCollection.AddScoped<IRepository, SqlServerRepository>();
I am now thinking of the default Max Pool Size. I haven't set it in the connection string. Should I? I have an S4, and its limits are:
Max concurrent sessions: 4800
Max concurrent Workers (requests): 200
according to this: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-dtu-resource-limits-single-databases#standard-service-tier-continued
What should I set the pool size to? Does it even matter? As I understand it, "Max Pool Size" is a client-side setting and defaults to 100. I could try raising it to maybe 500 or 800.
Where it maxes out is on some pretty simple selects:
select p1,p2,p3 from baskets where Id=1234
and the same for the lines. Not too complex. The only complex query I have has 4 or 5 joins, but it is not hit that much.
Does anyone here have some pointers on Max Pool Size? Does it even matter?
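For reference, Max Pool Size is indeed client-side, applies per distinct connection string, and defaults to 100 in ADO.NET. If you decide to raise it, it goes into the connection string itself; the value of 200 below is only an illustration (it matches the S4 worker limit quoted above, not a recommendation):

```text
Server=tcp:database.net,1433;Initial Catalog=database-prd;Persist Security Info=False;User ID=username;Password=password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=200;
```

Note that raising the client pool beyond the server's Max concurrent workers only moves the waiting from the client pool to the server, so it is worth confirming the queries themselves are fast before enlarging the pool.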
According to the Push queue documentation in GAE, there are a number of task request headers:
X-AppEngine-QueueName, the name of the queue (possibly default)
X-AppEngine-TaskName, the name of the task, or a system-generated unique ID if no name was specified
X-AppEngine-TaskRetryCount, the number of times this task has been retried; for the first attempt, this value is 0. This number includes attempts where the task failed due to a lack of available instances and never reached the execution phase.
X-AppEngine-TaskExecutionCount, the number of times this task has previously failed during the execution phase. This number does not include failures due to a lack of available instances.
X-AppEngine-TaskETA, the target execution time of the task, specified in milliseconds since January 1st 1970.
Is there a way to check how many tasks are already enqueued?
Not from the headers, no. But you can use the QueueStatistics class to query that information.