The docs describe avg_wait_time as:
Time spent by clients waiting for a server in microseconds (average per second).
We see occasional spikes in avg_wait_time (normally it's 0). During those spikes, best I can tell, there are available/idle servers. What could be causing the wait time to be greater than 0 in these cases?
Reading the flow described on Hackernoon suggests that your connection pools are exhausted, and new connections need to wait until a free spot becomes available, either to connect to a pool or to reach the executing phase.
These server connections, that clients get linked to, are "pooled" (limited in number and reused). Because of that it might happen that while a client sends some request (beginning a transaction or performing a query) the corresponding server connection pool is exhausted, i.e. pgbouncer has opened as many connections as it was allowed and all of them are occupied by (linked to) other clients. PgBouncer in this scenario puts the client into a queue, and this client's connection goes into the CL_WAITING state. This can happen while a client is merely logging in as well, so there's a CL_WAITING_LOGIN state for that too.
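You can confirm this from pgbouncer's admin console: SHOW POOLS reports cl_waiting (clients queued for a server) and maxwait (the longest current wait), the same waits that avg_wait_time averages. A minimal sketch over JDBC, assuming the admin console listens on port 6432 and your stats user may connect to the special pgbouncer database (preferQueryMode=simple is needed because the console does not support prepared statements; credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowPools {
    public static void main(String[] args) throws Exception {
        // pgbouncer's admin console speaks the Postgres protocol on its listen port
        String url = "jdbc:postgresql://localhost:6432/pgbouncer?preferQueryMode=simple";
        try (Connection conn = DriverManager.getConnection(url, "stats_user", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW POOLS")) {
            while (rs.next()) {
                // cl_waiting > 0 means clients are queued for this pool, even if
                // other pools currently show idle servers
                System.out.printf("db=%s user=%s cl_active=%s cl_waiting=%s sv_idle=%s maxwait=%s%n",
                        rs.getString("database"), rs.getString("user"),
                        rs.getString("cl_active"), rs.getString("cl_waiting"),
                        rs.getString("sv_idle"), rs.getString("maxwait"));
            }
        }
    }
}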
I am using a SQL Server database with Node.js, with a connection pool to perform various queries. When I run sp_who2, I can see that there are almost 20 processes with status "sleeping" and command "AWAITING COMMAND".
Should I go ahead and kill these processes? I read in another post that this happens when you create a transaction in SQL Server but do not close / commit / rollback that transaction. I do not see any point in my application where I failed to commit or roll back a transaction on error, so I am not sure where these came from.
I have a feeling that leaving those processes there is going to cause query timeout issues in the future. Is there a way to see what query caused the sleeping / awaiting-command state?
I normally see many sleeping connections. I consider it normal. If you have sleeping connections with open transactions and locks, then you need to investigate. I would try to identify the host and PID holding the lock. In some cases the resolution is a polite talk with the person responsible for not closing their transaction.
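To identify those sessions programmatically instead of scanning sp_who2 by eye, here is a hedged sketch over JDBC against the dynamic management views (assumes SQL Server 2012 or later for open_transaction_count; the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SleepingSessions {
    public static void main(String[] args) throws Exception {
        // Sleeping sessions that still hold an open transaction, plus the last statement they ran
        String sql =
            "SELECT s.session_id, s.host_name, s.program_name, s.open_transaction_count, " +
            "       t.text AS last_statement " +
            "FROM sys.dm_exec_sessions s " +
            "JOIN sys.dm_exec_connections c ON c.session_id = s.session_id " +
            "CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t " +
            "WHERE s.status = 'sleeping' AND s.open_transaction_count > 0";
        String url = "jdbc:sqlserver://localhost;databaseName=master;encrypt=false"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "monitor_user", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("spid=%d host=%s app=%s open_tran=%d last=%s%n",
                        rs.getInt("session_id"), rs.getString("host_name"),
                        rs.getString("program_name"), rs.getInt("open_transaction_count"),
                        rs.getString("last_statement"));
            }
        }
    }
}

A sleeping session with open_transaction_count = 0 is just a pooled connection at rest and is harmless; the rows this query returns are the ones worth the polite talk.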
A connection pool is a pool of connections to SQL Server. They will be idle and sleeping unless they are in use. Generally, there is a timeout for the connections in the pool. (For example, if you look at the ODBC control panel, the connection pooling tab will generally show a 60 second timeout. It might also always keep a minimum number of idle connections.) Check if you have a minimum number of idle connections. Once you know your timeout, verify that the connections are timing out as expected...eventually. If not, I would look for a connection leak or a connection pool issue. Is the application releasing the connection when done? Does GC have to run before the connection goes away?
Years ago there was an issue where a connection could go back into the pool with an open transaction. It was not until the connection was being prepared for reuse that it was finally reset. This issue has been fixed.
Another past issue was a broken connection. For example, if the SQL Server was rebooted, all idle connections are broken. However, it was not until the connection was requested that this was checked. A connection failure timeout was required for each connection in the pool before it was replaced. This was a PITA.
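On the "is the application releasing the connection when done?" point, the classic leak is a code path that skips close(). A minimal sketch of the leak and the try-with-resources fix (ReportDao and runReport are hypothetical names; any pooled DataSource behaves the same):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

class ReportDao {
    private final DataSource pool;

    ReportDao(DataSource pool) { this.pool = pool; }

    // Leaky: if runReport throws, close() is never reached, and the connection
    // stays checked out until GC happens to finalize it (if ever).
    void leaky() throws SQLException {
        Connection conn = pool.getConnection();
        runReport(conn);
        conn.close();
    }

    // Safe: try-with-resources returns the connection to the pool on every path,
    // including exceptions.
    void safe() throws SQLException {
        try (Connection conn = pool.getConnection()) {
            runReport(conn);
        }
    }

    private void runReport(Connection conn) throws SQLException {
        // run the report queries here
    }
}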
in c3p0, maxIdleTime means:
maxIdleTime: (Default: 0) Seconds a Connection can remain pooled but unused before being discarded. Zero means idle connections never expire.
I don't get it. Let's say there are 10 idle connections in the pool, and after maxIdleTime has passed there are still no new database requests. Should all these connections then be discarded, leaving 0 connections in the pool?
If there are 10 Connections in the pool and no activity, then after maxIdleTime has passed, yes, they will all be expired.
But that does not mean there will be no Connections left in the pool. At the same time as the pool expires old Connections, it will begin to acquire new Connections from the DBMS in order to uphold the minPoolSize configuration parameter.
If you have kept a large pool size (i.e. greater than what you anticipate as 99th-percentile load), this will mean lots of idle connections lying around; it is better to clean them up periodically via the maxIdleTime and idleConnectionTestPeriod parameters.
You may also need to set idleConnectionTestPeriod and preferredTestQuery, to be sure connections are still valid.
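Putting the parameters from both answers together, a minimal c3p0 configuration sketch (the driver class, URL, credentials, and the specific values are illustrative, not recommendations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0Config {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("oracle.jdbc.OracleDriver");        // placeholder driver
        cpds.setJdbcUrl("jdbc:oracle:thin:@localhost:1521:XE"); // placeholder URL
        cpds.setUser("app");
        cpds.setPassword("secret");

        cpds.setMinPoolSize(3);               // floor the pool refills to after expiring idle Connections
        cpds.setMaxPoolSize(25);              // hard cap under load
        cpds.setMaxIdleTime(300);             // seconds unused before a Connection is discarded
        cpds.setIdleConnectionTestPeriod(60); // test idle Connections every 60 seconds
        cpds.setPreferredTestQuery("SELECT 1 FROM dual"); // cheap validity check (Oracle flavor)
    }
}

With this configuration, a burst of traffic can grow the pool to 25, and five minutes of quiet shrinks it back to 3.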
I hope there is some expert on C3P0 that can help me answer the following question.
First, here's the general problem I'm trying to solve. We have an application connected to a database. When the database goes down, requests start taking several seconds to be processed, as opposed to a few milliseconds. This is because C3P0 will attempt to create new connections to the database; the attempt will eventually time out and the request will be rejected.
I came up with a proposal to fix it. Before grabbing a connection from the pool, I'll query C3P0's APIs to see if there are any connections in the pool. If there are none, we'll immediately drop the request. This way, our latency should remain in the milliseconds, instead of waiting until the timeout occurs. This solution works because C3P0 is capable of removing connections if it detects that they've gone bad.
Now, I set up a test with the values for "setTestConnectionOnCheckin" and "setTestConnectionOnCheckout" as "false". According to my understanding, this would mean that C3P0 would not test a connection (or, let's say, a connection in use, because there's also the idleConnectionTestPeriod setting). However, when I run my test, immediately after shutting off the database, C3P0 detects it and removes the connections from the pool. To give you a clearer picture, here's the execution's result:
14:48:01 - Request processed successfully. Processing time: 5 ms.
14:48:02 - Request processed successfully. Processing time: 4 ms.
14:48:03 - (Database is shut down at this point).
14:48:04 - java.net.ConnectException.
14:48:05 - Request rejected. Processing time: 258 ms.
14:48:06 - Request rejected. Processing time: 1 ms.
14:48:07 - Request rejected. Processing time: 1 ms.
C3P0 apparently knew that the database went down and removed the connections from the pool. It probably took a while, because the very first request after the database was shut off took longer than the others. I have run this test several times and that single request can take from 1 ms up to 3.5 seconds (which is the timeout time). This entry appears as many times as the number of connections I have defined for my pool. I have omitted all the rest for simplicity.
I think it's great that C3P0 is capable of removing the connections from the pool right away (well, as quickly as 258 ms in the above example), but I'm having trouble explaining to other people why that works. If "setTestConnectionOnCheckin" and "setTestConnectionOnCheckout" are set to "false", how is C3P0 capable of knowing that a connection went bad?
Even if they were set to "true", testing a connection is supposed to attempt executing a query on the database (something like "select 1 + 1 from dual"). When the database goes down, shouldn't the test time out? In other words, shouldn't C3P0 take 3.5 seconds to determine that a connection has gone bad?
Thanks a lot, in advance.
(apologies... this'll be terse, i'm phonebound.)
1) even if no explicit Connection testing has been configured, c3p0 tests Connections that experience Exceptions while checked-out to determine whether they remain suitable for pooling.
2) a good JDBC driver will throw Exceptions quickly if the DBMS is unavailable. there's no reason why these internal Connection tests should be slow.
3) rather than polling for unused Connections to avoid waiting on checkouts / new acquisitions, you might consider just setting the config parameter checkoutTimeout.
good luck!
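To illustrate point 3, a sketch of the checkoutTimeout approach (the 1000 ms figure is illustrative, and cpds stands for your already-configured ComboPooledDataSource):

import java.sql.Connection;
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

class FailFast {
    static void configure(ComboPooledDataSource cpds) {
        // getConnection() throws instead of blocking longer than this (milliseconds)
        cpds.setCheckoutTimeout(1000);
    }

    static void handle(ComboPooledDataSource cpds) {
        try (Connection conn = cpds.getConnection()) {
            // normal request handling
        } catch (SQLException e) {
            // pool exhausted or database down: reject the request after at most ~1 second,
            // with no need to poll the pool's internal counters first
        }
    }
}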
If my connection pool size is set to 10, and 100 users hit a page that uses a database connection at almost the same time:
Then, in this case, will the 90 extra users have to wait for connections to become free?
OR
Will more connections be created for the 90 users, which are then not returned to the pool?
FYI: I know connection pooling and related concepts. The question relates to a page that generates large reports.
They will have to wait for a connection to be returned to the pool once the maximum of 10 has been reached, see: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
The connection pooler satisfies requests for connections by reallocating connections as they are released back into the pool. If the maximum pool size has been reached and no usable connection is available, the request is queued. The pooler then tries to reclaim any connections until the time-out is reached (the default is 15 seconds). If the pooler cannot satisfy the request before the connection times out, an exception is thrown.
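To see the queuing concretely, here is a hedged sketch using c3p0 as the pool (the JDBC URL and credentials are placeholders): 100 threads contend for a pool capped at 10, so at most 10 proceed at once and the other 90 block inside getConnection() until a connection is returned. Note that c3p0 waits indefinitely by default, whereas ADO.NET throws after the 15-second timeout quoted above.

import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolContention {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setJdbcUrl("jdbc:postgresql://localhost:5432/reports"); // placeholder
        cpds.setUser("app");
        cpds.setPassword("secret");
        cpds.setMaxPoolSize(10); // the cap from the question

        for (int i = 0; i < 100; i++) {
            final int user = i;
            new Thread(() -> {
                long start = System.currentTimeMillis();
                try (Connection conn = cpds.getConnection()) { // blocks once all 10 are checked out
                    System.out.printf("user %d got a connection after %d ms%n",
                            user, System.currentTimeMillis() - start);
                    Thread.sleep(500); // simulate a slow report query
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}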
I was wondering: what is maxPoolSize for? What is minPoolSize for?
How do I know which property value to use for my database?
EDITED
I am using Oracle 10g, connecting with Hibernate and bitronix on windows OS.
minPoolSize is the minimum number of database connections kept open at all times, even when none of them are in use. maxPoolSize on the other hand represents the maximum number of concurrent connections. Now:
Use minPoolSize to always keep a few connections open (even when idle) so that you don't have to wait for a new network connection to be established when a request arrives while the system is under low load. This is basically a set of connections waiting "for emergencies".
maxPoolSize is much more important. If the system is under heavy load and some request tries to open more than maxPoolSize connections, the connection pool will refuse, in turn causing the whole request to fail. On the other hand, setting this parameter to too high a value shifts the bottleneck from your application to the database server, as it has limited capacity.
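Since the question mentions Bitronix with Oracle, a minimal sketch of how the two parameters are set there (the unique name, XA class, URL, credentials, and sizes are illustrative):

import bitronix.tm.resource.jdbc.PoolingDataSource;

public class BitronixConfig {
    public static void main(String[] args) {
        PoolingDataSource ds = new PoolingDataSource();
        ds.setClassName("oracle.jdbc.xa.client.OracleXADataSource"); // Oracle XA driver class
        ds.setUniqueName("oracle-ds");
        ds.setMinPoolSize(2);  // connections kept open even when the system is idle
        ds.setMaxPoolSize(15); // hard cap on concurrent connections
        ds.getDriverProperties().setProperty("URL", "jdbc:oracle:thin:@localhost:1521:XE");
        ds.getDriverProperties().setProperty("user", "app");
        ds.getDriverProperties().setProperty("password", "secret");
        ds.init(); // opens minPoolSize connections up front
    }
}

Hibernate then consumes this pool as a regular JTA DataSource; sizing maxPoolSize is a trade-off between queueing in the application and overloading the Oracle server.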