We have a Java server connecting to a MySQL 5 database using Hibernate as our persistence layer, which uses c3p0 for DB connection pooling.
I've tried following the c3p0 and hibernate documentation:
Hibernate - HowTo Configure c3p0 connection pool
C3P0 Hibernate properties
C3P0.properties configuration
We're getting an error on our production servers stating that:
... Caused by:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException:
No operations allowed after connection
closed. Connection was implicitly
closed due to underlying
exception/error:
BEGIN NESTED EXCEPTION
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException
MESSAGE: The last packet successfully
received from the server was 45000
seconds ago. The last packet sent
successfully to the server was 45000
seconds ago, which is longer than the
server configured value of
'wait_timeout'. You should consider
either expiring and/or testing
connection validity before use in your
application, increasing the server
configured values for client timeouts,
or using the Connector/J connection
property 'autoReconnect=true' to avoid
this problem.
STACKTRACE:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
The last packet successfully received
from the server was 45000 seconds
ago. The last packet sent successfully
to the server was 45000 seconds ago,
which is longer than the server
configured value of 'wait_timeout'.
You should consider either expiring
and/or testing connection validity
before use in your application,
increasing the server configured
values for client timeouts, or using
the Connector/J connection property
'autoReconnect=true' to avoid this
problem.
We have our c3p0 connection pool properties set up as follows:
hibernate.c3p0.max_size=10
hibernate.c3p0.min_size=1
hibernate.c3p0.timeout=5000
hibernate.c3p0.idle_test_period=300
hibernate.c3p0.max_statements=100
hibernate.c3p0.acquire_increment=2
The default MySQL wait_timeout is set to 28800 seconds (8 hours), and the reported error says it's been over 45000 seconds (about 12.5 hours). However, the c3p0 configuration states that it will "timeout" idle connections that haven't been used after 5000 seconds, and it will check every 300 seconds, so an idle connection should never live longer than 5299 seconds, right?
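(For reference, here is roughly the equivalent programmatic c3p0 setup. This is only a sketch using com.mchange.v2.c3p0.ComboPooledDataSource, with a placeholder driver class, URL and credentials, not our actual production code.)
import java.beans.PropertyVetoException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfigSketch {
    public static ComboPooledDataSource buildPool() throws PropertyVetoException {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");        // Connector/J 5.x driver (placeholder)
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");  // placeholder URL
        cpds.setUser("user");                                 // placeholder credentials
        cpds.setPassword("password");

        // Mirrors the hibernate.c3p0.* properties above
        cpds.setMinPoolSize(1);
        cpds.setMaxPoolSize(10);
        cpds.setAcquireIncrement(2);
        cpds.setMaxStatements(100);
        cpds.setMaxIdleTime(5000);              // hibernate.c3p0.timeout, in seconds
        cpds.setIdleConnectionTestPeriod(300);  // hibernate.c3p0.idle_test_period, in seconds

        // Extra safety so idle connections cannot outlive MySQL's wait_timeout
        cpds.setPreferredTestQuery("SELECT 1");
        cpds.setTestConnectionOnCheckout(true);
        return cpds;
    }
}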
I've tested locally by setting my development MySQL wait_timeout=60 (in my.ini on Windows, my.cnf on Unix) and lowering the c3p0 idle timeout values below 60 seconds, and it properly times out idle connections and creates new ones. I also checked to ensure that we're not leaking DB connections and holding onto a connection, and it doesn't appear we are.
Here are the hibernate.properties and c3p0.properties files I'm using to test in my development environment to ensure c3p0 is properly handling connections.
hibernate.properties (testing with MySQL wait_timeout=60)
hibernate.c3p0.max_size=10
hibernate.c3p0.min_size=1
hibernate.c3p0.timeout=20
hibernate.c3p0.max_statements=100
hibernate.c3p0.idle_test_period=5
hibernate.c3p0.acquire_increment=2
c3p0.properties
com.mchange.v2.log.FallbackMLog.DEFAULT_CUTOFF_LEVEL=ALL
com.mchange.v2.log.MLog=com.mchange.v2.log.FallbackMLog
c3p0.debugUnreturnedConnectionStackTraces=true
c3p0.unreturnedConnectionTimeout=10
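(To watch the pool retire idle connections outside of Hibernate, I use something like the following throwaway loop. It's only a sketch: the driver, URL and credentials are placeholders, and the sleep is sized for the wait_timeout=60 test setup.)
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class IdleTimeoutCheck {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");           // placeholder driver/URL/credentials
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/testdb");
        cpds.setUser("user");
        cpds.setPassword("password");
        cpds.setMaxIdleTime(20);                // matches hibernate.c3p0.timeout above
        cpds.setIdleConnectionTestPeriod(5);    // matches hibernate.c3p0.idle_test_period above

        for (int i = 0; i < 3; i++) {
            try (Connection con = cpds.getConnection();
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("query ok, busy=" + cpds.getNumBusyConnectionsDefaultUser()
                        + ", idle=" + cpds.getNumIdleConnectionsDefaultUser());
            }
            // Sleep past wait_timeout=60 and the c3p0 idle settings; the next
            // iteration should get a freshly acquired connection, not a stale one.
            Thread.sleep(90000);
        }
        cpds.close();
    }
}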
Make sure that c3p0 really is starting by examining the log. I, for some reason, had two versions of Hibernate (hibernate-core3.3.1.jar and hibernate-3.2.6GA.jar) on my classpath. I also used Hibernate Annotations version 3.4.0GA, which is not compatible with 3.2.x (I don't know if that had something to do with the original problem).
After removing one of the Hibernate jars (can't remember which I deleted, probably hibernate-3.2.6GA.jar), c3p0 finally started and I got rid of the annoying com.mysql.jdbc.exceptions.jdbc4.CommunicationsException that happened after 8 hours of inactivity.
Related
I have configured Open Liberty (version 21) with a database (Oracle) connection as follows in server.xml:
<dataSource jndiName="jdbc/myds" transactional="true">
<connectionManager maxPoolSize="20" minPoolSize="5" agedTimeout="120s" connectionTimeout="10s"/>
<jdbcDriver libraryRef="jdbcLib" />
<properties.oracle URL="jdbc:oracle:thin:@..." user="..." password="..."/>
</dataSource>
The server starts and I can make queries to the database via my REST API, but I have noticed that only 1 database connection is active, and parallel HTTP requests result in database queries queuing up on that 1 connection.
I have verified this by monitoring the active open database connections in combination with slow queries (I make several REST calls in parallel). Only 1 connection is opened, and the queries are processed one after the other. How do I open a connection pool with, for example, 5-20 connections for parallel operation?
Based on your described usage, the connection pool should be creating connections as requests come in if there are no connections available in the free pool.
Your connectionTimeout is configured to be 10 seconds. One way to ensure that your test really is running in parallel would be to make two requests to the server, where each request creates a connection, uses it, waits 11 seconds, then closes the connection.
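A rough sketch of such a test endpoint (assuming a JAX-RS resource and the jdbc/myds datasource from your server.xml; the class and path names are made up):
import java.sql.Connection;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/pooltest")
public class PoolTestResource {

    @Resource(lookup = "jdbc/myds")
    private DataSource ds;

    @GET
    public String holdConnection() throws Exception {
        // Check a connection out, use it, then hold it longer than the
        // 10s connectionTimeout before returning it to the pool.
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            st.execute("SELECT 1 FROM DUAL");  // trivial Oracle query
            Thread.sleep(11000);
            return "done";
        }
    }
}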
If your requests are NOT running in parallel, you will not get any exception, since the second request won't start until after the first one has finished; that would be an issue with your test procedure.
If your requests are running in parallel and you do not get any exception output from Liberty, then Liberty is likely making multiple connections; that can be confirmed by enabling J2C trace.
See: https://openliberty.io/docs/21.0.0.9/log-trace-configuration.html
Enable: J2C=ALL
If your requests are running in parallel, and no more than one connection is being created, then you will get a ConnectionWaitTimeoutException. This could be caused by the driver not being able to create more than one connection, incorrect use of the Oracle Connection Pool (UCP), or a number of other factors. I would need more information to debug that issue.
I've just installed SlashDB and connected to an Azure SQL DB successfully. Querying works and everything is fine. However, after a while, if I retry my previously working query, I get an error from SlashDB:
500 Internal Server Error (pyodbc.OperationalError) ('08S01', u'[08S01] [FreeTDS][SQL Server]Write to the server failed (20006) (SQLExecDirectW)')
I'm not writing anything to the server, if that matters. But if I retry the query immediately, it works. My deep analysis (=guess) of this all is that the SQL Server terminates the idle connection. Now, I'd like SlashDB to retry when it fails, instead of returning an error to the client. Is this possible?
Apparently Azure SQL DB could be breaking connections due to either its redundancy limitations or the 30-minute idle timeout.
https://azure.microsoft.com/en-us/blog/connections-and-sql-azure/
For performance reasons SlashDB does not establish a new connection for every request, but instead maintains a pool of connections.
MySQL has a similar, widely known behavior (a 60-minute idle timeout), and SlashDB actually has logic to attempt a reconnect. It should implement the same for all database types, but I need to confirm that with the development team (and fix it if that's not the case).
In the meantime, you could either retry on the client side or send a periodic request to avoid the timeout.
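If you go the client-side route, a simple retry wrapper is usually enough. Here's a rough sketch in Java (the SlashDB URL is only a placeholder, and the single immediate retry mirrors what you described, where the second attempt succeeds):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlashDbRetryClient {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Runs the query once and retries a single time if SlashDB answers with a 5xx error.
    static String query(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 500) {
            response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        }
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL for a SlashDB data query
        System.out.println(query("http://localhost:6543/db/mydb/mytable.json"));
    }
}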
1. We have a J2EE application using Servlets & JSPs running on JBoss EAP 6.2 and using a SQL Server database.
2. Everything was fine on the UAT system, where the user count was 20, but when we moved the same application to the production system, where the user count is more than 80, we started facing an issue in JBoss with the connection pool count. The count keeps decreasing, and after 8-10 hours users are not able to log in to the system, so we need to flush the connection pool manually by clicking the Flush button available in the Datasource section of the Profile tab.
3. We have checked that there is no connection leakage, as we close all database connections in the finally { } block.
4. We have also increased the max/min pool size in the standalone.xml file and added some validation tags recommended by the Red Hat site. Please see the attached file.
Question: Is there any way we can automate the Flush button functionality available in the JBoss Administration console so that idle connections get destroyed automatically?
Attached: JBoss console view of the connection pool.
Here are some recommendations to tune up your datasource first.
1# There is a known problem with prefill = true in your release line
<prefill>true</prefill>
Please set this to false.
2# Use the following datasource connection validation mechanism:
<validation>
<validate-on-match>true</validate-on-match>
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker" />
</validation>
3# The following are not recommended for datasources:
- jta="false" : It should be true
- use-ccm="false" : It should be true
4# You may want to ensure your database server is configured not to time out connections that are idle for less than 5 minutes (the configured timeout period for your datasource in JBoss). The JBoss timeout should be lower than the timeout period configured on the database server, to permit JBoss to gracefully time out connections rather than allowing them to be timed out externally.
5# Connections reserved by application components are not subject to timeout by JBoss. Connections obtained by DataSource.getConnection() cannot be timed out by JBoss until after they have been returned to the pool (by calling Connection.close()) and remained unused in the pool for idle-timeout-minutes. Connection status is InUse between DataSource.getConnection() and Connection.close() even if no database command is active because these connections are owned by application code.
Apply the above and check the behaviour.
The idle-timeout-minutes is the maximum time, in minutes, before an idle (unreserved/unused) connection is closed. The actual maximum time depends upon the idleRemover scan time, which is half of the smallest idle-timeout-minutes of any pool.
The idle connection is removed by the IdleRemover after:
idle-timeout-minutes + 1/2 (idle-timeout-minutes)
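For example, with idle-timeout-minutes set to 10 (and assuming that is the smallest value of any pool, so the IdleRemover scans roughly every 5 minutes), an idle connection is closed no later than about 15 minutes after it becomes idle.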
The idle-timeout-minutes property should be configured to a value greater than 0 but less than the timeout period specified on the database server, network firewalls, etc. to permit graceful termination by JBoss before idle connections are externally severed.
I hope there is some expert on C3P0 that can help me answer the following question.
First, here's the general problem I'm trying to solve. We have an application connected to a database. When the database goes down, requests start taking several seconds to process, as opposed to a few milliseconds. This is because C3P0 will attempt to create new connections to the database; it will eventually time out and the request will be rejected.
I came up with a proposal to fix it. Before grabbing a connection from the pool, I'll query C3P0's APIs to see if there are any connections in the pool. If there are none, we'll immediately drop the request. This way, our latency should remain in the milliseconds, instead of waiting until the timeout occurs. This solution works because C3P0 is capable of removing connections if it detects that they've gone bad.
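(For clarity, here's a rough sketch of that pre-check. It assumes the pool is a c3p0 ComboPooledDataSource and uses c3p0's PooledDataSource statistics for the default user; it's simplified compared to our real code.)
import java.sql.Connection;
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class FailFastCheckout {

    private final ComboPooledDataSource pool;

    public FailFastCheckout(ComboPooledDataSource pool) {
        this.pool = pool;
    }

    public Connection checkout() throws SQLException {
        // If c3p0 has already discarded every connection (database down),
        // reject immediately instead of waiting for the acquisition timeout.
        if (pool.getNumConnectionsDefaultUser() == 0) {
            throw new SQLException("No connections in pool, rejecting request");
        }
        return pool.getConnection();
    }
}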
Now, I set up a test with the values for "setTestConnectionOnCheckin" and "setTestConnectionOnCheckout" as "false". According to my understanding, this would mean that C3P0 would not test a connection (or, let's say, a connection in use, because there's also the idleConnectionTestPeriod setting). However, when I run my test, immediately after shutting off the database, C3P0 detects it and removes the connections from the pool. To give you a clearer picture, here's the execution's result:
14:48:01 - Request processed successfully. Processing time: 5 ms.
14:48:02 - Request processed successfully. Processing time: 4 ms.
14:48:03 - (Database is shut down at this point).
14:48:04 - java.net.ConnectException.
14:48:05 - Request rejected. Processing time: 258 ms.
14:48:06 - Request rejected. Processing time: 1 ms.
14:48:07 - Request rejected. Processing time: 1 ms.
C3P0 apparently knew that the database went down and removed the connections from the pool. It probably took a while, because the very first request after the database was shut off took longer than the others. I have run this test several times and that single request can take from 1 ms up to 3.5 seconds (which is the timeout time). This entry appears as many times as the number of connections I have defined for my pool. I have omitted all the rest for simplicity.
I think it's great that C3P0 is capable of removing the connections from the pool right away (well, as quickly as 258 ms in the above example), but I'm having trouble explaining to other people why that works. If "setTestConnectionOnCheckin" and "setTestConnectionOnCheckout" are set to "false", how is C3P0 capable of knowing that a connection went bad?
Even if they were set to "true", testing a connection is supposed to attempt executing a query against the database (something like "select 1 + 1 from dual"). When the database goes down, shouldn't the test time out? In other words, shouldn't C3P0 take 3.5 seconds to determine that a connection has gone bad?
Thanks a lot, in advance.
(apologies... this'll be terse, i'm phonebound.)
1) even if no explicit Connection testing has been configured, c3p0 tests Connections that experience Exceptions while checked-out to determine whether they remain suitable for pooling.
2) a good JDBC driver will throw Exceptions quickly if the DBMS is unavailable. there's no reason why these internal Connection tests should be slow.
3) rather than polling for unused Connections to avoid waiting for checking / new acquisitions, you might consider just setting the config parameter checkoutTimeout.
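e.g., a minimal sketch (checkoutTimeout is in milliseconds; 500 is just an illustrative value):
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class CheckoutTimeoutExample {
    public static void main(String[] args) {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        // Fail a checkout after 500 ms instead of blocking until the full
        // acquisition retry cycle gives up when the database is down.
        cpds.setCheckoutTimeout(500);  // milliseconds
    }
}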
good luck!
I have a Java EE web application deployed on glassfish 3.1.1 which I want to host on Windows Azure.
The application uses Hibernate as the JPA provider.
I defined a JDBC Connection Pool for the Azure database.
(basically, these are the defaults)
Initial and Minimum Pool Size: 8 Connections
Maximum Pool Size: 32 Connections
Pool Resize Quantity: 2 Connections
Idle Timeout: 300 Seconds
Max Wait Time: 60000 Milliseconds
Additional Properties:
User: user#serverName
ServerName: serverName.database.windows.net
Password: myPass
databaseName: mydatabase
If I ping it from the GlassFish interface it works, so the properties I provide are OK.
Setting up the new JDBC connection pool (the one for Azure) resulted in the tables being created on the SQL Azure database (I have "hibernate.hbm2ddl.auto" set to update), so there isn't a problem with the database connection/parameters.
If the application uses the database immediately after the server starts, all goes well (it can retrieve/store data).
When the application tries to use the database after being idle for a while, I get this:
link to exception
If I flush the connection pool (from the GlassFish admin) it starts to work again, until it goes idle for a period of time.
So basically, as long as it executes database operations all works well, but if there are no database operations for a while, the next db operation will result in that exception.
I've googled this and it seems to have something to do with the Azure database server closing the idle connection, but I couldn't find a solution to the problem.
I never had this problem when using PostgreSQL.
One possible reason for this problem: SQL Azure closes idle connections after 5 minutes. To work around this issue, you have to close the connection and create a new one. In general, it is recommended to close idle connections even when connecting to other databases; this helps reduce system resource usage.
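In practice that means borrowing a connection per operation and closing it right away, rather than holding it across idle periods. A rough sketch of that pattern against a pooled DataSource (the JNDI name and table are placeholders, not taken from the question):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.annotation.Resource;
import javax.sql.DataSource;

public class CustomerDao {

    @Resource(lookup = "jdbc/azurePool")  // placeholder JNDI name
    private DataSource ds;

    public int countCustomers() throws Exception {
        // Borrow a connection per operation and close it immediately, so the
        // pool (not the application) decides how long connections stay open.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM customer");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}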