I have a Spring Boot (1.5.1.RELEASE) project that uses a Postgres DB (9.1-901-1).
While this application is running in production, it creates up to 100 idle connections in the DB.
So I overrode the default configuration to control how many idle connections are created. Please check the configuration below:
datasource:
  driverClassName: org.postgresql.Driver
  url: jdbc:postgresql://localhost:5432/db_name
  username: root
  password: root
  tomcat:
    # The pool's default max-active is 100, but Postgres' default max_connections is 100 as well.
    # To prevent "PSQLException: FATAL: sorry, too many clients already", we decrease the
    # max-active value here, which should be sufficient.
    max-active: 10
    max-idle: 10
    min-idle: 5
    max-wait: 30000
    time-between-eviction-runs-millis: 5000
    min-evictable-idle-time-millis: 60000
    jmx-enabled: true
Now it creates 5 idle connections to the DB.
I verify that by executing the query below:
select * from pg_stat_activity;
Now my question is: do I really need 5 idle connections for a production environment?
What will happen if I change my configuration like below? Will this work without any problem?
max-active: 1
max-idle: 1
min-idle: 0
I would also like to know how PgBouncer would help in this case. Is it necessary to have PgBouncer for Postgres?
The configuration you have proposed is definitely not recommended. A complete DB connection cycle goes through:
1. Establish the TCP connection
2. Validate the credentials
3. Connection is ready
4. Execute commands
5. Disconnect
By maintaining idle connections (a connection pool) with the DB, you save the time spent on steps 1-3 and thus achieve better performance.
You should tune the settings on the DB based on the maximum number of microservice instances that will connect. For example, if the maximum number of microservice instances is 5 and each service is configured to maintain 50 idle connections, then ensure that your DB is configured to cater to at least 250 connections.
To arrive at the minimum connection settings for the microservices, you will need to run tests based on your non-functional requirements and load tests on the services.
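To make the saving concrete, here is a minimal sketch using the Tomcat JDBC pool from the question's setup (the URL, credentials and pool sizes simply mirror the YAML above and are illustrative, not a recommendation). The first getConnection() pays for steps 1-3; once the connection has been returned, later calls reuse the same physical connection and skip straight to step 4.

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        PoolProperties p = new PoolProperties();
        p.setDriverClassName("org.postgresql.Driver");
        p.setUrl("jdbc:postgresql://localhost:5432/db_name");
        p.setUsername("root");
        p.setPassword("root");
        p.setMaxActive(10); // hard cap on open connections, mirrors max-active
        p.setMinIdle(5);    // idle connections kept warm, mirrors min-idle

        DataSource ds = new DataSource(p);

        // First iteration: TCP connect + authentication + session setup (steps 1-3), then the query.
        // Second iteration: the same physical connection comes back from the pool, so only step 4 runs.
        for (int i = 0; i < 2; i++) {
            try (Connection con = ds.getConnection();
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
            } // close() returns the connection to the pool instead of disconnecting (step 5)
        }
        ds.close();
    }
}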
I have configured Open Liberty (version 21) with an Oracle database connection as follows in the server.xml:
<dataSource jndiName="jdbc/myds" transactional="true">
<connectionManager maxPoolSize="20" minPoolSize="5" agedTimeout="120s" connectionTimeout="10s"/>
<jdbcDriver libraryRef="jdbcLib" />
<properties.oracle URL="jdbc:oracle:thin:@..." user="..." password="..."/>
</dataSource>
The server starts and I can make queries to the database via my REST API, but I have noticed that only 1 database connection is ever active, and parallel HTTP requests result in database queries queuing up over that 1 connection.
I have verified this by monitoring the active open database connections in combination with slow queries (I make several REST calls in parallel). Only 1 connection is opened, and queries are processed one after the other. How do I open a connection pool with, for example, 5-20 connections for parallel operation?
Based on your described usage, the connection pool should be creating connections as requests come in if there are no connections available in the free pool.
Your connectionTimeout is configured to be 10 seconds. One way to ensure that your test really is running in parallel is to make two requests to the server, where each request causes the server to create a connection, use it, wait 11 seconds, and then close it.
If your requests are NOT running in parallel, you will not get any exception, since the second request won't start until after the first one has finished; that would be an issue with your test procedure.
If your requests are running in parallel and you do not get any exception output from Liberty, then Liberty is likely making multiple connections, and that can be confirmed by enabling J2C trace.
See: https://openliberty.io/docs/21.0.0.9/log-trace-configuration.html
Enable: J2C=ALL
If your requests are running in parallel, and no more than one connection is being created, then you will get a ConnectionWaitTimeoutException. This could be caused by the driver not being able to create more than one connection, incorrect use of the Oracle Connection Pool (UCP), or a number of other factors. I would need more information to debug that issue.
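As a rough sketch of that parallel test (the endpoint path /api/slow is hypothetical and stands in for a resource that obtains a connection from jdbc/myds, holds it for about 11 seconds, and then closes it; port 9080 is the default Liberty HTTP port), fire two requests at the same time and look at the total time: roughly 11 seconds means they ran in parallel on two connections, roughly 22 seconds means they were serialized over one.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ParallelRequestTest {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoint that borrows a connection, sleeps ~11 seconds while holding it, then closes it.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9080/api/slow"))
                .build();

        long start = System.nanoTime();
        CompletableFuture<HttpResponse<String>> first =
                client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
        CompletableFuture<HttpResponse<String>> second =
                client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
        CompletableFuture.allOf(first, second).join();
        long seconds = (System.nanoTime() - start) / 1_000_000_000L;

        // ~11 s total -> requests ran in parallel on two connections.
        // ~22 s total -> requests were serialized over a single connection.
        System.out.println("Both requests finished in " + seconds + " s");
    }
}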
I am a bit new to PostgreSQL. I have set up my PostgreSQL DB on the Azure Cloud.
It's an Ubuntu 18.04 LTS machine (4 vCPU, 8 GB RAM) running PostgreSQL 9.6.
The problem is that when the connection to the PostgreSQL DB stays idle for some time, let's say 2 to 10 minutes, the connection stops responding: it doesn't fulfil the request and the query just keeps processing.
The same goes for my Java Spring Boot application: the connection doesn't respond and the query keeps processing.
This happens randomly, so the timing is not traceable; sometimes it happens after 2 minutes, sometimes after 10 minutes, and sometimes not at all.
I have tried the PostgreSQL configuration file parameters:
tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count.
I have also tried the statement_timeout and session_timeout parameters, but nothing changes.
Any suggestion or help would be appreciated.
Thank you.
If you are setting up a PostgreSQL DB connection on an Azure VM, you have to be aware that there are inbound and outbound connection timeouts. According to https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#idletimeout, outbound connections have a 4-minute idle timeout, and this timeout is not adjustable. For the inbound timeout there is an option to change it in the Azure Portal.
We ran into a similar issue and were able to resolve it on the client side. We changed the Spring Boot default Hikari configuration as follows:
hikari:
  connection-timeout: 20000
  validation-timeout: 20000
  idle-timeout: 30000
  max-lifetime: 40000
  minimum-idle: 1
  maximum-pool-size: 3
  connection-test-query: SELECT 1
  connection-init-sql: SELECT 1
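The same settings can also be applied programmatically if you build the pool yourself instead of letting Spring Boot bind the YAML. This is only an equivalent sketch of the configuration above (the JDBC URL and credentials are placeholders); the key idea is that max-lifetime (40 s) recycles every connection well before Azure's 4-minute idle timeout can sever it.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class AzureFriendlyPool {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://myserver.postgres.database.azure.com:5432/mydb"); // placeholder
        config.setUsername("user");   // placeholder
        config.setPassword("secret"); // placeholder

        config.setConnectionTimeout(20_000); // connection-timeout
        config.setValidationTimeout(20_000); // validation-timeout
        config.setIdleTimeout(30_000);       // idle-timeout: retire idle connections after 30 s
        config.setMaxLifetime(40_000);       // max-lifetime: recycle every connection after 40 s,
                                             // well below Azure's 4-minute idle timeout
        config.setMinimumIdle(1);            // minimum-idle
        config.setMaximumPoolSize(3);        // maximum-pool-size
        config.setConnectionTestQuery("SELECT 1");
        config.setConnectionInitSql("SELECT 1");

        return new HikariDataSource(config);
    }
}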
1. We have a J2EE application using Servlets & JSPs, running on JBoss EAP 6.2 and using a SQL Server database.
2. Everything was fine on the UAT system, where the user count was 20, but when we moved the same application to the production system, where the user count is more than 80, we are facing an issue in JBoss with the connection pool count. The count keeps decreasing, and after 8-10 hours users are not able to log in to the system, so we need to flush the connection pool manually by clicking the Flush button available in the Datasource section of the Profile tab.
3. We have checked that there are no connection leaks, as we close all database connections in the finally { } block.
4. We have also increased the max/min pool size in the standalone.xml file and added some validation tags recommended by the Red Hat site. Please see the attached file.
Question: Is there any way we can automate the Flush button functionality available in the JBoss administrator console so that idle connections get destroyed automatically?
Attached: JBoss console view of the connection pool.
Below are some recommendations to tune up your datasource first.
1# There is a known problem with prefill = true in your release line:
<prefill>true</prefill>
Please set this to false.
2# Use the following datasource connection validation mechanisms:
<validation>
<validate-on-match>true</validate-on-match>
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker" />
</validation>
3# The following settings are not recommended for datasources:
- jta="false" : It should be true
- use-ccm="false" : It should be true
4# You may want to ensure your database server is configured not to time out connections that are idle for less than 5 minutes (the configured timeout period for your datasource in JBoss). The JBoss timeout should be lower than the timeout period configured on the database server, to permit JBoss to gracefully time out connections rather than letting them be timed out externally.
5# Connections reserved by application components are not subject to timeout by JBoss. Connections obtained by DataSource.getConnection() cannot be timed out by JBoss until after they have been returned to the pool (by calling Connection.close()) and remained unused in the pool for idle-timeout-minutes. Connection status is InUse between DataSource.getConnection() and Connection.close() even if no database command is active because these connections are owned by application code.
Apply the above and check the behaviour.
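To illustrate point 5: the pool only starts counting idle-timeout-minutes once a connection has been handed back, so the window between DataSource.getConnection() and Connection.close() should be kept as short as possible. A minimal sketch (the JNDI name java:/MyDS and the users query are placeholders, not from the original application):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class UserDao {
    public boolean userExists(String login) throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("java:/MyDS"); // placeholder JNDI name
        // The connection is "InUse" for exactly the lifetime of this block; the implicit close()
        // returns it to the pool, and only then does idle-timeout-minutes start to apply.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1 FROM users WHERE login = ?")) {
            ps.setString(1, login);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}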
The idle-timeout-minutes is the maximum time, in minutes, before an idle (unreserved/unused) connection is closed. The actual maximum time depends upon the idleRemover scan time, which is half of the smallest idle-timeout-minutes of any pool.
An idle connection is removed by the IdleRemover after:
idle-timeout-minutes + 1/2 (idle-timeout-minutes)
The idle-timeout-minutes property should be configured to a value greater than 0 but less than the timeout period specified on the database server, network firewalls, etc. to permit graceful termination by JBoss before idle connections are externally severed.
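For example, with idle-timeout-minutes set to 5 (and no smaller value configured on any other pool), the IdleRemover scans every 2.5 minutes, so an idle connection is removed roughly 5 + 2.5 = 7.5 minutes after it was last used; that figure still needs to stay below the idle timeout enforced by the database server or any firewall in between.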
I have a Java EE web application deployed on GlassFish 3.1.1 which I want to host on Windows Azure.
The application uses Hibernate as the JPA provider.
I defined a JDBC Connection Pool for the Azure database.
(basically, these are the defaults)
Initial and Minimum Pool Size: 8 Connections
Maximum Pool Size: 32 Connections
Pool Resize Quantity: 2 Connections
Idle Timeout: 300 Seconds
Max Wait Time: 60000 Milliseconds
Additional Properties:
User: user@serverName
ServerName: serverName.database.windows.net
Password: myPass
databaseName: mydatabase
If I ping it from the GlassFish interface it works, so the properties I provide are OK.
Setting up the new JDBC connection pool (the one for Azure) resulted in the tables being created on the SQL Azure database (I have "hibernate.hbm2ddl.auto" set to update), so there isn't a problem with the database connection/parameters.
If the application uses the database immediately after the server starts, all goes well (it can retrieve/store data).
When the application tries to use the database after being idle for a while, I get this:
link to exception
If I flush the connection pool (from the GlassFish admin) it starts to work again, until it goes idle for a period of time.
So basically, as long as it executes database operations all works well, but if there are no database operations for a while, the next DB operation will result in that exception.
I've googled this and it seems to have something to do with the Azure database server closing idle connections, but I couldn't find a solution to the problem.
I never had this problem when using PostgreSQL.
One possible reason for this problem: SQL Azure closes idle connections after 5 minutes. To work around this issue, you have to close the connection and create a new one. In general, it is recommended to close idle connections even when connecting to other databases; this helps to reduce system resource usage.
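One client-side way to apply that advice, sketched with plain JDBC (the JNDI name jdbc/azurePool, the retry count and the 5-second validity check are placeholders, not part of the original setup): before using a pooled connection that may have sat idle past SQL Azure's 5-minute limit, test it with the JDBC 4 isValid() call and discard it if it is no longer usable.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ConnectionHelper {
    private final DataSource ds;

    public ConnectionHelper() throws NamingException {
        // Placeholder JNDI name for the GlassFish JDBC resource that points at SQL Azure.
        this.ds = (DataSource) new InitialContext().lookup("jdbc/azurePool");
    }

    /** Returns a connection that has passed a validity check, discarding stale ones. */
    public Connection getValidConnection() throws SQLException {
        for (int attempt = 0; attempt < 3; attempt++) {
            Connection con = ds.getConnection();
            if (con.isValid(5)) {   // false if Azure has already dropped the idle socket
                return con;
            }
            con.close();            // discard the stale connection and ask the pool again
        }
        throw new SQLException("Could not obtain a valid connection after 3 attempts");
    }
}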
We have a Java server connecting to a MySQL 5 database using Hibernate as our persistence layer, which uses c3p0 for DB connection pooling.
I've tried following the c3p0 and hibernate documentation:
Hibernate - HowTo Configure c3p0 connection pool
C3P0 Hibernate properties
C3P0.properties configuration
We're getting an error on our production servers stating that:
... Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException:
No operations allowed after connection closed. Connection was implicitly closed due to underlying exception/error:

BEGIN NESTED EXCEPTION

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException

MESSAGE: The last packet successfully received from the server was 45000 seconds ago. The last packet sent successfully to the server was 45000 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

STACKTRACE:

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 45000 seconds ago. The last packet sent successfully to the server was 45000 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
We have our c3p0 connection pool properties set up as follows:
hibernate.c3p0.max_size=10
hibernate.c3p0.min_size=1
hibernate.c3p0.timeout=5000
hibernate.c3p0.idle_test_period=300
hibernate.c3p0.max_statements=100
hibernate.c3p0.acquire_increment=2
The default MySQL wait_timeout is 28800 seconds (8 hours), but the reported error says that it has been over 45000 seconds (about 12.5 hours). The c3p0 configuration states that it will "timeout" idle connections that haven't been used for 5000 seconds, and that it will check every 300 seconds, so an idle connection should never live longer than 5299 seconds, right?
I've tested locally by setting my development MySQL wait_timeout=60 (my.ini on Windows, my.cnf on Unix) and lowering the c3p0 idle timeout values below 60 seconds, and it properly times out idle connections and creates new ones. I also checked to ensure that we're not leaking DB connections by holding onto them, and it doesn't appear we are.
Here are the property files I'm using to test in my development environment to ensure c3p0 is properly handling connections.
hibernate.properties (testing with MySQL wait_timeout=60)
hibernate.c3p0.max_size=10
hibernate.c3p0.min_size=1
hibernate.c3p0.timeout=20
hibernate.c3p0.max_statements=100
hibernate.c3p0.idle_test_period=5
hibernate.c3p0.acquire_increment=2
c3p0.properties
com.mchange.v2.log.FallbackMLog.DEFAULT_CUTOFF_LEVEL=ALL
com.mchange.v2.log.MLog=com.mchange.v2.log.FallbackMLog
c3p0.debugUnreturnedConnectionStackTraces=true
c3p0.unreturnedConnectionTimeout=10
Make sure that c3p0 really is starting by examining the log. I, for some reason, had two versions of Hibernate (hibernate-core-3.3.1.jar and hibernate-3.2.6GA.jar) on my classpath. I also used Hibernate Annotations version 3.4.0GA, which is not compatible with 3.2.x (I don't know if that had something to do with the original problem).
After removing one of the Hibernate jars (I can't remember which one I deleted, probably hibernate-3.2.6GA.jar), c3p0 finally started and I got rid of the annoying com.mysql.jdbc.exceptions.jdbc4.CommunicationsException that happened after 8 hours of inactivity.
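One quick way to confirm which pool Hibernate is actually handing connections out from (a sketch assuming Hibernate 3.2 or later, where Session.doWork is available) is to print the class of the connection it gives you: with c3p0 active you should see a com.mchange.v2.c3p0... proxy class rather than a raw Connector/J connection class.

import java.sql.Connection;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.jdbc.Work;

public class PoolCheck {
    public static void printConnectionClass(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        try {
            session.doWork(new Work() {
                public void execute(Connection connection) {
                    // With c3p0 started this prints a com.mchange.v2.c3p0... proxy class;
                    // if c3p0 never started, a raw com.mysql.jdbc connection class shows up instead.
                    System.out.println("Connection class: " + connection.getClass().getName());
                }
            });
        } finally {
            session.close();
        }
    }
}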