Modifying the pool configuration item maxConnect=30 in taosadapter.toml does not take effect - TDengine

Java uses the RESTful interface to connect to TDengine to insert data, and setting maxConnect=30 under the pool configuration section in taosadapter.toml does not take effect.
The maximum number of connections in a single connection pool is still 20. In addition, the maxIdle and idleTimeout settings must be configured together with maxConnect; otherwise connecting fails with: error "connect taosd error" model=restful sessionID=6 error=invalid capacity settings
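A minimal sketch of the relevant taosadapter.toml section with all three settings configured together (the key names follow the question; the idleTimeout value format is an assumption):

[pool]
# maximum number of connections in a single connection pool
maxConnect = 30
# maximum number of idle connections kept in the pool; configure this
# together with maxConnect to avoid "invalid capacity settings"
maxIdle = 30
# how long an idle connection may stay in the pool (assumed duration format)
idleTimeout = "1h"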
Database parameters used: show databases
Verbs used: Insert/Import/Select?
Environment:
- OS: [e.g. CentOS 7.0]
- Memory, CPU, current disk space
- TDengine version [e.g. 1.6.1.7]

Related

How to resolve DB connection invalidated warning in Airflow Scheduler?

I am upgrading our Airflow instance from 1.9 to 1.10.3 and whenever the scheduler runs now I get a warning that the database connection has been invalidated and it's trying to reconnect. A bunch of these errors show up in a row. The console also indicates that tasks are being scheduled but if I check the database nothing is ever being written.
The following warning shows up where it didn't before:
[2019-05-21 17:29:26,017] {sqlalchemy.py:81} WARNING - DB connection invalidated. Reconnecting...
Eventually, I'll also get this error
FATAL: remaining connection slots are reserved for non-replication superuser connections
I've tried to increase the SQL Alchemy pool size setting in airflow.cfg but that had no effect
# The SqlAlchemy pool size is the maximum number of database connections in the pool.
sql_alchemy_pool_size = 10
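For reference, a sketch of the relevant airflow.cfg entries (in the 1.10.x line these live under the [core] section; the recycle value is illustrative):

[core]
# The SqlAlchemy pool size is the maximum number of database connections in the pool.
sql_alchemy_pool_size = 10
# Seconds after which a pooled connection is recycled; keeping this below the
# database server's idle timeout avoids handing out dead connections.
sql_alchemy_pool_recycle = 1800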
I'm using CeleryExecutor and I'm thinking that maybe the number of workers is overloading the database connections.
I run three commands, airflow webserver, airflow scheduler, and airflow worker, so there should only be one worker and I don't see why that would overload the database.
How do I resolve the database connection errors? Is there a setting to increase the number of database connections, if so where is it? Do I need to handle the workers differently?
Update:
Even with no workers running and the webserver and scheduler started fresh, the DB connection warning starts to appear as soon as the scheduler fills up the Airflow pools.
Update 2:
I found the following issue in the Airflow Jira: https://issues.apache.org/jira/browse/AIRFLOW-4567
There is some activity with others saying they see the same issue. It is unclear whether this directly causes the crashes that some people are seeing or whether this is just an annoying cosmetic log. As of yet there is no resolution to this problem.
This has been resolved in the latest version of Airflow, 1.10.4
I believe it was fixed by AIRFLOW-4332, updating SQLAlchemy to a newer version.

Closed db connections are not released from pool

We have a Dropwizard application using the default configuration provided by dropwizard-jdbi for connecting to the database.
We use the following to get the SQL connection object:
Connection dbConnection = handle.getConnection();
Did a code walk-through and verified that the connections that are opened are closed.
But when I check v$session, I can see some inactive sessions still present that are not released for a long time.
I am using the default connection pool provided by Dropwizard.
Please let me know how to get the inactive sessions released.
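For reference, a minimal sketch of the usage pattern under discussion, assuming the JDBI 2.x API bundled with dropwizard-jdbi (Handle implements Closeable; dbi is assumed to be the application's DBI instance). The pooled connection is only returned when the Handle itself is closed:

import java.sql.Connection;
import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;

public class ConnectionUsage {
    static void useConnection(DBI dbi) {
        // the underlying pooled connection is released only when the Handle
        // is closed, not when work with the Connection object is finished
        try (Handle handle = dbi.open()) {
            Connection dbConnection = handle.getConnection();
            // ... use dbConnection here ...
        } // handle.close() runs here and returns the connection to the pool
    }
}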
What are your settings in the configuration file for Dropwizard?
If you have a look at http://www.dropwizard.io/1.3.0/docs/manual/configuration.html#database and then at the database section of the service configuration, there is an option for the number of connections to keep alive.
# the minimum number of connections to keep open
minSize: 10
But most of the time you want to have some connections open, since this speeds up your application: it doesn't have to validate and reconnect to the database for every call. That's one of the purposes of a connection pool.
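A sketch of the corresponding database block in the Dropwizard YAML configuration (setting names per the configuration manual linked above; values are illustrative):

database:
  # the minimum number of connections to keep open
  minSize: 10
  # the maximum number of connections to keep open
  maxSize: 32
  # how often connections are checked for idleness
  evictionInterval: 10s
  # how long a connection must sit idle before it may be evicted
  minIdleTime: 1 minute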

JBoss EAP 6.2 AvailableCount in connection pool decreasing

1. We have a J2EE application using Servlets & JSPs running on JBoss EAP 6.2, with a SQL Server database.
2. Everything was fine on the UAT system, where the user count was 20, but when we moved the same application to the production system, where the user count is more than 80, we started facing an issue with the JBoss connection pool count. The count keeps decreasing, and after 8-10 hours users are no longer able to log in to the system, so we need to flush the connection pool manually by clicking the Flush button in the Datasource section of the Profile tab.
3. We have checked that there is no connection leakage, as we close all database connections in the finally { } block.
4. We have also increased the max/min pool size in standalone.xml and added some validation tags recommended by the Red Hat site. Please see the attached file.
Question: Is there any way to automate the Flush button functionality available on the JBoss administration console so that idle connections get destroyed automatically?
Attached - Jboss console view of connection pool.
Below are some recommendations to tune up your datasource first.
1# There is a known problem with prefill = true in your release line:
<prefill>true</prefill>
Please set this to false.
2# Use the following datasource connection validation mechanism:
<validation>
  <validate-on-match>true</validate-on-match>
  <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker" />
</validation>
3# The following is not recommended for datasources:
- jta="false" : it should be true
- use-ccm="false" : it should be true
4# You may want to ensure your database server is configured not to time out connections that are idle for less than 5 minutes (the configured timeout period for your datasource in JBoss). The JBoss timeout should be lower than the timeout period configured on the database server, to permit JBoss to gracefully time out connections rather than have them timed out externally.
5# Connections reserved by application components are not subject to timeout by JBoss. Connections obtained by DataSource.getConnection() cannot be timed out by JBoss until after they have been returned to the pool (by calling Connection.close()) and remained unused in the pool for idle-timeout-minutes. Connection status is InUse between DataSource.getConnection() and Connection.close() even if no database command is active because these connections are owned by application code.
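In code terms (a sketch; the JNDI name is an assumption for illustration):

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PoolLifecycle {
    public static void runQuery() throws Exception {
        // assumed JNDI name for the datasource
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:jboss/datasources/AppDS");
        // between getConnection() and close() the connection is InUse and
        // exempt from idle-timeout-minutes, even if no SQL is running
        try (Connection con = ds.getConnection()) {
            // ... execute statements ...
        } // close() returns the connection to the pool; the idle timer starts here
    }
}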
Apply the above and check the behaviour.
The idle-timeout-minutes is the maximum time, in minutes, before an idle (unreserved/unused) connection is closed. The actual maximum time depends upon the idleRemover scan time, which is half of the smallest idle-timeout-minutes of any pool.
The idle connection is removed by IdleRemover after :
idle-timeout-minutes + 1/2 (idle-timeout-minutes)
The idle-timeout-minutes property should be configured to a value greater than 0 but less than the timeout period specified on the database server, network firewalls, etc. to permit graceful termination by JBoss before idle connections are externally severed.
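Putting the recommendations together, a sketch of the datasource fragment in standalone.xml (element names per the EAP 6 datasources schema; the JNDI name, pool sizes, and timeout value are illustrative):

<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS" jta="true" use-ccm="true">
  <pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>50</max-pool-size>
    <prefill>false</prefill>
  </pool>
  <validation>
    <validate-on-match>true</validate-on-match>
    <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker" />
  </validation>
  <timeout>
    <!-- keep this below the database server's own idle timeout -->
    <idle-timeout-minutes>4</idle-timeout-minutes>
  </timeout>
</datasource>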

Using a JDBC Connection Pool with SQL Azure Database gives "Error in allocating a connection"

I have a Java EE web application deployed on glassfish 3.1.1 which I want to host on Windows Azure.
The application uses Hibernate as its JPA provider.
I defined a JDBC Connection Pool for the Azure database.
(basically, these are the defaults)
Initial and Minimum Pool Size: 8 Connections
Maximum Pool Size: 32 Connections
Pool Resize Quantity: 2 Connections
Idle Timeout: 300 Seconds
Max Wait Time: 60000 Milliseconds
Additional Properties:
User: user#serverName
ServerName: serverName.database.windows.net
Password: myPass
databaseName: mydatabase
If I ping it from the GlassFish interface it works, so the properties I provide are OK.
Setting the new JDBC connection pool (the one for Azure) resulted in the tables being created on the SQL Azure database (I have "hibernate.hbm2ddl.auto" set to update), so there isn't a problem with the database connection/parameters.
If the application uses the database immediately after the server starts, all goes well (it can retrieve/store data).
When the application tries to use the database after being idle for a while, I get this:
link to exception
If I flush the connection pool (from the GlassFish admin) it starts to work again, until it goes idle for a period of time.
So basically, as long as it executes database operations all works well, but if there are no database operations for a while, the next DB operation will result in that exception.
I've googled this and it seems to have something to do with the Azure database server closing the idle connection, but I couldn't find a solution for the problem.
I never had this problem when using PostgreSQL.
One possible reason for this problem: SQL Azure closes idle connections after 5 minutes. To work around this issue, you have to close the idle connection and create a new one. In general, it is recommended to close idle connections even when connecting to other databases; this helps reduce system resource usage.
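In GlassFish terms, one way to apply this is to validate connections on checkout and retire idle connections before Azure's ~5 minute cutoff (a sketch; the pool name AzurePool is an assumption):

# retire idle connections before SQL Azure severs them (~5 minutes)
asadmin set resources.jdbc-connection-pool.AzurePool.idle-timeout-in-seconds=240
# validate each connection before handing it to the application
asadmin set resources.jdbc-connection-pool.AzurePool.is-connection-validation-required=true
asadmin set resources.jdbc-connection-pool.AzurePool.connection-validation-method=auto-commit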

Hibernate c3p0 connection pool not timing out idle connections

We have a Java server connecting to a MySQL 5 database using Hibernate as our persistence layer, which uses c3p0 for DB connection pooling.
I've tried following the c3p0 and hibernate documentation:
Hibernate - HowTo Configure c3p0 connection pool
C3P0 Hibernate properties
C3P0.properties configuration
We're getting an error on our production servers stating that:
... Caused by:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. Connection was implicitly closed due to underlying exception/error:

BEGIN NESTED EXCEPTION

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException

MESSAGE: The last packet successfully received from the server was 45000 seconds ago. The last packet sent successfully to the server was 45000 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

STACKTRACE:

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 45000 seconds ago. The last packet sent successfully to the server was 45000 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
We have our c3p0 connection pool properties setup as follows:
hibernate.c3p0.max_size=10
hibernate.c3p0.min_size=1
hibernate.c3p0.timeout=5000
hibernate.c3p0.idle_test_period=300
hibernate.c3p0.max_statements=100
hibernate.c3p0.acquire_increment=2
The default MySQL wait_timeout is set to 28800 seconds (8 hours), and the reported error says it's been over 45000 seconds (about 12.5 hours). The c3p0 configuration states that it will "timeout" idle connections that haven't been used after 5000 seconds and that it will check every 300 seconds, so an idle connection should never live longer than 5299 seconds, right?
I've tested locally by setting my developer MySQL (my.ini on Windows, my.cnf on Unix) wait_timeout=60 and lowering the c3p0 idle timeout values below 60 seconds, and it properly times out idle connections and creates new ones. I also checked to ensure that we're not leaking DB connections by holding onto a connection, and it doesn't appear we are.
Here are the hibernate.properties and c3p0.properties files I'm using to test in my developer environment to ensure c3p0 is properly handling connections.
hibernate.properties (testing with MySQL wait_timeout=60)
hibernate.c3p0.max_size=10
hibernate.c3p0.min_size=1
hibernate.c3p0.timeout=20
hibernate.c3p0.max_statements=100
hibernate.c3p0.idle_test_period=5
hibernate.c3p0.acquire_increment=2
c3p0.properties
com.mchange.v2.log.FallbackMLog.DEFAULT_CUTOFF_LEVEL=ALL
com.mchange.v2.log.MLog=com.mchange.v2.log.FallbackMLog
c3p0.debugUnreturnedConnectionStackTraces=true
c3p0.unreturnedConnectionTimeout=10
Make sure that c3p0 really is starting by examining the log. For some reason, I had two versions of Hibernate (hibernate-core-3.3.1.jar and hibernate-3.2.6.GA.jar) on my classpath. I also used Hibernate Annotations version 3.4.0.GA, which is not compatible with 3.2.x. (I don't know if that had something to do with the original problem.)
After removing one of the Hibernate jars (I can't remember which one I deleted, probably hibernate-3.2.6.GA.jar), c3p0 finally started and I got rid of the annoying com.mysql.jdbc.exceptions.jdbc4.CommunicationsException that happened after 8 hours of inactivity.
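One way to make the pool choice explicit, rather than relying on classpath detection, is to name the provider directly (a sketch for the Hibernate 3.x release line discussed here):

# hibernate.properties - force Hibernate to use the c3p0 provider instead of
# silently falling back to its built-in pool when c3p0 fails to load
hibernate.connection.provider_class=org.hibernate.connection.C3P0ConnectionProvider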
