Earlier I used to get "ORA-12570: Network Session: Unexpected packet read error" while querying with the configuration below:
Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xyz.xyz89.test)(PORT=xxxxx))(CONNECT_DATA=(SID=xxxxxxx)));User Id=test;Password=test;Connection Timeout=600;Max Pool Size=150;Min Pool Size=10;
Later I updated my connection string as below, which fixed the ORA-12570 issue:
Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xyz.xyz89.test)(PORT=xxxxx))(CONNECT_DATA=(SID=xxxxxxx)));User Id=test;Password=test;Connection Timeout=600;Max Pool Size=150;Min Pool Size=10;Validate Connection=True;
However, I am now facing a performance issue when connecting to the Oracle database (with Validate Connection=True) to pull data using Dapper.
Can anyone suggest why my query takes almost 2.5 hours to respond most of the time, while occasionally it responds very quickly? Does this mean there is no valid connection available in the pool to pick up?
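To help narrow this down, here is a minimal diagnostic sketch (assuming the Oracle.ManagedDataAccess and Dapper NuGet packages; the connection string keeps the placeholder values from above) that separates the time spent opening a pooled connection from the time spent running the query. If Open is the slow part, the pool or the extra validation round trip is the likelier suspect than the query itself:
// Minimal timing sketch: is the pooled Open or the query slow?
// Assumes the Oracle.ManagedDataAccess and Dapper NuGet packages.
using System;
using System.Diagnostics;
using Dapper;
using Oracle.ManagedDataAccess.Client;

class PoolDiagnostics
{
    static void Main()
    {
        const string connStr =
            "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xyz.xyz89.test)(PORT=xxxxx))" +
            "(CONNECT_DATA=(SID=xxxxxxx)));User Id=test;Password=test;" +
            "Connection Timeout=600;Max Pool Size=150;Min Pool Size=10;Validate Connection=True;";

        var sw = Stopwatch.StartNew();
        using (var conn = new OracleConnection(connStr))
        {
            // With Validate Connection=True this includes an extra round trip;
            // with an exhausted pool it can block for up to Connection Timeout
            // (600 s here) waiting for a free slot.
            conn.Open();
            Console.WriteLine($"Open:  {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            conn.Query("SELECT 1 FROM DUAL");  // stand-in for the real query
            Console.WriteLine($"Query: {sw.ElapsedMilliseconds} ms");
        }
    }
}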
I am upgrading our Airflow instance from 1.9 to 1.10.3, and whenever the scheduler runs now I get a warning that the database connection has been invalidated and it's trying to reconnect. A bunch of these warnings show up in a row. The console also indicates that tasks are being scheduled, but if I check the database, nothing is ever written.
The following warning shows up where it didn't before:
[2019-05-21 17:29:26,017] {sqlalchemy.py:81} WARNING - DB connection invalidated. Reconnecting...
Eventually, I'll also get this error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
I've tried increasing the SQLAlchemy pool size setting in airflow.cfg, but that had no effect:
# The SqlAlchemy pool size is the maximum number of database connections in the pool.
sql_alchemy_pool_size = 10
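For reference, these settings live in the [core] section of airflow.cfg; a sketch with illustrative values (sql_alchemy_pool_recycle is the related knob that replaces connections after a fixed age, which can matter when the server drops idle connections):
[core]
# Maximum number of database connections kept in the SQLAlchemy pool.
sql_alchemy_pool_size = 10
# Recycle (replace) a pooled connection after this many seconds.
sql_alchemy_pool_recycle = 1800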
I'm using the CeleryExecutor, and I'm thinking that maybe the number of workers is overloading the database with connections.
I run three commands: airflow webserver, airflow scheduler, and airflow worker. So there should only be one worker, and I don't see why that would overload the database.
How do I resolve the database connection errors? Is there a setting to increase the number of database connections, and if so, where is it? Do I need to handle the workers differently?
Update:
Even with no workers running, and the webserver and scheduler started fresh, the DB connection warning starts to appear as soon as the scheduler fills up the Airflow pools.
Update 2:
I found the following issue in the Airflow Jira: https://issues.apache.org/jira/browse/AIRFLOW-4567
There is some activity there, with others saying they see the same issue. It is unclear whether this directly causes the crashes some people are seeing or whether it is just an annoying cosmetic log message. As yet, there is no resolution to this problem.
This has been resolved in the latest version of Airflow, 1.10.4.
I believe it was fixed by AIRFLOW-4332 (see the associated pull request), which updated SQLAlchemy to a newer version.
What does it do? How does it work? Why am I supposed to test the database connection before "borrowing it from the pool"?
I was not able to find any information about why I should use it, only how to use it, and that baffles me.
Can anyone provide a meaningful definition, and possibly resources where I can find out more?
"test-on-borrow" indicates that a connection from the pool has to be validated usually by a simple SQL validation query defined in "validationQuery". These two properties are commonly used in conjunction to make sure that the current connections in the pool are not stale (no longer connected to the DB actively as a result of a DB restart, or timeouts enforced by the DB, or whatever other reason that might cause stale connections). By testing the connections on borrow, the application can automatically reconnect to the DB using new connections (and dropping the invalid ones) without a manual restart of the app and thus preventing DB connection errors in the app.
You can find more information on jdbc connection pool attributes here:
https://tomcat.apache.org/tomcat-8.0-doc/jdbc-pool.html#Common_Attributes
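For illustration, a minimal Tomcat JDBC pool resource wired this way (the datasource name, URL, driver, and credentials are placeholders; validationInterval throttles how often the validation query actually runs):
<Resource name="jdbc/AppDB"
          auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost:3306/appdb"
          username="app"
          password="secret"
          testOnBorrow="true"
          validationQuery="SELECT 1"
          validationInterval="30000"/>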
I have a Symfony command-line task that has a habit of dropping the MySQL connection.
It's a data import task, which fetches data from multiple connections. It's not one big query but several smaller ones.
It seems to drop the connection the first time it is run, about halfway through the script. However, the second time it is run (from the beginning), it always completes the task.
It's not timing out on the query, as the error I get is that the connection has been dropped, and the query runs fine on its own. So I'm thinking it's some kind of timeout issue that is avoided on the second run because query caching speeds up the script.
So my question is: how do I refresh the database connection?
[Doctrine\DBAL\DBALException]
SQLSTATE[HY000]: General error: 2013 Lost connection to MySQL server during query
A different approach is to check whether Doctrine is still connected to the MySQL server using the ping() method on the connection. If the connection has been lost, close the active connection (since it is not really closed yet) and start a new one:
if (false === $em->getConnection()->ping()) {
    // The connection went away; close the stale handle first,
    // because Doctrine still considers it open, then reconnect.
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
I guess you mean to connect to the database if the connection is lost for some reason. Given an EntityManager, you can do it the following way:
$success = $_em->getConnection()->connect();
With getConnection, you are retrieving the connection object Doctrine uses (Doctrine\DBAL\Connection), which exposes the connect method.
You can call connect at any time, as it checks whether a connection is already established. If one is, it returns false.
There is also an isConnected method to check whether a connection is established. You could use that to see where exactly the connection is dropping, to get a clearer picture of what is happening.
I have 3 servers set up for SQL mirroring and automatic failover using a witness server. This works as expected.
Now my application that connects to the database seems to have a problem when a failover occurs: I need to intervene manually and change the connection strings for it to connect again.
The best solution I've found so far involves using the Failover Partner parameter of the connection string; however, it's neither intuitive nor complete: Data Source="Mirror";Failover Partner="Principal", found here.
From the example in the blog above (scenario #3), when the first failover occurs and the principal (failover partner) is unavailable, the data source is used instead (which is the new principal). If it fails over again (and I only tried within a limited period), it then comes up with an error. This happens because the connection string is cached, so until it is refreshed, the error keeps appearing (the connection string seems to refresh about 5 minutes after it encounters an error). If, after a failover, I swap the data source and failover partner, I get one more silent failover.
Is there a way to achieve fully automatic failover for applications that use mirrored databases too (without ever seeing the error)?
I can see potential workarounds using custom scripts that would poll the currently active database node name and adjust the connection string accordingly, but that seems like overkill at the moment.
Read the blog post here
http://blogs.msdn.com/b/spike/archive/2010/12/15/running-a-database-mirror-setup-with-the-sqlbrowser-service-off-may-produce-unexpected-results.aspx
It explains what is happening: the failover partner is actually read from the SQL Server itself, not from your config. Run the query in that post to find out what is actually being used as the failover server. It will probably be a machine name that is not discoverable from where your client is running.
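A query along these lines (reconstructed from the sys.database_mirroring catalog view, not quoted from the post) shows what the server itself reports as the partner:
-- What SQL Server itself reports as the mirroring partner.
SELECT DB_NAME(database_id)      AS database_name,
       mirroring_role_desc,
       mirroring_partner_name,
       mirroring_partner_instance
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;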
You can clear the connection pool when a failover has happened. Not very nice, I know ;-)
// ClearAllPools resets (or empties) the connection pool.
// If there are connections in use at the time of the call,
// they are marked appropriately and will be discarded
// (instead of being returned to the pool) when Close is called on them.
System.Data.SqlClient.SqlConnection.ClearAllPools();
We use it when we change an underlying server via a SQL Server alias, to force a "refresh" of the server name.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.clearallpools.aspx
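In practice you would call it when a query fails with a connection-level error and then retry once; a minimal sketch (the class and method names and the query are illustrative, assuming System.Data.SqlClient):
using System.Data.SqlClient;

static class FailoverRetry
{
    // Runs a scalar query; on the first connection-level failure, clears
    // the pool (so the next attempt resolves the new principal) and retries once.
    public static object QueryWithRetry(string connStr, string sql)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    return cmd.ExecuteScalar();
                }
            }
            catch (SqlException) when (attempt == 0)
            {
                // Pooled connections may still point at the old principal;
                // discard them so the next attempt resolves the new one.
                SqlConnection.ClearAllPools();
            }
        }
    }
}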
The solution is to turn connection pooling off: Pooling="false".
While this has minimal impact on small applications, I haven't tested it with applications that receive hundreds of requests per minute (or more), so I'm not sure what the implications are. Anyone care to comment?
Try this connection string:
connectionString="Data Source=[MSSQLPrincipalServerIP,MSSQLPORT];Failover Partner=[MSSQLMirrorServerIP,MSSQLPORT];Initial Catalog=DatabaseName;Persist Security Info=True;User Id=userName;Password=userPassword;Connection Timeout=15;"
If you are developing on .NET, you can try ObjAdoDBLib, or PigSQLSrvLib and PigSQLSrvCoreLib, and the code will become simpler.
Example code:
New object:
ObjAdoDBLib:
Me.ConnSQLSrv = New ConnSQLSrv(Me.DBSrv, Me.MirrDBSrv, Me.CurrDB, Me.DBUser, Me.DBPwd, Me.ProviderSQLSrv)
PigSQLSrvLib or PigSQLSrvCoreLib:
Me.ConnSQLSrv = New ConnSQLSrv(Me.DBSrv, Me.MirrDBSrv, Me.CurrDB, Me.DBUser, Me.DBPwd)
Execute this method to connect automatically to the online database after the mirror database fails over:
Me.ConnSQLSrv.OpenOrKeepActive
For more information, see the relevant links.
https://www.nuget.org/packages/ObjAdoDBLib/
https://www.nuget.org/packages/PigSQLSrvLib/
https://www.nuget.org/packages/PigSQLSrvCoreLib/
Today we had a lot more activity than normal between our Ruby on Rails application and our remote legacy SQL Server 2005 database, and we started getting the error below intermittently. What is it? How can I prevent it (besides avoiding the situation, which we're working on)?
Error Message:
ActiveRecord::StatementInvalid: DBI::DatabaseError: 08S01 (20020) [unixODBC][FreeTDS][SQL Server]
Bad token from the server: Datastream processing out of sync: SELECT * FROM [marketing] WHERE ([marketing].[contact_id] = 832085)
You need to use some sort of connection pooling.
Windows itself (including Windows Server X) will only allow a certain number of socket connections to be created in a given time frame, even if you close them; after that, all further connections will fail.
A connection pool keeps the same sockets open, avoiding the problem. Creating new connections is also very slow.
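For a Rails app, ActiveRecord's built-in pool is configured in config/database.yml; a sketch with illustrative values (the adapter, mode, and DSN lines depend on your FreeTDS/ODBC setup):
production:
  adapter: sqlserver
  mode: odbc
  dsn: legacy_sqlserver_dsn
  pool: 5      # cap on connections kept open per process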
This Microsoft article says:
Often caused by an abruptly terminated network connection, which causes a damaged Tabular Data Stream token to be read by the client.
Was the server network-bound? I have no experience with SQL Server, but does it have a limit on the number of connections you can make?
Add the following to your script as the first statement you run:
SET NO_BROWSETABLE OFF