I have a stand-alone script that reads from and writes to PostgreSQL using the Django ORM. I occasionally get this error:
DatabaseError: query timeout
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I need to re-establish the connection and retry the processing code in the script, but I can't seem to find a way. The following code raises 'InterfaceError: connection already closed' on retry, so it doesn't work:
for repeat in range(5):
    try:
        ...  # .....................PROCESSING CODE...................
    except DatabaseError:
        time.sleep(30)
    else:
        break
else:
    return
Any ideas?
I have a similar need to recreate the database connection, and I'm trying the following black magic to reset the connection in Django 1.3:
from django.db import connection
connection.connection.close()  # close the underlying DB-API connection
connection.connection = None   # force Django to open a fresh connection on next use
I don't have PostgreSQL handy to try this out, but it seems to work for MySQL and SQLite at least. Also, if you're using multi-db, you're going to have to perform this step on your specific connection from the django.db.connections dictionary, as sketched below.
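For example, a minimal sketch combining this reset with the retry loop from the question; the 'default' alias, the five attempts, and the 30-second delay are all placeholders:

import time
from django.db import DatabaseError, connections

def process_with_retry():
    for attempt in range(5):
        try:
            ...  # .....................PROCESSING CODE...................
            return
        except DatabaseError:
            conn = connections['default']  # pick your alias when using multi-db
            if conn.connection is not None:
                conn.connection.close()    # discard the dead DB-API connection
                conn.connection = None     # Django reconnects on the next query
            time.sleep(30)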
I am writing an Elixir project that connects to a Postgres database via Ecto. The database server is on a different machine from the application itself, and is therefore subject to outages that would not affect the Elixir project, more so than if they were running on the same hardware.
When the application starts normally, the database connection seems to be made automatically, and everything works fine. But if there is a connection error, Ecto simply spews any errors into the log.
What I would like to do is detect the current connection status, and report that information via a simple Plug route to an external load balancer, such that traffic can be routed to a separate server with an active connection.
The trouble is, I'm unsure how to determine whether Ecto has a viable connection to the database, apart from watching the log, which in any case doesn't report when the database connection has been restored.
What can I call to determine if an Ecto connection is live and usable, preferably without making no-op queries against that connection?
Ecto simply spews any errors into the log.
This is the default strategy: backoff is enabled for the connection. You can disable the backoff by setting backoff_type to :stop in the repo options and deal with the errors yourself.
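For reference, a minimal sketch of such a repo config; :my_app, MyApp.Repo, and the connection details are placeholders:

# in config/config.exs
config :my_app, MyApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  database: "my_db",
  hostname: "db.example.com",
  backoff_type: :stop  # crash on connection loss instead of retrying with backoff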
What can I call to determine if an Ecto connection is live and usable, preferably without making no-op queries against that connection?
I think you can only check whether the connection is "usable" by using it :)
try do
  Ecto.Adapters.SQL.query(MyApp.Repo, "SELECT 1")
  :up
rescue
  DBConnection.ConnectionError -> :down
end
By the way, the module mentioned above is the default pool module for the connection; you could rely on its internals (don't) and do the checkout manually. You could also implement your own pool instead of this one :)
To detect whether a connection to the database is live and usable, you can try connecting to it directly outside of Ecto using :gen_tcp.connect/3. The code could look something like this:
case :gen_tcp.connect('127.0.0.1', 5432, []) do
  {:ok, socket} ->
    # code for handling the ok scenario; close the probe socket when done
    :gen_tcp.close(socket)
    :up
  {:error, reason} ->
    # code for handling the error scenario; typically the reason is
    # :econnrefused when the server is down
    {:down, reason}
end
Note that Ecto gets the details of the server to connect to through the Mix config you set for it. If you want to change the server it connects to, you'll have to run Application.put_env/4 and then restart the Ecto repository for Ecto to recognize the new config. For example:
Application.put_env(:my_app, MyApp.Repo, [
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "new_db",
  hostname: "different_host.com",
  pool_size: 20
])
Supervisor.stop(MyApp.Repo)
I have a Symfony command-line task that has a habit of dropping the MySQL connection.
It's a data import task that fetches data over multiple connections. It's not one big query but several smaller ones.
It seems to drop the connection the first time it is run, about halfway through the script. However, the second time it's run (from the beginning) it always completes the task.
It's not timing out on the query: the error response I get is that the connection has been dropped, and the query runs fine on its own. So I'm thinking it's some kind of timeout issue that is avoided on the second run because query caching speeds up the script.
So my question is: how do I refresh the database connection?
[Doctrine\DBAL\DBALException]
SQLSTATE[HY000]: General error: 2013 Lost connection to MySQL server during query
A different approach is to check whether Doctrine is still connected to the MySQL server through the connection's ping() method. If the connection has been lost, close the active connection (it is not really closed yet) and start a new one.
if (!$em->getConnection()->ping()) {
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
I guess you mean to reconnect to the database if the connection is lost for some reason. Given an EntityManager, you can do it the following way:
$success = $_em->getConnection()->connect();
With getConnection, you are retrieving the connection object Doctrine uses (Doctrine\DBAL\Connection), which exposes the connect method.
You can call connect at any time, as it checks whether a connection is already established. If one is, it returns false.
There is also an isConnected method to check whether a connection is established. You could use that to pinpoint exactly where the connection is dropping and get a clearer picture of what is happening.
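Putting the two together, a minimal sketch (assuming $em is a Doctrine EntityManager):

$conn = $em->getConnection();
if (!$conn->isConnected()) {
    // connect() returns false (and does nothing) if a connection is already open
    $conn->connect();
}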
Can JDBC connections which were closed due to database unavailability be recovered?
To give some background, I get the following errors in sequence. It doesn't look to have been a manual restart. The reason for my question is that I am told the app behaved correctly without a restart. So if the connection was lost, can it be recovered after a DB restart?
java.sql.SQLException: ORA-12537: TNS:connection closed
java.sql.SQLRecoverableException: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory
java.sql.SQLRecoverableException: ORA-01033: ORACLE initialization or shutdown in progress
No. The connection is "dead". Create a new connection.
A good approach is to use a connection pool, which will test whether a connection is still OK before giving it to you, and automatically create a new connection if needed.
There are several open-source connection pools to choose from. I've used Apache's DBCP, and it worked for me.
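As an illustration, a minimal sketch using Apache Commons DBCP 2; the JDBC URL and credentials are placeholders:

import java.sql.Connection;

import org.apache.commons.dbcp2.BasicDataSource;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL");  // placeholder URL
        ds.setUsername("scott");                          // placeholder credentials
        ds.setPassword("tiger");
        ds.setValidationQuery("SELECT 1 FROM DUAL");      // run before a connection is handed out
        ds.setTestOnBorrow(true);                         // dead connections are replaced transparently

        try (Connection conn = ds.getConnection()) {
            // use the connection as usual
        }
    }
}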
Edited:
Given that you want to wait until the database comes back up if it's down (interesting idea), you could implement a custom version of getConnection() that "waits a while and tries again" if the database doesn't respond.
p.s. I like this idea!
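A rough sketch of that idea; the retry count and delay are arbitrary:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class RetryingConnector {
    public static Connection getConnection(String url, String user, String password)
            throws SQLException, InterruptedException {
        SQLException last = null;
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                return DriverManager.getConnection(url, user, password);
            } catch (SQLException e) {
                last = e;              // e.g. ORA-01033 while the DB is restarting
                Thread.sleep(30_000);  // wait a while and try again
            }
        }
        throw last;
    }
}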
The connection cannot be recovered. What can be done is to fail the connection over to another database instance. RAC and Data Guard installations support this configuration.
This is no problem for read-only transactions. However, for transactions that execute DML this can be a problem, especially if the last call to the DB was a commit. In the case of a commit, the client cannot tell whether the commit call completed or not: did the DB fail before executing the commit, or after executing it (but before sending the acknowledgment back to the client)? Only the application has this logic and can do the right thing. If the application does not verify the state of the last transaction after failing over, duplicate transactions are possible. This is a known problem, and most of us have experienced it when buying tickets or in similar web transactions.
Is there a way to ask NHibernate to automatically retry failed connections to a database? Specifically, my network connection is sometimes unreliable, so NH will occasionally fail to connect to my remote SQL Server.
You might (just an idea) be able to get retries by overriding the connection driver's GenerateCommand() method. There you would return a wrapped IDbCommand that retries as necessary.
If you are working with "occasionally connected" requirements, see this question.
Today we had a lot more activity than normal between our Ruby on Rails application and our remote legacy SQL Server 2005 database, and we started getting the error below intermittently. What is it? How can I prevent it (besides avoiding the situation, which we're working on)?
Error Message:
ActiveRecord::StatementInvalid: DBI::DatabaseError: 08S01 (20020) [unixODBC][FreeTDS][SQL Server]
Bad token from the server: Datastream processing out of sync: SELECT * FROM [marketing] WHERE ([marketing].[contact_id] = 832085)
You need to use some sort of connection pooling.
Windows itself (including Windows Server X) will only allow a certain number of socket connections to be created in a given time frame, even if you close them; all others will fail after that.
A connection pool will keep the same sockets open, avoiding the problem. Also, creating new connections is really slow.
This Microsoft article says:
Often caused by an abruptly terminated network connection, which causes a damaged Tabular Data Stream token to be read by the client.
Was the server network bound? I have no experience with SQL Server, but does it have a limit on the number of connections you can make?
Add the following to your script as the first statement you run:
SET NO_BROWSETABLE OFF