I was assigned to implement an application (in C++) that evaluates pending submissions (a submission is a program submitted as the answer to a given problem). A site (in ASP.NET MVC) posts problems and lets users submit their answers, then marks the submissions as "pending evaluation" in the database (SQL Server 2008 R2), and that is where my work begins:
I'll have 3 (or maybe more) instances of my application running as services.
Each instance has to check every 2 seconds whether any pending submissions exist in the DB.
If one exists, I retrieve and compile it; after a successful compilation I execute it, and finally, after execution, I check the correctness of the answer. Then I update that submission with the results and delete it from the pending table.
I need to record in the DB the current status of the pending submission (compiling, running, judging).
Evaluating a submission takes roughly 1-3 s, and the same instance never evaluates more than one submission at a time.
My problem is: How to connect to the DB server?
I have 3 possible solutions and I need to know which would be better (in terms of efficiency) and why:
1 - Establish a connection to the DB when the application instance starts and never close it (close it only when the instance is deleted or the server shuts down, which in theory will never happen).
2 - Open a connection every 2 s to fetch a pending submission (if any exists), keep it open while the full evaluation process runs, set the evaluation results, and then close the connection.
3 - Same as 2, but close the connection as soon as I retrieve the submission; when compilation finishes, open it again, update the submission's status, and close it; when execution finishes, open it again, update the status, and close it; finally, when judging finishes, open it and set the evaluation result.
You don't say what database access library you are using (ODBC, ADO.NET, something else?). Opening and closing database connections is a relatively expensive operation, so you should be using some sort of connection pooling scheme in your DB access framework. A pool of connections is kept open for a period of time, and when your app opens a connection it is handed an already open connection from the pool. That makes it more efficient. Go read about connection pooling for SQL Server.
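A rough sketch of what the polling loop could look like on top of a pool. The question is about C++, so this is written in Java/JDBC purely to illustrate the pattern the answer recommends; the pool library (HikariCP), the URL, the credentials and the table name are all assumptions, not something from the question:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SubmissionPoller {
    public static void main(String[] args) throws Exception {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:sqlserver://dbhost;databaseName=judge"); // illustrative URL
        cfg.setUsername("judge");
        cfg.setPassword("secret");
        cfg.setMaximumPoolSize(3); // one evaluator never needs more than a couple of connections

        try (HikariDataSource pool = new HikariDataSource(cfg)) {
            while (true) {
                // Borrow a connection from the pool for each poll/update and
                // return it immediately by closing it; the physical connection
                // stays open inside the pool, so this is cheap.
                try (Connection con = pool.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "SELECT TOP 1 id FROM pending_submissions"); // hypothetical table
                     ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // compile, run and judge here, updating the status as you go
                    }
                }
                Thread.sleep(2_000L); // poll every 2 seconds
            }
        }
    }
}

With pooling in place, option 2 (and even option 3) stops being expensive, because "opening" a connection just borrows one that is already open.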
Related
I have configured Open Liberty (version 21) with an Oracle database connection as follows in server.xml:
<dataSource jndiName="jdbc/myds" transactional="true">
    <connectionManager maxPoolSize="20" minPoolSize="5" agedTimeout="120s" connectionTimeout="10s"/>
    <jdbcDriver libraryRef="jdbcLib" />
    <properties.oracle URL="jdbc:oracle:thin:@..." user="..." password="..."/>
</dataSource>
The server starts and I can make queries to the database via my REST API, but I have noticed that only 1 database connection is ever active, and parallel HTTP requests result in queries queuing up over that 1 connection.
I have verified this by monitoring the active open database connections in combination with slow queries (I make several REST calls in parallel). Only 1 connection is opened and 1 query is processed after the other. How do I open a connection pool with, for example, 5-20 connections for parallel operation?
Based on your described usage, the connection pool should be creating connections as requests come in if there are no connections available in the free pool.
Your connectionTimeout is configured to be 10 seconds. One way to ensure that your test really is running in parallel is to make two requests to the server, where the server-side code creates a connection, uses it, waits 11 seconds, then closes the connection.
If your requests are NOT running in parallel, you will not get any exception, since the second request won't start until after the first one finishes; that would be an issue with your test procedure.
If your requests are running in parallel and you do not get any exception output from Liberty, then Liberty is likely making multiple connections; that can be confirmed by enabling J2C trace.
See: https://openliberty.io/docs/21.0.0.9/log-trace-configuration.html
Enable: J2C=ALL
If your requests are running in parallel and no more than one connection is being created, then you will get a ConnectionWaitTimeoutException. This could be caused by the driver not being able to create more than one connection, incorrect use of the Oracle Universal Connection Pool (UCP), or a number of other factors. I would need more information to debug that issue.
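In code terms, the test described above could look roughly like the following JAX-RS resource (the path, class name and sleep length are made up for illustration; the JNDI name matches the dataSource from the question, and the jaxrs and jdbc features need to be enabled in server.xml):

import java.sql.Connection;
import javax.annotation.Resource;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/pooltest")
public class PoolTestResource {

    // Inject the dataSource defined in server.xml.
    @Resource(lookup = "jdbc/myds")
    private DataSource ds;

    @GET
    public String holdConnection() throws Exception {
        // Borrow a connection and hold it longer than connectionTimeout (10 s),
        // then let try-with-resources close it, i.e. return it to the pool.
        try (Connection con = ds.getConnection()) {
            Thread.sleep(11_000L);
            return "done";
        }
    }
}

Fire two of these requests at the same time: if the pool can grow, both succeed and the J2C trace shows two connections; if only one connection can ever be created, the second request fails after about 10 seconds with a ConnectionWaitTimeoutException.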
I have a Symfony command-line task that has a habit of dropping the MySQL connection.
It's a data import task which fetches data from multiple connections. It's not one big query but a few smaller ones.
It seems to drop the connection the first time it is run, about halfway through the script. However, the second time it's run (from the beginning) it always completes the task.
It's not timing out on the query, as the error response I get is that the connection has been dropped, and the query runs OK on its own. So I'm thinking it's some kind of timeout issue that is avoided the second time the script runs, because query caching speeds it up.
So my question is how do I refresh the database connection?
[Doctrine\DBAL\DBALException]
SQLSTATE[HY000]: General error: 2013 Lost connection to MySQL server during query
A different approach is to check if Doctrine is still connected to the MySQL server through the ping() method in the connection. If the connection is lost, close the active connection since it is not really closed yet and start a new one.
if ($em->getConnection()->ping() === false) {
    $em->getConnection()->close();
    $em->getConnection()->connect();
}
I guess you mean reconnecting to the database if the connection is lost for some reason. Given an EntityManager, you can do it the following way:
$success = $_em->getConnection()->connect();
With getConnection, you are retrieving the connection object Doctrine uses (Doctrine\DBAL\Connection), which exposes the connect method.
You can call connect at any time, as it checks whether a connection is already established. If one is, it returns false.
There is also an isConnected method to check whether a connection is established. You could use that to see where exactly the connection is dropping, to get a clearer picture of what is happening.
Can JDBC connections which are closed due to database unavailability be recovered?
To give some background, I get the following errors in sequence. It doesn't look to have been a manual restart. The reason for my question is that I am told the app behaved correctly without
the restart. So if the connection was lost, can it be recovered after a DB restart?
java.sql.SQLException: ORA-12537: TNS:connection closed
java.sql.SQLRecoverableException: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory
java.sql.SQLRecoverableException: ORA-01033: ORACLE initialization or shutdown in progress
No. The connection is "dead". Create a new connection.
A good approach is to use a connection pool, which will test if the connection is still OK before giving it to you, and automatically create a new connection if needed.
There are several open source connection pools to use. I've used Apache's DBCP, and it worked for me.
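For example, with Apache Commons DBCP (one of those open source pools) the "test it before giving it to you" behaviour can be switched on as below; the URL, credentials and validation query are assumptions for the sake of the example:

import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp2.BasicDataSource;

public class PooledAccess {

    private static final BasicDataSource POOL = new BasicDataSource();

    static {
        POOL.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL"); // illustrative
        POOL.setUsername("app");
        POOL.setPassword("secret");
        POOL.setValidationQuery("SELECT 1 FROM DUAL");
        POOL.setTestOnBorrow(true); // validate each connection before handing it out
    }

    public static Connection getConnection() throws SQLException {
        // If a pooled connection died while the DB was down, the pool discards
        // it and opens a fresh one instead of returning the dead one.
        return POOL.getConnection();
    }
}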
Edited:
Given that you want to wait until the database comes back up if it's down (interesting idea), you could implement a custom version of getConnection() that "waits a while and tries again" if the database doesn't respond.
p.s. I like this idea!
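A minimal sketch of that idea with plain JDBC (the retry count, the delay and the method name are made up; in a pooled setup you would wrap the pool's getConnection() in the same way):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class RetryingConnector {

    public static Connection getConnectionWithRetry(String url, String user, String pass)
            throws SQLException, InterruptedException {
        SQLException last = null;
        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                return DriverManager.getConnection(url, user, pass);
            } catch (SQLException e) {
                // e.g. ORA-01033/ORA-01034 while the database is down or starting up:
                // remember the error, wait a while, then try again.
                last = e;
                Thread.sleep(30_000L); // 30 seconds between attempts
            }
        }
        throw last; // still down after the last attempt, give up
    }
}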
The connection cannot be recovered. What can be done is to fail over the connection to another database instance. RAC and Data Guard installations support this configuration.
This is no problem for read-only transactions. However, for transactions that execute DML it can be a problem, especially if the last call to the DB was a commit. In the case of a commit, the client cannot tell whether the commit call completed or not: did the DB fail before executing the commit, or after executing it (but before sending the acknowledgment back to the client)? Only the application has this logic and can do the right thing. If the application does not verify the state of the last transaction after failing over, duplicate transactions are possible. This is a known problem, and most of us have experienced it when buying tickets or in similar web transactions.
For some time now our flagship application has been having mysterious errors. The error message is the generic
[DBNETLIB][ConnectionWrite (send()).]General network error. Check your network documentation.
This is reliably reproduced by leaving the app open for the night and resuming work in the morning. Since it's a backend server app this is a normal scenario.
The funny thing is - we've migrated from SQL Server 7 to 2000 to 2008 and the issue is present on all of them. But what seems to matter is the OS on which we run the app. On WinXP it works fine, on Vista/7 it fails. So the problem is at the client end.
Googling the error message turns up a very wide spectrum of possible causes (since this is a very generic error), and none of the scenarios found there are similar to ours.
So perhaps someone around here will know what the problem is in our case?
You should be able to reproduce this error condition on demand by:
1. Opening a database connection (in your client application)
2. Unplugging the network cable
3. Plugging network cable back in (wait until the network connection is restored)
4. Using the previously opened connection to query the database
As far as I can tell from experience, client side ADO code is not able to consistently determine if an underlying network connection is actually valid or not. Checking if the database connection is open (in the client code) returns true. However, performing any operations on that connection results in a General network error.
The connection pool appears to be able to determine when a connection goes 'bad' so it never returns a bad connection to the application. It simply opens a new connection instead.
So, if a database connection is kept alive for a long time (used or unused) by the application, the underlying TCP/IP connectivity can get broken.
The bottom line is that database connections should be closed and returned back to the connection pool when not in use.
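The failing code here is ADO, but the pattern is the same in any client library. A rough JDBC-flavoured sketch of borrowing a pooled connection per unit of work instead of keeping one open overnight (the DAO, data source, query and table are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderDao {

    private final DataSource pool;

    public OrderDao(DataSource pool) {
        this.pool = pool;
    }

    public int countOrders() throws SQLException {
        // Borrow a pooled connection only for the duration of this call.
        // A connection held open all night is exactly what ends up hitting
        // the "General network error" once the underlying TCP link breaks.
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}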
Edit
Also, depending on the number of clients connecting to the db, not using the connection pool can cause another issue. You may hit the maximum number of sockets open on the server side. This is from memory. Once a connection is closed on the client side, the connection on the server goes into a TIME_WAIT state. By default, the server socket takes about 4 minutes to close, so it is not available to other clients during that time. The bottom line is that there is a limited number of available sockets on the server. Keeping too many connections open can create a problem.
One project I worked on easily hit this socket limit with around 120 users. A new 'feature' was added that absolutely hammered the server, and after a few hours of using the app, things would suddenly slow to a crawl for everyone. SQL Server was not closing enough sockets in time for new connection requests. Although there are 65K sockets altogether, only the first 5000 are made available to ADO (this is a default registry setting, so it can be changed).
The number of sockets in TIME_WAIT state would slowly build up until the OS would not allocate any more. So clients had to wait until server side sockets closed and a new connection could then be created.
Have you tried disabling SNP/TCP Chimney Offload?
Had a similar error. For me it was indirectly caused by mismatched calls to WSACleanup and WSAStartup.
The program called WSACleanup more times than WSAStartup. This would cause a reference counter (somewhere in the sockets library) to reach zero too early.
I think effectively from that moment on all sockets owned by the process are broken.
And this would also kill the SQL client since it uses sockets to 'talk' to the SQL server as well.
I have a client-server app that uses the .NET SqlClient Data Provider to connect to SQL Server - pretty standard stuff. By default, how long must a connection be idle before the connection pooling manager will close the database connection and remove it from the pool? What setting, if any, controls this?
This MSDN document only says
The connection pooler removes a connection from the pool after it has been idle for a long time, or if the pooler detects that the connection with the server has been severed.
A few years ago the answer beneath described the situation, but it has since changed, so now you can refer to the source and write up a summary :)
Old answer
This excellent article tells us what we need to know, using reflection to reveal the inner workings of connection pooling.
As I understand it, 'closed' connections are cleaned up periodically on a semi-random interval. The cleanup process runs somewhere between every 2 min and every 3 min 50 s, but it needs to run twice before a 'closed' connection will be properly closed. Therefore, after 7 min 40 s of being 'closed' the underlying SQL connection should be properly closed, but it could be as short as 2 min. At the time of writing, the first connection pool created in a process would always have a timer interval of 3 min 10 s, so you'd normally see SQL connections being closed somewhere between 3 min 10 s and 6 min 20 s after you call Close() on the ADO.NET object.
Obviously this uses undocumented code so could change in future - or could even have changed since that article was written.
Please go through this:
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring%28VS.80%29.aspx
The part "The following table lists the valid names for connection pooling values within the ConnectionString." seems to be of interest to you.
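For illustration, those pooling keywords go straight into the connection string. The values below are made up, and note that none of them directly sets the idle timeout asked about above; Connection Lifetime is the closest documented knob, discarding connections older than the given number of seconds when they are returned to the pool:

Server=myServer;Database=myDb;Integrated Security=true;Pooling=true;Min Pool Size=5;Max Pool Size=50;Connection Lifetime=300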