I have some ADO.NET code that is suddenly failing on my machine (luckily not on any of the customers'!) with "...ran out of resources." or "...connection closed." errors. Looking in SQL Server Management Studio, I see that there are more than 55 open connections, which I guess is the cause of the error.
I strongly suspect this is due to too many open queries, and that that in turn is due to something (a recordset?) not being closed properly. Is there any simple way to list the SQL for these connections? The cmd column in sysprocesses always shows "AWAITING COMMAND". If not in Management Studio, perhaps in ADO?
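For reference, the kind of query I was hoping for would look something like this, a sketch using the dynamic management views (assumes SQL Server 2005 or later and VIEW SERVER STATE permission):

```sql
-- Sketch: last statement seen on each user connection.
SELECT s.session_id, s.host_name, s.program_name, s.status,
       t.text AS last_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
  ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.is_user_process = 1;
```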
We are developing a C# application that uses ODBC with the "Adaptive Server Enterprise" driver to extract data from a Sybase database.
We have a long SQL batch that creates a lot of intermediate temporary tables and returns several DataTable objects to the application. We are seeing exceptions saying TABLENAME not found, where TABLENAME is one of our intermediate temporary tables. When I check the status of the OdbcConnection object in the debugger, it is Closed.
My question is very general: is this the price you pay for long-running, complicated queries, or is there a reliable way to get rid of such spurious disconnects?
Many thanks in advance!
There are a couple of ODBC timeout parameters; see the SDK docs at:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc20116.1550/html/aseodbc/CHDCGBEH.htm
Specifically, CommandTimeOut and ConnectionTimeOut, which you can set accordingly.
But it's much more likely that you're being blocked, or something similar, while the process is running. Ask your DBA to check the query plan for the various steps in your batch and look for specific problem areas, such as table scans, which could be masking your timeout issue.
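For illustration, a sketch of an ODBC connection string carrying those timeout keys (key names as in the SDK docs linked above; the server, port, database, and credential values are all placeholders):

```
Driver={Adaptive Server Enterprise};server=myase;port=5000;db=mydb;
uid=myuser;pwd=secret;CommandTimeOut=600;ConnectionTimeOut=30;
```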
Sometimes a server running an instance of SQL Server needs a restart. This typically occurs when memory is fully used, and rebooting somehow drastically resolves the issue. The cause of the full memory can be difficult to pinpoint (it may be an old service with poor garbage collection or other similar issues, or simply built-in Windows OS problems...).
When the server is in such an unstable state, a client-server application runs into trouble: even simple queries fail, because SQL Server cannot handle them and returns error messages.
What I'd like to achieve is, just after the connection is established, to ask the server "do you feel good?".
Is there a way to perform this in T-SQL?
Somehow my desired logic is:
connect
ask the server "do you feel good" ("EXEC sp_doyoufeelgood")
if it feels good continue
else close the application and inform the user "the server is encountering some problems, please contact your sysadmin"
Is there a reliable way to check for a SQL Server instance status?
Take a look at this one, it could be interesting:
sp_Blitz® http://www.brentozar.com/blitz/
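Short of a full sp_Blitz run, a minimal "do you feel good" probe can be as simple as a trivial query issued right after connecting. A sketch (the second query assumes VIEW SERVER STATE permission):

```sql
-- Cheapest possible liveness probe: fails fast if the instance
-- cannot service even trivial requests.
SELECT 1 AS alive;

-- Optional: a rough memory-pressure indicator (persistently low
-- values suggest the buffer pool is being churned).
SELECT cntr_value AS page_life_expectancy
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';
```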
In our college, we are conducting a contest in the form of multiple-choice questions.
For that we are using VB 6 as the front end and MS Access as the back end.
Ref:
The VB 6 application runs with no problem, and the participant's entry is added into the database along with their score, when 1 or 2 clients access the database simultaneously.
Problem:
But when more than 2 clients access the database simultaneously, the application crashes.
1. In some clients, we get a runtime error printing a large negative value with the message "Operations query cannot modify the database".
2. In other clients, VB 6.0 crashes and closes suddenly.
These errors occur when we try to access the database using the OK and SUBMIT buttons.
Could you tell me why this error occurs and how I can correct it?
My questions are:
1. Is putting the burden of all clients on a single laptop the problem here? If there is some other problem, please explain it.
2. Why am I getting the error "Operation query cannot modify database", and if that is the cause, why does it work when only 1 or 2 clients access the database simultaneously?
Access databases (and other directly file-based DBs) are not really built for concurrent multi-user access. There are some facilities in place to make it work, but in my experience it is quite unreliable.
You need a database server running, which can allow multiple clients access to the same database simultaneously. A free option is MySQL; there is also a free version of Microsoft SQL Server (SQL Server Express) available.
"Operation query cannot modify database"
Badly designed Access databases can have issues with users trying to modify records if the tables don't have defined primary keys. This is especially true with multiple simultaneous users, because Access literally can't tell which record to modify if two people try to do the same thing. Sometimes it will let you insert but not update.
Further, if you are looking for performance, Access is just the wrong tool. It has very little in the way of performance-tuning options or abilities. SQL Server Express or MySQL would have more tools available to diagnose and fix a performance issue.
I'm working on a project where a PHP dialog system is communicating with a Microsoft SQL Server 2008 and I need more speed on the PHP side.
After profiling my PHP scripts, I discovered that a call to mssql_connect() needs about 200 milliseconds on that particular system. For some simple dialogs this is about 60% of the whole script runtime. So I could gain a huge performance boost by speeding up this call.
I already assured that only one single connection handle is produced for every request to my PHP scripts.
Is there a way to speed up the initial connection with SQL Server? Some restrictions apply, though:
I can't use PDO (there's a lot of legacy code here that won't work with it)
I don't have access to the SQL Server configuration, so I need a PHP-side solution
I can't upgrade to PHP 5.3.X, again because of crappy legacy code.
Hm. I don't know much about MS SQL, but optimizing that single call may be tough.
One thing that comes to mind is trying mssql_pconnect(), of course:
First, when connecting, the function would first try to find a (persistent) link that's already open with the same host, username and password. If one is found, an identifier for it will be returned instead of opening a new connection.
But you probably already have thought of that.
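For reference, the swap is essentially a one-line change; a sketch with placeholder connection details:

```php
// mssql_pconnect() reuses an open persistent link with the same
// host/username/password instead of opening a new connection.
$link = mssql_pconnect('dbhost', 'dbuser', 'secret');  // placeholders
if ($link === false) {
    die('Unable to connect to SQL Server');
}
mssql_select_db('mydb', $link);
```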
The second thing, you are not saying whether MS SQL is running on the same machine as the PHP part, but if it isn't, maybe there's a basic networking issue at hand? How fast is a classic ping between one host and the other? The same would go for a virtual machine that is not perfectly configured. 200 milliseconds really sounds very, very slow.
Then, in the User Contributed Notes to mssql_connect(), there is talk about a native PHP driver for MS SQL. I don't know anything about it, whether it retains the "old" syntax, or whether it is usable in your situation, but it might be worth a look.
The User Contributed Notes are always worth a look, there are tidbits like this one:
Just in case it helps people here... We were being run ragged by extremely slow connections from IIS6 --> SQL Server 2000. Switching from CGI to ISAPI fixed it somewhat, but the initial connection still took along the lines of 10 seconds, and eventually the connections wouldn't work any more.
The solution was to add the database server IP address to the HOST file on the server, pointing it to the internal machine name. Looks like some kind of DNS lookup was the culprit.
Now connections and queries are flying, and the world is once again right.
We have been going through quite a bit of optimization between php 5.3, FreeTDS and mssql lately. Assuming that you have adequate server resources, we are finding that two changes made the database interaction much faster and much more reliable.
1. Using mssql_pconnect() instead of mssql_connect() eliminated an intermittent "cannot connect to server" issue. I read a lot of posts that indicated negative issues associated with persistent connections, but so far we haven't seen anything to suggest that it's a problem. The PHP server seems to keep between 20 and 60 persistent connections open to the DB server, depending upon load.
2. Using the IP address of the database server in the freetds.conf file, instead of the hostname, also lent a speed increase.
The only thing I could think of is to use an IP address instead of a hostname for the SQL connection, to spare the DNS lookup. Maybe persistent connections are also an option, and a little bit faster.
We are having trouble with a Java web application running within Tomcat 6 that uses JDBC to connect to a SQL Server database.
After a few requests, the application server dies, and in the log files we find exceptions related to database connection failures.
We are not using any connection pooling right now, and we are using the standard JDBC-ODBC bridge driver to connect to SQL Server.
Should we consider using connection pooling to eliminate the problem?
Also, should we change our driver to something like jTDS?
That is the correct behavior if you are not closing your JDBC connections.
You have to call the close() method of each JDBC resource when you are finished using it and the other JDBC resources you obtained with it.
That goes for Connection, Statement/PreparedStatement/CallableStatement, ResultSet, etc.
If you fail to do that, you are hoarding potentially huge and likely very limited resources on the SQL server, for starters.
Eventually, connection requests will not be granted, and queries will fail or hang while executing or returning results.
You may also notice your INSERT/UPDATE/DELETE statements hanging if you fail to commit() or rollback() at the conclusion of each transaction, when you have not set the autoCommit property to true.
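The rigor described above looks like this in Java 6 style, which matches the Tomcat 6 era (on Java 7+ you would use try-with-resources instead). A sketch; the URL and query in any real call are placeholders:

```java
import java.sql.*;

public class JdbcHygiene {
    // Runs a query and returns the first column of the first row,
    // closing every JDBC resource in a finally block so nothing
    // leaks even when the query throws.
    public static String runQuery(String url, String sql) throws SQLException {
        Connection con = null;
        Statement st = null;
        ResultSet rs = null;
        try {
            con = DriverManager.getConnection(url);
            st = con.createStatement();
            rs = st.executeQuery(sql);
            return rs.next() ? rs.getString(1) : null;
        } finally {
            // Close in reverse order of acquisition; swallow close()
            // errors so they cannot mask the original exception.
            if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
            if (st != null) try { st.close(); } catch (SQLException ignored) {}
            if (con != null) try { con.close(); } catch (SQLException ignored) {}
        }
    }
}
```

Closing in reverse order of acquisition, and ignoring failures from close() itself, guarantees every resource gets a close attempt even if an earlier one fails.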
What I have seen is that if you apply the rigor mentioned above to your JDBC client code, then JDBC and your SQL server will work wonderfully smoothly. If you write crap, then everything will behave like crap.
Many people write JDBC calls expecting "something" else to release each thing by calling close() because that is boring and the application and server do not immediately fail when they leave that out.
That is true, but those programmers have written their programs to play "99 bottles of beer on the wall" with their server(s).
The resources will become exhausted and requests will tend to result in one or more of the following happening: connection requests fail immediately, SQL statements fail immediately or hang forever or until some godawful lengthy transaction timeout timer expires, etc.
Therefore, the quickest way to solve these types of SQL problems is not to blame the SQL server, the application server, the web container, JDBC drivers, or the disappointing lack of artificial intelligence embedded in the Java garbage collector.
The quickest way to solve them is to shoot the guy who wrote the JDBC calls in your application that talk to your SQL server with a Nerf dart. When he says, "What did you do that for...?!" Just point to this post and tell him to read it. (Remember not to shoot for the eyes, things in his hands, stuff that might be dangerous/fragile, etc.)
As for connection pooling solving your problems... no. Sorry, connection pools simply speed up the call to get a connection in your application by handing it a pre-allocated, perhaps recycled connection.
The tooth fairy puts money under your pillow, the Easter bunny puts eggs & candy under your bushes, and Santa Claus puts gifts under your tree. But, sorry to shatter your illusions: the SQL server and JDBC driver do not close everything just because you "forgot" to close all the stuff you allocated yourself.
I would definitely give jTDS a try. I've used it in the past with Tomcat 5.5 with no problems. It seems like a relatively quick, low impact change to make as a debugging step. I think you'll find it faster and more stable. It also has the advantage of being open source.
In the long term, I think you'll want to look into connection pooling for performance reasons. When you do, I recommend having a look at c3p0. I think it's more flexible than the built in pooling options for Tomcat and I generally prefer "out of container" solutions so that it's less painful to switch containers in the future.
It's hard to tell really because you've provided so little information on the actual failure:
After a few requests, the application server dies and in the log files we find exceptions related to database connection failures.
Can you tell us:
1. exactly what the error is that you're seeing
2. a small example of the code where you connect and service one of your requests
3. whether it fails after a consistent number of transactions, or seemingly at random
I have written a lot of database-related Java code (pretty much all my code is database-related), and have used the MS driver, the jTDS driver, and the one from JNetDirect.
I'm sure if you provide us more details we can help you out.