Set query timeout in T-SQL - sql-server

I'm executing queries against SQL Server from an application. Sometimes one of those queries runs for a very long time. Too long, in fact: a long duration usually means the query will eventually fail anyway. I would like to specify a maximum duration, after which the query should simply fail.
Is there a way to specify a command timeout in T-SQL?
I know a connection timeout and a command timeout can be set in the connection string. But in this application I cannot control the connection string. And even if I could, the other queries need a longer timeout.

As far as I know, you cannot limit query time from within T-SQL unless the timeout is specified in the connection string (which you can't change) or the query is executed over a linked server.

Your DBA can set a timeout on a linked server, and on queries in general, but a direct query does not let you set one yourself. The bigger question I would have is why the query fails in the first place. Is there a preset timeout on the server (hopefully)? Are you running out of memory and paging? It could be any of a million other reasons. Do you have a DBA? If one of my servers were being hammered by such bad code, I would be contacting the person executing it. If your DBA hasn't contacted you, reach out and ask for help determining the reason for the failure.

If the unexpectedly long duration happens to be due to a (local) lock, there is a way to break out of it. Set the lock timeout before running the query:
SET LOCK_TIMEOUT 600000 -- Wait 10 minutes max to get the lock.
Do not forget to set it back afterwards to prevent subsequent queries on the connection from timing out:
SET LOCK_TIMEOUT -1 -- Wait indefinitely again.
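A minimal sketch of the full pattern (the table name is hypothetical; error 1222 is the lock-timeout error SQL Server raises, and `THROW` requires SQL Server 2012 or later):

```sql
-- Abort after 10 minutes of waiting for a lock, then restore the default.
SET LOCK_TIMEOUT 600000;  -- milliseconds

BEGIN TRY
    SELECT * FROM dbo.SomeBigTable;  -- hypothetical table
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222  -- "Lock request time out period exceeded."
        PRINT 'Gave up waiting for the lock.';
    ELSE
        THROW;  -- re-raise anything else
END CATCH;

SET LOCK_TIMEOUT -1;  -- back to waiting indefinitely
```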

Related

I'm getting the default execution timeout even though I've set everything to zero

I've got a stored procedure that's timing out despite the fact that I've set both the server's execution-timeout and the connection's execution-timeout to zero, which should make it unlimited.
It times out at exactly 10 minutes, which is the default timeout, so it would seem to be still getting that from somewhere.
Any ideas?
Note that this stored procedure used to run for hours without timing out, but recently I made some changes to it, such as iterating with a cursor, using a temporary table, and adding some explicit transactions -- maybe that has something to do with the problem.
Fixed it! It seems there's a third (and possibly even a fourth) place where the timeout can be set: under Options -> Query Execution (the possible fourth is Query Options in the Query window's context menu).

Fail when a lock on an object exists in SQL Server

I followed the example given here of sys.dm_tran_locks, but instead of having the second session block until the first session rolls back, I need it to fail automatically if the lock exists, perhaps after waiting a short amount of time.
Is there any parameter that I could configure to get that behavior? Other solutions are welcome.
You will need to add a SET option to your query:
SET LOCK_TIMEOUT 1800; --milliseconds
GO
When you run the above in a second session, it will wait only for the specified time and then return an error.
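A sketch of the two-session behavior (the table name is hypothetical):

```sql
-- Session 1: take and hold a lock inside an open transaction.
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'P' WHERE OrderId = 1;  -- not committed yet

-- Session 2: give up after 1.8 seconds instead of blocking.
SET LOCK_TIMEOUT 1800;  -- milliseconds
SELECT Status FROM dbo.Orders WHERE OrderId = 1;
-- Fails with error 1222: "Lock request time out period exceeded."
```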

DB2 operation Timeout or deadlock

I have a JDBC DB2 error, "operation timeout or deadlock", error number -913.
Scenario: operation 1 updates a row in a table, which may take two minutes to complete.
Operation 2 tries to read the same row by quote number.
The default isolation level is CS (cursor stability, i.e. read committed).
I'm seeing "operation timeout or deadlock" after 60 seconds.
Is this a timeout scenario or a deadlock scenario?
Is there any way I can avoid it by increasing the connection timeout or the lock timeout?
Any suggestions would be appreciated.
You can increase the lock timeout by modifying the locktimeout parameter.
db2 update db cfg using locktimeout 180
This sets the lock wait to 180 seconds (three minutes). You can also use -1 to wait indefinitely.
For more information http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.config.doc/doc/r0000329.html
The scenario is not a deadlock, because operation 2 does not hold any resource; it is just trying to read the row being updated.
Two minutes for one row? What the heck are you trying to do?
In any case, yes, this is a timeout issue: your operation 2 is using the (presumably) default timeout. This can be set per file and (at least on the iSeries, and probably on all versions of DB2) defaults to 60 seconds.
I'm not sure whether this value can be set from SQL alone -- you have to use the iSeries' native commands CHGPF or CHGLF (parameters WAITFILE/WAITRECORD, in seconds), if that's your platform (you didn't specify). I don't really recommend it, though: see if you can't get that update statement running faster, or see about changing your architecture to avoid the contention somehow.
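If you are on DB2 LUW rather than the iSeries, a per-session alternative to changing the database configuration is the special register (a sketch; check that your version supports it):

```sql
-- DB2 LUW: override the database-wide locktimeout for this session only.
SET CURRENT LOCK TIMEOUT 120;  -- seconds; -1 waits indefinitely
```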

Terminate a long running process

Hi, I am using C on Solaris. I have a process that connects to a database. If that process has been running the same query for a long time (say 15 seconds), I need to disconnect and re-establish the database connection.
I know I can check for the same processes with the process IDs. But I am more concerned with how to determine whether the same process is running the same query.
Any help is deeply appreciated.
Not sure what problem you are addressing.
If you drop the connection, then the database session may persist for some time. Potentially still holding locks too.
Also, if a PL/SQL block is looping and running 1000 queries, each taking a tenth of a second, should that count as one statement for your abort logic?
You can look at V$SESSION and the SQL_ID or SQL_HASH_VALUE, then check again after fifteen seconds and see whether it has changed. You can also look at V$SESSTAT / V$STATNAME for statistics such as "execute count" or "user calls" to determine whether it is the same SQL running for a long time or multiple SQL calls.
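That check can be sketched as follows (the SID value is illustrative; run the query twice, fifteen seconds apart, and compare):

```sql
-- Oracle: what is session 123 (hypothetical SID) currently executing,
-- and how many seconds ago did its last server call start?
SELECT sid, sql_id, sql_hash_value, last_call_et
FROM   v$session
WHERE  sid = 123;
```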
If you start your queries straight from your client, you can check V$SESSION.LAST_CALL_ET. This column shows how many seconds ago the last server call started for this session; a server call in this case is the execute of the query. This won't work if your client starts a PL/SQL block and happens to start the queries from there: LAST_CALL_ET will then point to the start of the PL/SQL block, since that was the last thing started by your session.
This could be the easiest.
Does it help?
Ronald - http://ronr.blogspot.com
My advice is to fix the root cause instead of treating the symptoms.

Is it possible in DB2 or in any Database to detect if the table is locked or not?

Is it possible in DB2 to detect whether a table is locked? Whenever we use a SELECT statement and that table is locked (perhaps because of an ongoing insert or delete), we have to wait until the table is unlocked.
In our application this sometimes takes even 2-3 minutes. If I had some mechanism to detect the locked table, I would not even try to fetch the records; instead I would show a message.
Not only in DB2 -- is it possible to detect this in any database?
I've never used DB2, but according to the documentation it seems you can use the following to make queries not wait for a lock:
SET CURRENT LOCK TIMEOUT NOT WAIT
Alternatively, you can set the lock timeout value to 0:
SET CURRENT LOCK TIMEOUT 0
Both statements have the same effect.
Once you have this, you can try to select from the table and catch the error.
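A sketch of that pattern (the table name is hypothetical; on DB2 LUW the lock-timeout error is typically SQL0911N with reason code 68):

```sql
-- DB2: fail immediately instead of queueing behind a conflicting lock.
SET CURRENT LOCK TIMEOUT NOT WAIT;

-- If another session holds an incompatible lock, this SELECT returns a
-- lock-timeout error right away instead of blocking; catch that error
-- in your application and show your message.
SELECT COUNT(*) FROM myschema.mytable;  -- hypothetical table
```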
I would recommend against NOT WAIT and instead suggest a low LOCK TIMEOUT (10-30 s). If the target table is only locked briefly (a small update lasting, say, one second), your second app would time out immediately with NOT WAIT; with a 10 s timeout, it would simply wait for the first app to COMMIT or ROLLBACK (one second) and then move forward.
Also consider that there's a bit of a "first come, first served" policy when it comes to handing out locks: if the second app gives up, a third app could get in and grab the locks needed by the second. It's possible for the second app to experience lock starvation because it keeps giving up.
If you are experiencing ongoing concurrency issues, consider lock monitoring to get a handle on how the database is being accessed. There are lots of useful statistics (such as average lock-wait time) that can help you tune your parameters and application behaviour.
DB2 V9.7 Infocenter - Database Monitoring
