I have a JDBC DB2 error: operation timeout or deadlock, error number -913.
Scenario: Operation 1 is updating a row in a table, which may take 2 minutes to complete.
Operation 2 is trying to read the same row by quote number.
The isolation level is the default, CS (TRANSACTION_READ_COMMITTED).
I'm seeing 'operation timeout or deadlock' after 60 seconds.
Is this a timeout or a deadlock scenario?
Is there any way that I can avoid the deadlock by increasing the connection timeout or the lock timeout?
Any suggestions would be appreciated.
You can increase the lock timeout by modifying the locktimeout parameter.
db2 update db cfg using locktimeout 180
This changes the wait to 180 seconds (3 minutes), long enough to cover the 2-minute update. You can also set it to -1 to wait indefinitely.
For more information: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.config.doc/doc/r0000329.html
The scenario is not a deadlock, because operation 2 does not hold any resource; it is just trying to read the row being updated.
2 minutes for a row? What the heck are you trying to do?
In any case, yes, this is a timeout issue: your operation 2 is using the (presumably) default timeout. This can be set per file, and (at least on the iSeries, and probably on all versions of DB2) it defaults to 60 seconds.
I'm not sure if this value can be set from SQL alone; you may have to use the iSeries' native commands CHGPF or CHGLF (parameters WAITFILE/WAITRCD, in seconds), if that's your platform (you didn't specify). I don't really recommend it, though: see if you can't get that update statement running faster, or see about changing your architecture to avoid the contention.
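For illustration only, a sketch of what that might look like in CL (the library, file, and 120-second value are all hypothetical; check the parameters against your OS/400 release):

CHGPF FILE(MYLIB/MYTABLE) WAITRCD(120)  /* allow record-lock waits up to 120 seconds */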
I'd been running a query for 7 days and it was near its end when I got a network error:
Msg 121, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
The query is still running in its process on SQL Server and is not yet rolling back.
It was looping through 10 parameters and, for each parameter, carrying out however many updates were required to match up all the records (somewhere between 10 and 50 updates per parameter) until no rows were affected, then moving on to the next parameter.
It had reached the point where only 1 row was being updated at a time on the last parameter, after 7 days, when I had a short network drop.
I have used a dirty read to copy the results out to a different table.
It still shows up in Activity Monitor (active expensive queries), in sp_who/sp_who2, and in sys.sysprocesses.
After the update statement it should go on to print the number of iterations and then deallocate the cursor that the parameters were passed in through.
It is a WHILE @@ROWCOUNT > 0 loop inside a WHILE @@FETCH_STATUS = 0 loop, where the cursor is going through a comma-separated list of parameters.
Looking at sys.sysprocesses, the CPU count continues to increase and it shows 2 open transactions (open_tran = 2).
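For reference, this is the kind of polling I've been doing (spid 57 stands in for the actual process ID):

SELECT spid, status, cpu, physical_io, open_tran, cmd
FROM sys.sysprocesses
WHERE spid = 57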
Is it possible to connect to a specific process in SQL Server?
If so, what client can I use (SQL Server Management Studio, code, or the mssql tool on Linux)?
If not possible to connect, can I monitor to see if it completes?
Any other suggestions?
I used a dirty read to copy all the data out of the table being updated.
The query did complete; it continued running for about an hour and then committed everything prior to the process ending.
I was then able to compare the dirty read copy against the final results.
Next time I go for updating with a loop until no more updates, I will put in commits as it goes along.
It was roughly 1,000 ten-minute update queries that were run. The previous set of update queries were about 4 minutes each, so I looped them up with parameters. One mistake, easy to make, was to add in 10 parameters rather than 5, so 7 days vs 3.5 days, against an initial estimate of 1.5 days.
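For illustration, a minimal sketch of the batched-commit shape I have in mind (the table and column names are made up):

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION
    -- match up one batch of records; dbo.Records/Matched are placeholders
    UPDATE TOP (10000) dbo.Records
    SET Matched = 1
    WHERE Matched = 0
    SET @rows = @@ROWCOUNT
    COMMIT -- each batch is durable on its own, so a dropped connection
           -- loses at most the current batch instead of days of work
END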
I'm executing queries against SQL Server from an application. Sometimes one of those queries runs very long. Too long, actually; it usually indicates that the query will eventually fail. I would like to specify a maximum duration, after which the query should just fail.
Is there a way to specify a command timeout in T-SQL?
I know a (connection and) command timeout can be set in the connection string, but in this application I cannot control the connection string. And even if I could, the timeout would need to stay longer for the other queries.
As far as I know, you cannot limit query time unless it is specified in the connection string (which you can't change) or the query is executed over a linked server.
Your DBA can set a timeout on a linked server, as well as on queries to it, but a direct query does not let you do so yourself. The bigger question I would have is: why does the query fail? Is there a preset timeout already on the server (hopefully)? Are you running out of memory and paging? Or is it any of a million other reasons? Do you have a DBA? Because if one of my servers was being hammered by such bad code, I would be contacting the person who was executing it. If yours hasn't, you should reach out and ask for help determining the failure reason.
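For reference, a hedged example of the linked-server side (the server name is a placeholder; the timeout value is in seconds):

EXEC sp_serveroption 'MyLinkedServer', 'query timeout', '30'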
If the unexpectedly long duration happens to be due to a (local) lock, there is a way to break out of it. Set the lock timeout before running the query:
SET LOCK_TIMEOUT 600000 -- Wait 10 minutes max to get the lock.
Do not forget to set it back afterwards to prevent subsequent queries on the connection from timing out:
SET LOCK_TIMEOUT -1 -- Wait indefinitely again.
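For illustration, a sketch of catching the timeout so the batch can react instead of just failing (dbo.Orders is a placeholder for the real query; 1222 is the "Lock request time out period exceeded" error, and TRY/CATCH needs SQL Server 2005 or later):

SET LOCK_TIMEOUT 600000 -- wait 10 minutes max to get the lock

BEGIN TRY
    SELECT COUNT(*) FROM dbo.Orders -- stands in for the slow query
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222
        PRINT 'Gave up waiting for the lock.'
END CATCH

SET LOCK_TIMEOUT -1 -- wait indefinitely again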
I have several very expensive queries which hog resources and seem to put the system over the top.
Is there a delay function I can call to wait until processor resources come back down in SQL Server 2000 - 2008?
My eventual goal is to go back and make these more efficient and use a sproc, but in the meantime I need to get them working ASAP because I'm rewriting legacy code.
You could try something like this:

DECLARE @Busy int
       ,@Ticks int

SELECT @Busy  = @@CPU_BUSY
      ,@Ticks = 7777 -- you have to determine this value based on your machine

WAITFOR DELAY '00:00:10' -- 10 seconds

WHILE @@CPU_BUSY - @Ticks > @Busy
BEGIN
    -- too busy, wait longer
    SET @Busy = @@CPU_BUSY
    WAITFOR DELAY '00:00:10' -- 10 seconds
END

EXEC YourProcedureHere
To determine the @Ticks value, just write a loop to print out the difference between @@CPU_BUSY values every 10 seconds. When the system is at your low load, use that difference as @Ticks.
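For example, a rough calibration loop along those lines (the one-minute sample window is arbitrary):

DECLARE @Prev int, @i int
SELECT @Prev = @@CPU_BUSY, @i = 0

WHILE @i < 6 -- sample for one minute
BEGIN
    WAITFOR DELAY '00:00:10'
    PRINT CONVERT(varchar(20), @@CPU_BUSY - @Prev) -- ticks used in the last 10 seconds
    SELECT @Prev = @@CPU_BUSY, @i = @i + 1
END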
You can't control or throttle CPU except in the higher editions of SQL Server 2008 (via the Resource Governor).
Your best option seems to be to set options so that only half (or fewer) of your CPUs can be used for any one query. This can be done in two ways:
at the server level, for all queries, using the "max degree of parallelism" option
per query, for the offending query, with a MAXDOP hint (examples below)
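For illustration, both forms might look like this (4 is an arbitrary cap, and dbo.BigTable is a placeholder):

-- server level, affects all queries (requires sysadmin):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE

-- per query, with a hint:
SELECT COUNT(*)
FROM dbo.BigTable
OPTION (MAXDOP 4)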
Also see:
KB article "General guidelines to use to configure the MAXDOP option"
SO question: Control the CPU usage during TSQL query- sql 2008 (not a duplicate of this)
Edit:
The question would be: do you want to delay execution (with all the issues that brings, like CommandTimeout and user response time), or improve concurrency for all queries?
This answer should improve concurrency: I usually deal with client apps, and I can't make a business user wait.
When delaying execution, you also have to delay all queries (say, to disallow the expensive queries from running), which reduces concurrency throughout, as calls will back up. And you'll have to be careful about two expensive queries starting at around the same time.
The only thing I can think of to actually kick things off in quiet times is to use Scheduled Tasks and osql to execute your statements. Scheduled Tasks has the option to run when the machine is idle.
I'm not sure about the 50% bit though.
This strategy shouldn't be too sensitive to SQL version either.
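For illustration, the scheduled task could run something like this (the server and procedure names are hypothetical; -E uses a trusted connection):

osql -S MYSERVER -E -Q "EXEC dbo.ExpensiveProc"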
Hi, I am using C on Solaris. I have a process which connects to a database. I need to determine if the same process has been running the same query for a long time (say 15 seconds); if so, I need to disconnect and re-establish the database connection.
I know I can check for the same processes with the process IDs. But I am more concerned with knowing how to determine whether the same process is running the same query.
Any help is deeply appreciated.
"I need to determine if the same process is running the same query for a long time (say 15 seconds) then I need to disconnect and re establish the database connections."
Not sure what problem you are addressing.
If you drop the connection, the database session may persist for some time, potentially still holding locks too.
Also, if a PL/SQL block is looping and running 1,000 queries each taking a tenth of a second, should that be counted as one statement for your abort logic?
You can look at V$SESSION and the SQL_ID or SQL_HASH_VALUE, then check again after fifteen seconds and see if it has changed. You can also look at V$SESSTAT / V$STATNAME for statistics like "execute count" or "user calls" to determine whether it is the same SQL running for a long time or multiple SQL calls.
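For illustration, a sketch of the sampling (run it, wait fifteen seconds, run it again; :sid is the target session):

SELECT sid, sql_id, sql_hash_value
FROM v$session
WHERE sid = :sid

SELECT sn.name, ss.value
FROM v$sesstat ss
JOIN v$statname sn ON sn.statistic# = ss.statistic#
WHERE ss.sid = :sid
AND sn.name IN ('execute count', 'user calls')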
If you start your queries straight from your client, you can check V$SESSION.LAST_CALL_ET. This column shows how many seconds ago the last server call started for this session; a server call in this case is the execute query. This won't work if your client starts a block of PL/SQL and happens to start the queries from there. In that case LAST_CALL_ET will point to the start of the PL/SQL block, since that was the last thing started by your session.
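For example, a sketch that flags a session whose current call has been running at least fifteen seconds (:sid is the target session):

SELECT sid, serial#, last_call_et
FROM v$session
WHERE sid = :sid
AND status = 'ACTIVE'
AND last_call_et >= 15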
This could be the easiest.
Does it help?
Ronald - http://ronr.blogspot.com
My advice is that you fix the root cause instead of treating the symptoms.
Is it possible in DB2 to detect whether a table is locked or not? Whenever we use a SELECT statement and that table is locked (maybe because of an ongoing insert or delete), we have to wait till the table is unlocked.
In our application this sometimes takes even 2-3 minutes. What I think is, if I can have some mechanism by which I can detect the locked table, then I will not even try to fetch the records; instead I will display a message.
And not only in DB2: is it possible to detect this in any database?
I've never used DB2, but according to the documentation it seems you can use the following to make queries not wait for a lock:
SET CURRENT LOCK TIMEOUT NOT WAIT
Alternatively, you can set the lock timeout value to 0
SET CURRENT LOCK TIMEOUT 0
Both statements have the same effect.
Once you have this, you can try to select from the table and catch the error.
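For illustration, a sketch of that pattern (the table and column names are made up; on DB2 LUW the lock conflict typically surfaces as SQL0911N, which the application can catch and turn into a "data is busy, try again" message):

SET CURRENT LOCK TIMEOUT NOT WAIT;

SELECT quote_no, status
FROM app.quotes
WHERE quote_no = 12345; -- fails immediately instead of blocking if the row is locked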
I would recommend against NOT WAIT; rather, specify a low lock timeout (10-30 seconds). If the target table is only locked temporarily (a small update, say, lasting 1 second), NOT WAIT makes your second app fail immediately, whereas with a 10-second timeout the second app would simply wait for the first app to COMMIT or ROLLBACK (1 second) and then move forward.
Also consider that there's a bit of a "first come, first served" policy when it comes to handing out locks: if the second app "gives up", a third app could get in and grab the locks needed by the second. It's possible for the second app to experience lock starvation because it keeps giving up.
If you are experiencing ongoing concurrency issues, consider lock monitoring to get a handle on how the database is being accessed. There are lots of useful statistics (such as average lock wait time) that can help you tune your parameters and application behaviour.
DB2 V9.7 Infocenter - Database Monitoring
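As a starting point, and assuming DB2 LUW 9.7, where the SYSIBMADM.MON_LOCKWAITS administrative view should be available, something like this shows who is currently waiting on what:

SELECT * FROM SYSIBMADM.MON_LOCKWAITS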