Reconnecting to a long-running query on SQL Server

I've been running a query for 7 days, and it was near its end when I got a network error:
Msg 121, Level 20, State 0, Line 0
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
The query is still running in its process on SQL Server and is not yet rolling back.
It was looping through 10 parameters, and for each parameter carrying out however many updates were required to match up all the records (somewhere between 10 and 50 updates per parameter) until no rows were affected, then moving on to the next parameter.
After 7 days it had reached the point where only 1 row was being updated at a time on the last parameter, when I had a short network drop.
I have used a dirty read to copy the results out to a different table.
The query still shows up in Activity Monitor (active expensive queries), in sp_who/sp_who2, and in sys.sysprocesses.
After the update statement it should go on to print out the number of iterations and then deallocate the cursor through which the parameters were passed.
It is a WHILE rowcount > 0 loop inside a WHILE @@FETCH_STATUS = 0 loop, where the cursor is going through a comma-separated list of parameters, roughly as sketched below.
Looking at sys.sysprocesses, the CPU count continues to increase and it shows 2 open_tran.
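For reference, the loop described above has roughly this shape; this is a sketch only, and the table, column, and variable names are hypothetical rather than the original query:

-- A sketch of the loop shape described above; all table, column, and
-- variable names are hypothetical.
DECLARE @param_list varchar(200) = 'p1,p2,p3';   -- the comma-separated parameters
DECLARE @param varchar(50);

DECLARE param_cursor CURSOR FOR
    SELECT value FROM STRING_SPLIT(@param_list, ',');   -- STRING_SPLIT needs SQL Server 2016+

OPEN param_cursor;
FETCH NEXT FROM param_cursor INTO @param;

WHILE @@FETCH_STATUS = 0                  -- outer loop: one parameter at a time
BEGIN
    DECLARE @rows int = 1;
    WHILE @rows > 0                       -- inner loop: repeat until no rows are affected
    BEGIN
        UPDATE t
        SET    t.matched_id = s.id        -- hypothetical matching update
        FROM   dbo.Records t
        JOIN   dbo.Source  s ON s.key_col = t.key_col
        WHERE  t.param_col = @param
          AND  t.matched_id IS NULL;
        SET @rows = @@ROWCOUNT;
    END;
    PRINT 'Finished parameter ' + @param;
    FETCH NEXT FROM param_cursor INTO @param;
END;

CLOSE param_cursor;
DEALLOCATE param_cursor;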
Is it possible to connect to a specific process in SQL Server?
If so, what client can I use (SQL Server Management Studio, code, or mssql on Linux)?
If it is not possible to connect, can I monitor to see if it completes?
Any other suggestions?
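You cannot attach a new client connection to an existing session, but you can watch it from a second connection. A minimal monitoring sketch using the DMVs (replace 57 with the spid reported by sp_who2); once the batch finishes, the session's row disappears from sys.dm_exec_requests:

-- Poll the running session from a separate connection.
-- Replace 57 with the session_id (spid) from sp_who2.
SELECT r.session_id,
       r.status,                  -- running / suspended / rollback, etc.
       r.command,
       r.cpu_time,
       r.total_elapsed_time,
       r.percent_complete,        -- only populated for some commands (e.g. a rollback)
       r.wait_type,
       t.text AS current_sql
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.session_id = 57;

-- The open transactions for that session (should match open_tran in sysprocesses):
SELECT session_id, transaction_id
FROM   sys.dm_tran_session_transactions
WHERE  session_id = 57;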

I used a dirty read to copy all the data out of the table being updated.
The query did complete: it continued running for about an hour and then committed everything before the process ended.
I was then able to compare the dirty read copy against the final results.
Next time I go for updating with a loop until there are no more updates, I will put in commits as it goes along.
It was roughly 1,000 ten-minute update queries that were run. The previous set of update queries took about 4 minutes each, so I looped them up with parameters. One mistake, since it was easy to do, was adding in 10 parameters rather than 5, making it 7 days rather than 3.5 days, against an initial estimate of 1.5 days.
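A sketch of that batched-commit pattern, where a dropped connection only loses the in-flight batch (table and column names are hypothetical):

-- Commit in batches so work done so far survives a disconnect.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    UPDATE TOP (10000) t              -- bound the batch size
    SET    t.matched_id = s.id
    FROM   dbo.Records t
    JOIN   dbo.Source  s ON s.key_col = t.key_col
    WHERE  t.matched_id IS NULL;

    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;               -- everything committed so far is safe

    PRINT CONCAT('Committed a batch of ', @rows, ' rows');
END;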

Related

Hibernate and SQL Server -- SELECT IN with 8000 items

We have a query that does a SQL Server SELECT with IN, for which we had originally anticipated a few items (under 20?) -- now it's being asked for 8000. This causes a timeout.
Hibernate generates the query just fine, but as I understand it, SQL Server doesn't optimize for more than 64 items in an IN query at a time and performance falls off after that. We've proved this by running some queries manually -- the first set of 64 takes ~5 seconds, the rest come in 2 seconds. The raw query takes minutes to complete.
Is there some way to tell Hibernate to break this up, or can (should?) I write some kind of extension/plugin for Hibernate that says "if you ask for more than 64 items, break those up, thread them, stitch them back together"?
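On the SQL Server side, a common workaround is to stage the keys in a temp table and join, instead of using a huge IN list. A sketch under assumed names (the Hibernate side would bulk-insert the keys):

-- Stage the 8000 keys in a temp table and join, instead of a giant IN list.
CREATE TABLE #keys (id int PRIMARY KEY);

INSERT INTO #keys (id) VALUES (1), (2), (3);   -- batch-insert all 8000 keys here

SELECT o.*
FROM   dbo.Orders o                            -- hypothetical target table
JOIN   #keys k ON k.id = o.id;                 -- replaces WHERE o.id IN (...)

DROP TABLE #keys;

If you keep the IN-list approach, the chunking described in the question (split into sublists, query each, merge the results client-side) is usually done in application code around the Hibernate call.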

Entity Framework just stopping with timeout during INSERT

I have a small C# application which uses Entity Framework 6 to parse text files into a database structure.
In general file content is parsed into 3 tables:
Table1 --(1-n)-- Table2 --(1-n)-- Table3
the application worked for months without any issues on Dev, Stage and Production environment.
Last week it stopped on stage and now I am trying to figure out why.
One file contains ~100 entries for Table1, ~2000 entries for Table2, and ~2000 entries for Table3.
.SaveChanges() is called after each file.
I get the following timeout exception:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.
AutoDetectChangesEnabled is set to false.
Because there is a 4th table where I execute one update statement after each file, there were transactions around the whole thing, so I removed the 4th table and the transaction code, but the problem persists.
To test whether it's just a performance issue, I set Database.CommandTimeout = 120, without any effect; it still runs into the timeout after 2 minutes.
(Before the issue, one file was stored in about 5 seconds, which is absolutely OK.)
If I look at SQL Server using SQL Server Profiler, I can see the following after .SaveChanges() is called:
[screenshot of the SQL Server Profiler trace]
Only the first few INSERT statements for Table3 are shown (always the first 4-15 statements, all of them shortly after .SaveChanges()).
After that: no new entries until the timeout occurs.
I have absolutely no idea what to check because there is no error or anything like that in the code.
If I look at SQL Server, there is no apparent reason for it to delay the queries (CPU, memory, and disk space are OK).
I would be glad for any comments on this; if you want more info, please let me know.
Best Regards
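One way to narrow a hang like this down is to check, from a second connection, what the stalled session is waiting on. A minimal sketch (the session_id filter is a common heuristic for skipping system sessions):

-- From another connection: what is the stalled session doing?
SELECT r.session_id,
       r.status,
       r.wait_type,              -- e.g. a LCK_M_* wait would indicate blocking
       r.blocking_session_id,    -- non-zero: blocked by that session
       r.last_wait_type
FROM   sys.dm_exec_requests r
WHERE  r.session_id > 50;        -- user sessions only (heuristic)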
I fixed it by rebuilding the fragmented indexes on Table1.
The following article was helpful to understand how to take care of fragmented indexes:
https://solutioncenter.apexsql.com/why-when-and-how-to-rebuild-and-reorganize-sql-server-indexes/
(If a mod still thinks this is not a valid answer, an explanation would be great.)
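A minimal sketch of the check and the fix in T-SQL (the 5%/30% thresholds are common rules of thumb, not hard limits):

-- Check fragmentation for indexes in the current database.
SELECT OBJECT_NAME(ps.object_id)          AS table_name,
       i.name                             AS index_name,
       ps.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN   sys.indexes i
       ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE  ps.avg_fragmentation_in_percent > 5
ORDER  BY ps.avg_fragmentation_in_percent DESC;

-- Pick one based on the fragmentation level:
ALTER INDEX ALL ON dbo.Table1 REORGANIZE;   -- ~5-30% fragmentation; lightweight, always online
ALTER INDEX ALL ON dbo.Table1 REBUILD;      -- above ~30%; heavier, offline unless ONLINE = ON is available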

SSIS 2012 catalog memory leak

I have a package with one For Each Loop container, which contains a bunch of inserts and selects.
The list that the loop iterates over is about a few million complex rows or so.
The package is in the Integration Services catalog, where it's run by simply executing it in SSMS (no Agent job).
When I look in Resource Monitor, memory for ISServerExec.exe (comparable to dtexec.exe) is growing every second (each iteration of the For Each loop takes about 1 second to complete).
After a while all the memory on the Windows server is used and the server ends up paging to disk. Then the wait times for the loop's queries become huge, 20-30 seconds per query.
What am I doing wrong?
I would write the list to a SQL table, then loop using a For Loop container wrapped around your For Each container.
At the start of the For Loop container I would read a single record from the list table using SELECT TOP 1, and deliver it into the Recordset variable. The scope of that variable should be moved to the For Loop container.
At the end of the For Loop Container I would update a flag and/or a datetime column to indicate that the row has been processed and should not be included in the next iteration of the initial SELECT.
Along the way you can update the list table to indicate progress/status of each row.
This design is also useful for logging and restart requirements.
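A sketch of what that driving table and the per-iteration statements might look like (all names are placeholders):

-- Driving table, written once before the For Loop starts.
CREATE TABLE dbo.WorkList (
    id           int IDENTITY PRIMARY KEY,
    payload      varchar(400) NOT NULL,    -- whatever each iteration needs
    processed_at datetime     NULL         -- NULL = not yet processed
);

-- Start of each For Loop iteration: fetch one unprocessed row.
SELECT TOP (1) id, payload
FROM   dbo.WorkList
WHERE  processed_at IS NULL
ORDER  BY id;

-- End of each iteration: flag the row so the next SELECT skips it.
UPDATE dbo.WorkList
SET    processed_at = GETDATE()
WHERE  id = ?;    -- parameter mapped from the SSIS variable in an Execute SQL Task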

SQL Server: If I stop a single long-running update before it is finished, will it roll back automatically?

I happened to execute a query similar to this one:
update table1
set data=(select data from table1 where key1=val1 and key2=val2)
which was supposed to update only one row, but since I missed the second where clause, I guess it started to update every row in the database, which contains a few million rows.
The correct query would have taken about 0 seconds and would be:
update table1
set data=(select data from table1 where key1=val1 and key2=val2)
where key1=val1 and key2=val3
After a few seconds, I realized it took too long and stopped it.
The database is set to the full recovery model and is running on SQL Server 2008 R2.
The question is, what was the effect of this query? My hope is that there would be no effect since the query was stopped before completion and SQL Server rolled back the changes automatically. Is that correct?
If not, how do I roll back the database to its state at a particular point in time (right before I did the unfortunate update)?
(I saw this question: If I stop a long running query, does it rollback? but it is different in that it performs several changes as opposed to just one.)
(And yes, I do have very recent backups, but given the size of the DB I would prefer not to have to restore from backup)
If your command to cancel came in time, it was rolled back in its entirety. DML statements are always all or nothing. You should probably check the data to make sure that your cancel did arrive in time. It might have arrived in the last millisecond or so after the transaction was already committed.
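For next time, wrapping a risky update in an explicit transaction lets you check the row count before anything becomes permanent. A sketch using the (placeholder) names from the question:

-- Run the update inside an explicit transaction and inspect before committing.
BEGIN TRANSACTION;

UPDATE table1
SET    data = (SELECT data FROM table1 WHERE key1 = 'val1' AND key2 = 'val2')
WHERE  key1 = 'val1' AND key2 = 'val3';

SELECT @@ROWCOUNT;        -- expected: 1 row

-- If the count looks right: COMMIT TRANSACTION;
-- If not:                   ROLLBACK TRANSACTION;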

Terminate a long-running process

Hi, I am using C on Solaris. I have a process which connects to a database. If the same process has been running the same query for a long time (say 15 seconds), I need to disconnect and re-establish the database connection.
I know I can check for the same processes with the process IDs, but I am more concerned with how to determine whether the same process is running the same query.
Any help is deeply appreciated.
"I need to determine if the same process is running the same query for a long time (say 15 seconds) then I need to disconnect and re establish the database connections."
Not sure what problem you are addressing.
If you drop the connection, the database session may persist for some time, potentially still holding locks too.
Also, if a PL/SQL block is looping and running 1000 queries each taking a tenth of a second, should that be counted as 1 statement for your abort logic?
You can look at V$SESSION and the SQL_ID or SQL_HASH_VALUE. Then check again after fifteen seconds and see if it has changed. You can also look at v$sessstat / v$statname for things like "execute count" or "user calls" to determine whether it is the same SQL running for a long time or multiple SQL calls.
If you start your queries straight from your client, you can check v$session.last_call_et. This column shows how many seconds ago the last server call started for this session. A server call in this case is the execution of the query. This won't work if your client starts a block of PL/SQL and happens to start the query (or queries) from there; in that case last_call_et will point to the start of the PL/SQL block, since that was the last thing started by your session.
This could be the easiest.
Does it help?
Ronald - http://ronr.blogspot.com
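A sketch of the check Ronald describes, sampled twice about 15 seconds apart (the bind variable is a placeholder for the target session's SID):

-- Oracle: sample the session, wait ~15 seconds, and sample again.
SELECT sid,
       serial#,
       sql_id,          -- changes when a new top-level statement starts
       last_call_et     -- seconds since the current server call began
FROM   v$session
WHERE  sid = :target_sid;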
My advice is that you fix the root cause instead of treating the symptoms.
