SQL script stuck with status: runnable and wait: preemptive_os_reportevent - sql-server

An index defrag was executed and later terminated. It caused two other processes to run long: one cancelled by itself and the other was terminated. While re-running, I was watching the status in sys.dm_exec_requests and noticed that on the last part of the query, which inserts the data into a table, the status kept flipping between running and runnable, with waits such as preemptive_os_reportevent. Later on, the job once again cancelled by itself.
I want to understand why the script changes status like that. Is that expected? And if something else is causing it to run long, what else should I check?
Note: I was also checking the other scripts active at the time it was running, and none of them used the same target table.
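For anyone wanting to reproduce the check, a query along these lines against sys.dm_exec_requests shows the status and the current wait for a session (the session id here is just an example):

-- sketch: watch the status and waits of the session running the insert
SELECT session_id, status, command, wait_type, last_wait_type, wait_time, percent_complete
FROM sys.dm_exec_requests
WHERE session_id = 123;  -- replace with the session id of the script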

This was resolved yesterday. Apparently the transaction log (syslogs) was full, which prevented the script from writing log records, so it was stuck and could not complete. Feel free to add input if you have any.
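For anyone hitting the same symptom, a quick way to confirm a full log (a sketch; run on the affected server):

DBCC SQLPERF(LOGSPACE);  -- log size and percent used, per database

SELECT name, log_reuse_wait_desc
FROM sys.databases;      -- what, if anything, is preventing log truncation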

Related

SQL: Avoid batch failing when processing

I have a SQL Server job that picks up a maximum of 1000 items from the queue for processing, at an interval of 1 minute.
In the job I use MERGE INTO on the target table and mark the status of these items as complete; the job then finishes and processes the next batch in the next interval.
All good so far, except that recently there was an incident where one of the items had a problem, and since we process the batch in one single SQL statement, the whole batch failed because of that one item.
No big deal, as we later identified the faulty item, patched it, and re-processed the whole failed batch.
What I am interested to know is: what are some of the things I can do to avoid failing the entire batch?
This time I know the reason for the faulty item, so I can add a check to flush it out before the single MERGE INTO statement, but that does not cover other, unknown errors.
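One pattern that can help here, sketched below with hypothetical object names (dbo.QueueItems, dbo.ProcessQueueBatch, and dbo.ProcessQueueItem are placeholders for your own queue table and processing logic): try the set-based path first, and on failure fall back to row-by-row processing so a single bad item only fails itself and gets flagged.

BEGIN TRY
    EXEC dbo.ProcessQueueBatch;  -- normal path: the single MERGE INTO over the batch
END TRY
BEGIN CATCH
    -- fallback: process one item at a time so only the faulty rows fail
    DECLARE @Id int;
    DECLARE item_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT Id FROM dbo.QueueItems WHERE Status = 'Pending';
    OPEN item_cursor;
    FETCH NEXT FROM item_cursor INTO @Id;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            EXEC dbo.ProcessQueueItem @Id;  -- single-item version of the MERGE
        END TRY
        BEGIN CATCH
            UPDATE dbo.QueueItems SET Status = 'Error' WHERE Id = @Id;  -- flag it, don't fail the batch
        END CATCH;
        FETCH NEXT FROM item_cursor INTO @Id;
    END;
    CLOSE item_cursor;
    DEALLOCATE item_cursor;
END CATCH;

The set-based statement stays the fast common case; the slow row-by-row path only runs when something in the batch is actually broken.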

Delete Query for one record going super slow

I'm trying to delete one single record from the database.
Code is very simple:
SELECT * FROM database.tablename
WHERE SerialNbr = x
This gives me the one record I'm looking for. It has that SerialNbr plus a number of ids that are foreign keys to other tables. I took care of all the foreign key constraints, so the next line of the code will run.
After that the code is followed by:
DELETE FROM tablename
WHERE SerialNbr = x
This should be a relatively simple and quick query, I would think. However, it has now run for 30 minutes with no result. It isn't complaining about any problems with foreign keys or anything like that; it is just taking a very, very long time to process. Is there anything I can do to speed this up, or am I just stuck waiting? Something seems wrong when deleting one single record takes this long.
I am using Microsoft SQL Server 2008.
It's not taking a long time to delete the row, it's waiting in line for its turn to access the table. This is called blocking, and is a fundamental part of how databases work. Essentially, you can't delete that row if someone else has a lock on it - they may be reading it and want to be sure it doesn't change (or disappear) before they're done, or they may be trying to update it (to an unsatisfactory end, of course, if you wait it out, since once they commit your delete will remove it anyway).
Check the SPID for the window where you're running the query. If you have to, stop the current instance of the query, then run this:
SELECT @@SPID;
Make note of that number, then try to run the DELETE again. While it's sitting there taking forever, check for a blocker in a different query window:
SELECT blocking_session_id FROM sys.dm_exec_requests WHERE session_id = <that spid>;
Take the number there, and issue something like:
DBCC INPUTBUFFER(<the blocking session id>);
This should give you some idea about what the blocker is doing (you can get other information from sys.dm_exec_sessions etc). From there you can decide what you want to do about it - issue KILL <the spid>;, wait it out, go ask the person what they're doing, etc.
You may need to repeat this process multiple times, e.g. sometimes a blocking chain can be several sessions deep.
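If it helps, here is a sketch of a single query that lists every blocked session together with details about its blocker, so you can see the whole chain at once:

-- each blocked request, with details about the session blocking it
SELECT r.session_id AS blocked_spid,
       r.wait_type,
       r.wait_time,
       s.session_id AS blocking_spid,
       s.login_name, s.host_name, s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.blocking_session_id
WHERE r.blocking_session_id <> 0;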
What I think is happening is that there is some kind of contention on the database or on that particular table. You can see that by simply running sp_who2 and killing the offending SPIDs. Be careful which SPID you kill, because the blocked query might not be yours.
DELETE FROM Database.Tablename
WHERE SerialNbr = x;

Run SQL Agent Job in endless loop

There is an SQL Agent Job containing a complex Integration Services Package performing some ETL Jobs. It takes between 1 and 4 hours to run, depending on our data sources.
The Job currently runs daily, without problems. What I would like to do now is to let it run in an endless loop, which means: When it's done, start over again.
The scheduler doesn't seem to provide this option. I found that it would be possible to use the steps interface to go back to step one after the last step finishes, but there's a problem with that method: if I need to stop the job, I would have to do it forcefully. I would rather be able to let the job stop after the current iteration finishes. How can I do that?
Thanks in advance for any help!
Since neither Martin nor Remus created an answer, here is one so the question can be accepted.
The best way is to simply set the run frequency to a very low value, like one minute. If it is already running, a second instance will not be created. If you want to stop the job after the current run, simply disable the schedule.
Thanks!
So, if I understand you correctly: when you want to stop the job, it should stop after the running iteration completes.
You can do one thing here.
Have a configuration table holding a boolean value.
Add one step to the job: before each iteration, check the value in the table, and only run the ETL packages if it is true.
So, as long as it finds the value true, the job follows the endless loop.
When you want to stop the job, set the value in the table to false.
When the current iteration completes, the job will read the value from your table, find it false, and stop.
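A minimal sketch of that setup (table and column names are just examples):

-- one-time setup
CREATE TABLE dbo.JobControl (KeepRunning bit NOT NULL);
INSERT INTO dbo.JobControl (KeepRunning) VALUES (1);

-- first job step: fail on purpose when the flag is off, and configure
-- that step's "on failure" action to quit the job reporting success
IF NOT EXISTS (SELECT 1 FROM dbo.JobControl WHERE KeepRunning = 1)
    RAISERROR('Stop requested; ending the loop.', 16, 1);

-- to stop the loop after the current iteration:
UPDATE dbo.JobControl SET KeepRunning = 0;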
You can always set the "on success" action to go to step one, creating an endless loop, but as you said, if you want to stop the job you'll have to force it.
Other than that, use a simple control table on the database with a status, and a second job that queries this table and fires your main job depending on the status. There are a couple of possible architectures here; just pick the one that suits you best.
You could use service broker within the database. The job you need to run can be started by queuing a 'start' message and when it finishes it can send itself a message to start again.
To pause the process you can just deactivate the queue processor.
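A rough sketch of such a self-queuing setup, assuming Service Broker is enabled on the database; dbo.EtlQueue, EtlService, and dbo.RunEtl are placeholder names:

CREATE QUEUE dbo.EtlQueue;
CREATE SERVICE EtlService ON QUEUE dbo.EtlQueue ([DEFAULT]);
GO
CREATE PROCEDURE dbo.RunEtlActivated
AS
BEGIN
    DECLARE @handle uniqueidentifier, @msgtype sysname;
    RECEIVE TOP (1) @handle = conversation_handle,
                    @msgtype = message_type_name
    FROM dbo.EtlQueue;
    IF @handle IS NULL RETURN;
    END CONVERSATION @handle;
    IF @msgtype = 'DEFAULT'
    BEGIN
        EXEC dbo.RunEtl;  -- placeholder for starting the ETL work
        -- queue the next iteration by sending ourselves a new message
        DECLARE @next uniqueidentifier;
        BEGIN DIALOG CONVERSATION @next
            FROM SERVICE EtlService TO SERVICE 'EtlService'
            ON CONTRACT [DEFAULT] WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @next;
    END
END;
GO
ALTER QUEUE dbo.EtlQueue WITH ACTIVATION (
    STATUS = ON, PROCEDURE_NAME = dbo.RunEtlActivated,
    MAX_QUEUE_READERS = 1, EXECUTE AS OWNER);
-- start the loop by opening a dialog and sending one message; pause it after
-- the current run with: ALTER QUEUE dbo.EtlQueue WITH ACTIVATION (STATUS = OFF);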

Terminate a long running process

Hi, I am using C on Solaris. I have a process which connects to a database. I need to determine whether that process has been running the same query for a long time (say 15 seconds); if so, I need to disconnect and re-establish the database connection.
I know I can check for the same processes with the process ids. But I am more concerned with how to determine whether the same process is running the same query.
Any help is deeply appreciated.
"I need to determine if the same process is running the same query for a long time (say 15 seconds) then I need to disconnect and re establish the database connections."
Not sure what problem you are addressing.
If you drop the connection, then the database session may persist for some time. Potentially still holding locks too.
Also, if a PL/SQL block is looping and running 1000 queries, each taking a tenth of a second, should that be counted as one statement for your abort logic?
You can look at V$SESSION and the SQL_ID or SQL_HASH_VALUE. Then check again after fifteen seconds and see if it has changed. You can also look at v$sessstat / v$statname for things like "execute count" or "user calls" to determine whether it is the same SQL running for a long time or multiple SQL calls.
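For example (a sketch; bind :sid to the session you are watching, and sample twice, fifteen seconds apart):

SELECT sid, serial#, sql_id, sql_hash_value, status
FROM v$session
WHERE sid = :sid;  -- the same sql_id on both samples suggests one long-running statement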
If you start your queries straight from your client, you can check v$session.last_call_et. This column shows how many seconds ago the last server call started for this session. A server call in this case is the execution of the query. This won't work if your client starts a block of PL/SQL and happens to start the query (or queries) from there; in that case last_call_et will point to the start of the PL/SQL block, since that was the last thing started by your session.
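For example (a sketch):

SELECT sid, serial#, sql_id, last_call_et
FROM v$session
WHERE status = 'ACTIVE'
  AND last_call_et > 15;  -- server calls active for more than 15 seconds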
This could be the easiest.
Does it help?
Ronald - http://ronr.blogspot.com
My advice is that you fix the root cause instead of treating the symptoms.

At what point will a series of selected SQL statements stop if I cancel the execution request in SQL Server Management Studio?

I am running a bunch of database migration scripts. I find myself with a rather pressing problem: the business is waking up and expecting to see their data, and their data has not finished migrating. I also took the applications offline, and they really need to be started back up. In reality, "the business" is a number of companies, so I have a number of scripts running SPs in one query window, like so:
EXEC [dbo].[MigrateCompanyById] 34
GO
EXEC [dbo].[MigrateCompanyById] 75
GO
EXEC [dbo].[MigrateCompanyById] 12
GO
EXEC [dbo].[MigrateCompanyById] 66
GO
Each SP calls a large number of other sub-SPs to migrate all of the data required. I am considering cancelling the query, but I'm not sure at what point the execution will be cancelled. If it cancels nicely at the next GO, then I'll be happy. If it cancels midway through one of the company migrations, then I'm screwed.
If I cannot cancel, could I ALTER the MigrateCompanyById SP and comment out all the sub-SP calls? Would that prevent the next one from running, while letting the one that is currently running complete?
Any thoughts?
One way to achieve a controlled cancellation is to add a table containing a cancel flag. You can set this flag when you want to cancel execution, and your SPs can check it at regular intervals and stop executing if appropriate.
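A sketch of what that could look like (object names are illustrative):

-- one-time setup
CREATE TABLE dbo.MigrationControl (CancelRequested bit NOT NULL DEFAULT 0);
INSERT INTO dbo.MigrationControl DEFAULT VALUES;

-- inside MigrateCompanyById, between sub-SP calls:
IF EXISTS (SELECT 1 FROM dbo.MigrationControl WHERE CancelRequested = 1)
BEGIN
    RAISERROR('Migration cancelled by operator.', 16, 1);
    RETURN;
END;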
I was forced to cancel the script anyway.
When doing so, I noted that it cancels after the currently executing statement, regardless of where it is in the SP execution chain.
Are you bracketing the code within each migration stored proc with transaction handling (BEGIN, COMMIT, etc.)? That would enable you to roll back the changes relatively easily depending on what you're doing within the procs.
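For example, a sketch of bracketing one company's migration (THROW needs SQL Server 2012 or later; use RAISERROR on older versions):

BEGIN TRY
    BEGIN TRANSACTION;
    EXEC dbo.MigrateCompanyById 34;
    COMMIT TRANSACTION;  -- this company's migration is all-or-nothing
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;  -- surface the original error
END CATCH;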
One solution I've seen: you have a table with a single record holding a bit value of 0 or 1. If that record is 0, your production application disallows access by the user population, enabling you to do whatever you need to; you then set the flag to 1 after your task is complete to let production continue. This might not be practical given your environment, but it can give you assurance that no users will be messing with your data through your app until you decide it's ready to be messed with.
You can use this method to report the execution progress of your script.
The way you have it now, every sproc is its own transaction, so if you cancel the script the data will be updated only partially, up to the point of the last successfully executed sproc.
You can, however, put it all in a single transaction if you need an all-or-nothing update.
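For example, a sketch using SET XACT_ABORT so that any error, or cancelling the batch, rolls everything back:

SET XACT_ABORT ON;  -- any run-time error (or a cancel) dooms the transaction
BEGIN TRANSACTION;
EXEC [dbo].[MigrateCompanyById] 34;
EXEC [dbo].[MigrateCompanyById] 75;
EXEC [dbo].[MigrateCompanyById] 12;
EXEC [dbo].[MigrateCompanyById] 66;
COMMIT TRANSACTION;  -- reached only if every company migrated successfully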
