How can I get a SQL Server job to restart itself? - sql-server

We're a little stuck on a problem, so let me do my best to explain the scenario, the issues, and what we have tried.
We have a SQL Agent job which logs to a specific file with a date appended to the end of its name.
The first step checks what the current log file is, e.g. log_01012012.log. If this is incorrect we update the sysjobsteps table so that all steps point to the new file location, e.g. log_02012012.log. We do this by calling a stored proc in the first job step.
Whilst the table does get updated, the remaining steps keep using the old log, presumably because the table is only read once when the job starts.
We have tried to restart the job within the stored proc with the following code:
EXEC msdb.dbo.sp_stop_job @job_name = 'My Job'
WAITFOR DELAY '00:00:15'
EXEC msdb.dbo.sp_start_job @job_name = 'My Job'
However, when we stop the job it appears to also kill the stored procedure (which I guess runs as a child of the job), which means it never reaches the line that restarts the job.
Is there a way in which a job can restart itself so it looks at the sysjobsteps table again and uses the correct location for the logging?
Things which might solve the issue would be
Being able to restart a job from the job itself
Being able to refresh the job in some respect.
I'll do my best to clarify anything that needs it; we are currently stuck and all input will be greatly appreciated!

Go to SQL Server Agent in SSMS,
expand Jobs,
create a job,
define the steps (a stored procedure or a simple query),
define the schedule (how often to run, or even start the job when SQL Server restarts),
set the notification to email you when the job completes (succeeded/failed).
Hope this helps.
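For completeness, the same job can be scripted instead of clicked together in SSMS; a rough sketch using the msdb job procedures (the job name, step command and schedule are placeholders, not anything from the question):

USE msdb;
GO
-- Create the job, add a single T-SQL step, a daily schedule, and attach it to the local server.
EXEC dbo.sp_add_job @job_name = N'My Job';

EXEC dbo.sp_add_jobstep
    @job_name  = N'My Job',
    @step_name = N'Run maintenance proc',
    @subsystem = N'TSQL',
    @command   = N'EXEC dbo.MyMaintenanceProc;';   -- placeholder command

EXEC dbo.sp_add_jobschedule
    @job_name = N'My Job',
    @name     = N'Daily at 01:00',
    @freq_type = 4,                  -- daily
    @freq_interval = 1,
    @active_start_time = 010000;     -- hhmmss

EXEC dbo.sp_add_jobserver @job_name = N'My Job', @server_name = N'(LOCAL)';
-- Email notification on completion can be added via sp_add_job's
-- @notify_level_email / @notify_email_operator_name parameters.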

You could do something fancy with Service Broker. Something like:
1) Put a message into a broker queue as the last step of your job. The content can be empty; it's just a token to say "hey... the job needs to be resubmitted".
2) Write an activation stored procedure for the queue that puts in a delay (like you're doing in your already existing procedure) and then resubmits the job.
Or, instead of hardcoding the log file name in the job steps themselves, put that data in a table somewhere. Then, in the step that changes the location, define the success condition as "go to next step" and the failure condition as "go to step 1", and modify the code that changes the file location to return an error (thus triggering the job step's failure condition).
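A rough sketch of that table-driven idea (the table and column names here are placeholders, not anything from the original job):

-- Holds the current log file path, so steps read it at run time instead of
-- having the path baked into sysjobsteps.
CREATE TABLE dbo.JobLogConfig (
    JobName     SYSNAME       NOT NULL PRIMARY KEY,
    LogFilePath NVARCHAR(260) NOT NULL
);

-- Step 1: work out today's file name and store it (return an error here if you
-- want to trigger the "go to step 1" failure path described above).
UPDATE dbo.JobLogConfig
SET LogFilePath = N'C:\Logs\log_' + CONVERT(CHAR(8), GETDATE(), 112) + N'.log'   -- 112 = yyyymmdd; adjust to taste
WHERE JobName = N'My Job';

-- Later steps: look the path up each time they run.
DECLARE @logFile NVARCHAR(260);
SELECT @logFile = LogFilePath FROM dbo.JobLogConfig WHERE JobName = N'My Job';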

I realise this is nearly a decade old, but I had the same problem and solved it as follows, using the code that the OP suggested:
EXEC msdb.dbo.sp_start_job @job_name = 'My Job A'
Simply create a matching pair of jobs, each of which has an identical first step (doing the actual work) and a final step containing the above code but pointing at the opposite job name (as far as I can tell, you can't restart a job from within itself). Just make sure you set the step behaviour in the Advanced tab so that this final step always executes, and the pair will rumble away indefinitely.

Related

SQL Error during Lazy Loop awaiting Azure DB resize

I want to automate some DB scaling in my Azure SQL database.
This can be easily initiated using this:
ALTER DATABASE [myDatabase]
MODIFY (EDITION ='Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
But that command returns instantly, whilst the resize takes a few tens of seconds to complete.
We can check the actual current size using the following, which doesn't update until the change is complete:
SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')
So naturally I wanted to combine this with a WHILE loop and a WAITFOR DELAY, in order to create a stored procedure that will change the DB size, and not return until the change has completed.
But when I wrote that stored procedure (script below) and ran it, I got the following error every time (at about the same time that the size change completed):
A severe error occurred on the current command. The results, if any, should be discarded.
The resize succeeds, but I get errors instead of a cleanly finishing stored procedure call.
Various things I've already tested:
If I separate the "Initiate" and the "WaitLoop" sections, and start the WaitLoop in a separate connection, after initiation but before completion, then that also gives the same error.
Adding a TRY...CATCH block doesn't help either.
Removing the stored procedure aspect, and just running the code directly doesn't fix it either
My interpretation is that the Resize isn't quite as transparent as one might hope, and that connections created before the resize completes get corrupted in some sense.
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time. It's not an awful solution, but it is less pleasant than being able to encapsulate the whole thing in a single stored procedure. Ah well, such is life.
Question:
Before I give up on this entirely ... does anyone have an alternative explanation or solution for this error, which would thus allow a single stored procedure call to change the size and then not return until that size change has actually completed?
Initial stored procedure code (simplified to remove parameterisation complexity):
CREATE PROCEDURE [trusted].[sp_ResizeAzureDbToS3AndWaitForCompletion]
AS
-- Kick off the resize (this returns immediately)...
ALTER DATABASE [myDatabase]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
-- ...then poll until the new service objective is reported.
WHILE ((SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')) != 'S3')
BEGIN
WAITFOR DELAY '00:00:05'
END
RETURN 0
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time.
Yes, this is correct. As described here, when you change the service objective of a database:
A new compute instance is created with the requested service tier and compute size... the database remains online during this step, and connections continue to be directed to the database in the original compute instance ... [then] existing connections to the database in the original compute instance are dropped. Any new connections are established to the database in the new compute instance.
The part about existing connections being dropped is what kills your stored procedure execution. You need to do this check externally.
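A minimal sketch of what that external check could look like; the second query assumes you are connected to the logical server's master database:

-- Run from the external process, on a NEW connection for each check:
SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective') AS CurrentServiceObjective;

-- Alternatively, from the master database, the scaling operation itself can be watched:
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
WHERE resource_type_desc = 'Database' AND major_resource_id = 'myDatabase'
ORDER BY start_time DESC;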

Launch stored procedure and continue running it even if disconnected

I have a database where data is processed in some kind of batches, where each batch may contain even a million records. I am processing data in a console application, and when I'm done with a batch, I mark it as Done (to avoid reading it again in case it does not get deleted), delete it and move on to a next batch.
I have the following simple stored procedure which deletes processed "batches" of data
CREATE PROCEDURE [dbo].[DeleteBatch]
(
@BatchId bigint
)
AS
SET XACT_ABORT ON
BEGIN TRANSACTION
DELETE FROM table1 WHERE BatchId = @BatchId
DELETE FROM table2 WHERE BatchId = @BatchId
DELETE FROM table3 WHERE BatchId = @BatchId
COMMIT
RETURN @@ERROR
I am using NHibernate with command timeout value 10 minutes, and the DeleteBatch procedure call times out occasionally.
Actually I don't want to wait for DeleteBatch to complete. I have already marked the batch as Done, so I want to go on processing the next batch, or maybe even exit my console application if there are no more pending batches.
I am using Microsoft SQL Express 2012.
Is there any simple solution to tell the SQL server - "launch DeleteBatch and run it asynchronously even if I disconnect, and I don't even need the result of the procedure"?
It would also be great if I could set a lower processing priority for DeleteBatch because other queries are more important than DeleteBatch.
I don't know much about NHibernate, but if you can use ADO.NET in this scenario then you can implement asynchronous database operations easily using the SqlCommand.BeginExecuteNonQuery method in C#. This method starts the asynchronous execution of a Transact-SQL statement or stored procedure that does not return rows, so that other tasks can run concurrently while the statement is executing.
EDIT: If you really want to exit your console app before the DB operation ends then you will have to manually create threads in your code and perform the DB operation on those threads. When your Main method returns, those threads will keep the process alive, because threads created with System.Threading.Thread are foreground threads by default. Having said that, it is also important to consider how many threads you would create. In your case you would need one thread per batch; if the number of batches is very large, that many threads would eat a large amount of CPU and could even freeze the OS for a long time.
Another simple solution I could suggest is to insert the BatchIds into some database table. Create an INSERT TRIGGER on that table. This trigger would then call a stored proc with BatchId as its parameter and would perform the required tasks.
Hope it helps.
What if your console application, instead of trying to delete the batch, just wrote the batch id into a "BatchIdsToDelete" table? Then you could use an agent job, running every x minutes/seconds or whatever, to delete the top x percent of records for a given batch id, maybe sleeping a little before tackling the next x percent.
Maybe worth having a look at that?
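A sketch of what that could look like (the table, columns and chunk size are all illustrative):

-- Queue table the console application writes to instead of deleting directly.
CREATE TABLE dbo.BatchIdsToDelete (
    BatchId  BIGINT       NOT NULL PRIMARY KEY,
    QueuedAt DATETIME2(0) NOT NULL DEFAULT SYSDATETIME()
);

-- Body of an agent job step scheduled every few minutes: work through the
-- queued batches, deleting in small chunks so locks stay short.
DECLARE @BatchId BIGINT;
SELECT TOP (1) @BatchId = BatchId FROM dbo.BatchIdsToDelete ORDER BY QueuedAt;

WHILE @BatchId IS NOT NULL
BEGIN
    WHILE EXISTS (SELECT 1 FROM table1 WHERE BatchId = @BatchId)
    BEGIN
        DELETE TOP (5000) FROM table1 WHERE BatchId = @BatchId;
        WAITFOR DELAY '00:00:01';   -- brief pause between chunks
    END;
    DELETE FROM table2 WHERE BatchId = @BatchId;
    DELETE FROM table3 WHERE BatchId = @BatchId;

    DELETE FROM dbo.BatchIdsToDelete WHERE BatchId = @BatchId;
    SET @BatchId = NULL;
    SELECT TOP (1) @BatchId = BatchId FROM dbo.BatchIdsToDelete ORDER BY QueuedAt;
END;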
Look at this article, which explains how to do reliable asynchronous procedure execution, code included. It is based on Service Broker.
The problem with trying to use .NET async features (like BeginExecute, Tasks, etc.) is that the call is unreliable: if the process exits before the procedure completes, the execution is cancelled on the server because the session is disconnected.
But you also need to look at the task itself: why is the deletion taking 10+ minutes? Is it blocked by contention? Are you missing indexes on BatchId? Use the Performance Troubleshooting Flowchart.
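The linked article is more thorough, but the core Service Broker pattern looks roughly like this (the object names are invented for the sketch; transaction handling, looping and poison-message handling are omitted):

-- One-time setup: a queue, a service on it, and an activation procedure that
-- executes whatever command text is sent to the queue.
CREATE QUEUE dbo.AsyncExecQueue;
CREATE SERVICE AsyncExecService ON QUEUE dbo.AsyncExecQueue ([DEFAULT]);
GO
CREATE PROCEDURE dbo.AsyncExecActivation
AS
BEGIN
    DECLARE @h UNIQUEIDENTIFIER, @body VARBINARY(MAX), @cmd NVARCHAR(MAX);
    RECEIVE TOP (1) @h = conversation_handle, @body = message_body
    FROM dbo.AsyncExecQueue;
    IF @h IS NOT NULL
    BEGIN
        SET @cmd = CAST(@body AS NVARCHAR(MAX));
        IF @cmd IS NOT NULL EXEC (@cmd);   -- runs in the background session
        END CONVERSATION @h;
    END;
END;
GO
ALTER QUEUE dbo.AsyncExecQueue WITH ACTIVATION
    (STATUS = ON, PROCEDURE_NAME = dbo.AsyncExecActivation,
     MAX_QUEUE_READERS = 1, EXECUTE AS OWNER);
GO
-- The caller just queues the work and can disconnect immediately:
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG @h
    FROM SERVICE AsyncExecService TO SERVICE 'AsyncExecService'
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h (N'EXEC dbo.DeleteBatch @BatchId = 42');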
Late to the party, but if someone else has this problem, use sqlcmd. With Express you are limited in the number of users (I think 2, but that may have changed since the last time I did much with Express). With sqlcmd you can run queries, stored procedures, and so on.
And you can kick off sqlcmd with the Windows Task Scheduler, a script, an Outlook rule...
I used it to manage like 3 or 4 thousand SQL Server Express instances, with their nightly maintenance scheduled with the Windows Scheduler.
You could also create and run a PowerShell script; it's more versatile and probably more widely used than sqlcmd.
I needed the same thing. After searching for a long time I found a solution; this is the easiest way:
using System.Data;
using System.Data.SqlClient;

// Enable asynchronous processing on the connection string.
SqlConnection connection = new SqlConnection();
connection.ConnectionString = "your connection string";
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connection.ConnectionString);
builder.AsynchronousProcessing = true;

// Open a new connection with that string and start the stored procedure
// without waiting for it to complete.
SqlConnection newSqlConn = new SqlConnection(builder.ConnectionString);
newSqlConn.Open();
SqlCommand cmd = new SqlCommand(storeProcedureName, newSqlConn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.BeginExecuteNonQuery(null, null);
Ideally, the SqlConnection object would take an optional parameter/property, the URL of a web service (be that WCF, Web API, or something yet to be named), and, if the user wished, notify the user of execution progress and/or completion status by calling that URL with a well-known message.
Theoretically DbConnection is an extensible object one is free to implement. However, it would take some review of what really can and needs to be done before this approach could be called feasible.

Good way to call multiple SQL Server Agent jobs sequentially from one main job?

I've got several SQL Server Agent jobs that should run sequentially. To keep a nice overview of the jobs that should execute I have created a main job that calls the other jobs with a call to EXEC msdb.dbo.sp_start_job N'TEST1'. The sp_start_job finishes instantly (Job Step 1), but then I want my main job to wait until job TEST1 has finished before calling the next job.
So I have written this small script that starts executing right after the job is called (Job Step 2), and forces the main job to wait until the sub job has finished:
WHILE 1 = 1
BEGIN
WAITFOR DELAY '00:05:00.000';
SELECT *
INTO #jobs
FROM OPENROWSET('SQLNCLI', 'Server=TESTSERVER;Trusted_Connection=yes;',
'EXEC msdb.dbo.sp_help_job @job_name = N''TEST1'',
@execution_status = 0, @job_aspect = N''JOB''');
IF NOT (EXISTS (SELECT TOP 1 * FROM #jobs))
BEGIN
BREAK
END;
DROP TABLE #jobs;
END;
This works well enough, but I get the feeling that smarter and/or safer (WHILE 1 = 1?) solutions should be possible.
I'm curious about the following things, hope you can provide me with some insights:
What are the problems with this approach?
Can you suggest a better way to do this?
(I posted this question at dba.stackexchange.com as well, to profit from the less-programming-more-dba'ing point of view too.)
If you choose to poll a table, then you'd need to look at msdb.dbo.sysjobhistory and wait until the run_status is no longer 4 (in progress). It's still going to be icky, though.
Perhaps a different approach would be for the last step of each job, on failure or success, to make an entry back on the "master" job server saying that the process has completed, so that you only have to look locally. It might also make tracking down "what the heck happened" easier by consolidating starts and stops on a centralised job server.
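For illustration only (the table and outcome values are made up), the last step of each sub job, configured to run on both success and failure, could simply write a completion row on the master server:

-- On the "master" job server, once:
CREATE TABLE dbo.JobCompletionLog (
    JobName    SYSNAME      NOT NULL,
    Outcome    VARCHAR(10)  NOT NULL,               -- 'Succeeded' or 'Failed'
    FinishedAt DATETIME2(0) NOT NULL DEFAULT SYSDATETIME()
);

-- Final step of job TEST1 (one variant per outcome, or parameterise it):
INSERT dbo.JobCompletionLog (JobName, Outcome)
VALUES (N'TEST1', N'Succeeded');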
A third and much more robust approach would be to use something like Service Broker to handle communicating and signaling between processes. That'll require much more setup, but it'd be the most robust mechanism for communicating between processes.
No problem with the approach. I was doing something quite like your requirement, and I used the sysjobhistory table in msdb to check the run status, for some other reasons.
Coming back to your question: the msdb.dbo.sp_start_job stored procedure is used with the same approach by the default Microsoft BizTalk job 'MessageBox_Message_ManageRefCountLog_BizTalkMsgBoxDb' to call another dependent default BizTalk job, 'MessageBox_Message_Cleanup_BizTalkMsgBoxDb'. There is even a stored procedure in the BizTalk MessageBox to check the status of a job; see 'int_IsAgentJobRunning' in the BizTalk MessageBox database.

Are there any events after a SQL Server job finishes?

I want to get the latest raw job history and save it in my own format every time a job finishes.
I have written a stored procedure that gets the history with sp_help_jobhistory, formats the result, and saves it into a new table.
But, when to call this stored procedure?
Is there some event fired when the job finishes?
Maybe there are some other solutions.
Thanks for your comments/answers.
As Akhil said, simply add a step to your job and ensure that it is chained correctly (assuming your job chains steps on success only: on the final step's success, execute your stored proc).
I have tried the solution, and it works well.
Step 1: calls the SP BeginHistoryLog. This SP gets the job's information from [msdb].[dbo].[sysjobs] by job name, and writes the initial data into JobHistory, which logs the history messages.
Step 2: calls the SP which does the actual work.
Step 3: calls the SP EndHistoryLog. This SP gets the step 2 execution information from msdb.dbo.sysjobhistory INNER JOINed to [msdb].[dbo].[sysjobs] by job id, filtered by step id, and writes the execution information into the JobHistory table.
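A rough idea of the query EndHistoryLog might run; the JobHistory column list is an assumption on my part, and the join is on job_id with the step filter applied to sysjobhistory (sysjobs itself has no step column):

-- Copy the most recent history row for step 2 of the named job into the
-- custom JobHistory table.
INSERT dbo.JobHistory (JobName, StepId, RunStatus, RunDate, RunDuration, Message)
SELECT TOP (1)
       j.name, h.step_id, h.run_status, h.run_date, h.run_duration, h.message
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = N'My Job' AND h.step_id = 2
ORDER BY h.instance_id DESC;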

At what point will a series of selected SQL statements stop if I cancel the execution request in SQL Server Management Studio?

I am running a bunch of database migration scripts. I find myself with a rather pressing problem: the business is waking up and expects to see its data, and the data has not finished migrating. I also took the applications offline, and they really need to be started back up. In reality "the business" is a number of companies, and therefore I have a number of scripts running SPs in one query window like so:
EXEC [dbo].[MigrateCompanyById] 34
GO
EXEC [dbo].[MigrateCompanyById] 75
GO
EXEC [dbo].[MigrateCompanyById] 12
GO
EXEC [dbo].[MigrateCompanyById] 66
GO
Each SP calls a large number of other sub SPs to migrate all of the data required. I am considering cancelling the query, but I'm not sure at what point the execution will be cancelled. If it cancels nicely at the next GO then I'll be happy. If it cancels mid way through one of the company migrations, then I'm screwed.
If I cannot cancel, could I ALTER the MigrateCompanyById SP and comment all the sub SP calls out? Would that also prevent the next one from running, whilst completing the one that is currently running?
Any thoughts?
One way to achieve a controlled cancellation is to add a table containing a cancel flag. You can set this flag when you want to cancel execution, and your SPs can check it at regular intervals and stop executing if appropriate.
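Something along these lines, with illustrative names:

-- Shared flag an operator can flip to request a controlled stop.
CREATE TABLE dbo.MigrationControl (CancelRequested BIT NOT NULL);
INSERT dbo.MigrationControl (CancelRequested) VALUES (0);

-- Inside MigrateCompanyById (and/or its sub procs), between units of work:
IF EXISTS (SELECT 1 FROM dbo.MigrationControl WHERE CancelRequested = 1)
BEGIN
    RAISERROR('Migration cancelled by operator.', 16, 1);
    RETURN;
END;

-- From another session, to request the stop:
UPDATE dbo.MigrationControl SET CancelRequested = 1;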
I was forced to cancel the script anyway.
When doing so, I noted that it cancels after the currently executing statement, regardless of where that statement is in the SP execution chain.
Are you bracketing the code within each migration stored proc with transaction handling (BEGIN, COMMIT, etc.)? That would enable you to roll back the changes relatively easily depending on what you're doing within the procs.
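If not, a minimal sketch of that bracketing; the sub-proc names here are placeholders, since the real ones aren't shown in the question:

ALTER PROCEDURE [dbo].[MigrateCompanyById]
    @CompanyId INT
AS
BEGIN
    SET XACT_ABORT ON;                 -- abort and roll back if any statement fails
    BEGIN TRY
        BEGIN TRANSACTION;

        EXEC dbo.MigrateCompanyCustomers @CompanyId;   -- placeholder sub procs
        EXEC dbo.MigrateCompanyOrders    @CompanyId;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        THROW;                         -- surface the original error
    END CATCH;
END;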
One solution I've seen: you have a table with a single record holding a bit value of 0 or 1. If that record is 0, your production application disallows access by the user population, enabling you to do whatever you need to; you then set the flag to 1 after your task is complete to let production continue. This might not be practical given your environment, but it can give you assurance that no users will be messing with your data through your app until you decide it's ready to be messed with.
You can use this method to report the execution progress of your script.
The way you have it now, every sproc is its own transaction, so if you cancel the script the data will only be partly updated, up to the point of the last successfully executed sproc.
You can, however, put it all in a single transaction if you need an all-or-nothing update.
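A sketch of the all-or-nothing variant; the RAISERROR ... WITH NOWAIT lines are one common way of reporting progress from a running script, though they may not be exactly the method linked above:

SET XACT_ABORT ON;
BEGIN TRANSACTION;

EXEC [dbo].[MigrateCompanyById] 34;
RAISERROR('Company 34 migrated', 0, 1) WITH NOWAIT;   -- progress message is sent immediately

EXEC [dbo].[MigrateCompanyById] 75;
RAISERROR('Company 75 migrated', 0, 1) WITH NOWAIT;

EXEC [dbo].[MigrateCompanyById] 12;
EXEC [dbo].[MigrateCompanyById] 66;

COMMIT TRANSACTION;   -- nothing is visible to other sessions until everything has succeeded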
