I would like to know what happens when an agent job (with a recurring schedule) in MS SQL Server runs long enough to overlap its next scheduled execution.
According to my tests, parallel execution does not happen, which is good.
What I'm trying to find out is whether the next execution will be skipped because the previous one has not finished yet, or whether it will be queued.
By "queued" I mean that the pending request executes immediately after the previous one completes, disregarding the schedule.
Thank you
It will not be queued; it will be skipped. Easy to test: create a job with a WAITFOR DELAY '00:05:00'; step, schedule the job to run less than 2 minutes from now, then start the job manually. It will run for 5 minutes, once.
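For the record, here is a sketch of that test using the msdb stored procedures (the job and schedule names are made up):

    EXEC msdb.dbo.sp_add_job       @job_name = N'OverlapTest';
    EXEC msdb.dbo.sp_add_jobserver @job_name = N'OverlapTest';   -- attach to the local server
    EXEC msdb.dbo.sp_add_jobstep   @job_name  = N'OverlapTest',
                                   @step_name = N'Wait5Minutes',
                                   @subsystem = N'TSQL',
                                   @command   = N'WAITFOR DELAY ''00:05:00'';';
    EXEC msdb.dbo.sp_add_jobschedule @job_name = N'OverlapTest',
                                     @name = N'Every2Minutes',
                                     @freq_type = 4,             -- daily
                                     @freq_interval = 1,
                                     @freq_subday_type = 4,      -- minutes
                                     @freq_subday_interval = 2;  -- every 2 minutes
    -- Manual start; the scheduled runs that land inside the 5 minutes are skipped.
    EXEC msdb.dbo.sp_start_job @job_name = N'OverlapTest';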
Related
I have an SSIS job which pulls data from one database and pushes it into another. Currently the actions are triggered when a record is inserted into a table.
My understanding is that using a SQL Server trigger to launch an SSIS job is not advised, suggesting that the preferred route for this use case is a recurring schedule.
If I schedule the job every 10 seconds, will it launch again if the previous run has not finished? (Is there a better word to describe this behavior in the computing space?) If the job relaunches, is there a preferred way to accomplish this behavior?
If I schedule every 10 seconds, will the ETL job launch again if the previous run has not finished?
No. The next run time is computed once the job finishes, based on the schedule's "Starting at" time and the next point that satisfies the recurrence interval.
While it is running, the "Start Job at Step..." option in the SQL Server Management Studio interface will be grayed out.
If you try to kick off the job again forcefully using sp_start_job, you'll get an error message saying it's already running.
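If you want to check for yourself whether a job is mid-run, one way (a sketch against the standard msdb catalog tables) is:

    -- Jobs currently executing, according to the latest Agent session.
    SELECT j.name, a.start_execution_date
    FROM msdb.dbo.sysjobactivity AS a
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
    WHERE a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
      AND a.start_execution_date IS NOT NULL
      AND a.stop_execution_date IS NULL;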
I have a scheduled SSIS job which runs daily. The job runs a select query on a SQL 2008 R2 server and then generates a text file. Generally, the job takes less than a minute to complete but sometimes it just takes hours to complete.
Symptoms:
1. The job gets stuck at "Pre-Execute phase is beginning" for just over 10 hours.
2. Then it starts processing the output file.
3. The job then immediately gets stuck at "Rows were provided to a data flow component as input" for another 10 hours.
I don't know what needs to be changed at the package level because it works fine 90% of the time. I tried to find a pattern and don't see one. The data being extracted remains almost the same, and there is no other connection to the table during the job run.
I work in an environment that uses Merge Replication to publish a dozen publications to half a dozen subscribers every 10 minutes. When certain jobs are running simultaneously, deadlocks and blocking are encountered and the replication process is not efficient.
I want to create a SQL Server Agent job that runs a group of Merge Replication jobs in a particular order, waiting for one to finish before the next starts.
I created an SSIS package that started the jobs in sequence, but it uses sp_start_job, which returns as soon as a job has been started rather than when it finishes, so when run it immediately starts all the jobs and they are running together again.
A side purpose is to be able to disable replication to a particular server without individually disabling a dozen jobs, or temporarily disabling replication completely, so as to avoid 70+ individual disablings.
Right now, if I disable a Merge Replication job, the SSIS package will still start and run it anyway.
I have now tried creating an SSIS package for each replication job and then creating a SQL Server Agent job that calls these packages in sequence. That job takes 8 seconds to finish, while the individual packages it calls (each starting a replication job) take at least a minute. In other words, that doesn't work either.
The SQL Server Agent knows when a Replication job finishes! Why doesn't an SSIS package or job step know? What is the point of having a control flow if it doesn't work?
Inserting waits is useless. The individual jobs can take anywhere from 1 second to an hour depending on what needs replicating.
Maybe I'm not seeing the real problem, but it is natural that you need a synchronization point here, and there are many ways to create one.
For example, you could still run the jobs simultaneously but have the first job lock a resource that the second one needs, so the second waits until the resource is unlocked. Or the second job could poll a log table in a loop (waiting a minute between checks and cancelling itself after an hour)...
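As a sketch of the polling approach, a leading T-SQL step in the second job could wait for the first job to finish (the job name is an assumption; the loop checks once a minute and gives up after an hour):

    DECLARE @deadline datetime2 = DATEADD(HOUR, 1, SYSDATETIME());

    -- Poll until 'Replication Job A' is no longer running, or the deadline passes.
    WHILE EXISTS (SELECT 1
                  FROM msdb.dbo.sysjobactivity AS a
                  JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
                  WHERE j.name = N'Replication Job A'   -- assumed job name
                    AND a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
                    AND a.start_execution_date IS NOT NULL
                    AND a.stop_execution_date IS NULL)
    BEGIN
        IF SYSDATETIME() > @deadline
        BEGIN
            RAISERROR(N'Gave up waiting for Replication Job A after an hour.', 16, 1);
            RETURN;  -- fail this step instead of waiting forever
        END
        WAITFOR DELAY '00:01:00';  -- check once a minute
    END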
I have an agent job set to run log backups every two hours from 2:00 AM to 11:59 PM (leaving a window for running a full or differential backup). A similar job is set up on every one of my 50 or so instances, and I may be adding several hundred instances over time (we host SQL Servers for some of our customers). They all back up to the same SAN disk volume, which is causing latency issues and otherwise impacting performance.
I'd like to offset the job run times on each instance by 5 minutes, so that instance one runs the job at 2:00, 4:00, etc., instance two at 2:05, 4:05, etc., instance three at 2:10, 4:10, etc., and so on. If I offset the start time of the job on each instance this way, can I reasonably expect that the instances won't all run the job at the same time?
If this is the same conversation we just had on Twitter: when you tell SQL Server Agent to run every n minutes or every n hours, the next run is based on the start time, not the finish time. So if you set a job on instance 1 to run at 2:00 and every 2 hours thereafter, the second run will start at 4:00, whether the first run took 1 minute, 12 minutes, or 45 minutes.
There are some caveats:
there can be minor delays due to internal agent synchronization, but I've never seen this off by more than a few seconds
if the first run at 2:00 takes more than 2 hours (but less than 4 hours), the next time the job runs will be at 6:00 (the 4:00 run is skipped, it doesn't run at 4:10 or 4:20 to "catch up")
There was another suggestion to add a WAITFOR to offset the start time (discard the idea of a random WAITFOR, because that is probably not what you want - random <> unique). If you want to hard-code a different delay on each instance (1 minute, 2 minutes, etc.), it is much more straightforward to do that with a schedule than by adding steps to all of your jobs, IMHO.
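For reference, a sketch of creating one of those offset schedules with msdb.dbo.sp_add_jobschedule (the job and schedule names are assumptions; run the equivalent on each instance with its own @active_start_time):

    -- Instance one: every 2 hours from 02:00:00 to 23:59:00.
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name             = N'Log Backup',   -- assumed job name
        @name                 = N'Every 2 hours, offset 0',
        @enabled              = 1,
        @freq_type            = 4,       -- daily
        @freq_interval        = 1,
        @freq_subday_type     = 8,       -- hours
        @freq_subday_interval = 2,
        @active_start_time    = 20000,   -- 02:00:00, encoded as HHMMSS
        @active_end_time      = 235900;  -- 23:59:00

    -- Instance two would use @active_start_time = 20500 (02:05:00),
    -- instance three 21000 (02:10:00), and so on.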
Perhaps you could set up a centralized database that manages the "schedule" and have the jobs add/update a row when they run. That way each subsequent server's job can poll to find out when it can start, and any latency in one job makes the others wait, so your timings don't drift when one of the servers is thrown off.
Being a little paranoid, I'd add a catchall that says: after "x" minutes of waiting, proceed anyway, so that a delay doesn't cascade far enough that the jobs never run.
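A minimal sketch of such a polling step, assuming a coordination table on a central instance reachable through a linked server named CENTRAL (the server, database, table, and timeout are all hypothetical):

    -- One row per hosted instance; started_at/finished_at mark the current run.
    -- CREATE TABLE Coordination.dbo.BackupSlots
    -- (
    --     instance_name sysname   NOT NULL PRIMARY KEY,
    --     started_at    datetime2 NULL,
    --     finished_at   datetime2 NULL
    -- );

    DECLARE @deadline datetime2 = DATEADD(MINUTE, 15, SYSDATETIME());  -- the catchall: proceed anyway after 15 minutes

    -- Wait while any other instance's backup is still in flight.
    WHILE SYSDATETIME() < @deadline
      AND EXISTS (SELECT 1
                  FROM CENTRAL.Coordination.dbo.BackupSlots
                  WHERE instance_name <> @@SERVERNAME
                    AND started_at IS NOT NULL
                    AND finished_at IS NULL)
        WAITFOR DELAY '00:00:30';

    -- Claim our slot, run the backup, then mark it finished.
    UPDATE CENTRAL.Coordination.dbo.BackupSlots
    SET started_at = SYSDATETIME(), finished_at = NULL
    WHERE instance_name = @@SERVERNAME;

    -- ... BACKUP LOG ... ;

    UPDATE CENTRAL.Coordination.dbo.BackupSlots
    SET finished_at = SYSDATETIME()
    WHERE instance_name = @@SERVERNAME;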
I have a job that runs every 5 minutes. Every once in a while, when our database load is heavy, the job takes longer than that. The job will then no longer run, because the next scheduled time is in the past. Is there a way I can get SQL Server to work around this so that the job runs again even after one of these long runs?
If you were to have a script that runs continuously, you could spawn a second script from within it every 5 minutes (without waiting for it to exit). There are many alternative (better) ways to do scheduling in Windows involving custom applications.
But then you will have overlapping script runs if one goes beyond its 5 minutes, which is probably not what you want. A workaround is to create a temp file when the second script starts and delete it when it's done; at the beginning of the script, check whether the file exists, and if it does, exit.
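If the recurring work is itself T-SQL, the same guard can be sketched inside the job step with sp_getapplock instead of a temp file (a different mechanism than the file, but it serves the same role; the lock name is an assumption):

    DECLARE @rc int;

    -- Try to take an exclusive app lock without waiting; a negative return
    -- code means a previous run still holds it.
    EXEC @rc = sp_getapplock
        @Resource    = N'FiveMinuteJob',   -- assumed lock name
        @LockMode    = N'Exclusive',
        @LockOwner   = N'Session',
        @LockTimeout = 0;                  -- don't wait; fail fast if held

    IF @rc < 0
    BEGIN
        PRINT N'Previous run still in progress; exiting.';
        RETURN;
    END

    -- ... the actual work goes here ...

    EXEC sp_releaseapplock @Resource = N'FiveMinuteJob', @LockOwner = N'Session';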