Need to split two SQL Server Agent Jobs into different schedules - sql-server

I have two SQL Agent jobs that share the same schedule because of an error I made while creating the second job: I generated a script in SSMS and edited some values, but I left the schedule_uid the same. Now it turns out that because those two jobs run at the same time, they are corrupting each other's data.
What I need to do is leave the original job alone, but create a new schedule and have the second job use this new schedule. However, my searches for the correct way to do this have all resulted in dead ends.
None of this can be done using a UI; it all must be scripted so it can be run during a maintenance window without me present.
Thanks in advance for any assistance.

Use msdb.dbo.sp_detach_schedule followed by msdb.dbo.sp_add_jobschedule.
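A minimal sketch of that sequence, assuming the second job is named Job2 and the shared schedule is named SharedSchedule (both placeholders); adjust the frequency values to your maintenance window:

```sql
USE msdb;
GO

-- Detach the shared schedule from Job2 only; Job1 keeps using it.
EXEC dbo.sp_detach_schedule
    @job_name = N'Job2',
    @schedule_name = N'SharedSchedule',
    @delete_unused_schedule = 0;

-- Give Job2 its own schedule (here: daily at 02:00).
EXEC dbo.sp_add_jobschedule
    @job_name = N'Job2',
    @name = N'Job2 - nightly 02:00',
    @enabled = 1,
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every 1 day
    @active_start_time = 020000; -- HHMMSS
```

Because it is plain T-SQL, this can be wrapped in a script and run unattended during the maintenance window.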

Related

Triggering Hangfire Jobs using SQL

I have a Hangfire service running on one of my servers. I'm a DBA, and I'm sometimes asked to trigger jobs via the Dashboard, but connecting to the jobs' server takes me a lot of time because of connectivity and security issues.
To work around that, I want to trigger those jobs by inserting into Hangfire's tables in the database. I can already query those tables to find which job executed when and whether it failed, succeeded, or is still enqueued. Does anyone know an approach for doing this?
I've included a sample of two tables which I think will be used for this trick; their names are Hash and Set respectively:
Hangfire normally exposes a web dashboard in .NET, similar to Swagger (e.g. http://localhost:5000/hangfire), and there should be an immediate-trigger feature there. If not, a second option is changing the job's cron expression to every minute, or maybe every 30 seconds.
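For reference, a hedged sketch of the read-only side the question mentions (checking which jobs ran and their state). All schema, table, and column names are assumptions based on Hangfire's default SQL Server storage and should be verified against your installation; writing to Hash/Set to trigger jobs is not shown, since those row formats are internal to Hangfire.

```sql
-- Assumed default Hangfire SQL Server storage schema [HangFire];
-- verify table and column names before relying on this.
SELECT TOP (50)
       j.Id,
       j.StateName,               -- Enqueued / Processing / Succeeded / Failed ...
       j.CreatedAt,
       s.Reason,
       s.CreatedAt AS StateChangedAt
FROM [HangFire].[Job] AS j
LEFT JOIN [HangFire].[State] AS s
       ON s.Id = j.StateId
ORDER BY j.Id DESC;
```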

How to make entries delete automatically after a certain time in PostgreSQL

I have created a database and it is going to have some temporary values, such as login type and some hash keys, that I want to be deleted after a certain period of time.
I have installed Postgres on Docker.
I found a solution using pgAgent, but I don't know if it works on Docker. Can anyone help me figure out how to do it?
Thanks in advance
You can create a function/procedure that deletes the records based on your criteria.
Schedule the execution of that function either with a cron job or with pgAgent so it runs at the specified time. For the cron approach you need to create a shell script that calls the function, for example via psql.
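A minimal sketch for PostgreSQL, assuming a table named login_tokens with a created_at timestamp column (both names are placeholders); adjust the retention interval to suit:

```sql
-- Function that purges rows older than the retention window.
CREATE OR REPLACE FUNCTION purge_expired_tokens() RETURNS void
LANGUAGE sql AS $$
    DELETE FROM login_tokens
    WHERE created_at < now() - interval '1 hour';
$$;
```

From cron (instead of pgAgent), the shell entry would just call the function, e.g. `*/15 * * * * psql -U postgres -d mydb -c "SELECT purge_expired_tokens();"`.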

Running several SQL jobs simultaneously under one job

Is it possible in SQL Server to run several jobs simultaneously, in different sessions, under the same job?
For example, I have N stored procedures to run. They all have to run in different sessions and start at the same time. I don't want to create N jobs; I want all of them to start at the same time under one job.
In the past I've had one job create and start several other jobs using the sp_add_job command. If you set the delete level to 3, the job will get automatically deleted once it has completed.
The disadvantages are security and monitoring all the jobs.
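A minimal sketch of that pattern, where one parent job step spins up a throwaway job for one of the N procedures and starts it immediately; dbo.MyProc1 and the database name are placeholders:

```sql
USE msdb;
GO

DECLARE @job_id uniqueidentifier;

-- Create a self-deleting job (@delete_level = 3: delete whenever it completes).
EXEC dbo.sp_add_job
    @job_name = N'OneOff - MyProc1',
    @delete_level = 3,
    @job_id = @job_id OUTPUT;

-- Single step that runs the stored procedure.
EXEC dbo.sp_add_jobstep
    @job_id = @job_id,
    @step_name = N'Run proc',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.MyProc1;',
    @database_name = N'MyDatabase';

-- Target the local server, then start the job (runs asynchronously in its own session).
EXEC dbo.sp_add_jobserver @job_id = @job_id;
EXEC dbo.sp_start_job @job_id = @job_id;
```

Repeating this for each of the N procedures gives N throwaway jobs that all start at (nearly) the same time and run in separate sessions.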
I don't see any other option than using SSIS Execute SQL tasks for the different scripts, with no precedence links between them, and executing the package. This will allow the different stored procedures or SQL scripts to run in parallel. Thanks!

Create SQL Server Agent job by using existing job script?

I have two SQL Server Agent jobs: Job1 and Job2. Since Job2 is very similar to Job1, I right-clicked on Job1 > Script Job As > Create To and used that script to create Job2.
Now I'm seeing that any change I make to the schedule of Job2 also affects Job1, and I'm assuming that's happening because both have the same @schedule_uid.
So, two questions:
Is it correct to generate a job by using the SQL script of another job? If it is, how can I fix this error where changes made in one job affect the other job?
Thanks.
Schedules are distinct objects within SQL Server and, as you have found, are independent of the job, referenced by an ID.
If you are creating jobs via script you just need to either not assign a schedule, create a new schedule for every job, or define a set of possible schedules that fit your requirements/maintenance windows and specify the correct ID in your script.
Obviously any two jobs that share a schedule will both be affected if you change that schedule, so if you foresee a lot of individual job management/tweaking going on, it may be best for your script to create a schedule and then reference that new schedule in the job creation.
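A minimal sketch of that last option, using msdb.dbo.sp_add_schedule and msdb.dbo.sp_attach_schedule; the schedule name, job name, and frequency values are placeholders:

```sql
USE msdb;
GO

DECLARE @schedule_id int;

-- Create a dedicated schedule for the scripted job.
EXEC dbo.sp_add_schedule
    @schedule_name = N'Job2 - hourly',
    @freq_type = 4,                -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,         -- units of hours
    @freq_subday_interval = 1,     -- every 1 hour
    @active_start_time = 000000,
    @schedule_id = @schedule_id OUTPUT;

-- Attach the new schedule to the job created from the generated script.
EXEC dbo.sp_attach_schedule
    @job_name = N'Job2',
    @schedule_id = @schedule_id;
```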
If you look in your script you will see that the @schedule_uid is the same. If you fetch the new @schedule_uid (and all the other hard-coded IDs) from the tables in the msdb database, you will get a correctly running job.
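For example, a hedged query to spot which jobs currently share a schedule_uid, so the duplicated value in the generated script is easy to find and replace:

```sql
SELECT j.name AS job_name,
       s.name AS schedule_name,
       s.schedule_uid
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
ORDER BY s.schedule_uid, j.name;
```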

How do I check job status from SSIS control flow?

Here's my scenario - I have an SSIS job that depends on another prior SSIS job to run. I need to be able to check the first job's status before I kick off the second one. It's not feasible to add the second job into the workflow of the first one, as it is already far too complex. I want to be able to check the first job's status (Failed, Successful, Currently Executing) from the second one, and use this as a condition to decide whether the second one should run or wait for a retry. I know this can be done by querying the msdb database on the SQL Server running the job. I'm wondering if there is an easier way, such as possibly using the WMI Data Reader Task. Has anyone had this experience?
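For completeness, a hedged sketch of the msdb query the question alludes to; an Execute SQL Task could run this and store the result in an SSIS variable to gate the second job. 'FirstJob' is a placeholder job name, and the status mapping is simplified:

```sql
SELECT TOP (1)
       j.name,
       CASE
           WHEN a.start_execution_date IS NOT NULL
                AND a.stop_execution_date IS NULL THEN 'Currently Executing'
           WHEN h.run_status = 1 THEN 'Successful'
           ELSE 'Failed'
       END AS job_status
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysjobactivity AS a ON a.job_id = j.job_id
LEFT JOIN msdb.dbo.sysjobhistory  AS h ON h.instance_id = a.job_history_id
WHERE j.name = N'FirstJob'
ORDER BY a.start_execution_date DESC;
```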
You may want to create a third package that runs packageA and then packageB. The third package would only contain two Execute Package tasks.
http://msdn.microsoft.com/en-us/library/ms137609.aspx
@Craig: A status table is an option, but you will have to keep monitoring it.
Here is an article about events in SSIS for your original question.
http://www.databasejournal.com/features/mssql/article.php/3558006
Why not use a table? Just have the first job update the table with its status. The second job can use the table to check the status. That should do the trick if I am reading the question correctly. The table would (should) only have one row, so it won't kill performance and shouldn't cause any deadlocking (of course, now that I write it, it will happen) :)
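A minimal sketch of that single-row status table idea; all names are placeholders:

```sql
-- One row per upstream job whose status downstream work needs to see.
CREATE TABLE dbo.JobStatus
(
    job_name   sysname      NOT NULL PRIMARY KEY,
    status     varchar(20)  NOT NULL,   -- 'Executing' / 'Successful' / 'Failed'
    updated_at datetime2(0) NOT NULL DEFAULT (sysutcdatetime())
);

-- Last step of the first job records the outcome:
UPDATE dbo.JobStatus
SET status = 'Successful', updated_at = sysutcdatetime()
WHERE job_name = 'FirstJob';

-- The second job (or an SSIS Execute SQL Task) reads it before proceeding:
SELECT status FROM dbo.JobStatus WHERE job_name = 'FirstJob';
```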
@Jason: Yeah, you could monitor it, or you could have a trigger start the second job when the end status is received. :)
