I need a way to execute a stored procedure whenever my SSIS package ends, regardless of whether it succeeds or fails.
Is there any way to do this without connecting failure events from each of my tasks? I’ve been looking for an OnPackageEnd event or something but I can’t see it.
Do any of you have any ideas?
In the package, put all the tasks in a container. Below the container, add the Execute SQL Task that runs the procedure and set the precedence constraint value to "Completion" (the line turns blue; it is green by default). That way the stored proc is executed regardless of the package status (success or failure).
I think the simplest thing is to add the execution of the proc as a second step in the job that executes the package; there you can specify that the job goes to the next step on failure as well as on success.
Alternatively, put the Execute SQL Task at the end of the process (on the success branch) and also in an event handler for failures (we use the event handler at the package level, not for individual steps). We do that for one step where we run the same proc with different input values depending on success or failure.
You can create an OnPostExecute event handler on the package and run the proc from an Execute SQL Task in there
OR
add all your current components inside a Sequence Container and drag the green arrow to a new Execute SQL Task that runs your procedure.
Double-click the green arrow and select "Completion" instead of "Success"
OnPostExecute fires for every task, so it is not a good fit for a requirement that expects an OnPackageEnd-style event.
Related
1st)
I have a Sequence Container.
It has 4 different Execute SQL Tasks and 4 different DFTs that insert data into different tables.
I want to implement a transaction, with or without the MSDTC service, so that on package failure all of the data is rolled back if any of the DFTs or Execute SQL Tasks fails.
How do I implement this? When I try to use the MSDTC service I get the "OLEDB Connection" error, and without MSDTC only the last Execute SQL Task is rolled back while the rest of the data stays inserted. How can I implement this in SSIS 2017?
2nd)
When I tried it without MSDTC by setting the connection's RetainSameConnection property to True and adding two more Execute SQL Tasks for begin transaction and commit, I ran into an issue with the event handler: I was not able to log errors into a separate table. Either the rollback works or the event handler does, but not both.
As soon as the error occurs, control goes to the event handler, and then everything is rolled back, including the work done by the task in the event handler.
3rd)
The Sequence Container is used for parallel execution of tasks, so only the particular task (of the 4) that failed was rolled back; the rest of the SQL tasks kept inserting data into their tables.
Thanks in Advance!! ;-)
One option I've used (without MSDTC) is to configure your OLE DB connection with RetainSameConnection=True (via the Properties window).
Then begin a transaction before your Sequence Container and commit it afterwards (all sharing the same OLE DB connection).
Works quite well and is pretty easy to implement.
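A minimal sketch of what those bracketing Execute SQL Tasks would run (all of them must use the same connection manager with RetainSameConnection=True; the rollback task is optional and wired with a Failure precedence constraint):

-- Execute SQL Task placed before the Sequence Container
BEGIN TRANSACTION;

-- Execute SQL Task placed after the Sequence Container (Success constraint)
COMMIT TRANSACTION;

-- Execute SQL Task placed after the Sequence Container (Failure constraint)
ROLLBACK TRANSACTION;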
According to my scenario:
I used a Sequence Container (which contains the different DFTs and tasks) and added 3 more Execute SQL Tasks:
1st: Begin Transaction T1 (before the Sequence Container)
2nd: Commit Transaction T1 (after the Sequence Container)
3rd: Rollback Transaction T1 (after the Sequence Container) with the precedence constraint set to Failure, i.e. the Execute SQL Task containing the rollback only runs when the Sequence Container fails.
Note: I tried to roll back this way, but only the nearest Execute SQL Task was rolled back and the rest of the data stayed inserted. So what's the solution? In that same Execute SQL Task I also truncate the tables the rows were being inserted into, so when the Sequence Container fails the Execute SQL Task clears all the data in the respective tables:
ROLLBACK TRANSACTION T1
GO
TRUNCATE TABLE Table_name1
GO
TRUNCATE TABLE Table_name2
GO
TRUNCATE TABLE Table_name3
IMPORTANT: to make the above work, make sure the connection manager's RetainSameConnection property is set to True (by default it is False).
Now, to log errors into user-defined tables we use an event handler. The scenario was that when the Sequence Container failed, everything got rolled back, including the inserts done by the Execute SQL Task in the event handler. So what's the solution?
When you are not using SSIS transaction properties, by default every task has its TransactionOption set to Supported. The Execute SQL Task in the event handler also has Supported, so it joins the same transaction. To make the event handler work properly, give that Execute SQL Task a different connection and set its TransactionOption to NotSupported. It will then not join the same transaction, and when an error occurs it will log it into the table.
Note: we are using the Sequence Container for parallel execution of tasks. What if the error occurs inside the Sequence Container in one of the tasks and that task does not cause the Sequence Container to fail? In that case, connect all the tasks serially. Yes, that defeats the point of the Sequence Container, but that is how I got my solution to work.
Hope it helps all! ;-)
We have a requirement where an SSIS job should be triggered based on the availability of a value in a status table we maintain. Keep in mind that we are not sure of the exact time the status will become available, so the SSIS process must continuously look for the value in the status table; if a value (e.g. 'success') is available, the job should be triggered. We have 20 different SSIS batch processes, each of which should be invoked when its respective status value becomes available.
What you can do is:
Schedule the SSIS package to run frequently.
For that scheduled package, assign the value from the table to a package variable.
Use either an expression to disable the task or a precedence constraint expression to let the package proceed (see the sketch below).
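As a rough sketch (the table and column names are assumptions), the Execute SQL Task in the scheduled package could map a single-row result to a package variable such as User::StatusFlag:

SELECT TOP (1) status_value          -- e.g. 'success'
FROM dbo.BatchStatus                 -- hypothetical status table
WHERE batch_name = ?                 -- parameter mapped from a package variable
ORDER BY created_at DESC;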
Starting an SSIS package takes some time, so I would recommend creating a package with the following structure:
A package variable Check_run of type Int, initial value 1440 (to stop the run after 24 hours if we run the check every minute). This avoids an infinite package run.
Set up a For Loop that checks whether Check_run is greater than zero and decrements it on each iteration.
Inside the For Loop, check your flag variable in an Execute SQL Task: select a single-row result and assign it to a variable, say Flag.
Create conditional execution branches based on the Flag variable value. If Flag is set to run, start the other packages. Otherwise, wait for a minute with the Execute SQL command WAITFOR DELAY '00:01' (sketched below).
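A rough sketch of the two Execute SQL Task statements inside the loop (the flag table is an assumption; the For Loop itself would use an EvalExpression like @Check_run > 0 and an AssignExpression like @Check_run = @Check_run - 1):

-- flag check: single-row result set mapped to the Flag variable
SELECT TOP (1) run_flag
FROM dbo.JobFlag;                    -- hypothetical flag table

-- wait branch: runs when Flag is not set, pauses one minute before the next iteration
WAITFOR DELAY '00:01';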
You mentioned the word trigger. How about creating a trigger that runs the packages when that status column meets the criteria?
Also, this is how to run a package from T-SQL:
https://www.timmitchell.net/post/2016/11/28/a-better-way-to-execute-ssis-packages-with-t-sql/
You might want to consider creating a master package that runs all the packages associated with this trigger.
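For reference, a minimal sketch of starting a catalog-deployed package from T-SQL via the SSISDB catalog procedures (the folder, project, and package names are placeholders):

DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
    @folder_name     = N'MyFolder',            -- placeholder folder
    @project_name    = N'MyProject',           -- placeholder project
    @package_name    = N'MasterPackage.dtsx',  -- placeholder master package
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

-- run synchronously so the caller waits for the package to finish
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type     = 50,                     -- 50 = system parameter
    @parameter_name  = N'SYNCHRONIZED',
    @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id;

Note that with SYNCHRONIZED set, start_execution waits for the package to finish, so calling this from a table trigger would hold up the triggering transaction.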
I would take @Long's approach, but enhance it by doing the following:
1.) Use an Execute SQL Task to query the status table for all records that pertain to the specific job function and load the results into a recordset. Note: the variable you load the recordset into must be of type Object.
2.) Create a Foreach Loop with an ADO enumerator to loop over the recordset.
3.) Do stuff.
4.) When the job is complete, go back to the status table and mark the record complete so that it is not processed again.
5.) Set the job to run periodically (e.g., minute, hourly, daily, etc.).
The enhancement here is that no flags are needed to govern the job. If records exist, the Foreach Loop does its work; if no records exist in the recordset, the job exits successfully. This simplifies the design.
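A rough sketch of the Execute SQL Task statements behind steps 1 and 4 (table and column names are assumptions):

-- step 1: full result set loaded into an Object-typed package variable
SELECT record_id, batch_name
FROM dbo.BatchStatus                 -- hypothetical status table
WHERE job_function = ?               -- parameter mapped from a package variable
  AND is_complete = 0;

-- step 4: mark the current record complete so it is not processed again
UPDATE dbo.BatchStatus
SET is_complete = 1
WHERE record_id = ?;                 -- mapped from the Foreach Loop's current row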
We're a little bit stuck on a problem; let me try my best to explain the scenario, the issues, and what we have tried.
We have a SQL Agent job which logs to a specific file with a date appended to the end of the name.
The first step checks what our log file should be, e.g. log_01012012.log; if it is incorrect, we update the sysjobsteps table so that all steps use the new file location, e.g. log_02012012.log. We do this by calling a stored proc in the first job step.
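For reference, one supported way to make that kind of update is msdb.dbo.sp_update_jobstep; this is only a hedged sketch (placeholder job name, step id, and path) and may not be exactly what our proc does:

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'My Job',                            -- placeholder job name
    @step_id = 2,                                     -- placeholder step to re-point
    @output_file_name = N'C:\Logs\log_02012012.log';  -- new log file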
While the table does get updated, the remaining steps keep using the old log file, presumably because the table is only read once when the job starts.
We have tried to restart the job within the stored proc with the following code:
EXEC msdb.dbo.sp_stop_job @job_name = 'My Job'
WAITFOR DELAY '00:00:15'
EXEC msdb.dbo.sp_start_job @job_name = 'My Job'
However, when we kill the job it appears to kill the stored procedure too (which I guess is a child of the job), which means it never gets to the point of restarting the job.
Is there a way in which a job can restart itself so it looks at the sysjobsteps table again and uses the correct location for the logging?
Things which might solve the issue would be:
Being able to restart a job from the job itself
Being able to refresh the job in some respect.
Anything I need to clarify I will do my best but we are currently stuck and all input will be greatly appreciated!
Go to SQL Server Agent in SSMS.
Expand Jobs.
Create a job.
Define the steps (a stored proc or a simple query).
Define the schedule (how often to run, or even start the job when SQL Server restarts).
Set a notification to email you when the job completes (succeeded/failed). The same job can also be scripted in T-SQL; see the sketch below.
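A rough sketch of scripting the same job in T-SQL (all names and the schedule are placeholders):

USE msdb;

-- placeholder name; for email notification also add @notify_level_email = 3 and
-- @notify_email_operator_name (the operator must already exist)
EXEC dbo.sp_add_job
    @job_name = N'My Job';

EXEC dbo.sp_add_jobstep
    @job_name = N'My Job',
    @step_name = N'Run proc',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.MyProc;';        -- placeholder proc or query

EXEC dbo.sp_add_jobschedule
    @job_name = N'My Job',
    @name = N'Every 5 minutes',
    @freq_type = 4,                        -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,                 -- subday unit = minutes
    @freq_subday_interval = 5;

EXEC dbo.sp_add_jobserver
    @job_name = N'My Job';                 -- attach the job to the local server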
hope this helps.
You could do something fancy with Service Broker. Something like:
1) Put a message into a Service Broker queue as the last step of your job. The content can be empty; it's just a token to say "hey... the job needs to be resubmitted".
2) Write an activation stored procedure for the queue that puts in a delay (like you're doing in your existing procedure) and then resubmits the job.
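A hedged sketch of what such an activation procedure could look like (the queue and procedure names are made up, and the message type/contract/service setup is omitted):

CREATE PROCEDURE dbo.ResubmitJobActivation
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER;

    -- pick up the token message that the job's last step sent
    RECEIVE TOP (1) @handle = conversation_handle
    FROM dbo.JobResubmitQueue;                       -- hypothetical queue

    IF @handle IS NOT NULL
    BEGIN
        END CONVERSATION @handle;
        WAITFOR DELAY '00:00:15';                    -- the delay mentioned above
        EXEC msdb.dbo.sp_start_job @job_name = N'My Job';
    END
END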
Or, instead of hardcoding the log file name in the job steps themselves, put that data in a table somewhere. Then, in the step that changes the location, define the success condition as "go to next step" and the failure condition as "go to step 1", and modify the code that changes the file location so that it returns an error (thus triggering the job step's failure condition).
I realise this is nearly a decade old, but I had the same problem and solved it as follows, using the code that the OP suggested:
EXEC msdb.dbo.sp_start_job @job_name = 'My Job A'
Simply create a matching pair of jobs, each of which has an identical first step (doing the actual work) and a final step containing the above code but pointing at the opposite job name (as far as I can tell, you can't restart a job from within itself). Just make sure you set the step behaviour in the Advanced tab so that this final step executes in each case, and the jobs will rumble away indefinitely.
I am running a bunch of database migration scripts. I find myself with a rather pressing problem: the business is waking up and expecting to see their data, and their data has not finished migrating. I also took the applications offline and they really need to be started back up. In reality "the business" is a number of companies, so I have a number of scripts running SPs in one query window like so:
EXEC [dbo].[MigrateCompanyById] 34
GO
EXEC [dbo].[MigrateCompanyById] 75
GO
EXEC [dbo].[MigrateCompanyById] 12
GO
EXEC [dbo].[MigrateCompanyById] 66
GO
Each SP calls a large number of other sub SPs to migrate all of the data required. I am considering cancelling the query, but I'm not sure at what point the execution will be cancelled. If it cancels nicely at the next GO then I'll be happy. If it cancels midway through one of the company migrations, then I'm screwed.
If I cannot cancel, could I ALTER the MigrateCompanyById SP and comment out all the sub SP calls? Would that also prevent the next one from running, while letting the one that is currently running complete?
Any thoughts?
One way to achieve a controlled cancellation is to add a table containing a cancel flag. You set this flag when you want to cancel execution, and your SPs check it at regular intervals and stop executing if appropriate.
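A minimal sketch of that idea (the table, column, and check placement are illustrative):

CREATE TABLE dbo.MigrationControl (CancelRequested BIT NOT NULL);
INSERT INTO dbo.MigrationControl (CancelRequested) VALUES (0);

-- inside MigrateCompanyById, between sub-proc calls:
IF EXISTS (SELECT 1 FROM dbo.MigrationControl WHERE CancelRequested = 1)
    RETURN;                                  -- stop cleanly at a safe point

-- from another session, to request cancellation:
UPDATE dbo.MigrationControl SET CancelRequested = 1;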
I was forced to cancel the script anyway.
When doing so, I noted that it cancels after the currently executing statement, regardless of where it is in the SP execution chain.
Are you bracketing the code within each migration stored proc with transaction handling (BEGIN, COMMIT, etc.)? That would enable you to roll back the changes relatively easily depending on what you're doing within the procs.
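For example, a hedged sketch of that kind of bracketing around a single company's migration (assuming the sub-procs don't open their own transactions):

SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC dbo.MigrateCompanyById 34;      -- all sub-proc work commits or rolls back together
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;                               -- surface the original error (SQL Server 2012+)
END CATCH;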
One solution I've seen: you have a table with a single record holding a bit value of 0 or 1. If that record is 0, your production application disallows access by the user population, enabling you to do whatever you need to; you then set the flag to 1 after your task is complete to let production continue. This might not be practical given your environment, but it can give you assurance that no users will be messing with your data through your app until you decide it's ready to be messed with.
You can use this method to report the execution progress of your script.
The way you have it now, every sproc is its own transaction, so if you cancel the script you will get a partial update, up to the point of the last successfully executed sproc.
You can, however, put it all in a single transaction if you need an all-or-nothing update.
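For example, a minimal sketch of wrapping the whole batch in one transaction (note that this holds locks and log space for the entire run):

SET XACT_ABORT ON;                       -- any error aborts and rolls back everything
BEGIN TRANSACTION;

EXEC [dbo].[MigrateCompanyById] 34;
EXEC [dbo].[MigrateCompanyById] 75;
EXEC [dbo].[MigrateCompanyById] 12;
EXEC [dbo].[MigrateCompanyById] 66;

COMMIT TRANSACTION;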
I have an SSIS Multicast object that splits my flow into 2 paths.
In the first path I insert the flow into another database.
In the second path I update the rows of the flow to show that they were inserted.
I need a way to make one path wait until the other path has finished (so I can handle any insert errors and not update the rows that failed to insert).
Any Help is appreciated.
I looked into this more and here is the answer I came up with:
On each output of the Multicast I put a Sort, then I join them using a Merge Join. After that I do a SQL update using an OLE DB Command on the rows that don't have an error value coming out of the Merge Join.
If you know roughly how long it will take for one fork to finish, you could use an OLE DB Command on your second fork. Type in "waitfor delay '00:00:15'" to cause the server to wait 15 seconds before going to the next step.
You can't do that in SSIS. You can only enforce precedence constraints (i.e. enforce order of the run) at the Control Flow level.
In that case I would not use a Multicast but a regular data flow path, so that the second step doesn't start before the first one has completed successfully.
The best option is to actually use a Script Component. From there you can do the insert and the update sequentially, with try/catch blocks.