SSIS FOR LOOP ends based on the value in a sql table - sql-server

I have created an SSIS package with a For Loop. I have a table in SQL Server that stores the time the package started, plus a column Remainging_Time(Minute) holding a countdown value in minutes. I want the For Loop to keep running until the Remainging_Time(Minute) value reaches 0.

I think your problem is twofold. First, the For Loop performs a task, then checks the eval expression. If your counter is really just tracking elapsed time, the time may expire while the loop is still performing whatever tasks you assign.
If you don't need that level of precision, I would store Remainging_Time as a package variable, not in a database table. You can use a SQL task with expressions to update the package variable as the last task inside your loop. It doesn't make sense to make a DB call on every pass through the loop unless some external process is updating that value in the DB... and if that's the case, it isn't really a for loop, it's more of a while loop, in which case I'd use a script task instead of a For Loop and handle whatever you're trying to do in there.
Let me know if that makes sense. I can add more detail if I know more about what you're trying to do.
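If the countdown does stay in the database, the Execute SQL task at the end of each iteration could recompute it from the stored start time. This is only a sketch: the table name, the 60-minute window, and the variable name are assumptions, not from the question.

```sql
-- Assumed table: dbo.PackageRun(StartTime DATETIME2, [Remainging_Time(Minute)] INT)
-- Recompute the countdown from the stored start time, assuming a 60-minute window.
UPDATE dbo.PackageRun
SET [Remainging_Time(Minute)] =
    CASE WHEN DATEDIFF(MINUTE, StartTime, GETDATE()) >= 60 THEN 0
         ELSE 60 - DATEDIFF(MINUTE, StartTime, GETDATE())
    END;

-- Map this single result to an SSIS variable, e.g. @[User::RemainingTime],
-- and use @[User::RemainingTime] > 0 as the For Loop's EvalExpression.
SELECT [Remainging_Time(Minute)] FROM dbo.PackageRun;
```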

Related

Snowflake: is it ever okay to run infinite loops? I need to track changes from a table every second

My use case is that I need to track all the changes (insertions/updates/deletions) from a table.
My idea is to create a stream on that table, and consume that stream every second or so, exporting all the changes to another history table (mytable_history).
A task would be the perfect candidate for that. But unfortunately, a task can only be scheduled for 1 minute or more. I'll be getting same-row updates per second, so I'd really need the task to run every second at least.
My idea now is to run an infinite LOOP, using SYSTEM$WAIT to consume the stream every 1 second and inserting the data to the history table.
Is this a bad idea? What could go wrong?
Thanks
I can add two points to your idea:
Please note that "DML updates to the source object in parallel transactions are tracked by the change tracking system but do not update the stream until the explicit transaction statement is committed and the existing change data is consumed." (https://docs.snowflake.com/en/user-guide/streams-intro.html#table-versioning)
Your warehouse would have to run all day to process this, which is why your costs would increase noticeably.
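As a sketch of the idea in the question (stream and history table names are the asker's; whether this is wise is exactly what the points above question), a Snowflake Scripting block could look like:

```sql
-- Assumed objects: source table MYTABLE, history table MYTABLE_HISTORY
CREATE STREAM IF NOT EXISTS mytable_stream ON TABLE mytable;

EXECUTE IMMEDIATE $$
BEGIN
  LOOP
    -- Consuming the stream in a DML statement advances its offset
    INSERT INTO mytable_history
      SELECT * FROM mytable_stream;
    -- Pause one second between reads (this keeps the warehouse running)
    CALL SYSTEM$WAIT(1);
  END LOOP;
END;
$$;
```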

ssis-How do I check job status in a table continuously from SSIS control flow?

We have a requirement where an SSIS job should trigger based on the availability of a value in a status table. The point to remember is that we are not sure about the exact time the status will become available, so the SSIS process must continuously look for the value in the status table; if a value (e.g. "success") is available, the job should trigger. We have 20 different SSIS batch processes, each of which should be invoked when its respective/related status value is available.
What you can do is:
Schedule the SSIS package to run frequently.
In that scheduled package, assign the value from the table to a package variable.
Use either an expression to disable the task, or a precedence-constraint expression to let the package proceed.
Starting an SSIS package takes some time, so I would recommend creating a package with the following structure:
Package variable Check_run of type int, initial value 1440 (to stop the run after 24 hours if we check every minute). This avoids an infinite package run.
Set up a For Loop that checks whether Check_run is greater than zero and decrements it on each iteration.
Inside the For Loop, check your flag in an Execute SQL task: select a single result value and assign it to a variable, say, Flag.
Create conditional execution branches based on the Flag variable's value. If Flag is set to run, start the other packages. Otherwise, wait for a minute with the Execute SQL command waitfor delay '00:01:00'.
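The flag check and the pause inside the loop could be two small Execute SQL statements. Table and column names here are hypothetical:

```sql
-- Assumed status table: dbo.BatchStatus(batch_name, status_value)
-- Map this single result to the SSIS variable @[User::Flag]
SELECT TOP (1) status_value
FROM dbo.BatchStatus
WHERE batch_name = 'MyBatch';

-- In the "not ready" branch: pause one minute before the next check
WAITFOR DELAY '00:01:00';
```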
You mentioned the word trigger. How about creating a database trigger that runs the packages when that status column meets the criteria?
Also, this is how to run a package from T-SQL:
https://www.timmitchell.net/post/2016/11/28/a-better-way-to-execute-ssis-packages-with-t-sql/
You might want to consider creating a master package that runs all the packages associated with this trigger.
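A sketch of that trigger, assuming the packages are deployed to the SSIS catalog (folder, project, and table names are hypothetical). Note that catalog.start_execution is asynchronous by default, which keeps the triggering transaction short:

```sql
CREATE TRIGGER trg_StatusReady ON dbo.BatchStatus
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM inserted WHERE status_value = 'success')
    BEGIN
        DECLARE @exec_id BIGINT;
        -- Create and start a catalog execution of the master package
        EXEC SSISDB.catalog.create_execution
             @folder_name  = N'MyFolder',
             @project_name = N'MyProject',
             @package_name = N'Master.dtsx',
             @execution_id = @exec_id OUTPUT;
        EXEC SSISDB.catalog.start_execution @exec_id;
    END
END;
```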
I would take @Long's approach, but enhance it by doing the following:
1.) Use an Execute SQL Task to query the status table for all records that pertain to the specific job function and load the results into a recordset. Note: the variable you load the recordset into must be of type Object.
2.) Create a Foreach Loop enumerator of type ADO to loop over the recordset.
3.) Do stuff.
4.) When the job is complete, go back to the status table and mark the record complete so that it is not processed again.
5.) Set the job to run periodically (e.g., minute, hourly, daily, etc.).
The enhancement here is that no flags are needed to govern the job. If records exist, the Foreach Loop does its job; if the recordset is empty, the job exits successfully. This simplifies the design.
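The two database touch-points in steps 1 and 4 might look like this (table and column names are made up for illustration):

```sql
-- Step 1: load pending records into the Object variable via Execute SQL Task
SELECT job_id, payload
FROM dbo.JobStatus
WHERE job_function = 'nightly_load'
  AND completed = 0;

-- Step 4: inside the Foreach Loop, mark each processed record complete
UPDATE dbo.JobStatus
SET completed = 1,
    completed_at = SYSUTCDATETIME()
WHERE job_id = ?;  -- parameter mapped from the loop variable
```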

Cascading precedence constraints SSIS

I have a question on how to set up precedence constraints in SSIS.
I have the following flow in the package:
The execute SQL task returns a string value in this format '1111' that is then stored in the variable called "data"
The point of it all is to control which script tasks gets to execute
For example, if the value is "1111" then all 4 scripts get to run.
If the value is "1011" then scripts 1,3,4 get to run... you get the picture
The scripts DEPEND on the previous one in all cases.
The constraints evaluate an expression such as SUBSTRING(@[User::data], 2, 1) == "1" for script 2, for example.
The problem:
If one of the scripts doesn't run, then the next one won't either (because its constraint never gets evaluated). For example, for data = "1011", scripts 3 and 4 never run because number 2 never ran...
Do you know a better way to make this work?
Using SQL Server 2008 with BIDS
I agree with @Iamdave's direction: modify each script to do the check there, deciding whether or not it should execute, rather than using expression constraints.
However, because someone might want to do this with Data Flow Tasks or similar, here is a way to do it with SSIS components. Your problem is that you want to conditionally execute a task, and whether or not that task executes, you then want to conditionally execute another task. By placing each script task in a Sequence Container, the precedence between containers will always let the next container execute, as long as the prior container does not fail and the constraint carries no expression. Inside each container you still need conditional precedence: add a dummy task of some kind ahead of the script task and put the expression constraint on that precedence.
Add some conditional code into your scripts.
For example, at the start of script task 1 (in an SSIS C# script task):

    string data = (string)Dts.Variables["User::data"].Value;
    if (data.Substring(0, 1) == "1")
    {
        // do things
    }
    // else: do nothing

And then in script task 2:

    string data = (string)Dts.Variables["User::data"].Value;
    if (data.Substring(1, 1) == "1")
    {
        // do things
    }
    // else: do nothing
and just make sure that all the data passes through, only being transformed if the right character in your variable is met.

Persist Data in SSIS for Next Execution

I have data to load where I only need to pull records since the last time I pulled this data. There are no date fields to save this information in my destination table so I have to keep track of the maximum date that I last pulled. The problem is I can't see how to save this value in SSIS for the next time the project runs.
I saw this:
Persist a variable value in SSIS package
but it doesn't work for me because there is another process that purges and reloads the data separate from my process. This means that I have to do more than just know the last time my process ran.
The only solution I can think of is to create a table but it seems a bit much to create a table to hold one field.
This is a very common thing to do. You create an execution table that stores the package name, the start time, the end time, and whether or not the package failed/succeeded. You are then able to pull the max start time of the last successfully ran execution.
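Such an execution table and the high-water-mark query could look like this (all names are illustrative):

```sql
-- Hypothetical execution log table
CREATE TABLE dbo.PackageExecutionLog (
    package_name SYSNAME,
    start_time   DATETIME2,
    end_time     DATETIME2,
    succeeded    BIT
);

-- At the start of the package: pull the max start time of the
-- last successful run, to use as the incremental-load cutoff
SELECT MAX(start_time)
FROM dbo.PackageExecutionLog
WHERE package_name = 'LoadIncremental.dtsx'
  AND succeeded = 1;
```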
You can't persist anything in a package between executions.
What you're talking about is a form of differential replication and this has been done many many times.
For differential replication it is normal to store some kind of state in the subscriber (the system reading the data) or the publisher (the system providing the data) that remembers what state you're up to.
So I suggest you:
Read up on differential replication design patterns
Absolutely put your mind at rest about writing data to a table
If you end up having more than one source system or more than one source table, your storage table is not going to have just one record. Have a think about that. I answered a question like this the other day; you'll find over time that you'll add handy things like the last time the replication ran, how long it took, how many records were transferred, etc.
Is it viable to have a SQL table with only one row and one column?
TTeeple and Nick.McDermaid are absolutely correct, and you should follow their advice if humanly possible.
But if for some reason you don't have access to write to an execution table, you can always use a script task to read/write the last loaded date to a text file on whatever local file system you're running SSIS on.

Run SQL Agent Job in endless loop

There is an SQL Agent Job containing a complex Integration Services Package performing some ETL Jobs. It takes between 1 and 4 hours to run, depending on our data sources.
The Job currently runs daily, without problems. What I would like to do now is to let it run in an endless loop, which means: When it's done, start over again.
The scheduler doesn't seem to provide this option. I found that it would be possible to use the steps interface to go to step one after the last step is finished, but there's a problem using that method: If I need to stop the job, I would need to do that in a forceful way. However I would like to be able to let the job stop after the next iteration. How can I do that?
Thanks in advance for any help!
Since neither Martin nor Remus created an answer, here is one so the question can be accepted.
The best way is to simply set the run frequency to a very low value, like one minute. If it is already running, a second instance will not be created. If you want to stop the job after the current run, simply disable the schedule.
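Disabling the schedule (or the whole job) can also be done in T-SQL; job and schedule names below are placeholders:

```sql
-- Stop future runs while letting the current run finish
EXEC msdb.dbo.sp_update_schedule
     @name    = N'EveryMinute',
     @enabled = 0;

-- Or disable the whole job instead of just its schedule
EXEC msdb.dbo.sp_update_job
     @job_name = N'MyEtlJob',
     @enabled  = 0;
```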
Thanks!
So, if I understand you correctly: when you want to stop the job, it should stop after the current iteration.
You can do one thing here.
Have a configuration table holding a boolean value.
Add a step to the job that, before each iteration, checks the value in that table. Only if it's true does it run the ETL packages.
So, as long as it finds the value true, it follows the endless loop.
When you want to stop the job, set the value in the table to false.
When the current iteration completes, the job will read the value again, find it false, and the loop will stop.
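A minimal sketch of that control table and the first job step (table and flag names are hypothetical; failing the step is one way to make the Agent job end cleanly):

```sql
-- One-time setup: control flag, initially "keep running"
CREATE TABLE dbo.JobControl (keep_running BIT NOT NULL);
INSERT INTO dbo.JobControl VALUES (1);

-- First step of each iteration: stop the loop when the flag is off
IF NOT EXISTS (SELECT 1 FROM dbo.JobControl WHERE keep_running = 1)
    RAISERROR('Stop requested; ending loop.', 16, 1);

-- To stop the job after the current iteration finishes:
UPDATE dbo.JobControl SET keep_running = 0;
```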
You can always set the "on success" action to go to step one, creating an endless loop, but as you said, if you want to stop the job you'll have to force it.
Other than that, use a simple control table in the database with a status, plus a second job that queries this table and fires your main job depending on the status. There are a couple of possible architectures here; just pick the one that suits you best.
You could use service broker within the database. The job you need to run can be started by queuing a 'start' message and when it finishes it can send itself a message to start again.
To pause the process you can just deactivate the queue processor.
