I am trying to use an SSIS package to insert data from a file into a table, but only if all the data in the file is good. I have read around and realise that I can split my good data and bad data with a Conditional Split.
However, I cannot come up with a way to avoid writing the good data when there are bad rows.
I can solve my problem using a staging table. I just thought I would ask whether I am missing a more elegant way to do this within the SSIS package, rather than loading first and then transforming with T-SQL.
Thanks
The SSIS way allows wrapping the actions in a transaction. For your task, you need to count the bad rows in the data flow, and if there is at least one bad row, do nothing, i.e. roll back.
Below is how I would do it in pure SSIS. Create a Sequence Container and set TransactionOption=Required on it, then move your data flow into the container. Add a Row Count transformation to your bad-rows path and store its result in a variable. After the Data Flow inside the container, create a conditional precedence constraint that checks whether the bad_rowcount variable is > 0, and next add a small Script Task which raises an error to roll back the transaction.
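As an illustration, here is a minimal sketch of that Script Task body in C#, assuming the bad-row count is stored in a variable named User::bad_rowcount (an example name) that is listed in the task's ReadOnlyVariables:

// Main() of the Script Task. Failing the task fails the Sequence Container,
// which rolls back the transaction it owns.
public void Main()
{
    int badRows = (int)Dts.Variables["User::bad_rowcount"].Value;

    if (badRows > 0)
    {
        Dts.Events.FireError(0, "Validation", "Found " + badRows + " bad rows, rolling back.", string.Empty, 0);
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
    else
    {
        Dts.TaskResult = (int)ScriptResults.Success;
    }
}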
Pure SSIS - yes! Simpler than using a staging table - not sure.
I am creating an SSIS package to import CSV file data into a SQL Server table.
Some of the rows in the CSV files will have missing values.
For example, if a row has the format: value1,value2,value3 and value2 is missing,
then it will render as: value1,,value3 in the csv file.
When the above happens (value2 is missing) in my SSIS package, I want NULL to go into the receiving SQL Server column that would hold value2.
I understand that I can add a "Script" task to my SSIS package to apply this rule. However, I'm concerned that this will drastically reduce the performance of my SSIS package. I'm not an expert on the inner workings of SSIS/SQL Server, but I'm concerned that the script will cause my package to lose "BULK INSERT" capabilities (and other efficiencies), since it will have to inspect every row and apply the changes as needed.
Can anyone confirm if adding such a script will cause major performance impacts? Or does the SSIS/SQL-Server engine run the script on every row and then bulk-insert? Is there another way I can apply this rule without taking a performance hit?
Firstly, you can use a Script Task when required. A Script Task is executed only once per execution of the whole package, not once per row. For per-row work there is another component, the Script Component. When the regular SSIS tasks are not enough to achieve what you want, you can certainly use a Script Component. I don't believe it is a performance killer unless you implement it badly.
Secondly, for this particular requirement you can simply use a Flat File Source to import your CSV file. With the "Retain null values from the source as null values in the data flow" option checked, it will put NULL into the column when there is no value. I'm assuming this is a valid CSV file and each row has the correct number of commas for every field (total fields - 1, actually) even if the value is empty or null for some fields.
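If you do end up handling it in code, a minimal Script Component (transformation) sketch in C# might look like the following. The column name Value2 is a placeholder for your real column, and its Usage Type must be set to ReadWrite:

// Per-row method of a Script Component transformation.
// SSIS generates Row.<ColumnName> and Row.<ColumnName>_IsNull properties
// from the input columns you select; Value2 here is only a placeholder.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Replace an empty or whitespace-only value with a database NULL.
    if (!Row.Value2_IsNull && Row.Value2.Trim().Length == 0)
    {
        Row.Value2_IsNull = true;
    }
}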
I have the below within the Data Flow area. The problem I'm experiencing is that even if the result is 0 rows, it still creates the file.
Can anyone see what I'm doing wrong here?
This is pretty much expected and known annoying behavior.
SSIS will create an empty flat file, even if "Column names in the first data row" is unchecked.
The workarounds are:
Remove the file with a File System Task if @RowCountWriteOff = 0, just after the execution of the data flow.
Alternatively, do not start the data flow at all if the expected number of rows in the source is 0.
Update 2019-02-11:
Issue I have is that I have 13 of these export to csv commands in the data flow and they are costly queries
Then double-querying the source ahead of time to check the row count will be even more expensive; it is probably better to reuse the value of the @RowCountWriteOff variable.
The initial design has 13 data flows; adding 13 constraints and 13 File System Tasks to the main control flow will make the package more complex and harder to maintain.
Therefore, the suggestion is to use an OnPostExecute event handler, so the cleanup logic is isolated to its particular data flow.
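A minimal sketch of such a cleanup Script Task in C#, assuming the output path is held in a variable named User::OutputFilePath and the row count in User::RowCountWriteOff (example names, passed in as ReadOnlyVariables):

public void Main()
{
    // Row count captured by the Row Count transformation in the data flow,
    // and the destination path of the flat file (variable names are examples).
    int rowsWritten = (int)Dts.Variables["User::RowCountWriteOff"].Value;
    string outputFile = (string)Dts.Variables["User::OutputFilePath"].Value;

    // If the data flow wrote nothing, delete the empty file it created.
    if (rowsWritten == 0 && System.IO.File.Exists(outputFile))
    {
        System.IO.File.Delete(outputFile);
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}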
Update 1 - Adding more details based on OP comments
Based on your comment, I will assume that you want to loop over many tables using SQL commands, check whether each table contains rows, and if so export the rows to a flat file, otherwise ignore the table. I will list the steps you need to achieve that and provide links that contain more details for each step.
First, you should create a Foreach Loop Container to loop over the tables.
Then add an Execute SQL Task with a count command (SELECT COUNT(*) FROM ....) and store the result set in a variable.
Add a Data Flow Task that imports data from an OLE DB Source to a Flat File Destination.
After that, add a precedence constraint with an expression leading to the Data Flow Task, with an expression similar to @[User::RowCount] > 0.
Also, it is worth checking the links I provided because they contain a lot of useful information and step-by-step guides.
Initial Answer
Preventing SSIS from creating empty flat files is a common issue for which you can find a lot of references online; there are many suggested workarounds and methods that may solve the issue:
Try to set the Data Flow Task Delay Validation property to True
Create another Data Flow Task within the package, used only to count the rows in the source; if the count is greater than 0, the precedence constraint should lead to the other Data Flow Task
Add a File System Task after the Data Flow Task which deletes the output file if RowCount is 0; you should set the precedence constraint expression to ensure that.
References and helpful links
How to prevent SSIS package creating empty flat file at the destination
Prevent SSIS from creating an empty flat file
Eliminating Empty Output Files in SSIS
Prevent SSIS for creating an empty csv file at destination
Check for number of rows returned and do not create empty destination file
Having seen other questions with answers that don't totally address what I am after, I am wondering how in SSIS to use an OLE DB Command transformation to do an Insert and immediately get the resulting primary key for each row inserted as a new column, all within the same Data Flow Task. That sounds like it should be a common, built-in, fairly simple thing to ask for in SSIS, right?
So the obvious first choice for me would be to use an OLE DB Command where I do an INSERT and include an OUTPUT clause in my command:
INSERT INTO dbo.MyReleaseTable(releaseDate)
OUTPUT ?=Inserted.id
VALUES (?)
Only I can't figure out how to do this in an OLE DB Command (with an output) and it not complain. I've read about using stored procedures to do this, so am I required to use a stored procedure if I want to do this?
Let's say this won't work. I could use a Script Transformation and execute direct SQL in that, right? Well if that's what I must do, then the line between using custom code and SSIS block-components gets blurred and I am tempted to throw SSIS away and just do the whole ETL in code.
Then I hear talk about using an Execute SQL task. So now I can't even do 1 data flow within 1 data flow task? Am I getting that right? I'd like to keep 1 single data flow contained within 1 data flow task and not have to break my 1 flow out between separate tasks.
If it turns out that this seemingly simple data flow objective is not built into SSIS then I will consider dumping SSIS altogether. Talend has a free ETL offering, don't they?
Well, this can be done in SSIS inside the Data Flow, but with some tricks. You need to create a stored procedure with input and output parameters and call it in the Data Flow, as described here, fetching the result value.
Drawbacks of this approach:
You need to create a Stored Procedure
Each row is processed with the SP, which causes an implicit transaction per row instead of batch processing. This can slow down your package.
A solution without the performance penalty: do it in two Data Flows. The first inserts the values into some temp table; the second Data Flow runs a SQL MERGE command at the OLE DB Source and handles the OUTPUT data as you wish. All of this runs inside a transaction, handled either by MSDTC or by yourself.
I'm trying to manipulate a column in SSIS which looks like the below after I removed unwanted rows with a Derived Column and a Conditional Split in my Data Flow Task. The source for this is a flat file.
XXX008001161022061116030S1TVCO3057
XXX008002161022061146015S1PUAG1523
XXX009001161022063116030S1DVLD3002
XXX009002161022063146030S1TVCO3057
XXX009003161022063216015S1PUAG1523
XXX010001161022065059030S1MVMA3020
XXX010002161022065129030S1TVCO3057
XXX01000316102206515901551PPE01504
The first three digits from the left (starting with "008" in the first row) represent a series, and the next three ("001") represent a number within that series. What I need is to renumber the series part sequentially, starting from "001", all the way to the end.
The desired result would thus look like:
XXX001001161022061116030S1TVCO3057
XXX001002161022061146015S1PUAG1523
XXX002001161022063116030S1DVLD3002
XXX002002161022063146030S1TVCO3057
XXX002003161022063216015S1PUAG1523
XXX003001161022065059030S1MVMA3020
XXX003002161022065129030S1TVCO3057
XXX00300316102206515901551PPE01504
...
My potential solution would be to load the file into a temporary database table and query it with SQL from there, but I am trying to avoid this.
The final destination is a flatfile.
Does anybody have any ideas how to pull this off in SSIS? Other solutions are appreciated also.
Thanks in advance
I would definitely use the staging table approach and use window functions to accomplish this. I could only see a case for doing it in SSIS if SSIS were on a different machine than the database engine and there were a need to offload the processing to the SSIS box.
In that case I would create a script transformation. You can process each row and make the necessary changes before passing the row to the output. You can use C# or VB.
There are many examples out there. Here is an MSDN article - https://msdn.microsoft.com/en-us/library/ms136114.aspx
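If you go the Script Component route, a minimal sketch of the per-row logic in C#, assuming a single string input column named Line (a placeholder name, with Usage Type ReadWrite) and that the rows arrive ordered by the original series number:

// Class-level fields keep state between rows for the duration of the data flow.
private string lastSeries = null;   // series value seen on the previous row
private int newSeries = 0;          // sequential replacement number

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    string line = Row.Line;                  // "Line" is a placeholder column name
    string series = line.Substring(3, 3);    // characters 4-6 hold the series

    // Start the next sequential number whenever the series value changes.
    if (series != lastSeries)
    {
        newSeries++;
        lastSeries = series;
    }

    // Rebuild the line with the renumbered series, zero-padded to 3 digits.
    Row.Line = line.Substring(0, 3) + newSeries.ToString("D3") + line.Substring(6);
}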
I am trying to create a transformation which takes values from a Table Input (let's say 10 rows) and in turn creates variables from the values in those rows. For each row in the original set, I then need to run a new transformation using the variables.
How can I loop through a bunch of rows, one by one, reading it into variables which will be used later in a transformation of its own?
Looping in PDI is a bit complicated.
You can use the following procedure:
Create a job in PDI. First, execute a transformation which reads or generates the rows you need and uses the "Copy rows to result" step.
After that, create an Execute Job step in the first job. Here you have to check the "Execute for every input row" option in the "Advanced" settings of the job step. In this job you create and execute the final transformation, which transforms your data.
In this transformation you have to use the "Get rows from result" step. Here you can finally read the variables you have defined before.
Hope I could help you.
Best regards.