I have an SSIS task that reads the contents of a CSV file and then inserts them into a table.
For one of the columns I need to trim the values before inserting them into the table. How can I modify the CSV data before the insert?
Add a Derived Column transformation in the data flow, between the Flat File Source and the ADO/ODBC/OLE DB destination.
If you want to trim, then you need to apply both a left and a right trim operation. I favor creating new columns versus renaming existing as I find it's easier to debug.
Assuming I have an inbound column named Col1, I would define a new column called Col1_Trimmed with the following expression. And remember that SSIS column names are case sensitive.
LTRIM(RTRIM([Col1]))
There are caveats about what counts as whitespace in the documentation for LTRIM.
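As a concrete sketch, the Derived Column grid for the example above would look like:

Derived Column Name:  Col1_Trimmed
Derived Column:       <add as new column>
Expression:           LTRIM(RTRIM([Col1]))

Downstream components (and the destination mapping) then reference Col1_Trimmed instead of Col1.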
I'm a bit new to SSIS and wanted to know how we can insert data from a flat file into two SQL tables. Some of the columns of the flat file are inserted into one table, and based on an indicator present in the flat file I want to insert the next column into another table, on the basis of account number.
Some additional records are now being sent in the flat file along with the old data.
As mentioned, based on the indicator, the memo (a new field sent in the flat file) needs to be inserted into the history table. I tried to Conditional Split the data and then insert, but somehow that didn't work; then I tried writing a query in an OLE DB Command, but that is not working either.
So I need a way to import CSVs that vary in column names, column order, and number of columns. They will always be CSV and of course comma-delimited.
Is it possible to generate both FMT and a temp table creation script of a CSV file?
From what I can gather, you need one or the other. For example, you need the table to generate the FMT file using the bcp utility. And you need the FMT file to dynamically build a create script for a table.
Using just SQL to dynamically load text files, there is no quick way to do this. I see one option:
Get the data into SQL Server as a single column (bcp it in, or use T-SQL and OPENROWSET to load it, SSIS, etc.). Be sure to include in this table a second column that is an identity (I'll call it "row_nbr"). You will need this to find the first row, to get the column names from the header in the file.
Parse the first record ("where row_nbr = 1") to get the header record. You will need a string parse function (find one online, or create your own) to substring out each column name.
Build a dynamic SQL statement to create a new table with the parsed-out number of fields you just found. You must calculate lengths and use a generic "varchar" data type, since you won't know how to type the data. Use the column names found above.
Once you have a table created with the correct number of adequately sized columns, you can create the format file.
I assumed, in my answer, that you are comfortable with doing all these things, and just shared the logical flow at a high level. There's a rough sketch below, and I can add more detail if you need it.
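A rough T-SQL sketch of steps 1-3 (file path and table names are placeholders; STRING_SPLIT needs SQL Server 2016+ and does not guarantee token order, so substitute an ordinal-aware split function if column order matters - that's the string parse function mentioned above):

-- 1. Stage the file, one whole line per row, into a single varchar column.
--    FIELDTERMINATOR '\0' (a character not in the file) keeps each line intact.
CREATE TABLE dbo.RawStage (line VARCHAR(MAX));

BULK INSERT dbo.RawStage
FROM 'C:\data\import.csv'
WITH (FIELDTERMINATOR = '\0', ROWTERMINATOR = '\n');

-- Copy into a table with an identity, so the header row can be addressed.
-- (Caveat: without an ORDER BY, row order is not strictly guaranteed.)
CREATE TABLE dbo.RawLines (row_nbr INT IDENTITY(1,1), line VARCHAR(MAX));
INSERT INTO dbo.RawLines (line) SELECT line FROM dbo.RawStage;

-- 2. Parse the header record (row_nbr = 1) into column names.
DECLARE @header VARCHAR(MAX) =
    (SELECT line FROM dbo.RawLines WHERE row_nbr = 1);

-- 3. Build and run a CREATE TABLE with generic varchar columns.
DECLARE @sql NVARCHAR(MAX) =
    'CREATE TABLE dbo.ImportTarget ('
    + STUFF((SELECT ', ' + QUOTENAME(LTRIM(RTRIM(value))) + ' VARCHAR(255)'
             FROM STRING_SPLIT(@header, ',')
             FOR XML PATH('')), 1, 2, '')
    + ');';

EXEC sys.sp_executesql @sql;

With the table in place, bcp can then generate the format file against it, per step 4.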
I have been searching the internet for a solution to my problem but I cannot seem to find any info. I have a large single text file (10 million rows), and I need to create an SSIS package to load these records into different tables based on the transaction group assigned to each record. That is, Tx_Grp1 records would go into the Tx_Grp1 table, Tx_Grp2 records would go into the Tx_Grp2 table, and so forth. There are 37 different transaction groups in the single delimited text file; records are inserted into this file in the order they actually occurred (by time). Also, each transaction group has a different number of fields.
Sample data file
date|tx_grp1|field1|field2|field3
date|tx_grp2|field1|field2|field3|field4
date|tx_grp10|field1|field2
.......
Any suggestion on how to proceed would be greatly appreciated.
This task can be solved with SSIS, just with some experience. Here are the main steps and discussion:
Define a Flat File data source for your file, describing all columns. A possible problem here is different data types of fields based on the tx_group value. If that is the case, I would declare all fields as strings long enough to hold the data, and convert their types later in the data flow.
Create an OLE DB connection manager for the DB you will use to store the results.
Create a main data flow where you will process the file, and add a Flat File Source.
Add a Conditional Split to the output of the Flat File Source, and define as many filters and outputs as you have transaction groups (see the sketch after these steps).
For each transaction group output, add a Data Conversion for fields if necessary. Note that you cannot change the data type of an existing column; if you need to cast a string to an int, create a new column.
Add an OLE DB Destination for each destination table. Connect it to the proper transaction group output, and map the fields.
Basically, you are done. Test the package thoroughly on a test DB before using it on a production DB.
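A sketch of those Conditional Split conditions, assuming the group column was named TxGroup in the Flat File Source (the name is illustrative):

Case 1, output Tx_Grp1:   [TxGroup] == "tx_grp1"
Case 2, output Tx_Grp2:   [TxGroup] == "tx_grp2"
...
Case 37, output Tx_Grp37: [TxGroup] == "tx_grp37"

Rows that match no condition fall through to the default output, which is a handy place to catch unexpected group values.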
I have a CSV file where there is a header row and data rows in the same file.
I want to get information from both rows during the same load.
What is the easiest way to do this?
i.e. File Example - Import.CSV
2,11-Jul-2011
Mr,Bob,Smith,1-Jan-1984
Ms,Jane,Doe,23-Apr-1981
In the first row, there is a count of the number of rows and the date of transmission.
The second and subsequent rows contain the actual data, in this case Title, FirstName, LastName, Birthdate.
SQL Server Integration Services Conditional Split Transformation should do it.
I wonder what you would do with that info in the pipeline. However, there is only one solution to read it in one pass (take a look at the notes/limitations at the end):
Create a data flow
Put a Flat File Source component in it and set it up the way you want
Add a Script Component (transformation) to count the number of rows
Add a Conditional Split transformation where the condition is mycounter == 0
One path from the Conditional Split will be the first row of the file (mycounter == 0) and the other path will be the rest of the rows (2 in your example); see the sketch after the notes below.
Note #1: the file source can set only one data type for each column in the source. This means that if the first column of your data is a string (Mr, Ms, ...) then you have to set it as a string data type in the source. Otherwise, if you set it as an integer (DT_Ix) it will fail as soon as it encounters a row with string data (Mr, Ms, ...) in the first column of the file. This applies to all columns, not just the first one.
Note #2: SSIS will see only the number of columns you told it to. This means you have to have the same number of columns in EACH row. Otherwise, you have a ragged CSV file and need to take another approach - search the Internet. But those solutions also require a different CSV layout.
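As a sketch, the split then looks like this (mycounter is the counter column the Script Component adds, starting at 0 for the first row):

Output "Header":  mycounter == 0    (the record count and transmission date)
Default output:   all remaining rows (the Title/FirstName/LastName/Birthdate data)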
Answers in the following links explain how to load parent-child data from a flat file into an SQL Server database when both parent and child rows exist in the same file next to each other.
How do I split flat file data and load into parent-child tables in database?
How to load a flat file with header and detail data into a database using SSIS package?
I have to work in a DTS environment in 2005 (too complicated to explain) and I have a comma delimited text file that is to be appended to the main table. I'd like to pull the last column in the text file out for the first record and use it as select criteria for a delete command. But how can I do this in the older DTS environment?
Here's the line of foobar data
9,36,7890432174,2007-12-17 00:00:00.000,21,15.22,99,11,49,28,2009-07-12 00:00:00
What I want to do is create a SQL statement that will delete all the records where a certain column is equal to "2009-07-12 00:00:00".
Thanks.
There are at least two ways to implement this in DTS.
The first is to
load the text file into a staging table
select the date value from the staging table and assign it to a package variable
carry out the delete using the package variable as an input parameter
insert from the staging table into the main table
clear down the staging table
This assumes that there is some way to identify the order of the rows in the text file from the data. If not, you could add an identity column to the staging table definition.
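For illustration, here are steps 2-4 of that first approach in T-SQL (dbo.Staging, dbo.Main, and the column names are hypothetical; in DTS the date would be captured into a package variable via an Execute SQL task output parameter):

DECLARE @cutoff DATETIME;

-- step 2: pull the last-column date from the first staged row
SELECT @cutoff = tx_date FROM dbo.Staging WHERE id = 1;

-- step 3: delete the existing records that match that date
DELETE FROM dbo.Main WHERE tx_date = @cutoff;

-- step 4: append the staged rows to the main table, then clear the stage down
INSERT INTO dbo.Main (col1, col2, tx_date)  -- list the real columns here
SELECT col1, col2, tx_date FROM dbo.Staging;

TRUNCATE TABLE dbo.Staging;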
The second is to
extract the value from the input file using a script task and assign it to a package variable
carry out the delete using the package variable as an input parameter
insert from the text file into the main table
EDIT
I believe it's also possible to use the generic text file ODBC driver to access the text file like a database table.
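An untested sketch of that approach (directory and file name are placeholders, and 'Ad Hoc Distributed Queries' must be enabled on the server):

SELECT *
FROM OPENROWSET('MSDASQL',
    'Driver={Microsoft Text Driver (*.txt; *.csv)};DefaultDir=C:\data;',
    'SELECT * FROM "import.csv"');

That would let you read the first record's date with a plain SELECT instead of staging the file first.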