As part of a migration from a legacy system to a new one, I need to rename a number of files (.txt, .pdf, .xls, etc.) in a particular folder using SSIS:
1. Move each file to the destination folder.
2. Parse the prefix of the file name, which is the ID used to associate the file with a record in a table.
Ex: 1012BA12_Attach_Emp.doc [ID = 1012BA12]
3. Look up the equivalent new ID in the database.
Ex: old ID = 1012BA12, new ID = 512
4. Replace the old ID with the new one in the file name.
Ex: 512_Attach_Emp.doc
5. Insert one row into a table with the new name and path.
I have been using the Foreach File enumerator, an Execute SQL Task, and a File System Task, but the process takes about a day to complete.
Please guide me toward the best approach.
The issue you are having is likely to be on the database side, not SSIS.
Do you have indexes on the tables you are accessing?
Are the files local to the SSIS instance, or does SSIS access the files remotely?
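If the slowdown is the per-file lookup of the new ID, an index on the mapping table usually helps. A minimal sketch, assuming a hypothetical dbo.IdMap(OldId, NewId) table that the Execute SQL Task queries once per file:

    -- Covering index so each lookup (SELECT NewId FROM dbo.IdMap WHERE OldId = ?)
    -- is a single index seek instead of a table scan.
    CREATE NONCLUSTERED INDEX IX_IdMap_OldId
        ON dbo.IdMap (OldId)
        INCLUDE (NewId);

If the files sit on a remote share, the File System Task's rename/move also pays network latency per file, which adds up over thousands of files.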
I have a SQL table that stores a file name and an SSIS package name. Whenever a file gets dropped into a directory, the corresponding SSIS package is triggered by looking it up in the mapping table.
If I store the file name as, say, a*.csv in the database and the corresponding SSIS package as sample-ssis.dtsx, will I be able to trigger that package for any CSV file starting with "a"? Can someone please help me with this?
Sure, you can read the file name into a variable and use a Script Task to loop through your mapping table and check whether any of the wildcard file-name entries match the file name in the variable.
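If the mapping table is in SQL Server, the wildcard match can also be done with a single query (for example from an Execute SQL Task) instead of looping in a Script Task. A minimal sketch, assuming a hypothetical dbo.PackageMapping(FilePattern, PackageName) table and that the stored patterns only use * as a wildcard:

    -- Translate the stored wildcard (e.g. 'a*.csv') into a LIKE pattern and
    -- return the package(s) whose pattern matches the incoming file name.
    DECLARE @FileName nvarchar(260) = N'a_sales_20240101.csv';

    SELECT PackageName
    FROM dbo.PackageMapping
    WHERE @FileName LIKE REPLACE(REPLACE(FilePattern, N'_', N'[_]'), N'*', N'%');

The inner REPLACE escapes literal underscores so they are not treated as the LIKE single-character wildcard.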
I get files in a shared location. Every file has different metadata, i.e. file name and date created.
I have to extract the data using SSIS if and only if the file content is different from previously processed files.
This should be fairly straightforward:
Use a Foreach Loop container configured with the Foreach File enumerator. The folder is the shared location, and the file name should be a wildcard (for example, *.csv).
Create a table in SQL called LoadedFiles to hold the names of the files already loaded. Note that when you set up the Foreach Loop container you will also have created a variable that holds the current file name. Inside the container, check whether the value in this variable already exists in the LoadedFiles table, and load the file only if it does not (a sketch of the check follows below).
I've assumed that all the files have the same metadata (column names and data types). Even if they do not, you can employ the same logic.
Also, if it isn't obvious, for this to work you need to insert a new row into the LoadedFiles table every time you do decide to load a file.
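A minimal sketch of that check and the bookkeeping insert, assuming a hypothetical dbo.LoadedFiles(FileName) table and that the file-name variable is passed in as a parameter of an Execute SQL Task:

    -- @FileName would be mapped from the SSIS file-name variable.
    DECLARE @FileName nvarchar(260) = N'sales_20240101.csv';

    IF NOT EXISTS (SELECT 1 FROM dbo.LoadedFiles WHERE FileName = @FileName)
    BEGIN
        -- In the package this branch is typically a precedence constraint
        -- that allows the Data Flow Task to run, followed by:
        INSERT INTO dbo.LoadedFiles (FileName) VALUES (@FileName);
    END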
EDIT: It seems that, for the OP, the same file name does not imply the same content. In that case, just do a MERGE into the SQL table instead of a blind insert:
MERGE on the primary key; when matched, do nothing, otherwise insert.
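A minimal sketch of that MERGE, assuming a hypothetical target dbo.Employee keyed on EmployeeId and a staging table dbo.Employee_Staging loaded from the file:

    -- Insert only staged rows whose key is not already in the target;
    -- rows that match on the primary key are left untouched.
    MERGE dbo.Employee AS tgt
    USING dbo.Employee_Staging AS src
        ON tgt.EmployeeId = src.EmployeeId
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (EmployeeId, EmployeeName)
        VALUES (src.EmployeeId, src.EmployeeName);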
I found a workaround: an SSIS Execute Process Task that calls FC.exe, the Windows file-compare utility.
http://www.howtogeek.com/206123/how-to-use-fc-file-compare-from-the-windows-command-prompt/
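For reference, FC returns exit code 0 when the two files are identical and a non-zero code when they differ (or cannot be compared), so the Execute Process Task's result can drive the rest of the control flow. A hypothetical invocation (paths are made up):

    fc /B "\\share\incoming\data.csv" "\\share\archive\data.csv"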
I'm a little new to SSIS, and I need to import some flat files into SQL tables with the same structure.
(Assume the tables already exist with the same structure, and each table name matches its flat file name.)
I thought of creating a generic package (SQL Server 2014) to import all those files by looping through a folder.
I created a Data Flow Task inside a Foreach Loop container; in the Data Flow Task I dropped a Flat File Source and an ADO.NET Destination.
I set the file source to a variable so that each iteration picks up the new file. Similarly, I set the ADO.NET destination table name to a variable so that each iteration selects a different table according to the file name.
Since the source and destination column names are the same, I assumed it would map the columns automatically.
But it would not let me run the package without a mapping, so I added a column on the source, selected a table, and mapped it.
When I ran the package I assumed it would automatically remap everything,
but while the first file loaded, the second file failed with mapping errors.
Can someone let me know whether this is achievable with some kind of dynamic mapping, or in any other way?
Any help would be much appreciated.
Thanks,
Ned
I have several CSV files, and each has a corresponding table in the database with the same name as the CSV (and the same columns, with appropriate data types). So every CSV has a table in the database.
I somehow need to map them all dynamically. Once I run the mapping, the data from all the CSV files should be transferred to the corresponding tables. I don't want a different mapping for every CSV.
Is this possible with Informatica?
Appreciate your help.
PowerCenter does not provide such a feature out of the box. Unless the structures of the source files and target tables are all the same, you need to define separate source/target definitions and create mappings that use them.
However, you can use Stage Mapping Generator to generate a mapping for each file automatically.
My understanding is that you have many CSV files with different column layouts, and you need to load them into the appropriate tables in the database.
Approach 1: Any RDBMS you use should have some kind of import option. Explore that route to create tables based on the CSV files. This is a manual task.
Approach 2: Open the CSV file and write formulas against the header to generate a CREATE TABLE statement. Execute the generated statement in your database, so all the tables get created. Then use Informatica to read the CSVs, import the table definitions, and load the data into the tables.
Approach 3: Use Informatica itself. You need to do a lot of coding to create a dynamic mapping on the fly.
Proposed solution:
Mapping 1:
1. Read the CSV file and pass the header information to a Java transformation.
2. The Java transformation should normalize and split the header into rows; you can write them to a text file.
3. Now you have all the columns in a text file. Read this text file and use a SQL transformation to create the tables in the database (along the lines of the sketch below).
Mapping 2:
Now that the table created by Mapping 1 is available, read the CSV file (excluding the header) and load the data into that table via a SQL transformation (INSERT statement).
You can follow this approach for all the CSV files. I haven't tried this solution at my end, but I am fairly sure the approach would work.
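A minimal sketch of the kind of statements the SQL transformation would end up executing, assuming a hypothetical file employees.csv with the header emp_id,emp_name,hire_date (the column types here are guesses; a real implementation needs a typing convention):

    -- Generated by Mapping 1 from the header row of employees.csv.
    CREATE TABLE employees (
        emp_id    VARCHAR(50),
        emp_name  VARCHAR(255),
        hire_date VARCHAR(50)
    );

    -- Mapping 2 then issues one INSERT per data row.
    INSERT INTO employees (emp_id, emp_name, hire_date)
    VALUES ('1001', 'Jane Doe', '2024-01-15');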
If you're not using any transformations, it's wise to use the database's own import option (e.g. a BTEQ script in Teradata). But if you are doing transformations, then you have to create as many sources and targets as you have files.
On the other hand, you can achieve this in one mapping:
1. Create a separate flow for every file (i.e. Source-Transformation-Target) in the single mapping.
2. Use the target load plan to choose which file gets loaded first.
3. Configure the file names and corresponding database table names in the session for that mapping.
If all the mappings (should you have to create them separately) are the same, use the indirect file method. In the session properties, under the Mappings tab, in the source options, you will find this setting; the default is Direct, change it to Indirect.
I don't have the tool at hand right now to explore further and guide you in detail, but look into this indirect file load type in Informatica. I am sure it will meet the requirement.
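With an indirect load, the source the session reads is just a plain text file listing the actual data files, one path per line; all the listed files must share the same layout. A hypothetical file list:

    D:\feeds\customers_east.csv
    D:\feeds\customers_west.csv
    D:\feeds\customers_north.csv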
I have written a workflow in Informatica that does it, but some of the complex steps are handled inside the database. The workflow watches a folder for new files. Once it sees all the files that constitute a feed, it starts to process the feed. It takes a backup in a time stamped folder and then copies all the data from the files in the feed into an Oracle table. An Oracle procedure gets to work and then transfers the data from the Oracle table into their corresponding destination staging tables and finally the Data Warehouse. So if I have to add a new file or a feed, I have to make changes in configuration tables only. No changes are required either to the Informatica Objects or the db objects. So the short answer is yes this is possible but it is not an out of the box feature.
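To make the config-table-driven part concrete, a hedged sketch of what such tables might look like (these names and columns are assumptions, not the poster's actual schema):

    -- One row per feed, one row per file belonging to that feed.
    CREATE TABLE feed_config (
        feed_id      NUMBER        PRIMARY KEY,
        feed_name    VARCHAR2(100) NOT NULL,
        target_table VARCHAR2(128) NOT NULL
    );

    CREATE TABLE feed_file_config (
        feed_id      NUMBER REFERENCES feed_config (feed_id),
        file_pattern VARCHAR2(260) NOT NULL  -- e.g. 'customers_*.csv'
    );

Adding a new file or feed then means inserting rows here rather than editing Informatica or database objects.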
I have a connection manager (OLE DB) that points to a folder containing 20 dbf files. All the dbf files have the same schema; only the data differs, each file covering a specific entity. I want to take all the data from the 20 dbf files and insert it into one table in SQL Server. Which tasks enable me to do this?
You should create a Foreach Loop container that writes each file path to a variable, e.g. #fp.
Inside it, put a Data Flow Task and configure the connection.
Then create another variable, e.g. #table (a substring of #fp that keeps only the file name), and use this variable as the source table name in the Data Flow Task.