Loading many flat files into SQL Server 2005

I have a very annoying task: I have to load more than 100 CSV files from a folder into a SQL Server database. The files have column names in the first row, and the data type can be varchar for all columns. The table names in the database can simply be the file names of the CSVs. What I currently do is use the Import/Export Wizard from SSMS: I choose Flat File Source from the dropdown box, choose the file, Next -> Next -> Next, and Finish! Any ideas how I can automate such a task in Integration Services or with any other practical method?
Note: the files are on my local PC and the DB server is somewhere else, so I cannot use BULK INSERT.

You can use an SSIS Foreach Loop container to enumerate the file names, filtering on a particular file-name pattern. Map the current file name to a variable, then in a Data Flow Task use a Flat File Source as the source and an OLE DB Destination as the target.
Please post some sample file names so that I can understand the pattern and guide you properly.
Thanks
Achudharam
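
If you would rather script the load than build an SSIS package, the same pattern fits in a short PowerShell script: SqlBulkCopy streams the rows over the network, so it works even though the files sit on your local PC and the server is remote. A minimal sketch, assuming placeholder server, database, and folder names, varchar(255) for every column, and files small enough to hold in memory:

    # Bulk-load every CSV in a folder into SQL Server: one table per file,
    # named after the file, every column created as varchar.
    # Connection string and folder path are placeholders.
    $connStr = "Server=MyDbServer;Database=Staging;Integrated Security=True"
    $folder  = "C:\CsvFiles"

    $conn = New-Object System.Data.SqlClient.SqlConnection $connStr
    $conn.Open()

    foreach ($file in Get-ChildItem $folder -Filter *.csv) {
        $table = [IO.Path]::GetFileNameWithoutExtension($file.Name)

        # Import-Csv reads the header row and exposes the column names.
        $rows = @(Import-Csv $file.FullName)
        $cols = $rows[0].PSObject.Properties.Name

        # Create the target table with varchar columns named after the header row.
        $colDefs = ($cols | ForEach-Object { "[$_] varchar(255)" }) -join ", "
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = "CREATE TABLE [$table] ($colDefs)"
        [void]$cmd.ExecuteNonQuery()

        # Copy the rows into an in-memory DataTable and stream it to the server.
        $dt = New-Object System.Data.DataTable
        foreach ($c in $cols) { [void]$dt.Columns.Add($c) }
        foreach ($r in $rows) { [void]$dt.Rows.Add(@($r.PSObject.Properties.Value)) }

        $bulk = New-Object System.Data.SqlClient.SqlBulkCopy $conn
        $bulk.DestinationTableName = $table
        $bulk.WriteToServer($dt)
    }
    $conn.Close()

Run it once interactively for the initial load, or schedule it with Task Scheduler if files keep arriving.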

Related

Find source and destination tables across all packages in a solution?

I have around 10 solutions (.sln), each of them with multiple packages in it.
I would like to be able to get:
solution name
package name
mapping between the source and target columns
source and target table(s)
source connection or source DB
Would that be possible by parsing the .dtsx files? I understand I can save them as .xml files.
For now, what I did was configure SQL Server Profiler and run all the packages to capture the SQL queries, dumping them into a table and parsing them directly in SQL Server.
I wonder if there is a better solution, using an external tool (like Biml) or parsing the files directly.
Any suggestions are appreciated.
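
Since a .dtsx package is just XML, at least the package and connection manager names can be pulled with a script, without running anything. A rough PowerShell sketch, assuming the SSIS 2012+ package format (where DTS:ObjectName is a plain attribute in the DTS namespace) and a hypothetical folder-per-solution layout; source-to-target column mappings live deeper inside the data flow XML and vary by version:

    # Enumerate .dtsx files under a solutions root and list connection manager
    # names. The root path and folder layout are hypothetical placeholders.
    $root = "C:\Solutions"
    $ns   = @{ DTS = "www.microsoft.com/SqlServer/Dts" }

    foreach ($pkg in Get-ChildItem $root -Recurse -Filter *.dtsx) {
        $conns = Select-Xml -Path $pkg.FullName -Namespace $ns `
                            -XPath "//DTS:ConnectionManager/@DTS:ObjectName"
        foreach ($c in $conns) {
            [pscustomobject]@{
                Solution   = $pkg.Directory.Parent.Name   # assumes one folder per solution
                Package    = $pkg.BaseName
                Connection = $c.Node.Value
            }
        }
    }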

SSIS: How to Load differently named flat files from a folder using SSIS

I'm very new to SSIS packages so please forgive me if this is a simple query.
I have 2 SSIS packages that have been set up:
The first picks up a CSV file, formats the data slightly (cuts off a prefix in one of the columns), and places it in another folder with an updated filename and timestamp.
The second package imports the formatted file into a SQL database table.
The issue I have is that the incoming file names for the first package may differ, although the structure of the data will remain the same.
Is there a way to configure the flat file connection manager to pick up any files in the C:\Incoming\ folder?
Thanks in advance.
You can use a Foreach Loop container to enumerate the files in a folder, and use an expression on the flat file connection manager's ConnectionString property to point it at the current file.
For a detailed answer, you can refer to this article:
Loop through Flat Files in SQL Server Integration Services
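
If it helps to see the moving parts outside the SSIS designer, here is a rough PowerShell equivalent of the first package; the folder names, the column name Code, and the prefix PRE- are all hypothetical placeholders:

    # Pick up every CSV in Incoming, strip a fixed prefix from one column,
    # and write the result to Formatted with a timestamped file name.
    $incoming  = "C:\Incoming"
    $formatted = "C:\Formatted"

    foreach ($file in Get-ChildItem $incoming -Filter *.csv) {
        $rows = Import-Csv $file.FullName
        foreach ($r in $rows) {
            # Cut the fixed prefix off one of the columns.
            $r.Code = $r.Code -replace '^PRE-', ''
        }
        $stamp = Get-Date -Format "yyyyMMdd_HHmmss"
        $rows | Export-Csv (Join-Path $formatted "$($file.BaseName)_$stamp.csv") -NoTypeInformation
    }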

Daily Feed a SQL server table from a CSV file in sFTP

I have a CSV file that can be accessed only via sFTP.
The CSV file is daily updated (same structure but different values).
My aim is to copy the values of the CSV into a SQL Server table every day. Of course, the process needs to be automated.
The CSV also contains too many rows. The first column of the CSV is 'ID', and I have a fixed list of IDs, so I need to do some filtering before loading into SQL Server.
What would be the best option to reach this aim? An external ETL tool, a batch file, PowerShell, a SQL script?
Integration Services (SSIS) is a good choice, because you can use a combination of tasks (FTP connection, flat file source, T-SQL, ...), and you can run the SSIS package from a SQL Server Agent job so that it executes daily.
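
PowerShell is also workable if you want to avoid SSIS. A rough sketch of the daily job, assuming the third-party Posh-SSH module for the sFTP download, with placeholder host, paths, and table name throughout:

    # Download the daily CSV over sFTP, filter on the fixed ID list,
    # and bulk-load the surviving rows. All names below are placeholders.
    Import-Module Posh-SSH

    $cred    = Get-Credential   # sFTP credentials
    $session = New-SFTPSession -ComputerName "sftp.example.com" -Credential $cred
    Get-SFTPItem -SessionId $session.SessionId -Path "/feeds/daily.csv" -Destination "C:\Staging"
    Remove-SFTPSession -SessionId $session.SessionId | Out-Null

    # Keep only the rows whose ID is in the fixed list (one ID per line).
    $wanted = Get-Content "C:\Staging\id_list.txt"
    $rows   = @(Import-Csv "C:\Staging\daily.csv" | Where-Object { $wanted -contains $_.ID })

    # Stream the filtered rows into the SQL Server table.
    $dt = New-Object System.Data.DataTable
    foreach ($c in $rows[0].PSObject.Properties.Name) { [void]$dt.Columns.Add($c) }
    foreach ($r in $rows) { [void]$dt.Rows.Add(@($r.PSObject.Properties.Value)) }

    $bulk = New-Object System.Data.SqlClient.SqlBulkCopy "Server=MyDbServer;Database=Feeds;Integrated Security=True"
    $bulk.DestinationTableName = "DailyFeed"
    $bulk.WriteToServer($dt)

Scheduled through Windows Task Scheduler or as a SQL Server Agent job step, it runs daily without manual intervention.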

Import database (SQL file) in SQL Server Management Studio

I've created the structure of my database first in phpMyAdmin and exported it to a .sql file.
Now I'm looking everywhere in SQL Server Management Studio where I can import/add the data in a new database.
Does anybody know where to look or what to click?
I'm using the 2014 version (CTP2).
If you have a .sql file which contains SQL statements, you can just copy and paste the contents (or open the file in a query window) and run it. This assumes it has all of the CREATE TABLE etc. statements to create the schema/structure, and not just INSERT statements for the data.
Check the top of the file to make sure that it first selects the correct database; if not, add a USE statement to select the correct one.
You didn't say how big the file is, but if it is quite large and contains the INSERT statements (data as well as schema), then you'll probably want to run it from the command line with the sqlcmd utility. That is much faster, and SSMS won't freak out.
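For example, a hypothetical invocation (server, database, and file path are placeholders; -E uses Windows authentication):

    # Run a large script file from the command line instead of SSMS.
    sqlcmd -S "MyServer" -d "MyDatabase" -E -i "C:\Scripts\export.sql"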
An alternative to running the .sql file is to set up a data source for MySQL and just use ODBC to access the database itself.
Bear in mind that there are real and very annoying differences between MySQL and T-SQL that can make migration a pain. If you're just creating a few tables, it may not be an issue, but if there are a ton of tables with lots of fields of different data types, you may run into issues.
If you are looking to import the table structure, you can copy-paste the content and run it inside SSMS in a query window. Beware of syntax differences between MySQL and SQL Server: you will most likely get errors, and you will need to convert your SQL script from the MySQL dialect to the SQL Server dialect (or fix the statements manually if there are not too many). If you set the databases to a SQL standard-compatibility mode at the very beginning, you will have much less trouble.
If you are ONLY looking to import the data into existing tables in SQL Server, you can do the same (i.e. copy-paste and run in a query window). You will have less trouble with that.
Open the server, expand "Databases", right-click the database, go to "Tasks", and then "Import Data...".
I have had the most trouble-free success importing to SQL Server via the flat file method (a comma-delimited .txt file); the only stipulation when creating the flat file (e.g. from Access) is to make sure the text qualifier is set to {none} and not "".
To import the file: in SQL Server Management Studio, right-click on Databases and create a new database. Then right-click on the new database -> Tasks -> Import Data... The import window opens: in the DATA SOURCE option select Flat File Source and select the .txt file, then click NEXT. In the DESTINATION field select SQL Server Native Client 11.0 and go through the import process. This worked very well for me.

Speeding Up ETL DB2 to SQL Server?

I came across this blog post when looking for a quicker way of importing data from a DB2 database to SQL Server 2008.
http://blog.stevienova.com/2009/05/20/etl-method-fastest-way-to-get-data-from-db2-to-microsoft-sql-server/
I'm trying to figure out how to achieve the following:
3) Create a BULK INSERT task, and load up the file that the Execute Process task created. (Note you have to create a .FMT file for fixed-width import. I created a .NET app to load the FDF file (the transfer description), which auto-creates a .FMT file for me, and a SQL CREATE statement as well, saving time and tedious work.)
I've got the data in a TXT file and a separate FDF with the details of the table structure. How do I combine them to create a suitable .FMT file?
I couldn't figure out how to create suitable .FMT files.
Instead I ended up creating replica tables from the source DB2 system in SQL Server and ensured that the column order was the same as what was coming out of the IBM File Transfer Utility.
Using an Excel sheet to control which file transfers/tables should be loaded (allowing me to enable/disable them as I please), along with a Foreach Loop in SSIS, I've got a suitable solution to load multiple tables quickly from our DB2 system.
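
For anyone else stuck on the same point: a non-XML format file is plain text, so once you know the layout you can generate it from the FDF description. The first line is the format version (9.0 for SQL Server 2005), the second is the number of fields, and each remaining line lists the field order, host data type, prefix length, field width, terminator (empty for fixed-width), target column order, target column name, and collation. A hypothetical two-column fixed-width example:

    9.0
    2
    1  SQLCHAR  0  10  ""  1  CustomerID    ""
    2  SQLCHAR  0  30  ""  2  CustomerName  SQL_Latin1_General_CP1_CI_AS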
