I came across this blog post when looking for a quicker way of importing data from a DB2 database to SQL Server 2008.
http://blog.stevienova.com/2009/05/20/etl-method-fastest-way-to-get-data-from-db2-to-microsoft-sql-server/
I'm trying to figure out how to achieve the following:
3) Create a BULK Insert task, and load up the file that the Execute Process task created. (Note: you have to create a .FMT file for fixed-width import. I created a .NET app to load the FDF file (the transfer description), which will auto-create a .FMT file for me, and a SQL CREATE statement as well – saving time and tedious work.)
I've got the data in a TXT file and a separate FDF with the details of the table structure. How do I combine them to create a suitable .FMT file?
I couldn't figure out how to create suitable .FMT files.
Instead I ended up creating replica tables from the source DB2 system in SQL Server and ensured that the column order matched what was coming out of the IBM File Transfer Utility.
Using an Excel sheet to control which file transfers/tables should be loaded (letting me enable/disable them as I please), together with a Foreach Loop in SSIS, I've got a suitable solution to load multiple tables quickly from our DB2 system.
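For reference, if the .FMT route is revisited: a non-XML format file simply lists one line per fixed-width field (host field order, host data type such as SQLCHAR, prefix length 0, field width, terminator, target column order, target column name, collation), and the BULK INSERT statement then references it. A minimal sketch with hypothetical paths, table name and layout:

-- Sketch only; the referenced .fmt file would describe each fixed-width field:
-- order, SQLCHAR, prefix length 0, field width, terminator ("" for all fields
-- except "\r\n" on the last one), target column order/name, and collation.
BULK INSERT dbo.DB2_STAGE
FROM 'C:\transfers\db2_extract.txt'
WITH (
    FORMATFILE = 'C:\transfers\db2_extract.fmt',
    TABLOCK
);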
I frequently need to validate CSVs submitted by clients to make sure that the headers and values in the file meet our specifications. Typically I do this by using the Import/Export Wizard and having the wizard create the table based on the CSV (the file name becomes the table name, and the headers become the column names). Then we run a set of stored procedures that checks the information_schema for said table(s) and matches that up with our specs, etc.
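That kind of header check can be expressed as a simple INFORMATION_SCHEMA query; a rough sketch, where ClientSpec and ClientFile1 are hypothetical names for the spec table and the table created from one CSV:

-- Columns that appear in the loaded file but not in the spec (sketch only).
SELECT c.COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS AS c
LEFT JOIN dbo.ClientSpec AS s
       ON s.ColumnName = c.COLUMN_NAME
WHERE c.TABLE_NAME = 'ClientFile1'
  AND s.ColumnName IS NULL;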
Most of the time, this involves loading multiple files at a time for a client, which becomes very time-consuming and laborious very quickly when using the Import/Export Wizard. I tried using an xp_cmdshell SQL script to load everything from a path at once to achieve the same result, but xp_cmdshell is not supported in Azure SQL Database.
https://learn.microsoft.com/en-us/azure/azure-sql/load-from-csv-with-bcp
The above says that one can load using bcp, but it also requires the table to exist before the import... I need the table structure to mimic the CSV. Any ideas here?
Thanks
If you want to load the data into your target SQL DB, you can use Azure Data Factory (ADF) to upload your CSV files to Azure Blob Storage, and then use the Copy Data activity to load the data from those CSV files into Azure SQL DB tables - without creating the tables upfront.
ADF supports 'auto create' of sink tables.
My company is looking to possibly migrate to Snowflake from SQL Server. From what I've read in the Snowflake documentation, flat files (CSV) can be uploaded to a staging area, and then COPY INTO loads the data into a physical table.
example: put file://c:\temp\employees0*.csv @sf_tuts.public.%emp_basic;
My question is: can this be automated via a job or script within Snowflake? This includes the COPY INTO command.
Yes, there are several ways to automate jobs in Snowflake, as others have already commented. Putting your code in a stored procedure and calling it via a scheduled task is one option.
There is also a command-line interface for Snowflake called SnowSQL.
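A task can call a stored procedure, or it can run the COPY INTO directly. A minimal sketch of the latter, using the table and table stage from the example above; the warehouse name, schedule and CSV options are assumptions:

-- Load the table stage into the table every Monday at 06:00 UTC.
CREATE OR REPLACE TASK load_emp_basic
  WAREHOUSE = load_wh
  SCHEDULE = 'USING CRON 0 6 * * 1 UTC'
AS
  COPY INTO emp_basic
    FROM @sf_tuts.public.%emp_basic
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    ON_ERROR = 'CONTINUE';

ALTER TASK load_emp_basic RESUME;

Note that PUT is a client-side command, so uploading the files still has to happen from a client - for example a scheduled SnowSQL script.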
I want to import data on a weekly basis to an Oracle DB.
I'm receiving this data in EDR format at a specific location on a server. For now I'm uploading it manually using the Toad for Oracle uploader wizard. Is there any way to upload it automatically using Unix or any kind of scripting?
I would suggest trying SQL*Loader through a shell script.
Code:
sqlldr username/password@server control=loader.ctl
Two important files are involved:
a. your data file to be uploaded.
b. the control file, which states the table to be inserted into, the delimiter character, the column fields, etc. - it basically describes how to load the data.
Oracle Reference
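A minimal control file for this case might look like the sketch below; the table and column layout are hypothetical, and the EDR extract is assumed to be comma-delimited:

-- loader.ctl (sketch): adjust the columns and delimiter to the actual EDR layout.
LOAD DATA
INFILE '/data/incoming/weekly_edr.dat'
APPEND
INTO TABLE edr_records
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  record_id,
  msisdn,
  event_time  DATE "YYYY-MM-DD HH24:MI:SS",
  bytes_used
)

The shell script wrapping the sqlldr command can then be scheduled with cron to run weekly.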
I have 2 DBs with the same schema on different servers.
I need to copy data from table T to the same table T in a test database on a different server and network.
What is the easiest way to do it?
I heard that data can be dumped to a flat file and then inserted into the database. How does that work?
Can this be achieved using sqlplus and an Oracle database?
Thank you!
Use Oracle export to export a whole table to a file, copy the file to serverB and import.
http://www.orafaq.com/wiki/Import_Export_FAQ
You can use rsync to sync an Oracle .dbf file or files to another server. This approach has problems, and syncing all of the database files works more reliably than syncing a single one.
For groups of records, write a query to build a pipe-delimited (or whatever delimiter suits your data) file with the rows you need to move, as sketched after the link below. Copy that file to serverB. Write a control file for sqlldr and use sqlldr to load the rows into the table. sqlldr is part of the Oracle installation.
http://www.thegeekstuff.com/2012/06/oracle-sqlldr/
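One way to build that delimited file is a SQL*Plus spool; a sketch with hypothetical column names:

-- Spool pipe-delimited rows to a flat file from SQL*Plus.
SET PAGESIZE 0 LINESIZE 32767 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
SPOOL /tmp/t_rows.dat
SELECT col1 || '|' || col2 || '|' || col3
FROM   t
WHERE  somecolumn = 'somevalue';
SPOOL OFF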
If you have db listeners up on each server and tnsnames knows about both, you can directly:
insert into mytable@remote
select * from mytable
where somecolumn=somevalue;
Look at the remote table section:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm
If this is going to be an ongoing thing, create a db link from instance@serverA to instance@serverB.
You can then do anything you have permissions for with data on one instance or the other or both.
http://psoug.org/definition/CREATE_DATABASE_LINK.htm
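A sketch of that setup, with a hypothetical link name, user and TNS alias:

-- Run on the source instance; TESTDB is a tnsnames.ora alias for the test server.
CREATE DATABASE LINK testdb_link
  CONNECT TO test_user IDENTIFIED BY test_password
  USING 'TESTDB';

-- Rows can then be copied directly across the link.
INSERT INTO t@testdb_link
SELECT * FROM t;
COMMIT;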
I have a very annoying task. I have to load >100 CSV files from a folder into a SQL Server database. The files have column names in the first row. The data type can be varchar for all columns. The table names in the database can just be the filenames of the CSVs. What I am currently doing is using the Import/Export Wizard from SSMS: I choose Flat File from the dropdown box, choose the file, next -> next -> next and finish! Any ideas how I can automate such a task in Integration Services or with any other practical method?
Note: the files are on my local PC and the DB server is somewhere else, so I cannot use BULK INSERT.
You can use an SSIS Foreach Loop container to extract the file names, filtering on a particular file-name pattern. Use a variable that is dynamically filled with the current file name. Then, in the Data Flow task, use a Flat File source and an OLE DB destination.
Please post some sample file names so that I can learn and guide you properly.
Thanks
Achudharam