Generating SQL data from CSV files with PowerDesigner 16.7 - powerdesigner

It is possible to create test data for a Physical Data Model: PowerDesigner then creates a CSV file with test data for every table.
Is it possible to do the reverse and feed a PDM in PowerDesigner with CSV files?
Example:
I have the following table:
ID | name | age
Via PowerDesigner I can create test data as a CSV, and I would get the following file:
1, Peter, 28
2, Marta, 40
3, Joe, 50
I want to do the opposite: define a CSV file for every table in my PDM and import it, so that PowerDesigner creates the SQL statements that insert the rows into the database.
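For example, for the table above the goal would be generated INSERT statements along these lines (a sketch; the table name PERSON is hypothetical, since the question does not name the table):

-- One INSERT per CSV row; PERSON is a placeholder table name
INSERT INTO PERSON (ID, name, age) VALUES (1, 'Peter', 28);
INSERT INTO PERSON (ID, name, age) VALUES (2, 'Marta', 40);
INSERT INTO PERSON (ID, name, age) VALUES (3, 'Joe', 50);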

Related

Azure Data Factory: Lookup varbinary column in SQL DB for use in a Script activity to write to another SQL DB - ByteArray is not supported

I'm trying to insert into an on-premises SQL database table called PictureBinary:
PictureBinary table
The source of the binary data is a table in another on-premises SQL database called DocumentBinary:
DocumentBinary table
I have a file with all of the IDs of the DocumentBinary rows that need copying. I feed those into a ForEach activity from a Lookup activity. Each of these files has about 180 rows (there are 50 files, each fed into a separate instance of the pipeline running in parallel).
Lookup and ForEach Activities
So far everything is working. But then, inside the ForEach I have another Lookup activity that tries to get the binary info to pass into a script that will insert it into the other database.
Lookup Binary column
And then the Script activity would insert the binary data into the table PictureBinary (in the other database).
Script to Insert Binary data
But when I debug the pipeline, I get this error when the binary column Lookup is reached:
ErrorCode=DataTypeNotSupported,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column: coBinaryData,The data type ByteArray is not supported from the column named coBinaryData.,Source=,'
I know that the accepted way of storing the files would be to store them on the filesystem and just store the file path to the files in the database. But we are using a NOP database that stores the files in varbinary columns.
Also, if there is a better way of doing this, please let me know.
I tried to reproduce your scenario in my environment and got a similar error.
As per the Microsoft documentation, columns with the Byte Array data type are not supported in the Lookup activity, which is most likely the cause of the error.
To work around this, follow the steps below.
As you explained, you have a file that stores the IDs of all the DocumentBinary rows that need to be copied to the destination. To achieve this, you can simply use a Copy activity with a query that copies the records where the DocumentBinary ID column is equal to the ID stored in the file.
First, I took a Lookup activity to get the IDs of the DocumentBinary rows stored in the file.
Then I took a ForEach activity and passed the output of the Lookup activity to it.
After this, I added a Copy activity inside the ForEach activity:
Select * from DocumentBinary
where coDocumentBinaryId = '#{item().PictureId}'
In the source of the Copy activity, select Use query: Query and pass the query above, adjusted to your own names.
Now go to Mapping, click Import schemas, then delete the unwanted columns and map the remaining columns accordingly.
Note: for this to work, the ID columns in both tables must be of comparable data types, e.g. both uniqueidentifier or both int.
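For context, each ForEach iteration is logically equivalent to the following statement (a sketch only; ADF performs the actual cross-database transfer, and the destination column name BinaryData is an assumption, since only coBinaryData and coDocumentBinaryId appear in the question):

-- Logical equivalent of one Copy activity iteration (illustration only)
INSERT INTO dbo.PictureBinary (BinaryData)   -- destination column name is assumed
SELECT coBinaryData
FROM dbo.DocumentBinary
WHERE coDocumentBinaryId = @PictureId;       -- the current item().PictureId value for this iteration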
Sample Input in file:
Output (only the picture IDs contained in the file were copied from source to destination):

Loading files with varying number of columns into the same table with SSIS

I need to load 14 different files with a different number of columns into the same table. Many of the columns are the same, but each file has some columns that are specific to them. The name of each file indicates the columns the file will have.
For example, the file "Temporary_Employees.txt" will always have these columns:
Contract_ID | Person_ID | HireDate | ContractDurationHours | Manager_ID
And the file "PartTime_Employees.txt" will have these columns:
Contract_ID | Person_ID | HireDate | HoursPerWeek | Manager_ID
The destination table has all the columns.
I'm new to SSIS and the only solution I can think of is having 14 data flows with 14 flat file connections... Is there a better way to do this?
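For reference, a destination table covering both example layouts might look like the following (a sketch; the table name and data types are assumptions):

-- Combined destination table: shared columns plus the file-specific ones
CREATE TABLE dbo.EmployeeContracts (          -- table name is an assumption
    Contract_ID           int  NOT NULL,
    Person_ID             int  NOT NULL,
    HireDate              date NOT NULL,
    ContractDurationHours int  NULL,          -- only in Temporary_Employees.txt
    HoursPerWeek          int  NULL,          -- only in PartTime_Employees.txt
    Manager_ID            int  NULL
);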
You will want to create a flat file connection manager per unique file format.
If two files share a format, e.g. FullTime_Employees.txt and Management_Employees.txt, then you can use a Foreach File enumerator to loop over the files (changing the connection manager's connection string property) and have the data flow use the updated connection manager.
Design each data flow to do a single task: this one loads part-time employees, this one loads temporary employees, this one loads full-time employees. It might feel like more work up front, but trying anything clever can be a nightmare to debug.
Nightmare approach
SSIS Task for inconsistent column count import?
If you don't need the "extra" columns populated in the destination, you can use the query approach I outlined in the old answer above. Basically, you'll write a query that enumerates all the column names that are common across the files. That will result in a consistent set of metadata, and then you can use a Foreach File enumerator to process all the files.
I classify this as a nightmare approach because while it works, if this is your first foray into SSIS, this solution incorporates a lot of advanced techniques.
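As a hedged illustration of that query approach, the column set common to the two example layouts is Contract_ID, Person_ID, HireDate, and Manager_ID; selecting only those columns would look like this (dbo.EmployeeContracts_Staging is a hypothetical source, not something from the answer, and the linked answer describes where such a query fits in the package):

-- Only the columns shared by Temporary_Employees.txt and PartTime_Employees.txt;
-- the file-specific columns are simply left out
SELECT Contract_ID, Person_ID, HireDate, Manager_ID
FROM dbo.EmployeeContracts_Staging;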

Retrieving No. of rows being Written to CSV File

I have a task where I need to generate a CSV file from data coming out of two views, including a header with hard-coded values and a trailer at the bottom of the CSV file with these fields: Record_Type = E99, Row_count, and a blank field 190 characters long.
I'm able to get the desired output file, but I can't figure out how to retrieve the number of rows coming out of the two views and write it between the record type and the blank field at the bottom of the CSV, since the whole trailer line is pipe (|) delimited.
Please help me figure this out.
Thanks.
My suggestion:
I assume you are using an SSIS package to solve this problem.
Create a SQL staging table to store the content you want to export to the CSV file. Use a stored procedure to truncate and refill this staging table, and execute that stored procedure through an Execute SQL Task in the SSIS package (a sketch of such a procedure follows below).
Use a Data Flow Task to export the data from the staging table to the CSV file. The input will be the SQL staging table and the output will be a flat file.
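A hedged sketch of that stored procedure, assuming a single-column staging table that holds one pre-formatted output line per row (the table, procedure, view, and column names here are all placeholders, not from the question):

-- Staging table: one formatted output line per row; LineNumber preserves the
-- header / detail / trailer order for the export
CREATE TABLE dbo.ExportStaging (
    LineNumber int IDENTITY(1,1) NOT NULL,
    OutputLine varchar(1000) NOT NULL
);
GO
CREATE PROCEDURE dbo.RefillExportStaging
AS
BEGIN
    SET NOCOUNT ON;

    TRUNCATE TABLE dbo.ExportStaging;

    -- Header with hard-coded values (placeholder text)
    INSERT INTO dbo.ExportStaging (OutputLine)
    VALUES ('HEADER,HARDCODED,VALUES');

    -- Detail rows coming out of the two views (placeholder view and column names)
    INSERT INTO dbo.ExportStaging (OutputLine)
    SELECT CONCAT(Col1, ',', Col2, ',', Col3) FROM dbo.View1
    UNION ALL
    SELECT CONCAT(Col1, ',', Col2, ',', Col3) FROM dbo.View2;

    -- Trailer: Record_Type, row count of the two views, 190-character blank field, pipe delimited
    DECLARE @RowCount int;
    SELECT @RowCount = (SELECT COUNT(*) FROM dbo.View1) + (SELECT COUNT(*) FROM dbo.View2);

    INSERT INTO dbo.ExportStaging (OutputLine)
    VALUES (CONCAT('E99', '|', @RowCount, '|', REPLICATE(' ', 190)));
END;

The Data Flow source can then be a query such as SELECT OutputLine FROM dbo.ExportStaging ORDER BY LineNumber, so the header stays first and the trailer with the row count stays last.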
I hope it helps.

How can I use SQL*Loader to load data into my database tables directly from a tar.gz file?

I am trying to load data into my Oracle database table from an external tar.gz file. I can load data easily from a standard text file using SQL*Loader, but I'm not sure how to do the same if I have a tar.gz file instead of a plain text file.
I found the following link somewhat helpful:
http://www.simonecampora.com/blog/2010/07/09/how-to-extract-and-load-a-whole-table-in-oracle-using-sqlplus-sqlldr-named-pipes-and-zipped-dumps-on-unix/
However, the author of that link is using .dat.gz instead of .tar.gz. Is there any way to load data into my Oracle database table using SQL*Loader from a tar.gz file instead of a text file?
Also, part of the problem for me is that I'm supposed to load data from a new tar.gz file every hour into the same table. For example, in hour 1 I have file1.tar.gz and I load all of its 10 rows of data into TABLE in my Oracle database. In hour 2 I have file2.tar.gz and I have to load its 10 rows of data into the same TABLE. But the 10 rows extracted by SQL*Loader from file2.tar.gz keep replacing the first 10 rows extracted from file1.tar.gz. Is there any way I can keep the rows from file1.tar.gz as rows 1-10 and the rows from file2.tar.gz as rows 11-20 using SQL*Loader?
The magic is in the "zcat" part. zcat can write the contents of gzip-compressed files, including tar.gz files, to standard output.
For example, try: zcat yourfile.tar.gz and you will see output. In the example URL you provided, they're redirecting the output of zcat into a place (a named pipe) that SQLLDR can read from.

Import text files into linked SQL Server tables in Access

I have an Access (2010) database (front-end) with linked SQL Server tables (back-end), and I need to import text files into these tables. These text files are very large (some have more than 200,000 records and approx. 20 fields).
The problem is that I can't import the text files directly into the SQL tables. Some files contain empty lines at the start, and some contain other lines that I don't want to import into the tables.
So here's what I did in my Access database:
1) I created a link to the text files.
2) I also have a link to the SQL Server tables
3a) I created an append query that copies the records from the linked text file to the linked SQL Server table.
3b) I created VBA code that opens both tables and copies the records from the text file into the SQL Server table, record by record. (I tried it in different ways: with DAO and with ADODB.)
[Steps 3a and 3b are two different ways I tried to import the data. I use one of them, not both. I prefer option 3b, because I can show a counter in the status bar with how many records still need to be imported at any moment, so I can see how far along it is.]
The problem is that it takes a lot of time to run... and I mean a LOT of time: 3 hours for a file with 70,000 records and 20 fields!
When I do the same with an Access table (from TXT to Access), it's much faster.
I have 15 tables like this (with even more records), and I need to make these imports every day. I run this procedure automatically every night (between 20:00 and 6:00).
Is there an easier way to do this?
What is the best way to do this?
This feels like a good case for SSIS to me.
You can create a data flow from a flat file (as the data source) to a SQL DB (as the destination).
You can add some validation or selection steps in between.
You can easily find tutorials like this one online.
Alternatively, you can do what Gord mentioned: import the data from a text file into a local Access table and then use a single INSERT INTO LinkedTable SELECT * FROM LocalTable to copy the data to the SQL Server table.
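A minimal sketch of that second approach, assuming the linked table is called LinkedTable and the locally imported table LocalTable (the WHERE clause is a placeholder for filtering out the empty or unwanted lines mentioned in the question):

-- One set-based append from the local Access table to the linked SQL Server table
INSERT INTO LinkedTable
SELECT LocalTable.*
FROM LocalTable
WHERE LocalTable.ID IS NOT NULL;   -- hypothetical filter to skip blank lines

Because the copy runs as a single append query instead of a record-by-record DAO/ADODB loop, it is typically far faster for files of this size.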
