Why does SQL Server import my CSV file incorrectly with bcp? - sql-server

I'm using bcp to insert a CSV file into my SQL table, but I'm getting weird results.
The whole environment is on Linux; I create the CSV file with IFS=\n in Bash. bcp also runs from Bash, like this:
bcp Table1 in "./file.csv" -S server_name -U login_id -P password -d database_name -c -t"," -r"0x0A"
file.csv has about 50k rows. bcp tells me it copied 25k rows, with no warnings and no errors. I tried a smaller sample with only 2 rows, and here is the result:
file.csv data:
A.csv, A, B
C.csv, C, D
My table Table1:
CREATE TABLE Table1 (
ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
a VARCHAR(8),
b VARCHAR(8),
c VARCHAR(8),
FOREIGN KEY (a) REFERENCES Table2(a)
);
Table2 is a very basic table just like Table1
Here is what I see in Microsoft SQL Server Management Studio when I do SELECT * FROM Table1:
ID | a | b | c |
1 A BC C.csv E
I export with bcp like this:
bcp Table1 out "./file_export.csv" -S server_name -U login_id -P password -d database_name -c -t"," -r"0x0A"
What I get in file_export.csv
1, A, B
C, C.csv, D, E
I've checked in Notepad++ for any strange end-of-line characters: they are all LF (50k rows), and each value is separated by commas.
UPDATE:
Changing the -t and -r options does not change anything, so my CSV is well formatted. I think the issue is that I'm trying to import x columns into a table with x+1 fields (the auto-incremented ID).
So I'm using the simplest bcp command:
bcp Table1 in "./file.csv" -S server_name -U login_id -P password -d database_name -c
Here is what I get from SELECT * FROM Table1:
ID a b c
1 A BC.csv C,D
Here is what I expected:
1,A.csv,A,B
2,C.csv,C,D
Why is my first field "eaten" and replaced by 1 (the ID), which shifts everything to the right?
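If the missing identity column really is the problem, the route I'm considering is a bcp format file that maps the three CSV fields onto columns a, b and c and skips ID. A rough sketch, not yet verified against this exact setup (the version line has to match your bcp version, and the lengths come from the VARCHAR(8) columns above):
14.0
3
1   SQLCHAR   0   8   ","    2   a   ""
2   SQLCHAR   0   8   ","    3   b   ""
3   SQLCHAR   0   8   "\n"   4   c   ""
and then run bcp with the format file instead of -c/-t/-r:
bcp Table1 in "./file.csv" -S server_name -U login_id -P password -d database_name -f table1.fmt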

Related

Why are only the first 1000 records inserted in DBeaver?

I am using DBeaver 22.1.4 on Windows 10 Home Single, 64-bit. My RAM is 8 GB. I want to insert 16 million rows from one server to another using dblink (all servers run Ubuntu Linux with PostgreSQL 12). The query looks like this (I limit it to 5000 rows first for testing):
INSERT INTO table_server1 ([some 115 columns])
SELECT *
FROM dblink('myconn',$MARK$
SELECT [some 115 columns]
FROM public.table_server2 limit 5000
$MARK$) AS t1 (
id varchar, col1 varchar, col2 varchar, col3 integer, ... , col115 varchar);
It only inserts 1000 rows, which takes 1-2 seconds. It says "Updated rows: 1000" in the result window. There is no error as such.
What happened? How can I insert all the data? I have already edited the config file to raise the maximum memory to 2 GB: -Xmx2048m
Do you insist on using DBeaver and/or dblink? If not, and you can connect to a terminal on either Postgres server, you can do this very quickly (no splitting needed) and easily without a "middle man" (your machine), directly server-to-server:
psql -d sourcedb -c "\copy (SELECT [some 115 columns] FROM public.table_server2) TO STDOUT" | psql -d targetdb -c "\copy table_server1 FROM STDIN"
Of course you need to specify the host and user/password for psql on both sides.
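For example, with placeholder hosts and users (the password can be supplied via ~/.pgpass or the PGPASSWORD environment variable):
psql -h source_host -U source_user -d sourcedb -c "\copy (SELECT [some 115 columns] FROM public.table_server2) TO STDOUT" | psql -h target_host -U target_user -d targetdb -c "\copy table_server1 FROM STDIN"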

Netezza External table trims decimals after 0

I am trying to create an external table file from the Netezza database. There is one numeric column in the table with the value 0.00. The external table file that is generated stores only 0 and truncates the rest. However, when using nzsql from the command prompt to pull the same data, the entire field comes back exactly as it is stored in the database. I have tried the IGNOREZERO option when creating the external table, but that flag only works if the field is CHAR or VARCHAR.
The table field in the database:
+-------------+
| NUMERIC_COL |
+-------------+
|        0.00 |
+-------------+
The command used to create external table:
nzsql -u -pw -db -c "CREATE EXTERNAL TABLE 'file' USING (IGNOREZERO false) AS SELECT numeric_col FROM table;"
output: 0
Now, if I use nzsql to select the same field
nzsql -u -pw -db -c "SELECT numeric_col FROM table;"
output: 0.00
Is there a flag/command I could use to keep the decimals in the external table? Thanks!
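The only workaround I can think of so far is casting the numeric column to a character type in the SELECT, though I have not confirmed that the cast keeps the trailing zeros:
nzsql -u -pw -db -c "CREATE EXTERNAL TABLE 'file' USING (IGNOREZERO false) AS SELECT CAST(numeric_col AS VARCHAR(20)) AS numeric_col FROM table;"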

"Using 'BCP(Bulk Copy Program)/Bulk Insert' can we send data from One server1 database to another server2 Database..?"

Using BCP (Bulk Copy Program) / BULK INSERT, can we send data from a server1 database to another server2 database?
SERVER_1                              SERVER_2
    |                                     |
DATABASE_1                            DATABASE_2
    |                                     |
TABLE_1 (5 COLUMNS)
    |_____________________________> TABLE_1 (ID, NAME)
    |_____________________________> TABLE_2 (AGE, GENDER)
    |_____________________________> TABLE_3 (ADDR)
Run BCP once to output all 5 columns of Table_1, then run BCP 3 times to load the 3 different tables on Server_2, using 3 different format files to pick which columns to load into each table. See the BCP Utility documentation for more info, and there's a lot of info on using format files too.
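A rough sketch of that sequence, with placeholder logins and hypothetical format-file names (each .fmt maps only the relevant fields of the exported file to its target table):
bcp DATABASE_1.dbo.TABLE_1 out table1_all.dat -S SERVER_1 -U user -P pass -c -t","
bcp DATABASE_2.dbo.TABLE_1 in table1_all.dat -S SERVER_2 -U user -P pass -f id_name.fmt
bcp DATABASE_2.dbo.TABLE_2 in table1_all.dat -S SERVER_2 -U user -P pass -f age_gender.fmt
bcp DATABASE_2.dbo.TABLE_3 in table1_all.dat -S SERVER_2 -U user -P pass -f addr.fmt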

PostgreSQL: How to copy data from one database table to another database

I need a simple example of how to copy data from database DB1, table T1, to database DB2, table T2.
T2 has an identical structure to T1 (same column names and properties, just different data).
DB2 runs on the same server as DB1, but on a different port.
In case the two databases are on two different server instances, you could export to CSV from db1 and then import the data into db2:
COPY (SELECT * FROM t1) TO '/home/export.csv';
and then load it back into db2:
COPY t2 FROM '/home/export.csv';
Again, the two tables on the two different database instances must have the same structure.
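Note that plain COPY writes PostgreSQL's tab-delimited text format; if you specifically want a CSV file, spell out the format (the same commands as above, just with the option added):
COPY (SELECT * FROM t1) TO '/home/export.csv' WITH (FORMAT csv);
COPY t2 FROM '/home/export.csv' WITH (FORMAT csv);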
Using the command-line tools pg_dump and psql, you could also do it this way:
pg_dump -U postgres -t t1 db1 | psql -U postgres -d db2
You can pass command-line arguments to both pg_dump and psql to specify the address and/or port of the server.
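For the setup in the question (same host, different port), that could look like this, with example ports:
pg_dump -U postgres -h localhost -p 5432 -t t1 db1 | psql -U postgres -h localhost -p 5433 -d db2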
Another option would be to use an external tool like openDBcopy to perform the migration/copy of the table.
You can try this one -
pg_dump -t table_name_to_copy source_db | psql target_db

MySQL dump specific values in DB

I want to make a dump file of a DB, but all I want from the DB are the rows that are associated with a specific value. For example, I want to create a dump file for all the tables with rows related to an organization_id of 23e4r. Is there a way to do that?
mysqldump has a --where option, which lets you specify a WHERE clause exactly as if you were writing a query, e.g.:
mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> <table> > dumpfile.sql
If you want to dump the results from multiple tables that match that criteria, it's:
for T in table1 table2 table3; do mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> $T >> dumpfile.sql;done
Assuming you are using a Bash shell or equivalent.
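To load the resulting dump into another database later, a generic restore would be:
mysql -u<user> -p<password> <database> < dumpfile.sql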
