My Postgres version is 12.4 and I'm loading multiple CSV files into a single table. Firing the command below once per file is not great; are there any alternatives or suggestions?
COPY testemail FROM '/md-data/vamshi/s3data/test_Hash_2021.csv' WITH (FORMAT csv);
COPY testemail FROM '/md-data/vamshi/s3data/test_Hash_2025.csv' WITH (FORMAT csv);
COPY testemail FROM '/md-data/vamshi/s3data/test_Hash_2026.csv' WITH (FORMAT csv);
...and so on, one COPY per file.
I also tried the command below, but it doesn't work:
COPY testemail FROM '/md-data/vamshi/s3data/t*.csv' WITH (FORMAT csv);
You can load all the files using:
cat /md-data/vamshi/s3data/test_Hash*.csv | psql -c 'COPY testemail from stdin CSV HEADER'
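A shell loop is another option; it keeps one COPY per file, which also matters if every file carries its own header row (piping everything through a single cat only skips the first header). A minimal sketch, assuming psql runs on the same machine as the files and that the database name is a placeholder; add HEADER to the options if your files have header rows:
for f in /md-data/vamshi/s3data/test_Hash*.csv; do
    psql -d mydb -c "\copy testemail FROM '$f' WITH (FORMAT csv)"
done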
I have an Azure SQL Server database and a Linux box. I have a CSV file on the Linux machine that I want to import into SQL Server, and a table already created to receive it. I have the following questions:
1) Why does this command return "Unknown argument: -S"?
bcp table in ~/test.csv -S databaseServerName -d dbName -U myUsername -q -c -t
2) How do I import only part of the csv file? It has 20 columns, but I only want to import 2.
3) My table has these two columns: State, Province. My CSV file has these two columns that I want to import: State, Region. How do I map Region to Province?
For #2 and #3, you need to use a BCP format file. It gives you column-level control over which fields from the file go to which columns in the destination and which are left behind (not given a destination).
Use the -f option of BCP and specify the location and name of the format file you want to use. Sorry, no help with #1 yet; I'm not that familiar with Linux environments.
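A minimal non-XML format file sketch of the idea (not your actual 20-column layout; the version line, lengths, names and collations here are placeholders you'd adjust). Each line maps a field in the file to a table column; setting the table column order to 0, as in the second line, tells bcp to discard that field, and the file's "Region" field can be pointed at the table's Province column because the mapping is by position, not by header name:
12.0
3
1   SQLCHAR   0   100   ","      1   State      SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   100   ","      0   Skipped    ""
3   SQLCHAR   0   100   "\r\n"   2   Province   SQL_Latin1_General_CP1_CI_AS
You would then run bcp with -f pointing at this file (and, if the CSV has a header row, something like -F 2 to start at the second row).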
For part 2 of your question, you can use the Linux cut command to extract just the columns you want. A short awk script can do the same thing (see this SO answer). For both of these, you'll have to identify the "State" and "Region" columns by number. A non-native solution is querycsv.py, which can also rename the "Region" column (disclaimer: I wrote querycsv.py).
For part 3 of your question, you can use the Linux sed command to change the column name on the first line of the CSV file, e.g., sed -e "1s/Region/Province/" file.csv >file2.csv.
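For example, a rough pipeline combining both ideas, assuming (hypothetically) that State is the 4th and Region the 12th of the 20 comma-separated columns, and that no field contains embedded commas:
cut -d, -f4,12 file.csv | sed -e "1s/Region/Province/" > file2.csv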
I have a small problem with BCP functionality in SQL Server 2012.
The thing is:
I'm loading a .jpg image (167 KB in size) using the command below:
INSERT [tabela_testowa] ( Data )
SELECT * FROM OPENROWSET (BULK N'C:\foty\ch6_MagicShop.jpg', SINGLE_BLOB) a;
and then I'm trying to export it back to disk using:
BCP "SELECT data FROM tabela_testowa WHERE ID = 1" queryout "C:\test\file.jpg" -T -n -d test
The file gets saved to disk with no problem, and the size is also 167 KB, but it can't be opened like the original copy.
I don't know whether some parameter is wrong in the BCP export, or maybe it gets corrupted at the import stage?
Has anyone had similar problems?
Thank god. Thanks to #user_0's answer, #user3494351's cryptic answer and comment, and this ancient forum post, I finally figured this out after several hours of banging my head against the wall.
The issue is that BCP adds an extra 8 bytes to the file by default. This corrupts the file and makes it impossible to open if you just use the native -n flag.
However, BCP lets you specify a format file that tells it not to add those extra 8 bytes. I have a table in SQL Server (to be used in a cursor) that holds only ONE ROW and ONE COLUMN with my binary data. The table must exist when you run the first command.
On the command line, first you need to do this:
bcp MyDatabase.MySchema.MyTempTable format nul -T -n -f formatfile.fmt
This creates formatfile.fmt in the directory you're in. I did it on the E:\ drive. Here's what it looks like:
10.0
1
1 SQLBINARY 8 0 "" 1 MyColumn ""
That 8 is the prefix length: it tells bcp how many prefix bytes to write in front of your data, and it's the culprit corrupting your files. Change it to 0:
10.0
1
1 SQLBINARY 0 0 "" 1 MyColumn ""
Now just run your BCP script; drop the -n flag and include the -f flag:
bcp "SELECT MyColumn FROM MyDatabase.MySchema.MyTempTable" queryout "E:\MyOutputpath" -T -f E:\formatfile.fmt
BCP is adding information to the file. It's only a few bytes, but you are not exporting just a .jpg file.
You say 167 KB, but look at the exact byte count, not the rounded size. There will be a difference.
You cannot export the image via BCP.
OK, so I solved the issue.
The format file has to be supplied using -f and the path to the file. It can be created by running bcp without any format options and telling it to save the format file to disk. With that format file in place the interactive prompts no longer need to be answered, and the exported file itself has no additional data and can be opened without problems.
I have a *.dump file (a PostgreSQL dump) and I would like to output my_table to my_table.csv. Is there a better way to do this than pg_restore -t my_table db.dump > my_table.txt and then writing a script to create the CSV from the output?
The output from pg_restore --data-only -t my_table db.dump is basically tab-separated, headerless text with some comments and a few extra commands. A script to mangle it into CSV with a tool like Perl or awk would be pretty simple.
That said, personally I would:
Restore the table to a temporary database created for the purpose. If the table depends on custom types, functions, sequences, etc., you will need to restore them too.
In psql, \copy the_table TO 'some_file.csv' WITH (FORMAT CSV, HEADER ON)
This way you can control the representation of nulls and lots more.
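A rough sketch of that workflow, with placeholder database and file names (if the table depends on other objects, restore those too, or restore the whole dump):
createdb scratch_restore
pg_restore -d scratch_restore -t my_table db.dump
psql -d scratch_restore -c "\copy my_table TO 'my_table.csv' WITH (FORMAT CSV, HEADER ON)"
dropdb scratch_restore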
I'm trying to get BCP to insert the contents of a text file into a single field.
Example file content
Field1,field2,Field3
1,test,,
2,,test
3,test,test
The following command imports each line above as a new row into my temp table.
bcp mydb..tempTable in c:\testFile.txt -T -c
I think the solution is to use the -r switch to specify the row terminator as the end of the file, but I'm unsure how to do this.
EDIT
I found the solution. The text file I am importing is itself first created with BCP; in my example, all of the file contents come from a single nvarchar(max) field and row. If I set the row terminator via -r during the export, then this also becomes the end of my file. I can then import using bcp mydb..tempTable in c:\testFile.txt -T -c -r {eof}.
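For reference, a sketch of the matching export/import pair, with hypothetical table and column names; the point is that the same -r {eof} terminator is used on both sides, so the whole file is treated as a single field and row:
bcp "SELECT BigTextColumn FROM mydb..sourceTable WHERE ID = 1" queryout c:\testFile.txt -T -c -r {eof}
bcp mydb..tempTable in c:\testFile.txt -T -c -r {eof}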
The only issue I have now is that the output from the BCP command states "Error = [Microsoft][SQL Server Native Client 10.0]Unexpected EOF encountered in BCP data-file"; however, the data still imports as I want, so presumably I can ignore this?