I'm trying to get BCP to insert the contents of a text file into a single field.
Example file content
Field1,field2,Field3
1,test,,
2,,test
3,test,test
The following command imports each line above as a new row into my temp table.
bcp mydb..tempTable in c:\testFile.txt -T -c
I think the solution is to use the -r switch to specify the row terminator as the end of the file but I'm unsure how to do this.
EDIT
I found the solution. The text file I am importing is itself created using bcp; in my example, all of the file's contents come from a single nvarchar(max) field in a single row. If I set the row terminator via -r during the export, then that terminator also marks the end of my file. I can then import using bcp mydb..tempTable in c:\testFile.txt -T -c -r {eof}.
The only issue I have now is that the output from the BCP command states "Error = [Microsoft][SQL Server Native Client 10.0]Unexpected EOF encountered in BCP data-file"; however, the data still imports as I want, so presumably I can ignore this?
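For reference, a sketch of the export/import pair described above. Only mydb..tempTable and the {eof} terminator come from the post; the source table and column names are hypothetical, and {eof} stands for whatever custom terminator string was chosen:
rem export the single nvarchar(max) value, with a custom row terminator that only appears at the end of the file
bcp "SELECT BigText FROM mydb..sourceTable WHERE Id = 1" queryout c:\testFile.txt -T -c -r {eof}
rem import it back as a single row and field, using the same terminator
bcp mydb..tempTable in c:\testFile.txt -T -c -r {eof}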
Related
I have a CSV file I'm creating by exporting a table in SQL Server 2016 SP2 using the bulk copy utility (bcp.exe). I'm setting the code page to 65001 (which Microsoft's documentation states is UTF-8). However, when I stage the file in Snowflake and then try to use the COPY command to move it into a table, I get an error that says, "Invalid UTF8 detected in string '0xFF0xFE00x0000x0010x0010x0060x0000x0000x0000x0000x0010x00n0x0040x00M0x00M0x00c0x00A0x00A0x00M0x00'."
If I use the IGNORE_UTF8_ERRORS flag, I get data in my table that is unintelligible. Any suggestions about how to fix the issue would be gratefully received.
Here's my BCP call:
BCP "SELECT Id, Name FROM database_name.owner_name.table_name WHERE Id = '0011602001r4ddgAZA'" queryout C:\temp\test.csv "-t|" -w -T -S. -C 65001
Here's the code in Snowflake:
--Create a file format
create or replace file format SFCI_Account
type = 'CSV'
field_delimiter = '|'
validate_utf8 = True
;
-- Create a Stage object
create or replace stage SFCI_Account_stage
file_format = SFCI_Account;
-- Check my file is there
list @SFCI_Account_stage;
-- Copy the file into the table
copy into Test
from @SFCI_Account_stage
file_format = (format_name = SFCI_Account)
pattern='.*.csv.gz'
on_error = 'skip_file';
Apparently, all I needed to do was change the -w to -c in my BCP call and add the following:
-r "\r\n"
So, my final BCP call looks like this:
BCP "SELECT Id, Name FROM database_name.owner_name.table_name WHERE Id = '0011602001r4ddgAZA'" queryout C:\temp\test.csv "-t|" -c -T -S. -C 65001 -r "\r\n"
That fixed the UTF-8 error, but now I have to figure out how to deal with carriage returns in the data.
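One possible approach, not from the original post, is to strip the embedded carriage returns and line feeds in the SELECT itself so they never reach the file; a minimal sketch using the same query:
BCP "SELECT Id, REPLACE(REPLACE(Name, CHAR(13), ' '), CHAR(10), ' ') AS Name FROM database_name.owner_name.table_name WHERE Id = '0011602001r4ddgAZA'" queryout C:\temp\test.csv "-t|" -c -T -S. -C 65001 -r "\r\n"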
I have an Azure SQL Server database and a Linux box. I have a CSV file on the Linux machine that I want to import into SQL Server, and I already have a table created into which I am going to import this file. I have the following questions:
1) Why does this command return "Unknown argument: -S"?
bcp table in ~/test.csv -S databaseServerName -d dbName -U myUsername -q -c -t
2) How do I import only part of the CSV file? It has 20 columns, but I only want to import 2.
3) My table has these two columns: State and Province. My CSV file has these two columns that I want to import: State and Region. How do I map the file's Region column to the table's Province column?
For #2 and #3, you need to use a BCP format file. This gives you column-level control over which fields from the file go to which columns in the destination and which are left out (not given a destination). Use the -f option of bcp to specify the location and name of the format file you want to use. (Sorry, no help yet with #1; I have a few questions/suggestions, but I'm not that familiar with Linux environments.)
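As a rough illustration only, not the asker's actual 20-column layout, here is what a non-XML format file could look like for a hypothetical three-column CSV where the first field loads into State, the second field (Region in the file) loads into Province, and the third field is skipped. The field lengths, the version number on the first line (it should match your bcp client), and the three-column layout are all assumptions:
13.0
3
1   SQLCHAR   0   100   ","    1   State      SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   100   ","    2   Province   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   100   "\n"   0   Skipped    ""
The sixth column is the ordinal position of the destination column in the table; a 0 there tells bcp to ignore that file field, and pointing the file's Region field at the table's Province column is what does the renaming.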
For part 2 of your question, you can use the Linux cut command to extract just the columns you want. A short awk script can do the same thing (see this SO answer). For both of these, you'll have to identify the "State" and "Region" columns by number. A non-native solution is querycsv.py, which can also rename the "Region" column (disclaimer: I wrote querycsv.py).
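For example, if State and Region happened to be the 3rd and 7th columns of the file (purely hypothetical positions), the cut version could be:
cut -d, -f3,7 ~/test.csv > state_region.csv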
For part 3 of your question, you can use the Linux sed command to change the column name on the first line of the CSV file, e.g., sed -e "1s/Region/Province/" file.csv >file2.csv.
I'm trying to transfer table data from one SQL Server to another and want to use the bcp utility for it. This is purely to transfer data between two identical schemas, but I'm not able to use something like SSDT; I need something that is scriptable and portable so it can be run by others with just SQL Server and SSMS access.
I am generating a native output file and format file like so:
$> bcp database.TableName OUT c:\data\bcp\TableName.bcp -T -N -S SQLINSTANCE
$> bcp database.TableName format nul -f c:\data\bcp\TableName.fmt -T -N
Then, in Management Studio, I am trying to read the files back in like this:
SELECT *
FROM OPENROWSET (BULK 'c:\data\bcp\TableName.bcp',
     FORMATFILE = 'c:\data\bcp\TableName.fmt') AS t1
But I am getting this error:
The bulk load failed. The column is too long in the data file for row 6, column 19. Verify that the field terminator and row terminator are specified correctly.
I have followed this process before successfully, and it works for other tables, but I'm running into an issue with this table. The column mentioned is of datatype nvarchar(max). I can inspect what I think is the "problem" record in the source data, and it's just a very long string; I don't see anything else special about it.
Is there something else I should be doing when generating the format file, or what else am I missing?
If you are only exporting for the purpose of importing into another SQL Server, native format is the way to go, and in this case you don't need format files at all. Just do a native export and import.
Note that you are specifying an uppercase -N, which is Unicode native format, not plain native; plain native is lowercase -n.
You should export using something like:
bcp database.Schema.TableName OUT c:\data\bcp\TableName.bcp -T -n -S SQLINSTANCE
Then on the importing side I suggest using BULK INSERT, which doesn't need a format file for native data at all:
BULK INSERT TargetDB.dbo.TargetTable
FROM 'c:\data\bcp\TableName.bcp'
WITH (DATAFILETYPE = 'native');
If you can't use BULK INSERT and absolutely must go with OPENROWSET, you need a format file. bcp can generate that for you, but again with lowercase -n:
bcp database.Schema.TableName format nul -f c:\data\bcp\TableName.fmt -T -n -S SQLINSTANCE
Now your OPENROWSET should work.
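Putting the import side together, a sketch of the OPENROWSET route with the regenerated native format file. TargetDB.dbo.TargetTable is carried over from the BULK INSERT example above and assumes a column layout matching the exported table:
INSERT INTO TargetDB.dbo.TargetTable
SELECT *
FROM OPENROWSET (BULK 'c:\data\bcp\TableName.bcp',
     FORMATFILE = 'c:\data\bcp\TableName.fmt') AS t1;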
I'm trying to import data into a SQL Server table from a file using a format file.
In fact I have two databases: a production database and a local database.
I want to insert some rows of the shipper table from the production database into the local one. The shipper table has neither the same columns nor the same column order in the two databases.
That's why I used a format file for my bcp.
I generate a file containing the rows I want to insert into my local database with the following command:
bcp "SELECT shipper_id,Shipper_name FROM ProductionDatabase.dbo.shipper where shipper_id >5" queryout shipper.txt -c -T
It works!
I then generate the format file with the schema of my local table with the following command:
bcp LocalDatabase.dbo.shipper nul -T -n -f shipper-n.fmt
It works!
Unfortunately, when I try to insert the file data into my local table with the following command:
bcp LocalDatabase.dbo.shipper in shipper.txt -T -f shipper-n.fmt
it generates the following error (translated from French):
unexpected end of file encountered in the BCP data file
Does anyone know what the problem is and how I can get around it?
Thanks in advance.
Your format file does not match the data file. You are exporting as character (text) data using -c:
bcp "SELECT shipper_id,Shipper_name FROM ProductionDatabase.dbo.shipper where shipper_id >5" queryout shipper.txt -c -T
but your format file was generated for native (binary) data using -n:
bcp LocalDatabase.dbo.shipper nul -T -n -f shipper-n.fmt
Either export both as native (my recommendation) or both as character data. To prevent this error, generate the format file with the same data-format switch (-c or -n) that you use for the export, so the two always match.
Text version:
bcp LocalDatabase.dbo.shipper format nul -T -c -f shipper-c.fmt
bcp "SELECT shipper_id,Shipper_name FROM ProductionDatabase.dbo.shipper where shipper_id >5" queryout shipper.txt -c -T
or
Native version:
bcp LocalDatabase.dbo.shipper format nul -T -n -f shipper-n.fmt
bcp "SELECT shipper_id,Shipper_name FROM ProductionDatabase.dbo.shipper where shipper_id >5" queryout shipper.txt -n -T
PS. Since you can run into scenarios where your field or row delimiters exist in the data, you should pick a character sequence that does not occur in your data as a separator, for instance -t"\t|\t" (tab-pipe-tab) for fields and -r"\t|\n" (tab-pipe-newline) for rows. If you specify the same separators when generating the format file and when exporting, the data file and the format file will still match, and you are free to change the separators whenever you need to. Specify the separators after the -n or -c on the command line, as in the example below.
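A sketch of the character-mode pair with those separators applied, reusing the names from the question; the import then just points at the same format file:
bcp LocalDatabase.dbo.shipper format nul -T -c -t"\t|\t" -r"\t|\n" -f shipper-c.fmt
bcp "SELECT shipper_id,Shipper_name FROM ProductionDatabase.dbo.shipper where shipper_id >5" queryout shipper.txt -c -t"\t|\t" -r"\t|\n" -T
bcp LocalDatabase.dbo.shipper in shipper.txt -T -f shipper-c.fmt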
I have a source file in which the first field is empty in 90% of the rows. I want to load this file into a SQL Server table with the BCP utility. When I run the BCP command, BCP is not able to recognize or distinguish the records.
My source file has data as below.
|100168|27238800000|14750505|1|273
|100168|27238800000|14750505|1|273
|100681|88392930052|37080101|1|252
|101014|6810000088|90421505|12|799
|101595|22023000000|21050510|8|780
I am using:
bcp [DBNAME].[dbo].[TABLE1] in \\filelocation\filename -e \\filelocation\filename_Error.txt -c -t | -S ServerName -T -h TABLOCK -m 1
I am getting this error message in error.txt:
## Row 1, Column 28: String data, right truncation ## 100168 27238800000 14750505 1 273 100168|27238800000|14750505|1|273
Here BCP is not able to recognize the records; because of this, it tries to load the next record's data into the last field, which causes the truncation.
Table schema is
CREATE TABLE [DBO].[TABLE1](
FLD1 VARCHAR(10)
,FLD2 VARCHAR(10)
,FLD3 VARCHAR(22)
,FLD4 VARCHAR(15)
,FLD5 VARCHAR(10)
,FLD6 VARCHAR(12) )
You need to quote the pipe. The pipe character (|) is used by the command shell to redirect standard output between commands, so an unquoted -t | never reaches bcp as the field terminator.
The following simplified command works with your sample:
bcp.exe [db].dbo.[table1] in "path\Data.dat" -S".\instance" -T -c -t"|"
I omitted the error limit (-m), the error log (-e) and the table lock hint (-h); those should not affect the import, but if you still have an issue, try quoting parameters such as the server name and the file names as well.
I used a text file with standard \r\n row terminators as expected by -c
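Applied to the original command from the question, with the pipe, the paths and the server name quoted, that might look something like this (a sketch; the switch values themselves are unchanged):
bcp [DBNAME].[dbo].[TABLE1] in "\\filelocation\filename" -e "\\filelocation\filename_Error.txt" -c -t"|" -S "ServerName" -T -h "TABLOCK" -m 1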