BULK INSERT from CSV into SQL Server causes error - sql-server

I've got a simple table in CSV format:
999,"01/01/2001","01/01/2001","7777777","company","channel","01/01/2001"
990,"01/01/2001","01/01/2001","767676","hhh","tender","01/01/2001"
3838,"01/01/2001","01/01/2001","888","jhkh","jhkjh","01/01/2001"
08987,"01/01/2001","01/01/2001","888888","hkjhjkhv","jhgjh","01/01/2001"
8987,"01/01/2001","01/01/2001","9999","jghg","hjghg","01/01/2001"
jhkjhj,"01/01/2001","01/01/2001","9999","01.01.2001","hjhh","01/01/2001"
090009,"","","77777","","","01/01/2001"
980989,"01/01/2001","01/01/2001","888","","jhkh","01/01/2001"
0000,"01/01/2001","01/01/2001","99999","jhjh","","01/01/2001"
92929,"01/01/2001","01/01/2001","222","","","01/01/2001"
I'm trying to import that data into SQL Server using BULK INSERT (Transact-SQL):
set dateformat DMY;
BULK INSERT Oracleload
FROM '\\Mac\Home\Desktop\Test\T_DOGOVOR.csv'
WITH
(FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
KEEPNULLS);
The output shows the following error:
Msg 4864, Level 16, State 1, Line 4
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 2 (date_begin)....
Maybe something is wrong with the date format. But what script do I need to write to fix that error?
Please help.
Thanks in advance.

Neither BULK INSERT nor bcp can (properly) handle CSV files, especially if they (correctly) have " quotes. Alternatives are SSIS or PowerShell.
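That said, if you happen to be on SQL Server 2017 or later, BULK INSERT can also parse quoted CSV natively via FORMAT = 'CSV' and FIELDQUOTE; a minimal sketch reusing the table and path from the question:
BULK INSERT Oracleload
FROM '\\Mac\Home\Desktop\Test\T_DOGOVOR.csv'
WITH
(FORMAT = 'CSV',
FIELDQUOTE = '"',
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n');
On older versions, the SSIS/PowerShell alternatives above are the way to go.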

I always look at the data in Notepad++ to see if there are some weird characters or non-printable characters, like a line break or something. For this file, it seems like you could open it in Notepad (if you don't have Notepad++), do a find-and-replace of " with nothing, save the file, and re-do the bulk load.

This record:
jhkjhj,"01/01/2001","01/01/2001","9999","01.01.2001","hjhh","01/01/2001"
The first column has a numeric type of some kind. You can't put the jhkjhj value into that field.
Additionally, some records have empty values ("") in date fields. These are likely to be interpreted as empty strings, rather than null dates, and not convert properly.
But the error refers to "row 1, column 2". That's this value:
"01/01/2001"
Again, the import is interpreting this as a string, rather than a date. I suspect it's trying to import the quotes (") instead of just using them as separators.
You might try bulk loading into a special holding table and then re-importing from there. Alternatively, you can change how the data is exported, or write a program to pre-clean it: strip the quotes from fields that shouldn't have them, and isolate records whose data won't insert into an exception file and report on them.
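For what it's worth, a rough sketch of the holding-table idea, assuming a hypothetical staging table where every CSV field lands as plain VARCHAR (only date_begin is a column name we actually know from the error message; the rest are placeholders):
CREATE TABLE Oracleload_staging
(col1 varchar(100),        -- hypothetical names; only date_begin is known from the error
date_begin varchar(100),
col3 varchar(100),
col4 varchar(100),
col5 varchar(100),
col6 varchar(100),
col7 varchar(100));

BULK INSERT Oracleload_staging
FROM '\\Mac\Home\Desktop\Test\T_DOGOVOR.csv'
WITH
(FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n');

-- Strip the surrounding quotes, then convert only values that parse as dd/mm/yyyy dates;
-- TRY_CONVERT (SQL Server 2012+) returns NULL for anything that doesn't.
SELECT TRY_CONVERT(date, NULLIF(REPLACE(date_begin, '"', ''), ''), 103) AS date_begin
FROM Oracleload_staging;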

Related

SQL Server Bulk Insert CSV Issue

I'm having an issue that I have not encountered before when bulk inserting from a csv file. For whatever reason, the last column isn't being separated on insert. I kept getting type conversion errors that I knew couldn't be true, so I changed the datatype to varchar to see what was being inserted. When I looked at the result set, I saw that instead of two values separated into two columns (e.g. 35.44 and 56.82) as in the .csv, I saw them combined in one column (e.g. 35.44,56.82). This of course is why SQL Server was throwing that error, but how can I resolve this? Am I missing something simple?
To sum it up, the Bulk Insert is ignoring the last field terminator and combining the last two columns into one column
My Bulk Insert:
BULK
INSERT [YourTableName]
FROM 'YourFilePathHere'
WITH
(
FIELDTERMINATOR=',',
ROWTERMINATOR = '\n'
)
A row:
YSQ3863,Bag 38x63 YELLOW 50/RL,CS,BAG,17.96,LB,1,50,50,YELLOW,,,,,,63,17.96,,,,38,,2394,,8.15,11.58,19.2,222.41

Bulk Load Data Conversion Error - Can't Find Answer

For some reason I keep receiving the following error when trying to bulk insert a CSV file into SQL Express:
Bulk load data conversion error (type mismatch or invalid character for the
specified codepage) for row 2, column 75 (Delta_SM_RR).
Msg 4864, Level 16, State 1, Line 89
Bulk load data conversion error (type mismatch or invalid character for the
specified codepage) for row 3, column 75 (Delta_SM_RR).
Msg 4864, Level 16, State 1, Line 89
Bulk load data conversion error (type mismatch or invalid character for the
specified codepage) for row 4, column 75 (Delta_SM_RR).
... etc.
I have been attempting to insert this column as both decimal and numeric, and keep receiving this same error (if I take out this column, the same error appears for the subsequent column).
Please see below for an example of the data, all data points within this column contain decimals and are all rounded after the third decimal point:
Delta_SM_RR
168.64
146.17
95.07
79.85
60.52
61.03
-4.11
-59.57
1563.09
354.36
114.78
253.46
451.5
Any sort of help or advice would be greatly appreciated, as it seems that a number of people on SO have come across this issue. Also, if anyone knows of another automated way to load a CSV into SSMS, that would be a great help as well.
Edits:
Create Table Example_Table
(
[Col_1] varchar(255),
[Col_2] numeric(10,5),
[Col_3] numeric(10,5),
[Col_4] numeric(10,5),
[Col_5] date,
[Delta_SM_RR] numeric(10,5)
)
GO
BULK INSERT
Example_Table
FROM 'C:\pathway\file.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW = 2
);
Table Schema - This is a standalone table (further calculations and additional tables are built off of this single table, however at the time of bulk insert it is the only table)
It's likely that your data has an error in it. That is, that there is a character or value that can't be converted explicitly to NUMERIC or DECIMAL. One way to check this and fix it is to
Change [Delta_SM_RR] numeric(10,5) to [Delta_SM_RR] nvarchar(256)
Run the bulk insert
Find your error row: select * from Example_Table where [Delta_SM_RR] like '%[^-.0-9]%'
Fix the data at the source, or delete from Example_Table where [Delta_SM_RR] like '%[^-.0-9]%'
The last statements return/delete rows where there is something other than a digit, period, or hyphen.
For your date column you can follow the same logic above: change the column to VARCHAR, and then find your error rows by using ISDATE() to identify the values which can't be converted.
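A minimal sketch of that last check, assuming the date column from the example table ([Col_5]) has been loaded as VARCHAR:
-- ISDATE() returns 0 for values that cannot be converted to a date under the current session settings.
SELECT *
FROM Example_Table
WHERE [Col_5] IS NOT NULL
AND ISDATE([Col_5]) = 0;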
I'll bet anything there is some weird character in your data set. Open your data set in Notepad++ and view the data. Any aberration should become apparent very quickly! The problem is coming from Col75 and it's affecting the first several rows, and thus everything that comes after that also fails to load.
Make sure that the .csv is not using text qualifiers and that none of your fields in the .csv have a comma inside the desired value.
I am struggling with this right now. The issue is that I have a 68-column report I am trying to import.
Column 17 is a "Description" column that has a double-quote text qualifier on top of the comma delimiting.
Bulk insert with a comma field terminator won't recognize the double-quote text qualifier and will munge all of the data to the right of the offending column.
It looks like, to overcome this, you need to create a .fmt file to instruct BULK INSERT which columns it should treat as simply delimited and which it should treat as delimited and qualified (see this answer).
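A rough sketch of such a non-XML format file, cut down to a hypothetical three-column layout (rather than all 68) where only the middle column is quoted; the trick is to fold the quote characters into the field terminators. The column names and widths are made up, and the 12.0 on the first line is the bcp version number (adjust to your SQL Server version):
12.0
3
1   SQLCHAR   0   100   ",\""     1   Col1          SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   500   "\","     2   Description   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   100   "\r\n"    3   Col3          SQL_Latin1_General_CP1_CI_AS
Then point BULK INSERT at it:
BULK INSERT [YourTableName]
FROM 'YourFilePathHere'
WITH (FORMATFILE = 'YourFormatFilePathHere.fmt');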

Handling embedded new lines when creating/selecting External Tables in SQL Data Warehouse

In SQL Data Warehouse (editors please don't change this, it is the actual name; see here) I have a JobCandidate_ext external table that looks like this.
CREATE EXTERNAL TABLE [HumanResources].[JobCandidate_ext](
[JobCandidateID] int,
[BusinessEntityID] int,
[Resume] Varchar(8000),
[ModifiedDate] Datetime
)
WITH (
LOCATION='/[HumanResources].[JobCandidate]/data.txt',
DATA_SOURCE=AzureStorage,
FILE_FORMAT=TextFile)
GO
The column [Resume] was an XML type in SQL Server but in SQL Data Warehouse XML types should be converted to varchar(8000) as described here.
I am using a flat file data.txt to export the data to a blob and then create an external table from it.
The [Resume] column has carriage returns in it (as expected from an XML file), and so when you run a SELECT * FROM [HumanResources].[JobCandidate_ext] you get an error. In this case:
Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 2 rows processed.
(/[HumanResources].[JobCandidate]/data.txt)Column ordinal: 0, Expected data type: INT, Offending value: some text .... (Column Conversion Error), Error: Error converting data type NVARCHAR to INT.
I know that I cannot configure a row delimiter when creating external tables as described here.
The row delimiter must be UTF-8 and supported by Hadoop’s LineRecordReader. The row delimiter must be either '\r', '\n', or '\r\n'. These are not user-configurable.
And if you try to put quotes on each column field you get this error while selecting rows from the external table: No closing string delimiter.
Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.
(/[HumanResources].[JobCandidate]/data.txt)Column ordinal: 2, Expected data type: VARCHAR(8000) collate SQL_Latin1_General_CP1_CI_AS, Offending value: 'ShaiBassli (Tokenization failed), Error: No closing string delimiter.
Is there a way to get around this issue?
Today, PolyBase does not allow for row or field delimiters inside fields, i.e. it does not allow you to escape these characters. As Greg pointed out, you can vote for this functionality here: https://feedback.azure.com/forums/307516-sql-data-warehouse/suggestions/10600132-polybase-allow-line-ends-within-qualified-text-f
To work around this limitation, you can either pre-process the data (using sed or tr, for example) to replace unwanted characters before reading it with PolyBase, or you can switch to one of the other PolyBase-supported file formats (RCFile/ORC/Parquet) to avoid dealing with row and field delimiters completely.
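For the file-format route, a minimal sketch, assuming the data is re-exported as Parquet to a hypothetical data.parquet path under the same AzureStorage data source:
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);
GO
CREATE EXTERNAL TABLE [HumanResources].[JobCandidate_parquet_ext](
[JobCandidateID] int,
[BusinessEntityID] int,
[Resume] Varchar(8000),
[ModifiedDate] Datetime
)
WITH (
LOCATION='/[HumanResources].[JobCandidate]/data.parquet',
DATA_SOURCE=AzureStorage,
FILE_FORMAT=ParquetFormat)
GO
Since Parquet is not a delimited format, the embedded carriage returns in [Resume] stop being a problem.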

How can I get a Sql Server Bulk Insert to work when using a 'þ' as a FieldDelimiter?

Executing the following statement against SQL Server 2005 is failing.
BULK INSERT aTest FROM 'G:/aTest.txt' WITH (FIELDTERMINATOR='þ',ROWTERMINATOR='\n');
The error is this
Msg 4832, Level 16, State 1, Line 10
Bulk load: An unexpected end of file was encountered in the data file.
If I change the FIELDTERMINATOR to a comma and I change the data file to have a comma it works as expected.
Here's my data file (aTest.txt):
1þfirst
2þtwo
The answer to my specific question/problem was to make sure that the data file is ASCII encoded, because I wanted to use 'þ' as a field terminator. My data file happened to be UTF-8 encoded, which caused the terminator to be ignored.

Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)"

I'm trying to load my database with tons of data from a 1.4 GB .csv file, but when I try to run my code I get errors.
Here's my code:
USE [Intradata NYSE]
GO
CREATE TABLE CSVTest1
(Ticker varchar(10) NULL,
dateval date NULL,
timevale time(0) NULL,
Openval varchar(10) NULL,
Highval varchar(10) NULL,
Lowval varchar(10) NULL,
Closeval varchar(10) NULL,
Volume varchar(10) NULL
)
GO
BULK
INSERT CSVTest1
FROM 'c:\intramerge.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
--Check the content of the table.
SELECT *
FROM CSVTest1
GO
--Drop the table to clean up database.
DROP TABLE CSVTest1
GO
I'm trying to build a database with lots of stock quotes, but I get this error message:
Msg 4832, Level 16, State 1, Line 2 Bulk load: An unexpected end of
file was encountered in the data file. Msg 7399, Level 16, State 1,
Line 2 The OLE DB provider "BULK" for linked server "(null)" reported
an error. The provider did not give any information about
the error. Msg 7330, Level 16, State 2, Line 2 Cannot fetch a row from
OLE DB provider "BULK" for linked server "(null)"
I do not understand much SQL, but I hope to catch a thing or two. Hopefully someone sees something that might be very obvious.
Resurrecting an old question, but in case this helps someone else: after much trial-and-error I was finally (finally!) able to get rid of this error by changing this:
ROWTERMINATOR = '\n'
To this:
ROWTERMINATOR = '0x0A'
(When '\n' is specified, BULK INSERT actually expects a carriage return + line feed (CRLF) pair as the row terminator, so a file with bare LF line endings needs the explicit 0x0A byte.)
I had the same issue.
Solution:
Verify the CSV or text file in a text editor like Notepad++. The last line might be incomplete; remove it.
I got the same error when I had a different number of delimited fields in my CSV than columns I had in my table. Check if you have the right number of fields in intramerge.csv.
Methods to determine rows with issues:
Open the CSV in a spreadsheet, add a filter to all the data, and look for empty values; those are the rows with fewer columns.
You can also use https://csvlint.com to create validation rules and detect problems in your CSV.
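If you'd rather do that check inside SQL Server, a hedged sketch: bulk load each raw line into a one-column staging table (the RawLines name is made up) and count the delimiters, assuming the file contains no tab characters:
CREATE TABLE RawLines (Line varchar(max));

BULK INSERT RawLines
FROM 'c:\intramerge.csv'
WITH (ROWTERMINATOR = '\n');
-- The default field terminator is a tab, so each whole line lands in the single column.

-- CSVTest1 has 8 columns, so a well-formed row should contain exactly 7 commas.
SELECT Line
FROM RawLines
WHERE LEN(Line) - LEN(REPLACE(Line, ',', '')) <> 7;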
This is my solution: just give up.
I always end up using SSMS and [ Tasks > Import Data ].
I have never managed to get a real world .csv file to import using this method. This is an utterly useless function that only works on pristine datasets that don't exist in the real world. Perhaps I've never had any luck because the datasets I deal with are quite messy and are generated by third parties.
And if it goes wrong, it doesn't give any clue as to why. Microsoft, you sadden me with your utter incompetence in this area.
Microsoft, perhaps add some error messages, so it says why it rejected it? Which line did it fail on? Which column did it fail on? It's almost impossible to fix the issue if you don't know why it failed!
This is an old question, but it seems that my finding might enlighten some other people having a similar issue.
The default SSIS timeout value appears to be 30 seconds. Any service-bound or IO-bound operation in your package that takes well beyond that value causes a timeout. Increasing that timeout value (change it to "0" for no timeout) will resolve the issue.
I got this error when my format file (i.e. specified using the FORMATFILE param) had a column width that was smaller than the actual column size (e.g. varchar(50) instead of varchar(100)).
I got this exception when the char field in my SQL table was too small for the text coming in. Try making the column bigger.
This might be a bad idea with a full 1.5GB, but you can try it on a subset (start with a few rows):
CREATE TABLE CSVTest1
(Ticker varchar(MAX) NULL,
dateval varchar(MAX) NULL,
timevale varchar(MAX) NULL,
Openval varchar(MAX) NULL,
Highval varchar(MAX) NULL,
Lowval varchar(MAX) NULL,
Closeval varchar(MAX) NULL,
Volume varchar(MAX) NULL
)
... do your BULK INSERT, then
SELECT MAX(LEN(Ticker)),
MAX(LEN(dateval)),
MAX(LEN(timevale)),
MAX(LEN(Openval)),
MAX(LEN(Highval)),
MAX(LEN(Lowval)),
MAX(LEN(Closeval)),
MAX(LEN(Volume))
This will help tell you if your estimates of column size are way off. You might also find your columns are out of order, or the BULK INSERT might still fail for some other reason.
I encountered a similar issue, but in this case the file being loaded contained some blank lines. Removing the blank lines solved it.
Alternatively, as the file was delimited, I added the correct number of delimiters to the blank lines, which again allowed the file to import successfully - use this option if the blank lines need to be loaded.
This can also happen if your file's columns are separated with ";" but you are using "," as the FIELDTERMINATOR (or the other way around).
I just want to share my solution to this. The problem was the size of the table columns; use varchar(255) and all should work.
The bulk insert will not tell you if the import values will "fit" into the field format of the target table.
For example: I tried to import decimal values into a float field. But as the values all had a comma as decimal point, it was unable to insert them into the table (it was expecting a point).
These unexpected results often happen when the provided CSV value is an export from an Excel file. Your computer's regional settings decide which decimal separator is used when saving an Excel file as CSV, so CSVs provided by different people will produce different results.
Solution: import all fields as VARCHAR, and try to deal with the values afterwards.
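A small sketch of that clean-up step, assuming a hypothetical staging table RawImport with the price loaded into a VARCHAR column named PriceText:
-- Swap the comma decimal separator for a point, then convert;
-- TRY_CONVERT (SQL Server 2012+) returns NULL for anything that still doesn't parse.
SELECT TRY_CONVERT(float, REPLACE(PriceText, ',', '.')) AS Price
FROM RawImport;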
For anyone who happens to come across this post, my problem was a simple oversight in regard to syntax. I had this inline with some Python, and brought it straight into SSMS:
BULK
INSERT access_log
FROM '[my path]'
WITH (FIELDTERMINATOR = '\\t', ROWTERMINATOR = '\\n');
The problem being, of course, the double backslashes which were needed in Python for the way I had this embedded as a string in the script. Correcting to '\t' and '\n' obviously fixed it.
The same happened with me; it turns out this was due to duplicate column names. I renamed the columns to be unique, and it works fine.
Please look at your file; if there are any special characters or spaces at the end of the file, remove them and try again.
I came across another potential reason. I got this error when a column in my table was an int but the user had commas in the csv file. Changing to number formatting imported the data.
In my case I used a txt file to import data into SQL Server. All the columns matched and I couldn't find what was wrong. In the end, it was an encoding problem.
Solution: use Notepad++ to change the file to the right encoding.
I got this error when I tried to pass NULL for int columns, even though those columns are nullable.
So I opened the csv file in an editor and replaced all NULL values with empty values, and it worked.
Before Data:
636,NULL,NULL,1,5,K0007,105,NULL,2023-02-15 11:27:11.563
After Data:
636,,,1,5,K0007,105,,2023-02-15 11:27:11.563
