How can I ignore truncation errors on bulk insert? - sql-server

Is there a way to ignore truncation?
I found this: https://www.sqlshack.com/sql-truncate-enhancement-silent-data-truncation-in-sql-server-2019/ but it doesn't work with BULK INSERT.
Thanks!

Related

SSMS Bulk inserts = Error + Which line is it?

I'm trying to insert a lot of data with SQL Server Management Studio. This is what I do:
I open my file containing a lot of SQL inserts: data.sql
I execute it (F5)
I get a lot of these:
(1 row(s) affected)
and some of these:
Msg 8152, Level 16, State 13, Line 26
String or binary data would be truncated.
The statement has been terminated.
Question: How do I get the error line number? Line 26 doesn't seem to be the correct error line number...
This is something that has annoyed SQL Server developers for years. Finally, with SQL Server 2017 CU12 and trace flag 460, you get a better error message, like:
Msg 2628, Level 16, State 6, Procedure ProcedureName, Line Linenumber
String or binary data would be truncated in table '%.*ls', column
'%.*ls'. Truncated value: '%.*ls'.
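To get the new message on 2017 CU12 or later, you turn the trace flag on before running the inserts; a minimal sketch (on SQL Server 2019 the detailed message is the default behavior):
-- Enable the detailed truncation message (Msg 2628) for this session;
-- pass -1 as a second argument to enable it server-wide instead.
DBCC TRACEON (460);
-- ... run your inserts ...
DBCC TRACEOFF (460);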
Another method to narrow this down is to add a print statement after each insert. Then, as the rows-affected messages scroll by in the output, the last printed message tells you which insert completed before the failure.
...
insert into table1
select...
print 'table1 insert complete'
insert into table2
select...
print 'table2 insert complete'
This isn't going to tell you which column, but it narrows things down to the offending insert. You can also add SET NOCOUNT ON at the top of your script if you don't want the rows-affected messages printed out.
One more addition: if you really are using BULK INSERT and weren't just using the term generally, you can specify an ERRORFILE. This logs the row(s) that caused the error(s) in your BULK INSERT command. It's important to know that by default, BULK INSERT will complete if there are 10 errors or fewer. You can override this by specifying MAXERRORS in your BULK INSERT command.
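A minimal sketch combining both options (the table name and file paths here are placeholders):
-- Rows that fail to load are written to the ERRORFILE (a companion file
-- describing each error is created alongside it). MAXERRORS = 0 stops the
-- load on the first error instead of tolerating the default 10.
BULK INSERT dbo.YourTable
FROM 'C:\data\data.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    ERRORFILE = 'C:\data\data_errors.log',
    MAXERRORS = 0
);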

SQL Server Bulk Insert CSV Issue

I'm having an issue that I have not encountered before when bulk inserting from a csv file. For whatever reason, the last column isn't being separated on insert. I kept getting type conversion errors that I knew couldn't be true, so I changed the datatype to varchar to see what was being inserted. When I looked at the result set, instead of the values being separated into two columns as in the .csv (e.g. 35.44 and 56.82), I saw them combined into one column (e.g. 35.44,56.82). This of course is why SQL Server was throwing that error, but how can I resolve it? Am I missing something simple?
To sum it up, the BULK INSERT is ignoring the last field terminator and combining the last two columns into one column.
My Bulk Insert:
BULK INSERT [YourTableName]
FROM 'YourFilePathHere'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
A row:
YSQ3863,Bag 38x63 YELLOW 50/RL,CS,BAG,17.96,LB,1,50,50,YELLOW,,,,,,63,17.96,,,,38,,2394,,8.15,11.58,19.2,222.41
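The thread doesn't record a confirmed fix, but one general BULK INSERT behavior is relevant here: the last column of the table is terminated by ROWTERMINATOR, not FIELDTERMINATOR. So if the table defines one column fewer than the file has fields, the final two file fields are read into the last table column as a single value, which is exactly the symptom above. A quick sanity check (using the question's placeholder table name):
-- The table must define exactly as many columns as the file has fields;
-- compare this count against the number of comma-separated fields in a row.
SELECT COUNT(*) AS column_count
FROM sys.columns
WHERE object_id = OBJECT_ID('YourTableName');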

BULK INSERT from CSV into SQL Server causes error

I've got the simple table in CSV format:
999,"01/01/2001","01/01/2001","7777777","company","channel","01/01/2001"
990,"01/01/2001","01/01/2001","767676","hhh","tender","01/01/2001"
3838,"01/01/2001","01/01/2001","888","jhkh","jhkjh","01/01/2001"
08987,"01/01/2001","01/01/2001","888888","hkjhjkhv","jhgjh","01/01/2001"
8987,"01/01/2001","01/01/2001","9999","jghg","hjghg","01/01/2001"
jhkjhj,"01/01/2001","01/01/2001","9999","01.01.2001","hjhh","01/01/2001"
090009,"","","77777","","","01/01/2001"
980989,"01/01/2001","01/01/2001","888","","jhkh","01/01/2001"
0000,"01/01/2001","01/01/2001","99999","jhjh","","01/01/2001"
92929,"01/01/2001","01/01/2001","222","","","01/01/2001"
I'm trying to import that data into SQL Server using BULK INSERT (Transact-SQL)
set dateformat DMY;
BULK INSERT Oracleload
FROM '\\Mac\Home\Desktop\Test\T_DOGOVOR.csv'
WITH
(FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
KEEPNULLS);
The output shows the following error:
Msg 4864, Level 16, State 1, Line 4
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 2 (date_begin)....
Maybe something is wrong with the date format. What script do I need to write to fix that error?
Please help.
Thanks in advance.
Neither BULK INSERT nor bcp can (properly) handle CSV files, especially if they have (correctly placed) " quotes. Alternatives are SSIS or PowerShell.
I always look at the data in Notepad++ to see if there are weird or non-printable characters, like a stray line break. If you don't have Notepad++, you can open the file in Notepad, do a find-and-replace from " to nothing, save the file, and re-run the bulk load.
This record:
jhkjhj,"01/01/2001","01/01/2001","9999","01.01.2001","hjhh","01/01/2001"
The first column has a numeric type of some kind. You can't put the jhkjhj value into that field.
Additionally, some records have empty values ("") in date fields. These are likely to be interpreted as empty strings rather than null dates, and will not convert properly.
But the error refers to "row 1, column 2". That's this value:
"01/01/2001"
Again, the import is interpreting this as a string, rather than a date. I suspect it's trying to import the quotes (") instead of just using them as separators.
You might try bulk loading into a special holding table and then re-importing from there. Alternatively, you can change how the data is exported, or write a program to pre-clean it: strip the quotes from fields that shouldn't have them, and isolate records whose data won't insert into an exception file and report on them.
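A minimal sketch of the holding-table approach, assuming seven varchar columns matching the sample file (all column names except date_begin are invented for illustration):
-- Stage everything as text first; nothing fails to convert at this point.
CREATE TABLE #staging (
    num        varchar(50),
    date_begin varchar(50),
    date_end   varchar(50),
    code       varchar(50),
    name       varchar(100),
    channel    varchar(100),
    date_reg   varchar(50)
);

BULK INSERT #staging
FROM '\\Mac\Home\Desktop\Test\T_DOGOVOR.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Strip the quotes, turn empty strings into NULLs, and convert with
-- TRY_CONVERT (style 103 = dd/mm/yyyy); bad values become NULL instead
-- of aborting the load. The target column names are assumptions.
INSERT INTO Oracleload (num, date_begin, date_end, code, name, channel, date_reg)
SELECT TRY_CONVERT(int,  NULLIF(REPLACE(num, '"', ''), '')),
       TRY_CONVERT(date, NULLIF(REPLACE(date_begin, '"', ''), ''), 103),
       TRY_CONVERT(date, NULLIF(REPLACE(date_end, '"', ''), ''), 103),
       REPLACE(code, '"', ''),
       REPLACE(name, '"', ''),
       REPLACE(channel, '"', ''),
       TRY_CONVERT(date, NULLIF(REPLACE(date_reg, '"', ''), ''), 103)
FROM #staging;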

continue insert statement even though exception is generated

I am trying to execute the query below, and during execution a constraint violation exception is generated, which terminates the insert statement.
Suppose 9 out of 10 records are clean; I want the insertion to be done for those 9. Right now the statement is terminated and no insertion is performed.
I am using SQL Server 2012. I do not want to roll back the transaction, and there is no INSERT IGNORE command in SQL Server. I do not want to insert data which contains errors; I just want to insert clean data.
Query :
INSERT INTO rcmschargepostingmastertable
(clinicid,
clinicsiteid,
appointmentid,
patientid
)
SELECT clinicid,
clinicsiteid,
appointmentid,
patientid
FROM #tempautopostbulkchargepostingmastertable
It is not possible to do what you stated in your comment:
i want to ignore any sql error and want to continue insertion for
clean records
SQL Server doesn't have any pure SQL mechanism for doing this. Your only choice is to use one of the proposed work-arounds (SSIS, WHERE clause).
One work-around that hasn't been mentioned, because it's the worst performance-wise (but at least it's one that you haven't shot down), is to replace your set-based insert with a cursor that does the inserts one row at a time.
Then you can put the single-row insert in a TRY block; if it errors, the CATCH block swallows the error and the cursor moves on to the next row.
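A minimal sketch of that pattern, using the table and columns from the question (the int datatypes are assumptions):
DECLARE @clinicid int, @clinicsiteid int, @appointmentid int, @patientid int;

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT clinicid, clinicsiteid, appointmentid, patientid
    FROM #tempautopostbulkchargepostingmastertable;

OPEN cur;
FETCH NEXT FROM cur INTO @clinicid, @clinicsiteid, @appointmentid, @patientid;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO rcmschargepostingmastertable
            (clinicid, clinicsiteid, appointmentid, patientid)
        VALUES (@clinicid, @clinicsiteid, @appointmentid, @patientid);
    END TRY
    BEGIN CATCH
        -- constraint violation: skip this row and keep going
        PRINT 'Skipped a row: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM cur INTO @clinicid, @clinicsiteid, @appointmentid, @patientid;
END
CLOSE cur;
DEALLOCATE cur;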
I do not want to insert data which contains errors; I just want to insert clean data.
Then you need to identify and filter out the bad data and constraint-violating records before inserting into the target table, which will make your life easier.
........
modifiedbyid
FROM #tempautopostbulkchargepostingmastertable
WHERE some_column <> 'bad data'
Since you are using SQL Server 2012, you can use TRY_CONVERT to identify and filter out the bad data.
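A sketch of that filter (the int check on clinicid is an invented example; adapt the predicate to whatever makes a row "bad" in your data):
-- TRY_CONVERT returns NULL when the value cannot be converted, so bad
-- rows are simply filtered out instead of aborting the whole insert.
INSERT INTO rcmschargepostingmastertable
    (clinicid, clinicsiteid, appointmentid, patientid)
SELECT clinicid, clinicsiteid, appointmentid, patientid
FROM #tempautopostbulkchargepostingmastertable
WHERE TRY_CONVERT(int, clinicid) IS NOT NULL;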

Bulk INSERT without FIELDDELIMITER

How can I bulk insert a file like below?
test.txt
012341231
013212313
011312321
012312312
The text file does not contain a delimiter. I have used:
BULK INSERT tbl_import_#id#
FROM '../test.txt'
WITH
(FIELDTERMINATOR = '\t',
ROWTERMINATOR = '\n')
and I got an error for that. Appreciate any help, thanks.
There is no problem; you can specify a field terminator even if your file doesn't contain any field terminators like \t or ,.
Please post the error you got, and check your FROM file location (".../test.txt") and the schema of the table you're importing into. I cannot reproduce your error; it works fine for me with your values.
Just run the query without FIELDTERMINATOR:
BULK INSERT tbl_import_#id#
FROM '../test.txt'
WITH (ROWTERMINATOR = '\n')
The FIELDTERMINATOR argument would be helpful if your table had multiple columns (more values per row). That is not the case here, so the values only need to be separated into rows, each of which becomes a record in your table.
EDIT:
If you can use a different table, just create one with only 1 column (the ID column) and run the import (the query above).
After that, run an ALTER TABLE command to add the other columns that you want.
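A sketch of that approach (the table name, column names, and the added column are invented for illustration):
-- The one-column table receives the whole file, one value per row.
CREATE TABLE tbl_import_staging (id varchar(20));

BULK INSERT tbl_import_staging
FROM '../test.txt'
WITH (ROWTERMINATOR = '\n');

-- Widen the table afterwards with whatever other columns you need.
ALTER TABLE tbl_import_staging
    ADD imported_at datetime NULL;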