"Data conversion error" with BULK INSERT - sql-server

I am trying to import some data into SQL Server using the BULK INSERT command. This is the error I am getting:
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 6 (NClaims).
I created a test file with only one row of data, which I was able to import successfully:
00000005^^18360810^408^30^0
However, when I added two more rows of data (which are very similar to the row above), I got the error message given above. These are the two additional rows of data:
00000003^^18360801^142^42^0
00000004^^18360000^142^10^0
As you can see, there does not seem to be any difference (in terms of data length or data types) between the two rows above and the single row given previously. So why am I getting this error, and how do I fix it?
EDIT:
This is the command I am executing:
BULK INSERT GooglePatentsIndividualDec2012.dbo.patent
FROM 'C:\Arvind Google Patents Data\patents\1patents_test.csv'
WITH ( FIELDTERMINATOR = '^', ROWTERMINATOR='\n');

Be patient and experiment, excluding one thing at a time. For example:
1. Remove the third row and check whether everything is OK.
2. If yes, put the row back but change 10^0 to 42^0, and check again.
3. Repeat step 2 with all the data, changing it to the values in row 2 (which is OK).
You will find the piece of data that causes the error; a sketch that lets SQL Server capture it for you follows.
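As a complement to the manual bisection, BULK INSERT can log the rejected rows itself. This is only a sketch reusing the command from the question; the error-file path is a hypothetical choice, and MAXERRORS simply keeps the load running past bad rows:

BULK INSERT GooglePatentsIndividualDec2012.dbo.patent
FROM 'C:\Arvind Google Patents Data\patents\1patents_test.csv'
WITH (
    FIELDTERMINATOR = '^',
    ROWTERMINATOR = '\n',
    MAXERRORS = 10,     -- do not abort on the first conversion error
    ERRORFILE = 'C:\Arvind Google Patents Data\patents\bad_rows.log'  -- hypothetical path; rejected rows land here
);

SQL Server writes the failing rows to that file (plus a companion .Error.Txt control file), which pinpoints the offending data without trial and error.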

Related

Converting data in column in SSIS

I'm writing an SSIS package to load data from a .csv into a db.
There's a column in the CSV file that is supposed to hold a count, but some records have text in it instead, so I can't just load the data in as an integer.
I want the data to land in the db destination as an integer instead of a string. I want the transformation to change any text to a 1, any blank value to a 1, and leave all the other numbers as-is.
So far I have tried the Derived Column functionality, for which I couldn't seem to get the right expression(s), and creating a temp table to run a SQL query over the data, which kept breaking my data flow.
There are three approaches you can follow.
(1) Using a derived column
Add a derived column with the following expression to check whether each value is numeric:
(DT_I4)[count] == (DT_I4)[count] ? [count] : 1
Then in the derived column editor, go to the error output configuration and set the error handling event to Ignore failure.
Now add another derived column to replace null values with 1:
REPLACENULL([count_derivedcolumn],1)
You can refer to the following article for a step-by-step guide:
Validate Numeric or Non-Numeric Data in SQL Server Integration Services without the Script Task
(2) Using a script component
If you know C# or Visual Basic .NET, you can add a Script Component to check whether the value is numeric and replace nulls and string values with 1.
(3) Update data in SQL
You can stage data in its initial form into the SQL database and use an update query to replace nulls and string values with 1 as follows:
UPDATE [staging_table]
SET [count] = 1
WHERE [count] IS NULL OR ISNUMERIC([count]) = 0;
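One caveat: ISNUMERIC accepts strings such as '$' or '1e4' that still fail to cast to int. On SQL Server 2012 and later, a hedged variant using TRY_CAST is stricter:

UPDATE [staging_table]
SET [count] = 1
WHERE [count] IS NULL
   OR TRY_CAST([count] AS int) IS NULL;   -- NULL here means the value is not a valid int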

SSIS skip bad row

I have a file that is expected to contain 8 characters per line, which I want to load into a table in SQL Server:
ABCD1234
ABCD5678
!
DCBA4321
DCBA9876
>
ABCDEFGH
However, I may get bad rows. With SSIS I tried all three methods:
Delimited with {CR}{LF}, Fixed Width, and finally Ragged Right.
In all cases parsing fails and rows are redirected to the error table. When I remove the bad lines, everything is fine.
What is strange is that with a small sample like this, it still works and the rows are inserted into the expected table.
When the file is big, parsing may fail not at the first bad row but at the second or third, and all the remaining rows are inserted into the error table.
Isn't it supposed to skip the bad row and insert the good ones into the expected table, even when they come after it?
Or is there another solution?
Try adding a Conditional Split component with the following expression in order to ignore bad rows:
LEN([InputColumn]) == 8
I think this will work as expected.
SSIS Basics: Using the Conditional Split
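If staging the raw lines into a single-column table is an option, the same length filter can also be applied in T-SQL afterwards. A minimal sketch, with hypothetical table and column names:

-- Keep only the well-formed 8-character lines; the junk rows are simply skipped.
INSERT INTO dbo.target_table (code)
SELECT raw_line
FROM dbo.staging_raw
WHERE LEN(raw_line) = 8;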

Processing Interactive Grid manually through PL/SQL and keeps throwing out an error

I used this site https://community.oracle.com/thread/3937159?start=0&tstart=0 to learn how to manually process interactive grids. I got it to work on a small table with 3 columns, but when I tried it on a bigger table, it keeps throwing this error:
PL/SQL: numeric or value error: character string buffer too small
I tried updating only one column and converting the datatype to the correct one, but the error is not going away.
This message usually means you're trying to store a value like 'AAAA' into a column or variable that only accepts 1, 2, or 3 characters, such as VARCHAR2(3).
Make sure your columns have a proper limit size for the data you're processing.
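For illustration, a minimal PL/SQL sketch that reproduces the error (the variable name and size are hypothetical):

DECLARE
  v_name VARCHAR2(3);   -- buffer holds at most 3 characters
BEGIN
  v_name := 'AAAA';     -- 4 characters: raises ORA-06502, character string buffer too small
END;
/

Sizing each variable and column to the longest value the grid can actually send makes the error go away.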

Bulk import only works with 1 column csv file

Whenever I try to import a CSV file with more than one column into SQL Server, I get an error (well, nothing is imported). I know the file is terminated fine, because the import works with one column if I modify the file and table accordingly. I am limiting the rows so it never gets to the end of the file, and the line terminator is the correct, valid one (also shown by the one-column case working).
All I get is this and no errors
0 rows affected
I've also checked the various other questions like this, and they all point to a bad end-of-file or line terminator, but all is well here.
I have tried quotes and no quotes. For example, I have a table with 2 columns of varchar(max).
I run:
BULK INSERT mytable
FROM 'file.csv'
WITH (FIRSTROW = 2, LASTROW = 4, ROWTERMINATOR = '\n');
My sample file is:
name,status
TEST00040697,OK
TEST00042142,OK
TEST00042782,OK
TEST00043431,BT
If I drop a column from the table and delete the second column in the CSV (ensuring it keeps the same line terminator \n), it works just fine.
I have also tried specifying the 'errorfile' parameter but it never seems to write anything or even create the file.
Well, that was embarrassing.
SQL Server, in its wisdom, uses \t as the default field terminator even for a CSV file; apparently, when the documentation shows FORMAT = 'CSV', that is an option you have to specify rather than the default.
If only it produced proper, useful error messages...
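For the record, here is a sketch of the corrected command for the sample file above. Note the assumptions: FORMAT = 'CSV' requires SQL Server 2017 or later; on older versions, setting FIELDTERMINATOR = ',' by itself is the fix:

BULK INSERT mytable
FROM 'file.csv'
WITH (
    FORMAT = 'CSV',          -- SQL Server 2017+; also handles quoted fields
    FIRSTROW = 2,
    LASTROW = 4,
    FIELDTERMINATOR = ',',   -- the default is \t, which never matched
    ROWTERMINATOR = '\n'
);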

Loading 532 columns from a CSV file into a DB2 table

Summary: Is there a limit to the number of columns that can be imported/loaded from a CSV file? If yes, what is the workaround? Thanks.
I am very new to DB2, and I am supposed to import a | (pipe) delimited CSV file containing 532 columns into a DB2 table that also has 532 columns in exactly the same positions as the CSV. I also have a smaller file with only 27 columns in both the CSV and the table. I am using the following command:
IMPORT FROM "C:\myfile.csv" OF DEL MODIFIED BY COLDEL| METHOD P (1, 2,....27) MESSAGES "C:\messages.txt" INSERT INTO PRE_SUBS_GPRS2_1010 (col1,col2,....col27);
This works fine.
But for the second file, which is loaded like this:
IMPORT FROM "C:\myfile.csv" OF DEL MODIFIED BY COLDEL| METHOD P (1, 2,....532) MESSAGES "C:\messages.txt" INSERT INTO PRE_SUBS_GPRS_1010 (col1,col2,....col532);
It does not work. It gives me an error that says:
SQL3037N An SQL error "-206" occurred during Import processing.
Explanation:
An SQL error occurred during processing of the Action String (for
example, "REPLACE into ...") parameter.
The command cannot be processed.
User Response:
Look at the SQLCODE (message number) in the message for more
information. Make changes and resubmit the command.
I am using the Control Center to run the query, not command prompt.
The problem was that one of the column names in the column list of the INSERT statement was more than 30 characters long. It was getting truncated and was not recognized.
Hope this helps others in the future. Please let me know if you need further details.
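If you run into the same thing, a hedged sketch of a catalog query to spot the long column names (assuming the table name from the question):

SELECT colname, LENGTH(colname) AS name_length
FROM syscat.columns
WHERE tabname = 'PRE_SUBS_GPRS_1010'
  AND LENGTH(colname) > 30      -- candidates for truncation by the tool
ORDER BY name_length DESC;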
The specific error code is SQL0206, and the documentation about this error is here:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.messages.sql.doc/doc/msql00206n.html
As for limits, I think the maximum number of columns in an import is the maximum permitted for a table. Take a look in the Information Center under Database fundamentals > SQL > SQL and XML limits:
Maximum number of columns in a table: 1012
Try to import just one row. If you still have problems, it is probably due to incompatible types, wrong column order, or rows that duplicate ones already present in the table.
