String or binary data would be truncated when trying to insert into a float field - sql-server

I'm working on SQL Server 2008.
I delete all the data from a table and then try to insert values into it. Here's the code:
TRUNCATE TABLE [dbo].[STRAT_tmp_StratMain]
INSERT INTO [dbo].[STRAT_tmp_StratMain] ([FileNum])
SELECT [dbo].[STRAT_tmp_Customer].[NumericFileNumber]
FROM [dbo].[STRAT_tmp_Customer];
The FileNum column in STRAT_tmp_StratMain is a float, is indexed, and can't be null.
NumericFileNumber is a float and is nullable, but it is never actually null and contains no duplicates (each row is a unique number).
The table STRAT_tmp_StratMain contains many more columns, but all of them are nullable and have default values.
When I try to run this query I get the error:
Msg 8152, Level 16, State 4, Line 1 String or binary data would be
truncated. The statement has been terminated.
I also tried simply:
INSERT INTO [dbo].[STRAT_tmp_StratMain] ([FileNum]) Values (1);
Still get the same error.
Any ideas?
Thanks,
Ilan

I am not able to reproduce your issue. When I run this code on SQL Server 2008, I get no error:
DECLARE @tt TABLE (FileNum float NOT NULL);
INSERT INTO @tt (FileNum) VALUES (1);
Check the DEFAULT constraints on all the columns in your target table and make sure none of them would try to insert a string value that would be truncated by the datatype limitations of the column.
example: SomeColumn varchar(1) DEFAULT 'Hello'

This happens because the data you are trying to insert does not fit in the field: the column has a defined length of (say) 10 or 50 characters, but the value you are trying to insert is longer than that.

Related

Microsoft SQL Server Bulk Insert NOT failing when inserting bigint values into int column

We recently had an identifier column move from int values to bigint values. An ETL process which loads these values was not updated. That process is using SQL bulk insert, and we are seeing incorrect values in the destination table. I would have expected a hard failure.
Please note that this process had been running successfully for a long time before this issue.
Can anyone tell me what the heck SQL Server is doing here!? I know how to fix the situation, but I'm trying to better understand it for the data cleanup effort that I'll need to complete, as well as just for the fact that it looks like black magic!
I've been able to re-create this issue in SQL Server 2017 and 2019.
Simplified Example CSV file contents:
310067463717
310067463718
310067463719
Example SQL:
create table #t (t int)
bulk insert #t from 'c:\temp\test.csv'
with (datafiletype = 'char',
fieldterminator = '|'
)
select * from #t
Resulting data:
829818405
829818406
829818407
Interestingly, I tried with smaller values, and I do see an error:
Example CSV file contents (2147483647 is the largest int value for SQL Server):
310067463717
310067463718
310067463719
2147483647
2147483648
Running the same SQL code as above, I get an error for one row:
Msg 4867, Level 16, State 1, Line 4
Bulk load data conversion error (overflow) for row 5, column 1 (t).
The resulting data looks like this:
829818405
829818406
829818407
2147483647
I also tried just now with a much higher value, 31006746371945654, and that threw the same overflow error as 2147483648.
And last, I did confirm that if I create the table with the column defined as bigint, the data inserted is correct.
create table #t (t bigint)
bulk insert #t from 'c:\temp\test.csv'
with (datafiletype = 'char',
fieldterminator = '|'
)
select * from #t
Resulting data:
2147483647
2147483648
310067463717
310067463718
310067463719
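The "incorrect values" in the first run are consistent with SQL Server keeping only the low 32 bits of each bigint during the bulk load conversion. A quick Python sketch (outside SQL Server, just checking the arithmetic from the data above) reproduces the stored values exactly:

```python
# bigint values from the CSV, and the values that landed in the int column
csv_values = [310067463717, 310067463718, 310067463719]
stored_values = [829818405, 829818406, 829818407]

# keeping only the low 32 bits of each bigint reproduces the stored values
assert [v & 0xFFFFFFFF for v in csv_values] == stored_values

# 2147483648 raised an overflow error instead of wrapping: its low 32 bits
# are 2147483648 itself, which is outside the signed int range
assert (2147483648 & 0xFFFFFFFF) == 2147483648
```

This is only one consistent reading of the data shown above, not documented behavior.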

Error converting nvarchar to numeric while inserting records

Good Day,
I am trying to insert records from a csv file into my database table. The problem is with inserting alphanumeric values.
My column's datatype is NUMERIC(19,0), and I expect numeric values to be inserted into it. For some specific reasons I am getting alphanumeric values in my csv file. For example:
I am getting value: GBS1182000945008.
My goal here is to remove those three characters and cast the remaining string as Numeric and get it inserted inside my table.
So far I have tried:
CAST((select substring(?,4,30)) AS NUMERIC)
But, I am still getting that annoying error, I cannot just ignore those values by using TRY_CONVERT as I do need those records in my database. What am I missing here?
Edit: I have tested this code separately and it works as expected; the problem only occurs when using it while inserting values. What I have done is check whether the given parameter is numeric: if it is, I just insert the parameter; if not, I convert the parameter to numeric first.
So here is my whole scenario:
IF ISNUMERIC(?) = 1
    -- Just insert the parameter as-is:
    INSERT INTO table (NUMERIC_FIELD) VALUES (?)
ELSE
    -- Strip the prefix, then cast the remainder:
    INSERT INTO table (NUMERIC_FIELD) VALUES (CAST(SUBSTRING(?, 4, 30) AS NUMERIC))
Here ? represents the value from CSV.
Try AS NUMERIC(19,0) instead of AS NUMERIC.
Also, please note the extracted substring can be up to 30 digits long, which will not fit the 19 digits of the column's datatype.
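The intended transformation itself is sound; a small Python sketch (using the example value from the question) shows the prefix strip and why the substring length matters:

```python
# example value from the question's CSV
raw = "GBS1182000945008"

# strip the three-letter prefix and convert the remainder to a number
numeric_part = int(raw[3:])
assert numeric_part == 1182000945008

# SUBSTRING(?, 4, 30) can yield up to 30 characters, but the target
# column is NUMERIC(19,0), so only values of up to 19 digits will fit
assert len(raw[3:]) <= 19  # this particular value fits
```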

String or binary data would be truncated. The statement has been terminated. System.Data.SqlClient.SqlException (0x80131904)

String or binary data would be truncated. The statement has been terminated.
System.Data.SqlClient.SqlException (0x80131904): String or binary data would be truncated
This exception is thrown when the C# model tries to save a record whose string value is longer than the size defined for that column in the SQL Server table.
To fix the error, you only need to alter the column in the SQL Server table using a script.
Increasing the size of the column is enough; there is no need to redeploy the application to the PROD/TEST environment.
Please refer to the sample below.
CREATE TABLE MyTable(Num INT, Column1 VARCHAR(3))
INSERT INTO MyTable VALUES (1, 'test')
Look at Column1: its size is 3, but the given value has length 4, so you get the error.
To fix the error:
Pass a string value whose length is less than or equal to the column size, i.e., at most 3 characters, like below.
INSERT INTO MyTable VALUES (1, 'tes')
If you want to suppress this error, you can set ANSI_WARNINGS to OFF:
SET ANSI_WARNINGS OFF
With ANSI_WARNINGS OFF, the error is suppressed: whatever fits in the column is inserted and the rest is silently truncated.
INSERT INTO MyTable VALUES (1, 'test')
The string 'tes' would be stored in your table, and no error is returned.

How to query for rows containing <Unable to read data> in a column?

I have a SQL table in which some columns, when viewed in SQL Server Manager, contain <Unable to read data>. Does anyone know how to query for <Unable to read data>? I can individually modify the data in this column with update table set column = NULL where key = 'value', but how can I find whether additional rows exist with this bad data?
I would recommend against replacing the data. There is nothing wrong with it; it is just that SSMS cannot display it properly in the Edit panel. From your description, the data in the database itself is perfectly fine.
This script shows the problem:
create table test (id int not null identity(1,1) primary key,
large_value numeric(38,0));
go
insert into test (large_value) values (1);
insert into test (large_value) values (12345678901234567890123456789012345678);
insert into test (large_value) values (1234567890123456789012345678901234567);
insert into test (large_value) values (123456789012345678901234567890123456);
insert into test (large_value) values (12345678901234567890123456789012345);
insert into test (large_value) values (1234567890123456789012345678901234);
insert into test (large_value) values (123456789012345678901234567890123);
insert into test (large_value) values (12345678901234567890123456789012);
insert into test (large_value) values (1234567890123456789012345678901);
insert into test (large_value) values (123456789012345678901234567890);
insert into test (large_value) values (12345678901234567890123456789);
insert into test (large_value) values (NULL);
go
select * from test;
go
The SELECT will work fine, but the Edit Top 200 Rows view in Object Explorer will not:
There is a Connect Item for this issue. SSMS 2012 still exhibits the same problem.
If we look at the Numeric and Decimal details we'll see that the problem occurs at a weird boundary, at precision 29 which is actually not a SQL Server boundary (precision 28 is):
Precision    Storage bytes
1 - 9        5
10 - 19      9
20 - 28      13
29 - 38      17
If we check the .NET (SSMS is a managed application) decimal precision table, we can quickly see where the crux of the issue is: Precision is 28-29 significant digits. So the .NET decimal type cannot represent high-precision (29 digits and above) SQL Server numeric/decimal types.
This will affect not only the SSMS display, but your applications as well. Specialized applications like SSIS will use a high-precision representation like DT_NUMERIC:
DT_NUMERIC An exact numeric value with a fixed precision and scale.
This data type is a 16-byte unsigned integer with a separate sign, a
scale of 0 - 38, and a maximum precision of 38.
Now back to your problem: you can discover the problem entries by simply looking at the value. Knowing that the C# decimal range can accommodate values between approximately -7.9 × 10^28 and 7.9 × 10^28, divided by 10^0 through 10^28 depending on the scale, you can search each column for values outside that range (the actual bounds to search between will depend on the column scale). But that begs the question: what do you replace the data with?
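The boundary itself is easy to check: .NET's System.Decimal stores a 96-bit integer plus a sign and scale, so its largest magnitude is 2^96 - 1. A quick Python sketch:

```python
# largest magnitude a .NET System.Decimal can hold: a 96-bit unsigned integer
max_decimal = 2**96 - 1
assert max_decimal == 79228162514264337593543950335

# that is a 29-digit number (~7.9 x 10**28), which is why SQL Server
# numeric/decimal values of precision 29 and above can fall outside it
assert len(str(max_decimal)) == 29
```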
I would recommend instead using dedicated import/export tools that are capable of handling high-precision numeric values. SSIS is the obvious candidate, but even the modest bcp.exe would fit the bill.
BTW if your values are actually incorrect (ie. true corruption) then I would recommend running DBCC CHECKTABLE (...) WITH DATA_PURITY:
DATA_PURITY
Causes DBCC CHECKDB to check the database for column values that are not valid or out-of-range. For example, DBCC CHECKDB detects
columns with date and time values that are larger than or less than
the acceptable range for the datetime data type; or decimal or
approximate-numeric data type columns with scale or precision values
that are not valid.
For databases created in SQL Server 2005 and later, column-value integrity checks are enabled by default and do not require the
DATA_PURITY option. For databases upgraded from earlier versions of
SQL Server, column-value checks are not enabled by default until DBCC
CHECKDB WITH DATA_PURITY has been run error free on the database.
After this, DBCC CHECKDB checks column-value integrity by default.
Q: How can this issue arise for a datetime column?
use tempdb;
go
create table test(d datetime)
insert into test (d) values (getdate())
select %%physloc%%, * from test;
-- Row is on page 0x9100000001000000
dbcc traceon(3604,-1);
dbcc page(2,1,145,3);
Memory Dump #0x000000003FA1A060
0000000000000000: 10000c00 75f9ff00 6aa00000 010000 ....uùÿ.j .....
Slot 0 Column 1 Offset 0x4 Length 8 Length (physical) 8
dbcc writepage(2,1,145, 100, 8, 0xFFFFFFFFFFFFFFFF)
dbcc checktable('test') with data_purity;
Msg 2570, Level 16, State 3, Line 2 Page (1:145), slot 0 in object ID
837578022, index ID 0, partition ID 2882303763115671552, alloc unit ID
2882303763120062464 (type "In-row data"). Column "d" value is out of
range for data type "datetime". Update column to a legal value.
As suggested above, these errors usually occur when precision and scale are not preserved. If you are comfortable with SSIS, you can use it to find the corrupt rows. Taking the values which Martin Smith created:
CREATE TABLE T(ID int, C DECIMAL(38,0));
INSERT INTO T VALUES(1,9999999999999999999999999999999999999)
The above table reproduces the error. Here the first column represents the primary key. I inserted around 1000 rows, of which a few had corrupt values. Below is the SSIS package design.
In the Data Conversion step, I took column C, which had the errors, and tried to cast it to DECIMAL(38,0). Since a conversion or truncation error will occur, I redirected the error rows to an OLE DB Command which updates the table and sets the column to NULL:
Update T
Set C=NULL
where ID=?
The values of C and ID are directed to the OLE DB Command. If there is no error, I simply insert into a table (actually there is no need to do this). This approach works if your table has a primary key column.
If the error is in a datetime column, a SQL query can be written to verify the format of the datetime values. Please go through the MSDN link for valid datetime values:
Select * from YourTable where ISDATE(Col)!=1
I think you can fetch the data with a cursor. Please try a cursor query such as the one below:
DECLARE VerifyCursor CURSOR FOR
SELECT *
FROM MyTable;
OPEN VerifyCursor;
WHILE 1=1
BEGIN
    BEGIN TRY
        FETCH NEXT FROM VerifyCursor INTO @Column1, @Column2, ...
        INSERT INTO #MyTable2 (Column1, Column2, ...)
        VALUES (@Column1, @Column2, ...)
    END TRY
    BEGIN CATCH
        -- ignore rows that cannot be fetched or inserted
    END CATCH
    IF (@@FETCH_STATUS <> 0) BREAK;
END
CLOSE VerifyCursor;
DEALLOCATE VerifyCursor;
Replacing the bad data is simple with an update:
UPDATE table SET column = NULL WHERE key_column = 'Some value'

SQL Server Error : String or binary data would be truncated

My table :
log_id bigint
old_value xml
new_value xml
module varchar(50)
reference_id bigint
[transaction] varchar(100)
transaction_status varchar(10)
stack_trace ntext
modified_on datetime
modified_by bigint
Insert Query :
INSERT INTO [dbo].[audit_log]
([old_value],[new_value],[module],[reference_id],[transaction]
,[transaction_status],[stack_trace],[modified_on],[modified_by])
VALUES
('asdf','asdf','Subscriber',4,'_transaction',
'_transaction_status','_stack_trace',getdate(),555)
Error :
Msg 8152, Level 16, State 14, Line 1
String or binary data would be truncated.
The statement has been terminated.
Why is that ???
You're trying to write more data than a specific column can store. Check the sizes of the data you're trying to insert against the sizes of each of the fields.
In this case transaction_status is a varchar(10) and you're trying to store 19 characters to it.
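The mismatch is easy to verify outside SQL Server; a quick Python sketch of the length check against the column definitions from the question:

```python
# '_transaction_status' goes into transaction_status varchar(10)
assert len('_transaction_status') == 19  # 19 > 10: truncation error

# '_transaction' goes into [transaction] varchar(100), which fits
assert len('_transaction') == 12
```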
This type of error generally occurs when you try to store more characters than the column definition in the database table allows, as in this case:
you specified
transaction_status varchar(10)
but you are actually trying to store
_transaction_status
which contains 19 characters.
That's why you get this type of error in this code.
This error is usually encountered when inserting a record in a table where one of the columns is a VARCHAR or CHAR data type and the length of the value being inserted is longer than the length of the column.
I am not satisfied with how Microsoft decided to report this with such a "dry" error message, without any pointer to where to look for the problem.
