SQL Server 2008 - Having trouble understanding decimal - sql-server

I need to insert numbers with decimals into a SQL Server 2008 database. It seems like decimal() is the correct data type to use, however, I'm having trouble understanding it exactly.
I found this script (scroll down for decimal):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=95322
which allows me to test different decimal settings against numbers, and there are some cases where I don't understand why they pass or fail. The way I understand it, when using decimal(precision, scale), precision is the number of digits to the left of the decimal point and scale is the number of digits to the right. Using this function, I don't understand why some pass and some fail.
SELECT dbo.udfIsValidDECIMAL('2511.1', 6, 3)
I have 4 digits on the left and 1 on the right, yet this fails.
SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 17)
SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 16)
The first one fails, the second one passes. There are 18 digits after the decimal point, so it seems like both should fail (or pass and SQL truncates the number).
Maybe I have a fundamental misunderstanding in how decimal() is supposed to work?

DECIMAL(6,3) means: 6 digits in all, 3 of which to the right of the decimal point.
So you have 3 digits before, 3 digits after the decimal point, and of course it cannot handle 2511.1 - that's got four digits to the left of the decimal point. You'd need DECIMAL(7,3) to handle that.
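A quick way to check this is a minimal sketch like the following (TRY_CAST requires SQL Server 2012 or later; on 2008 the failing conversion raises an arithmetic overflow error instead of returning NULL):
SELECT
    TRY_CAST('2511.1' AS DECIMAL(6,3)) AS Dec6_3,   -- NULL: only 3 digits allowed to the left of the point
    TRY_CAST('2511.1' AS DECIMAL(7,3)) AS Dec7_3;   -- 2511.100: 4 digits fit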
See the MSDN documentation on DECIMAL:
decimal[ (p[ ,s] )] and numeric[ (p[ ,s] )]
p (precision)
The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.
s (scale)
The maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.

Precision is the total number of digits that can be stored.
So the number of digits to the left of the decimal point is precision - scale.
For example, your first example fails because you are only allowing three places to the left of the decimal point:
SELECT dbo.udfIsValidDECIMAL('2511.1',6,3)

cast(10.123456789123456789 as decimal(18,17))
A precision of 18 and a scale of 17 leave just 1 digit to the left of the decimal point, but there are 2 in that example.
cast(10.123456789123456789 as decimal(18,16))
has room for 2 digits, so it succeeds.
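To see both cases side by side, here is a minimal sketch (it assumes SQL Server 2012+ for TRY_CAST; on 2008 the failing conversion raises an arithmetic overflow error instead of returning NULL):
SELECT
    TRY_CAST(10.123456789123456789 AS DECIMAL(18,17)) AS Scale17,  -- NULL: no room for the 2 digits before the point
    TRY_CAST(10.123456789123456789 AS DECIMAL(18,16)) AS Scale16;  -- 10.1234567891234568 (rounded to 16 places)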

Related

Error converting data type varchar to numeric

I have this part of a query that causes the above error:
CONVERT(varchar(15), CAST(AmountOfInsurance AS MONEY), 1)
What am I doing wrong?
This is the declaration of AmountOfInsurance:
AmountOfInsurance decimal(19,2),
I hope this will work for you...
CONVERT(varchar(15), CONVERT(money, AmountOfInsurance), 1)
More information: the last parameter decides what the output format looks like:
0 (default) No commas every three digits to the left of the decimal point, and two digits to the right of the decimal point; for example, 4235.98.
1 Commas every three digits to the left of the decimal point, and two digits to the right of the decimal point; for example, 3,510.92.
2 No commas every three digits to the left of the decimal point, and four digits to the right of the decimal point; for example, 4235.9819.
If you want to drop the pennies and count in whole pounds, you can use ROUND to the nearest pound, FLOOR for the lowest whole pound, or CEILING to round up to the next pound.
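Putting it together, here is a small sketch of the three format styles and the whole-pound variants (the sample value is made up; AmountOfInsurance is taken from the declaration in the question):
-- Hypothetical sample value; the column is declared decimal(19,2) in the question.
DECLARE @AmountOfInsurance decimal(19,2) = 1234567.89;
SELECT
    CONVERT(varchar(15), CONVERT(money, @AmountOfInsurance), 0) AS Style0,  -- 1234567.89
    CONVERT(varchar(15), CONVERT(money, @AmountOfInsurance), 1) AS Style1,  -- 1,234,567.89
    CONVERT(varchar(15), CONVERT(money, @AmountOfInsurance), 2) AS Style2,  -- 1234567.8900
    ROUND(@AmountOfInsurance, 0) AS NearestPound,   -- nearest whole pound
    FLOOR(@AmountOfInsurance)    AS RoundedDown,    -- lowest whole pound
    CEILING(@AmountOfInsurance)  AS RoundedUp;      -- next whole pound up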

Why does a FLOAT give me a more accurate result than a DECIMAL?

I am looking for a division result that is extremely accurate.
This SQL returns the following results:
SELECT (CAST(297282.26 AS DECIMAL(38, 30)) / CAST(495470.44 AS DECIMAL(38, 30))) AS ResultDecimal
SELECT (CAST(297282.26 AS FLOAT) / CAST(495470.44 AS FLOAT)) AS ResultFloat
Here is the accurate result from WolframAlpha:
http://www.wolframalpha.com/input/?i=297282.26%2F495470.44
I was under the impression that DECIMAL would be more accurate than FLOAT:
"Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types."
https://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx
Why does the FLOAT calculation give me a result more accurate than when using DECIMAL?
I found that the best precision is achieved when you use:
SELECT (CAST(297282.26 AS DECIMAL(15, 9)) / CAST(495470.44 AS DECIMAL(24, 2))) AS ResultDecimal
This gives a result of
0.599999991926864496699338915153
I think the actual value (to 100 digits) is:
0.5999999919268644966993389151530412187657451370862810705720405842980259326873264124495499670979362562...
Please bear in mind SQL Server defines the maximum precision and scale for division as:
max precision = (p1 - s1 + s2) + MAX(6, s1 + p2 + 1) -- up to 38
max scale = MAX(6, s1 + p2 + 1)
Where p1 & p2 are the precision of the two numbers and s1 & s2 are the scale of the numbers.
In this case the maximum precision is (15-9+2) + MAX(6, 9+24+1) = 8 + 34 = 42.
However SQL Server only allows a maximum precision of 38.
The maximum scale = MAX(6, 9+24+1) = 34
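If you want to see which precision and scale the engine actually settles on for a given expression, one option is to ask for the result-set metadata (a sketch; sys.dm_exec_describe_first_result_set requires SQL Server 2012 or later):
SELECT name, system_type_name, precision, scale
FROM sys.dm_exec_describe_first_result_set(
    N'SELECT CAST(297282.26 AS DECIMAL(15,9)) / CAST(495470.44 AS DECIMAL(24,2)) AS ResultDecimal',
    NULL, 0);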
Hopefully you already understand that just because the FLOAT version presents more numbers after the decimal point, doesn't necessarily mean that those are the true numbers. This is about precision, not accuracy.
It is the CAST function itself that causes this loss of precision, not the difference between the FLOAT and DECIMAL data types.
To demonstrate this, compare your previous results to the result of this:
SELECT 297282.26 / 495470.44 AS ResultNoCast
In my version of the query, the presence of a decimal point in the literal numbers tells SQL Server to treat the values as the DECIMAL data type, with precision and scale determined by the server. The result is more precise than when you CAST explicitly to DECIMAL.
A clue to the reason for this can be found hidden in the official documentation of the CAST function, under Truncating and Rounding Results:
When you convert data types that differ in decimal places, sometimes the result value is truncated and at other times it is rounded. The following table shows the behavior.
From | To | Behavior
numeric | numeric | Round
So the fact that each separate literal value is treated as a NUMERIC (the same thing as DECIMAL) on the way in, and is then cast to NUMERIC, causes rounding.
Anticipating your next question a little, if you want a more precise result from the NUMERIC/DECIMAL datatype, you just need to tell SQL Server that each component of the calculation is more precise:
SELECT 297282.26000000 / 495470.44000000 AS ResultSuperPrecise
This appears (from experimentation) to be the most precise I can get: either adding or removing a 0 from either the numerator or denominator makes the result less precise. I'm at a loss to explain why that is, because the result is only 23 digits to the right of the decimal point.
It doesn't give you a more accurate result. I say that because float is an approximate type, and not every value can be stored exactly in a float. On the other side of that coin, though, float offers the possibility of a lot more precision. The maximum precision of a decimal/numeric is 38. https://msdn.microsoft.com/en-us/library/ms187746.aspx
For float, though, the maximum precision is 53. https://msdn.microsoft.com/en-us/library/ms173773.aspx
Okay, here is what I think is going on.
@philosophicles - I think you are right that the CAST is causing the problem, but not because I am trying to "convert data types that differ in decimal places".
When I execute the following statement
SELECT CAST((297282.26 / 495470.44) AS DECIMAL(38, 30)) AS ResultDecimal
The accurate result for the calculation is the long value quoted above, which has far more than 30 digits after the decimal point, while my data type has its scale set to 30. So the CAST rounds the value and then pads it with zeros until there are 30 digits after the decimal point.
So the interesting thing is: how does the CAST determine how many decimals to round or truncate the output to? I am not sure, but as @philosophicles pointed out, the scale of the input affects the rounding applied to the output.
SELECT CAST(((297282.26/10000) / (495470.44/10000)) AS DECIMAL(38, 30)) AS ResultDecimal
Thoughts?
Also interesting:
However, in simple terms, precision is lost when the input scales are
high because the result scales need to be dropped to 38 with a
matching precision drop.
https://dba.stackexchange.com/questions/41743/automatic-decimal-rounding-issue
The precision and scale of the numeric data types besides decimal are fixed.
https://dba.stackexchange.com/questions/41743/automatic-decimal-rounding-issue

What is the max value of numeric(19, 0) and what happens if it is reached?

In an SQL Server table, I have a column with the following type:
numeric(19, 0)
What is the max value for this type?
This corresponds to a Long in Java, which has a maximum value of 9,223,372,036,854,775,807 (19 digits).
Does the above type in SQL Server have the same max value?
What happens in SQL Server if the above value is reached?
The MSDN documentation rather unhelpfully says:
When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1... Maximum storage sizes vary, based on the precision.
(max precision is numeric(38)).
From experimenting in SQL Server 2012, the maximum value for numeric(19,0) is 10e18 - 1025. Attempting to insert any higher value gives:
Msg 8115, Level 16, State 6, Line xx Arithmetic overflow error converting float to data type numeric.
The maximum does vary by precision and appears to be a bit under 10^p (i.e. a bit under 10e18 for p = 19), though the exact relationship is not obvious. No doubt a more exact answer could be given by delving into the bit-level details of the storage method.
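To check the documented limit with exact numeric literals rather than float ones, a minimal sketch (the literals below are typed as numeric, so no float conversion is involved):
DECLARE @n numeric(19,0);
SET @n = 9999999999999999999;      -- 10^19 - 1, the documented maximum: succeeds
SELECT @n AS MaxValue;
SET @n = 10000000000000000000;     -- 10^19: fails with "Arithmetic overflow error converting numeric to data type numeric."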
numeric(19,0) specifies a fixed precision number having a precision of 19 digits and 0 digits to the right of the decimal point. It should not overflow. See decimal and numeric for more information.
First, decimal is synonymous with numeric.
decimal(2, 1) means that the total number of digits before and after the decimal point can be at most 2, and the number of digits after the decimal point can be at most 1. So the range is from -9.9 to 9.9.
The range for numeric(19, 0) is thus
-9999999999999999999 to
9999999999999999999
But from my experiments, sometimes
9999999999999999999.2
is also allowed; I don't know why.
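One likely explanation is the numeric-to-numeric rounding described earlier: casting to scale 0 rounds the fraction away first, so the .2 is dropped and the result still fits. A small sketch:
SELECT
    CAST(9.94 AS decimal(2,1)) AS RoundsDown,                       -- 9.9
    CAST(9999999999999999999.2 AS numeric(19,0)) AS RoundsToMax;    -- 9999999999999999999
-- CAST(9.99 AS decimal(2,1)) would round to 10.0 and overflow.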

SQL Server decimal scale length - can be or has to be?

I have a really simple question about the DECIMAL (and maybe NUMERIC) type in SQL Server 2008 R2.
MSDN said:
(scale)
The maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p.
I understand it the following way:
if I have DECIMAL(10, 5) - I am able to store 12345.12345 or 12345678.91.
if I have DECIMAL(5, 5) - I can have 12345 or 1234.5 or 1.2345, etc...
Is it clear?
But I got this error message:
SELECT CAST(2.8514 AS DECIMAL(5,5))
Arithmetic overflow error converting numeric to data type numeric.
I thought 5,5 means I can have up to 5 digits and up to 5 CAN BE right of the decimal point.
As I tried:
SELECT CAST(12.851 AS DECIMAL(6,5)) - overflows too
however
SELECT CAST(1.23456 AS DECIMAL(6,5)) - is OK.
So what's the truth?
DECIMAL(a,b) says that I can have up to a digits and JUST b of them are to the right of the decimal point (and the rest, a-b, to the left of the decimal point)?
I'm really confused by the statement in the docs, which is copied everywhere. Please take a moment and explain this simple thing to me.
Lots of thanks!
The easiest way to think of it (for me) is that precision is the total number of digits, of which scale is the number of digits to the right of the decimal point. So DECIMAL(p,s) means p-s digits to the left of the point, and s digits to the right of the point.
That explains all the conversion errors you're seeing: the 2.8514 cannot be decimal(5,5) because p-s = 0; 12.851 cannot be decimal(6,5) because p-s = 1 and so on.
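Reusing the three casts from the question, a minimal sketch (TRY_CAST needs SQL Server 2012 or later; on 2008 R2 the failing ones simply raise the overflow error):
SELECT
    TRY_CAST(2.8514  AS DECIMAL(5,5)) AS NoIntegerDigits,   -- NULL: p - s = 0
    TRY_CAST(12.851  AS DECIMAL(6,5)) AS OneIntegerDigit,   -- NULL: p - s = 1, but 12 needs two digits
    TRY_CAST(1.23456 AS DECIMAL(6,5)) AS Fits;              -- 1.23456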

Why can't I enter an integer value into a decimal field?

I'm trying to write an insert statement for a SQL Server table that inserts the value 1 into a decimal field. The field is of the type decimal(10, 10) which, as far as I understand, means that it can have up to 10 digits altogether, and up to 10 of those digits can be after the decimal point. But, when I try to run the insert statement I get the following error:
Arithmetic overflow error converting int to data type numeric.
If I change the data type of the field to decimal(11, 10), it suddenly works. What am I not understanding here? What am I doing wrong?
decimal(10, 10) means all decimal places, no digits to the left of the decimal point!
See here: http://msdn.microsoft.com/en-us/library/aa258832(SQL.80).aspx
decimal[(p[, s])]
p (precision) Specifies the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision. The maximum precision is 38. The default precision is 18.
s (scale) Specifies the maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
decimal(11,10) gives you 1 digit to the left of the decimal point and 10 to the right, so the integer 1 fits now!
EDIT
When using decimal(p,s), think of p as the total number of digits you want to store (regardless of which side of the decimal point they fall on), and s as how many of those p digits should be to the right of the decimal point.
DECIMAL(10,5)= 12345.12345
DECIMAL(10,2)= 12345678.12
DECIMAL(10,10)= .1234567891
DECIMAL(11,10)= 1.1234567891
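A minimal sketch of the original problem (the table and column names below are made up for illustration):
-- Hypothetical temp table just to illustrate; the names are not from the question.
CREATE TABLE #DecimalDemo (
    Fraction decimal(10,10),   -- no digits allowed to the left of the point
    Whole    decimal(11,10)    -- one digit allowed to the left of the point
);
INSERT INTO #DecimalDemo (Fraction, Whole) VALUES (0.5, 1);   -- works
-- INSERT INTO #DecimalDemo (Fraction, Whole) VALUES (1, 1);  -- would fail: Arithmetic overflow error converting int to data type numeric.
DROP TABLE #DecimalDemo;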
