How to make the power() function use decimal instead of integer? - sql-server

The following query throws an error:
SELECT POWER( 10, 13)
Error message:
Msg 232, Level 16, State 3, Line 1
Arithmetic overflow error for type int, value = 10000000000.000000.
What can I do to prevent the int overflow?

Add a decimal place:
SELECT POWER( 10.0, 11 - LEN(9)) - 1
That causes an implicit cast from int to DECIMAL(18,6). You could write it for identical results as:
SELECT POWER(CAST(10 as DECIMAL(18,6)), 11 - LEN(9)) - 1
The POWER function return type is documented as:
Returns the same type as submitted in float_expression. For example,
if a decimal(2,0) is submitted as float_expression, the result
returned is decimal(2,0)
This means that your result is a DECIMAL(18,6), which in your case is big enough to hold the final result.

Use the following:
SELECT POWER( 10.0, 11 - LEN(9)) - 1
Explanation
Arithmetic overflow occurs when a calculation produces a result outside the minimum and maximum values that a datatype can store. In your case, POWER( 10, 11 - LEN(9)) produces 10000000000, and the expression is implicitly typed as int because both arguments are ints. This value is greater than the maximum an int can hold, i.e. 2147483647, hence the overflow. If you instead use POWER( 10.0, 11 - LEN(9)), the calculation produces a decimal, since decimal has higher precedence than int in SQL Server. Decimal has a range of -10^38 + 1 through 10^38 - 1, and so does not overflow for your calculation.
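As an illustration outside SQL (SQL Server's int is a signed 32-bit integer), a minimal Python sketch of why the all-int expression overflows while a decimal does not:

```python
from decimal import Decimal

INT32_MAX = 2**31 - 1   # SQL Server's int maximum, 2147483647

result = 10 ** 10       # what POWER(10, 11 - LEN(9)) computes
print(result > INT32_MAX)   # True: the value cannot fit in a 32-bit int
print(Decimal(10) ** 10)    # 10000000000: no problem as a decimal
```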

You could use CONVERT:
SELECT CONVERT(DECIMAL(18,2), 10)
This specifies 2 decimal places, but you can change the numbers in parentheses, e.g. (18,5), to adjust the precision and scale as desired.
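A rough Python analogue of what CONVERT(DECIMAL(18,2), ...) does to the scale, using the standard decimal module:

```python
from decimal import Decimal

# quantize fixes the scale (digits after the decimal point) at 2,
# much like converting to DECIMAL(p,2) in SQL Server
print(Decimal(10).quantize(Decimal('0.01')))   # 10.00
```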

Related

Possible to detect when arithmetic overflow would occur before POWER(x,y) is executed?

Example:
-- inputs
declare @x decimal(28,10) = 10001.0
declare @y decimal(18,6) = 7.0
-- later on, inside a udf
select POWER(@x, @y)
Result:
Msg 8115, Level 16, State 6, Line 13
Arithmetic overflow error converting float to data type numeric.
I understand why the overflow is occurring. My question is, is it possible to detect,
just before POWER is executed, whether the overflow would occur? Note that the code is run inside a UDF, so cannot use TRY...CATCH. If I can detect it in advance, I can take avoiding action (e.g. return NULL for the result, which is suitable for my requirements).
You could use TRY...CATCH outside the UDF, or you could use a formula to predict the number of output digits and return NULL instead. The formula to predict the number of digits is from here:
Predict Number of Digits of Power Function
Declare @Num DECIMAL(28,10) = 10001
       ,@Exponent DECIMAL(28,10) = 7
       ,@NumOfDigits INT
/*Predict number of digits from the power function*/
SELECT @NumOfDigits = FLOOR(1 + @Exponent * CAST(LOG(@Num, 10) AS DECIMAL(38,10)))
SELECT
  CASE WHEN @NumOfDigits <= 38 /*Max decimal precision, the return type from POWER according to https://learn.microsoft.com/en-us/sql/t-sql/functions/power-transact-sql?view=sql-server-ver15*/
                             - 10 /*Scale of @Num. Need to leave enough digits to record the decimal places*/
       THEN POWER(@Num, @Exponent) /*If within precision, return the value*/
       ELSE NULL /*If outside precision, return NULL. Could be changed to return something else*/
  END
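The same digit-count prediction can be sketched outside SQL; here is a minimal Python version of the FLOOR(1 + exponent * LOG10(num)) formula (the function name is mine):

```python
import math

def power_digits(base: float, exponent: float) -> int:
    """Predict the number of digits to the left of the decimal point
    in base ** exponent, without computing the power itself."""
    return math.floor(1 + exponent * math.log10(base))

# 10001 ** 7 has 29 digits; reserving 10 digits of scale,
# 29 + 10 = 39 > 38, so DECIMAL(38,10) would overflow.
print(power_digits(10001, 7))   # 29
```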

Microsoft SQL Server : numeric constants precision

I'm puzzled by the lack of precision in this result:
select convert(numeric(11, 10), 1 / 3.0)
0.3333330000
Why only six decimal places when I asked for ten? Here I wrote 3.0 instead of 3 to force floating point division instead of integer division (the value of 1/3 is zero).
But what type is that constant 3.0? It doesn't seem to be any of the native SQL Server floating point types, which all give a different result:
select convert(numeric(11, 10), 1 / convert(real, 3))
0.3333333433
select convert(numeric(11, 10), 1 / convert(double precision, 3))
0.3333333333
select convert(numeric(11, 10), 1 / convert(float, 3))
0.3333333333
Until now I have tried to write 3.0 to get a floating point constant, as would happen in programming languages like C. But that isn't the interpretation in SQL. What's happening?
The answer is that 3.0 does not give a constant of floating point type, but one of numeric type, specifically numeric(2,1). You can see this using:
select sql_variant_property(3.0, 'basetype') as b,
sql_variant_property(3.0, 'precision') as p,
sql_variant_property(3.0, 'scale') as s
b: numeric
p: 2
s: 1
If you specify this type explicitly you get the same odd limited-precision result:
select convert(numeric(11, 10), 1 / convert(numeric(2, 1), 3))
0.3333330000
It's still odd -- how did we end up with six decimal places, when the original numeric constant had only one and the output type wants ten? By the rules of numeric type promotion we can see that the division expression gets type numeric(8, 6):
select sql_variant_property(1 / convert(numeric(2, 1), 3), 'basetype') as b,
sql_variant_property(1 / convert(numeric(2, 1), 3), 'precision') as p,
sql_variant_property(1 / convert(numeric(2, 1), 3), 'scale') as s
b: numeric
p: 8
s: 6
Why exactly the reciprocal of something with one d.p. should get six d.p., rather than five or seven, is not clear to me, but I guess SQL has to pick something.
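A hedged aside: Microsoft's "Precision, scale, and length (Transact-SQL)" page documents the result type of e1 / e2 as scale = max(6, s1 + p2 + 1) and precision = p1 - s1 + s2 + scale (before any capping at 38), which appears to explain the six decimal places. A small Python sketch of that rule:

```python
def tsql_division_type(p1: int, s1: int, p2: int, s2: int) -> tuple:
    """Result precision and scale for T-SQL e1 / e2, per Microsoft's
    documented rules (ignoring the cap at precision 38)."""
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    return precision, scale

# 1 is numeric(1,0); the constant 3.0 is numeric(2,1)
print(tsql_division_type(1, 0, 2, 1))   # (8, 6), matching numeric(8,6)
```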
The moral of the story is not to write numeric constants like 1.0 in SQL expecting to get floating point semantics. You will get numeric semantics, but probably with too few decimal places to be useful. Write convert(float, 1) or 1e instead if you want floating point, or else use a numeric type and check that each step of the calculation has the number of decimal places you want.
Thanks to colleagues who pointed me to https://bertwagner.com/posts/data-type-precedence-and-implicit-conversions/ which explains some of this.

Result when subtrahend calls round function

I'm dividing two floats, multiplying the result by 100, and then subtracting that from 100 to return a percentage.
My question is: why is the final result an unrounded float, when the right-hand side of the subtraction returns a float rounded to 2 digits?
Here is one sequence:
/* 1 */
-- Returns 0.956521739130435, which is correct.
select cast(198 as float)/(cast(198 as float) + cast(9 as float)) -- correct
/* 2 */
-- Returns 95.6521739130435, which is correct.
select 100*(cast(198 as float)/(cast(198 as float) + cast(9 as float))) --correct
/* 3 */
-- It's the same as previous one, but with a ROUND
-- Returns 95.65, which is correct.
select round(100*(cast(198 as float)/(cast(198 as float) + cast(9 as float))),2)
/* 4 */
-- Returns 4.34999999999999, should be 100-95.65, but it's not. ROUND is ignored. Why?
select 100-round(100*(cast(198 as float)/(cast(198 as float) + cast(9 as float))),2)
|-------------- This returns 95.65 --------------------------------|
Another sequence:
/* 1 */
-- Returns 0.956521739130435, which is correct.
select cast(198 as float)/(cast(198 as float) + cast(9 as float))
/* 2 */
-- Returns 0.9565, which is correct.
select round(cast(198 as float)/(cast(198 as float) + cast(9 as float)), 4)
/* 3 */
-- Returns 95.65, which is correct.
select 100*round(cast(198 as float)/(cast(198 as float) + cast(9 as float)), 4)
/* 4 */
-- Returns 4.34999999999999, should be 100-95.65, but it's not. ROUND is ignored. Why?
select 100-(100*round(cast(198 as float)/(cast(198 as float) + cast(9 as float)), 4))
|-------------------- This returns 95.65 --------------------------------|
I'm just curious as to why this happens, although it can easily be fixed with one ROUND at the beginning:
select round(100-(100*(cast(198 as float)/(cast(198 as float) + cast(9 as float)))), 2)
The reason I ask is because it's not something that can be easily reproduced. I tried reproducing it, and out of 2,000 times, it only occurred 12 times. That's less than 1%, but always with floats that have repeating digits after the 2nd decimal (i.e. 3.47999999999), which makes sense:
declare @rand int = 1
While(@rand <= 2000)
begin
select 100-round(100*(cast(abs(checksum(NewId()) % 1500) as float)/(cast(abs(checksum(NewId()) % 1500) as float) + cast(abs(checksum(NewId()) % 1500) as float))),2)
set @rand = @rand + 1
end
I guess my other question is: what type is the sql editor returning when it returns 95.65 with select round(100*(cast(198 as float)/(cast(198 as float) + cast(9 as float))),2)?
To expand on Jeroen's comment:
SQL Server's FLOAT type is a double-precision floating-point value. As with (most) floating point formats, the value is stored in binary. Just as the number 1/3 cannot be represented with a finite number of digits after the decimal, the number 95.65 cannot be represented with a finite number of bits. The closest value to 95.65 that can be stored in a FLOAT has the exact value:
95.650000000000005684341886080801486968994140625
If you subtract that number from 100, you get exactly:
4.349999999999994315658113919198513031005859375
When displayed, this is rounded to 15 significant digits, and the value printed is:
4.34999999999999
As discussed, you can solve this problem by using DECIMAL type instead of FLOAT.
There are many resources available on StackOverflow and elsewhere if you'd like to learn more about floating-point math.
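Since SQL Server's FLOAT and Python's float are both IEEE 754 doubles, the behavior is easy to reproduce outside SQL; a small sketch (Decimal(x) shows the exact value the double stores):

```python
from decimal import Decimal

# round(100 * 198/207, 2) stores the double closest to 95.65
x = round(100 * (198 / (198 + 9)), 2)
print(x)            # displays as 95.65
print(Decimal(x))   # exact stored value: 95.65000000000000568...
print(100 - x)      # 4.349999999999994, not 4.35
```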
-- EDIT --
I'm going to use parenthesis notation for repeating decimals. When I write
0.(3)
that means
0.333333333333333333333333333... and so on forever.
Let's start at the beginning. 198 can be stored in a float. 198+9 is 207. That can be stored in a float. If you divide 198 by 207 the mathematically correct answer is:
0.95(6521739130434782608695)
But this value cannot be stored in a float. The closest value that can be stored in a float is:
0.9565217391304348115710354250040836632251739501953125
Take that number and multiply it by 100; the mathematically correct answer is:
95.65217391304348115710354250040836632251739501953125
Since you multiplied a float by 100, you get a float, and that number cannot be stored in a float, so the closest possible value is:
95.6521739130434838216388016007840633392333984375
You ask that this float be rounded to 2 digits after the decimal. The mathematically correct answer is:
95.65
But since you asked to round a float, the answer is also a float, and that value cannot be stored in a float. The closest possible value is:
95.650000000000005684341886080801486968994140625
You asked to subtract that from 100. The mathematically correct value is:
4.349999999999994315658113919198513031005859375
As it happens, that value can be stored in a float. So that's the value that's being selected.
When converting this number to a string, SQL Server rounds the result to 15 significant digits. So that number, when printed, appears as:
4.34999999999999
When you ran the same calculation on your Java console, the exact same calculations were performed, but when the value was printed, Java rounded to 16 significant digits:
4.349999999999994
-- Another EDIT --
Why can't 95.65 be stored exactly in a float? The float type stores numbers in binary format. If you want to express 95.65 in binary, the mathematically exact value is:
1011111.1010011001100110011001100110011001100110011001(1001)
You can see the pattern. Just as 1/3 is represented as an infinite repeating value in decimal, this value has an infinite repeating value in binary. You can see the pattern (1001) being repeated over and over.
A float can only hold 53 significant bits. And so this is rounded to:
1011111.1010011001100110011001100110011001100110011010
If you convert that number back to decimal, you get the exact value:
95.650000000000005684341886080801486968994140625
-- Yet Another Edit --
You ask what happens when you call Round again on the result.
We started with the number:
4.349999999999994315658113919198513031005859375
You ask that this be rounded to 2 places. The mathematically correct answer is:
4.35
Since you are rounding a float, this result must also be a float. Express this value in binary. The mathematically correct answer is:
100.0101100110011001100110011001100110011001100110011001(1001)
Again, this is a repeating binary value. But float can't store an infinite number of bits. The value is rounded to 53 significant bits. The result is:
100.0101100110011001100110011001100110011001100110011
If you convert this to decimal, the exact value is:
4.3499999999999996447286321199499070644378662109375
That is the value you selected. Now SQL Server needs to print that on the screen. As before, it is rounded to 15 significant digits. The result is:
4.35000000000000
It removes the trailing zeros, and the result you see on the screen is:
4.35
The last round did nothing magic. The answer is still stored as a float, and the answer is still not an exact value. As it happens SQL Server chooses to round values to 15 significant digits when printing a float. In this case, that rounded value happened to match the exact value you were expecting.
If values were rounded to 14 places when printing them, the original query would have appeared to have the value you expected.
If values were rounded to 16 places, then the result of the final round would be shown as
4.3499999999999996
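This final step can also be reproduced with Python doubles (same IEEE 754 format as SQL Server's FLOAT): rounding again produces a float that prints as 4.35 even though the stored value is still inexact.

```python
from decimal import Decimal

# the exact double produced by the earlier subtraction
y = round(4.349999999999994315658113919198513031005859375, 2)
print(y)           # 4.35 -- the rounded display
print(Decimal(y))  # 4.3499999999999996447286321199499070644378662109375
```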

Weird behavior of ROUND function in MSSQL database for real columns only

I found strange behavior of the ROUND function in MSSQL for the real column type. I have tested this issue on Azure SQL DB and SQL Server 2012.
Why does @number = 201604.125 return 201604.1?
Why does round(1.12345, 10) return 1.1234500408?
-- For a float column it works as expected
-- Declare @number as float, @number1 as float;
Declare @number as real, @number1 as real;
set @number = 201604.125;
set @number1 = 1.12345;
select @number as Realcolumn_Original
,round(@number, 2) as Realcolumn_ROUND_2
,round(@number, 3) as Realcolumn_ROUND_3
,@number1 as Realcolumn1_Original
,round(@number1, 6) as Realcolumn1_ROUND_6
,round(@number1, 7) as Realcolumn1_ROUND_7
,round(@number1, 8) as Realcolumn1_ROUND_8
,round(@number1, 9) as Realcolumn1_ROUND_9
,round(@number1, 10) as Realcolumn1_ROUND_10
Output for real column type
I suspect what you are asking here is why does:
DECLARE @n real = 201604.125;
SELECT @n;
Return 201604.1?
First port of call for things like this should be the documentation: let's start with float and real (Transact-SQL). Firstly we note that:
The ISO synonym for real is float(24).
If we then look further down:
float [ (n) ] Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates the precision and storage size. If n is specified, it must be a value between 1 and 53. The default value of n is 53.
n value 1-24: precision 7 digits, storage size 4 bytes
So, now we know that a real (aka a float(24)) has a precision of 7 digits. 201604.125 has 9 digits, which is 2 too many; so off come the 2 and 5 in the displayed value.
Now, ROUND (Transact-SQL). That states:
Returns a numeric value, rounded to the specified length or precision.
When using real/float those digits aren't actually lost, as such, due to the floating point. When you use ROUND, you are specifically stating "I want this many decimal places". This is why you can then see the .13 and the .125, as you have specifically asked for those. When you just returned the value of @number it had a precision of 7, due to being a real, so 201604.1 was the value returned.
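Since real is an IEEE 754 single-precision value, the effect can be imitated in Python by packing a double into 4 bytes and reading it back (to_real is my name for this helper). 201604.125 survives the round trip exactly, supporting the point that the digits aren't lost, only hidden by the 7-digit display:

```python
import struct

def to_real(x: float) -> float:
    """Round a Python float (a double) to SQL Server real
    (IEEE 754 single precision) and back."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_real(201604.125))  # exactly representable in 24 mantissa bits
print(to_real(1.12345))     # about 1.1234500408, the nearest single
```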

What is the max value of numeric(19, 0) and what happens if it is reached?

In an SQL Server table, I have a column with the following type:
numeric(19, 0)
What is the max value for this type?
This corresponds to a Long in Java, which has a maximum value of 9,223,372,036,854,775,807 (19 digits).
Does the above type in SQL Server have the same max value?
What happens in SQL Server if the above value is reached?
The MSDN documentation rather unhelpfully says:
When maximum precision is used, valid values are from - 10^38 +1 through 10^38 - 1...Maximum storage sizes vary, based on the precision.
(max precision is numeric(38)).
From experimenting in SQL Server 2012, the maximum value for numeric(19,0) is 10e18 - 1025. Attempting to insert any higher value gives:
Msg 8115, Level 16, State 6, Line xx Arithmetic overflow error converting float to data type numeric.
The maximum does vary by precision and appears to be a bit under 10^p (where p is the precision), though the exact relationship is not obvious. No doubt a more exact answer could be given by delving into the bit-level details of the storage method.
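A possible explanation for the 1025, sketched in Python (whose floats are the same IEEE 754 doubles SQL Server uses for float): the error message mentions converting float to numeric, and doubles near 10^19 are spaced 2048 apart, so literal values within 1024 of 10^19 round up to it and overflow.

```python
import math

# Doubles in [2**63, 2**64) are spaced 2**11 = 2048 apart
print(math.ulp(1e19))               # 2048.0

# 10**19 itself is exactly representable as a double
assert float(10**19) == 10**19

# A literal within 1024 of 10**19 rounds up to it, overflowing numeric(19,0);
# 10**19 - 1025 rounds down to the nearest double below, which fits
print(int(float(10**19 - 1024)))    # 10000000000000000000 -> overflow
print(int(float(10**19 - 1025)))    # 9999999999999997952  -> fits
```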
numeric(19,0) specifies a fixed precision number having a precision of 19 digits and 0 digits to the right of the decimal point. It should not overflow. See decimal and numeric for more information.
First, decimal is synonymous with numeric.
decimal(2, 1) means that the total number of digits before and after the decimal point can be at most 2, and the number of digits after the decimal point can be at most 1. So the range is from -9.9 to 9.9.
Range for numeric(19, 0) is thus
-9999999999999999999.0 to
9999999999999999999.0
But in my experiments sometimes
9999999999999999999.2
is also allowed, presumably because the value is rounded to the column's scale of 0 before the range check.
