How can I increase the precision of fractional numbers - sql-server

select 'foo ' + str(1.0/3.0, 15, 12) returns foo 0.333333000000. Is there a possibility to increase the decimal precision so that every digit after the decimal point is a 3?

You need to increase the accuracy of your input values. To quote the documentation, when performing a division on decimals the resulting precision is p1 - s1 + s2 + max(6, s1 + p2 + 1) and the resulting scale is max(6, s1 + p2 + 1).
In your expression, both 1.0 and 3.0 are a decimal(2,1). This means your resulting precision is 2 - 1 + 1 + max(6, 1 + 2 + 1) = 2 + max(6, 5) = 2 + 6 = 8. For the scale, the result is max(6, 1 + 2 + 1) = max(6, 5) = 6. Thus your new datatype is a decimal(8,6). This results in the expression 1.0 / 3.0 = 0.333333.
You are then casting this value to a string, with a precision of 15 and a scale of 12. 0.333333 as a decimal(15,12) is 0.333333000000, as the precision has already been lost; SQL Server doesn't remember that the value is technically 0.3~.
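One way to confirm the inferred type (a quick sanity check using SQL_VARIANT_PROPERTY, which reports an expression's precision and scale) is:
SELECT SQL_VARIANT_PROPERTY(1.0/3.0, 'Precision') AS result_precision, -- 8
       SQL_VARIANT_PROPERTY(1.0/3.0, 'Scale')     AS result_scale;     -- 6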
Thus, to get the answer you want, you need to add more decimal places to your initial values. For example:
SELECT 'foo ' + CONVERT(varchar(20), CONVERT(decimal(15,12), 1.000000/3.0000000)); -- foo 0.333333333333
or, explicitly convert one of the operands before the division:
SELECT 'foo ' + CONVERT(varchar(20), CONVERT(decimal(15,12), CONVERT(decimal(15,12), 1.0)/3.0)); -- foo 0.333333333333
Any questions, please do ask.

Related

Reason to cast both num and denom for division?

Is precision the reason I should cast both the numerator and denominator as decimal so that it returns a decimal? And why do the first and second statements produce different precisions? Both only cast one part.
And instead of casting both to decimal(12,4), why not just cast the denominator to a higher precision?
For example:
select 3/cast(2 as decimal(12,4))
select cast(3 as decimal(12,4))/2
select cast(3 as decimal(12,4))/cast(2 as decimal(12,4))
select 3/cast(2 as decimal(16,4))
RETURNS
1.5000000000000
1.500000
1.50000000000000000
1.50000000000000000
This is related to Precision, Scale, and Length
Precision is the number of digits in a number.
Scale is the number of digits to the right of the decimal point in a number.
For example, the number 123.45 has a precision of 5 and a scale of 2.
The following table defines how the precision and scale of the result are calculated when the result of an operation is of type decimal. The result is decimal when either of the following is true:
Both expressions are decimal.
One expression is decimal and the other is a data type with a lower precedence than decimal.
The operand expressions are denoted as expression e1, with precision p1 and scale s1, and expression e2, with precision p2 and scale s2.
The precision and scale for any expression that is not decimal is the precision and scale defined for the data type of the expression.
Operation || Result precision || Result scale
e1 + e2 || max(s1, s2) + max(p1-s1, p2-s2) + 1 || max(s1, s2)
e1 - e2 || max(s1, s2) + max(p1-s1, p2-s2) + 1 || max(s1, s2)
e1 * e2 || p1 + p2 + 1 || s1 + s2
e1 / e2 || p1 - s1 + s2 + max(6, s1 + p2 + 1) || max(6, s1 + p2 + 1)
e1 % e2 || min(p1-s1, p2 -s2) + max( s1,s2 ) || max(s1, s2)
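As a worked application of the division row to the queries above (a sketch, assuming the bare integer literals 3 and 2 are typed as int, i.e. precision 10, scale 0):
-- Query 1: 3 / CAST(2 AS decimal(12,4))
--   precision = p1 - s1 + s2 + max(6, s1 + p2 + 1) = 10 - 0 + 4 + max(6, 0 + 12 + 1) = 27
--   scale     = max(6, s1 + p2 + 1)                = max(6, 0 + 12 + 1)              = 13
-- Query 3: CAST(3 AS decimal(12,4)) / CAST(2 AS decimal(12,4))
--   precision = 12 - 4 + 4 + max(6, 4 + 12 + 1) = 29
--   scale     = max(6, 4 + 12 + 1)              = 17
SELECT 3 / CAST(2 AS decimal(12,4))                        AS q1, -- 1.5000000000000     (scale 13)
       CAST(3 AS decimal(12,4)) / CAST(2 AS decimal(12,4)) AS q3; -- 1.50000000000000000 (scale 17)
This matches the 13 and 17 decimal places shown in the first and third results.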
You can read more in this MSDN article

How scale is defined when decimal and bigint are divided?

I have value A of type DECIMAL(19,8) - the scale is 8, so the number of decimal digits that will be stored to the right of the decimal point is 8.
Now, I am dividing A by B, where B is a BIGINT. For example:
SELECT CAST(3 AS DECIMAL(19, 8)) / CAST(27 AS BIGINT) -- 0.111111111111111111111111111
,CAST(300 AS DECIMAL(19, 8)) / CAST(27 AS BIGINT) -- 11.111111111111111111111111111
,CAST(75003 AS DECIMAL(19, 8)) / CAST(13664400 AS BIGINT) -- 0.005488934750153684025643277
the output values have lengths of 29, 30 and 29 respectively.
Could anyone tell me why the length of the value is not 30 for all three divisions? How is SQL Server calculating the scale of the final result?
Argument 1: 3 AS DECIMAL(19, 8)
Argument 2: 27 AS DECIMAL (18, 0) -- default precision is 18, default scale is 0 (BIGINT was converted to DECIMAL due to type precedence)
p1 = 19
p2 = 18
s1 = 8
s2 = 0
max precision = (p1 - s1 + s2) + MAX(6, s1 + p2 + 1) -- up to 38
max scale = MAX(6, s1 + p2 + 1)
Let's calculate for example 1:
precision: (19 - 8 + 0) + MAX(6, 8 + 18 + 1) = 38
scale: MAX(6, 8 + 18 + 1) = 27
For all of your examples you will always get the maximum scale of 27.
0.111111111111111111111111111 (27)
11.111111111111111111111111111 (27)
0.005488934750153684025643277 (27)
The whole (integer) part takes only the digits it actually needs: 1, 2 and 1 respectively, which together with the decimal point and the 27 digits of scale gives lengths of 29, 30 and 29.
For me everything is perfectly valid.
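As a quick check (not part of the original answer), SQL_VARIANT_PROPERTY confirms the result type of the first division:
SELECT SQL_VARIANT_PROPERTY(CAST(3 AS decimal(19,8)) / CAST(27 AS bigint), 'Precision') AS result_precision, -- 38
       SQL_VARIANT_PROPERTY(CAST(3 AS decimal(19,8)) / CAST(27 AS bigint), 'Scale')     AS result_scale;     -- 27
A decimal(38,27) leaves at most 11 digits for the integer part, and only the digits actually needed are shown to the left of the decimal point.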
This answer is based on the work of Paul White in Decimal Truncation in Division.
This is called Data Type Precedence.
When a query combines different but compatible types, one of them has to be cast to the other type, either with an explicit or an implicit conversion.
If you look at Data Type Conversion (Database Engine), you will see that there is an implicit conversion between decimal and bigint.
Therefore your query does not require an explicit cast.
If you look at Data Type Precedence (Transact-SQL) on MSDN, you will see:
decimal
bigint
This means that decimal has a higher precedence than bigint, so the bigint value will be converted to decimal.
In the end, your calculation will be:
3.000... / 27.000...
300.000... / 27.000...
75003.000... / 13664400.000...
If you want the calculation to be 3 / 27 (plain integer arithmetic), you must do an explicit cast on the decimal value instead.
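For instance (a sketch illustrating that last point, not from the original answer), casting the decimal side back to bigint forces integer division:
SELECT CAST(CAST(3 AS decimal(19,8)) AS bigint) / CAST(27 AS bigint); -- 0 (integer division, remainder discarded)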

Efficiency of growing a dynamic array by a fixed constant each time?

So when a dynamic array is doubled in size each time it fills up, I understand how the time complexity for expanding is O(n), n being the number of elements. What about if the array is copied and moved to a new array that is only 1 size bigger when it is full (instead of doubling)? When we resize by some constant C, is the time complexity always O(n)?
If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n^2).
To see this, think about what happens if you do a sequence of C consecutive operations. Of those operations, C - 1 of them will take time O(1) because space already exists. The last operation will take time O(n) because it needs to reallocate the array, add space, and copy everything over. Therefore, any sequence of C operations will take time O(n + C).
So now consider what happens if you perform a sequence of n operations. Break those operations up into blocks of size C; there will be n / C of them. The total work required to perform those operations will be
(C + C) + (C + 2C) + (C + 3C) + ... + (C + n)
= C(n/C) + (C + 2C + 3C + ... + n)
= n + C(1 + 2 + 3 + ... + n/C)
= n + C(n/C)(n/C + 1)/2
= n + n(n/C + 1)/2
= n + n^2/(2C) + n/2
= Θ(n^2)
Contrast this with the math for when you double the array size whenever you need more space: the total work done is
1 + 2 + 4 + 8 + 16 + 32 + ... + n
= 1 + 2 + 4 + 8 + ... + 2^(log n)
= 2^(log n + 1) - 1
= 2n - 1
= Θ(n)
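To make this concrete with small numbers (a worked example, not from the original answer): growing by C = 1 starting from capacity 1, inserting n = 8 elements copies 1 + 2 + 3 + ... + 7 = 28 elements in total, roughly n^2/2, whereas doubling (capacities 1, 2, 4, 8) copies only 1 + 2 + 4 = 7 elements, which is less than 2n.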
Transplanted from SO Documentation.
Sums of powers of 2 — 1 + 2 + 4 + 8 + 16 + …
The sum
2^0 + 2^1 + 2^2 + ... + 2^(n-1)
simplifies to 2^n - 1. This explains why the maximum value that can be stored in an unsigned 32-bit integer is 2^32 - 1.

Precision, Scale, Sum, Divide.. Truncation

I have the following code:
SELECT -701385.10 -- -701385.10
SELECT SUM(-701385.10) -- -701385.10
SELECT -701385.10/2889991754.89 -- -0.000242694498630
SELECT SUM(-701385.10)/2889991754.89 -- -0.000242
In the last SELECT the result is truncated to 6 decimal places. I've read through the Precision, Scale, and Length article and unless my working is wrong, I can't understand why the truncation is occurring. The type of the expression SUM(-701385.10) should be DECIMAL(38,2) - see SUM - so the type resulting from the division should have:
Precision:
p1 - s1 + s2 + max(6, s1 + p2 + 1)
38 - 2 + 2 + max(6, 2 + 10 + 1)
38 - max(6,13)
38 - 13
25
Scale:
max(6, s1 + p2 + 1)
max(6, 2 + 10 + 1)
max(6, 13)
13
So why are the decimal places being truncated?
Your working is wrong
Precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Scale: max(6, s1 + p2 + 1)
Gives
Precision: 38 - 2 + 2 + max(6, 2 + 12 + 1) = 53
Scale: max(6, 2 + 12 + 1) = 15
Which is greater than the maximum allowed precision of 38, so the precision is capped at 38 and the scale is reduced (down to 6 in this case), which is the truncation you are seeing, as covered here
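One way to keep more decimal places (a sketch, not part of the original answer) is to narrow the SUM back down before dividing, so the division rule starts from a smaller p1:
SELECT CAST(SUM(-701385.10) AS decimal(15,2)) / 2889991754.89; -- -0.000242694498630 (precision 30, scale 15, no truncation)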

Sql Server - Precision handling

I am just reading about Precision Handling on MSDN.
Taken from the table on that page:
Operation: e1 / e2
Result precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale: max(6, s1 + p2 + 1)
For the explanation of the used expressions:
The operand expressions are denoted as expression e1, with precision p1 and scale s1, and expression e2, with precision p2 and scale s2.
What I do not understand (more like I am not 100% sure I understand it) is this expression
max(6, s1 + p2 + 1)
Can someone explain it to me?
Many thanks :)
See my worked example here T-SQL Decimal Division Accuracy
It means the scale of the result is the larger of 6 and (s1 + p2 + 1), i.e. the scale of the first operand plus the precision of the second operand plus one.
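For example (a small sketch, not part of the original answer), dividing a decimal(5,2) by a decimal(7,3) gives a result scale of max(6, 2 + 7 + 1) = 10:
SELECT CAST(1.25 AS decimal(5,2)) / CAST(4.000 AS decimal(7,3)); -- 0.3125000000 (10 decimal places)
The lower bound of 6 only comes into play when s1 + p2 + 1 would be smaller than 6.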

Resources