I have the following code:
SELECT -701385.10 -- -701385.10
SELECT SUM(-701385.10) -- -701385.10
SELECT -701385.10/2889991754.89 -- -0.000242694498630
SELECT SUM(-701385.10)/2889991754.89 -- -0.000242
In the last SELECT the result is truncated to 6 decimal places. I've read through the Precision, Scale, and Length article and unless my working is wrong, I can't understand why the truncation is occurring. The type of the expression SUM(-701385.10) should be DECIMAL(38,2) - see SUM - so the type resulting from the division should have:
Precision:
p1 - s1 + s2 + max(6, s1 + p2 + 1)
38 - 2 + 2 + max(6, 2 + 10 + 1)
38 - max(6,13)
38 - 13
25
Scale:
max(6, s1 + p2 + 1)
max(6, 2 + 10 + 1)
max(6, 13)
13
So why are the decimal places being truncated?
Your working is wrong
Precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Scale: max(6, s1 + p2 + 1)
Gives
Precision: 38 - 2 + 2 + max(6, 2 + 12 + 1) = 53
Scale: max(6, 2 + 12 + 1) = 15
Which is greater than the maximum allowed precision of 38, so SQL Server reduces the result precision to 38 and cuts the scale down to fit (for division the scale is never reduced below 6). That leaves you with decimal(38,6), hence the six decimal places you see, as covered in the Precision, Scale, and Length documentation.
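Since it is the decimal(38,2) type of SUM that pushes the result precision past 38, one workaround (a sketch, assuming your summed values comfortably fit a smaller precision such as decimal(15,2)) is to cast the aggregate down before dividing:
SELECT CAST(SUM(-701385.10) AS decimal(15,2)) / 2889991754.89 -- -0.000242694498630
The result type is then decimal(30,15): precision 15 - 2 + 2 + max(6, 2 + 12 + 1) = 30 and scale max(6, 15) = 15, so nothing needs to be truncated.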
Related
select 'foo ' + str(1.0/3.0, 15, 12) returns foo 0.333333000000. Is it possible to increase the decimal precision so that every digit after the decimal point is a 3?
You need to increase the accuracy of your input values. To quote the documentation: when performing a division on a decimal, the resulting precision is p1 - s1 + s2 + max(6, s1 + p2 + 1) and the resulting scale is max(6, s1 + p2 + 1).
In your expression, both 1.0 and 3.0 are a decimal(2,1). This means your resulting precision is 2 - 1 + 1 + max(6, 1 + 2 + 1) = 2 + max(6,4) = 2 + 6 = 8. For your scale, the result is max(6, 1 + 2 + 1) = max(6,4) = 6. Thus your new datatype is a decimal(8,6). This results in the expression 1.0 / 3.0 = 0.333333.
You are then casting this value to a string with str(..., 15, 12), that is, a total length of 15 with 12 decimal places. 0.333333 as a decimal(15,12) is 0.333333000000, as the precision has already been lost; SQL Server doesn't remember that the value is conceptually 0.3 recurring.
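If you want to confirm the inferred type yourself, the built-in SQL_VARIANT_PROPERTY function reports the precision and scale SQL Server actually assigned to the expression:
SELECT SQL_VARIANT_PROPERTY(1.0/3.0, 'Precision') -- 8
SELECT SQL_VARIANT_PROPERTY(1.0/3.0, 'Scale') -- 6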
Thus, to get the answer you want, you need to add more decimal places to your initial values. For example:
SELECT 'foo ' + CONVERT(varchar(20),CONVERT(decimal(15,12),1.000000/3.0000000));
or use an explicit conversion:
SELECT 'foo ' + CONVERT(varchar(20),CONVERT(decimal(15,12),CONVERT(decimal(15,12),1.0)/3.0));
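Either way the division now carries enough scale through to the decimal(15,12), so both statements should return foo 0.333333333333 (twelve 3s).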
Any questions, please do ask.
Is precision the reason I should cast both the numerator and denominator as decimal so that it returns a decimal? And why do the first and second statements produce different precisions? Both only cast one part.
And instead of casting both to decimal(12,4), why not just cast the denominator to a higher precision?
For example:
select 3/cast(2 as decimal(12,4))
select cast(3 as decimal(12,4))/2
select cast(3 as decimal(12,4))/cast(2 as decimal(12,4))
select 3/cast(2 as decimal(16,4))
RETURNS
1.5000000000000
1.500000000000000
1.50000000000000000
1.50000000000000000
This is related to Precision, Scale, and Length
Precision is the number of digits in a number.
Scale is the number of digits to the right of the decimal point in a number.
For example, the number 123.45 has a precision of 5 and a scale of 2.
The following table defines how the precision and scale of the result are calculated when the result of an operation is of type decimal. The result is decimal when either of the following is true:
Both expressions are decimal.
One expression is decimal and the other is a data type with a lower precedence than decimal.
The operand expressions are denoted as expression e1, with precision p1 and scale s1, and expression e2, with precision p2 and scale s2.
The precision and scale for any expression that is not decimal is the precision and scale defined for the data type of the expression.
Operation || Result precision || Result scale
e1 + e2 || max(s1, s2) + max(p1-s1, p2-s2) + 1 || max(s1, s2)
e1 - e2 || max(s1, s2) + max(p1-s1, p2-s2) + 1 || max(s1, s2)
e1 * e2 || p1 + p2 + 1 || s1 + s2
e1 / e2 || p1 - s1 + s2 + max(6, s1 + p2 + 1) || max(6, s1 + p2 + 1)
e1 % e2 || min(p1-s1, p2-s2) + max(s1, s2) || max(s1, s2)
You can read more in the Precision, Scale, and Length (Transact-SQL) article on MSDN.
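Applying the division row to the four queries above explains each output (an int literal such as 2 or 3 has precision 10 and scale 0):
3 / decimal(12,4): scale = max(6, 0 + 12 + 1) = 13, hence 1.5000000000000
decimal(12,4) / 2: scale = max(6, 4 + 10 + 1) = 15, hence 1.500000000000000
decimal(12,4) / decimal(12,4): scale = max(6, 4 + 12 + 1) = 17, hence 1.50000000000000000
3 / decimal(16,4): scale = max(6, 0 + 16 + 1) = 17, hence 1.50000000000000000
Casting only one operand changes which side supplies s1 and which supplies p2, which is why the first and second statements end up with different scales.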
I've brute-force written out up to N=4 and I'm wondering if it's possible to express this in a simple recursive formula.
f({A}) = 1 * (A)
= A
f({A,B}) = 2 * (A + B) + 1 * (A) + 1 * (B)
= 3A + 3B
f({A,B,C}) = 3*(A+B+C)+2*(A+B)+2*(B+C)+2*(A)+1*(B)+2*(C)
= 7A + 8B + 7C
f({A,B,C,D}) = 4*(A+B+C+D)+3*(A+B+C)+3*(B+C+D)+4*(A+B)+2*(B+C)+4*(C+D)+4*(A)+2*(B)+2*(C)+4*(C)
= 15A + 18B + 18C + 15D
I'm actually not sure whether it's more important to look at them grouped by contiguous subsections of the original array (as in the first part of each of my equals above) or by the individual numbers.
And I see that if I'm grouping them by individual numbers, the first and last coefficients are 2^n - 1, where n is the size of the array.
I think I'm starting to see a pattern here. Looking at the data:
F([A]) = A
F([A,B]) = 3A + 3B
F([A,B,C]) = 7A + 8B + 7C
F([A,B,C,D]) = 15A + 18B + 18C + 15D
Grouping them by common factors and leaving the extras outside:
F([A]) = A
F([A,B]) = 3(A+B)
F([A,B,C]) = 7(A + B + C) + B
F([A,B,C,D]) = 15(A + B + C + D) + 3(B+C)
the pattern that arises is as follows:
F([]) = 0
F(X) = (2^n-1)*sum(X) + F(center(X))
where n is the size of X, sum(X) is the sum of the elements of X, and center(X) is a function that drops the first and last elements of the given array.
With that, the next one is:
F([A,B,C,D,E]) = 31(A+B+C+D+E) + F([B,C,D])
= 31(A+B+C+D+E) + 7(B+C+D) + C
= 31A + 38B + 39C + 38D + 31E
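And carrying the same recursion one step further:
F([A,B,C,D,E,F]) = 63(A+B+C+D+E+F) + F([B,C,D,E])
= 63(A+B+C+D+E+F) + 15(B+C+D+E) + 3(C+D)
= 63A + 78B + 81C + 81D + 78E + 63F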
Looking at the pattern, I came up with a slightly different solution to Copperfield's, which is to keep reapplying the center logic until there is no more center, rather than applying it only once:
F(x) = (2^n - 1) * sum(x) + F(center(x)) + F(center(center(x))) + ...
F({A}) = 1A
F({A,B}) = 3A + 3B
F({A,B,C}) = 7A + 8B + 7C
F({A,B,C,D}) = 15A + 18B + 18C + 15D
F({A,B,C,D,E}) = 31A + 38B + 40C + 38D + 31E
F({A,B,C,D,E,F}) = 63A + 78B + 84C + 84D + 78E + 63F
The results are identical until F({A,B,C,D,E}) where the C term is one larger than Copperfield's:
F({A,B,C,D,E}) = 31(A+B+C+D+E) + F({B,C,D}) + F({C})
= 31(A+B+C+D+E) + 7(B+C+D) + C + C
= 31A + 38B + 40C + 38D + 31E
and from there the difference increases. Either interpretation can be made from the data supplied; it's up to the OP to supply the next term to see which solution is correct (or whether both are wrong).
Finally, the OP's last equation has what appears to be an error in the final term:
f({A,B,C,D}) = 4*(A+B+C+D)+3*(A+B+C)+3*(B+C+D)+4*(A+B)+2*(B+C)+4*(C+D)+4*(A)+2*(B)+2*(C)+4*(C)
which probably should be:
f({A,B,C,D}) = 4*(A+B+C+D)+3*(A+B+C)+3*(B+C+D)+4*(A+B)+2*(B+C)+4*(C+D)+4*(A)+2*(B)+2*(C)+4*(D)
So when a dynamic array is doubled in size each time it fills up, I understand how the time complexity for expanding is O(n), n being the number of elements. What about if the array is copied and moved to a new array that is only 1 size bigger when it is full (instead of doubling)? When we resize by some constant C, is the time complexity always O(n)?
If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n^2).
To see this, think about what happens if you do a sequence of C consecutive operations. Of those operations, C - 1 of them will take time O(1) because space already exists. The last operation will take time O(n) because it needs to reallocate the array, add space, and copy everything over. Therefore, any sequence of C operations will take time O(n + C).
So now consider what happens if you perform a sequence of n operations. Break those operations up into blocks of size C; there will be n / C of them. The total work required to perform those operations will be
(C + C) + (2C + C) + (3C + C) + ... + (n + C)
= (C + 2C + 3C + ... + n) + (n / C)C
= n + C(1 + 2 + 3 + ... + n / C)
= n + C(n / C)(n / C + 1) / 2
= n + n(n / C + 1) / 2
= n + n^2 / (2C) + n / 2
= Θ(n^2)
Contrast this with the math for when you double the array size whenever you need more space: the total work done is
1 + 2 + 4 + 8 + 16 + 32 + ... + n
= 1 + 2 + 4 + 8 + ... + 2^(log n)
= 2^(log n + 1) - 1
= 2n - 1
= Θ(n)
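To put concrete numbers on the difference: inserting n = 1,024 elements with C = 1 copies 1 + 2 + ... + 1,023 = 523,776 elements across all the reallocations, whereas doubling copies only 1 + 2 + 4 + ... + 512 = 1,023.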
Transplanted from SO Documentation.
Sums of powers of 2 — 1 + 2 + 4 + 8 + 16 + …
The sum
2^0 + 2^1 + 2^2 + ... + 2^(n-1)
simplifies to 2^n - 1. This explains why the maximum value that can be stored in an unsigned 32-bit integer is 2^32 - 1.
I am just reading on MSDN about precision handling.
Taken from the table on that page:
Operation: e1 / e2
Result precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale: max(6, s1 + p2 + 1)
For the explanation of the used expressions:
The operand expressions are denoted as expression e1, with precision p1 and scale s1, and expression e2, with precision p2 and scale s2.
What I do not understand (more like I am not 100% sure I understand it) is this expression
max(6, s1 + p2 + 1)
Can someone explain it to me?
Many thanks :)
See my worked example here: T-SQL Decimal Division Accuracy
It means the scale of the result is the larger of 6 and (s1 + p2 + 1), that is, the scale of the first operand plus the precision of the second operand plus one. In other words, the result of a decimal division never has fewer than six decimal places.
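A small sketch of when the 6 takes over, using two decimal(2,1) literals: s1 + p2 + 1 = 1 + 2 + 1 = 4, which is less than 6, so the result scale is bumped up to 6.
SELECT 1.0 / 4.0 -- 0.250000, a decimal(8,6)
SELECT 1.0 / CAST(4 AS decimal(10,0)) -- 0.250000000000, scale = max(6, 1 + 10 + 1) = 12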