Eiffel: REAL_32.to_double gives a strange value

Trying to transform a real_32 to real_64, I'm getting
real_32: 61.55
real_64: 61.54999923706055
Am I wrong with the to_double function?

This is expected. In the particular example, the binary representation of the decimal 61.55 with single and double precision respectively is:
REAL_32: 0 10000100 11101100011001100110011
REAL_64: 0 10000000100 1110110001100110011001100110011001100110011001100110
As you can see, the trailing pattern 0011 is recurring and would have to go on ad infinitum to give the exact value.
When a REAL_32 is assigned to a REAL_64, the missing trailing 0011s are not added automatically; the extra mantissa bits are filled with zeroes instead:
REAL_32: 0 10000100 11101100011001100110011
REAL_64: 0 10000000100 1110110001100110011001100000000000000000000000000000
In decimal notation, this corresponds to 61.54999923706055. What is essential here is that 61.54999923706055 and 61.55 have exactly the same binary representation when using single-precision floating-point numbers. You can check it yourself with print ({REAL_32} 61.55 = {REAL_32} 61.54999923706055). In other words, the results you get are correct, and the two values are the same. The only difference is that when a REAL_32 is printed, it is rounded to a smaller number of meaningful decimal digits.
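If you want to verify the bit patterns outside Eiffel, here is a minimal Python sketch using the standard struct module (Python floats are doubles, so unpacking a single-precision value widens it exactly the way to_double does):
import struct

# Both decimal literals round to the same single-precision bit pattern.
bits_a = struct.pack('>f', 61.55)
bits_b = struct.pack('>f', 61.54999923706055)
print(bits_a.hex(), bits_b.hex())      # 42763333 42763333
print(bits_a == bits_b)                # True

# Widening that pattern to double precision fills the extra mantissa
# bits with zeroes, which is where 61.54999923706055 comes from.
print(struct.unpack('>f', bits_a)[0])  # 61.54999923706055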
This is the reason why accounting and bookkeeping software never uses floating-point numbers, only integer and decimal.

As a workaround when deserializing values from JSON into TypeScript, the following worked:
a_real_32.out.to_real_64
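For what it's worth, this goes through the short textual form of the REAL_32 and parses it back, which is why it lands on 61.55 exactly. A rough Python analogue of the same idea (assuming NumPy is available for a 32-bit float type) would be:
import numpy as np

x32 = np.float32(61.55)
print(float(x32))       # 61.54999923706055 -- plain widening, like to_double
print(float(str(x32)))  # 61.55 -- round-trip through the short decimal string,
                        # analogous to a_real_32.out.to_real_64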

Related

Accuracy loss when converting a tuple of floats into an array in python3

I have some problems with the accuracy when I convert a tuple or list of floats into an array with the 'f' typecode, but with the 'd' typecode it runs "correctly".
For example:
import array
a = (2.16, -22.4, 95.12, -63.47, -0.02, 1245.2)
b = array.array('f', a)
print(b)
# array('f', [2.1600000858306885, -22.399999618530273, 95.12000274658203, -63.470001220703125, -0.019999999552965164, 1245.199951171875])
c = array.array('d', a)
print(c)
# array('d', [2.16, -22.4, 95.12, -63.47, -0.02, 1245.2])
As you can see, the array c contains the same numbers as the tuple a, but the array b has accuracy problems.
However, both type(b[0]) and type(c[0]) result in <class 'float'>.
There is actually no accuracy loss in the way you may suspect here; it's a case of "representation error".
The literal value 2.16 does not have an exact representation as a float; after parsing, it is stored as 0x400147AE147AE148, because Python always uses double precision (see numbers.Real) to represent floats.
The value is then converted to 0x400A3D71 (in the case of 'f') or stays the same (in the case of 'd'). These values correspond to 2.1600000858306884765625 and 2.16000000000000014210854715202, each of which is the closest representation of the literal 2.16 available in its format. The loss of precision from the original 2.16 is inevitable because 2.16 simply does not exist as an exactly representable value.
What you are seeing in the string representation is the interpreter rounding the underlying float/double to a nearby, shorter decimal when that rounding does not change which number is meant. The underlying values are as close to 2.16 as they can possibly get in both cases; only their string representations differ.
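If you want to see those bit patterns yourself, here is a small sketch using only the standard library (struct for the raw bits, decimal for the exact values behind them):
import struct
from decimal import Decimal

x = 2.16  # the literal is parsed into a double
print(struct.pack('>d', x).hex())  # 400147ae147ae148 -- double-precision bits
print(struct.pack('>f', x).hex())  # 400a3d71         -- rounded to single precision

# Exact decimal values of the two stored numbers:
print(Decimal(x))                                             # 2.160000000000000142108547152020...
print(Decimal(struct.unpack('>f', struct.pack('>f', x))[0]))  # 2.1600000858306884765625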

Round off accurately from the last value after the decimal

I am stuck (again) and looking for the smart human beings of planet Earth to help me out.
Background
I have an application which distributes amounts to some users in a given percentage. Say I have $35000 and it will be distributed to 3 users (A, B and C) in some ratio. So the amounts distributed will be
A - 5691.05459265518
B - 14654.473815207
C - 14654.4715921378
which totals up to $35000
The Problem
I have to provide the results on the front end with 2 decimal places instead of a float. So I use the ROUND function of SQL Server with a precision value of 2 to round these to 2 decimal places. But the issue is that when I total these values, it comes out to be $34999.9999 instead of $35000.
My Findings
I searched a bit and found
If the expression that you are rounding ends with a 5, the Round()
function will round the expression so that the last digit is an even
number. Here are some examples:
Round(34.55, 1) - Result: 34.6 (rounds up)
Round(34.65, 1) - Result: 34.6 (rounds down)
So technically the answer is correct, but I am looking for a function or a way to round off the value to exactly what it should have been. I found that I get the expected result if I start rounding off from the last digit after the decimal (if the digit is less than 5, leave the previous digit as it is; otherwise increment the previous digit by 1) and keep backtracking until I am left with only 2 decimal places.
Please advise.
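For illustration only, that digit-by-digit "backtracking" rounding can be sketched in Python with the decimal module (the helper name backtrack_round is made up). For these particular figures it does bring the total back to $35000.00, but repeated rounding like this is not guaranteed to preserve the total in general:
from decimal import Decimal, ROUND_HALF_UP

def backtrack_round(value, places=2):
    """Round away one decimal digit at a time, from the rightmost digit
    back towards `places` digits, rounding half up at every step."""
    d = Decimal(str(value))
    digits = -d.as_tuple().exponent  # current number of decimal digits
    while digits > places:
        digits -= 1
        d = d.quantize(Decimal(1).scaleb(-digits), rounding=ROUND_HALF_UP)
    return d

amounts = [5691.05459265518, 14654.473815207, 14654.4715921378]
rounded = [backtrack_round(a) for a in amounts]
print(rounded)       # [Decimal('5691.06'), Decimal('14654.47'), Decimal('14654.47')]
print(sum(rounded))  # 35000.00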

ValueError: matplotlib display text must have all code points < 128 or use Unicode strings

In my code I get an array like this:
array(['2.83100e+07', '2.74000e+07', '2.79400e+07'], dtype='|S11')
How can I "cut" my values like:
2.83100e+07 --> 2.831 ?
Best regards!
Using a for loop and round(n):
In [23]: round(66.66666666666,4)
Out[23]: 66.6667
In [24]: round(1.29578293,6)
Out[24]: 1.295783
help on round():
round(number[, ndigits]) -> floating point number
Round a number to a given precision in decimal digits (default 0 digits). This always returns a floating point number. Precision may be negative.
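Putting that together with the original array: the strings have to be converted to floats first, and (assuming the intent is to turn 2.83100e+07 into 2.831, i.e. divide by 1e7 and keep three decimals) a loop with round could look like this:
import numpy as np

a = np.array(['2.83100e+07', '2.74000e+07', '2.79400e+07'], dtype='|S11')

# float() accepts the ASCII byte strings directly in Python 3.
trimmed = [round(float(s) / 1e7, 3) for s in a]
print(trimmed)  # [2.831, 2.74, 2.794]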

Getting the index of a float number in a vector

I've executed this code, but it doesn't work as I expect:
A = 1:0.1:1.4
A =
1.0000 1.1000 1.2000 1.3000 1.4000
A == 1.3000
ans =
0 0 0 0 0
I thought I was going to get:
ans =
0 0 0 1 0
Why does it not work? And how can I make it work as I want?
Thank you.
That's the usual situation when you compare floats. Try A(4)-1.3. It'll give you something small but not zero. That's because floats have finite precision. In general, it's better not to test floats for equality.
The usual approach is to define a small tolerance (for example 1e-9) and compare taking that tolerance into account:
abs(A-1.3)<1e-9
0.1 has an infinite expansion when written in base 2:
0.000110011001100110011001100110011001100110011001100110011001100
shell code to obtain that:
bc -lq
obase=2
1/10
MATLAB stores doubles, so this expansion is truncated after 52 fraction bits. Because of this, 0.1*3 and 0.3 are different.
It's because of double precision. Try format long g and have a look at A again; you'll see that it's not exactly 1.3. Have a look at the MATLAB wiki to understand why that is. It's never a good idea to do an equality test on a floating point number.
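The same effect is easy to reproduce in Python, which uses the same double-precision arithmetic; comparing against a tolerance is the usual fix there as well:
# Build 1.0, 1.1, 1.2, 1.3, 1.4 by scaling 0.1 steps.
a = [1 + 0.1 * k for k in range(5)]

print([x == 1.3 for x in a])             # all False: the entry at index 3 is 1.3000000000000003
print([abs(x - 1.3) < 1e-9 for x in a])  # [False, False, False, True, False]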

Decimal (10,9) variable can't hold the number 50 (SQL Server 2008)

This one is pretty straightforward. Why does the code below cause the error below?
declare @dTest decimal(10, 9)
set @dTest = 50
Error:
Msg 8115, Level 16, State 8, Line 3
Arithmetic overflow error converting int to data type numeric.
According to the MSDN documentation on decimal(p, s), p (or 10 in my case) is the "maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point" whereas s (or 9 in my case) is the "maximum number of decimal digits that can be stored to the right of the decimal point."
My number, 50, has only 2 digits total (which is less than the maximum 10), and 0 digits to the right of the decimal (which is less than the maximum 9), therefore it should work.
I found this question about essentially the same issue, but no one explained why the documentation seems to conflict with the behavior. It seems like the s dimension is actually being interpreted as the fixed number of digits to the right of the decimal, and being subtracted from the p number, which in my case leaves 10 - 9 = only 1 digit remaining to handle the left side.
Can anyone provide a reasonable way to interpret the documentation as written to match the behavior?
EDIT:
I see some explanations below, but they don't address the fundamental problem with the wording of the docs. I would suggest this change in wording:
For "p (precision)" change "The maximum total number of decimal digits that can be stored" to read "The maximum total number of decimal digits that will be stored".
And for "s (scale)" change "The maximum number of decimal digits that can be stored to the right of the decimal point." to "The number of decimal digits that will be stored to the right of the decimal point. This number is substracted from p to determine the maximum number of digits to the left of the decimal point."
I'm going to submit a bug report to Connect unless some one has a better explanation.
10 - 9 is 1. DECIMAL(10, 9) can hold a number in the format 0.000000000. 50 has two digits before the decimal point, and is therefore out of range. You quoted it yourself:
According to the MSDN documentation on decimal(p, s), p (or 10 in my case) is the "maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point" whereas s (or 9 in my case) is the "maximum number of decimal digits that can be stored to the right of the decimal point."
I submitted a bug report to Connect: Misleading documentation on the decimal data type
A reasonable way to interpret the documentation is that trailing decimal zero digits are not ignored. So your number has 9 decimal digits to the right of the decimal point, and they all happen to be 0.
DECIMAL(10, 9) is a fixed precision and scale numeric data type. This means that it always stores the same number of digits to the right of the decimal point. So the data type you specified can only store numbers with one digit to the left of the decimal point and 9 digits to the right. Obviously, 50 does not fit in a number of that format.
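In other words, the rule is simply that p - s digits are available to the left of the decimal point. A hypothetical helper (the name fits_decimal is made up) expressing that check in Python:
def fits_decimal(value, p, s):
    """Return True if value fits a SQL Server DECIMAL(p, s):
    at most p - s digits may appear to the left of the decimal point."""
    left_digits = len(str(abs(int(value)))) if abs(value) >= 1 else 0
    return left_digits <= p - s

print(fits_decimal(50, 10, 9))           # False -- 50 needs two digits left of the point
print(fits_decimal(9.876543210, 10, 9))  # True  -- one digit left of the point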
Go through the link below.
http://msdn.microsoft.com/en-gb/library/ms190476.aspx
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example, the number 123.45 has a precision of 5 and a scale of 2.
