A floating point array in C

I am developing a program that has to load floating point values, stored one per line, from a text file. I loaded each of these values into an array of floats using fscanf(). However, I found that the floating point values were stored differently: for example, 407.18 was stored as 407.179993, and 414.35 as 414.350006. Now I am stuck, because it is absolutely important that the numbers be stored in the form they were in the file, but here they seem to be different even though they are essentially the same. How do I get the numbers to store in their original form?

If it's absolutely important that the numbers be stored in the form they were in the file, don't use floating-point values. Not all fractional base-10 values can be represented exactly in binary.
For details, you should consult the article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" [PDF link].
You should store the numbers in string form or as scaled integers.

What you see is correct since floating point cannot exactly represent many real numbers. This is a must-read: What Every Computer Scientist Should Know About Floating-Point Arithmetic

The C float and double types are represented as IEEE floating point numbers. These are binary floating point numbers, not decimals. They don't have infinite precision, and even relatively 'simple' decimal values will change when converted to binary floating point and back.
Languages that focus on money (COBOL and PL/I, e.g.) have decimal data types that use slower representations that don't have this problem. There are libraries for this purpose in C or C++, or you can use scaled integers if you don't need too much range.

sfactor,
A bit of clarification: you state "...absolutely important that the numbers be stored in the form they were in the file" - what 'form' is this? I'm assuming a display form?! Here's the deal: if you store 407.18 in the file but it is displayed as 407.179993, this is totally normal and expected. When you construct your sprintf or whatever formatted print you are using, you must instruct it to limit the precision after the decimal point. This should do the trick:
#include <stdio.h>

int main(void)
{
    printf("%.2f\n", 407.179993);
    return 0;
}

You might want to take a look at an existing library that implements floating point with controlled rounding. I have used MPFR. It is fast and open source. If you are working with money then a different library would be better.

There are a lot of different ways to store numbers - each with their strengths and weaknesses.
In your case floating point numbers are not the best choice, because they don't represent most real numbers exactly.
Scaled integers are a case of the common design idiom: separate the representation from the presentation.
Decide on how many digits you want after the decimal point, e.g. 2 digits as in 123.45. You should then internally represent the number as the integer 12345. Whenever you display the value to the user, you should insert a decimal point at the relevant spot.
Short InformIT article on Scaled Integers by Danny Kalev
Before going overboard on scaled integers, do inspect the domain of your problem closely to confirm that the "slack" in the floating point numbers is significant enough to warrant using scaled integers.

Related

Adding two large double numbers in C gives an incorrect result

I would like to ask a question regarding adding two big double numbers in the C language.
Let's say there are two doubles: 1.31E+42 and 1.399E+43.
If I do the adding in Excel, the result is 15300000000000000000000000000000000000000000. That should be correct.
If I do the adding in the C language, the result is 15299999999999999804719125983728080953278464.
The difference is quite huge. Does anybody know how to get the right result when adding or multiplying big numbers in the C language that are double?
I have to add another important piece of information. One thing is to print the value out; I know that it is possible to print the number as you guys suggested. But I also need it as a value for further work. To be specific, it is a task of analysing two circles - their intersection and/or touching (whether they touch externally or internally, or whether there is an intersection).
https://www.bbc.co.uk/bitesize/guides/z9pssbk/revision/4
Two circles will touch if the distance between their centres (V) is equal to the sum of their radii (external touch), or the difference between their radii (internal touch). So I have to do the adding for the external touch and then compare whether the distance between their centers V is the same as the sum of their radii. So it is not only about printing the value out.
First circle:
Sx = -3.2E+41, Sy = -3.31E+42, R = 1.31E+42
Second circle:
Sx = 1.354E+43, Sy = 3.17E+42, R = 1.399E+43
The distance between their centers in C language is:
V = 15300000000000002280599204554488630751526912.000000
Sum of their radii is in the C language:
SumR =15299999999999999804719125983728080953278464.000000
According to the reference they should touch externally, so if I do the following condition I get the information that there is no external touch.
if (fabs(V - SumR) < 0.001)
    printf("There is an external touch");
else
    printf("No external touch");
Floating-point arithmetic approximates real-number arithmetic. When converting decimal numbers to binary floating-point or doing floating-point arithmetic, you generally should not expect the results you would get with real-number arithmetic.
This answer assumes your C implementation uses the IEEE-754 binary64 format for double and performs arithmetic using round-to-nearest-ties-to-even, including conversion from decimal to double.
The binary64 format has a sign, a 53-bit significand (a “fraction portion” of the number), and an 11-bit exponent.
This format cannot represent 1.31•10^42. The nearest value it can represent is 1310000000000000060347708657386176332693504. When your C source text contains 1.31e42, your compiler converts it to 1310000000000000060347708657386176332693504. If we write this using hexadecimal for the significand, it is 1.E137CED6DF0D1₁₆•2^139. You can see the initial “1” and 13 more hexadecimal digits (4 bits each) make up 53 bits.
The format also cannot represent 1.399•10^43. The nearest value it can represent is 13989999999999999899113922237014438982975488. When your C source text contains 1.399e43, your compiler converts it to 13989999999999999899113922237014438982975488. Using hexadecimal, this is 1.4131D470653E9₁₆•2^143.
When these are added, the ordinary mathematical result is not representable. The result produced is the nearest representable number, which is 15299999999999999804719125983728080953278464. Using hexadecimal, this is 1.5F45515DD32F6₁₆•2^143.
This is the right result as expected for floating-point arithmetic. Getting the “right” result for your purpose depends on what you want to accomplish. For many purposes, getting an approximate result with floating-point suffices, and one simply understands that the result is approximate. If an exact result is necessary, alternative formats and software must be used.
If you need to process big numbers, you should use an arbitrary-precision arithmetic (a.k.a. bignum) library, such as GMPlib.
If you want to use floating point numbers, take time to understand them by reading the floating point guide and reading about IEEE 754. They don't follow the intuitive properties of real numbers (e.g. most operations are not associative).
Of course, also read Modern C and this C reference website, then the C11 draft standard n1570.

Is there a C equivalent of C#'s decimal type?

In relation to: Convert Decimal to Double
Now, I have come across many questions relating to C#'s floating-point type called decimal, and I've seen its differences from both float and double, which got me thinking whether there is an equivalent to this type in C.
In the question specified, I got an answer I want to convert to C:
double trans = trackBar1.Value / 5000.0;
double trans = trackBar1.Value / 5000d;
Of course, the only change is that the second line is gone, but given the point about the decimal type, I want to know its C equivalent.
Question: What is the C equivalent of C#'s decimal?
C2X will standardize decimal floating point as _DecimalN: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2573.pdf
In addition, GCC implements decimal floating point as an extension; it currently supports 32-bit, 64-bit, and 128-bit decimal floats.
Edit: much of what I said below is just plain wrong, as pointed out by phuclv in the comments. Even then, I think there's valuable information to be gained in reading that answer, so I'll leave it unedited below.
So in short: yes, there is support for Decimal floating-point values and arithmetic in the standard C language. Just check out phuclv's comment and S.S. Anne's answer.
In the C programming language, as others have commented, there's no such thing as a Decimal type, nor are there types implemented like it. The simplest type that is close to it would be double, which is implemented, most commonly, as an IEEE-754 compliant 64-bit floating-point type. It contains a 1-bit sign, an 11-bit exponent and a 52-bit mantissa/fraction. So you have the following format:
[ sign: 1 bit | exponent: 11 bits | fraction: 52 bits ]
A more detailed explanation can be read here, but you can see that the exponent part is a power of two, which means that there will be imprecisions when dealing with division and multiplication by ten. A simple explanation is that division by anything that isn't a power of two is sure to repeat digits indefinitely in base 2. Example: 1/10 = 0.1 (in base 10) = 0.00011001100110011... (in base 2). And, because computers can't store an unlimited number of digits, your operations will have to be truncated/approximated.
In the case of C#'s Decimal, from the documentation:
The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction.
This last part is important, because instead of being a multiplication by a power of two, it is a division by a power of ten. So you have the following format:
[ sign: 1 bit | integer: 96 bits | scaling factor: a power of 10, exponent 0 to 28 ]
Which, as you can clearly see, is a completely different implementation from the one above!
For instance, if you wanted to divide by a power of 10, you could do that exactly, because that just involves increasing the exponent part (N). You have to be aware of the limitation of the numbers that can be represented by Decimal, though, which tops out at a measly 7.922816251426434e+28, whereas double can go up to about 1.79769e+308.
Given that there are no equivalents (yet) in C to Decimal, you may wonder "what do I do?". Well, it depends. First off, is it really important for you to use a Decimal type? Can't you use a double? To answer that question, it's helpful to know why that type was created in the first place. Again, from Microsoft's documentation:
The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors
And, in the very next sentence:
The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding
So you shouldn't think of Decimal as having "infinite precision", just as being a more appropriate type for calculations that generally need to be made in the decimal system(such as financial ones, as stated above).
If you still want a Decimal data type in C, you'd have to work on developing a library to support addition, subtraction, multiplication, etc., because C doesn't support operator overloading. Also, it still wouldn't have hardware support (e.g. from the x64 instruction set), so all of your operations would be slower than those of double, for example. Finally, if you still want something that supports a Decimal in other languages (as in your final question), you may look into the Decimal TR in C++.
As others pointed out, there's nothing in the C standard(s) like .NET's decimal, but if you're working on Windows and have the Windows SDK, it's defined:
DECIMAL structure (wtypes.h)
Represents a decimal data type that provides a sign and scale for a
number (as in coordinates.)
Decimal variables are stored as 96-bit (12-byte) unsigned integers scaled by a variable power of 10. The power of 10 scaling factor specifies the number of digits to the right of the decimal point, and ranges from 0 to 28.
typedef struct tagDEC {
  USHORT wReserved;
  union {
    struct {
      BYTE scale;
      BYTE sign;
    } DUMMYSTRUCTNAME;
    USHORT signscale;
  } DUMMYUNIONNAME;
  ULONG Hi32;
  union {
    struct {
      ULONG Lo32;
      ULONG Mid32;
    } DUMMYSTRUCTNAME2;
    ULONGLONG Lo64;
  } DUMMYUNIONNAME2;
} DECIMAL;
DECIMAL is used to represent an exact numeric value with a fixed precision and fixed scale.
The origin of this type is Windows' COM/OLE automation (introduced for VB/VBA/Macros, etc. so, it predates .NET, which has very good COM automation support), documented here officially: [MS-OAUT]: OLE Automation Protocol, 2.2.26 DECIMAL
It's also one of the VARIANT types (VT_DECIMAL). On the x86 architecture, its size fits right in the VARIANT (16 bytes).
The decimal type in C# has a precision of 28-29 significant digits and a size of 16 bytes. There is not even a close equivalent in C. In Java, the closest data type is BigDecimal. C#'s decimal gives you numbers of the form:
+/- someInteger / 10 ^ someExponent
where someInteger is a 96-bit unsigned integer and someExponent is an integer between 0 and 28.
Is Java's BigDecimal the closest data type corresponding to C#'s Decimal?

Why can't I multiply a float? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Dealing with accuracy problems in floating-point numbers
I was quite surprised when I tried to multiply a float in C (with GCC 3.2) and it did not do as I expected. As a sample:
#include <stdio.h>

int main() {
    float nb = 3.11f;
    nb *= 10;
    printf("%f\n", nb);
    return 0;
}
Displays: 31.099998
I am curious about the way floats are implemented, and why this produces this unexpected behavior.
First off, you can multiply floats. The problem you have is not the multiplication itself, but the original number you've used: multiplication can lose some precision, but here the original number you multiplied had already lost precision to begin with.
This is actually an expected behavior. floats are implemented using binary representation which means they can't accurately represent decimal values.
See MSDN for more information.
You can also see in the description of float that it has 6-7 significant digits accuracy. In your example if you round 31.099998 to 7 significant digits you will get 31.1 so it still works as expected here.
The double type would of course be more accurate, but it still has rounding error due to its binary representation, while the number you wrote is decimal.
If you want complete accuracy for decimal numbers, you should use a decimal type. This type exists in languages like C#. http://msdn.microsoft.com/en-us/library/system.decimal.aspx
You can also use rational numbers representation. Using two integers which will give you complete accuracy as long as you can represent the number as a division of two integers.
This is working as expected. Computers have finite precision: they represent floating point values with a fixed number of binary digits. This leads to floating point inaccuracies.
The Floating point wikipedia page goes into far more detail on the representation and resulting accuracy problems than I could here :)
Interesting real-world side-note: this is partly why a lot of money calculations are done using integers (cents) - don't let the computer lose money with lack of precision! I want my $0.00001!
The number 3.11 cannot be represented in binary. The closest you can get with 24 significant bits is 11.0001110000101000111101, which works out to 3.1099998950958251953125 in decimal.
If your number 3.11 is supposed to represent a monetary amount, then you need to use a decimal representation.
In the Python communities we often see people surprised at this, so there are well-tested-and-debugged FAQs and tutorial sections on the issue (of course they're phrased in terms of Python, not C, but since Python delegates float arithmetic to the underlying C and hardware anyway, all the descriptions of float's mechanics still apply).
It's not the multiplication's fault, of course -- remove the statement where you multiply nb and you'll see similar issues anyway.
From the Wikipedia article:
The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
Floating point is not precise because it uses base 2 (binary: either 0 or 1) instead of base 10, and, as many have stated before, converting base 10 values to base 2 causes rounding and precision issues.

Why does the value of this float change from what it was set to?

Why is this C program giving the "wrong" output?
#include <stdio.h>

int main(void)
{
    float f = 12345.054321f;
    printf("%f", f);
    getchar();  /* in place of the nonstandard getch(); keeps the console open */
    return 0;
}
Output:
12345.054688
But the output should be 12345.054321.
I am using VC++ in VS2008.
It's giving the "wrong" answer simply because not all real values are representable by floats (or doubles, for that matter). What you'll get is an approximation based on the underlying encoding.
In order to represent every real value, even between 1.0x10^-100 and 1.1x10^-100 (a truly minuscule range), you still require an infinite number of bits.
Single-precision IEEE754 values have only 32 bits available (some of which are tasked to other things such as the exponent and NaN/Inf representations) and cannot therefore give you infinite precision. They actually have 23 bits available, giving a precision of about 2^24 (there's an extra implicit bit), or just over 7 decimal digits (log10(2^24) is roughly 7.2).
I enclose the word "wrong" in quotes because it's not actually wrong. What's wrong is your understanding about how computers represent numbers (don't be offended though, you're not alone in this misapprehension).
Head on over to http://www.h-schmidt.net/FloatApplet/IEEE754.html and type your number into the "Decimal representation" box to see this in action.
If you want a more accurate number, use doubles instead of floats - these have double the number of bits available for representing values (assuming your C implementation is using IEEE754 single and double precision data types for float and double respectively).
If you want arbitrary precision, you'll need to use a "bignum" library like GMP although that's somewhat slower than native types so make sure you understand the trade-offs.
The decimal number 12345.054321 cannot be represented accurately as a float on your platform. The result that you are seeing is a decimal approximation to the closest number that can be represented as a float.
floats are about convenience and speed, and use a binary representation - if you care about precision use a decimal type.
To understand the problem, read What Every Computer Scientist Should Know About Floating-Point Arithmetic:
http://docs.sun.com/source/806-3568/ncg_goldberg.html
For a solution, see the Decimal Arithmetic FAQ:
http://speleotrove.com/decimal/decifaq.html
It's all to do with precision. Your number cannot be stored accurately in a float.
Single-precision floating point values can only represent about six to nine significant (decimal) digits. Beyond that point, you're seeing quantization error.

Real vs. Floating Point vs. Money

Why, when I save a value of say 40.54 in SQL Server to a column of type Real, does it return to me a value that is more like 40.53999878999 instead of 40.54? I've seen this a few times but have never figured out quite why it happens. Has anyone else experienced this issue, and if so, what causes it?
Have a look at What Every Computer Scientist Should Know About Floating Point Arithmetic.
Floating point numbers in computers don't represent decimal fractions exactly. Instead, they represent binary fractions. Most fractional numbers don't have an exact representation as a binary fraction, so there is some rounding going on. When such a rounded binary fraction is translated back to a decimal fraction, you get the effect you describe.
For storing money values, SQL databases normally provide a DECIMAL type that stores exact decimal digits. This format is slightly less efficient for computers to deal with, but it is quite useful when you want to avoid decimal rounding errors.
Floating point numbers use binary fractions, and they don't correspond exactly to decimal fractions.
For money, it's better to either store number of cents as integer, or use a decimal number type. For example, Decimal(8,2) stores 8 digits including 2 decimals (xxxxxx.xx), i.e. to cent precision.
In a nutshell, it's for pretty much the same reason that one-third cannot be exactly expressed in decimal. Have a look at David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for details.
To add a clarification: a floating point number stored in a computer behaves as described by the other posts here because, as described, it is stored in binary format.
This means that unless its value is an exact sum of powers of two (scaled by the exponent), it cannot be represented exactly.
Some systems, on the other hand, store fractional numbers in decimal (SQL Server's Decimal and Numeric data types, and Oracle's Number datatype, for example), and their internal representation is therefore exact for any number with a finite decimal representation. But numbers that don't have one (such as 1/3) still cannot be represented exactly.