For a long time I thought floating-point arithmetic was well-defined and that different platforms performing the same calculation should get the same results (given the same rounding modes).
Yet Microsoft's SQL Server deems any calculation performed with floating point to have an "imprecise" result, meaning you can't have an index on it. This suggests to me that the guys at Microsoft thought there's a relevant catch regarding floating point.
So what is the catch?
EDIT: You can have indexes on floats in general, just not on computed columns. Also, geodata uses floating point types for the coordinates and those are obviously indexed, too. So it's not a problem of reliable comparison in general.
Floating-point arithmetic is well defined by the IEEE 754 standard. In the documentation you point out, Microsoft has apparently not chosen to adhere to the standard.
There are a variety of issues that make floating-point reproducibility difficult, and you can find Stack Overflow discussions about them by searching for “[floating-point] reproducibility”. Most of them fall into a few groups:

Lack of control in high-level languages. The individual floating-point operations are completely reproducible and specified by IEEE 754, and the hardware provides sufficient IEEE 754 conformance, but the high-level language specification does not adequately map language constructs to specific floating-point operations.

Differences in math library routines. Functions such as sin and log are “hard” to compute in some sense, and vendors implement them without what is called correct rounding, so each vendor’s routines have slightly different error characteristics than the others'.

Multithreading and other issues that allow operations to be performed in different orders, thus yielding different results.

And so on.
In a single system such as Microsoft’s SQL Server, Microsoft presumably could have controlled these issues if they wanted to. Still, there are issues to consider. For example, a database system may have a sum function that computes the sum of many things. For speed, you may wish the sum implementation to have the flexibility to add the elements in any order, so that it can take advantage of multiprocessing or of adding the elements in whatever order they happen to be stored in. But adding floating-point data using elementary add operations of the same floating-point format has different results depending on the order of the elements. To make the sum reproducible, you have to specify the order of operation or use extra precision or other techniques, and then performance suffers.
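To make that concrete, here is a minimal C sketch (the data values are purely illustrative) showing that summing the same array forward and backward with plain float adds can give different totals:

#include <stdio.h>

/* Minimal sketch: adding the same values in two different orders gives
   different totals, because every add rounds to the nearest float. */
int main(void) {
    float data[] = {1.0f, 1.0f, 1.0f, 1.0e8f, -1.0e8f};
    int n = sizeof data / sizeof data[0];

    float forward = 0.0f, backward = 0.0f;
    for (int i = 0; i < n; i++)
        forward += data[i];
    for (int i = n - 1; i >= 0; i--)
        backward += data[i];

    /* With IEEE-754 float: the forward sum reaches 3, but adding 1.0e8f
       rounds the 3 away; the backward sum cancels the large values first
       and keeps the 3. */
    printf("forward  = %g\n", forward);   /* 0 */
    printf("backward = %g\n", backward);  /* 3 */
    return 0;
}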
So, not making floating-point arithmetic reproducible is a choice that is made, not a consequence of any lack of specification for floating-point arithmetic.
Another problem for database purposes is that even well defined and completely specified floating-point arithmetic has NaN values. (NaN, an abbreviation for Not a Number, represents a floating-point datum that is not a number. A NaN is produced as the result of an operation that has no mathematical result, such as the real square root of a negative number. NaNs act as placeholders so that floating-point operations can continue without interruption, and an application can complete a set of floating-point operations and then take action to replace or otherwise deal with any NaNs that arose.) Because a NaN does not represent a number, it is not equal to anything, not even itself. Comparing two NaNs for equality produces false, even if the NaNs are represented with exactly the same bits. This is a problem for databases because NaNs cannot be used as keys for looking up records: a NaN search key will never equal a NaN in the key field of a record. Sometimes this is dealt with by defining two different ordering relations: one is the usual mathematical comparison, which defines less than, equal to, and greater than for numbers (and for which all three are false for NaNs), and a second defines a sort order and is defined for all data, including NaNs.
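As a quick illustration (a minimal C sketch, not tied to any particular database), a NaN compares unequal even to a bit-for-bit copy of itself, which is exactly why it cannot serve as an ordinary lookup key:

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch: a NaN compares unequal to everything, including an
   identical copy of itself. */
int main(void) {
    double a = nan("");          /* one NaN */
    double b = a;                /* bit-for-bit copy */

    printf("a == b      : %d\n", a == b);        /* 0: NaN != NaN */
    printf("a < b       : %d\n", a < b);         /* 0 */
    printf("a > b       : %d\n", a > b);         /* 0 */
    printf("isnan(a)    : %d\n", isnan(a));      /* 1: the way to test for NaN */
    printf("same bits   : %d\n", memcmp(&a, &b, sizeof a) == 0);  /* 1 */
    return 0;
}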
It should be noted that each floating-point datum that is not a NaN represents a certain number exactly. There is no imprecision in a floating-point number. A floating-point number does not represent an interval. Floating-point operations approximate real arithmetic in that they return values approximately equal to the exact mathematical results, while floating-point numbers are exact. Elementary floating-point operations are exactly specified by IEEE 754. Lack of reproducibility arises in using different operations (including the same operations with different precisions), in using operations in different orders, or in using operations that do not conform to the IEEE 754 standard.
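For instance, the double written as 0.1 in source code is not “0.1 plus or minus something”; it is one specific, exactly representable value, which you can inspect by printing extra digits (a minimal sketch; the values in the comments assume IEEE-754 binary64):

#include <stdio.h>

/* Minimal sketch: the double nearest to 0.1 is one exact binary value;
   printing it with enough digits reveals which value that is. */
int main(void) {
    double x = 0.1;
    printf("%.25f\n", x);   /* 0.1000000000000000055511151 */
    printf("%a\n", x);      /* exact hex form: 0x1.999999999999ap-4 */
    return 0;
}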
Related
I had a small function where at one point I divided by 0 and created my first NaN. After looking on the internet, I found out that NaN means "not a number" and that NaN != NaN.
My questions are:
At run time, how is NaN stored, and how does the controller know that a variable holds the NaN value? (I am working with small microcontrollers in C; is the mechanism different in programs running on a PC, e.g. in C# and other OOP languages?)
Is Inf similar to NaN?
In C, the types of values are determined statically by your source code. For named objects (“variables”), you explicitly declare the types. For constants, the syntax of them (e.g., 3 versus 3.) determines the type. In typical C implementations that compile to machine code on common processors, the processors have different instructions for working with integers and floating-point. The compiler uses integer instructions for integers and floating-point instructions for floating-point values. The floating-point instructions are designed in hardware to work with encodings of floating-point values.
In IEEE-754 binary floating-point, floating-point data is encoded with a sign bit, an exponent field, and a significand field. If the exponent field is all ones and the significand field is not all zeros, the datum represents a NaN. In common modern processors, this is built into the hardware.
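If you want to see that rule in action, here is a minimal C sketch that inspects the bit fields directly; it assumes double is the 64-bit IEEE-754 binary format, and is_nan_bits is just an illustrative helper name:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch, assuming double is 64-bit IEEE-754: a datum is a NaN
   when its exponent field is all ones and its significand field is
   nonzero. */
static int is_nan_bits(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);              /* reinterpret the bytes */
    uint64_t exponent    = (bits >> 52) & 0x7FF;
    uint64_t significand =  bits & 0xFFFFFFFFFFFFFULL;
    return exponent == 0x7FF && significand != 0;
}

int main(void) {
    double zero = 0.0;
    printf("%d\n", is_nan_bits(zero / zero));  /* 1: 0/0 yields a quiet NaN */
    printf("%d\n", is_nan_bits(1.0 / zero));   /* 0: infinity has an all-zero significand */
    printf("%d\n", is_nan_bits(3.5));          /* 0: ordinary number */
    return 0;
}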
Infinity is not largely similar to a NaN. They might both be considered special in that they are not normal numbers and are processed somewhat differently from normal numbers. However, in IEEE-754 arithmetic, infinity is a number and participates in arithmetic. NaN is not a number.
Floating point is implementation-defined in C, so there aren't any guarantees.
Our code needs to be portable, we are discussing whether or not acceptable to use IEEE754 floats in our protocol. For performance reasons it would be nice if we don't have to convert back and forth between a fixed point format when sending or receiving data.
I know that there can be differences between platforms and architectures regarding the size of long or wchar_t, but I can't seem to find anything specific about float and double.
What I have found so far is that the byte order may be reversed on big-endian platforms, and that there are platforms without floating point support where code containing float and double wouldn't even link. Otherwise, platforms seem to stick to IEEE754 single and double precision.
So is it safe to assume that floating point is in IEEE754 when available?
EDIT: In response to a comment:
What is your definition of "safe"?
By safe I mean, the bit pattern on one system means the same on the another (after the byte rotation to deal with endianness).
Essentially all architectures in current non-punch-card use, including embedded architectures and exotic signal processing architectures, offer one of two floating point systems:
IEEE-754.
IEEE-754 except for blah. That is, they mostly implement 754, but cheap out on some of the more expensive and/or fiddly bits.
The most common cheap-outs:
Flushing denormals to zero. This invalidates certain sometimes-useful theorems (in particular, the theorem that a-b can be exactly represented if a and b are within a factor of 2), but in practice it's generally not going to be an issue.
Failure to recognize inf and NaN as special. These architectures will fail to follow the rules regarding inf and NaN as operands, and may not saturate to inf, instead producing numbers that are larger than FLT_MAX, which will generally be recognized by other architectures as NaN.
Proper rounding of division and square root. It's a whole lot easier to guarantee that the result is within 1-3 ulps of the exact result than within 1/2 ulp. A particularly common case is for division to be implemented as reciprocal+multiplication, which loses you one bit of precision.
Fewer or no guard digits. This is an unusual cheap-out, but means that other operations can be 1-2 ulps off.
BUUUUT... even those except for blah architectures still use IEEE-754's representation of numbers. Other than byte ordering issues, the bits describing a float or double on architecture A are essentially guaranteed to have the same meaning on architecture B.
So as long as all you care about is the representation of values, you're totally fine. If you care about cross-platform consistency of operations, you may need to do some extra work.
EDIT: As Chux mentions in the comments, a common extra source of inconsistency between platforms is the use of extended precision, such as the x87's 80-bit internal representation. That's the opposite of a cheap-out, and (with proper treatment) fully conforms to both IEEE-754 and the C standard, but it will likewise cause results to differ between architectures, and even between compiler versions and following apparently minor and unrelated code changes. However: a particular x86/x64 executable will NOT produce different results on different processors due to extended precision.
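If you want to know whether your compiler evaluates intermediate expressions in a wider format such as the x87's 80-bit one, C99's <float.h> exposes FLT_EVAL_METHOD; a minimal sketch:

#include <float.h>
#include <stdio.h>

/* Minimal sketch: FLT_EVAL_METHOD reports whether floating expressions
   are evaluated in a wider format.
     0  - evaluate in the type's own format
     1  - evaluate float and double in double
     2  - evaluate everything in long double (typical of x87 code)
    -1  - indeterminable */
int main(void) {
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    printf("LDBL_MANT_DIG   = %d\n", (int)LDBL_MANT_DIG);  /* 64 where long double is the x87 80-bit format */
    return 0;
}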
There is a macro to check (since C99):
C11 §6.10.8.3 Conditional feature macros
__STDC_IEC_559__ The integer constant 1, intended to indicate conformance to the specifications in annex F (IEC 60559 floating-point arithmetic).
IEC 60559 (short for ISO/IEC/IEEE 60559) is another name for IEEE-754.
Annex F then establishes the mapping between C floating types and IEEE-754 types:
The C floating types match the IEC 60559 formats as follows:
The float type matches the IEC 60559 single format.
The double type matches the IEC 60559 double format.
The long double type matches an IEC 60559 extended format, else a non-IEC 60559 extended format, else the IEC 60559 double format.
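So one minimal (compile-time) sketch is simply to refuse to build when the macro is absent:

/* Minimal sketch: refuse to build on implementations that do not
   announce Annex F (IEC 60559) conformance. */
#include <stdio.h>

#ifndef __STDC_IEC_559__
#error "This code assumes IEC 60559 (IEEE-754) floating point."
#endif

int main(void) {
    printf("Annex F conformance advertised by this implementation.\n");
    return 0;
}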
I suggest you need to look more carefully at your definition of portable.
I would also suggest your definition of "safe" is insufficient. Even if the binary representation (allowing for endianness) is okay, the operations on variables may behave differently. After all, there are few applications of floating point that don't involve operations on variables.
If you want to support all host architectures that have ever been created then assuming IEEE floating point format is inherently unsafe. You will have to deal with systems that support different formats, systems that don't support floating point at all, systems for which compilers have switches to select floating point behaviours (with some behaviours being associated with non-IEEE formats), CPUs that have an optional co-processor (so floating point support depends on whether an additional chip is installed, but otherwise variants of the CPU are identical), systems that emulate floating point operations in software (some such software emulators are configurable at run time), and systems with buggy or incomplete implementation of floating point (which may or may not be IEEE based).
If you are willing to limit yourself to hardware of post 2000 vintage, then your risk is lower but non-zero. Virtually all CPUs of that vintage support IEEE in some form. However you still (as with older CPUs too) need to consider what floating point operations you wish to have supported, and the trade-offs you are willing to accept to have them. Different CPUs (or software emulation) have less complete implementation of floating point than others, and some are configured by default to not support some features - so it is necessary to change settings to enable some features, which can impact on performance or correctness of your code.
If you need to share floating point values between applications (which may be on different hosts with different features, built with different compilers, etc) then you will need to define a protocol. That protocol might involve IEEE format, but all your applications will need to be able to handle conversion between the protocol and their native representations.
Almost all common architectures now use IEEE-754, but this is not required by the standard. There used to be old non-IEEE-754 architectures, and some could still be around.
If the only requirement is for exchange of network data, my advice is:
if __STDC_IEC_559__ is defined, only use network order for the bytes and assume you do have standard IEEE-754 for float and double.
if __STDC_IEC_559__ is not defined, use a special interchange format; that could be IEEE-754 (one single protocol) or anything else, but then you need a protocol indication.
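As a sketch of the first case, assuming __STDC_IEC_559__ and a 64-bit double, and assuming the integer and floating-point byte orders agree (true on practically all current hardware), you could serialize through a uint64_t; put_double_be and get_double_be are just illustrative names:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch: copy the object representation of a double into a
   uint64_t and emit the bytes most-significant first, so the wire
   format does not depend on host endianness. */
static void put_double_be(double d, unsigned char out[8]) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    for (int i = 0; i < 8; i++)
        out[i] = (unsigned char)(bits >> (56 - 8 * i));
}

static double get_double_be(const unsigned char in[8]) {
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits = (bits << 8) | in[i];
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

int main(void) {
    unsigned char wire[8];
    put_double_be(-123.456, wire);
    printf("%g\n", get_double_be(wire));   /* -123.456 */
    return 0;
}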
Like others have mentioned, there's the __STDC_IEC_559__ macro, but it isn't very useful because it's only set by compilers that completely implement the respective annex in the C standard. There are compilers that implement only a subset but still have (mostly) usable IEEE floating point support.
If you're only concerned with the binary representation, you should write a feature test that checks the bit patterns of certain floating-point numbers. Something like:
#include <stdint.h>
#include <stdio.h>

typedef union {
    double d;
    uint64_t i;
} double_bits;

int main(void) {
    double_bits b;
    b.d = 2.5;
    if (b.i != UINT64_C(0x4004000000000000)) {
        fprintf(stderr, "Not an IEEE-754 double\n");
        return 1;
    }
    return 0;
}
Check a couple of numbers with different exponents, mantissae, and signs, and you should be on the safe side. Since these tests aren't expensive, you could even run them once at runtime.
Strictly speaking, it's not safe to assume floating-point support; generally speaking, the vast majority of platforms will support it. Notable exceptions include (now deprecated) VMS systems running on Alpha chips.
If you have the luxury of runtime checking, consider paranoia, a floating-point vetting tool written by William Kahan.
Edit: sounds like your application is more concerned with binary formats as they pertain to storage and/or serialization. I would suggest narrowing your scope to choosing a third-party library that supports this. You could do worse than Google Protocol Buffers.
I am working on some code to be run on a very heterogeneous cluster. The program performs interval arithmetic using 3, 4, or 5 32 bit words (unsigned ints) to represent high precision boundaries for the intervals. It seems to me that representing some words in floating point in some situations may produce a speedup. So, my question is two parts:
1) Are there any guarantees in the C11 standard as to what range of integers will be represented exactly, and what range of input pairs would have their products represented exactly? One multiplication error could entirely change the results.
2) Is this even a reasonable approach? It seems that the separation of floating point and integer processing within the processor would allow data to be running through both pipelines simultaneously, improving throughput. I don't know much about hardware though, so I'm not sure that the pipelines for integers and floating points actually are all that separate, or, if they are, if they can be used simultaneously.
I understand that the effectiveness of this sort of thing is platform dependent, but right now I am concerned about the reliability of the approach. If it is reliable, I can benchmark it and see, but I am having trouble proving reliability. Secondly, perhaps this sort of approach shows little promise, and if so I would like to know so I can focus elsewhere.
Thanks!
I don't know about the Standard, but it seems that you can assume all your processors are using the normal IEEE floating point format. In this case, it's pretty easy to determine whether your calculations are correct. The first integer not representable by the 32-bit float format is 16777217 (2^24 + 1), so if all your intermediate results are less than that (in absolute value), float will be fine.
The reverse is also true: if any intermediate result is greater than 2^24 (in absolute value) and odd, float representation will alter it, which is unacceptable for you.
If you are worried specifically about multiplications, look at how the multiplicands are limited. If one is limited by 2^11, and the other by 2^13, you will be fine (just barely). If, for example, both are limited by 2^16, there almost certainly is a problem. To prove it, find a test case that causes their product to exceed 2^24 and be odd.
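A minimal sketch of that limit, assuming an ordinary IEEE-754 float with round-to-nearest:

#include <stdio.h>

/* Minimal sketch: 16777217 (2^24 + 1) is the first positive integer a
   32-bit float cannot hold exactly, so products that land on odd values
   above 2^24 get rounded. */
int main(void) {
    float ok  = 16777216.0f;            /* 2^24     : exact   */
    float bad = 16777217.0f;            /* 2^24 + 1 : rounded */
    printf("%.1f\n%.1f\n", ok, bad);    /* typically 16777216.0 twice */

    float p = 4099.0f * 4099.0f;        /* exact product is 16801801, odd and > 2^24 */
    printf("%.1f\n", p);                /* not 16801801.0 */
    return 0;
}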
Everything you need to know about how far you can go while still keeping exact integer precision is available to you through the macros defined in <float.h>. There you have the exact description of the floating-point types: FLT_RADIX for the radix, FLT_MANT_DIG for the number of significand digits, etc.
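For example (a minimal sketch; the values in the comments are what an IEEE-754 implementation reports):

#include <float.h>
#include <stdio.h>

/* Minimal sketch: query the format parameters that bound exact integer
   arithmetic in each floating type. */
int main(void) {
    printf("FLT_RADIX    = %d\n", FLT_RADIX);      /* base, 2 on IEEE-754 */
    printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);   /* 24: integers exact up to 2^24 */
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);   /* 53: integers exact up to 2^53 */
    return 0;
}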
As you say, whether or not such an approach is efficient will depend on the platform. You should be aware that this depends very much on the particular processor you have, not only the processor family. From one Intel or AMD processor variant to another there can already be noticeable differences. So you'd basically benchmark all possibilities and have code that decides on program startup which variant to use.
Scientific notation is the common way to express a number with an explicit order of magnitude. First a nonzero digit, then a radix point, then a fractional part, and the exponent. In binary, there is only one possible nonzero digit.
Floating-point math involves an implicit first digit equal to one, then the mantissa bits "follow the radix point."
So why does frexp() put the radix point to the left of the implicit bit, and return a number in [0.5, 1) instead of scientific-notation-like [1, 2)? Is there some overflow to beware of?
Effectively it subtracts one more than the bias value specified by IEEE 754/IEC 60559. In hardware, this potentially trades an addition for an XOR. Alone, that seems like a pretty weak argument, considering that in many cases getting back to normal will require another floating-point operation.
The rationale says:
4.5.4.2 The frexp function
The functions frexp, ldexp, and modf are primitives used by the
remainder of the library. There was some sentiment for dropping them
for the same reasons that ecvt, fcvt, and gcvt were dropped, but their
adherents rescued them for general use. Their use is problematic: on
nonbinary architectures ldexp may lose precision, and frexp may be
inefficient.
One can speculate that the “remainder of the library” was more convenient to write with frexp's convention, or was already traditionally written against this interface although it did not provide any benefit.
I know that this does not fully answer the question, but it did not quite fit inside a comment.
I should also point out that some of the choices made in the design of the C language predate IEEE 754. Perhaps the format returned by frexp made sense with the PDP-11's floating-point format(s), or any other architecture on which a function frexp was first introduced. EDIT: See also page 155 of the manual for one PDP-11 model.
When I run
printf("%.8f\n", 971090899.9008999);
printf("%.8f\n", 9710908999008999.0 / 10000000.0);
I get
971090899.90089989
971090899.90089977
I know why neither is exact, but what I don't understand is why doesn't the second match the first?
I thought basic arithmetic operations (+ - * /) were always as accurate as possible...
Isn't the first number a more accurate result of the division than the second?
Judging from the numbers you're using and based on the standard IEEE 754 floating point standard, it seems the left hand side of the division is too large to be completely encompassed in the mantissa (significand) of a 64-bit double.
You've got 53 bits' worth of pure integer representation (a 52-bit stored significand plus an implicit leading 1) before you start bleeding precision. 9710908999008999 needs 54 bits in its representation, so it does not fit exactly -- thus, the truncation and approximation begin and your end numbers get all finagled.
EDIT: As was pointed out, the first number that has no mathematical operations done on it doesn't fit either. But, since you're doing extra math on the second one, you're introducing extra rounding errors not present with the first number. So you'll have to take that into consideration too!
Evaluating the expression 971090899.9008999 involves one operation, a conversion from decimal to the floating-point format.
Evaluating the expression 9710908999008999.0 / 10000000.0 involves three operations:
Converting 9710908999008999.0 from decimal to the floating-point format.
Converting 10000000.0 from decimal to the floating-point format.
Dividing the results of the above operations.
The second of those should be exact in any good C implementation, because the result is exactly representable. However, the other two add rounding errors.
C does not require implementations to convert decimal to floating-point as accurately as possible; it allows some slack. However, a good implementation does convert accurately, using extra precision if necessary. Thus, the single operation on 971090899.9008999 produces a more accurate result than the multiple operations.
Additionally, as we learn from a comment, the C implementation used by the OP converts 9710908999008999.0 to 9710908999008998. This is incorrect by the rules of IEEE-754 for the common round-to-nearest mode. The correct result is 9710908999009000. Both of these candidates are representable in IEEE-754 64-bit binary, and both are equidistant from the source value, 9710908999008999. The usual rounding mode is round-to-nearest, ties-to-even, meaning the candidate with the even low bit should be selected, which is 9710908999009000 (with significand 0x1.1400298aa8174), not 9710908999008998 (with significand 0x1.1400298aa8173). (IEEE 754 defines another round-to-nearest mode: ties-to-away, which selects the candidate with the larger magnitude, which is again 9710908999009000.)
The C standard permits some slack in conversions; either of these two candidates conforms to the C standard, but good implementations also conform to IEEE 754.
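If you want to see the difference on your own implementation, a minimal sketch is to print both results with %a as well, which shows the exact significand bits; the comment about equality only describes what the OP's implementation would be expected to print:

#include <stdio.h>

/* Minimal sketch: print both results in hexadecimal floating form (%a)
   so a one-ulp difference between the conversion and the division is
   visible directly in the bits. */
int main(void) {
    double direct  = 971090899.9008999;
    double divided = 9710908999008999.0 / 10000000.0;

    printf("direct  = %.8f = %a\n", direct,  direct);
    printf("divided = %.8f = %a\n", divided, divided);
    printf("equal   = %d\n", direct == divided);   /* 0 on the OP's implementation */
    return 0;
}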