Storing floating point numbers in a file - c

For a C application that I am implementing, I need to be able to read and write a set of configuration values to a file. These values are floating point numbers. In the future it is possible that another application (could be written in C++, Python, Perl, etc...) will use this same data, so these configuration values need to be stored in a well defined format that is compiler and machine independent.
Byte order conversion functions (ntoh/hton) can be used to handle the endianness, but what is the best way to get around the different meanings of a "float" value? Is there a common method for storing floats? Rounding and truncation are not a problem, as long as the behaviour is defined.

There are probably two main options:
Store in text format. Here you would standardise on a particular format, using a well-defined decimal separator and scientific notation, e.g. 6.66e42.
Store in binary format using the IEEE754 standard. Use either the 4 or 8 byte data type. And as you noted, you'd have to settle on an endianness convention.
A text format is probably more portable because there are machines that do not natively understand IEEE754. That said, such machines are rare in these times.

The C formatted input/output functions have a format specifier for this, %a. It formats a floating-point number in a hexadecimal floating-point format, [-]0xh.hhhhp±d. That is, it has a “-” sign if needed, hexadecimal digits for the fraction part, including a radix point, a “p” (for “power”) to start the exponent and a signed exponent of two (in decimal).
As long as your C implementation uses binary floating-point (or any floating-point such that its FLT_RADIX is a power of two), conversion with the %a format should be exact.
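A minimal round-trip sketch of this approach (it assumes a hosted C99 implementation, whose strtod accepts the hexadecimal form that %a produces):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double original = 0.1;
    char buf[64];

    /* Write the value in hexadecimal floating-point form. */
    snprintf(buf, sizeof buf, "%a", original);
    printf("stored as: %s\n", buf);        /* e.g. 0x1.999999999999ap-4 */

    /* Read it back; strtod (C99) accepts this form. */
    double restored = strtod(buf, NULL);
    printf("round-trip exact: %s\n", original == restored ? "yes" : "no");
    return 0;
}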

IEEE 754, or ISO/IEC/IEEE 60559:2011, is the floating-point standard used by most languages.
For C, it is officially adopted as of C11 (C11 Annex F, IEC 60559 floating-point arithmetic).

For small amounts of data, such as configuration values, go with text, not binary. If you want, go for structured text of some form, such as JSON or XML. Decide how many digits to write for each floating-point number according to your requirements.
As the range of required portability (across languages, operating systems, time, space, etc) increases so the force of the argument in favour of text becomes stronger.
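For illustration, a sketch of the plain-text approach; the file name config.txt and the single-value layout are just placeholders:

#include <stdio.h>

int main(void) {
    double value = 3.141592653589793;

    /* Write: 17 significant decimal digits are enough to round-trip an
       IEEE-754 double exactly, assuming correctly rounded conversions
       (DBL_DECIMAL_DIG). */
    FILE *f = fopen("config.txt", "w");
    if (!f) return 1;
    fprintf(f, "%.17g\n", value);
    fclose(f);

    /* Read it back. */
    f = fopen("config.txt", "r");
    if (!f) return 1;
    double restored;
    if (fscanf(f, "%lf", &restored) != 1) { fclose(f); return 1; }
    fclose(f);

    printf("round-trip exact: %s\n", value == restored ? "yes" : "no");
    return 0;
}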

Related

Is there a fixed point representation available in C or Assembly

As far as I know, representing a fraction in C relies on floats and doubles which are in floating point representation.
Assume I'm trying to represent 1.5 which is a fixed point number (only one digit to the right of the radix point). Is there a way to represent such number in C or even assembly using a fixed point data type?
Are there even any fixed point instructions on x86 (or other architectures) which would operate on such type?
Every integral type can be used as a fixed point type. A favorite of mine is to use int64_t with an implied 8 digit shift, e.g. you store 1.5 as 150000000 (1.5e8). You'll have to analyze your use case to decide on an underlying type and how many digits to shift (that is, assuming you use base-10 scaling, which most people do). But 64 bits scaled by 10^8 is a pretty reasonable starting point with a broad range of uses.
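A minimal sketch of that scheme, with an implied scale of 10^8; the helper names and the (absent) overflow handling are purely illustrative:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Implied scale of 10^8: the int64_t value 150000000 represents 1.5. */
#define SCALE INT64_C(100000000)

typedef int64_t fixed;

static fixed fixed_make(int64_t whole, int64_t hundred_millionths) {
    return whole * SCALE + hundred_millionths;
}

static fixed fixed_mul(fixed a, fixed b) {
    /* The raw product is in units of 1/SCALE^2, so renormalize.
       A real implementation would also worry about overflow and rounding. */
    return a * b / SCALE;
}

int main(void) {
    fixed a = fixed_make(1, 50000000);   /* 1.5  */
    fixed b = fixed_make(2, 25000000);   /* 2.25 */
    fixed sum = a + b;                   /* addition is plain integer addition */
    fixed prod = fixed_mul(a, b);        /* 3.375 */
    /* Printing assumes non-negative values. */
    printf("sum  = %" PRId64 ".%08" PRId64 "\n", sum / SCALE, sum % SCALE);
    printf("prod = %" PRId64 ".%08" PRId64 "\n", prod / SCALE, prod % SCALE);
    return 0;
}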
While some C compilers offer special fixed-point types as an extension (not part of the standard C language), there's really very little use for them. Fixed point is just integers, interpreted with a different unit. For example, fixed point currency in typical cent denominations is just using integers that represent cents instead of dollars (or whatever the whole currency unit is) for your unit. Likewise, you can think of 8-bit RGB as having units of 1/256 or 1/255 "full intensity".
Adding and subtracting fixed point values with the same unit is just adding and subtracting integers. This is just like arithmetic with units in the physical sciences. The only value in having the language track that they're "fixed point" would be ensuring that you can only add/subtract values with matching units.
For multiplication and division, the result will not have the same units as the operands, so you have to either treat the result as a different fixed-point type or renormalize. For example, if you multiply two values representing 1/16 units, the result will have 1/256 units; you can then scale the value down by a factor of 16 (rounding in whatever way is appropriate) to get back to a value with 1/16 units.
If the issue here is representing decimal values as fixed point, there's probably a library for this for C; you could try a web search. You could also create your own BCD fixed-point library in assembly, using the BCD-related instructions AAA (adjust after addition), AAS (adjust after subtraction) and AAM (adjust after multiplication). However, these instructions are invalid in x86-64 (64-bit) mode, so you'd need to build a 32-bit program, which should still be runnable on a 64-bit OS.
Financial institutions in the USA and other countries are required by law to perform decimal based math on currency values, to avoid decimal -> binary -> decimal conversion issues.

Is it safe to assume floating point is represented using IEEE754 floats in C?

Floating point is implementation-defined in C, so there aren't any guarantees.
Our code needs to be portable, we are discussing whether or not acceptable to use IEEE754 floats in our protocol. For performance reasons it would be nice if we don't have to convert back and forth between a fixed point format when sending or receiving data.
I know that there can be differences between platforms and architectures regarding the size of long or wchar_t, but I can't seem to find anything specific about float and double.
What I have found so far is that the byte order may be reversed on big-endian platforms, and that there are platforms without floating point support where code containing float and double won't even link. Otherwise, platforms seem to stick to IEEE-754 single and double precision.
So is it safe to assume that floating point is in IEEE754 when available?
EDIT: In response to a comment:
What is your definition of "safe"?
By safe I mean, the bit pattern on one system means the same on the another (after the byte rotation to deal with endianness).
Essentially all architectures in current non-punch-card use, including embedded architectures and exotic signal processing architectures, offer one of two floating point systems:
IEEE-754.
IEEE-754 except for blah. That is, they mostly implement 754, but cheap out on some of the more expensive and/or fiddly bits.
The most common cheap-outs:
Flushing denormals to zero. This invalidates certain sometimes-useful theorems (in particular, the theorem that a-b can be exactly represented if a and b are within a factor of 2), but in practice it's generally not going to be an issue.
Failure to recognize inf and NaN as special. These architectures will fail to follow the rules regarding inf and NaN as operands, and may not saturate to inf, instead producing numbers that are larger than FLT_MAX, which will generally be recognized by other architectures as NaN.
Not properly rounding division and square root. It's a whole lot easier to guarantee that the result is within 1-3 ulps of the exact result than within 1/2 ulp. A particularly common case is for division to be implemented as reciprocal-plus-multiplication, which loses you one bit of precision.
Fewer or no guard digits. This is an unusual cheap-out, but means that other operations can be 1-2 ulps off.
BUUUUT... even those except for blah architectures still use IEEE-754's representation of numbers. Other than byte ordering issues, the bits describing a float or double on architecture A are essentially guaranteed to have the same meaning on architecture B.
So as long as all you care about is the representation of values, you're totally fine. If you care about cross-platform consistency of operations, you may need to do some extra work.
EDIT: As Chux mentions in the comments, a common extra source of inconsistency between platforms is the use of extended precision, such as the x87's 80-bit internal representation. That's the opposite of a cheap-out, and (with proper treatment) fully conforms to both IEEE-754 and the C standard, but it will likewise cause results to differ between architectures, and even between compiler versions and following apparently minor and unrelated code changes. However: a particular x86/x64 executable will NOT produce different results on different processors due to extended precision.
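If the goal is just to ship those bit patterns around (as in the original question), a sketch of fixed-byte-order serialization might look like the following; it assumes double is IEEE-754 binary64 and that uint64_t is available, and the helper names are made up for illustration:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Serialize a double as 8 bytes in a fixed (big-endian) order. */
static void put_double_be(double d, unsigned char out[8]) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);            /* grab the raw bit pattern */
    for (int i = 0; i < 8; i++)
        out[i] = (unsigned char)(bits >> (56 - 8 * i));
}

static double get_double_be(const unsigned char in[8]) {
    uint64_t bits = 0;
    for (int i = 0; i < 8; i++)
        bits = (bits << 8) | in[i];
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

int main(void) {
    unsigned char buf[8];
    put_double_be(-1.25, buf);
    printf("round-trip: %g\n", get_double_be(buf));
    return 0;
}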
There is a macro to check (since C99):
C11 §6.10.8.3 Conditional feature macros
__STDC_IEC_559__ The integer constant 1, intended to indicate conformance to the specifications in annex F (IEC 60559 floating-point arithmetic).
IEC 60559 (short for ISO/IEC/IEEE 60559) is another name for IEEE-754.
Annex F then establishes the mapping between C floating types and IEEE-754 types:
The C floating types match the IEC 60559 formats as follows:
The float type matches the IEC 60559 single format.
The double type matches the IEC 60559 double format.
The long double type matches an IEC 60559 extended format, else a non-IEC 60559 extended format, else the IEC 60559 double format.
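A small sketch of checking this at build/run time (it only reports what the implementation claims; it doesn't prove full conformance):

#include <stdio.h>
#include <float.h>

int main(void) {
#if defined(__STDC_IEC_559__)
    puts("Annex F conformance claimed: float/double are IEC 60559 single/double.");
#else
    puts("No Annex F conformance claimed; the floating-point format is implementation-defined.");
#endif
    /* These macros always exist and give a rough picture of the native format. */
    printf("FLT_RADIX = %d, DBL_MANT_DIG = %d\n", FLT_RADIX, DBL_MANT_DIG);
    return 0;
}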
I suggest you need to look more carefully at your definition of portable.
I would also suggest your definition of "safe" is insufficient. Even if the binary representation (allowing for endianness) is okay, the operations on variables may behave differently. After all, there are few applications of floating point that don't involve operations on variables.
If you want to support all host architectures that have ever been created then assuming IEEE floating point format is inherently unsafe. You will have to deal with systems that support different formats, systems that don't support floating point at all, systems for which compilers have switches to select floating point behaviours (with some behaviours being associated with non-IEEE formats), CPUs that have an optional co-processor (so floating point support depends on whether an additional chip is installed, but otherwise variants of the CPU are identical), systems that emulate floating point operations in software (some such software emulators are configurable at run time), and systems with buggy or incomplete implementation of floating point (which may or may not be IEEE based).
If you are willing to limit yourself to hardware of post-2000 vintage, then your risk is lower but non-zero. Virtually all CPUs of that vintage support IEEE in some form. However, you still (as with older CPUs too) need to consider what floating point operations you wish to have supported, and the trade-offs you are willing to accept to have them. Different CPUs (or software emulations) have less complete implementations of floating point than others, and some are configured by default not to support some features, so it is necessary to change settings to enable those features, which can impact the performance or correctness of your code.
If you need to share floating point values between applications (which may be on different hosts with different features, built with different compilers, etc) then you will need to define a protocol. That protocol might involve IEEE format, but all your applications will need to be able to handle conversion between the protocol and their native representations.
Almost all common architectures now use IEEE-754, but this is not required by the standard. There used to be old non-IEEE-754 architectures, and some could still be around.
If the only requirement is for exchange of network data, my advice is:
if __STDC_IEC_559__ is defined, only use network order for the bytes and assume you do have standard IEEE-754 for float and double.
if __STDC_IEC_559__ is not defined, use a special interchange format. That could be IEEE-754 (one single protocol) or anything else (which then needs a protocol indication).
Like others have mentioned, there's the __STDC_IEC_559__ macro, but it isn't very useful because it's only set by compilers that completely implement the respective annex in the C standard. There are compilers that implement only a subset but still have (mostly) usable IEEE floating point support.
If you're only concerned with the binary representation, you should write a feature test that checks the bit patterns of certain floating-point numbers. Something like:
#include <stdint.h>
#include <stdio.h>

typedef union {
    double d;
    uint64_t i;
} double_bits;

int main(void) {
    double_bits b;
    b.d = 2.5;
    /* 0x4004000000000000 is the IEEE-754 binary64 bit pattern for 2.5. */
    if (b.i != UINT64_C(0x4004000000000000)) {
        fprintf(stderr, "Not an IEEE-754 double\n");
        return 1;
    }
    return 0;
}
Check a couple of numbers with different exponents, mantissae, and signs, and you should be on the safe side. Since these tests aren't expensive, you could even run them once at runtime.
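For instance, a slightly extended test along those lines (the constants below are the standard IEEE-754 encodings of the values being checked):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Check a few more bit patterns: a float, a negative value, a negative zero. */
static int double_is(double d, uint64_t expect) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* same idea as the union above */
    return bits == expect;
}

static int float_is(float f, uint32_t expect) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits == expect;
}

int main(void) {
    int ok = double_is(2.5,  UINT64_C(0x4004000000000000))
          && double_is(-1.0, UINT64_C(0xBFF0000000000000))
          && float_is(2.5f,  UINT32_C(0x40200000))
          && float_is(-0.0f, UINT32_C(0x80000000));
    puts(ok ? "looks like IEEE-754" : "not IEEE-754 (or unexpected byte order)");
    return ok ? 0 : 1;
}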
Strictly speaking, it's not safe to assume floating-point support; generally speaking, though, the vast majority of platforms will support it. Notable exceptions include (now-deprecated) VMS systems running on Alpha chips.
If you have the luxury of runtime checking, consider paranoia, a floating-point vetting tool written by William Kahan.
Edit: sounds like your application is more concerned with binary formats as they pertain to storage and/or serialization. I would suggest narrowing your scope to choosing a third-party library that supports this. You could do worse than Google Protocol Buffers.

Writing my own float parser

I am trying to write a parser in C and part of its job is to convert a series of characters into a double. Up to now I have been using strtod but I find it to be quite dangerous and it won't handle cases where the number is at the end of the buffer, which is not null terminated.
I thought I'd write my own. If I have a string representation of a number of the form a.b, would I be naive to think that I can just calculate (double)a + ((double)b / (double)10^n), where n is the number of digits in b?
For example, 23.4563:
a = 23
b = 4563
final answer: 23 + (4563/10000)
Or would that produce inaccurate results with regard to the IEEE format of floats?
It is hard to read floating-point numerals accurately, in the sense that there are various problems that must be carefully addressed, and many people fail to do so. However, it is a solved problem. To start, see How to read floating point numbers accurately, June 1990, by William D. Clinger.
I agree with Roddy, you are likely better off copying the data into a buffer and using existing library functions. (However, you should check that your C implementation provides correctly rounded conversion of floating-point numerals. The C standard does not require it, and some implementations do not provide it.)
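A sketch of that buffer-copying approach (parse_double_range is a made-up helper name and the 64-byte cap is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a double from a character range that is not null-terminated. */
static int parse_double_range(const char *start, size_t len, double *out) {
    char buf[64];
    if (len >= sizeof buf) return 0;   /* too long to be a sensible number */
    memcpy(buf, start, len);
    buf[len] = '\0';
    char *end;
    *out = strtod(buf, &end);
    return end != buf;                 /* at least one character was consumed */
}

int main(void) {
    const char data[] = {'2', '3', '.', '4', '5', '6', '3'};   /* no terminator */
    double d;
    if (parse_double_range(data, sizeof data, &d))
        printf("parsed %.4f\n", d);
    return 0;
}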
You may be interested in this answer of mine to a somewhat related question.
The parser in that answer converts decimal floating point numbers (represented as strings) into IEEE-754 floats and doubles with proper rounding.
As far as I remember, about the only issue in the code is that it may not handle the case where the exponent part is too big (doesn't fit into an integer), which should amount to returning either an error or INF.
Otherwise, it should give you a good idea of what to do (if you have any idea at all of what you're doing:).
As already said, it's difficult, you need extra precision, etc...
But if you have restricted inputs, and want to know if you can still correctly convert these restricted decimal to binary with semi naive algorithm and standard IEEE 754 ops, you might be interested in my answer to
How to manually parse a floating point number from a string

Why don't the authors of the C99 standard specify a standard for the size of floating point types?

I noticed that on Windows and Linux x86, float is a 4-byte type and double is 8, but long double is 12 bytes on x86 and 16 on x86_64. C99 is supposed to be breaking such barriers with its specific integral sizes.
The initial technological limitation appears to be the x86 processor not being able to handle more than 80-bit floating point operations (plus 2 bytes to round it up), but why the inconsistency in the standard compared to the int types? Why not standardize on at least 80 bits?
The C language doesn't specify the implementation of various types, so that it can be efficiently implemented on as wide a variety of hardware as possible.
This extends to the integer types too - the C standard integral types have minimum ranges (e.g. signed char is -127 to 127, short and int are both -32,767 to 32,767, long is -2,147,483,647 to 2,147,483,647, and long long is -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807). For almost all purposes, this is all that the programmer needs to know.
C99 does provide "fixed-width" integer types, like int32_t - but these are optional - if the implementation can't provide such a type efficiently, it doesn't have to provide it.
For floating point types, there are equivalent limits (e.g. double must have at least 10 decimal digits' worth of precision).
They were trying to (mostly) accommodate pre-existing C implementations, some of which don't even use IEEE floating point formats.
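A quick way to see what a given implementation actually chose (the standard only pins down minimums, such as DBL_DIG >= 10):

#include <stdio.h>
#include <float.h>

int main(void) {
    printf("sizeof(float)       = %zu, FLT_DIG  = %d\n", sizeof(float), FLT_DIG);
    printf("sizeof(double)      = %zu, DBL_DIG  = %d\n", sizeof(double), DBL_DIG);
    printf("sizeof(long double) = %zu, LDBL_DIG = %d\n", sizeof(long double), LDBL_DIG);
    printf("FLT_RADIX = %d\n", FLT_RADIX);
    return 0;
}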
ints can be used to represent abstract things like ids, colors, error codes, requests, etc. In this case ints are not really used as integer numbers but as sets of bits (i.e. a container). Most of the time a programmer knows exactly how many bits are needed, so they want to be able to use just as many bits as needed.
floats, on the other hand, are designed for a very specific usage (floating point arithmetic). You are very unlikely to be able to size precisely how many bits you need for your float.
Actually, most of the time the more bits you have the better it is.
C99 is supposed to be breaking such barriers with the specific integral sizes.
No, those fixed-width (u)intN_t types are completely optional, because not all processors use type sizes that are a power of 2. C99 only requires (u)int_fastN_t and (u)int_leastN_t to be defined. That means the premise of "why the inconsistency in the standard compared to int types" is simply wrong, because there's no such consistency in the sizes of the int types either.
Lots of modern DSPs use a 24-bit word for 24-bit audio. There are even 20-bit DSPs like the Zoran ZR3800x family and 28-bit DSPs like the ADAU1701, which allows transformation of 16/24-bit audio without clipping. Many 32- or 64-bit architectures also have some odd-sized registers to allow accumulation of values without overflow, for example the TI C5500/C6000 with a 40-bit long and SHARC with an 80-bit accumulator. The Motorola DSP5600x/3xx series also has odd sizes: 2-byte short, 3-byte int, 6-byte long. In the past there were lots of architectures with other word sizes like 12, 18, 36 and 60 bits, and lots of CPUs that use one's complement or sign-magnitude. See Exotic architectures the standards committees care about
C was designed to be flexible enough to support all kinds of such platforms. Specifying a fixed size, whether for integer or floating-point types, defeats that purpose. Floating-point support in hardware varies wildly just like integer support. There are different formats that use decimal, hexadecimal or possibly other bases. Each format has different sizes of exponent/mantissa, different positions of sign/exponent/mantissa and even different sign representations. For example, some use two's complement for the mantissa while others use two's complement for the exponent or for the whole floating-point value. You can see many formats here, but that's obviously not every format that ever existed. For example, the SHARC above has a special 40-bit floating-point format. Some platforms also use double-double arithmetic for long double. See also
What uncommon floating-point sizes exist in C++ compilers?
Do any real-world CPUs not use IEEE 754?
That means you can't standardize a single floating-point format for all platforms, because there's no one-size-fits-all solution. If you're designing a DSP then obviously you need the format that's best for your purpose, so that you can churn through as much data as possible. There's no reason to use IEEE-754 binary64 when a 40-bit format has enough precision for your application, fits better in cache and needs far less die size. Or if you're on a small embedded system, an 80-bit long double is usually useless, as you don't even have enough ROM for the 80-bit long double library. That's why some platforms limit long double to 64 bits, the same as double.

C and Infinity Size Variable

How are values beyond the size of unsigned long long represented in C or any other programming language? Is there a variable type to represent infinity?
See: Floating point Infinity
Both floats and doubles have representations for infinity. Integral types do not have this capacity built into the language but, if you need it, there are tricks you can use to assign INT_MAX or LONG_MAX from limits.h for that purpose.
Of course, to do that, you have to have complete control of the calculations - you don't want a regular calculation ending up with that value and then being treated as infinity from there on.
For numbers larger than those that can be represented by the native types, you can turn to an arbitrary precision math library such as GMP.
Are you talking about representing the number infinity, or working with numbers too big to fit into a single native data type? For the latter, it's possible to create data structures that represent larger numbers, and write your own logic to implement arithmetic for those structures.
I've done this on a basic level for some problems at Project Euler. My approach was to extend the algorithms used in elementary school, which let you do operations on multi-digit numbers by breaking them down into operations using only single-digit numbers plus some carrying, remainders, borrowing, etc.
You might also be interested in this wikipedia article on arbitrary-precision arithmetic.
unsigned long long is the largest of the C standard integer types; it must be able to represent at least 2^64 − 1. There is no integer type in Standard C that must represent values larger than this. Some implementations may offer a larger type as an extension, but you can't rely on it.
(If you try to invent one by adding another long, GCC just refuses: test.c:3: error: ‘long long long’ is too long for GCC)
The standard integer types can't represent infinity unless you dedicate a specific bit pattern for that purpose (for example, ULLONG_MAX). I think the floating point types do have representations for infinity. You can check whether a floating-point value is infinite with the Standard C isinf macro, and you can set a floating-point value to infinity with the INFINITY macro. I think that Standard C does not require floating-point types to be able to represent positive and negative infinity, however.
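A short illustration of those facilities (C99's INFINITY and isinf from <math.h>), alongside the sentinel-integer convention mentioned above:

#include <stdio.h>
#include <math.h>
#include <float.h>
#include <limits.h>

int main(void) {
    double inf = INFINITY;                 /* <math.h>, C99 */
    printf("isinf(inf)    = %d\n", isinf(inf) != 0);
    printf("inf > DBL_MAX = %d\n", inf > DBL_MAX);

    /* For integers, "infinity" is only a convention: a sentinel value that
       your own code must treat specially. */
    int int_infinity = INT_MAX;
    printf("integer sentinel 'infinity': %d\n", int_infinity);
    return 0;
}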
