Very long definition of PI - c

I'm debugging some old C code and it has a definition #define PI 3.14... where ... is about 50 other digits.
Why is this? I said I could reduce the number to about 16 decimal places, but my boss snarled at me, saying that the extra digits are there for platform independence and forward compatibility. But will it slow the program down?

No, this will not slow down the program, unless you are running on an incredibly underpowered 1MHz DSP chip that has to do floating point arithmetic in software as opposed to passing it off to a dedicated FPU. This would mean that any mathematical operations that use floating point data are much slower than just using integer arithmetic.
In general, greater precision is only going to introduce a slowdown if the most time-consuming part of your program is doing a lot of calculations in rapid succession, and floating point calculations are especially slow. On a modern CPU, this is generally not the case, with the possible exception of certain chips that cause an 80-cycle stall on things like floating point underflow. That kind of issue likely exceeds the domain of this question.
First, it's better to use a common definition of PI, such as M_PI from <math.h>, where it is defined as #define M_PI 3.14159265358979323846 (note that M_PI is a POSIX extension provided by most toolchains, not something the C standard itself requires). If you insist, you can go ahead and define it manually.
Also, the best precision currently available in C is the equivalent of about 19 digits.
According to Wikipedia, 80-bit "Intel" IEEE 754 extended-precision
long double, which is 80 bits padded to 16 bytes in memory, has 64
bits mantissa, with no implicit bit, which gets you 19.26 decimal
digits. This has been the almost universal standard for long double
for ages, but recently things have started to change.
The newer 128-bit quad-precision format has 112 mantissa bits plus an
implicit bit, which gets you 34 decimal digits. GCC implements this as
the __float128 type and there is (if memory serves) a compiler option
to set long double to it.
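If you want to check what your own toolchain actually provides, <float.h> reports how many decimal digits each type can reliably round-trip. A minimal sketch (standard macros only, nothing implementation-specific assumed):
#include <stdio.h>
#include <float.h>

int main(void) {
    /* The *_DIG macros give the decimal digits that survive a round trip
       through the corresponding floating-point type. */
    printf("FLT_DIG  = %d\n", FLT_DIG);
    printf("DBL_DIG  = %d\n", DBL_DIG);
    printf("LDBL_DIG = %d\n", LDBL_DIG);
    return 0;
}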
Personally, if I were required to use our own definition of pi, I'd write something like this:
#ifndef M_PI
#define PI 3.14159265358979323846264338327950288419716939937510
#else
#define PI M_PI
#endif
If a future C standard adds an even wider floating point primitive data type, it's pretty much a guarantee that the constants in the math library will be updated to match.
References
More Precise Floating point Data Types than double?, Accessed 2014-03-13, <https://stackoverflow.com/questions/15659668/more-precise-floating-point-data-types-than-double>
Math constant PI value in C, Accessed 2014-03-13, <https://stackoverflow.com/questions/9912151/math-constant-pi-value-in-c>

The number of digits in a macro definition almost certainly will have no effect at all on run-time performance.
Macro expansion is textual. That means that if you have:
#define PI 3.14159... /* 50 digits */
then any time you refer to PI in code to which that definition is visible, it will be as if you had written out 3.14159....
C has just three floating-point types: float, double, and long double. Their sizes and precisions are implementation-defined, but they're typically 32 bits, 64 bits, and something wider than 64 bits (the size of long double typically varies more from system to system than the other two do).
If you use PI in an expression, it will be evaluated as a value of some specific type. And in fact, if there's no L suffix on the literal, it will be of type double.
So if you write:
double x = PI / 2.0;
it's as if you had written:
double x = 3.14159... / 2.0;
The compiler will probably evaluate the division at compile time generating a value of type double. Any extra precision in the literal will be discarded.
To see this, you can try writing a small program that uses the PI macro and examining an assembly listing.
For example:
#include <stdio.h>
#define PI 3.141592653589793238462643383279502884198716939937510582097164
int main(void) {
double x = PI;
printf("x = %g\n", x);
}
On my x86_64 system, the generated machine code has no reference to the full precision value. The instruction corresponding to the initialization is:
movabsq $4614256656552045848, %rax
where 4614256656552045848 is a 64-bit integer corresponding to the binary IEEE double-precision representation of a number as close as possible to 3.141592653589793238462643383279502884198716939937510582097164.
The actual stored floating-point value on my system happens to be exactly:
3.1415926535897931159979634685441851615905761718750000000000000000
of which only about 16 decimal digits are significant.

Related

can't figure out the sizeof(long double) in C is 16 bytes or 10 bytes

Although there are some answers on this website, I still can't figure out the meaning of sizeof(long double). Why is the output of printing var3 3.141592653589793115998?
When I run the same code as another person, the output is different. Could somebody help me solve this problem?
My test code:
float var1 =3.1415926535897932;
double var2=3.1415926535897932;
long double var3 =3.141592653589793213456;
printf("%d\n",sizeof(float));
printf("%d\n",sizeof(double));
printf("%d\n",sizeof(long double));
printf("%.16f\n",var1);
printf("%.16f\n",var2);
printf("%.21Lf\n",var3);
Output of my test code:
4
8
16
3.1415927410125732
3.1415926535897931
3.141592653589793115998
The code is the same as the other person's, but their output is:
4
8
12
3.1415927410125732
3.1415926535897931
3.141592741012573213359
Could somebody tell me why our outputs are different?
Floating point numbers, inside our computers, are not mathematical real numbers.
They have lots of counter-intuitive properties (e.g. (1.0-x) + x in your C code can be different from 1...). For more, read the floating-point-gui.de site.
Be also aware that a number is not its representation in digits. For example, most of your examples are approximations of the number π, which, intuitively speaking, has an infinite number of digits or bits, since it is a transcendental number (as proven by Ferdinand von Lindemann).
I still can't figure out the meaning of sizeof(long double).
It is the implementation-specific size of a long double object in bytes, i.e. its size measured in units of sizeof(char), which is 1 by definition.
The C11 standard (read n1570 and see this reference) does allow an implementation where sizeof(long double) is, like sizeof(char), equal to 1. I cannot name such an implementation, but it might (theoretically) be the case on some unusual architectures (e.g. some DSPs).
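As a side note (not the question's main concern): sizeof yields a size_t, so printing it with %d, as the test code does, is not strictly correct. A small sketch with the proper conversion specifier:
#include <stdio.h>

int main(void) {
    /* sizeof has type size_t, so use %zu (C99) rather than %d */
    printf("sizeof(float)       = %zu\n", sizeof(float));
    printf("sizeof(double)      = %zu\n", sizeof(double));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    return 0;
}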
Could somebody tell me why the output of us are different?
What makes you think they should be equal?
Practically speaking, floating point numbers are often IEEE754. But on IBM mainframes (e.g. z/Series) or on VAXes they are not.
float var1 =3.1415926535897932;
double var2 =3.1415926535897932;
Be aware that it could be possible to have a C implementation where (double)var1 != var2 or where var1 != (float)var2 after executing the two declarations above.
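A quick sketch, if you want to check this on your own implementation (on typical IEEE 754 machines it prints 0, because var1 was rounded to float precision first):
#include <stdio.h>

int main(void) {
    float  var1 = 3.1415926535897932;
    double var2 = 3.1415926535897932;
    /* var1 holds the double constant rounded again to float, so converting
       it back to double usually does not recover var2 */
    printf("(double)var1 == var2 : %d\n", (double)var1 == var2);
    return 0;
}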
If you need more precision than what long double achieves on your particular C implementation (e.g. your recent GCC compiler, which could be a cross-compiler), consider using some arbitrary precision arithmetic library such as GMPlib.
I recommend carefully reading the documentation of printf(3) and of every other function that you are using from your C standard library. I also suggest reading the documentation of your C compiler.
You might be interested by static program analysis tools such as Frama-C or the Clang static analyzer. Read also this draft report.
If your C compiler is a recent GCC, compile with all warnings and debug info, so gcc -Wall -Wextra -g and learn how to use the GDB debugger.
Could somebody tell me why the output of us are different?
C allows different compilers/implementations to use different floating point encoding and handle evaluations in slightly different ways.
Precision
The difference in sizeof hints that the two implementations may employ different precision. Yet the difference could also be due to padding; in that case, extra bytes are added to preserve alignment for performance reasons.
A better precision assessment is to print epsilon: the difference between 1.0 and the next larger value of the type.
#include <float.h>
printf("%e %e %Le\n", FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON);
Sample result
1.192093e-07 2.220446e-16 1.084202e-19
FLT_EVAL_METHOD
When this is 0, floating point expressions are evaluated to the range and precision of their own type. With other values, such as 2, floating point expressions are evaluated using wider types, and only at the end is the result converted to the target type.
printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
Two of several possible values indicated below:
FLT_EVAL_METHOD
0 evaluate all operations and constants just to the range and precision of the type;
2 evaluate all operations and constants to the range and precision of the long double type.
Notice the constants 3.1415926535897932 and 3.141592653589793213456 are both ordinary double constants. Neither has an L suffix that would make them long double. Both have the same double value of 3.1415926535897931..., so var2 and var3 should get the same value. Yet with FLT_EVAL_METHOD == 2, constants can be evaluated as long double, and that is almost certainly what happened in "the output from another person" code.
Print FLT_EVAL_METHOD to see that difference.
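As an aside (my suggestion, not part of the answer above): giving the constant an explicit L suffix takes FLT_EVAL_METHOD out of the picture, because the constant is then a long double on every implementation:
#include <stdio.h>

int main(void) {
    /* The L suffix makes this a long double constant, so it keeps its extra
       digits regardless of FLT_EVAL_METHOD. */
    long double var3 = 3.141592653589793213456L;
    printf("%.21Lf\n", var3);
    return 0;
}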

platform independent way to reduce precision of floating point constant values

The use case:
I have some large data arrays containing floating point constants.
The file defining that array is generated and the template can be easily adapted.
I would like to run some tests on how reduced precision influences the results, both in terms of quality and in terms of compressibility of the binary.
Since I do not want to change other source code than the generated file, I am looking for a way to reduce the precision of the constants.
I would like to limit the mantissa to a fixed number of bits (set the lower ones to 0). But since floating point literals are written in decimal, it is hard to specify the numbers in a way that guarantees the binary representation has all zeros in the lower mantissa bits.
The best case would be something like:
#define FP_REDUCE(x) /* some macro */
static const float32_t veryLargeArray[] = {
FP_REDUCE(23.423f), FP_REDUCE(0.000023f), FP_REDUCE(290.2342f),
// ...
};
#undef FP_REDUCE
This should be done at compile time and it should be platform independent.
The following uses the Veltkamp-Dekker splitting algorithm to remove n bits (with rounding) from x, where p = 2^n (for example, to remove eight bits, use 0x1p8f for the second argument). The casts to float32_t coerce the results to that type, as the C standard otherwise permits implementations to use more precision within expressions. (Double-rounding could produce incorrect results in theory, but this will not occur when float32_t is the IEEE basic 32-bit binary format and the C implementation computes this expression in that format or the 64-bit format or wider, as the former is the desired format and the latter is wide enough to represent intermediate results exactly.)
IEEE-754 binary floating-point is assumed, with round-to-nearest. Overflow occurs if x•(p+1) rounds to infinity.
#define RemoveBits(x, p) (float32_t) (((float32_t) ((x) * ((p)+1))) - (float32_t) (((float32_t) ((x) * ((p)+1))) - (x)))
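A hedged usage sketch against the question's array, assuming float32_t is a typedef for an IEEE 754 binary32 float (the question does not spell this out):
#include <stdio.h>

typedef float float32_t;   /* assumption: float is IEEE 754 binary32 */

/* Veltkamp-Dekker split from above; p = 2^n removes the low n mantissa bits. */
#define RemoveBits(x, p) (float32_t) (((float32_t) ((x) * ((p)+1))) - (float32_t) (((float32_t) ((x) * ((p)+1))) - (x)))

/* Remove the low eight mantissa bits of each constant (p = 2^8, written 0x1p8f). */
static const float32_t veryLargeArray[] = {
    RemoveBits(23.423f, 0x1p8f),
    RemoveBits(0.000023f, 0x1p8f),
    RemoveBits(290.2342f, 0x1p8f),
};

int main(void) {
    for (size_t i = 0; i < sizeof veryLargeArray / sizeof veryLargeArray[0]; i++)
        printf("%.10g\n", (double) veryLargeArray[i]);
    return 0;
}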
What you're asking for can be done with varying degrees of partial portability, but not absolute unless you want to run the source file through your own preprocessing tool at build time to reduce the precision. If that's an option for you, it's probably your best one.
Short of that, I'm going to assume at least that your floating point types are base 2 and obey Annex F/IEEE semantics. This should be a reasonable assumption, but the latter is false with gcc on platforms (including 32-bit x86) with extended-precision under the default standards-conformance profile; you need -std=cNN or -fexcess-precision=standard to fix it.
One approach is to add and subtract a power of two chosen to cause rounding to the desired precision:
#define FP_REDUCE(x,p) ((x)+(p)-(p))
Unfortunately, this works in absolute precisions, not relative, and requires knowing the right value p for the particular x. That value is equal to the value of the leading base-2 place of x, times 2 raised to the power of FLT_MANT_DIG minus the number of bits of precision you want. It cannot be evaluated as a constant expression for use as an initializer, but you can write it in terms of FLT_EPSILON and, if you can assume C99+, preprocessor token pasting to form a hex float literal, yielding the correct value for this factor. But you still need to know the power of two for the leading digit of x; I don't see any way to extract that as a constant expression.
Edit: I believe this is fixable, so as not to need an absolute precision but rather automatically scale to the value, but it depends on correctness of a work in progress. See Is there a correct constant-expression, in terms of a float, for its msb?. If that works I will later integrate the result with this answer.
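Meanwhile, here is a hedged illustration of the absolute-precision form of FP_REDUCE above (not the relative version the edit is after), assuming round-to-nearest, no excess precision, FLT_MANT_DIG == 24, and a value known to lie in [1, 2); to keep roughly 10 mantissa bits the constant is then 2^(24-10) = 0x1p14f:
#include <stdio.h>

/* Round x to a coarser grid by adding and subtracting a power of two. */
#define FP_REDUCE(x, p) ((x) + (p) - (p))

int main(void) {
    float x = 1.2345678f;
    /* 0x1p14f = 2^14; this choice is valid only because x is in [1, 2) here. */
    float reduced = FP_REDUCE(x, 0x1p14f);
    printf("original: %.9g\nreduced:  %.9g\n", x, reduced);
    return 0;
}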
Another approach I like, if your compiler supports compound literals in static initializers and if you can assume IEEE type representations, is using a union and masking off bits:
union fr { float x; uint32_t r; };
#define FP_REDUCE(x) ((union fr){ .r = (union fr){ (x) }.r & (0xffffffffu << n) }.x)
where n is the number of bits you want to drop. This will round towards zero rather than to nearest; if you want to make it round to nearest, it should be possible by adding an appropriate constant to the low bits before masking, but you have to take care about what happens when the addition overflows into the exponent bits.
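A hedged, self-contained sketch of this approach: the union is given the tag fr so the compound literals compile, and n is a hypothetical constant naming the number of dropped bits:
#include <stdio.h>
#include <stdint.h>

union fr { float x; uint32_t r; };

enum { n = 8 };   /* hypothetical: clear the low 8 mantissa bits */

/* Clear the low n mantissa bits by type-punning through the union (truncates toward zero). */
#define FP_REDUCE(x) ((union fr){ .r = (union fr){ (x) }.r & (0xffffffffu << n) }.x)

int main(void) {
    float values[] = { FP_REDUCE(23.423f), FP_REDUCE(0.000023f), FP_REDUCE(290.2342f) };
    for (size_t i = 0; i < sizeof values / sizeof values[0]; i++)
        printf("%.10g\n", (double) values[i]);
    return 0;
}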

c: change variable type without casting

I'm changing a uint32_t to a float, but without changing the actual bits.
Just to be sure: I don't want to cast it. So float f = (float) i is the exact opposite of what I want to do, because it changes bits.
I'm going to use this to convert my (pseudo) random numbers to float without doing unneeded math.
What I'm currently doing and what is already working is this:
float random_float( uint64_t seed ) {
    // Generate a random value and change its bit format to IEEE
    uint32_t asInt = (random_int( seed ) & 0x7FFFFF) | (0x7E000000>>1);
    // Make it a float
    return *(float*)(void*)&asInt; // <-- pretty ugly and needs a variable
}
The Question: Now I'd like to get rid of the asInt variable, and I'd like to know if there is a better / not so ugly way than taking the address of this variable, casting it twice, and dereferencing it again?
You could try union - as long as you make sure the types are identical in memory sizes:
union convertor {
    uint32_t asInt;   /* a fixed-width type, so it matches the float's 32 bits */
    float    asFloat;
};
Then you can assign your integer to asInt and read the bits back through asFloat (or the other way around if you want to). I use it a lot when I need to do bitwise operations on one hand and still get a uint32_t representation of the number on the other.
[EDIT]
Like many of the commentators rightfully state, you must take into consideration bit patterns that represent special values, like NaN, +Inf, -Inf, +0 and -0.
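As an alternative that this answer does not mention: memcpy also reinterprets the bits, sidesteps the aliasing and alignment questions entirely, and mainstream compilers optimize the copy away. A minimal sketch:
#include <stdint.h>
#include <string.h>

/* Reinterpret the bits of a uint32_t as a float; compilers typically turn
   the memcpy into a plain register move. */
static inline float bits_to_float(uint32_t bits) {
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}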
So you seem to want to generate floating point numbers between 0.5 and 1.0 judging from your code.
Assuming that your microcontroller has a standard C library with floating point support, you can do this in a standards-compliant way without actually involving any floating point operations; all you need is the ldexpf function, which itself doesn't do any floating point math.
This would look something like this:
return ldexpf((1 << 23) + random_thing_smaller_than_23_bits(), -24);
The trick here is that we happen to know that IEEE754 binary32 floating point numbers have integer precision between 2^23 and 2^24 (I could be off-by-one here, double check please, I'm translating this from some work I've done on doubles). So the compiler should know how to convert that number to a float trivially. Then ldexp multiplies that number by 2^-24 by just changing the bits in the exponent. No actual floating point operations involved and no undefined behavior, the code is fully portable to any standard C implementation with IEEE754 numbers. Double check the generated code, but a good compiler and c library should not use any floating point instructions here.
If you want to peek at some experiments I've done around generating random floating point numbers you can peek at this github repo. It's all about doubles, but should be trivially translatable to floats.
Reinterpreting the binary representation of an int as a float can cause major problems:
Not every bit pattern behaves like an ordinary numeric value.
Some patterns represent special conditions, like NaN, +Inf, -Inf, +0, -0 (sic!), etc.
Also, if the bits are random, then even after filtering out all the non-numeric representations you would get a very poor random distribution.
If you are working on an MCU without an FPU, you should consider avoiding float altogether. An alternative might be fractional or scaled integers. There are many implementations of algorithms which use float but can easily be converted to fixed point types with acceptable loss of precision (or even none at all). Some might even yield more precision than float (note that a single precision float has only 23 bits of mantissa, while an int32 has 31 bits, plus 1 sign bit for either; the same holds for a fractional or fixed scaled int).
Note that the Embedded C extension (ISO/IEC TR 18037) adds optional support for fixed-point types such as _Fract. You might want to research that.
Edit:
According to your comments, you seem to convert the int to a float in the range 0..<1. For that, you can assemble the float using bit operations on a uint32_t (e.g. the original value). You just need to follow the IEEE format (presuming your toolchain complies with the C standard; see Wikipedia).
The result (still a uint32_t) can then be reinterpreted through a union or pointer as described by others already. Pack that into a system-dependent, well-commented helper and keep it isolated. Do not forget to check endianness and alignment (likely the same for float and uint32_t, but important for the bit operations).
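A minimal sketch of the scheme described above, assuming IEEE 754 binary32 (the names here are illustrative, not from the question):
#include <stdint.h>
#include <string.h>

/* Build a float in [1, 2) from 23 random mantissa bits and a fixed biased
   exponent of 127 (0x3F800000 is the bit pattern of 1.0f), then shift the
   result down to [0, 1). */
static inline float random_unit_float(uint32_t random_bits) {
    uint32_t u = 0x3F800000u | (random_bits & 0x7FFFFFu);
    float f;
    memcpy(&f, &u, sizeof f);   /* reinterpret, as discussed above */
    return f - 1.0f;
}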

Max value of datatypes in C

I am trying to understand the maximum value that I can store in C. I tried doing printf("%f", pow(2, x)). The answer holds good until x = 1023. It says Inf when x = 1024.
I am sorry that it is a basic question but I am trying to understand how C assigns datatypes' sizes based on my machine.
I have a Mac (64-bit processor). My understanding is that, being a 64-bit processor, it can do calculations on values up to 2^64. Clearly pow(2, 1023) is greater than that. But my program is working fine until x = 1023. How is this possible? Does the GNU compiler have something to do with this?
If this is a duplicate of other question kindly give the link.
In C the pow() function returns a double, and the double type is typically a 64-bit IEEE format representation of a floating point number.
The basic idea of floating point is to express a number in the same general way as e.g. 1.234×10^56. Here you have a mantissa 1.234 and an exponent 56. C++, and probably also C, allows this kind of exponent notation for floating point literals (but not for integer types), but in practice the internal representation is binary, with a power of 2 rather than a power of 10.
The limit you ran up against was the supported range for the exponent in your compiler's representation of double numbers; probably 64-bit IEEE 754.
The limits of the various built-in integral numerical types are available as symbolic constants from <limits.h>. The limits of the built-in floating point types are available as symbolic constants from <float.h>. See the table over at cppreference.com for more details.
In C++ these limits are also available via the numeric_limits class template from <limits>.
"64-bit processor" typically means that it can deal with integers that contain at most 64 bits at a time (i.e. in a single instruction), not that it can only process numbers with 64 binary digits or less. Using arbitrary precision arithmetic you can do calculations on numbers that are arbitrarily large, provided that you have enough memory (and time), just like how us humans can do operations on big values with only 10 fingers. Read more here: What is the biggest number you can generate using a 64-bit processor?
However pow(2, 1023) is a little bit different. It's not an integer but a floating-point number (of type double in C) represented by a sign, a mantissa and an exponent, like this: (-1)^sign × 1 × 2^1023. Not all the digits are stored, so it's only accurate to the first few digits. However, most systems use binary floating-point types, so they can store the precise value of a power of 2 up to a large exponent depending on the exponent range. Most modern systems' floating-point types conform to the IEEE-754 standard, with double mapping to binary64/double precision, therefore the maximum value will be
2^1023 × (1 + (1 − 2^−52)) ≈ 1.7976931348623157 × 10^308
The maximum value for a double is DBL_MAX. This is defined by <float.h> in C, or <cfloat> in C++. The numeric value may vary across systems, but you can always refer to it by the macro DBL_MAX.
You can print this:
printf("%f\n", DBL_MAX);
The integer data types all have similar macros defined in <limits.h>: e.g. ULLONG_MAX is the biggest value for unsigned long long. If printing with printf make sure to use the correct format specifier.
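For instance, a minimal sketch combining both headers, with format specifiers matched to the types:
#include <stdio.h>
#include <float.h>
#include <limits.h>

int main(void) {
    printf("DBL_MAX    = %e\n", DBL_MAX);        /* %e is easier to read than %f here */
    printf("ULLONG_MAX = %llu\n", ULLONG_MAX);   /* unsigned long long needs %llu */
    return 0;
}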

Illegal float values

I am currently working on an embedded microcontroller and use a custom printf routine. The toolchain is the GCC Toolchain for the AVR32 architecture.
I have the problem that upon calling vsnprintf or similar for the second time, the CPU enters an exception condition.
From support, I received the answer that:
We could not find any obvious reason for such behavior. However, creating a float overflow condition by writing byte by byte is not safe. We cannot ensure the value generated by this and it is recommended to check using “FLT_MAX”.
Now I am wondering: What are "illegal" float values? Shouldn't all bit combinations represent at least some value? If relevant: sizeof(float) is 4 bytes.
Summary
I suggest you print the bits of floating-point values as if they were a hexadecimal integer, as shown in code below, so that you can analyze those bits to see if they contain the values you are attempting to compute or have been modified improperly due to some bug.
Details
The AVR32UC Technical Reference Manual says “The floating point hardware conforms to the requirements of the C standard, which is based on the IEEE 754 floating point standard.“ The latter clause is false; the C standard is not based on IEEE 754. The C standard does specify bindings to IEEE 754 (via the name IEC 60559) as an optional feature of C implementations. I will presume that the model of AVR32 CPU you are using conforms to IEEE 754 to some degree.
There are no “illegal” values in IEEE 754. There are values that do not represent numbers, and some of those values are intended to cause exceptions. Such a value is called a NaN (for “Not a Number”). There are quiet NaNs and signaling NaNs. Quiet NaNs are intended to pass through operations silently, producing a NaN result. E.g., 3 + NaN should produce NaN. Signaling NaNs are intended to cause exceptions, which may cause changes to program control (such as signals or program aborts).
The technical reference manual cited above also says “Signalling NaN are not provided, all NaN are non-signalling (quiet).”
A good vsnprintf routine should accept quiet NaN values for printing and should format them by producing a string such as “NaN”. When a signaling NaN is passed for formatting, I suppose it might be reasonable either to format it or to produce an exception.
I expect the message you received from support is suggesting that your software created some kind of NaN, and that vsnprintf cannot handle these. From the phrasing, I think their response is speculative.
If you are creating floating-point values by assembling bytes, then you may have created a NaN when you did not intend to, if there was some error in your software. I suggest that you debug this by using vsnprintf to print the bytes of the floating-point value instead of printing it with a floating-point format specifier.
If the GCC version you are using has the usual features of GCC, and the unsigned int in your implementation is 32 bits, you can format the bits of a 32-bit float value x as a hexadecimal value using:
snprintf(Buffer, BufferLength, "0x%x",
         (union { float f; unsigned int u; }) {x} .u);
The second line uses a compound literal to put the value x into a union and reinterpret its bytes as an unsigned int. (This is a supported way in C to reinterpret the bytes of an object. Many people use pointer aliasing, which works in GCC if the appropriate flag is used, but it is not generally supported by the C standard. Another supported method is to copy the bytes, as with unsigned int u; memcpy(&u, &x, sizeof u);.)
Once you see what the bits in the float are, you can interpret them manually from information in the IEEE 754 standard or using an online analyzer. (Select the “hexadecimal” button to input hexadecimal values to be interpreted.)
In an IEEE-754 32-bit binary floating-point object, the value is a NaN if:
Bit 31 has any value. (It is the sign bit, irrelevant for recognizing a NaN.)
Bits 30 to 23 are all ones.
Bits 22 to 0 are not all zeros.
(If Bits 30 to 23 are all ones but bits 22 to 0 are all zeroes, the value is an infinity. This is not illegal but might also cause a low-quality vsnprintf to generate an exception.)
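If it helps, here is a small helper along the same lines, assuming a 32-bit float and using memcpy to inspect the bits as suggested earlier:
#include <stdint.h>
#include <string.h>

/* Returns nonzero if the IEEE 754 binary32 value is a NaN: exponent bits
   (30..23) all ones and mantissa bits (22..0) not all zero. */
static int is_nan_bits(float x) {
    uint32_t u;
    memcpy(&u, &x, sizeof u);
    return ((u >> 23) & 0xFFu) == 0xFFu && (u & 0x7FFFFFu) != 0;
}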
I didn't work on AVR32, but "illegal" float values, and single-precision float arithmetic in general, are an important topic in numerical methods. The maximal number for float is:
FLT_MAX = 3.40282e+38
but float also has limits on how finely values are spaced. The closer to zero you are, the more fractional digits you can represent.
for example:
the minimal spacing between values in [1,2] is 1.19209e-07 (that is 2^-23), also known as macheps (machine epsilon, FLT_EPSILON from <float.h>)
the minimal spacing between values in [2,4] is 2 × 1.19209e-07 = 2 × 2^-23
it also works for the other side:
the minimal spacing between values in [1/2,1] is 2^-24.
Why does this happen? Write a number as beforedot.afterdot: the larger the number is, the more bits are required for the beforedot part, leaving fewer for the afterdot part; symmetrically, smaller numbers need fewer beforedot bits and can spend more bits after the dot.
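A small sketch that makes this spacing visible with nextafterf (link with -lm if required):
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    /* The gap between adjacent floats grows with the magnitude of the value. */
    printf("spacing at 0.5f: %.6e\n", nextafterf(0.5f, 1.0f) - 0.5f);
    printf("spacing at 1.0f: %.6e (FLT_EPSILON = %.6e)\n",
           nextafterf(1.0f, 2.0f) - 1.0f, FLT_EPSILON);
    printf("spacing at 2.0f: %.6e\n", nextafterf(2.0f, 4.0f) - 2.0f);
    return 0;
}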
In conclusion:
Min positive (normalized) float: FLT_MIN = 1.17549e-38,
Max for float: FLT_MAX = 3.40282e+38.

Resources