On iSeries (AS400), in a developer environment, when I compile a C or C++ program under qsh, using gmake and ixlc (not CRTPGM):
ixlc -c mySource.c
Each time <math.h> is included in a source file, this series of warnings pops up:
/QIBM/include/math.h Line 000195 The floating point literal "1.1754943508222875E-38F" is out of range.
/QIBM/include/math.h Line 000208 The floating point literal "1.1754943508222875E-38F" is out of range.
/QIBM/include/math.h Line 000217 The floating point literal "1.1754943508222875E-38F" is out of range.
... plus 10 more
These correspond to the lines using FLT_MIN, a constant defined in /QIBM/include/float.h:
#define FLT_MIN 1.1754943508222875E-38F
How can I avoid these warnings, which flood my compilation logs?
The value defined for FLT_MIN in your header file has 16 decimal digits, which is far too many.
This system uses IEEE 754, so single-precision float values carry only about 7 significant decimal digits (and double-precision values about 15).
The compiler seems to know this, but the standard headers do not.
Fix the code in the library.
Taking other implementations of the standard library as a model, the definitions should be:
#define FLT_EPSILON 1.192093e-07
#define FLT_MIN 1.175494e-38
#define FLT_MAX 3.402823e+38
Since this ILE source is protected on iSeries, you might instead redirect the symbolic link /QIBM/include/float.h to a source file of your own creation. It does not have to be a member; an IFS file should do the job.
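An alternative that avoids touching /QIBM at all, assuming ixlc searches -I directories before the system headers (I have not verified this for ixlc): put a float.h of your own in a project include directory and have it chain to the original. A minimal sketch, with hypothetical paths:

/* myinclude/float.h -- hypothetical wrapper header.
   Assumes the -I directory is searched before /QIBM/include
   and that the original header is reachable by this path. */
#ifndef MY_FLOAT_H_SHIM
#define MY_FLOAT_H_SHIM

#include "/QIBM/include/float.h"   /* original definitions */

/* Replace the literals that trigger the <math.h> warnings. */
#undef  FLT_EPSILON
#undef  FLT_MIN
#undef  FLT_MAX
#define FLT_EPSILON 1.192093e-07F
#define FLT_MIN     1.175494e-38F
#define FLT_MAX     3.402823e+38F

#endif

Then compile with something like ixlc -I/home/me/myinclude -c mySource.c.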
Related
I'm looking for a way to detect whether a C compiler uses the IEEE-754 floating point representation at compile time, preferably in the preprocessor, but a constant expression is fine too.
Note that the __STDC_IEC_559__ macro does not fit this purpose, as an implementation may use the correct representation while not fully supporting Annex F.
Not an absolute 100% solution, but it will get you practically close.
Check whether the characteristics of the type double match binary64:
#include <float.h>
#define BINARY64_LIKE ( \
  (FLT_RADIX == 2) \
  && (DBL_MANT_DIG == 53) \
  && (DBL_DECIMAL_DIG == 17) \
  && (DBL_DIG == 15) \
  && (DBL_MIN_EXP == -1021) \
  && (DBL_HAS_SUBNORM == 1) \
  && (DBL_MIN_10_EXP == -307) \
  && (DBL_MAX_EXP == +1024) \
  && (DBL_MAX_10_EXP == +308))
BINARY64_LIKE is usable at compile time. It needs additional work, though, for older compilers that do not define all of these macros: DBL_HAS_SUBNORM, for example, only exists since C11.
Likewise for float.
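For example, a parallel sketch for float against the binary32 parameters (the name BINARY32_LIKE is mine, not from any standard):

#define BINARY32_LIKE ( \
  (FLT_RADIX == 2) \
  && (FLT_MANT_DIG == 24) \
  && (FLT_DECIMAL_DIG == 9) \
  && (FLT_DIG == 6) \
  && (FLT_MIN_EXP == -125) \
  && (FLT_HAS_SUBNORM == 1) \
  && (FLT_MIN_10_EXP == -37) \
  && (FLT_MAX_EXP == +128) \
  && (FLT_MAX_10_EXP == +38))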
Since C11, code could use _Static_assert() to detect some attributes.
_Static_assert(sizeof(double)*CHAR_BIT == 64, "double unexpected size");
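A self-contained sketch combining both checks, assuming C11 and the BINARY64_LIKE macro from above (the assertion messages are mine):

#include <float.h>
#include <limits.h>

/* Reject builds where double is not a 64-bit, binary64-like type. */
_Static_assert(sizeof(double) * CHAR_BIT == 64, "double unexpected size");
_Static_assert(BINARY64_LIKE, "double does not match IEEE-754 binary64");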
See also Are there any commonly used floating point formats besides IEEE754?.
Last non-IEEE754 FP format I used was CCSI 5 years ago.
Caution: it is unclear why OP wants this test. If code is doing bit manipulation of a floating-point value, then even with __STDC_IEC_559__ defined there remains at least one hole: the endianness of floating point and of integers may differ. Uncommon, but out there.
Other potential holes: support of -0.0, NaN sign, encoding of infinity, signalling NaN, quiet NaN, NaN payload: the usual suspects.
As of July 2020, this would still be compiler specific... though C2x intends to change that with the __STDC_IEC_60559_BFP__ macro - see Annex F, section F.2.
It might be noted that:
The compiler usually doesn't choose the binary representation; it usually follows the target system's architecture (the instruction set design of the CPU / GPU, etc.).
The use of non-conforming binary representations for floating-point is pretty much a thing of the past. If you're using a modern (or even a moderately modern) system from the past 10 years, you are almost certainly using a conforming binary representation.
In my specific case, I first developed a program to run on a Texas Instruments microcontroller (TMS320F28335). It was a real-time synchronous generator simulator, so it needed to perform a significant amount of floating-point operations. I did not specify any suffix for floating-point constants, so they were treated as doubles (I believe that is what the C standard says), but the compiler provided by Texas Instruments implements those doubles as 32-bit floating-point numbers, so the FPU of the microcontroller was used in an efficient way, let's say (see Table 6-1 of the compiler user's guide).

Then I had to port that program to a BeagleBone Black running embedded Linux (patched for the real-time requirements). Of course, all the constants were again treated as doubles by the compiler (GCC), but in this case that did not mean 32-bit floating-point numbers but 64-bit ones. I do not fully understand how the FPU of the ARM Cortex-A8 works, but as far as I have read (and since it is a 32-bit processor), performance would improve if those floating-point constants were treated as 32-bit floating-point numbers.

All of this led me to the question: is there a way to make more portable "best type" floating-point constants? In this case I could have solved the problem by adding the "f" suffix to every constant, because both processors are (I guess) more efficient dealing with float. But if I am developing something on an amd64 PC and I want to port it to a 32-bit microcontroller, is there a way to add some suffix that could be changed to be "f" for 32-bit microcontrollers and "l" for an amd64 PC? I thought of something like this (of course, it doesn't work):
architecture-dependant header file for 32 bits microcontroller:
#define BEST_TYPE f
.
.
.
architecture-dependant header file for amd64 PC:
#define BEST_TYPE l
.
.
.
architecture-independent source file:
.
.
.
a = b * 0.1BEST_TYPE;
.
.
.
To clarify, by "best type" I meant the most precise numeric data type supported by the FPU of the microprocessor.
The question requests something similar to the macros in C 2018 7.20.4, “Macros for integer constants.” These macros, such as UINT64_C(value), expand to integer constants suitable for initializing objects with the corresponding type.
If we look at the <stdint.h> supplied with Xcode 11 for macOS 10.14, we see the implementations simply append a suffix that indicates the type (or do nothing, if the value will have the desired type by default):
#define UINT8_C(v) (v)
#define UINT16_C(v) (v)
#define UINT32_C(v) (v ## U)
#define UINT64_C(v) (v ## ULL)
We can use this as guidance for a similar macro for floating-point types, using one of:
#define MyFloatType_C(v) (v ## f)
#define MyFloatType_C(v) (v)
#define MyFloatType_C(v) (v ## l)
Which definition of the macro to use could be chosen by various compile-time means, such as testing preprocessor macros (that are either built into the compiler to describe the target or that are explicitly passed on the command line).
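Following that guidance, a minimal sketch answering the original question (the names best_float and BEST_C are mine, and __AVR__ stands in for whatever macro identifies your 32-bit build):

#include <stdio.h>

/* Hypothetical selection: avr-gcc predefines __AVR__; substitute the
   macro your 32-bit toolchain actually defines. */
#if defined(__AVR__)
typedef float best_float;
#define BEST_C(v) (v ## f)    /* single-precision constants */
#else
typedef long double best_float;
#define BEST_C(v) (v ## l)    /* long double constants */
#endif

int main(void) {
    best_float b = BEST_C(2.5);
    best_float a = b * BEST_C(0.1);   /* replaces the 0.1BEST_TYPE idea */
    printf("%Lg\n", (long double)a);
    return 0;
}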
I'm developing for the AVR platform and I have a question. I don't want the floating point library to be linked with my code, but I like the concept of having analog values of the range 0.0 ... 1.0 instead of 0...255 and 0...1023, depending on even whether I'm using a port as an input or as an output.
So I decided to multiply the input/output functions' arguments by 1023.0 and 255.0, respectively. Now, my question is: if I implement the scaling like this:
#define analog_out(port, bit) _analog_out(port, ((uint8_t)((bit) * 255.0)))
will GCC (with the -O3 flag turned on) optimize the compile-time floating point multiplications, known at compile time and cast to an integral type, into integer operations? (I know that when using these macros with non-constant arguments, the optimization is not possible; I just want to know if it will be done in the other case.)
GCC should always do constant folding if you supply bit as a numeric literal.
If you want the compiler to enforce the constness, you could get away with something like this:
/* If x is not a compile-time constant, this expands to (void)0, which
   then fails to compile when used as a function argument. */
#define force_const(x) (__builtin_choose_expr(__builtin_constant_p(x), (x), (void)0))
#define analog_out(port, bit) _analog_out(port, force_const((uint8_t)((bit) * 255.0)))
Generally, I think gcc -O2 will do all arithmetic on constants at compile time.
It won't convert it to integer arithmetic - just to a constant integer.
It may be dangerous to rely on, especially if other people maintain the code. A situation where passing a non-constant parameter to a macro results in an error isn't good.
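A quick way to convince yourself, assuming avr-gcc and the macro from the question (the demo function is mine): compile with -O3 and inspect the generated assembly; with a literal argument no floating-point code is emitted.

#include <stdint.h>

extern void _analog_out(uint8_t port, uint8_t value);
#define analog_out(port, bit) _analog_out(port, ((uint8_t)((bit) * 255.0)))

void demo(void) {
    analog_out(1, 0.5);   /* folds at compile time to _analog_out(1, 127) */
}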
AFAIK, C supports just a few data types:
int, float, double, char, void, enum.
I need to store a number that could reach into the high 10 digits. Since I'm getting a low-10-digit number from INT_MAX, I suppose I need a double.
<limits.h> doesn't have a DOUBLE_MAX. I found DBL_MAX on the internet, but the page said it is LEGACY and it also appears to be C++. Is double what I need? Why is there no DOUBLE_MAX?
DBL_MAX is defined in <float.h>. Its availability in <limits.h> on unix is what is marked as "(LEGACY)".
(linking to the unix standard even though you have no unix tag since that's probably where you found the "LEGACY" notation, but much of what is shown there for float.h is also in the C standard back to C89)
You get the integer limits in <limits.h> or <climits>. Floating point characteristics are defined in <float.h> for C. In C++, the preferred version is usually std::numeric_limits<double>::max() (for which you #include <limits>).
As to your original question, if you want a larger integer type than long, you should probably consider long long. This isn't officially included in C++98 or C++03, but is part of C99 and C++11, so all reasonably current compilers support it.
It's in the standard float.h include file. You want DBL_MAX.
Using double to store large integers is dubious; the largest integer that can be stored reliably in double is much smaller than DBL_MAX. You should use long long, and if that's not enough, you need your own arbitrary-precision code or an existing library.
You are looking for the float.h header.
INT_MAX is just a definition in limits.h. You don't make it clear whether you need to store an integer or a floating-point value. If an integer, and you are using a 64-bit compiler, use a long (long long on a 32-bit one).
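A short sketch pulling the answers together, assuming C99 or later for long long and %lld:

#include <stdio.h>
#include <float.h>
#include <limits.h>

int main(void) {
    printf("DBL_MAX   = %g\n", DBL_MAX);      /* from <float.h>, not <limits.h> */
    printf("LLONG_MAX = %lld\n", LLONG_MAX);  /* 19 digits; integers stay exact,
                                                 unlike double above 2^53 */
    return 0;
}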
In my source code, if I write 1.23 as a literal, e.g. doThis(1.23), gcc assumes it's a double.
Rather than type doThis((float) 1.23), is there a way to use floats for decimal literals/constants unless otherwise specified in an individual source file?
Mega-bonus points, is there a way that works across (nearly) every C compiler?
Yes, the standard way is to write 1.23f. It works with every C compiler, since it is defined in ISO C99 section 6.4.4.2 Floating constants. ISO C90 and K&R have similar definitions.
try:
float fred = 0.37f;
try 123.4F for a float constant
Also gcc has the option -fsingle-precision-constant that tells the compiler to treat constants as single precision. See http://gcc.gnu.org/wiki/FloatingPointMath
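For completeness, a small sketch contrasting the two approaches (doThis is the function from the question):

void doThis(float x);

void demo(void) {
    doThis(1.23f);   /* portable: the f suffix makes the literal a float */
    doThis(1.23);    /* a double literal, converted at the call site; it is
                        single precision only under GCC's non-standard
                        -fsingle-precision-constant */
}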