Is there a 128 bit integer in gcc?

I want a 128-bit integer because I want to store the result of multiplying two 64-bit numbers. Is there any such thing in gcc 4.4 and above?

For GCC before C23, a primitive 128-bit integer type is only ever available on 64-bit targets, so you need to check for availability even if you have already detected a recent GCC version. In theory gcc could support TImode integers on machines where it would take 4x 32-bit registers to hold one, but I don't think there are any cases where it does.
In C++, consider a library such as boost::multiprecision::int128_t, which hopefully uses compiler built-in wide types if available, for zero overhead vs. using your own typedef (like GCC's __int128 or Clang's _BitInt(128)). See also @phuclv's answer on another question.
ISO C23 will let you write typedef unsigned _BitInt(128) u128;, modeled on clang's feature (originally called _ExtInt()) which works even on 32-bit machines; see a brief intro to it. Current GCC -std=gnu2x doesn't even support that syntax yet.
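For reference, a minimal sketch of that C23 syntax, assuming a compiler that already implements _BitInt (e.g. recent clang); mul64_c23 is a name invented for this example:
// C23 sketch; requires _BitInt support (recent clang, not current GCC).
typedef unsigned _BitInt(128) u128;

u128 mul64_c23(unsigned long long a, unsigned long long b) {
    return (u128)a * b;   // widen first so the multiply produces all 128 bits
}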
GCC 4.6 and later has a __int128 / unsigned __int128 defined as a built-in type. Use
#ifdef __SIZEOF_INT128__ to detect it.
GCC 4.1 and later define __int128_t and __uint128_t as built-in types. (You don't need #include <stdint.h> for these, either. Proof on Godbolt.)
I tested on the Godbolt compiler explorer for the first versions of compilers to support each of these 3 things (on x86-64). Godbolt only goes back to gcc4.1, ICC13, and clang3.0, so I've used <= 4.1 to indicate that the actual first support might have been even earlier.
      | legacy      | recommended(?)      | One way of detecting support
      | __uint128_t | [unsigned] __int128 | #ifdef __SIZEOF_INT128__
gcc   | <= 4.1      | 4.6                 | 4.6
clang | <= 3.0      | 3.1                 | 3.3
ICC   | <= 13       | <= 13               | 16 (Godbolt doesn't have 14 or 15)
If you compile for a 32-bit architecture like ARM, or x86 with -m32, no 128-bit integer type is supported with even the newest version of any of these compilers. So you need to detect support before using, if it's possible for your code to work at all without it.
The only direct CPP macro I'm aware of for detecting it is __SIZEOF_INT128__, but unfortunately some old compiler versions support the type without defining the macro. (And there's no macro for __uint128_t, only the gcc4.6-style unsigned __int128.) See also: How to know if __uint128_t is defined.
Some people still use ancient compiler versions like gcc4.4 on RHEL (RedHat Enterprise Linux), or similar crusty old systems. If you care about obsolete gcc versions like that, you probably want to stick to __uint128_t. And maybe detect 64-bitness in terms of sizeof(void*) == 8 as a fallback for __SIZEOF_INT128__ not being defined. (I think GNU systems always have CHAR_BIT==8, although I might be wrong about some DSPs). That will give a false negative on ILP32 ABIs on 64-bit ISAs (like x86-64 Linux x32, or AArch64 ILP32), but this is already just a fallback / bonus for people using old compilers that don't define __SIZEOF_INT128__.
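For illustration, one possible detection header along those lines. This is a sketch under the assumptions above; u128 and HAVE_UINT128 are names invented for this example, and the UINTPTR_MAX test stands in for sizeof(void*) == 8 since sizeof can't be used in #if:
#include <stdint.h>

#ifdef __SIZEOF_INT128__
typedef unsigned __int128 u128;        /* gcc >= 4.6 spelling */
#define HAVE_UINT128 1
#elif defined(__GNUC__) && UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFull
/* Fallback for old GCC without __SIZEOF_INT128__: assume a 64-bit target
   implies __uint128_t exists (false negative on ILP32 ABIs like x32). */
typedef __uint128_t u128;              /* gcc >= 4.1 built-in type */
#define HAVE_UINT128 1
#endif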
There might be some 64-bit ISAs where gcc doesn't define __int128, or maybe even some 32-bit ISAs where gcc does define __int128, but I'm not aware of any.
Internally, GCC implements these as TImode integers (see the GCC internals manual). (Tetra-integer = 4x the width of a 32-bit int, vs. DImode = double width, vs. SImode = plain int.) As the GCC manual points out, __int128 is supported on targets that support a 128-bit integer mode (TImode).
// __uint128_t is pre-defined equivalently to this
typedef unsigned uint128 __attribute__ ((mode (TI)));
There is an OImode in the manual, oct-int = 32 bytes, but current GCC for x86-64 complains "unable to emulate 'OI'" if you attempt to use it.
Random fact: ICC19 and g++/clang++ -E -dM define:
#define __GLIBCXX_TYPE_INT_N_0 __int128
#define __GLIBCXX_BITSIZE_INT_N_0 128
@MarcGlisse commented that that's the way you tell libstdc++ to handle extra integer types (overload abs, specialize type traits, etc.)
icpc defines that even with -xc (to compile as C, not C++), while g++ -xc and clang++ -xc don't. But compiling with actual icc (e.g. select C instead of C++ in the Godbolt language dropdown) doesn't define this macro.
The test function was:
#include <stdint.h> // for uint64_t
#define uint128_t __uint128_t
//#define uint128_t unsigned __int128
uint128_t mul64(uint64_t a, uint64_t b) {
return (uint128_t)a * b;
}
Compilers that support it all compile it efficiently, to:
mov rax, rdi
mul rsi
ret # return in RDX:RAX which mul uses implicitly

Ah, big integers are not C's forte.
GCC does have an unsigned __int128 / __int128 type, starting from version 4.something (I'm not sure exactly which; the answer above pins the __int128 spelling to 4.6). I do seem to recall, however, that there was an __int128_t typedef before that.
These are only available on 64-bit targets.
(Editor's note: this answer used to claim that gcc defined uint128_t and int128_t. None of the versions I tested on the Godbolt compiler explorer define those types without leading __, from gcc4.1 to 8.2, or clang or ICC.)

You could use a library which handles arbitrary or large precision values, such as the GNU MP Bignum Library.
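For example, a minimal sketch using GMP to form the full 128-bit product of two 64-bit values. This assumes an LP64 platform where unsigned long is 64 bits, and linking with -lgmp:
#include <stdint.h>
#include <gmp.h>

int main(void) {
    uint64_t a = 0xFFFFFFFFFFFFFFFFull, b = 0xFFFFFFFFFFFFFFFFull;
    mpz_t product;
    mpz_init_set_ui(product, a);       /* product = a */
    mpz_mul_ui(product, product, b);   /* product = a * b, exact 128-bit result */
    gmp_printf("%Zd\n", product);
    mpz_clear(product);
    return 0;
}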

Related

What are the requirements to use __float128?

I'm not really sure how I can find out if a system does or doesn't support the __float128 type. What system requirements have to be met for this to be available, and is there a way to check whether those requirements are met in C code? (For reference, my system supports __int128 but not __float128.)
Supposing you are asking about GCC, clause 6.12 of the documentation for GCC 10.2 says:
… __float128 is available on i386, x86_64, IA-64, and hppa HP-UX, as well as on PowerPC GNU/Linux targets that enable the vector scalar (VSX) instruction set. __float128 supports the 128-bit floating type. On i386, x86_64, PowerPC, and IA-64 other than HP-UX, __float128 is an alias for _Float128. On hppa and IA-64 HP-UX, __float128 is an alias for long double…
and:
… In order to use _Float128, __float128, and __ibm128 on PowerPC Linux systems, you must use the -mfloat128 option. It is expected in future versions of GCC that _Float128 and __float128 will be enabled automatically…
I do not see an in-source test for the type, such as a preprocessor macro that would indicate it.
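One common workaround is a configure-time probe rather than an in-source test: try to compile a tiny translation unit and have the build system define a project macro on success. HAVE_FLOAT128 here is a hypothetical name, not something GCC defines:
/* probe_float128.c: compiles only where __float128 is available.
   A build system can define HAVE_FLOAT128 if this succeeds. */
static __float128 x = 1.0;
int main(void) { return x > 0 ? 0 : 1; }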

Are there fixed-width float types in clang?

GCC provides _Float32 and _Float64 for fixed-width floats.
However, these are not standard, and don't exist in clang. I also can't find the equivalents for clang.
Some platforms can define float or double to not be 32 or 64 bits, so using these types is not an option.
Answering the question as posed, Clang's documented language extensions do not include analogs of GCC's _Float32 and _Float64 types. Do note, however, that even GCC provides those only on targets that support corresponding types natively.
On the other hand, inasmuch as clang is built on top of LLVM, it is worthwhile to consider LLVM's documentation of FP type representations:
The binary format of half, float, double, and fp128 correspond to the
IEEE-754-2008 specifications for binary16, binary32, binary64, and
binary128 respectively.
In that sense, then, Clang's equivalents of _Float64 and _Float32 are double and float, respectively. (Indeed, the same equivalence holds in GCC for substantially all targets where the explicit-width versions are supported.)
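If you rely on that equivalence, it is worth pinning down the assumption at compile time; a minimal C11 sketch:
#include <assert.h>
#include <float.h>

/* Fail the build if float/double are not IEEE-754 binary32/binary64. */
static_assert(sizeof(float) == 4 && FLT_MANT_DIG == 24,
              "float is not IEEE-754 binary32");
static_assert(sizeof(double) == 8 && DBL_MANT_DIG == 53,
              "double is not IEEE-754 binary64");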
Yes. They're called float and double.

Comparing double with literal value in C gives different results on 32 bit machines

Can someone please explain why:
double d = 1.0e+300;
printf("%d\n", d == 1.0e+300);
Prints "1" as expected on a 64-bit machine, but "0" on a 32-bit machine? (I got this using GCC 6.3 on Fedora 25)
To the best of my knowledge, floating-point literals are of type double and there is no type conversion happening.
Update: This only occurs when using the -std=c99 flag.
The C standard allows implementations to silently evaluate floating-point constants at long double precision in some expressions (notice: the precision, not the type). The governing macro is FLT_EVAL_METHOD, defined in <float.h> since C99.
Per C11 (N1570) §5.2.4.2.2, the semantics of value 2 are:
evaluate all operations and constants to the range and precision of
the long double type.
From the technical viewpoint, on the 32-bit x86 architecture GCC compiles the given code into x87 FPU instructions using 80-bit stack registers, while for the 64-bit x86-64 architecture it prefers the SSE unit (as scalars within XMM registers).
The current implementation was introduced in GCC 4.5 along with -fexcess-precision=standard option. From the GCC 4.5 release notes:
GCC now supports handling floating-point excess precision arising from
use of the x87 floating-point unit in a way that conforms to ISO C99.
This is enabled with -fexcess-precision=standard and with standards
conformance options such as -std=c99, and may be disabled using
-fexcess-precision=fast.
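To see which mode a given build uses, you can inspect FLT_EVAL_METHOD directly; a small sketch reproducing the question's test:
#include <float.h>
#include <stdio.h>

int main(void) {
    /* 0: evaluate to each type's own precision; 2: to long double precision. */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    double d = 1.0e+300;
    /* With FLT_EVAL_METHOD == 2 the constant is evaluated at long double
       precision while d was rounded to double, so this can print 0. */
    printf("%d\n", d == 1.0e+300);
    return 0;
}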

How do I identify x86 vs. x86_64 at compile time in gcc?

I want to compile part of my code only on x86 and x86_64 Linux, but not s390 Linux or others. How can I use macro defines in C to achieve this? I know linux identifies the Linux OS, and i386, i486 and i586 identify the CPU architecture. Is there an easy macro define to distinguish x86 Linux from x86_64 Linux? Thanks
You can detect whether or not you are in a 64 bit mode easily:
#if defined(__x86_64__)
/* 64 bit detected */
#endif
#if defined(__i386__)
/* 32 bit x86 detected */
#endif
If your compiler does not provide pre-defined macros and constants, you may define it yourself: gcc -D WHATEVER_YOU_WANT.
An additional reward: if you compile your code for, say, amd64, but you don't define amd64, you can compare the results (the version which uses the amd64-specific parts vs. the generic version) and see whether your amd64 optimization is worth the effort.
Another option instead of pre-processor macros is sizeof(void*) == 4 to detect 32-bit and/or sizeof(void*) == 8 for 64-bit. This is more portable, as it does not rely on any defined symbols or macros.
As long as your compiler has any level of optimization enabled, it should be able to see that this kind of statement is either always true or always false for the current build target, so the resulting binary should be no less efficient than if you'd used pre-processor macros.
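A sketch of that approach; note that, unlike a preprocessor check, both branches must still compile for every target:
#include <stdio.h>

int main(void) {
    if (sizeof(void *) == 8) {         /* condition is a compile-time constant, */
        puts("64-bit build");          /* so the dead branch is optimized away  */
    } else if (sizeof(void *) == 4) {  /* at -O1 and above                      */
        puts("32-bit build");
    }
    return 0;
}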

How to identify a 64 Bit build on Linux using the preprocessor?

I am about to port a Windows 32 Bit application to 64 Bit, but might decide to port the whole thing to Linux later.
The code contains sections which are dependent on the amount of memory available to the application (which depends on whether I'm creating a 32 or 64 Bit build), while the ability to compile a 32 Bit version of the code should be preserved for backward compatibility.
On Windows, I am able to simply wrap the respective code sections into preprocessor statements to ensure the right version of the code is compiled.
Unfortunately I have very little experience programming on the Linux platform, so the question occurred to me:
How am I able to identify a 64 Bit build on the Linux platform?
Is there any (preferably non-compiler-specific) preprocessor define I might check for this?
Thanks in advance!
Bjoern
Assuming you are using a recent GNU GCC compiler for IA32 (32-bit) or amd64 (the non-Itanium 64-bit target for AMD64 / x86-64 / EM64T / Intel 64), since very few people need a different compiler for Linux (the main alternatives being Intel's and PGI's):
There is the compiler line switch (which you can add to CFLAGS in a Makefile) -m64 / -m32 to control the build target.
For conditional C code:
#if defined(__LP64__) || defined(_LP64)
#define BUILD_64 1
#endif
or
#include <limits.h>
#if ( __WORDSIZE == 64 )
#define BUILD_64 1
#endif
While the first one is GCC-specific, the second one is more portable, but may not be correct in some bizarre environment I cannot think of.
At present, both should work correctly for IA-32 / x86 (x86-32) and x86-64 / amd64 environments. I think they may work for IA-64 (Itanium) as well.
Also see Andreas Jaeger's paper from GCC Developer's Summit entitled, Porting to 64-bit GNU/Linux Systems which described 64-bit Linux environments in additional detail.
According to the GCC Manual:
__LP64__
_LP64
These macros are defined, with value 1, if (and only if) the compilation is for a target where long int and pointer both use 64 bits and int uses 32 bits.
That's what you need, right?
Also, you can try
#define __64BIT (__SIZEOF_POINTER__ == 8)
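A sketch of how that might be used; note that __SIZEOF_POINTER__ is compiler-specific (GCC and clang define it), and names starting with two underscores are reserved for the implementation, so a project-local spelling is safer:
#include <stdio.h>

/* Project-local name, staying out of the reserved __ namespace. */
#define PTR_IS_64BIT (__SIZEOF_POINTER__ == 8)

int main(void) {
#if PTR_IS_64BIT
    puts("64-bit pointers");
#else
    puts("32-bit pointers");
#endif
    return 0;
}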

Resources