Difference between INT_MAX and __INT_MAX__ in C

What is the difference between the two? __INT_MAX__ is defined without including any header, as far as I know, while INT_MAX is defined in limits.h. But when I include that header, INT_MAX gets expanded to __INT_MAX__ anyway (or so VSCode says). Why would I ever use the limits.h one when it gets expanded to the other one?

You should always use INT_MAX, as that is the macro constant that is defined by the ISO C standard.
The macro constant __INT_MAX__ is not specified by ISO C, so it should not be used if you want your code to be portable. That macro is simply an implementation detail of the compiler you are using. Other compilers will probably not define it, and will implement INT_MAX in some other way.

__INT_MAX__ is an implementation defined macro, which means not all systems may have it. In particular, GCC defines this macro but MSVC does not.
On the other hand, INT_MAX is defined by the C standard and is guaranteed to be present in limits.h for any conforming compiler.
So for portability, use INT_MAX.
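For instance, a minimal program (standard C; the #ifdef branch exists only to show that __INT_MAX__ may or may not be predefined) looks like this:
#include <limits.h>
#include <stdio.h>
int main(void) {
    /* INT_MAX always comes from <limits.h> on a conforming implementation. */
    printf("INT_MAX     = %d\n", INT_MAX);
#ifdef __INT_MAX__
    /* __INT_MAX__ is a compiler-specific predefined macro (e.g. GCC, Clang). */
    printf("__INT_MAX__ = %d\n", __INT_MAX__);
#endif
    return 0;
}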

Why would I ever use the limits.h one when it gets expanded to the other one?
limits.h is standard and portable.
Every implementation of the C language is free to create the value of macros such as INT_MAX as it sees fit. The __INT_MAX__ value you are seeing is an artifact of your particular compiler, and maybe even the particular version of the compiler you're using.

To add to the other answers, when you're writing code that will run on several platforms, it really pays to stick to the standards. If you don't, then when a new platform comes along you have a lot of work to do adapting your code, and the best way to do that is usually to change it to conform to the standard. This work is dull and uninteresting, and well worth avoiding by doing things right to start with.
I work on a mathematical modeller that was originally written in the 1980s on VAX/VMS, and in its early days supported several 68000 platforms, including Apollo/Domain. Nowadays, it runs on 64-bit Windows, Linux, macOS, Android and iOS, none of which existed when it was created.

__INT_MAX__ is a predefined macro in the C preprocessor that specifies the maximum value of an int type on a particular platform. This value is implementation-defined and may vary across different platforms.
INT_MAX is a constant defined in the limits.h header file that specifies the maximum value of an int type. In GCC's limits.h, for example, it is defined as follows:
#define INT_MAX __INT_MAX__
The limits.h header file is part of the C standard library and provides constants that specify the limits of the integer types, such as the minimum and maximum values of int, long, and long long.
INT_MAX is defined in terms of __INT_MAX__ here because __INT_MAX__ is a predefined macro that already encodes the maximum value of an int on that platform, so INT_MAX is simply an alias for that value.
You can use either __INT_MAX__ or INT_MAX to get the maximum value of an int type, but it is generally recommended to use INT_MAX since it is defined in a standard library header file and is therefore more portable.

INT_MAX is a macro that specifies the largest value a variable of type int can store, and INT_MIN specifies the smallest.
The values of INT_MAX and INT_MIN vary from compiler to compiler. Typical values on a compiler where int is stored in 32 bits are:
Value of INT_MAX is +2147483647.
Value of INT_MIN is -2147483648.
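Since the exact values are implementation-defined, code that assumes a 32-bit int can verify that assumption at compile time; a small sketch:
#include <limits.h>
/* Abort compilation if int cannot hold the values this code assumes.
   2147483647 is the maximum of a 32-bit two's complement int. */
#if INT_MAX < 2147483647
#error "This code assumes int is at least 32 bits wide"
#endif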

In the C programming language, INT_MAX is a macro that expands to the maximum value that can be stored in a variable of type int. This value is implementation-defined, meaning that it may vary depending on the specific C implementation being used. On most systems, int is a 32-bit data type and INT_MAX is defined as 2147483647, which is the maximum value that can be stored in a 32-bit, two's complement integer.
On the other hand, __INT_MAX__ is a predefined macro that represents the maximum value that can be stored in a variable of type int on the system where the C program is being compiled. Like INT_MAX, the value of __INT_MAX__ is implementation-defined and may vary depending on the specific C implementation being used. However, __INT_MAX__ is built into the compiler itself, whereas INT_MAX is defined in a header file (limits.h) that the preprocessor pulls in at compile time.
In general, it is recommended to use INT_MAX rather than __INT_MAX__ in C programs, as INT_MAX is portable and will work on any system, whereas __INT_MAX__ is specific to the system where the program is being compiled.

INT_MAX (macro): the largest value a variable of type int can store,
whereas
INT_MIN is the smallest value a variable of type int can store.

Related

How can <stdint.h> types guarantee bit width?

Since C is a loosely typed language and stdint.h defines just typedefs (I assume), how can the widths of ints be guaranteed?
What I am asking is about the implementation rather than the library usage.
How can <stdint.h> types guarantee bit width?
C can't and C does not require it.
C does require minimum widths.
The types below, taken individually, are required only on systems that support them; they must have no padding bits, and the signed variants must use two's complement representation.
(u)int8_t, (u)int16_t, (u)int32_t, (u)int64_t
An implementation may optionally have other sizes, like uint24_t.
The following are always required:
(u)int_least8_t, (u)int_least16_t, (u)int_least32_t, (u)int_least64_t
stdint.h is part of the C implementation, which defines the typedefs using whatever underlying types are appropriate for that implementation. It's not a portable file you can carry to any C implementation you like.
A C compiler eventually needs to compile to machine code. Machine code only has hard, fixed-width types like a 32-bit int, 64-bit int etc. (or rather, it has memory blocks of that size + operations that operate on memory of that size and either treat it as signed or unsigned)
So the people who create your compiler are the ones who define what your compiler actually uses under the hood when you ask it for an int, and the stdint.h header file is a file they write. It is basically documentation of what they did. They know that e.g. their long type is 64 bits in size, so add a typedef long int64_t; etc.
On a system where int is 16 bits and long is 32 bits, they could even have their compiler understand a special internal type and e.g. name it __int64 and then make stdint.h contain a typedef __int64 int64_t;.
The C standard just defines that there has to be a stdint.h header provided with your compiler, and that if you define int64_t in there, it has to map to a data type that is the right size.
Theoretically, one could build everything in stdint.h into the compiler instead (so rather than using an intermediate name like __int64 and typedef'ing it to int64_t, the compiler could understand int64_t directly). But by using the header approach, old code that was written before stdint.h existed and defined its own type named int64_t can simply not include stdint.h and will thus keep compiling. Names starting with two underscores have historically been reserved for the compiler maker, so there is no chance of existing C code already using the name __int64.
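To make that concrete, here is a simplified, purely illustrative fragment of what such a header might contain on a platform where int is 32 bits and long is 64 bits (real headers are more elaborate and compiler-specific):
/* Hypothetical excerpt from an implementation's <stdint.h>.
   The typedefs simply document size choices the compiler already made. */
typedef signed char    int8_t;
typedef short          int16_t;
typedef int            int32_t;
typedef long           int64_t;
typedef unsigned char  uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int   uint32_t;
typedef unsigned long  uint64_t;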

What hardware specific defines does the c89 standard require the implementation to provide?

Does the c(89) standard specify certain hardware properties that must be defined by the implementation? For example on my Linux system there is a define for __WORDSIZE (defined as 64) - can I expect __WORDSIZE to be defined on every system complying with c(89)? Are there other hardware specific values that the c standard requires an implementation to provide?
C89 specifies limits provided by limits.h, see here for the freely accessible draft text.
As already answered by alk, the only one that's truly hardware specific is CHAR_BIT; the others are implementation specific.
As for __WORDSIZE, this isn't a standard define, and it's questionable what a word size should be.
You can always determine the number of bits in a type using an ingenious macro found for example in this answer, quoting it here:
/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
With this, you can determine the size in bits of an unsigned int like this:
IMAX_BITS((unsigned)-1)
But is this really the word size? On x86_64, the result will be 32, while pointers are 64 bits.
With C99 and later, you could instead use
IMAX_BITS((uintptr_t)-1)
But be aware that uintptr_t is only required to be able to hold a pointer -- it could be larger.
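Putting the pieces together, a small complete program (C99; the reported widths are of course implementation-specific) could look like this:
#include <stdio.h>
#include <stdint.h>
/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
     + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
int main(void) {
    /* Width in bits of unsigned int and of uintptr_t on this implementation. */
    printf("unsigned int: %d bits\n", (int)IMAX_BITS((unsigned)-1));
    printf("uintptr_t:    %d bits\n", (int)IMAX_BITS((uintptr_t)-1));
    return 0;
}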
CHAR_BIT comes to my mind (see 5.2.4.2.1/1 of the C11 Standard draft).
It defines the width (number of bits) of the smallest object that is not a bit-field, i.e. of a byte. On most systems this is 8.
Still, whether this is necessarily to be taken as hardware property is arguable.
C tries to abstract hardware properties. That is probably a (main?) reason why it spread.

Are there any well-established/standardized ways to use fixed-width integers in C89?

Some background:
the header stdint.h is part of the C standard since C99. It includes typedefs that are ensured to be 8, 16, 32, and 64-bit long integers, both signed and unsigned. This header is not part of the C89 standard, though, and I haven't yet found any straightforward way to ensure that my datatypes have a known length.
Getting to the actual topic
The following code is how SQLite (written in C89) defines 64-bit integers, but I don't find it convincing. That is, I don't think it's going to work everywhere. Worst of all, it could fail silently:
/*
** CAPI3REF: 64-Bit Integer Types
** KEYWORDS: sqlite_int64 sqlite_uint64
**
** Because there is no cross-platform way to specify 64-bit integer types
** SQLite includes typedefs for 64-bit signed and unsigned integers.
*/
#ifdef SQLITE_INT64_TYPE
typedef SQLITE_INT64_TYPE sqlite_int64;
typedef unsigned SQLITE_INT64_TYPE sqlite_uint64;
#elif defined(_MSC_VER) || defined(__BORLANDC__)
typedef __int64 sqlite_int64;
typedef unsigned __int64 sqlite_uint64;
#else
typedef long long int sqlite_int64;
typedef unsigned long long int sqlite_uint64;
#endif
typedef sqlite_int64 sqlite3_int64;
typedef sqlite_uint64 sqlite3_uint64;
So, this is what I've been doing so far:
1. Checking that the "char" data type is 8 bits long, since it's not guaranteed to be. If the preprocessor macro "CHAR_BIT" is not equal to 8, compilation fails.
2. Now that "char" is guaranteed to be 8 bits long, I create a struct containing an array of several unsigned chars, which correspond to several bytes in the integer.
3. I write "operator" functions for my datatypes: addition, multiplication, division, modulo, conversion from/to string, etc.
I have abstracted this process in a header file, which is the best I can do with what I know, but I wonder if there is a more straightforward way to achieve this.
I'm asking because I want to write a portable C library.
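For reference, a heavily simplified sketch of that struct-of-bytes approach (hypothetical names; C89, and it assumes the CHAR_BIT == 8 check from step 1 above) might look like this:
/* Hypothetical sketch of a portable unsigned 64-bit type built from
   unsigned chars, in the spirit of the steps above. */
#include <limits.h>
#if CHAR_BIT != 8
#error "this sketch assumes 8-bit chars"
#endif
typedef struct {
    unsigned char byte[8];   /* byte[0] is the least significant byte */
} my_uint64;
/* r = a + b, wrapping on overflow, computed byte by byte with a carry */
static void my_uint64_add(my_uint64 *r, const my_uint64 *a, const my_uint64 *b)
{
    unsigned int carry = 0;
    int i;
    for (i = 0; i < 8; i++) {
        unsigned int sum = (unsigned int)a->byte[i] + b->byte[i] + carry;
        r->byte[i] = (unsigned char)(sum & 0xFFu);
        carry = sum >> 8;
    }
}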
First, you should ask yourself whether you really need to support implementations that don't provide <stdint.h>. It was standardized in 1999, and even many pre-C99 implementations are likely to provide it as an extension.
Assuming you really need this, Doug Gwyn, a member of the ISO C standard committee, created an implementation of several of the new headers for C9x (as C99 was then known), compatible with C89/C90. The headers are in the public domain and should be reasonably portable.
http://www.lysator.liu.se/(nobg)/c/q8/index.html
(As I understand it, the name "q8" has no particular meaning; he just chose it as a reasonably short and unique search term.)
One rather nasty quirk of integer types in C stems from the fact that many "modern" implementations will have, for at least one size of integer, two incompatible signed types of that size with the same bit representation and likewise two incompatible unsigned types. Most typically the types will be 32-bit "int" and "long", or 64-bit "long" and "long long". The "fixed-sized" types will typically alias to one of the standard types, though implementations are not consistent about which one.
Although compilers used to assume that accesses to one type of a given size might affect objects of the other, the authors of the Standard didn't mandate that they do so (probably because there would have been no point ordering people to do things they would do anyway and they couldn't imagine any sane compiler writer doing otherwise; once compilers started doing so, it was politically difficult to revoke that "permission"). Consequently, if one has a library which stores data in a 32-bit "int" and another which reads data from a 32-bit "long", the only way to be assured of correct behavior is to either disable aliasing analysis altogether (probably the sanest choice while using gcc) or else add gratuitous copy operations (being careful that gcc doesn't optimize them out and then use their absence as an excuse to break code--something it sometimes does as of 6.2).
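To illustrate the hazard described above, here is a small, deliberately broken sketch: on an ILP32 target, int and long are both 32 bits, yet writing through one type and reading through the other is still undefined behavior under the aliasing rules:
#include <stdio.h>
/* Reads a value through a long pointer. */
static long read_as_long(const long *p) { return *p; }
int main(void) {
    int x = 42;
    /* Even when int and long have identical 32-bit representations, this
       cast-and-read violates the effective type rules, so an optimizer that
       performs type-based aliasing analysis is free to miscompile it. */
    printf("%ld\n", read_as_long((const long *)&x));
    return 0;
}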

Should enum never be used in an API?

I am using a C library provided to me already compiled. I have limited information on the compiler, version, options, etc., used when compiling the library. The library interface uses enum both in structures that are passed and directly as passed parameters.
The question is: how can I assure or establish that when I compile code to use the provided library, that my compiler will use the same size for those enums? If it does not, the structures won't line up, and the parameter passing may be messed up, e.g. long vs. int.
My concern stems from the C99 standard, which states that the enum type:
shall be compatible with char, a signed integer type, or an unsigned
integer type. The choice of type is implementation-defined, but shall
be capable of representing the values of all the members of the
enumeration.
As far as I can tell, so long as the largest value fits, the compiler can pick any type it darn well pleases, effectively on a whim, potentially varying not only between compilers, but different versions of the same compiler and/or compiler options. It could pick 1, 2, 4, or 8-byte representations, resulting in potential incompatibilities in both structures and parameter passing. (It could also pick signed or unsigned, but I don't see a mechanism for that being a problem in this context.)
Am I missing something here? If I am not missing something, does this mean that enum should never be used in an API?
Update:
Yes, I was missing something. While the language specification doesn't help here, as noted by @Barmar the Application Binary Interface (ABI) does. Or if it doesn't, then the ABI is deficient. The ABI for my system indeed specifies that an enum must be a signed four-byte integer. If a compiler does not obey that, then it is a bug. Given a complete ABI and compliant compilers, enum can be used safely in an API.
APIs that use enum are depending on the assumption that the compiler will be consistent, i.e. given the same enum declaration, it will always choose the same underlying type.
While the language standard doesn't specifically require this, it would be quite perverse for a compiler to do anything else.
Furthermore, all compilers for a particular OS need to be consistent with the OS's ABI. Otherwise, you would have far more problems, such as the library using 64-bit int while the caller uses 32-bit int. Ideally, the ABI should constrain the representation of enums, to ensure compatibility.
More generally, the language specification only ensures compatibility between programs compiled with the same implementation. The ABI ensures compatibility between programs compiled with different implementations.
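If one wants extra insurance on top of the ABI, a common trick (a sketch with hypothetical names, not something from the library in question) is a compile-time check that the enum really has the size the interface assumes:
/* Fail the build if the enum is not the 4-byte integer the ABI is assumed to
   mandate; works in C89/C99 without _Static_assert (negative array size). */
enum my_api_status { MY_API_OK = 0, MY_API_ERROR = 1 };
typedef char my_api_status_size_check[(sizeof(enum my_api_status) == 4) ? 1 : -1];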
From the question:
The ABI for my system indeed specifies that an enum must be a signed four-byte integer. If a compiler does not obey that, then it is a bug.
I'm surprised about that. I suspect that in reality your compiler will select a 64-bit (8-byte) size for your enum if you define an enumerated constant with a value larger than 2^32.
On my platforms (MinGW gcc 4.6.2 targeting x86, and gcc 4.4 on Linux targeting x86_64), the following code says that I get both 4- and 8-byte enums:
#include <stdio.h>
enum { a } foo;
enum { b = 0x123456789 } bar;
int main(void) {
    printf("%lu\n", sizeof(foo));
    printf("%lu", sizeof(bar));
    return 0;
}
I compiled with -Wall -std=c99 switches.
I guess you could say that this is a compiler bug. But the alternatives of removing support for enumerated constants larger than 2^32 or always using 8-byte enums both seem undesirable.
Given that these common versions of GCC don't provide a fixed size enum, I think the only safe action in general is to not use enums in APIs.
Further notes for GCC
Compiling with "-pedantic" causes the following warnings to be generated:
main.c:4:8: warning: integer constant is too large for 'long' type [-Wlong-long]
main.c:4:12: warning: ISO C restricts enumerator values to range of 'int' [-pedantic]
The behavior can be tailored via the -fshort-enums and -fno-short-enums switches.
Results with Visual Studio
Compiling the above code with VS 2008 x86 causes the following warnings:
warning C4341: 'b' : signed value is out of range for enum constant
warning C4309: 'initializing' : truncation of constant value
And with VS 2013 x86 and x64, just:
warning C4309: 'initializing' : truncation of constant value

How do I get DOUBLE_MAX?

AFAIK, C supports just a few data types:
int, float, double, char, void, enum.
I need to store a number that could reach into the high 10 digits. Since I'm getting a low 10 digit number from INT_MAX, I suppose I need a double.
<limits.h> doesn't have a DOUBLE_MAX. I found a DBL_MAX on the internet that said this is LEGACY and also appears to be C++. Is double what I need? Why is there no DOUBLE_MAX?
DBL_MAX is defined in <float.h>. Its availability in <limits.h> on unix is what is marked as "(LEGACY)".
(linking to the unix standard even though you have no unix tag since that's probably where you found the "LEGACY" notation, but much of what is shown there for float.h is also in the C standard back to C89)
You get the integer limits in <limits.h> or <climits>. Floating point characteristics are defined in <float.h> for C. In C++, the preferred version is usually std::numeric_limits<double>::max() (for which you #include <limits>).
As to your original question, if you want a larger integer type than long, you should probably consider long long. This isn't officially included in C++98 or C++03, but is part of C99 and C++11, so all reasonably current compilers support it.
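A quick illustration of where each limit lives (a minimal sketch; the printed values are implementation-defined, and long long with %lld requires C99 or later):
#include <stdio.h>
#include <float.h>   /* DBL_MAX */
#include <limits.h>  /* LLONG_MAX */
int main(void) {
    /* Floating-point limits come from <float.h>, integer limits from <limits.h>. */
    printf("DBL_MAX   = %g\n", DBL_MAX);
    printf("LLONG_MAX = %lld\n", LLONG_MAX);
    return 0;
}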
It's in the standard float.h include file. You want DBL_MAX.
Using double to store large integers is dubious; the largest integer that can be stored reliably in double is much smaller than DBL_MAX. You should use long long, and if that's not enough, you need your own arbitrary-precision code or an existing library.
You are looking for the float.h header.
INT_MAX is just a definition in limits.h. You don't make it clear whether you need to store an integer or a floating point value. If it's an integer, use long on implementations where long is 64 bits, and long long otherwise.
