How to Declare a 32-bit Integer in C

What's the best way to declare an integer type that is always 4 bytes on any platform? I'm not worried about particular devices or old machines that have a 16-bit int.

#include <stdint.h>
int32_t my_32bit_int;

C doesn't concern itself very much with the exact sizes of integer types. C99 introduced the header stdint.h, which is probably your best bet. Include that and you can use e.g. int32_t. Of course, not all platforms might support it.

Corey's answer is correct for "best", in my opinion, but a simple "int" will also work in practice (given that you're ignoring systems with 16-bit int). At this point, so much code depends on int being 32-bit that system vendors aren't going to change it.
(See also why long is 32-bit on lots of 64-bit systems and why we have "long long".)
One of the benefits of using int32_t, though, is that you're not perpetuating this problem!

You could hunt down a copy of Brian Gladman's brg_types.h if you don't have stdint.h.
brg_types.h will discover the sizes of the various integers on your platform and will create typedefs for the common sizes: 8, 16, 32 and 64 bits.

You need to include inttypes.h instead of stdint.h because stdint.h is not available on some platforms such as Solaris, and inttypes.h will include stdint.h for you on systems such as Linux.
If you include inttypes.h then your code is more portable between Linux and Solaris.
This link explains what I'm saying:
HP link about inttypes.h
And this link has a table showing why you don't want to use long or int if you have an intention of a certain number of bits being present in your data type.
IBM link about portable data types
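For example, a minimal C99 sketch that uses inttypes.h both for the typedef and for its printf format macros (one practical reason to prefer inttypes.h over stdint.h):
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t n = 123456;
    /* PRId32 expands to the correct printf conversion specifier for int32_t */
    printf("n = %" PRId32 "\n", n);
    return 0;
}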

C99 or later
Use <stdint.h>.
If your implementation supports 2's complement 32-bit integers then it must define int32_t.
If not then the next best thing is int_least32_t which is an integer type supported by the implementation that is at least 32 bits, regardless of representation (two's complement, one's complement, etc.).
There is also int_fast32_t which is an integer type at least 32-bits wide, chosen with the intention of allowing the fastest operations for that size requirement.
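A short sketch of all three in use (int32_t compiles only where the implementation provides a matching type; the least/fast variants are always available in C99):
#include <stdint.h>

int32_t exact;        /* exactly 32 bits, two's complement; optional */
int_least32_t least;  /* smallest type with at least 32 bits; required */
int_fast32_t fast;    /* fastest type with at least 32 bits; required */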
ANSI C
You can use long, which is guaranteed to be at least 32-bits wide as a result of the minimum range requirements specified by the standard.
If you would rather use the smallest integer type to fit a 32-bit number, then you can use preprocessor statements like the following with the macros defined in <limits.h>:
#define TARGET_MAX 2147483647L
#if SCHAR_MAX >= TARGET_MAX
typedef signed char int32;
#elif SHRT_MAX >= TARGET_MAX
typedef short int32;
#elif INT_MAX >= TARGET_MAX
typedef int int32;
#else
typedef long int32;
#endif
#undef TARGET_MAX

If stdint.h is not available for your system, make your own. I always have a file called "types.h" that has typedefs for all the signed/unsigned 8-, 16-, and 32-bit values.
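A minimal sketch of what such a types.h might contain, assuming the common ILP32/LP64 sizes; these underlying types are assumptions and should be verified per platform (e.g. with limits.h checks like the ones shown earlier):
/* types.h -- hand-rolled fixed-width typedefs (assumed sizes; verify per platform) */
typedef signed char     int8;
typedef unsigned char   uint8;
typedef short           int16;
typedef unsigned short  uint16;
typedef int             int32;
typedef unsigned int    uint32;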

With <stdint.h> you can declare 32-bit variables, signed or unsigned:
int32_t variable_name;
uint32_t variable_name;

Also, depending on your target platforms, you can use autotools for your build system.
It will check whether stdint.h/inttypes.h exist and, if they don't, can create appropriate typedefs in a "config.h".
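On the C side this typically looks like the sketch below; HAVE_STDINT_H is the macro Autoconf conventionally defines when configure.ac checks for the header, and the fallback typedef here is only a placeholder that a real configure-time check must validate:
#include "config.h"   /* generated by configure */

#ifdef HAVE_STDINT_H
#include <stdint.h>
#else
typedef int int32_t;   /* placeholder: only valid where int is known to be 32 bits */
#endif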

stdint.h is the obvious choice, but it's not necessarily available.
If you're using a portable library, it's possible that it already provides portable fixed-width integers.
For example, SDL has Sint32 (S is for “signed”), and GLib has gint32.

Can sized integer types be used interchangeably with typedefs?

Visual Studio stdint.h seems to have the following typedefs:
typedef signed char int8_t;
typedef short int16_t;
typedef int int32_t;
typedef long long int64_t;
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;
However, sized integer types use the __intN syntax, as described here: https://learn.microsoft.com/en-us/cpp/cpp/int8-int16-int32-int64?view=msvc-170
Is there any difference (for example) between using int32_t versus using __int32?
I guess I am a little confused: if the purpose of the int32_t typedef is to be the standard C99 name that abstracts away compiler-specific syntax (so you can use int32_t in both Visual Studio and gcc C code, for example), then I'm not sure why the typedef in Visual Studio wouldn't be: typedef __int32 int32_t;
Say that the codebase has the following:
#ifdef _MSC_VER
typedef __int64 PROD_INT64;
#else
typedef int64_t PROD_INT64;
#endif
And it uses PROD_INT64 everywhere for a 64-bit signed integer, and it is compiled in both Visual Studio and gcc.
Can it simply use the int64_t in both Visual Studio and gcc? It would seem this is changing __int64 for long long in Visual Studio.
Q: Is there any difference (for example) between using int32_t versus using __int32?
A: Yes:
int32_t and friends are standard fixed width integer types (since C99)
"__" is "reserved":
https://stackoverflow.com/a/25090719/421195
C standard says (section 7.1.3):
All identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use. All identifiers that begin with an underscore are always reserved for use as identifiers with file scope in both the ordinary and tag name spaces.
What this means is that, for example, the implementation (either the compiler or a standard header) can use the name __FOO for anything it likes. If you define that identifier in your own code, your program's behavior is undefined. If you're "lucky", you'll be using an implementation that doesn't happen to define it, and your program will work as expected.
In other words, for any NEW code, you should use "int32_t".
sized integer types use the __intN syntax, as described here: https://learn.microsoft.com/en-us/cpp/cpp/int8-int16-int32-int64?view=msvc-170
Your wording suggests that you think the __intN syntax is somehow more correct or fundamental than all other alternatives. That's not what the doc you link says. It simply defines what those particular forms mean. In Microsoft C, those are preferred over Microsoft's older, single-underscore forms (_intN), but there's no particular reason to think that they are to be preferred over other alternatives, such as the intN_t forms available when you include stdint.h. The key distinguishing characteristic of the __intN types is that they are built in, available without including any particular header.
Is there any difference (for example) between using int32_t versus using __int32?
On Windows, int32_t is the same type as __int32, but the former is standard C, whereas the latter is not.
You need to include stdint.h to use int32_t, whereas __int32 is built in to MSVC.
I'm not sure why the typedef in Visual Studio wouldn't be: typedef __int32 int32_t;
It's an implementation decision that may or may not have a well-considered reason. As long as the implementation provides correct definitions -- and there's no reason to think MSVC is doing otherwise -- you shouldn't care about the details.
Say that the codebase has the following:
#ifdef _MSC_VER
typedef __int64 PROD_INT64;
#else
typedef int64_t PROD_INT64;
#endif
And it uses PROD_INT64 everywhere for a 64-bit signed integer, and it is compiled in both Visual Studio and gcc.
Can it simply use the int64_t in both Visual Studio and gcc?
Yes, and that's certainly what I would do.
It would seem this is changing __int64 for long long in Visual Studio.
Which is a distinction without a difference. Both of those spellings give you the same type in MSVC.
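In other words, assuming a C99-capable toolchain on both compilers, the whole conditional block can collapse to:
#include <stdint.h>

typedef int64_t PROD_INT64;  /* same underlying type as __int64 on MSVC */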
From my top comments ...
stdint.h is provided by the compiler rather than libc or the OS. It provides portable guarantees (e.g. int32_t will be 32 bits). The compiler designers could do:
typedef __int32 int32_t;
Or, they can do:
typedef int int32_t;
The latter is what most stdint.h files do (since they don't have the __int* types).
Probably, the VS compiler designers just grabbed a copy of the standard stdint.h and didn't bother to change it.
Your point is valid, it's just a design choice (or lack of it) that the compiler writers made. Just use the standard/portable int32_t and don't worry ;-)
Historical note: stdint.h is relatively recent. In the 1980s, MS had [16 bit] MS/DOS. Many mc68000 based micros at the time defined int to be 32 bits. But, on the MS C compiler, int was 16 bits because that fit the 8086 arch best.
stdint.h didn't exist back then. But, if it did, it would need:
typedef long int32_t;
because long was the only way to define a 32 bit integer for the MS 8086 compiler.
When 64 bit machines became available, POSIX compliant machines allowed long to "float" with the arch/mode. It was 32 bits on 32-bit arches, and 64 bits on 64-bit arches. This is the LP64 memory model.
Here's the original rationale: https://unix.org/version2/whatsnew/lp64_wp.htm
But, because of MS's longstanding use of long to be a 32 bit integer, it couldn't do this. Too many programs written in the 8086 days would break if recompiled.
IIRC [and I could be wrong]:
MS came up with __int64 and LONGLONG as types.
They had to define [yet another] abstract type for pointers [remember near and far pointers, anyone ;-)?]
So, IMO, it was, in part, because of all the MS craziness that prompted the creation of stdint.h in the first place.

How can <stdint.h> types guarantee bit width?

Since C is a loosely typed language and stdint.h defines just typedefs (I assume), how can the widths of ints be guaranteed?
What I am asking is about the implementation rather than the library usage.
How can <stdint.h> types guarantee bit width?
C can't, and the C standard does not require it to.
The standard does require minimum widths, though.
The types below, individually, are required only on implementations that support them: they must have no padding bits, and the signed variants must use two's complement.
(u)int8_t, (u)int16_t, (u)int32_t, (u)int64_t
An implementation may optionally have other sizes, like uint24_t.
The types below are always required:
(u)int_least8_t, (u)int_least16_t, (u)int_least32_t, (u)int_least64_t
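If you want to double-check these guarantees at compile time, a sketch using the classic negative-array-size idiom (predating C11's _Static_assert) works:
#include <stdint.h>

/* Compilation fails if int_least32_t cannot represent a 32-bit value */
typedef char assert_least32_width[(INT_LEAST32_MAX >= 0x7FFFFFFFL) ? 1 : -1];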
stdint.h is part of the C implementation, which defines the typedefs using whatever underlying types are appropriate for that implementation. It's not a portable file you can carry to any C implementation you like.
A C compiler eventually needs to compile to machine code. Machine code only has hard, fixed-width types like a 32-bit int, 64-bit int etc. (or rather, it has memory blocks of that size + operations that operate on memory of that size and either treat it as signed or unsigned)
So the people who create your compiler are the ones who define what your compiler actually uses under the hood when you ask it for an int, and the stdint.h header file is a file they write. It is basically documentation of what they did. They know that e.g. their long type is 64 bits in size, so they add a typedef long int64_t; etc.
On a system where int is 16 bits and long is 32 bits, they could even have their compiler understand a special internal type and e.g. name it __int64 and then make stdint.h contain a typedef __int64 int64_t;.
The C standard just defines that there has to be a stdint.h header provided with your compiler, and that if it defines int64_t in there, it has to map to a data type of the right size.
Theoretically one could just build everything in stdint.h into the compiler instead (so instead of using an intermediate name like __int64 and typedefing it to int64_t, they could use int64_t directly). But with the intermediate-name approach, old code that was written before stdint.h existed and defined its own type named int64_t can simply not include stdint.h and will thus keep compiling. Names starting with two underscores have historically been reserved for the compiler maker, so there is no chance of existing C code already using the name __int64.

Are there any well-established/standardized ways to use fixed-width integers in C89?

Some background:
the header stdint.h is part of the C standard since C99. It includes typedefs that are guaranteed to be 8-, 16-, 32-, and 64-bit integers, both signed and unsigned. This header is not part of the C89 standard, though, and I haven't yet found any straightforward way to ensure that my datatypes have a known length.
Getting to the actual topic
The following code is how SQLite (written in C89) defines 64-bit integers, but I don't find it convincing. That is, I don't think it's going to work everywhere. Worst of all, it could fail silently:
/*
** CAPI3REF: 64-Bit Integer Types
** KEYWORDS: sqlite_int64 sqlite_uint64
**
** Because there is no cross-platform way to specify 64-bit integer types
** SQLite includes typedefs for 64-bit signed and unsigned integers.
*/
#ifdef SQLITE_INT64_TYPE
typedef SQLITE_INT64_TYPE sqlite_int64;
typedef unsigned SQLITE_INT64_TYPE sqlite_uint64;
#elif defined(_MSC_VER) || defined(__BORLANDC__)
typedef __int64 sqlite_int64;
typedef unsigned __int64 sqlite_uint64;
#else
typedef long long int sqlite_int64;
typedef unsigned long long int sqlite_uint64;
#endif
typedef sqlite_int64 sqlite3_int64;
typedef sqlite_uint64 sqlite3_uint64;
So, this is what I've been doing so far:
Checking that the char data type is 8 bits long, since that's not guaranteed. If the CHAR_BIT macro from limits.h is not equal to 8, compilation fails (see the sketch after this list).
Now that char is guaranteed to be 8 bits long, I create a struct containing an array of several unsigned chars, which correspond to the bytes of the integer.
I write "operator" functions for my datatypes: addition, multiplication, division, modulo, conversion from/to string, etc.
I have abstracted this process in a header file, which is the best I can do with what I know, but I wonder if there is a more straightforward way to achieve this.
I'm asking because I want to write a portable C library.
First, you should ask yourself whether you really need to support implementations that don't provide <stdint.h>. It was standardized in 1999, and even many pre-C99 implementations are likely to provide it as an extension.
Assuming you really need this, Doug Gwyn, a member of the ISO C standard committee, created an implementation of several of the new headers for C9x (as C99 was then known), compatible with C89/C90. The headers are in the public domain and should be reasonably portable.
http://www.lysator.liu.se/(nobg)/c/q8/index.html
(As I understand it, the name "q8" has no particular meaning; he just chose it as a reasonably short and unique search term.)
One rather nasty quirk of integer types in C stems from the fact that many "modern" implementations will have, for at least one size of integer, two incompatible signed types of that size with the same bit representation and likewise two incompatible unsigned types. Most typically the types will be 32-bit "int" and "long", or 64-bit "long" and "long long". The "fixed-sized" types will typically alias to one of the standard types, though implementations are not consistent about which one.
Although compilers used to assume that accesses to one type of a given size might affect objects of the other, the authors of the Standard didn't mandate that they do so (probably because there would have been no point ordering people to do things they would do anyway and they couldn't imagine any sane compiler writer doing otherwise; once compilers started doing so, it was politically difficult to revoke that "permission"). Consequently, if one has a library which stores data in a 32-bit "int" and another which reads data from a 32-bit "long", the only way to be assured of correct behavior is to either disable aliasing analysis altogether (probably the sanest choice while using gcc) or else add gratuitous copy operations (being careful that gcc doesn't optimize them out and then use their absence as an excuse to break code--something it sometimes does as of 6.2).
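A sketch of the hazard described above; this is hypothetical code assuming a target where int and long are both 32 bits, and even there the cross-type access is undefined behavior under strict aliasing (gcc at -O2 may legitimately break it):
#include <stdio.h>

void write_as_int(void *p)  { *(int *)p = 42; }    /* library A stores via int */
long read_as_long(void *p)  { return *(long *)p; } /* library B reads via long */

int main(void)
{
    long x = 0;
    write_as_int(&x);  /* writes through a type incompatible with x's declared type */
    /* The optimizer may assume the int store cannot alias a long object */
    printf("%ld\n", read_as_long(&x));
    return 0;
}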

Purpose of typedef int16_t int_fast16_t in avr-gcc library

I am currently going through the AVR library in the "Arduino\hardware\tools\avr\avr\include" folder. In the stdint.h file there is this piece of code:
typedef unsigned int uint16_t __attribute__ ((__mode__ (__HI__)));
typedef signed int int32_t __attribute__ ((__mode__ (__SI__)));
typedef uint16_t uint_fast16_t;
/** \ingroup avr_stdint
fastest signed int with at least 32 bits. */
typedef int32_t int_fast32_t;
So basically int32_t, int_fast32_t and signed int __attribute__ ((__mode__ (__SI__))) are the same thing. Could anybody confirm that?
If yes, why is it done in such way? Why don't just use int32_t?
I understand the question to be "Why does stdint.h declare types with names like int_leastN_t and int_fastN_t as well as the intN_t that I expect it to declare?"
The simple answer is that the C standard (since its 1999 revision) requires stdint.h to declare those types, because the committee thought they would be useful. As it happens, they were wrong; almost nobody wants anything out of stdint.h but the exact-width types. But it is very, very rare for anything to get removed from the C standard once it's been included, because that would break programs that are using them. So stdint.h will probably continue to declare these types forever.
(I could go on at considerable length about why non-exact-width types are less than useful in C, but you probably don't care.)
The actual answer depends on your implementation. The point of the typedefs is to let programmers who have to care about micro-optimisations, such as choosing one integer type over another for slight performance gains, still write platform-independent code. signed int __attribute__ ((__mode__(__SI__))) may be the best performing integer type on one platform, but as soon as one decides to support another platform, there would be thousands of declarations that would have to be changed.
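A small sketch of the intended use: the same source can stay efficient on both AVR and a desktop, because each platform's stdint.h picks its own fastest types:
#include <stdint.h>

/* On AVR, int_fast16_t stays 16-bit; a 64-bit desktop may pick a wider type. */
int_fast32_t sum16(const int16_t *a, int_fast16_t n)
{
    int_fast32_t total = 0;
    int_fast16_t i;
    for (i = 0; i < n; i++)
        total += a[i];
    return total;
}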

Is u_int64_t available on 32-bit machine?

I want to use a u_int64_t variable as search key.
Is u_int64_t available on 32-bit machine?
If not, do I have to divide this variable into two variables? Then as a search key, it is a bit more troublesome.
Are there any workarounds for this?
An unsigned 64-bit integral type is not guaranteed by the C standard, but is typically available on 32-bit machines, and on virtually all machines running Linux. When present, the type will be named uint64_t (note one less underscore) and declared in the <stdint.h> header file.
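For example, the following compiles and runs with any C99 compiler even on a 32-bit target; the compiler synthesizes the 64-bit arithmetic from 32-bit operations:
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t key = 0x123456789ABCDEF0ULL; /* a 64-bit search key on a 32-bit machine */
    printf("key = %" PRIu64 "\n", key);
    return 0;
}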
Yes, a 64-bit integer datatype is supported on a 32-bit machine.
Under the C89 standard, the long long type (at least 64 bits, at least as wide as long) is supported as a GNU extension.
In the C99 standard, there is native support for long long (at least 64 bits, at least as wide as long).
From the documentation it's not quite clear, but __GLIBC_HAVE_LONG_LONG is the macro that indicates its presence on a 32-bit architecture.
A probable solution could look something like the following:
#include <sys/types.h>
#ifdef __GLIBC_HAVE_LONG_LONG
u_int64_t i;
#endif
By the way, this is on Linux.
