I have a platform-dependent type defined in my code:
typedef uint64_t myType;
However, on some more limited platforms, it might be 32 bits.
How do I printf it?
In the current situation I can use %llu, but if it's 32 bits on another platform, that is not the best idea.
I thought about using some macros, but does anyone know of a better way? I'd love to hear about some format specifier that could take the length from the next argument, for example.
Since you have platform-specific types, it should be easy enough to use platform-specific format strings as well, something like:
#include <inttypes.h>   /* for the PRIu64/PRIu32 format macros */

#ifdef USING_64_BITS
typedef uint64_t myType;
#define MY_TYPE_FMT PRIu64
#else
typedef uint32_t myType;
#define MY_TYPE_FMT PRIu32
#endif
Then you can use it with:
myType var1 = 42, var2 = 99;
printf ("%6" MY_TYPE_FMT ", %019" MY_TYPE_FMT "\n", var1, var2);
Note that the PRI* macros don't include the %; keeping it out of the macro lets you insert other formatting items dynamically, such as field widths and padding characters.
You'll also notice that I've avoided the %llu-style format specifiers: you should be using the more targeted ones from inttypes.h, since the implementation will give you the correct one for each type.
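For example, continuing from the definitions above, the * width specifier combines cleanly with the macro (a small sketch; width is a hypothetical runtime value):
myType var = 42;
int width = 10;                               /* hypothetical runtime field width */
printf ("%0*" MY_TYPE_FMT "\n", width, var);  /* zero-pad to a width chosen at run time */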
Just cast it up to the largest possible integer type matching the desired signedness and use the format for that, either:
printf("%jd", (intmax_t)x);
or:
printf("%ju", (uintmax_t)x);
(The question title asks for signed but the body is using unsigned examples, so I've covered both.)
This is a lot less ugly/more readable than using the PRI* macros suggested in the other answer, and also works for types where you don't inherently know the right PRI macro to use, like off_t.
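For instance, here is a minimal sketch of printing an off_t, a POSIX type with no dedicated PRI macro; the cast is what keeps the variadic argument in sync with %jd:
#include <stdio.h>
#include <inttypes.h>   /* intmax_t and the j length modifier are C99 */
#include <sys/types.h>  /* off_t is POSIX, not ISO C */

int main (void) {
    off_t pos = 12345;
    printf ("%jd\n", (intmax_t)pos);  /* cast up, then use the intmax_t format */
    return 0;
}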
I'd like to be able to portably fprintf() a uint_fast32_t as defined in stdint.h, with all leading zeroes. For instance, if my platform defines uint_fast32_t as a 64-bit unsigned integer, I would want to fprintf() it with a format specifier like %016lX, but if it's a 32-bit unsigned integer, I would want to use %08lX.
Is there a macro like INTFAST32_BITS somewhere that I could use?
Right now I'm using:
#if UINT_FAST32_MAX == 0xFFFFFFFF
# define FORMAT "08" PRIXFAST32
#elif UINT_FAST32_MAX == 0xFFFFFFFFFFFFFFFF
# define FORMAT "016" PRIXFAST32
#endif
But this only works when uint_fast32_t is exactly 32 or 64 bits wide, and the code is also kinda clunky and hard to read.
You can find the size of a uint_fast32_t the same way you can find the size of any other type: by applying the sizeof operator to it. The unit of the result is the size of type char. Example:
size_t chars_in_a_uint_fast32_t = sizeof(uint_fast32_t);
But this is not sufficient for printing one with one of the printf-family functions. Undefined behavior arises in printf et al. if any formatting directive is mismatched with the type of the corresponding argument, and the size of uint_fast32_t is not sufficient information to make a proper match.
This is where the formatting macros already described in your other answer come in. These provide the appropriate size modifier and conversion letter for each type and conversion. For an X conversion of uint_fast32_t, the appropriate macro is PRIXFAST32.
You provide any flags, field width, and precision as normal. If indeed you want to use a field width that is adapted to the actual size of uint_fast32_t, then a reasonable way to do that would be to avail yourself of the option to specify that the field width is passed as an argument (of type int). Thus:
uint_fast32_t fast;
// ...
printf("%0*" PRIXFAST32 "\n", (int) sizeof(fast) * 2, fast);
I note, however, that this seems a little questionable, inasmuch as you are making provision for values wider than any value of that type should generally be allowed to become. The printf function will in no case print fewer digits than are required to express the value (supposing it is indeed of the specified type), so I would be inclined to simply use a fixed field width sufficient for 32-bit integers (say 8).
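As a self-contained sketch of both options (the dynamic width versus a fixed width of 8):
#include <stdio.h>
#include <inttypes.h>

int main (void) {
    uint_fast32_t fast = 0xABCD;
    /* width derived from the actual object size (2 hex digits per byte) */
    printf ("%0*" PRIXFAST32 "\n", (int) sizeof fast * 2, fast);
    /* fixed width sufficient for any 32-bit value */
    printf ("%08" PRIXFAST32 "\n", fast);
    return 0;
}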
Assume I have, for example, a variable i of type uint32_t. The expected way to print it would be like this:
printf("%"PRIu32"\n", i);
However, it should be noted that long unsigned int is required to be at least 32 bits wide. The correct specifier for long unsigned int is %lu. Thus, can the above statement be replaced with:
printf("%lu\n", i);
I’d suppose yes, since I can see no reason why not. However, if yes, then this would remove the need for existence of these macroified specifiers like PRIu32, so I figure I’d better ask.
The reason I’m asking it is that I’d like to create a format string for printf dynamically, and it’d be hard to allocate space for this format string if I don't know the size of the string PRIu32 expands to (and whether sizeof(PRIu32) is valid or not may be worthy of a separate question).
In any case, I suppose it should be valid to write:
printf("%lu\n", (long unsigned)i);
Thus, can the above statement be replaced with:
printf("%lu\n", i);
I’d suppose yes, since I can see no reason why not.
No, because long unsigned int can be larger than 32 bits, or, even if it is exactly 32 bits, it can nevertheless have a different representation than uint32_t does. Either way, the mismatch between the conversion specifier and the actual argument type makes the call undefined.
In any case, I suppose it should be valid to write:
printf("%lu\n", (long unsigned)i);
Yes, and as you observed, it is also safe, because long unsigned int is required to be able to represent all the values that a uint32_t can take.
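Putting both points together, a minimal sketch:
#include <stdio.h>
#include <inttypes.h>

int main (void) {
    uint32_t i = 4000000000u;
    printf ("%" PRIu32 "\n", i);         /* exact match via the macro */
    printf ("%lu\n", (unsigned long)i);  /* safe: unsigned long can hold any uint32_t value */
    return 0;
}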
I switched to fixed-width integer types in my projects, mainly because they help me think about integer sizes more clearly when using them. Including them via #include <inttypes.h> also brings in a bunch of other macros, like the printing macros PRIu32, PRIu64, ...
To assign a constant value to a fixed-width variable, I can use macros like UINT32_C() and INT32_C(). I started using them whenever I assigned a constant value.
This leads to code similar to this:
uint64_t i;
for (i = UINT64_C(0); i < UINT64_C(10); i++) { ... }
Now I saw several examples which did not care about that. One is the stdbool.h include file:
#define bool _Bool
#define false 0
#define true 1
bool has a size of 1 byte on my machine, so it does not look like an int. But 0 and 1 should be integers, which the compiler should turn automatically into the right type. If I used that approach in my example, the code would be much easier to read:
uint64_t i;
for (i = 0; i < 10; i++) { ... }
So when should I use the fixed-width constant macros like UINT32_C(), and when should I leave that work to the compiler (I'm using GCC)? What if I were writing MISRA C code?
As a rule of thumb, you should use them when the type of the literal matters. There are two things to consider: the size and the signedness.
Regarding size:
The C standard guarantees that an int can represent values up to at least 32767. Since you can't get an integer literal with a type smaller than int, any value no larger than 32767 should not need the macros. If you need larger values, then the type of the literal starts to matter and it is a good idea to use those macros.
Regarding signedness:
Integer literals with no suffix are usually of a signed type. This is potentially dangerous, as it can cause all manner of subtle bugs during implicit type promotion. For example, (my_uint8_t + 1) << 31 would cause an undefined-behavior bug on a 32-bit system, while (my_uint8_t + 1u) << 31 would not.
This is why MISRA has a rule stating that all integer literals should have a u/U suffix if the intention is to use unsigned types. So in my example above you could use my_uint8_t + UINT32_C(1), but you could just as well use 1u, which is perhaps the most readable. Either should be fine for MISRA.
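To make the promotion hazard concrete, here is a minimal sketch (the function names are hypothetical), assuming a platform with 32-bit int:
#include <stdint.h>

/* b promotes to (signed) int, so the shift can overflow int: undefined behavior */
uint32_t shift_bad (uint8_t b)  { return (b + 1) << 31; }

/* 1u forces the arithmetic into unsigned int: well-defined, wraps modulo 2^32 */
uint32_t shift_good (uint8_t b) { return (b + 1u) << 31; }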
As for why stdbool.h defines true/false to be 1/0, it is because the standard explicitly says so. Boolean conditions in C still use int type, and not bool type like in C++, for backwards compatibility reasons.
It is however considered good style to treat boolean conditions as if C had a true boolean type. MISRA-C:2012 has a whole set of rules regarding this concept, called essentially boolean type. This can give better type safety during static analysis and also prevent various bugs.
It's for using smallish integer literals where the context won't result in the compiler casting it to the correct size.
I've worked on an embedded platform where int is 16 bits and long is 32 bits. If you were trying to write portable code to work on platforms with either 16-bit or 32-bit int types, and wanted to pass a 32-bit "unsigned integer literal" to a variadic function, you'd need the macro:
#define BAUDRATE UINT32_C(38400)
printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);
On the 16-bit platform, the macro produces 38400UL, and on the 32-bit platform just 38400U. Either one will match the PRIu32 macro, which is "lu" or "u" respectively.
I think that most compilers would generate identical code for (uint32_t) X as for UINT32_C(X) when X is an integer literal, but that might not have been the case with early compilers.
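For reference, here is the example above as a complete program (a sketch; the headers provide UINT32_C and PRIu32):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define BAUDRATE UINT32_C(38400)  /* 38400UL where int is 16-bit, 38400U where it is 32-bit */

int main (void) {
    printf ("Set baudrate to %" PRIu32 "\n", BAUDRATE);
    return 0;
}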
I have some code which is built on both Windows and Linux. Linux at this point is always 32-bit, but Windows is 32- and 64-bit. Windows wants time_t to be 64 bits and Linux still has it as 32 bits. I'm fine with that, except in some places time_t values are converted to strings. So when time_t is 32 bits it should be done with %d, and when it is 64 bits with %lld... What is the smart way to do this? Also: any ideas how I might find all the places where time_t values are passed to printf-style functions, to address this issue?
edit:
I came up with declaring TT_FMT as "%d" or "%lld" and then changing my printfs as in
printf("time: %d, register: blah") to be printf("time: " TT_FMT ", register: blah")
Is there a better way? And how do I find them all?
According to the C standard, time_t is an arithmetic type, "capable of representing times". So it could be double, for example. (POSIX mentions this more explicitly, and also guarantees that time() returns the number of seconds since the Epoch; the latter is not guaranteed by the C standard.)
Maybe the cleanest solution is to convert the value to whatever type you want. You may want one of unsigned long long or unsigned long:
printf("%llu\n", (unsigned long long)t);
I think the only truly portable way is to use strftime to convert the time_t to a string.
If you're sure that you're only operating on platforms where time_t is a signed integer type, you could cast it to intmax_t (from stdint.h) and print it using PRIdMAX (from inttypes.h).
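For the strftime route, a minimal sketch:
#include <stdio.h>
#include <time.h>

int main (void) {
    time_t now = time(NULL);
    struct tm *tm_now = localtime(&now);  /* or gmtime(&now) for UTC */
    char buf[64];

    if (tm_now != NULL && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", tm_now) > 0)
        printf ("%s\n", buf);
    return 0;
}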
If you want to go with the macro specifier, I would recommend one minor tweak. Instead of encapsulating the entire specifier, encapsulate just the modifier:
#ifdef USE_64_BIT_TIME
#define TT_MOD "ll"
#else
#define TT_MOD ""
#endif
and then using it like this:
printf("current time in seconds is: %" TT_MOD "u", time(0));
The reason is that while you primarily want the seconds in decimal, every so often you may want hex (or perhaps you want leading 0's). With only the modifier in the macro, you can easily write:
"%" TT_MOD "x" // in hex
"%08" TT_MOD "d" // left pad with 0's so the number is at least 8 digits
A slight adjustment to Alok's answer: time_t is signed on both Windows and Linux, so:
printf("%lld\n", (long long)t);
is cleaner.
What are the format specifiers to use for printf when dealing with types such as int32_t, uint16_t and int8_t, etc.?
Using %d, %i, etc. will not result in a portable program. Is using the PRIxx macros the best approach?
Is using the PRIxx macros the best approach?
As far as I know, yes.
Edit: another solution is to cast to a type that is at least as wide as the one you want to print. For example, int is at least 16 bits wide, so you can print an int16_t with printf("%d\n", (int)some_var).
Yes, if you're using the new types, you really should be using the new format specifiers.
That's the best way to do it since the implementation has already done the grunt work of ensuring the format strings will be correct for the types.
So, for example:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main (void) {
int32_t i32 = 40000;
printf ("%d\n", i32); // might work.
printf ("%" PRId32 "\n", i32); // will work.
return 0;
}
shows both ways of doing it.
However, there's actually no guarantee that the first one will do as you expect: on a system with a 16-bit int, for example, the argument no longer matches the conversion specifier, and the behavior is undefined.