Standard and fastest integer equivalency - C

The standard gives the following minimum bit widths for standard unsigned types:
unsigned char >= 8
unsigned short >= 16
unsigned int >= 16
unsigned long >= 32
unsigned long long >= 64
(Implicitly, by specifying minimum maximal values).
Does that imply the following equivalencies?
unsigned char == uint_fast8_t
unsigned short == uint_fast16_t
unsigned int == uint_fast16_t
unsigned long == uint_fast32_t
unsigned long long == uint_fast64_t

No, because the sizes of the default "primitive data types" are picked by the compiler to be something convenient for the given system. Convenient meaning easy to work with for various reasons: integer range, the sizes of the other integer types, backwards compatibility, etc. They are not necessarily picked to be the fastest possible.
For example, in practice unsigned int has the size 32 on a 32 bit system, but also size 32 on a 64 bit system. But on the 64 bit system, uint_fast32_t could be 64 bits.
Just look at the most commonly used sizes in practice for unsigned char versus uint_fast8_t:
Data bus   unsigned char   uint_fast8_t
8 bit      8 bit           8 bit
16 bit     8 bit           16 bit
32 bit     8 bit           32 bit
64 bit     8 bit           32/64 bit
This is so because, for convenience, we need a byte-sized type to work with. Yet the optimizing compiler may very well place this unsigned char byte at an aligned address and read it with a 32-bit access, so it can still optimize despite the nominal size of the type.
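A quick way to see what your own implementation picked (a sketch; the output is entirely implementation-dependent):
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Compare the widths chosen for the plain types and their "fast" counterparts. */
    printf("unsigned char : %zu bits\n", sizeof(unsigned char) * CHAR_BIT);
    printf("uint_fast8_t  : %zu bits\n", sizeof(uint_fast8_t) * CHAR_BIT);
    printf("unsigned int  : %zu bits\n", sizeof(unsigned int) * CHAR_BIT);
    printf("uint_fast32_t : %zu bits\n", sizeof(uint_fast32_t) * CHAR_BIT);
    return 0;
}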

I don't know if this is an answer to the question but (at least in glibc), int_fastx_t and uint_fastx_t are typedefed depending of the word size:
/* Fast types. */
/* Signed. */
typedef signed char int_fast8_t;
#if __WORDSIZE == 64
typedef long int int_fast16_t;
typedef long int int_fast32_t;
typedef long int int_fast64_t;
#else
typedef int int_fast16_t;
typedef int int_fast32_t;
__extension__
typedef long long int int_fast64_t;
#endif
/* Unsigned. */
typedef unsigned char uint_fast8_t;
#if __WORDSIZE == 64
typedef unsigned long int uint_fast16_t;
typedef unsigned long int uint_fast32_t;
typedef unsigned long int uint_fast64_t;
#else
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
__extension__
typedef unsigned long long int uint_fast64_t;
#endif
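If you want to check which underlying type a fast typedef resolves to on your own toolchain without reading the header, a C11 _Generic sketch like the following can report it (the candidate list here is an assumption covering the common cases):
#include <stdint.h>
#include <stdio.h>

/* Maps the type of the argument to a printable name; anything not listed
   falls through to "default". */
#define TYPE_NAME(x) _Generic((x),                 \
    unsigned int:       "unsigned int",            \
    unsigned long:      "unsigned long",           \
    unsigned long long: "unsigned long long",      \
    default:            "some other type")

int main(void)
{
    uint_fast32_t v = 0;
    printf("uint_fast32_t is %s here\n", TYPE_NAME(v));
    return 0;
}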

Basically you're right, but not de jure.
int is meant to be the "natural" integer type for the system. In practice it is usually 32 bits on 32- and 64-bit systems, and 16 bits on small systems, because 64-bit ints would break too many interfaces. So int is essentially the int_fast32_t.
However, that is not guaranteed, and the smaller 8- and 16-bit types might well have faster 32-bit equivalents. In practice, though, using int will likely give you the fastest code.

Related

What is the difference between long int and long long int? [duplicate]

What's the difference between long long and long? And they both don't work with 12 digit numbers (600851475143), am I forgetting something?
#include <iostream>
using namespace std;
int main() {
    long long a = 600851475143;
}
Going by the standard, all that's guaranteed is:
int must be at least 16 bits
long must be at least 32 bits
long long must be at least 64 bits
On major 32-bit platforms:
int is 32 bits
long is 32 bits as well
long long is 64 bits
On major 64-bit platforms:
int is 32 bits
long is either 32 or 64 bits
long long is 64 bits as well
If you need a specific integer size for a particular application, rather than trusting the compiler to pick the size you want, #include <stdint.h> (or <cstdint>) so you can use these types:
int8_t and uint8_t
int16_t and uint16_t
int32_t and uint32_t
int64_t and uint64_t
You may also be interested in #include <stddef.h> (or <cstddef>):
size_t
ptrdiff_t
long long does not exist in C++98/C++03, but does exist in C99 and C++0x (now C++11).
long is guaranteed at least 32 bits.
long long is guaranteed at least 64 bits.
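To see what your own implementation actually provides beyond those guaranteed minimums, a quick sketch printing the <limits.h> maxima:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard guarantees minimum ranges; these macros show the actual ones. */
    printf("INT_MAX   = %d\n", INT_MAX);
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("LLONG_MAX = %lld\n", LLONG_MAX);
    return 0;
}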
To elaborate on #ildjarn's comment:
And they both don't work with 12 digit numbers (600851475143), am I forgetting something?
The compiler looks at the literal value 600851475143 without considering the variable that you're assigning it to/initializing it with. You've written it as an int typed literal, and it won't fit in an int.
Use 600851475143LL to get a long long typed literal.
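A minimal sketch of the fixed program (shown here in C; the same LL suffix rule applies in C++):
#include <stdio.h>

int main(void)
{
    long long a = 600851475143LL;   /* LL suffix: the literal itself has type long long */
    printf("%lld\n", a);
    return 0;
}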
Your C++ compiler supports long long, which is guaranteed to be at least 64 bits by the C99 standard (that's a C standard, not a C++ standard). See the Visual C++ header files to get the ranges on your system.
Recommendation
For new programs, it is recommended that one use only bool, char, int, and double, until circumstance arises that one of the other types is needed.
http://www.somacon.com/p111.php
Depends on your compiler. long long is 64 bits and should handle 12 digits. It looks like in your case it is just treating the value as long and hence not handling 12 digits.

int_fastN_t types and value overflow

If I understand well, int_fastN_t types are guaranteed to be at least N bits wide. Depending on the compiler and the architecture of the computer, these types can also be defined on more than N bits. For instance, an int_fast8_t could be implemented as a 32-bit int.
Is there some kind of mechanism which enforces that the value of an int_fastN_t never overflows its nominal width, even if the true type is defined on more than N bits?
Consider the following code for example:
#include <stdint.h>

int main(void) {
    int_fast8_t a = 64;
    a *= 2; // -128
    return 0;
}
I do not want a to be greater than 127. If a is interpreted as a "regular" int (32 bits), is it possible that a exceeds 127 and is not equal to -128?
Thanks for your answers.
int_fast8_t a = 64;
a *= 2;
If a is interpreted as a "regular" int (32 bits), is it possible that a exceeds 127 and is not equal to -128?
Yes. Very likely a * 2 will be stored in a as 128. I would expect this on all processors unless the processor is an 8-bit one.
Is there some kind of mechanism which enforces that the value of an int_fastN_t never overflows?
No. Signed integer overflow is still possible as well as values outside the [-128...127] range.
I do not want a to be greater than 127
Use int8_t. The value stored will never exceed 127, yet the code still has implementation-defined behavior when assigning 128 to an int8_t. This often results in -128 (values wrap mod 256), yet other values are possible (this is uncommon).
int8_t a = 64;
a *= 2;
If assignment to int8_t is not available or has unexpected implementation-defined behavior, code could force the wrapping itself:
int_fast8_t a = foo(); // a takes on some value
a %= 256;
if (a < -128) a += 256;
else if (a > 127) a -= 256;
It is absolutely possible for the result to exceed 127. int_fast8_t (and uint_fast8_t and all the rest) set an explicit minimum size for the value, but it could be larger, and the compiler will not prevent it from exceeding the stated 8-bit bounds; it behaves exactly like the larger type it represents, and the "8-ness" of it is not relevant at runtime. All that is guaranteed is that it can definitely represent every value in that 8-bit range.
If you need it to explicitly truncate/wrap to 8-bit values, either use (or cast to) int8_t to restrict the representable range (though overflow wouldn't be defined), or explicitly use masks to perform the same work yourself when needed, as in the sketch below.
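A sketch of the mask approach (one possible way to do it, not the only one); unsigned arithmetic is used for the reduction so no signed overflow can occur:
#include <stdint.h>
#include <stdio.h>

/* Reduce a wider fast value into the signed 8-bit range [-128, 127]. */
static int_fast8_t wrap_to_int8(int_fast32_t v)
{
    uint_fast32_t u = (uint_fast32_t)v & 0xFFu;             /* keep the low 8 bits */
    return (int_fast8_t)(u >= 128 ? (int_fast32_t)u - 256   /* map 128..255 to -128..-1 */
                                  : (int_fast32_t)u);
}

int main(void)
{
    int_fast8_t a = 64;
    a = wrap_to_int8(a * 2);
    printf("%d\n", (int)a);   /* prints -128 */
    return 0;
}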
Nope. All the fast types really are is typedefs. For example, stdint.h on my machine includes
/* Fast types. */
/* Signed. */
typedef signed char int_fast8_t;
#if __WORDSIZE == 64
typedef long int int_fast16_t;
typedef long int int_fast32_t;
typedef long int int_fast64_t;
#else
typedef int int_fast16_t;
typedef int int_fast32_t;
__extension__
typedef long long int int_fast64_t;
#endif
/* Unsigned. */
typedef unsigned char uint_fast8_t;
#if __WORDSIZE == 64
typedef unsigned long int uint_fast16_t;
typedef unsigned long int uint_fast32_t;
typedef unsigned long int uint_fast64_t;
#else
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
__extension__
typedef unsigned long long int uint_fast64_t;
#endif
The closest you can come without a significant performance penalty is probably casting the result to an 8-bit type.
Just use unsigned char if you want to manipulate 8 bits (unsigned char is one byte long); you will work with the unsigned range 0 to 0xFF (255).
From the C(99) standard:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
So use int8_t to guarantee 8 bit int.
A compliant C99/C11 compiler on a POSIX platform must have int8_t.

Is there a way to specify int size in C?

I'm trying to check some homework answers about overflow for 2's complement addition, subtraction, etc. and I'm wondering if I can specify the size of a data type. For instance if I want to see what happens when I try to assign -128 or -256 to a 7-bit unsigned int.
On further reading I see you wanted bit sizes that are not the normal ones, such as 7-bit and 9-bit, etc.
You can achieve this using bit-fields:
struct bits9
{
    int x : 9;
};
Now you can use this type bits9, which has one field x in it that is only 9 bits in size.
struct bits9 myValue;
myValue.x = 123;
For an arbitrary sized value, you can use bitfields in structs. For example for a 7-bit value:
struct something {
    unsigned char field : 7;
    unsigned char padding : 1;
};
struct something value;
value.field = -128;
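A runnable sketch of the same idea that lets you observe the wrap-around the question asks about (note that bit-field members of types other than int, unsigned int, or _Bool are implementation-defined, so unsigned int is used here):
#include <stdio.h>

struct u7 {
    unsigned int field : 7;   /* unsigned 7-bit field: values 0..127 */
};

int main(void)
{
    struct u7 v;

    v.field = 200;                        /* 200 mod 128 = 72 */
    printf("%u\n", (unsigned)v.field);    /* prints 72 */

    v.field = -128;                       /* -128 mod 128 = 0 after conversion */
    printf("%u\n", (unsigned)v.field);    /* prints 0 */

    return 0;
}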
The smallest size you have is char, which is an 8-bit integer. You can have unsigned and signed chars. Take a look at the stdint.h header. It defines integer types for you in a platform-independent way. Also, there is no such thing as a 7-bit integer.
Using built in types you have things like:
char value1; // 8 bits
short value2; // 16 bits
long value3; // 32 bits
long long value4; // 64 bits
Note this is the case with Microsoft's compiler on Windows. The C standard does not specify exact widths other than "this one must be at least as big as this other one" etc.
If you only care about a specific platform you can print out the sizes of your types and use those once you have figured them out.
Alternatively you can use stdint.h which is in the C99 standard. It has types with the width in the name to make it clear
int8_t value1; // 8 bits
int16_t value2; // 16 bits
int32_t value3; // 32 bits
int64_t value4; // 64 bits
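When printing these fixed-width types portably, <inttypes.h> provides matching format macros; a brief sketch:
#include <inttypes.h>   /* also makes <stdint.h> available */
#include <stdio.h>

int main(void)
{
    int32_t  a = INT32_MAX;
    uint64_t b = UINT64_MAX;

    /* PRId32 / PRIu64 expand to the correct printf conversion for each type. */
    printf("a = %" PRId32 "\n", a);
    printf("b = %" PRIu64 "\n", b);
    return 0;
}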

Why do we require uint64_t when unsigned long is available? [duplicate]

I just wanted to know: why do we need uint64_t, which is actually a typedef of unsigned long, when unsigned long is available anyway? Is it only to make the name short, or is there another reason?
The reason should be pretty obvious; it's a guarantee that there are 64 bits of precision. There is no such guarantee for unsigned long.
The fact that on your system it's a typedef just means that on your system unsigned long is 64 bits. That's not true in general, but the typedef can be varied (by the compiler implementors) to make the guarantee for uint64_t.
It's not always a typedef for unsigned long, because unsigned long is not universally 64 bits wide. For example, on x64 Windows, unsigned long is 32 bits; only unsigned long long is 64 bits.
Because uint64_t means a type of exactly 64 bits, whereas unsigned long can be 32 bits or more.
On a system where unsigned long is 32 bits, it would be used to typedef uint32_t, and unsigned long long would be used to typedef uint64_t.
Yes, it does make the name shorter, but that's not the only reason to use it. The main motive for using these typedef'd data types is to make code more robust and portable across various platforms.
Sometimes unsigned long may not be 64 bits, but a definition of uint64_t will always guarantee a width of exactly 64 bits. The typedef of uint64_t can [read as: will] vary across different platforms to provide those 64 bits.
Example:
for systems having long as 32 bits, uint64_t will be typedef'd to unsigned long long
for systems having long as 64 bits, uint64_t will be typedef'd to unsigned long
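As a compile-time sketch of that exact-width guarantee (assuming a C11 compiler for _Static_assert):
#include <limits.h>
#include <stdint.h>

/* This check always holds wherever uint64_t exists, while the commented-out
   check on unsigned long would fail on LLP64 platforms such as 64-bit Windows. */
_Static_assert(sizeof(uint64_t) * CHAR_BIT == 64, "uint64_t must be exactly 64 bits");
/* _Static_assert(sizeof(unsigned long) * CHAR_BIT == 64, "fails where long is 32 bits"); */

int main(void) { return 0; }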
Not only to make it shorter, but also to make it more descriptive.
If you see uint64_t in code, it is easy to read it as an unsigned integer type of 64 bits.

64 bit variable is stored as 32-bit

I have an issue when compiling with an ARM cross compiler and a variable of type unsigned long long.
The variable represents a partition size (~256 GB). I expect it to be stored as 64 bits, but when printing it using %lld, or even when printing it in gigabytes (value/(1024*1024*1024)), I only ever see the low 32 bits of the real value.
Does anyone know why the compiler stores it as 32 bits?
Edit:
My mistake, the value is set in C using the following calculation:
partition_size = st.f_bsize * st.f_frsize;
struct statvfs { unsigned long int f_bsize; unsigned long int f_frsize; ... }
The issue is that f_bsize and f_frsize are only 32 bits, and the compiler does not automatically promote the multiplication to 64 bits! Casting solved this issue for me.
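A minimal sketch of that fix (assuming the POSIX <sys/statvfs.h> definition of struct statvfs; the helper name is just for illustration): casting one operand forces the multiplication to be carried out in 64-bit arithmetic instead of being truncated in 32 bits first.
#include <sys/statvfs.h>

/* Without the cast, both operands are 32-bit unsigned long on this target,
   so the product is computed (and truncated) in 32 bits before the
   assignment to the 64-bit variable. */
unsigned long long partition_size_from(const struct statvfs *st)
{
    return (unsigned long long)st->f_bsize * st->f_frsize;
}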
The code below prints the whole 64 bits. Try printing it using %llu:
#include <stdio.h>

int main(void)
{
    unsigned long long num = 4611111275421987987ULL;
    printf("%llu\n", num);
    return 0;
}

Resources