If I understand correctly, the int_fastN_t types are guaranteed to be at least N bits wide. Depending on the compiler and the architecture of the computer, these types can also be wider than N bits. For instance, an int_fast8_t could be implemented as a 32-bit int.
Is there some kind of mechanism which enforces that the value of an int_fastN_t never overflows its nominal N-bit range, even if the underlying type is wider than N bits?
Consider the following code for example:
#include <stdint.h>

int main(){
    int_fast8_t a = 64;
    a *= 2; // -128
    return 0;
}
I do not want a to be greater than 127. If a is implemented as a "regular" int (32 bits), is it possible that a exceeds 127 instead of wrapping to -128?
Thanks for your answers.
int_fast8_t a = 64;
a *= 2;
If a is interpreted as a "regular" int (32 bits), is it possible that a exceed 127 and be not equal to -128?
Yes. It is very likely that a * 2 will be stored in a as 128. I would expect this on all processors unless the processor is an 8-bit one.
Is there some kind of mechanism which enforces that the value of an int_fastN_t never overflows?
No. Signed integer overflow is still possible, as are values outside the [-128, 127] range.
I do not want a to be greater than 127
Use int8_t. The stored value will never exceed 127, yet the code still has implementation-defined behavior when assigning 128 to an int8_t. This usually results in -128 (values wrap mod 256), but other results are possible (that is uncommon).
int8_t a = 64;
a *= 2;
If assignment to int8_t is not available or has unexpected implementation-defined behavior, code could force the wrapping itself:
int_fast8_t a = foo(); // a takes on some value
a %= 256;
if (a < -128) a += 256;
else if (a > 127) a -= 256;
It is absolutely possible for the result to exceed 127. int_fast8_t (and uint_fast8_t and all the rest) only set an explicit minimum size for the type; it may be larger, and the compiler will not prevent values from exceeding the stated 8-bit bounds. At runtime it behaves exactly like the larger type it is defined as; the "8-ness" of it is irrelevant. The only guarantee is that it can represent all values in the 8-bit range.
If you need values explicitly truncated/wrapped to 8 bits, either use (or cast to) int8_t to restrict the representable range (though overflow still wouldn't be defined), or explicitly use masks to perform the same work yourself when needed.
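For illustration, here is a minimal sketch of the mask approach. The helper wrap8() is not part of the question's code and assumes the usual two's complement representation:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: fold any value into the signed 8-bit range
   [-128, 127], regardless of how wide int_fast8_t really is.
   Assumes two's complement (true on essentially all current targets). */
static int_fast8_t wrap8(long v)
{
    long low = v & 0xFF;                                 /* keep only the low 8 bits: 0..255 */
    return (int_fast8_t)(low > 127 ? low - 256 : low);   /* map 128..255 to -128..-1 */
}

int main(void)
{
    int_fast8_t a = 64;
    a = wrap8((long)a * 2);     /* 128 folds to -128 even if the type is wider than 8 bits */
    printf("%d\n", (int)a);     /* prints -128 */
    return 0;
}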
Nope. The fast types are really just typedefs. For example, stdint.h on my machine includes
/* Fast types. */
/* Signed. */
typedef signed char int_fast8_t;
#if __WORDSIZE == 64
typedef long int int_fast16_t;
typedef long int int_fast32_t;
typedef long int int_fast64_t;
#else
typedef int int_fast16_t;
typedef int int_fast32_t;
__extension__
typedef long long int int_fast64_t;
#endif
/* Unsigned. */
typedef unsigned char uint_fast8_t;
#if __WORDSIZE == 64
typedef unsigned long int uint_fast16_t;
typedef unsigned long int uint_fast32_t;
typedef unsigned long int uint_fast64_t;
#else
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
__extension__
typedef unsigned long long int uint_fast64_t;
#endif
The closest you can come without a significant performance penalty is probably casting the result to an 8-bit type.
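A minimal sketch of that cast approach (the exact result of the narrowing conversion is implementation-defined, but it is -128 on common platforms):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int_fast8_t a = 64;
    /* The multiplication happens in the wider type; the cast to int8_t
       then forces the value back into 8 bits, typically wrapping 128 to -128. */
    a = (int8_t)(a * 2);
    printf("%d\n", (int)a);
    return 0;
}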
Just use unsigned char if you want to manipulate 8 bits (unsigned char is one byte long); you will work in the unsigned range 0 to 0xFF (255).
From the C(99) standard:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
So use int8_t to guarantee an 8-bit int.
A compliant C99/C11 compiler on a POSIX platform must have int8_t.
I thought the following code might cause an overflow since a * 65535 is larger than what unsigned short int can hold, but the result seems to be correct.
Is there some built-in mechanism in C that stores intermediate arithmetic results in a larger data type? Or is it working by accident?
unsigned char a = 30;
unsigned short int b = a * 65535 /100;
printf("%hu", b);
It works because all types narrower than int go through the default integer promotions. Since unsigned char and unsigned short are both smaller than int on your platform, they are promoted to int, and the intermediate result won't overflow as long as int is at least 22 bits wide (30 * 65535 = 1966050 needs 21 value bits plus a sign bit). However, if int has fewer bits than that, the multiplication overflows and the behavior is undefined. It also won't work if sizeof(unsigned short) == sizeof(int).
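For illustration, here is the snippet from the question as a complete program, with the promotion spelled out in comments (the printed value assumes a 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned char a = 30;

    /* Both operands of * are promoted to int, so the intermediate value
       30 * 65535 = 1966050 is computed in int and does not overflow on a
       32-bit int; only the final result is converted back on assignment. */
    unsigned short int b = a * 65535 / 100;   /* 1966050 / 100 = 19660 */

    printf("%hu\n", b);                       /* prints 19660 */
    return 0;
}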
Default promotion allows operations to be done faster (because most CPUs work best with values of their native size) and also prevents some naive overflows from happening. See:
Implicit type promotion rules
Will char and short be promoted to int before being demoted in assignment expressions?
Why must a short be converted to an int before arithmetic operations in C and C++?
I would like to convert an int to a byte in C.
How can I get the value?
in Java
int num = 167;
byte b = num.toByte(); // -89
in C
int num = 167;
???
There is no type called Byte in native C. If you don't want to pull in extra headers, you can create one like this:
typedef unsigned char Byte;
And then create any variable you'd like with it:
int bar = 15;
Byte foo = (Byte)bar;
You can simply cast to a byte:
unsigned char b = (unsigned char)num;
Note that if num is more than 255 or less than 0, C won't crash; it will simply give the wrong (wrapped) result.
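A small demonstration of that wrap (assuming the usual 8-bit bytes):

#include <stdio.h>

int main(void)
{
    int num = 167;
    unsigned char b = (unsigned char)num;   /* 167 fits, stays 167 */

    int big = 300;
    unsigned char c = (unsigned char)big;   /* out of range: wraps modulo 256 to 44 */

    printf("%u %u\n", (unsigned)b, (unsigned)c);   /* prints "167 44" */
    return 0;
}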
In computer science, the term byte is well-defined as an 8-bit chunk of raw data. Apparently Java uses a different definition than computer science...
-89 is not the value 167 "converted to a byte". 167 already fits in a byte, so no conversion is necessary.
-89 is the value 167 converted to a signed two's complement 8-bit representation.
The most correct type to use for signed 2's complement 8 bit integers in C is int8_t from stdint.h.
Converting from int to int8_t is done implicitly in C upon assignment. There is no need for a cast.
int num = 167;
int8_t b = num;
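As a complete program (the exact result of storing 167 into int8_t is implementation-defined, but on the usual two's complement platforms it matches the Java output):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int num = 167;
    int8_t b = num;           /* 167 does not fit; typically wraps to 167 - 256 = -89 */
    printf("%d\n", (int)b);   /* commonly prints -89, like the Java example */
    return 0;
}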
byte is a Java signed integer type with a range of -128 to 127.
The corresponding type in C is int8_t, defined in <stdint.h> for architectures with 8-bit bytes. It is typically an alias for signed char.
You can write:
#include <stdint.h>
void f(void) {
    int num = 167;
    int8_t b = num; // or signed char b = num;
    ...
}
If your compiler emits a warning about the implicit conversion to a smaller type, you can add an explicit cast:
int8_t b = (int8_t)num; // or signed char b = (signed char)num;
Note however that it is much more common to think of 8-bit bytes as unsigned quantities in the range 0 to 255, for which one would use type uint8_t or unsigned char. The reason Java's byte is a signed type might be that the language has no unsigned integer types, but it is quite confusing for readers coming from other languages.
byte can also be defined as a typedef:
typedef unsigned char byte; // 0-255;
or
typedef signed char byte; // -128 to 127
Do not use type char, because it is implementation-defined whether this type is signed or unsigned. Reserve type char for the characters of C strings, although many functions actually treat these as unsigned: strcmp(), the functions from <ctype.h>...
The standard gives the following minimum bit widths for standard unsigned types:
unsigned char >= 8
unsigned short >= 16
unsigned int >= 16
unsigned long >= 32
unsigned long long >= 64
(Implicitly, by specifying minimum required maximum values.)
Does that imply the following equivalences?
unsigned char == uint_fast8_t
unsigned short == uint_fast16_t
unsigned int == uint_fast16_t
unsigned long == uint_fast32_t
unsigned long long == uint_fast64_t
No, because the sizes of the default "primitive data types" are picked by the compiler to be something convenient for the given system. Convenient means easy to work with for various reasons: integer range, the sizes of the other integer types, backwards compatibility, etc. They are not necessarily picked to be the fastest possible.
For example, in practice unsigned int has the size 32 on a 32 bit system, but also size 32 on a 64 bit system. But on the 64 bit system, uint_fast32_t could be 64 bits.
Just look at the most commonly used sizes in practice for unsigned char versus uint_fast8_t:
Data bus   unsigned char   uint_fast8_t
8 bit      8 bit           8 bit
16 bit     8 bit           16 bit
32 bit     8 bit           32 bit
64 bit     8 bit           32/64 bit
This is so because, for convenience, we need a byte type to work with. Yet the optimizing compiler may very well place such an unsigned char byte at an aligned address and read it with a 32-bit access, so it can still optimize despite the nominal size of the data type.
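If you want to see what your own toolchain picked, a quick check is to print the widths; the output differs between compilers and targets:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Report the widths this particular implementation chose. */
    printf("unsigned char : %zu bits\n", sizeof(unsigned char) * CHAR_BIT);
    printf("uint_fast8_t  : %zu bits\n", sizeof(uint_fast8_t)  * CHAR_BIT);
    printf("uint_fast16_t : %zu bits\n", sizeof(uint_fast16_t) * CHAR_BIT);
    printf("uint_fast32_t : %zu bits\n", sizeof(uint_fast32_t) * CHAR_BIT);
    return 0;
}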
I don't know if this is an answer to the question, but (at least in glibc) int_fastN_t and uint_fastN_t are typedef'ed depending on the word size:
/* Fast types. */
/* Signed. */
typedef signed char int_fast8_t;
#if __WORDSIZE == 64
typedef long int int_fast16_t;
typedef long int int_fast32_t;
typedef long int int_fast64_t;
#else
typedef int int_fast16_t;
typedef int int_fast32_t;
__extension__
typedef long long int int_fast64_t;
#endif
/* Unsigned. */
typedef unsigned char uint_fast8_t;
#if __WORDSIZE == 64
typedef unsigned long int uint_fast16_t;
typedef unsigned long int uint_fast32_t;
typedef unsigned long int uint_fast64_t;
#else
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
__extension__
typedef unsigned long long int uint_fast64_t;
#endif
Basically you're right, but not de jure.
int is meant to be the "natural" integer type for the system. In practice it is usually 32 bits on 32- and 64-bit systems, and 16 bits on small systems, because 64-bit ints would break too many interfaces. So int is essentially int_fast32_t.
However, that is not guaranteed, and the smaller 8- and 16-bit types might well have faster 32-bit equivalents. In practice, though, using int will likely give you the fastest code.
Let's say there's a data type defined as such:
typedef int my_integer_type;
Somewhere else, there exists code like this
#define MAX_MY_INTEGER_TYPE 0xFFFFFFFF
However, if my_integer_type is later changed to 8 bytes instead of 4, MAX_MY_INTEGER_TYPE will need to be updated as well. Is there a way for MAX_MY_INTEGER_TYPE to be a bit smarter, and basically represent my_integer_type with every byte set to 0xFF?
I realize there are other traditional ways to get the maximum integer size (like here: maximum value of int), but my actual use case is a bit more complex and I cannot apply those solutions.
No, for signed integer types there is basically no way to compute the maximum value of the type in the preprocessor. This is the main reason why all the predefined MAX macros such as INT_MAX exist. To avoid the maintenance problem you could go the other way round:
#include <limits.h>
#include <stdint.h> // for intmax_t

#define MY_INTEGER_TYPE_MAX 0xFFFF...FFF // whatever number of 'F' you choose

#if MY_INTEGER_TYPE_MAX <= SCHAR_MAX
typedef signed char my_integer_type;
#elif MY_INTEGER_TYPE_MAX <= SHRT_MAX
typedef signed short int my_integer_type;
#elif MY_INTEGER_TYPE_MAX <= INT_MAX
typedef signed int my_integer_type;
#elif MY_INTEGER_TYPE_MAX <= LONG_MAX
typedef signed long int my_integer_type;
#elif MY_INTEGER_TYPE_MAX <= LLONG_MAX
typedef signed long long int my_integer_type;
#else
typedef intmax_t my_integer_type;
#endif
You see that it is not a very long list of cases that you have to differentiate.
You can calculate the maximum value of an integer type using bit-wise operations.
For an unsigned integer you can do it like this:
const unsigned int MAX_SIZE = ~0;
or, you can do it like this if your compiler does not complain:
const unsigned int MAX_SIZE = -1;
EDIT
For your particular case, it seems you want to be able to calculate the maximum value of a signed type. Assuming the representation is two's complement (as it is on most systems), you can use this method:
#include <limits.h> // for CHAR_BIT

const my_integer_type MAX_MY_INTEGER_TYPE =
    ~((my_integer_type)1 << ((sizeof(my_integer_type) * CHAR_BIT) - 1));
So, what I do here is calculate the size of the type in bits. Whatever that size is, I subtract 1 from it. I use the resulting number to shift a 1 into the leftmost (sign) bit position. Then I invert the whole thing.
Let's say that my_integer_type is 1 byte. That is 8 bits, meaning that the shift amount is 7. I take the 1 and shift it; the result of the shift looks like this: 0x80. When you invert it, you get this: 0x7F. And that is the maximum value (of course, assuming two's complement).
Note that I did not use the preprocessor. Unfortunately, the preprocessor is a simple find-and-replace tool that is not aware of C types, so you'll have to use constants.
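A small check of the technique, under the same two's complement assumption (the shift into the sign bit is technically undefined for signed types, but it is exactly what the method relies on in practice); my_integer_type is typedef'ed to int here just for the test:

#include <limits.h>
#include <stdio.h>

typedef int my_integer_type;   /* stand-in for the question's typedef */

int main(void)
{
    /* Shift a 1 into the sign-bit position and invert: with two's
       complement this yields the maximum value of the signed type. */
    const my_integer_type max =
        ~((my_integer_type)1 << ((sizeof(my_integer_type) * CHAR_BIT) - 1));

    printf("%d\n%d\n", max, INT_MAX);   /* both lines print the same value */
    return 0;
}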
It is not clear whether the OP wants "every byte is 0xFF" or the maximum integer value.
1) [Edit] Assuming "every byte is 0xFF" is the OP's goal, @Eric Postpischil's idea is sound and simple:
// Better name: MY_INTEGER_TYPE_ALL_BITS_SET
#define MAX_MY_INTEGER_TYPE (~ (my_integer_type) 0)
2) It is interesting that the OP asks for the "value of a given type where every byte is 0xFF" and then proceeds to call it MAX_MY_INTEGER_TYPE. This sounds like an unsigned type, as the maximum signed value is rarely (if ever) "every byte is 0xFF". Using -1 as suggested by @H2CO3 looks good:
#define MAX_MY_INTEGER ((my_integer_type) 0 - 1)
3) Not yet answered: if OP wants max value and my_integer_type could be signed (and not assuming 2's complement).
You could use the fixed-width types from the stdint.h header.
typedef int32_t my_integer_type;
Then you can be sure that your type will be 4 bytes on all platforms.
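This also addresses the original maintenance problem, since <stdint.h> already provides the matching limit macro; a sketch:

#include <stdint.h>

typedef int32_t my_integer_type;

/* The limit comes straight from <stdint.h>, so it can never go out of
   sync with the typedef the way a hand-written 0xFFFFFFFF could. */
#define MAX_MY_INTEGER_TYPE INT32_MAX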
I was referring to a tutorial on C. I found that signed int and short signed int both have the range -32768 to 32767 and are 2 bytes. Is there any difference? If not, why are two kinds of declarations used?
It's platform-specific; all that you can be sure of in this context is that sizeof(int) >= sizeof(short), and that both hold at least 16 bits.
The best answer to your question can be found in the ANSI standard for C, section 2.2.4.2 - Numerical Limits. I reproduce the relevant parts of that section here for your convenience:
2.2.4.2 Numerical limits
A conforming implementation shall document all the limits specified in this section, which shall be specified in the headers <limits.h> and <float.h>.
"Sizes of integral types <limits.h>"
The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.
maximum number of bits for smallest object that is not a bit-field (byte): CHAR_BIT = 8
minimum value for an object of type signed char: SCHAR_MIN = -127
maximum value for an object of type signed char: SCHAR_MAX = +127
maximum value for an object of type unsigned char: UCHAR_MAX = 255
minimum value for an object of type char: CHAR_MIN = see below
maximum value for an object of type char: CHAR_MAX = see below
maximum number of bytes in a multibyte character, for any supported locale: MB_LEN_MAX = 1
minimum value for an object of type short int: SHRT_MIN = -32767
maximum value for an object of type short int: SHRT_MAX = +32767
maximum value for an object of type unsigned short int: USHRT_MAX = 65535
minimum value for an object of type int: INT_MIN = -32767
maximum value for an object of type int: INT_MAX = +32767
maximum value for an object of type unsigned int: UINT_MAX = 65535
minimum value for an object of type long int: LONG_MIN = -2147483647
maximum value for an object of type long int: LONG_MAX = +2147483647
maximum value for an object of type unsigned long int: ULONG_MAX = 4294967295
The not-so-widely-implemented C99 adds the following limits for the new long long types:
minimum value for an object of type long long int: LLONG_MIN = -9223372036854775807 // -(2^63 - 1)
maximum value for an object of type long long int: LLONG_MAX = +9223372036854775807 // 2^63 - 1
maximum value for an object of type unsigned long long int: ULLONG_MAX = 18446744073709551615 // 2^64 - 1
A couple of other answers have correctly quoted the C standard, which places minimum ranges on the types. However, as you can see, those minimum ranges are identical for short int and int - so the question remains: Why are short int and int distinct? When should I choose one over the other?
The reason that int is provided is to provide a type that is intended to match the "most efficient" integer type on the hardware in question (that still meets the minimum required range). int is what you should use in C as your general purpose small integer type - it should be your default choice.
If you know that you'll need more range than -32767 to 32767, you should instead choose long int or long long int. If you are storing a large number of small integers, such that space efficiency is more important than calculation efficiency, then you can instead choose short (or even signed char, if you know that your values will fit into the -127 to 127 range).
C and C++ only make minimum size guarantees on their objects. There is no exact size guarantee that is made. You cannot rely on type short being exactly 2 bytes, only that it can hold values in the specified range (so it is at least two bytes). Type int is at least as large as short and is often larger. Note that signed int is a long-winded way to say int while signed short int is a long-winded way to say short int which is a long-winded way to say short. With the exception of type char (which some compilers will make unsigned), all the builtin integral types are signed by default. The types short int and long int are longer ways to say short and long, respectively.
A signed int is at least as large as a short signed int. On most modern hardware a short int is 2 bytes (as you saw), and a regular int is 4 bytes. Older architectures generally had a 2-byte int which may have been the cause of your confusion.
There is also a long int which is usually either 4 or 8 bytes, depending on the compiler.
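To see what your own compiler uses, you can print the sizes (results vary by platform; the values in the comments are typical for a 64-bit Linux/x86-64 target):

#include <stdio.h>

int main(void)
{
    printf("short     : %zu bytes\n", sizeof(short));       /* typically 2 */
    printf("int       : %zu bytes\n", sizeof(int));         /* typically 4 */
    printf("long      : %zu bytes\n", sizeof(long));        /* 4 or 8      */
    printf("long long : %zu bytes\n", sizeof(long long));   /* typically 8 */
    return 0;
}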
Please read the following explanation for signed char; then we will talk about signed/unsigned int.
First I want to prepare the background for your question.
The char data type comes in two flavours (both integral data types):
unsigned char;
signed char;
Explained in different books as:
char 1 byte -128 to 127 (i.e. by default signed char)
signed char 1 byte -128 to 127
unsigned char 1 byte 0 to 255
One more thing: 1 byte = 8 bits (bit 0 to bit 7).
The most significant (7th) bit is used to represent the sign (1 = negative, 0 = positive):
-37 will be represented as 1101 1011 (the most significant bit is 1),
+37 will be represented as 0010 0101 (the most significant bit is 0).
Similarly, for char the last bit is by default taken as the sign bit.
Why is this?
Because char also maps to the ASCII codes of particular characters (e.g. A = 65), and for those we only need 7 value bits anyway.
When we want to extend the range of char/int by that one extra bit, we use unsigned char or unsigned int.
Thanks for the question.
Similarly, for 4-byte or 2-byte int we need both signed int and unsigned int.
It depends on the platform.
int is typically 32 bits wide on a 32-bit system, and it usually remains 32 bits wide on a 64-bit system as well (although this is not guaranteed).
I was referring to a tutorial on C. I found that signed int and short signed int both have the range -32768 to 32767 and are 2 bytes.
That's a very old tutorial. The modern C standard is as per Paul R's answer. On a 32-bit architecture, normally:
short int is 16 bits
int is 32 bits
long int is 32 bits
long long int is 64 bits
The size of an int would normally only be 16 bits on a 16-bit machine. 16-bit machines are presumably limited to embedded devices these days.
On a 16-bit machine, sizes may be like this:
short int is 16 bits
int is 16 bits
long int is 32 bits
long long int is 64 bits