I'm trying to check some homework answers about overflow for 2's complement addition, subtraction, etc. and I'm wondering if I can specify the size of a data type. For instance if I want to see what happens when I try to assign -128 or -256 to a 7-bit unsigned int.
On further reading I see you want bit sizes that are not the standard ones, such as 7-bit and 9-bit. You can achieve this using bit-fields:
struct bits9
{
    int x : 9;
};
Now you can use the type bits9, which has a single field x that is only 9 bits in size:
struct bits9 myValue;
myValue.x = 123;
For an arbitrarily sized value, you can use bit-fields in structs. For example, for a 7-bit value:
struct something {
    unsigned char field:7;
    unsigned char padding:1;
};
struct something value;
value.field = -128;
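If you want to see the wrap-around for yourself, here is a minimal sketch reusing the struct above. Note that whether unsigned char is allowed as a bit-field type is implementation-defined (though common compilers accept it), and on such compilers assignment to an unsigned 7-bit field reduces the value modulo 128:

#include <stdio.h>

struct something {
    unsigned char field:7;
    unsigned char padding:1;
};

int main(void)
{
    struct something value;

    value.field = -128;  /* -128 mod 128 == 0   */
    printf("%u\n", (unsigned)value.field);

    value.field = -1;    /* -1 mod 128 == 127   */
    printf("%u\n", (unsigned)value.field);

    value.field = 130;   /* 130 mod 128 == 2    */
    printf("%u\n", (unsigned)value.field);

    return 0;
}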
The smallest size you have is char, which is typically an 8-bit integer. You can have unsigned and signed chars. Take a look at the stdint.h header. It defines integer types for you in a platform-independent way. Also, there is no such thing as a 7-bit integer.
Using built-in types you have things like:
char value1; // 8 bits
short value2; // 16 bits
long value3; // 32 bits
long long value4; // 64 bits
Note this is the case with Microsoft's compiler on Windows. The C standard does not specify exact widths other than "this one must be at least as big as this other one" etc.
If you only care about a specific platform you can print out the sizes of your types and use those once you have figured them out.
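For example, a quick way to print the sizes on your own platform (sizeof reports bytes; multiply by CHAR_BIT for bits):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("char      = %zu bytes\n", sizeof(char));
    printf("short     = %zu bytes\n", sizeof(short));
    printf("int       = %zu bytes\n", sizeof(int));
    printf("long      = %zu bytes\n", sizeof(long));
    printf("long long = %zu bytes\n", sizeof(long long));
    return 0;
}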
Alternatively, you can use stdint.h, which is in the C99 standard. It has types with the width in the name to make it clear:
int8_t value1; // 8 bits
int16_t value2; // 16 bits
int32_t value3; // 32 bits
int64_t value4; // 64 bits
I would like to convert an int to a byte in C. How could I get the value?
In Java:
int num = 167;
byte b = num.toByte(); // -89
In C:
int num = 167;
???
There is no such type as byte in standard C. But if you don't want to include any extra headers, you can create one like this:
typedef unsigned char Byte;
Then you can create any variable you'd like with it:
int bar = 15;
Byte foo = (Byte)bar;
You can simply cast to a byte:
unsigned char b=(unsigned char)num;
Note that if num is greater than 255 or less than 0, C won't crash; the value simply wraps around modulo 256, which may not be the result you expect.
In computer science, the term byte is well-defined as an 8-bit chunk of raw data. Apparently Java uses a different definition than computer science...
-89 is not the value 167 "converted to a byte". 167 already fits in a byte, so no conversion is necessary.
-89 is the value 167 converted to a signed two's-complement 8-bit representation.
The most correct type to use for signed 2's complement 8 bit integers in C is int8_t from stdint.h.
Converting from int to int8_t is done implicitly in C upon assignment. There is no need for a cast.
int num = 167;
int8_t b = num;
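Putting that together as a complete program: on the usual two's-complement implementations the out-of-range value wraps, so this prints -89 (strictly speaking, the result of the narrowing conversion is implementation-defined before C23):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int num = 167;
    int8_t b = num;     /* implicit conversion to 8 bits */
    printf("%d\n", b);  /* typically prints -89 */
    return 0;
}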
byte is a Java signed integer type with a range of -128 to 127.
The corresponding type in C is int8_t defined in <stdint.h> for architectures with 8-bit bytes. It is an alias for signed char.
You can write:
#include <stdint.h>
void f() {
    int num = 167;
    int8_t b = num; // or signed char b = num;
    ...
}
If your compiler emits a warning about the implicit conversion to a smaller type, you can add an explicit cast:
int8_t b = (int8_t)num; // or signed char b = (signed char)num;
Note however that it is much more common to think of 8-bit bytes as unsigned quantities in the range 0 to 255, for which one would use the type uint8_t or unsigned char. The reason Java's byte is a signed type might be that there are no unsigned integer types in that language, but it is quite confusing for readers coming from other languages.
byte can also be defined as a typedef:
typedef unsigned char byte; // 0 to 255
or
typedef signed char byte; // -128 to 127
Do not use type char because it is implementation defined whether this type is signed or unsigned by default. Reserve type char for the characters in C strings, although many functions actually consider these to be unsigned: strcmp(), functions from <ctype.h>...
If I understand correctly, the int_fastN_t types are guaranteed to be at least N bits wide. Depending on the compiler and the architecture of the computer, these types can also be defined on more than N bits. For instance, an int_fast8_t could be implemented as a 32-bit int.
Is there some kind of mechanism which enforces that the value of an int_fastN_t never overflows N bits, even if the underlying type is wider than N bits?
Consider the following code for example:
int main(){
int_fast8_t a = 64;
a *= 2; // -128
return 0;
}
I do not want a to be greater than 127. If a is interpreted as a "regular" int (32 bits), is it possible that a exceeds 127 and is not equal to -128?
Thanks for your answers.
int_fast8_t a = 64;
a *= 2;
If a is interpreted as a "regular" int (32 bits), is it possible that a exceeds 127 and is not equal to -128?
Yes. It is very likely that a * 2 will be stored in a as 128. I would expect this on all processors unless the processor is an 8-bit one.
Is there some kind of mechanism which enforces that the value of an int_fastN_t never overflows?
No. Signed integer overflow is still possible as well as values outside the [-128...127] range.
I do not want a to be greater than 127
Use int8_t. The value saved will never exceed 127, yet the code still has implementation-defined behavior when storing 128 into an int8_t. This often results in -128 (values wrap mod 256), yet other values are possible (this is uncommon).
int8_t a = 64;
a *= 2;
If assignment to int8_t is not available or has unexpected implementation-defined behavior, code could force the wrapping itself:
int_fast8_t a = foo(); // a takes on some value
a %= 256;
if (a < -128) a += 256;
else if (a > 127) a -= 256;
It is absolutely possible for the result to exceed 127. int_fast8_t (and uint_fast8_t and all the rest) set an explicit minimum size for the value, but the type could be larger, and the compiler will not prevent it from exceeding the stated 8-bit bounds (it behaves exactly like the larger type it is an alias for; the "8-ness" of it isn't relevant at runtime). The only guarantee is that it can represent all values in the 8-bit range.
If you need it to explicitly truncate/wrap to 8-bit values, either use (or cast to) int8_t to restrict the representable range (though overflow behavior wouldn't be defined), or explicitly use masks to perform the same work yourself when needed.
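A sketch of the masking approach (the helper name wrap8 is made up for illustration): it keeps the low 8 bits and re-interprets them as a two's-complement value in [-128, 127]:

#include <stdint.h>

/* Fold a value into [-128, 127] the way 8-bit two's-complement
   wrap-around would. */
static int_fast8_t wrap8(int_fast8_t a)
{
    int v = (int)((unsigned)a & 0xFFu);   /* keep the low 8 bits: 0..255 */
    return (int_fast8_t)(v > 127 ? v - 256 : v);
}

With it, a = wrap8(a * 2); yields -128 for the example above on typical implementations.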
Nope. The fast types really are just typedefs. For example, stdint.h on my machine includes:
/* Fast types. */
/* Signed. */
typedef signed char int_fast8_t;
#if __WORDSIZE == 64
typedef long int int_fast16_t;
typedef long int int_fast32_t;
typedef long int int_fast64_t;
#else
typedef int int_fast16_t;
typedef int int_fast32_t;
__extension__
typedef long long int int_fast64_t;
#endif
/* Unsigned. */
typedef unsigned char uint_fast8_t;
#if __WORDSIZE == 64
typedef unsigned long int uint_fast16_t;
typedef unsigned long int uint_fast32_t;
typedef unsigned long int uint_fast64_t;
#else
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
__extension__
typedef unsigned long long int uint_fast64_t;
#endif
The closest you can come without a significant performance penalty is probably casting the result to an 8-bit type.
Just use unsigned char if you want to manipulate 8 bits (unsigned char is one byte long); you will work with the unsigned range 0 to 0xFF (255).
From the C(99) standard:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
So use int8_t to guarantee 8 bit int.
A compliant C99/C11 compiler on a POSIX platform must have int8_t.
I'm trying to celebrate 10,000,000 questions on StackOverflow with a simple console application written in C, but I don't want to waste any memory. What's the most efficient way to store the number 10,000,000 in memory?
The type you're looking for is int_least32_t, from stdint.h, which will give you the smallest type with at least 32 bits. This type is guaranteed to exist on C99 implementations.
Exact-width typedefs such as int32_t are not guaranteed to exist, though you'd be hard pressed to find a platform without it.
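For example (the variable name is just illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int_least32_t questions = 10000000;       /* guaranteed to fit */
    printf("%ld\n", (long)questions);         /* prints 10000000 */
    printf("%zu bytes\n", sizeof questions);  /* usually 4 */
    return 0;
}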
The number 10000000 (ten million) requires 24 bits to store as an unsigned value.
Most C implementations do not have a 24-bit type. Any implementation that conforms to the 1999 C standard or later must provide the <stdint.h> header, and must define all of:
uint_least8_t
uint_least16_t
uint_least32_t
uint_least64_t
each of which is (an alias for) an unsigned integer type with at least the specified width, such that no narrower integer type has at least the specified width. Of these, uint_least32_t is the narrowest type that's guaranteed to hold the value 10000000.
On the vast majority of C implementations, uint_least32_t is the type you're looking for -- but on an implementation that supports 24-bit integers, there will be a narrower type that satisfies your requirements.
Such an implementation would probably define uint24_t, assuming that its unsigned 24-bit type has no padding bits. So you could do something like this:
#include <stdint.h>
#ifdef UINT24_MAX
typedef uint24_t my_type;
#else
typedef uint_least32_t my_type;
#endif
That's still not 100% reliable (for example if there's a 28-bit type but no 24-bit type, this would miss it). In the worst case, it would select uint_least32_t.
If you want to restrict yourself to the predefined types (perhaps because you want to support pre-C99 implementations), you could do this:
#include <limits.h>
#define TEN_MILLION 10000000
#if UCHAR_MAX >= TEN_MILLION
typedef unsigned char my_type;
#elif USHRT_MAX >= TEN_MILLION
typedef unsigned short my_type;
#elif UINT_MAX >= TEN_MILLION
typedef unsigned int my_type;
#else
typedef unsigned long my_type;
#endif
If you merely want the narrowest predefined type that's guaranteed to hold the value 10000000 on all implementations (even if some implementations might have a narrower type that can hold it), use long (int can be as narrow as 16 bits).
If you don't require using an integer type, you can simply define a type that's guaranteed to be 3 bytes wide:
typedef unsigned char my_type[3];
But actually that will be wider than you need if CHAR_BIT > 8:
typedef unsigned char my_type[24 / CHAR_BIT];
but that will fail if 24 is not a multiple of CHAR_BIT.
Finally, your requirement is to represent the number 10000000; you didn't say you need to be able to represent any other numbers:
enum my_type { TEN_MILLION };
Or you can define a 1-bit bitfield with the value 1 denoting 10000000 and the value 0 denoting not 10000000.
Technically a 24-bit integer can store that, but there are no 24-bit primitive types in C. You will have to use a 32-bit int, or a long.
For performance that would be the best approach; wasting 1 unused byte of memory is irrelevant.
Now, if for study purposes you really, really want to store 10 million in the smallest piece of memory possible, and you are even willing to create your own storage method to achieve that, you can store it in 1 byte by following the example of floating point. You only need 4 bits to represent 10 and another 3 bits to represent 7, so 1 byte holds all the data you need to calculate pow(10, 7). It even leaves you an extra free bit you could use as a sign.
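A sketch of that idea; the packing layout (base in the low 4 bits, exponent in the next 3 bits) is made up for illustration:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Pack base 10 (4 bits) and exponent 7 (3 bits) into one byte. */
    unsigned char packed = (unsigned char)(10u | (7u << 4));

    unsigned base     = packed & 0x0Fu;         /* low 4 bits  -> 10 */
    unsigned exponent = (packed >> 4) & 0x07u;  /* next 3 bits -> 7  */

    printf("%.0f\n", pow(base, exponent));      /* prints 10000000 */
    return 0;
}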
The largest value that can be stored in an uint16_t is 0xFFFF, which is 65535 in decimal. That is obviously not large enough.
The largest value that can be stored in an uint32_t is 0xFFFFFFFF, which is 4294967295 in decimal. That is obviously large enough.
Looks like you'll need to use uint32_t.
If you want to store a number n > 0 in an unsigned integer data type, then you need at least ⌈lg (n+1)⌉ bits in your integer type. In your case, ⌈lg (10,000,000 + 1)⌉ = 24, so you'd need at least 24 bits in whatever data type you picked.
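A small sketch of that calculation, counting how many bits an unsigned value needs:

#include <stdio.h>

/* Number of bits needed to store n as an unsigned value. */
static unsigned bits_needed(unsigned long n)
{
    unsigned bits = 0;
    while (n > 0) {
        n >>= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    printf("%u\n", bits_needed(10000000UL));  /* prints 24 */
    return 0;
}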
To the best of my knowledge, the C spec does not include an integer type that holds specifically 24 bits. The closest option would be to use something like uint32_t, but (as was mentioned in a comment) this type might not exist on all compilers. The type int_least32_t is guaranteed to exist, but it might be way larger than necessary.
Alternatively, if you just need it to work on one particular system, your specific compiler may have a compiler-specific data type for 24-bit integers. (Spoiler: it probably doesn't. ^_^)
The non-smart-alecky answer is uint32_t or int32_t.
But if you're really low on memory,
uint8_t millions_of_questions;
But only if you are a fan of fixed-point arithmetic and are willing to deal with some error (or are using some specialized numerical representation scheme). Depends on what you're doing with it after you store it and what other numbers you want to be storable in your "datatype".
struct {
    uint16_t low;  // Lower 16 bits
    uint8_t high;  // Upper 8 bits
} my_uint24_t;
This provides 24 bits of storage and can store values up to 16,777,215. It's not a native type and would need special accessor functions, but it takes less space than a 32-bit integer on most platforms (packing may be required).
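A minimal sketch of such accessors, assuming the struct is wrapped in a typedef; the names u24, get_u24 and set_u24 are made up for illustration:

#include <stdint.h>

typedef struct {
    uint16_t low;   /* lower 16 bits */
    uint8_t  high;  /* upper 8 bits  */
} u24;

/* Read the 24-bit value back as a 32-bit unsigned integer. */
static uint32_t get_u24(const u24 *v)
{
    return ((uint32_t)v->high << 16) | v->low;
}

/* Store the low 24 bits of a 32-bit value. */
static void set_u24(u24 *v, uint32_t value)
{
    v->low  = (uint16_t)(value & 0xFFFFu);
    v->high = (uint8_t)((value >> 16) & 0xFFu);
}

With these, set_u24(&x, 10000000) followed by get_u24(&x) gives back 10000000.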
uint32_t from <stdint.h> would probably be your go-to data type. Since uint16_t would only be able to store 2^16-1=65535, the next available type is uint32_t.
Note that uint32_t stores unsigned values; use int32_t if you want to store signed numbers as well. You can find more info, e.g., here: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdint.h.html
How do you store a 16 bit binary in an array in C? What data type would I have to make the array? long int, float, char?
ex. data = {'1001111101110010', '1001101011010101','1000000011010010'}
Within stdint.h is the following typedef:
uint16_t
This integer type is exactly 16 bits in width. You can use it for your needs like this:
#include <stdint.h>
uint16_t arr[NUM_ELEMENTS] = {...};
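For instance, using the three values from the question written as hexadecimal constants (0x9F72, 0x9AD5 and 0x80D2 are the same bit patterns as the binary strings above):

#include <stdint.h>

uint16_t data[] = { 0x9F72, 0x9AD5, 0x80D2 };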
The only type I am aware of that can store anything resembling:
{'1001111101110010', '1001101011010101','1000000011010010'}
is a char array:
char *binaryArray[] = {"1001111101110010", "1001101011010101","1000000011010010"};
I am pretty certain that is not what you want.
There are several types that would work to hold an array of bits (but they are not presented that way):
unsigned short,
short,
wchar_t,
unsigned __int16
(et. al.) all have 16 bits.
Look here for other data types that would work.
Pick any one that will work, and create an array of C bit fields. In C this could look like:
typedef struct
{
    // type            member name   field width (number of bits in field)
    unsigned short     bits          : 16;
} BIT;
BIT bit[10]; //array of 10 bit fields, each with capacity for 16 bits
Note:
An assignment such as:
bit[0].bits = 40818; //0x9F72 //1001111101110010
bit[1].bits = 39637; //0x9AD5 //1001101011010101
bit[2].bits = 32978; //0x80D2 //1000000011010010
does not look binary, but the stored values are the same.
You can read more about bit fields in this example
I want to define a datatype for boolean true/false value in C. Is there any way to define a datatype with 1 bit size to declare for boolean?
Maybe you are looking for a bit-field:
struct bitfield
{
    unsigned b0:1;
    unsigned b1:1;
    unsigned b2:1;
    unsigned b3:1;
    unsigned b4:1;
    unsigned b5:1;
    unsigned b6:1;
    unsigned b7:1;
};
There are so many implementation-defined features to bit-fields that it is almost unbelievable, but each of the elements of the struct bitfield occupies a single bit. However, the size of the structure may be 4 bytes even though you only use 8 bits (1 byte) of it for the 8 bit-fields.
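A short usage sketch, assuming the struct bitfield definition above is in scope:

#include <stdio.h>

int main(void)
{
    struct bitfield flags = {0};   /* all flags start at 0 */

    flags.b0 = 1;                  /* set one flag */
    flags.b3 = 1;                  /* set another  */

    if (flags.b0 && !flags.b1)
        printf("b0 set, b1 clear\n");

    /* The whole struct may still occupy more than one byte. */
    printf("sizeof(struct bitfield) = %zu bytes\n", sizeof flags);
    return 0;
}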
There is no other way to create a single bit of storage. Otherwise, C provides bytes as the smallest addressable unit, and a byte must have at least 8 bits (historically, there were machines with 9-bit or 10-bit bytes, but most machines these days provide 8-bit bytes only — unless perhaps you're on a DSP where the smallest addressable unit may be a 16-bit quantity).
Try this:
#define bool int
#define true 1
#define false 0
In my opinion, use a variable of type int; that is what we traditionally do in C. For example:
int flag=0;
if(flag)
{
//do something
}
else
{
//do something else
}
EDIT:
You can specify the size of the fields in a structure of bit-fields, in bits. However, the compiler will round the structure up to at least the nearest byte, so you save nothing; the fields are not addressable, and the order in which the bits of a bit-field are stored is implementation-defined. E.g.:
struct A {
    int a : 1; // 1 bit wide
    int b : 1;
    int c : 2; // 2 bits
    int d : 4; // 4 bits
};
Bit-fields are only allowed inside structures. And other than bit-fields, no object is allowed to be smaller than sizeof(char).
The answer is _Bool or bool.
C99 and later have a built-in type _Bool which is guaranteed to be able to store the values 0 and 1. It is an unsigned integer type; as a standalone object it occupies at least one byte, though it can be declared as a 1-bit bit-field.
They also have the library header <stdbool.h>, which provides a macro bool that expands to _Bool, and macros true and false that expand to 1 and 0 respectively.
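For example, with a C99 or later compiler:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    bool done = false;   /* bool expands to _Bool */

    done = true;
    if (done)
        printf("done is true\n");

    return 0;
}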
If you are using an older compiler, you will have to fake it.
[edit: thanks Jonathan]