significance of int a : 2 syntax [duplicate] - c

Possible Duplicate:
What does ‘unsigned temp:3’ mean?
I came across some syntax like int variable:4;
Can anyone tell me what this syntax means?
struct abc
{
int a;
int b:2;
int c:1;
};

It defines the width of a bitfield in a struct. A bitfield holds an integer value, but its length is restricted to a certain number of bits, and hence it can only hold a restricted range of values.
In the code you posted, a is a plain int (32 bits on most platforms), b is a 2-bit bitfield and c is a 1-bit bitfield.

It is a bit-field. Instead of storing a full integer for b, it only stores 2 bits, so b can have the values -2, -1, 0 and 1. Similarly, c can only have the values -1 and 0.
Whether a plain int bit-field is signed or unsigned is implementation-defined, so depending on your compiler these fields may instead present the values 0, 1, 2 and 3, or 0 and 1.
The fields will also be packed into less space than separate integers, but again this happens in an implementation-defined manner, and you would do well not to make assumptions about how much memory is actually used or about the order of the fields in memory.
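As a quick sketch of what that means in practice (an illustration only, assuming a compiler where plain int bit-fields are signed and a C99 or later library for %zu):
#include <stdio.h>

struct abc {
    int a;      /* a full int */
    int b : 2;  /* 2 bits: values -2..1 when signed */
    int c : 1;  /* 1 bit: values -1..0 when signed */
};

int main(void) {
    struct abc x = {0, 0, 0};
    x.b = 1;
    x.c = -1;
    printf("sizeof(struct abc) = %zu\n", sizeof(struct abc));
    printf("b = %d, c = %d\n", x.b, x.c);
    return 0;
}
On a typical compiler with 32-bit int, b and c are packed into one extra storage unit after a, so the struct is 8 bytes rather than 12, but none of that is guaranteed.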


Why does an int take up 4 bytes in c or any other language? [duplicate]

Possible Duplicate:
size of int variable
This is a bit of a general question and not completely related to the C programming language, but it's what I'm studying at the moment.
Why does an integer take up 4 bytes, or however many bytes it takes on a given system?
Why does it not take up just 1 byte per integer?
For example, why does the following take up 8 bytes:
int a = 1;
int b = 1;
Thanks
I am not sure whether you are asking why int objects have fixed sizes instead of variable sizes or whether you are asking why int objects have the fixed sizes they do. This answers the former.
We do not want the basic types to have variable lengths. That makes it very complicated to work with them.
We want them to have fixed lengths, because then it is much easier to generate instructions to operate on them. Also, the operations will be faster.
If the size of an int were variable, consider what happens when you do:
b = 3;
b += 100000;
scanf("%d", &b);
When b is first assigned, only one byte is needed. Then, when the addition is performed, more space is needed for b. But b might have neighbors in memory, so the program cannot just grow it in place; it has to release the old memory and allocate new memory somewhere.
Then, when we do the scanf, nobody knows in advance how much data is coming. scanf would have to do some very complicated work to grow b over and over again as it reads more digits. And, when it is done, how does it let you know where the new b is? There has to be some mechanism to update the location of b everywhere it is used. This is hard and complicated and causes additional problems.
In contrast, if b has a fixed size of four bytes, this is easy. For the assignment, write 3 to b. For the addition, add 100000 to the value in b and write the result to b. For the scanf, pass the address of b to scanf and let it write the new value to b. This is easy.
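To make that concrete, here is a small sketch (not from the question, just an illustration) showing that the size of b is fixed at compile time and does not change with the value stored in it:
#include <stdio.h>

int main(void) {
    int b = 3;
    printf("sizeof b = %zu\n", sizeof b);      /* size chosen at compile time */
    b += 100000;
    printf("sizeof b = %zu\n", sizeof b);      /* unchanged: same object, same size */
    if (scanf("%d", &b) == 1)
        printf("sizeof b = %zu\n", sizeof b);  /* still unchanged, whatever was typed */
    return 0;
}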
The basic integral type int is guaranteed to have at least 16 bits; "at least" means that compilers/architectures may also provide more bits. On 32- and 64-bit systems int will most likely comprise 32 bits (i.e. 4 bytes), although some platforms make it wider or narrower (cf., for example, cppreference.com):
Integer types
... int (also accessible as signed int): This is the most optimal
integer type for the platform, and is guaranteed to be at least 16
bits. Most current systems use 32 bits (see Data models below).
If you want an integral type with exactly 8 bits, use int8_t or uint8_t from <stdint.h>.
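A minimal sketch of checking this on your own platform, assuming <stdint.h> is available (C99 or later):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    printf("int     : %zu bytes\n", sizeof(int));      /* at least 16 bits, often 32 */
    printf("int8_t  : %zu bytes\n", sizeof(int8_t));   /* exactly 8 bits where provided */
    printf("int16_t : %zu bytes\n", sizeof(int16_t));  /* exactly 16 bits where provided */
    printf("int32_t : %zu bytes\n", sizeof(int32_t));  /* exactly 32 bits where provided */
    return 0;
}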
It doesn't. It's implementation-defined. A signed int in gcc on an Atmel 8-bit microcontroller, for example, is a 16-bit integer. An unsigned int is also 16 bits, but ranges from 0 to 65535 since it's unsigned.
The fact that an int uses a fixed number of bytes (such as 4) is a compiler/CPU efficiency and limitation, designed to make common integer operations fast and efficient.
There are types (such as BigInteger in Java) that take a variable amount of space. These types would have 2 fields, the first being the number of words being used to represent the integer, and the second being the array of words. You could define your own VarInt type, something like:
typedef struct {
    unsigned char length;   /* number of payload bytes in use */
    unsigned char bytes[4]; /* little-endian payload; capacity fixed here so the
                               initializers are standard C (a truly variable-length
                               version would use a flexible array member and malloc) */
} VarInt;
VarInt one    = {1, {1}};        /* 1                    */
VarInt v257   = {2, {1, 1}};     /* 1 + 1*256   = 257    */
VarInt v65537 = {3, {1, 0, 1}};  /* 1 + 1*65536 = 65537  */
and so on, but this would not be very fast to perform arithmetic on. You'd have to decide how you would want to treat overflow; resizing the storage would require dynamic memory allocation.
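For illustration only, a standalone sketch of decoding such a representation back into an ordinary integer; the layout is repeated so the example compiles alone, and the helper name varint_value is made up here, not part of any library:
#include <stdio.h>

typedef struct {            /* same layout as the sketch above */
    unsigned char length;
    unsigned char bytes[4];
} VarInt;

static unsigned long varint_value(const VarInt *v) {
    unsigned long value = 0;
    for (int i = v->length - 1; i >= 0; i--)   /* most significant byte first */
        value = value * 256u + v->bytes[i];
    return value;
}

int main(void) {
    VarInt v65537 = {3, {1, 0, 1}};
    printf("%lu\n", varint_value(&v65537));  /* prints 65537 */
    return 0;
}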

uint8_t data types initialisation [duplicate]

Possible Duplicate:
What does 'unsigned temp:3' in a struct or union mean?
Recently I stumbled upon code written like this:
typedef struct
{
uint8_t TC0_WG0 :2;
uint8_t TC0_CS :3;
} Timer0;
What I wanted to know is: what does the part that says :2; and :3; specifically mean? Is it accessing bits 0, 1 and 2 only, or bits 0, 1, 2 and 3 only, of the 8-bit unsigned character, or what?
These are bitfields, explicitly stating that TC0_CS will occupy 3 bits.
They can be used to save space. In embedded systems I have seen them used when designing an interrupt system: a bitfield names the specific positions used to activate or deactivate interrupts.
The code is not accessing bits 0, 1 or 2 directly, but the OP can do that with appropriate bit masking.
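Here is a minimal sketch of both approaches, the bit-field from the question and equivalent explicit masking on a plain byte; note that allowing uint8_t as a bit-field type and the exact bit positions chosen are both implementation-defined, so treat this as an illustration rather than hardware-accurate register code:
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t TC0_WG0 : 2;  /* occupies 2 bits */
    uint8_t TC0_CS  : 3;  /* occupies the next 3 bits */
} Timer0;

int main(void) {
    Timer0 t = {0, 0};
    t.TC0_WG0 = 3;  /* 3 is the largest value that fits in 2 bits */
    t.TC0_CS  = 5;  /* any value 0..7 fits in 3 bits */

    /* Roughly the same effect with explicit masks on a plain byte. */
    uint8_t reg = 0;
    reg = (uint8_t)((reg & ~0x03u) | (3u & 0x03u));               /* low 2 bits  */
    reg = (uint8_t)((reg & ~(0x07u << 2)) | ((5u & 0x07u) << 2)); /* next 3 bits */

    printf("WG0=%d CS=%d reg=0x%02X\n", (int)t.TC0_WG0, (int)t.TC0_CS, (unsigned)reg);
    return 0;
}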
These are called bit-field members.
cppreference says:
Bit fields
Declares a member with explicit width, in bits. Adjacent bit field
members may be packed to share and straddle the individual bytes.
A bit field declaration is a struct or union member declaration which
uses the following declarator:
identifier(optional) : width
identifier - the name of the bit field that is being declared. The name is optional: nameless bitfields introduce the specified number of
bits of padding
width - an integer constant expression with a value greater or equal to zero and less or equal the number of bits in the underlying
type. When greater than zero, this is the number of bits that this bit
field will occupy. The value zero is only allowed for nameless
bitfields and has special meaning: it specifies that the next bit
field in the class definition will begin at an allocation unit's
boundary.
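A small sketch of the zero-width rule quoted above (the sizes in the comment assume a typical 32-bit unsigned int and may differ on other implementations):
#include <stdio.h>

struct packed_fields {    /* a and b can share one allocation unit */
    unsigned int a : 3;
    unsigned int b : 3;
};

struct split_fields {     /* the unnamed :0 member forces b into the next unit */
    unsigned int a : 3;
    unsigned int   : 0;
    unsigned int b : 3;
};

int main(void) {
    printf("packed: %zu bytes, split: %zu bytes\n",
           sizeof(struct packed_fields), sizeof(struct split_fields));  /* commonly 4 and 8 */
    return 0;
}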

Operating on arrays using Unsigned Values [closed]

I have been generating a C function from the Matlab Coder environment in order to implement it in another piece of software called MAX/MSP.
I'm trying to make sense of it, given my poor level in C programming, and there are some syntax elements I can't understand: the use of unsigned values like 0U or 1U when passing or indexing arrays.
The next example doesn't do anything by itself; pasting the entire code wouldn't help much, unless you think it would.
void function1(const double A[49], double B[50])
{
function2( (double *)&A[0U] );
}
static void function2(const double A[])
{
}
While doing some math, Matlab wrote something like:
b = 2;
f[1U] += b;
I don't understand the use of an unsigned value here either...
Thanks a lot!
For a[n], array indexes are always non-negative values, from 0 to n-1. Appending a u to a decimal constant poses no problem for indexing an array, yet does offer a benefit: it ensures that the value is of minimal type width and of some unsigned type.
Automatically generated code with fixed indexes, as produced by Matlab Coder here, benefits from using a u suffix.
Consider small and large values on a system where unsigned, int, long and size_t are all 32 bits:
aweeu[0u]; // `0u` is 32-bit `unsigned`.
aweenou[0]; // `0` is 32-bit `int`.
abigu[3000000000u]; // `3000000000u` is a 32-bit `unsigned`.
abignou[3000000000]; // `3000000000` is a 64-bit `long long`.
Is this of value? Perhaps. Some compilers may look at the value first, see that all of the above are in range of size_t, and not complain. Others may complain about an index of type long long or possibly even int. By appending the u, such rare complaints do not occur.
The U suffix is obviously not necessary here. It can be useful to force unsigned arithmetic in certain situations, with surprising side effects:
if (-1 < 1U) {
printf("surprise!\n");
}
Here -1 is converted to unsigned int and becomes UINT_MAX, so the comparison is false and nothing is printed.
On some rare occasions, the suffix is necessary to avoid an unwanted change of type: on many current architectures, the following relations hold, and the difference between the type of 2147483648 and that of 2147483648U is more than just signedness.
For example, on 32-bit Linux and on 32- and 64-bit Windows:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned) // 4 bytes
On many embedded systems with 16-bit ints:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned long) // 4 bytes
sizeof(32768) == sizeof(long) // 4 bytes
sizeof(32768U) == sizeof(unsigned int) // 2 bytes
Depending on implementation details, array index values can exceed the range of both type int and type unsigned, and pointer offset values can be even larger. Just specifying U is no guarantee of anything.
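If you want to see the effect of the suffix on your own machine, a quick sketch (the printed sizes depend on the platform's data model):
#include <stdio.h>

int main(void) {
    /* With a 32-bit int, the unsuffixed constant does not fit in int and is given
       a wider signed type; the U form fits in unsigned int and stays 32 bits. */
    printf("sizeof(2147483648)  = %zu\n", sizeof(2147483648));
    printf("sizeof(2147483648U) = %zu\n", sizeof(2147483648U));
    printf("sizeof(0)           = %zu\n", sizeof(0));
    printf("sizeof(0U)          = %zu\n", sizeof(0U));
    return 0;
}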

How does explicit casting work in C? [duplicate]

Possible Duplicate:
Casting a large number type to a smaller type
Let's say I have the following code lines:
int a; // 4-byte-integer
char b, c, d, e;
b = (char)(a >> 24);
c = (char)(a >> 16);
d = (char)(a >> 8);
e = (char)a;
Let's also assume that the system is storing the bytes in little-endian mode and a = 100.
When using the explicit cast like that, do the left-most bytes disappear?
I guess that after executing the above lines, the variables will hold these values: b=0, c=0, d=0, e=100. Is that right?
You guessed right! But your explanation is not completely correct:
The behavior of the above code does not depend on the endianness of the system: if int is 32 bits and char is 8 bits, a >> 24 is the high-order byte and a & 255 the low-order byte, whatever the endianness.
Explicit casts such as (char) are not needed, because C implicitly converts the expression value to the type of the assignment destination. I suppose the programmer wrote it this way to silence a compiler warning; Microsoft compilers are notoriously vocal about losing precision in assignments.
The leftmost bytes do not disappear: the value is reduced modulo 2^8 (assuming char is 8 bits in your case), so (char)a is essentially the same as a & 255. But if char is signed, this conversion is implementation-defined by the Standard when the value exceeds CHAR_MAX. It is wise to use unsigned types for this kind of bit manipulation.
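A small sketch of the same byte extraction using unsigned types, as recommended above (assuming a 32-bit unsigned int):
#include <stdio.h>

int main(void) {
    unsigned int a = 100;
    unsigned char b = (unsigned char)(a >> 24);  /* high-order byte */
    unsigned char c = (unsigned char)(a >> 16);
    unsigned char d = (unsigned char)(a >> 8);
    unsigned char e = (unsigned char)a;          /* low-order byte, same as a & 255 */
    printf("b=%d c=%d d=%d e=%d\n", b, c, d, e); /* prints b=0 c=0 d=0 e=100 */
    return 0;
}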

How do I create a 3 bit variable as datatype in C? [duplicate]

Possible Duplicate:
Is it possible to create a data type of length one bit in C
I can typedef char to CHAR1, which is 8 bits.
But how can I make a 3-bit variable as a datatype?
You might want to do something similar to the following:
struct
{
.
.
.
unsigned int fieldof3bits : 3;
.
.
.
} newdatatypename;
In this case, fieldof3bits takes up 3 bits in the structure (the size of the structure may vary, though, depending on how you define everything else).
This usage is something called a bit field.
From Wikipedia:
A bit field is a term used in computer programming to store multiple, logical, neighboring bits, where each of the sets of bits, and single bits can be addressed. A bit field is most commonly used to represent integral types of known, fixed bit-width.
It seems you're asking for bitfields: https://en.wikipedia.org/wiki/Bit_field
Just be aware that in some cases it can be safer to just use char or unsigned char instead of bits (compiler-specific packing, physical memory layout, etc.).
Happy coding!
typedef struct {
    int a : 3;
} hello;
It is only possible when the bit-field is inside a structure (or union); a standalone 3-bit variable is not possible.
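A short usage sketch (the struct and field names here are made up; an unsigned bit-field is used so the range is simply 0..7):
#include <stdio.h>

typedef struct {
    unsigned int value : 3;  /* holds 0..7 */
} ThreeBit;

int main(void) {
    ThreeBit t = {0};
    t.value = 7;             /* largest value that fits in 3 bits */
    t.value += 1;            /* wraps to 0: assignment to an unsigned field is modulo 2^3 */
    printf("value = %d\n", (int)t.value);  /* prints 0 */
    return 0;
}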
