value of members of unions with bit-fields - c

I'm trying to learn how memory is allocated for unions containing bit-fields.
I've looked at similar posts and questions and have understood that padding is often involved, depending on the order in which members are declared in the structure.
1.
#include <stdio.h>

union Example
{
    int i : 2;
    int j : 9;
};

int main(void)
{
    union Example ex;
    ex.j = 15;
    printf("%d", ex.i);
}
Output: -1
2.
#include <stdio.h>

union Example
{
    unsigned int i : 2;
    int j : 9;
};

int main(void)
{
    union Example ex;
    ex.j = 15;
    printf("%d", ex.i);
}
Output: 3
Can someone please explain the output? Is it just the standard behavior for such cases?
I use the built-in GCC compiler on Ubuntu.

The way your compiler allocates bits for bit-fields in a union happens to overlap the low bits of j with the bits of i. Thus, when j is set to 15, the two bits of i are each set to 1.
When i is a two-bit signed integer, declared with int i:2, your compiler interprets it as a two-bit two’s complement number. With this interpretation, the bits 11 represent −1.
When i is a two-bit unsigned integer, declared with unsigned int i:2, it is pure binary, with no sign bit. With this interpretation, the bits 11 represent 3.
The program below shows that setting a signed two-bit integer to −1 or an unsigned two-bit integer to 3 produces the same bit pattern in the union, at least in your C implementation.
#include <stdio.h>

int main(void)
{
    union Example
    {
        unsigned u;
        int i : 2;
        int j : 9;
        unsigned int k : 2;
    } ex;

    ex.u = 0;
    ex.i = -1;
    printf("ex.i = -1: %#x.\n", ex.u);

    ex.u = 0;
    ex.j = 15;
    printf("ex.j = 15: %#x.\n", ex.u);

    ex.u = 0;
    ex.k = 3;
    printf("ex.k = 3: %#x.\n", ex.u);
}
Output:
ex.i = -1: 0x3.
ex.j = 15: 0xf.
ex.k = 3: 0x3.
Note that a compiler could also allocate the bits high-to-low instead of low-to-high, in which case the high bits of j would overlap with i, instead of the low bits. Also, a compiler could use different sizes of storage units for the nine-bit j than it does for the two-bit i, in which case their bits might not overlap at all. (For example, it might use a single eight-bit byte for i but use a 32-bit word for j. The eight-bit byte would overlap some part of the 32-bit word, but not necessarily the part used for j.)

Example 1
#include <stdio.h>

union Example
{
    int i : 2;
    int j : 9;
};

int main(void)
{
    union Example ex;
    ex.j = 15;
    printf("%d", ex.i);
    printf("\nSize of Union : %zu\n", sizeof(ex));
    return 0;
}
Output:
-1
Size of Union : 4
Observations:
From this output, we can deduce that even though at most 9 bits are used, the compiler allocates 4 bytes (the size of the int storage unit).
The statement union Example ex; reserves 4 bytes of memory for the union Example object ex.
The statement ex.j=15; assigns 0x0000000F to those 4 bytes of memory, so the nine bits of j hold 0-0-0-0-0-1-1-1-1.
The statement printf("%d",ex.i); prints ex.i, which is 2 bits wide; those bits hold 1-1 from the previous statement.
Here comes the interesting part: ex.i is of type signed int, so its high bit acts as a sign bit, and most CPUs represent signed values in two's complement form.
Interpreting the bits 1-1 as a two-bit two's complement number gives −2 + 1 = −1, so the output we get is -1.
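As a quick check (a sketch that is not part of the original answer; storing out-of-range values in a signed bit-field is implementation-defined), this program enumerates all four bit patterns of a two-bit signed field:
#include <stdio.h>

int main(void)
{
    struct { int i : 2; } s;
    int v;

    for (v = 0; v < 4; v++)
    {
        s.i = v; /* stores the low two bits of v (implementation-defined for 2 and 3) */
        printf("bits %d%d -> %d\n", (v >> 1) & 1, v & 1, s.i);
    }
    return 0;
}
Typical output with GCC: bits 00 -> 0, 01 -> 1, 10 -> -2, 11 -> -1.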
Example 2
#include <stdio.h>

union Example
{
    unsigned int i : 2;
    int j : 9;
};

int main(void)
{
    union Example ex;
    ex.j = 15;
    printf("%d", ex.i);
    return 0;
}
Output:
3
Observations:
The statement union Example ex; reserves 4 bytes of memory for the union Example object ex.
The statement ex.j=15; assigns 0x0000000F to those 4 bytes of memory, so the nine bits of j hold 0-0-0-0-0-1-1-1-1.
The statement printf("%d",ex.i); prints ex.i, which is 2 bits wide; those bits hold 1-1 from the previous statement.
Same as the example above, but here ex.i is of type unsigned int, so no sign bit is involved in its representation.
Whatever is stored in those 2 bits is read as a plain binary value, so the output is 3.
Hope this clears your doubt. Read up on one's and two's complement representation for more detail.

Related

"Bitwise assignment" in C? Assigning a variable's bits to another

Is it possible to do a bitwise assignment in C? (Assigning the bits of a variable to another, assuming for simplicity that the source and the target of assignment have the same number of bits.)
For example, assign the int 1 (which has bits 0...01) to a float variable, obtaining not the float number 1.0f but the number (assuming IEEE-754 representation and a 4-byte float, the same size as the int) with bits:
0 (sign) 0000'0000 (exponent) 0...01 (mantissa)
which would be a subnormal number (because the exponent bits are all 0s and the mantissa is not zero), hence representing the number
+2^-126 × 2^-23 (with a 23-bit mantissa, the pattern 0..(22 zeroes)..1 is 2^-23), that is 2^-149, approximately 1.4 × 10^-45.
NOTE: I'm in the process of learning. I am not trying to do this in a real-life scenario.
Given two objects a and b that are known to be the same size, you can copy the bits of b into a with memcpy(&a, &b, sizeof a);.
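For instance, a minimal sketch of that memcpy approach (assuming, as the question does, that int and float are both 4 bytes and floats are IEEE-754):
#include <stdio.h>
#include <string.h>

int main(void)
{
    int source = 1;
    float target;

    memcpy(&target, &source, sizeof target); /* copy the raw bits, not the value */
    printf("%g\n", target); /* prints about 1.4e-45, the subnormal described above */
    return 0;
}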
You could use a union for that:
int source;
float target;

union Data {
    int i;
    float f;
} data;

source = 42;
data.i = source;
target = data.f; // target should now have the bitwise equivalent of 42.
Be mindful of the sizes of the union members. The union is as large as its largest member, and all members start at the beginning of the union, so a smaller member simply overlaps the first bytes of a larger one; if you read more bytes than you last wrote, the extra bytes are unspecified, so check your implementation's documentation to be sure.
You can access the bits of other types via pointers (including memcpy, which takes them as arguments) or via unions. There is already another answer about the former, so I'll focus on the union approach.
Union members share the same memory, so you could use bit fields or integer types to access the individual bits, and then view the same bits by using a member of another type. However, note that both accessing the value of another type via a union and bit fields themselves are implementation defined, so this is inherently non-portable. In particular it is not specified how the bits end up being aligned in relation to other union members…
An example for the case of floats:
#include <stdio.h>

union float_bits {
    float value;
    struct {
        unsigned mantissa : 23;
        unsigned exponent : 8;
        unsigned sign : 1;
    };
    struct {
        unsigned bits : 32;
    };
};

static void print_float_bits(union float_bits f) {
    printf("(%c %02x %06x) (%08x) %f\n",
           f.sign ? '-' : '+',
           (unsigned) f.exponent, (unsigned) f.mantissa,
           (unsigned) f.bits, f.value);
}

int main(void) {
    union float_bits f;

    f.value = 1;
    print_float_bits(f);

    f.sign = 1;
    print_float_bits(f);

    // Largest normal number
    f.sign = 0; f.exponent = 0xFE; f.mantissa = 0x7FFFFF;
    print_float_bits(f);

    // Infinity
    f.exponent = 0xFF; f.mantissa = 0;
    print_float_bits(f);

    return 0;
}
On my x86-64 machine with 32-bit IEEE-754 floats, compiled with clang, this outputs:
(+ 7f 000000) (3f800000) 1.000000
(- 7f 000000) (bf800000) -1.000000
(+ fe 7fffff) (7f7fffff) 340282346638528859811704183484516925440.000000
(+ ff 000000) (7f800000) inf
Disclaimer: Very much implementation defined behaviour, non-portable and dangerous. Bitfields used for readability of the example. Other alternatives would be to put an array of char or some integer type like uint32_t in the union instead of bitfields, but it's still very much implementation defined behaviour.
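For completeness, a hedged sketch of that last alternative (aliasing the float with a uint32_t and pulling the fields out with shifts and masks; still implementation-defined type punning):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    union { float value; uint32_t bits; } f;

    f.value = 1;
    printf("sign %u exponent %02x mantissa %06x\n",
           (unsigned) (f.bits >> 31),          /* sign bit */
           (unsigned) ((f.bits >> 23) & 0xFF), /* 8 exponent bits */
           (unsigned) (f.bits & 0x7FFFFF));    /* 23 mantissa bits */
    return 0;
}
On a machine like the one above, this prints: sign 0 exponent 7f mantissa 000000.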

Converting types via unions doesn't work when using structs

I want to be able to "concat" bit sequences together, so that if I have the bits 00101 and 010 the result will be 00101010.
For this task I have written the following code:
#include <stdio.h>

typedef struct
{
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo;

typedef union
{
    foo bytes_in_one_form;
    long bytes_in_other_form : 16;
} bar;

int main()
{
    bar example;
    /* just in case the problem is that bytes_in_other_form wasn't initialized */
    example.bytes_in_other_form = 0;
    /* 001000 = 8, 000101 = 5, 1111 = 15 */
    example.bytes_in_one_form.bits0to5 = 8;
    example.bytes_in_one_form.bits5to11 = 5;
    example.bytes_in_one_form.bits11to15 = 15;
    /* should be 0010000001011111 = 8287 */
    /* NOTE: maybe the printf is wrong since it's only 2 bytes and of type long? */
    printf("%d", example.bytes_in_other_form);
    /* but the number printed is 1288 */
    return 0;
}
What have I done wrong? Unions should have all their members share the same memory, and both the struct and the long bit-field take up exactly 2 bytes.
Note:
For solutions that use an entirely different algorithm, please note that I need to keep the leading zeros (so, for example, 8 = 001000 and not 1000), and the solution should work for any number of bits in any distribution (although understanding what I did wrong in my current algorithm is preferable). I should also mention I use ANSI C.
Thanks!
This answer applies to the original question, which had:
typedef struct
{
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo;
Here's what's happening in your specific example (note that the results may vary from one platform to another):
The bit fields are being packed into char variables. If the next bit field doesn't fit into the current char, it skips to the next one. Additionally, you have little-endian addressing, so the char values appear right-to-left in the aliased long bit field.
So the layout of the structure fields is:
+--------+--------+--------+
|....cccc|..bbbbbb|..aaaaaa|
+--------+--------+--------+
Where aaaaaa is the first field, bbbbbb is the second field, cccc is the third field, and the . values are padding.
When storing your values, you have:
+--------+--------+--------+
|....1111|..000101|..001000|
+--------+--------+--------+
With zeroes in the pad bits, this becomes:
+--------+--------+--------+
|00001111|00000101|00001000|
+--------+--------+--------+
The other value in the union is aliased to the low-order 16 bits, so the value it picks up is:
+--------+--------+
|00000101|00001000|
+--------+--------+
This is 0x0508, which in decimal is 1288 as you saw.
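If you want to observe that layout directly, here is a small sketch (implementation-defined, like everything above; it assumes the original unsigned char bit-fields) that dumps the raw bytes of the union:
#include <stdio.h>

typedef struct
{
    unsigned char bits0to5 : 6;
    unsigned char bits5to11 : 6;
    unsigned char bits11to15 : 4;
} foo;

int main(void)
{
    union { foo f; unsigned char raw[sizeof(foo)]; } u = { { 8, 5, 15 } };
    size_t i;

    for (i = 0; i < sizeof(foo); i++)
        printf("%02x ", u.raw[i]); /* expect 08 05 0f with the layout above */
    printf("\n");
    return 0;
}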
If the structure instead uses unsigned long for the bit field types, then we have:
typedef struct
{
    unsigned long bits0to5 : 6;
    unsigned long bits5to11 : 6;
    unsigned long bits11to15 : 4;
} foo;
In this case, the fields are packed into an unsigned long as follows:
-----+--------+--------+
.....|11110001|01001000|
-----+--------+--------+
The low-order 16 bits are 0xf148, which is 61768 in decimal.
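Again implementation-defined, but a quick way to confirm this variant is to alias the struct with an unsigned long and print the low 16 bits:
#include <stdio.h>

typedef struct
{
    unsigned long bits0to5 : 6;
    unsigned long bits5to11 : 6;
    unsigned long bits11to15 : 4;
} foo;

int main(void)
{
    union { foo f; unsigned long raw; } u;

    u.raw = 0;
    u.f.bits0to5 = 8;
    u.f.bits5to11 = 5;
    u.f.bits11to15 = 15;
    printf("%#lx\n", u.raw & 0xFFFF); /* expect 0xf148 with the layout above */
    return 0;
}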

C - Method for setting all even-numbered bits to 1

I was charged with the task of writing a method that "returns the word with all even-numbered bits set to 1." Being completely new to C this seems really confusing and unclear. I don't understand how I can change the bits of a number with C. That seems like a very low level instruction, and I don't even know how I would do that in Java (my first language)! Can someone please help me! This is the method signature.
int evenBits(void) {
    return 0;
}
Any instruction on how to do this or even guidance on how to begin doing this would be greatly appreciated. Thank you so much!
Break it down into two problems.
(1) Given a variable, how do I set particular bits?
Hint: use a bitwise operator.
(2) How do I find out the representation of "all even-numbered bits" so I can use a bitwise operator to set them?
Hint: Use math. ;-) You could make a table (or find one) such as:
Decimal | Binary
--------+-------
0 | 0
1 | 1
2 | 10
3 | 11
... | ...
Once you know what operation to use to set particular bits, and you know a decimal (or hexadecimal) integer literal to use that with in C, you've solved the problem.
You must give a precise definition of all even-numbered bits. Bits are numbered in different ways on different architectures. Hardware people like to number them from 1 to 32, from the least significant to the most significant bit, or sometimes the other way, from the most significant to the least significant bit... while software guys like to number bits in increasing order starting at 0, because bit 0 represents the number 2^0, i.e. 1.
With this latter numbering system, the bit pattern would be 0101...0101, thus a value in hex of 0x555...555. If you number bits starting at 1 for the least significant bit, the pattern would be 1010...1010, in hex 0xAAA...AAA. But this representation actually encodes a negative value on current architectures.
I shall assume for the rest of this answer that even-numbered bits are those representing even powers of 2: 1 (2^0), 4 (2^2), 16 (2^4)...
The short answer for this problem is:
int evenBits(void) {
    return 0x55555555;
}
But what if int has 64 bits?
int evenBits(void) {
    return 0x5555555555555555;
}
This would handle a 64-bit int but has implementation-defined behavior on systems where int is smaller.
Using macros from <limits.h>, you could mask off the extra bits to handle 16-, 32- and 64-bit ints:
#include <limits.h>

int evenBits(void) {
    return 0x5555555555555555 & INT_MAX;
}
But this code still makes some assumptions:
int has at most 64 bits.
int has an even number of bits.
INT_MAX is a power of 2 minus 1.
These assumptions are valid for most current systems, but the C Standard allows for implementations where one or more are invalid.
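If you want to drop the first two assumptions, one possibility (a sketch, not part of the original answer) is to build the mask at run time; it still relies on INT_MAX having the usual 2^n - 1 form:
#include <limits.h>

int evenBits(void)
{
    unsigned mask = 0;
    unsigned bit = 1;

    while (bit != 0) /* the shift eventually pushes the bit off the top */
    {
        mask |= bit;
        bit <<= 2; /* step to the next even-numbered bit */
    }
    return (int)(mask & INT_MAX); /* clear the sign bit so the result fits in an int */
}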
So basically every other bit has to be set to one? This is why we have bitwise operations in C. Imagine a regular bit array. What you want is the right-most even bit, and to set it to 1 (this is the number 2). Then we just use the OR operator (|) to set that bit in our existing number. After doing that, we shift the 2 two places to the left (<< 2), which changes the bit pattern from 0010 to 1000. Then we OR again, and so on. The code below describes it better.
#include <stdio.h>

unsigned char SetAllEvenBitsToOne(unsigned char x);
int IsAllEvenBitsOne(unsigned char x);

int main()
{
    unsigned char x = 0; /* char is a one-byte data type, i.e. 8 bits */
    x = SetAllEvenBitsToOne(x);

    int check = IsAllEvenBitsOne(x);
    if (check == 1)
    {
        printf("shit works");
    }
    return 0;
}

unsigned char SetAllEvenBitsToOne(unsigned char x)
{
    int i = 0;
    unsigned char y = 2;
    for (i = 0; i < sizeof(char) * 8 / 2; i++)
    {
        x = x | y;
        y = y << 2;
    }
    return x;
}

int IsAllEvenBitsOne(unsigned char x)
{
    unsigned char y;
    for (int i = 0; i < (sizeof(char) * 8 / 2); i++)
    {
        y = x >> 7;
        if (y > 0)
        {
            printf("x before: %d\t", x);
            x = x << 2;
            printf("x after: %d\n", x);
            continue;
        }
        else
        {
            printf("Not all even bits are 1\n");
            return 0;
        }
    }
    printf("All even bits are 1\n");
    return 1;
}
Here is a link to Bitwise Operations in C

How can I split integers into bytes without using arithmetic in C?

I am implementing the four basic arithmetic functions (add, sub, division, multiplication) in C.
The basic structure of these functions, as I imagined it, is:
the program gets two operands from the user using scanf,
then splits these values into bytes and computes!
I've completed addition and subtraction,
but I forgot that I shouldn't use arithmetic operators,
so when splitting an integer into single bytes,
I wrote code like
while (quotient != 0) {
    bin[i] = quotient % 2;
    quotient = quotient / 2;
    i++;
}
but since these are arithmetic operators that I shouldn't use, I have to rewrite that splitting part, and I really have no idea how I can split an integer into single bytes without using % or /.
To access the bytes of a variable, type punning can be used.
According to the C Standard (C99 and C11), only unsigned char is guaranteed to make this operation safe.
It could be done in the following way:
typedef unsigned int myint_t;

myint_t x = 1234;

union {
    myint_t val;
    unsigned char byte[sizeof(myint_t)];
} u;
Now, you can of course access the bytes of x in this way:
u.val = x;
for (int j = 0; j < sizeof(myint_t); j++)
    printf("%d ", u.byte[j]);
However, as WhozCrag has pointed out, there are issues with endianness: it cannot be assumed that the bytes are in any particular order.
So, before doing any computation with bytes, your program needs to check how the endianness works.
#include <limits.h> /* to use UCHAR_MAX */

unsigned long int ByteFactor = 1u + UCHAR_MAX; /* 256 almost everywhere */

u.val = 0;
for (int j = sizeof(myint_t) - 1; j >= 0; j--)
    u.val = u.val * ByteFactor + j;
Now, when you print the values of u.byte[], you will see the order in which the bytes are arranged for the type myint_t.
The least significant byte will have the value 0.
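Putting those fragments together, a complete runnable sketch (the names myint_t and ByteFactor come from the snippets above):
#include <stdio.h>
#include <limits.h>

typedef unsigned int myint_t;

int main(void)
{
    union {
        myint_t val;
        unsigned char byte[sizeof(myint_t)];
    } u;
    unsigned long int ByteFactor = 1u + UCHAR_MAX;
    int j;

    /* give each byte a value equal to its significance: 0 = least significant */
    u.val = 0;
    for (j = sizeof(myint_t) - 1; j >= 0; j--)
        u.val = u.val * ByteFactor + (myint_t) j;

    for (j = 0; j < (int) sizeof(myint_t); j++)
        printf("%d ", u.byte[j]); /* prints 0 1 2 3 on a little-endian machine */
    printf("\n");
    return 0;
}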
I assume 32-bit integers (if that's not the case, just change the sizes). There are several approaches:
BYTE pointer
#include <stdio.h>

typedef unsigned char BYTE; /* see the notes below */

int main()
{
    int x; /* your integer or whatever else data type */
    BYTE *p = (BYTE*)&x;
    x = 0x11223344;
    printf("%x\n", p[0]);
    printf("%x\n", p[1]);
    printf("%x\n", p[2]);
    printf("%x\n", p[3]);
    return 0;
}
Just get the address of your data as a BYTE pointer and access the bytes directly as a 1D array.
union
#include <stdio.h>

typedef unsigned char BYTE;

union
{
    int x; /* your integer or whatever else data type */
    BYTE p[4];
} a;

int main()
{
    a.x = 0x11223344;
    printf("%x\n", a.p[0]);
    printf("%x\n", a.p[1]);
    printf("%x\n", a.p[2]);
    printf("%x\n", a.p[3]);
    return 0;
}
Again, access the bytes directly as a 1D array.
[notes]
If you do not have BYTE defined, then change it to unsigned char (as the typedefs above do).
With the ALU you can use not only %, / but also >>, &, which is much faster, but that still counts as arithmetic.
Depending on the platform endianness, the output can be 11,22,33,44 or 44,33,22,11, so you need to take that into account (especially for code used on multiple platforms).
You also need to handle the sign of the number; for unsigned integers there is no problem, but for signed ones C uses two's complement, so it is better to separate the sign before splitting, like:
int s;
if (x < 0) { s = -1; x = -x; } else s = +1;
/* now split ... */
[edit2] logical/bit operations
x << n, x >> n - shift x left or right by n bits
x & y - bitwise AND (performs logical AND on each pair of bits separately)
So when you have, for example, a 32-bit unsigned int (called DWORD), you can split it into BYTEs like this:
DWORD x;             /* input 32-bit unsigned int */
BYTE a0, a1, a2, a3; /* output BYTEs; a0 is the least significant, a3 the most significant */

x = 0x11223344;
a0 = (BYTE)((x      ) & 255); /* should be 0x44 */
a1 = (BYTE)((x >>  8) & 255); /* should be 0x33 */
a2 = (BYTE)((x >> 16) & 255); /* should be 0x22 */
a3 = (BYTE)((x >> 24) & 255); /* should be 0x11 */
This approach is not affected by endianness, but it uses the ALU.
The point is to shift the bits you want into positions 0..7 and mask out the rest.
The &255 and the explicit (BYTE) cast are not needed on all compilers, but some do weird stuff without them, especially with signed variables like char or int.
x >> n is the same as x / pow(2,n) = x / (1 << n)
x & ((1 << n) - 1) is the same as x % pow(2,n) = x % (1 << n)
so (x >> 8) = x / 256 and (x & 255) = x % 256
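A quick check of those identities for an unsigned value:
#include <stdio.h>

int main(void)
{
    unsigned x = 0x11223344;

    printf("%x %x\n", x >> 8, x / 256);  /* 112233 112233 */
    printf("%x %x\n", x & 255, x % 256); /* 44 44 */
    return 0;
}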

Size of structure with bit fields

Here I have a code snippet.
#include <stdio.h>

int main()
{
    struct value
    {
        int bit1 : 1;
        int bit2 : 4;
        int bit3 : 4;
    } bit;

    printf("%zu", sizeof(bit));
    return 0;
}
I'm getting the output as 4 (on a 32-bit compiler).
Can anyone explain how? Why is it not 1 + 4 + 4 = 9?
I've never worked with bit fields before so would love some help. Thank you. :)
When you tell the C compiler this:
int bit1 : 1
it interprets it as, and allocates for it, an integer, but refers to its first bit as bit1.
So if we consider your code:
struct value
{
    int bit1 : 1;
    int bit2 : 4;
    int bit3 : 4;
} bit;
What you are telling the compiler is this: take the necessary number of ints, and refer to bit 1 of that storage as bit1, then refer to bits 2 - 5 as bit2, and then refer to bits 6 - 9 as bit3.
Since the total number of bits required is 9, and an int is 32 bits (in your computer's architecture), the memory space of only 1 int is required. Thus you get the size as 4 (bytes).
Instead, if you were to define the struct using chars, since char is 8 bits, the compiler would allocate the memory space of two chars for each struct value. And you will get 2 (bytes) as your output.
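For example, a sketch of that char variant (the struct name value_char is just for illustration, and bit-fields with char types are implementation-defined, so the 2 is typical rather than guaranteed):
#include <stdio.h>

int main()
{
    struct value_char
    {
        unsigned char bit1 : 1;
        unsigned char bit2 : 4;
        unsigned char bit3 : 4;
    } bit;

    /* 1 + 4 bits fit in the first char; the next 4-bit field does not fit in
       the remaining 3 bits, so it starts a second char */
    printf("%zu", sizeof(bit));
    return 0;
}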
Because C requires the bits to be packed into the same unit (here one signed int / unsigned int):
(C99, 6.7.2.1p10) "If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit"
The processor just likes chucking around 32 bits in one go - not 9, 34 etc.
It just rounds it up to what the processor likes. (Keep the worker happy)
