Confused about AND bit masking in C

I am playing around with bit masking and I had thought I understood bitwise math...apparently not.
#include <stdio.h>

int main()
{
    printf("%08x", 0x01111111 & 0xF0F0F0F);
    /*
     * 0000 0001 0001 0001 0001 0001 0001 0001
     * 1111 0000 1111 0000 1111 0000 1111
     * -----------------------------
     * 0000 0000 0001 0000 0001 0000 0001
     */
    return 0;
}
Above is a code snippet of a simple bit mask using 0xF0F0F0F, which I intended to turn off every other nibble.
I know that 0x01111111 converted to binary is 0000 0001 0001 0001 0001 0001 0001 0001. If I AND this against 1111 0000 1111 0000 1111 0000 1111, I would expect the output to be 0000 0000 0001 0000 0001 0000 0001. However, running this program gives a result I didn't expect: 01010101. It would appear the leading 0 in the MSB position is disregarded?
I'm sorry if this is trivial, I'm sure it is. But I am confused by this as I am not sure how this result is given.

0xF0F0F0F is really 0x0F0F0F0F. When you don't type "enough" digits to fill the whole integer, leading zeros are implied (e.g. if you just type 0x1, the internal representation is 0x00000001 for 32-bit integers).
So for your code it's
/*
* 0000 0001 0001 0001 0001 0001 0001 0001 (binary)
* 0000 1111 0000 1111 0000 1111 0000 1111 (binary)
* ---------------------------------------
* 0000 0001 0000 0001 0000 0001 0000 0001 (binary)
*/
and when printed as hex, you get 01010101
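A quick sanity check you can compile yourself (a minimal sketch, assuming the usual 32-bit int):

#include <stdio.h>

int main(void)
{
    /* The two spellings denote the same value: leading zeros are implied. */
    printf("equal:  %d\n", 0xF0F0F0F == 0x0F0F0F0F);     /* prints 1 */

    /* %08x pads to 8 hex digits, so all 32 bits are visible. */
    printf("mask:   %08x\n", 0xF0F0F0F);                 /* 0f0f0f0f */
    printf("result: %08x\n", 0x01111111 & 0xF0F0F0F);    /* 01010101 */
    return 0;
}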

This is what's happening:
/*
 * 0000 0001 0001 0001 0001 0001 0001 0001
 * 0000 1111 0000 1111 0000 1111 0000 1111
 * ---------------------------------------
 * 0000 0001 0000 0001 0000 0001 0000 0001
 */
0xF0F0F0F has implied zeros at the beginning: a hexadecimal literal that doesn't fill the full width of the type is padded with leading zeros. So in 0x1, the 1 is the least significant hex digit.

Related

Using Logical Operators On Integers [closed]

I recently saw a thread on this site about using logical operators on integers in C. However, I didn't really understand how this works, so here are two examples; I hope someone can provide a detailed explanation for them:
int result = 1000 & 255;
int result = 1000 || 255;
Decimal 1000 is binary 11 1110 1000. Decimal 255 is binary 1111 1111. Both constants have type signed int, which is usually 32 bits wide.
Taking & of them sets the bit at all positions where both bits of the operands are set:
0000 0000 0000 0000 0000 0011 1110 1000
0000 0000 0000 0000 0000 0000 1111 1111
& ---------------------------------------
0000 0000 0000 0000 0000 0000 1110 1000
This is decimal 232. Taking | would have set the bit at all positions where at least one bit is set, i.e. would have produced binary 11 1111 1111, which is decimal 1023. Taking ^ would have set the bit at all positions where exactly one of the bits is set, i.e.
0000 0000 0000 0000 0000 0011 1110 1000
0000 0000 0000 0000 0000 0000 1111 1111
^ ---------------------------------------
0000 0000 0000 0000 0000 0011 0001 0111
&& is not a bitwise operation. It simply returns 1 if and only if both operands are non-zero, and 0 otherwise. || returns 1 if and only if at least one of the operands is non-zero, and 0 otherwise. So 1000 || 255 evaluates to 1.
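A small test program makes the contrast concrete (a minimal sketch; the printed values assume the usual 32-bit int):

#include <stdio.h>

int main(void)
{
    printf("1000 &  255 = %d\n", 1000 & 255);    /* bitwise AND -> 232  */
    printf("1000 |  255 = %d\n", 1000 | 255);    /* bitwise OR  -> 1023 */
    printf("1000 ^  255 = %d\n", 1000 ^ 255);    /* bitwise XOR -> 791  */
    printf("1000 && 255 = %d\n", 1000 && 255);   /* logical AND -> 1    */
    printf("1000 || 255 = %d\n", 1000 || 255);   /* logical OR  -> 1    */
    return 0;
}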

Why (3 & 0x1111) = 1?

I do not see why 3 & 0x1111 = 1. It seems that for any unsigned 32-bit integer i, i & 0x1111 should be i, right?
However, when I tried this on Ubuntu 14.04, I got 3 & 0x1111 = 1. Why?
#include <stdio.h>

int main() {
    unsigned int a = 3;
    printf("size of a = %zu\n", sizeof(a));
    printf("value of 3 & 0x1111 = %u\n", a & 0x1111);
    return 0;
}
Convert both of them to binary:
0x1111 = 0001 0001 0001 0001
3 = 0000 0000 0000 0011
When you & them bit by bit, what else do you expect?
In C, any numeric literal starting with 0x is a hexadecimal number. So the bitmask you are using is 1111 in hexadecimal. In the mask, bits #0, #4, #8 and #12 are 1s, and the rest are 0s. That's why you're getting 1.
0x1111 = 0000 0000 0000 0000 0001 0001 0001 0001 in binary
     3 = 0000 0000 0000 0000 0000 0000 0000 0011 in binary
         ------------------------------------------------
     1 = 0000 0000 0000 0000 0000 0000 0000 0001 after doing bitwise AND
If you want to construct a mask with all 1s, in hex, it should be
0xffffffff = 1111 1111 1111 1111 1111 1111 1111 1111
3d = 3h = 11b
1111h = 0001000100010001b
so:
  0001000100010001b
&               11b
-------------------
                 1b
0x1111 is 0001000100010001 in binary. So 0x1111 & 3 is 0001000100010001 & 0000000000000011 = 0000000000000001
0x1111 is 4369, or as binary: 0001000100010001
So, 3 (0011) masked against that is going to be 0001.
Similarly, 19 (0001 0011) would be 17 (0001 0001).
The & operator applies a bitwise AND. The 0x prefix means hexadecimal, not binary, so if we write 0x1111 out in binary we get:
0001 0001 0001 0001
3 in binary is 0011, and
  0001 0001 0001 0001
& 0000 0000 0000 0011
= 0000 0000 0000 0001 = 1
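A short sketch comparing the mask as typed with an all-ones mask (assuming a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 3;

    /* 0x1111 is hexadecimal: only bits 0, 4, 8 and 12 are set. */
    printf("3 & 0x1111     = %u\n", a & 0x1111u);      /* 1 */

    /* An all-ones mask leaves every bit of a 32-bit value unchanged. */
    printf("3 & 0xFFFFFFFF = %u\n", a & 0xFFFFFFFFu);  /* 3 */
    return 0;
}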

Explanation of rotate left in C

I need help understanding the C language. I just started out.
I have this piece of code from Wikipedia:
unsigned int rotl(unsigned int value, int shift) {
    return (value << shift) | (value >> (sizeof(value) * CHAR_BIT - shift));
}
I do understand what rotation of bits means. I just don't understand this implementation.
Why do I have to perform the OR operator here? And what does the right part actually do?
I shift value to the right for the number of bytes of value times (the number of bits in a char variable minus the shift I want). Why do I have to do this?
If I think of an example.
I want to shift unsigned 1011 (Base 2) 2 Bits to the left.
I do what the code says:
0000 0000 0000 0000 0000 0000 0000 1011 << 2 = 0000 0000 0000 0000 0000 0000 0010 1100
1011 >> (4*(8-2))=24 = 0000 0000 0000 0000 0000 0000 0000 0000;
perform |: = 0000 0000 0000 0000 0000 0000 0010 1100.
Ok that did not work. What am I doing wrong?
Thanks!
Here is a graphical definition of an 8-bit 'Shift Left' and 'Rotate Left': in a plain shift, the bits that fall off the left edge are discarded and zeros are shifted in on the right, whereas in a rotate those bits wrap around to the right end.
"Why do I have to perform the OR operator here?"
"And what does the right part actually do?"
For a 'rotate left' operation, the bits that "fall off" the left side are recycled, as they are 'OR'ed back into the right side.
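Two things are worth adding to the worked example in the question. First, the right shift is by sizeof(value) * CHAR_BIT - shift, which for a typical 32-bit unsigned int and shift = 2 is 4 * 8 - 2 = 30, not 4 * (8 - 2) = 24. Second, with the value 1011 every bit that falls off the left is a zero, so the OR has nothing visible to recycle; the rotation only shows up when high bits are set. A minimal sketch, assuming a 32-bit unsigned int and 1 <= shift <= 31 (shift = 0 or shift >= the width makes one of the shifts undefined):

#include <limits.h>   /* CHAR_BIT */
#include <stdio.h>

/* The rotate-left from the question, unchanged. */
unsigned int rotl(unsigned int value, int shift) {
    return (value << shift) | (value >> (sizeof(value) * CHAR_BIT - shift));
}

int main(void)
{
    /* 0x8000000B = 1000 0000 ... 0000 1011. Rotating left by 2 drops the */
    /* top bits "10" off the left and ORs them back in on the right,      */
    /* giving 0x0000002e.                                                 */
    printf("%08x\n", rotl(0x8000000Bu, 2));
    return 0;
}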

Encoding A Decimal Value Into a Fixed Number of Bits

There is a heated, ongoing disagreement between myself and someone more senior that I need to resolve. Thus, I turn to you internets. Don't fail me now!
The objective is to take a decimal value and encode it into 24 bits. It's a simple linear scale so that 0x000000 is the min value and 0xFFFFFF is the max value.
We both agree on the basic formula for achieving this: (max - min) / range. The issue is the denominator, range. The other party says it should be 1 << 24 (one shifted left by 24 bits), which yields 16777216. I argue (and have seen this done previously) that the denominator should be 0xFFFFFF, or 16777215.
Who is correct?
The denominator should definitely be 16777215, as you described. 2^24 is 16777216, but that number cannot be represented in 24 bits. The max is 2^24 - 1 (16777215), or 0xFFFFFF as you say.
I'd second @Tejolote's answer, since shifting a 1 left by 0 to 24 places gives you values in the range 1..16777216.
(32-bit number)
0000 0000 0000 0000 0000 0000 0000 0001 // (1 << 0)
0000 0001 0000 0000 0000 0000 0000 0000 // (1 << 24)
If you were to get a bitmask of those 24 bits, you would get a range from 1 to 0 (probably not what you intended):
(mask to a 24-bit number)
  0000 0000 0000 0000 0000 0000 0000 0001 // (1 << 0)
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
=========================================
  0000 0000 0000 0000 0000 0000 0000 0001 // result of '1', correct
and
  0000 0001 0000 0000 0000 0000 0000 0000 // (1 << 24)
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
=========================================
  0000 0000 0000 0000 0000 0000 0000 0000 // result of '0', wrong
What you want instead is a range from 0 to 16777215:
  0000 0000 0000 0000 0000 0000 0000 0000 // (1 << 0) - 1
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
=========================================
  0000 0000 0000 0000 0000 0000 0000 0000 // result of '0', correct
and
  0000 0000 1111 1111 1111 1111 1111 1111 // (1 << 24) - 1
& 0000 0000 1111 1111 1111 1111 1111 1111 // mask
=========================================
  0000 0000 1111 1111 1111 1111 1111 1111 // result of '16777215', correct
OP "So let's say that I'm encoding speed for a car. 0.0 mph would be 0x000000 and 150.0mph would be represented by 0xFFFFFF. It's a simple linear scale from there."
Yes 16777215 = 0xFFFFFF - 0x000000
0.0 --> 0x000000
150.0 --> 0xFFFFFF
y = dy/dx(x - x0) + y0 = (0xFFFFFF - 0x000000)/(150.0 - 0.0)*(x - 0.0) + 0x000000
But perhaps the senior was thinking that the decimal value on the upper end represents a speed one could approach, but never attain:
0.0 --> 0x000000
150.0 --> 0xFFFFFF + 1
16777216 = 0xFFFFFF + 1 - 0x000000
I'd recommend buying your senior a brew. Learn from them - they cheat.
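For what it's worth, here is a small sketch of the 0xFFFFFF convention using the car-speed example (the function name encode24 and the rounding are illustrative choices, not from the original posts). With the 1 << 24 convention you would multiply by 16777216 instead and then have to clamp, because the maximum value itself would land on 0x1000000, which no longer fits in 24 bits.

#include <inttypes.h>
#include <stdio.h>

/* Map value in [min, max] linearly onto [0x000000, 0xFFFFFF];
 * max itself encodes to 0xFFFFFF. */
static uint32_t encode24(double value, double min, double max) {
    return (uint32_t)((value - min) / (max - min) * 0xFFFFFF + 0.5);
}

int main(void)
{
    printf("%06" PRIx32 "\n", encode24(  0.0, 0.0, 150.0));  /* 000000 */
    printf("%06" PRIx32 "\n", encode24( 75.0, 0.0, 150.0));  /* 800000 */
    printf("%06" PRIx32 "\n", encode24(150.0, 0.0, 150.0));  /* ffffff */
    return 0;
}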

How can I get the least significant byte of a variable in C

I'm trying to write a C function that will print a word consisting of the least significant byte of x and the remaining bytes of y. For example, if x = 0x89ABCDEF and y = 0x76543210, this should give 0x765432EF. But how?
In order to manipulate specific bits (or bytes) within a data type, you should use the bit-wise operators.
The basic bit-wise operators are | (or), & (and), ^ (exclusive or), and ~ (NOT or complement), and they work very differently from the logical operators ||, &&, and !.
Using your variables x = 0x89ABCDEF and y = 0x76543210, let's step through a solution:
First, these are the values of x and y in binary:
x = 1000 1001 1010 1011 1100 1101 1110 1111
y = 0111 0110 0101 0100 0011 0010 0001 0000
I've split the 32 bits up into groups of 4 to see how hex translates to binary.
Now, we need to unset all but the last byte in x: the operation (x & 0xFF)
x = 1000 1001 1010 1011 1100 1101 1110 1111
0xFF = 0000 0000 0000 0000 0000 0000 1111 1111
--------------------------------------------------
x & 0xFF = 0000 0000 0000 0000 0000 0000 1110 1111
And we need to unset just the last byte of y: the operation (y & ~0xFF)
y = 0111 0110 0101 0100 0011 0010 0001 0000
~0xFF = 1111 1111 1111 1111 1111 1111 0000 0000 (~ flips all bits)
-----------------------------------------------------
(y & ~0xFF) = 0111 0110 0101 0100 0011 0010 0000 0000
Now, combine our results using the "or" operation: (x & 0xFF) | (y & ~0xFF)
(x & 0xFF) = 0000 0000 0000 0000 0000 0000 1110 1111
(y & ~0xFF) = 0111 0110 0101 0100 0011 0010 0000 0000
------------------------------------------------------------------
(x & 0xFF) | (y & ~0xFF) = 0111 0110 0101 0100 0011 0010 1110 1111
Which in hex is 0x765432EF.
Take some time to get familiar with these operations, and you shouldn't have any trouble. Also, make sure to learn other bitwise operators (<<, >>, &=, |=, etc.)
Have you tried this?
static inline uint32_t combine(uint32_t x, uint32_t y) {
    return (y & 0xffffff00) | (x & 0xff);
}
Here's a complete example:
#include <inttypes.h>
#include <stdio.h>

static inline uint32_t combine(uint32_t x, uint32_t y) {
    return (y & 0xffffff00) | (x & 0xff);
}

int main(void) {
    printf("%" PRIx32 "\n", combine(0x89abcdef, 0x76543210));  /* prints 765432ef */
    return 0;
}

Resources