What does if (c >> a & 1) mean?

I'm trying to understand what this condition means.
Does it mean that the value will be equal to 1 after shifting?
In other words, is it the same as if (c >> a is 1)?
Note: c >> a & 1 is the same as (c >> a) & 1.

Bitwise AND operates on bits, so the possibilities are:
1101 & 0001 => 0001
0001 & 0001 => 0001
1010 & 0001 => 0000
0000 & 0001 => 0000
Now, in C, anything that's not zero is treated as true, so the statement means "if, after shifting, the least significant bit is 1", or perhaps "if, after shifting, the value is odd" if you're doing an odd/even test.
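To see the difference concretely, here is a minimal sketch (the values are made up for illustration) showing that the condition tests a single bit rather than comparing the whole shifted value to 1:
#include <stdio.h>

int main(void)
{
    unsigned int c = 6;  /* binary 110 */
    unsigned int a = 1;

    if (c >> a & 1)                 /* c >> a is 3 (binary 11); its lowest bit is 1 */
        printf("bit %u of %u is set\n", a, c);

    if ((c >> a) == 1)              /* false: c >> a is 3, not 1 */
        printf("this is not printed\n");

    return 0;
}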

It executes the following statement or block if bit a of value c is true.
pos           a+1   a  a-1         1   0
     ... --+---+---+---+-- ... -+---+---+
           | z | y | x |        | q | p |
     ... --+---+---+---+-- ... -+---+---+

                      ... -+---+---+
c >> a                     | z | y |
                      ... -+---+---+

                      ... -+---+---+
(c >> a) & 1               | 0 | y |
                      ... -+---+---+

>> has higher operator precedence than &.
So c >> a & 1 means "shift the value c right by a bits, then check whether the lowest bit of the shifted value is set". Singling out certain bits like this is known as bit masking, and 1 is the mask in this case.
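As a sketch of the same idea, the test could be wrapped in a tiny helper (the name get_bit is just an assumption for illustration, not anything from the question):
/* Returns 1 if bit 'a' of 'c' is set, 0 otherwise. */
static int get_bit(unsigned int c, unsigned int a)
{
    return (c >> a) & 1;  /* shift the wanted bit down to position 0, mask the rest off */
}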

Related

Using Bitwise Operators in C [duplicate]

I am a beginner to the C language. I have code for the Towers of Hanoi, but can someone explain what these bitwise operators are doing? For example, if the value of i is 1, what will the source and target values be?
source = (i & i-1) % 3;
target = ((i | i-1) + 1) % 3;
i & i-1 turns off the lowest set bit in i (if there are any set). For example, consider i=200:
200 in binary is 1100 1000. (The space is inserted for visual convenience.)
To subtract one, the zeros cause us to “borrow” from the next position until we reach a one, producing 1100 0111. Note that, working from the right, all the zeros became ones, and the first one became a zero.
The & produces the bits that are set in both operands. Since i-1 changed all the bits up to and including the lowest set bit, none of those changed bits is set in both i and i-1, so they are all clear in i & i-1. The higher set bits of i, above the lowest one bit, are the same in both i and i-1, so they remain ones in i & i-1. The result of i & i-1 is 1100 0000.
1100 0000 is 1100 1000 with the lowest set bit turned off.
Then the % 3 is selecting which pole in Towers of Hanoi to use as the source. This is discussed in this question.
Similarly i | i-1 turns on all the low zeros in i, all the zeros up to the lowest one bit. Then (i | i-1) + 1 adds one to that. The result is the same as adding 1 at the position of the lowest set bit of i; that is, the result is i + x, where x is the value of the lowest bit set in i. Using our example value:
i is 1100 1000 and i-1 is 1100 0111.
i | i-1 is 1100 1111.
(i | i-1) + 1 is 1101 0000, which equals 1100 1000 + 0000 1000.
And again, the % 3 selects a pole.
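To get a feel for it, here is a small sketch (not part of the original question, purely illustrative) that prints the source and target poles produced by those two expressions for the first few moves:
#include <stdio.h>

int main(void)
{
    /* Pole pair for moves 1..7 of a 3-disc puzzle, using the expressions from the question. */
    for (unsigned int i = 1; i <= 7; i++) {
        unsigned int source = (i & (i - 1)) % 3;
        unsigned int target = ((i | (i - 1)) + 1) % 3;
        printf("move %u: pole %u -> pole %u\n", i, source, target);
    }
    return 0;
}
For i = 1 this prints pole 0 -> pole 2, matching the step-by-step evaluation below.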
A quick overview of bitwise operators:
Each binary operator takes the corresponding bits of the two numbers and applies the operation to each pair of bits; the unary ~ does the same with the bits of a single number.
& Bitwise AND
True only if both bits are true.
Truth table:
A | B | A & B
-------------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
| Bitwise OR
True if either bit is true.
Truth table:
A | B | A | B
-------------
0 | 0 | 0
1 | 0 | 1
0 | 1 | 1
1 | 1 | 1
^ Bitwise XOR
True if exactly one bit is true.
Truth table:
A | B | A ^ B
-------------
0 | 0 | 0
1 | 0 | 1
0 | 1 | 1
1 | 1 | 0
~ Bitwise NOT
Inverts each bit. 1 -> 0, 0 -> 1. This is a unary operator.
Truth table:
A | ~A
------
0 | 1
1 | 0
In your case, if i = 1,
the expressions would be evaluated as:
source = (1 & 1-1) % 3;
target = ((1 | 1-1) + 1) % 3;
// =>
source = (1 & 0) % 3;
target = ((1 | 0) + 1) % 3;
// =>
source = 0 % 3;
target = (1 + 1) % 3;
// =>
source = 0;
target = 2 % 3;
// =>
source = 0;
target = 2;
Good answer above, here is a high-level approach:
i == 1:
source: (1 & 0) % 3. Are both bits set? No, so the AND result is 0, and 0 % 3 = 0.
target: ((1 | 0) + 1) % 3.
(1 | 0) evaluates to 1 (true) since at least one of the two operands of | is 1, so now we have (1 + 1) % 3, which gives 2 % 3 = 2.
Source: 0, target: 2

Why must I use the ~ operator when clearing a bit? [duplicate]

For example, if I want to set a bit in y at position n (in C)
y = y | (1 << n)
But if I want to clear a bit in y at position n, I have to use the ~ operator together with binary AND.
y = y & ~(1 << n);
My question: why must I use the ~ operator?
Is it because the result would otherwise end up in the negative range?
If you want to set the bit at the third place from the right:
Y : 01001000
1 << 2 : 00000100
Y | (1 << 2) : 01001100 The | is OR, bits are set to 1 if any is 1.
If you want to remove the bit :
1 << 2 : 00000100
~(1 << 2) : 11111011 The ~ is NOT, bits are inverted
Y : 01001100
Y & ~(1 << 2) : 01001000 The & is AND, bits are set to 1 if both are 1.
I suggest you read more about bitwise operators.
No, ~ has nothing to do with interpreting the number as negative: the tilde ~ operator treats the number as a pattern of bits, which it then inverts (i.e. replaces zeros with ones and ones with zeros). In fact, if you apply ~ to an unsigned value, the result stays non-negative.
Recall that 1 << k expression produces a pattern of all zeros and a single 1 at the position designated by k. This is a bit mask that can be used to force bit at position k to 1 by applying OR operation.
Now consider what happens when you apply ~ to it: all 0s would become 1s, and the only 1 would become zero. Hence, the result is a bit mask suitable for forcing a single bit to zero by applying AND operation.
The ~ operator turns all of the 0s to 1s and all of the 1s to 0s. In order to clear the bit in position n, you want to AND the value with a mask of all ones except for a zero in the nth position, so you shift a one to the nth position and use ~ to invert all the bits.
1 << n for n==3 (just an example) gives you the pattern 0000000...0001000. ~ inverts the bit pattern to 11111111....11110111. Applying the bitwise AND operator (&) then sets only the required bit to 0; all other bits keep their value, using the fact that for a bit b: b & 1 == b.
~ flips all bits; it has nothing to do with negative numbers.
A graphical representation for a sequence of k bits:

pos        k-1  k-2                  1    0
          +---+---+-------------+---+---+
1         | 0 | 0 |     ···     | 0 | 1 |
          +---+---+-------------+---+---+

pos        k-1  k-2         n   n-1      0
          +---+---+-----+---+---+-----+---+
1<<n      | 0 | 0 | ··· | 1 | 0 | ··· | 0 |
          +---+---+-----+---+---+-----+---+

pos        k-1  k-2         n   n-1      0
          +---+---+-----+---+---+-----+---+
~(1<<n)   | 1 | 1 | ··· | 0 | 1 | ··· | 1 |
          +---+---+-----+---+---+-----+---+
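Putting the two masking patterns side by side, a minimal sketch of set/clear helpers (the function names are mine, not from the question) might look like:
/* Minimal sketch; helper names are just for illustration. */
static unsigned int set_bit(unsigned int y, unsigned int n)
{
    return y | (1u << n);   /* force bit n to 1 */
}

static unsigned int clear_bit(unsigned int y, unsigned int n)
{
    return y & ~(1u << n);  /* force bit n to 0 */
}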

Understanding this symbol in C

What does the & mean in this code:
(number >> 9) & 0b111
I know about & in terms of pointers, but I'm not sure how it works in the code above.
Let's break it down:
(number >> 9) & 0b111
    |   |  |  |   |
    |   |  |  |   Binary '7'*
    |   |  |  Binary AND
    |   |  Number to shift by
    |   Binary shift operator
    Variable
We'll start with the expression in the parentheses:
(number >> 9)
This performs a binary right-shift by 9 places. For example:
1101101010010011 will be shifted to become:
0000000001101101
The & symbol is binary AND. Where a bit is 1 in both of the source operands, the returned value will have that bit set:
01101
& 11010
= 01000
So your code shifts your number right by 9 places and ANDs the result with 0b111. As the three least significant bits are all set in the second operand, the result of this operation is whatever is in the bottom three bits of the shifted input.
Example:
number = 1101101010010011
number >> 9 = 0000000001101101
(number >> 9) & 0b111 = 0000000000000101
An alternate way of thinking about it is as follows: the line extracts bits 9 through 11 (counting from bit 0 as the least significant) and returns them as the result.
XXXXbbbXXXXXXXXX -> bbb
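As a sketch (the function name is an assumption, not from the question), the shift-and-mask idiom for pulling out that 3-bit field could be wrapped like this:
/* Extract the 3-bit field at bit positions 9..11 of 'number' (bit 0 is the least significant). */
static unsigned int extract_field(unsigned int number)
{
    return (number >> 9) & 0x7;  /* 0x7 is the same mask as 0b111 */
}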
A common use for this is to apply a mask to a value to extract the bits. E.g. some libraries allow you to pass parameters with enumerable types like this:
set_params(option_a | option_b);
which sets both option_a and option_b.
Whether a parameter is set can be read by:
void set_params(unsigned int params)
{
    if (params & option_a)
    {
        /* do option_a stuff */
    }
}
*assuming your compiler supports binary constants as an extension to the C spec; otherwise you could use 0x7 (hex 7) or just 7.
It is the bitwise AND operator.
More info here:
Wikipedia link
& is Bitwise AND
The C operators are here:
https://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B

IP Address left shift representation

I found the following in some old and badly documented C code:
#define addr (((((147 << 8) | 87) << 8) | 117) << 8) | 107
What is it? Well I know it's an IP address - and shifting 8 bits to the left makes some sense too. But can anyone explain this to me as a whole? What is happening there?
Thank you!
The code
(((((147 << 8) | 87) << 8) | 117) << 8) | 107
generates 4 bytes containing the IP 147.87.117.107.
The first step is the innermost bracket:
147<<8
147 = 1001 0011
1001 0011 << 8 = 1001 0011 0000 0000
The second byte, 87, is inserted by a bitwise OR with (147<<8). As you can see, the 8 bits on the right are all 0 (due to <<8), so the bitwise OR just inserts the 8 bits of 87:
1001 0011 0000 0000 (147<<8)
0000 0000 0101 0111 (87)
------------------- bitwise-or
1001 0011 0101 0111 (147<<8)|87
The same is done with the rest, so at the end you have 4 bytes packed into a single 32-bit integer.
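A small sketch (variable names are mine) showing the packed value and how the four octets can be read back out again by shifting and masking:
#include <stdio.h>

int main(void)
{
    /* The 'u' suffixes keep the arithmetic unsigned so the last shift cannot overflow a signed int. */
    unsigned int addr = (((((147u << 8) | 87u) << 8) | 117u) << 8) | 107u;

    printf("%u.%u.%u.%u\n",
           (addr >> 24) & 0xFFu,   /* 147 */
           (addr >> 16) & 0xFFu,   /*  87 */
           (addr >>  8) & 0xFFu,   /* 117 */
            addr        & 0xFFu);  /* 107 */
    return 0;
}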
An IPv4 address consists of four bytes, which means it can be stored in a 32-bit integer. This is taking the four parts of the IP address (147.87.117.107) and using bit-shifting and the bit-wise OR operator to "encode" the address in a single 4-byte quantity.
(Note: the address might be 107.117.87.147 - I can't remember offhand what order the bytes are stored in.)
The (hex) bytes of the resulting quantity look like:
aa bb cc dd
where aa is the hex representation of 147 (0x93), bb is 87 (0x57), cc is 117 (0x75), and dd is 107 (0x6b), so the resulting value is 0x9357756b.
Update: None of this applies to IPv6, since an IPv6 address is 128 bits instead of 32.

How to think of bit operations for simple operations?

for example:
unsigned int a; // value to merge in non-masked bits
unsigned int b; // value to merge in masked bits
unsigned int mask; // 1 where bits from b should be selected; 0 where from a.
unsigned int r; // result of (a & ~mask) | (b & mask) goes here
r = a ^ ((a ^ b) & mask);
merges bits from two values according to the mask.
[taken from here]
In this case, I can see that it works, but I am not sure what the logic is, and I am not sure I could create my own bit operations like this from scratch. How do I start thinking in bits?
Pencil and paper helps the best in cases like this. I usually write it down:
a = 10101110
b = 01100011
mask = 11110000
a ^ b    =   10101110
           ^ 01100011
             --------
             11001101   (call this x)

x & mask =   11001101
           & 11110000
             --------
             11000000   (new x)

a ^ x    =   11000000
           ^ 10101110
             --------
             01101110   (final x, which is your r)
I don't know if this is the result you were after, but that's what it does. Writing it out usually helps when I don't understand a bitwise operation.
In this case, I can see that it works, but I am not sure what the logic is? And I am not sure I can create my own bit operations like this from scratch. How do I start thinking in bits?
People have answered your first question -- explaining the logic. I shall hopefully show you a terribly basic, long-winded but standard method of making any bit twiddling operations. (note, once you get used to working with bits you'll start thinking in & and | straight off without doing such nonsense).
1. Figure out what you'd like your operation to do.
2. Write out a FULL truth table.
3. Either read the sum-of-products directly from the table or make a Karnaugh map. The K-map will reduce the final equation a lot.
4. ???
5. Profit
Deriving it for the example you gave, i.e. where a mask selects bits from A or B (0 selects A, 1 selects B):
This table is for 1 bit per input. I'm not doing more than one bit, as I don't want to waste my time :) (Why? 2^(2 bits * 3 inputs) = 64 cases, and 2^(3 bits * 3 inputs) = 512 cases.)
But the good news is that in this case the operation is independent of the number of bits, so a 1-bit example is 100% fine. In fact it's recommended by me :)
| A | B | M || R |
============++====
| 0 | 0 | 0 || 0 |
| 0 | 0 | 1 || 0 |
| 0 | 1 | 0 || 0 |
| 0 | 1 | 1 || 1 |
| 1 | 0 | 0 || 1 |
| 1 | 0 | 1 || 0 |
| 1 | 1 | 0 || 1 |
| 1 | 1 | 1 || 1 |
Hopefully you can see how this truth table works.
How do we get an expression from this? Two methods: K-maps and by hand. Let's do it by hand first, shall we? :)
Looking at the points where R is true, we see:
| A | B | M || R |
============++====
| 0 | 1 | 1 || 1 |
| 1 | 0 | 0 || 1 |
| 1 | 1 | 0 || 1 |
| 1 | 1 | 1 || 1 |
From this we can derive an expression:
R = (~A & B & M) |
    ( A & ~B & ~M) |
    ( A & B & ~M) |
    ( A & B & M)
Hopefully you can see how this works: just OR together the full expressions seen in each case. By "full" I mean that the negated variables need to be in there too.
Let's try it in python:
a = 0xAE  # 10101110b
b = 0x63  # 01100011b
m = 0xF0  # 11110000b
r = (~a & b & m) | ( a & ~b & ~m) | ( a & b & ~m) | ( a & b & m)
print(hex(r))
OUTPUT:
0x6e
These numbers are from Abel's example. The output is 0x6E, which is 01101110b.
So it worked! Hurrah. (ps, it's possible to derive an expression for ~r from the first table, should you wish to do so. Just take the cases where r is 0).
This expression you've made is a boolean "sum of products", aka Disjunctive Normal Form, although DNF is really the term used in first-order predicate logic. This expression is also pretty unwieldy. Making it smaller is a tedious thing to do on paper, and is the kind of thing you'll do 500,000 times at uni on a CS degree if you take the compiler or hardware courses. (Highly recommended :))
So let's do some boolean algebra magic on this (don't try and follow this, it's a waste of time):
(~a & b & m) | ( a & ~b & ~m) | ( a & b & ~m) | ( a & b & m)
|= ((~a & b & m) | ( a & ~b & ~m)) | ( a & b & ~m) | ( a & b & m)
take that first sub-clause that I made:
((~a & b & m) | ( a & ~b & ~m))
|= (~a | (a & ~b & ~m)) & (b | ( a & ~b & ~m)) & (m | ( a & ~b & ~m))
|= ((~a | a) & (a | ~b) &( a | ~m)) & (b | ( a & ~b & ~m)) & (m | ( a & ~b & ~m))
|= (T & (a | ~b) &( a | ~m)) & (b | ( a & ~b & ~m)) & (m | ( a & ~b & ~m))
|= ((a | ~b) & (a | ~m)) & (b | ( a & ~b & ~m)) & (m | ( a & ~b & ~m))
etc etc etc. This is the massively tedious bit, in case you didn't guess. So just whack the expression into a boolean simplifier of your choice, which will tell you
r = (a & ~m) | (b & m)
Hurrah! Correct result. Note, it might even go so far as giving you an expression involving XORs, but who cares? Actually, some people do, as the expression with ands and ors is 4 operations (1 or, 2 and, 1 neg), whilst r = a ^ ((a ^ b) & mask) is 3 (2 xor, 1 and).
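If you want to convince yourself that the two forms really agree, a brute-force check over all byte values (purely illustrative, not part of the original answer) is quick to write:
#include <stdio.h>

int main(void)
{
    /* Compare (a & ~m) | (b & m) with a ^ ((a ^ b) & m) for every 8-bit a, b and mask. */
    for (unsigned int a = 0; a < 256; a++)
        for (unsigned int b = 0; b < 256; b++)
            for (unsigned int m = 0; m < 256; m++)
                if (((a & ~m) | (b & m)) != (a ^ ((a ^ b) & m)))
                    printf("mismatch: a=%u b=%u m=%u\n", a, b, m);

    printf("done\n");  /* no mismatch is ever printed */
    return 0;
}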
Now, how do you do it with K-maps? Well, first you need to know how to make them; I'll leave you to do that. :) Just google for it. There's software available, but I think it's best to do it by hand -- it's more fun and the programs don't allow you to cheat.
Cheat? Well, if you have lots of inputs, it's often best to reduce the table like so:
| A | B | M || R |
============++====
| X | X | 0 || A |
| X | X | 1 || B |
e.g. that 64-case table?
| A1| A0| B1| B0| M1| M0|| R1| R0|
========================++========
| X | X | X | X | 0 | 0 || A1| A0|
| X | X | X | X | 0 | 1 || A1| B0|
| X | X | X | X | 1 | 0 || B1| A0|
| X | X | X | X | 1 | 1 || B1| B0|
Boils down to 4 cases in this example :)
(Where X is "don't care".) Then put that table into your K-map. Once again, an exercise for you to work out [i.e., I've forgotten how to do this].
Hopefully you can now derive your own boolean madness, given a set of inputs and an expected set of outputs.
Have fun.
In order to create boolean expressions like that one, I think you'd have to learn some boolean algebra.
This looks good:
http://www.allaboutcircuits.com/vol_4/chpt_7/1.html
It even has a page on generating boolean expressions from truth tables.
It also has a section on Karnaugh Maps. To be honest, I've forgotten what those are, but they look like they could be useful for what you want to do.
http://www.allaboutcircuits.com/vol_4/chpt_8/1.html
a ^ x for some x gives the result of flipping those bits in a which are set in x.
a ^ b gives you a 1 where the bits in a and b differ, a 0 where they are the same.
Setting x to (a ^ b) & mask gives the result of flipping the bits in a which are different in a and b and are set in the mask. Thus a ^ ((a ^ b) & mask) gives the result of changing, where necessary, the values of the bits which are set in the mask from the value they take in a to the value they take in b.
The basis for most bitwise operations (&, |, ^, ~) is Boolean algebra. Think of performing Boolean algebra on multiple Boolean values in parallel and you've got the bitwise operators. The only operators this doesn't cover are the shift operators (<<, >>), which you can think of as shifting the bits or as multiplication and division by powers of two: for non-negative x, x << 3 == x * pow(2,3) and x >> 4 == (int)(x * pow(2,-4)).
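A tiny sketch (the value 37 is arbitrary) makes the shift/power-of-two correspondence concrete for non-negative integers:
#include <stdio.h>

int main(void)
{
    unsigned int x = 37;

    /* Left shift multiplies by a power of two; right shift divides, discarding the remainder. */
    printf("%u << 3 = %u, same as %u * 8 = %u\n", x, x << 3, x, x * 8);
    printf("%u >> 4 = %u, same as %u / 16 = %u\n", x, x >> 4, x, x / 16);
    return 0;
}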
Thinking in bits is not that hard; you just need to convert, in your head, all the values into bits and work on them a bit at a time. That sounds hard, but it does get easier over time. A good first step is to start thinking of them as hex digits (4 bits at a time).
For example, let's say a is 0x13, b is 0x22 and mask is 0x0f:
a : 0x13 : 0001 0011
b : 0x22 : 0010 0010
---------------------------------
a^b : 0x31 : 0011 0001
mask : 0x0f : 0000 1111
---------------------------------
(a^b)&mask : 0x01 : 0000 0001
a : 0x13 : 0001 0011
---------------------------------
a^((a^b)&mask) : 0x12 : 0001 0010
This particular example is a way to combine the top four bits of a with the bottom four bits of b (the mask decides which bits come from a and which from b).
As the site says, it's an optimization of (a & ~mask) | (b & mask):
a : 0x13 : 0001 0011
~mask : 0xf0 : 1111 0000
---------------------------------
a & ~mask : 0x10 : 0001 0000
b : 0x22 : 0010 0010
mask : 0x0f : 0000 1111
---------------------------------
b & mask : 0x02 : 0000 0010
a & ~mask : 0x10 : 0001 0000
b & mask : 0x02 : 0000 0010
---------------------------------
(a & ~mask) | : 0x12 : 0001 0010
(b & mask)
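The same numbers dropped into a few lines of C (just a quick sketch) confirm that both forms give 0x12:
#include <stdio.h>

int main(void)
{
    unsigned int a = 0x13, b = 0x22, mask = 0x0f;

    unsigned int r1 = (a & ~mask) | (b & mask);   /* straightforward form */
    unsigned int r2 = a ^ ((a ^ b) & mask);       /* the optimized form   */

    printf("r1 = 0x%02x, r2 = 0x%02x\n", r1, r2); /* both print 0x12 */
    return 0;
}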
Aside: I wouldn't be overly concerned about not understanding something on that page you linked to. There's some serious "black magic" going on there. If you really want to understand bit fiddling, start with unoptimized ways of doing it.
First learn the logical (that is, 1-bit) operators well. Try to write down some rules, like
a && b = b && a
1 && a = a
1 || a = 1
0 && a = ... //you get the idea. Come up with as many as you can.
Include the "logical" xor operator (C has no ^^ operator; this is just notation for the exercise):
1 ^^ b = !b
0 ^^ b = ...
Once you have a feel for these, move onto bitwise operators. Try some problems, look at some common tricks and techniques. With a bit of practice you'll feel much more confident.
Break the expression down into individual bits. Consider a single bit position in the expression (a ^ b) & mask. If the mask has zero at that bit position, (a ^ b) & mask will simply give you zero. Any bit xor'ed with zero will remain unchanged, so a ^ (a ^ b) & mask will simply return a's original value.
If the mask has a 1 at that bit position, (a ^ b) & mask will simply return the value of a ^ b. Now if we xor the value with a, we get a ^ (a ^ b) = (a ^ a) ^ b = b. This is a consequence of a ^ a = 0 -- any value xor'ed with itself will return zero. And then, as previously mentioned, zero xor'ed with any value will just give you the original value.
How to think in bits:
Read up on what others have done, and take note of their strategies. The Stanford site you link to is a pretty good resource -- there are often several techniques shown for a particular operation, allowing you to see the problem from different angles. You might have noticed that there are people who've submitted their own alternative solutions for a particular operation, which were inspired by the techniques applied to a different operation. You could take the same approach.
Also, it might help you to remember a handful of simple identities, which you can then string together for more useful operations. IMO listing out the results for each bit-combination is only useful for reverse-engineering someone else's work.
Maybe you don't need to think in bits - perhaps you can get your compiler to think in bits for you, so you can focus on the actual problem you're trying to solve instead. Using bit manipulation directly in your code can produce some profoundly impenetrable (if impressive) code -- here are some nice macros (from the Windows DDK) that demonstrate this:
// from ntifs.h
// These macros are used to test, set and clear flags respectively
#define FlagOn(_F,_SF) ((_F) & (_SF))
#define BooleanFlagOn(F,SF) ((BOOLEAN)(((F) & (SF)) != 0))
#define SetFlag(_F,_SF) ((_F) |= (_SF))
#define ClearFlag(_F,_SF) ((_F) &= ~(_SF))
Now if you want to set a flag in a value you can simply say SetFlag(x, y), which is much clearer I think. Moreover, if you focus on the problem you're trying to address with your bit fiddling, the mechanics will become second nature without you having to expend any effort. Look after the bits and the bytes will look after themselves!
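A usage sketch (the OPTION_A/OPTION_B values and the flags variable are made up for illustration; only the macros come from ntifs.h):
#include <stdio.h>

#define FlagOn(_F,_SF)    ((_F) & (_SF))
#define SetFlag(_F,_SF)   ((_F) |= (_SF))
#define ClearFlag(_F,_SF) ((_F) &= ~(_SF))

/* Hypothetical single-bit flag values */
#define OPTION_A 0x01u
#define OPTION_B 0x02u

int main(void)
{
    unsigned int flags = 0;

    SetFlag(flags, OPTION_A);    /* flags == 0x01 */
    SetFlag(flags, OPTION_B);    /* flags == 0x03 */
    ClearFlag(flags, OPTION_A);  /* flags == 0x02 */

    if (FlagOn(flags, OPTION_B))
        printf("OPTION_B is set\n");
    if (!FlagOn(flags, OPTION_A))
        printf("OPTION_A is clear\n");

    return 0;
}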
