In C I have this enum:
enum {
    STAR_NONE  = 1 << 0, // 1
    STAR_ONE   = 1 << 1, // 2
    STAR_TWO   = 1 << 2, // 4
    STAR_THREE = 1 << 3  // 8
};
Why is 1 << 3 equal to 8 and not 6?
Shifting a number to the left is equivalent to multiplying that number by 2^n, where n is the distance you shifted it.
To see why that is true, let's take an example. Suppose we have the number 5 and we shift it 2 places to the left. 5 in binary is 00000101, i.e.
0×2^7 + 0×2^6 + 0×2^5 + 0×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 1×2^2 + 1×2^0 = 4 + 1 = 5
Now, 5 << 2 would be 00010100, i.e.
0×2^7 + 0×2^6 + 0×2^5 + 1×2^4 + 0×2^3 + 1×2^2 + 0×2^1 + 0×2^0 = 1×2^4 + 1×2^2 = 16 + 4 = 20
But we can write 1×2^4 + 1×2^2 as
1×2^4 + 1×2^2 = (1×2^2 + 1×2^0)×2^2 = 5×4 = 20
and from this it is possible to conclude that 5 << 2 is equivalent to 5×2^2. In general, one can show that
k << m = k×2^m
So in your case, 1 << 3 is equivalent to 2^3 = 8, since
1 << 3 → b00000001 << 3 → b00001000 → 2^3 → 8
If you instead did this 3 << 1 then
3 << 1 → b00000011 << 1 → b00000110 → 2^2 + 2^1 → 4 + 2 → 6
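As a quick sanity check, here is a tiny program (my addition, not from the original answer) confirming the identity for these values:

#include <stdio.h>

int main(void)
{
    printf("%d %d\n", 5 << 2, 5 * 4); /* prints: 20 20 */
    printf("%d %d\n", 1 << 3, 1 * 8); /* prints: 8 8 */
    printf("%d %d\n", 3 << 1, 3 * 2); /* prints: 6 6 */
    return 0;
}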
1 in binary is 0000 0001.
By shifting it to the left by 3 bits (i.e. 1 << 3) you get
0000 1000
which is 8 in decimal, not 6.
Because two to the power of three is eight.
Think in binary.
You are actually doing this.
0001 shifted 3 times to the left = 1000 = 8
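For completeness, here is a minimal program (my addition, reusing the enum from the question) that prints all four constants:

#include <stdio.h>

enum {
    STAR_NONE  = 1 << 0, // 1
    STAR_ONE   = 1 << 1, // 2
    STAR_TWO   = 1 << 2, // 4
    STAR_THREE = 1 << 3  // 8
};

int main(void)
{
    printf("%d %d %d %d\n", STAR_NONE, STAR_ONE, STAR_TWO, STAR_THREE);
    /* prints: 1 2 4 8 */
    return 0;
}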
I don't understand Exercise 2-9 in K&R's The C Programming Language, chapter 2, section 2.10:
Exercise 2-9. In a two's complement number system, x &= (x-1) deletes the rightmost 1-bit in x. Explain why. Use this observation to write a faster version of bitcount.
The bitcount function is:
/* bitcount: count 1 bits in x */
int bitcount(unsigned x)
{
    int b;

    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}
As I understand it, the function checks whether the rightmost bit is 1, counts it if so, and then shifts it out.
What I can't understand is why x & (x-1) deletes the rightmost 1-bit.
For example, suppose x is 1010 and x-1 is 1001 in binary; then x & (x-1) would be 1011, so the rightmost bit would still be there and would be one. Where am I wrong?
Also, the exercise mentions two's complement; does it have something to do with this question?
Thanks a lot!!!
First, you need to believe that K&R are correct.
Second, you may have some misunderstanding of the wording.
Let me clarify it for you: the rightmost 1-bit does not mean the rightmost bit, but the rightmost bit which is 1 in the binary form.
Let's arbitrarily assume that x is xxxxxxx1000 (each x can be 0 or 1). Then, counting from right to left, the fourth bit is the "rightmost 1-bit". With this understanding, let's continue with the problem.
Why can x &= (x-1) delete the rightmost 1-bit?
In a two's complement number system, -1 is represented with an all-1s bit pattern.
So x-1 is actually x+(-1), which is xxxxxxx1000 + 11111111111. Here comes the tricky point:
to the right of the rightmost 1-bit, every 0 becomes 1 (0 + 1 = 1, no carry); the rightmost 1-bit itself becomes 0 and produces a carry of 1; and that carry keeps propagating toward the leftmost bit and finally falls off the end, while every 'x' bit stays x, because 'x' + '1' + '1' (carry) yields 'x' with a carry of 1.
Then x & (x-1) will delete the rightmost 1-bit.
Hope you can understand it now.
Thanks.
Here is a simple way to explain it. Let's arbitrarily assume that number Y is xxxxxxx1000 (x can be 0 or 1).
xxxxxxx1000 - 1 = xxxxxxx0111
xxxxxxx1000 & xxxxxxx0111 = xxxxxxx0000 (See, the "rightmost 1" is gone.)
So the number of repetitions of Y &= (Y-1) before Y becomes 0 will be the total number of 1's in Y.
Why does x & (x-1) delete the rightmost set bit? Just try and see:
If the rightmost bit is 1, x has a binary representation of a...b1 and x-1 is a...b0, so the bitwise AND gives a...b0: the common bits are left unchanged by the AND, and 1 & 0 is 0.
Otherwise, x has a binary representation of a...b10...0 and x-1 is a...b01...1; for the same reason as above, x & (x-1) will be a...b00...0, again clearing the rightmost set bit.
So instead of scanning all bits to find which ones are 0 and which are 1, you just iterate the operation x = x & (x-1) until x is 0: the number of steps will be the number of 1-bits. It is more efficient than the naive implementation because, on average, you need half the number of steps.
Example of code:
int bitcount(unsigned int x) {
    int nb = 0;
    while (x != 0) {
        x &= x - 1; /* clear the rightmost 1-bit */
        nb++;       /* ...and count it */
    }
    return nb;
}
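A quick driver for it (my addition, assuming the function above is in scope):

#include <stdio.h>

int main(void)
{
    /* 0xAA is 10101010 in binary: four 1-bits */
    printf("%d\n", bitcount(0xAAu)); /* prints: 4 */
    return 0;
}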
I know I'm already very late (≈ 3.5 years), but your example has a mistake.
x = 1010 = 10
x - 1 = 1001 = 9
1010 & 1001 = 1000
So as you can see, it deleted the rightmost 1-bit in 10.
7 = 111
6 = 110
5 = 101
4 = 100
3 = 011
2 = 010
1 = 001
0 = 000
Observe that at the position of the rightmost 1 in any number, the bit at that same position in that number minus one is 0. Thus ANDing x with x-1 resets (i.e. sets to 0) the rightmost 1-bit.
7 & 6 = 111 & 110 = 110 = 6
6 & 5 = 110 & 101 = 100 = 4
5 & 4 = 101 & 100 = 100 = 4
4 & 3 = 100 & 011 = 000 = 0
3 & 2 = 011 & 010 = 010 = 2
2 & 1 = 010 & 001 = 000 = 0
1 & 0 = 001 & 000 = 000 = 0
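The same table can be reproduced with a short loop (my addition):

#include <stdio.h>

int main(void)
{
    for (unsigned x = 7; x >= 1; x--)
        printf("%u & %u = %u\n", x, x - 1, x & (x - 1));
    return 0;
}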
I've seen the answer here: http://clc-wiki.net/wiki/K%26R2_solutions:Chapter_2:Exercise_6
and I've tested the first one, but in this part:
x = 29638;
y = 999;
p = 10;
n = 8;
return (x & ((~0 << (p + 1)) | (~(~0 << (p + 1 - n)))));
On paper it gives me 6, but the program returns 28678...
In this part:
111001111000110
& 000100000000111
the three leftmost bits of the result come out as 1's, like in x, but the bitwise operator & says:
The output of bitwise AND is 1 if the corresponding bits of all operands are 1. If either bit of an operand is 0, the result of the corresponding bit is evaluated to 0.
So why does it return the number with those 3 bits set to 1?
Here we go, one step at a time (using 16-bit numbers). We start with:
(x & ((~0 << (p + 1)) | (~(~0 << (p + 1 - n)))))
Substituting in numbers (in decimal):
(29638 & ((~0 << (10 + 1)) | (~(~0 << (10 + 1 - 8)))))
Totalling up the bit shift amounts gives:
(29638 & ((~0 << 11) | (~(~0 << 3))))
Rewriting numbers as binary and applying the ~0s...
(0111001111000110 & ((1111111111111111 << 1011) | (~(1111111111111111 << 0011))))
After performing the shifts we get:
(0111001111000110 & (1111100000000000 | (~ 1111111111111000)))
Applying the other bitwise-NOT (~):
(0111001111000110 & (1111100000000000 | 0000000000000111))
And the bitwise-OR (|):
0111001111000110 & 1111100000000111
And finally the bitwise-AND (&):
0111000000000110
So we then have binary 0111000000000110, which is 2 + 4 + 4096 + 8192 + 16384, which is 28678.
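If you want to double-check this on a real machine, a short program (my addition; I use ~0u instead of ~0 so the shifts happen on an unsigned value) reproduces the result:

#include <stdio.h>

int main(void)
{
    unsigned x = 29638, p = 10, n = 8;
    printf("%u\n", x & ((~0u << (p + 1)) | (~(~0u << (p + 1 - n)))));
    /* prints: 28678 */
    return 0;
}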
I need help understanding what is happening in this declaration:
#define LDA(m) (LDA_OP << 5 | ((m) & 0x001f))
Thank you
y << x is a left shift of y by x.
x & y is a bitwise and of x and y.
So, the left shift operator is like multiplying by 10 in base 10, except that you multiply by 2 in base 2. For example:
In base 10
300 * 10 = 3000
In base 2:
0b0001 * 2 = 0b0010 = 0b0001 << 1
With a << b you "push" the number a b places to the left.
As for the OR operator ( | ): take two bits; if one or both of them are true (1), then the result is true.
For example:
0b0010 | 0b0001 = 0b0011
0b0010 | 0b0010 = 0b0010
If you have problems with these operators, just try to work through the same numbers in binary.
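Putting both operators together for the macro in the question, here is a sketch; the value of LDA_OP is hypothetical, since the question doesn't give it:

#include <stdio.h>

#define LDA_OP 5 /* hypothetical opcode value (0b101) */
#define LDA(m) (LDA_OP << 5 | ((m) & 0x001f))

int main(void)
{
    /* LDA_OP << 5 places the opcode above bit 4;
       (m) & 0x001f keeps only the low 5 bits of the operand. */
    printf("0x%x\n", LDA(0x13)); /* (5 << 5) | 0x13 = 0xb3 */
    return 0;
}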
OK, I know this is a pretty mean task that has given me nightmares, but maybe I'll crack this code thanks to someone here.
I want to check whether a number is between 0 and 10 using bitwise operators. That's the thing: it must be between 0 and 10, and not, for example, between 0 and 2, 0 and 4, 0 and 8 and so on.
Reference for numbers and their binary representations in 0-4 bits (most significant bit first):
0 0
1 1
2 10
3 11
4 100
5 101
6 110
7 111
8 1000
9 1001
10 1010
11 1011
12 1100
13 1101
14 1110
15 1111
Trying to figure out something like
if(((var & 4) >> var) + (var & 10))
I attempt to solve it with bitwise operators only (no addition).
The expression below will evaluate to nonzero if the number (v) is outside the 0 - 10 inclusive range:
(v & (~0xFU)) |
( ((v >> 3) & 1U) & ((v >> 2) & 1U) ) |
( ((v >> 3) & 1U) & ((v >> 1) & 1U) & (v & 1U) )
The first line is nonzero if the number is above 15 (any higher bit than the first four is set). The second line is nonzero if in the low 4 bits it is between 12 and 15 inclusive. The third line is nonzero if in the low 4 bits the number is either 11 or 15.
It was not clear in the question, but if the number to test is limited to the 0 - 15 inclusive range (only the low 4 bits), then something nicer is possible here:
((~(v >> 3)) & 1U) |
( ((~(v >> 2)) & 1U) & (( ~v ) & 1U) ) |
( ((~(v >> 2)) & 1U) & ((~(v >> 1)) & 1U) )
The first line is 1 if the number is between 0 and 7 inclusive. The second line is 1 if the number is one of 0, 2, 8 or 10. The third line is 1 if the number is one of 0, 1, 8 or 9. So OR-combined, the expression is 1 if the number is between 0 and 10 inclusive. Regarding this solution, you may also check out the Karnaugh map, which can assist in generating these (and can also be used to prove there is no simpler solution here).
I don't think I could get any closer strictly using only bitwise operators in a reasonable manner. However, if you can use addition, it becomes a lot simpler, as Pat's solution shows.
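A brute-force check of the first expression (my test harness, not part of the answer):

#include <stdio.h>

int main(void)
{
    for (unsigned v = 0; v <= 20; v++) {
        unsigned out = (v & (~0xFU)) |
                       (((v >> 3) & 1U) & ((v >> 2) & 1U)) |
                       (((v >> 3) & 1U) & ((v >> 1) & 1U) & (v & 1U));
        printf("%2u: %s\n", v, out ? "out of range" : "in 0..10");
    }
    return 0;
}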
Assuming that addition is allowed, then:
(v & ~0xf) | ((v+5) & ~0xf)
is non-zero if v is out-of-range. The first term tests if v is outside the range 0..15, and the second shifts the unwanted 11, 12, 13, 14, 15 outside the 0..15 range.
When addition is allowed and the range is 0..15, a simple solution is
(v - 11) & ~7
which is nonzero when v is in the range 0..10. Using shifts instead, you can use
(1<<10) >> v
which is also nonzero if the input is in the range 0..10. If the input range is unrestricted and the shift count is modulo 32, like on most CPUs, you can use
((1<<11) << ~v) | (v & ~15)
which is nonzero if the input is not in the range (the opposite is difficult, since even testing v == 0 is difficult with only bitops). If other arithmetic operations are allowed, then
v / 11
can be used, which is also nonzero if the input is not in the range.
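These can be verified the same way; for example, for the shift-based test (my harness, assuming v stays in 0..15 so the shift count is valid):

#include <stdio.h>

int main(void)
{
    for (unsigned v = 0; v <= 15; v++)
        printf("%2u: %s\n", v,
               ((1 << 10) >> v) ? "in 0..10" : "out of range");
    return 0;
}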
bool b1 = CheckCycleStateWithinRange(cycleState, 0b0, 0b1010); // Note: 0b0 = 0 and 0b1010 = 10

bool CheckCycleStateWithinRange(int cycleState, int minRange, int maxRange) const
{
    return IsGreaterThanEqual(cycleState, minRange) && IsLessThanEqual(cycleState, maxRange);
}

// limit + (~cycleState + 1) is limit - cycleState in two's complement;
// bit 31 (the sign bit) is 1 exactly when limit < cycleState.
int IsGreaterThanEqual(int cycleState, int limit) const
{
    return ((limit + (~cycleState + 1)) >> 31 & 1) | (!(cycleState ^ limit));
}

int IsLessThanEqual(int cycleState, int limit) const
{
    return !((limit + (~cycleState + 1)) >> 31 & 1) | (!(cycleState ^ limit));
}
I suppose the title might be a little misleading, but I couldn't think of a better one.
I have an array A[], all but one of whose elements occurs some number of times that is a multiple of 15; e.g. 2 occurs 30 times, 3 occurs 45 times. But one element occurs x times, where x is not a multiple of 15. How do I find that element? I'm looking for a linear solution without a hash table.
Thanks.
There was a similar question here on Stack Overflow, but I can't find it.
Let's use 3 instead of 15, because it is easier and I think it is completely equivalent. The sequence will be 4, 5, 4, 5, 3, 3, 4, 5; in binary: 100, 101, 100, 101, 011, 011, 100, 101.
You can do the following: sum the least significant bits of all the values and take the remainder modulo 3 (15 in the original problem):
bit1 = (0 + 1 + 0 + 1 + 1 + 1 + 0 + 1) % 3 = 5 % 3 = 2 != 0
If it is != 0, then that bit is equal to 1 in the number we are trying to find. Now let's move to the next bit:
bit2 = (0 + 0 + 0 + 0 + 1 + 1 + 0 + 0) % 3 = 2 % 3 = 2 != 0
bit3 = (1 + 1 + 1 + 1 + 0 + 0 + 1 + 1) % 3 = 6 % 3 = 0 == 0
So we have bit3 == 0, bit2 != 0, bit1 != 0, making 011. Convert to decimal: 3.
The space complexity is O(1) and time complexity is O(n * BIT_LENGTH_OF_VARS), where BIT_LENGTH_OF_VARS == 8 for byte, BIT_LENGTH_OF_VARS == 32 for int, etc. So it can be large, but constants don't affect asymptotic behavior and O(n * BIT_LENGTH_OF_VARS) is really O(n).
That's it!
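Here is a sketch of the method in C (my illustration, not code from the original answer), assuming 32-bit unsigned values:

#include <stdio.h>

/* Returns the element of a[] whose count is not a multiple of k.
   For each bit position, sum that bit over all elements; the sum
   mod k is nonzero exactly at the positions where the odd one
   out has a 1-bit. Assumes unsigned is 32 bits wide. */
unsigned find_exception(const unsigned a[], size_t n, unsigned k)
{
    unsigned result = 0;
    for (unsigned bit = 0; bit < 32; bit++) {
        unsigned long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += (a[i] >> bit) & 1u;
        if (sum % k != 0)
            result |= 1u << bit;
    }
    return result;
}

int main(void)
{
    /* The answer's example with k = 3: 4 and 5 each occur three
       times, 3 occurs twice. */
    unsigned a[] = {4, 5, 4, 5, 3, 3, 4, 5};
    printf("%u\n", find_exception(a, 8, 3)); /* prints: 3 */
    return 0;
}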