Bit Twiddling Hacks contains the following macros, which count the number of bytes in a word x that are less than, or greater than, n:
#define countless(x,n) \
(((~0UL/255*(127+(n))-((x)&~0UL/255*127))&~(x)&~0UL/255*128)/128%255)
#define countmore(x,n) \
(((((x)&~0UL/255*127)+~0UL/255*(127-(n))|(x))&~0UL/255*128)/128%255)
However, it doesn't explain why they work. What's the logic behind these macros?
Let's try for intuition on countmore.
First, ~0UL/255*(127-n) is a clever way of copying the value 127-n to all bytes in the word in parallel. Why does it work? ~0 is 255 in all bytes. Consequently, ~0/255 is 1 in all bytes. Multiplying by (127-n) does the "copying" mentioned at the outset.
The term ~0UL/255*127 is just a special case of the above where n is zero. It copies 127 into all bytes. That's 0x7f7f7f7f if words are 4 bytes. "Anding" with x zeros out the high order bit in each byte.
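If you want to see the replication for yourself, here is a quick test you can run (this assumes an 8-byte unsigned long; with 4-byte words you'd see the 4-byte versions of the same patterns):

#include <stdio.h>

int main(void) {
    printf("%lx\n", ~0UL / 255);        /* 101010101010101: 0x01 in every byte */
    printf("%lx\n", ~0UL / 255 * 127);  /* 7f7f7f7f7f7f7f7f: 127 copied into every byte */
    return 0;
}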
That's the first term, ((x)&~0UL/255*127). The result is the same as x except that the high bit in each byte is zeroed.
The second term ~0UL/255*(127-(n)) is as above: 127-n copied to each byte.
For any given byte x[i], adding the two terms gives us 127-n+x[i] if x[i]<=127. This quantity will have the high order bit set whenever x[i]>n. It's easiest to see this as adding two 7-bit unsigned numbers. The result "overflows" into the 8th bit because the result is 128 or more.
So it looks like the algorithm is going to use the 8th bit of each byte as a boolean indicating x[i]>n.
So what about the other case, x[i]>127? Here we know the byte is more than n because the algorithm stipulates n<=127. The 8th bit ought to be always 1. Happily, the sum's 8th bit doesn't matter because the next step "or"s the result with x. Since x[i] has the 8th bit set to 1 if and only if it's 128 or larger, this operation "forces" the 8th bit to 1 just when the sum might provide a bad value.
To summarize so far, the "or" result has the 8th bit set to 1 in its i'th byte if and only if x[i]>n. Nice.
The next operation &~0UL/255*128 sets everything to zero except all those 8th bits of interest. It's "anding" with 0x80808080...
Now the task is to find the number of these bits set to 1. For this, countmore uses some basic number theory. First it shifts right 7 bits so the bits of interest are b0, b8, b16... The value of this word is
b0 + b8*2^8 + b16*2^16 + ...
A beautiful fact is that 1 == 2^8 == 2^16 == ... mod 255. In other words, each 1 bit is 1 mod 255. It follows that finding mod 255 of the shifted result is the same as summing b0+b8+b16+...
Yikes. We're done.
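For a concrete sanity check of that reasoning, here is a small program (my own sketch, assuming an 8-byte unsigned long) that uses the macro from the question to count the bytes of a word greater than 0x50:

#include <stdio.h>

#define countmore(x,n) \
(((((x)&~0UL/255*127)+~0UL/255*(127-(n))|(x))&~0UL/255*128)/128%255)

int main(void) {
    unsigned long x = 0x1234567811335577UL;
    /* bytes 0x56, 0x78, 0x55, 0x77 are greater than 0x50, so expect 4 */
    printf("%lu\n", countmore(x, 0x50));  /* prints 4 */
    return 0;
}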
Let's analyse the countless macro. We can simplify it to the following equivalent code (assuming 8-byte words):
#define A(n) (0x0101010101010101UL * (0x7F+n))
#define B(x) (x & 0x7F7F7F7F7F7F7F7FUL)
#define C(x,n) (A(n) - B(x))
#define countless(x,n) (( C(x,n) & ~x & 0x8080808080808080UL) / 0x80 % 0xFF )
A(n) will be:
A(0) = 0x7F7F7F7F7F7F7F7F
A(1) = 0x8080808080808080
A(2) = 0x8181818181818181
A(3) = 0x8282828282828282
....
And for B(x), each byte of x is masked with 0x7F.
If we suppose x = 0xb0b1b2b3b4b5b6b7 and n = 0, then C(x,n) equals 0x(7F-b0)(7F-b1)(7F-b2)..., byte by byte; no borrows cross byte boundaries because every byte of B(x) is at most 0x7F.
For example, suppose x = 0x1234567811335577 and n = 0x50. Then:
A(0x50) = 0xCFCFCFCFCFCFCFCF
B(0x1234567811335577) = 0x1234567811335577
C(0x1234567811335577, 0x50) = 0xBD9B7957BE9C7A58
~(0x1234567811335577) = 0xEDCBA987EECCAA88
0xEDCBA987EECCAA88 & 0x8080808080808080UL = 0x8080808080808080
C(0x1234567811335577, 0x50) & 0x8080808080808080 = 0x8080000080800000
(0x8080000080800000 / 0x80) % 0xFF = 4 // counts the 0x80 marker bytes, i.e. the bytes of x that are less than 0x50
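Putting that walkthrough into a runnable check (my own snippet, again assuming an 8-byte unsigned long):

#include <stdio.h>

#define countless(x,n) \
(((~0UL/255*(127+(n))-((x)&~0UL/255*127))&~(x)&~0UL/255*128)/128%255)

int main(void) {
    unsigned long x = 0x1234567811335577UL;
    /* bytes 0x12, 0x34, 0x11, 0x33 are less than 0x50, so expect 4 */
    printf("%lu\n", countless(x, 0x50));  /* prints 4 */
    return 0;
}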
int a = 12;
For example: the binary of 12 is 1100, so the answer should be 3, as the 3rd bit from the right is the lowest one set.
I want the position of the rightmost set bit of a. Can anyone tell me how I can do so?
NOTE: I want the position only; I don't want to set or reset the bit. So it is not a duplicate of any existing question on Stack Overflow.
This answer, Unset the rightmost set bit, shows both how to get and how to unset the rightmost set bit of an unsigned integer, or of a signed integer represented in two's complement.
get rightmost set bit,
x & -x
// or
x & (~x + 1)
unset rightmost set bit,
x &= x - 1
// or
x -= x & -x // rhs is rightmost set bit
why it works
x:             [leading bits] 1 [all 0s]
~x:            [leading bits flipped] 0 [all 1s]
~x + 1 (-x):   [leading bits flipped] 1 [all 0s]
x & -x:        [all 0s] 1 [all 0s]
E.g., let x = 112, and choose 8 bits for simplicity, though the idea is the same for any size of integer.
// example for get rightmost set bit
x: 01110000
~x: 10001111
-x or ~x + 1: 10010000
x & -x: 00010000
// example for unset rightmost set bit
x: 01110000
x-1: 01101111
x & (x-1): 01100000
Finding the (0-based) index of the least significant set bit is equivalent to counting how many trailing zeros a given integer has. Depending on your compiler there are builtin functions for this, for example gcc and clang support __builtin_ctz.
For MSVC you would need to implement your own version, this answer to a different question shows a solution making use of MSVC intrinsics.
Given that you are looking for the 1-based index, you simply need to add 1 to ctz's result in order to achieve what you want.
int a = 12;
int least_bit = __builtin_ctz(a) + 1; // least_bit = 3
Note that this operation is undefined if a == 0. Furthermore there exist __builtin_ctzl and __builtin_ctzll which you should use if you are working with long and long long instead of int.
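For reference, a rough sketch of what an MSVC counterpart might look like using the _BitScanForward intrinsic (untested; the helper's name is mine):

#include <intrin.h>

/* Hypothetical helper: returns the 1-based position of the rightmost set bit,
   or 0 if a == 0. _BitScanForward writes the 0-based index of the lowest set
   bit into `index` and returns nonzero if any bit was set at all. */
static int least_bit_msvc(unsigned long a) {
    unsigned long index;
    return _BitScanForward(&index, a) ? (int)index + 1 : 0;
}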
One can use a property of two's complement here.
The two's complement of a number keeps the rightmost set bit (and the zeros to its right) intact and flips every bit to the left of it.
For example: consider a 4 bit system
/* Number in binary */
4 = 0100
/* 2s complement of 4 */
complement = 1100
/* which is nothing but */
complement == -4
/* Result */
4 & (-4) = 0100
Notice that there is only one set bit, and it is at the rightmost set bit position of 4.
Similarly we can generalise this for n.
n&(-n) will contain only one set bit which is actually at the rightmost set bit position of n.
Since there is only one set bit in n&(-n), it is a power of 2.
So finally we can get the bit position by:
log2(n&(-n))+1
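As a quick illustration of that formula (my own snippet, using log2 from <math.h>):

#include <math.h>
#include <stdio.h>

int main(void) {
    int n = 12;                       /* 1100 */
    int pos = (int)log2(n & -n) + 1;  /* n & -n = 4, log2(4) = 2, +1 gives position 3 */
    printf("%d\n", pos);              /* prints 3 */
    return 0;
}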
The rightmost set bit of n can be obtained using the formula:
n & ~(n-1)
This works because when you calculate (n-1), you turn all the zeros to the right of the rightmost set bit into 1s, and the rightmost set bit itself into 0.
Then you take the NOT of it, which leaves you with the following:
x = ~(bits of the original number above the rightmost 1) + (the rightmost 1 bit) + trailing zeros
Now, if you do (n & x), you get what you need, as the only bit that is 1 in both n and x is the rightmost bit.
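A tiny check with the question's n = 12 makes the steps concrete (my own snippet):

#include <stdio.h>

int main(void) {
    int n = 12;                    /* 1100 */
    /* n - 1 = 1011, ~(n - 1) = ...0100, so n & ~(n - 1) = 0100 */
    printf("%d\n", n & ~(n - 1));  /* prints 4, the isolated rightmost set bit */
    return 0;
}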
Phewwwww .. :sweat_smile:
http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
helped me understand this.
There is a neat trick in Knuth 7.1.3 where you multiply by a "magic" number (found by a brute-force search) that maps the first few bits of the number to a unique value for each position of the rightmost bit, and then you can use a small lookup table. Here is an implementation of that trick for 32-bit values, adapted from the nlopt library (MIT/expat licensed).
#include <stdint.h>

/* Return position (0, 1, ...) of rightmost (least-significant) one bit in n.
 *
 * This code uses a 32-bit version of the algorithm to find the rightmost
 * one bit in Knuth, _The Art of Computer Programming_, volume 4A
 * (draft fascicle), section 7.1.3, "Bitwise tricks and
 * techniques."
 *
 * Assumes n has a 1 bit, i.e. n != 0
 */
static unsigned rightone32(uint32_t n)
{
    const uint32_t a = 0x05f66a47; /* magic number, found by brute force */
    static const unsigned decode[32] = {
        0, 1, 2, 26, 23, 3, 15, 27, 24, 21, 19, 4, 12, 16, 28, 6,
        31, 25, 22, 14, 20, 18, 11, 5, 30, 13, 17, 10, 29, 9, 8, 7
    };
    n = a * (n & (-n));
    return decode[n >> 27];
}
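A quick usage check of rightone32 against the question's example might look like this (the + 1 converts the 0-based result into the 1-based position the question asks for):

#include <stdio.h>

int main(void) {
    /* 12 = 1100: rightmost set bit is at 0-based index 2, 1-based position 3 */
    printf("%u\n", rightone32(12) + 1);  /* prints 3 */
    return 0;
}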
Try this
int set_bit = n ^ (n&(n-1));
Explanation:
As noted in this answer, n&(n-1) unsets the last set bit.
So, if we unset the last set bit and XOR the result with the number, then by the nature of the XOR operation the last set bit comes back as 1 and all the other bits come out 0.
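For example, with the question's n = 12 (my own snippet):

#include <stdio.h>

int main(void) {
    int n = 12;                        /* 1100 */
    int set_bit = n ^ (n & (n - 1));   /* n & (n-1) = 1000, 1100 ^ 1000 = 0100 */
    printf("%d\n", set_bit);           /* prints 4: the isolated bit value, not its index */
    return 0;
}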
1- Subtract 1 from the number: (a-1)
2- Take its bitwise complement: ~(a-1)
3- AND it with the original number:
int last_set_bit = a & ~(a-1)
The reason for the subtraction is that after taking the complement, the bit at the rightmost set position is 1 again (and everything to its right is 0), so the AND with the original number leaves exactly the rightmost set bit.
Check if a & 1 is 0. If so, shift right by one until it's not zero. The number of times you shift tells you how many places from the right the rightmost set bit is.
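A minimal sketch of that loop (my own wording of it; assumes a is nonzero):

#include <stdio.h>

/* assumes a != 0 */
int rightmost_set_index(unsigned a) {
    int shifts = 0;
    while ((a & 1) == 0) {  /* rightmost bit not yet set */
        a >>= 1;
        ++shifts;
    }
    return shifts;
}

int main(void) {
    printf("%d\n", rightmost_set_index(12));  /* prints 2 (0-based); add 1 for the 1-based position */
    return 0;
}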
You can isolate the rightmost set bit (as a power of two, not as an index) by taking the bitwise XOR of n and (n & (n-1)):
int pos = n ^ (n&(n-1));
I inherited this one, with a note that it came from HAKMEM (try it out here). It works on both signed and unsigned integers, logical or arithmetic right shift. It's also pretty efficient.
#include <stdio.h>

int rightmost1(int n) {
    int pos, temp;
    for (pos = 0, temp = ~n & (n - 1); temp > 0; temp >>= 1, ++pos);
    return pos;
}

int main()
{
    int pos = rightmost1(16);
    printf("%d", pos);
}
You must check all 32 bits starting at index 0 and working your way to the left. If you can bitwise-and your a with a one bit at that position and get a non-zero value back, it means the bit is set.
#include <limits.h>
int last_set_pos(int a) {
    for (int i = 0; i < (int)(sizeof a * CHAR_BIT); ++i) {
        if (a & (1u << i)) return i;   /* 1u avoids shifting into the sign bit */
    }
    return -1; // a == 0
}
On typical systems int will be 32 bits, but doing sizeof a * CHAR_BIT will get you the right number of bits in a, even if it's a different size.
According to dbush's solution, try this:
int rightMostSet(int a){
    if (!a) return -1; // means there isn't any set bit
    int i = 0;
    while ((a & 1) == 0) {  // parentheses matter: == binds tighter than &
        i++;
        a >>= 1;            // the shift must actually update a
    }
    return i;
}
return log2(((num-1)^num)+1);
Explanation with an example: 12 = 1100
num - 1 = 11 = 1011
num ^ (num-1) = 12 ^ 11 = 7 (111)
(num ^ (num-1)) + 1 = 8 (1000)
log2(8) = 3 (the answer).
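Wrapped into a function, that one-liner might look like this (my own sketch; assumes num != 0 and <math.h>):

#include <math.h>
#include <stdio.h>

/* assumes num != 0 */
int rightmostSetPos(unsigned num) {
    /* (num-1) ^ num sets every bit up to and including the rightmost set bit;
       adding 1 turns that into a single power of two, and log2 gives its 1-based position */
    return (int)log2(((num - 1) ^ num) + 1);
}

int main(void) {
    printf("%d\n", rightmostSetPos(12));  /* prints 3 */
    return 0;
}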
x & ~(x-1) isolates the lowest bit that is one.
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    int setbit;
    unsigned long d;
    unsigned long n1;
    unsigned long n = 0xFFF7;
    double nlog2 = log(2);

    while (n)
    {
        n1 = n & (n - 1);      /* clear the lowest set bit */
        d = n - n1;            /* d is the bit that was just cleared */
        n = n1;
        setbit = log(d) / nlog2;
        printf("Set bit: %d\n", setbit);
    }
    return 0;
}
And the result is as below.
Set bit: 0
Set bit: 1
Set bit: 2
Set bit: 4
Set bit: 5
Set bit: 6
Set bit: 7
Set bit: 8
Set bit: 9
Set bit: 10
Set bit: 11
Set bit: 12
Set bit: 13
Set bit: 14
Set bit: 15
Let x be your integer input.
Bitwise AND it with 1.
If x is even, its last bit is 0, and 0 & 1 returns 0.
If x is odd, its last bit is 1, and 1 & 1 returns 1.
if ((x & 1) == 0)
{
    std::cout << "The rightmost bit is 0, i.e. even\n";
}
else
{
    std::cout << "The rightmost bit is 1, i.e. odd\n";
}
Alright, so number systems are just logarithms and exponents at work, so I'll dive into an approach that really makes sense to me.
I would prefer you read this first, because I write there about how I interpret logarithms.
When you perform the x & -x operation, you get a value whose only set bit is the rightmost set bit of x (for example, it can be 0001000 or 0000010). According to how I interpret logarithms, that isolated value is the final value after growing at a rate of 2. Now we are interested in the number of digits in this value, because whatever that is, if you subtract 1 from it, that is precisely the bit count of the set bit (the bit count begins at 0 here while the digit count begins at 1). The number of digits is precisely the number of times you expanded, plus 1 (in accordance with my logic, or just the formula I mentioned in the previous link). But since we don't really need the digits, only the bit count, and since x & -x is always an exact power of 2 (so we never have to worry about fractional results, as we would with a number like 65), just taking the base-2 logarithm of x & -x gives the bit count directly! I did see an answer before that mentioned this, but diving down into why it really works was something I felt like writing down.
P.S.: You could also count the number of digits and then subtract 1 from that to get the bit count.
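A sketch of that P.S. idea, counting the binary digits of x & -x and subtracting 1 (my own snippet; assumes x != 0):

#include <stdio.h>

/* assumes x != 0 */
int rightmost_bit_index(unsigned x) {
    unsigned isolated = x & -x;  /* e.g. 12 & -12 = 4 = 100 */
    int digits = 0;
    while (isolated) {           /* 100 has 3 binary digits */
        ++digits;
        isolated >>= 1;
    }
    return digits - 1;           /* 3 - 1 = 2, the 0-based index for x = 12 */
}

int main(void) {
    printf("%d\n", rightmost_bit_index(12));  /* prints 2 */
    return 0;
}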
The following two C functions are equivalent:
unsigned f(unsigned A, unsigned B) {
return (A | B) & -(A | B);
}
unsigned g(unsigned A, unsigned B) {
unsigned C = (A - 1) & (B - 1);
return (C + 1) & ~C;
}
My question is: why are they equivalent? What rules/transforms occur to g which transform it into f?
1a. Expression x & -x is a well-known "bit hack": it evaluates to a value that has all bits set to 0 except for one bit: the lowest 1 bit in the original value of x. (Unless x is 0, of course.)
For example, in unsigned arithmetic: 5 & -5 = 1, 4 & -4 = 4 etc.
2a. This immediately tells us what function f does: by using | operator it combines all 1 bits in A and B and then finds the lowest 1 in the combined value. In other words, the result of f is a word that contains a sole 1 bit in the position of the lowest 1 in A or B.
1b. Expression (x + 1) & ~x is a well-known "bit hack": it evaluates to all bits set to 0 except for the lowest 0 bit in the original value of x. The lowest 0 bit in x becomes the sole 1 in the resultant value. (Unless x is all-1-bits, of course.)
For example, in unsigned arithmetic: (5 + 1) & ~5 = 2, (4 + 1) & ~4 = 1, etc.
2b. Expression x - 1 replaces all trailing 0 bits in x with 1 and replaces the lowest 1 in x with 0, keeping the rest of x unchanged. Operator & combines all 0 bits (just like operator | combines all 1 bits). That means that (A - 1) & (B - 1) will have its lowest 0 bit where the lowest 1 bit was in A or B.
3b. Per 1b, (C + 1) & ~C replaces that lowest 0 with a lone 1, zeroing out everything else.
That means that g does the same thing as f. Both functions find and return the lowest 1 bit between two input values. The result is always a power of 2 (or just 0). E.g. if at least one input value is odd, the result is 1.
I have an intuitive feeling (which could be wrong) that in order to build a formal transformation of one function into the other by applying additional operations to the existing expressions, one needs at least one of these functions to be "reversible" (is some semi-informal meaning of the term). Neither of these two looks sufficiently "reversible" to me...
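Not a proof, but a quick spot-check of the equivalence is easy to run (both functions copied from the question; the test pairs are mine):

#include <stdio.h>

unsigned f(unsigned A, unsigned B) { return (A | B) & -(A | B); }

unsigned g(unsigned A, unsigned B) {
    unsigned C = (A - 1) & (B - 1);
    return (C + 1) & ~C;
}

int main(void) {
    unsigned pairs[][2] = { {5, 4}, {8, 12}, {6, 16}, {1, 1024} };
    for (int i = 0; i < 4; ++i) {
        unsigned A = pairs[i][0], B = pairs[i][1];
        /* both should print the lowest set bit of A | B: 1, 4, 2, 1 */
        printf("f=%u g=%u\n", f(A, B), g(A, B));
    }
    return 0;
}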
My solution:
Get the rightmost n bits of y:
a = ~(~0 << n) & y
Clear the n bits of x beginning at position p:
c = (~0 << (p+1) | ~(~0 << (p-n+1))) & x
Set the cleared n bits to the n rightmost bits of y:
c | (a << (p-n+1))
These are rather long statements. Do we have a better one?
E.g.:
x = 0 1 1 1 0 1 1 0 1 1 1 0
p = 4
y = 0 1 0 1 1 0 1 0 1 0
n = 3
The 3 rightmost bits of y are 0 1 0;
they will replace bits 4 down to 2 of x, which are currently 0 1 1.
I wrote a similar one:
unsigned setbits (unsigned x, int p, int n, unsigned y)
{
    return (x & ~(~(~0<<n)<<(p+1-n))) | ((y & ~(~0<<n))<<(p+1-n));
}
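A quick sanity check of the function above (the numbers are my own): setting the 3 bits at positions 4..2 of 0xFF to 010.

#include <stdio.h>

int main(void) {
    /* 0xFF = 1111 1111; bits 4..2 become 010, giving 1110 1011 = 0xEB */
    printf("%X\n", setbits(0xFF, 4, 3, 0x2));  /* prints EB */
    return 0;
}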
There are two reasonable approaches.
One is yours: Grab the low n bits of y, nuke the middle n bits of x, and "or" them into place.
The other is to build the answer from three parts: Low bits "or" middle bits "or" high bits.
I think I actually like your version better, because I bet n and p are more likely to be compile-time constants than x and y. So your answer becomes two masking operations with constants and one "or"; I doubt you will do better.
I might modify it slightly to make it easier to read:
mask = (~0 << (p+1) | ~(~0 << (p-n+1)))
result = (mask & x) | (~mask & (y << (p-n+1)))
...but this is the same speed (indeed, code) as yours when mask is a constant, and quite possibly slower when mask is a variable.
Finally, make sure you have a good reason to worry about this in the first place. Clean code is good, but for something this short, put it in a well-documented function and it does not matter that much. Fast code is good, but do not attempt to micro-optimize something like this until your profiler tells you to. (Modern CPUs do this stuff very fast; it is unlikely your application's performance is bounded by this sort of function. At the very least it is "innocent until proven guilty".)
Have a look at the following descriptive code:
int setbitsKR(int x, int p, int n, int y){
    int shiftingDistance = p - n + 1,
        bitmask = (1 << n) - 1,                                // e.g. 11 for n == 2
        digitsOfY = (y & bitmask) << shiftingDistance,         // low n bits of y, moved into place
        bitmaskShiftedToLeft = bitmask << shiftingDistance,    // e.g. 001100
        invertedBitmaskShiftedToLeft = ~bitmaskShiftedToLeft;  // e.g. 110011

    // erase those middle bits of x
    x &= invertedBitmaskShiftedToLeft;

    // add those bits from y into x
    x |= digitsOfY;

    return x;
}
In short, it creates a bitmask (a string of 1s), shifts it to that middle position of x, nukes those bits of x by &ing with a string of 0s (the inverted bitmask), and finally |s that position with the right digits of y.
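For instance, with x = 0x63, p = 5, n = 2 and y = 0x3 (numbers chosen by me), the call below turns 0110 0011 into 0111 0011:

#include <stdio.h>

int main(void) {
    /* bits 5..4 of 0x63 (0110 0011) are 10; replacing them with 11 gives 0111 0011 */
    printf("%X\n", (unsigned)setbitsKR(0x63, 5, 2, 0x3));  /* prints 73 */
    return 0;
}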
In this question, assume all integers are unsigned for simplicity.
Suppose I would like to write 2 functions, pack and unpack, which let you pack integers of smaller width into, say, a 64-bit integer. However, the location and width of the integers are given at runtime, so I can't use C bitfields.
It's quickest to explain with an example. For simplicity, I'll illustrate with 8-bit integers:
* *
bit # 8 7 6 5 4 3 2 1
myint 0 1 1 0 0 0 1 1
Suppose I want to "unpack" at location 5, an integer of width 2. These are the two bits marked with an asterisk. The result of that operation should be 0b01. Similarly, If I unpack at location 2, of width 6, I would get 0b100011.
I can write the unpack function easily with a bitshift-left followed by a bitshift right.
But I can't think of a clear way to write an equivalent "pack" function, which will do the opposite.
Say given an integer 0b11, packing it into myint (from above) at location 5 and width 2 would yield
* *
bit # 8 7 6 5 4 3 2 1
myint 0 1 1 1 0 0 1 1
The best I came up with involves a lot of concatenating bit-strings with OR, << and >>. Before I implement and test it, maybe somebody sees a clever, quick solution?
Off the top of my head, untested.
int pack(int oldPackedInteger, int bitOffset, int bitCount, int value) {
    int mask = (1 << bitCount) - 1;           /* bitCount low 1s */
    mask <<= bitOffset;                        /* move them over the target field */
    oldPackedInteger &= ~mask;                 /* clear the field */
    oldPackedInteger |= value << bitOffset;    /* drop the new value in */
    return oldPackedInteger;
}
In your example:
int value = 0x63;
value = pack(value, 4, 2, 0x3);
To write the value "3" at an offset of 4 (with two bits available) when 0x63 is the current value.