So I'm making a hash code function for this algorithm:
For each character: rotate the current value three bits left,
add the value of the character,
then XOR the result with the current hash.
Here's the code I have so far:
unsigned int hash_function(const char *k){
    unsigned int current = 0;
    unsigned int rot = 0;
    int i = 0;
    int r = 0;
    for(i = 0; i < strlen(k); i++){
        for(r = 0; r < 3; r++){
            rot = ((rot & 1 (1 << 31)) >> 31 | (rot << 1);
        }
        rot += k[i];
        current ^= rot;
        rot = current;
    }
    return current;
}
Some examples of what the algorithm should give:
"gimme" = 477003,
"shelter" = 41540041
However, this algorithm isn't giving me the correct results. I'm fairly certain I'm using the correct rotation operations, and then I follow the algorithm as written. I'm wondering if anybody can point me in the right direction.
Thanks, and hopefully I formatted this question correctly
I think you meant to put rot = ((rot & (1 << 31)) >> 31) | (rot << 1);. But the loop is unnecessary — use rot = ((rot & (7 << 29)) >> 29) | (rot << 3); instead.
This should work:
#include <string.h>

unsigned int hash_function(const char *k){
    unsigned int current = 0;
    unsigned int rot = 0;
    size_t i;
    for (i = 0; i < strlen(k); i++){
        rot = ((rot & (7 << 29)) >> 29) | (rot << 3);
        rot += k[i];
        current ^= rot;
        rot = current;
    }
    return current;
}
This was initially just a comment, but it turns out that it's the answer to your question.
On this line:
rot = ((rot & 1 (1 << 31)) >> 31 | (rot << 1);
You have an extra 1 and an extra (, which prevent it from compiling on gcc:
main.c: In function ‘hash_function’:
main.c:11:29: error: called object is not a function or function pointer
rot = ((rot & 1 (1 << 31)) >> 31 | (rot << 1);
^
main.c:11:58: error: expected ‘)’ before ‘;’ token
rot = ((rot & 1 (1 << 31)) >> 31 | (rot << 1);
^
main.c:12:9: error: expected ‘;’ before ‘}’ token
}
^
Remove those two things:
rot = (rot & (1 << 31)) >> 31 | (rot << 1);
And it works.
In the future, you may want to find a compiler which can detect errors like this. I'm actually surprised that yours didn't.
Related
I have the function below, which seems to act as it should. However, when running the program through a testing program, I am given the errors: parse error: [int xSwapped = ((255 << nShift) | (255 << mShift));] and
undeclared variable `xSwapped': [return (~xSwapped & x) | nMask | mMask;]
int dl15(int x, int n, int m){
    // calculate shifts, create masks, combine results
    // get the number of bits to shift by multiplying the byte index by 8
    // get masks by shifting 0xff by the shift amount
    // shift bits to the required position
    // combine results
    int nShift = n << 3;
    int mShift = m << 3;
    int nMask = x & (255 << nShift);
    int mMask = x & (255 << mShift);
    nMask = 255 & (nMask >> nShift);
    mMask = 255 & (mMask >> mShift);
    nMask = nMask << mShift;
    mMask = mMask << nShift;
    int xSwapped = ((255 << nShift) | (255 << mShift));
    return (~xSwapped & x) | nMask | mMask;
}
Not certain what I'm missing. Thank you.
It looks like you are using a C compiler set to an old C standard. Prior to C99 you could not put executable statements before declarations.
You can fix this by moving the declaration of xSwapped to the top:
int nShift = n << 3;
int mShift = m << 3;
int nMask = x & (255 << nShift);
int mMask = x & (255 << mShift);
int xSwapped; // Declaration
nMask = 255 & (nMask >> nShift);
mMask = 255 & (mMask >> mShift);
nMask = nMask << mShift;
mMask = mMask << nShift;
xSwapped = ((255 << nShift) | (255 << mShift)); // Assignment
return (~xSwapped & x) | nMask | mMask;
Suppose I have two 4-bit values, ABCD and abcd. How do I interleave them, so the result becomes AaBbCcDd, using bitwise operators? Example in pseudo-C:
nibble a = 0b1001;
nibble b = 0b1100;
char c = foo(a,b);
print_bits(c);
// output: 0b11010010
Note: 4 bits is just for illustration; I want to do this with two 32-bit ints.
This is called the perfect shuffle operation, and it's discussed at length in the Bible Of Bit Bashing, Hacker's Delight by Henry Warren, section 7-2 "Shuffling Bits."
Assuming x is a 32-bit integer with a in its high-order 16 bits and b in its low-order 16 bits:
unsigned int x = (a << 16) | b; /* put a and b in place */
the following straightforward C-like code accomplishes the perfect shuffle:
x = (x & 0x0000FF00) << 8 | (x >> 8) & 0x0000FF00 | x & 0xFF0000FF;
x = (x & 0x00F000F0) << 4 | (x >> 4) & 0x00F000F0 | x & 0xF00FF00F;
x = (x & 0x0C0C0C0C) << 2 | (x >> 2) & 0x0C0C0C0C | x & 0xC3C3C3C3;
x = (x & 0x22222222) << 1 | (x >> 1) & 0x22222222 | x & 0x99999999;
He also gives an alternative form which is faster on some CPUs, and (I think) a little more clear and extensible:
unsigned int t; /* an intermediate, temporary variable */
t = (x ^ (x >> 8)) & 0x0000FF00; x = x ^ t ^ (t << 8);
t = (x ^ (x >> 4)) & 0x00F000F0; x = x ^ t ^ (t << 4);
t = (x ^ (x >> 2)) & 0x0C0C0C0C; x = x ^ t ^ (t << 2);
t = (x ^ (x >> 1)) & 0x22222222; x = x ^ t ^ (t << 1);
I see you have edited your question to ask for a 64-bit result from two 32-bit inputs. I'd have to think about how to extend Warren's technique. I think it wouldn't be too hard, but I'd have to give it some thought. If someone else wanted to start here and give a 64-bit version, I'd be happy to upvote them.
EDITED FOR 64 BITS
I extended the second solution to 64 bits in a straightforward way. First I doubled the length of each of the constants. Then I added a line at the beginning to swap adjacent double-bytes and intermix them. In the following 4 lines, which are pretty much the same as the 32-bit version, the first line swaps adjacent bytes and intermixes, the second line drops down to nibbles, the third line to double-bits, and the last line to single bits.
unsigned long long int t; /* an intermediate, temporary variable */
t = (x ^ (x >> 16)) & 0x00000000FFFF0000ull; x = x ^ t ^ (t << 16);
t = (x ^ (x >> 8)) & 0x0000FF000000FF00ull; x = x ^ t ^ (t << 8);
t = (x ^ (x >> 4)) & 0x00F000F000F000F0ull; x = x ^ t ^ (t << 4);
t = (x ^ (x >> 2)) & 0x0C0C0C0C0C0C0C0Cull; x = x ^ t ^ (t << 2);
t = (x ^ (x >> 1)) & 0x2222222222222222ull; x = x ^ t ^ (t << 1);
From Stanford "Bit Twiddling Hacks" page:
https://graphics.stanford.edu/~seander/bithacks.html#InterleaveTableObvious
uint32_t x = /*...*/, y = /*...*/;
uint64_t z = 0;
for (int i = 0; i < sizeof(x) * CHAR_BIT; i++) // unroll for more speed...
{
    // cast to uint64_t so the shifted bit can reach positions above 31
    z |= (uint64_t)(x & 1U << i) << i | (uint64_t)(y & 1U << i) << (i + 1);
}
Look at the page; they propose different, faster algorithms to achieve the same result.
Like so:
#include <limits.h>
typedef unsigned int half;
typedef unsigned long long full;
full mix_bits(half a, half b)
{
    full result = 0;
    for (int i = 0; i < sizeof(half) * CHAR_BIT; i++)
        // widen to full before shifting; otherwise shifts past bit 31 overflow
        result |= ((full)((a >> i) & 1) << (2*i + 1)) | ((full)((b >> i) & 1) << (2*i + 0));
    return result;
}
Here is a loop-based solution that is hopefully more readable than some of the others already here.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
uint64_t interleave(uint32_t a, uint32_t b) {
    uint64_t result = 0;
    int i;
    for (i = 0; i < 31; i++) {
        result |= (a >> (31 - i)) & 1;
        result <<= 1;
        result |= (b >> (31 - i)) & 1;
        result <<= 1;
    }
    // Skip the last left shift.
    result |= (a >> (31 - i)) & 1;
    result <<= 1;
    result |= (b >> (31 - i)) & 1;
    return result;
}

void printBits(uint64_t a) {
    int i;
    for (i = 0; i < 64; i++)
        printf("%d", (int)((a >> (63 - i)) & 1)); // %lu is wrong for uint64_t
    puts("");
}

int main(){
    uint32_t a = 0x9;
    uint32_t b = 0x6;
    uint64_t c = interleave(a, b);
    printBits(a);
    printBits(b);
    printBits(c);
}
I have used the two tricks from this post, How do you set, clear, and toggle a single bit?: setting a bit at a particular index, and checking the bit at a particular index.
The following code is implemented using only these two operations.
int a = 0b1001;
int b = 0b1100;
long long int c = 0;
int index; // index of the bit in c
int bit, i;
// Set bits in c from left to right.
for (i = 31; i >= 0; i--)
{
    index = 2*i + 1;       // a's bit goes at this index in c
    // Check a
    bit = a & (1u << i);   // is the i-th bit set in a?
    if (bit)
        c |= 1LL << index; // set the bit in c at index
    index--;
    // Check b
    bit = b & (1u << i);   // is the i-th bit set in b?
    if (bit)
        c |= 1LL << index; // set the bit in c at index
}
printf("%lld", c);
Output: 210 which is 0b11010010
How do I return the number of consecutive ones on the left side of an integer using only bit operations in C (no if, for, while, etc.)?
I'm not sure where to begin for this problem.
//BurstSize(-1) = 32,
//BurstSize(0xFFF0F0F0) = 12
//Legal: ! ~ & ^ | + << >>
//Max ops: 50
int BurstSize(int a) {
    //code
}
If you use GCC, you could call __builtin_clz() to count leading zeros. Invert the input, and it counts leading ones instead.
int BurstSize(unsigned a) {
    return __builtin_clz(~a);
}
Note that __builtin_clz(0) is undefined, so this version breaks on an all-ones input such as BurstSize(-1); the branch-free version below returns 32 for that case.
If you cannot access __builtin_*(), then you can implement the leading zero counting function as in Hacker's Delight:
int nlz4(unsigned x) {
    int y, m, n;

    y = -(x >> 16);      // If left half of x is 0,
    m = (y >> 16) & 16;  // set n = 16. If left half
    n = 16 - m;          // is nonzero, set n = 0 and
    x = x >> m;          // shift x right 16.
    // Now x is of the form 0000xxxx.
    y = x - 0x100;       // If positions 8-15 are 0,
    m = (y >> 16) & 8;   // add 8 to n and shift x left 8.
    n = n + m;
    x = x << m;

    y = x - 0x1000;      // If positions 12-15 are 0,
    m = (y >> 16) & 4;   // add 4 to n and shift x left 4.
    n = n + m;
    x = x << m;

    y = x - 0x4000;      // If positions 14-15 are 0,
    m = (y >> 16) & 2;   // add 2 to n and shift x left 2.
    n = n + m;
    x = x << m;

    y = x >> 14;         // Set y = 0, 1, 2, or 3.
    m = y & ~(y >> 1);   // Set m = 0, 1, 2, or 2 resp.
    return n + 2 - m;
}

int BurstSize(unsigned a) {
    return nlz4(~a);
}
Easiest method: invert the number, then find the most significant bit set. The rest you can do yourself (I am 99% sure this is a homework question, so I am giving a hint only. If you really need more help, ask in the comments and I will expand further).
As for finding the most significant bit set, look at https://stackoverflow.com/a/21413883/1967396
for a fairly efficient method.
Update: now for a complete method. It finds the most significant set bit (after inverting), then uses a lookup table to convert that to a bit position, with a modulo-37 trick that didn't come from me: I found it at http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightModLookup and made a small change so it also works when all 32 bits are set. I include code to test patterns from 0 to 32 bits; it seems to work.
#include <stdio.h>
int burstSize(int n) {
    // return number of consecutive leading bits set
    unsigned int m, r;
    m = ~n;
    m = m | m >> 1;
    m = m | m >> 2;
    m = m | m >> 4;
    m = m | m >> 8;
    m = m | m >> 16;
    m = ((m ^ (m >> 1)) | 0x80000000) & m;
    static const int Mod37BitPosition[] = // map a bit value mod 37 to its position
    {
        -1, 0, 1, 26, 2, 23, 27, 0, 3, 16, 24, 30, 28, 11, 0, 13, 4,
        7, 17, 0, 25, 22, 31, 15, 29, 10, 12, 6, 0, 21, 14, 9, 5,
        20, 8, 19, 18
    };
    r = Mod37BitPosition[m % 37]; // <<<< not sure if this is allowed in your assignment...
    return 31 - r; // <<< you could rewrite the LUT so you don't need an operation here. I was lazy.
}
int main(void) {
    printf("%d\n", burstSize(0x00000000));
    printf("%d\n", burstSize(0x80000000));
    printf("%d\n", burstSize(0xC0000000));
    printf("%d\n", burstSize(0xE0000000));
    printf("%d\n", burstSize(0xF0000000));
    printf("%d\n", burstSize(0xF8000000));
    printf("%d\n", burstSize(0xFC000000));
    printf("%d\n", burstSize(0xFE000000));
    printf("%d\n", burstSize(0xFF000000));
    printf("%d\n", burstSize(0xFF800000));
    printf("%d\n", burstSize(0xFFC00000));
    printf("%d\n", burstSize(0xFFE00000));
    printf("%d\n", burstSize(0xFFF00000));
    printf("%d\n", burstSize(0xFFF80000));
    printf("%d\n", burstSize(0xFFFC0000));
    printf("%d\n", burstSize(0xFFFE0000));
    printf("%d\n", burstSize(0xFFFF0000));
    printf("%d\n", burstSize(0xFFFFF800));
    printf("%d\n", burstSize(0xFFFFFC00));
    printf("%d\n", burstSize(0xFFFFFE00));
    printf("%d\n", burstSize(0xFFFFFF00));
    printf("%d\n", burstSize(0xFFFFFFF8));
    printf("%d\n", burstSize(0xFFFFFFFC));
    printf("%d\n", burstSize(0xFFFFFFFE));
    printf("%d\n", burstSize(0xFFFFFFFF));
}
Try this (it works on a 16-bit value; extend the same pattern for more bits):
unsigned int A=0XFFF0; //your own number
unsigned int B0=(1 & A)/1;
unsigned int B1=(2 & A)/2;
unsigned int B2=(4 & A)/4;
unsigned int B3=(8 & A)/8;
unsigned int B4=(16 & A)/16;
unsigned int B5=(32 & A)/32;
unsigned int B6=(64 & A)/64;
unsigned int B7=(128 & A)/128;
unsigned int B8=(256 & A)/256;
unsigned int B9=(512 & A)/512;
unsigned int B10=(1024 & A)/1024;
unsigned int B11=(2048 & A)/2048;
unsigned int B12=(4096 & A)/4096;
unsigned int B13=(8192 & A)/8192;
unsigned int B14=(16384 & A)/16384;
unsigned int B15=(32768 & A)/32768;
int Result=B15+
B14*(B15)+
B13*(B14*B15)+
B12*(B13*B14*B15)+
B11*(B12*B13*B14*B15)+
B10*(B11*B12*B13*B14*B15)+
B9*(B10*B11*B12*B13*B14*B15)+
B8*(B9*B10*B11*B12*B13*B14*B15)+
B7*(B8*B9*B10*B11*B12*B13*B14*B15)+
B6*(B7*B8*B9*B10*B11*B12*B13*B14*B15)+
B5*(B6*B7*B8*B9*B10*B11*B12*B13*B14*B15)+
B4*(B5*B6*B7*B8*B9*B10*B11*B12*B13*B14*B15)+
B3*(B4*B5*B6*B7*B8*B9*B10*B11*B12*B13*B14*B15)+
B2*(B3*B4*B5*B6*B7*B8*B9*B10*B11*B12*B13*B14*B15)+
B1*(B2*B3*B4*B5*B6*B7*B8*B9*B10*B11*B12*B13*B14*B15)+
B0*(B1*B2*B3*B4*B5*B6*B7*B8*B9*B10*B11*B12*B13*B14*B15);
The following answer reverses the number to do the operation. It doesn't use the % operator, though it does use one - sign:
#include <iostream>
int burstSize(int n) {
    // Reverse the bits
    n = (n & 0x55555555) << 1 | (n & 0xAAAAAAAA) >> 1;
    n = (n & 0x33333333) << 2 | (n & 0xCCCCCCCC) >> 2;
    n = (n & 0x0F0F0F0F) << 4 | (n & 0xF0F0F0F0) >> 4;
    n = (n & 0x00FF00FF) << 8 | (n & 0xFF00FF00) >> 8;
    n = (n & 0x0000FFFF) << 16 | (n & 0xFFFF0000) >> 16;
    // rightmost 0-bit, produces 0 if none (e.g., 10100111 -> 00001000)
    int r = ~n & (n + 1);
    // r - 1 gives us a mask of the consecutive 1s to isolate (e.g., 0100 -> 0011)
    n = (r - 1) & n;
    // Count the bits
    n = (n & 0x55555555) + ((n >> 1) & 0x55555555);
    n = (n & 0x33333333) + ((n >> 2) & 0x33333333);
    n = (n & 0x0F0F0F0F) + ((n >> 4) & 0x0F0F0F0F);
    n = (n & 0x00FF00FF) + ((n >> 8) & 0x00FF00FF);
    n = (n & 0x0000FFFF) + ((n >> 16) & 0x0000FFFF);
    // Return the bit count
    return n;
}
int main() {
    std::cout << burstSize(0x00000000) << std::endl; // 0
    std::cout << burstSize(0x00010F00) << std::endl; // 0
    std::cout << burstSize(0x80010F00) << std::endl; // 1
    std::cout << burstSize(0xF0010F00) << std::endl; // 4
    std::cout << burstSize(0xFFFFFFFE) << std::endl; // 31
    std::cout << burstSize(0xFFFFFFFF) << std::endl; // 32
    return 0;
}
For homework, using C, I'm supposed to make a program that finds the log base 2 of a number greater than 0 using only the operators ! ~ & ^ | + << >>. I know that I'm supposed to shift right a number of times, but I don't know how to keep track of the count without any loops or ifs. I've been stuck on this question for days, so any help is appreciated.
int ilog2(int x) {
    x = x | (x >> 1);
    x = x | (x >> 2);
    x = x | (x >> 4);
    x = x | (x >> 8);
    x = x | (x >> 16);
}
This is what I have so far. It smears the most significant bit down through all the lower positions.
Assumes a 32-bit unsigned int :
unsigned int ulog2 (unsigned int u)
{
    unsigned int s, t;

    t = (u > 0xffff) << 4; u >>= t;
    s = (u > 0xff  ) << 3; u >>= s, t |= s;
    s = (u > 0xf   ) << 2; u >>= s, t |= s;
    s = (u > 0x3   ) << 1; u >>= s, t |= s;

    return (t | (u >> 1));
}
Since I assumed >, I thought I'd find a way to get rid of it.
(u > 0xffff) is equivalent to: ((u >> 16) != 0). If subtract borrows:
((u >> 16) - 1) will set the msb, iff (u <= 0xffff). Replace -1 with +(~0) (allowed).
So the condition: (u > 0xffff) is replaced with: (~((u >> 16) + ~0U)) >> 31
unsigned int ulog2 (unsigned int u)
{
    unsigned int r = 0, t;

    t = ((~((u >> 16) + ~0U)) >> 27) & 0x10;
    r |= t, u >>= t;
    t = ((~((u >> 8) + ~0U)) >> 28) & 0x8;
    r |= t, u >>= t;
    t = ((~((u >> 4) + ~0U)) >> 29) & 0x4;
    r |= t, u >>= t;
    t = ((~((u >> 2) + ~0U)) >> 30) & 0x2;
    r |= t, u >>= t;

    return (r | (u >> 1));
}
This gets the floor of logbase2 of a number.
int ilog2(int x) {
    int i, j, k, l, m;
    x = x | (x >> 1);
    x = x | (x >> 2);
    x = x | (x >> 4);
    x = x | (x >> 8);
    x = x | (x >> 16);
    // i = 0x55555555
    i = 0x55 | (0x55 << 8);
    i = i | (i << 16);
    // j = 0x33333333
    j = 0x33 | (0x33 << 8);
    j = j | (j << 16);
    // k = 0x0f0f0f0f
    k = 0x0f | (0x0f << 8);
    k = k | (k << 16);
    // l = 0x00ff00ff
    l = 0xff | (0xff << 16);
    // m = 0x0000ffff
    m = 0xff | (0xff << 8);
    x = (x & i) + ((x >> 1) & i);
    x = (x & j) + ((x >> 2) & j);
    x = (x & k) + ((x >> 4) & k);
    x = (x & l) + ((x >> 8) & l);
    x = (x & m) + ((x >> 16) & m);
    x = x + ~0;
    return x;
}
Your result is simply the position of the highest set bit.
int log2_floor (int x)
{
    int res = -1;
    while (x) { res++; x = x >> 1; }
    return res;
}
One possible solution is the following method.
It is based on the additivity of logarithms:
log2(2^n * x) = log2(x) + n
Let x0 be a number of 2n bits (for instance, n = 16 for 32 bits).
If x0 >= 2^n, we can define x1 so that
x0 = 2^n * x1 (plus a remainder that does not affect the integer part of the log)
and we can say that
E(log2(x0)) = n + E(log2(x1))
We can compute x1 with a binary shift:
x1 = x0 >> n
Otherwise we can simply set x1 = x0.
We are now facing the same problem with the remaining upper or lower half of x0.
By splitting x in half at each step, we can eventually compute E(log2(x)):
int log2_floor (unsigned x)
{
#define MSB_HIGHER_THAN(n) (x & (~((1 << n) - 1)))
    int res = 0;
    if MSB_HIGHER_THAN(16) { res += 16; x >>= 16; }
    if MSB_HIGHER_THAN( 8) { res +=  8; x >>=  8; }
    if MSB_HIGHER_THAN( 4) { res +=  4; x >>=  4; }
    if MSB_HIGHER_THAN( 2) { res +=  2; x >>=  2; }
    if MSB_HIGHER_THAN( 1) { res +=  1; }
    return res;
}
Since your sadistic teacher said you can't use loops, we can hack our way around by computing a value that will be n in case of positive test and 0 otherwise, thus having no effect on addition or shift:
#define N_IF_MSB_HIGHER_THAN_N_OR_ELSE_0(n) (((-(x>>n))>>n)&n)
If the - operator is also forbidden by your psychopathic teacher (which is stupid, since processors handle 2's complement just as well as bitwise operations), you can use -x = ~x+1 in the above formula:
#define N_IF_MSB_HIGHER_THAN_N_OR_ELSE_0_WITH_NO_MINUS(n) (((~(x>>n)+1)>>n)&n)
which we will shorten to NIMHTNOE0WNM for readability.
Also, we will use | instead of + since we know there will be no carry.
Here the example is for 32 bits integers, but you could make it work on 64, 128, 256, 512 or 1024 bits integers if you managed to find a language that supports that big an integer value.
int log2_floor (unsigned x)
{
#define NIMHTNOE0WNM(n) (((~(x >> n) + 1) >> n) & n)
    int res, n;
    n = NIMHTNOE0WNM(16); res  = n; x >>= n;
    n = NIMHTNOE0WNM( 8); res |= n; x >>= n;
    n = NIMHTNOE0WNM( 4); res |= n; x >>= n;
    n = NIMHTNOE0WNM( 2); res |= n; x >>= n;
    n = NIMHTNOE0WNM( 1); res |= n;
    return res;
}
Ah, but maybe you were forbidden to use #define too?
In that case, I cannot do much more for you, except advise you to flog your teacher to death with an old edition of the K&R.
This leads to useless, obfuscated code that gives off a strong smell of unwashed 70's hackers.
Most if not all processors implement specific "count leading zeroes" instructions (for instance, clz on ARM, bsr on x86 or cntlz on PowerPC) that can do the trick without all this fuss.
If you're allowed to use &, then can you use &&? With it you can do conditionals without needing if:
if (cond)
    doSomething();
can be done with
cond && doSomething();
Otherwise, if you want to assign a value conditionally, like value = cond ? a : b;, then you may do it with a mask:
mask = -(cond != 0); // assuming int is a 2's complement 32-bit type
// or mask = ((cond != 0) << 31) >> 31;
value = (mask & a) | (~mask & b);
There are many other ways in the bithacks page:
int v; // 32-bit integer to find the log base 2 of
int r; // result of log_2(v) goes here
union { unsigned int u[2]; double d; } t; // temp
t.u[__FLOAT_WORD_ORDER==LITTLE_ENDIAN] = 0x43300000;
t.u[__FLOAT_WORD_ORDER!=LITTLE_ENDIAN] = v;
t.d -= 4503599627370496.0;
r = (t.u[__FLOAT_WORD_ORDER==LITTLE_ENDIAN] >> 20) - 0x3FF;
or
unsigned int v; // 32-bit value to find the log2 of
register unsigned int r; // result of log2(v) will go here
register unsigned int shift;
r = (v > 0xFFFF) << 4; v >>= r;
shift = (v > 0xFF ) << 3; v >>= shift; r |= shift;
shift = (v > 0xF ) << 2; v >>= shift; r |= shift;
shift = (v > 0x3 ) << 1; v >>= shift; r |= shift;
r |= (v >> 1);
Another way:
uint32_t v; // find the log base 2 of 32-bit v
int r; // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = MultiplyDeBruijnBitPosition[(uint32_t)(v * 0x07C4ACDDU) >> 27];
The question is equivalent to "find the highest set bit of the binary number".
STEP 1: smear the highest 1 bit into all the lower positions,
e.g. 0x07000000 becomes 0x07ffffff:
x = x | (x >> 1);
x = x | (x >> 2);
x = x | (x >> 4);
x = x | (x >> 8);
x = x | (x >> 16); // number of ops = 10
STEP 2: count the number of 1's in the word, then subtract 1.
Reference: Hamming weight
// use bitCount
int m1 = 0x55; // 01010101...
m1 = (m1 << 8) + 0x55;
m1 = (m1 << 8) + 0x55;
m1 = (m1 << 8) + 0x55;
int m2 = 0x33; // 00110011...
m2 = (m2 << 8) + 0x33;
m2 = (m2 << 8) + 0x33;
m2 = (m2 << 8) + 0x33;
int m3 = 0x0f; // 00001111...
m3 = (m3 << 8) + 0x0f;
m3 = (m3 << 8) + 0x0f;
m3 = (m3 << 8) + 0x0f;
x = x + (~((x>>1) & m1) + 1); // x - ((x>>1) & m1)
x = (x & m2) + ((x >> 2) & m2);
x = (x + (x >> 4)) & m3;
// x = (x & m3) + ((x >> 4) & m3);
x += x>>8;
x += x>>16;
int bitCount = x & 0x3f; // max 100000(2) = 32(10)
// Number of ops: 35 + 10 = 45
return bitCount + ~0;
This is how I do it. Thank you~
I was also assigned this problem for homework and I spent a significant amount of time thinking about it, so I thought I'd share what I came up with. This works with integers on a 32-bit machine. !!x evaluates to 0 if x is zero and 1 otherwise.
int ilog2(int x) {
    int byte_count = 0;
    int y = 0;
    // Shift right 8
    y = x >> 0x8;
    byte_count += ((!!y) << 3);
    // Shift right 16
    y = x >> 0x10;
    byte_count += ((!!y) << 3);
    // Shift right 24 and mask to adjust for arithmetic shift
    y = (x >> 0x18) & 0xff;
    byte_count += ((!!y) << 3);
    x = (x >> byte_count) & 0xff;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1;
    byte_count += !!x;
    x = x >> 1; // 8
    byte_count += !!x;
    return byte_count;
}
How do I reverse the bits of a number using bitwise operators in C?
E.g.:
input: 10010101
output: 10101001
If it's just 8 bits:
u_char in = 0x95;
u_char out = 0;
for (int i = 0; i < 8; ++i) {
    out <<= 1;
    out |= (in & 0x01);
    in >>= 1;
}
Or for bonus points:
u_char in = 0x95;
u_char out = in;
out = (out & 0xaa) >> 1 | (out & 0x55) << 1;
out = (out & 0xcc) >> 2 | (out & 0x33) << 2;
out = (out & 0xf0) >> 4 | (out & 0x0f) << 4;
figuring out how the last one works is an exercise for the reader ;-)
Knuth has a section on Bit reversal in The Art of Computer Programming Vol 4A, bitwise tricks and techniques.
To reverse the bits of a 32 bit number in a divide and conquer fashion he uses magic constants
u0 = 0101010101010101, (from -1/(2+1))
u1 = 0011001100110011, (from -1/(4+1))
u2 = 0000111100001111, (from -1/(16+1))
u3 = 0000000011111111, (from -1/(256+1))
Method credited to Henry Warren Jr., Hacker's Delight.
unsigned int u0 = 0x55555555;
x = (((x >> 1) & u0) | ((x & u0) << 1));
unsigned int u1 = 0x33333333;
x = (((x >> 2) & u1) | ((x & u1) << 2));
unsigned int u2 = 0x0f0f0f0f;
x = (((x >> 4) & u2) | ((x & u2) << 4));
unsigned int u3 = 0x00ff00ff;
x = (((x >> 8) & u3) | ((x & u3) << 8));
x = (x >> 16) | (x << 16); // reversed; the "mod 0x100000000" happens automatically for 32-bit unsigned
The 16 and 8 bit cases are left as an exercise to the reader.
Well, this might not be the most elegant solution but it is a solution:
int reverseBits(int x) {
    unsigned int res = 0;
    unsigned int mask;
    int len = sizeof(x) * 8; // no of bits to reverse
    int i, shift;
    for (i = 0; i < len; i++) {
        mask = 1u << i;      // which bit we are at
        shift = len - 2*i - 1;
        mask &= x;
        // unsigned math, so the right shift doesn't sign-extend bit 31
        mask = (shift > 0) ? mask << shift : mask >> -shift;
        res |= mask;         // move the bit we work at to its mirrored position
    }
    return (int)res;
}
Tested it on a sheet of paper and it seemed to work :D
Edit: Yeah, this is indeed very complicated. I don't know why, but I wanted to find a solution without touching the input, so this is what came to my head.