Bitwise Shifting in C

I've recently decided to undertake an SMS project for sending and receiving SMS through a mobile.
The data is sent in PDU format, so I am required to convert ASCII characters to 7-bit GSM alphabet characters. To do this I've come across several examples, such as http://www.dreamfabric.com/sms/hello.html
This example shows the rightmost bits of the second septet being inserted into the first septet to create an octet.
Plain bitwise shifts do not do this, since >> just inserts zeros from the left and << from the right. As I understand it, I need some kind of bitwise rotate to achieve this - can anyone tell me how to move bits from the right-hand side and insert them on the left?
Thanks,

Here is a quick algorithm to do that:
int b1, bnext;
int mod;
int pand;
char *b;      // this is your byte buffer, with message content
int blen;     // this is your byte buffer length
char sb[160];
int totchars = 0;

b1 = bnext = 0;
for (int i = 0; i < blen; i++) {
    mod = i % 7;
    pand = 0xff >> (mod + 1);
    b1 = ((b[i] & pand) << mod) | bnext;
    bnext = (0xff & b[i]) >> (7 - mod);
    sb[totchars++] = (char)b1;
    if (mod == 6) {
        sb[totchars++] = (char)bnext;
        bnext = 0;
    }
}
sb[totchars] = 0;
It will convert (unpack) 7-bit compressed buffers to regular ASCII char arrays in C.
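The question is actually about the opposite, packing direction (inserting the rightmost bits of the second septet into the first to form an octet). As a complement to the unpacking code above, here is a minimal sketch of packing; pack7, src and dst are illustrative names, the code is not from the original answer, and it assumes src already holds one 7-bit GSM character per byte:

/* Hypothetical sketch: pack one septet per byte from src[0..n-1]
   into dst (GSM 03.38 style). Returns the number of octets written. */
int pack7(const unsigned char *src, int n, unsigned char *dst)
{
    int out = 0;
    for (int i = 0; i < n; i++) {
        int shift = i % 8;
        if (shift == 7)
            continue;                        /* this septet was fully consumed by the previous octet */
        unsigned char cur  = src[i] >> shift;              /* low bits of the current septet */
        unsigned char next = (i + 1 < n) ? src[i + 1] : 0; /* septet whose low bits fill the top */
        dst[out++] = cur | (unsigned char)(next << (7 - shift));
    }
    return out;
}

For "hello" this produces the E8 32 9B FD 06 sequence shown on the dreamfabric page.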

can anyone tell me how to move bits from the right-hand side and insert them on the left?
There are indirect ways in C but I'd simply do it like this:
int main(void)
{
    int x = 0xBAADC0DE;
    __asm
    {
        mov eax, x;
        rol eax, 1;
    }
    return 0;
}
This will rotate (not shift!) the bits to the left (one step).
"ror" will rotate to the right.

Related

Encoding number through editing last bit of array elements

I'm trying to write a simple encoding program in C and I definitely have something wrong with my bitwise operations, so I tried to write a simplified version to fix that mistake - so far it's still not working. I have an encoding and a decoding method where, given a "key", I encode one number by hiding bits of it in a large array of unsigned ints.
I hide it by using srand(key) (so that I can generate the same numbers afterwards with the same key) to choose array elements, then taking one bit of the number at a time (iterating through all of them) and swapping the least significant bit of the chosen array element for that bit of the number.
In the decode method I try to reverse the steps, get all the bits from the array elements back and glue them together to get back the original number.
That's the code I have so far:
unsigned int * encode(unsigned int * original_array, char * message, unsigned int mSize, unsigned int secret) { //disregard message, that's the later part; for now just encoding mSize - size of message
    int size = MAX; //number of elements in array, MAX defined at top
    int i, j, tmp;
    unsigned int *array;
    srand(secret); //seed rand with the given key
    array = (unsigned int *)malloc(MAX*sizeof(unsigned int));
    //copy from original_array to array
    for (i=0; i<MAX; i++){
        array[i] = original_array[i];
    }
    //encode message length first. it's an unsigned int, therefore its size is 4 bytes - 32 bits.
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if (((mSize >> i) & 1)==1)                 //check if the bit is 1
            array[tmp] = (1 << 0) | array[tmp];    // then write 1 as last bit
        else                                       //else bit is 0
            array[tmp] = array[tmp] & (~(1 << 0)); //write 0 as last bit
    }
    return array;
}
unsigned int decode(unsigned int * im, unsigned int secret) {
    char * message;
    int i, tmp;
    unsigned int result = 2;
    int size = MAX;
    srand(secret);
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if (((im[tmp] << 0) & 1)==1)
            result = (1 >> i) | result;
        else
            result = result & (~(1 >> i));
    }//last
    return result;
}
However, running it and trying to print the decoded result gives me 2, which is the dummy value I gave to result in decode() - therefore I know that at least my method of recovering the changed bits is clearly not working. Unfortunately, since decoding is not working, I have no idea whether encoding actually works and I can't seem to pinpoint the mistake.
I'm trying to understand how the hiding of such bits works since, ultimately, I want to hide an entire message in a slightly more complicated structure than an array, but first I wanted to get it working on a simpler level since I have trouble working with bitwise operators.
Edit: Through some debugging I think the encoding function works correctly - or at least it does seem to change the array elements by one sometimes, which would indicate flipping one bit when the conditions are met.
Decoding doesn't seem to affect the result variable at all - it doesn't change throughout all the bitwise operations and I don't know why.
The main part of the encode function is the following, which is the same as your original, just tidied up a little by removing the unnecessary 0 shifts and brackets:
//encode message length first. it's an unsigned int, therefore its size is 4 bytes - 32 bits.
for (i=0; i<32; i++){
    tmp = rand() % size;
    if (((mSize >> i) & 1)==1)  //check if the bit is 1
        array[tmp] |= 1;        // then write 1 as last bit
    else                        //else bit is 0
        array[tmp] &= ~1;       //write 0 as last bit
}
The problem you have is that when you set the last bit to either 1 or 0, you effectively lose information: there is no way of telling what the original last bit was, and so you will not be able to reverse the operation.
In short, the decode function will never work, as the encode function is not invertible.
EDIT
Following on from your comment, I would say the following about the decode function (again tidied up; this should be the same as the original):
unsigned int decode(unsigned int * im, unsigned int secret) {
    char * message;
    int i, tmp;
    unsigned int result = 2;
    int size = MAX;
    srand(secret);
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if ((im[tmp] & 1)==1)
            result |= 1 >> i;
        else
            result &= ~(1 >> i);
    }//last
    return result;
}
The thing to note here is that for all values of i > 0 the following will apply:
1 >> i
is the same as
0
This means that for the majority of your loop the code will be doing the following:
if ((im[tmp] & 1)==1)
    result |= 0;
else
    result &= ~0;
And since 2 | 0 = 2 and 2 & ~0 = 2, regardless of which branch of the if is executed the result will always be 2. The same would hold for any even starting value.
When i = 0 then the following is the case:
if ((im[tmp] & 1)==1)
    result |= 1;
else
    result &= ~1;
And so, since 2 | 1 = 3 and 2 & ~1 = 2, your decode function will only ever return 2 or occasionally 3.
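Given that analysis, a minimal fix for the decode loop (a sketch based on the diagnosis above, not code from the original answer) is to start result at 0 and shift the mask left rather than right, mirroring what the encode loop does:

unsigned int decode(unsigned int * im, unsigned int secret) {
    int i, tmp;
    unsigned int result = 0;      /* start empty instead of 2 */
    int size = MAX;
    srand(secret);                /* same seed => same sequence of elements as encode */
    for (i=0; i<32; i++){
        tmp = rand() % size;
        if ((im[tmp] & 1)==1)
            result |= 1u << i;    /* << i, not >> i */
        /* the bit is already 0, so no else branch is needed */
    }
    return result;
}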

Bitwise Operations in a Steganography Program (C)

I'm relatively new to both C and bitwise operations and I'm having trouble with an assignment I've been given in my class. A majority of the code has been given to me, but I've been having issues figuring out a part pertaining to bitwise operations. Once I figure this part out, I'll be golden. I hope that someone can help!
Here is the excerpt from my assignment:
You will need to use 8 bytes of an image to hide one byte of information (remember that only LSBs of the cover image can be modified). You will use the rest of the 16 bytes of the cover image to embed 16 bits of b.size (two least significant bytes of the size field for the binary data), next 32 bytes of the cover will be used to embed the file extension
for the payload file, and after that you will use 8*b.size bytes to embed the payload (b.data).
What this program is doing is steganography on an image, and I have to modify the least significant bits of the image read in using data from a file that I created. Like I said, all of the code for that is already written. I just can't figure out how to modify the LSBs. Any help would be greatly appreciated!!!
The functions I have to use for reformatting the LSBs are as follows:
byte getlsbs(byte *b);
void setlsbs(byte *b, byte b0);
This is what I've attempted thus far:
/* In main function */
b0 = getlsbs(&img.gray[0])
/* Passing arguments */
byte getlsbs(byte *b)
{
    byte b0;
    b0[0] = b >> 8;
    return b0;
}
I'm honestly at a complete loss. I've been working on this all night and I still barely have made headway.
To set LSB of b to 1:
b |= 1;
To set LSB of b to 0:
b &= 0xFE;
Here is an idea how the functions could be implemented. This code is not tested.
byte getlsbs(byte *b)
{
    byte result = 0;
    for (int i = 0; i < 8; ++i)
    {
        result >>= 1;
        if (*b & 1)
            result |= 0x80;
        ++b;
    }
    return result;
}

void setlsbs(byte *b, byte b0)
{
    for (int i = 0; i < 8; ++i)
    {
        if (b0 & 1)
            *b |= 1;
        else
            *b &= 0xFE;
        ++b;
        b0 >>= 1;
    }
}
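A quick way to sanity-check the pair (assuming byte is a typedef for unsigned char, as the assignment presumably provides, and that the two functions above are in scope) is to hide a byte in eight cover bytes and read it back:

int main(void)
{
    byte cover[8] = {10, 20, 30, 40, 50, 60, 70, 80};

    setlsbs(cover, 'A');          /* spread the 8 bits of 'A' over the 8 LSBs */
    byte back = getlsbs(cover);   /* collect them back into a single byte */

    /* back should now equal 'A'; each cover byte changed by at most 1 */
    return back == 'A' ? 0 : 1;
}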

Reversing two bits at a time in an array

I am new to bit manipulation.
My friend recently asked me this in an interview.
Given an array of bytes
Eg: 1000100101010101 | 001010011100
We need to flip it two bits at a time, horizontally, in place.
So the new array should be:
1000 | 0101 and so on.
I think we start from the middle (marked by | here) and continue our way outwards taking two bits at a time.
I know how to reverse the bits in a number one at a time, like this:
unsigned int reverse(unsigned int num)
{
    unsigned int x = sizeof(num) * 8;
    unsigned int reverse_num = 0, i, temp;
    for (i = 0; i < x; i++)
    {
        temp = (num & (1 << i));
        if (temp)
            reverse_num |= (1 << ((x - 1) - i));
    }
    return reverse_num;
}
But I wonder how we can reverse two bits at a time efficiently, in place.
Thanks in advance.
I'd just do a whole byte (or more) at once:
output = (input & 0x55) << 1;
output |= (input & 0xAA) >> 1;
The "trick" way to do this is to precompute a table of bytes with the bits flipped. Then you can just index the table using a byte from the array, and write it back. As others have said - how many bits are in your bytes here?

Finding trailing 0s in a binary number

How do I find the number of trailing 0s in a binary number? Based on the K&R bitcount example of counting the 1s in a binary number, I modified it a bit to find the trailing 0s.
int bitcount(unsigned x)
{
    int b;
    for (b = 0; x != 0; x >>= 1)
    {
        if (x & 01)
            break;
        else
            b++;
    }
    return b;
}
I would like this method to be reviewed.
Here's a way to compute the count in parallel for better efficiency:
unsigned int v; // 32-bit word input to count zero bits on right
unsigned int c = 32; // c will be the number of zero bits on the right
v &= -v; // isolate the lowest set bit
if (v) c--;
if (v & 0x0000FFFF) c -= 16;
if (v & 0x00FF00FF) c -= 8;
if (v & 0x0F0F0F0F) c -= 4;
if (v & 0x33333333) c -= 2;
if (v & 0x55555555) c -= 1;
With GCC on the x86 platform you can use __builtin_ctz(n).
With Microsoft compilers for x86 you can use _BitScanForward.
They both emit a bsf instruction.
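Note that both are undefined (or report an undefined index) when the argument is 0, so a small wrapper is useful. This sketch covers only the GCC/Clang builtin; the MSVC intrinsic works differently, returning the index through an output pointer:

/* Count trailing zeros with the GCC/Clang builtin; the result for 0 is
   defined here as the full word width, since __builtin_ctz(0) is undefined. */
static int trailing_zeros(unsigned x)
{
    return x ? __builtin_ctz(x) : (int)(sizeof x * 8);
}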
Another approach (I'm surprised it's not mentioned here) would be to build a table of 256 integers, where each element in the array is the lowest 1 bit for that index. Then, for each byte in the integer, you look up in the table.
Something like this (I haven't taken any time to tweak this, this is just to roughly illustrate the idea):
int bitcount(unsigned x)
{
    static const unsigned char table[256] = { /* TODO: populate with constants */ };
    for (int i = 0; i < sizeof(x); ++i, x >>= 8)
    {
        unsigned char r = table[x & 0xff];
        if (r)
            return r + i*8; // Found a 1...
    }
    // All zeroes...
    return sizeof(x)*8;
}
The idea with some of the table-driven approaches to a problem like this is that if statements cost you something in terms of branch prediction, so you should aim to reduce them. It also reduces the number of bit shifts. Your approach does an if statement and a shift per bit, and this one does one per byte. (Hopefully the optimizer can unroll the for loop, and not issue a compare/jump for that.) Some of the other answers have even fewer if statements than this, but a table approach is simple and easy to understand. Of course you should be guided by actual measurements to see if any of this matters.
I think your method is working (although you might want to use unsigned int). You check the last digit each time, and if it's zero, you discard it and increment the number of trailing zero bits.
I think for trailing zeroes you don't need a loop.
Consider the following:
What happens with the number (in binary representation, of course) if you subtract 1? Which digits change, which stay the same?
How could you combine the original number and the decremented version such that only bits representing trailing zeroes are left?
If you apply the above steps correctly, you can then just find the highest bit set in O(lg n) steps (look here if you're interested in how to do that).
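To make the hints concrete, here is one possible reading (a sketch, not necessarily what the answerer had in mind): subtracting 1 turns all the trailing zeros into ones, so (x - 1) & ~x keeps exactly those positions, and a branch-free popcount counts them instead of locating the highest one:

/* One concrete reading of the hint: (x - 1) & ~x sets exactly the
   trailing-zero positions; a parallel popcount then counts them.
   Assumes a 32-bit unsigned; returns 32 for x == 0. */
int trailing_zeros(unsigned x)
{
    unsigned m = (x - 1) & ~x;           /* mask of the trailing zeros of x */
    m = m - ((m >> 1) & 0x55555555u);    /* standard parallel popcount below */
    m = (m & 0x33333333u) + ((m >> 2) & 0x33333333u);
    m = (m + (m >> 4)) & 0x0F0F0F0Fu;
    return (int)((m * 0x01010101u) >> 24);
}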
Should be:
int bitcount(unsigned char x)
{
    int b;
    for (b = 0; b < 7; x >>= 1)
    {
        if (x & 1)
            break;
        else
            b++;
    }
    return b;
}
or even
int bitcount(unsigned char x)
{
    int b;
    for (b = 0; b < 7 && !(x & 1); x >>= 1) b++;
    return b;
}
or even (yay!)
int bitcount(unsigned char x)
{
    int b;
    for (b = 0; b < 7 && !(x & 1); b++) x >>= 1;
    return b;
}
or ...
Ah, whatever, there are millions of methods for doing this. Use whichever you need or like.
We can easily get it using bit operations; we don't need to go through all the bits. Pseudo code:
int bitcount(unsigned x) {
    int xor = x ^ (x-1); // this will have (1 + #trailing 0s) trailing 1s
    return log(x & xor); // x & xor will have only one bit set, and its log gives the exact number of zeroes
}
int countTrailZero(unsigned x) {
    if (x == 0) return DEFAULT_VALUE_YOU_NEED;
    return log2(x & -x);
}
Explanation:
x & -x isolates the rightmost set bit of x.
e.g. 6 -> "0000,0110", (6 & -6) -> "0000,0010"
You can derive this from two's complement:
x = "a1b", where b represents all trailing zeros.
then
-x = !(x) + 1 = !(a1b) + 1 = (!a)0(!b) + 1 = (!a)0(1...1) + 1 = (!a)1(0...0) = (!a)1b
so
x & (-x) = (a1b) & (!a)1b = (0...0)1(0...0)
You can get the number of trailing zeros just by taking log2.
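Since x & -x is always either 0 or a power of two, its base-2 log can also be taken in plain integer arithmetic instead of calling the floating-point log2. A minimal sketch (not from the original answer; lookup tables or compiler builtins would be faster in practice):

/* Integer log2 of a power of two, e.g. ilog2(0x20) == 5. */
static int ilog2(unsigned v)   /* v must be a power of two */
{
    int log = 0;
    while (v >>= 1)            /* shift down until only bit 0 remains */
        log++;
    return log;
}

/* countTrailZero(x) would then be ilog2(x & -x) for nonzero x. */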

Bit reversal of an integer, ignoring integer size and endianness

Given an integer typedef:
typedef unsigned int TYPE;
or
typedef unsigned long TYPE;
I have the following code to reverse the bits of an integer:
TYPE max_bit= (TYPE)-1;

void reverse_int_setup()
{
    TYPE bits= (TYPE)max_bit;
    while (bits <<= 1)
        max_bit= bits;
}

TYPE reverse_int(TYPE arg)
{
    TYPE bit_setter= 1, bit_tester= max_bit, result= 0;
    for (result= 0; bit_tester; bit_tester>>= 1, bit_setter<<= 1)
        if (arg & bit_tester)
            result|= bit_setter;
    return result;
}
One just needs to run reverse_int_setup() first, which stores an integer with only the highest bit turned on; then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key for a binary tree, taken from an increasing counter, but that's more or less irrelevant).
Is there a platform-agnostic way to obtain, at compile time, the correct value for max_bit that reverse_int_setup() computes? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()?
Thanks.
#include <stdio.h>
#include <limits.h>

typedef unsigned long TYPE;
#define TYPE_BITS (sizeof(TYPE)*CHAR_BIT)

TYPE reverser(TYPE n)
{
    TYPE nrev = 0, i, bit1, bit2;
    int count;

    for(i = 0; i < TYPE_BITS; i += 2)
    {
        /*In each iteration, we swap one bit on the 'right half'
          of the number with another on the left half*/
        count = TYPE_BITS - i - 1; /*this is used to find how many positions
                                     to the left (and right) we gotta move
                                     the bits in this iteration*/
        bit1 = n & ((TYPE)1<<(i/2)); /*Extract 'right half' bit*/
        bit1 <<= count;              /*Shift it to where it belongs*/
        bit2 = n & ((TYPE)1<<((i/2) + count)); /*Find the 'left half' bit*/
        bit2 >>= count;              /*Place that bit in bit1's original position*/
        nrev |= bit1;                /*Now add the bits to the reversal result*/
        nrev |= bit2;
    }
    return nrev;
}

int main()
{
    TYPE n = 6;
    printf("%lu", reverser(n));
    return 0;
}
This time I've used the 'number of bits' idea from TK, but made it somewhat more portable by not assuming a byte contains 8 bits and instead using the CHAR_BIT macro. The code is more efficient now (with the inner for loop removed). I hope the code is also slightly less cryptic this time. :)
The need for using count is that the number of positions by which we have to shift a bit varies in each iteration - we have to move the rightmost bit by 31 positions (assuming 32 bit number), the second rightmost bit by 29 positions and so on. Hence count must decrease with each iteration as i increases.
Hope that bit of info proves helpful in understanding the code...
The following program serves to demonstrate a leaner algorithm for reversing bits, which can be easily extended to handle 64bit numbers.
#include <stdio.h>
#include <stdint.h>
int main(int argc, char **argv)
{
    int32_t x;

    if ( argc != 2 )
    {
        printf("Usage: %s hexadecimal\n", argv[0]);
        return 1;
    }
    sscanf(argv[1],"%x", &x);

    /* swap every neighbouring bit */
    x = (x&0xAAAAAAAA)>>1 | (x&0x55555555)<<1;
    /* swap every 2 neighbouring bits */
    x = (x&0xCCCCCCCC)>>2 | (x&0x33333333)<<2;
    /* swap every 4 neighbouring bits */
    x = (x&0xF0F0F0F0)>>4 | (x&0x0F0F0F0F)<<4;
    /* swap every 8 neighbouring bits */
    x = (x&0xFF00FF00)>>8 | (x&0x00FF00FF)<<8;
    /* and so forth, for say, 32 bit int */
    x = (x&0xFFFF0000)>>16 | (x&0x0000FFFF)<<16;

    printf("0x%x\n",x);
    return 0;
}
This code should not contain errors, and was tested using 0x12345678 to produce 0x1e6a2c48 which is the correct answer.
typedef unsigned long TYPE;

TYPE reverser(TYPE n)
{
    TYPE k = 1, nrev = 0, i, nrevbit1, nrevbit2;
    int count;

    for(i = 0; !i || (1 << i && (1 << i) != 1); i+=2)
    {
        /*In each iteration, we swap one bit
          on the 'right half' of the number with another
          on the left half*/
        k = 1<<i; /*this is used to find how many positions
                    to the left (or right, for the other bit)
                    we gotta move the bits in this iteration*/
        count = 0;
        while(k << 1 && k << 1 != 1)
        {
            k <<= 1;
            count++;
        }
        nrevbit1 = n & (1<<(i/2));
        nrevbit1 <<= count;
        nrevbit2 = n & (1<<((i/2) + count));
        nrevbit2 >>= count;
        nrev |= nrevbit1;
        nrev |= nrevbit2;
    }
    return nrev;
}
This works fine in gcc under Windows, but I'm not sure if it's completely platform independent. A few places of concern are:
the condition in the for loop - it assumes that when you left shift 1 beyond the leftmost bit, you get either a 0 with the 1 'falling out' (what I'd expect and what good old Turbo C gives iirc), or the 1 circles around and you get a 1 (what seems to be gcc's behaviour).
the condition in the inner while loop: see above. But there's a strange thing happening here: in this case, gcc seems to let the 1 fall out and not circle around!
The code might prove cryptic: if you're interested and need an explanation please don't hesitate to ask - I'll put it up someplace.
#ΤΖΩΤΖΙΟΥ
In reply to ΤΖΩΤΖΙΟΥ's comments, I present a modified version of the above which depends on an upper limit for the bit width.
#include <stdio.h>
#include <stdint.h>

typedef int32_t TYPE;

TYPE reverse(TYPE x, int bits)
{
    TYPE m = ~0;
    switch(bits)
    {
    case 64:
        x = (x&0xFFFFFFFF00000000&m)>>32 | (x&0x00000000FFFFFFFF&m)<<32;
    case 32:
        x = (x&0xFFFF0000FFFF0000&m)>>16 | (x&0x0000FFFF0000FFFF&m)<<16;
    case 16:
        x = (x&0xFF00FF00FF00FF00&m)>>8 | (x&0x00FF00FF00FF00FF&m)<<8;
    case 8:
        x = (x&0xF0F0F0F0F0F0F0F0&m)>>4 | (x&0x0F0F0F0F0F0F0F0F&m)<<4;
        x = (x&0xCCCCCCCCCCCCCCCC&m)>>2 | (x&0x3333333333333333&m)<<2;
        x = (x&0xAAAAAAAAAAAAAAAA&m)>>1 | (x&0x5555555555555555&m)<<1;
    }
    return x;
}
int main(int argc, char **argv)
{
    TYPE x;
    TYPE b = (TYPE)-1;
    int bits;

    if ( argc != 2 )
    {
        printf("Usage: %s hexadecimal\n", argv[0]);
        return 1;
    }

    for(bits=1; b; b<<=1, bits++)
        ;
    --bits;
    printf("TYPE has %d bits\n", bits);

    sscanf(argv[1],"%x", &x);
    printf("0x%x\n", reverse(x, bits));
    return 0;
}
Notes:
gcc will warn on the 64bit constants
the printfs will generate warnings too
If you need more than 64bit, the code should be simple enough to extend
I apologise in advance for the coding crimes I committed above - mercy good sir!
There's a nice collection of "Bit Twiddling Hacks", including a variety of simple and not-so simple bit reversing algorithms coded in C at http://graphics.stanford.edu/~seander/bithacks.html.
I personally like the "Obvious" algorithm (http://graphics.stanford.edu/~seander/bithacks.html#BitReverseObvious) because, well, it's obvious. Some of the others may require fewer instructions to execute. If I really need to optimize the heck out of something I may choose the not-so-obvious but faster versions. Otherwise, for readability, maintainability, and portability I would choose the Obvious one.
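For reference, the "Obvious" algorithm from that page is only a few lines. Here is a sketch adapted to the question's TYPE typedef (not tuned for speed, and not part of the linked page verbatim):

#include <limits.h>

typedef unsigned long TYPE;

/* The "obvious" method: peel bits off the bottom of v and push them into r,
   then shift r up by however many high zero bits were never visited. */
TYPE reverse_obvious(TYPE v)
{
    TYPE r = v;                           /* r picks up the LSB of v first */
    int s = sizeof(v) * CHAR_BIT - 1;     /* extra shift needed at the end */

    for (v >>= 1; v; v >>= 1) {
        r <<= 1;
        r |= v & 1;
        s--;
    }
    return r << s;                        /* account for v's high zero bits */
}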
Here is a more generally useful variation. Its advantage is its ability to work in situations where the bit length of the value to be reversed -- the codeword -- is unknown but is guaranteed not to exceed a value we'll call maxLength. A good example of this case is Huffman code decompression.
The code below works on codewords from 1 to 24 bits in length. It has been optimized for fast execution on a Pentium D. Note that it accesses the lookup table as many as 3 times per use. I experimented with many variations that reduced that number to 2 at the expense of a larger table (4096 and 65,536 entries). This version, with the 256-byte table, was the clear winner, partly because it is so advantageous for table data to be in the caches, and perhaps also because the processor has an 8-bit table lookup/translation instruction.
const unsigned char table[] = {
0x00,0x80,0x40,0xC0,0x20,0xA0,0x60,0xE0,0x10,0x90,0x50,0xD0,0x30,0xB0,0x70,0xF0,
0x08,0x88,0x48,0xC8,0x28,0xA8,0x68,0xE8,0x18,0x98,0x58,0xD8,0x38,0xB8,0x78,0xF8,
0x04,0x84,0x44,0xC4,0x24,0xA4,0x64,0xE4,0x14,0x94,0x54,0xD4,0x34,0xB4,0x74,0xF4,
0x0C,0x8C,0x4C,0xCC,0x2C,0xAC,0x6C,0xEC,0x1C,0x9C,0x5C,0xDC,0x3C,0xBC,0x7C,0xFC,
0x02,0x82,0x42,0xC2,0x22,0xA2,0x62,0xE2,0x12,0x92,0x52,0xD2,0x32,0xB2,0x72,0xF2,
0x0A,0x8A,0x4A,0xCA,0x2A,0xAA,0x6A,0xEA,0x1A,0x9A,0x5A,0xDA,0x3A,0xBA,0x7A,0xFA,
0x06,0x86,0x46,0xC6,0x26,0xA6,0x66,0xE6,0x16,0x96,0x56,0xD6,0x36,0xB6,0x76,0xF6,
0x0E,0x8E,0x4E,0xCE,0x2E,0xAE,0x6E,0xEE,0x1E,0x9E,0x5E,0xDE,0x3E,0xBE,0x7E,0xFE,
0x01,0x81,0x41,0xC1,0x21,0xA1,0x61,0xE1,0x11,0x91,0x51,0xD1,0x31,0xB1,0x71,0xF1,
0x09,0x89,0x49,0xC9,0x29,0xA9,0x69,0xE9,0x19,0x99,0x59,0xD9,0x39,0xB9,0x79,0xF9,
0x05,0x85,0x45,0xC5,0x25,0xA5,0x65,0xE5,0x15,0x95,0x55,0xD5,0x35,0xB5,0x75,0xF5,
0x0D,0x8D,0x4D,0xCD,0x2D,0xAD,0x6D,0xED,0x1D,0x9D,0x5D,0xDD,0x3D,0xBD,0x7D,0xFD,
0x03,0x83,0x43,0xC3,0x23,0xA3,0x63,0xE3,0x13,0x93,0x53,0xD3,0x33,0xB3,0x73,0xF3,
0x0B,0x8B,0x4B,0xCB,0x2B,0xAB,0x6B,0xEB,0x1B,0x9B,0x5B,0xDB,0x3B,0xBB,0x7B,0xFB,
0x07,0x87,0x47,0xC7,0x27,0xA7,0x67,0xE7,0x17,0x97,0x57,0xD7,0x37,0xB7,0x77,0xF7,
0x0F,0x8F,0x4F,0xCF,0x2F,0xAF,0x6F,0xEF,0x1F,0x9F,0x5F,0xDF,0x3F,0xBF,0x7F,0xFF};
const unsigned short masks[17] =
{0,0,0,0,0,0,0,0,0,0X0100,0X0300,0X0700,0X0F00,0X1F00,0X3F00,0X7F00,0XFF00};
unsigned long codeword; // value to be reversed, occupying the low 1-24 bits
unsigned char maxLength; // bit length of longest possible codeword (<= 24)
unsigned char sc; // shift count in bits and index into masks array
if (maxLength <= 8)
{
    codeword = table[codeword << (8 - maxLength)];
}
else
{
    sc = maxLength - 8;

    if (maxLength <= 16)
    {
        codeword = (table[codeword & 0X00FF] << sc)
                 | table[codeword >> sc];
    }
    else if (maxLength & 1) // if maxLength is 17, 19, 21, or 23
    {
        codeword = (table[codeword & 0X00FF] << sc)
                 | table[codeword >> sc]
                 | (table[(codeword & masks[sc]) >> (sc - 8)] << 8);
    }
    else // if maxLength is 18, 20, 22, or 24
    {
        codeword = (table[codeword & 0X00FF] << sc)
                 | table[codeword >> sc]
                 | (table[(codeword & masks[sc]) >> (sc >> 1)] << (sc >> 1));
    }
}
How about:
long temp = 0;
int counter = 0;
int number_of_bits = sizeof(value) * 8; // get the number of bits that represent value (assuming that it is aligned to a byte boundary)

while(value > 0) // loop until value is empty
{
    temp <<= 1;             // shift whatever was in temp left to create room for the next bit
    temp |= (value & 0x01); // get the lsb from value and set as lsb in temp
    value >>= 1;            // shift value right by one to look at next lsb
    counter++;
}
value = temp;
if (counter < number_of_bits)
{
    value <<= number_of_bits - counter;
}
(I'm assuming that you know how many bits value holds and it is stored in number_of_bits)
Obviously temp needs to be the longest imaginable data type and when you copy temp back into value, all the extraneous bits in temp should magically vanish (I think!).
Or, the 'C' way would be to say:
while(value)
- your choice.
We can store the results of reversing all possible 1-byte sequences in an array (256 distinct entries), then use a combination of lookups into this table and some ORing logic to get the reverse of the integer.
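A sketch of that combination for a 32-bit value (assuming a 256-entry table of bit-reversed bytes, such as the table[] shown in the Huffman answer above): reverse each byte through the table, then swap the byte order.

#include <stdint.h>

static uint32_t reverse32(uint32_t x, const unsigned char rev[256])
{
    return ((uint32_t)rev[x & 0xFF] << 24)          /* lowest byte ends up highest */
         | ((uint32_t)rev[(x >> 8) & 0xFF] << 16)
         | ((uint32_t)rev[(x >> 16) & 0xFF] << 8)
         |  (uint32_t)rev[(x >> 24) & 0xFF];
}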
Here is a variation and correction to TK's solution which might be clearer than the solutions by sundar. It takes single bits from t and pushes them into return_val:
typedef unsigned long TYPE;
#define TYPE_BITS (sizeof(TYPE)*8)

TYPE reverser(TYPE t)
{
    unsigned int i;
    TYPE return_val = 0;

    for(i = 0; i < TYPE_BITS; i++)
    {   /*foreach bit in TYPE*/
        /* shift the value of return_val to the left and add the rightmost bit from t */
        return_val = (return_val << 1) + (t & 1);
        /* shift off the rightmost bit of t */
        t = t >> 1;
    }
    return return_val;
}
The generic approach that would work for objects of any type and any size would be to reverse the order of the bytes of the object, and then reverse the order of the bits in each byte. In this case the bit-level algorithm is tied to a concrete number of bits (a byte), while the "variable" logic (with regard to size) is lifted to the level of whole bytes.
Here's my generalization of freespace's solution (in case we one day get 128-bit machines). It results in jump-free code when compiled with gcc -O3, and is obviously insensitive to the definition of foo_t on sane machines. Unfortunately it does depend on shift being a power of 2!
#include <limits.h>
#include <stdio.h>

typedef unsigned long foo_t;

foo_t reverse(foo_t x)
{
    int shift = sizeof (x) * CHAR_BIT / 2;
    foo_t mask = ((foo_t)1 << shift) - 1;
    int i;

    for (i = 0; shift; i++) {
        x = ((x & mask) << shift) | ((x & ~mask) >> shift);
        shift >>= 1;
        mask ^= (mask << shift);
    }
    return x;
}

int main() {
    printf("reverse = 0x%08lx\n", reverse(0x12345678L));
}
In case bit-reversal is time critical, mainly in conjunction with an FFT, the best approach is to store the whole bit-reversed array. In any case, this array will be smaller in size than the roots of unity that have to be precomputed for the FFT Cooley-Tukey algorithm. An easy way to compute the array is:
int BitReverse[Size]; // Size is a power of 2

void Init()
{
    BitReverse[0] = 0;
    for(int i = 0; i < Size/2; i++)
    {
        BitReverse[2*i]   = BitReverse[i]/2;
        BitReverse[2*i+1] = (BitReverse[i] + Size)/2;
    }
} // end it's all
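As a quick sanity check (a usage sketch, assuming Size is defined as 8 and the Init() above is in scope), the recurrence produces 0, 4, 2, 6, 1, 5, 3, 7, which is exactly the 3-bit bit-reversal permutation:

#include <stdio.h>

int main(void)
{
    Init();                                       /* fills BitReverse[] as above */
    for (int i = 0; i < Size; i++)
        printf("%d -> %d\n", i, BitReverse[i]);   /* 0 4 2 6 1 5 3 7 for Size == 8 */
    return 0;
}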
