I was helping someone with their homework and ran into a strange issue. The problem is to write a function that reverses the order of the bytes of a signed integer (that's how the function was specified, anyway), and this is the solution I came up with:
int reverse(int x)
{
    int reversed = 0;
    reversed = (x & (0xFF << 24)) >> 24;
    reversed |= (x & (0xFF << 16)) >> 8;
    reversed |= (x & (0xFF << 8)) << 8;
    reversed |= (x & 0xFF) << 24;
    return reversed;
}
If you pass 0xFF000000 to this function, the first assignment will result in 0xFFFFFFFF. I don't really understand what is going on, but I know it has something to do with conversions back and forth between signed and unsigned, or something like that.
If I append ul to 0xFF it works fine, which I assume is because the expression is forced to unsigned and then converted back to signed, or something in that direction. The generated code also changes: without the ul suffix it uses sar (shift arithmetic right), but with it, it uses shr as intended.
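Here's a minimal repro of what I'm seeing (the output is from my machine; I gather the signed case may vary by implementation):

#include <stdio.h>

int main(void)
{
    int x = 0xFF000000;  /* becomes negative on my 32-bit two's-complement machine */

    /* signed mask: compiles to sar, sign-extends */
    printf("%08X\n", (unsigned)((x & (0xFF << 24)) >> 24));  /* FFFFFFFF */
    /* unsigned mask: compiles to shr, zero-fills */
    printf("%08lX\n", (x & (0xFFul << 24)) >> 24);           /* 000000FF */
    return 0;
}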
I would really appreciate it if someone could shed some light on this for me. I'm supposed to know this stuff, and I thought I did, but I'm really not sure what's going on here.
Thanks in advance!
Since x is a signed quantity, the result of (x & (0xFF << 24)) is 0xFF000000, which is also signed and thus a negative number, since the top (sign) bit is set. The >> operator on an int (a signed value) performs sign extension (Edit: though this behaviour is implementation-defined, not guaranteed by the standard) and propagates the sign-bit value of 1 as the value is shifted to the right.
You should rewrite the function as follows to work exclusively on unsigned values:
unsigned reverse(unsigned x)
{
    unsigned int reversed = 0;
    /* the u suffix keeps every mask, and therefore every shift, in unsigned arithmetic */
    reversed = (x & (0xFFu << 24)) >> 24;
    reversed |= (x & (0xFFu << 16)) >> 8;
    reversed |= (x & (0xFFu << 8)) << 8;
    reversed |= (x & 0xFFu) << 24;
    return reversed;
}
From your results we can deduce that you are on a 32-bit machine.
(x & (0xFF << 24)) >> 24
In this expression 0xFF is an int, so 0xFF << 24 is also an int, as is x.
When you perform the bitwise & between two ints, the result is also an int. In this case the value is 0xFF000000, which on a 32-bit machine means that the sign bit is set, so you have a negative number.
The result of performing a right shift on an object of signed type with a negative value is implementation-defined. In your case, a sign-preserving arithmetic shift right is performed.
If you right-shift an unsigned type, then you get the results that you were expecting for a byte-reversal function. You can achieve this by making either operand of the bitwise & an unsigned type, forcing conversion of both operands to unsigned. (This is true on any implementation where a signed int can't hold the full range of positive values of an unsigned int, which is nearly all implementations.)
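Concretely, a single u suffix on the mask is enough, because the usual arithmetic conversions then pull x into unsigned arithmetic as well (a sketch of the idea):

/* 0xFFu is unsigned, so the &, and the >> after it, happen on unsigned values */
reversed = (x & (0xFFu << 24)) >> 24;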
Right shift on signed types is implementation-defined; in particular, the compiler is free to do an arithmetic or a logical shift as it pleases. This is something you will not notice while the concrete value being shifted is positive, but as soon as it is negative you may fall into a trap.
Just don't do it, this is not portable.
x is signed, so the highest bit is used for the sign. On a two's-complement machine, 0xFF000000 is the negative value -0x01000000. When you do the shift, the result is "sign extended": the bit added on the left to fill the position vacated by the shift is always a copy of the sign bit. So
0xFF000000 >> 1 == 0xFF800000
0xFF000000 >> 2 == 0xFFC00000
0xFF000000 >> 3 == 0xFFE00000
0xFF000000 >> 4 == 0xFFF00000
If the value being shifted is unsigned, or if the shift is toward the left, the new bit is 0. It's only in right shifts of signed values that sign extension comes into play.
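A small sketch that prints exactly that progression on a two's-complement machine with arithmetic right shift (implementation-defined behaviour, but near-universal in practice):

#include <stdio.h>

int main(void)
{
    int v = 0xFF000000;  /* negative here: 32-bit two's complement assumed */
    for (int i = 1; i <= 4; i++)
        printf("0xFF000000 >> %d == 0x%08X\n", i, (unsigned)(v >> i));
    return 0;
}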
If you want it to work the same on all platforms, with both signed and unsigned integers, change
(x & (0xFF << 24)) >> 24
into
(x >> 24) & 0xFF
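On common implementations the right shift of a negative value is either arithmetic or logical, and in both cases the & 0xFF keeps only the byte you want. Applied to the whole function, shifting first and masking afterwards means any sign-extended bits are discarded (a sketch, using the unsigned signature from the earlier answer):

unsigned reverse(unsigned x)
{
    return ((x >> 24) & 0xFFu)        /* old byte 3 -> byte 0 */
         | ((x >>  8) & 0xFF00u)      /* old byte 2 -> byte 1 */
         | ((x <<  8) & 0xFF0000u)    /* old byte 1 -> byte 2 */
         | ((x << 24) & 0xFF000000u); /* old byte 0 -> byte 3 */
}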
If this is Java code, you should use >>>, which is an unsigned right shift; otherwise the value will be sign-extended.
Related
I'm trying to work with bit manipulation, and am struggling to modify the bits directly.
I have something as follows:
unsigned char myBits = 128; // 10000000 in binary
myBits = myBits >> 1; // Right shift, so we get 64, or 01000000 in binary
Now, how would I use bit manipulation to modify the first bit after the right shift (01000000) to a 1 (11000000)?
Most implementations will shift a "1" bit in from the left if the type in question is signed and the value is negative.
So you could either change the type to signed char, or do some casting on the unsigned types:
myBits = (unsigned char)((signed char)myBits >> 1);
You need to bitwise-OR the original value with the shifted value:
myBits |= myBits >> 1;
https://godbolt.org/z/dY3eY5dc5
To set the most significant bit (you can change the type to any integer type and it will work):
myBits |= 1ULL << (sizeof(myBits) * CHAR_BIT - 1);
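Putting the pieces together with the question's starting value (CHAR_BIT comes from <limits.h>; the printed value is what I'd expect):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned char myBits = 128;                         /* 1000 0000 */
    myBits = myBits >> 1;                               /* 0100 0000 = 64 */
    myBits |= 1ULL << (sizeof(myBits) * CHAR_BIT - 1);  /* 1100 0000 = 192 */
    printf("%d\n", myBits);                             /* 192 */
    return 0;
}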
How do I set the first (least significant) eight bits of any integer type to all zeroes? Essentially, perform a bitwise AND that applies 0x00 to just the low byte.
What I need is a generic solution that works on any integer size, but not have to create a mask setting all the higher bits to 1.
In other words:
0xffff -> 0xff00
0xaabbccddeeffffff -> 0xaabbccddeeffff00
With bit shifts:
any_unsigned_integer = any_unsigned_integer >> 8 << 8;
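For example, with the question's second value (a sketch; this relies on the value being unsigned, so the right shift zero-fills):

unsigned long long v = 0xaabbccddeeffffffULL;
v = v >> 8 << 8;   /* 0xaabbccddeeffff00 */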
The simplest solution works for all integer types on architectures with 2's complement representation for negative numbers:
val = val & ~0xff;
The reason is that ~0xff evaluates to -256, with type int. Let's consider all possible types for val:
If the type of val is smaller than int, val is promoted to int, the mask operation works as expected, and the result is converted back to the type of val.
If the type of val is signed, -256 is converted to the type of val preserving its value, hence replicating the sign bit, and the mask is applied properly.
If the type of val is unsigned, converting -256 to this type produces the value TYPE_MAX + 1 - 256, which has all bits set except the 8 low bits, again the proper mask for the operation. (A sketch below checks all three cases.)
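A sketch checking all three cases (the printed values assume 32-bit int and two's complement):

#include <stdio.h>

int main(void)
{
    unsigned char uc = 0xAB;        /* smaller than int: promoted for the & */
    int           si = -1;          /* signed: all bits set */
    unsigned int  ui = 0xFFFFFFFFu; /* unsigned */

    printf("%08X\n", (unsigned)(uc & ~0xff)); /* 00000000 */
    printf("%08X\n", (unsigned)(si & ~0xff)); /* FFFFFF00 */
    printf("%08X\n", ui & ~0xff);             /* FFFFFF00 */
    return 0;
}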
Another simple solution, one that works for all representations of negative values, is this:
val = val ^ (val & 0xff);
It requires storing the value into a variable to avoid multiple evaluation, whereas the first proposal can be applied to any expression with potential side-effects:
return my_function(a, b, c) & ~0xff;
The C bitwise NOT operator ~ will invert all the bits of a given value, so, in order to get a mask that will clear only the lower eight bits:
int val = 123456789;
int other_val = val & ~0xff; // AND with binary 1111 ... 1111 0000 0000
val &= ~0xff; // alternative to change original variable.
If you have a wider (or narrower) type, the 0xff should be of the correct type, for example:
long val = 123456789L;
long other_val = val & ~(long)0xff;
val &= ~(long)0xff; // alternative to change original variable.
One way to do it without creating a mask for the higher bits is to use a combination of the & and ^ operators: x = x ^ (x & 0xFF); (or, using compound assignment: x ^= x & 0xFF;).
A universal solution: no mask, any number of bits:
#define RESETB(val, nbits) ((val) ^ ((val) & ((1ULL << (nbits)) - 1)))
or, better, a version that also handles nbits equal to or larger than the width of val without shifting by the full width (which would be undefined); CHAR_BIT comes from <limits.h>:
#define RESETB(val, nbits) \
    ((val) ^ ((val) & ((nbits) >= sizeof(val) * CHAR_BIT ? ~0ULL : (1ULL << (nbits)) - 1)))
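Hypothetical usage of the macro above (the variable names are mine):

#include <limits.h>

unsigned long long v = 0xaabbccddeeffffffULL;
unsigned long long r = RESETB(v, 8);   /* r == 0xaabbccddeeffff00 */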
I'm trying to get the most significant bit of an unsigned 8-bit type in C.
This is what I'm trying to do right now:
uint8_t *var = ...;
...
(*var >> 6) & 1
Is this right? If it's not, what would be?
To get the most significant bit of the value pointed to by a uint8_t pointer, you need to shift by 7 bits, not 6:
(*var >> 7) & 1
The most standard/correct way of masking bits is to use a readable bit mask of the form 1u << bit. Any C programmer spotting 1u << n in code will know that it is a bit mask - so it is self-documenting code.
So if you want bit number 7, you would write
*var & (1u << 7)
The u suffix is important for rugged code, since you want to avoid accidental implicit promotions to signed types.
Another option is to simply apply a bit mask and check the resulting value:
*var & 0x80u // 1000 0000
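Either form is normally used directly as a truth test:

if (*var & 0x80u) {
    /* the most significant bit is set */
}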
I wanted to try to get only the four rightmost bits of a byte by using only bit-shift operations, but it sometimes worked and sometimes didn't, and I don't understand why.
Here's an example:
unsigned char b = foo; //say foo is 1000 1010
unsigned char temp=0u;
temp |= ((b << 4) >> 4);//I want this to be 00001010
PS: I know I could use a mask of 0xF and do temp = (mask & b).
The shift operators only work on integral types. Using << causes implicit integer promotion, converting b to an int and "protecting" the higher bits.
To fix it, truncate back to unsigned char before the right shift: temp = ((unsigned char)(b << 4)) >> 4;
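A sketch contrasting the two, with the value from the question (1000 1010 is 0x8A):

#include <stdio.h>

int main(void)
{
    unsigned char b = 0x8A;                               /* 1000 1010 */
    unsigned char bad  = (b << 4) >> 4;                   /* promotion keeps the high bits: 0x8A */
    unsigned char good = ((unsigned char)(b << 4)) >> 4;  /* truncate to 8 bits first: 0x0A */
    printf("%02X %02X\n", bad, good);                     /* 8A 0A */
    return 0;
}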
I want to exchange the bytes of a number. For example, the binary representation of a number is
00000001 00011000 00000100 00001110. I want to reverse it: 00001110 00000100 00011000 00000001.
Can you please help? The current code is below:
#include <iostream>
using namespace std;

void showBits(int n)
{
    int i, k, andMask;
    for (i = 15; i >= 0; i--)
    {
        andMask = 1 << i;
        k = n & andMask;
        k == 0 ? (cout << "0") : (cout << "1");
    }
}

int reverse(int a)
{
    int b = a << 8;
    int c = a >> 8;
    return (b | c);
}

int main()
{
    int a = 10;
    showBits(a);
    int b = reverse(a);
    showBits(b);
    cin.get();
}
Something like this should work:
result = ((number & 0xFF) << 24) | ((number & 0xFF00) << 8) |
((number & 0xFF0000) >> 8) | ((number & 0xFF000000) >> 24);
int myInt = 0x12345678;

__asm {            // MSVC x86 inline assembly
    mov eax, myInt
    bswap eax      // swap the four bytes of eax
    mov myInt, eax
}
If you want to simply reverse a 32-bit number, you can use a bit-shifting technique that isolates each byte region using bit masks and bitwise AND, and then moves those bytes by the appropriate number of bits using the bit-shift operators >> and <<. You can then recombine the bytes using bitwise OR, like so:
int temp = 0x12345678;
temp = ((0xFF & temp) << 24) | ((0xFF00 & temp) << 8) | ((0xFF0000 & temp) >> 8) |
       ((0xFF000000 & temp) >> 24);
You'll now end up with a final value in temp of 0x78563412.
Update: Okay, I'm looking over your code, and I note the following:
Judging from the binary value you posted, the int size you want to work with is 32 bits ... but your showBits function does not cycle through enough bits to display an entire 32-bit integer. As written, it only shows the 16 lower bits. So you'll need to be clear on whether you're working on a platform that defines int as 16 or 32 bits.
Your reverse function is not correct. If you have a 32-bit int like 0x12345678 and you shift it left by 8 bits, you end up with 0x34567800. Likewise, when you shift it right by 8 bits, you end up with 0x00123456. Shifting a number does not rotate the values through their respective positions (i.e., a left shift of 8 bits does not give you 0x34567812), and even if it did, the bitwise OR of the rotated values would still not be a correct reversal. Instead, use the technique described above: bit masks and bitwise AND to isolate each byte, then shift those bytes the appropriate number of places to reverse the bytes in a 32-bit word.
If your original task is to convert from machine-specific byte order to big-endian and back, take a look at the hton* and ntoh* functions.
http://minix1.woodhull.com/manpages/man3/hton.3.html
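For example (a sketch; htonl converts to big-endian, so on a little-endian machine it performs exactly this byte swap, and on a big-endian machine it is a no-op):

#include <arpa/inet.h>  /* htonl, ntohl (POSIX) */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t host = 0x0118040EU;      /* 00000001 00011000 00000100 00001110 */
    uint32_t net  = htonl(host);
    printf("%08X\n", (unsigned)net);  /* 0E041801 on a little-endian machine */
    return 0;
}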