I have the following code in C:
unsigned int a = 60; /* 60 = 0011 1100 */
int c = 0;
c = ~a; /*-61 = 1100 0011 */
printf("c = ~a = %d\n", c );
c = a << 2; /* 240 = 1111 0000 */
printf("c = a << 2 = %d\n", c );
The first output is -61 while the second one is 240. Why does the first printf compute the two's complement of 1100 0011, while the second one just converts 1111 0000 to its decimal equivalent?
You have assumed that an int is only 8 bits wide. This is probably not the case on your system, which is likely to use 16 or 32 bits for int.
In the first example, all the bits are inverted. This is actually a straight inversion, not two's complement:
1111 1111 1111 1111 1111 1111 1100 0011 (32-bit)
1111 1111 1100 0011 (16-bit)
In the second example, when you shift it left by 2, the highest-order bit is still zero. You have misled yourself by depicting the numbers as 8 bits in your comments.
0000 0000 0000 0000 0000 0000 1111 0000 (32-bit)
0000 0000 1111 0000 (16-bit)
Try to avoid doing bitwise operations with signed integers -- often it'll lead you into undefined behavior.
The situation here is that you're taking unsigned values and assigning them to a signed variable. For ~60 the result doesn't fit in an int, so the conversion is implementation-defined (not undefined). You see it as -61 because the bit pattern of ~60 is also the two's-complement representation of -61. On the other hand, 60 << 2 comes out correct because 240 has the same representation as both a signed and an unsigned integer.
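To see the width issue concretely, here is a small sketch (assuming 32-bit int and two's complement) printing the same bits as unsigned and as signed:
#include <stdio.h>

int main(void)
{
    unsigned int a = 60;            /* ...0000 0011 1100 in 32 bits */
    printf("%u\n", ~a);             /* 4294967235: all 32 bits inverted */
    printf("%d\n", (int)~a);        /* -61: the same bits read as signed */
    printf("%d\n", (int)(a << 2));  /* 240: the top bit is still 0 */
    return 0;
}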
Related
I have two unsigned ints X and Y, and I want to efficiently decide if X is at most half as long as Y, where the length of X is k+1, where 2^k is the largest power of 2 that is no larger than X.
i.e., X=0000 0101 has length 3, Y=0111 0000 is more than twice as long as X.
Obviously we can check by looking at individual bits in X and Y, for example by shifting right and counting in a loop, but is there an efficient, bit-twiddling (and loop-less) solution?
The (toy) motivation comes from the fact that I want to divide the range RAND_MAX either into range buckets or RAND_MAX/range buckets, plus some remainder, and I prefer to use the larger number of buckets. If range is (approximately) at most the square root of RAND_MAX (i.e., at most half as long), then I prefer using RAND_MAX/range buckets, and otherwise I want to use range buckets.
It should be noted, therefore, that X and Y might be large, where possibly Y=1111 1111, in the 8-bit example above. We certainly don't want to square X.
Edit, post-answer: The answer below mentions the built-in count leading zeros function (__builtin_clz()), and that is probably the fastest way to compute the answer. If for some reason this is unavailable, the lengths of X and Y can be obtained through some well-known bit twiddling.
First, smear the bits of X to the right (filling X with 1s except its leading 0s), and then do a population count. Both of these operations involve O(log k) operations, where k is the number of bits that X occupies in memory (my examples are for uint32_t, 32 bit unsigned integers). There are various implementations, but I put the ones that are easiest to understand below:
//smear
x = x | x>>1;
x = x | x>>2;
x = x | x>>4;
x = x | x>>8;
x = x | x>>16;
//population count
x = ( x & 0x55555555 ) + ( (x >> 1 ) & 0x55555555 );
x = ( x & 0x33333333 ) + ( (x >> 2 ) & 0x33333333 );
x = ( x & 0x0F0F0F0F ) + ( (x >> 4 ) & 0x0F0F0F0F );
x = ( x & 0x00FF00FF ) + ( (x >> 8 ) & 0x00FF00FF );
x = ( x & 0x0000FFFF ) + ( (x >> 16) & 0x0000FFFF );
The idea behind the population count is divide and conquer. For example, with 01 11, I first count the 1-bits within each pair: the pair 01 contains one 1-bit, which I record in place as 01, and the pair 11 contains two 1-bits, recorded in place as 10, so the updated bit string is 01 10. Now I add the numbers in buckets of size 2 and replace each pair with the result: 1 + 2 = 3, so the bit string becomes 0011, and we are done. The original bit string is replaced with the population count.
There are faster ways to do the pop count given in Hacker's Delight, but this one is easier to explain, and seems to be the basis for most of the others. You can get my code as a Gist here.
X=0000 0000 0111 1111 1000 1010 0010 0100
Set every bit that is 1 place to the right of a 1
0000 0000 0111 1111 1100 1111 0011 0110
Set every bit that is 2 places to the right of a 1
0000 0000 0111 1111 1111 1111 1111 1111
Set every bit that is 4 places to the right of a 1
0000 0000 0111 1111 1111 1111 1111 1111
Set every bit that is 8 places to the right of a 1
0000 0000 0111 1111 1111 1111 1111 1111
Set every bit that is 16 places to the right of a 1
0000 0000 0111 1111 1111 1111 1111 1111
Accumulate pop counts of bit buckets size 2
0000 0000 0110 1010 1010 1010 1010 1010
Accumulate pop counts of bit buckets size 4
0000 0000 0011 0100 0100 0100 0100 0100
Accumulate pop counts of bit buckets size 8
0000 0000 0000 0111 0000 1000 0000 1000
Accumulate pop counts of bit buckets size 16
0000 0000 0000 0111 0000 0000 0001 0000
Accumulate pop counts of bit buckets size 32
0000 0000 0000 0000 0000 0000 0001 0111
The length of 8358436 is 23 bits
Y=0000 0000 0000 0000 0011 0000 1010 1111
Set every bit that is 1 place to the right of a 1
0000 0000 0000 0000 0011 1000 1111 1111
Set every bit that is 2 places to the right of a 1
0000 0000 0000 0000 0011 1110 1111 1111
Set every bit that is 4 places to the right of a 1
0000 0000 0000 0000 0011 1111 1111 1111
Set every bit that is 8 places to the right of a 1
0000 0000 0000 0000 0011 1111 1111 1111
Set every bit that is 16 places to the right of a 1
0000 0000 0000 0000 0011 1111 1111 1111
Accumulate pop counts of bit buckets size 2
0000 0000 0000 0000 0010 1010 1010 1010
Accumulate pop counts of bit buckets size 4
0000 0000 0000 0000 0010 0100 0100 0100
Accumulate pop counts of bit buckets size 8
0000 0000 0000 0000 0000 0110 0000 1000
Accumulate pop counts of bit buckets size 16
0000 0000 0000 0000 0000 0000 0000 1110
Accumulate pop counts of bit buckets size 32
0000 0000 0000 0000 0000 0000 0000 1110
The length of 12463 is 14 bits
So now I know that 12463 is significantly larger than the square root of 8358436, without taking square roots, casting to floats, dividing, or multiplying.
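Putting the two snippets together, here's a minimal sketch (the helper name bit_length is mine) that reproduces the numbers above:
#include <stdio.h>
#include <stdint.h>

/* smear + pop count from the snippets above; returns the bit length of x */
static uint32_t bit_length(uint32_t x)
{
    /* smear: fill everything to the right of the leading 1 with 1s */
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;
    /* pop count: sum the 1 bits in buckets of doubling size */
    x = (x & 0x55555555u) + ((x >> 1)  & 0x55555555u);
    x = (x & 0x33333333u) + ((x >> 2)  & 0x33333333u);
    x = (x & 0x0F0F0F0Fu) + ((x >> 4)  & 0x0F0F0F0Fu);
    x = (x & 0x00FF00FFu) + ((x >> 8)  & 0x00FF00FFu);
    x = (x & 0x0000FFFFu) + ((x >> 16) & 0x0000FFFFu);
    return x;
}

int main(void)
{
    printf("%u\n", (unsigned)bit_length(8358436u));  /* 23 */
    printf("%u\n", (unsigned)bit_length(12463u));    /* 14 */
    return 0;
}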
See also Stackoverflow and Hacker's Delight (it's a book, of course, but I linked to some snippets on their website).
If you are dealing with unsigned int and sizeof(unsigned long long) >= 2 * sizeof(unsigned int), you can just use the square method after casting:
(unsigned long long)X * (unsigned long long)X <= (unsigned long long)Y
If not, you can still use the square method if X is less than the square root of UINT_MAX+1, which you may need to hard code in the function.
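As a sketch (the function name square_method is mine; it assumes unsigned long long is at least twice as wide as unsigned int, so the square cannot wrap):
/* is X at most sqrt(Y)? */
int square_method(unsigned int X, unsigned int Y)
{
    return (unsigned long long)X * X <= Y;
}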
Otherwise, you could use floating point calculation:
sqrt((double)Y) >= (double)X
On modern CPUs, this would be quite fast anyway.
If you are OK with gcc extensions, you can use __builtin_clz() to compute the length of X and Y:
/* the X ? ... : 0 guards matter: __builtin_clz(0) is undefined
   (CHAR_BIT comes from <limits.h>) */
int length_of_X = X ? sizeof(X) * CHAR_BIT - __builtin_clz(X) : 0;
int length_of_Y = Y ? sizeof(Y) * CHAR_BIT - __builtin_clz(Y) : 0;
return length_of_X * 2 <= length_of_Y;
__builtin_clz() compiles to a single instruction on modern Intel CPUs.
Here is a discussion on more portable ways to count leading zeroes you could use to implement your length function: Counting leading zeros in a 32 bit unsigned integer with best algorithm in C programming or this one: Implementation of __builtin_clz
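For reference, a compilable sketch of the snippet above (gcc/clang; the wrapper name at_most_half_as_long is mine), using the example values from the question:
#include <limits.h>   /* CHAR_BIT */
#include <stdio.h>

/* hypothetical wrapper: is X at most half as long as Y? */
static int at_most_half_as_long(unsigned int X, unsigned int Y)
{
    int length_of_X = X ? (int)(sizeof(X) * CHAR_BIT) - __builtin_clz(X) : 0;
    int length_of_Y = Y ? (int)(sizeof(Y) * CHAR_BIT) - __builtin_clz(Y) : 0;
    return length_of_X * 2 <= length_of_Y;
}

int main(void)
{
    printf("%d\n", at_most_half_as_long(0x05u, 0x70u));  /* 1: lengths 3 and 7 */
    printf("%d\n", at_most_half_as_long(0x70u, 0x05u));  /* 0 */
    return 0;
}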
How to calculate ~a manually? I am seeing these types of questions very often.
#include <stdio.h>
int main()
{
unsigned int a = 10;
a = ~a;
printf("%d\n", a);
}
The result of the ~ operator is the bitwise complement of its (promoted) operand
C11dr §6.5.3.3
When used with unsigned types, it is sufficient to mimic ~ with exclusive-or against UINT_MAX, which has the same type and value as (unsigned) -1.
unsigned int a = 10;
// a = ~a;
a ^= -1;
You could XOR it with a bitmask of all 1's.
unsigned int a = 10, mask = 0xFFFFFFFF;
a = a ^ mask;
This is assuming of course that an int is 32 bits. That's why it makes more sense to just use ~.
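If you don't want to hard-code the width, a sketch using UINT_MAX from <limits.h> as the mask (as the earlier answer notes, it has the same value as (unsigned) -1):
#include <limits.h>

unsigned int a = 10;
a ^= UINT_MAX;   /* UINT_MAX has all value bits set, so this equals ~a */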
Just convert the number to binary form, and change each '1' to '0' and each '0' to '1'.
That is:
10 (decimal)
Converted to binary (32 bits as usual in an int) gives us:
0000 0000 0000 0000 0000 0000 0000 1010
Then apply the ~ operator:
1111 1111 1111 1111 1111 1111 1111 0101
Now you have a number that could be interpreted either as an unsigned 32-bit number or as a signed one. As you are using %d in your printf, it is read as signed.
To find the decimal value of a signed (two's-complement) number, do this:
If the most significant bit (the leftmost) is 0, then just convert the binary number back to decimal as usual.
If the most significant bit is 1 (our case here), then change each '1' to '0' and each '0' to '1', add 1, convert to decimal, and prepend a minus sign to the result.
So it is:
1111 1111 1111 1111 1111 1111 1111 0101
^
|
Its most significant bit is 1, so first we swap the 0s and 1s
0000 0000 0000 0000 0000 0000 0000 1010
And then, we add 1
0000 0000 0000 0000 0000 0000 0000 1010
1
---------------------------------------
0000 0000 0000 0000 0000 0000 0000 1011
Take this number and convert it back to decimal, prepending a minus sign to the result. The converted value is 11. With the minus sign, it is -11.
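You can check the walkthrough with a couple of printfs (a sketch assuming 32-bit int and two's complement):
#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    a = ~a;
    printf("%u\n", a);        /* 4294967285 with 32-bit int: the raw bits */
    printf("%d\n", (int)a);   /* -11: the same bits read as signed */
    return 0;
}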
This function shows the binary representation of an int with the 0s and 1s swapped (i.e., it prints the bits of ~x):
#include <stdio.h>

void not(unsigned int x)
{
    int i;
    /* walk from the most significant bit down, printing each bit inverted */
    for (i = (sizeof(int) * 8) - 1; i >= 0; i--)
        (x & (1u << i)) ? putchar('0') : putchar('1');
    printf("\n");
}
Source: https://en.wikipedia.org/wiki/Bitwise_operations_in_C#Right_shift_.3E.3E
I'm currently up to chapter 2 in The C Programming Language (K&R) and reading about bitwise operations.
This is the example that sparked my curiosity:
x = x & ~077
Assuming a 16-bit word length and 32-bit long type, what I think would happen is 077 would first be converted to:
0000 0000 0011 1111 (16-bit signed int).
This would then be complemented to:
1111 1111 1100 0000.
My question is what would happen next for the different possible types of x? If x is a signed int the answer is trivial. But, if x is a signed long I'm assuming ~077 would become:
1111 1111 1111 1111 1111 1111 1100 0000
following 2s complement to preserve the sign. Is this correct?
Also, if x is an unsigned long will ~077 become:
0000 0000 0000 0000 1111 1111 1100 0000
Or, will ~077 be converted to a signed long first:
1111 1111 1111 1111 1111 1111 1100 0000
...after which it is converted to an unsigned long (no change to bits)?
Any help clarifying whether or not this operation will always set only the last 6 bits to zero would be appreciated.
Whatever data-type you choose, ~077 will set the rightmost 6 bits to 0 and all others to 1.
Assuming 16-bit ints and 32-bit longs, there are 4 cases:
Case 1
unsigned int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
unsigned long y = x; // y = 0000 0000 0000 0000 1111 1111 1100 0000
Case 2
unsigned int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
long y = x; // y = 0000 0000 0000 0000 1111 1111 1100 0000
Case 3
int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
unsigned long y = x; // y = 1111 1111 1111 1111 1111 1111 1100 0000
Case 4
int x = 077; // x = 0000 0000 0011 1111
x = ~x; // x = 1111 1111 1100 0000
long y = x; // y = 1111 1111 1111 1111 1111 1111 1100 0000
See code here. This means sign extension is done when the source is signed. When the source is unsigned, the sign bit is not extended and the upper bits are set to 0.
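A quick sketch to check the rule on your own machine (shown for a typical platform with 32-bit int and 64-bit long; the four 16/32-bit cases above follow the same rule):
#include <stdio.h>

int main(void)
{
    unsigned int ux = ~077u;   /* all ones except the low 6 bits */
    int          sx = ~077;    /* same bit pattern, but a negative value */
    printf("%lx\n", (unsigned long)ux);   /* zero-extended:  ffffffc0 */
    printf("%lx\n", (unsigned long)sx);   /* sign-extended:  ffffffffffffffc0 */
    return 0;
}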
x = x & ~077  // ~077 = 1111 1111 1111 1111 1111 1111 1100 0000 (for a 32-bit int)
~077 is a constant expression evaluated at compile time, and its value is converted to the type of x, so the AND operation will always set the last 6 bits of x to 0 while the remaining bits keep whatever value they had before. For example:
// let x = 256472 --> binary --> 0000 0000 0000 0011 1110 1001 1101 1000
x = x & ~077;
// now x = 0000 0000 0000 0011 1110 1001 1100 0000 --> decimal 256448
So the last 6 bits are changed to 0 irrespective of the data type, and the remaining bits stay the same. And in K&R it is written: "The portable form involves no extra cost, since ~077 is a constant expression that can be evaluated at compile time."
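The example above as a runnable check (a sketch):
#include <stdio.h>

int main(void)
{
    long x = 256472;      /* 0000 0000 0000 0011 1110 1001 1101 1000 */
    x = x & ~077;         /* clears the low 6 bits, whatever the width of x */
    printf("%ld\n", x);   /* 256448 */
    return 0;
}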
I have a sample question from a test at my school. What is the simplest way to solve it on paper?
The question:
The run-time system uses two's complement for representation of integers. Data type int has a size of 32 bits, data type short 16 bits. What does printf show? (The answer is ffffe43c)
short int x = -0x1bc4; /* !!! short */
printf ( "%x", x );
Let's do it in two steps, using 0x1bc4 = 0x1bc3 + 1, so that -0x1bc4 = (0 - 1) - 0x1bc3.
First, compute on 32 bits:
0 - 1 = ffffffff
Then:
ffffffff - 1bc3
This can be done digit by digit, with no borrows:
  ffffffff
- 00001bc3
----------
  ffffe43c
which is the result you have.
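If you want to verify the paper arithmetic, the same two steps in code (a sketch assuming 32-bit unsigned int):
#include <stdio.h>

int main(void)
{
    unsigned int r = 0u - 1u;   /* ffffffff */
    r -= 0x1bc3u;               /* ffffffff - 1bc3 */
    printf("%x\n", r);          /* ffffe43c */
    return 0;
}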
Since your x is negative, take the two's complement of its magnitude, which will yield its representation:
2's(-x) = ~(x) + 1
2's(-0x1BC4) = ~(0x1BC4) + 1 => 0xE43C
0x1BC4 = 0001 1011 1100 0100
~0X1BC4 =1110 0100 0011 1011
+1 = [1]110 0100 0011 1100 (brackets around MSB)
which is how your number is represented internally.
Now %x expects a 32-bit integer, so your computer will sign-extend your value, copying the MSB into the upper 16 bits, which yields:
1111 1111 1111 1111 1110 0100 0011 1100 == 0xFFFFE43C
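A sketch reproducing the output (assuming 16-bit short and 32-bit int, as in the question); it is the default argument promotion to int that does the sign extension:
#include <stdio.h>

int main(void)
{
    short int x = -0x1bc4;
    /* x is promoted to int when passed to the variadic printf; the sign
       bit of 0xe43c is copied into the upper 16 bits, giving 0xffffe43c */
    printf("%x\n", x);
    return 0;
}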
So I was messing around with Bit-Twiddling in C, and I came across an interesting output:
int main()
{
int a = 0x00FF00FF;
int b = 0xFFFF0000;
int res = (~b & a);
printf("%.8X\n", (res << 8) | (b >> 24));
}
And the output from this statement is:
FFFFFFFF
I expected the output to be
0000FFFF
But why wasn't it? Am I missing something with bit-shifting here?
TLDR: Your integer b is negative, so when you shift it right the vacated upper bits are filled with copies of the sign bit (i.e. 1). Therefore when you shift b right by 24 places you end up with 0xFFFFFFFF.
Longer explanation:
Assuming on your platform that your integers are 32 bits and a signed integer is represented by 2's complement, the 0xFFFF0000 assigned to a signed integer variable becomes a negative number (the conversion of the out-of-range value is implementation-defined). Note that if an int were wider than 32 bits, 0xFFFF0000 would fit as a positive value and the example would behave differently.
Shifting a negative number right is implementation defined by the standard (C99 / N1256, section 6.5.7.5):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. [...] If E1
has a signed type and a negative value, the resulting value is
implementation defined.
That means a particular compiler can choose what happens in a particular situation, but it should be noted in the compiler manual what the effect is.
There tend to be two sets of shift instructions in many processors, a logical shift and an arithmetic shift. The logical shift right will shift bits and fill the exposed bits with zeros. Arithmetic shifts right (assuming 2's complement again) will fill the exposed bits with the same bit value of the most significant bit so that it ends up with a result that is consistent with using shifts as a divide by 2. (For example, -4 >> 1 == 0xFFFFFFFC >> 1 == 0xFFFFFFFE == -2.)
In your case it appears that the compiler implementor has chosen to use arithmetic shifts when applied to signed integers and so the result of shifting a negative value to the right remains a negative value. In terms of bit patterns 0xFFFF0000 >> 24 gives 0xFFFFFFFF.
Unless you are absolutely sure of what you are doing, it is best to perform bitwise operations only on unsigned types, as their internal representation can safely be treated as a collection of bits. You probably also want to make sure any numeric literals you use in that case are unsigned by appending the u suffix.
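A small sketch contrasting the two kinds of shift (the signed result assumes a compiler that chooses arithmetic shift, as most do):
#include <stdio.h>

int main(void)
{
    int s = -4;
    unsigned int u = 0xFFFFFFFCu;
    printf("%d\n", s >> 1);   /* -2 with arithmetic shift (implementation-defined) */
    printf("%X\n", u >> 1);   /* 7FFFFFFE: logical shift, fully defined */
    return 0;
}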
Right-shifting negative values (like b) can be defined in two different ways: logical shift, which pads the value with zeroes on the left (which yields a positive number when shifting a nonzero amount), and arithmetic shift, which pads the value with ones (always yielding a negative number). Which definition is used in C is implementation-defined, and your compiler apparently uses arithmetic shift, so b >> 24 is 0xFFFFFFFF.
b >> 24 gives 0xFFFFFFFF: a signed right shift of a negative number pads with 1s on the left.
result = (res << 8) | (b >> 24)
a = 0x00FF00FF = 0000 0000 1111 1111 0000 0000 1111 1111
b = 0xFFFF0000 = 1111 1111 1111 1111 0000 0000 0000 0000
~b = 0x0000FFFF = 0000 0000 0000 0000 1111 1111 1111 1111
~b & a = 0x000000FF = 0000 0000 0000 0000 0000 0000 1111 1111, = res
res << 8 = 0x0000FF00 = 0000 0000 0000 0000 1111 1111 0000 0000
b >> 24 = 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111
result = 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111
The golden rule: Never ever mix signed numbers with bitwise operators.
Change all ints to unsigned ints. Just as a precaution, change all literals to unsigned too.
#include <stdint.h>
uint32_t a = 0x00FF00FFu;
uint32_t b = 0xFFFF0000u;
uint32_t res = (~b & a);
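For completeness, a sketch of the full program with unsigned types; it prints the 0000FFFF the question expected:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0x00FF00FFu;
    uint32_t b = 0xFFFF0000u;
    uint32_t res = ~b & a;   /* 0x000000FF */
    /* b >> 24 is now a logical shift, giving 0x000000FF, not 0xFFFFFFFF */
    printf("%.8X\n", (unsigned)((res << 8) | (b >> 24)));   /* 0000FFFF */
    return 0;
}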