~ Unary Operator and Bitwise Tests Give Negative Results - c

I was studying bitwise operators and they make sense until the unary ~ (one's complement) operator is used with them. Can anyone explain to me how this works?
For example, these make sense however the rest of the computations aside from these do not:
1&~0 = 1 (~0 is 1 -> 1&1 = 1)
~0^~0 = 0 (~0 is 1 -> 1^1 = 0)
~1^0 = 1 (~1 is 0 -> 0^1 = 1)
~0&1 = 1 (~0 is 1 -> 1&1 = 1)
~0^~1 = 1 (~0 is 1, ~1 is 0 -> 1^0 = 1)
~1^~1 = 0 (~1 is 0 -> 0^0)
The rest of the results produced are negative (or a very large number if unsigned) or contradict the logic I am aware of. For example:
0&~1 = 0 (~1 = 0 therefore 0&0 should equal 0 but they equal 1)
~0&~1 = -2
~1|~0 = -1
etc. Is there anywhere you can point me to learn about this?

They actually do make sense when you expand them out a little more. A few things to be aware of though:
Bitwise AND yields a 1 only when both bits involved are 1. Otherwise, it yields 0. 1 & 1 = 1, 0 & anything = 0.
Bitwise OR yields a 1 when any of the bits in that position are a 1, and 0 only if all bits in that position are 0. 1 | 0 = 1, 1 | 1 = 1, 0 | 0 = 0.
Signed numbers are generally done as two's complement (though a processor does not have to do it that way!). Remember with two's complement, you invert and add 1 to get the magnitude when the highest bit position is a 1.
Assuming a 32-bit integer, you get these results:
0 & ~1 = 0 & 0xFFFFFFFE = 0
~0 & ~1 = 0xFFFFFFFF & 0xFFFFFFFE = 0xFFFFFFFE (0x00000001 + 1) = -2
~1 | ~0 = 0xFFFFFFFE | 0xFFFFFFFF = 0xFFFFFFFF (0x00000000 + 1) = -1

0&~1 = 0 (~1 = 0 therefore 0&0 should equal 0 but they equal 1)
~1 equals -2, not 0. If you flip all the bits of a two's complement number, you multiply it by -1 and subtract 1 from the result. Regardless, 0 has all bits clear, so the result of the & is going to be 0 anyway.
~0&~1 = -2
~0 has all bits set so ~0&~1 is just ~1. Which is -2.
~1|~0 = -1
~0 has all bits set, so the result of the | is ~0 (= -1) no matter what it is OR'd with.

~1 = 0 - No it's not. It's equal to -2. Let's take an eight-bit two's complement number as an example. The decimal number 1 has the representation 0000 0001, so ~1 is 1111 1110, which is the two's complement representation of -2.

Related

bitwise operation for c confusion

Assuming that x is a positive integer, the following function returns 1 if x is a certain type
of values or it returns 0 otherwise.
int mystery(int x) {
    return !((x-1) & x);
}
What does mystery(20) return?
May I ask how we should approach this type of question?
My idea is to express x in binary and do bitwise operations on it.
Do correct me if I am wrong, thanks!
Let's work from the outside in.
!(expression)
you will get a 0 if expression is true, that is, not zero,
and you will get a 1 if expression is false, that is, zero.
So when will expression be non-zero, giving a zero as a result? Whenever (x-1) has some bits in common with x.
What are examples?
0 - 1 = 0xfff... & 0, no bits in common, returns 1
1 - 1 = 0 & 1, no bits in common, returns 1
2 - 1 = 1 & 2, no bits in common, returns 1
3 - 1 = 2 & 3, bits in common, returns 0
4 - 1 = 3 & 4, no bits in common, returns 1
5 - 1 = 4 & 5, bits in common, returns 0
6 - 1 = 5 & 6, bits in common, returns 0
7 - 1 = 6 & 7, bits in common, returns 0
8 - 1 = 7 & 8, no bits in common, returns 1
It looks to me like we can say it returns 1 when the binary representation has exactly zero or one bits turned on in it, that is, when x is zero or a power of two:
0 or 1 or 10 or 100

K&R C programming language exercise 2-9

I don't understand the exercise 2-9, in K&R C programming language,
chapter 2, 2.10:
Exercise 2-9. In a two's complement number system, x &= (x-1) deletes the rightmost 1-bit in x . Explain why. Use this observation to write a faster version of bitcount .
the bitcount function is:
/* bitcount: count 1 bits in x */
int bitcount(unsigned x)
{
    int b;
    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}
The function checks whether the rightmost bit is a 1, counts it if so, and then shifts it out.
I can't understand why x&(x-1) deletes the right most 1-bit?
For example, suppose x is 1010 and x-1 is 1001 in binary, and x&(x-1) would be 1011, so the rightmost bit would be there and would be one, where am I wrong?
Also, the exercise mentioned two's complement, does it have something to do with this question?
Thanks a lot!!!
First, you need to believe that K&R are correct.
Second, you may have some misunderstanding of the words.
Let me clarify it again for you. The rightmost 1-bit does not mean the rightmost bit, but the rightmost bit which is 1 in the binary form.
Let's arbitrarily assume that x is xxxxxxx1000 (each x can be 0 or 1). Then, counting from the right, the fourth bit is the "rightmost 1-bit". On the basis of this understanding, let's continue with the problem.
Why does x &= (x-1) delete the rightmost 1-bit?
In a two's complement number system, -1 is represented with an all-1s bit pattern.
So x-1 is actually x+(-1), which is xxxxxxx1000+11111111111. Here comes the tricky point: the 0s to the right of the rightmost 1-bit become 1s (0 + 1 = 1, no carry), the rightmost 1-bit plus 1 becomes 0 with a carry of 1, and that carry proceeds all the way to the leftmost bit and finally falls off as an overflow. Meanwhile, every 'x' bit stays 'x', because 'x' + '1' + '1' (carry) yields 'x' and a carry of 1.
So x-1 is xxxxxxx0111, and x & (x-1) = xxxxxxx1000 & xxxxxxx0111 = xxxxxxx0000, which deletes the rightmost 1-bit.
Hope you can understand it now.
Thanks.
Here is a simple way to explain it. Let's arbitrarily assume that number Y is xxxxxxx1000 (x can be 0 or 1).
xxxxxxx1000 - 1 = xxxxxxx0111
xxxxxxx1000 & xxxxxxx0111 = xxxxxxx0000 (See, the "rightmost 1" is gone.)
So the number of repetitions of Y &= (Y-1) before Y becomes 0 will be the total number of 1's in Y.
Why does x & (x-1) delete the rightmost 1-bit? Just try and see:
If the rightmost bit is 1, x has a binary representation of a...b1 and x-1 is a...b0, so the bitwise AND gives a...b0: the common bits are left unchanged by the AND, and 1 & 0 is 0.
Otherwise, x has a binary representation of a...b10...0; x-1 is a...b01...1, and for the same reason as above x & (x-1) will be a...b00...0, again clearing the rightmost 1-bit.
So instead of scanning all the bits to find which ones are 0 and which are 1, you just iterate the operation x = x & (x-1) until x is 0: the number of steps will be the number of 1-bits. It is more efficient than the naive implementation because on average only about half of the bits are set, so it takes about half as many steps.
Example of code:
int bitcount(unsigned int x) {
    int nb = 0;
    while (x != 0) {
        x &= x - 1;  /* delete the rightmost 1-bit */
        nb++;
    }
    return nb;
}
I know I'm already very late (≈ 3.5 years), but your example has a mistake.
x = 1010 = 10
x - 1 = 1001 = 9
1010 & 1001 = 1000
So as you can see, it deleted the rightmost 1-bit in 10.
7 = 111
6 = 110
5 = 101
4 = 100
3 = 011
2 = 010
1 = 001
0 = 000
Observe that in every case, the bit at the position of the rightmost 1 in a number is 0 in that number minus one. Thus ANDing x with x-1 resets (i.e. sets to 0) the rightmost 1-bit.
7 & 6 = 111 & 110 = 110 = 6
6 & 5 = 110 & 101 = 100 = 4
5 & 4 = 101 & 100 = 100 = 4
4 & 3 = 100 & 011 = 000 = 0
3 & 2 = 011 & 010 = 010 = 2
2 & 1 = 010 & 001 = 000 = 0
1 & 0 = 001 & 000 = 000 = 0

Understanding the bitwise negation of signed numbers and negative number representation in a typical system

Suppose I have a signed char num = 15 and I do num = ~num. Then, as per this post, we will get -16.
~(00001111) = (11110000)
But if I consider the MSB as a sign bit, shouldn't the answer be -112? How come this results in -16? Why are the second and third set bits from the left being ignored?
Can anyone please clarify.
EDIT
I want a more detailed explanation of why the following program resulted in -16 and not -112:
#include <stdio.h>

int main()
{
    char num = 15;
    num = ~num;
    printf("%d\n", num);
    return 0;
}
I expected it as 1(1110000) = -(1110000) = -112
~(00001111) = (11110000)
What you're doing is using the MSb (Most Significant bit) as a flag to decide to put a '-' sign, and then reading the rest of the bits normally. Instead, the MSb is a flag to do two things: put a '-' sign, and then NOT the value and add one (two's complement), before printing out the rest of the bits.
This comes from the overflow/underflow nature of fixed-length bit values:
00000010 - 1 = 00000001 (2-1=1)
00000001 - 1 = 00000000 (1-1=0)
00000000 - 1 = 11111111 (0-1=-1)
C allows for three different representations of signed integers, but the most common is "two's complement". However, I'll briefly discuss "one's complement" as well to illustrate how there is a relationship between them.
One's complement
One's complement integers are split into sign and value bits. To use the 8-bit representation of the integer 19 as an example:
S|64 32 16 8 4 2 1
0| 0 0 1 0 0 1 1 = 19
Using the bitwise complement operator ~ flips all of the bits of the integer, including the sign bit:
S|64 32 16 8 4 2 1
1| 1 1 0 1 1 0 0 = ~19
When the sign bit is set, the interpretation of 1 and 0 bits is reversed (0=on, 1=off), and the value is considered negative. This means the value above is:
-(16 + 2 + 1) = -19
Two's complement
Unlike one's complement, an integer is not divided into a sign bit and value bits. Instead, what is regarded as a sign bit adds -2^(b - 1) to the rest of the value, where b is the number of bits. To use the example of an 8-bit representation of ~19 again:
-128 64 32 16 8 4 2 1
1 1 1 0 1 1 0 0 = ~19
-128 + 64 + 32 + 8 + 4
= -128 + 108
= -(128 - 108)
= -20
The relationship between them
The value of -19 is 1 more than -20 arithmetically, and this follows a generic pattern in which any value of -n in two's complement is always one more than the value of ~n, meaning the following always holds true for a value n:
-n = ~n + 1
~n = -n - 1 = -(n + 1)
This means that you can simply look at the 5-bit value 15, negate it and subtract 1 to get ~15:
~15 = (-(15) - 1)
= -16
-16 for a 5-bit value in two's complement is represented as:
-16 8 4 2 1
1 0 0 0 0 = -16
Flipping the bits using the ~ operator yields the original value 15:
-16 8 4 2 1
0 1 1 1 1 = ~(-16) = -(-16 + 1) = -(-15) = 15
Restrictions
I feel I should mention arithmetic overflow regarding two's complement. I'll use the example of a 2-bit signed integer to illustrate. There are 2^2=4 values for a 2-bit signed integer: -2, -1, 0, and 1. If you attempt to negate -2, it won't work:
-2 1
1 0 = -2
Writing +2 in plain binary yields 1 0, the same as the representation of -2 above. Because of this, +2 is not possible for a 2-bit signed integer. Using the equations above also reveals the same issue:
// Flip the bits to obtain the value of ~(-2)
~(-2) = -(-2 + 1)
~(-2) = 1
// Substitute 1 in place of ~(-2) to find the result of -(-2)
-(-2) = ~(-2) + 1
-(-2) = 1 + 1
-(-2) = 2
While this makes sense mathematically, the fact is that 2 is outside the representable range of values (only -2, -1, 0, and 1 are allowed). That is, adding 1 to 01 (1) results in 10 (-2). There's no way to magically add an extra bit in hardware to yield a new sign bit position, so instead you get an arithmetic overflow.
In more general terms, you cannot negate an integer in which only the sign bit is set with a two's complement representation of signed integers. On the other hand, you cannot even represent a value like -2 in a 2-bit one's complement representation because you only have a sign bit, and the other bit represents the value 1; you can only represent the values -1, -0, +0, and +1 with one's complement.

Operator &~31 of C Program

I want to ask about C operator from this code. My friends ask it, but I never seen this operator:
binfo_out.biSizeImage = ( ( ( (binfo_out.biWidth * binfo_out.biBitCount) + 31) & ~31) / 8) * abs(out_bi.biHeight);
What this operator & ~31 mean? anybody can explain this?
The & operator is a bitwise AND. The ~ operator is a bitwise NOT (i.e. inverts the bits). As 31 is binary 11111, ~31 is binary 1111111....111100000 (i.e. a number which is all ones, but has five zeroes at the end). Anding a number with this thus clears the least significant five bits, which (if you think about it) rounds down to a multiple of 32.
What does the whole thing do? Note it adds 31 first. This has the effect that the whole thing rounds something UP to the next multiple of 32.
This might be used to calculate (for instance), how many bits are going to be used to store something if you can only use 32 bit quantities to store them, as there is going to be some wastage in the last 32 bit number.
31 in binary representation is 11111, so ~31 is 00000 preceded by all 1's; ANDing with it makes the last 5 bits zero, i.e. it masks off the last 5 bits.
Here ~ is the NOT operator, i.e. it gives the one's complement, and & is the AND operator.
& is the bitwise AND operator. It and's every corresponding bit of two operands on its both sides. In an example, it does the following:
Let char be a type of 8 bits.
unsigned char a = 5;
unsigned char b = 12;
Their bit representation would be as follows:
a --> 0 0 0 0 0 1 0 1 // 5
b --> 0 0 0 0 1 1 0 0 // 12
And the bitwise AND of those would be:
a & b --> 0 0 0 0 0 1 0 0 // 4
Now, the ~ is the bitwise NOT operator, and it negates every single bit of the operand it prefixes. In an example, it does the following:
With the same a from the previous example, the ~a would be:
~a --> 1 1 1 1 1 0 1 0 // 250
Now with all this knowledge, x & ~31 would be the bitwise AND of x and ~31, where the bit representation of ~31 looks like this:
~31 --> 1111 1111 1111 1111 1111 1111 1110 0000 // -32 on my end
So the result keeps whatever x has in its bits, except that its last 5 bits are cleared.
& ~31
means bitwise and of the operand on the left of & and a bitwise not of 31.
http://en.wikipedia.org/wiki/Bitwise_operation
The number 31 in binary is 11111 and ~ in this case is the unary one's complement operator. So assuming a 4-byte int:
~31 = 11111111 11111111 11111111 11100000
The & is the bitwise AND operator. So you're taking the value of:
((out_bi.biWidth * out_bi.biBitCount) + 31)
And performing a bitwise AND with the above value, which is essentially blanking the 5 low-order bits of the left-hand result.

Changing the LSB to 1 in a 4 bit integer via C

I am receiving a number N where N is a 4-bit integer and I need to change its LSB to 1 without changing the other 3 bits in the number using C.
Basically, all must read XXX1.
So let's say n = 2; the binary would be 0010. I would change the LSB to 1, making the number 0011.
I am struggling with finding a combination of operations that will do this. I am working with: !, ~, &, |, ^, <<, >>, +, -, =.
This has really been driving me crazy and I have been playing around with >>/<< and ~ and starting out with 0xF.
Try
number |= 1;
This should set the LSB to 1 regardless of what the number is. Why? Because the bitwise OR (|) operator does exactly what its name suggests: it ORs the two numbers bit by bit. So if you have, say, 1010b and 1b (10 and 1 in decimal), the operator does this:
1 0 1 0
OR 0 0 0 1
= 1 0 1 1
And that's exactly what you want.
For your information, the
number |= 1;
statement is equivalent to
number = number | 1;
Use x = x | 0x01; to set the LSB to 1
A visualization
? ? ? ? ? ? ? ?
OR
0 0 0 0 0 0 0 1
----------------------
? ? ? ? ? ? ? 1
Therefore other bits will stay the same except the LSB is set to 1.
Use the bitwise or operator |. It looks at two numbers bit by bit, and returns the number generated by performing an OR with each bit.
int n = 2;
n = n | 1;
printf("%d\n", n); // prints the number 3
In binary, 2 = 0010, 3 = 0011, and 1 = 0001
0010
OR 0001
-------
0011
If n is not 0
n | !!n
works.
If n is 0, then !n is what you want.
UPDATE
The fancy one liner :P
n = n ? n | !!n : !n;
