#include <stdio.h>

int main(void)
{
    int a = -3, b = 5, c;

    c = a | b;
    printf("%d ", c);
    c = a & b;
    printf("%d ", c);
    return 0;
}
The output is -3 5. Can someone explain how?
To understand the output, you need to be familiar with two's complement, which is how negative binary numbers are represented. The conversion from +x to -x is actually quite easy: complement all bits and add one.
Now just assume your ints are 8 bits wide (which is sufficient for examining 5 and -3):
5: 0000 0101
3: 0000 0011 => -3: 1111 1101
Now let's take a look at the bitwise OR:
1111 1101 | 0000 0101 = 1111 1101
Exactly the representation of -3.
And now the bitwise AND:
1111 1101 & 0000 0101 = 0000 0101
Exactly the binary representation of 5
It helps when you look at the binary representations alongside each other:
-3 == 1111 1111 1111 1101
+5 == 0000 0000 0000 0101
The thing to understand is that both | and & leave a bit alone if it has the same value in both operands. If the values differ (i.e. one operand has a 0 at that position and the other has a 1), one of them "wins", depending on whether you're using | or &.
When you OR those bits together, the 1s win. However, the 5 has a 0 in the same position as the 0 in -3, so that bit comes through the OR operation unchanged. The result (1111 1111 1111 1101) is still the same as -3.
When you do a bitwise AND, the zeroes win. However, the 1s in 5 match up with 1s in -3, so those bits come through the AND operation unchanged. The result is still 5.
Binary of 5 --is--> 0000 0101
3 --> 0000 0011 -- one's complement --> 1111 1100 -- two's complement (add 1) --> 1111 1101 == -3. This is how it is stored in memory.
Bitwise OR truth table (p OR q):

p      q      p | q
T(1)   T(1)   T(1)
T(1)   F(0)   T(1)
F(0)   T(1)   T(1)
F(0)   F(0)   F(0)

1111 1101 | 0000 0101 = 1111 1101 == -3
Bitwise AND truth table (p AND q):

p      q      p & q
T(1)   T(1)   T(1)
T(1)   F(0)   F(0)
F(0)   T(1)   F(0)
F(0)   F(0)   F(0)

1111 1101 & 0000 0101 = 0000 0101 == 5
Also, see - What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Get some paper and a writing implement of your choice
Write out -3 and 5 in binary (see two's complement for how to do negative numbers)
hint: | means OR, & means AND
-3 = 0xFFFD = 1111 1111 1111 1101 and 5 = 0101, so the bitwise OR does not change the first argument (you just overlay ones onto ones) and the result is still -3.
The bitwise AND takes the common ones between 1101 and 0101, which is 0101 = 5 :) There is no reason to consider all the leading ones in -3, since 5 = 0000 0000 0000 0101.
If you know all about two's complement, then you should know
how to write out -3 and 5 in two's complement, one bit at a time
how to compute the result of OR-ing the two values one bit position at a time
how to convert the result back to base 10
That should give you your answer for the first output; do the same with & for the second.
Ever heard of De Morgan's laws? The hint is in the link: the truth tables there embody the logic that major language compilers build these operators on.
More worrying is that this is basic CS101 material (sorry if that sounds condescending; it is not meant to be). If you are being shown C code without having been told anything about two's complement or bitwise logic, something is wrong with the course: that ground should have been covered first.
Related
You are asked to complete the following C function:
/* Return 1 when all bits of byte i of x equal 1; 0 otherwise. */
int allBits_ofByte_i(unsigned x, int i) {
    return _____________________;
}
My solution: !!(x&(0xFF << (i<<3)))
The correct answer to this question is:
!~(~0xFF | (x >> (i << 3)))
Can someone explain it?
Also, can someone take a look at my answer, is it right?
The expression !~(~0xFF | (x >> (i << 3))) is evaluated as follows.
i<<3 multiplies i by 8 to get a number of bits, which will be 0, 8, 16, or 24, depending on which byte the caller wants to test. This is actually the number of bits to ignore, as it is the number of bits that are less significant than the byte we're interested in.
(x >> ...) shifts the test value right to eliminate the low bits that we're not interested in. The 8 bits of interest are now the lowest 8 bits in the unsigned value we're evaluating. Note that other higher bits may or may not be set.
(~0xFF | ...) sets all 24 bits above the 8 we're interested in, but does not alter those 8 bits. (~0xFF is shorthand for 0xFFFFFF00 on a 32-bit int, and yes, arguably 0xFFu should be used.)
~(...) flips all bits. This will result in a value of zero if every bit was set, and a non-zero value in every other case.
!(...) logically negates the result. This will result in a value of 1 only if every bit was set during step 3. In other words, every bit in the 8 bits we were interested in was set. (The other 24 bits were set in step 3.)
The algorithm can be summed up as, set the 24 bits we're not interested in, then verify that 32 bits are set.
Your answer took a slightly different approach, which was to shift the 0xFF mask left rather than shift the test value right. That was my first thought for how to approach the problem too! But your logical negation doesn't verify that every bit is set, which is why your answer wouldn't produce correct results in all cases.
x is of unsigned integer type. Let's say that x is (often) 32 bit.
One byte consists of 8 bits. So x has 4 bytes in this case, numbered 0, 1, 2, or 3.
According to the solution, the byte numbering (by significance, regardless of the machine's endianness) can be imagined as follows:
x => bbbb bbbb bbbb bbbb bbbb bbbb bbbb bbbb
i => 3 2 1 0
I will try to break it down:
!~ ( ~0xFF | ( x >> (i << 3) ) )
i can be either 0, 1, 2 or 3. So i << 3 gives you either 0, 8, 16 or 24. (i << n is like multiplying i by 2^n; it shifts i left by n places, filling with zeros.)
Note that 0, 8, 16 and 24 are the starting positions of the byte segments: bits 0-7, 8-15, 16-23, 24-31.
This is what the shift is used for: x >> (i<<3) shifts x right by that amount (0, 8, 16 or 24 places), so that the byte denoted by the i parameter now occupies the rightmost bits.
At this point you have manipulated x so that the byte you are interested in occupies the rightmost 8 bits (the rightmost byte).
~0xFF is the inversion of 0000 0000 0000 0000 0000 0000 1111 1111 which gives you 1111 1111 1111 1111 1111 1111 0000 0000
The bitwise or operator is applied to the two results above, which would result in
1111 1111 1111 1111 1111 1111 abcd efgh - the letters being the bits of the corresponding byte of x.
~1111 1111 1111 1111 1111 1111 abcd efgh will turn into 0000 0000 0000 0000 0000 0000 ABCD EFGH - the capital letters being the inverse of the lower letters' values.
!0000 0000 0000 0000 0000 0000 ABCD EFGH is a logical operation: !n is 1 if n is 0, and 0 otherwise.
So you get a 1 if all the inverted bits of the corresponding byte were 0000 0000 (i.e. the byte is 1111 1111).
Otherwise you get a 0.
In the C programming language a result of 0 corresponds to a boolean false value. And a result different than 0 corresponds to a boolean true value.
#include <stdio.h>

int main(void)
{
    int a = 10, b = -2;
    printf("\n %d \n", a ^ b);
    return 0;
}
This program outputs -12. I could not understand how. Please explain.
0111 1110 -> 2's complement of -2
0000 1010 -> 10
---------
0111 0100
This number seems to be greater than -12, and positive. So how did I get the output -12?
To find the two's complement representation of a negative integer, first find the binary representation of its magnitude. Then flip all its bits, i.e., apply the bitwise NOT operator ~. Then add 1 to it. Therefore, we have
2 --> 0000 0000 0000 0010
~2 --> 1111 1111 1111 1101 // flip all the bits
~2 + 1 --> 1111 1111 1111 1110 // add 1
Therefore, the binary representation of -2 in two's complement is
1111 1111 1111 1110
Now, assuming 16-bit ints for brevity, the representation of a and b in two's complement is -
a --> 0000 0000 0000 1010 --> 10
b --> 1111 1111 1111 1110 --> -2
a^b --> 1111 1111 1111 0100 --> -12
The operator ^ is the bitwise XOR (exclusive OR) operator. It operates on the corresponding bits of a and b and evaluates to 1 only when the two bits differ; otherwise it evaluates to 0.
Seems legit!
1111 1110 (-2)
xor
0000 1010 (10)
=
1111 0100 (-12)
^ is the bitwise XOR, not power
a = 10 = 0000 1010
b = -2 = 1111 1110
──────────────────
a^b = 1111 0100 = -12
(int) -2 = 0xfffffffe
(int) 10 = 0x0000000a
0xfffffffe ^ 0x0000000a = 0xfffffff4 = (int) -12
I have the following code in C:
unsigned int a = 60; /* 60 = 0011 1100 */
int c = 0;
c = ~a; /*-61 = 1100 0011 */
printf("c = ~a = %d\n", c );
c = a << 2; /* 240 = 1111 0000 */
printf("c = a << 2 = %d\n", c );
The first output is -61 while the second one is 240. Why does the first printf compute the two's complement of 1100 0011 while the second one just converts 1111 0000 to its decimal equivalent?
You have assumed that an int is only 8 bits wide. This is probably not the case on your system, which is likely to use 16 or 32 bits for int.
In the first example, all the bits are inverted. This is actually a straight inversion, not two's complement:
1111 1111 1111 1111 1111 1111 1100 0011 (32-bit)
1111 1111 1100 0011 (16-bit)
In the second example, when you shift it left by 2, the highest-order bit is still zero. You have misled yourself by depicting the numbers as 8 bits in your comments.
0000 0000 0000 0000 0000 0000 1111 0000 (32-bit)
0000 0000 1111 0000 (16-bit)
Try to avoid doing bitwise operations on signed integers -- they can easily lead you into implementation-defined or undefined behavior.
The situation here is that you're taking unsigned values and assigning them to a signed variable. For ~60 the result is out of int's range, so the conversion is implementation-defined. You see -61 because the bit pattern of ~60 is also the two's-complement representation of -61. On the other hand 60 << 2 comes out as expected, because 240 has the same representation as a signed and as an unsigned integer.
I have a sample question from a test at my school. What is the simplest way to solve it on paper?
The question:
The run-time system uses two's complement representation of integers. Data type int is 32 bits, data type short is 16 bits. What does the printf show? (The answer is ffffe43c.)
short int x = -0x1bc4; /* !!! short */
printf ( "%x", x );
Let's do it in two steps: 0x1bc4 = 0x1bc3 + 1.
First, on a 32-bit value:
0 - 1 = 0xffffffff
then
0xffffffff - 0x1bc3
which can be done digit by digit:
  ffffffff
- 00001bc3
----------
  ffffe43c
and you get the result shown.
Since your x is negative, its stored bit pattern is the two's complement of its magnitude:
-x is represented as ~x + 1
-0x1BC4 => ~(0x1BC4) + 1 => 0xE43C
0x1BC4 = 0001 1011 1100 0100
~0X1BC4 =1110 0100 0011 1011
+1 = [1]110 0100 0011 1100 (brackets around MSB)
which is how your number is represented internally.
Now %x expects an (unsigned) int, so your short is promoted to int before the call; that promotion sign-extends the value, copying the MSB into the upper 16 bits, which yields:
1111 1111 1111 1111 1110 0100 0011 1100 == 0xFFFFE43C
When I complement 1 (~1), I get the output as -2. How is this done internally?
I first assumed that the bits are inverted, so 0001 becomes 1110, and then 1 is added so it becomes 1111, which is stored. How is the number then retrieved?
Well, no. When you complement 1, you just invert the bits:
1 == 0b00000001
~1 == 0b11111110
And that's -2 in two's complement, which is the way your computer internally represents negative numbers. See http://en.wikipedia.org/wiki/Two's_complement but here are some examples:
-1 == 0b11111111
-2 == 0b11111110
....
-128== 0b10000000
+127== 0b01111111
....
+2 == 0b00000010
+1 == 0b00000001
0 == 0b00000000
What do you mean by "when I complement 1 (~1)"? There is what is called ones' complement, and there is what is called two's complement. Two's complement is more common (it is used on most computers), as it allows negative numbers to be added and subtracted using the same algorithm as positive numbers.
Two's complement is created by taking the binary representation of the positive number, switching every bit from 1 to 0 and from 0 to 1, and then adding one:
5 0000 0101
4 0000 0100
3 0000 0011
2 0000 0010
1 0000 0001
0 0000 0000
-1 1111 1111
-2 1111 1110
-3 1111 1101
-4 1111 1100
-5 1111 1011
etc.