I'm learning bitwise operations and I came across the XOR operation:
#include <stdio.h>
#include <conio.h>
int main()
{
    printf("%d\n", 10 ^ 9);
    getch();
    return 0;
}
the binary form of 10 ---> 1 0 1 0
the binary form of 9 ---> 1 0 0 1
In XOR, the output is 1 when one input is 1 and the other is 0.
So the output of 10 ^ 9 is 0 0 1 1 => 3
But when I try -10 ^ 9, I get the output -1.
#include <stdio.h>
#include <conio.h>
int main()
{
    printf("%d\n", -10 ^ 9);
    getch();
    return 0;
}
Can someone explain to me how it is -1?
Thanks in advance to everyone who helps!
Because the operator precedence of XOR is lower than that of the unary minus.
That is, -10 ^ 9 is equal to (-10) ^ 9.
-10 ^ 9 is not equal to -(10 ^ 9).
-10 is 11110110(2) and 9 is 00001001(2).
11110110(2) XOR 00001001(2) = 11111111(2)
11111111(2) is -1 in 2's complement representation.
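You can verify both the precedence and the result with a small test program (a minimal sketch, assuming the usual 32-bit two's-complement int):

#include <stdio.h>

int main()
{
    printf("%d\n", -10 ^ 9);   /* -1: parsed as (-10) ^ 9 */
    printf("%d\n", -(10 ^ 9)); /* -3: the negation of 10 ^ 9 */
    return 0;
}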
Continuing from the comment.
In a two's complement system, negative values are represented with the sign bit extended to the full width of the type. Where 10 is 1010 in binary, the two's-complement representation of -10 for a 4-byte integer is:
11111111111111111111111111110110
(which has an unsigned value of 4294967286)
Now see what happens when you XOR with 9 (binary 1001):
  11111111111111111111111111110110
^ 00000000000000000000000000001001
----------------------------------
  11111111111111111111111111111111 (-1 for a signed integer)
The XOR changes only the low four bits, giving 1111 there; the sign-extended high bits of -10 pass through unchanged, so the result is 11111111111111111111111111111111, which is -1 for a signed int.
Binary representation of negative numbers uses a concept called two's complement. Basically, every bit is first flipped and then you add 1.
For example, the 8-bit representation of positive 10 would be 00001010. To make -10, first you flip the bits: 11110101, and then you add 1: 11110101 + 1 = 11110110.
The binary representation of -10 is therefore 11110110.
If you XOR this value with 9, it then looks like this: 11110110 XOR 00001001 = 11111111.
11111111 is the two's complement of 1, therefore the final answer is -1.
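A quick way to convince yourself of the flip-then-add-one rule is to let the compiler do it (a minimal sketch, assuming a two's-complement machine, which is what essentially all current hardware uses):

#include <stdio.h>

int main()
{
    int x = 10;
    /* Two's-complement negation: flip every bit, then add 1. */
    printf("%d\n", -x);     /* -10 */
    printf("%d\n", ~x + 1); /* also -10 */
    printf("%d\n", ~x);     /* -11: the flip alone falls one short */
    return 0;
}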
The minus sign '-' has higher precedence than the XOR operator '^', so first we find the value of -10.
The binary equivalent of 10 is 1010, which represented in 8 bits becomes 0000 1010.
For signed numbers, we take the 2's complement of 10.
First find the 1's complement of 0000 1010:
0000 1010 ----- 1's complement ---- 1111 0101
Now find the 2's complement by adding 1 to the 1's complement result:
1's complement --------- 1111 0101
Adding 1       ---------         1
2's complement --------- 1111 0110
Now perform -10^9 (the XOR operator gives 1 when the two bits differ, otherwise it gives 0):
-10 ------- 1111 0110
9 ------- 0000 1001
--------------------------
-10^9 ------- 1111 1111
-10^9 = 1111 1111 which is equal to the -1 in signed numbers.
That's why the output is -1.
Look at the following code:
#include <stdio.h>

int main()
{
    int num, sum;
    scanf("%d", &num);
    sum = num - (num&-num);
    printf("%d", sum);
    return 0;
}
The line sum = num - (num&-num) yields the value of the given number with the last 1 in its binary form removed. For example, if num is equal to 6 (110), it gives the value without the last 1, in this case 100, which is equal to 4.
My question is: how does &- work, and how could we do the same thing in the octal numeral system?
What you have here is two separate operators: the bitwise AND operator & and the unary negation operator -. So the line in question is actually:
sum = num - (num & -num);
Assuming integers are stored in two's complement representation, negating a value means inverting all bits and adding one.
In the case that num is 6, (binary 00000110), -num is -6 which is 11111010. Then performing a bitwise AND between these two values results in 00000010 in binary which is 2 in decimal, so num - (num & -num) is 6 - 2 which is 4.
There is no &- operator. num - (num&-num) is num - (num & -num), where & performs the bitwise AND of num and its negation, -num.
Consider some number in binary, say 0011 0011 1100 1100₂ (spaces used for easy visualization). The negative of this is −0011 0011 1100 1100₂. However, we commonly represent negative numbers using two's complement, in which a power of two is added. For 16-bit formats, we add 2^16, which is 1 0000 0000 0000 0000₂. So, to represent −0011 0011 1100 1100₂, we use 1 0000 0000 0000 0000₂ + −0011 0011 1100 1100₂ = 1100 1100 0011 0100₂.
Observe what happens in this subtraction, column by column from the right:
In the rightmost column, we have 0−0, which yields 0.
The next column also has 0−0, yielding 0.
The next column has 0−1. This yields 1 with a borrow from the column to the left.
The next column has 0−1−1 (where the second −1 is the borrow). This yields 0 with another borrow.
Because the minuend (the number being subtracted from) has all zeros below that initial 1, the borrow cannot be satisfied until it reaches that 1, so it propagates through all of the bits.
Thus the result is:
Starting at the right, 0s remain 0s.
The first 1 stays a 1 but starts a borrow chain.
Because of the borrow, all bits to the left of that flip from 0 to 1 or 1 to 0.
Thus the subtrahend (the number being subtracted) and the result differ in all bits to the left of the first 1. And they both have 0s to the right of the first 1. So the only bit they both have set is that first 1.
Therefore, when two’s complement is in use, the result of num & -num is the lowest 1 bit that is set in num.
Then num - (num & -num) subtracts this bit from num, leaving it as num with its lowest 1 bit changed to 0.
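To see it in action, here is a minimal sketch (assuming two's complement); the last line also shows num & (num - 1), a well-known equivalent idiom for clearing the lowest set bit:

#include <stdio.h>

int main()
{
    int num = 6;                     /* binary 110 */
    int lowest = num & -num;         /* isolates the lowest set bit: 010 = 2 */
    printf("%d\n", num - lowest);    /* 4, i.e. binary 100 */
    printf("%d\n", num & (num - 1)); /* also 4: clears the lowest set bit directly */
    return 0;
}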
I'm having trouble understanding why c equals -61 on the following program:
#include <stdio.h>

int main()
{
    unsigned int a = 60; // 60 = 0011 1100
    unsigned int b = 13; // 13 = 0000 1101
    int c = 0;
    c = ~a; // -61 = 1100 0011
    printf("Line 4 - Value of c is %d\n", c);
    return 0;
}
I do understand how the NOT operator works on 0011 1100 (the solution being 1100 0011). But I'm not sure why the decimal number is increased by 1. Is this some sort of type conversion from unsigned int (from a) into signed int (from c)?
Conversion from a positive to a negative number in two's complement (the standard signed format) consists of a bitwise inversion followed by adding one.
Note that for simplicity I am using a single signed byte.
So if 60 = 0011 1100
then -60 = 1100 0011 + 1
         = 1100 0100
And for a signed byte, the most significant bit has negative weight, so
-60 = -128 + 64 + 4
Your c, however, is just the bitwise inversion without the added 1, so it is one less than -60: c = ~60 = 1100 0011 = -128 + 64 + 2 + 1 = -61. The extra 1 you are asking about is exactly the +1 step of two's-complement negation; inversion alone always gives -x - 1.
This is easy to see when you look at converting 0 to -0: invert 00000000 and you get 11111111, and adding one gets you back to 00000000. Do the same with 1 to get -1: you get 11111111, the negative number closest to zero.
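A short sketch of that identity at work (assuming the usual two's-complement wraparound when the unsigned result is stored in an int):

#include <stdio.h>

int main()
{
    unsigned int a = 60;
    int c = ~a; /* bitwise NOT only, no +1 */
    /* In two's complement, ~x == -(x + 1). */
    printf("%d %d\n", c, -((int)a + 1)); /* prints: -61 -61 */
    return 0;
}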
I don't fully understand how the "-" operator affects the following code:
#define COMP(x) ((x) & -(x))
unsigned short a = 0xA55A;
unsigned short b = 0x0400;
Could someone explain what COMP(a) and COMP(b) are and how they are calculated?
(x) & -(x) is equal to the lowest bit set in x when using 2's complement for representing binary numbers.
This means COMP(a) == 0x0002; and COMP(b) == 0x0400;
the "-" sign negative the value of the short parameter in a two's complement way. (in short, turn all 0 to 1, 1 to 0 and then add 1)
so 0xA55A in binary is 1010 0101 0101 1010
then -(0xA55A) in binary is 0101 1010 1010 0110
running & between them gives you 0000 0000 0000 0010
-(x) negates x. Negation in two's complement is the same as ~x+1 (bitflip+1)
as the code below shows:
#include <stdio.h>
#include <stdint.h>
int prbits(uintmax_t X, int N /* how many bits to print (from the low end) */)
{
    int n = 0;
    uintmax_t shift = (uintmax_t)1 << (N - 1);
    for (; shift; n++, shift >>= 1)
        putchar((X & shift) ? '1' : '0');
    return n;
}

int main()
{
    prbits(0xA55A, 16), puts("");
    //1010010101011010
    prbits(~0xA55A, 16), puts("");
    //0101101010100101
    prbits(~0xA55A + 1, 16), puts("");
    //0101101010100110
    prbits(-0xA55A, 16), puts("");
    //0101101010100110 (same)
}
When you bit-and a value with its bit-flipped value, you get 0. When you bit-and a value with its bit-flipped value + 1 (= its negated value), you get the first nonzero bit from the right.
Why? If the rightmost bit of ~x is 1, adding 1 to it yields 0 with carry = 1. This repeats as long as the rightmost bits are 1, zeroing them out. Once you hit a 0 in ~x (which is a 1 in x, since you're adding 1 to ~x), it turns into 1 with carry == 0, so the addition ends. To the right of that position you have zeros, to the left you have bit-flips. Bit-and this with the original and you get the first nonzero bit from the right.
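A standalone check of that claim, printing x & -x for the 0xA55A example (a minimal sketch, assuming two's complement):

#include <stdio.h>
#include <stdint.h>

int main()
{
    uint16_t x = 0xA55A;
    uint16_t low = x & (uint16_t)-x; /* only the first nonzero bit from the right survives */
    for (uint16_t shift = 1u << 15; shift; shift >>= 1)
        putchar((low & shift) ? '1' : '0');
    puts("");
    //0000000000000010
    return 0;
}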
Basically, what COMP does is AND the two operands together, one in its original form and one being the negation of it.
CPUs typically handle signed numbers using 2's complement, which splits the range of an n-bit numeric type in two: the maximum positive value is 2^(n-1) - 1, and the minimum negative value is -2^(n-1).
The MSB (left-most bit) represents the sign of the numeric data,
e.g.
0111 -> +7
0110 -> +6
0000 -> +0
1111 -> -1
1110 -> -2
1100 -> -4
So what COMP does, by ANDing the positive and negative versions of the numeric data, is isolate the lowest set bit (the right-most 1).
I wrote some sample code that can help you understand here:
http://coliru.stacked-crooked.com/a/935c3452b31ba76c
How do I subtract IEEE 754 numbers?
For example: 0,546875 - 32.875...
-> 0,546875 is 0 01111110 10001100000000000000000 in IEEE-754
-> -32.875 is 1 10000111 01000101111000000000000 in IEEE-754
So how do I do the subtraction? I know I have to make both exponents equal, but what do I do after that? Take the 2's complement of the -32.875 mantissa and add it to the 0.546875 mantissa?
Really not any different than how you do it with pencil and paper. Okay, a little different:
123400 - 5432 = 1.234*10^5 - 5.432*10^3
The bigger number dominates: shift the smaller number's mantissa off into the bit bucket until the exponents match.
1.234*10^5 - 0.05432*10^5
then perform the subtraction with the mantissas
1.234 - 0.05432 = 1.17968
1.17968 * 10^5
Then normalize (which in this case it already is).
That was with base 10 numbers.
In IEEE float, single precision
123400 = 0x1E208 = 0b11110001000001000
11110001000001000.000...
To normalize that, we have to shift the binary point 16 places to the left, so
1.1110001000001000 * 2^16
The exponent is biased, so we add 127 to 16 and get 143 = 0x8F. It is a positive number, so the sign bit is 0. Now we start to build the IEEE floating point number: the leading 1 before the binary point is implied and not stored in single precision, so we get rid of it and keep the fraction.
sign bit, exponent, mantissa
0 10001111 1110001000001000...
0100011111110001000001000...
0100 0111 1111 0001 0000 0100 0...
0x47F10400
And if you write a program to see what a computer thinks 123400 is, you get the same thing:
0x47F10400 123400.000000
So we know the exponent and mantissa for the first operand.
Now the second operand
5432 = 0x1538 = 0b0001010100111000
Normalize: shift the binary point 12 places left.
1010100111000.000
1.010100111000000 * 2^12
The exponent is biased add 127 and get 139 = 0x8B = 0b10001011
Put it all together
0 10001011 01010011100000000000000
01000101101010011100000000000000
0100 0101 1010 1001 1100 0000 0000 0000
0x45A9C000
And a computer program/compiler gives the same
0x45A9C000 5432.000000
Now to answer your question. Using the component parts of the floating point numbers, I have restored the implied 1 here because we need it
0 10001111 111100010000010000000000 - 0 10001011 101010011100000000000000
We have to line up our binary points, just like lining up the decimal points in grade school, before we can subtract. So in this context you have to shift the number with the smaller exponent right, tossing mantissa bits off the end, until the exponents match:
0 10001111 111100010000010000000000 - 0 10001011 101010011100000000000000
0 10001111 111100010000010000000000 - 0 10001100 010101001110000000000000
0 10001111 111100010000010000000000 - 0 10001101 001010100111000000000000
0 10001111 111100010000010000000000 - 0 10001110 000101010011100000000000
0 10001111 111100010000010000000000 - 0 10001111 000010101001110000000000
Now we can subtract the mantissas. If the sign bits match, then we actually subtract; if they don't match, then we add. They match, so this will be a subtraction.
Computers perform a subtraction using addition logic, inverting the second operand on the way into the adder and asserting the carry-in bit, like this:
                         1
  111100010000010000000000
+ 111101010110001111111111
==========================
And now, just like with paper and pencil, let's perform the add:
 1111000100000111111111111
  111100010000010000000000
+ 111101010110001111111111
==========================
  111001100110100000000000
or do it with hex on your calculator
111100010000010000000000 = 1111 0001 0000 0100 0000 0000 = 0xF10400
111101010110001111111111 = 1111 0101 0110 0011 1111 1111 = 0xF563FF
0xF10400 + 0xF563FF + 1 = 0x1E66800
1111001100110100000000000 = 1 1110 0110 0110 1000 0000 0000 = 0x1E66800
A little bit about how the hardware works: since this was really a subtract using the adder, we also invert the carry-out bit (or on some computers they leave it as is). So that carry out of 1 is a good thing; we basically discard it. Had the carry out been a zero, we would have needed more work. Inverted, it indicates no borrow, so our answer is really 0xE66800.
Very quickly, let's see that another way: instead of inverting and adding one, let's just use a calculator.
111100010000010000000000 - 000010101001110000000000 =
0xF10400 - 0x0A9C00 =
0xE66800
By trying to visualize it I perhaps made it worse. The result of the mantissa subtraction is 111001100110100000000000 (0xE66800); there was no movement in the most significant bit, so we end up with a 24-bit number, in this case with an msbit of 1, and no normalization is needed. To normalize, you shift the mantissa left or right, adjusting the exponent for each bit shifted, until the most significant 1 lines up in that left-most position of the 24 bits.
Now, stripping the leading 1. off the answer, we put the parts together:
0 10001111 11001100110100000000000
01000111111001100110100000000000
0100 0111 1110 0110 0110 1000 0000 0000
0x47E66800
If you have been following along by writing a program to do this, I did as well. This program does type punning through a union, which is often considered improper; I got away with it with my compiler on my computer, but don't expect it to work all the time.
#include <stdio.h>

union
{
    float f;
    unsigned int u;
} myun;

int main(void)
{
    float a, b, c;

    a = 123400;
    b = 5432;
    c = a - b;

    myun.f = a; printf("0x%08X %f\n", myun.u, myun.f);
    myun.f = b; printf("0x%08X %f\n", myun.u, myun.f);
    myun.f = c; printf("0x%08X %f\n", myun.u, myun.f);

    return 0;
}
And our result matches the output of the above program; we got 0x47E66800 doing it by hand:
0x47F10400 123400.000000
0x45A9C000 5432.000000
0x47E66800 117968.000000
If you are writing a program to synthesize the floating point math, your program can perform the subtract directly; you don't have to do the invert-and-add-one thing, which overcomplicates it as we saw above. If you get a negative result, though, you need to fix up the sign bit, invert your result, then normalize.
So:
1) Extract the parts: sign, exponent, mantissa.
2) Align your binary points by sacrificing mantissa bits from the number with the smaller exponent: shift that mantissa to the right until the exponents match.
3) This being a subtract operation, if the sign bits are the same you perform a subtract; if the sign bits are different you perform an add of the mantissas.
4) If the result is zero, then your answer is a zero; encode the IEEE value for zero as the result. Otherwise:
5) Normalize the number: shift the answer to the right or left (the result can be 25 bits from a 24-bit add/subtract, and a subtract can need a dramatic shift to normalize, either one bit right or many bits to the left) until you have a 24-bit number with the most significant 1 left-justified. 24 bits is for single precision float. The more correct way to define normalizing is to shift left or right until the number resembles 1.something: if you had 0.001 you would shift left 3; if you had 11.10 you would shift right 1. A shift left decreases your exponent, a shift right increases it. No different than when we converted from integer to float above.
6) For single precision, remove the leading 1. from the mantissa. If the exponent has overflowed, you get into generating an infinity (or a NaN for invalid operations). If the sign bits were different and you performed an add, then you have to work out the result's sign bit. If, as above, everything is fine, you just place the sign bit, exponent, and mantissa into the result.
Multiply and divide are different; you asked about subtract, so that is all I covered.
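To make step 1 concrete, here is a minimal sketch of pulling a single-precision float apart with bit operations. It assumes float is 32-bit IEEE-754 (which C itself does not guarantee) and uses memcpy for the type pun to keep the behavior well defined:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 123400.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF; /* biased by 127 */
    uint32_t fraction = bits & 0x7FFFFF;     /* implied leading 1 not stored */

    printf("sign=%u exp=%u (unbiased %d) fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exponent,
           (int)exponent - 127, (unsigned)fraction);
    /* prints: sign=0 exp=143 (unbiased 16) fraction=0x710400 */
    return 0;
}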
I'm presuming 0,546875 means 0.546875.
Firstly, to correct/clarify:
0 01111110 10001100000000000000000 = 0011 1111 0100 0110 0000 0000 0000 0000 =
0x3F460000 in IEEE-754 is 0.77343750, not 0.546875.
0.546875 in IEEE-754 is 0x3F0C0000 = 0011 1111 0000 1100 0000 0000 0000 0000 =
0 01111110 00011000000000000000000 = 1 x 1.00011 x 2^(01111110 - 127) =
1.00011 x 2^(126 - 127) = 1.00011 x 2^-1 = (1 + 1/16 + 1/32) x 1/2.
1 10000111 01000101111000000000000 = 1100 0011 1010 0010 1111 0000 0000 0000 =
0xc3a2f000 in IEEE-754 is -325.87500, not -32.875.
-32.875 in IEEE-754 is 0xC2038000 = 1100 0010 0000 0011 1000 0000 0000 0000 =
1 10000100 00000111000000000000000 = -1 x 1.00000111 x 2^(10000100 - 127) =
-1.00000111 x 2^(132 - 127) = -1.00000111 x 2^5 = (1 + 1/64 + 1/128 + 1/256) x -32.
32.875 in IEEE-754 is 0x42038000 = 0100 0010 0000 0011 1000 0000 0000 0000 =
0 10000100 00000111000000000000000 = 1 x 1.00000111 x 2^(10000100 - 127) =
1.00000111 x 2^(132 - 127) = 1.00000111 x 2^5 = (1 + 1/64 + 1/128 + 1/256) x 32.
The subtraction is carried out as follows:
1.00011000 x 1/2
- 1.00000111 x 32
------------------
==>
0.00000100011 x 32
- 1.00000111000 x 32
---------------
==>
-1 x (
1.00000111000 x 32
- 0.00000100011 x 32
---------------
)
==>
-1 x (
1.00000110112 x 32 // borrow: the final digit temporarily holds the value 2
- 0.00000100011 x 32
---------------
)
==>
-1 x (
1.00000110112 x 32
- 0.00000100011 x 32
---------------
1.00000010101 x 32
)
==>
-1.00000010101 x 32 =
-1.00000010101000000000000 x 32 =
-1.00000010101000000000000 x 2^5 =
-1.00000010101000000000000 x 2^(132 - 127) =
-1.00000010101000000000000 x 2^(10000100 - 127)
==>
1 10000100 00000010101000000000000 =
1100 0010 0000 0001 0101 0000 0000 0000 =
0xc2015000
Note that in this example we did not need to handle underflow, which is more complicated.
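For what it's worth, you can confirm the result by letting the hardware do the subtraction and dumping the bits (a minimal sketch, assuming 32-bit IEEE-754 floats; memcpy keeps the type pun well defined):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float c = 0.546875f - 32.875f;
    uint32_t bits;
    memcpy(&bits, &c, sizeof bits);
    printf("0x%08X %f\n", (unsigned)bits, c); /* expect 0xC2015000 -32.328125 */
    return 0;
}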
When I complement 1 (~1), I get the output as -2. How is this done internally?
I first assumed that the bits are inverted, so 0001 becomes 1110, and then 1 is added to it, so it becomes 1111, which is stored. How is the number then retrieved?
Well, no. When you complement 1, you just invert the bits:
1 == 0b00000001
~1 == 0b11111110
And that's -2 in two's complement, which is the way your computer internally represents negative numbers. See http://en.wikipedia.org/wiki/Two's_complement but here are some examples:
-1 == 0b11111111
-2 == 0b11111110
....
-128== 0b10000000
+127== 0b01111111
....
+2 == 0b00000010
+1 == 0b00000001
0 == 0b00000000
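A one-line check (assuming a two's-complement machine, which is what the table above describes):

#include <stdio.h>

int main()
{
    /* In two's complement, ~x == -(x + 1), so ~1 is -2 and ~0 is -1. */
    printf("%d %d\n", ~1, ~0); /* prints: -2 -1 */
    return 0;
}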
What do you mean by "when I complement 1 (~1)"? There is what is called one's complement, and there is what is called two's complement. Two's complement is more common (it is used on most computers) as it allows negative numbers to be added and subtracted using the same algorithm as positive numbers.
The two's complement is created by taking the binary representation of the positive number, switching every bit from 1 to 0 and from 0 to 1, and then adding one:
5 0000 0101
4 0000 0100
3 0000 0011
2 0000 0010
1 0000 0001
0 0000 0000
-1 1111 1111
-2 1111 1110
-3 1111 1101
-4 1111 1100
-5 1111 1011
etc.
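The table is easy to regenerate. Here is a small sketch that prints the 8-bit two's-complement pattern for the values above; the conversion to unsigned char wraps modulo 256, which is exactly the two's-complement bit pattern on ordinary machines:

#include <stdio.h>

int main()
{
    for (int n = 5; n >= -5; n--) {
        unsigned char bits = (unsigned char)n; /* wraps mod 256 */
        printf("%3d  ", n);
        for (int i = 7; i >= 0; i--)
            putchar(((bits >> i) & 1) ? '1' : '0');
        putchar('\n');
    }
    return 0;
}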