Two's complement of a number - c

As far as I know, if I want to represent -1 in binary form, then:
I'll first look for the binary representation of 1 which is 0001.
Then I'll find one's complement (invert all 0's and 1's) to get 1110.
Then I'll add 1 to the least significant bit and get 1111, which is my answer.
However, I have a doubt: if I represent 1 (in step 1) as 001 (I believe we can do this), then the one's complement would be 110 and adding 1 would yield 111, which is different from what I obtained previously.
How do you explain this difference?

The two's-complement representation of -1 is "all 1s".
However many bit positions your number representation has, set all of them to 1, and that is the two's complement of 1 (that is, -1).
For more details look up "sign extending".
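You can see this on a real machine by printing the bit pattern of -1 at several widths; a minimal C sketch (assuming the usual two's-complement, fixed-width types from <stdint.h>):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* -1 stored at different widths: every bit is 1, the pattern
       simply gets longer or shorter with the type's width. */
    printf("%x\n", (unsigned)(uint8_t)-1);   /* ff       */
    printf("%x\n", (unsigned)(uint16_t)-1);  /* ffff     */
    printf("%x\n", (unsigned)(uint32_t)-1);  /* ffffffff */
    return 0;
}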

Related

What is the difference between the logical OR operation and binary addition?

I'm trying to understand how binary addition and a logical OR table differ.
Do both carry forward a 1? If not, which one performs the carry operation and which does not?
The exclusive-or (XOR) operation is like binary addition, except that
there is no carry from one bit position to the next. Thus, each bit
position can be evaluated independently of the rest.
I'll attempt to clarify a few points with a few illustrations.
First, addition. Basically like adding numbers in grade school. But if you have a 1-bit aligned with a 1-bit, you get a 0 with a 1 carry (i.e. 10, essentially analogous to 5 plus 5 in base-10). Otherwise, add them like 'regular' (base-10) numbers. For instance:
₁₁₁
1001
+ 1111
______
11000
Note that in the left-most column two 1's are added to give 10, which with another 1 gives 11 (similar to 5 + 5 + 5).
Now, assuming by "logical OR" you mean something along the lines of bitwise OR (an operation which basically performs the logical OR (inclusive) operation on each pair of corresponding bits), then you have this:
1001
| 1111
______
1111
The only case here in which you get a 0 bit is when both bits are 0.
Finally, since you tagged this question xor (which I assume is bitwise as well):
1001
^ 1111
______
0110 = 110₂
In this case, two 1-bits give a 0, and of course two 0-bits give 0.
With a logical OR you get a logical (Boolean) result. In other words, true OR true is true (anything other than false OR false is true). In some languages (like C) any numeric value other than 0 means true, and some languages use an explicit data type for true/false (bool, Boolean).
In the case of binary (bitwise) OR, you are ORing the bits of two binary values, i.e. 1 (binary 01) bitwise OR 2 (binary 10) is binary 11:
  01
| 10
----
  11
which is 3. Thus binary OR also acts as addition when the values do not share any set bits (like flag values).
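To make the contrast concrete, here is a small C sketch (plain standard C, nothing assumed beyond the operators themselves) that applies +, | and ^ to the 1001/1111 values used above:
#include <stdio.h>

int main(void) {
    unsigned a = 0x9;  /* binary 1001 */
    unsigned b = 0xF;  /* binary 1111 */

    printf("a + b = %u\n", a + b);  /* 24 = binary 11000: addition carries            */
    printf("a | b = %u\n", a | b);  /* 15 = binary  1111: OR, no carry                */
    printf("a ^ b = %u\n", a ^ b);  /*  6 = binary  0110: XOR, addition without carry */
    return 0;
}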

Possible values for operands in bitwise-and expression

Given the following C code:
int x = atoi(argv[1]);
int y = (x & -x);
if (x==y)
printf("Wow... they are the same!\n");
What values of x will result in "Wow... they are the same!" getting printed? Why?
So, it generally depends, but I can assume that your architecture represents signed numbers in two's complement (the U2 format); everything below is false if it's not in U2 format. Let's have an example.
We take 3, whose representation is:
0011
and -3, which is:
~ 0011
+ 1
-------
1101
and we compute the bitwise AND:
1101
& 0011
------
0001
so:
x = 0011 but x & -x = 0001, and 0011 != 0001
That's what is happening under the hood. You have to find the numbers that fit this pattern; I can't list them all upfront, but based on this you can predict them.
The question is asking about the binary & operator and 2's complement arithmetic.
I would look at how numbers are represented in 2's complement, and at what the binary & operator does.
Assuming a 2's complement representation for negative numbers, the only values for which this is true are positive numbers of the form 2^n where n >= 0, and 0.
When you take the 2's compliment of a number, you flip all bits and then add one. So the least significant bit will always match. The next bit won't match unless the prior carried over, and the same for the next bit.
An int is typically 32 bits, however I'll use 5 bits in the following examples for simplicity.
For example, 5 is 00101. Flipping all bits gives us 11010, then adding 1 gives us 11011. Then 00101 & 11011 = 00001. The only bit that matches a set bit is the last one, so 5 doesn't work.
Next we'll try 12, which is 01100. Flipping the bits gives us 10011, then adding 1 gives us 10100. Then 01100 & 10100 = 00100. Because of the carry-over the third bit is set, however the second bit is not, so 12 doesn't work either.
So the most significant bit which is set won't match unless all lower bits carry over when 1 is added. This is true only for numbers with one bit set, i.e. powers of 2.
If we now try 8, which is 01000, flipping the bits gives us 10111 and adding 1 gives us 11000. And 01000 & 11000 = 01000. In this case, the second bit is set, which is the only bit set in the original number. So the condition holds.
Negative numbers cannot satisfy this condition because positive numbers have the most significant bit set to 0, while negative numbers have the most significant bit set to 1. So a bitwise AND of a number and its negative will always have the most significant bit set to 0, meaning this number cannot be negative.
0 is a special case since it is its own negative. 0 & 0 = 0, so it also satisfies this condition.
Another special case is the smallest number you can represent. In the case of a 5-bit number this is -16, which is represented by 10000. Flipping all the bits gives you 01111 and adding 1 gives you 10000, which is the same number. On the surface it seems this number also satisfies the condition, however this is an overflow condition and implementations may not handle this case correctly. See this link for more details.
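To see the pattern concretely, here is a short C sketch (assuming a two's-complement int, with the range kept small enough that -x never overflows) that prints every value in a small range satisfying the condition:
#include <stdio.h>

int main(void) {
    /* Expected output: 0 and the powers of two (1, 2, 4, 8, 16). */
    for (int x = -20; x <= 20; x++) {
        int y = x & -x;   /* isolates the lowest set bit of x */
        if (x == y)
            printf("%d\n", x);
    }
    return 0;
}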

Problematic understanding of IEEE 754 [closed]

First of all I would like to point out that I am not a native speaker and I really need the terms explained in more common words.
And the second thing I would like to mention is that I am not a math genius. I am really trying to understand everything about programming, but IEEE-754 makes me think that it'll never happen. It's full of mathematical terms I don't understand.
What is precision? What is it used for? What is the mantissa and what is the mantissa used for? How do you determine the range of float/double from their size? What is the ± (plus-minus) symbol used for? (I believe it's a positive/negative choice, but what does that have to do with everything?)
Isn't there any brief and clean explanation you guys could provide me with?
I spent 600 years trying to understand Wikipedia. I failed tremendously.
What is precision?
It refers to how closely a binary floating-point representation can represent a real value. Real values have infinite precision and infinite range; digital values have finite range and precision. In practice a single-precision IEEE-754 value can represent real values to a precision of 6 significant (decimal) figures, while double precision is good for 15 significant figures.
The practical effect of this, for example, is that a single-precision value 123456000.00 cannot be distinguished from, say, 123456001.00, but equally a value such as 0.00123456 can be represented, because precision is independent of magnitude.
What is it used for?
Precision is not used for anything other than to define a characteristic of a particular floating point representation.
What is mantissa and what is mantissa used for?
The term is not mentioned in the English-language Wikipedia article, and it is imprecise: in mathematics in general it has a different meaning than the one used here.
The correct term is significand. For a decimal value 0.00123456, for example, the significand is 123456. 123456000.00 has exactly the same significand. Each of these values has the same significand but a different exponent; the exponent is a scaling factor which determines where the decimal point is (hence floating point).
Of course IEEE-754 is a binary floating-point representation, not decimal, but for the sake of explaining the terms it is perhaps easier to use decimal.
How to determine the range of float/double by their size?
By the size alone you cannot; you need to know how many bits are assigned to the significand and how many bits are assigned to the exponent. In C, however, the range is defined by the macros FLT_MIN, FLT_MAX, DBL_MIN and DBL_MAX in the float.h header. Other characteristics of the implementation's floating-point representation are described there also.
Note that a specific compiler may not in fact use IEEE754, however that is the format used by most hardware FPU implementations, and the compiler will naturally follow that. For targets with no FPU (small embedded processors typically), other formats may be used.
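For example, a few lines of C will print these limits directly; FLT_DIG and DBL_DIG (also from float.h, though not named above) report the decimal precision:
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("float : min %e, max %e, %d decimal digits\n", FLT_MIN, FLT_MAX, FLT_DIG);
    printf("double: min %e, max %e, %d decimal digits\n", DBL_MIN, DBL_MAX, DBL_DIG);
    return 0;
}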
What is ± symbol (Plus-minus) used for?
It simply means that the value given may be both positive or negative. It may refer to a specific value, or it may indicate a range. So ±n may refer to two discrete values -n or +n, or it may mean a range -n to +n. Context is everything! In this article it refers to discrete values +0, -0, +∞ and -∞.
There are 3 different components: sign, exponent, mantissa
Assuming that the exponent has only 2 bits, 4 combinations are possible:
binary decimal
00 0
01 1
10 2
11 3
The represented floating-point value is 2^exponent:
binary exponent-value
00 2^0 = 1
01 2^1 = 2
10 2^2 = 4
11 2^3 = 8
The range of the floating-point value results from the exponent: 2 exponent bits => maximum value = 2^3 = 8.
The mantissa divides the range from a given exponent to the next higher exponent.
For example, if the exponent value is 2 and the mantissa has one bit, then two values are possible:
exponent-value mantissa-binary represented floating-point value
2 0 2
2 1 3
The represented floating-point value is 2^exponent × (1 + m1×2^-1 + m2×2^-2 + m3×2^-3 + …).
Here an example with a 3 bit mantissa:
exponent-value mantissa-binary represented floating-point value
2 000 2 * (1) = 2
2 001 2 * (1 + 2^-3) = 2.25
2 010 2 * (1 + 2^-2) = 2.5
2 011 2 * (1 + 2^-2 + 2^-3) = 2.75
2 100 2 * (1 + 2^-1) = 3
and so on…
The sign is just one bit:
0 -> positive value
1 -> negative value
In IEEE-754 a 32-bit floating-point data type has an 8-bit exponent (with a range from 2^-127 to 2^128) and a 23-bit mantissa.
1 10000010 01101000000000000000000
-     130  1.40625
The represented floating-point value for this is:
-1 × 2^(130 − 127) × (1 + 2^-2 + 2^-3 + 2^-5) = -11.25
Try it: http://www.h-schmidt.net/FloatConverter/IEEE754.html
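If you want to poke at the three fields yourself, here is a small C sketch (assuming float is a 32-bit IEEE-754 single, which is the case on virtually all current platforms):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -11.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret the 32 bits of the float */

    uint32_t sign     = bits >> 31;           /* 1 bit                       */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127       */
    uint32_t mantissa = bits & 0x7FFFFF;      /* 23 bits, implicit leading 1 */

    /* Prints: sign = 1, biased exponent = 130 (2^3), mantissa bits = 0x340000 */
    printf("sign = %u, biased exponent = %u (2^%d), mantissa bits = 0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (int)exponent - 127, (unsigned)mantissa);
    return 0;
}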

Truncation in Two's Complement? [closed]

I'm struggling to understand how truncation works when converting from unsigned to Two's Complement. Can someone please explain? (My textbook uses the example of truncating a 4-bit value to a 3-bit value, and says that -1 becomes -1, but -5 becomes 3.)
-1 represented on four binary bits is:
1 1 1 1
(-1 is always represented as all bits 1 in 2's complement).
In your textbook “truncating” is simply used to mean(*) “cutting off the highest-order bit(s)”:
1 1 1
The result still has all its bits set, so it still represents -1, this time the 3-bit 2's complement version of -1.
-5 is represented in 2's complement on 4 bits as:
1 0 1 1
Chopping off the highest-order bit:
0 1 1
We are left with the 3-bit representation of 3. The reason we could not get -5 any more is that -5's magnitude is too large to fit in a 3-bit format.
Numbers with smaller magnitude, that can be represented with 3 bits, are unchanged when the higher-order bits are chopped off. This is the case for numbers from -4 to 3.
(*) Note that usually “truncating” means keeping the most significant bits and removing the least significant ones, especially in the context of floating-point where the bits with weight less than one are erased when converting to integer by “truncation”. The choice of words in the OP's book is very doubtful, unless the book is not in English and words do not map exactly to English when translated.
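Here is a small C sketch of that truncation (assuming a two's-complement int; it uses masks and manual sign extension, since C has no native 3-bit or 4-bit integer type):
#include <stdio.h>

/* Interpret the low `width` bits of x as a two's-complement number
   (manual sign extension). */
static int from_bits(unsigned x, int width) {
    unsigned mask = (1u << width) - 1;
    x &= mask;
    if (x & (1u << (width - 1)))       /* sign bit of the narrow field set?     */
        return (int)x - (1 << width);  /* subtract 2^width to make it negative  */
    return (int)x;
}

int main(void) {
    int values[] = { -1, -5, 3, -4 };
    for (int i = 0; i < 4; i++) {
        int v = values[i];
        /* keep only the low 3 bits of the 4-bit pattern, then reinterpret */
        int truncated = from_bits((unsigned)v & 0xF, 3);
        printf("%2d (4-bit) -> %2d (3-bit)\n", v, truncated);  /* -1->-1, -5->3, 3->3, -4->-4 */
    }
    return 0;
}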

Can anyone explain why '>>2' shift means 'divided by 4' in C codes?

I know and understand the result.
For example:
7 (decimal) = 00000111 (binary)
and 7 >> 2 = 00000001 (binary)
00000001 (binary) is the same as 7 / 4 = 1
So 7 >> 2 = 7 / 4
But I'd like to know how this logic was created.
Can anyone elaborate on this logic?
(Maybe it just popped up in a genius' head?)
And are there any other similar logics like this ?
It didn't "pop up" in a genius' head. Right-shifting a binary number divides it by 2, and left-shifting multiplies it by 2. This is because 10 is 2 in binary. Multiplying a number by 10 (be it binary, decimal or hexadecimal) appends a 0 to the number (which is effectively left shifting). Similarly, dividing by 10 (or 2) removes a binary digit from the number (effectively right shifting). This is how the logic really works.
There is plenty of such bit-twiddlery (a word I invented a minute ago) in the computer world.
http://graphics.stanford.edu/~seander/bithacks.html Here is for the starters.
This is my favorite book: http://www.amazon.com/Hackers-Delight-Edition-Henry-Warren/dp/0321842685/ref=dp_ob_image_bk on bit-twiddlery.
It is actually defined that way in the C standard.
From section 6.5.7:
The result of E1 >> E2 is E1 right-shifted E2 bit positions. [...]
the value of the result is the integral part of the quotient of E1 / 2^E2
On most architectures, x >> 2 is only equal to x / 4 for non-negative numbers. For negative numbers, it usually rounds in the opposite direction: the shift rounds toward negative infinity, while division rounds toward zero.
Compilers have always been able to optimize x / 4 into x >> 2. This technique is called "strength reduction", and even the oldest compilers can do this. So there is no benefit to writing x / 4 as x >> 2.
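A short C demonstration of that rounding difference for negative operands (a sketch; right-shifting a negative value is implementation-defined in C, but most compilers use an arithmetic shift):
#include <stdio.h>

int main(void) {
    int values[] = { 7, 8, -7, -8 };
    for (int i = 0; i < 4; i++) {
        int x = values[i];
        /* Division rounds toward zero; an arithmetic right shift rounds
           toward negative infinity, so -7 / 4 = -1 but -7 >> 2 = -2. */
        printf("x = %3d   x / 4 = %3d   x >> 2 = %3d\n", x, x / 4, x >> 2);
    }
    return 0;
}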
Elaborating on Aniket Inge's answer:
Number: 307₁₀ = 100110011₂
How multiplying by 10 works in the decimal system:
10 * (307₁₀)
= 10 * (3*10^2 + 7*10^0)
= 3*10^(2+1) + 7*10^(0+1)
= 3*10^3 + 7*10^1
= 3070₁₀
= 307₁₀ << 1
Similarly, multiplying by 2 in binary:
2 * (100110011₂)
= 2 * (1*2^8 + 1*2^5 + 1*2^4 + 1*2^1 + 1*2^0)
= 1*2^(8+1) + 1*2^(5+1) + 1*2^(4+1) + 1*2^(1+1) + 1*2^(0+1)
= 1*2^9 + 1*2^6 + 1*2^5 + 1*2^2 + 1*2^1
= 1001100110₂
= 100110011₂ << 1
I think you are confused by the "2" in:
7 >> 2
and are thinking it should divide by 2.
The "2" here means shift the number ("7" in this case) "2" bit positions to the right.
Shifting a number 1 bit position to the right will have the effect of dividing by 2:
8 >> 1 = 4 // In binary: (00001000) >> 1 = (00000100)
and shifting a number 2 bit positions to the right will have the effect of dividing by 4:
8 >> 2 = 2 // In binary: (00001000) >> 2 = (00000010)
It's inherent in the binary number system used in computers.
A similar logic: left shifting n times means multiplying by 2^n.
An easy way to see why it works, is to look at the familiar decimal ten-based number system, 050 is fifty, shift it to the right, it becomes 005, five, equivalent to dividing it by 10. The same thing with shifting left, 050 becomes 500, five hundred, equivalent to multiplying it by 10.
All the other numeral systems work the same way.
They do that because shifting is more efficient than actual division. You're just moving all the digits to the right or left, logically multiplying or dividing by 2 per shift.
If you're wondering why 7 / 4 = 1, that's because the rest of the result (3/4) is truncated off so that it's an integer.
Just my two cents: I did not see any mention to the fact that shifting right does not always produce the same results as dividing by 2. Since right shifting rounds toward negative infinity and integer division rounds to zero, some values (like -1 in two's complement) will just not work as expected when divided.
It's because >> and << operators are shifting the binary data.
Binary value 1000 is the double of binary value 0100
Binary value 0010 is the quarter of binary value 1000
You can call it an idea of a genius mind or just the need of the computer language.
To my belief, a computer as a device never divides or multiplies numbers; rather, it only has logic for adding and for simply shifting bits from here to there. You can write an algorithm that tells your computer to multiply or subtract, but when the logic reaches the actual processing, the result will be the outcome of shifting or adding bits.
You can simply think that for getting the result of a number being divided by 4, the computer actually right shifts the bits to two places, and gives the result:
7 in 8-bit binary = 00000111
Shift Right 2 places = 00000001 // (Which is for sure equal to Decimal 1)
Further examples:
//-- We can divide 9 by four by Right Shifting 2 places
9 in 8-bit binary = 00001001
Shift right 2 places: 00000010 // (Which is equal to 9/4 or Decimal 2)
A person with deep knowledge of assembly language programming can explain it with more examples. If you want to know the actual sense behind all this, I guess you need to study bit level arithmetic and assembly language of computer.

Resources