Why -1 is internally represented as all 1's - c

Why is -1 internally represented as all 1's in a 16-bit compiler?
This was used to solve the following program:
#include <stdio.h>
int main(void)
{
    printf("%x", -1 << 4);
    return 0;
}
I didn't understand the solution to this question. Please help.
Thank you :)

Signed integers are represented in "two's complement".
When a number is positive, its binary representation is what you would expect.
When it is negative, it is represented as its two's complement.
The two's complement of k, where k has N bits, is 2^N - k. Since 2^N is a one followed by N zeros,
you have, for example,
k = -3
N = 4 (4 bits, 2^N=16)
then -3 is represented as
16 - 3 = 13, which in binary is 1101.
You can then see that -1 is represented as
16 - 1 = 15, which in binary is 1111
So, if the internal representation is two's complement, the number -1 will always be represented as a sequence of ones.
The table below shows the representation of integers with 4 bits. Negatives are represented as two's complements.
REPR N
---- --
0111 7
0110 6
0101 5
0100 4
0011 3
0010 2
0001 1
0000 0
1111 -1
1110 -2
1101 -3
1100 -4
1011 -5
1010 -6
1001 -7
1000 -8
If you were using one's complement, a negative number would be represented by flipping all the bits of its magnitude, so the table would be
REPR N
---- --
0111 7
0110 6
0101 5
0100 4
0011 3
0010 2
0001 1
0000 0
1111 -0
1110 -1
1101 -2
1100 -3
1011 -4
1010 -5
1001 -6
1000 -7
Why do we use 2's complement? Notice that with one's complement you have two representations for zero (in two's complement there is only one). Besides that, you can add numbers (negative or positive) in two's complement with plain binary addition and the result will simply be correct; the hardware needs no special case for signs.
One disadvantage of using two's complement is that you cannot represent abs(INT_MIN).
More about two's complement in the Wikipedia article

Signed binary numbers use what's called Two's Complement. Check out the wikipedia page: http://en.wikipedia.org/wiki/Twos_complement
If it helps, think of a negative 16-bit value as representing -65536 + x, where x is the bit pattern read as unsigned.

Two's complement representation of signed int x:
If the most significant bit of x is 0, then just take the value represented by x "as is".
If the most significant bit of x is 1, then calculate the value represented by x as follows:
Flip all the bits
Add 1 to the result
Consider the result as negative

-1 is stored as all 1's because it is in Two's Complement notation.
Two's complement notation is used for a variety of reasons, some of which include:
No special logic needed when dealing with negative numbers
No wasted bits (positive 0 and negative 0)

Related

Representation of negative numbers in binary

I am studying the different data types in C. In my book it is written that the signed char data type is
1 byte (8 bits). Now, it is written that the range of values that can be stored in a signed char data type is from -128 to 127.
It was written that a negative number is stored as the
2's complement of its binary representation.
I can't understand how it is possible to store the 2's complement form of -128 within 8 bits.
First of all, we need 9 bits to write the signed binary representation of +128, which is 0 1000 0000. Now, if we take the 2's complement form of it, we get
1 1000 0000, which is also 9 bits long.
Then how are we able to store -128 in an 8-bit signed char?
You can store -128 as 1000 0000.
The leading 1 indicates that you have a negative number. To read its value, the way I learned it: flip the remaining bits, read them as plain binary, add 1, and attach a minus sign. For your case that means the following steps (when the first digit is a 1):
Take the signed char: 1000 0000.
Strip the sign bit and flip the other digits: 111 1111.
Calculate, in plain binary, the value: in this case 127.
Add 1 and attach the negative sign: -128.
It is important to take into account that all negative numbers count "in reverse":
-1 in decimal is written as 1111 1111
-2 in decimal is written as 1111 1110
-3 in decimal is written as 1111 1101
-4 in decimal is written as 1111 1100
etc.
This means that there are 128 negative bit patterns. Since decimal 0 already has its own pattern (0000 0000), the negative range is [-128, -1], while the non-negative range is [0, 127].

Why is 128 in one and two's complement using 8 bits overflow?

Suppose I want to represent 128 in one and two's complement using 8 bits, no sign bit
Wouldn't that be:
One's complement: 0111 1111
Two's complement: 0111 1110
No overflow
But the correct answer is:
One's complement: 0111 1111
Two's complement: 0111 1111
Overflow
Additional Question:
How come 1 in one's and two's complement is 0000 0001 and 0000 0001 respectively. How come you don't flip the bits like we did with 128?
One's and Two's Complements are both ways to represent signed integers.
For One's Complement Representation:
Positive Numbers: Represented with their regular binary representation
For example: decimal value 1 will be represented in 8-bit One's Complement as 0000 0001
Negative Numbers: Represented by complementing the binary representation of the magnitude
For example: decimal value -127 will be represented in 8-bit One's Complement as 1000 0000, because the binary representation of 127 is 0111 1111, which complemented is 1000 0000
For Two's Complement Representation:
Positive Numbers: Represented with their regular binary representation
For example: decimal value 1 will be represented in 8-bit Two's Complement as 0000 0001
Negative Numbers: Represented by complementing the binary representation of the magnitude, then adding 1 to the result
For example: decimal value -127 will be represented in 8-bit Two's Complement as 1000 0001, because the binary representation of 127 is 0111 1111; complemented, that is 1000 0000; adding 0000 0001 gives 1000 0001
Therefore, 128 overflows in both instances because the binary representation of 128 is 1000 0000 which in ones complement represents -127 and in twos complement represents -128. In order to be able to represent 128 in both ones and twos complement you would need 9 bits and it would be represented as 0 1000 0000.
In 8-bit unsigned, 128 is 1000 0000. In 8-bit two's complement, that binary sequence is interpreted as -128. There is no representation for 128 in 8-bit two's complement.
0111 1110 is 126.
As mentioned in a comment, 0111 1111 is 127.
See https://www.cs.cornell.edu/~tomf/notes/cps104/twoscomp.html.
Both two's complement and one's complement are ways to represent negative numbers. Positive numbers are simply binary numbers; there is no complementing involved.
I worked on one computer with one's complement arithmetic (LINC). I vastly prefer two's complement, as there is only one representation for zero. The disadvantage to two's complement is that there is one value (-128, for 8-bit numbers) that can't be negated -- causing the overflow you're asking about. One's complement doesn't have that issue.

Is 2's complement representation also used on positive numbers?

when I write:
signed int a = 4;
is my computer using 2's complement representation?
Because if my computer uses 2's complement representation to represent the number 4, this is what will happen on an 8-bit machine:
binary value of 4 : 0000 0100
2’s complement become: 1111 1011
add 1: 1111 1100
But I read that when the most significant bit is 1, the number is negative. Here my most significant bit is 1 and my number is 4; it is not -4.
Why does my number 4 have a 1 as the most significant bit?
2’s complement become: 1111 1011
No. Where did you get that idea from? 1111 1011 is -5 in two's complement. -5 is not +4.
-4 is not the same as +4 either.
The binary value of 4 is 0000 0100. The signed number variable representation of 4 is therefore also 0000 0100.
Two's complement is irrelevant unless the number is negative.
why does my number 4 have a 1 as the most significant bit?
It doesn't. Your -4 has a 1 as the MSB.
When you declare a as:
signed int a;
the machine keeps the most significant bit (MSB) as a marker for positive or negative values:
0 for positive
1 for negative
Taking the 2's complement of a number, which means taking the 1's complement and adding 1 to it (regarding the confusion in the phrasing of the question), negates the number you are working with.
So when you take the 2's complement of 4 you get 1111 1100, which is the notation for -4.
Since this is a negative number, the MSB is 1.
In some sense a signed integer is "in 2's complement", in that the left-most bit is effectively reserved for the sign. "2's complement" tells you how to make a positive integer negative: "flip the bits and add one". The implication is that a positive integer is in plain base 2 and can use only n-1 bits, with n the number of bits in an int.
So "2's complement" is a way to represent negative numbers in binary, not positive numbers in binary.

How does the compiler treat printing an unsigned int as a signed int?

I'm trying to figure out why the following code:
#include <stdio.h>
int main(void)
{
    unsigned int a = 10;
    a = ~a;
    printf("%d\n", a);
    return 0;
}
a will be 0000 1010 to begin with, and after the NOT operation will transform
into 1111 0101.
What happens exactly when one tries to print a as a signed integer, that makes
the printed result -11?
I thought I would end up seeing -5 maybe (according to the binary representation), but not -11.
I'll be glad to get a clarification on the matter.
2's complement notation is used to store negative numbers.
The number 10 is 0000 0000 0000 0000 0000 0000 0000 1010 in 4-byte binary.
a = ~a makes the content of a 1111 1111 1111 1111 1111 1111 1111 0101.
When this pattern is treated as a signed int, the 1 in the most significant bit tells the compiler the number is negative. To recover the magnitude, apply the 2's complement operation to the pattern:
1111 1111 1111 1111 1111 1111 1111 0101 flipped is
0000 0000 0000 0000 0000 0000 0000 1010, and adding 1 gives
0000 0000 0000 0000 0000 0000 0000 1011, which is 11.
Interpreted as a decimal integer, the value is therefore -11.
When you write a = ~a; you flip each and every bit in a, which is also called the one's complement.
The representation of negative numbers is implementation-dependent, meaning that different architectures can represent -10 or -11 differently.
Assuming a 32-bit architecture on a common processor that uses two's complement to represent negative numbers, -1 is represented as FFFFFFFF (hexadecimal), i.e. all 32 bits set.
~a will be represented as FFFFFFF5, or in binary 1...10101, which is the representation of -11.
Note: the first part is always the same and is not implementation-dependent; ~a is FFFFFFF5 on any 32-bit architecture. It is only the second part (-11 == FFFFFFF5) that is implementation-dependent. By the way, it would be -10 on an architecture that used one's complement to represent negative numbers.

Converting IEEE 754 Float to MIL-STD-1750A Float

I am trying to convert an IEEE 754 32-bit single precision floating point value (a standard C float variable) to an unsigned long variable in the format of MIL-STD-1750A. I have included the specification for both IEEE 754 and MIL-STD-1750A at the bottom of the post. Right now I am having issues in my code with converting the exponent. I also see issues with converting the mantissa, but I haven't gotten to fixing those yet. I am using the examples listed in Table 3 in the link above to confirm whether my program converts properly, and some of those examples do not make sense to me.
How can these two examples have the same exponent?
.5 x 2^0 (0100 0000 0000 0000 0000 0000 0000 0000)
-1 x 2^0 (1000 0000 0000 0000 0000 0000 0000 0000)
.5 x 2^0 has one decimal place, and -1 has no decimal places, so the value for .5 x 2^0 should be
.5 x 2^0 (0100 0000 0000 0000 0000 0000 0000 0010)
right? (0010 instead of 0001, because 1750A uses plus 1 bias)
How can the last example use all 32 bits and the first bit be 1, indicating a negative value?
0.7500001x2^4 (1001 1111 1111 1111 1111 1111 0000 0100)
I can see that a value with a 127 exponent should be 7F (0111 1111) but what about a value with a negative 127 exponent? Would it be 81 (1000 0001)? If so, is it because that is the two's complement +1 of 127?
Thank you
1) How can these two examples have the same exponent?
As I understand it, the sign and mantissa effectively define a 2's-complement value in the range [-1.0,1.0).
Of course, this leads to redundant representations (0.125 x 2^1 = 0.25 x 2^0, etc.). So a canonical normalized representation is chosen, by disallowing mantissa values in the range [-0.5, 0.5).
So in your two examples, both -1.0 and 0.5 fall into the "allowed" mantissa range, so they both share the same exponent value.
2) How can the last example use all 32 bits and the first bit be 1, indicating a negative value?
That doesn't look right to me; how did you obtain that representation?
3) What about a value with a negative 127 exponent? Would it be 81 (1000 0001)?
I believe so.
Remember the fraction is a "signed fraction": the signed values are stored in 2's complement format, so for a negative mantissa, think of the zeros as ones.
Thus the number can be written as -0.111111111111111111111 (base 2) x 2^0, which is close to one in magnitude (it converges to 1.0 if my math is correct).
On the last example, there is a negative sign in the original document (-0.7500001x2^4)
