I'm trying to figure out why the following code:
{
unsigned int a = 10;
a = ~a;
printf("%d\n", a);
}
a will be 00001010 to begin with, and after the NOT operation it will
become 11110101.
What exactly happens when one prints a as a signed integer that makes
the printed result -11?
I thought I would end up seeing -5 (going by the binary representation), not -11.
I'd be glad to get a clarification on the matter.
2's complement notation is used to store negative numbers.
The number 10 is 0000 0000 0000 0000 0000 0000 0000 1010 in 4 byte binary.
a = ~a changes the content of a to 1111 1111 1111 1111 1111 1111 1111 0101.
When this pattern is treated as a signed int, the 1 in the most significant bit
marks the number as negative. To find its magnitude under two's complement,
invert all the bits and add 1:
1111 1111 1111 1111 1111 1111 1111 0101 inverted is
0000 0000 0000 0000 0000 0000 0000 1010,
and adding 1 gives 0000 0000 0000 0000 0000 0000 0000 1011, which is 11 in decimal.
Hence the printed value is -11.
When you write a = ~a; you invert each and every bit in a, which is also called the one's complement.
The representation of negative numbers is implementation-dependent, meaning that different architectures could use different representations for -10 or -11.
Assuming a 32-bit architecture on a common processor that uses two's complement to represent negative numbers, -1 will be represented as FFFFFFFF (hexadecimal), i.e. all 32 bits set to 1.
~a will be represented as FFFFFFF5, or in binary 1...10101, which is the representation of -11.
Note: the first part is always the same and is not implementation-dependent; ~a is FFFFFFF5 on any 32-bit architecture. It is only the second part (-11 == FFFFFFF5) that is implementation-dependent. BTW, it would be -10 on an architecture that used one's complement to represent negative numbers.
Is this operation always valid?
unsigned long long p64 = 0;
short int x = 7;
p64 = x;
So, for this example, will p64 always hold this bit pattern?
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0111
which means
(p64 == 7)
I ask this question because sometimes the bits after 0111 end up being all 1s instead of 0s, yet the gcc compiler shows no warnings. So is this operation always valid? Do you have any solutions for converting 16-bit variables into 64-bit variables?
When assigning to any unsigned type
.. the new type is unsigned, the value¹ is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type. C17dr § 6.3.1.3 3
This means unsigned long long p64 = any_integer_type is well defined.
With short, which is signed, positive values will convert with no value change. Negatives will act like p64 = neg_short + ULLONG_MAX + 1.
As a side effect, this looks like a sign extension for a common short (two's complement, unpadded).
Detail: note that the conversion is not defined in terms of bits.
¹ with integer type.
Yes, this is valid. p64 will always have the value 7.
You should note that if x's value is negative it will effectively be sign-extended. The value -16, for example (in binary: 1111 1111 1111 0000), will produce the 64-bit pattern 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 0000 (the value ULLONG_MAX - 15) even though p64 is unsigned. You can avoid this by making x unsigned.
Hey, I was trying to figure out why -1 << 4 (left shift) is FFF0. After looking and reading around the web, I came to know that "negative" numbers have a sign bit, i.e. 1. Since -1 would otherwise need an extra bit (a 33rd bit), which isn't possible, we consider -1 to be 1111 1111 1111 1111 1111 1111 1111 1111.
For instance:
#include<stdio.h>
void main()
{
printf("%x",-1<<4);
}
In this example we know that:
The internal representation of -1 is all 1s: 1111 1111 1111 1111 1111 1111 1111 1111 on a 32-bit compiler.
When we bitwise-shift a negative number left by 4 bits, the least significant 4 bits are filled with 0s.
The format specifier %x prints the given integer value in hexadecimal format.
After shifting, 1111 1111 1111 1111 1111 1111 1111 0000 = FFFFFFF0 will be printed.
Source for the above
http://www.c4learn.com/c-programming/c-bitwise-shift-negative-number/
First, according to the C standard, the result of a left-shift on a signed variable with a negative value is undefined. So from a strict language-lawyer perspective, the answer to the question "why does -1 << 4 result in XYZ" is "because the standard does not specify what the result should be."
What your particular compiler is really doing, though, is left-shifting the two's-complement representation of -1 as if that representation were an unsigned value. Since the 32-bit two's-complement representation of -1 is 0xFFFFFFFF (or 11111111 11111111 11111111 11111111 in binary), the result of shifting left 4 bits is 0xFFFFFFF0 or 11111111 11111111 11111111 11110000. This is the result that gets stored back in the (signed) variable, and this value is the two's-complement representation of -16. If you were to print the result as an integer (%d) you'd get -16.
This is what most real-world compilers will do, but do not rely on it, because the C standard does not require it.
First things first: the tutorial uses void main. The comp.lang.c frequently asked question 11.15 should be of interest when assessing the quality of the tutorial:
Q: The book I've been using, C Programing for the Compleat Idiot, always uses void main().
A: Perhaps its author counts himself among the target audience. Many books unaccountably use void main() in examples, and assert that it's correct. They're wrong, or they're assuming that everyone writes code for systems where it happens to work.
That said, the rest of the example is ill-advised. The C standard does not define the behaviour of signed left shift. However, a compiler implementation is allowed to define behaviour for those cases that the standard leaves purposefully open. For example GCC does define that
all signed integers have two's-complement format
<< is well-defined on negative signed numbers and >> works as if by sign extension.
Hence, -1 << 4 on GCC is guaranteed to result in -16; the bit representation of these numbers, given 32 bit int are 1111 1111 1111 1111 1111 1111 1111 1111 and 1111 1111 1111 1111 1111 1111 1111 0000 respectively.
Now, there is another undefined behaviour here: %x expects an argument that is an unsigned int, however you're passing in a signed int, with a value that is not representable in an unsigned int. However, the behaviour on GCC / with common libc's most probably is that the bytes of the signed integer are interpreted as an unsigned integer, 1111 1111 1111 1111 1111 1111 1111 0000 in binary, which in hex is FFFFFFF0.
However, a portable C program should really never
assume two's complement representation - when the representation is of importance, use unsigned int or even uint32_t
assume that the << or >> on negative numbers have a certain behaviour
use %x with signed numbers
write void main.
A portable (C99, C11, C17) program for the same use case, with defined behaviour, would be
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
printf("%" PRIx32, (uint32_t)-1 << 4);
}
Suppose I want to represent 128 in one's and two's complement using 8 bits, no sign bit.
Wouldn't that be:
One's complement: 0111 1111
Two's complement: 0111 1110
No overflow
But the correct answer is:
One's complement: 0111 1111
Two's complement: 0111 1111
Overflow
Additional Question:
How come 1 in one's and two's complement is 0000 0001 and 0000 0001 respectively? How come you don't flip the bits like we did with 128?
One's and Two's Complements are both ways to represent signed integers.
For One's Complement Representation:
Positive Numbers: Represented with its regular binary representation
For example: decimal value 1 will be represented in 8 bit One's Complement as 0000 0001
Negative Numbers: Represented by complementing the binary representation of its magnitude
For example: decimal value of -127 will be represented in 8 bit One's Complement as 1000 0000 because the binary representation of 127 is 0111 1111 when complemented that will be 1000 0000
For Two's Complement Representation:
Positive Numbers: Represented with its regular binary representation
For example: decimal value 1 will be represented in 8-bit Two's Complement as 0000 0001
Negative Numbers: Represented by complementing the binary representation of its magnitude then adding 1 to the value
For example: decimal value of -127 will be represented in 8-bit Two's Complement as 1000 0001, because the binary representation of 127 is 0111 1111; when complemented that is 1000 0000, then add 0000 0001 to get 1000 0001
Therefore, 128 overflows in both instances because the binary representation of 128 is 1000 0000, which in one's complement represents -127 and in two's complement represents -128. In order to represent 128 in both one's and two's complement you would need 9 bits, and it would be represented as 0 1000 0000.
In 8-bit unsigned, 128 is 1000 0000. In 8-bit two's complement, that binary sequence is interpreted as -128. There is no representation for 128 in 8-bit two's complement.
0111 1110 is 126.
As mentioned in a comment, 0111 1111 is 127.
See https://www.cs.cornell.edu/~tomf/notes/cps104/twoscomp.html.
Both two's complement and one's complement are ways to represent negative numbers. Positive numbers are simply binary numbers; there is no complementing involved.
I worked on one computer with one's complement arithmetic (LINC). I vastly prefer two's complement, as there is only one representation for zero. The disadvantage to two's complement is that there is one value (-128, for 8-bit numbers) that can't be negated -- causing the overflow you're asking about. One's complement doesn't have that issue.
I am trying to convert an IEEE 754 32-bit single-precision floating point value (a standard C float variable) to an unsigned long variable in the format of MIL-STD-1750A. I have included the specification for both IEEE 754 and MIL-STD-1750A at the bottom of the post. Right now I am having issues in my code with converting the exponent. I also see issues with converting the mantissa, but I haven't gotten to fixing those yet. I am using the examples listed in Table 3 in the link above to confirm whether my program is converting properly. Some of those examples do not make sense to me.
How can these two examples have the same exponent?
.5 x 2^0 (0100 0000 0000 0000 0000 0000 0000 0000)
-1 x 2^0 (1000 0000 0000 0000 0000 0000 0000 0000)
.5 x 2^0 has one decimal place, and -1 has no decimal places, so the value for .5 x 2^0 should be
.5 x 2^0 (0100 0000 0000 0000 0000 0000 0000 0010)
right? (0010 instead of 0001, because 1750A uses plus 1 bias)
How can the last example use all 32 bits and the first bit be 1, indicating a negative value?
0.7500001x2^4 (1001 1111 1111 1111 1111 1111 0000 0100)
I can see that a value with a 127 exponent should be 7F (0111 1111) but what about a value with a negative 127 exponent? Would it be 81 (1000 0001)? If so, is it because that is the two's complement +1 of 127?
Thank you
1) How can these two examples have the same exponent?
As I understand it, the sign and mantissa effectively define a 2's-complement value in the range [-1.0,1.0).
Of course, this leads to redundant representations (0.125 x 2^1 = 0.25 x 2^0, etc.). So a canonical normalized representation is chosen, by disallowing mantissa values in the range [-0.5,0.5).
So in your two examples, both -1.0 and 0.5 fall into the "allowed" mantissa range, so they both share the same exponent value.
2) How can the last example use all 32 bits and the first bit be 1, indicating a negative value?
That doesn't look right to me; how did you obtain that representation?
3) What about a value with a negative 127 exponent? Would it be 81 (1000 0001)?
I believe so.
Remember the fraction is a "signed fraction". The signed values are stored in 2's complement format. So think of the zeros as ones.
Thus the number can be written as -0.111111111111111111111 (base 2) x 2^0,
which is close to negative one (the magnitude converges to 1.0 if my math is correct).
On the last example, there is a negative sign in the original document (-0.7500001x2^4)
I was asked in an interview: is this a valid declaration on a machine that is not 16-bit?
Below is the declaration,
unsigned int zero = 0;
unsigned int compzero = 0xFFFF;
They are both valid declarations, yes, inasmuch as there's no syntax error.
However, if your intent is to get the complement of 0 (all bits inverted), you should use:
unsigned int zero = 0;
unsigned int compzero = ~zero;
With (for example) a 32-bit unsigned int, 0xffff and ~0 are respectively:
0000 0000 0000 0000 1111 1111 1111 1111
1111 1111 1111 1111 1111 1111 1111 1111
Yes, the declaration is valid. Think about it this way: a hex literal is no different than a decimal literal. If the intent was for compzero to hold the complement of zero, however, this might not be the case (it depends on the width of unsigned int on the system in use, and on which negative-number representation is in use: one's complement, two's complement, or a simple NOT operation).