Can you guys please explain the program below?
#include <stdio.h>

int main(void)
{
    int max = ~0;
    printf("%d\n", max);
    return 0;
}
AFAIK, ~ flips the bits, so ~0 should set all the bits to 1. The max variable should therefore hold the maximum value, but I am getting -1 as output. Can anyone please tell me why?
Why did you expect to obtain the "max value"? In 2's-complement signed representation the all-ones bit pattern stands for -1. It is just the way it is.
The maximum value in 2's-complement signed representation is represented by the 01111...1 bit pattern (i.e. the first bit is 0). What you got is 1111...1, which is negative, since the very first bit (the sign bit) is 1.
If you want an example where complementing zero produces a "max value", use an unsigned representation:
#include <stdio.h>

int main(void) {
    unsigned max = ~0u;
    printf("%u\n", max);
    return 0;
}
That is the correct output, since you are using the int data type, which is signed. You need to read about two's-complement representation of negative numbers. All bits set to one is not the most negative value; it is -1, as your program outputs. The most negative signed value has the most significant bit set and all the remaining bits zero, 0x80000000 in the 32-bit case. The maximum positive signed value is 0x7fffffff in the 32-bit case.
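If you want to see those bit patterns on your own machine, here is a minimal sketch (assuming a 32-bit two's-complement int; the casts to unsigned keep the %x conversions well defined):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* On a 32-bit two's-complement system these print 80000000 and 7fffffff. */
    printf("most negative: 0x%x\n", (unsigned)INT_MIN);
    printf("most positive: 0x%x\n", (unsigned)INT_MAX);
    return 0;
}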
The above answers have already covered the reason behind ~0 having value -1.
If you are looking for the maximum integer value, you can include the <limits.h> header and use the constants it declares.
INT_MAX gives you the maximum signed integer value.
UINT_MAX gives you the maximum unsigned integer value.
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("Max signed int value: %d\n", INT_MAX);
    printf("Max unsigned int value: %u\n", UINT_MAX);
    return 0;
}
This question was asked a long time ago, but for posterity's sake:
It might help you to see it better if you print ~0 as both a decimal int and hex, as follows:
printf("complement of zero %d\n", (~0));
printf("complement of zero 0x%x\n", (~0));
Output:
complement of zero -1
complement of zero 0xffffffff
I'm trying to figure out how variables really work in C, and I find it strange that printf seems to know the difference between two variables of the same size; I'm assuming they're both 16 bits.
#include <stdio.h>

int main(void) {
    unsigned short int positive = 0xffff;
    short int negative = 0xffff;
    printf("%d\n%d\n", positive, negative);
    return 0;
}
Output:
65535
-1
I think we have to distinguish more carefully between the type conversions applied to the different integer types on the one hand, and the printf format specifiers (which let you force printf to interpret the data a certain way) on the other hand.
Systematically:
printf("%hd %hd\n", positive, negative);
// gives: -1 -1
Both values are interpreted as signed short int by printf, regardless of the declaration.
printf("%hu %hu\n", positive, negative);
// gives: 65535 65535
Both values are interpreted as unsigned short int by printf, regardless of the declaration.
However,
printf("%d %d\n", positive, negative);
// gives: 65535 -1
Both values are implicitly converted to (a longer) int by the default argument promotions, and the sign is kept.
Finally,
printf("%u %u\n", positive, negative);
// gives 65535 4294967295
Again, both values are implicitly converted to int with the sign kept, but the negative value is then interpreted as unsigned. As we can see here, plain int is actually 32 bits (on this system).
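For reference, here is a self-contained version of the experiments above (a sketch assuming the same platform as the question, with 16-bit short and 32-bit int; the assignment of 0xffff to short is implementation-defined):

#include <stdio.h>

int main(void) {
    unsigned short int positive = 0xffff;
    short int negative = 0xffff;   /* implementation-defined: typically -1 */

    printf("%hd %hd\n", positive, negative);  /* -1 -1 */
    printf("%hu %hu\n", positive, negative);  /* 65535 65535 */
    printf("%d %d\n", positive, negative);    /* 65535 -1 */
    printf("%u %u\n", positive, negative);    /* 65535 4294967295 */
    return 0;
}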
Curiously, gcc gives me a warning for the assignment short int negative = 0xffff; only if I compile with -Wpedantic.
#include <stdio.h>

int main(int argc, char **args) {
    int d = 4294967295;   /* does not fit in int; converted in an implementation-defined way (here: -1) */
    unsigned e = -1;      /* converted to UINT_MAX */
    printf("\n%u\n%d\n%d\n%u\n%zu\n", d, d, e, e, sizeof(int));
    return 0;
}
Output:
4294967295
-1
-1
4294967295
4
The question is: if both signed and unsigned integers can be used to display all kinds of integers just by applying a suitable format string, what is the need for unsigned in the first place?
You are right insofar as (on machines using two's complement and having 32-bit ints) 0xFFFFFFFF is displayed as -1 with %d but as 4294967295 with %u. Strictly speaking, though, printing a value with a mismatched format specifier is undefined behaviour.
But the real difference between signed and unsigned variables is in their arithmetic interpretation. The main differences are
with comparison: -1 < 0, but 0 < 4294967295;
with multiplication and division.
The sketch below illustrates the comparison and division cases.
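A minimal sketch of both pitfalls (assuming 32-bit int; in a mixed signed/unsigned expression the signed operand is converted to unsigned):

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 1;

    /* Comparison: -1 is converted to UINT_MAX, so the test fails. */
    if (s < u)
        printf("-1 < 1u\n");
    else
        printf("-1 >= 1u\n");             /* this branch is taken */

    /* Division: the converted value is huge. */
    printf("%d\n", -6 / 2);               /* -3 */
    printf("%u\n", (unsigned int)-6 / 2); /* 2147483645 on 32-bit */
    return 0;
}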
Unsigned can hold a larger positive value and no negative value. Unsigned uses the leading bit as part of the value, while the signed version uses the left-most bit to identify whether the number is positive or negative. Signed integers can hold both positive and negative numbers.
I have the following C code, and I need to understand why the result is a fairly big positive number:
int k;
unsigned int l;
float f;

f = 4; l = 1; k = -2;
printf("\n %f", (unsigned int)l + k + f);
The result is a very large number (around 4 billion, the maximum for 32-bit integers), so I suspect it has something to do with the representation of signed negative integers (two's complement), which looks fairly big if read as unsigned.
However, I don't quite understand why it behaves this way, or what the float has to do with it (if I remove the float, the behaviour stops). Could someone explain what happens when these numbers are added and why it leads to this result?
The problem is that when you add a signed int to an unsigned int, C converts the signed operand to unsigned before the addition, even when it is negative. Since k is negative, it gets re-interpreted as a large positive number before the addition. After that, f is added, but it is small in comparison to -2 re-interpreted as a positive number.
Here is a short illustration of the problem:
int k = -2;
unsigned int l = 1;
printf("\n %u", l+k);
This prints 4294967295 on a 32-bit system, because -2 in two's complement representation is 0xFFFFFFFE.
The answer is that when the compiler evaluates l+k, it converts k to unsigned int, which turns it into that big number. If you change the order of the operands to l+f+k, this behavior will not occur, because the additions are then performed in float.
What happens is that a negative integer is converted to an unsigned integer by repeatedly adding one more than the maximum unsigned value to it until the result is representable. This is why you see such a large value.
With two's complement, this is the same as truncating or sign-extending the value and interpreting it as unsigned, which is simple for a computer to do.
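A quick sketch of that rule (assuming a 32-bit unsigned int, so UINT_MAX + 1 is 4294967296):

#include <stdio.h>

int main(void)
{
    /* -2 is converted by adding UINT_MAX + 1: -2 + 4294967296 = 4294967294. */
    printf("%u\n", (unsigned int)-2);
    return 0;
}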
What is the difference between (unsigned)~0 and (unsigned)1? Why does unsigned ~0 print as -1, while unsigned 1 prints as 1? Does it have something to do with the way unsigned numbers are stored in memory? Why does an unsigned number give a signed result? It didn't give any overflow error either. I am using the GCC compiler:
#include <stdio.h>

int main(void)
{
    unsigned int x = (unsigned)~0;
    unsigned int y = (unsigned)1;
    printf("%d\n", x);   /* prints -1 */
    printf("%d\n", y);   /* prints 1 */
    return 0;
}
Because %d is a signed int specifier. Use %u instead, which prints 4294967295 on my machine.
As others mentioned, if you interpret the highest unsigned value as signed, you get -1; see the Wikipedia entry on two's complement.
Your system uses two's complement representation for negative numbers. In this representation a binary number composed of all ones represents -1, the negative number closest to zero.
Since inverting all the bits of zero gives you a number composed of all ones, you get -1 when you re-interpret that number as signed by printing it with %d, which expects a signed number, not an unsigned one.
First, in your use of printf you are telling it to print the number as signed ("%d") instead of unsigned ("%u").
Second, you are right in that it has "something to do with the way numbers are stored in memory". An int (signed or unsigned) is not a single bit on your computer, but a collection of k bits. The exact value of k depends on your computer architecture, but most likely k = 32.
For the sake of succinctness, let's assume your ints are 8 bits long, so k = 8 (this is almost certainly not the case unless you are working on a very limited embedded system). In that case (int)0 is actually 00000000, and (int)~0 (which inverts all the bits) is 11111111.
Finally, in two's complement (which is the most common binary representation of signed numbers), 11111111 is actually -1. See http://en.wikipedia.org/wiki/Two's_complement for a description of two's complement.
If you change your print to use "%u", it will print a positive integer representing 2^k - 1, where k is the number of bits in an integer (so probably it will print 4294967295).
printf() only knows what type of variable you passed it by the format specifiers you used in your format string. So what's happening here is that you're printing x and y as signed integers, because you used %d. Try %u instead, and you'll get a result more in line with what you're probably expecting.
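Putting the fixes together, a corrected sketch of the program above (the exact value depends on the width of int on your system):

#include <stdio.h>

int main(void)
{
    unsigned int x = (unsigned)~0;
    unsigned int y = (unsigned)1;
    printf("%u\n", x);   /* 4294967295 with 32-bit int */
    printf("%u\n", y);   /* 1 */
    return 0;
}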
For example:
unsigned int i = ~0;
Result: Max number I can assign to i
and
signed int y = ~0;
Result: -1
Why do I get -1? Shouldn't I get the maximum number that I can assign to y?
Both 4294967295 (a.k.a. UINT_MAX) and -1 have the same binary representation, 0xFFFFFFFF, i.e. all 32 bits set to 1. This is because signed numbers are represented using two's complement. A negative number has its MSB (most significant bit) set to 1, and its value is determined by flipping all the bits, adding 1 and multiplying by -1. So if you have the MSB set to 1 and the rest of the bits also set to 1, you flip them all (get 32 zeros), add 1 (get 1) and multiply by -1 to finally get -1.
This makes it easier for the CPU to do the math, as it needs no special cases for negative numbers. For example, try adding 0xFFFFFFFF (-1) and 1. Since there is only room for 32 bits, the addition wraps around and the result is 0, as expected.
See more at:
http://en.wikipedia.org/wiki/Two%27s_complement
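A quick check of the wrap-around described above (unsigned overflow is well defined in C; assuming 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 0xFFFFFFFFu;   /* the bit pattern of -1 */
    printf("%u\n", a + 1u);         /* wraps around to 0 */
    return 0;
}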
unsigned int i = ~0;
Result: Max number I can assign to i
Usually, but not necessarily. The expression ~0 evaluates to an int with all (non-padding) bits set. The C standard allows three representations for signed integers:
two's complement, in which case ~0 = -1 and assigning that to an unsigned int results in (-1) + (UINT_MAX + 1) = UINT_MAX.
ones' complement, in which case ~0 is either a negative zero or a trap representation; if it's a negative zero, the assignment to an unsigned int results in 0.
sign-and-magnitude, in which case ~0 is INT_MIN == -INT_MAX, and assigning it to an unsigned int results in (UINT_MAX + 1) - INT_MAX. (Width here means the number of value bits for unsigned integer types, and the number of value bits plus one sign bit for signed integer types.) That is 1 in the unlikely case that unsigned int has a smaller width than int, and 2^(WIDTH - 1) + 1 in the common case that the width of unsigned int is the same as the width of int.
The initialisation
unsigned int i = ~0u;
will always result in i holding the value UINT_MAX.
signed int y = ~0;
Result: -1
As stated above, only if the representation of signed integers uses two's complement (which nowadays is by far the most common representation).
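If you want the maximum value without depending on the representation at all, rely on the conversion rules instead of the bit pattern; converting -1 to an unsigned type is guaranteed to yield that type's maximum value:

#include <stdio.h>

int main(void)
{
    /* Conversion of -1 to unsigned is defined by value, not bit pattern,
       so this is UINT_MAX on every conforming implementation. */
    unsigned int i = -1;
    printf("%u\n", i);
    return 0;
}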
~0 is just an int with all bits set to 1. When interpreted as unsigned this will be equivalent to UINT_MAX. When interpreted as signed this will be -1.
Assuming 32 bit ints:
0 = 0x00000000 = 0 (signed) = 0 (unsigned)
~0 = 0xffffffff = -1 (signed) = UINT_MAX (unsigned)
Paul's answer is absolutely right. Instead of using ~0, you can use:
#include <limits.h>
signed int y = INT_MAX;
unsigned int x = UINT_MAX;
And now if you check the values:
printf("x = %u\ny = %d\n", x, y);
you can see the maximum values on your system.
No, because ~ is the bitwise NOT operator, not a "maximum value for type" operator. ~0 corresponds to an int with all bits set to 1 which, interpreted as unsigned, gives you the maximum number representable by an unsigned, and, interpreted as a signed int, gives you -1.
You must be on a two's complement machine.
Look up http://en.wikipedia.org/wiki/Two%27s_complement and learn a little about Boolean algebra and logic design. Learning how to count in binary, and how addition and subtraction work in binary, will also explain this further.
The C language uses this form of numbers, so to find the largest signed value you use 0x7FFFFFFF (two F's for each byte used, with the leftmost byte a 7). To understand this, look up hexadecimal numbers and how they work.
Now to explain the unsigned equivalent. In signed numbers the bottom half of the bit patterns are negative (0 is counted as positive, so the negative range reaches one further than the positive range). Unsigned numbers are all positive. In theory the highest number for a 32-bit int is 2^32, except that counting starts at 0, so the maximum is actually 2^32 - 1. For signed numbers, half of those patterns are negative: dividing 2^32 by 2 gives 2^31 patterns on each side, so the range of a signed 32-bit int is (-2^31, 2^31 - 1).
Now just comparing ranges:
unsigned 32 bit int: (0, 2^32-1)
signed 32 bit int: (-2^31, 2^31-1)
unsigned 16 bit int: (0, 2^16-1)
signed 16 bit int: (-2^15, 2^15-1)
you should be able to see the pattern here.
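You can confirm the pattern with the constants from <limits.h> (a sketch; the exact widths of short and int are implementation-defined):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("unsigned int:   0 .. %u\n", UINT_MAX);
    printf("signed int:     %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned short: 0 .. %u\n", (unsigned)USHRT_MAX);
    printf("signed short:   %d .. %d\n", SHRT_MIN, SHRT_MAX);
    return 0;
}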
Explaining the ~0 result takes a bit more; it has to do with how subtraction works in binary. Subtracting b from a is done by flipping all the bits of b, adding 1, and then adding the result to a (that is, a - b = a + ~b + 1). C does this for you behind the scenes, and so do many processors (including the x86 and x64 lines).
Because of this it's best to store negative numbers as though they are counting down from the all-ones pattern, and in two's complement the added 1 is folded into the representation. Since 0 takes one of the non-negative patterns, negative numbers start one step lower: flipping the bits of a non-negative n gives -(n + 1), so ~0 is -1, ~1 is -2, and so on. When decoding negative numbers we have to account for this.
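A small sketch of both identities (negation by flip-and-add-one, and ~n == -(n + 1)):

#include <stdio.h>

int main(void)
{
    int x = 5;
    printf("~%d + 1 = %d\n", x, ~x + 1);   /* two's complement negation: -5 */
    printf("7 - 5   = %d\n", 7 + ~5 + 1);  /* subtraction as a + ~b + 1: 2 */
    printf("~0      = %d\n", ~0);          /* -(0 + 1) = -1 */
    return 0;
}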