Value of a short variable in C

I have a simple program with a short variable declaration:
short int v=0XFFFD;
printf("v = %d\n",v);
printf("v = %o\n",v);
printf("v = %X\n",v);
The result is:
v = -3 ; v = 37777777775 ; v = FFFFFFFD
I don't understand how to calculate these values. I know that a short variable can hold values between -32768 and 32767, and the value 0XFFFD causes an overflow, but I don't know how to calculate the exact value, which is -3 in this case.
Also, if my declaration is v=0XFFFD why the output v=%X is FFFFFFFD?

First of all, a short can be as small as 16 bits (which is probably the case with your compiler). This means that 65533 can't be represented; the assignment overflows and wraps to -3 (since short int is a signed type). But you already knew that.
Secondly, when passed as an argument to printf, the short int is automatically promoted to int, but as v contains -3, that's the value sent to printf.
Thirdly, the %o and %X conversions expect an unsigned int, which is not quite what you've supplied. This is undefined behavior in theory, but in practice it's quite predictable: the bit pattern of -3 is interpreted as an unsigned integer instead, which on 32-bit machines happens to be 0xFFFFFFFD.

If short is 2 bytes on this machine, then 0x8000 is the binary representation of the most negative value this type can hold, namely -32768. Because of how two's complement is designed, the following numbers are represented by the corresponding next bit patterns, i.e.:
biggest negative value = 0x8000 = -32768
0x8001 = biggest negative value + 1 = -32767
0xFFFF = biggest negative value + 0x7FFF = -1
0xFFFE = biggest negative value + 0x7FFE = 0xFFFF - 1 = -2
0xFFFD = biggest negative value + 0x7FFD = 0xFFFF - 2 = 0xFFFE - 1 = -3

The number 0xFFFD does not cause an overflow, since -3 is perfectly within the range of -32768 through 32767.
Any short variable is a signed two's-complement value, and -3 (decimal) is encoded like any other two's-complement value. To see how the encoding of -3 (decimal) is derived, take a look at the many tutorials on YouTube or in any pertinent textbook.
Your compiler conducts an implicit type conversion of your short variable to an int before printing it as an int. To do that it has to carry out a so-called sign extension, since it has to make a 32-bit signed two's complement out of a 16-bit signed two's complement. Just as any decimal number has infinitely many zeros preceding it (for example 34 = 0034), a negative two's-complement number has infinitely many 1s preceding it. So the compiler copies the most significant bit to the left, making 0xFFFFFFFD out of 0xFFFD and 37777777775 (oct) out of 177775 (oct), i.e. (00)1 111 111 111 111 101 (bin).
I hope that information helped you.

but I don't know how to calculate the exact value, which is -3 in this case.
The most common way of representing integer numbers in computers are 2's complement.
If you have an n-bit integer, you get the decimal value like this:
Decimal value = -b(n-1)·2^(n-1) + b(n-2)·2^(n-2) + ... + b(1)·2^1 + b(0)·2^0, where b(i) is the value of bit number i.
In your case you have a 16 bit variable. The hexadecimal representation is 0xfffd which in binary is 1111.1111.1111.1101
Inserting into the formula you'll get:
Decimal value = -1·2^15 + 1·2^14 + ... + 1·2^2 + 0·2^1 + 1·2^0 = -3
See https://en.wikipedia.org/wiki/Two%27s_complement for more about the subject.

Difference between unsigned int and int

I read about two's complement on Wikipedia and on Stack Overflow; this is what I understood, but I'm not sure if it's correct.
signed int
the left most bit is interpreted as -2^31, and this is how we can have negative numbers
unsigned int
the left most bit is interpreted as +2^31, and this is how we achieve large positive numbers
update
What will the compiler see when we store 3 vs -3?
I thought 3 is always 00000000000000000000000000000011
and -3 is always 11111111111111111111111111111101
example for 3 vs -3 in C:
unsigned int x = -3;
int y = 3;
printf("%d %d\n", x, y); // -3 3
printf("%u %u\n", x, y); // 4294967293 3
printf("%x %x\n", x, y); // fffffffd 3
Two's complement is a way to represent negative integers in binary.
First of all, here are the standard 32-bit integer ranges:
Signed = -(2 ^ 31) to ((2 ^ 31) - 1)
Unsigned = 0 to ((2 ^ 32) - 1)
In two's complement, a negative is represented by inverting the bits of its positive equivalent and adding 1:
10 which is 00001010 becomes -10 which is 11110110 (if the numbers were 8-bit integers).
Also, the binary representation is only important if you plan on using bitwise operators.
If you're doing basic arithmetic, then this is unimportant.
The only other time this may give unexpected results is taking the absolute value of the most negative signed value, -(2^31): its positive counterpart is not representable, so the result will still be negative.
Your problem does not have to do with the representation, but the type.
A negative number in an unsigned integer is represented the same, the difference is that it becomes a super high number since it must be positive and the sign bit works as normal.
You should also realize that ((2^32) - 5) is the exact same thing as -5 if the value is unsigned, etc.
Therefore, the following holds true:
unsigned int x = (unsigned int)-1 - 4; /* UINT_MAX - 4, i.e. 2^32 - 5 */
unsigned int y = -5;                   /* also wraps to 2^32 - 5 */
if (x == y) {
    printf("Negative values wrap around in unsigned integers on underflow.");
}
else {
    printf("Unsigned integer underflow is undefined!"); /* never taken: the wrap-around is well defined */
}
The numbers don't change, just the interpretation of the numbers. For most two's complement processors, add and subtract do the same math, but set a carry / borrow status assuming the numbers are unsigned, and an overflow status assuming the numbers are signed. For multiply and divide, the result may be different between signed and unsigned numbers (if one or both numbers are negative), so there are separate signed and unsigned versions of multiply and divide.
For 32-bit integers, for both signed and unsigned numbers, the n-th bit is always interpreted as +2^n.
For signed numbers with the 31st bit set, the result is then adjusted by -2^32.
Example:
1111 1111 1111 1111 1111 1111 1111 1111 (binary) as an unsigned int is interpreted as 2^31+2^30+...+2^1+2^0. The interpretation of this as a signed int would be the same MINUS 2^32, i.e. 2^31+2^30+...+2^1+2^0-2^32 = -1.
(Well, it can be said that for signed numbers with the 31st bit set, this bit is interpreted as -2^31 instead of +2^31, like you said in the question. I find this way a little less clear.)
Your representation of 3 and -3 is correct: 3 = 0x00000003, and -3 + 2^32 = 0xFFFFFFFD.
Yes, you are correct, allow me to explain a bit further for clarification purposes.
The difference between int and unsigned int is only in how the bits are interpreted. The machine processes unsigned and signed values with the same hardware; no extra bits are added for the sign. Two's-complement notation simply gives the top bit a negative weight.
Example:
The number 5 is 0101; inverting gives 1010, and adding 1 gives 1011, which is -5 in 4-bit two's complement.
In C or C++, which type to use depends on the situation: use unsigned types for values that are inherently non-negative or returned as unsigned by functions and operators. ALUs handle signed and unsigned variables very similarly.
The rules for writing a number in two's complement are as follows:
If the number is positive, write it in ordinary binary; the range reaches 2^(32-1) - 1.
If it is 0, use all zeroes.
For negatives, write the positive value, flip all the 1s and 0s, then add 1.
Example 2 (the beauty of two's complement):
-2 + 2 = 0 is computed as 1110 + 0010, which is 1 0000; dropping the overflowing carry bit leaves the result 0000.

Tilde C unsigned vs signed integer

For example:
unsigned int i = ~0;
Result: Max number I can assign to i
and
signed int y = ~0;
Result: -1
Why do I get -1? Shouldn't I get the maximum number that I can assign to y?
Both 4294967295 (a.k.a. UINT_MAX) and -1 have the same binary representation of 0xFFFFFFFF or 32 bits all set to 1. This is because signed numbers are represented using two's complement. A negative number has its MSB (most significant bit) set to 1 and its value determined by flipping the rest of the bits, adding 1 and multiplying by -1. So if you have the MSB set to 1 and the rest of the bits also set to 1, you flip them (get 32 zeros), add 1 (get 1) and multiply by -1 to finally get -1.
This makes it easier for the CPU to do the math as it needs no special exceptions for negative numbers. For example, try adding 0xFFFFFFFF (-1) and 1. Since there is only room for 32 bits, this will overflow and the result will be 0 as expected.
See more at:
http://en.wikipedia.org/wiki/Two%27s_complement
unsigned int i = ~0;
Result: Max number I can assign to i
Usually, but not necessarily. The expression ~0 evaluates to an int with all (non-padding) bits set. The C standard allows three representations for signed integers,
two's complement, in which case ~0 = -1 and assigning that to an unsigned int results in (-1) + (UINT_MAX + 1) = UINT_MAX.
ones' complement, in which case ~0 is either a negative zero or a trap representation; if it's a negative zero, the assignment to an unsigned int results in 0.
sign-and-magnitude, in which case ~0 is INT_MIN == -INT_MAX, and assigning it to an unsigned int results in (UINT_MAX + 1) - INT_MAX. (Here the width of a type counts its value bits, plus the sign bit for signed types.) That is 1 in the unlikely case that unsigned int has a smaller width than int, and 2^(WIDTH - 1) + 1 in the common case that the widths of unsigned int and int are the same.
The initialisation
unsigned int i = ~0u;
will always result in i holding the value UINT_MAX.
signed int y = ~0;
Result: -1
As stated above, only if the representation of signed integers uses two's complement (which nowadays is by far the most common representation).
~0 is just an int with all bits set to 1. When interpreted as unsigned this will be equivalent to UINT_MAX. When interpreted as signed this will be -1.
Assuming 32 bit ints:
0 = 0x00000000 = 0 (signed) = 0 (unsigned)
~0 = 0xffffffff = -1 (signed) = UINT_MAX (unsigned)
Paul's answer is absolutely right. Instead of using ~0, you can use:
#include <limits.h>
signed int y = INT_MAX;
unsigned int x = UINT_MAX;
And now if you check values:
printf("x = %u\ny = %d\n", x, y);
you can see max values on your system.
No, because ~ is the bitwise NOT operator, not the maximum value for type operator. ~0 corresponds to an int with all bits set to 1, which, interpreted as an unsigned gives you the max number representable by an unsigned, and interpreted as a signed int, gives you -1.
You must be on a two's complement machine.
Look up http://en.wikipedia.org/wiki/Two%27s_complement, and learn a little about Boolean algebra, and logic design. Also learning how to count in binary and addition and subtraction in binary will explain this further.
The C language uses this form of numbers, so to find the largest signed value you use 0x7FFFFFFF (two hex digits per byte; every byte is FF except the leftmost, which is 7F). To understand this you need to look up hexadecimal numbers and how they work.
Now to explain the unsigned equivalent. In signed numbers the upper half of the bit patterns are negative (0 is counted among the non-negative values, so the negative side reaches one further than the positive side). Unsigned numbers are all positive, so in theory the highest value for a 32-bit int would be 2^32, except that 0 still takes a slot, making it 2^32 - 1. For signed numbers half of those patterns are negative: dividing 2^32 by 2 gives 2^31 patterns on each side, and with 0 on the non-negative side the range of a signed 32-bit int is (-2^31, 2^31 - 1).
Now just comparing ranges:
unsigned 32 bit int: (0, 2^32-1)
signed 32 bit int: (-2^31, 2^31-1)
unsigned 16 bit int: (0, 2^16-1)
signed 16 bit int: (-2^15, 2^15-1)
you should be able to see the pattern here.
To explain the ~0 thing takes a bit more; it has to do with subtraction in binary: negating a number is just flipping all the bits and adding 1, and subtracting is adding that negated value. C does this for you behind the scenes, and so do many processors (including the x86 and x64 lines).
Because of this, negative numbers are in effect stored counting downward from the all-ones pattern. Since 0 occupies a pattern on the non-negative side, there is no negative zero; the all-ones pattern is -1, which is exactly why flipping the bits of 0 (~0) reads back as -1.

Why does storing 255 in a char variable give it the value -1 in C?

I am reading a C book, and there is a text the author mentioned:
"if ch (a char variable) is a signed type, then storing 255 in the ch variable gives it the value -1".
Can anyone elaborate on that?
Assuming 8-bit chars, that is actually implementation-defined behaviour. The value 255 cannot be represented as a signed 8-bit integer.
However, most implementations simply store the bit-pattern, which for 255 is 0xFF. With a two's-complement interpretation, as a signed 8-bit integer, that is the bit-pattern of -1. On a rarer ones'-complement architecture, that would be the bit pattern of negative zero or a trap representation, with sign-and-magnitude, it would be -127.
If either of the two assumptions (signedness and 8-bit chars) doesn't hold, the value will be¹ 255, since 255 is representable as an unsigned 8-bit integer or as a signed (or unsigned) integer with more than 8 bits.
¹ The standard guarantees that CHAR_BIT is at least 8, it may be greater.
Try it in decimal. Suppose we can only have 3 digits. So our unsigned range is 0 - 999.
Let's see if 999 can actually behave as -1 (signed):
42 + 999 = 1041
Because we can only have 3 digits, we drop the highest order digit (the carry):
041 = 42 - 1
This is a general rule that applies to any number base.
That is not guaranteed behavior. To quote ANSI/ISO/IEC 9899:1999 §6.3.1.3 (converting between signed and unsigned integers) clause 3:
Otherwise, the new type is signed and the value cannot be represented in it;
either the result is implementation-defined or an implementation-defined signal
is raised.
I'll leave the bitwise/2's complement explanations to the other answers, but standards-compliant signed chars aren't even guaranteed to be too small to hold 255; they might work just fine (giving the value 255.)
That's how two's complement works. Read all about it here.
You have the classical explanation in the other answers; I'll give you a rule:
In a signed type of size n bits, the MSB, when set, is interpreted as -2^(n-1).
For this concrete question, assuming char is 8 bits (1 byte) long, 255 in binary expands to:
1*2^(7) +
1*2^(6) +
1*2^(5) +
1*2^(4) +
1*2^(3) +
1*2^(2) +
1*2^(1) +
1*2^(0) = 255
So 255 is equivalent to 1111 1111.
For unsigned char you get 255, but if you are dealing with a signed char, the MSB carries a negative weight:
-1*2^(7) +
1*2^(6) +
1*2^(5) +
1*2^(4) +
1*2^(3) +
1*2^(2) +
1*2^(1) +
1*2^(0) = -1

Test whether sum of two integers might overflow

From C traps and pitfalls
If a and b are two integer variables, known to be non-negative then to
test whether a+b might overflow use:
if ((int) ((unsigned) a + (unsigned) b) < 0 )
complain();
I didn't get that how comparing the sum of both integers with zero will let you know that there is an overflow?
The code you saw for testing for overflow is just bogus.
For signed integers, you must test like this:
if ((a^b) < 0) overflow=0; /* opposite signs can't overflow */
else if (a>0) overflow=(b>INT_MAX-a);
else overflow=(b<INT_MIN-a);
Note that the cases can be simplified a lot if one of the two numbers is a constant.
For unsigned integers, you can test like this:
overflow = (a+b<a);
This is possible because unsigned arithmetic is defined to wrap, unlike signed arithmetic which invokes undefined behavior on overflow.
When an overflow occurs, the sum exceeds the range of the type; for a 32-bit signed int that range is:
-2,147,483,648 <= sum <= 2,147,483,647
So when the sum overflows, it wraps around to the other end of the range:
2,147,483,647 + 1 = -2,147,483,648
If the sum is negative and you know that the two numbers are non-negative, then the sum overflowed.
If a and b are known to be non-negative integers, the expression (int) ((unsigned) a + (unsigned) b) will indeed yield a negative number on overflow.
Let's assume a 4-bit system (max positive signed integer is 7 and max unsigned integer is 15) with the following values:
a = 6
b = 4
a + b = 10 (overflow if performed with integers)
While if we do the addition using the unsigned conversion, we will have:
(int)((unsigned)a + (unsigned)b) = (int)((unsigned)10) = -6
To understand why, we can quickly check the binary addition:
a = 0110 ; b = 0100 - first bit is the sign bit for signed int.
0110 +
0100
------
1010
For unsigned int, 1010 = 10. While the same representation in signed int means -6.
So the result of the operation is indeed < 0.
If the integers are unsigned and you're assuming IA32, you can do some inline assembly to check the value of the CF flag. The asm can be trimmed a bit, I know.
int of(unsigned int a, unsigned int b)
{
    unsigned int flags;
    __asm__("addl %2, %1\n\t"  /* a += b; sets CF on unsigned overflow */
            "pushfl\n\t"       /* push EFLAGS... */
            "popl %0\n\t"      /* ...and pop it into flags */
            : "=r"(flags), "+r"(a)
            : "r"(b)
            : "cc");
    return flags & 1;          /* CF is bit 0 of EFLAGS */
}
There are some good explanations on this page.
Here's the simple way from that page that I like:
Do the addition normally, then check the result (e.g. if (a+23<23) overflow).
As we know, the addition of two numbers might overflow.
For that, one bitwise way to add two numbers follows the adder concept.
Suppose we have two numbers a and b; then:
(a ^ b) + ((a & b) << 1);
gives the correct result: the XOR is the sum without carries, and the AND, shifted left by one, supplies the carries.
And this is patented by Samsung.
Assuming two's-complement representation and 8-bit integers, the most significant bit carries the sign (1 for negative and 0 for positive). Since we know the integers are non-negative, the most significant bit is 0 for both. If adding the unsigned representations of these numbers produces a 1 in the most significant bit, the addition has overflowed. To check whether an unsigned integer has a 1 in its most significant bit, check whether it exceeds the range of the signed type, or convert it to a signed integer, which will come out negative (because the most significant bit is 1).
Example with 8-bit signed integers (range -128 to 127):
two's complement of 127 = 0111 1111
two's complement of 1 = 0000 0001
unsigned 127 = 0111 1111
unsigned 1 = 0000 0001
unsigned sum = 1000 0000
The sum is 128, which is not an overflow for an unsigned integer but is an overflow for a signed integer; the most significant bit gives it away.

Clarification is needed on bitwise not (~) operator

Suppose you have the following C code.
unsigned char a = 1;
printf("%d\n", ~a); // prints -2
printf("%d\n", a); // prints 1
I am surprised to see -2 printed as a result of ~1 conversion:
The opposite of 0000 0001 is 1111 1110. That is anything but -2.
What am I missing here?
It is two's complement.
In two's complement representation, if a number x's most significant bit is 1, then the actual value would be −(~x + 1).
For instance,
0b11110000 = -(~0b11110000 + 1) = -(0b00001111 + 1) = -(15 + 1) = -16.
This is a natural representation of negative numbers, because
0000001 = 1
0000000 = 0
1111111 = -1 (wrap around)
1111110 = -2
1111101 = -3 etc.
See http://en.wikipedia.org/wiki/Two%27s_complement for detail.
BTW, to print the value as an unsigned byte, use the %hhu or %hhx format (C99). See http://www.ideone.com/YafE3.
%d stands for signed decimal number, not unsigned. So your bit pattern, even though it is stored in an unsigned variable, is interpreted as a signed number.
See this Wikipedia entry on signed number representations for an understanding of the bit values. In particular see Two's complement.
One (mildly humorous) way to think of signed maths is to recognize that the most significant bit really represents an infinite number of bits above it. So in a 16-bit signed number, the most significant bit is 32768+65536+131072+262144+...etc. which is 32768*(1+2+4+8+...) Using the standard formula for a power series, (1+ X + X^2 + X^3 +...) = 1/(1-X), one discovers that (1+2+4+8+...) is -1, so the sum of all those bits is -32768.
