Why does gcc not add tmin + tmin correctly? [duplicate]

Duplicate of: Where in the C99 standard does it say that signed integer overflow is undefined behavior?
I've been playing around with bitwise operations and two's complement, when I discovered this oddity.
#include <stdio.h>
int main(void)
{
    int tmin = 0x80000000;
    printf("tmin + tmin: 0x%x\n", tmin + tmin);
    printf("!(tmin + tmin): 0x%x\n", !(tmin + tmin));
}
The code above results in the following output
tmin + tmin: 0x0
!(tmin + tmin): 0x0
Why does this happen?

0x80000000 in binary is
0b10000000000000000000000000000000
When you add two 0x80000000s together,
|<- 32bits ->|
0b10000000000000000000000000000000
+ 0b10000000000000000000000000000000
------------------------------------
0b100000000000000000000000000000000
|<- 32bits ->|
However, int on your machine appears to be 32 bits wide, so only the lower 32 bits are kept: the leading 1 is silently discarded. This is called integer overflow.
Also note that in C, signed integer overflow (as opposed to unsigned overflow, i.e. on unsigned int) is undefined behavior, which is why !(tmin + tmin) can give 0x0 instead of the 0x1 you would expect. See this blog post for an example where a variable is both true and false due to another kind of undefined behavior, namely an uninitialized variable.
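For contrast, the same arithmetic done in unsigned int is fully defined: unsigned overflow wraps modulo 2^32, so the logical negation behaves predictably. A minimal sketch, assuming 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    unsigned int tmin = 0x80000000u;  /* unsigned: wraparound is well-defined */
    unsigned int sum = tmin + tmin;   /* wraps modulo 2^32, yielding 0 */

    printf("sum: 0x%x\n", sum);       /* prints 0x0 */
    printf("!sum: %d\n", !sum);       /* prints 1, as expected */
    return 0;
}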

Related

Why is `x - y <= x` true, when x = 0x80000000, y = 1 (32-bit complement)? [duplicate]

Duplicate of: Detecting signed overflow in C/C++
I want to know if x - y overflows.
Below is my code.
#include <stdio.h>
/* Determine whether arguments can be subtracted without overflow */
int tsub_ok(int x, int y)
{
    return (y <= 0 && x - y >= x) || (y >= 0 && x - y <= x);
}

int main(void)
{
    printf("0x80000000 - 1 : %d\n", tsub_ok(0x80000000, 1));
}
Why can't I get the result I expect?
You can't check for overflow of signed integers by performing the offending operation and seeing if the result wraps around.
First, the value 0x80000000 passed to the function is outside the range of a 32-bit int, so it undergoes an implementation-defined conversion. On most systems, which use two's complement, this results in the int with that same representation, which is -2147483648, also known as INT_MIN.
Then you attempt to compute x - y, which results in signed integer overflow; that triggers undefined behavior, giving you an unexpected result.
The proper way to handle this is to perform some algebra to ensure the overflow does not happen.
If x and y have the same sign, subtracting won't overflow, with one edge case: x == 0 and y == INT_MIN, where x - y would be 2^31, which is out of range.
If the signs differ and x is positive, one might naively try this:
INT_MAX >= x - y
But this could overflow. Instead change it to the mathematically equivalent:
INT_MAX + y >= x
Because y is negative, INT_MAX + y doesn't overflow.
A similar check can be done when x is negative with INT_MIN. The full check:
#include <limits.h>

int tsub_ok(int x, int y)
{
    if (x >= 0 && y >= 0) {
        return 1;                /* result lies in [-INT_MAX, INT_MAX] */
    } else if (x <= 0 && y <= 0) {
        return x <= INT_MAX + y; /* guards the edge case y == INT_MIN, x == 0 */
    } else if (x >= 0 && INT_MAX + y >= x) {
        return 1;                /* here y < 0, so INT_MAX + y cannot overflow */
    } else if (x < 0 && INT_MIN + y <= x) {
        return 1;                /* here y > 0, so INT_MIN + y cannot overflow */
    } else {
        return 0;
    }
}
Yes, x - y overflows.
We assume int and unsigned int are 32 bits in the C implementation you are using, as indicated in the title, and that two's complement is used for int. Then the range of values for int is -2^31 to +2^31 - 1.
In tsub_ok(0x80000000, 1), the constant 0x80000000 has the value 2^31, and its type is unsigned int since it will not fit in an int. Then this value is passed to tsub_ok. Since the first parameter of tsub_ok has type int, the value is converted to int.
By C 2018 6.3.1.3 3, the conversion is implementation-defined. Many C implementations "wrap" the value modulo 2^32. Assuming your C implementation does this, the result of converting 2^31 to int is -2^31.
Then, inside the function, x - y is -2^31 - 1, and the result of that overflows the int type. The C standard does not define the behavior of the program when signed integer overflow occurs, and so any test that relies on comparing x - y when it may overflow is not supported by the C standard.
Here an int is 32 bits, which gives a total of 2^32 possible values. As unsigned, that is a maximum of 0xFFFFFFFF; as signed, the maximum hex value is 0x7FFFFFFF. Thus, you cannot store 0x80000000 in an int here and have everything work.
In computer programming, signed and unsigned numbers are represented only as sequences of bits. Bit 31 is the sign bit for a 32-bit signed int, hence the highest 32-bit int you can store is 0x7FFFFFFF, and hence the overflow with 0x80000000 as a signed int.
Remember, a signed int is an integer that can be both positive and negative, as opposed to an unsigned int, which can only hold a non-negative integer.
What you are doing is making a signed int variable hold an unsigned value that is outside its range, which causes the overflow.
For more info, check Signed number representations or refer to any beginner-level book on digital number systems and programming.
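A small sketch to print these limits on your own platform, using the macros from <limits.h>:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("INT_MAX  = %d (0x%x)\n", INT_MAX, (unsigned)INT_MAX);   /* 2147483647, 0x7fffffff */
    printf("INT_MIN  = %d (0x%x)\n", INT_MIN, (unsigned)INT_MIN);   /* -2147483648, 0x80000000 */
    printf("UINT_MAX = %u (0x%x)\n", UINT_MAX, UINT_MAX);           /* 4294967295, 0xffffffff */
    return 0;
}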

Does bit-shifting in C only work on blocks of 32-bits

I've been experimenting with C again after a while of not coding, and I have come across something I don't understand regarding bit shifting.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main(void)
{
    uint64_t x = 0;
    uint64_t testBin = 0b11110000;
    x = 1 << testBin;
    printf("testBin is %"PRIu64"\nx is %"PRIu64"\n", testBin, x);
    //uint64_t y = 240 % 32;
    //printf("%"PRIu64 "\n", y);
}
In the above code, x comes out as 65536, indicating that after shifting 240 places the 1 now sits at position 17 of a 32-bit register, whereas I'd expect it to be at position 49 of a 64-bit register.
I tried the same with unsigned long long types, which did the same thing.
I've tried compiling both with and without the -m64 flag; the result is the same.
In your setup the constant 1 is a 32-bit integer, so the expression 1 << testBin operates on 32 bits. You need to use a 64-bit constant to have the expression operate on 64 bits, e.g.:
x = (uint64_t)1 << testBin;
This does not change the fact that shifting by 240 bits is formally undefined behavior (even though it will probably give the expected result anyway). If testBin is set to 48, the result will be well-defined. Hence the following should be preferred:
x = (uint64_t)1 << (testBin % 64);
It happens because of the default type of the integer constant 1: it is int (not long long int). You need to use the ULL suffix:
x = 1ULL << testBin;
PS: if you want to shift by 240 bits and your integer type is narrower than that (maybe your implementation supports some giant integers), it is undefined behaviour.
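Putting both answers together, a minimal sketch with a 64-bit constant and an in-range shift count:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t testBin = 48;                /* keep the count in range 0..63 */
    uint64_t x = (uint64_t)1 << testBin;  /* the shift is now done in 64 bits */

    printf("x is %" PRIu64 "\n", x);      /* prints 281474976710656, i.e. 2^48 */
    return 0;
}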

What does "return 0x8000 0000;" mean? [closed]

I saw the following code (part of a function):
if (end == start)
{
    *max = *min = *start;
    return 0x80000000;
}
I don't understand why it returns 0x80000000, which is 2^31: that value is out of int's range and has type unsigned int.
And what is it equal to?
Complete code:
int MaxDiffCore(int* start, int* end, int* max, int* min)
{
    if (end == start)
    {
        *max = *min = *start;
        return 0x80000000;
    }
    int* middle = start + (end - start) / 2;
    int maxLeft, minLeft;
    int leftDiff = MaxDiffCore(start, middle, &maxLeft, &minLeft);
    int maxRight, minRight;
    int rightDiff = MaxDiffCore(middle + 1, end, &maxRight, &minRight);
    int crossDiff = maxLeft - minRight;
    *max = (maxLeft > maxRight) ? maxLeft : maxRight;
    *min = (minLeft < minRight) ? minLeft : minRight;
    int maxDiff = (leftDiff > rightDiff) ? leftDiff : rightDiff;
    maxDiff = (maxDiff > crossDiff) ? maxDiff : crossDiff;
    return maxDiff;
}
On a platform where int is 32 bits wide, the value 2^31 does not fit in an int, so the constant 0x80000000 has type unsigned int, and converting it to int for the return is implementation-defined. Typical implementations do a "straight bit assignment", keeping the 32-bit pattern unchanged.
Yes, the decimal representation of this number is 2^31, but only if you interpret the bits as unsigned. You really need to look at the type the value is converted to in order to know how it will be handled, and that is a signed int.
Now, assuming this is a 32-bit two's-complement platform, this is a fancy way to write INT_MIN, and by fancy, I mean non-portable, resting on assumptions that aren't guaranteed, and confusing to those who don't want to do the binary math. It sets the bits of the two's-complement representation directly.
Basically, with two's-complement numbers, zero is still
0x00000000
but to get -1 + 1 = 0 you need a bit pattern that yields 0 when you add 1 to it:
  0x????????
+ 0x00000001
= 0x00000000
So you choose all ones:
  0xFFFFFFFF
+ 0x00000001
= 0x00000000
relying on the carried 1s to eventually walk off the top end. You can then deduce that one lower is -2 = 0xFFFFFFFE, and so on. Since the top bit determines the "sign" of the number, the "biggest" negative number you could have is 0x80000000, and if you tried to subtract 1 from that, you would borrow from the "negative" sign bit, yielding the largest positive number, 0x7FFFFFFF.
If the constant has type unsigned int on your platform and the function is declared as returning int, then the unsigned int value will get implicitly converted to type int. If the original value does not fit into int range, the result of this conversion is implementation-defined (or it might raise a signal).
6.3.1.3 Signed and unsigned integers
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is
implementation-defined or an implementation-defined signal is raised.
Consult your compiler documentation to see what will happen in this case. Apparently the authors of the code came to the conclusion that their implementation does exactly what they wanted it to do. For example, a natural thing to expect is for an implementation to "reinterpret" the bit pattern as a signed integer value, with the highest-order bit becoming the sign bit. That converts 0x80000000 into the INT_MIN value on a two's-complement platform with 32-bit ints.
Still, this is bad practice, and the return value should be corrected.
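A portable correction is to name the intended value via <limits.h> instead of spelling out the bit pattern; a sketch of the fixed branch:

#include <limits.h>  /* for INT_MIN */

/* ... inside MaxDiffCore ... */
if (end == start)
{
    *max = *min = *start;
    return INT_MIN;  /* same intent, no unsigned-to-signed conversion */
}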
The code returns the bit pattern 10000000 00000000 00000000 00000000.
On machines where sizeof(int) == 4:
Since the function's return type is int, that pattern is treated as the integer -2147483648 (a negative value).
If the return type had been unsigned int, the same pattern 0x80000000 would be treated as the integer 2147483648.
On machines where sizeof(int) == 2:
The result would be implementation-defined, or an implementation-defined signal would be raised [see the other answer].

Inconsistent left logical shift behavior [duplicate]

Duplicate of: What's bad about shifting a 32-bit variable 32 bits?
I am developing a simple C app on a CentOS linux machine my university owns and I am getting very strange inconsistent behavior with the << operator.
Basically I am attempting to shift 0xffffffff left based on a variable shiftNum, which is derived from a variable n:
int shiftNum = (32 + (~n + 1));
int shiftedBits = (0xffffffff << shiftNum);
This has the effect of shifting 0xffffffff left by 32-n bits and works as expected. However, when n = 0 and shiftNum = 32, I get some very strange behaviour: instead of the expected 0x00000000 I get 0xffffffff.
For example, this snippet:
int n = 0;
int shiftNum = (32 + (~n + 1));
int shiftedBits = (0xffffffff << shiftNum );
printf("n: %d\n",n);
printf("shiftNum: 0x%08x\n",shiftNum);
printf("shiftedBits: 0x%08x\n",shiftedBits);
int thirtyTwo = 32;
printf("ThirtyTwo: 0x%08x\n",thirtyTwo);
printf("Test: 0x%08x\n", (0xffffffff << thirtyTwo));
Outputs:
n: 0
shiftNum: 0x00000020
shiftedBits: 0xffffffff
ThirtyTwo: 0x00000020
Test: 0x00000000
I have no idea what is going on, honestly. Some crazy low-level something, I suspect. Even stranger, the operation (0xffffffff << (shiftNum - 1)) << 1 outputs 0x00000000.
Does anyone have any clue whats going on?
If you invoke undefined behaviour, the results are unspecified and anything is valid.
When n is 0, 32 + (~n + 1) is 32 (on a two's complement CPU). If sizeof(shiftNum) == 4 (or sizeof(shiftNum) * CHAR_BIT == 32, which usually has the same result), then you are only allowed to shift by values 0..31; anything else is undefined behaviour.
ISO/IEC 9899:2011 §6.5.7 Bitwise shift operators:
If the value of the right operand is negative or is
greater than or equal to the width of the promoted left operand, the behavior is undefined.
The result, therefore, is "correct": no particular output is owed to you, even if you get a different answer each time you run the code or recompile the program. What you are most likely observing is that the compiler folded the constant shift to 0 at compile time, while the variable shift was executed at run time by an x86 shift instruction, which masks the count to its low 5 bits, turning a shift by 32 into a shift by 0 and leaving 0xffffffff unchanged.
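If the intent is a mask of the top n bits that also works for n == 0, the shift count has to be guarded before the operator ever sees it. A minimal sketch, assuming 32-bit values (mask_upper is a hypothetical helper name, not from the question):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Returns a mask with the top n bits set, for n in 0..32. */
static uint32_t mask_upper(unsigned n)
{
    return (n == 0) ? 0 : 0xFFFFFFFFu << (32u - n);  /* shift count stays in 0..31 */
}

int main(void)
{
    printf("n = 0:  0x%08" PRIx32 "\n", mask_upper(0));   /* 0x00000000 */
    printf("n = 8:  0x%08" PRIx32 "\n", mask_upper(8));   /* 0xff000000 */
    printf("n = 32: 0x%08" PRIx32 "\n", mask_upper(32));  /* 0xffffffff */
    return 0;
}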

Task: describe what the following C program does? [closed]

this is the code (copy & pasted):
#include <stdio.h>
int main(){
    char x, y, result;

    //Sample 1:
    x = 35;
    y = 85;
    result = x + y;
    printf("Sample 1: %hi + %hi = %hi\n", x, y, result);

    //Sample 2:
    x = 85;
    y = 85;
    result = x + y;
    printf("Sample 2: %hi + %hi = %hi\n", x, y, result);

    return 0;
}
I've tried to compile it, but it doesn't work. Am I stupid, or should it be "int" or "short" instead of char at the beginning? Once I change that, it works, but I'm worried that it should work as is...
Does the program really just add x and y and show the result? That's what it does if I use short instead of char.
Thanks in advance!
Thoughts:
For an introductory course, this is a terrible example. Depending on your implementation, char is either a signed or unsigned number. And the code will behave very differently depending on this fact.
That being said, yes, this code is basically adding two numbers and printing the result. I agree that the %hi is odd: it expects a short int. I'd personally expect either %hhi or just %i, and let integer promotion do its thing.
If the numbers are unsigned chars
85 + 35 == 120, which is probably less than CHAR_MAX (which is probably 255). So there's no problem and everything works fine.
85 + 85 == 170, which is probably less than CHAR_MAX (which is probably 255). So there's no problem and everything works fine.
If the numbers are signed chars
85 + 35 == 120, which is probably less than CHAR_MAX (which is probably 127). So there's no problem and everything works fine.
85 + 85 == 170, which is probably greater than CHAR_MAX. The addition itself is done in int (the operands are promoted), so it does not overflow; the problem is converting the out-of-range result 170 back to a signed char for the assignment, which is implementation-defined behavior (C 2018 6.3.1.3) and on typical two's-complement machines wraps around to -86.
The output of the program appears to be
Sample 1: 35 + 85 = 120
Sample 2: 85 + 85 = -86
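The -86 can be reproduced with defined conversions by going through an unsigned byte; a minimal sketch, assuming 8-bit chars and the usual two's-complement int8_t:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int sum = 85 + 85;            /* the addition happens in int: 170 */
    uint8_t low = (uint8_t)sum;   /* well-defined: 170 modulo 256 = 170 */
    int8_t back = (int8_t)low;    /* 170 doesn't fit: wraps to 170 - 256 = -86
                                     (implementation-defined before C23, but
                                     int8_t is required to be two's complement) */
    printf("%d becomes %d\n", sum, back);
    return 0;
}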
I compiled this on http://ideone.com/ and it worked fine.
The output is in fact what you would expect. The program is working! The reason you are seeing a number you did not expect is the width of the char data type: 1 byte.
The C standard does not dictate whether char is signed or unsigned, but assuming it is signed, it can represent numbers in the range -128 to 127 (a char being 8 bits, or 1 byte). 85 + 85 = 170, which is outside this range; the MSB of the byte becomes 1 and the value wraps around to give you a negative number. Try reading up on two's complement arithmetic.
The arithmetic is:
01010101 +
01010101
--------
10101010
Because the data type is signed and the MSB is set, the number is now negative: in this case, -86.
Note: see Bill Lynch's answer, which rightly points out that the out-of-range result is not something the standard guarantees.
