C Integer Behavior: Dividing by Zero [duplicate]

This question already has answers here:
Division by zero: Undefined Behavior or Implementation Defined in C and/or C++?
I recently decided to satisfy a long-time curiosity of mine: what happens when you divide by zero in C? I had always simply avoided the situation with checks in my own logic, but I was curious how a language like C, which cannot throw a catchable exception, handles it.
Here was my very simple test (compiled with GCC on Windows 10):
#include <stdio.h>

int main(void)
{
    double test1 = 1.0 / 0.0;
    printf("%f", test1);
    int test2 = 1 / 0;
    printf("%d", test2);
}
The operation done with the double types gave me a lovely little indication that the result was not a finite number: 1.#INF00. All's fine so far.
However, when performing a divide by zero with int types, the program, less than eloquently, "stopped working." I'm running this on Windows, so I was alerted with that lovely dialog.
I am curious about this behavior. Why is crashing the program the chosen response to an integer division by zero? Is there really no other way to handle it, say, akin to the double case? Is this the same behavior on every compiler, or just GCC?

Dividing by zero invokes undefined behavior. Anything can happen when undefined behavior is invoked.
Quote from N1570 6.5.5 Multiplicative operators:
5 The result of the / operator is the quotient from the division of the first operand by the
second; the result of the % operator is the remainder. In both operations, if the value of
the second operand is zero, the behavior is undefined.
Quote from N1570 3. Terms, definitions, and symbols:
3.4.3
1 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data,
for which this International Standard imposes no requirements
2
NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable
results, to behaving during translation or program execution in a documented manner characteristic of the
environment (with or without the issuance of a diagnostic message), to terminating a translation or
execution (with the issuance of a diagnostic message).

It's undefined behavior.
C11 §6.5.5 Multiplicative operators
The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.

The behaviour on integer division by zero is undefined in C, so the output can depend on anything, including the compiler. Essentially this is because there is no specific bit pattern to represent infinity for an integral type.
Floating-point division by zero, by contrast, is defined on implementations that follow IEEE 754 (Annex F of the standard): it returns the floating point's best representation of infinity.
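Since there is no integer bit pattern for infinity, the only portable way to get float-style graceful handling is to test the divisor before dividing. A minimal sketch of my own (the helper name checked_div is made up for illustration):

#include <stdio.h>

/* Returns 1 and writes the quotient through out on success;
 * returns 0 instead of dividing when the divisor is zero. */
int checked_div(int num, int den, int *out)
{
    if (den == 0)
        return 0;
    *out = num / den;
    return 1;
}

int main(void)
{
    int q;
    if (checked_div(1, 0, &q))
        printf("%d\n", q);
    else
        printf("division by zero avoided\n");
}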

Related

Why doesn't integer division by zero crash the program compiled with gcc?

Integer division by zero is undefined and should result in a floating point exception, and this is what happens when I write the following code:
int j = 0;
int x = 1 / j;
In this case the program just crashes with an FPE, but if I don't use a variable and instead use literal values, like int x = 1 / 0;, then the program doesn't crash and just assigns some seemingly random value to x! I tried to detect whether int x = 1 / 0; really causes a crash by adding a custom SIGFPE handler, but as I suspected it is never invoked for the literal zero division, though it is when I store 0 in a variable first. So it seems the literal zero division never actually crashes the program (I mean the division itself never happens). Does the compiler (GCC in my case) perform any tricks (e.g. substituting 0 with a random number) in this case? Or have I misunderstood something?
Why doesn't integer division by zero crash the program compiled with gcc?
Well, I think the answer would be: because it can. It can crash. It can not crash.
Or have I misunderstood something?
From C11 6.5.5p5 (emphasis mine :p) :
The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
The behavior is not defined. It is not defined what should happen. It can "crash". It can not crash. It can spawn nasal demons. The program can do anything.
You should write only programs whose behavior you understand. If you want your program to "crash", call abort().
Does the compiler (GCC in my case) perform any tricks (e.g. substituting 0 with a random number) in this case?
The only way to know is to inspect the generated assembly code. You could use https://godbolt.org/ for that.
Since the expression divides by a literal zero, gcc can deal with the issue entirely at compile time; no division instruction is ever emitted. When the divisor is a variable holding 0, the division has to happen at run time, and the hardware raises the exception, because the compiler cannot predict the value.
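To watch the compile-time/run-time split directly, here is a sketch of my own (assuming POSIX signals and typical x86 Linux behavior): the handler fires only when the division actually happens at run time, exactly as the question observed.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Returning normally from a handler for a hardware-raised SIGFPE is
 * itself undefined behavior, so the handler exits instead. */
static void on_fpe(int sig)
{
    (void)sig;
    _Exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGFPE, on_fpe);
    volatile int j = 0;   /* volatile prevents constant folding */
    int x = 1 / j;        /* run-time division: the handler fires here */
    printf("%d\n", x);    /* not reached on typical x86 Linux */
}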

What is the difference between performing a division operation on constants and on variables?

According to this link, performing the division INT_MIN / -1 will cause the program to terminate on i386 CPUs. My processor has a 32-bit architecture and I use the GCC compiler. I have done the following experiments to check it.
int a = INT_MIN;
int b = -1;
int c = a / b;
printf("%d\n",c);
As per the information in the link mentioned above, this program gets terminated with a floating point exception. But it wasn't the same when I tried it in a different manner.
int c = INT_MIN / -1;
printf("%d\n",c);
The compiler threw the following warning after compiling this program.
iso.c: In function ‘main’:
iso.c:6:18: warning: integer overflow in expression [-Woverflow]
int c = INT_MIN / -1;
             ^
But I got the output -2147483648. I then did two more experiments.
int a = INT_MIN;
int b = -1;
printf("%d\n",a / b);
This was a floating point exception.
printf("%d\n",INT_MIN / -1);
This threw the following compiler warning.
iso.c: In function ‘main’:
iso.c:6:24: warning: integer overflow in expression [-Woverflow]
printf("%d\n",INT_MIN / -1);
                  ^
And the output of this program was again -2147483648.
After doing all these experiments, I have noticed that the result of a division performed directly on constants differs from the result of the same division performed on variables. So what exactly is making this difference?
Both results are acceptable according to the standard. Draft N1256 for C99 says (emphasis mine):
6.5 Expressions...
5 If an exceptional condition occurs during the evaluation of an expression (that is, if the
result is not mathematically defined or not in the range of representable values for its
type), the behavior is undefined.
In 2's complement integer representation, INT_MIN / -1 is INT_MAX + 1, so the operation invokes Undefined Behaviour, and any result (including a crash) is acceptable.
As explained by @tilz0R in his comment, when the values are passed in variables, the operation is executed at run time and raises a SIGFPE signal. But when the operation only involves compile-time constants, it is executed by the compiler at compile time. In gcc's implementation, the compiler protects itself against the error and simply uses its best representation for INT_MAX + 1. In a 32-bit 2's complement implementation, INT_MAX is 0x7fffffff, so INT_MAX + 1 is (after signed overflow) 0x80000000, or INT_MIN again (-2147483648).
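A sketch of mine that shows both behaviors in one program (assuming gcc on an x86 machine with 32-bit int, matching what the answer describes):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Constant operands: gcc folds the division at compile time,
     * warns under -Woverflow, and the program prints -2147483648. */
    printf("%d\n", INT_MIN / -1);

    /* volatile forces a genuine run-time idiv, which traps on x86,
     * so the process dies with SIGFPE before the second print. */
    volatile int b = -1;
    printf("%d\n", INT_MIN / b);
}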

Why do you get different values for integer division in C89?

For example, suppose you have these variables:
int i = 9;
int j = 7;
Depending on the implementation, the value of (-i)/j could be either -1 or -2. How is it possible to get these two different results?
Surprisingly, the result is implementation-defined in C89:
ANSI draft § 3.3.5
When integers are divided and the division is inexact, if both
operands are positive the result of the / operator is the largest
integer less than the algebraic quotient and the result of the %
operator is positive. If either operand is negative, whether the
result of the / operator is the largest integer less than the
algebraic quotient or the smallest integer greater than the algebraic
quotient is implementation-defined.
However this was changed in C99
N1256
§ 6.5.5/6
When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded*
With a footnote:
* This is often called "truncation toward zero"
To clarify, "implementation defined" means the implementation must decide which one, it doesn't mean sometimes you'll get one thing and sometimes you'll get another (unless the implementation defined it to do something really strange like that, I guess).
In C89, the result of division / can be truncated either way for negative operands. (In C99, the result will be truncated toward zero.)
The historical reason is explained in C99 Rationale:
Rationale for International Standard — Programming Languages — C §6.5.5 Multiplicative operators
In C89, division of integers involving negative operands could round upward or downward in an implementation-defined manner; the intent was to avoid incurring overhead in run-time code to check for special cases and enforce specific behavior. In Fortran, however, the result will always truncate toward zero, and the overhead seems to be acceptable to the numeric programming community. Therefore, C99 now requires similar behavior, which should facilitate porting of code from Fortran to C.
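A one-line check of the C99 rule (my example; the expected output follows from truncation toward zero together with the identity (a/b)*b + a%b == a):

#include <stdio.h>

int main(void)
{
    int i = 9, j = 7;
    /* C99 truncates toward zero: (-9)/7 is -1, and since
     * (a/b)*b + a%b must equal a, (-9)%7 is -2. */
    printf("%d %d\n", (-i) / j, (-i) % j);   /* prints: -1 -2 */
}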

Signed integers' undefined behavior and Apple Secure Coding Guide

Apple Secure Coding Guide says the following (page 27):
Also, any bits that overflow past the length of an integer variable (whether signed or unsigned) are dropped.
However, regarding signed integer overflow, the C89 standard says:
An example of undefined behavior is the behavior on integer overflow.
and
If an exception occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not representable), the behavior is undefined.
Is the Coding Guide wrong? Is there something here that I don't get? I am not convinced that the Apple Secure Coding Guide could get this wrong.
Here is a second opinion, from a static analyzer described as detecting undefined behavior:
int x;
int main(){
x = 0x7fffffff + 1;
}
The analyzer is run so:
$ frama-c -val -machdep x86_32 t.c
And it produces:
[kernel] preprocessing with "gcc -C -E -I. t.c"
[value] Analyzing a complete application starting at main
...
t.c:4:[kernel] warning: signed overflow. assert 0x7fffffff+1 ≤ 2147483647;
...
[value] Values at end of function main:
NON TERMINATING FUNCTION
This means that the program t.c contains undefined behavior, and that no execution of it ever terminates without causing undefined behavior.
Let's take this example:
1 << 32
If we assume 32-bit int, C clearly says it is undefined behavior. Period.
But any implementation can define this undefined behavior.
gcc for example says (while not very explicit in defining the behavior):
GCC does not use the latitude given in C99 only to treat certain aspects of signed '<<' as undefined, but this is subject to change.
http://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html
I don't know about clang, but I suspect that, as with gcc, the evaluation of an expression like 1 << 32 would give no surprise (that is, evaluate to 0).
But even if it is defined on implementations running in Apple operating systems, a portable program should not make use of expressions that invoke undefined behavior in the C language.
EDIT: I thought the Apple sentence dealt only with the bitwise << operator. It looks like it is more general, and in that case, for the C language, they are utterly wrong.
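Rather than leaning on any one implementation's choice, portable code can guard the shift count itself. A sketch of my own (the helper name shl32 is made up) that gives 1 << 32 a defined result:

#include <stdint.h>
#include <stdio.h>

/* Shifts an unsigned 32-bit value; counts of 32 or more are defined
 * here to yield 0 instead of invoking undefined behavior. */
static uint32_t shl32(uint32_t v, unsigned n)
{
    return n < 32 ? v << n : 0;
}

int main(void)
{
    printf("%u\n", (unsigned)shl32(1u, 32));   /* prints 0, by our definition */
}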
The two statements are not mutually incompatible.
The standard does not define what behaviour each implementation is required to provide (so different implementations can do different things and still be standard conformant).
Apple is allowed to define the behaviour of its implementation.
You as a programmer would be well advised to treat the behaviour as undefined since your code may need to be moved to other platforms where the behaviour is different, and perhaps because Apple could, in theory, change its mind in the future and still conform to the standard.
Consider the code
#include <stdint.h>

int32_t test(int mode)
{
    int32_t a = 0x12345678;
    int32_t b = mode ? a * 0x10000 : a * 0x10000LL;
    return b;
}
If this method is invoked with a mode value of zero, the code will compute the long long value 0x0000123456780000 and store it into b. The behavior of this is fully defined by the C standard: if bit 31 of the result is clear, it will lop off all but the bottom 32 bits and store the resulting (positive) integer into b. If bit 31 were set and the result were being stored to a 32-bit int rather than a variable of type int32_t, the implementation would have some latitude, but implementations are only allowed to define int32_t if they would perform such narrowing conversions according to the rules of two's-complement math.
If this method were invoked with a non-zero mode value, then the numerical computation would yield a result outside the range of the expression's type (int), and as such would cause Undefined Behavior. While the rules dictate what should happen if a calculation performed on a longer type is stored into a shorter one, they do not indicate what should happen if calculations don't fit in the type with which they are performed. A rather nasty gap in the standard (which should IMHO be plugged) occurs with:
uint16_t multiply(uint16_t x, uint16_t y)
{
return x*y;
}
For all combinations of x and y values where the Standard says anything about what this function should do, the Standard requires that it compute and return the product mod 65536. If the Standard were to mandate that for all combinations of x and y values 0-65535 this method must return the arithmetical value of (x*y) mod 65536, it would be mandating behavior with which 99.99% of standards-compliant compilers would already be in conformance. Unfortunately, on machines where int is 32 bits, the Standard presently imposes no requirements with regard to this function's behavior in cases where the arithmetical product would be larger than 2147483647. Even though any portion of the intermediate result beyond the bottom 16 bits will be ignored, the code will try to evaluate the result using a 32-bit signed integer type; the Standard imposes no requirements on what should happen if a compiler recognizes that the product will overflow that type.
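One common way to plug that gap in practice (my sketch, not part of the original answer) is to do the arithmetic in an unsigned type; unsigned overflow is defined to wrap, and the conversion back to uint16_t still yields the product mod 65536:

#include <stdint.h>

uint16_t multiply_safe(uint16_t x, uint16_t y)
{
    /* Promote to a 32-bit unsigned type so the multiply is done in
     * unsigned arithmetic; the narrowing conversion back to uint16_t
     * then reduces the product mod 65536, as intended. */
    return (uint16_t)((uint32_t)x * (uint32_t)y);
}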

Why is the output of printf("%d",1/0.0) 0?

I use Code::Blocks.
When the code is:
printf("%d",1/0);
the program cannot run; there is an error. But when I write this:
printf("%d",1/0.0);
the program runs, and the output is 0. I want to know why.
1/0 and 1/0.0 both invoke undefined behavior:
C11 §6.5.5 Multiplicative operators
The result of the / operator is the quotient from the division of the first operand by the
second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
You are invoking undefined behavior in two different forms. The first is dividing by zero; the draft standard section 6.5.5 Multiplicative operators paragraph 5 says (emphasis mine):
The result of the / operator is the quotient from the division of the first operand by the
second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
The second is using the wrong format specifier in printf: you should be using %f, since the result of 1/0.0 is a double, not an int. The C99 draft standard section 7.19.6.1 The fprintf function, which also covers printf, says in paragraph 9:
If a conversion specification is invalid, the behavior is undefined.248) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
Although if the implementation supports IEEE 754, floating-point division by zero should result in either +inf or -inf, and 0/0.0 will produce a NaN. It is important to note that relying on __STDC_IEC_559__ being defined may not work, as I note in this comment.
In theory, the result of 1/0.0 may be undefined in a C implementation, since it is undefined by the C standard. However, in the C implementation you use, the result is likely infinity. This is because most common C implementations use (largely) IEEE 754 for floating-point operations.
In this case, the cause of the output you see is that 1/0.0 has double type, but you are printing it with %d, which requires int type. You should print it with a specifier that accepts the double type, such as %g.
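Putting the two points together, a minimal corrected version (on an IEEE 754 implementation the quotient is +infinity; the exact text printed varies by C library):

#include <stdio.h>

int main(void)
{
    /* %f matches the double operand; an IEEE 754 implementation
     * evaluates 1/0.0 to +infinity, typically printed as "inf". */
    printf("%f\n", 1 / 0.0);
}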
