I use Code::Blocks.
When the code is:
printf("%d",1/0);
The program does not run; there is an error. But when I write this:
printf("%d",1/0.0);
The program runs, and the output is 0. I want to know why.
Both 1/0 and 1/0.0 are undefined behavior:
C11 §6.5.5 Multiplicative operators
The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
You are invoking undefined behavior in two different ways. The first is by dividing by zero; the draft standard, section 6.5.5 Multiplicative operators, paragraph 5, says (emphasis mine):
The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
The second is by using the wrong format specifier in printf: you should be using %f, since the result of 1/0.0 is a double, not an int. The C99 draft standard, section 7.19.6.1 The fprintf function (which also covers printf), says in paragraph 9:
If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
That said, if the implementation supports IEEE 754, floating-point division by zero should result in either +inf or -inf, and 0/0.0 will produce a NaN. It is important to note that relying on __STDC_IEC_559__ being defined may not work, as I note in this comment.
In theory, the result of 1/0.0 may be undefined in a C implementation, since it is undefined by the C standard. However, in the C implementation you use, the result is likely infinity. This is because most common C implementations use (largely) IEEE 754 for floating-point operations.
In this case, the cause of the output you see is that 1/0.0 has double type, but you are printing it with %d, which requires int type. You should print it with a specifier that accepts the double type, such as %g.
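As a minimal sketch (assuming an implementation that follows IEEE 754, where dividing a finite nonzero value by zero yields an infinity), printing the expression with a floating-point specifier shows what the division actually produced:

#include <stdio.h>

int main(void)
{
    /* 1/0.0 has type double, so it needs a floating-point conversion
       specifier such as %g or %f; on IEEE 754 implementations the
       value of the division is +infinity. */
    printf("%g\n", 1/0.0);   /* typically prints "inf" */
    printf("%f\n", 1/0.0);   /* typically prints "inf" as well */
    return 0;
}

With %d, the double is passed where an int is expected, so the 0 you saw is just one possible manifestation of undefined behavior.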
I recently decided to satisfy a long-time curiosity of mine: what happens when you divide by zero in C? I had always simply avoided this situation with logic, but was curious how a language like C, which cannot throw a catchable exception, handles this situation.
Here was my very simple test (compiled with GCC on Windows 10):
#include <stdio.h>

int main()
{
    double test1 = 1.0/0.0;
    printf( "%f", test1 );

    int test2 = 1/0;
    printf( "%d", test2 );
}
The operation done with the double types gave me a lovely little indication that the result was not a number: 1.#INF00. All's fine so far...
However, when performing a divide by zero with int types, the program, less than eloquently, "stopped working." I'm running this on Windows, so I was alerted with that lovely dialog.
I am curious about this behavior. Why is crashing the program the chosen solution for an integer division by zero? Is there really no other solution, say akin to the double way, to handle division by zero? Is this the same behavior on every compiler, or just GCC?
Dividing by zero invokes undefined behavior. Anything can happen when undefined behavior is invoked.
Quote from N1570 6.5.5 Multiplicative operators:
5 The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
Quote from N1570 3. Terms, definitions, and symbols:
3.4.3
1 undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
It's undefined behavior.
C11 §6.5.5 Multiplicative operators
The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
The behaviour of integer division by zero is undefined in C, so the output can depend on anything, including the compiler. Essentially this is because there is no specific bit pattern to represent infinity for an integral type.
Floating-point division by zero is defined, and should return the floating-point type's best representation of infinity.
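A small sketch of the practical consequence (assuming a hosted implementation with IEEE 754 floating point): an integer division has to be guarded explicitly, while the floating-point result can be tested with isinf() from <math.h>:

#include <math.h>
#include <stdio.h>

int main(void)
{
    int num = 1, den = 0;      /* runtime values, so no compile-time diagnostic */
    double x = 1.0, y = 0.0;

    /* There is no integer bit pattern for infinity, so the divisor
       must be checked before dividing. */
    if (den != 0)
        printf("%d\n", num / den);
    else
        puts("integer division by zero avoided");

    /* Under IEEE 754, a finite nonzero value divided by zero yields
       an infinity, which isinf() can detect. */
    double q = x / y;
    printf("x / y = %f, isinf: %d\n", q, isinf(q) != 0);
    return 0;
}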
I'm using QAC and I get the message below for the respective source code line. How can I cast it in order for QAC to "understand" it?
The compiler used is gcc; it doesn't warn about this issue, as it is set to "iso c99".
#define DIAGMGR_SIGNED_2_BYTES_178 ((s16)178)
sK = (s16)(sE1 / DIAGMGR_SIGNED_2_BYTES_178);
^
Result of signed division or remainder operation may be implementation defined.
A division ('/') or remainder ('%') operation is being performed in a signed integer type and the result may be implementation-defined.
Message 3103 is generated for an integer division or remainder operation in a signed type where:
One or both operands are non-constant and of signed integer type, or
Both operands are integer constant expressions, one of negative value and one of positive value.
A signed integer division or remainder operation in which one operand is positive and the other is negative may be performed in one of two ways:
The division will round towards zero and any non-zero remainder will be a negative value.
The division will round away from zero and any non-zero remainder will be a positive value.
In the ISO:C99 standard the first approach is always used. In the ISO:C90 standard either approach may be used - the result is implementation defined. For example:
/*PRQA S 3120,3198,3408,3447 ++*/
extern int r;
extern int si;

extern void foo(void)
{
    r  = -7 / 4;   /* Message 3103 */ /* Result is -1 in C99 but may be -2 in C90 */
    r  = -7 % 4;   /* Message 3103 */ /* Result is -3 in C99 but may be 1 in C90 */
    si = si / r;   /* Message 3103 */
}
You need to configure the tool so that it understands that your code is C99. In the old C90 standard, division with negative numbers could be implemented in two different ways, see this. This was a known "bug" in the C90 standard, which has been fixed since C99.
This is a standard warning for most static analysis tools, particularly if they are set to check for MISRA-C compliance. Both MISRA-C:2004 and 2012 require that the programmer is aware of this C standard "bug".
Work-arounds in C90:
If you know for certain that the operands aren't negative, simply cast them to unsigned type, or use unsigned type to begin with.
If you know that the operands might be negative:
If either operand is negative, set flags to indicate which one(s) it was.
Take the absolute values of both operands.
Perform division on absolute values.
Re-add sign to the numbers.
That's unfortunately the only portable work-around in C90. Alternatively you could add a static assertion to prevent the code from compiling on systems that truncate negative numbers downwards.
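A minimal sketch of that C90 work-around, assuming the operands are greater than INT_MIN (so abs() cannot overflow) and the divisor is non-zero; div_toward_zero and mod_toward_zero are made-up helper names:

#include <stdlib.h>   /* abs() */

/* Divide with C99-style truncation toward zero, even on a C90
   implementation that is allowed to round negative quotients downward. */
static int div_toward_zero(int num, int den)
{
    int negative = (num < 0) != (den < 0);   /* flag: is the quotient negative? */
    int q = abs(num) / abs(den);             /* divide the absolute values */
    return negative ? -q : q;                /* re-add the sign */
}

/* Matching remainder: it takes the sign of the dividend, so the
   identity (a/b)*b + a%b == a still holds with the division above. */
static int mod_toward_zero(int num, int den)
{
    int r = abs(num) % abs(den);
    return (num < 0) ? -r : r;
}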
If you are using C99, no work-arounds are needed, as it always truncates towards zero. You can then safely disable the warning.
For example, suppose you have these variables:
int i = 9;
int j = 7;
Depending on the implementation, the value of (-i)/j could be either -1 or -2. How is it possible to get these two different results?
Surprisingly, the result is implementation-defined in C89:
ANSI draft § 3.3.5
When integers are divided and the division is inexact, if both operands are positive the result of the / operator is the largest integer less than the algebraic quotient and the result of the % operator is positive. If either operand is negative, whether the result of the / operator is the largest integer less than the algebraic quotient or the smallest integer greater than the algebraic quotient is implementation-defined.
However, this was changed in C99:
N1256
§ 6.5.5/6
When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded*
With a footnote:
* This is often called "truncation toward zero"
To clarify, "implementation defined" means the implementation must decide which one; it doesn't mean sometimes you'll get one thing and sometimes you'll get another (unless the implementation defined it to do something really strange like that, I guess).
In C89, the result of division / can be truncated either way for negative operands. (In C99, the result will be truncated toward zero.)
The historical reason is explained in C99 Rationale:
Rationale for International Standard — Programming Languages — C §6.5.5 Multiplicative operators
In C89, division of integers involving negative operands could round upward or downward in an implementation-defined manner; the intent was to avoid incurring overhead in run-time code to check for special cases and enforce specific behavior. In Fortran, however, the result will always truncate toward zero, and the overhead seems to be acceptable to the numeric programming community. Therefore, C99 now requires similar behavior, which should facilitate porting of code from Fortran to C.
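A quick check of the C99 rule, using the i = 9, j = 7 values from the question above; on a C99 (or later) compiler the asserts hold, while a C89 compiler would be allowed to produce -2 and 5 instead:

#include <assert.h>
#include <stdio.h>

int main(void)
{
    int i = 9;
    int j = 7;

    /* C99: the quotient truncates toward zero. */
    assert((-i) / j == -1);
    assert((-i) % j == -2);

    /* The identity (a/b)*b + a%b == a holds under both C89 and C99. */
    assert(((-i) / j) * j + (-i) % j == -i);

    printf("%d %d\n", (-i) / j, (-i) % j);   /* prints "-1 -2" */
    return 0;
}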
If all floats are represented as x = (-1)^s * 2^e * 1.m, there is no way to store zero without support for special cases.
No, all conforming C implementations must support a floating-point value of 0.0.
The floating-point model is described in section 5.2.4.2.2 of the C standard (the link is to a recent draft). That model does not make the leading 1 in the significand (sometimes called the mantissa) implicit, so it has no problem representing 0.0.
Most implementations of binary floating-point don't store the leading 1, and in fact the formula you cited in the question:
x = (-1)^s * 2^e * 1.m
is typically correct (though the way e is stored can vary).
In such implementations, including IEEE, a special-case bit pattern, typically all-bits-zero, is used to represent 0.0.
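A small sketch (assuming an IEEE 754 implementation, where +0.0 uses the all-bits-zero encoding) that inspects the object representation of 0.0:

#include <stdio.h>
#include <string.h>

int main(void)
{
    double zero = 0.0;
    unsigned char bytes[sizeof zero];
    unsigned char all_zero[sizeof zero] = {0};

    memcpy(bytes, &zero, sizeof zero);

    /* On IEEE 754 implementations, +0.0 is stored with sign 0,
       exponent 0, and significand 0: a special-case bit pattern. */
    printf("+0.0 is all-bits-zero: %s\n",
           memcmp(bytes, all_zero, sizeof bytes) == 0 ? "yes" : "no");
    return 0;
}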
Following up on the discussion in the comments, tmyklebu argues that not all numbers defined by the floating-point model in 5.2.4.2.2 are required to be representable. I disagree; if not all such numbers are required to be representable, then the model is nearly useless. But even leaving that argument aside, there is an explicit requirement that 0.0 must be representable. N1570 6.7.9 paragraph 10:
If an object that has static or thread storage duration is not
initialized explicitly, then:
...
if it has arithmetic type, it is initialized to (positive or unsigned) zero;
...
This is a very long-standing requirement. A C reference from 1975 (3 years before the publication of K&R1) says:
The initial value of any externally-defined object not explicitly initialized is guaranteed to be 0.
which implies that there must be a representable 0 value. K&R1 (published in 1978) says, on page 198:
Static and external variables which are not initialized are guaranteed to start off as 0; automatic and register variables which are not initialized are guaranteed to start off as garbage.
Interestingly, the 1990 ISO C standard (equivalent to the 1989 ANSI C standard) is slightly less explicit than its predecessors and successors. In 6.5.7, it says:
If an object that has static storage duration is not initialized explicitly, it is initialized implicitly as if every member that has arithmetic type were assigned 0 and every member that has pointer type were assigned a null pointer constant.
If a floating-point type were not required to have an exact representation for 0.0, then the "assigned 0" phrase would imply a conversion from the int value 0 to the floating-point type, yielding a small value close to 0.0. Still, C90 has the same floating-point model as C99 and C11 (but with no mention of subnormal or unnormalized values), and my argument above about model numbers still applies. Furthermore, the C90 standard was officially superseded by C99, which in turn was superseded by C11.
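To illustrate the initialization argument, a short sketch: an object with static storage duration and no explicit initializer must start out as an exact (positive or unsigned) zero, which requires the floating-point type to be able to represent 0.0:

#include <stdio.h>

static double d;   /* static storage duration, no explicit initializer */

int main(void)
{
    /* 6.7.9p10: d is initialized to (positive or unsigned) zero, so on a
       conforming implementation the comparison below must be true. */
    printf("d = %g, d == 0.0 is %d\n", d, d == 0.0);
    return 0;
}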
After searching for a while I found this.
ISO/IEC 9899:201x section 6.2.5, Paragraph 13
Each complex type has the same representation and alignment requirements as an array type containing exactly two elements of the corresponding real type; the first element is equal to the real part, and the second element to the imaginary part, of the complex number.
section 6.3.1.7, Paragraph 1
When a value of real type is converted to a complex type, the real part of the complex result value is determined by the rules of conversion to the corresponding real type and the imaginary part of the complex result value is a positive zero or an unsigned zero.
So, if I understand this right, any implementation that supports C99 (the first C standard with _Complex types) must support a floating-point value of 0.0.
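A short sketch of that conversion rule; __STDC_NO_COMPLEX__ is the C11 feature-test macro for implementations that omit complex types, used here only as a guard:

#include <stdio.h>

#ifndef __STDC_NO_COMPLEX__
#include <complex.h>
#endif

int main(void)
{
#ifndef __STDC_NO_COMPLEX__
    double complex z = 1.5;   /* a real value converted to a complex type */

    /* 6.3.1.7p1: the imaginary part of the result is a positive or
       unsigned zero, so cimag(z) prints as 0. */
    printf("real = %g, imag = %g\n", creal(z), cimag(z));
#else
    puts("complex types not supported by this implementation");
#endif
    return 0;
}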
EDIT
Keith Thompson pointed out that complex types are optional in C99, so this argument is pointless.
I believe the following floating-point system is an example of a conforming floating-point arithmetic without a representation for zero:
A float is a 48-bit number with one sign bit, 15 exponent bits, and 32 significand bits. Every choice of sign bit, exponent bits, and significand bits corresponds to a normal floating-point number with an implied leading 1 bit.
Going through the constraints in section 5.2.4.2.2 of the draft C standard Keith Thompson linked:
This floating-point system plainly conforms to paragraphs 1 and 2 of 5.2.4.2.2 in the draft standard.
We only represent normalised numbers; paragraph 3 merely permits us to go farther.
Paragraph 4 is tricky; it says that zero and "values that are not floating-point numbers" may be signed or unsigned. But paragraph 3 didn't force us to have any values that aren't floating-point numbers, so I can't imagine interpreting paragraph 4 as requiring there to be a zero.
The range of representable values in paragraph 5 is apparently -0x1.ffffffffp+16383 to 0x1.ffffffffp+16383.
Paragraph 6 states that +, -, *, / and the math library have implementation-defined accuracy. Still OK.
Paragraph 7 doesn't really constrain this implementation as long as we can find appropriate values for all the constants.
We can set FLT_ROUNDS to 0; this way, I don't even have to specify what happens when addition or subtraction overflows.
FLT_EVAL_METHOD shall be 0.
We don't have subnormals, so FLT_HAS_SUBNORM shall be zero.
FLT_RADIX shall be 2, FLT_MANT_DIG shall be 32, FLT_DECIMAL_DIG shall be 10, FLT_DIG shall be 9, FLT_MIN_EXP shall be -16383, FLT_MIN_10_EXP shall be -4933, FLT_MAX_EXP shall be 16384, and FLT_MAX_10_EXP shall be 4933.
You can work out FLT_MAX, FLT_EPSILON, FLT_MIN, and FLT_TRUE_MIN.
I seem to remember that ANSI C didn't specify what value should be returned when either operand of a modulo operator is negative (just that it should be consistent). Did it get specified later, or was it always specified and I am remembering incorrectly?
C89: not totally (§3.3.5/6). The result of -5 % 10 can be either -5 or 5, because -5 / 10 can return either 0 or -1 (% is defined in terms of a linear equation involving /, * and +):
When integers are divided and the division is inexact, if both operands are positive the result of the / operator is the largest integer less than the algebraic quotient and the result of the % operator is positive. If either operand is negative, whether the result of the / operator is the largest integer less than the algebraic quotient or the smallest integer greater than the algebraic quotient is implementation-defined, as is the sign of the result of the % operator. If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a.
C99: yes (§6.5.5/6); the result must be -5:
When integers are divided, the result of the / operator is the algebraic quotient with any fractional part discarded.88) If the quotient a/b is representable, the expression (a/b)*b + a%b shall equal a.
88) This is often called "truncation toward zero".
Similarly, in C++98 the result is implementation-defined (§5.6/4), following C89's definition, but the standard mentions that the round-towards-zero rule is preferred,
... If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined74).
74) According to work underway toward the revision of ISO C, the preferred algorithm for integer division follows the rules defined in the ISO Fortran standard, ISO/IEC 1539:1991, in which the quotient is always rounded toward zero.
and indeed it becomes the standard rule in C++0x (§5.6/4):
... For integral operands the / operator yields the algebraic quotient with any fractional part discarded;82 ...
82) This is often called truncation towards zero.
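A hedged demonstration of the difference; with a C99 (or later) compiler the output below is fixed, while a C89 compiler may legitimately print the other pair of values:

#include <stdio.h>

int main(void)
{
    /* C99: truncation toward zero, so -5 / 10 == 0 and -5 % 10 == -5.
       C89: the pair 0, -5 or the pair -1, 5 are both permitted. */
    printf("%d %d\n", -5 / 10, -5 % 10);

    /* In either case the identity (a/b)*b + a%b == a must hold. */
    int a = -5, b = 10;
    printf("identity holds: %d\n", (a / b) * b + a % b == a);
    return 0;
}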
To add a little detail to KennyTM's answer: if the C standards call something implementation-defined, then the implementation is required to document the choice it makes. Usually this would be in the compiler or library documentation (man page, help manual, printed docs, CD booklet :-)
Any implementation claiming conformance to C89 or later must provide this somewhere.
Try looking for such a document. In the case of gcc for example, this is in the gcc-info:
4 C Implementation-defined behavior
A conforming implementation of ISO C is required to document its
choice of behavior in each of the areas that are designated
"implementation defined". The following lists all such areas, along
with the section numbers from the ISO/IEC 9899:1990 and ISO/IEC
9899:1999 standards. Some areas are only implementation-defined in one
version of the standard.
Some choices depend on the externally determined ABI for the platform
(including standard character encodings) which GCC follows; these are
listed as "determined by ABI" below. *Note Binary Compatibility:
Compatibility, and `http://gcc.gnu.org/readings.html'. Some choices
are documented in the preprocessor manual. *Note
Implementation-defined behavior: (cpp)Implementation-defined behavior.
Some choices are made by the library and operating system (or other
environment when compiling for a freestanding environment); refer to
their documentation for details.
Menu:
Translation implementation::
Environment implementation::
Identifiers implementation::
Characters implementation::
Integers implementation::
Floating point implementation::
Arrays and pointers implementation::
Hints implementation::
Structures unions enumerations and bit-fields implementation::
Qualifiers implementation::
Declarators implementation::
Statements implementation::
Preprocessing directives implementation::
Library functions implementation::
Architecture implementation::
Locale-specific behavior implementation::