C bitfield with assigned value 1 shows -1

I was playing with bit-fields and got stuck on something strange:
#include <stdio.h>

struct lol {
    int a;
    int b:1,
        c:1,
        d:1,
        e:1;
    char f;
};

int main(void) {
    struct lol l = {0};
    l.a = 123;
    l.c = 1; // -1 ???
    l.f = 'A';
    printf("%d %d %d %d %d %c\n", l.a, l.b, l.c, l.d, l.e, l.f);
    return 0;
}
The output is:
123 0 -1 0 0 A
Somehow the value of l.c is -1. What is the reason?
Sorry if obvious.

Use unsigned bitfields if you don't want sign-extension.
What you're getting is your 1 bit being interpreted as the sign bit in a two's complement representation. In two's complement, the sign-bit is the highest bit and it's interpreted as -(2^(width_of_the_number-1)), in your case -(2^(1-1)) == -(2^0) == -1. Normally all other bits offset this (because they're interpreted as positive) but a 1-bit number doesn't and can't have other bits, so you just get -1.
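For instance, here is a minimal sketch of that fix, taking the struct from the question and making the one-bit fields unsigned (so each field holds 0 or 1 instead of 0 or -1):

#include <stdio.h>

struct lol {
    int a;
    unsigned int b:1,  /* unsigned: each 1-bit field now ranges 0..1 */
                 c:1,
                 d:1,
                 e:1;
    char f;
};

int main(void) {
    struct lol l = {0};
    l.a = 123;
    l.c = 1;           /* reads back as 1 */
    l.f = 'A';
    printf("%d %d %d %d %d %c\n", l.a, l.b, l.c, l.d, l.e, l.f); /* 123 0 1 0 0 A */
    return 0;
}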
Take for example 0b10000000 as an int8_t in two's complement.
(For the record, 0b10000000 == 0x80 && 0x80 == (1<<7).) The set bit is the highest bit, so it's interpreted as -(2^7) (== -128),
and there are no positive bits to offset it, so you get printf("%d\n", (int8_t)0x80); /*-128*/
Now if you set all bits on, you get -1, because -128 + (128-1) == -1. This (all bits on == -1) holds true for any width interpreted as two's complement, even for width 1, where you get -1 + (1-1) == -1.
When such a signed integer gets extended into a wider width, it undergoes so-called sign extension.
Sign extension means that the highest bit gets copied into all the newly added higher bits.
If the highest bit is 0, then it's trivial to see that sign extension doesn't change the value (take for example 0x01 extended into 0x00000001).
When the highest bit is 1 as in (int8_t)0xff (all 8 bits 1), then sign extension copies the sign bit into all the new bits: ((int32_t)(int8_t)0xff == (int32_t)0xffffffff). ((int32_t)(int8_t)0x80 == (int32_t)0xffffff80) might be a better example as it more clearly shows the 1 bits are added at the high end (try _Static_assert-ing either of these).
This doesn't change the value either as long as you assume two's complement, because if you start at:
-(2^n) (value of sign bit) + X (all the positive bits) //^ means exponentiation here
and add one more 1-bit to the highest position, then you get:
-(2^(n+1)) + 2^(n) + X
which is
2*-(2^(n)) + 2^(n) + X == -(2^n) + X //same as original
//inductively, you can add any number of 1 bits
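As suggested a couple of paragraphs above, the two 8-to-32-bit cases can be checked with _Static_assert. A sketch, assuming the usual wrap-around behaviour of the out-of-range conversions (they are implementation-defined before C23, but behave this way on common two's complement compilers):

#include <stdint.h>

_Static_assert((int32_t)(int8_t)0xff == (int32_t)0xffffffff, "sign extension of all-ones");
_Static_assert((int32_t)(int8_t)0x80 == (int32_t)0xffffff80, "sign extension of the sign bit");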
Sign extension normally happens when you width-extend a native signed integer into a native wider width (signed or unsigned), either with casts or implicit conversions.
For the native widths, platforms usually have an instruction for it.
Example:
int32_t signExtend8(int8_t X) { return X; }
Example's disassembly on x86_64:
signExtend8:
        movsx eax, dil //the sx stands for Sign-eXtending
        ret
If you want to make it work for nonstandard widths, you can usually utilize the fact that signed right-shifts normally copy the sign bit into the bit positions vacated by the shift (what signed right-shifts do is really implementation-defined),
and so you can unsigned-left-shift into the sign bit and then shift back to get sign extension artificially for a non-native width such as 2:
#include <stdint.h>
#define MC_signExtendIn32(X,Width) ((int32_t)((uint32_t)(X)<<(32-(Width)))>>(32-(Width)))
_Static_assert( MC_signExtendIn32(3,2 /*width 2*/)==-1,"");
int32_t signExtend2(int8_t X) { return MC_signExtendIn32(X,2); }
Disassembly (x86_64):
signExtend2:
        mov eax, edi
        sal eax, 30
        sar eax, 30
        ret
Signed bitfields essentially make the compiler generate (hidden) macros like the above for you:
struct bits2 { int bits2:2; };
int32_t signExtend2_via_bitfield(struct bits2 X) { return X.bits2; }
Disassembly (x86_64) on clang:
signExtend2_via_bitfield:               # @signExtend2_via_bitfield
        mov eax, edi
        shl eax, 30
        sar eax, 30
        ret
Example code on godbolt: https://godbolt.org/z/qxd5o8 .

Bit-fields are very poorly standardized and they are generally not guaranteed to behave predictably. The standard just vaguely states (6.7.2.1/10):
A bit-field is interpreted as having a signed or unsigned integer type consisting of the
specified number of bits.125)
Where the informative note 125) says:
125) As specified in 6.7.2 above, if the actual type specifier used is int or a typedef-name defined as int,
then it is implementation-defined whether the bit-field is signed or unsigned.
So we can't know if int b:1 gives a signed or an unsigned type; it's up to the compiler. Your compiler apparently decided that it would be a great idea to have signed bits. So it treats your 1 bit as a two's complement 1-bit number, where binary 1 is decimal -1 and zero is zero.
Furthermore, we can't know where b in your code ends up in memory; it could be anywhere, and it also depends on endianness. What we do know is that you save absolutely no memory by using a bit-field here, since at least 16 bits for an int will get allocated anyway.
General good advice:
Never use bit-fields for any purpose.
Use the bit-wise operators << >> | & ^ ~ and named bit-masks instead, for 100% portable and well-defined code (see the sketch after this list).
Use the stdint.h types, or at least unsigned ones, whenever dealing with raw binary.
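For example, a minimal sketch of the bit-mask alternative (the flag names here are made up for illustration):

#include <stdio.h>
#include <stdint.h>

#define FLAG_B (1u << 0)   /* named masks instead of bit-fields */
#define FLAG_C (1u << 1)
#define FLAG_D (1u << 2)
#define FLAG_E (1u << 3)

int main(void) {
    uint8_t flags = 0;
    flags |= FLAG_C;                        /* set a bit   */
    flags &= (uint8_t)~FLAG_D;              /* clear a bit */
    printf("%d\n", (flags & FLAG_C) != 0);  /* test a bit: prints 1 */
    return 0;
}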

You are using a signed bit-field, and since the single bit you set is also the sign bit, it is interpreted as indicating a negative value, so you get -1. As other answers suggest, use the unsigned keyword to remove the possibility of representing negative integers.

Related

can't understand bit fields in C

I started reading about bit fields in C about 3-4 hours ago and I can't understand how they work. For example, I can't understand why the following program has the output -1, 2, -3:
#include <stdio.h>

struct REGISTER {
    int bit1 : 1;
    int      : 2;
    int bit3 : 4;
    int bit4 : 4;
};

int main(void) {
    struct REGISTER bit = { 1, 2, 13 };
    printf("%d, %d, %d\n", bit.bit1, bit.bit3, bit.bit4);
    return 0;
}
Can someone give me an explanation? I tend to think that if I use unsigned in the struct then the output would be positive. But I don't know where that -3 comes from.
Your compiler considers the type int of a bit field as the type signed int.
Consider the binary representation of initializers (it is enough to consider one byte)
1 -> 0b00000001
2 -> 0b00000010
13 -> 0b00001101
So the first bit-field, having width 1, gets the bit 1. For an integer with one bit, this bit is the sign bit. So such a bit-field can represent two values: 0 and -1 (in the 2's complement representation).
The second initialized bit-field has the width equal to 4. So it stores the bit representation 0010 that is equal to 2.
The third initialized bit-field also has the width equal to 4. So its stored bit combination is equal to 1101. The most significant bit is the sign bit and it is set. So the bit-field contains a negative number. This number is equal to -3.
1101 ( = -3 )
+
0011 ( = 3 )
====
0000 ( = 0 )
It is implementation-defined whether a plain int bitfield is signed or unsigned, hence relying on it is effectively an error in any program where you care about the value - if you care about the value, qualify the bit-field explicitly as signed or unsigned.
Now, your compiler considers bitfields without specified signedness as signed. I.e. int bit4: 4 tells that that bit-field is 4 bits wide and signed.
13 cannot be represented in a signed 4-bit bitfield as the maximum value is 7, no matter the representation of negative numbers - 2's complement, 1's complement, sign-and-magnitude. An implementation-specified conversion will now occur: in your case the bit representation 1101 is stored as-is in the 2's complement signed bitfield, and it is considered as the 2's complement negative value -3.
The same happens for the 1-bit signed bitfield: the one bit is the sign bit, hence there are only 2 possible values: 0 and -1 on two's complement systems. On a one's complement or sign-and-magnitude system, a one-bit bitfield does not make any sense at all because it can only store 0 or a trap value.
If you want it to be able to store value 13, you will have to use at least 5 bits or use unsigned int: 4.
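For example, a sketch of the first suggestion: widening bit4 to 5 bits (range -16..15 as a signed field) lets it hold 13. The other fields behave as before, so bit1 typically still reads back as -1 on compilers that treat plain int bit-fields as signed:

#include <stdio.h>

struct REGISTER {
    int bit1 : 1;
    int      : 2;
    int bit3 : 4;
    int bit4 : 5;   /* 5 bits: -16..15, so 13 fits */
};

int main(void) {
    struct REGISTER bit = { 1, 2, 13 };
    printf("%d, %d, %d\n", bit.bit1, bit.bit3, bit.bit4);  /* typically -1, 2, 13 */
    return 0;
}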
Please don't use signed int in this operation; here it leads to negative results. If we use unsigned int instead, the output does not come out negative.
What happens behind the scenes is that the value 13 is stored in a 4-bit signed integer, which gives the bit pattern 1101. The MSB is a 1, so it's a negative number, and you need to calculate the 2's complement of the binary number to get its magnitude, which is what is done internally. By calculating the 2's complement you arrive at 0011, which is the decimal number 3, and since it was a negative number you get -3.
#include <stdio.h>

struct REGISTER {
    unsigned int bit1 : 1;
    unsigned int      : 2;
    unsigned int bit3 : 4;
    unsigned int bit4 : 4;
};

int main(void) {
    struct REGISTER bit = { 1, 2, 13 };
    printf("%d, %d, %d\n", bit.bit1, bit.bit3, bit.bit4);
    return 0;
}
Here the output will be exactly 1,2,13
Thanks
It's pretty easy, you're initializing the bitfields as follows:
1 as bit length 1. That means the MSB is 1, which since the type is int is the sign bit. So the resulting value is -0-1 = -1.
2 as bit length 4. MSB (sign bit) is 0, so the result is positive, ie 2.
13 as bit length 4. In binary that is 1101, so the MSB is 1 (negative), so the resulting value is -2-1 = -3.
You can read more about two's complement (the format in which numbers are stored on Intel architectures) here. The short version is that negative numbers have MSB=1, and to calculate their value you take the negative of their inverted (NOT) representation, without the sign bit, minus one.
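That recipe can be written out as a small helper. This is only a sketch; tc_value is a hypothetical name, and it assumes the bit pattern fits in an unsigned int:

#include <stdio.h>

/* value of an n-bit two's complement pattern: when the sign bit is set,
   take the negative of the inverted value bits, minus one */
static int tc_value(unsigned bits, unsigned n) {
    unsigned value_mask = (1u << (n - 1)) - 1;   /* the bits below the sign bit */
    if (bits & (1u << (n - 1)))                  /* sign bit set? */
        return -(int)(~bits & value_mask) - 1;
    return (int)(bits & value_mask);
}

int main(void) {
    printf("%d\n", tc_value(0x1, 1));   /* -1 */
    printf("%d\n", tc_value(0x2, 4));   /*  2 */
    printf("%d\n", tc_value(0xD, 4));   /* -3 */
    return 0;
}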

Store signed 32-bit in unsigned 64-bit int

Basically, what I want is to "store" a signed 32-bit int inside (in the 32 rightmost bits) an unsigned 64-bit int - since I want to use the leftmost 32 bits for other purposes.
What I'm doing right now is a simple cast and mask:
#define packInt32(X) ((uint64_t)X | INT_MASK)
But this approach has an obvious issue: If X is a positive int (the first bit is not set), everything goes fine. If it's negative, it becomes messy.
The question is:
How to achieve the above, also supporting negative numbers, in the fastest and most-efficient way?
The "mess" you mention happens because you cast a small signed type to a large unsigned type.
During this conversion the width is adjusted first, and sign extension is applied. This is what causes your trouble.
You can simply cast the (signed) integer to an unsigned type of same size first. Then casting to 64 bit will not trigger sign extension:
#define packInt32(X) ((uint64_t)(uint32_t)(X) | INT_MASK)
You need to mask out any bits besides the low order 32 bits. You can do that with a bitwise AND:
#define packInt32(X) (((uint64_t)(X) & 0xFFFFFFFF) | INT_MASK)
A negative 32-bit integer will get sign-extended into 64-bits.
#include <stdint.h>
uint64_t movsx(int32_t X) { return X; }
movsx on x86-64:
movsx:
        movsx rax, edi
        ret
Masking out the higher 32 bits removes the sign extension, causing the value to be just zero-extended:
#include <stdint.h>
uint64_t mov(int32_t X) { return (uint64_t)X & 0xFFFFFFFF; }
//or uint64_t mov(int32_t X) { return (uint64_t)(uint32_t)X; }
mov on x86-64:
mov:
        mov eax, edi
        ret
https://gcc.godbolt.org/z/fihCmt
Neither method loses any info from the lower 32-bits, so either method is a valid way of storing a 32-bit integer into a 64-bit one.
The x86-64 code for a plain mov is one byte shorter (3 bytes vs 4). I don't think there should be much of a speed difference, but if there is one, I'd expect the plain mov to win by a tiny bit.
One option is to untangle the sign-extension and the upper value when it is read back, but that can be messy.
Another option is to construct a union with a bit-packed word. This then defers the problem to the compiler to optimise:
union {
    int64_t merged;
    struct {
        int64_t field1:32,
                field2:32;
    };
};
A third option is to deal with the sign bit yourself. Store a 31-bit absolute value and a 1-bit sign. Not super-efficient, but more likely to be legal if you should ever encounter a non-2's-complement processor where negative signed values can't be safely cast to unsigned. They are as rare as hen's teeth, so I wouldn't worry about this myself (see the sketch below).
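A sketch of that third option, keeping a 31-bit magnitude plus a sign bit in the low 32 bits (the names and layout are just assumptions for illustration; note that INT32_MIN has no 31-bit magnitude and would need special handling):

#include <stdint.h>

/* pack |x| into bits 0..30 and the sign into bit 31, leaving bits 32..63 free */
uint64_t packSignMag(int32_t x) {
    uint64_t sign = (x < 0) ? 1u : 0u;
    uint64_t mag  = (uint64_t)(x < 0 ? -(int64_t)x : (int64_t)x);
    return (sign << 31) | mag;
}

int32_t unpackSignMag(uint64_t v) {
    int64_t mag = (int64_t)(v & 0x7FFFFFFFu);
    return (int32_t)(((v >> 31) & 1u) ? -mag : mag);
}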
Assuming that the only operation on the 64-bit value will be to convert it back to 32 bits (and potentially store or display it), there is no need to apply a mask. The compiler will sign-extend the 32-bit value when casting it to 64 bits, and will pick the lowest 32 bits when casting the 64-bit value back to 32 bits.
#define packInt32(X) ((uint64_t)(X))
#define unpackInt32(X) ((int)(X))
Or better, using (inline) functions:
inline uint64_t packInt32(int x) { return ((uint64_t) x) ; }
inline int unpackInt32(uint64_t x) { return ((int) x) ; }
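A quick round-trip check of the cast-twice/zero-extend idea (a sketch; the upper-half payload is an arbitrary example, and the question's INT_MASK is left out). Converting the low 32 bits back to int32_t is implementation-defined before C23, but wraps as expected on common compilers:

#include <stdint.h>
#include <stdio.h>

static inline uint64_t packI32(int32_t x)    { return (uint64_t)(uint32_t)x; }  /* zero-extend */
static inline int32_t  unpackI32(uint64_t v) { return (int32_t)(uint32_t)v; }   /* low 32 bits */

int main(void) {
    uint64_t word = packI32(-123) | ((uint64_t)0xABCD << 32);  /* arbitrary upper payload */
    printf("%d\n", unpackI32(word));                           /* prints -123 */
    return 0;
}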

How does assigning a negative number to an unsigned int work in assembly?

I learned about 2's complement and unsigned and signed int, so I decided to test my knowledge. As far as I know, a negative number is stored in 2's complement form so that addition and subtraction do not need different algorithms and the circuitry stays simple.
Now if I write:
#include <stdio.h>

int main()
{
    int a = -1;
    unsigned int b = -1;
    printf("%d %u \n %d %u", a, a, b, b);
}
The output comes out to be -1 4294967295 -1 4294967295. Now, I looked at the bit pattern and various things, and then I realized that -1 in 2's complement is 11111111 11111111 11111111 11111111, so when I interpret it using %d it gives -1, but when I interpret it using %u it is treated as a positive number and so gives 4294967295. I checked the assembly of the code, which is:
.LC0:
        .string "%d %u \n %d %u"
main:
        push rbp
        mov rbp, rsp
        sub rsp, 16
        mov DWORD PTR [rbp-4], -1
        mov DWORD PTR [rbp-8], -1
        mov esi, DWORD PTR [rbp-8]
        mov ecx, DWORD PTR [rbp-8]
        mov edx, DWORD PTR [rbp-4]
        mov eax, DWORD PTR [rbp-4]
        mov r8d, esi
        mov esi, eax
        mov edi, OFFSET FLAT:.LC0
        mov eax, 0
        call printf
        mov eax, 0
        leave
        ret
Now here -1 is moved into the register both times, in the unsigned and the signed case. What I want to know is: if reinterpretation is all that matters, then why do we have the two types unsigned and signed at all? Is it only the printf format string (%d vs %u) that matters?
Further, what really happens when I assign a negative number to an unsigned integer? (I learned that the initializer converts this value from int to unsigned int.) But in the assembly code I did not see any such conversion, so what really happens?
And how does the machine know when it has to do 2's complement and when not? Does it see the negative sign and perform 2's complement?
I have read almost every question and answer you could think this question is a duplicate of, but I could not find a satisfactory solution.
Both signed and unsigned values are just pieces of memory; it is the operations on them that determine how they behave.
It doesn't make any difference when adding or subtracting, because due to 2's complement the operations are exactly the same.
It matters when we compare two numbers: -1 is lower than 0 while 4294967295 obviously isn't.
About conversion: for the same size it simply takes the variable's content and moves it to the other type, so 4294967295 becomes -1. For a bigger size it is first sign-extended and then the content is moved.
How does the machine know? By the instruction we use. Machines either have different instructions for comparing signed and unsigned numbers or they provide different flags for it (x86 has Carry for unsigned overflow and Overflow for signed overflow).
Additionally, note that C is relaxed about how signed numbers are stored; they don't have to be 2's complement. But nowadays, all common architectures store signed numbers like this.
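To illustrate the comparison point, a small sketch where the same all-ones bit pattern compares differently depending on signedness (assuming a 32-bit int):

#include <stdio.h>

int main(void) {
    int a = -1;              /* bit pattern 0xFFFFFFFF with 32-bit int     */
    unsigned int b = -1;     /* converts to UINT_MAX, the same bit pattern */
    printf("%d\n", a < 1);   /* 1: signed compare, -1 is below 1           */
    printf("%d\n", b < 1u);  /* 0: unsigned compare, 4294967295 is not     */
    return 0;
}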
There are a few differences between signed and unsigned types:
The behaviors of the operators <, <=, >, >=, /, %, and >> are all different when dealing with signed and unsigned numbers.
Compilers are not required to behave predictably if any computation on a signed value exceeds the range of its type. Even when using operators which would behave identically with signed and unsigned values in all defined cases, some compilers will behave in "interesting" fashion. For example, a compiler given x+1 > y could replace it with x>=y if x is signed, but not if x is unsigned.
As a more interesting example, on a system where "short" is 16 bits and "int" is 32 bits, a compiler given the function:
unsigned mul(unsigned short x, unsigned short y) { return x*y; }
might assume that no situation could ever arise where the product would exceed 2147483647. For example, if it saw the function invoked as unsigned x = mul(y,65535); and y was an unsigned short, it may omit code elsewhere that would only be relevant if y were greater than 32768.
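A common defensive rewrite (a sketch) is to convert to unsigned before multiplying, so the arithmetic is done in unsigned int and is well defined even when the mathematical product exceeds INT_MAX:

unsigned mul(unsigned short x, unsigned short y) {
    return (unsigned)x * y;   /* wraps modulo UINT_MAX+1 instead of overflowing signed int */
}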
It seems you have missed the facts that, firstly, 0101 = 5 in both signed and unsigned integer values, and that, secondly, you assigned a negative number to an unsigned int. A negative value assigned to an unsigned int does not stay negative: since unsigned ints can't represent values below 0, the value is converted and wraps around modulo 2^N.
You can understand it better when you try to assign a negative value to a larger-sized unsigned integer. The compiler generates assembly code that performs sign extension when transferring a small negative value to a larger-sized unsigned integer.
see this blog post for assembly level explanation.
Choice of signed integer representation is left to the platform. The representation applies to both negative and non-negative values - for example, if 1101 (binary, -5) is the two's complement of 0101 (binary, 5), then 0101 (5) is also the two's complement of 1101 (-5).
The platform may or may not provide separate instructions for operations on signed and unsigned integers. For example, x86 provides different multiplication and division instructions for signed (idiv and imul) and unsigned (div and mul) integers, but uses the same addition (add) and subtraction (sub) instructions for both.
Similarly, x86 provides a single comparison (cmp) instruction for both signed and unsigned integers.
Arithmetic and comparison operations will set one or more status register flags (carry, overflow, zero, etc.). These can be used differently when dealing with words that are supposed to represent signed vs. unsigned values.
As far as printf is concerned, you're absolutely correct that the conversion specifier determines whether the bit pattern 0xFFFFFFFF is displayed as -1 or 4294967295, although remember that if the type of the argument does not match up with what the conversion specifier expects, then the behavior is undefined. Using %u to display a negative signed int may or may not give you the expected equivalent unsigned value.

Extracting the sign bit with shift

Is it always defined behavior to extract the sign of a 32 bit integer this way:
#include <stdint.h>

int get_sign(int32_t x) {
    return (x & 0x80000000) >> 31;
}
Do I always get a result of 0 or 1?
No, it is incorrect to do this because right shifting a signed integer with a negative value is implementation-defined, as specified in the C Standard:
6.5.7 Bitwise shift operators
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a nonnegative value, the value of the result is the integral part of the quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined.
You should cast x as (uint32_t) before masking and shifting.
EDIT: Wrong answer! I shall keep this answer here as an example of good-looking, intuitive, but incorrect reasoning. As explained in the other answers, there is no right shift of a negative value in the code posted. The type of x & 0x80000000 is one of the signed or unsigned integer types depending on the implementation characteristics, but its value is always positive, either 0 or 2147483648. Right shifting this value is not implementation-defined; the result is always either 0 or 1. Whether the result is the value of the sign bit is less obvious: it is the value of the sign bit except for some very contorted corner cases on hybrid architectures quite unlikely to exist and probably not standard-conforming anyway.
Since the question assumes that fixed-width types are available, a negative zero doesn't exist (1), so the only correct way of extracting the sign bit is to simply check whether the value is negative:
_Bool Sign( const int32_t a )
{
    return a < 0 ;
}
(1) Fixed-width types require two's complement representation, which doesn't have a negative zero.
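A quick sanity check (a sketch) comparing the shift-based get_sign from the question with the comparison-based Sign above:

#include <stdint.h>
#include <stdio.h>

static int   get_sign(int32_t x)   { return (x & 0x80000000) >> 31; }
static _Bool Sign(const int32_t a) { return a < 0; }

int main(void) {
    printf("%d %d\n", get_sign(-5), (int)Sign(-5));  /* 1 1 */
    printf("%d %d\n", get_sign( 5), (int)Sign( 5));  /* 0 0 */
    return 0;
}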
Yes, it is correct on 1s and 2s complement architectures, but for subtle reasons:
for the overwhelmingly common hardware where int is the same type as int32_t and unsigned the same as uint32_t, the constant literal 0x80000000 has type unsigned int. The left operand of the & operation is converted to unsigned int and the result of the & has the same type. The right shift is applied to an unsigned int, the value is either 0 or 1, no implementation-defined behavior.
On other platforms, 0x80000000 may have a different type and the behavior might be implementation defined:
0x80000000 can be of type int, if the int type has more than 31 value bits. In this case, x is promoted to int, and its value is unchanged.
If int uses 1s complement or 2s complement representation, the sign bit is replicated into the more significant bits. The mask operation evaluates to an int with value 0 or 0x80000000. Right shifting it by 31 positions evaluates to 0 and 1 respectively, no implementation-defined behavior either.
Conversely, if int uses sign/magnitude representation, preserving the value of x will effectively reset its 31st bit, moving the sign bit beyond the value bits. The mask operation will evaluate to 0 and the result will be incorrect.
0x80000000 can be of type long, if the int type has fewer than 31 value bits or if INT_MIN == -INT_MAX and long has more than 31 value bits. In this case, x is converted to long, and its value is unchanged, with the same consequences as for the int case. For 1s or 2s complement representation of long, the mask operation evaluates to a positive long value of either 0 or 0x80000000 and right shifting it by 31 places is defined and gives either 0 or 1; for sign/magnitude, the result should be 0 in all cases.
0x80000000 can be of type unsigned long, if the int type has fewer than 31 value bits and long has 31 value bits and uses 2s complement representation. In this case, x is converted to unsigned long keeping the sign bit intact. The mask operation evaluates to an unsigned long value of either 0 or 0x80000000 and right shifting it by 31 places is defined and gives either 0 or 1.
lastly, 0x80000000 can be of type long long, if the int type has fewer than 31 value bits or INT_MIN == -INT_MAX, and long has 31 value bits but does not use 2s complement representation. In this case, x is converted to long long, keeping its value, with the same consequences as for the int case if the long long representation is sign/magnitude.
This question was purposely contrived. The answer is that you get the correct result so long as the platform does not use sign/magnitude representation. But the C Standard insists on supporting integer representations other than 2s complement, with very subtle consequences.
EDIT: Careful reading of section 6.2.6.2 Integer types of the C Standard seems to exclude the possibility for different representations of signed integer types to coexist in the same implementation. This makes the code fully defined as posted, since the very presence of type int32_t implies 2s complement representation for all signed integer types.
Do I always get a result of 0 or 1?
Yes.
Simple answer:
0x80000000 >> 31 is always 1.
0x00000000 >> 31 is always 0.
See below.
[Edit]
Is it always defined behavior to extract the sign of a 32 bit integer this way
Yes, except for a corner case.
Should 0x80000000 be implemented as an int/long (this implies the type is wider than 32 bits) and that signed integer type use signed-magnitude (or maybe one's complement) on a novel machine, then the conversion of int32_t x to that int/long would move the sign bit to a new bit location, rendering the & 0x80000000 moot.
The question is open if C supports int32_t (which must be 2's complement) and any of int/long/long long as non-2's complement.
0x80000000 is a hexadecimal constant. "The type of an integer constant is the first of the corresponding list in which its value can be represented" C11 §6.4.4.1 5: Octal or Hexadecimal Constant: int, unsigned, long or unsigned long.... Regardless of its type, it will have a value of +2,147,483,648.
The type of x & 0x80000000 will be the wider of the types of int32_t and the type of 0x80000000. If the 2 types are the same width and differ in signedness, it will be the unsigned one. INT32_MAX is +2,147,483,647, which is less than +2,147,483,648, thus 0x80000000 must be a wider type (or the same width and unsigned) than int32_t. So regardless of the type of 0x80000000, x & 0x80000000 will have that same type.
It makes no difference whether int or long is implemented as 2's complement or not.
The & operation does not change the sign of the value of 0x80000000 as either it is an unsigned integer type or the sign bit is in a more significant position. x & 0x80000000 then has the value of +2,147,483,648 or 0.
Right shift of a positive number is well defined regardless of integer type. Right shift of negative values are implementation defined. See C11 §6.5.7 5. x & 0x80000000 is never a negative number.
Thus (x & 0x80000000) >> 31 is well defined and either 0 or 1.
return x < 0; (which does not "extract the sign bit with shift" per the post title) is understandable and is certainly the preferred code for most instances I can think of. Either approach may not make any difference in the executable code.
Whether this expression has precisely defined semantics or not, it is not the most readable way to get the sign bit. Here is a simpler alternative:
int get_sign(int32_t x) {
    return x < 0;
}
As correctly pointed out by 2501, int32_t is defined to have 2s complement representation, so comparing to 0 has the same semantics as extracting the most significant bit.
Incidentally, both functions compile to the same exact code with gcc 5.3:
get_sign(int):
        movl %edi, %eax
        shrl $31, %eax
        ret

Is it possible that for two positive integers i and j, (-i)/j is not equal to -(i/j)?

Is it possible that for two positive integers i and j, (-i)/j is not equal to -(i/j)? I can't figure out whether this is possible... I thought it would be something regarding bits, or overflow of a char type or something, but I can't find it. Any ideas?
Pre-C99, it's possible because division of negative operands is implementation-defined; it can round towards negative infinity or towards zero. C99 defines it to round towards zero.
For example, C89 allows (-1)/2 == -1, while C99 requires (-1)/2 == 0. In all cases, -(1/2) == 0.
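A sketch illustrating the C99-required truncation toward zero:

#include <stdio.h>

int main(void) {
    printf("%d %d\n", (-1)/2, -(1/2));   /* C99: 0 0 (a C89 compiler was allowed to print -1 0) */
    printf("%d %d\n", (-7)/2, -(7/2));   /* C99: -3 -3, truncated toward zero                   */
    return 0;
}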
It is indeed possible when using unsigned integers to represent i and j (you said positive integers, right? :P).
For instance, the output of the following program is (-i)/j 2147483647 -(i/j) 0 in my Intel 64bit OSX machine (unsigned ints are 32 bits long)
#include <stdio.h>

int main()
{
    unsigned int i = 1;
    unsigned int j = 2;
    printf("(-i)/j %u -(i/j) %u\n", (-i)/j, -(i/j));
    return 0;
}
Looking at the assembler, the negl Intel instruction is used to compute the negation. negl performs a two's complement. The two's complement result, when interpreted as an unsigned value, causes the mismatch. But don't simply take my word for it, here's an example:
For instance, assuming 8bit words, and for i=1 and j=2
In binary form: i=00000001 j=00000010
-(i/j) = twos_complement(00000001/00000010) = twos_complement(00000000) = 00000000
(-i)/j = twos_complement(00000001)/00000010 = 11111111 / 00000010 = 01111111 (127 in decimal)
The mismatch triggered by the unsigned representation happens even when using a C99 compiler. As #R states, another mismatch could also happen with a pre-C99 compiler, since division involving a negative number was implementation-defined.
