Conversion to float in uint32_t calculation - c

I am trying to improve a SWRTC by modifying the definition of a second (long unsigned n_ticks_per_second) by synchronizing time with a server.
#include <stdint.h>
#include <stdio.h>

int main(int argc, char * argv[]){
    int32_t total_drift_SEC;
    int32_t drift_per_sec_TICK;

    uint32_t at_update_posix_time   = 1491265740;
    uint32_t posix_time             = 1491265680;
    uint32_t last_update_posix_time = 1491251330;

    long unsigned n_ticks_per_sec = 1000;

    total_drift_SEC = (posix_time - at_update_posix_time);
    drift_per_sec_TICK = ((float) total_drift_SEC) / (at_update_posix_time - last_update_posix_time);
    n_ticks_per_sec += drift_per_sec_TICK;

    printf("Total drift sec %d\r\n", total_drift_SEC);
    printf("Drift per sec in ticks %d\r\n", drift_per_sec_TICK);
    printf("n_ticks_per_second %lu\r\n", n_ticks_per_sec);
    return 0;
}
What I don't understand is why I need to cast total_drift_SEC to float in order to get the correct result, i.e. for n_ticks_per_sec to end up equal to 1000.
The output of this code is:
Total drift sec -60
Drift per sec in ticks 0
n_ticks_per_second 1000
Whereas the output of the code without the cast to float is:
Total drift sec -60
Drift per sec in ticks 298054
n_ticks_per_second 299054

This line
drift_per_sec_TICK = total_drift_SEC / (at_update_posix_time - last_update_posix_time);
divides a 32 bit signed int by a 32 bit unsigned int.
A 32 bit unsigned int has the same rank as a 32 bit signed int, and the rule below applies whenever the unsigned operand's rank is greater than or equal to the signed operand's.
When doing arithmetic operations the "Usual Arithmetic Conversions" are applied:
From the C11 Standard (draft) 6.3.1.8/1:
if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
So -60 gets converted to a (32 bit) unsigned int: 4294967236
Here
drift_per_sec_TICK = (float) total_drift_SEC / (at_update_posix_time - last_update_posix_time);
The following applies (from the paragraph of the C Standard as above):
if the corresponding real type of either operand is float, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is float.
To avoid blindly stepping into such traps, always specify -Wconversion when compiling with GCC.
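If the float detour is not wanted, one way to keep the intended result is to keep the division signed, for example by converting the update interval to int32_t. A minimal sketch, assuming the interval between updates always fits in int32_t:
#include <stdint.h>
#include <stdio.h>

int main(void){
    uint32_t at_update_posix_time   = 1491265740;
    uint32_t posix_time             = 1491265680;
    uint32_t last_update_posix_time = 1491251330;
    long unsigned n_ticks_per_sec   = 1000;

    int32_t total_drift_SEC = (int32_t)(posix_time - at_update_posix_time);  /* -60 */

    /* Convert the (positive) interval to int32_t so the division stays signed
       instead of dragging total_drift_SEC to unsigned. */
    int32_t drift_per_sec_TICK =
        total_drift_SEC / (int32_t)(at_update_posix_time - last_update_posix_time);  /* -60 / 14410 == 0 */

    n_ticks_per_sec += drift_per_sec_TICK;
    printf("n_ticks_per_second %lu\r\n", n_ticks_per_sec);  /* 1000 */
    return 0;
}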

Because with "integer" version total_drift_SEC will become unsigned so -60 --> 4294967236
4294967236 / 14410 = 298054
Using float the division will calculate:
-60/14410 = 0
Referring to the c-standard at page 53
6.3.1.8 Usual arithmetic conversions
1 Many operators that expect operands of arithmetic type cause conversions and yield result
types in a similar way. The purpose is to determine a common real type for the operands
and result. For the specified operands, each operand is converted, without change of type
domain, to a type whose corresponding real type is the common real type. Unless
explicitly stated otherwise, the common real type is also the corresponding real type of
the result, whose type domain is the type domain of the operands if they are the same,
and complex otherwise. This pattern is called the usual arithmetic conversions:
[...]
Otherwise, the integer promotions are performed on both operands. Then the
following rules are applied to the promoted operands:
If both operands have the same type, then no further conversion is
needed. Otherwise, if both operands have signed integer types or both
have unsigned integer types, the operand with the type of lesser
integer conversion rank is converted to the type of the operand with
greater rank.
Otherwise, if the operand that has unsigned integer type has rank
greater or equal to the rank of the type of the other operand, then
the operand with signed integer type is converted to the type of the
operand with unsigned integer type.
Otherwise, if the type of the operand with signed integer type can
represent all of the values of the type of the operand with unsigned
integer type, then the operand with unsigned integer type is
converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
Emphasis mine.
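A tiny program reproducing the three numbers above (14410 is at_update_posix_time - last_update_posix_time from the question):
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void){
    int32_t  total_drift_SEC = -60;
    uint32_t interval        = 14410;  /* at_update_posix_time - last_update_posix_time */

    printf("%" PRIu32 "\n", (uint32_t)total_drift_SEC);                       /* 4294967236 */
    printf("%" PRIu32 "\n", total_drift_SEC / interval);                      /* 298054: division done unsigned */
    printf("%" PRId32 "\n", (int32_t)((float)total_drift_SEC / interval));    /* 0: float division, then truncation */
    return 0;
}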

Related

Usual Arithmetic Conversions: unexpected output

signed short Temp, Temp2;
Temp = 0xF2C9;
Temp2 = 0x100;
unsigned char a;
a = (unsigned char)(Temp/((unsigned short)Temp2));
What will be the expected output?
As per my understanding, due to the "Usual Arithmetic Conversions" Temp should first be converted into unsigned short and the result in a should be 0xF2, but I am getting 0xF3, which means the operation is performed with the signed value of Temp. Please explain the behavior.
Is endianness also relevant in this scenario?
No. First, all operands of arithmetic operators are promoted; that is, narrow types such as your shorts are converted to int (at least on all common architectures).
Assuming short is 16 bits wide on your system, the initialization of Temp is implementation-defined because the value 0xF2C9 doesn't fit in the type. Most probably it ends up as a negative value. Then, for the computation, that negative signed short value is promoted to int. The result of the division is a negative value, which in turn is converted to unsigned char.
It depends on whether INT_MAX >= USHRT_MAX.
The "Usual Arithmetic Conversions" convert Temp into (int) Temp. This will only "extend the sign" if int is wider than short.
((unsigned short)Temp2) is promoted to (int)((unsigned short)Temp2) or (unsigned)((unsigned short)Temp2).
If INT_MAX >= USHRT_MAX, then the division is done as (int)/(int).
Otherwise, like on a 16-bit system, the division is done as (int)/(unsigned), which is done as (unsigned)/(unsigned).
[Edit]
Temp, initialized with 0xF2C9 (see note), most likely has the value -3383 (or the value 62153 in the unlikely case that short is wider than 16 bits).
With (int)/(int), -3383/256 --> -13.21... --> -13. -13 converted to unsigned char --> 256 - 13 --> 243 or 0xF3.
(Assuming 16-bit int/unsigned) With (unsigned)/(unsigned), -3383 is converted to unsigned 65536 - 3383 --> 62153. 62153/256 --> 242.78... --> 242. 242 converted to unsigned char --> 242 or 0xF2.
Endianness is not relevant in this scenario.
Note: As pointed out by @Jens Gustedt, the value in Temp is implementation-defined when short is 16 bits wide.
Integer division works with "truncation toward zero", so the result of this division is 0xF3, which is -13, instead of 0xF2, which is -14. If you calculate this in decimal representation the result would be -13.21, and the .21 is then cut off.
"The question asked why the integers are signed"
"As per my understanding due to "Usual Arithmetic Conversions" first Temp should be converted into unsigned short..." NO. You are wrong here. The signed short Temp has its high bit set, hence is negative. The bit gets extended to the left when converted to int. The signed short Temp2 does not have its high bt set; the cast to (unsigned short) has no effect; it is converted to a positive int. The negative int is now divided by the positive int, resulting in a negtive value.
In the conversion of Temp2 to int, you don't want the sign bit extended. Use:
a = (unsigned char)(((unsigned short)Temp)/Temp2);
(I didn't test it; just theory)
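For what it's worth, a quick test that reproduces both results on a typical host with 16-bit short and 32-bit int (the second expression merely simulates what a 16-bit-int platform would do):
#include <stdio.h>

int main(void){
    short Temp  = (short)0xF2C9;   /* implementation-defined conversion; typically -3383 */
    short Temp2 = 0x100;

    /* With 32-bit int, both operands are promoted to int and the division is signed. */
    unsigned char a = (unsigned char)(Temp / ((unsigned short)Temp2));

    /* Forcing the division onto unsigned 16-bit values mimics a 16-bit-int platform. */
    unsigned char b = (unsigned char)((unsigned short)Temp / (unsigned short)Temp2);

    printf("signed division:   0x%02X\n", a);   /* 0xF3 */
    printf("unsigned division: 0x%02X\n", b);   /* 0xF2 */
    return 0;
}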
Jens, Paul and Mch,
Thanks for your clarification. But as per ISO/IEC 9899 section 6.3.1.1,
The rank of any unsigned integer type shall equal the rank of the corresponding
signed integer type, if any.
and as per 6.3.1.8 "Usual arithmetic conversions", the following rules should be applicable:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types, the operand with the type of lesser integer conversion rank is
converted to the type of the operand with greater rank.
**Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.**
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type
so according to rule 3 quoted above, the signed short (integer type) should first be converted into unsigned short (integer type), then the arithmetic operation should be performed, and the result should be 0xF2.
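What this reading misses is that the integer promotions run before rule 3 is consulted, so on a platform where INT_MAX >= USHRT_MAX both operands are already plain int. A C11 sketch (hypothetical test code, not from the thread) that makes the operand type visible:
#include <stdio.h>

int main(void){
    signed short Temp  = (signed short)0xF2C9;   /* implementation-defined; typically -3383 */
    signed short Temp2 = 0x100;

    /* After the integer promotions both operands of '/' are int here, so
       rule 3 (signed -> unsigned conversion) never comes into play. */
    const char *t = _Generic(Temp / ((unsigned short)Temp2),
                             int: "int",
                             unsigned int: "unsigned int",
                             default: "other");
    printf("type of Temp / (unsigned short)Temp2: %s\n", t);   /* "int" on 32-bit-int hosts */
    return 0;
}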

C operators result type

In C11 (n1570), there are some operators whose result types are clearly stated. For example, the result of && has type int.
But I haven't found the result type of other operators, like +. Is it specified somewhere in the standard?
I tried this program:
#include <stdio.h>

int main(void)
{
    unsigned char usc = 254;
    unsigned int usi = 4294967293;
    signed char sic = 126;
    long long unsigned llu = usc*2;
    printf("%llu\n",llu);
    llu = usi*2;
    printf("%llu\n",llu);
    llu = usc+usc;
    printf("%llu\n",llu);
    llu = usi+usi;
    printf("%llu\n",llu);
    llu = usc+4294967294;
    printf("%llu\n",llu);
    llu = usc+2147483646;
    printf("%llu\n",llu);
    llu = sic+4294967294;
    printf("%llu\n",llu);
    llu = sic+2147483646;
    printf("%llu\n",llu);
    return 0;
}
Output:
508
4294967290
508
4294967290
4294967548
18446744071562068220
4294967420
18446744071562068092
I guess unsigned char gets promoted here, but unsigned int doesn't; result type of char + unsigned int seems to be unsigned int, and result type of char + int seems to be int.
But I'm not so sure.
Are these castings standard, or implementation-defined?
The rules are explicitly defined by the standard:
6.3.1.8 Usual arithmetic conversions
1 Many operators that expect operands of arithmetic type cause conversions and yield result
types in a similar way. The purpose is to determine a common real type for the operands
and result. For the specified operands, each operand is converted, without change of type
domain, to a type whose corresponding real type is the common real type. Unless
explicitly stated otherwise, the common real type is also the corresponding real type of
the result, whose type domain is the type domain of the operands if they are the same,
and complex otherwise. This pattern is called the usual arithmetic conversions:
First, if the corresponding real type of either operand is long double, the other
operand is converted, without change of type domain, to a type whose corresponding real type is long double.
Otherwise, if the corresponding real type of either operand is double, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is double.
Otherwise, if the corresponding real type of either operand is float, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is float.62)
Otherwise, the integer promotions are performed on both operands. Then the
following rules are applied to the promoted operands:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types, the operand with the type of lesser integer conversion rank is
converted to the type of the operand with greater rank.
Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
Still, how big your types are and whether signed types use ones'-complement, two's-complement or sign-and-magnitude representation is implementation-defined.
You properly deduced what happens for your implementation.
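If you want the compiler itself to report the result types, C11's _Generic can do it. A quick sketch; the names printed are what the rules above predict on typical implementations:
#include <stdio.h>

#define TYPE_OF(e) _Generic((e),                        \
    int: "int",                                         \
    unsigned int: "unsigned int",                       \
    long long: "long long",                             \
    unsigned long long: "unsigned long long",           \
    default: "something else")

int main(void){
    unsigned char usc = 254;
    unsigned int  usi = 4294967293u;
    signed char   sic = 126;

    puts(TYPE_OF(usc + usc));   /* int: both operands promoted to int        */
    puts(TYPE_OF(usc * 2));     /* int                                       */
    puts(TYPE_OF(usi + usi));   /* unsigned int                              */
    puts(TYPE_OF(sic + usi));   /* unsigned int: the promoted int converts   */
    return 0;
}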
There is no “casting” in your examples, only conversions. A cast is a syntactic construct, like (unsigned long), that causes a conversion. Conversions also happen on their own, as they do in your examples.
The name for the conversions that happen in your examples is "usual arithmetic conversions". They are described in the C11 standard at clause 6.3.1.8, which is too long to quote in full (it also handles the conversion from integer type to floating-point type when the program contains 1 + 1.0). For integer types, the rules look like:
If both operands have the same type, then no further conversion is
needed.
Otherwise, if both operands have signed integer types or both
have unsigned integer types, the operand with the type of lesser
integer conversion rank is converted to the type of the operand with
greater rank.
Otherwise, if the operand that has unsigned integer type
has rank greater or equal to the rank of the type of the other
operand, then the operand with signed integer type is converted to the
type of the operand with unsigned integer type.
Otherwise, if the type
of the operand with signed integer type can represent all of the
values of the type of the operand with unsigned integer type, then the
operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are
converted to the unsigned integer type corresponding to the type of
the operand with signed integer type.
The type of the result is the same as the common type decided for the operands.
Frama-C's front-end, which is only one “apt-get install” away if you are using Linux, makes these conversions explicit and displays them as casts when you command it to print the Abstract Syntax Tree it has built. On your examples, the conversions are (removing the printf() calls that do not add information):
$ frama-c -print t.c
...
usc = (unsigned char)254;
usi = 4294967293;
sic = (signed char)126;
llu = (unsigned long long)((int)usc * 2);
llu = (unsigned long long)(usi * (unsigned int)2);
llu = (unsigned long long)((int)usc + (int)usc);
llu = (unsigned long long)(usi + usi);
llu = (unsigned long long)((unsigned int)usc + 4294967294);
llu = (unsigned long long)((int)usc + 2147483646);
llu = (unsigned long long)((unsigned int)sic + 4294967294);
llu = (unsigned long long)((int)sic + 2147483646);
This is for a common ILP32 architecture. Implementation-defined parameters can influence the conversions, as for instance the type of 40000 may be int for most C99 compilers and long int with others (it is always the first type in the list int, long int, long long int that can represent the constant).

C Treatment when Casting Comparison Values

Someone was talking with me about wraparound in C (0xffff + 0x0001 = 0x0000), and it led me to the following situation:
#include <stdio.h>

int main() {
    unsigned int a;
    for (a = 0; a > -1; a++)
        printf("%d\n", a);
    return 0;
}
Compiling with GCC, this program exits without running the loop, which I assume is because -1 was implicitly cast to 0xffff. The same happens when switching int to long. However, when switching to char, the program runs indefinitely. I would expect that since the int did not run the loop, neither would the char. Can someone explain what sort of implicit casting the compiler is performing in this situation, and is it defined in one of the editions of the C standard or is it compiler-dependent?
In C, unsignedness is sticky:
unsigned int a;
/* ... */
a > -1
in the above > expression, the left operands is of type unsigned int and the right operand is of type int. The C usual arithmetic conversions convert the two operands to a common type: unsigned int and so the > expression above is equivalent to:
a > (unsigned int) -1
The conversion of -1 to unsigned int makes the resulting value a huge unsigned int value, and since a's initial value is 0 the expression evaluates to false (0).
Now if a is instead of type char (or int), a is promoted to int, -1 is not converted to unsigned int, and so 0 > -1 is true (1) as expected (a small demonstration follows the quote below).
Quote excerpted from ISO/IEC 9899:
If both of the operands have arithmetic type, the usual arithmetic conversions are
performed.
Several operators convert operand values from one type to another automatically.
Many operators that expect operands of arithmetic type cause conversions and yield result
types in a similar way. The purpose is to determine a common real type for the operands
and result. For the specified operands, each operand is converted, without change of type
domain, to a type whose corresponding real type is the common real type. Unless
explicitly stated otherwise, the common real type is also the corresponding real type of
the result, whose type domain is the type domain of the operands if they are the same,
and complex otherwise. This pattern is called the usual arithmetic conversions:
First, if the corresponding real type of either operand is long double(...)
Otherwise, if the corresponding real type of either operand is double(...)
Otherwise, if the corresponding real type of either operand is float(...)
Otherwise, the integer promotions are performed on both operands. Then the
following rules are applied to the promoted operands:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types(...)
Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
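The effect is easy to see directly. A small sketch; unsigned char stands in for the question's "char" case, assuming UCHAR_MAX <= INT_MAX so it promotes to int:
#include <stdio.h>

int main(void){
    unsigned int  a = 0;
    unsigned char c = 0;

    printf("%u\n", (unsigned int)-1);   /* UINT_MAX, e.g. 4294967295 */
    printf("%d\n", a > -1);             /* 0: -1 is converted to unsigned int    */
    printf("%d\n", c > -1);             /* 1: c is promoted to int, 0 > -1 holds */
    return 0;
}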
Basically, when you do an operation in C, the operation must be executed in a type at least as big as the larger operand.
So, assuming that your int is an int32, in the operation:
uint32_t > int32_t
the operation must be executed in the type "uint32_t" or greater. In this case, it is executed as "uint32_t".
When you do:
uint8_t > int32_t
the operation is executed in the type "int32_t" or greater.
Usually, when possible, the operation will be executed as "int", as it is supposed to be faster than any other type.
So, if you do:
(int)(unsigned char value) > (int)(-1)
the condition will always be true.

Bit width of bit shift and arithmetic in C

I am working on a program that mixes 64-bit (for some calculations) and 32-bit (for space-saving storage) unsigned integers, so it is very important that I keep them sorted out during arithmetic to avoid overflows.
Here's an example problem:
I want to bit shift 1 to the left by an unsigned long n, but I want the result to be an unsigned long long. This will be used in a comparison operation in an if statement, so there is no assignment going on. I'll give you some code.
void example(unsigned long shift, unsigned long long compare)
{
    if ((1 << shift) > compare)
    {
        do_stuff;
    }
}
I suspect that this would NOT do what I want, so would the following do what I want?
void example(unsigned long shift, unsigned long long compare)
{
    if (((unsigned long long)1 << shift) > compare)
    {
        do_stuff;
    }
}
How do I micromanage the bit width of these things? Which operand determines the bit width that the operation is performed with, or is it the larger of the two?
Also, I would like a run down of how this works for other operations too if possible, such as + * / % etc.
Perhaps a reference to a resource with this information would be good, I cannot seem to find a clear statement of this information anywhere. Or perhaps the rules are simple enough to just post. I am not sure.
Which operand determines the bit width that the operation is performed with, or is it the larger of the two?
For the bit-shifts, it's the left operand (the one to be shifted) that determines the type that the operation is performed with. If the integer promotions convert it to int or unsigned int, the operation is performed at that type, otherwise at the type of the left operand.
For the comparison, the result of the shift may then be converted to the type of the other operand. In your example code, the integer constant 1 has type int, hence the shift would be performed at type int, and the result of that converted to unsigned long long for the comparison. Casting works, since the result has a type that is not changed by the integer promotions, as would using a suffixed literal 1ull.
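A C11 way to see this (a sketch): the type of a shift expression follows only the promoted left operand.
#include <stdio.h>

int main(void){
    unsigned long shift = 3;

    /* The left operand alone picks the type the shift is performed at. */
    puts(_Generic(1 << shift,    int: "int", default: "other"));                          /* int */
    puts(_Generic(1ULL << shift, unsigned long long: "unsigned long long",
                                 default: "other"));                                      /* unsigned long long */
    return 0;
}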
For the other listed operations, the arithmetic operations (as for comparisons), the type at which the operation is performed is determined by both operands as follows:
Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
It will do exactly what you want. However, this may be achieved (in this particular case) just by using a literal constant of type long long: 1LL
What you want is a long long literal. To do this, use 1LL instead of 1.
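Putting the suggestions together, a minimal sketch of the fixed function; the cast and the suffixed constant are equivalent here:
#include <stdio.h>

static void example(unsigned long shift, unsigned long long compare)
{
    /* 1ULL makes the left operand unsigned long long, so the whole shift is
       performed in at least 64 bits; 1 << shift would be done at int width. */
    if ((1ULL << shift) > compare)
    {
        puts("do_stuff");
    }
}

int main(void)
{
    example(40, 1);            /* well defined: 1ULL << 40 > 1        */
    example(40, 1ULL << 50);   /* the comparison also happens in 64 bits */
    return 0;
}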

unsigned and int gotcha [duplicate]

Possible Duplicate:
Arithmetic operations on unsigned and signed integers
unsigned int b=2;
int a=-2;
if(a>b)
    printf("a>b");
else
    printf("b>a");

OUTPUT: a>b

int b=2;
int a=-2;
if(a>b)
    printf("a>b");
else
    printf("b>a");

OUTPUT: b>a
PLEASE, someone explain the output
In the first case both operands are converted to unsigned int; the converted a will be UINT_MAX-1, which is much larger than b, hence the output.
Don't compare signed and unsigned integers unless you understand the semantics of the arithmetic conversions; the results might surprise you.
When signed and unsigned values are compared, and the unsigned type's values can't all be represented in the signed type, the signed operand is converted to unsigned. This is done with a formula that amounts to a reinterpretation of the two's-complement bit pattern.
Well, negative numbers have lots of high bits set...
Since your operands are all of the same rank, it's just a matter of unsigned bit patterns being compared.
And so -2 is represented with 111111..110, one less than the largest possible, and it easily beats 2 when interpreted as unsigned.
The following is taken from The C Programming Language by Kernighan and Ritchie - 2.7 Type Conversions - page 44; the second half of the page explains the same scenario in detail. A small portion is below for your reference.
Conversion rules are complicated when unsigned operands are involved. The problem is that comparisons between signed and unsigned values are machine dependent, because they depend on the sizes of the various integer types. For example, suppose that int is 16 bits long and long is 32 bits. Then -1L < 1U, because 1U, which is an unsigned int, is promoted to a signed long. But -1L > 1UL, because -1L is promoted to unsigned long and thus appears to be a large positive number.
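You can reproduce the same effect on a typical modern host by letting fixed-width types stand in for K&R's 16-bit int and 32-bit long (a sketch, assuming int is 32 bits here):
#include <stdint.h>
#include <stdio.h>

int main(void){
    /* uint16_t plays the role of K&R's 16-bit unsigned int,
       int32_t/uint32_t play the role of the 32-bit long/unsigned long. */
    printf("%d\n", (int32_t)-1 < (uint16_t)1);   /* 1: uint16_t promotes to int, -1 < 1        */
    printf("%d\n", (int32_t)-1 > (uint32_t)1);   /* 1: -1 converts to uint32_t, a huge value   */
    return 0;
}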
You need to learn the operation of the operators in C and the C promotion and conversion rules. They are explained in the C standard. Some excerpts from it plus my comments:
6.5.8 Relational operators
Syntax
1 relational-expression:
shift-expression
relational-expression < shift-expression
relational-expression > shift-expression
relational-expression <= shift-expression
relational-expression >= shift-expression
Semantics
3 If both of the operands have arithmetic type, the usual arithmetic conversions are
performed.
Most operators include this "usual arithmetic conversions" step before the actual operation (addition, multiplication, comparison, etc.). - Alex
6.3.1.8 Usual arithmetic conversions
1 Many operators that expect operands of arithmetic type cause conversions and yield result
types in a similar way. The purpose is to determine a common real type for the operands
and result. For the specified operands, each operand is converted, without change of type
domain, to a type whose corresponding real type is the common real type. Unless
explicitly stated otherwise, the common real type is also the corresponding real type of
the result, whose type domain is the type domain of the operands if they are the same,
and complex otherwise. This pattern is called the usual arithmetic conversions:
First, if the corresponding real type of either operand is long double, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is long double.
Otherwise, if the corresponding real type of either operand is double, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is double.
Otherwise, if the corresponding real type of either operand is float, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is float.
Otherwise, the integer promotions are performed on both operands. Then the
following rules are applied to the promoted operands:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types, the operand with the type of lesser integer conversion rank is
converted to the type of the operand with greater rank.
Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
6.3.1.3 Signed and unsigned integers
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type. (The rules describe arithmetic on the mathematical value, not the value of a given type of expression.)
Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
So, in your a>b (with a being an int and b being an unsigned int), per the above rules you get a converted to unsigned int before the comparison. Since a is negative (-2), the unsigned value becomes UINT_MAX+1+a (this is the repeatedly adding or
subtracting one more than the maximum value bit). And UINT_MAX+1+a in your case is UINT_MAX+1-2 = UINT_MAX-1, which is a huge positive number compared to the value of b (2). And so a>b yields "true".
Forget the math you learned at school. Learn how C does it.
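To see the converted value directly, a quick check:
#include <limits.h>
#include <stdio.h>

int main(void){
    unsigned int b = 2;
    int a = -2;

    printf("%u\n", (unsigned int)a);   /* UINT_MAX - 1, e.g. 4294967294 */
    printf("%u\n", UINT_MAX - 1);      /* same value                    */
    printf("%d\n", a > b);             /* 1: a is converted to unsigned */
    return 0;
}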
In the first case the signed a gets converted into an unsigned int; then the two are compared.
In C99 a signed integer type and its corresponding unsigned type have the same conversion rank; when that is the case, the rules quoted above say the signed operand is converted to the unsigned type.
Here is a summary of the rules.

Resources