sqrt() of int type in C

I am programming in C on Mac OS X. I am using the sqrt function from math.h like this:
int start = Data->start_number;
double localSum = 0;
for (; start <= end; start++) {
    localSum += sqrt(start);
}
This works, but why? And why do I get no warning? According to the man page, sqrt takes a double as its parameter, but I am passing it an int. How can that work?
Thanks

Type conversions that do not cause a loss of precision usually do not produce warnings; the value is converted implicitly.
int --> double // no loss of precision (e.g. 3 becomes 3.00)
double --> int // loss of precision (e.g. 3.01222 becomes 3)
What triggers a warning and what doesn't depends largely on the compiler and the flags supplied to it. However, most compilers (at least the ones I've used) don't consider implicit type conversions dangerous enough to warrant a warning, since they are a feature of the language specification.
To warn or not to warn:
The C99 Rationale puts it as a guideline:
One of the important outcomes of exploring this (implicit casting) problem is the understanding that high-quality compilers might do well to look
for such questionable code and offer (optional) diagnostics, and that
conscientious instructors might do well to warn programmers of the
problems of implicit type conversions.
C99 Rationale (Apr 2003) : Page 45
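As a quick illustration (a sketch added to this answer, not part of the original): both directions compile silently by default, and only the lossy one draws a diagnostic once you ask gcc or clang for it with -Wconversion.
#include <stdio.h>
int main(void) {
    int i = 3;
    double d = i;        /* int -> double: lossless, typically silent even with -Wconversion */
    double x = 3.01222;
    int j = x;           /* double -> int: fractional part dropped; -Wconversion warns here */
    printf("%f %d\n", d, j);
    return 0;
}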

The compiler knows the prototype of sqrt, so it can - and will - produce the code to convert an int argument to double before calling the function.
The same holds the other way round, too: if you pass a double to a function (with a known prototype) that takes an int argument, the compiler will produce the required conversion code.
Whether the compiler warns about such conversions is up to the compiler and the warning-level you requested on the command line.
For the int -> double conversion, which is lossless when int is 32 (or 16) bits wide and double is the 64-bit IEEE 754 format, getting a warning is difficult, if it is possible at all.
For the double -> int conversion, with gcc and clang you need to ask for such warnings explicitly using -Wconversion, or the code will compile silently.

An int can safely be converted to a double automatically because there is no risk of data loss. The reverse is not true: converting a double to an int discards the fractional part, so you should cast explicitly to make the truncation intentional and obvious.

C compilers do some automatic conversion between double and int.
You could also do the following:
int start = Data->start_number;
int localSum = 0;
for (; start <= end; start++) {
    localSum += sqrt(start);
}
Even though localSum is an int this will still work, but it will always cut off everything after the decimal point.
For example, if the return value of sqrt() is 1.365, it will be stored as a plain 1.
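For instance, summing a few square roots in an int versus a double makes the difference visible (an illustrative sketch added here, not from the original answer):
#include <math.h>
#include <stdio.h>
int main(void) {
    int    intSum    = 0;
    double doubleSum = 0;
    for (int start = 1; start <= 4; start++) {
        intSum    += sqrt(start);   /* fractional part dropped each time the sum is stored: ends up as 5 */
        doubleSum += sqrt(start);   /* keeps the fractional parts: ends up around 6.146 */
    }
    printf("int sum: %d, double sum: %f\n", intSum, doubleSum);
    return 0;
}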

Related

Why does gcc accept "int a = 3i;" as a valid statement?

When I write the following C code, I find that (contrary to what I expected) gcc accepts this code and compiles it successfully! I don't know why, since it looks like an invalid statement.
int main() {
    int a = 1i;
    return 0;
}
I guess it may be treating 1i as a complex number. Then int a = 1i means int a = 0 + 1i, and since 1i is not a valid integer, only the 0 is kept.
#include <stdio.h>
int main() {
    int a = 1i*1i;
    printf("%d\n", a);
    return 0;
}
I tried the code above and found that it prints -1, so maybe my thought is correct. But this is the first time I have seen a C compiler do this. Is my guess correct?
Your intuition is correct. gcc allows for complex number constants as an extension.
From the gcc documentation:
To write a constant with a complex data type, use the suffix i or
j (either one; they are equivalent). For example, 2.5fi has type
_Complex float and 3i has type _Complex int. Such a constant always has a pure imaginary value, but you can form any complex value you
like by adding one to a real constant. This is a GNU extension; if you
have an ISO C99 conforming C library (such as GNU libc), and want to
construct complex constants of floating type, you should include
<complex.h> and use the macros I or _Complex_I instead.
So 1i is the complex number i. Then when you assign it to a, the imaginary part is discarded and the real part is assigned (and if the real part is of a floating-point type, it is converted to int).
This conversion is spelled out in section 6.3.1.7p2 of the C standard:
When a value of complex type is converted to a real type, the
imaginary part of the complex value is discarded and the value of the
real part is converted according to the conversion rules for the
corresponding real type.
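For a portable variant of the same experiment (a small sketch following the quoted advice; it uses C99 <complex.h> rather than the GNU integer-complex constant):
#include <complex.h>
#include <stdio.h>
int main(void) {
    double complex z = I * I;   /* i * i == -1 + 0i */
    int a = z;                  /* imaginary part discarded, real part converted to int */
    printf("%d\n", a);          /* prints -1 */
    return 0;
}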

Can casting in C cause any compile error?

I know that in Java I can't return a double from a function that is supposed to return an int value without casting, but now that I am learning C I see that I can compile something like this (only warnings, no error):
int calc(double d, char c) {
    return d * c / 3;
}
So my question is: will the C compiler always do this conversion for me automatically when needed, or does this specific case only work because of the char or something?
C has a concept of implicit conversions, i.e. rules that define under which conditions and how values are converted to different types implicitly, without the need to explicitly cast them (see, for example, cppreference.com). So C is not "auto-casting" everything, but only under certain conditions.
Your return type is int, whereas the result of expression d * c / 3 is double. So the following (implicit) conversion applies:
Real floating-integer conversions
A finite value of any real floating
type can be implicitly converted to any integer type. Except where
covered by boolean conversion above, the rules are: The fractional
part is discarded (truncated towards zero). If the resulting value can
be represented by the target type, that value is used; otherwise, the behavior is undefined.
Actually, I think that you may not get any warning by default. The compiler performs an implicit conversion, and the only way to force a warning is to add the flag -Wconversion (I am talking about gcc; you get something like: warning: conversion to ‘int’ from ‘double’ may alter its value [-Wfloat-conversion]). In any case, the compiler assumes that you know about it; the fractional part simply disappears in an integer conversion.
(See YePhIcK's comment about MSVC)
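To make the truncation concrete, here is a small sketch (not from the original post) of what the calc function above returns:
#include <stdio.h>
int calc(double d, char c) {
    return d * c / 3;   /* the double result is implicitly converted to int */
}
int main(void) {
    /* 10.9 * 3 / 3 = 10.9, which is truncated towards zero to 10 */
    printf("%d\n", calc(10.9, 3));
    return 0;
}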

C long double in golang

I am porting an algorithm from C to Go. And I got a little bit confused. This is the C function:
void gauss_gen_cdf(uint64_t cdf[], long double sigma, int n)
{
    int i;
    long double s, d, e;
    // Calculations ...
    for (i = 1; i < n - 1; i++) {
        cdf[i] = s;
    }
}
In the for loop, the value s is assigned to element i of the array cdf. How is this possible? As far as I know, a long double corresponds to a float64 (in the Go context). So I would expect the C code not to compile, because I am assigning a long double to an array that contains only uint64 elements. But the C code works fine.
So can someone please explain why this is working?
Thank you very much.
UPDATE:
The original C code of the function can be found here: https://github.com/mjosaarinen/hilabliss/blob/master/distribution.c#L22
The assignment cdf[i] = s performs an implicit conversion to uint64_t. It's hard to tell if this is intended without the calculations you omitted.
In practice, long double as a type has considerable variance across architectures. Whether Go's float64 is an appropriate replacement depends on the architecture you are porting from. For example, on x86, long double is an 80-bit extended-precision type, but Windows systems are usually configured in such a way that results are computed with only the 53-bit mantissa, which means that float64 could still be equivalent for your purposes.
EDIT: In this particular case, the values computed by the sources appear to be static and independent of the input. I would just use float64 on the Go side and see whether the computed values are identical to those of the C version when run on an x86 machine under real GNU/Linux (virtualization should be okay), to work around the Windows FPU issues. The choice of x86 is just a guess, because it is likely what the original author used. I do not understand the underlying cryptography, so I can't say whether a difference in the computed values impacts the security. (Also note that the C code does not seem to properly seed its PRNG.)
C long double in golang
The title suggests an interest in whether or not Go has an extended-precision floating-point type similar to long double in C.
The answer is:
Not as a primitive, see Basic types.
But arbitrary precision is supported by the math/big library.
Why is this working?
long double s = some_calculation();
uint64_t a = s;
It compiles because, unlike Go, C allows for certain implicit type conversions. The integer portion of the floating-point value of s will be copied. Presumably the s value has been scaled such that it can be interpreted as a fixed-point value where, based on the linked library source, 0xFFFFFFFFFFFFFFFF (2^64-1) represents the value 1.0. In order to make the most of such assignments, it may be worthwhile to have used an extended floating-point type with 64 precision bits.
If I had to guess, I would say that the (crypto-related) library is using fixed-point here because they want to ensure deterministic results, see: How can floating point calculations be made deterministic?. And since the extended-precision floating point is only being used for initializing a lookup table, using the (presumably slow) math/big library would likely perform perfectly fine in this context.
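As a rough illustration of that fixed-point idea on the C side (a sketch; the scale factor 0xFFFFFFFFFFFFFFFF is assumed from the description above, not taken from the library):
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
int main(void) {
    long double s = 0.75L;                        /* a probability-like value in [0, 1] */
    uint64_t truncated = s;                       /* implicit conversion simply truncates: 0.75 becomes 0 */
    uint64_t scaled = s * 0xFFFFFFFFFFFFFFFFULL;  /* scale first, so 0xFFFFFFFFFFFFFFFF represents 1.0 */
    printf("truncated = %" PRIu64 ", scaled = %" PRIu64 "\n", truncated, scaled);
    return 0;
}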

Specify float when initializing double. gcc and clang differs

I tried running this simple code on ideone.com
#include <stdio.h>
int main()
{
    double a = 0.7f; // Notice: f for float
    double b = 0.7;
    if (a == b)
        printf("Identical\n");
    else
        printf("Differ\n");
    return 0;
}
With gcc-5.1 the output is Identical
With clang 3.7 the output is Differ
So it seems gcc ignores the f in 0.7f and treats it as a double while clang treats it as a float.
Is this a bug in one of the compilers or is this implementation dependent per standard?
Note: This is not about floating point numbers being inaccurate. The point is that gcc and clang treats this code differently.
The C standard allows floating-point operations to be carried out in higher precision than what is implied by the code, both during compilation and at execution time. I'm not sure if this is the exact clause in the standard, but the closest I can find is §6.5 in C11:
A floating expression may be contracted, that is, evaluated as though it were a single operation, thereby omitting rounding errors implied by the source code and the expression evaluation method
Not sure if this is it, or whether there is a better part of the standard that specifies this. There was a huge debate about this a decade or two ago (the problem used to be much worse on i386 because of the internal 80-bit floating-point registers of the 8087).
The compiler is required to convert the literal into an internal representation which is at least as accurate as the literal. So gcc is permitted to store floating point literals internally as doubles. Then when it stores the literal value in 'a' it will be able to store the double. And clang is permitted to store floats as floats and doubles as doubles.
So it's implementation specific, rather than a bug.
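One way to see what each compiler actually stored (a small sketch, not part of the original answers) is to print the exact bit patterns with %a:
#include <stdio.h>
int main(void) {
    double a = 0.7f;   /* float literal, possibly kept at higher precision */
    double b = 0.7;
    /* %a prints the exact binary representation, so any difference is visible */
    printf("a = %a\nb = %a\n", a, b);
    printf("%s\n", a == b ? "Identical" : "Differ");
    return 0;
}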
Addendum: For what it is worth, something similar can happen with ints as well
int64_t val1 = 5000000000;
int64_t val2 = 5000000000LL;
if (val1 != val2) { printf("Different\n"); } else { printf("Same\n"); }
can print either Different or Same depending on how your compiler treats integer literals (though this is more particularly an issue with 32 bit compilers)

doubt regarding operations on "int" flavors

I have the following doubt regarding the "int" flavors (unsigned int, long int, long long int).
When we do some operation (*, /, +, -) between an int and one of its flavors (let's say long int)
on a 32-bit or a 64-bit system, does an implicit conversion happen for the "int"?
For example:
int x;
long long int y = 2000;
x = y; // the wider value is assigned to the narrower one, so data truncation may happen
I am expecting the compiler to give a warning for this, but I am not getting any such warning.
Is this due to an implicit conversion happening for "x" here?
I am using gcc with the -Wall option. Will the behavior change between 32-bit and 64-bit?
Thanks
Arpit
-Wall does not activate all possible warnings; -Wextra enables additional ones. In any case, what you are doing is a perfectly "legal" operation, and since the compiler cannot always know at compile time the value that might be "truncated", it is reasonable that it does not warn: the programmer is expected to be aware that a "large" integer may not fit into a "small" one, so it is usually up to the programmer. If you think your program was written without this awareness, add -Wconversion.
Converting without an explicit cast operator is perfectly legal in C, but the result is not always portable. In your case, int x; is signed, so if you try to store a value in it that is outside the range of int, the result is implementation-defined (the implementation may also raise a signal). On the other hand, if x were declared as unsigned x;, the behavior is well defined: the value is reduced modulo UINT_MAX + 1.
As for arithmetic, when you perform arithmetic between integers of different types, the 'smaller' type is converted to the 'larger' type prior to the arithmetic. The compiler is free to optimize out this conversion, of course, if it does not affect the results, which leads to idioms like casting a 32-bit integer to 64 bits before multiplying to get a full 64-bit result (see the sketch after this answer). Promotion gets a bit confusing and can have unexpected results when signed and unsigned values are mixed. You should look it up if you care to know, since it's hard to explain informally.
If you are worried, you can include <stdint.h> and use types with defined lengths, such as uint16_t for a 16-bit unsigned integer.
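The widening-before-multiply idiom mentioned above looks like this (a brief sketch, not from the original answer; the wraparound assumes the usual platforms where int is 32 bits):
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
int main(void) {
    uint32_t a = 100000, b = 100000;
    uint64_t wrong = a * b;             /* multiplied in 32 bits first, so the product wraps */
    uint64_t right = (uint64_t)a * b;   /* b is converted too, giving the full 64-bit product */
    printf("%" PRIu64 " vs %" PRIu64 "\n", wrong, right);
    return 0;
}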
Your code is perfectly valid (as others have already said). If you want to program in a portable way, in most cases you should not use the bare C types int, long, or unsigned int, but rather types that say a bit more precisely what you are planning to do with them.
E.g., for array indices, always use size_t. Regardless of whether you are on a 32-bit or 64-bit system, this will be the right type. Or, if you want the integer type of maximal width on whatever platform you land on, use intmax_t or uintmax_t.
See http://gcc.gnu.org/ml/gcc-help/2003-06/msg00086.html -- the code is perfectly valid C/C++.
You might want to look at static analysis tools (sparse, llvm, etc.) to check for this type of truncation.
