C macro produces different result using literal vs variable - c

I have the following C program:
#include <stdio.h>
#include <math.h>

#define LOG2(x) ((int)( log((double)(x)) / log(2) ))

int main() {
    int num = 64;
    int val1 = LOG2(num);
    int val2 = LOG2(64);
    printf("val1: %d, val2: %d\n", val1, val2);
    return 0;
}
Which outputs:
val1: 5, val2: 6
Why does this macro produce a different (and wrong) answer when I use it with a variable, but works correctly when I just type 64 directly?
Regardless of whether or not this is actually a good way to get the log base 2, what is causing this behavior? Is there any way I can get this macro to work properly with variables? (all my inputs will be exact powers of 2)

This is, mathematically, a fine way of computing base-2 logs, but since log(val) and log(2) are both going to be long, messy fractions, it's not unlikely that the result of the division will end up being 5.999, which will truncate down to 5. I recommend rounding, especially if you know the inputs will always be powers of 2.
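A minimal sketch of the rounded version (the same macro, just adding 0.5 before the truncating cast; this is safe here since your inputs are exact powers of 2, so the quotient is always within rounding distance of an integer):

#define LOG2(x) ((int)( log((double)(x)) / log(2) + 0.5 ))

With this change, a result like 5.9999999 rounds to 6 instead of truncating to 5.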
(But why did you get different answers for constant vs. variable? That's a good question. Usually the answer is that when there are constants involved the compiler is able to perform some/all of the calculation at compile time, but often the compiler ends up using subtly or significantly different floating-point arithmetic than the run-time environment. I wouldn't have thought the compiler would be interpreting functions like log() while doing compile-time constant folding, but modern GCC actually can: it folds calls to log() on constant arguments using the MPFR library, and that result can differ from what the run-time math library produces.)
Also, <math.h> has a log2() function, standard since C99, which should give you perfect answers.
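For example, a sketch using log2() on a C99 implementation (the helper name ilog2 is just for illustration; the rounding guards against a library result a hair below the exact integer):

#include <math.h>

int ilog2(int x)    /* assumes x is an exact power of 2 */
{
    return (int)(log2((double)x) + 0.5);
}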

Related

can't figure out the sizeof(long double) in C is 16 bytes or 10 bytes

Although there are some answers on this website, I still can't figure out the meaning of sizeof(long double). Why is the output of printing var3 3.141592653589793115998?
When I run the same code as another person, it produces different output. Could somebody help me solve this problem?
My testing codes:
#include <stdio.h>

int main(void) {
    float var1 = 3.1415926535897932;
    double var2 = 3.1415926535897932;
    long double var3 = 3.141592653589793213456;
    printf("%zu\n", sizeof(float));   /* sizeof yields size_t, so use %zu */
    printf("%zu\n", sizeof(double));
    printf("%zu\n", sizeof(long double));
    printf("%.16f\n", var1);
    printf("%.16f\n", var2);
    printf("%.21Lf\n", var3);
    return 0;
}
output of my testing codes:
4
8
16
3.1415927410125732
3.1415926535897931
3.141592653589793115998
The code is the same as another person's, but their output is:
4
8
12
3.1415927410125732
3.1415926535897931
3.141592741012573213359
Could somebody tell me why our outputs are different?
Floating-point numbers (inside our computers) are not mathematical real numbers.
They have lots of counter-intuitive properties (e.g. (1.0-x) + x in your C code can be different from 1.0). For more, read the floating-point-gui.de
Be also aware that a number is not its representation in digits. For example, most of your examples are approximations of the number π (which, intuitively speaking, has an infinite number of digits or bits, since it is a transcendental number, as proven by Ferdinand von Lindemann).
I still can't figure out the meaning of sizeof(long double).
It is the implementation-specific ratio of the number of bytes (or octets) in a long double automatic variable to the number of bytes in a char automatic variable.
The C11 standard (read n1570 and see this reference) does allow an implementation to have sizeof(long double) being, like sizeof(char), equal to 1. I cannot name such an implementation, but it might be (theoretically) the case on some weird computer architectures (e.g. some DSP).
Could somebody tell me why the output of us are different?
What makes you think they would be equal?
Practically speaking, floating point numbers are often IEEE754. But on IBM mainframes (e.g. z/Series) or on VAXes they are not.
float var1 = 3.1415926535897932;
double var2 = 3.1415926535897932;
Be aware that it could be possible to have a C implementation where (double)var1 != var2 or where var1 != (float)var2 after executing these above instructions.
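A small demonstration of the point (using the values from the question; on typical IEEE-754 implementations this prints "different", because the float keeps fewer significant bits than the double):

#include <stdio.h>

int main(void)
{
    float var1 = 3.1415926535897932;
    double var2 = 3.1415926535897932;
    printf("%s\n", (double)var1 == var2 ? "equal" : "different");
    return 0;
}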
If you need more precision than what long double achieves on your particular C implementation (e.g. your recent GCC compiler, which could be a cross-compiler), consider using some arbitrary-precision arithmetic library such as GMPlib.
I recommend carefully reading the documentation of printf(3), and of every other function that you are using from your C standard library. I also suggest reading the documentation of your C compiler.
You might be interested by static program analysis tools such as Frama-C or the Clang static analyzer. Read also this draft report.
If your C compiler is a recent GCC, compile with all warnings and debug info, so gcc -Wall -Wextra -g and learn how to use the GDB debugger.
Could somebody tell me why the output of us are different?
C allows different compilers/implementations to use different floating point encoding and handle evaluations in slightly different ways.
Precision
The difference in sizeof hints that the 2 implementations may employ different precision. Yet the difference could also be due to padding: extra bytes added to preserve alignment for performance reasons. (On x86, for example, long double is often the same 80-bit extended format whether it is stored in 12 bytes or 16 bytes.)
A better precision assessment is to print epsilon: the difference between 1.0 and the next larger value of the type.
#include <float.h>
printf("%e %e %Le\n", FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON);
Sample result
1.192093e-07 2.220446e-16 1.084202e-19
FLT_EVAL_METHOD
When this is 0, floating-point types evaluate to that type. With other values, like 2, floating-point expressions evaluate using wider types and only at the end save the result to the target type.
printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
Two of the several possible values are indicated below:
FLT_EVAL_METHOD
0: evaluate all operations and constants just to the range and precision of the type;
2: evaluate all operations and constants to the range and precision of the long double type.
Notice that the constants 3.1415926535897932 and 3.141592653589793213456 are both ordinary double constants. Neither has an L suffix that would make it a long double. Both have the same double value of 3.1415926535897931..., so var2 and var3 should get the same value. Yet with FLT_EVAL_METHOD == 2, constants can be evaluated as long double, and that is certainly what happened in "the output from another person" code.
Print FLT_EVAL_METHOD to see that difference.
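Putting these diagnostics together, here is a complete program (a sketch) you can run on both machines to compare sizes, epsilons and evaluation method in one shot:

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("sizeof(long double): %zu\n", sizeof(long double));
    printf("FLT_EPSILON:  %e\n", FLT_EPSILON);
    printf("DBL_EPSILON:  %e\n", DBL_EPSILON);
    printf("LDBL_EPSILON: %Le\n", LDBL_EPSILON);
    printf("FLT_EVAL_METHOD: %d\n", FLT_EVAL_METHOD);
    return 0;
}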

Operation on a variable and a constant giving different results in C

A simple calculation: 3^20%15.
The answer, according to a calculator, is 6.
The following code generates answer 7.
#include <stdio.h>
#include <math.h>

int main() {
    int i = 20;
    printf("%d\n", ((int)pow(3, 20) % 15));
    return 0;
}
If I replace 20 in the printf statement with the variable i, it gives -8 as output.
Assuming that the calculator is correct (or not?), what is the problem in the program?
Thank you.
The result of pow(3,20) can't fit in an int on your platform (or mine for that matter). Because of that, you're experiencing unexpected results.
Changing to a larger integer type such as long long will do the job.
Moreover, pow works with floating-point numbers which aren't represented exactly in memory (look that up). During conversion to an integer, that can cause certain errors. For example, for printf("%fl\n", (pow(3,20))); I get 3486784401.000000l which is not a precise integer value.
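For example, a sketch of the fix (llround, from C99's <math.h>, rounds to the nearest integer instead of truncating, guarding against a pow result that lands just below the exact value):

#include <stdio.h>
#include <math.h>

int main(void)
{
    long long p = llround(pow(3, 20));  /* 3486784401 fits easily in long long */
    printf("%lld\n", p % 15);           /* prints 6 */
    return 0;
}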
What likely happened here is:
In your C implementation, int is 32 bits, with a minimum of −2,147,483,648 and a maximum of 2,147,483,647.
The result of pow(3, 20) is 3486784401. (See note 1 below.)
3486784401 is too large for an int, so the conversion overflows. When a floating-point value is converted to an integer type that cannot represent it, the C standard leaves the behavior undefined; the implementation may do anything.
In (int) pow(3, 20), the conversion to int may have been computed at compile time by producing the maximum, 2,147,483,647. Then the remainder of that divided by 15 is 7.
In (int) pow(3, i), the conversion to int may have been computed at run time by producing the minimum, −2,147,483,648. (Some processors produce such a result for out-of-range conversions.) Then the remainder of that divided by 15 is −8.
In summary:
Your code overflows, so the C standard does not define the behavior.
The compiler likely behaves differently for pow(3, 20) and pow(3, i) because it evaluates the former at compile time and the latter at execution time.
Note
Good implementations of pow return exactly 3486784401 for pow(3, 20). Unfortunately, poor implementations may return inaccurate values such as 3486784401.000000476837158203125 or 3486784400.999999523162841796875.
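A quick way to check which kind of implementation you have (printing with enough significant digits to expose any inexactness; a good pow prints exactly 3486784401):

#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("%.20g\n", pow(3, 20));
    return 0;
}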

Why does pow(n,2) return 24 when n=5, with my compiler and OS?

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main()
{
    int n, i, ele;
    n = 5;
    ele = pow(n, 2);
    printf("%d", ele);
    return 0;
}
The output is 24.
I'm using GNU/GCC in Code::Blocks.
What is happening?
I know the pow function returns a double, but 25 fits in an int type, so why does this code print 24 instead of 25? With n=4, n=6, n=3, or n=2 the code works, but with 5 it doesn't.
Here is what may be happening here. You should be able to confirm this by looking at your compiler's implementation of the pow function:
Assuming you have the correct #include's, (all the previous answers and comments about this are correct -- don't take the #include files for granted), the prototype for the standard pow function is this:
double pow(double, double);
and you're calling pow like this:
pow(5,2);
The pow function goes through an algorithm (probably using logarithms), thus uses floating point functions and values to compute the power value.
The pow function does not go through a naive "multiply the value of x a total of n times", since it has to also compute pow using fractional exponents, and you can't compute fractional powers that way.
So more than likely, the computation of pow using the parameters 5 and 2 resulted in a slight rounding error. When you assigned to an int, you truncated the fractional value, thus yielding 24.
If you are using integers, you might as well write your own "intpow" or similar function that simply multiplies the value the requisite number of times. The benefits of this are:
You won't get into the situation where you may get subtle rounding errors using pow.
Your intpow function will more than likely run faster than an equivalent call to pow.
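A minimal sketch of such a function (the name intpow is just for illustration; note that the result overflows quickly as the arguments grow):

long long intpow(long long base, unsigned int exp)
{
    long long result = 1;
    while (exp-- > 0)
        result *= base;   /* repeated multiplication is exact for integers */
    return result;
}

With this, intpow(5, 2) is exactly 25, and no floating point is involved.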
You want an int result from a function meant for doubles.
You should perhaps use
ele=(int)(0.5 + pow(n,2));
/* ^ ^ */
/* casting and rounding */
Floating-point arithmetic is not exact.
Although small values can be added and subtracted exactly, the pow() function normally works via logarithms (effectively computing exp(y * log(x))), so even if the inputs are both exact, the result is not. Assigning to an int always truncates, so if the inexactness is downward, you'll get 24 rather than 25.
The moral of this story is to use integer operations on integers, and be suspicious of <math.h> functions when the actual arguments are to be promoted or truncated. It's unfortunate that GCC doesn't warn unless you add -Wfloat-conversion (it's not in -Wall -Wextra, probably because there are many cases where such conversion is anticipated and wanted).
For integer powers, it's always safer and faster to use multiplication (division if negative) rather than pow() - reserve the latter for where it's needed! Do be aware of the risk of overflow, though.
When you use pow with variables, its result is a double. Assigning it to an int truncates it.
You can avoid this error by assigning the result of pow to a double or float variable.
So basically:
pow translates to exp(log(x) * y), which produces a result that isn't precisely the same as x^y, just a near approximation as a floating-point value. So for example 5^2 may become 24.9999996 or 25.00002.
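You can reproduce the effect by spelling the computation out by hand (a small demonstration; the exact digits will vary by platform and math library):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* What pow effectively computes for 5^2: very close to, but
       possibly not exactly, 25. */
    printf("%.17g\n", exp(log(5.0) * 2.0));
    return 0;
}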

Optimize C code

I have the following code
void Fun2()
{
    if (X <= A)
        X = ceil(M * 1.0 / A * X);
    else
        X = M * 1.0 / (M - A) * (M - X);
}
I want to program it in fast manner using C99, take into account the following comments.
X and A are 32-bit variables, and I declare them as uint64_t, while M is a static const uint64_t.
This function is called by another function, and the value of A is changed to a new value every n calls.
The optimization needed is in execution time; the CPU is a Core i3, and the OS is Windows 7.
The math model I want to implement is
F = ceil(M / A * X) if X <= A
F = floor(M / (M - A) * (M - X)) if X > A
For clarity and no confusion My previous post was
I have the following code
void Fun2()
{
if(X0<=A)
X0=ceil(Max1*X0);
else
X0=Max2*(Max-X0);
}
I want to program it in fast manner using C99, take into account the following comments.
X0, A, Max1, and Max2 are 32-bit variables, and I declare them as uint64_t, while Max is a static const uint64_t.
This function is called by another function, and the values of Max1, A, and Max2 are changed to random values every n calls.
I work on Windows 7 and in the Code::Blocks IDE.
Thanks
It is completely pointless and impossible to optimize code like this without a specific target in mind. In order to do so, you need the following knowledge:
Which CPU is used.
Which OS is used (if any).
In-depth knowledge of the above, to the point where you know as much about the system as the people who wrote the optimizer for the given compiler port.
What kind of optimization that is most important: execution speed, RAM usage or program size.
The only kind of optimization you can do without knowing the above is on the algorithm level. There are no such algorithms in the code posted.
Thus your question cannot be answered by anyone until more information is provided.
If "fast manner" means fast execution, your first change is to declare this function as an inline one, a feature of C99.
inline void Fun2()
{
...
...
}
I recall that GNU CC has some interesting macros that may help optimizing this code as well. I don't think this is C99 compliant but it is always interesting to note. I mean: your function has an if statement. If you can know by advance what probability has each branch of being taken, you can do things like:
if (likely(X0<=A)).....
If it's probable that X0 is less or equal than A. Or:
if (unlikely(X0<=A)).....
If it's not probable that X0 is less or equal than A.
With that information, the compiler will optimize the comparison and jump so the most probable branch will be executed with no jumps, so it will be executed faster in architectures with no branch prediction.
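These macros are not part of the compiler itself; they are conventionally defined on top of GCC's __builtin_expect, something like this:

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

The !!(x) normalizes the condition to 0 or 1, which is what __builtin_expect compares against.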
Another thing that may improve speed is to use the ?: ternary operator, as both branches assign a value to the same variable, something like this:
inline void Fun2()
{
    X0 = (X0 <= A) ? Max1 * X0 : Max2 * (Max - X0);
}
BTW: why use ceil()? ceil() operates on doubles, rounding a number up to the nearest integer not less than it. If X0 and Max1 are integer numbers, there won't be any fractional part in the result, so ceil() won't have any effect.
I think one thing that can be improved is not to use floating point. Your code mostly deals with integers, so you want to stick to integer arithmetic.
The only floating-point number is Max1. If it's always whole, it can be an integer. If not, you may be able to replace it with two integers: Max1*X0 -> X0 * Max1_num / Max1_den. If you calculate the numerator/denominator once and use them many times, this can speed things up.
I'd transform the math model to
Ceil (M*(X-0) / (A-0)) when X <= A
Floor (M*(X-M) / (A-M)) when X > A
with
Ceil (A / B) = Floor((A + (B-1)) / B)
which, substituted into the first form, gives:
((M * (X - m0) + c) / (A - m0))
where
c = A-1; m0 = 0, when X <= A
c = 0; m0 = M, when X > A
Everything will be performed in integer arithmetic, but it'll be quite tough to calculate the reciprocals in advance.
It may still be possible to use some form of DDA to avoid calculating the division between iterations.
Using the temporary constants c, m0 is simply for unifying the pipeline for both branches as the next step is in pursuit of parallelism.
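Putting the integer-only idea into code, a sketch under the assumptions stated in the question (X and A hold 32-bit values in uint64_t globals, M is a static const; the concrete value of M below is only a placeholder):

#include <stdint.h>

static const uint64_t M = 1000000007u;  /* placeholder value */
static uint64_t X, A;

void Fun2(void)
{
    if (X <= A)
        X = (M * X + (A - 1)) / A;      /* ceil(M*X/A) via the floor identity */
    else
        X = (M * (M - X)) / (M - A);    /* floor(M*(M-X)/(M-A)) */
}

Assuming M, X and A all fit in 32 bits, the 64-bit products cannot overflow, and no floating point (and no ceil()) is needed.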

How to set precision for double data type variables in C? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Most effective way for float and double comparison
In my program some double variables take values of the form 1.00000001. Equality check of these variables with 1 obviously fails.
I wanted to know how can I reduce the precision of double type variables in c, so that equality with integers works.
You should almost never check floating-point values for exact equality, especially when they come from computation. You should check that the absolute value of the difference from the comparison value is less than a certain epsilon. The "precision" of the double is given by the internal number representation, and you can't change it.
Exactly how to choose epsilon can be difficult; there are some comments on that answer discussing it. Read them, but you eventually end up with a practical epsilon-based equality.
There's no portable way, no.
With the GNU C library, you can use this API to change the rounding mode.
But in general, it's better to express it with code, so that your expectations become clear and portable:
#define EQUALITY_EPSILON 1e-3 /* Or whatever. */
if(fabs(x - y) <= EQUALITY_EPSILON)
{
}
You should avoid checking for equality in floating-point comparisons. Instead, use a precision value epsilon like so:
if (fabs(a - b) < epsilon)
{
// treat a and b as equal
}
The choice of epsilon is probably complicated, but my knowledge doesn't go that far.
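One common refinement (a sketch, not a universal answer) is to scale the tolerance by the magnitude of the operands, so the comparison behaves sensibly for values much larger or smaller than 1:

#include <math.h>
#include <float.h>

int nearly_equal(double a, double b)
{
    double scale = fmax(fabs(a), fabs(b));
    /* The factor 4 is arbitrary slack; tune it for your computation. */
    return fabs(a - b) <= 4 * DBL_EPSILON * scale;
}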
Do a Google search for "What Every Computer Scientist Should Know About Floating-Point Arithmetic". It's a well-known article that covers all this stuff.
It doesn't make a lot of sense to 'reduce the precision of double type variables'. You should probably think instead of using functions such as ceil, floor, and round to make an integer out of your double in a way that you have good control over and then use that integer in your comparisons.
In C99:

#include <stdio.h>
#include <math.h>

int main() {
    double d = 1.00000001;
    int i = (int)round(d);
    printf("%d\n", i);
    return 0;
}
