Is it possible to do integer math with the C preprocessor?

Is it possible to do integer math with the C preprocessor (cpp)? For example, subtract one defined value from another so that cpp produces the result of the subtraction.
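One common approach (a sketch using made-up macros A and B): the preprocessor can evaluate constant integer expressions inside #if/#elif directives, and you can abuse #error to make the build report when a computed value is not what you expect. It cannot, however, expand a computed value directly into a printed message.

#define A 10
#define B 3

/* The preprocessor itself evaluates the integer expression (A - B). */
#if (A - B) == 7
/* this branch survives preprocessing */
#endif

/* Fail the build (and so "print" a diagnostic) when the result is unexpected. */
#if (A - B) != 7
#error "A - B is not 7"
#endif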

Related

What does computationally associative mean?

K&R's book on the C language has the following sentence:
A compiler's license to treat mathematically associative operators as computationally associative is revoked.
This is in Appendix C, which summarizes what changed from pre-ANSI C. But I don't know how computationally associative differs from mathematically associative. My guess is that mathematically associative means a * b * c can be grouped either as (a * b) * c (left) or as a * (b * c) (right).
Consider this code:
#include <stdio.h>
int main(void)
{
    double a = 0x1p64; // Two to the power of 64, 18,446,744,073,709,551,616.
    double b = 1;
    double c = -a;
    printf("%g\n", a+b+c);
}
In the C grammar, a+b+c is equivalent to (a+b)+c, so a and b are added first, and then c is added. In the format commonly used for double, a+b yields 2^64, not 2^64+1, because the double format does not have enough precision to represent 2^64+1, so the result of the addition is the ideal mathematical result rounded to the nearest representable value, which is 2^64. Then adding c yields zero, so “0” is printed.
If instead we calculated a+c+b, adding a and c would give zero, and then adding b would give one, and “1” would be printed.
Thus, floating-point operations are not generally associative; a+b+c is not the same as a+c+b.
In ordinary mathematics with real numbers, a+b+c is the same as a+c+b; addition of real numbers is associative.
Prior to standardization, some C compilers would treat floating-point expressions as if operators were associative (for those operators whose counterparts in real-number-arithmetic were associative). The C standard does not permit that in implementations that conform to the standard. Conforming compilers must produce results as if the operations were performed in the order specified by the C grammar.
Some compilers may still treat floating-point operators as associative when operating in non-standard modes, which may be selected by flags or switches passed to the compiler. Also, because the C standard allows implementations to perform floating-point arithmetic with more precision than the nominal type (e.g., when computing a+b+c, it can calculate it as if the types were long double instead of double), that can produce results that are the same as if operations were rearranged. So you can still get results that look like operators have been reordered associatively, depending on the C implementation and the flags used.
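As a rough illustration of the extra-precision point (a sketch only; the exact behavior depends on the implementation, and on systems where long double is no wider than double both lines print 0):

#include <stdio.h>

int main(void)
{
    double a = 0x1p60;  /* 2^60: 2^60 + 1 is not representable as a double */
    double b = 1;
    double c = -a;

    /* Evaluated strictly in double, (a + b) + c gives 0. Forcing a wider
       intermediate (roughly what an implementation with FLT_EVAL_METHOD == 2
       and an 80-bit long double may do on its own) can give 1 instead,
       because 2^60 + 1 fits in the wider significand. */
    printf("%g\n", a + b + c);
    printf("%g\n", (double)((long double)a + b + c));
    return 0;
}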

Can't figure out whether sizeof(long double) in C is 16 bytes or 10 bytes

Although there are some answers on this website, I still can't figure out the meaning of sizeof(long double). Why is the output of printing var3 3.141592653589793115998?
When I run the same code as another person, I get different output. Could somebody help me solve this problem?
My test code:
#include <stdio.h>

int main(void)
{
    float var1 = 3.1415926535897932;
    double var2 = 3.1415926535897932;
    long double var3 = 3.141592653589793213456;
    printf("%zu\n", sizeof(float));   /* %zu is the correct conversion for size_t */
    printf("%zu\n", sizeof(double));
    printf("%zu\n", sizeof(long double));
    printf("%.16f\n", var1);
    printf("%.16f\n", var2);
    printf("%.21Lf\n", var3);
    return 0;
}
Output of my test code:
4
8
16
3.1415927410125732
3.1415926535897931
3.141592653589793115998
The code is the same as the other person's, but their output is:
4
8
12
3.1415927410125732
3.1415926535897931
3.141592741012573213359
Could somebody tell me why our outputs are different?
Floating-point numbers (inside our computers) are not mathematical real numbers.
They have lots of counter-intuitive properties (e.g. (1.0-x) + x in your C code can be different from 1....). For more, read the floating-point-gui.de
Be also aware that a number is not its representation in digits. For example, most of your examples are approximations of the number π (which, intuitively speaking, has an infinite number of digits or bits, since it is a transcendental number, as proven by Ferdinand von Lindemann).
I still can't figure out the meaning of sizeof(long double).
It is the implementation-specific ratio of the number of bytes (or octets) in a long double automatic variable versus the number of bytes in a char automatic variable.
The C11 standard (read n1570 and see this reference) does allow an implementation to have sizeof(long double) be, like sizeof(char), equal to 1. I cannot name such an implementation, but it might be (theoretically) the case on some weird computer architectures (e.g. some DSPs).
Could somebody tell me why our outputs are different?
What makes you think they would be equal?
Practically speaking, floating point numbers are often IEEE754. But on IBM mainframes (e.g. z/Series) or on VAXes they are not.
float var1 =3.1415926535897932;
double var2 =3.1415926535897932;
Be aware that it could be possible to have a C implementation where (double)var1 != var2 or where var1 != (float)var2 after executing the above instructions.
If you need more precision than what long double achieves on your particular C implementation (e.g. your recent GCC compiler, which could be a cross-compiler), consider using some arbitrary-precision arithmetic library such as GMPlib.
I recommend carefully reading the documentation of printf(3), and of every other function that you are using from your C standard library. I also suggest reading the documentation of your C compiler.
You might be interested by static program analysis tools such as Frama-C or the Clang static analyzer. Read also this draft report.
If your C compiler is a recent GCC, compile with all warnings and debug info, so gcc -Wall -Wextra -g and learn how to use the GDB debugger.
Could somebody tell me why our outputs are different?
C allows different compilers/implementations to use different floating-point encodings and to handle evaluations in slightly different ways.
Precision
The difference in sizeof hints that the two implementations may employ different precision. Yet the difference could also be due to padding: in that case, extra bytes are added to preserve alignment for performance reasons.
A better precision assessment is to print epsilon: the difference between 1.0 and the next larger value of the type.
#include <float.h>
printf("%e %e %Le\n", FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON);
Sample result
1.192093e-07 2.220446e-16 1.084202e-19
FLT_EVAL_METHOD
When this is 0, floating-point operations are evaluated in the operands' own type. With other values like 2, floating-point operations are evaluated using wider types, and only the final result is saved to the target type.
printf("FLT_EVAL_METHOD %d\n", FLT_EVAL_METHOD);
Two of several possible values indicated below:
FLT_EVAL_METHOD
0: evaluate all operations and constants just to the range and precision of the type;
2: evaluate all operations and constants to the range and precision of the long double type.
Notice that the constants 3.1415926535897932 and 3.141592653589793213456 are both normally double constants. Neither has an L suffix that would make it a long double constant. Both have the same double value of 3.1415926535897931..., so var2 and var3 should get the same value. Yet with FLT_EVAL_METHOD == 2, constants can be evaluated as long double, and that is certainly what happened in "the output from another person" code.
Print FLT_EVAL_METHOD to see that difference.
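A small sketch of the suffix point (the constant mirrors the question's; the exact output depends on the long double format of your implementation):

#include <stdio.h>

int main(void)
{
    long double a = 3.141592653589793213456;   /* a double constant, then converted to long double */
    long double b = 3.141592653589793213456L;  /* a true long double constant, thanks to the L suffix */
    printf("%.21Lf\n%.21Lf\n", a, b);          /* the two lines differ where long double is wider than double */
    return 0;
}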

Why do both % and fmod() exist in C

I took a quiz in my CS class today and got a question about the modulo operator wrong because I didn't know about the availability of % in C; I've been using fmod(). Why do both exist? Is one better/faster, or do they just deal with different data types?
Modulo division using the % operator in C only works for integer operands and returns an integer remainder of the division.
The function fmod accepts double arguments, meaning that it accepts non-integer values, and returns the floating-point remainder of the division.
Additional note on fmod: how is the remainder calculated in the case of double operands? Thanks @chux for pointing to the documentation on how fmod calculates the remainder of a floating-point division:
The floating-point remainder of the division operation x/y calculated by this function is exactly the value x - n*y, where n is x/y with its fractional part truncated.
The returned value has the same sign as x and is less or equal to y in magnitude.
On the other hand, when the modulo division binary operator (%) was first designed, it was determined by the language designers that it would only support operands of 'integer' types because technically speaking, the notion of 'remainder' in mathematics only applies to integer divisions.
It's because % is an integer operator, while fmod stands for "floating-point mod" and is used for floating-point numbers.
Why do both exist?
Because, historically, they could compute different results even for the same values. These differences occurred with negative values. In essence, fmod() and % were different mathematical functions.
fmod(x,y), since C89, has had the result "the result has the same sign as x and magnitude less than the magnitude of y".
i%j was not so singularly defined. See Remainder calculation for the modulo operation. This allowed code to use existing, varying processors effectively. The div() function was created to address this variability. Ref
By C99 they compute the same result for the same values. A future C could allow 123.4 % 56.7.
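A small comparison under C99 semantics, showing both the sign behavior with negative operands and fmod()'s acceptance of non-integer values:

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", -7 % 3);           /* -1: integer remainder, quotient truncated toward zero */
    printf("%g\n", fmod(-7.0, 3.0));  /* -1: same sign as x */
    printf("%g\n", fmod(5.5, 2.0));   /* 1.5: works with non-integer operands */
    return 0;
}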
% is just the integer modulo operator.
fmod is the floating-point modulo function and can be used as described on MSDN:
https://msdn.microsoft.com/en-us/library/20dckbeh.aspx

Which of the options is correct for assigning nC3 to an integer variable?

Why is the given answer option B? In my opinion, it should be D, since it would clearly calculate the floating-point value and then assign it to an integer variable.
Order of operations and integer overflow. If n is large, option B keeps the intermediate values the smallest. Option D gives a double, which is not precise, and may still overflow in the first part of the calculation.
Option C fails because the intermediate n(n-1)/3 may not be an integer.
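The quiz's actual options are not reproduced here, but the overflow point can be sketched with illustrative orderings (the helper name choose3 is made up):

#include <stdio.h>

/* n*(n-1)/2 is always an integer, and n*(n-1)/2*(n-2) is always divisible by 3,
   so dividing early keeps the intermediates small while staying exact. */
unsigned long long choose3(unsigned long long n)
{
    return n * (n - 1) / 2 * (n - 2) / 3;
}

int main(void)
{
    printf("%llu\n", choose3(100));  /* 161700 */
    /* n*(n-1)*(n-2)/6 computes the same value, but its intermediate product
       overflows sooner; dividing by 3 first, as in n*(n-1)/3, is not even
       exact in general. */
    return 0;
}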

Is there a C rounding function like MATLAB's round function?

I need a C rounding function which rounds numbers like MATLAB's round function. Is there one? If you don't know how MATLAB's round function works, see this link:
MATLAB round function
I was thinking I might just write my own simple round function to match MATLAB's functionality.
Thanks,
DemiSheep
This sounds similar to the round() function from math.h
These functions shall round their argument to the nearest integer value in floating-point format, rounding halfway cases away from zero, regardless of the current rounding direction.
There's also lrint(), which gives you a long int return value, though lrint() and friends obey the current rounding direction; you'll have to set that using fesetround(). The various rounding directions are found here.
Check out the standard header <fenv.h>, specifically the fesetround() function and the four macros FE_DOWNWARD, FE_TOWARDZERO, FE_TONEAREST and FE_UPWARD. These control how floating-point values are rounded to integers. Make sure your implementation (i.e., C compiler / C library) actually supports this (by checking the return value of fesetround() and the documentation of your implementation).
Functions honoring these settings include (from <math.h>):
llrint()
llrintf()
llrintl()
lrint()
lrintf()
lrintl()
rint()
rintf()
rintl()
llround()
llroundf()
llroundl()
lround()
lroundf()
lroundl()
nearbyint()
nearbyintf()
nearbyintl()
depending on your needs (parameter type and return type, with or without inexact floating point exception).
NOTE: round(), roundf() and roundl() do look like they belong in the list above, but these three do not honor the rounding mode set by fesetround()!!
Refer to your most favourite standard library documentation for the exact details.
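A minimal sketch of the interaction between fesetround() and these functions (strictly conforming code should also enable #pragma STDC FENV_ACCESS ON, and on many systems you must link with -lm):

#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    if (fesetround(FE_DOWNWARD) != 0)
        printf("FE_DOWNWARD not supported\n");

    printf("%.1f\n", rint(2.5));   /* 2.0: rint() honors the current rounding mode */
    printf("%.1f\n", round(2.5));  /* 3.0: round() always rounds halfway cases away from zero */
    return 0;
}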
No, C (before C99) doesn't have a round function. The typical approach is something like this:
double sign(double x) {
    if (x < 0.0)
        return -1.0;
    return 1.0;
}

double round(double x) {
    /* shift by half toward the value's sign, then truncate */
    return (double)(long long)(x + 0.5 * sign(x));
}
This rounds to an integer, assuming the original number is in the range that can be represented by a long long. If you want to round to a specific number of places after the decimal point, that can be a bit harder. If the numbers aren't too large or too small, you can multiply by 10^N, round to an integer, and divide by 10^N again (keeping in mind that this may introduce some rounding errors of its own).
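A sketch of that scale-round-unscale idea, assuming C99's round() is available (the helper name round_places is made up, and it is only reliable while x * 10^N stays comfortably within double's exactly representable range):

#include <math.h>

/* Round x to n decimal places by scaling; the result is still limited by
   what double can represent (0.1 itself is not exact, for example). */
double round_places(double x, int n)
{
    double scale = pow(10.0, n);
    return round(x * scale) / scale;
}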
If there isn't a round() function in your standard library, you can, when dealing with floating-point numbers, look at the digit just after the place you want to round to and compare it with 5: if it is less than 5, floor() the number; if it is 5 or greater, floor() the number and then add 1.
I apologize for any inefficiency tied to this.
If I'm not mistaken, you are looking for something like floor and ceil, and you will find them in <math.h>.
The documentation specifies
Y = round(X) rounds the elements of X to the nearest integers.
Note the plural: as with regular MATLAB operations, it operates on all elements of a matrix. The C equivalents posted above only deal with a single value at a time. If you can use C++, check out std::valarray. If not, then the good ol' for loop is your friend (see the sketch below).
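For completeness, a minimal C sketch of the element-wise idea (the function name round_all is made up):

#include <math.h>
#include <stddef.h>

/* Round every element of an array in place, mimicking MATLAB's element-wise round(). */
void round_all(double *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = round(a[i]);
}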
