C precomputed expressions

I would like to switch processing in a library routine based on whether a parameter exceeds a function of a system limit, for example whether (input - 1) <= sqrt(LONG_MAX).
As I see it, I have three choices for implementing this in C:
1. Evaluate the function in each library call. Expensive, though some compilers can probably optimise out math.h function calls with constant parameters.
2. Define the result of the function call as a preprocessor macro. Looking at glibc's limits.h, this would require two #defines based on the __WORDSIZE value. I don't think this would be portable.
3. Create a global variable that is set to the result of the function in an initialiser routine. This requires the library user to always run an init routine before any other library routine.
I do not really like any of these approaches. A compromise between 1 and 3 would be to run the init internally if it has not been run previously. This spares the user the need to do it and reduces the runtime overhead to one boolean check.
Is there some more elegant solution?

"Elegant" is not really a well defined term, you would have been better off specifying something more measurable, like "speed".
If speed is indeed the goal, an the system parameter is one that doesn't change at runtime, you can have a portable solution like:
#undef SQRT_LM
#if LONG_MAX == 64
#define SQRT_LM 8
#endif
#if LONG_MAX == 256
#define SQRT_LM 16
#endif
/* ... further cases for other LONG_MAX values ... */
#ifndef SQRT_LM
#error Weird LONG_MAX value, please adjust code above.
#endif
Then your code can simply use SQRT_LM as a constant value.
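For instance, with the two common widths of long, the table might look like the sketch below (the floor-of-square-root values are my own precomputation and worth double-checking):
#include <limits.h>

#undef SQRT_LM
#if LONG_MAX == 2147483647L
#define SQRT_LM 46340L                  /* floor(sqrt(2^31 - 1)) for 32-bit long */
#elif LONG_MAX == 9223372036854775807L
#define SQRT_LM 3037000499L             /* floor(sqrt(2^63 - 1)) for 64-bit long */
#else
#error Unexpected LONG_MAX value, please add a case above.
#endif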
The 1/3 combo, along the lines of:
void doSomething(int x) {
    static long sqrt_lm = -1;
    if (sqrt_lm == -1)
        sqrt_lm = sqrt(LONG_MAX);
    // Now can use sqrt_lm freely
}
is not really as efficient as forcing the user to explicitly call an init function, since the above code still has to perform the if on every call.
But, as stated, it really depends on what you mean by "elegant". I tend to optimise for readability first and only worry about performance if it becomes a serious issue.
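For completeness, a minimal sketch of the explicit-init variant (lib_init, lib_limit and lib_routine are placeholder names):
#include <limits.h>
#include <math.h>

static long lib_limit;              /* set once by lib_init() */

void lib_init(void)                 /* must be called before lib_routine() */
{
    lib_limit = (long)sqrt((double)LONG_MAX);
}

void lib_routine(long input)
{
    if (input - 1 <= lib_limit) {
        /* common case */
    } else {
        /* handle parameters above the limit */
    }
}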

Use a static variable in the function:
#include <assert.h>
#include <limits.h>

void foo(int input)
{
    static const long limit = __builtin_sqrt(LONG_MAX);
    assert(input < limit);
}
So limit is computed only once rather than on every call (with a constant initializer like this it is in fact folded at compile time). This requires that the initializer be a constant expression, which is why I use GCC's __builtin_sqrt(); regular sqrt() will be rejected (by GCC, at least).

Isn't (input - 1) <= sqrt(LONG_MAX) the same as input <= sqrt(LONG_MAX) + 1, which is just a simple comparison of a value against a constant?
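In code, assuming a precomputed integer constant SQRT_LONG_MAX equal to floor(sqrt(LONG_MAX)) (the value below is an assumption that holds for 64-bit long), the whole test collapses to one comparison:
#define SQRT_LONG_MAX 3037000499L   /* assumed floor(sqrt(LONG_MAX)) for 64-bit long */

int within_limit(long input)
{
    /* equivalent to (input - 1) <= sqrt(LONG_MAX) for integer input */
    return input <= SQRT_LONG_MAX + 1;
}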

Related

Programmatically determine if a variable's value was computed at compile time or at run time

Is there a way in C to programmatically determine whether a variable's value was computed at compile time or at run time?
Example:
const double a = 2.0;
const double b = 3.0;
double c1 = a / b; // done at compile time (constant folding / propagation)
double c2 = *(volatile double*)&a / *(volatile double*)&b; // done at run time
compute_time_t c1_ct = compute_time(c1);
compute_time_t c2_ct = compute_time(c2);
assert(c1_ct == COMPILE_TIME);
assert(c2_ct == RUN_TIME);
In C (as in, defined by the language standard), no, there is no way.
There are, however, compiler-specific ways to get really close to what you want. The most famous, as @Nate Eldredge notes in the comments, is the built-in function __builtin_constant_p(), available in GCC and Clang.
Here's the relevant excerpt from the GCC doc:
Built-in Function: int __builtin_constant_p (exp)
You can use the built-in function __builtin_constant_p to determine if a value is known to be constant at compile time and hence that GCC can perform constant-folding on expressions involving that value. The argument of the function is the value to test. The function returns the integer 1 if the argument is known to be a compile-time constant and 0 if it is not known to be a compile-time constant. A return of 0 does not indicate that the value is not a constant, but merely that GCC cannot prove it is a constant with the specified value of the -O option.
Note that this function does not guarantee to detect all compile-time constants, but only the ones that GCC is able to prove as such. Different optimization levels might change the result returned by this function.
This built-in function is widely used in glibc for optimization purposes (example), and usually the result is only trusted when it's 1, assuming a non-constant otherwise:
void somefunc(int x) {
    if (__builtin_constant_p(x)) {
        // Perform optimized operation knowing x is a compile-time constant.
    } else {
        // Assume x is not a compile-time constant.
    }
}
Using your own example:
const double a = 2.0;
const double b = 3.0;
double c1 = a / b; // done at compile time (constant folding / propagation)
double c2 = *(volatile double*)&a / *(volatile double*)&b; // done at run time
assert(__builtin_constant_p(c1));
assert(!__builtin_constant_p(c2));
You ask,
Is there a way in C to programmatically determine whether a variable's value was computed at compile time or at run time?
No, there is no way to encode such a determination into the source of a strictly conforming C program.
Certainly C does not require values to be tagged systematically in a way that distinguishes among them based on when they were computed, and no C implementation I have ever heard of or imagined does that, so such a determination cannot be based on the values of the expressions of interest. Furthermore, all C function arguments are passed by value, so the hypothetical compute_time() cannot be implemented as a function because values are all it would have to work with.
compute_time() also cannot be a macro, because macros can work only with (preprocessing) tokens, for example the identifiers c1 and c2 in your example code. Those are opaque to the preprocessor; it knows nothing about values attributed to them when they are evaluated as expressions according to C semantics.
And there is no operator that serves the purpose.
Standard C provides no other alternatives, so if the question is about the C language and not any particular implementation of it, then that's the end of the story. Moreover, although it is conceivable that a given C implementation would provide your compute_time() or a functional equivalent as an extension, I am unaware of any that do. (However, see @MarcoBonelli's answer for an example of a similar, but not identical, extension.)

Evaluate macro parameter only once

I need to write a macro which traps any invalid index i for an array of length n. Here is what I got so far:
#define TRAP(i, n) (((unsigned int) (i) < (n))? (i): (abort(), 0))
The problem with this definition, however, is that the index expression i is evaluated twice; in the expression a[TRAP(f(), n)], for instance, f may have a side effect or take a long time to execute. I cannot introduce a temporary variable since the macro needs to expand to an expression. Also, defining TRAP as an ordinary function implies a run-time overhead and makes it harder for the compiler to optimize away the trap.
Is there a way to rewrite TRAP so that i is evaluated only once?
Edit: I'm using ANSI C89
You can evaluate once, and use the result, by doing something like this:
#define TRAP2(i, n) ({unsigned int _i = (i); _i < (n)? _i: (abort(), 0);})
This is a GCC-specific solution (it uses a statement expression) that will compile when used as the RHS of an assignment. It defines a (very) local variable, which might hide a prior definition of another variable, but that doesn't matter as long as you don't try to use the prior variable inside the macro. But as people say, why do this in the first place?
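As a usage sketch (the array a and the side-effecting index function f are made up for illustration), the index expression is evaluated exactly once inside the statement expression:
#include <stdlib.h>

#define TRAP2(i, n) ({unsigned int _i = (i); _i < (n) ? _i : (abort(), 0);})

static int a[8];
static unsigned int calls;

static unsigned int f(void)         /* has a side effect: counts its calls */
{
    ++calls;
    return 3;
}

int main(void)
{
    int v = a[TRAP2(f(), 8)];       /* f() is evaluated exactly once; calls == 1 */
    return v;
}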
Use the macro TRAP when the index expression doesn't contain a function call and use a (non-macro) function trap when it does. This way the function call overhead only occurs in the rarer latter case.
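A sketch of such a trap helper (C89-compatible; the name follows the suggestion above):
#include <stdlib.h>

/* Checks the already-evaluated index i against n and aborts if out of range,
   so a[trap(f(), n)] evaluates f() exactly once. */
static unsigned int trap(unsigned int i, unsigned int n)
{
    if (i >= n)
        abort();
    return i;
}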

"Type" of symbolic constants?

When is it appropriate to include a type conversion in a symbolic constant/macro, like this:
#define MIN_BUF_SIZE ((size_t) 256)
Is it a good way to make it behave more like a real variable, with type checking?
When is it appropriate to use the L or U (or LL) suffixes:
#define NBULLETS 8U
#define SEEK_TO 150L
You need to do it any time the default type isn't appropriate. That's it.
Typing a constant can be important in places where the automatic conversions are not applied, in particular for functions with a variable argument list:
printf("my size is %zu\n", MIN_BUF_SIZE);
could easily crash if the widths of int and size_t differ and the cast were omitted.
But your macro leaves room for improvement. I'd do that as
#define MIN_BUF_SIZE ((size_t)+256U)
(see the little + sign, there?)
Written like that, the macro can still be used in preprocessor expressions (with #if). This is because the preprocessor replaces the unknown identifier size_t with 0, so (size_t)+256U evaluates to (0)+256U, i.e. an unsigned 256, there too.
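A small sketch of why that form is handy: the same constant works both in a preprocessor conditional and as a typed constant in C code (N_BUFFERS and buffer are made up for illustration):
#include <stddef.h>

#define MIN_BUF_SIZE ((size_t)+256U)

#if MIN_BUF_SIZE < 512U             /* the preprocessor sees ((0)+256U) */
#define N_BUFFERS 4
#else
#define N_BUFFERS 2
#endif

static char buffer[N_BUFFERS][MIN_BUF_SIZE];   /* the compiler sees a size_t constant */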
#define is just token replacement done by the preprocessor.
Whatever you write in a #define is substituted as replacement text before compilation.
So either form is correct; for example:
#define A a
int main(void)
{
    int A;   // A is replaced by a before compilation
}
There are many variations of #define, such as variadic or multi-line macros, but the main purpose of #define is the one explained above.
Explicitly indicating the type of a constant was more relevant in Kernighan and Ritchie C (before ANSI/Standard C and its function prototypes came along).
Function prototypes like double fabs(double value); now allow the compiler to generate the proper type conversions when needed.
You still want to indicate the constant's size and signedness explicitly in some cases. The examples that come to my mind right now are bit masks:
#define VALUE_1 ((unsigned short) -1) might be 16 bits of ones while #define VALUE_2 ((unsigned char) -1) might be 8. Therefore, given a long x, x & VALUE_1 and x & VALUE_2 would give very different results.
This would also be the case for the L or LL suffixes: the constants would use different numbers of bits.
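A small sketch of the suffix point: without the LL suffix the shift below would be performed in int (typically 32 bits) and be undefined, while 1LL makes the whole constant at least 64 bits wide:
#include <stdio.h>

#define BIT40 (1LL << 40)           /* 1 << 40 would be undefined with a 32-bit int */

int main(void)
{
    printf("%lld\n", BIT40);        /* prints 1099511627776 */
    return 0;
}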

C non-trivial constants

I want to make several constants in C with #define to speed up computation. Two of them are not simply trivial numbers: one is a right shift, the other is a power. math.h gives the function pow() for doubles, whereas I need powers for integers, so I wrote my own function ipow so I wouldn't need to cast every time.
My question is this: one of the #define constants I want to make is a power, say ipow(M, T), where M and T are also #define constants. ipow is a function in the actual code, and this actually seems to slow things down when I run the code (is it running ipow every time the constant is mentioned?). However, when I use the built-in pow function and just do (int)pow(M, T), the code is sped up. I'm confused as to why this is, since ipow and pow are just about as fast as each other.
On a more general note, can I define constants with #define using functions from the actual code? The above example has me confused as to whether this speeds things up or actually slows things down.
(int)pow(M,T) is faster than using your function ipow because, even if they do the same work, ipow adds the overhead of an actual function call (pushing arguments, etc.).
Also, yes, if you #define it this way, ipow / pow / whatever is called every time; the preprocessor has no idea what it is doing, it is basically text replacement. Your constant is simply replaced by the text ipow(M,T), and so it is calculated every time you use the constant.
Finally, for your case, a solution might be to use a global variable instead of a #define constant. That way you can compute it once at the beginning of your program and then use it later without recomputing it.
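A minimal sketch of that approach (M, T, ipow and init_constants are placeholder names):
#define M 3
#define T 10

static long m_pow_t;                /* computed once, then used like a constant */

static long ipow(long base, unsigned exp)
{
    long result = 1;
    while (exp--)
        result *= base;
    return result;
}

void init_constants(void)           /* call once at program start-up */
{
    m_pow_t = ipow(M, T);
}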
You don't need C++ to do metaprogramming. If you have a C99-compatible C compiler and preprocessor, you can use P99 with something like the following:
#include "p99_for.h"
#define P00_POW(NAME, I, REC, Y) (NAME) * (REC)
#define IPOW(N, X) (P99_FOR(X, N, P00_POW, P00_NAM, P99_REP(N,)))
For example, IPOW(4, A) is then expanded to ((A) * ((A) * ((A) * (A)))). The only things that you should watch are:
- N must be (or expand to) a plain decimal constant with no suffix such as U or L
- X should not have side effects, since it is evaluated several times
Yes, ipow is getting run every time the constant is mentioned. The C preprocessor is simply replacing all mentions of the constant with what you #define'd it as.
EDIT:
If you really want to compute these integers at compile time, you could try using Template Metaprogramming. This requires C++, however.
I don't think this is possible with the C preprocessor, because it doesn't support recursion.
(You can use template metaprogramming if you are using C++.)
I suspect that (int)pow(M,T) is faster than (int)ipow(M,T) because the compiler has special knowledge of the pow() function (as an intrinsic). I wouldn't be surprised if, given constant arguments, it elides the function call altogether when pow() is used.
However, since it has no special knowledge of ipow(), it doesn't do the same, and ends up actually calling the function.
You should be able to verify whether or not this is happening by looking at the assembly generated in a debugger or by having the compiler create an assembly listing. If that's what's happening, and your ipow() function is nothing more than a call to pow() (casting the result to an int), you might be able to convince your compiler to perform the same optimization for ipow() by making it an inline function.
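A sketch of that suggestion (whether the compiler actually folds the constant arguments still depends on the optimisation level):
#include <math.h>

/* A thin inline wrapper: the compiler can still see the pow() intrinsic
   and may fold calls with constant arguments just as it does for pow(). */
static inline int ipow(double base, double exp)
{
    return (int)pow(base, exp);
}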
Your ipow isn't faster, since it is just an ordinary function call.
Also, compilers are known to optimise standard C library routines and math functions.
Most likely the compiler can see that the parameters are constant and calculate the value of the #define at compile time.
Internally, calls with a constant exponent are effectively reduced to something like this:
#define pow2(x) ( (x) * (x) )
#define pow3(x) ( (x) * (x) * (x) )
#define pow4(x) ( pow2(x) * pow2(x) )
#define pow5(x) ( pow4(x) * (x) )
#define pow6(x) ( pow3(x) * pow3(x) )
...
One workaround is to use C++ template metaprogramming to get better run-time performance:
template<class T, T base, T exp>
struct ipow
{
    static const T value = base * ipow<T, base, exp - 1>::value;
};

template<class T, T base>
struct ipow<T, base, 0>
{
    static const T value = 1;
};
You would use the above struct as follows:
ipow<size_t, M, T>::value
The C preprocessor will not evaluate a function call to a C function such as ipow or pow at compile time, it merely does text replacement.
The preprocessor does have a concept of function-like macros; however, these are not so much 'evaluated' as text-replaced. It would be tempting to think you could write a recursive function-like macro to self-multiply a constant to raise it to a power, but in fact you can't: due to the non-evaluation of macro bodies, you won't actually get continually recursive calculation when the macro refers to itself.
For your shift operation, a #define involving constants and the shift operator will get text-replaced by the preprocessor, but the constant expression will be evaluated during compilation, so this is efficient. In fact it's very common in hardware interfaces, e.g. #define UART_RXD_READY (1 << 11) or something like that.

Return an int or pass an int pointer -- what's better?

Which of these two is better?
void SetBit(int *flag, int bit)
{
    *flag |= 1 << bit;
}
Or
int SetBit(int flag, int bit)
{
    flag |= 1 << bit;
    return flag;
}
I like the second one because it doesn't have any side effects. If you want to modify flag, you can simply assign the result back to it:
flag = SetBit(flag, 4);
Neither.
int SetBit(int flag, int bit)
{
    return flag | 1 << bit;
}
It depends.
The first one is imperative style, the second one is functional style.
If you want to do
SetBit(SetBit(... SetBit(flag, b1), b2),...), bn)
do the second one.
If you want
SetBit(&flag, b1)
SetBit(&flag, b2)
...
SetBit(&flag, bn)
do the first one. In C, I would prefer the latter (i.e., the imperative one). In other languages/contexts, the former may be a good idea.
I would use a macro:
#define BIT_SET(a, bit) ((a) | (1 << (bit)))
To be honest, I think this just encourages people to use "magic numbers" as flags:
SetBit(&flags, 12); // 12 is the flag for Super mode
What you actually want is named constants:
#define SUPERMODE_FLAG 12
...
SetBit(&flags, SUPERMODE_FLAG);
But if you're going to use named constants, you might as well name masks rather than bit numbers, in which case the operation is so simple there's no need for a helper function:
#define SUPERMODE_MASK (1 << 12)
....
flags |= SUPERMODE_MASK;
In the unusual case that you're manipulating individual bits by number, without knowing what they mean, then I prefer the second for the same reason as Kristo - I find side-effect-free functions slightly easier to reason about than mutators.
I like the second one better...
However, I'd recommend changing the name of your function to be more descriptive (assuming this is the actual name). "SetBit" doesn't do much to describe what the function does or returns :)
The second is better because it won't crash.
The first one could crash if you pass in a NULL or otherwise invalid pointer, so you'd need some code to check for and handle that.
It depends.
In that case, either way would really work. But I can think of two special cases where I would favor using a pointer.
If the type you're passing in is large and a value copy would be expensive, use a pointer for performance reasons.
If you need to return something else, such as a status code or a success/failure indication, then you need the pointer parameter so that the return value stays free for that purpose.
I personally think that outside of those situations, the second one (pass/return by value) is clearer and slightly more readable.
The first is OK if the function is going to be inlined. Without inlining there is a bit too much overhead in passing around pointers to ints. (On 64-bit LP64 architectures an int is 4 bytes while a pointer is 8.)
As for the second: the function name SetBit() is going to cause some mild confusion. The name implies that the function changes something, while in fact it doesn't. As long as you are OK with the name, it is performance-wise the better option.
The Linux kernel, for example, uses the pointer variant for many similar things, since the memory location of the datum is often important or required for portability. But it either makes all such functions preprocessor macros or marks them with GCC's always_inline attribute. For user-land, plain application programming, I'd say the second should be preferred. Only pick a better name.
If the only pattern for using the function will be "variable = doSomething(variable);" and performance is not an issue, I would consider "doSomething(&variable);" to be more legible. The only time I would favor the former is if the destination were sometimes something other than the source, or if performance were crucial and the compiler could not handle the pointer form efficiently (common on embedded systems).
It should be noted that the latter format allows something the former does not. In VB-style:
Sub OrBits(ByRef N as Integer, ByVal Bits as Integer)
    Dim OldValue as Integer
    Do
        OldValue = N
    Loop While Threading.Interlocked.CompareExchange(N, OldValue Or Bits, OldValue) <> OldValue
End Sub
The effect of this code will always be to OR the specified bits into N, even if something else changes N while the function is running. It is not possible to achieve such behavior with the read-and-return strategy.
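For comparison, a sketch of the same idea with C11 atomics (or_bits is a made-up name):
#include <stdatomic.h>

/* OR bits into *n atomically: retry until no other thread has changed *n
   between our read and our write. */
void or_bits(_Atomic int *n, int bits)
{
    int old = atomic_load(n);
    while (!atomic_compare_exchange_weak(n, &old, old | bits))
        ;                           /* on failure, old is reloaded; retry */
}
C11 also provides atomic_fetch_or, which performs this read-modify-write in a single call.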
Pass an int to save time dereferencing the pointer.
