Compiling in constants at runtime - c

I have a program that does some scientific simulation work, and as such it needs to run quickly.
When I started out I was somewhat lazy: I decided to allow inputting the constants later, and just used #define macros for them all.
The problem is that when I tried changing that, it got a lot slower. For example, changing
#define WIDTH 8
//..... code
to
#define WIDTH width
int width;
//... main() {
width=atoi(argv[1]);
//...... code
resulted in something that used to take 2 seconds now taking 2.8. That's just for one of about a dozen constants, and I can't really afford even that. There is also probably some math with these constants that currently gets compiled away.
So my question is whether there is some way (a bash script?) of compiling the constants I want to use into the program at run time. It's OK if any machine that needs to run this has to have a compiler on it. It currently compiles with a standard (quite simple) Makefile.
This would also allow -march=native, which should help a little.
I suppose my question is also whether there's a better way of doing this entirely...

At least if I understand your question correctly, what I'd probably do would be something like:
#ifndef WIDTH
#define WIDTH 8
#endif
(and likewise for the other constants you want to be able to modify). Then, in your makefile(s), add options to pass the correct definitions to the compiler when/if necessary, so if you wanted to change WIDTH you'd have something like:
CFLAGS += -DWIDTH=12
and when you compile the file, this would be used as the definition for WIDTH, but if you didn't define a value in the makefile, the default in the source file would be used.
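For illustration, here's a minimal sketch of how the pieces fit together (file and constant names are just examples, not taken from the question):
/* grid.c -- WIDTH defaults to 8 unless the build overrides it,
   e.g.  gcc -O2 -DWIDTH=12 grid.c -o grid */
#include <stdio.h>

#ifndef WIDTH
#define WIDTH 8   /* default used when no -DWIDTH=... is given */
#endif

int main(void)
{
    /* WIDTH is a compile-time constant either way, so the compiler
       can still fold arithmetic such as WIDTH * WIDTH. */
    printf("area of a %d x %d grid: %d\n", WIDTH, WIDTH, WIDTH * WIDTH);
    return 0;
}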

The difference is that when the macro is just an integer literal, the compiler can often do a lot of the math at compile time. A trivial example is if you had:
int x = WIDTH * 3;
the compiler would actually emit:
int x = 24;
with no multiply at all. If you change WIDTH to a variable, the compiler can't do that, because the value could be anything. So there is almost certainly going to be some difference in speed (how much depends on the circumstances, and it is often so little that it doesn't matter).
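Another way to see the effect (a hypothetical snippet, not from the question's code; compile with gcc -O2 -S and read the assembly): with a constant power-of-two WIDTH the compiler can also turn division and modulo into shifts and masks, which it cannot do for a run-time value.
#define WIDTH 8   /* pretend this came from the header or from -DWIDTH=8 */

unsigned row_of(unsigned i)
{
    return i / WIDTH;   /* with WIDTH fixed at 8 this compiles down to i >> 3 */
}

unsigned col_of(unsigned i)
{
    return i % WIDTH;   /* and this compiles down to i & 7 */
}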
I recommend making variables of the things that genuinely need to be variable, and then profiling to find the hot spots in the code. Almost always, it's the algorithm that slows you down the most. Once you find out which blocks of code you are spending the most time in, you can figure out ways to make that part faster.
The only real solution would be to have a separate header file with the constants, which a script could generate before compiling the program. Or, if there aren't too many, just pass them directly to gcc. This of course sacrifices up-front speed for run-time speed. I do wonder, though: if a difference of 0.8 seconds at run time is unaffordable, how is compiling the program each run (which will surely take more than a second) affordable?
The script could be something as simple as this:
#!/bin/sh
echo "#define WIDTH $1" > constants.h
echo "#define HEIGHT $2" >> constants.h
gcc prog.c -o prog && ./prog
where prog.c includes constants.h. Or something like this (with no extra header):
#!/bin/sh
gcc -DWIDTH=$1 -DHEIGHT=$2 prog.c -o prog && ./prog

You could store the relevant defines into a separate header file constants.h:
#ifndef CONSTANTS_H
#define CONSTANTS_H
#define WIDTH 8
...other defines...
#endif
If you take care that the header is included only once, then you can even omit the include guards and have a small file with only the relevant stuff. I would go this way if the program is used by others who need to change the constants. If you're the only one using it, then Jerry's method is just fine.
EDIT:
Reading your comment: this separate header could easily be generated by a little tool invoked from the makefile before compilation.
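For example, such a tool could be as small as this (a purely illustrative sketch; the names and the set of constants are assumptions):
/* gen_constants.c -- writes constants.h from command-line arguments,
   e.g.  ./gen_constants 12 34  before running make on the main program */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s WIDTH HEIGHT\n", argv[0]);
        return 1;
    }
    FILE *out = fopen("constants.h", "w");
    if (!out) { perror("constants.h"); return 1; }

    fprintf(out, "#ifndef CONSTANTS_H\n#define CONSTANTS_H\n");
    fprintf(out, "#define WIDTH %d\n", atoi(argv[1]));
    fprintf(out, "#define HEIGHT %d\n", atoi(argv[2]));
    fprintf(out, "#endif\n");
    return fclose(out) == 0 ? 0 : 1;
}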

Related

How to check macros defined in a .so? For a function I'd use nm; is there a way to do the same for macros?

I have code like this in mylib.h, which I then use to build mylib.so.
Is there a way to check how MY_MACROS is defined in .so?
#ifdef SWITCH_CONDITION
#define MY_MACROS 0
#else
#define MY_MACROS 1
#endif
If that would be a function, I'd simply do
nm mylib.so | grep myfunction
Is there a way to do the same for macros?
P.S. There should be a way, because of this:
> grep MY_MACROS mylib.so
> Binary file mylib.so matches
In general there is no way to do this sort of thing for macros. (But see more below.)
Preprocessor macros are theoretically a compile-time concept. In fact, in the early implementations of C, the preprocessor was -- literally -- a separate program, running in a separate process, converting C code with #include and #define and #ifdef into C code without them. The actual C compiler saw only the "preprocessed" code.
Now, theoretically a compiler could somehow save away some record of macro definitions, perhaps to aid in debugging. I wasn't aware of any that did this, although evidently those using the DWARF format actually do! See comments below, and this answer.
You can always write your own, explicit code to track the definition of certain macros. For example, I've often written code along the lines of
void print_version()
{
printf("myprogram version %s", VERSION_STRING);
#ifdef DEBUG
printf(" (debug version)");
#endif
printf("\n");
}
Some projects have rather elaborate mechanisms to keep track of the compilation switches which are in effect for a particular build. For example, in projects managed by a configure script, there's often a single file config.status containing one single record of all the compilation options, for posterity.
Yes, but it requires debugging info.
You can compile your code with -g3:
$ gcc -g3 -shared -fPIC test.c -o test.so
and then run strings on the resulting binary:
$ strings test.so
...
__DEC32_EPSILON__ 1E-6DF
MY_MACROS 1
__UINT_LEAST32_TYPE__ unsigned int
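If you only care about one particular macro, you can narrow the output down with something like strings test.so | grep MY_MACROS (this still assumes the library was built with -g3).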

Strange compiler speed optimization results - IAR compiler

I'm experiencing a strange issue when I try to compile two source files that contain some important computing algorithms that need to be highly optimized for speed.
Initially, I have two source files, let's call them A.c and B.c, each containing multiple functions that call each other (functions from a file may call functions from the other file). I compile both files with full speed optimizations and then when I run the main algorithm in an application, it takes 900 ms to run.
Then I notice the functions from the two files are mixed up from a logical point of view, so I move some functions from A.c to B.c; let's call the new files A2.c and B2.c. I also update the two headers A.h and B.h by moving the corresponding declarations.
Moving function definitions from one file to the other is the only modification I make!
The strange result is that after I compile the two files again with the same optimizations, the algorithm now takes 1000 ms to run.
What is going on here?
What I suspect happens: when function f calls function g, being in the same file allows the compiler to replace the actual function call with inlined code as an optimization. This is no longer possible when the definitions are not compiled at the same time.
Am I correct in my assumption?
Aside from regrouping the function definitions as they were before, is there anything I can do to obtain the same optimization as before? I researched and it seems it's not possible to compile two source files simultaneously into a single object file. Could the order of compilation matter?
As to whether your assumption is correct, the best way to tell is to examine the assembler output, such as by using gcc -S or gcc -save-temps. That will be the definitive way to see what your compiler has done.
As to compiling two C source files into a single object file, that's certainly doable. Just create an AB.c as follows:
#include "A.c"
#include "B.c"
and compile that.
Barring things that should be kept separate (such as static items which may exist in both C files), that should work (or at least work with a little modification).
However, remember the optimisation mantra: Measure, don't guess! You're giving up a fair bit of encapsulation by combining them so make sure the benefits well outweigh the costs.
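If your toolchain supports it, link-time (or cross-module) optimization is another way to recover cross-file inlining without physically merging the sources; with gcc or clang that would be something like gcc -O2 -flto A2.c B2.c -o prog. IAR has its own multi-file/cross-module optimization setting, but the exact option depends on the product, so check its documentation. Either way, measure whether it actually closes the 100 ms gap.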

Is it possible to achieve infinite compilation time in C (i.e. without templates)?

So, the title. It is generally known that it is possible to write a C++ program which would (in theory) require infinite time to compile. But is it possible to write such a program in plain C? Or is there a way to slow compilation down to at least several minutes with a small program?
Here is the example you asked for, with exponentially increasing macros.
#define FOO i++ // Substitute any statement here for i++
#define FOO1 FOO ; FOO
#define FOO2 FOO1 ; FOO1
#define FOO3 FOO2 ; FOO2
#define FOO4 FOO3 ; FOO3
#define FOO5 FOO4 ; FOO4
#define FOO6 FOO5 ; FOO5
// Keep going however much you want
...
#define FOO40 FOO39 ; FOO39
volatile int i;
int main(void)
{
FOO40; // This expands to 2^40 statements.
}
I used FOO18 for a timing test to see what would happen. I tested both the preprocessor time and the compilation time separately:
(Preprocessor phase)
time gcc -E foo.c -o foo.i
1.7 seconds
(Compilation phase)
time gcc foo.i -o foo
21 seconds
Out of curiosity I kept trying bigger and bigger values. Unfortunately, at some point the compiler ran out of memory (the preprocessor was fine). I got this error:
cc1: out of memory allocating 16842751 bytes after a total of 403505152 bytes
At FOO16 with -O2, I was able to get a 2:23 compilation time without running out of memory. So if you want to get an even longer compilation time, first find out how many statements you can put into a single function without running out of memory (FOO16 for me). Then make several functions, like this:
int main(void)
{
FOO16;
}
void bar1(void)
{
FOO16;
}
void bar2(void)
{
FOO16;
}
// etc...
Include recursively
#include __FILE__
Most compilers will probably bail out early, but theoretically, this would cause infinite compilation.
Include (or compile directly, or link against) a device instead of a file
#include "/dev/tty"
On systems that support it, this causes the compiler to wait for input. Similarly, you could use a named pipe.
Find a bug in the compiler
It is possible the compiler has a logic error that will cause it to loop forever.
Experimentally, if you ask a recent GCC or Clang/LLVM to compile, with optimizations, some C code containing a single (or very few) function(s) with many thousands of statements, the compilation time grows a lot.
In my experience, compiling a single function with many thousands of C statements with gcc -O2 takes time roughly proportional to the square of the number of statements (more exactly, the number of Gimple statements after preprocessing and gimplification).
The intuition explaining this is that the algorithms for register allocation, instruction scheduling, and middle-end optimizations are often worse than O(n); naive non-optimizing C compilers like tinycc, nwcc, 8cc, etc. don't show such time behavior (but generate code much worse than gcc -O2...).
BTW, you could play (on Linux) with my manydl.c program (which generates more or less random C code, then compiles and dlopens it, to show that you can dlopen many hundreds of thousands of shared objects). Read its source code; it will illustrate my point.
More seriously, I ran into the same issue in the past with old versions of MELT, which generates C++ code for extending GCC. I had to split some huge sequential initialization code into several routines.
You might compile such a huge function with gcc -ftime-report -O2 to see more precisely which optimization passes the compiler spends its time in.
At last, if you want a small source file, you can cheat by having it #include "some-huge-file.c", but I believe that does not count.
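If you want to reproduce the super-linear behavior without writing macros by hand, a tiny generator along these lines (purely illustrative, not manydl.c) can emit one huge function to feed to gcc -O2 -ftime-report:
/* gen_big.c -- writes big.c containing one function with N statements,
   then try:  time gcc -O2 -c big.c  for a few values of N */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long n = argc > 1 ? atol(argv[1]) : 100000;
    FILE *out = fopen("big.c", "w");
    if (!out) { perror("big.c"); return 1; }

    fprintf(out, "volatile int sink;\nvoid huge(void)\n{\n");
    for (long i = 0; i < n; i++)
        fprintf(out, "    sink = sink * %ld + %ld;\n", i % 7 + 1, i);
    fprintf(out, "}\n");
    return fclose(out) == 0 ? 0 : 1;
}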

C: Turn on debug messages

This is probably a really stupid question, but how do I turn on these debug messages in my code?
#ifdef DEBUG_MSG
printf("initial state : %d\n", initial_state);
#endif
Many thanks in advance,
When compiling, try something like this:
$ gcc -DDEBUG_MSG -o foo foo.c
You would have to #define that somehow.
0. In your code.
Directly in your code somewhere before you use that flag:
#define DEBUG_MSG
1. On the command line.
For each sourcefile, or appropriately in your makefile:
gcc -DDEBUG_MSG main.c
(For gcc, the flag is -D<macro-name>; for MSVC, it is /D; for ICC it is one of the former, depending on your operating system.)
2. In your IDE, somewhere.
In your IDE's project settings, find where you can put definitions. Under the hood, this is done using 1.
#ifdef means 'if defined': your code essentially tells the preprocessor to check whether DEBUG_MSG is defined somewhere. If it is, the code you've shown will be included.
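A common refinement (not part of the question, just a conventional idiom) is to wrap the check in a small macro so the call sites stay free of #ifdef clutter; a minimal C99 sketch:
#include <stdio.h>

#ifdef DEBUG_MSG
#define DBG(...) fprintf(stderr, __VA_ARGS__)   /* enabled by -DDEBUG_MSG */
#else
#define DBG(...) ((void)0)                      /* expands to nothing otherwise */
#endif

int main(void)
{
    int initial_state = 0;
    DBG("initial state : %d\n", initial_state);
    return 0;
}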
The C preprocessor phase will only pass code inside an #ifdef/#endif to the compiler phase if the symbol is defined.
You can generally do this in (at least) two ways.
The first is to use a command line switch for the compiler such as:
gcc -DDEBUG_MSG myprog.c
(-D means to define the pre-processor symbol following it and, although this is implementation-specific, many compilers use the same switch). The second is to place a line like:
#define DEBUG_MSG
inside your actual source code somewhere before the #ifdef.
The former is usually preferred since it allows you to control that behaviour without having to make changes to your source code so that, for example, you can have a debug and release build generated from the same source code.
#ifdef will include that code only if DEBUG_MSG is defined. You can do this in two ways: either put #define DEBUG_MSG 1 in your source, or compile with -DDEBUG_MSG (if using gcc; there are similar flags for other compilers too).

Check whether function is declared with C preprocessor?

Is it possible to tell the C preprocessor to check whether a function (not a macro) is declared? I tried the following, but it doesn't appear to work:
#include <stdio.h>
int main(void)
{
#if defined(printf)
printf("You support printf!\n");
#else
puts("Either you don't support printf, or this test doesn't work.");
#endif
return 0;
}
No. The preprocessor runs before the C compiler, and it is the compiler that processes function declarations. The preprocessor is only there for text processing.
However, most header files have include-guard macros like _STDIO_H_ that you can test for at the preprocessor stage. That solution is not portable, though, as include-guard macro names are not standardized.
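For example (a sketch only; the guard names below are glibc/BSD conventions and prove nothing on other implementations):
#include <stdio.h>

int main(void)
{
#if defined(_STDIO_H) || defined(_STDIO_H_)
    puts("some stdio.h include guard is defined on this particular libc");
#else
    puts("no guard macro I know of -- which tells you nothing portable");
#endif
    return 0;
}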
If you look at tools like autoconf, you will see that they go through many tests to determine what a computer has or doesn't have in order to compile properly, and then they set the correct #defines.
You may want to look at that model, and at that tool if you are on some flavor of Unix, since what you want to do isn't possible with the preprocessor alone, as others have pointed out.
Strictly speaking no, the preprocessor can't do it on its own. However, you can give it a little help by creating appropriate #defines automatically.
Normally as was mentioned above, you'd use autotools if on a unix type system. However, you can also create the same effect using a makefile. I recently had cause to detect the "posix_fallocate" function being defined in headers, because I was using uClibc which seemed to omit it in earlier versions. This works in gnu make, but you can probably get a similar thing to work in other versions:
NOFALLOC := $(shell echo "\#include <fcntl.h>\nint main() { posix_fallocate(0,0,0);}" | $(CC) -o /dev/null -Werror -xc - >/dev/null 2>/dev/null && echo 0 || echo 1)
ifeq "$(NOFALLOC)" "1"
DFLAGS += -DNO_POSIX_FALLOCATE
endif
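The C code can then branch on the flag the makefile passed in; a sketch of the consuming side (the fallback is just an illustration, and ftruncate is not a true replacement since it doesn't reserve space):
#include <fcntl.h>     /* posix_fallocate, when the probe says it exists */
#include <unistd.h>    /* ftruncate */

int reserve(int fd, off_t len)
{
#ifndef NO_POSIX_FALLOCATE
    return posix_fallocate(fd, 0, len);   /* available according to the makefile probe */
#else
    return ftruncate(fd, len);            /* crude stand-in when it is missing */
#endif
}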
The preprocessor is a simple program and knows next to nothing about the underlying language. It cannot tell if a function has been declared. Even if it could, the function may be defined in another library and the symbol is resolved during linking, so the preprocessor could not help in that regard.
Since the preprocessor is not aware of the language C/C++ (it really only does text-replacement) I would guess that this is not possible. Why do you want to do this? Maybe there is another way.
