Is this a reasonable hack to inline functions across translation units?

I'm writing performance-sensitive code that really requires me to force certain function calls to be inlined.
For inline functions that are shared between translation units via a header, one would normally have to put the function definition in the header file. I don't want to do that. Some of these functions operate on complex data structures that should not be exposed in the header.
I've gotten around this by simply #including all the .h and .c files once each into a single .c file, so that there is only one translation unit. (That slows down re-compiles, but not by enough to matter.)
This would be "problem solved," but it eliminates getting an error when a function in one C file calls a function in another C file that is supposed to be private, and I want to get an error in that case. So, I have a separate Makefile entry that does a "normal" build, just to check for this case.
In order to force functions declared inline to play nicely in the "normal" build, I actually define a macro, may_inline, which is used where the inline keyword normally would be. It is defined as empty for a normal build and as "inline" for an optimized build.
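For illustration, the macro looks roughly like this (the OPTIMIZED_BUILD flag and the fast_add function are only illustrative names, not anything standard):

/* may_inline.h (sketch) */
#ifdef OPTIMIZED_BUILD          /* single-translation-unit, optimized build */
#define may_inline inline
#else                           /* "normal" multi-file build used to catch cross-file calls */
#define may_inline
#endif

/* some_module.c */
may_inline int fast_add(int a, int b) { return a + b; }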
This seems like an acceptable solution. The only downside I can see is that I can't have private functions in different .c files that have the same prototype, but so far, that hasn't been much of an issue for me.
Another potential solution is to use GCC's Link-Time Optimization, which is supposed to allow inlining across translation units. It's a new feature, though, and I don't trust it to always inline things the way I would want. Furthermore, I can only get it working on trivial problems, not my actual code.
Is this an acceptable hack, or am I doing something incredibly stupid? The fact that I've never seen this done before makes me a bit nervous.

Unity builds are an absolutely valid approach and have been widely used in industry since forever (see e.g. this post). Recent versions of Visual Studio even provide built-in support for them.
LTO has the downside of not being portable, even across compilers for the same platform.

Related

unused `static inline` functions generate warnings with `clang`

When using gcc or clang, it's generally a good idea to enable a number of warnings, and a first batch of warnings is generally provided by -Wall.
This batch is pretty large, and includes the specific warning -Wunused-function.
Now, -Wunused-function is useful to detect static functions which are no longer invoked, meaning they are useless, and should therefore preferably be removed from source code.
When applying a "zero-warning" policy, it's no longer "preferable", but downright compulsory.
For performance reasons, some functions may be defined directly into header files *.h, so that they can be inlined at compile time (disregarding any kind of LTO magic). Such functions are generally declared and defined as static inline.
In the past, such functions would probably have been defined as macros; nowadays it's considered better to make them static inline functions whenever applicable (no funny type issues).
OK, so now we have a bunch of functions defined directly into header files, for performance reasons. A unit including such a header file is under no obligation to use all its declared symbols. Therefore, a static inline function defined in a header file may reasonably not be invoked.
For gcc, that's fine. gcc would flag an unused static function, but not an inline static one.
For clang, though, the outcome is different: a static inline function defined in a header triggers a -Wunused-function warning in any unit that includes the header but does not invoke it. And it doesn't take a lot of flags to get there: -Wall is enough.
A work-around is to introduce a compiler-specific extension, such as __attribute__((unused)), which explicitly states to the compiler that the function defined in the header may not necessarily be invoked by all its units.
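For illustration, the work-around looks roughly like this (the function is made up):

/* util.h */
__attribute__((unused))                 /* tells gcc/clang not to warn if an including unit never calls this */
static inline int clamp_index(int i, int n)
{
    if (i < 0)  return 0;
    if (i >= n) return n - 1;
    return i;
}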
OK, but now the code which used to be clean C99 includes some form of compiler-specific extension, adding to the portability and maintenance burden.
The question therefore is more about the logic of such a choice: why does clang choose to trigger a warning when a static inline function defined in a header is not invoked? In which cases is that a good idea?
And what does clang propose to cover the relatively common case of inline functions defined in header files, without requiring the use of a compiler extension?
Edit:
After further investigation, it appears the premise of the question is incorrect.
The warning is triggered in the editor (VSCode) by the clang linter, using a selected list of compilation flags (-Wall, etc.).
But when the source code is actually compiled with clang, with exactly the same list of flags, the "unused function" warning is not present.
Until now, the results visible in the editor have matched exactly the ones found at compilation time; this is the first time I have witnessed a difference.
So the problem seems related to the way the linter uses clang to produce its list of warnings. That's a much more complex and specific question.
Note the comment:
OK, sorry, this is actually different from expectation. It appears the warning is triggered in the editor using clang linter with selected compilation flags (-Wall, etc.). But when the source code is compiled with exactly the same flags, the "unused function" warning is actually not present. So far, the results visible in the editor used to be exactly the ones found at compilation time; it's the first time I witness a difference. So the problem seems related to the way the linter uses clang to produce its list of warnings. It seems to be a more complex question [than I realized].
I'm not sure you'll find any "why". I think this is a bug, possibly one that they don't care to fix. As you hint in your question, it does encourage really bad practice (annotation with compiler extensions where no annotation should be needed), and this should not be done; rather, the warning should just be turned off unless/until the bug is fixed.
If you haven't already, you should search their tracker for an existing bug report, and open one if none already exists.
Follow-up: I'm getting reports, which I haven't verified, that this behavior only happens for functions defined directly in source files, not in included header files. If that's true, it's nowhere near as bad, and probably something you can ignore.
'#ifdef USES_FUNCTION_XYZ'
One would have to configure which inline functions are used before including the header.
Sounds like a hassle and looks clumsy.
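A sketch of what that configuration would look like (the macro and function names are invented):

/* caller.c */
#define USES_FUNCTION_XYZ      /* opt in to the inline functions this unit actually calls */
#include "util.h"

/* util.h */
#ifdef USES_FUNCTION_XYZ
static inline void function_xyz(void)
{
    /* ... */
}
#endif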
When using gcc or clang, it's generally a good idea to enable a number of warnings,
When using any C compiler, it's a good idea to ensure that the warning level is turned up, and to pay attention to the resulting warnings. Much breakage, confusion, and wasted effort can be saved that way.
Now, -Wunused-function is useful to detect static functions which are no longer invoked, meaning they are useless, and should therefore preferably be removed from source code. When applying a "zero-warning" policy, it's no longer "preferable", but downright compulsory.
Note well that:
Such zero-warning policies, though well-intended, are a crutch. I have little regard for policies that substitute inflexible rules for human judgement.
Such zero-warning policies can be subverted in a variety of ways, with disabling certain warnings being high on the list. Just how useful are they really, then?
Policy is adopted by choice, as a means to an end. Maybe not your choice personally, but someone's. If existing policy is not adequately serving the intended objective, or is interfering with other objectives, then it should be re-evaluated (though that does not necessarily imply that it will be changed).
For performance reasons, some functions may be defined directly into header files *.h, so that they can be inlined at compile time (disregarding any kind of LTO magic).
That's a choice. More often than not, one affording little advantage.
Such functions are generally declared and defined as static inline. In the past, such functions would probably have been defined as macros instead, but it's considered better to make them static inline functions instead, whenever applicable (no funny type issue).
Considered by whom? There are reasons to prefer functions over macros, but there are also reasons to prefer macros in some cases. Not all such reasons are objective.
A unit including such a header file is under no obligation to use all its declared symbols.
Correct.
Therefore, a static inline function defined in a header file may reasonably not be invoked.
Well, that's a matter of what one considers "reasonable". It's one thing to have reasons to want to do things that way, but whether those reasons outweigh those for not doing it that way is a judgement call. I wouldn't do that.
The question therefore is more about the logic of such a choice: why does clang choose to trigger a warning when a static inline function defined in a header is not invoked? In which cases is that a good idea?
If we accept that it is an intentional choice, one would presume that the Clang developers have a different opinion about how reasonable the practice you're advocating is. You should consider this a quality-of-implementation issue, there being no rules for whether compilers should emit diagnostics in such cases. If they have different ideas about what they should warn about than you do, then maybe a different compiler would be more suitable.
Moreover, it would be of little consequence if you did not also have a zero-warning policy, so multiple choices on your part are going into creating an issue for you.
And what does clang propose to cover the relatively common case of inline functions defined in header files, without requiring the use of a compiler extension?
I doubt that clang or its developers propose any particular course of action here. You seem to be taking the position that they are doing something wrong. They are not. They are doing something that is inconvenient for you, and that therefore you (understandably) dislike. You will surely find others who agree with you. But none of that puts any onus on Clang to have a fix.
With that said, you could try defining the functions in the header as extern inline instead of static inline. You are then obligated to provide one non-inline definition of each somewhere in the whole program, too, but those can otherwise be lexically identical to the inline definitions. I speculate that this may assuage Clang.
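A minimal sketch of the usual C99 arrangement this suggestion points at (the function name is invented): the header carries the inline definition, and exactly one translation unit redeclares it with extern to emit the required non-inline, external definition.

/* util.h */
inline int twice(int x) { return 2 * x; }

/* util.c -- the one place that emits the external definition */
#include "util.h"
extern inline int twice(int x);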

Why not concatenate C source files before compilation? [duplicate]

This question already has answers here:
#include all .cpp files into a single compilation unit?
The benefits / disadvantages of unity builds?
I come from a scripting background and the preprocessor in C has always seemed ugly to me. Nonetheless, I have embraced it as I learn to write small C programs. I am only really using the preprocessor for including the standard library headers and the header files I have written for my own functions.
My question is: why don't C programmers just skip all the includes and simply concatenate their C source files and then compile the result? If you put all of your includes in one place, you would only have to define what you need once, rather than in all your source files.
Here's an example of what I'm describing. Here I have three files:
// includes.c
#include <stdio.h>

// main.c
int main() {
    foo();
    printf("world\n");
    return 0;
}

// foo.c
void foo() {
    printf("Hello ");
}
By doing something like cat *.c > to_compile.c && gcc -o myprogram to_compile.c in my Makefile I can reduce the amount of code I write.
This means that I don't have to write a header file for each function I create (because they're already in the main source file) and it also means I don't have to include the standard libraries in each file I create. This seems like a great idea to me!
However I realise that C is a very mature programming language and I'm imagining that someone else a lot smarter than me has already had this idea and decided not to use it. Why not?
Some software is built that way.
A typical example is SQLite. It is sometimes compiled as an amalgamation (done at build time from many source files).
But that approach has pros and cons.
Obviously, the compile time will increase by quite a lot. So it is practical only if you compile that stuff rarely.
Perhaps, the compiler might optimize a bit more. But with link time optimizations (e.g. if using a recent GCC, compile and link with gcc -flto -O2) you can get the same effect (of course, at the expense of increased build time).
I don't have to write a header file for each function
Having one header file per function is the wrong approach. For a single-person project (of less than a hundred thousand lines of code, i.e. 100 KLOC, where KLOC = kilo lines of code), it is quite reasonable, at least for small projects, to have a single common header file (which you could pre-compile if using GCC), containing declarations of all public functions and types, and perhaps definitions of static inline functions (those small enough and called frequently enough to profit from inlining). For example, the sash shell is organized that way (and so is the lout formatter, with 52 KLOC).
You might also have a few header files, and perhaps a single "grouping" header which #include-s all of them (and which you could pre-compile). See for example jansson (which actually has a single public header file) and GTK (which has lots of internal headers, but most applications using it have just one #include <gtk/gtk.h>, which in turn includes all the internal headers). At the opposite end, POSIX has a great many header files, and it documents which ones should be included and in which order.
Some people prefer to have a lot of header files (and some even favor putting a single function declaration in its own header). I don't (for personal projects, or small projects on which only two or three people commit code), but it is a matter of taste. BTW, when a project grows a lot, it happens quite often that the set of header files (and of translation units) changes significantly. Look also into Redis (it has 139 .h header files and 214 .c files, i.e. translation units, totaling 126 KLOC).
Having one or several translation units is also a matter of taste (and of convenience, habits, and conventions). My preference is to have source files (that is, translation units) which are not too small, typically several thousand lines each, and often (for a small project of less than 60 KLOC) a single common header file. Don't forget to use a build automation tool like GNU make (often with a parallel build through make -j; then you'll have several compilation processes running concurrently). The advantage of such a source file organization is that compilation is reasonably quick. BTW, in some cases a metaprogramming approach is worthwhile: some of your C "source" files (internal headers, or translation units) could be generated by something else (e.g. some script in AWK, or some specialized C program like bison, or your own tool).
Remember that C was designed in the 1970s, for computers much smaller and slower than your favorite laptop today (typically, memory was at that time a megabyte at most, or even a few hundred kilobytes, and the computer was at least a thousand times slower than your mobile phone today).
I strongly suggest studying the source code of, and building, some existing free software projects (e.g. those on GitHub or SourceForge or in your favorite Linux distribution). You'll learn that there are different approaches. Remember that in C, conventions and habits matter a lot in practice, so there are different ways to organize your project into .c and .h files. Read about the C preprocessor.
It also means I don't have to include the standard libraries in each file I create
You include header files, not libraries (but you do link libraries). You could include them in each .c file (and many projects do that), or you could include them in one single header and pre-compile that header, or you could have a dozen headers and include them after system headers in each compilation unit. YMMV. Notice that preprocessing time is quick on today's computers (at least when you ask the compiler to optimize, since optimization takes more time than parsing and preprocessing).
Notice that what goes into an #include-d file is conventional (and is not defined by the C specification). Some programs have some of their code in such a file (which should then not be called a "header", just an "included file", and which should not have a .h suffix, but something else like .inc). Look for example into XPM files. At the other extreme, you might in principle not have any header files of your own (you still need header files from the implementation, like <stdio.h> or <dlfcn.h> from your POSIX system) and copy and paste duplicated code in your .c files, e.g. have the line int foo(void); in every .c file, but that is very bad practice and is frowned upon. However, some programs do generate C files sharing some common content.
BTW, neither C nor C++14 has modules (as OCaml has). In other words, in C a module is mostly a convention.
(Notice that having many thousands of very small .h and .c files of only a few dozen lines each may slow down your build time dramatically; having hundreds of files of a few hundred lines each is more reasonable, in terms of build time.)
If you begin to work on a single-person project in C, I would suggest first having one header file (and pre-compiling it) and several .c translation units. In practice, you'll change .c files much more often than .h ones. Once you have more than 10 KLOC you might refactor that into several header files. Such a refactoring is tricky to design, but easy to do (just a lot of copying and pasting of chunks of code). Other people would have different suggestions and hints (and that is OK!). But don't forget to enable all warnings and debug information when compiling (so compile with gcc -Wall -g, perhaps setting CFLAGS = -Wall -g in your Makefile). Use the gdb debugger (and valgrind...). Ask for optimizations (-O2) when you benchmark an already-debugged program. Also use a version control system like Git.
Conversely, if you are designing a larger project on which several people will work, it could be better to have several files, even several header files (intuitively, each file has a single person mainly responsible for it, with others making minor contributions to that file).
In a comment, you add:
I'm talking about writing my code in lots of different files but using a Makefile to concatenate them
I don't see why that would be useful (except in very weird cases). It is much better (and very usual and common practice) to compile each translation unit (e.g. each .c file) into its own object file (a .o ELF file on Linux) and link them later. This is easy with make (in practice, when you change only one .c file, e.g. to fix a bug, only that file gets recompiled, and the incremental build is really quick), and you can ask it to compile object files in parallel using make -j (and then your build goes really fast on your multi-core processor).
You could do that, but we like to separate C programs into separate translation units, chiefly because:
It speeds up builds. You only need to rebuild the files that have changed, and those can be linked with other compiled files to form the final program.
The C standard library consists of pre-compiled components. Would you really want to have to recompile all that?
It's easier to collaborate with other programmers if the code base is split up into different files.
Your approach of concatenating .c files is completely broken:
Even though the command cat *.c > to_compile.c will put all functions into a single file, order matters: You must have each function declared before its first use.
That is, you have dependencies between your .c files which force a certain order. If your concatenation command fails to honor this order, you won't be able to compile the result.
Also, if you have two functions that recursively use each other, there is absolutely no way around writing a forward declaration for at least one of the two. You may as well put those forward declarations into a header file where people expect to find them.
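For example, two mutually recursive functions need at least one forward declaration regardless of how the files are concatenated:

static int is_odd(unsigned n);                 /* forward declaration */

static int is_even(unsigned n) { return n == 0 ? 1 : is_odd(n - 1); }
static int is_odd(unsigned n)  { return n == 0 ? 0 : is_even(n - 1); }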
When you concatenate everything into a single file, you force a full rebuild whenever a single line in your project changes.
With the classic .c/.h split compilation approach, a change in the implementation of a function necessitates recompilation of exactly one file, while a change in a header necessitates recompilation of the files that actually include this header. This can easily speed up the rebuild after a small change by a factor of 100 or more (depending on the count of .c files).
You lose the ability to compile in parallel when you concatenate everything into a single file.
Have a big fat 12-core processor with hyper-threading enabled? Pity, your concatenated source file is compiled by a single thread. You just lost a speedup of a factor greater than 20... OK, this is an extreme example, but I have built software with make -j16 already, and I tell you, it can make a huge difference.
Compilation times are generally not linear.
Usually compilers contain at least some algorithms that have a quadratic runtime behavior. Consequently, there is usually some threshold from which on aggregated compilation is actually slower than compilation of the independent parts.
Obviously, the precise location of this threshold depends on the compiler and the optimization flags you pass to it, but I have seen a compiler take over half an hour on a single huge source file. You don't want to have such an obstacle in your change-compile-test loop.
Make no mistake: Even though it comes with all these problems, there are people who use .c file concatenation in practice, and some C++ programmers get pretty much to the same point by moving everything into templates (so that the implementation is found in the .hpp file and there is no associated .cpp file), letting the preprocessor do the concatenation. I fail to see how they can ignore these problems, but they do.
Also note that many of these problems only become apparent with larger project sizes. If your project is less than 5000 lines of code, it hardly matters how you compile it. But when you have more than 50000 lines of code, you definitely want a build system that supports incremental and parallel builds. Otherwise, you are wasting your working time.
With modularity, you can share your library without sharing the code.
For large projects, if you change a single file, you would end up compiling the complete project.
You may run out of memory more easily when you attempt to compile large projects.
You may have circular dependencies between modules; modularity helps in maintaining those.
There may be some gains in your approach, but for languages like C, compiling each module makes more sense.
Because splitting things up is good program design. Good program design is all about modularity, autonomous code modules, and code re-usability. As it turns out, common sense will get you very far when doing program design: Things that don't belong together shouldn't be placed together.
Placing non-related code in different translation units means that you can localize the scope of variables and functions as much as possible.
Merging things together creates tight coupling, meaning awkward dependencies between code files that really shouldn't even have to know about each other's existence. This is why a "global.h" which contains all the includes in a project is a bad thing, because it creates a tight coupling between every non-related file in your whole project.
Suppose you are writing firmware to control a car. One module in the program controls the car's FM radio. Then you re-use the radio code in another project, to control the FM radio in a smartphone. And then your radio code won't compile, because it can't find brakes, wheels, gears, etc., things that don't make the slightest sense for the FM radio, let alone the smartphone, to know about.
What's even worse is that if you have tight coupling, bugs escalate throughout the whole program, instead of staying local to the module where the bug is located. This makes the bug consequences far more severe. You write a bug in your FM radio code and then suddenly the brakes of the car stop working. Even though you haven't touched the brake code with your update that contained the bug.
If a bug in one module breaks completely non-related things, it is almost certainly because of poor program design. And a certain way to achieve poor program design is to merge everything in your project together into one big blob.
Header files should define interfaces; that's a desirable convention to follow. They aren't meant to declare everything that's in a corresponding .c file, or a group of .c files. Instead, they declare all the functionality in the .c file(s) that is available to their users. A well-designed .h file constitutes a basic document of the interface exposed by the code in the .c file, even if there isn't a single comment in it. One way to approach the design of a C module is to write the header file first, and then implement it in one or more .c files.
Corollary: functions and data structures internal to the implementation of a .c file don't normally belong in the header file. You might need forward declarations, but those should be local and all variables and functions thus declared and defined should be static: if they are not a part of the interface, the linker shouldn't see them.
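A small sketch of that split (the names are invented): the header documents only what callers may use, while the helper stays static inside the .c file and is invisible outside it.

/* stack.h -- the interface */
#ifndef STACK_H
#define STACK_H
typedef struct stack stack;            /* opaque type: layout not exposed to users */
stack *stack_new(void);
void   stack_push(stack *s, int v);
#endif

/* stack.c -- the implementation */
#include <stdlib.h>
#include "stack.h"

struct stack { int *data; size_t len, cap; };

static int grow(stack *s)              /* internal helper, not part of the interface */
{
    size_t ncap = s->cap ? 2 * s->cap : 8;
    int *p = realloc(s->data, ncap * sizeof *p);
    if (!p) return 0;
    s->data = p;
    s->cap = ncap;
    return 1;
}

stack *stack_new(void) { return calloc(1, sizeof(stack)); }

void stack_push(stack *s, int v)
{
    if (s->len == s->cap && !grow(s))
        return;                        /* allocation failure ignored in this sketch */
    s->data[s->len++] = v;
}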
While you can still write your program in a modular way and build it as a single translation unit, you will miss all the mechanisms C provides to enforce that modularity. With multiple translation units you have fine control over your modules' interfaces, e.g. by using the extern and static keywords.
By merging your code into a single translation unit, you will miss any modularity issues you might have because the compiler won't warn you about them. In a big project this will eventually result in unintended dependencies spreading around. In the end, you will have trouble changing any module without creating global side-effects in other modules.
The main reason is compilation time. Compiling one small file when you change it takes only a short amount of time. If, however, you compiled the whole project whenever you changed a single line, then you would compile, for example, 10,000 files each time, which could take a lot longer.
If you have, as in the example above, 10,000 source files and compiling one takes 10 ms, then the whole project builds incrementally (after changing a single file) either in (10 ms + linking time) if you compile just the changed file, or (10 ms × 10,000 + linking time) if you compile everything as a single concatenated blob.
If you put all of your includes in one place you would only have to define what you need once, rather than in all your source files.
That's the purpose of .h files, so you can define what you need once and include it everywhere. Some projects even have an everything.h header that includes every individual .h file. So, your pro can be achieved with separate .c files as well.
This means that I don't have to write a header file for each function I create [...]
You're not supposed to write one header file for every function anyway. You're supposed to have one header file for a set of related functions. So your con is not valid either.
This means that I don't have to write a header file for each function I create (because they're already in the main source file) and it also means I don't have to include the standard libraries in each file I create. This seems like a great idea to me!
The pros you noticed are actually a reason why this is sometimes done on a smaller scale.
For large programs, it's impractical. As other good answers have mentioned, it can increase build times substantially.
However, it can be used to break up a translation unit into smaller bits, which share access to functions in a way reminiscent of Java's package accessibility.
The way the above is achieved involves some discipline and help from the preprocessor.
For example, you can break your translation unit into two files:
// a.c
static void utility() {
}

static void a_func() {
    utility();
}

// b.c
static void b_func() {
    utility();
}
Now you add a file for your translation unit:
// ab.c
static void utility();
#include "a.c"
#include "b.c"
And your build system doesn't build either a.c or b.c, but instead builds only ab.o out of ab.c.
What does ab.c accomplish?
It includes both files to generate a single translation unit, and provides a prototype for utility(), so that the code in both a.c and b.c can see it, regardless of the order in which they are included, and without requiring the function to be extern.

Is there, as in JavaScript, a performance penalty for creating functions in C?

In JavaScript, there are, often, huge performance penalties for writing functions. For example, if you use this function:
function double(x){ return x*2; }
inside an inner loop, you are probably hurting your performance considerably, so it is really profitable to inline that kind of function in intensive applications. Does this, in general, hold for C? Am I free to create those kinds of functions for everything, and rest assured the compiler will do the job, or is hand inlining still important?
The answer is: it depends.
I'm currently using the MSVC compiler and GCC for a project at work, and my experience is that they both do a pretty good job. Furthermore, the cost of a function call in native code can be pretty small, especially for functions that do not need to be accessible outside the executable (like functions not exported from a shared library). For these functions, there is more flexibility in how the call is actually implemented.
A few things to note: it's much easier for a compiler to optimize calls to static functions. Functions with external linkage often require link time optimization since one must know how and where the function is actually called, as well as the implementation, to do much optimization or inlining. This requires examining more than one compilation unit at a time.
I would say that you should use functions where it makes sense and makes the code easier to read and maintain. In general, it is safe to assume that the cost is smaller than it would be in JavaScript. But in the end, you'd have to profile the code to say anything more precise.
UPDATE: I want to emphasize that functions can be inlined across compilation units, but this requires link-time optimization (or whole program optimization). This is supported in both GCC (https://gcc.gnu.org/wiki/LinkTimeOptimization) and MSVC (http://msdn.microsoft.com/en-us/library/0zza0de8.aspx).
These days, if you can beat the compiler by copying the body of a function and pasting it everywhere you call that function, you probably need a different compiler.
In general, with optimizations turned on, gcc will tend to inline short functions provided that they are defined in the same compilation unit that they are called in.
Moreover, if the calling function and called function are in different compilation units, the compiler does not have a chance to inline them regardless of what you request.
So, if you want to maximize the chance of the compiler optimizing away a function call (without manually inlining), you should define the function in a .h file or in the same .c file it is called from.
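For instance, a small helper defined as static inline in a header gets its body into every .c file that includes it, so the compiler can inline the call at the use site (the names are invented):

/* fastmath.h */
#ifndef FASTMATH_H
#define FASTMATH_H
static inline int square(int x) { return x * x; }
#endif

/* hot_loop.c */
#include "fastmath.h"

long sum_of_squares(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += square(a[i]);             /* candidate for inlining at -O2 */
    return s;
}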
There are no inner (nested) functions in C. Period. So the rest of your question is kind of irrelevant.
Anyway, as for "normal" functions in C, the compiler may or may not inline them (i.e. replace the function invocation with its body). If you compile your code with "optimize for size", it may decide not to inline, for obvious reasons.

Why doesn't the compiler inline functions written in different source files?

I've been doing some tests with Valgrind to understand how functions are translated by the compiler and have found that, sometimes, functions written in different files perform poorly compared to functions written in the same source file, due to not being inlined.
Considering I have different files, each containing functions related to a particular area and all files share a common header declaring all functions, is this expected?
Why doesn't the compiler inline them when they are written in different files, but does when they are in the same file?
If this behavior starts to cause performance issues, what is the recommended course of action? Put all of the functions in the same file manually before compiling?
example:
//source 1
void foo(char *str1, char *str2)
{
    //here goes the code
}

//source 2
void *bar(int something, char *somethingElse)
{
    //bar code
    foo(variableInsideBar, anotherVariableCreatedInsideBar);
    return variableInsideBar;
}
Sample performance cost:
On different files: 29920
Both on the same file: 8704
For bigger functions it is not as pronounced, but still happens.
If you are using gcc, you should try the options -combine and -fwhole-program and pass all the source files to the compiler in one invocation. Traditionally, different C files are compiled separately, but it is becoming more common to optimize across compilation units (files).
The compiler proper cannot inline functions defined in different translation units simply because it cannot see the definitions of these functions, i.e. it cannot see their source code. Historically, C compilers (and the language itself) were built around the principle of independent translation. Each translation unit is compiled from source code into object code completely independently from other translation units. Only at the very last stage of translation are all these disjoint pieces of object code assembled together into the final program by the so-called linker. But in a traditional compiler implementation, at that point it is already too late to inline anything.
As you probably know, the language-level support for function inlining says that in order for a function to be "inlinable" in some translation unit it has to be defined in that translation unit, i.e. the source code for its body should be visible to the compiler in that translation unit. This requirement stems directly from the aforementioned principle of independent translation.
Many modern compilers are gradually introducing features that overcome the limitations of the classic, purely independent translation. They implement features like global optimizations, which allow various optimizations that cross the boundaries of translation units. That potentially includes the ability to inline functions defined in other translation units. Consult your compiler documentation to see whether it can inline functions across translation units and how to enable this sort of optimization.
The reason such global optimizations are usually disabled by default is that they can significantly increase the translation time.
Interesting that you noticed that. I think that's because when you compile something, the compiler first turns each C file into an object file without looking at any other files. After the object file has been made, it doesn't apply any further optimisations across it.
I don't think it costs much performance.

Any good reason to #include source (*.c *.cpp) files?

I've been working for some time with an open-source library ("Fast Artificial Neural Network"). I'm using its source in my static library. When I compile it, however, I get hundreds of linker warnings, which are probably caused by the fact that the library includes its *.c files in other *.c files (as I'm only including some headers I need, and I did not touch the code of the lib itself).
My question: Is there a good reason why the developers of the library used this approach, which is strongly discouraged? (Or at least I've been told all my life that this is bad, and from my own experience I believe it IS bad.) Or is it just bad design and there is no gain in this approach?
I'm aware of this related question but it does not answer my question. I'm looking for reasons that might justify this.
A bonus question: Is there a way to fix this without touching the library code too much? I have a lot of work of my own and don't want to create more ;)
As far as I see (grep '#include .*\.c'), they only do this in doublefann.c, fixedfann.c, and floatfann.c, and each time include the reason:
/* Easy way to allow for build of multiple binaries */
This exact use of the preprocessor for simple copy-pasting is indeed the only valid use of including implementation (*.c) files, and relatively rare. (If you want to include some code for another reason, just give it a different name, like *.h or *.inc.) An alternative is to specify configuration in macros given to the compiler (e.g. -DFANN_DOUBLE, -DFANN_FIXED, or -DFANN_FLOAT), but they didn't use this method. (Each approach has drawbacks, so I'm not saying they're necessarily wrong, I'd have to look at that project in depth to determine that.)
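For comparison, a sketch of that alternative (the FANN_DOUBLE/FANN_FIXED macro names come from the paragraph above and are only illustrative, as is the function; the real library may spell things differently):

/* fann_impl.c -- one implementation file, configured per binary */
#if defined(FANN_DOUBLE)
typedef double fann_type;
#elif defined(FANN_FIXED)
typedef int fann_type;
#else
typedef float fann_type;
#endif

fann_type fann_scale(fann_type v, fann_type factor)
{
    return v * factor;
}

Each binary would then be produced by compiling the same file with a different -D flag, instead of wrapping it in doublefann.c, fixedfann.c, and floatfann.c.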
They provide makefiles and MSVS projects which should already not link doublefann.o (from doublefann.c) with either fann.o (from fann.c) or fixedfann.o (from fixedfann.c) and so on, and either their files are screwed up or something similar has gone wrong.
Did you try to create a project from scratch (or use your existing project) and add all the files to it? If you did, what is happening is each implementation file is being compiled independently and the resulting object files contain conflicting definitions. This is the standard way to deal with implementation files and many tools assume it. The only possible solution is to fix the project settings to not link these together. (Okay, you could drastically change their source too, but that's not really a solution.)
While you're at it, if you continue without using their project settings, you can likely skip compiling fann.c et al.; possibly just removing those from the project is enough, and then they won't be compiled and linked. You'll want to choose exactly one of double-/fixed-/floatfann to use, otherwise you'll get the same link errors. (I haven't looked at their instructions, but I would not be surprised to see this summary explained a bit more in depth there.)
Including C/C++ code leads to all the code being stuck together in one translation unit. With a good compiler, this can lead to a massive speed boost (as stuff can be inlined and function calls optimized away).
If actual code is going to be included like this, though, it should have static in most of its declarations, or it will cause the warnings you're seeing.
If you ever declare a single global variable or function in that .c file, it cannot be included in two places which both compile to the same binary, or the two definitions will collide. If it is included in even one place, it cannot also be compiled on its own while still being linked into the same binary as its user.
If the file is only included in one place, why not just make it a discrete compilation unit (and use its globals via extern declarations)? Why bother having it included at all?
If your C files declare no global variables or functions, they are header files and should be named as such.
Therefore, by exhaustive search, I can say that the only time you would ever potentially want to include C files is if the same C code is used in building multiple different binaries. And even there, you're increasing your compile time for no real gain.
This is assuming that functions which should be inlined are marked inline and that you have a decent compiler and linker.
I don't know of a quick way to fix this.
I don't know that library, but as you describe it, it is either bad practice or your understanding of how to use it is not good enough.
A C project that wants to be included by others should always provide well structured .h files for others and then the compiled library for linking. If it wants to include function definitions in header files it should either mark them as static (old fashioned) or as inline (possible since C99).
I haven't looked at the code, but it's possible that the .c or .cpp files being included actually contain code that works in a header. For example, a template or an inline function. If that is the case, then the warnings would be spurious.
I'm doing this at the moment at home because I'm a relative newcomer to C++ on Linux and don't want to get bogged down in difficulties with the linker. But I wouldn't recommend it for proper work.
(I also once had to include a header.dat into a C++ program, because Rational Rose didn't allow headers to be part of the issued software and we needed that particular source file on the running system (for arcane reasons).)

Resources