Optimizing includes in C [closed] - c

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
My project has several header files. Most of the C source files include all of them, so nearly every source file contains the following lines:
#include "add.h"
#include "sub.h"
#include "mul.h"
#include "div.h"
#include "conv.h"
#include "comp.h"
// etc.
Should this be moved to some all.h or similar? Is there a better way to do this?

In each .c file it should be easy to see which headers it needs. If you do it your way, you lose track of what each file actually depends on, and a colleague will have to go hunting through that extra file to find out.
In other words, you shouldn't do that because you lose clarity in your code.

There is also a third, somewhere-in-the-middle option: have a minimal common.h for stuff which is shared by pretty much all your translation units (e.g. headers like stdint.h). Or perhaps you might decide to group add.h, sub.h and similar headers into a single ops.h header.
But having literally a single all.h header would make encapsulation/information hiding in C even harder than it is already. Does a math library in a spaceship microcontroller really need to know everything about life support systems?
Of course, C doesn't really have first-class encapsulation mechanisms; you can throw an extern function declaration pretty much anywhere. That's why it's rather important to stick to reasonable conventions and best practices.
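For illustration, such a grouping header might look like the sketch below (it reuses the header names from the question; the ops.h name is just the one suggested above, and the include guard reflects an assumed naming convention):
// ops.h -- hypothetical grouping header for the arithmetic modules
#ifndef OPS_H
#define OPS_H

#include "add.h"
#include "sub.h"
#include "mul.h"
#include "div.h"

#endif // OPS_H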

There are pros and cons to both approaches (having one single all.h that includes all your headers, or not), so it is partly a matter of opinion. On a reasonably small project, having a single header file containing all your common declarations (and also the definitions of static inline functions) is perfectly possible.
See also this answer (which gives examples of both approaches).
A possible reason to have one single header (which includes other headers, even the system ones) is to enable pre-compilation of that header file. See this answer (and the links there).
A possible reason to favor several header files is readability and modularity. You might want to have one (rather small) header file per module, and in every translation unit (practically every .c file) you would include only the minimal set of header files (in the right order).
Remember that C99 & C11 (and even C++14) do not have any notion of modules; in other words, modules in C are only a matter of conventions & habits. And conventions and habits are really important in C programs, so look at what other people are doing in existing free software projects (e.g. on github or sourceforge).
Notice that preprocessing is the first phase of most C compilers. Read documentation about your C preprocessor.
In practice you would use a build automation system like GNU make. You may also want to generate the dependencies automatically (e.g. as described here).
For a single-person project (of a few dozen thousand source lines at most), I personally prefer to have a single header file, but that is a matter of opinion and taste (so a lot of people disagree). BTW, refactoring your project into several header files when it becomes large enough is quite easy to do (but harder to design); you'll just copy and paste some chunks of code into several new header files.
For a project involving several developers, you may want to favor the idea that most files (header or code) have one single developer responsible for them (with other developers making occasional changes to it).
Notice that header inclusion and preprocessing is a textual operation. You could in theory even avoid having header files altogether and copy and paste the same declarations into your .c files, but that is very bad practice, so you should not do it (in hand-written code). However, some projects generate C code (a form of metaprogramming), and their C code generator (some script, or some tool like bison) could emit the same declarations in several files.

Related

Why does the standard C library feature multiple header files instead of consolidating the contents into a single header? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 months ago.
Why does the standard C library need to feature multiple header files? Would it not be more user friendly to consolidate it into one header file?
I understand that it is unnecessary to include function prototypes / global variables that go unused; however, won't the compiler eventually remove all of these references anyway if they're unused?
Maybe I'm underestimating the size of contents spanning all of the header files. But it seems like I'm always googling to figure out what header I need to #include.
Edit: The top comment references a size of 50MB, which appears to be untrue. The C library is relatively concise compared to other languages' standard libraries, hence the question.
// sarcasm ON
Build your own #include "monster.h" that includes all you want from the collection.
People write obfuscated code all the time. But, you won't be popular with your cohort.
// sarcasm OFF
The real reason? C was developed when storage was barely beyond punch card technology. A multi-user system might have 1/2Mb of (magnetic) core memory and a cycle time that is now laughable.
Yet reams of C code were developed (and the story of the popularity of UNIX is well known).
Although new standards could change the details (and C++ has), there is an ocean of code that might no longer compile.
Look up "Legacy".
There are (too many) examples of code in the world where the usual collection of, say, stdio.h, stdlib.h, etc. has been "hidden" inside #include "myApp.h". Doing so forces the reader to hunt down a second file to verify what the &^%&%& is going on.
Get used to it. If it was a bad practice, it would not have survived for ~50 years.
EDIT: It is for a similar reason that "functions" have been "grouped together" into different libraries. By extension, pooling those libraries might make things a tiny bit "less cluttered", but the process of linking may require a supercomputer to finish the job before knock-off time.
EDIT2: Brace yourself. Cleanly written C++ code often has a .h and a .cpp for every class used in the app. Sometimes "families" (hierarchies) might live together in a single pair of files, although that can lead to boo-boo's when the coder is having a bad day...

Why not concatenate C source files before compilation? [duplicate]

This question already has answers here:
#include all .cpp files into a single compilation unit?
(6 answers)
The benefits / disadvantages of unity builds? [duplicate]
(3 answers)
Closed 6 years ago.
I come from a scripting background, and the preprocessor in C has always seemed ugly to me. Nonetheless, I have embraced it as I learn to write small C programs. I am only really using the preprocessor for including the standard libraries and the header files I have written for my own functions.
My question is why don't C programmers just skip all the includes and simply concatenate their C source files and then compile it? If you put all of your includes in one place you would only have to define what you need once, rather than in all your source files.
Here's an example of what I'm describing. Here I have three files:
// includes.c
#include <stdio.h>

// main.c
int main() {
    foo();
    printf("world\n");
    return 0;
}

// foo.c
void foo() {
    printf("Hello ");
}
By doing something like cat *.c > to_compile.c && gcc -o myprogram to_compile.c in my Makefile I can reduce the amount of code I write.
This means that I don't have to write a header file for each function I create (because they're already in the main source file) and it also means I don't have to include the standard libraries in each file I create. This seems like a great idea to me!
However I realise that C is a very mature programming language and I'm imagining that someone else a lot smarter than me has already had this idea and decided not to use it. Why not?
Some software is built that way.
A typical example is SQLite. It is sometimes compiled as an amalgamation (done at build time from many source files).
But that approach has pros and cons.
Obviously, the compile time will increase by quite a lot. So it is practical only if you compile that stuff rarely.
Perhaps the compiler might optimize a bit more. But with link-time optimization (e.g. if using a recent GCC, compile and link with gcc -flto -O2) you can get the same effect (of course, at the expense of increased build time).
I don't have to write a header file for each function
That (having one header file per function) is the wrong approach. For a single-person project of less than a hundred thousand lines of code (i.e. 100 KLOC, where KLOC = kilo lines of code), it is quite reasonable, at least for small projects, to have a single common header file (which you could pre-compile if using GCC), which will contain declarations of all public functions and types, and perhaps definitions of static inline functions (those small enough and called frequently enough to profit from inlining). For example, the sash shell is organized that way (and so is the lout formatter, with 52 KLOC).
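For instance, such a single project-wide header might look like the following sketch (all file and function names here are invented for illustration, not taken from sash or lout):
// common.h -- hypothetical single header for a small single-person project
#ifndef COMMON_H
#define COMMON_H

#include <stdio.h>
#include <stdlib.h>

// declarations of the public functions, defined across several .c files
void parse_input(FILE *in);
void emit_output(FILE *out);

// a small, frequently called helper: a typical candidate for static inline
static inline int clamp_int(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

#endif // COMMON_H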
You might also have a few header files, and perhaps a single "grouping" header which #include-s all of them (and which you could pre-compile). See for example jansson (which actually has a single public header file) and GTK (which has lots of internal headers, but most applications using it have just one #include <gtk/gtk.h>, which in turn includes all the internal headers). At the opposite extreme, POSIX has a great many header files, and it documents which ones should be included and in which order.
Some people prefer to have a lot of header files (and some even favor putting a single function declaration in its own header). I don't (for personal projects, or small projects on which only two or three people would commit code), but it is a matter of taste. BTW, when a project grows a lot, it happens quite often that the set of header files (and of translation units) changes significantly. Look also into Redis (it has 139 .h header files and 214 .c files, i.e. translation units, totaling 126 KLOC).
Having one or several translation units is also a matter of taste (and of convenience, habits, and conventions). My preference is to have source files (that is, translation units) which are not too small, typically several thousand lines each, and often (for a small project of less than 60 KLOC) a single common header file. Don't forget to use a build automation tool like GNU make (often with a parallel build through make -j; then you'll have several compilation processes running concurrently). The advantage of such a source file organization is that compilation is reasonably quick. BTW, in some cases a metaprogramming approach is worthwhile: some of your C "source" files (internal headers, or translation units) could be generated by something else (e.g. some script in AWK, some specialized C program like bison, or your own tool).
Remember that C was designed in the 1970s, for computers much smaller and slower than your favorite laptop today (typically, memory was at that time a megabyte at most, or even a few hundred kilobytes, and the computer was at least a thousand times slower than your mobile phone today).
I strongly suggest studying the source code, and building, some existing free software projects (e.g. those on GitHub or SourceForge or in your favorite Linux distribution). You'll learn that there are different approaches. Remember that in C, conventions and habits matter a lot in practice, so there are different ways to organize your project into .c and .h files. Read about the C preprocessor.
It also means I don't have to include the standard libraries in each file I create
You include header files, not libraries (but you do link libraries). You could include the headers in each .c file (and many projects do that), or you could include them in one single header and pre-compile that header, or you could have a dozen headers and include them after the system headers in each compilation unit. YMMV. Notice that preprocessing is quick on today's computers (at least when you ask the compiler to optimize, since optimization takes more time than parsing & preprocessing).
Notice that what goes into an #include-d file is conventional (and is not defined by the C specification). Some programs have some of their code in such a file (which should then not be called a "header", just an "included file", and which should then not have a .h suffix but something else like .inc). Look for example at XPM files. At the other extreme, you might in principle not have any header files of your own (you still need header files from the implementation, like <stdio.h> or <dlfcn.h> from your POSIX system) and copy and paste duplicated code in your .c files (e.g. have the line int foo(void); in every .c file), but that is very bad practice and is frowned upon. However, some programs generate C files sharing some common content.
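As a toy example of such an included (non-header) file, a generated table might be kept in a .inc fragment and textually pulled into an initializer (the file and array names here are invented):
// primes.inc -- hypothetical generated fragment: data only, not a header
37, 41, 43, 47, 53,

// table.c -- the #include below is purely textual, so it works inside an initializer
static const int primes[] = {
    2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31,
#include "primes.inc"
};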
BTW, neither C nor C++14 has modules (as OCaml has). In other words, in C a module is mostly a convention.
(Notice that having many thousands of very small .h and .c files of only a few dozen lines each may slow down your build time dramatically; having hundreds of files of a few hundred lines each is more reasonable in terms of build time.)
If you begin to work on a single-person project in C, I would suggest first having one header file (and pre-compiling it) and several .c translation units. In practice, you'll change .c files much more often than .h ones. Once you have more than 10 KLOC you might refactor that into several header files. Such a refactoring is tricky to design, but easy to do (just a lot of copying and pasting chunks of code). Other people would have different suggestions and hints (and that is ok!). But don't forget to enable all warnings and debug information when compiling (so compile with gcc -Wall -g, perhaps setting CFLAGS = -Wall -g in your Makefile). Use the gdb debugger (and valgrind...). Ask for optimizations (-O2) when you benchmark an already-debugged program. Also use a version control system like Git.
On the contrary, if you are designing a larger project on which several people would work, it could be better to have several files, even several header files (intuitively, each file has a single person mainly responsible for it, with others making minor contributions to that file).
In a comment, you add:
I'm talking about writing my code in lots of different files but using a Makefile to concatenate them
I don't see why that would be useful (except in very weird cases). It is much better (and very usual and common practice) to compile each translation unit (e.g. each .c file) into its own object file (a .o ELF file on Linux) and link them later. This is easy with make (in practice, when you change only one .c file, e.g. to fix a bug, only that file gets recompiled and the incremental build is really quick), and you can ask it to compile object files in parallel using make -j (and then your build goes really fast on your multi-core processor).
You could do that, but we like to separate C programs into separate translation units, chiefly because:
It speeds up builds. You only need to rebuild the files that have changed, and those can be linked with other compiled files to form the final program.
The C standard library consists of pre-compiled components. Would you really want to have to recompile all that?
It's easier to collaborate with other programmers if the code base is split up into different files.
Your approach of concatenating .c files is completely broken:
Even though the command cat *.c > to_compile.c will put all functions into a single file, order matters: You must have each function declared before its first use.
That is, you have dependencies between your .c files which force a certain order. If your concatenation command fails to honor this order, you won't be able to compile the result.
Also, if you have two functions that recursively use each other, there is absolutely no way around writing a forward declaration for at least one of the two. You may as well put those forward declarations into a header file where people expect to find them.
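For example (a made-up pair of mutually recursive functions), at least one of them needs a forward declaration no matter how the files are ordered:
// parity.c -- illustrative only; is_odd must be declared before is_even calls it
int is_odd(unsigned n);                 // forward declaration

int is_even(unsigned n) {
    return n == 0 ? 1 : is_odd(n - 1);
}

int is_odd(unsigned n) {
    return n == 0 ? 0 : is_even(n - 1);
}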
When you concatenate everything into a single file, you force a full rebuild whenever a single line in your project changes.
With the classic .c/.h split compilation approach, a change in the implementation of a function necessitates recompilation of exactly one file, while a change in a header necessitates recompilation of the files that actually include this header. This can easily speed up the rebuild after a small change by a factor of 100 or more (depending on the count of .c files).
You lose all ability to compile in parallel when you concatenate everything into a single file.
Have a big fat 12-core processor with hyper-threading enabled? Pity, your concatenated source file is compiled by a single thread. You just lost a speedup of a factor greater than 20... Ok, this is an extreme example, but I have built software with make -j16 already, and I tell you, it can make a huge difference.
Compilation times are generally not linear.
Usually compilers contain at least some algorithms that have a quadratic runtime behavior. Consequently, there is usually some threshold from which on aggregated compilation is actually slower than compilation of the independent parts.
Obviously, the precise location of this threshold depends on the compiler and the optimization flags you pass to it, but I have seen a compiler take over half an hour on a single huge source file. You don't want to have such an obstacle in your change-compile-test loop.
Make no mistake: Even though it comes with all these problems, there are people who use .c file concatenation in practice, and some C++ programmers get pretty much to the same point by moving everything into templates (so that the implementation is found in the .hpp file and there is no associated .cpp file), letting the preprocessor do the concatenation. I fail to see how they can ignore these problems, but they do.
Also note that many of these problems only become apparent with larger project sizes. If your project is less than 5000 lines of code, it hardly matters how you compile it. But when you have more than 50000 lines of code, you definitely want a build system that supports incremental and parallel builds. Otherwise, you are wasting your working time.
With modularity, you can share your library without sharing the code.
For large projects, if you change a single file, you would end up compiling the complete project.
You may run out of memory more easily when you attempt to compile large projects.
You may have circular dependencies between modules; modularity helps in managing those.
There may be some gains in your approach, but for languages like C, compiling each module makes more sense.
Because splitting things up is good program design. Good program design is all about modularity, autonomous code modules, and code re-usability. As it turns out, common sense will get you very far when doing program design: Things that don't belong together shouldn't be placed together.
Placing non-related code in different translation units means that you can localize the scope of variables and functions as much as possible.
Merging things together creates tight coupling, meaning awkward dependencies between code files that really shouldn't even have to know about each other's existence. This is why a "global.h" which contains all the includes in a project is a bad thing, because it creates a tight coupling between every non-related file in your whole project.
Suppose you are writing firmware to control a car. One module in the program controls the car's FM radio. Then you re-use the radio code in another project, to control the FM radio in a smart phone. And then your radio code won't compile because it can't find brakes, wheels, gears, etc. Things that don't make the slightest sense for the FM radio, let alone the smart phone, to know about.
What's even worse is that if you have tight coupling, bugs escalate throughout the whole program, instead of staying local to the module where the bug is located. This makes the bug consequences far more severe. You write a bug in your FM radio code and then suddenly the brakes of the car stop working. Even though you haven't touched the brake code with your update that contained the bug.
If a bug in one module breaks completely non-related things, it is almost certainly because of poor program design. And a certain way to achieve poor program design is to merge everything in your project together into one big blob.
Header files should define interfaces - that's a desirable convention to follow. They aren't meant to declare everything that's in a corresponding .c file, or a group of .c files. Instead, they declare all functionality in the .c file(s) that is available to their users. A well designed .h file comprises a basic document of the interface exposed by the code in the .c file even if there isn't a single comment in it. One way to approach the design of a C module is to write the header file first, and then implement it in one or more .c files.
Corollary: functions and data structures internal to the implementation of a .c file don't normally belong in the header file. You might need forward declarations, but those should be local and all variables and functions thus declared and defined should be static: if they are not a part of the interface, the linker shouldn't see them.
While you can still write your program in a modular way and build it as a single translation unit, you will miss all the mechanisms C provides to enforce that modularity. With multiple translation units you have fine control on your modules' interfaces by using e.g. extern and static keywords.
By merging your code into a single translation unit, you will miss any modularity issues you might have because the compiler won't warn you about them. In a big project this will eventually result in unintended dependencies spreading around. In the end, you will have trouble changing any module without creating global side-effects in other modules.
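A minimal sketch of that mechanism (the module name is invented): only what the header declares is visible to other translation units, and everything marked static never reaches the linker.
// counter.h -- the module's interface
#ifndef COUNTER_H
#define COUNTER_H
void counter_increment(void);
int counter_value(void);
#endif

// counter.c -- the implementation; the static items are invisible to other modules
#include "counter.h"

static int count;                 // internal state, not part of the interface

static void sanity_check(void) {  // internal helper, never seen by the linker
    // e.g. check invariants here
}

void counter_increment(void) {
    sanity_check();
    ++count;
}

int counter_value(void) {
    return count;
}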
The main reason is compilation time. Compiling one small file when you change it takes only a short amount of time. If, however, you compiled the whole project whenever you changed a single line, then you would compile, for example, 10,000 files each time, which could take a lot longer.
If you have, as in the example above, 10,000 source files and compiling one takes 10 ms, then the whole project builds incrementally (after changing a single file) either in (10 ms + linking time) if you compile just the changed file, or in (10 ms * 10,000 + a short linking time) if you compile everything as a single concatenated blob.
If you put all of your includes in one place you would only have to define what you need once, rather than in all your source files.
That's the purpose of .h files, so you can define what you need once and include it everywhere. Some projects even have an everything.h header that includes every individual .h file. So, your pro can be achieved with separate .c files as well.
This means that I don't have to write a header file for each function I create [...]
You're not supposed to write one header file for every function anyway. You're supposed to have one header file for a set of related functions. So your con is not valid either.
This means that I don't have to write a header file for each function I create (because they're already in the main source file) and it also means I don't have to include the standard libraries in each file I create. This seems like a great idea to me!
The pros you noticed are actually a reason why this is sometimes done on a smaller scale.
For large programs, it's impractical. Like other good answers mentioned, this can increase build times substantially.
However, it can be used to break up a translation unit into smaller bits, which share access to functions in a way reminiscent of Java's package accessibility.
The way the above is achieved involves some discipline and help from the preprocessor.
For example, you can break your translation unit into two files:
// a.c
static void utility() {
}

static void a_func() {
    utility();
}

// b.c
static void b_func() {
    utility();
}
Now you add a file for your translation unit:
// ab.c
static void utility();
#include "a.c"
#include "b.c"
And your build system doesn't build either a.c or b.c, but instead builds only ab.o out of ab.c.
What does ab.c accomplish?
It includes both files to generate a single translation unit, and it provides a prototype for the utility function, so that the code in both a.c and b.c can see it, regardless of the order in which they are included, and without requiring the function to be extern.

What is a C header file? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicates:
[C] Header per source file.
In C++ why have header files and cpp files?
C++ - What should go into an .h file?
Is the only reason header files exist in C so that a developer can quickly see what functions are available and what arguments they can take? Or does it have something to do with the compiler?
Why has no other language used this method? Is it just me, or does it seem that having 2 sets of function definitions will only lead to more maintenance and more room for errors? Or is knowing about header files just something every C developer must know?
Header files are needed to declare functions and variables that are available. You might not have access to the definitions (=the .c files) at all; C supports binary-only distribution of code in libraries.
The compiler needs the information in the header files to know what functions, structures, etc are available and how to use them.
All languages need this kind of information, although they retrieve it in different ways. For example, a Java compiler does this by scanning either the class file or the Java source code to retrieve the information.
The drawback of the Java way is that the compiler potentially needs to hold much more information in its memory to be able to do this. This is no big deal today, but in the seventies, when the C language was created, it was simply not possible to keep that much information in memory.
The main reason headers exist is to share declarations among multiple source files.
Say you have the function float *f(int a, int b) defined in the file a.c and used in b.c and d.c. To allow the compiler to properly check arguments and return values, you either put the function prototype in a header file and include it in the .c source files, or you repeat the prototype in each source file.
Same goes for typedef etc.
While you could, in theory, repeat the same declaration in each source file, it would become a real nightmare to properly manage it.
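Concretely, the shared prototype would live in a header included by every user (a.h is an assumed file name here; the prototype is the one from the example above):
// a.h -- declares what a.c defines
#ifndef A_H
#define A_H
float *f(int a, int b);
#endif

// b.c (and likewise d.c) -- including a.h lets the compiler check every call to f
#include "a.h"

void g(void) {
    float *p = f(1, 2);
    (void)p;   // silence the unused-variable warning in this sketch
}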
Some languages use the same approach. I remember Turbo Pascal units being not very different: you would put uses ... at the beginning to signal that you were going to require functions that were defined elsewhere. I can't remember whether that carried over into Delphi as well.
Know what is in a library at your disposal.
Split the program into bite-size chunks for the compiler. Compiling a megabyte of C files simultaneously will take more resources than most modern hardware can offer.
Reduce compiler load. Why should it know about the deep database engine while compiling screen display procedures? Let it learn only of the functions it needs now.
Separate private and public data. This use isn't frequent, but you may implement in C what C++ uses private fields for: each .c file includes two .h files, one with declarations of the private stuff, the other with whatever others may require from the file (see the sketch after this list). Less chance of a namespace conflict, and safer due to the information hiding.
Alternate configs. Makefile decides which header to use, and the same code may service two different platforms given two different header files.
probably more.
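A sketch of the private/public header split mentioned above (module and type names are invented): each .c file of the module includes both headers, while client code includes only the public one.
// sensor.h -- public header: what the rest of the program may use
#ifndef SENSOR_H
#define SENSOR_H
void sensor_start(void);
int sensor_read(void);
#endif

// sensor_priv.h -- private header: shared only among the sensor module's .c files
#ifndef SENSOR_PRIV_H
#define SENSOR_PRIV_H
#include "sensor.h"
struct sensor_state { int raw; int calibrated; };  // internal shared type
void sensor_calibrate(struct sensor_state *s);     // internal helper
#endif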

Good way to organize C source files? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
The way I've always organized my C source was to put struct, macro, and function prototypes in header files and function implementations in .c files. However, I have recently been reading a lot of other people's code for large projects, and I'm starting to see that people often define things like structs and macros in the C source itself, immediately above the functions that make use of them. I can see some benefit to this, as you don't have to go searching around to find the definition of structs and macros used by particular functions; everything is right there, in roughly the same place as the functions that use it. However, I can also see a disadvantage: there is no single central place for struct/macro definitions, as they're scattered through the source code.
My question is, what are some good rules of thumb for deciding when to put the macro/struct definition in the C source code as opposed to the header files themselves?
Typically, everything you put in the header file is part of the interface, while everything that you put in the source file is part of the implementation.
That is, if something in the header file is only ever used by the associated source file, it's an excellent candidate for moving to that source file. This prevents "polluting" the namespace of every file that uses your header with macros and types that they were never intended to use.
Interfaces and implementations is what it's all about.
Addendum to the accepted answer: it can be useful to put an incomplete struct declaration in the header but put the definition only in the .c file. Now a pointer to that struct gives you a private type that you have complete control over. Very useful to guarantee separation of concerns; a bit like private members in C++.
For extensive examples, follow the link.
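A small sketch of that technique (the names are invented): the header exposes only an incomplete struct, so clients can hold pointers to it but cannot touch its members.
// stack.h -- public interface built around an opaque type
#ifndef STACK_H
#define STACK_H
struct stack;                               // incomplete declaration
struct stack *stack_new(void);
void stack_push(struct stack *s, int v);
void stack_free(struct stack *s);
#endif

// stack.c -- the only file that knows the layout (error handling omitted in this sketch)
#include <stdlib.h>
#include "stack.h"

struct stack {
    int *data;
    size_t len, cap;
};

struct stack *stack_new(void) {
    return calloc(1, sizeof(struct stack));
}

void stack_push(struct stack *s, int v) {
    if (s->len == s->cap) {
        s->cap = s->cap ? 2 * s->cap : 8;
        s->data = realloc(s->data, s->cap * sizeof *s->data);
    }
    s->data[s->len++] = v;
}

void stack_free(struct stack *s) {
    free(s->data);
    free(s);
}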
Put your public structures and interface into the .h file.
Put your private bits into the .c file.
If I have more than one .c file that implements a logical set of functionality, I'll put the things that need to be shared among those implementation files into a *p.h file ('p' for private). Client code should not include the *p.h header.
For example, if I have a set of routines that implement an XML parser, I might have the following organization:
xmlparser.h - the public structures, types, enums, and function prototypes
xmlparserp.h - private types, function prototypes, etc. that client code doesn't and shouldn't need
xmlparser.c - implementation of the XML parser
xmlutil.c - some other implementation bits (would include xmlparserp.h)
stuff defining the external interface to the module goes in the header.
stuff just used within the module should stay in the C file.
header files should include the headers required to support their own declarations
headers should wrap themselves in #ifndef NAME_H to ensure a single inclusion per compilation unit
modules should include their own headers to ensure consistency
Lots of people don't know this basic stuff, BTW, and it is required to maintain your sanity on a C project of any size.
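Put together, those rules give a header and module skeleton roughly like this (the parser names are invented for illustration):
// parser.h -- include guard, plus the headers its own declarations need
#ifndef PARSER_H
#define PARSER_H

#include <stdio.h>               // the interface below uses FILE

struct ast;                      // opaque result type
struct ast *parse_file(FILE *in);

#endif // PARSER_H

// parser.c -- includes its own header first, so any mismatch is caught at once
#include "parser.h"

struct ast *parse_file(FILE *in) {
    (void)in;
    return 0;                    // stub body, just for the sketch
}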
In the book "C Style: Standards and Guidelines" by David Straker (available online here), there are some good ideas on file layout and on the division between C files and headers.
You may read chapter 7, and especially section 7.4.
And as John Calsbeek said, you can base your organization on how the header's contents are used.
If a structure, type, macro, etc. is only used by one source file, you can move it there.
You may have header files for the prototypes and a header file for the common declarations (type definitions, etc.).
Mainly, the issue at hand is a sort of encapsulation consideration. You can think of a module's header file as not so much its "overhead", which is what you seem to be doing now, as its public interface declaration, which is presumably how the people whose code you're looking at are seeing it.
From there it follows naturally that what goes in the header file is things that other modules need to know about: prototypes for functions you anticipate being used externally, structs and macros used for interface purposes, extern variable declarations, and so on, while things that are strictly for the module's internal use go in the .c file.
If you define your structs and macros inside the .c file, you won't be able to use them from other .c files.
To make them available, you have to put them in the .h file, so that an #include tells the compiler where to find your structs and macros,
unless you #include "x.c", which you shouldn't do =)

untangling .h dependencies

What do you do when you have a set of .h files that has fallen victim to the classic 'Gordian knot' situation, where #including one .h means you end up including almost the entire lot? Prevention is clearly the best medicine, but what do you do when this has happened before the vendor (!) has shipped the library?
Here's an extension to the question, and this is probably the more pertinent question: should you even attempt to disentangle the dependencies in the first place?
I've done this on a C++ code base that was already split into many libraries (which was a good start).
I had to work out (or guess) which library was the most depended upon while itself depending upon nothing else in the code base. I then processed each library in turn.
I looked at each module (*.cpp file) in turn and made sure that its own header was #included first and commented out the rest, then I commented out all the #includes in that header file and re-compiled just that module to let the compiler tell me what was needed. I would un-comment the first header that seemed to be needed, review that one, and recurse as necessary. It was interesting to see how many headers ended up not being needed.
Where only the name is needed (because you have a pointer or reference), use class name; or struct name;, which is called a forward declaration, and avoid #including the header file.
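For instance (with invented names), a header that only passes a pointer around can use a forward declaration instead of pulling in another header:
// logger.h -- needs only a pointer to struct config, so no #include "config.h"
#ifndef LOGGER_H
#define LOGGER_H

struct config;                           // forward declaration is enough here

void log_init(const struct config *cfg);

#endif // LOGGER_H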
The compiler is very helpful in telling you what the dependencies are when you comment out #includes (you need to recompile with ALL the compilers you have to maintain portability).
Sometimes I had to move modules between libraries so that no pairs or groups of libraries were mutually dependent.
As you get the opportunity, you should refactor the code to reduce includes that are too large; however, that assumes you can achieve some sort of package cohesion. If you disentangle things just to discover that every user of the code has to include all the elements anyway, the end result is the same.
Another option is to use #defines to configure sections on and off. Regardless, for an existing code base the solution is to move toward package cohesion.
Read: http://ivanov.files.wordpress.com/2007/02/sedpackages.pdf and research issues related to package cohesion.
I've untangled that knot a few times, and it generally helps a lot when maintaining a system to reduce the .h dependencies as much as possible. There are decent tools for generating dependency trees (I was using Klocwork at the time).
The downside I found was with conditional compilation. Someone might remove a header file because they think we don't need it, but it turns out that we only don't need it because VxWorks has some screwed-up headers; on Solaris (or any reasonable POSIX system) you do need it.
There is a balance to be struck between an enormous number of finely organized headers and a single header that includes everything. Consider the Standard C library; there are some biggish headers like <stdio.h>, which declares a lot of functions, but they are all related to I/O. There are other headers that are more of a miscellany - notably <stdlib.h>.
The Goddard Space Flight Center guidelines for C are worth hunting down.
The basic rule is that each header should declare the facilities provided by a suitable (usually small) set of source files. The facilities and header should be self-contained. That is, if someone needs the code in header "something.h", then that should be the only header that must be added to the compilation. If there are facilities needed by "something.h" that are not declared in the header, then it must include the relevant headers. That can mean that headers end up including <stddef.h> because one of the functions uses size_t, for example.
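As a small illustration of that self-containment rule (the names are invented): because the interface mentions size_t, the header pulls in <stddef.h> itself rather than relying on its users to do so.
// buffer.h -- self-contained: it compiles even if it is the only header included
#ifndef BUFFER_H
#define BUFFER_H

#include <stddef.h>        // for size_t used below

size_t buffer_capacity(void);
void buffer_reserve(size_t nbytes);

#endif // BUFFER_H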
As #quamrana points out, you can use forward declarations for structures (not classes, since the question is tagged C and not C++) when appropriate, which primarily means when the interface takes pointers and does not need to know the size of the structures or any of their members.
