Makefile Project - C

I would like to know, please, if there is a way to add source files automatically from a specific directory without writing them out in the Makefile.am. So I need an option, if one exists. Thanks for the help.

Some make implementations have features that address this objective -- GNU make's $(wildcard) function, for example -- but there is no portable mechanism for it, and the Autotools are all about portability. It would therefore be poor Autotools style to rely on such a mechanism.
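For illustration, here is what that non-portable route looks like in a plain GNU makefile (a sketch only, not valid in a portable Makefile.am; the program and directory names are assumptions):

# GNU make only: $(wildcard) globs the directory at build time
SRCS := $(wildcard src/myprog/*.c)
OBJS := $(SRCS:.c=.o)

myprog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)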
Even if you were satisfied to ignore portability implications, there would still be a limited scope for this, because Makefile.am associates source files with the target(s) to which they contribute. You would need there to be a direct link between the source directory in question and the target, or possibly between more specific filename patterns and targets. Sometimes such associations exist, but other times they don't.
Portability questions aside, I am not much of a fan of this sort of thing in general, as it lays a trap. If you engage such a mechanism then it is hard to add a source file without breaking the build, for example. The one-time cost of adding all the source names when autotooling an existing project is not that bad -- I've done it for several large projects -- and the cost to maintain your Makefile.am when you add sources is trivial.
But if you nevertheless want something along these lines then I would suggest engaging a source generator. It might look like this:
Makefile.am
bin_PROGRAMS = myprog
include myprog_sources.am
generate_myprog_sources.sh
#!/bin/bash
# Glob all .c files under src/myprog into an array, then emit them
# as an Automake _SOURCES definition into myprog_sources.am
myprog_sources=(src/myprog/*.c)
echo "myprog_SOURCES = ${myprog_sources[*]}" > myprog_sources.am
That uses a separate shell script to generate Automake code for a _SOURCES definition naming all the .c files in the specified directory (that is, src/myprog). That's written to its own file, myprog_sources.am, for simplicity and safety, and the Makefile.am file uses an Automake include directive to incorporate it. When you want to adjust the build for a different complement of source files, you run the script. You can also make temporary changes to the source list by manually updating it without running the script.
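Usage is then a matter of re-running the generator and letting the Autotools pick the fragment up again (a sketch; Automake processes include directives at automake time, so the tool chain has to be re-run):

$ ./generate_myprog_sources.sh    # regenerate myprog_sources.am
$ autoreconf                      # re-run automake so the include is re-read
$ ./configure && make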
This is not quite the level of hands-free automation that you asked for, but it's still pretty automatic, and it's better suited to the tools you are using.

Related

Why do we use the "make" command when "cc ex1.c -o ex1" compiles the code written in the "ex1.c" file?

I was learning C programming on Linux when I came across these lines:
$ make ex1
cc ex1.c -o ex1
My question is: why do we have to use make ex1? Isn't the cc command already building the program and the other necessary files?
With the exception of a small set of very simple programs, almost all real-life C programs are built from multiple modules, header files, and external libraries, sometimes spanning multiple folders. In some cases, additional code may be linked in using different tools (e.g., code generators).
For those cases, a single 'cc' command is not going to work. The next solution is to automate the build using a shell script. However, such a script can be time-consuming to run, and is almost impossible to maintain.
For building C programs, make provides many benefits on top of a simple shell build script. This is my personal "top 3":
Incremental build - when code files are modified, make can identify and execute the minimal set of build instructions, instead of rebuilding the whole code base. This can provide a major efficiency boost to developers.
Rule-based build - make uses rules to produce targets. Once you define a rule (one obvious rule: compile a ".c" file to a ".o"), it is applied consistently to all files; see the sketch after this list.
Complete build process - make provides setup for the whole process, including installation of code, cleanup, packaging, tests, etc. Very importantly, make can integrate (almost) any Unix tool into the build process - code generation, etc.
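As a sketch of what such a rule looks like in a GNU makefile (the file names are assumptions):

CC = cc
CFLAGS = -Wall -O2

# one pattern rule compiles every .c file to a .o the same way
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

ex1: ex1.o util.o
	$(CC) -o $@ $^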
Needless to say, there are other build tools which provide additional or alternative benefits - CMake, Gradle, and SCons, to name a few.
For a one-file project, they come out to about the same. However, real-world projects tend to have tens, hundreds, or thousands of files and build tens or hundreds of libraries and executables, possibly with different settings etc. Managing all of this without storing the compilation commands somewhere would be impossible, and a Makefile is a very useful "somewhere."
Another important property of make is that it does timestamp checks on files and only re-runs the compilation/linking commands whose outputs are outdated, i.e. at least one of their inputs is newer than the outputs. Without such checks, you'd have to manually remember which files to recompile when you change something (especially difficult when the changed file is a header), or always recompile everything, increasing build times by orders of magnitude.
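A minimal sketch of how those timestamp checks look in a makefile (util.h is an assumed header):

# ex1 is rebuilt only if ex1.c or util.h is newer than the existing ex1
ex1: ex1.c util.h
	cc ex1.c -o ex1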

Conditional compilation of a source file that must be included only in debug builds

I have been reading best practices for conditional compilation, but I haven't found an example for the situation I have.
I have a C project whose target platform is a specific device (different from a PC). I have a source file that only contains functions for integration testing and things like that. I want this file to be compiled and linked only in DEBUG builds, not in the RELEASE ones.
My question is which of the following options is a better practice:
A *.c file like the following:
Tests.c
---------
#ifndef NDEBUG
// All testing functions
...
#endif
And include that file in both DEBUG and RELEASE builds.
Or
Checking whether or not NDEBUG is defined from within the project's Makefile / CMakeLists.txt, and including the mentioned source file accordingly.
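(For concreteness, that second option might look like this in a GNU makefile; the file and variable names are assumptions:)

SRCS = main.c app.c
ifeq ($(BUILD),debug)          # select with `make BUILD=debug`
SRCS += Tests.c
CFLAGS += -g
else
CFLAGS += -DNDEBUG -O2
endif

myprog: $(SRCS:.c=.o)
	$(CC) $(CFLAGS) -o $@ $^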
My personal opinion (!) on this:
The first approach -- #ifndef NDEBUG -- is preferable.
In the beginning, there was cc *.c.
Then came the adding of appropriate options.
Then came build systems, which figured out which of those *.c files actually needed recompiling, and relieved you of having to remember what the appropriate options were.
Then came more sophisticated build systems, which could figure out the appropriate options for you.
Over time, build systems have become smarter, and can hold significant logic. However, I feel that this intelligence should remain focused on their primary function (see above), and that -- in the end -- a cc *.c should still be doing its job.
Build systems get outdated, or replaced. The next guy might not even know your build system of choice; he should still be able to make heads or tails of your project without having to dig through your build system's logic as well.
Setting / checking NDEBUG is C, and anyone with a passing familiarity with the language (and <assert.h>) will immediately recognize what you're intending to do there.
Figuring out from your build system why a specific source file is included in one build type but not in others is not so intuitive, and might get lost altogether when somebody steps up, tosses your CMakeLists.txt out because he likes Jam better, and builds that from scratch. That person might end up wondering why all those tests are cluttering up his release code, and why you weren't smart enough to make them Debug-only (not realizing you did do exactly that, in your build system).
In fact, I don't see why you should choose only one. In my opinion, both options can and should be used together.
If some file is totally unnecessary in a release build, then you have the perfect reason to exclude it from the build process completely. But having some preprocessor guards in the source code (be it #ifdef or #error) is by all means very useful.
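As a sketch of the #error variant mentioned above, the file can refuse outright to be part of a release build:

#ifdef NDEBUG
#error "Tests.c must not be compiled into release builds"
#endif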
I believe the first approach (#ifndef NDEBUG) is better. Why?
Because I believe that each piece of encapsulation or dependency management should be performed at the lowest level possible. If the build system can go about its job without knowing that this file is compiled only for DEBUG builds, then we've just successfully removed a dependency from our project.
Building on the last argument: if this file is needed by an additional project in the future, you would otherwise have two places that behave conditionally instead of a single one.

Why aren't changes to header files accounted for in the Makefiles of mature C projects?

I have been reading up on make and looking at the Makefiles for popular C projects on GitHub to cement my understanding.
One thing I am struggling to understand is why none of the examples I've looked at (e.g. lz4, linux and FFmpeg) seem to account for header file dependencies.
For my own project, I have header files that contain:
Numeric and string constants
Macros
Short, inline functions
It would seem essential, therefore, to take any changes to these into account when determining whether to recompile.
I have discovered that gcc can automatically generate Makefile fragments from dependencies as in this SO answer but I haven't seen this used in any of the projects I've looked at.
Can you help me understand why these projects apparently ignore header file dependencies?
I'll attempt to answer.
The source distros of some projects include a configure script which creates a makefile from a template (or whatever).
So the end user who needs to recompile the package for his/her target just has to do:
$ configure --try-options-until-it-works
$ make
Things can go wrong during the configure phase, but this has nothing to do with the makefile itself. The user has to download dependencies, adjust paths or configure switches, and run it again until the makefile is successfully generated.
But once the makefile is generated, things should go pretty smoothly from there for the user, who only needs to build the product once in order to use it.
A small portion of users will need to change some source code. In that case, they'll have to clean everything, because the provided makefile isn't how the actual developers manage their builds. They may use other systems (Code::Blocks, Ant, gprbuild...), and just provide the makefile to automate production from scratch and avoid depending on a complex production system. make is fairly standard, even on Windows/MinGW.
Note that there are some filesystems which provide build auditing (ClearCase), where the dependencies are managed automatically (clearmake).
If you see the makefile as a batch script to build all the sources, you don't need to bother adding a dependency system using:
a template makefile
a gcc -MM command to append dependencies to it (which takes time)
Note that you can build it yourself with some extra work (adding a depend target to your makefile), as sketched below.
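For reference, here is the usual gcc/GNU-make way to automate this (a sketch; the file names are assumptions):

SRCS = main.c util.c
OBJS = $(SRCS:.c=.o)

# -MMD writes a .d dependency fragment per object; -MP adds phony
# targets so deleted headers don't break the build
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# pull in the generated fragments, silently if they don't exist yet
-include $(OBJS:.o=.d)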

Can SCons keep track of linking dependencies?

I'm currently working on a C project with one main executable and one executable for each unit test. In the SConstruct file I specify the dependencies for each executable, something like
env.Program(['Main.c', 'Foo.c', 'Bar.c', 'Baz.c', ...])
env.Program(['FooTest.c', 'Foo.c', 'Baz.c', ...])
env.Program(['BarTest.c', 'Bar.c', 'Baz.c', ...])
...
This, however, is error prone and inelegant since the dependencies could just as well be tracked by the build tool, in this case SCons. How can I improve my build script?
What you are asking for is some sort of tool that
1) Looks at the headers you include
2) Determines from the headers which source files need building
3) Rinse and repeat for all the source files you've just added
Once it's done that, it'll have to look over the tree it has generated and try to squish some of it into sensible libraries, assuming you haven't done that already (and judging by the tone of both questions, that exercise seems to have been viewed as academic rather than as a standard part of good software development).
There might be some mileage in a tool that says "You've included header A/B.h, so you'll need libA in your link line" but even that is going to have plenty of gotchas depending on how different people build and link their libraries.
But what you've asked is, in effect, how to define a build script that writes your build script. That's something you should be doing for yourself.
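That said, you can at least cut down the repetition by hand, for example by factoring the shared sources into a static library (a sketch; the names are assumptions):

# SConstruct: shared code goes into one library, and each program
# then lists only the sources that are unique to it
env = Environment()
env.Library('common', ['Foo.c', 'Bar.c', 'Baz.c'])
env.Program('main', ['Main.c'], LIBS=['common'], LIBPATH=['.'])
env.Program('footest', ['FooTest.c'], LIBS=['common'], LIBPATH=['.'])
env.Program('bartest', ['BarTest.c'], LIBS=['common'], LIBPATH=['.'])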

Reliable portability for C code without relying on the preprocessor

Relying on the preprocessor and predefined compiler macros for achieving portability seems hard to manage. What's a better way to achieve portability for a C project? I want to put environment-specific code in headers that behave the same way. Is there a way to have the build environment choose which headers to include?
I was thinking that I'd put the environment-specific headers into directories for specific environments. The build environment would then just copy the headers from the platform's directory into the root directory, build the project, and then remove the copies.
That depends entirely on your build environment of course and has nothing to do with C itself.
One thing you can try is to set up your include paths in your makefiles thus:
INCDIRS=-I ./solaris
#INCDIRS=-I ./windows
#INCDIRS=-I ./linux
:
CC=gcc $(INCDIRS) ...
and uncomment the one you're working on. Then put your platform specific headers in those directories:
./solaris/io.h
./windows/io.h
./linux/io.h
You could, at a pinch, even have different platform makefiles such as solaris.mk and windows.mk and not have to edit any files at all.
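A sketch of that variant (the shared fragment's name is an assumption):

# solaris.mk -- build with `make -f solaris.mk`
INCDIRS = -I ./solaris
include common.mk    # common.mk holds the platform-independent rules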
But I don't quite see the reason for your aversion to the preprocessor; this is one of the things it's good at, and people have been doing it successfully for decades. On top of that, what happens when your code, not just your headers, needs to change per platform? You can abstract the code into header files, but that seems far harder to manage than a few #ifdefs to me.
This is basically what a configure script does - i.e., work out the specifics of the system and then modify the makefile for that system. Have a look at the documentation for GNU Autoconf; it might do what you want, although I'm not sure how portable it would be to Windows, if that is necessary.
pax's answer is good, but I'll add that you can:
Mix and match - handle some system dependencies in the build system (big things, generally) and others with the preprocessor (small things).
Confine the trouble - define a thin glue layer between your code and the system-dependent bits, and stick all the preprocessor cruft in there. So you always call MyFileOpen(), which calls fopen on Unix and something else on Windows. Now the only part of your code that has any preprocessor cruft related to file opening is the MyFileOps module.
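A minimal C sketch of such a glue layer, using the answer's hypothetical MyFileOpen() and MyFileOps module:

/* MyFileOps.c -- the only module that knows about per-platform file APIs */
#include <stdio.h>

FILE *MyFileOpen(const char *path, const char *mode)
{
#ifdef _WIN32
    FILE *f = NULL;
    fopen_s(&f, path, mode);   /* MSVC's checked variant on Windows */
    return f;
#else
    return fopen(path, mode);  /* plain fopen on Unix */
#endif
}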
