I'm a C Newb
I write lots of code in higher-level languages (JavaScript, Python, Haskell, etc.), but I'm now learning C for graduate school and I have no idea what I'm doing.
The Problem
Originally I was building all my source in one directory using a makefile, which has worked rather well. However, my project is growing and I would like to split the source into multiple directories (unit tests, utils, core, etc.). For example, my directory tree might look like the following:
.
|-- src
| |-- foo.c
| |-- foo.h
| `-- main.c
`-- test
`-- test_foo.c
test/test_foo.c uses both src/foo.c and src/foo.h. Using makefiles, what is the best/standard way to build this? Preferably, there would be one rule for building the project and one for building the tests.
Note
I know that there are other ways of doing this, including autoconf and other automatic solutions. However, I would like to understand what is happening and be able to write the makefiles from scratch despite its possible impracticality.
Any guidance or tips would be appreciated. Thanks!
[Edit]
So the three solutions given so far are as follows:
Place globally used header files in a parallel include directory
Use the path in the #include statement, as in #include "../src/foo.h"
Use the -I switch to inform the compiler of include locations
So far I like the -I switch solution best because it doesn't involve changing the source code when the directory structure changes.
For test_foo.c you simply need to tell the compiler where the header files can be found. E.g.
gcc -I../src -c test_foo.c
Then the compiler will also look in this directory to find the header files. In test_foo.c you then write:
#include "foo.h"
EDIT:
To link against foo.c (actually against foo.o), you need to mention it in the object file list. Assuming you already have the object files, you can then do:
gcc test_foo.o ../src/foo.o -o test
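Putting the two together, a minimal makefile for the layout in the question might look like the following (a sketch; the project and run_tests target names are my own, not anything standard):
CC     = gcc
CFLAGS = -Isrc

# one rule for the project, one for the tests, as requested
project: src/main.o src/foo.o
	$(CC) -o $@ $^

run_tests: test/test_foo.o src/foo.o
	$(CC) -o $@ $^

# pattern rule: any .o is built from the matching .c
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@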
I also rarely use the GNU autotools. Instead, I'll put a single hand-crafted makefile in the root directory.
To get all headers in the source directory, use something like this:
get_headers = $(wildcard $(1)/*.h)
headers := $(call get_headers,src)
Then, you can use the following to make the object-files in the test directory depend on these headers:
test/%.o : test/%.c $(headers)
	gcc -std=c99 -pedantic -Wall -Wextra -Werror $(flags) -Isrc -g -c -o $@ $<
As you can see, I'm no fan of the built-in rules. Also note the -I switch.
Getting a list of object-files for a directory is slightly more complicated:
get_objects = $(patsubst %.c,%.o,$(wildcard $(1)/*.c))
test_objects = $(call get_objects,test)
The following rule would make the objects for your tests:
test : $(test_objects)
The test rule shouldn't just build the object files, but the executables. How to write the rule depends on the structure of your tests: e.g. you could create an executable for each .c file, or just a single one that runs all the tests.
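For instance, here is a sketch of the single-executable variant, reusing get_objects from above; filtering out src/main.o is an assumption that src/main.c holds the program's main(), which must stay out of the test link:
src_objects := $(filter-out src/main.o,$(call get_objects,src))

run_tests : $(test_objects) $(src_objects)
	gcc $(flags) -o $@ $^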
A common convention is for a header file used by a single C file to be named the same as that C file and kept in the same directory, and for header files used by many C files (especially those used by the whole project) to live in an include directory parallel to the C source directory.
Your test file should just include the header files directly using relative paths, like this:
#include "../src/foo.h"
Related
I've written several C text-processing functions that I've placed in the files string_functions.c and string_functions.h.
I was using these functions for one project and that worked out well. Now I want to use these same functions in a completely different project at the same time. I'm using gcc on Debian.
Is there a good way to use the same C source code in multiple projects at the same time? The projects are in different sub-directories under the same parent directory.
How do I structure the make files to do this?
Or do I just place a copy of string_functions.c(h) in both projects? This seems like it would make it harder to maintain the source code.
The best way to do this is to build your C files (.h and .c) into a shared library. There are many tutorials available on how to do this with gcc.
Once the shared library is built, you can then link it into many other projects.
Briefly, these are the steps.
Ensure your string_functions.c includes string_functions.h and builds, of course.
Then compile position-independent code (that's what -fPIC is for):
$ gcc -Wall -fPIC -c string_functions.c
Finally, build your shared library like this (the lib prefix is needed so that -l can find it later):
$ gcc -shared -o libmy_stringfunctions.so string_functions.o
To link against your new shared library from some other program, pass the directory you put it in to the linker with -L; at run time, make sure that same directory is in your LD_LIBRARY_PATH so the dynamic loader can find it. You may link using something like:
$ gcc my_otherprogram.c -L/path/to/my/lib -lmy_stringfunctions
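At run time the dynamic loader also has to find the library, so before running the resulting binary (a.out here, since no -o was given) you would do something like:
$ export LD_LIBRARY_PATH=/path/to/my/lib:$LD_LIBRARY_PATH
$ ./a.out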
As pointed out, the include files (.h) used by a shared library should be put in some directory, and that location added to the include search path using the -I option:
$ gcc my_otherprogram.c -I/path/to/include/files -L/path/to/my/lib -lmy_stringfunctions
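The same steps can be captured in a small makefile; a sketch, using the file names from above:
# the lib prefix is what lets -lmy_stringfunctions find it
libmy_stringfunctions.so: string_functions.o
	gcc -shared -o $@ $^

string_functions.o: string_functions.c string_functions.h
	gcc -Wall -fPIC -c string_functions.c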
If this is how your directory looks:
/parent
/project1
...
string_functions.h
string_functions.c
/project2
...
string_functions.h
string_functions.c
Then all you have to do is store the code in a common location and point to that location when building. This is the standard way of doing it for custom libraries installed in /opt/, for example.
Hence, one suggestion is to do your directory structure like this:
/parent
/include
string_functions.h
string_functions.c
/project1
...
/project2
...
And when building your respective projects, you include that search path when compiling (using the -I flag):
gcc mainfile.c -I/parent/include <other options>
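Each project's makefile can then name the shared sources directly; a sketch for project1 (mainprog is an arbitrary name, and mainfile.c stands in for whatever that project's own sources are):
COMMON := ../include

mainprog: mainfile.c $(COMMON)/string_functions.c
	gcc -I$(COMMON) -o $@ $^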
In a large C project, I have a top Makefile and many sub-Makefiles in different subdirectories. I need to collect all dependencies of the compilation. For that, I add -MMD to CFLAGS and get a bunch of .d dependency files.
These .d files are scattered across the subdirectories. Also, the dependencies are written sometimes as absolute paths, sometimes as paths relative to the compilation directory, and sometimes containing symbolic links. I have written a script which finds all .d files, traverses their directories, and resolves all the paths it finds. This works, but with tens of thousands of dependency files this dependency collection takes about as long as the compilation itself! (which is too long to wait :) )
Is there a faster way to get all dependencies in a single file? This is ANSI C, GCC and Linux if that matters. Thanks in advance.
Instead of -MMD, you can use -MM, which sends the dependencies to standard output.
You can then collect all the output to some dependency file in the top level directory with
gcc -MM ... file.c >>$(top)/all.d
If post processing is the only reason for collecting the output in one file, you can filter the output with a pipe
gcc -MM ... file.c | sh filter.sh >file.d
and keep the dependency files separate.
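For example, each sub-Makefile could contribute its dependencies with a rule along these lines (a sketch; $(top) and $(SRCS) are assumed variables, and the top-level Makefile should truncate all.d before descending into the subdirectories):
depend: $(SRCS)
	$(CC) -MM $(CPPFLAGS) $^ >> $(top)/all.d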
If the path to some local include file (defs.h) or the main source is important, you can force gcc to include a path by giving the appropriate -I option, e.g.
gcc -MM -I$(top)/path/to ... $(top)/path/to/file.c >>$(top)/all.d
or
gcc -MM -I$(top)/path/to ... $(top)/path/to/file.c | sh filter.sh >file.d
Instead of
file.o: file.c defs.h
gcc will emit
file.o: /absolute/path/to/file.c /absolute/path/to/defs.h
This works with relative paths as well, of course.
You can create the dependency files along with the first compile run.
During the first run, the objects do not exist yet, so the compiler will be invoked anyway. Create empty dependency files first, then update them while compiling.
It should be possible to extend the minimal Makefile for a single-directory C++ project to work with subdirectories.
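A common way to do that (a sketch, not taken from the question's makefiles) is to emit each .d file as a side effect of compilation with -MMD, then include whatever .d files exist so far; OBJS is an assumed variable listing all object files:
# -MP adds phony targets for headers so deleted headers don't break make
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

-include $(OBJS:.o=.d)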
I'm kind of lost in the Makefile business and I'm trying to come to terms with it. I would love it if someone could make it clear using an example I'm currently programming.
I have these files:
my-bit-vector.h -> a header file included in erato.c and ppm.c
ppm.c -> a .c file which includes my-bit-vector.h and error.h
error.h -> a header file included in erato.c and ppm.c
error.c -> a .c file which includes error.h and defines the functions in it
erato.c -> a .c file which includes my-bit-vector.h and error.h
I need to link these together into one executable file. How would I go about doing that via Makefile? I hope I didn't forget something. Could you please help?
The contents of a Makefile, put simply, are one or more targets (the things you want built). Each target has dependencies (if any dependency doesn't exist yet, it must be built, and if it does exist but is newer than its target, the target must be rebuilt), and rules (the commands to build the target, presumably from the dependencies).
In your case, let's say your final output is a program called program. You've identified the sources to build it, but you don't build an executable directly from sources; you build it from object files. You could start your makefile like this:
program: ppm.o error.o erato.o
cc -o program ppm.o error.o erato.o
WARNING The spacing on rule lines (the cc command line shown above) requires a TAB, not just spaces!
That's enough to start but not enough to be right. You'll notice that there's no target:dependency/rules for the .o's yet, but it still works because Make has some built-in rules.
With this makefile, if you type "make" twice, the first time you'll see everything gets built and the second time it won't -- nothing changed, so no rebuild is needed. Unfortunately, if you edit your .h's now, the .o's still won't be rebuilt, so let's fix that:
program: ppm.o error.o erato.o
cc -o program ppm.o error.o erato.o
ppm.o: ppm.c my-bit-vector.h error.h
error.o: error.c error.h
erato.o: erato.c my-bit-vector.h error.h
Now you've got your dependencies set up so that make rebuilds whatever must be rebuilt when headers change. There are no rules on those source builds because the built-in rule here is (often) sufficient. You can override the built-in rule if necessary, of course.
Here, when you type "make", the tool will find the first target (program) and inspect its dependencies. It will then make sure each of those dependencies is up to date (based on their target:dependency / rule definitions), recursively, as long as there are targets needing to be considered for building. Finally it will apply the rules for this target to complete its build.
There's much more that can be done with makefiles, this is just a brief intro.
program: ppm.o error.o erato.o
gcc ppm.o error.o erato.o -o program
ppm.o: ppm.c
gcc -c ppm.c -o ppm.o
error.o: error.c
gcc -c error.c -o error.o
erato.o: erato.c
gcc -c erato.c -o erato.o
Stuff before the ":" is the target. Stuff after the ":" are the prerequisites required by that target.
So if you run "make program", make looks for a target named "program". That target requires ppm.o, which is also defined as a target in the makefile, so make executes that target first. The target ppm.o requires ppm.c, which has no target defined in the makefile, so it is probably a file. I hope this explains the basic functionality to you.
http://mrbook.org/tutorials/make/
is a really good tutorial for beginners, with some basic makefile examples.
I have source code in one directory and have a makefile in a different directory. I am able to compile the code using the make system's vpath mechanism. The .o files are being created in the same folder where the makefile is. But I want to move those .o files to a different directory called obj. I tried the following:
vpath %.o obj
However, they are still being created in the same folder as the makefile. Can anyone help me to solve this issue?
Here are some highlighted lines of the makefile:
PATH_TO_OBJ := ../obj
SRC := (the .c files)
OBJS := $(SRC:.c=.o)
.c.o = $(CC) $(CFLAGS) -c
exe: cc $(LFLAGS) -o $(PATH_TO_OBJ) $(SRC)
Even after this, the .o files are still created in the same folder as the Makefile; they do not go to obj.
The -o option tells the gcc compiler where to write the output file it produces.
gcc main.c -c -o path/to/object/files/main.o
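Inside a makefile, the same idea becomes a pattern rule that names the output path explicitly; a sketch, assuming SRC lists your .c files:
OBJS := $(patsubst %.c,obj/%.o,$(SRC))

obj/%.o: %.c
	@mkdir -p $(@D)
	$(CC) $(CFLAGS) -c $< -o $@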
Make's VPATH is only for finding source files. The placement of object files is up to the thing that is building them. There's a nice description at http://mad-scientist.net/make/vpath.html (I see someone beat me to posting this in a comment).
The *BSD build systems use variants of make that can place object files (and other generated files, including C sources from lex and yacc variants) in /usr/obj automatically. If you have access to that version of make, that will likely be a good way to deal with whatever underlying problem you are trying to solve.
I have a Linux GNU C project that requires building output for two different hardware devices, using a common C source code base, but different makefiles. Presently I have two makefiles in the same folder, one for each device, and when I make a change to the code, I have to first do "make clean" to make the first model, then "make clean" to make the second model. This is because they use different compilers and processors. Even if a code file didn't change, I have to recompile it for the other device.
What I would like to do is use a different folder for the second model, so it stores a separate copy of *.d and *.o files. I would not have to "make clean", only recompile the changed sources. I have looked at makefile syntax, and being new to Linux, can only scratch my head at the cryptic nature of this stuff.
One method I'm considering would copy the .c & .h files from the model_1 folder into the model_2 folder. Can someone provide me with a simple makefile that will copy only newer *.c and *.h files from one folder to another?
Alternatively, there must be a way to have a common source folder, and separate output folders, so duplicated source files are not required. Any help to achieve that is appreciated. I can provide the makefiles if you want to look at them.
You want the generated files (objects and dependencies) put into a separate folder for each build type as it compiles. Here's what I have, which may work for you:
$(PRODUCT1_OBJDIR)/%.o $(PRODUCT1_OBJDIR)/%.d: %.cpp
	@mkdir -p $(@D)
	$(CXX) $(PRODUCT1_DEPSFLAGS) $(CXXFLAGS) $(INCLUDE_DIR) -c $< -o $(PRODUCT1_OBJDIR)/$*.o
$(PRODUCT2_OBJDIR)/%.o $(PRODUCT2_OBJDIR)/%.d: %.cpp
	@mkdir -p $(@D)
	$(CXX) $(PRODUCT2_DEPSFLAGS) $(CXXFLAGS) $(INCLUDE_DIR) -c $< -o $(PRODUCT2_OBJDIR)/$*.o
$(PRODUCT1_OBJDIR) and $(PRODUCT2_OBJDIR) are variable names for the directories where you wish to have the generated files stored. This will check for changes to dependencies and recompile if needed.
If you still have problems, get back with feedback and I'll try to sort you out.
You could compile your source files to object files in different directories ("folder" is not really the appropriate word on Unix). You just have to set appropriate make rules. And you might use other builders like omake, scons, ...
You could use remake to debug your GNU Makefiles. Inside one you could have a rule like:
$(OBJDIR)/%.o: $(SRCDIR)/%.c
	$(COMPILE.c) $< -o $@
And you could set (earlier in your Makefile) variables with e.g.
OBJDIR:=obj-$(shell uname -m)
or something better
I do suggest reading the GNU make manual; it has an introduction to makefiles.
This can be easily achieved with makepp's repository mechanism. If you call makepp -R../src ARCH=whatever then it will symbolically link all parts of ../src under the current directory and compile here.
You can even create a file .makepprc and put in any options specific to this architecture, so you'll never get confused about which command to call where.
If your different architectures have identically produced files (like generated sources), you can even build in ../src and the other architecture will pick up everything that doesn't depend on your current compile options.
There is much more to makepp. Besides doing almost all that GNU make can, it has lots more useful features, and you can even extend your makefiles with some Perl programming.