I have a medium-size project which consists of many *.c unit files.
In a "normal" compilation exercise, the program is built from its *.o object files, which are passed as pre-requisite of the main program in the Makefile recipe. This works well for parallel builds : with make -j, all these object files are compiled in parallel, then linked together at the end. It makes the whole build experience a lot faster.
However, in other cases, the list of prerequisites is passed as a list of *.c unit files, not *.o object files. The original intention is to not build these object files, as they could pollute the cache.
Now, this could probably be done differently, but for this specific project, the Makefile is an immutable object, so it can't be updated, and we have to live with it.
In this case, using make -j is not effective, as gcc receives the full list of units directly on a single command line. Which means make is no longer able to organize any parallelism.
The only opportunity I've got left is to pass flags and parameters to make. I was trying to find one which would make gcc compile a list of units in parallel, internally, but I couldn't find any. Searching around on the Internet, I found conjectures stating that "since make can do parallel builds, gcc doesn't need to replicate this functionality", but no solution.
So the question is: how to deal with it?
Assuming a compilation line like gcc *.c -o final_exe, which can be altered through the standard variables (CC, CFLAGS, CPPFLAGS, LDFLAGS), is there any option available to make it build these units in parallel?
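For illustration, the kind of recipe described (an assumed shape, not the actual, immutable Makefile; the recipe line starts with a tab) looks roughly like this, and since make only sees a single command, -j has nothing to parallelize:

final_exe: $(wildcard *.c)
	$(CC) $(CFLAGS) $(CPPFLAGS) $^ -o $@ $(LDFLAGS)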
Related
I have this program called parser which I compiled with the -g flag. This is my makefile:
parser: parser.c header.h
	gcc -g parser.c -o parser
clean:
	rm -f parser a.out
The code for one function in parser.c is:
int _find(char *html, struct html_tag **obj)
{
    char temp[strlen("<end") + 1];
    memcpy(temp, "<end", strlen("<end") + 1);
    ...
    return 0;
}
What I'd like to see when I debug the parser: can I also change lines of code after hitting a breakpoint, while stepping with n through the code of the above function? If that's not the job of gdb, is there any open-source solution for actually changing the code, and possibly saving it, so that when I run the next statement the changed statement (possibly with a different array index) is what executes before I do n? Is there an open-source tool for this, or can it be done in gdb, and do I need some special compile options?
I know I can assign values to variables at runtime in gdb, but is that it? Is there anything like actually being able to change the source as well?
Most C implementations are compiled. The source code is analyzed and translated to processor instructions. This translation would be difficult to do on a piecewise basis. That is, given some small change in the source code, it would be practically impossible to update the executable file to represent those changes. As part of the translation, the compiler transforms and intertwines statements, assigns processor registers to be used for computing parts of expressions, designates places in memory to hold data, and more. When source code is changed slightly, this may result in a new compilation happening to use a different register in one place or needing more or less memory in a particular function, which results in data moving back or forth. Merging these changes into the running program would require figuring out all the differences, moving things in memory, rearranging what is in what processor register, and so on. For practical purposes, these changes are impossible.
GDB does not support this.
(Apple's developer tools may have some feature like this. I saw it demonstrated for the Swift programming language but have not used it.)
Separating a program into header and source files might allow faster compilation if the files are handed to a smart compilation manager, which is what I am working on.
Would this work in theory:
Creating a thread for each source file and
compiling each source file into an object file at once,
then linking those object files together.
It still has to wait for the slowest source file to finish.
This shouldn't be a problem, as a simple n != nSources counter can be implemented that increments for each .o generated.
I don't think GCC does that by default. When it invokes the assembler it seems to process the files one by one.
Is this a valid approach and how could I optimize compilation time even further?
All modern (as in post-2000-ish) implementations of make offer this feature. Both GNU make and the various flavours of BSD make will compile source files in parallel when given the -j flag. It just requires that you have a makefile, of course. Ninja also does this by default. It vastly speeds up compilation.
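As a minimal sketch (the file names are invented, and the pattern-rule syntax is GNU make's), a makefile with one rule per object gives make one compile job per source file, which -j can then run concurrently:

OBJS = foo.o bar.o baz.o

# the link step runs once, after every object has been built
final_exe: $(OBJS)
	$(CC) $(LDFLAGS) $(OBJS) -o $@

# one compile job per source file (recipe lines start with a tab)
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

Invoked as make -j4 (or make -j"$(nproc)" on Linux), the compile jobs run in parallel and only the final link waits for all of them.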
So I have been studying this modular programming approach that compiles each file of the program separately. Say we have FILE.c and OTHER.c that are both part of the same program. To compile them, we do this at the prompt:
$ gcc FILE.c OTHER.c -c
using the -c flag to compile them into .o files (FILE.o and OTHER.o), and only when that is done do we turn them into an executable using
$ gcc FILE.o OTHER.o -o
I know I can just do it in one step and skip the middle part, but everywhere I look they do it this way first and then turn it into an executable, which I can't understand at all.
May I know why?
If you are working on a project with several modules, you don't want to recompile all modules if only some of them have been modified. The final linking command is, however, always needed. Build tools such as make are used to keep track of which modules need to be compiled or recompiled.
Doing it in two steps also separates the compiling and linking phases more clearly:
The output of the compiling step is object (.o) files that contain machine code but are missing the external references of each module (i.e. each .c file); for instance, file.c might use a function defined in other.c, but the compiler doesn't care about that dependency at that step.
The input of the linking step is the object files, and its output is the executable. The linking step binds the object files together by filling in the blanks (i.e. resolving dependencies between object files). That's also where you add the libraries to your executable.
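As a concrete sketch using the two files from the question (the output name program is chosen just for the example):

gcc -c FILE.c -o FILE.o        # compile step: cross-file references stay unresolved
gcc -c OTHER.c -o OTHER.o      # only this command must be re-run if OTHER.c changes
gcc FILE.o OTHER.o -o program  # link step: resolves the references and adds libraries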
This part of another answer responds to your question:
You might ask why there are separate compilation and linking steps.
First, it's probably easier to implement things that way. The compiler
does its thing, and the linker does its thing -- by keeping the
functions separate, the complexity of the program is reduced. Another
(more obvious) advantage is that this allows the creation of large
programs without having to redo the compilation step every time a file
is changed. Instead, using so called "conditional compilation", it is
necessary to compile only those source files that have changed; for
the rest, the object files are sufficient input for the linker.
Finally, this makes it simple to implement libraries of pre-compiled
code: just create object files and link them just like any other
object file. (The fact that each file is compiled separately from
information contained in other files, incidentally, is called the
"separate compilation model".)
This was too long to put in a comment; please give credit to the original answer.
I am trying to compile GNU Coreutils as a set of shared libraries, instead of a set of executables. I thought that make would let me pass in a flag to tell it to do this, but from what I can see I would actually have to modify the configure.ac and Makefile.am in order to make this work. I would prefer not to do this, since this potentially introduces bugs into code that I can currently rely on being bug-free. I tried manually turning the object files into so's by entering:
make CFLAGS='-fpic'
gcc -shared -o ls.so coreutils/src/ls.o
I am able to create the so file, but there seem to be a number of flags that I am missing, and I don't see any way to access a list of necessary flags to compile and link the code (even though this information is clearly contained in the computer). The only thing I can think to do is manually go through all of the linker errors and try to figure out what flags are missing, but I'm hoping that there is a less tedious way of getting what I want.
Not sure what you're trying to do, but related to this is the ./configure --enable-single-binary option, which links all the objects into a single executable.
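For reference, a minimal sequence would be something like the following (assuming a coreutils tree whose configure script supports the option):

./configure --enable-single-binary
make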
I am a seasoned developer in Java and I learned C in my college days; however, I am brushing up my C skills and am following a tutorial from Here.
I am trying to follow a tutorial on makefiles. Here is what the author says:
Does the file ex1 exist already?
No. Ok, is there another file that starts with ex1?
Yes, it's called ex1.c. Do I know how to build .c files?
Yes, I run this command cc ex1.c -o ex1 to build them.
I shall make you one ex1 by using cc to build it from ex1.c.
But I am unable to grasp what a makefile is and why it is used. What are the parameters to it? Also, what are CFLAGS? What is CC? I am new to Ubuntu, though.
A good explanation would be very long.
Short explanation: a makefile is a set of instructions on how to compile/build an executable. It includes all the relationships. For example, "executable A needs object files B and C. B is compiled from files X.c, X.h, Y.c and Y.h; C depends on K.c". Now if someone modifies K.c, you know you need to recompile C, but you don't need to recompile B (just link B and C into A at the end).
As projects get more complicated this becomes more important.
As for flags - there are all kinds of ways to control your compiler. Sometimes you will want to change these - say, to include more debugging information, increase the level of optimization, or change the target architecture. All these things are controlled with flags. By setting a variable to contain these flags, you can replace the flags in many commands at the same time.
You can even change what compiler you want to use - you might have different ones as your source code might contain more than one language (mixtures of C and FORTRAN are still encountered in many numerical analysis libraries, for example.)
cc is a C compiler. So is gcc (the GNU C Compiler). Other compilers include g++ (for C++), g77 (for FORTRAN 77), etc...
All of this means that using a makefile is great for maintaining control and flexibility when compiling large and complex projects.
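As a tiny illustration (using the ex1.c from the tutorial; the flag values are just an example), a makefile that captures both the dependency and the flag variables could look like:

CC = gcc
CFLAGS = -Wall -g    # warnings plus debug info; change the flags in one place

ex1: ex1.c
	$(CC) $(CFLAGS) ex1.c -o ex1

clean:
	rm -f ex1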
But really - you need to read up about this properly and not rely on something that was written late at night...