Use Frama-C to analyze a project with CMake build infrastructure

I need to use the Frama-C value analysis plug-in to analyze some projects. These projects use CMake as their build system.
So far I have used Frama-C to analyze each file separately, but this way the information about the entry point is lost. More precisely, Frama-C requires an entry point for files that do not include a "main" function, so it is challenging to cover all functions and choose the best entry point within a single file of the project.
My question is: is there a way to run Frama-C on the entire project as a whole unit (not file by file)?

Frama-C accepts multiple files on its command line. This works if the preprocessor configuration (option -cpp-extra-args, used in particular for include paths) is the same across all files.
If you need different preprocessor settings for different files, preprocess each file on its own (only the C preprocessor, no Frama-C) and save each result as a .i file. Then you can supply all of those preprocessed files to Frama-C at once. Usually the first step can be done by tweaking the build process.
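As a sketch (file names, include paths and the entry-point name are placeholders; -eva runs the value analysis plug-in and -main selects the entry point):
$ frama-c -eva -main my_entry_point -cpp-extra-args="-Iinclude" a.c b.c c.c
And with per-file preprocessing when the include paths differ:
$ gcc -E -Iinclude_for_a a.c -o a.i
$ gcc -E -Iinclude_for_b b.c -o b.i
$ frama-c -eva -main my_entry_point a.i b.i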

Related

Why do we use "make" command when "cc ex1.c -o ex1" compiles the code written in "exe1.c" file

I was learning C programming on Linux when I came across these lines:
$ make ex1
cc ex1.c -o ex1
My question is: why do we have to use make ex1? Isn't the cc command building the program and the other necessary files?
With the exception of a small set of very simple programs, almost all real-life C programs are built from multiple modules, header files and external libraries, sometimes spanning multiple folders. In some cases, additional code may be linked in using different tools (e.g., code generators).
For those cases, a single 'cc' command is not going to work. The next step up is to automate the build with a shell script. However, such scripts can be slow to run and almost impossible to maintain.
For building C programs, make provides many benefits on top of a simple shell build script. This is my personal "top 3" (a minimal Makefile illustrating the first two points follows the list):
Incremental build - when source files are modified, make can identify and execute the minimal set of build instructions instead of rebuilding the whole project. This can provide a major efficiency boost to developers.
Rule-based build - make uses rules to produce targets. Once you define a rule (one obvious rule: compile a ".c" file to a ".o" file), it is applied consistently to all files.
Support for the complete build process - including installation, cleanup, packaging, tests, etc. Very importantly, make can integrate (almost) any Unix tool into the build process - code generation and so on.
Needless to say, there are other build tools which provide additional or alternative benefits - CMake, Gradle and SCons, to name a few.
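For illustration, a minimal Makefile in that spirit (file and program names are made up; recipe lines must be indented with a tab):
CC     = cc
CFLAGS = -Wall -O2
OBJS   = main.o util.o parser.o

# incremental build: a target is rebuilt only when at least one of its
# prerequisites (the files after the colon) is newer than it
ex1: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# rule-based build: one pattern rule compiles any .c file to a .o
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

# extra targets hook other steps (cleanup, packaging, ...) into the same process
clean:
	rm -f ex1 $(OBJS)

.PHONY: clean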
For a one-file project, they come out to about the same. However, real-world projects tend to have tens, hundreds, or thousands of files and build tens or hundreds of libraries and executables, possibly with different settings etc. Managing all of this without storing the compilation commands somewhere would be impossible, and a Makefile is a very useful "somewhere."
Another important property of make is that it does timestamp checks on files and only re-runs the compilation/linking commands whose outputs are outdated, i.e. at least one of their inputs is newer than the outputs. Without such checks, you'd have to manually remember which files to recompile when you change something (especially difficult when the changed file is a header), or always recompile everything, increasing build times by orders of magnitude.

Why aren't changes to header files accounted for in the Makefiles of mature C projects?

I have been reading up on make and looking at the Makefiles for popular C projects on GitHub to cement my understanding.
One thing I am struggling to understand is why none of the examples I've looked at (e.g. lz4, linux and FFmpeg) seem to account for header file dependencies.
For my own project, I have header files that contain:
Numeric and string constants
Macros
Short, inline functions
It would seem essential, therefore, to take any changes to these into account when determining whether to recompile.
I have discovered that gcc can automatically generate Makefile fragments from dependencies as in this SO answer but I haven't seen this used in any of the projects I've looked at.
Can you help me understand why these projects apparently ignore header file dependencies?
I'll attempt to answer.
The source distros of some projects include a configure script which creates a makefile from a template/whatever.
So the end user who needs to recompile the package for his/her target just has to do:
$ configure --try-options-until-it-works
$ make
Things can go wrong during the configure phase, but this has nothing to do with the makefile itself. The user has to download stuff, adjust paths or configure switches and run again until the makefile is successfully generated.
But once the makefile is generated, things should go pretty smoothly from there for the user, who only needs to build the product once in order to use it.
A small portion of users will need to change some source code. In that case, they'll have to clean everything, because the provided makefile isn't how the actual developers manage their builds. They may use other systems (Code::Blocks, Ant, gprbuild...) and just provide the makefile to automate production from scratch and avoid depending on a complex production system. make is fairly standard, even on Windows/MinGW.
Note that there are some filesystems which provide build audit (Clearcase) where the dependencies are automatically managed (clearmake).
If you see the makefile as a batch script to build all the sources, you don't need to bother adding a dependency system using:
a template makefile
a gcc -MM command to append dependencies to it (which takes time)
Note that you can add such a dependency system yourself with some extra work (adding a depend target to your makefile, or using compiler-generated fragments as sketched below).
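A minimal sketch of that extra work, assuming GCC or Clang (file names are placeholders; recipe lines must be indented with a tab). The compiler writes a .d fragment next to each object file listing the headers that source actually includes, and the makefile reads those fragments back in:
SRCS = foo.c bar.c
OBJS = $(SRCS:.c=.o)
DEPS = $(SRCS:.c=.d)

program: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes foo.d alongside foo.o; -MP adds dummy rules so that
# deleting a header does not break the build
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# pull in the generated fragments; the leading '-' ignores missing ones
-include $(DEPS)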

Why is it important to separate the compilation and linking processes in C?

I've been programming in C for a while and I wondered: why is it important to separate these two processes (compiling and linking)?
Can someone please explain?
It is useful in order to decrease rebuild time. If you change just one source file, there is often no need to recompile the whole project, only one or a few files.
Compilation is responsible for transforming the source code of each source file into corresponding object code. That's it. So the compiler doesn't have to care about your external symbols (such as libraries and extern variables).
Linking is responsible for resolving those references and then producing a single binary as if your project had been written as a single source file. (I also recommend the Wikipedia page on linking for the difference between static and dynamic linking.)
If you happen to use the tool make, you will see that it doesn't recompile every file whenever you invoke make; it finds which files have been modified since the last build and recompiles only those. Then the linking step is invoked. That is a big time saver when you deal with big projects (e.g. the Linux kernel).
It's probably less important these days than it was once.
But there was a time when compiling a project could take literally days - we used to do a "complete build" over a weekend back in the 1980s. Just parsing the source code of a single file was a fairly big deal requiring significant amounts of time and memory, so languages were designed so that their modules (source files) could be processed in isolation.
The result was "object files" - .obj (DOS/Windows/VMS) and .o (unix) files - which contain the relocatable code, the static data, and the lists of exports (objects we've defined) and the imports (objects we need). The linking stage glues all this together into an executable, or into an archive (.lib, .a, .so, .dll files etc) for further inclusion.
Making the expensive compilation task operate in isolation led the way to sophisticated incremental build tools like make, which produced a significant increase in programmer productivity - still critical for large C projects, like the Linux kernel.
It also, usefully, means that any language that can be compiled into an object file can be linked together. So, with a little effort, one can link C to Fortran to COBOL to C++ and so on.
Many languages developed since those days have pushed the boundaries of what can be stored in object files. The C++ template system requires special handling, and overloaded methods don't quite fit either, as plain .o files don't support multiple functions with the same name (see C++ name mangling). Java et al. use a completely different approach, with custom code file formats and a "native code" invocation mechanism that glues onto DLLs and shared object files.
In practice, it is not important at all. Especially for simpler programs, both steps are performed with a single compiler invocation, such as
gcc f1.c f2.c f3.c f4.c -o program
which creates the executable program from these source files.
But the fact remains that these are separate processes and in some cases it is worth noticing this.
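For illustration, the same build split into its two steps (the -c flag stops gcc after compilation, producing object files that a separate invocation then links):
$ gcc -c f1.c
$ gcc -c f2.c
$ gcc -c f3.c
$ gcc -c f4.c
$ gcc f1.o f2.o f3.o f4.o -o program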
I've worked on systems that take two days to compile. You don't want to make a small change and then have to wait two days to test it.

Are CMake/Autotools useful for non-standard compilers?

I am working on a complex project written in C/Asm for an embedded target running on an Analog Devices DSP. The toolchain is close to gcc, but there are plenty of differences. Moreover, I use a lot of autogeneration scripts based on Jinja2 to generate header files from data extracted from a database. I also have plenty of compiler flags.
I have currently written a Makefile from scratch. It is about 400 lines long and works pretty well. It automatically discovers the sources across the directories and tracks all the dependencies, i.e.
a.tmpl ---> jinja ---> a.c ---> a.o
              ^
a.yaml -------'
I would like to know whether tools such as CMake or Automake can be useful in my case. In other words, can I use these tools to simplify the Makefile and improve its readability?
CMake works perfectly with generated sources. Just add an appropriate custom command:
add_custom_command(OUTPUT a.c
                   COMMAND jinja <args>
                   # regenerate a.c whenever the template or the data changes
                   DEPENDS a.tmpl a.yaml)
add_executable(a a.c)

Can SCons keep track of linking dependencies?

I'm currently working on a C project with one main executable and one executable for each unit test. In the SConstruct file I specify the dependencies for each executable, something like
env.Program(['Main.c', 'Foo.c', 'Bar.c', 'Baz.c', ...])
env.Program(['FooTest.c', 'Foo.c', 'Baz.c', ...])
env.Program(['BarTest.c', 'Bar.c', 'Baz.c', ...])
...
This, however, is error prone and inelegant since the dependencies could just as well be tracked by the build tool, in this case SCons. How can I improve my build script?
What you are asking for is some sort of tool that
1) Looks at the headers you include
2) Determines from the headers which source files need building
3) Rinse and repeat for all the source files you've just added
Once it's done that, it'll have to look over the tree it has generated and try to squish some of that into sensible libraries, assuming you haven't done that already (and looking at the tone of both questions, that exercise seems to have been viewed as academic rather than a standard part of good software development).
There might be some mileage in a tool that says "You've included header A/B.h, so you'll need libA in your link line" but even that is going to have plenty of gotchas depending on how different people build and link their libraries.
But what you've asked is essentially how to define a build script that writes a build script. That's something you should be doing yourself.
