I compile my code using gcc and gfortran. To generate coverage information, I used to add --coverage to both compiler flags (FFLAGS and CFLAGS), the compiler would generate .gcno files and upon running the program I would get .gcda files containing the coverage information.
After separating the source and object directories (src/*.c and out/obj/*.o), the .gcda files are no longer generated. At least, I suppose the separation is the cause.
Is this fixable?
I'm quite new to C and the gcc compiler. I have a project where I include some other .h files. Those files in turn include more of my own .h files, and so on. It's a decent number of includes.
My problem is, when I try to compile the main file with:
gcc -o test tests_main.c
I get the following error:
Undefined symbols for architecture x86_64:
"_waves_secure_hash_test", referenced from:
_crypto_tests in tests_main-d1ae56.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
That's because the compiler can't find the function "waves_secure_hash_test", but it's in the included file:
#include "crypto_tests.h"
So, I searched a little bit and found a solution, just compile it with:
gcc -o test tests_main.c crypto_tests.c
Okay, that works for me with those two test files, where the functions are only some test prints.
For my complete project with a decent number of includes, I can't write down all those .c files to compile... Is there a way to tell the compiler that it has to compile all needed and included files?
You're mixing up included files (.h files, header files) and source files (.c files). You don't have to do anything about header files; the compiler will process them automatically (1).
You do have to list all source files which you want to compile. They are what defines your project. Based on your source tree organisation, you might get away with using shell pattern expansion (e.g. *.c) to get them, though.
For larger projects, you normally don't type the compilation command by hand, but use some project management tool, such as an IDE, a Makefile, or a buildsystem generator such as CMake.
Also note that the error you're actually getting is not a compiler error, it's a linker error. Linking is a separate step that comes after compiling. The compilation reads source files and produces object files. Linking then reads object files and produces a binary (executable or shared library) out of them.
You should also note that there is not always a 1:1 correspondence between header and source files. The classic example is libraries: you can include one or more header files which are shipped with an external library; these will provide declarations for functions implemented inside that library. This allows you to call the functions in your code. To then make these functions available to the linker (so that they can become part of your program), instruct the linker to link the library into your binary (normally via the -l command-line option).
(1) You may still need to point your compiler at directories where these header files reside, which is usually done using the -I command-line option.
I am trying to generate the coverage for C files (Yocto project).
So, I have added gcov flags "-g -O0 --coverage" in the Makefile.am of most of the available modules.
It generated ".gcno" files during the compilation of each module with coverage flags.
I have generated an image from all these modules and loaded it in the test device and ran functional test cases.
I am able to find the path of the "gcda" files using the strings command on the process that is running on the test device.
So I attached the process ID to gdb and tried to flush the coverage by calling "__gcov_flush".
This throws the error "No symbol __gcov_flush in current context". Please suggest what may be the cause of this error.
As per the comments, it is not directly possible to just build the Linux kernel with the coverage compiler flags and expect to get meaningful coverage metrics.
The code coverage machinery actually requires a writable file system to be available for the run-time coverage data (i.e. *.gcda files).
If you want to enable code coverage for the Linux kernel, there is documentation on how to go about enabling the support, using a GCOV virtual filesystem to collect the coverage metrics.
Also, assuming you are cross-compiling for a different architecture, you will probably have to use a cross-gcov tool to process the coverage metrics, combining the compile-time *.gcno files with the *.gcda files captured after execution.
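For user-space binaries, libgcov also honours the GCOV_PREFIX and GCOV_PREFIX_STRIP environment variables, which redirect where the *.gcda files are written on the target device; a sketch (binary name hypothetical):

```shell
# Redirect run-time .gcda output under /tmp/coverage on the target,
# dropping the first 3 components of the compile-time object path.
export GCOV_PREFIX=/tmp/coverage
export GCOV_PREFIX_STRIP=3
# ./functional_tests   # hypothetical instrumented binary; run your own test binary here
```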
TL;DR
Can you generate clang debugging information (CFGs, PDGs) when the original source files have DEPENDENCY errors from missing header files that cause compilation issues such as undeclared identifiers and unknown types? The files are syntactically correct. Is there a flag that perhaps treats all undeclared identifiers as INTs for debugging?
I am using Clang to analyze source code packages. Usually, I modify the makefile so clang generates debugging information using the command below
clang -emit-llvm -g -S -ferror-limit=0 -I somefile some_c_file
However, this approach is very makefile-focused, and if the developers do not support Clang in a given build version, I have to figure out how to generate the debugging information myself.
This is not good for automation. For things such as OpenSSL, where they include dozens of header files and custom configurations for the given platform, this is not practical. I want to suppress or ignore the errors if possible, since I know the build version's file under test is syntactically correct.
Thanks!
Recently I used clang-tidy for source code analysis of one of our projects. The project uses the GNU compiler and we didn't want to move away from that. So the process I followed was as below:
1) Use bear to generate the compilation database i.e. compile_commands.json which is used by clang-tidy
2) Bypass the include files that we don't want to analyze by treating them as system files, i.e. use -isystem for their inclusion, and -I for project-specific files. (If you can't change the Makefiles, you could change compile_commands.json with a simple find and replace.)
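A minimal sketch of what a resulting compile_commands.json entry looks like (paths hypothetical; bear would generate this from the real build, and clang-tidy is then pointed at it with -p):

```shell
# Hand-written compile_commands.json illustrating the -isystem vs -I split.
cat > compile_commands.json <<'EOF'
[
  {
    "directory": "/home/user/project",
    "command": "gcc -Iinclude -isystem third_party/include -c src/foo.c",
    "file": "src/foo.c"
  }
]
EOF
# clang-tidy src/foo.c -p .   # -p names the directory holding compile_commands.json
grep -c -- -isystem compile_commands.json
```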
Hope this helps
I have successfully set up gcov in my project to generate HTML files with code coverage data using lcov. However, as I often work via SSH with a text console only, I'm looking for a way to generate annotated source files like git-blame does with the history:
covered source_line();
not covered other_source_line();
Is it possible somehow?
I'm going to assume you are referring to gcovr when you say gcov, since gcov does not output to HTML format. gcovr does output to HTML though. gcovr is basically just a wrapper for gcov.
In order to get the annotated source files, you need to simply use gcov.
gcov, by default, annotates source files.
To run with gcov, you just need to compile with -fprofile-arcs -ftest-coverage -fPIC -O0, and link in the gcov library (-lgcov).
Then run your program.
Then issue the following command:
gcov main.c
Where main.c is whatever file you want your annotated analysis on.
After that, you will notice a new file created (main.c.gcov). This is the file you are looking for.
Here's a link on gcov usage
I'm trying to collect code coverage for the project that has c++ and c code both on Ubuntu.
I use '-fprofile-arcs' and '-ftest-coverage' as CXXFLAGS and CFLAGS, and '-lgcov' as LINKFLAGS.
Common C project structure is:
c_code/
    src/
    unit_tests/
src contains the sources of a static library.
The unit_tests dir contains tests written with the googletest framework, e.g. tests of the kind:
TEST_F(test_case_name, test_name) {
some_gtest_assertions;
}
After the build, the googletest binary, which should contain the static library under test inside it, is launched.
Building and running the project binaries results in the generation of *.gcno and *.gcda files. However, my C coverage results are empty (the C++ coverage is generated fine).
The lcov command has the following format:
lcov --capture --directory my_c_gcda_location --output-file c_coverage.info
Logs of lcov show the following for C-related gcda files:
gcov did not create any files for "my_c_gcda_location/*.gcda"
There are also errors of kind:
*.gcda:stamp mismatch with notes file
Should I specify some additional parameters or perform some additional actions to get coverage results for C? Or what could be the reason for these issues?
You may get "stamp mismatch" when the .gcda files are newer than the .gcno files.
It can happen mostly for two reasons:
1. You might have rebuilt the code after running the tests and before generating the trace file.
2. The binary might have been built on one machine while the tests were run on another machine whose clock is ahead of the build machine's.
In these two cases you just have to make sure the creation time of the .gcda files is later than that of the .gcno and .c* files.
You can do that by simply running "touch *.gcda" and then running the lcov command.
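The fix can be sketched as follows; "build" stands in for the directory holding the coverage files:

```shell
mkdir -p build
touch build/foo.gcno    # stands in for the compile-time notes file
sleep 1
touch build/foo.gcda    # touching the .gcda makes it newer than the .gcno
[ build/foo.gcda -nt build/foo.gcno ] && echo "gcda newer than gcno"
# lcov --capture --directory build --output-file c_coverage.info
```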