Could someone kindly tell me how I can profile single lines or blocks of code of a C program with the GNU profiler?
I used gprof ./a.out gmon.out, which gives me a flat profile and a call graph. However, I would like to see which lines are hit most frequently.
Thanks,
This is probably one of those cases where you just don't know which term you should have googled, so I'll answer it:
The term you are looking for is "annotation": you want to annotate the source and see the line-by-line hits in the code.
Calling gprof with the -A flag will dump out the samples that were caught on each line.
See Also:
https://sourceware.org/binutils/docs/gprof/Annotated-Source.html
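As a minimal sketch of the whole workflow (the file and program names here are just placeholders): compile with profiling and debug info, run the program to produce gmon.out, then ask gprof for a line-by-line annotated source listing.
gcc -pg -g -o example example.c
./example                          # writes gmon.out in the current directory
gprof -l -A ./example gmon.out     # -l gives line-by-line profiling, -A prints the annotated source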
OK, I'll post this answer so that any newbie like me who searches for it can find it faster :)
Here are the steps (source):
gcc -fprofile-arcs -ftest-coverage sourcefile.c
(At the end of compilation, *.gcno files will be produced.)
Run the executable.
Run gcov: gcov sourcefile.c
(This produces a *.gcov file which contains all the required info.)
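Put together as one run, here is a slightly more explicit variant of the same steps (assuming a single source file sourcefile.c), separating compilation and linking. In the resulting sourcefile.c.gcov, every source line is prefixed with its execution count, and lines that never ran are marked with #####.
gcc -c -fprofile-arcs -ftest-coverage sourcefile.c   # produces sourcefile.o and sourcefile.gcno
gcc -fprofile-arcs sourcefile.o -o a.out             # link in the profiling runtime
./a.out                                              # writes sourcefile.gcda on normal exit
gcov sourcefile.c                                    # writes sourcefile.c.gcov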
Related
I'm a newcomer to Linux and the gcc commands. I was reading the gcc documentation, particularly about the -o flag, where it mentions the following:
Though -o names only the primary output, it also affects the naming of
auxiliary and dump outputs. See the examples below. Unless overridden,
both auxiliary outputs and dump outputs are placed in the same
directory as the primary output. In auxiliary outputs, the suffix of
the input file is replaced with that of the ...
They mention these terms quite a lot after this paragraph but don't explain them. I've skimmed the document and also looked online but haven't found any satisfactory explanation. If someone could provide some explanation, or even link me to some resources where I can learn about these terms, it would be greatly appreciated. Thanks!
-o file
Place the output in file. This applies regardless of the type of output produced, whether it is an executable file, an object file, an assembler file or preprocessed C code.
Since only one output file can be specified, it makes no sense to use -o when compiling more than one input file, unless you want to output an executable file.
If -o is not specified, the default behavior is to produce an executable file named a.out, an object file for source.suffix named source.o, its assembler file in source.s, and all C source code preprocessed on standard output.
source: http://www.linuxcertif.com/man/1/gcc/
hope it will be useful
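As a small sketch of how this relates to the terms you asked about (file names are just placeholders): the primary output is whatever -o names, auxiliary outputs such as the .gcno note file produced by -ftest-coverage are placed next to it with the input file's suffix replaced, and dump outputs are the files written by the various -fdump-* options (for example -fdump-tree-optimized), which follow the same placement rule.
gcc -c foo.c                                 # primary output: foo.o in the current directory
gcc -c -o build/foo.o foo.c                  # primary output: build/foo.o
gcc -c -ftest-coverage -o build/foo.o foo.c  # auxiliary output build/foo.gcno lands next to the primary output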
Consider the following command:
gcc -fprofile-arcs -ftest-coverage main.c
It generates the file main.gcda, which is to be used by gcov to generate the coverage analysis.
So how is main.gcda generated? How is the instrumentation done? Can I see the instrumented code?
.gcda is not generated by the compiler; it's generated by your program when you execute it.
.gcno is the file generated at compile time; it is the 'note file'. gcc generates a basic-block graph notes file (.gcno) for each CU (compilation unit).
So how is main.gcda generated?
At run time the statistics are gathered and kept in memory. An exit callback is registered and called to write the data to the .gcda file when the program terminates normally. This means that if you call abort() instead of exit() in your program, no .gcda file will be generated.
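A quick way to check this, assuming the instrumented executable is ./a.out:
./a.out        # exits normally through exit() or by returning from main
ls *.gcda      # the counter file(s) should now exist; after an abort() they would be missing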
How is the instrumentation done? Can I see the instrumented code?
You may need to check gcc's implementation to get the details, but basically the instrumentation is done by inserting instructions into the program to count how many times each instruction is executed. It doesn't really have to keep a counter for every instruction, though: GCC builds a program flow graph, finds a spanning tree for that graph, and instruments only certain arcs; from those counters the coverage of all code branches can be derived.
You can disassemble the binary to see the instrumented code.
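For example (a rough sketch; the exact symbols depend on the GCC version), the disassembly of an instrumented binary contains the counter updates and calls into the gcov runtime:
objdump -d a.out | grep gcov    # references to __gcov_* runtime helpers show up in the listing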
And here are some coverage-related files if you want to look into the gcc sources:
toplev.c
coverage.c
profile.c
libgcov.c
gcov.c
gcov-io.c
Edit: some known gcov bugs, FYI:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49484
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28441
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44779
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=7970
Can I see the instrumented code?
You cannot directly read the instrumentation data, such as the .gcda files.
How does gcov work?
GCOV works in four phases:
1. Code instrumentation during compilation
2. Data collection during code execution
3. Data extraction at program exit time
4. Coverage analysis and presentation post-mortem.
To learn more about the individual steps, you can go through this PDF:
http://ltp.sourceforge.net/documentation/technical_papers/gcov-ols2003.pdf
To see which gcov-related code has been instrumented into the executable or object file at compile time, you can use the following steps.
nm executable/objfile
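As a rough sketch of what to expect (symbol names vary between GCC versions), the instrumented file contains symbols from the gcov runtime:
nm a.out | grep gcov    # e.g. __gcov_init or __gcov_exit and __gcov_merge_add, depending on the GCC version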
I'm on a Mac, and in the terminal I'm compiling my program:
gcc -Wall -g -o example example.c
It compiles (there are no errors), but when I try to provide command-line arguments:
example 5 hello how are you
the terminal responds with "-bash: example: command not found".
How am I supposed to provide the arguments I want after compiling?
Run it like this with path:
./example 5 hello how are you
Unless the directory containing the example binary is part of the PATH variable, what you typed won't work, even if the binary you are running is in the current directory.
It is not a compilation issue, but an issue with your shell. The current directory is not in your PATH (look at it with echo $PATH, and use which to find out how the shell resolves some particular program, e.g. which gcc).
I suggest testing your program with an explicit file path, like
./example 5 hello how are you
You could perhaps edit your ~/.bashrc to add . at the end of your PATH. There are pros and cons (in particular, possible security issues if your current directory sometimes happens to be a "malicious" one, as /tmp might be: bad guys might put a gcc there which is a symlink to /bin/rm, so if you do this, only add . at the end of your PATH).
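If you do decide to do that, a minimal sketch of the line to add to your ~/.bashrc:
export PATH="$PATH:."    # appends the current directory; keeping it last reduces (but does not eliminate) the risk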
Don't forget to learn how to use a debugger (like gdb). This skill is essential when coding in C (or in C++). Perhaps also consider upgrading your gcc (Apple doesn't much like its current GPLv3 license, so they don't distribute a recent one; try gcc -v and notice that the latest released GCC today is 4.8.1).
./example 5 Hello how are you is the syntax you're looking for.
This article gives a good explanation of why this is important.
Basically, when you hit Enter, the shell checks whether the command is given as a path (that is, it contains a slash). If it isn't, it searches the directories listed in the PATH variable for an executable with the name of the command you are trying to run. If one is found, it is run; otherwise the shell reports "command not found" and you become very sad.
I am writing a C language program on Linux and compiling it using GCC.
I also use a Make file.
I want to debug my program. I don't want to debug a single file, I want to debug the whole program.
How can I do it?
Compile your code with the -g flag, and then use the gdb debugger. Documentation for gdb is here, but in essence:
gcc -g -o prog myfile.c another.c
and then:
gdb prog
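Once inside gdb, a minimal session sketch (the breakpoint location and variable name are just examples):
(gdb) break main        # stop at the start of main
(gdb) run arg1 arg2     # start the program with its command-line arguments
(gdb) next              # execute the next line, stepping over calls (use step to descend into them)
(gdb) print somevar     # inspect a variable
(gdb) backtrace         # show the call stack
(gdb) continue          # keep running until the next breakpoint or exit
(gdb) quit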
If you want a user-friendly GUI for gdb, take a look at DDD or Insight.
I guess that you are building from the command line.
You might want to consider an IDE (Integrated Development Environment), such as KDevelop or Eclipse, etc (hint - Eclipse ... ECLIPSE ... E C L I P S E).
Use an IDE to edit your code, refactor your code, examine your code - class tree, click a variable, class or function to jump to its declaration, etc, etc
And - of course - to debug:
run your code in the IDE
set breakpoints to stop at particular lines
or just step through, a line at a time
examine the call stack to see how you got there
examine the current values of variables, to understand your problem
change the values of those variables and run to see what happens
and more, more, more
P.S. As wasatz mentioned, DDD is great - for visualizing the contents of arrays/matrices, and - IMO - especially if you have linked lists
You can use "Nemiver", a simple and useful gdb-based GUI. It can debug your whole program, comprising many source files.
Try cgdb
cgdb is a lightweight curses (terminal-based) interface to the GNU Debugger (GDB). In addition to the standard gdb console, cgdb provides a split screen view that displays the source code as it executes. The keyboard interface is modeled after vim, so vim users should feel at home using cgdb.
github repository
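Invocation mirrors gdb's, e.g. for a program ./prog built with -g:
cgdb ./prog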
I am working in a Linux environment. I have two C source packages, train and test_train.
The train package, when compiled, generates libtrain.so
test_train links against libtrain.so and generates the executable train-test
Now I want to generate a call graph using gprof which shows the calling sequence of functions in the main program as well as those inside libtrain.so
I am compiling and linking both packages with the -pg option, and the optimization level is -O0.
After I do ./train-test, gmon.out is generated. Then I do:
$ gprof -q ./train-test gmon.out
Here, the output shows the call graph of the functions in train-test but not of those in libtrain.so
What could be the problem?
gprof won't work here; you need to use sprof instead. I found these links helpful:
How to use sprof?
http://greg-n-blog.blogspot.com/2010/01/profiling-shared-library-on-linux-using.html
Summary from the 2nd link:
Compile your shared library (libmylib.so) in debug (-g) mode. No -pg.
export LD_PROFILE_OUTPUT=`pwd`
export LD_PROFILE=libmylib.so
rm -f $LD_PROFILE.profile
Execute your program that loads libmylib.so.
sprof PATH-TO-LIB/$LD_PROFILE $LD_PROFILE.profile -p >log
See the log.
I found that in step 2, it needs to be an existing directory -- otherwise you get a helpful warning. And in step 3, you might need to specify the library as libmylib.so.X (maybe even .X.Y, not sure) -- otherwise you get no warning whatsoever.
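Putting the steps together, a minimal sketch (the library and program names are placeholders, and you may need the soname adjustment noted above):
gcc -g -fPIC -shared -o libmylib.so mylib.c
gcc -g -o myprog main.c -L. -lmylib
export LD_PROFILE_OUTPUT=`pwd`
export LD_PROFILE=libmylib.so
rm -f $LD_PROFILE.profile
LD_LIBRARY_PATH=. ./myprog                        # the dynamic linker writes libmylib.so.profile
sprof ./libmylib.so $LD_PROFILE.profile -p > log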
I'm loading my library from Python and didn't have any luck with sprof. Instead, I used oprofile, which was in the Fedora repositories, at least:
operf --callgraph /path/to/mybinary
Wait for your application to finish, or press Ctrl-C to stop profiling. Now let's generate a profile summary:
opreport --callgraph --symbols
See the documentation to interpret it. It's kind of a mess. In the generated report, each symbol is listed in a block of its own. The block's main symbol is the one that's not indented. The items above it are the functions that call it, and the ones below it are the things it calls. The percentages in the lower section are the relative amount of time spent in those callees.
If you're not on Linux (like me, on Solaris), you are simply out of luck, as there is no sprof there.
If you have the sources of your library, you can work around the problem by building a static library and linking your profiling binary against that one instead.
Another way I manage to trace calls to shared libraries is by using truss. With the option -u [!]lib,...:[:][!]func,... one can get a good picture of the call history of a run. It's not exactly the same as profiling, but it can be very useful in some scenarios.
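For example, a rough sketch for the library in this question (see the truss man page for the exact pattern syntax):
truss -u libtrain ./train-test    # traces calls into functions of libtrain.so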