I want to narrow down where the bottlenecks might be. Building my project can take as long as half an hour. I know many tricks and things that could in theory be to blame, but a profiler would be a complete answer to all my questions.
I am asking about a profiler for a C++ / GNU GCC / make / Linux environment, but I am curious whether any popular language has such a thing.
With gcc you can use the -ftime-report option to get the time taken by each compilation stage.
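For example (the file name here is made up), compiling a single translation unit with the flag prints a per-pass timing table:
g++ -c -ftime-report slow_file.cpp
Running it on your slowest files individually is a quick way to see whether the time goes to parsing, template instantiation, or optimization.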
I just used gprof to analyze my program. I wanted to see which functions were consuming the most CPU time. Now, however, I would like to analyze my program in a different way: I want to see which LINES of the code consume the most CPU time. At first I read that gprof could do that, but I couldn't find the right option for it.
Now I have found gcov. However, the third-party program I am trying to analyze has no "./configure" script, so I could not apply "./configure --enable-gcov".
My question is simple. Does anyone know how to get execution time for each line of code for my program?
(I prefer suggestions with gprof, because I found its output to be very easy to read and understand.)
I think oprofile is what you are looking for. It does statistical sampling and gives you an approximate indication of how much time is spent executing each line of code, both at the C level of abstraction and at the assembly level.
As well as simply profiling the relative number of cycles spent at each line, you can also instrument for other events like cache misses and pipeline stalls.
Best of all: you don't need to do special builds for profiling; all you need to do is enable debug symbols.
Here is a good introduction to oprofile: http://people.redhat.com/wcohen/Oprofile.pdf
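A minimal session with the newer operf front end looks like this (the binary name is a placeholder; older OProfile releases used opcontrol instead):
operf ./myprogram
opannotate --source ./myprogram
Remember to compile with -g so the samples can be mapped back to source lines.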
If your program doesn't take too long to execute, Valgrind/Callgrind + KCacheGrind (compiling with debugging turned on, -g) is one of the best ways to tell where a program spends its time while running in user mode.
valgrind --tool=callgrind ./program
kcachegrind callgrind.out.12345
The program should have a stable IPC (instructions per clock) in the parts that you want to optimize.
A drawback is that Valgrind cannot be used to measure I/O latency or to profile kernel space. Also, its usability with programming languages whose toolchain is incompatible with the C/C++ toolchain is limited.
In case Callgrind's instrumentation of the whole program takes too much time to execute, there are macros CALLGRIND_START_INSTRUMENTATION and CALLGRIND_STOP_INSTRUMENTATION.
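A sketch of how the macros are used, assuming the valgrind headers are installed (hot_loop is a stand-in for whatever you actually want to measure); run the result with valgrind --tool=callgrind --instr-atstart=no ./program so that nothing outside the bracketed region is instrumented:
#include <valgrind/callgrind.h>

static void hot_loop(void)            /* hypothetical function of interest */
{
    volatile long sum = 0;
    long i;
    for (i = 0; i < 1000000; i++)
        sum += i;
}

int main(void)
{
    /* expensive setup we do not want in the profile would go here */
    CALLGRIND_START_INSTRUMENTATION;  /* begin collecting data */
    hot_loop();
    CALLGRIND_STOP_INSTRUMENTATION;   /* stop collecting data  */
    return 0;
}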
In some cases, Valgrind requires libraries with debug information (such as /usr/lib/debug/lib/libc-2.14.1.so.debug), so you may want to install the Linux packages that provide the debug-info files, or recompile the libraries with debugging turned on.
oprofile is probably, as suggested by Anthony Blake, the best answer.
However, a trick to force a particular compiler, or a compiler flag (such as -pg for gprof profiling), when compiling autoconf-ed software is
CC='gcc -pg' ./configure
or
CFLAGS='-pg' ./configure
This is also useful for some newer modes of compilation. For instance, gcc 4.6 provides link-time optimization via the -flto flag passed at both compile and link time; to enable it, I often do
CC='gcc-4.6 -flto' ./configure
For a program that is not autoconf-ed but still built with a reasonable Makefile, you might edit that Makefile or try
make CC='gcc -pg'
or
make CC='gcc -flto'
This usually (but not always) works.
I'm looking for a performance-measurement tool for C (I'm using the MinGW Windows toolchain) that gives me results such as:
memory occupied by a variable;
cycles taken to run the program/a function;
time spent in a function.
Thanks
Google Perftools is multi-platform: http://code.google.com/p/google-perftools/
GCC has profiling as well: How to use profile guided optimizations in g++?
You can use gprof, which is shipped with GCC. Here are some examples.
You'll find more about it in the GCC documentation. Just remember that you must use the -pg option for both compiling and linking.
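The whole cycle, with a made-up program name, looks like this:
gcc -pg -o myprog myprog.c    # -pg at both compile and link time
./myprog                      # writes gmon.out on normal exit
gprof myprog gmon.out > report.txt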
I got that working, but only on small programs. On the bigger one I work on, I only got empty timings and couldn't find the cause. But maybe you won't have the same problem...
Usually when gprof does not give you results, it is because the application is multithreaded; gprof does not support that kind of app.
I want to run a simple hello-world app, written in C, on my at91sam9rl-ek.
Is it possible without an os?
And (if it is) how do I have to compile it?
Right now I am trying to use G++ Lite to create ARM code.
(In general, which programs can the board start without an OS: assembler, ARM code?)
Sure, no problem running without an operating system; I do that kind of thing daily...
http://sam7stuff.blogspot.com/
Your programs are, at least at first, not going to resemble desktop applications. I would avoid any C library calls, no printfs or strcmps or things like that, until you get the feel for it and find the right tools. No floating point either. Add some numbers, do some shifting, blink some LEDs (see the sketch below).
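To make that concrete, here is a minimal bare-metal sketch in that spirit. The register address is a PLACEHOLDER that must be replaced from the AT91SAM9RL datasheet, the build lines assume an arm-none-eabi toolchain such as the CodeSourcery one mentioned below, and a real build would also need a linker script placing the code at the board's RAM/flash address:
/* blink.c -- no OS, no C library, no startup files.                 */
/* LED_REG is a PLACEHOLDER: take the real PIO register address from */
/* the AT91SAM9RL datasheet before running this on hardware.         */
#define LED_REG (*(volatile unsigned long *)0xFFFFF400)

void _start(void)                   /* linker's default entry point */
{
    volatile long i;
    for (;;) {
        LED_REG ^= 1;               /* toggle a pin: "blink an LED" */
        for (i = 0; i < 100000; i++)
            ;                       /* crude busy-wait delay        */
    }
}

arm-none-eabi-gcc -nostdlib -nostartfiles -o blink.elf blink.c
arm-none-eabi-objcopy -O binary blink.elf blink.bin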
CodeSourcery Lite is probably the fastest way to get started; the "gnueabi" one is, I believe, the one you want.
This WinARM site has a compiler and tons of non-OS projects for what seems like every ARM-based microcontroller.
http://www.siwawi.arubi.uni-kl.de/avr_projects/arm_projects/
Atmel is very good about documentation; no doubt they have example programs you can try on the eval board as well.
Emdebian is another cross-compiler that is somewhat up to date and has binaries. Building a gcc from scratch for cross-compiling is not bad at all. The C library is another story, though, and even the gcc support library for that matter. I find it easier to do without either library.
It is possible to get a C library working and to run a great many kinds of programs; it depends on what you are looking to do. Ah, just looked at the specs: that is a pretty serious eval board, with plenty of power for an operating system should you choose to run one. You can certainly run programs that use the display as a user interface, read/write SD cards, use USB, basically everything on the board, without an OS, if you choose.
Which static code analyzer (if any) do you use? I've been using PyLint for Python and I'm pretty satisfied with it; now I need something similar for C code.
How much of its output do you have to suppress for normal daily usage?
Wikipedia maintains a list of static code analysis tools for various languages (including C).
Personally, I have used both PC-Lint and Splint. The best choice depends on the type of application you are writing. However, no matter which tool you use, there will be a low signal-to-noise ratio until you properly tune the tool and your code.
PC-Lint is the most powerful Lint tool I have used. If you add it to an existing project, the signal-to-noise ratio can be low at first. However, once the tool and your code are properly configured, it can be used as part of your standard build process. On the last major project where I used it, we set things up so that PC-Lint warnings would break the build. Licenses for PC-Lint cost $389, but it is worth the cost.
Splint is a great open-source tool. I have used it on several projects, but found that it can be difficult to configure when using a compiler with non-ANSI C extensions (e.g. on embedded-systems projects).
Valgrind is also worth considering as a dynamic analysis tool.
You specifically requested feedback on SourceMonitor. This tool provides interesting metrics on your code, but it should be used as a supplement to a good Lint tool, as it does not provide that kind of analysis.
As stated on their home page, SourceMonitor will:
...find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review.
I used it on a recent project and found it to be easy to use (even for embedded systems code). The complexity metric is an excellent resource for developing code that will be less error-prone and easier to maintain.
SourceMonitor provides nice graphs of its output as well as well-formatted XML if you want to automate metrics collection. The only downside is that the tool only runs on Windows.
We use PC-Lint and are very happy with it.
There seem to be a few camps regarding message suppression and tuning:
suppress everything, then unsuppress only what you're interested in
unsuppress everything, then suppress warnings you're not interested in
keep everything unsuppressed
We tend to fall somewhere between the second and third categories. This does mean a ludicrous 100MiB+ text dump (one error per line) per lint run across the core libraries (lots of old code).
A custom diff-like tool watches for changes and emails those out to the commit's author, which keeps the amount that most people have to look at down to a few lines. We gather interesting statistics about errors-over-time with some basic data mining.
You can get really polished here, hyperlinking the errors back to more detailed descriptions, providing "points" for fixing existing warnings, etc...
There's Splint, although, to be honest, I've never been able to get it to work; on my platform it really is too overactive. In practice, my most-used "lint" is the following set of warning flags for gcc:
-std=c89 -pedantic -W -Wall -Wstrict-prototypes -Wunreachable-code -Wwrite-strings -Wpointer-arith -Wbad-function-cast -Wcast-align -Wcast-qual
Of course, I've mostly forgotten what half of them mean. But they catch quite a few things.
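As a reminder of what they buy you, this deliberately sloppy fragment (names made up) should draw warnings from several of those flags:
/* warnme.c -- compile with the flag set above to see the warnings */
#include <stdio.h>

int main(void)
{
    char *s = "hello";      /* -Wwrite-strings: the literal is const    */
    unsigned int u = 1u;

    if (u > -1)             /* -W / -Wextra: signed/unsigned comparison */
        puts(s);
    return 0;
}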
I'm a big fan of David Evans's work on LC/Lint, which has since been renamed Splint. It is very aggressive, and you can tell it a lot of useful information by adding annotations to your code; it is designed to be used that way. It will function without annotations, but if you try to use it as a simple checker without providing any, you will probably be disappointed. If what you want is totally automated checking, and if you can deal with a Windows-only tool, you're better off with Gimpel's PC-Lint. Jim Gimpel has had happy customers for over 25 years.
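To give a flavor of those annotations, here is a small sketch (the function names are invented) using Splint's stylized comments; checking it is just "splint file.c":
#include <stdlib.h>

/*@null@*/ char *maybe_alloc(size_t n)  /* result may legitimately be NULL */
{
    return malloc(n);
}

void caller(void)
{
    char *p = maybe_alloc(16);
    if (p != NULL) {    /* without this guard, splint flags the dereference */
        p[0] = 'x';
        free(p);
    }
}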
I used PC-Lint forever and really liked it. I wish they'd get into C#... They are the ones with the pop quizzes on C or C++ code in all the magazines.
There is one in the LLVM Clang project: http://clang-analyzer.llvm.org. I have not tried it myself, but I intend to.
It looks pretty good in action:
http://www.mikeash.com/?page=pyblog/friday-qa-2009-03-06-using-the-clang-static-analyzer.html
The above is for Objective-C, but it should be the same for C.
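For a plain C project built with make, the usual way to run it is through the scan-build wrapper, which intercepts the compiler calls and writes an HTML report (this assumes scan-build is on your PATH):
scan-build make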
I'm looking for a make platform. I've read a little about GNU make, and that it has some issues on Windows platforms (from slash/backslash handling to shell detection...), so I would like to hear what my alternatives to it are.
If it matters, I'm doing Fortran development combined with a (very) little C on small projects (50k lines max), but I don't think that matters, since most concerns of this type are language-agnostic.
What are gnu make drawbacks, and what alternatives do I have, with what advantages?
There are a couple of good tools for continuous integration and building on Windows. The two I have in mind are NAnt, which describes itself as a .NET build tool but can be used to build anything (it is open source and very extensible, although the UI is lacking), and Hudson, which I have recently started to use and which is brilliant; its output is way better than NAnt's, making it much easier to use. I have no experience using these tools with Fortran, though, so good luck there.
My thought on make and its derivatives is to avoid them on account of their age: make was a good tool in its time, but it must be 20 years old now, and technology (even in the build area) has moved on a fair bit since then.
You can have a look at CMake. It's a kind of "meta-make" system: you write a file for it that says how your project is structured, what libraries and sources it needs, and so on, and it can then generate makefiles for GNU make and nmake (I believe), as well as project files for KDevelop and Visual Studio.
KDE has adopted it from KDE4 onwards, and it has since been greatly enhanced: CMake
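A complete CMakeLists.txt for a small mixed Fortran/C project like yours can be very short (all the names here are invented); CMake has first-class Fortran support:
cmake_minimum_required(VERSION 2.8)
project(myproject C Fortran)
add_executable(myprog main.f90 helper.c)
Then run "cmake ." followed by "make" on Linux, or point it at another generator on Windows, e.g. cmake -G "NMake Makefiles" .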
Another such system is Bakefile, which was built to generate makefiles and project files for the wxWidgets GUI toolkit. It can be used for non-wx applications too, and it is relatively young and modern (it uses XML as its makefile description).
There is also nmake, which is Microsoft's version of make. I would recommend sticking with GNU make, though. My advice is to always use Unix-style forward slashes; they work on Windows as well. GNU make is widely used, so you can easily find tutorials and get advice about its use. It is also a better investment, since you can use it in other areas in the future. Finally, it is much richer in functionality.
I use GNU make under Windows and have no problems with it. However, I also use bash as my shell. Both make and bash are available as part of the Cygwin package from www.cygwin.com, and I strongly recommend you install bash and all the common command-line tools (grep, sed, etc.) if you are going to use make from the command line.
Make has stood the test of time, even on Windows, and I use it every day, but there's also MSBuild.
Details, details...
Given your small project, I would just start with MS nmake. Then, if that doesn't suffice, move on to GNU make. The other advice above is also good. Ant and CMake are fine, but you don't need them, and there are so many make users who can help you if you have problems.
For that matter, since you are on Windows, doesn't the MS IDE have build tools built in? Just click and go.
Keep it simple. Plan to throw the first one away; you will anyway.
Wikipedia also has this to say:
http://en.wikipedia.org/wiki/List_of_build_automation_software