Performance test of functions - C

Linux, GCC 4.4.1, C99
I am wondering what is the best way to test the performance of a C program.
I have some functions that I have implemented. However, I could have used a different design for each function.
Basically, I want to test which design gives better performance.
Many thanks,

Take a look at this post on code profilers.

I want to test to see which design gives better performance.
Why does it matter? This is not a flip question! You should have a performance target in mind, and if you meet it, your code is fast enough.
How do you know how fast is "fast enough"? It turns out the user-interface people have good data on the effect of response time on your users' experience:
0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result. (Most people have a reaction time of about 0.1 seconds; jet fighter pilots get down to around 0.08s, i.e., 80ms.)
1 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of directly "driving" your application.
10 seconds is about the limit for keeping the user's attention focused on the app. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is hard to predict or varies a lot.
The quantitative results above apply only to interaction, of course, which is measured in seconds of waiting time. But even if your target is network packets sent, pages of RAM allocated, blocks of disk read/written, or just watts of power consumed, the message I am trying to communicate is that you should have a performance target, that target should be quantified, and the target should be connected to the needs of your users. If you don't have a quantifiable target, you're not doing engineering; you're just whistling in the dark. Unless your goal is to educate yourself (or to satisfy idle curiosity), the question you should be asking is "is my code good enough that I can move on?"
If you're not meeting your performance target, or if you are trying to educate yourself, I think the best combination of readable and detailed information comes from using the valgrind profiler (--tool=callgrind --dump-instr=yes) together with the kcachegrind visualizer.

Mostly, you would want to use a profiler. The post Fragsworth pointed to is a good start. Personally, I prefer Shark on Mac OS X and gprof on Linux.
But in your case, you may also call clock() or getrusage(), for example, in this way:
clock_t t = clock();                          /* needs <time.h>; printf needs <stdio.h> */
for (int i = 0; i < 1000; ++i) my_func();     /* repeat enough times to get a measurable interval */
printf("time = %lf\n", (double)(clock() - t) / CLOCKS_PER_SEC);
A profiler is useful when you want to dig out which part of the code takes the most time. Calling clock()/getrusage() is more convenient (to me) when you want to compare/benchmark different implementations.
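For completeness, a minimal sketch of the getrusage() variant (POSIX only; my_func() is again just a placeholder for the function under test):
#include <stdio.h>
#include <sys/time.h>       /* struct timeval */
#include <sys/resource.h>   /* getrusage(), RUSAGE_SELF */

extern void my_func(void);  /* placeholder for the function under test */

int main(void)
{
    struct rusage start, end;
    getrusage(RUSAGE_SELF, &start);
    for (int i = 0; i < 1000; ++i) my_func();
    getrusage(RUSAGE_SELF, &end);
    /* ru_utime is user CPU time, a struct timeval (seconds + microseconds) */
    double user = (end.ru_utime.tv_sec - start.ru_utime.tv_sec)
                + (end.ru_utime.tv_usec - start.ru_utime.tv_usec) / 1e6;
    printf("user CPU time = %f s\n", user);
    return 0;
}
ru_stime reports system CPU time the same way, if you want to add it in.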

You can use gprof, which is a free profiler.

The first thing to find out is whether you need to optimize those functions. Unless they are in the critical path for your code, they may be more than fast enough.
If you have profiled your application and found they are slow, one good way to test performance is to call the function a large number of times and find the average time it takes to run.
You should also try to use CPU time instead of wall-clock time, as that is a more accurate gauge.

In addition to profiling, you need to run the code under test from a harness (driver) to average out the readings. That way your comparisons are not skewed by one-off readings, and you have a large sample population with a mean and standard deviation to compare. There are many multi-threaded frameworks that can do the load driving for you.
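As a minimal single-threaded sketch of such a harness (using clock(); my_func() and the sample counts are placeholders, not recommendations):
#include <math.h>
#include <stdio.h>
#include <time.h>

extern void my_func(void);                  /* function under test (placeholder) */

int main(void)
{
    enum { RUNS = 30, CALLS_PER_RUN = 1000 };   /* arbitrary sample sizes */
    double sum = 0.0, sumsq = 0.0;
    for (int r = 0; r < RUNS; ++r) {
        clock_t t0 = clock();
        for (int i = 0; i < CALLS_PER_RUN; ++i) my_func();
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        sum   += secs;
        sumsq += secs * secs;
    }
    double mean = sum / RUNS;
    double sd   = sqrt(sumsq / RUNS - mean * mean);   /* population standard deviation */
    printf("mean = %f s, std dev = %f s over %d readings\n", mean, sd, RUNS);
    return 0;
}
Link with -lm for sqrt(), and pick the counts so that each reading is well above the clock's resolution.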

Related

gprof is showing 0 computational time [duplicate]

I am trying to profile a C++ function using gprof; I am interested in the %time taken. I did more than one run, and for some reason I got a large difference in the results. I don't know what is causing this; I am assuming the sampling rate, or I read in other posts that I/O has something to do with it. So is there a way to make it more accurate and somehow generate almost constant results?
I was thinking of the following:
increase the sampling rate
flush the caches before executing anything
use another profiler, but I want it to generate results in a format similar to gprof's (%time and function name). I tried Valgrind, but it gave me a massive file, so maybe I am generating it with the wrong command.
Waiting for your input
Regards
I recommend printing a copy of the gprof paper and reading it carefully.
According to the paper, here's how gprof measures time. It samples the PC, and it counts how many samples land in each routine. Multiplied by the time between samples, that is each routine's total self time.
It also records in a table, by call site, how many times routine A calls routine B, assuming routine B is instrumented by the -pg option. By summing those up, it can tell how many times routine B was called.
Starting from the bottom of the call tree (where total time = self time), it assumes the average time per call of each routine is its total time divided by the number of calls.
Then it works back up to each caller of those routines. The time of each routine is its average self time plus the average number of calls to each subordinate routine times the average time of the subordinate routine.
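To make that concrete with made-up numbers: if routine B accumulates 6 seconds of total time over 30 recorded calls, gprof assumes 0.2 seconds per call; if caller A accounts for 10 of those calls, A is charged 10 × 0.2 = 2 seconds for B, regardless of whether A's calls were the cheap ones or the expensive ones.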
You can see, even if recursions (cycles in the call graph) are not present, how this is fraught with possibilities for errors, such as assumptions about average times and average numbers of calls, and assumptions about subroutines being instrumented, which the authors point out. If there are recursions, they basically say "forget it".
All of this technology, even if it weren't problematic, begs the question - what is its purpose? Usually, the purpose is "find bottlenecks". According to the paper, it can help people evaluate alternative implementations. That's not finding bottlenecks. They do recommend looking at routines that seem to be called a lot of times, or that have high average times. Certainly routines with low average cumulative time should be ignored, but that doesn't localize the problem very much. And, it completely ignores I/O, as if all I/O that is done is unquestionably necessary.
So, to try to answer your question, try Zoom, for one, and don't expect to eliminate statistical noise in measurements.
gprof is a venerable tool, simple and rugged, but the problems it had in the beginning are still there, and far better tools have come along in the intervening decades.
Here's a list of the issues.
gprof is not very accurate, particularly for small functions, see http://www.cs.utah.edu/dept/old/texinfo/as/gprof.html#SEC11
If this is Linux then I recommend a profiler that doesn't require the code to be instrumented, e.g. Zoom - you can get a free 30 day evaluation license, after that it costs money.
All sampling profilers suffer from statistical inaccuracies - if the error is too large then you need to sample for longer and/or with a smaller sampling interval.

Profile C application with mixed CUDA

I have a C program with a major function that takes about 70% of total runtime. I used gprof to profile the application. After that, I rewrote that particular function in CUDA to boost the runtime of the whole application. It's currently giving correct results, but I want to know about the performance.
Is there any way (or tool) I can use to profile this new application, with the runtime of the new kernel as a percentage of the runtime of the whole new application? I want to see the data for all the other remaining C functions as well. I tried using nvprof, but it only outputs the runtimes of the CUDA kernels.
Thanks,
You can use the NVIDIA profiling tools to give you this information.
Running the command line tool nvprof <app> will give you the percentage and you can use additional command line options to optimise your kernel further. The visual profiler (nvvp) will show you the timeline and also the percentage time spent in the kernels, and it will also give you guidance on how to further improve the performance (including linking back to the documentation to explain concepts).
See the documentation for more info.
ADDENDUM
In your comment you say that you want to see the profile of the C functions as well. One way to do that would be to use nvtx to annotate your code, see this blog post for a way to automate that task. Alternatively you could profile in nvprof or nvvp to see the overall timeline and profile in gprof to see time spent in non-GPU code.
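For illustration, a minimal sketch of the nvtx annotation (assuming the NVIDIA Tools Extension header and library are available; the function and range names here are placeholders):
#include <nvToolsExt.h>          /* NVIDIA Tools Extension; link with -lnvToolsExt */

extern void my_cpu_stage(void);  /* placeholder for a host-side C function */

void profiled_stage(void)
{
    nvtxRangePushA("my_cpu_stage");   /* open a named range */
    my_cpu_stage();
    nvtxRangePop();                   /* close it */
}
The named range then shows up alongside the kernels on the nvprof/nvvp timeline.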
Well, you might know that I'm partial to this technique.
It will tell you approximate percentages spent by functions, lines of code, anything you can identify.
I assume your main program at some point has to wait for the CUDA kernels to finish processing, so the fraction of samples ending in that wait gives you an estimate of the time spent in CUDA.
Samples not ending in that wait, but doing other things, indicate the time spent doing those other things.
The statistics are pretty simple. If a line of code or function is on the stack for fraction F of time, then it is responsible for that fraction of time. So if you take N samples, the number of samples showing the line of code or function is, on average, NF. The standard deviation is sqrt(NF(1-F)).
So if F is 50% or 0.5, and you take 20 random stack samples, you can expect to see the code on 10 of them, with a standard deviation of sqrt(20*0.5*0.5) = 2.24 samples, or somewhere between 7 and 13 samples, most likely between 9 and 11.
In other words, you get a very rough measurement of the code's cost, but you know precisely what code has that cost.
If you're concerned about speed, you mainly care about the things that have a big cost, so it's good at pointing those out to you.
If you want to know why gprof doesn't tell you those things, there's a lot more to say about that.

How to do good benchmarking of complex functions?

I am about to embark on very detailed benchmarking of a set of complex functions in C. This is "science level" detail. I'm wondering, what would be the best way to do serious benchmarking? I was thinking about running them, say, 10 times each, averaging the timing results and giving the standard deviation, for instance, just using <time.h>. What would you guys do to obtain good benchmarks?
Reporting an average and standard deviation gives a good description of a distribution when the distribution in question is approximately normal. However, this is rarely true of computational performance measurements. Instead, performance measurements tend to more closely resemble a Poisson distribution. This makes sense, because not many random events on a computer will cause a program to go faster; essentially all of the measurement noise is in how many random events occur that cause it to slow down. (A normal distribution, by contrast, makes no intuitive sense at all; it would require the belief that a program has a non-zero probability of finishing in negative time).
In light of this, I find it most useful to report the minimum time over many runs of a program, rather than the average; the noise in the distribution is typically noise of the measuring system, rather than meaningful information about the algorithm. For complex algorithms that have early out conditions, and other shortcuts, you need to be a little more careful, but the minimum of many runs where each run handles a representative balance of inputs usually works well.
"10 times each" sounds like very few iterations to me. I generally do something on the order of thousands (or more, depending on the function/system) of runs unless that's completely infeasible. At a bare minimum, you need to make sure that you run the timing for sufficiently long as to shake out any dependence on system state, some of which may change at fairly large time granularity.
The other thing you should be aware of is that essentially every system has a platform-specific timer available that is much more accurate than what is available in <time.h>. Find out what it is on your target platform[s] and use it instead.
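On Linux, for example, a rough sketch along these lines, combining a platform timer with the minimum-of-many-runs idea from above (my_func() and the run count are placeholders):
#define _POSIX_C_SOURCE 199309L   /* for clock_gettime with -std=c99 */
#include <stdio.h>
#include <time.h>                 /* clock_gettime; may need -lrt on older glibc */

extern void my_func(void);        /* function under test (placeholder) */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double best = 1e30;                    /* minimum over all runs */
    for (int r = 0; r < 10000; ++r) {      /* arbitrary run count */
        double t0 = now_sec();
        my_func();
        double dt = now_sec() - t0;
        if (dt < best) best = dt;
    }
    printf("min time per call = %.9f s\n", best);
    return 0;
}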
I am assuming you are looking at benchmarking pure algorithmic computation in your program, and that there is no user input or output, which can take unpredictable time.
Now, for purely number-crunching programs, your results could vary from run to run, because the time your program actually takes will be impacted by other ongoing activities in the system. There may be other factors which you choose to ignore depending upon the level of accuracy desired, e.g. the impact of cache misses and of differing access times through the memory hierarchy.
One of the methods is, as you suggested, calculating the average over a number of runs.
Or you could look at the assembly code to see the instructions generated, and then, based on the processor, get the cycle count for those instructions. This method may not be practical depending on the amount of code you are looking to benchmark. If you are particular about memory-hierarchy impact, then you may want to control the execution environment very carefully, i.e. where the program is loaded, where its data is loaded, etc. But as I mentioned, depending on the accuracy desired, you may absorb the variation caused by the memory hierarchy into your statistical variation.
You may need to carefully design the test inputs for your functions to ensure path coverage, and you may choose to publish performance statistics as a function of test input. This will show how the function behaves across a range of inputs.

Can gdb or other tool be used to detect parts of a complex program (e.g. loops) that take more time than expected for targeting optimization?

As the title implies, basically: say we have a complex program and we want to make it faster however we can. Can we somehow detect which loops or other parts of its structure take most of the time, so that we can target them for optimization?
edit: Note that, importantly, the software is assumed to be very complex, so we can't check each loop or other structure one by one, putting timers in them, etc.
You're looking for a profiler. There are several around; since you mention gcc you might want to check gprof (part of binutils). There's also Google Perf Tools although I have never used them.
You can use GDB for that, by this method.
Here's a blow-by-blow example of using it to optimize a realistically complex program.
You may find "hotspots" that you can optimize, but more generally the things that give you the greatest opportunity for saving time are mid-level function calls that you can avoid.
One example is, say, calling a function to extract information from a database, where the function is being called multiple times, when with some extra coding the result from a prior call could be used.
Often such calls are small and innocent-looking, and you're totally surprised to learn how much they're costing, as an overall percent of time.
Another example is doing some low-level I/O that escapes attention, but actually costs a hefty percent of clock time.
Another example is tidal waves of notifications that propagate from seemingly trivial changes to data.
Another good tool for finding these problems is Zoom.
Here's a discussion of the technical issues, but basically what to look for is:
It should tell you inclusive percent of time, at line-level resolution, not just functions.
a) Only knowing that a function is costly still leaves you wondering where the lines are in it that you should look at.
b) Inclusive percent tells the true cost of the line - how much bottom-line time it is responsible for and would not be spent if it were not there.
It should include both I/O (i.e. blocked) time and CPU time, not just CPU time. A tool that only considers CPU time will not see the first two problems mentioned above.
If your program is interactive, the tool should operate only during the time you care about, and not while waiting for user input. You don't want to include head-scratching time in your program's performance statistics.
gprof breaks it down by function. If you have many different loops in one function, it might not tell you which loop is taking the time. This is a clue to refactor ;-)
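For instance, a hedged sketch of that refactoring (the function names and loop bodies are made up; the noinline attribute is GCC-specific and is only there so the extracted loops keep their own entries when compiled with -pg):
/* before: both loops were inside one big function, so gprof lumped their time together */
__attribute__((noinline)) static void scale_pass(double *a, int n)
{
    for (int i = 0; i < n; ++i) a[i] *= 2.0;
}

__attribute__((noinline)) static void offset_pass(double *a, int n)
{
    for (int i = 0; i < n; ++i) a[i] += 1.0;
}

void process(double *a, int n)   /* after: gprof now reports each pass separately */
{
    scale_pass(a, n);
    offset_pass(a, n);
}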

Performance/profiling measurement in C

I'm doing some prototyping work in C, and I want to compare how long a program takes to complete with various small modifications.
I've been using clock; from K&R:
clock returns the processor time used by the program since the beginning of execution, or -1 if unavailable.
This seems sensible to me, and has been giving results which broadly match my expectations. But is there something better to use to see what modifications improve/worsen the efficiency of my code?
Update: I'm interested in both Windows and Linux here; something that works on both would be ideal.
Update 2: I'm less interested in profiling a complex problem than total run time/clock cycles used for a simple program from start to finish—I already know which parts of my program are slow. clock appears to fit this bill, but I don't know how vulnerable it is to, for example, other processes running in the background and chewing up processor time.
Forget time() functions, what you need is:
Valgrind!
And KCachegrind is the best gui for examining callgrind profiling stats. In the past I have ported applications to linux just so I could use these tools for profiling.
For a rough measurement of overall running time, there's time ./myprog.
But for performance measurement, you should be using a profiler. For GCC, there is gprof.
Both of these assume a Unix-ish environment. I'm sure there are similar tools for Windows, but I'm not familiar with them.
Edit: For clarification: I do advise against using any gettime() style functions in your code. Profilers have been developed over decades to do the job you are trying to do with five lines of code, and provide a much more powerful, versatile, valuable, and fool-proof way to find out where your code spends its cycles.
I've found that timing programs, and finding things to optimize, are two different problems, and for both of them I personally prefer low-tech.
For timing, the trick is to make it take long enough by wrapping a loop around it. For example, if you iterate an operation 1000 times and time it with a stopwatch, then seconds become milliseconds when you remove the loop.
For finding things to optimize, there are pieces of code (terminal instructions and function calls) that are responsible for various fractions of the time. During that time, they are exposed on the stack. So you can wrap a loop around the program to make it take long enough, and then take stackshots. The code to optimize will jump out at you.
In POSIX (e.g. on Linux), you can use gettimeofday() to get higher-precision timing values (microseconds).
In Win32, QueryPerformanceCounter() is popular.
Beware of CPU clock-changing effects: if your CPU decides to clock down during the test, results may be skewed.
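A hedged cross-platform sketch of such a wall-clock timer (QueryPerformanceCounter on Win32, gettimeofday elsewhere; the wrapper name now_sec is just for illustration):
#ifdef _WIN32
#include <windows.h>
static double now_sec(void)                  /* wall-clock seconds, Win32 */
{
    LARGE_INTEGER freq, count;
    QueryPerformanceFrequency(&freq);        /* ticks per second */
    QueryPerformanceCounter(&count);
    return (double)count.QuadPart / (double)freq.QuadPart;
}
#else
#include <sys/time.h>
static double now_sec(void)                  /* wall-clock seconds, POSIX */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                 /* microsecond resolution */
    return tv.tv_sec + tv.tv_usec / 1e6;
}
#endif
Call it before and after the code under test and subtract the two readings.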
If you can use POSIX functions, have a look at clock_gettime. I found an example from a quick google search on how to use it. To measure processor time taken by your program, you need to pass CLOCK_PROCESS_CPUTIME_ID as the first argument to clock_gettime, if your system supports it. Since clock_gettime uses struct timespec, you can probably get useful nanosecond resolution.
As others have said, for any serious profiling work, you will need to use a dedicated profiler.
