Performance comparison of a program: C / Assembly

I have code in C and in assembly (x86 on Linux) and I would like to compare their speed, but I don't know how to do it.
In C, the time.h library lets us measure the execution time of the program, but I don't know how to do that in assembly.
I found the rdtsc instruction, which gives the number of clock cycles elapsed between two points in the code. But there seems to be a lot of noise in the returned value (maybe because of whatever else is running on the PC?), so I don't see how to compare the speed of these two programs. The time shown in the command prompt is apparently not a reliable reference either...
How should I proceed? Thanks.
I have tried comparing the values I got with the assembly program against the values I got from an empty program, in order to get an average value, but the values are still inconsistent.

Related

How to create a mini-benchmark program in C

I have an assignment where I need to make a benchmark program to test the performance of any processor with two sorting algorithms (an iterative one and a recursive one). The thing is my teacher told me I have to create three different programs (that is, 3 .c files): two containing one sorting algorithm each (both of them have to read integers from a text file separated with \n's and write the same numbers to another text file but sorted), and a benchmarking program. In the benchmark program I need to calculate the MIPS (millions of instructions per second) with the formula MIPS = NI / (T * 10^6), where NI is the number of instructions and T is the time required to execute those instructions. I have to be able to estimate the time each algorithm will take on any processor by calculating its MIPS and then solving that equation for T, i.e. EstimatedTime = NI / (MIPS * 10^6).
My question is... how exactly do I measure the performance of a program with another program? I have never done something like that. I mean, I guess I can use the TIME functions in C and measure the time to execute X number of lines and stuff, but I can do that only if all 3 functions (2 sorting algorithms and 1 benchmark function) are in the same program. I don't even know how to start.
Oh and btw, I have to calculate the number of instructions by cross compiling the sorting algorithms from C to MIPS (the asm language) and counting how many instructions were used.
Any guidelines would be appreciated... I currently have these functions:
readfile (to read text files with ints on them)
writefile
sorting algorithms
On a Linux system, you can use hardware performance counters: perf stat ./a.out gives you an accurate count of cycles, instructions, cache misses, and branch mispredicts. (Other counters are available, too, but those are the defaults.)
This gives you the dynamic instruction count, counting instructions inside loops as many times as they actually ran.
Cross-compiling for MIPS and counting instructions in the assembly would easily give you a static instruction count, but turning that into a dynamic count would require actually following how the asm works to figure out how many times each loop runs.
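If the dynamic count has to come from inside the benchmark program itself rather than from the perf command line, here is a minimal sketch using the Linux perf_event_open syscall (modelled on the example in its man page); the measured loop is only a placeholder for a call to one of your sorting functions:

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr pe;
    memset(&pe, 0, sizeof(pe));
    pe.type = PERF_TYPE_HARDWARE;
    pe.size = sizeof(pe);
    pe.config = PERF_COUNT_HW_INSTRUCTIONS;   /* retired instructions */
    pe.disabled = 1;
    pe.exclude_kernel = 1;
    pe.exclude_hv = 1;

    int fd = perf_event_open(&pe, 0, -1, -1, 0);  /* this process, any CPU */
    if (fd == -1) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* placeholder for the code under test, e.g. your sorting function */
    volatile long sum = 0;
    for (long i = 0; i < 1000000; i++) sum += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    long long count;
    read(fd, &count, sizeof(count));
    printf("instructions retired: %lld\n", count);

    close(fd);
    return 0;
}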
How you compile the several files and link them together depends on the compiler. With GCC, for example, it can be something as simple as
gcc -O3 -g3 -W -Wall -Wextra main.c sortalgo_1.c sortalgo_2.c [...] sortalgo_n.c -o sortingbenchmark
It's not the most common way to do it, but good enough for this assignment.
If you want to count the opcodes, it is probably better to compile the individual C files to assembly one by one. Do the following for every C file whose assembler output you want to analyze:
gcc -c -S sortalgo_n.c
Don't forget to put your function declarations into a common header file and include it everywhere you use them!
For benchmarking: you know the number of ASM operations for every C operation and can, although it's not easy, map that count onto every line of the C code. Once you have that, all you have to do is increment a counter. E.g.: if a line of C code translates to 123 ASM opcodes, you increment the counter by 123 (see the sketch below).
You can use one global variable for this. If you use more than one thread per sorting algorithm, you need to make sure the additions are atomic (either use _Atomic, or mutexes, or whatever your OS/compiler/libraries offer).
BTW: it looks like a very exact way to measure runtime, but in the real world not every ASM opcode runs in the same number of cycles on the CPU. No need to worry about that today, but keep it in mind for tomorrow.
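A minimal sketch of that counting scheme, using a bubble-sort placeholder; the per-line costs (4 and 7) are made-up numbers standing in for whatever you actually count in the gcc -S output:

#include <stdatomic.h>
#include <stdio.h>

/* Global dynamic "instruction" counter; _Atomic so several threads can add to it safely. */
static _Atomic unsigned long long asm_ops = 0;

/* Hypothetical per-line opcode counts, taken from the assembly listing. */
#define COST_COMPARE 4
#define COST_SWAP    7

void bubble_sort(int *v, int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++) {
            atomic_fetch_add(&asm_ops, COST_COMPARE);
            if (v[j] > v[j + 1]) {
                atomic_fetch_add(&asm_ops, COST_SWAP);
                int t = v[j]; v[j] = v[j + 1]; v[j + 1] = t;
            }
        }
}

int main(void)
{
    int v[] = {5, 3, 8, 1, 9, 2};
    bubble_sort(v, 6);
    printf("estimated dynamic instruction count: %llu\n",
           (unsigned long long)asm_ops);
    return 0;
}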

Get the CPU cycles

Is it possible to get the exact number of CPU cycles of a code in a C program?
I tried to use the C function clock and the assembly instruction rdtsc, but I only got a very rough approximation, and even with loops I didn't manage to get enough accuracy.
You can find below some code that I tried (unsuccessfully). For example, to get the cycles of an incrementation, I wanted to do
clk("++foo") - clk("")
hoping to get "1".
#define __clk(x) tmp = clock(); \
    x; \
    return abs(tmp - clock());

inline int clk(char* x)
{
    __clk(x)
}
Do you know if there is a way to get what I want? I'm currently doing C on Debian, but if needed I also have a Windows system, and if a solution is only available in another language, that's not a problem.

How to print data about the execution of my code?

When programming in Haskell we have the interpreter option :set +s. It prints some information about the code you ran: in GHCi it prints the time spent running the code and the number of bytes used; in Hugs it prints the number of reductions made by the interpreter and the number of bytes used. How can I do the same thing in C? I know how to print the time spent running my C code and how to print the number of clock ticks used by the processor to run it. But what about the number of bytes and reductions? I want to know a good way to compare two different pieces of code that do the same thing and see which is the more efficient for me.
Thanks.
If you want to compare performance, just compare time and used memory. Allow both programs to exploit the same number of processor cores, write equivalent programs in both languages, and run benchmarks. If you are using a Unix, time(1) is your friend (a sketch of collecting the same data from inside the C program follows below).
Everything else is not relevant to performance. If a program performed 10x more function calls than another one, but ran in half the time, it is still the one with the better performance.
The Benchmarks Game web site compares different languages using time/space criteria. You may wish to follow the same spirit.
For more careful profiling of portions of the programs, rather than the whole program, you can either use a profiler (in C) or turn on the profiling options (in GHC Haskell). Criterion is also a popular Haskell library for benchmarking Haskell programs. Profiling is typically useful to spot the "hot spots" in the code: long-running loops, frequently called functions, etc. This is useful because it tells the programmer where optimization is needed. For instance, if a function cumulatively runs for 0.05s, obtaining a 10x speed increase on it is far less useful than a 5% optimization on a function cumulatively running for 20 minutes (0.045s vs 60s gained).
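To get the "time and used memory" numbers from inside the C program itself (rather than from time(1)), here is a minimal sketch using the POSIX getrusage call; it assumes a Unix-like system, and the allocation loop is only a placeholder for your real workload:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* placeholder workload: allocate and touch some memory */
    size_t n = 1u << 20;
    int *v = malloc(n * sizeof *v);
    if (!v) return 1;
    for (size_t i = 0; i < n; i++) v[i] = rand();

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0) { perror("getrusage"); return 1; }

    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    printf("user CPU %.3f s, system CPU %.3f s, max RSS %ld kB\n",
           user, sys, ru.ru_maxrss);   /* ru_maxrss is in kilobytes on Linux */

    free(v);
    return 0;
}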

Is there a better way to benchmark a C program than timing?

I'm coding a little program that has to sort a large array (up to 4 million text strings). Seems like I'm doing quite well at it, since a combination of radix sort and merge sort has already cut the original q(uick)sort execution time to less than half.
Execution time being the main point, since this is what I'm using to benchmark my piece of code.
My question is:
Is there a better (i.e. more reliable) way of benchmarking a program than just timing the execution? It kinda works, but the same program (with the same background processes running) usually has slightly different execution times if run twice.
This kinda defeats the purpose of detecting small improvements. And several small improvements could add up to a big one...
Thanks in advance for any input!
Results:
I managed to get gprof to work under Windows (using gcc and MinGW). gcc behaves poorly (considering execution time) compared to my normal compiler (tcc), but it gave me quite some insight.
Try a profiling tool; that will also show you where the program is spending its time. gprof is the classic C profiling tool, at least on Unix.
Look at the time command. It tracks both the CPU time a process uses and the wall-clock time. You can also use something like gprof for profiling your code to find the parts of your program that are actually taking the most time. You could do a lower-tech version of profiling with timers in your code. Boost has a nice timer class, but it's easy to roll your own.
I don't think it's sufficient to just measure how long a piece of code takes to execute. Your environment is a constantly changing thing, so you have to take a statistical approach to measuring execution time.
Essentially you need to take N measurements, discard outliers, and calculate your average, median and standard deviation running time, with an uncertainty measurement.
Here's a good blog explaining why and how to do this (with code): http://blogs.perl.org/users/steffen_mueller/2010/09/your-benchmarks-suck.html
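A minimal sketch of that statistical approach; sort_under_test() is just a placeholder for whatever you want to benchmark, and median/outlier handling is left out for brevity (compile with -lm for sqrt):

#include <math.h>
#include <stdio.h>
#include <time.h>

#define N 30

/* Placeholder for the routine you actually want to benchmark. */
static void sort_under_test(void)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 1000000; i++) x += i;
}

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double sum = 0.0, sumsq = 0.0;

    for (int i = 0; i < N; i++) {
        double t0 = now_seconds();
        sort_under_test();
        double t = now_seconds() - t0;
        sum += t;
        sumsq += t * t;
    }

    double mean = sum / N;
    double stddev = sqrt(sumsq / N - mean * mean);
    printf("mean %.6f s, stddev %.6f s over %d runs\n", mean, stddev, N);
    return 0;
}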
What do you use for timing execution time so far? There's C89 clock() in time.h for starters. On unixoid systems you might find getitimer() for ITIMER_VIRTUAL to measure process CPU time. See the respective manual pages for details.
You can also use a POSIX shell's times utility to benchmark the processor time used by a process and its children. The resolution is system dependent, as with just about anything related to profiling. Try to wrap your C code in a loop, executing it as many times as necessary to reduce the "jitter" in the times the benchmarking reports.
Call your routine from a test harness, whereby it executes N + 1 times. Ignore the timing for the first iteration and then take the average of iterations 1..N. The reason for ignoring the first run is that it is often slightly inflated due to various effects, e.g. virtual memory, code being paged in, etc. The reason for averaging over N iterations is that you get rid of artefacts caused by other processes, the scheduler, etc.
If you're running on Linux or similar, you might also want to use taskset to pin your code to a specific CPU core (assuming it's single-threaded), ideally not core 0, since that core tends to handle all interrupts.
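A minimal harness in that spirit; routine_under_test() is a placeholder, the first (warm-up) run is reported separately, and the remaining N runs are averaged:

#include <stdio.h>
#include <time.h>

#define N 10

/* Placeholder for the routine you actually want to benchmark. */
static void routine_under_test(void)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 5000000; i++) x += i;
}

int main(void)
{
    double warmup = 0.0, total = 0.0;

    for (int i = 0; i <= N; i++) {          /* N + 1 runs in total */
        clock_t t0 = clock();
        routine_under_test();
        double elapsed = (double)(clock() - t0) / CLOCKS_PER_SEC;
        if (i == 0)
            warmup = elapsed;               /* caches cold, pages not yet faulted in */
        else
            total += elapsed;
    }

    printf("warm-up run %.6f s, average of next %d runs %.6f s\n",
           warmup, N, total / N);
    return 0;
}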

How to count cycles?

I'm trying to find the relative merits of 2 small functions in C: one that adds by loop, one that adds by explicit variables. The functions are irrelevant themselves, but I'd like someone to teach me how to count cycles so as to compare the algorithms. So f1 will take 10 cycles, while f2 will take 8. That's the kind of reasoning I would like to do. No performance measurements (e.g. gprof experiments) at this point, just good old instruction counting.
Is there a good way to do this? Are there tools? Documentation? I'm writing C, compiling with gcc on an x86 architecture.
http://icl.cs.utk.edu/papi/
PAPI_get_real_cyc(3) - return the total number of cycles since some arbitrary starting point
The assembler instruction rdtsc (Read Time-Stamp Counter) returns in the EDX:EAX registers the current CPU tick count, started at CPU reset. If your CPU is running at 3 GHz then one tick is 1/(3 GHz), i.e. about 0.33 ns.
EDIT:
Under MS Windows the API call QueryPerformanceFrequency returns the number of ticks per second.
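A minimal sketch of reading the counter from C on x86-64 with GCC or Clang, using the __rdtsc() intrinsic instead of hand-written inline assembly; the loop being measured is only a placeholder:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    uint64_t start = __rdtsc();

    /* code under test (placeholder) */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 1000000; i++) sum += i;

    uint64_t end = __rdtsc();
    /* note: rdtsc is not a serializing instruction, so the CPU may reorder
       nearby instructions around it; for very short sequences the result is noisy */
    printf("elapsed ticks: %llu\n", (unsigned long long)(end - start));
    return 0;
}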
Unfortunately, timing the code is as error prone as visually counting instructions and clock cycles. Whether you use a debugger or another tool, or re-compile the code with a "re-run it 10,000,000 times and time it" harness, you change where things land in the cache lines, the frequency of the cache hits and misses, etc. You can mitigate some of this by adding or removing some code upstream from the module of code being tested (to cause a few instructions to be added or removed, changing the alignment of your program and sometimes of your data).
With experience you can develop an eye for performance by looking at the disassembly (as well as the high-level code). There is no substitute for timing the code; the problem is that timing the code is error prone. The experience comes from many experiments and from trying to fully understand why adding or removing one instruction made no difference or a dramatic one, or why code added or removed in a completely unrelated area of the module under test made huge performance differences on the module under test.
As GJ has written in another answer, I also recommend using the rdtsc instruction (rather than calling some operating system function which merely looks right).
I've written quite a few answers on this topic. rdtsc allows you to calculate the elapsed clock cycles in the code's "natural" execution environment, rather than having to resort to calling it ten million times, which may not be feasible as not all functions are black boxes.
If you want to calculate elapsed time you might want to shut off energy-saving on the CPUs. If it's only a matter of clock cycles this is not necessary.
If you are trying to compare the performance, the easiest way is to put your algorithm in a loop and run it 1000 or 1000000 times.
Once you are running it enough times that the small differences can be seen, run time ./my_program which will give you the amount of processor time that it used.
Do this a few times to get a sampling and compare the results.
Trying to count instructions won't help you on x86 architecture. This is because different instructions can take significantly different amounts of time to execute.
I would recommend using simulators. Take a look at PTLsim; it will give you the number of cycles. Other than that, you may want to look at some tools that count the number of times each assembly line is executed.
Use gcc -S your_program.c. -S tells gcc to generate the assembly listing, which will be named your_program.s.
There are plenty of high-performance clocks around. QueryPerformanceCounter is Microsoft's. The general trick is to run the function tens of thousands of times and time how long it takes, then divide the time taken by the number of loops. You'll find that each loop takes a slightly different length of time, so this testing over multiple passes is the only way to truly find out how long it takes.
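A minimal sketch of that trick on Windows; function_under_test() is just a placeholder for the code you want to time:

#include <stdio.h>
#include <windows.h>

#define LOOPS 100000

/* Placeholder for the function you actually want to time. */
static void function_under_test(void)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 1000; i++) x += i;
}

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);       /* counter ticks per second */

    QueryPerformanceCounter(&start);
    for (int i = 0; i < LOOPS; i++)
        function_under_test();
    QueryPerformanceCounter(&end);

    double total = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("total %.6f s, per call %.9f s\n", total, total / LOOPS);
    return 0;
}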
This is not really a trivial question. Let me try to explain:
There are several tools on different OSes that do exactly what you want, but those tools are usually part of a bigger environment. Every instruction is translated into a certain number of cycles, depending on the CPU the compiler ran on and the CPU the program is executed on.
I can't give you a definitive answer, because I do not have enough data to pass judgement on, but I work for IBM in the database area and we use tools to measure cycles and instructions for our code, and those traces are only valid for the actual CPU the program was compiled for and was running on.
Depending on the internal structure of your CPU's pipelining and on the efficiency of your compiler, the resulting code will most likely still have cache misses and other areas you have to worry about. (In that case you may want to look into FDPR...)
If you want to know how many cycles your program needs to run on your CPU (compiled with your compiler), you have to understand how the CPU works and how the compiler generated the code.
I'm sorry if the answer is not sufficient to solve the problem at hand. You said you are using gcc on an x86 arch. I would work on getting the assembly code mapped to your CPU.
I'm sure you will find some areas, where gcc could have done a better job...
