Is it possible to get the exact number of CPU cycles taken by a piece of code in a C program?
I tried to use the C function clock() and the assembly instruction rdtsc, but I only got a very rough approximation, and even with loops I didn't manage to get enough accuracy.
Below is some code that I tried (unsuccessfully). For example, to get the cycle count of an increment, I wanted to do
clk("++foo") - clk("")
hoping to get "1".
#include <time.h>   /* clock() */

#define __clk(x) clock_t tmp = clock(); \
                 x;                     \
                 return (int)(clock() - tmp);

inline int clk(char *x)
{
    __clk(x)   /* x is only a string pointer here; C cannot execute a string as code */
}
Do you know if there is a way to get what I want? I'm currently doing C on Debian, but if needed I also have a Windows system, and if a solution is only available in another language, that's not a problem.
Related
I have a code in C and in assembly (x86 on linux) and I would like to compare their speed, but I don't know how to do it.
In C, the time.h library allows us to know the execution time of the program. But I don't know how to do that in assembly.
I have found the instruction rdtsc, which allows us to know the number of clock cycles between two pieces of code. But I have the impression that there is a lot of noise in the returned value (maybe because of what else is running on the PC?). I don't see, then, how to compare the speed of these two programs. The time observed in the command prompt is apparently not a reliable reference either.
How should I proceed? Thanks.
I have tried to subtract the values I got from an empty program from the values I got with the assembly program, in order to get an average value, but the values are still incoherent.
I have some experience with coding, but one of the biggest questions that bothers me is how to improve my code.
I always check the complexity, readability and correctness of the code, but my question is how I can measure the size and the time cost of specific commands.
For example, take the following problem:
A is an integer
B is an integer
C is an integer
if A is bigger than B, assign C = A
else, C = B
For that problem, we have 2 simple solutions:
1. use an if-else statement
2. use the ternary operator
As a dry check of the file size before compilation, I find that the second solution's file is about half the size of the first (for 1,000,000 operations I get a difference of some MB).
My question is how I can measure the time difference between pieces of code that perform the same operation with different commands, and how much the compiler optimizes commands that are as close as the two from the example.
The best and most direct way is to check the assembly code generated by your compiler at different optimization levels.
EDIT:
I didn't mention benchmarking, because your question is about checking the difference between two source codes that use different language constructs to do the same job.
Don't get me wrong, benchmarking is the recommended way of assuring general software performance, but in this particular scenario it might be unreliable, because of the extremely small execution time frames basic operations have.
Even when you calculate the amortized time from multiple runs, the difference might still be too dependent on the OS and environment and thus pollute your results.
To learn more on the subject I recommend this talk from CppCon; it's actually quite interesting.
But most importantly,
A quick peek under the hood by exploring the assembly code can tell you whether two statements have been optimized into exactly the same code. That might not be so clear from benchmarking the code.
In the case you asked about (if vs. ternary operator) it should always lead to the same machine code, because the ternary operator is just syntactic sugar for if and physically it's the same operation.
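To illustrate, here is a minimal sketch (max_if and max_ternary are made-up names); compiling it with something like gcc -O2 -S and inspecting the generated .s file typically shows both functions reduced to the same instructions:

/* Two ways of expressing the same max operation. Compile with
 * "gcc -O2 -S compare.c" and inspect compare.s: both functions
 * usually end up as the same instruction sequence. */
int max_if(int a, int b)
{
    int c;
    if (a > b)
        c = a;
    else
        c = b;
    return c;
}

int max_ternary(int a, int b)
{
    return (a > b) ? a : b;
}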
Analyse the Time Complexity of the two algorithms. If they seem competitive,
Benchmark.
Provide a sufficiently large input for your problem, so that the timing is not affected by other (OS) overheads.
Develop two programs that solve the same problem, but with a different approach.
I have some methods in Time measurements to time code. Example:
#include <sys/time.h>
#include <stdio.h>
#include <time.h>

typedef struct timeval wallclock_t;

/* Record the current wall-clock time. */
void wallclock_mark(wallclock_t *const tptr)
{
    gettimeofday(tptr, NULL);
}

/* Return the seconds elapsed since the recorded mark. */
double wallclock_since(wallclock_t *const tptr)
{
    struct timeval now;
    gettimeofday(&now, NULL);

    return difftime(now.tv_sec, tptr->tv_sec)
         + ((double)now.tv_usec - (double)tptr->tv_usec) / 1000000.0;
}

int main(void)
{
    wallclock_t t;
    double s;

    wallclock_mark(&t);
    /*
     * Solve the problem with Algorithm 1
     */
    s = wallclock_since(&t);
    printf("That took %.9f seconds wall clock time.\n", s);
    return 0;
}
You will get a time measurement. Then you solve the problem with "Algorithm 2", for example, and compare these measurements.
PS: Or you could check the assembly code of each approach, for a more low-level view.
One of the ways is to use the time command in the bash shell, with the execution repeated a large number of times. This will show which is better. Also make a template that does neither of the two, so you know the baseline overhead.
Please take the measurements for many cases and compare averages before drawing any conclusions.
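The same subtraction idea can also be done inside a single C program; here is a minimal sketch (the ternary operation and the iteration count are arbitrary placeholders; compile without optimization, e.g. gcc -O0, or the compiler may remove the loops):

#include <stdio.h>
#include <time.h>

#define N 100000000UL

int main(void)
{
    int a = 1, b = 2, c = 0;
    unsigned long i;

    clock_t t0 = clock();
    for (i = 0; i < N; i++)         /* baseline: loop overhead only */
        ;
    clock_t t1 = clock();
    for (i = 0; i < N; i++)         /* loop plus the operation under test */
        c = (a > b) ? a : b;
    clock_t t2 = clock();

    double baseline = (double)(t1 - t0) / CLOCKS_PER_SEC;
    double with_op  = (double)(t2 - t1) / CLOCKS_PER_SEC;
    printf("c=%d, operation alone: ~%.3f s over %lu iterations\n",
           c, with_op - baseline, N);
    return 0;
}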
I have an assignment where I need to make a benchmark program to test the performance of any processor with two sorting algorithms (an iterative one and a recursive one). The thing is, my teacher told me I have to create three different programs (that is, 3 .c files): two with each sorting algorithm (both of them have to read integers from a text file separated with \n's and write the same numbers to another text file but sorted), and a benchmarking program. In the benchmark program I need to calculate the MIPS (million instructions per second) with the formula MIPS = NI / (T × 10^6), where NI is the number of instructions and T is the time required to execute those instructions. I have to be able to estimate the time each algorithm will take on any processor by calculating its MIPS and then solving that equation for T, like EstimatedTime = NI / (MIPS × 10^6).
My question is... how exactly do I measure the performance of a program with another program? I have never done anything like that. I mean, I guess I can use the time functions in C and measure the time to execute X number of lines and so on, but I can do that only if all 3 functions (2 sorting algorithms and 1 benchmark function) are in the same program. I don't even know how to start.
Oh, and by the way, I have to calculate the number of instructions by cross-compiling the sorting algorithms from C to MIPS (the assembly language) and counting how many instructions were used.
Any guidelines would be appreciated... I currently have these functions:
readfile (to read text files with ints on them)
writefile
sorting algorithms
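To make the formula concrete with made-up numbers: if one of the sorts executes NI = 5,000,000 instructions in T = 0.01 s, then MIPS = 5,000,000 / (0.01 × 10^6) = 500, and solving back for T on a 500-MIPS machine gives EstimatedTime = 5,000,000 / (500 × 10^6) = 0.01 s.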
On a Linux system, you can use hardware performance counters: perf stat ./a.out gives an accurate count of cycles, instructions, cache misses, and branch mispredicts (other counters are available too, but those are the default ones).
This gives you the dynamic instruction count, counting instructions inside loops as many times as they actually ran.
Cross-compiling for MIPS and counting instructions would easily give you a static instruction count, but would require actually following how the asm works to figure out how many times each loop runs.
How you compile the several files and link them together depends on the compiler. With GCC for example it could be something as simple as
gcc -O3 -g3 -W -Wall -Wextra main.c sortalgo_1.c sortalgo_2.c [...] sortalgo_n.c -o sortingbenchmark
It's not the most common way to do it, but good enough for this assignment.
If you want to count the opcodes, it is probably better to compile the individual C files to assembly. Do the following for every C file whose assembler output you want to analyze:
gcc -c -S sortalgo_n.c
Don't forget to put your function declarations into a common header file and include it everywhere you use them!
For benchmarking: you know the number of ASM operations for every C operation and can, although it's not easy, map that count to every line of the C code. If you have that, all you have to do is increment a counter. E.g.: if a line of C code translates to 123 ASM opcodes, you increment the counter by 123.
You can use one global variable for this. If you use more than one thread per sorting algorithm, you need to make sure the additions are atomic (either use _Atomic or mutexes or whatever your OS/compiler/libraries offer).
BTW: it looks like a very exact way to measure the runtime, but not every ASM opcode runs in the same number of cycles on the CPU in the real world. No need to bother with that today, but you should keep it in mind for tomorrow.
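A minimal sketch of such a counter (the compare_swap helper and the opcode counts 5 and 123 are made up for illustration):

#include <stdatomic.h>
#include <stdio.h>

/* Global count of executed ASM opcodes; _Atomic so that several
 * sorting threads can add to it safely. */
static _Atomic unsigned long long asm_ops = 0;

/* One compare-and-swap step of a sort, instrumented with made-up opcode counts. */
static void compare_swap(int *a, int i, int j)
{
    atomic_fetch_add(&asm_ops, 5);        /* pretend the comparison line is 5 opcodes */
    if (a[i] > a[j]) {
        atomic_fetch_add(&asm_ops, 123);  /* pretend the swap line is 123 opcodes */
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}

int main(void)
{
    int a[] = { 3, 1, 2 };
    compare_swap(a, 0, 1);
    compare_swap(a, 1, 2);
    printf("executed ~%llu ASM opcodes\n", (unsigned long long)asm_ops);
    return 0;
}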
I need to find the time taken to execute a single instruction or a couple of instructions and print it out in milliseconds. Can someone please share a small code snippet for this?
Thanks. I need this to measure the time taken to execute some instructions in my project.
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t1 = clock();
    printf("Dummy Statement\n");
    clock_t t2 = clock();
    /* clock_t has no portable printf specifier; convert to double seconds */
    printf("The time taken is.. %g seconds\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
Please look at the links below too.
What’s the correct way to use printf to print a clock_t?
http://www.velocityreviews.com/forums/t454464-c-get-time-in-milliseconds.html
One instruction will take far less than 1 millisecond to execute. And if you are trying to measure more than one instruction, it gets complicated (what about the loop that calls the instructions multiple times?).
Also, most timing functions that you can use are just that: functions. That means they execute instructions too. If you want to time one instruction, the best bet is to look up the specifications of the processor you are using and see how many cycles it takes.
Doing this programmatically isn't possible.
Edit:
Since you've updated your question to refer to some instructions rather than one: you can measure sub-millisecond time on some processors. It would be nice to know the environment. This will work on x86 and Linux; other environments will be different.
clock_gettime allows for nanosecond resolution. Or you can call the rdtsc instruction yourself (good luck with this on a multiprocessor or SMP system: you could be measuring the wrong thing, e.g. by having the instruction run on different processors).
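For example, a minimal clock_gettime() sketch for Linux (the loop body stands in for the instructions being measured; on old glibc you may need to link with -lrt):

#define _POSIX_C_SOURCE 199309L   /* for clock_gettime() */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    volatile int x = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 1000; i++)      /* the "few instructions", repeated */
        x += i;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("x=%d, elapsed: %.6f ms\n", x, ms);
    return 0;
}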
The time to actually complete an instruction depends on the clock cycle time, and the depth of the pipeline the instruction traverses through the processor. As dave said, you can't really find this out by making a program. You can use some kind of timing function provided to you by your OS to measure the cpu time it takes to complete some small set of instructions. If you do this, try not to use any kind of instructions that rely on memory, or branching. Ideally you might do some kind of logical or arithmetic operations (so perhaps using some inline assembly in C).
I'm trying to find the relative merits of 2 small functions in C: one that adds by loop, one that adds by explicit variables. The functions themselves are irrelevant, but I'd like someone to teach me how to count cycles so as to compare the algorithms. So f1 might take 10 cycles, while f2 takes 8. That's the kind of reasoning I would like to do. No performance measurements (e.g. gprof experiments) at this point, just good old instruction counting.
Is there a good way to do this? Are there tools? Documentation? I'm writing C, compiling with gcc on an x86 architecture.
http://icl.cs.utk.edu/papi/
PAPI_get_real_cyc(3) - return the total number of cycles since some arbitrary starting point
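A minimal sketch of how that call might be used (assuming PAPI is installed; link with -lpapi; the measured loop is a placeholder):

#include <stdio.h>
#include <papi.h>

int main(void)
{
    /* Initialize the library before using it. */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
        return 1;

    volatile int x = 0;
    long long c0 = PAPI_get_real_cyc();
    for (int i = 0; i < 1000; i++)      /* code under test */
        x += i;
    long long c1 = PAPI_get_real_cyc();

    printf("x=%d, elapsed cycles: %lld\n", x, c1 - c0);
    return 0;
}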
The assembler instruction rdtsc (Read Time-Stamp Counter) returns in the EDX:EAX registers the current CPU tick count, counted since CPU reset. If your CPU runs at 3 GHz then one tick is 1/(3 GHz), i.e. about a third of a nanosecond.
EDIT:
Under MS Windows the API call QueryPerformanceFrequency returns the number of ticks per second.
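If you prefer not to use an intrinsic, here is a minimal sketch of reading the counter with GCC inline assembly on x86, combining EDX:EAX into one 64-bit value:

#include <stdio.h>

/* rdtsc returns the low 32 bits of the time-stamp counter in EAX
 * and the high 32 bits in EDX. */
static unsigned long long read_tsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    unsigned long long t0 = read_tsc();
    unsigned long long t1 = read_tsc();
    printf("back-to-back rdtsc delta: %llu ticks\n", t1 - t0);
    return 0;
}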
Unfortunately, timing the code is as error-prone as visually counting instructions and clock cycles. Whether you use a debugger or another tool, or re-compile the code with a "run it 10,000,000 times and time it" kind of harness, you change where things land in the cache lines, the frequency of cache hits and misses, etc. You can mitigate some of this by adding or removing some code upstream from the module being tested (to add or remove a few instructions and change the alignment of your program and sometimes of your data).
With experience you can develop an eye for performance by looking at the disassembly (as well as the high-level code). There is no substitute for timing the code; the problem is that timing the code is error-prone. The experience comes from many experiments and from trying to fully understand why adding or removing one instruction made no difference or a dramatic one, or why code added or removed in a completely unrelated area made huge performance differences on the module under test.
As GJ has written in another answer I also recommend using the "rdtsc" instruction (rather than calling some operating system function which looks right).
I've written quite a few answers on this topic. Rdtsc allows you to calculate the elapsed clock cycles in the code's "natural" execution environment rather than having to resort to calling it ten million times which may not be feasible as not all functions are black boxes.
If you want to calculate elapsed time you might want to shut off energy-saving on the CPUs. If it's only a matter of clock cycles this is not necessary.
If you are trying to compare the performance, the easiest way is to put your algorithm in a loop and run it 1000 or 1000000 times.
Once you are running it enough times that the small differences can be seen, run time ./my_program which will give you the amount of processor time that it used.
Do this a few times to get a sampling and compare the results.
Trying to count instructions won't help you on x86 architecture. This is because different instructions can take significantly different amounts of time to execute.
I would recommend using simulators. Take a look at PTLsim; it will give you the number of cycles. Other than that, maybe you would like to look at some tools that count the number of times each assembly line is executed.
Use gcc -S your_program.c. -S tells gcc to generate the assembly listing, that will be named your_program.s.
There are plenty of high-performance clocks around. QueryPerformanceCounter is Microsoft's. The general trick is to run the function tens of thousands of times and time how long it takes, then divide the time taken by the number of loops. You'll find that each loop takes a slightly different length of time, so this testing over multiple passes is the only way to truly find out how long it takes.
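A minimal Windows sketch of that loop-and-divide approach (function_under_test and the loop count are placeholders):

#include <stdio.h>
#include <windows.h>

static void function_under_test(void)
{
    volatile int x = 0;   /* stand-in for the real work */
    x++;
}

int main(void)
{
    LARGE_INTEGER freq, start, end;
    const int loops = 100000;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < loops; i++)
        function_under_test();
    QueryPerformanceCounter(&end);

    double total = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("average per call: %.9f seconds\n", total / loops);
    return 0;
}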
This is not really a trivial question. Let me try to explain:
There are several tools on different OSes to do exactly what you want, but those tools are usually part of a bigger environment. Every instruction is translated into a certain number of cycles, depending on the CPU the compiler targeted and the CPU the program is executed on.
I can't give you a definitive answer, because I do not have enough data to base a judgement on, but I work for IBM in the database area and we use tools to measure cycles and instructions for our code, and those traces are only valid for the actual CPU the program was compiled for and run on.
Depending on the internal structure of your CPU's pipelining and on the efficiency of your compiler, the resulting code will most likely still have cache misses and other areas you have to worry about. (In that case you may want to look into FDPR...)
If you want to know how many cycles your program needs to run on your CPU (with the code your compiler generated), you have to understand how the CPU works and how the compiler generated the code.
I'm sorry if the answer is not sufficient to solve your problem at hand. You said you are using gcc on an x86 arch. I would work on getting the assembly code mapped to your CPU.
I'm sure you will find some areas where gcc could have done a better job...