I am trying to determine the time needed to read an element, so I can tell whether it was a cache hit or a cache miss. To keep the reads ordered I use the _mm_lfence() intrinsic. I got unexpected results, and after checking I saw that the lfence instruction's overhead is not deterministic.
So I am executing a program that measures this overhead in a loop of, for example, 100 000 iterations. Sometimes I get results of more than 1000 clock cycles for one iteration, and the next time it's 200. What can cause such a difference between lfence overheads, and if it is this unreliable, how can I judge the latency of cache hits and cache misses correctly? I was trying to use the same approach as in this post: Memory latency measurement with time stamp counter
The code that gives unreliable results is this:
for (int i = 0; i < arr_size; i++) {
    _mm_mfence();
    _mm_lfence();            // serialize before taking the first timestamp
    t1 = __rdtsc();
    _mm_lfence();            // the timed region: two back-to-back lfences
    _mm_lfence();
    t2 = __rdtsc();
    _mm_lfence();            // keep the store below out of the timed region
    arr[i] = t2 - t1;
}
The values in arr fall in different ranges from run to run; arr_size is 100 000.
I get results of more than 1000 clock cycle for one iteration and next time it's 200.
Sounds like your CPU ramped up from idle to normal clock speed after the first few iterations.
Remember that RDTSC counts reference cycles (a fixed frequency, equal or close to the max non-turbo frequency of the CPU), not core clock cycles, regardless of idle or turbo state. Older CPUs had RDTSC count core clock cycles, but for years now CPU vendors have used a fixed RDTSC frequency, making it useful for clock_gettime(), and have advertised this fact with the invariant_tsc CPUID feature bit. See also Get CPU cycle count?
If you really want to use RDTSC instead of performance counters, disable turbo and use a warm-up loop to get your CPU to its max frequency.
There are libraries that let you program the HW performance counters, and set permissions so you can run rdpmc in user-space. This actually has lower overhead than rdtsc. See What will be the exact code to get count of last level cache misses on Intel Kaby Lake architecture for a summary of ways to access perf counters in user-space.
I also found a paper about adding user-space rdpmc support to Linux perf (PAPI): ftp://ftp.cs.uoregon.edu/pub/malony/ESPT/Papers/espt-paper-1.pdf. IDK if that made it into mainline kernel/perf code or not.
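If setting up user-space rdpmc is more than you need, a simpler route on Linux is the perf_event_open syscall: heavier than rdpmc because the open/ioctl/read go through the kernel, but it needs no extra libraries. A rough sketch that counts cache misses around a region of code (event names and ioctls are from <linux/perf_event.h>; treat it as a starting point, not a drop-in tool):

/* Count hardware cache misses (usually LLC misses) around a code region. */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_open(uint32_t type, uint64_t config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = type;
    attr.config = config;
    attr.disabled = 1;
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;
    /* pid = 0, cpu = -1: measure this process on any CPU */
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int fd = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CACHE_MISSES);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... code under test goes here ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t misses = 0;
    if (read(fd, &misses, sizeof(misses)) != sizeof(misses))
        perror("read");
    printf("cache misses: %llu\n", (unsigned long long)misses);
    close(fd);
    return 0;
}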
I'm testing my code for sizes 1 to 1000, measuring the time needed for each size, and I noticed big differences in time even when the sizes differ by one.
I conducted 7 tests, but I'll only paste 3 here. The x-axis in the pictures represents the matrix size (N x N); the y-axis represents time in microseconds.
Here is the code snippet I'm testing:
QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&start);

for (i = 0; i < size; i++)
{
    for (j = row_ptr[i]; j < row_ptr[i + 1]; j++)
    {
        result[i] += val[j] * x[col_idx[j]];
    }
}

QueryPerformanceCounter(&end);

// Stop time measurement
interval = (double)(end.QuadPart - start.QuadPart) / frequency.QuadPart;
I tried to find a solution to this, like finding sizes that fill my L3 cache and testing with those, but nothing came of it.
Why am I getting these irregularities? And why am I getting lower times for some larger matrix sizes?
PS Here's the full code if someone is interested:
Although the name may lead you to believe that QueryPerformanceCounter is measuring the performance of your task, it is actually measuring elapsed time, not the CPU time associated with the current task.
And even if it were measuring CPU time, there are a number of things which can distract the CPU for a few microseconds (or milliseconds in the case of other running tasks grabbing the CPU), including interrupt processing (e.g., network traffic), page faults, and TLB flushes. All these various events add a bit of noise to benchmarks; the standard advice is to run your benchmarks many times and remove the outliers before analyzing the data.
Also, before you start running your benchmark, nothing useful will be in the CPU's caches, and the branch prediction will be inaccurate. So the first few benchmark runs will normally be successively faster, as the caches and branch prediction warm up. (You need to be particularly careful about comparing two different algorithms by running them successively in the same benchmark program. If you don't account for cache warm-up of in-memory data, the first benchmark may appear to be slower than the second one even if they are exactly the same algorithm.)
Finally, there are often tiny differences between core performance in multicore chips, so a switch from one core to another can result in a vertical displacement in the benchmark graph, such as the one you see at approximately N=400 in the first graph.
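If you want to reduce that noise in practice: run the timed region many times, throw away the first few warm-up runs, and look at the median rather than a single measurement. A minimal harness sketch along those lines (Windows C; kernel() here is just a hypothetical stand-in for the CSR loop in the question):

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

#define RUNS   50
#define WARMUP 5

static volatile double sink;

static void kernel(void)
{
    /* stand-in for the sparse matrix-vector loop being measured */
    double s = 0;
    for (int i = 0; i < 1000000; i++)
        s += i * 0.5;
    sink = s;
}

static int cmp_dbl(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    double samples[RUNS];

    QueryPerformanceFrequency(&freq);
    for (int r = 0; r < RUNS; r++)
    {
        QueryPerformanceCounter(&t0);
        kernel();
        QueryPerformanceCounter(&t1);
        samples[r] = (double)(t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    }

    /* sort only the post-warm-up samples and report their median */
    qsort(samples + WARMUP, RUNS - WARMUP, sizeof(double), cmp_dbl);
    printf("median: %g s\n", samples[WARMUP + (RUNS - WARMUP) / 2]);
    return 0;
}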
I have used this equation to obtain the execution time:

Execution time = CPU time + memory time

that is,

Execution time = (#instructions * average instruction execution time) +
                 (L1 cache misses * L2 latency) +
                 (L2 cache misses * memory access latency)

I have developed a simple program to check this equation; the pseudo-code is:
ini_time = get_cpu_time();
Init_Papi_counters();
// intensive computation code (matrix mult)
End_Papi_counters();
end_time = get_cpu_time();
end_time = end_time - ini_time;
The values obtained are:

Execution time: 194.111 sec
Cycles:         568949490685
Instructions:   676850501790
L1 misses:      30666388828
L2 misses:      1743525419

The latencies taken from the Intel manual are:

L2 access:          4.8 ns
Main memory access: 110 ns
Then, if I apply the equation:

L1 misses * L2 latency         = 147 sec
L2 misses * memory access time = 193 sec

As we can see, the sum of the memory-time components alone is greater than the total execution time:

194 < 147 + 193   ERROR

Could you help me figure out how to better approximate the execution time?
How did you come up with these "equations"? They are almost entirely incorrect for any modern CPU, which is why they produce garbage results.
execution time = cpu time + memory time
All modern CPUs are capable of accessing memory while computations are taking place. So there is significant overlap between these two measurements. In addition, in any non-trivial environment, lots of other things can happen that take measurable "execution time": stalling on disk access or network access, servicing interrupts, etc.
Execution time = (#instructions * average instruction execution time) +
                 (L1 cache misses * L2 latency) +
                 (L2 cache misses * memory access latency)

Setting aside cache misses, modern CPUs are pipelined and super-scalar; tens to hundreds of instructions are in flight simultaneously, and instructions * average execution time is far too simple a model to capture the real complexity of the situation. instructions / (average instructions retired per time unit) is a more accurate model, but still woefully inadequate for most usage, as the realized retire rate is extremely dependent on the specifics of the code being executed.
As Mystical noted in his comment, processors can service multiple cache misses simultaneously, so you can't simply account for them via a linear model either. Except for the absolute simplest designs, modern CPUs are much too complex to be accurately described by any model of this form. The only way to get accurate performance data is to actually run the computation on the part in question, or to use a cycle-accurate simulator that actually models all of the dependencies and resources involved in each stage of execution (there are very few modern CPUs for which such simulators are readily available, as they are very complex to do correctly).
In trying to build a very latency-sensitive application that needs to send hundreds of messages a second, each message carrying a time field, we wanted to consider optimizing gettimeofday.
Our first thought was an rdtsc-based optimization. Any thoughts? Any other pointers?
The required accuracy of the time value is milliseconds, but it isn't a big deal if the value is occasionally out of sync with the receiver by 1-2 milliseconds.
Trying to do better than the 62 nanoseconds gettimeofday takes
POSIX Clocks
I wrote a benchmark for POSIX clock sources:
time (s) => 3 cycles
ftime (ms) => 54 cycles
gettimeofday (us) => 42 cycles
clock_gettime (ns) => 9 cycles (CLOCK_MONOTONIC_COARSE)
clock_gettime (ns) => 9 cycles (CLOCK_REALTIME_COARSE)
clock_gettime (ns) => 42 cycles (CLOCK_MONOTONIC)
clock_gettime (ns) => 42 cycles (CLOCK_REALTIME)
clock_gettime (ns) => 173 cycles (CLOCK_MONOTONIC_RAW)
clock_gettime (ns) => 179 cycles (CLOCK_BOOTTIME)
clock_gettime (ns) => 349 cycles (CLOCK_THREAD_CPUTIME_ID)
clock_gettime (ns) => 370 cycles (CLOCK_PROCESS_CPUTIME_ID)
rdtsc (cycles) => 24 cycles
These numbers are from an Intel Core i7-4771 CPU @ 3.50 GHz on Linux 4.0. They were taken using the TSC register, running each clock method thousands of times and taking the minimum observed cost.
You'll want to test on the machines you intend to run on, though, since how these are implemented varies with hardware and kernel version. The code can be found here. It relies on the TSC register for cycle counting, which is in the same repo (tsc.h).
TSC
Accessing the TSC (the processor's time-stamp counter) is the most accurate and cheapest way to time things. Generally, this is what the kernel is using itself. It's also quite straightforward on modern Intel chips, as the TSC is synchronized across cores and unaffected by frequency scaling, so it provides a simple, global time source. You can see an example of using it here, with a walkthrough of the assembly code here.
The main issue with this (other than portability) is that there doesn't seem to be a good way to go from cycles to nanoseconds. The Intel docs, as far as I can find, state that the TSC runs at a fixed frequency, but that this frequency may differ from the processor's stated frequency. Intel doesn't appear to provide a reliable way to figure out the TSC frequency. The Linux kernel appears to solve this by testing how many TSC cycles occur between two hardware timers (see here).
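Lacking an architectural way to query it, a rough user-space alternative is to calibrate the TSC against another clock yourself. A sketch (assuming an invariant TSC and GCC/Clang with <x86intrin.h>; the kernel's own calibration against hardware timers is more precise):

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

int main(void)
{
    struct timespec a, b, req = { 0, 200 * 1000 * 1000 };  /* 200 ms */

    clock_gettime(CLOCK_MONOTONIC, &a);
    uint64_t t0 = __rdtsc();
    nanosleep(&req, NULL);
    uint64_t t1 = __rdtsc();
    clock_gettime(CLOCK_MONOTONIC, &b);

    /* TSC ticks divided by elapsed nanoseconds gives GHz; scale to MHz */
    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    printf("TSC frequency ~ %.1f MHz\n", (t1 - t0) / ns * 1000.0);
    return 0;
}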
Memcached
Memcached bothers to use the caching approach (keeping the current time in a variable that a background task refreshes, instead of asking the clock on every request). That may simply be to make performance more predictable across platforms, or to scale better with multiple cores. It may also not be a worthwhile optimization.
Have you actually benchmarked, and found gettimeofday to be unacceptably slow?
At the rate of 100 messages a second, you have 10ms of CPU time per message. If you have multiple cores, assuming it can be fully parallelized, you can easily increase that by 4-6x - that's 40-60ms per message! The cost of gettimeofday is unlikely to be anywhere near 10ms - I'd suspect it to be more like 1-10 microseconds (on my system, microbenchmarking it gives about 1 microsecond per call - try it for yourself). Your optimization efforts would be better spent elsewhere.
While using the TSC is a reasonable idea, modern Linux already has a userspace TSC-based gettimeofday - where possible, the vdso will pull in an implementation of gettimeofday that applies an offset (read from a shared kernel-user memory segment) to rdtsc's value, thus computing the time of day without entering the kernel. However, some CPU models don't have a TSC synchronized between different cores or different packages, and so this can end up being disabled. If you want high performance timing, you might first want to consider finding a CPU model that does have a synchronized TSC.
That said, if you're willing to sacrifice a significant amount of resolution (your timing will only be accurate to the last tick, meaning it could be off by tens of milliseconds), you could use CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE with clock_gettime. This is implemented via the vDSO as well, and guaranteed not to call into the kernel (for recent kernels and glibc).
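For reference, reading one of the coarse clocks is just an ordinary clock_gettime() call; a minimal sketch (Linux-specific clock IDs):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* vDSO path, no syscall on recent kernels; resolution is one kernel tick */
    clock_gettime(CLOCK_REALTIME_COARSE, &ts);
    printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}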
Like bdonian says, if you're only sending a few hundred messages per second, gettimeofday is going to be fast enough.
However, if you were sending millions of messages per second, it might be different (but you should still measure that it is a bottleneck). In that case, you might want to consider something like this:
- have a global variable holding the current timestamp at your desired accuracy
- have a dedicated background thread that does nothing except update the timestamp (if the timestamp should be updated every T units of time, have the thread sleep some fraction of T and then update the timestamp; use real-time features if you need to)
- all other threads (or the main process, if you don't use threads otherwise) just read the global variable
The C language does not guarantee that you can atomically read the timestamp value if it is larger than sig_atomic_t. You could use locking to deal with that, but locking is heavy. Instead, you could use a volatile sig_atomic_t typed variable to index an array of timestamps: the background thread updates the next element in the array, and then updates the index. The other threads read the index, and then read the array: they might get a slightly out-of-date timestamp (but they get the right one next time), but they do not run into the problem where they read the timestamp while it is being updated and get some bytes of the old value and some of the new value. A sketch of this follows.
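A minimal sketch of that scheme, assuming POSIX threads (all of the names below are made up for illustration):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define SLOTS 4

static struct timeval slots[SLOTS];
static volatile sig_atomic_t current = 0;

/* Single writer: fill the next slot, then publish its index.  On x86 stores
 * become visible in program order; on weakly ordered CPUs use real atomics
 * (C11 <stdatomic.h>) instead of relying on volatile.                       */
static void *updater(void *arg)
{
    (void)arg;
    for (;;) {
        int next = ((int)current + 1) % SLOTS;
        gettimeofday(&slots[next], NULL);
        current = next;
        usleep(500);                 /* refresh every 0.5 ms */
    }
    return NULL;
}

/* Readers copy whichever slot is currently published. */
static struct timeval now_cached(void)
{
    return slots[current];
}

int main(void)
{
    pthread_t t;
    gettimeofday(&slots[0], NULL);   /* seed the first slot */
    pthread_create(&t, NULL, updater, NULL);

    struct timeval tv = now_cached();
    printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}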
But all this is much overkill for just hundreds of messages per second.
Below is a benchmark; I see about 30 ns per gettimeofday() call. printTime() is taken from rashad's answer to How to get current time and date in C++?
#include <string>
#include <iostream>
#include <ctime>
#include <sys/time.h>

using namespace std;

void printTime(time_t now)
{
    struct tm tstruct;
    char buf[80];
    tstruct = *localtime(&now);
    strftime(buf, sizeof(buf), "%Y-%m-%d.%X", &tstruct);
    cout << buf << endl;
}

int main()
{
    timeval tv;
    time_t tm;

    gettimeofday(&tv, NULL);
    printTime((time_t)tv.tv_sec);
    for (int i = 0; i < 100000000; i++)
        gettimeofday(&tv, NULL);
    gettimeofday(&tv, NULL);
    printTime((time_t)tv.tv_sec);

    printTime(time(NULL));
    for (int i = 0; i < 100000000; i++)
        tm = time(NULL);
    printTime(time(NULL));

    return 0;
}
3 seconds for 100,000,000 calls, i.e. about 30 ns per call:
2014-03-20.09:23:35
2014-03-20.09:23:38
2014-03-20.09:23:38
2014-03-20.09:23:41
Do you need millisecond precision? If not, you could simply use time() and deal with the Unix timestamp.
Is it possible to determine the throughput of an application on a processor from the cycle counts (Processor instruction cycles) consumed by the application ? If yes, how to calculate it ?
If the process is entirely CPU bound, then you divide the processor speed by the number of cycles to get the throughput.
In reality, few processes are entirely CPU bound though, in which case you have to take other factors (disk speed, memory speed, serialization, etc.) into account.
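As a toy illustration of that division (the numbers are made up, and a fully CPU-bound task is assumed):

#include <stdio.h>

int main(void)
{
    double clock_hz        = 3.0e9;   /* assumed core clock: 3 GHz         */
    double cycles_per_item = 1.5e6;   /* assumed cost of one work item     */

    /* CPU-bound throughput = processor speed / cycles per item = 2000/s   */
    printf("throughput ~ %.0f items/s\n", clock_hz / cycles_per_item);
    return 0;
}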
Simple:
#include <time.h>

clock_t c = clock();                          // CPU time used so far, in clock ticks
double seconds = (double)c / CLOCKS_PER_SEC;  // convert to seconds (avoid integer division)
Note that the value you get is an approximation, for more info see the clock() man page.
Some CPUs have internal performance registers which enable you to collect all sorts of interesting statistics, such as instruction cycles (sometimes even on a per execution unit basis), cache misses, # of cache/memory reads/writes, etc. You can access these directly, but depending on what CPU and OS you are using there may well be existing tools which manage all the details for you via a GUI. Often a good profiling tool will have support for performance registers and allow you to collect statistics using them.
If you use the Cortex-M3 from TI/Luminary Micro, you can make use of the driverlib delivered by TI/Luminary Micro.
Using the SysTick functions you can set the SysTick period to 1 processor cycle, so you have 1 processor clock between interrupts. By counting the number of interrupts you should get a "near enough" estimate of how much time a function or block of code takes.
What are the common algorithms being used to measure the processor frequency?
Intel CPUs after Core Duo support two Model-Specific registers called IA32_MPERF and IA32_APERF.
MPERF counts at the maximum frequency the CPU supports, while APERF counts at the actual current frequency.
The actual frequency is given by:

actual_freq = tsc_freq * (APERF_t1 - APERF_t0) / (MPERF_t1 - MPERF_t0)

You can read the two counters with this flow:
    ; read MPERF
    mov     ecx, 0xe7
    rdmsr
    mov     mperf_var_lo, eax
    mov     mperf_var_hi, edx

    ; read APERF
    mov     ecx, 0xe8
    rdmsr
    mov     aperf_var_lo, eax
    mov     aperf_var_hi, edx
but note that rdmsr is a privileged instruction and can run only in ring 0.
I don't know if the OS provides an interface to read these, though their main usage is for power management, so it might not provide such an interface.
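For what it's worth, Linux does expose MSRs to privileged user space through the msr driver: after modprobe msr, /dev/cpu/N/msr can be read with pread() at an offset equal to the MSR number. A sketch of reading MPERF and APERF that way (needs root):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Read one MSR on CPU 0 via the Linux msr driver. */
static uint64_t rdmsr_cpu0(uint32_t msr)
{
    uint64_t value = 0;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0 || pread(fd, &value, sizeof(value), msr) != sizeof(value))
        perror("msr read");
    if (fd >= 0)
        close(fd);
    return value;
}

int main(void)
{
    uint64_t mperf = rdmsr_cpu0(0xE7);   /* IA32_MPERF */
    uint64_t aperf = rdmsr_cpu0(0xE8);   /* IA32_APERF */
    printf("MPERF=%llu APERF=%llu\n",
           (unsigned long long)mperf, (unsigned long long)aperf);
    return 0;
}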
I'm gonna date myself with various details in this answer, but what the heck...
I had to tackle this problem years ago on Windows-based PCs, so I was dealing with Intel x86 series processors like 486, Pentium and so on. The standard algorithm in that situation was to do a long series of DIVide instructions, because those are typically the most CPU-bound single instructions in the Intel set. So memory prefetch and other architectural issues do not materially affect the instruction execution time -- the prefetch queue is always full and the instruction itself does not touch any other memory.
You would time it using the highest resolution clock you could get access to in the environment you are running in. (In my case I was running near boot time on a PC compatible, so I was directly programming the timer chips on the motherboard. Not recommended in a real OS, usually there's some appropriate API to call these days).
The main problem you have to deal with is different CPU types. At that time there was Intel, AMD and some smaller vendors like Cyrix making x86 processors. Each model had its own performance characteristics vis-a-vis that DIV instruction. My assembly timing function would just return a number of clock cycles taken by a certain fixed number of DIV instructions done in a tight loop.
So what I did was to gather some timings (raw return values from that function) from actual PCs running each processor model I wanted to time, and record those in a spreadsheet against the known processor speed and processor type. I actually had a command-line tool that was just a thin shell around my timing function, and I would take a disk into computer stores and get the timings off of display models! (I worked for a very small company at the time).
Using those raw timings, I could plot a theoretical graph of what timings I should get for any known speed of that particular CPU.
Here was the trick: I always hated when you would run a utility and it would announce that your CPU was 99.8 MHz or whatever. Clearly it was 100 MHz and there was just a small round-off error in the measurement. In my spreadsheet I recorded the actual speeds that were sold by each processor vendor. Then I would use the plot of actual timings to estimate projected timings for any known speed. But I would build a table of points along the line where the timings should round to the next speed.
In other words, if 100 ticks to do all that repeated dividing meant 500 MHz, and 200 ticks meant 250 MHz, then I would build a table that said that anything below 150 was 500 MHz, and anything above that was 250 MHz. (Assuming those were the only two speeds available from that chip vendor.) It was nice because even if some odd piece of software on the PC was throwing off my timings, the end result would often still be dead on.
Of course now, in these days of overclocking, dynamic clock speeds for power management, and other such trickery, such a scheme would be much less practical. At the very least you'd need to do something to make sure the CPU was in its highest dynamically chosen speed first before running your timing function.
OK, I'll go back to shooing kids off my lawn now.
One way on x86 Intel CPUs since the Pentium is to take two samples of the RDTSC instruction around a delay loop of known wall-clock time, e.g.:
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

uint64_t rdtsc(void) {
    uint64_t result;
    __asm__ __volatile__ ("rdtsc" : "=A" (result));
    return result;
}

int main(void) {
    uint64_t ts0, ts1;

    ts0 = rdtsc();
    sleep(1);
    ts1 = rdtsc();

    printf("clock frequency = %llu\n", ts1 - ts0);
    return 0;
}
(on 32-bit platforms with GCC)
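The "=A" constraint only yields the full 64-bit counter on 32-bit targets; a variant using the __rdtsc() intrinsic from <x86intrin.h> works on both 32- and 64-bit builds (GCC/Clang assumed):

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>

int main(void)
{
    uint64_t ts0 = __rdtsc();
    sleep(1);                    /* roughly one second of wall time */
    uint64_t ts1 = __rdtsc();

    printf("TSC ticks per second = %llu\n", (unsigned long long)(ts1 - ts0));
    return 0;
}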
RDTSC is available in ring 3 as long as the TSD (time-stamp disable) flag in CR4 is not set, which is the common case but not guaranteed. One shortcoming of this method is that it is vulnerable to frequency-scaling changes affecting the result if they happen inside the delay. To mitigate that, you could execute code that keeps the CPU busy and constantly polls the system time to see whether the delay period has expired, keeping the CPU in its highest available frequency state.
I use the following (pseudo)algorithm:
basetime = time();            /* time() returns seconds */
while (time() == basetime);   /* wait for the second to roll over */
stclk = rdtsc();              /* rdtsc is an assembly instruction */
basetime = time();
while (time() == basetime);   /* wait for the next rollover */
endclk = rdtsc();
nclks = endclk - stclk;
At this point you might assume that you've determined the clock frequency but even though it appears correct it can be improved.
All PCs contain a PIT (Programmable Interval Timer) device which contains counters which are (used to be) used for serial ports and the system clock. It was fed with a frequency of 1193182 Hz. The system clock counter was set to the highest countdown value (65536) resulting in a system clock tick frequency of 1193182/65536 => 18.2065 Hz or once every 54.925 milliseconds.
The number of ticks needed for the clock to advance to the next second will therefore vary: usually 18 ticks are required, sometimes 19. This can be handled by performing the algorithm (above) twice and storing both results. The two results will either correspond to two 18-tick sequences, or one 18-tick and one 19-tick sequence; two 19s in a row won't occur. So by taking the smaller of the two results you will have an 18-tick second. Adjust this result by multiplying by 18.2065 and dividing by 18.0 or, using integer arithmetic, multiply by 182065, add 90000 and divide by 180000 (90000 is one half of 180000 and is there for rounding). If you choose the integer calculation, make sure you are using 64-bit multiplication and division.
You will now have a CPU clock speed x in Hz, which can be converted to kHz ((x+500)/1000) or MHz ((x+500000)/1000000). The 500 and 500000 are one half of 1000 and 1000000 respectively and are there for rounding. To calculate MHz, do not go via the kHz value because rounding issues may arise; use the Hz value and the second formula.
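Putting the two passes and the scaling together, a hypothetical C sketch of the procedure described above (using __rdtsc() as the cycle counter and time() as the seconds clock; it assumes a fixed-rate TSC):

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

/* Count TSC ticks between two consecutive rollovers of the seconds clock. */
static uint64_t ticks_per_time_second(void)
{
    time_t basetime = time(NULL);
    while (time(NULL) == basetime)
        ;                                   /* wait for the second to change */
    uint64_t stclk = __rdtsc();

    basetime = time(NULL);
    while (time(NULL) == basetime)
        ;                                   /* wait for the next change */
    uint64_t endclk = __rdtsc();

    return endclk - stclk;
}

int main(void)
{
    /* Run twice and keep the smaller result: that one spans 18 PIT ticks. */
    uint64_t a = ticks_per_time_second();
    uint64_t b = ticks_per_time_second();
    uint64_t clk18 = (a < b) ? a : b;

    /* Scale the 18-tick interval to a full second: multiply by 18.2065/18,
     * done in 64-bit integer arithmetic with rounding as described above.  */
    uint64_t hz = (clk18 * 182065 + 90000) / 180000;

    printf("estimated clock: %llu Hz (%llu MHz)\n",
           (unsigned long long)hz,
           (unsigned long long)((hz + 500000) / 1000000));
    return 0;
}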
That was the intention of things like BogoMIPS, but CPUs are a lot more complicated nowadays. Superscalar CPUs can issue multiple instructions per clock, making any measurement based on counting clock cycles to execute a block of instructions highly inaccurate.
CPU frequencies are also variable based on offered load and/or temperature. The fact that the CPU is currently running at 800 MHz does not mean it will always be running at 800 MHz; it might throttle up or down as needed.
If you really need to know the clock frequency, it should be passed in as a parameter. An EEPROM on the board would supply the base frequency, and if the clock can vary you'd need to be able to read the CPU's power-state registers (or make an OS call) to find out the frequency at that instant.
With all that said, there may be other ways to accomplish what you're trying to do. For example if you want to make high-precision measurements of how long a particular codepath takes, the CPU likely has performance counters running at a fixed frequency which are a better measure of wall-clock time than reading a tick count register.
"lmbench" provides a cpu frequency algorithm portable for different architecture.
It runs some different loops and the processor's clock speed is the greatest common divisor of the execution frequencies of the various loops.
This method should always work when we can get loops whose cycle counts are relatively prime.
http://www.bitmover.com/lmbench/
One option is to sense the CPU frequency by running code with a known number of instructions per loop.
This functionality is contained in 7zip, since about v9.20 I think.
> 7z b
7-Zip 9.38 beta Copyright (c) 1999-2014 Igor Pavlov 2015-01-03
CPU Freq: 4266 4000 4266 4000 2723 4129 3261 3644 3362
The final number is meant to be correct (and on my PC and many others, I have found it to be quite accurate). The test runs very quickly, so turbo may not kick in, and servers set to Balanced/Power Save modes will most likely give readings of around 1 GHz.
The source code is at GitHub (Official source is a download from 7-zip.org)
With the most significant portion being:
#define YY1 sum += val; sum ^= val;
#define YY3 YY1 YY1 YY1 YY1
#define YY5 YY3 YY3 YY3 YY3
#define YY7 YY5 YY5 YY5 YY5

static const UInt32 kNumFreqCommands = 128;

EXTERN_C_BEGIN

static UInt32 CountCpuFreq(UInt32 sum, UInt32 num, UInt32 val)
{
    for (UInt32 i = 0; i < num; i++)
    {
        YY7
    }
    return sum;
}
EXTERN_C_END
On Intel CPUs, a common method to get the current (average) CPU frequency is to calculate it from a few CPU counters:
CPU_freq = tsc_freq * (aperf_t1 - aperf_t0) / (mperf_t1 - mperf_t0)
The TSC (Time Stamp Counter) can be read from userspace with dedicated x86 instructions, but its frequency has to be determined by calibration against a clock. The best approach is to get the TSC frequency from the kernel (which already has done the calibration).
The aperf and mperf counters are model-specific registers (MSRs) that require root privileges for access. Again, there are dedicated x86 instructions for accessing MSRs.
Since the mperf counter rate is directly proportional to the TSC rate and the aperf rate is directly proportional to the CPU frequency you get the CPU frequency with the above equation.
Of course, if the CPU frequency changes in your t0 - t1 time delta (e.g. due to frequency scaling), you get the average CPU frequency with this method.
I wrote a small utility cpufreq which can be used to test this method.
See also:
[PATCH] x86: Calculate MHz using APERF/MPERF for cpuinfo and scaling_cur_freq. 2016-04-01, LKML
Frequency-invariant utilization tracking for x86. 2020-04-02, LWN.net
I'm not sure why you need assembly for this. If you're on a machine that has the /proc filesystem, then running:
> cat /proc/cpuinfo
might give you what you need.
A quick Google search on AMD and Intel shows that CPUID should give you access to the CPU's max frequency.
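A hedged sketch of that CPUID route with GCC/Clang's <cpuid.h>: leaf 0x16 reports base and maximum frequency directly, but it only exists on newer Intel CPUs (roughly Skylake onward), so check for it before trusting the numbers:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* CPUID leaf 0x16: EAX = base MHz, EBX = max MHz, ECX = bus MHz */
    if (__get_cpuid(0x16, &eax, &ebx, &ecx, &edx) && eax)
        printf("base: %u MHz, max: %u MHz, bus: %u MHz\n", eax, ebx, ecx);
    else
        puts("CPUID leaf 0x16 not available on this CPU");
    return 0;
}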