Why is TSC larger in VirtualBox when I enable "TiedToExecution"? - timer

Background (my understanding of how rdtsc is virtualized): I am experimenting with TSC values in VirtualBox. My current understanding of how VirtualBox emulates rdtsc is that in virtual mode, any call to rdtsc is offset by a predetermined amount stored in another register, and that this value is the host's rdtsc reading at the moment the virtual machine started.
An advantage to this strategy is that rdtsc will advance with wall clock time in an expected manner, but the disadvantage is that a process may perceive rdtsc to take longer than expected. For instance, in simple code like this:
x = rdtsc();
y = rdtsc();
z = y - x;
print z
executed on the guest, z may be larger than expected because of the wall-clock-time cost associated with trapping rdtsc. It would be even worse if the host OS swapped off the VirtualBox process in between these two calls.
From the VirtualBox manual ("Change TSC Mode"), I gather there is an alternative virtualization technique which is supposed to simulate the TSC directly. As I understand it, the offset value will only take into account time that the guest OS actually spends using the CPU. The advantage is that, with respect to cycles available, the TSC will behave exactly as if it were on a host machine. The downside is that the TSC will drift away from wall-clock time, since there are "missing cycles" that the guest OS is not aware of.
My goal: I am trying to set VirtualBox to use the second option. I want to emulate the short-term behavior of rdtsc as if it were running on hardware, as precisely as possible, and I don't care if it doesn't match wall-clock time. I am fully aware that this is not "reliable" on SMP; it's for experimenting, not for enterprise software.
What I did: First I wrote a simple test program that calls rdtsc repeatedly, then prints the results:
#include <stdio.h>
#include <stdint.h>

static __inline__ uint64_t rdtsc()
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return (uint64_t)hi << 32 | lo;
}

int main()
{
    int i;
    uint64_t val[8];

    val[0] = rdtsc();
    val[1] = rdtsc();
    val[2] = rdtsc();
    val[3] = rdtsc();
    val[4] = rdtsc();
    val[5] = rdtsc();
    val[6] = rdtsc();
    val[7] = rdtsc();

    for (i = 0; i < 8; i++) {
        printf("rdtsc (%2d): %llX", i, (unsigned long long)val[i]);
        if (i > 0) {
            printf("\t\t (+%llX)", (unsigned long long)(val[i] - val[i - 1]));
        }
        printf("\n");
    }
    return 0;
}
I tried this program on my host machine, then ran it in my VirtualBox machine. The deltas between successive rdtsc calls were essentially identical; the only difference was that the absolute value on my host was about 30T higher. Example output:
rdtsc ( 0): 334F2252A1824
rdtsc ( 1): 334F2252A1836 (+12)
rdtsc ( 2): 334F2252A1853 (+1D)
rdtsc ( 3): 334F2252A1865 (+12)
rdtsc ( 4): 334F2252A1877 (+12)
rdtsc ( 5): 334F2252A1889 (+12)
rdtsc ( 6): 334F2252A18A6 (+1D)
rdtsc ( 7): 334F2252A18B8 (+12)
Then, I changed the TSCTiedToExecution flag in VirtualBox, which I thought was supposed to ignore wall-clock-time in favor of more precise virtual cycle counting. I got this from the manual page I mentioned above:
./VBoxManage setextradata "HelloWorld" "VBoxInternal/TM/TSCTiedToExecution" 1
However this gave me unexpected results. The virtual program now returned:
rdtsc ( 0): F2252A1824
rdtsc ( 1): F2252A1836 (+B12)
rdtsc ( 2): F2252A1853 (+B1D)
rdtsc ( 3): F2252A1865 (+AFF)
rdtsc ( 4): F2252A1877 (+B13)
rdtsc ( 5): F2252A1889 (+AF2)
rdtsc ( 6): F2252A18A6 (+B1D)
rdtsc ( 7): F2252A18B8 (+B0C)
With TSCTiedToExecution on, rdtsc seems to be taking about 1100 cycles to execute....
Question: First, why did I get this behavior? It seems like almost the opposite of what I would expect, and it certainly does not match my understanding of how this is implemented.
Second, how can I accomplish my original goal of having the TSC advance for each virtual cycle, as it would on hardware?
My Setup: I am running on an 8x Intel(R) Xeon(R) CPU X5550 @ 2.67GHz. VirtualBox has VMX and nested paging enabled. I compiled it from source, version 4.1.2_OSE r38459.
Thanks in advance.
P.S. I started a bounty on this, but still no answers...

To make yourself cry, try disabling "VBoxInternal/TM/TSCTiedToExecution" and running your test program again. The following code
ULONGLONG x1 = Cpu::Rdtsc();
ULONGLONG x2 = Cpu::Rdtsc();
DbgPrintUlong('D', x2 - x1, 30, 23);
running on VirtualBox with "VBoxInternal/TM/TSCTiedToExecution" disabled shows that x2 - x1 takes about 200,000 cycles. In contrast, on a machine with "VBoxInternal/TM/TSCTiedToExecution" enabled it takes only about 3,000 cycles. I think this reduction is what the following passage from the VirtualBox manual refers to: "In special circumstances it may be useful however to make the TSC (time stamp counter) in the guest reflect the time actually spent executing the guest."
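For reference, the same back-to-back measurement can be reproduced in plain C without the Cpu::Rdtsc/DbgPrintUlong helpers (a minimal sketch using the __rdtsc() intrinsic from x86intrin.h with GCC/Clang):
/* Sketch: measure one rdtsc-to-rdtsc gap inside the guest. */
#include <stdio.h>
#include <x86intrin.h>

int main(void) {
    unsigned long long x1 = __rdtsc();
    unsigned long long x2 = __rdtsc();
    printf("x2 - x1 = %llu cycles\n", x2 - x1);
    return 0;
}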
So, I think we won't have better TSC emulation in VirtualBox for a long time.
The only thing I can advise is to move to VMware Workstation. It has much better TSC emulation.

Related

Inconsistent values of ARM PMU cycles counter

I'm trying to measure the performance of my code in the Linux kernel with the PMU.
First of all, I wanted to test the PMU, so I created a simple loop of a few operations in the kernel. I placed it under a spinlock with interrupts disabled so my test code can't be preempted. Then I printed the cycle counter to check how many CPU cycles this loop takes. But I see very different values at each print: 100, 500, 1000, 200, ...
My question is: why do I see such different values every time?
PS: in contrast to the cycle counter, the PMU's instruction counter is stable and I see the same values every time.
I also tried to use the ARM generic timer, but it also shows varying values, similar to the PMU's cycle counter.
Here is how I use ARM timer to measure performance:
unsigned long long ticks_start, ticks_end;
int i = 0, j;
unsigned long flags;

spin_lock_irqsave(&lock, flags);
while (i++ < 100) {
    j = 0;
    asm volatile("mrs %0, CNTPCT_EL0" : "=r" (ticks_start));
    while (j++ < 10000) {
        asm volatile ("nop");
    }
    asm volatile("mrs %0, CNTPCT_EL0" : "=r" (ticks_end));
    printk("ticks %d are: %llu\n", i, ticks_end - ticks_start);
}
spin_unlock_irqrestore(&lock, flags);
and the output on a real device (Cortex-A57) is:
...
ticks 31 are: 2287
ticks 32 are: 2287
ticks 33 are: 2287
ticks 34 are: 1984
ticks 35 are: 457
ticks 36 are: 1604
ticks 37 are: 2287
...
When using things like the generic timer and the PMU on Arm, you should insert an isb instruction before reading the counter register. The architecture allows the processor to read the register speculatively, early or late, since the read is not dependent on your inner loop of nops.
So try this:
asm volatile("isb; mrs %0, CNTPCT_EL0" : "=r" (ticks_end));
The isb flushes the pipeline before letting the mrs instruction proceed. It is also possible the CPU is thermally throttling; that would not affect measurements made with the cycle counter, but it would if you were reading the generic timer to measure time.
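Applied to the loop from the question, both counter reads get the barrier (the same kernel snippet, unchanged apart from the isb):
/* Barrier both generic-timer reads so the CPU can neither hoist nor sink
   them across the nop loop being measured. */
asm volatile("isb; mrs %0, CNTPCT_EL0" : "=r" (ticks_start));
while (j++ < 10000) {
    asm volatile("nop");
}
asm volatile("isb; mrs %0, CNTPCT_EL0" : "=r" (ticks_end));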

Why does my Linux app get stopped every 0.5 seconds?

I have a 16 core Linux machine that is idle. If I run a trivial, single threaded C program that sits in a loop reading the cycle counter forever (using the rdtsc instruction), then every 0.5 seconds, I see a 0.17 ms jump in the timer value. In other words, every 0.5 seconds it seems that my application is stopped for 0.17ms. I would like to understand why this happens and what I can do about it. I understand Linux is not a real time operating system. I'm just trying to understand what is going on, so I can make the best use of what Linux provides.
I found someone else's software for measuring this problem - https://github.com/nokia/clocktick_jumps. Its results are consistent with my own.
To answer the "tell us what specific problem you are trying to solve" question - I work on high-speed networking applications using DPDK. About 60 million packets arrive per second. I need to decide what size to make the RX buffers and have reasons that the number I pick is sensible. The answer to this question is one part of that puzzle.
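As a rough sanity check: at 60 million packets per second, a 0.17 ms stall means about 60,000,000 x 0.00017 ≈ 10,200 packets arrive while the process is stopped, so the RX ring needs at least that much headroom to absorb one such pause.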
My code looks like this:
// Build with gcc -O2 -Wall
#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>

int main() {
    // Bad way to learn frequency of cycle counter.
    unsigned long long t1 = __rdtsc();
    usleep(1000000);
    double millisecs_per_tick = 1e3 / (double)(__rdtsc() - t1);

    // Loop forever. Print message if any iteration takes unusually long.
    t1 = __rdtsc();
    while (1) {
        unsigned long long t2 = __rdtsc();
        double delta = t2 - t1;
        delta *= millisecs_per_tick;
        if (delta > 0.1) {
            printf("%4.2f - Delay of %.2f ms.\n", (double)t2 * millisecs_per_tick, delta);
        }
        t1 = t2;
    }
    return 0;
}
I'm running on Ubuntu 16.04, amd64. My processor is an Intel Xeon X5672 @ 3.20GHz.
I expect your system is scheduling another process to run on the same CPU, and your process is either preempted or moved to another core, with some timing penalty.
You can find the reason by digging into kernel events happening at the same time. For example systemtap, or perf can give you some insight. I'd start with the scheduler events to eliminate that one first: https://github.com/jav/systemtap/blob/master/tapset/scheduler.stp
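Before reaching for systemtap, a cheap first check from inside the program is to ask the kernel how many times it was involuntarily switched out (a sketch, not the poster's code; getrusage() and ru_nivcsw are standard on Linux, and the gap threshold is arbitrary):
/* Sketch: correlate the observed gaps with involuntary context switches
   reported by the kernel. If ru_nivcsw ticks up once per gap, the 0.17 ms
   pauses are scheduler preemptions. */
#include <stdio.h>
#include <sys/resource.h>
#include <x86intrin.h>

int main(void) {
    struct rusage ru;
    unsigned long long t1 = __rdtsc();
    while (1) {
        unsigned long long t2 = __rdtsc();
        if (t2 - t1 > 500000) {   /* arbitrary "big gap" threshold in TSC ticks */
            getrusage(RUSAGE_SELF, &ru);
            printf("gap of %llu ticks, involuntary context switches so far: %ld\n",
                   t2 - t1, ru.ru_nivcsw);
        }
        t1 = t2;
    }
    return 0;
}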

The use of rdtsc() in my program to obtain the number of clock cycles for single- and double-word operations?

In theory, the cost of a double-word addition/subtraction is taken to be 2 times that of a single-word one. Similarly, the cost ratio of single-word multiplication to addition is taken as 3. I have written the following C program using GCC on Ubuntu 14.04 LTS to check the number of clock cycles on my machine, an Intel Sandy Bridge Core i5-2410M. Most of the time the program returns 6 clock cycles for 128-bit addition, but I have taken the best case. I compiled using the command (gcc -o ow -O3 cost.c) and the result is given below:
32-bit Add: Clock cycles = 1 64-bit Add: Clock cycles = 1 64-bit Mult: Clock cycles = 2 128-bit Add: Clock cycles = 5
The program is as follows:
#define n 500
#define counter 50000

typedef uint64_t utype64;
typedef int64_t type64;
typedef __int128 type128;

__inline__ utype64 rdtsc() {
    uint32_t lo, hi;
    __asm__ __volatile__ ("xorl %%eax,%%eax \n cpuid"::: "%rax", "%rbx", "%rcx", "%rdx");
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return (utype64)hi << 32 | lo;
}

int main(){
    utype64 start, end;
    type64 a[n], b[n], c[n];
    type128 d[n], e[n], f[n];
    int g[n], h[n];
    unsigned short i, j;

    srand(time(NULL));
    for(i=0;i<n;i++){ g[i]=rand(); h[i]=rand(); b[i]=(rand()+2294967295); e[i]=(type128)(rand()+2294967295)*(rand()+2294967295);}
    for(j=0;j<counter;j++){
        start=rdtsc();
        for(i=0;i<n;i++){ a[i]=(type64)g[i]+h[i]; }
        end=rdtsc();
        if((j+1)%5000 == 0)
            printf("%lu-bit Add: Clock cycles = %lu \t", sizeof(g[0])*8, (end-start)/n);
        start=rdtsc();
        for(i=0;i<n;i++){ c[i]=a[i]+b[i]; }
        end=rdtsc();
        if((j+1)%5000 == 0)
            printf("%lu-bit Add: Clock cycles = %lu \t", sizeof(a[0])*8, (end-start)/n);
        start=rdtsc();
        for(i=0;i<n;i++){ d[i]=(type128)c[i]*b[i]; }
        end=rdtsc();
        if((j+1)%5000 == 0)
            printf("%lu-bit Mult: Clock cycles = %lu \t", sizeof(c[0])*8, (end-start)/n);
        start=rdtsc();
        for(i=0;i<n;i++){ f[i]=d[i]+e[i]; }
        end=rdtsc();
        if((j+1)%5000 == 0){
            printf("%lu-bit Add: Clock cycles = %lu \n", sizeof(d[0])*8, (end-start)/n);
            printf("f[%hu]= %ld %ld \n\n", i-7, (type64)(f[i-7]>>64), (type64)(f[i-7]));}
    }
    return 0;
}
There are two things in the result that bother me.
1) Can the number of clock cycles for (64-bit) multiplication really be 2?
2) Why is the number of clock cycles for double-word addition more than 2 times that of single-word addition?
I am mainly concerned with case (2). Is it because of my program logic, or is it due to GCC compiler optimization?
In theory we know that the double-word addition/subtraction takes 2 times of a single-word.
No, we don't.
Similarly, the cost ratio of single-word multiplication to addition is taken as 3 because of fast integer multiplier of CPU.
No, it isn't.
You're not measuring instructions; you're measuring statements in your program, which may or may not have any relationship with the instructions your compiler will emit. My compiler, for example, after fixing your code so that it compiles, vectorized some of the loops, adding multiple values per instruction. The first loop itself is still 23 instructions long and is still reported as 1 cycle by your code.
Modern (as in past 25 years) CPUs don't execute one instruction at a time. They'll have multiple instructions in flight at once and can execute them out of order.
Then you have memory accesses. On your CPU there are no instructions that can take a value from memory, add it to another value from memory, and then store it in a third memory location. So there must already be multiple instructions executed. Furthermore, memory accesses cost so much more than arithmetic instructions that anything that touches memory (unless it hits the L1 cache all the time) will be dominated by the memory access time.
Furthermore, RDTSC might not even return the actual cycle count. Some CPUs have variable clock rates but still keep TSC going at the same rate regardless of how fast or slow the CPU is actually running because TSC is used by the operating system for time keeping. Others don't.
So you're not measuring what you think you're measuring and whoever told you those things was either oversimplifying vastly or hasn't seen CPU documentation in two decades.
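If the goal really is the latency of a single 64-bit add, the usual trick is a long register-carried dependency chain timed as a whole and divided by its length; that avoids memory traffic and amortizes the rdtsc overhead. A sketch (illustrative only, not the code from the question; the inline asm keeps one real add per iteration and stops the compiler from collapsing the loop):
/* Sketch: estimate 64-bit integer add latency. Each add depends on the
   previous value of x, so out-of-order execution cannot overlap them. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

int main(void) {
    const uint64_t iters = 100000000ULL;
    uint64_t x = 0;

    uint64_t start = __rdtsc();
    for (uint64_t i = 0; i < iters; i++)
        __asm__ __volatile__("add %1, %0" : "+r"(x) : "r"(i));
    uint64_t end = __rdtsc();

    printf("x = %llu, ~%.2f reference cycles per dependent add\n",
           (unsigned long long)x, (double)(end - start) / (double)iters);
    return 0;
}
Note that rdtsc counts reference cycles, so on a CPU running above or below its base clock the per-add figure will not be exactly 1.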

How can I programmatically find the CPU frequency with C

I'm trying to find out if there is anyway to get an idea of the CPU frequency of the system my C code is running on.
To clarify, I'm looking for an abstract solution (one that is not tied to a specific architecture or OS) which can give me an idea of the operating frequency of the computer my code is executing on. I don't need to be exact, but I'd like to be in the ballpark (i.e., if I have a 2.2GHz processor, I'd like to be able to tell in my program that I'm within a few hundred MHz of that).
Does anyone have an idea how to do this using standard C code?
For the sake of completeness, there is already a simple, fast, accurate, user-mode solution with a huge drawback: it works only on Intel Skylake, Kaby Lake, and newer processors. The exact requirement is CPUID level 16h support. According to the Intel Software Developer's Manual 325462 release 59, page 770:
CPUID.16h.EAX = Processor Base Frequency (in MHz);
CPUID.16h.EBX = Maximum Frequency (in MHz);
CPUID.16h.ECX = Bus (Reference) Frequency (in MHz).
Visual Studio 2015 sample code:
#include <stdio.h>
#include <intrin.h>

int main(void) {
    int cpuInfo[4] = { 0, 0, 0, 0 };
    __cpuid(cpuInfo, 0);
    if (cpuInfo[0] >= 0x16) {
        __cpuid(cpuInfo, 0x16);

        //Example 1
        //Intel Core i7-6700K Skylake-H/S Family 6 model 94 (506E3)
        //cpuInfo[0] = 0x00000FA0; //= 4000 MHz
        //cpuInfo[1] = 0x00001068; //= 4200 MHz
        //cpuInfo[2] = 0x00000064; //= 100 MHz

        //Example 2
        //Intel Core m3-6Y30 Skylake-U/Y Family 6 model 78 (406E3)
        //cpuInfo[0] = 0x000005DC; //= 1500 MHz
        //cpuInfo[1] = 0x00000898; //= 2200 MHz
        //cpuInfo[2] = 0x00000064; //= 100 MHz

        //Example 3
        //Intel Core i5-7200 Kabylake-U/Y Family 6 model 142 (806E9)
        //cpuInfo[0] = 0x00000A8C; //= 2700 MHz
        //cpuInfo[1] = 0x00000C1C; //= 3100 MHz
        //cpuInfo[2] = 0x00000064; //= 100 MHz

        printf("EAX: 0x%08x EBX: 0x%08x ECX: %08x\r\n", cpuInfo[0], cpuInfo[1], cpuInfo[2]);
        printf("Processor Base Frequency: %04d MHz\r\n", cpuInfo[0]);
        printf("Maximum Frequency: %04d MHz\r\n", cpuInfo[1]);
        printf("Bus (Reference) Frequency: %04d MHz\r\n", cpuInfo[2]);
    } else {
        printf("CPUID level 16h unsupported\r\n");
    }
    return 0;
}
It is possible to find a general solution which gets the operating frequency correctly for one thread or many threads. This does not need admin/root privileges or access to model-specific registers. I have tested this on Linux and Windows on Intel processors including Nehalem, Ivy Bridge, and Haswell, with one socket and up to four sockets (40 threads). The results all deviate less than 0.5% from the correct answers. Before I show you how to do this, let me show the results (from GCC 4.9 and MSVC2013):
Linux: E5-1620 (Ivy Bridge) @ 3.60GHz
1 thread: 3.789, 4 threads: 3.689 GHz: (3.8-3.789)/3.8 = 0.3%, (3.7-3.689)/3.7 = 0.3%
Windows: E5-1620 (Ivy Bridge) @ 3.60GHz
1 thread: 3.792, 4 threads: 3.692 GHz: (3.8-3.792)/3.8 = 0.2%, (3.7-3.692)/3.7 = 0.2%
Linux: 4xE7-4850 (Nehalem) @ 2.00GHz
1 thread: 2.390, 40 threads: 2.125 GHz: (2.4-2.390)/2.4 = 0.4%, (2.133-2.125)/2.133 = 0.4%
Linux: i5-4250U (Haswell) CPU @ 1.30GHz
1 thread: within 0.5% of 2.6 GHz, 2 threads: within 0.5% of 2.3 GHz
Windows: 2xE5-2667 v2 (Ivy Bridge) @ 3.3 GHz
1 thread: 4.000 GHz, 16 threads: 3.601 GHz: (4.0-4.0)/4.0 = 0.0%, (3.6-3.601)/3.6 = 0.0%
I got the idea for this from this link
http://randomascii.wordpress.com/2013/08/06/defective-heat-sinks-causing-garbage-gaming/
To do this, you first do what you would have done 20 years ago: you write some code with a loop where you know the latency, and time it. Here is what I used:
static int inline SpinALot(int spinCount)
{
    __m128 x = _mm_setzero_ps();
    for(int i=0; i<spinCount; i++) {
        x = _mm_add_ps(x,_mm_set1_ps(1.0f));
    }
    return _mm_cvt_ss2si(x);
}
This has a loop-carried dependency, so the CPU can't reorder it to reduce the latency; it always takes 3 clock cycles per iteration. The OS won't migrate the thread to another core because we will bind the threads.
Then you run this function on each physical core. I did this with OpenMP. The threads must be bound for this. On Linux with GCC you can use export OMP_PROC_BIND=true to bind the threads, and assuming you have ncores physical cores, also do export OMP_NUM_THREADS=ncores. If you want to programmatically bind and find the number of physical cores for Intel processors, see programatically-detect-number-of-physical-processors-cores-or-if-hyper-threading and thread-affinity-with-windows-msvc-and-openmp.
void sample_frequency(const int nsamples, const int n, float *max, int nthreads) {
    *max = 0;
    volatile int x = 0;
    double min_time = DBL_MAX;
    #pragma omp parallel reduction(+:x) num_threads(nthreads)
    {
        double dtime, min_time_private = DBL_MAX;
        for(int i=0; i<nsamples; i++) {
            #pragma omp barrier
            dtime = omp_get_wtime();
            x += SpinALot(n);
            dtime = omp_get_wtime() - dtime;
            if(dtime<min_time_private) min_time_private = dtime;
        }
        #pragma omp critical
        {
            if(min_time_private<min_time) min_time = min_time_private;
        }
    }
    *max = 3.0f*n/min_time*1E-9f;
}
Finally run the sampler in a loop and print the results
int main(void) {
    int ncores = getNumCores();
    printf("num_threads %d, num_cores %d\n", omp_get_max_threads(), ncores);
    while(1) {
        float max1, max2;
        sample_frequency(1000, 1000000, &max2, ncores);
        sample_frequency(1000, 1000000, &max1, 1);
        printf("1 thread: %.3f, %d threads: %.3f GHz\n", max1, ncores, max2);
    }
}
I have not tested this on AMD processors. I think AMD processors with modules (e.g. Bulldozer) will have to bind to each module, not each AMD "core". This could be done with export GOMP_CPU_AFFINITY with GCC. You can find a full working example at https://bitbucket.org/zboson/frequency which works on Windows and Linux on Intel processors and will correctly find the number of physical cores for Intel processors (at least since Nehalem) and bind the threads to each physical core (without using OMP_PROC_BIND, which MSVC does not have).
This method has to be modified a bit for modern processors due to different frequency scaling for SSE, AVX, and AVX512.
Here is a new table I get after modifying my method (see the code after table) with four Xeon 6142 processors (16 cores per processor).
         sums   1-thread   64-threads
SSE        1      3.7         3.3
SSE        8      3.7         3.3
AVX        1      3.7         3.3
AVX        2      3.7         3.3
AVX        4      3.6         2.9
AVX        8      3.6         2.9
AVX512     1      3.6         2.9
AVX512     2      3.6         2.9
AVX512     4      3.5         2.2
AVX512     8      3.5         2.2
These numbers agree with the frequencies in this table
https://en.wikichip.org/wiki/intel/xeon_gold/6142#Frequencies
The interesting thing is that I now need to do at least 4 parallel sums to reach the lower frequencies. The latency of addps on Skylake is 4 clock cycles. These can go to two ports (with AVX512, ports 0 and 1 fuse to count as one AVX512 port, and the other AVX512 operations go to port 5).
Here is how I did eight parallel sums.
static int inline SpinALot(int spinCount) {
    __m512 x1 = _mm512_set1_ps(1.0);
    __m512 x2 = _mm512_set1_ps(2.0);
    __m512 x3 = _mm512_set1_ps(3.0);
    __m512 x4 = _mm512_set1_ps(4.0);
    __m512 x5 = _mm512_set1_ps(5.0);
    __m512 x6 = _mm512_set1_ps(6.0);
    __m512 x7 = _mm512_set1_ps(7.0);
    __m512 x8 = _mm512_set1_ps(8.0);
    __m512 one = _mm512_set1_ps(1.0);
    for(int i=0; i<spinCount; i++) {
        x1 = _mm512_add_ps(x1,one);
        x2 = _mm512_add_ps(x2,one);
        x3 = _mm512_add_ps(x3,one);
        x4 = _mm512_add_ps(x4,one);
        x5 = _mm512_add_ps(x5,one);
        x6 = _mm512_add_ps(x6,one);
        x7 = _mm512_add_ps(x7,one);
        x8 = _mm512_add_ps(x8,one);
    }
    __m512 t1 = _mm512_add_ps(x1,x2);
    __m512 t2 = _mm512_add_ps(x3,x4);
    __m512 t3 = _mm512_add_ps(x5,x6);
    __m512 t4 = _mm512_add_ps(x7,x8);
    __m512 t6 = _mm512_add_ps(t1,t2);
    __m512 t7 = _mm512_add_ps(t3,t4);
    __m512 x  = _mm512_add_ps(t6,t7);
    return _mm_cvt_ss2si(_mm512_castps512_ps128(x));
}
How you find the CPU frequency is both architecture AND OS dependent, and there is no abstract solution.
If this were 20+ years ago, and you were using an OS with no context switching and a CPU that executed the instructions given to it in order, you could write some C code in a loop, time it, and then, based on the assembly it was compiled into, compute the number of instructions executed at runtime. This already assumes that each instruction takes 1 clock cycle, which has been a rather poor assumption ever since pipelined processors.
But any modern OS will switch between multiple processes. Even then you can attempt to time a bunch of identical for loop runs (ignoring time needed for page faults and multiple other reasons why your processor might stall) and get a median value.
And even if the previous solution works, you have multi-issue processors. With any modern processor, it's fair game to re-order your instructions, issue a bunch of them in the same clock cycle, or even split them across cores.
The CPU frequency is a hardware-related thing, so there's no general method you can apply to get it; it also depends on the OS you are using.
For example, if you are using Linux, you can either read the file /proc/cpuinfo or parse the dmesg boot log to get this value, or, if you want, you can look at how the Linux kernel handles this and customize the code to meet your needs:
https://github.com/torvalds/linux/blob/master/arch/x86/kernel/cpu/proc.c
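A minimal sketch of the /proc/cpuinfo route on x86 Linux (note that the "cpu MHz" field reports the current, possibly scaled, frequency of one core):
/* Sketch: read the current frequency of the first CPU from /proc/cpuinfo.
   Linux-only; the "cpu MHz" line is specific to x86. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[256];
    double mhz;

    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "cpu MHz : %lf", &mhz) == 1) {
            printf("cpu MHz: %.3f\n", mhz);
            break;
        }
    }
    fclose(f);
    return 0;
}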
I guess one way to get the clock frequency from software is by hard-coding knowledge from the Hardware Reference Manual (HRM) into the software. You can read the clock configuration registers from software. Assuming you know the source clock frequency, software can use the multiplier and divisor values from the clock registers and apply the appropriate formulas, as given in the HRM, to derive the clock frequency.

Calculating CPU frequency in C with RDTSC always returns 0

The following piece of code was given to us from our instructor so we could measure some algorithms performance:
#include <stdio.h>
#include <unistd.h>

static unsigned cyc_hi = 0, cyc_lo = 0;

static void access_counter(unsigned *hi, unsigned *lo) {
    asm("rdtsc; movl %%edx,%0; movl %%eax,%1"
        : "=r" (*hi), "=r" (*lo)
        : /* No input */
        : "%edx", "%eax");
}

void start_counter() {
    access_counter(&cyc_hi, &cyc_lo);
}

double get_counter() {
    unsigned ncyc_hi, ncyc_lo, hi, lo, borrow;
    double result;

    access_counter(&ncyc_hi, &ncyc_lo);
    lo = ncyc_lo - cyc_lo;
    borrow = lo > ncyc_lo;
    hi = ncyc_hi - cyc_hi - borrow;
    result = (double) hi * (1 << 30) * 4 + lo;
    return result;
}
However, I need this code to be portable to machines with different CPU frequencies. For that, I'm trying to calculate the CPU frequency of the machine where the code is being run like this:
int main(void)
{
    double c1, c2;

    start_counter();
    c1 = get_counter();
    sleep(1);
    c2 = get_counter();
    printf("CPU Frequency: %.1f MHz\n", (c2-c1)/1E6);
    printf("CPU Frequency: %.1f GHz\n", (c2-c1)/1E9);
    return 0;
}
The problem is that the result is always 0 and I can't understand why. I'm running Linux (Arch) as guest on VMware.
On a friend's machine (a MacBook) it works to some extent; I mean, the result is bigger than 0, but it varies because the CPU frequency is not fixed (we tried to fix it, but for some reason we were not able to). He has a different machine running Linux (Ubuntu) as the host, and it also reports 0. This rules out the problem being the virtual machine, which is what I thought the issue was at first.
Any ideas why this is happening and how can I fix it?
Okay, since the other answer wasn't helpful, I'll try to explain in more detail. The problem is that a modern CPU can execute instructions out of order. Your code starts out as something like:
rdtsc
push 1
call sleep
rdtsc
Modern CPUs do not necessarily execute instructions in their original order though. Despite your original order, the CPU is (mostly) free to execute that just like:
rdtsc
rdtsc
push 1
call sleep
In this case, it's clear why the difference between the two rdtscs would be (at least very close to) 0. To prevent that, you need to execute an instruction that the CPU will never rearrange to execute out of order. The most common instruction to use for that is CPUID. The other answer I linked should (if memory serves) start roughly from there, about the steps necessary to use CPUID correctly/effectively for this task.
Of course, it's possible that Tim Post was right, and you're also seeing problems because of a virtual machine. Nonetheless, as it stands right now, there's no guarantee that your code will work correctly even on real hardware.
Edit: as to why the code would work: well, first of all, the fact that instructions can be executed out of order doesn't guarantee that they will be. Second, it's possible that (at least some implementations of) sleep contain serializing instructions that prevent rdtsc from being rearranged around it, while others don't (or may contain them, but only execute them under specific (but unspecified) circumstances).
What you're left with is behavior that could change with almost any re-compilation, or even just between one run and the next. It could produce extremely accurate results dozens of times in a row, then fail for some (almost) completely unexplainable reason (e.g., something that happened in some other process entirely).
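For reference, the serialized measurement pattern alluded to above looks roughly like this (a sketch assuming x86-64 GCC/Clang and a CPU with rdtscp, loosely following Intel's benchmarking whitepaper rather than code from this thread):
/* cpuid is fully serializing; rdtscp waits for earlier instructions to
   finish before reading the TSC. */
#include <stdint.h>

static inline uint64_t tsc_begin(void) {
    uint32_t lo, hi;
    __asm__ __volatile__("cpuid\n\t"            /* barrier: drain earlier work */
                         "rdtsc\n\t"
                         "mov %%edx, %0\n\t"
                         "mov %%eax, %1\n\t"
                         : "=r"(hi), "=r"(lo)
                         :
                         : "%rax", "%rbx", "%rcx", "%rdx");
    return ((uint64_t)hi << 32) | lo;
}

static inline uint64_t tsc_end(void) {
    uint32_t lo, hi;
    __asm__ __volatile__("rdtscp\n\t"           /* read TSC after earlier work retires */
                         "mov %%edx, %0\n\t"
                         "mov %%eax, %1\n\t"
                         "cpuid\n\t"            /* barrier: keep later work out */
                         : "=r"(hi), "=r"(lo)
                         :
                         : "%rax", "%rbx", "%rcx", "%rdx");
    return ((uint64_t)hi << 32) | lo;
}
Timing sleep(1) between tsc_begin() and tsc_end() should then give a clearly nonzero difference even at high optimization levels.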
I can't say for certain what exactly is wrong with your code, but you're doing quite a bit of unnecessary work for such a simple instruction. I recommend you simplify your rdtsc code substantially. You don't need to do 64-bit math carries yourself, and you don't need to store the result of that operation as a double. You don't need to use separate outputs in your inline asm; you can tell GCC to use eax and edx.
Here is a greatly simplified version of this code:
#include <stdint.h>

uint64_t rdtsc() {
    uint64_t ret;
#if __WORDSIZE == 64
    asm ("rdtsc; shl $32, %%rdx; or %%rdx, %%rax;"
         : "=A"(ret)
         : /* no input */
         : "%edx"
    );
#else
    asm ("rdtsc"
         : "=A"(ret)
    );
#endif
    return ret;
}
Also, you should consider printing out the values you're getting so you can see whether you're getting 0s or something else.
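For example, a quick sanity check compiled together with the rdtsc() definition above (an illustrative sketch that just prints two raw readings and their difference):
/* Print raw TSC values to confirm the counter is actually advancing. */
#include <stdio.h>

int main(void) {
    uint64_t a = rdtsc();
    uint64_t b = rdtsc();
    printf("a = %llu\nb = %llu\ndelta = %llu\n",
           (unsigned long long)a, (unsigned long long)b,
           (unsigned long long)(b - a));
    return 0;
}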
As for VMware, take a look at the timekeeping spec (PDF link), as well as this thread. TSC instructions are (depending on the guest OS):
Passed directly to the real hardware (PV guest)
Count cycles while the VM is executing on the host processor (Windows / etc)
Note, in #2, the "while the VM is executing on the host processor" part. The same would go for Xen as well, if I recall correctly. In essence, you can expect the code to work as expected on a paravirtualized guest. If emulated, it's entirely unreasonable to expect hardware-like consistency.
You forgot to use volatile in your asm statement, so you're telling the compiler that the asm statement produces the same output every time, like a pure function. (volatile is only implicit for asm statements with no outputs.)
This explains why you're getting exactly zero: the compiler optimized end-start to 0 at compile time, through CSE (common-subexpression elimination).
See my answer on Get CPU cycle count? for the __rdtsc() intrinsic, and @Mysticial's answer there has working GNU C inline asm, which I'll quote here:
// prefer using the __rdtsc() intrinsic instead of inline asm at all.
uint64_t rdtsc(){
    unsigned int lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}
This works correctly and efficiently for 32 and 64-bit code.
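If you use the intrinsic as recommended, the whole measurement shrinks to something like this (a sketch; __rdtsc() is declared in x86intrin.h for GCC/Clang and intrin.h for MSVC):
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc(); use <intrin.h> on MSVC */

int main(void) {
    unsigned long long start = __rdtsc();
    /* ... code to be timed ... */
    unsigned long long end = __rdtsc();
    printf("elapsed: %llu reference cycles\n", end - start);
    return 0;
}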
Hmm, I'm not positive, but I suspect the problem may be in this line:
result = (double) hi * (1 << 30) * 4 + lo;
I'm not sure you can safely carry out such huge multiplications in an "unsigned"... isn't that often a 32-bit number? Just the fact that you couldn't safely multiply by 2^32 and had to append an extra "* 4" to the 2^30 already hints at this possibility. You might need to convert each sub-component, hi and lo, to a double (instead of doing a single conversion at the very end) and do the multiplication using the two doubles.
