How to determine how many times Time Stamp Counter (TSC) is reset - c

I need to measure time based on the Time Stamp Counter (TSC) for some reasons. To read the TSC, I am using the code below:
#include <stdio.h>
#include <inttypes.h>
inline volatile uint32_t RDTSC32() {
register uint32_t TSC asm("eax");
asm volatile (".byte 15, 49" : : : "eax", "edx");
return TSC;
}
inline volatile uint64_t RDTSC64() {
register uint64_t TSC asm("rax");
asm volatile (".byte 15, 49" : : : "rax", "rdx");
return TSC;
}
int main() {
while (1) {
printf("%" PRIu64 "\n", RDTSC64());
}
}
When I tested it, it worked fine, except for one thing: when it reaches the maximum counter value (some value higher than 4,256,448,731 in my environment), the counter value gets reset to 0 and keeps going.
In this situation, is there any way to see how many times TSC has been reset?
For example, the code below does not print a correct time difference:
#include <stdio.h>
int main() {
long long start, end;
start = RDTSC64();
// long-running work to do
end = RDTSC64();
printf("%lld \n", end - start);
}

The time stamp counter is always 64-bit; see this statement on the Wikipedia page:
The Time Stamp Counter (TSC) is a 64-bit register present on all x86 processors since the Pentium.
For some reason you're getting a truncated value with only 32 bits, which is why it wraps. The 64-bit value will need 146 years of continuous counting at 4 GHz to wrap.
Your code seems to want to use both eax and edx to hold the two 32-bit halves, as expected, but RDTSC writes the low 32 bits to EAX and the high 32 bits to EDX (zeroing the upper halves of RAX and RDX in 64-bit mode), so returning only the value in RAX gives you just the low half, which wraps at 2^32. I believe the snippet you're using is for GCC; perhaps that's no longer your compiler?
Inspect the generated assembly, and check the compiler docs for a proper intrinsic function instead. This question has some good answers, with compiler-specific assembly.
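For illustration, here is a minimal sketch of a 64-bit read for GCC/Clang that combines the EDX:EAX halves explicitly (with a suitable compiler you can simply call the __rdtsc() intrinsic from <x86intrin.h> instead):

#include <stdint.h>

/* RDTSC writes the low 32 bits of the counter to EAX and the high 32 bits
   to EDX, so both halves have to be combined into one 64-bit value. */
static inline uint64_t rdtsc64(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

With this, end - start differences no longer wrap after a few seconds of counting.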

Related

Why is nop not taking one clock cycle

I wrote some basic code to find out the number of clock cycles taken by nop. We know nop takes one clock cycle.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main(void)
{
uint32_t low1, low2, high1, high2;
uint64_t timestamp1, timestamp2;
asm volatile ("rdtsc" : "=a"(low1), "=d"(high1));
asm("nop");
asm volatile ("rdtsc" : "=a"(low2), "=d"(high2));
timestamp1 = ((uint64_t)high1 << 32) | low1;
timestamp2 = ((uint64_t)high2 << 32) | low2;
printf("Diff: %" PRIu64 "\n", timestamp2 - timestamp1);
return 0;
}
But the output is not 1.
It is sometimes 14 or 16.
May I know the reason behind this? Am I missing anything?
We know nop takes one clock cycle.
A modern CPU can be thought of as a pipeline of stages; where the front end might fetch and decode multiple instructions in parallel and put the resulting micro-ops into a buffer where they wait for their dependencies to be satisfied (before being taken by an execution unit, where multiple micro-ops can be executed at the same time by multiple execution units).
A NOP needs no execution unit - it's effectively discarded by the front end (it still has to be fetched, decoded and retired, but it never executes). It doesn't cost 1 cycle of execution.
But the output is not 1.
It probably takes 14 or 16 cycles for the instructions the compiler generates to deal with the outputs of the first rdtsc, then prepare things for the second rdtsc, then the second rdtsc itself.
Note that rdtsc probably counts the cycles of a fixed-frequency timer that has nothing to do with the CPU's current (variable) clock frequency; so 14 or 16 "time cycles" might be (e.g.) 7 or 8 CPU cycles.
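If you want to see how much of that difference is just the measurement machinery itself, here is a small sketch (assuming GCC or Clang and the __rdtsc() intrinsic from <x86intrin.h>) that times an empty interval between back-to-back reads:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    /* Nothing happens between the reads, so the differences are
       roughly the overhead of the timing code itself. */
    uint64_t t0 = __rdtsc();
    uint64_t t1 = __rdtsc();
    uint64_t t2 = __rdtsc();
    printf("empty intervals: %llu and %llu reference cycles\n",
           (unsigned long long)(t1 - t0), (unsigned long long)(t2 - t1));
    return 0;
}

Whatever this prints is a baseline you would have to subtract before attributing anything to the nop.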

Using Time stamp counter to get the time stamp

I have used the code below to get the clock cycle count of the processor:
unsigned long long rdtsc(void)
{
unsigned hi, lo;
__asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
return ( (unsigned long long)lo)|( ((unsigned long long)hi)<<32 );
}
I get some value, say 43, but what is the unit here? Is it in microseconds or nanoseconds?
I used the command below to get the frequency of my board:
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
1700000
I also used the command below to find my processor speed:
dmidecode -t processor | grep "Speed"
Max Speed: 3700 MHz
Current Speed: 3700 MHz
Now how do I use the above frequency to convert the counter values to microseconds or milliseconds?
A simple answer to the stated question, "how do I convert the TSC frequency to microseconds or milliseconds?" is: You do not. What the TSC (Time Stamp Counter) clock frequency actually is, varies depending on the hardware, and may vary during runtime on some. To measure real time, you use clock_gettime(CLOCK_REALTIME) or clock_gettime(CLOCK_MONOTONIC) in Linux.
As Peter Cordes mentioned in a comment (Aug 2018), on most current x86-64 architectures the Time Stamp Counter (accessed by the RDTSC instruction and __rdtsc() function declared in <x86intrin.h>) counts reference clock cycles, not CPU clock cycles. His answer to a similar question in C++ is valid for C also in Linux on x86-64, because the compiler provides the underlying built-in when compiling C or C++, and the rest of the answer deals with the hardware details. I recommend reading that one, too.
The rest of this answer assumes the underlying issue is microbenchmarking code, to find out how two implementations of some function compare to each other.
On x86 (Intel 32-bit) and x86-64 (AMD64, Intel and AMD 64-bit) architectures, you can use __rdtsc() from <x86intrin.h> to find out the number of TSC clock cycles elapsed. This can be used to measure and compare the number of cycles used by different implementations of some function, typically a large number of times.
Do note that there are hardware differences as to how the TSC clock is related to the CPU clock. The abovementioned more recent answer goes into some detail on that. For practical purposes in Linux, it is sufficient to use cpufreq-set to disable frequency scaling (to ensure the relationship between the CPU and TSC frequencies does not change during microbenchmarking), and optionally taskset to restrict the microbenchmark to specific CPU core(s). That ensures that the results gathered in that microbenchmark can be compared to each other.
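(If you prefer to do the pinning from inside the program rather than with taskset, a minimal Linux-specific sketch using sched_setaffinity follows; pinning to CPU 0 is only an example choice.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to CPU 0, similar in spirit to `taskset -c 0`,
   so that all TSC readings come from the same core. */
static int pin_to_cpu0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}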
(As Peter Cordes commented, we also want to add _mm_lfence() from <emmintrin.h> (included by <immintrin.h>). This ensures that the CPU does not internally reorder the RDTSC operation compared to the function to be benchmarked. You can use -DNO_LFENCE at compile time to omit those, if you want.)
Let's say you have functions void foo(void); and void bar(void); that you wish to compare:
#include <stdlib.h>
#include <x86intrin.h>
#include <stdio.h>
#ifdef NO_LFENCE
#define lfence()
#else
#include <emmintrin.h>
#define lfence() _mm_lfence()
#endif
static int cmp_ull(const void *aptr, const void *bptr)
{
const unsigned long long a = *(const unsigned long long *)aptr;
const unsigned long long b = *(const unsigned long long *)bptr;
return (a < b) ? -1 :
(a > b) ? +1 : 0;
}
unsigned long long *measure_cycles(size_t count, void (*func)())
{
unsigned long long *elapsed, started, finished;
size_t i;
elapsed = malloc((count + 2) * sizeof elapsed[0]);
if (!elapsed)
return NULL;
/* Call func() count times, measuring the TSC cycles for each call. */
for (i = 0; i < count; i++) {
/* First, let's ensure our CPU executes everything thus far. */
lfence();
/* Start timing. */
started = __rdtsc();
/* Ensure timing starts before we call the function. */
lfence();
/* Call the function. */
func();
/* Ensure everything has been executed thus far. */
lfence();
/* Stop timing. */
finished = __rdtsc();
/* Ensure we have the counter value before proceeding. */
lfence();
elapsed[i] = finished - started;
}
/* The very first call is likely the cold-cache case,
so in case that measurement might contain useful
information, we put it at the end of the array.
We also terminate the array with a zero. */
elapsed[count] = elapsed[0];
elapsed[count + 1] = 0;
/* Sort the cycle counts. */
qsort(elapsed, count, sizeof elapsed[0], cmp_ull);
/* This function returns all cycle counts, in sorted order,
although the median, elapsed[count/2], is the one
I personally use. */
return elapsed;
}
void benchmark(const size_t count)
{
unsigned long long *foo_cycles, *bar_cycles;
if (count < 1)
return;
printf("Measuring run time in Time Stamp Counter cycles:\n");
fflush(stdout);
foo_cycles = measure_cycles(count, foo);
bar_cycles = measure_cycles(count, bar);
printf("foo(): %llu cycles (median of %zu calls)\n", foo_cycles[count/2], count);
printf("bar(): %llu cycles (median of %zu calls)\n", bar_cycles[count/2], count);
free(bar_cycles);
free(foo_cycles);
}
Note that the above results are very specific to the compiler and compiler options used, and of course to the hardware it is run on. The median number of cycles can be interpreted as "the typical number of TSC cycles taken", because the measurement is not completely reliable (it may be affected by events outside the process; for example, by context switches, or by migration to another core on some CPUs). For the same reason, I don't trust the minimum, maximum, or average values.
However, the two implementations' (foo() and bar()) cycle counts above can be compared to find out how their performance compares to each other in a microbenchmark. Just remember that microbenchmark results may not extend to real work tasks, because of how complex the resource-use interactions of real tasks are. One function might be superior in all microbenchmarks, but poorer than the others in the real world, because it is only efficient when it has lots of CPU cache to use, for example.
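For completeness, a hypothetical driver for the code above might look like the following; the foo() and bar() bodies are only placeholders standing in for the two real implementations being compared:

/* Placeholder implementations; substitute the real functions under test. */
void foo(void) { volatile int sink = 0; for (int i = 0; i < 1000; i++) sink += i; }
void bar(void) { volatile int sink = 0; for (int i = 0; i < 1000; i += 2) sink += i; }

int main(void)
{
    benchmark(10000);   /* e.g. 10000 timed calls per function */
    return 0;
}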
In Linux in general, you can use the CLOCK_REALTIME clock to measure real time (wall clock time) used, in the very same manner as above. CLOCK_MONOTONIC is even better, because it is not affected by direct changes to the realtime clock the administrator might make (say, if they noticed the system clock is ahead or behind); only drift adjustments due to NTP etc. are applied. Daylight saving time (or changes to it) does not affect the measurements, using either clock. Again, the median of a number of measurements is the result I seek, because events outside the measured code itself can affect the result.
For example:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#ifdef NO_LFENCE
#define lfence()
#else
#include <emmintrin.h>
#define lfence() _mm_lfence()
#endif
static int cmp_double(const void *aptr, const void *bptr)
{
const double a = *(const double *)aptr;
const double b = *(const double *)bptr;
return (a < b) ? -1 :
(a > b) ? +1 : 0;
}
double median_seconds(const size_t count, void (*func)())
{
struct timespec started, stopped;
double *seconds, median;
size_t i;
seconds = malloc(count * sizeof seconds[0]);
if (!seconds)
return -1.0;
for (i = 0; i < count; i++) {
lfence();
clock_gettime(CLOCK_MONOTONIC, &started);
lfence();
func();
lfence();
clock_gettime(CLOCK_MONOTONIC, &stopped);
lfence();
seconds[i] = (double)(stopped.tv_sec - started.tv_sec)
+ (double)(stopped.tv_nsec - started.tv_nsec) / 1000000000.0;
}
qsort(seconds, count, sizeof seconds[0], cmp_double);
median = seconds[count / 2];
free(seconds);
return median;
}
static double realtime_precision(void)
{
struct timespec t;
if (clock_getres(CLOCK_REALTIME, &t) == 0)
return (double)t.tv_sec
+ (double)t.tv_nsec / 1000000000.0;
return 0.0;
}
void benchmark(const size_t count)
{
double median_foo, median_bar;
if (count < 1)
return;
printf("Median wall clock times over %zu calls:\n", count);
fflush(stdout);
median_foo = median_seconds(count, foo);
median_bar = median_seconds(count, bar);
printf("foo(): %.3f ns\n", median_foo * 1000000000.0);
printf("bar(): %.3f ns\n", median_bar * 1000000000.0);
printf("(Measurement unit is approximately %.3f ns)\n", 1000000000.0 * realtime_precision());
fflush(stdout);
}
In general, I personally prefer to compile the benchmarked function in a separate unit (a separate object file), and also to benchmark a do-nothing function to estimate the function call overhead (although this tends to overestimate the overhead, because some of the call overhead is latency rather than actual time taken, and some operations can overlap with those latencies in the actual functions).
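A sketch of what that do-nothing baseline might look like (the name nothing is just my placeholder):

/* A do-nothing function, ideally compiled in a separate translation unit
   so the compiler cannot inline the call or optimize it away. */
void nothing(void)
{
}

Its median, e.g. median_seconds(count, nothing), gives a rough per-call overhead that can be subtracted from the other measurements.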
It is important to remember that the above measurements should only be used as indications, because in a real world application, things like cache locality (especially on current machines, with multi-level caching, and lots of memory) hugely affect the time used by different implementations.
For example, you might compare the speeds of a quicksort and a radix sort. Depending on the size of the keys, the radix sort requires rather large extra arrays (and uses a lot of cache). If the real application the sort routine is used in does not simultaneously use a lot of other memory (and thus the sorted data is basically what is cached), then a radix sort will be faster if there is enough data (and the implementation is sane). However, if the application is multithreaded, and the other threads shuffle (copy or transfer) a lot of memory around, then the radix sort using a lot of cache will evict other data also cached; even though the radix sort function itself does not show any serious slowdown, it may slow down the other threads and therefore the overall program, because the other threads have to wait for their data to be re-cached.
This means that the only "benchmarks" you should trust are wall clock measurements made on the actual hardware, running actual work tasks with actual work data. Everything else is subject to many conditions, and is more or less suspect: an indication, yes, but not very reliable.

Number of NOP Instructions per second

I'm writing a program to determine how many NOPs per second can be run, but the number I'm getting seems extremely small.
int main()
{
struct timeval tvStart, tvDiff, tvEnd;
unsigned int i;
unsigned long numberOfRuns = 0xffffffff;
gettimeofday(&tvStart, NULL);
for(i = 0; i < (unsigned int) 0xffffffff; i++)
{
hundred(); /*Simple assembly loop that runs 100 times and returns */
}
gettimeofday(&tvEnd, NULL);
timeval_subtract(&tvDiff, &tvEnd, &tvStart);
/* Get difference in time in microseconds */
unsigned long nopTime = (tvDiff.tv_sec * 1000000L) + tvDiff.tv_usec;
printf("NOP Seconds: %lu\n", nopTime);
gettimeofday(&tvStart, NULL);
for(i = 0; i < (unsigned int) 0xffffffff; i++)
{
none(); /* Assembly function that just returns */
}
gettimeofday(&tvEnd, NULL);
timeval_subtract(&tvDiff, &tvEnd, &tvStart);
/* Get difference in time in microseconds */
unsigned long retTime = (tvDiff.tv_sec * 1000000L) + tvDiff.tv_usec;
printf("RET Seconds: %lu\n", retTime);
unsigned long avgTime = nopTime - retTime;
/* Takes number of NOP runs and divides it by the time taken
and multiplies by 1,000,000 to convert to seconds */
printf("%lu\n", ((numberOfRuns * 100) / avgTime) * 1000000);
}
The first thing I do is run an assembly loop that consists of 100 NOP instructions 0xffffffff times and store the time it took in nopTime. Then, I do the same, but instead call an assembly function that just returns.
I believe I should be getting at least 1,000,000,000 NOP instructions per second, if not more, but I'm not even close. Here's the output of my last run:
NOP Seconds: 251077086
RET Seconds: 10450449
/* Calculated number of NOPs per second */
17000000
I'm not quite used to using larger data types, so are things being truncated and I'm not realizing it? Should I be making use of doubles? It seems that when I mess around with the data types, I get different numbers, but they are also fairly small numbers.
Is my logic just wrong?
I'm not sure if you can get NOPs in C, but it may be possible using inline assembly. But even if you write the NOPs inside the for loop with inline assembly, the loop itself still generates arithmetic and branch instructions.
And if you compile without optimizations, you will even get memory loads and stores and those are slower.
Other than that, the theoretical speed of running nothing but NOP instructions on a pipelined CPU is at least the CPU frequency; a superscalar core can issue and retire several NOPs per clock cycle.
For practical purposes, if you really want to measure, you should write a loop in assembly that uses just registers, and inside the loop put as many NOP instructions as fit in a single instruction-cache block, or maybe a few blocks.
If you do this in C, compile with optimizations (gcc -O3) so the loop counter stays in registers, and also make sure that the NOPs don't get optimized away. Look at the output assembly with gcc -S.
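As a rough sketch of that idea in GNU C (the iteration count and the 16 NOPs per iteration are arbitrary illustrative choices, and __rdtsc() counts reference cycles, not necessarily core cycles):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    const uint64_t iterations = 100000000ULL;   /* illustrative count */
    uint64_t start = __rdtsc();
    for (uint64_t i = 0; i < iterations; i++) {
        /* 16 NOPs per iteration; the volatile asm keeps them from being removed. */
        __asm__ __volatile__ (
            "nop\n\tnop\n\tnop\n\tnop\n\t"
            "nop\n\tnop\n\tnop\n\tnop\n\t"
            "nop\n\tnop\n\tnop\n\tnop\n\t"
            "nop\n\tnop\n\tnop\n\tnop");
    }
    uint64_t cycles = __rdtsc() - start;
    printf("%llu NOPs in %llu reference cycles (~%.2f NOPs per cycle)\n",
           (unsigned long long)(iterations * 16),
           (unsigned long long)cycles,
           (double)(iterations * 16) / (double)cycles);
    return 0;
}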

Errors using inline assembly in C

I'm trying my hand at assembly in order to use vector operations, which I've never really used before, and I'm admittedly having a bit of trouble grasping some of the syntax.
The relevant code is below.
uint16_t asdf[4];
asdf[0] = 1;
asdf[1] = 2;
asdf[2] = 3;
asdf[3] = 4;
uint16_t other = 3;
__asm__("movq %0, %%mm0"
:
: "m" (asdf));
__asm__("pcmpeqw %0, %%mm0"
:
: "r" (other));
__asm__("movq %%mm0, %0" : "=m" (asdf));
printf("%u %u %u %u\n", asdf[0], asdf[1], asdf[2], asdf[3]);
In this simple example, I'm trying to do a 16-bit compare of "3" to each element in the array. I would hope that the output would be "0 0 65535 0". But it won't even assemble.
The first assembly instruction gives me the following error:
error: memory input 0 is not directly addressable
The second instruction gives me a different error:
Error: suffix or operands invalid for `pcmpeqw'
Any help would be appreciated.
You can't use registers directly in gcc asm statements and expect them to match up with anything in other asm statements -- the optimizer moves things around. Instead, you need to declare variables of the appropriate type and use constraints to force those variables into the right kind of register for the instruction(s) you are using.
The relevant constraints for MMX/SSE are x for xmm registers and y for mmx registers. For your example, you can do:
#include <stdint.h>
#include <stdio.h>
typedef union xmmreg {
uint8_t b[16];
uint16_t w[8];
uint32_t d[4];
uint64_t q[2];
} xmmreg;
int main() {
xmmreg v1, v2;
v1.w[0] = 1;
v1.w[1] = 2;
v1.w[2] = 3;
v1.w[3] = 4;
v2.w[0] = v2.w[1] = v2.w[2] = v2.w[3] = 3;
asm("pcmpeqw %1,%0" : "+x"(v1) : "x"(v2));
printf("%u %u %u %u\n", v1.w[0], v1.w[1], v1.w[2], v1.w[3]);
}
Note that you need to explicitly replicate the 3 across all the relevant elements of the second vector.
From the Intel reference manual:
PCMPEQW mm, mm/m64 Compare packed words in mm/m64 and mm for equality.
PCMPEQW xmm1, xmm2/m128 Compare packed words in xmm2/m128 and xmm1 for equality.
Your pcmpeqw uses an "r" (general-purpose) register, which is wrong; only mm registers and m64 memory operands are allowed.
The code above failed when expanding the asm(); it never even tried to assemble anything. In this case, you are trying to use the zeroth argument (%0), but you didn't give any.
Check out the GCC Inline assembler HOWTO, or read the relevant chapter of your local GCC documentation.
He's right, the optimizer is changing register contents. Switching to intrinsics and using volatile to keep things a little more in place might help.
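Following up on the intrinsics suggestion, here is a minimal sketch with SSE2 (<emmintrin.h>) that performs the same 16-bit equality compare without any inline asm; the variable names mirror the question and are only illustrative:

#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

int main(void)
{
    uint16_t asdf[8] = { 1, 2, 3, 4, 0, 0, 0, 0 };
    uint16_t result[8];

    __m128i data  = _mm_loadu_si128((const __m128i *)asdf);
    __m128i three = _mm_set1_epi16(3);            /* replicate 3 into every lane */
    __m128i eq    = _mm_cmpeq_epi16(data, three); /* 0xFFFF where equal, 0 otherwise */

    _mm_storeu_si128((__m128i *)result, eq);
    printf("%u %u %u %u\n", result[0], result[1], result[2], result[3]);
    return 0;
}

This should print 0 0 65535 0, which is the output the question was hoping for.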

Calculating CPU frequency in C with RDTSC always returns 0

The following piece of code was given to us from our instructor so we could measure some algorithms performance:
#include <stdio.h>
#include <unistd.h>
static unsigned cyc_hi = 0, cyc_lo = 0;
static void access_counter(unsigned *hi, unsigned *lo) {
asm("rdtsc; movl %%edx,%0; movl %%eax,%1"
: "=r" (*hi), "=r" (*lo)
: /* No input */
: "%edx", "%eax");
}
void start_counter() {
access_counter(&cyc_hi, &cyc_lo);
}
double get_counter() {
unsigned ncyc_hi, ncyc_lo, hi, lo, borrow;
double result;
access_counter(&ncyc_hi, &ncyc_lo);
lo = ncyc_lo - cyc_lo;
borrow = lo > ncyc_lo;
hi = ncyc_hi - cyc_hi - borrow;
result = (double) hi * (1 << 30) * 4 + lo;
return result;
}
However, I need this code to be portable to machines with different CPU frequencies. For that, I'm trying to calculate the CPU frequency of the machine where the code is being run like this:
int main(void)
{
double c1, c2;
start_counter();
c1 = get_counter();
sleep(1);
c2 = get_counter();
printf("CPU Frequency: %.1f MHz\n", (c2-c1)/1E6);
printf("CPU Frequency: %.1f GHz\n", (c2-c1)/1E9);
return 0;
}
The problem is that the result is always 0 and I can't understand why. I'm running Linux (Arch) as guest on VMware.
On a friend's machine (MacBook) it is working to some extent; I mean, the result is bigger than 0, but it's variable because the CPU frequency is not fixed (we tried to fix it but for some reason we are not able to). He has a different machine running Linux (Ubuntu) as host, and it also reports 0. This rules out the problem being on the virtual machine, which is what I thought the issue was at first.
Any ideas why this is happening and how can I fix it?
Okay, since the other answer wasn't helpful, I'll try to explain in more detail. The problem is that a modern CPU can execute instructions out of order. Your code starts out as something like:
rdtsc
push 1
call sleep
rdtsc
Modern CPUs do not necessarily execute instructions in their original order though. Despite your original order, the CPU is (mostly) free to execute that just like:
rdtsc
rdtsc
push 1
call sleep
In this case, it's clear why the difference between the two rdtscs would be (at least very close to) 0. To prevent that, you need to execute an instruction that the CPU will never rearrange to execute out of order. The most common instruction to use for that is CPUID. The other answer I linked should (if memory serves) start roughly from there, about the steps necessary to use CPUID correctly/effectively for this task.
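For reference, a sketch of that CPUID-then-RDTSC pattern in GNU C inline asm (a common fencing approach; on newer CPUs an LFENCE before RDTSC is a lighter-weight alternative):

#include <stdint.h>

/* CPUID waits for all earlier instructions to complete before RDTSC runs.
   CPUID writes EAX, EBX, ECX and EDX, so the extra registers are clobbered. */
static inline uint64_t rdtsc_serialized(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ (
        "cpuid\n\t"
        "rdtsc"
        : "=a" (lo), "=d" (hi)
        : "a" (0)
        : "ebx", "ecx");
    return ((uint64_t)hi << 32) | lo;
}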
Of course, it's possible that Tim Post was right, and you're also seeing problems because of a virtual machine. Nonetheless, as it stands right now, there's no guarantee that your code will work correctly even on real hardware.
Edit: as to why the code would work: well, first of all, the fact that instructions can be executed out of order doesn't guarantee that they will be. Second, it's possible that (at least some implementations of) sleep contain serializing instructions that prevent rdtsc from being rearranged around it, while others don't (or may contain them, but only execute them under specific (but unspecified) circumstances).
What you're left with is behavior that could change with almost any re-compilation, or even just between one run and the next. It could produce extremely accurate results dozens of times in a row, then fail for some (almost) completely unexplainable reason (e.g., something that happened in some other process entirely).
I can't say for certain what exactly is wrong with your code, but you're doing quite a bit of unnecessary work for such a simple instruction. I recommend you simplify your rdtsc code substantially. You don't need to do 64-bit math carries yourself, and you don't need to store the result of that operation as a double. You don't need to use separate outputs in your inline asm; you can tell GCC to use eax and edx.
Here is a greatly simplified version of this code:
#include <stdint.h>
uint64_t rdtsc() {
uint64_t ret;
# if __WORDSIZE == 64
asm ("rdtsc; shl $32, %%rdx; or %%rdx, %%rax;"
: "=A"(ret)
: /* no input */
: "%edx"
);
#else
asm ("rdtsc"
: "=A"(ret)
);
#endif
return ret;
}
Also, you should consider printing out the values you're getting from this, so you can see whether you're getting 0s or something else.
As for VMWare, take a look at the time keeping spec (PDF Link), as well as this thread. TSC instructions are (depending on the guest OS):
Passed directly to the real hardware (PV guest)
Count cycles while the VM is executing on the host processor (Windows / etc)
Note the qualifier in #2: only while the VM is executing on the host processor. The same phenomenon would apply to Xen as well, if I recall correctly. In essence, you can expect the code to work as expected on a paravirtualized guest. If emulated, it's entirely unreasonable to expect hardware-like consistency.
You forgot to use volatile in your asm statement, so you're telling the compiler that the asm statement produces the same output every time, like a pure function. (volatile is only implicit for asm statements with no outputs.)
This explains why you're getting exactly zero: the compiler optimized end-start to 0 at compile time, through CSE (common-subexpression elimination).
See my answer on Get CPU cycle count? for the __rdtsc() intrinsic, and #Mysticial's answer there has working GNU C inline asm, which I'll quote here:
// prefer using the __rdtsc() intrinsic instead of inline asm at all.
uint64_t rdtsc(){
unsigned int lo,hi;
__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
return ((uint64_t)hi << 32) | lo;
}
This works correctly and efficiently for 32 and 64-bit code.
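Applied to the original goal of estimating the TSC frequency, a sketch using the intrinsic might look like this (assuming GCC/Clang on x86 and <x86intrin.h>; sleep(1) is only approximately one second, so the result is a rough estimate):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    uint64_t start = __rdtsc();
    sleep(1);                        /* roughly one second of wall time */
    uint64_t end = __rdtsc();
    /* The TSC ticks at its reference frequency, which is not necessarily
       the current core clock frequency. */
    printf("TSC frequency: ~%.1f MHz\n", (double)(end - start) / 1e6);
    return 0;
}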
Hmm, I'm not positive, but I suspect the problem may be inside this line:
result = (double) hi * (1 << 30) * 4 + lo;
I'm suspicious whether you can safely carry out such huge multiplications in an "unsigned"; isn't that often a 32-bit number? Just the fact that you couldn't safely multiply by 2^32 and had to append an extra "* 4" to the 2^30 already hints at this possibility. You might need to convert each sub-component (hi and lo) to a double separately (instead of a single cast at the very end) and do the multiplication using the two doubles.
