Where to find the source code of timespec_get?

The C11 standard provides the function timespec_get. If I run the example code from cppreference, either in their online compiler or on my computer, it works:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    char buff[100];
    strftime(buff, sizeof buff, "%D %T", gmtime(&ts.tv_sec));
    printf("Current time: %s.%09ld UTC\n", buff, ts.tv_nsec);
}
However, if I look at the sources of glibc here, the code is the following:
#include <time.h>

/* Set TS to calendar time based in time base BASE.  */
int
timespec_get (struct timespec *ts, int base)
{
  switch (base)
    {
    case TIME_UTC:
      /* Not supported.  */
      return 0;

    default:
      return 0;
    }

  return base;
}
stub_warning (timespec_get)
stub_warning (timespec_get)
Which... should not work...
Which leads to the question: where is the source code of timespec_get that is actually called?

The timespec_get function's implementation depends on the system the library is running on, so it appears both as a stub in time/timespec_get.c (in case no implementation is available) and as various system-dependent implementations elsewhere.
You can see the Linux implementation in sysdeps/unix/sysv/linux/timespec_get.c,
/* Set TS to calendar time based in time base BASE. */
int
timespec_get (struct timespec *ts, int base)
{
switch (base)
{
int res;
INTERNAL_SYSCALL_DECL (err);
case TIME_UTC:
res = INTERNAL_VSYSCALL (clock_gettime, err, 2, CLOCK_REALTIME, ts);
if (INTERNAL_SYSCALL_ERROR_P (res, err))
return 0;
break;
default:
return 0;
}
return base;
}
This is just a thin wrapper around a vDSO call, and the vDSO is part of the Linux kernel itself. If you are curious, look for the definition of clock_gettime there. It's unusual for a call to live in the vDSO; only a small number of syscalls are implemented this way.
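In practice, a successful timespec_get(&ts, TIME_UTC) on such a system fills ts the same way a direct clock_gettime(CLOCK_REALTIME, &ts) call would. A minimal sketch comparing the two interfaces (error handling kept to a bare minimum):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec a, b;

    /* C11 interface: returns the base (nonzero) on success, 0 on failure. */
    if (timespec_get(&a, TIME_UTC) == 0)
        return 1;

    /* POSIX interface the glibc wrapper forwards to: returns 0 on success. */
    if (clock_gettime(CLOCK_REALTIME, &b) != 0)
        return 1;

    printf("timespec_get:  %lld.%09ld\n", (long long) a.tv_sec, a.tv_nsec);
    printf("clock_gettime: %lld.%09ld\n", (long long) b.tv_sec, b.tv_nsec);
    return 0;
}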
Here is the x86 implementation for CLOCK_REALTIME, found in arch/x86/entry/vdso/vclock_gettime.c:
/* Code size doesn't matter (vdso is 4k anyway) and this is faster. */
notrace static int __always_inline do_realtime(struct timespec *ts)
{
        unsigned long seq;
        u64 ns;
        int mode;

        do {
                seq = gtod_read_begin(gtod);
                mode = gtod->vclock_mode;
                ts->tv_sec = gtod->wall_time_sec;
                ns = gtod->wall_time_snsec;
                ns += vgetsns(&mode);
                ns >>= gtod->shift;
        } while (unlikely(gtod_read_retry(gtod, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;

        return mode;
}
Basically, there is some memory in your process which is updated by the kernel, and some registers in your CPU which track the passage of time (or something provided by your hypervisor). The memory in your process is used to translate the value of these CPU registers into the wall clock time. You have to read these values in a loop because the kernel can update them while you are reading them; the loop logic detects a torn read and tries again.
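That retry loop is a classic sequence-lock (seqlock) reader. A minimal user-space sketch of the pattern, with made-up names (seq and data stand in for the kernel's gtod fields):

#include <stdatomic.h>

/* Made-up shared state: a writer bumps `seq` to an odd value, updates
   `data`, then bumps `seq` to an even value again. */
static _Atomic unsigned seq;
static long data;

long seqlock_read(void)
{
    unsigned s;
    long value;

    do {
        /* Wait until no update is in progress (sequence number is even). */
        while ((s = atomic_load(&seq)) & 1u)
            ;
        value = data;
        /* If the sequence number changed, a writer raced us: retry. */
    } while (atomic_load(&seq) != s);

    return value;
}

The real vDSO code adds the required memory barriers and reads the TSC inside the loop; this sketch only shows the retry control flow.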

The timespec_get definition you linked to is a stub (see the stub_warning). The actual implementation will be under sysdeps for your platform. For example, here is the Linux version from sysdeps/unix/sysv/linux: https://github.com/lattera/glibc/blob/a2f34833b1042d5d8eeb263b4cf4caaea138c4ad/sysdeps/unix/sysv/linux/timespec_get.c
int timespec_get (ts, base)
     struct timespec *ts;
     int base;
{
  switch (base)
    {
      int res;
      INTERNAL_SYSCALL_DECL (err);
    case TIME_UTC:
      res = INTERNAL_GETTIME (CLOCK_REALTIME, ts);
      if (INTERNAL_SYSCALL_ERROR_P (res, err))
        return 0;
      break;

    default:
      return 0;
    }

  return base;
}

Related

How to make "long tv_nsec" and "time_t tv_sec" compatible?

I am writing a wrapper function sleep_new() for clock_nanosleep() which would make thread suspension easier for me.
// POSIX.1-2017 is what the compiler is confined to.
#define _XOPEN_SOURCE 700
#include <stdint.h>
#include <time.h>
#include <stdio.h>
#include <string.h>
// POSIX headers.
// Other headers.
#include "sleep_new.h"

void sleep_new(long value, const char unit[3]){
    // Create a timespec structure and set its members.
    // The members are added together! So to set a time of "1.5 s" we set
    // "t.tv_sec = 1" and "t.tv_nsec = 500000000".
    // Members ".tv_sec" and ".tv_nsec" each carry their own unit (seconds / nanoseconds).
    struct timespec sleep_time;
    // Set flags i.e. TIMER_ABSTIME to 0 to use relative instead of absolute time.
    int flags = 0;
    // Choose the clock, i.e. CLOCK_MONOTONIC is the "clock_id" for the clock started at system start.
    int clock_id = CLOCK_MONOTONIC;
    // Set the timespec structure's members according to the chosen unit.
    if (!strcmp(unit, "s")) {
        sleep_time.tv_sec = value;
        sleep_time.tv_nsec = 0;
    }
    else if (!strcmp(unit, "ns")){
        sleep_time.tv_sec = 0;
        sleep_time.tv_nsec = value;
    }
    else if (!strcmp(unit, "us")){
        sleep_time.tv_sec = 0;
        sleep_time.tv_nsec = value * 1000;
    }
    else if (!strcmp(unit, "ms")){
        sleep_time.tv_sec = 0;
        sleep_time.tv_nsec = value * 1000000;
    }
    else{
        puts("Unit not supported - choose between: s, ms, us, ns\n");
    }
    // Because the last argument is NULL, the remaining time is not stored
    // if the sleep is interrupted.
    clock_nanosleep(clock_id, flags, &sleep_time, NULL);
}

int main(int argc, char *argv[])
{
    // Counter.
    unsigned i;  // note: uint8_t would wrap around before reaching 256
    for(i = 0; i < 256; i++){
        // Stdout is line buffered. This is why we either have to include '\n'
        // at the end or flush it manually. So uncomment one example, A or B.
        // A
        //printf("%d\n", i);
        // B
        printf("%d, ", i);
        fflush(stdout);
        sleep_new(1000, "ms");
    }
    return 0;
}
If I call this function with sleep_new(1, "s") or sleep_new(2, "s") it works fine, because it sets sleep_time.tv_sec = value; and sleep_time.tv_nsec = 0;.
In other scenarios, e.g. sleep_new(1000, "ms"), something is wrong and the sleep is not applied. I debugged the application and the values are assigned to the timespec members just fine, but clock_nanosleep() just ignores them.
I am using the type long for the value because I read in POSIX here that the header time.h defines the timespec structure's member tv_nsec as long, and the member tv_sec as time_t, which is in turn defined in the header sys/types.h like this:
time_t shall be an integer type.
So because long can also hold int values, I expected this to work, but it doesn't. Does anyone have any suggestions?
tv_nsec holds only the nanoseconds part of the time, and 1000 * 1000000 nanoseconds is too much; that's exactly 1 second! tv_nsec must range from 0 to 999999999, otherwise clock_nanosleep() rejects the timespec with EINVAL. The proper calculation could look like:
sleep_time.tv_sec = value / 1000;
sleep_time.tv_nsec = (value % 1000) * 1000000;
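Applying the same normalization to the other units keeps tv_nsec in range for any input. A minimal reworked sketch of the unit handling (the sleep_new name and interface are taken from the question's code):

#define _XOPEN_SOURCE 700
#include <time.h>
#include <string.h>
#include <stdio.h>

void sleep_new(long value, const char *unit)
{
    struct timespec sleep_time;

    if (!strcmp(unit, "s")) {
        sleep_time.tv_sec = value;
        sleep_time.tv_nsec = 0;
    } else if (!strcmp(unit, "ms")) {
        sleep_time.tv_sec = value / 1000;
        sleep_time.tv_nsec = (value % 1000) * 1000000;
    } else if (!strcmp(unit, "us")) {
        sleep_time.tv_sec = value / 1000000;
        sleep_time.tv_nsec = (value % 1000000) * 1000;
    } else if (!strcmp(unit, "ns")) {
        sleep_time.tv_sec = value / 1000000000;
        sleep_time.tv_nsec = value % 1000000000;
    } else {
        puts("Unit not supported - choose between: s, ms, us, ns");
        return;
    }

    /* Relative sleep on the monotonic clock, as in the question. */
    clock_nanosleep(CLOCK_MONOTONIC, 0, &sleep_time, NULL);
}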

using gettimeofday() equivalents on windows

I'm trying to use 2 different equivalents for UNIX's gettimeofday() function on Windows, using Visual Studio 2013.
I took the first one from here. For the second, I'm using the _ftime64_s function, as explained here.
They work, but not as I expected. I want to get different values when printing the seconds, or at least the milliseconds, but I get the same values printed with gettimeofday() (mytime1 & mytime2) and with _ftime64_s (mytime3 & mytime4).
However, it is worth mentioning that the millisecond value does differ between these two functions (that is, the millisecond value of mytime1/mytime2 is different from that of mytime3/mytime4).
Here's my code:
#include <stdio.h>
#include <Windows.h>
#include <stdint.h>
#include <sys/timeb.h>
#include <time.h>

#define WIN32_LEAN_AND_MEAN

int gettimeofday(struct timeval * tp, struct timezone * tzp)
{
    // Note: some broken versions only have 8 trailing zero's, the correct epoch has 9 trailing zero's
    static const uint64_t EPOCH = ((uint64_t)116444736000000000ULL);

    SYSTEMTIME system_time;
    FILETIME file_time;
    uint64_t time;

    GetSystemTime(&system_time);
    SystemTimeToFileTime(&system_time, &file_time);
    time = ((uint64_t)file_time.dwLowDateTime);
    time += ((uint64_t)file_time.dwHighDateTime) << 32;

    tp->tv_sec = (long)((time - EPOCH) / 10000000L);
    tp->tv_usec = (long)(system_time.wMilliseconds * 1000);
    return 0;
}

int main()
{
    /* working with struct timeval and the gettimeofday equivalent */
    struct timeval mytime1;
    struct timeval mytime2;

    gettimeofday(&(mytime1), NULL);
    gettimeofday(&(mytime2), NULL);

    printf("Seconds: %d\n", (int)(mytime1.tv_sec));
    printf("Milliseconds: %d\n", (int)(mytime1.tv_usec));
    printf("Seconds: %d\n", (int)(mytime2.tv_sec));
    printf("Milliseconds: %d\n", (int)(mytime2.tv_usec));

    /* working with _ftime64_s */
    struct _timeb mytime3;
    struct _timeb mytime4;

    _ftime64_s(&mytime3);
    _ftime64_s(&mytime4);

    printf("Seconds: %d\n", mytime3.time);
    printf("Milliseconds: %d\n", mytime3.millitm);
    printf("Seconds: %d\n", mytime4.time);
    printf("Milliseconds: %d\n", mytime4.millitm);

    return (0);
}
I tried other format specifiers (%f, %lu) and casts ((float), (double), (long), (size_t)), but it didn't matter. Suggestions are welcome.
QueryPerformanceCounter is used for accurate timing on Windows. Usage can be as follows:
#include <windows.h>
#include <stdint.h>

uint64_t microseconds()
{
    LARGE_INTEGER fq, t;

    QueryPerformanceFrequency(&fq);
    QueryPerformanceCounter(&t);
    return (1000000 * t.QuadPart) / fq.QuadPart;
}
This does not work with any epoch as far as I know: the counter is not anchored to the wall clock, it only measures elapsed time. For that you need GetSystemTimePreciseAsFileTime, which is only available on Windows 8 and higher.
// Signature of the dynamically loaded Windows API.
typedef VOID (WINAPI T_GetSystemTimePreciseAsFileTime)(LPFILETIME);

uint64_t MyGetSystemTimePreciseAsFileTime()
{
    HMODULE lib = LoadLibraryW(L"kernel32.dll");
    if (!lib) return 0;

    FARPROC fp = GetProcAddress(lib, "GetSystemTimePreciseAsFileTime");

    ULARGE_INTEGER largeInt;
    largeInt.QuadPart = 0;

    if (fp)
    {
        T_GetSystemTimePreciseAsFileTime* pfn = (T_GetSystemTimePreciseAsFileTime*)fp;
        FILETIME fileTime = { 0 };

        pfn(&fileTime);
        largeInt.HighPart = fileTime.dwHighDateTime;
        largeInt.LowPart = fileTime.dwLowDateTime;
    }

    FreeLibrary(lib);
    return largeInt.QuadPart;
}
int main()
{
    uint64_t t1 = microseconds();
    uint64_t t2 = microseconds();

    printf("t1: %llu\n", t1);
    printf("t2: %llu\n", t2);
    return (0);
}
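If you need that FILETIME value as a Unix-epoch timestamp, the EPOCH constant from the gettimeofday() port in the question applies here too; a small sketch (unix_epoch_microseconds is a made-up helper name):

// Convert the 100 ns FILETIME units returned above into microseconds
// since the Unix epoch (1970-01-01).
uint64_t unix_epoch_microseconds(void)
{
    // Offset between 1601-01-01 (FILETIME epoch) and 1970-01-01, in 100 ns units.
    static const uint64_t EPOCH = 116444736000000000ULL;
    uint64_t ft = MyGetSystemTimePreciseAsFileTime(); // 100 ns units since 1601
    if (ft == 0) return 0;                            // API unavailable
    return (ft - EPOCH) / 10;                         // 100 ns -> microseconds
}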

completely remove function call at runtime in C

Is it possible to completely remove a function call from C code at runtime and insert it back when needed?
I'm not sure whether an ELF binary can be modified at run time so that no CPU cycles are wasted when the function is not used.
I don't want to place an 'if' check before the function call to avoid calling the function.
For example, if the global flag g_flg == 1, then func1 should look like below:

void func1(int x)
{
    /* some processing */
    func2(y);
    /* some processing */
}

and if g_flg == 0, then func1 should look like below:

void func1(int x)
{
    /* some processing */
    /* some processing */
}
Don't optimize something that doesn't need it. Have you tried measuring the potential performance improvement?
Try setting g_flg to 1 and execute this:
if (g_flg == 1) {func2(y);}
Then try executing this:
func2(y);
Both 1 million times (or whatever number of times you can run it in a reasonable time). I'm quite sure you'll notice there is virtually no difference between the two.
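For reference, a self-contained sketch of that comparison using clock() from time.h (func2 and g_flg here are dummy stand-ins for the question's names):

#include <stdio.h>
#include <time.h>

static volatile int g_flg = 1;      /* volatile so the branch is not folded away */
static volatile long sink;          /* keeps the call from being optimized out */

static void func2(int y) { sink += y; }

int main(void)
{
    clock_t begin, end;
    long i;

    begin = clock();
    for (i = 0; i < 1000000L; i++)
        if (g_flg == 1) func2(1);   /* guarded call */
    end = clock();
    printf("guarded:   %f s\n", (double)(end - begin) / CLOCKS_PER_SEC);

    begin = clock();
    for (i = 0; i < 1000000L; i++)
        func2(1);                    /* direct call */
    end = clock();
    printf("unguarded: %f s\n", (double)(end - begin) / CLOCKS_PER_SEC);

    return 0;
}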
Plus, apart from that, I think what you want to do is impossible, because ELF is a binary (compiled) format.
What you could probably get away with doing instead would be something like this:
struct Something;
typedef struct Something Something;

int myFunction(Something * me, int i)
{
    // do a bunch of stuff
    return 42; // obviously the answer
}

int myFunctionDoNothing(Something * dummy1, int dummy2)
{
    return 0;
}

int (*function)(Something *, int) = myFunctionDoNothing;

// snip to actual use of function
int i;

function = myFunctionDoNothing;
for (i = 0; i < 100000; ++i) function(NULL, 5 * i); // does nothing

function = myFunction;
for (i = 0; i < 100000; ++i) function(NULL, 5 * i); // does something
WARNING
This might be a premature optimization. Depending on how your compiler treats this and how your CPU handles branching, you might actually lose performance this way as opposed to the naive way (stopping it in the function with a flag).
On most desktop and server architectures branching is faster than indirect calls, since they do branch prediction and/or speculative execution. I have never heard of an architecture where indirect call is faster than a single branch. (Jump tables, for switch() statements, have more than one branch, and are therefore a different thing altogether.)
Consider the following microbenchmark I threw together. test.c:
/* test.c */
volatile long test_calls = 0L;
volatile long test_sum = 0L;

void test(long counter)
{
    test_calls++;
    test_sum += counter;
}
work.c:
/* work.c */
void test(long counter);

/* Work function, to be measured */
void test_work(long counter, int flag)
{
    if (flag)
        test(counter);
}

/* Dummy function, to measure call overhead */
void test_none(long counter __attribute__((unused)), int flag __attribute__((unused)))
{
    return;
}
and harness.c:
#define _POSIX_C_SOURCE 200809L
#include <unistd.h>
#include <stdlib.h>
#include <time.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* From test.c */
extern volatile long test_calls;
extern volatile long test_sum;

/* Dummy function, to measure call overhead */
void test_none(long counter, int flag);

/* Work function, to be measured */
void test_work(long counter, int flag);

/* Timing harness -- GCC x86; modify for other architectures */
struct timing {
    struct timespec wall_start;
    struct timespec wall_stop;
    uint64_t cpu_start;
    uint64_t cpu_stop;
};

static inline void start_timing(struct timing *const mark)
{
    clock_gettime(CLOCK_REALTIME, &(mark->wall_start));
    mark->cpu_start = __builtin_ia32_rdtsc();
}

static inline void stop_timing(struct timing *const mark)
{
    mark->cpu_stop = __builtin_ia32_rdtsc();
    clock_gettime(CLOCK_REALTIME, &(mark->wall_stop));
}

static inline double cpu_timing(const struct timing *const mark)
{
    return (double)(mark->cpu_stop - mark->cpu_start); /* Cycles */
}

static inline double wall_timing(const struct timing *const mark)
{
    return (double)(mark->wall_stop.tv_sec - mark->wall_start.tv_sec)
         + (double)(mark->wall_stop.tv_nsec - mark->wall_start.tv_nsec) / 1000000000.0;
}

static int cmpdouble(const void *aptr, const void *bptr)
{
    const double a = *(const double *)aptr;
    const double b = *(const double *)bptr;

    if (a < b)
        return -1;
    else if (a > b)
        return +1;
    else
        return 0;
}

void report(double *const wall, double *const cpu, const size_t count)
{
    printf("\tInitial call: %.0f cpu cycles, %.9f seconds real time\n", cpu[0], wall[0]);

    qsort(wall, count, sizeof (double), cmpdouble);
    qsort(cpu, count, sizeof (double), cmpdouble);

    printf("\tMinimum: %.0f cpu cycles, %.9f seconds real time\n", cpu[0], wall[0]);
    printf("\t5%% less than %.0f cpu cycles, %.9f seconds real time\n", cpu[count/20], wall[count/20]);
    printf("\t25%% less than %.0f cpu cycles, %.9f seconds real time\n", cpu[count/4], wall[count/4]);
    printf("\tMedian: %.0f cpu cycles, %.9f seconds real time\n", cpu[count/2], wall[count/2]);
    printf("\t75%% less than %.0f cpu cycles, %.9f seconds real time\n", cpu[count-count/4-1], wall[count-count/4-1]);
    printf("\t95%% less than %.0f cpu cycles, %.9f seconds real time\n", cpu[count-count/20-1], wall[count-count/20-1]);
    printf("\tMaximum: %.0f cpu cycles, %.9f seconds real time\n", cpu[count-1], wall[count-1]);
}

int main(int argc, char *argv[])
{
    struct timing measurement;
    double *wall_seconds = NULL;
    double *cpu_cycles = NULL;
    unsigned long count = 0UL;
    unsigned long i;
    int flag;
    char dummy;

    if (argc != 3 || !strcmp(argv[1], "-h") || !strcmp(argv[1], "--help")) {
        fprintf(stderr, "\n");
        fprintf(stderr, "Usage: %s COUNT FLAG\n", argv[0]);
        fprintf(stderr, "\n");
        return 1;
    }

    if (sscanf(argv[1], " %lu %c", &count, &dummy) != 1) {
        fprintf(stderr, "%s: Invalid COUNT.\n", argv[1]);
        return 1;
    }
    if (count < 1UL) {
        fprintf(stderr, "%s: COUNT is too small.\n", argv[1]);
        return 1;
    }
    if (!(unsigned long)(count + 1UL)) {
        fprintf(stderr, "%s: COUNT is too large.\n", argv[1]);
        return 1;
    }
    if (sscanf(argv[2], " %d %c", &flag, &dummy) != 1) {
        fprintf(stderr, "%s: Invalid FLAG.\n", argv[2]);
        return 1;
    }

    wall_seconds = malloc(sizeof (double) * (size_t)count);
    cpu_cycles = malloc(sizeof (double) * (size_t)count);
    if (!wall_seconds || !cpu_cycles) {
        free(cpu_cycles);
        free(wall_seconds);
        fprintf(stderr, "Cannot allocate enough memory. Try smaller COUNT.\n");
        return 1;
    }

    printf("Call and measurement overhead:\n");
    fflush(stdout);
    for (i = 0UL; i < count; i++) {
        start_timing(&measurement);
        test_none(i, flag);
        stop_timing(&measurement);
        wall_seconds[i] = wall_timing(&measurement);
        cpu_cycles[i] = cpu_timing(&measurement);
    }
    report(wall_seconds, cpu_cycles, (size_t)count);
    printf("\n");

    printf("Measuring FLAG==0 calls: ");
    fflush(stdout);
    test_calls = 0L;
    test_sum = 0L;
    for (i = 0UL; i < count; i++) {
        start_timing(&measurement);
        test_work(i, 0);
        stop_timing(&measurement);
        wall_seconds[i] = wall_timing(&measurement);
        cpu_cycles[i] = cpu_timing(&measurement);
    }
    printf("%ld calls, sum %ld.\n", test_calls, test_sum);
    report(wall_seconds, cpu_cycles, (size_t)count);
    printf("\n");

    printf("Measuring FLAG==%d calls:", flag);
    fflush(stdout);
    test_calls = 0L;
    test_sum = 0L;
    for (i = 0UL; i < count; i++) {
        start_timing(&measurement);
        test_work(i, flag);
        stop_timing(&measurement);
        wall_seconds[i] = wall_timing(&measurement);
        cpu_cycles[i] = cpu_timing(&measurement);
    }
    printf("%ld calls, sum %ld.\n", test_calls, test_sum);
    report(wall_seconds, cpu_cycles, (size_t)count);
    printf("\n");

    printf("Measuring alternating FLAG calls: ");
    fflush(stdout);
    test_calls = 0L;
    test_sum = 0L;
    for (i = 0UL; i < count; i++) {
        start_timing(&measurement);
        test_work(i, i & 1);
        stop_timing(&measurement);
        wall_seconds[i] = wall_timing(&measurement);
        cpu_cycles[i] = cpu_timing(&measurement);
    }
    printf("%ld calls, sum %ld.\n", test_calls, test_sum);
    report(wall_seconds, cpu_cycles, (size_t)count);
    printf("\n");

    free(cpu_cycles);
    free(wall_seconds);
    return 0;
}
Put the three files in an empty directory, then compile and build ./call-test:
rm -f *.o
gcc -W -Wall -O3 -fomit-frame-pointer -c harness.c
gcc -W -Wall -O3 -fomit-frame-pointer -c work.c
gcc -W -Wall -O3 -fomit-frame-pointer -c test.c
gcc harness.o work.o test.o -lrt -o call-test
On AMD Athlon II X4 640, using gcc-4.6.3 (Xubuntu 10.04), running
./call-test 1000000 1
tells me that the overhead is just 2 clock cycles (< 1ns) for the test alone (branch not taken), and just 4 clock cycles (just over a nanosecond) when calling the second function which increases test_calls and adds the counter to test_sum.
When omitting all optimizations (use -O0 and leave out -fomit-frame-pointer when compiling), the test alone costs about 3 clock cycles (3 cycles if branch not taken), and about 9 cycles if the branch is taken and the work is done to update the two extra variables.
(The two extra variables let you easily see that the harness does actually do all it should do; they're just an extra check. And I wanted to have some work in the second function, so the timing differences would be easier to spot.)
The above interpretation is only valid for the case when the code is already cached; i.e. run recently. If the code is run only rarely, it won't be in cache. However, then the test overhead matters even less. Caching effects -- for example, if "nearby" code has been run (you can see this for the call overhead measurement: the other test functions' code tends to get cached too!) -- are much larger anyway. (While the test harness does produce the initial call results separately, don't put too much faith in it, since it does not try to clear any caches in any way.)
My conclusion is that adding

if (flag)
    debug_function_call();

to any normal code is perfectly fine: the overhead is literally negligible; practically irrelevant. As always, consider the overall algorithm instead. Any enhancements in the algorithm yield much bigger rewards than worrying about the code the compiler generates.
(Since I wrote the test code above at one sitting, there are likely some bugs and/or brainfarts in them. Check, and if you find any, let me know below so I can fix the code.)

How do I measure a time interval in C?

I would like to measure time in C, and I am having a tough time figuring it out. All I want is something like this:
start a timer
run a method
stop the timer
report the time taken (at least to micro accuracy)
Any help would be appreciated.
(I am compiling on Windows using MinGW)
High resolution timers that provide a resolution of 1 microsecond are system-specific, so you will have to use different methods to achieve this on different OS platforms. You may be interested in checking out the following article, which implements a cross-platform C++ timer class based on the functions described below:
Song Ho Ahn - High Resolution Timer
Windows
The Windows API provides extremely high resolution timer functions: QueryPerformanceCounter(), which returns the current elapsed ticks, and QueryPerformanceFrequency(), which returns the number of ticks per second.
Example:
#include <stdio.h>
#include <windows.h>   // for Windows APIs

int main(void)
{
    LARGE_INTEGER frequency;   // ticks per second
    LARGE_INTEGER t1, t2;      // ticks
    double elapsedTime;

    // get ticks per second
    QueryPerformanceFrequency(&frequency);

    // start timer
    QueryPerformanceCounter(&t1);

    // do something
    // ...

    // stop timer
    QueryPerformanceCounter(&t2);

    // compute and print the elapsed time in millisec
    elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
    printf("%f ms.\n", elapsedTime);
}
Linux, Unix, and Mac
For Unix- or Linux-based systems, you can use gettimeofday(). This function is declared in "sys/time.h".
Example:
#include <stdio.h>
#include <sys/time.h>   // for gettimeofday()

int main(void)
{
    struct timeval t1, t2;
    double elapsedTime;

    // start timer
    gettimeofday(&t1, NULL);

    // do something
    // ...

    // stop timer
    gettimeofday(&t2, NULL);

    // compute and print the elapsed time in millisec
    elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;      // sec to ms
    elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0;   // us to ms
    printf("%f ms.\n", elapsedTime);
}
On Linux you can use clock_gettime():
struct timespec start, end;

clock_gettime(CLOCK_REALTIME, &start);  // get initial time-stamp

// ... do stuff ... //

clock_gettime(CLOCK_REALTIME, &end);    // get final time-stamp

// subtract the time-stamps and scale to get the elapsed time in ns
double t_ns = (double)(end.tv_sec - start.tv_sec) * 1.0e9
            + (double)(end.tv_nsec - start.tv_nsec);
Here's a header file I wrote to do some simple performance profiling (using manual timers):
#ifndef __ZENTIMER_H__
#define __ZENTIMER_H__

#ifdef ENABLE_ZENTIMER

#include <stdio.h>
#ifdef WIN32
#include <windows.h>
#else
#include <sys/time.h>
#endif
#ifdef HAVE_STDINT_H
#include <stdint.h>
#elif HAVE_INTTYPES_H
#include <inttypes.h>
#else
typedef unsigned char uint8_t;
typedef unsigned long int uint32_t;
typedef unsigned long long uint64_t;
#endif

#ifdef __cplusplus
extern "C" {
#pragma }
#endif /* __cplusplus */

#define ZTIME_USEC_PER_SEC 1000000

/* ztime_t represents usec */
typedef uint64_t ztime_t;

#ifdef WIN32
static uint64_t ztimer_freq = 0;
#endif

static void
ztime (ztime_t *ztimep)
{
#ifdef WIN32
    QueryPerformanceCounter ((LARGE_INTEGER *) ztimep);
#else
    struct timeval tv;

    gettimeofday (&tv, NULL);
    *ztimep = ((uint64_t) tv.tv_sec * ZTIME_USEC_PER_SEC) + tv.tv_usec;
#endif
}

enum {
    ZTIMER_INACTIVE = 0,
    ZTIMER_ACTIVE   = (1 << 0),
    ZTIMER_PAUSED   = (1 << 1),
};

typedef struct {
    ztime_t start;
    ztime_t stop;
    int state;
} ztimer_t;

#define ZTIMER_INITIALIZER { 0, 0, 0 }

/* default timer */
static ztimer_t __ztimer = ZTIMER_INITIALIZER;

static void
ZenTimerStart (ztimer_t *ztimer)
{
    ztimer = ztimer ? ztimer : &__ztimer;

    ztimer->state = ZTIMER_ACTIVE;
    ztime (&ztimer->start);
}

static void
ZenTimerStop (ztimer_t *ztimer)
{
    ztimer = ztimer ? ztimer : &__ztimer;

    ztime (&ztimer->stop);
    ztimer->state = ZTIMER_INACTIVE;
}

static void
ZenTimerPause (ztimer_t *ztimer)
{
    ztimer = ztimer ? ztimer : &__ztimer;

    ztime (&ztimer->stop);
    ztimer->state |= ZTIMER_PAUSED;
}

static void
ZenTimerResume (ztimer_t *ztimer)
{
    ztime_t now, delta;

    ztimer = ztimer ? ztimer : &__ztimer;

    /* unpause */
    ztimer->state &= ~ZTIMER_PAUSED;

    ztime (&now);

    /* calculate time since paused */
    delta = now - ztimer->stop;

    /* adjust start time to account for time elapsed since paused */
    ztimer->start += delta;
}

static double
ZenTimerElapsed (ztimer_t *ztimer, uint64_t *usec)
{
#ifdef WIN32
    static uint64_t freq = 0;
    ztime_t delta, stop;

    if (freq == 0)
        QueryPerformanceFrequency ((LARGE_INTEGER *) &freq);
#else
#define freq ZTIME_USEC_PER_SEC
    ztime_t delta, stop;
#endif

    ztimer = ztimer ? ztimer : &__ztimer;

    if (ztimer->state != ZTIMER_ACTIVE)
        stop = ztimer->stop;
    else
        ztime (&stop);

    delta = stop - ztimer->start;

    if (usec != NULL)
        *usec = (uint64_t) (delta * ((double) ZTIME_USEC_PER_SEC / (double) freq));

    return (double) delta / (double) freq;
}

static void
ZenTimerReport (ztimer_t *ztimer, const char *oper)
{
    fprintf (stderr, "ZenTimer: %s took %.6f seconds\n", oper, ZenTimerElapsed (ztimer, NULL));
}

#ifdef __cplusplus
}
#endif /* __cplusplus */

#else /* ! ENABLE_ZENTIMER */

#define ZenTimerStart(ztimerp)
#define ZenTimerStop(ztimerp)
#define ZenTimerPause(ztimerp)
#define ZenTimerResume(ztimerp)
#define ZenTimerElapsed(ztimerp, usec)
#define ZenTimerReport(ztimerp, oper)

#endif /* ENABLE_ZENTIMER */

#endif /* __ZENTIMER_H__ */
The ztime() function is the main logic you need — it gets the current time and stores it in a 64-bit unsigned integer measured in microseconds. You can then later do simple math to find out the elapsed time.
The ZenTimer*() functions are just helper functions to take a pointer to a simple timer struct, ztimer_t, which records the start time and the end time. The ZenTimerPause()/ZenTimerResume() functions allow you to, well, pause and resume the timer in case you want to print out some debugging information that you don't want timed, for example.
You can find a copy of the original header file at http://www.gnome.org/~fejj/code/zentimer.h on the off chance that I messed up the html escaping of <'s or something. It's licensed under MIT/X11, so feel free to copy it into any project you do.
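A minimal usage sketch, assuming the header above is saved as zentimer.h and you compile with -DENABLE_ZENTIMER (the work() function is a made-up workload):

// Build with: gcc -DENABLE_ZENTIMER demo.c -o demo
#include "zentimer.h"

static void work(void)
{
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 10000000UL; i++)
        sum += i;
}

int main(void)
{
    ztimer_t timer = ZTIMER_INITIALIZER;

    ZenTimerStart(&timer);
    work();
    ZenTimerStop(&timer);

    // Prints e.g. "ZenTimer: work() took 0.012345 seconds" to stderr.
    ZenTimerReport(&timer, "work()");
    return 0;
}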
The following is a group of versatile C functions for timer management based on the gettimeofday() system call. All the timer properties are contained in a single ticktimer struct - the interval you want, the total running time since the timer initialization, a pointer to the desired callback you want to call, the number of times the callback was called. A callback function would look like this:
void your_timer_cb (struct ticktimer *t) {
    /* do your stuff here */
}
To initialize and start a timer, call ticktimer_init(your_timer, interval, TICKTIMER_RUN, your_timer_cb, 0).
In the main loop of your program call ticktimer_tick(your_timer) and it will decide whether the appropriate amount of time has passed to invoke the callback.
To stop a timer, just call ticktimer_ctl(your_timer, TICKTIMER_STOP).
ticktimer.h:
#ifndef __TICKTIMER_H
#define __TICKTIMER_H

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>

#define TICKTIMER_STOP         0x00
#define TICKTIMER_UNCOMPENSATE 0x00
#define TICKTIMER_RUN          0x01
#define TICKTIMER_COMPENSATE   0x02

struct ticktimer {
    u_int64_t tm_tick_interval;
    u_int64_t tm_last_ticked;
    u_int64_t tm_total;
    unsigned ticks_total;
    void (*tick)(struct ticktimer *);
    unsigned char flags;
    int id;
};

void ticktimer_init (struct ticktimer *, u_int64_t, unsigned char, void (*)(struct ticktimer *), int);
unsigned ticktimer_tick (struct ticktimer *);
void ticktimer_ctl (struct ticktimer *, unsigned char);
struct ticktimer *ticktimer_alloc (void);
void ticktimer_free (struct ticktimer *);
void ticktimer_tick_all (void);

#endif
ticktimer.c:
#include "ticktimer.h"
#define TIMER_COUNT 100
static struct ticktimer timers[TIMER_COUNT];
static struct timeval tm;
/*!
#brief
Initializes/sets the ticktimer struct.
#param timer
Pointer to ticktimer struct.
#param interval
Ticking interval in microseconds.
#param flags
Flag bitmask. Use TICKTIMER_RUN | TICKTIMER_COMPENSATE
to start a compensating timer; TICKTIMER_RUN to start
a normal uncompensating timer.
#param tick
Ticking callback function.
#param id
Timer ID. Useful if you want to distinguish different
timers within the same callback function.
*/
void ticktimer_init (struct ticktimer *timer, u_int64_t interval, unsigned char flags, void (*tick)(struct ticktimer *), int id) {
gettimeofday(&tm, NULL);
timer->tm_tick_interval = interval;
timer->tm_last_ticked = tm.tv_sec * 1000000 + tm.tv_usec;
timer->tm_total = 0;
timer->ticks_total = 0;
timer->tick = tick;
timer->flags = flags;
timer->id = id;
}
/*!
#brief
Checks the status of a ticktimer and performs a tick(s) if
necessary.
#param timer
Pointer to ticktimer struct.
#return
The number of times the timer was ticked.
*/
unsigned ticktimer_tick (struct ticktimer *timer) {
register typeof(timer->tm_tick_interval) now;
register typeof(timer->ticks_total) nticks, i;
if (timer->flags & TICKTIMER_RUN) {
gettimeofday(&tm, NULL);
now = tm.tv_sec * 1000000 + tm.tv_usec;
if (now >= timer->tm_last_ticked + timer->tm_tick_interval) {
timer->tm_total += now - timer->tm_last_ticked;
if (timer->flags & TICKTIMER_COMPENSATE) {
nticks = (now - timer->tm_last_ticked) / timer->tm_tick_interval;
timer->tm_last_ticked = now - ((now - timer->tm_last_ticked) % timer->tm_tick_interval);
for (i = 0; i < nticks; i++) {
timer->tick(timer);
timer->ticks_total++;
if (timer->tick == NULL) {
break;
}
}
return nticks;
} else {
timer->tm_last_ticked = now;
timer->tick(timer);
timer->ticks_total++;
return 1;
}
}
}
return 0;
}
/*!
#brief
Controls the behaviour of a ticktimer.
#param timer
Pointer to ticktimer struct.
#param flags
Flag bitmask.
*/
inline void ticktimer_ctl (struct ticktimer *timer, unsigned char flags) {
timer->flags = flags;
}
/*!
#brief
Allocates a ticktimer struct from an internal
statically allocated list.
#return
Pointer to the newly allocated ticktimer struct
or NULL when no more space is available.
*/
struct ticktimer *ticktimer_alloc (void) {
register int i;
for (i = 0; i < TIMER_COUNT; i++) {
if (timers[i].tick == NULL) {
return timers + i;
}
}
return NULL;
}
/*!
#brief
Marks a previously allocated ticktimer struct as free.
#param timer
Pointer to ticktimer struct, usually returned by
ticktimer_alloc().
*/
inline void ticktimer_free (struct ticktimer *timer) {
timer->tick = NULL;
}
/*!
#brief
Checks the status of all allocated timers from the
internal list and performs ticks where necessary.
#note
Should be called in the main loop.
*/
inline void ticktimer_tick_all (void) {
register int i;
for (i = 0; i < TIMER_COUNT; i++) {
if (timers[i].tick != NULL) {
ticktimer_tick(timers + i);
}
}
}
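A minimal usage sketch of the API described above (the callback, interval, and ID are made up for illustration):

#include <stdio.h>
#include "ticktimer.h"

static void my_cb (struct ticktimer *t) {
    printf("timer %d ticked (%u ticks so far)\n", t->id, t->ticks_total);
}

int main (void) {
    struct ticktimer *t = ticktimer_alloc();
    if (t == NULL)
        return 1;

    /* Tick every 500000 us (0.5 s), no compensation. */
    ticktimer_init(t, 500000, TICKTIMER_RUN, my_cb, 0);

    for (;;) {
        ticktimer_tick_all();   /* call this from your main loop */
        /* ... other main-loop work ... */
    }
}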
Using the time.h library, try something like this:
clock_t start_time, end_time;
double elapsed_ms;

start_time = clock();
// Do something
end_time = clock();

// Divide as floating point; integer division would truncate to whole
// seconds before the multiplication.
elapsed_ms = (double)(end_time - start_time) * 1000.0 / CLOCKS_PER_SEC;
If your Linux system supports it, clock_gettime(CLOCK_MONOTONIC) should be a high resolution timer that is unaffected by system date changes (e.g. NTP daemons).
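For completeness, the same elapsed-time pattern as the CLOCK_REALTIME snippet above, but on the monotonic clock:

// May need -D_POSIX_C_SOURCE=199309L on older systems.
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... do something ...
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ms = (end.tv_sec - start.tv_sec) * 1000.0
                      + (end.tv_nsec - start.tv_nsec) / 1000000.0;
    printf("%f ms\n", elapsed_ms);
    return 0;
}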
Great answers for GNU environments above and below...
But... what if you're not running on an OS? (or a PC for that matter, or you need to time your timer interrupts themselves?) Here's a solution that uses the x86 CPU timestamp counter directly... Not because this is good practice, or should be done, ever, when running under an OS...
Caveat: this only works on x86, with frequency scaling disabled. Under Linux, it only works on non-tickless kernels.
rdtsc.c:
#include <sys/time.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>   // for sleep()

typedef unsigned long long int64;

static __inline__ int64 getticks(void)
{
    unsigned a, d;

    asm volatile("rdtsc" : "=a" (a), "=d" (d));
    return (((int64)a) | (((int64)d) << 32));
}

int main(){
    int64 tick, tick1;
    unsigned time = 0, mt;   // mt is the divisor to give microseconds
    FILE *pf;
    int r, n = 0;
    char s[100];

    // time how long it takes to get the divisors, as a test
    tick = getticks();

    // get the divisors - todo: for max performance this can
    // output a new binary or library with these values hardcoded
    // for the relevant CPU - if you use the equivalent assembler for
    // that CPU
    pf = fopen("/proc/cpuinfo", "r");
    do {
        r = fscanf(pf, "%99s", &s[0]);
        if (r < 0) {
            n = 5; break;
        } else if (n == 0) {
            if (strcmp("MHz", s) == 0) n = 1;
        } else if (n == 1) {
            if (strcmp(":", s) == 0) n = 2;
        } else if (n == 2) {
            n = 3;
        };
    } while (n < 3);
    fclose(pf);

    s[9] = (char)0;
    strcpy(&s[4], &s[5]);   // drop the decimal point from the MHz value
    mt = atoi(s);

    printf("#define mt %u // (%s Hz) hardcode this for a CPU-specific binary ;-)\n", mt, s);

    tick1 = getticks();
    time = (unsigned)((tick1 - tick) / mt);
    printf("%u ms\n", time);

    // time the duration of sleep(1) - plus overheads ;-)
    tick = getticks();
    sleep(1);
    tick1 = getticks();
    time = (unsigned)((tick1 - tick) / mt);
    printf("%u ms\n", time);

    return 0;
}
compile and run with
$ gcc rdtsc.c -o rdtsc && ./rdtsc
It reads the divisor for your CPU from /proc/cpuinfo and shows how long it took to read that in microseconds, as well as how long it takes to execute sleep(1) in microseconds... assuming the MHz rating in /proc/cpuinfo always contains 3 decimal places :-o
