Execution time in nanoseconds and related issues - C

I am using the following code to compute execution time in nanoseconds.
struct timespec tp;
if (clock_gettime (CLOCK_REALTIME, &tp) == 0)
return ((tp.tv_sec * 1000000000) + tp.tv_nsec);
else
return ;
Can you please tell me whether this is correct?
Let's name this function comptime_nano().
Now, I write the following code in main() to check execution times of following operations.
unsigned long int a, b, s1, s3;
a = (unsigned long int)(1) << 63;
b = (unsigned long int)(1) << 63;
btime = comptime_nano();
s1 = b >> 30;
atime = comptime_nano();
printf ("Time =%ld for %lu\n", (atime - btime), s1);
btime = comptime_nano();
s3 = a >> 1;
atime = comptime_nano();
printf ("Time =%ld for %lu\n", (atime - btime), s3);
To my surprise, the first operation takes roughly 4 times longer than the second. Also, if I change the relative ordering of these operations, the respective timings change drastically.
Please comment...

clock_gettime is not accurate enough for that kind of measurement. If you need to measure operations like that, do the operation several thousand (or several million times) in a loop before comparison. The two operations above should take the same amount of time but the second in your example code does not have the overhead of loading a, b, s1, and s3 into the processor's cache.
Also, what's going on here?
struct timespec tp;
if (clock_gettime (CLOCK_REALTIME, &tp) == 0)
return ((tp.tv_sec * 1000000000) + tp.tv_nsec);
else
return ;
The first return is illegal if the function returns void, and the second is illegal if it does not return void....
EDIT: 1000000000 also overflows the range of int.
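A minimal sketch of how both points could be addressed: return nanoseconds as an unsigned 64-bit value (widening before the multiply), and repeat the shift enough times to be measurable. The loop count of one million is just an assumed example; link with -lrt on older glibc.
#include <stdio.h>
#include <time.h>

/* Widen tv_sec before multiplying so the conversion to nanoseconds
   cannot overflow int, and return 0 on failure instead of a bare "return;". */
static unsigned long long comptime_nano(void)
{
    struct timespec tp;
    if (clock_gettime(CLOCK_REALTIME, &tp) != 0)
        return 0;
    return (unsigned long long)tp.tv_sec * 1000000000ULL
         + (unsigned long long)tp.tv_nsec;
}

int main(void)
{
    volatile unsigned long long s = 0;  /* volatile keeps the loop from being optimized away */
    unsigned long long b = 1ULL << 63;
    enum { N = 1000000 };

    unsigned long long btime = comptime_nano();
    for (int i = 0; i < N; i++)
        s = b >> 30;
    unsigned long long atime = comptime_nano();

    printf("~%.2f ns per shift (s = %llu)\n", (double)(atime - btime) / N, s);
    return 0;
}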

If your resolution isn't good enough, and you are running on an Intel PC, try using the time-stamp counter (RDTSC). I found this code for using it on Ubuntu:
#include <sys/time.h>
#include <time.h>

typedef unsigned long long ticks;

static __inline__ ticks getticks(void)
{
    unsigned a, d;
    /* cpuid serializes execution; declare the registers it clobbers */
    asm volatile("cpuid" ::: "%eax", "%ebx", "%ecx", "%edx");
    asm volatile("rdtsc" : "=a" (a), "=d" (d));
    return (((ticks)a) | (((ticks)d) << 32));
}
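A possible way to use it, assuming the getticks() definition above; note the result is in raw TSC ticks, which depend on the CPU's clock frequency, not in nanoseconds:
#include <stdio.h>

int main(void)
{
    ticks t0 = getticks();                 /* getticks() as defined above */
    for (volatile int i = 0; i < 1000000; ++i)
        ;                                  /* work being measured */
    ticks t1 = getticks();
    printf("%llu raw TSC ticks\n", (unsigned long long)(t1 - t0));
    return 0;
}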

Related

Measuring Elapsed Time Using clock_gettime(CLOCK_MONOTONIC)

I have to elapse the measuring time during multiple threads. I must get an output like this:
Starting Time | Thread Number
00000000000 | 1
00000000100 | 2
00000000200 | 3
At first I used gettimeofday, but I saw some negative numbers; after a little research I learned that gettimeofday is not reliable for measuring elapsed time, so I decided to use clock_gettime(CLOCK_MONOTONIC).
However, there is a problem. When I measure in seconds, I cannot measure time precisely enough. When I use nanoseconds, the end.tv_nsec field cannot exceed 9 digits (since it is a long variable); when it would need to move to a 10th digit it wraps instead, the number actually gets smaller, and the computed elapsed time comes out negative.
That is my code:
long elapsedTime;
struct timespec end;
struct timespec start2;
//gettimeofday(&start2, NULL);
clock_gettime(CLOCK_MONOTONIC,&start2);
while(c <= totalCount)
{
if(strcmp(algorithm,"FCFS") == 0)
{
printf("In SErunner count=%d \n",count);
if(count > 0)
{
printf("Count = %d \n",count);
it = deQueue();
c++;
tid = it->tid;
clock_gettime(CLOCK_MONOTONIC,&end);
usleep( 1000*(it->value));
elapsedTime = ( end.tv_sec - start2.tv_sec);
printf("Process of thread %d finished with value %d\n",it->tid,it->value);
fprintf(outputFile,"%ld %d %d\n",elapsedTime,it->value,it->tid+1);
}
}
Unfortunately, timespec does not have a microseconds field. If you can help me I will be very happy.
Write a helper function that calculates the difference between two timespecs:
int64_t difftimespec_ns(const struct timespec after, const struct timespec before)
{
return ((int64_t)after.tv_sec - (int64_t)before.tv_sec) * (int64_t)1000000000
+ ((int64_t)after.tv_nsec - (int64_t)before.tv_nsec);
}
If you want it in microseconds, just divide it by 1000, or use:
int64_t difftimespec_us(const struct timespec after, const struct timespec before)
{
return ((int64_t)after.tv_sec - (int64_t)before.tv_sec) * (int64_t)1000000
+ ((int64_t)after.tv_nsec - (int64_t)before.tv_nsec) / 1000;
}
Remember to include <inttypes.h>, so that you can use conversion "%" PRIi64 to print integers of int64_t type:
printf("%09" PRIi64 " | 5\n", difftimespec_ns(after, before));
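Putting it together, a sketch of one measurement (start2 and tid are the variables from your code; the rest is standard):
struct timespec start2, end;

clock_gettime(CLOCK_MONOTONIC, &start2);
/* ... the thread does its work ... */
clock_gettime(CLOCK_MONOTONIC, &end);

/* zero-padded elapsed nanoseconds, followed by the thread number */
printf("%011" PRIi64 " | %d\n", difftimespec_ns(end, start2), tid + 1);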
To calculate the delta (elapsed time), you need to subtract two timeval or two timespec structures, depending on which services you are using.
For timeval, there is a set of operations to manipulate struct timeval in <sys/time.h> (e.g. /usr/include/x86_64-linux-gnu/sys/time.h):
# define timersub(a, b, result) \
do { \
(result)->tv_sec = (a)->tv_sec - (b)->tv_sec; \
(result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \
if ((result)->tv_usec < 0) { \
--(result)->tv_sec; \
(result)->tv_usec += 1000000; \
} \
} while (0)
For timespec, if you don't have them installed in your header files, copy something like the macro defined in this source code:
#define timespecsub(tsp, usp, vsp) \
do { \
(vsp)->tv_sec = (tsp)->tv_sec - (usp)->tv_sec; \
(vsp)->tv_nsec = (tsp)->tv_nsec - (usp)->tv_nsec; \
if ((vsp)->tv_nsec < 0) { \
(vsp)->tv_sec--; \
(vsp)->tv_nsec += 1000000000L; \
} \
} while (0)
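For example, with the macro above (a sketch, assuming <stdio.h> and <time.h> are included):
struct timespec start, end, delta;

clock_gettime(CLOCK_MONOTONIC, &start);
/* ... work being timed ... */
clock_gettime(CLOCK_MONOTONIC, &end);

timespecsub(&end, &start, &delta);
printf("elapsed: %ld.%09ld s\n", (long)delta.tv_sec, delta.tv_nsec);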
You could convert the time to a double value using some code such as :
double
clocktime_BM (clockid_t clid)
{
struct timespec ts = { 0, 0 };
if (clock_gettime (clid, &ts))
return NAN;
return (double) ts.tv_sec + 1.0e-9 * ts.tv_nsec;
}
The returned double value is a time in seconds. On most machines, doubles are IEEE 754 floating-point numbers, and basic operations on them are fast (less than a µs each). Read floating-point-gui.de for more about them. In 2020, x86-64 laptops and servers have an HPET. Don't expect microsecond precision on time measurements (since Linux runs many processes, and they might get scheduled at arbitrary times; read a good textbook about operating systems for explanations).
(the above code is from Bismon, funded through CHARIOT; something similar appears in RefPerSys)
On Linux, be sure to read syscalls(2), clock_gettime(2), errno(3), time(7), vdso(7).
Consider studying the source code of the Linux kernel and/or of the GNU libc and/or of musl-libc. See LinuxFromScratch and OSDEV and kernelnewbies.
Be aware of the year 2038 problem on some 32-bit computers.

C: fastest way to evaluate a function on a finite set of small integer values by using a lookup table?

I am currently working on a project where I would like to optimize some numerical computation in Python by calling C.
In short, I need to compute the value of y[i] = f(x[i]) for each element in a huge array x (which typically has 10^9 entries or more). Here, x[i] is an integer between -10 and 10, and f is a function that takes x[i] and returns a double. My issue is that f takes a very long time to evaluate in a way that is numerically stable.
To speed things up, I would like to just hard-code all 2*10 + 1 possible values of f(x[i]) into a constant array such as:
double table_of_values[] = {f(-10), ...., f(10)};
And then just evaluate f using a "lookup table" approach as follows:
for (i = 0; i < N; i++) {
y[i] = table_of_values[x[i] + 11]; //instead of y[i] = f(x[i])
}
Since I am not really well-versed in writing optimized code in C, I am wondering:
Specifically, since x is really large, is it worth doing second-degree optimization when evaluating the loop (e.g. by sorting x beforehand, or by finding a smarter way to deal with the negative indices than just doing x[i] + 10 + 1)?
Say x[i] were not between -10 and 10, but between -20 and 20. In this case, I could still use the same approach, but would need to hard code the lookup table manually. Is there a way to generate the look-up table dynamically in the code so that I make use of the same approach and allow for x[i] to belong to a variable range?
It's fairly easy to generate such a table with dynamic range values.
Here's a simple, single table method:
#include <malloc.h>
#define VARIABLE_USED(_sym) \
do { \
if (1) \
break; \
if (!! _sym) \
break; \
} while (0)
double *table_of_values;
int table_bias;
// use the smallest of these that can contain the values the x array may have
#if 0
typedef int xval_t;
#endif
#if 0
typedef short xval_t;
#endif
#if 1
typedef char xval_t;
#endif
#define XLEN (1 << 9)
xval_t *x;
// fslow -- your original function
double
fslow(int i)
{
return 1; // whatever
}
// ftablegen -- generate variable table
void
ftablegen(double (*f)(int),int lo,int hi)
{
int len;
table_bias = -lo;
len = hi - lo;
len += 1;
// NOTE: you can do free(table_of_values) when no longer needed
table_of_values = malloc(sizeof(double) * len);
for (int i = lo; i <= hi; ++i)
table_of_values[i + table_bias] = f(i);
}
// fcached -- retrieve cached table data
double
fcached(int i)
{
return table_of_values[i + table_bias];
}
// fripper -- access x and table arrays
void
fripper(xval_t *x)
{
double *tptr;
int bias;
double val;
// ensure these go into registers to prevent needless extra memory fetches
tptr = table_of_values;
bias = table_bias;
for (int i = 0; i < XLEN; ++i) {
val = tptr[x[i] + bias];
// do stuff with val
VARIABLE_USED(val);
}
}
int
main(void)
{
ftablegen(fslow,-10,10);
x = malloc(sizeof(xval_t) * XLEN);
fripper(x);
return 0;
}
Here's a slightly more complex way that allows many similar tables to be generated:
#include <malloc.h>
#define VARIABLE_USED(_sym) \
do { \
if (1) \
break; \
if (!! _sym) \
break; \
} while (0)
// use the smallest of these that can contain the values the x array may have
#if 0
typedef int xval_t;
#endif
#if 1
typedef short xval_t;
#endif
#if 0
typedef char xval_t;
#endif
#define XLEN (1 << 9)
xval_t *x;
struct table {
int tbl_lo; // lowest index
int tbl_hi; // highest index
int tbl_bias; // bias for index
double *tbl_data; // cached data
};
struct table ftable1;
struct table ftable2;
double
fslow(int i)
{
return 1; // whatever
}
double
f2(int i)
{
return 2; // whatever
}
// ftablegen -- generate variable table
void
ftablegen(double (*f)(int),int lo,int hi,struct table *tbl)
{
int len;
tbl->tbl_bias = -lo;
len = hi - lo;
len += 1;
// NOTE: you can do free tbl_data when no longer needed
tbl->tbl_data = malloc(sizeof(double) * len);
for (int i = lo; i <= hi; ++i)
tbl->tbl_data[i + tbl->tbl_bias] = f(i);
}
// fcached -- retrieve cached table data
double
fcached(struct table *tbl,int i)
{
return tbl->tbl_data[i + tbl->tbl_bias];
}
// fripper -- access x and table arrays
void
fripper(xval_t *x,struct table *tbl)
{
double *tptr;
int bias;
double val;
// ensure these go into registers to prevent needless extra memory fetches
tptr = tbl->tbl_data;
bias = tbl->tbl_bias;
for (int i = 0; i < XLEN; ++i) {
val = tptr[x[i] + bias];
// do stuff with val
VARIABLE_USED(val);
}
}
int
main(void)
{
x = malloc(sizeof(xval_t) * XLEN);
// NOTE: we could use 'char' for xval_t ...
ftablegen(fslow,-37,62,&ftable1);
fripper(x,&ftable1);
// ... but, this forces us to use a 'short' for xval_t
ftablegen(f2,-99,307,&ftable2);
return 0;
}
Notes:
fcached could/should be an inline function for speed. Notice that once the table has been calculated, fcached(x[i]) is quite fast. The index offset issue you mentioned [solved by the "bias"] adds trivially little calculation time.
While x may be a large array, the cached array for f() values is fairly small (e.g. -10 to 10). Even if it were (e.g.) -100 to 100, this is still about 200 elements. This small cached array will [probably] stay in the hardware memory cache, so access will remain quite fast.
Thus, sorting x to optimize H/W cache performance of the lookup table will have little to no [measurable] effect.
The access pattern to x is independent. You'll get best performance if you access x in a linear manner (e.g. for (i = 0; i < 999999999; ++i) x[i]). If you access it in a semi-random fashion, it will put a strain on the H/W cache logic and its ability to keep the needed/wanted x values "cache hot".
Even with linear access, because x is so large, by the time you get to the end, the first elements will have been evicted from the H/W cache (e.g. most CPU caches are on the order of a few megabytes).
However, if x only has values in a limited range, changing the type from int x[...] to short x[...] or even char x[...] cuts the size by a factor of 2x [or 4x]. And, that can have a measurable improvement on the performance.
Update: I've added an fripper function to show the fastest way [that I know of] to access the table and x arrays in a loop. I've also added a typedef named xval_t to allow the x array to consume less space (i.e. will have better H/W cache performance).
UPDATE #2:
Per your comments ...
fcached was coded [mostly] to illustrate simple/single access. But, it was not used in the final example.
The exact requirements for inline have varied over the years (e.g. it used to be extern inline). Best use now: static inline. However, if using C++, it may be yet again different. There are entire pages devoted to this. The reason is how compilation works across different .c files and what happens when optimization is on or off. Also, consider using a gcc extension. So, to force inlining all the time:
__attribute__((__always_inline__)) static inline
fripper is the fastest because it avoids refetching the globals table_of_values and table_bias on each loop iteration. In fripper, the compiler's optimizer will ensure they remain in registers. See my answer: Is accessing statically or dynamically allocated memory faster? as to why.
However, I coded an fripper variant that uses fcached and the disassembled code was the same [and optimal]. So, we can disregard that ... Or, can we? Sometimes, disassembling the code is a good cross check and the only way to know for sure. Just an extra item when creating fully optimized C code. There are many options one can give to the compiler regarding code generation, so sometimes it's just trial and error.
Because benchmarking is important, I threw in my routines for timestamping (FYI, [AFAIK] the underlying clock_gettime call is the basis for python's time.clock()).
So, here's the updated version:
#include <malloc.h>
#include <time.h>
typedef long long s64;
#define SUPER_INLINE \
__attribute__((__always_inline__)) static inline
#define VARIABLE_USED(_sym) \
do { \
if (1) \
break; \
if (!! _sym) \
break; \
} while (0)
#define TVSEC 1000000000LL // nanoseconds in a second
#define TVSECF 1e9 // nanoseconds in a second
// tvget -- get high resolution time of day
// RETURNS: absolute nanoseconds
s64
tvget(void)
{
struct timespec ts;
s64 nsec;
clock_gettime(CLOCK_REALTIME,&ts);
nsec = ts.tv_sec;
nsec *= TVSEC;
nsec += ts.tv_nsec;
return nsec;
}
// tvgetf -- get high resolution time of day
// RETURNS: fractional seconds
double
tvgetf(void)
{
struct timespec ts;
double sec;
clock_gettime(CLOCK_REALTIME,&ts);
sec = ts.tv_nsec;
sec /= TVSECF;
sec += ts.tv_sec;
return sec;
}
double *table_of_values;
int table_bias;
double *dummyptr;
// use the smallest of these that can contain the values the x array may have
#if 0
typedef int xval_t;
#endif
#if 0
typedef short xval_t;
#endif
#if 1
typedef char xval_t;
#endif
#define XLEN (1 << 9)
xval_t *x;
// fslow -- your original function
double
fslow(int i)
{
return 1; // whatever
}
// ftablegen -- generate variable table
void
ftablegen(double (*f)(int),int lo,int hi)
{
int len;
table_bias = -lo;
len = hi - lo;
len += 1;
// NOTE: you can do free(table_of_values) when no longer needed
table_of_values = malloc(sizeof(double) * len);
for (int i = lo; i <= hi; ++i)
table_of_values[i + table_bias] = f(i);
}
// fcached -- retrieve cached table data
SUPER_INLINE double
fcached(int i)
{
return table_of_values[i + table_bias];
}
// fripper_fcached -- access x and table arrays
void
fripper_fcached(xval_t *x)
{
double val;
double *dptr;
dptr = dummyptr;
for (int i = 0; i < XLEN; ++i) {
val = fcached(x[i]);
// do stuff with val
dptr[i] = val;
}
}
// fripper -- access x and table arrays
void
fripper(xval_t *x)
{
double *tptr;
int bias;
double val;
double *dptr;
// ensure these go into registers to prevent needless extra memory fetches
tptr = table_of_values;
bias = table_bias;
dptr = dummyptr;
for (int i = 0; i < XLEN; ++i) {
val = tptr[x[i] + bias];
// do stuff with val
dptr[i] = val;
}
}
int
main(void)
{
ftablegen(fslow,-10,10);
x = malloc(sizeof(xval_t) * XLEN);
dummyptr = malloc(sizeof(double) * XLEN);
fripper(x);
fripper_fcached(x);
return 0;
}
You can have negative indices in your arrays. (I am not sure if this is in the specifications.) If you have the following code:
int arr[] = {1, 2 ,3, 4, 5};
int* lookupTable = arr + 3;
printf("%i", lookupTable[-2]);
it will print out 2.
This works because array subscripting in C is defined in terms of pointer arithmetic. If the pointer does not point to the beginning of the array, you can access the items before the pointer.
Keep in mind, though, that if you malloc() the memory for arr, you must pass the original pointer to free(), not lookupTable.
I really think Craig Estey is on the right track for building your table in an automatic way. I just want to add a note for looking up the table.
If you know that you will run the code on a Haswell machine (with AVX2), you should make sure your code utilises VGATHERDPD, which you can use via the _mm256_i32gather_pd intrinsic. If you do that, your table lookups will fly! (You can even detect AVX2 on the fly with cpuid(), but that's another story.)
EDIT:
Let me elaborate with some code:
#include <stdint.h>
#include <stdio.h>
#include <immintrin.h>
/* I'm not sure if you need the alignment */
double table[8] __attribute__((aligned(16)))= { 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 };
int main()
{
int32_t i[4] = { 0,2,4,6 };
__m128i index = _mm_load_si128( (__m128i*) i );
__m256d result = _mm256_i32gather_pd( table, index, 8 );
double* f = (double*)&result;
printf("%f %f %f %f\n", f[0], f[1], f[2], f[3]);
return 0;
}
Compile and run:
$ gcc --std=gnu99 -mavx2 gathertest.c -o gathertest && ./gathertest
0.100000 0.300000 0.500000 0.700000
This is fast!

Measuring time in millisecond precision

My program is going to race different sorting algorithms against each other, both in time and space. I've got space covered, but measuring time is giving me some trouble. Here is the code that runs the sorts:
void test(short* n, short len) {
short i, j, a[1024];
for(i=0; i<2; i++) { // Loop over each sort algo
memused = 0; // Initialize memory marker
for(j=0; j<len; j++) // Copy scrambled list into fresh array
a[j] = n[j]; // (Sorting algos are in-place)
// ***Point A***
switch(i) { // Pick sorting algo
case 0:
selectionSort(a, len);
case 1:
quicksort(a, len);
}
// ***Point B***
spc[i][len] = memused; // Record how much mem was used
}
}
(I removed some of the sorting algos for simplicity)
Now, I need to measure how much time the sorting algo takes. The most obvious way to do this is to record the time at point (a) and then subtract that from the time at point (b). But none of the C time functions are good enough:
time() gives me time in seconds, but the algos are faster than that, so I need something more accurate.
clock() gives me CPU ticks since the program started, but seems to round to the nearest 10,000; still not small enough
The time shell command works well enough, except that I need to run over 1,000 tests per algorithm, and I need the individual time for each one.
I have no idea what getrusage() returns, but it's also too long.
What I need is time in units significantly smaller (if possible) than the run time of the sorting functions, which is about 2 ms. So my question is: where can I get that?
gettimeofday() has microsecond resolution and is easy to use.
A pair of useful timer functions is:
static struct timeval tm1;
static inline void start()
{
gettimeofday(&tm1, NULL);
}
static inline void stop()
{
struct timeval tm2;
gettimeofday(&tm2, NULL);
unsigned long long t = 1000 * (tm2.tv_sec - tm1.tv_sec) + (tm2.tv_usec - tm1.tv_usec) / 1000;
printf("%llu ms\n", t);
}
For measuring time, use clock_gettime with CLOCK_MONOTONIC (or CLOCK_MONOTONIC_RAW if it is available). Where possible, avoid using gettimeofday. It is specifically deprecated in favor of clock_gettime, and the time returned from it is subject to adjustments from time servers, which can throw off your measurements.
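A sketch of that approach, meant to sit between Point A and Point B in the question's test() (i and len are the question's variables; <time.h> and <stdio.h> are assumed):
struct timespec t0, t1;

clock_gettime(CLOCK_MONOTONIC, &t0);      /* Point A */
/* ... run the chosen sorting algorithm ... */
clock_gettime(CLOCK_MONOTONIC, &t1);      /* Point B */

double elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
printf("algorithm %d, n=%d: %.3f ms\n", i, len, elapsed_ms);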
You can get the total user + kernel time (or choose just one) using getrusage as follows:
#include <sys/time.h>
#include <sys/resource.h>
double get_process_time() {
struct rusage usage;
if( 0 == getrusage(RUSAGE_SELF, &usage) ) {
return (double)(usage.ru_utime.tv_sec + usage.ru_stime.tv_sec) +
(double)(usage.ru_utime.tv_usec + usage.ru_stime.tv_usec) / 1.0e6;
}
return 0;
}
I elected to create a double containing fractional seconds...
double t_begin, t_end;
t_begin = get_process_time();
// Do some operation...
t_end = get_process_time();
printf( "Elapsed time: %.6f seconds\n", t_end - t_begin );
The Time Stamp Counter could be helpful here:
static unsigned long long rdtsctime() {
unsigned int eax, edx;
unsigned long long val;
__asm__ __volatile__("rdtsc":"=a"(eax), "=d"(edx));
val = edx;
val = val << 32;
val += eax;
return val;
}
Though there are some caveats to this. The timestamps for different processor cores may be different, and changing clock speeds (due to power saving features and the like) can cause erroneous results.

Time in milliseconds in C

Using the following code:
#include<stdio.h>
#include<time.h>
int main()
{
clock_t start, stop;
int i;
start = clock();
for(i=0; i<2000;i++)
{
printf("%d", (i*1)+(1^4));
}
printf("\n\n");
stop = clock();
//(double)(stop - start) / CLOCKS_PER_SEC
printf("%6.3f", start);
printf("\n\n%6.3f", stop);
return 0;
}
I get the following output:
5678910111213141516171819202122 [... output truncated: the loop prints the values 5 through 2004 with no separator ...] 20002001200220032004
2.169
2.169
1. Start and stop times are the same. Does it mean that the program hardly takes any time to complete execution?
2. If 1 is false, then at least the digits beyond the decimal point should differ, which does not happen here. Is my logic correct?
Note: I need to calculate the time taken for execution, and hence the above code.
Yes, this program has likely used less than a millisecond. Try using microsecond resolution with timeval.
e.g:
#include <sys/time.h>
struct timeval stop, start;
gettimeofday(&start, NULL);
//do stuff
gettimeofday(&stop, NULL);
printf("took %lu us\n", (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
You can then take the difference (in microseconds) as stop.tv_usec - start.tv_usec. Note that this will only work for sub-second times (as tv_usec wraps every second). For the general case, use a combination of tv_sec and tv_usec.
Edit 2016-08-19
A more appropriate approach on system with clock_gettime support would be:
struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC_RAW, &start);
//do stuff
clock_gettime(CLOCK_MONOTONIC_RAW, &end);
uint64_t delta_us = (end.tv_sec - start.tv_sec) * 1000000 + (end.tv_nsec - start.tv_nsec) / 1000;
Here is what I write to get the timestamp in milliseconds.
#include<sys/time.h>
long long timeInMilliseconds(void) {
struct timeval tv;
gettimeofday(&tv,NULL);
return (((long long)tv.tv_sec)*1000)+(tv.tv_usec/1000);
}
A couple of things might affect the results you're seeing:
You're treating clock_t as a floating-point type; I don't think it is.
You might be expecting (1^4) to do something other than compute the bitwise XOR of 1 and 4, i.e. it's 5.
Since the XOR is of constants, it's probably folded by the compiler, meaning it doesn't add a lot of work at runtime.
Since the output is buffered (it's just formatting the string and writing it to memory), it completes very quickly indeed.
You're not specifying how fast your machine is, but it's not unreasonable for this to run very quickly on modern hardware, no.
If you have it, try adding a call to sleep() between the start/stop snapshots. Note that sleep() is POSIX though, not standard C.
This code snippet can be used for displaying time in seconds,milliseconds and microseconds:
#include <sys/time.h>
struct timeval start, stop;
double secs = 0;
gettimeofday(&start, NULL);
// Do stuff here
gettimeofday(&stop, NULL);
secs = (double)(stop.tv_usec - start.tv_usec) / 1000000 + (double)(stop.tv_sec - start.tv_sec);
printf("time taken %f\n",secs);
You can use gettimeofday() together with the timedifference_msec() function below to calculate the number of milliseconds elapsed between two samples:
#include <sys/time.h>
#include <stdio.h>
float timedifference_msec(struct timeval t0, struct timeval t1)
{
return (t1.tv_sec - t0.tv_sec) * 1000.0f + (t1.tv_usec - t0.tv_usec) / 1000.0f;
}
int main(void)
{
struct timeval t0;
struct timeval t1;
float elapsed;
gettimeofday(&t0, 0);
/* ... YOUR CODE HERE ... */
gettimeofday(&t1, 0);
elapsed = timedifference_msec(t0, t1);
printf("Code executed in %f milliseconds.\n", elapsed);
return 0;
}
Note that, when using gettimeofday(), you need to take seconds into account even if you only care about microsecond differences because tv_usec will wrap back to zero every second and you have no way of knowing beforehand at which point within a second each sample is obtained.
From man clock:
The clock() function returns an approximation of processor time used by the program.
So there is no indication that you should treat it as milliseconds. Some standards require a precise value of CLOCKS_PER_SEC, so you could rely on it, but I don't think it is advisable.
The second thing is that, as #unwind stated, it is not a float/double. man times suggests it will be an int.
Also note that:
this function will return the same value approximately every 72 minutes
And if you are unlucky you might hit the moment it is just about to start counting from zero again, thus getting a negative or a huge value (depending on whether you store the result as a signed or unsigned value).
This:
printf("\n\n%6.3f", stop);
Will most probably print garbage, as treating an arbitrary integer type as a float is undefined behaviour (and I think this is where most of your problem comes from). If you want to be sure, you can always do:
printf("\n\n%6.3f", (double) stop);
Though I would rather go for printing it as a long long int first:
printf("\n\n%lld", (long long int) stop);
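For completeness, the conversion that was commented out in the question is the portable way to turn a clock_t difference into seconds (a minimal sketch):
clock_t start = clock();
/* ... work being timed ... */
clock_t stop = clock();

printf("CPU time used: %.3f s\n", (double)(stop - start) / CLOCKS_PER_SEC);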
The standard C library provides timespec_get. It can tell time up to nanosecond precision, if the system supports it. Calling it, however, takes a bit more effort because it involves a struct. Here's a function that converts the struct to a simple 64-bit integer so you can get the time in milliseconds.
#include <stdio.h>
#include <inttypes.h>
#include <time.h>
int64_t millis()
{
struct timespec now;
timespec_get(&now, TIME_UTC);
return ((int64_t) now.tv_sec) * 1000 + ((int64_t) now.tv_nsec) / 1000000;
}
int main(void)
{
printf("Unix timestamp with millisecond precision: %" PRId64 "\n", millis());
}
Unlike clock, this function returns a Unix timestamp so it will correctly account for the time spent in blocking functions, such as sleep.
Modern processors are too fast for such a short run to register; hence clock() may return zero. In this case, the interval between the start and stop times is too small, and both values are the same after rounding.

Timer to find elapsed time in a function call in C

I want to calculate time elapsed during a function call in C, to the precision of 1 nanosecond.
Is there a timer function available in C to do it?
If yes please provide a sample code-snippet.
Pseudo code
Timer.Start()
foo();
Timer.Stop()
Display time elapsed in execution of foo()
Environment details: - using gcc 3.4 compiler on a RHEL machine
May I ask what kind of processor you're using? If you're using an x86 processor, you can look at the time stamp counter (tsc). This code snippet:
#define rdtsc(low,high) \
__asm__ __volatile__("rdtsc" : "=a" (low), "=d" (high))
will put the number of cycles the CPU has run in low and high respectively (it expects 2 longs; you can store the result in a long long int) as follows:
inline void getcycles (long long int * cycles)
{
unsigned long low;
long high;
rdtsc(low,high);
*cycles = high;
*cycles <<= 32;
*cycles |= low;
}
Note that this returns the number of cycles your CPU has performed. You'll need to get your CPU speed and then figure out how many cycles per ns in order to get the number of ns elapsed.
To do the above, I've parsed the "cpu MHz" string out of /proc/cpuinfo, and converted it to a decimal. After that, it's just a bit of math, and remember that 1MHz = 1,000,000 cycles per second, and that there are 1 billion ns / sec.
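Roughly, that conversion looks like the fragment below (the 2400.0 MHz figure is only an assumed example of what might be parsed from /proc/cpuinfo, and getcycles() is the helper defined above):
double cpu_mhz = 2400.0;                     /* assumed: parsed from /proc/cpuinfo */
double cycles_per_ns = cpu_mhz * 1e6 / 1e9;  /* 1 MHz = 1,000,000 cycles/s; 1e9 ns/s */

long long int start, end;
getcycles(&start);
foo();                                       /* function under test */
getcycles(&end);

double elapsed_ns = (double)(end - start) / cycles_per_ns;
printf("foo() took about %.0f ns\n", elapsed_ns);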
On Intel and compatible processors you can use rdtsc instruction which can be wrapped into an asm() block of C code easily. It returns the value of a built-in processor cycle counter that increments on each cycle. You gain high resolution and such timing is extremely fast.
To find how fast this increments you'll need to calibrate - call this instruction twice over a fixed time period like five seconds. If you do this on a processor that shifts frequency to lower power consumption you may have problems calibrating.
Use clock_gettime(3). For more info, type man 3 clock_gettime. That being said, nanosecond precision is rarely necessary.
Any timer functionality is going to have to be platform-specific, especially with that precision requirement.
The standard solution in POSIX systems is gettimeofday(), but it has only microsecond precision.
If this is for performance benchmarking, the standard way is to make the code under test take enough time to make the precision requirement less severe. In other words, run your test code for a whole second (or more).
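For example (a sketch; foo() stands for the function under test from the pseudocode above, and one million repetitions is just an assumed count):
#include <stdio.h>
#include <sys/time.h>

void foo(void);   /* the function under test, assumed to exist elsewhere */

void benchmark_foo(void)
{
    enum { REPS = 1000000 };
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    for (int i = 0; i < REPS; i++)
        foo();
    gettimeofday(&t1, NULL);

    double total_us = (t1.tv_sec - t0.tv_sec) * 1e6
                    + (double)(t1.tv_usec - t0.tv_usec);
    printf("average per call: %.1f ns\n", total_us * 1000.0 / REPS);
}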
There is no timer in C with guaranteed 1-nanosecond precision. You may want to look into clock() or, better yet, the POSIX gettimeofday().
We all waste our time recreating this test sample. Why not post something compile-ready? Anyway, here is mine, with results.
CLOCK_PROCESS_CPUTIME_ID resolution: 0 sec 1 nano
clock_gettime 4194304 iterations : 459.427311 msec 0.110 microsec / call
CLOCK_MONOTONIC resolution: 0 sec 1 nano
clock_gettime 4194304 iterations : 64.498347 msec 0.015 microsec / call
CLOCK_REALTIME resolution: 0 sec 1 nano
clock_gettime 4194304 iterations : 65.494828 msec 0.016 microsec / call
CLOCK_THREAD_CPUTIME_ID resolution: 0 sec 1 nano
clock_gettime 4194304 iterations : 427.133157 msec 0.102 microsec / call
rdtsc 4194304 iterations : 115.427895 msec 0.028 microsec / call
Dummy 16110479703957395943
rdtsc in milliseconds 4194304 iterations : 197.259866 msec 0.047 microsec / call
Dummy 4.84682e+08 UltraHRTimerMs 197 HRTimerMs 197.26
#include <time.h>
#include <cstdint>
#include <cstdio>
#include <string>
#include <iostream>
#include <chrono>
#include <thread>
enum { TESTRUNS = 1024*1024*4 };
class HRCounter
{
private:
timespec start, tmp;
public:
HRCounter(bool init = true)
{
if(init)
SetStart();
}
void SetStart()
{
clock_gettime(CLOCK_MONOTONIC, &start);
}
double GetElapsedMs()
{
clock_gettime(CLOCK_MONOTONIC, &tmp);
return (double)(tmp.tv_nsec - start.tv_nsec) / 1000000 + (tmp.tv_sec - start.tv_sec) * 1000;
}
};
__inline__ uint64_t rdtsc(void) {
uint32_t lo, hi;
__asm__ __volatile__ ( // serialize
"xorl %%eax,%%eax \n cpuid"
::: "%rax", "%rbx", "%rcx", "%rdx");
/* We cannot use "=A", since this would use %rax on x86_64 and return only the lower 32bits of the TSC */
__asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
return (uint64_t)hi << 32 | lo;
}
inline uint64_t GetCyclesPerMillisecondImpl()
{
uint64_t start_cyles = rdtsc();
HRCounter counter;
std::this_thread::sleep_for (std::chrono::seconds(3));
uint64_t end_cyles = rdtsc();
double elapsed_ms = counter.GetElapsedMs();
return (end_cyles - start_cyles) / elapsed_ms;
}
inline uint64_t GetCyclesPerMillisecond()
{
static uint64_t cycles_in_millisecond = GetCyclesPerMillisecondImpl();
return cycles_in_millisecond;
}
class UltraHRCounter
{
private:
uint64_t start_cyles;
public:
UltraHRCounter(bool init = true)
{
GetCyclesPerMillisecond();
if(init)
SetStart();
}
void SetStart() { start_cyles = rdtsc(); }
double GetElapsedMs()
{
uint64_t end_cyles = rdtsc();
return (end_cyles - start_cyles) / GetCyclesPerMillisecond();
}
};
int main()
{
auto Run = [](std::string const& clock_name, clockid_t clock_id)
{
HRCounter counter(false);
timespec spec;
clock_getres( clock_id, &spec );
printf("%s resolution: %ld sec %ld nano\n", clock_name.c_str(), spec.tv_sec, spec.tv_nsec );
counter.SetStart();
for ( int i = 0 ; i < TESTRUNS ; ++ i )
{
clock_gettime( clock_id, &spec );
}
double fb = counter.GetElapsedMs();
printf( "clock_gettime %d iterations : %.6f msec %.3f microsec / call\n", TESTRUNS, ( fb ), (( fb ) * 1000) / TESTRUNS );
};
Run("CLOCK_PROCESS_CPUTIME_ID",CLOCK_PROCESS_CPUTIME_ID);
Run("CLOCK_MONOTONIC",CLOCK_MONOTONIC);
Run("CLOCK_REALTIME",CLOCK_REALTIME);
Run("CLOCK_THREAD_CPUTIME_ID",CLOCK_THREAD_CPUTIME_ID);
{
HRCounter counter(false);
uint64_t dummy = 0;
counter.SetStart();
for ( int i = 0 ; i < TESTRUNS ; ++ i )
{
dummy += rdtsc();
}
double fb = counter.GetElapsedMs();
printf( "rdtsc %d iterations : %.6f msec %.3f microsec / call\n", TESTRUNS, ( fb ), (( fb ) * 1000) / TESTRUNS );
std::cout << "Dummy " << dummy << std::endl;
}
{
double dummy = 0;
UltraHRCounter ultra_hr_counter;
HRCounter counter;
for ( int i = 0 ; i < TESTRUNS ; ++ i )
{
dummy += ultra_hr_counter.GetElapsedMs();
}
double fb = counter.GetElapsedMs();
double final = ultra_hr_counter.GetElapsedMs();
printf( "rdtsc in milliseconds %d iterations : %.6f msec %.3f microsec / call\n", TESTRUNS, ( fb ), (( fb ) * 1000) / TESTRUNS );
std::cout << "Dummy " << dummy << " UltraHRTimerMs " << final << " HRTimerMs " << fb << std::endl;
}
return 0;
}
I don't know if you'll find any timers that provide resolution to a single nanosecond -- it would depend on the resolution of the system clock -- but you might want to look at http://code.google.com/p/high-resolution-timer/. They indicate they can provide resolution to the microsecond level on most Linux systems and in the nanoseconds on Sun systems.
Making benchmarks on this scale is not a good idea. You have the overhead of getting the time, at the least, which can render your results unreliable if you work in nanoseconds. You can either use your platform's system calls or boost::Date_Time on a larger scale [preferred].
Can you just run it 10^9 times and stopwatch it?
You can use standard system calls like gettimeofday, if you are certain that your process gets 100% of the CPU time. I can think of many situations in which, while you are executing foo(), other threads and processes might steal CPU time.
You are asking for something that is not possible this way. You would need HW level support to get to that level of precision and even then control the variables very carefully. What happens if you get an interrupt while running your code? What if the OS decides to run some other piece of code?
And what does your code do? Does it use RAM memory? What if your code and/or data is or is not in the cache?
In some environments you can use HW level counters for this job provided you control those variables. But how do you prevent context switches in Linux?
For instance, in Texas Instruments' DSP tools (Code Composer Studio) you can profile the code very exactly because the whole debugging environment is set such that the emulator (e.g. Blackhawk) receives info about every operation run. You can also set watchpoints which are coded directly into a HW block inside the chip in some processors. This works because the memory lanes are also routed to this debugging block.
They do offer functions in their CSLs (Chip Support Library) which are what you are asking for, with the timing overhead being a few cycles. But this is only available for their processors and is completely dependent on reading the timer values from the HW registers.
