Choose between timeval and clock() to calculate elapsed time in C

I am using timeval as well as the clock() function to see the time difference between two actions in my C program. Somehow timeval seems to give me the right amount of elapsed time in milliseconds, whereas clock() gives a much smaller value.
g_time = clock();
gettimeofday(&decode_t, NULL);

/* ... after some time ... */

delay  = (float)(clock() - g_time) / (float)CLOCKS_PER_SEC;
gettimeofday(&poll_t, NULL);
delay1 = (poll_t.tv_sec - decode_t.tv_sec) * 1000 + (poll_t.tv_usec - decode_t.tv_usec) / 1000.0;
printf("\ndelay1: %f delay: %f ", delay1, delay);
usual output is:
delay1: 1577.603027 delay: 0.800000
delay1 is in milliseconds and delay is in seconds.
I am using Arch Linux 64-bit. I can't understand why this is happening.

From the clock(3) manual page:
The clock() function returns an approximation of processor time used by the program.
So the clock() function doesn't return the amount of wall-clock time passed, but the number of "ticks" of processor time your program has used. And, as you know, on a multi-tasking system your program can be paused at any time to let other programs run.
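A minimal sketch that makes the difference visible: during sleep() the process consumes almost no CPU, so clock() barely advances while gettimeofday() reports a full second of wall time.

#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timeval wall_start, wall_stop;
    clock_t cpu_start = clock();
    gettimeofday(&wall_start, NULL);

    sleep(1);   /* process is suspended, so it consumes almost no CPU */

    clock_t cpu_stop = clock();
    gettimeofday(&wall_stop, NULL);

    double cpu_ms  = 1000.0 * (cpu_stop - cpu_start) / CLOCKS_PER_SEC;
    double wall_ms = (wall_stop.tv_sec - wall_start.tv_sec) * 1000.0 +
                     (wall_stop.tv_usec - wall_start.tv_usec) / 1000.0;

    /* expected: cpu_ms near 0, wall_ms near 1000 */
    printf("cpu: %f ms, wall: %f ms\n", cpu_ms, wall_ms);
    return 0;
}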

Related

C (time.h) clock_t = clock() producing wrong duration

Some example code:
clock_t clock_start = clock();
for( ... ) { ... do stuff ... }
clock_t clock_stop = clock();
double duration = 1000.0 * (clock_stop - clock_start) / CLOCKS_PER_SEC;
printf("time: %f ms\n", duration);
When I ran this code it produced an output of:
time: 4756.869000 ms
This is clearly wrong: I estimate the actual time taken is about 10 seconds, and verified this with a stopwatch.
There appears to be a factor of about 2 to 3 missing.
Is it possible that CLOCKS_PER_SEC is defined as something nonsensical on my system? (I am using a Raspberry Pi 3, with Raspberry Pi OS.) Is there any way to check this? Or is it more likely that something else is the cause of the issue?
I am aware of alternative methods of measuring time on POSIX systems. I will implement some tests with one of those as a possible alternative, regardless.
The clock() function returns an approximation of processor time used by the program.
It says "processor time", not the amount of time a stopwatch would show. If you want to measure time passing in the real world, you need to use one of those other functions.
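For instance, a sketch of the same measurement using clock_gettime() with CLOCK_MONOTONIC, one of those POSIX alternatives (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, stop;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... do stuff ... */

    clock_gettime(CLOCK_MONOTONIC, &stop);

    /* wall-clock milliseconds, from a clock that never jumps backwards */
    double duration = (stop.tv_sec - start.tv_sec) * 1000.0 +
                      (stop.tv_nsec - start.tv_nsec) / 1e6;
    printf("time: %f ms\n", duration);
    return 0;
}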

How can I get the most accurate time for a process in C?

I am not looking for nanoseconds or accuracy to the umpteenth decimal place; however, I would like decent accuracy and to minimise the margin of error in the results.
For this I used sys/time.h and time.h on a MacBook Pro, plus a stopwatch (my iPhone's stopwatch app, that is), and I am getting rather inconsistent results.
My iPhone's stopwatch reported 140.66 seconds; the two functions over three trials gave:
1. time.h: 136.957065 s; sys/time.h: 140.335872 s
2. time.h: 140.822794 s; sys/time.h: 151.858111 s
3. time.h: 146.064458 s; sys/time.h: 162.149713 s
And here is my code snippet:
void myExpensiveFunction()
{
    struct timeval tv1, tv2;

    gettimeofday(&tv1, NULL);
    clock_t fbegin = clock();

    /* expensive algorithm */

    clock_t fend = clock();
    gettimeofday(&tv2, NULL);

    printf("Expensive function took: %f seconds\n",
           (double)(fend - fbegin) / CLOCKS_PER_SEC);
    printf("Expensive function took: %f seconds\n",
           (double)(tv2.tv_usec - tv1.tv_usec) / 1000000 +
           (double)(tv2.tv_sec - tv1.tv_sec));
}
While this isn't mission critical and I am sure there are some inconsistencies due to my computer itself, I am curious whether there is a better function or methodology I should be using or be aware of. For testing and curiosity's sake, I tried the same thing on my Linux mail server and got similar results: the sys/time.h figure was about ten seconds higher.
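One option worth trying, assuming a reasonably recent macOS (10.12 or later) or any Linux, where clock_gettime() is available: measure wall time with CLOCK_MONOTONIC. Here now_seconds() is just a helper introduced for this sketch.

#include <stdio.h>
#include <time.h>

/* wall-clock seconds from a monotonic clock: unaffected by NTP or
   daylight-saving adjustments, unlike gettimeofday() */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void myExpensiveFunction(void)
{
    double begin = now_seconds();

    /* expensive algorithm */

    double end = now_seconds();
    printf("Expensive function took: %f seconds\n", end - begin);
}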

How can I measure the execution time of a function in C? Can I print processor ticks?

I have a function and I need to know how much time it takes to execute; I believe it should take less than 10 milliseconds, so I am planning to print the time before entering the function and after it returns.
I am running C code on Linux RT and my IDE is Eclipse, so which function can give me time precise to milliseconds?
You can use the <time.h> header to get the CPU time used by a task or function within a C program. Use clock_t clock(void); the value returned is the CPU time used so far as a clock_t. To get the number of seconds used, divide by CLOCKS_PER_SEC. See the clock man page:
clock_t start = clock();
/*the function i want to time it*/
foo();
clock_t end = clock();
double times = (double)(end - start) / CLOCKS_PER_SEC;
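If wall-clock milliseconds are what you actually want (rather than CPU time), a sketch using clock_gettime() with CLOCK_MONOTONIC may fit a Linux RT target better; foo() here stands in for the function being timed.

#include <stdio.h>
#include <time.h>

void foo(void);   /* the function being timed, defined elsewhere */

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    foo();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* whole milliseconds of elapsed wall time */
    long ms = (t1.tv_sec - t0.tv_sec) * 1000 +
              (t1.tv_nsec - t0.tv_nsec) / 1000000;
    printf("foo() took %ld ms\n", ms);
    return 0;
}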

clock() returning a negative value in C

I'm using quite simple code to measure execution time. It works well up to, I believe, about 20 minutes, but after that (>20 min) it returns negative results. I searched the forums and tried everything, like changing the datatype and using long unsigned (which returns 0), but failed again.
The following is a snippet of my code:
int main(void)
{
    clock_t start, stop;
    double time_arm;

    start = clock();
    /* ....... */
    stop = clock();

    time_arm = (double)(stop - start) / (double)CLOCKS_PER_SEC;
    printf("Time Taken by ARM only is %lf \n", time_arm);
}
output is
Time Taken by ARM only is -2055.367296
Any help is appreciated; thanks in advance.
POSIX requires CLOCKS_PER_SEC to be 1,000,000. That means your count is in microseconds, and 2^31 microseconds is about 35 minutes. Your timer is simply overflowing, and you can't get meaningful results once that happens.
Why you see the problem at 20 minutes, I'm not sure - maybe your CLOCKS_PER_SEC isn't POSIX-compatible. Regardless, your problem is timer overflow. You'll need to handle this problem in a different way - maybe look into getrusage(2).
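For reference, a minimal getrusage(2) sketch; its struct timeval fields keep seconds and microseconds separate, so they don't overflow the way a single 32-bit clock_t does.

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    /* ... long-running work ... */

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* ru_utime: user CPU time, ru_stime: system CPU time */
        double user_s = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
        double sys_s  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        printf("user: %f s, system: %f s\n", user_s, sys_s);
    }
    return 0;
}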
clock_t is a long, which on a 32-bit system is a 32-bit signed value: it can hold up to 2^31 - 1 before it overflows. On a 32-bit system where CLOCKS_PER_SEC equals 1,000,000 [as POSIX requires], clock() will return the same value approximately every 72 minutes.
According to MSDN, it can also return -1:
The elapsed wall-clock time since the start of the process (elapsed
time in seconds times CLOCKS_PER_SEC). If the amount of elapsed time
is unavailable, the function returns –1, cast as a clock_t.
On a side note: clock() measures the CPU time used by the program, NOT real (wall-clock) time, and with CLOCKS_PER_SEC at 1,000,000 its return value is effectively in microseconds.
You could try using clock_gettime() with the CLOCK_PROCESS_CPUTIME_ID option if you are on a POSIX system. clock_gettime() uses struct timespec to return time information, so it doesn't suffer from the same overflow problem as a single integer.
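A sketch of that approach, assuming a POSIX system:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, stop;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

    /* ... work that may run for hours ... */

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop);

    /* tv_sec holds whole seconds in a time_t, so no 35-minute wraparound */
    double cpu_s = (stop.tv_sec - start.tv_sec) +
                   (stop.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPU time: %f s\n", cpu_s);
    return 0;
}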

Measuring elapsed time

I added the following code to learn how many milliseconds the program took.
The problem is I get 1065 clocks and 1065 milliseconds. Is it normal that they are equal?
Maybe my equation that converts clocks into milliseconds is wrong? Thanks in advance.
finishClock = clock();
timeCount = finishClock - startClock;
printf("Clocks passed: %f\nMilliseconds passed: %f",
       timeCount, timeCount * 1000 / CLOCKS_PER_SEC);
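The conversion formula itself is sound as long as timeCount is a floating-point type (printing an integral clock_t with %f would be undefined). If the two numbers come out equal, the likely explanation is that CLOCKS_PER_SEC is 1000 on your platform (it is with MSVC, for example), which makes the conversion an identity. A minimal sketch of the full measurement, assuming nothing beyond standard C:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t startClock = clock();

    /* ... work being measured ... */

    clock_t finishClock = clock();
    double timeCount = (double)(finishClock - startClock);

    /* the cast to double matters: %f expects a floating-point argument */
    printf("Clocks passed: %f\nMilliseconds passed: %f\n",
           timeCount, timeCount * 1000.0 / CLOCKS_PER_SEC);
    return 0;
}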
