Measuring time with time.h? - c

I'm trying to measure the time it takes to run a command in my own command interpreter, but the reported time seems wrong. When I run a command, it reports a time much longer than expected:
miniShell>> pwd
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug
Execution time 1828 ms
I'm using gettimeofday, as can be seen from the code. Is it wrong somewhere, and what should be changed so that the timing looks reasonable?
If I make a minimal example, then it looks and runs like this:
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
int main(int argc, char *argv[]) {
    long time;
    struct timeval time_start;
    struct timeval time_end;

    gettimeofday(&time_start, NULL);
    printf("run program>> ");
    gettimeofday(&time_end, NULL);
    time = (time_end.tv_sec - time_start.tv_sec) * 1000000 + time_end.tv_usec - time_start.tv_usec;
    printf("Execution time %ld ms\n", time); /* Print out the execution time */
    return (0);
}
Then I run it
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug/oslab
run program>> Execution time 14 ms
Process finished with exit code 0
The above 14 ms seems reasonable; why is the time so long for my command?

The tv_usec in struct timeval is a time in microseconds, not milliseconds.

You compute the time incorrectly. tv_usec, where the u stands for the Greek lowercase letter μ ("mu"), holds a number of microseconds. Fix the formula this way:
gettimeofday(&time_end, NULL);
time = (((time_end.tv_sec - time_start.tv_sec) * 1000000LL) +
        time_end.tv_usec - time_start.tv_usec) / 1000;
printf("Execution time %ld ms\n", time); /* Print out the execution time */
It is preferable to do the computation in 64 bits to avoid overflow: if long is 32 bits, a microsecond count overflows once the elapsed time exceeds about 35 minutes (2^31 microseconds).
If you want to preserve the maximum precision, keep the computation in microseconds and print the number of milliseconds with a decimal point:
gettimeofday(&time_end, NULL);
time = (time_end.tv_sec - time_start.tv_sec) * 1000000 +
       time_end.tv_usec - time_start.tv_usec;
printf("Execution time %ld.%03ld ms\n", time / 1000, time % 1000);

Related

Inconsistent busy waiting time in c

I have a character that should "eat" for 200 microseconds, "sleep" for 200 microseconds, and repeat, until they die, which happens if they haven't eaten for time_to_die microseconds.
In the code snippet in function main indicated below, the struct time_to_die has a member tv_usec configured for 1000 microseconds and I expect it to loop forever.
After some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
    time.tv_usec += microseconds;
    while (time.tv_usec >= 1000000)
    {
        time.tv_sec += 1;
        time.tv_usec -= 1000000;
    }
    return (time);
}

short time_compare(struct timeval time_one, struct timeval time_two)
{
    if (time_one.tv_sec != time_two.tv_sec)
    {
        if (time_one.tv_sec > time_two.tv_sec)
            return (1);
        else
            return (-1);
    }
    else
    {
        if (time_one.tv_usec > time_two.tv_usec)
            return (1);
        else if (time_one.tv_usec == time_two.tv_usec)
            return (0);
        else
            return (-1);
    }
}

// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval last_eaten_time, struct timeval time_to_die)
{
    struct timeval time;
    struct timeval end_time;
    struct timeval death_time;

    gettimeofday(&time, NULL);
    end_time = time_add_microseconds(time, interval);
    death_time = time_add_microseconds(last_eaten_time, time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec);
    while (time_compare(time, end_time) == -1)
    {
        gettimeofday(&time, NULL);
        if (time_compare(time, death_time) >= 0)
        {
            printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
            exit(1);
        }
    }
}

int main(void)
{
    struct timeval time;
    struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000 };
    struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };

    while (true)
    {
        gettimeofday(&time, NULL);
        printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
        last_eaten_time = time;
        busy_wait(200, last_eaten_time, time_to_die);
        gettimeofday(&time, NULL);
        printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
        busy_wait(200, last_eaten_time, time_to_die);
    }
}
Note: Other than the system functions I already used in my code, I'm only allowed to use usleep, write, and malloc and free.
Thank you for your time.
after some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
There are multiple possibilities. Many of them revolve around the fact that there is more going on in your computer while the program runs than just the program running. Unless you're running on a realtime operating system, the bottom line is that you can't fix some of the things that could cause such behavior.
For example, your program shares the CPU with the system itself and with all the other processes running on it. That may be more processes than you think: right now, there are over 400 live processes on my 6-core workstation. When there are more processes demanding CPU time than there are CPUs to run them on, the system will split the available time among the contending processes, preemptively suspending processes when their turns expire.
If your program happens to be preempted during a busy wait, then chances are good that substantially more than 200 μs of wall time will elapse before it is next scheduled any time on a CPU. Time slice size is usually measured in milliseconds, and on a general-purpose OS there is no guaranteed bound on the time between the end of one of a program's time slices and the start of its next one.
As I did in comments, I observe that you are using gettimeofday to measure elapsed time, yet that is not on your list of allowed system functions. One possible resolution of that inconsistency is that you're not meant to perform measurements of elapsed time, but rather to assume / simulate. For example, usleep() is on the list, so perhaps you're meant to usleep() instead of busy wait, and assume that the sleep time is exactly what was requested. Or perhaps you're meant to just adjust an internal time counter instead of actually pausing execution at all.
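As an illustration of that last idea, here is a minimal sketch (my own interpretation, not the assignment's required design): advance an internal simulated clock by exactly the requested amount, and use usleep only to pace the loop roughly in real time.
#include <stdio.h>
#include <unistd.h>

/* Simulated clock in microseconds; it advances by exactly the requested
 * amount, so scheduling jitter cannot "kill" the character. */
static unsigned long long sim_now_us = 0;

static void sim_wait(unsigned long long us)
{
    usleep(us);        /* pace roughly in real time (may overshoot) */
    sim_now_us += us;  /* but account for exactly `us` microseconds */
}

int main(void)
{
    for (int i = 0; i < 5; i++)
    {
        printf("%llu eating\n", sim_now_us);
        sim_wait(200);
        printf("%llu sleeping\n", sim_now_us);
        sim_wait(200);
    }
    return 0;
}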
Why
Ultimately: because an interrupt or trap is delivered to the CPU core executing your program, which transfers control to the operating system.
Some common causes:
The operating system runs its process scheduling off a hardware timer which fires at regular intervals. I.e., the OS is running some kind of fair scheduler and has to check whether your process's time is up for now.
Some device in your system needs to be serviced. E.g. a packet arrived over the network, your sound card's output buffer is running low and must be refilled, etc.
Your program voluntarily makes a request to the operating system that transfers control to it. Basically: anytime you make a syscall, the kernel may have to wait for I/O, or it may decide that it's time to schedule a different process, or both. In your case, the calls to printf will at some point result in a write(2) syscall that will end up performing some I/O.
What to do
Cause 3 can be avoided by ensuring that no syscalls are made, i.e. never trapping in to the OS.
Causes 1 and 2 are very difficult to get rid of completely. You're essentially looking for a real-time operating system (RTOS). An OS like Linux can approximate one by placing processes in different scheduling domains (SCHED_FIFO/SCHED_RR). If you're willing to switch to a kernel that is tailored towards real-time applications, you can get even further. You can also look into topics like "CPU isolation".
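For example, a Linux process can request the SCHED_FIFO policy so that it is only preempted by higher-priority real-time tasks. A minimal sketch (this requires root or CAP_SYS_NICE, and it mitigates rather than eliminates the problem):
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Priority 1..99; higher preempts lower. Even priority 1 preempts
     * all normal SCHED_OTHER processes. */
    struct sched_param sp = { .sched_priority = 50 };

    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
    {
        perror("sched_setscheduler");  /* typically EPERM without privileges */
        return 1;
    }
    /* ... time-sensitive busy-waiting would go here ... */
    return 0;
}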
Just to illustrate the printf timings, and also the gettimeofday timings I mentioned in the comments, I tried two things.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
int main(void)
{
    struct timeval time;
    long long histo[5000];

    for (int i = 0; i < 5000; i++) {
        printf("%d\n", i);  /* one printf per iteration, so terminal speed is part of each measurement */
        gettimeofday(&time, NULL);
        histo[i] = time.tv_sec * 1000000ULL + time.tv_usec;
    }

    long long min = 1000000000;
    long long max = 0;
    for (int i = 1; i < 5000; i++) {
        long long dt = histo[i] - histo[i-1];
        if (dt < min) min = dt;
        if (dt > max) max = dt;
        if (dt > 800) printf("%d %lld\n", i, dt);
    }
    printf("avg: %f min=%lld max=%lld\n", (histo[4999] - histo[0]) / 5000.0, min, max);
}
So all it does here is loop over 5000 printf/gettimeofday iterations, then measure (after the loop) the mean, min, and max.
On my X11 terminal (Sakura), the average is 8 μs per loop, with a min of 1 μs and a max of 3790 μs! (Other measurements I made show that this 3000-or-so μs outlier is also the only iteration over 200 μs. In other words, it never goes over 200 μs. Except when it does, "bigly".)
So, on average, everything goes well. But once in a while, a printf takes almost 4 ms. That is not enough for a human user to even notice, if it doesn't happen several times in a row, but it is way more than needed to make your code fail.
On my console (no X11) terminal (an 80x25 terminal, which may or may not use the text mode of my graphics card, I was never sure), the mean is 272 μs, the min 193 μs, and the max 1100 μs. Which, in retrospect, is not surprising: this terminal is slow, but simpler, so less prone to "surprises".
But, well, it fails faster, because the probability of going over 200 μs is very high, even if it is never over by much: more than half of the loops take more than 200 μs.
I also tried measurements on a loop without printf.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
int main(void)
{
    struct timeval time;
    long long old = 0;
    long long ntot = 0;
    long long nov10 = 0;
    long long nov100 = 0;
    long long nov1000 = 0;

    for (int i = 0;; i++) {
        gettimeofday(&time, NULL);
        long long t = time.tv_sec * 1000000ULL + time.tv_usec;
        if (old) {
            long long dt = t - old;
            ntot++;
            if (dt > 10) {
                nov10++;
                if (dt > 100) {
                    nov100++;
                    if (dt > 1000) nov1000++;
                }
            }
        }
        if (i % 10000 == 0) {
            printf("tot=%lld >10=%lld >100=%lld >1000=%lld\n", ntot, nov10, nov100, nov1000);
            old = 0;
        } else {
            old = t;
        }
    }
}
So, it measures something that I could pompously call a "logarithmic histogram" of timings.
This time, the result is independent of the terminal (I reset old to 0 each time I print something, so those iterations don't count).
Result
tot=650054988 >10=130125 >100=2109 >1000=2
So, sure, 99.98% of the time, gettimeofday takes less than 10 μs.
But around 3 times per million calls (which, in your code, means every few seconds), it takes more than 100 μs. And twice in my experiment, it took more than 1000 μs. Just gettimeofday, not the printf.
Obviously, it's not gettimeofday itself that took 1 ms. Simply, something more important occurred on my system, and my process had to wait 1 ms to get some CPU time from the scheduler.
And bear in mind that this is on my computer, where your code runs fine (well, those measurements show that it would have failed eventually, had I let it run as long as I let these measurements run).
On your computer, those numbers (the two calls over 1000 μs) are certainly much higher, so it fails very quickly.
Preemptive multitasking OSes are simply not made to guarantee execution times in microseconds. You have to use a real-time OS for that (RT-Linux, for example, if it still exists; I haven't used it since 2002).
As pointed out in the other answers, there is no way to make this code work as I expected without a major change in its design, within my constraints. So I changed my code to not depend on gettimeofday for determining whether a philosopher died, or for the time value to print. Instead, I just add 200 μs to time every time my character eats/sleeps. This does feel like a cheap trick, because while I display the correct system wall time at the start, my time variable drifts away from the system time more and more as the program runs, but I guess this is what was wanted from me.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
    time.tv_usec += microseconds;
    while (time.tv_usec >= 1000000)
    {
        time.tv_sec += 1;
        time.tv_usec -= 1000000;
    }
    return (time);
}

short time_compare(struct timeval time_one, struct timeval time_two)
{
    if (time_one.tv_sec != time_two.tv_sec)
    {
        if (time_one.tv_sec > time_two.tv_sec)
            return (1);
        else
            return (-1);
    }
    else
    {
        if (time_one.tv_usec > time_two.tv_usec)
            return (1);
        else if (time_one.tv_usec == time_two.tv_usec)
            return (0);
        else
            return (-1);
    }
}

bool is_destined_to_die(int interval, struct timeval current_time, struct timeval last_eaten_time, struct timeval time_to_die)
{
    current_time = time_add_microseconds(current_time, interval);
    if ((current_time.tv_sec * 1000000ULL + current_time.tv_usec)
        - (last_eaten_time.tv_sec * 1000000ULL + last_eaten_time.tv_usec)
        >= time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec)
        return (true);
    else
        return (false);
}

// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval current_time, struct timeval last_eaten_time, struct timeval time_to_die)
{
    struct timeval time;
    struct timeval end_time;
    struct timeval death_time;

    gettimeofday(&time, NULL);
    if (is_destined_to_die(interval, current_time, last_eaten_time, time_to_die))
    {
        death_time = time_add_microseconds(last_eaten_time, time_to_die.tv_sec * 1000000 + time_to_die.tv_usec);
        while (time_compare(time, death_time) == -1)
            gettimeofday(&time, NULL);
        printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
        exit(1);
    }
    end_time = time_add_microseconds(time, interval);
    while (time_compare(time, end_time) == -1)
        gettimeofday(&time, NULL);
}

int main(void)
{
    struct timeval time;
    struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000 };
    struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };

    gettimeofday(&time, NULL);
    while (true)
    {
        printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
        last_eaten_time = time;
        busy_wait(200, time, last_eaten_time, time_to_die);
        time = time_add_microseconds(time, 200);
        printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
        busy_wait(200, time, last_eaten_time, time_to_die);
        time = time_add_microseconds(time, 200);
    }
}

How to get execution time of c program?

I am using the clock function in my C program to print the execution time of the current program. I am getting the wrong time in the output. I want to display the time in seconds, milliseconds, and microseconds.
#include <stdio.h>
#include <unistd.h>
#include <time.h>
int main()
{
    clock_t start = clock();

    sleep(3);
    clock_t end = clock();
    double time_taken = (double)(end - start) / CLOCKS_PER_SEC; // in seconds

    printf("time program took %f seconds to execute \n", time_taken);
    return 0;
}
time ./time
time program took 0.081000 seconds to execute
real 0m3.002s
user 0m0.000s
sys 0m0.002s
I expect output of around 3 seconds; however, it displays the wrong value.
As you can see, when I run this program with the Linux time command I get the correct time. I want to display the same time from my C program.
Contrary to popular belief, the clock() function retrieves CPU time, not the elapsed wall-clock time that its name may confusingly lead people to expect.
Here is the language from the C Standard:
7.27.2.1 The clock function
Synopsis
#include <time.h>
clock_t clock(void);
Description
The clock function determines the processor time used.
Returns
The clock function returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation. To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available, the function returns the value (clock_t)(−1). If the value cannot be represented, the function returns an unspecified value.
To retrieve the elapsed time, you should use one of the following:
the time() function with a resolution of 1 second
the timespec_get() function which may be more precise, but might not be available on all systems
the gettimeofday() system call available on Linux systems
the clock_gettime() function.
See What specifically are wall-clock-time, user-cpu-time, and system-cpu-time in UNIX? for more information on this subject.
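For instance, a minimal stopwatch sketch using the C11 timespec_get() function (assuming your C library provides it):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec start, end;

    timespec_get(&start, TIME_UTC);  /* returns TIME_UTC on success */
    sleep(3);
    timespec_get(&end, TIME_UTC);

    double elapsed = (end.tv_sec - start.tv_sec) +
                     (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.6f seconds\n", elapsed);
    return 0;
}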
Here is a modified version using gettimeofday():
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
int main() {
    struct timeval start, end;

    gettimeofday(&start, NULL);
    sleep(3);
    gettimeofday(&end, NULL);
    double time_taken = end.tv_sec + end.tv_usec / 1e6 -
                        start.tv_sec - start.tv_usec / 1e6; // in seconds

    printf("time program took %f seconds to execute\n", time_taken);
    return 0;
}
Output:
time program took 3.005133 seconds to execute

Measuring execution time with clock in sec in C not working

I'm trying to measure execution time in C with clock() under Linux, using the following:
#include <time.h>
#include <stdio.h>
#include <unistd.h>
int main(int argc, char const* argv[])
{
    clock_t begin, end;

    begin = clock();
    sleep(2);
    end = clock();
    double spent = ((double)(end - begin)) / CLOCKS_PER_SEC;
    printf("%ld %ld, spent: %f\n", begin, end, spent);
    return 0;
}
The output is:
1254 1296, spent: 0.000042
The documentation says to divide the clock time by CLOCKS_PER_SEC to get the execution time in seconds, but this seems pretty incorrect for a 2-second sleep.
What's the problem?
Sleeping takes almost no execution time. The program just has to schedule its wakeup and then put itself to sleep. While it's asleep, it is not executing. When it's woken up, it doesn't have to do anything at all. That all takes a very tiny fraction of a second.
It doesn't take more execution time to sleep longer. So the fact that there's a 2 second period when the program is not executing has no effect.
clock measures CPU time (in Linux at least). A sleeping process consumes no CPU time.
If you want to measure a time interval as if with a stopwatch, regardless of what your process is doing, use clock_gettime with CLOCK_MONOTONIC.
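A minimal stopwatch sketch along those lines (POSIX; on older glibc you may need to link with -lrt):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec begin, end;

    /* CLOCK_MONOTONIC measures elapsed wall time and is immune to
     * system clock adjustments. */
    clock_gettime(CLOCK_MONOTONIC, &begin);
    sleep(2);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double spent = (end.tv_sec - begin.tv_sec) +
                   (end.tv_nsec - begin.tv_nsec) / 1e9;
    printf("spent: %f seconds\n", spent);
    return 0;
}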
man clock() has the answer:
The clock() function returns an approximation of processor time used by the program.
This clearly states that clock() returns the processor time used by the program, not what you were expecting: the program's total wall-clock run time, which is typically measured with gettimeofday.

C Timer Difference returning 0ms

I'm learning C (and Cygwin) and trying to complete a simple remote execution system for an assignment.
One simple requirement that I'm getting hung up on is: 'Client will report the time taken for the server to respond to each query.'
I've searched around and implemented other working solutions, but I always get back 0 as a result.
A snippet of what I have:
#include <stdio.h>   /* printf, fgets */
#include <string.h>  /* strcspn, strcmp, strlen */
#include <strings.h> /* bzero */
#include <time.h>    /* clock, CLOCKS_PER_SEC */
#include <unistd.h>  /* read, write, sleep */

for (;;)
{
    //- Reset loop variables
    bzero(sendline, 1024);
    bzero(recvline, 1024);
    printf("> ");
    fgets(sendline, 1024, stdin);

    //- Handle program 'quit'
    sendline[strcspn(sendline, "\n")] = 0;
    if (strcmp(sendline, "quit") == 0) break;

    //- Process & time command
    clock_t start = clock(), diff;
    write(sock, sendline, strlen(sendline) + 1);
    read(sock, recvline, 1024);
    sleep(2);
    diff = clock() - start;
    int msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("%s (%d s / %d ms)\n\n", recvline, msec / 1000, msec % 1000);
}
I've also tried using a float, and instead of dividing by 1000, multiplying by 10,000 just to see if there is any glint of a value, but always getting back 0.
Clearly something must be wrong with how I'm implementing this, but after much reading I can't figure it out.
--Edit--
Printout of values:
clock_t start = clock(), diff;
printf("Start time: %lld\n", (long long) start);

//process stuff
sleep(2);

printf("End time: %lld\n", (long long) clock());
diff = clock() - start;
printf("Diff time: %lld\n", (long long) diff);
printf("Clocks per sec: %d\n", CLOCKS_PER_SEC);
Result:
Start time: 15
End time: 15
Diff time: 0
Clocks per sec: 1000
-- FINAL WORKING CODE --
#include <sys/time.h>

//- Setup clock
struct timeval start, end;

//- Start timer
gettimeofday(&start, NULL);

//- Process command
/* Process stuff */

//- End timer
gettimeofday(&end, NULL);

//- Calculate difference in microseconds
//  (long long: tv_sec * 1000000 would overflow a 32-bit long)
long long usec =
    (end.tv_sec * 1000000LL + end.tv_usec) -
    (start.tv_sec * 1000000LL + start.tv_usec);

//- Convert to milliseconds
double msec = (double)usec / 1000;

//- Print result (3 decimal places)
printf("\n%s (%.3fms)\n\n", recvline, msec);
I think you misunderstand clock() and sleep().
clock() measures the CPU time used by your program, but sleep() sleeps without using any CPU time. Maybe you want to use time() or gettimeofday() instead?
Cygwin means you're on Windows.
On Windows, the "current time" on an executing thread is only updated every 64th of a second (roughly 16ms), so if clock() is based on it, even if it returns a number of milliseconds, it will never be more precise than 15.6ms.
GetThreadTimes() has the same limitation.
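If you need finer-grained wall-clock timing on Windows, the usual native alternative is QueryPerformanceCounter. A sketch for illustration (Win32 API, also callable from Cygwin programs that include windows.h):
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;

    QueryPerformanceFrequency(&freq);  /* ticks per second */
    QueryPerformanceCounter(&t0);
    Sleep(2000);                       /* the work being timed */
    QueryPerformanceCounter(&t1);

    double ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    printf("spent: %.3f ms\n", ms);
    return 0;
}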

Timing in C with time.h

I am working on Ubuntu, and I want to time an assembler function in C.
That's my code:
#include <time.h>
#include <stdio.h>
#include <unistd.h>
extern void assembler_function(char*, int);

int main(){
    char *text1 = "input.txt";
    clock_t start = clock();

    sleep(3); // used for test
    //assembler_function(text1,0);
    clock_t stop = clock();

    //printf("%d %f\n",(int)stop,((float)stop)/CLOCKS_PER_SEC);
    printf("Time : %f \n", (double)start / CLOCKS_PER_SEC);
    printf("Time : %f \n", (double)stop / CLOCKS_PER_SEC);
    printf("Time : %f \n", (double)(stop - start) / CLOCKS_PER_SEC);
    return 0;
}
The results are :
Time : 0.000000
Time : 0.000000
Time : 0.000000
If CLOCKS_PER_SEC is the typical value of 1000000, it's entirely possible that the range you're measuring is less than one clock (1 microsecond). Also, sleep will not contribute to increases in clock aside from the overhead of the call itself, since clock measures process time, not wall clock time.
Instead, try measuring the time taken to perform multiple calls to the assembler function, then dividing the results by the number of iterations. If the total time to execute the assembler function is very small (e.g. 1ns) you'll need to be clever about how you do this, otherwise the overhead of the loop could end up being a significant part of the measurement.
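For instance, a sketch of that repeat-and-divide approach, reusing the question's assembler_function (the iteration count is arbitrary and should be tuned to the function's cost):
#include <stdio.h>
#include <sys/time.h>

extern void assembler_function(char*, int);

int main(void)
{
    struct timeval start, end;
    const int iterations = 100000;

    gettimeofday(&start, NULL);
    for (int i = 0; i < iterations; i++)
        assembler_function("input.txt", 0);
    gettimeofday(&end, NULL);

    /* total elapsed wall time in microseconds, divided per call */
    double total_us = (end.tv_sec - start.tv_sec) * 1e6 +
                      (end.tv_usec - start.tv_usec);
    printf("%.3f microseconds per call\n", total_us / iterations);
    return 0;
}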
Here's a simple example, compiled on a Ubuntu box:
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>
#include <unistd.h>
#include <time.h>
int64_t timestamp_now (void)
{
    struct timeval tv;
    gettimeofday (&tv, NULL);
    /* On POSIX systems CLOCKS_PER_SEC is exactly 1000000, which is why it
     * can double as the microseconds-per-second factor here. */
    return (int64_t) tv.tv_sec * CLOCKS_PER_SEC + tv.tv_usec;
}

double timestamp_to_seconds (int64_t timestamp)
{
    return timestamp / (double) CLOCKS_PER_SEC;
}

int
main ()
{
    int64_t start = timestamp_now ();

    sleep (1);
    printf ("sleep(1) took %f seconds\n",
            timestamp_to_seconds (timestamp_now () - start));
    return 0;
}
From the helpful man-page for clock():
DESCRIPTION
The clock() function returns an approximation of processor time used by the program.
In other words, clock() is probably not what you want here, because you want to count elapsed wall-clock time, not CPU-usage time (note: sleep() uses almost no CPU time - it just sets an alarm for a wake-up time in the future and then, well, sleeps...).
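A small sketch contrasting the two (assuming a POSIX system): the CPU time barely moves across a sleep, while the wall clock advances.
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t c0 = clock();     /* CPU time used so far */
    time_t  w0 = time(NULL);  /* wall-clock time */

    sleep(3);                 /* elapses in wall time, uses almost no CPU time */

    printf("cpu:  %f s\n", (double)(clock() - c0) / CLOCKS_PER_SEC);
    printf("wall: %.0f s\n", difftime(time(NULL), w0));
    return 0;
}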
Use difftime (note that time() only has a resolution of 1 second):
First:
time_t t_ini;
t_ini = time(NULL);
At the end (difftime takes two time_t values and returns the difference in seconds as a double):
double elapsed = difftime(time(NULL), t_ini);
printf("Elapsed: %.0f seconds\n", elapsed);
