C Timer Difference returning 0ms

I'm learning C (and Cygwin) and trying to complete a simple remote execution system for an assignment.
One simple requirement that I'm getting hung up on is: 'Client will report the time taken for the server to respond to each query.'
I've searched around and implemented other working solutions, but I always get back 0 as a result.
A snippet of what I have:
#include <stdio.h>   // printf, fgets
#include <string.h>  // strcspn, strcmp, strlen
#include <strings.h> // bzero
#include <unistd.h>  // read, write, sleep
#include <time.h>    // clock

for (;;)
{
    //- Reset loop variables
    bzero(sendline, 1024);
    bzero(recvline, 1024);
    printf("> ");
    fgets(sendline, 1024, stdin);

    //- Handle program 'quit'
    sendline[strcspn(sendline, "\n")] = 0;
    if (strcmp(sendline, "quit") == 0) break;

    //- Process & time command
    clock_t start = clock(), diff;
    write(sock, sendline, strlen(sendline) + 1);
    read(sock, recvline, 1024);
    sleep(2);
    diff = clock() - start;
    int msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("%s (%d s / %d ms)\n\n", recvline, msec / 1000, msec % 1000);
}
I've also tried using a float, and multiplying by 10,000 instead of dividing by 1,000, just to see if there is any glint of a value, but I always get back 0.
Clearly something must be wrong with how I'm implementing this, but after much reading I can't figure it out.
--Edit--
Printout of values:
clock_t start = clock(), diff;
printf("Start time: %lld\n", (long long) start);

/* process stuff */
sleep(2);

printf("End time: %lld\n", (long long) clock());
diff = clock() - start;
printf("Diff time: %lld\n", (long long) diff);
printf("Clocks per sec: %ld\n", (long) CLOCKS_PER_SEC);
Result:
Start time: 15
End time: 15
Diff time: 0
Clocks per sec: 1000
-- FINAL WORKING CODE --
#include <stdio.h>    // printf
#include <sys/time.h> // gettimeofday

//- Setup clock
struct timeval start, end;

//- Start timer
gettimeofday(&start, NULL);

//- Process command
/* Process stuff */

//- End timer
gettimeofday(&end, NULL);

//- Calculate difference in microseconds (64-bit to avoid overflow)
long long usec =
    (end.tv_sec * 1000000LL + end.tv_usec) -
    (start.tv_sec * 1000000LL + start.tv_usec);

//- Convert to milliseconds
double msec = (double) usec / 1000;

//- Print result (3 decimal places)
printf("\n%s (%.3fms)\n\n", recvline, msec);

I think you misunderstand clock() and sleep().
clock() measures the CPU time used by your program, but sleep() sleeps without using any CPU time, so almost no processor time elapses between your two calls. Maybe you want to use time() or gettimeofday() instead?
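To make the difference concrete, here is a minimal, self-contained sketch (mine, not from the original post) that times the same two-second sleep with both clocks; clock() only advances while the process uses the CPU, while gettimeofday() follows the wall clock:

#include <stdio.h>
#include <time.h>      // clock, CLOCKS_PER_SEC
#include <unistd.h>    // sleep
#include <sys/time.h>  // gettimeofday

int main(void)
{
    struct timeval tv1, tv2;
    clock_t c1 = clock();
    gettimeofday(&tv1, NULL);

    sleep(2);  // suspends the process: wall time passes, CPU time does not

    clock_t c2 = clock();
    gettimeofday(&tv2, NULL);

    long long wall_us = (tv2.tv_sec - tv1.tv_sec) * 1000000LL
                      + (tv2.tv_usec - tv1.tv_usec);
    printf("clock():        %.3f ms of CPU time\n",
           (double) (c2 - c1) * 1000.0 / CLOCKS_PER_SEC);
    printf("gettimeofday(): %.3f ms of wall time\n", wall_us / 1000.0);
    return 0;
}

On a typical system this prints close to 0 ms for clock() and roughly 2000 ms for gettimeofday(), which is exactly the symptom in the question.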

Cygwin means you're on Windows.
On Windows, the "current time" on an executing thread is only updated every 64th of a second (roughly 16ms), so if clock() is based on it, even if it returns a number of milliseconds, it will never be more precise than 15.6ms.
GetThreadTimes() has the same limitation.
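If sub-16 ms resolution is actually needed on Windows, the usual native route is QueryPerformanceCounter(). A minimal sketch (plain Win32 C, my own illustration, not part of the original answer):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);  // counter ticks per second
    QueryPerformanceCounter(&t0);

    Sleep(250);  // stand-in for the work being timed

    QueryPerformanceCounter(&t1);
    double ms = (double) (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}

Under Cygwin, clock_gettime(CLOCK_MONOTONIC, ...) is a portable alternative worth trying as well.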

Related

Inconsistent busy waiting time in C

I have a character that should "eat" for 200 microseconds, "sleep" for 200 microseconds, and repeat, until it dies, which happens if it hasn't eaten for time_to_die microseconds.
In the code snippet in function main indicated below, the struct time_to_die has its tv_usec member set to 1000 microseconds, and I expect the program to loop forever.
After some time, though, one execution of the function busy_wait takes around 5 times longer than it is supposed to (enough to kill the character), and the character dies. I want to know why, and how to fix it.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
    time.tv_usec += microseconds;
    while (time.tv_usec >= 1000000)
    {
        time.tv_sec += 1;
        time.tv_usec -= 1000000;
    }
    return (time);
}

short time_compare(struct timeval time_one, struct timeval time_two)
{
    if (time_one.tv_sec != time_two.tv_sec)
    {
        if (time_one.tv_sec > time_two.tv_sec)
            return (1);
        else
            return (-1);
    }
    else
    {
        if (time_one.tv_usec > time_two.tv_usec)
            return (1);
        else if (time_one.tv_usec == time_two.tv_usec)
            return (0);
        else
            return (-1);
    }
}

// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval last_eaten_time, struct timeval time_to_die)
{
    struct timeval time;
    struct timeval end_time;
    struct timeval death_time;

    gettimeofday(&time, NULL);
    end_time = time_add_microseconds(time, interval);
    death_time = time_add_microseconds(last_eaten_time,
                                       time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec);
    while (time_compare(time, end_time) == -1)
    {
        gettimeofday(&time, NULL);
        if (time_compare(time, death_time) >= 0)
        {
            printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
            exit(1);
        }
    }
}

int main(void)
{
    struct timeval time;
    struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000 };
    struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };

    while (true)
    {
        gettimeofday(&time, NULL);
        printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
        last_eaten_time = time;
        busy_wait(200, last_eaten_time, time_to_die);
        gettimeofday(&time, NULL);
        printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
        busy_wait(200, last_eaten_time, time_to_die);
    }
}
Note: Other than the system functions I already used in my code, I'm only allowed to use usleep, write, and malloc and free.
Thank you for your time.
After some time, one execution of the function busy_wait takes around 5 times more than it is supposed to (enough to kill the character), and the character dies. I want to know why and how to fix it.
There are multiple possibilities. Many of them revolve around the fact that there is more going on in your computer while the program runs than just the program running. Unless you're running on a realtime operating system, the bottom line is that you can't fix some of the things that could cause such behavior.
For example, your program shares the CPU with the system itself and with all the other processes running on it. That may be more processes than you think: right now, there are over 400 live processes on my 6-core workstation. When there are more processes demanding CPU time than there are CPUs to run them on, the system will split the available time among the contending processes, preemptively suspending processes when their turns expire.
If your program happens to be preempted during a busy wait, then chances are good that substantially more than 200 μs of wall time will elapse before it is next scheduled any time on a CPU. Time slice size is usually measured in milliseconds, and on a general-purpose OS, there is no upper (or lower) bound on the time between the elapse of one and the commencement of the same program's next one.
As I did in comments, I observe that you are using gettimeofday to measure elapsed time, yet that is not on your list of allowed system functions. One possible resolution of that inconsistency is that you're not meant to perform measurements of elapsed time, but rather to assume / simulate. For example, usleep() is on the list, so perhaps you're meant to usleep() instead of busy wait, and assume that the sleep time is exactly what was requested. Or perhaps you're meant to just adjust an internal time counter instead of actually pausing execution at all.
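For instance, a hypothetical sketch of that simulate-instead-of-measure idea (the names here are mine, not from the assignment): sleep with usleep(), then advance an internal counter by exactly the requested interval, so the program's notion of time never depends on how long the sleep really took:

#include <unistd.h>  // usleep

// Simulated clock: advanced by exactly the requested amount, by design.
static long long sim_now_usec = 0;

static void sim_wait(long long interval_usec)
{
    usleep(interval_usec);          // assume the sleep was exact
    sim_now_usec += interval_usec;  // the simulation's time, not the wall clock
}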
Why
Ultimately: because an interrupt or trap is delivered to the CPU core executing your program, which transfers control to the operating system.
Some common causes:
The operating system runs its process scheduling from a hardware timer which fires at regular intervals. I.e. the OS is running some kind of fair scheduler and has to check whether your process' time is up for now.
Some device in your system needs to be serviced. E.g. a packet arrived over the network, your sound card's output buffer is running low and must be refilled, etc.
Your program voluntarily makes a request to the operating system that transfers control to it. Basically: anytime you make a syscall, the kernel may have to wait for I/O, or it may decide that it's time to schedule a different process, or both. In your case, the calls to printf will at some point result in a write(2) syscall that will end up performing some I/O.
What to do
Cause 3 can be avoided by ensuring that no syscalls are made, i.e. never trapping in to the OS.
Causes 1 and 2 are very difficult to completely get rid of. You're essentially looking for a real-time operating system (RTOS). An OS like Linux can approximate that by placing processes in different scheduling domains (SCHED_FIFO/SCHED_RR). If you're willing to switch to a kernel that is tailored towards real-time applications, you can get even further. You can also look in to topics like "CPU isolation".
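To illustrate the scheduling-domain point, here is a minimal Linux-only sketch (my own, it needs root or CAP_SYS_NICE, and it is a sketch rather than a recommendation for this assignment) that moves the calling process into the SCHED_FIFO real-time class, where it will not be preempted by ordinary time-shared processes:

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };  // mid-range RT priority

    // 0 means the calling process; fails with EPERM without privileges.
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
    {
        perror("sched_setscheduler");
        return 1;
    }
    // ... run the timing-sensitive loop here ...
    return 0;
}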
Just to illustrate the printf and gettimeofday timings I mentioned in comments, I tried two things.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void)
{
    struct timeval time;
    long long histo[5000];

    for (int i = 0; i < 5000; i++)
    {
        gettimeofday(&time, NULL);
        histo[i] = time.tv_sec * 1000000ULL + time.tv_usec;
        printf("%d\n", i);  /* the printf whose cost shows up in the timings below */
    }

    long long min = 1000000000;
    long long max = 0;
    for (int i = 1; i < 5000; i++)
    {
        long long dt = histo[i] - histo[i - 1];
        if (dt < min) min = dt;
        if (dt > max) max = dt;
        if (dt > 800) printf("%d %lld\n", i, dt);
    }
    printf("avg: %f min=%lld max=%lld\n", (histo[4999] - histo[0]) / 5000.0, min, max);
}
So all it does here is loop over 5000 printf/gettimeofday iterations, and then measure (after the loop) the mean, min and max interval.
On my X11 terminal (Sakura), the average is 8 μs per loop, with min 1 μs and max 3790 μs! (Other measurements I made show that this 3000-or-so μs outlier is also the only interval over 200 μs. In other words, it never goes over 200 μs, except when it does, hugely.)
So, on average, everything goes well. But once in a while, a printf takes almost 4 ms. That is not enough for a human user to even notice, unless it happens several times in a row, but it is way more than needed to make your code fail.
On my console (non-X11) terminal (an 80x25 terminal that may or may not use my graphics card's text mode, I was never sure), the mean is 272 μs, min 193 μs, and max 1100 μs. Which, in retrospect, is not surprising: this terminal is slow, but simpler, so less prone to "surprises".
But, well, it fails faster, because the probability of going over 200 μs is very high; even if it is never a lot over, more than half of the loops take more than 200 μs.
I also tried measurements on a loop without printf.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void)
{
    struct timeval time;
    long long old = 0;
    long long ntot = 0;
    long long nov10 = 0;
    long long nov100 = 0;
    long long nov1000 = 0;

    for (int i = 0;; i++)
    {
        gettimeofday(&time, NULL);
        long long t = time.tv_sec * 1000000ULL + time.tv_usec;
        if (old)
        {
            long long dt = t - old;
            ntot++;
            if (dt > 10)
            {
                nov10++;
                if (dt > 100)
                {
                    nov100++;
                    if (dt > 1000) nov1000++;
                }
            }
        }
        if (i % 10000 == 0)
        {
            printf("tot=%lld >10=%lld >100=%lld >1000=%lld\n", ntot, nov10, nov100, nov1000);
            old = 0;
        }
        else
        {
            old = t;
        }
    }
}
So, it measures something that I could pompously call a "logarithmic histogram" of timings.
This time the result is independent of the terminal (I reset old to 0 each time I print something, so those iterations don't count).
Result
tot=650054988 >10=130125 >100=2109 >1000=2
So, sure, 99.98% of the time, gettimeofday takes less than 10 μs.
But around 3 calls in every million (which, in your code, means within a few seconds), it takes more than 100 μs. And twice in my experiment it took more than 1000 μs. Just gettimeofday, not the printf.
Obviously, it's not gettimeofday itself that took 1 ms. Simply, something more important occurred on my system, and my process had to wait 1 ms to get some CPU time from the scheduler.
And bear in mind that this is on my computer, where your code runs fine (well, those measurements show it would have failed eventually if I had let it run as long as I let the measurements run).
On your computer, those numbers (the 2 ">1000") are certainly much higher, so it fails very quickly.
Preemptive multitasking OSes are simply not made to guarantee execution times at the microsecond scale. You have to use a real-time OS for that (RT-Linux for example, if it still exists; I haven't used it since 2002).
As pointed out in the other answers, there is no way to make this code work as I expected without a major change in its design, given my constraints. So I changed my code not to depend on gettimeofday for deciding whether a philosopher died, or for the time value to print. Instead, I just add 200 μs to time every time my character eats or sleeps. This does feel like a cheap trick: while I display the correct system wall time at the start, my time variable drifts further from the system time the longer the program runs. But I guess this is what was wanted from me.
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct timeval time_add_microseconds(struct timeval time, long long microseconds)
{
    time.tv_usec += microseconds;
    while (time.tv_usec >= 1000000)
    {
        time.tv_sec += 1;
        time.tv_usec -= 1000000;
    }
    return (time);
}

short time_compare(struct timeval time_one, struct timeval time_two)
{
    if (time_one.tv_sec != time_two.tv_sec)
    {
        if (time_one.tv_sec > time_two.tv_sec)
            return (1);
        else
            return (-1);
    }
    else
    {
        if (time_one.tv_usec > time_two.tv_usec)
            return (1);
        else if (time_one.tv_usec == time_two.tv_usec)
            return (0);
        else
            return (-1);
    }
}

bool is_destined_to_die(int interval, struct timeval current_time,
                        struct timeval last_eaten_time, struct timeval time_to_die)
{
    current_time = time_add_microseconds(current_time, interval);
    if ((current_time.tv_sec * 1000000ULL + current_time.tv_usec)
        - (last_eaten_time.tv_sec * 1000000ULL + last_eaten_time.tv_usec)
        >= time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec)
        return (true);
    else
        return (false);
}

// Wait until interval in microseconds has passed or until death_time is reached.
void busy_wait(int interval, struct timeval current_time,
               struct timeval last_eaten_time, struct timeval time_to_die)
{
    struct timeval time;
    struct timeval end_time;
    struct timeval death_time;

    gettimeofday(&time, NULL);
    if (is_destined_to_die(interval, current_time, last_eaten_time, time_to_die))
    {
        death_time = time_add_microseconds(last_eaten_time,
                                           time_to_die.tv_sec * 1000000ULL + time_to_die.tv_usec);
        while (time_compare(time, death_time) == -1)
            gettimeofday(&time, NULL);
        printf("%llu died\n", time.tv_sec * 1000000ULL + time.tv_usec);
        exit(1);
    }
    end_time = time_add_microseconds(time, interval);
    while (time_compare(time, end_time) == -1)
        gettimeofday(&time, NULL);
}

int main(void)
{
    struct timeval time;
    struct timeval time_to_die = { .tv_sec = 0, .tv_usec = 1000 };
    struct timeval last_eaten_time = { .tv_sec = 0, .tv_usec = 0 };

    gettimeofday(&time, NULL);
    while (true)
    {
        printf("%llu eating\n", time.tv_sec * 1000000ULL + time.tv_usec);
        last_eaten_time = time;
        busy_wait(200, time, last_eaten_time, time_to_die);
        time = time_add_microseconds(time, 200);
        printf("%llu sleeping\n", time.tv_sec * 1000000ULL + time.tv_usec);
        busy_wait(200, time, last_eaten_time, time_to_die);
        time = time_add_microseconds(time, 200);
    }
}

Does `clock` measure `sleep` i.e. suspended threads?

I am trying to understand the clock_t clock(void) function better and have the following question:
Did I understand correctly that clock measures the number of ticks of a process while it is actively running, and that sleep suspends the calling thread (in this case there is only one thread, namely the main thread) and therefore suspends the whole process? Which means that clock does not measure the CPU time (ticks) of the process while it sleeps, since it is not actively running?
If so, what is the recommended way to measure the actual needed time?
"The clock() function returns an approximation of processor time used by the program." Source
"The CPU time (process time) is measured in clock ticks or seconds." Source
"The number of clock ticks per second can be obtained using: sysconf(_SC_CLK_TCK);" Source
#include <stdio.h>  // printf
#include <time.h>   // clock
#include <unistd.h> // sleep, sysconf

int main()
{
    printf("ticks per second: %ld\n", sysconf(_SC_CLK_TCK));

    clock_t ticks_since_process_startup_1 = clock();
    sleep(1);
    clock_t ticks_since_process_startup_2 = clock();

    printf("ticks_probe_1: %ld\n", (long) ticks_since_process_startup_1);
    printf("sleep(1);\n");
    printf("ticks_probe_2: %ld\n", (long) ticks_since_process_startup_2);
    printf("ticks diff: %ld <-- should be 100\n",
           (long) (ticks_since_process_startup_2 - ticks_since_process_startup_1));
    printf("ticks diff sec: %Lf <-- should be 1 second\n",
           (long double) (ticks_since_process_startup_2 - ticks_since_process_startup_1) / CLOCKS_PER_SEC);
    return 0;
}
Resulting Output:
ticks per second: 100
ticks_probe_1: 603
sleep(1);
ticks_probe_2: 616
ticks diff: 13 <-- should be 100
ticks diff sec: 0.000013 <-- should be 1 second
Does clock measure sleep i.e. suspended threads?
No.
(Well, it could measure it; there's nothing against that. You could have a very bad OS that implements sleep() as a while (!time_to_sleep_expired()) {} busy loop. But any self-respecting OS will make the process use no CPU while it is in sleep().)
Which means that clock does not measure the cpu-time (ticks) of the process, since it is not actively running?
Yes.
If so what is the recommended way to measure the actual needed time?
To measure "real-time" use clock_gettime(CLOCK_MONOTONIC, ...) on a POSIX system.
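A minimal sketch of that (POSIX; on some older systems you must link with -lrt):

#include <stdio.h>
#include <time.h>    // clock_gettime, CLOCK_MONOTONIC
#include <unistd.h>  // sleep

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    sleep(1);  // the interval being measured

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0
              + (t1.tv_nsec - t0.tv_nsec) / 1000000.0;
    printf("elapsed: %.3f ms\n", ms);  // about 1000 ms, sleep included
    return 0;
}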
The number of clock ticks per second can be obtained using: sysconf(_SC_CLK_TCK);
Yes, but note that sysconf(_SC_CLK_TCK) is not CLOCKS_PER_SEC; it is the tick unit used by, e.g., times(). You do not use times() in your code, you use clock(), so I do not really see why you print sysconf(_SC_CLK_TCK). Anyway, see for example sysconf(_SC_CLK_TCK) vs. CLOCKS_PER_SEC .

Time measuring in C program

I need to measure real and virtual (CPU) time of loop execution. This loop includes multiprocessing and child processes.
For CPU time I tried:
clock_t start, end;
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC; // in sec
and for real time:
struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);
/* ... the measured loop runs here ... */
gettimeofday(&tv2, NULL);
printf("Total time = %f sec. \n",
       (double) (tv2.tv_usec - tv1.tv_usec) / 1000000 + (tv2.tv_sec - tv1.tv_sec));
But the measurements are not adequate. As far as I understand, the CPU time should be a bit longer in this case, but there is almost no difference between them.
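One thing worth noting here: clock() counts only the calling process's CPU time, not that of its children, so a loop that forks workers can show almost no clock() time at all. A minimal sketch (mine, not from the question) that measures wall time with gettimeofday() and CPU time including waited-for children with times():

#include <stdio.h>
#include <sys/time.h>   // gettimeofday
#include <sys/times.h>  // times, struct tms
#include <unistd.h>     // sysconf

int main(void)
{
    struct timeval tv1, tv2;
    struct tms cpu1, cpu2;
    long ticks = sysconf(_SC_CLK_TCK);  // clock ticks per second for times()

    gettimeofday(&tv1, NULL);
    times(&cpu1);

    /* ... the loop with fork()/wait() goes here ... */

    times(&cpu2);
    gettimeofday(&tv2, NULL);

    double real = (tv2.tv_sec - tv1.tv_sec)
                + (tv2.tv_usec - tv1.tv_usec) / 1000000.0;
    // tms_cutime/tms_cstime cover children that have been waited for.
    double cpu = (double) ((cpu2.tms_utime + cpu2.tms_stime
                          + cpu2.tms_cutime + cpu2.tms_cstime)
                         - (cpu1.tms_utime + cpu1.tms_stime
                          + cpu1.tms_cutime + cpu1.tms_cstime)) / ticks;
    printf("real %.6f s, cpu %.6f s\n", real, cpu);
    return 0;
}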

Measuring time with time.h?

I'm trying to measure the time it takes to run a command using my own command interpreter, but is the time correct? When I run a command, it reports a time much longer than expected:
miniShell>> pwd
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug
Execution time 1828 ms
I'm using gettimeofday, as can be seen from the code. Isn't it wrong somewhere, and shouldn't it be changed so that the timing looks reasonable?
If I make a minimal example, then it looks and runs like this:
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    long time;
    struct timeval time_start;
    struct timeval time_end;

    gettimeofday(&time_start, NULL);
    printf("run program>> ");
    gettimeofday(&time_end, NULL);
    time = (time_end.tv_sec - time_start.tv_sec) * 1000000 + time_end.tv_usec - time_start.tv_usec;
    printf("Execution time %ld ms\n", time); /* Print out the execution time */
    return (0);
}
Then I run it
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug/oslab
run program>> Execution time 14 ms
Process finished with exit code 0
The above 14 ms seems reasonable; so why is the time so long for my command?
The tv_usec in struct timeval is a time in microseconds, not milliseconds.
You compute the time incorrectly. tv_usec, where the u stands for the Greek lowercase letter μ ("mu"), holds a number of microseconds. Fix the formula this way:
gettimeofday(&time_end, NULL);
time = (((time_end.tv_sec - time_start.tv_sec) * 1000000LL) +
        time_end.tv_usec - time_start.tv_usec) / 1000;
printf("Execution time %ld ms\n", time); /* Print out the execution time */
It is preferable to make the computation in 64 bits to avoid overflows if long is 32 bits and the elapsed time can exceed 40 minutes.
If you want to preserve the maximum precision, keep the computation in microseconds and print the number of milliseconds with a decimal point:
gettimeofday(&time_end, NULL);
time = (time_end.tv_sec - time_start.tv_sec) * 1000000 +
       time_end.tv_usec - time_start.tv_usec;
printf("Execution time %ld.%03ld ms\n", time / 1000, time % 1000);

Linux issues on setting a timer function

I am creating a process with 2 children: one child is responsible for reading questions (line by line from a file), printing each question and reading the answer, and the other is responsible for measuring the elapsed time and notifying the user every minute about the remaining time. My problem is that I couldn't find any useful example of how to make this timer function work. Here is what I have tried so far. The problem is that it prints the same elapsed time every iteration and never gets out of the loop.
#include <stdio.h>  // printf
#include <unistd.h> // sleep
#include <time.h>   // clock

#define T 600000

int main() {
    clock_t start, end;
    double elapsed;

    start = clock();
    end = start + T;
    while (clock() < end) {
        elapsed = (double) (end - clock()) / CLOCKS_PER_SEC;
        printf("you have %f seconds left\n", elapsed);
        sleep(60);
    }
    return 0;
}
As I commented, you should read the time(7) man page.
Notice that clock(3) measures processor time, not real time.
I suggest using clock_gettime(2) with CLOCK_REALTIME (or perhaps CLOCK_MONOTONIC). See also localtime(3) and strftime(3).
Also see timer_create(2), the Linux-specific timerfd_create(2), poll(2), etc. Read also Advanced Linux Programming.
If you dare use signals, read signal(7) carefully. But timerfd_create and poll should probably be enough for you.
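For the record, a minimal sketch of the timerfd_create + poll approach (Linux-only, my own illustration; the 10-minute total and one-minute tick are assumed from the question's T):

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    // One timer that first fires after 60 s, then every 60 s.
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 60, .tv_nsec = 0 },
        .it_interval = { .tv_sec = 60, .tv_nsec = 0 },
    };
    timerfd_settime(tfd, 0, &its, NULL);

    struct pollfd pfd = { .fd = tfd, .events = POLLIN };
    for (int remaining = 600; remaining > 0; remaining -= 60)
    {
        printf("you have %d seconds left\n", remaining);
        poll(&pfd, 1, -1);  // block until the timer expires
        uint64_t expirations;
        read(tfd, &expirations, sizeof expirations);  // consume the expiry count
    }
    close(tfd);
    return 0;
}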
Here is something simple that seems to work:
#include <stdio.h>
#include <unistd.h> // sleep

#define TIME 300    // total seconds

int main()
{
    int i;

    // Count down in 60-second steps, reporting whole minutes left.
    for (i = TIME; i > 0; i -= 60)
    {
        printf("You have [%d] minutes left.\n", i / 60);
        sleep(60);
    }
    return 0;
}
Give it a try.
