Measuring time in a C program - c

I need to measure the real and virtual (CPU) time of a loop's execution. The loop uses multiprocessing and spawns child processes.
For CPU time I tried:
clock_t start, end;
cpu_time_used = (double)(end - start) / CLOCKS_PER_SEC; // in seconds
and for real time:
struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);
gettimeofday(&tv2, NULL);
printf("Total time = %f sec. \n", (double)(tv2.tv_usec - tv1.tv_usec) / 1000000) + (tv2.tv_sec - tv1.tv_sec));
But the measurements do not look right. As far as I understand, the CPU time should be somewhat longer in this case, yet there is almost no difference between the two.
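For reference, here is a self-contained sketch combining both measurements with the printf fixed (note that on typical implementations clock() in the parent does not count CPU time used by child processes, which may explain why the CPU figure stays small):
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timeval tv1, tv2;
    clock_t start, end;

    gettimeofday(&tv1, NULL);
    start = clock();

    /* ... loop that forks child processes ... */

    end = clock();
    gettimeofday(&tv2, NULL);

    printf("CPU time   = %f sec.\n", (double)(end - start) / CLOCKS_PER_SEC);
    printf("Total time = %f sec.\n",
           (tv2.tv_sec - tv1.tv_sec) +
           (double)(tv2.tv_usec - tv1.tv_usec) / 1000000);
    return 0;
}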

Related

Does `clock` measure `sleep` i.e. suspended threads?

I am trying to understand the clock_t clock(void); function better and have the following question:
Did I understand correctly that clock measures the number of ticks of a process while it is actively running, and that sleep suspends the calling thread – in this case there is only one thread, namely the main thread – and therefore suspends the whole process? Which means that clock does not measure the CPU time (ticks) of the process while it is not actively running?
If so, what is the recommended way to measure the actual needed time?
"The clock() function returns an approximation of processor time used by the program." Source
"The CPU time (process time) is measured in clock ticks or seconds." Source
"The number of clock ticks per second can be obtained using: sysconf(_SC_CLK_TCK);" Source
#include <stdio.h>  // printf
#include <time.h>   // clock
#include <unistd.h> // sleep, sysconf

int main(void)
{
    printf("ticks per second: %ld\n", sysconf(_SC_CLK_TCK));
    clock_t ticks_since_process_startup_1 = clock();
    sleep(1);
    clock_t ticks_since_process_startup_2 = clock();
    printf("ticks_probe_1: %ld\n", (long)ticks_since_process_startup_1);
    printf("sleep(1);\n");
    printf("ticks_probe_2: %ld\n", (long)ticks_since_process_startup_2);
    printf("ticks diff: %ld <-- should be 100\n",
           (long)(ticks_since_process_startup_2 - ticks_since_process_startup_1));
    printf("ticks diff sec: %Lf <-- should be 1 second\n",
           (long double)(ticks_since_process_startup_2 - ticks_since_process_startup_1) / CLOCKS_PER_SEC);
    return 0;
}
Resulting Output:
ticks per second: 100
ticks_probe_1: 603
sleep(1);
ticks_probe_2: 616
ticks diff: 13 <-- should be 100
ticks diff sec: 0.000013 <-- should be 1 second
Does clock measure sleep i.e. suspended threads?
No.
(Well, it could measure it; there's nothing against it. You could have a very bad OS that implements sleep() as a while (!time_to_sleep_expired()) {} busy loop. But any self-respecting OS will try to ensure the process uses no CPU while in sleep().)
Which means that clock does not measure the CPU time (ticks) of the process while it is not actively running?
Yes.
If so, what is the recommended way to measure the actual needed time?
To measure "real-time" use clock_gettime(CLOCK_MONOTONIC, ...) on a POSIX system.
The number of clock ticks per second can be obtained using: sysconf(_SC_CLK_TCK);
Yes, but note that sysconf(_SC_CLK_TCK) is not CLOCKS_PER_SEC. sysconf(_SC_CLK_TCK) is the tick unit used by, for example, times(); your function uses clock(), which reports in units of CLOCKS_PER_SEC, so printing sysconf(_SC_CLK_TCK) does not tell you anything about clock()'s values. Anyway, see for example sysconf(_SC_CLK_TCK) vs. CLOCKS_PER_SEC.
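To illustrate the difference in units, a small sketch (assuming a POSIX system) could report CPU time via both interfaces; the busy loop is only there so the counters move:
#include <stdio.h>
#include <sys/times.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* Burn a little CPU so both counters are non-zero. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;

    struct tms t;
    times(&t);            /* fields are in _SC_CLK_TCK units */
    clock_t c = clock();  /* value is in CLOCKS_PER_SEC units */

    printf("times(): %.3f s CPU (tick = 1/%ld s)\n",
           (double)(t.tms_utime + t.tms_stime) / sysconf(_SC_CLK_TCK),
           sysconf(_SC_CLK_TCK));
    printf("clock(): %.3f s CPU (tick = 1/%ld s)\n",
           (double)c / CLOCKS_PER_SEC, (long)CLOCKS_PER_SEC);
    return 0;
}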

Array sum with 4 threads is slower than 1 thread [duplicate]

I've always used clock() to measure how much time my application took from start to finish, as:
#include <time.h>

int main(int argc, char *argv[]) {
    const clock_t START = clock();
    // ...
    const double T_ELAPSED = (double)(clock() - START) / CLOCKS_PER_SEC;
}
Since I started using POSIX threads, this seems to fail. It looks like clock() increases N times faster with N threads. As I don't know how many threads are going to be running simultaneously, this approach fails. So how can I measure how much time has passed?
clock() measures the CPU time used by your process, not the wall-clock time. When you have multiple threads running simultaneously, you can obviously burn through CPU time much faster.
If you want to know the wall-clock execution time, you need to use an appropriate function. The only one in ANSI C is time(), which typically only has 1 second resolution.
However, as you've said you're using POSIX, that means you can use clock_gettime(), defined in time.h. The CLOCK_MONOTONIC clock in particular is the best to use for this:
struct timespec start, finish;
double elapsed;
clock_gettime(CLOCK_MONOTONIC, &start);
/* ... */
clock_gettime(CLOCK_MONOTONIC, &finish);
elapsed = (finish.tv_sec - start.tv_sec);
elapsed += (finish.tv_nsec - start.tv_nsec) / 1000000000.0;
(Note that I have done the calculation of elapsed carefully to ensure that precision is not lost when timing very short intervals).
If your OS doesn't provide CLOCK_MONOTONIC (which you can check at runtime with sysconf(_SC_MONOTONIC_CLOCK)), then you can use CLOCK_REALTIME as a fallback - but note that the latter has the disadvantage that it will generate incorrect results if the system time is changed while your process is running.
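A minimal sketch of that runtime check and fallback might look like this (sysconf(_SC_MONOTONIC_CLOCK) returns a positive value when the option is supported and -1 otherwise):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Prefer CLOCK_MONOTONIC when the system advertises it,
   otherwise fall back to CLOCK_REALTIME. */
static clockid_t pick_clock(void)
{
#ifdef _SC_MONOTONIC_CLOCK
    if (sysconf(_SC_MONOTONIC_CLOCK) > 0)
        return CLOCK_MONOTONIC;
#endif
    return CLOCK_REALTIME;
}

int main(void)
{
    struct timespec start, finish;
    clockid_t clk = pick_clock();

    clock_gettime(clk, &start);
    /* ... work ... */
    clock_gettime(clk, &finish);

    printf("elapsed: %f s\n",
           (finish.tv_sec - start.tv_sec) +
           (finish.tv_nsec - start.tv_nsec) / 1e9);
    return 0;
}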
What timing resolution do you need? You could use time() from time.h for second resolution. If you need higher resolution, then you could use something more system-specific. See Timer function to provide time in nano seconds using C++.
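For second resolution, a minimal sketch using time() and difftime() might look like this:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);
    /* ... work ... */
    time_t end = time(NULL);

    printf("elapsed: about %.0f s\n", difftime(end, start));
    return 0;
}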

Is the CLOCKS_PER_SEC value "wrong" inside a virtual machine

I code inside a virtual machine (Ubuntu Linux) that's installed on Windows.
According to this page, the CLOCKS_PER_SEC value in the C library on Linux should always be 1 000 000.
When I run this code:
#include <stdio.h>
#include <time.h>

int main(void)
{
    printf("%ld\n", (long)CLOCKS_PER_SEC);
    while (1)
        printf("%f, %f \n\n", (double)clock(), (double)clock() / CLOCKS_PER_SEC);
    return 0;
}
The first value is 1 000 000 as it should be; however, the value that should show the number of seconds does NOT increase at the normal pace (it takes between 4 and 5 seconds to increase by 1).
Is this due to my working on a virtual machine? How can I solve this?
This is expected.
The clock() function does not return the wall time (the time that real clocks on the wall display). It returns the amount of CPU time used by your program. If your program is not consuming every possible scheduler slice, then it will increase slower than wall time; if your program consumes slices on multiple cores at the same time, it can increase faster.
So if you call clock(), and then sleep(5), and then call clock() again, you'll find that clock() has barely increased at all. Even though sleep(5) waits for 5 real seconds, it doesn't consume any CPU, and CPU usage is what clock() measures.
If you want to measure wall clock time you will want clock_gettime() (or the older version gettimeofday()). You can use CLOCK_REALTIME if you want to know the civil time (e.g. "it's 3:36 PM") or CLOCK_MONOTONIC if you want to measure time intervals. In this case, you probably want CLOCK_MONOTONIC.
#include <stdio.h>
#include <time.h>

int main() {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        printf("Elapsed: %f\n",
               (now.tv_sec - start.tv_sec) +
               1e-9 * (now.tv_nsec - start.tv_nsec));
    }
}
The usual proscriptions against using busy-loops apply here.
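If you only need periodic samples, a variant of the loop above can sleep between iterations instead of spinning (a sketch; sleep() from unistd.h is assumed here):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main() {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        printf("Elapsed: %f\n",
               (now.tv_sec - start.tv_sec) +
               1e-9 * (now.tv_nsec - start.tv_nsec));
        sleep(1); /* sample once per second instead of busy-looping */
    }
}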

Measuring time with time.h?

I'm trying to measure the time it takes to run a command using my own command interpreter, but is the time correct? When I run a command, it reports a time much longer than expected:
miniShell>> pwd
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug
Execution time 1828 ms
I'm using gettimeofday, as can be seen from the code. Is something wrong with it that should be changed so that the timing comes out reasonable?
If I make a minimal example, then it looks and runs like this:
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(int argc, char *argv[]) {
    long time;
    struct timeval time_start;
    struct timeval time_end;

    gettimeofday(&time_start, NULL);
    printf("run program>> ");
    gettimeofday(&time_end, NULL);
    time = (time_end.tv_sec - time_start.tv_sec) * 1000000 + time_end.tv_usec - time_start.tv_usec;
    printf("Execution time %ld ms\n", time); /* Print out the execution time */
    return (0);
}
Then I run it
/home/dac/.clion11/system/cmake/generated/c0a6fa89/c0a6fa89/Debug/oslab
run program>> Execution time 14 ms
Process finished with exit code 0
The above 14 ms seems reasonable; why is the time so long for my command?
The tv_usec in struct timeval is a time in microseconds, not milliseconds.
You compute the time incorrectly. tv_usec, where the u stands for the Greek lowercase letter μ ("mu"), holds a number of microseconds. Fix the formula this way:
gettimeofday(&time_end, NULL);
time = (((time_end.tv_sec - time_start.tv_sec) * 1000000LL) +
time_end.tv_usec - time_start.tv_usec) / 1000;
printf("Execution time %ld ms\n", time); /* Print out the execution time*/
It is preferable to make the computation in 64 bits to avoid overflows if long is 32 bits and the elapsed time can exceed about 35 minutes (2^31 microseconds).
If you want to preserve the maximum precision, keep the computation in microseconds and print the number of milliseconds with a decimal point:
gettimeofday(&time_end, NULL);
time = (time_end.tv_sec - time_start.tv_sec) * 1000000 +
time_end.tv_usec - time_start.tv_usec;
printf("Execution time %ld.%03ld ms\n", time / 1000, time % 1000);

C Timer Difference returning 0ms

I'm learning C (and Cygwin) and trying to complete a simple remote execution system for an assignment.
One simple requirement that I'm getting hung up on is: 'Client will report the time taken for the server to respond to each query.'
I've tried searching around and implementing other working solutions, but I always get back 0 as a result.
A snippet of what I have:
#include <stdio.h>   // printf, fgets
#include <string.h>  // strcspn, strcmp, strlen
#include <strings.h> // bzero
#include <time.h>    // clock
#include <unistd.h>  // read, write, sleep

for (;;)
{
    //- Reset loop variables
    bzero(sendline, 1024);
    bzero(recvline, 1024);
    printf("> ");
    fgets(sendline, 1024, stdin);

    //- Handle program 'quit'
    sendline[strcspn(sendline, "\n")] = 0;
    if (strcmp(sendline, "quit") == 0) break;

    //- Process & time command
    clock_t start = clock(), diff;
    write(sock, sendline, strlen(sendline) + 1);
    read(sock, recvline, 1024);
    sleep(2);
    diff = clock() - start;
    int msec = diff * 1000 / CLOCKS_PER_SEC;
    printf("%s (%d s / %d ms)\n\n", recvline, msec / 1000, msec % 1000);
}
I've also tried using a float, and instead of dividing by 1000, multiplying by 10,000 just to see if there is any glint of a value, but I always get back 0.
Clearly something must be wrong with how I'm implementing this, but after much reading I can't figure it out.
--Edit--
Printout of values:
clock_t start = clock(), diff;
printf("Start time: %lld\n", (long long) start);
//process stuff
sleep(2);
printf("End time: %lld\n", (long long) clock());
diff = clock() - start;
printf("Diff time: %lld\n", (long long) diff);
printf("Clocks per sec: %d", CLOCKS_PER_SEC);
Result:
Start time: 15
End time: 15
Diff time: 0
Clocks per sec: 1000
-- FINAL WORKING CODE --
#include <sys/time.h>
//- Setup clock
struct timeval start, end;
//- Start timer
gettimeofday(&start, NULL);
//- Process command
/* Process stuff */
//- End timer
gettimeofday(&end, NULL);
//- Calculate difference in microseconds
long int usec =
    (end.tv_sec * 1000000 + end.tv_usec) -
    (start.tv_sec * 1000000 + start.tv_usec);
//- Convert to milliseconds
double msec = (double)usec / 1000;
//- Print result (3 decimal places)
printf("\n%s (%.3fms)\n\n", recvline, msec);
I think you misunderstand clock() and sleep().
clock() measures the CPU time used by your program, but sleep() sleeps without using any CPU time. Maybe you want to use time() or gettimeofday() instead?
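To see the difference concretely, a small sketch timing a sleep(2) with both functions could look like this; clock() should report close to zero, while gettimeofday() should report about two seconds:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timeval tv0, tv1;
    clock_t c0 = clock();
    gettimeofday(&tv0, NULL);

    sleep(2); /* uses essentially no CPU time */

    clock_t c1 = clock();
    gettimeofday(&tv1, NULL);

    printf("clock():        %.3f s of CPU time\n",
           (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("gettimeofday(): %.3f s of wall time\n",
           (tv1.tv_sec - tv0.tv_sec) +
           (tv1.tv_usec - tv0.tv_usec) / 1e6);
    return 0;
}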
Cygwin means you're on Windows.
On Windows, the "current time" on an executing thread is only updated every 64th of a second (roughly 16ms), so if clock() is based on it, even if it returns a number of milliseconds, it will never be more precise than 15.6ms.
GetThreadTimes() has the same limitation.
