Getting the real, user, sys time of functions within a program - C

Looking for assistance on getting the real, user, and sys times of functions within my C program, for instance how long it took to read in a file. I have been looking at #include <time.h> and using the time() function with the -p flag, but I am striking out on the execution of this. I guess my question is: if I have
time_t total_time;
time_t start, end;
start = time(-p, &start);
<some code>
end = time(-p, &end);
printf("real = %e, user = %S, sys = %S\n", ???????????);
I understand the differences between the three times, just don't know the proper execution of getting the results.

Check time()'s return value:
The value returned generally represents the number of seconds since 00:00, Jan 1, 1970 UTC (i.e., the current Unix timestamp), although libraries may use a different representation of time. Portable programs should not use the value returned by this function directly, but should instead rely on other elements of the standard library to translate it to portable types (such as localtime, gmtime, or difftime).
To display the real time, read this answer.
For user and system, read: How to measure user/system cpu time for a piece of program?

The real elapsed time can be determined by calling time() before and after, and subtracting the results. If you want to be portable, you should use the difftime function to perform the subtraction, because the C standard does not specify the encoding of the value returned by time().
If you are using a POSIX compliant system, however, the return value from time() will be the number of seconds since the epoch, so you could just subtract them to get the elapsed seconds.
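For example, a minimal sketch of this approach (the sleep() call is just a stand-in for the code being measured):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    time_t start = time(NULL);
    sleep(2);                              /* stand-in for the work being timed */
    time_t end = time(NULL);
    /* difftime() portably subtracts two time_t values, yielding seconds */
    printf("elapsed: %.0f seconds\n", difftime(end, start));
    return 0;
}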
If you want a higher-resolution result, you can use gettimeofday(), which fills in a timeval struct containing microsecond-resolution time.
The most portable way to access the user and system times would be to call the clock() C library function; however, you have noted in your comments to @gsamaras that your system does not have this function.
Depending on the system you are working with, you may be able to call the system functions times() or getrusage(). getrusage() would be easier to use, because it returns the values as timeval structs, which contain seconds and microseconds. times() returns clock ticks, which you would have to convert if you wanted actual time units.
If your program is threaded, and you are using Linux, getrusage() offers an additional advantage: you can get the resource consumption for the current thread.
Whichever function you choose, the process is the same: take an initial reading, take a final reading, and subtract the two to see the time consumed by your function.
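Putting the pieces together, here is a minimal sketch assuming a POSIX system with gettimeofday() and getrusage() (the sleep() call stands in for the code being measured, e.g. reading a file):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>

/* Subtract two timevals, returning the difference in seconds as a double. */
static double tv_diff(struct timeval end, struct timeval start) {
    return (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
}

int main(void) {
    struct timeval real0, real1;
    struct rusage ru0, ru1;

    gettimeofday(&real0, NULL);
    getrusage(RUSAGE_SELF, &ru0);

    sleep(1);  /* stand-in for the code being measured */

    gettimeofday(&real1, NULL);
    getrusage(RUSAGE_SELF, &ru1);

    printf("real = %f, user = %f, sys = %f\n",
           tv_diff(real1, real0),
           tv_diff(ru1.ru_utime, ru0.ru_utime),
           tv_diff(ru1.ru_stime, ru0.ru_stime));
    return 0;
}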

Related

Using clock() as a timer in c

I have the following basic three lines of code to see if the clock is timing things correctly:
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#define wait(seconds) sleep( seconds )
int main(void) {
    printf("%lu\n", (unsigned long)clock());
    wait(2);
    printf("%lu\n", (unsigned long)clock());
    return 0;
}
wait() seems to be working fine: when I run it, it 'feels' like it is pausing for 2s.
However, here is what the print command gives:
253778
253796
And so when I do something like:
(double) (clock() - t0) / CLOCKS_PER_SEC
It gives me useless results.
What is clock() doing here that I'm not understanding, and how can I fix this to get an accurate timer?
ISO C says that the "clock function returns the implementation's best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation."
In other words, it is not a real-time clock.
On platforms where C code runs as a process that can be put to sleep by an underlying operating system, a proper implementation of clock stops counting when that happens.
ISO C provides a function time that is intended to provide calendar time, encoded as a time_t arithmetic value. The difftime function computes the difference between two time_t values as a floating-point value of type double measuring seconds. On many systems, the precision of time_t is no better than one second, though.
There are POSIX functions for finer resolution time such as gettimeofday, or clock_gettime (with a suitable clock type argument).
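For instance, a minimal interval-timing sketch using clock_gettime() with CLOCK_MONOTONIC (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);   /* immune to system clock changes */
    sleep(2);                              /* stand-in for the work being timed */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %f seconds\n", elapsed);
    return 0;
}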
If you have a read of the man page for clock(), you will see the following listed in the notes:
On several other implementations, the value returned by clock() also
includes the times of any children whose status has been collected via
wait(2) (or another wait-type call). Linux does not include the times
of waited-for children in the value returned by clock(). The times(2)
function, which explicitly returns (separate) information about the
caller and its children, may be preferable.
So clock() doesn't always include the time taken by something like wait() on all implementations. If you are on Linux, you may want to take a look at the times() function, declared in <sys/times.h>, as it returns a struct that includes the time (in clock ticks) consumed by children (so time spent in calls like wait() is accounted for).
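As a rough sketch of what that might look like (the forking work itself is elided; the comments note which tms fields cover children):

#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

int main(void) {
    struct tms t0, t1;
    long ticks = sysconf(_SC_CLK_TCK);     /* clock ticks per second */

    times(&t0);
    /* ... do work here, possibly fork()ing children and wait()ing on them ... */
    times(&t1);

    /* tms_utime/tms_stime cover the caller itself; tms_cutime/tms_cstime
       cover waited-for children, which clock() on Linux does not include */
    printf("self:     user %.2f s, sys %.2f s\n",
           (double)(t1.tms_utime - t0.tms_utime) / ticks,
           (double)(t1.tms_stime - t0.tms_stime) / ticks);
    printf("children: user %.2f s, sys %.2f s\n",
           (double)(t1.tms_cutime - t0.tms_cutime) / ticks,
           (double)(t1.tms_cstime - t0.tms_cstime) / ticks);
    return 0;
}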

Get process start time in kernel module (Solaris)

I am trying to get the process start time in a kernel module.
I get the proc struct pointer, and from the proc I take the field p_mstart:
typedef struct proc {
.....
/*
* Microstate accounting, resource usage, and real-time profiling
*/
hrtime_t p_mstart; /* hi-res process start time */
This returns me the number 1976026375725303:
struct proc *iterated_process_ptr = curproc;
LOG("***KERNEL***: PID=%d, StartTime=%lld", iterated_process_ptr->p_pidp->pid_id, iterated_process_ptr->p_mstart);
What is this number?
In the documentation, Solaris writes:
The gethrtime() function returns the current high-resolution real time. Time is expressed as nanoseconds since some arbitrary time in the past.
And in the book Solaris Internals they write:
Within the process, the operating system maintains a high-resolution timestamp that marks process start and terminate times. A p_mstart field, the process start time, is set in the kernel fork() code when the process is created... it returns a 64-bit value expressed in nanoseconds.
The number 1976026375725303 does not make sense at all.
If I divide by 1,000,000,000 and then by 3600 in order to get hours, I get about 549 hours, roughly 22 days, but my uptime is 5 days.
Based on an answer received at the Google group comp.unix.solaris:
Instead of going to proc->p_mstart,
I need to take
iterated_process_ptr->p_user.u_start
This brings me the same struct (timestruc_t) as userspace:
typedef struct psinfo {
    ...
    timestruc_t pr_start;  /* process start time, from the epoch */
The number 1976026375725303 does not make sense at all.
Yes it does. Per the very documentation that you quoted:
Time is expressed as nanoseconds since some arbitrary time in the
past.
Thus, the value can be used to calculate how long ago the process started:
hrtime_t howLongAgo = gethrtime() - p->p_mstart;
That produces a value in nanoseconds for how long ago the process started.
And note that the value produced is accurate: the value from iterated_process_ptr->p_user.u_start is subject to system clock changes, so you can't say "this process has been running for 3 hours, 15 minutes, and 3 seconds" unless you also know the system clock hasn't been reset or modified in any way.
Per the Solaris 11 gethrtime.9F man page:
Description
The gethrtime() function returns the current high-resolution real
time. Time is expressed as nanoseconds since some arbitrary time in
the past; it is not correlated in any way to the time of day, and
thus is not subject to resetting or drifting by way of adjtime(2) or
settimeofday(3C). The hi-res timer is ideally suited to performance
measurement tasks, where cheap, accurate interval timing is required.
Return Values
gethrtime() always returns the current high-resolution real time.
There are no error conditions.
...
Notes
Although the units of hi-res time are always the same (nanoseconds),
the actual resolution is hardware dependent. Hi-res time is guaranteed
to be monotonic (it does not go backward, it does not periodically
wrap) and linear (it does not occasionally speed up or slow down for
adjustment, as the time of day can), but not necessarily unique: two
sufficiently proximate calls might return the same value.
The time base used for this function is the same as that for
gethrtime(3C). Values returned by both of these functions can be
interleaved for comparison purposes.
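So, inside the module, a minimal sketch of computing the process age might look like this (assuming Solaris defines NANOSEC as 1000000000 in <sys/time.h>):

hrtime_t elapsed_ns  = gethrtime() - curproc->p_mstart;  /* nanoseconds since start */
uint64_t elapsed_sec = elapsed_ns / NANOSEC;              /* NANOSEC == 1000000000 */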

How to measure user/system cpu time for a piece of program?

In section 3.9 of the classic APUE (Advanced Programming in the UNIX Environment), the author measures the user/system time consumed by his sample program (an I/O read/write program) when run with varying buffer sizes.
The result table goes something like this (all times are in seconds):
BUFF_SIZE   USER_CPU   SYSTEM_CPU   CLOCK_TIME   LOOPS
        1     124.89       161.65       288.64   103316352
      ...
      512       0.27         0.41         7.03   201789
      ...
I'm really wondering how to measure the USER/SYSTEM CPU time for a piece of a program.
Also, in this example, what does CLOCK_TIME mean and how is it measured?
Obviously it isn't simply the sum of the user and system CPU times.
You could easily measure the running time of a program using the time command under *nix:
$ time myprog
real 0m2.792s
user 0m0.099s
sys 0m0.200s
The real or CLOCK_TIME refers to the wall-clock time, i.e. the time taken from the start of the program to its finish. It includes even the time slices taken by other processes when the kernel context-switches them, as well as any time the process is blocked (on I/O events, etc.).
The user or USER_CPU refers to the CPU time spent in user space, i.e. outside the kernel. Unlike the real time, it counts only the CPU cycles used by the particular process.
The sys or SYSTEM_CPU refers to the CPU time spent in kernel space (as part of system calls). Again, this counts only the CPU cycles spent in kernel space on behalf of the process, not any time it is blocked.
In the time utility, user and sys are calculated from either the times() or wait() system calls. real is usually calculated from the difference between two timestamps gathered using the gettimeofday() system call at the start and end of the program.
One more thing you might want to know: real != user + sys in general. On a multicore system, user or sys (or their sum) can quite easily exceed the real time.
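As a rough sketch of how a time(1)-like measurement could be gathered in C, assuming a POSIX system with wait3() (the "sleep 1" child is just a hypothetical command to measure):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct timeval real0, real1;
    struct rusage ru;
    int status;

    gettimeofday(&real0, NULL);
    pid_t pid = fork();
    if (pid == 0) {                        /* child: run the measured command */
        execlp("sleep", "sleep", "1", (char *)NULL);
        _exit(127);                        /* only reached if exec fails */
    }
    wait3(&status, 0, &ru);                /* reap the child, collect its rusage */
    gettimeofday(&real1, NULL);

    printf("real %f\n", (real1.tv_sec - real0.tv_sec) +
                        (real1.tv_usec - real0.tv_usec) / 1e6);
    printf("user %f\n", ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6);
    printf("sys  %f\n", ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);
    return 0;
}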
Partial answer:
Well, CLOCK_TIME is the same as the time shown by a clock, i.e. time passed in the so-called "real world".
One way to measure it is to use the gettimeofday POSIX function, which stores the time in the caller's struct timeval, containing a Unix seconds field and a microseconds field (the actual accuracy is often lower). Example for using it in typical benchmark code (ignoring errors etc.):
#include <stdio.h>
#include <sys/time.h>

struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);
do_operation_to_measure();
gettimeofday(&tv2, NULL);
/* get difference, fix value if microseconds became negative */
struct timeval tvdiff = { tv2.tv_sec - tv1.tv_sec, tv2.tv_usec - tv1.tv_usec };
if (tvdiff.tv_usec < 0) { tvdiff.tv_usec += 1000000; tvdiff.tv_sec -= 1; }
/* print it */
printf("Elapsed time: %ld.%06ld\n", (long)tvdiff.tv_sec, (long)tvdiff.tv_usec);

measuring time of multi-threading program

I measured time with the function clock(), but it gives bad results: the same results for the program with one thread and for the same program running with OpenMP with many threads. But in fact, I can tell with my watch that the multi-threaded program runs faster.
So I need some wall-clock timer...
My question is: what is the better function for this issue?
clock_gettime() or maybe gettimeofday()? Or maybe something else?
If clock_gettime(), then with which clock: CLOCK_REALTIME or CLOCK_MONOTONIC?
Using Mac OS X (Snow Leopard).
If you want wall-clock time, and clock_gettime() is available, it's a good choice. Use it with CLOCK_MONOTONIC if you're measuring intervals of time, and CLOCK_REALTIME to get the actual time of day.
CLOCK_REALTIME gives you the actual time of day, but is affected by adjustments to the system time -- so if the system time is adjusted while your program runs that will mess up measurements of intervals using it.
CLOCK_MONOTONIC doesn't give you the correct time of day, but it does count at the same rate and is immune to changes to the system time -- so it's ideal for measuring intervals, but useless when correct time of day is needed for display or for timestamps.
I think clock() counts the total CPU usage across all threads; I had this problem too...
The choice of wall-clock timing method is personal preference. I use an inline wrapper function to take timestamps (take the difference of two timestamps to time your processing). I've used floating point for convenience (units are in seconds, so there's no need to worry about integer overflow). With multi-threading, there are so many asynchronous events that in my opinion it doesn't make sense to time below 1 microsecond. This has worked very well for me so far :)
Whatever you choose, a wrapper is the easiest way to experiment:
#include <sys/time.h>

static inline double my_clock(void) {
    struct timeval t;
    gettimeofday(&t, NULL);
    return (1.0e-6 * t.tv_usec + t.tv_sec);  /* seconds as a double */
}
usage:
double start_time, end_time;
start_time = my_clock();
//some multi-threaded processing
end_time = my_clock();
printf("time is %lf\n", end_time-start_time);

NTP timestamp in C program

I wish to write a C program which obtains the system time and then
uses this time to print out its NTP equivalent.
Am I right in saying that the following is correct for the seconds part
of the ntp time?
long int ntp_seconds = time(NULL) + 2208988800;
How might I calculate the fraction part?
The fractional part to add obviously is 0 ps ... ;-)
So the question for the fraction reduces to how accurate the system clock is.
gettimeofday() gets you microseconds; clock_gettime() can get you nanoseconds.
Anyhow, I doubt you'll reach the theoretically possible resolution that the 32-bit-wide fraction allows (at least on a standard PC).
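As a sketch of how the conversion might look, assuming gettimeofday() for the sub-second part (the unix_to_ntp name is just for illustration):

#include <stdint.h>
#include <sys/time.h>

#define NTP_UNIX_OFFSET 2208988800UL   /* seconds between 1900-01-01 and 1970-01-01 */

/* Convert the current Unix time to a 64-bit NTP timestamp:
   32-bit seconds since 1900 in the high half, 32-bit fraction in the low half. */
static uint64_t unix_to_ntp(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    uint32_t seconds  = (uint32_t)(tv.tv_sec + NTP_UNIX_OFFSET);
    /* scale microseconds into a 32-bit binary fraction: usec * 2^32 / 10^6 */
    uint32_t fraction = (uint32_t)(((uint64_t)tv.tv_usec << 32) / 1000000ULL);
    return ((uint64_t)seconds << 32) | fraction;
}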
