Bad Results: time(NULL) and clock() - c

#include <stdio.h>
#include <time.h>

int main(void) {
    printf("Clock ticks per second: %d\n", CLOCKS_PER_SEC);
    double check = clock();
    int timex = time(NULL);
    for (int x = 0; x <= 500000; x++) {
        printf(".");
    }
    puts("\n");
    printf("Total Time by Clock: %7.7f\n", (clock() - check) / CLOCKS_PER_SEC);
    printf("Total Time by Time: %d\n", time(NULL) - timex);
    getchar();
}
When I execute the above code I get results like:
Total Time by Clock: 0.0108240
Total Time by Time: 12
I would like clock() to report a value as close as possible to the one reported by time().
The results above were produced on a MacBook; the same code behaves as expected on my Windows laptop.
The CLOCKS_PER_SEC macro is 1000 on the PC and 1,000,000 on the Mac.

clock() on Windows returns wall-clock time. clock() on *nix systems returns the CPU time your program has spent, which is not going to be much here: you are likely blocked doing I/O.

Each printf() to the console makes a system call, and time spent blocked while the console redraws, etc. does not count as process time.
Do some heavy calculation there instead.
double sink = 0.0;
for (long long x = 0; x <= 5000000000LL; x++) {
    sink += sqrt(2.9999);   /* keep the result so the compiler cannot discard the work */
}
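For example, timing a purely CPU-bound loop with both clock() and time() should give roughly matching numbers. A minimal sketch (the loop bound and the sqrt() call are arbitrary choices of mine, just enough work to keep the CPU busy for a second or two):

#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t c0 = clock();
    time_t t0 = time(NULL);

    double sink = 0.0;                           /* keep the result so the work is not optimized away */
    for (long long i = 0; i < 200000000LL; i++) {
        sink += sqrt((double)i);
    }

    printf("sink = %f\n", sink);
    printf("CPU time:  %f s\n", (double)(clock() - c0) / CLOCKS_PER_SEC);
    printf("Wall time: %f s\n", difftime(time(NULL), t0));
    return 0;
}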

time() returns a time_t. When you assign that to an int it is possible that you lose information. What happens if you use time_t throughout?
int main(void) {
    time_t timex = time(0);
    /* ... */
    printf("%d", (int)(time(0) - timex));
}

Related

Execution time using clock() gives 0 clock cycles

I want to be able to time the execution time of specific parts of an application.
Here is a reduced example that shows what I cannot understand:
#include <stdio.h>
#include <time.h>

void fun(void) {
    double result = 0;
    for (int i = 1; i < 200; i++) {
        for (int j = 1; j < 200; j++) {
            result += i / j; // Random mathematical expression
        }
    }
}

int main(int argc, char const *argv[])
{
    clock_t start, end;
    start = clock();
    fun();
    end = clock();
    clock_t clocks_taken = (end - start);
    double total_time = ((double)(end - start) / CLOCKS_PER_SEC) * 1000; // milliseconds
    printf("Clock cycles: %ld. Total time: %f ms", (long)clocks_taken, total_time);
}
Gives the output:
Clock cycles: 0. Total time: 0.000000 ms
From what I know, a clock cycle is a single execution step of the processor, and a nested for loop should require hundreds of cycles. Or am I mistaken?
I need to be able to measure execution time for tasks that take around 1 millisecond, even if the current example does not. If it matters, I am building and running the application on Windows.
You don't use the result, so it does not need to be computed. Since your function does nothing observable, it takes nearly no time at all. There is no point in benchmarking "toy" code.
Note that "clock cycles" are not CPU clock cycles. They are cycles of some kind of CPU usage timer that may tick in surprisingly large increments. What is CLOCKS_PER_SEC on your platform?

How does time.h clock() work under Windows?

I am trying to create a simple queue scheduler for an embedded system in C.
The idea is that within a Round Robin some functions are called based on the time constraints declared in the Tasks[] array.
#include <time.h>
#include <stdio.h>
#include <windows.h>
#include <stdint.h>

// Constants
#define SYS_TICK_INTERVAL   1000UL
#define INTERVAL_0MS        0
#define INTERVAL_10MS       (100000UL / SYS_TICK_INTERVAL)
#define INTERVAL_50MS       (500000UL / SYS_TICK_INTERVAL)

// Function calls
void task_1(clock_t tick);
void task_2(clock_t tick);
uint8_t get_NumberOfTasks(void);

// Define the schedule structure
typedef struct
{
    double Interval;
    double LastTick;
    void (*Function)(clock_t tick);
} TaskType;

// Creating the schedule itself
TaskType Tasks[] =
{
    {INTERVAL_10MS, 0, task_1},
    {INTERVAL_50MS, 0, task_2},
};

int main(void)
{
    // Get the number of tasks to be executed
    uint8_t task_number = get_NumberOfTasks();
    // Initializing the clocks
    for (int i = 0; i < task_number; i++)
    {
        clock_t myClock1 = clock();
        Tasks[i].LastTick = myClock1;
        printf("Task %d clock has been set to %f\n", i, myClock1);
    }
    // Round Robin
    while (1)
    {
        // Go through all tasks in the schedule
        for (int i = 0; i < task_number; i++)
        {
            // Check if it is time to execute it
            if ((Tasks[i].LastTick - clock()) > Tasks[i].Interval)
            {
                // Execute it
                clock_t myClock2 = clock();
                (*Tasks[i].Function)(myClock2);
                // Update the last tick
                Tasks[i].LastTick = myClock2;
            }
        }
        Sleep(SYS_TICK_INTERVAL);
    }
}

void task_1(clock_t tick)
{
    printf("%f - Hello from task 1\n", tick);
}

void task_2(clock_t tick)
{
    printf("%f - Hello from task 2\n", tick);
}

uint8_t get_NumberOfTasks(void)
{
    return sizeof(Tasks) / sizeof(*Tasks);
}
The code compiles without a single warning, but I guess I don't understand how clock() works.
Here you can see what I get when I run the program:
F:\AVR Microcontroller>timer
Task 0 clock has been set to 0.000000
Task 1 clock has been set to 0.000000
I tried changing Interval and LastTick from float to double just to make sure this was not a precision error, but still it does not work.
%f is not the right format specifier to print myClock1, because clock_t is likely not double, and you shouldn't assume that it is. If you want to print myClock1 as a floating-point number, you have to convert it to double explicitly:
printf("Task %d clock has been set to %f\n", i, (double)myClock1);
Alternatively, use the macro CLOCKS_PER_SEC to turn myClock1 into a number of seconds:
printf("Task %d clock has been set to %f seconds\n", i,
(double)myClock1 / CLOCKS_PER_SEC);
Additionally, your subtraction in the scheduler loop is wrong. Think about it: clock() grows larger over time, so Tasks[i].LastTick - clock() always yields a negative value. I think you want clock() - Tasks[i].LastTick instead.
The behavior of the clock function depends on the operating system. On Windows it basically runs off the wall clock, while on e.g. Linux it measures the process CPU time.
Also, the result of clock by itself is useless; its only use is in comparisons between two clocks (e.g. clock_end - clock_start).
Finally, the clock_t type (which clock returns) is an integer type; you only get floating-point values if you cast a difference (such as the one above) to e.g. double and divide by CLOCKS_PER_SEC. Attempting to print a clock_t using the "%f" format will lead to undefined behavior.
Reading a clock reference might help.
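Putting these fixes together, the check inside the round-robin loop could look roughly like this (a sketch only; it keeps the Tasks[] layout from the question and expresses the interval in clock ticks):

// Check whether enough clock ticks have elapsed since this task last ran.
clock_t now = clock();
if ((double)(now - Tasks[i].LastTick) >= Tasks[i].Interval)
{
    (*Tasks[i].Function)(now);
    Tasks[i].LastTick = (double)now;   // LastTick is declared as double in the original struct
    printf("Task %d ran at %f seconds\n", i, (double)now / CLOCKS_PER_SEC);
}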

how to define a loop to be run for some seconds/minutes long

My purpose is to execute a while loop for a defined time (e.g. 90 seconds for this example). It does not have to be exactly 90 s; 1-2 seconds of inaccuracy is acceptable. I tried to use the clock() function for this purpose:
int main(void) {
    clock_t start, end;
    volatile double elapsed;
    start = clock();
    int terminate = 1;
    while (terminate)
    {
        end = clock();
        elapsed = ((double)(end - start)) / (double)CLOCKS_PER_SEC * 1000;
        printf("elapsed time:%f\n", elapsed);
        if (elapsed >= 90.0)
            terminate = 0;
        usleep(50000);
    }
    printf("done..\n");
    return 0;
}
When I run it on my laptop (x86, 3.13 kernel, gcc 4.8.2), my stopwatch measures 72 seconds before it completes (the factor of 1000 was necessary to get elapsed into seconds on my laptop).
When I run it on an ARM device (armv5tejl, 3.12 kernel, gcc 4.6.3) it takes 58 seconds to complete (I needed to use a factor of 100 for elapsed on the armv5).
I run the code at room temperature, so the clock should be stable. I know that the kernel puts threads to sleep and is not exact about when it wakes them up, so, as I said, I don't expect perfect timing, but it should have some accuracy.
I also tried using only usleep (and even nanosleep), but the resolution was not good either. In the end I came up with the code below, which fetches the system time (hour, minute, second) and then calculates the elapsed time. It works with good accuracy.
I wonder if there is another solution that would be less costly to use?
typedef struct {
    int hour;
    int minute;
    int second;
} timeInfo;

timeInfo getTimeInfo(void) {
    timeInfo value2return;
    time_t rawtime;
    struct tm *timeinfo;
    time(&rawtime);
    timeinfo = localtime(&rawtime);
    value2return.hour = timeinfo->tm_hour;
    value2return.minute = timeinfo->tm_min;
    value2return.second = timeinfo->tm_sec;
    return value2return;
}

int checkElapsedTime(const timeInfo *Start, const timeInfo *Stop, const int Reference) {
    if (Stop->hour < Start->hour) {
        printf("1:%d\n", (Stop->hour + 24) * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second));
        if (((Stop->hour + 24) * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second)) >= Reference)
            return 0; // while(0): terminate the loop
        else
            return 1; // while(1)
    } else {
        printf("2:%d\n", Stop->hour * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second));
        if ((Stop->hour * 3600 + Stop->minute * 60 + Stop->second - (Start->hour * 3600 + Start->minute * 60 + Start->second)) >= Reference)
            return 0;
        else
            return 1;
    }
}

int main(void) {
    timeInfo stop, start = getTimeInfo();
    int terminate = 1;
    while (terminate)
    {
        stop = getTimeInfo();
        terminate = checkElapsedTime(&start, &stop, 90);
        usleep(5000); // to decrease the CPU load
    }
    printf("terminated\n");
    return 0;
}
Lastly, I need to run it inside a pthread.
Use time() rather than clock(). The goal is to determine elapsed wall time, not processor time used.
The current code calculates the elapsed process time, multiplies it by 1000, and compares that to 90 seconds.
clock(), as @uesp implied, returns processor time: "The clock function determines the processor time used." (C11 §7.27.2.1 2)
time() returns calendar time: "The time function determines the current calendar time." (C11 §7.27.2.4 2)
difftime() does a nice job of finding the difference between two time_t values (in whatever units/type they are) and returning the difference in seconds.
int main(void) {
    time_t start, end;
    double elapsed;  // seconds
    start = time(NULL);
    int terminate = 1;
    while (terminate) {
        end = time(NULL);
        elapsed = difftime(end, start);
        if (elapsed >= 90.0 /* seconds */)
            terminate = 0;
        else  // No need to sleep when 90.0 seconds elapsed.
            usleep(50000);
    }
    printf("done..\n");
    return 0;
}
Minor: when using clock(), there is no need for * 1000. (On a Windows-based machine running gcc, for me, clock() also returned the calling process's CPU time.)
// elapsed = ((double) (end-start)) / (double) CLOCKS_PER_SEC *1000;
elapsed = ((double) (end - start)) / CLOCKS_PER_SEC;
Minor: No need for volatile. elapsed is only changing due to this code.
// volatile double elapsed;
double elapsed;
The reason your first version doesn't seem to work is that on Linux clock() measures the used CPU time and not the real time (see here). Since you are sleeping the process then the real and CPU times don't match up. The solution is to check the real clock time as in your second example.
Note that on Windows clock() does give you the real clock time (see here).
Use alarm() and catch the signal; the signal handler will interrupt the process. You could also try pthread_cancel. Loop- or sleep-based methods that poll every t units of time can be inaccurate by up to t, and if the loop body is a long-running, tight execution path, sleeping or breaking between iterations will not solve your problem at all. A sketch of the alarm() approach follows.
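A minimal sketch of that approach, assuming POSIX alarm()/SIGALRM (the handler and flag names here are mine):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t expired = 0;   /* set by the signal handler */

static void on_alarm(int sig) {
    (void)sig;
    expired = 1;
}

int main(void) {
    signal(SIGALRM, on_alarm);   /* sigaction() is preferable in production code */
    alarm(90);                   /* deliver SIGALRM after ~90 seconds */

    while (!expired) {
        /* do the real work of one loop iteration here */
        usleep(50000);
    }

    printf("done..\n");
    return 0;
}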

timestamp in c with milliseconds precision

I'm relatively new to C programming and I'm working on a project which needs to be very time accurate; therefore I tried to write something to create a timestamp with milliseconds precision.
It seems to work, but my question is whether this is the right way to do it, or whether there is a much easier way. Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while (1)
        if (clock() - start >= milliseconds)
            break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm *ptm;
    now = time(NULL);
    ptm = gmtime(&now);
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);
    /* wait until next full second */
    while (now == time(NULL));
    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing, wait a user-defined period */
    wait(waitMillSec);
    milli = clock() - milli;
    /* create timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime(&now);
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds);
    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
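For illustration, a minimal sketch using that call (assuming a Windows 8 or later toolchain); FILETIME counts 100-nanosecond intervals since January 1, 1601 (UTC), so dividing by 10,000 gives milliseconds:

#include <windows.h>
#include <stdio.h>

int main(void) {
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);   /* requires Windows 8 or later */

    /* Combine the two 32-bit halves into a 64-bit tick count. */
    ULARGE_INTEGER ticks;
    ticks.LowPart = ft.dwLowDateTime;
    ticks.HighPart = ft.dwHighDateTime;

    unsigned long long milliseconds = ticks.QuadPart / 10000ULL;
    printf("100-ns ticks since 1601: %llu (= %llu ms)\n",
           (unsigned long long)ticks.QuadPart, milliseconds);
    return 0;
}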
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
1. Wait for the next second to start, by repeatedly calling time() until the value changes.
2. Save that value from time().
3. Save the value from clock().
4. Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
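If you do go the QueryPerformanceCounter() route, a minimal sketch looks like this (Sleep(250) stands in for whatever you want to time):

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, t1, t2;
    QueryPerformanceFrequency(&freq);   /* counts per second; fixed at boot */
    QueryPerformanceCounter(&t1);

    Sleep(250);                         /* the work being timed */

    QueryPerformanceCounter(&t2);
    double ms = (double)(t2.QuadPart - t1.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}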
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1 = clock();
// do something
clock_t t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?

Time stamp in the C programming language

How do I stamp two times t1 and t2 and get the difference in milliseconds in C?
This will give you the time in seconds + microseconds
#include <sys/time.h>
struct timeval tv;
gettimeofday(&tv,NULL);
tv.tv_sec // seconds
tv.tv_usec // microseconds
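To get the difference between two such readings in milliseconds, one possible sketch (the variable names are mine):

#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval t1, t2;
    gettimeofday(&t1, NULL);
    /* ... work to be timed ... */
    gettimeofday(&t2, NULL);

    long long ms = (long long)(t2.tv_sec - t1.tv_sec) * 1000
                 + (t2.tv_usec - t1.tv_usec) / 1000;
    printf("elapsed: %lld ms\n", ms);
    return 0;
}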
Standard C99:
#include <time.h>
time_t t0 = time(0);
// ...
time_t t1 = time(0);
double datetime_diff_ms = difftime(t1, t0) * 1000.;
clock_t c0 = clock();
// ...
clock_t c1 = clock();
double runtime_diff_ms = (c1 - c0) * 1000. / CLOCKS_PER_SEC;
The precision of the types is implementation-defined, i.e. the datetime difference might only resolve to full seconds.
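If your compiler supports C11, timespec_get() from <time.h> offers sub-second calendar timestamps in standard C; a minimal sketch (not part of the original answer):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts1, ts2;
    timespec_get(&ts1, TIME_UTC);
    /* ... work to be timed ... */
    timespec_get(&ts2, TIME_UTC);

    double ms = (ts2.tv_sec - ts1.tv_sec) * 1000.0
              + (ts2.tv_nsec - ts1.tv_nsec) / 1000000.0;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}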
If you want to find elapsed time, this method will work as long as you don't reboot the computer between the start and end.
In Windows, use GetTickCount(). Here's how:
DWORD dwStart = GetTickCount();
...
... process you want to measure elapsed time for
...
DWORD dwElapsed = GetTickCount() - dwStart;
dwElapsed is now the number of elapsed milliseconds.
In Linux, use clock() and CLOCKS_PER_SEC to do about the same thing.
If you need timestamps that last through reboots or across PCs (which would need quite good synchronization indeed), then use the other methods (gettimeofday()).
Also, in Windows at least you can get much better than standard time resolution. Usually, if you called GetTickCount() in a tight loop, you'd see it jumping by 10-50 each time it changed. That's because of the time quantum used by the Windows thread scheduler. This is more or less the amount of time it gives each thread to run before switching to something else. If you do a:
timeBeginPeriod(1);
at the beginning of your program or process and a:
timeEndPeriod(1);
at the end, then the quantum will change to 1 ms, and you will get much better time resolution on the GetTickCount() call. This does make a subtle change to how your entire computer runs processes, so keep that in mind. That said, Windows Media Player and many other programs do this routinely anyway, so I don't worry too much about it.
I'm sure there's probably some way to do the same in Linux (probably with much better control, or maybe with sub-millisecond quantums) but I haven't needed to do that yet in Linux.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
 * Returns the current time as a heap-allocated "YYYYMMDDhhmmss" string
 * (the caller should free it).
 */
char *time_stamp(void) {
    char *timestamp = (char *)malloc(sizeof(char) * 16);
    time_t ltime = time(NULL);
    struct tm *tm = localtime(&ltime);
    sprintf(timestamp, "%04d%02d%02d%02d%02d%02d",
            tm->tm_year + 1900, tm->tm_mon + 1,   /* tm_mon is 0-based */
            tm->tm_mday, tm->tm_hour, tm->tm_min, tm->tm_sec);
    return timestamp;
}

int main(void) {
    char *ts = time_stamp();
    printf(" Timestamp: %s\n", ts);
    free(ts);
    return 0;
}
Output: Timestamp: 20110912130940 // 2011 Sep 12 13:09:40
Use @Arkaitz Jimenez's code to get two timevals:
#include <sys/time.h>
//...
struct timeval tv1, tv2, diff;
// get the first time:
gettimeofday(&tv1, NULL);
// do whatever it is you want to time
// ...
// get the second time:
gettimeofday(&tv2, NULL);
// get the difference (pass the later time first, since timeval_subtract computes x - y):
int result = timeval_subtract(&diff, &tv2, &tv1);
// the difference is now stored in diff; result is 1 if it was negative.
Sample code for timeval_subtract can be found at this web site:
/* Subtract the `struct timeval' values X and Y,
   storing the result in RESULT.
   Return 1 if the difference is negative, otherwise 0. */
int timeval_subtract(struct timeval *result, struct timeval *x, struct timeval *y)
{
    /* Perform the carry for the later subtraction by updating y. */
    if (x->tv_usec < y->tv_usec) {
        int nsec = (y->tv_usec - x->tv_usec) / 1000000 + 1;
        y->tv_usec -= 1000000 * nsec;
        y->tv_sec += nsec;
    }
    if (x->tv_usec - y->tv_usec > 1000000) {
        int nsec = (x->tv_usec - y->tv_usec) / 1000000;
        y->tv_usec += 1000000 * nsec;
        y->tv_sec -= nsec;
    }

    /* Compute the time remaining to wait.
       tv_usec is certainly positive. */
    result->tv_sec = x->tv_sec - y->tv_sec;
    result->tv_usec = x->tv_usec - y->tv_usec;

    /* Return 1 if result is negative. */
    return x->tv_sec < y->tv_sec;
}
How about this solution? I didn't see anything like it in my search. I am trying to avoid division and keep the solution simple.
struct timeval cur_time1, cur_time2, tdiff;
gettimeofday(&cur_time1, NULL);
sleep(1);
gettimeofday(&cur_time2, NULL);
/* borrow one second up front so tv_usec stays non-negative */
tdiff.tv_sec = cur_time2.tv_sec - cur_time1.tv_sec - 1;
tdiff.tv_usec = cur_time2.tv_usec + (1000000 - cur_time1.tv_usec);
while (tdiff.tv_usec >= 1000000)
{
    tdiff.tv_sec++;
    tdiff.tv_usec -= 1000000;
    printf("updated tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
}
printf("end tdiff tv_sec:%ld tv_usec:%ld\n", tdiff.tv_sec, tdiff.tv_usec);
Also be aware of the interaction between clock() and usleep(): usleep() suspends the program, and clock() only measures the time the program is running.
You might be better off using gettimeofday() as mentioned here.
Use gettimeofday(), or better, clock_gettime().
You can try the routines in the C time library (time.h). Also take a look at clock() in the same library; it gives the clock ticks since the program started. Save its value before the operation you want to measure, capture the clock ticks again after the operation, and take the difference between the two to get the elapsed time.
#include <stdio.h>
#include <string.h>
#include <time.h>

time_t tm = time(NULL);
char stime[26];                    /* ctime_r() needs a buffer of at least 26 bytes */
ctime_r(&tm, stime);
stime[strlen(stime) - 1] = '\0';   /* strip the trailing newline */
printf("%s", stime);
This program clearly shows how to do it: it takes time 1, pauses for one second, and then takes time 2; the difference between the two times should be 1000 milliseconds, so you can verify that the result is correct.
#include <stdio.h>
#include <time.h>
#include <unistd.h>

// Name: milliseconds.c
// gcc /tmp/milliseconds.c -o milliseconds

struct timespec ts1, ts2; // time1 and time2

int main(void) {
    // get time1
    clock_gettime(CLOCK_REALTIME, &ts1);

    sleep(1); // 1 second pause

    // get time2
    clock_gettime(CLOCK_REALTIME, &ts2);

    // nanoseconds difference, in milliseconds
    long milliseconds1 = (ts2.tv_nsec - ts1.tv_nsec) / 1000000;
    // seconds difference, in milliseconds
    long milliseconds2 = (ts2.tv_sec - ts1.tv_sec) * 1000;

    long milliseconds = milliseconds1 + milliseconds2;
    printf("%ld\n", milliseconds);
    return 0;
}
