I'm using the clock() function in C to get seconds from ticks, but with the little test program below I found that the ticks don't seem to be counted correctly: the seconds drift badly out of sync with real time, and I had to multiply the result by 100 to get something resembling seconds, which doesn't make sense to me. In this program, 10 s of output correspond to roughly 7 s of real time. Could someone help me make the clock() function a bit more precise?
I'm using Beaglebone Black rev C with Kernel 3.8.13-bone70, Debian 4.6.3-14 and gcc version 4.6.3
Thanks in advance!
Here's my test program:
#include <stdio.h>
#include <time.h>

int main(){
    while(1)
        printf("\n%f", (double)100 * clock() / CLOCKS_PER_SEC);
    return 0;
}
The result returned by the clock() function isn't expected to be synchronized with real time. It returns
the implementation’s best approximation to the processor time used by
the program since the beginning of an implementation-defined era
related only to the program invocation
If you want a high-precision indication of real (wall-clock) time, you'll need to use some system-specific function such as gettimeofday() or clock_gettime() -- or, if your implementation supports it, the standard timespec_get function, added in C11.
Your program calls printf in a loop. I'd expect it to spend most of its time waiting for I/O and yielding to other processes, so the CPU time indicated by clock() (when converted to seconds) should advance much more slowly than real time. I suspect your multiplication by 100 is throwing off your results; clock()/CLOCKS_PER_SEC should be a correct indication of CPU time.
Note Keith Thompson's answer which points you in the right direction.
However, one might expect a tight infinite loop to use most of the CPU in a given period (assuming nothing else is happening), and therefore any serious deviation between the CPU time spent on this tight infinite loop versus real time might be interesting to explore. To that end, I rigged this short program up:
#include <stdio.h>
#include <time.h>
int main(void) {
    time_t t0 = time(NULL);
    while (1) {
        printf("%f\r", ((double)clock()) / CLOCKS_PER_SEC);
        fflush(stdout);
        if (difftime(time(NULL), t0) > 5.0) {
            break;
        }
    }
    return 0;
}
On Windows, I get:
C:\...\Temp> timethis clk.exe
TimeThis : Command Line : clk.exe
TimeThis : Start Time : Mon Apr 27 17:00:34 2015
5.093000
TimeThis : Command Line : clk.exe
TimeThis : Start Time : Mon Apr 27 17:00:34 2015
TimeThis : End Time : Mon Apr 27 17:00:40 2015
TimeThis : Elapsed Time : 00:00:05.172
On *nix, you can use the time CLI utility to measure time.
The timing seems close enough to me.
Note also Zan's point about comms lags.
Keith's answer is most correct but there is another problem with your program even if you did change it to use real time.
You have it creating output in a tight loop with no sleeps at all. If your output device is even a little bit slow the time displayed will be off by however slow the buffering is.
If you ran this over a 9,600 baud serial console for example, it could be as much as 30 seconds behind. At least, that's about how much lag I remember having observed in the far past when we still used 9,600 baud consoles.
Related
I have the following code:
#include <stdio.h>
#include <time.h>
int main(){
    clock_t timerS;
    int i = 1, targetTime = 2;
    scanf("%d", &targetTime);
    while (i != 0) {
        timerS = clock();
        while ((double)((clock() - timerS) / CLOCKS_PER_SEC) < targetTime) {
            //do something
        }
        //do another thing but delayed by the given time
        if (targetTime >= 0.5)
            targetTime -= 0.02;
        else i = 0;
    }
    return 0;
}
What I want is a loop that does something for (initially) an inputted number of seconds, and also does another thing after targetTime seconds have passed.
But after the first loop, the speed at which these operations are made should change (more specifically, by -0.02 seconds in this case).
An example would be getting multiple user inputs from user for 2 seconds, and displaying all the inputs made in these 2 seconds afterwards.
First problem is
If the initial given time is smaller than 1 second (for example 0.6), the other thing isn't delayed by 0.6 seconds, but is done immediately.
Second problem is
Actually similar to the first, if I subtract 0.02 seconds (in this case) from targetTime, it again does the other thing immediately and not in targetTime-0.02 seconds as I intend it to.
I'm new to this "clock" and "time" topic in C so I guess I'm doing something wrong regarding how these operations should be done. Also, please don't give an overly-complicated explanation/solution because of the above-mentioned reason.
Thanks!
Don't use the clock() call, as it is obsolete for this purpose and has been fully superseded by machine-independent replacements.
You can use, if your system supports it, clock_gettime(2), which will give you up to nanosecond precision (depending on the platform, but at least on Linux on Intel architectures that is almost guaranteed) or, if you cannot use it, at least you'll have gettimeofday(2), which is derived from BSD systems and provides you with a clock with microsecond resolution.
If you want to stop your program for some delay, you also have sleep(3) (second based), usleep(3) (microsecond based), or even nanosleep(2) (nanosecond based).
Anyway, none of these calls ticks with the system heartbeat; their resolution is uniform and not system dependent.
I mistakenly initialized targetTime as an int instead of a double. Changing it to double solves the issue easily. Sorry!
I got the following problem: I have to measure the time a program needs to be executed. A scalar version of the program works fine with the code below, but when using OpenMP, it works on my PC, but not on the resource I am supposed to use.
In fact:
scalar program: runtime 34 s
OpenMP program: runtime 9 s
That's my PC (everything working), compiled with Visual Studio.
The resource I have to use (I think Linux, compiled with gcc):
scalar program: runtime 9 s
OpenMP program: runtime 9 s (but the text pops up immediately afterwards, so it should really be 0-1 s)
My guess is that it adds up all the ticks, which is about the same total amount, and divides them by the tick rate of a single core. My question is how to solve this, and whether there is a better way to measure time in the console in C.
clock_t start, stop;
double t = 0.0;

assert((start = clock()) != -1);
/* ... code being timed ... */
stop = clock();
t = (double)(stop - start) / CLOCKS_PER_SEC;
printf("Run time: %f\n", t);
To augment Mark's answer: DO NOT USE clock()
clock() is an awful misunderstanding from the old computer era, whose actual implementation differs greatly from platform to platform. Behold:
on Linux, *BSD, Darwin (OS X) -- and possibly other 4.3BSD descendants -- clock() returns the processor time (not the wall-clock time!) used by the calling process, i.e. the sum of each thread's processor time;
on IRIX, AIX, Solaris -- and possibly other SysV descendants -- clock() returns the processor time (again not the wall-clock time) used by the calling process AND all its terminated child processes for which wait, system or pclose was executed;
HP-UX doesn't even seem to implement clock();
on Windows clock() returns the wall-clock time (not the processor time).
In the descriptions above processor time usually means the sum of user and system time. This could be less than the wall-clock (real) time, e.g. if the process sleeps or waits for file IO or network transfers, or it could be more than the wall-clock time, e.g. when the process has more than one thread, actively using the CPU.
Never use clock(). Use omp_get_wtime() - it exists on all platforms, supported by OpenMP, and always returns the wall-clock time.
Converting my earlier comment to an answer in the spirit of doing anything for reputation ...
Use two calls to omp_get_wtime to get the wallclock time (in seconds) between two points in your code. Note that time is measured individually on each thread, there is no synchronisation of clocks across threads.
Your problem is clock. By the C standard it measures the CPU time used by your process, not wall-clock time. That is what Linux does (it usually sticks to the standards), and then the total CPU time of the sequential program and the parallel program are the same, as they should be.
Windows deviates from that: there, clock is the wall-clock time.
So use other time-measurement functions. For standard C that would be time, or if you need more precision, the newer C11 standard gives you timespec_get; for OpenMP there are other possibilities, as have already been mentioned.
I'm working on a timing system and I'll implement a timer class.
#include <windows.h>
#include <stdio.h>
#include <time.h>
int main()
{
    clock_t t1, t2;

    t1 = clock();
    Sleep(10);
    t2 = clock();

    printf("%i\n", (int)(t2 - t1));
    return 0;
}
This program should print "10", but it prints "15" or "16". I need better accuracy, less than 1 ms! Suggestions? (maybe with select()'s timeout?)
NOTE: I've run this program on Windows 7 Ultimate x86. Program compiled with MinGW (C/C++) x86.
Sleep() is accurate to the operating system's clock interrupt rate. Which by default on Windows ticks 64 times per second. Or once every 15.625 msec, as you found out.
You can increase that rate by calling timeBeginPeriod(10); use timeEndPeriod(10) when you're done. You are still subject to normal thread-scheduling latencies, so you still don't have a guarantee that your thread will resume running after 10 msec, and it won't when the machine is heavily loaded. Using SetThreadPriority() to boost the priority increases the odds that it will.
Your problem is that the "clock" only ticks 64 times per second (I've seen systems that can go up to 2000, but no higher). If you are creating a timer class, you probably want to have much higher resolution than even 2000. Use QueryPerformanceCounter to get higher resolution. See QueryPerformanceCounter and overflows for an example.
Note that if you want to sleep for very short intervals, you will have to call QueryPerformanceCounter in a tight loop.
Sleep() is not accurate in the way you want it to be.
It will cause the thread to sleep for AT LEAST the length of time you specify, but there's no guarantee that the OS will return control to your thread at exactly that specified time.
I'm wondering how to make a time delay of a few seconds in C without using a while loop. The samples I have all use a while loop.
This works, but I don't want to use the while loop. Please help:
while (clock() < endwaitTime)
{
    if (!GetFlag())
    {
        print(" Canceled ");
        return;
    }
}
You can use sleep() to pause your application for the given amount of seconds, or you can use usleep() to pause your application for the given amount of microseconds.
You can also explore the blocking properties of select() to have a microsecond precision pausing. Some applications prefer to do that, don't ask me why.
About your while() loop: never do that. It is not pausing; your application will loop using 99% of the CPU until the time has elapsed. It's a very dumb way of doing that.
Also, it's preferable to use time() to get the current UNIX time as a reference, and difftime() to get the time delta in seconds to use with sleep().
You may also have problems with clock(), because on a 32-bit system this function will return the same number every ~72 minutes, so you will often end up with an endwaitTime lower than the current return value of clock().
Following http://linux.die.net/man/3/sleep
#include <unistd.h>
...
// note clock() is seconds of CPU time, but we will sleep and not use CPU time
// therefore clock() is not useful here ---
// Instead expiration should be tested with time(), which gives the "date/time" in secs
// since Jan 1, 1970
long int expiration = time(NULL) + 300; /* 5 minutes = 300 secs. Change this as needed. */
while (time(NULL) < expiration)
{
    sleep(2); /* don't chew up CPU time, but check every 2 seconds */
    if (!GetFlag())
    {
        print(" Canceled ");
        return;
    }
}
...
Of course, you can get rid of the while loop completely if a single sleep() is good enough. For a short pause this may be OK. Input is still queued by Linux while the process is sleeping, and will be delivered to the stdin of the program when it wakes up from the sleep.
Example:
sec 1 ---
sec 2 ---
sec 3 ---
Each print should have a delay of 1 sec.
In the absence of any other information in your question...
You should find a sleep function in nearly any C environment (note that it's all lower case). It's usually in time.h or unistd.h, and accepts the number of seconds it should delay execution.
Many environments will also have nanosleep, which is an equivalent that accepts a number of nanoseconds rather than seconds. Also in time.h on many systems.
Bottom-line is that your C environment is likely to provide such a function, whether it's sleep, _sleep, Sleep, or something else similar, and whether it accepts seconds, nanoseconds, or milliseconds. You'll have to refer to the documentation for your environment to find the specific one.
#include <windows.h>
...
Sleep(timeInMilliseconds); //sleeps the current thread
hth
Unfortunately there isn't a portable version of sleep() so instead you could write a delay() function using the standard functions in time.h as follows:
void delay(int seconds)
{
    time_t t = time(NULL);
    while (difftime(time(NULL), t) < seconds)
        ;
}
Note that this isn't ideal, as it keeps the CPU busy during the delay.