Is this a correct use of sleep()? - c

I'm using sleep like this to grab a frame every 1/25th of a second. OS is Debian 6 armel.
#define VIDEO_FRAME_RATE 25.0f

while (RECORDING) {
    sprintf(buffer, "Something from a data struct that is updated\n");
    fprintf(Output, buffer);
    usecToSleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;
    usleep(usecToSleep);
}
Question: What guarantee is there that the loop will write the buffer to Output every 1/25th of a second?
Is there a better way to do this in C? I need it to be as precise as possible to prevent drifting.
Thank you.

Your "recording operation" still takes some time...
So you need to calculate time that should be spent on on frame (usecTosleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;) and then calculate time operation really took, adjust sleep time accordingly:
usecToSleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;
lastFrameUsec = 0;
while (RECORDING) {
    sprintf(buffer, "Something from a data struct that is updated\n");
    fprintf(Output, buffer);
    // currentFrameUsec - lastFrameUsec = actual time spent on the operation
    currentFrameUsec = getUsecElapsedFromStart(&tstart);
    actualSleep = usecToSleep - (currentFrameUsec - lastFrameUsec);
    // If there's time left to sleep, sleep
    if (actualSleep > 0) {
        usleep(actualSleep);
        lastFrameUsec = getUsecElapsedFromStart(&tstart);
    } else {
        lastFrameUsec = currentFrameUsec;
    }
}
I'm not aware of a multi-platform getUsecElapsedFromStart(), so you will probably have to implement your own, for example like this one:
long getUsecElapsedFromStart(const struct timespec *tstart)
{
    struct timespec tnow;
    clock_gettime(CLOCK_MONOTONIC, &tnow);
    return (tnow.tv_sec - tstart->tv_sec) * 1000000L +
           (tnow.tv_nsec - tstart->tv_nsec) / 1000L;
}
struct timespec tstart;
clock_gettime(CLOCK_MONOTONIC, &tstart);
while (RECORDING) {
    // ...
    currentFrameUsec = getUsecElapsedFromStart(&tstart);
}
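If drift is the main concern, another option (not part of the answer above; just a sketch under POSIX assumptions, link with -lrt on older glibc) is to sleep until an absolute deadline with clock_nanosleep(), so per-frame errors never accumulate:
#include <time.h>

#define VIDEO_FRAME_RATE 25
#define NSEC_PER_FRAME (1000000000L / VIDEO_FRAME_RATE)

/* Sketch: advance an absolute deadline by one frame period each iteration and
 * sleep until it; the `recording` pointer stands in for the question's RECORDING flag. */
void record_loop(volatile int *recording)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    while (*recording) {
        /* ... grab and write one frame here ... */

        next.tv_nsec += NSEC_PER_FRAME;
        if (next.tv_nsec >= 1000000000L) {   /* carry nanoseconds into seconds */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}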

In response to your first question, there is no such guarantee. usleep() promises only that it will sleep at least as long as you tell it to. But it may sleep longer:
The usleep() function suspends execution of the calling process for (at least) usec microseconds. The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers.

What is the best way to use a timer in C? I need to wait up to 500 ms for a job. Please suggest a good way to do this. I used sleep(3); but that approach does no work during that period, and I have something that should keep trying to get input until the time is up.
Here's a solution I used (it needs #include <time.h>):
int msec = 0, trigger = 10; /* 10ms */
int iterations = 0;
clock_t before = clock();

do {
    /*
     * Do something to busy the CPU just here while you drink a coffee
     * Be sure this code will not take more than `trigger` ms
     */
    clock_t difference = clock() - before;
    msec = difference * 1000 / CLOCKS_PER_SEC;
    iterations++;
} while ( msec < trigger );

printf("Time taken %d seconds %d milliseconds (%d iterations)\n",
       msec/1000, msec%1000, iterations);
You can use the clock_t type and the clock() function from time.h.
Store the start time with clock(), then check the elapsed time by comparing the difference between the stored value and the current value.
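A minimal sketch of that idea (note that clock() measures CPU time, not wall-clock time, and this busy-waits, so it burns CPU; the helper name wait_ms is only illustrative):
#include <time.h>

/* Busy-wait for roughly `ms` milliseconds of CPU time using clock(). */
void wait_ms(long ms)
{
    clock_t start = clock();
    while ((clock() - start) * 1000 / CLOCKS_PER_SEC < ms)
        ;   /* spin */
}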
Yes, you need a loop. If you already have a main loop (most GUI event-driven stuff does) you can probably stick your timer into that. Use:
#include <time.h>
time_t my_t, fire_t;
Then (for times over 1 second), initialize your timer by reading the current time:
my_t = time(NULL);
Add the number of seconds your timer should wait and store it in fire_t. A time_t is essentially an integer count of seconds, so you can add to it directly.
Inside your loop do another
my_t = time(NULL);
if (my_t > fire_t) then consider the timer fired and do the stuff you want there. That will probably include resetting it by doing another fire_t = time(NULL) + seconds_to_wait for next time.
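A minimal sketch of that pattern (fire_t and seconds_to_wait follow the description above; the loop body is whatever your program already does):
#include <stdio.h>
#include <time.h>

int main(void)
{
    int seconds_to_wait = 5;
    time_t fire_t = time(NULL) + seconds_to_wait;   /* when the timer should fire */

    for (;;) {
        /* ... the rest of your main loop ... */

        if (time(NULL) > fire_t) {
            /* timer fired: do your work, then re-arm it */
            printf("timer fired\n");
            fire_t = time(NULL) + seconds_to_wait;
        }
    }
}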
A time_t is a somewhat antiquated unix method of storing time as the number of seconds since midnight 1/1/1970 but it has many advantages. For times less than 1 second you need to use gettimeofday() (microseconds) or clock_gettime() (nanoseconds) and deal with a struct timeval or struct timespec which is a time_t and the microseconds or nanoseconds since that 1 second mark. Making a timer works the same way except when you add your time to wait you need to remember to manually do the carry (into the time_t) if the resulting microseconds or nanoseconds value goes over 1 second. Yes, it's messy. See man 2 time, man gettimeofday, man clock_gettime.
sleep(), usleep(), nanosleep() have a hidden benefit. You see it as pausing your program, but what they really do is release the CPU for that amount of time. Repeatedly polling by reading the time and comparing to the done time (are we there yet?) will burn a lot of CPU cycles which may slow down other programs running on the same machine (and use more electricity/battery). It's better to sleep() most of the time then start checking the time.
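A sketch of that hybrid approach, assuming the deadline is expressed as a time_t (the helper name wait_until is only illustrative):
#include <time.h>
#include <unistd.h>

/* Sleep through most of the wait, then poll the clock for the final stretch. */
void wait_until(time_t deadline)
{
    time_t remaining = deadline - time(NULL);
    if (remaining > 1)
        sleep((unsigned)(remaining - 1));   /* give the CPU back for the bulk of the wait */
    while (time(NULL) < deadline)
        ;                                   /* brief final poll */
}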
If you're trying to sleep and do work at the same time you need threads.
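For that last point, a minimal POSIX-threads sketch (compile with -pthread; a real program would use an atomic flag or a condition variable rather than a plain volatile int):
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int timer_done = 0;

/* This thread just sleeps, then raises a flag for the worker. */
static void *timer_thread(void *arg)
{
    unsigned seconds = *(unsigned *)arg;
    sleep(seconds);
    timer_done = 1;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    unsigned seconds = 3;
    pthread_create(&tid, NULL, timer_thread, &seconds);

    while (!timer_done) {
        /* ... keep doing work here while the timer thread sleeps ... */
    }
    puts("timer thread fired");
    pthread_join(tid, NULL);
    return 0;
}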
Maybe this example will help you:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

/*
 * Simple timeout implementation.
 * Input: number of milliseconds to wait.
 * Usage:
 *   setTimeout(1000)  - timeout of 1 second
 *   setTimeout(10100) - timeout of 10 seconds and 100 milliseconds
 */
void setTimeout(int milliseconds)
{
    // If milliseconds is less than or equal to 0,
    // simply return from the function without raising an error
    if (milliseconds <= 0) {
        fprintf(stderr, "Millisecond count for timeout is less than or equal to 0\n");
        return;
    }
    // current time in milliseconds
    int milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    // time in milliseconds at which to return from this timeout
    int end = milliseconds_since + milliseconds;
    // busy-wait until the target time is reached
    do {
        milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    } while (milliseconds_since <= end);
}

int main()
{
    // delay in seconds, entered by the user
    int delay;
    printf("Enter delay: ");
    scanf("%d", &delay);
    // count down to the rocket launch while the delay is above 0
    do {
        // erase the previous line and display the remaining delay
        printf("\033[ATime left to launch rocket: %d\n", delay);
        // pause between updates
        setTimeout(1000);
        // decrease the delay by 1
        delay--;
    } while (delay >= 0);
    // the string that draws the rocket (note the implicit byte for '\0')
    char rocket[] = "-->";
    // a string holding the rocket's whole trace plus the rocket itself
    char *rocket_trace = (char *) malloc(100 * sizeof(char));
    // display the rocket's trace from start to finish
    int i;
    char passed_way[100] = "";
    for (i = 0; i <= 50; i++) {
        setTimeout(25);
        sprintf(rocket_trace, "%s%s", passed_way, rocket);
        passed_way[i] = ' ';
        printf("\033[A");
        printf("| %s\n", rocket_trace);
    }
    // erase the line and print a final message
    printf("\033[A");
    printf("\033[2K");
    puts("Good luck!");
    free(rocket_trace);
    return 0;
}
Compile the file, run it, and delete it afterwards (my preference):
$ gcc timeout.c -o timeout && ./timeout && rm timeout
Try running it yourself to see the result.
Notes:
Testing environment
$ uname -a
Linux wlysenko-Aspire 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ gcc --version
gcc (Ubuntu 4.8.5-2ubuntu1~14.04.1) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Run code for x amount of time

To preface, I am on a Unix (linux) system using gcc.
What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.
Here is an example of something I have been working with:
struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;

ftime(&start);
while ( elapsed < duration ) {
    // do a set of tasks
    ftime(&check);
    elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
}
I was thinking this would have carried on for 10000 ms, or 10 seconds, but it didn't; it finished almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, upon the first call of ftime, the struct is time = 1, millitm = 999, and on the second call time = 2, millitm = 01, it would calculate the elapsed time as 1002 milliseconds. Is there something I am missing?
Also the suggestions in the various stackoverflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.
I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits and I'm trying to keep everything in the loop as efficient as possible.
What approach could I take towards this?
The code calculates the elapsed time incorrectly; it should assign, not accumulate:
// elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000) + (check.millitm - start.millitm);
There is some concern about check.millitm - start.millitm. Since millitm is an unsigned short (normally narrower than int), it is promoted to int before the subtraction occurs, so the difference will be in the range [-999 ... 999].
struct timeb {
    time_t         time;
    unsigned short millitm;
    short          timezone;
    short          dstflag;
};
IMO, more robust code would handle the ms conversion in a separate helper function. This matches OP's "I believe I could convert the start time into milliseconds, and the check time into milliseconds, then subtract start from check."
int64_t timeb_to_ms(struct timeb *t) {
    return (int64_t)t->time * 1000 + t->millitm;
}

struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);
int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;

while (elapsed < duration) {
    // do a set of tasks
    struct timeb check;
    ftime(&check);
    elapsed = timeb_to_ms(&check) - start_ms;
}
If you want efficiency, let the system send you a signal when a timer expires.
Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.
The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate.
If you handle the signal, you can longjmp(2) from the handler to another place.
I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).
Below is an example for you:
#define _GNU_SOURCE /* sysv_signal() is a GNU extension */
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static jmp_buf jmpbuf;

void hndlr(int sig);
void loop();

int main(){
    /* sysv_signal handlers get reset after a signal is caught and handled */
    if(SIG_ERR==sysv_signal(SIGALRM,hndlr)){
        perror("couldn't set SIGALRM handler");
        return 1;
    }
    /* the handler will jump you back here */
    setjmp(jmpbuf);
    /* alarm() cannot fail; it returns the seconds left on any previous alarm */
    alarm(3/*seconds*/);
    loop();
    return 0;
}

void hndlr(int sig){
    (void)sig;
    puts("Caught SIGALRM");
    puts("RESET");
    longjmp(jmpbuf,1);
}

void loop(){
    int i;
    for(i=0; ; i++){
        //print every 100-millionth iteration
        if(0==i%100000000){
            printf("%d\n", i);
        }
    }
}
If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.
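For reference, a minimal sketch of the timer_create() route (POSIX; link with -lrt on older glibc; the handler and names here are only illustrative):
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static volatile sig_atomic_t expired = 0;

static void on_timer(int sig) { (void)sig; expired = 1; }

int main(void)
{
    /* Install a handler for the signal the timer will deliver. */
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_timer;
    sigaction(SIGRTMIN, &sa, NULL);

    /* Create a per-process timer that sends SIGRTMIN on expiry. */
    struct sigevent sev;
    memset(&sev, 0, sizeof sev);
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;

    timer_t tid;
    timer_create(CLOCK_MONOTONIC, &sev, &tid);

    /* One-shot timer: it_interval = 0, it_value = 500 ms. */
    struct itimerspec its = { {0, 0}, {0, 500 * 1000000L} };
    timer_settime(tid, 0, &its, NULL);

    while (!expired) {
        /* do the work you want bounded to roughly 500 ms */
    }
    puts("timer expired");
    timer_delete(tid);
    return 0;
}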

Simple <time.h> program takes a large amount of CPU

I was trying to familiarize myself with the C time.h library by writing something simple in VS. The following code simply prints the value of x added to itself every two seconds:
int main() {
    time_t start = time(NULL);
    time_t clock = time(NULL);
    time_t clockTemp = time(NULL); //temporary clock
    int x = 1;
    //program will continue for a minute (60 sec)
    while (clock <= start + 58) {
        clockTemp = time(NULL);
        if (clockTemp >= clock + 2) { //if 2 seconds has passed
            clock = clockTemp;
            x = ADD(x);
            printf("%d at %d\n", x, timeDiff(start, clock));
        }
    }
}

int timeDiff(int start, int at) {
    return at - start;
}
My concern is with the amount of CPU that this program takes, about 22%. I figure this problem stems from the constant updating of the clockTemp (just below the while statement), but I'm not sure how to fix this issue. Is it possible that this is a visual studio problem, or is there a special way to check for time?
Solution
The code needed a sleep call so that it wouldn't run constantly.
I added sleep via #include <windows.h> and put Sleep(2000) // 2 second sleep at the end of the while loop:
while (clock <= start + 58) {
    ...
    Sleep(2000);
}
The problem is not in the way you are checking the current time. The problem is that there is nothing to limit the frequency with which the loop runs. Your program continues to execute statements as quickly as it can, and eats up a ton of processor time. (In the absence of other programs, on a single-threaded CPU, it would use 100% of your processor time.)
You need to add a "sleep" method inside your loop, which will indicate to the processor that it can stop processing your program for a short period of time. There are many ways to do this; this question has some examples.
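On POSIX systems, for example, a small helper around nanosleep() dropped into the loop is enough to bring CPU usage down to near zero (the name sleep_ms is only illustrative; on Windows, Sleep() from <windows.h> plays the same role, as in the accepted fix above):
#include <time.h>

/* Sleep for roughly `ms` milliseconds so the polling loop stops burning CPU. */
static void sleep_ms(long ms)
{
    struct timespec ts;
    ts.tv_sec = ms / 1000;
    ts.tv_nsec = (ms % 1000) * 1000000L;
    nanosleep(&ts, NULL);
}
Calling something like sleep_ms(100) at the bottom of the while loop trades a little timing granularity for a large drop in CPU usage.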

timestamp in c with milliseconds precision

I'm relatively new to C programming and I'm working on a project which needs to be very time accurate; therefore I tried to write something to create a timestamp with milliseconds precision.
It seems to work but my question is whether this way is the right way, or is there a much easier way? Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while(1) if(clock() - start >= milliseconds) break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm * ptm;

    now = time(NULL);
    ptm = gmtime ( &now );
    printf("time before: %d:%d:%d:%d\n",ptm->tm_hour,ptm->tm_min,ptm->tm_sec, milliseconds );

    /* wait until next full second */
    while(now == time(NULL));

    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing wait a user defined period */
    wait(waitMillSec);
    milli = clock() - milli;

    /* create timestamp with milliseconds precision */
    seconds = milli/CLOCKS_PER_SEC;
    milliseconds = milli%CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime( &now );
    printf("time after: %d:%d:%d:%d\n",ptm->tm_hour,ptm->tm_min,ptm->tm_sec, milliseconds );
    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>

int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
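A sketch of how that call might be used (assumes Windows 8 or later; FILETIME holds 100-ns units, and converting to SYSTEMTIME for printing still exposes only milliseconds):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    SYSTEMTIME st;

    GetSystemTimePreciseAsFileTime(&ft);   /* high-resolution system time */
    FileTimeToSystemTime(&ft, &st);        /* convert for display */

    printf("The system time is: %02d:%02d:%02d.%03d\n",
           st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    return 0;
}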
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
Wait for the next second to start, by repeatedly calling time() until the value changes.
Save that value from time().
Save the value from clock().
Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops on time() until you are into the next second. But how far into it are you? It could be half a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
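A sketch of the QueryPerformanceCounter() route mentioned above (Windows-specific; QueryPerformanceFrequency() reports the tick rate):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t1, t2;
    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&t1);

    Sleep(100);                         /* something to measure */

    QueryPerformanceCounter(&t2);
    double ms = (double)(t2.QuadPart - t1.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}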
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
t1 = clock();
// do something
t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?

Creating a timeout using time and difftime

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function I have is part of an API for our existing hardware.
The API wait_events takes one argument, a time in milliseconds. So I am trying to get the start time before the while loop, using time() to get the number of seconds. Then, after one iteration of the loop, I get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out return 0
 * If an event timeouts out before an event return -1 */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;

    /* Get the initial time */
    start = time(NULL);

    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }

        /* Get the end time after each iteration */
        end = time(NULL);

        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The function will be called like this.
int main(void)
{
#define TIMEOUT 500 /* 1/2 sec */
    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }

    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int, depending on your platform, counting the number of seconds since January 1st 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
Without calling a Sleep() function, this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like that:
while(true) {
    Sleep(100); // lets say you want a precision of 100 ms
    // Do the compare time stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is System V based, so you can synchronize using IPC.
