Creating a timeout using time and difftime - c

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function is part of an API I am creating for our existing hardware.
The API's wait_events takes one argument, a timeout in milliseconds. So I get the start time before the while loop, using time() to get the number of seconds. Then after each iteration of the loop I get the time again, compute the difference, and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out, return 0.
 * If the time out expires before an event occurs, return -1. */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;
    /* Get the initial time */
    start = time(NULL);
    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }
        /* Get the end time after each iteration */
        end = time(NULL);
        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The calling function will look like this.
int main(void)
{
#define TIMEOUT 500 /* 1/2 sec */
    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }
    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
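For reference, a sketch of where the usleep() call presumably went in the question's polling loop (the identifiers are from the question's code; the usleep argument is whatever value was being measured above):

while(TRUE) {
    if(open_device_flag == TRUE) {
        device_evt.event_id = EVENT_DEV_OPEN;
        return TRUE;
    }
    usleep(1000); /* release the CPU between polls; vary this to reproduce the numbers above */
    end = time(NULL);
    time_diff = difftime(end, start); /* note: end first, then start */
    if(time_diff > timeout) {
        return FALSE; /* timed out before getting an event */
    }
}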

You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int, depending on your platform, counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
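Putting these notes together, a minimal self-contained sketch of the simplified loop with a short nanosleep() added so other threads get a chance to run (the 10 ms nap is an arbitrary choice):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);
    struct timespec nap = { 0, 10 * 1000000L }; /* 10 ms */

    for (;;) {
        /* try something */
        if (time(NULL) > start + 5) {
            printf("5s timeout!\n");
            break;
        }
        nanosleep(&nap, NULL); /* release the CPU between checks */
    }
    return 0;
}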

Without calling a Sleep() function this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like this:
while(true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the compare time stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is a System V, so you can sync using IPC.
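As an illustration, a minimal pthreads sketch of protecting the shared flag with a mutex; this is just one option among the mutex/semaphore/IPC approaches mentioned above, and the flag name follows the question's code:

#include <pthread.h>

static pthread_mutex_t flag_lock = PTHREAD_MUTEX_INITIALIZER;
static int open_device_flag = 0;

/* called from the polling loop */
int check_flag(void)
{
    int flag;
    pthread_mutex_lock(&flag_lock);
    flag = open_device_flag;   /* read the shared flag under the lock */
    pthread_mutex_unlock(&flag_lock);
    return flag;
}

/* called from the thread that detects the event */
void set_flag(void)
{
    pthread_mutex_lock(&flag_lock);
    open_device_flag = 1;
    pthread_mutex_unlock(&flag_lock);
}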

Related

Measure time for sending a file via socket in C [duplicate]

What is the method to use a timer in C? I need to wait up to 500 ms for a job. Please mention any good way to do this. I used sleep(3); but this method does not do any work during that time. I have something that keeps trying until that time to get any input.
Here's a solution I used (it needs #include <time.h>, plus #include <stdio.h> for the printf):
int msec = 0, trigger = 10; /* 10ms */
int iterations = 0;
clock_t before = clock();
do {
    /*
     * Do something to busy the CPU just here while you drink a coffee
     * Be sure this code will not take more than `trigger` ms
     */
    clock_t difference = clock() - before;
    msec = difference * 1000 / CLOCKS_PER_SEC;
    iterations++;
} while ( msec < trigger );
printf("Time taken %d seconds %d milliseconds (%d iterations)\n",
       msec/1000, msec%1000, iterations);
You can use the clock_t type and the clock() function from time.h.
Store the start time in a clock_t by calling clock(), and check the elapsed time by comparing the difference between the stored time and the current time.
Yes, you need a loop. If you already have a main loop (most GUI event-driven stuff does) you can probably stick your timer into that. Use:
#include <time.h>
time_t my_t, fire_t;
Then (for times over 1 second), initialize your timer by reading the current time:
my_t = time(NULL);
Add the number of seconds your timer should wait and store it in fire_t. A time_t is essentially an integer count of seconds, so you may need to cast it.
Inside your loop do another
my_t = time(NULL);
if (my_t > fire_t) then consider the timer fired and do the stuff you want there. That will probably include resetting it by doing another fire_t = time(NULL) + seconds_to_wait for next time.
A time_t is a somewhat antiquated unix method of storing time as the number of seconds since midnight 1/1/1970 but it has many advantages. For times less than 1 second you need to use gettimeofday() (microseconds) or clock_gettime() (nanoseconds) and deal with a struct timeval or struct timespec which is a time_t and the microseconds or nanoseconds since that 1 second mark. Making a timer works the same way except when you add your time to wait you need to remember to manually do the carry (into the time_t) if the resulting microseconds or nanoseconds value goes over 1 second. Yes, it's messy. See man 2 time, man gettimeofday, man clock_gettime.
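For instance, a small sketch of the manual carry described above, adding a millisecond delay to a struct timespec deadline (the helper name timespec_add_ms is made up for illustration):

#include <time.h>

/* Add `ms` milliseconds to *t, carrying overflow into tv_sec. */
void timespec_add_ms(struct timespec *t, long ms)
{
    t->tv_sec  += ms / 1000;
    t->tv_nsec += (ms % 1000) * 1000000L;
    if (t->tv_nsec >= 1000000000L) {  /* manual carry past the 1 second mark */
        t->tv_sec  += 1;
        t->tv_nsec -= 1000000000L;
    }
}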
sleep(), usleep(), nanosleep() have a hidden benefit. You see it as pausing your program, but what they really do is release the CPU for that amount of time. Repeatedly polling by reading the time and comparing to the done time (are we there yet?) will burn a lot of CPU cycles which may slow down other programs running on the same machine (and use more electricity/battery). It's better to sleep() most of the time then start checking the time.
If you're trying to sleep and do work at the same time you need threads.
Maybe this example helps you
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

/*
 * Implementation of a simple timeout
 * Input: count of milliseconds as a number
 * Usage:
 *   setTimeout(1000)  - timeout of 1 second
 *   setTimeout(10100) - timeout of 10 seconds and 100 milliseconds
 */
void setTimeout(int milliseconds)
{
    // If milliseconds is less than or equal to 0,
    // simply return from the function without throwing an error
    if (milliseconds <= 0) {
        fprintf(stderr, "Count of milliseconds for timeout is less than or equal to 0\n");
        return;
    }
    // the current time in milliseconds
    int milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    // the time in milliseconds at which to return from this timeout
    int end = milliseconds_since + milliseconds;
    // busy-wait until the needed time comes
    do {
        milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    } while (milliseconds_since <= end);
}

int main()
{
    // input from the user for the delay time in seconds
    int delay;
    printf("Enter delay: ");
    scanf("%d", &delay);
    // count down to the rocket launch while the delay is more than 0
    do {
        // erase the previous line and display the remaining delay
        printf("\033[ATime left to run rocket: %d\n", delay);
        // a timeout between updates of the display
        setTimeout(1000);
        // decrease the delay by 1
        delay--;
    } while (delay >= 0);
    // a string for displaying the rocket
    char rocket[] = "-->";
    // a string for displaying the whole trace of the rocket and the rocket itself
    char *rocket_trace = (char *) malloc(100 * sizeof(char));
    // display the trace of the rocket from start to end
    int i;
    char passed_way[100] = "";
    for (i = 0; i <= 50; i++) {
        setTimeout(25);
        sprintf(rocket_trace, "%s%s", passed_way, rocket);
        passed_way[i] = ' ';
        printf("\033[A");
        printf("| %s\n", rocket_trace);
    }
    // erase the line and write a new one
    printf("\033[A");
    printf("\033[2K");
    puts("Good luck!");
    return 0;
}
Compile the file, run it, and delete it afterwards (my preference):
$ gcc timeout.c -o timeout && ./timeout && rm timeout
Try running it yourself to see the result.
Notes:
Testing environment
$ uname -a
Linux wlysenko-Aspire 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ gcc --version
gcc (Ubuntu 4.8.5-2ubuntu1~14.04.1) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Need some help in C code for optimization (Poll + delay/sleep)

Currently I'm polling the register to get the expected value, and now I want to reduce the CPU usage and increase the performance.
So my idea is: poll for a certain time (say 10 ms), and if we don't get the expected value, wait for some time (a delay/sleep in ms, like udelay(10*1000) or usleep(10*1000)), then continue polling for a longer period (say 100 ms); if we still don't get the expected value, sleep/delay for 100 ms, and so on, until the maximum timeout value is reached.
Please let me know if anything is unclear.
This is the old code:
#include <stdio.h>    /* for printf, perror */
#include <stdlib.h>   /* for exit */
#include <sys/time.h> /* for setitimer */
#include <unistd.h>   /* for pause */
#include <signal.h>   /* for signal */

#define INTERVAL 500 //timeout in ms

static int timedout = 0;
struct itimerval it_val; /* for setting itimer */
char temp_reg[2];

void DoStuff(void); /* forward declaration */

int main(void)
{
    /* Upon SIGALRM, call DoStuff().
     * Set interval timer. We want frequency in ms,
     * but the setitimer call needs seconds and useconds. */
    if (signal(SIGALRM, (void (*)(int)) DoStuff) == SIG_ERR)
    {
        perror("Unable to catch SIGALRM");
        exit(1);
    }
    it_val.it_value.tv_sec = INTERVAL/1000;
    it_val.it_value.tv_usec = (INTERVAL*1000) % 1000000;
    it_val.it_interval = it_val.it_value;
    if (setitimer(ITIMER_REAL, &it_val, NULL) == -1)
    {
        perror("error calling setitimer()");
        exit(1);
    }
    do
    {
        temp_reg[0] = read_reg();
        //Read the register here and copy the value into the char array (temp_reg)
        if (timedout == 1)
            return -1; //Timed out
    } while (temp_reg[0] != 0); //Check the value and if nonzero read the register again (poll)
}

/*
 * DoStuff
 */
void DoStuff(void)
{
    timedout = 1;
    printf("Timer went off.\n");
}
Now I want to optimize this to reduce the CPU usage and improve the performance.
Can anyone help me with this issue?
Thanks for your help on this.
Currently I'm polling the register to get the expected value [...]
wow wow wow, hold on a moment here, there is a huge story hidden behind this sentence; what is "the register"? what is "the expected value"? What does read_reg() do? are you polling some external hardware? Well then, it all depends on how your hardware behaves.
There are two possibilities:
Your hardware buffers the values that it produces. This means that the hardware will keep each value available until you read it; it will detect when you have read the value, and then it will provide the next value.
Your hardware does not buffer values. This means that values are being made available in real time, for an unknown length of time each, and they are replaced by new values at a rate that only your hardware knows.
If your hardware is buffering, then you do not need to be afraid that some values might be lost, so there is no need to poll at all: just try reading the next value once and only once, and if it is not what you expect, sleep for a while. Each value will be there when you get around to reading it.
If your hardware is not buffering, then there is no strategy of polling and sleeping that will work for you. Your hardware must provide an interrupt, and you must write an interrupt-handling routine that will read every single new value as quickly as possible from the moment that it has been made available.
Here is some pseudo code that might help:
do
{
    // Pseudo code
    start_time = get_current_time();
    do
    {
        temp_reg[0] = read_reg();
        //Read the register here and copy the value into the char array (temp_reg)
        if (timedout == 1)
            return -1; //Timed out
        // Pseudo code
        stop_time = get_current_time();
        if (stop_time - start_time > some_limit) break;
    } while (temp_reg[0] != 0);
    if (temp_reg[0] != 0)
    {
        usleep(some_time);
        start_time = get_current_time();
    }
} while (temp_reg[0] != 0);
To turn the pseudo code into real code, see https://stackoverflow.com/a/2150334/4386427
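For instance, a sketch of the get_current_time() placeholder returning milliseconds, using clock_gettime() as the linked answer suggests (CLOCK_MONOTONIC so the value is unaffected by wall-clock adjustments):

#include <stdint.h>
#include <time.h>

/* Milliseconds from a monotonic clock; suitable for measuring intervals. */
int64_t get_current_time(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}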

Run code for x amount of time

To preface, I am on a Unix (linux) system using gcc.
What I am stuck on is how to accurately implement a way to run a section of code for a certain amount of time.
Here is an example of something I have been working with:
struct timeb start, check;
int64_t duration = 10000;
int64_t elapsed = 0;
ftime(&start);
while ( elapsed < duration ) {
    // do a set of tasks
    ftime(&check);
    elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
}
I was thinking this would have carried on for 10000 ms, or 10 seconds, but it didn't; it ended almost instantly. I was basing this off other questions such as How to get the time elapsed in C in milliseconds? (Windows). But then I thought that if, upon the first call of ftime, the struct is time = 1, millitm = 999 and on the second call time = 2, millitm = 01, it would calculate the elapsed time as 1002 milliseconds. Is there something I am missing?
Also the suggestions in the various stackoverflow questions, ftime() and gettimeofday(), are listed as deprecated or legacy.
I believe I could convert the start time into milliseconds, and the check time into millseconds, then subtract start from check. But milliseconds since the epoch requires 42 bits and I'm trying to keep everything in the loop as efficient as possible.
What approach could I take towards this?
The code is calculating the elapsed time incorrectly; elapsed should be assigned, not accumulated:
// elapsed += ((check.time - start.time) * 1000) + (check.millitm - start.millitm);
elapsed = ((check.time - start.time) * (int64_t)1000) + (check.millitm - start.millitm);
There is some concern about check.millitm - start.millitm. On systems where struct timeb declares millitm as an unsigned short (see below), it can be expected that the millitm values will be promoted to int before the subtraction occurs, so the difference will be in the range [-1000 ... 1000].
struct timeb {
    time_t         time;
    unsigned short millitm;
    short          timezone;
    short          dstflag;
};
IMO, more robust code would handle ms conversion in a separate helper function. This matches OP's "I believe I could convert the start time into milliseconds, and the check time into millseconds, then subtract start from check."
int64_t timeb_to_ms(struct timeb *t) {
    return (int64_t)t->time * 1000 + t->millitm;
}

struct timeb start;
ftime(&start);
int64_t start_ms = timeb_to_ms(&start);
int64_t duration = 10000 /* ms */;
int64_t elapsed = 0;
while (elapsed < duration) {
    // do a set of tasks
    struct timeb check;
    ftime(&check);
    elapsed = timeb_to_ms(&check) - start_ms;
}
If you want efficiency, let the system send you a signal when a timer expires.
Traditionally, you can set a timer with a resolution in seconds with the alarm(2) syscall.
The system then sends you a SIGALRM when the timer expires. The default disposition of that signal is to terminate.
If you handle the signal, you can longjmp(2) from the handler to another place.
I don't think it gets much more efficient than SIGALRM + longjmp (with an asynchronous timer, your code basically runs undisturbed without having to do any extra checks or calls).
Below is an example for you:
#define _XOPEN_SOURCE
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>
static jmp_buf jmpbuf;
void hndlr();
void loop();
int main(){
/*sisv_signal handlers get reset after a signal is caught and handled*/
if(SIG_ERR==sysv_signal(SIGALRM,hndlr)){
perror("couldn't set SIGALRM handler");
return 1;
}
/*the handler will jump you back here*/
setjmp(jmpbuf);
if(0>alarm(3/*seconds*/)){
perror("couldn't set alarm");
return 1;
}
loop();
return 0;
}
void hndlr(){
puts("Caught SIGALRM");
puts("RESET");
longjmp(jmpbuf,1);
}
void loop(){
int i;
for(i=0; ; i++){
//print each 100-milionth iteration
if(0==i%100000000){
printf("%d\n", i);
}
}
}
If alarm(2) isn't enough, you can use timer_create(2) as EOF suggests.
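For completeness, a rough sketch of the timer_create(2) route: a one-shot 500 ms POSIX timer delivering SIGALRM. The handler and flag names here are made up for illustration, and older glibc needs linking with -lrt:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t fired = 0;

static void on_alarm(int sig) { (void)sig; fired = 1; }

int main(void)
{
    timer_t timerid;
    struct sigevent sev = { 0 };
    struct itimerspec its = { 0 };

    signal(SIGALRM, on_alarm);

    sev.sigev_notify = SIGEV_SIGNAL; /* deliver a signal on expiry */
    sev.sigev_signo = SIGALRM;
    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) == -1) {
        perror("timer_create");
        exit(1);
    }

    its.it_value.tv_nsec = 500L * 1000000L; /* one-shot 500 ms; it_interval stays 0 */
    if (timer_settime(timerid, 0, &its, NULL) == -1) {
        perror("timer_settime");
        exit(1);
    }

    while (!fired)
        pause(); /* sleep until a signal arrives (production code would
                    block SIGALRM and use sigsuspend() to avoid a race) */

    puts("timer fired");
    return 0;
}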

Simple <Time.h> program takes large amount CPU

I was trying to familiarize myself with the C time.h library by writing something simple in VS. The following code simply prints the value of x added to itself every two seconds:
int main() {
    time_t start = time(NULL);
    time_t clock = time(NULL);
    time_t clockTemp = time(NULL); //temporary clock
    int x = 1;
    //program will continue for a minute (60 sec)
    while (clock <= start + 58) {
        clockTemp = time(NULL);
        if (clockTemp >= clock + 2) { //if 2 seconds has passed
            clock = clockTemp;
            x = ADD(x);
            printf("%d at %d\n", x, timeDiff(start, clock));
        }
    }
}

int timeDiff(int start, int at) {
    return at - start;
}
My concern is with the amount of CPU that this program takes, about 22%. I figure this problem stems from the constant updating of clockTemp (just below the while statement), but I'm not sure how to fix this issue. Is it possible that this is a Visual Studio problem, or is there a special way to check for time?
Solution
The code needed a sleep call so that the loop wouldn't spin constantly. I added it with #include <windows.h> and put Sleep(2000) // 2 second sleep at the end of the while loop:
while (clock <= start + 58) {
    ...
    Sleep(2000);
}
The problem is not in the way you are checking the current time. The problem is that there is nothing to limit the frequency with which the loop runs. Your program continues to execute statements as quickly as it can, and eats up a ton of processor time. (In the absence of other programs, on a single-threaded CPU, it would use 100% of your processor time.)
You need to add a "sleep" method inside your loop, which will indicate to the processor that it can stop processing your program for a short period of time. There are many ways to do this; this question has some examples.
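For a portable (POSIX) alternative to the Windows-only Sleep(2000) above, a small sketch using nanosleep(); the helper name sleep_ms is made up for illustration:

#include <time.h>

/* Sleep for the given number of milliseconds (POSIX). */
void sleep_ms(long ms)
{
    struct timespec ts;
    ts.tv_sec  = ms / 1000;
    ts.tv_nsec = (ms % 1000) * 1000000L;
    nanosleep(&ts, NULL);
}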

Is this a correct use of sleep()?

I'm using sleep like this to grab a frame every 1/25th of a second. OS is Debian 6 armel.
#define VIDEO_FRAME_RATE 25.0f
while (RECORDING) {
    sprintf(buffer, "Something from a data struct that is updated\n");
    fprintf(Output, buffer);
    usecToSleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;
    usleep(usecToSleep);
}
Question: What is the guarantee that the loop will output the buffer to the Output file descriptor every 1/25th of a second?
Is there a better way to do this in C? I need it to be as precise as possible to prevent drifting.
Thank you.
Your "recording operation" still takes some time...
So you need to calculate time that should be spent on on frame (usecTosleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;) and then calculate time operation really took, adjust sleep time accordingly:
usecToSleep = (1.0f/VIDEO_FRAME_RATE) * 1000000;
lastFrameUsec = 0;
while (RECORDING) {
    sprintf(buffer, "Something from a data struct that is updated\n");
    fprintf(Output, buffer);
    // currentFrameUsec - lastFrameUsec = actual time spent on the operation
    currentFrameUsec = getUsecElapsedFromStart();
    actualSleep = usecToSleep - (currentFrameUsec - lastFrameUsec);
    // If there's time left to sleep, sleep
    if(actualSleep > 0){
        usleep(actualSleep);
        lastFrameUsec = getUsecElapsedFromStart();
    } else {
        lastFrameUsec = currentFrameUsec;
    }
}
I'm not aware of a multi-platform getUsecElapsedFromStart(), so you will probably have to implement your own, for example like this one:
int getUsecElapsedFromStart(const struct timespec *tstart)
{
    struct timespec tnow;
    clock_gettime(CLOCK_MONOTONIC, &tnow);
    return (int)((tnow.tv_sec * 1000000 + tnow.tv_nsec / 1000) -
                 (tstart->tv_sec * 1000000 + tstart->tv_nsec / 1000));
}

struct timespec tstart;
clock_gettime(CLOCK_MONOTONIC, &tstart);
while(RECORDING){
    // ...
    currentFrameUsec = getUsecElapsedFromStart(&tstart);
}
In response to your first question, there is no such guarantee. usleep() promises only that it will sleep at least as long as you tell it to. But it may sleep longer:
    The usleep() function suspends execution of the calling process for (at least) usec microseconds. The sleep may be lengthened slightly by any system activity or by the time spent processing the call or by the granularity of system timers.
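To prevent the drift the question worries about, one option (a sketch, assuming Linux/POSIX; RECORDING is the flag from the question's code) is to sleep until an absolute deadline with clock_nanosleep() and advance the deadline by exactly one frame period each iteration, so scheduling latency never accumulates:

#include <time.h>

#define FRAME_USEC (1000000 / 25) /* 40 ms per frame at 25 fps */

struct timespec next;
clock_gettime(CLOCK_MONOTONIC, &next);

while (RECORDING) {
    /* ... grab and write the frame ... */

    /* advance the deadline by exactly one frame period, with manual carry */
    next.tv_nsec += FRAME_USEC * 1000L;
    if (next.tv_nsec >= 1000000000L) {
        next.tv_sec  += 1;
        next.tv_nsec -= 1000000000L;
    }
    /* sleep until the absolute deadline; a late frame doesn't shift later ones */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
}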
