C/UNIX: Execute a function once every x milliseconds

How do I execute a function once every 1000 milliseconds using alarm() or sleep()? I want the program to do something else if the function does not execute or complete within 1000 milliseconds.
EDIT: added Pseudocode
    while (true) {
        alarm(1000);
        execute function;
        sleep(1000);
        alarm(0);
    }
Now, if alarm(1000) raises SIGALRM, is that where I could call the other function?
I'm new to this sort of thing, so I'm not even sure I'm using it right.

How crisp is the requirement, i.e., how much jitter can you tolerate?
What version of UNIX?
Basically, if this is a hard deadline -- it sounds like one -- you're going to need to do some special stuff, because basic UNIX isn't really a hard-real-time system.
Let's assume for the moment that you mean Linux. You'll want to
use nice(2) to raise the process priority
use a fine-grained timer, as with ualarm(3)
That will probably do. If you need finer-grained, or more predictable, timing, then you probably need to write a kernel extension so it can be driven with a kernel timer. (Solaris has some improved support for hard real-time, but it's still not really a hard real-time system.)
Life will get considerably easier if you can use a real-time Linux or UNIX. Here's a list of some options. Here's an article you might find useful.
Update
You should also look at nanosleep(2) or setitimer(2). Notice that all of these specify that the process sleeps for at least the interval in the argument. If you have a hard deadline, you need to wait for somewhat less than the actual interval and then account for whatever remains of your thousand milliseconds.

I have used this function on Linux to "sleep" in milliseconds:
    void Sleep(unsigned int milliSeconds)
    {
        struct timespec req = {0};
        time_t seconds = (time_t)(milliSeconds / 1000);

        milliSeconds -= seconds * 1000;
        req.tv_sec = seconds;
        req.tv_nsec = milliSeconds * 1000000L;

        /* nanosleep() writes the remaining time back into req when it is
         * interrupted by a signal, so the loop resumes where it left off. */
        while (nanosleep(&req, &req) == -1 && errno == EINTR)
            continue;
    }

    while (1) {
        sleep(1);
        /* Act */
    }
If you need tighter delays, the normal way to do it is to call select() with no fds and a low timeout.

Related

Linux timer interval

I want to run a timer with an interval of 5 ms. I created a Linux timer, and when sigalrm_handler is called I check the elapsed time since the previous call. I'm getting times like 4163, 4422, 4266, 4443, 4470, 4503, 4288 microseconds, when I want the intervals to be about 5000 microseconds with the least possible error. I don't know why the interval is not constant; it varies and is much lower than it should be.
Here is my code:
    static int time_count;
    static int counter;
    struct itimerval timer = {0};

    void sigalrm_handler(int signum)
    {
        Serial.print("SIGALRM received, time: ");
        Serial.println(micros() - time_count);
        time_count = micros();
    }

    void setup() {
        Serial.begin(9600);
        timer.it_value.tv_sec = 1;
        timer.it_interval.tv_usec = 5000;
        signal(SIGALRM, &sigalrm_handler);
        setitimer(ITIMER_REAL, &timer, NULL);
        time_count = micros();
    }
I want to run a timer with interval of 5 ms.
You probably cannot get that period reliably, because it is smaller than what typical PC hardware can handle.
As a rule of thumb, 50 Hz (or perhaps 100Hz) is probably the highest reliable frequency you could get. And it is not a matter of software, but of hardware.
Think of your typical processor cache (a few megabytes): filling it can take a few milliseconds. Or think of the time to handle a page fault; it could easily exceed a millisecond.
And the Intel Edison is not a top-fast processor. I wouldn't be surprised if converting a number to a string and displaying that string on some screen took about a millisecond (but I leave you to check that). This could explain your figures.
Regarding software, see also time(7) (or consider perhaps some busy waiting approach inside the kernel; I don't recommend that).
Also look into /proc/interrupts (see proc(5)) by running cat /proc/interrupts a few times in a shell. You'll probably see that the kernel gets interrupted less frequently than once every millisecond or few.
BTW, your signal handler calls non-async-signal-safe functions (so the handler invokes undefined behavior). Read signal(7) & signal-safety(7).
So it looks like your entire approach is wrong.
Maybe you want some RTOS, at least if you need some hard real-time (and then, you might consider upgrading your hardware to something faster and more costly).

Implementing time delay function in C

I want to implement a delay function using null loops. But the amount of time needed to complete a loop once is compiler- and machine-dependent. I want my program to determine the time on its own and delay the program for the specified amount of time. Can anyone give me any idea how to do this?
N. B. There is a function named delay() which suspends the system for the specified milliseconds. Is it possible to suspend the system without using this function?
First of all, you should never sit in a loop doing nothing. Not only does it waste energy (it keeps your CPU 100% busy counting your loop counter); in a multitasking system it also degrades overall system performance, because your process keeps getting time slices as it appears to be doing something.
Next point is ... I don't know of any delay() function. This is not standard C. In fact, until C11, there was no standard at all for things like this.
POSIX to the rescue: there are usleep(3) (deprecated) and nanosleep(2). If you're on a POSIX-compliant system, you'll be fine with those. They block (meaning the scheduler of your OS knows they have nothing to do and schedules them again only after the end of the call), so you don't waste CPU power.
If you're on Windows, for a direct delay in code, you only have Sleep(). Note that THIS function takes milliseconds, but normally has a precision of only about 15 ms. Often good enough, but not always. If you need better precision on Windows, you can request more timer interrupts using timeBeginPeriod(): timeBeginPeriod(1) requests a timer interrupt every millisecond. Don't forget to call timeEndPeriod() with the same value as soon as you no longer need the precision, because extra timer interrupts come at a cost: they keep the system busy, thus wasting more energy.
I had a somewhat similar problem developing a little game recently: I needed constant ticks at 10 ms intervals. This is what I came up with for POSIX-compliant systems and for Windows. The ticker_wait() function in that code just suspends until the next tick; maybe this is helpful if your original intent was some timing issue.
Unless you're on a real-time operating system, anything you program yourself directly is not going to be accurate. You need to use a system function to sleep for some amount of time like usleep in Linux or Sleep in Windows.
Because the operating system could interrupt the process sooner or later than the exact time expected, you should get the system time before and after you sleep to determine how long you actually slept for.
Edit:
On Linux, you can get the current system time with gettimeofday, which has microsecond resolution (whether the actual clock is that accurate is a different story). On Windows, you can do something similar with GetSystemTimeAsFileTime:
    int gettimeofday(struct timeval *tv, struct timezone *tz)
    {
        /* microseconds between the Windows epoch (1601-01-01)
           and the Unix epoch (1970-01-01) */
        const unsigned __int64 epoch_diff = 11644473600000000;
        unsigned __int64 tmp;
        FILETIME t;

        if (tv) {
            GetSystemTimeAsFileTime(&t);

            tmp = 0;
            tmp |= t.dwHighDateTime;
            tmp <<= 32;
            tmp |= t.dwLowDateTime;

            tmp /= 10;              /* 100-ns intervals -> microseconds */
            tmp -= epoch_diff;

            tv->tv_sec = (long)(tmp / 1000000);
            tv->tv_usec = (long)(tmp % 1000000);
        }
        return 0;
    }
You could find the exact time at some point, then sit in a while loop that rechecks the time until it reaches whatever time you want; then it breaks out and continues executing the rest of your program. I'm not sure I see much benefit in looping rather than just using a delay function, though.

writing a string every n milliseconds

I am trying to write a simple Win32 console application in C to simulate stock price ticks. I need to specify the time interval so that every n milliseconds a new price is published.
The end goal is to write test data to a database and stress-test an application which is supposed to react to new ticks and perform calculations.
My simple price server will be structured as follows
    int main (void)
    {
        int n = 0;  /* set interval to 1 millisecond */

        while (true) {
            printf ("New price...\n");
            /* Publish price and write to database */
            SleepExecution (n);
        }
        return 0;
    }
I have not been able to find an API call which will allow me to stop the execution of the above code for an arbitrary n milliseconds. Sleep looks to be the solution, but I would prefer not to use it.
Are there any libraries you would recommend using or samples on the web I could draw inspiration from?
CreateWaitableTimer is great for running code periodically. Combined with timeBeginPeriod to increase the system timer rate, and turning up your thread priority so it wakes on time, you should have a solution that's 99.99% effective.
Suspending execution is an operating system-specific operation.
For MS Windows (Win 95 and after), use the function Sleep() where the parameter is the minimum rescheduling time in milliseconds.
For Linux, there are several ways, but int nanosleep(const struct timespec *req, struct timespec *rem); allows nanosecond precision for most uses. sleep() allows one second precision, so it probably isn't useful for your purposes.
I see that Win32 was mentioned. To use Sleep(), the program can probably use a constant value, depending on the requirements, but if high consistency is required, then dynamically compute the delay based on how much longer it is until the next time an update is needed.

Loops/timers in C

How does one create a timer in C?
I want a piece of code to continuously fetch data from a GPS parser's output.
Are there good libraries for this or should it be self written?
Simplest method available:
    #include <pthread.h>
    #include <unistd.h>

    void *do_smth_periodically(void *data)
    {
        int interval = *(int *)data;

        for (;;) {
            do_smth();
            usleep(interval);
        }
    }

    int main()
    {
        pthread_t thread;
        int interval = 5000;

        pthread_create(&thread, NULL, do_smth_periodically, &interval);
        ...
    }
On POSIX systems you can create (and catch) an alarm. alarm() is simple, but its resolution is one second. If you need finer resolution than seconds, use setitimer.
    struct itimerval tv;

    tv.it_interval.tv_sec = 0;
    tv.it_interval.tv_usec = 100000; /* when timer expires, reset to 100 ms */
    tv.it_value.tv_sec = 0;
    tv.it_value.tv_usec = 100000;    /* 100 ms == 100000 us */

    setitimer(ITIMER_REAL, &tv, NULL);
And catch the timer at each interval by installing a handler with sigaction.
One doesn't "create a timer in C". There is nothing about timing or scheduling in the C standard, so how that is accomplished is left up to the Operating System.
This is probably a reasonable question for a C noob, as many languages do support things like this. Ada does, and I believe the next version of C++ will probably do so (Boost has support for it now). I'm pretty sure Java can do it too.
On Linux, probably the best way would be to use pthreads. In particular, you need to call pthread_create() and pass it the address of your routine, which presumably contains a loop with a sleep() (or usleep()) call at the bottom.
Note that if you want to do something that approximates real-time scheduling, just doing a dumb usleep() isn't good enough because it won't account for the execution time of the loop itself. For those applications you will need to set up a periodic timer and wait on that.
SDL provides a cross platform timer in C.
http://www.libsdl.org/cgi/docwiki.cgi/SDL_AddTimer
If you're using Windows, you can use SetTimer(); otherwise you can build a timer out of timeGetTime() and _beginthreadex(), along with a queue of timers with callbacks.
The question about a timer is quite unspecific, though there are two functions that come to mind that may help you:
sleep(): this function causes execution to stop for a specified number of seconds. You can also use usleep() and nanosleep() if you want to specify the sleep time more precisely.
gettimeofday(): using this function you can measure the time between two points.
See manpages for further explanation :)
If the gps data is coming from some hardware device, like over a serial port, then one thing that you may consider is changing the architecture around so that the parser kicks off the code that you are trying to run when more data is available.
It could do this through a callback function or it could send an event - the actual implementation would depend on what you have available.

getting elapsed time since process start

I need a way to get the elapsed time (wall-clock time) since a program started, in a way that is resilient to users meddling with the system clock.
On Windows, the non-standard clock() implementation doesn't do the trick, as it appears to work just by calculating the difference from the time sampled at startup, so I get negative values if I "move the clock hands back".
On UNIX, clock()/getrusage() refer to processor time, whereas using functions such as gettimeofday() to sample timestamps has the same problem as using clock() on Windows.
I'm not really interested in precision, and I've hacked together a solution: a half-second-resolution timer spinning in the background counters the clock skews when they happen (if the difference between the sampled time and the expected time exceeds one second, I use the expected time as the new baseline). But I think there must be a better way.
I guess you can always start some kind of timer. For example, under Linux, a thread that would have a loop like this:
    static void *timer_thread(void *arg)
    {
        struct timespec delay;
        unsigned int msecond_delay = ((app_state_t *)arg)->msecond_delay;

        delay.tv_sec = 0;
        delay.tv_nsec = msecond_delay * 1000000;

        while (1) {
            some_global_counter_increment();
            nanosleep(&delay, NULL);
        }
    }
Where app_state_t is an application structure of your choice where you store variables. If you want to prevent tampering, you need to be sure no one kills your thread.
For POSIX, use clock_gettime() with CLOCK_MONOTONIC.
I don't think you'll find a cross-platform way of doing that.
On Windows what you need is GetTickCount (or maybe QueryPerformanceCounter and QueryPerformanceFrequency for a high resolution timer). I don't have experience with that on Linux, but a search on Google gave me clock_gettime.
Wall-clock time can be calculated with the time() call.
If you have a network connection, you can always acquire the time from an NTP server. This will obviously not be affected by changes to the local clock.
/proc/uptime on Linux maintains the number of seconds that the system has been up (and the number of seconds it has been idle), which should be unaffected by changes to the clock, as it's maintained by the system interrupt (jiffies / HZ). Perhaps Windows has something similar?