I find myself in need of a repeating block of code that I can execute. If I were in an object, I could simply pass self to the NSTimer scheduling methods. I am in a pure C project at the moment and I don't see any obvious NSTimer analogs. Is there a correct way to approach this?
If you really want the CF way of doing this:
CFRunLoopTimer is the Core Foundation counterpart you're probably looking for.
You'll need to construct a separate CFRunLoop on a different thread if you want the timer to fire asynchronously with respect to the main thread. Otherwise, you can use the main application thread by calling CFRunLoopGetCurrent() and adding the timer to that run loop in a particular run loop mode (e.g. kCFRunLoopDefaultMode).
Here's what you do (a complete sketch follows the list):
Initialize a CFRunLoopTimerContext struct (on the stack is fine)
Calculate the next time at which the timer fires, using type CFAbsoluteTime and CFAbsoluteTimeGetCurrent() plus some CFTimeInterval (a double)
Create the run loop timer with CFRunLoopTimerCreate, passing in the next fire time and the callback function pointer
Add the run loop timer to either a run loop on a separate thread or the main thread of the executable (see above)
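Putting those steps together, here's a minimal sketch that fires a repeating timer once per second on the main thread (the callback name and interval are just illustrative):

    #include <CoreFoundation/CoreFoundation.h>
    #include <stdio.h>

    /* Callback invoked by the run loop each time the timer fires. */
    static void TimerFired(CFRunLoopTimerRef timer, void *info) {
        printf("Timer fired\n");
    }

    int main(void) {
        /* Step 1: context struct on the stack (no user data here). */
        CFRunLoopTimerContext context = { 0, NULL, NULL, NULL, NULL };

        /* Step 2: next fire time = now + interval. */
        CFTimeInterval interval = 1.0; /* seconds */
        CFAbsoluteTime fireTime = CFAbsoluteTimeGetCurrent() + interval;

        /* Step 3: create the timer, passing the callback function pointer. */
        CFRunLoopTimerRef timer = CFRunLoopTimerCreate(
            kCFAllocatorDefault, fireTime, interval,
            0, 0, /* flags and order are unused; pass 0 */
            TimerFired, &context);

        /* Step 4: add it to the current (here: main) thread's run loop. */
        CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);

        CFRunLoopRun(); /* blocks here, servicing the timer */
        CFRelease(timer);
        return 0;
    }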
I don't know much of anything about the Foundation framework, but you can hack together a timer using the Mach/POSIX APIs, which let you tap into kernel services from your C application.
Using a combination of the mach_absolute_time() and nanosleep() system calls may allow you to emulate a timer. It's a bit nasty if you've never done any systems programming on a *NIX-like platform.
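As a rough illustration of that idea, here's a minimal sketch of a drift-compensating periodic loop (the period, iteration count, and tick() callback are all made up for the example):

    #include <mach/mach_time.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical work to do on each tick. */
    static void tick(void) { printf("tick\n"); }

    int main(void) {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb); /* converts Mach ticks to nanoseconds */

        const uint64_t period_ns = 100 * 1000 * 1000; /* 100 ms */
        uint64_t next = mach_absolute_time() * tb.numer / tb.denom + period_ns;

        for (int i = 0; i < 10; i++) {
            uint64_t now = mach_absolute_time() * tb.numer / tb.denom;
            if (next > now) {
                uint64_t wait = next - now;
                struct timespec ts = { wait / 1000000000ULL,
                                       wait % 1000000000ULL };
                nanosleep(&ts, NULL); /* NB: can wake early on EINTR */
            }
            tick();
            next += period_ns; /* advance by the period, not from "now",
                                  so error doesn't accumulate */
        }
        return 0;
    }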
On Linux, it's much easier: you can implement a POSIX timer or make use of clock_gettime() and clock_nanosleep() (POSIX functions), which are not available on the Mac.
There's lots of info on Linux/Unix systems programming in C but not so much for the Mac, so I'd recommend having a look at this question.
Also, I made my own little emulation of clock_gettime() and clock_nanosleep() for the Mac (shameless plug); find it here.
Related
The title says it all.
I have seen solutions in C++, for example this one, and some others (also C++) that are rather old.
I am open to using glib or any other open-source library. I only care about a Linux implementation.
I came across glib's timer functions but they are all synchronous in nature and not close to what setTimeout() does.
I can think of one solution: have a separate thread that continually checks, in a loop, whether a timer such as the one provided by GLib has expired, and then fires the corresponding function. Of course, that would be ridiculously inefficient.
I also came across this one here, which suggests the use of alarm(2). It can be used, but its granularity is in seconds only.
EDIT: Alas, alarm() cancels any previously set alarm().
You mention that you need more than one timer in your process, so you might want the POSIX timer_* family of functions (timer_create, timer_settime, etc). They have nanosecond granularity. You can specify a few different actions for when the timer expires: raise a signal, or call a function from a separate thread. You have to link with -lrt to use these functions.
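A minimal sketch of a one-shot, setTimeout()-style timer with this API, using SIGEV_THREAD so the callback runs on a separate thread (the callback and the 500 ms timeout are illustrative):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    /* Runs on a separate thread when the timer expires. */
    static void on_timeout(union sigval sv) {
        printf("timeout fired: %s\n", (const char *)sv.sival_ptr);
    }

    int main(void) {
        struct sigevent sev;
        memset(&sev, 0, sizeof sev);
        sev.sigev_notify = SIGEV_THREAD;        /* call a function... */
        sev.sigev_notify_function = on_timeout; /* ...this one... */
        sev.sigev_value.sival_ptr = "hello";    /* ...with this argument */

        timer_t timerid;
        if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) == -1) {
            perror("timer_create");             /* remember to link with -lrt */
            return 1;
        }

        /* Fire once after 500 ms; set it_interval too for a repeating timer. */
        struct itimerspec its;
        memset(&its, 0, sizeof its);
        its.it_value.tv_nsec = 500 * 1000 * 1000;
        timer_settime(timerid, 0, &its, NULL);

        sleep(1); /* keep the process alive long enough to see the callback */
        timer_delete(timerid);
        return 0;
    }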
There is also timerfd_create, which lets you detect timer expiration by a file descriptor becoming readable, using poll or select or their relatives.
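For instance, a small sketch of timerfd_create multiplexed through poll() (the interval and loop count are arbitrary; this is Linux-specific):

    #include <poll.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/timerfd.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* Timer expirations are delivered by this fd becoming readable. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);

        struct itimerspec its = { 0 };
        its.it_value.tv_sec = 1;    /* first expiry after 1 s */
        its.it_interval.tv_sec = 1; /* then every second */
        timerfd_settime(tfd, 0, &its, NULL);

        struct pollfd pfd = { .fd = tfd, .events = POLLIN };
        for (int i = 0; i < 3; i++) {
            poll(&pfd, 1, -1);      /* block until the timer expires */
            uint64_t expirations;   /* expirations since the last read */
            read(tfd, &expirations, sizeof expirations);
            printf("timer expired %llu time(s)\n",
                   (unsigned long long)expirations);
        }
        close(tfd);
        return 0;
    }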
If you want to use GLib and its timeout sources (g_timeout_add(), g_timeout_add_seconds(), etc.), then you will need to run a GLib main loop (GMainLoop) and run the rest of your application in a main loop callback too. This will be quite a large change if your application isn’t already written in an event-driven style, but would be necessary for polling sockets or running a UI anyway.
There’s an example of how to use a timeout source in an application here.
Note that GLib’s timer API (g_timer_*()) is not for running callbacks after a certain timeout, it’s for timing how long something takes (like a stopwatch). It’s not relevant here.
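For completeness, a minimal sketch of a timeout source driven by a GMainLoop (the delay and callback are illustrative):

    /* Build with: gcc timeout.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>

    /* Called by the main loop when the timeout elapses. */
    static gboolean on_timeout(gpointer user_data) {
        GMainLoop *loop = user_data;
        g_print("timeout fired\n");
        g_main_loop_quit(loop);
        return G_SOURCE_REMOVE; /* one-shot; G_SOURCE_CONTINUE would repeat */
    }

    int main(void) {
        GMainLoop *loop = g_main_loop_new(NULL, FALSE);
        g_timeout_add(500 /* ms */, on_timeout, loop);
        g_main_loop_run(loop); /* the rest of the app runs from callbacks */
        g_main_loop_unref(loop);
        return 0;
    }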
I have to implement a simple OS and a virtual machine for a project, supporting some basic functions. The OS will run on the virtual machine, and the virtual machine runs like a normal program on Linux.
Suppose that the virtual machine is currently executing within its time quantum.
How is it possible to receive some extra timer signals in order to divide the virtual machine's execution time into smaller quanta?
How many timers are available in my CPU? (That's more of a general question.)
Can I handle timer signals inside the virtual machine with a user-level interrupt handler?
Any help or guidance would be very appreciated.
Thank you
I suggest you use exactly one interrupt, and organize your timers in either a queue (for few timers, e.g. <50) or in a heap, a fast tree structure which, at any time, gives you access to the smallest element, that is, the next timer to be handled.
Thus you have one interrupt, one handler, and many timers with associated functions that will be called by that single handler.
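A minimal sketch of the sorted-list variant (all names are illustrative; for many timers, a heap would replace the list):

    /* Many software timers multiplexed onto one hardware timer interrupt.
       The list is kept sorted by expiry, so the single handler only ever
       looks at the head. */
    struct sw_timer {
        unsigned long expires;     /* absolute tick count at which to fire */
        void (*fn)(void *arg);     /* function to call on expiry */
        void *arg;
        struct sw_timer *next;
    };

    static struct sw_timer *head; /* earliest expiry first */

    /* Insert a timer, keeping the list sorted by expiry time. */
    void sw_timer_add(struct sw_timer *t) {
        struct sw_timer **p = &head;
        while (*p && (*p)->expires <= t->expires)
            p = &(*p)->next;
        t->next = *p;
        *p = t;
    }

    /* Called from the single timer interrupt, once per tick. */
    void sw_timer_tick(unsigned long now) {
        while (head && head->expires <= now) {
            struct sw_timer *t = head;
            head = t->next;
            t->fn(t->arg);         /* run the expired timer's function */
        }
    }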
In fact, a normal program also uses interrupts (at the system level), for example when it makes a system call.
At user level, you can use swapcontext()/makecontext() to simulate a system-level context switch, but when you want to get the time (to calculate the time difference), you have to use a syscall. So you'd better use the system timer directly; it's not a bad idea.
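To make the swapcontext()/makecontext() idea concrete, here's a minimal sketch switching between the main context and one task context (the names and the 64 KB stack size are arbitrary):

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024]; /* arbitrary stack size */

    static void task(void) {
        printf("in task\n");
        swapcontext(&task_ctx, &main_ctx); /* yield back to main */
        printf("task resumed\n");
    }

    int main(void) {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link = &main_ctx;      /* where to go when task() returns */
        makecontext(&task_ctx, task, 0);

        printf("switching to task\n");
        swapcontext(&main_ctx, &task_ctx); /* run task() until it yields */
        printf("back in main\n");
        swapcontext(&main_ctx, &task_ctx); /* resume task() after its yield */
        printf("task finished\n");
        return 0;
    }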
I would like to be able to 'capture' an hrtimer interrupt with a linux kernel module and replay the interrupt at a later period in time. Any thoughts on how to go about doing this?
Use case: A program that calls sleep(1). My module will grab the hrtimer interrupt when it fires after 1 second, wait for 'x' amount of time, then re-fire the interrupt, waking the process.
Note: I do not want to hook the sleep system call.
Thanks!
Quite honestly, writing a Linux kernel module just to modify the behavior of sleep() for a single application sounds like overkill.
For most cases you should be able to use a preloadable shared object to intercept/override the sleep() function family with your own implementations. The application will call your implementation and your code may then call the real function with a modified parameter list.
This method is much simpler and less intrusive than anything involving kernel programming, although it will not work if your application is statically linked or if it uses direct system calls instead of library functions.
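As a minimal sketch of such a preloadable object, here's an interposer that doubles every sleep() (the factor and file names are illustrative):

    /* slowsleep.c: build with
         gcc -shared -fPIC -o libslowsleep.so slowsleep.c -ldl
       and run the target program with
         LD_PRELOAD=./libslowsleep.so ./your_program
    */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <unistd.h>

    unsigned int sleep(unsigned int seconds) {
        /* Look up the real sleep() in the next object (normally libc). */
        unsigned int (*real_sleep)(unsigned int) =
            (unsigned int (*)(unsigned int))dlsym(RTLD_NEXT, "sleep");
        /* Call the real function with a modified parameter. */
        return real_sleep(seconds * 2);
    }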
Is there a way I can use the delay command and have something else running in the background?
Kinda, if you use interrupts. delay itself uses these. But it's not as elegant as a multi-threaded solution (which is probably what you're looking for). There is a Multi-Threading library for Arduino but I'm not sure how well, or even if, it works.
The Arduino is only capable of running a single thread at a time meaning it can only do one thing at a time. You can use interrupts to literally interrupt the normal flow of your code but it's still technically not executing at the same time. The library I linked to attempts to implement what you might call a crude "hyper-threaded" solution. Two threads executing in tandem on a single physical processing core.
If you need other code to execute, you need to learn how to program with millis(). This involves converting your code from "step by step" execution to a time-based state machine.
For example, if you want an LED to flash, you have two states for that LED: on and off. You change the state when enough time has elapsed.
Here are a series of examples of how to convert delay()-based code into millis()-based code:
http://www.cmiyc.com/blog/2011/01/06/millis-tutorial/
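As a minimal illustration, here's the classic blink example converted from delay() to millis() (the pin and interval are arbitrary):

    /* Non-blocking LED blink using millis() instead of delay(). */
    const int LED_PIN = 13;
    const unsigned long INTERVAL_MS = 500;
    unsigned long lastToggle = 0;
    int ledState = LOW;

    void setup() {
        pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
        unsigned long now = millis();
        if (now - lastToggle >= INTERVAL_MS) { /* overflow-safe comparison */
            lastToggle = now;
            ledState = (ledState == LOW) ? HIGH : LOW; /* flip the state */
            digitalWrite(LED_PIN, ledState);
        }
        /* other non-blocking work can run here on every pass */
    }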
Usually all you need is a timer and an ISR (interrupt service routine). You won't manage to live without interrupts :P Here you can find a good explanation about this.
I agree with JamesC4S, state machine is probably the right formalism to use in your case. You could for example try the ThingML language (which uses components, state machines, etc), and which compiles to Arduino code. A simple example can be found here.
In an embedded project, we're supposed to implement a task scheduler with different priorities. The project is implemented in C and runs on an Arduino device.
Now that we're in the research phase, one question has popped up, but nobody has enough experience to have a certain answer:
How is it possible to control the execution time of a function? How do we keep track of time before the function returns so we can interrupt it for example when a time-out occurs?
One suggestion was to use fork(), but since Arduino does not include an operation system, there's no kernel to handle a thread. Or am I wrong?
Any input will be helpful, thanks a bunch,
You need a timer. All non-cooperative multitasking systems (i.e. those which don't depend on the function saying "you can interrupt me now" all the time) use a timer to stop the execution after some time (say 100 ms).
In the interrupt handler, check if there is another "thread" which can run and switch context.
A pretty simple implementation is a "ready list": Whenever a task or thread could do some work, add it to the ready list.
When the timer fires, add the current task at the end of the list and make the head of the list the current task.
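A sketch of that ready list in C (illustrative only: the actual register save/restore is platform-specific and is stubbed out here):

    /* One runnable task; saved_sp would hold its stack pointer between runs. */
    struct task {
        void *saved_sp;
        struct task *next;
    };

    static struct task *ready_head, *ready_tail;
    static struct task *current;

    /* Append a runnable task to the ready list. */
    static void ready_push(struct task *t) {
        t->next = NULL;
        if (ready_tail) ready_tail->next = t; else ready_head = t;
        ready_tail = t;
    }

    /* Take the task at the head of the ready list. */
    static struct task *ready_pop(void) {
        struct task *t = ready_head;
        if (t) {
            ready_head = t->next;
            if (!ready_head) ready_tail = NULL;
        }
        return t;
    }

    /* Called from the timer interrupt every quantum (say 100 ms). */
    void scheduler_tick(void) {
        if (!ready_head) return;   /* no other task is ready; keep running */
        ready_push(current);       /* current task goes to the back */
        current = ready_pop();     /* head of the list becomes current */
        /* A real implementation would now save the old task's registers
           and restore the new one's; that part is CPU-specific. */
    }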
In an embedded system, a task scheduler is the core of an operating system (usually an RTOS), so you are being asked to implement one, not to use one.
A simple example of how such a scheduler works is described in Jean Labrosse's book MicroC/OS-II. It describes a complete RTOS kernel with scheduling and IPC. For your project you can take the description of this core and implement your own (or you could use the included source code).
Such a kernel works by scheduling at certain OS calls and on a timer interrupt. A context switch involves storing the processor registers for one task and replacing them with the registers for another. Because this register save/restore includes the stack pointer and program counter, control is switched between threads.
It may be that simpler (rather than preemptive) forms of scheduling are called for. One method is to implement task functions that run to completion, store their own state where necessary, and are structured as state machines; then have a simple loop that polls a timer and calls each 'task' function according to a schedule table (which holds the periodicity of each task and a pointer to its function, so that, say, one function is called every second while another is called every millisecond).
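A sketch of such a schedule-table loop (tick_ms(), the periods, and the task functions are all assumed for the example):

    #include <stdint.h>

    /* Assumed tick source: a counter incremented once per millisecond by
       a timer interrupt (platform-specific, stubbed here). */
    static volatile uint32_t g_ticks;
    static uint32_t tick_ms(void) { return g_ticks; }

    struct sched_entry {
        uint32_t period_ms;   /* how often the task should run */
        uint32_t last_run;    /* tick at which it last ran */
        void (*task)(void);   /* run-to-completion task function */
    };

    static void task_fast(void) { /* e.g. poll an input, every 1 ms */ }
    static void task_slow(void) { /* e.g. update a display, every 1 s */ }

    static struct sched_entry table[] = {
        { 1,    0, task_fast },
        { 1000, 0, task_slow },
    };

    int main(void) {
        for (;;) { /* simple polling loop */
            uint32_t now = tick_ms();
            for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++) {
                if (now - table[i].last_run >= table[i].period_ms) {
                    table[i].last_run = now;
                    table[i].task(); /* each task must return promptly */
                }
            }
        }
    }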