In an embedded project, we're supposed to implement a task scheduler with different priorities. The project is implemented in C and runs on an Arduino device.
Now that we're in the research phase, one question popped up, but nobody had enough experience to give a definite answer:
How is it possible to control the execution time of a function? How do we keep track of time before the function returns, so we can interrupt it, for example, when a timeout occurs?
One suggestion was to use fork(), but since Arduino does not run an operating system, there's no kernel to handle a process or thread. Or am I wrong?
Any input will be helpful, thanks a bunch!
You need a timer. All non-cooperative multitasking systems (i.e. those which don't rely on the running function to say "you can interrupt me now" all the time) use a timer to stop execution after some time slice (say, 100 ms).
In the interrupt handler, check whether there is another "thread" that can run, and switch context.
A pretty simple implementation is a "ready list": whenever a task or thread can do some work, add it to the ready list.
When the timer fires, append the current task to the end of the list and make the head of the list the new current task. A minimal sketch of this is shown below.
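Here is a minimal sketch of that ready-list round-robin in C. All the names (task_t, ready_head, timer_tick, ...) are made up for illustration, and the actual context switch is platform-specific assembly:

/* Minimal round-robin ready list, assuming a fixed pool of tasks.
 * All names here are illustrative, not a real API. */
typedef struct task {
    void        *context;     /* saved registers / stack pointer */
    struct task *next;        /* link in the ready list          */
} task_t;

static task_t *ready_head = 0;   /* next task to run          */
static task_t *ready_tail = 0;   /* where new tasks are added */
static task_t *current    = 0;   /* task running right now    */

static void ready_list_append(task_t *t)
{
    t->next = 0;
    if (ready_tail) ready_tail->next = t;
    else            ready_head       = t;
    ready_tail = t;
}

/* Called from the timer ISR, e.g. every 100 ms. */
void timer_tick(void)
{
    if (!ready_head)
        return;                     /* nothing else can run */
    ready_list_append(current);     /* current goes to the back     */
    current    = ready_head;        /* head becomes the new current */
    ready_head = current->next;
    if (!ready_head) ready_tail = 0;
    /* switch_context(current) would restore the new task's registers;
     * that part is platform-specific and not shown here. */
}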
In an embedded system, a task scheduler is the core of an operating system (usually an RTOS), so you are being asked to implement one, not to use one.
A simple example of how such a scheduler works is described in Jean Labrosse's book MicroC/OS-II. It describes a complete RTOS kernel with scheduling and IPC. For your project you can take the description of this core and implement your own (or you could use the included source code).
Such a kernel performs scheduling at certain OS calls and on a timer interrupt. A context switch involves storing the processor registers for one task and replacing them with the registers for another. Because this register save/restore includes the stack pointer and program counter, control is switched between threads.
It may be that a simpler form of scheduling (rather than preemptive scheduling) is called for. One method is to implement task functions that run to completion, store their own state where necessary, and are implemented as state machines; then have a simple loop that polls a timer and calls each 'task' function according to a schedule table. The table holds each task's periodicity and a pointer to its function, so that, say, one function is called every second while another is called every millisecond. A sketch of such a schedule-table loop follows.
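A minimal sketch of that approach, assuming an Arduino-style millis() that returns elapsed milliseconds; the two task functions and the table contents are placeholders:

/* Run-to-completion scheduler driven by a schedule table. */
extern unsigned long millis(void);   /* assumed: Arduino-style tick count */

typedef struct {
    void (*task)(void);        /* task function; must return quickly */
    unsigned long period_ms;   /* how often to run it                */
    unsigned long last_run;    /* when it last ran                   */
} sched_entry_t;

static void task_fast(void) { /* e.g. sample an input, every 1 ms  */ }
static void task_slow(void) { /* e.g. update a display, every 1 s  */ }

static sched_entry_t table[] = {
    { task_fast,    1, 0 },
    { task_slow, 1000, 0 },
};

void scheduler_loop(void)
{
    for (;;) {
        unsigned long now = millis();
        unsigned i;
        for (i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (now - table[i].last_run >= table[i].period_ms) {
                table[i].last_run = now;
                table[i].task();   /* runs to completion, keeps its own state */
            }
        }
    }
}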
I am working on a project where I need to execute 2 pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on 2 different timers (of course not at the same time interval). Because the two timer rates are multiples of one another (one is 1 kHz, the other 8 kHz), both will periodically trigger at the same time.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback, which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume that, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
If so, what would be a better strategy to achieve something close to what I need?
Cheers
Interrupts will trigger. But remember:
Their priority (not the RTOS priority; the two are unrelated) must be lower than the SVC interrupt priority if you want to use any ...FromISR RTOS functions.
They will not happen at literally the same time (as you have only one core).
I am working on a project where I need to execute 2 pieces of code off TIM interrupts. One of them has a slightly higher priority than the other, and both will be running on 2 different timers...
What exactly do you mean by "one of them has a [..] higher priority"? The HW timer events simply occur when the timer underflows; I think you mean the handler code servicing the timeout events.
... (of course not at the same time interval). Because the two timer rates are multiples of one another (one is 1 kHz, the other 8 kHz), both will periodically trigger at the same time.
In embedded real-time programming, you should never build on the assumption that IRQ events do not occur at the same time: your ISR handlers may be suppressed at the moment a trigger event occurs. This way, even if two events trigger closely after each other, it may look to your software as if they had triggered at the same time. The solution is what your question points at: context priorities (of tasks (= "threads") and ISRs (= "interrupt handlers")) let you avoid the question of which event came earlier and control which event to treat first.
Since I am already using the RTOS middleware for other purposes (threads of a much lower priority than these two), I was thinking of creating one thread for each of these routines.
You are free to deploy code to an RTOS task or to an ISR, but keep in mind that any ISR has a higher priority than any task. Your TIM event will trigger an ISR (= interrupt context), but you can (and often should) use the ISR to send a notification (or event, or semaphore, or queue message) to a task, in order to have the main part of the timer event processed at the lower priority of a task.
However, looking at how CubeMX generates code, I am wondering whether this is even possible.
CubeMX does not limit you to using or not using tasks. The question is rather how much of the code you need CubeMX will generate, and how much you have to add manually. Note that you don't have to use the CubeMX configuration feature to generate tasks; this can just as well be done from your own C code.
I can start/stop these timers from any thread, but there is only one HAL_TIM_PeriodElapsedCallback, which you usually fill with if statements like so:
if (htim->Instance == TIM2)
Am I correct to assume that, regardless of which thread the timers are started from, the TIM callback will always occur "outside" of the RTOS environment?
Yes, you are. Who started the timer is irrelevant to the context in which the timer event is handled. In any case, the TIM will trigger its ISR (at the interrupt priority configured for that interrupt).
If you use the CubeHAL library, it implements the root of that ISR, checks which of the TIMs related to that ISR have elapsed, and invokes the callback you quoted. There you can insert your user code for the different TIM instances (like TIM2 in your case), for example:
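For illustration, the dispatch usually looks like the following; TIM2/TIM3 and the two handler functions are example names, not part of the HAL:

#include "main.h"   /* CubeMX-generated header that pulls in the HAL */

/* handle_1khz_event()/handle_8khz_event() are hypothetical user functions. */
extern void handle_1khz_event(void);
extern void handle_8khz_event(void);

/* One shared callback, dispatching on the timer instance. */
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim->Instance == TIM2)          /* e.g. the 1 kHz timer */
    {
        handle_1khz_event();
    }
    else if (htim->Instance == TIM3)     /* e.g. the 8 kHz timer */
    {
        handle_8khz_event();
    }
}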
If so, what would be a better strategy to achieve something close to what I need?
Re-check your favourite textbook on RTOSes and microcontrollers; no SO answer can include all the theory needed to solve the problem properly.
Decide whether your system has any reaction more urgent than treating the timeout events. If not, you may implement the timeout reaction in the ISR handler. If so (or in case of doubt), implement the ISR so that it sends a task notification to a task in which you do whatever the timeout event requires. This may be the task from which you started the timer, or another one. A sketch of this pattern follows.
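A minimal sketch, assuming FreeRTOS (the usual CubeMX RTOS middleware); on_tim2_elapsed_from_isr() and worker_task() are made-up names, and worker_handle must be created elsewhere with xTaskCreate():

#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t worker_handle;

/* Call this from the TIM2 branch of HAL_TIM_PeriodElapsedCallback(). */
void on_tim2_elapsed_from_isr(void)
{
    BaseType_t woken = pdFALSE;
    vTaskNotifyGiveFromISR(worker_handle, &woken);
    portYIELD_FROM_ISR(woken);   /* switch at once if the worker has higher priority */
}

/* Task doing the main part of the timeout handling, at task priority. */
static void worker_task(void *arg)
{
    (void)arg;
    for (;;)
    {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* block until the ISR notifies */
        /* process the timer event here */
    }
}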
I am tracing Linux 0.11
https://mirrors.edge.kernel.org/pub/linux/kernel/Historic/old-versions/
I see there are many schedule() calls in different places, not just the one inside do_timer().
A few questions here:
Will do_timer() (#sched.c) be called every time the timer times out? Is this timer based on an x86 interrupt?
Since there are many schedule() calls outside of do_timer(), can I say that is a kind of preemption? Or what is their purpose?
Any operation that blocks will call schedule() to yield control.
Some task's state has changed and needs to be re-evaluated, which happens in schedule().
Some tasks are still runnable with plenty of work left, and schedule() is called to balance CPU time between them.
Since there are many schedule() calls outside of do_timer(), can I say that is a kind of preemption? Or what is their purpose?
For a real OS, most task switches occur because a task blocks waiting for something (user input, a network packet, disk IO, ...) or a task unblocks because something it was waiting for happened (and the unblocked task has a higher priority and preempts the currently running lower-priority task).
The whole "task switch caused by timer IRQ" thing is mostly just a fallback to guard against malicious CPU hogs (denial-of-service attacks); for normal software under normal conditions you could disable it (delete the schedule() call from the timer IRQ handler) and nobody would notice or care. Note: some people will say it's also for "non-malicious" CPU-bound tasks, but CPU-bound tasks are relatively rare, and (ignoring the fact that the Linux scheduler has never been good with task priorities) for CPU-bound tasks it's better to rely on an effective system of task priorities (e.g. give the CPU-bound tasks a low priority so that almost everything else preempts them).
Also note that various courses on OS theory start with "so simple it never actually happens in practice" concepts, almost always a pure round-robin scheduler with tasks that never block (often with "hey, we can accurately predict the future and know exactly how long each task will run for" nonsense). That is mostly fine as a first step (in a "learn to walk before you run" way), but it sucks big salty dog balls if it's not followed by more realistic and more complex concepts (better scheduling algorithms, task priorities, multiple simultaneous scheduling algorithms/"scheduler policies", multi-CPU, interactive/latency-sensitive tasks, ...), because it leaves the student/victim with little more than misinformation (e.g. the ever-recurring "all task switches are caused by the timer IRQ" misconception).
Will do_timer() (#sched.c) be called every time the timer times out? Is this timer based on an x86 interrupt?
I'm guessing that the timer was the raw PIT chip's IRQ (given that Linux 0.11 was "absolute beginner developer with no intention of making it portable" historical memorabilia, from before thousands of volunteers fixed half of the worst parts).
Also don't forget that the scheduler uses time for two different things: the "current task has used too much CPU time" thing that almost never matters, and figuring out when blocked/sleeping tasks (e.g. ones that called sleep()) should unblock/wake up. do_timer() might be for either of these things, or for both (I don't know without looking at it).
We are using embedded C for the VxWorks real time operating system.
Currently, all of our UDP connections are started with taskSpawn().
This routine creates and activates a new task with a specified
priority and options and returns a system-assigned ID.
We specify the stack size, a priority, and pass in an entry point.
These are continuous connections, so every entry point contains an infinite loop in which we delay before the next iteration.
Then I discovered period().
period spawns a task to call a function periodically.
period() sounds like what we should be using instead, but I can't find any information on when you would prefer this function over taskSpawn(). period() also doesn't allow specifying the stack size or the priority, so how are they decided? Is the stack size dynamic? What will the priority be?
There are also watchdogs.
Any task may create a watchdog timer and use it to run a specified
routine in the context of the system-clock ISR, after a specified
delay.
Again, this seems to be in line with the goal of processing data at a particular rate. Which do I choose when a task must continuously execute code at the same rate (i.e. in real time)?
What are the differences between these 3 methods?
Here is a little clarification:
taskSpawn(..) creates a task that you're free to do anything you like with.
Watchdogs should only be used to monitor time constraints. Remember that the watchdog callback executes within the context of the system clock ISR, which has many limitations (e.g. limited free stack size, never use blocking function calls in an ISR, ...). Additionally, executing "a lot of code" in the system clock ISR slows down your entire system.
period(..) is intended as a helper for the VxWorks shell, not for use by a program.
That being said, your only option is taskSpawn(..), unless you're doing some very simple stuff, in which case period(..) might be OK to use.
If you need to do things cyclically in a specific time frame, you might look at timers, or taskDelay(..) in combination with sysClkRateSet(..).
Another option is to create two tasks: one that gives a semaphore after a specific time interval, and a "worker" task that takes the semaphore and does the work. With that approach you separate "timing" from "action", which has proved beneficial in my experience. You might also want to monitor the execution time of the "worker" task with a watchdog. A sketch of the two-task pattern follows.
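A minimal sketch of that two-task pattern on VxWorks; the task names, stack sizes, and priorities are placeholders:

#include <vxWorks.h>
#include <taskLib.h>
#include <semLib.h>
#include <sysLib.h>

static SEM_ID tick_sem;

/* Timing task: only responsible for releasing the worker periodically. */
static void timing_task(void)
{
    for (;;)
    {
        taskDelay(sysClkRateGet() / 10);  /* ~100 ms at the current clock rate */
        semGive(tick_sem);                /* release the worker */
    }
}

/* Worker task: does the actual periodic work. */
static void worker_task(void)
{
    for (;;)
    {
        semTake(tick_sem, WAIT_FOREVER);  /* wait for the timing task */
        /* do the periodic work here */
    }
}

void start_tasks(void)
{
    tick_sem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    /* lower number = higher priority; timing runs above the worker */
    taskSpawn("tTiming", 90, 0, 4096, (FUNCPTR)timing_task,
              0,0,0,0,0,0,0,0,0,0);
    taskSpawn("tWorker", 100, 0, 8192, (FUNCPTR)worker_task,
              0,0,0,0,0,0,0,0,0,0);
}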
I'm working on a project for an automotive system where we use the MPC5748 MCU. The application uses an RTOS based on AUTOSAR OS, and this MPC target supports two types of watchdog, software and hardware (the software WDT is used here).
My mission is to fit an algorithm into this application. The development of the algorithm is done; the problem is that the task the algorithm runs in is a 1 ms task, and the algorithm needs much more time than the time dedicated to that function.
I'm a newbie to the embedded world. By the way, in the algorithm's main function the program resets itself, and this seems to be a timeout generated by the expiration of the watchdog.
My questions are:
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer timeout for the watchdog in that specific function?
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Can I add a sleep (< 1 ms) in the desired function, in order to wait a little bit without affecting other tasks?
What are other options to try?
NB: This is a general problem with watchdog timers, and any useful information will be very helpful to me. Sorry, but I can't share the code.
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer timeout for the watchdog in that specific function?
Let's forget that one; it is a really bad idea. If it is possible to defeat the watchdog, then it is possible to do so by error, and then the whole point of the watchdog is defeated. Apart from that, it's an XY question (a question about your proposed solution to a different problem); you should ask about the problem directly.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Yes, you need another task, but you should not add a "big delay"; that is probably unnecessary and certainly bad design. If the 1 ms task needs the result of the algorithm, then the algorithm should run in a service task, triggered by the 1 ms task and running asynchronously to it; the service task then makes the results available to the 1 ms task when ready (by shared memory or message passing, perhaps). Alternatively, if the result is not specifically needed by the 1 ms task, the service task could take the necessary action independently of the 1 ms task.
There are many options, but essentially it seems that your task partitioning is inappropriate: your CAN Rx task should be responsible for receiving CAN messages only, and any action required in response to them should be deferred to one or more other tasks, perhaps fed from a message queue, as in the sketch below.
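A partitioning sketch only; can_msg_t, queue_t, queue_send(), queue_receive() and can_read_mailbox() are hypothetical stand-ins for whatever your vendor's AUTOSAR/RTOS API provides:

/* All types and functions below are hypothetical, for illustration. */
typedef struct { unsigned id; unsigned char data[8]; } can_msg_t;
typedef struct queue queue_t;               /* opaque hypothetical queue type */

extern queue_t algo_queue;
extern int  can_read_mailbox(can_msg_t *m); /* hypothetical: 1 if a msg was read */
extern void queue_send(queue_t *q, const can_msg_t *m);
extern void queue_receive(queue_t *q, can_msg_t *m);  /* blocks until data */
extern void run_algorithm(const can_msg_t *m);
extern void publish_result(void);

/* 1 ms task: receive and forward only; never run the long algorithm here. */
void can_rx_task_1ms(void)
{
    can_msg_t msg;
    while (can_read_mailbox(&msg))          /* drain pending mailboxes */
        queue_send(&algo_queue, &msg);
}

/* Lower-priority service task: runs the long algorithm asynchronously. */
void algo_service_task(void)
{
    can_msg_t msg;
    for (;;)
    {
        queue_receive(&algo_queue, &msg);   /* block until work arrives */
        run_algorithm(&msg);                /* may take many milliseconds */
        publish_result();                   /* shared memory or a message back */
    }
}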
What are other options to try ?
Software design should not be a matter of trial and error: get the design right, then implement it. However, you might consider whether 1 ms is appropriate; is it possible that the period could be extended to encompass the worst-case execution time without causing missed deadlines in general? If the answer is "no", then the algorithm does not belong in this task.
I don't think you can disable/delay the watchdog timer, and even if you could, that's not a good option to go for.
The problem, I think, is that the task you are calling runs every 1 ms, which is very little time to read CAN messages and then operate on them. In my view the minimum task period should be 5 ms, and the optimal period 10 ms.
Can I disable the watchdog timer for this specific function (it must not stay disabled; this is just for testing purposes)? Is it possible to use a longer timeout for the watchdog in that specific function?
You should never disable the watchdog anywhere in your code.
It might not even be possible: on the MPC5x families you typically set up the watchdog once, and then for safety reasons all watchdog registers become read-only.
Must I develop another task with a big delay in order to run the algorithm? The problem is that the algorithm needs to be synchronised with the 1 ms task, since we are receiving CAN commands.
Ideally you should service the watchdog from one single location in the program. Your CAN peripheral will be FlexCAN, which has a lot of available "mailboxes" for CAN messages. In most cases you shouldn't need to poll it; a flag will be set when the desired message arrives.
So it isn't obvious to me why you would need a delay to wait for them. Simply do:
void the_task (void)
{
  wdog_refresh();

  ... // do other things

  if (can_message_available)
  {
    // do something with the message
  }

  ... // do other things
}
rather than
// BAD:
while (!can_message_available)
  ; // do nothing
Even if you need to use the CAN peripheral as a FIFO and poll it repeatedly, you would still use the same approach. You'd just have to ensure that the task runs often enough that the FIFO buffer never overflows.
For a project, I have to implement a simple OS and a virtual machine that supports some basic functions. The OS will run on the virtual machine, and the virtual machine runs like a normal program in Linux.
Suppose the virtual machine is currently executing within its quantum.
How is it possible to receive some extra timer signals in order to divide the virtual machine's execution time into smaller quanta?
How many timers are available in my CPU? (This is more of a general question.)
Can I handle timer signals inside the virtual machine with a user-level interrupt handler?
Any help or guidance would be very appreciated.
Thank you
I suggest you use exactly one timer interrupt, and organize your timers in either a queue (for few timers, e.g. < 50) or in a heap, a tree structure that at any time gives you quick access to the smallest element, i.e. the timer that expires next.
Thus you have one interrupt, one handler, and many timers with associated functions that will be called by that single handler, as in the sketch below.
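A minimal sketch in POSIX user space, multiplexing many software timers onto one SIGALRM; a sorted list stands in for the heap, list insertion is omitted, and all names are illustrative:

#include <stddef.h>
#include <signal.h>
#include <sys/time.h>

typedef struct sw_timer {
    struct timeval   expiry;    /* absolute expiry time              */
    void           (*fn)(void); /* callback to run on expiry         */
    struct sw_timer *next;      /* next timer, kept sorted by expiry */
} sw_timer_t;

static sw_timer_t *timer_list;  /* head = timer that expires first */

static void arm_next(void)
{
    struct itimerval itv = { {0, 0}, {0, 0} };
    struct timeval now;

    if (!timer_list)
        return;
    gettimeofday(&now, NULL);
    timersub(&timer_list->expiry, &now, &itv.it_value); /* BSD/glibc macro */
    if (itv.it_value.tv_sec < 0) {          /* already overdue: fire ASAP */
        itv.it_value.tv_sec  = 0;
        itv.it_value.tv_usec = 1;
    }
    setitimer(ITIMER_REAL, &itv, NULL);     /* one real timer for them all */
}

static void on_sigalrm(int sig)
{
    sw_timer_t *t = timer_list;
    (void)sig;
    if (!t)
        return;
    timer_list = t->next;   /* pop the earliest timer */
    t->fn();                /* dispatch its callback; real code should keep
                               signal-handler work to a minimum */
    arm_next();             /* re-arm for the next expiry */
}

void timers_init(void)
{
    struct sigaction sa;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags   = 0;
    sa.sa_handler = on_sigalrm;
    sigaction(SIGALRM, &sa, NULL);
}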
In fact, normal programs also use interrupts (at the system level), for example whenever they make a system call.
In user space you can use swapcontext()/makecontext() to simulate a system-level context switch, but when you want to get the time (to calculate time differences) you have to use a syscall anyway. So you'd better use the system timer directly; it's not a bad idea.
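For completeness, a minimal sketch of user-level context switching with <ucontext.h>, as available on Linux:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task_fn(void)
{
    puts("in task");                     /* runs on task_stack */
    swapcontext(&task_ctx, &main_ctx);   /* yield back to main */
    puts("task resumed");
}

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;  /* where to go when task_fn returns */
    makecontext(&task_ctx, task_fn, 0);

    swapcontext(&main_ctx, &task_ctx);   /* run the task until it yields */
    puts("back in main");
    swapcontext(&main_ctx, &task_ctx);   /* resume the task */
    puts("done");
    return 0;
}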