Interrupts vs. Polling a Device - C

In my application a number of devices (camera, A/D, D/A, etc.) are communicating with a server. Since not all of the devices have to work all the time, I have two options for saving power consumption in a device:
1- Polling, i.e. each device periodically keeps looking at the contents of a file from which it gets a value for wake or sleep. If it finds wake, it wakes up and does its job.
In this case the device itself will be sleeping, but the driver will be active and polling.
2- Using interrupts, I can wake a device only when needed.
I am not able to decide which way to go and why. Can someone please enlighten me in this regard?
Platform: Windows 7, 32 bit, running on Intel Core2Duo

Polling is imprecise by its nature. The higher your target precision gets, the more wasteful the polling becomes. Ideally, you should consider polling only if you cannot do something with interrupts; otherwise, using an interrupt should be preferred.
One exception to this rule is if you would like to "throttle" something intentionally, for example, when you may get several events per second, but you would like to react to only one event per minute. In such cases you often use a combination of polling and interrupts, where an interrupt sets a flag, and polling does the real job, but only when the flag is set.
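As a rough illustration of that flag-and-poll combination, here is a minimal C sketch; the ISR hookup, handle_event() and sleep_ms() are hypothetical placeholders for whatever your platform actually provides.

#include <stdbool.h>

void handle_event(void);        /* hypothetical: the real, expensive work */
void sleep_ms(unsigned ms);     /* hypothetical platform delay routine */

/* Flag shared between the ISR and the throttled polling loop. */
static volatile bool event_pending = false;

/* Hypothetical ISR: runs on every hardware event and does almost nothing. */
void device_isr(void)
{
    event_pending = true;       /* just record that something happened */
}

/* Polling loop: checks the flag once per minute, so a burst of events
 * still results in at most one reaction per polling period. */
void throttled_poll_loop(void)
{
    for (;;) {
        if (event_pending) {
            event_pending = false;
            handle_event();
        }
        sleep_ms(60000u);
    }
}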

If your devices are to be woken up periodically, I would go for polling at the appropriate frequency (which is always easier to set up because it's just looking at a bit). If the waking events are asynchronous, I would rather go for an interrupt-driven architecture, despite the code and electronic overhead.

Well, it depends on your hardware and software architecture and the complexity of the software. It is always better to choose an interrupt mechanism over polling.
With polling, your controller will be busy continuously polling the hardware to check whether the desired value is available.
Using an interrupt mechanism frees the controller to perform other tasks, and when an interrupt arises your ISR can perform the task for that specific need.

Related

Best task scheduling strategies in embedded applications

I am trying to find a better way to organize sub-tasks for embedded applications. I am more interested in Power Electronics applications. I am not a software engineer, but a Power Electronics Engineer. However, in most cases I need to develop the code.
In those applications, the main will stay in an infinite loop, and the control algorithm will run in an ISR (Interrupt Service Routine). However, in some applications extra low-priority sub-tasks are necessary (e.g. communication, alarm handling). Those sub-tasks cannot run in the ISR routine due to time limitations (the control algorithm has the highest priority). I would like to know the best ways to handle task scheduling for embedded applications.
One simple way, shown in the code snippet below, is to just put all the sub-tasks inside the infinite loop (if they all have the same priority). The application will run the ISR routine periodically (each switching period, for example) and use the remaining time to run the sub-tasks in a round-robin approach. However, in this method all the sub-tasks will run with an unknown period. Consequently I will not be able to add timer routines (increment and check) inside those tasks. Also, if the software gets trapped (due to some bad code) in a low-priority task, the other tasks will not be executed (or the watchdog timer will be activated).
void main(void)
{
    Init();
    for (;;)    /* There is an ISR routine with the control algorithm */
    {
        SubTask1();
        SubTask2();
        SubTask3();
    }
}
It is possible to use other ISR routines (driven by timer modules, for example) and control the interrupt priorities to run one specific task. However, this method demands a more careful study of the device, in order to set all the interrupt priorities correctly.
Do you know a better method? Which task scheduling methods are the most efficient for embedded applications?
The question hits on some general principles in embedded software.
1) Limit what you do in ISRs to the bare minimum
2) Coordinate different activities by using an RTOS
3) Improve performance by designing the software as event driven
The way to efficiently implement the sub-tasks is to move them from a polled loop to being event driven. If a sub-task checks for an alarm condition periodically, use your RTOS to call that code from a timer. For communications, have that code do a blocking wait for an event, like the arrival of a message. Event-driven code is much more efficient because it doesn't have to spin through all the polling looking for the events to handle.
The tools of an event-driven design (threads, timers, blocking, etc.) are provided by an RTOS, so point 3) leads to point 2). An RTOS also solves your issues with the sub-tasks running at unknown times and for unknown durations, if there are remaining tasks that are not event driven.
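As a sketch of what that looks like in practice, assuming FreeRTOS purely for concreteness (any RTOS with software timers and blocking queues has equivalent calls; check_alarms() and handle_message() are hypothetical application functions):

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"
#include "timers.h"

typedef struct { uint8_t data[32]; } message_t;   /* hypothetical payload */

void check_alarms(void);                 /* hypothetical application code */
void handle_message(const message_t *m);

static QueueHandle_t comms_queue;

/* Alarm check driven by an RTOS software timer, e.g. every 100 ms. */
static void alarm_timer_cb(TimerHandle_t t)
{
    (void)t;
    check_alarms();
}

/* Communications sub-task: blocks until a message actually arrives,
 * so it consumes no CPU while there is nothing to do. */
static void comms_task(void *arg)
{
    message_t msg;
    (void)arg;
    for (;;) {
        if (xQueueReceive(comms_queue, &msg, portMAX_DELAY) == pdTRUE) {
            handle_message(&msg);
        }
    }
}

void start_subtasks(void)
{
    comms_queue = xQueueCreate(8, sizeof(message_t));
    xTimerStart(xTimerCreate("alarm", pdMS_TO_TICKS(100), pdTRUE, NULL,
                             alarm_timer_cb), 0);
    xTaskCreate(comms_task, "comms", 256, NULL, tskIDLE_PRIORITY + 1, NULL);
}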
Finally, there are a variety of reasons to limit how much you do in an ISR. It's harder to debug ISR code. It's harder to synchronize what the ISR does with the rest of the tasks. The alternative is to do the same thing in a high-priority task that waits for an event from the ISR.
But the biggest reason is future flexibility. Running the control algorithm in the ISR makes it hard to add another high-priority task. Or maybe there will be a new requirement for the control algorithm to report status or write to a disk. Moving the code out of the ISR gives you more options.
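A corresponding sketch of moving the control algorithm out of the ISR (same FreeRTOS assumption as above; run_control_algorithm() and the ISR hookup are hypothetical): the ISR only signals a semaphore, and the highest-priority task does the actual work.

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

void run_control_algorithm(void);       /* hypothetical application code */

static SemaphoreHandle_t control_sem;

/* ISR kept minimal: just signal the task and request a context switch. */
void control_period_isr(void)
{
    BaseType_t woken = pdFALSE;
    xSemaphoreGiveFromISR(control_sem, &woken);
    portYIELD_FROM_ISR(woken);
}

/* Highest-priority task: does the work the ISR used to do. */
static void control_task(void *arg)
{
    (void)arg;
    for (;;) {
        xSemaphoreTake(control_sem, portMAX_DELAY);
        run_control_algorithm();
    }
}

void start_control(void)
{
    control_sem = xSemaphoreCreateBinary();
    xTaskCreate(control_task, "control", 512, NULL,
                configMAX_PRIORITIES - 1, NULL);
}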

Reading a 4 µs long +5V TTL from a parallel port -- when to use kernel interrupts

I've got an experimental box of tricks running that, every 100 ms or so, will spit out a 4 microsecond long +5V pulse of electricity on a TTL line. The exact time that this happens is not known ahead of time, but it's important -- so I'd like to use the Red Hat 5.3 computer that essentially runs the experiment to service this TTL, and create a glorified timestamp.
At the moment, what I've done is wire the TTL into pin 13 of the parallel port (STATUS_SELECT, one of the input lines on a parallel port) on the Linux box, spawn a process when the experiment starts, use chrt to change its scheduling priority to 99 -- i.e. high -- and then just poll the parallel port repeatedly in a while loop until the pin goes high. I then create an accurate timestamp, and, in a non-blocking way, write it to disk.
Obviously, this is inefficient -- sometimes the process is suspended, and a TTL will be missed. As the computer is, itself, busy doing other things (namely acquiring data from my experimental bit of kit -- an MRI scanner!) this happens quite often. Polling is easy, but probably bad.
My question is this: doing something quickly when a TTL occurs seems like the bread-and-butter of computing, but, as far as I can tell, it's only possible to deal with interrupts on linux if you're a kernel module. The parallel port can generate interrupts, and libraries like paraport let you build kernel modules relatively quickly, where you have to supply your own handler.
Is the best way to deal with this problem and create accurate (±25 ms) timestamps for an experiment whenever that TTL comes in -- to write a kernel module that provides a list of recent interrupts to somewhere in /proc, and then read them out with a normal process later? Is that approach not going to work, and be very CPU inefficient -- or open a bag of worms to do with interrupt priority I'm not aware of?
Most importantly, this seems like it should be a solved problem -- is it, and if so do any wise people wish to point me in the right direction? Writing a kernel module seems like, frankly, a lot of hard, risky work for something that feels as if it should perhaps be simple.
The premise that "it's only possible to deal with interrupts on linux if you're a kernel module" dismisses some fairly common and effective strategies.
The simple course of action for responding to interrupts in userspace (especially infrequent ones) is to have a driver which creates a kernel device (or in some cases a sysfs node) where either a read() or perhaps a custom ioctl() from userspace will block until the interrupt occurs. You'd have to check if the default parallel port driver supports this, but it's extremely common with the GPIO drivers on embedded-type boards, and the basic scheme could be borrowed for the parallel port - provided that the hardware supports true interrupts.
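To make the scheme concrete, here is roughly what the userspace side looks like with the legacy sysfs GPIO interface found on many embedded boards (the GPIO number and path are placeholders, and the pin is assumed to have been exported with its edge attribute set to "rising"; a parallel-port driver would need to expose something equivalent):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder pin: assumes /sys/class/gpio/gpio23 has been exported. */
    int fd = open("/sys/class/gpio/gpio23/value", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[8];
    struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };

    read(fd, buf, sizeof(buf));            /* consume the initial state */
    for (;;) {
        if (poll(&pfd, 1, -1) > 0) {       /* blocks until the interrupt fires */
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            lseek(fd, 0, SEEK_SET);        /* rewind before re-reading */
            read(fd, buf, sizeof(buf));
            printf("edge at %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        }
    }
}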
If very precise timing is the goal, you might do better to customize the kernel module to record the timestamp there, and implement a mechanism where a read() from userspace blocks until the interrupt occurs, and then obtains the kernel's already recorded timestamp as the read data - thus avoiding the variable latency of waking userspace and calling back into the kernel to get the time.
You might also look at true local-bus serial ports (if present) as an alternate interrupt-capable interface in cases where the available parallel port is some partial or indirect implementation which doesn't support them.
In situations where your only available interface is something indirect and high-latency such as USB, or where you want a lot of host- and operating-system-independence, then it may indeed make sense to use an external microcontroller. In that case, you would probably try to set the micro's clock from the host system, and then have it give you timestamp messages every time it sees an event. If your experiment only needs the timestamps to be relative to each other within a given experimental session, this should work well. But if you need to establish an absolute time synchronization across the USB latency, you may have to do some careful round-trip measurement and then estimation of the latency in order to compensate for it (see NTP for an extreme example).

Watchdog fires on high interrupt rate

I am working on a customized/proprietary RTOS provided by my client.
The RTOS uses round robin scheduling with priority preemption.
The scenario is:
The Renesas H8S controller is running at 20 MHz
I have configured an interrupt for the Ethernet interrupt (a LAN9221 chip is interrupting)
An OS task which reads the data from the LAN controller is running at the highest priority in the OS
Another OS task, TCP, which is the second highest priority task in the system
An OS task which refreshes the watchdog
I have generated network traffic to simulate a bombardment condition on the network.
The problem is that at high data rates on the Ethernet ISR (more than 500 packets/second), the watchdog, which is configured for 1 second, is getting fired.
The watchdog is configured to be serviced by a lower-priority OS task in order to detect any problem in OS functionality.
I suspect the frequency of the ISR and the higher-priority tasks are not letting the watchdog task be scheduled. To confirm this, I serviced the watchdog in the ISR itself and found it working up to 2000 packets/second.
Could you please suggest how I can handle the situation so that the watchdog does not fire even at higher data/interrupt rates?
The watchdog is refreshed in an OS task running at normal OS priority, which helps in catching endless loops.
The task at the highest OS priority is the Ethernet packet reading task.
There is one hardware interrupt which is raised when the Ethernet controller receives a packet, and in the ISR we schedule the waiting Ethernet packet reading task.
Also, in my system the OS is not driven by a timer interrupt (like other OSes are).
The OS is round robin and tasks relinquish control voluntarily. So increasing the watchdog task's priority above normal is not possible; otherwise the OS would always find it at a higher priority and ready (the watchdog is refreshed in an infinite loop, not waiting for any event) and other tasks would not get time to execute.
Only tasks which are waiting on some event can have high priorities.
So the problem is that the watchdog task is not getting time to refresh because of frequent interrupts and continuous scheduling of the high-priority task (Ethernet packet reading).
Try to give your watchdog a higher priority.
This might seem wrong at first glance. A watchdog shouldn't get a high priority but that's only true for systems which aren't under heavy load. Under heavy load, the scheduling will push the watchdog back (it's low prio after all) which can cause spurious time outs.
Giving the watchdog a high priority should not have a big impact on performance (it's a small task, runs not very often, triggered by an interrupt) but makes sure it can't starve.
The disadvantage is that you can't catch endless loops anymore (since the loop can now be interrupted by the watchdog).
You should also consider badly designed hardware or a bad mapping of interrupts. Maybe you can give the watchdog IRQ a higher priority than the network card. That would allow the watchdog to process its interrupts in a timely fashion without you having to give the task a higher priority.
Or you can try to increment a counter whenever a network packet has been processed. A new, high-priority watchdog thread could watch this counter and keep the watchdog from firing as long as the counter keeps changing, instead of relying on the low-prio watchdog task.
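A rough sketch of that counter idea (all names here - note_packet_done(), wait_ms(), kick_watchdog() - are hypothetical and stand in for whatever your RTOS and hardware provide):

#include <stdint.h>

void wait_ms(unsigned ms);       /* hypothetical RTOS delay/event wait */
void kick_watchdog(void);        /* hypothetical hardware watchdog refresh */

/* Progress counter bumped by the Ethernet packet reading task. */
static volatile uint32_t packets_handled;

void note_packet_done(void)
{
    packets_handled++;
}

/* High-priority monitor: refreshes the watchdog only while the system is
 * demonstrably making progress; when traffic stops, the normal low-prio
 * watchdog task takes over again. */
void watchdog_monitor_task(void)
{
    uint32_t last = packets_handled;
    for (;;) {
        wait_ms(100);
        uint32_t now = packets_handled;
        if (now != last) {
            kick_watchdog();
        }
        last = now;
    }
}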
In any form of real-time application you need, by definition, to be 100% aware of what is going on. You must know how much time each task consumes. Measure the time needed for each task with an oscilloscope by toggling a pin. Then calculate these times for the whole system. If the higher priority tasks take too much time, well, then obviously the dog will starve.
If this is too complex to measure because of acyclic or non-deterministic behavior, the program needs to be fixed. If the watchdog sits in a high priority task, you have pretty much disabled it for any task with lower prio. You might as well shut the watchdog off entirely then.
Trial & error patches, giving the watchdog higher prio, or increasing the CPU clock until the bug goes away is simply not a professional approach.
But then of course, the hardware might not be sufficient to service such a high data load as you expect. Then you may have no other option but to either use dirty patches or re-design the product from scratch with a suitable MCU.
It is probably not a matter of telling you how to do it; the architecture you described should work. What you need to do is discover why the watchdog is not being serviced.
If your RTOS does not have instrumentation or tools for debugging and testing, you could add I/O toggling in the watchdog loop and watch it with a scope - all the periods where it stops toggling are where higher-priority tasks or interrupts are running; if that happens for more than one second, the watchdog will trigger. You might then add similar instrumentation to your other tasks and ISRs to see what is taking the time.
Is it possible that you are deadlocking under high load, so that the system is in fact failing? That is a situation where the watchdog firing would be entirely valid. You don't want to stop it firing if it is in fact detecting a system failure - you want to fix the system failure.
If the task that handles network packets consumes so much time that it prevents the task responsible for refreshing the watchdog from getting CPU time; then the system is unable to handle high networking load. The watchdog problem is only a symptom of this "unable to handle high network load" problem.
The solution is to use a faster CPU, slow down the network, reduce the overhead of handling packets, or some combination of these options; so that the system can handle high network load (and so that the task that refreshes the watchdog does get run). Note that "handling high network load" may include dropping packets, which is the normal/established approach for handling network congestion.

Linux RTOS sleep() - wakeup() for timer task

I have a task which is basically a TIMER, so it goes to sleep and is supposed to wake up periodically. The timer task sleeps for, say, 10 ms. But what is happening is that it is inconsistent in waking up and cannot be relied upon to wake up on time.
In fact, in my runs, there is a big difference in sleep times. Sometimes the wake-up can vary by 1-2 ms, and a very few times it does not come back at all. This is because the kernel scheduler puts all the sleeping and waiting tasks in a queue and then, when it polls to see who is to be awakened, I think it is round robin. So sometimes the task will have expired by the time the scheduler polls again. Sometimes, when there are interrupts, the ISR gets control and delays the timer from waking up.
What is the best solution to handle this kind of problem?
(Additional details: The task is a MAC timer for a wireless network; RTOS is a u-velOSity microkernel)
You should be using the timer API provided by the OS instead of relying on the scheduler. Here's an introduction to the timer API for Linux drivers.
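A minimal sketch of that kernel timer API, assuming a reasonably recent Linux kernel (timer_setup() and friends); note that the callback runs in softirq context and that the achievable resolution depends on CONFIG_HZ, so this illustrates the API rather than being a drop-in fix for the RTOS in question:

#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list my_timer;            /* hypothetical example timer */

static void my_timer_cb(struct timer_list *t)
{
    /* Do the short periodic work here, then re-arm for ~10 ms later. */
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(10));
}

static int __init my_timer_init(void)
{
    timer_setup(&my_timer, my_timer_cb, 0);
    mod_timer(&my_timer, jiffies + msecs_to_jiffies(10));
    return 0;
}

static void __exit my_timer_exit(void)
{
    del_timer_sync(&my_timer);
}

module_init(my_timer_init);
module_exit(my_timer_exit);
MODULE_LICENSE("GPL");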
If you need hardcore timing, the OS scheduler is not likely to be good enough (as you've found).
If you can, use a separate timer peripheral, and use its ISR to do as little as you can get away with (timestamping some critical data, setting some flags, for example), and then let your higher-jitter routine make use of that data with its less guaranteed timing.
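A generic sketch of that split (the register access, process_tick() and task_yield() are hypothetical; the point is simply that the ISR is a couple of stores and everything else tolerates jitter):

#include <stdbool.h>
#include <stdint.h>

uint32_t read_capture_register(void);   /* hypothetical timer peripheral access */
void     process_tick(uint32_t ts);     /* hypothetical MAC timer work */
void     task_yield(void);              /* hypothetical OS yield/sleep call */

static volatile uint32_t last_capture;
static volatile bool     capture_ready;

/* Timer-peripheral ISR: just grab the timestamp and set a flag. */
void timer_capture_isr(void)
{
    last_capture  = read_capture_register();
    capture_ready = true;
}

/* Higher-jitter task: picks the data up whenever it gets scheduled. */
void mac_timer_task(void)
{
    for (;;) {
        if (capture_ready) {
            uint32_t ts = last_capture;
            capture_ready = false;
            process_tick(ts);
        }
        task_yield();
    }
}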
Linux is not an RTOS, and that is probably the root of your problem.
You can render Linux more suited to real-time use in various ways and to various extent. See A comparison of real-time Linux approaches for some methods and an assessment of the level of real-time performance you can expect.

real time intervals in C/C++

Is it possible to make real-time intervals in a non-Real-Time Linux application in C/C++?
I'm writing an ADC simulator. This is an application that generates packets at a certain frequency. It is important that the frequency of packet generation corresponds as closely as possible to the sampling rate of the ADC. That is why I don't want to use sleep() and usleep() to set the packet generation time intervals.
Thanks.
Is it possible to make real-time intervals in a non-Real-Time Linux application in C/C++?
No... if it were, it would be a Real-Time Linux system.
That said, you can probably get very close, so it depends on your intervals and tolerances. Your only serious option for sub-timeslice precision is to nail the sending thread to a core and let it spin, while keeping other processing off that core, but that's very wasteful of hardware....
If you can afford to have latencies long enough for your sending code to be re-scheduled then you can look at setting up alarms & signal handlers, but that's potentially massively higher latency, perhaps only on relatively rare occasions where the cores have all been otherwise utilised. To assess how well this works, you've got to do real measurements under realistic system loads.
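For what the "nail the sending thread to a core and let it spin" option looks like, here is a hedged sketch assuming Linux with pthreads; the period, the core number, the priority and send_packet() are placeholders, and SCHED_FIFO needs appropriate privileges:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <time.h>

#define PERIOD_NS 100000L                  /* e.g. a 10 kHz sample rate */

void send_packet(void);                    /* hypothetical packet generator */

static void *sender(void *arg)
{
    struct timespec next;
    (void)arg;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* Busy-wait until the absolute deadline to avoid scheduler jitter. */
        struct timespec now;
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while (now.tv_sec < next.tv_sec ||
                 (now.tv_sec == next.tv_sec && now.tv_nsec < next.tv_nsec));

        send_packet();

        next.tv_nsec += PERIOD_NS;         /* advance by a fixed period... */
        if (next.tv_nsec >= 1000000000L) { /* ...so errors don't accumulate */
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 };
    cpu_set_t cpus;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    CPU_ZERO(&cpus);
    CPU_SET(1, &cpus);                     /* nail the thread to core 1 */
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    if (pthread_create(&t, &attr, sender, NULL) != 0)
        return 1;                          /* likely EPERM without privileges */
    pthread_join(t, NULL);
    return 0;
}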
The packet generator shouldn't be with the packet sender.
If you want the packets to be sent on time, you should create the packets beforehand and hand them to the packet sender.
So you need a thread with a work queue, and use a sleep on that thread to send the packets on time (you can look at Boost's sleep()).
