HW Timer (PWM) or SW Timer to control an LED

As the title says, is it generally good practice to use general-purpose timers for dimming an LED (PWM with variable duty cycle), or is it better to use OS scheduling/tasks when available (an RTOS, etc.)?
I recently saw an example of a blinking LED using the RTOS's internal timers, and I was wondering whether the timer's period can be made short enough that you can dim an LED (~2 kHz).
Regards,

Pulsing an LED in software could flicker if some other task were to interfere with its scheduling, and you won't get much fine control over brightness. So if PWM hardware is available (and it can work with that pin, and isn't needed for something else), I would use the hardware.
A common pattern is to use PWM to control the visible brightness of the LED, then have a regularly scheduled software task vary it smoothly (to produce fades, blinks and so forth), based on a counter and some state/variables which might be controlled by other tasks.
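For illustration, the update task might look something like this (a sketch only; pwm_set_duty() stands in for whatever your HAL or RTOS actually provides):

#include <stdint.h>

extern void pwm_set_duty(uint16_t duty_permille);   /* hypothetical HAL call */

static int16_t duty = 0;    /* 0..1000 maps to 0..100% duty */
static int16_t step = 10;

/* Called at a fixed rate by the scheduler (e.g. every 10 ms). */
void led_fade_task(void)
{
    duty += step;
    if (duty >= 1000 || duty <= 0)
        step = -step;                  /* reverse direction at either end */
    pwm_set_duty((uint16_t)duty);      /* the hardware does the fast switching */
}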

Related

How to know whether an IRQ was served immediately on ARM Cortex M0+ (or any other MCU)

For my application (running on an STM32L082) I need accurate (relative) timestamping of a few types of interrupts. I do this by running a timer at 1 MHz and taking its count as soon as the ISR is run. They are all given the highest priority so they pre-empt less important interrupts. The problem I'm facing is that they may still be delayed by other interrupts at the same priority and by code that disables interrupts, and there seems to be no easy way to know this happened. It is no problem that the ISR was delayed, as long as I know that the particular timestamp is not accurate because of this.
My current approach is to let each ISR and each block of code with interrupts disabled check whether interrupts are pending using NVIC->ISPR[0] and flagging this for the pending ISR. Each ISR checks this flag and, if needed, flags the timestamp taken as not accurate.
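For illustration, that scheme might look like this on an STM32L0 (a sketch only; using TIM2 as the 1 MHz timestamp timer and all variable names are my assumptions):

#include "stm32l0xx.h"   /* CMSIS device header: NVIC, TIM2, EXTI4_15_IRQn */

volatile uint32_t exti_timestamp;
volatile int exti_ts_suspect;   /* set when the next timestamp may be late */

void EXTI4_15_IRQHandler(void)
{
    exti_timestamp = TIM2->CNT;   /* capture the 1 MHz count first */
    /* ... clear the EXTI pending flag, handle the event ... */
}

/* Call at the end of every block that runs with interrupts disabled. */
static inline void check_pending_after_critical(void)
{
    if (NVIC->ISPR[0] & (1u << EXTI4_15_IRQn))
        exti_ts_suspect = 1;   /* IRQ fired while masked: its timestamp is late */
}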
Although this works, it feels like it's the wrong way around. So my question is: is there another way to know whether an IRQ was served immediately?
The IRQs in question are EXTI4-15 for a GPIO pin change and RTC for the wakeup timer. Unfortunately I'm not in the position to change the PCB layout and use TIM input capture on the input pin, nor to change the MCU used.
update
The fundamental limit to accuracy in the current setup is determined by the nature of the internal RTC calibration, which periodically adds/removes 32 kHz ticks, leading to ~31 µs jitter. My goal is to eliminate (or at least detect) additional timestamping inaccuracies where possible. Having interrupts blocked incidentally for, say, 50+ µs is hard to avoid and influences measurements, hence the need to at least know when this occurs.
update 2
To clarify, I think this is a software question, asking if a particular feature exists and if so, how to use it. The answer I am looking for is one of: "yes it is possible, just check bit X of register Y", or "no it is not possible, but MCU ... does have such a feature, called ..." or "no, such a feature is generally not available on any platform (but the common workaround is ...)". This information will guide me (and future readers) towards a solution in software, and/or requirements for better hardware design.
In general
The ideal solution for accurate timestamping is to use timer capture hardware (built-in to the microcontroller, or an external implementation). Aside from that, using a CPU with enough priority levels to make your ISR always the highest priority could work, or you might be able to hack something together by making the DMA engine sample the GPIO pins (specifics below).
Some microcontrollers have connections between built-in peripherals that allow one peripheral to trigger another (like a GPIO pin triggering timer capture even though it isn't a dedicated timer capture input pin). Manufacturers have different names for this type of interconnection, but a general overview can be found on Wikipedia, along with a list of the various names. Exact capabilities vary by manufacturer.
I've never come across a feature in a microcontroller for indicating if an ISR was delayed by a higher priority ISR. I don't think it would be a commonly-used feature, because your ISR can be interrupted by a higher priority ISR at any moment, even after you check the hypothetical was_delayed flag. A higher priority ISR can often check if a lower priority interrupt is pending though.
For your specific situation
A possible approach is to use a timer and DMA (similar to audio streaming, double-buffered/circular modes are preferred) to continuously sample your GPIO pins to a buffer, and then you scan the buffer to determine when the pins changed. Note that this means the CPU must scan the buffer before it is overwritten again by DMA, which means the CPU can only sleep in short intervals and must keep the timer and DMA clocks running. ST's AN4666 is a relevant document, and has example code here (account required to download example code). They're using a different microcontroller, but they claim the approach can be adapted to others in their lineup.
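Very roughly, the register-level setup might look like this on an STM32L0 (a sketch only; the DMA request routing via DMA1_CSELR, the clock enables and the exact channel are assumptions to check against the reference manual and AN4666):

#include "stm32l0xx.h"

#define SAMPLES 1024
static uint16_t idr_buf[SAMPLES];   /* circular snapshot of GPIOA inputs */

void gpio_sampler_init(void)
{
    TIM2->DIER |= TIM_DIER_UDE;     /* each timer update requests a DMA transfer */

    /* Peripheral-to-memory, circular, 16-bit both sides, memory increment. */
    DMA1_Channel2->CPAR  = (uint32_t)&GPIOA->IDR;
    DMA1_Channel2->CMAR  = (uint32_t)idr_buf;
    DMA1_Channel2->CNDTR = SAMPLES;
    DMA1_Channel2->CCR   = DMA_CCR_CIRC | DMA_CCR_MINC
                         | DMA_CCR_PSIZE_0 | DMA_CCR_MSIZE_0
                         | DMA_CCR_HTIE | DMA_CCR_TCIE   /* half/full interrupts */
                         | DMA_CCR_EN;

    TIM2->CR1 |= TIM_CR1_CEN;       /* start sampling at the timer rate */
}
/* The half-transfer and transfer-complete ISRs must scan each half of
   idr_buf for pin changes before DMA wraps around and overwrites it. */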
Otherwise, with your current setup, I don't think there is a better solution than the one you're using (the flag that's set when you detect a delay). The ARM Cortex-M0+ NVIC does not have a feature to indicate if an ISR was delayed.
A refinement to your current approach might be making the ISRs as short as possible, so they only do the timestamp collection and then put any other work into a queue for processing by the main application at a lower priority (only applicable if the work is more complex than the enqueue operation, and if the work isn't time-sensitive). Eliminating interrupts-disabled regions, or at least keeping them short, should also help.
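A sketch of that split, assuming a single-producer/single-consumer ring buffer (all names are placeholders):

#include <stdint.h>
#include "stm32l0xx.h"

#define QLEN 16u   /* power of two so the index masking below works */
static volatile uint32_t q[QLEN];
static volatile uint32_t q_head, q_tail;

extern void process_event(uint32_t timestamp);   /* hypothetical, runs at low priority */

void EXTI4_15_IRQHandler(void)      /* keep the ISR minimal */
{
    q[q_head & (QLEN - 1u)] = TIM2->CNT;   /* timestamp only */
    q_head++;
    /* ... clear the pending flag ... */
}

void main_loop_poll(void)           /* everything else happens outside the ISR */
{
    while (q_tail != q_head) {
        process_event(q[q_tail & (QLEN - 1u)]);
        q_tail++;
    }
}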

Which MCU(Cortex-M) for time critical GPIO application?

We have an application which runs on a PIC24H, and we would like to port it to another MCU, preferably an ARM Cortex. The application is extremely time-critical, meaning that we need extremely deterministic code behaviour. In short, pulses arrive via special hardware at GPIO pins, and the data is analyzed right away. Processing the data is not complex (we don't need a beefy CPU/MCU to do it). After analyzing the data, the GPIO output pins are written with their values.
App in 3 short lines:
process input pins
determine pattern within processing of input pins
based on the received pattern write output pins
The PIC24H runs at 40 MHz and we can toggle a pin in 25 ns; we would be happy with at least 2x that speed for future upgrades. So an MCU which can run deterministic code and toggle pins at 80 MHz (12.5 ns) would be just fine. We don't need to toggle the pins at a constant fast rate; we need an MCU which can toggle a pin in less than 25 ns. We can't waste cycles while toggling: if one cycle is off, we lose synchronization. Everything must be done with one-cycle precision (or two, but a constant two cycles), so the code must be 100% deterministic.
Please let me know if I'm missing something, or if what we need can be done using some other method on a Cortex-M. Just keep in mind that if one cycle is lost (due to caching or similar) we lose signal sync, and the app will not do its work correctly, or at all.
Thanks!
Br
According to this blog post, the interrupt latency for Cortex-M ranges from 12 to 16 cycles (assuming you are not using FPU registers) with best-case memories. The M0 and M0+ are slower than the M3/M4/M7. On top of this, you need to add the GPIO access times (and watch out for different clock frequencies between the core and the peripherals). The Cortex-M7 will support higher clock speeds than the M3/M4.
It still isn't clear how many cycles are consumed in recognising a pattern, or how an interrupt is useful in doing this. Generally, a low-latency interface function like this would be an obvious target for dedicated hardware, but since you have an existing software solution, it seems the problem is mis-specified.
Providing you avoid accessing any 'slow' peripherals which might stall the bus, the interrupt latency should be deterministic - any specific device should have documentation which covers this.
NXP have an application note which describes some of the detail of how to measure what is going on.
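On an M3/M4/M7 (not the M0/M0+) you can also instrument this yourself with the DWT cycle counter; a sketch, where expected_entry_cycles is hypothetical bookkeeping for when the interrupt actually fired:

#include "stm32f4xx.h"   /* any CMSIS device header for an M3/M4/M7 part */

volatile uint32_t worst_latency;                  /* worst observed entry delay, in cycles */
extern volatile uint32_t expected_entry_cycles;   /* hypothetical: CYCCNT value when the IRQ fired */

void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;              /* free-running cycle counter */
}

void TIMER_IRQHandler(void)
{
    uint32_t latency = DWT->CYCCNT - expected_entry_cycles;
    if (latency > worst_latency)
        worst_latency = latency;   /* log/flag if beyond your jitter budget */
    /* ... normal ISR work ... */
}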

Generating a tone with PWM signal to a speaker on a PIC32 microcontroller

I'm currently working on generating a tone on a PIC32 device. The information I've found has not been enough to give me a complete understanding of how to achieve this. As I understand it, a PWM signal sends 1's and 0's with a specified duty cycle and frequency, such that it's possible to make something rotate at a certain speed, for example. But apparently generating a tone takes more than this. I'm primarily focusing on the following two links to create the code:
http://umassamherstm5.org/tech-tutorials/pic32-tutorials/pic32mx220-tutorials/pwm
http://www.mikroe.com/chapters/view/54/chapter-6-output-compare-module/#ch6.4
And also the relevant parts in the reference manual.
One of the links states that to play audio it's necessary to use timer interrupts. How should these be used? Is it necessary to compute the value of the wave with, for example, a sine function, and then combine this with the timer interrupts to set the duty cycle after each interrupt flag?
The end result will be a program that responds to button presses and plays sounds. If a low pass filter is necessary this will be implemented as well.
If you're using PWM to simulate a DAC and output arbitrary audio (for a quick and dirty tone of a given frequency you don't need this complexity), you want to take audio samples (PCM) and convert each of them into the corresponding duty cycle.
Reasonable audio begins at sample rates of 8 kHz (POTS). So, for every sample (every 1/8000th of a second) you'll need to change the duty cycle. And you want these changes to be regular, as irregularities will contribute to audible distortion. So you can program a timer to generate interrupts at an 8 kHz rate and, in the ISR, change the duty cycle according to the new audio sample value (this ISR has to read the samples from memory, unless they form a simple pattern and can be computed on the fly).
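A generic sketch of that ISR (names are placeholders; pwm_set_duty() would write the compare register of the PWM timer):

#include <stdint.h>

#define PWM_PERIOD 1023u   /* PWM timer period in ticks (carrier well above audio rates) */

extern const uint8_t samples[];     /* 8-bit unsigned PCM at 8 kHz */
extern const uint32_t num_samples;
extern void pwm_set_duty(uint32_t ticks);   /* hypothetical HAL call */
static uint32_t sample_idx;

/* Timer ISR fired at exactly 8 kHz. */
void audio_tick_isr(void)
{
    /* Scale the 0..255 sample onto the 0..PWM_PERIOD duty range. */
    pwm_set_duty((samples[sample_idx] * PWM_PERIOD) / 255u);
    if (++sample_idx >= num_samples)
        sample_idx = 0;   /* loop the clip */
    /* ... clear the timer interrupt flag ... */
}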
When you change the duty cycle at a rate of 8 kHz you generate a periodic wave at a frequency of 4 kHz, which is very audible. Filtering it out in analogue circuitry without affecting the sound that you want to hear may not be easy (sharp low-pass filters are tricky/expensive; cheap filters perform poorly). Instead, you can raise the sample rate either to above twice what the speaker can produce (or the human ear can hear), or at least to well above the maximum frequency that you want to produce (in this latter case a cheap analogue filter can remove the unwanted periodic wave without much effect on what you want to hear, since you don't need as much sharpness).
Be warned: if the sample rate is higher than that of your audio file, you'll need a proper upsampler/sample-rate converter. Also remember that raising the sample rate raises CPU utilization (the ISR is invoked more times per second, plus sample-rate conversion, unless your audio is pre-converted) and power consumption.
[I've done this before on my PC's speaker, but it's now ruined, thanks to SMM/SMIs used by the BIOS and the chipset.]
For playing simple tones through PWM you first need a driver circuit, since the PIC cannot drive a speaker directly. Typically a push-pull stage is used, as actively driving both high and low results in better speaker response. It also allows for a series capacitor, acting as a simple high-pass filter to protect the speaker from long DC periods.
This, for example, should work: http://3.bp.blogspot.com/-FFBftqQ0o8c/Tb3x2ouLV1I/AAAAAAAABIA/FFmW9Xdwzec/s400/sound.png
(source: http://electro-mcu-stuff.blogspot.be/ )
The PIC32 has hardware PWM that you can program to generate PWM at a specific frequency and duty cycle. The PWM frequency controls the tone, so by changing the PWM frequency at intervals you can play simple music. The duty cycle affects the volume, but not linearly: high duty cycles come very close to pure DC and will be cut off by the capacitor, while low duty cycles may be inaudible. Some experimentation is in order.
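For the simple-notes case, a sketch for a PIC32MX using Output Compare 1 with Timer2 as the time base (the prescaler, pin mapping/PPS and the 40 MHz peripheral clock are assumptions to verify against your datasheet):

#include <xc.h>
#include <stdint.h>

#define PBCLK_HZ 40000000u   /* assumed peripheral bus clock */

/* Start a tone of the given frequency at ~50% duty on OC1. */
void tone_start(uint32_t freq_hz)
{
    T2CON = 0;                              /* stop Timer2 while configuring */
    T2CONbits.TCKPS = 3;                    /* 1:8 prescale so audio periods fit in 16 bits */
    PR2 = (PBCLK_HZ / 8u / freq_hz) - 1u;   /* PWM period sets the pitch */
    TMR2 = 0;

    OC1CON = 0;
    OC1R = (PR2 + 1u) / 2u;                 /* initial duty */
    OC1RS = (PR2 + 1u) / 2u;                /* buffered duty: crude volume control */
    OC1CONbits.OCM = 6;                     /* PWM mode, fault pin disabled */
    OC1CONbits.ON = 1;

    T2CONbits.ON = 1;                       /* start the time base */
}

void tone_stop(void)
{
    OC1CONbits.ON = 0;
    T2CONbits.ON = 0;
}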
The link mentions timer interrupts because they are not talking about playing simple notes but using PWM + a low pass filter as a simple DAC to play real audio. In this case timer interrupts would be used to update the duty cycle with the next PCM sample to be played at regular intervals (the sampling rate).

U-Boot timer with interrupts

I would like to show U-Boot progress with blinking LEDs. For this purpose I need a delay which does not use a while loop (non-blocking), but interrupts instead.
Is there any implementation of timers inside U-Boot?
I have looked a little bit, but I didn't find non-blocking delays.
Do I need to implement it from scratch?
I am using an AT91SAM9 with U-Boot 2010.06.
Thank you
I use U-Boot for ARM processors, and I have not seen any interrupt implementations. Polling gets the job done for all the peripheral devices I'm familiar with. A timer is implemented; I like the simplicity of their udelay_masked().
I haven't used it, but it looks like CONFIG_SHOW_BOOT_PROGRESS is available to you. The README suggests you add show_boot_progress(int) to blink the LED. Each blink would use a blocking delay. Maybe you could use a different color and/or blink pattern for the checkpoints you want to show have passed.
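A sketch of that hook for an AT91 board (the pin name and delay are placeholders; check that your tree has the at91_set_gpio_value() helper):

/* Board file, with CONFIG_SHOW_BOOT_PROGRESS defined in the board config. */
#include <common.h>
#include <asm/arch/gpio.h>

#define LED_PIN AT91_PIN_PB0   /* placeholder: whichever GPIO drives your LED */

void show_boot_progress(int progress)
{
    /* One short blocking blink per checkpoint; use a different count or
       pattern per progress value if you need to tell checkpoints apart. */
    at91_set_gpio_value(LED_PIN, 1);
    udelay(100000);            /* 100 ms on */
    at91_set_gpio_value(LED_PIN, 0);
}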

Software PWM without clobbering the CPU?

This is an academic question (I'm not necessarily planning on doing it) but I am curious about how it would work. I'm thinking of a userland software (rather than hardware) solution.
I want to produce PWM signals (let's say for a small number of digital GPIO pins, but more than 1). I would probably write a program which created a Pthread, and then infinitely looped over the duty cycle with appropriate sleep()s etc in that thread to get the proportions right.
Would this not clobber the CPU horribly? I imagine the frequency would be somewhere around the 100 Hz mark. I've not done anything like this before, but I can imagine that the constant looping, context switches, etc. wouldn't be great for multitasking or CPU usage.
Any advice about CPU use and multitasking in this case? FWIW I'm thinking of a single-core processor. I have a feeling answers could range from 'that will make your system unusable' to 'the numbers involved are orders of magnitude smaller than would make an impact on a modern processor'!
Assume C because it seems most appropriate.
EDIT: Assume Linux or some other general purpose POSIX operating system on a machine with access to hardware GPIO pins.
EDIT: I had assumed it would be obvious how I would implement PWM with sleep. For the avoidance of doubt, something like this:
while (1)
{
    // Set all channels high at the start of each PWM period
    for (int c = 0; c < NUM_CHANNELS; c++)
    {
        set_gpio_pin(c, 1);
    }
    // Loop over time units within the PWM period
    for (int x = 0; x < DUTY_CYCLE_UNITS; x++)
    {
        // Set each channel low once its duty time has elapsed
        for (int c = 0; c < NUM_CHANNELS; c++)
        {
            if (x >= CHANNELS[c])
            {
                set_gpio_pin(c, 0);
            }
        }
        usleep(DUTY_CYCLE_UNIT); // microseconds; sleep() only has whole-second resolution
    }
}
Use a driver if you can. If your embedded device has a PWM controller, then fine, else dedicate a hardware timer to generating the PWM intervals and driving the GPIO pins.
If you have to do this at user level, raising a process/thread to a high priority and using sleep() calls is sure to generate a lot of jitter and a poor pulse-width range.
You do not state the ultimate purpose of this very clearly, but since you have tagged this embedded and pthreads, I will assume you have a dedicated chip running a Linux variant.
In this case, I would suggest the best way to create PWM output is through your main program loop, since I assume the PWM is part of a greater control application. Most simple embedded applications (no UI) can run in a single thread with periodic updates of the GPIOs in your main thread.
For example:
InitIOs();       // configure pin directions and initial output levels
while (1)
{
    // Do stuff: the rest of the control application
    UpdatePWM(); // advance the PWM phase and write the GPIOs accordingly
}
That being said, check your chip specification: most embedded devices have dedicated PWM output pins (which can also act as GPIOs), and those can be configured simply in hardware by setting a duty cycle and updating that duty cycle as required. In this case, the hardware will do the work for you.
If you can clarify your situation a bit I can likely give you a more detailed answer.
A better way is probably to use some kind of interrupt-driven approach. I suppose it depends on your system, but IIRC Arduino uses interrupts for PWM.
100 Hz seems about doable from user space. Typical OS task-scheduler timeslices are around 10 ms, too, so your CPU will already be multitasking at about that interval. You'll probably want to use a high process priority (low niceness) to ensure the sleeps won't overrun (much), and keep track of actual wall time, potentially adjusting your sleep values down based on that feedback to avoid drift. You'll also need to make sure the timer the kernel uses for this on your hardware has a high enough resolution!
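One way to avoid cumulative drift from user space is to sleep until absolute deadlines instead of for relative intervals, e.g. with POSIX clock_nanosleep() (a sketch; set_gpio_edges() is a placeholder):

#include <time.h>

#define PERIOD_NS 10000000L   /* 10 ms slice of a 100 Hz PWM loop */

extern void set_gpio_edges(void);   /* placeholder: drive the pins for this slice */

void pwm_loop(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        set_gpio_edges();
        /* Advance the absolute deadline; oversleeping one period
           doesn't accumulate, because the next deadline is fixed. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}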
If you're very low on RAM and swapping heavily, you could run into problems with your program being paged out to disk. Also, if the kernel is doing other CPU-intensive work, this would introduce unacceptable delays (other, lower-priority user-space tasks should be OK). If keeping the frequency constant is critical, you're better off solving this in the kernel (or even running a realtime kernel).
Using a thread and sleeping on an OS that is not an RTOS is not going to produce very accurate or consistent results.
A better method is to use a timer interrupt and toggle the GPIO in the ISR. Unlike using a hardware PWM output on a hardware timer, this approach allows you to use a single timer for multiple signals and for other purposes. You will still probably see more jitter than with hardware PWM, and the practical frequency range and pulse resolution will be much lower than is achievable in hardware, but at least the jitter will be on the order of microseconds rather than milliseconds.
If you have a timer, you can set it up to kick an interrupt each time a new PWM edge is required. With some clever coding, you can queue these up so the interrupt handler knows which of many PWM channels is due and whether a rising or falling edge is required, and then schedule itself for the next required edge.
If you have enough of these timers, then it's even easier, as you can allocate one per PWM channel.
On an embedded controller with a low-latency interrupt response, this can produce surprisingly good results.
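A sketch of the one-timer, many-channels idea, assuming a free-running 16-bit timer with a compare interrupt (timer_count(), timer_set_compare() and gpio_write() are hypothetical HAL calls, and each duty must satisfy 0 < duty < period):

#include <stdint.h>

#define NCH 4

extern uint16_t timer_count(void);             /* hypothetical HAL */
extern void timer_set_compare(uint16_t at);
extern void gpio_write(int ch, uint8_t level);

static struct {
    uint16_t duty;        /* high time in timer ticks */
    uint16_t period;      /* full period in timer ticks */
    uint16_t next_edge;   /* timer count of this channel's next edge */
    uint8_t level;        /* current output level */
} ch[NCH];

/* Compare ISR: emit any edges that are due, then schedule the soonest one. */
void timer_compare_isr(void)
{
    uint16_t now = timer_count();
    uint16_t soonest = 0xFFFFu;

    for (int i = 0; i < NCH; i++) {
        if ((int16_t)(ch[i].next_edge - now) <= 0) {   /* wrap-safe "due" test */
            ch[i].level ^= 1u;
            gpio_write(i, ch[i].level);
            ch[i].next_edge += ch[i].level ? ch[i].duty
                                           : (ch[i].period - ch[i].duty);
        }
        uint16_t delta = (uint16_t)(ch[i].next_edge - now);
        if (delta < soonest)
            soonest = delta;
    }
    timer_set_compare((uint16_t)(now + soonest));
}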
I fail to understand why you would want to do PWM in software, with all of the inherent timing jitter that interrupt servicing and software interactions will introduce (e.g. the PWM interrupt hits when interrupts are disabled, the processor is servicing a long uninterruptible instruction, or another service routine is active). Most modern microcontrollers (ARM7, ARM Cortex-M, AVR32, MSP, ...) have timers that can either be configured to produce PWM or are dedicated PWM generators. These will produce multiple rock-steady PWM signals that, once set up, require zero processor input to keep running. These PWM outputs can be configured so that two signals do not overlap or have simultaneous edges, as required by the application.
If you are relying on the OS sleep function to set the time between the PWM edges, this will run slow: the sleep function sets the minimum time between task activations, and the actual time between them will be extended by task switches, the presence of a higher-priority thread, or other kernel functions running.
