Two ISRs with the same timer (AVR)

I'd like to know if it's possible to use two different ISRs (e.g. ICP input capture and timer overflow) with the same timer in AVR microcontrollers.

Yes, it is possible. Each independent interrupt source has its own vector.
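For example, on an ATmega328P the input-capture and overflow interrupts of Timer1 can be handled side by side. A minimal avr-libc sketch (the prescaler choice is arbitrary):

#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint16_t last_capture;
volatile uint16_t overflow_count;

ISR(TIMER1_CAPT_vect)               /* input capture event on the ICP1 pin */
{
    last_capture = ICR1;            /* timestamp latched by the hardware */
}

ISR(TIMER1_OVF_vect)                /* Timer1 overflow */
{
    overflow_count++;               /* extend the 16-bit counter in software */
}

int main(void)
{
    TCCR1B = (1 << CS10);                   /* start Timer1, no prescaling */
    TIMSK1 = (1 << ICIE1) | (1 << TOIE1);   /* enable both interrupt sources */
    sei();                                  /* global interrupt enable */
    for (;;)
        ;
}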

Related

ARM Cortex M3 - Add a new interrupt to the end of the vector table?

I am doing some bare metal C development on an ARM Cortex M3 SoC, and I wanted to check and see if it is possible to add a new user-defined interrupt handler to the NVIC. I am adding my own IRQ with the plan of triggering it via software, either via NVIC_SetPendingIRQ() or via the NVIC->STIR register. Neither seems to work.
I have added my interrupt vector name to the end of the vector list in the CMSIS startup assembler file, and added the corresponding enum to the system header, but while debugging and executing the function call NVIC_EnableIRQ(), it doesn't correctly update the NVIC->ISER (Interrupt Set Enable Register). So I guess the question is: can you even add your own interrupt? There are 256 total interrupts that can be used in the ARM Cortex M3, and I just followed how the others were added, so I figured it wouldn't be an issue.
Thank you.
The datasheet for your SoC should say how many interrupts are supported by the NVIC. While 240 is the maximum possible number for a Cortex-M3 device in general, the actual number on your chip is defined by the implementation and it makes sense to have that number be as small as possible to reduce costs.
In general, there is no way to add interrupts in software, but you might be able to use the SVCall interrupt, which is designed to be triggered by software. Or you could find some other interrupt you aren't using in your system, and which is not being activated by hardware, and try to use that for your purposes.
References:
Nested Vectored Interrupt Controller in the Cortex-M3 Devices Generic User Guide
The SVC instruction invokes the SVCall handler with an 8-bit service number available to the handler, which can be used to invoke a routine from a look-up table (essentially a secondary vector table for software interrupts).
An example of that can be found at https://developer.arm.com/documentation/ka004005/latest, except there it uses a switch rather than a look-up table - to the same effect.
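A minimal sketch of that dispatch pattern, assuming GCC on a Cortex-M3 (the two service routines are placeholders):

#include <stdint.h>

void service_0(void);                 /* placeholder service routines */
void service_1(void);

static void (* const svc_table[])(void) = { service_0, service_1 };

void svc_dispatch(uint32_t *frame)
{
    /* The stacked PC is word 6 of the exception frame; the SVC number is
       the low byte of the 16-bit Thumb instruction just before it. */
    uint8_t svc_number = ((uint8_t *)frame[6])[-2];
    if (svc_number < sizeof svc_table / sizeof svc_table[0])
        svc_table[svc_number]();
}

__attribute__((naked)) void SVC_Handler(void)
{
    __asm volatile(
        "tst lr, #4      \n"          /* EXC_RETURN bit 2: MSP or PSP? */
        "ite eq          \n"
        "mrseq r0, msp   \n"
        "mrsne r0, psp   \n"
        "b svc_dispatch  \n");
}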

Which MCU (Cortex-M) for a time-critical GPIO application?

We have an application which runs on a PIC24H, and we would like to port it to another MCU, preferably an ARM Cortex. The application is extremely time critical, meaning that we need extremely deterministic code behaviour. In short, pulses arrive from special hardware at GPIO pins and the data is analyzed right away. Processing the data is not complex (we don't need a beefy CPU/MCU to do it). After the data is analyzed, GPIO output pins are written with their values.
App in 3 short lines:
process input pins
determine pattern within processing of input pins
based on the received pattern write output pins
The PIC24H runs at 40 MHz and we can toggle a pin in 25 ns; we would be grateful for at least 2x that speed for future upgrades. So an MCU which can run deterministic code and toggle pins at 80 MHz (12.5 ns) would be just fine. We don't need to toggle the pins at a constant fast rate; we need an MCU which can toggle a pin in less than 25 ns. We can't waste cycles while toggling: if one cycle is off we lose synchronization. Everything must be done with one-cycle precision (or two, but a constant two cycles), so the code should be 100% deterministic.
Please let me know if I'm missing something or if what we need can be done using some other method on a Cortex-M. Just keep in mind that if one cycle is lost (due to cache or similar) we lose signal sync and the app will not do its work right, or at all.
Thanks!
According to this blog post, the interrupt latency for Cortex-M ranges from 12 to 16 cycles (assuming you are not using FPU registers) with best-case memories; M0 and M0+ are slower than M3/M4/M7. On top of this you need to add the GPIO access times (and watch out for different clock frequencies between the core and the peripherals). A Cortex-M7 will support higher clock speeds than an M3/M4.
It still isn't clear how many cycles are consumed in recognising a pattern, or how an interrupt is useful in doing this - generally a low-latency interface function like this would be an obvious target for dedicated hardware, but since you have an existing software solution, it seems the problem is mis-specified.
Providing you avoid accessing any 'slow' peripherals which might stall the bus, the interrupt latency should be deterministic - any specific device should have documentation which covers this.
NXP have an application note which describes some of the detail of how to measure what is going on.
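To give a feel for the software side of the fixed cost, the handler itself can be as small as one store to acknowledge the interrupt and one store to drive the pin. This sketch assumes an STM32-style Cortex-M with CMSIS register names, not the asker's exact part:

void EXTI0_IRQHandler(void)          /* pin-change interrupt on EXTI line 0 */
{
    EXTI->PR = EXTI_PR_PR0;          /* acknowledge the interrupt */
    GPIOB->BSRR = 1u << 0;           /* set PB0 high: a single store via the
                                        bit set/reset register, no read-modify-write */
}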

Can I disable Interrupts on a BBB for a short duration (0.5ms)?

I am trying to write a small driver program on a BeagleBone Black that needs to send a signal whose bits are encoded in pulse widths (the original timing diagram is not reproduced here).
I need to send 360 bits of information. I'm wondering if I can turn off all interrupts on the board for a duration of 500µs while I send the signal. I have no idea if I can just turn off all the interrupts like that. Searches have been unkind to me so far. Any ideas how I might achieve this? I do have some prototypes in assembly language for the signal, but I'm pretty sure it's being broken by interrupts.
So for example, I'm hoping I could have something like this:
disable_irq();
/* asm code to send my bytes */
reenable_irq();
What would the bodies of disable_irq() and reenable_irq() look like?
The calls you would want to use are local_irq_disable() and local_irq_enable() to disable & enable IRQs locally on the current CPU. This also has the effect of disabling all preemption on the CPU.
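In a kernel module that could look like the sketch below; local_irq_save()/local_irq_restore() are the state-preserving variants, and send_my_bits() is a placeholder name for your assembly routine:

#include <linux/irqflags.h>

unsigned long flags;

local_irq_save(flags);      /* disable IRQs on this CPU, remembering the old state */
send_my_bits();             /* timing-critical bit-banging (placeholder) */
local_irq_restore(flags);   /* restore the previous IRQ state */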
Now let's talk about your general approach. If I understand you correctly, you'd like to bit-bang your protocol over a GPIO with timing accurate to < 1/3 µs.
This will be a challenge. Tests show that the BeagleBone Black GPIO toggle frequency maxes out at ~2.78 MHz when writing directly to the SoC I/O registers in kernel mode (~0.18 µs minimum pulse width).
So, although this might be achievable by the thinnest of margins by writing atomic code in kernel space, I propose another concept:
Implement your custom serial protocol on the SPI bus.
Why?
The SPI bus can be clocked up to 48 MHz on the BeagleBone Black, it's buffered, and it can be used with the DMA engine. Therefore, you don't have to worry about disabling interrupts and monopolizing your CPU for this one interface. With a timing resolution of ~0.021 µs (@ 48 MHz), you should be able to achieve your timing needs with an acceptable margin of error.
With the bus configured for Single Channel Continuous Transfer Transmit-Only Master mode and 30-bit word length (two 30-bit words for each bit of your protocol):
To write a '0' with your protocol, you'd write the 2-word sequence - 17 '1's followed by 43 '0's - on SPI (@ 48 MHz).
To write a '1' with your protocol, you'd write the 2-word sequence - 43 '1's followed by 17 '0's - on SPI (@ 48 MHz).
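A sketch of the encoding; the 17/43 split and the 48 MHz clock come from the numbers above, while the helper name and MSB-first transmission order are assumptions:

#include <stdint.h>

/* Build the two 30-bit SPI words (60 clocks @ 48 MHz) for one protocol bit. */
static void encode_bit(int bit, uint32_t word[2])
{
    int highs = bit ? 43 : 17;                        /* leading '1' clocks */
    uint64_t pattern = ((1ULL << highs) - 1) << (60 - highs);
    word[0] = (uint32_t)(pattern >> 30);              /* first 30-bit word */
    word[1] = (uint32_t)(pattern & 0x3FFFFFFFu);      /* second 30-bit word */
}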
From your signal timings it's easy to figure out that SPI or another serial peripheral cannot meet your demand: in your timings, the encoding is based on the width of the pulse. So let's get to the point:
Q1: Could you turn off all interrupts for a duration of 500µs?
A: 0.5 ms is quite a long time in an embedded system. ISRs exist to enable the concurrency of multiple tasks and to improve real-time behaviour. Keep in mind that ISRs and context switches (on some chip architectures) are all affected by the global interrupt flag.
But if your top priority is to meet the timings, and the real-time windows of the other tasks can tolerate it, of course you can disable global interrupts for that duration - even longer. If not, don't perform an atomic operation over such a long time.
Q2: How?
A: For any given chip there are certainly asm instructions to enable/disable global interrupts. Find those instructions or the APIs provided by your OS, then follow the three steps below (pseudocode):
state_t tState = get_interrupt_status();   /* step 1: remember the current state */
disable_interrupt();                       /* step 2: global interrupts off */
/* ...your timing-critical operation here... */
resume_interrupt(tState);                  /* step 3: restore, don't blindly re-enable */
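As a concrete instance of that pattern, on a Cortex-M with CMSIS the three steps map onto the PRIMASK intrinsics (on the BeagleBone's Cortex-A8 under Linux you would use the kernel's local_irq_save()/local_irq_restore() pair instead):

uint32_t primask = __get_PRIMASK();   /* step 1: remember the current state */
__disable_irq();                      /* step 2: cpsid i */
/* ...timing-critical operation... */
__set_PRIMASK(primask);               /* step 3: restore the previous state */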

Implementing a non-standard SPI variation on ARM Cortex M3

I need to create a driver for a flash memory chip connected to an STM32 Cortex M3 MCU. The chip is controlled via an SPI bus. I intended to use the integrated SPI peripheral of the MCU, but unfortunately it only supports 8- or 16-bit data frames while the flash chip commands are 14 bits long. Thus, I have to implement the protocol from scratch using GPIOs. My question is: what is the right way to ensure correct timing of the signals? I currently think of inserting delays between asserting and deasserting GPIO lines with interrupts disabled, but it seems fairly unreliable to me. Are there any better methods?
Jeb's answer describes the preferred method: use the hardware SPI if possible, and if DMA is an option, that is nice as well.
If you for some reason find that you cannot use the hardware SPI and must implement the protocol by "bit-banging" over GPIO, you should check what options are available in the timer/PWM hardware on the MCU. You cannot and should not use blunt "hobbyist burn-away delays" as in the link you posted; the real-time performance will be crap and you will occupy the CPU 100%.
Most MCU timers come with a pin output feature that allows a pin to change state when the timer elapses. The pseudo code would then be:
Determine if the next bit to send is 1 or 0.
Set the MCU polarity register accordingly, so that it will switch the pin to a high or low level.
When the timer elapses, you need to set the polarity once again, likely through an interrupt. How to do this is very hardware-dependent.
At the same time as you bit-bang the data (MOSI), you also need to generate the clock and chip select. The clock can be generated in the same way as the data, or possibly through a PWM signal if that option is available. Chip select is the easiest part as you only need to pull a pin low during the data transmission.
Finally, there is most likely some application note or official example of how to write a software SPI for your particular MCU.
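As a rough sketch of the polarity-switching idea on an STM32F1 (CMSIS register names; the timer choice, MSB-first bit order, and framing are assumptions, and clock/GPIO setup is omitted):

#include <stdint.h>
#include "stm32f10x.h"                /* assumed CMSIS device header */

static volatile uint16_t frame;       /* 14-bit command, sent MSB first */
static volatile int bits_left;

void TIM2_IRQHandler(void)
{
    TIM2->SR = ~TIM_SR_UIF;           /* clear the update interrupt flag */
    if (bits_left == 0)
    {
        TIM2->CR1 &= ~TIM_CR1_CEN;    /* frame finished: stop the timer */
        return;
    }
    /* Force the compare output high (OC1M = 101) or low (OC1M = 100) for the
       next bit period; the pin then switches in hardware, without jitter. */
    uint32_t forced = (frame & 0x2000) ? (TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_0)
                                       : TIM_CCMR1_OC1M_2;
    TIM2->CCMR1 = (TIM2->CCMR1 & ~TIM_CCMR1_OC1M) | forced;
    frame <<= 1;
    bits_left--;
}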
I would recommend using the built-in SPI and DMA if possible!
You could remap your data into an array of bytes whose size is a multiple of 14 bits.
The smallest such unit is lcm(14, 8) = 56 bits, i.e. 7 bytes carrying four 14-bit words, so you would send a multiple of 7 bytes each time.
Then you can use the standard SPI with 8-bit frame size.
This should be much faster with SPI/DMA than bit-banging the GPIOs.
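A hedged sketch of that remapping; the helper name and MSB-first packing order are assumptions:

#include <stdint.h>

/* Pack four 14-bit commands into 7 bytes (56 bits) for an 8-bit SPI/DMA transfer. */
static void pack4x14(const uint16_t cmd[4], uint8_t out[7])
{
    uint64_t bits = 0;
    for (int i = 0; i < 4; i++)
        bits = (bits << 14) | (cmd[i] & 0x3FFFu);     /* concatenate MSB first */
    for (int i = 6; i >= 0; i--)
    {
        out[i] = (uint8_t)(bits & 0xFFu);             /* out[0] ends up holding the
                                                         most significant byte */
        bits >>= 8;
    }
}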
Some devices that use obscure data lengths are designed so that at the start of a transaction they will either ignore all "0" bits that are clocked in before the first "1", or all "1" bits that are clocked in before the first "0". If your device happens to be designed in such a fashion, you may be able to use 8- or 16-bit SPI mode by clocking out two "junk" bits along with the bits of interest.

Software PWM without clobbering the CPU?

This is an academic question (I'm not necessarily planning on doing it) but I am curious about how it would work. I'm thinking of a userland software (rather than hardware) solution.
I want to produce PWM signals (let's say for a small number of digital GPIO pins, but more than 1). I would probably write a program which created a Pthread, and then infinitely looped over the duty cycle with appropriate sleep()s etc in that thread to get the proportions right.
Would this not clobber the CPU horribly? I imagine the frequency would be somewhere around the 100 Hz mark. I've not done anything like this before but I can imagine that the constant looping, context switches etc wouldn't be great for multitasking or CPU usage.
Any advice about CPU use and multitasking in this case? FWIW I'm thinking of a single-core processor. I have a feeling answers could range from 'that will make your system unusable' to 'the numbers involved are orders of magnitude smaller than would make an impact on a modern processor'!
Assume C because it seems most appropriate.
EDIT: Assume Linux or some other general purpose POSIX operating system on a machine with access to hardware GPIO pins.
EDIT: I had assumed it would be obvious how I would implement PWM with sleep. For the avoidance of doubt, something like this:
while (TRUE)
{
    // Set all channels high at the start of each period
    for (int c = 0; c < NUM_CHANNELS; c++)
    {
        set_gpio_pin(c, 1);
    }
    // Loop over units within duty cycle
    for (int x = 0; x < DUTY_CYCLE_UNITS; x++)
    {
        // Set channels low when their number is up
        for (int c = 0; c < NUM_CHANNELS; c++)
        {
            if (x > CHANNELS[c])
            {
                set_gpio_pin(c, 0);
            }
        }
        sleep(DUTY_CYCLE_UNIT); // really usleep()/nanosleep(): one unit is far below a second
    }
}
Use a driver if you can. If your embedded device has a PWM controller, then fine, else dedicate a hardware timer to generating the PWM intervals and driving the GPIO pins.
If you have to do this at user level, raising a process/thread to a high priority and using sleep() calls is sure to generate a lot of jitter and a poor pulse-width range.
You do not state the ultimate purpose of this very clearly, but since you have tagged this embedded and pthreads, I will assume you have a dedicated chip running a Linux variant.
In this case, I would suggest the best way to create PWM output is through your main program loop, since I assume the PWM is part of a greater control application. Most simple embedded applications (no UI) can run in a single thread with periodic updates of the GPIOs in your main thread.
For example:
InitIOs();
while(1)
{
// Do stuff
UpdatePWM();
}
That being said, check your chip's specification: most embedded devices have dedicated PWM output pins (which can also act as GPIOs), and those can be configured in hardware simply by setting a duty cycle and updating that duty cycle as required. In this case, the hardware will do the work for you.
If you can clarify your situation a bit I can likely give you a more detailed answer.
A better way is probably to use some kind of interrupt-driven approach. I suppose it depends on your system, but IIRC Arduino uses interrupts for PWM.
100Hz seems about doable from user space. Typical OS task scheduler timeslices are around 10ms, too, so your CPU will already be multitasking at about that interval. You'll probably want to use a high process priority (low niceness) to ensure the sleeps won't overrun (much), and keep track of actual wall time and potentially adjust your sleep values down based on that feedback to avoid drift. You'll also need to make sure the timer the kernel uses for this on your hardware has a high enough resolution!
If you're very low on RAM and swapping heavily, you could run into problems with your program being paged out to disk. Also, if the kernel is doing other CPU-intensive work, this could introduce unacceptable delays (other, lower-priority user-space tasks should be OK). If keeping the frequency constant is critical, you're better off solving this in the kernel (or even running a realtime kernel).
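To keep the period from drifting, sleep to absolute deadlines rather than for relative intervals. A POSIX sketch (the priority value and the function name are arbitrary; GPIO access is elided):

#define _GNU_SOURCE
#include <time.h>
#include <sched.h>
#include <sys/mman.h>

void pwm_loop(long period_ns)
{
    struct sched_param sp = { .sched_priority = 50 };
    sched_setscheduler(0, SCHED_FIFO, &sp);   /* needs root / CAP_SYS_NICE */
    mlockall(MCL_CURRENT | MCL_FUTURE);       /* avoid being paged out */

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;)
    {
        /* ...set or clear GPIO pins for this time slot... */
        next.tv_nsec += period_ns;
        while (next.tv_nsec >= 1000000000L)   /* normalize the timespec */
        {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}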
Using a thread and sleeping on an OS that is not an RTOS is not going to produce very accurate or consistent results.
A better method is to use a timer interrupt and toggle the GPIO in the ISR. Unlike using a hardware PWM output on a hardware timer, this approach allows you to use a single timer for multiple signals and for other purposes. You will still probably see more jitter than with a hardware PWM, and the practical frequency range and pulse resolution will be much lower than is achievable in hardware, but at least the jitter will be on the order of microseconds rather than milliseconds.
If you have a timer, you can set it up to kick an interrupt each time a new PWM edge is required. With some clever coding, you can queue these up so the interrupt handler knows which of the many PWM channels to drive and whether a high- or low-going edge is required, and then schedules itself for the next required edge (see the sketch below).
If you have enough of these timers, then it's even easier, as you can allocate one per PWM channel.
On an embedded controller with a low-latency interrupt response, this can produce surprisingly good results.
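A sketch of that scheduling idea; every name here is illustrative rather than any particular vendor API:

#include <stdint.h>

#define MAX_EDGES 16

struct edge
{
    uint32_t when;       /* timer count at which this edge is due */
    uint8_t  channel;    /* which PWM output pin */
    uint8_t  level;      /* 1 = rising edge, 0 = falling edge */
};

static struct edge queue[MAX_EDGES];   /* kept sorted by 'when' */
static int head;

void set_gpio_pin(int channel, int level);     /* platform GPIO write */
void requeue_next_edge_for(uint8_t channel);   /* schedule that channel's next edge */
void timer_set_compare(uint32_t when);         /* program the compare register */

void timer_compare_isr(void)                   /* hooked to the timer's compare vector */
{
    struct edge *e = &queue[head];
    set_gpio_pin(e->channel, e->level);        /* emit the edge that is due */
    requeue_next_edge_for(e->channel);         /* insert that channel's next edge */
    head = (head + 1) % MAX_EDGES;
    timer_set_compare(queue[head].when);       /* arm for the following edge */
}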
I fail to understand why you would want to do PWM in software with all of the inherent timing jitter that interrupt servicing and software interactions will introduce (e.g. the PWM interrupt hits when interrupts are disabled, the processor is servicing a long uninterruptible instruction, or another service routine is active). Most modern microcontrollers (ARM-7, ARM Cortex-M, AVR32, MSP, ...) have timers that can either be configured to produce or are dedicated as PWM generators. These will produce multiple rock steady PWM signals that, once set up, require zero processor input to keep running. These PWM outputs can be configured so that two signals do not overlap or have simultaneous edges, as required by the application.
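For scale, setting up such a hardware channel is typically just a handful of register writes. A sketch for an STM32F1 timer (CMSIS names; the 72 MHz clock and the 100 Hz / 25% numbers are assumptions, and pin alternate-function setup is omitted):

#include "stm32f10x.h"                        /* assumed CMSIS device header */

void pwm_hw_init(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM3EN;       /* clock the timer peripheral */
    TIM3->PSC  = 7200 - 1;                    /* 72 MHz / 7200 = 10 kHz tick */
    TIM3->ARR  = 100 - 1;                     /* 10 kHz / 100 = 100 Hz period */
    TIM3->CCR1 = 25;                          /* 25% duty cycle */
    TIM3->CCMR1 |= TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1;  /* PWM mode 1 */
    TIM3->CCER |= TIM_CCER_CC1E;              /* enable the channel output */
    TIM3->CR1  |= TIM_CR1_CEN;                /* start the counter; no further
                                                 CPU involvement is needed */
}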
If you are relying on the OS sleep function to set the time between the PWM edges, the output will run slow: sleep sets only the minimum time between task activations, and the actual interval will be stretched by task switches, the presence of higher-priority threads, or other kernel functions running.
