Handling multiple interrupts with FreeRTOS on STM32

My MCU-based control system must check the status of 18 switch contacts very quickly. I will use an STM32F7 MCU, which has at most 16 external interrupt lines, so I decided to use an IO expander IC and divide the switches into groups. Now I have 12 external interrupts on MCU pins plus 2 more interrupts coming from the IO expander. In addition, FreeRTOS will run Ethernet, UART and CAN bus tasks for communications. The interrupts are very critical for the system: there may be only millisecond-scale differences between them, and I have to detect all pin states correctly. I need expert advice for this situation.
My questions are:
1. Is this a proper approach: using 14 external interrupts under FreeRTOS while it also handles multiple communication tasks?
2. Is there a better way to do it?

Using an IO expander seems like the wrong approach to your problem (additional complexity and cost). You don't have to assign a dedicated ISR to each pin: just read the GPIOx_IDR register after any GPIO interrupt, then check the relevant bits (see the STM32 reference manual).
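A minimal sketch of that idea, assuming CMSIS register names on an STM32F7 and that the switches share one port (GPIOD and pin 12 are placeholders for your actual wiring):

#include "stm32f7xx.h"              /* CMSIS device header, assumed available */

/* One ISR covering EXTI lines 10..15 samples the whole port in a single
 * read instead of using a dedicated handler per pin. */
void EXTI15_10_IRQHandler(void)
{
    uint32_t pending = EXTI->PR & 0xFC00u;  /* which of lines 10..15 fired */
    EXTI->PR = pending;                     /* writing 1 clears the flags  */

    uint32_t pins = GPIOD->IDR;             /* snapshot of all 16 pin states */
    if (pins & (1u << 12)) {
        /* switch on pin 12 is high; record or queue its state here */
    }
}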

Related

How reliable is DMA to GPIO on STM32 MCUs?

ST has some application notes that talk about emulating a parallel bus using DMA to GPIO. I appreciate that, but they don't answer some important questions. I am looking through the reference manual, and I can't seem to find clarification on the things I am concerned about.
I am most concerned about jitter. The reference manual repeatedly states that when DMA is triggered (e.g., by a timer), the DMA controller reads the memory and transfers the value to the peripheral. That might be fine with peripherals that have their own FIFO: when space becomes available in the FIFO, DMA is triggered and refills it, probably before the FIFO runs empty.
But with GPIO, if the DMA channel doesn't have a FIFO of its own, the data will not be ready when the timer triggers and will need to be fetched from SRAM. So between the timer triggering and the value actually arriving in the GPIO output register, some time may pass. This might be measurable when comparing the clock output by the timer with the GPIO pins. The DMA controller has to compete with the running program for access to the SRAM, so certain activities by the program may increase the jitter.
Maybe that is a colossal oversight on my part, but ST's reference manual doesn't seem to mention a FIFO as part of the DMA. If that is the case, it would result in jitter that may impact performance at higher frequencies.
I need to toggle 3 to 4 pins synchronously to a clock from 100 kHz to 1 MHz. I am considering DMA to GPIO and also abusing a QuadSPI controller. I am currently testing on an STM32L4, but I'm also considering the STM32F4 or even F1.
DMA to/from GPIO is just a memory-to-memory transfer. Many STM32 MCUs have built-in DMA FIFOs, but they are of no use here.
The core always has priority over the DMA, so if contention could be an issue (very unlikely), place the core-accessible data (the data the MCU will access while the DMA is active) in a separate memory area, for example CCM (if your MCU has one).
Answering the question: memory to/from GPIO is very reliable; I personally have not had any problems with it.
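If contention ever does become an issue, a minimal sketch of the CCM placement, assuming your linker script defines a .ccmram output section (the stock STM32F4 scripts often do):

/* Place core-private data in CCM so the core doesn't contend with the
 * DMA for the main SRAM. Note that CCM itself is not reachable by the
 * DMA, so keep DMA buffers out of it. */
static uint8_t hot_scratch[1024] __attribute__((section(".ccmram")));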
If your clock can be anything between 100 kHz and 1 MHz, I guess you're not worried about jitter in the clock itself, only about jitter of the data relative to the clock. If your clock need not be continuous, a neat idea is to preprocess the data so that the clock signal is included as part of the GPIO data. Then you can trigger the DMA at regular intervals using a timer, and you'll get the data on the bus at half that rate, with perfect alignment between clock and data.
So if you want to send the four-bit data 5 6 B D with data valid on the positive clock edge, prepare the DMA buffer as 05 15 06 16 0B 1B 0D 1D and connect GPIO pin 4 as the clock. Leave a final byte in the buffer to reset the clock/bus to the idle state, if you need to.
You can of course extend the idea and incorporate control signals such as chip selects and tri-state signals for external buffers, if needed.
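A small sketch of that preprocessing, assuming the timer-triggered DMA writes the low byte of GPIOx_ODR and pin 4 carries the clock (the function name and buffer layout are placeholders):

#include <stdint.h>
#include <stddef.h>

/* Interleave each 4-bit data value with clock-low and clock-high copies,
 * so clock and data come out perfectly aligned from one DMA stream. */
#define CLK_PIN (1u << 4)

static size_t build_buffer(const uint8_t *nibbles, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i++) {
        out[o++] = nibbles[i] & 0x0Fu;              /* data, clock low  */
        out[o++] = (nibbles[i] & 0x0Fu) | CLK_PIN;  /* data, clock high */
    }
    out[o++] = 0;   /* final idle entry: bus and clock low */
    return o;
}

Feeding {0x5, 0x6, 0xB, 0xD} through this produces exactly the 05 15 06 16 0B 1B 0D 1D sequence from the example, plus the trailing idle byte.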
Also note that not all DMA blocks may have access to the AHB bus that holds the GPIO registers. For example, on the STM32F40x only DMA2 can be used (this is what got me, until I read this answer: https://stackoverflow.com/a/46619315/6552613).
I haven't fully explored this space yet, but disabling interrupts and polling for interrupt flags in my main loop made the jitter on my GPIO DMA basically disappear! Granted, it might just be the particular set of interrupts I had enabled, but everything down to the SysTick timer was killing me. Polling the interrupt flags in the main loop seems to have fixed my issue.
Note that this is on an STM32F042, and I never exceed 6 MHz for my period. When I try to go higher, e.g. to 8 MHz sampling out, everything falls apart. YMMV.
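For illustration, a rough sketch of that polling pattern, assuming an STM32F0-class part with the CMSIS headers; handle_byte() and tick_ms are placeholders:

#include "stm32f0xx.h"   /* CMSIS device header, assumed available */

extern void handle_byte(uint8_t b);
static volatile uint32_t tick_ms;

void main_loop(void)
{
    __disable_irq();                              /* mask all interrupts */
    for (;;) {
        if (USART1->ISR & USART_ISR_RXNE)         /* poll the UART RX flag */
            handle_byte((uint8_t)USART1->RDR);    /* reading RDR clears RXNE */
        if (SysTick->CTRL & SysTick_CTRL_COUNTFLAG_Msk)
            tick_ms++;                            /* poll SysTick instead of its ISR */
    }
}

With IRQs masked, nothing preempts the timer-triggered GPIO DMA, which is presumably why the jitter disappeared.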

Can I disable Interrupts on a BBB for a short duration (0.5ms)?

I am trying to write a small driver program on a BeagleBone Black that needs to send a signal with strict timing requirements.
I need to send 360 bits of information. I'm wondering if I can turn off all interrupts on the board for a duration of 500 µs while I send the signal. I have no idea whether I can just turn off all the interrupts like that, and searches have been unkind to me so far. Any ideas how I might achieve this? I do have some prototypes in assembly language for the signal, but I'm pretty sure it's being broken by interrupts.
So for example, I'm hoping I could have something like this:
disable_irq();
/* asm code to send my bytes */
reenable_irq();
What would the bodies of disable_irq() and reenable_irq() look like?
The calls you would want to use are local_irq_disable() and local_irq_enable() to disable & enable IRQs locally on the current CPU. This also has the effect of disabling all preemption on the CPU.
Now let's talk about your general approach. If I understand you correctly, you'd like to bit-bang your protocol over a GPIO with timing accurate to < 1/3 µs.
This will be a challenge. Tests show that the BeagleBone Black GPIO toggle frequency maxes out at ~2.78 MHz when writing directly to the SoC IO registers in kernel mode (~0.18 µs minimum pulse width).
So, although this might be achievable by the thinnest of margins by writing atomic code in kernel space, I propose another concept:
Implement your custom serial protocol on the SPI bus.
Why?
The SPI bus can be clocked up to 48 MHz on the BeagleBone Black, it's buffered, and it can be used with the DMA engine. Therefore you don't have to worry about disabling interrupts and monopolizing your CPU for this one interface. With a timing resolution of ~0.021 µs (@ 48 MHz), you should be able to achieve your timing needs with an acceptable margin of error.
With the bus configured for Single Channel Continuous Transfer Transmit-Only Master mode and a 30-bit word length (two 30-bit words for each bit of your protocol):
To write a '0' with your protocol, you'd write the 2-word sequence, 17 '1's followed by 43 '0's, on SPI (@ 48 MHz).
To write a '1' with your protocol, you'd write the 2-word sequence, 43 '1's followed by 17 '0's, on SPI (@ 48 MHz).
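As an untested sketch of that encoding, building the two 30-bit words for one protocol bit; how the controller expects the 30-bit words aligned inside 32-bit registers is an assumption (right-aligned here):

#include <stdint.h>

/* One protocol bit becomes 60 SPI clocks (two 30-bit words) @ 48 MHz:
 * a '1' is 43 high bits then 17 low, a '0' is 17 high then 43 low. */
static void encode_bit(int bit, uint32_t words[2])
{
    int high = bit ? 43 : 17;              /* count of leading '1' SPI bits */
    uint64_t pattern = 0;
    for (int i = 0; i < high; i++)
        pattern |= 1ULL << (59 - i);       /* MSB-first within the 60 bits */
    words[0] = (uint32_t)(pattern >> 30) & 0x3FFFFFFFu;  /* first 30 bits */
    words[1] = (uint32_t)pattern         & 0x3FFFFFFFu;  /* last 30 bits  */
}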
From your signal timings it's easy to figure out that SPI or another serial peripheral cannot meet your requirement: in your timings, the encoding is based on the width of the pulse. So let's get to the point:
Q1: Could you turn off all interrupts for a duration of 500 µs?
A: 0.5 ms is quite a long time in an embedded system. Interrupts exist to enable multi-task concurrency and to improve real-time responsiveness; keep in mind that ISRs and context switches (on some chip architectures) are all affected by the global interrupt state.
But if your top priority is to meet the timings, and the real-time windows of the other tasks can tolerate it, of course you can disable the global interrupt for that duration, or even longer. If not, don't keep an atomic section open for that long.
Q2: How?
A: For any given chip, there are undoubtedly assembly instructions to enable/disable the global interrupt. Find those instructions or the APIs provided by your OS, and follow the three steps below (pseudocode):
state_t tState = get_interrupt_status( );
disable_interrupt( );
... /*your operation here*/
resume_interrupt( tState );
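On Linux, as in the answer above, those three steps map onto the kernel's local IRQ API; a minimal sketch for kernel-context code:

#include <linux/irqflags.h>

static void send_bits_atomically(void)
{
    unsigned long flags;

    local_irq_save(flags);      /* save state and disable IRQs on this CPU */
    /* ... time-critical bit-banging here ... */
    local_irq_restore(flags);   /* restore the previous interrupt state */
}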

Implementing a non-standard SPI variation on ARM Cortex M3

I need to create a driver for a flash memory chip connected to an STM32 Cortex-M3 MCU. The chip is controlled via an SPI bus. I intended to use the MCU's integrated SPI peripheral, but unfortunately it only supports 8- or 16-bit data packets while the flash chip's commands are 14 bits long. Thus, I have to implement the protocol from scratch using GPIOs. My question is: what is the right way to ensure correct signal timings? I am currently thinking of inserting delays between asserting and deasserting GPIO lines with interrupts disabled, but that seems fairly unreliable to me. Are there any better methods?
Jeb's answer describes the preferred method: use the hardware SPI if possible, and DMA is a nice option as well.
If for some reason you find that you cannot use the hardware SPI and must implement it by "bit-banging" GPIOs, you should check what options are available in the MCU's timer/PWM hardware. You cannot and should not use blunt "hobbyist burn-away delays" as in the link you posted; the real-time performance will be crap and you will occupy the CPU 100%.
Most MCU timers come with a pin-output feature that lets a pin change state when the timer elapses. The pseudo-code would then be:
1. Determine whether the next bit to send is 1 or 0.
2. Set the MCU's polarity register accordingly, so that the hardware will switch the pin to a high or low level.
3. When the timer elapses, set the polarity again for the next bit, most likely from an interrupt. How to do this is very hardware-dependent.
At the same time as you bit-bang the data (MOSI), you also need to generate the clock and the chip select. The clock can be generated in the same way as the data, or possibly through a PWM signal if that option is available. Chip select is the easiest part, as you only need to pull a pin low during the data transmission.
Finally, there is most likely an application note or official example of how to write a software SPI for your particular MCU.
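To make the timer-driven approach concrete, here is a hedged sketch for a 14-bit frame in an SPI mode-0 style; gpio_write(), timer_start()/timer_stop() and the PIN_* identifiers are placeholders to map onto your MCU's HAL, and the timer is assumed to fire at twice the desired SPI clock rate:

#include <stdint.h>

#define FRAME_BITS 14
enum { PIN_MOSI, PIN_SCK, PIN_CS };      /* placeholder pin identifiers */

extern void gpio_write(int pin, int level);
extern void timer_start(void);
extern void timer_stop(void);

static volatile uint16_t tx_frame;
static volatile int half_ticks;          /* counts 2*FRAME_BITS half-periods */

void spi_sw_send(uint16_t frame)
{
    tx_frame   = frame;
    half_ticks = 0;
    gpio_write(PIN_CS, 0);               /* assert chip select */
    timer_start();                       /* ISR below fires every half period */
}

void timer_isr(void)                     /* hook to the timer interrupt */
{
    if (half_ticks >= 2 * FRAME_BITS) {  /* frame done: idle the bus */
        gpio_write(PIN_SCK, 0);
        gpio_write(PIN_CS, 1);
        timer_stop();
        return;
    }
    if ((half_ticks & 1) == 0) {         /* even tick: data out, clock low */
        int bit = (tx_frame >> (FRAME_BITS - 1 - half_ticks / 2)) & 1;
        gpio_write(PIN_MOSI, bit);
        gpio_write(PIN_SCK, 0);
    } else {
        gpio_write(PIN_SCK, 1);          /* odd tick: rising edge latches data */
    }
    half_ticks++;
}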
I would recommend using the built-in SPI and DMA if possible!
You could remap your data into an array of bytes whose length in bits is a multiple of 14.
The least common multiple of 14 and 8 bits is 56 bits, so you have to send a multiple of 7 bytes (= 4 frames of 14 bits) each time.
Then you can use the standard SPI with the 8-bit frame size.
This should be much faster with SPI/DMA than bit-banging the GPIOs.
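A small sketch of that remapping, packing four 14-bit frames MSB-first into 7 bytes for the ordinary 8-bit SPI/DMA path (the function name is a placeholder):

#include <stdint.h>

static void pack_frames(const uint16_t frames[4], uint8_t out[7])
{
    uint64_t bits = 0;
    for (int i = 0; i < 4; i++)
        bits = (bits << 14) | (frames[i] & 0x3FFFu);  /* append 14 bits   */
    for (int i = 0; i < 7; i++)
        out[i] = (uint8_t)(bits >> (8 * (6 - i)));    /* split into bytes */
}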
Some devices that use unusual data lengths are designed so that, at the start of a transaction, they will ignore either all "0" bits clocked in before the first "1", or all "1" bits clocked in before the first "0". If your device happens to be designed in such a fashion, you may be able to use the 8- or 16-bit SPI mode by clocking out two "junk" bits along with the 14 bits of interest.

Configuring UART as FIQ

I am working on an LPC2468 and using UART0 of the controller for communication with a SIM300 GPRS module. Sometimes when I send a command to read the SIM's signal strength, the input I receive is not correct. After looking into the problem, I found that sometimes, while the UART is receiving data, the timer interrupt fires and the software jumps to the timer handler; during that time some bytes sent by the module get missed. To prevent this I want to configure UART0 as FIQ, i.e. the interrupt with the highest priority. Can I configure UART0 as FIQ? If yes, how?
From the LPC2468 data sheet:
The ARM processor core has two interrupt inputs called Interrupt ReQuest (IRQ) and Fast Interrupt ReQuest (FIQ). The VIC takes 32 interrupt request inputs which can be programmed as FIQ or vectored IRQ types. The programmable assignment scheme means that priorities of interrupts from the various peripherals can be dynamically assigned and adjusted.
So you need to find out where the programmable registers of the interrupt controller are and change the interrupt type of UART0 to FIQ.
If you have simulation support, then see this to learn how to change interrupt types and priorities.
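For illustration, a hedged sketch for the LPC2468 VIC, where each bit of VICIntSelect selects FIQ (1) or vectored IRQ (0) for the corresponding source; the UART0 channel number (6 here) and the register address should be verified against the VIC chapter of the user manual:

#include <stdint.h>

#define VICINTSELECT  (*(volatile uint32_t *)0xFFFFF00CuL)  /* VIC base + 0x00C */
#define VIC_CH_UART0  6                                     /* assumed channel  */

void uart0_as_fiq(void)
{
    VICINTSELECT |= (1u << VIC_CH_UART0);   /* route UART0 as FIQ */
}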

How to transfer data from IO to memory on an ARM9 S3C2440, with or without DMA

I want to transfer 8-bit parallel data from IO to memory. The data is arriving very fast, at roughly 5 MHz; it is a video signal coming from an ADC. I am using embedded Linux on an ARM9-based kit by FriendlyARM that uses the S3C2440 (400 MHz) processor. Can anybody please tell me where to start?
I have read on the internet that I can do this using DMA, but I need a starting point ...
Forget about DMA on this device. The ADC is not available as a DMA source. One reason for this is that DMA is only useful for transferring multiple bytes/words/whatever; the overhead of setting up and starting the DMA and handling an on-completion interrupt makes it pointless for occasional transfers of a single item. Your ADC has no buffering, just the one output register with 10 significant bits.
Use an FIQ handler to extract the ADC result. How you buffer the output and signal it for further processing is up to you and the Linux driver framework.
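As a very rough skeleton of claiming the FIQ from an ARM Linux driver (older ARM FIQ API; verify against your kernel version): the handler itself must be a short assembly routine copied into the FIQ vector, and my_fiq_start/my_fiq_end and IRQ_ADC are placeholders.

#include <linux/errno.h>
#include <asm/fiq.h>

extern unsigned char my_fiq_start, my_fiq_end;  /* bounds of the asm handler */

static struct fiq_handler fh = { .name = "adc-fiq" };

static int adc_fiq_setup(void)
{
    if (claim_fiq(&fh))                       /* reserve the FIQ */
        return -EBUSY;
    set_fiq_handler(&my_fiq_start,            /* copy asm handler into place */
                    &my_fiq_end - &my_fiq_start);
    enable_fiq(IRQ_ADC);                      /* route the ADC interrupt as FIQ */
    return 0;
}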
Have a look at these articles for a brief overview of the theory:
http://my.opera.com/richasn/blog/2011/01/15/application-of-dma-way-in-data-acquisition-in-arm-system
http://my.opera.com/richasn/blog/2011/01/14/application-of-dma-way-in-data-acquisition-in-arm-system
