Read non-conventional ADC with STM32F3 (C)

I'm attempting to interface an STM32F303 Nucleo with an AD7768-4 ADC. Datasheet for the ADC:
https://www.analog.com/media/en/technical-documentation/data-sheets/ad7768-7768-4.pdf
The issue is, the ADC DOES NOT output the converted value through the SPI port, but rather employs a Data Ready signal (DRDY), a Data Clock (DCLK), and a combination of 4 Data Outputs (DOUT0-DOUT3). The output streams 96 bits serially through one wire if I set it up that way, but timing is critical in my application and I need to clock the data in using DOUT0 to DOUT2, each of which would output 32 bits. If I were serially streaming the data, I could trick the SPI port into reading it, but I'm not. The ADC is running at 20 MHz, so DCLK will be operating at the same frequency. The Nucleo runs at a maximum of 72 MHz, but when the DMA is utilized, it sets the clock to 64 MHz.
In the STM reference manual, the "GPIO port input data register (GPIOx_IDR) (x = A..H)" is described as a read-only register - my understanding is that its lower 16 bits hold the input value of the port's 16 pins (most likely intended for memory-mapped reads) - so the question is, how can I configure the GPIO to read in the data? I'm at a slight impasse here. My instinct tells me that the Nucleo may not be fast enough to read the data coming from the ADC... Any ideas? It's all being written in C/C++, basically bare metal... I'm new to the Nucleo and haven't written code in 4 years - pardon any lapse in knowledge...

If DCLK runs at 20 MHz, the uC is obviously not fast enough (you have about 3 instructions between clock edges, so even assembly language would be difficult to implement...). As I am not familiar with the STM32 architecture, I can only suggest a trick that may spark some ideas in your head. Rather than using a crystal for the ADC, use a timer on the STM32 that is connected to an output pin, and clock the ADC from that pin (MCLK). While configuring the ADC over SPI, sitting in idle mode, etc., you can leave this clock signal at 20 MHz. But when you need a sample from the ADC, stop the STM32 timer and clock the ADC "manually" (you effectively control the DCLK signal). After your conversion routine is over, restart the timer at 20 MHz.
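Here is a minimal sketch of the "manual clocking" part, assuming MCLK is normally generated on PA8 by TIM1 and that DOUT0-DOUT2 and DRDY are wired to PB0-PB3 - all pin and timer assignments are illustrative, not from the question. Also note that how many MCLK pulses correspond to one data bit depends on the ADC's DCLK divider setting, so treat the one-pulse-per-bit loop below as a placeholder:

/* Sketch only: hypothetical pin mapping (MCLK = PA8 via TIM1, DOUT0..2 = PB0..2,
 * DRDY = PB3). CMSIS-style register access for an STM32F3. */
#include "stm32f3xx.h"

static uint32_t word0, word1, word2;            /* 32 bits from each DOUT line */

void read_adc_sample(void)
{
    TIM1->CR1 &= ~TIM_CR1_CEN;                  /* stop the 20 MHz MCLK timer */

    /* switch PA8 from alternate function (timer output) to plain GPIO output */
    GPIOA->MODER = (GPIOA->MODER & ~(3u << (8 * 2))) | (1u << (8 * 2));

    while (GPIOB->IDR & (1u << 3)) { }          /* wait for DRDY (assumed active low) */

    word0 = word1 = word2 = 0;
    for (int i = 0; i < 32; i++) {
        GPIOA->BSRR = 1u << 8;                  /* MCLK high                           */
        GPIOA->BSRR = 1u << (8 + 16);           /* MCLK low                            */
        uint32_t idr = GPIOB->IDR;              /* sample all three DOUT pins at once  */
        word0 = (word0 << 1) | ((idr >> 0) & 1u);
        word1 = (word1 << 1) | ((idr >> 1) & 1u);
        word2 = (word2 << 1) | ((idr >> 2) & 1u);
    }

    /* hand PA8 back to the timer and restart MCLK at 20 MHz */
    GPIOA->MODER = (GPIOA->MODER & ~(3u << (8 * 2))) | (2u << (8 * 2));
    TIM1->CR1 |= TIM_CR1_CEN;
}

The point of the trick is that, because you now control the clock, this loop can run as slowly as the MCU needs; the 20 MHz rate only applies while the timer is driving MCLK.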

Related

Implementing an SSI slave interface on STM32 Board

I am trying to implement an SSI slave protocol on an STM32 board. Since the STM32 boards don't have an SSI interface, I used the SPI interface in slave (transmit-only) mode. The SSI master sends 24 clock pulses and the slave reacts by sending its data (3 bytes) over the MISO pin. The problem I am facing is that the data is shifted one bit to the left on every transmission from the master. For example, assuming I am constantly sending 0x010101 from the slave:
At first transmission the master receives 0x010101
At Second transmission the master receives 0x020202
At third transmission the master receives 0x040404
Can someone please give me some hints on how to solve this problem?
The data-shift with each transmission can happen when the SPI slave recognizes an (unexpected) additional clock pulse. Looking at the SSI protocol description on Wikipedia this actually makes sense:
In order to transmit N bits of data the master emits N clock cycles, followed by another clock pulse to signal the end of the transfer (so-called "Monoflop Time" - referring to the original hardware implementation of the SSI interface). Since the SPI protocol / SPI slave does not know about this additional clock pulse, it begins to output the first bit of the next data byte, which is in turn not recognized by the SSI master. As a result this leads to a shift in the data bits recognized by the SSI master on the next SSI frame.
Unfortunately, it is not easy to handle the monoflop time correctly with the SPI slave. In order to deal with the additional clock pulse, we could try to set the SPI frame size to 25 bits on the slave side. Since the STM32 hardware only supports SPI frame sizes between 4 and 16 bits, the only choice is to set it to 5 bits. This is not very convenient, since we need to convert the 3-byte (24-bit) output data into five blocks of 5 bits (24 bits of output data + 1 dummy bit), but it should work for a "normal" transfer.
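As a rough illustration, the 24-bit output value plus one dummy bit can be pre-split into five 5-bit frames for the slave's transmit buffer - a sketch assuming MSB-first transmission, not tied to a particular HAL:

#include <stdint.h>

/* Split a 24-bit value into five 5-bit SPI frames (24 data bits + 1 dummy bit),
 * MSB first, so one 25-pulse SSI frame consumes exactly five slave frames. */
void pack_ssi_frames(uint32_t value24, uint16_t frames[5])
{
    uint32_t v = (value24 & 0xFFFFFFu) << 1;        /* append 1 dummy bit -> 25 bits */
    for (int i = 0; i < 5; i++)
        frames[i] = (v >> (20 - 5 * i)) & 0x1Fu;    /* take 5 bits at a time, MSB first */
}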
Things get more complicated, though, if we also want to handle the cases "multiple transmissions" and "interrupted transmission" correctly. We need to monitor the clock signal to be able to detect the monoflop timeout. This can be done using an STM32 hardware timer with an external trigger. When the timer expires, we need to reset the SPI unit (in order to handle an interrupted transmission) and update the output value. This "simple" task can be quite challenging since it requires a couple of instructions - requiring a fast MCU depending on the SSI clock frequency.
Alternatively the SSI protocol can be implemented using a software-only "bit banging" solution. But this requires a fast MCU as well in order to handle a fast SSI clock correctly.
IMHO the best solution is to use a small (inexpensive) FPGA to implement the SSI slave and let the MCU feed it with data over a traditional SPI interface.

How reliable is DMA to GPIO on STM32 MCUs?

ST has some application notes that talk about emulating a parallel bus using DMA to GPIO. I appreciate that, but it doesn't answer important questions. I am looking through the reference manual, and I can't seem to find clarification on the things that I am concerned about.
I am most concerned about the jitter. The reference manual repeatedly states that when DMA is triggered (e.g., by a timer), the DMA controller will read the memory and transfer the value to the peripheral. That might be fine with peripherals that have their own FIFO. There, when space is available in the FIFO, DMA is triggered and fills the FIFO. That will probably happen before the FIFO runs empty.
But with GPIO, if the DMA channel doesn't have a FIFO itself, the data will not be ready when the timer triggers and it needs to be fetched from SRAM. So between the timer triggering and the value actually arriving in the GPIO output register, some time may pass. This might be measurable when looking at the clock output by the timer and the GPIO pins. The DMA controller has to compete for access to the SRAM with the running program, so certain activities by the program may increase the jitter.
Maybe that is a colossal oversight on my part, but ST's reference manual doesn't seem to mention a FIFO as part of the DMA. If that is the case, that would result in jitter which may impact performance at higher frequencies.
I need to toggle 3 to 4 pins synchronously with a clock from 100 kHz to 1 MHz. I am considering DMA to GPIO and also abusing a QuadSPI controller. I am currently testing on an STM32L4 but I'm also considering STM32F4 or even F1.
DMA to/from GPIO is just a memory-to-memory transfer. Many STM32 MCUs have built-in DMA FIFOs, but they are of no use here.
The core always has priority over the DMA, so if that ever becomes an issue (very unlikely), place the data the core accesses while the DMA is active in a separate memory area, for example CCM RAM (if your MCU has one).
Answering the question:
Memory to/from GPIO is very reliable - I personally have not had any problems with it.
If your clock can be anything between 100 kHz and 1 MHz, I guess you're not worried about jitter in the clock itself, only jitter in the data versus the clock. If your clock need not be continuous, a novel idea then is to do some preprocessing of the data to include the clock signal as part of the GPIO data. Then you could trigger the DMA at regular intervals using a timer, and you'll get the data frequency on the bus at half that rate with perfect alignment between clock and data.
So if you want to send the four-bit data 5 6 B D with data valid on the positive clock edge, prepare the DMA buffer as so: 05 15 06 16 0B 1B 0D 1D and connect GPIO pin 4 as the clock. Leave a final byte in the buffer to reset the clock/bus to the idle state, if you need.
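A minimal sketch of that preprocessing step, assuming the four data bits sit on GPIO pins 0-3 and the clock on pin 4 (the pin mapping is only for illustration):

#include <stdint.h>
#include <stddef.h>

/* Each data nibble becomes two buffer entries: data with clock low, then the
 * same data with clock high, so the data is valid on the rising clock edge. */
size_t build_dma_buffer(const uint8_t *nibbles, size_t n, uint8_t *buf)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t d = nibbles[i] & 0x0Fu;
        buf[k++] = d;           /* e.g. 0x05: data = 5, clock low  */
        buf[k++] = d | 0x10u;   /* e.g. 0x15: data = 5, clock high */
    }
    buf[k++] = 0x00u;           /* final entry returns clock/bus to idle */
    return k;
}

The timer-triggered DMA then simply writes this buffer, entry by entry, to the low byte of the port's output data register.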
You can of course extend the idea and incorporate control signals such as chip selects and tri-state signals for external buffers, if needed.
Also take note that not all DMA controllers have access to the AHB bus that holds the GPIO registers. For example on STM32F40x, only DMA2 can be used (this is what got me, until I read this answer https://stackoverflow.com/a/46619315/6552613).
I haven't fully explored this space yet, but by disabling interrupts and polling for interrupt flags in my main loop, the jitter on my GPIO DMA has basically disappeared! Granted, it might just be the particular set of interrupts I had enabled, but everything down to the SysTick timer was killing me. Polling the interrupts in the main loop seems to have fixed my issue.
Note that this is on an STM32F042, and I never exceed 6 MHz for my period. When I try to go faster, e.g. to 8 MHz sample output, everything falls apart. YMMV

Can I disable Interrupts on a BBB for a short duration (0.5ms)?

I am trying to write a small driver program on a Beaglebone Black that needs to send a signal with timings like this:
I need to send 360 bits of information. I'm wondering if I can turn off all interrupts on the board for a duration of 500 µs while I send the signal. I have no idea if I can just turn off all the interrupts like that; searches have been unkind to me so far. Any ideas how I might achieve this? I do have some prototypes in assembly language for the signal, but I'm pretty sure it's being broken by interrupts.
So for example, I'm hoping I could have something like this:
disable_irq();
/* asm code to send my bytes */
reenable_irq();
What would the bodies of disable_irq() and reenable_irq() look like?
The calls you would want to use are local_irq_disable() and local_irq_enable() to disable & enable IRQs locally on the current CPU. This also has the effect of disabling all preemption on the CPU.
Now let's talk about your general approach. If I understand you correctly, you'd like to bit bang your protocol over a GPIO with timing accurate to < 1/3 µs.
This will be a challenge. Tests show that the BeagleBone Black GPIO toggle frequency maxes out at ~2.78 MHz when writing directly to the SoC IO registers in kernel mode (~0.18 µs minimum pulse width).
So, although this might be achievable by the thinnest of margins by writing atomic code in kernel space, I propose another concept:
Implement your custom serial protocol on the SPI bus.
Why?
The SPI bus can be clocked at up to 48 MHz on the BeagleBone Black, it's buffered, and it can be used with the DMA engine. Therefore, you don't have to worry about disabling interrupts and monopolizing your CPU for this one interface. With a timing resolution of ~0.021 µs (@ 48 MHz), you should be able to achieve your timing needs with an acceptable margin of error.
With the bus configured for Single Channel Continuous Transfer Transmit-Only Master mode and 30-bit word length (two 30-bit words for each bit of your protocol):
To write a '0' with your protocol, you'd write the 2-word sequence - 17 '1's followed by 43 '0's - on SPI (@ 48 MHz).
To write a '1' with your protocol, you'd write the 2-word sequence - 43 '1's followed by 17 '0's - on SPI (@ 48 MHz).
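For illustration, the encoding of one protocol bit into the two 30-bit words could look like this (MSB-first shifting assumed; the exact packing depends on how the SPI controller shifts words out):

#include <stdint.h>

/* Build the two 30-bit words for one protocol bit: a '1' is 43 ones followed
 * by 17 zeros, a '0' is 17 ones followed by 43 zeros, at 48 MHz. */
static void encode_bit(int bit, uint32_t words[2])
{
    int ones = bit ? 43 : 17;                 /* number of leading '1' clocks */
    uint64_t pattern = 0;
    for (int i = 0; i < ones; i++)
        pattern |= 1ull << (59 - i);          /* MSB-first within the 60 bits */
    words[0] = (uint32_t)(pattern >> 30) & 0x3FFFFFFFu;   /* first 30 bits  */
    words[1] = (uint32_t)(pattern)       & 0x3FFFFFFFu;   /* second 30 bits */
}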
From your signal timings it's easy to see that SPI or another serial peripheral cannot meet your demand: in your timings, the encoding is based on the width of the pulse. So let's get to the point:
Q1: Could you turn off all interrupts for a duration of 500 µs?
A: 0.5 ms is quite a long time in an embedded system. ISRs exist to enable the concurrency of multiple tasks and to improve real-time behaviour. You should keep in mind that ISRs and context switches (on some chip architectures) are all affected by the global interrupt enable.
But if your top priority is to meet the timings, and the real-time window of the other tasks is acceptable, of course you can disable global interrupts for that duration, or even longer. If not, don't perform an atomic operation over such a long time.
Q2: How?
A: For a given chip, there are undoubtedly assembly instructions to disable/enable the global interrupt. Find those instructions, or the APIs provided by your OS, and do the 3 steps below (pseudocode):
state_t tState = get_interrupt_status( );
disable_interrupt( );
... /*your operation here*/
resume_interrupt( tState );
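In a Linux kernel driver (which is what the BeagleBone question above is about), those three steps map directly onto local_irq_save() / local_irq_restore(), which also keep the previous interrupt state for you; a minimal sketch:

#include <linux/irqflags.h>

static void send_bits_atomically(void)
{
    unsigned long flags;

    local_irq_save(flags);      /* disable IRQs on this CPU and remember the old state */
    /* ... ~500 us of time-critical bit banging ... */
    local_irq_restore(flags);   /* put the interrupt state back exactly as it was      */
}

Note this only masks interrupts on the current CPU; it does not stop DMA or other bus masters.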

Implementing a non-standard SPI variation on ARM Cortex M3

I need to create a driver for a flash memory chip connected to an STM32 Cortex M3 MCU. The chip is controlled via an SPI bus. I intended to use the integrated SPI peripheral of the MCU, but unfortunately it only supports 8- or 16-bit data frames while the flash chip commands are 14 bits long. Thus, I have to implement the protocol from scratch using GPIOs. My question is: what is the right way to ensure correct timing of the signals? I currently think of inserting delays between asserting and deasserting the GPIO lines with interrupts disabled, but it seems fairly unreliable to me. Are there any better methods?
Jeb's answer is the preferred method and you should use the hardware SPI if possible, and if DMA is an option that is nice as well.
If you for some reason find out that you cannot use the hardware SPI, but that you must implement it using "bit-banging" over GPIO, you should check what options there are available in the timer/PWM hardware on the MCU. You cannot and should not use blunt "hobbyist burn-away delays" as in the link you posted, the real-time performance will be crap and you will occupy the CPU 100%.
Most MCU timers come with a pin output feature, that would allow a pin to change state when the timer elapses. The pseudo code would then be:
1. Determine if the next bit to send is 1 or 0.
2. Set the MCU polarity register accordingly, so that it will switch the pin to a high or low level.
3. When the timer elapses, you need to set the polarity once again, likely through an interrupt. How to do this is very hardware-dependent.
At the same time as you bit-bang the data (MOSI), you also need to generate the clock and chip select. The clock can be generated in the same way as the data, or possibly through a PWM signal if that option is available. Chip select is the easiest part as you only need to pull a pin low during the data transmission.
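A very rough sketch of the interrupt-driven idea - note that this variant simply writes MOSI/SCK from the ISR via BSRR instead of using the timer's forced-output pin feature described above, and the pin choices (MOSI = PA7, SCK = PA5), the timer (TIM2) and the bit order are all illustrative assumptions:

#include "stm32f10x.h"
#include <stdint.h>

static volatile uint16_t tx_word;       /* 14-bit command, shifted out MSB first */
static volatile int      half_periods;  /* remaining half bit periods (2 * 14)   */

/* Timer update interrupt, fired once per half SCK period. Chip select and
 * pin/timer setup are assumed to be handled elsewhere. */
void TIM2_IRQHandler(void)
{
    TIM2->SR &= ~TIM_SR_UIF;                           /* clear the update flag      */

    if ((half_periods & 1) == 0) {                     /* first half: set up data    */
        GPIOA->BSRR = (tx_word & 0x2000) ? (1u << 7)   /* MOSI = current MSB         */
                                         : (1u << (7 + 16));
        GPIOA->BSRR = 1u << (5 + 16);                  /* SCK low                    */
        tx_word <<= 1;
    } else {
        GPIOA->BSRR = 1u << 5;                         /* SCK high: slave samples    */
    }

    if (--half_periods == 0)
        TIM2->CR1 &= ~TIM_CR1_CEN;                     /* 14 bits sent, stop timer   */
}

void spi14_send(uint16_t cmd14)
{
    tx_word      = cmd14 & 0x3FFFu;
    half_periods = 2 * 14;
    TIM2->CNT    = 0;
    TIM2->CR1   |= TIM_CR1_CEN;                        /* the ISR does the rest      */
}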
Finally, there is most likely some application note or official example over how to write a software SPI for your particular MCU.
I would recommend using the built-in SPI and DMA if possible!
You could remap your data into an array of bytes whose size is a multiple of 14 bits.
Four 14-bit words are 56 bits = 7 bytes, so you have to send a multiple of 7 bytes each time.
Then you can use the standard SPI with an 8-bit frame size.
This should be much faster with SPI/DMA than bit-banging the GPIOs.
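For example, the repacking could look like this - a sketch assuming MSB-first transmission; four 14-bit words are exactly 56 bits = 7 bytes:

#include <stdint.h>

/* Concatenate four 14-bit commands and split the 56 bits into 7 bytes. */
void pack_14bit_words(const uint16_t cmd[4], uint8_t out[7])
{
    uint64_t bits = 0;
    for (int i = 0; i < 4; i++)
        bits = (bits << 14) | (cmd[i] & 0x3FFFu);   /* 4 x 14 = 56 bits   */
    for (int i = 0; i < 7; i++)
        out[i] = (uint8_t)(bits >> (48 - 8 * i));   /* 7 bytes, MSB first */
}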
Some devices that use obscure data lengths are designed so that at the start of a transaction they will either ignore all "0" bits that are clocked in before the first "1", or all "1" bits that are clocked in before the first "0". If your device happens to be designed in such a fashion, you may be able to use 8- or 16-bit SPI mode by clocking out two "junk" bits along with the bits of interest.

How to transfer data from IO to memory on ARM9 S3C2440, with or without DMA

I want to transfer 8-bit parallel data from IO to memory. The data is coming very fast, at roughly 5 MHz. I am using embedded Linux on an ARM9-based kit by FriendlyARM, which uses the S3C2440 (400 MHz) processor. Can anybody please tell me where to start? My data is a video signal that is coming from an ADC.
I have read on the internet that I can do this using DMA, but I need a starting point...
Forget about DMA on this device. The ADC is not available as a DMA source. One reason for this is that DMA is only useful for transferring multiple bytes/words/whatever - the overhead of setting up the DMA, starting it, and handling an on-completion interrupt makes it pointless for occasional transfers of one item. Your ADC has no buffering, just the one output register with 10 significant bits.
Use an FIQ handler to extract the ADC result. How you buffer the output and signal it for further processing is up to you and the Linux driver framework.
Have a look at these articles for a brief theory:
http://my.opera.com/richasn/blog/2011/01/15/application-of-dma-way-in-data-acquisition-in-arm-system
http://my.opera.com/richasn/blog/2011/01/14/application-of-dma-way-in-data-acquisition-in-arm-system
