STM32F401 Nucleo SPI clock issue, using STM32CubeF4 (C)

STM32F401RE (Cortex-M4)
I am using SPI1 with the HAL low-level drivers from STM32CubeF4.
I have used the following SPI configuration:
SPI_HandleTypeDef hspi;
hspi.Instance=SPI1;
hspi.Init.Mode=SPI_MODE_MASTER;
hspi.Init.Direction=SPI_DIRECTION_2LINES;
hspi.Init.DataSize=SPI_DATASIZE_8BIT;
hspi.Init.CLKPolarity=SPI_POLARITY_HIGH;
hspi.Init.CLKPhase=SPI_PHASE_1EDGE;
hspi.Init.NSS=SPI_NSS_SOFT;
hspi.Init.BaudRatePrescaler=SPI_BAUDRATEPRESCALER_256;
hspi.Init.FirstBit=SPI_FIRSTBIT_MSB;
hspi.Init.TIMode=SPI_TIMODE_ENABLED;
hspi.Init.CRCCalculation=SPI_CRCCALCULATION_DISABLED;
hspi.Init.CRCPolynomial=0;
HAL_SPI_Init(&hspi);
After sending some bytes on MOSI, I see on the logic analyzer that the clock and the MOSI data are mismatched.
This is a very zoomed-in picture; if required, I can send the whole logic analyzer file.
For read and write, I am using:
HAL_SPI_Transmit(&hspi, Byte, 1, 5);
HAL_SPI_Receive(&hspi, Byte, 1, 5);
I used almost the same SPI configuration on an STM32L151RCT6A with the Standard Peripheral Library, and it works there.
Question: Why is the clock not synchronised with MOSI?

I could imagine that in the master there is simply a command flow like
SET_SCK_HIGH();
SET_MOSI_HIGH();
and thus the effects become visible at slightly different times.

Related

Issue in interfacing E-Ink display with STM8S103F3P6 microcontroller

I am using the Waveshare 1.54" ePaper Module with the SPI peripheral:
CPU frequency is 16 MHz
SPI prescaler: divide by 8
MSB first
CPOL=0, CPHA=1
The display does not respond, but it responds properly with a TI CC1310.
The problem with the SPI is that after transmitting a byte, it does not go back to the idle high state.
I have checked this with a logic analyser.
The SPI is initialised thus:
/****************** Initialising the SPI peripheral ******************/
void SPI_setup(void)
{
    CLK_PeripheralClockConfig(CLK_PERIPHERAL_SPI, ENABLE);  // Enable the SPI peripheral clock

    // Set MOSI, MISO and SCK to a high level.
    //GPIO_ExternalPullUpConfig(GPIOC, (GPIO_Pin_TypeDef)(GPIO_PIN_6), ENABLE);

    SPI_DeInit();
    SPI_Init(SPI_FIRSTBIT_MSB,                     // Send MSB first
             SPI_BAUDRATEPRESCALER_8,              // Fosc/8 = 2 MHz
             SPI_MODE_MASTER,
             SPI_CLOCKPOLARITY_LOW,                // Idle clock polarity is low
             SPI_CLOCKPHASE_2EDGE,                 // Data is captured on the second clock transition
             SPI_DATADIRECTION_2LINES_FULLDUPLEX,  // Full duplex (TX and RX)
             SPI_NSS_SOFT,
             0x00);
    SPI_Cmd(ENABLE);
}
This is pretty much the same problem you had in "Issue in interfacing SPI e-ink display with PIC 18F46K22", only on a different processor. It is worth noting that CPHA on the STM8 has the opposite sense to CKE on the PIC18, which may be the cause of your error. That is to say, CPHA=1 on the STM8 has the same effect as CKE=0 on the PIC18. You really have to look at the timing diagrams for each part carefully.
Compare the SPI timing diagram from https://www.waveshare.com/wiki/1.54inch_e-Paper_Module with the one in the STM8 reference manual:
Clearly you need one of:
CPOL=1 / CPHA=1 (SPI_CLOCKPOLARITY_HIGH / SPI_CLOCKPHASE_2EDGE), or
CPOL=0 / CPHA=0 (SPI_CLOCKPOLARITY_LOW / SPI_CLOCKPHASE_1EDGE)
If it is the SCLK that you want to idle high, then you need the first option - although I fail to see why that is required, since the Waveshare diagram clearly indicates that either mode is acceptable.
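As a minimal sketch of the first option (clock idling high, data captured on the rising edge), only the polarity/phase pair of the SPI_Init call from the question needs to change; everything else is kept as in the original:

SPI_DeInit();
SPI_Init(SPI_FIRSTBIT_MSB,                     // Send MSB first
         SPI_BAUDRATEPRESCALER_8,              // Fosc/8 = 2 MHz
         SPI_MODE_MASTER,
         SPI_CLOCKPOLARITY_HIGH,               // Clock idles high
         SPI_CLOCKPHASE_2EDGE,                 // Data captured on the second (rising) transition
         SPI_DATADIRECTION_2LINES_FULLDUPLEX,
         SPI_NSS_SOFT,
         0x00);
SPI_Cmd(ENABLE);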

Implementing an SSI slave interface on STM32 Board

I am trying to implement an SSI slave protocol on an STM32 board. Since the STM32 boards don't have an SSI interface, I used the SPI interface in slave (transmit-only) mode. The SSI master sends 24 clock pulses and the slave reacts by sending its data (3 bytes) over the MISO pin. The problem I am facing is that the data is shifted one bit to the left on every new transmission from the master. For example, assuming I am constantly sending 0x010101 from the slave:
At first transmission the master receives 0x010101
At Second transmission the master receives 0x020202
At third transmission the master receives 0x040404
Can someone please give me some hints on how to solve this problem?
The data-shift with each transmission can happen when the SPI slave recognizes an (unexpected) additional clock pulse. Looking at the SSI protocol description on Wikipedia this actually makes sense:
In order to transmit N bits of data the master emits N clock cycles, followed by another clock pulse to signal the end of the transfer (so-called "Monoflop Time" - referring to the original hardware implementation of the SSI interface). Since the SPI protocol / SPI slave does not know about this additional clock pulse, it begins to output the first bit of the next data byte, which is in turn not recognized by the SSI master. As a result this leads to a shift in the data bits recognized by the SSI master on the next SSI frame.
Unfortunately, it is not easy to handle the Monoflop time correctly with the SPI slave. In order to deal with the additional clock pulse, we could try to set the SPI frame size to 25 bits on the slave side. Since the STM32 hardware only supports SPI frame sizes between 4 bit and 16 bit, the only choice is to set it to 5 bit. This is not very convenient, since we need to convert the 3 byte (24 bit) output data into 5 blocks of 5 bit (24 bit output data + 1 bit dummy data), but it should work for a "normal" transfer.
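As a small sketch of that repacking (the function name is made up for the example; the slave's SPI is assumed to be configured for 5-bit frames, MSB first):

#include <stdint.h>

/* Pack a 24-bit output word plus one trailing dummy bit into five 5-bit
 * frames, MSB first. Each element goes into the SPI data register as one
 * 5-bit transfer. */
static void ssi_pack_25bit(uint32_t value24, uint8_t frames[5])
{
    uint32_t word25 = (value24 & 0x00FFFFFFu) << 1;       /* 24 data bits + 1 dummy bit */

    for (int i = 0; i < 5; i++) {
        frames[i] = (uint8_t)((word25 >> (20 - 5 * i)) & 0x1Fu);
    }
}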
Things get more complicated though, if we also want to handle the cases "Multiple transmissions" and "Interrupting transmission" correctly. We need to monitor the clock signal to be able to detect the monoflop timeout. This can be done using a STM32 hardware timer with an external trigger. When the timer expires, we need to reset the SPI unit (in order to handle an interrupted transmission) and update the output value. This "simple" task can be quite challenging since it requires a couple of instructions - requiring a fast MCU depending on the SSI clock frequency.
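A rough register-level sketch of that timer idea is shown below. It assumes the SSI clock is also routed to a pin usable as TIM2 channel 1, that the timeout is expressed in timer ticks, and that toggling SPE is enough to re-arm the SPI slave on the part in use (some devices may need a full peripheral reset through RCC instead):

#include "stm32f4xx.h"                           /* or the CMSIS device header for your part */

void monoflop_timer_init(uint16_t timeout_ticks)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;          /* enable the TIM2 clock           */

    TIM2->ARR   = timeout_ticks;                 /* monoflop timeout in timer ticks */
    TIM2->CR1   = TIM_CR1_URS;                   /* update IRQ only on overflow     */
    TIM2->CCMR1 = TIM_CCMR1_CC1S_0;              /* channel 1 as input, mapped TI1  */
    TIM2->SMCR  = TIM_SMCR_TS_2 | TIM_SMCR_TS_0  /* trigger = TI1FP1 (the SSI SCK)  */
                | TIM_SMCR_SMS_2;                /* slave reset mode: every SCK     */
                                                 /* edge restarts the counter       */
    TIM2->DIER  = TIM_DIER_UIE;
    TIM2->CR1  |= TIM_CR1_CEN;

    NVIC_EnableIRQ(TIM2_IRQn);
}

void TIM2_IRQHandler(void)
{
    TIM2->SR = ~TIM_SR_UIF;                      /* clear the update flag           */

    /* No SCK edge for a whole timeout period: the frame (or an interrupted
     * frame) is over, so re-arm the SPI slave and reload its output data.   */
    SPI1->CR1 &= ~SPI_CR1_SPE;
    /* ... reload the TX data / DMA here ... */
    SPI1->CR1 |= SPI_CR1_SPE;
}

Note that while the bus is idle the update interrupt keeps firing once per timeout period; that is wasteful but harmless for this purpose.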
Alternatively the SSI protocol can be implemented using a software-only "bit banging" solution. But this requires a fast MCU as well in order to handle a fast SSI clock correctly.
IMHO the best solution is to use a small (inexpensive) FPGA to implement the SSI slave and let the MCU feed it with data over a traditional SPI interface.

Read a non-conventional ADC with an STM32F3

I'm attempting to interface an STM32F303 Nucleo with an AD7768-4 ADC. Datasheet for the ADC:
https://www.analog.com/media/en/technical-documentation/data-sheets/ad7768-7768-4.pdf
The issue is that the ADC DOES NOT output the converted value through the SPI port, but rather employs a data-ready signal (DRDY), a data clock (DCLK), and a combination of 4 data outputs (DOUT0-DOUT3). The output streams 96 bits serially through one wire if I set it up that way, but timing is critical in my application and I need to clock the data in using DOUT0 to DOUT2, which would each output 32 bits. If I were serially streaming the data, I could trick the SPI port into reading it, but I'm not. The ADC is running at 20 MHz, so DCLK will be operating at the same frequency. The Nucleo runs at a maximum of 72 MHz, but when the DMA is utilized, it sets the clock to 64 MHz.
The STM manual describes the "GPIO port input data register (GPIOx_IDR) (x = A..H)" as a read-only register - my understanding is that its lower 16 bits hold an input value of up to 16 bits (most likely for memory data R/W) - so the question is, how can I configure the GPIO to read in the data? I'm at a slight impasse here. My instinct tells me that the Nucleo may not be fast enough to read the data coming from the ADC... Any ideas? It is all being written in C/C++, basically bare metal... I'm new to the Nucleo and haven't written code in 4 years - pardon any lapse in knowledge...
If DCLK works at 20 MHz, the uC is obviously not fast enough (you have about 3 instructions between each cycle, so even assembly language would be difficult to implement...). As I am not familiar with the STM architecture, I can only suggest a trick that may spark some ideas in your head. Rather than using a crystal for the ADC, use an STM timer that is connected to an output pin, and clock the ADC from that pin (MCLK). When configuring the ADC over SPI, in idle mode, etc., you can leave this clock signal at 20 MHz. But when you need a sample from the ADC, stop the STM timer and clock the ADC "manually" (you practically control the DCLK signal). After your conversion routine is over, restart the timer at 20 MHz.
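A very rough sketch of that manual-clocking idea (all pin assignments are hypothetical: MCLK on PA8, DOUT0..DOUT2 on PB0..PB2; DCLK is assumed to follow MCLK one-to-one, and the real edge timing must be checked against the AD7768-4 datasheet):

#include "stm32f3xx.h"                      /* CMSIS device header for the F303 */

#define MCLK_PIN   (1u << 8)                /* PA8, hypothetical MCLK output    */

/* Clock one 32-bit word out of each of DOUT0..DOUT2 by toggling MCLK by hand
 * (the timer that normally drives MCLK is assumed to be stopped already).   */
static void adc_read_sample(uint32_t dout[3])
{
    dout[0] = dout[1] = dout[2] = 0;

    for (int bit = 0; bit < 32; bit++) {
        GPIOA->BSRR = MCLK_PIN;             /* MCLK high                        */
        GPIOA->BSRR = MCLK_PIN << 16;       /* MCLK low                         */

        uint32_t idr = GPIOB->IDR;          /* sample DOUT0..DOUT2 on PB0..PB2  */
        dout[0] = (dout[0] << 1) | ((idr >> 0) & 1u);
        dout[1] = (dout[1] << 1) | ((idr >> 1) & 1u);
        dout[2] = (dout[2] << 1) | ((idr >> 2) & 1u);
    }
}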

STM32F1 - Using master SPI on bare metal

I've been trying to port some of my AVR code to drive a simple SPI LCD to ARM as a learning exercise (I'm very new to ARM in general). For this I just need to use SPI in master mode.
I looked in the datasheet for my device (STM32F103C8) and found that the SPI1 pins I need, SCK and MOSI, are mapped as alternate functions of PA5 and PA7, respectively, along with other peripherals (pg.29). My understanding is that in order to use the SPI function on these pins, I need to make sure that anything else mapped to the same pin is disabled. When looking at the defaults for the peripheral clock control register, however, it looks like the other features are already disabled.
I looked at the SPI section in the reference manual, including section 25.3.3 - Configuring the SPI in master mode. First I enabled the SPI1 master clock in APB2ENR and followed the steps in this section to configure SPI1 to my needs. I also changed the settings for PA5/7 to set their mode to "Alternate Function Output push-pull" (9.1.4). Finally, I enabled SPI1 by setting CR1_SPE.
From my reading, I had thought that by loading a value into the SPI1 data register after configuring SPI as above, the data would be shifted out. However, after writing the data, the TXE flag in the SPI status register never becomes set, indicating that the data I wrote into it is just sat there.
At this point, I'm assuming that there is something else I've failed to configure correctly. For example, I'm not 100% sure about what to do with the PA5/7 pins. I've tried to understand what I can from the datasheets, but I'm not getting anywhere. Is there anything else that needs to be done before it'll work?
I'm almost certain that you did not set the SSM and SSI bits in the SPIx->CR1 register. SPI in these chips is pretty simple: for polled transfers you need to set SSM, SSI, SPE, MSTR, the correct format (LSBFIRST, CPOL, CPHA) and a proper baud rate (BR) in SPIx->CR1, and you're good to go.
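For reference, a minimal polled-transfer sketch along those lines (it assumes PA5/PA7 are already configured as alternate-function push-pull outputs, leaves the format bits CPOL/CPHA/LSBFIRST at their reset values, and uses fPCLK/32 only as an example rate):

#include "stm32f10x.h"

void spi1_master_init(void)
{
    RCC->APB2ENR |= RCC_APB2ENR_SPI1EN;           /* enable the SPI1 clock        */

    SPI1->CR1 = SPI_CR1_SSM | SPI_CR1_SSI         /* software NSS, held high      */
              | SPI_CR1_MSTR                      /* master mode                  */
              | SPI_CR1_BR_2                      /* fPCLK/32 (example rate)      */
              | SPI_CR1_SPE;                      /* enable the peripheral        */
}

uint8_t spi1_transfer(uint8_t data)
{
    while (!(SPI1->SR & SPI_SR_TXE)) { }          /* wait until TX buffer is empty */
    SPI1->DR = data;
    while (!(SPI1->SR & SPI_SR_RXNE)) { }         /* wait for the clocked-in byte  */
    return (uint8_t)SPI1->DR;
}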

Implementing a non-standard SPI variation on ARM Cortex M3

I need to create a driver for a flash memory chip connected to an STM32 Cortex-M3 MCU. The chip is controlled via an SPI bus. I intended to use the integrated SPI peripheral of the MCU, but unfortunately it only supports 8- or 16-bit data packets while the flash chip commands are 14 bits long. Thus, I have to implement the protocol from scratch using GPIOs. My question is: what is the right way to ensure correct timing of the signals? I am currently thinking of inserting delays between asserting and deasserting the GPIO lines with interrupts disabled, but it seems fairly unreliable to me. Are there any better methods?
Jeb's answer describes the preferred method: you should use the hardware SPI if possible, and if DMA is an option, that is nice as well.
If you for some reason find that you cannot use the hardware SPI and must implement it by "bit-banging" over GPIO, you should check what options are available in the timer/PWM hardware of the MCU. You cannot and should not use blunt "hobbyist burn-away delays" as in the link you posted; the real-time performance will be crap and you will occupy the CPU 100%.
Most MCU timers come with a pin output feature that allows a pin to change state when the timer elapses. The pseudo code would then be as follows (a rough STM32-flavoured sketch is given after the steps):
Determine if the next bit to send is 1 or 0.
Set the MCU polarity register accordingly, so that it will switch the pin to a high or low level.
When the timer elapses, you need to set the polarity once again, likely through an interrupt. How to do this is very hardware-dependent.
At the same time as you bit-bang the data (MOSI), you also need to generate the clock and chip select. The clock can be generated in the same way as the data, or possibly through a PWM signal if that option is available. Chip select is the easiest part as you only need to pull a pin low during the data transmission.
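As a sketch of those steps on an STM32F1-class part (TIM3 channel 1 drives MOSI through its forced-output mode; the timer base setup, the GPIO alternate-function configuration and the exact 14-bit framing are assumed to be done elsewhere):

#include "stm32f10x.h"                        /* CMSIS device header (Cortex-M3)   */

static volatile uint16_t tx_shift;            /* data being shifted out, MSB first */
static volatile uint8_t  tx_bits;             /* bits remaining in the frame       */

void spi14_start(uint16_t frame14)
{
    tx_shift = (uint16_t)(frame14 << 2);      /* left-justify 14 bits in 16        */
    tx_bits  = 14;
    TIM3->CCER |= TIM_CCER_CC1E;              /* connect OC1 to its (MOSI) pin     */
    TIM3->DIER |= TIM_DIER_UIE;               /* one interrupt per bit period      */
    TIM3->CR1  |= TIM_CR1_CEN;                /* bit period is set by PSC/ARR      */
}

void TIM3_IRQHandler(void)
{
    TIM3->SR = ~TIM_SR_UIF;                   /* clear the update flag             */

    if (tx_bits == 0) {                       /* frame finished: stop the timer    */
        TIM3->CR1  &= ~TIM_CR1_CEN;
        TIM3->DIER &= ~TIM_DIER_UIE;
        return;
    }

    /* Pick the next bit and force the OC1 pin high (OC1M = 101, "force
     * active") or low (OC1M = 100, "force inactive").                     */
    uint16_t oc1m = (tx_shift & 0x8000u) ? (5u << 4) : (4u << 4);
    TIM3->CCMR1 = (uint16_t)((TIM3->CCMR1 & ~TIM_CCMR1_OC1M) | oc1m);

    tx_shift <<= 1;
    tx_bits--;
    /* SCK and chip select are driven alongside this, e.g. SCK from a second
     * timer channel in PWM mode, as described above.                        */
}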
Finally, there is most likely some application note or official example on how to write a software SPI for your particular MCU.
I would recommend using the built-in SPI and DMA if possible!
You could remap your data into an array of bytes whose length corresponds to a multiple of 14 bits.
So you would send a multiple of 4 x 14 bits = 56 bits = 7 bytes each time.
Then you can use the standard SPI with an 8-bit frame size.
This should still be much faster with SPI/DMA than bit-banging the GPIOs.
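A sketch of that repacking (the function name is made up; whether the flash chip accepts four commands concatenated under one chip select is something to verify against its datasheet):

#include <stdint.h>

/* Pack four 14-bit commands (MSB first) into exactly 7 bytes = 56 bits,
 * ready to be pushed out by the 8-bit SPI, e.g. via DMA.                */
void pack_4x14(const uint16_t cmd[4], uint8_t out[7])
{
    uint64_t bits = 0;

    for (int i = 0; i < 4; i++) {
        bits = (bits << 14) | (cmd[i] & 0x3FFFu);   /* append 14 bits     */
    }
    for (int i = 0; i < 7; i++) {
        out[i] = (uint8_t)(bits >> (48 - 8 * i));   /* split into 7 bytes */
    }
}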
Some devices that use obscure data lengths are designed so that at the start of a transaction they will either ignore all "0" bits that are clocked in before the first "1", or all "1" bits that are clocked in before the first "0". If your device happens to be designed in such a fashion, you may be able to use 8- or 16-bit SPI mode by clocking out two "junk" bits along with the bits of interest.
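For example, assuming the chip ignores leading "0" bits, a 14-bit command could be left-padded into one 16-bit SPI frame like this (use ones for the padding instead if the chip ignores leading "1" bits):

#include <stdint.h>

/* Two junk '0' bits in front of the 14 command bits, sent MSB first. */
static inline uint16_t pad_cmd14(uint16_t cmd14)
{
    return (uint16_t)(cmd14 & 0x3FFFu);   /* bits 15:14 stay 0 */
}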
