Flash Memory writing and reading through SPI - c

This is the first time I have tried to use the SPI protocol. I am trying to understand example code that came with my development kit (which has an STM32F207VCT6 microcontroller). The code implements communication (reading and writing) with an AT45DB041D flash memory.
Every time this example code reads from the memory, it not only sends the information about what is to be read but also receives data right away. That received data apparently isn't used for any purpose. The real data is requested later by the receive routine, which first sends a 0x00 byte. The following code shows that:
void AT45DBXX_Read_ID(u8 *IData){
    u8 i;
    AT45DBXX_BUSY();
    AT45DBXX_Enable;            //Chip Select drive to low
    SPIx_Send_byte(Read_ID);
    for(i=0;i<4;i++)
    {
        IData[i] = SPIx_Receive_byte();
    }
    AT45DBXX_Disable;           //Chip Select drive to high
}
Definitions:
void SPIx_Send_byte(u16 data){
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_TXE)==RESET);
    SPI_I2S_SendData(Open207V_SPIx,data);
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_RXNE)==RESET);
    SPI_I2S_ReceiveData(Open207V_SPIx);
}
u16 SPIx_Receive_byte(void){
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_TXE)==RESET);
    SPI_I2S_SendData(Open207V_SPIx,0x00);
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_RXNE)==RESET);
    return SPI_I2S_ReceiveData(Open207V_SPIx);
}
As you can see, SPIx_Send_byte not only sends the command byte but also receives a byte that won't be used.
Can someone help me understand why that needs to be done, and why a 0x00 byte has to be sent to actually receive the data?
Thanks!

SPI is a full-duplex, bi-directional bus where data is both sent to the slave and received from the slave at the same time. Your SPI controller doesn't know whether the meaningful data in a given exchange is going from the master, coming from the slave, or both. Therefore, whenever you send a byte, you must also read a byte, if only to throw it away. By the same token, you cannot receive a byte without sending a byte, even if the slave will throw it away.
Take a look at Wikipedia.
So, what your code is doing is:
1. Send Read_ID to the slave.
2. Read and throw away the byte that was simultaneously clocked out of the slave.
3. Write 0x00 to the slave so the slave can send a byte of data.
4. Read the data byte that was simultaneously clocked out of the slave.
5. Loop back to #3.
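Since send and receive are really the same operation, the two helpers in your code can be collapsed into one full-duplex routine. A minimal sketch, reusing the SPL calls and the Open207V_SPIx handle from the example (this is not part of the original code):
static u8 SPIx_Transfer_byte(u8 out){
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_TXE)==RESET);
    SPI_I2S_SendData(Open207V_SPIx, out);              /* byte clocked out to the slave    */
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_RXNE)==RESET);
    return (u8)SPI_I2S_ReceiveData(Open207V_SPIx);     /* byte clocked in at the same time */
}
With this helper, sending a command means ignoring the returned byte, and reading data means sending a dummy 0x00 and keeping the returned byte:
(void)SPIx_Transfer_byte(Read_ID);    /* opcode out, reply discarded          */
IData[i] = SPIx_Transfer_byte(0x00);  /* dummy out, keep the byte shifted in  */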
By the way, such questions would be better suited for the EE Stack Exchange as it is more about the hardware protocol as opposed to programming.

Related

Linux UART imx8 how to quickly detect frame end?

I have an imx8 module running Linux on my PCB, and I would like some tips or pointers on how to modify the UART driver so that I can detect the end of a frame very quickly (less than 2 ms) from my user-space C application. The UART frame does not have any specific ending character or fixed length. The standard VTIME of 100 ms is much too long.
I am reading from a SIM card; I have no control over the data, nor over its size or content. I just need to detect the end of the frame very quickly. The frame could be 3 bytes or 500. The SIM card reacts to data it receives: typically I send it a couple of bytes and it responds a couple of milliseconds later with an uninterrupted string of bytes of unknown length. I am using an i.MX8MP.
I thought about using the IDLE interrupt to detect the frame end. Turn it on when any byte is received and off once the idle interrupt fires. How can I propagate this signal back to user space? Or is there an existing method to do this?
Waiting for an "idle" is a poor way to do this.
Use termios to set raw mode with VTIME of 0 and VMIN of 1. This will allow the userspace app to get control as soon as a single byte arrives. See:
How to read serial with interrupt serial?
How do I use termios.h to configure a serial port to pass raw bytes?
How to open a tty device in noncanonical mode on Linux using .NET Core
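As a minimal sketch of that raw-mode setup (assuming the SIM UART appears as a tty device node such as /dev/ttymxc2 -- substitute your actual device):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_uart_raw(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { close(fd); return -1; }

    cfmakeraw(&tio);        /* raw mode: no canonical processing, no echo */
    tio.c_cc[VMIN]  = 1;    /* read() returns as soon as one byte arrives */
    tio.c_cc[VTIME] = 0;    /* no inter-byte timer                        */

    if (tcsetattr(fd, TCSANOW, &tio) < 0) { close(fd); return -1; }
    return fd;
}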
But, you need a "protocol" of sorts, so you can know how much to read to get a complete packet. You prefix all data with a struct that has (e.g.) a type and a payload length. Then, you send "payload length" bytes. The receiver gets/reads that fixed-length struct and then reads the payload, which is "payload length" bytes long. This struct is always sent (in both directions).
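An illustrative header for that length-prefixed scheme (the field names here are just examples, not code from the linked answer):
#include <stdint.h>

struct msg_hdr {
    uint8_t  type;          /* message/command type                */
    uint16_t payload_len;   /* number of payload bytes that follow */
} __attribute__((packed));  /* fixed on-wire size (GCC/Clang)      */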
See my answer: thread function doesn't terminate until Enter is pressed for a working example.
What you have/need is similar to doing socket programming using a stream socket except that the lower level is the UART rather than an actual socket.
My example code uses sockets, but if you change the low level to open your uart in raw mode (as above), it will be very similar.
UPDATE:
How quickly after the frame has finished would I have the data at the application level? When I currently try to read my random-length frames in 512-byte chunks, it sometimes reads the whole frame in one go; other times it reads the frame broken up into chunks. – Engo
In my link, in the last code block, there is an xrecv function. It shows how to read partial data that comes in chunks.
That is what you'll need to do.
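In the same spirit as that xrecv (this is a generic sketch, not the original code), the receive loop keeps calling read() until the expected number of bytes has accumulated:
#include <stddef.h>
#include <unistd.h>

ssize_t read_full(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, (char *)buf + got, want - got);
        if (n <= 0) return -1;   /* error or EOF; real code would check errno/EINTR */
        got += (size_t)n;        /* accumulate the partial chunk                    */
    }
    return (ssize_t)got;
}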
Things missing from your post:
You didn't post which imx8 board/configuration you have. And, which SIM card you have (the protocols are card specific).
And, you didn't post your other code [or any code] that drives the device and illustrates the problem.
How much time must pass without receiving a byte before the [uart] device is "idle"? That is, (e.g.) the device sends 100 bytes and is then finished. How many byte times does one wait before considering the device to be "idle"?
What speed is the UART running at?
A thorough description of the device, its capabilities, and how you intend to use it.
A uart device doesn't have an "idle" interrupt. From some imx8 docs, the DMA device may have an "idle" interrupt and the uart can be driven by the DMA controller.
But, I looked at some of the linux kernel imx8 device drivers, and, AFAICT, the idle interrupt isn't supported.
I need to read everything in one go and get this data within a few hundred microseconds.
Based on the scheduling granularity, it may not be possible to guarantee that a process runs in a given amount of time.
It is possible to help this a bit. You can change the process to use the R/T scheduler (e.g. SCHED_FIFO). Also, you can use sched_setaffinity to lock the process to a given CPU core. There is a corresponding call to lock IRQ interrupts to a given CPU core.
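A rough sketch of those two tweaks on Linux (requires root or CAP_SYS_NICE; the core number and priority are arbitrary examples):
#define _GNU_SOURCE
#include <sched.h>

static int pin_and_boost(int cpu, int rt_prio)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) < 0)    /* lock this process to one core */
        return -1;

    struct sched_param sp = { .sched_priority = rt_prio };
    return sched_setscheduler(0, SCHED_FIFO, &sp);      /* real-time FIFO scheduling     */
}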
I assume that the SIM card acts like a [passive] device (like a disk). That is, you send it a command, and it sends back a response or does a transfer.
Based on what command you give it, you should know how many bytes it will send back. Or, it should tell you how many optional bytes it will send (similar to the struct in my link).
The method you've described (e.g.) wait for idle, then "race" to get/process the data [for which you don't know the length] is fraught with problems.
Even if you could get it to work, it will be unreliable. At some point, system activity will be just high enough to delay wakeup of your process and you'll miss the window.
If you're reading data, why must you process the data within a fixed period of time (e.g. 100 us)? What happens if you don't? Does the device catch fire?
Without more specific information, there are probably other ways to do this.
I've programmed such systems before that relied on data races. They were unreliable. Either missing data. Or, for some motor control applications, device lockup. The remedy was to redesign things so that there was some positive/definitive way to communicate that was tolerant of delays.
Otherwise, I think you've "fallen in love" with the "idle interrupt" idea, making this an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem

How to Read data from RS232 port without RS232 task creation (Embedded FreeRTOS C)?

I want to write C code for an embedded system such that the data received at the RS232 port is read continuously without creating a separate "RS232 TASK" for reading it.
Can anyone help me with this?
I just need a basic approach for reading data without task creation.
Identify the function that tells you whether some data was received. Commonly it returns a boolean value or the number of received bytes. (BTW, most protocols on RS232 allow 5 to 8 data bits per transmission.)
Use that function in a conditional block to call the next function that actually reads one or more received bytes. In case nothing was received, this prevents your loop from blocking.
Example (without knowing how the functions are named in your case):
/* any task */ {
    for (;;) /* or any other way of looping */ {
        /* do some stuff, if needed */
        if (areRs232DataAvailable()) {
            uint8_t data = fetchRs232ReceivedByte();
            /* handle received data */
        }
        /* do some stuff, if needed */
    }
}
I would ask why you think reading data from a UART (which I assume is what you mean by "RS-232") requires a task at all. A solution will depend a great deal on your platform and environment, which you have not specified beyond FreeRTOS, and FreeRTOS itself does not provide any serial I/O support.
If your platform or device library already includes serial I/O, then you might use that, but at the very lowest level, the UART will have a status register with a "data available" bit, and a register or FIFO containing that data. You can simply poll the data availability, then read the data.
To avoid data loss while the processor is perhaps busy with other tasks, you would use either interrupts or DMA. At the very least the UART will be capable of generating an interrupt on receipt of a character. The interrupt handler would place the new data into a FIFO buffer (such as an RTOS message queue), and tasks that receive serial data simply read from the buffer asynchronously.
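A sketch of that interrupt-to-queue pattern with a FreeRTOS queue as the FIFO (uart_read_byte() stands in for whatever data-register access your part needs; the queue is created elsewhere, e.g. with xQueueCreate(128, sizeof(uint8_t))):
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

static QueueHandle_t rxQueue;

void UART_RX_IRQHandler(void)
{
    BaseType_t woken = pdFALSE;
    uint8_t b = uart_read_byte();             /* placeholder: read the data register */
    xQueueSendFromISR(rxQueue, &b, &woken);   /* push the byte into the FIFO         */
    portYIELD_FROM_ISR(woken);                /* wake a waiting task immediately     */
}
Any task can then consume bytes asynchronously with xQueueReceive(rxQueue, &b, portMAX_DELAY).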
DMA works similarly, but you queue the data in response to the DMA interrupt. That will reduce the interrupt rate, but you have to deal with the possibility of a partially full DMA buffer waiting indefinitely. Also not all platforms necessarily support UART as a DMA source, or even DMA at all.

STM32 USB CDC Long packet receive

I need to send data from the PC to my STM32F3, so I decided to use the built-in USB peripheral of the microcontroller.
But now I have a problem: I want to send a big amount of data to the STM32 at once, something like 200-500 bytes.
When I send packets of fewer than 64 characters from the PC with minicom, everything is fine: the callback CDC_Receive_FS(uint8_t* Buf, uint32_t *Len) fires once and sets UsbRxFlag, just to inform the running program that data is available.
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 6 */
    USBD_CDC_SetRxBuffer(&hUsbDeviceFS, &Buf[0]);
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);
    if( (Buf[0] == 'A') & (Buf[1] == 'T') ){
        GPIOB->BSRR = (uint32_t)RX_Led_Pin;
        UsbRxFlag = 1;
    }
    return (USBD_OK);
    /* USER CODE END 6 */
}
But when I try to send more data (just a long text from minicom) to the microcontroller, something weird happens: sometimes it doesn't react at all, sometimes it ignores part of the data.
How can I handle sending more than 64 bytes to the STM32F3 over USB CDC?
The maximum packet length for full-speed USB communication is 64 bytes. So the data will be transferred in chunks of 64 bytes and needs to be reassembled on the other end.
USB CDC is based on bulk transfer endpoints and implements a data stream (also known as pipe), not a message stream. It's basically a stream of bytes. So if you send 200 bytes, do not expect any indication of where the 200 bytes end. Such information is not transmitted.
Your code looks a bit suspicious:
You probably meant '&&' instead of '&' as pointed out by Reinstate Monica.
Unless you change buffers, USBD_CDC_SetRxBuffer only needs to be called once at initialization.
When CDC_Receive_FS is called, a data packet has already been received. Buf will point to the buffer you have specified with USBD_CDC_SetRxBuffer. Len provides the length of the packet. So the first thing you would do is process the received data. Once the data has been processed and the buffer can be reused again, you would call USBD_CDC_ReceivePacket to indicate that you are ready to receive the next packet. So move USBD_CDC_SetRxBuffer to another function (unless you want to use several buffers) and move USBD_CDC_ReceivePacket to the end of CDC_Receive_FS.
The incorrect order of the function calls could likely have led to the received data being overwritten while you are still processing it.
But the biggest issue is likely that you expect the entire data to arrive in a single piece if you sent it as a single piece, or that it at least contains an indication of where the piece ends. That's not the case. You will have to implement this yourself.
If you are using a text protocol, you could buffer all incoming data until you detect a line feed. Then you know that you have a complete command and can execute it.
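A rough sketch combining those points -- process first (here: accumulate until '\n'), then re-arm reception. cmd_buf, cmd_len and cmd_ready are illustrative names, and the main loop is expected to reset cmd_len and cmd_ready after handling a command:
#define CMD_BUF_SIZE 512
static uint8_t  cmd_buf[CMD_BUF_SIZE];
static uint32_t cmd_len;
static volatile uint8_t cmd_ready;                 /* polled from the main loop */

static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
    for (uint32_t i = 0; i < *Len; i++) {
        if (Buf[i] == '\n') {
            cmd_ready = 1;                         /* a complete command is in cmd_buf   */
        } else if (cmd_len < CMD_BUF_SIZE) {
            cmd_buf[cmd_len++] = Buf[i];
        }
    }
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);         /* buffer handled, accept next packet */
    return (USBD_OK);
}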
The following is a general purpose implementation for reading an arbitrary number of bytes: https://github.com/philrawlings/bluepill-usb-cdc-test.
The full code is a little too long to post here, but this essentially modifies usb_cdc_if.c to create a circular buffer and exposes additional functions (CDC_GetRxBufferBytesAvailable_FS(), CDC_ReadRxBuffer_FS() and CDC_FlushRxBuffer_FS()) which can be consumed from main.c. The readme.md text shown on the main page describes all the code changes required.
As mentioned by @Codo, you will need to either add termination characters to your source data, or include a "length" value (itself a fixed number of bytes) at the beginning to indicate how many bytes are in the data payload.

Implement UART frame controller

I'm programming on an STM32 board and I'm confused about how to use my peripherals: polling, interrupts, DMA, DMA interrupts...
So far, I have coded a UART module which sends basic data, and it works in polling, interrupt and DMA mode.
But I'd like to be able to send and receive specific frames with variable lengths, for example:
[ START | LGTH | CMD_ID | DATA(LGTH) | CRC ]
I also have sensors, and I'd like to use the DATA received in these UART frames to interact with the sensors.
So, what I don't understand is:
how do I program the UART module to work in "frame" mode? (buffer? circular DMA? interrupts? where, when...)
once I'm able to send or receive frames with my UART, what is the best way to interact with the sensors? (inside a timer interrupt? in a state machine? with an extern variable? ...)
Here is my Libraries tree
In the future, the idea is to port this application to FreeRTOS.
Thank you!
Definitely use DMA when it is available.
You have one big buffer (a cyclic buffer is a good solution) and you just write data into it from one side. If the DMA is not already running, you start it with your buffer.
If the DMA is already running, you just write your data to the buffer and wait for the DMA transfer-complete interrupt.
In this interrupt you advance the buffer's read pointer (since that data has already been sent) and check whether more data is waiting to be sent over DMA. If so, set the DMA memory address and the number of bytes in the buffer to send.
When the next DMA TC IRQ happens, repeat the process.
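A sketch of that transfer-complete bookkeeping (dma_start_tx() is a placeholder for programming the DMA stream with an address and length and enabling it; tx_head/tx_tail are the write/read indices of the cyclic buffer):
#include <stdint.h>

#define TX_BUF_SIZE 512u
static uint8_t tx_buf[TX_BUF_SIZE];
static volatile uint16_t tx_head, tx_tail;     /* write index / read index        */
static volatile uint16_t tx_in_flight;         /* bytes currently handed to DMA   */

void uart_tx_dma_tc_irq(void)                  /* DMA transfer-complete interrupt */
{
    tx_tail = (tx_tail + tx_in_flight) % TX_BUF_SIZE;   /* those bytes are out    */
    tx_in_flight = 0;

    if (tx_head != tx_tail) {                  /* more data waiting in the buffer */
        uint16_t len = (tx_head > tx_tail) ? (uint16_t)(tx_head - tx_tail)
                                           : (uint16_t)(TX_BUF_SIZE - tx_tail);   /* send up to the wrap point */
        tx_in_flight = len;
        dma_start_tx(&tx_buf[tx_tail], len);   /* placeholder: re-arm the DMA     */
    }
}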
There is no hardware support for frames, only plain bytes. That means you have to "invent" your own frame protocol and implement it in the application.
Later, when you want to send that FRAME over UART, you have to:
Write start byte to buffer
Write other header bytes
Write actual data
Write stop bytes/CRC/whatever
Check whether the DMA is running; if it is not, start it.
Normally, I use this frame concept:
[START, ADDRESS, CMD, LEN, DATA, CRC, STOP]
START: Start byte indicating start of frame
ADDRESS: Address of device when multiple devices are in use on bus
CMD: Command ID
LEN: 2 bytes for data length
DATA: Actual data in bytes of variable length
CRC: 2 bytes for CRC including: address, cmd, len, data
STOP: Stop byte indicating end of frame
This is how I do it in every project where it is necessary. It does not use the CPU to send data; it just sets up the DMA and starts the transmission.
From the application's perspective, you just have to create a send_send(data, len) function which builds the frame and puts it into the buffer for transmission.
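For illustration, such a function could look like the sketch below. buf_put() stands in for "append one byte to the TX buffer and start the DMA if idle", crc16() for whatever CRC both sides agree on over address, cmd, len and data, and the marker values are arbitrary:
#include <stdint.h>

#define FRAME_START 0xAA
#define FRAME_STOP  0x55

static void send_frame(uint8_t addr, uint8_t cmd, const uint8_t *data, uint16_t len)
{
    uint16_t crc = crc16(addr, cmd, data, len);

    buf_put(FRAME_START);
    buf_put(addr);
    buf_put(cmd);
    buf_put((uint8_t)(len >> 8));          /* 2-byte length, MSB first */
    buf_put((uint8_t)(len & 0xFF));
    for (uint16_t i = 0; i < len; i++)
        buf_put(data[i]);
    buf_put((uint8_t)(crc >> 8));          /* 2-byte CRC               */
    buf_put((uint8_t)(crc & 0xFF));
    buf_put(FRAME_STOP);
}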
The buffer size must be big enough for your requirements:
How much data arrives at a particular time (is it continuous, or a lot of data in short bursts?)
UART baud rate
For specific questions, ask and maybe I can provide some code examples from my libraries as a reference.
In this case, where you need to implement that protocol, I would probably use plain interrupts and, in the handler, use a byte-by-byte state-machine to parse the incoming bytes into a frame buffer.
Only when a complete, valid frame has been received is it necessary to signal some semaphore/event and request a scheduler run; otherwise, you can handle any protocol error as you require - maybe transmit some 'error/repeat' message and reset the state machine to await the next start-of-frame byte.
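A bare-bones sketch of that byte-by-byte parser for the [ START | LGTH | CMD_ID | DATA(LGTH) | CRC ] frame from the question. FRAME_START_BYTE, the one-byte crc8() and frame_complete() (the "signal the semaphore/event" hook) are assumptions:
#include <stdint.h>

#define FRAME_START_BYTE 0x7E   /* arbitrary example marker */

enum rx_state { WAIT_START, WAIT_LEN, WAIT_CMD, WAIT_DATA, WAIT_CRC };

static enum rx_state state = WAIT_START;
static uint8_t len, cmd, idx, data[255];

void uart_rx_byte(uint8_t b)               /* call from the RX interrupt handler */
{
    switch (state) {
    case WAIT_START: if (b == FRAME_START_BYTE) state = WAIT_LEN;       break;
    case WAIT_LEN:   len = b; idx = 0; state = WAIT_CMD;                break;
    case WAIT_CMD:   cmd = b; state = (len ? WAIT_DATA : WAIT_CRC);     break;
    case WAIT_DATA:  data[idx++] = b; if (idx == len) state = WAIT_CRC; break;
    case WAIT_CRC:
        if (b == crc8(cmd, data, len))      /* placeholder CRC check                   */
            frame_complete(cmd, data, len); /* signal the semaphore/event here         */
        state = WAIT_START;                 /* resync whether the frame was good or bad */
        break;
    }
}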
If you use DMA for this, then the variable frame length is going to be awkward, and you STILL have to iterate over the received data to validate your protocol :(
DMA doesn't sound like a good fit for this, to me...
EDIT: if there is no preemptive multitasker, then forget about all that semaphore gunge above :) Still, it's easier to check a boolean 'validFrameRx' flag than to parse DMA block data.

Data structure for storing serial port data in firmware

I am sending data from a linux application through serial port to an embedded device.
In the current implementation, a byte circular buffer is used in the firmware (nothing but an array with a read and a write pointer).
As the bytes come in, they are written to the circular buffer.
Now the PC application appears to be sending the data too fast for the firmware to handle. Bytes are missed, resulting in the firmware returning WRONG_INPUT too many times.
I think baud rate (115200) is not the issue. A more efficient data structure at the firmware side might help. Any suggestions on choice of data structure?
A circular buffer is the best answer. It is the easiest way to model a hardware FIFO in pure software.
The real issue is likely to be either the way you are collecting bytes from the UART to put in the buffer, or overflow of that buffer.
At 115200 baud with the usual 1 start bit, 1 stop bit and 8 data bits, you can see as many as 11520 bytes per second arrive at that port. That gives you an average of just about 86.8 µs per byte to work with. In a PC, that will seem like a lot of time, but in a small microprocessor, it might not be all that many total instructions or in some cases very many I/O register accesses. If you overfill your buffer because bytes are arriving on average faster than you can consume them, then you will have errors.
Some general advice:
Don't do polled I/O.
Do use a Rx Ready interrupt.
Enable the receive FIFO, if available.
Empty the FIFO completely in the interrupt handler.
Make the ring buffer large enough.
Consider flow control.
Sizing your ring buffer large enough to hold a complete message is important. If your protocol has known limits on the message size, then you can use the higher levels of your protocol to do flow control and survive without the pains of getting XON/XOFF flow to work right in all of the edge cases, or getting RTS/CTS to work as expected at both ends of the wire, which can be nearly as hairy.
If you can't make the ring buffer that large, then you will need some kind of flow control.
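For reference, a minimal single-producer/single-consumer ring buffer of the kind being discussed (a sketch, not drawn from the original firmware; the power-of-two size makes the wrap-around a cheap mask):
#include <stdint.h>

#define RB_SIZE 256u                           /* must be a power of two */
static volatile uint8_t  rb_data[RB_SIZE];
static volatile uint16_t rb_head, rb_tail;     /* ISR writes head, task reads tail */

static inline int rb_put(uint8_t b)            /* producer: RX interrupt handler */
{
    uint16_t next = (rb_head + 1u) & (RB_SIZE - 1u);
    if (next == rb_tail) return 0;             /* full -- the byte would be lost */
    rb_data[rb_head] = b;
    rb_head = next;
    return 1;
}

static inline int rb_get(uint8_t *b)           /* consumer: main loop or task */
{
    if (rb_tail == rb_head) return 0;          /* empty */
    *b = rb_data[rb_tail];
    rb_tail = (rb_tail + 1u) & (RB_SIZE - 1u);
    return 1;
}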
There is nothing better than a circular buffer.
You could use a slower baud rate or speed up the application in the firmware so that it can handle data coming at full speed.
If the output of the PC is in bursts it may help to make the buffer big enough to handle one burst.
The last option is to implement some form of flow control.
What do you mean by "embedded device"? I think most current DSPs and processors can easily handle this kind of load. The problem is not with the circular buffer, but with how you collect bytes from the serial port.
Does your UART have a hardware FIFO? If yes, then you should enable it. If you have an interrupt per byte, you can quickly get into trouble, especially if you are working with an OS or with virtual memory, where the IRQ cost can be quite high.
If your receiving firmware is very simple (no multitasking) and you don't have a hardware FIFO, polled mode can be a better solution than interrupt-driven, because then your processor does only UART data reception and you have no interrupt overhead.
Another problem might be the transfer protocol. For example, if you have a long packet of data that you have to checksum, and you compute the whole checksum at the end of the packet, then all of the packet's processing time lands at its end, which is why you may miss the beginning of the next packet.
So the circular buffer is fine, and you have two ways to improve:
- The way you interact with the hardware
- The protocol (packet length, acknowledgment, etc.)
Before trying to solve the problem, first you need to establish what the problem really is. Otherwise you might waste time trying to fix something that isn't actually broken.
Without knowing more about your set-up it's hard to give more specific advice. But you should investigate further to establish what exactly the hardware and software is currently doing when the bytes come in, and then what is the weak point where they're going missing.
A circular buffer with Interrupt driven IO will work on the smallest and slowest of embedded targets.
First try it at the lowest baud rate and only then try at high speeds.
Using a circular buffer in conjunction with an IRQ is an excellent suggestion. If your processor generates an interrupt each time a byte is received, take that byte and store it in the buffer. How you empty that buffer depends on whether you are processing a stream of data or data packets. If you are processing a stream, simply have your background process remove the bytes from the buffer and process them first-in, first-out. If you are processing packets, keep filling the buffer until you have a complete packet. I've used the packet method successfully many times in the past. I would also implement some type of flow control to signal to the PC if something goes wrong, such as a full buffer, or, if packet-processing time is long, to indicate to the PC when it is ready for the next packet.
You could implement something like an IP datagram, which contains a data length, an ID, and a checksum.
Edit:
Then you could hard-code some fixed length for the packets, for example 1024 bytes or whatever makes sense for the device. The PC side would then check whether the device's queue is full every time it writes a packet. The firmware side would run the checksum to see whether all data is valid and read up to the data length.
