Initializing serial communication with the C library open() causes TX to send one bit on RPi - c

I'm trying to set up serial communication between an RPi and an FPGA. However, there is an issue when using the standard C library open() to initialize the serial interface. I'm using a scope to monitor what is sent and received via the RX and TX lines, and a call to open() causes the TX line of the RPi to go low for the length of one bit. I do not see this behavior with other computers/Linux PCs. The problem is that the FPGA assumes a valid transmission, since it interprets the low pulse as a start bit, but it isn't one.
I checked with minicom installed on the RPi. Same thing: starting minicom causes the TX line to send one bit. Once minicom has started, the communication runs as expected and all bytes have the correct frame size. Is there any way to suppress the TX line going low upon the open() call that initializes the serial communication? Is this expected behavior?

This is a super far-fetched hunch, but this code seems a bit suspicious, from the pl011_startup() function in the PL011 serial port driver:
/*
* Provoke TX FIFO interrupt into asserting.
*/
It seems as if it's twiddling the TX line when starting up the port, which would explain the pulse you're seeing. More investigation would surely be needed before concluding this is what happens, of course.
So, I guess my "answer" boils down to: that sounds weird, perhaps it's something with the driver?
Of course, one way of working around this is to apply some care at the FPGA end, assuming you have more control over it. "Proper" framing would take care of this, and make it clear that the spurious send can be discarded.
UPDATE: I meant that if "proper" messages were always framed by some sequence of bytes, the FPGA might be able to discard invalid ("unframed") data anyway, and thus become immune to the spurious pulse. For instance, messages could be defined to always start with SOH (start of header) or STX (start of text) symbols (bytes with the values 0x01 and 0x02, respectively).
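As a rough illustration of that discard policy (sketched in C for readability, though on the FPGA side it would live in HDL or firmware; uart_rx_byte() is a hypothetical blocking byte source):

#include <stdint.h>

#define SOH 0x01   /* ASCII start-of-header */

/* Hypothetical blocking byte source provided by the receiver. */
extern uint8_t uart_rx_byte(void);

/* Discard anything (including a spurious pulse decoded as a byte)
 * until a real frame start is seen. */
void wait_for_frame_start(void)
{
    while (uart_rx_byte() != SOH)
        ;   /* unframed byte: ignore it */
}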

Related

Linux UART imx8 how to quickly detect frame end?

I have an imx8 module running Linux on my PCB, and I would like some tips or pointers on how to modify the UART driver to allow me to detect the end of a frame very quickly (less than 2 ms) from my user-space C application. The UART frame does not have any specific ending character or frame length, and the standard VTIME granularity of 100 ms is much too long.
I am reading from a SIM card; I have no control over the data, its size, or its content. I just need to detect the end of the frame very quickly. The frame could be 3 bytes or 500. The SIM card reacts to data that it receives: typically I send it a couple of bytes and then it responds a couple of ms later with an uninterrupted string of bytes of unknown length. I am using an iMX8MP.
I thought about using the IDLE interrupt to detect the frame end: turn it on when any byte is received, and off once the idle interrupt fires. How can I propagate this signal back to user space? Or is there an existing method to do this?
Waiting for an "idle" is a poor way to do this.
Use termios to set raw mode with VTIME of 0 and VMIN of 1. This will allow the userspace app to get control as soon as a single byte arrives. See these answers, and the sketch that follows them:
How to read serial with interrupt serial?
How do I use termios.h to configure a serial port to pass raw bytes?
How to open a tty device in noncanonical mode on Linux using .NET Core
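A minimal sketch of that termios setup, assuming the port is already open on fd:

#include <termios.h>

/* Put an already-open serial fd into raw mode so that read()
 * returns as soon as a single byte arrives (VMIN=1, VTIME=0). */
int set_raw_min1(int fd)
{
    struct termios tio;

    if (tcgetattr(fd, &tio) < 0)
        return -1;

    cfmakeraw(&tio);        /* disable canonical mode, echo, signals */
    tio.c_cc[VMIN]  = 1;    /* block until at least one byte arrives */
    tio.c_cc[VTIME] = 0;    /* no inter-byte timeout */

    return tcsetattr(fd, TCSANOW, &tio);
}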
But, you need a "protocol" of sorts, so you can know how much to read to get a complete packet. You prefix all data with a struct that has (e.g.) a type and a payload length. Then, you send "payload length" bytes. The receiver reads that fixed-length struct and then reads the payload, which is "payload length" bytes long. This struct is always sent (in both directions).
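As a hedged illustration, such a header could look like this in C (field names and widths are assumptions, not taken from the linked answer):

#include <stdint.h>

/* Illustrative fixed-size header sent before every payload. */
struct msg_hdr {
    uint8_t  type;      /* what kind of message follows */
    uint8_t  flags;     /* spare/room for versioning */
    uint16_t length;    /* payload length in bytes */
} __attribute__((packed));

The receiver first reads exactly sizeof(struct msg_hdr) bytes, then reads exactly length more bytes of payload.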
See my answer: thread function doesn't terminate until Enter is pressed for a working example.
What you have/need is similar to doing socket programming using a stream socket except that the lower level is the UART rather than an actual socket.
My example code uses sockets, but if you change the low level to open your uart in raw mode (as above), it will be very similar.
UPDATE:
How quickly after the frame finishes would I have the data at the application level? When I currently try to read my random-length frames in 512-byte chunks, it sometimes reads the whole frame in one go; other times it reads the frame broken up into chunks. – Engo
In my link, in the last code block, there is an xrecv function. It shows how to read partial data that comes in chunks.
That is what you'll need to do.
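The core of such a helper is just a loop that keeps calling read() until the requested count has arrived; a minimal sketch (not the linked xrecv itself):

#include <unistd.h>
#include <sys/types.h>

/* Read exactly `len` bytes from `fd`, reassembling partial reads.
 * Returns len on success, 0 on EOF, -1 on error. */
ssize_t read_full(int fd, void *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n < 0)
            return -1;   /* error (check errno) */
        if (n == 0)
            return 0;    /* EOF / port closed */
        done += n;
    }
    return (ssize_t)done;
}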
Things missing from your post:
You didn't post which imx8 board/configuration you have, or which SIM card you have (the protocols are card-specific).
And, you didn't post your other code [or any code] that drives the device and illustrates the problem.
How much time must pass without receiving a byte before the [uart] device is "idle"? That is, (e.g.) the device sends 100 bytes and is then finished. How many byte times does one wait before considering the device to be "idle"?
What speed is the UART running at?
A thorough description of the device, its capabilities, and how you intend to use it.
A UART device doesn't have an "idle" interrupt. From some imx8 docs, the DMA device may have an "idle" interrupt, and the UART can be driven by the DMA controller.
But I looked at some of the Linux kernel imx8 device drivers, and, AFAICT, the idle interrupt isn't supported.
I need to read everything in one go and get this data within a few hundred microseconds.
Based on the scheduling granularity, it may not be possible to guarantee that a process runs in a given amount of time.
It is possible to help this a bit. You can change the process to use the R/T scheduler (e.g. SCHED_FIFO). Also, you can use sched_setaffinity to lock the process to a given CPU core. There is a corresponding call to lock IRQ interrupts to a given CPU core.
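A minimal sketch of both calls, assuming a Linux userspace process (the core number and priority are placeholders, and SCHED_FIFO requires root or CAP_SYS_NICE):

#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process to one CPU core and switch it to the
 * SCHED_FIFO real-time scheduler. */
int make_realtime(int core, int prio)
{
    cpu_set_t set;
    struct sched_param sp = { .sched_priority = prio };

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    if (sched_setaffinity(0, sizeof(set), &set) < 0)
        return -1;

    return sched_setscheduler(0, SCHED_FIFO, &sp);
}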
I assume that the SIM card acts like a [passive] device (like a disk). That is, you send it a command, and it sends back a response or does a transfer.
Based on what command you give it, you should know how many bytes it will send back. Or, it should tell you how many optional bytes it will send (similar to the struct in my link).
The method you've described (e.g.) wait for idle, then "race" to get/process the data [for which you don't know the length] is fraught with problems.
Even if you could get it to work, it will be unreliable. At some point, system activity will be just high enough to delay wakeup of your process and you'll miss the window.
If you're reading data, why must you process the data within a fixed period of time (e.g. 100 us)? What happens if you don't? Does the device catch fire?
Without more specific information, there are probably other ways to do this.
I've programmed such systems before that relied on data races. They were unreliable. Either missing data. Or, for some motor control applications, device lockup. The remedy was to redesign things so that there was some positive/definitive way to communicate that was tolerant of delays.
Otherwise, I think you've "fallen in love" with the "idle interrupt" idea, making this an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem

Why does read not terminate for my USB device?

I have a USB device connected to my Raspberry Pi 3 B+ (Raspbian Buster Lite 2019-07-10). I also wrote a small program to read data from the USB device. The device has custom, CDC-conformant firmware, so the device is detected correctly by the OS and a tty is attached.
But when I call read() on the device, the syscall never terminates. The odd thing is, when I access the device with Minicom, CuteCom or even HTerm, it works correctly.
I already tried the answers from the following questions:
Reading and writing to serial port in C on Linux
How to open, read, and write from serial port in C?
C program to read data from USB device connected to the system
None of them worked.
I also tried to flush the tty with tcflush and tcdrain.
int dev = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);
// I tried to adapt the device parameters with termios, see above
uint8_t req[] = {0x21, 0x42, 0x00, 0x12}; // the actual request
write(dev, req, 4);
uint8_t resp[12];
read(dev, resp, 8); // does not terminate
I expect a result, 12 bytes long, but read just blocks. If I try the O_NONBLOCK/O_NDELAY option, read terminates with the EAGAIN error.
Of course I checked the return values of every syscall and library function; they all returned/terminated as expected. And as I mentioned above, the device works as it should when connected through a terminal program, so it can't be the firmware.
I also traced with strace what minicom and cutecom did, but they did nothing more than open, write and read. And of course I tried everything with sudo, so permissions are not the problem.
It seems like the device never accepted the request, so it never processes anything and the read blocks; something is probably wrong with the way the request is written. Did you try setting the speed of the port/interface?
I found the answer myself: the ICANON bit in the c_lflag field of the termios struct must be cleared, and the VMIN field in the c_cc array should be set to 1. A sketch follows after the links below.
Further explanations are found here:
https://www.gnu.org/software/libc/manual/html_node/Local-Modes.html#Local-Modes
https://www.gnu.org/software/libc/manual/html_node/Mode-Data-Types.html
(Be careful, some parts of this documentation are a little bit outdated; for example, the CCTS_OFLOW flag does not exist on Raspbian Buster.)
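Applied to the question's code, the fix might look roughly like this (clearing ECHO as well is an assumption; the exact flag set depends on the device):

#include <termios.h>

/* Clear ICANON so read() stops waiting for a newline, and make it
 * return once at least one byte is available (VMIN=1, VTIME=0). */
int fix_tty(int dev)
{
    struct termios tio;

    if (tcgetattr(dev, &tio) < 0)
        return -1;

    tio.c_lflag &= ~(ICANON | ECHO);   /* noncanonical input, no echo */
    tio.c_cc[VMIN]  = 1;
    tio.c_cc[VTIME] = 0;

    return tcsetattr(dev, TCSANOW, &tio);
}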

Speed up / modify tcdrain() function

I'll skirt round the long and tedious story of how we got where we are, but the situation is this:
We are using half-duplex RS485 serial comms and (by necessity) driving the TX/RX flag "manually" via GPIO pin toggling. In order to make this work we're using tcdrain() to wait until the Tx buffer is empty before flipping back to Rx mode.
The problem is that tcdrain() seems to wait (block) for quite a while after the last character has been transmitted, which causes us a bit of a bottleneck.
I've seen suggestions that the default tcdrain() code just estimates a worst-case transmission time from the baud rate and the (maximum) size of the serial buffer, sleep()s for that period, and then returns - and I could easily believe that.
So, can anyone suggest ways to either:
Speed up tcdrain() perhaps by shortening the serial buffer
Modify tcdrain() (or related code/parameters) to actually wait for the last character to be sent by the hardware, or wait for a period more closely related to the buffer contents
I've grepped our (embedded) kernel (2.6.x) code and can't see any references other than a single header file (termios.h).
Edit to add: as per this post, if for example we could reduce the serial Tx buffer to 1 byte using an ioctl, I assume the write() call would/could block while characters were written, then return, which would allow us to avoid relying on tcdrain() and just use a very short usleep() before toggling the Tx/Rx pin. I will experiment when I get a moment; in the meantime any suggestions/examples are welcome.
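As a concrete starting point for that experiment, here is a hedged sketch that polls the driver's output queue instead of calling tcdrain(); whether TIOCOUTQ and TIOCSERGETLSR are supported depends on the serial driver, particularly on a 2.6 kernel:

#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/serial.h>   /* TIOCSER_TEMT */

/* Poll the driver until its Tx queue is empty, then (optionally)
 * check the UART's transmitter-empty flag. Treat this as an
 * experiment, not a guaranteed replacement for tcdrain(). */
void wait_tx_empty(int fd)
{
    int queued, lsr;

    for (;;) {
        if (ioctl(fd, TIOCOUTQ, &queued) < 0)
            return;          /* unsupported: fall back to tcdrain() */
        if (queued == 0)
            break;
        usleep(100);         /* short pause between polls */
    }

    /* Optionally wait for the UART shift register to drain too. */
    while (ioctl(fd, TIOCSERGETLSR, &lsr) == 0 && !(lsr & TIOCSER_TEMT))
        usleep(100);
}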

RS232 Serial Pin Read in C in Linux

Is there any possibility to read values from the pins of the COM port? Any solution in C under Linux is appreciated!
Yes, see for instance this guide.
You use the ioctl() function to read the various control pins. Data is, of course, best read through the normal read() handling; you don't want to be polling asynchronous serial data.
I don't think your assumption (expressed in a comment) that the driver must check the pin states to handle data is correct; normally a serial port is "backed" by a UART, and that typically handles the RX/TX pins in hardware.
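Reading the modem-control pins via ioctl() looks roughly like this (CTS chosen as the example pin):

#include <sys/ioctl.h>
#include <termios.h>

/* Return 1 if CTS is asserted, 0 if not, -1 on error. The same
 * pattern works for TIOCM_DSR, TIOCM_CD and TIOCM_RI. */
int read_cts(int fd)
{
    int status;

    if (ioctl(fd, TIOCMGET, &status) < 0)
        return -1;

    return (status & TIOCM_CTS) ? 1 : 0;
}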
I'm pretty sure you can't read/write the pins of a UART.
Even at the hardware level, you have to read/write an entire byte. There is no bit access or read/write pin access.
The byte is read/written in the receive/transmit UART buffer.
Either way, you can't access the buffer directly; the Linux driver will do the job on your behalf. You just have to make use of the driver in your application. To work with the UART, the Linux UART driver provides standard APIs like open(), read(), write() and ioctl(), through which you interact with the UART device.
If you want to work with drivers and are new to this field, the best place to start is this book.
The exact answer to this question depends on the precise hardware in question. I know of a piece of code where I worked that, upon receiving the letter 'a' as the indication of bitrate, would poll the RX pin to detect the transitions between 0 and 1 and measure the "width" of the bits; it would then calculate the correct clock rate and configure the serial port to match the bitrate of the other end.
A "PC" type hardware solution will not be able to read the RX/TX pins. In other hardware, it may be possible to do so. Many embedded systems allow various pins to be configured as inputs, outputs or "have a function" (in our case, RX, TX, CTS, RTS, etc) - so for example, you could configure the RX pin to be a input, and thus read the state of it. Of course, the normal serial port drivers will probably set these pins to "have a function" [or expect the boot code running before the kernel is started to have configure it this way]. So you would have to reconfigure the pins in some kernel code of your own, most likely. Beware that this may cause unexpected side-effects with the driver for the actual serial port - it may "get upset" when it tries to do things to the serial port and it's "not working as expected" because it's been "misconfigured".
You can almost certainly read (and/or write) the state of the control pins, such as CTS, RTS via IOCTL calls.

Data structure for storing serial port data in firmware

I am sending data from a linux application through serial port to an embedded device.
In the current implementation a byte circular buffer is used in the firmware. (Nothing but an array with a read and write pointer)
As the bytes come in, they are written to the circular buffer.
Now the PC application appears to be sending the data too fast for the firmware to handle. Bytes are missed, resulting in the firmware returning WRONG_INPUT too many times.
I think the baud rate (115200) is not the issue. A more efficient data structure on the firmware side might help. Any suggestions on the choice of data structure?
A circular buffer is the best answer. It is the easiest way to model a hardware FIFO in pure software.
The real issue is likely to be either the way you are collecting bytes from the UART to put in the buffer, or overflow of that buffer.
At 115200 baud with the usual 1 start bit, 1 stop bit and 8 data bits, you can see as many as 11520 bytes per second arrive at that port. That gives you an average of just about 86.8 µs per byte to work with. In a PC, that will seem like a lot of time, but in a small microprocessor, it might not be all that many total instructions or in some cases very many I/O register accesses. If you overfill your buffer because bytes are arriving on average faster than you can consume them, then you will have errors.
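For reference, a minimal sketch of such an interrupt-fed circular buffer, assuming a single producer (the receive ISR) and a single consumer (the main loop); the power-of-two size keeps the index wrap cheap:

#include <stdint.h>

#define RB_SIZE 256                 /* power of two, sized for worst-case bursts */

static volatile uint8_t  rb_data[RB_SIZE];
static volatile uint16_t rb_head;   /* written only by the ISR       */
static volatile uint16_t rb_tail;   /* written only by the main loop */

/* Called from the UART receive interrupt with each incoming byte. */
void rb_put(uint8_t byte)
{
    uint16_t next = (rb_head + 1) & (RB_SIZE - 1);
    if (next != rb_tail) {          /* drop the byte if the buffer is full */
        rb_data[rb_head] = byte;
        rb_head = next;
    }
}

/* Called from the main loop; returns 1 and a byte if one is pending. */
int rb_get(uint8_t *byte)
{
    if (rb_tail == rb_head)
        return 0;                   /* empty */
    *byte = rb_data[rb_tail];
    rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
    return 1;
}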
Some general advice:
Don't do polled I/O.
Do use a Rx Ready interrupt.
Enable the receive FIFO, if available.
Empty the FIFO completely in the interrupt handler.
Make the ring buffer large enough.
Consider flow control.
Sizing your ring buffer large enough to hold a complete message is important. If your protocol has known limits on the message size, then you can use the higher levels of your protocol to do flow control and survive without the pains of getting XON/XOFF flow control to work right in all of the edge cases, or RTS/CTS to work as expected at both ends of the wire, which can be nearly as hairy.
If you can't make the ring buffer that large, then you will need some kind of flow control.
There is nothing better than a circular buffer.
You could use a slower baud rate or speed up the application in the firmware so that it can handle data coming at full speed.
If the output of the PC is in bursts it may help to make the buffer big enough to handle one burst.
The last option is to implement some form of flow control.
What do you mean by embedded device? I think most current DSPs and processors can easily handle this kind of load. The problem is not with the circular buffer, but with how you collect bytes from the serial port.
Does your UART have a hardware FIFO? If yes, then you should enable it. If you have an interrupt per byte, you can quickly get into trouble, especially if you are working with an OS or with virtual memory, where the IRQ cost can be quite high.
If your receiving firmware is very simple (no multitasking) and you don't have a hardware FIFO, polled mode can be a better solution than interrupt-driven, because then your processor is doing only UART data reception and you have no interrupt overhead.
Another problem might be the transfer protocol. For example, if you have a long packet of data that you have to checksum, and you do the whole checksum at the end of the packet, then all the processing time of the packet falls at its end, and that is why you may miss the beginning of the next packet.
So the circular buffer is fine, and you have two ways to improve:
- The way you interact with the hardware
- The protocol (packet length, acknowledgment, etc.)
Before trying to solve the problem, first you need to establish what the problem really is. Otherwise you might waste time trying to fix something that isn't actually broken.
Without knowing more about your set-up it's hard to give more specific advice. But you should investigate further to establish what exactly the hardware and software is currently doing when the bytes come in, and then what is the weak point where they're going missing.
A circular buffer with Interrupt driven IO will work on the smallest and slowest of embedded targets.
First try it at the lowest baud rate and only then try at high speeds.
Using a circular buffer in conjunction with an IRQ is an excellent suggestion. If your processor generates an interrupt each time a byte is received, take that byte and store it in the buffer. How you decide to empty that buffer depends on whether you are processing a stream of data or data packets. If you are processing a stream, simply have your background process remove the bytes from the buffer and process them first-in, first-out. If you are processing packets, then just keep filling the buffer until you have a complete packet. I've used the packet method successfully many times in the past. I would implement some type of flow control as well, to signal to the PC if something went wrong like a full buffer, or to indicate to the PC when it is ready for the next packet if packet-processing time is long.
You could implement something like an IP datagram, which contains a data length, an ID, and a checksum.
Edit:
Then you could hard-code some fixed maximum length for the packets, for example 1024 bytes or whatever makes sense for the device. The PC side would then check whether the queue at the device is full every time it writes a packet. The firmware side would run the checksum to see if all data is valid, and read up to the data length.
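Such a datagram-style header might look like this (field names, widths, and the checksum choice are illustrative assumptions):

#include <stdint.h>

/* Illustrative framing: a fixed header followed by `length`
 * payload bytes; the checksum lets the firmware reject bad frames. */
struct packet_hdr {
    uint16_t length;    /* number of payload bytes that follow */
    uint8_t  id;        /* sequence number, to spot lost packets */
    uint8_t  checksum;  /* e.g. 8-bit sum over the payload */
} __attribute__((packed));

/* Simple additive checksum the firmware can verify on receipt. */
static uint8_t checksum8(const uint8_t *data, uint16_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *data++;
    return sum;
}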
