I'm trying to set up half-duplex communication in my program. My RS485 transceiver uses the RTS flag (TIOCM_RTS) to toggle back and forth between transmit and receive. To send and receive data I need to change the RTS flag manually:
Set RTS to High.
Send data.
Set RTS to low.
int setRTS(int level) {
    int status;
    ioctl(ser_port, TIOCMGET, &status);
    if (level) {
        status |= TIOCM_RTS;
    } else {
        status &= ~TIOCM_RTS;
    }
    ioctl(ser_port, TIOCMSET, &status);
    return 1;
}
My question is: shouldn't the Linux kernel be able to switch RTS automatically?
And how can I ensure that the data was sent before calling setRTS(0)?
shouldn't the Linux kernel be able to switch RTS automatically?
Yes, there is a kernel framework for this, starting in Linux 3.0.
There are two ioctls in include/uapi/asm-generic/ioctls.h:
#define TIOCGRS485 0x542E
#define TIOCSRS485 0x542F
to retrieve and configure a tty serial port driver in RS-485 mode.
These ioctls use the struct serial_rs485:
/*
 * Serial interface for controlling RS485 settings on chips with suitable
 * support. Set with TIOCSRS485 and get with TIOCGRS485 if supported by your
 * platform. The set function returns the new state, with any unsupported bits
 * reverted appropriately.
 */
struct serial_rs485 {
    __u32 flags;                 /* RS485 feature flags */
#define SER_RS485_ENABLED        (1 << 0)  /* If enabled */
#define SER_RS485_RTS_ON_SEND    (1 << 1)  /* Logical level for RTS pin when sending */
#define SER_RS485_RTS_AFTER_SEND (1 << 2)  /* Logical level for RTS pin after sent */
#define SER_RS485_RX_DURING_TX   (1 << 4)
    __u32 delay_rts_before_send; /* Delay before send (milliseconds) */
    __u32 delay_rts_after_send;  /* Delay after send (milliseconds) */
    __u32 padding[5];            /* Memory is cheap, new structs are a royal PITA .. */
};
I've used this RS-485 capability on Atmel and Etrax SoCs, but otherwise implementation of these ioctls in Linux UART/USART drivers is sparse.
If your driver doesn't implement them, then consider implementing it yourself. You could use the implementation in drivers/tty/serial/atmel_serial.c as a guide. Also read the Linux kernel documentation for RS485.
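For reference, configuring the mode from user space looks roughly like this (a minimal sketch along the lines of the kernel documentation; the flag choices depend on how your transceiver is wired):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

int enable_rs485(int fd)
{
    struct serial_rs485 rs485conf;

    memset(&rs485conf, 0, sizeof(rs485conf));
    rs485conf.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
    rs485conf.delay_rts_after_send = 0;   /* ms to hold RTS after the last byte */

    if (ioctl(fd, TIOCSRS485, &rs485conf) < 0)
        return -1;                        /* driver has no RS-485 support */
    return 0;
}

Once this succeeds, the driver toggles RTS around each write() for you, and no manual setRTS() calls are needed.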
This can indeed be tricky: to do it proactively you need to know when the last byte cleared the UART engine, or at least when it entered (when the buffer went empty), and add a delay calculated from the baud rate and word length. This is something that can be worth implementing in the serial driver itself, where all of that is visible.
However, this problem is most commonly encountered on a shared bus where you also receive everything you transmit. If that is the case, you can use receiving the end of your own transmission (assuming you discover that promptly) as the trigger to disable the driver.
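From user space, the closest you can get without driver support is to drain the output queue before dropping RTS. A sketch, reusing setRTS() from the question (note that on some UARTs the last word may still be in the shift register when tcdrain() returns, so a short baud-rate-dependent delay can still be needed):

setRTS(1);                  /* enable the transmitter */
write(ser_port, buf, len);
tcdrain(ser_port);          /* block until the kernel has transmitted the output */
setRTS(0);                  /* back to receive */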
Here's what I have: a PCA9555 chip that has inputs; if a signal state on an input changes, an interrupt signal is sent. Then I can read the chip via I2C to check the inputs.
What I need: when a pin changes state, I need to read the chip, check which pin changed state, and notify my app about it.
So I have an interrupt and the interrupt handler MUST NOT block the MCU.
My obvious choice is using HAL_I2C_Mem_Read_IT(), right?
I wrote the whole thing and tested it. It seemed to work... for a while.
Until I added reading the chip every 100 ms or so.
The code still works, but I see the blinking things stutter and stop blinking for more than a second, even two. So it became obvious that HAL_I2C_Mem_Read_IT() BLOCKS inside my interrupt handler, which causes the MCU to freeze.
I checked the HAL sources and found this:
static HAL_StatusTypeDef I2C_RequestMemoryRead(I2C_HandleTypeDef *hi2c, uint16_t DevAddress,
                                               uint16_t MemAddress, uint16_t MemAddSize, uint32_t Timeout,
                                               uint32_t Tickstart)
{
    I2C_TransferConfig(hi2c, DevAddress, (uint8_t)MemAddSize, I2C_SOFTEND_MODE, I2C_GENERATE_START_WRITE);

    /* Wait until TXIS flag is set */
    if (I2C_WaitOnTXISFlagUntilTimeout(hi2c, Timeout, Tickstart) != HAL_OK)
    {
        return HAL_ERROR;
    }

    /* If Memory address size is 8Bit */
    if (MemAddSize == I2C_MEMADD_SIZE_8BIT)
    {
        /* Send Memory Address */
        hi2c->Instance->TXDR = I2C_MEM_ADD_LSB(MemAddress);
    }
    /* If Memory address size is 16Bit */
    else
    {
        /* Send MSB of Memory Address */
        hi2c->Instance->TXDR = I2C_MEM_ADD_MSB(MemAddress);

        /* Wait until TXIS flag is set */
        if (I2C_WaitOnTXISFlagUntilTimeout(hi2c, Timeout, Tickstart) != HAL_OK)
        {
            return HAL_ERROR;
        }

        /* Send LSB of Memory Address */
        hi2c->Instance->TXDR = I2C_MEM_ADD_LSB(MemAddress);
    }

    /* Wait until TC flag is set */
    if (I2C_WaitOnFlagUntilTimeout(hi2c, I2C_FLAG_TC, RESET, Timeout, Tickstart) != HAL_OK)
    {
        return HAL_ERROR;
    }

    return HAL_OK;
}
As the name I2C_WaitOnTXISFlagUntilTimeout() suggests - it WAITS. Yes, it's a while loop that blocks the executing thread until a flag is set:
static HAL_StatusTypeDef I2C_WaitOnFlagUntilTimeout(I2C_HandleTypeDef *hi2c, uint32_t Flag, FlagStatus Status,
                                                    uint32_t Timeout, uint32_t Tickstart)
{
    while (__HAL_I2C_GET_FLAG(hi2c, Flag) == Status)
    {
        /* Check for the Timeout */
        if (Timeout != HAL_MAX_DELAY)
        {
            if (((HAL_GetTick() - Tickstart) > Timeout) || (Timeout == 0U))
            {
                hi2c->ErrorCode |= HAL_I2C_ERROR_TIMEOUT;
                hi2c->State = HAL_I2C_STATE_READY;
                hi2c->Mode = HAL_I2C_MODE_NONE;

                /* Process Unlocked */
                __HAL_UNLOCK(hi2c);

                return HAL_ERROR;
            }
        }
    }
    return HAL_OK;
}
There are three of those blocking wait functions in this path.
For my application this is a show stopper. It just doesn't work, since it depends on handling events in real time. It also has a GUI that freezes when the interrupt handler blocks.
Is there a quick workaround for this? Is it a bug in HAL driver?
Do I have to implement my own non-blocking function? It looks like many, many hours of coding, since the function is non-trivial and tightly coupled with the rest of the module.
My idea is to rewrite it, replacing the while loops with a non-blocking delay function that uses a timer interrupt to continue the work after some time passes. To make it more non-trivial, each callback would have to receive the necessary state data to continue. Then a state machine figures out where we are in the I2C_RequestMemoryRead process. At the end I just call the registered callback and I'm done. It should work truly non-blocking...
But I have deadlines. Can it be done faster? How is it even possible that a HAL "_IT" function BLOCKS the thread with while loops? It's just wrong! It defeats the entire purpose of an "interrupt mode" function. If it's allowed to block, there already IS a blocking version that is simpler.
I solved the problem by patching the original HAL driver.
https://gist.github.com/HTD/e36fb68488742f27a737a5d096170623
After adding the files to the project, the original HAL driver needs to be modified as described in the stm32h7xx_hal_i2c_nb.h comment.
For use with STM32CubeIDE it's best to move the modified driver file to a different location to prevent the IDE from overwriting it.
I left the other *_IT functions unchanged, but it's very easy to apply similar modifications to all the remaining functions.
The patched driver was tested in a real-world application on my STM32H747I-DISCO board with the PCA9555 16-bit I/O expander.
For the test I spammed the input with random noise signals that crashed the original driver. The patched version works, doesn't block other threads, and the GUI runs at full speed.
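For anyone sketching a similar fix: the core idea is to replace each busy-wait with a state that is re-entered from the I2C event interrupt. A rough illustration of that pattern (hypothetical names and a simplified flow, not the actual code from the gist):

/* Instead of spinning on a flag, remember where we are and return;
 * the I2C event ISR (or a timer tick, for the timeout) calls
 * nb_step() again whenever a flag may have changed. */
typedef enum { NB_IDLE, NB_WAIT_TXIS, NB_WAIT_TC } nb_state_t;

typedef struct {
    I2C_HandleTypeDef *hi2c;
    nb_state_t state;
    uint16_t mem_addr;
    void (*done)(HAL_StatusTypeDef result);    /* completion callback */
} nb_ctx_t;

void nb_step(nb_ctx_t *ctx)
{
    switch (ctx->state) {
    case NB_WAIT_TXIS:
        if (__HAL_I2C_GET_FLAG(ctx->hi2c, I2C_FLAG_TXIS) == RESET)
            return;                            /* not ready; retry on next event */
        ctx->hi2c->Instance->TXDR = (uint8_t)ctx->mem_addr;
        ctx->state = NB_WAIT_TC;
        return;
    case NB_WAIT_TC:
        if (__HAL_I2C_GET_FLAG(ctx->hi2c, I2C_FLAG_TC) == RESET)
            return;
        ctx->state = NB_IDLE;
        ctx->done(HAL_OK);                     /* hand off to the read phase */
        return;
    default:
        return;
    }
}

Each call either makes progress or returns immediately, so nothing ever blocks the calling interrupt.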
I am looking at the following function which retargets stdout to UART in the STM32 Std peripheral library.
int _write(int fd, char *ptr, int len)
{
    uint32_t primask = __get_PRIMASK();
    __disable_irq();

    for (int i = 0; i < len; i++) {
        while (USART_GetFlagStatus(RETARGET_CFG_UART, USART_FLAG_TXE) == RESET) {
        }
        USART_SendData(RETARGET_CFG_UART, (uint8_t)ptr[i]);
    }

    if (!primask) {
        __enable_irq();
    }

    return len;
}
Before transmitting over UART it masks all exceptions that have a configurable priority via __disable_irq() (which, as I understand it, includes all peripheral and GPIO interrupts).
My question is, why is this UART tx implemented this way?
Can the disable/enable irq calls be removed so that other functionality in the program is not delayed due to potentially lengthy data transactions?
I suspect the author of that code is disabling all interrupts just because there might be some interrupts that write bytes to the UART, and the author wants to ensure that all the bytes sent to the _write function get written consecutively, instead of having other, unrelated bytes in the middle.
I'm assuming all the ISRs defined in your system are relatively quick compared to the serial transmission, and the receiver doesn't care if there are some small delays in the transmission. So it should be OK to leave any interrupts enabled if those interrupts are not related to the UART.
You should get a firm grasp of all the ISRs in your system and what they do, and then you should use that to decide which specific IRQs should be disabled during this operation (if any).
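For example, rather than masking everything, you could disable only the specific interrupt(s) that also write to the UART. The IRQ name below is a placeholder; substitute whichever ISR in your system actually prints:

NVIC_DisableIRQ(EXTI0_IRQn);   /* only the ISR that also writes to the UART */
/* ... copy the bytes to the UART ... */
NVIC_EnableIRQ(EXTI0_IRQn);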
It's a quasi-bad design.
For one thing, always calling __disable_irq but only conditionally calling __enable_irq is suspicious.
It prevents nested calls where the caller is also doing __disable_irq/__enable_irq.
It assumes that a Tx ISR is not involved, i.e. that the UART is running in polled mode. But if that were true, why does the code disable/enable interrupts at all?
If there is more data to send to the UART Tx than can be put into the Tx FIFO, the code [as you've seen] will block.
This means that the base/task level can't do other things.
When I write such UART code, I use a ring queue. I define one that is large enough to accommodate any bursts (e.g. a queue of 10,000 bytes).
The design assumes that if _write is called and the ring queue becomes full before all data is added, the ring queue size should be increased (i.e. a full ring queue is a [fatal] design error).
Otherwise, the base/task level is trying to send too much data. That is, it's generating more data than can be sent at the Tx baud rate.
The _write process:
Copy bytes to the ring queue.
Copy as many bytes from the queue to the UART as space in the UART Tx FIFO allows.
The Tx ISR is called when space becomes available; it repeats step (2).
With this, _write will not block if the UART Tx FIFO becomes full.
The Tx ISR will pick up the slack. That is, when there is more space available in the FIFO, and the Tx is "ready", the Tx ISR will be called.
Here is some pseudo code to illustrate what I mean:
// push_tx_data -- copy data from Tx ring queue into UART Tx FIFO
// returns the number of bytes pushed
int
push_tx_data(void)
{
    int pushed = 0;

    // fill Tx FIFO with as much data as it can hold
    while (1) {
        // stop if no free space in Tx FIFO (TXE not set)
        if (USART_GetFlagStatus(RETARGET_CFG_UART, USART_FLAG_TXE) == RESET)
            break;

        // dequeue byte to transmit (stop if queue empty)
        int i = tx_queue_dequeue();
        if (i < 0)
            break;

        // put this data byte into the UART Tx FIFO
        USART_SendData(RETARGET_CFG_UART, (uint8_t) i);
        ++pushed;
    }

    return pushed;
}

// tx_ISR -- handle interrupt from UART Tx
void
tx_ISR(void)
{
    // presumably interrupts are already disabled in the ISR ...

    // copy all the data we can
    push_tx_data();

    // clear the interrupt pending flag in the interrupt controller or UART if
    // necessary (i.e. if USART_SendData didn't clear it when the FIFO was full)
}

// _write -- send data
int
_write(int fd, char *ptr, int len)
{
    uint32_t primask = __get_PRIMASK();

    // running from task level so disable interrupts
    if (! primask)
        __disable_irq();

    int sent = 0;

    // add to [S/W] tx ring queue
    for (int i = 0; i < len; i++) {
        if (! tx_queue_enqueue(ptr[i]))
            break;
        ++sent;
    }

    // send data from queue into FIFO
    push_tx_data();

    // reenable interrupts (task level)
    if (! primask)
        __enable_irq();

    return sent;
}
I am extremely disconcerted.
I'm making a remote controlled machine using a pi pico to drive the motors and read some sensors, and a raspberry pi 4 to send commands to the pi pico via serial and host the web interface.
The following code seems to work... but if I remove the if with uart_is_writable and its contents, it doesn't work. Does anyone have any idea why?
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
#include "pico/stdlib.h"
#include "hardware/uart.h"
#include "hardware/irq.h"

// DEFINES
#define UART_ID uart0
#define BAUD_RATE 19200
#define DATA_BITS 8
#define STOP_BITS 1
#define PARITY UART_PARITY_NONE
#define UART_TX_PIN 0
#define UART_RX_PIN 1
#define LED_PIN PICO_DEFAULT_LED_PIN

static int chars_rxed = 0;
volatile char uCommand[32] = {0, 0};
volatile bool new_command = false;  // note: this declaration was missing from the original listing

void on_uart_rx(void) {
    char tmp_string[] = {0, 0};
    new_command = true;
    while (uart_is_readable(UART_ID)) {
        uint8_t ch = uart_getc(UART_ID);
        tmp_string[0] = ch;
        strcat((char *)uCommand, tmp_string);
        if (uart_is_writable(UART_ID)) {
            uart_putc(UART_ID, '-');
            uart_puts(UART_ID, (const char *)uCommand);
            uart_putc(UART_ID, '-');
        }
        chars_rxed++;
    }
}

int main() {
    uart_init(UART_ID, BAUD_RATE);
    gpio_set_function(UART_TX_PIN, GPIO_FUNC_UART);
    gpio_set_function(UART_RX_PIN, GPIO_FUNC_UART);
    uart_set_hw_flow(UART_ID, false, false);
    uart_set_format(UART_ID, DATA_BITS, STOP_BITS, PARITY);
    uart_set_fifo_enabled(UART_ID, false);

    int UART_IRQ = UART_ID == uart0 ? UART0_IRQ : UART1_IRQ;
    irq_set_exclusive_handler(UART_IRQ, on_uart_rx);
    irq_set_enabled(UART_IRQ, true);
    uart_set_irq_enables(UART_ID, true, false);

    uart_puts(UART_ID, "\nOK\n");

    while (1) {
        tight_loop_contents();
        if (uCommand[0] != 0) {
            uart_putc(UART_ID, '/');
            uart_puts(UART_ID, (const char *)uCommand);
            memset((void *)uCommand, 0, sizeof(uCommand));
        }
    }
}
The example code you linked to is for a simple tty/echo implementation.
You'll need to tweak it for your use case.
Because Tx interrupts are disabled, all output to the transmitter has to be polled I/O. Also, the FIFOs in the UART are disabled, so only single-char I/O is used.
Hence the uart_is_writable call to check whether a char can be transmitted.
The linked code ...
echoes back the received char in the Rx ISR, so it needs to call the above function. Note that if the Tx is not ready (i.e. full), the char is not echoed and is dropped.
I don't know whether uart_putc and uart_puts check for ready-to-transmit in this manner.
So, I'll assume that they do not.
This means that if you call uart_putc/uart_puts and the Tx is full, the current/pending char in the UART may get trashed/corrupted.
So, uart_is_writable should be called for any/each char to be sent.
Or ... uart_putc/uart_puts do check, and will block until space is available in the UART Tx FIFO. For your use case, such blocking is not desirable.
What you want ...
Side note: I have done similar [product/commercial grade] programming on an RPi for motor control via a uart, so some of this is from my own experience.
For your use case, you do not want to echo the received char. You want to append it to a receive buffer.
To implement this, you probably want to use ring queues: one for received chars and one for chars to be transmitted (see the sketch after the service-loop steps below).
I assume you have [or will have] devised some sort of simple packet protocol to send/receive commands. The RPi sends commands that are (e.g.):
Set motor speed
Get current sensor data
The other side should respond to these commands and execute the desired action or return the desired data.
Both processors probably need to have similar service loops and ISRs.
The Rx ISR just checks for space available in the Rx ring queue. If space is available, it gets a char from the uart and enqueues it. If no Rx char is available in the uart, the ISR may exit.
The base level code service loop should:
Check if the uart Tx can accept another character (via uart_is_writable) and, if so, it can dequeue a char from the Tx ring queue [if available] and send it (via uart_putc). It can loop on this to keep the uart transmitter busy.
Check to see if enough chars are received to form a packet/message from the other side. If there is such a packet, it can service the "request" contained in it [dequeueing the chars to make more space in the Rx ring queue].
If the base level needs to send a message, it should enqueue it to the Tx ring queue. It will be sent [eventually] in the prior step.
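As a rough illustration, here is a minimal single-producer/single-consumer ring queue plus a Tx service-loop fragment (sizes and names are mine, not from the Pico SDK):

#define RQ_SIZE 256                      // power of two simplifies wrap-around

typedef struct {
    volatile uint16_t head, tail;        // head: next write, tail: next read
    uint8_t buf[RQ_SIZE];
} ring_q_t;

static bool rq_put(ring_q_t *q, uint8_t c) {
    uint16_t next = (q->head + 1) % RQ_SIZE;
    if (next == q->tail)                 // full
        return false;
    q->buf[q->head] = c;
    q->head = next;
    return true;
}

static int rq_get(ring_q_t *q) {
    if (q->tail == q->head)              // empty
        return -1;
    int c = q->buf[q->tail];
    q->tail = (q->tail + 1) % RQ_SIZE;
    return c;
}

// service-loop fragment: drain the Tx queue while the UART can accept data
static ring_q_t tx_q;

static void service_tx(void) {
    while (uart_is_writable(UART_ID)) {
        int c = rq_get(&tx_q);
        if (c < 0)
            break;
        uart_putc_raw(UART_ID, (char)c); // raw: no CR/LF translation
    }
}

With one producer and one consumer (ISR vs. main loop), this queue is safe without disabling interrupts, because head and tail are each written by only one side.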
Some additional thoughts ...
The linked code only enables Rx interrupts and runs the Tx in polled mode. This may be enough. But for maximum throughput and lowest latency, you may want to make the Tx interrupt-driven as well.
You may also wish to enable the FIFOs in the UART so you can queue up multiple characters. This can reduce the number of calls to the ISR and provide better throughput/latency because the service loop doesn't have to poll so often.
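In the Pico SDK that would look something like this (a sketch; the ISR then has to check which interrupt fired and both drain the Rx FIFO and refill the Tx FIFO):

uart_set_fifo_enabled(UART_ID, true);       // use the hardware FIFOs
uart_set_irq_enables(UART_ID, true, true);  // enable both Rx and Tx interrupts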
I'm trying to implement a timeout for some hardware transmissions, to add robustness to a big project. I already implemented a timeout using select for UART transmission, but I don't know how to add a timeout to an SPI transmission.
This is my reading code:
#include <string.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

int spi_read(int fd, char command, char *buffer, int size, int timeout)
{
    struct spi_ioc_transfer xfer[2];
    int status;

    memset(buffer, 0, size);     /* not sizeof(buffer): that is just the pointer size */
    memset(xfer, 0, sizeof(xfer));

    xfer[0].tx_buf = (unsigned long)&command;  /* tx_buf/rx_buf hold a user pointer */
    xfer[0].len = 1;

    xfer[1].rx_buf = (unsigned long)buffer;
    xfer[1].len = size;

    status = ioctl(fd, SPI_IOC_MESSAGE(2), xfer);
    if (status < 0)
        return EHWFAULT1;        /* ioctl failed */
    else
        return NOERROR;
}
It sends a one-byte command and receives a response of a certain size (in half-duplex mode). How can I implement a timeout for the response? Can it be implemented using select? Should I separate the two transactions and use select, or use an alarm instead?
Then, I have the same question for full-duplex mode, which is also implemented using ioctl. Can you give me any hints?
In hardware, the SPI master does not 'wait' for a response. By definition, the SPI master provides the clock cycles and the slave must reply; the concept of waiting for a response doesn't apply to the SPI bus. (I'm assuming you're operating the SPI master.)
(Deeper in the protocol, the master might poll the hardware to see if it's done/ready; but the SPI bus itself is getting an immediate answer every time.)
To clarify: the SPI master clocks in whatever is on the MISO pin. Whatever level is on the MISO pin is the reply, even if the slave is not explicitly driving a level. The only way to detect a non-responsive slave is to pull MISO up/down in a way that cannot be interpreted as a valid message.
I am running into a timing issue with a serial port in C. Long story short, I send a command to a device at one baud rate (say 9600) and expect a response from it at another (say 38400). I have no control over this protocol. A prototype of the situation would look like the following:
open() serial port
set baud rate to 9600 using termios struct
write() command
change baud rate to 38400 using termios struct
read() response
I am running into a problem where the device does not understand the command I sent, because the baud rate changes to 38400 before the write completes. I am pretty certain the write itself works fine because it returns the number of bytes I intended to write. I tried adding a usleep(100000) after the call to write, and while that works sometimes, I cannot guarantee the entire message will be transmitted at 9600 before I change the baud rate and read a response. I also tried flushing the buffer with tcflush(fd, TCOFLUSH), but that discards data, which I do not want, so it is also not the correct approach.
How can I force all the serial data to be written and be guaranteed it has gone out before the next call changes the baud rate to 38400? This seems to be happening at the chip level, so is my only hope to include the FTDI libraries (it is an FTDI chip) and access the registers to see when the data is done being transmitted? Thanks.
On Windows, you would use WaitCommEvent with a mask that includes EV_TXEMPTY to wait for the message to be sent out by the driver.
Why not just set the transmitter to 9600 and the receiver to 38400? Most common serial port hardware supports this.
#include <errno.h>
#include <error.h>
#include <string.h>
#include <termios.h>

// fd = file descriptor of serial port. For example:
//   int fd = open ("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_SYNC);
int
somefunction (int fd)
{
    struct termios tty;

    memset (&tty, 0, sizeof tty);
    if (tcgetattr (fd, &tty) != 0)
    {
        error ("error %d from tcgetattr: %s", errno, strerror (errno));
        return -1;
    }

    cfsetospeed (&tty, B9600);   // set tty transmitter to 9600
    cfsetispeed (&tty, B38400);  // set receiver to 38400

    if (tcsetattr (fd, TCSADRAIN, &tty) != 0)  // waits until pending output is done before changing
    {
        error ("error %d from tcsetattr", errno);
        return -1;
    }
    return 0;
}
I've amended my code to use TCSADRAIN instead of TCSANOW so that the rate change does not occur until after all pending output has been sent.
You should be using tcdrain() or the TCSADRAIN optional action for tcsetattr() instead of tcflush().
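For instance, a minimal sketch of the drain-then-switch sequence (assuming fd and tty are set up as in the answer above; cmd/cmd_len are placeholders for your command buffer):

write (fd, cmd, cmd_len);      /* send the command at 9600 */
tcdrain (fd);                  /* block until all queued output has been transmitted */
cfsetispeed (&tty, B38400);    /* then switch speeds for the response */
cfsetospeed (&tty, B38400);
tcsetattr (fd, TCSANOW, &tty);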
I used the Windows FTDI API and found their entire model to be annoyingly asynchronous. With a normal serial port I would expect you could do the math on your message length (in bits, including start, stop, and parity), divide by the baud rate, and have a reasonable estimate of how long to sleep until your message is transmitted. However, with the FTDI part you're dealing with USB latencies and unknown queuing in the FTDI part itself. If your device's replies come in under 1 ms, it might not even be possible to turn the FTDI around reliably between your TX and RX phases.
Would it be possible to hook up the RX to a different UART? That would greatly simplify your problem.
If not, you might consider using a special cable that connects your TX back to your RX, so that you can see your message go out before you cut over. With a diode you could prevent the device from also seeing its own transmissions. 3/4 duplex?