SPI timeout in Linux and C

I'm trying to implement a timeout for some hardware transmissions to make a large project more robust. I already implemented a timeout using select() for UART transmission, but I don't know how to add a timeout to an SPI transfer.
This is my reading code:
int spi_read(int fd, char command, char* buffer, int size, int timeout)
{
    struct spi_ioc_transfer xfer[2];
    int status;

    memset(buffer, 0, size);            /* sizeof(buffer) would only clear a pointer's worth */
    memset(xfer, 0, sizeof(xfer));

    /* transfer 0: send the one-byte command */
    xfer[0].tx_buf = (unsigned long)&command;
    xfer[0].len = 1;

    /* transfer 1: read the response */
    xfer[1].rx_buf = (unsigned long)buffer;
    xfer[1].len = size;

    status = ioctl(fd, SPI_IOC_MESSAGE(2), xfer);
    if(status < 0)
        return EHWFAULT1;               /* ioctl failed */
    return NOERROR;                     /* note: timeout is currently unused */
}
It sends a one-byte command and receives a response of a certain size (in half-duplex mode). How can I implement a timeout on the response? Can it be done with select()? Should I separate the two transfers and use select(), or is an alarm better?
Then, I have the same question for full-duplex mode, which is also implemented using ioctl. Can you give me any hints?

In hardware, the SPI master does not 'wait' for a response. By definition, the SPI master provides the clock cycles and the slave must reply; the concept of waiting for a response doesn't apply to the SPI bus. (I'm assuming you're operating the SPI master.)
(Deeper in the protocol, the master might poll the slave to see whether it's done/ready, but the SPI bus itself gets an immediate answer every time.)
To clarify: the SPI master clocks in whatever is on the MISO pin. Whatever level is on MISO is the reply, even if the slave is not explicitly driving it. The only way to detect a non-responsive slave is to pull MISO up or down to a level that cannot be interpreted as a valid message.
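To turn that into a software-level check, here is a minimal sketch. It assumes the board actually has a pull-up on MISO, so a dead slave reads back as all 0xFF; spi_read, NOERROR and EHWFAULT1 are the names from the question:
int spi_read_checked(int fd, char command, char *buffer, int size)
{
    int status = spi_read(fd, command, buffer, size, 0);
    if (status != NOERROR)
        return status;

    /* with a pull-up on MISO, an unresponsive slave reads as all 0xFF */
    for (int i = 0; i < size; i++)
        if ((unsigned char)buffer[i] != 0xFF)
            return NOERROR;    /* at least one driven byte: slave answered */
    return EHWFAULT1;          /* bus idle level only: treat as a "timeout" */
}
Whether all-0xFF (or all-0x00, with a pull-down) can be ruled out as a valid message depends on your slave's protocol, so this check must be matched to the actual hardware.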

Related

Why are all irq disabled for retarget write on STM32?

I am looking at the following function which retargets stdout to UART in the STM32 Std peripheral library.
int _write(int fd, char *ptr, int len) {
    uint32_t primask = __get_PRIMASK();
    __disable_irq();
    for (int i = 0; i < len; i++) {
        while (USART_GetFlagStatus(RETARGET_CFG_UART, USART_FLAG_TXE) == RESET) {
        }
        USART_SendData(RETARGET_CFG_UART, (uint8_t)*(ptr + i));
    }
    if (!primask) {
        __enable_irq();
    }
    return len;
}
Before transmitting over UART it masks all exceptions that have a configurable priority via __disable_irq() (which I understand includes all peripheral and GPIO interrupts).
My question is, why is this UART tx implemented this way?
Can the disable/enable irq calls be removed so that other functionality in the program is not delayed due to potentially lengthy data transactions?
I suspect the author of that code is disabling all interrupts just because there might be some interrupts that write bytes to the UART, and the author wants to ensure that all the bytes sent to the _write function get written consecutively, instead of having other, unrelated bytes in the middle.
I'm assuming all the ISRs defined in your system are relatively quick compared to the serial transmission, and the receiver doesn't care if there are some small delays in the transmission. So it should be OK to leave any interrupts enabled if those interrupts are not related to the UART.
You should get a firm grasp of all the ISRs in your system and what they do, and then you should use that to decide which specific IRQs should be disabled during this operation (if any).
It's a quasi-bad design.
For one thing, unconditionally calling __disable_irq but only conditionally calling __enable_irq is suspicious.
It breaks nested use, where the caller is also doing its own __disable_irq/__enable_irq pairing.
It assumes that no Tx ISR is involved, i.e. the UART is running in polled mode. But if that were true, why disable/enable interrupts at all?
If there is more data to send to the UART Tx than can be put into the Tx FIFO, the code [as you've seen] will block.
This means that the base/task level can't do other things.
When I do such UART code, I use a ring queue, defined large enough to accommodate any bursts (e.g. a queue of 10,000 bytes).
The design assumes that if the _write is called and the ring queue becomes full before all data is added, the ring queue size should be increased (i.e. ring queue full is a [fatal] design error).
Otherwise, the base/task level is trying to send too much data. That is, it's generating more data than can be sent at the Tx baud rate.
The _write process:
1. Copy bytes to the ring queue.
2. Copy as many bytes from the queue to the UART as space in the UART Tx FIFO allows.
3. The Tx ISR is called when FIFO space becomes available; it repeats step (2).
With this, _write will not block if the UART Tx FIFO becomes full.
The Tx ISR will pick up the slack. That is, when there is more space available in the FIFO, and the Tx is "ready", the Tx ISR will be called.
Here is some pseudo code to illustrate what I mean:
// push_tx_data -- copy data from Tx ring queue into UART Tx FIFO
void
push_tx_data(void)
{
    // fill Tx FIFO with as much data as it can hold
    while (1) {
        // stop if there is no free space in the Tx FIFO (TXE not set)
        if (USART_GetFlagStatus(RETARGET_CFG_UART, USART_FLAG_TXE) == RESET)
            break;

        // dequeue byte to transmit (stop if queue empty)
        int i = tx_queue_dequeue();
        if (i < 0)
            break;

        // put this data byte into the UART Tx FIFO
        USART_SendData(RETARGET_CFG_UART, (uint8_t) i);
    }
}

// tx_ISR -- handle interrupt from UART Tx
void
tx_ISR(void)
{
    // presumably interrupts are already disabled in the ISR ...

    // copy all the data we can
    push_tx_data();

    // clear the interrupt pending flag in the interrupt controller if
    // necessary (i.e. USART_SendData didn't clear it because the FIFO was full)
}

// _write -- send data
int
_write(int fd, char *ptr, int len)
{
    uint32_t primask = __get_PRIMASK();

    // running from task level so disable interrupts
    if (! primask)
        __disable_irq();

    int sent = 0;

    // add to [S/W] tx ring queue
    for (int i = 0; i < len; i++) {
        if (! tx_queue_enqueue(ptr[i]))
            break;
        ++sent;
    }

    // send data from queue into FIFO
    push_tx_data();

    // reenable interrupts (task level)
    if (! primask)
        __enable_irq();

    return sent;
}
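The pseudo code above leaves tx_queue_enqueue/tx_queue_dequeue undefined. For completeness, here is a minimal single-producer/single-consumer ring queue they could sit on; the queue size and types are illustrative only:
#include <stdint.h>

#define TX_QUEUE_SIZE 10000

static volatile unsigned tx_head;       // next slot to write (task level)
static volatile unsigned tx_tail;       // next slot to read (Tx ISR)
static uint8_t tx_queue[TX_QUEUE_SIZE];

// returns 1 on success, 0 if the queue is full
static int tx_queue_enqueue(uint8_t c)
{
    unsigned next = (tx_head + 1) % TX_QUEUE_SIZE;
    if (next == tx_tail)                // full: head would catch the tail
        return 0;
    tx_queue[tx_head] = c;
    tx_head = next;
    return 1;
}

// returns the next byte, or -1 if the queue is empty
static int tx_queue_dequeue(void)
{
    if (tx_tail == tx_head)             // empty
        return -1;
    uint8_t c = tx_queue[tx_tail];
    tx_tail = (tx_tail + 1) % TX_QUEUE_SIZE;
    return c;
}
This works because the enqueue side runs with interrupts disabled and the dequeue side runs in the ISR, so each index has a single writer.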

Waiting for serial transmission to complete in Win32

I seem to be having a bit of trouble in waiting for the completion of serial data transmissions.
My interpretation of the relevant MSDN article is the EV_TXEMPTY event is the correct signal and which indicates that:
EV_TXEMPTY - The last character in the output buffer was sent.
However, in my tests the event always fires immediately, as soon as the data has been submitted to the buffer and long before the final character has actually reached the wire. See the repro code below, where the period is always zero.
Have I made an error in the implementation, am I misunderstanding the purpose of the flag, or is this feature simply not supported by modern drivers? In the latter case is there a viable workaround, say some form of synchronous line state request?
For the record the tests were conducted with FTDI USB-RS485 and TTL-232R devices in a Windows 10 system, a USB-SERIAL CH340 interface on a Windows 7 system, as well as the on-board serial interface of a 2005-vintage Windows XP machine. In the FTDI case sniffing the USB bus reveals only bulk out transactions and no obvious interrupt notification of the completion.
#include <stdio.h>
#include <windows.h>

static int fatal(void) {
    fprintf(stderr, "Error: I/O error\n");
    return 1;
}

int main(int argc, const char *argv[]) {
    static const char payload[] = "Hello, World!";
    // Use a suitably low bitrate to maximize the delay
    enum { BAUDRATE = 300 };

    // Ask for the port name on the command line
    if(argc != 2) {
        fprintf(stderr, "Syntax: %s {COMx}\n", argv[0]);
        return 1;
    }
    char path[MAX_PATH];
    snprintf(path, sizeof path, "\\\\.\\%s", argv[1]);

    // Open and configure the serial device
    HANDLE handle = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                                OPEN_EXISTING, 0, NULL);
    if(handle == INVALID_HANDLE_VALUE)
        return fatal();
    DCB dcb = {
        .DCBlength = sizeof dcb,
        .BaudRate = BAUDRATE,
        .fBinary = TRUE,
        .ByteSize = DATABITS_8,
        .Parity = NOPARITY,
        .StopBits = ONESTOPBIT
    };
    if(!SetCommState(handle, &dcb))
        return fatal();
    if(!SetCommMask(handle, EV_TXEMPTY))
        return fatal();

    // Fire off a write request
    DWORD written;
    unsigned int elapsed = GetTickCount();
    if(!WriteFile(handle, payload, sizeof payload, &written, NULL) ||
       written != sizeof payload)
        return fatal();

    // Wait for transmit completion and measure time elapsed
    DWORD event;
    if(!WaitCommEvent(handle, &event, NULL))
        return fatal();
    if(!(event & EV_TXEMPTY))
        return fatal();
    elapsed = GetTickCount() - elapsed;

    // Display the final result
    const unsigned int expected_time =
        (sizeof payload * 1000 /* ms */ * 10 /* bits/char */) / BAUDRATE;
    printf("Completed in %ums, expected %ums\n", elapsed, expected_time);
    return 0;
}
The background is that this is part of a Modbus RTU protocol test suite where I am attempting to inject >3.5 character idle delays between characters on the wire to validate device response.
Admittedly, an embedded realtime system would have been far more suitable for the task, but for various reasons I would prefer to stick to a Windows environment while controlling the timing as best as possible.
According to the comments by @Hans Passant and @RbMm, the output buffer referred to in the EV_TXEMPTY documentation is an intermediate buffer, and the event merely indicates that data has been forwarded to the driver. No equivalent notification event is defined which encompasses the full chain down to the final device buffers.
No general workaround is presently clear to me, short of a manual delay based upon the bitrate plus a significant worst-case margin for any remaining buffer layers to be traversed, inter-character gaps, clock skew, etc.
I would therefore very much appreciate answers with better alternate solutions.
Nevertheless, for my specific application I have implemented a viable workaround.
The target hardware is a half-duplex bus with an FTDI RS485 interface. This particular device offers an optional local-echo mode in which data transmitted onto the bus is not filtered out of the receive path.
After each transmission I am therefore able to wait for the expected echo to appear as a round-trip confirmation. In addition, this serves to detect certain faults such as a short-circuited bus.
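For reference, a minimal sketch of that echo check (a hypothetical helper, not part of the test program above; it assumes the port was opened with GENERIC_READ | GENERIC_WRITE, local echo is active, and a worst-case timeout derived from the bitrate):
#include <string.h>   // memcmp

// Wait for the transmitted bytes to echo back as a round-trip confirmation.
static BOOL wait_for_echo(HANDLE handle, const char *sent, DWORD len, DWORD baudrate)
{
    // Worst-case line time for len characters at ~10 bits each, plus margin
    COMMTIMEOUTS timeouts = {0};
    timeouts.ReadTotalTimeoutConstant = len * 10 * 1000 / baudrate + 100;
    if (!SetCommTimeouts(handle, &timeouts))
        return FALSE;

    char echo[256];
    DWORD got;
    if (len > sizeof echo)
        return FALSE;
    if (!ReadFile(handle, echo, len, &got, NULL) || got != len)
        return FALSE;                       // timed out: nothing came back
    return memcmp(echo, sent, len) == 0;    // mismatch hints at a bus fault
}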

automatically changing RTS for RS-485 communication

I'm trying to set up half-duplex communication in my program. My RS485 transceiver uses the RTS flag (TIOCM_RTS) to toggle between transmit and receive, so to send data I need to change the RTS flag manually:
1. Set RTS high.
2. Send data.
3. Set RTS low.
int setRTS(int level) {
    int status;

    ioctl(ser_port, TIOCMGET, &status);
    if(level) {
        status |= TIOCM_RTS;
    } else {
        status &= ~TIOCM_RTS;
    }
    ioctl(ser_port, TIOCMSET, &status);
    return 1;
}
My question is: shouldn't the linux kernel be able to switch RTS automatically?
And how to ensure that data was sent before calling setRTS(0)?
shouldn't the linux kernel be able to switch RTS automatically?
Yes, there is kernel framework for this starting in Linux 3.0.
There are two ioctls in include/uapi/asm-generic/ioctls.h:
#define TIOCGRS485 0x542E
#define TIOCSRS485 0x542F
to retrieve and configure a tty serial port driver in RS-485 mode.
These ioctls use the struct serial_rs485:
/*
 * Serial interface for controlling RS485 settings on chips with suitable
 * support. Set with TIOCSRS485 and get with TIOCGRS485 if supported by your
 * platform. The set function returns the new state, with any unsupported bits
 * reverted appropriately.
 */
struct serial_rs485 {
    __u32 flags;                            /* RS485 feature flags */
#define SER_RS485_ENABLED        (1 << 0)   /* If enabled */
#define SER_RS485_RTS_ON_SEND    (1 << 1)   /* Logical level for RTS pin when sending */
#define SER_RS485_RTS_AFTER_SEND (1 << 2)   /* Logical level for RTS pin after sent */
#define SER_RS485_RX_DURING_TX   (1 << 4)
    __u32 delay_rts_before_send;            /* Delay before send (milliseconds) */
    __u32 delay_rts_after_send;             /* Delay after send (milliseconds) */
    __u32 padding[5];                       /* Memory is cheap, new structs
                                               are a royal PITA .. */
};
I've used this RS-485 capability on Atmel and Etrax SoCs, but otherwise implementation of these ioctls in Linux UART/USART drivers is very sparse.
If your driver doesn't have it, then consider implementing it yourself. You could use the implementation in drivers/tty/serial/atmel_serial.c as a guide. Also read the Linux kernel document for RS485.
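When the driver does support it, enabling the mode from user space looks roughly like this (a minimal sketch; the device path and flag choices are illustrative):
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/serial.h>

int enable_rs485(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct serial_rs485 rs485conf;
    memset(&rs485conf, 0, sizeof(rs485conf));
    rs485conf.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;
    rs485conf.delay_rts_after_send = 0;     /* milliseconds */

    if (ioctl(fd, TIOCSRS485, &rs485conf) < 0) {
        /* the UART driver does not implement RS-485 mode */
        close(fd);
        return -1;
    }
    return fd;
}
After this, the kernel drives RTS around each transmission and ordinary write() calls need no manual toggling.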
This can indeed be tricky: to do it proactively you need to know when the last byte cleared the UART engine, or at least when it entered (when the buffer went empty), and add a delay calculated from the baud rate and word length. This is something worth implementing in the serial driver itself, where all of that is visible.
However, this problem is most commonly encountered on a shared bus where you also receive everything you transmit. If that is the case, you can use receiving the end of your own transmission (assuming you discover that promptly) as the trigger to disable the driver.
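For the proactive approach, here is a minimal sketch using the setRTS() helper from the question. tcdrain() blocks until the driver believes all queued output has been transmitted; some drivers and USB adapters return early, so a small calculated margin may still be needed:
#include <termios.h>
#include <unistd.h>

int send_rs485(int ser_port, const void *buf, size_t len)
{
    setRTS(1);                           /* switch transceiver to transmit */
    if (write(ser_port, buf, len) != (ssize_t)len)
        return -1;
    tcdrain(ser_port);                   /* wait for queued output to be sent */
    setRTS(0);                           /* back to receive */
    return 0;
}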

Cannot Wake up Atmel ATSHA204 Using I2C

I've recently been given the task of writing drivers for some of the I2C devices in our product. I was a complete beginner at this, but I've managed to use a mixture of i2cset and i2cget along with smbus to control some LEDs.
My latest task is to read a 7 byte serial number from the EEPROM of an Atmel ATSHA204 chip. My first job is to wake up the chip. The data sheet says this is done as follows
The Wake condition requires either that the system processor manually drive
the SDA pin low for tWLO, or that a data byte of 0x00 is transmitted at a
clock rate sufficiently slow that SDA is low for a minimum period of tWLO.
When the device is awake, the normal processor I2C hardware and/or software
can be used for device communications up to and including the I/O sequence
required to put the device back into low-power (sleep) mode.
So it seems I have to manually hold one of the I2C pins low for a time tWLO, apparently at least 60 microseconds, before I can use conventional I2C calls. I'm really not sure how this is done. Ideally I'd do it in C, so would some variation of the following work?
int file;
file = open("/dev/i2c-0", O_RDWR);
if (file < 0)
{
    exit(1);
}

int addr = 0x64; // Address of LED driver
if (ioctl(file, I2C_SLAVE, addr) < 0)
{
    return 1;
}

write(file, 0x00, 1); // write a 0x00 to the chip to wake it up
I guess I'm not sure about the last bit: how do I keep writing until the device has woken up? I'd appreciate any help with this, as low-level programming such as this is new to me.
You don't want to pass 0x00 to the write function: that parameter isn't a value, it's a pointer to a buffer containing the data. You also need to know what clock speed your I2C bus is running at; that determines how many 0x00 bytes need to be written to satisfy the required wake-up duration.
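For example, at a 100 kHz bus clock each bit lasts 10 µs, so a single 0x00 data byte holds SDA low for roughly 8 × 10 µs = 80 µs, comfortably beyond the 60 µs tWLO; at 400 kHz one byte gives only about 20 µs of low time, so you would need a slower clock (or a bit-banged GPIO pulse) to wake the part.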
The code below could help you wake the chip:
bool wakeup (int fd)
{
    unsigned char buf[4] = {0};
    bool awake = false;

    // sanity-check the file descriptor
    if (fcntl(fd, F_GETFD) < 0)
        perror ("Invalid FD");

    while (!awake) {
        // clock out zero bytes to hold SDA low long enough for tWLO
        if (write(fd, buf, sizeof(buf)) > 1) {
            printf ("Device is awake!\n");
        }
        // once awake, the chip answers a read
        if (read (fd, buf, sizeof(buf)) == 4) {
            printf ("Awake done!\n");
            awake = true;
        }
        else {
            printf ("Failed to wake the device!\n");
            break;
        }
    }
    return awake;
}

Force transmission of all serial data in C

I am running into a timing issue with a serial port in C. Long story short, I send a command to a device at one baud rate (say 9600) and expect a response from it at another (say 38400). I have no control over this protocol. A prototype of the situation would look like the following:
open() serial port
set baud rate to 9600 using termios struct
write() command
change baud rate to 38400 using termios struct
read() response
I am running into a problem where the device does not understand the command I sent because the baud rate is changing to 38400 before it completes the write. I am pretty certain write works fine because it returns the number of bytes I intend to write. I tried adding a usleep(100000) after the call to write and while that works sometimes, I cannot guarantee the entire message will be transmitted at 9600 before I change the baud and read a response. I also tried flushing the buffer with tcflush(fd, TCOFLUSH) but I do not want to discard any data so this is also not the correct way.
How can I force all the serial data out and be guaranteed it has been transmitted before the next call changes the baud rate to 38400? This seems to be happening at the chip level, so is my only option to pull in the FTDI libraries (it is an FTDI chip) and poll its registers to see when the data has finished transmitting? Thanks.
On Windows you would use WaitCommEvent with a mask that includes EV_TXEMPTY to wait for the message to be sent out by the driver; the POSIX equivalent here is tcdrain(), discussed below.
Why not just set the transmitter to 9600 and the receiver to 38400? Most common serial port hardware supports this.
// fd = file descriptor of serial port. For example:
//   int fd = open ("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_SYNC);
int
somefunction (int fd)
{
    struct termios tty;

    memset (&tty, 0, sizeof tty);
    if (tcgetattr (fd, &tty) != 0)
    {
        fprintf (stderr, "error %d from tcgetattr: %s\n", errno, strerror (errno));
        return -1;
    }

    cfsetospeed (&tty, B9600);   // set tty transmitter to 9600
    cfsetispeed (&tty, B38400);  // set receiver to 38400

    if (tcsetattr (fd, TCSADRAIN, &tty) != 0)  // waits until pending output is done before changing
    {
        fprintf (stderr, "error %d from tcsetattr\n", errno);
        return -1;
    }
    return 0;
}
I've amended my code to use TCSADRAIN instead of TCSANOW so that the rate change does not occur until after all pending output has been sent.
You should be using tcdrain() or the TCSADRAIN optional action for tcsetattr() instead of tcflush().
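If split transmit/receive speeds aren't available on your hardware, a minimal sketch of the drain-then-switch sequence from the question (function and parameter names are illustrative):
#include <termios.h>
#include <unistd.h>

int send_then_switch(int fd, const unsigned char *cmd, size_t len)
{
    struct termios tty;

    if (tcgetattr(fd, &tty) != 0)
        return -1;

    /* transmit the command at 9600 */
    cfsetospeed(&tty, B9600);
    cfsetispeed(&tty, B9600);
    if (tcsetattr(fd, TCSANOW, &tty) != 0)
        return -1;
    if (write(fd, cmd, len) != (ssize_t)len)
        return -1;

    /* TCSADRAIN waits for all pending output before applying the change,
       so the command finishes going out at 9600 before the switch */
    cfsetospeed(&tty, B38400);
    cfsetispeed(&tty, B38400);
    return tcsetattr(fd, TCSADRAIN, &tty);
}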
I used the Windows FTDI API and found their entire model to be annoyingly asynchronous. With a normal serial port I would expect you could do the math on your message length (in bits, including start, stop, parity) and baud rate and have a reasonable estimate of how long to sleep until your message is transmitted. However, with the FTDI part you're dealing with USB latencies and unknown queuing in the FTDI part itself. If your device's replies come in under 1ms it might not even be possible to turn the FTDI around reliably between your TX and RX phases.
Would it be possible to hook up the RX to a different UART? That would greatly simplify your problem.
If not, you might consider using a special cable that connects your TX back to your RX so that you can see your message go out before you cut over. With a diode you could avoid the device also seeing its own transmissions. 3/4 duplex?
