Linux Serial Port: Blocking Read with Timeout - c

I have studied many useful threads and some tutorials, but I'm still having some issues with something that should be very simple. For reference here are some threads that I've perused:
How to implement a timeout in read function call?
how to open, read, and write from serial port in C
At any rate, I have a bit of a problem. My code works fine if I receive data; if I don't, read() stalls and the only way out of my program is kill -9. (NOTE: I use signal handling to tell the thread reading the serial data to terminate. That is not the culprit; read() still stalls even with the signal handling removed.) What I'm trying to do is have a read that blocks and reads a chunk at a time (thereby saving CPU usage), but that times out if no data arrives.
Here are the settings that I'm applying to the port:
struct termios serial_struct;
serial_struct.c_cflag = B115200 | CS8 | CLOCAL | CREAD;
serial_struct.c_iflag = IGNPAR;
serial_struct.c_oflag = 0;
serial_struct.c_lflag = 0;
serial_struct.c_cc[VTIME] = 1; // timeout after .1s that isn't working
serial_struct.c_cc[VMIN] = 64; // want to read a chunk of 64 bytes at a given time
I then apply these settings with tcsetattr() and confirm that the port accepted them via tcgetattr(). I suspect my settings conflict, because my reads block and wait until 64 bytes are received, but never time out. I understand that I could use select() to handle a timeout, but I'm hoping to avoid the extra system calls.
As always, thanks in advance for the help.

From man 3 termios:
MIN > 0; TIME > 0: TIME specifies the limit for a timer in tenths of a second. Once an initial byte of input becomes available, the timer is restarted after each further byte is received. read(2) returns either when the lesser of the number of bytes requested or MIN bytes have been read, or when the inter-byte timeout expires. Because the timer is only started after the initial byte becomes available, at least one byte will be read.
Note that the timer does not start until at least one byte of data is received. After receiving that first data byte, the read will timeout if there is ever a gap of TIME tenths of a second between receiving consecutive data bytes.
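In other words, with VMIN = 64 and VTIME = 1 the timer never starts on an idle line, so an idle read() blocks forever. If an overall timeout is needed without select(), the usual compromise is VMIN = 0, at the cost of read() sometimes returning fewer than 64 bytes. A minimal sketch, assuming a 115200-baud device; the /dev/ttyUSB0 path and 1-second timeout are placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }
    cfmakeraw(&tio);                 /* raw mode: no line editing or translation */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cflag |= CLOCAL | CREAD;
    tio.c_cc[VMIN]  = 0;             /* return as soon as anything arrives...  */
    tio.c_cc[VTIME] = 10;            /* ...or after 1.0 s with no data at all  */
    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);  /* n == 0 means the timeout expired */
    printf("read returned %zd bytes\n", n);
    close(fd);
    return 0;
}

If the 64-byte batching matters for CPU usage, the alternative really is select() or poll() with a timeout in front of a blocking read; the VMIN/VTIME mechanism alone cannot express "64 bytes, or give up on silence".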

Related

Speed up / modify tcdrain() function

I'll skirt round the long and tedious story of how we got where we are, but the situation is this:
We are using half-duplex RS485 serial comms and (by necessity) driving the TX/RX flag "manually" via GPIO pin toggling. In order to make this work we're using tcdrain() to wait until the Tx buffer is empty before flipping back to Rx mode.
The problem is that tcdrain() seems to wait (block) for quite a while after the last character has been transmitted, which causes us a bit of a bottleneck.
I've seen suggestions that the default tcdrain() code just computes the worst case - the time needed to send a full (maximum-size) serial buffer at the current baud rate - sleep()s for that period, and then returns. And I could easily believe that.
So, can anyone suggest ways to either:
Speed up tcdrain() perhaps by shortening the serial buffer
Modify tcdrain() (or related code/parameters) to actually wait for the last character to be sent by the hardware, or wait for a period more closely related to the buffer contents
I've grepped our (embedded) kernel (2.6.x) code and can't see any references other than a single header file (termios.h).
Edit to add: as per this post, if for example we could reduce the serial Tx buffer to 1 byte using an ioctl, I assume the write() call would/could block while chars were written and then return, which would let us avoid relying on tcdrain() and just use a very short usleep() before toggling the Tx/Rx pin. I will experiment when I get a moment; in the meantime any suggestions/examples are welcome, such as the sketch below.
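One approach worth trying before patching the driver: poll the kernel's count of untransmitted bytes, then the UART's line status register, and flip the direction GPIO only when both report empty. A sketch, assuming the serial driver implements TIOCSERGETLSR (many 8250-style drivers do; not all):

#include <linux/serial.h>   /* TIOCSER_TEMT */
#include <sys/ioctl.h>
#include <unistd.h>

/* Wait (with short sleeps) until the kernel Tx queue is empty and the UART's
 * transmit shift register has drained; then it is safe to flip the RS485
 * direction pin.  Returns 0 on success, -1 if an ioctl fails. */
static int wait_tx_complete(int fd)
{
    int pending, lsr;
    do {
        if (ioctl(fd, TIOCOUTQ, &pending) < 0)   /* bytes still queued for Tx */
            return -1;
        if (pending)
            usleep(100);
    } while (pending);

    do {
        if (ioctl(fd, TIOCSERGETLSR, &lsr) < 0)  /* read line status register */
            return -1;
        if (!(lsr & TIOCSER_TEMT))               /* shifter not yet empty */
            usleep(100);
    } while (!(lsr & TIOCSER_TEMT));
    return 0;
}

Unlike a worst-case sleep, this returns within a few hundred microseconds of the last stop bit actually leaving the wire, at the price of a short polling wait.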

Get progress of socket read operation

I'd like to write a little file transfer program in C (I'm working on Linux). I'm quite new to socket programming, but I've already managed to write small server and client programs.
My question:
If I have something like this:
int read_bytes = read(socket_id, buffer, 4096);
While reading from the socket, how can I get the progress of the read?
For example, I want to display every 100 ms how many bytes have been transferred so far. I'm pretty sure I have to use threads or other async functions here.
Is there a function that gives me the number of bytes read and the number of bytes still to read?
Update 1:
Accumulating read()
int total_read_bytes = 0, read_bytes;
char buffer[4096];
do {
    read_bytes = read(socket_id, buffer, sizeof(buffer));
    if (read_bytes > 0)
        total_read_bytes += read_bytes;
} while (read_bytes > 0); /* stop at end of stream (0) or on error (-1) */
Isn't this very, very inefficient? (For example to transfer a 1MiB file)
Thank you
If you have control of the code on both the server and the client, you can have the sender tell the receiver the size of the file before actually sending it - for example, in the first 8 bytes of the message. That's the number of bytes to read.
By accumulating read_bytes as in your example, you get the number of bytes read so far.
Each recv call will block until it's read some more, but it won't (by default / sans MSG_WAITALL) wait for the supplied buffer to be full. That means there's no need to reduce the buffer size in an effort to get more frequent updates to the total bytes read information, you can trivially keep a total of recv() return values that updates as packets arrive (or as fast as your app can process them from the OS buffers).
As you observe, if that total is being updated 100 times a second, you probably don't want to issue 100 screen updates, preferring to limit them to your 100 ms minimum interval. You could use a timer/alarm to manage that, but then you have more edge cases to check, such as whether the timer is already pending. It's probably simpler just to use two threads, where one periodically checks whether an update is needed:
thread1:
    while ((n_bytes = recv(...)) > 0)
        total_bytes += n_bytes;
    set screen-updater-termination flag

thread2:
    last_total_bytes = -1;
    while the screen-updater-termination flag's not set
        sleep 100ms
        if last_total_bytes != total_bytes
            update screen
            last_total_bytes = total_bytes
Of course you'll need to use a mutex or atomic operations to coordinate some of these actions.
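A minimal C rendering of that pseudocode, using C11 atomics in place of a mutex; sock_fd is assumed to be an already-connected socket, and the two functions would be started with pthread_create():

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static atomic_llong total_bytes = 0;
static atomic_bool  done = false;
static int sock_fd;                          /* assumed already connected */

static void *receiver(void *arg)             /* thread1 */
{
    char buf[4096];
    ssize_t n;
    while ((n = recv(sock_fd, buf, sizeof buf, 0)) > 0)
        atomic_fetch_add(&total_bytes, n);   /* lock-free running total */
    atomic_store(&done, true);               /* EOF or error: stop the updater */
    return NULL;
}

static void *progress(void *arg)             /* thread2 */
{
    long long last = -1;
    while (!atomic_load(&done)) {
        usleep(100 * 1000);                  /* 100 ms refresh interval */
        long long now = atomic_load(&total_bytes);
        if (now != last) {
            printf("\r%lld bytes received", now);
            fflush(stdout);
            last = now;
        }
    }
    return NULL;
}

The atomics replace the mutex mentioned above: each access to the counter is a single atomic load or add, so neither thread can observe a torn value.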
Is there a function so I can get... the number of bytes to read?
Not from TCP itself - it doesn't have a notion of message sizes, and the API just ensures an app receives the bytes in the same order they were sent. If you want to be able to display "N bytes received so far, X more expected", then at the application protocol level the sending side should prefix the data in a logical message with the size - commonly in either a fixed-width binary format (ideally using htonl or similar to send and ntohl on the recv side to avoid endian issues), or in a textual representation that's either fixed width or separated from the data by a known sentinel character (e.g. perhaps a NUL, a space or newline).
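A sketch of the fixed-width binary variant. recv_all is a hypothetical helper that loops over short reads; the single send() calls assume the payload goes out without a short write, which production code would handle with a similar loop:

#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <sys/socket.h>

/* Read exactly len bytes, looping over short reads; 0 on success. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return -1;               /* peer closed, or error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Sender: prefix the payload with its length in network byte order. */
static int send_sized(int fd, const void *data, uint32_t len)
{
    uint32_t wire = htonl(len);
    if (send(fd, &wire, sizeof wire, 0) != sizeof wire) return -1;
    return send(fd, data, len, 0) == (ssize_t)len ? 0 : -1;
}

/* Receiver: learn the total up front, then show "N of X" as data arrives. */
static int recv_size(int fd, uint32_t *len)
{
    uint32_t wire;
    if (recv_all(fd, &wire, sizeof wire) < 0) return -1;
    *len = ntohl(wire);
    return 0;
}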

Linux serial port reading in asynchronized mode

I am having trouble reading data from a serial port on a Linux system. I am trying to connect a sensor to the Linux system over a UART. I can read and write /dev/ttyS1, but the problem is that I don't want to poll the UART for messages. Instead, I want to use asynchronous mode: as data arrives, a callback function enters a certain routine and runs my code. The problem now is that the sensor sends me different packets, each containing a varying number of bytes, and they arrive every second!
For example:
Time    Sensor      MyLinux
1s      50 bytes    124 bytes
2s      40 bytes    174 bytes
3s      60 bytes    244 bytes
My question is how to use asynchronous serial programming so that in the callback function those two packets can be read as two messages, say:
50 bytes arrive: the callback function lets me read 50 bytes
127 bytes arrive: the callback function lets me read 127 bytes
Right now it's like this: 50 bytes arrive, I can only read 27 bytes, and the remaining 23 end up in the next message.
My setting for serial port in POSIX is:
/* now we set up the values in the port's termios */
serial->tio.c_cflag = baudrate | databits | checkparity | stopbits | CLOCAL | CREAD;
serial->tio.c_iflag = IGNPAR;
serial->tio.c_oflag = 0;
serial->tio.c_lflag = 0;
serial->tio.c_cc[VMIN] = 28;
serial->tio.c_cc[VTIME] = 6;
/* we flush the port */
tcflush(serial->fd, TCOFLUSH);
tcflush(serial->fd, TCIFLUSH);
/* we send the new config to the port */
tcsetattr(serial->fd, TCSANOW, &(serial->tio));
Try setting VMIN and VTIME to the following values:
serial->tio.c_cc[VMIN]=0;
serial->tio.c_cc[VTIME]=1;
Then you'll come out of select() after reading a complete chunk of data from your sensor. If the number of bytes is less than you expected, you can set a timeout for select() and read once more, appending to your current buffer. If you get the rest of the data before the timeout, you have your complete message.
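A sketch of that accumulate-with-select() loop; the expected length and the timeout value are illustrative:

#include <sys/select.h>
#include <unistd.h>

/* Collect bytes into msg until expected bytes have arrived, or until the
 * line stays quiet for timeout_ms.  Returns the number of bytes gathered. */
static size_t read_message(int fd, unsigned char *msg, size_t expected,
                           int timeout_ms)
{
    size_t have = 0;
    while (have < expected) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
        int rc = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (rc <= 0)                              /* timeout or error */
            break;
        ssize_t n = read(fd, msg + have, expected - have);
        if (n <= 0)
            break;
        have += (size_t)n;
    }
    return have;
}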

What's the 'safest' way to read from this buffer?

I'm trying to read and write a serial port in Linux (Ubuntu 12.04) where a microcontroller on the other end blasts 1 or 3 bytes whenever it finishes a certain task. I'm able to successfully read and write to the device, but the problem is my reads are a little 'dangerous' right now:
do
{
    nbytes = read(fd, buffer, sizeof(buffer));
    usleep(500000); /* half a second */
} while (nbytes == -1);
I.e. to simply monitor what the device is sending me, I poll the buffer every half second. If it's empty, it idles in this loop; if it receives something, or hits a real error, it kicks out. Some logic then processes the 1- or 3-byte packet and prints it to a terminal. A half second is usually a long enough window for something to fully appear in the buffer, but quick enough that a human who will eventually see it won't think it's slow.
'Usually' is the key word. If I read the buffer in the middle of a 3-byte blast, I'll get a bad read: the buffer will have only 1 or 2 bytes in it, and it'll get rejected by the packet processing (if I catch the first byte of a 3-byte packet, it won't be a purposely-sent one-byte value).
Solutions I've considered/tried:
I've thought of simply reading one byte at a time and feeding in additional bytes if it's part of a 3-byte transmission. However, this creates some ugly loops (read() only reports the bytes from its most recent call) that I'd like to avoid if I can.
I've tried reading 0 bytes (e.g. nbytes = read(fd, buffer, 0);) just to see how many bytes are currently in the buffer before I load it into my own buffer, but as I suspected it just returns 0.
It seems like a lot of my problems would be easily solved if I could peek into the contents of the port buffer before I load it into a buffer of my own. But read() is destructive up to the number of bytes that you tell it to read.
How can I read from this buffer such that I don't do it in the middle of receiving a transmission, but do it fast enough to not appear slow to a user? My serial messenger is divided into a sender and receiver thread, so I don't have to worry about my program loop blocking somewhere and neglecting the other half.
Thanks for any help.
Fix your packet processing. I always end up using a state machine for instances like this, so that if I get a partial message, I remember (stateful) where I left off processing and can resume when the rest of the packet arrives.
Typically I have to verify a checksum at the end of the packet, before proceeding with other processing, so "where I left off processing" is always "waiting for checksum". But I store the partial packet, to be used when more data arrives.
Even though you can't peek into the driver buffer, you can load all those bytes into your own buffer (in C++ a deque is a good choice) and peek into that all you want.
You need to know how large the messages being sent are. There are a couple of ways to do that:
Prefix the message with the length of the message.
Have a message-terminator, a byte (or sequence of bytes) that can not be part of a message.
Use the "command" to calculate the length, i.e. when you read a command-byte you know how much data should follow, so read that amount.
The second method is best for cases where you can fall out of sync, because you can then read until you get the message-terminator sequence and be sure that the next bytes will be a new message.
You can of course combine these methods; a sketch of the third one follows.
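Here is a sketch of the third method as a buffer-and-extract loop. packet_len() is a hypothetical mapping from the first (command) byte to the total packet size - the 1-or-3-byte rule below is an assumption for illustration, not the questioner's actual protocol:

#include <stddef.h>
#include <string.h>

static unsigned char acc[256];   /* accumulation buffer across read() calls */
static size_t acc_len = 0;

/* Hypothetical: total packet length implied by its command byte. */
static size_t packet_len(unsigned char cmd)
{
    return (cmd & 0x80) ? 3 : 1;              /* illustrative rule only */
}

static void handle_packet(const unsigned char *pkt, size_t len)
{
    (void)pkt; (void)len;                     /* application processing here */
}

/* Append whatever read() returned, then peel off complete packets. */
static void on_bytes(const unsigned char *data, size_t n)
{
    if (acc_len + n > sizeof acc)
        acc_len = 0;                          /* overflow: drop and resync */
    memcpy(acc + acc_len, data, n);
    acc_len += n;

    size_t need;
    while (acc_len > 0 && acc_len >= (need = packet_len(acc[0]))) {
        handle_packet(acc, need);
        memmove(acc, acc + need, acc_len - need);   /* drop consumed bytes */
        acc_len -= need;
    }
}

Partial packets simply stay in acc until the next read() supplies the rest, which removes the "bad read in the middle of a blast" problem entirely.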
To poll a device, you are better off using a multiplexing syscall like poll(2), which succeeds when data is available for reading from that device. Notice that poll is multiplexing: you can poll several file descriptors at once, and poll will succeed as soon as any one of them is readable with POLLIN (or writable, if so asked with POLLOUT, etc.).
Once poll has succeeded for some fd on which you asked for POLLIN, you can read(2) from that fd. For example:
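A minimal poll()-then-read loop; the 1-second timeout is illustrative:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

static void poll_and_read(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    for (;;) {
        int rc = poll(&pfd, 1, 1000);        /* block up to 1000 ms */
        if (rc < 0) { perror("poll"); break; }
        if (rc == 0) continue;               /* timeout: nothing arrived yet */
        if (pfd.revents & POLLIN) {
            char buf[256];
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0) break;               /* error or hang-up */
            /* hand these n bytes to the message-framing layer */
        }
    }
}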
Of course, you need to know the conventions used by the hardware device for its messages. Notice that a single read could get several messages, or only part of one (or more). There is no way to prevent reading partial messages (or "packets") - probably because your PC's serial I/O is much faster than the serial I/O inside your microcontroller. You have to cope with that by knowing the conventions defining the messages (and if you can change the software inside the microcontroller, defining an easy convention for that) and implementing the appropriate state machine, buffering, etc.
NB: There is also the older select(2) syscall for multiplexing, which has limitations related to the C10K problem. I recommend poll instead of select in new code.

Serial programming: measuring time between characters

I am sending/receiving data over a serial line in Linux and I would like to find the delay between characters.
Modbus uses a 3.5 character delay to detect message frame boundaries. If there is more than a 1.5 character delay, the message frame is declared incomplete.
I'm writing a quick program in C which is basically
fd = open(MODEMDEVICE, O_RDWR | O_NOCTTY | O_NONBLOCK);
// set up newtio
....
tcsetattr(fd, TCSANOW, &newtio);
for (;;) {
    res = read(fd, buf, 1);
    if (res > 0) {
        // store time in milliseconds?
        // do stuff
    }
}
Is there some way of measuring the time here? Or do I need to look at retrieving data from the serial line in a different way?
I've also tried hooking into SIGIO to get a signal whenever there is data but I seem to get data 8 bytes at a time.
(yes, I know there exist some modbus libraries but I want to use this in other applications)
The simple answer is... you cannot (not without writing your own serial driver)!
If you are writing a MODBUS master there is some hope: you can either detect the end of a slave response by waiting any amount of time (provided it's longer than 3.5 characters) without receiving anything (select(2) can help you here), or by parsing the response on the fly as you read it (the second method wastes much less time). You must also be careful to wait at least 3.5 character times before starting to transmit a new request after receiving the response to the previous one. "At least" is operative here! Waiting more doesn't matter. Waiting less does. A sketch of the first approach follows.
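The idea is to keep select()ing with a short gap timeout and treat expiry as end-of-frame. The 4 ms gap assumes 9600 baud, where 3.5 character times is roughly 4 ms, and the 200 ms first-byte wait is an arbitrary response timeout:

#include <sys/select.h>
#include <unistd.h>

/* Read one MODBUS RTU response: bytes keep arriving until the line stays
 * silent for longer than 3.5 character times.  Returns the frame length. */
static size_t read_frame(int fd, unsigned char *buf, size_t cap)
{
    size_t len = 0;
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        /* wait longer for the first byte (response timeout), then apply the
         * short inter-byte gap to detect the end of the frame */
        struct timeval tv = (len == 0) ? (struct timeval){ 0, 200000 }  /* 200 ms  */
                                       : (struct timeval){ 0, 4000 };   /* ~3.5 ch */
        if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
            break;                            /* silence (or error): frame done */
        ssize_t n = read(fd, buf + len, cap - len);
        if (n <= 0)
            break;
        len += (size_t)n;
        if (len == cap)
            break;
    }
    return len;
}

Userspace scheduling jitter means this reliably detects "the slave stopped talking", not true 1.5/3.5-character framing, which is consistent with the caveats above.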
If you are writing a MODBUS slave then you're out of luck. You simply cannot do it reliably from userspace Linux. You have to write your own serial driver.
BTW, this is not Linux's fault. This is due to the unbelievable stupidity of MODBUS's framing method.
MODbus is like a lot of old protocols and really hates modern hardware.
The reason you're getting 8 bytes at a time is:
Your PC has an (at least) 16-byte serial FIFO on receive and transmit in the hardware. Most are 64 bytes or bigger.
It is possible to tell the UART device to time out and issue a receive interrupt after a number of character times.
The trigger level is adjustable, but the low-level driver sets it "smartly" (try low-latency mode using setserial).
You can fiddle with the code in the serial driver if you must. Google it (mature content warning); it is not pretty.
So the routine, as pseudocode:

    int actual = read(packet, timeout of 1.5 chars)
    look at the actual number of received bytes
    if less than a packet, there are issues: discard

Not great.
You can't use timeouts. At higher baud rates a 3.5 character timeout means a few milliseconds, or even hundreds of microseconds. Such timeouts can't be handled in Linux user space.
On the client side it isn't a big deal, since Modbus doesn't send asynchronous messages, so it's up to you not to send 2 consecutive messages within the 3.5 character timeout.
On the server side, the problem is that if your clients have extremely short response timeouts and Linux is too busy, you can't write a bullet-proof framing solution. There is a chance that the read() function will return more than one packet. Here is a (slightly contrived) example:
The client writes a packet to the server. The timeout is, let's say, 20 ms.
Let's say Linux is very busy at that moment, so the kernel doesn't wake up your thread within the next 50 ms.
After 20 ms the client detects that it didn't receive any response, so it sends another packet to the server (maybe resending the previous one).
When Linux wakes up your reading thread after 50 ms, the read() function can get 2 packets, or even 1 and a half, depending on how many bytes were received by the serial port driver.
In my implementation I use a simple method that parses bytes on the fly: first detect the function code, then try to read all the remaining bytes for that specific function. If I get one and a half packets, I parse just the first one and the remaining bytes are left in the buffer. If more bytes come within a short timeout I append them and try to parse; otherwise I discard them. It's not a perfect solution (for instance, some sub-codes for function 8 don't have a fixed size), but since MODBUS RTU doesn't have any STX/ETX characters, it's the best one I was able to figure out.
I think you are going about this the wrong way. There is a built-in mechanism for ensuring that characters come in all together.
Basically, you'll want to use ioctl() and set the VMIN and VTIME parameters appropriately. In this case, it seems you'd want VMIN (the minimum number of characters in a packet) to be 0 and VTIME (the inter-character timeout, in tenths of a second) to be 15, i.e. 1.5 seconds.
Some really basic example code:
struct termio t;
if (ioctl(fd, TCGETA, &t) == -1)   /* start from the current settings */
{
    printf("ioctl(get) failed on port %s. Exiting...\n", yourPort);
    exit(1);
}
t.c_cc[VMIN] = 0;
t.c_cc[VTIME] = 15;
if (ioctl(fd, TCSETAW, &t) == -1)
{
    printf("ioctl(set) failed on port %s. Exiting...\n", yourPort);
    exit(1);
}
Do this after your open() but before your read(). Here are a couple of links that I've found wildly helpful:
Serial Programming Guide
Understanding VMIN and VTIME
I hope that at least helps/points you in the right direction even if it isn't a perfect answer for your question.
