Flush communications handle receive buffer? - c

In Win32 C, is there an API call to flush (dump) the contents of a COM port receive buffer? I could only find functions to flush the transmit buffers.

`PurgeComm()` can drop all characters in either or both the Tx and Rx buffers, and abort any pending read and/or write operations on the port. To do everything to a port, say something like:
PurgeComm(hPort, PURGE_RXABORT|PURGE_TXABORT|PURGE_RXCLEAR|PURGE_TXCLEAR)
You may also want to make sure that you've handled or explicitly ignored any pending errors on the port as well, probably with ClearCommError().
ReadFile() can be used to empty just the Rx buffer and FIFO by reading all available bytes into a waste buffer. Note that you might need to have "unnatural" knowledge in order to size that buffer correctly, or repeat the ReadFile() call until it has no more to say.
However, reading the buffer to flush it only makes sense if you have COMMTIMEOUTS set sensibly first, or the read will block until the buffer is filled.
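As a minimal sketch of that approach (assuming hPort is a handle opened with CreateFile() and without FILE_FLAG_OVERLAPPED), setting ReadIntervalTimeout to MAXDWORD with zero total timeouts makes ReadFile() return immediately with only the bytes already buffered, so the drain loop cannot block:

    #include <windows.h>

    /* Sketch: drain whatever is already in the Rx buffer of an open,
     * non-overlapped port handle. */
    static void DrainRxBuffer(HANDLE hPort)
    {
        COMMTIMEOUTS timeouts = {0};
        timeouts.ReadIntervalTimeout        = MAXDWORD; /* return immediately */
        timeouts.ReadTotalTimeoutConstant   = 0;
        timeouts.ReadTotalTimeoutMultiplier = 0;
        SetCommTimeouts(hPort, &timeouts);

        BYTE waste[256];
        DWORD bytesRead = 0;
        do {
            if (!ReadFile(hPort, waste, sizeof(waste), &bytesRead, NULL))
                break;               /* read error: give up on draining */
        } while (bytesRead > 0);     /* nothing left in the Rx buffer */
    }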

Flushing a receive buffer doesn't really make sense; to get data out of a COM port receive buffer, just call ReadFile() on the handle to the COM port.
FlushFileBuffers() synchronously forces the transmission of data in the transmit buffers.
PurgeComm() empties the buffers without transmission or reception (it's basically a delete).
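To make the contrast concrete, here is a small sketch on an open port handle (hPort is an assumption, not from the question):

    #include <windows.h>

    void FlushVsPurge(HANDLE hPort)
    {
        FlushFileBuffers(hPort);                          /* drain: block until pending Tx data is on the wire */
        PurgeComm(hPort, PURGE_RXCLEAR | PURGE_TXCLEAR);  /* purge: discard buffered Rx and Tx data, unsent/unread */
    }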

Related

STM32 Serial DMA - Finding the beginning of the Stream

I have a known serial stream format that I am capturing via the DMA. It has header and footer bytes. But sometimes the MCU starts capturing in the middle of the stream and then the sync is out because the DMA is looking for a set number of bytes. I have read of people using circular buffers, but I have struggled to grasp this concept.
Instead, I was thinking of disabling the DMA and enabling a serial interrupt at start-up of the MCU, then cycling through each byte captured by the interrupt to find the start byte. Once I have found the start byte, I would disable the serial interrupt capturing and enable the DMA to take over capturing the stream.
Does this sound feasible? Thanks for any input.
I am using STM32 HAL libs with the new STM32 IDE that includes STM32 CubeMX.
If I understand your reference to circular buffers correctly, the concept is simple. You have a large buffer with a write pointer and a read pointer. The write function writes data into the buffer from the write pointer onward, taking care that once it reaches the end of the buffer, it wraps around and dumps the data at the beginning of the buffer and onward. Then you need a reader function that reads the data from the read pointer onward, and again, taking care of wrap around at the end of the buffer.
Both the read and write pointers start at the beginning of the buffer. The two conditions that you have to check are:
1) When the read pointer is at the same location as the write pointer, there is nothing (more) to read.
2) When the write pointer increments and runs into the read pointer location, you have a buffer overflow condition. This should never happen, so either you must use a larger buffer, have the reader task run more frequently, or start throwing things out.
So in your scenario, the DMA just dumps data, and your reader task looks for the header bytes and processes the data until it finds the footer bytes.
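A minimal sketch of such a circular (ring) buffer, with illustrative names and size (one writer, e.g. the DMA/ISR side, advances the write index; one reader task advances the read index; both wrap modulo the buffer size):

    #include <stdint.h>
    #include <stddef.h>

    #define RING_SIZE 256u          /* must comfortably exceed one packet */

    typedef struct {
        uint8_t  data[RING_SIZE];
        volatile size_t head;       /* write index */
        volatile size_t tail;       /* read index */
    } ring_t;

    /* Returns 0 on success, -1 if the buffer is full (condition 2 above). */
    static int ring_put(ring_t *r, uint8_t byte)
    {
        size_t next = (r->head + 1u) % RING_SIZE;
        if (next == r->tail)
            return -1;              /* would run into the read pointer */
        r->data[r->head] = byte;
        r->head = next;
        return 0;
    }

    /* Returns 0 on success, -1 if there is nothing to read (condition 1 above). */
    static int ring_get(ring_t *r, uint8_t *byte)
    {
        if (r->tail == r->head)
            return -1;              /* read pointer caught up with write pointer */
        *byte = r->data[r->tail];
        r->tail = (r->tail + 1u) % RING_SIZE;
        return 0;
    }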
As the protocol has idle gaps between packets, you can use the idle interrupt feature of the UART to synchronize the receiver.
Enable the UART interrupt, simply start receiving with DMA, and set UARTx->CR1 |= USART_CR1_IDLEIE. Whenever the idle interrupt is triggered, look at the DMA channel: if it's still running, stop the transfer and discard the input buffer (this means reception started in the middle of a packet), then start receiving the next packet.
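A sketch of that resynchronisation using STM32 HAL names (huart1, rx_buf, PACKET_LEN and the header name are assumptions, not from the post; "main.h" is the usual CubeMX-generated include that pulls in the HAL):

    #include "main.h"

    #define PACKET_LEN 32u
    extern UART_HandleTypeDef huart1;
    static uint8_t rx_buf[PACKET_LEN];

    void start_packet_reception(void)
    {
        __HAL_UART_ENABLE_IT(&huart1, UART_IT_IDLE);   /* CR1 |= USART_CR1_IDLEIE */
        HAL_UART_Receive_DMA(&huart1, rx_buf, PACKET_LEN);
    }

    /* Call this from USART1_IRQHandler(). */
    void usart1_idle_handler(void)
    {
        if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_IDLE)) {
            __HAL_UART_CLEAR_IDLEFLAG(&huart1);
            /* A non-zero DMA counter means the transfer is still in progress,
             * i.e. reception started mid-packet: discard and restart. */
            if (__HAL_DMA_GET_COUNTER(huart1.hdmarx) != 0u) {
                HAL_UART_DMAStop(&huart1);
                HAL_UART_Receive_DMA(&huart1, rx_buf, PACKET_LEN);
            }
        }
    }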

Will read (socket) block until the buffer is full?

I wrote a simple C socket program that sends an INIT package to the server to indicate that it should prepare a text transfer. The server does not send any data back at that time.
After sending the INIT package the client sends a GET package and waits for chunks of data from the server.
So every time the server receives a GET package it will send a chunk of data to the client.
So far so good. The buffer has a size of 512 bytes; a chunk is 100 bytes plus a little overhead.
But my problem is that the client does not receive the second message.
So my guess is that read() will block until the buffer is full. Is that right, or what might be the reason for that?
It depends. For TCP sockets, read() may return before the buffer is full, and you may need to receive in a loop to get a whole message. For UDP sockets, each read() returns a single packet (datagram); it blocks until one arrives, but it never waits to fill the buffer.
The answer is no: read() on a tcp/ip socket will not block until the buffer has the amount of data you requested. read() will return immediately in all cases if any data is available, even if your socket is blocking and you've requested more data than is available.
Keep in mind that TCP/IP is a byte stream protocol and you must treat it as such. The interface is under no obligation to transmit your data together in a single packet, as long as it is presented to you in the order you placed it in the socket.
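A minimal sketch of reading a fixed-size message from a TCP socket in a loop (the helper name is illustrative): because TCP is a byte stream, a single read() may return fewer bytes than requested, so keep reading until the whole message has arrived, the peer closes, or an error occurs.

    #include <unistd.h>
    #include <stddef.h>
    #include <sys/types.h>

    ssize_t read_full(int fd, void *buf, size_t len)
    {
        size_t done = 0;
        while (done < len) {
            ssize_t n = read(fd, (char *)buf + done, len - done);
            if (n == 0)
                break;              /* peer closed the connection */
            if (n < 0)
                return -1;          /* error (check errno; retry on EINTR if desired) */
            done += (size_t)n;
        }
        return (ssize_t)done;
    }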
The answer is no: read() does not block waiting for the full requested amount; it returns as soon as some data is available. A few checkpoints to help track down the error:
Find out what read() is returning the second time.
memset() the buffer each time around the loop, before the recv().
Use fflush(stdout) if you are not seeing your output.
Make sure all three are in place. If the problem is still not solved, please post the source code here.

Timeout event in read system call for reading serial port

I am reading data from a serial port using the read system call. It seems that this call reads only one byte, even though it is told how many bytes to read:
bytes_read = read(fp, buffer, 20);
I don't know how many bytes the sender will send; if I knew, I would read that many times. I suspect that the second byte had not yet arrived while I was reading from the serial port, so the call returned early. Because of this I want to implement a timeout: the read call should wait for that amount of time and read all the bytes that arrive before the timeout expires. I would appreciate help from experts on this.
You can control the timeouts and line buffer characteristics via the termios(3) library call.
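As a sketch of what that looks like (the function name is illustrative), with VMIN > 0 and VTIME > 0 in non-canonical mode, read() blocks until at least one byte arrives and then keeps collecting bytes until either VMIN bytes are buffered or the line has been silent for VTIME tenths of a second:

    #include <termios.h>
    #include <unistd.h>

    int set_read_timeout(int fd)
    {
        struct termios tio;
        if (tcgetattr(fd, &tio) < 0)
            return -1;
        tio.c_lflag &= ~(ICANON | ECHO);  /* raw (non-canonical) reads */
        tio.c_cc[VMIN]  = 20;             /* return once 20 bytes have arrived ... */
        tio.c_cc[VTIME] = 5;              /* ... or 0.5 s after the last byte */
        return tcsetattr(fd, TCSANOW, &tio);
    }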

flush linux OS serial buffer

I have a serial program connecting to two devices via two different ports. Whenever I read, I have a local buffer, statically allocated with the size of the packet I want to read from the serial port. My boss, however, noted that storing packets in this local buffer is not safe, and advised me to check whether I can flush the Linux OS buffer every time I read from the serial port. What is your opinion, and how can I do that programmatically in Ubuntu?
I think this problem gets solved if I add TCSAFLUSH to the tcsetattr() call; this makes it flush the buffer after all data has been written to the serial port, which happens just before the next read, hopefully, if I usleep() for some time ;)
What is your opinion?
The function you are looking for is tcdrain(fd) or the tcsetattr() option TCSADRAIN.
TCSAFLUSH (and tcflush()) empty the buffer by discarding the data - tcdrain() waits (blocking) until all data has been sent from the buffer:
Line control
...
tcdrain() waits until all output written to the object referred to by fd has been transmitted.
-- man termios
I use the function just before resetting the port options to what they were before I changed them and closing the port:
void SerialPort::close() {
    if (_fd > -1) {
        tcdrain(_fd);                         // wait until all pending output has been transmitted
        ioctl(_fd, TCSETS2, &_savedOptions);  // restore the original port settings
        ::close(_fd);
    }
    _fd = -1;
}
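If the goal really is to discard stale input the kernel has already buffered (rather than waiting for output to drain), a sketch using tcflush() on the open serial fd would look like this (the function name is illustrative):

    #include <termios.h>

    void discard_stale_input(int fd)
    {
        tcflush(fd, TCIFLUSH);   /* discard received-but-unread data */
        /* TCOFLUSH would discard written-but-unsent data; TCIOFLUSH discards both. */
    }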

Socket being busy causes delays in poll()

I'm using a TCP socket to send data from a client. The data is prepared, in advance, so I always have more data to send. Moreover, I can't change the size of the writes, otherwise the server will complain. I do the following:
while (1) {
    poll(for POLLOUT condition);
    write(to TCP socket);
    if (no more data)
        break;
}
The problem is that the POLL takes a very long time. I assume this is the time that the socket is actually being written to (or responded to). Is there anyway that I can reduce the amount of time spent in the poll? It is currently a major bottleneck.
Socket being busy causes delays in poll()
Of course it does. That's what poll() is for. To delay until a socket becomes readable or writable.
Your writer is faster than your reader. Look for a solution at the reading end. Your writing end is behaving correctly.
However calling it every time at the head of that loop is pointless. Only call it when you need to know the socket has become writable. It is normally writable all the time, except when your socket send buffer is full, so calling it every time is a waste of time.
Just keep writing until you get EAGAIN/EWOULDBLOCK. Then is the time to call poll(), to tell you when there is space in the socket send buffer. Then just resume writing again as before.
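A sketch of that pattern on a non-blocking socket (the helper name is illustrative): write until the send buffer is full (EAGAIN/EWOULDBLOCK), and only then poll() for POLLOUT before resuming.

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>
    #include <stddef.h>
    #include <sys/types.h>

    int send_all(int fd, const char *data, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = write(fd, data + sent, len - sent);
            if (n > 0) {
                sent += (size_t)n;
                continue;                      /* keep writing while there is room */
            }
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                struct pollfd pfd = { .fd = fd, .events = POLLOUT };
                if (poll(&pfd, 1, -1) < 0)     /* wait for space in the send buffer */
                    return -1;
                continue;
            }
            return -1;                         /* real error */
        }
        return 0;
    }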
poll() will report a POLLOUT event when there is enough buffer space to enqueue further data (see man 7 socket).
If it doesn't, the write buffer is full, which means you are writing faster than the other peer can read. Or the network is simply slower than you expect.
