fflush of named pipe (FIFO) blocks system for greater than 1 second in C code on Raspberry Pi

I am using a Raspberry Pi (either a 3B+ or the 4 GB Model 4, running either Stretch Lite or Buster Lite) to read an ADC and report the values to a named pipe (FIFO). When I open the pipe I use the O_NONBLOCK parameter. The code is written in C. The data rate is not especially fast, as the values are written only once per second. I read the values using cat in a terminal.
I have set up a timer in my code, and typically the fprintf followed by fflush takes less than 1 millisecond. However, fairly frequently (once every 10-15 minutes), it can take over 1 second to complete!
The code is part of a much larger project, but these are the lines I am using around this fifo implementation:
int chOutputData(int ch, struct output_s *output)
{
    int nchar;
    if (output->complete_flag) {
        nchar = fprintf(fdch[ch], "m %d %d %d\n",
                        output->val1,
                        output->val2,
                        output->val3);
        fflush(fdch[ch]);
        return nchar;
    } else {
        chOutputError(ch, "Output data is not complete.\n");
        exit(1);
    }
}
val1, val2, val3 are just some simple numbers, nothing crazy big.
Am I not using fprintf with fflush correctly? Do I need to set some sort of buffer? Is this related to the slow SD cards on the Raspberry Pi? Even if this took 5-10 ms to complete I would not complain, but I have a watchdog that trips at around 100 ms, so when this takes over 1 second I have issues.
I am open to other ideas for how to get this string out. I wonder if MQTT might be a means to publish this data?
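For what it's worth, here is the kind of stdio-free alternative I have in mind (just a minimal sketch; fd_raw is a hypothetical descriptor from open()ing the same FIFO with O_WRONLY | O_NONBLOCK, and struct output_s is from the code above):

#include <stdio.h>
#include <unistd.h>

/* Format into a local buffer, then push it out with a single write(2),
   bypassing the stdio buffer (and fflush) entirely. */
int chOutputDataRaw(int fd_raw, struct output_s *output)
{
    char line[64];
    int nchar = snprintf(line, sizeof line, "m %d %d %d\n",
                         output->val1, output->val2, output->val3);
    if (nchar > 0 && write(fd_raw, line, (size_t)nchar) < 0)
        return -1;   /* EAGAIN here would mean the reader is not keeping up */
    return nchar;
}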
Thanks!

Related

writing a data stream with a defined delay between each byte

I need to send a constant data stream from a computer to a device. The device is connected to an FTDI chip, and the FTDI is communicating via USB with the computer. Unfortunately, at full speed, the device needs a delay between each transferred byte.
Issue:
At present I'm using WriteFile() with a hand-written delay (this is just a sample, not the full code):
for (i = 0; i < size; i++)
{
    WriteFile(m_hIDComDev, (LPSTR)&ucByte, 1, &dwBytesWritten, &m_OverlappedWrite);
    QueryPerformanceCounter(&start);
    QueryPerformanceCounter(&stop);
    while ((double)(stop.QuadPart - start.QuadPart) < DELAY)
    {
        QueryPerformanceCounter(&stop);
    }
}
Depending on the mood of my machine, this delay varies between 100 µs and nearly 1 s. I use a scope to display the delay at the FTDI output.
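(One thing worth checking in the snippet above: the QuadPart difference is in counter ticks, not seconds, so DELAY has to be expressed in ticks via QueryPerformanceFrequency(). A minimal sketch of the busy-wait with that conversion; DELAY_US is a hypothetical gap in microseconds:)

#define DELAY_US 100          /* hypothetical: desired gap in microseconds */

LARGE_INTEGER freq, start, stop;
QueryPerformanceFrequency(&freq);                 /* ticks per second */
LONGLONG delayTicks = (freq.QuadPart * DELAY_US) / 1000000;
QueryPerformanceCounter(&start);
do {
    QueryPerformanceCounter(&stop);               /* busy-wait; the scheduler
                                                     can still preempt here */
} while (stop.QuadPart - start.QuadPart < delayTicks);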
Question:
Is there a way to configure the com port or the ftdi with a defined delay between each byte?
In other words: getting rid of the "for" and using WriteFile() with the full size of the data instead of 1.
Alternatively, does anyone know a different way that fits?
Limitation:
The whole program should be time-optimized. The device can run at 230.4 kBaud. Running at a lower speed (38.4 kBaud) already solves the problem; however, the overall time is then too long.
Programming in C using Visual Studio.

Cyclic data reading (like 1Wire DS18B20 temperature) without blocking main program

I'm trying to do some temperature reading using a DS18B20 sensor on a Raspberry Pi. My problem is that reading data from this sensor takes time. It's not much, more or less 1 s, but I cannot allow my main program to wait until this is done. I don't need to have the most recent value. It's temperature, so I'm going to ask about it every minute or so, but if the sensor makes a measurement every 10 s, that will provide a recent enough value. In the meantime I have to process other requests made to the application. So I am thinking of some kind of endless measure loop. In general it would look like:
> start time measurement
> get DS18B20 value from 1 wire
> parse output
> stop measure time
> get the execution time, and put it in some global variable
> sleep for UPDATE_EVERY_X - execution time
So I thought of using fork(), but this creates zombies when the main process exits. The main application is a kind of server, so it won't exit gently most of the time, so I need some kind of additional protection, and this should not be a Linux-only method. I'm trying to write my app as portably as I can.
My second thought is to use threads: dispatch one thread to do this infinite loop and implement some basic producer-consumer scheme with mutexes etc. That thread would only lock the output temperature when all is done, so this would make a significant difference in blocking time.
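Roughly this shape, as a minimal sketch (readSensor() and UPDATE_EVERY_X are hypothetical stand-ins for the blocking 1-Wire read and the refresh period):

#include <pthread.h>
#include <unistd.h>

#define UPDATE_EVERY_X 10               /* example: seconds between reads */
extern float readSensor(void);          /* hypothetical blocking ~1 s read */

static pthread_mutex_t temp_lock = PTHREAD_MUTEX_INITIALIZER;
static float latest_temp;               /* shared with the main program */

static void *measureLoop(void *arg)
{
    (void)arg;
    for (;;) {
        float t = readSensor();         /* the slow part runs off-thread */
        pthread_mutex_lock(&temp_lock);
        latest_temp = t;                /* only this assignment is locked */
        pthread_mutex_unlock(&temp_lock);
        sleep(UPDATE_EVERY_X);
    }
    return NULL;
}

float getLatestTemp(void)               /* called from the main program */
{
    pthread_mutex_lock(&temp_lock);
    float t = latest_temp;
    pthread_mutex_unlock(&temp_lock);
    return t;
}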
My third option is to use asynchronous I/O, but this is kind of magic for me right now. I have not used it before, but it appears in search results.
And for clarification, this is not strictly about the 1-Wire DS18B20, but about the general approach when you need to do a task every x seconds and share information between processes, kind of like embedded timer interrupts.
Best regards, voodoo16.
If you want the simplest option, remember that the "Convert T" and "Read Scratchpad" commands are two distinct steps. Your program can start a temperature conversion, then come back for the value later. There's no need to stall the program, unless you want the temperature value the exact instant the conversion finishes.
void mainLoop(void) {
    while (1) {
        // Do things
        int16_t rawTemperature = readScratchpad(myTemperatureSensor); // Get the last temperature reading
        if (!getTemperature(myTemperatureSensor)) { // Start the next conversion
            // Temperature conversion was not done, throw out the value
            rawTemperature = INT16_MAX;
        }
        else {
            float temperature = (float)rawTemperature / 16.0f; // 1/16 degC per LSB at the DS18B20's default 12-bit resolution
            saveTemperature(temperature);
        }
        // Do other things while the temperature conversion happens
    }
}
Here getTemperature issues a "Convert T" command and returns 0 or 1 according to the datasheet (0 if the prior conversion wasn't done yet, 1 if a new conversion has been started).

How to buffer and write to disk a low latency input with C

I need to write C code (not C# or C++) to get data from a hardware analyzer (through a 100 Mb TCP/IP network) and write it to disk.
Why do I say "low latency"? The hardware analyzer has a 9 KB internal buffer, which holds only 2 ms of data; after that it runs into buffer overflow and I lose information packets.
My application is able to read this buffer without losing any packets, but I noticed I wasn't able to write the data to disk at this rate.
My code looks like this:
int main(int argc, char *argv[])
{
    pthread_t th_rx;                    /* thread for receiving packets */
    outputfile = fopen("output.log", "wb");
    link_open();                        /* open device link */
    pthread_create(&th_rx, NULL, read_packets, NULL);
    // running loop
    pthread_join(th_rx, NULL);          /* wait for the receive thread */
    fclose(outputfile);
    link_close();                       /* close device link */
    return 0;
}
static thread_result read_packets(void *arg)
{
    char *rxbuf;                        /* points at the extracted payload */
    while (receiving)
    {
        bytes_received = read_packet(); /* read packet from device buffer */
        rxbuf = extract_data(bytes_received);
        fwrite(rxbuf, 1, bytes_received, outputfile);
    }
    return thread_return;
}
What I need here are ideas for how to do that.
1: don't write each packet as it is received; create a circular buffer?
2: create a two-thread circular buffer?
Any ideas on how to improve my code and what I can do to get stable writes?
Code examples would be very much appreciated because I'm feeling lost :(
Thanks
As you guessed correctly, the first thing you should do is: do not write to disk while you are reading from the device.
Create a circular/ring buffer (or just a list of buffers) to store the data read from the device. This operation will be fast enough that the device memory does not fill up.
However, if the device read is non-blocking, you may want to sleep so that your loop does not hog the CPU.
One thread reads data from the device and puts it in the buffer.
In another thread, keep writing the buffer to disk, as in the sketch below.
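(A minimal sketch of that two-thread ring buffer; producer_put() would be called from the question's read loop instead of fwrite(). The slot sizes and the drop-on-overflow policy are illustrative, not a recommendation:)

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 256
#define SLOT_SIZE  4096

static char ring[RING_SLOTS][SLOT_SIZE];
static size_t lengths[RING_SLOTS];
static int head, tail, count;                 /* all guarded by the mutex */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

void producer_put(const char *data, size_t len)   /* device-reader thread */
{
    pthread_mutex_lock(&m);
    if (count < RING_SLOTS && len <= SLOT_SIZE) { /* this sketch drops on overflow */
        memcpy(ring[head], data, len);
        lengths[head] = len;
        head = (head + 1) % RING_SLOTS;
        count++;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&m);
}

void consumer_loop(FILE *out)                     /* disk-writer thread */
{
    char local[SLOT_SIZE];
    for (;;) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&nonempty, &m);
        size_t len = lengths[tail];
        memcpy(local, ring[tail], len);           /* copy out while locked */
        tail = (tail + 1) % RING_SLOTS;
        count--;
        pthread_mutex_unlock(&m);
        fwrite(local, 1, len, out);               /* slow disk I/O runs unlocked */
    }
}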
As you said, the hardware analyzer has 9 KB of memory for only 2 ms of data. The speed required for writing to the HDD is therefore 9*1024*500 = 4,608,000 B/s, or about 4.6 MB/s. Any test computer should have a sustained HDD write speed of at least 10 MB/s, so the only real problem here is writing to the HDD in portions big enough to leave time for the writing itself. For instance, a buffer can hold 1 s of data, i.e. about 4.6 MB. Storing such a portion is faster than collecting 1 s of data from the hardware device, so the "writing thread" needs no more than about 500 ms to do its job. In a real program the timing can differ, but in any case it should be enough even on single-core systems. Therefore you may use two ping-pong buffers (one being filled with data from the hardware while the other is stored to the HDD) with two threads, and no circular buffer at all. This is the producer/consumer model, which is well described and has a lot of examples.
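(Sketched minimally, that ping-pong scheme could look like this; fill_from_device() is a hypothetical blocking read that collects about 1 s of data and returns the byte count:)

#include <pthread.h>
#include <stdio.h>

#define BUF_BYTES 4608000                  /* about 1 s of analyzer data */

extern size_t fill_from_device(char *dst, size_t max);  /* hypothetical */

struct job { FILE *out; char *buf; size_t len; };

static void *write_job(void *arg)          /* stores one buffer to disk */
{
    struct job *j = arg;
    fwrite(j->buf, 1, j->len, j->out);
    return NULL;
}

void capture(FILE *out)
{
    static char bufs[2][BUF_BYTES];
    struct job j = { out, NULL, 0 };
    pthread_t writer;
    int active = 0, writing = 0;
    for (;;) {
        size_t n = fill_from_device(bufs[active], BUF_BYTES); /* ~1 s */
        if (writing)
            pthread_join(writer, NULL);    /* previous store is done by now */
        j.buf = bufs[active];
        j.len = n;
        pthread_create(&writer, NULL, write_job, &j);
        writing = 1;
        active ^= 1;                       /* ping-pong: swap the buffers */
    }
}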
You should look into asynchronous I/O. However, async I/O APIs are operating-system dependent and you didn't specify which OS you're using. Also, your current use of threads makes no sense: if you want to use threads (instead of async I/O), then you'd have one thread doing the packet reading and the other thread doing the writing. Putting both into a single thread gives you no benefit.
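(For the POSIX flavour of async I/O, a minimal sketch could look like this; Linux, link with -lrt on older glibc. This is an illustration of the API shape, not the only option:)

#include <aio.h>
#include <errno.h>
#include <string.h>

/* Queue one asynchronous write; aio_write() returns immediately and the
   kernel completes the write in the background. The aiocb and the data
   buffer must stay valid until the operation finishes. */
int queue_write(int fd, char *data, size_t len, off_t offset, struct aiocb *cb)
{
    memset(cb, 0, sizeof *cb);
    cb->aio_fildes = fd;
    cb->aio_buf    = data;
    cb->aio_nbytes = len;
    cb->aio_offset = offset;
    return aio_write(cb);
}

/* Poll later, between packet reads: */
int write_done(const struct aiocb *cb)
{
    return aio_error(cb) != EINPROGRESS;   /* then aio_return() yields the
                                              byte count or the error */
}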

serial device ignores EscapeCommFunction with C

I am migrating (finally) from MS-DOS to Windows XP for controlling a meter via the serial port. My old C DOS code works fine.
I want it to work as follows:
- the meter continuously takes readings every few seconds, but does not send any info until it is requested by the computer
- when the computer is ready to receive info from the meter, it requests it; it does not accept info otherwise
My problem is that the readings are just coming into the computer as they are generated by the meter.
I have set the DCB serial params as follows, intending to control communication using RTS and DTR:
dcbSerialParams.BaudRate=CBR_4800;
dcbSerialParams.ByteSize=7;
dcbSerialParams.StopBits=TWOSTOPBITS;
dcbSerialParams.Parity=EVENPARITY;
dcbSerialParams.fDtrControl=DTR_CONTROL_ENABLE;
dcbSerialParams.fRtsControl=RTS_CONTROL_ENABLE;
My old code under DOS was like this:
outportb(COM1+4, 0x03);              /* start Minolta reading */
for (j = 0; j <= 10; j++)            /* each reading consists of 11 ASCII characters */
{
    while (!(inportb(COM1+5) & 1));  /* wait until a char is received */
    reading[j] = inportb(COM1);
}
sscanf(&reading[4], "%f", &lum[k]);
outportb(COM1+4, 0x00);              /* stop Minolta reading */
It seems to me that this should work:
void serial_notready(void)
{
    EscapeCommFunction(hSerial, CLRDTR);
    EscapeCommFunction(hSerial, CLRRTS);
}

void serial_ready(void)
{
    EscapeCommFunction(hSerial, SETDTR);
    EscapeCommFunction(hSerial, SETRTS);
}

int serial_read(char reading[])
{
    DWORD dwBytesRead = 0;
    int nbytes = 11;
    ReadFile(hSerial, reading, nbytes, &dwBytesRead, NULL);
    return dwBytesRead;
}

serial_ready();
x = 0;
while (x == 0) { x = serial_read(reading); }
serial_notready();
HOWEVER, the Minolta does not wait to receive the RTS from the computer; it just goes ahead and sends readings as each becomes available. At the same time, the computer does not reject any unwanted reading, but accepts it.
I have been bashing my head against the wall trying to figure this out, trying all kinds of permutations to no avail. Any help greatly appreciated!
Update:
The underlying story is that I present a given luminance (brightness) on a display and then need the corresponding luminance reading. This is done for a whole set of luminances.
[ASCII chart: luminance (LUM) shown as a series of steps over TIME]
I present lum1, lum2, lum3, lum4, .... If the measurements are not synchronised to the display, then I may get a supposed reading3 that is actually lum2, or some sort of average because the reading crossed the border between the lum2 and lum3 displays. And, as you said, Hans, the readings will always lag behind the display luminances. Even if I were always systematically one reading behind, it would be bad (my situation is worse: there is a random relation between the reading and the luminance).
So the behaviour of the Windows serial routines is a nightmare for me. Thanks again for the help!
dcbSerialParams.fDtrControl=DTR_CONTROL_ENABLE;
dcbSerialParams.fRtsControl=RTS_CONTROL_ENABLE;
You enable the DTR and RTS signals right away, so the meter will immediately start sending data when you open the port. That data gets buffered in the driver's receive buffer. You didn't have a buffer before, in the DOS code. It depends how long it takes for you to call serial_notready(); you'll have a pretty full buffer if that takes a second or so. Yes, that makes it look like the meter is just sending data. And you are always reading an old sample.
Start with the DCB values set to DISABLE. Beware that the scheme is brittle: you could turn the signal off pretty reliably back in DOS, but now you've got a driver in between, so you may well end up turning RTS off too late, which risks getting a stale reading. An alternative is to start a thread that just reads continuously, and have your main code simply use the last value that it read. The overhead is quite low; serial ports are slow.
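(A rough sketch of that reader thread; hSerial and the 11-character readings are from the question, everything else is illustrative:)

#include <windows.h>
#include <string.h>

extern HANDLE hSerial;           /* the port handle from the question */

CRITICAL_SECTION lock;           /* InitializeCriticalSection(&lock) once */
char lastReading[12];            /* most recent 11-char reading + NUL */

DWORD WINAPI readerThread(LPVOID arg)
{
    char buf[11];
    DWORD n;
    (void)arg;
    for (;;) {
        if (ReadFile(hSerial, buf, 11, &n, NULL) && n == 11) {
            EnterCriticalSection(&lock);
            memcpy(lastReading, buf, 11);
            lastReading[11] = '\0';
            LeaveCriticalSection(&lock);
        }
    }
    return 0;
}

Start it once with CreateThread(NULL, 0, readerThread, NULL, 0, NULL), and have the main code copy lastReading under the same critical section whenever it wants the latest sample.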
The first thing to do would be to check the return values of the calls to EscapeCommFunction(). If the return value is zero, the call failed, and you should use GetLastError() to get more error information.
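For example:

if (!EscapeCommFunction(hSerial, SETRTS))
    printf("EscapeCommFunction(SETRTS) failed, error %lu\n", GetLastError());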
I use a free third-party serial port emulator, VPS. It's got an interval request timer that dictates when exactly the data needs to be updated/grabbed. It also allows me to log the bus packets into an Excel file.

Serial programming: measuring time between characters

I am sending/receiving data over a serial line in Linux and I would like to find the delay between characters.
Modbus uses a 3.5 character delay to detect message frame boundaries. If there is more than a 1.5 character delay, the message frame is declared incomplete.
I'm writing a quick program in C which is basically
fd = open(MODEMDEVICE, O_RDWR | O_NOCTTY | O_NONBLOCK);
// setup newtio
....
tcsetattr(fd, TCSANOW, &newtio);
for (;;) {
    res = read(fd, buf, 1);
    if (res > 0) {
        // store time in milliseconds?
        // do stuff
    }
}
Is there some way of measuring the time here? Or do I need to look at retrieving data from the serial line in a different way?
I've also tried hooking into SIGIO to get a signal whenever there is data, but I seem to get the data 8 bytes at a time.
(yes, I know there exist some modbus libraries but I want to use this in other applications)
The simple answer is... you cannot (not without writing your own serial driver)!
If you are writing a MODBUS master there is some hope: you can either detect the end of a slave response by waiting any amount of time (provided it's longer than 3.5 characters) without receiving anything (select(2) can help you here; see the sketch below), or by parsing the response on the fly as you read it (the second method wastes much less time). You must also be careful to wait at least 3.5 character times before starting to transmit a new request, after receiving the response to the previous request. "At least" is operative here! Waiting more doesn't matter. Waiting less does.
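(A minimal sketch of that select(2) idea for the master side; char_time_us, one character time in microseconds, is an assumed input, roughly 11 bits divided by the baud rate:)

#include <sys/select.h>

/* Returns 1 if the line has been silent for 3.5 character times
   (frame boundary), 0 if another byte is already waiting. */
int frame_complete(int fd, long char_time_us)
{
    fd_set rfds;
    struct timeval tv;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec  = 0;
    tv.tv_usec = 35L * char_time_us / 10;    /* 3.5 character times */
    return select(fd + 1, &rfds, NULL, NULL, &tv) == 0;
}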
If you are writing a MODBUS slave then you're out of luck. You simply cannot do it reliably from userspace Linux; you have to write your own serial driver.
BTW, this is not Linux's fault. This is due to the unbelievable stupidity of MODBUS's framing method.
Modbus is like a lot of old protocols and really hates modern hardware.
The reason you're getting 8 bytes at a time is:
Your PC has an (at least) 16-byte serial FIFO on receive and transmit, in the hardware. Most are 64 bytes or bigger.
It is possible to tell the UART device to time out and issue a receive interrupt after a number of character times.
The trigger level is adjustable, but the low-level driver sets it "smartly" (try low-latency mode using setserial).
You can fiddle with the code in the serial driver if you must. Google it (mature content warning); it is not pretty.
So the routine, as pseudocode, is:
int actual = read(packet, timeout of 1.5 chars)
look at the actual # of received bytes
if less than a packet, it has issues; discard.
Not great.
You can't use timeouts. At higher baud rates a 3.5-character timeout means a few milliseconds, or even hundreds of microseconds. Such timeouts can't be handled in Linux user space.
On the client side it isn't a big deal, since Modbus doesn't send asynchronous messages. So it's up to you not to send 2 consecutive messages within the 3.5-character timeout.
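(Enforcing that gap on the client is straightforward; a minimal sketch, where gap_us is an assumed constant holding 3.5 character times in microseconds:)

#include <time.h>
#include <unistd.h>

static struct timespec last_tx;             /* time of the previous send */

void send_frame(int fd, const char *frame, size_t frame_len, long gap_us)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long elapsed_us = (now.tv_sec - last_tx.tv_sec) * 1000000L
                    + (now.tv_nsec - last_tx.tv_nsec) / 1000L;
    if (elapsed_us < gap_us)
        usleep(gap_us - elapsed_us);        /* pad out the inter-frame gap */
    write(fd, frame, frame_len);
    clock_gettime(CLOCK_MONOTONIC, &last_tx);
}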
On the server side, the problem is that if your clients have extremely short response timeouts and Linux is too busy, you can't write a bullet-proof framing solution. There is a chance that the read() function will return more than one packet. Here is a (little contrived) example:
A client writes a packet to the server. The timeout is, let's say, 20 ms.
Let's say that Linux is very busy at the moment, so the kernel doesn't wake up your thread within the next 50 ms.
After 20 ms the client detects that it didn't receive any response, so it sends another packet to the server (maybe resending the previous one).
When Linux finally wakes up your reading thread after 50 ms, the read() function can get 2 packets, or even 1 and a half, depending on how many bytes were received by the serial port driver.
In my implementation I use a simple method that tries to parse bytes on the fly: first detecting the function code, then trying to read all remaining bytes for that specific function. If I get one and a half packets I parse just the first one, and the remaining bytes are left in the buffer. If more bytes come within a short timeout I add them and try to parse; otherwise I discard them. It's not a perfect solution (for instance, some sub-codes for function 8 don't have a fixed size), but since MODBUS RTU doesn't have any STX/ETX characters, it's the best one I was able to figure out.
I think you are going about this the wrong way. There is a built-in mechanism for ensuring that characters come in all together.
Basically, you are going to want to use ioctl() and set the VMIN and VTIME parameters appropriately. In this case, it seems like you'd want VMIN (minimum number of characters in a packet) to be 0 and VTIME (the amount of time allowed between characters, in tenths of a second) to be 15.
Some really basic example code:
struct termio t;
ioctl( fd, TCGETA, &t );      /* start from the current settings */
t.c_cc[ VMIN ] = 0;
t.c_cc[ VTIME ] = 15;
if (ioctl( fd, TCSETAW, &t ) == -1)
{
    printf( "ioctl(set) failed on port %s. Exiting...", yourPort );
    exit( 1 );
}
Do this after your open() but before your read(). Here are a couple of links that I've found wildly helpful:
Serial Programming Guide
Understanding VMIN and VTIME
I hope that at least helps/points you in the right direction, even if it isn't a perfect answer to your question.
