Reading from a COM port destroys lines - c

I'm trying to read data from a COM port line-by-line in Windows. In PuTTY, the COM connection looks fine - my serial device (an MSP430 Launchpad) outputs the string "Data" once per second. However, when I use a simple C program to read the COM port and print out the number of bytes read, then the data itself, it gets completely mangled:
0
6 Data
2 Data
4 ta
6 Data
3 Data
3 a
a
6 Data
6 Data
2 Data
The lines saying 6 Data are correct (four characters, then \r\n), but what's happening to those lines that do not contain a complete message? According to the documentation, ReadFile should read an entire line by default. Is this incorrect - do I need to buffer it myself and wait for a linefeed character?
Note that not all those errors would occur in each run of the code; I did a few runs and compiled a variety of errors for your viewing pleasure. Here's the code I'm using:
#include <windows.h>
#include <stdio.h>
static DCB settings;
static HANDLE serial;
static char line[200];
static unsigned long read;
static unsigned int lineLength = sizeof(line) / sizeof(char);
int main(void) {
    int i = 10;

    serial = CreateFile("COM4",
                        GENERIC_READ | GENERIC_WRITE,
                        0, NULL,
                        OPEN_EXISTING,
                        0, NULL);

    GetCommState(serial, &settings);
    settings.BaudRate = CBR_9600;
    settings.ByteSize = 8;
    settings.Parity = NOPARITY;
    settings.StopBits = ONESTOPBIT;
    SetCommState(serial, &settings);

    while (i) {
        ReadFile(serial, &line, lineLength, &read, 0);
        printf("%lu %s\n", read, line);
        i--;
    }
    scanf("%c", &read);
    return 0;
}
Compiled on Windows 7 64-bit using Visual Studio Express 2012.

What's happening is that ReadFile returns as soon as it receives any data at all. Because bytes can arrive on a serial port at any point in the future, ReadFile hands back whatever happens to be available rather than waiting for a complete line. The same thing happens on Linux if you read from a serial port: the data you get back may or may not be an entire line, depending on how much is sitting in the buffer when your process gets scheduled again.
If you take another look at the documentation, notice that it will only return a line when the HANDLE is in console mode:
Characters can be read from the console input buffer by using ReadFile with a handle to console input. The console mode determines the exact behavior of the ReadFile function. By default, the console mode is ENABLE_LINE_INPUT, which indicates that ReadFile should read until it reaches a carriage return. If you press Ctrl+C, the call succeeds, but GetLastError returns ERROR_OPERATION_ABORTED. For more information, see CreateFile.
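So with a serial handle you do need to buffer the input yourself and split it on the line terminator. Here is a minimal sketch of one way to do that, assuming the handle was opened as in the question; the readLine helper is illustrative, not part of any Windows API:

#include <windows.h>

/* Accumulate bytes from the serial handle until '\n' is seen.
   Returns the line length (without "\r\n"), or -1 on an I/O error.
   A sketch only: no timeout handling beyond whatever the port's
   default COMMTIMEOUTS provide. */
static int readLine(HANDLE serial, char *out, int outSize) {
    int len = 0;
    while (len < outSize - 1) {
        char c;
        DWORD got = 0;
        if (!ReadFile(serial, &c, 1, &got, NULL))
            return -1;          /* read failed */
        if (got == 0)
            continue;           /* nothing available yet, try again */
        if (c == '\n')
            break;              /* end of line reached */
        if (c != '\r')
            out[len++] = c;     /* drop the carriage return, keep the rest */
    }
    out[len] = '\0';
    return len;
}

Reading one byte at a time keeps the sketch short; for higher throughput you would read bigger chunks into a rolling buffer and scan that for newlines, but the principle is the same: ReadFile gives you bytes, and the line structure is yours to reassemble.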

Related

Linux fread() call to USB device freezes when exposed to high CPU utilization

Recently I've written a driver for a drawing tablet called the "Boogie Board RIP" to be able to use it as an input device for Linux. It can be connected via USB to a computer. When the provided pen is near or touching the device's screen, it sends data telling where the pen is on the screen.
Basically the driver works great. I can write on it as if it were a Wacom tablet.
At unpredictable times, the program will hang on the line below and the cursor on my computer screen will stay in place:
fread(packet, sizeof(char), BYTES, f);
Where:
"packet" is an array of 8 bytes
"BYTES" is 8
"f" is a file opened in binary read (rb) mode. In my case it's /dev/usb/hiddev0
The basic program layout is a while loop that reads one 8-byte packet at a time. Below is a mock-up of the much larger thing:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    char *path = argv[1];
    unsigned char packet[8];
    FILE *f = fopen(path, "rb");
    int i;

    while (1) {
        fread(packet, sizeof(char), 8, f);
        for (i = 0; i < 8; i++) {
            printf("%x", packet[i]);
            fflush(stdout);
        }
    }
}
I started to notice that my driver would "freeze" more often when I was running more things, like watching youtube, playing music... These aren't great examples. Basically I began to suspect it was related to CPU utilization. So I wrote the program below to test it:
#include <stdio.h>

int main() {
    while (1) {
        printf("a");
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
    }
}
Running this "infinite loop" program in a separate terminal while running the reader above in another terminal results in much more frequent freezes: the program stops at the fread() line, without fail, within 2 seconds.
I learned that when the call to fread() doesn't get any data, I can still read from the device with another instance of that program, or simply by printing out the contents of the file via sudo cat /dev/usb/hiddev0. The original process remains stuck while the new program spits out the data coming from the device.
It seems as though the file simply closes. But that doesn't make sense because then fread would segfault on the next read. Looking for any ideas.
EDIT:
I solved this by using libusb to deal with reading from my device rather than trying to read directly from the device file.
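For reference, the libusb-1.0 approach looks roughly like the sketch below. The vendor/product IDs and the interrupt endpoint address are placeholders for illustration only; the real values for the Boogie Board RIP would come from lsusb and the device's descriptors.

#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define VENDOR_ID   0x1234   /* placeholder: use the ID reported by lsusb */
#define PRODUCT_ID  0x5678   /* placeholder */
#define ENDPOINT_IN 0x81     /* placeholder interrupt IN endpoint */

int main(void) {
    libusb_context *ctx = NULL;
    libusb_device_handle *dev;
    unsigned char packet[8];
    int transferred, r, i;

    if (libusb_init(&ctx) != 0)
        return 1;

    dev = libusb_open_device_with_vid_pid(ctx, VENDOR_ID, PRODUCT_ID);
    if (!dev) {
        libusb_exit(ctx);
        return 1;
    }

    /* Take the interface away from the hiddev/hidraw kernel driver. */
    libusb_set_auto_detach_kernel_driver(dev, 1);
    if (libusb_claim_interface(dev, 0) != 0) {
        libusb_close(dev);
        libusb_exit(ctx);
        return 1;
    }

    while (1) {
        /* Wait (timeout 0 = forever) for one 8-byte report from the tablet. */
        r = libusb_interrupt_transfer(dev, ENDPOINT_IN, packet, sizeof(packet),
                                      &transferred, 0);
        if (r != 0)
            break;
        for (i = 0; i < transferred; i++)
            printf("%02x", packet[i]);
        printf("\n");
        fflush(stdout);
    }

    libusb_release_interface(dev, 0);
    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}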

ReadFile on standard input crashes on Windows 7

Trying to read standard input with ReadFile works on Windows 8+ but crashes on Windows 7.
#include <windows.h>

int main() {
    char c[1];
    HANDLE in = GetStdHandle(STD_INPUT_HANDLE);
    ReadFile(in, c, 1, NULL, NULL);
    return 0;
}
produces
Program received signal SIGSEGV, Segmentation fault.
0x00000000770f5803 in VerifyConsoleIoHandle () from C:\Windows\system32\kernel32.dll
on Windows 7
The lpNumberOfBytesRead argument is required unless the ReadFile call will complete asynchronously (that is, the file/device was opened with FILE_FLAG_OVERLAPPED and lpOverlapped is provided).
On Windows 8 and later this parameter is checked for NULL before being written to (effectively making it optional), but this behavior is not documented anywhere.
Reading standard input without checking the number of bytes read is a bad idea anyway, since the number of bytes read could be less than requested (or even 0) if input is redirected to a pipe.
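The fix is simply to pass a real DWORD for the byte count; a minimal corrected version of the snippet above:

#include <windows.h>

int main() {
    char c[1];
    DWORD bytesRead = 0;
    HANDLE in = GetStdHandle(STD_INPUT_HANDLE);

    /* For a synchronous read, lpNumberOfBytesRead must point at a real DWORD. */
    if (!ReadFile(in, c, 1, &bytesRead, NULL))
        return 1;
    return 0;
}

Checking bytesRead also covers the redirected-input case mentioned above, where a read can return fewer bytes than requested, or zero at end of input.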

Segmentation fault on sending text from file from server to client

I'm currently still programming a simple client-server application in C on Ubuntu. So far my login function seems to work well (enter some text in the client, grab text from a file on the server and verify), but this particular display option is giving me trouble.
Some snippets of the server-side code (I grabbed the file copy to buffer function below from another site):
char bflag[1];            // main menu option received from client
char buffer[BUFSIZE+1];   // BUFSIZE is 1024
long lSize;
size_t result;
FILE * userf;

userf = fopen("Books.txt", "r+b");
recv(new_sockfd, bflag, BUFSIZE, 0);   // receive flag from client side

if ((strncmp(bflag, "a", 1)) == 0)     // display flag received
{
    fseek(userf, 0, SEEK_END);
    lSize = ftell(userf);
    rewind(userf);

    // copy the file into the buffer:
    result = fread(buffer, 1, lSize, userf);
    send(new_sockfd, buffer, BUFSIZE, 0);
}
fclose(userf);
And on the client side, utilizing a switch for the various options:
char bbuf[BUFSIZE+1];   // BUFSIZE is 1024

switch (mmenuc)
{
    case 1:
    {
        strcpy(mmenuf, "a");
        send(sockfd, mmenuf, BUFSIZE, 0);   // send flag 'a' to server
        system("clear");
        printf("Listing of books available:\n");
        printf("O = Available   X = Unavailable\n");
        printf("\n");
        recv(sockfd, bbuf, BUFSIZE, 0);
        printf("%s", bbuf);
        printf("\n");
        getchar();   // eats the "\n"
        getchar();   // to pause
        break;
    }
The problem I am facing now is that all the text in the file is retrieved and appears fine on the client-side terminal, but the server-side terminal gives a segmentation fault.
I assume there's a buffer overflow somewhere, but I'm not sure what's causing it.
Also, the Books.txt file is padded with spaces for an editing function later.
The recv() on the server probably stores something like "a<CR><LF>" into the one-byte buffer bflag. Not good. It should cause an error, but it does not always cause one immediately.
You also do not need to figure out the size of your file before you do the read.
Just issue: result = fread(buffer, 1, BUFSIZE, userf);
Now, if your file ends up being larger than the buffer, your program won't crash; it just won't read the whole file. You can change the working program later to handle the case where the file is larger than one buffer. Use result (if it is greater than zero) as the number of bytes to send to the client.
If your file is (more than a few bytes) larger than BUFSIZE, the fread into buffer will overflow it and will probably cause a "segmentation fault" when the function in the first code block exits. I think that's where your segmentation fault comes from.
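Putting those points together, the server-side branch might look roughly like the sketch below (variable names are the question's own; this is an illustration of the suggestions above, not a drop-in replacement):

char bflag[1];
char buffer[BUFSIZE + 1];
size_t result;
FILE *userf;

userf = fopen("Books.txt", "r+b");

/* Read at most sizeof(bflag) bytes (not BUFSIZE) into the 1-byte flag. */
recv(new_sockfd, bflag, sizeof(bflag), 0);

if (strncmp(bflag, "a", 1) == 0)
{
    /* Read at most BUFSIZE bytes; result says how many we actually got. */
    result = fread(buffer, 1, BUFSIZE, userf);
    if (result > 0)
        send(new_sockfd, buffer, result, 0);
}
fclose(userf);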

Making terminal input send after a certain number of characters

I am creating a Linux terminal program using C.
I am trying to make a two-digit code address an array location.
I don't want to have to hit Enter after every two-digit input; I want the input to be sent to my buffer variable through scanf directly after two characters are entered.
I do not have a code sample, as I have no idea how to approach this.
Thanks for any help!
You've got two options, which solve the same problem in nearly the same way. The first is to use stdbuf when you run your program; the invocation is:
stdbuf -i0 ./a.out
Using that prevents stdin from being line-buffered, and will let you use fread() or similar commands to retrieve input as it happens.
The other is to put the terminal into non-canonical ("raw") mode, which is well described elsewhere. The downside is that control characters are no longer handled for you. In your program, you would do something like:
#include <termios.h>
#include <unistd.h>

int main(void) {
    struct termios trm;

    tcgetattr(STDIN_FILENO, &trm);   /* get the current settings */
    trm.c_lflag &= ~ICANON;          /* VMIN/VTIME only take effect outside canonical mode */
    trm.c_cc[VMIN] = 1;              /* return after 1 byte read; you might make this a 2 */
    trm.c_cc[VTIME] = 0;             /* block forever until 1 byte is read */
    tcsetattr(STDIN_FILENO, TCSANOW, &trm);
    return 0;
}
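With the terminal set up that way, and VMIN raised to 2, reading the two-digit code is then a plain read(). A minimal sketch, assuming the two digits arrive as ASCII '0' to '9':

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void) {
    struct termios trm;
    char digits[2];
    int index;

    tcgetattr(STDIN_FILENO, &trm);
    trm.c_lflag &= ~ICANON;   /* deliver bytes immediately, no Enter needed */
    trm.c_cc[VMIN] = 2;       /* read() returns once two bytes are available */
    trm.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &trm);

    if (read(STDIN_FILENO, digits, 2) == 2) {
        index = (digits[0] - '0') * 10 + (digits[1] - '0');
        printf("array index: %d\n", index);
    }
    return 0;
}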

How to prevent stdin stream from reading data from associated file descriptor on program start?

I'm using a select() call to detect the presence of input in the main loop of my program. This makes me use the raw file descriptor (0) instead of stdin.
While working in this mode I've noticed that my software occasionally loses a chunk of input at the beginning. I suspect that stdin consumes some of it at program start. Is there a way to prevent this behavior of stdin, or otherwise get the whole input data?
The described effect can be reproduced only when there is already data on standard input at the very moment the program starts. My executable is meant to be used as an xinetd service, so it always has some input at start-up.
Standard input is read in the following way:
Error processInput() {
    struct timeval ktimeout;
    int fd = fileno(stdin);
    int maxFd = fd + 1;

    FD_ZERO(&fdset);
    FD_SET(fd, &fdset);

    ktimeout.tv_sec = 0;
    ktimeout.tv_usec = 1;

    int selectRv = -1;
    while ((selectRv = select(maxFd, &fdset, NULL, NULL, &ktimeout)) > 0) {
        int left = MAX_BUFFER_SIZE - position - 1;
        assert(left > 0);
        int bytesCount = read(fd, buffer + position, left);
        //Input processing goes here
    }
}
Don't mix cooked and raw meat together. Try replacing the read() call with the equivalent fread() call.
It is very likely that fileno(stdin) is initializing the stdin object, causing it to read and buffer some input. Or perhaps you are already calling something that causes it to initialize (scanf(), getchar(), etc...).
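A different way to attack the same problem, offered here as an alternative to the fread() suggestion above: make sure stdio never reads ahead on descriptor 0 in the first place, by switching stdin to unbuffered mode before anything else can touch it. A minimal sketch of that idea:

#include <stdio.h>

int main(void) {
    /* Disable stdin's internal buffer before any other stdio call runs,
       so no start-up data can be pulled into a buffer that the
       select()/read() loop on descriptor 0 never sees. */
    setvbuf(stdin, NULL, _IONBF, 0);

    /* ... the existing select()/read() loop on fileno(stdin) goes here ... */
    return 0;
}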
