Reading raw bytes from a serial port - c

I'm trying to read raw bytes from a serial port sent by an IEC 870-5-101 Win32 protocol simulator, with a program written in C running on 32-bit Linux.
It's working fine for byte values 0x00 - 0x7F. But for values from 0x80 to 0xFF the high bit is wrong, e.g.:
0x7F -> 0x7F //correct
0x18 -> 0x18 //correct
0x79 -> 0x79 //correct
0x80 -> 0x00 //wrong
0xAF -> 0x2F //wrong
0xFF -> 0x7F //wrong
After digging around for two days, I have no idea what's causing this.
This is my config of the serial port:
cfsetispeed(&config, B9600);
cfsetospeed(&config, B9600);
config.c_cflag |= (CLOCAL | CREAD);
config.c_cflag &= ~CSIZE; /* Mask the character size bits */
config.c_cflag |= (PARENB | CS8); /* Parity bit Select 8 data bits */
config.c_cflag &= ~(PARODD | CSTOPB); /* even parity, 1 stop bit */
config.c_cflag |= CRTSCTS; /*enable RTS/CTS flow control - linux only supports rts/cts*/
config.c_iflag &= ~(IXON | IXOFF | IXANY); /*disable software flow control*/
config.c_oflag &= ~OPOST; /* enable raw output */
config.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG); /* enable raw input */
config.c_iflag &= ~(INPCK | PARMRK); /* DANGEROUS no parity check*/
config.c_iflag |= ISTRIP; /* strip parity bits */
config.c_iflag |= IGNPAR; /* DANGEROUS ignore parity errors*/
config.c_cc[VTIME] = 1; /*timeout to read a character in tenth of a second*/
I'm reading from the serial port with:
*bytesread = read((int) fd, in_buf, BytesToRead);
Right after this operation "in_buf" contains the wrong byte, so I guess there's something wrong with my config, which is ported from a Win32 DCB structure.
Thanks for any ideas!

Based on your examples, only the 8th bit (the high bit) is wrong, and it's wrong by being always 0. You are setting ISTRIP in your line discipline on the Linux side, and that would cause this. ISTRIP does not, as the comment in the C code claims, strip parity bits. It strips the 8th data bit.
"If ISTRIP is set, valid input bytes shall first be stripped to seven bits; otherwise, all eight bits shall be processed." (IEEE Std 1003.1, 2004 Edition, chapter 11, General Terminal Interface)
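So the fix is to clear ISTRIP rather than set it. A minimal sketch against the config structure from the question:
config.c_iflag &= ~ISTRIP; /* pass all 8 data bits through unmodified */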

Related

Data Not Being Stored in Receive Data Register (UART RXNE Flag Not Setting)

I have been programming the STM32L412KB Nucleo board, attempting to achieve basic UART communication. Transmission from the board works great, but the board does not appear to receive any data.
For the software side, I have tried using standard HAL code in a few different ways, in both interrupt and non-interrupt mode. I have also tried a more basic approach (shown below). From debugging line by line I have found that the Receive Data Register (RDR) is not filling (and consequently the flag which sets when there is data there is not setting). This has been the error in each case.
The aim of this code is to send back the character entered.
#include "stm32l4xx.h"

int main(void)
{
    /* USER CODE BEGIN 1 */
    /* The USART2 peripheral needs its clock to be enabled. */
    RCC->APB1ENR1 |= RCC_APB1ENR1_USART2EN;
    RCC->AHB2ENR |= RCC_AHB2ENR_GPIOAEN;
    /* The 72 MHz APB1 bus clock with a 9600 baud rate gives a baud rate register value of 0x1D4C. */
    USART2->BRR = 0x1D4C;
    /* For USART2 we need to enable the overall UART (U) driver, the transmission lines (T) and the reading lines (R). UART Enable is last. */
    USART2->CR1 |= USART_CR1_RE | USART_CR1_TE | USART_CR1_UE;
    /* Setting transmission pin */
    GPIOA->MODER |= GPIO_MODE_AF_PP;
    GPIOA->OSPEEDR |= GPIO_SPEED_FREQ_HIGH;
    /* USER CODE END 1 */
    /* Infinite loop */
    /* USER CODE BEGIN WHILE */
    while (1)
    {
        if (USART2->ISR & USART_ISR_RXNE)          // if RX is not empty
        {
            char temp = USART2->RDR;               // fetch the received data
            USART2->TDR = temp;                    // send it back out
            while (!(USART2->ISR & USART_ISR_TC)); // wait for TX to complete
        }
    }
    return 0;
}
To send the data I have used RealTerm Serial Capture and have also tried the STM32CubeIDE console. One possible source of the problem is that the datasheet says:
"In the USART, the start bit is detected when a specific sequence of samples is recognized. This sequence is: 1 1 1 0 X 0 X 0 X 0 0 0 0."
I have not coded any way of leading my data with this; however, all the examples I have seen, from a couple of books as well as videos, did not need to think about this and worked perfectly. Could it be a hardware problem? Is there something I'm not initialising? I have even tried different cables.
Many thanks in advance for any help,
Harry
/*********************************UPDATE**************************************/
First and foremost, thank you very much for the help; I now understand basics such as how to use the datasheet to configure the registers. It is much appreciated. I have updated my code, but the problem still remains.
So I have updated my configuration like so:
/* Configuring GPIO pins */
/* Clear whatever is held in the mode registers for pins 2 and 3 (ANDing with their inverted masks). */
GPIOA->MODER &= ~(GPIO_MODER_MODE2_Msk | GPIO_MODER_MODE3_Msk);
/* The two bits 10 are shifted to the positions which configure the mode of pin 2 and of pin 3 in the mode register
   (10 is alternate function mode). */
GPIOA->MODER |= (0b10 << GPIO_MODER_MODE2_Pos) | (0b10 << GPIO_MODER_MODE3_Pos);
/* Clear whatever is held in the output speed registers for pins 2 and 3. */
GPIOA->OSPEEDR &= ~(GPIO_OSPEEDR_OSPEED2_Msk | GPIO_OSPEEDR_OSPEED3_Msk);
/* Set the speed of pins 2 and 3 to very high (11). */
GPIOA->OSPEEDR |= (0b11 << GPIO_OSPEEDR_OSPEED2_Pos) | (0b11 << GPIO_OSPEEDR_OSPEED3_Pos);
/* Clear whatever is held in the alternate function registers for pins 2 and 3. */
GPIOA->AFR[0] &= ~(GPIO_AFRL_AFSEL2_Msk | GPIO_AFRL_AFSEL3_Msk);
/* Set pins 2 and 3 to their alternate functions (TX and RX). */
GPIOA->AFR[0] |= (7 << GPIO_AFRL_AFSEL2_Pos) | (7 << GPIO_AFRL_AFSEL3_Pos);

/* Clock configuration */
/* Enable the USART2 peripheral clock. */
RCC->APB1ENR1 &= ~(RCC_APB1ENR1_USART2EN_Msk);
RCC->APB1ENR1 |= (0b1 << RCC_APB1ENR1_USART2EN_Pos);
/* Enable the GPIOA port peripheral clock. */
RCC->AHB2ENR &= ~(RCC_AHB2ENR_GPIOAEN_Msk);
RCC->AHB2ENR |= (0b1 << RCC_AHB2ENR_GPIOAEN_Pos);

/* USART configuration */
/* The 72 MHz APB1 bus clock with a 9600 baud rate gives a baud rate register value of 0x1D4C. */
USART2->BRR = 0x1D4C;
/* For USART2 we need to enable the overall UART (U) driver, the transmission lines (T) and the reading lines (R). UART Enable is last. */
USART2->CR1 &= ~(USART_CR1_RE_Msk | USART_CR1_TE_Msk | USART_CR1_UE_Msk);
USART2->CR1 |= USART_CR1_RE | USART_CR1_TE | USART_CR1_UE;
This has greatly developed my understanding of how to properly configure the device. However, I'm still having a problem with the overall aim of the code, which is to bounce back a character: the data is still not being read by the MCU. I will press on and update if I'm successful. I'm thankful for any further suggestions.
This does not initialize the GPIO MODER or OSPEEDR registers:
GPIOA->MODER |= GPIO_MODE_AF_PP;
GPIOA->OSPEEDR |= GPIO_SPEED_FREQ_HIGH;
GPIO_MODE_AF_PP and GPIO_SPEED_FREQ_HIGH are HAL definitions and can't be used at the register level.
You need to set the appropriate values for every pin you use:
It will never receive or send anything, because you forgot to set the GPIO AF registers, so the hardware is not connected to the pins internally.
You can find the alternate function mapping in the datasheet,
and the AF GPIO registers in the Reference Manual.
This sequence should be:
GPIOA->MODER &= ~(GPIO_MODER_MODE2_Msk | GPIO_MODER_MODE3_Msk);
GPIOA->MODER |= (0b10 << GPIO_MODER_MODE2_Pos) | (0b10 << GPIO_MODER_MODE3_Pos);
GPIOA->OSPEEDR &= ~(GPIO_OSPEEDR_OSPEED2_Msk | GPIO_OSPEEDR_OSPEED3_Msk);
GPIOA->OSPEEDR |= (0b11 << GPIO_OSPEEDR_OSPEED2_Pos) | (0b11 << GPIO_OSPEEDR_OSPEED3_Pos);
GPIOA->AFR[0] &= ~(GPIO_AFRL_AFSEL2_Msk | GPIO_AFRL_AFSEL3_Msk);
GPIOA->AFR[0] |= (7 << GPIO_AFRL_AFSEL2_Pos) | (7 << GPIO_AFRL_AFSEL3_Pos);
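For contrast, a sketch of where those HAL names actually belong, if you were using the HAL GPIO API instead of raw registers (this assumes the stm32l4xx HAL headers and the usual HAL initialization):
GPIO_InitTypeDef init = {0};
__HAL_RCC_GPIOA_CLK_ENABLE();      /* GPIOA peripheral clock */
init.Pin = GPIO_PIN_2 | GPIO_PIN_3;
init.Mode = GPIO_MODE_AF_PP;       /* alternate function, push-pull */
init.Pull = GPIO_NOPULL;
init.Speed = GPIO_SPEED_FREQ_HIGH;
init.Alternate = GPIO_AF7_USART2;  /* AF7 routes PA2/PA3 to USART2 */
HAL_GPIO_Init(GPIOA, &init);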

Can't communicate with certain baud rates

My application is able to communicate at baud rates like 4800, 9600 and 115200, but can't at 14400 or 38400. I have to include asm/termios.h because I need struct termios2, since I'm going to use the c_ispeed and c_ospeed members to support arbitrary baud rates.
Also, the second problem I encounter is that the read function doesn't return after VTIME. Do you know why this happens? Any help is appreciated. Thanks.
#include <asm/termios.h>
int serialDevice = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_SYNC);
serialSettings.baudRate = 4800;
serialSettings.dataBits = 8;
serialSettings.hardwareFlowControl = 0;
serialSettings.parity = 0;
serialSettings.parityOdd = 0;
serialSettings.stopBits = 1;
serialSettings.xonxoff = 1;
setSerialSettings(serialDevice, &serialSettings);
//-------------------------------------------------------
int8_t setSerialSettings(int serialDevice, Serial_Params_t *settings)
{
    struct termios2 tty;
    memset(&tty, 0, sizeof tty);
    // get current serial settings
    if (ioctl(serialDevice, TCGETS2, &tty) == -1)
    {
        sendLog("Can't get serial attributes | setSerialSettings", LOG_TYPE_ERROR);
        return FALSE;
    }
    // baud rate
    tty.c_cflag &= ~CBAUD;
    tty.c_cflag |= BOTHER;
    tty.c_ispeed = MAX(110, MIN(settings->baudRate, MAX_BAUDRATE));
    tty.c_ospeed = MAX(110, MIN(settings->baudRate, MAX_BAUDRATE));
    // enable input parity check
    tty.c_iflag |= INPCK;
    // data bits: CS5, CS6, CS7, CS8
    tty.c_cflag &= ~CSIZE;
    switch (settings->dataBits)
    {
    case 5:
        tty.c_cflag |= CS5;
        break;
    case 6:
        tty.c_cflag |= CS6;
        break;
    case 7:
        tty.c_cflag |= CS7;
        break;
    case 8:
    default:
        tty.c_cflag |= CS8;
        break;
    }
    // stop bits
    switch (settings->stopBits)
    {
    case 1:
    default:
        tty.c_cflag &= ~CSTOPB;
        break;
    case 2:
        tty.c_cflag |= CSTOPB;
    }
    // parity
    if (settings->parity == 1)
        tty.c_cflag |= PARENB;
    else
        tty.c_cflag &= ~PARENB;
    // odd/even parity
    if (settings->parityOdd == 1)
        tty.c_cflag |= PARODD;
    else
        tty.c_cflag &= ~PARODD;
    // flow control
    // XON/XOFF
    if (settings->xonxoff == 1)
        tty.c_iflag |= (IXON | IXOFF | IXANY);
    else
        tty.c_iflag &= ~(IXON | IXOFF | IXANY);
    // enable RTS/CTS
    if (settings->hardwareFlowControl == 1)
        tty.c_cflag |= CRTSCTS;
    else
        tty.c_cflag &= ~CRTSCTS;
    tty.c_cc[VMIN] = 1;   // return from read when 1 byte has been received
    tty.c_cc[VTIME] = 10; // 1 second read timeout (in deciseconds)
    tty.c_cflag |= CREAD | CLOCAL; // turn on READ & ignore ctrl lines
    // non-canonical mode
    tty.c_iflag &= ~(IGNBRK | BRKINT | PARMRK | ISTRIP | INLCR | IGNCR | ICRNL);
    tty.c_lflag &= ~(ECHO | ECHONL | ICANON | ISIG | IEXTEN);
    tty.c_oflag &= ~OPOST;
    // flush port & apply attributes
    tcflush(serialDevice, TCIFLUSH);
    if (ioctl(serialDevice, TCSETS2, &tty) == -1)
    {
        sendLog("Can't set serial attributes | setSerialSettings", LOG_TYPE_ERROR);
        return FALSE;
    }
    return TRUE;
}
My application is able to communicate at baud rates like 4800, 9600 and 115200, but can't at 14400 or 38400.
There is a pretty nice writeup for how custom serial speed setting works here: https://github.com/npat-efault/picocom/blob/master/termios2.txt.
In brief, given a struct termios2 identified by tty, to set both input and output speed to custom values, you must
ensure that (tty.c_cflag & CBAUD) == BOTHER. You appear to do this correctly.
set the desired output speed in tty.c_ospeed. You do this, too.
either
ensure that ((tty.c_cflag >> IBSHIFT) & CBAUD) == B0, in which case the output speed will also be used as the input speed, or
ensure that ((tty.c_cflag >> IBSHIFT) & CBAUD) == BOTHER, in which case tty.c_ispeed will be used as the input speed.
You do not do either of those. I'm uncertain why this would cause incorrect communication for some speeds and not others, but the driver is reputed to play some interesting games with speed settings, and maybe you've stumbled across one.
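If you want to try the second option, here is a minimal sketch, reusing the tty and settings variables from your function (IBSHIFT is defined in the same asm headers as BOTHER):
tty.c_cflag &= ~(CBAUD << IBSHIFT); /* clear the input-speed bits */
tty.c_cflag |= BOTHER << IBSHIFT;   /* input speed is taken from c_ispeed */
tty.c_ispeed = MAX(110, MIN(settings->baudRate, MAX_BAUDRATE));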
As for
read function doesn't return after VTIME
I think you have incorrect expectations. You are setting VMIN and VTIME both to nonzero values. In this case, VTIME is the maximum inter-character time, not an overall read timeout. With these settings, a blocking read will wait indefinitely for the first character, then will keep reading subsequent characters, up to the requested number, as long as each one arrives within VTIME deciseconds of the previous one.
If you want an overall timeout on every read() call, then set VMIN to 0, and be prepared for some read() calls to read 0 bytes. As always, read() may also read a positive number of bytes but fewer than requested. That may be more likely to happen in this configuration than in the one you're presently using, depending on your choice of VTIME and the behavior of the peer.
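For example, a sketch of the overall-timeout configuration, using the same tty variable as in your function:
tty.c_cc[VMIN] = 0;   /* don't block indefinitely waiting for a first byte */
tty.c_cc[VTIME] = 10; /* read() returns within 1 second, possibly with 0 bytes */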

Serial application writes "0A" as "0D"

I have a serial app in C that receives data and writes it to a binary file. The problem is that all the data comes through the same, except that when I have 0A on the sending side, I get 0D on the receiving side. I have set the serial port to raw mode and opened the file with the "wb" option. Any clue how to avoid this? If some code is needed, I'll post it.
Thanks.
EDIT--------------------------
File opening:
FILE *fout;
fout = fopen(file,"wb");
Serial options:
options.c_cflag |= (CLOCAL | CREAD);
options.c_cflag &= ~PARENB;
options.c_cflag &= ~CSTOPB;
options.c_cflag &= ~CSIZE;
options.c_cflag |= CS8;
/* To disable software flow control simply mask those bits: */
options.c_iflag &= ~(IXON | IXOFF | IXANY);
options.c_lflag &= ~(ICANON|ECHO|ECHOE|ISIG);
tcsetattr(fd, TCSANOW, &options);
success = 1;
return success;
Writing to the file:
fwrite(buffer,1,n,fout);
----------FIX----------------------
Setting this option fixes the problem:
options.c_oflag &= ~OPOST;
You need to mask off the ICRNL mode which translates the enter (carriage return) key to the newline character. This is on the receiving end, not in your program. There's also a corresponding output mode that might be set on your end, but less likely.
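A sketch of both ends, assuming a termios structure named options as in the question (only the flags relevant to CR/NL translation are shown):
options.c_iflag &= ~(ICRNL | INLCR | IGNCR); /* receiving end: no CR/NL mapping on input */
options.c_oflag &= ~OPOST;                   /* sending end: no output post-processing (e.g. ONLCR) */
tcsetattr(fd, TCSANOW, &options);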

How are flags represented in the termios library?

I'm new to C and driver programming. Currently, I'm programming a user-space driver to communicate with RS-232 over USB on Debian. While researching, I came across the following bit of code.
tty.c_cflag &= ~PARENB; // No Parity
tty.c_cflag &= ~CSTOPB; // 1 Stop Bit
tty.c_cflag &= ~CSIZE;
tty.c_cflag |= CS8; // 8 Bits
I understand the consequences of these lines; however, these operations would only make sense if each control flag constant (PARENB, CSTOPB, etc.) had the same width as a combination of these flags. I can't seem to verify this through any documentation (one of my main grievances with C so far: easy-to-understand documentation is somewhat harder to find).
I would like to ensure that I'm understanding the program correctly, as my approach has been purely inductive, and I'm unsure why these flags would be stored this way. Could somebody verify these findings, or point out something I may be overlooking?
Ex.
tty.c_cflag is hypothetically 4 bits long, with each of the flags from the previous code block corresponding to bits 3, 2, 1, 0. Then I believe the following is how these are stored, if we say flags PARENB (3) and CSTOPB (2) are high, and the other two flags are disabled.
tty.c_cflag = 1100
PARENB = 1000
CSTOPB = 0100
CSIZE = 0000
CS8 = 0000
In C, the best documentation you'll ever find is the source code itself, which you can find on your computer at /usr/include/termios.h (actually spread over one or more of the includes within it). Here's the BSD-based termios.h for Apple platforms that I based my answer on; values are likely to change depending on your flavour of Unix.
There, you'll find out that your tty object is of type struct termios, defined as follows:
struct termios {
    tcflag_t c_iflag;   /* input flags */
    tcflag_t c_oflag;   /* output flags */
    tcflag_t c_cflag;   /* control flags */
    tcflag_t c_lflag;   /* local flags */
    cc_t     c_cc[NCCS]; /* control chars */
    speed_t  c_ispeed;  /* input speed */
    speed_t  c_ospeed;  /* output speed */
};
So c_cflag is of type tcflag_t, which is defined by the following line:
typedef unsigned long tcflag_t;
And an unsigned long is expected to be at least 4 bytes, i.e. 32 bits.
Then all the flags you're using in your example are defined as follows, as 32-bit hexadecimal values:
#define PARENB 0x00001000 /* parity enable */
#define CSTOPB 0x00000400 /* send 2 stop bits */
#define CSIZE 0x00000300 /* character size mask */
#define CS8 0x00000300 /* 8 bits */
That being said, the way it works is that c_cflag is used as a bit array, meaning that each bit is significant for a function. This method is commonly used because bit operations are "cheap" in processing power (your CPU can do a bit operation in one cycle) and "cheap" in memory space: instead of using an array of 32 booleans to store the values (a boolean type having a size of 1 byte to hold one binary value), you're able to store 8 binary values per byte.
Another advantage, and optimization, is that because your CPU is at least 32-bit, and likely 64-bit in 2015, it can apply a mask over all 32 values in one CPU cycle.
An alternative representation of the bitmask would be to create a struct like the following:
struct tcflag_t {
    bool    cignore;
    uint8_t csize;
    bool    cstopb;
    bool    cread;
    bool    parenb;
    bool    hupcl;
    bool    clocal;
    bool    ccts_oflow;
    bool    crts_iflow;
    bool    cdtr_iflow;
    bool    ctdr_oflow;
    bool    ccar_oflow;
};
That struct would be 12 bytes, and to change all of its fields you'd have to do 12 operations.
The operations you can do on these bits follow boolean logic, which is defined by truth tables:
The And (&), Or (|) and Not (~) truth tables:
a b | a & b      a b | a | b      a | ~a
0 0 |   0        0 0 |   0        0 |  1
0 1 |   0        0 1 |   1        1 |  0
1 0 |   0        1 0 |   1
1 1 |   1        1 1 |   1
We usually nickname the And operator "force to zero" and the Or operator "force to one", because unless both values are 1, And will result in 0, and unless both values are 0, Or will result in 1.
So if we consider that tty.c_cflag = 0x00000000 and you want to enable the parity check:
tty.c_cflag |= PARENB;
and then tty.c_cflag will contain 0x00001000, i.e. 0b1000000000000
Then you want to set up a 7-bit data size:
tty.c_cflag |= CS7;
and tty.c_cflag will contain 0x00001200, i.e. 0b1001000000000
Now, let's get back to your question: your "equivalent" example is not really representative, as you're considering CSIZE and CS8 to contain no value.
So let's go through the code you've taken from the example:
tty.c_cflag &= ~PARENB; // No Parity
tty.c_cflag &= ~CSTOPB; // 1 Stop Bit
tty.c_cflag &= ~CSIZE;
tty.c_cflag |= CS8; // 8 Bits
Here, tty.c_cflag contains an unknown value:
0b????????????????????????????????
And you know you want no parity, one stop bit, and a data size of 8 bits. So here you're negating the "set parity" value to turn it off:
~PARENB == 0b0111111111111 (only the low 13 bits are shown; the remaining high bits are all 1s)
And then using the And operator, you're forcing the bit to zero:
tty.c_cflag &= ~PARENB —→ 0b???????????????????0????????????
Then you do the same with CSTOPB:
tty.c_cflag &= ~CSTOPB —→ 0b???????????????????0?0??????????
and finally CSIZE:
tty.c_cflag &= ~CSIZE —→ 0b???????????????????0?000????????
For CSIZE, the goal is to make sure the two bits holding the data length are reset.
Then you set up the right length by forcing the relevant bits to 1:
tty.c_cflag |= CS8 —→ 0b???????????????????0?011????????
Actually, resetting CSIZE to 00 and then setting CS8 to 11 is redundant here, as directly doing tty.c_cflag |= CS8 will make it 11. But it is good practice in case you want to change from CS8 to CS7, which sets only one of the two bits, the other one otherwise staying at its original value.
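As a concrete illustration of that last point (a sketch; the exact flag values are platform-dependent):
tty.c_cflag |= CS8;    /* size bits now 11 (CS8) */
tty.c_cflag |= CS7;    /* OR alone cannot clear a bit: still 11, still CS8! */
tty.c_cflag &= ~CSIZE; /* clear both size bits first ... */
tty.c_cflag |= CS7;    /* ... then set them: size bits now 10 (CS7) */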
Finally, when you open your serial port, the library will check those values to configure the port, using defaults for all the other values you haven't forced, and you'll be able to use your serial port.
I hope my explanation helps you better understand what's going on with the flag settings on the serial port, and the use of bitmasks altogether. FYI, the same principle is used for many other things, for example IPv4 netmasks, file I/O, etc.
The actual values of the macros depend on the platform (e.g., on Linux CSTOPB is defined as 0100 whereas on some BSDs it is 02000). This is why you should not make assumptions as to their exact values.
For instance, it is indeed common that CSIZE and CS8 have the same value, but on some platforms they might not; hence you first AND with the complement of the CSIZE mask (which sets all of the bits that affect character size to zero), and then OR in the value for those bits. If you were to assume that CS8 is the same pattern as the mask, you could omit the first step, but then the code would do the wrong thing, in a very obscure manner and without any warning, on any platform where this assumption didn't hold.
Here PARENB and CSTOPB are individual bit flags (exactly one 1-bit) that can be set with the bitwise OR |, and cleared by bitwise ANDing & with their complement ~. Meanwhile the character sizes, including CS8, can have any number of 1-bits, including zero – they are more like little integers stored in specific bits of a larger integer. CSIZE is a mask that has 1-bits in all the places that signify character size (any of CS5, CS6, CS7, CS8) – this mask can be used either to extract exactly the character size (e.g., to test if ((tty.c_cflag & CSIZE) == CS8)), or to clear it before setting (as is the case here with tty.c_cflag &= ~CSIZE).
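Putting the two answers together, here is a small self-contained sketch of the set/clear/test patterns (the numeric flag values depend on the platform, but the logic is portable):
#include <stdio.h>
#include <termios.h>

int main(void)
{
    struct termios tty = {0};
    tty.c_cflag |= PARENB;            /* set a single-bit flag */
    tty.c_cflag &= ~CSTOPB;           /* clear a single-bit flag */
    tty.c_cflag &= ~CSIZE;            /* clear the whole size field ... */
    tty.c_cflag |= CS8;               /* ... then set the size you want */
    if ((tty.c_cflag & CSIZE) == CS8) /* extract the field and compare */
        puts("8 data bits selected");
    return 0;
}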

Can't set serial port configuration in Linux in C

I have written a Linux program to communicate with my device. I have the "same" program for Windows ("same" because it has the same logic).
I'm using the 8N2 data frame format at 9600 bps, with neither software (XON/XOFF) nor hardware (RTS/CTS) flow control. I don't use the DTR, DCD, RI, DSR pins of the RS-232 9-pin D-sub; I use only the RX and TX pins to communicate with my device.
In Linux I have this part of code:
struct termios PortOpts, result;
int fd; /* File descriptor for the port */

/* Configure Port */
tcgetattr(fd, &PortOpts);
// BaudRate - 9600
cfsetispeed(&PortOpts, B9600);
cfsetospeed(&PortOpts, B9600);
// enable receiver and set local mode, frame format - 8N2, no H/W flow control
PortOpts.c_cflag &= ~(CLOCAL | CREAD | CSIZE | CSTOPB);
PortOpts.c_cflag |= ((CLOCAL | CREAD | CS8 | CSTOPB) & ~PARENB);
PortOpts.c_cflag &= ~CRTSCTS;
// PortOpts.c_cflag &= ~PARENB;
// PortOpts.c_cflag |= CSTOPB;
// PortOpts.c_cflag &= ~CSIZE;
// PortOpts.c_cflag |= CS8;
// no parity check, no software flow control on input
PortOpts.c_iflag |= IGNPAR;
PortOpts.c_iflag &= ~(IGNBRK | BRKINT | PARMRK | ISTRIP | INLCR | IGNCR | ICRNL | IXON | IXOFF | IXANY | INPCK);
// raw data input mode
PortOpts.c_lflag &= ~(ECHO | ECHONL | ICANON | ISIG | IEXTEN);
// raw data output
PortOpts.c_oflag &= ~OPOST;
// setting timeouts
PortOpts.c_cc[VMIN] = 1;  // minimum number of chars to read in noncanonical (raw) mode
PortOpts.c_cc[VTIME] = 5; // time in deciseconds to wait for data in noncanonical (raw) mode
if (tcsetattr(fd, TCSANOW, &PortOpts) == 0) {
    tcgetattr(fd, &result);
    if ((result.c_cflag != PortOpts.c_cflag) ||
        (result.c_oflag != PortOpts.c_oflag) ||
        (result.c_iflag != PortOpts.c_iflag) ||
        (result.c_cc[VMIN] != PortOpts.c_cc[VMIN]) ||
        (result.c_cc[VTIME] != PortOpts.c_cc[VTIME])) {
        perror("While configuring port1");
        close(fd);
        return 1;
    }
} else {
    perror("While configuring port2");
    close(fd);
    return 1;
}
The file descriptor 'fd' is for the '/dev/ttyS0' device.
I get this message: While configuring port2: Input/output error
I have a laptop which doesn't have any serial ports except USB. Could this be the reason?
I run the program as 'root'.
Sorry for my broken English.
Can you run strace on the program? That will give more specifics about just where the I/O error is occurring.
One thing to keep in mind: errno doesn't get reset, so the actual error could be from any system call before the perror.
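On the errno point, a sketch of capturing it immediately after the failing call (hypothetical reporting; needs <errno.h>, <stdio.h> and <string.h>):
if (tcsetattr(fd, TCSANOW, &PortOpts) != 0) {
    int saved = errno; /* save errno before any later call can overwrite it */
    fprintf(stderr, "tcsetattr on /dev/ttyS0 failed: %s\n", strerror(saved));
}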
