How are flags represented in the termios library?

I'm new to C and driver programming. Currently, I'm programming a user space driver to communicate with RS232 over USB using Debian. While researching, I came across the following bit of code.
tty.c_cflag &= ~PARENB; // No Parity
tty.c_cflag &= ~CSTOPB; // 1 Stop Bit
tty.c_cflag &= ~CSIZE;
tty.c_cflag |= CS8; // 8 Bits
I understand the consequences of these lines; however, these operations only make sense if each control flag constant (PARENB, CSTOPB, etc.) has the same width as a combination of these flags. I can't seem to verify this through any documentation (one of my main grievances with C so far: easy-to-understand documentation is hard to find).
I would like to make sure I'm understanding the program correctly, since my approach has been purely inductive, and I'm unsure why these flags would be stored this way. Could somebody verify these findings, or point out something I may be overlooking?
Ex. Suppose tty.c_cflag is hypothetically 4 bits long, with each of the flags from the previous code block corresponding to bits 3, 2, 1, 0. Then I believe the following is how these are stored, if we say flags PARENB (bit 3) and CSTOPB (bit 2) are high and the other two flags are disabled:
tty.c_cflag = 1100
PARENB      = 1000
CSTOPB      = 0100
CSIZE       = 0000
CS8         = 0000

In C, the best documentation you'll ever find is the source code itself, which you can find on your computer at /usr/include/termios.h (the definitions are actually spread over one or more of the headers it includes). I based my answer on the BSD-derived termios.h that Apple ships; the values are likely to change depending on your flavour of Unix.
There, you'll find out that your tty object is of type struct termios, defined as follows:
struct termios {
    tcflag_t c_iflag;    /* input flags */
    tcflag_t c_oflag;    /* output flags */
    tcflag_t c_cflag;    /* control flags */
    tcflag_t c_lflag;    /* local flags */
    cc_t     c_cc[NCCS]; /* control chars */
    speed_t  c_ispeed;   /* input speed */
    speed_t  c_ospeed;   /* output speed */
};
So c_cflag is of type tcflag_t, which is defined by the following line:
typedef unsigned long tcflag_t;
And an unsigned long is at least 4 bytes, i.e. 32 bits, which leaves room for all the flags.
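You can verify the type's width and the actual flag values on your own machine; a minimal sketch (the printed values will differ between Unix flavours):
#include <stdio.h>
#include <termios.h>

int main(void)
{
    /* Width of the flag type, and this platform's values for the flags above. */
    printf("sizeof(tcflag_t) = %zu\n", sizeof(tcflag_t));
    printf("PARENB = 0x%08lx\n", (unsigned long)PARENB);
    printf("CSTOPB = 0x%08lx\n", (unsigned long)CSTOPB);
    printf("CSIZE  = 0x%08lx\n", (unsigned long)CSIZE);
    printf("CS8    = 0x%08lx\n", (unsigned long)CS8);
    return 0;
}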
Then all the flags you're using in your example are defined as follows, as 32-bit values written with 8 hex digits:
#define PARENB  0x00001000  /* parity enable */
#define CSTOPB  0x00000400  /* send 2 stop bits */
#define CSIZE   0x00000300  /* character size mask */
#define CS8     0x00000300  /* 8 bits */
That being said, the way it works is that c_cflag is used as a bit array, meaning each bit is significant to a particular function. This method is commonly used because bit operations are "cheap" in processing power (your CPU can do a bitwise operation in one cycle) and "cheap" in memory: instead of using an array of 32 booleans to store 32 binary values (a boolean type typically occupying a whole byte each), you can store 8 binary values per byte.
Another advantage, and optimization, is that because your CPU is at least 32 bits wide (and likely 64 bits in 2015), it can apply a mask over all 32 values in a single CPU cycle.
An alternative representation of the bitmask would be to create a struct like the following:
struct tcflag_t {
    bool    cignore;
    uint8_t csize;
    bool    cstopb;
    bool    cread;
    bool    parenb;
    bool    hupcl;
    bool    clocal;
    bool    ccts_oflow;
    bool    crts_iflow;
    bool    cdtr_iflow;
    bool    ctdr_oflow;
    bool    ccar_oflow;
};
That struct would be at least 12 bytes, and resetting it would take 12 assignments instead of a single mask operation.
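To make the size comparison concrete, here's a sketch contrasting that hypothetical struct with the single-word bitmask (exact sizes depend on padding and the ABI):
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

struct tcflag_t {   /* the hypothetical expanded form from above */
    bool    cignore;
    uint8_t csize;
    bool    cstopb;
    bool    cread;
    bool    parenb;
    bool    hupcl;
    bool    clocal;
    bool    ccts_oflow;
    bool    crts_iflow;
    bool    cdtr_iflow;
    bool    ctdr_oflow;
    bool    ccar_oflow;
};

int main(void)
{
    printf("expanded struct: %zu bytes, bitmask: %zu bytes\n",
           sizeof(struct tcflag_t), sizeof(unsigned long));
    return 0;
}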
Then the operations you can do on bytes follows the boolean logic, which is defined by truth tables:
The And (&), Or (|) and Not (~) truth tables:
   And               Or               Not
a  b  a&b         a  b  a|b         a   ~a
0  0   0          0  0   0         0    1
0  1   0          0  1   1         1    0
1  0   0          1  0   1
1  1   1          1  1   1
We usually nickname the And operator "force to zero" and the Or operator "force to one": unless both inputs are 1, And yields 0; and unless both inputs are 0, Or yields 1.
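In C, that gives the three classic flag idioms; a sketch using PARENB as the example bit on the tty object from the question:
tty.c_cflag |=  PARENB;    /* set the bit ("force to one")    */
tty.c_cflag &= ~PARENB;    /* clear the bit ("force to zero") */
if (tty.c_cflag & PARENB) {
    /* the bit is currently set: parity is enabled */
}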
So if we consider that tty.c_cflag = 0x00000000 and you want to enable the parity check:
tty.c_cflag |= PARENB;
and then tty.c_cflag will contain 0x00001000, i.e. 0b1000000000000
Then you want to setup 7 bit size:
tty.c_cflag |= CS7;
and tty.c_cflag will contain 0x00001200, i.e. 0b1001000000000
Now, let's get back to your question: your "equivalent" example is not really representative, as you're considering CSIZE and CS8 to contain no value.
So let's get through the code you've taken from the example:
tty.c_cflag &= ~PARENB; // No Parity
tty.c_cflag &= ~CSTOPB; // 1 Stop Bit
tty.c_cflag &= ~CSIZE;
tty.c_cflag |= CS8; // 8 Bits
Here, tty.c_cflag contains an unknown value:
0b????????????????????????????????
And you know you want no parity, one stop bit, and a data size of 8 bits. So here you're
negating the "enable parity" value to turn it off:
~PARENB == 0b0111111111111 (only the low 13 bits are shown; all the higher bits are 1)
And then using the And operator, you're forcing the bit to zero:
tty.c_cflag &= ~PARENB —→ 0b???????????????????0????????????
Then you do the same with CSTOPB:
tty.c_cflag &= ~CSTOPB —→ 0b???????????????????0?0??????????
and finally CSIZE:
tty.c_cflag &= ~CSIZE —→ 0b???????????????????0?000????????
For CSIZE, the goal is to make sure both bits encoding the data length are reset.
Then you set up the right length by forcing to 1 the value:
tty.c_cflag |= CS8 —→ 0b???????????????????0?011????????
Actually, on this platform, resetting CSIZE to 00 and then setting CS8 to 11 is redundant, as doing tty.c_cflag |= CS8 directly would set both bits. But it is good practice, because if you later change from CS8 to CS7, the Or sets only one of the two bits, and the other would keep its stale value unless CSIZE is cleared first.
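Wrapped in a helper, the clear-then-set pattern looks like this (a sketch; set_char_size is a hypothetical name, and size should be one of CS5, CS6, CS7 or CS8):
void set_char_size(struct termios *t, tcflag_t size)
{
    t->c_cflag &= ~CSIZE;  /* reset all character-size bits */
    t->c_cflag |= size;    /* then force the requested size */
}

/* usage: set_char_size(&tty, CS7); */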
Finally, when you open your serial port, the library will check those values to configure the port, use defaults for everything you haven't forced, and you'll be able to use your serial port.
I hope this explanation helps you better understand what's going on with flag settings on the serial port, and with bitmasks in general. FYI, the same principle is used for many other things, for example IPv4 netmasks, file I/O flags, etc.
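To illustrate the file I/O case, opening the serial port itself composes single-bit flags the same way; a minimal sketch (/dev/ttyUSB0 is just an example path for a USB serial adapter):
#include <fcntl.h>

int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  /* OR single-bit open flags together */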

The actual values of the macros depend on the platform (e.g., on Linux CSTOPB is defined as 0100 whereas on some BSDs it is 02000). This is why you should not make assumptions as to their exact values.
For instance, it is indeed common that CSIZE and CS8 have the same value, but on some platforms they might not, hence you first AND with the complement of the CSIZE mask (which sets all of the bits that affect character size to zero), and then OR in the value for those bits. If you were to assume that CS8 is the same pattern as the mask, you could omit the first step, but then the code would do the wrong thing, and in a very obscure manner without any warning, on a platform where this assumption didn't hold.
Here PARENB and CSTOPB are individual bit flags (exactly one 1-bit) that can be set with the bitwise-OR |, and cleared by bitwise-ANDing & their complement ~. Meanwhile the character sizes, including CS8, can have any number of 1-bits, including zero – they are more like little integers stored in specific bits of a larger integer. CSIZE is a mask that has 1-bits in all the places that signify character size (any of CS5, CS6, CS7, CS8) – this mask can be used to either extract exactly the character size (e.g., to test if ((tty.c_cflag & CSIZE) == CS8)), or to clear it before setting (as is the case here with tty.c_cflag &= ~CSIZE).
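For instance, the mask lets you branch on the configured size without knowing which bit patterns the platform chose; a sketch:
switch (tty.c_cflag & CSIZE) {  /* keep only the character-size bits */
case CS5: /* 5 data bits */ break;
case CS6: /* 6 data bits */ break;
case CS7: /* 7 data bits */ break;
case CS8: /* 8 data bits */ break;
}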

Related

Controlling one single bit on a certain port in AVR chips using macros in C

I have experience in AVR programming with CodeVisionAVR. Recently I switched to Atmel Studio and found a minor but annoying problem: I can't control each PIN/PORT as easily as I did in CodeVisionAVR.
Firstly, let's initialize one certain port:
// Initializing one port, for example PORT D:
// Bit7=In Bit6=In Bit5=In Bit4=In Bit3=In Bit2=In Bit1=In Bit0=Out
DDRD=0x01;
// State: Bit7=T Bit6=T Bit5=T Bit4=T Bit3=T Bit2=T Bit1=T Bit0=1
PORTD=(0<<PORTD7) | (0<<PORTD6) | (0<<PORTD5) | (0<<PORTD4) | (0<<PORTD3) | (0<<PORTD2) | (0<<PORTD1) | (1<<PORTD0);
// After initializing, Port D bit 0 should be the only pin set to out.
In CodeVisionAVR, I can do this:
PORTD.0 = 0; // Set Port D bit 0 to low
PORTD.0 = 1; // Or to high
But in Atmel Studio, addressing Port D bit 0 as PORTD.0 gives me an error. I have to do this instead:
PORTD = (1<<PORTD0); // Set Port D bit 0 to high
PORTD = (0<<PORTD0); // and low
As you can see, addressing by shifting bits is much less clean and harder to read / write. I thought CVAVR used something like a struct to imitate the dot (.) addressing method (in C we don't have classes), or some overloaded operators, or some macros, but after digging through the header files included with CVAVR, I could only find this:
sfrb PORTD=0x12;
// ... other ports ...
sfrb PIND=0x10;
My question is, can I imitate the bit addressing method of CVAVR in Atmel Studio? If yes, how? Or was that some IDE-exclusive feature?
What you are saying about CodeVisionAVR
PORTD.0 = 0; // Set Port D bit 0 to low
PORTD.0 = 1; // Or to high
is not valid C, so if you didn't make a mistake in your description it must be a CodeVision compiler extension.
That kind of assignment in C represents a struct member access, but you cannot declare a struct member (or any other identifier) starting with a digit, so PORTD.0 will produce an error.
Also when you do this:
PORTD = (1<<PORTD0); // Set Port D bit 0 to high
PORTD = (0<<PORTD0); // and low
you are not doing what your comments say: you are assigning (given PORTD0 == 0, so 1<<PORTD0 == 0x01) the value 0x01 to PORTD in the first statement and 0x00 in the second, overwriting every other bit of the port. If your intention is to manipulate only one bit, this is what you should do:
PORTD |= (1<<PORTD0); //sets bit PORTD0 high in PORTD
PORTD &= ~(1<<PORTD0); //sets bit PORTD0 low in PORTD
PORTD ^= (1<<PORTD0); //toggles PORTD0 in PORTD
You should read about bit manipulation in C; here is a post with more examples.
Sometimes those actions are encapsulated in macros; this is an example of how you could do it (note the parentheses around the arguments, which keep the expansion safe):
#define BitSet(Port,Bit)         ((Port) |= (1 << (Bit)))
#define BitClear(Port,Bit)       ((Port) &= ~(1 << (Bit)))
#define BitToggle(Port,Bit)      ((Port) ^= (1 << (Bit)))
#define SetBits(Port,BitMask)    ((Port) |= (BitMask))
#define ClearBits(Port,BitMask)  ((Port) &= ~(BitMask))
#define ToggleBits(Port,BitMask) ((Port) ^= (BitMask))
//then you can use it
BitSet(PORTD,0) ; //Sets the bit0
BitSet(PORTD,1) ; //Sets the bit1
BitSet(PORTD,2) ; //Sets the bit2
// or
BitSet(PORTD,PORTD0) ; //Sets the bit0
BitSet(PORTD,PORTD1) ; //Sets the bit1
BitSet(PORTD,PORTD2) ; //Sets the bit2
...
SetBits(PORTD,0x55) ;   //Sets bits 0, 2, 4, and 6. Leaves the others unchanged
ClearBits(PORTD,0x55) ; //Clears bits 0, 2, 4, and 6. Leaves the others unchanged
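If you'd rather avoid the usual macro pitfalls (unparenthesized arguments, double evaluation), static inline functions compile to the same code with avr-gcc. A sketch (bit_set and bit_clear are hypothetical names, not part of any AVR header):
#include <stdint.h>

static inline void bit_set(volatile uint8_t *port, uint8_t bit)
{
    *port |= (uint8_t)(1u << bit);
}

static inline void bit_clear(volatile uint8_t *port, uint8_t bit)
{
    *port &= (uint8_t)~(1u << bit);
}

/* usage: bit_set(&PORTD, PORTD0); bit_clear(&PORTD, PORTD0); */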

STM32F051 - Different idle state depending on overcurrent input

I have an STM32F051 driving an H-bridge (with proper gate drivers and an overcurrent pulse sent back to the MCU) which powers a transformer, using TIM1 with complementary signals (and dead-time generation).
I am trying to configure a different "safe" state depending on which overcurrent pulse I receive:
On high side overcurrent, turn off low side fets, turn on high side fets.
On low side overcurrent, turn off high side fets, turn on low side fets.
Idea is to improve overcurrent performance on an inverter.
Is there a possibility to manually set the outputs of the timers to a defined state immediately upon receiving a pulse on a GPIO? I tried the break function, but it can only set one predefined "safe" state. For my application I need two (for now; more to come).
In the end I found the solution, and I'm sharing it here. The source code and examples of libopencm3 helped me find the answer.
#define TIM_CCMR1_OC1M_INACTIVE (0x2 << 4)
#define TIM_CCMR1_OC1M_FORCE_LOW (0x4 << 4)
#define TIM_CCMR1_OC1M_FORCE_HIGH (0x5 << 4)
#define TIM_CCMR1_OC1M_PWM2 (0x7 << 4)
#define TIM_CCMR1_OC2M_INACTIVE (0x2 << 12)
#define TIM_CCMR1_OC2M_FORCE_LOW (0x4 << 12)
#define TIM_CCMR1_OC2M_FORCE_HIGH (0x5 << 12)
#define TIM_CCMR1_OC2M_PWM2 (0x7 << 12)
Utility functions to disable and enable outputs.
void disable_pwm(void)
{
    TIM1->CCER &= ~(TIM_CCER_CC1E | TIM_CCER_CC1NE | TIM_CCER_CC2E | TIM_CCER_CC2NE);
}

void enable_pwm(void)
{
    TIM1->CCER |= (TIM_CCER_CC1E | TIM_CCER_CC1NE | TIM_CCER_CC2E | TIM_CCER_CC2NE);
}
Here is how to force the H bridge to short the load to ground as an example.
TIM1->CCMR1 &= ~TIM_CCMR1_OC1M_Msk;
TIM1->CCMR1 |= TIM_CCMR1_OC1M_FORCE_LOW;
TIM1->CCMR1 &= ~TIM_CCMR1_OC2M_Msk;
TIM1->CCMR1 |= TIM_CCMR1_OC2M_FORCE_LOW;
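By symmetry, forcing both outputs high shorts the load to the supply instead, which covers the other "safe" state (a sketch using the same registers and the macros defined above):
TIM1->CCMR1 &= ~TIM_CCMR1_OC1M_Msk;
TIM1->CCMR1 |= TIM_CCMR1_OC1M_FORCE_HIGH;
TIM1->CCMR1 &= ~TIM_CCMR1_OC2M_Msk;
TIM1->CCMR1 |= TIM_CCMR1_OC2M_FORCE_HIGH;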
Hope this will be useful to someone else!

Data Not Being Stored in Receive Data Register, (UART RXNE Flag not setting)

I have been programming the STM32L412KB Nucleo board, attempting to achieve basic UART communication. Transmission from the board works great, but the board does not appear to receive any data.
For the software side, I have tried using standard HAL code in a few different ways, in both interrupt and non-interrupt mode. I have also tried a more basic approach (shown below). From debugging line by line, I have found that the Receive Data Register (RDR) is not filling (and consequently the flag which sets when there is data is not setting). This has been the error in each case.
The aim of this code is to send back the character entered.
#include "stm32l4xx.h"
int main(void)
{
/* USER CODE BEGIN 1 */
/*The Usart2 peripheral needs its clock to be enabled.*/
RCC->APB1ENR1 |= RCC_APB1ENR1_USART2EN;
RCC->AHB2ENR |= RCC_AHB2ENR_GPIOAEN;
/*The 72 MHz APB1 bus clock with a 9600baud rate gives a baud rate for the register of 0x1D4C*/
USART2->BRR = 0x1D4C;
/*For USART2 we need to enable the overall UART (U) driver, the transmission lines(T) and the reading lines(R). UART Enable is last.*/
USART2->CR1 |= USART_CR1_RE | USART_CR1_TE | USART_CR1_UE;
/*Setting transmission pin*/
GPIOA->MODER |= GPIO_MODE_AF_PP;
GPIOA->OSPEEDR |= GPIO_SPEED_FREQ_HIGH;
/* USER CODE END 1 */
/* Infinite loop */
/* USER CODE BEGIN WHILE */
while (1)
{
if (USART2->ISR & USART_ISR_RXNE) //if RX is not empty
{
char temp = USART2->RDR; //fetch the data received
USART2->TDR = temp; //send it back out
while (!(USART2->ISR & USART_ISR_TC)); //wait for TX to be complete
}
}
return 0;
}
To send the data I have used RealTerm Serial Capture and have also tried the STM32CubeIDE console. One possibility for the source of the problem is that the datasheet says:
"In the USART, the start bit is detected when a specific sequence of samples is recognized. This sequence is: 1 1 1 0 X 0 X 0 X 0 0 0 0."
I have not coded any way of leading my data with this sequence; however, none of the examples I have seen in books or videos needed to account for it, and they worked perfectly. Could it be a hardware problem? Is there something I'm not initialising? I have even tried different cables.
Many thanks in advance for any help,
Harry
/*********************************UPDATE**************************************/
First and foremost, thank you very much for the help. I now understand basics such as how to use the datasheet to configure the registers; it is much appreciated. I have updated my code, but the problem remains.
My configuration is now:
/*Configuring GPIO Pins*/
/*Clearing whatever is held in the mode registers for pins 2 and 3 (Inverting with their masks.)*/
GPIOA -> MODER &= ~(GPIO_MODER_MODE2_Msk | GPIO_MODER_MODE3_Msk);
/*The two bits 10 are shifted to the position which configures the mode of pin 2, and likewise for pin 3, in the mode register.
 *(10 is alternative function mode).*/
GPIOA -> MODER |= (0b10 << GPIO_MODER_MODE2_Pos) | (0b10 << GPIO_MODER_MODE3_Pos);
/*Clearing whatever is held in the output speed registers for pins 2 and 3*/
GPIOA -> OSPEEDR &= ~(GPIO_OSPEEDR_OSPEED2_Msk | GPIO_OSPEEDR_OSPEED3_Msk);
/*Setting the speed of pins 2 and 3 to be very high(11)*/
GPIOA -> OSPEEDR |= (0b11 << GPIO_OSPEEDR_OSPEED2_Pos) | (0b11 << GPIO_OSPEEDR_OSPEED3_Pos);
/*Clearing whatever is held in the alternative function registers for pins 2 and 3.*/
GPIOA -> AFR[0] &= ~(GPIO_AFRL_AFSEL2_Msk | GPIO_AFRL_AFSEL3_Msk);
/*Setting the pins 2 and 3 to their alternative functions(TX and RX)*/
GPIOA -> AFR[0] |= (7 << GPIO_AFRL_AFSEL2_Pos) | (7 << GPIO_AFRL_AFSEL3_Pos);
/*Clock Configuration*/
/*Enabling the USART2 peripheral clock.*/
RCC->APB1ENR1 &= ~(RCC_APB1ENR1_USART2EN_Msk);
RCC->APB1ENR1 |= (0b1 << RCC_APB1ENR1_USART2EN_Pos);
/*Enabling the GPIOA port peripheral clock*/
RCC->AHB2ENR &= ~(RCC_AHB2ENR_GPIOAEN_Msk);
RCC->AHB2ENR |= (0b1 << RCC_AHB2ENR_GPIOAEN_Pos);
/*USART Configuration*/
/*The 72 MHz APB1 bus clock with a 9600baud rate gives a baud rate for the register of 0x1D4C*/
USART2->BRR = 0x1D4C;
/*For USART2 we need to enable the overall UART (U) driver, the transmission lines(T) and the reading lines(R). UART Enable is last.*/
USART2->CR1 &= ~(USART_CR1_RE_Msk | USART_CR1_TE_Msk | USART_CR1_UE_Msk);
USART2->CR1 |= USART_CR1_RE | USART_CR1_TE | USART_CR1_UE;
This has greatly developed my understanding of how to properly configure the device. However, I'm still having a problem with the overall aim of the code, which is to bounce back a character: the data is still not being read by the MCU. I will press on and update if I succeed. I'm thankful for any further suggestions.
This does not initialize the GPIO MODER or OSPEEDR registers:
GPIOA->MODER |= GPIO_MODE_AF_PP;
GPIOA->OSPEEDR |= GPIO_SPEED_FREQ_HIGH;
GPIO_MODE_AF_PP and GPIO_SPEED_FREQ_HIGH are HAL definitions and can't be used at the register level.
You need to set the appropriate values for every pin you use.
It will never receive or send anything, because you forgot to set the GPIO AF registers, so the hardware is not connected to the pins internally.
You can find the alternate function mapping in the datasheet, and the GPIO AF registers in the reference manual.
This sequence should be:
GPIOA -> MODER &= ~(GPIO_MODER_MODE2_Msk | GPIO_MODER_MODE3_Msk);
GPIOA -> MODER |= (0b10 << GPIO_MODER_MODE2_Pos) | (0b10 << GPIO_MODER_MODE3_Pos);
GPIOA -> OSPEEDR &= ~(GPIO_OSPEEDR_OSPEED2_Msk | GPIO_OSPEEDR_OSPEED3_Msk);
GPIOA -> OSPEEDR |= (0b11 << GPIO_OSPEEDR_OSPEED2_Pos) | (0b11 << GPIO_OSPEEDR_OSPEED3_Pos);
GPIOA -> AFR[0] &= ~(GPIO_AFRL_AFSEL2_Msk | GPIO_AFRL_AFSEL3_Msk);
GPIOA -> AFR[0] |= (7 << GPIO_AFRL_AFSEL2_Pos) | (7 << GPIO_AFRL_AFSEL3_Pos);

Custom input: text overwrites when writing in the middle of a word in the command line

I'm writing a custom shell in C. I'm currently trying to get the user input.
I've put the terminal in raw mode:
term.c_iflag &= ~(IGNBRK | BRKINT | PARMRK | ISTRIP | INLCR | IGNCR | ICRNL | IXON);
term.c_oflag &= ~OPOST;
term.c_lflag &= ~(ECHO | ECHONL | ICANON | IEXTEN);
term.c_cflag &= ~(CSIZE | PARENB);
term.c_cflag |= CS8;
term.c_cc[VMIN] = 1;
term.c_cc[VTIME] = 0;
so that I can process arrow keys and other special keys.
The problem I have is that when I try to write something in the middle of a word, the text gets overwritten.
(Screenshots omitted.) Note: I initially have the string "abc", and with the cursor on 'b', I press 'z'. In my case, 'z' overwrites 'b', but I need it to be inserted between 'a' and 'b', moving "bc" one column to the right.
Is there a flag in termios that I'm missing, or do I have to process everything the hard way?
No termios flag will do this. Instead, you could consult termcap for the im capability string:
im
String of commands to enter insert mode. If the terminal has no special insert mode, but it can insert characters with a special command, im should be defined with a null value, because the vi editor assumes that insertion of a character is impossible if im is not provided. New programs should not act like vi. They should pay attention to im only if it is defined.
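A minimal termcap sketch for fetching the insert-mode strings (assuming a termcap-compatible library is available; link with -ltermcap or -lncurses, and the header name varies between systems):
#include <stdio.h>
#include <stdlib.h>
#include <termcap.h>

int main(void)
{
    char ent[2048], strs[2048];
    char *area = strs;
    char *term = getenv("TERM");
    if (term == NULL || tgetent(ent, term) <= 0)
        return 1;
    char *im = tgetstr("im", &area);  /* enter insert mode */
    char *ei = tgetstr("ei", &area);  /* leave insert mode */
    char *ic = tgetstr("ic", &area);  /* insert one character */
    printf("im: %s, ei: %s, ic: %s\n",
           im ? "present" : "absent",
           ei ? "present" : "absent",
           ic ? "present" : "absent");
    return 0;
}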

Reading raw bytes from a serial port

I'm trying to read raw bytes from a serial port, sent by an IEC 870-5-101 Win32 protocol simulator, with a program written in C running on 32-bit Linux.
It works fine for byte values 0x00 - 0x7F. But for values from 0x80 to 0xFF the high bit is wrong, e.g.:
0x7F -> 0x7F //correct
0x18 -> 0x18 //correct
0x79 -> 0x79 //correct
0x80 -> 0x00 //wrong
0xAF -> 0x2F //wrong
0xFF -> 0x7F //wrong
After digging around for two days now, I have no idea what's causing this.
This is my config of the serial port:
cfsetispeed(&config, B9600);
cfsetospeed(&config, B9600);
config.c_cflag |= (CLOCAL | CREAD);
config.c_cflag &= ~CSIZE; /* Mask the character size bits */
config.c_cflag |= (PARENB | CS8); /* Parity bit Select 8 data bits */
config.c_cflag &= ~(PARODD | CSTOPB); /* even parity, 1 stop bit */
config.c_cflag |= CRTSCTS; /*enable RTS/CTS flow control - linux only supports rts/cts*/
config.c_iflag &= ~(IXON | IXOFF | IXANY); /*disable software flow control*/
config.c_oflag &= ~OPOST; /* enable raw output */
config.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG); /* enable raw input */
config.c_iflag &= ~(INPCK | PARMRK); /* DANGEROUS no parity check*/
config.c_iflag |= ISTRIP; /* strip parity bits */
config.c_iflag |= IGNPAR; /* DANGEROUS ignore parity errors*/
config.c_cc[VTIME] = 1; /*timeout to read a character in tenth of a second*/
I'm reading from the serial port with:
*bytesread = read((int) fd, in_buf, BytesToRead);
Right after this operation, "in_buf" contains the wrong byte, so I guess there's something wrong with my config, which is ported from a Win32 DCB structure.
Thanks for any ideas!
Based on your examples, only the 8th bit (the high bit) is wrong, and it's wrong by being always 0. You are setting ISTRIP in your line discipline on the Linux side, and that would cause this. ISTRIP does not, as the comment in the C code claims, strip parity bits. It strips the 8th data bit.
"If ISTRIP is set, valid input bytes shall first be stripped to seven bits; otherwise, all eight bits shall be processed." (IEEE Std 1003.1, 2004 Edition, Chapter 11, General Terminal Interface)
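So the fix is a one-line change to the configuration above: clear ISTRIP instead of setting it, and the high bit will come through intact:
config.c_iflag &= ~ISTRIP;  /* process all 8 data bits, don't strip to 7 */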
