How does setting termios CSIZE work? (C)

I have been reading the termios man page, and I'm confused by CSIZE.
Its obscure explanation is:
CSIZE: Character size mask. Values are CS5, CS6, CS7, or CS8.
Later, in the rawmode example, they first NOT CSIZE:
termios_p->c_cflag &= ~(CSIZE | PARENB);
And then they OR one of its settings:
termios_p->c_cflag |= CS8;
I don't understand how this works, because CS5-CS8 are nowhere else even mentioned, let alone explained, or their values shown so that I could infer what happened.
Can you explain what happens when you NOT CSIZE, and when you OR in CS5 or CS8? And also, what even is a character size mask, and what do the numbers mean? (Or wait, does it work like ISTRIP? Is CS7 like ISTRIP, i.e. x & 0b01111111?)
Thank you!

You're not supposed to have to know or care what the numeric values of CSIZE, CS5, CS6, CS7, or CS8 are. All you need to know at the level of actual numbers is that somewhere in c_cflag is a bit field that can hold at least four distinct values (namely CS5, CS6, CS7, and CS8); that, assuming the termios structure has been initialized correctly, the expression c_cflag & CSIZE will be equal to one of the four CSx quantities; and that you can set the field to one of those four quantities with these two steps:
termios_p->c_cflag &= ~CSIZE;
termios_p->c_cflag |= CSx; // x = one of 5, 6, 7, 8
(Your version of those two steps uses ~(CSIZE | PARENB) in the first step -- that means it clears the PARENB flag as well as the CSIZE bit field.)
Now, the symbolic constants do have a meaning, which the termios manpage doesn't bother to document because this entire mechanism is super obsolete and the only thing anyone not engaged in retrocomputing is likely to want to do with it nowadays is ensure it's in CS8 mode, but I am old enough that I can guess what it means just from the names. Remember that this API was originally designed to control an actual, physical serial I/O port. One of the parameters you have to decide on, when you send character data over a serial line, is "how many bits per character?" Nowadays the only answer anyone ever wants is 8, but back in the 1970s, hardware terminals that transmitted 7, 6, or even (rarely) 5 bits per character were still common enough that the designers of this API thought it was worth being able to talk to them.
(I recall reading somewhere that a design goal of both this API, and the higher-level "curses" API, was being able to connect any of the dozens of different terminal models to be found on the campus of the University of California, Berkeley, circa 1980-1983, with any of the smaller (but still more than one) number of minicomputer models also found there.)
So that's what this does. Set the CSIZE field to CS5 and your serial line will transmit and receive five-bit characters. CS6, six-bit characters, and so on.
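To put the whole recipe in one place, here is a minimal sketch of forcing a port into 8-bit, no-parity mode (assuming fd is an already-opened serial device; error handling is reduced to the return value):
#include <termios.h>
/* Minimal sketch: force 8 data bits, no parity, on an already-open port fd. */
int set_8bit_chars(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) != 0)          /* read the current settings        */
        return -1;
    tio.c_cflag &= ~(CSIZE | PARENB);      /* clear the size field and parity  */
    tio.c_cflag |= CS8;                    /* then select 8 bits per character */
    return tcsetattr(fd, TCSANOW, &tio);   /* apply the change immediately     */
}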

Related

Usage of Xilinx built-in UART function to bitmask a certain value

I am using the Xilinx uartps data sheet in order to write functions to disable and enable flow control for UART. I found the appropriate bitmask defined in the data sheet; however, I am not sure which built-in function I should call in order to pass this bit mask to the driver.
I thought initially it may be this function,
however upon closer inspection it takes a u16 as an argument, and the bitmask I want to use is a u20. For anyone familiar with this library: what function do I call with the bit mask in order to enable flow control?
Here is a link to the datasheet as well.
https://xilinx.github.io/embeddedsw.github.io/uartps/doc/html/api/group__uartps__v3__11.html#gad74cdb596f414bee06ebd9159f496cef
You are correct in what you are using. The masks are written with one hex "0" more than the 16 bits they should represent, but if you look at the options, those extra bits are never used.
Configuration options
These constants specify the options that may be set or retrieved with the driver; each is a unique bit mask, so that multiple options may be specified.
#define XUARTPS_OPTION_SET_BREAK 0x0080U
Starts break transmission.
#define XUARTPS_OPTION_STOP_BREAK 0x0040U
Stops break transmission.
#define XUARTPS_OPTION_RESET_TMOUT 0x0020U
Reset the receive timeout.
#define XUARTPS_OPTION_RESET_TX 0x0010U
Reset the transmitter.
#define XUARTPS_OPTION_RESET_RX 0x0008U
Reset the receiver.
#define XUARTPS_OPTION_ASSERT_RTS 0x0004U
Assert the RTS bit.
#define XUARTPS_OPTION_ASSERT_DTR 0x0002U
Assert the DTR bit.
#define XUARTPS_OPTION_SET_FCM 0x0001U
Turn on flow control mode.
This means that you might get a warning when using such flags, but you'll get the result you want anyway.
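For what it's worth, a rough sketch of how that call might look, assuming an already-initialized XUartPs instance named Uart and the XUartPs_GetOptions/XUartPs_SetOptions functions from xuartps.h (check your driver version for the exact prototypes):
#include "xuartps.h"
/* Sketch: enable hardware flow control without disturbing the other option bits. */
void enable_flow_control(XUartPs *Uart)
{
    u16 options = XUartPs_GetOptions(Uart); /* read the current option bits  */
    options |= XUARTPS_OPTION_SET_FCM;      /* set the flow control mode bit */
    XUartPs_SetOptions(Uart, options);      /* write the option bits back    */
}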

How to send a scan code > 255 from a HID BLE keyboard on an ESP32 over GATT?

I'm using the ESP32 ESP-IDF HID library (https://github.com/espressif/esp-idf/tree/master/examples/bluetooth/ble_hid_device_demo) to make a custom keyboard that sends scan codes to an Android device. I need to send scan code = 310, which contains two bytes of data.
I have a device that requires the BLE HID keyboard button's scan code to be 310 (decimal). When I tried to send this code as uint8_t key_vaule[], as it's used in ble_hid_demo_main.c in the ble_hid_device_demo project, the device received a different scan code: it was truncated from 000 0001 0011 0110 (310 decimal) to 0011 0110 (54 decimal). I suppose this happens because the transferred variables are 8 bits instead of 16 bits. Modifying the libraries from uint8_t to uint16_t gave nothing; the result was still truncated. Is there a way to send a two-byte code instead of one byte?
HID scan codes are always 8 bit. Key combinations, such as left-CTRL + ,/< in this case, are a sequence of a "key modifier" (0x01 for left-CTRL) and a key code (0x36 for ,/<).
Whilst 0x0136 happens to be 310 in decimal, it is a mistake to think of multi-byte scan code sequences as a single integer rather than a byte sequence, for a number of reasons:
the integer byte order of the machine architecture may not match that defined for HID code sequences,
in an HID keyboard report there is a single key-modifier byte and up to six key codes - allowing combinations of up to 6 regular keys plus eight modifier bits for shift, alt, ctrl etc. pressed simultaneously,
in an HID keyboard report there is a "reserved" byte between the modifier and the first key code, so the 0x01 and 0x36 are not contiguous regardless of the machine byte order.
In the case of HID scan codes, your 310 decimal is in fact two bytes, 0x01 and 0x36 (in hexadecimal). When talking about byte sequences it is more natural to use hexadecimal notation - especially in the case of the modifier, which is a bit mask for multiple shift/ctrl etc. keys. The 0x36 represents the ,/< key, and the 0x01 is the key modifier for left-CTRL.
If your value 310 was truncated when you assigned it to a 16-bit integer, most likely you passed it as a single value to an interface that expected a uint8_t. But as explained above, sending a 16-bit integer is not correct in any case.
Rather than sending 0x0136 or 310 decimal, you need to send a byte sequence that forms a valid keyboard report as described by your device's keyboard report descriptor. In an HID keyboard report, the first byte is the "modifier mask" (0x01 / left-CTRL), the second byte is reserved, and then there are up to 6 key codes (allowing multi-key combinations); the actual number of keys supported, and therefore the length of the report, is defined by the report descriptor.
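To make that layout concrete, here is a sketch of what a standard 8-byte boot-protocol keyboard report for left-CTRL + , would contain (byte values per the USB HID usage tables; the array itself is only for illustration):
#include <stdint.h>
uint8_t report[8] = {
    0x01,                         /* modifier mask: bit 0 = left-CTRL */
    0x00,                         /* reserved byte                    */
    0x36,                         /* key code for the , and < key     */
    0x00, 0x00, 0x00, 0x00, 0x00  /* remaining key-code slots unused  */
};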
Looking at the API in the HID demo you linked, however, it is clear that all of that is abstracted away, and it seems that what you actually need to do is something like:
uint8_t key = HID_KEY_COMMA;  // key code 0x36: the , and < key
esp_hidd_send_keyboard_value(hid_conn_id, LEFT_CONTROL_KEY_MASK, &key, 1);  // conn id, modifier mask, key codes, number of keys
Note that the modifier is a bit-mask allowing any combination of modifier keys, such as LEFT_CONTROL_KEY_MASK|RIGHT_CONTROL_KEY_MASK. The HID would use this to indicate multiple shifts, but a receiver might use it to allow either the left or right keys without distinction.

Serial Instruction bit not clear

I've bought a cheap Wingstar 144x32 LCD, because it would be nice to have it plugged onto my NodeMCU for showing some information.
What I wasn't expecting was that nowhere on the internet could I find a working library for that LCD. So I thought I'd write my own.
I spent several hours reading through the datasheet, trying to figure out how the SPI instructions were passed to the LCD. I then discovered that some other site had example code for the Arduino (which is far too long to understand properly) and for the ATmega, which is short and much easier to understand.
I opened the files and saw that the interfacing is quite "simple", if I can say so. It looks like this:
write_command(0x38); // function set -- 8 bit mode, basic instruction set
write_command(0x0F); // display control -- cursor on, blinking
write_command(0x01); // display clear
write_command(0x06); // entry mode -- cursor moves right, address counter increased by 1
write_command isn't that important to mention here, because it only sends the command through SPI:
void write_command(unsigned char command) {
    SPI_WriteByte(0xF8);                   // send command header
    SPI_WriteByte(command & 0xF0);         // send high nibble
    _delay_us(250);
    SPI_WriteByte((command << 4) & 0xF0);  // send low nibble
    _delay_us(750);
}
Despite not understanding what the & 0xF0 or the (command << 4) & 0xF0 do, I moved on.
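For what it's worth, here is a small worked example (my own, not from the datasheet or the ATmega code) showing what those masks actually produce for the command 0x38:
#include <stdio.h>
/* Demonstrates the nibble split that write_command() performs for 0x38. */
int main(void)
{
    unsigned char command = 0x38;
    printf("header: 0x%02X\n", 0xF8);                  /* fixed command header            */
    printf("high:   0x%02X\n", command & 0xF0);        /* 0x30: upper nibble, zero-padded */
    printf("low:    0x%02X\n", (command << 4) & 0xF0); /* 0x80: lower nibble, zero-padded */
    return 0;
}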
I randomly picked the "function set" instruction and converted it to binary to see if it does what I think it does.
0x38 = 0000 0000 0011 1000
Excluding the first 8 digits (it wouldn't make sense with those), I'm left with 00111000, which seems to match the "function set" row of the instruction table in the datasheet, because DL=1 (8-bit interface selected, like the comment in the code for the ATmega - great) and RE=0 (basic instruction set, like the comment in the code for the ATmega - great!).
But now the real question: what are those "X"s in the instruction codes? I searched the entire datasheet and found nothing about them. Why are they inconsistent? What are they supposed to be doing there?
I hope I didn't mess it up too badly.
Any help is highly appreciated.
They are 'don't care' bits. On read, they contain no useful information. On write, they do nothing.

What is the correct form for a Character struct in a VESA/VGA early kernel console?

I am currently working on a kernel for x86 (just for fun). I am trying to implement a fairly usable early console, to report on the loading of drivers and to allow me to play with memory during boot if I need to, with the 80x25 character VESA/VGA console located at 0xB8000. I would like to do this with a struct representing a character and its attribute byte. However, I am wondering how to correctly format my Character struct. I currently have:
#define CONSOLE_SIZE_X 80      // The length of a line in the console
#define CONSOLE_SIZE_Y 25      // The number of lines in the console
#define CONSOLE_MEMLOC 0xB8000 // The position of the console in system memory
#define ATTR_DEFAULT  0x07 // DOS default
#define ATTR_ERROR    0x1F // BSOD colors
#define ATTR_PHOSPHOR 0x02 // "Phosphor" colors, green on black
typedef struct {
    char character; // glyph byte, meant to start out as 0
    char attribute; // attribute byte, meant to start out as 0
} Character; // A VESA/VGA character cell
typedef struct {
    int pos_x;          // cursor column, starting at 0
    int pos_y;          // cursor row, starting at 0
    char defaultAttrib; // ATTR_DEFAULT unless changed
    Character buffer[CONSOLE_SIZE_Y][CONSOLE_SIZE_X];
} VESAConsole;
The VESAConsole struct is for logical purposes only (i.e. it does not represent any important set of positions in RAM); its Character buffer[][] will be copied to the actual location of the console by a function cFlushBuffer(Character* console, Character* buffer). This will allow me to implement multiple consoles in early mode for easier debugging on my part (in the manner of screen or tmux).
So, I really have 2 questions: Is my Character struct correct, and are there any gotchas that I need to be aware of when dealing with the early VESA/VGA console?
Firstly yes, your Character struct is correct, assuming no padding.
And then as to the gotchas, there are two situations:
Either you used some known-good code to set up the VGA hardware (e.g. asked the BIOS to do it, asked GRUB to do it, took an example from another OS, ...),
Or you did the hardware setup yourself.
In the first case, you're good to go. There aren't any really evil gotchas once the setup is done correctly. Just write directly to memory and let the hardware handle it.
In the second, there is an (almost) infinite variety of ways things could go wrong. Graphics hardware is notoriously hard to set up, and although VGA is nothing compared to what modern graphics cards use, it's still far from easy.
Possible side effects include, but are not limited to, blank screens, crashes, and burning and/or exploding CRT monitors.
If you're interested in further reading, you may want to take a look at the various resources on the OSDev Wiki.
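To illustrate the "just write directly to memory" part, here is a minimal sketch of the flush function described in the question (assuming the Character struct and the CONSOLE_* macros above; the volatile cast is there because the buffer at 0xB8000 is memory-mapped hardware):
/* Copy the logical back buffer to the live text buffer at CONSOLE_MEMLOC. */
void cFlushBuffer(Character *console, Character *buffer)
{
    volatile Character *vram = (volatile Character *)console;
    for (int i = 0; i < CONSOLE_SIZE_X * CONSOLE_SIZE_Y; i++)
        vram[i] = buffer[i]; /* each cell is one glyph byte plus one attribute byte */
}
/* Usage: cFlushBuffer((Character *)CONSOLE_MEMLOC, &myConsole.buffer[0][0]); */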

embedding chars in int and vice versa

I have a smart card on which I can store bytes (in multiples of 16).
If I do Save(byteArray, length), then I can do Receive(byteArray, length),
and I think I will get the byte array back in the same order I stored it.
Now I have the following issue. I realized that if I store an integer on this card,
and some other machine (with different endianness) reads it, it may get wrong data.
So I thought maybe the solution is to always store data on this card in a little-endian
way, and always retrieve it in a little-endian way (I will write the apps that read and write, so I am free to interpret the numbers as I like). Is this possible?
Here is something I have come up with:
Embed integer in char array:
int x;
unsigned char buffer[250];
buffer[0] = LSB(x);
buffer[1] = LSB(x>>8);
buffer[2] = LSB(x>>16);
buffer[3] = LSB(x>>24);
The important thing, I think, is that the LSB function should return the least significant byte regardless of the endianness of the machine. What would such an LSB function look like?
Now, to reconstruct the integer (something like this):
int x = buffer[0] | (buffer[1]<<8) | (buffer[2]<<16) | (buffer[3]<<24);
As I said, I want this to work regardless of the endianness of the machine that reads and writes it. Will this work?
The LSB function may be implemented via a macro as below:
#define LSB(x) ((x) & 0xFF)
Provided x is unsigned.
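Putting the macro together with the code from the question, a self-contained round trip might look like this (my own sketch; it uses uint32_t rather than int to avoid sign and width surprises):
#include <stdint.h>
#include <stdio.h>
#define LSB(x) ((x) & 0xFFu)
/* Store x into buf in little-endian order, regardless of host endianness. */
void put_le32(unsigned char *buf, uint32_t x)
{
    buf[0] = LSB(x);
    buf[1] = LSB(x >> 8);
    buf[2] = LSB(x >> 16);
    buf[3] = LSB(x >> 24);
}
/* Read the value back from a little-endian buffer, regardless of host endianness. */
uint32_t get_le32(const unsigned char *buf)
{
    return (uint32_t)buf[0]
         | ((uint32_t)buf[1] << 8)
         | ((uint32_t)buf[2] << 16)
         | ((uint32_t)buf[3] << 24);
}
int main(void)
{
    unsigned char card[4]; /* stands in for part of a 16-byte card block */
    put_le32(card, 0x12345678u);
    printf("0x%08lX\n", (unsigned long)get_le32(card)); /* prints 0x12345678 on any host */
    return 0;
}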
If your C library is POSIX-compliant, then you have standard functions available to do exactly what you are trying to code: ntohl, ntohs, htonl, htons (network to host long, network to host short, and so on). That way you don't have to change your code if you want to compile it for a big-endian or a little-endian architecture. The functions are declared in arpa/inet.h (see http://linux.die.net/man/3/ntohl).
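For illustration, a sketch of that route (note these helpers give network byte order, which is big-endian; the point is simply that the writer and the reader agree on one fixed order). The Save/Receive prototypes below are assumptions based on how the question describes the card API:
#include <arpa/inet.h>
#include <stdint.h>
/* Assumed card API from the question (exact prototypes are hypothetical). */
void Save(unsigned char *byteArray, int length);
void Receive(unsigned char *byteArray, int length);
/* Store an integer in network (big-endian) order. */
void store_int(uint32_t x)
{
    uint32_t wire = htonl(x);                  /* host order -> big-endian */
    Save((unsigned char *)&wire, sizeof wire);
}
/* Retrieve an integer stored in network (big-endian) order. */
uint32_t load_int(void)
{
    uint32_t wire;
    Receive((unsigned char *)&wire, sizeof wire);
    return ntohl(wire);                        /* big-endian -> host order */
}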
I think the answer to your question is YES, you can write data on a smart card such that it is universally (and correctly) read by readers of both big-endian AND little-endian orientation. With one big caveat: it would be incumbent on the reader to do the interpretation, not your smart card interpreting the reader, would it not? That is, as you know, there are many routines to determine endianness (1, 2, 3). But it is the readers that would have to contain code to test endianness, not your card.
Your code example works, but I am not sure it would be necessary given the nature of the problem as it is presented.
By the way, HERE is a related post.
