I am doing a little program in C with the ncurses library on Linux.
I decided to check the input I received with the getch() function, more specifically, the backspace key.
The Backspace key's ASCII decimal value is 127.
I decided to print the numerical decimal value of the keys I pressed, for example:
a -> 97
A -> 65
] -> 93
...
Those values are correct.
However, the following ones are not:
Backspace -> 7 (which is BELL)
Supr -> 74 (which is 'J')
Here is the test code:
#include <curses.h>

int main(int argc, char **argv)
{
    char ch;
    int column, line;
    int s_column, s_line;

    initscr();
    clear();
    noecho();
    raw();
    keypad(stdscr, TRUE);
    printw("Type: \n> ");
    refresh();
    getyx(stdscr, s_line, s_column);
    while ((ch = getch()) != '\n')
    {
        printw("%d", ch);
        addch(ch);
        refresh();
    }
    endwin();
    return 0;
}
NOTE: changing raw() to cbreak() generates the same output
Output test: (note: I type: 'a','A',(Backspace),(Supr),'J')
Type:
> 97a65A7^G74J74J
I don't understand why this is happening. Can somebody explain why the Backspace key outputs 7 instead of 127, and why Supr outputs 74, which is the same as 'J'?
For special function keys, getch() doesn't necessarily return an ASCII character; it returns one of the KEY_xxx codes defined in <curses.h>. In the case of Backspace, this is:
#define KEY_BACKSPACE 0407 /* backspace key */
Since you declare ch as char rather than int, the octal value 0407 (263 decimal) is being truncated to its low byte, 07.
Change the declaration to:
int ch;
and then it will display 263 when you press Backspace. addch() will still display ^G, though, because addch() knows nothing about the KEY_xxx values. You need to handle these keys yourself in your code.
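A minimal sketch of that approach, assuming you just want to detect the keys rather than implement real editing (KEY_DC is the curses code for the Delete/"Supr" key; the extra check for 127 is there because some terminals send a literal DEL byte for Backspace):

#include <curses.h>

int main(void)
{
    int ch;                               /* int, so KEY_xxx values are not truncated */

    initscr();
    noecho();
    raw();
    keypad(stdscr, TRUE);                 /* needed so getch() assembles KEY_xxx codes */
    printw("Type: \n> ");
    refresh();

    while ((ch = getch()) != '\n' && ch != '\r')
    {
        if (ch == KEY_BACKSPACE || ch == 127)
            printw("[backspace]");
        else if (ch == KEY_DC)            /* the Delete ("Supr") key */
            printw("[delete]");
        else
            addch(ch);
        refresh();
    }
    endwin();
    return 0;
}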
I believe the "special" keys are generating multi-character readings, which explains the ^ in the output.
See caret notation for more.
Related
Pressing Del is dealt with as an extended key: it returns -32 (0xE0 read into a signed char) and then 83, while pressing Ctrl + Backspace returns the expected 127, the actual ASCII code of DEL.
Do you know any explanation for this?
#include <conio.h>   /* getch() on Windows */
#include <stdio.h>

int main(void)
{
    char x = getch();
    if (x == -32 || x == 0) {      /* extended keys send a prefix byte first */
        x = getch();               /* the second call reads the scan code */
        printf("The key you pressed is an Extended Key, with ASCII code: %d\n", x);
    } else {
        printf("The key you pressed is a Normal Key, with ASCII code: %d\n", x);
    }
    return 0;
}
The Del key on a standard keyboard does not generate a "Delete" ASCII code: code 127 in the ASCII table isn't meant to do what the Del key does on a computer; it was designed for electric typewriters.
The Del key is a "special key" in a similar vein to the arrow keys, so when you read it with getch() you get its scan code.
I was trying to write a basic program to print ā (a with overline) in C using curses and non-spacing characters. I have set the locale to en_US.UTF-8 and I am able to print international language characters using that. This code only prints a without overline. I am getting similar results with ncurses too. What else do I need to do to get ā on screen?
#include <curses.h>
#include <locale.h>
#include <wchar.h>
#include <assert.h>

int main() {
    setlocale(LC_ALL, "");
    initscr();
    int s = 0x41; // represents 'a'
    int ns = 0x0305; // represents COMBINING OVERLINE (a non-spacing character)
    assert(wcwidth(ns) == 0);
    wchar_t wstr[] = { s, ns, L'\0'};
    cchar_t *cc;
    int x = setcchar(cc, wstr, 0x00, 0, NULL);
    assert(x == 0);
    add_wch(cc);
    refresh();
    getch();
    endwin();
    return 0;
}
The curses call needs a pointer to actual data, not just an uninitialized pointer.
It's okay to pass a null-terminated array for the wide characters, but the pointer used for the cchar_t data needs some repair.
Here's a fix for the program:
> diff -u foo.c.orig foo.c
--- foo.c.orig 2020-05-21 19:50:48.000000000 -0400
+++ foo.c 2020-05-21 19:51:46.799849136 -0400
@@ -3,7 +3,7 @@
#include <wchar.h>
#include <assert.h>
-int main() {
+int main(void) {
setlocale(LC_ALL, "");
initscr();
int s = 0x41; // represents 'a'
@@ -12,11 +12,11 @@
assert(wcwidth(ns) == 0);
wchar_t wstr[] = { s, ns, L'\0'};
- cchar_t *cc;
- int x = setcchar(cc, wstr, 0x00, 0, NULL);
+ cchar_t cc;
+ int x = setcchar(&cc, wstr, 0x00, 0, NULL);
assert(x == 0);
- add_wch(cc);
+ add_wch(&cc);
refresh();
getch();
That produces (on xterm) an "A" with an overbar:
(For what it's worth, 0x61 is "a", while 0x41 is "A").
Your code is basically correct aside from the declaration of cc. You'd be well-advised to hide the cursor, though; I think it may be preventing you from seeing that the overbar is incorrectly rendered in the following character position.
I modified your code as follows:
#include <curses.h>
#include <locale.h>
#include <wchar.h>
#include <assert.h>

int main() {
    setlocale(LC_ALL, "");
    initscr();
    int s = 0x41;    // represents 'A'
    int ns = 0x0305; // represents COMBINING OVERLINE (a non-spacing character)
    assert(wcwidth(ns) == 0);
    wchar_t wstr[] = { s, ns, L'\0'};
    cchar_t cc;                                  /* Changed *cc to cc */
    int x = setcchar(&cc, wstr, 0x00, 0, NULL);  /* Changed cc to &cc */
    assert(x == 0);
    curs_set(0);                                 /* Added to hide the cursor */
    add_wch(&cc);                                /* Changed cc to &cc */
    refresh();
    getch();
    endwin();
    return 0;
}
I tested on a Kubuntu system, since that's what I have handy. The resulting program worked perfectly on xterm (which has ugly fonts) but not on konsole. On konsole, it rendered the overbar in the following character position, which is clearly a rendering bug since the overbar appears on top of the following character if there is one. After changing konsole's font to Liberation Mono, the test program worked perfectly.
The rendering bug is not going to be easy to track down because it is hard to reproduce, although from my experiments it seems to show up reliably when the font is DejaVu Sans Mono. Curiously, my system is set up to use non-spacing characters from DejaVu Sans Mono as substitutes in other fonts, such as Ubuntu Mono, and when these characters are used as substitutes, the spacing appears to be correct. However, Unicode rendering is sufficiently intricate that I cannot actually prove that the substitute characters really come from the configured font, and the rendering bug seems to come and go. It may depend on the font cache, although I can't prove that either.
If I had more to go on I'd file a bug report, and if I get motivated to look at this some more tomorrow, I might find something. Meanwhile, any information that other people can provide will undoubtedly be useful; at a minimum, that should include operating system and console emulator, with precise version numbers, and a list of fonts tried along with an indication whether they succeeded or not.
It's not necessary to use ncurses to see this bug, by the way. In your shell,
printf '\u0041\u0305\u000a'
will suffice. I found it interesting to test
printf '\u0041\u0305\u0321\u000a'
as well.
The system I tested it on:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
$ konsole --version
konsole 17.12.3
$ # Fonts showing bug
$ otfinfo -v /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf
Version 2.37
$ # Fonts not showing bug
$ otfinfo -v /usr/share/fonts/truetype/liberation/LiberationMono-Regular.ttf
Version 1.07.4
There are multiple issues here. First, you're storing the result of setcchar into random memory through an uninitialized pointer, cc. Whenever a function takes a pointer for output, you need to pass the address of an object where the result will be stored, not an uninitialized pointer variable. The output must be an array of sufficient length to store the number of characters in the input. I'm not sure what the null termination convention is, so to be safe I'd use:
cchar_t cc[3];
int x = setcchar(cc, wstr, 0x00, 0, NULL);
Then, the add_wch function takes only a single character to add, and replaces or appends based on whether it's a spacing or non-spacing character. So you need to call it once for each character.
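For reference, here is a minimal sketch that sidesteps cchar_t altogether by handing the whole wide string to addwstr, the wide-string counterpart of addstr. It assumes the same ncursesw setup as the original program (link with -lncursesw):

#include <curses.h>
#include <locale.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");
    initscr();

    /* 'A' followed by U+0305 COMBINING OVERLINE */
    wchar_t wstr[] = { L'A', 0x0305, L'\0' };

    addwstr(wstr);   /* the wide-string call combines the characters itself */
    refresh();
    getch();
    endwin();
    return 0;
}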
Recently I tried to print underlined text in C. My console doesn't support ANSI escape characters, so I tried using DBCS, which my console does support. To do so, I had to change the console text attributes. At the beginning I used SetConsoleTextAttribute to change them, but later, when I wanted to remember the color and change ONLY the underscore, I started using GetConsoleScreenBufferInfoEx and SetConsoleScreenBufferInfoEx so I could also read the previous attributes. That's when I noticed that the former only affects the text I print after the call, while the latter also changes the attributes of the text printed before it.
For example, I wrote 2 short codes and compiled them.
Code 1:
#include <Windows.h>
#include <stdio.h>

int main()
{
    printf("Code 1:\n");
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    int flag = 1;
    flag &= GetConsoleMode(out, &mode);
    flag &= SetConsoleMode(out, mode | ENABLE_LVB_GRID_WORLDWIDE);
    //7 is the default foreground - gray
    SetConsoleTextAttribute(out, 7 | COMMON_LVB_UNDERSCORE);
    printf("Hello World! 1==%d", flag);
    getchar();
    SetConsoleTextAttribute(out, 7);
    printf("Goodbye World! 1==%d", flag);
    getchar();
    return 0;
}
And code 2:
#include <Windows.h>
#include <stdio.h>

typedef CONSOLE_SCREEN_BUFFER_INFOEX CSBI;

int main()
{
    printf("Code 2:\n");
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    int flag = 1;
    flag &= GetConsoleMode(out, &mode);
    flag &= SetConsoleMode(out, mode | ENABLE_LVB_GRID_WORLDWIDE);
    CSBI csbi = { 0 };
    csbi.cbSize = sizeof(csbi);
    flag &= GetConsoleScreenBufferInfoEx(out, &csbi);
    csbi.wAttributes |= COMMON_LVB_UNDERSCORE;
    flag &= SetConsoleScreenBufferInfoEx(out, &csbi);
    printf("Hello World! 1==%d", flag);
    getchar();
    csbi.wAttributes &= ~COMMON_LVB_UNDERSCORE;
    flag &= SetConsoleScreenBufferInfoEx(out, &csbi);
    printf("Goodbye World! 1==%d", flag);
    getchar();
    return 0;
}
The flag is there to make sure that all the functions return TRUE.
In the first code, 'Code 1' remains without an underscore, 'Hello World!' has an underscore, and 'Goodbye World!' doesn't have one.
In the second code, everything has an underscore until I enter a new line, and then everything loses its underscore.
Does anyone have an idea why that is? I thought they would behave the same with respect to console text attributes.
Thanks, Roy
In the second code, everything has an underscore until I enter a new line, and then everything loses its underscore.
In my test, the final effect of the two pieces of code is the same.
Does anyone have an idea why that is? I thought they would behave the same with respect to console text attributes.
SetConsoleTextAttribute: Sets the attributes of characters written to the console screen buffer by the WriteFile or WriteConsole function, or echoed by the ReadFile or ReadConsole function. This function affects text written after the function call.
SetConsoleScreenBufferInfoEx: Sets extended information about the specified console screen buffer.
As for the comment: regarding console text attributes, SetConsoleTextAttribute and SetConsoleScreenBufferInfoEx can achieve the same effect, such as changing the color of the text or adding an underscore.
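For what it's worth, a small sketch of the pattern the question is after (remember the current attributes, then underline only the text printed afterwards), using GetConsoleScreenBufferInfo to read the attributes currently in effect and SetConsoleTextAttribute so that text already on the screen is left untouched. As in the question's code, the console may also need ENABLE_LVB_GRID_WORLDWIDE for COMMON_LVB_UNDERSCORE to actually render:

#include <Windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);

    /* Read the attributes currently in effect so they can be restored later. */
    CONSOLE_SCREEN_BUFFER_INFO info;
    if (!GetConsoleScreenBufferInfo(out, &info))
        return 1;
    WORD saved = info.wAttributes;

    /* Only new output is affected; text already printed keeps its attributes. */
    SetConsoleTextAttribute(out, saved | COMMON_LVB_UNDERSCORE);
    printf("Hello World! (underlined)\n");
    getchar();

    SetConsoleTextAttribute(out, saved);   /* restore the remembered attributes */
    printf("Goodbye World! (back to normal)\n");
    getchar();
    return 0;
}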
I have the following code
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <stdbool.h>

#define dimensions 5

int RandomNumInRange(int M, int N)
{
    return M + rand() / (RAND_MAX / (N - M + 1) + 1);
}

char ** CreateWorld(int dim)
{
    int i,j;
    char **world = malloc(dim *sizeof(char*));
    for(i=0;i<dim;i++)
        world[i]=malloc(dim*sizeof(char));
    for(i=0;i<dim;i++)
        for(j=0;j<dim;j++)
            world[i][j]=42;
    return world;
}

void CreateCastle(char **world)
{
    //assuming world is big enough
    //to hold a match of 2
    int randRow,randCol;
    //1 to dimension -2 so we can spawn a 3x3 castle
    randRow = RandomNumInRange(1,dimensions-2);
    randCol = RandomNumInRange(1,dimensions-2);
    printf("position: %d %d\n", randRow, randCol);
    world[randRow][randCol]='c';
    //fill the rest so castle is 3x3
    //assuming there is enough space for that
    world[randRow-1][randCol-1]=35;
    world[randRow-1][randCol]=35;
    world[randRow-1][randCol+1]=35;
    world[randRow][randCol-1]=35;
    world[randRow][randCol+1]=35;
    world[randRow+1][randCol-1]=35;
    world[randRow+1][randCol]=35;
    world[randRow+1][randCol+1]=35;
}

void DisplayWorld(char** world)
{
    int i,j;
    for(i=0;i<dimensions;i++)
    {
        for(j=0;j<dimensions;j++)
        {
            printf("%c",world[i][j]);
        }
        printf("\n");
    }
}

int main(void){
    system("clear");
    int i,j;
    srand (time(NULL));
    char **world = CreateWorld(dimensions);
    DisplayWorld(world);
    CreateCastle(world);
    printf("Castle Positions:\n");
    DisplayWorld(world);
    //free allocated memory
    free(world);
    //3 star strats
    char ***world1 = malloc(3 *sizeof(char**));
    for(i=0;i<3;i++)
        world1[i]=malloc(3*sizeof(char*));
    for(i=0;i<3;i++)
        for(j=0;j<3;j++)
            world1[i][j]="\u254B";
    for(i=0;i<3;i++){
        for(j=0;j<3;j++)
            printf("%s",world1[i][j]);
        puts("");
    }
    free(world1);
    //end
    return 0 ;
}
If I use the system("clear") command, I get a line consisting of "[3;J"
followed by an expected output. If I run the program again, I get the same gibberish, then many blank newlines, then the expected output. If I put the system("clear") command in comments then both the "[3;J" and the blank newlines don't show and the output is expected.
Edit: it seems the error is not in the code, but rather in the way the terminal on my system is (not) set. Thank you all for your input, I definitely have a lot of interesting stuff to read and learn now.
The codes being sent by your clear command don't seem to be compatible with the Gnome terminal emulator, which I believe is what you are using.
The normal control codes to clear a console are CSI H CSI J. (CSI is the Control Sequence Introducer: an escape character \033 followed by a [.) CSI H sends the cursor to the home position, and CSI J clears from the cursor position to the end of the screen. You could also use CSI 2 J, which clears the entire screen.
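Emitted directly from a C program (a trivial sketch, assuming the terminal honors these standard sequences), that looks like:

#include <stdio.h>

int main(void)
{
    /* ESC [ H moves the cursor to the home position; ESC [ 2 J clears the screen. */
    printf("\033[H\033[2J");
    fflush(stdout);
    return 0;
}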
On Linux consoles and some terminal emulators, you can use CSI 3 J to clear both the entire screen and the scrollback. I would consider it unfriendly to do this (and the clear command installed on my system doesn't.)
CSI sequences can typically contain semicolons to separate numeric arguments. However, the J command doesn't accept more than one numeric argument and the semicolon seems to cause Gnome terminal to fail to recognize the control sequence. In any event, I don't believe Gnome terminal supports CSI 3 J.
The clear command normally uses the terminfo database to find the correct control sequences for the terminal. It identifies the terminal by the value of the TERM environment variable, which suggests that you have the wrong value for that variable. Try setting export TERM=xterm and see if you get different results. If that works, you'll have to figure out where Linux Mint configures environment variables and fix it.
On the whole, you shouldn't need to use system("clear") to clear your screen; it's entirely too much overhead for such a simple task. You would be better off using tputs from the ncurses package. However, that also uses the terminfo database, so you will have to fix your TERM setting in any case.
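If you do want to clear the screen from C without system(), here is a sketch of that terminfo route: setupterm reads the entry for $TERM, tigetstr looks up the "clear" capability, and tputs writes it out with any required padding. These are the low-level calls from the ncurses/tinfo library, so link with -lncurses or -ltinfo; this is an illustration, not the only way to do it:

#include <stdio.h>
#include <unistd.h>
#include <curses.h>
#include <term.h>

int main(void)
{
    int err;

    /* Look up the current terminal (named by $TERM) in the terminfo database. */
    if (setupterm(NULL, STDOUT_FILENO, &err) != OK)
        return 1;

    /* "clear" is the terminfo capability for clearing the whole screen. */
    char *clr = tigetstr("clear");
    if (clr == NULL || clr == (char *)-1)
        return 1;

    tputs(clr, 1, putchar);   /* emit the sequence */
    fflush(stdout);
    return 0;
}

As the answer notes, this still relies on a correct TERM setting.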
I'm currently having a problem with Xlib where whenever I call XKeysymToKeycode() and pass in an uppercase KeySym, it returns a lowercase KeyCode. Google doesn't really seem to have an answer to this question, or too much documentation at all on the functions I'm using, for that matter.
Here's the code I am using:
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

int main(void) {
    Display *display;
    char *ptr;
    char c[2] = {0, 0};
    KeySym ksym;
    KeyCode kcode;

    display = XOpenDisplay(0);
    ptr = "Test";
    while (*ptr) {
        c[0] = *ptr;
        ksym = XStringToKeysym(c);
        printf("Before XKeysymToKeycode(): %s\n", XKeysymToString(ksym));
        kcode = XKeysymToKeycode(display, ksym);
        printf("Key code after XKeysymToKeycode(): %s\n", XKeysymToString(XKeycodeToKeysym(display, kcode, 0)));
        ptr++;
    }
    XCloseDisplay(display);
    return 0;
}
It can be compiled with gcc -o sendkeys sendkeys_min.c -lX11 -lXtst -g -Wall -Wextra -pedantic -ansi (Assuming it has been saved as sendkeys_min.c.)
The current output is the following:
Before XKeysymToKeycode(): T
Key code after XKeysymToKeycode(): t
Before XKeysymToKeycode(): e
Key code after XKeysymToKeycode(): e
Before XKeysymToKeycode(): s
Key code after XKeysymToKeycode(): s
Before XKeysymToKeycode(): t
Key code after XKeysymToKeycode(): t
The expected output is, of course, that the first T in "Test" is still uppercase after being run through XKeysymToKeycode(). (Note that this is not my actual program, but a simplified version for posting here. In the actual program, I am sending key events with the resulting keycode, and the keys sent still have the problem exhibited here: they all become lowercase.)
KeySyms and KeyCodes are semantically different, and there is not a 1-1 relationship between them.
A KeyCode is an arbitrary small integer representing a key on the keyboard. (Not a character. A key.) Xlib requires that key codes be in the range [8, 255], but fortunately most keyboards have only a bit more than 100 keys.
A KeySym is a representation of some actual character associated with a key. There will almost always be several of these: lower- and upper-case letters correspond to the same key on most keyboard layouts.
So there is no such thing as an "upper-case" or "lower-case" KeyCode. When you get the KeyCode corresponding to a Keysym, you are actually losing information.
In Xlib, a given key has at least four corresponding KeySyms (lower-case, upper-case, alternate lower-case, alternate upper-case), although some might be unassigned. When you ask for the KeySym corresponding to a KeyCode, you need to supply an index; index 0 (as in your code) will get the unshifted unmodified character.
For a given keypress, the translation to a KeySym will take into account the state of the modifier keys. There are eight of these, including the Shift and Lock modifiers. Ignoring Lock, which complicates the situation, the shift modifier key would normally turn lower-case letters into their upper-case equivalents (for alphabetic keys).
Keyboard handling is much more complicated than that brief summary, but it's a start.
For your task, you probably should take a look at XkbKeysymToModifiers.
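To illustrate why that matters when you go on to synthesize key presses: the KeyCode is the same for 't' and 'T', so to produce the upper-case letter you have to send a fake Shift press around it. A rough sketch follows; it simply checks whether the KeySym sits in the shifted (index 1) position of its key, and a fuller solution would consult the modifier mapping (e.g. via XkbKeysymToModifiers, as suggested above) and handle Lock and the other modifiers:

#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

/* Send one KeySym as a fake key event, wrapping it in Shift if needed.
 * Simplified: only the plain and shifted positions of the key are considered. */
static void send_keysym(Display *display, KeySym ksym)
{
    KeyCode kcode = XKeysymToKeycode(display, ksym);
    KeyCode shift = XKeysymToKeycode(display, XK_Shift_L);
    int need_shift = (XKeycodeToKeysym(display, kcode, 1) == ksym);

    if (need_shift)
        XTestFakeKeyEvent(display, shift, True, 0);
    XTestFakeKeyEvent(display, kcode, True, 0);
    XTestFakeKeyEvent(display, kcode, False, 0);
    if (need_shift)
        XTestFakeKeyEvent(display, shift, False, 0);
    XFlush(display);
}

int main(void)
{
    Display *display = XOpenDisplay(0);
    if (!display)
        return 1;
    send_keysym(display, XK_T);   /* sent as Shift + the 't' key */
    send_keysym(display, XK_e);
    XCloseDisplay(display);
    return 0;
}

Compile and link it the same way as the question's example (-lX11 -lXtst).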