I'm studying C++ and I have found this function in the C library:
setlocale (http://www.cplusplus.com/reference/clocale/setlocale/), but I'm not able to understand what it is for.
I have used this on Ubuntu:
printf ("Locale is: %s\n", setlocale(LC_ALL,"") );
and it prints:
Locale is: en_US.UTF-8
but on macOS it prints:
Locale is: C
What does this C mean?
In what context and how should it be used?
On Linux, read the setlocale(3) and locale(7) man pages. Read also the internationalization and localization wikipage.
On Debian and Ubuntu, you might run (as root) dpkg-reconfigure locales to add more locales.
Then you could set your LANG (and LC_ALL and others) environment variables (see environ(7)) to change the language of messages.
For example, I have the French UTF-8 locale installed. If I do
% env LC_ALL=fr_FR.UTF-8 ls /tmp/nonexisting
I'm getting the error message in French:
ls: impossible d'accéder à '/tmp/nonexisting': Aucun fichier ou dossier de ce type
If I use the C locale (which is the default one), it is in English:
% env LC_ALL=C ls /tmp/nonexisting
ls: cannot access '/tmp/nonexisting': No such file or directory
As a rule of thumb, you want to export LC_ALL=C LANG=C before running commands whose output you show on this forum (e.g. because you don't want to show compiler or shell error messages in French).
If you are coding a program which you want to internationalize (e.g. to be easily usable by people understanding only French or Chinese), you need at least to use gettext(3) (notably for printf format strings!) and perhaps textdomain(3), and you'll need to use msgfmt(1) for dealing with message catalogs. Of course you'll need to build catalogs of translated messages.
Localization also influences how numbers are parsed and printed (with a comma or a dot separating the thousands or the decimal digits) and how money and time get printed and parsed (e.g. strftime(3)).
These are locale-specific settings. For example, in some countries a comma is used as the decimal separator; in others it's a dot. In the US, 22,001 is usually understood as 22 thousand and 1; in some European countries it's 22 point 001.
Dates can be given in DD/MM/YYYY (most of Europe) or MM/DD/YYYY (in the US) format, and so forth.
I'm new to Frama-C, so I apologize in advance for my question.
I would like to make a plugin that modifies the source code: clone some functions, insert some function calls, and have my plugin generate a second file containing the modified version of the input file.
I would like to know if it is possible to generate a new C file with Frama-C. For example, the results of the Sparecode and Semantic constant folding plugins are displayed directly on the terminal, not written to a file. Does Frama-C have a way to write to a file instead of sending the result of the analysis to standard output?
Of course we can redirect the output of frama-c to a file (file.c, for example), but in that case, for the scf plugin, the results of the value analysis are included, and I found that Frama-C replaces, for example, for loops with while loops.
What I would like is for Frama-C to generate a file containing my original code plus the modifications I have inserted.
I looked in the directory src/kernel_services/ast_printing but have not really found functions that could guide me.
Thanks.
On the command line, option -ocode <file> indicates that any subsequent -print will be done in <file> instead of the standard output (use -ocode "" after that if you want to print on stdout again). Note that -print prints the code corresponding to the current project. You can use -then-on <prj> to change the project you're interested in. More information is of course available in the user manual.
All of this is of course available programmatically. In particular, File.pretty_ast by default pretty-prints (i.e. outputs a C program) the AST of the current project on stdout, but it takes two optional arguments for changing the project or the formatter to which the output should be done.
I am developing an ncurses application myself in C. The problem is that putty displays alternative character set characters like ACS_VLINE as letters. My locale is
LANG=en_US.UTF-8
and I have set
export NCURSES_NO_UTF8_ACS=1
I have also set putty to UTF-8 and tried different fonts. The characters display fine on the tty on the actual machine so I think the issue is with putty. I have also tried linking ncursesw instead of ncurses.
It is a combination of things. The recommended TERM for PuTTY is "putty", but due to inertia, most people use "xterm". The line-drawing support in the xterm terminal description is different from PuTTY's assumptions because xterm supports luit, which has some limitations with the way the alternate character set is managed (see Debian Bug report #254316:
ncurses-base: workaround for screen's handling of register sgr0 isn't quite right).
If you use infocmp to compare, you may see these lines which deal with the alternate character set:
acsc: '``aaffggiijjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~', '``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~'.
enacs: NULL, '\E(B\E)0'.
rmacs: '\E(B', '^O'.
smacs: '\E(0', '^N'.
VT100s can have two character sets "designated", referred to as G0 and G1:
The "xterm" line-drawing works by changing the designation of G0 between the ASCII and line-drawing characters,
the "putty" line-drawing works by designating ASCII in G0 and line-drawing in G1 and switching between the two with the shift-in/shift-out control characters.
Although both are VT100-compatible, the xterm scheme does not work with PuTTY. Because of the tie-in with luit, the normal terminal description for xterm will not change (unless it proves possible to modify luit to solve the problem for its users), so a workaround is needed for users of PuTTY:
use a different terminal description as recommended, e.g., TERM=putty. PuTTY's settings dialog lets you set environment variables to pass to the remote machine.
This has the drawback that some systems do not have the full ncurses terminal database installed, due to "size" (it is 6.8Mb on my local machine). Also, TERM may not be on the list of allowed ssh environment variables.
you can compile your own terminfo entry with ncurses' tic, e.g.,
cat >foo <<"EOF"
xterm|my terminfo entry,
enacs=\E(B\E)0,
rmacs=^O,
smacs=^N,
use=xterm-new,
EOF
tic foo
use GNU screen. It does its own fixes, and happens to compensate for PuTTY's problems.
Further reading
SCS – Select Character Set (VT100 manual)
4.4 Character Set Selection (SCS) (VT220 manual)
The terminfo database is big—do I need all of that? (ncurses FAQ)
Putty: login, execute command/change environment variable, and do NOT close the session
I want to make some additions. I faced the same issue: many ncurses-based tools like dialog, menuconfig and nconfig from the Linux kernel sources, and even mc, are broken when built with ncurses (although mc is built with S-Lang on many OSes and is not affected).
Here is what happens:
ncurses uses the smacs record from terminfo to switch to the "alternate charset" and then uses acsc to draw boxes. It sends a character such as `a`, which is a box-drawing character in the alternate charset (ACS).
This is VT100 graphics.
Some terminal emulators nowadays do not support ACS when in UTF-8 because apps have ability to send real box-drawing codepoints.
There is unofficial capability U8 (capital U!) in terminfo that tells ncurses: "Instead of ACS use real box-drawing codepoints."
I see this capability in the output of infocmp -x for xterm-utf8, and for putty as well, but not for xterm.
As you can read in ncurses(3) (https://invisible-island.net/ncurses/man/ncurses.3x.html), ncurses is aware of the Linux console and of GNU screen (and tmux, which also uses screen as its TERM) and always behaves as if U8 were set for them.
For other terminals that do not support ACS in UTF-8 mode, you can set NCURSES_NO_UTF8_ACS.
Unfortunately, ncurses is not aware of putty.
There is also luit, which may convert ACS to Unicode code points.
So, here is what we can do to run ncurses + putty in UTF-8:
Use a terminal with the U8#1 capability. This one is set for putty (by the way, I suggest using putty-256color instead). You can create your own entry with U8#1 and colors#256 and compile it with tic -x. Be careful: the mouse may not work on terminals whose names do not start with xterm (see mouseinterval(3), BUGS section). This is why I do not use the putty terminal. I suggest copying xterm-utf8, adding colors#256, compiling it, and staying with it: it works perfectly with putty, mouse, and UTF-8.
You can set NCURSES_NO_UTF8_ACS in your profile.
You can run screen or tmux: it will set TERM to screen and fix ncurses.
You can run luit: it will do all the conversions for you.
Since putty 0.71, you can ask putty to support ACS drawing even in UTF-8.
I recently switched to a Macbook Air and thus to OS X.
I imported some of my current projects to it and tried to compile them with my Makefile.
My Makefile has some custom output adding colors, with /bin/echo -e "\033[0;31m" plus the text, for example. It works great on my old computer (an OpenSUSE distro) but it doesn't even compile my binary anymore on my Mac.
Here's what I get when I try to print a custom line through my Makefile:
-e \033[0;31m (MY TEXT) \033[00m
As I use this custom output when compiling my .o files, none of them get compiled, so my project build fails.
My Makefile works great without these custom outputs, but I'd like to know why they don't work on OS X.
I can post my Makefile code if some people request it for further investigation.
This is similar to, but not quite a duplicate of, Color termcaps Konsole?. The problem is that -e is not an option of OS X's echo (which follows POSIX). If you take out the -e, it will work as you expect.
The -e option is used in some implementations to allow \e as a synonym for \033 (but your example uses the latter anyway).
Whether you use echo or printf in POSIX scripts is largely a matter of taste, since both accept the same set of backslash sequences. printf, of course, also accepts % sequences for formatting numbers, but C++ programmers have gotten into the habit of not using printf-style calls (cf. cout vs printf).
For reference.
printf - write formatted output
echo - write arguments to standard output
I have a program written in C using ncurses. It lets the user type input and displays it. It does not display correctly if the user inputs UTF-8 characters.
I saved the characters the user entered to a file, and when I cat this file directly in the shell, it displays correctly.
I searched Stack Overflow and Google and tried several methods, such as linking with ncursesw, but it still displays incorrectly.
I also ran ldd /usr/bin/screen: libncurses.so.5 => /usr/lib64/libncurses.so.5
screen displays what the user inputs correctly.
How do I make ncurses display UTF-8 characters correctly?
What is the general way to display UTF-8 characters in C using ncurses?
You need to have called setlocale(LC_CTYPE, ""); (with a UTF-8 based locale configured) before initializing ncurses. You also need to make sure your ncurses is actually built with wide char support ("ncursesw") but on modern distros this is the default/only build.
# need these as well, on top of installation and locale settings
# at least check the locale
locale
locale-gen en_US.UTF-8
# vim ~/.bashrc  # add these 3 lines once OK, to fix the profile
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
export NCURSES_NO_UTF8_ACS=1
I'd like to write an application in C which uses arrow-keys to navigate and F-keys for other functions, such as saving to a file, language selection, etc.
Probably the values depend on the platform, so how could I find out which values the keys have?
If they don't depend on the platform, or if you already know them, I don't need to know how to find out. ;)
Edit:
My platforms are Linux and M$ Windows.
Therefore, I'm looking for a solution as portable as possible.
(Probably something like
#ifdef __unix__
#define F1 'some number'
/* ... */
#define ARROW_UP 'some other number'
#elif __WIN32__ || MSDOS /*whatever*/
#define F1 'something'
/* ... */
#define ARROW_UP 'something different'
#endif
)
I think that depends on $TERM, but either way it's going to be a sequence of characters. I get this:
% read x; echo $x | od -c --
^[[15~
0000000 033 [ 1 5 ~ \n
0000006
That's my F5 key, and apologies for this being a *nix-centric answer, if that's not your platform.
This problem is a lot messier than anyone would like. In brief, each of these keys sends a sequence of characters, and the details depend on which terminal emulator is being used. On Unix, you have a couple of choices:
Write your app to use the curses library and use its interface to the terminfo database.
Parse the terminfo database yourself to find the sequences. Or, to use an API, look at the sources of tput to see how this is done.
Use the command-line program tput to help discover the sequences. For example, to learn the sequence for F10, run tput kf10 | od -a. Keep in mind this changes from terminal to terminal, so you should run tput each time you run your app.
Write your application using one of the X libraries and get access to 'key symbols' rather than a sequence of characters. The XLookupKeysym function is a good place to start; you'll find the names of the symbols in /usr/include/X11/keysymdef.h. If you are going to connect over the network from non-X systems (e.g., Windows), this option is not so good.
I have no idea how to do any of this on Windows.
If you're on a Unix or Linux machine, simply type Ctrl-V and then the key while in insert mode in vim. You'll see the code.
This handy site has lots of useful nuggets of info on the raw scancodes:
http://www.win.tue.nl/~aeb/linux/kbd/scancodes.html#toc1
Some of the keys generate two-character key codes.
One technique I have successfully used on the PC is to read the two characters, set the 7th bit on the second character, and return that character as the answer. This lets the rest of the program treat each keystroke as a single character.
The solution to your problem will depend on which keys you need to support. Hiding the translation behind a function of some sort is generally a good idea.