Convert user input to Unicode - C

So, I'm trying to get some input from the user in a C program with fscanf(stdin, "%s", buffer).
When I input the character å, I get a value of 134, which corresponds to code page 437.
But when I use the Windows function GetACP() I get 1252 as the active code page, and 134 doesn't match å in that code page. I tried setting the code page to UTF-8, but that didn't give me any input at all.
Is there a way of getting the corresponding code page for user input and converting that to Unicode? Or is there a better way of getting the input?
I've been looking around a lot and I can't find much info on this.

The code page used by the console window is called the OEM code page for historical reasons. You can get the default code page with GetOEMCP and the currently selected code page with GetConsoleCP.
You can set the console to use UTF-8 with the command chcp 65001, but Microsoft does not guarantee it to work in all cases.
If you don't need normal C stdio I/O to the console, you can use the Console Functions instead, e.g. WriteConsoleW to output a Unicode string.
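For the conversion itself, a minimal sketch (not from the original answer, buffer sizes arbitrary) would be: read the bytes with fscanf, ask the console which code page it uses for input with GetConsoleCP, convert to UTF-16 with MultiByteToWideChar, and print with WriteConsoleW:
#include <stdio.h>
#include <windows.h>

int main(void)
{
    char buffer[256];
    wchar_t wide[256];

    /* Read raw bytes; on the asker's machine å arrives as 134 (CP437). */
    if (fscanf(stdin, "%255s", buffer) != 1)
        return 1;

    /* Convert using whatever code page the console actually uses for input. */
    UINT cp = GetConsoleCP();
    int len = MultiByteToWideChar(cp, 0, buffer, -1, wide, 256);
    if (len == 0)
        return 1;

    /* WriteConsoleW takes UTF-16 directly, bypassing the output code page. */
    DWORD written;
    WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE), wide, len - 1, &written, NULL);
    return 0;
}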

Related

Escape character for '£' in batch file

I am trying to execute a program from the command line with parameters. My password contains the symbol '£', which I could not find a way to escape.
It is always good to enclose a parameter string, such as a reasonably strong password containing characters other than ASCII letters and digits, in double quotes.
But care must be taken when using characters in batch files that are not from the ASCII table, i.e. characters whose code point value (byte value) is greater than 127 decimal.
When using Windows Notepad to write a batch file and saving the file with ANSI encoding, characters with a code point value greater than 127 are saved using the code page determined by the Windows Region and Language settings. For North American and Western European countries this means code page Windows-1252. The pound sign has the decimal value 163 (hexadecimal: A3) in this code page.
But a command process uses a different code page, which can be seen by opening a command prompt window and running the command CHCP (change code page) without any parameter. This command outputs the active code page for the command process, which also depends on the Windows Region and Language settings. By default a command process uses code page OEM 437 in North American countries and OEM 850 in Western European countries. The pound sign has the decimal value 156 in code page 437 as well as in code page 850.
In other words, you need to know what the application that compares the password expects for the pound sign in the password:
A byte with value 163, if the password was defined using a GUI application.
A byte with value 156, if the password was defined from within a command prompt window.
Or one or more other byte values, depending on the code page and character encoding (ANSI, OEM, UTF-8, UTF-16) in use when the password with the pound sign was defined. For example, UTF-8 encodes a pound sign as two bytes with the decimal values 194 and 163.
So what to write into the batch file?
Well, you have to find that out by yourself.
For example, if the password was defined from within a command prompt window using code page 850, the pound sign in the stored password is a single byte with value 156. If the batch file is edited in Notepad using code page Windows-1252, the character œ (code point 156 in Windows-1252) must therefore be written in the password string so that the batch file contains a byte with value 156.
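To verify those byte values, here is a small sketch of my own (not part of the original answer) that converts the pound sign U+00A3 with WideCharToMultiByte into both code pages:
#include <stdio.h>
#include <windows.h>

int main(void)
{
    const wchar_t pound = L'\x00A3';   /* U+00A3 POUND SIGN */
    char ansi[4] = {0}, oem[4] = {0};

    /* Encode the same character in the two code pages discussed above. */
    WideCharToMultiByte(1252, 0, &pound, 1, ansi, sizeof ansi, NULL, NULL);
    WideCharToMultiByte(850,  0, &pound, 1, oem,  sizeof oem,  NULL, NULL);

    printf("Windows-1252: %u\n", (unsigned char)ansi[0]);   /* prints 163 */
    printf("OEM 850:      %u\n", (unsigned char)oem[0]);    /* prints 156 */
    return 0;
}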
Thank you for your detailed answer @Mofi.
Background: My CMD program calls SQLPlus and the database password contains a '£'.
Summing this up into a short fix, the following steps worked for me.
The fix:
Open your script in a robust text editor (e.g. Atom, Notepad++, etc.)
Change the file encoding (of the text editor) to CP-1252
Add chcp 1252>nul to the top of your script
Run your script and enjoy the results!
As you have found, handling of the UK pound sign is a trap for the unwary in batch files.
The issue here is that a UK pound sign £ is not an ASCII character, so it is processed differently by the command prompt and by Windows GUI programs like Notepad.
A solution that worked for me was to change the code page in the batch file to 65001 for Unicode (UTF-8) before using the £ sign.
This idea was discussed at Change the active console Code Page, which explains that the default code page is determined by the Windows Locale.
For example, put this code at the start of your batch file:
@echo off
:: Change the code page to Unicode/65001 before using non-ASCII characters.
chcp 65001

How to output special characters in cmd window?

When I write a C program and try to output special characters (like ä ö ü ß) with printf() in the cmd window on Windows 10, it only shows something like ▒▒▒▒▒▒▒▒▒▒▒▒.
But if I just type them in the cmd window, without a C program being executed, it displays these characters properly.
When I change the console type to standard output in NetBeans, the output is correct as well.
I tried to change the code page of cmd but it didn't fix the problem.
I use the gcc C compiler.
The reason is the usage of different code pages for character encoding.
In a GUI text editor, program code is stored in a file in which each character is encoded with just a single byte; in Western European and North American countries the code page used for this is Windows-1252.
In the console window opened when a console application runs, an OEM code page is used instead: OEM 850 in Western European countries and OEM 437 in North American countries.
So to get ÄÖÜäöüß displayed as expected in the console window, you need to write different byte values in your code, at least when the program is executed in Western European or North American countries.
Character Windows-1252 OEM 850
Ä \xC4 \x8E
Ö \xD6 \x99
Ü \xDC \x9A
ä \xE4 \x84
ö \xF6 \x94
ü \xFC \x81
ß \xDF \xE1
The code page used by default in a console window can be seen by opening a command prompt window and running either chcp (change code page) or mode, both of which display the active code page.
The default code page for GUI applications and console applications on a computer for a user account depends on the Windows region and language settings for this user account.
Some web pages you should read to better understand character encoding:
Character encoding (English Wikipedia article)
On the Goodness of Unicode by Tim Bray
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky
What's the best default new file format? (UltraEdit forum topic)
Programmers should not write non-ASCII characters directly into strings output by a compiled executable, because the binary representation (bytes) of those characters in the executable depends on which code page the compiler uses. It is better to use hexadecimal notation when the code page active on execution of the application is known, or is set by the application before the string is output.
It is also possible to store the strings in the executable as Unicode, determine the encoding of the output handle before outputting any string, and convert each Unicode string to the encoding of the output handle before it is written.
And of course, how the bytes in the executable's strings are finally displayed on screen also depends on the output font used.
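As an illustration of the hexadecimal notation suggested above (my own sketch, assuming the console is using OEM code page 850), the umlauts from the table can be written like this:
#include <stdio.h>

int main(void)
{
    /* Byte values from the OEM 850 column of the table above. */
    printf("\x8E \x99 \x9A \x84 \x94 \x81 \xE1\n");   /* Ä Ö Ü ä ö ü ß */
    return 0;
}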

Complications in reading extra characters

While using the default console font Raster Fonts 8x12, I am unable to read extended characters using ReadConsoleOutputCharacter(); these characters are printed out as ?.
If I change the console font to "Consolas" or "Lucida Console", these extended characters read by ReadConsoleOutputCharacter() are printed out without a problem.
Is there anything I can do about that?
Anyway, I fixed it by setting the console code pages and the locale before doing the console I/O:
SetConsoleOutputCP(GetACP());   /* use the ANSI code page for console output */
SetConsoleCP(GetACP());         /* ...and for console input */
setlocale(LC_ALL, "");          /* let the CRT use the user's default locale */
@David Heffernan, I suggest you read this.
According to the docs (https://msdn.microsoft.com/en-us/library/windows/desktop/ms684969%28v=vs.85%29.aspx):
This function uses either Unicode characters or 8-bit characters from the console's current code page. The console's code page defaults initially to the system's OEM code page. To change the console's code page, use the SetConsoleCP or SetConsoleOutputCP functions, or use the chcp or mode con cp select= commands.
I believe you're getting back a Unicode string that needs to be encoded to the console's charset before it can be displayed.
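One way around the code-page conversion entirely (a sketch of my own, coordinates chosen arbitrarily) is to use the wide ReadConsoleOutputCharacterW variant, which returns UTF-16 characters that can be written back with WriteConsoleW:
#include <windows.h>

int main(void)
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    COORD where = { 0, 0 };   /* read from the top-left of the screen buffer */
    wchar_t chars[16];
    DWORD read = 0, written = 0;

    /* The W variant returns UTF-16 characters, so no code page is involved. */
    if (!ReadConsoleOutputCharacterW(out, chars, 16, where, &read))
        return 1;

    WriteConsoleW(out, chars, read, &written, NULL);
    return 0;
}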

How to output foreign characters in console?

How can I print foreign characters on the screen using C?
Here's my code, which doesn't work:
#include <stdio.h>
#include <stdlib.h>   /* for system() */
#include <locale.h>

int main(){
    setlocale(LC_ALL, "Turkish");
    printf("İ ş ğ ü ö ı");
    system("pause");
    return 0;
}
On my Windows there are no such characters in the 'Terminal' font, so I think you can't print them.
But I suggest you check this font yourself; maybe you have a different version of it.
If you're using a narrow charset, then you need to make sure that the terminal/console is using the same charset and that the source code file is saved in the correct encoding; otherwise the system will of course misinterpret the character codes.
To set the charset in the console, run chcp. For example, to use code page Windows-1254, run chcp 1254. You can also set the code page programmatically with SetConsoleOutputCP, like SetConsoleOutputCP(1254).
However, you should avoid the legacy ANSI code pages and use Unicode instead. The currently preferred way on Windows is to output Unicode characters as wide chars with wprintf. You may need to set the stream to wide mode first with
int result = _setmode(_fileno(stdout), _O_U16TEXT);
then
wprintf(L"İ ş ğ ü ö ı");
See also the wprintf manual on Windows, Linux, or Mac. However, on POSIX systems UTF-8 is preferred.
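Put together (my own completion of the two fragments above, with the headers that _setmode and _O_U16TEXT need), a full program would look roughly like this:
#include <stdio.h>
#include <io.h>
#include <fcntl.h>

int main(void)
{
    /* Switch stdout to UTF-16 mode, then use only wide output functions. */
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"İ ş ğ ü ö ı\n");
    return 0;
}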
On older Windows versions, UTF-8 support in the console is not very good, but it keeps getting better, and Windows 10 even supports UTF-8 as a locale. So you can just call SetConsoleOutputCP(CP_UTF8); or SetConsoleOutputCP(65001); (or run chcp 65001 in the console) and it will work immediately, provided that you saved the source code as UTF-8. Remember to also set the font to one that supports those characters, like Lucida Console or Consolas. The default raster font contains a very limited number of characters and appears with a lot of aliasing; it also doesn't work well on modern hidpi displays.
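For the UTF-8 route, a minimal sketch (assuming the source file is saved as UTF-8 and a font like Consolas is selected) is:
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Tell the console to interpret the output bytes as UTF-8. */
    SetConsoleOutputCP(CP_UTF8);
    printf("İ ş ğ ü ö ı\n");
    return 0;
}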
There are already lots of questions about outputting Unicode on this site like Output unicode strings in Windows console app or UTF-8 character in .NET Console Application. Please have a look and try to see which one fits you.
Edit
When you use
wchar_t c=L'ğ';
fputwc(c,ptr);
you're printing to a file and not the console. In that case just the stream of bytes is saved into the file. When you open the file again, it's the job of the editor to interpret the bytes in the correct charset and display them correctly. For example, the character "ğ" is stored as c4 9f in UTF-8, and when you open the file as UTF-8, the editor knows that those bytes represent the char "ğ" and displays it.
Unfortunately there's no character-encoding information embedded in a text file, so the editor must choose one. Remember: There Ain't No Such Thing As Plain Text (a must-read). A simple editor may just open the file as ANSI in the current Windows code page, and if the original encoding is not that one, the characters won't be displayed correctly and you'll just see garbage.
Some more advanced editors like Notepad++ or MS Word will try to guess the encoding of the file. But as with any guessing, it can be wrong, and the result is again garbage.
The simplest solution is to add a BOM to the beginning of the file so the editor can recognize the encoding easily. If your file doesn't contain a BOM, you need to tell the editor to read the file in the correct encoding when its guess is wrong (for wchar_t output on Windows as above, that's UTF-16LE). For example, in Notepad++ it's the Encoding menu.
Unfortunately the OP didn't edit the question to show what was tried, so there's nothing more I can explain.
Your code works: http://ideone.com/K9hrv5
setlocale(LC_ALL,"Turkish");
printf("İ ş ğ ü ö ı");
The only issue is that you have to set your terminal locale as well before executing your C program's output binary.
Setting your terminal locale works:
setlocale only affects the runtime locale. It doesn't make your compiler support extra source file characters.
You may need to specify the non-ASCII characters in your source file by using character constants (e.g. \xF1 for the character with code 241).
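As a hedged illustration of that suggestion (my own sketch, assuming the console code page is switched to Windows-1254, the Turkish ANSI code page, and the font supports these characters), the characters can be written as hexadecimal byte values so the source file encoding no longer matters:
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SetConsoleOutputCP(1254);   /* Windows-1254 (Turkish) code page */
    /* Byte values of İ ş ğ ü ö ı in Windows-1254. */
    printf("\xDD \xFE \xF0 \xFC \xF6 \xFD\n");
    return 0;
}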

Does gnome-terminal support DOS code pages?

In my C program I've had to swap my unicode box-drawing characters into escaped characters for DOS code page 437 to get it to work in the Windows command prompt. Is it possible to change the code page of gnome-terminal to display these characters correctly when natively compiling the program for linux?
Thanks.
From https://nethackwiki.com/wiki/IBMgraphics
The current gnome-terminal does not have a setting for code page 437, but it does support other code pages that are equivalent for NetHack's purposes, such as 862 (Hebrew).
To set code page 862 on gnome-terminal:
Select Terminal->Set Character Encoding->Add or Remove.
In the pane on the left, select the line with description Hebrew and encoding IBM862.
Click the right-pointing arrow between the two panes.
Click Close.
The above steps only need to be done once for the lifetime of the Gnome installation. Once done, it is sufficient to:
Select Terminal, Set Character Encoding, and then Hebrew (IBM862).
It should be noted that the current default gnome-terminal font in Ubuntu Jaunty fully supports DECgraphics as long as eight_bit_tty is set to false.
If you need these characters, you should use their correct Unicode code point values and output them as UTF-8. Or, if you prefer, you can output them as wide characters and let the standard library's locale system take care of converting them to UTF-8 or another "native" encoding the user has selected (which might even be CP437, although I've never seen a system set up that poorly...).
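If you go the UTF-8 route, a minimal sketch of my own (assuming the compiler's execution character set is UTF-8, which is the gcc default on Linux) for drawing a small box would be:
#include <stdio.h>

int main(void)
{
    /* Box-drawing characters written as Unicode code points; with a UTF-8
       execution character set they are emitted as UTF-8 bytes, which
       gnome-terminal displays directly. */
    printf("\u250C\u2500\u2510\n"   /* top edge    */
           "\u2502 \u2502\n"        /* sides       */
           "\u2514\u2500\u2518\n"); /* bottom edge */
    return 0;
}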
