I'm trying to make a simple card game, but after switching from Windows 7 to Xubuntu 14.04 even the simplest things no longer work. I've been at this for three days and still can't solve it.
For the code below, the console gives me three diamond question marks.
#include <stdio.h>
#include <stdlib.h>
#define herz "\xe2\x99\xa5"
#define karo "\xe2\x99\xa6"
#define kreuz "\xe2\x99\xa3"
#define pik "\xe2\x99\xa0"
int main()
{
    char ch = '0';
    printf("%c%c%c%c", herz, karo, kreuz, pik);
    return 0;
}
I tried this with both the Code::Blocks console and the Xubuntu one
(xterm -T $TITLE -e and xfce4-terminal -T $TITLE -x).
The console LANG is en_US.UTF-8.
I tried several fonts and it didn't change a thing. I can type the special characters into the console manually, but when C tries to print them it does not work.
%c prints a single character. Since your macros expand to strings, use %s instead. Your print statement becomes:
printf("%s%s%s%s", herz, karo, kreuz, pik);
You have defined literal constant character strings, where this approach calls for literal constant wide characters written with the Unicode code point (U+2665 for ♥, not its UTF-8 byte sequence):
const wchar_t herz = L'\u2665';
You then need to print using wprintf() with the %lc format specifier:
wprintf(L"%lc%lc%lc%lc", herz, karo, kreuz, pik);
I'm trying to print ≠ for a university project.
I'm using Code::Blocks with UTF-8 encoding.
I searched for methods, found one using Unicode in C, and tried to replicate it in a blank test program, but it didn't print the character:
#include <stdio.h>
#include <locale.h>
#include <wchar.h>
int main(){
    setlocale(LC_ALL, "");
    printf("%lc\n", (wchar_t)0x2260);
}
I tried other things, but either nothing prints or I get random characters like "â%". Can someone help or give me an idea of how to print the character?
Does it have something to do with my language? When I tried to print a Portuguese phrase with setlocale(LC_ALL,"");, it also printed random characters.
EDIT: I'm using Windows 10. In cmd it works fine, but in the Code::Blocks terminal it doesn't.
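For reference, a sketch not taken from the original thread: one workaround often suggested on Windows is to switch the console output code page to UTF-8 and print the UTF-8 bytes of U+2260 directly. Whether the Code::Blocks terminal honours this depends on its configuration.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    SetConsoleOutputCP(CP_UTF8);   /* console now interprets output as UTF-8 */
    printf("\xE2\x89\xA0\n");      /* UTF-8 byte sequence for U+2260 (≠) */
    return 0;
}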
Add the ru_RU.CP1251 locale (on Debian, uncomment ru_RU.CP1251 in /etc/locale.gen and run sudo locale-gen) and compile the following program with gcc -fexec-charset=cp1251 test.c (the input file is in UTF-8). The classification output is empty: only the letter 'я' is handled wrongly.
Other letters are classified as either lowercase or uppercase just fine.
#include <locale.h>
#include <ctype.h>
#include <stdio.h>
int main (void)
{
    setlocale(LC_ALL, "ru_RU.CP1251");
    char c = 'я';
    int i;
    char z;

    for (i = 7; i >= 0; i--) {
        z = 1 << i;
        if ((z & c) == z) printf("1"); else printf("0");
    }
    printf("\n");

    if (islower(c))
        printf("lowercase\n");
    if (isupper(c))
        printf("uppercase\n");
    return 0;
}
Why does neither islower() nor isupper() work on the letter я?
The answer is that the encoding for the lower-case version of that character in CP1251 is decimal 255, and on your implementation that value, passed through a signed char, arrives at islower() and isupper() as -1, which is commonly the value of EOF.
You need to track down the source code for the runtime library to see what it does and why.
The solution is to write your own implementations, or wrap the ones you have. Personally, I never use these functions directly because of the many gotchas.
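A minimal sketch of such a wrapper (the name safe_islower is purely illustrative): cast the char to unsigned char before handing it to the library, so byte values above 127 never arrive as negative ints.

#include <ctype.h>

/* Illustrative wrapper: never passes a negative char value to islower() */
static int safe_islower(char c)
{
    return islower((unsigned char)c);
}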
Igor, if your file is UTF-8 it makes no sense to use code page 1251, as it has nothing in common with the UTF-8 encoding. Just use the ru_RU.UTF-8 locale and you'll be able to display your file without any problem. Or, if you insist on using ru_RU.CP1251, you'll first need to convert your file from UTF-8 to CP1251 (you can use the iconv(1) utility for that):
iconv --from-code=utf-8 --to-code=cp1251 your_file.txt > your_converted_file.txt
On the other hand, -fexec-charset=cp1251 only affects the character set used in the executable; you have not specified the input charset to use for the string literals in your source code. The compiler is probably determining that from the environment (which you have set via your LANG or LC_CTYPE environment variables).
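For example, you could state both charsets explicitly when compiling (a sketch; -finput-charset tells GCC how the source file itself is encoded):
gcc -finput-charset=UTF-8 -fexec-charset=cp1251 test.c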
Only once you control exactly what locales are used at each stage will you get coherent results.
The main reason an effort is being made to move everyone to a common charset (Unicode/UTF-8) is precisely to avoid dealing with all these locale settings at each stage.
If you always deal with documents encoded in CP1251, you'll need to use that encoding for everything on your computer, but whenever you receive a document encoded in UTF-8 you'll have to convert it to be able to view it correctly.
I mostly recommend switching to UTF-8, as it is an encoding that supports every country's character set, but at this moment that decision is only yours.
NOTE
On Debian Linux:
$ sed 's/^/ /' pru-$$.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <locale.h>
#define P(f,v) printf(#f"(%d /* '%c' */) => %d\n", (v), (v), f(v))
#define Q(v) do{P(isupper,(v));P(islower,(v));}while(0)
int main()
{
    setlocale(LC_ALL, "");
    Q(0xff);
}
Compiled with
$ make pru-$$
cc pru-1342.c -o pru-1342
Execution with the ru_RU.CP1251 locale:
$ locale | sed 's/^/ /'
LANG=ru_RU.CP1251
LANGUAGE=
LC_CTYPE="ru_RU.CP1251"
LC_NUMERIC="ru_RU.CP1251"
LC_TIME="ru_RU.CP1251"
LC_COLLATE="ru_RU.CP1251"
LC_MONETARY="ru_RU.CP1251"
LC_MESSAGES="ru_RU.CP1251"
LC_PAPER="ru_RU.CP1251"
LC_NAME="ru_RU.CP1251"
LC_ADDRESS="ru_RU.CP1251"
LC_TELEPHONE="ru_RU.CP1251"
LC_MEASUREMENT="ru_RU.CP1251"
LC_IDENTIFICATION="ru_RU.CP1251"
LC_ALL=
$ pru-$$
isupper(255 /* 'я' */) => 0
islower(255 /* 'я' */) => 512
So glibc is not faulty; the fault is in your code.
Jonathan Leffler's first comment on the question is correct: the isxxx() (and iswxxx()) functions are required to handle an EOF (WEOF) argument (probably to be fool-proof). This is why int was chosen as the argument type. When we pass an argument of type char, or a character literal, it is promoted to int (preserving the sign). And because the char type and character literals are signed by default in gcc, 0xFF becomes -1, which by unhappy coincidence is the value of EOF.
Therefore, always cast explicitly when passing values of type char (and character literals with code 0xFF) to functions that take an int argument; don't count on char being unsigned, because that is implementation-defined. The cast can be done either via (unsigned char) or via (uint8_t), which is less to type (you must include stdint.h).
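Applied to the program above, the classification calls would look like this (only the cast changes):

if (islower((unsigned char)c))
    printf("lowercase\n");
if (isupper((unsigned char)c))
    printf("uppercase\n");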
See also https://sourceware.org/bugzilla/show_bug.cgi?id=20792 and Why passing char as parameter to islower() does not work correctly?
Reading about how to use shift sequences to print characters from other character sets, I've arrived at the following code (I'm sure the escape sequence is incorrect, but I do not know why):
#include <stdio.h>
int main(int argc, char *argv[])
{
    printf("\x1B\x28\x49\x0E\xB3"); /* Should print: ウ */
    return 0;
}
This however is not working for me, as it outputs a "?" in the terminal rather than the character "ウ". My font does indeed have support for the character. If someone could explain what I'm doing incorrectly and how I would go about correcting this (still using shift sequences), that would be greatly appreciated.
Thank you
You are using ISO-2022-JP-3. Hence you need to write your program as follows:
#include <stdio.h>

int main ()
{
    // switch to the JIS X 0201-1976 Kana set (1 byte per character)
    printf ("\x1B(I");
    printf ("\x33"); /* ウ */
    // mandatory switch back to ASCII before the end of the line
    printf ("\x1B(B");
    printf ("\n");
    return 0;
}
Note however that this is unlikely to be the character set expected by the terminal (on Linux, that is most likely UTF-8). You can use iconv to perform the conversion:
$ ./main | iconv -f ISO-2022-JP-3
Alternatively you can use iconv(3) to perform the conversion inside your program.
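A rough sketch of that in-program conversion (assuming your iconv implementation knows the ISO-2022-JP-3 encoding; error handling mostly omitted):

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char in[] = "\x1B(I\x33\x1B(B";   /* the katakana character in ISO-2022-JP-3 */
    char out[16] = {0};
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof(out) - 1;

    iconv_t cd = iconv_open("UTF-8", "ISO-2022-JP-3");
    if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }
    iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);

    printf("%s\n", out);               /* now safe to send to a UTF-8 terminal */
    return 0;
}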
What happens if you do echo 'ウ' >/tmp/x && od -x /tmp/x - do you see the same hex characters as you are using in the example above? I'm betting not, and I've based this answer on that bet.
Your cat works because ウ is encoded in your source file as UTF-8.
You have your terminal set to UTF-8 (or more likely it's just defaulting to UTF-8) so UTF-8 works, but Shift-JIS does not.
I tried the following
printf ("%c", 236); //236 is the ASCII value for infinity
But I am just getting garbage output on the screen.
printf was working correctly for ASCII values less than 128. So I tried the following
printf ("%c", 236u); //unsigned int 236
I am still just getting garbage. So what should I do to make printf display ASCII values from 128 to 255?
As everyone in the comments has already mentioned, you cannot reliably print characters above 127 and assume they are ASCII, since ASCII is only defined up to 127. Also, the output you see depends very much on the terminal settings (i.e. which locale it is configured for).
If you're fine with printing UTF-8, you could give wprintf a try as shown below:
#include <stdio.h>
#include <wchar.h>
#include <locale.h>
int main()
{
    setlocale( LC_ALL, "en_US.UTF-8" );
    wprintf (L"%lc\n", 8734);
    return 0;
}
It would produce the following output:
∞
8734 (0x221E) is the Unicode code point of the symbol ∞.
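If wide-character output is not required, another option is to print the UTF-8 byte sequence of U+221E directly with plain printf (a sketch, assuming the terminal is set to UTF-8):
printf("\xE2\x88\x9E\n"); /* UTF-8 encoding of U+221E (∞) */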
Standard C does not have a symbol for infinity. That's for your implementation (e.g. your compiler, your operating system, your terminal and your hardware) to define. Consider that C was designed with portability in mind for systems that use non-ASCII character sets (e.g. EBCDIC).
Edit:
I can only use stdio.h and stdlib.h
I would like to iterate through a char array filled with chars.
However, chars like ä and ö take up twice the space and use two elements.
This is where my problem lies: I don't know how to access those special chars.
In my example the char "ä" would use hmm[0] and hmm[1].
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
    char* hmm = "äö";
    printf("%c\n", hmm[0]); // I want to print "ä"
    printf("%i\n", strlen(hmm));
    return 0;
}
Thanks, I tried to run my attached code in Eclipse, and there it works. I assume that's because it uses 64 bits and the "ä" has enough space to fit. strlen confirms that each "ä" is only counted as one element.
So I guess I could somehow tell it to allocate more space for each char (so "ä" can fit)?
#include <stdio.h>
#include <stdlib.h>
int main()
{
    char* hmm = "äüö";
    printf("%c\n", hmm[0]);
    printf("%c\n", hmm[1]);
    printf("%c\n", hmm[2]);
    return 0;
}
A char is always one byte.
In your case you think that "ä" is one char: wrong.
Open your .c source file with a hexadecimal viewer and you will see that ä uses 2 chars, because the file is encoded in UTF-8.
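You can also verify this from a shell, for example with od (on Linux or macOS; 0xC3 0xA4 is the UTF-8 encoding of 'ä'):
$ printf 'ä' | od -An -tx1
 c3 a4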
Now the question is: do you want to use wide characters?
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>
#include <locale.h>
int main()
{
    const wchar_t hmm[] = L"äö";
    setlocale(LC_ALL, "");
    wprintf(L"%ls\n", hmm);
    wprintf(L"%lc\n", hmm[0]);
    wprintf(L"%zu\n", wcslen(hmm));  /* wcslen() returns size_t, so use %zu */
    return 0;
}
Your data is in a multi-byte encoding. Therefore, you need to use multibyte character handling techniques to divvy up the string. For example:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>   /* for mblen() */
#include <locale.h>

int main(void)
{
    char* hmm = "äö";
    int off = 0;
    int len;
    int max = strlen(hmm);

    setlocale(LC_ALL, "");
    printf("<<%s>>\n", hmm);
    printf("%zi\n", strlen(hmm));
    while (hmm[off] != '\0' && (len = mblen(&hmm[off], max - off)) > 0)
    {
        printf("<<%.*s>>\n", len, &hmm[off]);
        off += len;
    }
    return 0;
}
On my Mac, it produced:
<<äö>>
4
<<ä>>
<<ö>>
The call to setlocale() was crucial; without it, the program ran in the "C" locale instead of my en_US.UTF-8 locale, and mblen() mishandled things:
<<äö>>
4
<<?>>
<<?>>
<<?>>
<<?>>
The question marks appear because the bytes being printed are invalid single bytes as far as the UTF-8 terminal is concerned.
You can also use wide characters and wide-character printing, as shown in benjarobin's answer.
Sorry to drag this on, but I think it's important to highlight some issues. As I understand it, OS X can have UTF-8 as its default OS code page, so this answer is mostly about Windows, which uses UTF-16 under the hood and whose default ANSI code page (ACP) depends on the configured OS region.
Firstly, you can open Character Map and find that
äö
both reside in code page 1252 (Western), so this is not an MBCS issue. The only way it could be an MBCS issue is if you saved the file using an MBCS encoding (Shift-JIS, Big5, Korean, GBK).
The answer of using
setlocale( LC_ALL, "" )
does not give insight into why äö was rendered incorrectly in the Command Prompt window.
Command Prompt uses its own code pages, namely OEM code pages. There is a reference listing the available OEM code pages together with their character maps.
Going into Command Prompt and typing the command chcp will reveal the current OEM code page that the Command Prompt is using.
The Microsoft documentation describes the following behavior for setlocale(LC_ALL, ""):
setlocale( LC_ALL, "" );
Sets the locale to the default, which is the user-default ANSI code page obtained from the operating system.
You can do this manually by using chcp and passing your required code page, then running your application; it should then output the text perfectly fine. For example:
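(1252 is the Western European ANSI code page; myprogram.exe is just a placeholder for your application.)
chcp 1252
myprogram.exe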
If it were a multi-byte character set problem, then there would be a whole list of other issues:
Under MBCS, characters are encoded in either one or two bytes. In two-byte characters, the first, or "lead-byte," signals that both it and the following byte are to be interpreted as one character. The first byte comes from a range of codes reserved for use as lead bytes. Which ranges of bytes can be lead bytes depends on the code page in use. For example, Japanese code page 932 uses the range 0x81 through 0x9F as lead bytes, but Korean code page 949 uses a different range.
Looking at the situation, and given that the length was 4 instead of 2, I would say the file has been saved as UTF-8 (it could in fact have been saved as UTF-16, though you would have run into problems with the compiler sooner rather than later). You're using characters that are not within the ASCII range of 0 to 127, so UTF-8 encodes each of those code points as two bytes. Your compiler is opening the file and assuming it uses your default OS (ANSI) code page, and when parsing your string it interprets it as an ANSI string: 1 byte = 1 character.
To solve the issue under Windows, convert the UTF-8 string to UTF-16 and print it with wprintf. Currently there is no native UTF-8 support in the ANSI/MBCS stdio functions.
For Mac OS X, which has UTF-8 as its default OS code page, I would recommend following Jonathan Leffler's solution because it is more elegant. If you port it to Windows later, though, you will find you need to convert the string from UTF-8 to UTF-16 using the example below.
In either solution you will still need to change the Command Prompt code page to your operating system's code page to print the characters above ASCII correctly.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <locale.h>
#include <Windows.h>

// File saved as UTF-8, with characters outside the ASCII range
int main()
{
    // Set the C runtime locale to the default OS code page
    setlocale(LC_ALL, "");

    // äö reside outside of the ASCII range, in the Latin-1 Supplement block,
    // and thus each code point takes two bytes when saved as UTF-8
    const char* hmm = "äö";
    printf("UTF-8 file string using Windows 1252 code page read as:%s\n", hmm);
    printf("Length:%zu\n", strlen(hmm));

    // Convert the UTF-8 string to wide characters (UTF-16)
    int nLen = MultiByteToWideChar(CP_UTF8, 0, hmm, -1, NULL, 0);
    LPWSTR lpszW = new WCHAR[nLen];
    MultiByteToWideChar(CP_UTF8, 0, hmm, -1, lpszW, nLen);

    // Print it
    wprintf(L"wprintf wide character of UTF-8 string: %ls\n", lpszW);

    // Free the memory
    delete[] lpszW;

    int c = getchar();
    return 0;
}
UTF-8 file string using Windows 1252 code page read as:äö
Length:4
wprintf wide character of UTF-8 string: äö
I would check your Command Prompt font/code page to make sure it can display your OS's single-byte encoding. Note that the Command Prompt has its own code page, which differs from your text editor's.