I am trying to read Chinese characters from an input file. I have found a few questions on the subject here, but nothing that works for me or suits my needs. I am using the fread() approach from this question, but it is not working. I am running Linux.
#define UNICODE
#ifdef UNICODE
#define _UNICODE
#else
#define _MBCS
#endif
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <string.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
    FILE *infile = fopen(argv[1], "r");
    wchar_t test[2] = L"\u4E2A";
    setlocale(LC_ALL, "");
    printf("%ls\n", test); //test
    wcscpy(test, L"\u4F60"); //test
    printf("%ls\n", test); //test
    for (int i = 0; i < 5; i++){
        fread(test, 2, 2, infile);
        printf("%ls\n", test);
    }
    return 0;
}
I use the following text file to test it:
一个人
两本书
三张桌子
我喜欢一个猫
and the program outputs:
个
你
������
Anyone have any wisdom on the subject?
Edit: I have included all of my code because I'm not sure where it fails. There is some test code in there that checks I can print Unicode wchar_t values, which isn't entirely relevant to the question.
If you really need to read a UTF-8 (or rather, locale-charmap) file one code point at a time, you can use fscanf as below. But note that these are code points, not characters: a character may consist of multiple code points because of combining marks, and some code points are most definitely not printable.
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <string.h>
#include <stdlib.h>
int
main(int argc, char *argv[])
{
    FILE *infile = fopen(argv[1], "r");
    wchar_t test[2] = L"\u4E2A";
    setlocale(LC_ALL, "");
    printf("%ls\n", test); //test
    wcscpy(test, L"\u4F60"); //test
    printf("%ls\n", test); //test
    for (int i = 0; i < 5; i++) {
        fscanf(infile, "%1ls", test);
        printf("%ls\n", test);
    }
    return 0;
}
Most of the time you probably won't need the locale functionality at all, because UTF-8 generally just works if you treat it as an opaque encoding. Part of the reason is that all non-ASCII characters have every one of their component bytes in the 128..253 range (not a typo: 254 and 255 are unused). Another part is that the bytes 128..191 are always continuation bytes and the lead bytes of multi-byte sequences are 192..253, which means an error will only break one character, not the rest of the stream. (The code-point vs. character distinction above is really there to convince you that dividing UTF-8 up into "characters" probably won't do what you want.)
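As a small illustration of the "opaque encoding" point (my own sketch, not part of the answer above), you can, for example, count code points in a UTF-8 buffer simply by skipping continuation bytes:

#include <stdio.h>

/* Counts UTF-8 code points by counting every byte that is NOT a
   continuation byte (continuation bytes look like 10xxxxxx, i.e. 128..191).
   Assumes the buffer holds valid UTF-8. */
static size_t utf8_codepoints(const unsigned char *s, size_t len)
{
    size_t count = 0;
    for (size_t i = 0; i < len; i++)
        if ((s[i] & 0xC0) != 0x80) /* not a continuation byte */
            count++;
    return count;
}

int main(void)
{
    const unsigned char text[] = "一个人"; /* 3 characters, 9 bytes in UTF-8 */
    printf("%zu\n", utf8_codepoints(text, sizeof text - 1)); /* prints 3 */
    return 0;
}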
You are telling fread to read two 2-byte values in each call; however, the characters you want to read have 3-byte UTF-8 encodings. In general, you need to decode the UTF-8 stream as a whole, not in fixed-sized byte chunks.
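One way to let the C library do that decoding (a sketch of mine, assuming the file is UTF-8 and matches the locale selected by setlocale(LC_ALL, "")) is to read one wide character at a time with fgetwc():

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(int argc, char *argv[])
{
    setlocale(LC_ALL, "");              /* use the environment's (UTF-8) locale */
    FILE *infile = fopen(argv[1], "r");
    if (infile == NULL)
        return 1;

    wint_t wc;
    while ((wc = fgetwc(infile)) != WEOF) {  /* decodes multibyte input to wide chars */
        if (wc != L'\n')
            printf("%lc\n", wc);             /* one character per output line */
    }
    fclose(infile);
    return 0;
}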
Related
I am trying to print "□" and "■" using C.
I tried printf("%c", (char)254u); but it didn't work.
Any help? Thank you!
I am not sure what (char)254u is meant to do in your code. First set the locale to a UTF-8 one, then just printf the character with %lc. That is it.
#include <locale.h>
#include <stdio.h>
int main()
{
    setlocale(LC_CTYPE, "en_US.UTF-8");
    printf("%lc", u'□');
    return 0;
}
You can also print it directly, like this (provided the source file and the terminal both use UTF-8):
#include <stdio.h>
int main()
{
    printf("■");
    return 0;
}
On Windows, you can print Unicode characters by using _setmode to change the file translation mode (see the Microsoft documentation for details).
#include <fcntl.h>
#include <stdio.h>
#include <io.h>
int main(void) {
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"\x25A0\x25A1\n");
    return 0;
}
output
■□
As other answers have mentioned, you need to set a locale that uses the UTF-8 encoding defined by the Unicode Standard. Then you can print any code point with %lc. Here is a minimal code example:
#include <stdio.h>
#include <locale.h>
int main() {
    setlocale(LC_CTYPE, "en_US.UTF-8"); // Set a proper UTF-8 locale
    printf("%lc\n", 254);               // '%lc' prints the value as a wide character (wint_t) instead of a char
    return 0;
}
If you want to store it in a variable, you must use a wchar_t, which allows the number to be mapped to its Unicode symbol. This answer provides more detail.
wchar_t x = 254;
printf("%lc\n", x);
So here's what I got:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <math.h>
int main()
{
    FILE *fin;
    struct STR {
        float d;
        int x;
    } testing;

    testing.d = 11.12;
    testing.x = 31121;
    fin = fopen("output.txt", "w");
    //fprintf(fin,"%7.4f %7d\n",testing.d,testing.x);
    fwrite(&testing, sizeof(struct STR), 1, fin);
    fclose(fin);
    return 0;
}
So what happens when I compile and run? I get this:
"…ë1A‘y "
When I comment out the fwrite and use the fprintf, I get this:
"11.1200 31121"
Can someone explain this to me? I tried running it on windows and on linux, and both times the output was obscure.
Also, I guess while we're on the subject, how come the size of the text file with "11.1200 31121" is 16 bytes? I thought that integers (on a 32-bit machine) were 4 bytes each? Is it 16 bytes because there are 16 total characters in the txt file?
Thanks
You are opening the file as a text file, but you are writing raw binary data, which is not human-readable. To read that data back properly you need fread(). fprintf(), on the other hand, writes formatted text, as you can see.
So
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
#include <math.h>
struct STR {
    float d;
    int x;
} testing;

int main()
{
    FILE *file;

    testing.d = 11.12;
    testing.x = 31121;

    file = fopen("output.txt", "wb");
    if (file != NULL)
    {
        fwrite(&testing, sizeof(struct STR), 1, file);
        fclose(file);
    }

    file = fopen("output.txt", "rb");
    if ((file != NULL) && (fread(&testing, sizeof(struct STR), 1, file) == 1))
    {
        fprintf(stdout, "%f -- %d\n", testing.d, testing.x);
        fclose(file);
    }
    return 0;
}
should make it clear.
As iharob said, you're writing raw binary data that gets interpreted as nonsense characters in the current locale, not human-readable text. As for the sixteen bytes: yes, the text file is 16 bytes because fprintf wrote 16 characters ("11.1200" is 7 characters, plus a space, plus "31121" padded to a width of 7 by the %7d conversion, plus the newline). The 4-byte size of an int only matters for the in-memory representation that fwrite copies out verbatim; for your structure that would be sizeof(struct STR) bytes, typically 8 here (a 4-byte float plus a 4-byte int), although in general compilers may add padding to structures for alignment.
If you really want to serialize your data in a portable binary format, or transmit it over a network, you should both use an exact-width type such as int32_t rather than int (which has been 16, 32 or 64 bits and might have other widths than those) and also convert to a specific endianness rather than whatever the native byte order happens to be. The classic solution is htonl(). Also, write out each field separately to avoid problems with padding, or use a compiler extension to pack your structure and turn padding off.
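For instance, a sketch of that approach for the structure above (my own illustration, assuming the float uses the common 32-bit IEEE 754 representation) could look like this:

#include <arpa/inet.h>   /* htonl(); on Windows use <winsock2.h> instead */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct STR {
    float d;
    int x;
};

/* Writes each field separately as a fixed-width, big-endian value,
   so the file layout does not depend on padding or native byte order. */
static int write_str(FILE *out, const struct STR *s)
{
    uint32_t bits, net;

    memcpy(&bits, &s->d, sizeof bits);        /* raw bits of the 32-bit float */
    net = htonl(bits);
    if (fwrite(&net, sizeof net, 1, out) != 1)
        return -1;

    net = htonl((uint32_t)(int32_t)s->x);     /* exact-width, network byte order */
    if (fwrite(&net, sizeof net, 1, out) != 1)
        return -1;

    return 0;
}

int main(void)
{
    struct STR testing = { 11.12f, 31121 };
    FILE *out = fopen("output.bin", "wb");
    if (out != NULL) {
        write_str(out, &testing);
        fclose(out);
    }
    return 0;
}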
I want to store a string with characters from the extended ASCII table, and print them.
I tried:
wchar_t wp[] = L"Росси́йская Акаде́мия Нау́к ";
printf("%S", wp);
I can compile but when I run it, nothing is actually displayed in my terminal.
Could you help me please?
Edit: In response to this comment:
wprintf(L"%s", wp);
Sorry, I forgot to mention that I can only use write(); I was only using printf for my first attempts.
If you want wide characters (wchar_t units, whose width depends on the platform) as output, use the following, as suggested by Michael; note that %ls is the portable conversion specifier for a wide-string argument:
wprintf(L"%ls", wp);
If you need UTF-8 output, you have to use iconv() to convert between the wide-character and UTF-8 representations. See question 7469296 as a starting point.
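Since the edit says only write() may be used for output, one possible approach (a sketch, assuming the locale's multibyte encoding is UTF-8) is to convert the wide string with wcstombs() and hand the resulting bytes to write():

#include <locale.h>
#include <stdlib.h>
#include <unistd.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");   /* pick up the environment's (UTF-8) locale */

    wchar_t wp[] = L"Росси́йская Акаде́мия Нау́к";
    char buf[256];

    /* Convert the wide string to the locale's multibyte encoding. */
    size_t n = wcstombs(buf, wp, sizeof buf);
    if (n == (size_t)-1)
        return 1;            /* a character could not be converted */

    write(STDOUT_FILENO, buf, n);
    write(STDOUT_FILENO, "\n", 1);
    return 0;
}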
You need to call setlocale() first and use %ls in printf():
#include <stdio.h>
#include <wchar.h>
#include <locale.h>
int main(int argc, char *argv[])
{
    setlocale(LC_ALL, "");
    // setlocale(LC_ALL, "C.UTF-8"); // this also works
    wchar_t wp[] = L"Росси́йская Акаде́мия Нау́к";
    printf("%ls\n", wp);
    return 0;
}
For more about setlocale(), refer to Displaying wide chars with printf
I'm just trying to output this Unicode character, ☒, in C using MinGW. I first put it in a buffer using swprintf, and then write it to stdout using wprintf.
#include <stdio.h>
int main(int argc, char **argv)
{
    wchar_t buffer[50];
    wchar_t c = L'☒';
    swprintf(buffer, L"The character is: %c.", c);
    wprintf(buffer);
    return 0;
}
The output under Windows 8 is:
The character is: .
Other characters such as Ɣ don't work either.
What am I doing wrong?
You're using %c, but %c is for char, even when you use it from wprintf(). Use %lc, because the parameter is a wchar_t.
swprintf(buffer, L"The character is: %lc.", c);
This kind of error should normally be caught by compiler warnings, but that doesn't always happen. In particular, catching it is tricky because %c takes an int and %lc takes a wint_t, so in both cases the function receives an integer rather than a char or wchar_t object; the difference is only in how that integer is interpreted.
To output Unicode (or to be more precise UTF-16LE) to the Windows console, you have to change the file translation mode to _O_U16TEXT or _O_WTEXT. The latter one includes the BOM which isn't of interest in this case.
The file translation mode can be changed with _setmode. But it takes a file descriptor (abbreviated fd) and not a FILE *! You can get the corresponding fd from a FILE * with _fileno.
Here's an example that should work with MinGW and its variants, and also with various Visual Studio versions.
#define _CRT_NON_CONFORMING_SWPRINTFS
#include <stdio.h>
#include <io.h>
#include <fcntl.h>
int
main(void)
{
    wchar_t buffer[50];
    wchar_t c = L'Ɣ';

    _setmode(_fileno(stdout), _O_U16TEXT);
    swprintf(buffer, L"The character is: %c.", c);
    wprintf(buffer);
    return 0;
}
This works for me:
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
int main(int argc, char **argv)
{
    wchar_t buffer[50];
    wchar_t c = L'☒';

    if (!setlocale(LC_CTYPE, "")) {
        fprintf(stderr, "Cannot set locale\n");
        return 1;
    }
    swprintf(buffer, sizeof buffer / sizeof buffer[0], L"The character is %lc.", c);
    wprintf(buffer);
    return 0;
}
What I changed:
I added the wchar.h include required by the use of swprintf
I added the size (in wide characters) as the second argument of swprintf, as required by C
I changed the %c conversion specification to %lc
I changed the locale using setlocale
This FAQ explains how to use Unicode / wide characters in MinGW:
https://sourceforge.net/p/mingw-w64/wiki2/Unicode%20apps/
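As a rough sketch of what that FAQ covers (an assumption on my part: MinGW-w64 with the -municode link option, which enables a wmain entry point), a minimal wide-character program might look like this:

/* Build with: gcc -municode example.c */
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
#include <wchar.h>

int wmain(int argc, wchar_t *argv[])
{
    /* Switch stdout to UTF-16 text mode so the console receives wide output. */
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"The character is: %lc.\n", L'☒');
    return 0;
}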
I have been given this school project. I have to alphabetically sort a list of items by Czech rules. Before I dig deeper, I decided to test it on a 16 by 16 matrix, so I did this:
typedef struct {
    wint_t **field;
} LIST;
...
setlocale(LC_CTYPE, NULL);
...
list->field = (wint_t **)malloc(16 * sizeof(wint_t *));
for (int i = 0; i < 16; i++)
    list->field[i] = (wint_t *)malloc(16 * sizeof(wint_t));
In another function I am trying to assign a character, like this:
sorted->field[15][15] = L'C';
wprintf(L"%c\n",sorted->field[15][15]);
Everything is fine. Char is printed. But when I try to change it to
sorted->field[15][15] = L'Č';
Xcode says: "Extraneous characters in wide character constant ignored." and the printing part is skipped. The main.c file is saved as UTF-8. If I try to print this:
printf("ěščřžýááíé\n");
It prints out as written. I am not sure whether I should allocate the memory using wint_t or wchar_t, or whether I am doing it right at all. I tested it with both, but neither works.
clang seems to support entering arbitrary byte sequences into wide strings and character constants with the \x notation:
wchar_t c = L'\x2126';
This compiles without notice.
Edit: Adapting what I found on Wikipedia about wide characters, the following works for me:
#include <stdio.h>
#include <wchar.h>
#include <stdlib.h>
#include <locale.h>
int main(void)
{
    setlocale(LC_ALL, "");
    wchar_t myChar1 = L'\x2126';
    wchar_t myChar2 = 0x2126; // code point of the character Ω (U+2126)
    wprintf(L"This is char: %lc \n", myChar1);
    wprintf(L"This is char: %lc \n", myChar2);
}
and prints nice Ω characters in my terminal. Make sure that your terminal is able to interpret UTF-8 output.
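Applied to the original problem, a sketch of the same idea (assuming the wide execution character set is Unicode, as it is with clang) is to avoid the literal Č in the source and use its code point U+010C instead:

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");      /* note: "" rather than NULL, which only queries the locale */
    wchar_t c1 = L'\u010C';     /* Č written as a universal character name */
    wchar_t c2 = 0x010C;        /* the same code point written numerically */
    wprintf(L"%lc %lc\n", (wint_t)c1, (wint_t)c2);
    return 0;
}

The same value could be assigned to sorted->field[15][15] in place of the literal.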