I discovered an interesting problem when processing UTF-8 strings containing non-ASCII characters with C standard library formatting functions like sprintf():
The functions of the printf() family are not aware of UTF-8 and process everything based on the number of bytes, not characters. Therefore the formatting is incorrect.
Simple example:
#include <stdio.h>
#include <string.h>
int main(int argc, char *argv[])
{
    const char* testMsg = "Tääääßt";
    char buf[1024];
    int len;
    sprintf(buf, "|%7.7s|", testMsg);
    len = strlen(buf);
    printf("Result=\"%s\", len=%d\n", buf, len);
    return 0;
}
The result is:
Result="|Täää|", len=9
Most probably some of you will recommend converting the application from char to wchar_t and using fwprintf(), etc., but that's not feasible because of the huge existing code base. I could imagine writing a wrapper that uses these functions internally, but this would be tricky and very inefficient.
So the best solution would be a UTF-8-aware replacement for the formatting functions of the Standard C Library.
Currently I'm working on QNX 6.4, but replies for other operating systems, e.g. Linux, are also very welcome.
Well, once you ask printf to do intelligent padding of Unicode characters, you run into major problems. As they say,
w͢͢͝h͡o͢͡ ̸͢k̵͟n̴͘ǫw̸̛s͘ ̀́w͘͢ḩ̵a҉̡͢t ̧̕h́o̵r͏̵rors̡ ̶͡͠lį̶e͟͟ ̶͝in͢ ͏t̕h̷̡͟e ͟͟d̛a͜r̕͡k̢̨ ͡h̴e͏a̷̢̡rt́͏ ̴̷͠ò̵̶f̸ u̧͘ní̛͜c͢͏o̷͏d̸͢e̡͝?͞
How many Unicode characters are in Tääääßt? Well, it could be anywhere from 7 to 11, depending on how it's encoded. Each ä can be written as U+00E4, which is one character, or it could be written as U+0061 U+0308, which is two characters. So your next hope is to count grapheme clusters. (No, normalization won't make the problem go away.)
But, how wide is a grapheme cluster? Obviously, a is one column wide. U+200B should be zero columns wide, it's a "zero-width" space. Should each ひらがな be two columns wide? They usually are in terminal emulators. What happens when you format ひらがな as 7 columns, do you get "ひらが ", which adds a space, or do you get "ひらが", which is only 6 columns?
If you cut something up which mixes RTL and LTR text, should you reset the text direction afterwards? What are you going to do? (Some terminal emulators, such as Apple's, support a mixture of left-to-right and right-to-left text.)
What is your goal by truncating text? Are you trying to show the user a string in limited space, or are you trying to write a format that uses fixed-width fields?
Basically, if you want to cut Unicode text into chunks, you shouldn't be doing it with something as simple as printf (or wprintf, which is quite possibly worse). Use LibICU to iterate over the breaks you want. Writing a UTF-8-aware version of printf is asking for all sorts of trouble that you don't want.
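For a concrete idea of what that looks like, here is a rough sketch of counting grapheme clusters with ICU's C API (the function name and fixed buffer size are mine, and real code would handle long input and errors more carefully; ICU's break iterators work on UTF-16, so the UTF-8 input is converted first):
#include <unicode/ustring.h>
#include <unicode/ubrk.h>

/* Sketch: count grapheme clusters ("user-perceived characters") in a UTF-8 string. */
int count_graphemes(const char *utf8)
{
    UErrorCode status = U_ZERO_ERROR;
    UChar buf[256];
    int32_t len = 0;

    /* Convert the UTF-8 input to ICU's UTF-16 representation. */
    u_strFromUTF8(buf, 256, &len, utf8, -1, &status);
    if (U_FAILURE(status))
        return -1;

    UBreakIterator *bi = ubrk_open(UBRK_CHARACTER, "", buf, len, &status);
    if (U_FAILURE(status))
        return -1;

    int count = 0;
    ubrk_first(bi);                     /* boundary before the first cluster */
    while (ubrk_next(bi) != UBRK_DONE)  /* every further boundary closes one cluster */
        count++;
    ubrk_close(bi);
    return count;
}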
The following C99 code snippet defines the function u8printf, where format specifiers such as %10s count 10 UTF-8 code points, that is, characters rather than bytes. Don't forget to set the locale with setlocale(LC_ALL,"") somewhere before this routine is called. This works because the wprintf family operates on wchar_t internally. You can define u8fprintf and u8sprintf in a similar way. If you want to write this without C99 variable length arrays, then a suitable combination of malloc/free is also possible.
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int u8printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    /* Convert the UTF-8 format string to a wide string. */
    size_t n = mbstowcs(NULL, fmt, 0);
    if (n == (size_t)-1) {
        va_end(ap);
        return -1;
    }
    wchar_t wfmt[n + 1];
    mbstowcs(wfmt, fmt, n + 1);
    /* Try successively larger buffers until the formatted output fits. */
    for (int m = 128; m <= 32768; m *= 2) {
        wchar_t wbuf[m];
        va_list aq;
        va_copy(aq, ap);                 /* the argument list may be walked more than once */
        int r = vswprintf(wbuf, m, wfmt, aq);
        va_end(aq);
        if (r >= 0) {
            /* Convert back to multibyte (at most 4 bytes per character for UTF-8). */
            char buf[m * 4];
            wcstombs(buf, wbuf, m * 4);
            fputs(buf, stdout);
            va_end(ap);
            return r;
        }
    }
    va_end(ap);
    return -1;
}
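Hypothetical usage of u8printf() as defined above, assuming a UTF-8 locale:
#include <locale.h>
int main(void)
{
    setlocale(LC_ALL, "");              /* must happen before u8printf() is called */
    u8printf("|%7.7s|\n", "Tääääßt");   /* prints |Tääääßt| : seven characters, not seven bytes */
    return 0;
}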
I want to use the √ symbol in the program written below.
#include <stdio.h>
int main(void){
char a='√';
if (a=='√'){
printf("Working");
}
else{
printf("Not working");
}
}
√ is not an ASCII character, which is why it's not working. But I want to know how to make it work.
Thanks in advance.
There are two different things going on here to be aware of:
The source C file itself may not be able to contain this character correctly.
The char type within the semantics of the actual program does not support this character, either.
As to the first issue, it depends on your platform (etc) but being conservative with C source is most portable, which means sticking to ASCII characters only within the code file. That means, e.g., in comments as well as within meaningful code. That said, lots of platforms will allow and support Unicode characters inside the source files.
Regarding the second, a char is too small for holding arbitrary characters: it is limited to an octet, which means Unicode characters with values above 0xFF just don't fit inside of it. Some non-ASCII characters above 0x7F can be represented in a platform-dependent way (Windows code pages?), but in this case I would treat this as a string, using a Unicode escape sequence for this character: "\u221A".
char * sqrt = "\u221A";
if (strcmp(sqrt, "\u221A") == 0) {
printf("Working");
} else {
printf("Not working");
}
Heads-up that C strings (char*) are not really designed around non-ASCII characters either, so in this case you end up embedding the UTF-8 encoded representation of the character (which is three bytes long) inside the char string. This works, preserves the value, and the comparison works, but if you're going to be working with Unicode more generally...
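To make the byte count concrete, a tiny check (this assumes the execution character set is UTF-8, so the escape expands to the three bytes 0xE2 0x88 0x9A):
#include <stdio.h>
#include <string.h>
int main(void)
{
    printf("%zu\n", strlen("\u221A"));  /* prints 3 with a UTF-8 execution character set */
    return 0;
}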
If your platform supports "wide characters" (wchar_t or unichar or similar) that can hold Unicode characters, then you can use those types to hold this character, and do direct equality comparisons like you were doing:
wchar_t sqrt = L'\u221A';
if (sqrt == L'\u221A') {
...
(FYI: be aware that these wide character types may not be wide enough for arbitrary Unicode code points on your platform, so they might work for the square root character but not, say, an emoji.)
Finally, for the sake of completeness, I feel honor-bound to admit that given a contemporary development environment/toolchain and target platform, you could probably get away with using the explicit character in a widechar literal like so:
wchar_t sqrt = L'√';
if (sqrt == L'√') {
....
But I'm old-fashioned, this feels sketchy, and I don't recommend it. :)
As we know, different encodings map different byte representations to the same characters. Using setlocale we can specify the encoding of strings that are read from input, but does this apply to string literals as well? I'd find this surprising since these are fixed at compile time!
This matters for tasks as simple as, for example, determining whether a string read from input contains a specific character. When reading strings from input it seems sensible to set the locale to the user's locale (setlocale(LC_ALL, "");) so that the string is read and processed correctly. But when we're comparing this string with a character literal, won't problems arise due to mismatched encoding?
In other words: The following snippet seems to work for me. But doesn't it work only because of coincidence? Because, for example, the source code happened to be saved in the same encoding that is used on the machine during runtime?
#include <stdio.h>
#include <wchar.h>
#include <stdlib.h>
#include <locale.h>
int main()
{
setlocale(LC_ALL, "");
// Read line and convert it to wide string so that wcschr can be used
// So many lines! And that's even though I'm omitting the necessary
// error checking for brevity. Ah I'm also omitting free's
char *s = NULL; size_t n = 0;
getline(&s, &n, stdin);
mbstate_t st = {0}; const char* cs = s;
size_t wn = mbsrtowcs(NULL, &cs, 0, &st);
wchar_t *ws = malloc((wn+1) * sizeof(wchar_t));
st = (mbstate_t){0};
mbsrtowcs(ws, &cs, (wn+1), &st);
int contains_guitar = (wcschr(ws, L'🎸') != NULL);
if(contains_guitar)
printf("Let's rock!\n");
else
printf("Let's not.\n");
return 0;
}
How to do this correctly?
Using setlocale we can specify the encoding of strings that are read from input, but does this apply to string literals as well?
No. String literals use the execution character set, which is defined by your compiler at compile time.
The execution character set does not have to be the same as the source character set, the character set used in the source code. The C compiler is responsible for the translation and should have options for choosing/defining them (for example, GCC has -finput-charset= and -fexec-charset=). The default depends on the compiler, but on Linux and most current POSIXy systems it is usually UTF-8.
The following snippet seems to work for me. But doesn't it work only because of coincidence?
The example works because the character set of your locale, the source character set, and the execution character set used when the binary was constructed, all happen to be UTF-8.
How to do this correctly?
Two options. One is to use wide characters and string literals. The other is to use UTF-8 everywhere.
For wide input and output, see e.g. this example in another answer here.
Do note that getwline() and getwdelim() are not in POSIX.1, but in C11 Annex K. This means they are optional, and as of this writing, not widely available at all. Thus, a custom implementation around fgetwc() is recommended instead. (One based on fgetws(), wcslen(), and/or wcscspn() will not be able to handle embedded nuls, L'\0', correctly.)
In a typical wide I/O program, you only need mbstowcs() to convert command-line arguments and environment variables to wide strings.
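For instance, converting one command-line argument to a freshly allocated wide string could look roughly like this (the helper name is mine; call setlocale(LC_ALL, "") before using it):
#include <stdlib.h>

/* Sketch: convert a multibyte (e.g. UTF-8) argument to a wide string; NULL on failure. */
wchar_t *arg_to_wide(const char *arg)
{
    size_t n = mbstowcs(NULL, arg, 0);          /* measure the converted length first */
    if (n == (size_t)-1)
        return NULL;                            /* invalid multibyte sequence */
    wchar_t *w = malloc((n + 1) * sizeof *w);
    if (w != NULL)
        mbstowcs(w, arg, n + 1);
    return w;                                   /* caller frees */
}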
Using UTF-8 everywhere is also a perfectly valid practical approach, at least if it is well documented, so that users know the program inputs and outputs UTF-8 strings, and developers know to ensure their C compiler uses UTF-8 as the execution character set when compiling those binaries.
Your program can even use e.g.
if (!setlocale(LC_ALL, ""))
fprintf(stderr, "Warning: Your C library does not support your current locale.\n");
if (strcmp("UTF-8", nl_langinfo(CODESET)))
fprintf(stderr, "Warning: Your locale does not use the UTF-8 character set.\n");
to verify that the current locale uses UTF-8. (This relies on nl_langinfo() from <langinfo.h>, which is POSIX, not standard C.)
I have used both approaches, depending on the circumstances. It is difficult to say which one is more portable in practice, because as usual, both work just fine on non-Windows OSes without issues.
If you're willing to assume UTF-8,
strstr(s,"🎸")
Or:
strstr(s,u8"🎸")
The latter avoids some assumptions but requires a C11 compiler. If you want the best of both and can sacrifice readability:
strstr(s,"\360\237\216\270")
I am trying to make a simple ancient-Greek-to-modern-Greek converter, in C, by changing the tones of the vowels. For example, the user types a text in Greek which contains the character ῶ (Unicode U+1FF6), and the program converts it into ώ (Unicode U+1F7D). Greek characters are not supported by C, so I don't know how to make it work. Any ideas?
Assuming you use a sane operating system (meaning, not Windows), this is very easy to achieve using C99/C11 locale and wide character support. Consider filter.c:
#include <stdlib.h>
#include <locale.h>
#include <wchar.h>
#include <stdio.h>
wint_t convert(const wint_t wc)
{
switch (wc) {
case L'ῶ': return L'ώ';
default: return wc;
}
}
int main(void)
{
wint_t wc;
if (!setlocale(LC_ALL, "")) {
fprintf(stderr, "Current locale is unsupported.\n");
return EXIT_FAILURE;
}
if (fwide(stdin, 1) <= 0) {
fprintf(stderr, "Standard input does not support wide characters.\n");
return EXIT_FAILURE;
}
if (fwide(stdout, 1) <= 0) {
fprintf(stderr, "Standard output does not support wide characters.\n");
return EXIT_FAILURE;
}
while ((wc = fgetwc(stdin)) != WEOF)
fputwc(convert(wc), stdout);
return EXIT_SUCCESS;
}
The above program reads standard input, converts each ῶ into a ώ, and outputs the result.
Note that wide character strings and characters have an L prefix; L'ῶ' is a wide character constant. These are only in Unicode if the execution character set (the character set the code is compiled for) is Unicode, and that depends on your development environment. (Fortunately, outside of Windows, UTF-8 is pretty much a standard nowadays -- and that is a good thing -- so code like the above Just Works.)
On POSIXy systems (like Linux, Android, Mac OS, BSDs), you can use the iconv() facilities to convert from any input character set to Unicode, do the conversion there, and finally convert back to any output character set. Unfortunately, the question is not tagged posix, so that is outside this particular question.
The above example uses a simple switch/case statement. If there are many replacement pairs, one could use e.g.
typedef struct {
wint_t from;
wint_t to;
} widepair;
static widepair replace[] = {
{ L'ῶ', L'ώ' },
/* Others? */
};
#define NUM_REPLACE (sizeof replace / sizeof replace[0])
and at runtime, sort replace[] (using qsort() and a function that compares the from elements), and use binary search to quickly determine whether a wide character is to be replaced (and if so, by which wide character). Because this is an O(log2 N) operation, with N being the number of pairs, and it utilizes the cache well, even thousands of replacement pairs are not a problem this way. (And of course, you can build the replacement array at runtime just as well, even from user input or command-line options.)
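A sketch of that lookup, reusing the replace[] table above (the comparator and init function names are mine); this convert() would be a drop-in alternative to the switch-based one in filter.c:
#include <stdlib.h>

/* Order widepair entries by their 'from' member, for qsort()/bsearch(). */
static int widepair_cmp(const void *a, const void *b)
{
    const widepair *pa = a, *pb = b;
    return (pa->from > pb->from) - (pa->from < pb->from);
}

/* Call once at startup, before the first lookup. */
static void init_replacements(void)
{
    qsort(replace, NUM_REPLACE, sizeof replace[0], widepair_cmp);
}

/* O(log N) lookup: return the replacement, or wc itself if there is none. */
static wint_t convert(const wint_t wc)
{
    widepair key = { wc, 0 };
    const widepair *hit = bsearch(&key, replace, NUM_REPLACE,
                                  sizeof replace[0], widepair_cmp);
    return hit ? hit->to : wc;
}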
For Unicode characters, we could use a uint32_t map_to[0x110000]; to directly map each code point to another Unicode code point, but because we do not know whether wide characters are Unicode or not, we cannot do that; we do not know the code range of the wide characters until after compile time. Of course, we can do a multi-stage compilation, where a test program generates the replace[] array shown above, and outputs their codes in decimal; then do some kind of auto-grouping or clustering, for example bit maps or hash tables, to do it "even faster".
However, in practice it usually turns out that the I/O (reading and writing the data) takes more real-world time than the conversion itself. Even when the conversion is the bottleneck, the conversion rate is sufficient for most humans. (As an example, when compiling C or C++ code with the GNU utilities, the preprocessor first converts the source code to UTF-8 internally.)
Okay, here's some quick advice. I wouldn't use C, because Unicode is not well supported (yet).
A better language choice would be Python, Java, ..., anything with good Unicode support.
I'd write a utility that reads from standard input and writes to standard output. This makes it easy to use from the command line and in scripts.
I might be missing something but it's going to be something like this (in pseudo code):
while ((inCharacter = getCharacterFromStandardInput()) != EOF)
{
    switch (inCharacter)
    {
        case 'ῶ': outCharacter = 'ώ'; break
        ...
        default: outCharacter = inCharacter
    }
    writeCharacterToStandardOutput(outCharacter)
}
You'll also need to select & handle the format: UTF-8/16/32.
That's it. Good luck!
I am developing a cross platform C (C89 standard) application which has to deal with UTF8 text. All I need is basic string manipulation functions like substr, first, last etc.
Question 1
Is there a UTF8 library that has the above functions implemented? I have already looked into ICU and it is too big for my requirement. I just need to support UTF8.
I have found a UTF-8 decoder here. The following function prototypes are from that code.
void utf8_decode_init(char p[], int length);
int utf8_decode_next();
The initialization function takes a character array, but utf8_decode_next() returns int. Why is that? How can I print the characters this function returns using standard functions like printf? The function is dealing with character data, so how can that be assigned to an integer?
If the above decoder is not good for production code, do you have a better recommendation?
Question 2
I also got confused by reading articles that say that for Unicode you need to use wchar_t. From my understanding this is not required, as normal C strings can hold UTF-8 values. I have verified this by looking at the source code of SQLite and git. SQLite has the following typedef.
typedef unsigned char u8;
Is my understanding correct? Also why is unsigned char required?
The utf8_decode_next() function returns the next Unicode code point. Since Unicode is a 21-bit character set, it cannot return anything smaller than an int, and it can be argued that technically it should be a long, since an int could be a 16-bit quantity. Effectively, the function returns you a UTF-32 character.
You would need to look at the C94 wide character extensions to C89 to print wide characters (wprintf(), <wctype.h>, <wchar.h>). However, wide characters alone are not guaranteed to be UTF-8 or even Unicode. You most probably cannot print the characters from utf8_decode_next() portably, but it depends on what your portability requirements are. The wider the range of systems you must port to, the less chance there is of it all working simply. To the extent you can write UTF-8 portably, you would send the UTF-8 string (not an array of the UTF-32 characters obtained from utf8_decode_next()) to one of the regular printing functions. One of the strengths of UTF-8 is that it can be manipulated by code that is largely ignorant of it.
You need to understand that a 4-byte wchar_t can hold any Unicode codepoint in a single unit, but that UTF-8 can require between one and four 8-bit bytes (1-4 units of storage) to hold a single Unicode codepoint. On some systems, I believe wchar_t can be a 16-bit (short) integer. In this case, you are forced into using UTF-16, which encodes Unicode codepoints outside the Basic Multilingual Plane (BMP, code points U+0000 .. U+FFFF) using two storage units and surrogates.
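For reference, the UTF-16 encoding of a code point outside the BMP is a fixed arithmetic mapping onto a surrogate pair; a small sketch (the function name is mine):
#include <stdint.h>

/* Split a code point above U+FFFF into a UTF-16 lead/trail surrogate pair. */
static void to_surrogates(uint32_t cp, uint16_t *lead, uint16_t *trail)
{
    cp -= 0x10000;                              /* 20 significant bits remain */
    *lead  = 0xD800 | (uint16_t)(cp >> 10);     /* high 10 bits */
    *trail = 0xDC00 | (uint16_t)(cp & 0x3FF);   /* low 10 bits  */
}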
Using unsigned char makes life easier; plain char is often signed. Having negative numbers makes life more difficult than it need be (and, believe me, it is difficult enough without adding complexity).
You do not need any special library routines for character or substring search with UTF-8. strstr does everything you need. That's the whole point of UTF-8 and the design requirements it was invented to meet.
GLib has quite a few relevant functions, and can be used independent of GTK+.
There are over 100,000 characters in Unicode. There are 256 possible values of char in most C implementations.
Hence, UTF-8 uses more than one char to encode each character, and the decoder needs a return type which is larger than char.
wchar_t is a larger type than char (well, it doesn't have to be larger, but it usually is). It represents the characters of the implementation-defined wide character set. On some implementations (most importantly, Windows, which uses surrogate pairs for characters outside the "basic multilingual plane"), it still isn't big enough to represent any Unicode character, which presumably is why the decoder you reference uses int.
You can't print wide characters using printf, because it deals in char. wprintf deals in wchar_t, so if the wide character set is unicode, and if wchar_t is int on your system (as it is on linux), then wprintf and friends will print the decoder output without further processing. Otherwise it won't.
In any case, you cannot portably print arbitrary unicode characters, because there's no guarantee that the terminal can display them, or even that the wide character set is in any way related to Unicode.
SQLite has probably used unsigned char so that:
they know the signedness - it's implementation-defined whether char is signed or not.
they can do right-shifts and assign out-of-range values, and get consistent and defined results across all C implementations. Implementations have more freedom in how signed char behaves than unsigned char.
Normal C strings are fine for storing UTF-8 data, but you can't easily search for a substring in your UTF-8 string. This is because a character encoded as a sequence of bytes using the UTF-8 encoding could be anywhere from one to four bytes long, depending on the character; i.e. a "character" is not equivalent to a "byte" for UTF-8 like it is for ASCII.
In order to do substring searches etc. you will need to decode it to some internal format that is used to represent Unicode characters and then do the substring search on that. Since there are far more than 256 Unicode characters, a byte (or char) is not enough. That's why the library you found uses ints.
As for your second question, it's probably just because it does not make sense to talk about negative characters, so they may as well be specified as "unsigned".
I have implemented substr & length functions which support UTF-8 characters. This code is a modified version of what SQLite uses.
The following macro advances the input pointer past one character, skipping any multi-byte sequence. The if condition checks whether the byte just consumed starts a multi-byte sequence, and the inner loop increments input until it finds the next head byte.
#define SKIP_MULTI_BYTE_SEQUENCE(input) { \
if( (*(input++)) >= 0xc0 ) { \
while( (*input & 0xc0) == 0x80 ){ input++; } \
} \
}
substr and length are implemented using this macro.
typedef unsigned char utf8;
substr
void substr(const utf8 *string,
            int start,          /* 1-based index of the first character */
            int len,            /* number of characters to copy */
            utf8 **substring)   /* caller-supplied output buffer */
{
    int bytes, i;
    const utf8 *str2;
    utf8 *output;

    /* Skip 'start - 1' characters to find the beginning of the substring. */
    --start;
    while( *string && start ) {
        SKIP_MULTI_BYTE_SEQUENCE(string);
        --start;
    }

    /* Walk 'len' characters forward to find where the substring ends. */
    for(str2 = string; *str2 && len; len--) {
        SKIP_MULTI_BYTE_SEQUENCE(str2);
    }

    /* Copy the bytes between the two pointers and terminate the result. */
    bytes = (int) (str2 - string);
    output = *substring;
    for(i = 0; i < bytes; i++) {
        *output++ = *string++;
    }
    *output = '\0';
}
length
int length(const utf8 *string)
{
int len;
len = 0;
while( *string ) {
++len;
SKIP_MULTI_BYTE_SEQUENCE(string);
}
return len;
}
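A usage sketch for the two routines above (the caller supplies the output buffer; the buffer size and the test string are just for illustration, and the ä literal assumes a UTF-8 execution character set):
#include <stdio.h>

int main(void)
{
    const utf8 *text = (const utf8 *) "Tääääßt";
    utf8 out[64];
    utf8 *p = out;

    substr(text, 2, 3, &p);        /* characters 2..4 of the string: "äää" */
    printf("substr = \"%s\", length = %d\n", (char *) out, length(out));
    return 0;
}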
I'm writing a small application in C that reads a simple text file and then outputs the lines one by one. The problem is that the text file contains special characters like Æ, Ø and Å, among others. When I run the program in a terminal, the output for those characters is represented with a "?".
Is there an easy fix?
First things first:
Read the input into a buffer
Use libiconv or similar to obtain wchar_t data from UTF-8, and use the wide character handling functions such as wprintf()
Use the wide character functions in C! Most file/output handling functions have a wide-character variant
Ensure that your terminal can handle UTF-8 output. Having the correct locale set up and manipulating the locale data can automate a lot of the file opening and conversion for you ... depending on what you are doing.
Remember that the width of a code point or character in UTF-8 is variable. This means you can't just seek to a byte and begin reading like with ASCII, because you might land in the middle of a code point. Good libraries can do this in some cases.
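As a sketch of that last point (the function name is mine): after seeking to an arbitrary byte offset you can resynchronize by skipping UTF-8 continuation bytes, which always match the bit pattern 10xxxxxx:
#include <stdio.h>

/* After an arbitrary fseek(), advance to the start of the next code point. */
static void resync_utf8(FILE *f)
{
    int c;
    while ((c = fgetc(f)) != EOF && (c & 0xC0) == 0x80)
        ;                       /* continuation byte: keep scanning */
    if (c != EOF)
        ungetc(c, f);           /* lead byte (or ASCII): push it back */
}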
Here is some code (not mine) that demonstrates some usage of UTF-8 file reading and wide character handling in C.
#include <stdio.h>
#include <wchar.h>
int main()
{
FILE *f = fopen("data.txt", "r, ccs=UTF-8"); /* note: ccs= in the mode string is a glibc/MSVC extension */
if (!f)
return 1;
for (wint_t c; (c = fgetwc(f)) != WEOF;)
printf("%04X\n", c);
fclose(f);
return 0;
}
Make sure you're not accidentally dropping any bytes; some UTF-8 characters are more than one byte in length (that's sort of the point), and you need to keep them all.
It can be useful to print the contents of the buffer as hex, so you can inspect which bytes are actually read:
static void print_buffer(const char *buffer, size_t length)
{
    size_t i;
    for(i = 0; i < length; i++)
        printf("%02x ", (unsigned char) buffer[i]); /* cast via unsigned char to avoid sign extension */
    putchar('\n');
}
You can do this after loading a very short file, containing just a few characters.
Also make sure the terminal is set to the proper encoding, so it interprets your characters as UTF-8.
Probably your text file is ISO-8859-1 encoded but your terminal is UTF-8. This kind of mismatch is a standard problem when dealing with byte-oriented text handling; other C programs (such as the standard ‘cat’ and ‘more’ commands) will do the same thing, and it isn't generally considered an error or something that needs to be fixed.
If you want to operate on a Unicode character level instead of bytes, that's fine, but you'll need to use wchar_t as your character type instead of char throughout your program, and provide switches for the user to specify what the incoming file encoding actually is. (Whilst it is sometimes possible to guess, it's not very reliable.)
I don't know if it could help, but if you're sure that the encodings of the terminal and the input file are the same, you can try calling setlocale():
#include <locale.h>
…
setlocale(LC_CTYPE, "");