What's the use of universal characters on a POSIX system?

In C one can pass Unicode characters to printf() like this:
printf("some unicode char: %c\n", "\u00B1");
But the problem is that on POSIX-compliant systems `char` is always 8 bits, and most UTF-8 characters, such as the one above, are wider than that: they don't fit into a char, and as a result nothing is printed on the terminal. I can do this to achieve the same effect, however:
printf("some unicode char: %s\n", "\u00B1");
The %s placeholder is expanded automatically and the Unicode character is printed on the terminal. Also, the standard says:
If the hexadecimal value for a universal character name is less than 0x20 or in the range 0x7F-0x9F (inclusive), or if the universal character name designates a character in the basic source character set, then the program is ill-formed.
When I do this:
printf("letter a: %c\n", "\u0061");
gcc says:
error: \u0061 is not a valid universal character
So this technique is also unusable for printing ASCII characters. In this Wikipedia article http://en.wikipedia.org/wiki/Character_(computing)#cite_ref-3 it says:
A char in the C programming language is a data type with the size of exactly one byte, which in turn is defined to be large enough to contain any member of the basic execution character set and UTF-8 code units.
But is this doable on POSIX systems?

Use of universal characters in byte-based strings depends on the compile-time and run-time character encodings matching, so it's generally not a good idea except in certain situations. However, they work very well in wide string and wide character literals: printf("%ls", L"\u00B1"); or printf("%lc", L'\u00B1'); will print U+00B1 in the correct encoding for your locale.
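For instance, a minimal sketch of that approach, assuming the environment provides a UTF-8 (or otherwise U+00B1-capable) locale:

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void) {
    /* Adopt the user's locale so wide characters are converted to the
       locale's encoding (e.g. UTF-8) on output. */
    setlocale(LC_ALL, "");

    printf("wide string:    %ls\n", L"\u00B1");           /* PLUS-MINUS SIGN */
    printf("wide character: %lc\n", (wint_t) L'\u00B1');
    return 0;
}

Without the setlocale() call the program runs in the "C" locale, where the conversion of U+00B1 to a byte sequence may fail, which ties in with the questions below.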

Related

Where does the C code encode the bytes in memory to a specific charset in Linux?

In the Linux documentation:
LC_CTYPE
This category determines the interpretation of byte sequences as characters (e.g., single versus multibyte characters), character classifications (e.g., alphabetic or digit), and the behavior of character classes. On glibc systems, this category also determines the character transliteration rules for iconv(1) and iconv(3). It changes the behavior of the character handling and classification functions, such as isupper(3) and toupper(3), and the multibyte character functions such as mblen(3) or wctomb(3).
However, looking at glibc's source code for putwchar:
/* _IO_putwc_unlocked */
# define _IO_putwc_unlocked(_wch, _fp) \
  (__glibc_unlikely ((_fp)->_wide_data == NULL \
                     || ((_fp)->_wide_data->_IO_write_ptr \
                         >= (_fp)->_wide_data->_IO_write_end)) \
   ? __woverflow (_fp, _wch) \
   : (wint_t) (*(_fp)->_wide_data->_IO_write_ptr++ = (_wch)))
/* putwchar */
wint_t
putwchar (wchar_t wc)
{
  wint_t result;
  _IO_acquire_lock (stdout);
  result = _IO_putwc_unlocked (wc, stdout);
  _IO_release_lock (stdout);
  return result;
}
There is no code here that uses the locale set with setlocale(), which confuses me. When and where are the bytes stored in memory converted to the specific charset set by setlocale()?
Update:
#include <wchar.h>

int main() {
    wchar_t wc = L'\x00010437';
    putwchar(wc); // prints nothing
}

#include <locale.h>
#include <wchar.h>

int main() {
    wchar_t wc = L'\x00010437';
    setlocale(LC_CTYPE, "");
    putwchar(wc); // prints '𐐷'
}
In the two cases above, setlocale() affects the character displayed on the screen. I want to know at what point the bytes stored in memory are converted so that they represent the specific character '𐐷'?
Update2:
Maybe I have found the source code that converts the in-memory wide data into the specific charset. Here is the code snippet in _IO_wdo_write() in glibc/libio/wfileops.c:
/* Now convert from the internal format into the external buffer. */
result = (*cc->__codecvt_do_out) (cc, &fp->_wide_data->_IO_state,
                                  data, data + to_do, &new_data,
                                  write_ptr,
                                  buf_end,
                                  &write_ptr);
Expanding on my comment:
Where does the C code encode the bytes in memory to a specific charset in Linux?
To the best of my knowledge, there isn't any. A charset, a.k.a. character encoding, is a mapping from sequences of characters -- in a rather abstract sense of that term -- to sequences of bytes. If you are looking at bytes in memory that represent character data then, perforce, you are looking at an already-encoded representation. For a C program, they will normally be encoded according to the execution character set of the C implementation.
In particular, to the extent that C "character" and "wide character" types actually represent characters, they contain encoded character data. There is normally no conversion needed or performed when such data are read or written, which is why you don't see it in the glibc source.
It is of course possible for a program to encode characters in some other encoding and store the resulting bytes in memory, via iconv(3), for example. It is then the program's responsibility to ensure that they are handled appropriately. As for mapping encoded byte sequences to a visual representation -- "glyphs" -- this is a function performed by the program that displays or prints them. One way that is done is simply by selection of a font with appropriate mappings from byte sequences to glyphs.
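That said, the locale-dependent encoding step the updates are looking for can be observed from ordinary user code as well: the public wcrtomb(3) interface performs the same kind of wide-to-multibyte conversion that the stream's __codecvt_do_out callback performs internally. A small sketch, assuming a 32-bit wchar_t and a UTF-8 locale:

#include <limits.h>   /* MB_LEN_MAX */
#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_CTYPE, "");            /* use the environment's charset */

    wchar_t wc = L'\x00010437';         /* DESERET SMALL LETTER YEE, as in the question */
    char buf[MB_LEN_MAX];
    mbstate_t st;
    memset(&st, 0, sizeof st);

    size_t n = wcrtomb(buf, wc, &st);   /* the locale-dependent encoding step */
    if (n == (size_t) -1) {
        perror("wcrtomb");
        return 1;
    }
    printf("encoded to %zu byte(s):", n);
    for (size_t i = 0; i < n; i++)
        printf(" %02X", (unsigned char) buf[i]);
    putchar('\n');
    return 0;
}

In a UTF-8 locale this prints the four bytes F0 90 90 B7; in the plain "C" locale the conversion typically fails, which matches the behaviour the updates observed with putwchar().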

How do I index a (not all ascii) utf8 string in C?

I want to index the characters in a UTF-8 string which does not necessarily contain only ASCII characters. I want the same kind of behavior I get in JavaScript:
> str = "lλך" // i.e. Latin ell, Greek lambda, Hebrew lamedh
'lλך'
> str[0]
'l'
> str[1]
'λ'
> str[2]
'ך'
Following the advice of UTF-8 Everywhere, I am representing my mixed character-length string just as any other string in C, and not using wchars.
The problem is that, in C, one cannot access the 16th character of a string: only the 16th byte. Because λ is encoded with two bytes in UTF-8, I have to access the 16th and 17th bytes of the string in order to print out one λ.
For reference, the output of:
#include <stdio.h>

int main () {
    char word_with_greek[] = "this is lambda:_λ";
    printf("%s\n", word_with_greek);
    printf("The 0th character is: %c\n", word_with_greek[0]);
    printf("The 15th character is: %c\n", word_with_greek[15]);
    printf("The 16th character is: %c%c\n", word_with_greek[16], word_with_greek[17]);
    return 0;
}
is:
this is lambda:_λ
The 0th character is: t
The 15th character is: _
The 16th character is: λ
Is there an easy way to break up the string into characters? It does not seem too difficult to write a function which breaks a string into wchars, but I imagine that someone has already written this, yet I cannot find it.
It depends on what your Unicode characters can be. Most strings are restricted to the Basic Multilingual Plane. If yours are (not by accident but because of their very nature: at the very least, no risk of emoji...), you can use char16_t to represent any character. By the way, wchar_t is at least as large as char16_t, so in that case it is safe to use it.
If your script can contain emoji characters, or other characters outside the BMP, or simply if you are unsure, the only foolproof way is to convert everything to char32_t, because any Unicode character (at least as of 2019...) has a code point that fits in fewer than 32 bits.
Converting from UTF-8 to 32-bit (or 16-bit) Unicode is not that hard, and can be coded by hand; Wikipedia contains enough information for it. But you will find tons of libraries where this is already coded and tested, mainly the excellent libiconv, and the C11 version of the C standard library contains functions for UTF-8 conversions. Not as nice, but usable.
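As a sketch of the standard-library route, here is one way to walk a UTF-8 string code point by code point with C11's mbrtoc32() from <uchar.h>; it assumes the program runs under a UTF-8 locale, so that the locale's multibyte encoding really is UTF-8:

#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <uchar.h>

int main(void) {
    setlocale(LC_ALL, "");                   /* assumes a UTF-8 locale */

    const char *s = "lλך";                   /* Latin ell, Greek lambda, Hebrew lamedh */
    size_t len = strlen(s);
    mbstate_t st;
    memset(&st, 0, sizeof st);

    size_t i = 0, index = 0;
    while (i < len) {
        char32_t c;
        size_t n = mbrtoc32(&c, s + i, len - i, &st);
        if (n == 0 || n == (size_t) -1 || n == (size_t) -2)
            break;                           /* NUL, invalid, or truncated sequence */
        printf("character %zu: U+%04X (%zu byte(s))\n", index++, (unsigned) c, n);
        i += n;
    }
    return 0;
}

On a UTF-8 system this prints U+006C, U+03BB and U+05DA with byte counts 1, 2 and 2, which is the per-character indexing the question is after.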

Will gcc functions in string.h break UTF-8 string?

I don't know how the following cases behave in GCC; who can help me?
Does a valid UTF-8 character (other than code point 0) ever contain a zero byte? If so, I think functions such as strlen will break that UTF-8 character.
Does a valid UTF-8 character ever contain a byte whose value is equal to '\n'? If so, I think functions such as gets will break that UTF-8 character.
Does a valid UTF-8 character ever contain a byte whose value is equal to ' ' or '\t'? If so, I think functions such as scanf("%s%s") will break that UTF-8 character and interpret it as two or more words.
The answer to all your questions is the same: no.
That is one of the advantages of UTF-8: ASCII bytes never occur in the encoding of non-ASCII code points.
For example, you can safely use strlen on a UTF-8 string, keeping in mind that its result is the number of bytes rather than the number of UTF-8 code points.
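A short sketch contrasting the two counts; it uses the standard trick of counting only the bytes that are not UTF-8 continuation bytes (those of the form 10xxxxxx), and assumes the source and execution character sets are UTF-8:

#include <stdio.h>
#include <string.h>

/* Count UTF-8 code points by skipping continuation bytes (10xxxxxx). */
static size_t utf8_codepoints(const char *s) {
    size_t count = 0;
    for (; *s; s++)
        if (((unsigned char) *s & 0xC0) != 0x80)
            count++;
    return count;
}

int main(void) {
    const char *s = "γειά σου κόσμε";
    printf("%zu bytes, %zu code points\n", strlen(s), utf8_codepoints(s));
    return 0;
}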

Trouble comparing UTF-8 characters using wchar.h

I am in the process of making a small program that reads a file containing UTF-8 text, character by character. After reading a character it compares it with a few other characters, and if there is a match it replaces the character in the file with an underscore '_'.
(Well, it actually makes a duplicate of that file with specific letters replaced by underscores.)
I'm not sure where exactly I'm messing up here but it's most likely everywhere.
Here is my code:
FILE *fpi;
FILE *fpo;
char ifilename[FILENAME_MAX];
char ofilename[FILENAME_MAX];
wint_t sample;

fpi = fopen(ifilename, "rb");
fpo = fopen(ofilename, "wb");

while (!feof(fpi)) {
    fread(&sample, sizeof(wchar_t*), 1, fpi);
    if ((wcscmp(L"ά", &sample) == 0) || (wcscmp(L"ε", &sample) == 0)) {
        fwrite(L"_", sizeof(wchar_t*), 1, fpo);
    } else {
        fwrite(&sample, sizeof(wchar_t*), 1, fpo);
    }
}
I have omitted the code that has to do with the filename generation because it has nothing to offer to the case. It is just string manipulation.
If I feed this program a file containing the words γειά σου κόσμε, I would want it to return this:
γει_ σου κόσμ_.
Searching the internet didn't help much as most results were very general or talking about completely different things regarding UTF-8. It's like nobody needs to manipulate single characters for some reason.
Anything pointing me the right way is most welcome.
I am not, necessarily, looking for a straightforward fixed version of the code I submitted, I would be grateful for any insightful comments helping me understand how exactly the wchar mechanism works. The whole wbyte, wchar, L, no-L, thing is a mess to me.
Thank you in advance for your help.
C has two different kinds of characters: multibyte characters and wide characters.
Multibyte characters can take a varying number of bytes. For instance, in UTF-8 (which is a variable-length encoding of Unicode), a takes 1 byte, while α takes 2 bytes.
Wide characters always take the same number of bytes. Additionally, a wchar_t must be able to hold any single character from the execution character set. So, when using UTF-32, both a and α take 4 bytes each. Unfortunately, some platforms made wchar_t 16 bits wide: such platforms cannot correctly support characters beyond the BMP using wchar_t. If __STDC_ISO_10646__ is defined, wchar_t holds Unicode code-points, so must be (at least) 4 bytes long (technically, it must be at least 21-bits long).
So, when using UTF-8, you should use multibyte characters, which are stored in normal char variables (but beware of strlen(), which counts bytes, not multibyte characters).
Unfortunately, there is more to Unicode than this.
ά can be represented as a single Unicode codepoint, or as two separate codepoints:
U+03AC GREEK SMALL LETTER ALPHA WITH TONOS ← 1 codepoint ← 1 multibyte character ← 2 bytes (0xCE 0xAC) = 2 chars.
U+03B1 GREEK SMALL LETTER ALPHA, U+0301 COMBINING ACUTE ACCENT ← 2 codepoints ← 2 multibyte characters ← 4 bytes (0xCE 0xB1 0xCC 0x81) = 4 chars.
U+1F71 GREEK SMALL LETTER ALPHA WITH OXIA ← 1 codepoint ← 1 multibyte character ← 3 bytes (0xE1 0xBD 0xB1) = 3 chars.
All of the above are canonical equivalents, which means that they should be treated as equal for all purposes. So, you should normalize your strings on input/output, using one of the Unicode normalization algorithms (there are 4: NFC, NFD, NFKC, NFKD).
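To make the three equivalent representations above concrete, here is a small sketch that dumps their UTF-8 byte sequences; it relies on gcc's default UTF-8 execution character set, so the universal character names in the plain string literals come out as the bytes listed above:

#include <stdio.h>
#include <string.h>

static void dump(const char *label, const char *s) {
    printf("%-30s", label);
    for (size_t i = 0; i < strlen(s); i++)
        printf(" %02X", (unsigned char) s[i]);
    putchar('\n');
}

int main(void) {
    dump("U+03AC (precomposed):", "\u03AC");
    dump("U+03B1 U+0301 (decomposed):", "\u03B1\u0301");
    dump("U+1F71 (alpha with oxia):", "\u1F71");
    return 0;
}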
First of all, please do take the time to read this great article, which explains UTF-8 vs Unicode and lots of other important things about strings and encodings: http://www.joelonsoftware.com/articles/Unicode.html
What you are trying to do in your code is read in Unicode character by character and do comparisons with those. That won't work if the input stream is UTF-8, and it's not really possible with quite this structure.
In short: Fully unicode strings can be encoded in several ways. One of them is using a series of equally-sized "wide" chars, one for each character. That is what the wchar_t type (sometimes WCHAR) is for. Another way is UTF8, which uses a variable number of raw bytes to encode each character, depending on the value of the character.
UTF8 is just a stream of bytes, which can encode a unicode string, and is commonly used in files. It is not the same as a string of WCHARs, which are the more common in-memory representation. You can't poke through a UTF8 stream reliably, and do character replacements within it directly. You'll need to read the whole thing in and decode it, and then loop through the WCHARs that result to do your comparisons and replacement, and then map that result back to UTF8 to write to the output file.
On Win32, use MultiByteToWideChar to do the decoding, and you can use the corresponding WideCharToMultiByte to go back.
When you use a "string literal" with regular quotes, you're creating a nul-terminated ASCII string (char*), which does not support Unicode. The L"string literal" with the L prefix will create a nul-terminated string of WCHARs (wchar_t *), which you can use in string or character comparisons. The L prefix also works with single-quote character literals, like so: L'ε'
As a commenter noted, when you use fread/fwrite, you should be using sizeof(wchar_t) and not its pointer type, since the amount you are trying to read/write is an actual wchar, not the size of a pointer to one. This advice is just code feedback independent of the above; you don't want to be reading the input character by character anyway.
Note too that when you do string comparisons (wcscmp), you should use actual wide strings (which are terminated with a nul wide char)-- not use single characters in memory as input. If (when) you want to do character-to-character comparisons, you don't even need to use the string functions. Since a WCHAR is just a value, you can compare directly: if (sample == L'ά') {}.
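As an alternative to the Win32 route, here is a sketch of how the same task can be done on a POSIX system with the standard wide-character stream functions. It assumes the input file is encoded in the locale's charset (for instance UTF-8), that the letters appear in their precomposed (NFC) form as discussed above, and that the file names in.txt and out.txt are placeholders:

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "");                       /* decode/encode using the locale's charset */

    FILE *fpi = fopen("in.txt", "r");            /* placeholder file names */
    FILE *fpo = fopen("out.txt", "w");
    if (fpi == NULL || fpo == NULL)
        return 1;

    wint_t wc;
    while ((wc = fgetwc(fpi)) != WEOF) {         /* reads one decoded character at a time */
        if (wc == L'\u03AC' || wc == L'\u03B5')  /* ά or ε */
            fputwc(L'_', fpo);
        else
            fputwc(wc, fpo);
    }

    fclose(fpi);
    fclose(fpo);
    return 0;
}

Fed the example file, this should produce γει_ σου κόσμ_. as desired, because fgetwc() performs the multibyte-to-wide conversion on input and fputwc() converts back on output.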

Who determines the ordering of characters

I have a query based on the below program -
char ch;
ch = 'z';

while (ch >= 'a')
{
    printf("char is %c and the value is %d\n", ch, ch);
    ch = ch - 1;
}
Why is the printing of the whole set of lowercase letters not guaranteed in the above program? If C doesn't make many guarantees about the ordering of characters in its internal form, then who actually does, and how?
The compiler implementor chooses their underlying character set. About the only thing the standard has to say is that a certain minimal number of characters must be available and that the numeric characters are contiguous.
The required characters for a C99 execution environment are A through Z, a through z, 0 through 9 (which must be together and in order), any of !"#%&'()*+,-./:;<=>?[\]^_{|}~, space, horizontal tab, vertical tab, form-feed, alert, backspace, carriage return and new line. This remains unchanged in the current draft of C1x, the next iteration of that standard.
Everything else depends on the implementation.
For example, code like:
int isUpperAlpha(char c) {
    return (c >= 'A') && (c <= 'Z');
}
will break on mainframes that use EBCDIC, which splits the uppercase letters into non-contiguous ranges.
Truly portable code will take that into account. All other code should document its dependencies.
A more portable implementation of your example would be something along the lines of:
static char chrs[] = "zyxwvutsrqponmlkjihgfedcba";
char *pCh = chrs;

while (*pCh != 0) {
    printf("char is %c and the value is %d\n", *pCh, *pCh);
    pCh++;
}
If you want a truly portable solution, you should probably use islower(), since code that checks only the Latin characters won't be portable to (for example) Greek using Unicode as its underlying character set.
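A rough sketch of that islower()-based approach, which relies only on the library's classification and makes no assumption about how the character codes are ordered:

#include <ctype.h>
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Walk every possible unsigned char value and let the C library's
       classification (the "C" locale by default) decide what is lowercase. */
    for (int c = 0; c <= UCHAR_MAX; c++) {
        if (islower(c))
            printf("char is %c and the value is %d\n", c, c);
    }
    return 0;
}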
Why is the printing of the whole set of lowercase letters not guaranteed in the above program?
Because it's possible to use C with an EBCDIC character encoding, in which the letters aren't consecutive.
Obviously it is determined by the implementation of C you're using, but more than likely for you it's determined by the American Standard Code for Information Interchange (ASCII).
It is determined by whatever the execution character set is.
In most cases nowadays, that is the ASCII character set, but C has no requirement that a specific character set be used.
Note that there are some guarantees about the ordering of characters in the execution character set. For example, each of the digits '1' through '9' is guaranteed to have a value one greater than that of the previous digit.
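A tiny illustration of that guarantee, which is what makes the usual character-to-digit conversion portable:

#include <stdio.h>

int main(void) {
    char c = '7';
    /* '0'..'9' are required to be contiguous and ascending,
       so subtracting '0' yields the digit's numeric value on any implementation. */
    int value = c - '0';
    printf("'%c' has numeric value %d\n", c, value);
    return 0;
}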
These days, people going around calling your code non-portable are engaging in useless pedantry. Support for ASCII-incompatible encodings only remains in the C standard because of legacy EBCDIC mainframes that refuse to die. You will never encounter an ASCII-incompatible char encoding on any modern computer, now or in the future. Give it a few decades, and you'll never encounter anything but UTF-8.
To answer your question about who decides the character encoding: while it's nominally at the discretion of your implementation (the C compiler, library, and OS), it was ultimately decided by the internet, both existing practice and IETF standards. Presumably modern systems are intended to communicate and interoperate with one another, and it would be a huge headache to have to convert every protocol header, html file, javascript source, username, etc. back and forth between ASCII-compatible encodings and EBCDIC or some other local mess.
In recent times, it's become clear that a universal encoding not just for machine-parsed text but also for natural-language text is highly desirable. (Natural-language text interchange is not as fundamental as machine-parsed text, but it is still very common and important.) Unicode provided the character set, and as the only ASCII-compatible Unicode encoding, UTF-8 is pretty much the successor to ASCII as the universal character encoding.
