Let's assume only the C99 standard applies, and the printf library function needs to be implemented according to that standard to work with UTF-16 encoding. Could you please clarify the expected behavior of the s conversion when a precision is specified?
C99 Standard (7.19.6.1) for s conversion says:
If no l length modifier is present, the argument shall be a pointer to the initial element of an array of character type. Characters from the array are written up to (but not including) the terminating null character. If the precision is specified, no more than that many bytes are written. If the precision is not specified or is greater than the size of the array, the array shall contain a null character.
If an l length modifier is present, the argument shall be a pointer to the initial element of an array of wchar_t type. Wide characters from the array are converted to multibyte characters (each as if by a call to the wcrtomb function, with the conversion state described by an mbstate_t object initialized to zero before the first wide character is converted) up to and including a terminating null wide character. The resulting multibyte characters are written up to (but not including) the terminating null character (byte). If no precision is specified, the array shall contain a null wide character. If a precision is specified, no more than that many bytes are written (including shift sequences, if any), and the array shall contain a null wide character if, to equal the multibyte character sequence length given by the precision, the function would need to access a wide character one past the end of the array. In no case is a partial multibyte character written.
I don't quite understand this paragraph in general and the statement "If a precision is specified, no more than that many bytes are written" in particular.
For example, let's take UTF-16 string "TEST" (byte sequence: 0x54, 0x00, 0x45, 0x00, 0x53, 0x00, 0x54, 0x00).
What is expected to be written to the output buffer in the following cases:
If precision is 3
If precision is 9 (one byte more than string length)
If precision is 12 (several bytes more than string length)
Then there's also "Wide characters from the array are converted to multibyte characters". Does that mean UTF-16 should be converted to UTF-8 first? That seems strange when I expect to work with UTF-16 only.
Converting a comment into a slightly expanded answer.
What is the value of CHAR_BIT in your implementation?
If CHAR_BIT == 8, you can't handle UTF-16 with %s; you'd use %ls and you'd pass a wchar_t * as the corresponding argument. You'd then have to read the second paragraph of the specification.
If CHAR_BIT == 16, then you can't have an odd number of octets in the data. You then need to know about how wchar_t relates to char (are they the same size? do they have the same signedness?) and interpret both paragraphs to come up with a uniform effect — unless you decided to have wchar_t represent UTF-32.
The key point is that UTF-16 cannot be handled as a C string if CHAR_BIT == 8 because there are too many useful characters that are encoded with one byte holding zero, but those zero bytes mark the end of a null-terminated string. To handle UTF-16, either the plain char type has to be a 16-bit (or larger) type (so CHAR_BIT > 8), or you have to use wchar_t (and sizeof(wchar_t) > sizeof(char)).
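Here is a minimal sketch of that failure mode, using the "TEST" byte sequence from the question (assuming a typical CHAR_BIT == 8 platform):
#include <stdio.h>

int main(void)
{
    /* "TEST" encoded as UTF-16LE, stored byte by byte. Every other byte
       is zero, and %s stops at the first zero byte. */
    char utf16le[] = { 0x54, 0x00, 0x45, 0x00, 0x53, 0x00, 0x54, 0x00 };
    printf("%s\n", utf16le);  /* prints just "T" */
    return 0;
}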
Note that the specification expects that wide characters will be converted to a suitable multibyte representation.
If you want wide characters output natively, you have to use fwprintf() and related functions from <wchar.h>, first defined in C99. The specification there has a lot in common with the specification of fprintf(), but there are (unsurprisingly) important differences.
7.29.2.1 The fwprintf function
…
s
If no l length modifier is present, the argument shall be a pointer to the initial element of a character array containing a multibyte character sequence beginning in the initial shift state. Characters from the array are converted as if by repeated calls to the mbrtowc function, with the conversion state described by an mbstate_t object initialized to zero before the first multibyte character is converted, and written up to (but not including) the terminating null wide character. If the precision is specified, no more than that many wide characters are written. If the precision is not specified or is greater than the size of the converted array, the converted array shall contain a null wide character.
If an l length modifier is present, the argument shall be a pointer to the initial element of an array of wchar_t type. Wide characters from the array are written up to (but not including) a terminating null wide character. If the precision is specified, no more than that many wide characters are written. If the precision is not specified or is greater than the size of the array, the array shall contain a null wide character.
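For illustration, here is a minimal sketch of native wide output with fwprintf() and %ls (assuming only that the C library honors the user's locale):
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "");            /* adopt the user's locale */
    wchar_t msg[] = L"TEST";
    /* %ls in fwprintf() writes the wide characters directly; no
       conversion to multibyte characters takes place. */
    fwprintf(stdout, L"%ls\n", msg);
    return 0;
}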
wchar_t is not meant to be used for UTF-16, only for implementation-defined fixed-width encodings depending on the current locale. There's simply no sane way to support a variable-length encoding with the wide character API. Likewise, the multi-byte representation used by functions like printf or wcrtomb is implementation-defined. If you want to write portable code using Unicode, you can't rely on the wide character API. Use a library or roll your own code.
To answer your question: fprintf with the l modifier accepts a wide character string in the implementation-defined encoding specified by the current locale. If wchar_t is 16 bits, this encoding might be a bastardization of UTF-16, but as I mentioned above, there's no way to properly support UTF-16 surrogates. This wchar_t string is then converted to a multi-byte char string in an implementation-defined encoding. This might or might not be UTF-8. The specified precision limits the number of chars in the output string with the added restriction that no partial multi-byte characters are written.
Here's an example. Let's assume that the wide character encoding is UTF-32 with 32-bit wchar_t and that the multi-byte encoding is UTF-8 (like on Linux with an appropriate locale). The following code
wchar_t w[] = { 0x1F600, 0 }; // U+1F600 GRINNING FACE
printf("%.3ls", w);
will print nothing at all since the resulting UTF-8 sequence has four bytes. Only if you specify a precision of at least four
printf("%.4ls", w);
will the character be printed.
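Putting that together into a complete program (assuming, as above, a 32-bit wchar_t and a UTF-8 locale; the locale name "C.UTF-8" is a guess and may need adjusting on your system):
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    setlocale(LC_ALL, "C.UTF-8");
    wchar_t w[] = { 0x1F600, 0 };  /* U+1F600 GRINNING FACE */
    printf("%.3ls|\n", w);  /* prints just "|": the 4-byte UTF-8 sequence doesn't fit in 3 */
    printf("%.4ls|\n", w);  /* prints the emoji, then "|" */
    return 0;
}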
EDIT: To answer your second question, no, printf should never write a null character. The sentence only means that in certain cases, a null character is required to specify the end of the string and avoid buffer over-reads.
Related
In my previous post I found a solution to do this using C++ strings, but I wonder if there would be a solution using char's in C as well.
My current solution uses str.compare() and size() of a character string as seen in my previous post.
Now, since I only use one (multibyte) character in the std::string, would it be possible to achieve the same using a char?
For example, if( str[i] == '¶' )? How do I achieve that using char's?
(edit: fixed a typo in the comparison operator on SO, as pointed out in the comments)
How do I compare single multibyte character constants cross-platform in C?
You seem to mean an integer character constant expressed using a single multibyte character. The first thing to recognize, then, is that in C, integer character constants (examples: 'c', '¶') have type int, not char. The primary relevant section of C17 is paragraph 6.4.4.4/10:
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined. If an integer character constant contains a single character or escape sequence, its value is the one that results when an object with type char whose value is that of the single character or escape sequence is converted to type int.
(Emphasis added.)
Note well that "implementation defined" implies limited portability from the get-go. Even if we rule out implementations defining perverse behavior, we still have alternatives such as
the implementation rejects integer character constants containing multibyte source characters; or
the implementation rejects integer character constants that do not map to a single-byte execution character; or
the implementation maps source multibyte characters via a bytewise identity mapping, regardless of the byte sequence's significance in the execution character set.
That is not an exhaustive list.
You can certainly compare integer character constants with each other, but if they map to multibyte execution characters then you cannot usefully compare them to individual chars.
Inasmuch as your intended application appears to be locating individual multibyte characters in a C string, the most natural approach is to implement a C analog of your C++ approach, using the standard strstr() function. Example:
char str[] = "Some string ¶ some text ¶ to see";
char char_to_compare[] = "¶";
int char_size = sizeof(char_to_compare) - 1; // don't count the string terminator
for (char *location = strstr(str, char_to_compare);
     location;
     location = strstr(location + char_size, char_to_compare)) {
    puts("Found!");
}
That will do the right thing in many cases, but it still might be wrong for some characters in some execution character encodings, such as those encodings featuring multiple shift states.
If you want robust handling for characters outside the basic execution character set, then you would be well advised to take control of the in-memory encoding, and to perform appropriate conversions to, operations on, and conversions from that encoding. This is largely what ICU does, for example.
I believe you meant something like this:
char a = '¶';
char b = '¶';
if (a == b) /*do something*/;
The above may or may not work. If the value of '¶' is bigger than the char range, it will overflow, causing a and b to store a different value than that of '¶'. Regardless of which value they hold, though, they will most likely both hold the same value.
Remember, the char type is simply a single-byte integer (typically 8 bits wide), so in order to work with multibyte characters and avoid overflow you just have to use a wider integer type (short, int, long...).
short a = '¶';
short b = '¶';
if (a == b) /*do something*/;
From personal experience, I've also noticed that sometimes your environment may try to use a different character encoding than what you need. For example, trying to print the 'á' character will actually produce something else.
unsigned char x = 'á';
putchar(x); //actually prints character 'ß' in console.
putchar(160); //will print 'á'.
This happens because the console uses an Extended ASCII encoding, while my coding environment actually uses Unicode, parsing a value of 225 for 'á' instead of the value of 160 that I want.
While reviewing some WINAPI code intended to compile in MS Visual C++, I found the following (simplified):
char buf[4];
// buf gets filled ...
switch ((buf[0] << 8) + buf[1]) {
case 'CT':
    /* ... */
case 'SY':
    /* ... */
default:
    break;
}
Assuming 8-bit chars, I can understand the shift of buf[0] and the addition of buf[1]: they pack two chars into one 16-bit value. What I don't gather is how the comparisons in the case clauses are intended to work.
I don't have access to Visual C++ and, of course, those yield multi-character character constant [-Wmultichar] warnings on gcc/MingW.
This is a non-portable way of storing more than one char in one int. The comparison then happens on the int values, as usual.
Note: think of the final int value as the concatenation of the ASCII values of the individual chars.
Quoting the Wikipedia article (emphasis mine):
[...] Multi-character constants (e.g. 'xy') are valid, although rarely useful — they let one store several characters in an integer (e.g. 4 ASCII characters can fit in a 32-bit integer, 8 in a 64-bit one). Since the order in which the characters are packed into an int is not specified, portable use of multi-character constants is difficult.
Related, C11, chapter §6.4.4.4/p10
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined. [....]
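As a concrete (implementation-specific) illustration, gcc, clang and MSVC all happen to pack 'CT' the same way, though the standard does not guarantee this and gcc will warn with -Wmultichar:
#include <stdio.h>

int main(void)
{
    /* On these compilers 'CT' evaluates to ('C' << 8) | 'T' == 0x4354,
       but the packing order is implementation-defined. */
    printf("'CT'             = 0x%X\n", 'CT');
    printf("('C' << 8) + 'T' = 0x%X\n", ('C' << 8) + 'T');
    return 0;
}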
Yes, they are valid; their type is int and their value is implementation-defined.
From C11 draft, 6.4.4.4p10:
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined.
(emphasis added)
GCC is being cautious, and warns to let you know in case you have used it unintentionally.
C11 defines a "string" as:
A string is a contiguous sequence of characters terminated by and including the first null character. (§7.1.1 ¶1)
It earlier defines a "character" as:
3.7 character
〈abstract〉 member of a set of elements used for the organization, control, or representation of data
3.7.1
character
single-byte character
〈C〉 bit representation that fits in a byte
3.7.2
multibyte character
sequence of one or more bytes representing a member of the extended character set ...
3.7.3
wide character
value representable by an object of type wchar_t, capable of representing any character in the current locale
Question: What definition of "character" is being used in the definition of "string":
"character" in 3.7,
"character" in 3.7.1,
or something else?
A string is a contiguous sequence of data of type char.
The word "character" is used in two senses, abstract and practical.
From the abstract point of view, we would first have to define the concept of a "set of characters" before going to 3.7 and saying "a member of a set of elements used for...".
This definition of "character" matches another standard: ISO/IEC 2382-1.
See ISO/IEC 2382-1 (character), where you can find a long list of terms related to "Information Representation".
MY SHORT ANSWER: "character" in the definition of "string" corresponds to C11 §3.7.1.
The explanation is as follows:
CHARACTER IN THE ABSTRACT
A symbol is an intellectual convention of human beings.
So the abstract symbol for "A" is a convention we use to recognize different "graphs" of A (in different typefaces, say) as all being "the same" thing (a piece of information).
Information is then represented by finite, ordered sequences of characters from a set of (abstract) characters.
Next, you need to encode these abstract symbols to make their representation in information systems (computers) possible.
This is done, in general, by defining a one-to-one correspondence between integer numbers (called code-points) and the corresponding characters of a given set.
An encoding scheme is a way in which a set of characters is associated with certain numbers (code-points).
This encoding can change from one system to another ("A" does not have the same encoding in EBCDIC as in ASCII).
Finally, we associate a "graph" with each character+code-point, that is, a written representation, which can eventually be printed or shown on screen.
The shape of the graph can change with the font design, so it is not a good starting point for defining the term "character".
CHARACTER IN C
In 3.7.1 it seems that C11 refers to another meaning of "character", as shorthand for "single-byte character". It is talking about code-points (that is, integer numbers associated with the abstract characters of a given set) that fit in exactly 1 byte.
In this case, we need the definition of Byte.
In C, a byte is a unit of information storage, consisting of an ordered sequence of n bits, where n is an integer greater than or equal to 8 (in general it is 8, of course), whose value you can find by checking the constant CHAR_BIT in <limits.h>.
There are data types whose size is exactly 1 byte: char, unsigned char, signed char.
The range of values of unsigned char is exactly 0...2^n - 1, where n is CHAR_BIT.
The range of values of char coincides with that of either signed char or unsigned char, but C11 doesn't say which of them char corresponds to.
Moreover, in any case, the type char must be considered different from signed char and unsigned char.
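A short sketch (standard C, nothing assumed beyond <limits.h>) that prints these implementation-specific properties:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("CHAR_MIN  = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
    printf("UCHAR_MAX = %d\n", UCHAR_MAX);
    /* CHAR_MIN == 0 means plain char behaves like unsigned char;
       CHAR_MIN < 0 means it behaves like signed char. */
    return 0;
}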
A string is, now, a sequence of objects of type char.
WHY CHAR?
The standard defines the representation of characters in terms of char:
(6.2.5.3)
An object declared as type char is large enough to store any member of the basic execution character set. If a member of the basic execution character set is stored in a char object, its value is guaranteed to be nonnegative. If any other character is stored in a char object, the resulting value is implementation-defined but shall be within the range of values that can be represented in that type.
STRING
Now, a string in C is a contiguous sequence of (single-byte) characters terminated by the null character, which in C is always 0.
This definition could again be understood in an abstract way; however, in 7.1.1 ¶1 the text talks about the address of the string, so it must be understood that a "string" is an object in memory.
A "string" object is, then, a contiguous sequence of "bytes", each one holding the code-point of a character.
This is derived from the fact that a "character" is intended to fit exactly in 1 byte.
It is represented in C by an array of type char, whose last element is 0.
MULTIBYTE CHARACTER
The definition of "multibyte" is complicated.
It refers to certain special encoding schemes that use a variable number of bytes to represent an (abstract) character.
You need information about the execution character sets in order to properly handle multibyte character sets.
However, even if you have a multibyte character, it is still represented in memory as a sequence of bytes.
That means that you will represent a multibyte string again as an array of char.
The way in which the execution system interprets these bytes is a different issue.
WIDE CHARACTER
A wide character is an element of another set of (abstract) characters, different from those represented by the type char.
It is intended that the set of "wide characters" be larger than the set of "single-byte characters".
But this is not necessarily the case.
The relevant facts of the "wide characters" are the following:
The set of "wide characters", whichever they are, can be represented by the range of values of the type wchar_t.
These characters can be different from those represented in the type char.
A "wide character" can use more than 1 byte storage.
A "wide string" is a null-terminated contiguous sequence of "wide characters".
Thus, a "wide string" is a different object than a "string".
CONCLUSION
A string has nothing to do with "wide" characters; it involves only "single-byte" characters.
A string is a null-terminated contiguous sequence of "bytes", which, in turn, means objects of one of the char types (char, signed char, unsigned char), holding code-points of an abstract character set that fit in 1 byte.
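As a concrete illustration, here is a tiny sketch that dumps the bytes of a string object:
#include <stdio.h>

int main(void)
{
    char s[] = "ABC";  /* 4 bytes: 'A', 'B', 'C' and the null terminator */
    for (size_t i = 0; i < sizeof s; i++)
        printf("byte %zu: 0x%02X\n", i, (unsigned char)s[i]);
    return 0;
}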
I didn't find an explanation in the C standard of how the aforementioned escape sequences in wide strings are processed.
For example:
wchar_t *txt1 = L"\x03A9";
wchar_t *txt2 = L"\xA9\x03";
Are these somehow processed (like prefixing each byte with a \x00 byte), or stored in memory exactly the same way as they are declared here?
Also, how does L prefix operate according to the standard?
EDIT:
Let's consider txt2. How would it be stored in memory? As \xA9\x00\x03\x00, or as \xA9\x03, exactly as written? The same goes for \x03A9. Would it be considered one wide character, or two separate bytes which would be made into two wide characters?
EDIT2:
Standard says:
The hexadecimal digits that follow the backslash and the letter x in a hexadecimal escape sequence are taken to be part of the construction of a single character for an integer character constant or of a single wide character for a wide character constant. The numerical value of the hexadecimal integer so formed specifies the value of the desired character or wide character.
Now, we have a char literal:
wchar_t txt = L'\xFE\xFF';
It consists of two hex escape sequences, so it should be treated as two wide characters. If these are two wide characters, they can't fit into one wchar_t (yet it compiles in MSVC), and in my case this sequence is treated as:
wchar_t foo = L'\xFFFE';
which is a single hex escape sequence and therefore a single wide character.
EDIT3:
Conclusions: each octal/hex escape sequence is treated as a separate value (wchar_t *txt2 = L"\xA9\x03"; consists of 3 elements). wchar_t txt = L'\xFE\xFF'; is not portable (an implementation-defined feature); one should use wchar_t txt = L'\xFFFE'; instead.
There's no processing. L"\x03A9" is simply an array wchar_t const[2] consisting of the two elements 0x3A9 and 0, and similarly L"\xA9\x03" is an array wchar_t const[3].
Note in particular C11 6.4.4.4/7:
Each octal or hexadecimal escape sequence is the longest sequence of characters that can constitute the escape sequence.
And also C++11 2.14.3/4:
There is no limit to the number of digits in a hexadecimal sequence.
Note also that when you are using a hexadecimal sequence, it is your responsibility to ensure that your data type can hold the value. C11-6.4.4.4/9 actually spells this out as a requirement, whereas in C++ exceeding the type's range is merely "implementation-defined". (And a good compiler should warn you if you exceed the type's range.)
Your declarations are questionable, though, because they point non-const pointers at string literals. Better alternatives:
wchar_t const * p = L"\x03A9"; // pointer to the first element of a string
wchar_t arr1[] = L"\x03A9"; // an actual array
wchar_t arr2[2] = L"\x03A9"; // ditto, but explicitly typed
std::wstring s = L"\x03A9"; // C++ only
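A quick check of the first claim (in C; only the numeric code-point values are printed, so no particular locale is assumed):
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    wchar_t arr[] = L"\x03A9";
    /* Expected output: 2 elements, arr[0] = 0x3A9 (GREEK CAPITAL LETTER
       OMEGA), arr[1] = 0 (the terminating null wide character). */
    printf("%u elements\n", (unsigned)(sizeof arr / sizeof arr[0]));
    printf("arr[0] = 0x%X, arr[1] = 0x%X\n", (unsigned)arr[0], (unsigned)arr[1]);
    return 0;
}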
On a tangent: This question of mine elaborates a bit on string literals and escape sequences.
Example:
char arr[] = "\xeb\x2a";
BTW, are the following the same?
"\xeb\x2a" vs. '\xeb\x2a'
\x indicates a hexadecimal character escape. It's used to specify characters that aren't typeable (like a null '\x00').
And "\xeb\x2a" is a literal string (type is char *, 3 bytes, null-terminated), and '\xeb\x2a' is a character constant (type is int, 2 bytes, not null-terminated, and is just another way to write 0xEB2A or 60202 or 0165452). Not the same :)
As others have said, \x is an escape sequence that starts a "hexadecimal-escape-sequence".
Some further details from the C99 standard:
When used inside a set of single-quotes (') the characters are part of an "integer character constant" which is (6.4.4.4/2 "Character constants"):
a sequence of one or more multibyte characters enclosed in single-quotes, as in 'x'.
and
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined.
So the sequence in your example of '\xeb\x2a' is an implementation defined value. It's likely to be the int value 0xeb2a or 0x2aeb depending on whether the target platform is big-endian or little-endian, but you'd have to look at your compiler's documentation to know for certain.
When used inside a set of double-quotes (") the characters specified by the hex-escape-sequence are part of a null-terminated string literal.
From the C99 standard 6.4.5/3 "String literals":
The same considerations apply to each element of the sequence in a character string literal or a wide string literal as if it were in an integer character constant or a wide character constant, except that the single-quote ' is representable either by itself or by the escape sequence \', but the double-quote " shall be represented by the escape sequence \".
Additional info:
In my opinion, you should avoid using 'multi-character' constants. There are only a few situations where they provide any value over a regular old int constant. For example, '\xeb\x2a' could more portably be specified as 0xeb2a or 0x2aeb, depending on which value you really wanted.
One area where I've found multi-character constants to be of some use is coming up with clever enum values that can be recognized in a debugger or memory dump:
enum CommandId {
    CMD_ID_READ  = 'read',
    CMD_ID_WRITE = 'writ',
    CMD_ID_DEL   = 'del ',
    CMD_ID_FOO   = 'foo '
};
There are few portability problems with the above (other than platforms with small ints, or warnings that might be spewed). Whether the characters end up in the enum values in little- or big-endian form, the code will still work (unless you're doing something else unholy with the enum values). If the characters end up in the value using an endianness that wasn't what you expected, it might make the values less easy to read in a debugger, but the 'correctness' isn't affected.
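For example, here's a sketch of how such a value becomes legible in a raw memory dump (assuming 32-bit int and gcc-style packing, both implementation-defined):
#include <stdio.h>

enum CommandId { CMD_ID_READ = 'read' };

int main(void)
{
    enum CommandId id = CMD_ID_READ;
    const unsigned char *p = (const unsigned char *)&id;
    /* On a little-endian machine this prints "daer"; on a big-endian
       one, "read". Either way the ASCII letters are recognizable. */
    for (size_t i = 0; i < sizeof id; i++)
        printf("%c", p[i]);
    printf("\n");
    return 0;
}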
When you say:
BTW, are these the same:
"\xeb\x2a" vs '\xeb\x2a'
They are in fact not. The first creates a character string literal, terminated with a zero byte, containing the two characters whose hex representations you provide. The second creates an integer constant.
It's an escape sequence indicating that what follows is the character's value written in hexadecimal.
http://www.austincc.edu/rickster/COSC1320/handouts/escchar.htm
The \x means it's a hex character escape. So \xeb would mean character eb in hex, or 235 in decimal. See http://msdn.microsoft.com/en-us/library/6aw8xdf2.aspx for more information.
As for the second, no, they are not the same. The double-quotes, ", means it's a string of characters, a null-terminated character array, whereas a single quote, ', means it's a single character, the byte that character represents.
\x allows you to specify the character by its hexadecimal code.
This allows you to specify characters that are normally not printable (some of which have predefined special escape sequences, such as '\n' = newline, '\t' = tab, and '\b' = backspace).
A useful website is here.
And I quote:
x Unsigned hexadecimal integer
That way, your \xeb is like 235 in decimal.