C standard: L prefix and octal/hexadecimal escape sequences - c

I couldn't find an explanation in the C standard of how the aforementioned escape sequences are processed in wide strings.
For example:
wchar_t *txt1 = L"\x03A9";
wchar_t *txt2 = L"\xA9\x03";
Are these somehow processed (for example, each byte widened with a \x00 byte), or are they stored in memory exactly as they are written here?
Also, how does L prefix operate according to the standard?
EDIT:
Let's consider txt2. How would it be stored in memory? As \xA9\x00\x03\x00, or as \xA9\x03 exactly as written? The same goes for \x03A9: would that be considered one wide character, or two separate bytes that are turned into two wide characters?
EDIT2:
Standard says:
The hexadecimal digits that follow the backslash and the letter x in a hexadecimal escape
sequence are taken to be part of the construction of a single character for an integer
character constant or of a single wide character for a wide character constant. The
numerical value of the hexadecimal integer so formed specifies the value of the desired
character or wide character.
Now, we have a char literal:
wchar_t txt = L'\xFE\xFF';
It consists of two hex escape sequences, so it should be treated as two wide characters. But two wide characters cannot fit into a single wchar_t (yet this compiles in MSVC), and in my case the sequence is treated as the following:
wchar_t foo = L'\xFFFE';
which is the only hex escape sequence and therefore the only wide char.
EDIT3:
Conclusions: each octal/hex escape sequence is treated as a separate value (wchar_t *txt2 = L"\xA9\x03"; consists of 3 elements). wchar_t txt = L'\xFE\xFF'; is not portable (its value is implementation-defined); one should use wchar_t txt = L'\xFFFE'; instead.

There's no processing. L"\x03A9" is simply an array wchar_t const[2] consisting of the two elements 0x3A9 and 0, and similarly L"\xA9\x03" is an array wchar_t const[3].
Note in particular C11 6.4.4.4/7:
Each octal or hexadecimal escape sequence is the longest sequence of characters that can
constitute the escape sequence.
And also C++11 2.14.3/4:
There is no limit to the number of digits in a hexadecimal sequence.
Note also that when you are using a hexadecimal sequence, it is your responsibility to ensure that your data type can hold the value. C11-6.4.4.4/9 actually spells this out as a requirement, whereas in C++ exceeding the type's range is merely "implementation-defined". (And a good compiler should warn you if you exceed the type's range.)
Your declarations are not quite right, though: a string literal should initialize an array or be bound to a pointer to const. It should be like this:
wchar_t const * p = L"\x03A9"; // pointer to the first element of a string
wchar_t arr1[] = L"\x03A9"; // an actual array
wchar_t arr2[2] = L"\x03A9"; // ditto, but explicitly typed
std::wstring s = L"\x03A9"; // C++ only
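To see the parsing rules in action, here is a minimal C sketch; the element counts follow directly from the passages quoted above:
#include <stddef.h>
#include <stdio.h>

int main(void) {
    wchar_t const a[] = L"\x03A9";   /* one escape sequence: { 0x3A9, L'\0' } */
    wchar_t const b[] = L"\xA9\x03"; /* two escape sequences: { 0xA9, 0x03, L'\0' } */
    printf("%zu %zu\n", sizeof a / sizeof a[0],   /* prints 2 */
                        sizeof b / sizeof b[0]);  /* prints 3 */
    return 0;
}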

Related

How do I compare single multibyte character constants cross-platform in C?

In my previous post I found a solution to do this using C++ strings, but I wonder if there would be a solution using chars in C as well.
My current solution uses str.compare() and size() of a character string, as seen in my previous post.
Now, since I only use one (multibyte) character in the std::string, would it be possible to achieve the same using a char?
For example, if( str[i] == '¶' )? How do I achieve that using chars?
(edit: fixed a typo in the comparison operator, as pointed out in the comments)
How do I compare single multibyte character constants cross-platform in C?
You seem to mean an integer character constant expressed using a single multibyte character. The first thing to recognize, then, is that in C, integer character constants (examples: 'c', '¶') have type int, not char. The primary relevant section of C17 is paragraph 6.4.4.4/10:
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined. If an integer character constant contains a single character or escape sequence, its value is the one that results when an object with type char whose value is that of the single character or escape sequence is converted to type int.
(Emphasis added.)
Note well that "implementation defined" implies limited portability from the get-go. Even if we rule out implementations defining perverse behavior, we still have alternatives such as
the implementation rejects integer character constants containing multibyte source characters; or
the implementation rejects integer character constants that do not map to a single-byte execution character; or
the implementation maps source multibyte characters via a bytewise identity mapping, regardless of the byte sequence's significance in the execution character set.
That is not an exhaustive list.
You can certainly compare integer character constants with each other, but if they map to multibyte execution characters then you cannot usefully compare them to individual chars.
Inasmuch as your intended application appears to be to locate individual multibyte characters in a C string, the most natural thing to do appears to be to implement a C analog of your C++ approach, using the standard strstr() function. Example:
char str[] = "Some string ¶ some text ¶ to see";
char char_to_compare[] = "¶";
int char_size = sizeof(char_to_compare) - 1;  // don't count the string terminator

for (char *location = strstr(str, char_to_compare);
     location;
     location = strstr(location + char_size, char_to_compare)) {
    puts("Found!");
}
That will do the right thing in many cases, but it still might be wrong for some characters in some execution character encodings, such as those encodings featuring multiple shift states.
If you want robust handling for characters outside the basic execution character set, then you would be well advised to take control of the in-memory encoding, and to perform appropriate conversions to, operations on, and conversions from that encoding. This is largely what ICU does, for example.
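For illustration, here is a minimal sketch of that idea using the standard mbrtowc function; it assumes the source file and the runtime locale use the same multibyte encoding (e.g. UTF-8), which is not guaranteed in general:
#include <locale.h>
#include <stdio.h>
#include <string.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "");  /* use the environment's locale for multibyte decoding */

    const char *str = "Some string ¶ some text ¶ to see";
    wchar_t target = L'¶';  /* itself implementation-defined for multibyte characters */

    mbstate_t state;
    memset(&state, 0, sizeof state);

    const char *p = str;
    size_t remaining = strlen(str);
    while (remaining > 0) {
        wchar_t wc;
        size_t len = mbrtowc(&wc, p, remaining, &state);
        if (len == 0 || len == (size_t)-1 || len == (size_t)-2)
            break;          /* null, invalid, or incomplete sequence: stop */
        if (wc == target)
            puts("Found!");
        p += len;
        remaining -= len;
    }
    return 0;
}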
I believe you meant something like this:
char a = '¶';
char b = '¶';
if (a == b) /*do something*/;
The above may or may not work: if the value of '¶' does not fit in the range of char, the conversion truncates it, so a and b store a different value than that of '¶'. Regardless of which value they hold, though, they may well both end up holding the same value.
Remember, the char type is simply a single-byte integer (typically 8 bits wide), so in order to work with multibyte characters and avoid that truncation you have to use a wider integer type (short, int, long...).
short a = '¶';
short b = '¶';
if (a == b) /*do something*/;
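To make the truncation visible, here is a small sketch; it assumes a UTF-8 source file, and the value of the multi-character constant '¶' is itself implementation-defined (GCC, for example, packs the bytes most-significant first):
#include <stdio.h>

int main(void) {
    printf("%X\n", (unsigned)'¶');    /* e.g. C2B6 with GCC and UTF-8 source */
    char c = '¶';                     /* truncated to the low-order byte */
    printf("%X\n", (unsigned char)c); /* e.g. B6 */
    return 0;
}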
From personal experience, I've also noticed that your environment may sometimes use a different character encoding than the one you need. For example, trying to print the 'á' character may actually produce something else.
unsigned char x = 'á';
putchar(x); //actually prints character 'ß' in console.
putchar(160); //will print 'á'.
This happens because the console uses an Extended ASCII encoding, while my coding environment uses Unicode, so it passes the value 225 for 'á' instead of the 160 that I want.

How does the string terminator '\0' have the same value as the integer constant 0?

I have the following code -
#include <stdio.h>

#define LENGTH 5

int main(){
    char* ch[LENGTH] = {"Zero", "One", "Two", "Three", "Four"};
    char* pc;
    char** ppc;

    for(int i = 0; i < LENGTH; i++){
        ppc = ch + i;
        pc = *ppc;
        while(*pc != 0){
            printf("%c ", *pc);
            pc = pc + 1;
        }
        printf("\n");
    }
    return 0;
}
It is an example of multiple indirection using strings.
The output is
Z e r o
O n e
T w o
T h r e e
F o u r
Here, in the while() loop, *pc != 0 is used instead of *pc != '\0'.
But both approaches give the same output. Why is that?
A char is really nothing more than a small integer, and as such is implicitly convertible to int. Furthermore, character literals (like e.g. 'A') are really represented by the compiler as int values (for example, the literal character 'A' is represented by the int value 65 in ASCII encoding).
The C language allows one to insert arbitrary integer values (that fit in a char) using escapes. There are two ways to write such arbitrary values: octal numbers and hexadecimal numbers. For example, the ASCII value for A is 65, which can be represented as 'A', as '\101' in octal, as '\x41' in hexadecimal, or as plain 65.
Armed with that information it should be easy to see that the character literal '\0' is the octal representation of the integer 0. That is, '\0' == 0.
You can easily verify this by printing it:
printf("'\\0' = %d\n", '\0');
I mentioned that the compiler treats all character literals as int values, but also mentioned that arbitrary escaped octal or hexadecimal numbers need to fit in a char. That might seem like a contradiction, but it isn't really: a character's value must fit in a char, but the compiler then internally converts it to an int when it parses the code.
Line feed \n, tab \t etc. have their own escape sequence characters, but there is actually no dedicated one for the null terminator.
The industry de facto standard way of representing the null terminator is therefore to write an octal escape sequence with the value zero. Octal escape sequences are defined as \ followed by a number, so \0 simply means zero in octal representation. Since this looks similar to the other character escape sequences, it has become the de facto standard way of representing the null terminator.
This is why a decimal 0 works just as well; it is just another way of writing the value zero. You could as well write \x0 if you wish to be obscure.
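As a quick sanity check, all three spellings denote the same int value; a minimal C11 sketch:
#include <stdio.h>

int main(void) {
    _Static_assert('\0' == 0, "octal escape equals plain zero");
    _Static_assert('\x0' == 0, "hex escape equals plain zero");
    printf("%d %d %d\n", '\0', 0, '\x0');  /* prints: 0 0 0 */
    return 0;
}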
0 and '\0' are exactly the same value, and in C both have type int. This is fixed by the C standard and is irrespective of the character encoding on your platform. In other words, they are completely indistinguishable. (In C++, the type of '\0' is char.)
So while(*pc != 0), while(*pc != '\0'), and while(*pc) for that matter are all the same thing.
(Personally I find the last form the clearest, but some folk like to use the '\0' notation when working with C-style strings.)
Adding to the existing answers, and looking at the sentinel itself, quoting C11, chapter §5.2.1:
In a character constant or string literal, members of the execution character set shall be
represented by corresponding members of the source character set or by escape
sequences consisting of the backslash \ followed by one or more characters. A byte with
all bits set to 0, called the null character, shall exist in the basic execution character set; it
is used to terminate a character string.
and from chapter §6.4.4.4/P12,
EXAMPLE 1 The construction '\0' is commonly used to represent the null character.
So, the constant \0 is the one that satisfies the aforesaid property. It is an octal escape sequence.
Now, regarding the value, quoting §6.4.4.4/P5, (emphasis mine)
The octal digits that follow the backslash in an octal escape sequence are taken to be part
of the construction of a single character for an integer character constant or of a single
wide character for a wide character constant. The numerical value of the octal integer so
formed specifies the value of the desired character or wide character.
so, for an octal escape sequence '\0', the value is 0 (both in octal, as described in §6.4.4.1, and in decimal).

Are multi-character character constants valid in C? Maybe in MS VC?

While reviewing some WINAPI code intended to compile in MS Visual C++, I found the following (simplified):
char buf[4];
// buf gets filled ...
switch ((buf[0] << 8) + buf[1]) {
case 'CT':
    /* ... */
case 'SY':
    /* ... */
default:
    break;
}
Assuming 8-bit chars, I can understand the shift of buf[0] and the addition of buf[1]: together they pack the first two bytes into a single int. What I don't gather is how the comparisons in the case clauses are intended to work.
I don't have access to Visual C++ and, of course, those yield multi-character character constant [-Wmultichar] warnings on gcc/MingW.
This is a non-portable way of storing more than one char in one int. The comparison then happens on the int values, as usual.
Note: think of the final int value as the concatenated representation of the ASCII values of the individual chars.
Quoting the Wikipedia article (emphasis mine):
[...] Multi-character constants (e.g. 'xy') are valid, although rarely useful — they let one store several characters in an integer (e.g. 4 ASCII characters can fit in a 32-bit integer, 8 in a 64-bit one). Since the order in which the characters are packed into an int is not specified, portable use of multi-character constants is difficult.
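As a hedged illustration of that packing: GCC, for example, documents that each successive character is shifted in from the left, so with GCC the following prints the same value twice; the behavior remains implementation-defined elsewhere:
#include <stdio.h>

int main(void) {
    printf("0x%X\n", (unsigned)'CT');             /* typically 0x4354; may warn with -Wmultichar */
    printf("0x%X\n", ((unsigned)'C' << 8) + 'T'); /* the switch expression's packing */
    return 0;
}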
Related, C11, chapter §6.4.4.4/p10
An integer character constant has type int. The value of an integer character constant
containing a single character that maps to a single-byte execution character is the
numerical value of the representation of the mapped character interpreted as an integer.
The value of an integer character constant containing more than one character (e.g.,
'ab'), or containing a character or escape sequence that does not map to a single-byte
execution character, is implementation-defined. [....]
Yes, they are valid, their type is int, and their value is implementation-defined.
From C11 draft, 6.4.4.4p10:
An integer character constant has type int. The value of an integer
character constant containing a single character that maps to a
single-byte execution character is the numerical value of the
representation of the mapped character interpreted as an integer. The
value of an integer character constant containing more than one
character (e.g., 'ab'), or containing a character or escape sequence
that does not map to a single-byte execution character, is
implementation-defined.
(emphasis added)
GCC is being cautious, and warns to let you know in case you have used it unintentionally.

C99 Standard - fprintf - s conversion with precision

Let's assume we have only the C99 standard document, and the printf library function needs to be implemented according to this standard to work with UTF-16 encoding. Could you please clarify the expected behavior of the s conversion with a precision specified?
C99 Standard (7.19.6.1) for s conversion says:
If no l length modifier is present, the argument shall be a pointer to the initial element of an array of character type. Characters from the array are written up to (but not including) the terminating null character. If the precision is specified, no more than that many bytes are written. If the precision is not specified or is greater than the size of the array, the array shall contain a null character.
If an l length modifier is present, the argument shall be a pointer to the initial element of an array of wchar_t type. Wide characters from the array are converted to multibyte characters (each as if by a call to the wcrtomb function, with the conversion state described by an mbstate_t object initialized to zero before the first wide character is converted) up to and including a terminating null wide character. The resulting multibyte characters are written up to (but not including) the terminating null character (byte). If no precision is specified, the array shall contain a null wide character. If a precision is specified, no more than that many bytes are written (including shift sequences, if any), and the array shall contain a null wide character if, to equal the multibyte character sequence length given by the precision, the function would need to access a wide character one past the end of the array. In no case is a partial multibyte character written.
I don't quite understand this paragraph in general and the statement "If a precision is specified, no more than that many bytes are written" in particular.
For example, let's take UTF-16 string "TEST" (byte sequence: 0x54, 0x00, 0x45, 0x00, 0x53, 0x00, 0x54, 0x00).
What is expected to be written to the output buffer in the following cases:
If precision is 3
If precision is 9 (one byte more than string length)
If precision is 12 (several bytes more than string length)
Then there's also "Wide characters from the array are converted to multibyte characters". Does that mean UTF-16 should be converted to UTF-8 first? This seems strange when I expect to work with UTF-16 only.
Converting a comment into a slightly expanded answer.
What is the value of CHAR_BIT in your implementation?
If CHAR_BIT == 8, you can't handle UTF-16 with %s; you'd use %ls and you'd pass a wchar_t * as the corresponding argument. You'd then have to read the second paragraph of the specification.
If CHAR_BIT == 16, then you can't have an odd number of octets in the data. You then need to know about how wchar_t relates to char (are they the same size? do they have the same signedness?) and interpret both paragraphs to come up with a uniform effect — unless you decided to have wchar_t represent UTF-32.
The key point is that UTF-16 cannot be handled as a C string if CHAR_BIT == 8 because there are too many useful characters that are encoded with one byte holding zero, but those zero bytes mark the end of a null-terminated string. To handle UTF-16, either the plain char type has to be a 16-bit (or larger) type (so CHAR_BIT > 8), or you have to use wchar_t (and sizeof(wchar_t) > sizeof(char)).
Note that the specification expects that wide characters will be converted to a suitable multibyte representation.
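For example, here is a minimal sketch of that conversion; it assumes a locale in which these letters convert to single-byte multibyte characters:
#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "");  /* pick up the environment's multibyte encoding */
    wchar_t ws[] = L"TEST";
    printf("%ls\n", ws);    /* wide chars converted to multibyte output: TEST */
    printf("%.3ls\n", ws);  /* precision counts bytes of the converted output: TES */
    return 0;
}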
If you want wide characters output natively, you have to use fwprintf() and the related functions from <wchar.h>, first defined in C99. The specification there has a lot in common with the specification of fprintf(), but there are (unsurprisingly) important differences.
7.29.2.1 The fwprintf function
…
s
If no l length modifier is present, the argument shall be a pointer to the initial
element of a character array containing a multibyte character sequence
beginning in the initial shift state. Characters from the array are converted as
if by repeated calls to the mbrtowc function, with the conversion state
described by an mbstate_t object initialized to zero before the first
multibyte character is converted, and written up to (but not including) the
terminating null wide character. If the precision is specified, no more than
that many wide characters are written. If the precision is not specified or is
greater than the size of the converted array, the converted array shall contain a
null wide character.
If an l length modifier is present, the argument shall be a pointer to the initial
element of an array of wchar_t type. Wide characters from the array are
written up to (but not including) a terminating null wide character. If the
precision is specified, no more than that many wide characters are written. If
the precision is not specified or is greater than the size of the array, the array
shall contain a null wide character.
wchar_t is not meant to be used for UTF-16, only for implementation-defined fixed-width encodings depending on the current locale. There's simply no sane way to support a variable-length encoding with the wide character API. Likewise, the multi-byte representation used by functions like printf or wcrtomb is implementation-defined. If you want to write portable code using Unicode, you can't rely on the wide character API. Use a library or roll your own code.
To answer your question: fprintf with the l modifier accepts a wide character string in the implementation-defined encoding specified by the current locale. If wchar_t is 16 bits, this encoding might be a bastardization of UTF-16, but as I mentioned above, there's no way to properly support UTF-16 surrogates. This wchar_t string is then converted to a multi-byte char string in an implementation-defined encoding. This might or might not be UTF-8. The specified precision limits the number of chars in the output string with the added restriction that no partial multi-byte characters are written.
Here's an example. Let's assume that the wide character encoding is UTF-32 with 32-bit wchar_t and that the multi-byte encoding is UTF-8 (like on Linux with an appropriate locale). The following code
wchar_t w[] = { 0x1F600, 0 }; // U+1F600 GRINNING FACE
printf("%.3ls", w);
will print nothing at all since the resulting UTF-8 sequence has four bytes. Only if you specify a precision of at least four
printf("%.4ls", w);
will the character be printed.
EDIT: To answer your second question, no, printf should never write a null character. The sentence only means that in certain cases, a null character is required to specify the end of the string and avoid buffer over-reads.

What does \x mean in C/C++?

Example:
char arr[] = "\xeb\x2a";
BTW, are the following the same?
"\xeb\x2a" vs. '\xeb\x2a'
\x indicates a hexadecimal character escape. It's used to specify characters that aren't typeable (like a null '\x00').
And "\xeb\x2a" is a literal string (type is char *, 3 bytes, null-terminated), and '\xeb\x2a' is a character constant (type is int, 2 bytes, not null-terminated, and is just another way to write 0xEB2A or 60202 or 0165452). Not the same :)
As others have said, \x is an escape that starts a "hexadecimal escape sequence".
Some further details from the C99 standard:
When used inside a set of single-quotes (') the characters are part of an "integer character constant" which is (6.4.4.4/2 "Character constants"):
a sequence of one or more multibyte characters enclosed in single-quotes, as in 'x'.
and
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined.
So the sequence in your example of '\xeb\x2a' is an implementation defined value. It's likely to be the int value 0xeb2a or 0x2aeb depending on whether the target platform is big-endian or little-endian, but you'd have to look at your compiler's documentation to know for certain.
When used inside a set of double-quotes (") the characters specified by the hex-escape-sequence are part of a null-terminated string literal.
From the C99 standard 6.4.5/3 "String literals":
The same considerations apply to each element of the sequence in a character string literal or a wide string literal as if it were in an integer character constant or a wide character constant, except that the single-quote ' is representable either by itself or by the escape sequence \', but the double-quote " shall be represented by the escape sequence \".
Additional info:
In my opinion, you should avoid using 'multi-character' constants. There are only a few situations where they provide any value over using a regular, old int constant. For example, '\xeb\x2a' could be specified more portably as 0xeb2a or 0x2aeb, depending on which value you really wanted.
One area that I've found multi-character constants to be of some use is to come up with clever enum values that can be recognized in a debugger or memory dump:
enum CommandId {
    CMD_ID_READ  = 'read',
    CMD_ID_WRITE = 'writ',
    CMD_ID_DEL   = 'del ',
    CMD_ID_FOO   = 'foo '
};
There are few portability problems with the above (other than platforms that have small ints, or warnings that might be spewed). Whether the characters end up in the enum values in little- or big-endian form, the code will still work (unless you're doing something else unholy with the enum values). If the characters end up in the value with an endianness you didn't expect, the values may be less easy to read in a debugger, but the 'correctness' isn't affected.
When you say:
BTW, are these the same:
"\xeb\x2a" vs. '\xeb\x2a'
They are in fact not. The first creates a string literal, terminated with a zero byte, containing the two characters whose hex representations you provide. The second creates an integer constant.
It's an escape sequence indicating that the characters which follow give a character's code in hexadecimal.
http://www.austincc.edu/rickster/COSC1320/handouts/escchar.htm
The \x means it's a hexadecimal character escape. So \xeb means the character with hex code eb, or 235 in decimal. See http://msdn.microsoft.com/en-us/library/6aw8xdf2.aspx for more information.
As for the second, no, they are not the same. The double-quotes, ", means it's a string of characters, a null-terminated character array, whereas a single quote, ', means it's a single character, the byte that character represents.
\x allows you to specify the character by its hexadecimal code.
This allows you to specify characters that are normally not printable (some of which have predefined escape sequences, such as '\n' = newline, '\t' = tab, and '\b' = backspace).
A useful reference table describes the hexadecimal conversion as:
x Unsigned hexadecimal integer
That way, your \xeb is 235 in decimal.
