What is the "character" in the definition of a "string"?

C11 defines a "string" as:
A string is a contiguous sequence of characters terminated by and
including the first null character. §7.1.1 1
It earlier defines a "character" as:
3.7 character
〈abstract〉 member of a set of elements used for the organization, control, or representation of data
3.7.1
character
single-byte character
〈C〉 bit representation that fits in a byte
3.7.2
multibyte character
sequence of one or more bytes representing a member of the extended character set ...
3.7.3
wide character
value representable by an object of type wchar_t, capable of representing any character
in the current locale
Question: What definition of "character" is being used in the definition of "string":
"character" in 3.7,
"character" in 3.7.1,
or something else?

A string is a contiguous sequence of data of type char.
The word "character" is used in two senses, abstract and practical.
From the abstract point of view, we would first have to define the concept of a "set of characters" before we could go to 3.7 and speak of "a member of a set of elements used for...".
This definition of "character" fits another standard: ISO/IEC 2382-1.
See the entry for "character" in ISO/IEC 2382-1.
There you can find a long list of terms related to "Information Representation".
MY SHORT ANSWER: "character" in the definition of "string" corresponds to C11 §3.7.1.
The explanation is as follows:
CHARACTER IN THE ABSTRACT
A symbol is an intellectual convention of human beings.
So, the abstract symbol for "A" is a convention which we use to recognize different "graphs" like A, A, A, as being all "the same" thing (a piece of information, say).
The information is represented, then, by ordered and finite sequences of a set of (abstract) characters.
Next, you need to encode these abstract symbols to make their representation in information systems (computers) possible.
This is done, in general, by defining a one-to-one correspondence between integer numbers (called code-points) and the corresponding characters in a given set.
An encoding scheme is a way in which a set of characters is associated with certain numbers (code-points).
This encoding can change from one system to another ("A" does not have the same encoding in EBCDIC as in ASCII).
Finally, we associate a "graph" with each character and code-point, that is, a written representation, which can eventually be printed or shown on screen.
The shape of the graph can change according to a font design, so it is not a good starting point to define the term "character".
CHARACTER IN C
In 3.7.1 it seems that C11 refers to another meaning of "character", intended as shorthand for "single-byte character". It is talking about code-points (that is, integer numbers associated with abstract characters of a given set) that fit in exactly 1 byte.
In this case, we need the definition of Byte.
In C, a byte is a unit of information storage, consisting of an ordered sequence of n bits, where n is an integer greater than or equal to 8 (in practice it is usually exactly 8); you can find its value by checking the constant CHAR_BIT in <limits.h>.
There are data types whose size is exactly 1 byte: char, unsigned char, signed char.
The range of values of unsigned char is exactly 0...2^n - 1, where n is CHAR_BIT.
The range of values of char coincides with that of signed char or unsigned char, but C11 doesn't say which of them corresponds to char.
Moreover, in any case, the type char must be considered different from signed char and unsigned char.
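A small sketch of these facts (just printing the values; the numbers themselves are implementation-specific):

#include <limits.h>
#include <stdio.h>

int main(void) {
    // sizeof(char) is 1 by definition; a "byte" in C is CHAR_BIT bits wide.
    printf("CHAR_BIT = %d\n", CHAR_BIT);              // at least 8
    printf("sizeof(char) = %zu\n", sizeof(char));     // always 1
    printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);  // 2^CHAR_BIT - 1
    return 0;
}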
A string is, now, a sequence of objects of type char.
WHY CHAR?
The standard defines the representation of characters in terms of char:
(6.2.5.3)
An object declared as type char is large enough to store any member of the basic
execution character set. If a member of the basic execution character set is stored in a
char object, its value is guaranteed to be nonnegative. If any other character is stored in
a char object, the resulting value is implementation-defined but shall be within the range
of values that can be represented in that type.
STRING
Now, a string in C is a contiguous sequence of (single-byte) characters terminated by the null character, which in C is always 0.
This definition can be understood again in an abstract way; however, in §7.1.1 ¶1 the text talks about the address of the string, so it must be understood that a "string" is an object in memory.
A "string" object is, then, a contiguous sequence of "bytes", each one holding the code-point of a character.
This is derived from the fact that a "character" is intended to fit exactly in 1 byte.
It is represented in C by an array of type char, whose last element is 0.
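A minimal sketch of that representation (showing that the terminating null character is part of the array):

#include <stdio.h>

int main(void) {
    // Two equivalent ways to build the string "Hi"; the final 0 is the null character.
    char s1[] = "Hi";
    char s2[] = { 'H', 'i', 0 };
    printf("%s %s\n", s1, s2);
    printf("sizeof s1 = %zu\n", sizeof s1);  // 3: two characters plus the null character
    return 0;
}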
MULTIBYTE CHARACTER
The definition of "multibyte" is complicated.
It refers to certain encoding schemes that use a variable number of bytes to represent an (abstract) character.
You need information about the execution character sets in order to properly handle multibyte character sets.
However, even if you have a multibyte character, it is still represented in memory as a sequence of bytes.
That means that you will represent a multibyte string again as an array of char.
The way in which the execution system interprets these bytes is a different issue.
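For example, assuming the execution character set is UTF-8 (an assumption; the standard does not require any particular encoding), a multibyte string is still stored as a plain array of char:

#include <stdio.h>
#include <string.h>

int main(void) {
    char s[] = "a\xC3\xA9";               // "aé" if the execution encoding is UTF-8
    printf("strlen = %zu\n", strlen(s));  // 3 bytes, even though only 2 characters are displayed
    return 0;
}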
WIDE CHARACTER
A wide character is an element of another set of (abstract) characters, different from those represented by the type char.
It is intended that the set of "wide characters" be larger than the set of "single-byte characters".
But this is not necessarily the case.
The relevant facts of the "wide characters" are the following:
The set of "wide characters", whichever they are, can be represented by the range of values of the type wchar_t.
These characters can be different from those represented in the type char.
A "wide character" can use more than 1 byte storage.
A "wide string" is a null-terminated contiguous sequence of "wide characters".
Thus, a "wide string" is a different object than a "string".
CONCLUSION
A string has nothing to do with "wide" characters, but only "single-byte characters".
A string is a null-terminated contiguous sequence of "bytes", which, in turn, means objects of one of the character types (char, signed char, unsigned char), corresponding to code-points of an abstract character set that fit in 1 byte.

Related

How do I compare single multibyte character constants cross-platform in C?

In my previous post I found a solution to do this using C++ strings, but I wonder if there would be a solution using char's in C as well.
My current solution uses str.compare() and size() of a character string as seen in my previous post.
Now, since I only use one (multibyte) character in the std::string, would it be possible to achieve the same using a char?
For example, if( str[i] == '¶' )? How do I achieve that using char's?
(edit: made a typo on SO for the comparison operator, as pointed out in the comments)
How do I compare single multibyte character constants cross-platform in C?
You seem to mean an integer character constant expressed using a single multibyte character. The first thing to recognize, then, is that in C, integer character constants (examples: 'c', '¶') have type int, not char. The primary relevant section of C17 is paragraph 6.4.4.4/10:
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined. If an integer character constant contains a single character or escape sequence, its value is the one that results when an object with type char whose value is that of the single character or escape sequence is converted to type int.
(Emphasis added.)
Note well that "implementation defined" implies limited portability from the get-go. Even if we rule out implementations defining perverse behavior, we still have alternatives such as
the implementation rejects integer character constants containing multibyte source characters; or
the implementation rejects integer character constants that do not map to a single-byte execution character; or
the implementation maps source multibyte characters via a bytewise identity mapping, regardless of the byte sequence's significance in the execution character set.
That is not an exhaustive list.
You can certainly compare integer character constants with each other, but if they map to multibyte execution characters then you cannot usefully compare them to individual chars.
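To see concretely why the direct comparison cannot work, here is a minimal sketch (it assumes the source and execution encodings are UTF-8, so '¶' occupies two bytes; GCC will also emit a -Wmultichar warning here):

#include <stdio.h>
#include <string.h>

int main(void) {
    printf("sizeof '¶' = %zu\n", sizeof '¶');      // sizeof(int): the constant has type int
    printf("strlen(\"¶\") = %zu\n", strlen("¶"));  // 2 bytes under UTF-8, so it never fits in one char
    return 0;
}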
Inasmuch as your intended application appears to be to locate individual multibyte characters in a C string, the most natural thing to do appears to be to implement a C analog of your C++ approach, using the standard strstr() function. Example:
#include <stdio.h>
#include <string.h>

int main(void) {
    char str[] = "Some string ¶ some text ¶ to see";
    char char_to_compare[] = "¶";
    int char_size = sizeof(char_to_compare) - 1; // don't count the string terminator

    for (char *location = strstr(str, char_to_compare);
         location;
         location = strstr(location + char_size, char_to_compare)) {
        puts("Found!");
    }
    return 0;
}
That will do the right thing in many cases, but it still might be wrong for some characters in some execution character encodings, such as those encodings featuring multiple shift states.
If you want robust handling for characters outside the basic execution character set, then you would be well advised to take control of the in-memory encoding, and to perform appropriate conversions to, operations on, and conversions from that encoding. This is largely what ICU does, for example.
I believe you meant something like this:
char a = '¶';
char b = '¶';
if (a == b) /*do something*/;
The above may or may not work: if the value of '¶' is bigger than the char range, it will overflow, causing a and b to store a value different from that of '¶'. Regardless of which value they hold, they may well both end up holding the same value.
Remember, the char type is simply a single-byte-wide (8-bit) integer, so in order to work with multibyte characters and avoid overflow you just have to use a wider integer type (short, int, long...).
short a = '¶';
short b = '¶';
if (a == b) /*do something*/;
From personal experience, I've also noticed that sometimes your environment may try to use a different character encoding than what you need. For example, trying to print the 'á' character will actually produce something else.
unsigned char x = 'á';
putchar(x); //actually prints character 'ß' in console.
putchar(160); //will print 'á'.
This happens because the console uses an Extended ASCII encoding, while my coding environment actually uses Unicode, parsing a value of 225 for 'á' instead of the value of 160 that I want.

Subtlety in conversion of characters to integers

Can someone explain clearly what these lines from K&R actually mean:
"When a char is converted to an int, can it ever produce a negative
integer? The answer varies from machine to machine. The definition of
C guarantees that any character in the machine's standard printing
character set will never be negative, but arbitrary bit patterns
stored in character variables may appear to be negative on some
machines, yet positive on others".
There are two more-or-less relevant parts to the standard — ISO/IEC 9899:2011.
6.2.5 Types
¶3 An object declared as type char is large enough to store any member of the basic
execution character set. If a member of the basic execution character set is stored in a
char object, its value is guaranteed to be nonnegative. If any other character is stored in
a char object, the resulting value is implementation-defined but shall be within the range
of values that can be represented in that type.
¶15 The three types char, signed char, and unsigned char are collectively called
the character types. The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.45)
45) CHAR_MIN, defined in <limits.h>, will have one of the values 0 or SCHAR_MIN, and this can be
used to distinguish the two options. Irrespective of the choice made, char is a separate type from the
other two and is not compatible with either.
That defines what your quote from K&R states. The other relevant part defines what the basic execution character set is.
5.2.1 Character sets
¶1 Two sets of characters and their associated collating sequences shall be defined: the set in
which source files are written (the source character set), and the set interpreted in the
execution environment (the execution character set). Each set is further divided into a
basic character set, whose contents are given by this subclause, and a set of zero or more
locale-specific members (which are not members of the basic character set) called
extended characters. The combined set is also called the extended character set. The
values of the members of the execution character set are implementation-defined.
¶2 In a character constant or string literal, members of the execution character set shall be
represented by corresponding members of the source character set or by escape
sequences consisting of the backslash \ followed by one or more characters. A byte with
all bits set to 0, called the null character, shall exist in the basic execution character set; it
is used to terminate a character string.
¶3 Both the basic source and basic execution character sets shall have the following
members: the 26 uppercase letters of the Latin alphabet
A B C D E F G H I J K L M
N O P Q R S T U V W X Y Z
the 26 lowercase letters of the Latin alphabet
a b c d e f g h i j k l m
n o p q r s t u v w x y z
the 10 decimal digits
0 1 2 3 4 5 6 7 8 9
the following 29 graphic characters
! " # % & ' ( ) * + , - . / :
; < = > ? [ \ ] ^ _ { | } ~
the space character, and control characters representing horizontal tab, vertical tab, and
form feed. The representation of each member of the source and execution basic
character sets shall fit in a byte. In both the source and execution basic character sets, the
value of each character after 0 in the above list of decimal digits shall be one greater than
the value of the previous. In source files, there shall be some way of indicating the end of
each line of text; this International Standard treats such an end-of-line indicator as if it
were a single new-line character. In the basic execution character set, there shall be
control characters representing alert, backspace, carriage return, and new line. If any
other characters are encountered in a source file (except in an identifier, a character
constant, a string literal, a header name, a comment, or a preprocessing token that is never
converted to a token), the behavior is undefined.
¶4 A letter is an uppercase letter or a lowercase letter as defined above; in this International
Standard the term does not include other characters that are letters in other alphabets.
¶5 The universal character name construct provides a way to name other characters.
One consequence of these rules is that if a machine uses 8-bit characters and EBCDIC encoding, then plain char must be an unsigned type, since the digits have codes 240..249 in EBCDIC.
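A quick way to check which choice an implementation made is to look at CHAR_MIN from <limits.h>, as footnote 45 suggests (a minimal sketch; the output is of course implementation-specific):

#include <limits.h>
#include <stdio.h>

int main(void) {
    // CHAR_MIN is 0 if plain char behaves like unsigned char, SCHAR_MIN if like signed char.
    if (CHAR_MIN == 0)
        puts("plain char is unsigned on this implementation");
    else
        puts("plain char is signed on this implementation");
    return 0;
}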
You need to understand several things first.
If I take an 8-bit value and extend it to a 16-bit value, normally you would imagine just adding a bunch of 0's on the left. For example, if I have the 8-bit value 23, in binary that's 00010111, so as a 16-bit number it's 0000000000010111, which is also 23.
It turns out that negative numbers always have a 1 in the high-order bit. (There might be weird machines for which this is not true, but it's true for any machine you're likely to use.) For example, the 8-bit value -40 is represented in binary as 11011000.
So when you convert a signed 8-bit value to a 16-bit value, if the high-order bit is 1 (that is, if the number is negative), you do not add a bunch of 0-s on the left, you add a bunch of 1's instead. For example, going back to -40, we would convert 11011000 to 1111111111011000, which is the 16-bit representation of -40.
There are also unsigned numbers, that are never negative. For example, the 8-bit unsigned number 216 is represented as 11011000. (You will notice that this is the same bit pattern as the signed number -40 had.) When you convert an unsigned 8-bit number to 16 bits, you add a bunch of 0's no matter what. For example, you would convert 11011000 to 0000000011011000, which is the 16-bit representation of 216.
So, putting this all together, if you're converting an 8-bit number to 16 (or more) bits, you have to look at two things. First, is the number signed or unsigned? If it's unsigned, just add a bunch of 0's on the left. But if it's signed, you have to look at the high-order bit of the 8-bit number. If it's 0 (if the number is positive), add a bunch of 0's on the left. But if it's 1 (if the number is negative), add a bunch of 1's on the left. (This whole process is known as sign extension.)
The ordinary ASCII characters (like 'A' and '1' and '$') all have values less than 128, which means that their high-order bit is always 0. But "special" characters from the "Latin-1" or UTF-8 character sets have values greater than 128. For this reason they're sometimes also called "high bit" or "eighth bit" characters. For example, the Latin-1 character Ø (O with a slash through it) has the value 216.
Finally, although type char in C is typically an 8-bit type, the C Standard does not specify whether it is signed or unsigned.
Putting this all together, what Kernighan and Ritchie are saying is that when we convert a char to a 16- or 32-bit integer, we don't quite know whether to sign-extend it. If I'm on a machine where type char is unsigned, and I take the character Ø and convert it to an int, I'll probably get the value 216. But if I'm on a machine where type char is signed, I'll probably get the number -40.
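A minimal sketch of that behavior (whether the first line shows 216 or -40 depends on whether plain char is signed on your implementation; it assumes a Latin-1-style single-byte encoding where Ø is 216):

#include <stdio.h>

int main(void) {
    char c = (char)216;          // the byte that holds Ø in Latin-1
    unsigned char u = 216;
    int from_char = c;           // sign-extended if char is signed
    int from_unsigned = u;       // always zero-extended
    printf("from plain char:    %d\n", from_char);      // 216 or -40
    printf("from unsigned char: %d\n", from_unsigned);  // always 216
    return 0;
}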

Are multi-character character constants valid in C? Maybe in MS VC?

While reviewing some WINAPI code intended to compile in MS Visual C++, I found the following (simplified):
char buf[4];
// buf gets filled ...
switch ((buf[0] << 8) + buf[1]) {
case 'CT':
    /* ... */
case 'SY':
    /* ... */
default:
    break;
}
Assuming 16 bit chars, I can understand why the shift of buf[0] and addition of buf[1]. What I don't gather is how the comparisons in the case clauses are intended to work.
I don't have access to Visual C++ and, of course, those yield multi-character character constant [-Wmultichar] warnings on gcc/MingW.
This is a non-portable way of storing more than one char in one int. The comparison then happens on the resulting int values, as usual.
Note: think of the final int value as the concatenation of the ASCII values of the individual chars.
Following the wiki article, (emphasis mine)
[...] Multi-character constants (e.g. 'xy') are valid, although rarely useful — they let one store several characters in an integer (e.g. 4 ASCII characters can fit in a 32-bit integer, 8 in a 64-bit one). Since the order in which the characters are packed into an int is not specified, portable use of multi-character constants is difficult.
Related, C11, chapter §6.4.4.4/p10
An integer character constant has type int. The value of an integer character constant
containing a single character that maps to a single-byte execution character is the
numerical value of the representation of the mapped character interpreted as an integer.
The value of an integer character constant containing more than one character (e.g.,
'ab'), or containing a character or escape sequence that does not map to a single-byte
execution character, is implementation-defined. [....]
Yes, they are valid; their type is int and their value is implementation-defined.
From C11 draft, 6.4.4.4p10:
An integer character constant has type int. The value of an integer
character constant containing a single character that maps to a
single-byte execution character is the numerical value of the
representation of the mapped character interpreted as an integer. The
value of an integer character constant containing more than one
character (e.g., 'ab'), or containing a character or escape sequence
that does not map to a single-byte execution character, is
implementation-defined.
(emphasis added)
GCC is being cautious, and warns to let you know in case you have used it unintentionally.
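A minimal sketch of what such a constant looks like at run time (the value and packing order are implementation-defined; the comments show what GCC on a typical machine happens to produce, not a guarantee):

#include <stdio.h>

int main(void) {
    int ct = 'CT';                               // implementation-defined value; GCC packs it as ('C' << 8) | 'T'
    printf("'CT' = 0x%X\n", (unsigned)ct);       // e.g. 0x4354
    printf("sizeof 'CT' = %zu\n", sizeof 'CT');  // sizeof(int), since the constant has type int
    return 0;
}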

C99 Standard - fprintf - s conversion with precision

Let's assume there's only the C99 Standard paper, and the printf library function needs to be implemented according to this standard to work with UTF-16 encoding. Could you please clarify the expected behavior of the s conversion when a precision is specified?
C99 Standard (7.19.6.1) for s conversion says:
If no l length modifier is present, the argument shall be a pointer to the initial element of an array of character type. Characters from the array are written up to (but not including) the terminating null character. If the precision is specified, no more than that many bytes are written. If the precision is not specified or is greater than the size of the array, the array shall contain a null character.
If an l length modifier is present, the argument shall be a pointer to the initial element of an array of wchar_t type. Wide characters from the array are converted to multibyte characters (each as if by a call to the wcrtomb function, with the conversion state described by an mbstate_t object initialized to zero before the first wide character is converted) up to and including a terminating null wide character. The resulting multibyte characters are written up to (but not including) the terminating null character (byte). If no precision is specified, the array shall contain a null wide character. If a precision is specified, no more than that many bytes are written (including shift sequences, if any), and the array shall contain a null wide character if, to equal the multibyte character sequence length given by the precision, the function would need to access a wide character one past the end of the array. In no case is a partial multibyte character written.
I don't quite understand this paragraph in general and the statement "If a precision is specified, no more than that many bytes are written" in particular.
For example, let's take UTF-16 string "TEST" (byte sequence: 0x54, 0x00, 0x45, 0x00, 0x53, 0x00, 0x54, 0x00).
What is expected to be written to the output buffer in the following cases:
If precision is 3
If precision is 9 (one byte more than string length)
If precision is 12 (several bytes more than string length)
Then there's also "Wide characters from the array are converted to multibyte characters". Does it mean UTF-16 should be converted to UTF-8 first? This is pretty strange given that I expect to work with UTF-16 only.
Converting a comment into a slightly expanded answer.
What is the value of CHAR_BIT in your implementation?
If CHAR_BIT == 8, you can't handle UTF-16 with %s; you'd use %ls and you'd pass a wchar_t * as the corresponding argument. You'd then have to read the second paragraph of the specification.
If CHAR_BIT == 16, then you can't have an odd number of octets in the data. You then need to know about how wchar_t relates to char (are they the same size? do they have the same signedness?) and interpret both paragraphs to come up with a uniform effect — unless you decided to have wchar_t represent UTF-32.
The key point is that UTF-16 cannot be handled as a C string if CHAR_BIT == 8 because there are too many useful characters that are encoded with one byte holding zero, but those zero bytes mark the end of a null-terminated string. To handle UTF-16, either the plain char type has to be a 16-bit (or larger) type (so CHAR_BIT > 8), or you have to use wchar_t (and sizeof(wchar_t) > sizeof(char)).
Note that the specification expects that wide characters will be converted to a suitable multibyte representation.
If you want wide characters output natively, you have to use fwprintf() and related functions from <wchar.h>, first defined in C99. The specification there has a lot in common with the specification of fprintf(), but there are (unsurprisingly) important differences.
7.29.2.1 The fwprintf function
…
s
If no l length modifier is present, the argument shall be a pointer to the initial
element of a character array containing a multibyte character sequence
beginning in the initial shift state. Characters from the array are converted as
if by repeated calls to the mbrtowc function, with the conversion state
described by an mbstate_t object initialized to zero before the first
multibyte character is converted, and written up to (but not including) the
terminating null wide character. If the precision is specified, no more than
that many wide characters are written. If the precision is not specified or is
greater than the size of the converted array, the converted array shall contain a
null wide character.
If an l length modifier is present, the argument shall be a pointer to the initial
element of an array of wchar_t type. Wide characters from the array are
written up to (but not including) a terminating null wide character. If the
precision is specified, no more than that many wide characters are written. If
the precision is not specified or is greater than the size of the array, the array
shall contain a null wide character.
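As a rough illustration of the wide-string path (a minimal sketch; it assumes the environment provides a locale in which the conversion makes sense, which the standard does not guarantee):

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "");              // pick up the environment's locale
    wchar_t msg[] = L"TEST";
    fwprintf(stdout, L"%.3ls\n", msg);  // here the precision counts wide characters: prints "TES"
    return 0;
}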
wchar_t is not meant to be used for UTF-16, only for implementation-defined fixed-width encodings depending on the current locale. There's simply no sane way to support a variable-length encoding with the wide character API. Likewise, the multi-byte representation used by functions like printf or wcrtomb is implementation-defined. If you want to write portable code using Unicode, you can't rely on the wide character API. Use a library or roll your own code.
To answer your question: fprintf with the l modifier accepts a wide character string in the implementation-defined encoding specified by the current locale. If wchar_t is 16 bits, this encoding might be a bastardization of UTF-16, but as I mentioned above, there's no way to properly support UTF-16 surrogates. This wchar_t string is then converted to a multi-byte char string in an implementation-defined encoding. This might or might not be UTF-8. The specified precision limits the number of chars in the output string with the added restriction that no partial multi-byte characters are written.
Here's an example. Let's assume that the wide character encoding is UTF-32 with 32-bit wchar_t and that the multi-byte encoding is UTF-8 (like on Linux with an appropriate locale). The following code
wchar_t w[] = { 0x1F600, 0 }; // U+1F600 GRINNING FACE
printf("%.3ls", w);
will print nothing at all since the resulting UTF-8 sequence has four bytes. Only if you specify a precision of at least four
printf("%.4ls", w);
will the character be printed.
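For reference, a fuller version of that sketch (assuming a system where setlocale can select a UTF-8 locale, so that wchar_t holds UTF-32 and the multibyte encoding is UTF-8, as on a typical Linux setup):

#include <locale.h>
#include <stdio.h>
#include <wchar.h>

int main(void) {
    setlocale(LC_ALL, "en_US.UTF-8");  // assumption: this locale is available
    wchar_t w[] = { 0x1F600, 0 };      // U+1F600 GRINNING FACE
    printf("[%.3ls]\n", w);            // prints "[]": the 4-byte UTF-8 sequence does not fit in 3 bytes
    printf("[%.4ls]\n", w);            // prints the emoji between the brackets
    return 0;
}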
EDIT: To answer your second question, no, printf should never write a null character. The sentence only means that in certain cases, a null character is required to specify the end of the string and avoid buffer over-reads.

What does \x mean in C/C++?

Example:
char arr[] = "\xeb\x2a";
BTW, are the following the same?
"\xeb\x2a" vs. '\xeb\x2a'
\x indicates a hexadecimal character escape. It's used to specify characters that aren't typeable (like a null '\x00').
And "\xeb\x2a" is a literal string (type is char *, 3 bytes, null-terminated), and '\xeb\x2a' is a character constant (type is int, 2 bytes, not null-terminated, and is just another way to write 0xEB2A or 60202 or 0165452). Not the same :)
As others have said, the \x is an escape sequence that starts a "hexadecimal-escape-sequence".
Some further details from the C99 standard:
When used inside a set of single-quotes (') the characters are part of an "integer character constant" which is (6.4.4.4/2 "Character constants"):
a sequence of one or more multibyte characters enclosed in single-quotes, as in 'x'.
and
An integer character constant has type int. The value of an integer character constant containing a single character that maps to a single-byte execution character is the numerical value of the representation of the mapped character interpreted as an integer. The value of an integer character constant containing more than one character (e.g., 'ab'), or containing a character or escape sequence that does not map to a single-byte execution character, is implementation-defined.
So the sequence in your example of '\xeb\x2a' is an implementation defined value. It's likely to be the int value 0xeb2a or 0x2aeb depending on whether the target platform is big-endian or little-endian, but you'd have to look at your compiler's documentation to know for certain.
When used inside a set of double-quotes (") the characters specified by the hex-escape-sequence are part of a null-terminated string literal.
From the C99 standard 6.4.5/3 "String literals":
The same considerations apply to each element of the sequence in a character string literal or a wide string literal as if it were in an integer character constant or a wide character constant, except that the single-quote ' is representable either by itself or by the escape sequence \', but the double-quote " shall be represented by the escape sequence \".
Additional info:
In my opinion, you should avoid using 'multi-character' constants. There are only a few situations where they provide any value over using a regular, old int constant. For example, '\xeb\x2a' could more portably be specified as 0xeb2a or 0x2aeb, depending on what value you really wanted.
One area that I've found multi-character constants to be of some use is to come up with clever enum values that can be recognized in a debugger or memory dump:
enum CommandId {
    CMD_ID_READ  = 'read',
    CMD_ID_WRITE = 'writ',
    CMD_ID_DEL   = 'del ',
    CMD_ID_FOO   = 'foo '
};
There are few portability problems with the above (other than platforms that have small ints or warnings that might be spewed). Whether the characters end up in the enum values in little- or big-endian form, the code will still work (unless you're doing something else unholy with the enum values). If the characters end up in the value using an endianness that wasn't what you expected, it might make the values less easy to read in a debugger, but the 'correctness' isn't affected.
When you say:
BTW, are these the same:
"\xeb\x2a" vs '\xeb\x2a'
They are in fact not. The first creates a character string literal, terminated with a zero byte, containing the two characters whose hex representation you provide. The second creates an integer constant.
It's a special escape indicating that the characters that follow form the hexadecimal code of a character.
http://www.austincc.edu/rickster/COSC1320/handouts/escchar.htm
The \x means it's a hex character escape. So \xeb would mean character eb in hex, or 235 in decimal. See http://msdn.microsoft.com/en-us/library/6aw8xdf2.aspx for more information.
As for the second, no, they are not the same. The double-quotes, ", means it's a string of characters, a null-terminated character array, whereas a single quote, ', means it's a single character, the byte that character represents.
\x allows you to specify the character by its hexadecimal code.
This allows you to specify characters that are normally not printable (some of which have special escape sequences predefined, such as '\n'=newline, '\t'=tab and '\b'=backspace).
A useful website is here.
And I quote:
x Unsigned hexadecimal integer
That way, your \xeb is like 235 in decimal.
