When dynamically allocating chars, I've always done it like this:
char *pCh = malloc(NUM_CHARS * sizeof(char));
I've recently been told, however, that using sizeof(char) is redundant and unnecessary because, "by definition, the size of a char is one byte," so I should/could write the above line like this:
char *pCh = malloc(NUM_CHARS);
My understanding is the size of a char depends on the native character set that is being used on the target computer. For example, if the native character set is ASCII, a char is one byte (8 bits), and if the native character set is UNICODE a char will necessarily require more bytes (> 8 bits).
To provide maximum portability, wouldn't it be necessary to use sizeof(char), as malloc simply allocates 8-bit bytes? Am I misunderstanding malloc and sizeof(char)?
Yes, it is redundant since the language standard specifies that sizeof (char) is 1. This is because that is the unit in which things are measured, so of course the size of the unit itself must be 1.
Life becomes strange with units defined in terms of themselves; anything else simply doesn't make sense. Many people seem to want to assume that there are 8-bit bytes and that sizeof tells them how many such bytes there are in a particular value. That is wrong; that's simply not how it works. It's true that there can be platforms with characters larger than 8 bits; that's why we have CHAR_BIT.
Typically you "know" when you're allocating characters anyway, but if you really want to include sizeof, consider making it use the pointer instead:
char *pCh = malloc(NUM_CHARS * sizeof *pCh);
This "locks" the unit size of the thing being allocated to the pointer that is used to store the result of the allocation. These two types should match; if you ever see code like this:
int *numbers = malloc(42 * sizeof (float));
that is a huge warning signal. By using the pointer from the left-hand side in the sizeof, you make that type of error impossible, which I consider a big win:
int *numbers = malloc(42 * sizeof *numbers);
Also, if you change the name of the pointer, the malloc() will likely no longer compile, whereas it would if you had the name of the (wrong) basic type in there. There is a slight risk that if you forget the asterisk (and write sizeof numbers instead of sizeof *numbers) you won't get what you want. In practice this never seems to happen for me, since the asterisk is pretty well established as part of this pattern.
Also, this usage relies on (and emphasizes) the fact that sizeof is not a function, since no ()s are needed around the pointer de-referencing expression. This is a nice bonus, since many people seem to want to deny this. :)
I find this pattern highly satisfying and recommend it to everyone.
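For what it's worth, here is a minimal compilable sketch of the pattern; the struct type and the element counts are invented for the example:
#include <stdlib.h>

struct point { double x, y; };   /* made-up type, just to show a non-char case */

int main(void)
{
    /* The element size is taken from the pointer itself, so the expression
       stays correct even if the declared type of 'points' later changes. */
    struct point *points = malloc(42 * sizeof *points);
    char *pCh = malloc(100 * sizeof *pCh);   /* sizeof *pCh is 1, so it's harmless */

    if (points == NULL || pCh == NULL) {
        free(points);
        free(pCh);
        return 1;
    }

    free(points);
    free(pCh);
    return 0;
}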
The C99 draft standard section 6.5.3.4 The sizeof operator paragraph 3 states:
When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1. [...]
In the C11 draft standard it is paragraph 4 but the wording is the same. So NUM_CHARS * sizeof(char) should be equivalent to NUM_CHARS.
We can see from the definition of byte in section 3.6 that it is an:
addressable unit of data storage large enough to hold any member of the basic character set of the execution environment
and Note 2 says:
A byte is composed of a contiguous sequence of bits, the number of which is implementation defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
The C specification states that sizeof(char) is 1, so as long as you are dealing with conforming implementations of C it is redundant.
The size unit used by malloc is the same: malloc(120) allocates space for 120 chars.
A char must be at least 8 bits, but may be larger.
sizeof(char) will always return 1, so it doesn't matter whether you use it or not; it will not change anything. You may be confusing this with Unicode wide characters, which typically occupy two or more bytes, but those have a different type, wchar_t, so you should use sizeof in that case.
If you are working on a system where a byte is defined to have 16 bits, then sizeof(char) would still return 1, as that is the allocation unit of the underlying architecture: one byte of 16 bits.
Allocation sizes are always measured in units of char, which has size 1 by definition. If you are on a 9-bit machine, malloc understands its argument as a number of 9-bit bytes.
sizeof(char) is always 1, but not because char is always one byte (it needn't be), but rather because the sizeof operator returns the object/type size in units of char.
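If you want to see these values for your own implementation, a small sketch along these lines will print them (the output is implementation-specific):
#include <limits.h>
#include <stdio.h>
#include <wchar.h>

int main(void)
{
    printf("CHAR_BIT        = %d\n", CHAR_BIT);           /* bits per byte, at least 8 */
    printf("sizeof(char)    = %zu\n", sizeof(char));      /* always 1, by definition */
    printf("sizeof(wchar_t) = %zu\n", sizeof(wchar_t));   /* implementation-defined */
    return 0;
}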
Related
char array1[10];
scanf_s("%9s", array1, (unsigned)_countof(array1));
What does (unsigned)_countof(array1) mean?
And how is that different from using scanf_s("%9s", array1, sizeof(array1));?
Both worked, but I want to understand this more clearly.
What's the correct way to use scanf_s in using string arrays?
scanf_s() with "%9s" is not for string arrays. It is for reading text and saving it as a single string, which is a character array terminated by a null character.
Its proper use is complicated by the fact that MSVC expects a different type for the count argument than C11 does. In C11 and later, scanf_s() is an optional part of the standard library and lacks wide acceptance.
// MSVC - type unsigned expected, hence the cast.
scanf_s("%9s", array1, (unsigned)_countof(array1));
// C11 - type size_t expected
scanf_s("%9s", array1, sizeof array1 / sizeof array1[0]);
Proper use of either form obliges code to check the return value of scanf_s():
if (1 == scanf_s("%9s", ...)) Success();
else Fail();
What does (unsigned)_countof(array1) mean?
_countof() is an MSVC macro equivalent to (sizeof(array) / sizeof(array[0])). When passed an array, it returns the count of elements in the array as type size_t. Casting to unsigned produces the type that MSVC's scanf_s() expects. The value may truncate due to the cast, a rare issue limited to huge arrays.
How is that different from using scanf_s("%9s", array1, sizeof(array1));?
The type of sizeof(array1) is size_t, not the unsigned that MSVC expects, which can lead to undefined behavior. When size_t is the same as unsigned that happens to be OK; otherwise it is a real problem.
The conceptual value is also wrong: it should be the array element count, not the byte size of the array. Either works here because sizeof(char) == 1, yet it would be wrong if the code were changed to, say, wide characters.
both worked
The code is not portable. It may have "worked" in OP's case, but it lacks portability.
Consider alternatives to scanf_s() and scanf(), such as fgets().
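As a rough sketch of the fgets() alternative (reading from stdin, with minimal error handling; the newline stripping is one common idiom, not the only one):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char array1[10];

    if (fgets(array1, sizeof array1, stdin) == NULL) {
        return 1;                              /* EOF or read error */
    }
    array1[strcspn(array1, "\n")] = '\0';      /* strip the trailing newline, if any */
    printf("read: %s\n", array1);
    return 0;
}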
sizeof and _countof are different.
In this case you should use _countof.
sizeof is an operator in C that returns the memory size occupied by a variable, in bytes.
In this case, array1 is an array of 10 chars, so its size is ten times the size of a char, and sizeof(char) is 1 byte. So the value of sizeof(array1) is 10 * 1 = 10 bytes.
Note that in C you can use sizeof on any variable (e.g. sizeof(array1)) or on any type (e.g. sizeof(int)).
_countof is a macro (only available on Microsoft platforms) that gives you the number of elements declared in an array variable. In this case your array1 variable has 10 elements, so _countof(array1) will be 10.
Now in your precise case this happens to be the same result because the size of each char is 1, but this wouldn't be the same if, for example, your variable was an array of ints.
int array2[10];
sizeof(array2); // = 40
_countof(array2); // = 10
An int is (usually) 4 bytes, so an array of 10 ints will have a size of 10*4 = 40 bytes. On the other hand, _countof will just count how many elements are declared in the array, and still return 10.
In fact, we can see that the definition of _countof is literally sizeof(array) divided by sizeof(array[0]), that is, by the size of an individual element.
Since scanf_s with "%s" expects a count of elements, you should be using _countof.
You might also want to be careful with sizeof (and _countof) on arrays, and avoid using them on pointers. Also be aware that _countof will not work outside of Visual Studio. See these related answers.
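If you need the element count outside Visual Studio, a common portable equivalent is a macro like the one sketched below; ARRAY_COUNT is a made-up name for this example, and like _countof it only works on true arrays, not on pointers:
#include <stdio.h>

/* Element count of a true array; do not use on pointers. */
#define ARRAY_COUNT(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    char array1[10];
    int  array2[10];

    printf("%zu %zu\n", ARRAY_COUNT(array1), ARRAY_COUNT(array2)); /* 10 10 */
    printf("%zu %zu\n", sizeof array1, sizeof array2);             /* 10 and (usually) 40 */
    return 0;
}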
This question is compiler-specific: GCC 11.1.0 on Arch Linux does not provide functions like scanf_s and _countof. These functions/macros are available with MSVC on Windows.
I read the Microsoft docs: _countof is a macro that uses sizeof() internally, and scanf_s is similar to scanf in standard C17.
What does (unsigned)_countof(array1) mean? And how is that different from using scanf_s("%9s", array1, sizeof(array1));? Both work, but I want to understand this more clearly.
Here the macro _countof is counting the length of the stack-allocated array, which in this case is 10.
_countof macro expands to:
#define _countof(x) (sizeof(x) / sizeof(x[0]))
// the operation goes for `array1` like this:
// 10 * 1 byte / 1 byte
// 1 byte is the sizeof char
// hence, 10 is returned as the result of the above operation
And (unsigned) is just type casting the value of the above operation.
NOTE: Because _countof uses sizeof(), do not use it inside a function to get the length of an array passed as a parameter: sizeof(arr_ptr) will return the size of the pointer, which is 8 bytes in my case (x86_64 architecture).
Conclusion:
There is no magic here: _countof just calls sizeof(). I would suggest using sizeof and scanf, because they are cross-platform and portable across C standards.
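To illustrate the pitfall from the note above, here is a short sketch; inside the function the parameter is really a pointer, so sizeof reports the pointer size (8 on a typical 64-bit platform), not the array size:
#include <stdio.h>

static void takes_pointer(char arr[10])                  /* adjusts to 'char *arr' */
{
    printf("inside the function: %zu\n", sizeof arr);    /* pointer size, e.g. 8 */
}

int main(void)
{
    char array1[10];

    printf("at the call site:    %zu\n", sizeof array1); /* 10 */
    takes_pointer(array1);
    return 0;
}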
Suppose I have a char array and an associated length: Arr and Len. Not a string, a char array. There is no null terminator. Yet I have to copy the array data into an integer of type int64_t. Here's how it's done, and for the purpose of this question I'm assuming Len will not exceed 8:
int64_t Word = 0;
memcpy(&Word, Arr, Len);
Is this actually the proper way to do this? I am copying memory, but is there a faster way to do it inline, for example? So Word can be register?
The problem with a type pun is that it assumes Arr has 8 bytes allocated. No, Arr has at most 8 bytes allocated. It could have 5, so casting Arr to an int64_t * and then dereferencing it could try to access three illegal bytes at the end, resulting in a segfault.
Is the proper way to do what I describe a memcpy() call, or is there a faster or better way?
Since you specify that Len is at most 8, it's reasonable to assume little-endian storage, i.e., the least significant byte at Arr[0].
If Len were fixed at 8, the compiler might be able to replace memcpy with a simple load from memory. That would also depend on whether the platform can do unaligned reads (if the compiler can't prove alignment), and it may involve a byte swap (e.g., the bswap instruction on x86-64) if the stored byte order doesn't match the architecture.
The fact that Len is a run-time value will likely result in an actual call to memcpy, and the overhead of the call itself is not trivial. All things considered, it's probably best just to handle this in an endian-independent way using byte arithmetic. The code assumes 8-bit bytes, which seems consistent with your question.
uint64_t Word = 0;
while (Len--)
Word = (Word << 8) | Arr[Len];
On more exotic platforms, where CHAR_BIT > 8, you can replace the right-hand side of the OR expression with (Arr[Len] & 0xff). On platforms with 8-bit bytes (the norm) the mask is optimised away, so you might as well add it for completeness. Or just keep these issues in mind.
There are platforms with legal C implementations where char, short, int are 32-bit values, for example. These are quite common in the embedded world.
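Putting the pieces above together as a self-contained sketch: load_le64 is a made-up name, it treats Arr[0] as the least significant byte, and it masks each byte for the reasons just discussed:
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Assemble up to 8 bytes from Arr into a 64-bit value, Arr[0] least significant. */
static uint64_t load_le64(const unsigned char *Arr, size_t Len)
{
    uint64_t Word = 0;
    while (Len--)
        Word = (Word << 8) | (Arr[Len] & 0xff);
    return Word;
}

int main(void)
{
    const unsigned char Arr[] = { 0x01, 0x02, 0x03 };

    printf("0x%llx\n", (unsigned long long)load_le64(Arr, sizeof Arr)); /* prints 0x30201 */
    return 0;
}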
In C++, sizeof('a') == sizeof(char) == 1. This makes intuitive sense, since 'a' is a character literal, and sizeof(char) == 1 as defined by the standard.
In C however, sizeof('a') == sizeof(int). That is, it appears that C character literals are actually integers. Does anyone know why? I can find plenty of mentions of this C quirk but no explanation for why it exists.
A discussion on the same subject:
"More specifically the integral promotions. In K&R C it was virtually (?) impossible to use a character value without it being promoted to int first, so making character constant int in the first place eliminated that step. There were and still are multi character constants such as 'abcd' or however many will fit in an int."
The original question is "why?"
The reason is that the definition of a literal character has evolved and changed, while trying to remain backwards compatible with existing code.
In the dark days of early C there were no types at all. By the time I first learnt to program in C, types had been introduced, but functions didn't have prototypes to tell the caller what the argument types were. Instead it was standardised that everything passed as a parameter would either be the size of an int (this included all pointers) or it would be a double.
This meant that when you were writing the function, all the parameters that weren't double were stored on the stack as ints, no matter how you declared them, and the compiler put code in the function to handle this for you.
This made things somewhat inconsistent, so when K&R wrote their famous book, they put in the rule that a character literal would always be promoted to an int in any expression, not just a function parameter.
When the ANSI committee first standardised C, they changed this rule so that a character literal would simply be an int, since this seemed a simpler way of achieving the same thing.
When C++ was being designed, all functions were required to have full prototypes (this is still not required in C, although it is universally accepted as good practice). Because of this, it was decided that a character literal could be stored in a char. The advantage of this in C++ is that a function with a char parameter and a function with an int parameter have different signatures. This advantage is not the case in C.
This is why they are different. Evolution...
I don't know the specific reasons why a character literal in C is of type int. But in C++, there is a good reason not to go that way. Consider this:
void print(int);
void print(char);
print('a');
You would expect that the call to print selects the second version taking a char. Having a character literal being an int would make that impossible. Note that in C++ literals having more than one character still have type int, although their value is implementation defined. So, 'ab' has type int, while 'a' has type char.
using gcc on my MacBook, I try:
#include <stdio.h>
#define test(A) do { printf(#A ":\t%zu\n", sizeof(A)); } while (0)
int main(void){
test('a');
test("a");
test("");
test(char);
test(short);
test(int);
test(long);
test((char)0x0);
test((short)0x0);
test((int)0x0);
test((long)0x0);
return 0;
}
which when run gives:
'a': 4
"a": 2
"": 1
char: 1
short: 2
int: 4
long: 4
(char)0x0: 1
(short)0x0: 2
(int)0x0: 4
(long)0x0: 4
which suggests that a character is 8 bits, like you suspect, but a character literal is an int.
Back when C was being written, the PDP-11's MACRO-11 assembly language had:
MOV #'A, R0 // 8-bit character encoding for 'A' into 16 bit register
This kind of thing's quite common in assembly language - the low 8 bits will hold the character code, other bits cleared to 0. PDP-11 even had:
MOV #"AB, R0 // 16-bit character encoding for 'A' (low byte) and 'B'
This provided a convenient way to load two characters into the low and high bytes of the 16 bit register. You might then write those elsewhere, updating some textual data or screen memory.
So, the idea of characters being promoted to register size is quite normal and desirable. But, let's say you need to get 'A' into a register not as part of the hard-coded opcode, but from somewhere in main memory containing:
address: value
20: 'X'
21: 'A'
22: 'A'
23: 'X'
24: 0
25: 'A'
26: 'A'
27: 0
28: 'A'
If you want to read just an 'A' from this main memory into a register, which one would you read?
Some CPUs may only directly support reading a 16 bit value into a 16 bit register, which would mean a read at 20 or 22 would then require the bits from 'X' to be cleared out, and depending on the endianness of the CPU one or the other would need shifting into the low-order byte.
Some CPUs may require a memory-aligned read, which means that the lowest address involved must be a multiple of the data size: you might be able to read from addresses 24 and 25, but not 27 and 28.
So, a compiler generating code to get an 'A' into the register may prefer to waste a little extra memory and encode the value as 0 'A' or 'A' 0 - depending on endianness, and also ensuring it is aligned properly (i.e. not at an odd memory address).
My guess is that C simply carried this level of CPU-centric behaviour over, thinking of character constants as occupying register-sized chunks of memory, bearing out the common assessment of C as a "high-level assembler".
(See 6.3.3 on page 6-25 of http://www.dmv.net/dec/pdf/macro.pdf)
I remember reading K&R and seeing a code snippet that would read a character at a time until it hit EOF. Since all characters are valid characters to be in a file/input stream, this means that EOF cannot be any char value. What the code did was to put the read character into an int, then test for EOF, then convert to a char if it wasn't.
I realize this doesn't exactly answer your question, but it would make some sense for the rest of the character literals to be sizeof(int) if the EOF literal was.
int r;
char buffer[1024], *p; // don't use in production - buffer overflow likely
p = buffer;
while ((r = getc(file)) != EOF)
{
*(p++) = (char) r;
}
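For completeness, a variant sketch of the same idea with a bounds check and a terminating null added; stdin is used here just to keep it self-contained:
#include <stdio.h>

int main(void)
{
    char buffer[1024];
    size_t n = 0;
    int r;                                   /* int, so it can represent EOF as well as any char */

    while (n < sizeof buffer - 1 && (r = getc(stdin)) != EOF)
        buffer[n++] = (char) r;
    buffer[n] = '\0';

    printf("%zu characters read\n", n);
    return 0;
}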
I haven't seen a rationale for it (C char literals being int types), but here's something Stroustrup had to say about it (from Design and Evolution 11.2.1 - Fine-Grain Resolution):
In C, the type of a character literal such as 'a' is int. Surprisingly, giving 'a' type char in C++ doesn't cause any compatibility problems. Except for the pathological example sizeof('a'), every construct that can be expressed in both C and C++ gives the same result.
So for the most part, it should cause no problems.
The historical reason for this is that C, and its predecessor B, were originally developed on various models of DEC PDP minicomputers with various word sizes, which supported 8-bit ASCII but could only perform arithmetic on registers. (Not the PDP-11, however; that came later.) Early versions of C defined int to be the native word size of the machine, and any value smaller than an int needed to be widened to int in order to be passed to or from a function, or used in a bitwise, logical or arithmetic expression, because that was how the underlying hardware worked.
That is also why the integer promotion rules still say that any data type smaller than an int is promoted to int. C implementations are also allowed to use one's-complement math instead of two's-complement for similar historical reasons. The reason that octal character escapes and octal constants are first-class citizens compared to hex is likewise that those early DEC minicomputers had word sizes divisible into three-bit chunks but not four-bit nibbles.
I don't know, but I'm going to guess it was easier to implement it that way and it didn't really matter. It wasn't until C++ when the type could determine which function would get called that it needed to be fixed.
This is only tangential to the language spec, but in hardware the CPU usually only has one register size -- 32 bits, let's say -- and so whenever it actually works on a char (by adding, subtracting, or comparing it) there is an implicit conversion to int when it is loaded into the register. The compiler takes care of properly masking and shifting the number after each operation so that if you add, say, 2 to (unsigned char) 254, it'll wrap around to 0 instead of 256, but inside the silicon it is really an int until you save it back to memory.
It's sort of an academic point because the language could have specified an 8-bit literal type anyway, but in this case the language spec happens to reflect more closely what the CPU is really doing.
(x86 wonks may note that there is eg a native addh op that adds the short-wide registers in one step, but inside the RISC core this translates to two steps: add the numbers, then extend sign, like an add/extsh pair on the PowerPC)
This is the correct behavior, called "integral promotion". It can happen in other cases too (mainly binary operators, if I remember correctly).
EDIT: Just to be sure, I checked my copy of Expert C Programming: Deep Secrets, and I confirmed that a char literal does not start with a type int. It is initially of type char but when it is used in an expression, it is promoted to an int. The following is quoted from the book:
Character literals have type int and they get there by following the rules for promotion from type char. This is too briefly covered in K&R 1, on page 39 where it says:
Every char in an expression is converted into an int.... Notice that all float's in an expression are converted to double.... Since a function argument is an expression, type conversions also take place when arguments are passed to functions: in particular, char and short become int, float becomes double.
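A small C sketch that separates the two effects discussed above: the type of the character literal itself versus the promotion of a char object used in an expression (the 4s assume a typical 32-bit int):
#include <stdio.h>

int main(void)
{
    char c = 'a';

    printf("sizeof 'a' = %zu\n", sizeof 'a');   /* sizeof(int), e.g. 4: in C the literal is an int */
    printf("sizeof c   = %zu\n", sizeof c);     /* 1: the object has type char */
    printf("sizeof +c  = %zu\n", sizeof +c);    /* sizeof(int): unary + applies the integer promotions */
    return 0;
}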
I'm trying to get a better understanding of the C standard. In particular I am interested in how pointer arithmetic might work in an implementation for an unusual machine architecture.
Suppose I have a processor with 64 bit wide registers that is connected to RAM where each address corresponds to a cell 8 bits wide. An implementation for C for this machine defines CHAR_BIT to be equal to 8. Suppose I compile and execute the following lines of code:
char *pointer = 0;
pointer = pointer + 1;
After execution, pointer is equal to 1. This gives one the impression that in general data of type char corresponds to the smallest addressable unit of memory on the machine.
Now suppose I have a processor with 12 bit wide registers that is connected to RAM where each address corresponds to a cell 4 bits wide. An implementation of C for this machine defines CHAR_BIT to be equal to 12. Suppose the same lines of code are compiled and executed for this machine. Would pointer be equal to 3?
More generally, when you increment a pointer to a char, is the address equal to CHAR_BIT divided by the width of a memory cell on the machine?
Would pointer be equal to 3?
Well, the standard doesn't say how pointers are implemented. The standard tells you what is to happen when you use a pointer in a specific way, but not what the value of a pointer shall be.
All we know is that adding 1 to a char pointer will make the pointer point at the next char object, wherever that is. It says nothing about the pointer's value.
So when you say that
pointer = pointer + 1;
will make the pointer equal 1, that's wrong. The standard doesn't say anything about that.
On most systems a char is 8 bits and pointers are (virtual) memory addresses referencing 8-bit addressable memory locations. On such systems, incrementing a char pointer will increase the pointer value (i.e., the memory address) by 1. However, on unusual architectures there is no way to tell.
But if you have a system where each memory address references 4 bits and a char is 12 bits, it seems a good guess that ++pointer will increase the pointer by three.
Pointers are incremented by at least the width of the datatype they "point to", but are not guaranteed to be incremented by exactly that size.
For memory alignment purposes, a pointer may often be incremented to the next memory-word alignment boundary past the minimum width.
So, in general, you cannot assume this pointer to be equal to 3. It very well may be 3, 4, or some larger number.
Here is an example.
struct char_three {
char a;
char b;
char c;
};
struct char_three* my_pointer = 0;
my_pointer++;
/* I'd be shocked if my_pointer was now 3 */
Memory alignment is machine specific. One cannot generalize about it, except that most machines define a WORD as the first address that can be aligned to a memory fetch on the bus. Some machines can specify addresses that don't align with the bus fetches. In such a case, selecting two bytes that span the alignment may result in loading two WORDS.
Most systems don't accept WORD loads on non-aligned boundaries without complaining. This means that a bit of boilerplate assembly is applied to translate the fetch to the preceding WORD boundary, if maximum density is desired.
Most compilers prefer speed to maximum density of data, so they align their structured data to take advantage of WORD boundaries, avoiding the extra calculations. This means that in many cases, data that is not carefully aligned might contain "holes" of bytes that are not used.
If you are interested in details of the above summary, you can read up on Data Structure Alignment which will discuss alignment (and as a consequence) padding.
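As a concrete illustration of the padding and alignment described above (the exact numbers are implementation-defined; the comments describe a common layout on mainstream platforms):
#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

struct mixed {
    char c;     /* 1 byte, typically followed by 3 bytes of padding */
    int  i;     /* usually 4 bytes, aligned to a 4-byte boundary */
};

int main(void)
{
    printf("sizeof(struct mixed)      = %zu\n", sizeof(struct mixed));       /* often 8, not 5 */
    printf("alignof(struct mixed)     = %zu\n", alignof(struct mixed));      /* often 4 */
    printf("offsetof(struct mixed, i) = %zu\n", offsetof(struct mixed, i));  /* often 4, not 1 */
    return 0;
}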
char *pointer = 0;
After execution, pointer is equal to 1
Not necessarily.
This special case gives you a null pointer, since 0 is a null pointer constant. Strictly speaking, such a pointer is not supposed to point at a valid object. If you look at the actual address stored in the pointer, it could be anything.
Null pointers aside, the C language expects you to do pointer arithmetic by first pointing at an array. Or in case of char, you can also point at a chunk of generic data such as a struct. Everything else, like your example, is undefined behavior.
An implementation of C for this machine defines CHAR_BIT to be equal to 12
The C standard defines char to be the same as a byte, so your example is a bit weird and contradictory. Pointer arithmetic will always increase the pointer to point at the next object in the array. The standard doesn't really speak of the representation of addresses at all, but in your fictional example it would make sense for the address to increase by 12 bits, because that's the size of a char.
Fictional computers are quite meaningless to discuss even from a learning point-of-view. I'd advise to focus on real-world computers instead.
When you increment a pointer to a char, is the address equal to CHAR_BIT divided by the width of a memory cell on the machine?
On a "conventional" machine -- indeed on the vast majority of machines where C runs -- CHAR_BIT simply is the width of a memory cell on the machine, so the answer to the question is vacuously "yes" (since CHAR_BIT / CHAR_BIT is 1).
A machine with memory cells smaller than CHAR_BIT would be very, very strange -- arguably incompatible with C's definition.
C's definition says that:
sizeof(char) is exactly 1.
CHAR_BIT, the number of bits in a char, is at least 8. That is, as far as C is concerned, a byte may not be smaller than 8 bits. (It may be larger, and this is a surprise to many people, but it does not concern us here.)
There is a strong suggestion (if not an explicit requirement) that char (or "byte") is the machine's "minimum addressable unit" or some such.
So for a machine that can address 4 bits at a time, we would have to pick unnatural values for sizeof(char) and CHAR_BIT (which would otherwise probably want to be 2 and 4, respectively), and we would have to ignore the suggestion that type char is the machine's minimum addressable unit.
C does not mandate the internal representation (the bit pattern) of a pointer. The closest a portable C program can get to doing anything with the internal representation of a pointer value is to print it out using %p -- and that's explicitly defined to be implementation-defined.
So I think the only way to implement C on a "4 bit" machine would involve having the code
char a[10];
char *p = a;
p++;
generate instructions which actually increment the address behind p by 2.
It would then be an interesting question whether %p should print the actual, raw pointer value, or the value divided by 2.
It would also be lots of fun to watch the ensuing fireworks as too-clever programmers on such a machine used type punning techniques to get their hands on the internal value of pointers so that they could increment them by actually 1 -- not the 2 that "proper" additions of 1 would always generate -- such that they could amaze their friends by accessing the odd nybble of a byte, or confound the regulars on SO by asking questions about it. "I just incremented a char pointer by 1. Why is %p showing a value that's 2 greater?"
Seems like the confusion in this question comes from the fact that the word "byte" in the C standard doesn't have the typical definition (which is 8 bits). Specifically, the word "byte" in the C standard means a collection of bits, where the number of bits is specified by the implementation-defined constant CHAR_BIT. Furthermore, a "byte" as defined by the C standard is the smallest addressable object that a C program can access.
This leaves open the question as to whether there is a one-to-one correspondence between the C definition of "addressable", and the hardware's definition of "addressable". In other words, is it possible that the hardware can address objects that are smaller than a "byte"? If (as in the OP) a "byte" occupies 3 addresses, then that implies that "byte" accesses have an alignment restriction. Which is to say that 3 and 6 are valid "byte" addresses, but 4 and 5 are not. This is prohibited by section 6.2.8 which discusses the alignment of objects.
Which means that the architecture proposed by the OP is not supported by the C specification. In particular, an implementation may not have pointers that point to 4-bit objects when CHAR_BIT is equal to 12.
Here are the relevant sections from the C standard:
§3.6 The definition of "byte" as used in the standard
[A byte is an] addressable unit of data storage large enough to hold any member of the basic character set of the execution environment.
NOTE 1 It is possible to express the address of each individual byte of an object uniquely.
NOTE 2 A byte is composed of a contiguous sequence of bits, the number of which is implementation-defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order bit.
§5.2.4.2.1 describes CHAR_BIT as the
number of bits for smallest object that is not a bit-field (byte)
§6.2.6.1 restricts all objects that are larger than a char to be a multiple of CHAR_BIT bits:
[...]
Except for bit-fields, objects are composed of contiguous sequences of one or more bytes, the number, order, and encoding of which are either explicitly specified or implementation-defined.
[...] Values stored in non-bit-field objects of any other object type consist of n × CHAR_BIT bits, where n is the size of an object of that type, in bytes.
§6.2.8 restricts the alignment of objects
Complete object types have alignment requirements which place restrictions on the addresses at which objects of that type may be allocated. An alignment is an implementation-defined integer value representing the number of bytes between successive addresses at which a given object can be allocated.
Valid alignments include only those values returned by an _Alignof expression for fundamental types, plus an additional implementation-defined set of values, which may be empty. Every valid alignment value shall be a nonnegative integral power of two.
§6.5.3.4 specifies the sizeof of a char, and hence of a "byte"
When sizeof is applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1.
The following code fragment demonstrates an invariant of C pointer arithmetic -- no matter what CHAR_BIT is, no matter what the hardware least addressable unit is, and no matter what the actual bit representation of pointers is,
#include <assert.h>
int main(void)
{
T x[2]; // for any object type T whatsoever
assert(&x[1] - &x[0] == 1); // must be true
}
And since sizeof(char) == 1 by definition, this also means that
#include <assert.h>
int main(void)
{
T x[2]; // again for any object type T whatsoever
char *p = (char *)&x[0];
char *q = (char *)&x[1];
assert(q - p == sizeof(T)); // must be true
}
However, if you convert to integers before performing the subtraction, the invariant evaporates:
#include <assert.h>
#include <inttypes.h>
int main(void)
{
T x[2];
uintptr_t p = (uintptr_t)&x[0];
uintptr_t q = (uintptr_t)&x[1];
assert(q - p == sizeof(T)); // implementation-defined whether true
}
because the transformation performed by converting a pointer to an integer of the same size, or vice versa, is implementation-defined. I think it's required to be bijective, but I could be wrong about that, and it is definitely not required to preserve any of the above invariants.