Is it possible to do bitwise operations on multiple successive array elements?

I am working on an old implementation of AES I coded a few years ago, and I wanted to modify my ShiftRows function, which is very inefficient.
At the moment my ShiftRows basically just swaps the values of successive array elements (each represented by one byte) n times to perform a cyclic permutation.
I wondered if it was possible to take my array of elements and cast it as a single variable so I can do the permutation using the bit shift operators?
The rows are 4 unsigned char, so 4 bytes each.
In the following code only the first byte (corresponding to 'a') seems to be affected by the bitshift.
char array[4][4] = {"abcd", "efgh", "ijkl", "mnop"};
int32_t somevar;
somevar = (int32_t)*array[0] >> 16;
It's been a long time since I last practiced C, so I am probably making some stupid errors.

First, if your primary goal is a fast AES implementation, rather than either practicing C or a fast-but-portable AES implementation (that is, portability is primary and efficiency is secondary), then you would need to write in assembly language, not C, or at least use compiler features for specific targets that let you write near-assembly code. For example, Intel processors have AES-assist instructions, and GCC has built-in functions for them.
Second, if you are going to do this in C, your primary job, ideally, is to express the desired operations clearly to the compiler. By this, I mean you want the operations to be transparent to the compiler so that its optimizer can work. Using various techniques to reinterpret data (from char to int, for example) can block the compiler’s ability to optimize. (Or they might not, depending on compiler quality and the specific code you write.)
If you are aiming for portable code, it is likely better to simply write the character motions you want (just write simple assignment statements that move array elements). Good compilers can translate these efficiently, even combining multiple byte-move operations into single word-move operations if the hardware supports it.
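To illustrate, here is a minimal sketch of a ShiftRows written as plain byte assignments. It assumes a state[4][4] layout with each row held in state[r]; the AES specification stores the state column-wise, so adjust the indexing to whatever layout your code actually uses:

#include <stdint.h>

/* Rotate row r of the state left by r bytes, using only byte moves.
   A good optimizer is free to turn each row rotation into word operations. */
static void shift_rows(uint8_t state[4][4])
{
    for (int r = 1; r < 4; r++) {
        uint8_t tmp[4];
        for (int c = 0; c < 4; c++)
            tmp[c] = state[r][(c + r) % 4];   /* rotate left by r positions */
        for (int c = 0; c < 4; c++)
            state[r][c] = tmp[c];
    }
}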
When you are writing “fancy” code to try to optimize, it is important to be aware of rules of standard C, properties of the compiler(s) you are working with, and the hardware you are targeting.
For example, you have char array[4][4]. This declares an array with no particular alignment. The compiler might put this array anywhere, with any alignment—it is not necessarily aligned to a multiple of four bytes, for example. If you then take a pointer to the first row of this array and convert it to a pointer to an int, then an instruction to load an int may fail on some processors because they require int objects to be aligned to multiples of four bytes. On other processors, the load may work but be slower than an aligned load.
One solution for this is not to declare a bare array and not to convert pointers. Instead, you would declare a union, one member of which might be an array of four uint32_t and the other of which might be an array of four arrays of four uint8_t. The presence of the uint32_t array in the union would compel the compiler to align it suitably for the hardware. Additionally, reinterpreting data through unions is allowed in C, whereas reinterpreting data through converted pointers is not proper C code. (Even if the alignment requirements are satisfied, reinterpreting data through pointers generally violates aliasing rules.)
On another note, it is generally preferable to use unsigned types when working with bits as is done in cryptographic code. Instead of char and int32_t, you may be better off with uint8_t and uint32_t.
Regarding your specific code:
somevar = (int32_t)*array[0] >> 16;
array[0] is the first row of array. By the rules of C, it is automatically converted to a pointer to its first element, so it becomes &array[0][0]. Then *array[0] is *&array[0][0], which is array[0][0], which is the first char in the first row of the array. So the expression so far is just the value of the first char. Then the cast (int32_t) converts the type of the expression to int32_t. This does not change the value, so the result is simply the value of the first char in the first row.
What you were likely thinking of was either * (uint32_t *) &array[0] or * (uint32_t *) array[0]. These take either the address of the first row (the former expression) or the address of the first element of the first row (the latter expression) (these denote the same location but are different types) and convert it to a pointer to a uint32_t. Then the * is intended to fetch the uint32_t at that address. That violates C rules and should be avoided.
Instead, you can use:
union
{
    uint32_t words[4];
    uint8_t bytes[4][4];
} block;
Then you can access individual bytes with block.bytes[i][j] or words of four bytes with block.words[i]. Whether this is a good idea or not depends on context and goals.
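For illustration only, here is a rough sketch of what using such a union could look like (the union is restated with a tag so it can be reused, and the whole main is made up). Note that an 8-bit rotation of a word moves the bytes of a row by one position, but whether that corresponds to a left or a right byte rotation depends on the endianness of the machine, which is one more reason the plain byte-assignment version is easier to get right portably:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

union state {
    uint32_t words[4];
    uint8_t bytes[4][4];
};

/* Rotate a 32-bit word left by n bits (n must be between 1 and 31). */
static uint32_t rotl32(uint32_t x, unsigned n)
{
    return (x << n) | (x >> (32u - n));
}

int main(void)
{
    union state block;
    memcpy(block.bytes, "abcdefghijklmnop", 16);

    /* Move every byte of row 1 by one position via a word rotation. */
    block.words[1] = rotl32(block.words[1], 8);

    for (int j = 0; j < 4; j++)
        putchar(block.bytes[1][j]);
    putchar('\n');

    return 0;
}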

Related

Can we choose how many bits to reserve for a variable?

Does the C language allow us to choose how many bits to reserve for a variable? For example, if we create a whole-number variable which we will use as a bool (1=true, 0=false), we need only one bit. How can I reserve only one bit for that? Is it possible?
It is possible only when that variable is a member of a struct, as a bit field. Keep in mind that the necessary padding will still be added in this case.
For example:
struct Bitfield {
    int Bool : 1;
} bit;
The structure bit will require 4 bytes (assuming int is 4 bytes wide), but only one bit will be used to store the value.
You can declare up to 32 members of width 1 each, and the size of the structure bit will still be 4 bytes.
Suggested Reading: How is the size of a struct with Bit Fields determined/measured?.
Quick answer: Not in general, no.
Each type has a specific size, each declared object (variable) has a type, and the size occupied by an object is determined by its type. The size of each type is determined by the rules of the language and, in many cases, by the compiler. For example, an int is at least 16 bits, but may be larger depending on the compiler.
In the case of a bool, assuming your compiler supports it, just declare an object of type _Bool (or bool if you have #include <stdbool.h>). The size will probably be one byte (which is probably 8 bits).
Other than bit fields, it's not possible to have an object whose size is not a whole number of bytes, and a bit field can only exist as a member of a structure (or union).
If you really want an object whose size is a particular number of bytes, then you can define it as an array of unsigned char:
unsigned char obj[42];
If you want an integer that's a certain size, then you can use one of the types declared in <stdint.h>: uint8_t (8 bits), uint16_t (16 bits), etc.
But in most cases, there's no need to do this. Just define your variable using whatever type is appropriate, and let the compiler take care of determining how big they need to be. Unless you're storing or transmitting binary data that needs to be in a specific format, "Size matters not" -- Yoda.
To expand on haccks' answer, if you have a struct like this:
struct foo {
    int flag1:1;
    int flag2:2;
    int somebits:6;
    int somemorebits:12;
    int evenmorebits:10;
};
A foo struct will still take up only 4 bytes.
Does the C language allow us to choose how many bits to reserve for a variable?
The space reserved for a variable depends on its type. You can't directly dictate the bit length of a particular type, but every implementation will offer some types you can use. Some of these types have sizes that are implementation-defined and some of them are fixed.
For example, if we create a whole-number variable which we will use as a bool (1=true, 0=false), we need only one bit. How can I reserve only one bit for that? Is it possible?
It's not possible, because a bit isn't individually addressable on most architectures, and C requires that objects be addressable.
Usually, the extra bits you waste in order to store a boolean value are negligible. If you are working with large amounts of such boolean values and the wasted space isn't negligible, you can always opt to use bitmasks, in order to pack multiple binary values in a larger integer type. Keep in mind that if you choose to do that, you'll be optimizing for space and deoptimizing for speed, because writing or reading isolated bits from a larger type requires multiple operations on most common architectures.
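As an illustration of that trade-off, here is a minimal sketch of packing one boolean per bit into an array of 32-bit words; the helper names are made up:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* bits must point to at least (n + 31) / 32 words when storing n flags. */
static void set_flag(uint32_t *bits, size_t i, bool value)
{
    if (value)
        bits[i / 32] |= (UINT32_C(1) << (i % 32));
    else
        bits[i / 32] &= ~(UINT32_C(1) << (i % 32));
}

static bool get_flag(const uint32_t *bits, size_t i)
{
    return (bits[i / 32] >> (i % 32)) & 1u;
}

Each read or write touches a whole word plus a shift and a mask, which is exactly the speed cost mentioned above.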
You have some control over this by choosing the variable's type. I.e., for integers you can select between char, short, int, long. But the standard is very careful in that it only guarantees that those are in non-decreasing order of size. They might all be exactly the same size, as famously happened in Cray's C (all 32 bits wide).
First of all, in any case the minimum addressable unit is the byte. So even if you are going to use only one bit, the compiler will still operate on whole bytes.
You can use bit fields, but they may only be defined inside a structure or union.
Also take into account that the standard header <stdint.h> contains definitions of more precisely sized integer types.
And for boolean values you can use the standard integer type _Bool.

When should I just use "int" versus more sign-specific or size-specific types?

I have a little VM for a programming language implemented in C. It supports being compiled under both 32-bit and 64-bit architectures as well as both C and C++.
I'm trying to make it compile cleanly with as many warnings enabled as possible. When I turn on CLANG_WARN_IMPLICIT_SIGN_CONVERSION, I get a cascade of new warnings.
I'd like to have a good strategy for when to use int versus either explicitly unsigned types, and/or explicitly sized ones. So far, I'm having trouble deciding what that strategy should be.
It's certainly true that mixing them—using mostly int for things like local variables and parameters and using narrower types for fields in structs—causes lots of implicit conversion problems.
I do like using more specifically sized types for struct fields because I like the idea of explicitly controlling memory usage for objects in the heap. Also, for hash tables, I rely on unsigned overflow when hashing, so it's nice if the hash table's size is stored as uint32_t.
But, if I try to use more specific types everywhere, I find myself in a maze of twisty casts everywhere.
What do other C projects do?
Just using int everywhere may seem tempting, since it minimizes the need for casting, but there are several potential pitfalls you should be aware of:
An int might be shorter than you expect. Even though, on most desktop platforms, an int is typically 32 bits, the C standard only guarantees a minimum length of 16 bits. Could your code ever need numbers larger than 2^15 − 1 = 32,767, even for temporary values? If so, don't use an int. (You may want to use a long instead; a long is guaranteed to be at least 32 bits.)
Even a long might not always be long enough. In particular, there is no guarantee that the length of an array (or of a string, which is a char array) fits in a long. Use size_t (or ptrdiff_t, if you need a signed difference) for those.
In particular, a size_t is defined to be large enough to hold any valid array index, whereas an int or even a long might not be. Thus, for example, when iterating over an array, your loop counter (and its initial / final values) should generally be a size_t, at least unless you know for sure that the array is short enough for a smaller type to work. (But be careful when iterating backwards: size_t is unsigned, so for(size_t i = n-1; i >= 0; i--) is an infinite loop! Using i != SIZE_MAX or i != (size_t) -1 should work, though; or use a do/while loop, but beware of the case n == 0!)
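As a small aside, here are two backwards-loop shapes that avoid the trap just described; this is only a sketch, with walk_backwards being a made-up example function:

#include <stddef.h>

void walk_backwards(const int *a, size_t n)
{
    /* Pattern 1: decrement in the condition, so i is still a valid index in the body. */
    for (size_t i = n; i-- > 0; ) {
        (void)a[i];               /* use a[i] here */
    }

    /* Pattern 2: do/while, guarding against n == 0 first. */
    if (n > 0) {
        size_t i = n - 1;
        do {
            (void)a[i];           /* use a[i] here */
        } while (i-- > 0);
    }
}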
An int is signed. In particular, this means that int overflow is undefined behavior. If there's ever any risk that your values might legitimately overflow, don't use an int; use an unsigned int (or an unsigned long, or uintNN_t) instead.
Sometimes, you just need a fixed bit length. If you're interfacing with an ABI, or reading / writing a file format, that requires integers of a specific length, then that's the length you need to use. (Of course, in such situations, you may also need to worry about things like endianness, and so may sometimes have to resort to manually packing data byte-by-byte anyway.)
All that said, there are also reasons to avoid using the fixed-length types all the time: not only is int32_t awkward to type all the time, but forcing the compiler to always use 32-bit integers is not always optimal, particularly on platforms where the native int size might be, say, 64 bits. You could use, say, C99 int_fast32_t, but that's even more awkward to type.
Thus, here are my personal suggestions for maximum safety and portability:
Define your own integer types for casual use in a common header file, something like this:
#include <limits.h>
typedef int i16;
typedef unsigned int u16;
#if UINT_MAX >= 4294967295U
typedef int i32;
typedef unsigned int u32;
#else
typedef long i32;
typedef unsigned long u32;
#endif
Use these types for anything where the exact size of the type doesn't matter, as long as they're big enough. The type names I've suggested are both short and self-documenting, so they should be easy to use in casts where needed, and minimize the risk of errors due to using a too-narrow type.
Conveniently, the u32 and u16 types defined as above are guaranteed to be at least as wide as unsigned int, and thus can be used safely without having to worry about them being promoted to int and causing undefined overflow behavior.
Use size_t for all array sizes and indexing, but be careful when casting between it and any other integer types. Optionally, if you don't like to type so many underscores, typedef a more convenient alias for it too.
For calculations that assume overflow at a specific number of bits, either use uintNN_t, or just use u16 / u32 as defined above and explicit bitmasking with &. If you choose to use uintNN_t, make sure to protect yourself against unexpected promotion to int; one way to do that is with a macro like:
#define u(x) (0U + (x))
which should let you safely write e.g.:
uint32_t a = foo(), b = bar();
uint32_t c = u(a) * u(b); /* this is always unsigned multiply */
For external ABIs that require a specific integer length, again define a specific type, e.g.:
typedef int32_t fooint32; /* foo ABI needs 32-bit ints */
Again, this type name is self-documenting, with regard to both its size and its purpose.
If the ABI might actually require, say, 16- or 64-bit ints instead, depending on the platform and/or compile-time options, you can change the type definition to match (and rename the type to just fooint) — but then you really do need to be careful whenever you cast anything to or from that type, because it might overflow unexpectedly.
If your code has its own structures or file formats that require specific bitlengths, consider defining custom types for those too, exactly as if it was an external ABI. Or you could just use uintNN_t instead, but you'll lose a little bit of self-documentation that way.
For all these types, don't forget to also define the corresponding _MIN and _MAX constants for easy bounds checking. This might sound like a lot of work, but it's really just a couple of lines in a single header file.
Finally, remember to be careful with integer math, especially overflows.
For example, keep in mind that the difference of two n-bit signed integers may not fit in an n-bit int. (It will fit into an n-bit unsigned int, if you know it's non-negative; but remember that you need to cast the inputs to an unsigned type before taking their difference to avoid undefined behavior!)
Similarly, to find the average of two integers (e.g. for a binary search), don't use avg = (lo + hi) / 2, but rather e.g. avg = lo + (hi + 0U - lo) / 2; the former will break if the sum overflows.
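Here is a small sketch of those last two points, under the assumption that the caller has already established b >= a and lo <= hi; the function names are made up:

/* Difference of two ints when b >= a: cast first, then subtract. */
static unsigned int safe_diff(int a, int b)
{
    return (unsigned int)b - (unsigned int)a;
}

/* Midpoint for a binary search without risking signed overflow. */
static int safe_mid(int lo, int hi)
{
    return lo + (int)(((unsigned int)hi - (unsigned int)lo) / 2u);
}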
You seem to know what you are doing, judging from the linked source code, which I took a glance at.
You said it yourself - using "specific" types makes you have more casts. That's not an optimal route to take anyway. Use int as much as you can, for things that do not mandate a more specialized type.
The beauty of int is that it abstracts over the types you speak of. It is optimal in all cases where you need not expose the construct to a system unaware of int. It is your own tool for abstracting the platform for your program(s). It may also yield speed, size and alignment advantages, depending on the platform.
In all other cases, e.g. where you want to deliberately stay close to machine specifications, int can and sometimes should be abandoned. Typical cases include network protocols where the data goes on the wire, and interoperability facilities - bridges of sorts between C and other languages, or kernel assembly routines accessing C structures. But don't forget that sometimes you would in fact want to use int even in these cases, as it follows the platform's own "native" or preferred word size, and you might want to rely on that very property.
As for fixed-width types like uint32_t, a kernel might want to use these (although it may not have to) in its data structures if those are accessed from both C and assembler, as the latter typically has no idea what int is supposed to be.
To sum up, use int as much as possible and resort to moving from more abstract types to "machine" types (bytes/octets, words, etc.) in any situation that requires it.
As to size_t and other "usage-suggestive" types - as long as the syntax follows the semantics inherent to the type - say, using size_t for, well, size values of all kinds - I would not contest it. But I would not liberally apply it to anything just because it is guaranteed to be the largest type (regardless of whether that is actually true). That's a hidden pitfall you don't want to step on later. Code has to be self-explanatory to the degree possible, I would say - having a size_t where none is naturally expected would raise eyebrows, for good reason. Use size_t for sizes. Use offset_t for offsets. Use [u]intN_t for octets, words, and such things. And so on.
This is about applying semantics inherent in a particular C type, to your source code, and about the implications on the running program.
Also, as others have illustrated, don't shy away from typedef, as it gives you the power to efficiently define your own types, an abstraction facility I personally value. Good program source code may not even expose a single int, while nevertheless relying on int aliased behind a multitude of purpose-defined types. I am not going to cover typedef here; the other answers hopefully will.
Keep large numbers that are used to access array members or to control buffers as size_t.
For an example of a project that makes use of size_t, refer to GNU's dd.c, line 155.
Here are a few things I do. Not sure they're for everyone but they work for me.
Never use int or unsigned int directly. There always seems to be a more appropriately named type for the job.
If a variable needs to be a specific width (e.g. for a hardware register or to match a protocol) use a width-specific type (e.g. uint32_t).
For array iterators, where I want to access array elements 0 through n, this should also be unsigned (there is no reason to access any index less than 0), and I use one of the fast types (e.g. uint_fast16_t), selecting the type based on the minimum size required to access all array elements. For example, if I have a for loop that will iterate through 24 elements max, I'll use uint_fast8_t and let the compiler (or stdint.h, depending on how pedantic we want to get) decide which is the fastest type for that operation.
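For example (just a sketch, with the array size being a made-up constant):

#include <stdint.h>

#define NUM_CHANNELS 24u   /* hypothetical array length */

void clear_channels(uint16_t values[NUM_CHANNELS])
{
    /* uint_fast8_t only has to count to 24; the implementation picks
       whatever width is fastest on the target. */
    for (uint_fast8_t i = 0; i < NUM_CHANNELS; i++)
        values[i] = 0;
}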
Always use unsigned variables unless there is a specific reason for them to be signed.
If your unsigned variables and signed variables need to play together, use explicit casts and be aware of the consequences. (Luckily this will be minimized if you avoid using signed variables except where absolutely necessary.)
If you disagree with any of those or have recommended alternatives please let me know in the comments! That's the life of a software developer... we keep learning or we become irrelevant.
Always.
Unless you have specific reasons for using a more specific type, e.g. you're on a 16-bit platform and need integers greater than 32767, or you need to ensure proper byte order and signedness for data exchange over a network or in a file (and unless you're resource constrained, consider transferring data in "plain text", meaning ASCII or UTF-8 if you prefer).
My experience has shown that "just use 'int'" is a good maxim to live by and makes it possible to turn out working, easily maintained, correct code quickly every time. But your specific situation may differ, so take this advice with a bit of well-deserved scrutiny.
Most of the time, using int is not ideal. The main reason is that int is signed, and signed overflow is undefined behaviour; signed integers can also be negative, which you don't need for most integers. Prefer unsigned integers. Secondly, data types carry meaning and are a (very limited) way to document the range of values a variable may hold. If you use int, you imply that you expect this variable to sometimes hold negative values, and that these values probably do not always fit into 8 bits but always fit into INT_MAX, which can be as low as 32767. Do not assume an int is 32 bits.
Always think about the possible values of a variable and choose the type accordingly. I use the following rules:
Use unsigned integers except when you need to be able to handle negative numbers.
If you want to index an array from the start, use size_t except when there are good reasons not to. Almost never use int for it; an int can be too small, and there is a high chance of creating a UB bug that isn't found during testing because you never tested with arrays large enough.
The same goes for array sizes and the sizes of other objects: prefer size_t.
If you need to index arrays with negative indices, which you may need for image processing, prefer ptrdiff_t. Be aware that ptrdiff_t can also be too small, but that is rare.
If you have arrays that never exceed a certain size, you may use the uint_fastN_t, uintN_t, or uint_leastN_t types. This can make a lot of sense, especially on an 8-bit microcontroller.
Sometimes, unsigned int can be used instead of uint_fast16_t, and similarly int instead of int_fast16_t.
To handle the value of a single byte (or character, though this is not a real character, because with UTF-8 and Unicode a character sometimes consists of more than one code point), use int. int can store -1 if you need an indicator for "error" or "not set", and a character literal is of type int. (This is true for C; for C++ you may use a different strategy.) There is the extremely rare possibility of a machine with sizeof(int)==1 && CHAR_MIN==0 where a byte cannot be handled with an int, but I have never seen such a machine.
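The classic illustration of this rule is reading bytes from a stream: getc returns an int precisely so that every byte value and EOF stay distinguishable. A small sketch:

#include <stdio.h>

long count_bytes(FILE *f)
{
    long count = 0;
    int c;                        /* int, not char: must hold 0..255 and EOF */

    while ((c = getc(f)) != EOF)
        count++;

    return count;
}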
It can make sense to define your own types for different purposes.
Use explicit casts where casts are needed. This way the code is well defined and has the least amount of unexpected behaviour.
After a certain size, a project needs a list/enum of the native integer data types. You can use macros with the _Generic expression from C11, which only need to handle bool, signed char, short, int, long, long long and their unsigned counterparts, to get the underlying native type from a typedef'd one. This way your parsers and similar parts only need to handle 11 integer types and not the 56 standard integer types (if I counted correctly), plus a bunch of other non-standard types.
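A minimal sketch of such a _Generic-based classifier (the macro name is made up); note how a typedef'd type like uint32_t resolves to whatever native type it aliases:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NATIVE_TYPE_NAME(x) _Generic((x),            \
    bool:               "bool",                      \
    signed char:        "signed char",               \
    unsigned char:      "unsigned char",             \
    short:              "short",                     \
    unsigned short:     "unsigned short",            \
    int:                "int",                       \
    unsigned int:       "unsigned int",              \
    long:               "long",                      \
    unsigned long:      "unsigned long",             \
    long long:          "long long",                 \
    unsigned long long: "unsigned long long",        \
    default:            "something else")

typedef uint32_t my_id_t;   /* hypothetical project typedef */

int main(void)
{
    my_id_t id = 0;
    puts(NATIVE_TYPE_NAME(id));   /* prints the native type behind uint32_t */
    return 0;
}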

Colons after variable name in C [duplicate]

Possible Duplicate:
What does a colon in a struct declaration mean, such as :1, :7, :16, or :32?
This is a C code sample from a reference page.
signed int _exponent:8;
What's the meaning of the colon before '8' and '8' itself?
It's a bitfield. It's only valid in a struct definition, and it means that the system will only use 8 bits for your integer.
It's a bitfield, an obscure and misguided feature of structures. That should be enough for you to lookup the information you need to know to deal with bitfields in other people's code. As for your own code, never use bitfields.
Edit: As requested by Zack, bitfields have significant disadvantages versus performing your own bit arithmetic, and no advantages. Here are some of them:
You can only copy, compare, serialize, or deserialize one bitfield element at a time. Doing your own bit arithmetic, you can operate on whole words at a time.
You can never have a pointer to bitfield elements. With your own bit arithmetic, you can have a pointer to the larger word and a bit index within the word as a "pointer".
Directly reading/writing C structures to disk or network is half-way portable without bitfields, as long as you use fixed-size types and know the endianness. Throw in bitfields, though, and the order of elements within the larger type, as well as the total space used and alignment, become highly implementation-dependent, even within a given cpu architecture.
The C specification has very strange rules that allow the signedness of bitfield elements to be different from what you'd expect it to be, and very few people are aware of these.
For single-bit flags, using your own bit arithmetic instead of bitfields is a complete no-brainer. For larger values you need to pack, if it's too painful to write out all the bit arithmetic all over the place, write some simple macros.
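For the single-bit-flag case, the macros meant here can be as simple as the following sketch (the names are made up):

#include <stdint.h>

#define FLAG_SET(word, bit)    ((word) |=  (UINT32_C(1) << (bit)))
#define FLAG_CLEAR(word, bit)  ((word) &= ~(UINT32_C(1) << (bit)))
#define FLAG_TEST(word, bit)   (((word) >> (bit)) & 1u)

/* Example: several flags packed into one word. */
enum { FLAG_READY = 0, FLAG_ERROR = 1, FLAG_DIRTY = 2 };

Unlike a bitfield, the whole word can be copied, compared, or serialized in one operation.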
It is a bitfield specification.
It means _exponent takes only 8 bits out of the signed int, which typically occupies more than 8 bits. Typically, bit-fields are used with unsigned types.
IIRC, the compiler would warn if something that does not fit into 8 bits is written into _exponent.
When that declaration appears inside a structure, it denotes a bit field.

Why can't I declare bitfields as automatic variables?

I want to declare a bitfield with the size specified using a colon (I can't remember what the syntax is called). I want to write this:
void myFunction()
{
    unsigned int thing : 12;
    ...
}
But GCC says it's a syntax error (it thinks I'm trying to write a nested function). I have no problem doing this though:
struct thingStruct
{
    unsigned int thing : 4;
};
and then putting one such struct on the stack
void myFunction()
{
    struct thingStruct thing;
    ...
}
This leads me to believe that it's being prevented by syntax, not semantic issues.
So why won't the first example work? What am I missing?
The first example won't work because you can only declare bitfields inside structs. This is syntax, not semantics, as you said, but there it is. If you want a bitfield, use a struct.
Why would you want to do such a thing? A bit field of 12 bits would, on all common architectures, be padded to at least 16 or 32 bits anyway.
If you want to ensure the width of an integer variable, use the types in inttypes.h, e.g. int16_t or int32_t.
As others have said, bitfields must be declared inside a struct (or union, but that's not really useful). Why? Here are two reasons.
Mainly, it's to make the compiler writer's job easier. Bitfields tend to require more machine instructions to extract the bits from the bytes. Only fields can be bitfields, and not variables or other objects, so the compiler writer doesn't have to worry about them if there is no . or -> operator involved.
But, you say, sometimes the language designers make the compiler writer's job harder in order to make the programmer's life easier. Well, there is not a lot of demand from programmers for bitfields outside structs. The reason is that programmers pretty much only bother with bitfields when they're going to cram several small integers inside a single data structure. Otherwise, they'd use a plain integral type.
Other languages have integer range types, e.g., you can specify that a variable ranges from 17 to 42. There isn't much call for this in C because C never requires that an implementation check for overflow. So C programmers just choose a type that's capable of representing the desired range; it's their job to check bounds anyway.
C89 (i.e., the version of the C language that you can find just about everywhere) offers a limited selection of types that have at least n bits. There's unsigned char for 8 bits, unsigned short for 16 bits and unsigned long for 32 bits (plus signed variants). C99 offers a wider selection of types called uint_least8_t, uint_least16_t, uint_least32_t and uint_least64_t. These types are guaranteed to be the smallest types with at least that many value bits. An implementation can provide types for other numbers of bits, such as uint_least12_t, but most don't. These types are defined in <stdint.h>, which is available on many C89 implementations even though it's not required by the standard.
Bitfields provide a consistent syntax to access certain implementation-dependent functionality. The most common purpose of that functionality is to place certain data items into bits in a certain way, relative to each other. If two items (bit-fields or not) are declared as consecutive items in a struct, they are guaranteed to be stored consecutively. No such guarantee exists with individual variables, regardless of storage class or scope. If a struct contains:
struct foo {
    unsigned bar: 1;
    unsigned boz: 1;
};
it is guaranteed that bar and boz will be stored consecutively (most likely in the same storage location, though I don't think that's actually guaranteed). By contrast, if 'bar' and 'boz' were single-bit automatic variables, there's no telling where they would be stored, so there'd be little benefit to having them as bitfields. If they did share space with some other variable, it would be hard to make sure that different functions reading and writing different bits in the same byte didn't interfere with each other.
Note that some embedded-systems compilers do expose a genuine 'bit' type, whose objects are packed eight to a byte. Such compilers generally have an area of memory which is allocated for storing nothing but bit variables, and the processors for which they generate code have atomic instructions to test, set, and clear individual bits. Since the memory locations holding the bits are only accessed using such instructions, there's no danger of conflicts.

size_t vs. uintptr_t

The C standard guarantees that size_t is a type that can hold any array index. This means that, logically, size_t should be able to hold any pointer type. I've read on some sites that I found on the Googles that this is legal and/or should always work:
void *v = malloc(10);
size_t s = (size_t) v;
So then in C99, the standard introduced the intptr_t and uintptr_t types, which are signed and unsigned types guaranteed to be able to hold pointers:
uintptr_t p = (uintptr_t) v;
So what is the difference between using size_t and uintptr_t? Both are unsigned, and both should be able to hold any pointer type, so they seem functionally identical. Is there any real compelling reason to use uintptr_t (or better yet, a void *) rather than a size_t, other than clarity? In an opaque structure, where the field will be handled only by internal functions, is there any reason not to do this?
By the same token, ptrdiff_t has been a signed type capable of holding pointer differences, and therefore capable of holding most any pointer, so how is it distinct from intptr_t?
Aren't all of these types basically serving trivially different versions of the same function? If not, why? What can I do with one of them that I can't do with another? If so, why did C99 add two essentially superfluous types to the language?
I'm willing to disregard function pointers, as they don't apply to the current problem, but feel free to mention them, as I have a sneaking suspicion they will be central to the "correct" answer.
size_t is a type that can hold any array index. This means that, logically, size_t should be able to hold any pointer type
Not necessarily! Hark back to the days of segmented 16-bit architectures for example: an array might be limited to a single segment (so a 16-bit size_t would do) BUT you could have multiple segments (so a 32-bit intptr_t type would be needed to pick the segment as well as the offset within it). I know these things sound weird in these days of uniformly addressable unsegmented architectures, but the standard MUST cater for a wider variety than "what's normal in 2009", you know!-)
Regarding your statement:
"The C standard guarantees that size_t is a type that can hold any array index. This means that, logically, size_t should be able to hold any pointer type."
This is actually a fallacy (a misconception resulting from incorrect reasoning)(a). You may think the latter follows from the former but that's not actually the case.
Pointers and array indexes are not the same thing. It's quite plausible to envisage a conforming implementation that limits arrays to 65536 elements but allows pointers to address any value into a massive 128-bit address space.
C99 states that the upper limit of a size_t variable is defined by SIZE_MAX and this can be as low as 65535 (see C99 TR3, 7.18.3, unchanged in C11). Pointers would be fairly limited if they were restricted to this range in modern systems.
In practice, you'll probably find that your assumption holds, but that's not because the standard guarantees it. Because it actually doesn't guarantee it.
(a) This is not some form of personal attack by the way, just stating why your statements are erroneous in the context of critical thinking. For example, the following reasoning is also invalid:
All puppies are cute. This thing is cute. Therefore this thing must be a puppy.
The cuteness or otherwise of puppies has no bearing here; all I'm stating is that the two facts do not lead to the conclusion, because the first two sentences allow for the existence of cute things that are not puppies.
This is similar to your first statement not necessarily mandating the second.
I'll let all the other answers stand for themselves regarding the reasoning with segment limitations, exotic architectures, and so on.
Isn't the simple difference in names reason enough to use the proper type for the proper thing?
If you're storing a size, use size_t. If you're storing a pointer, use intptr_t. A person reading your code will instantly know that "aha, this is a size of something, probably in bytes", and "oh, here's a pointer value being stored as an integer, for some reason".
Otherwise, you could just use unsigned long (or, in these here modern times, unsigned long long) for everything. Size is not everything; type names carry meaning, which is useful since it helps describe the program.
It's possible that the size of the largest array is smaller than a pointer. Think of segmented architectures - pointers may be 32-bits, but a single segment may be able to address only 64KB (for example the old real-mode 8086 architecture).
While these aren't commonly in use in desktop machines anymore, the C standard is intended to support even small, specialized architectures. There are still embedded systems being developed with 8 or 16 bit CPUs for example.
I would imagine (and this goes for all type names) that it better conveys your intentions in code.
For example, even though unsigned short and wchar_t are the same size on Windows (I think), using wchar_t instead of unsigned short shows the intention that you will use it to store a wide character, rather than just some arbitrary number.
Looking both backwards and forwards, and recalling that various oddball architectures were scattered about the landscape, I'm pretty sure they were trying to wrap all existing systems and also provide for all possible future systems.
So sure, the way things settled out, we have so far needed not so many types.
But even in LP64, a rather common paradigm, we needed size_t and ssize_t for the system call interface. One can imagine a more constrained legacy or future system, where using a full 64-bit type is expensive and they might want to punt on I/O ops larger than 4GB but still have 64-bit pointers.
I think you have to wonder: what might have been developed, what might come in the future. (Perhaps 128-bit distributed-system internet-wide pointers, but no more than 64 bits in a system call, or perhaps even a "legacy" 32-bit limit. :-) Imagine that legacy systems might get new C compilers...
Also, look at what existed around then. Besides the zillion 286 real-mode memory models, how about the CDC 60-bit word / 18-bit pointer mainframes? How about the Cray series? Never mind normal ILP64, LP64, LLP64. (I always thought Microsoft was pretentious with LLP64; it should have been P64.) I can certainly imagine a committee trying to cover all bases...
size_t vs. uintptr_t
In addition to other good answers:
size_t is defined in <stddef.h>, <stdio.h>, <stdlib.h>, <string.h>, <time.h>, <uchar.h>, <wchar.h>. It is at least 16-bit.
uintptr_t is defined in <stdint.h>. It is optional. A compliant library might not define it, likely because there is not a wide-enough integer type to round-trip void * to uintptr_t and back to void *.
Both are unsigned integer types.
Note: the optional companion intptr_t is a signed integer type.
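For what it's worth, here is a minimal sketch of that round trip; the guarantee the standard gives (when uintptr_t exists) is that converting a void * to uintptr_t and back yields a pointer that compares equal to the original:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p = malloc(10);
    if (p == NULL)
        return 1;

    uintptr_t bits = (uintptr_t)p;   /* pointer captured as an integer */
    void *q = (void *)bits;          /* converting back gives an equal pointer */

    printf("%s\n", p == q ? "round trip ok" : "round trip lost the pointer");
    free(p);
    return 0;
}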
int main(){
    int a[4]={0,1,5,3};
    int a0 = a[0];
    int a1 = *(a+1);
    int a2 = *(2+a);
    int a3 = 3[a];
    return a2;
}
Implying that intptr_t must always substitute for size_t and vice versa.
