In C, a struct (record data structure) can be the return type of a function, but an array cannot be. What design characteristics of the C Language cause arrays to be an exception?
A naked array type in the C language is not copyable, primarily for historical reasons. For this reason it is not possible to initialize arrays with arrays, assign arrays to arrays, pass arrays by value as parameters, or return arrays from functions. (Initialization has one notable exception: char s[6] = "Hello";.)
It is still possible to do all of the above if the array is wrapped in a struct type, which demonstrates that the limitation is purely declarative in nature; there's no compelling technical reason for it.
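For instance, a minimal sketch of the struct-wrapping workaround (the struct and function names here are made up for illustration):

#include <stdio.h>

struct wrap { int data[3]; };

/* Returning the struct copies the whole embedded array. */
struct wrap make(void) {
    struct wrap w = { {1, 2, 3} };
    return w;
}

int main(void) {
    struct wrap a = make();  /* "array return", by value */
    struct wrap b;
    b = a;                   /* "array assignment", by value */
    printf("%d %d %d\n", b.data[0], b.data[1], b.data[2]);
    return 0;
}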
The C language inherited its approach to array implementation from its historical predecessors, the B and BCPL languages. In B/BCPL, arrays were openly implemented as pointers, meaning that assigning one array to another was actually an assignment of pointers. C took a different approach: in C, arrays are not pointers, but the interface of C arrays was kept superficially compatible with that of B/BCPL. Arrays in C still "pretend" to be pointers in most contexts. This is one reason they are not directly copyable.
Most obviously, the limitation is that C doesn't permit a function to return a result of an array type. This is stated explicitly in the language standard.
Array types are, in a sense, second-class citizens in C. In most contexts, an expression of array type is implicitly converted to a pointer to its first element. The exceptions are when the array expression is the operand of sizeof (which yields the size of the array), when it's the operand of unary & (which yields the address of the array), and when it's a string literal in an initializer used to initialize an array object.
This absolutely does not mean that arrays are "really" pointers; they're not. You'll see people claiming that they are. They're wrong.
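A short sketch of the decay rule and its exceptions, assuming a hosted C99-or-later compiler:

#include <stdio.h>

int main(void) {
    int a[10];

    int *p = a;           /* implicit conversion: p gets the address of a[0] */
    int (*pa)[10] = &a;   /* exception: unary & yields a pointer to the whole array */
    char s[6] = "Hello";  /* exception: string literal initializing an array object */

    /* exception: sizeof sees the whole array, not a pointer */
    printf("%zu %zu\n", sizeof a, sizeof p);

    (void)pa; (void)s;
    return 0;
}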
Functions return values. You can have a value of a structure type; that value consists of the values of its members. C permits assignment, parameter passing, and function results of structure type. All these manipulate structure values (they deal with them by value, not by reference).
The same is not true for arrays. The rules I mentioned above imply that you can't construct an expression whose value is of an array type. There are array values (consisting of the values of all the array's elements), but such values are difficult or impossible to manipulate directly.
The way C code usually manipulates arrays is by using pointers to individual elements.
It probably wouldn't have been too difficult to have designed C so that fixed-size arrays can be treated as values, with assignment, parameter passing, and so forth. But then you'd run into problems where int[10] and int[11] are two distinct and incompatible types. Most C code that deals with arrays needs to handle arrays whose size is determined at run time. For example, the string functions in <string.h> deal with arrays of characters of any arbitrary length. They do so by using pointers to the elements of the arrays. You couldn't very well have distinct functions for 1-element, 2-element, 3-element, and so forth, arrays.
You can do the equivalent of returning an array value from a function, but it's unfortunately awkward. You can return a structure containing the array -- but then the size of the array has to be fixed at compile time. You can return a pointer to (the first element of) the array -- but then you have to deal with allocating and deallocating memory to hold the array. You can have the caller pass in a pointer to an array -- but that places the burden of memory management on the caller. And so forth.
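Hedged sketches of those three idioms (all names are invented for illustration):

#include <stdlib.h>

/* 1. Wrap the array in a struct; the size must be fixed at compile time. */
struct vec3 { double v[3]; };
struct vec3 make_vec3(void) {
    struct vec3 r = { {1.0, 2.0, 3.0} };
    return r;
}

/* 2. Return a pointer to heap storage; the caller must free() it. */
int *make_ints(size_t n) {
    return malloc(n * sizeof(int));
}

/* 3. Let the caller supply the buffer; memory management stays with the caller. */
void fill_ints(int *out, size_t n) {
    for (size_t i = 0; i < n; ++i)
        out[i] = (int)i;
}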
Yes, it's all a bit of a mess. But dealing with arrays that can vary in size is genuinely difficult. C gives you all the tools you need to do it, but leaves a lot of the detailed management to you, the programmer. (Other languages provide arrays as first-class types. Many of those languages have compilers or interpreters written in C.)
Suggested reading: Section 6 of the comp.lang.c FAQ.
The characteristic is that in a small and speedy language like C you don't want the equivalent of large memcpy operations happening implicitly when returning. If you badly need arrays returned, make them a member of a struct, and voilà: array return in C. Sort of, starting with C89 :-)
Or use a memcpy yourself when and where you need it.
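For example, a minimal sketch of an explicit copy (the function name is made up):

#include <string.h>

void copy_example(void) {
    int src[4] = {1, 2, 3, 4};
    int dst[4];
    /* Explicit copy; the cost of the large copy stays visible at the call site. */
    memcpy(dst, src, sizeof dst);
    (void)dst;
}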
While an array can't be returned from a C function, a pointer to the array can be. For a code example of how to do what you're looking for, see:
http://www.tutorialspoint.com/cprogramming/c_return_arrays_from_function.htm
[This is a question inspired by a recent discussion elsewhere, and I'll provide an answer right with it.]
I was wondering about the odd C phenomenon of arrays "decaying" to pointers, e.g. when used as function arguments. That just seems so unsafe. It is also inconvenient to pass the length explicitly with it. And I can pass the other type of aggregate -- structs -- perfectly well by value; structs do not decay.
What is the rationale behind this design decision? How does it integrate with the language? Why is there a difference to structs?
Rationale
Let's examine function calls because the problems are nicely visible there: Why are arrays not simply passed to functions as arrays, by value, as a copy?
First, there is a purely pragmatic reason: arrays can be big, and it may not be advisable to pass them by value because they could exceed the stack size, especially in the 1970s. The first compilers were written on a PDP-7 with about 9 kB of RAM.
There is also a more technical reason rooted in the language. It would be hard to generate code for a function call with arguments whose size is not known at compile time. For all arrays, including variable length arrays in modern C, only the addresses are put on the call stack. The size of an address is of course well known. Even languages with elaborate array types carrying run-time size information do not pass the objects proper on the stack. These languages typically pass "handles" around, which is what C has effectively done, too, for 40 years. See Jon Skeet here and an illustrated explanation he references (sic) here.
Now a language could make it a requirement that an array always have a complete type; i.e. whenever it is used, its complete declaration including the size must be visible. This is, after all, what C requires from structures (when they are accessed). Consequently, structures can be passed to functions by value. Requiring the complete type for arrays as well would make function calls easily compilable and obviate the need to pass additional length arguments: sizeof() would still work as expected inside the callee. But imagine what that means. If the size were really part of the array's argument type, we would need a distinct function for each array size:
// for user input.
int average_ten(int arr[10]);
// for my new Hasselblad.
int average_twohundredfivemilliononehundredfourtyfivethousandsixhundred(int arr[16544*12400]);
// ...
In fact it would be totally comparable to passing structures, which differ in type if their elements differ (say, one struct with 10 int elements and one with 16544*12400). It is obvious that arrays need more flexibility. For example, as demonstrated, one could not sensibly provide generally usable library functions taking array arguments.
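This is why real library interfaces take a pointer plus a run-time length instead; a rough sketch (the function name is invented):

#include <stddef.h>

/* One function handles arrays of any length passed as pointer + count. */
int average(const int *arr, size_t len) {
    long sum = 0;
    for (size_t i = 0; i < len; ++i)
        sum += arr[i];
    return len ? (int)(sum / (long)len) : 0;
}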
This "strong typing conundrum" is, in fact, what happens in C++ when a function takes a reference to an array; that is also the reason why nobody does it, at least not explicitly. It is totally inconvenient to the point of being useless except for cases which target specific uses, and in generic code: C++ templates provide compile-time flexibility which is not available in C.
If, in existing C, arrays of known size really should be passed by value, there is always the possibility of wrapping them in a struct. I remember that some IP-related headers on Solaris defined address-family structures with arrays in them, allowing them to be copied around. Because the byte layout of the struct was fixed and known, that made sense.
For some background it's also interesting to read The Development of the C Language by Dennis Ritchie about the origins of C. C's predecessor BCPL didn't have any arrays; the memory was just homogeneous linear memory with pointers into it.
The answer to this question can be found in Dennis Ritchie's "The Development of the C Language" paper (see the "Embryonic C" section).
According to Dennis Ritchie, the nascent versions of C directly inherited/adopted array semantics from B and BCPL languages - predecessors of C. In those languages arrays were literally implemented as physical pointers. These pointers pointed to independently allocated blocks of memory containing the actual array elements. These pointers were initialized at run time. I.e. back in B and BCPL days arrays were implemented as "binary" (bipartite) objects: an independent pointer pointing to an independent block of data. There was no difference between pointer and array semantics in those languages, aside from the fact that array pointers were initialized automatically. At any time it was possible to re-assign an array pointer in B and BCPL to make it point somewhere else.
Initially, this approach to array semantics was inherited by C. However, its drawbacks became immediately obvious when struct types were introduced into the language (something neither B nor BCPL had), since the idea was that structs should naturally be able to contain arrays. Continuing to stick with the above "bipartite" nature of B/BCPL arrays would immediately lead to a number of obvious complications with structs. For example, struct objects with arrays inside would require non-trivial "construction" at the point of definition. It would be impossible to copy such struct objects, since a raw memcpy call would copy the array pointers without copying the actual data. One couldn't malloc struct objects, since malloc can only allocate raw memory and does not trigger any non-trivial initialization. And so on and so forth.
This was deemed unacceptable, which led to the redesign of C arrays. Instead of implementing arrays through physical pointers, Ritchie decided to get rid of the pointers entirely. The new array was implemented as a single immediate memory block, which is exactly what we have in C today. However, for backward-compatibility reasons, the behavior of B/BCPL arrays was preserved (emulated) as much as possible at a superficial level: the new C array readily decayed to a temporary pointer value pointing to the beginning of the array. The rest of the array functionality remained unchanged, relying on that readily available result of the decay.
To quote the aforementioned paper:
The solution constituted the crucial jump in the evolutionary chain
between typeless BCPL and typed C. It eliminated the materialization
of the pointer in storage, and instead caused the creation of the
pointer when the array name is mentioned in an expression. The rule,
which survives in today's C, is that values of array type are
converted, when they appear in expressions, into pointers to the first
of the objects making up the array.
This invention enabled most existing B code to continue to work,
despite the underlying shift in the language's semantics. The few
programs that assigned new values to an array name to adjust its
origin—possible in B and BCPL, meaningless in C—were easily repaired.
More important, the new language retained a coherent and workable (if
unusual) explanation of the semantics of arrays, while opening the way
to a more comprehensive type structure.
So, the direct answer to your "why" question is as follows: arrays in C were designed to decay to pointers in order to emulate (as close as possible) the historical behavior of arrays in B and BCPL languages.
Take your time machine and travel back to 1970. Start designing a programming language. You want the following code to compile and do the expected thing:
size_t i;
int *p = (int *) malloc(10 * sizeof(int));
for (i = 0; i < 10; ++i) p[i] = i;

int a[10];
for (i = 0; i < 10; ++i) a[i] = i;
At the same time, you want a language that is simple. Simple enough that you can compile it on a 1970s computer. The rule that "a" decays to "a pointer to the first element of a" achieves that nicely.
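A small illustration of the payoff, assuming a modern C compiler: after decay, indexing is the same computation for the array and the pointer.

#include <assert.h>

void index_demo(void) {
    int a[10] = {0};
    int *p = a;   /* "a" decays to a pointer to a[0] */

    /* a[3] is defined as *(a + 3); after decay it is the same
       computation the compiler performs for p[3]. */
    assert(a[3] == *(a + 3));
    assert(a[3] == p[3]);
}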
I wonder: why is it not possible to return an array in C?
After all, an array is just a pointer backed by size info (to make sizeof work). At first I thought this was done to prevent me from returning an array defined on my stack, but nothing prevents me from returning a pointer to something on my stack (gcc warns me, but the code compiles). And I can also return a string literal, which is a statically stored array of chars. By the way, on Linux it is stored in .rodata, and a const array is stored there too (check it with objdump), so I can return such an array (casting it to a pointer) and it works; but AFAIK this is just implementation-specific (another OS/compiler may store the const array on the stack).
I have two ideas for how array returning could be implemented: just copy it as a value (as is done for structs; I can even return an array by wrapping it in a struct!!) and create a pointer to it automatically, or allow the user to return a const array and establish a contract that such an array must have static storage duration (as is done for string literals). Both ideas seem trivial!
So, my question is: why didn't K&R implement something like that?
Technically, you can return an array; you just can't do it "directly", but have to wrap it in a struct:
struct foo {
    int array[5];
};

struct foo returns_array(void) {
    return (struct foo) {
        .array = {2, 4, 6, 8, 10}
    };
}
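The caller then gets its own copy of all five elements; for example (a sketch reusing the definitions above):

void caller(void) {
    struct foo result = returns_array();   /* copies all five elements */
    int third = result.array[2];           /* 6 */
    (void)third;
}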
Why C doesn't allow you to do it directly even though it has the ability is still a good question, though. It is probably related to the fact that it doesn't support whole-array assignments either:
void bar(int input[5]) {
    int temp[5];
    temp = input;   /* <-- doesn't compile: arrays are not assignable */
}
What makes it even stranger, of course, is that a whole-array copy does happen when the array is wrapped in a struct that is passed or returned by value, as in the example above. If someone knows how to find the ANSI committee's decisions on the matter, that would be interesting to read.
However,
After all, an array is just a pointer backed by size info (to make sizeof work).
This is not correct. There is no explicit pointer, nor any stored size, of an array. The array is stored as the raw values, packed together; the size is only known inside the compiler and never made explicit as run-time data in the program. The array decays to a pointer when you try to use it as one.
An array is not "just a pointer backed by size info".
An array is a block of contiguous elements of a certain type. There is no pointer.
Since an array is an object, a pointer can be formed which points to the array, or to one of the array's elements. But such a pointer is not part of the array and is not stored with the array. It would make as much sense to say "an int is just a pointer backed by a size of 1 int".
The size of an array is known by the compiler in the same way that the size of any object is known. If we have double d; then it is known that sizeof d is sizeof(double) because the compiler remembers that d is an object of type double.
nothing prevents me from returning a pointer to something on my stack
The C standard prevents you from doing this (and using the returned pointer). If you write code that violates the standard then you are on your own.
And I can also return a string literal
A string literal is an array of char. When you use an array in a return statement, it is converted to a pointer to the first element.
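A hedged sketch of the difference the question touches on (the function names are made up, and the second function is deliberately broken):

/* Fine: a string literal has static storage duration, so the pointer
   it decays to remains valid after the function returns. */
const char *greeting(void) {
    return "Hello";
}

/* Broken: buf has automatic storage duration, so the returned pointer
   dangles as soon as the function returns (using it is undefined behavior). */
const char *broken(void) {
    char buf[6] = "Hello";
    return buf;   /* decays to a pointer into storage that is about to die */
}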
To enable arrays to be returned (and assigned) by value, the rule regarding conversion of array to pointer (sometimes called "decay") would have to be changed. This would be possible, but K&R decided to make the decay almost ubiquitous when designing C.
In fact, it would have been possible to have a language like C but without the decay at all; maybe in hindsight that would have saved a lot of confusion. However, they simply chose to implement C the way they did.
In K&R C, it was not possible to return structures by value either. Any copy of a non-primitive type had to be done with memcpy or an equivalent iterative copy. This seems like a reasonable design decision given the hardware resources of the 1970s.
ANSI C added the possibility of returning structures by value; however, by then it would have been too late to change the decay rule even if they had wanted to, as it would break a lot of existing code relying on it.
Because if, suddenly, a revision of the language allowed a function to return a complete array, that revision would also have to deal with these situations:
Allow assignment between arrays (because if a function returns an array, it is presumably going to be assigned to an array variable in the caller)
Allow passing a complete array as a value parameter (because the name of an array would no longer decay to a pointer to its first element, as that would conflict with the first situation)
If these constructs were allowed, existing programs that pass the name of an array as an argument to a function, expecting the function to modify that array, would cease to work.
Also, existing programs that use the array's name as a pointer, assigning it to a pointer variable, would cease to work.
So, while it's technically feasible, making arrays work as complete entities that can be assigned, returned and so on would break a lot of existing programs.
Note that structs could be "upgraded" because K&R C had no prior semantics relating the name of a structure variable to a pointer to it. Any function that had to take structures as arguments or return them had to use pointers to them.
The "reason" is that arrays decay to pointers in most expressions and things would "as wrong" as if you would want to allow for assignment of arrays. If you'd return an array from a function, you wouldn't be able to distinguish it from a normal pointer. If f() would be returning double[5], say, the initialization
double *A = f();
would be valid. A would then hold the address of a temporary object, something that in C only lives until the end of the full expression in which the call to f appears. A would thus be a dangling pointer, pointing to an address that is no longer valid.
To summarize: the initial decision to have arrays behave similarly to pointers in most contexts implies that arrays can't be assigned to or returned from functions.
Is it possible to convert any program written in C using pointers into another C program that does not contain any pointers? If yes, can we automate the process?
I read a few papers on C-to-Java-bytecode compilation and found that a major issue was "the pointer problem". So I was thinking that if the above process could be done, it could be included as a preprocessing step (though that may itself be a big task), and then it might be simpler to try converting to JVM bytecode...
Thanks in advance.
In theory you can, by simulating individual data structures or even the entire memory (static data, heap and stack) with arrays. But the question is whether this is practical; it may involve rewriting every pointer-based standard library function you need.
Anyway, there's a nice explanation on Wikipedia:
It is possible to simulate pointer behavior using an index to an (normally one-dimensional) array.
Primarily for languages which do not support pointers explicitly but do support arrays, the array can be thought of and processed as if it were the entire memory range (within the scope of the particular array) and any index to it can be thought of as equivalent to a general purpose register in assembly language (that points to the individual bytes but whose actual value is relative to the start of the array, not its absolute address in memory). Assuming the array is, say, a contiguous 16 megabyte character data structure, individual bytes (or a string of contiguous bytes within the array) can be directly addressed and manipulated using the name of the array with a 31 bit unsigned integer as the simulated pointer (this is quite similar to the C arrays example shown above). Pointer arithmetic can be simulated by adding or subtracting from the index, with minimal additional overhead compared to genuine pointer arithmetic.
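A toy sketch of that idea in C itself, purely for illustration (a real translator would be far more involved; sim_alloc here is an invented name and does no bounds checking or freeing):

#include <stdio.h>

/* Simulated "memory": indices into this array play the role of pointers. */
static int memory[1024];
typedef unsigned ptr_t;        /* a simulated pointer is just an index */

static ptr_t next_free = 0;

/* Simulated allocator: hand out the next n slots. */
static ptr_t sim_alloc(unsigned n) {
    ptr_t p = next_free;
    next_free += n;
    return p;
}

int main(void) {
    ptr_t p = sim_alloc(3);             /* "malloc" three ints */
    for (unsigned i = 0; i < 3; ++i)
        memory[p + i] = (int)i * 10;    /* "pointer arithmetic" is index arithmetic */
    printf("%d\n", memory[p + 2]);      /* prints 20 */
    return 0;
}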
Pointers are rather central to C. While C may be Turing-complete without pointers, it's not practical to rewrite arbitrary C without them. Things you can't do without pointers:
- dynamic (manual) memory allocation
- passing by reference
Given that arrays decay into pointers at the drop of a hat, you also couldn't use arrays practically, so you are left with automatic, static and global variables that cannot be arrays. tl;dr: No.
I've tried to google this and have read:
Why can't arrays of same type and size be assigned?
Assigning arrays
Assign to array in struct in c
But they all state the obvious: you can't assign to arrays because the standard says so. That's great and all, but I want to know why the standard doesn't include support for assigning to arrays. The standard committee discusses things in detail, and I'd be surprised if they never discussed making arrays assignable. Assuming they've discussed it, they must have some rationale for not letting arrays be assigned to.
I mean, we can put an array in a struct and assign to the struct just fine:
struct wrapper
{
    int array[2];
};

struct wrapper a = {{1, 2}};
struct wrapper b = {{3, 4}};
a = b; // legal
But using an array directly is prohibited, even though it accomplishes effectively the same thing:
int a[2] = {1, 2};
int b[2] = {3, 4};
a = b; // Not legal
What is the standard committee's rationale for prohibiting assigning to arrays?
In C, assignment copies the contents of a fixed-size object to another fixed-size object. This is well defined and fairly straightforward to implement for scalar types (integers, floating-point, pointers, complex types since C99). Assignment of structs is nearly as simple; larger ones might require a call to memcpy() or equivalent, but it's still straightforward since the size and alignment are known at compile time.
Arrays are a different matter. Most array objects have sizes that aren't determined until run time. A good example is argv. The runtime environment constructs an array of char for each command-line argument, and an array of char* containing pointers to the arguments. These are made available to main via argv, a char**, and via the dynamically allocated char[] arrays that the elements of argv point to.
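For instance, a program walking argv relies only on the argc count and the terminating null pointer, never on compile-time sizes (sketch):

#include <stdio.h>

int main(int argc, char **argv) {
    /* argv[0] .. argv[argc-1] point to char arrays whose lengths are only
       known at run time; argv[argc] is a null pointer. */
    for (int i = 0; i < argc; ++i)
        printf("arg %d: %s\n", i, argv[i]);
    return 0;
}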
C arrays are objects in their own right, but they're not generally accessed as objects. Instead, their elements are accessed via pointers, and code traverses from one element to the next using pointer arithmetic.
Languages can be designed to treat arrays as first-class objects, with assignment -- but it's complicated. As a language designer, you have to decide whether an array of 10 integers and an array of 20 integers are the same type. If they are, you have to decide what happens when you try to assign one to the other. Does it copy the smaller size? Does it cause a runtime exception? Do you have to add a slice operation so you can operate on subsets of arrays?
If int[10] and int[20] are distinct types with no implicit conversion, then array operations are inflexible (see Pascal, for example).
All these things can be defined (see Ada), but only by defining higher-level constructs than what's typical in C. Instead, the designers of C (mostly Dennis Ritchie) chose to provide arrays with low-level operations. It's admittedly inconvenient at times, but it's a framework that can be used to implement all the higher-level array operations of any other language.
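As a hedged illustration, here is a tiny growable array built from C's low-level pieces (the names, growth factor, and error convention are arbitrary choices, not anything prescribed by C):

#include <stdlib.h>

/* A tiny growable int array: pointer + length + capacity. */
struct int_vec {
    int    *data;
    size_t  len;
    size_t  cap;
};

/* Append one element, growing the storage as needed.
   Returns 0 on success, -1 on allocation failure. */
int int_vec_push(struct int_vec *v, int value) {
    if (v->len == v->cap) {
        size_t new_cap = v->cap ? 2 * v->cap : 8;
        int *p = realloc(v->data, new_cap * sizeof *p);
        if (p == NULL)
            return -1;
        v->data = p;
        v->cap = new_cap;
    }
    v->data[v->len++] = value;
    return 0;
}

Starting from struct int_vec v = {0};, repeated int_vec_push(&v, x) calls grow the storage as needed, and free(v.data) releases it.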
The reason is basically historic. There was a C even before ISO C89, which was called "K&R" C after Kernighan and Ritchie. The language was designed to be small enough that a compiler would fit in the severely limited (by today's standards) memory of 64 KB.
This language did not allow assigning arrays. If you wanted to copy same-sized arrays, memcpy was there for your needs. Writing memcpy(a, b, sizeof a) instead of a = b is certainly not a big complication. It has the additional advantage of being generalizable to different-sized arrays and array slices.
Interestingly, the struct assignment workaround you mention also did not work in K&R C. You had to either assign members one by one or, again, use memcpy. The first edition of K&R's The C Programming Language mentions struct assignment as a feature for future implementation in the language, which eventually happened with C89.
The answer is simple: it was never allowed before the committee got involved (even struct assignment was considered too heavy), and considering there's array decay, allowing it would have all kinds of interesting consequences.
Let's see what would change:
int a[3], b[3], *c = b, *d = b;
a = b; // Currently error, would assign elements
a = c; // Currently error, might assign array of 3?
c = a; // Currently pointer assignment with array decay
c = d; // Currently pointer assignment
So, allowing array-assignment would make (up to) two currently disallowed assignments valid.
That's not the trouble, though; it's that near-identical expressions would have wildly different results.
That gets especially piquant if you consider that array-notation in function arguments is currently just a different notation for pointers.
If array assignment was introduced, that would become even more confusing.
As if enough people weren't already completely confounded by things as they are today...
int g(int* x); // Function receiving pointer to int and returning int
int f(int x[3]); // Currently the same. What afterwards? Change to value-copy?
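A quick way to see that the [3] really is just notation (a sketch; the exact numbers printed depend on the platform):

#include <stdio.h>

int f(int x[3]) {             /* the [3] is only notation... */
    return (int) sizeof x;    /* ...inside f, x is an int *, so this is sizeof(int *) */
}

int main(void) {
    int a[3] = {1, 2, 3};
    printf("%d vs %zu\n", f(a), sizeof a);   /* e.g. "8 vs 12" on a typical 64-bit system */
    return 0;
}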
Understand that the intent wasn't to make array expressions unassignable; that wasn't the goal.¹ Rather, this behavior falls out of a design decision Ritchie made that simplified array handling in the compiler, but in exchange made array expressions "second-class" objects; they lose their "array-ness" in most contexts.
Read this paper (especially the section titled "Embryonic C") for some background; I also have a more detailed answer here.
1. With the possible exception of Perl or PHP², most blatant language WTFs are generally accidents of design or the result of compromises; most languages aren't deliberately designed to be stupid.
2. I'm only trolling a little bit; Perl and PHP are straight-up messes.
C is designed in such a way that the address of the first element is computed when an array expression is evaluated.
Quoting an excerpt from this answer:
This is why you can't do something like
int a[N], b[N];
a = b;
because both a and b evaluate to pointer values in that context; it's equivalent to writing 3 = 4. There's nothing in memory that actually stores the address of the first element in the array; the compiler simply computes it during the translation phase.
Maybe it would be helpful to turn the question around and ask why you'd ever want to assign arrays (or structs) instead of using pointers. That's much cleaner and easier to understand (at least once you've assimilated the Zen of C), and it has the benefit of not concealing the fact that a lot of work hides under the "simple" assignment of multi-megabyte arrays.