Contiguity of arrays in modern Fortran

I am reading "Modern Fortran Explained" (2018 edition). It says on p24:
Most implementations actually store arrays in contiguous storage in array element order,
but we emphasize that the standard does not require this.
I believe that this already dates back to old FORTRAN 77 (and earlier). Given a declaration
real, dimension(5,5) :: a
my understanding is that the standard does not require this array to be contiguous. This agrees with what the book says in Sec. 7.18.1 on p140:
... the Fortran standard has shied away from specifying whether arrays are contiguous in the sense of occupying sequential memory locations with no intervening unoccupied spaces.
But then the book goes on (still p140):
Any of the following arrays are considered to be contiguous by the standard:
an array with the contiguous attribute;
a whole array (named array or array component without further qualification) that is not a pointer or assumed-shape;
...
I understand the first criterion, but I am confused about the second. Doesn't this disagree with the earlier statement that the Fortran standard does not require arrays to be stored in contiguous storage in array element order? Has modern Fortran moved away from this paradigm? Or do the above quotes not contradict each other for reasons that go beyond my understanding of this topic? In the above example, is the array a required to be contiguous or not?

The language standard specifies what has to happen, in terms of the concepts described by the standard (statements being executed in a certain order, variables being defined with values, content being written to things called "files"), but it says nothing about how those things happen or how they are implemented.
There may be an obvious method of implementation, perhaps even only one practical method, but still, the standard does not specify implementation.
The standard does not even require a "Fortran processor" to be an electronic device.
Practically, it is a pretty safe assumption that a "contiguous array" (a Fortran standard language term) will be implemented by the bytes that represent the values of the array elements being stored next to each other in RAM, but that's an implementation detail, not a language standard requirement.
It is handy for programmers to be aware of likely implementation methods, particularly for debugging or understanding performance aspects, but implementation and specification shouldn't be conflated.

There are two different concepts of contiguity here.
In the Fortran sense, contiguous means (Fortran 2018, 3.41):
having array elements in order that are not separated by other data objects
This concept of contiguity allows for "padding" between array elements if they are stored in memory which has that possibility. What contiguity means is that A(1) and A(2) have no object between them.
That is, a scalar may take up some space, but as an element of an array the same object may take up more space. As given by (F2018 16.9.184, Note 1):
An array element might take more bits to store than an isolated scalar, since any hardware-imposed alignment requirements for array elements might not apply to a simple scalar variable.
The description of which arrays are considered to be contiguous refers to this sense of contiguity. It is the different concept of contiguity in the second quote of the question which is not mandated.
From your example
real, dimension(5,5) :: a
we do have a contiguous array a: it is a whole array which is not a pointer and is not assumed-shape. It may still have some degree of padding between elements.

Related

Is an mmap-ed region a "single object" and can I compare pointers inside it?

I'm working on a malloc implementation as a school exercise, using mmap.
I would like to compute the size of my block of memory, in my free list, by using the address of the metadata.
But I am not sure this solution would be well defined by the C standard; I didn't find a reference on whether or not the mmap-allocated region is considered an "object" in the meaning of that part of the C standard:
§6.5.8.5 (quote taken from that answer to a somewhat related question):
When two pointers are compared, the result depends on the relative locations in the address space of the objects pointed to. If two pointers to object or incomplete types both point to the same object, or both point one past the last element of the same array object, they compare equal. If the objects pointed to are members of the same aggregate object, pointers to structure members declared later compare greater than pointers to members declared earlier in the structure, and pointers to array elements with larger subscript values compare greater than pointers to elements of the same array with lower subscript values. All pointers to members of the same union object compare equal. If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P. In all other cases, the behavior is undefined.
In other words, can I consider the mmap region as an array of bytes (or char) within the standard?
Yes, you can reasonably do so, treating it as an object that initially has no effective type; otherwise the mmap system call would be completely useless, and one could expect that a C compiler targeting a POSIX system should not render mmap useless...
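As a rough sketch of that reading (illustrative only; the struct block_meta and the sizes here are made up, not taken from the question's code), computing a block's usable size from metadata addresses inside one mapping might look like this, treating the whole mapping as one array of bytes:

#define _DEFAULT_SOURCE            /* for MAP_ANONYMOUS on glibc */
#include <stddef.h>
#include <sys/mman.h>

struct block_meta {
    size_t size;               /* payload size in bytes */
    struct block_meta *next;   /* next free block within the same mapping */
};

int main(void) {
    size_t region_size = 4096;
    unsigned char *region = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED)
        return 1;

    /* One metadata header at the start of the mapping; the payload follows
       immediately after it, inside the same mapped region. */
    struct block_meta *meta = (struct block_meta *)region;
    meta->size = region_size - sizeof *meta;
    meta->next = NULL;

    /* Both pointers are derived from the same mapping, so the subtraction
       stays within what we are treating as a single array of char. */
    unsigned char *payload = region + sizeof *meta;
    size_t usable = (size_t)((region + region_size) - payload);  /* == meta->size */
    (void)usable;

    munmap(region, region_size);
    return 0;
}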
The C Standard only describes the semantics of pointers formed in certain ways. Implementations are free to assign whatever semantics they see fit to pointers formed in other ways. According to the authors of the Standard, the Spirit of C includes the fundamental principle "Don't prevent the programmer from doing what needs to be done", and they presumably intended that quality implementations intended to be suitable for various tasks should avoid imposing needless obstacles to programmers attempting to accomplish those tasks. That would suggest that if a quality implementation defines ways of creating pointers to regions of storage that are not associated with objects of static, automatic, or allocated duration, it should process such pointers usefully even though the Standard would not require it to do so.
Unfortunately, compiler writers are not always clear about the range of purposes for which their compilers are designed to be suitable when configured in various ways. There are many situations where compilers will describe the behavior of a category of actions in more detail than required by the Standard, but the Standard will characterize an overlapping category of actions as invoking UB. Some compiler writers think that UB merely means that the Standard imposes no requirements, but behavioral descriptions that go beyond those required by the Standard should be unaffected. Others view the fact that an action invokes UB as overriding all other behavioral descriptions.
Actions involving addresses that are allocated in ways that the implementations don't understand are only going to be defined to the extent described by the implementations. On some implementations, the fact that the Standard would characterize e.g. comparisons involving unrelated pointers as UB should be viewed as irrelevant, since the Standard doesn't say anything about how such pointers will behave. On others, the fact that the standard characterizes some actions as UB would dominate, however. Unfortunately, it's hard to know which scenario would apply in any particular situation.

What makes it possible for glibc malloc to compare pointers from different "objects"?

Comparing pointers with a relational operator (e.g. <, <=, >= or >) is only defined by the C standard when the pointers both point within the same aggregate object (struct, array or union). This in practice means that a comparison of the form
if (start_object <= my_pointer && my_pointer < end_object+1) {
can be turned into
if (1) {
by an optimising compiler. Despite this, in K&R, section 8.7 "Example—A Storage Allocator", the authors make comparisons similar to the one above. They excuse this by saying
There is still one assumption, however, that pointers to different blocks returned by sbrk can be meaningfully compared. This is not guaranteed by the standard, which permits pointer comparisons only within an array. Thus this version of malloc is portable only among machines for which general pointer comparison is meaningful.
Furthermore, it appears the implementation of malloc used in glibc does the same thing!
What's worse is – the reason I stumbled across this to begin with is – for a school assignment I'm supposed to implement a rudimentary malloc like function, and the instructions for the assignment requires us to use the K&R code, but we have to replace the sbrk call with a call to mmap!
While comparing pointers from different sbrk calls is probably undefined, it is also only slightly dubious, since you have some sort of mental intuition that the returned pointers should come from sort of the same region of memory. Pointers returned by different mmap calls have, as far as I understand, no guarantee to even be remotely similar to each other, and consolidating/merging memory blocks across mmap calls should be highly illegal (and it appears glibc avoids this, resorting to only merging memory returned by sbrk or internally inside mmap pages, not across them), yet the assignment requires this.
Question: could someone shine some light on
Whether or not comparing pointers from different calls to sbrk may be optimised away, and
If so, what glibc does that lets them get away with it.
The language-lawyer answer is (I believe) to be found in §6.5.8.5 of the C99 standard (or more precisely from ISO/IEC 9899:TC3 Committee Draft — September 7, 2007 WG14/N1256, which is nearly identical but I don't have the original to hand), which has the following with regard to relational operators (i.e. <, <=, >, >=):
When two pointers are compared, the result depends on the relative locations in the address space of the objects pointed to. If two pointers to object or incomplete types both point to the same object, or both point one past the last element of the same array object, they compare equal. If the objects pointed to are members of the same aggregate object, pointers to structure members declared later compare greater than pointers to members declared earlier in the structure, and pointers to array elements with larger subscript values compare greater than pointers to elements of the same array with lower subscript values. All pointers to members of the same union object compare equal. If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P. In all other cases, the behavior is undefined.
(the C11 text is identical or near identical)
This at first seems unhelpful, or at least suggests that the implementations each exploit undefined behaviour. I think, however, you can either rationalise the behaviour or use a workaround.
The C pointers specified are either going to be NULL, or derived from taking the address of an object with &, or by pointer arithmetic, or by the result of some function. In the case concerned, they are derived from the result of the sbrk or mmap system calls. What do these system calls really return? At a register level, they return an integer the size of uintptr_t (or intptr_t). It is in fact the system call interface which is casting them to a pointer. As we know, casts between pointers and uintptr_t (or intptr_t) are by definition of the type bijective, so we know we could cast the pointers to uintptr_t (for instance) and compare them, which will impose a well-ordering relation on the pointers. The Wikipedia link gives more information, but this will in essence ensure that every comparison is well defined, as well as other useful properties such as a<b and b<c implies a<c. (I also can't choose an entirely arbitrary order, as it would need to satisfy the other requirements of C99 §6.5.8.5, which pretty much leaves me with intptr_t and uintptr_t as candidates.)
We can exploit this and write the (arguably better):
if ((uintptr_t)start_object <= (uintptr_t)my_pointer && (uintptr_t)my_pointer < (uintptr_t)(end_object+1)) {
There is a nit here. You'll note I cast to uintptr_t and not intptr_t. Why was that the right choice? In fact, why did I not choose a rather bizarre ordering such as reversing the bits and comparing? The assumption here is that I'm choosing the same ordering as the kernel, specifically that my definition of < (given by the ordering) is such that the start and end of any allocated memory block will always be such that start < end. On all modern platforms I know, there is no 'wrap around' (e.g. the kernel will not allocate 32-bit memory starting at 0xffff8000 and ending at 0x00007fff) - though note that similar wrap-around has been exploited in the past.
The C standard specifies that pointer comparisons give undefined results under many circumstances. However, here you are building your own pointers out of integers returned by system calls. You can therefore either compare the integers, or compare the pointers by casting them back to integers (exploiting the bijective nature of the cast). If you merely compare the pointers, you rely on the C compiler's implementation of pointer comparison being sane, which it almost certainly is, but is not guaranteed.
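As a minimal sketch of that workaround (assuming, as discussed above, that the implementation's pointer-to-uintptr_t conversion preserves address order; ptr_in_block is just an illustrative helper, not code from glibc or K&R):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Does p lie within the block of len bytes starting at start?
   The comparison is done on uintptr_t values, which is always well
   defined on integers, unlike relational comparison of unrelated pointers. */
static bool ptr_in_block(const void *start, size_t len, const void *p) {
    uintptr_t s = (uintptr_t)start;
    uintptr_t q = (uintptr_t)p;
    return s <= q && q < s + (uintptr_t)len;
}

Called as ptr_in_block(start_object, object_size, my_pointer) (object_size being whatever length you track), this is the cast-based version of the original if condition.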
Are the possibilities I mention so obscure that they can be discounted? Nope, let's find a platform example where they might be important: 8086. It's possible to imagine an 8086 compilation model where every pointer is a 'far' pointer (i.e. contains a segment register). Pointer comparison could do a < or > on the segment register and only if they are equal do a < or > on the offset. This would be entirely legitimate so long as all the structures in C99 §6.5.8.5 are in the same segment. But it won't work as one might expect between segments, as 1000:1234 (which is equal to 1010:1134 as a memory address) will appear smaller than 1010:0123. mmap here might well return results in different segments. Similarly, one could think of another memory model where the segment register is actually a selector, and a pointer comparison uses a processor comparison instruction to compare memory addresses, which aborts if an invalid selector or an offset outside a segment is used.
You ask two specific questions:
Whether or not comparing pointers from different calls to sbrk may be optimised away, and
If so, what glibc does that lets them get away with it.
In the formulation given above, where start_object etc. are actually void *, the calculation may be optimized away (i.e. might not do what you want) and is not guaranteed to work, as the behaviour is undefined. A cast would guarantee that it works, provided the kernel uses the same well-ordering as implied by the cast.
In answer to the second question, glibc is relying on a behaviour of the C compiler which is technically not required, but is very likely (per the above).
Note also (at least in the K&R in front of me) that the line you quote doesn't exist in the code. The caveat is in relation to the comparison of header * pointers with < (as far as I can see, comparison of void * pointers with < is always UB) which may derive from separate sbrk() calls.
The answer is simple enough. The C library implementation is written with some knowledge of (or perhaps expectations for) how the C compiler will handle certain code that has undefined behaviour according to the official specification.
There are many examples I could give; but that pointers actually refer to an address in the process' address space and can be compared freely is relied on by the C library implementation (at least by Glibc) and also by many "real world" programs. While it is not guaranteed by the standard for strictly conforming programs, it is true for the vast majority of real-world architectures/compilers. Also, note footnote 67, regarding conversion of pointers to integers and back:
The mapping functions for converting a pointer to an integer or an
integer to a pointer are intended to be consistent with the addressing
structure of the execution environment.
While this doesn't strictly give license to compare arbitrary pointers, it helps to understand how the rules are supposed to work: as a set of specific behaviours that are certain to be consistent across all platforms, rather than as a limit to what is permissible when the representation of a pointer is fully known and understood.
You've remarked that:
if (start_object <= my_pointer && my_pointer < end_object+1) {
Can be turned into:
if (1) {
With the assumption (which you didn't state) that my_pointer is derived in some way from the value of start_object or the address of the object which it delimits - then this is strictly true, but it's not an optimisation that compilers make in practice except in the case of static/automatic storage duration objects (i.e. objects which the compiler knows weren't dynamically allocated).
Consider the fact that calls to sbrk are defined to increment or decrement the number of bytes allocated in some region (the heap) for some process by the given incr parameter, relative to some brk address. This is really just a wrapper around brk, which allows you to adjust the current top of the heap. When you call brk(addr), you're telling the kernel to allocate space for your process all the way up to addr (or possibly to free the space between the previous, higher-address top of the heap and the new address). sbrk(incr) would be exactly equivalent if incr == new_top - original_top. Thus to answer your question:
Because sbrk just adjusts the size of the heap (i.e. some contiguous region of memory) by incr number of bytes, comparing the values returned by sbrk is just a comparison of points in some contiguous region of memory. That is exactly equivalent to comparing points in an array, and so it is a well-defined operation according to the C standard. Therefore, pointer comparison calls around sbrk can be optimized away.
glibc doesn't do anything special to "get away with it"; it just assumes that the assumption mentioned above holds true (which it does). In fact, when checking the state of a chunk of memory that was allocated with mmap, it explicitly verifies that the mmap'd memory is outside the range allocated with sbrk.
Edit: Something I want to make clearer about my answer: The key here is that there is no undefined behavior! sbrk is defined to allocate bytes in some contiguous region of memory, which is itself an 'object' as specified by the C-standard. Therefore, comparison of pointers within that 'object' is a completely sane and well defined operation. The assumption here is not that glibc is taking advantage of undefined pointer comparison, it's that it's assuming that sbrk grows / shrinks memory in some contiguous region.
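To make the sbrk picture concrete, here is a tiny sketch (illustrative only, not glibc's code; it assumes nothing else moves the break between the two calls):

#define _DEFAULT_SOURCE           /* for sbrk() on glibc */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Each sbrk(incr) returns the previous break, i.e. the start of the
       newly added bytes at the top of the one contiguous heap region. */
    char *first = sbrk(4096);
    char *second = sbrk(4096);
    if (first == (void *)-1 || second == (void *)-1)
        return 1;

    /* Both addresses lie in the same contiguous region the kernel grows,
       so on this view comparing or subtracting them behaves like indexing
       one big array (the difference here is typically 4096). */
    printf("second - first = %td bytes\n", second - first);
    return 0;
}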
The authors of the C Standard recognized that there are some segmented-memory hardware platforms where an attempt to perform a relational comparison between objects in different segments might behave strangely. Rather than say that such platforms could not efficiently accommodate efficient C implementations, the authors of the Standard allow such implementations to do anything they see fit if an attempt is made to compare pointers to objects that might be in different segments.
For the authors of the Standard to have said that comparisons between disjoint objects should only exhibit strange behavior on such segmented-memory systems that can't efficiently yield consistent behavior would have been seen as implying that such systems were inferior to platforms where relational comparisons between arbitrary pointers will yield a consistent ranking, and the authors of the Standard went out of their way to avoid such implications. Instead, they figured that since there was no reason for implementations targeting commonplace platforms to do anything weird with such comparisons, such implementations would handle them sensibly whether the Standard mandated them or not.
Unfortunately, some people who are more interested in making a compiler that conforms to the Standard than in making one that's useful have decided that any code which isn't written to accommodate the limitations of hardware that has been obsolete for decades should be considered "broken". They claim that their "optimizations" allow programs to be more efficient than would otherwise be possible, but in many cases the "efficiency" gains are only significant in cases where a compiler omits code which is actually necessary. If a programmer works around the compiler's limitations, the resulting code may end up being less efficient than if the compiler hadn't bothered with the "optimization" in the first place.

Why do arrays in C decay to pointers?

[This is a question inspired by a recent discussion elsewhere, and I'll provide an answer right with it.]
I was wondering about the odd C phenomenon of arrays "decaying" to pointers, e.g. when used as function arguments. That just seems so unsafe. It is also inconvenient to pass the length explicitly with it. And I can pass the other type of aggregate -- structs -- perfectly well by value; structs do not decay.
What is the rationale behind this design decision? How does it integrate with the language? Why is there a difference to structs?
Rationale
Let's examine function calls because the problems are nicely visible there: Why are arrays not simply passed to functions as arrays, by value, as a copy?
There is first a purely pragmatic reason: arrays can be big; it may not be advisable to pass them by value because they could exceed the stack size, especially in the 1970s. The first compilers were written on a PDP-7 with about 9 kB RAM.
There is also a more technical reason rooted in the language. It would be hard to generate code for a function call with arguments whose size is not known at compile time. For all arrays, including variable length arrays in modern C, simply the addresses are put on the call stack. The size of an address is of course well known. Even languages with elaborate array types carrying run time size information do not pass the objects proper on the stack. These languages typically pass "handles" around, which is what C has effectively done, too, for 40 years. See Jon Skeet here and an illustrated explanation he references (sic) here.
Now a language could make it a requirement that an array always have a complete type; i.e. whenever it is used, its complete declaration including the size must be visible. This is, after all, what C requires from structures (when they are accessed). Consequently, structures can be passed to functions by value. Requiring the complete type for arrays as well would make function calls easily compilable and obviate the need to pass additional length arguments: sizeof() would still work as expected inside the callee. But imagine what that means. If the size were really part of the array's argument type, we would need a distinct function for each array size:
// for user input.
int average_ten(int arr[10]);
// for my new Hasselblad.
int average_twohundredfivemilliononehundredfourtyfivethousandsixhundred(int arr[16544*12400]);
// ...
In fact it would be totally comparable to passing structures, which differ in type if their elements differ (say, one struct with 10 int elements and one with 16544*12400). It is obvious that arrays need more flexibility. For example, as demonstrated one could not sensibly provide generally usable library functions which take array arguments.
This "strong typing conundrum" is, in fact, what happens in C++ when a function takes a reference to an array; that is also the reason why nobody does it, at least not explicitly. It is totally inconvenient to the point of being useless except for cases which target specific uses, and in generic code: C++ templates provide compile-time flexibility which is not available in C.
If, in existing C, arrays of known size should indeed be passed by value, there is always the possibility of wrapping them in a struct. I remember that some IP-related headers on Solaris defined address family structures with arrays in them, allowing them to be copied around. Because the byte layout of the struct was fixed and known, that made sense.
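For what it's worth, a small sketch of that struct-wrapping trick (the type name vec10 is made up for illustration):

#include <stdio.h>

struct vec10 { int v[10]; };           /* fixed-size array wrapped in a struct */

static int sum10(struct vec10 a) {     /* 'a' is a by-value copy of the caller's struct */
    int s = 0;
    for (int i = 0; i < 10; ++i)
        s += a.v[i];
    return s;
}

int main(void) {
    struct vec10 x = { { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 } };
    printf("%d\n", sum10(x));          /* prints 55; x itself is untouched */
    return 0;
}

Because the array sits inside the struct, no decay happens and the whole thing is copied, sizeof and all.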
For some background it's also interesting to read The Development of the C Language by Dennis Ritchie about the origins of C. C's predecessor BCPL didn't have any arrays; the memory was just homogeneous linear memory with pointers into it.
The answer to this question can be found in Dennis Ritchie's "The Development of the C Language" paper (see the "Embryonic C" section).
According to Dennis Ritchie, the nascent versions of C directly inherited/adopted array semantics from B and BCPL languages - predecessors of C. In those languages arrays were literally implemented as physical pointers. These pointers pointed to independently allocated blocks of memory containing the actual array elements. These pointers were initialized at run time. I.e. back in B and BCPL days arrays were implemented as "binary" (bipartite) objects: an independent pointer pointing to an independent block of data. There was no difference between pointer and array semantics in those languages, aside from the fact that array pointers were initialized automatically. At any time it was possible to re-assign an array pointer in B and BCPL to make it point somewhere else.
Initially, this approach to array semantics got inherited by C. However, its drawbacks became immediately obvious when struct types were introduced into the language (something neither B nor BCPL had). And the idea was that structs should naturally be able to contain arrays. However, continuing to stick with the above "bipartite" nature of B/BCPL arrays would immediately lead to a number of obvious complications with structs. E.g. struct objects with arrays inside would require non-trivial "construction" at the point of definition. It would become impossible to copy such struct objects - a raw memcpy call would copy the array pointers without copying the actual data. One wouldn't be able to malloc struct objects, since malloc can only allocate raw memory and does not trigger any non-trivial initializations. And so on and so forth.
This was deemed unacceptable, which led to the redesign of C arrays. Instead of implementing arrays through physical pointers, Ritchie decided to get rid of the pointers entirely. The new array was implemented as a single immediate memory block, which is exactly what we have in C today. However, for backward compatibility reasons the behavior of B/BCPL arrays was preserved (emulated) as much as possible at a superficial level: the new C array readily decayed to a temporary pointer value pointing to the beginning of the array. The rest of the array functionality remained unchanged, relying on that readily available result of the decay.
To quote the aforementioned paper
The solution constituted the crucial jump in the evolutionary chain
between typeless BCPL and typed C. It eliminated the materialization
of the pointer in storage, and instead caused the creation of the
pointer when the array name is mentioned in an expression. The rule,
which survives in today's C, is that values of array type are
converted, when they appear in expressions, into pointers to the first
of the objects making up the array.
This invention enabled most existing B code to continue to work,
despite the underlying shift in the language's semantics. The few
programs that assigned new values to an array name to adjust its
origin—possible in B and BCPL, meaningless in C—were easily repaired.
More important, the new language retained a coherent and workable (if
unusual) explanation of the semantics of arrays, while opening the way
to a more comprehensive type structure.
So, the direct answer to your "why" question is as follows: arrays in C were designed to decay to pointers in order to emulate (as close as possible) the historical behavior of arrays in B and BCPL languages.
Take your time machine and travel back to 1970. Start designing a programming language. You want the following code to compile and do the expected thing:
size_t i;
int* p = (int *) malloc (10 * sizeof (int));
for (i = 0; i < 10; ++i) p [i] = i;
int a [10];
for (i = 0; i < 10; ++i) a [i] = i;
At the same time, you want a language that is simple. Simple enough that you can compile it on a 1970's computer. The rule that "a" decays to "pointer to first element of a" achieves that nicely.
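A small demonstration of what the decay rule means in practice (the exact sizes printed are platform dependent):

#include <stdio.h>

static void f(int arr[10]) {          /* the parameter is adjusted to: int *arr */
    printf("in f:    sizeof arr = %zu\n", sizeof arr);   /* size of a pointer */
}

int main(void) {
    int a[10];
    printf("in main: sizeof a   = %zu\n", sizeof a);     /* 10 * sizeof(int) */
    f(a);                             /* 'a' decays to &a[0] at the call */
    return 0;
}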

C program with pointer

Is it possible to convert any program written in C using pointers into another C program that does not contain any pointers? If yes, can we automate the process?
I read a few papers on C to Java bytecode compilation and found that a major issue was "the pointer problem". So I was thinking that if the above process could be done, it could be included as a preprocessing step (though it may itself be a big task), and then it might be simpler to try converting to JVM bytecode...
Thanks in advance
In theory you can, by simulating individual data structures or even the entire memory (static data, heap and stack) with arrays. But the question is whether this is very practical; it may involve having to rewrite every pointer-based standard library function you need.
Anyway, there's a nice explanation on Wikipedia:
It is possible to simulate pointer behavior using an index to an (normally one-dimensional) array.
Primarily for languages which do not support pointers explicitly but do support arrays, the array can be thought of and processed as if it were the entire memory range (within the scope of the particular array) and any index to it can be thought of as equivalent to a general purpose register in assembly language (that points to the individual bytes but whose actual value is relative to the start of the array, not its absolute address in memory). Assuming the array is, say, a contiguous 16 megabyte character data structure, individual bytes (or a string of contiguous bytes within the array) can be directly addressed and manipulated using the name of the array with a 31 bit unsigned integer as the simulated pointer (this is quite similar to the C arrays example shown above). Pointer arithmetic can be simulated by adding or subtracting from the index, with minimal additional overhead compared to genuine pointer arithmetic.
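A tiny sketch of that idea in C itself (the names mem, bump and alloc_bytes are made up for illustration; a real translation would need a proper allocator with frees and bounds checks):

#include <stdio.h>
#include <string.h>

static char mem[1 << 16];             /* simulated 64 KiB "address space" */
static unsigned bump = 0;             /* next free offset */

/* "Allocate" by handing out offsets into mem; the "pointer" is just an index. */
static unsigned alloc_bytes(unsigned n) {
    unsigned p = bump;
    bump += n;
    return p;
}

int main(void) {
    unsigned s = alloc_bytes(6);      /* simulated pointer to 6 bytes */
    memcpy(&mem[s], "hello", 6);      /* dereferencing = indexing into mem */
    unsigned t = s + 1;               /* "pointer arithmetic" is plain integer math */
    printf("%s\n", &mem[t]);          /* prints "ello" */
    return 0;
}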
Pointers are rather central to C. While C may be Turing-complete without pointers, it's not practical to rewrite arbitrary C without them. Things you can't do without pointers:
dynamic (manual) memory allocation;
passing by reference.
Given that arrays decay into pointers at the drop of a hat, you also couldn't use arrays practically, so you are left with automatic, static and global variables that cannot be arrays. tl;dr: No.
