C program with pointers

Is it possible to convert any program written in C using pointers into another C program that does not contain any pointers? If yes, can we automate the process?
I read a few papers on C to Java bytecode compilation and found that a major issue was "the pointer problem". So I was thinking that if the above conversion could be done, it could be included as a preprocessing step (though it may itself be a big task), and then it might be simpler to try converting to JVM bytecode.
Thanks in advance

In theory you can, by simulating individual data structures or even the entire memory (static data, heap and stack) with arrays. But the question is whether this is practical; it may involve rewriting every pointer-based standard library function you need.
Anyway, there's a nice explanation on Wikipedia:
It is possible to simulate pointer behavior using an index into a (normally one-dimensional) array.
Primarily for languages which do not support pointers explicitly but do support arrays, the array can be thought of and processed as if it were the entire memory range (within the scope of the particular array) and any index to it can be thought of as equivalent to a general purpose register in assembly language (that points to the individual bytes but whose actual value is relative to the start of the array, not its absolute address in memory). Assuming the array is, say, a contiguous 16 megabyte character data structure, individual bytes (or a string of contiguous bytes within the array) can be directly addressed and manipulated using the name of the array with a 31 bit unsigned integer as the simulated pointer (this is quite similar to the C arrays example shown above). Pointer arithmetic can be simulated by adding or subtracting from the index, with minimal additional overhead compared to genuine pointer arithmetic.
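A minimal C sketch of that idea (all names here are illustrative, not taken from any real translator): a byte array stands in for the whole memory, an integer index stands in for a pointer, and a bump allocator stands in for malloc.
#include <stdio.h>

/* The whole "memory" is one big byte array; an index into it plays
   the role of a pointer. */
static unsigned char mem[1 << 16];     /* 64 KiB of simulated memory */
static unsigned int alloc_top = 0;     /* top of the simulated heap */

/* Simulated malloc: hand out the next free index (no free list). */
static unsigned int sim_alloc(unsigned int n) {
    unsigned int p = alloc_top;
    alloc_top += n;
    return p;                          /* a "pointer" is just an index */
}

int main(void) {
    unsigned int p = sim_alloc(2);     /* stands in for "p = malloc(2)" */
    mem[p] = 42;                       /* stands in for "*p = 42" */
    mem[p + 1] = 7;                    /* pointer arithmetic is index arithmetic */
    printf("%d %d\n", mem[p], mem[p + 1]);
    return 0;
}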

Pointers are rather central to C. While C may be Turing-complete without pointers, it's not practical to rewrite arbitrary C without them. Things you can't do without pointers:
- dynamic (manual) memory allocation
- passing by reference (see the sketch below)
Given that arrays decay into pointers at the drop of a hat, you also couldn't use arrays practically, so you are left with automatic, static and global variables that cannot be arrays.
tl;dr: No
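To illustrate the pass-by-reference point with the classic example: since C passes arguments by value, a function can only modify its caller's variables through a pointer.
#include <stdio.h>

/* Without pointers, swap(a, b) would only exchange local copies and the
   caller's variables would be untouched. */
void swap(int *x, int *y) {
    int tmp = *x;
    *x = *y;
    *y = tmp;
}

int main(void) {
    int a = 1, b = 2;
    swap(&a, &b);
    printf("%d %d\n", a, b);   /* prints "2 1" */
    return 0;
}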

Related

Contiguity of arrays in modern Fortran

I am reading "Modern Fortran Explained" (2018 edition). It says on p24:
Most implementations actually store arrays in contiguous storage in array element order,
but we emphasize that the standard does not require this.
I believe that this already dates back to old Fortran 77 (and earlier). Given a declaration
real, dimension(5,5) :: a
my understanding is that the standard does not require that this array is contiguous. This agrees with what this book says in Sec. 7.18.1 on p140:
... the Fortran standard has shied away from specifying whether arrays are contiguous in the sense of occupying sequential memory locations with no intervening unoccupied spaces.
But then the book goes on (still p140):
Any of the following arrays are considered to be contiguous by the standard:
an array with the contiguous attribute;
a whole array (named array or array component without further qualification) that is not a pointer or assumed-shape;
...
I understand the first criterion, but I am confused about the second. Doesn't this contradict the earlier statement that the Fortran standard does not require arrays to be stored in contiguous storage in array element order? Has modern Fortran moved away from this paradigm? Or do the above quotes not contradict each other for reasons that go beyond my understanding of this topic? In the above example, is the array a required to be contiguous or not?
The language standard specifies what has to happen, in terms of the concepts described by the standard (statements being executed in a certain order, variables being defined with values, content being written to things called "files"), but it says nothing about how those things happen or how they are implemented.
There may be an obvious method of implementation, perhaps even only one practical method, but still, the standard does not specify implementation.
The standard does not even require a "Fortran processor" to be an electronic device.
Practically, it is a pretty safe assumption that a "contiguous array" (a Fortran standard language term) will be implemented with the bytes that represent the values of the array elements stored next to each other in RAM, but that is an implementation detail, not a language standard requirement.
It is handy for programmers to be aware of likely implementation methods, particularly for debugging or understanding performance aspects, but implementation and specification shouldn't be conflated.
There are two different concepts of contiguity here.
In the Fortran sense, contiguous means (Fortran 2018, 3.41):
having array elements in order that are not separated by other data objects
This concept of contiguity allows for "padding" between array elements if they are stored in a memory which has that possibility. What contiguity means is that A(1) and A(2) have no other data object between them.
That is, a scalar may take up some space, but as an element of an array the same object may take up more space. As given by (F2018 16.9.184, Note 1):
An array element might take more bits to store than an isolated scalar, since any hardware-imposed alignment requirements for array elements might not apply to a simple scalar variable.
The description of which arrays are considered to be contiguous refers to this sense of contiguity. It is the different concept of contiguity in the second quote of the question which is not mandated.
From your example
real, dimension(5,5) :: a
we do have a contiguous array a: it is a whole array which is not a pointer and is not assumed-shape. It may still have some degree of padding between elements.

Why do arrays in C decay to pointers?

[This is a question inspired by a recent discussion elsewhere, and I'll provide an answer right with it.]
I was wondering about the odd C phenomenon of arrays "decaying" to pointers, e.g. when used as function arguments. That just seems so unsafe. It is also inconvenient to pass the length explicitly with it. And I can pass the other type of aggregate -- structs -- perfectly well by value; structs do not decay.
What is the rationale behind this design decision? How does it integrate with the language? Why is there a difference to structs?
Rationale
Let's examine function calls because the problems are nicely visible there: Why are arrays not simply passed to functions as arrays, by value, as a copy?
There is first a purely pragmatic reason: Arrays can be big; it may not be advisable to pass them by value because they could exceed the stack size, especially in the 1970s. The first compilers were written on a PDP-7 with about 9 kB RAM.
There is also a more technical reason rooted in the language. It would be hard to generate code for a function call with arguments whose size is not known at compile time. For all arrays, including variable length arrays in modern C, simply the addresses are put on the call stack. The size of an address is of course well known. Even languages with elaborate array types carrying run time size information do not pass the objects proper on the stack. These languages typically pass "handles" around, which is what C has effectively done, too, for 40 years. See Jon Skeet here and an illustrated explanation he references (sic) here.
Now a language could make it a requirement that an array always have a complete type; i.e. whenever it is used, its complete declaration including the size must be visible. This is, after all, what C requires from structures (when they are accessed). Consequently, structures can be passed to functions by value. Requiring the complete type for arrays as well would make function calls easily compilable and obviate the need to pass additional length arguments: sizeof() would still work as expected inside the callee. But imagine what that means. If the size were really part of the array's argument type, we would need a distinct function for each array size:
// for user input.
int average_ten(int arr[10]);
// for my new Hasselblad.
int average_twohundredfivemilliononehundredfortyfivethousandsixhundred(int arr[16544*12400]);
// ...
In fact it would be totally comparable to passing structures, which differ in type if their elements differ (say, one struct with 10 int elements and one with 16544*12400). It is obvious that arrays need more flexibility. For example, as demonstrated one could not sensibly provide generally usable library functions which take array arguments.
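For contrast, actual C sidesteps the conundrum by adjusting array parameters to pointers, which is also why sizeof cannot recover the length inside the callee; a small demonstration (the function name f is just for illustration):
#include <stdio.h>

/* "int arr[10]" in a parameter list is silently adjusted to "int *arr";
   the 10 is not part of the type the function sees. */
void f(int arr[10]) {
    printf("inside:  sizeof arr = %zu\n", sizeof arr);   /* size of a pointer */
}

int main(void) {
    int a[10];
    printf("outside: sizeof a   = %zu\n", sizeof a);     /* 10 * sizeof(int) */
    f(a);   /* a decays to &a[0] */
    return 0;
}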
This "strong typing conundrum" is, in fact, what happens in C++ when a function takes a reference to an array; that is also the reason why nobody does it, at least not explicitly. It is totally inconvenient to the point of being useless except for cases which target specific uses, and in generic code: C++ templates provide compile-time flexibility which is not available in C.
If, in existing C, you really want to pass an array of known size by value, you can always wrap it in a struct. I remember that some IP-related headers on Solaris defined address-family structures with arrays in them, precisely so that they could be copied around. Because the byte layout of the struct was fixed and known, that made sense.
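A minimal sketch of that struct-wrapping trick (the type and function names are made up for illustration; the Solaris headers looked different):
#include <stdio.h>

/* Wrapping the array in a struct gives it a complete type, so it can be
   passed (and returned) by value like any other struct. */
struct addr { unsigned char bytes[4]; };

static struct addr bump_last(struct addr a) {
    a.bytes[3]++;          /* modifies only the callee's copy */
    return a;
}

int main(void) {
    struct addr ip = { { 192, 168, 0, 1 } };
    struct addr next = bump_last(ip);              /* the whole array is copied */
    printf("%d %d\n", ip.bytes[3], next.bytes[3]); /* prints "1 2" */
    return 0;
}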
For some background it's also interesting to read The Development of the C Language by Dennis Ritchie about the origins of C. C's predecessor BCPL didn't have any arrays; the memory was just homogeneous linear memory with pointers into it.
The answer to this question can be found in Dennis Ritchie's "The Development of the C Language" paper (see the "Embryonic C" section).
According to Dennis Ritchie, the nascent versions of C directly inherited/adopted array semantics from B and BCPL languages - predecessors of C. In those languages arrays were literally implemented as physical pointers. These pointers pointed to independently allocated blocks of memory containing the actual array elements. These pointers were initialized at run time. I.e. back in B and BCPL days arrays were implemented as "binary" (bipartite) objects: an independent pointer pointing to an independent block of data. There was no difference between pointer and array semantics in those languages, aside from the fact that array pointers were initialized automatically. At any time it was possible to re-assign an array pointer in B and BCPL to make it point somewhere else.
Initially, this approach to array semantics got inherited by C. However, its drawbacks became immediately obvious when struct types were introduced into the language (something neither B nor BCPL had). And the idea was that structs should naturally be able to contain arrays. However, continuing to stick with the above "bipartite" nature of B/BCPL arrays would immediately lead to a number of obvious complications with structs. E.g. struct objects with arrays inside would require non-trivial "construction" at the point of definition. It would become impossible to copy such struct objects - a raw memcpy call would copy the array pointers without copying the actual data. One wouldn't be able to malloc struct objects, since malloc can only allocate raw memory and does not trigger any non-trivial initializations. And so on and so forth.
This was deemed unacceptable, which led to the redesign of C arrays. Instead of implementing arrays through physical pointers, Ritchie decided to get rid of the pointers entirely. The new array was implemented as a single immediate memory block, which is exactly what we have in C today. However, for backward compatibility reasons the behavior of B/BCPL arrays was preserved (emulated) as much as possible at a superficial level: the new C array readily decayed to a temporary pointer value pointing to the beginning of the array. The rest of the array functionality remained unchanged, relying on that readily available result of the decay.
To quote the aforementioned paper:
The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression. The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array.
This invention enabled most existing B code to continue to work, despite the underlying shift in the language's semantics. The few programs that assigned new values to an array name to adjust its origin—possible in B and BCPL, meaningless in C—were easily repaired. More important, the new language retained a coherent and workable (if unusual) explanation of the semantics of arrays, while opening the way to a more comprehensive type structure.
So, the direct answer to your "why" question is as follows: arrays in C were designed to decay to pointers in order to emulate, as closely as possible, the historical behavior of arrays in the B and BCPL languages.
Take your time machine and travel back to 1970. Start designing a programming language. You want the following code to compile and do the expected thing:
size_t i;
int *p = (int *) malloc(10 * sizeof(int));
for (i = 0; i < 10; ++i) p[i] = i;
int a[10];
for (i = 0; i < 10; ++i) a[i] = i;
At the same time, you want a language that is simple. Simple enough that you can compile it on a 1970s computer. The rule that "a" decays to "pointer to first element of a" achieves that nicely.

The dynamic array in C

Recently I have found it annoying to deal with arrays in the C language.
I have to call realloc() frequently to increase their size,
and there is no standard data structure like vector in C++ or ArrayList in Java.
I have learned that the Linux kernel has some data structures for this, such as kfifo,
which can be used through the kfifo_in() and kfifo_out() functions.
But this means the user would define kfifo *pointer; to track the array, and this variable does not carry any information about the type of the elements contained in the structure.
The user has to remember that whenever he uses the dynamic array through the kfifo pointer.
I think it may be a little confusing.
Is there any better way to deal with this problem? What's the common solution in Linux C programming?
realloc is not that bad, as long as you do not spread it all over your code, and use a reasonable strategy to grow your dynamic array.
Rolling your own dynamic array in C is a matter of implementing a handful of easy functions. Numerous short articles walk you through this exercise - here is one as an example. The article defines a struct that represents your dynamic array, along with the currently used and the allocated size. It also provides functions for initializing, growing, and de-allocating the array represented by the structure. There is no explicit initialization function in that library - you initialize by passing NULL as the first parameter. This is a valid approach, but you could also opt for a more traditional separation of init and grow.
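For concreteness, here is a minimal sketch of such a dynamic array; the names and the doubling growth strategy are illustrative choices, and the linked article's version differs in its details.
#include <stdlib.h>

/* A growable int array: used length, allocated capacity, and the data. */
struct vec {
    size_t len, cap;
    int *data;
};

/* Append one element, doubling the capacity when it runs out.
   Returns 0 on success, -1 if realloc fails. */
static int vec_push(struct vec *v, int x) {
    if (v->len == v->cap) {
        size_t ncap = v->cap ? 2 * v->cap : 8;
        int *p = realloc(v->data, ncap * sizeof *p);
        if (!p) return -1;
        v->data = p;
        v->cap = ncap;
    }
    v->data[v->len++] = x;
    return 0;
}

static void vec_free(struct vec *v) {
    free(v->data);
    v->data = NULL;
    v->len = v->cap = 0;
}
Initialize with struct vec v = {0};, push with vec_push(&v, x), and release with vec_free(&v) when done.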
I'd use GLib arrays. It's a very well-known library on Linux and other OSes, used in projects like GNOME.
There is no standard for dynamic arrays in C.
I mean they could define struct array { int len; int capacity; int each_element_size; void *data; }, copy the bytes of each element, and put them at the end of data. – bluesea
@bluesea: This is already taken care of in the library under discussion. See the macros that it comes with and the examples in the main.c file. Depending on the macros being used, you would either end up with an array of pointers to the original data, or an array of pointers to a copy of the data.
FWIW, I'm the author of the library, and I'll be the first to admit that it comes without airbags, so you have to be sure to use it safely (as with anything else in C).

Why does C need arrays if it has pointers?

If we can use pointers and malloc to create and use arrays, why does the array type exist in C? Isn't it unnecessary if we can use pointers instead?
Arrays are faster than dynamic memory allocation.
Arrays are "allocated" at "compile time" whereas malloc allocates at run time. Allocating takes time.
Also, C does not mandate that malloc() and friends be available in freestanding implementations.
Edit
Example of array
#define DECK_SIZE 52

int main(void) {
    int deck[DECK_SIZE];
    play(deck, DECK_SIZE);
    return 0;
}
Example of malloc()
int main(void) {
    size_t len = 52;
    int *deck = malloc(len * sizeof *deck);
    if (deck) {
        play(deck, len);
    }
    free(deck);
    return 0;
}
In the array version, the space for the deck array was reserved by the compiler when the program was built (but, of course, the memory is only actually reserved/occupied while the program is running); in the malloc() version, space for the deck array has to be requested on every run of the program.
Arrays can never change size; malloc'd memory can grow when needed.
If you only need a fixed number of elements, use an array (within the limits of your implementation).
If you need memory that can grow or shrink during the running of the program, use malloc() and friends.
It's not a bad question. In fact, early C had no array types.
Global and static arrays are allocated at compile time (very fast). Other arrays are allocated on the stack at runtime (fast). Allocating memory with malloc (to be used for an array or otherwise) is much slower. A similar thing is seen in deallocation: dynamically allocated memory is slower to deallocate.
Speed is not the only issue. Array types are automatically deallocated when they go out of scope, so they cannot be "leaked" by mistake. You don't need to worry about accidentally freeing something twice, and so on. They also make it easier for static analysis tools to detect bugs.
You may argue that there is the function _alloca() which lets you allocate memory from the stack. Yes, there is no technical reason why arrays are needed over _alloca(). However, I think arrays are more convenient to use. Also, it is easier for the compiler to optimise the use of an array than a pointer with an _alloca() return value in it, since it's obvious what a stack-allocated array's offset from the stack pointer is, whereas if _alloca() is treated like a black-box function call, the compiler can't tell this value in advance.
EDIT, since tsubasa has asked for more details on how this allocation occurs:
On x86 architectures, the ebp register normally refers to the current function's stack frame, and is used to reference stack-allocated variables. For instance, you may have an int located at [ebp - 8] and a char array stretching from [ebp - 24] to [ebp - 9]. And perhaps more variables and arrays on the stack. (The compiler decides how to use the stack frame at compile time. C99 compilers allow variable-length arrays to be stack-allocated; this is just a matter of doing a tiny bit of work at runtime.)
In x86 code, pointer offsets (such as [ebp - 16]) can be represented in a single instruction. Pretty efficient.
Now, an important point is that all stack-allocated variables and arrays in the current context are retrieved via offsets from a single register. If you call malloc there is (as I have said) some processing overhead in actually finding some memory for you. But also, malloc gives you a new memory address. Let's say it is stored in the ebx register. You can't use an offset from ebp anymore, because you can't tell what that offset will be at compile time. So you are basically "wasting" an extra register that you would not need if you used a normal array instead. If you malloc more arrays, you have more "unpredictable" pointer values that magnify this problem.
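As a small aside on the C99 point above: a variable-length array gets exactly this stack treatment even though its size is only known at run time (a minimal sketch; the function name is illustrative):
#include <stdio.h>

/* C99 VLA: allocated on the stack on function entry, released
   automatically on return. */
static long sum_first(size_t n) {
    long a[n];                 /* size known only at run time */
    long s = 0;
    for (size_t i = 0; i < n; ++i) a[i] = (long)i;
    for (size_t i = 0; i < n; ++i) s += a[i];
    return s;
}

int main(void) {
    printf("%ld\n", sum_first(10));   /* prints "45" */
    return 0;
}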
Arrays have their uses, and you should use them where you can: static allocation helps make programs more stable, and at times it is a necessity, since it guarantees that memory leaks cannot happen.
They exist because some requirements require them.
In a language such as BASIC, the set of allowed commands is fixed and known from the language definition. So what would be the benefit of using malloc to create the arrays and then filling them in from strings?
If I have to define the names of the operations anyway, why not put them into an array?
C was written as a general purpose language, which means that it should be useful in any situation, so they had to ensure that it had the constructs to be useful for writing operating systems as well as embedded systems.
An array is a shorthand way of referring to the beginning of a block of memory, such as one obtained from malloc, for example.
But imagine trying to do matrix math using pointer manipulation rather than vec[x] * vec[y]. It would be very prone to difficult-to-find errors.
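To make that concrete, here is the same dot product written both ways (a minimal sketch; both loops compute the same value):
#include <stdio.h>

#define N 3

int main(void) {
    int vec[N] = { 1, 2, 3 };
    int dot_idx = 0, dot_ptr = 0;

    /* Index notation: the intent is visible at a glance. */
    for (int i = 0; i < N; ++i)
        dot_idx += vec[i] * vec[i];

    /* The same computation via pointer manipulation: correct here,
       but easier to get subtly wrong. */
    for (int *p = vec; p < vec + N; ++p)
        dot_ptr += *p * *p;

    printf("%d %d\n", dot_idx, dot_ptr);   /* prints "14 14" */
    return 0;
}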
See this question discussing space hardening and C. Sometimes dynamic memory allocation is just a bad idea, I have worked with C libraries that are completely devoid of malloc() and friends.
You don't want a satellite dereferencing a NULL pointer any more than you want air traffic control software forgetting to zero out heap blocks.
It's also important (as others have pointed out) to understand what is part of C itself and what extends it via various uniform standards (i.e., POSIX).
Arrays are a nice syntactic improvement over dealing with raw pointers. You can make all sorts of mistakes unknowingly when dealing with pointers. What if you step too far through memory because you're using the wrong element size?
Explanation by Dennis Ritchie about C history:
Embryonic C
NB existed so briefly that no full description of it was written. It supplied the types int and char, arrays of them, and pointers to them, declared in a style typified by
int i, j;
char c, d;
int iarray[10];
int ipointer[];
char carray[10];
char cpointer[];
The semantics of arrays remained exactly as in B and BCPL: the declarations of iarray and carray create cells dynamically initialized with a value pointing to the first of a sequence of 10 integers and characters respectively. The declarations for ipointer and cpointer omit the size, to assert that no storage should be allocated automatically. Within procedures, the language's interpretation of the pointers was identical to that of the array variables: a pointer declaration created a cell differing from an array declaration only in that the programmer was expected to assign a referent, instead of letting the compiler allocate the space and initialize the cell.
Values stored in the cells bound to array and pointer names were the machine addresses, measured in bytes, of the corresponding storage area. Therefore, indirection through a pointer implied no run-time overhead to scale the pointer from word to byte offset. On the other hand, the machine code for array subscripting and pointer arithmetic now depended on the type of the array or the pointer: to compute iarray[i] or ipointer+i implied scaling the addend i by the size of the object referred to.
These semantics represented an easy transition from B, and I experimented with them for some months. Problems became evident when I tried to extend the type notation, especially to add structured (record) types. Structures, it seemed, should map in an intuitive way onto memory in the machine, but in a structure containing an array, there was no good place to stash the pointer containing the base of the array, nor any convenient way to arrange that it be initialized. For example, the directory entries of early Unix systems might be described in C as
struct {
    int inumber;
    char name[14];
};
I wanted the structure not merely to characterize an abstract object but also to describe a collection of bits that might be read from a directory. Where could the compiler hide the pointer to name that the semantics demanded? Even if structures were thought of more abstractly, and the space for pointers could be hidden somehow, how could I handle the technical problem of properly initializing these pointers when allocating a complicated object, perhaps one that specified structures containing arrays containing structures to arbitrary depth?
The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression. The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array.
To summarize in my own words: if name above were just a pointer, every object of that struct type would contain an additional pointer, destroying the perfect mapping of the struct onto an external object (like a directory entry).
