Why does C not require a garbage collector? [closed]

My understanding of this has come down to C's origins as a "portable assembler" and the option of less overhead. Is that all there is to it?

First of all, let's be clear about what garbage is.
The Java definition of garbage is objects that are no longer reachable. The precise meaning of reachable is a bit abstruse, but a practical definition is that if you can get to an object by following references (pointers) from well known places like thread stacks or static variables, then it may be reachable. (In practice, some imprecision is OK, so long as objects that are reachable don't get deleted.)
You could try to apply the same definition to C and C++. An object is garbage if it cannot be reached.
However, the practical problem with this definition ... and garbage collection ... in C or C++ is whether a "pointer like" value is actually a valid pointer. For instance:
An uninitialized C variable can contain a random value that looks like a pointer to an object.
When a C union type overlays a pointer with a long, a garbage collector cannot be sure whether the union contains one or the other ... or both.
When C application code "compresses" pointers to word aligned heap nodes by dividing them by 4 or 8, a garbage collector won't detect them as "pointer like". Or if it does, it will misinterpret them.
A similar issue arises when C application code represents pointers as offsets relative to something else.
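A minimal sketch of the union and "compressed pointer" cases (illustrative only; not code from any particular program):

#include <stdint.h>
#include <stdlib.h>

union slot {
    void *ptr;   /* sometimes holds a pointer ... */
    long  num;   /* ... sometimes a plain integer; nothing in the
                    representation says which member is live */
};

int main(void) {
    union slot s;
    s.ptr = malloc(64);   /* to a scanner, the bits in s may or may not be a pointer */

    double *node = malloc(8 * sizeof *node);
    uintptr_t packed = (uintptr_t)node / 8;   /* word-aligned, so this is reversible ... */
    void *back = (void *)(packed * 8);        /* ... but only the program knows the rule; */
    free(back);                               /* "packed" does not look pointer-like at all */
    free(s.ptr);
    return 0;
}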
However, it is clear that a C program can call malloc, forget to call free, and then forget the address of the heap node. That node is garbage.
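For example (a minimal illustration):

#include <stdlib.h>

void make_garbage(void) {
    void *node = malloc(100);
    /* ... use node, but never call free(node) ... */
}   /* node goes out of scope here: the heap block is now unreachable garbage */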
There are two reasons why C / C++ don't have garbage collection.
It is "culturally inappropriate". The culture of these languages is to leave storage management to the programmer.
It would be technically difficult (and expensive) to implement a precise garbage collector for C / C++. Indeed, doing this would involve things that would make the language implementation slow.
Imprecise (i.e. conservative) garbage collectors are practical, but they have performance and (I have heard) reliability issues. (For instance, a conservative collector cannot move non-garbage objects.)
It would be simpler if the implementer (of a C / C++ garbage collector) could assume that the programmer only wrote code that strictly conformed to the C / C++ specs. But they don't.
But your real question seems to be: why did they design C like that?
Questions like that can only be answered authoritatively by the designers (in this case, the late Dennis Ritchie) or their writings.
As you point out in the question, C was designed to be simple and "close to the hardware".
However, C was designed in the early 1970's. In those days programming languages which required a garbage collector were rare, and GC techniques were not as advanced as they are now.
And even now, it is still a fact that garbage collected languages (like Java) are not suitable for applications that require predictable "real-time" performance.
In short, I suspect that the designers were of the view that garbage collection would make the language impractical for its intended purpose.

There are some garbage collectors built for C or C++:
Please check http://www.hboehm.info/gc/.
As you stated, garbage collection defeats the performance goals of C and C++, as it requires tracking allocations and/or reference counting.
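For illustration, a minimal sketch of using the Boehm collector (this assumes libgc is installed and the program is linked with -lgc):

#include <gc.h>      /* Boehm-Demers-Weiser conservative collector */
#include <stdio.h>

int main(void) {
    GC_INIT();
    for (int i = 0; i < 1000000; i++) {
        int *p = GC_MALLOC(sizeof *p);  /* allocate like malloc ... */
        *p = i;                         /* ... but never call free: */
    }                                   /* unreachable blocks get collected */
    printf("GC heap size: %lu\n", (unsigned long)GC_get_heap_size());
    return 0;
}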

Related

Should C compilers immediately free "further unused" memories? [closed]

Context: while compiling some C code, compilers may show high RAM consumption. Preliminary investigation shows that (at least some) C compilers do not immediately free "further unused" memory: even though such (previously allocated) memory is never used again, it is kept in RAM. The C compiler continues processing the C code, allocating more memory, until it reaches OOM (out of memory).
The core question: should C compilers immediately free "further unused" memory?
Rationale:
Efficient RAM utilization: no need for mem_X anymore => free mem_X so that other processes (including the compiler itself) can use mem_X.
Ability to compile "RAM-demanding" C code.
UPD20210825. I've memory-profiled a C compiler and found that it keeps "C preprocessor data" in RAM, in particular:
macro table (memory pool for macros);
scanner token objects (memory pool for tokens and for lists).
At a certain point X in the middle end (after the IR is built) these objects seem to be no longer needed and, hence, could be freed. (At the moment, however, they are kept in RAM until a later point X+1.) The benefit shows on "preprocessor-heavy" C programs. Example: a "preprocessor-heavy" C program using "ad hoc polymorphism" implemented via the C preprocessor (a set of macros progressively builds all the "machinery" needed to support a common interface for an arbitrary (and supported) set of individually specified types). The number of "polymorphic" entries is ~50k * 12 = ~600k (yes, on its own that number says little). Results:
before fix: at point X the C compiler keeps ~1.5GB of unused "C preprocessor data" in RAM;
after fix: at point X the C compiler frees ~1.5GB of unused "C preprocessor data" from RAM, hence letting OS processes (including itself) use these ~1.5GB.
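The fix described amounts to freeing a whole memory pool at point X. A hypothetical sketch (pool_t and the function names are illustrative, not any real compiler's internals):

#include <stdlib.h>

typedef struct { void **blocks; size_t count; } pool_t;

void *pool_alloc(pool_t *p, size_t n) {
    void **grown = realloc(p->blocks, (p->count + 1) * sizeof *grown);
    if (!grown) return NULL;
    p->blocks = grown;
    return p->blocks[p->count++] = malloc(n);
}

void pool_free_all(pool_t *p) {    /* called at "point X", once the IR is built */
    for (size_t i = 0; i < p->count; i++)
        free(p->blocks[i]);        /* releases macro tables, token objects, ... in one sweep */
    free(p->blocks);
    p->blocks = NULL;
    p->count = 0;
}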
I don't know where you get your analysis from. Most parts, like the abstract syntax tree, are kept because they are used in all the different passes.
It might be that some compilers, especially simple ones, don't free stuff because it's not considered necessary for a C compiler: compiling a translation unit is a one-shot operation, and then the process ends.
Of course, if you build a compiler library, as tinycc did, you need to free everything, but even that might happen within a final custom heap clearance at the end of the compilation run.
I have not seen this ever be a problem in the real world. But I don't do embedded work, where a lack of resources can be something to worry about.
"allocating more memory, until it reaches OOM (out of memory)."
None of the compilers I use ever run out of memory. Please give an example of such behaviour.
If you are an Arduino user thinking of code which will not fit into the memory - that is not a problem of the compiler, only of the programmer.

Why does Passing Pointers as input arguments allow "swapping" and other possibly unintended side effects?

C beginner. Below is a page from the lecture notes at my university (4 hours total to learn structured programming). The procedure manages to change the places that the global pointers address, despite not having any output. This is confusing as hell, and seems a dangerous thing to be allowed. Can anyone explain what is happening in the second case, and how it happens, i.e. how are the normal rules bypassed?
I know this has been asked from an "I want to use this" background here, and less generally in other places. But I am asking why this is allowed when, normally, changing values in a function produces no effect unless they are returned.
That is basically the whole purpose of having pointers: you can change the values contained at a location in memory, and you can share the same pointer between different pieces of code.
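The canonical illustration is a swap function: the callee changes the caller's variables through the addresses it is given, even though it returns nothing:

#include <stdio.h>

void swap(int *a, int *b) {
    int tmp = *a;  /* read through the pointers ... */
    *a = *b;
    *b = tmp;      /* ... and write back through them */
}

int main(void) {
    int x = 1, y = 2;
    swap(&x, &y);             /* pass the addresses of x and y */
    printf("%d %d\n", x, y);  /* prints "2 1": main's variables changed */
    return 0;
}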
In C, you can make a pointer to almost anything, and then pass that pointer to another function so that the other function can modify the object the pointer points to. Whew, that's a mouthful. C and C++ are fairly unusual in letting you make pointers so freely; many other languages have tight restrictions on what pointers can point to. Java, for example, only allows pointers to point to object instances, and you can't do pointer arithmetic. (Yes, Java has pointers.)
And yes, it is "dangerous", in the sense that it's easy to write incorrect programs that misuse pointers. Dangling pointers, buffer overflows, null pointer dereferencing, and many other types of programming errors are possible in C because of the way pointers work. If you're lucky, your program will crash and you can debug it. If you're unlucky, it won't crash.
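For instance, a dangling pointer (a minimal hypothetical fragment):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);
    *p = 42;
    free(p);             /* p now dangles */
    printf("%d\n", *p);  /* undefined behaviour: may print 42, crash, or worse */
    return 0;
}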
The danger in using pointers is one of the primary motivations behind other languages such as Java, C#, Rust, and Go, and behind facilities like std::unique_ptr<> in C++. If you are writing C, you just have to be careful. So why use C? Sometimes you need to use those pointers and scribble over memory however you want. The Linux kernel is mostly written in C, and plenty of language runtimes are at least partially written in C (I know this applies at least to Python, Java, and Go).
People still write in assembly language, too. Sometimes even C doesn't let you do what you need.

Is memcpy of array in C Vaxocentrist? [closed]

Is a memcpy of a chunk of one array to another in C guilty of Vaxocentrism?
Example:
double A[10];
double B[10];
// ... do stuff ...
// copy elements 3 to 7 in A to elements 2 to 6 in B
memcpy(B+2, A+3, 5*sizeof(double));
As a related question, is casting from an array to a pointer Vaxocentrist?
char A[10];
char* B = (char*)A;
B[0]=2;
A[1]=3;
B[2]=5;
I certainly appreciate the idea of writing code that works under different machine architectures and different compilers, but if I applied type safety to the extreme it would cripple many of C's useful features! How much / little can I assume about how the compiler implements arrays/pointers/etc.?
No. The model on which memcpy works is defined in the abstract machine specified by the C language standard and has nothing to do with any particular physical machine it might be running on. In particular, all objects in C have a representation which is defined as an overlaid array of type unsigned char[sizeof object], and memcpy works on this representation.
Likewise, the 'decay' of arrays to pointers via cast or implicit conversion is completely defined on the abstract machine and has nothing to do with physical machines.
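For instance, the following is fully defined on the abstract machine, with A[1], *(A + 1), B[1], and *(B + 1) all naming the same object, independent of any physical machine:

char A[10];
char *B = A;   /* implicit decay: A converts to &A[0], no cast needed */

/* by the standard's definition of the subscript operator, E1[E2]
   means *(E1 + E2), so A[1] and B[1] are the same object */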
Further, none of the points 1-14 in the linked article have anything to do with the code you're asking about.
In C code, memcpy() can be a useful optimization in a couple of cases. First, if the region of memory is very small, then the copy can often be inlined directly by the compiler instead of calling a function. This can be a big win in a tight loop that runs a lot. Second, where the region is very large and the hardware supports a faster mode of memory access for suitably aligned addresses, that faster code can be used for the vast majority of the memory. You honestly do not want to know the scary details of alignment and copy operations for different hardware; better to just put that stuff in memcpy() and let everyone use it.
For your first example, B+2 and &B[2] are equivalent by definition, though some find the explicit address-of form easier to read. The copy is safe because both arrays are of size 10, array elements are laid out sequentially starting from element 0, and the copy doesn't go outside the bounds of the destination array B:
memcpy(&B[2], &A[3], 5*sizeof(double));
On your related point, the cast is likewise unnecessary, since an array decays to a pointer to its first element; you can write the following:
char A[10];
char* B = &A[0];
B[0]=2;
A[1]=3;
B[2]=5;

How can I provide garbage collection for an interpreted language implemented in C?

If I were to implement a garbage-collected interpreted language in C, how could I go about providing precise (i.e. not conservative) garbage collection without writing my own garbage collector? Are there libraries available for this? If so, which ones? I understand that I would have to maintain certain invariants on my end for any objects tracked by the garbage collector.
If you want a precise GC (not a conservative one like Boehm's GC, which performs quite well in practice) you should track local pointer variables (those pointing to GC-ed data), or else invoke the GC only with a nearly empty call stack, when you are sure there are no such local variables. (BTW, the GCC compiler has such a mark&sweep garbage collector - with marking routines generated by a specialized code generator, gengtype; that GGC is invoked only between passes.) Of course you should also track global (including static or thread-local) pointer variables to GC-ed data.
Alternatively, have some bytecode virtual machine (like OCaml or NekoVM have); then the local GC-ed variables are those in the stacks and/or registers of your bytecode VM, and you trigger your GC at specific and carefully chosen points of your VM interpreter. (See this explanation of the OCaml GC.)
You should read more about Garbage Collection techniques, see the GC handbook.
If your GC is a generational copying one, you need to implement write barriers (to handle mutation of old data pointing into the new zone). You could use my old Qish GC (which I don't maintain much anymore), or Ravenbrook's MPS, or write your own generational copying GC (this is not that hard in theory, but debugging GCs is a nightmare in practice, so it is a lot of work).
You probably want to use some macro tricks (like my Qish does) to help keep track of your local variables. See the "Living in harmony with the garbage collector" section of the OCaml documentation as an example (or look inside Qish).
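One common shape for such macro tricks is a "shadow stack" of local roots. A hypothetical sketch (not Qish's actual macros):

struct gc_root { void **slot; struct gc_root *prev; };
static struct gc_root *gc_roots = NULL;   /* walked by the marker as part of the root set */

#define GC_PROTECT(var) \
    struct gc_root gc_root_##var = { (void **)&(var), gc_roots }; \
    gc_roots = &gc_root_##var
#define GC_UNPROTECT(var) (gc_roots = gc_root_##var.prev)

/* usage inside a function, assuming obj and gc_alloc are your own types:
       obj *x = gc_alloc(...);
       GC_PROTECT(x);      -- x is now visible to the collector
       ... code that may allocate, and therefore collect ...
       GC_UNPROTECT(x);
*/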
Notice that a generational copying GC is not friendly to manually written C code (because you need to explicitly keep local pointers, and because you need a write barrier to remember when an old value is modified to point into the new generation). If you want to do that, your C code should be in A-normal form: you cannot code x=f(g(y),z); you need to code temp=g(y); x=f(temp,z); and add temp as a local variable (assuming that x, y, z are local GC-ed variables and that both f and g return a GC-ed pointer). In practice it is much easier to generate the C code. See my MELT domain-specific language (to extend and customize GCC) as an example.
If your language is genuinely multithreaded (several mutator threads allocating in parallel), then coding a GC becomes quite tricky. It might take several months of work (and is probably a nightmare to debug).
Actually, I would today recommend using Boehm's GC (notice that it is multithread-friendly). A naive handcoded mark&sweep GC would probably not be faster than Boehm's GC. And you won't be able (and I don't recommend trying) to use GGC, the garbage collector internal to GCC (which, IMNSHO, is not very good; it was a dirty hack designed many years ago).
BTW, you might consider customizing the GCC compiler - e.g. with MELT - (by adding some application-specific __attribute__ or #pragma) to help your GC. With some work, you could generate some of the marking routines, etc. However, that approach might be quite painful (I really don't know). Notice that MELT (free software, GPLv3+) contains a copying generational GC whose old generation is the GGC heap, so you could at least look inside the code of melt-runtime.cc.
PS. I also recommend Queinnec's book Lisp In Small Pieces; it has some interesting material about GCs and their connection to programming languages, and it is a great book to read when you are implementing an interpreter. Scott's Programming Language Pragmatics is also worth reading.
For C programs, there are two options: the Boehm GC, which replaces malloc (it is a conservative GC, so perhaps not exactly what you're looking for, but it's either that or...), or writing your own.
But writing your own isn't that hard. Use the mark-sweep algorithm. The root set for marking will be your symbol table. And you'll need another table or linked list to track all of the allocated memory that can be freed. When you sweep through the list of allocations, free anything without a mark.
The actual coding will of course be more complicated because you have to iterate through these two kinds of data structures, but the algorithm itself is very simple. You can do it.
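A bare-bones sketch of the sweep half, assuming every allocation carries a header with a mark bit and a link into the allocation list:

#include <stdlib.h>

typedef struct header {
    struct header *next;  /* every allocation is linked into one list */
    int marked;           /* set by the mark phase, starting from the symbol table */
} header_t;

static header_t *all_objects = NULL;

void sweep(void) {
    header_t **link = &all_objects;
    while (*link) {
        header_t *h = *link;
        if (h->marked) {
            h->marked = 0;    /* reset for the next collection cycle */
            link = &h->next;
        } else {
            *link = h->next;  /* unlink the unmarked object ... */
            free(h);          /* ... and free anything without a mark */
        }
    }
}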
A few years ago, I found myself on the same search and these were (and AFAIK still are) the results. Writing your own is tremendously rewarding and worthwhile.
In practice, a great many other issues will arise as Basile's answer touches upon.
If the garbage collector is called from deep in the call stack (by an allocation routine that needs more memory, perhaps), then care must be taken about any allocations whose handles are still held in local variables of C functions in the call stack, and not yet saved out to their symbol-table or database locations. In my PostScript interpreter I dealt with this by using a temporary stack which all allocators pushed to. This stack was cleared by the main loop after all subroutines had returned, and it was considered part of the root set during marking. In my APL interpreter, I call the GC every time around the main loop. For little programs in little languages, the speed issues are less vital than the more-dreaded memory leakage, at least among the circles which have influenced me.
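Sketched in C, that temporary stack might look like this (illustrative; gc_alloc stands in for the interpreter's real allocator, and the bound check is omitted for brevity):

#include <stddef.h>

extern void *gc_alloc(size_t n);    /* the interpreter's own allocator (assumed) */

#define TEMP_MAX 256
static void *temp_stack[TEMP_MAX];  /* scanned as part of the root set during marking */
static int temp_top = 0;

void *alloc_tracked(size_t n) {
    void *obj = gc_alloc(n);        /* fresh object, not yet stored anywhere reachable */
    temp_stack[temp_top++] = obj;   /* keep it alive until the main loop ... */
    return obj;
}

void clear_temp_stack(void) {
    temp_top = 0;                   /* ... clears the stack after subroutines return */
}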
When implementing such a language, your interpreter needs to keep track of all objects in the program it's running, including knowledge of their types and of which parts of the data are references to other data. Then it's trivial for you to walk all the data and implement whatever sort of garbage collector you like. No bogus hacks like trying to determine where the C implementation's "heap"/"stack"/etc. are located, or guessing at what might be a pointer, are needed, because you're dealing exactly with the data whose structure you know.

Is it feasible for a general purpose programming language to not have a heap? [closed]

I'm looking into creating a programming language. What I'm wondering is, in a language that contains a reference-like construct, is it feasible to not have a new/malloc operator? That is, all variables are stored either on the stack somewhere or are statically allocated.
The idea is that you get better type safety, as well as "free garbage collection" without actually having a garbage collector.
I'm not familiar with too many scripting languages, so if one already does this, feel free to point it out.
(Dynamic / unknown size data structures would be handled by a dynamic list structure, which would be handled (obviously) on the heap, behind the user's back.)
Fortran was always quite a "general purpose" language, but it had no support for any kind of dynamic memory allocation out of the box.
A usual practice was to allocate a big array statically and simulate your own memory management on top of it.
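That practice amounts to a bump allocator over a static array; a minimal sketch:

#include <stddef.h>

#define POOL_SIZE 1048576
static char pool[POOL_SIZE];     /* the big statically allocated array */
static size_t used = 0;

void *my_alloc(size_t n) {
    n = (n + 7u) & ~(size_t)7;   /* round up to keep 8-byte alignment */
    if (n > POOL_SIZE - used) return NULL;
    void *p = pool + used;
    used += n;
    return p;                    /* nothing is ever freed: the pool lives for the run */
}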
If a way to get rid of both GC and manual memory management is what you're looking for, then region analysis can help, but only in a few specific cases.
Region-based memory management was one approach for not having a heap managed in the traditional sense. This manifested in languages like FX and MLKit.
There is no requirement at all that you absolutely have to implement a stack, or a heap. C does not specify a stack either, for example. In fact, in many languages you don't even need to care: you just require the implementation (a compiler, an interpreter, or whatever) to make room for a variable, and perhaps say for how long.
Your language's interpreter (assuming there is one) could do int main(void) { char memory[1048576]; run_script_from_stdin_using(memory); }. You could even call mmap(2) to get an anonymous block of memory and use that to stash your variables in. It just does not matter where objects live; "stack" and "heap" are terms of questionable meaning anyway, given that they are often interchangeable.
You can allocate objects on the stack if you know they won't be referenced after the method terminates. This means the object is used solely within the method (e.g. a temp object), or solely in methods resulting from nested invocations. This is, however, a severe restriction. There are some dynamic optimizations that go in this direction, though (at least for temp objects). You could maybe have a static checking mechanism that enforces this restriction, or possibly distinguish between heap and stack objects with types...
