malloc is not guaranteed to return 0'ed memory. The conventional wisdom is not only that, but that the contents of the memory malloc returns are actually non-deterministic, e.g. openssl used them for extra randomness.
However, as far as I know, malloc is built on top of brk/sbrk, which do "return" 0'ed memory. I can see why the contents of what malloc returns may be non-0, e.g. from previously free'd memory, but why would they be non-deterministic in "normal" single-threaded software?
Is the conventional wisdom really true (assuming the same binary and libraries)?
If so, why?
Edit: Several people answered explaining why the memory can be non-0, which I already explained in the question above. What I'm asking is why a program using the contents of what malloc returns may be non-deterministic, i.e. why it could have different behavior every time it's run (assuming the same binary and libraries). Non-deterministic behavior is not implied by non-0's. To put it differently: why the memory could have different contents every time the binary is run.
Malloc does not guarantee unpredictability... it just doesn't guarantee predictability.
E.g., consider that
return 0;
is a valid implementation of malloc.
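Spelled out as a full function, that degenerate but conforming allocator would look something like this (a sketch; it simply reports allocation failure on every call, which the standard permits):

#include <stddef.h>

void *malloc(size_t size)
{
    (void)size;   /* ignore the request... */
    return NULL;  /* ...and report allocation failure, which is allowed */
}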
The initial values of memory returned by malloc are unspecified, which means that the specifications of the C and C++ languages put no restrictions on what values can be handed back. This makes the language easier to implement on a variety of platforms. While it might be true that in Linux malloc is implemented with brk and sbrk and the memory should be zeroed (I'm not even sure that this is necessarily true, by the way), on other platforms, perhaps an embedded platform, there's no reason that this would have to be the case. For example, an embedded device might not want to zero the memory, since doing so costs CPU cycles and thus power and time.

Also, in the interest of efficiency, the memory allocator could recycle blocks that had previously been freed without zeroing them out first. This means that even if the memory from the OS is initially zeroed out, the memory from malloc needn't be.
The conventional wisdom that the values are nondeterministic is probably a good one because it forces you to realize that any memory you get back might have garbage data in it that could crash your program. That said, you should not assume that the values are truly random. You should, however, realize that the values handed back are not magically going to be what you want. You are responsible for setting them up correctly. Assuming the values are truly random is a Really Bad Idea, since there is nothing at all to suggest that they would be.
If you want memory that is guaranteed to be zeroed out, use calloc instead.
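For example (a minimal sketch of the difference):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* calloc zeroes the block for you (and, in practice, checks the
       count * size multiplication for overflow): */
    int *a = calloc(100, sizeof *a);

    /* malloc does not; if you need zeroed memory you must do it yourself: */
    int *b = malloc(100 * sizeof *b);
    if (b != NULL)
        memset(b, 0, 100 * sizeof *b);

    free(a);
    free(b);
    return 0;
}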
Hope this helps!
malloc is defined on many systems that can be programmed in C/C++, including many non-UNIX systems and many systems that lack an operating system altogether. Requiring malloc to zero out the memory would go against C's philosophy of saving CPU cycles wherever possible.
The standard provides a zeroing call, calloc, that can be used if you need to zero out the memory. But in cases where you are planning to initialize the memory yourself as soon as you get it, the CPU cycles spent making sure the block is zeroed out are a waste; the C standard aims to avoid this waste as much as possible, often at the expense of predictability.
Memory returned by malloc is not zeroed (or rather, is not guaranteed to be zeroed) because it does not need to be. There is no security risk in reusing uninitialized memory pulled from your own process's address space or page pool. You already know it's there, and you already know the contents. There is also no issue with the contents in a practical sense, because you're going to overwrite them anyway.
Incidentally, the memory returned by malloc is zeroed upon first allocation, because an operating system kernel cannot afford the risk of giving one process data that another process owned previously. Therefore, when the OS faults in a new page, it only ever provides one that has been zeroed. However, this is totally unrelated to malloc.
(Slightly off-topic: the Debian security incident you allude to had a few more implications than using uninitialized memory for randomness. A packager who was not familiar with the inner workings of the code, and did not know the precise implications, patched out a couple of places that Valgrind had reported, presumably with good intent but to disastrous effect. Among these was the "random from uninitialized memory", but it was by far not the most severe one.)
I think that the assumption that it is non-deterministic is plain wrong, particularly as you ask about a non-threaded context. (In a threaded context, scheduling variations could introduce some non-determinism.)
Just try it out. Create a sequential, deterministic application that
does a whole bunch of allocations
fills the memory with some pattern, e.g. the value of a counter
frees every second one of these allocations
newly allocates the same amount
runs through these new allocations and registers the value of the first byte of each in a file (as textual numbers, one per line)
Run this program twice and register the results in two different files. My prediction is that these files will be identical.
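Something like the following untested sketch (the block count, block size, and output filename are arbitrary choices):

#include <stdio.h>
#include <stdlib.h>

#define N    1000
#define SIZE 64

int main(void)
{
    unsigned char *blocks[N];
    int i, j;

    /* a whole bunch of allocations, filled with a counter pattern */
    for (i = 0; i < N; i++) {
        blocks[i] = malloc(SIZE);
        if (blocks[i] == NULL)
            return 1;
        for (j = 0; j < SIZE; j++)
            blocks[i][j] = (unsigned char)i;
    }

    /* free every second allocation */
    for (i = 0; i < N; i += 2)
        free(blocks[i]);

    /* newly allocate the same amount */
    for (i = 0; i < N; i += 2) {
        blocks[i] = malloc(SIZE);
        if (blocks[i] == NULL)
            return 1;
    }

    /* register the first byte of each new allocation, one number per line;
       reading it is exactly the "use of uninitialized memory" in question */
    FILE *f = fopen("run1.txt", "w");
    if (f == NULL)
        return 1;
    for (i = 0; i < N; i += 2)
        fprintf(f, "%u\n", (unsigned)blocks[i][0]);
    fclose(f);

    /* no cleanup: the OS reclaims everything at exit, which is fine here */
    return 0;
}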
Even in "normal" single-threaded programs, memory is freed and reallocated many times. Malloc will return to you memory that you had used before.
Even single-threaded code may do malloc then free then malloc and get back previously used, non-zero memory.
There is no guarantee that brk/sbrk return zeroed-out data; this is an implementation detail. It is generally a good idea for an OS to do that to reduce the possibility that sensitive information from one process finds its way into another process, but nothing in the specification says that it will be the case.
Also, the fact that malloc is implemented on top of brk/sbrk is also implementation-dependent, and can even vary based on the size of the allocation; for example, large allocations on Linux have traditionally used mmap on /dev/zero instead.
Basically, you can neither rely on malloc()ed regions containing garbage nor on it being all-0, and no program should assume one way or the other about it.
The simplest way I can think of putting the answer is like this:
If I am looking for wall space to paint a mural, I don't care whether it is white or covered with old graffiti, since I'm going to prime it and paint over it. I only care whether I have enough square footage to accommodate the picture, and I care that I'm not painting over an area that belongs to someone else.
That is how malloc thinks. Zeroing memory every time a process ends would be wasted computational effort. It would be like re-priming the wall every time you finish painting.
There is a whole ecosystem of programs living inside a computer's memory, and you cannot control the order in which mallocs and frees happen.
Imagine that the first time you run your application and malloc() something, it gives you an address with some garbage. Then your program shuts down, your OS marks that area as free. Another program takes it with another malloc(), writes a lot of stuff and then leaves. You run your program again, it might happen that malloc() gives you the same address, but now there's different garbage there, that the previous program might have written.
I don't actually know the implementation of malloc() in any system and I don't know if it implements any kind of security measure (like randomizing the returned address), but I don't think so.
It is very deterministic.
Related
C (and C++) include a family of dynamic memory allocation functions, most of which are intuitively named and easy to explain to a programmer with a basic understanding of memory. malloc() simply allocates memory, while calloc() allocates some memory and clears it eagerly. There are also realloc() and free(), which are pretty self-explanatory.
The manpage for malloc() also mentions valloc(), which allocates (size) bytes aligned to the page border.
Unfortunately, my background isn't thorough enough in low-level intricacies; what are the implications of allocating and using page border-aligned memory, and when is this appropriate as opposed to regular malloc() or calloc()?
The manpage for valloc contains an important note:
The function valloc() appeared in 3.0BSD. It is documented as being obsolete in 4.3BSD, and as legacy in SUSv2. It does not appear in POSIX.1-2001.
valloc is obsolete and nonstandard - to answer your question, it would never be appropriate to use in new code.
While there are some reasons to want to allocate aligned memory - this question lists a few good ones - it is usually better to let the memory allocator figure out which bit of memory to give you. If you are certain that you need your freshly-allocated memory aligned to something, use aligned_alloc (C11) or posix_memalign (POSIX) instead.
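For example (a minimal sketch; the 4096-byte alignment and the sizes are arbitrary illustrations):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* POSIX: alignment must be a power of two multiple of sizeof(void *) */
    void *p = NULL;
    if (posix_memalign(&p, 4096, 8192) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }

    /* C11: the size should be a multiple of the alignment */
    void *q = aligned_alloc(4096, 8192);

    free(p);
    free(q);
    return 0;
}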
Allocations with page alignment usually are not done for speed - they're because you want to take advantage of some feature of your processor's MMU, which typically works with page granularity.
One example is if you want to use mprotect(2) to change the access rights on that memory. Suppose, for instance, that you want to store some data in a chunk of memory, and then make it read only, so that any buggy part of your program that tries to write there will trigger a segfault. Since mprotect(2) can only change permissions page by page (since this is what the underlying CPU hardware can enforce), the block where you store your data had better be page aligned, and its size had better be a multiple of the page size. Otherwise the area you set read-only might include other, unrelated data that still needs to be written.
Or, perhaps you are going to generate some executable code in memory and then want to execute it later. Memory you allocate by default probably isn't set to allow code execution, so you'll have to use mprotect to give it execute permission. Again, this has to be done with page granularity.
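A sketch of that read-only trick (note that POSIX strictly only guarantees mprotect(2) on memory obtained from mmap(2), so an anonymous mmap would be the more portable way to get the page; this version uses posix_memalign for brevity):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);

    void *p = NULL;
    if (posix_memalign(&p, pagesize, pagesize) != 0)
        return 1;

    strcpy(p, "precious data");   /* fill the page while it is writable */

    /* make it read-only: any buggy write to it now triggers a segfault */
    if (mprotect(p, pagesize, PROT_READ) != 0)
        return 1;
    /* ((char *)p)[0] = 'x';  would now die with SIGSEGV */

    /* restore write permission before handing the page back to the allocator */
    mprotect(p, pagesize, PROT_READ | PROT_WRITE);
    free(p);
    return 0;
}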
Another example is if you want to allocate memory now, but might want to mmap something on top of it later.
So in general, a need for page-aligned memory would relate to some fairly low-level application, often involving something system-specific. If you needed it, you'd know. (And as mentioned, you should allocate it not with valloc, but using posix_memalign, or perhaps an anonymous mmap.)
First of all, valloc is obsolete and memalign should be used instead.
Second, it's not part of the C (or C++) standard at all.
It's a special allocation which is aligned to _SC_PAGESIZE boundary.
When is it useful? I guess never, unless you have some specific low-level requirement. If you needed it, you would know, since it's rarely useful (maybe just when trying some micro-optimizations or creating shared memory between processes).
The self-evident answer is that it is appropriate to use valloc when malloc is unsuitable (less efficient) for the application's (virtual) memory usage pattern and valloc is better suited (more efficient). This will depend on the OS and libraries and architecture and application...
malloc traditionally allocated real memory from freed memory if available and by increasing the brk point if not, in which case it is cleared by the OS for security reasons.
calloc in a dumb implementation does a malloc and then (re)clears the memory, while a smart implementation would avoid reclearing newly allocated memory that is automatically cleared by the operating system.
valloc relates to virtual memory. In a virtual memory system using the file system, you can allocate a large amount of memory or filespace/swapspace, even more than physical memory, and it will be swapped in page by page, so alignment is a factor. In Unix, creating a file of a specified size and adding/deleting pages is done using inodes to define the file, but actual disk blocks aren't touched until needed, at which point they are created cleared. So I would expect a valloc implementation to increase the size of the data segment swap without actually allocating physical or swap pages, and without running a loop to clear it all, as the file and paging system does that as needed. Thus valloc should be a heck of a lot faster than malloc. But as with calloc, how particular idiosyncratic *x/C flavours do it is up to them, and the valloc man page is totally unhelpful about these expectations.
Traditionally this was implemented with brk/sbrk. Of course in a virtual memory system, whether a paged or a segmented system, there is no real need for any of this brk/sbrk stuff and it is enough to simply write the last location in a file or address space to extend up to that point.
Re the allocation to page boundaries, that is not usually something the user wants or needs, but rather is usually something the system wants or needs.
A (probably more expensive) way to simulate valloc is to determine the page boundary and then call aligned_alloc or posix_memalign with this alignment spec.
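Something along these lines (a sketch; my_valloc is a made-up name, and a POSIX system is assumed for sysconf and posix_memalign):

#include <stdlib.h>
#include <unistd.h>

/* Hypothetical valloc stand-in: allocate size bytes aligned to a page. */
void *my_valloc(size_t size)
{
    void *p = NULL;
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
    if (posix_memalign(&p, pagesize, size) != 0)
        return NULL;
    return p;
}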
The fact that valloc is deprecated, or has been removed, or is not required in some OSes doesn't mean that it isn't still useful and required for best efficiency in others. If it has been deprecated or removed, one would hope that there are replacements that are as efficient (but I wouldn't bet on it, and might indeed have written my own malloc replacement).
Over the last 40 years the tradeoffs of real and (once invented) virtual memory have changed periodically, and mainstream OSes have tended to go for frills rather than efficiency, with programmers who don't have (time or space) efficiency as a major imperative. In embedded systems, efficiency is more critical, but even there it is often not well supported by the standard OS and/or tools. When in doubt, you can roll your own malloc replacement for your application that does what you need, rather than depend on what someone else woke up and decided to do/implement, or to undo/deprecate.
So the real answer is you don't necessarily want to use valloc or malloc or calloc or any of the replacements your current subversion of an OS provides.
The OS will just recover it (after the program exits), right? So what's the use other than good programming style? Or is there something I'm misunderstanding? What makes it different from "automatic" allocation, since both can be changed during run time and both end after program execution?
When your application is working with vast amounts of data, you must free in order to conserve heap space. If you don't, several bad things can happen:
the OS will stop allocating memory for you (crashing)
the OS will start swapping your data to disk (thrashing)
other applications will have less space to put their data
The fact that the OS collects all the space you allocate when the application exits does not mean you should rely upon this to write a solid application. This would be like trying to rely on the compiler to optimize poor programming. Memory management is crucial for good performance, scalability, and reliability.
As others have mentioned, malloc allocates space in the heap, while auto variables are created on the stack. There are uses for both, but they are indeed very different. Heap space must be allocated and managed by the OS and can store data dynamically and of different sizes.
If you call malloc() a thousand times without calling free(), the system will hand you a thousand different addresses; but if you call free() after each malloc, the same memory address can be given back to you every time.
So the chances of memory leaks, bus errors, out-of-bounds accesses and crashes are kept to a minimum.
It's safe to use free().
In C/C++ "auto" variables are allocated on the stack. They are destroyed right at the exit from the function. This will happen automatically. You do not need to write anything for this.
Heap allocations (result of a call to malloc) are either released explicitly (with a call to free) or they are cleaned up when the process ends.
If you are writing a small program that will be used maybe once or twice, then it is OK not to free your heap allocations. This is not nice, but acceptable.
If you are writing a medium or big project, or are planning to include your code in another project, you should definitely release every heap allocation. Not doing so will create HUGE trouble. The heap memory is not endless; a program may use it all. Even if you allocate only a small amount of memory, it will still create unneeded pressure on the OS, cause swapping, etc.
The bottom line: freeing allocations is much more than just a style or a good habit.
An automatic variable is destroyed (and its memory is re-usable) as soon as you exit the scope in which it is defined. For most variables that's much earlier than program exit.
If you malloc and don't free, then the memory isn't re-usable until the program exits. Not even then, on some systems with very minimal OS.
So yes, there's a big difference between an automatic variable and a leaked memory allocation. Call a function that leaks an allocation enough times, and you'll run out of memory. Call a function with an automatic variable in it as many times as you like; the memory is re-usable.
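To make the difference concrete, here's a toy sketch (the sizes and iteration count are arbitrary):

#include <stdlib.h>

void leaky(void)
{
    char *p = malloc(1024);   /* never freed: 1 KB lost on every call */
    (void)p;
}

void fine(void)
{
    char buf[1024];           /* automatic: reclaimed every time we return */
    (void)buf;
}

int main(void)
{
    for (long i = 0; i < 1000000; i++) {
        leaky();   /* leaks roughly 1 GB over the whole run */
        fine();    /* costs nothing beyond the one stack frame */
    }
    return 0;
}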
It is good programming style and it's more than that. Not doing proper memory management in non-trivial programs will eventually influence the usability of your program. Sure the OS can reclaim any resources that you've allocated/used after your program terminates, but that doesn't alleviate the burden or potential issues during program execution.
Consider the web browser that you've used to post this question: if the browser is written in a language that requires memory management, and the code didn't do it properly, how long do you think it would be before you'd notice that it's eating up all your memory? How long do you think the browser would remain usable? Now consider that users often leave browsers open for long periods of time: without proper memory management, they would become unusable after a few page loads.
If your program does not exit immediately and you're not freeing your memory you're going to end up wasting it. Either you'll run out of memory eventually, or you'll start swapping to disk (which is slow, and also not unlimited).
An automatic variable lives on the stack, and its size must be known at compile time. The heap is for data whose size you don't know in advance, for example a binary tree where the user adds and removes objects. Besides that, stack size may be limited (it depends on your target); in the Linux kernel, for example, the stack is usually 4k-8k. Large stack frames can also trash the cache, which affects performance.
Yes, you absolutely have to use free() after malloc() (as well as closing files and other resources when you're done). While it's true that the OS will recover the memory after execution, a long-running process will leak memory that way. If your program is as simple as a main method that runs a single function and then exits, it's probably not a big deal, albeit incredibly sloppy. You should get in the habit of managing memory properly in C, because one day you may want to write a nontrivial program that runs for more than a second, and if you don't learn how to do it in advance, you'll have a huge headache dealing with memory leaks.
While developing a piece of software for an embedded system, I used the realloc() function many times. Now I've been told that I "should not use realloc() in embedded" without any explanation.
Is realloc() dangerous for embedded systems, and why?
Yes, all dynamic memory allocation is regarded as dangerous, and it is banned from most "high integrity" embedded systems, such as industrial/automotive/aerospace/med-tech etc etc. The answer to your question depends on what sort of embedded system you are doing.
The reason it's banned from high-integrity embedded systems is not only the potential memory leaks, but also a lot of dangerous undefined/unspecified/implementation-defined behaviour associated with those functions.
EDIT: I also forgot to mention heap fragmentation, which is another danger. In addition, MISRA-C also mentions "data inconsistency, memory exhaustion, non-deterministic behaviour" as reasons why it shouldn't be used. The former two seem rather subjective, but non-deterministic behaviour is definitely something that isn't allowed in these kinds of systems.
References:
MISRA-C:2004 Rule 20.4 "Dynamic heap memory allocation shall not be used."
IEC 61508 Functional safety, 61508-3 Annex B (normative) Table B1, >SIL1: "No dynamic objects", "No dynamic variables".
It depends on the particular embedded system. Dynamic memory management on an small embedded system is tricky to begin with, but realloc is no more complicated than a free and malloc (of course, that's not what it does). On some embedded systems you'd never dream of calling malloc in the first place. On other embedded systems, you almost pretend it's a desktop.
If your embedded system has a poor allocator or not much RAM, then realloc might cause fragmentation problems. That is why you avoid malloc too: because it causes the same problems.
The other reason is that some embedded systems must be high reliability, and malloc / realloc can return NULL. In these situations, all memory is allocated statically.
In many embedded systems, a custom memory manager can provide better semantics than are available with malloc/realloc/free. Some applications, for example, can get by with a simple mark-and-release allocator. Keep a pointer to the start of not-yet-allocated memory, allocate things by moving the pointer upward, and jettison them by moving the pointer below them. That won't work if it's necessary to jettison some things while keeping other things that were allocated after them, but in situations where that isn't necessary the mark-and-release allocator is cheaper than any other allocation method. In some cases where the mark-and-release allocator isn't quite good enough, it may be helpful to allocate some things from the start of the heap and other things from the end of the heap; one may free up the things allocated from one end without affecting those allocated from the other.
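For illustration, a mark-and-release allocator of that sort can be just a few lines (an untested sketch; the pool size, the 8-byte alignment, and the mr_* names are all invented for this example, and a real one would also force the alignment of the pool itself, e.g. with _Alignas):

#include <stddef.h>

static unsigned char pool[16 * 1024];   /* all the memory the allocator owns */
static size_t top = 0;                  /* start of not-yet-allocated memory */

void *mr_alloc(size_t size)
{
    size = (size + 7u) & ~(size_t)7u;   /* round up to keep 8-byte alignment */
    if (size > sizeof pool - top)
        return NULL;                    /* out of memory */
    void *p = &pool[top];
    top += size;                        /* allocate by moving the pointer up */
    return p;
}

size_t mr_mark(void)        { return top; }   /* remember the current level */
void   mr_release(size_t m) { top = m;    }   /* jettison everything above it */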
Another approach that can sometimes be useful in non-multitasking or cooperative-multitasking systems is to use memory handles rather than direct pointers. In a typical handle-based system, there's a table of all allocated objects, built at the top of memory working downward, and objects themselves are allocated from the bottom up. Each allocated object in memory holds either a reference to the table slot that references it (if live) or else an indication of its size (if dead). The table entry for each object will hold the object's size as well as a pointer to the object in memory. Objects may be allocated by simply finding a free table slot (easy, since table slots are all fixed size), storing the address of the object's table slot at the start of free memory, storing the object itself just beyond that, and updating the start of free memory to point just past the object. Objects may be freed by replacing the back-reference with a length indication, and freeing the object in the table. If an allocation would fail, relocate all live objects starting at the top of memory, overwriting any dead objects, and updating the object table to point to their new addresses.
The performance of this approach is non-deterministic, but fragmentation is not a problem. Further, it may be possible in some cooperative multitasking systems to perform garbage collection "in the background"; provided that the garbage collector can complete a pass in the time it takes to chug through the slack space, long waits can be avoided. Further, some fairly simple "generational" logic may be used to improve average-case performance at the expense of worst-case performance.
realloc can fail, just like malloc can. This is one reason why you probably should not use either in an embedded system.
realloc is worse than malloc in that you will need to have the old and new pointers valid during the realloc. In other words, you will need 2X the memory space of the original malloc, plus any additional amount (assuming realloc is increasing the buffer size).
Using realloc is going to be very dangerous, because it may return a new pointer to your memory location. This means:
All references to the old pointer must be corrected after realloc.
For a multi-threaded system, the realloc must be atomic. If you are disabling interrupts to achieve this, the realloc time might be long enough to cause a hardware reset by the watchdog.
Update: I just wanted to make it clear. I'm not saying that realloc is worse than implementing realloc using a malloc/free. That would be just as bad. If you can do a single malloc and free, without resizing, it's slightly better, yet still dangerous.
The issues with realloc() in embedded systems are no different than in any other system, but the consequences may be more severe in systems where memory is more constrained and the consequences of failure are less acceptable.
One problem not mentioned so far is that realloc() (and any other dynamic memory operation, for that matter) is non-deterministic; that is, its execution time is variable and unpredictable. Many embedded systems are also real-time systems, and in such systems non-deterministic behaviour is unacceptable.
Another issue is that of thread-safety. Check your library's documentation to see whether it is thread-safe for dynamic memory allocation. Generally, if it is, you will need to implement mutex stubs to integrate it with your particular thread library or RTOS.
Not all embedded systems are alike; if your embedded system is not real-time (or the process/task/thread in question is not real-time and is independent of the real-time elements), and you have large amounts of unused memory or virtual memory capabilities, then the use of realloc() may be acceptable, if perhaps ill-advised in most cases.
Rather than accept "conventional wisdom" and ban dynamic memory regardless, you should understand your system's requirements and the behaviour of the dynamic memory functions, and make an appropriate decision. That said, if you are building code for reusability and portability to as wide a range of platforms and applications as possible, then reallocation is probably a really bad idea. Don't hide it in a library, for example.
Note too that the same problem exists with C++ STL container classes that dynamically reallocate and copy data when the container capacity is increased.
Well, it's better to avoid using realloc if possible, since this operation is costly, especially when placed inside a loop: for example, if some allocated memory needs to be extended and there is no gap between the current block and the next allocated block, the operation is roughly equivalent to malloc + memcpy + free.
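If you do have to grow a buffer incrementally, the usual mitigation is to grow its capacity geometrically, so realloc runs O(log n) times instead of n times. A sketch (the function and parameter names are made up):

#include <stdlib.h>

/* Append one byte to buf, doubling its capacity when it fills up.
   Returns the (possibly moved) buffer, or NULL if realloc failed, in
   which case the old buffer is still valid and still owned by the caller. */
unsigned char *append_byte(unsigned char *buf, size_t *len, size_t *cap,
                           unsigned char b)
{
    if (*len == *cap) {
        size_t newcap = *cap ? *cap * 2 : 16;
        unsigned char *tmp = realloc(buf, newcap);
        if (tmp == NULL)
            return NULL;
        buf = tmp;
        *cap = newcap;
    }
    buf[(*len)++] = b;
    return buf;
}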
From what I understand, because malloc dynamically assigns memory, you need to free that memory so that it can be used again.
What happens if you return a char* that was created using malloc (i.e. how are you supposed to free that)?
If you leave the pointer as it is and exit the application, will it be freed? (I can't find a definite answer on this; some say yes, some say no.)
The caller has to free it (or arrange for it to be freed). This means that functions that create and return resources need to document exactly how it should be freed.
Most OSes will free the memory when the program exits, as part of the definition of a "process". The C standard doesn't care what happens, it's beyond the scope of the program. Not all OSes have a full process abstraction, but desktop-style OSes certainly do.
The main reasons to free it before that are:
If you free memory as soon as possible, often a long time before process exit, your program uses less memory total.
If you don't free it, and you later want to change your program into a routine within another program, that perhaps is called many times, then suddenly you require many times as much memory as before (memory leak).
There are debugging tools that will help you identify memory leaks, by warning you about memory that is still allocated when the program exits. These don't really help much if there's a lot of deliberately-leaked junk to wade through.
If you don't free it and you hit any problems, it's much harder to go back later and find all the memory that needs freeing, than it is to do it right in the first place.
There are so many cases where you do need to free the memory (to prevent huge memory use in long-running programs), that your default strategy must be to clean pretty much everything up anyway.
The vaguely plausible reasons not to free are:
Less code.
If you have squillions of blocks to free individually, immediately before program exit, then it might be much faster to let the OS drop the whole process.
Stuff which is created on demand and stored in globals might be quite difficult to clean up safely, if you don't know exactly where it's used. Think of some kind of cache that's populated as you go along, that might have MRU rules to limit how much memory it occupies, so it's not an unlimited leak. OK, so this is one bad thing (unrestricted globals) causing another bad thing (unfreed memory), but it's worth knowing about as a reason why you might see unfreed blocks in existing code, and you can't necessarily just go in and fix them.
The reasons for freeing almost always outweigh the reasons against.
If you have a pointer to memory created by malloc, freeing that memory using that pointer will do the right thing. Yes, there is some magic involved: the allocator records the size of each block it hands out; this is taken care of by the C runtime.
Yes, if you ignore the memory freeing, and exit the application, the OS will release the memory. However, it's considered bad practice to leave it unfreed. The OS may not do the right thing (especially in embedded settings), or may not do it in a timely fashion. Also, if you're running your program continuously, you may end up consuming a growing amount of memory, eventually consuming it all, and running out of memory and crashing.
Yes. If you malloc, you need to free. You are guaranteeing memory leaks while your program is running if you don't free.
Free it.
Always.
Period.
Yes, every call to malloc() has to be matched with a call to free().
To answer your specific questions:
You have to explicitly document your API telling the user whether the returned pointer has to be free()'d
The OS will free all memory allocated to the process.
If you write the function yourself: Avoid doing that.
Instead, let the caller pass a buffer, let the caller specify the buffer's size and copy the data into that buffer. That way, you can use your function from other modules that don't use the same heap (other programming languages, different C runtime...)
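A sketch of that style (get_greeting is a made-up example function):

#include <stddef.h>
#include <string.h>

/* The function never allocates, so the caller has nothing to free.
   Returns the number of bytes written, or 0 if the buffer is too small. */
size_t get_greeting(char *buf, size_t bufsize)
{
    const char *msg = "hello";
    size_t need = strlen(msg) + 1;
    if (buf == NULL || bufsize < need)
        return 0;
    memcpy(buf, msg, need);
    return need;
}

/* Typical call site:
   char buf[32];
   if (get_greeting(buf, sizeof buf)) { ... use buf ... }  */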
If you for whatever reason cannot use such an interface, specify in the function's documentation that the caller has to free the returned pointer when done with it.
If you are using a library function: Have a look at the documentation.
If the documentation states that you have to free, do so.
If the documentation states that you don't have to, it might be some global cleanup function that has to be called to free the module's resources.
Regarding your second question: freeing before exiting is recommended. Technically it won't hurt to skip it, but when you ever want to reuse your code in a bigger project, you will be thankful that you wrote the correct cleanup in the first place.
The C standard has no concept of the system environment outside of a single program's execution, so it cannot specify what happens "after the program exits". At the same time, nowhere does it make any requirement that memory obtained with malloc should or must be released with free before a call to exit or a return from main, and I think it's pretty clear that the intention is that exiting without manually freeing memory will not leave resources tied up - much like how calling exit without closing all files first automatically closes them (including flushing them).
Now, as for whether you should or should not call free, that depends a lot on your particular program.
Any library code should free any memory that it obtained purely for internal use as soon as possible.
A library which returns allocated objects to the calling program should always provide a corresponding call to free those objects.
A library which performs any allocations as part of a global initialization (note: this is a very bad design, but sometimes inevitable) should provide a way for the application to reverse that initialization and free everything that was allocated. This is especially important if the library might ever be loaded dynamically (even as a consequence of satisfying another dynamically-loaded library's dependencies).
So far I've only talked about library code. At this point, all that's left is allocations made by the application itself or on the application's behalf by libraries. My view, and I will admit that it is unorthodox, is that freeing such objects is not just unnecessary but harmful. The main reason I say this is that most long-lived applications will have accumulated quite a bit of allocated memory which they are not making significant use of (think of the undo buffer in a word processor or the history in a browser). On a moderately loaded system, much of this data has been swapped to disk by the time the application terminates. If you want to free it, you're going to end up walking all over swapped-out memory addresses tracking down all the pointers to free,
putting useless wear on the physical components of the hard drive
making the user wait for your application to exit
causing other still-in-use applications' data to get swapped out, making them run slower
All of this in the name of a ridiculous "you must free everything you allocate" rule.
For short-lived applications, it's not so much of a big deal, but you can often simplify the implementation of short-lived applications that perform a single linear task and exit if you don't bother freeing all the memory they allocate. Think of most unix command line utilities. Is there any use to writing the loops for sed to free all its compiled regular expressions before exiting? Couldn't programmers' time be spent on something more productive?
1) The same way you'd free the memory normally, i.e.
p = func();
//...
free(p);
The trick is in making sure that you always do it...
2) Generally speaking, yes. But you should still free any memory you use as good practice. Not spending the time to figure out where to free the memory is just being lazy.
Let's take those one point at a time...
If you return a char * that you know was created with malloc, then yes, it is your responsibility to free that. You can do that with free(myCharPtr).
The OS will claim the memory back, and it won't be lost forever, but there's technically no guarantee that it will be reclaimed right when the application dies. That just depends on the operating system.
I wouldn't go so far as to say every malloc must be freed, but I would say that, no matter how long a program runs, there must be a bounded number of allocations (and total size) that won't be freed. The number need not be a static constant, but it must be specifiable in terms of something else (e.g. this program processes widgets; it will allocate one 64-byte struct for each quizzix in the largest widget). One may not know beforehand the size of the largest widget, but if e.g. one knows that the temporary storage required to process a widget is proportional to the square of its size, one might safely infer that the largest widget will be small enough that the total amount of memory stranded will be pretty slight.
I have read that free() "generally" does not return memory to the OS. Can we portably make use of this feature of free()? For example, is this portable?
/* Assume I know i would need memory equivalent to 10000 integers at max
during the lifetime of the process */
unsigned int* p = malloc(sizeof(unsigned int) * 10000);
if (p == NULL)
return 1;
free(p);
/* Different points in the program */
unsigned int* q = malloc(sizeof(unsigned int) * 5);
/* No need to check for the return value of malloc */
I am writing a demo where I would know in advance how many call contexts to support.
Is it feasible to allocate "n" "call context" structures in advance and then free them immediately? Would that guarantee that my future malloc calls would not fail?
Does this give me any advantage with regard to efficiency? I am thinking of one less "if" check; also, would memory management work better if a large chunk was initially acquired and is now available via free? Would this cause less fragmentation?
You would be better off keeping the initial block of memory allocated then using a pool to make it available to clients in your application. It isn't good form to rely on esoteric behaviors to maintain the stability of your code. If anything changes, you could be dealing with bad pointers and having program crashes.
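A pool in this sense hands out fixed-size chunks from one block acquired up front, so after initialization no further malloc calls are needed. An untested sketch (the chunk count/size and the pool_* names are invented; a real one would also force the alignment of the storage array):

#include <stddef.h>

#define CHUNKS     64
#define CHUNK_SIZE 128

static unsigned char storage[CHUNKS * CHUNK_SIZE];  /* the one big block */
static void *freelist[CHUNKS];
static int nfree = 0;

void pool_init(void)
{
    for (nfree = 0; nfree < CHUNKS; nfree++)
        freelist[nfree] = &storage[nfree * CHUNK_SIZE];
}

/* Either a chunk is free or it isn't; no run-to-run surprises. */
void *pool_alloc(void)
{
    return nfree > 0 ? freelist[--nfree] : NULL;
}

void pool_free(void *p)
{
    freelist[nfree++] = p;   /* p must have come from pool_alloc */
}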
You are asking for a portable and low level way to control what happens on the OS side of the memory interface.
On any OS (because C is one of the most widely ported languages out there).
Think about that and keep in mind that OSs differ in their construction and goals and have widely varying sets of needs and properties.
There is a reason the usual C APIs only define how things should look and behave from the C side of the interface, and not how things should be on the OS side.
No, you can't reliably do such a thing. It is portable only in the sense that it's valid C and will probably compile anywhere you try it, but it is still incorrect to rely on this supposed (and undocumented) behaviour.
You will also not get any appreciably better performance. A simple check for NULL returns from malloc() is not what's making your program slow. If you think all your calls to malloc() and free() are slowing your program down, write your own allocation system with the behaviour you want.
You cannot portably rely on any such behavior of malloc(). You can only rely on malloc() giving you a pointer to a block of memory of the given size which you can use until you call free().
Hmm, no. What malloc(3) does internally is call brk(2) to extend the data segment if it's too small for a given allocation request. It does touch some of the pages for its internal bookkeeping, but generally not all allocated pages may be backed by physical memory at that point.
This is because many OSes do memory overcommit - promising the application whatever it requested regardless of available physical memory, in the hope that not all of the memory will be used by the app and that other apps will release memory or terminate, and falling back on swapping as a last resort. On Linux, say, malloc(3) pretty much never fails.
When memory actually gets referenced the kernel will have to find available physical pages to back it up, create/update page tables and TLBs, etc. - normal handling for a page fault. So again, no, you will not get any speedup later unless you go and touch every page in that allocated chunk.
Disclaimer: the above might not be accurate for Windows (so, no again - nothing close to portable).
No, there's no guarantee free() will not release the memory back, and there's no guarantee your second malloc will succeed.
Even platforms that "generally" don't return memory to the OS do so at times if they can. You could end up with your first malloc succeeding and your next malloc failing, because in the meantime some other part of the system used up your memory.
Not at all. It's not portable at all. In addition, there's no guarantee that another process won't have used the memory in question: C runs on many devices, such as embedded systems, where virtual memory addressing doesn't exist. Nor will it reduce fragmentation: you'd have to know the page size of the machine and allocate an exact number of pages, and then, when you freed them, they'd simply be unfragmented again.
If you really want guaranteed malloc'ed memory, malloc a large block and manually place objects into it. That's the only way. As for efficiency, you must be running on an incredibly starved device for saving a few ifs to matter.
malloc(3) also does mmap(2); consequently, free(3) does munmap(2); consequently, a second malloc() can in theory fail.
The C standard can be read as requiring this, but from a practical viewpoint there are implementations known that don't follow that, so whether it's required or not you can't really depend on it.
So, if you want this to be guaranteed, you'll pretty much need to write your own allocator. A typical strategy is to allocate a big block from malloc (or whatever) and sub-allocate pieces for the rest of the program to use out of that large block (or perhaps a number of large blocks).
For better control you should create your own memory allocator. An example of a memory allocator is this one. This is the only way you will have predictable results. Everything else that relies on obscure/undocumented features and happens to work can be attributed to luck.