How to handle out of memory with GLib (C)

I have a question concerning GLib.
I would like to use GLib in a server context, but I am not sure how its memory is managed:
https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html
If any call to allocate memory fails, the application is terminated. This also means that there is no need to check if the call succeeded.
If I look at the source code, when g_malloc fails it calls g_error():
g_error()
#define g_error(...)
A convenience function/macro to log an error message.
Error messages are always fatal, resulting in a call to abort() to terminate the application. [...]
But in my case, as I'm developing a server application, I don't want the application to exit; I would prefer that, like the traditional malloc function, the GLib functions return NULL or something else to indicate that an error happened.
So my question is: is there a way to handle out of memory with GLib?
Is GLib not recommended for server applications?
If I look at the man page of abort I can see that I can catch the signal, but that would make the management of out-of-memory errors a little bit painful...
The abort() function causes abnormal program termination to occur, unless
the signal SIGABRT is being caught and the signal handler does not
return.
Thanks for your help!

It's very difficult to recover from lack of memory. The reason is that it can be considered a terminal state, in the sense that lack of memory will persist for some time before it goes away. Even reacting to the lack of memory (like informing the user) might require more memory, for example, to build and send a message. A related problem is that there are operating systems (Linux, at least) that may be overly optimistic about allocating memory. When the kernel realizes that memory is missing, it may kill the application, even if your code is handling the failures.
So either you have a much stricter grasp of your whole system than average, or you won't be able to successfully handle out of memory errors, and, in this case, it doesn't matter what the helper library is doing.
If you really want to control memory allocation while still using GLib, you have partial ways to do that. Don't use the GLib allocation functions; use ones from another library instead. GLib provides functions that receive a "free function" when necessary. For example:
https://developer.gnome.org/glib/2.31/glib-Hash-Tables.html#g-hash-table-new-full
The hash table constructor accepts functions for destroying both keys and values. In your case, the data will be allocated using custom allocation functions, while the hash data structures will be allocated with glib functions.
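As an illustration of that split, here is a hedged sketch (my_strdup and plain malloc stand in for whatever non-aborting allocator you choose): GLib allocates only the hash table internals, while the keys and values come from, and return to, your own allocator.

#include <glib.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical custom allocation: plain malloc, which returns NULL on failure. */
static char *my_strdup(const char *s)
{
    size_t len = strlen(s) + 1;
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;
}

int main(void)
{
    /* GLib allocates the table internals; keys and values are ours and are
       released with free() when an entry is removed or the table destroyed. */
    GHashTable *table = g_hash_table_new_full(g_str_hash, g_str_equal,
                                              free,   /* key destroy */
                                              free);  /* value destroy */
    char *key = my_strdup("user");
    char *value = my_strdup("alice");
    if (key == NULL || value == NULL) {
        /* handle the failure ourselves instead of aborting */
        free(key);
        free(value);
        g_hash_table_destroy(table);
        return 1;
    }
    g_hash_table_insert(table, key, value);
    g_hash_table_destroy(table);   /* calls free() on every key and value */
    return 0;
}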
Alternatively, you could use the g_try_* allocators, so you still use the GLib allocator but it won't abort on error. Again, this only partially solves the problem: internally, GLib implicitly calls allocation functions that abort on failure, because it assumes they never return an error.
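For example, a minimal sketch of the g_try_* pattern; g_try_malloc() and g_try_new() are the documented GLib calls that return NULL instead of aborting:

#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* g_try_malloc() returns NULL on failure instead of calling g_error(). */
    gchar *buf = g_try_malloc(64 * 1024);
    if (buf == NULL) {
        fprintf(stderr, "allocation failed, degrading gracefully\n");
        return 1;
    }

    /* g_try_new() is the type-safe counterpart of g_try_malloc(). */
    gint *numbers = g_try_new(gint, 1024);
    if (numbers == NULL) {
        g_free(buf);
        return 1;
    }

    g_free(numbers);
    g_free(buf);
    return 0;
}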
About the general question: does it make sense for a server to crash when it's out of memory? The obvious answer is no, but I can't estimate how theoretical this answer is. I can only expect that the server system be properly sized for its operation and reject as invalid any input that could potentially exceed its capacities, and for that, it doesn't matter which libraries it might use.

I'm probably editorializing a bit here, but the modern tendency to use virtual/logical memory (both names have been used, although "logical" is more distinct) does dramatically complicate knowing when memory is exhausted, although I think one can restore the old, real-(RAM + swap) model (I'll call this the physical model) in Linux with the following in /etc/sysctl.d/10-no-overcommit.conf:
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
This restores the ability to have the philosophy that if a program's malloc just failed, that program has a good chance of having been the actual cause of memory exhaustion, and can then back away from the construction of the current object, freeing memory along the way, possibly grumbling at the user for having asked for something crazy that needed too much RAM, and awaiting the next request. In this model, most OOM conditions resolve almost instantly - the program either copes and presumably returns RAM, or gets killed immediately on the following SEGV when it tries to use the 0 returned by malloc.
With the virtual/logical memory models that Linux tends to default to in 2013, this doesn't work, since a program won't find out that memory isn't available at malloc time, but instead upon attempting to access memory later, at which point the kernel finally realizes there's nowhere in RAM for it. This amounts to disaster, since any program on the system can die, rather than the one that ran the host out of RAM. One can understand why some GLib folks don't even care about trying to fix this problem, because with the logical memory model, it can't be fixed.
The original point of logical memory was to allow huge programs using more than half the memory of the host to still be able to fork and exec supporting programs. It was typically enabled only on hosts with that particular usage pattern. Now in 2013 when a home workstation can have 24+ GiB of RAM, there's really no excuse to have logical memory enabled at all 99% of the time. It should probably be disabled by default on hosts with >4 GiB of RAM at boot.
Anyway. So if you want to take the old-school physical model approach, make sure your computer has it enabled, or there's no point to testing your malloc and realloc calls.
If you are in that model, remember that GLib wasn't really guided by the same philosophy (see http://code.google.com/p/chromium/issues/detail?id=51286#c27 for just how madly astray some of them are). Any library based on GLib might be infected with the same attitude as well. However, there may be some interesting things one can do with GLib in the physical memory model by emplacing your own memory handlers with g_mem_set_vtable(), since you might be able to poke around in program globals and reduce usage in a cache or the like to free up space, then retry the underlying malloc. However, that's limited in its own way by not knowing which object was under construction at the point your special handler is invoked.
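To make that concrete, here is a hedged sketch of what installing such a vtable looked like in the GLib of that era. Note that g_mem_set_vtable() must be the very first GLib call in the program, and it has since been deprecated (modern GLib no longer honors custom vtables), so treat this as illustrative only; try_to_purge_caches() is a hypothetical application hook.

#include <glib.h>
#include <stdlib.h>

/* Hypothetical application hook that drops cached data to free memory. */
extern gboolean try_to_purge_caches(void);

static gpointer retrying_malloc(gsize n_bytes)
{
    gpointer p = malloc(n_bytes);
    if (p == NULL && try_to_purge_caches())
        p = malloc(n_bytes);    /* one retry after shrinking caches */
    return p;                   /* GLib will still abort if this is NULL */
}

static gpointer retrying_realloc(gpointer mem, gsize n_bytes)
{
    gpointer p = realloc(mem, n_bytes);
    if (p == NULL && try_to_purge_caches())
        p = realloc(mem, n_bytes);
    return p;
}

int main(int argc, char **argv)
{
    GMemVTable vtable = {
        .malloc  = retrying_malloc,
        .realloc = retrying_realloc,
        .free    = free,
    };
    /* Must run before any other GLib function; deprecated in modern GLib. */
    g_mem_set_vtable(&vtable);

    /* ... rest of the program using GLib as usual ... */
    return 0;
}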

Related

Freeing memory after the program return EXIT_FAILURE and terminates [duplicate]

Let's say I have the following C code:
#include <stdlib.h>

int main() {
    int *p = malloc(10 * sizeof *p);
    *p = 42;
    return 0; // Exiting without freeing the allocated memory
}
When I compile and execute that C program, i.e. after allocating some space in memory, will the memory I allocated still be allocated (basically taking up space) after I exit the application and the process terminates?
It depends on the operating system. The majority of modern (and all major) operating systems will free memory not freed by the program when it ends.
Relying on this is bad practice and it is better to free it explicitly. The issue isn't just that your code looks bad. You may decide you want to integrate your small program into a larger, long running one. Then a while later you have to spend hours tracking down memory leaks.
Relying on a feature of an operating system also makes the code less portable.
In general, modern general-purpose operating systems do clean up after terminated processes. This is necessary because the alternative is for the system to lose resources over time and require rebooting due to programs which are poorly written or simply have rarely-occurring bugs that leak resources.
Having your program explicitly free its resources anyway can be good practice for various reasons, such as:
If you have additional resources that are not cleaned up by the OS on exit, such as temporary files or any kind of change to the state of an external resource, then you will need code to deal with all of those things on exit, and this is often elegantly combined with freeing memory.
If your program starts having a longer lifetime, then you will not want the only way to free memory to be to exit. For example, you might want to convert your program into a server (daemon) which keeps running while handling many requests for individual units of work, or your program might become a small part of a larger program.
However, here is a reason to skip freeing memory: efficient shutdown. For example, suppose your application contains a large cache in memory. If when it exits it goes through the entire cache structure and frees it one piece at a time, that serves no useful purpose and wastes resources. Especially, consider the case where the memory pages containing your cache have been swapped to disk by the operating system; by walking the structure and freeing it you're bringing all of those pages back into memory all at once, wasting significant time and energy for no actual benefit, and possibly even causing other programs on the system to get swapped out!
As a related example, there are high-performance servers that work by creating a process for each request, then having it exit when done; by this means they don't even have to track memory allocation, and never do any freeing or garbage collection at all, since everything just vanishes back into the operating system's free memory at the end of the process. (The same kind of thing can be done within a process using a custom memory allocator, but requires very careful programming; essentially making one's own notion of “lightweight processes” within the OS process.)
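A rough sketch of that per-request-process idea on a Unix-like system (handle_request() is a hypothetical placeholder; the child never frees anything because process teardown does it):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical request handler; allocates freely and never calls free(). */
extern void handle_request(int client_fd);

void serve_one_request(int client_fd)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: do the work, then exit. All of its heap memory is
           returned to the OS when the process terminates. */
        handle_request(client_fd);
        _exit(0);
    } else if (pid > 0) {
        close(client_fd);          /* parent keeps no per-request state */
        waitpid(pid, NULL, 0);     /* wait synchronously, for simplicity */
    }
}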
My apologies for posting so long after the last post to this thread.
One additional point. Not all programs make it to graceful exits. Crashes and ctrl-C's, etc. will cause a program to exit in uncontrolled ways. If your OS did not free your heap, clean up your stack, delete static variables, etc, you would eventually crash your system from memory leaks or worse.
Interesting aside to this: crashes/breaks in Ubuntu, and I suspect all other modern OSes, do have problems with "handled" resources. Sockets, files, devices, etc. can remain "open" when a program ends/crashes. It is also good practice to close anything with a "handle" or "descriptor" as part of your cleanup prior to a graceful exit.
I am currently developing a program that uses sockets heavily. When I get stuck in a hang I have to ctrl-c out of it, thus, stranding my sockets. I added a std::vector to collect a list of all opened sockets and a sigaction handler that catches sigint and sigterm. The handler walks the list and closes the sockets. I plan on making a similar cleanup routine for use before throw's that will lead to premature termination.
Anyone care to comment on this design?
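For what it's worth, a plain-C sketch of that design might look like the following, with a fixed-size table standing in for the std::vector; close() and _exit() are async-signal-safe, which is why they can be used in the handler:

#include <signal.h>
#include <string.h>
#include <unistd.h>

#define MAX_TRACKED_FDS 64

/* Sockets registered as they are opened. */
static int tracked_fds[MAX_TRACKED_FDS];
static volatile sig_atomic_t tracked_count = 0;

void track_fd(int fd)
{
    if (tracked_count < MAX_TRACKED_FDS)
        tracked_fds[tracked_count++] = fd;
}

static void cleanup_handler(int signo)
{
    /* close() and _exit() are async-signal-safe. */
    for (int i = 0; i < tracked_count; i++)
        close(tracked_fds[i]);
    _exit(128 + signo);   /* conventional "killed by signal N" status */
}

void install_cleanup_handlers(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = cleanup_handler;
    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);
}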
What's happening here (in a modern OS), is that your program runs inside its own "process." This is an operating system entity that is endowed with its own address space, file descriptors, etc. Your malloc calls are allocating memory from the "heap", or unallocated memory pages that are assigned to your process.
When your program ends, as in this example, all of the resources assigned to your process are simply recycled/torn down by the operating system. In the case of memory, all of the memory pages that are assigned to you are simply marked as "free" and recycled for the use of other processes. Pages are a lower-level concept than what malloc handles-- as a result, the specifics of malloc/free are all simply washed away as the whole thing gets cleaned up.
It's the moral equivalent of, when you're done using your laptop and want to give it to a friend, you don't bother to individually delete each file. You just format the hard drive.
All this said, as all other answerers are noting, relying on this is not good practice:
You should always be programming to take care of resources, and in C that means memory as well. You might end up embedding your code in a library, or it might end up running much longer than you expect.
Some OSs (older ones and maybe some modern embedded ones) may not maintain such hard process boundaries, and your allocations might affect others' address spaces.
Yes. The OS cleans up resources. Well ... old versions of NetWare didn't.
Edit: As San Jacinto pointed out, there are certainly systems (aside from NetWare) that do not do that. Even in throw-away programs, I try to make a habit of freeing all resources just to keep up the habit.
Yes, the operating system releases all memory when the process ends.
It depends, operating systems will usually clean it up for you, but if you're working on for instance embedded software then it might not be released.
Just make sure you free it, it can save you a lot of time later when you might want to integrate it in to a large project.
That really depends on the operating system, but for all operating systems you'll ever encounter, the memory allocation will disappear when the process exits.
I think direct freeing is best. Undefined behaviour is the worst thing, so if you have access while it's still defined in your process, do it, there are lots of good reasons people have given for it.
As to where, or whether, I found that in W98, the real question was 'when' (I didn't see a post emphasising this). A small template program (for MIDI SysEx input, using various malloc'd spaces) would free memory in the WM_DESTROY bit of the WndProc, but when I transplanted this to a larger program it crashed on exit. I assumed this meant I was trying to free what the OS had already freed during a larger cleanup. If I did it on WM_CLOSE, then called DestroyWindow(), it all worked fine, instant clean exit.
While this isn't exactly the same as MIDI buffers, there is similarity in that it is best to keep the process intact, clean up fully, then exit. With modest memory chunks this is very fast. I found that many small buffers worked faster in operation and cleanup than fewer large ones.
Exceptions may exist, as someone said when avoiding hauling large memory chunks back out of a swap file on disk, but even that may be minimised by keeping more, and smaller, allocated spaces.


Protecting heap data when a program is halted

Suppose I have a program that decrypts a file and stores the decrypted contents on the heap. I want to protect this information from other (non-root) processes running on the same system, so before I call free() to release the heap allocation, I'm using memset() to overwrite the data and make it unavailable to the next process that uses the same physical memory. (I understand this isn't a concern on some systems, but would prefer to err on the side of safety.)
However, I'm not sure what to do in cases where the program doesn't terminate normally, either through a forced termination (SIGINT, SIGTERM, etc.) or due to an error condition (SIGSEGV, SIGBUS, etc.). Should I just trap as many signals as possible to clear the heap before exiting, or is there a more orderly way of doing things?
An operating system that leaks contents of memory between processes (especially with different privileges) would be so broken from a security point of view that you doing it yourself won't change anything. Especially since on most operating systems the memory pages that you write to can at any point be taken away from you, swapped out and given to someone else. So I can safely say that you don't need to worry about normal termination unless you're on an operating system so specialized that it doesn't have anyone to leak the memory to. Also, there are certain ways to kill your process without you having any ability to catch the killing signal, so you couldn't handle all the cases anyway.
When it comes to abnormal termination (SIGSEGV, etc.) your best bet is to either disable dumping cores or at least make sure that your core dumps are only readable by you. That should be the main worry, the physical memory won't leak, but your core dumps could be readable by someone else.
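A minimal sketch of disabling core dumps from within the program itself (the shell equivalent would be ulimit -c 0); this is one option under the assumption that you would rather lose the dump than risk exposing secrets:

#include <sys/resource.h>

/* Ask the kernel never to write a core file for this process. */
static void disable_core_dumps(void)
{
    struct rlimit rl = { 0, 0 };
    setrlimit(RLIMIT_CORE, &rl);
}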
That being said, it's still a very good practice to wipe secrets from memory as soon as you don't need them anymore. Not because they can leak to others through normal operation, because they can't, but because they can leak out through bugs. You might have an exploitable bug, maybe you get a stray pointer you'll write to a log, maybe you'll leave your key on the stack and then forget to initialize your data, etc. So your main worry shouldn't be to wipe out secrets from memory before exit, but to actually identify the point in your code where you don't need a secret anymore and wipe it right then and there.
Unfortunately, using memset that you mentioned is not enough. Many compilers today are smart enough to understand that some of your calls to memset are dead stores and optimize them away (like memset of a stack buffer just before leaving a function or just before free). See this issue in LibreSSL for a discussion about it, and this implementation of explicit_bzero for the currently best known attempt to work around it on clang and gcc.
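If neither explicit_bzero() nor memset_s() is available on your platform, the usual workaround is to call memset through a volatile function pointer so the compiler cannot prove the store is dead and remove it; a minimal sketch:

#include <string.h>

/* A volatile function pointer to memset: the compiler must assume the call
   has side effects it cannot see, so it cannot optimize the wipe away. */
static void *(*const volatile wipe_memory)(void *, int, size_t) = memset;

void secure_wipe(void *secret, size_t len)
{
    wipe_memory(secret, 0, len);
}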

What is the correct way to handle "out of memory"?

Recently I have been working on a video player program on Windows for a CCTV system. As the program has to decode and play many video streams at the same time, I think it might meet the situation where malloc fails, so I added checking after every malloc.
But generally speaking, in the open source code that I've read, I seldom find any checking of the result of malloc. So when malloc fails, most programs will just crash. Isn't that unacceptable?
My colleagues who write server programs on Linux allocate enough memory for 100 client connections up front. So although the program might refuse the 101st client, it will never meet a failure of malloc. Is this approach also suitable for desktop applications?
On Linux with the default overcommit settings, malloc() will essentially never fail; instead, when memory actually runs out, the OOM killer is triggered and begins killing processes until the pressure is relieved (or the system falls over). Since Linux is the most popular UNIX derivative in use today, many developers have learned to just never check the result of malloc(). That's probably why your colleagues ignore malloc() failures.
On OSes which support failures, I've seen two general patterns:
Write a custom procedure which checks the result of malloc(), and calls abort() if allocation failed. For example, the GLib and GTK+ libraries use this approach.
Store a global list of "purge-able" allocations, such as caches, which can be cleared in the event of allocation failure. Then, try the allocation again, and if it still fails, report it via the standard error reporting mechanisms (which do not perform dynamic allocation).
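A hedged sketch of that second pattern (drop_purgeable_caches() is a hypothetical application hook, not a library call):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical hook: frees cached data and reports whether anything was freed. */
extern int drop_purgeable_caches(void);

void *alloc_or_purge(size_t size)
{
    void *p = malloc(size);
    while (p == NULL && drop_purgeable_caches())
        p = malloc(size);        /* retry after each round of purging */
    if (p == NULL)
        fputs("out of memory after purging caches\n", stderr);  /* no allocation here */
    return p;
}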
Follow the standardized API
Even on Linux, ulimit can be used to get a prompt malloc error return. It's just that it defaults to unlimited.
There is a definite pressure to conform to published standards. On most systems, in the long run, and eventually even on Linux, malloc(3) will return a correct indication of failure. It is true that desktop systems have virtual memory and demand paging, but even then not checking malloc(3) only works in a debugged program with no memory leaks. If anything goes wrong, someone will want to set a ulimit and track it down. Suddenly, the malloc check makes sense.
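For example, here is a hedged sketch of capping the address space from within a test program so that malloc fails promptly, roughly what ulimit -v does in the shell; the 64 MiB figure is an arbitrary assumption:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the virtual address space at 64 MiB so allocations beyond it fail. */
    struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    void *p = malloc(256 * 1024 * 1024);   /* larger than the limit */
    if (p == NULL) {
        puts("malloc returned NULL, as expected under the limit");
        return 0;
    }
    free(p);
    return 0;
}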
To use the result of malloc without checking for null is unacceptable in code that might be used on platforms where malloc can fail; on those it will tend to result in crashes and unpredictable behaviour. I can't foresee the future and don't know where my code will go, so I would write code with checks for malloc returning null: better to die than behave unpredictably!
Strategies for what to do if malloc fails depend on the kind of application and how much confidence you have in the libraries you are using. In some situations the only safe thing to do is halt the whole program.
The idea of preallocating a known quota of memory and parcelling it out in chunks, hence steering clear of actually exhausting the memory, is a good one if your application's memory usage is predictable. You can extend this to writing your own memory management routines for use by your code.
It depends on the type of application that you are working on. If the application does work that is divided into discrete tasks where an individual task can be allowed to fail, then checking memory allocations can be recovered from gracefully.
But in many cases, the only reasonable way to respond to a malloc failure is by terminating the program. Allowing your code to just crash on the inevitable null dereference will achieve that. It would certainly always be better to dump a log entry or error message explaining the error, but in the real world we work on limited schedules. Sometimes the return on investment of pedantic error handling isn't there.
Always check, and pre-allocate a buffer that can be freed in this case so you can warn the user to save his data and shut down the application.
Depends on the app you write. Of course you always need to check the return value of malloc(). However, handling OOM gracefully only makes sense in very few cases, such as low-level crucial system services, or when writing a library that might be used by them. Having a malloc wrapper that aborts on OOM is hence very common in many apps and frameworks. Often those wrappers are named xmalloc() or similar.
GLib's g_malloc() is aborting, too.
If you are going to handle huge amounts of memory, and want to make statements to Linux like "now I have memory area ABC and I don't need the B piece, do as you wish", have a look at the mmap()/madvise() family of functions available in the stock GNU C library. Depending on your usage patterns, the code can end up even simpler than using malloc. This API can also be used to help Linux not waste memory by caching files you are going to read/write only once.
They are nicely documented in GNU libc info documentation.
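A minimal sketch of that idea, assuming Linux/glibc (the region size and layout are arbitrary):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    size_t total = 1024 * page;          /* region "A + B + C" */

    /* Reserve an anonymous private mapping; pages only consume RAM once touched. */
    unsigned char *region = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(region, 0xAB, total);         /* touch everything: now it is resident */

    /* "I don't need the B piece, do as you wish": the kernel may reclaim these
       pages; for anonymous mappings they read back as zeros if touched again. */
    size_t b_offset = 256 * page, b_len = 512 * page;
    if (madvise(region + b_offset, b_len, MADV_DONTNEED) != 0)
        perror("madvise");

    munmap(region, total);
    return 0;
}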
It is usually impossible for a program to handle running out of memory. What are you going to do? Open a file and log something? If you try to allocate a large block and it fails, you may have a fallback and try again with a smaller buffer, but if you fail to allocate 10 bytes, there is not much you can do. And checking for null constantly convolutes the code. For that reason I usually add a custom function that does checking and aborts on fail:
#include <stdlib.h>

/* malloc wrapper that aborts on failure, so callers never see NULL. */
static void *xmalloc(size_t sz)
{
    void *p = malloc(sz);
    if (!p)
        abort();
    return p;
}

Always check malloc'ed memory?

I often catch myself doing the following (in non-critical components):
some_small_struct *ptr = (some_small_struct *) malloc(sizeof(some_small_struct));
ptr->some_member = ...;
In words, I allocate dynamically memory for a small structure and I use it directly without checking the malloc'ed pointer. I understand there is always a chance that the program won't get the memory it asks for (duh!) but consider the following:
If the program can't even get some memory for a small structure off the
heap, maybe there are much bigger problems looming and it doesn't matter after all.
Furthermore, what if handling the null pointer exacerbates the precarious situation even more? (e.g. trying to log the condition calls on even more non-existent resources, etc.)
Is my reasoning sane (enough) ?
Updated:
A "safe_malloc" function can be useful when debugging and might be useful otherwise
+X access can hide the root cause of a NULL pointer
On Linux, "optimistic memory allocation" can shadow loomin OOM (Out-Of-Memory) conditions
Depends on the platform. For instance, on Linux (by default) it does not make much sense to check for NULL:
http://linux.die.net/man/3/malloc
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer.
In the case of C, it depends on the platform. If you are on an embedded platform with very little memory, you should always check, though what you do if it does fail is more difficult to say. On a modern 32-bit OS with virtual memory, the system will probably become unresponsive and crash before it admits to running out of memory. In this case, the call to malloc never returns, so the utility of checking its value becomes moot.
In the case of C++, you should be using new instead of malloc, in which case an exception will be raised on exhaustion, so there is no point in checking the return value.
I would say No.
Using a NULL pointer is going to crash the program (probably).
But detecting it and doing something intelligent will be OK and you may be able to recover from the low memory situation.
If you are doing a big operation set some global error flag and start unwinding the stack and releasing resources. Hopefully one or more of these resources will be your memory hog and your application will get back to normal.
This of course is a C problem and is handled automatically in C++ with the help of exceptions and RAII.
As new will not return NULL there is no point in checking.
Allocations can fail for several reasons. What you do (and can do) about it depends in part on the allocation failure.
Being truly out of memory is catastrophic. Unless you've made a careful plan for this, there's probably nothing you can do. (For example, you could have pre-allocated all the resources you'd need for an emergency save and shutdown.)
But many allocation failures have nothing to do with being out of memory. Fragmentation can cause an allocation to fail because there's not enough contiguous space available even though there's plenty of memory free. The question specifically said a "small structure", so a failure there is probably as bad as a true out-of-memory condition. (But code is ever-changing. What's a small structure today might be a monster tomorrow. And if it's so small, do you really need memory from the heap or can you get it from the stack?)
In a multi-threaded world, allocation failures are often transient conditions. Your modest allocation might fail this microsecond, but perhaps a memory-hogging thread is about to release a big buffer. So a recovery strategy might involve a delay and retry.
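A hedged sketch of such a delay-and-retry policy (the attempt count and delay are arbitrary choices):

#include <stdlib.h>
#include <time.h>

/* Retry a failed allocation a few times, sleeping briefly between attempts,
   on the theory that another thread may release memory in the meantime. */
void *malloc_retry(size_t size, int max_attempts)
{
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        void *p = malloc(size);
        if (p != NULL)
            return p;
        struct timespec delay = { 0, 50 * 1000 * 1000 };   /* 50 ms */
        nanosleep(&delay, NULL);
    }
    return NULL;   /* still failing: let the caller decide what to do */
}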
How (and if) you handle allocation failure can also depend on the type of application. If you're writing a complex document editor, and a crash means losing the user's work, then it's worth expending more effort to handle these failures. If your application is transactional, and each change is incrementally applied to persistent storage, then a crash is only a minor inconvenience to the user. Even so, logging should be considered. If your application is routinely getting allocation failures, you probably have a bug, and you'll need the logs to know about it and track it down.
Lastly, you have to think about testing. Allocation failures are rare, so the chance that recovery code has been exercised in your testing is vanishingly small--unless you've taken steps to ensure test coverage by artificially forcing failures. If you aren't going to test your recovery code, then it's probably not worth writing it.
At the very least I would put an assert(ptr != NULL) in there so you get a meaningful error.
Furthermore, what if handling the null pointer exacerbates the precarious situation even more??
I do not see why it can exacerbate the situation.
Anyway, when writing code for Windows, ptr->some_member will raise an access violation so you will immediately see the problem; therefore I see no reason to check the return value, unless your program has some opportunity to free memory.
For platforms that do not handle null pointers in a good way (by raising an exception) it is dangerous to ignore such pointers.
Assuming that you are running on Linux/macOS/Windows or another virtual memory system, then... the only reason to check the return value from malloc is if you have a strategy for freeing up enough memory to allow the program to continue running. An informative message will help in diagnosing the problem, but only if your program caused the out-of-memory situation.
Usually it is not your program and the only thing that your program can to do help is to exit as quickly as possible.
assert(ptr != NULL);
will do all of these things. My usual strategy is to have a layer around malloc that has this in it:
#include <assert.h>
#include <stdlib.h>

void *my_malloc(size_t size)
{
    void *ptr = malloc(size);
    assert(ptr != NULL);   /* fail loudly if allocation did not succeed */
    return ptr;
}
Then you call my_malloc instead of malloc. During development I use a memory allocation library that is conducive to debugging. After that if it runs out of memory - I get a message.
Yes, having insufficient memory will almost certainly presage other failures coming soon. But how sure are you that no corrupt output will occur between the failure to allocate and the final crash?
How sure are you for every program, every time you make an edit.
Catch your errors so you can know you crashed on time.
It is possible to allocate a largish chunk of memory at startup that you can free when you hit an out of memory condition and use that to shut down gracefully.
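A minimal sketch of that rescue-buffer idea (the 1 MiB reserve size is arbitrary, and save_user_data() is a hypothetical callback that must not allocate):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical hook that writes unsaved work to disk without allocating. */
extern void save_user_data(void);

static void *emergency_reserve;   /* released only when we hit OOM */

void init_emergency_reserve(void)
{
    emergency_reserve = malloc(1024 * 1024);   /* 1 MiB rainy-day fund */
}

void *checked_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        free(emergency_reserve);   /* give ourselves room to shut down */
        emergency_reserve = NULL;
        fputs("out of memory: saving work and exiting\n", stderr);
        save_user_data();
        exit(EXIT_FAILURE);
    }
    return p;
}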
I always feel it is important to check the return value of malloc, or of any other system call for that matter. Though on modern systems (apart from embedded ones) failure is a rare scenario unless your code uses too much memory, checking is always safer.
Continuing the code after a system call failure can lead to corruption, crash and what not apart from making your program look bad.
Also, on Linux, the memory allocated to a process is limited. Try creating 1000 threads in a process and allocating some memory in each one of them; then you can easily simulate a low-memory condition. :)
Always better to check for sys call return values!
