Unit testing for failed malloc() - c

What is the best way for unit testing code paths involving a failed malloc()? In most instances, it probably doesn't matter because you're doing something like
thingy *my_thingy = malloc(sizeof(thingy));
if (my_thingy == NULL) {
    fprintf(stderr, "We're so screwed!\n");
    exit(EXIT_FAILURE);
}
but in some instances you have choices other than dying, because you've allocated some extra stuff for caching or whatever, and you can reclaim that memory.
However, it's precisely in those instances where you can try to recover from a failed malloc() that you're doing something tricky and error-prone, in a code path that's pretty unusual, making testing especially important. How do you actually go about doing this?

I saw a cool solution to this problem which was presented to me by S. Paavolainen. The idea is to override the standard malloc(), which you can do in the linker alone, with a custom allocator that:
reads the current execution stack of the thread calling malloc()
checks if the stack exists in a database that is stored on hard disk
if the stack does not exist, adds the stack to the database and returns NULL
if the stack did exist already, allocates memory normally and returns
Then you just run your unit tests many times: this system automatically enumerates the different control paths to malloc() failure and is much more efficient and reliable than, e.g., random testing.
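A minimal sketch of the idea, assuming Linux/glibc (backtrace() from execinfo.h, dlsym() with RTLD_NEXT) and dynamic interposition rather than static linking; the on-disk database is replaced by an in-memory table, and thread safety and bootstrap issues (dlsym() and the first backtrace() call may themselves allocate) are glossed over. All names here are illustrative, not from the original answer:
/* fail_once_per_stack.c -- sketch only
 * build: gcc -shared -fPIC fail_once_per_stack.c -o fail1.so -ldl
 * run:   LD_PRELOAD=./fail1.so ./unit_test
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <execinfo.h>
#include <stddef.h>

#define MAX_STACKS 4096

static void *(*real_malloc)(size_t);
static unsigned long seen[MAX_STACKS];   /* stands in for the on-disk database */
static size_t nseen;

static unsigned long stack_hash(void)
{
    void *frames[32];
    int n = backtrace(frames, 32);        /* capture the caller's stack */
    unsigned long h = 5381;
    for (int i = 0; i < n; i++)
        h = h * 33 ^ (unsigned long)frames[i];
    return h;
}

void *malloc(size_t size)
{
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

    unsigned long h = stack_hash();
    for (size_t i = 0; i < nseen; i++)
        if (seen[i] == h)
            return real_malloc(size);     /* this call path already failed once */

    if (nseen < MAX_STACKS)
        seen[nseen++] = h;
    return NULL;                          /* first visit: simulate failure */
}
Each run of the test suite then fails one more not-yet-seen allocation path, which is the enumeration the answer describes.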

I suggest creating a specific function for your special malloc code that you expect could fail and you could handle gracefully. For example:
void *special_malloc(size_t bytes) {
    void *ptr = malloc(bytes);
    if (ptr == NULL) {
        /* Do something crafty */
    }
    return ptr;   /* always return, even after the crafty handling */
}
Then you could unit-test the crafty business in here by passing in some bad values for bytes. You could put this in a separate library and make a mock library that behaves specially when you test the functions which call this one, as sketched below.
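For instance, the unit-test build might link against a mock version in place of the real one; mock_fail_next is a made-up control flag for this sketch, not part of the original answer:
/* mock_special_malloc.c -- linked into the unit-test build instead of the real one */
#include <stddef.h>
#include <stdlib.h>

int mock_fail_next = 0;                 /* the test sets this to force a failure */

void *special_malloc(size_t bytes)
{
    if (mock_fail_next) {
        mock_fail_next = 0;
        return NULL;                    /* simulate out-of-memory for the caller under test */
    }
    return malloc(bytes);
}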

This is kinda gross, but if you really want unit testing, you could do it with #ifdefs:
thingy *my_thingy = malloc(sizeof(thingy));
#ifdef MALLOC_UNIT_TEST_1
my_thingy = NULL;
#endif
if (my_thingy == NULL) {
    fprintf(stderr, "We're so screwed!\n");
    exit(EXIT_FAILURE);
}
Unfortunately, you'd have to recompile a lot with this solution.
If you're using Linux, you could also consider running your code under memory pressure by using ulimit, but be careful.

Write your own library that implements malloc() by randomly failing or calling the real malloc() (either statically linked or explicitly dlopen()ed),
then LD_PRELOAD it.
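A minimal sketch of such a preloadable shim, assuming Linux/glibc and dlsym(RTLD_NEXT, ...); the 5% failure rate and all file names are arbitrary choices for the sketch:
/* random_fail_malloc.c
 * build: gcc -shared -fPIC random_fail_malloc.c -o random_fail.so -ldl
 * run:   LD_PRELOAD=./random_fail.so ./your_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdlib.h>

void *malloc(size_t size)
{
    static void *(*real_malloc)(size_t);
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

    if (rand() % 100 < 5)       /* fail roughly 5% of allocations */
        return NULL;
    return real_malloc(size);
}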

In FreeBSD I once simply overloaded the C library's malloc.o module (the symbols there were weak) and replaced the malloc() implementation with one that had a controlled probability to fail.
So I linked statically and started to perform testing. srandom() finished the picture with a controlled pseudo-random sequence.
Also look here for a set of good tools that, in my opinion, you seem to need. At a minimum they overload malloc()/free() to track leaks, so they seem like a usable point to add anything you want.

You could hijack malloc by using some defines and a global parameter to control it... It's a bit hackish, but it seems to work.
#include <stdio.h>
#include <stdlib.h>

#define malloc(x) fake_malloc(x)

struct {
    size_t last_request;
    int should_fail;
    void *(*real_malloc)(size_t);
} fake_malloc_params;

void *fake_malloc(size_t size) {
    fake_malloc_params.last_request = size;
    if (fake_malloc_params.should_fail) {
        return NULL;
    }
    return (fake_malloc_params.real_malloc)(size);
}

int main(void) {
    fake_malloc_params.real_malloc = malloc; /* not expanded: the macro is function-like */
    void *ptr = NULL;
    ptr = malloc(1);
    printf("last: %zu\n", fake_malloc_params.last_request);
    printf("ptr: %p\n", ptr);
    return 0;
}
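A unit test of calling code could then flip the switch before exercising the failure path; create_thingy() is a hypothetical name for the function under test, not something from the answer above:
#include <assert.h>

void test_create_thingy_handles_oom(void)
{
    fake_malloc_params.should_fail = 1;   /* force the next allocation to fail */
    void *t = create_thingy();            /* hypothetical function under test */
    assert(t == NULL);                    /* it should report failure, not crash */
    fake_malloc_params.should_fail = 0;
}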

Related

Do I have to free any previously allocated memory after an error occurs?

I'm writing a multithreaded server in C for a uni project and I'm having a hard time figuring out how to do error handling in a nice, readable and standard way.
Right now, if the program terminates successfully, I free every bit of allocated memory at the end of it. But what if a fatal error occurs during the execution (e.g. malloc returns NULL)?
For example, let's say I have a custom data type mydata_t and a constructor function mydata_t *mydata_init() that is used by several modules of my program. After seeing some code on the internet, this is how I would have written it:
mydata_t *mydata_init() {
    mydata_t *mydata = malloc(sizeof(mydata_t));
    if (!mydata) return NULL;
    mydata->field1 = malloc(sizeof(mydata2_t));
    if (!mydata->field1) return NULL;
    mydata->field2 = malloc(sizeof(mydata3_t));
    if (!mydata->field2) return NULL;
    /*
        Initialization of other fields
    */
    return mydata;
}
It does seem nice and clean, but is this the "standard" way to do it?
In particular, what if one of the mallocs returns NULL? Is it necessary to free all the previously allocated memory? Is it reasonable to change the code to something like this?
mydata_t *mydata_init() {
    mydata_t *mydata = malloc(sizeof(mydata_t));
    if (!mydata) goto err_1;
    mydata->field1 = malloc(sizeof(mydata2_t));
    if (!mydata->field1) goto err_2;
    mydata->field2 = malloc(sizeof(mydata3_t));
    if (!mydata->field2) goto err_3;
    /*
        Initialization of other fields
    */
    return mydata;
    /*
        Other tags
    */
err_3:
    free(mydata->field1);
err_2:
    free(mydata);
err_1:
    return NULL;
}
Assuming a non-trivial OS, that is, one with 'kill -9' or a Task Manager with an 'End Process' entry on a GUI context menu, please bear in mind the following points before embarking on a lengthy and expensive campaign to write specific user code to free all malloc'ed/whatever memory upon a fatal error:
1) Freeing all memory with user code requires more user code. That extra code must be designed, coded, tested and maintained, often repeatedly after changes and/or new versions. With a complex, multithreaded app, maybe with pools of objects that are shared and communicated between threads, it's not going to be a remotely trivial exercise to even try to shut it down with user code.
2) Freeing all memory with user code after a fatal error can make things worse. If the error is a result of a corrupted heap manager, then you will raise even more faults as you try to free it. An app 'disappearing' on a customer with an error log entry is bad enough; a screen full of AV error boxes and a stuck app is much worse.
3) Safely freeing all memory with user code can only be done by a thread if all other threads have been stopped so that they cannot possibly access any of that memory. Reliably stopping all process threads can only be done by the OS at process termination. User code cannot be guaranteed to do it - if a thread is stuck executing a lengthy operation in a library, you cannot reliably stop it. If you try, it may, for instance, leave the memory-manager locked. Just unblocking threads stuck on an I/O operation is difficult enough, often requiring bodges like opening a connection on the local network stack to force an accept() call to return early.
4) The OS 'terminate process' API, and all that is involved in it, has been tested a LOT. It works, and it comes free with your OS. Your user code that tries to stop threads and free memory will never accumulate as much testing.
5) User code that tries to stop threads and free memory is redundant - the OS does the same job, only better, quicker and more reliably. You are trying to clean up memory from a sub-allocator that the OS is going to soon destroy and deallocate anyway.
6) Many OS and other commercial libraries have already given way to the inevitable and accept that they cannot safely have all their memory freed at shutdown without causing problems, especially with multithreaded apps. The library authors cannot do it reliably, neither can you.
Sure, manage your memory during a run in a controlled and sensible manner, freeing what you malloc as required, so as not to leak memory during the lifetime of the process.
If you encounter a fatal error, however, maybe try to write the details to a log file or make some other debugging/logging action, if you can, and then call your OS 'terminate process' API.
There is nothing else you can safely do.
Your app is near death, let the OS euthanize it.
One possible alternative.
mydata_t *mydata_init()
{
    mydata_t *mydata = malloc(sizeof(mydata_t));
    if (mydata == NULL)
    {
        /* Handle error */
        return NULL;
    }
    mydata->field1 = malloc(sizeof(mydata2_t));
    mydata->field2 = malloc(sizeof(mydata3_t));
    ...
    if (mydata->field1 != NULL &&
        mydata->field2 != NULL &&
        ...)
    {
        /* success */
        /*
         * Initialize everything
         */
        return mydata;
    }
    free(mydata->field1);
    free(mydata->field2);
    ...
    free(mydata);
    return NULL;
}
Note that you do not need to check for NULL before calling free() on the error path. As noted in the first answer to this question
Quoting the C standard, 7.20.3.2/2 from ISO-IEC 9899:
If ptr is a null pointer, no action occurs.
Is it necessary to free all the previously allocated memory?
No, but non-leaky (good) code does.
You will find various opinions on how to do this (free up things), but the end goal is to do it, somehow. Free unused resources.
Code for clarity.
Note: the goto form above is an acceptable use of goto.
I'll offer another approach that uses a mydata_uninit() companion function.
mydata_t *mydata_uninit(mydata_t *mydata) {
    if (mydata) {
        free(mydata->field1);
        mydata->field1 = NULL; // example discussed below **
        free(mydata->field2);
        // free other resources and perform other clean up/accounting.
        free(mydata);
    }
    return NULL;
}
I'd also allocate to the size of the de-referenced pointer, not the type.
mydata_t *mydata_init(void) {
    mydata_t *mydata = calloc(1, sizeof *mydata);
    if (mydata == NULL) {
        return NULL;
    }
    mydata->field1 = calloc(1, sizeof *(mydata->field1));
    mydata->field2 = calloc(1, sizeof *(mydata->field2));
    // Other 1st time assignments

    // If any failed
    if (mydata->field1 == NULL || mydata->field2 == NULL) {
        mydata = mydata_uninit(mydata);
    }
    return mydata;
}
** Setting pointer struct members (mydata->field1) to NULL and then later freeing the struct pointer (mydata) aids debugging, I find, because errant code that dereferences a NULL pointer typically fails faster than code that dereferences a freed pointer.
The existing answers covers the various malloc/free scenarios, so I will not add to that.
What I would like to point out is that if you fail a malloc, it is pretty much game-over for the program, and it is often better to just exit() than to try to recover the situation.
Even if you manage to clean up partially allocated structs, other code, and libraries used by that code, will also fail to allocate memory and may (and often will) fail in mysterious ways.
malloc() usually only fails if you run in a severely restricted environment (like an embedded microcontroller), or if you are leaking memory like a sieve.

Error handling in C: unable to malloc in nested function

In C, I know it is good practice to always check if a newly malloced variable is null right after allocating. If so, I can output an error e.g. perror and exit the program.
But what about in more complicated programs? E.g. I have main call a function f1(returns an int), which calls a function f2(returns a char*), which calls a function f3(returns a double), and I fail to malloc inside f3.
In this case, I can't seem to just output an error and exit(and may even have memory leaks if possible) since f3 will still force me to first return a double. Then f2 will force me to return a char*, etc. In this case, it seems very painful to keep track of the errors and exit appropriately. What is the proper way to efficiently cover these sort of errors accross functions?
The obvious solution is to design your program with care, so that every function that does dynamic allocation has some means to report errors. Most often the return value of the function is used for this purpose.
In well-designed programs, errors bounce back all the way up the call stack, so that they are dealt with at the application level.
In the specific case of dynamic memory allocation, it is always best to leave the allocation to the caller whenever possible.
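A hedged sketch of that design, reusing the question's f2/f3 naming: each function reports success or failure through its return value and delivers its real result through an out-parameter, so the failure can bubble up without being encoded in the double or the char*. The bodies are placeholders:
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool f3(double *out)
{
    double *scratch = malloc(sizeof *scratch);
    if (scratch == NULL)
        return false;              /* report the failure; don't exit down here */
    *scratch = 42.0;               /* placeholder computation */
    *out = *scratch;
    free(scratch);
    return true;
}

static bool f2(char **out)
{
    double d;
    if (!f3(&d))
        return false;              /* nothing allocated here yet, just bubble up */
    *out = malloc(32);
    if (*out == NULL)
        return false;
    snprintf(*out, 32, "%g", d);
    return true;
}

int main(void)
{
    char *s;
    if (!f2(&s)) {
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;       /* the application level decides what to do */
    }
    puts(s);
    free(s);
    return 0;
}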
It's always a problem. You need a disciplined approach.
Firstly, every dynamic pointer must be "owned" by someone. C won't help you here, you just have to specify it. Generally the three patterns are:
a) A function calls malloc(), then later calls free() itself.
b) We have two matching functions, one which returns a buffer or dynamic
structure, and one which destroys it. The function that calls the create function also calls the destroy function.
c) We have a set of nodes we are inserting into a graph, at random points throughout the program. It needs to be managed like b): one function creates the root, and the matching delete destroys the entire graph.
The rule is owner holds and frees.
If you return a pointer, return 0 on out of memory. If you return an integer, return -1. Errors get propagated up until some high level code knows what user-level operation has failed and aborts it.
The other answers are correct that the correct way to handle this is to make sure that every function that can allocate memory can report failure to its caller, and every caller handles the possibility. And, of course, you have a test malloc shim that arranges to test every possible allocation failure.
But in large C programs, this becomes intractable — the number of cases that need testing increases exponentially with the number of malloc callsites, for starters — so it is very common to see a function like this in a utils.c file:
void *
xmalloc(size_t n)
{
    void *rv = malloc(n);
    if (!rv) {
        fprintf(stderr, "%s: memory exhausted\n", program_name);
        exit(1);
    }
    return rv;
}
All other code in the program always calls xmalloc, never malloc, and can assume it always succeeds. (And you also have xcalloc, xrealloc, xstrdup, etc.)
Libraries cannot get away with this, but applications can.
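The companions mentioned above follow the same pattern; for example, a plausible xstrdup() built on the xmalloc() shown here (a sketch, not the canonical implementation from any particular utils.c):
#include <string.h>

char *
xstrdup(const char *s)
{
    size_t n = strlen(s) + 1;
    char *rv = xmalloc(n);     /* never returns NULL, by construction */
    memcpy(rv, s, n);
    return rv;
}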
One way to switch across the functions is exception handling (note that this is C++; C itself has no exceptions).
When an exception is thrown, control returns to the scope of the matching catch block.
But be careful about memory already allocated across the intermediate functions, since control moves directly to the catch block and those allocations can leak.
The sample code for reference,
// Example program
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

int f1()
{
    int *p = (int *) malloc(sizeof(int));
    if (p == NULL)
    {
        throw(1);
    }
    // Code flow continues.
    return 0;
}

char *g()
{
    char *p = NULL;
    f1();
    cout << "Inside fun g" << endl;
    return p;
}

int f2()
{
    g();
    cout << "Inside fun f2" << endl;
    return 0;
}

int main()
{
    try
    {
        f2();
    }
    catch (int a)
    {
        cout << "Caught memory exception" << endl;
    }
    return 0;
}

Is it a bad idea to check whether malloc was successful using a user-defined function?

When using malloc in C, I'm always told to check to see if any errors occurred by checking if it returned a NULL value. While I definitely understand why this is important, it is a bit of a bother constantly typing out 'if' statements and whatever I want inside them to check whether the memory was successfully allocated for each individual instance where I use malloc. To make things quicker, I made a function as follows to check whether it was successful.
#include <stdio.h>
#include <stdlib.h>

#define MAX 25

char MallocCheck(char* Check);

char *Option1, *Option2;

int main(){
    Option1 = (char *)malloc(sizeof(char) * MAX);
    MallocCheck(Option1);
    Option2 = (char *)malloc(sizeof(char) * MAX);
    MallocCheck(Option2);
    return 0;
}

char MallocCheck(char* Check){
    if(Check == NULL){
        puts("Memory Allocation Error");
        exit(1);
    }
    return 0;
}
However, I have never seen someone else doing something like this no matter how much I search so I assume it is wrong or otherwise something that shouldn't be done.
Is using a user-defined function for this purpose wrong and if so, why is that the case?
Error checking is a good thing.
Making a helper function to code quicker, better is a good thing.
The details depend on coding goals and your group's coding standards.
OP's approach is not bad. I prefer to handle the error together with the allocation. The following prints a diagnostic on stderr and does not complain about a NULL return when 0 bytes were allocated (which is not an out-of-memory failure).
void *malloc_no_return_on_OOM(size_t size) {
    void *p = malloc(size);
    if (p == NULL && size > 0) {
        // Make messages informative
        fprintf(stderr, "malloc(%zu) failure\n", size);
        // or
        perror("malloc() failure");
        exit(1);
    }
    return p;
}
Advanced: you could code a DEBUG version that reports the caller's function and line by using a macro, as sketched below.
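One plausible way to do that (the helper and macro names here are illustrative): the macro captures __FILE__, __LINE__ and __func__ at the call site and forwards them to a reporting helper:
#include <stdio.h>
#include <stdlib.h>

void *malloc_debug(size_t size, const char *file, int line, const char *func)
{
    void *p = malloc(size);
    if (p == NULL && size > 0) {
        fprintf(stderr, "%s:%d: %s: malloc(%zu) failure\n", file, line, func, size);
        exit(1);
    }
    return p;
}

#define malloc_dbg(size) malloc_debug((size), __FILE__, __LINE__, __func__)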
This is an addendum to #chux's answer and the comments.
As stated, DRY code is generally a good thing and malloc errors are often handled the same way within a specific implementation.
It's true that some systems (notably, Linux) offer optimistic malloc implementations, meaning malloc will usually return a valid pointer even when the memory is not actually available, and the error is reported (with a signal) only the first time data is written to the returned pointer... which makes error handling slightly more complex than the code in the question.
However, moving the error check to a different function might incur a performance penalty, unless the compiler / linker catches the issue and optimizes the function call away.
This is a classic use case for inline functions (on newer compilers) or macros.
i.e.
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

void handle_no_memory(int sig) {
    if (sig == SIGSEGV) {
        perror("Couldn't allocate or access memory");
        /* maybe use longjmp to stay in the game...? Or not... */
        exit(SIGSEGV);
    }
}

/* Using a macro: */
#define IS_MEM_VALID(ptr)          \
    if ((ptr) == NULL) {           \
        handle_no_memory(SIGSEGV); \
    }

/* OR an inline function: */
static inline void *is_mem_valid(void *ptr) {
    if (ptr == NULL)
        handle_no_memory(SIGSEGV);
    return ptr;
}

int main(int argc, char const *argv[]) {
    /* consider setting a signal handler - `sigaction` is better, but I'm lazy. */
    signal(SIGSEGV, handle_no_memory);
    /* using the macro */
    void *data_macro = malloc(1024);
    IS_MEM_VALID(data_macro);
    /* using the inline function */
    void *data_inline = is_mem_valid(malloc(1024));
    return 0;
}
Both macros and inline functions prevent code jumps and function calls, since the if statement is now part of the function instead of an external function.
When using inline, the compiler will place the generated code within the calling function (instead of performing a function call). For this, we must trust the compiler to do its job properly (it usually does that job better than we do).
When using macros, the preprocessor takes care of things and we don't need to trust the compiler.
In both cases the function / macro is local to the file (notice the static keyword), allowing any optimizations to be performed by the compiler (not the linker).
Good luck.

Should I check if malloc() was successful?

Should one check after each malloc() if it was successful? Is it at all possible that a malloc() fails? What happens then?
At school we were told that we should check, i.e.:
arr = (int *) malloc(sizeof(int) * x * y);
if (arr == NULL) {
    printf("Error. Allocation was unsuccessful. \n");
    return 1;
}
What is the practice regarding this? Can I do it this way:
if (!(arr = (int *) malloc(sizeof(int) * x * y)))
    <error>
This mainly only adds to the existing answers, but I understand where you are coming from: if you do a lot of memory allocation, your code ends up looking very ugly with all the error checks for malloc.
Personally I often get around this using a small malloc wrapper which will never fail. Unless your software is a resilient, safety-critical system, you cannot meaningfully work around malloc failing anyway, so I would suggest something like this:
static inline void *MallocOrDie(size_t MemSize)
{
    void *AllocMem = malloc(MemSize);
    /* Some implementations return null on a 0 length alloc,
     * we may as well allow this as it increases compatibility
     * with very few side effects */
    if (!AllocMem && MemSize)
    {
        printf("Could not allocate memory!");
        exit(-1);
    }
    return AllocMem;
}
Which will at least ensure you get an error message and clean crash, and avoids all the bulk of the error checking code.
For a more generic solution for functions that can fail, I also tend to implement a simple macro such as this:
#define PrintDie(...)                  \
    do                                 \
    {                                  \
        fprintf(stderr, __VA_ARGS__);  \
        abort();                       \
    } while (0)
Which then allows you to run a function as:
if(-1 == foo()) PrintDie("Oh no");
Which gives you a one liner, again avoiding the bulk while enabling proper checks.
No need to cast malloc(). Yes, however, it is required to check whether the malloc() was successful or not.
Let's say malloc() failed; trying to access the pointer, thinking memory was allocated, will lead to a crash, so it is better to catch the memory allocation failure before accessing the pointer.
int *arr = malloc(sizeof(*arr));
if (arr == NULL)
{
    printf("Memory allocation failed");
    return;
}

How to make OS X malloc automatically abort on failure?

The Matasano blog calls “Checking the return value of malloc()” a “C Programming Anti-Idiom.” Instead malloc() should automatically call abort() for you if it fails. The argument is that, since you’ll usually want to abort the program if malloc() fails, that should be the default behaviour instead of something you have to laboriously type—or maybe forget to type!—every time.
Without getting into the merits of the idea, what’s the easiest way to set this up? I’m looking for something that would automatically detect memory allocation failures by other library functions such as asprintf() too. A portable solution would be wonderful, but I’d also be happy with something mac-specific.
Summarizing the best answers below:
Mac run-time solution
Set the MallocErrorAbort=1 environment variable before running your program. Automatically works for all memory allocation functions.
Mac/linux run-time solution
Use a dynamic library shim to load a custom malloc() wrapper at runtime with LD_PRELOAD or DYLD_INSERT_LIBRARIES. You will likely want to wrap calloc(), realloc(), &c. as well.
Mac/linux compiled solution
Define your own malloc() and free() functions, and access the system versions using dlsym(RTLD_NEXT, "malloc") as shown here. Again, you will likely want to wrap calloc(), realloc(), &c. as well.
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

void *(*system_malloc)(size_t) = NULL;

void *malloc(size_t bytes) {
    if (system_malloc == NULL) {
        system_malloc = dlsym(RTLD_NEXT, "malloc");
    }
    void *ret = system_malloc(bytes);
    if (ret == NULL) {
        perror("malloc failed, aborting");
        abort();
    }
    return ret;
}

int main() {
    void *m = malloc(10000000000000000l);
    if (m == NULL) {
        perror("malloc failed, program still running");
    }
    return 0;
}
Linux compiled solution
Use __malloc_hook and __realloc_hook as described in the glibc manual.
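A rough sketch of that hook approach for older glibc (the hook variables are deprecated and were removed in later glibc releases, so treat this purely as an illustration of what the manual describes; the function names are made up):
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

static void *(*old_malloc_hook)(size_t, const void *);

static void *aborting_malloc_hook(size_t size, const void *caller)
{
    void *p;
    (void)caller;
    __malloc_hook = old_malloc_hook;        /* uninstall so the recursive malloc() is the real one */
    p = malloc(size);
    old_malloc_hook = __malloc_hook;
    __malloc_hook = aborting_malloc_hook;   /* reinstall */
    if (p == NULL) {
        perror("malloc failed, aborting");
        abort();
    }
    return p;
}

static void install_aborting_hook(void)
{
    old_malloc_hook = __malloc_hook;
    __malloc_hook = aborting_malloc_hook;
}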
Mac compiled solution
Use the malloc_default_zone() function to access the heap’s data structure, unprotect the memory page, and install a hook in zone->malloc:
#include <malloc/malloc.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void *(*system_malloc)(struct _malloc_zone_t *zone, size_t size);

static void *my_malloc(struct _malloc_zone_t *zone, size_t size) {
    void *ret = system_malloc(zone, size);
    if (ret == NULL) {
        perror("malloc failed, aborting");
        abort();
    }
    return ret;
}

int main() {
    malloc_zone_t *zone = malloc_default_zone();
    if (zone->version != 8) {
        fprintf(stderr, "Unknown malloc zone version %d\n", zone->version);
        abort();
    }
    system_malloc = zone->malloc;
    if (mprotect(zone, getpagesize(), PROT_READ | PROT_WRITE) != 0) {
        perror("munprotect failed");
        abort();
    }
    zone->malloc = my_malloc;
    if (mprotect(zone, getpagesize(), PROT_READ) != 0) {
        perror("mprotect failed");
        abort();
    }
    void *m = malloc(10000000000000000l);
    if (m == NULL) {
        perror("malloc failed, program still running");
    }
    return 0;
}
For completeness you will likely want to wrap calloc(), realloc(), and the other functions defined in malloc_zone_t in /usr/include/malloc/malloc.h as well.
Just wrap malloc() in some my_malloc() function that does this instead. In a lot of cases it's actually possible to handle not being able to allocate memory so this type of behaviour would be undesirable. It's easy to add functionality to malloc() but not to remove it, which is probably why it behaves this way.
Another thing to keep in mind is that this is a library you're calling into. Would you like to make a library call and have the library kill your application without you being able to have a say in it?
I guess I missed the part about asprintf, but libc exports some hooks you can use (essentially what valgrind does) that let you override the malloc behavior. Here's a reference to the hooks themselves; if you know C well enough, it's not hard to do.
http://www.gnu.org/savannah-checkouts/gnu/libc/manual/html_node/Hooks-for-Malloc.html
man malloc on my Mac gives the following information. It looks like you want MallocErrorAbort.
ENVIRONMENT
The following environment variables change the behavior of the allocation-related functions.
MallocLogFile <f>
Create/append messages to the given file path <f> instead of writing to the standard error.
MallocGuardEdges
If set, add a guard page before and after each large block.
MallocDoNotProtectPrelude
If set, do not add a guard page before large blocks, even if the MallocGuardEdges environment variable is set.
MallocDoNotProtectPostlude
If set, do not add a guard page after large blocks, even if the MallocGuardEdges environment variable is set.
MallocStackLogging
If set, record all stacks, so that tools like leaks can be used.
MallocStackLoggingNoCompact
If set, record all stacks in a manner that is compatible with the malloc_history program.
MallocStackLoggingDirectory
If set, records stack logs to the directory specified instead of saving them to the default location (/tmp).
MallocScribble
If set, fill memory that has been allocated with 0xaa bytes. This increases the likelihood that a program making assumptions about the contents of freshly allocated memory will fail. Also if set, fill memory that has been deallocated with 0x55 bytes. This increases the likelihood that a program will fail due to accessing memory that is no longer allocated.
MallocCheckHeapStart <s>
If set, specifies the number of allocations <s> to wait before beginning periodic heap checks every <n> as specified by MallocCheckHeapEach. If MallocCheckHeapStart is set but MallocCheckHeapEach is not specified, the default check repetition is 1000.
MallocCheckHeapEach <n>
If set, run a consistency check on the heap every <n> operations. MallocCheckHeapEach is only meaningful if MallocCheckHeapStart is also set.
MallocCheckHeapSleep <t>
Sets the number of seconds to sleep (waiting for a debugger to attach) when MallocCheckHeapStart is set and a heap corruption is detected. The default is 100 seconds. Setting this to zero means not to sleep at all. Setting this to a negative number means to sleep (for the positive number of seconds) only the very first time a heap corruption is detected.
MallocCheckHeapAbort <b>
When MallocCheckHeapStart is set and this is set to a non-zero value, causes abort(3) to be called if a heap corruption is detected, instead of any sleeping.
MallocErrorAbort
If set, causes abort(3) to be called if an error was encountered in malloc(3) or free(3), such as a calling free(3) on a pointer previously freed.
MallocCorruptionAbort
Similar to MallocErrorAbort but will not abort in out of memory conditions, making it more useful to catch only those errors which will cause memory corruption. MallocCorruptionAbort is always set on 64-bit processes.
MallocHelp
If set, print a list of environment variables that are paid heed to by the allocation-related functions, along with short descriptions. The list should correspond to this documentation.
Note the comments under MallocCorruptionAbort about the behaviour of MallocErrorAbort.
For most of my own code, I use a series of wrapper functions — emalloc(), erealloc(), ecalloc(), efree(), estrdup(), etc — that check for failed allocations (efree() is a straight pass-through function for consistency) and do not return when an allocation fails. They either exit or abort. This is basically what Jesus Ramos suggests in his answer; I agree with what he suggests.
However, not all programs can afford to have that happen. I'm just in the process of fixing up some code I wrote which does use these functions so that it can be reused in a context where it is not OK to fail on an allocation error. For its original purpose (security checks during the very early stages of process startup), it was fine to exit on error, but now it needs to be usable after the system is running, when a premature exit is not allowed. So, the code has to deal with those paths where it used to be able to assume 'no return on allocation failure'. That's a tad painful. It can still take a conservative view: an allocation failure means the request is not safe, and it is processed appropriately. But not all code can afford to fail with abort on memory allocation failure.
