The Matasano blog calls “Checking the return value of malloc()” a “C Programming Anti-Idiom.” Instead malloc() should automatically call abort() for you if it fails. The argument is that, since you’ll usually want to abort the program if malloc() fails, that should be the default behaviour instead of something you have to laboriously type—or maybe forget to type!—every time.
Without getting into the merits of the idea, what’s the easiest way to set this up? I’m looking for something that would automatically detect memory allocation failures by other library functions such as asprintf() too. A portable solution would be wonderful, but I’d also be happy with something mac-specific.
Summarizing the best answers below:
Mac run-time solution
Set the MallocErrorAbort=1 environment variable before running your program. Automatically works for all memory allocation functions.
Mac/linux run-time solution
Use a dynamic library shim to load a custom malloc() wrapper at runtime with LD_PRELOAD or DYLD_INSERT_LIBRARIES. You will likely want to wrap calloc(), realloc(), &c. as well.
Mac/linux compiled solution
Define your own malloc() and free() functions, and access the system versions using dlsym(RTLD_NEXT, "malloc") as shown here. Again, you will likely want to wrap calloc(), realloc(), &c. as well.
#define _GNU_SOURCE   /* for RTLD_NEXT on Linux; not needed on macOS */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

void *(*system_malloc)(size_t) = NULL;

void *malloc(size_t bytes) {
    if (system_malloc == NULL) {
        system_malloc = dlsym(RTLD_NEXT, "malloc");
    }
    void *ret = system_malloc(bytes);
    if (ret == NULL) {
        perror("malloc failed, aborting");
        abort();
    }
    return ret;
}

int main() {
    void *m = malloc(10000000000000000l);
    if (m == NULL) {
        perror("malloc failed, program still running");
    }
    return 0;
}
Linux compiled solution
Use __malloc_hook and __realloc_hook as described in the glibc manual (note that these hooks are deprecated and were removed in glibc 2.34, so this only works with older glibc versions).
Mac compiled solution
Use the malloc_default_zone() function to access the heap’s data structure, unprotect the memory page, and install a hook in zone->malloc:
#include <malloc/malloc.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void *(*system_malloc)(struct _malloc_zone_t *zone, size_t size);

static void *my_malloc(struct _malloc_zone_t *zone, size_t size) {
    void *ret = system_malloc(zone, size);
    if (ret == NULL) {
        perror("malloc failed, aborting");
        abort();
    }
    return ret;
}

int main() {
    malloc_zone_t *zone = malloc_default_zone();
    if (zone->version != 8) {
        fprintf(stderr, "Unknown malloc zone version %d\n", zone->version);
        abort();
    }
    system_malloc = zone->malloc;
    if (mprotect(zone, getpagesize(), PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect (unprotect) failed");
        abort();
    }
    zone->malloc = my_malloc;
    if (mprotect(zone, getpagesize(), PROT_READ) != 0) {
        perror("mprotect failed");
        abort();
    }
    void *m = malloc(10000000000000000l);
    if (m == NULL) {
        perror("malloc failed, program still running");
    }
    return 0;
}
For completeness you will likely want to wrap calloc(), realloc(), and the other functions defined in malloc_zone_t in /usr/include/malloc/malloc.h as well.
Just wrap malloc() in some my_malloc() function that does this instead. In a lot of cases it's actually possible to handle not being able to allocate memory so this type of behaviour would be undesirable. It's easy to add functionality to malloc() but not to remove it, which is probably why it behaves this way.
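A minimal sketch of such a wrapper (my_malloc is just an illustrative name) might look like this:
#include <stdio.h>
#include <stdlib.h>

/* Call sites use my_malloc() instead of malloc() and never check for NULL;
   the wrapper aborts on allocation failure. */
void *my_malloc(size_t bytes) {
    void *ptr = malloc(bytes);
    if (ptr == NULL) {
        fprintf(stderr, "malloc(%zu) failed, aborting\n", bytes);
        abort();
    }
    return ptr;
}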
Another thing to keep in mind is that this is a library you're calling into. Would you like to make a library call and have the library kill your application without you being able to have a say in it?
I guess I missed the part about asprintf, but libc exports some hooks you can use (what valgrind does, essentially) that let you override the malloc behavior. Here's a reference to the hooks themselves; if you know C well enough, it's not hard to do.
http://www.gnu.org/savannah-checkouts/gnu/libc/manual/html_node/Hooks-for-Malloc.html
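For reference, here is a sketch along the lines of the example on that manual page; note that __malloc_hook is deprecated and was removed entirely in glibc 2.34, so this only works with older glibc versions:
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

static void *(*old_malloc_hook)(size_t, const void *);

static void *aborting_malloc_hook(size_t size, const void *caller) {
    void *result;
    (void) caller;                     /* unused */
    __malloc_hook = old_malloc_hook;   /* restore so the real malloc runs */
    result = malloc(size);
    old_malloc_hook = __malloc_hook;
    __malloc_hook = aborting_malloc_hook;
    if (result == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed\n", size);
        abort();
    }
    return result;
}

int main(void) {
    old_malloc_hook = __malloc_hook;
    __malloc_hook = aborting_malloc_hook;
    /* asprintf(), strdup(), etc. also end up in the hook. */
    free(malloc(16));
    return 0;
}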
man malloc on my Mac gives the following information. It looks like you want MallocErrorAbort.
ENVIRONMENT
The following environment variables change the behavior of the allocation-related functions.
MallocLogFile <f>
Create/append messages to the given file path <f> instead of writing to the standard error.
MallocGuardEdges
If set, add a guard page before and after each large block.
MallocDoNotProtectPrelude
If set, do not add a guard page before large blocks, even if the MallocGuardEdges environment variable is set.
MallocDoNotProtectPostlude
If set, do not add a guard page after large blocks, even if the MallocGuardEdges environment variable is set.
MallocStackLogging
If set, record all stacks, so that tools like leaks can be used.
MallocStackLoggingNoCompact
If set, record all stacks in a manner that is compatible with the malloc_history program.
MallocStackLoggingDirectory
If set, records stack logs to the directory specified instead of saving them to the default location (/tmp).
MallocScribble
If set, fill memory that has been allocated with 0xaa bytes. This increases the likelihood that a program making assumptions about the contents of freshly allocated memory will fail. Also if set, fill memory that has been deallocated with 0x55 bytes. This increases the likelihood that a program will fail due to accessing memory that is no longer allocated.
MallocCheckHeapStart <s>
If set, specifies the number of allocations <s> to wait before beginning periodic heap checks every <n> as specified by MallocCheckHeapEach. If MallocCheckHeapStart is set but MallocCheckHeapEach is not specified, the default check repetition is 1000.
MallocCheckHeapEach <n>
If set, run a consistency check on the heap every <n> operations. MallocCheckHeapEach is only meaningful if MallocCheckHeapStart is also set.
MallocCheckHeapSleep <t>
Sets the number of seconds to sleep (waiting for a debugger to attach) when MallocCheckHeapStart is set and a heap corruption is detected. The default is 100 seconds. Setting this to zero means not to sleep at all. Setting this to a negative number means to sleep (for the positive number of seconds) only the very first time a heap corruption is detected.
MallocCheckHeapAbort <b>
When MallocCheckHeapStart is set and this is set to a non-zero value, causes abort(3) to be called if a heap corruption is detected, instead of any sleeping.
MallocErrorAbort
If set, causes abort(3) to be called if an error was encountered in malloc(3) or free(3), such as calling free(3) on a pointer previously freed.
MallocCorruptionAbort
Similar to MallocErrorAbort but will not abort in out of memory conditions, making it more useful to catch only those errors which will cause memory corruption. MallocCorruptionAbort is always set on 64-bit processes.
MallocHelp
If set, print a list of environment variables that are paid heed to by the allocation-related functions, along with short descriptions. The list should correspond to this documentation.
Note the comments under MallocCorruptionAbort about the behaviour of MallocErrorAbort.
For most of my own code, I use a series of wrapper functions — emalloc(), erealloc(), ecalloc(), efree(), estrdup(), etc — that check for failed allocations (efree() is a straight pass-through function for consistency) and do not return when an allocation fails. They either exit or abort. This is basically what Jesus Ramos suggests in his answer; I agree with what he suggests.
However, not all programs can afford to have that happen. I'm just in the process of fixing up some code I wrote which does use these functions so that it can be reused in a context where it is not OK to fail on an allocation error. For its original purpose (security checks during the very early stages of process startup), it was fine to exit on error, but now it needs to be usable after the system is running, when a premature exit is not allowed. So the code has to deal with those paths where it used to be able to assume 'no return on allocation failure'. That's a tad painful. It can still take a conservative view; an allocation failure means the request cannot be handled safely, and it is treated accordingly. But not all code can afford to fail with abort on memory allocation failure.
Related
I was working on a project for a course on Operating Systems. The task was to implement a library for dealing with threads, similar to pthreads, but much simpler. The purpose of it is to practice scheduling algorithms. The final product is a .a file. The course is over and everything worked just fine (in terms of functionality).
Though, I got curious about an issue I faced. In three different functions of my source file, if I add the following line, for instance:
fprintf(stderr, "My lucky number is %d\n", 4);
I get a segmentation fault. The same doesn't happen if stdout is used instead, or if the formatting doesn't contain any variables.
That leaves me with two main questions:
Why does it only happen in three functions of my code, and not the others?
Could the creation of contexts using getcontext() and makecontext(), or the changing of contexts using setcontext() or swapcontext(), mess with the standard file descriptors?
My intuition says those functions could be responsible for that, even more so given that the three functions of my code in which this happens are functions that have contexts which other parts of the code switch to, usually by setcontext(), though swapcontext() is used to go to the scheduler to choose another thread to execute.
Additionally, if that is the case, then:
What is the proper way to create threads using those functions?
I'm currently doing the following:
/*------------------------------------------------------------------------------
 Funct:  Creates an execution context for the function and arguments passed.
 Input:  uc    -> Pointer where the context will be created.
         funct -> Function to be executed in the context.
         arg   -> Argument to the function.
 Return: If the function succeeds, 0 will be returned. Otherwise -1.
------------------------------------------------------------------------------*/
static int create_context(ucontext_t *uc, void *funct, void *arg)
{
    if (getcontext(uc) != 0)                     // Gets a context "model"
    {
        return -1;
    }
    stack_t *sp = (stack_t*) malloc(STACK_SIZE); // Stack area for the execution context
    if (!sp)                                     // A stack area is mandatory
    {
        return -1;
    }
    uc->uc_stack.ss_sp = sp;                     // Sets stack pointer
    uc->uc_stack.ss_size = STACK_SIZE;           // Sets stack size
    uc->uc_link = &context_end;                  // Sets the context to go to after execution
    makecontext(uc, funct, 1, arg);              // "Makes everything work" (can't fail)
    return 0;
}
This code is slightly modified, but it is originally based on an online example of how to use ucontext.
Assuming glibc, the explanation is that fprintf with an unbuffered stream (such as stderr by default) internally creates an on-stack buffer which has a size of BUFSIZ bytes. See the function buffered_vfprintf in stdio-common/vfprintf.c. BUFSIZ is 8192, so you end up with a stack overflow because the stack you create is too small.
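A minimal fix, assuming STACK_SIZE is the constant used in create_context() above, is simply to make each context's stack comfortably larger than BUFSIZ, for example:
#include <stdio.h>   /* for BUFSIZ */

/* Each ucontext stack must hold fprintf's BUFSIZ-byte on-stack buffer plus
   whatever else the thread function uses; 64 KiB is an illustrative value. */
#define STACK_SIZE (64 * 1024)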
I'm writing a multithreaded server in C for a uni project and I'm having a hard time figuring out how to do error handling in a nice, readable and standard way.
Right now, if the program terminates successfully, I free every bit of allocated memory at the end of it. But what if a fatal error occurs during the execution (e.g. malloc returns NULL)?
For example, let's say I have a custom data type mydata_t and a constructor function mydata_t *mydata_init() that is used by several modules of my program. After seeing some code on the internet, this is how I would have written it:
mydata_t *mydata_init() {
    mydata_t *mydata = malloc(sizeof(mydata_t));
    if (!mydata) return NULL;

    mydata->field1 = malloc(sizeof(mydata2_t));
    if (!mydata->field1) return NULL;

    mydata->field2 = malloc(sizeof(mydata3_t));
    if (!mydata->field2) return NULL;

    /*
        Initialization of other fields
    */

    return mydata;
}
It does seem nice and clean, but is this the "standard" way to do it?
In particular, what if one of the mallocs returns NULL? Is it necessary to free all the previously allocated memory? Is it reasonable to change the code to something like this?
mydata_t *mydata_init() {
    mydata_t *mydata = malloc(sizeof(mydata_t));
    if (!mydata) goto err_1;

    mydata->field1 = malloc(sizeof(mydata2_t));
    if (!mydata->field1) goto err_2;

    mydata->field2 = malloc(sizeof(mydata3_t));
    if (!mydata->field2) goto err_3;

    /*
        Initialization of other fields
    */

    return mydata;

    /*
        Other tags
    */

err_3:
    free(mydata->field1);
err_2:
    free(mydata);
err_1:
    return NULL;
}
Assuming a non-trivial OS, that is, one with 'kill -9' or a Task Manager with an 'End Process' entry on a GUI context menu, please bear in mind the following points before embarking on a lengthy and expensive campaign to write specific user code to free all malloced/whatever memory upon a fatal error:
1) Freeing all memory with user code requires more user code. That extra code must be designed, coded, tested, and maintained, often repeatedly after changes and/or new versions. With a complex, multithreaded app, maybe with pools of objects that are shared and communicated between threads, it's not going to be a remotely trivial exercise to even try to shut it down with user code.
2) Freeing all memory with user code after a fatal error can make things worse. If the error is a result of a corrupted heap manager, then you will raise even more faults as you try to free it. An app 'disappearing' on a customer with an error log entry is bad enough; a screen full of AV error boxes and a stuck app is much worse.
3) Safely freeing all memory with user code can only be done by a thread if all other threads have been stopped so that they cannot possibly access any of that memory. Reliably stopping all process threads can only be done by the OS at process termination. User code cannot be guaranteed to do it - if a thread is stuck executing a lengthy operation in a library, you cannot reliably stop it. If you try, it may, for instance, leave the memory-manager locked. Just unblocking threads stuck on an I/O operation is difficult enough, often requiring bodges like opening a connection on the local network stack to force an accept() call to return early.
4) The OS 'terminate process' API, and all that is involved in it, has been tested a LOT. It works, and it comes free with your OS. Your user code that tries to stop threads and free memory will never accumulate as much testing.
5) User code that tries to stop threads and free memory is redundant - the OS does the same job, only better, quicker and more reliably. You are trying to clean up memory from a sub-allocator that the OS is going to soon destroy and deallocate anyway.
6) Many OS and other commercial libraries have already given way to the inevitable and accept that they cannot safely have all their memory freed at shutdown without causing problems, especially with multithreaded apps. The library authors cannot do it reliably, neither can you.
Sure, manage your memory during a run in a controlled and sensible manner, freeing what you malloc as required, so as not to leak memory during the lifetime of the process.
If you encounter a fatal error, however, maybe try to write the details to a log file or make some other debugging/logging action, if you can, and then call your OS 'terminate process' API.
There is nothing else you can safely do.
Your app is near death, let the OS euthanize it.
One possible alternative.
mydata_t *mydata_init()
{
    mydata_t *mydata = malloc(sizeof(mydata_t));
    if (mydata == NULL)
    {
        /* Handle error */
        return NULL;
    }

    mydata->field1 = malloc(sizeof(mydata2_t));
    mydata->field2 = malloc(sizeof(mydata3_t));
    ...

    if (mydata->field1 != NULL &&
        mydata->field2 != NULL &&
        ...)
    {
        /* success */
        /*
         * Initialize everything
         */
        return mydata;
    }

    free(mydata->field1);
    free(mydata->field2);
    ...
    free(mydata);
    return NULL;
}
Note that you do not need to check for NULL before calling free() on the error path. As noted in the first answer to this question
Quoting the C standard, 7.20.3.2/2 from ISO/IEC 9899:
If ptr is a null pointer, no action occurs.
Is it necessary to free all the previously allocated memory?
No, but non-leaky (good) code does.
You will find various opinions on how to do this (free up things), but the end goal is to do it, somehow. Free unused resources.
Code for clarity.
Note: goto form is an acceptable use of goto.
I'll offer another approach: use a mydata_uninit() companion function where possible.
mydata_t *mydata_uninit(mydata_t *mydata) {
    if (mydata) {
        free(mydata->field1);
        mydata->field1 = NULL;  // example discussed below **
        free(mydata->field2);
        // free other resources and perform other clean up/accounting.
        free(mydata);
    }
    return NULL;
}
I'd also allocate to the size of the de-referenced pointer, not the type.
mydata_t *mydata_init(void) {
    mydata_t *mydata = calloc(1, sizeof *mydata);
    if (mydata == NULL) {
        return NULL;
    }
    mydata->field1 = calloc(1, sizeof *(mydata->field1));
    mydata->field2 = calloc(1, sizeof *(mydata->field2));
    // Other 1st time assignments

    // If any failed
    if (mydata->field1 == NULL || mydata->field2 == NULL) {
        mydata = mydata_uninit(mydata);
    }
    return mydata;
}
** Setting pointer struct members (mydata->field1) to NULL before later freeing the struct pointer (mydata) is something I find aids debugging, as errant code that dereferences a NULL pointer typically fails faster than code that dereferences a freed pointer.
The existing answers covers the various malloc/free scenarios, so I will not add to that.
What I would like to point out is that if you fail a malloc, it is pretty much game-over for the program, and it is often better to just exit() than to try to recover the situation.
Even if you manage to clean up partially allocated structs, other code, and libraries used by that code, will also fail to allocate memory and may (and often will) fail in mysterious ways.
malloc() usually only fails if you run in severely restricted environments (like embedded microcontrollers), or if you are leaking memory like a sieve.
In C, I know it is good practice to always check if a newly malloced variable is null right after allocating. If it is, I can output an error (e.g. with perror) and exit the program.
But what about in more complicated programs? E.g. I have main call a function f1 (returns an int), which calls a function f2 (returns a char*), which calls a function f3 (returns a double), and I fail to malloc inside f3.
In this case, I can't just output an error and exit (and may even leak memory), since f3 still forces me to first return a double. Then f2 forces me to return a char*, etc. In this case, it seems very painful to keep track of the errors and exit appropriately. What is the proper way to efficiently cover these sorts of errors across functions?
The obvious solution is to design your program with care, so that every function that does dynamic allocation has some means to report errors. Most often the return value of the function is used for this purpose.
In well-designed programs, errors bounce back all the way up the call stack, so that they are dealt with at the application level.
In the specific case of dynamic memory allocation, it is always best to leave the allocation to the caller whenever possible.
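As an illustration only (the function names f1/f2/f3 simply mirror the question), one common shape for this is to return a status code and deliver results through out-parameters:
#include <stdlib.h>

/* Each function returns 0 on success, -1 on failure, and passes its real
   result back through an out-parameter. */
static int f3(double *out) {
    double *buf = malloc(1000 * sizeof *buf);
    if (buf == NULL)
        return -1;              /* report the failure to the caller */
    *out = 42.0;                /* ... real work ... */
    free(buf);
    return 0;
}

static int f2(char **out) {
    double d;
    if (f3(&d) != 0)
        return -1;              /* propagate the error upward */
    *out = malloc(32);
    if (*out == NULL)
        return -1;
    (*out)[0] = '\0';
    return 0;
}

static int f1(void) {
    char *s;
    if (f2(&s) != 0)
        return -1;
    /* ... use s ... */
    free(s);
    return 0;
}

int main(void) {
    if (f1() != 0) {
        /* the application level decides what to do about the failure */
        return EXIT_FAILURE;
    }
    return 0;
}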
It's always a problem. You need a disciplined approach.
Firstly, every dynamic pointer must be "owned" by someone. C won't help you here; you just have to specify it. Generally the three patterns are:
a) A function calls malloc(), then calls free().
b) We have two matching functions, one which returns a buffer or dynamic structure, one which destroys it. The function that calls create also calls the destroy.
c) We have a set of nodes we are inserting into a graph, at random, throughout the program. It needs to be managed like b): one function creates the root, then calls the delete function, which destroys the entire graph.
The rule is owner holds and frees.
If you return a pointer, return 0 on out of memory. If you return an integer, return -1. Errors get propagated up until some high level code knows what user-level operation has failed and aborts it.
The other answers are correct that the correct way to handle this is to make sure that every function that can allocate memory can report failure to its caller, and every caller handles the possibility. And, of course, you have a test malloc shim that arranges to test every possible allocation failure.
But in large C programs, this becomes intractable — the number of cases that need testing increases exponentially with the number of malloc callsites, for starters — so it is very common to see a function like this in a utils.c file:
void *
xmalloc(size_t n)
{
    void *rv = malloc(n);
    if (!rv) {
        fprintf(stderr, "%s: memory exhausted\n", program_name);
        exit(1);
    }
    return rv;
}
All other code in the program always calls xmalloc, never malloc, and can assume it always succeeds. (And you also have xcalloc, xrealloc, xstrdup, etc.)
Libraries cannot get away with this, but applications can.
One way to transfer control across the functions is exception handling (in C++). When an exception is thrown, control returns to the scope of the catch block. But be careful about memory allocated in the intervening functions, since control moves directly to the catch block.
Sample code for reference:
// Example program
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;

int f1()
{
    int *p = (int*) malloc(sizeof(int));
    if (p == NULL)
    {
        throw(1);
    }
    // Code flow continues.
    return 0;
}

char *g()
{
    char *p = NULL;
    f1();
    cout << "Inside fun g" << endl;
    return p;
}

int f2()
{
    g();
    cout << "Inside fun f2" << endl;
    return 0;
}

int main()
{
    try
    {
        f2();
    }
    catch (int a)
    {
        cout << "Caught memory exception" << endl;
    }
    return 0;
}
Is there any way to put processor instructions into an array, make its memory segment executable, and run it as a simple function:
int main()
{
    char myarr[13] = {0x90, 0xc3};
    void (*myfunc)() = (void (*)()) myarr;
    myfunc();
    return 0;
}
On Unix (these days, that means "everything except Windows and some embedded and mainframe stuff you've probably never heard of") you do this by allocating a whole number of pages with mmap, writing the code into them, and then making them executable with mprotect.
// Needed headers (fatal_perror() is assumed to be a helper that calls
// perror() and then exits).
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

void execute_generated_machine_code(const uint8_t *code, size_t codelen)
{
    // in order to manipulate memory protection, we must work with
    // whole pages allocated directly from the operating system.
    static size_t pagesize;
    if (!pagesize) {
        pagesize = sysconf(_SC_PAGESIZE);
        if (pagesize == (size_t)-1) fatal_perror("sysconf");
    }

    // allocate at least enough space for the code + 1 byte
    // (so that there will be at least one INT3 - see below),
    // rounded up to a multiple of the system page size.
    size_t rounded_codesize = ((codelen + 1 + pagesize - 1)
                               / pagesize) * pagesize;

    void *executable_area = mmap(0, rounded_codesize,
                                 PROT_READ|PROT_WRITE,
                                 MAP_PRIVATE|MAP_ANONYMOUS,
                                 -1, 0);
    if (executable_area == MAP_FAILED) fatal_perror("mmap");

    // at this point, executable_area points to memory that is writable but
    // *not* executable. load the code into it.
    memcpy(executable_area, code, codelen);

    // fill the space at the end with INT3 instructions, to guarantee
    // a prompt crash if the generated code runs off the end.
    // must change this if generating code for non-x86.
    memset((uint8_t *)executable_area + codelen, 0xCC,
           rounded_codesize - codelen);

    // make executable_area actually executable (and unwritable)
    if (mprotect(executable_area, rounded_codesize, PROT_READ|PROT_EXEC))
        fatal_perror("mprotect");

    // now we can call it. passing arguments / receiving return values
    // is left as an exercise (consult libffi source code for clues).
    ((void (*)(void)) executable_area)();

    munmap(executable_area, rounded_codesize);
}
You can probably see that this code is very nearly the same as the Windows code shown in cherrydt's answer. Only the names and arguments of the system calls are different.
When working with code like this, it is important to know that many modern operating systems will not allow you to have a page of RAM that is simultaneously writable and executable. If I'd written PROT_READ|PROT_WRITE|PROT_EXEC in the call to mmap or mprotect, it would fail. This is called the W^X policy; the acronym stands for Write XOR eXecute. It originates with OpenBSD, and the idea is to make it harder for a buffer-overflow exploit to write code into RAM and then execute it. (It's still possible, the exploit just has to find a way to make an appropriate call to mprotect first.)
Depends on the platform.
For Windows, you can use this code:
// Allocate some memory as readable+writable
// TODO: Check return value for error
LPVOID memPtr = VirtualAlloc(NULL, sizeof(myarr), MEM_COMMIT, PAGE_READWRITE);

// Copy data
memcpy(memPtr, myarr, sizeof(myarr));

// Change memory protection to readable+executable
// Again, TODO: Error checking
DWORD oldProtection; // Not used but required for the function
VirtualProtect(memPtr, sizeof(myarr), PAGE_EXECUTE_READ, &oldProtection);

// Assign and call the function
void (*myfunc)() = (void (*)()) memPtr;
myfunc();

// Free the memory
VirtualFree(memPtr, 0, MEM_RELEASE);
This code assumes a myarr array as in your question's code, and it assumes that sizeof will work on it, i.e. it has a directly defined size and is not just a pointer passed from elsewhere. If the latter is the case, you would have to specify the size in another way.
Note that two "simplifications" are possible here, in case you wonder, but I would advise against both:
1) You could call VirtualAlloc with PAGE_EXECUTE_READWRITE, but this is in general bad practice because it would open an attack vector for unwanted code execution.
2) You could call VirtualProtect on &myarr directly, but this would just make executable a random page in your memory which happens to contain your array, which is even worse than #1 because there might be other data in that page which suddenly becomes executable as well.
For Linux, I found this on Google but I don't know much about it.
Very OS-dependent: not all OSes will deliberately (read: without a bug) allow you to execute code in the data segment. DOS will, because it runs in Real Mode; Linux can also, with the appropriate privileges. I don't know about Windows.
Casting is often undefined and has its own caveats, so some elaboration on that topic here. From C11 standard draft N1570, §J.5.7/1 (the annex on common extensions):
A pointer to an object or to void may be cast to a pointer to a function, allowing data to be invoked as a function (6.5.4).
(Formatting added.)
So, as a common extension it should work as expected on platforms that document it, though the standard itself does not guarantee it. Of course, you would need to conform to the ABI's calling convention.
What is the best way for unit testing code paths involving a failed malloc()? In most instances, it probably doesn't matter because you're doing something like
thingy *my_thingy = malloc(sizeof(thingy));
if (my_thingy == NULL) {
    fprintf(stderr, "We're so screwed!\n");
    exit(EXIT_FAILURE);
}
but in some instances you have choices other than dying, because you've allocated some extra stuff for caching or whatever, and you can reclaim that memory.
However, in those instances where you can try to recover from a failed malloc(), you're doing something tricky and error-prone in a code path that's pretty unusual, which makes testing especially important. How do you actually go about doing this?
I saw a cool solution to this problem which was presented to me by S. Paavolainen. The idea is to override the standard malloc(), which you can do just in the linker, by a custom allocator which
reads the current execution stack of the thread calling malloc()
checks if the stack exists in a database that is stored on hard disk
if the stack does not exist, adds the stack to the database and returns NULL
if the stack did exist already, allocates memory normally and returns
Then you just run your unit test many times: this system automatically enumerates through different control paths to malloc() failure and is much more efficient and reliable than e.g. random testing.
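A much-simplified, not thread-safe sketch of that idea, keying on just the immediate caller address (via the GCC/Clang builtin __builtin_return_address) instead of the full call stack, and keeping the "database" in memory for a single run rather than on disk:
#define _GNU_SOURCE          /* for RTLD_NEXT */
#include <dlfcn.h>
#include <stdlib.h>

static void *(*real_malloc)(size_t);
static void *seen_callers[1024];
static int n_seen;

void *malloc(size_t size) {
    void *caller = __builtin_return_address(0);
    if (real_malloc == NULL)
        real_malloc = dlsym(RTLD_NEXT, "malloc");
    /* Call sites that have already been failed once allocate normally;
       the first allocation from each new call site fails. */
    for (int i = 0; i < n_seen; i++)
        if (seen_callers[i] == caller)
            return real_malloc(size);
    if (n_seen < 1024)
        seen_callers[n_seen++] = caller;
    return NULL;
}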
I suggest creating a specific function for your special malloc code, for the allocations that you expect could fail and want to handle gracefully. For example:
void *special_malloc(size_t bytes) {
    void *ptr = malloc(bytes);
    if (ptr == NULL) {
        /* Do something crafty */
    }
    return ptr;
}
Then you could unit-test this crafty business in here by passing in some bad values for bytes. You could put this in a separate library and make a mock library that behaves specially for testing the functions which call this one.
This is kinda gross, but if you really want unit testing, you could do it with #ifdefs:
thingy *my_thingy = malloc(sizeof(thingy));
#ifdef MALLOC_UNIT_TEST_1
my_thingy = NULL;
#endif
if (my_thingy == NULL) {
    fprintf(stderr, "We're so screwed!\n");
    exit(EXIT_FAILURE);
}
Unfortunately, you'd have to recompile a lot with this solution.
If you're using Linux, you could also consider running your code under memory pressure by using ulimit, but be careful.
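Along the same lines, inside a C test harness you can apply the limit programmatically with setrlimit(); the numbers below are purely illustrative:
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Cap the address space at ~16 MiB so that large allocations fail. */
    struct rlimit lim = { 16 * 1024 * 1024, 16 * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return EXIT_FAILURE;
    }
    void *p = malloc(32 * 1024 * 1024);   /* larger than the cap */
    printf("large malloc %s\n", p ? "succeeded" : "failed as expected");
    free(p);
    return 0;
}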
write your own library that implements malloc by randomly failing or calling the real malloc (either statically linked or explicitly dlopened)
then LD_PRELOAD it
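A sketch of such a shim, with an assumed file name and build command noted in the comments:
/* failing_malloc.c -- build (assumed command, adjust as needed):
 *   gcc -shared -fPIC failing_malloc.c -o failing_malloc.so -ldl
 * then run:
 *   LD_PRELOAD=./failing_malloc.so ./program_under_test
 */
#define _GNU_SOURCE          /* for RTLD_NEXT */
#include <dlfcn.h>
#include <stdlib.h>

static void *(*real_malloc)(size_t);

void *malloc(size_t size) {
    if (real_malloc == NULL)
        real_malloc = dlsym(RTLD_NEXT, "malloc");
    if (rand() % 100 == 0)   /* fail roughly 1% of allocations */
        return NULL;
    return real_malloc(size);
}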
In FreeBSD I once simply overrode the C library's malloc.o module (the symbols there were weak) and replaced the malloc() implementation with one that had a controlled probability of failing.
So I linked statically and started to perform testing. srandom() finished the picture with a controlled pseudo-random sequence.
Also look here for a set of good tools that, in my opinion, you seem to need. At least they overload malloc()/free() to track leaks, so it seems like a usable point to add anything you want.
You could hijack malloc by using some defines and a global parameter to control it... It's a bit hackish but seems to work.
#include <stdio.h>
#include <stdlib.h>

#define malloc(x) fake_malloc(x)

struct {
    size_t last_request;
    int should_fail;
    void *(*real_malloc)(size_t);
} fake_malloc_params;

void *fake_malloc(size_t size) {
    fake_malloc_params.last_request = size;
    if (fake_malloc_params.should_fail) {
        return NULL;
    }
    return (fake_malloc_params.real_malloc)(size);
}

int main(void) {
    /* a bare 'malloc' is not expanded by the function-like macro above,
       so this stores the address of the real malloc */
    fake_malloc_params.real_malloc = malloc;
    void *ptr = NULL;
    ptr = malloc(1);   /* expands to fake_malloc(1) */
    printf("last: %zu\n", fake_malloc_params.last_request);
    printf("ptr: %p\n", ptr);
    return 0;
}
}