Can I rely on malloc returning NULL?

I read that on Unix systems, malloc can return a non-NULL pointer even if the memory is not actually available, and trying to use the memory later on will trigger an error. Since I cannot catch such an error by checking for NULL, I wonder how useful it is to check for NULL at all?
On a related note, Herb Sutter says that handling C++ memory errors is futile, because the system will go into spasms of paging long before an exception will actually occur. Does this apply to malloc as well?

Quoting the Linux malloc(3) manual page:
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:
# echo 2 > /proc/sys/vm/overcommit_memory
You ought to check for a NULL return, especially on 32-bit systems, since the process address space can be exhausted long before the RAM is: on 32-bit Linux, for example, user processes might have a usable address space of 2G - 3G even on a machine with more than 4G of total RAM. On 64-bit systems it might be useless to check the malloc return code, but it can still be considered good practice, and it does make your program more portable. And remember, dereferencing a null pointer is certain to kill your process; some swapping might not hurt much compared to that.
If malloc happens to return NULL when one tries to allocate only a small amount of memory, then one must be cautious when trying to recover from the error condition as any subsequent malloc can fail too, until enough memory is available.
The default C++ operator new is often a wrapper over the same allocation mechanisms employed by malloc().
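As a minimal sketch of that cautious-recovery advice (the helper name and the fallback strategy are made up for illustration; note that the error path avoids allocating any further memory, since subsequent mallocs can fail too):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: ask for a buffer, and if malloc() reports failure,
 * degrade to a smaller one instead of crashing. The diagnostic uses only
 * stderr and a string literal, so this path should not allocate. */
char *get_buffer(size_t wanted, size_t *got)
{
    char *p = malloc(wanted);
    if (p == NULL) {
        fputs("warning: low on memory, trying a smaller buffer\n", stderr);
        wanted /= 2;
        p = malloc(wanted);       /* this can fail too */
    }
    *got = (p != NULL) ? wanted : 0;
    return p;                     /* may still be NULL */
}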

On Linux, you can indeed not rely on malloc returning NULL if sufficient memory is not available due to the kernel's overallocation strategy, but you should still check for it because in some circumstances malloc will return NULL, e.g. when you ask for more memory than is available in the machine in total. The Linux malloc(3) manpage calls the overallocation "a really bad bug" and contains advice on how to turn it off.
I've never heard about this behavior also occurring in other Unix variants.
As for the "spasms of paging", that depends on the machine setup. E.g., I tend not to setup a swap partition on laptop Linux installations, since the exact behavior you fear might kill the hard disk. I would still like the C/C++ programs that I run to check malloc return values, give appropriate error messages and when possible clean up after themselves.

Checking the return of malloc doesn't, on its own, do much to make your allocations safer or less error prone. It can even be a trap if this is the only test that you implement.
When called with an argument of 0, the standard allows malloc to return a sort of unique address, which is not a null pointer but which you nevertheless have no right to access. So if you only test whether the return is 0, but don't test the arguments to malloc, calloc or realloc, you might encounter a segfault much later.
This error condition (memory exhausted) is quite rare in "hosted" environments. Usually you are in trouble long before you have to deal with this kind of error. (But if you are writing runtime libraries, are a kernel hacker or a rocket builder, this is different, and there the test makes perfect sense.)
People then tend to decorate their code with complicated captures of that error condition that span several lines, doing perror and the like, which can hurt the readability of the code.
I think that this "check the return of malloc" advice is much overestimated, sometimes even defended quite dogmatically. Other things are much more important:
Always initialize variables, always; for pointer variables this is crucial.
Let the program crash nicely before things get too bad; uninitialized pointer members in structs are an important cause of errors that are difficult to find.
Always check the argument to malloc and co.: if it is a compile-time constant like sizeof toto there can't be a problem, but always ensure that your vector allocations handle the zero case properly.
An easy way to check the return of malloc is to wrap it in something like memset(malloc(n), 0, 1). This just writes a 0 into the first byte and crashes nicely if malloc returned an error or n was 0 to start with.
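A hedged sketch of that trick as a tiny wrapper (the name touch_malloc is invented here; whether an out-of-bounds write for n == 0 actually crashes is not guaranteed, but the point is to fail close to the allocation site):

#include <stdlib.h>
#include <string.h>

/* Hypothetical wrapper for the memset(malloc(n), 0, 1) idea: touching the
 * first byte makes a failed (or zero-sized) allocation blow up right here,
 * near the cause, rather than much later at first use. */
void *touch_malloc(size_t n)
{
    return memset(malloc(n), 0, 1);
}

/* usage (hypothetical): struct info *pinfo = touch_malloc(sizeof *pinfo); */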

To view this from an alternative point of view:
"malloc can return a non-NULL pointer even if the memory is not actually available" does not mean that it always returns non-NULL. There might (and will) be cases where NULL is returned (as others already said), so this check is necessary nevertheless.

Related

How important is it to check for malloc failures?

Is always protecting mallocs important? By protecting I mean:
char *test_malloc = malloc(sizeof(char) * 10000);
if (!test_malloc)
    exit(EXIT_FAILURE);
I mean, in electronic devices, I don't doubt it's essential.
But what about programs I'm running on my own machine, where I'm sure the allocation size will be positive and nowhere near astronomical?
Some people say, "Ah imagine there’s not enough memory in your computer at this moment."
There's always the possibility that the system can run out of memory and not have any more to allocate, so it's always a good idea to do so.
Given that there's almost no way to recover from a failed malloc, I prefer to use a wrapper function to do the allocation and checking. That makes the code more readable.
void *safe_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        perror("malloc failed!");
        exit(EXIT_FAILURE);
    }
    return p;
}
The other reason it's a good idea to check for malloc failures (and why I always do) is: to catch programming mistakes.
It's true, memory is effectively infinite on many machines today, so malloc almost never fails because it ran out of memory. But it often fails (for me, at least) because I screwed up and overwrote memory in some way, screwing up the heap, and the next call to malloc often returns NULL to let you know it noticed the corruption. And that's something I really want to know about, and fix.
Over the years, probably 10% of the time malloc has returned NULL on me was because it was out of memory, and the other 90% was because of those other problems, my problems, that I really needed to know about.
So, to this day, I still maintain a pretty religious habit of always checking malloc.
Is it important to check for error conditions? Absolutely!
Now when it comes to malloc, you might find that in some environments an out-of-memory condition never surfaces as a NULL return at all. On glibc under Linux's default overcommit configuration, for example, the allocation appears to succeed and the process may simply be killed later by the OOM killer, which effectively builds in the assumption that if memory really runs out, a program won't be able to recover anyway. IMHO that assumption is ill founded. But it places malloc in that weird group of functions where, even if you properly implement out-of-memory handling that works without requesting any more memory, your program can still be crashed.
That being said: You should definitely check for error conditions, for the sole reason you might run your program in an environment that gives you a chance to properly react to them.
The Standard makes no distinction between a number of kinds of implementations:
Those where a program could allocate as much memory as malloc() will supply without affecting system stability, and where a program that could behave usefully without the storage from the failed allocation may run normally.
Those where system memory exhaustion may cause programs to terminate unexpectedly even without malloc() ever indicating a failure, but where attempting to dereference a pointer returned via malloc() will either access a valid memory region or force program termination.
Those where malloc() might return null, and dereferencing a null pointer might have unwanted effects beyond abnormal program termination, but where programs might be unexpectedly terminated due to a lack of storage even when malloc() reports success.
If there's no way a program would be able to usefully continue operation if malloc() fails, it's probably best to allocate memory via a wrapper function that calls malloc() and forcibly terminates the program if it returns null.

The need of checking a pointer returned by malloc in C

Suppose I have the following line in my code:
struct info *pinfo = malloc(sizeof(struct info));
Usually there is another line of code like this one:
if (!pinfo)
    <handle this error>
But is it really worth it? Especially if the object is so small that the code generated to check it might need more memory than the object itself.
It's true that running out of memory is rare, especially for little test programs that are only allocating tens of bytes of memory, especially on modern systems that have many gigabytes of memory available.
Yet malloc failures are very common, especially for little test programs.
malloc can fail for two reasons:
There's not enough memory to allocate.
malloc detects that the memory-allocation heap is messed up, perhaps because you did something wrong with one of your previous memory allocations.
Now, it turns out that #2 happens all the time.
And, it turns out that #1 is pretty common, too, although not because there's not enough memory to satisfy the allocation the programmer meant to do, but because the programmer accidentally passed a preposterously huge number to malloc, accidentally asking for more memory than there is in the known universe.
So, yes, it turns out that checking for malloc failure is a really good idea, even though it seems like malloc "can't fail".
The other thing to think about is, what if you take the shortcut and don't check for malloc failure? If you sail along and use the null pointer that malloc gave you instead, that'll cause your program to immediately crash, and that'll alert you to your problem just as well as an "out of memory" message would have, without your having to wear your fingers to the bone typing if(!pinfo) and fprintf(stderr, "out of memory\n"), right?
Well, no.
Depending on what your program accidentally does with the null pointer, it's possible it won't crash right away. Anyway, the crash you get, with a message like "Segmentation violation - core dumped" doesn't tell you much, doesn't tell you where your problem is. You can get segmentation violations for all sorts of reasons (especially in little test programs, especially if you're a beginner not quite sure what you're doing). You can spend hours in a futile effort to figure out why your program is crashing, without realizing it's because malloc is returning a null pointer. So, definitely, you should always check for malloc failure, even in the tiniest test programs.
Deciding which errors to test for, versus those that "can't happen" or for whatever reason aren't worth catching, is a hard problem in general. It can take a fair amount of experience to know what is and isn't worth checking for. But, truly, anybody who's programmed in C for very long can tell you emphatically: malloc failure is definitely worth checking for.
If your program is calling malloc all over the place, checking each and every call can be a real nuisance. So a popular strategy is to use a malloc wrapper:
void *my_malloc(size_t n)
{
    void *ret = malloc(n);
    if (ret == NULL) {
        fprintf(stderr, "malloc failed (%s)\n", strerror(errno));
        exit(1);
    }
    return ret;
}
There are three ways of thinking about this function:
Whenever you have some processing that you're doing repetitively, all over the place (in this case, checking for malloc failure), see if you can move it off to (centralize it in) a single function, like this.
Unlike malloc, my_malloc can't fail. It never returns a null pointer. It's almost magic. You can call it whenever and wherever you want, and you never have to check its return value. It lets you pretend that you never have to worry about running out of memory (which was sort of the goal all along).
Like any magical result, my_malloc's benefit — that it never seems to fail — comes at a price. If the underlying malloc fails, my_malloc summarily exits (since it can't return in that case), meaning that the rest of your program doesn't get a chance to clean up. If the program were, say, a text editor, and whenever it had a little error it printed "out of memory" and then basically threw away the file the user had been editing for the last hour, the user might not be too pleased. So you can't use the simple my_malloc trick in production programs that might lose data. But it's a huge convenience for programs that don't have to worry about that sort of thing.
If malloc fails then chances are the system is out of memory or it's something else your program can't handle. It should abort immediately and at most log some diagnostics. Not handling NULL from malloc will make you end up in undefined behavior land. One might argue that having to abort because of a failure of malloc is already catastrophic but just letting it exhibit UB falls under a worse category.
But what if the malloc fails? You will dereference the NULL pointer, which is UB (undefined behaviour) and your program will (probably) fail!
Sometimes code which checks the correctness of the data is longer than the code which does something with it :).
This is very simple: if you don't check for NULL you might end up with a runtime error. Checking for NULL helps you avoid an unexpected crash and handle the error case gracefully.
If you just want to quickly test some algorithm, then fine, but know it can fail. For example run it in the debugger.
When you include it in your Real World Program, then add all the error checking and handling needed.

Is it bad to use a malloc loop to guarantee a result from malloc?

Is it bad practice to allocate memory like this?:
FOO *foo;
while (!(foo = malloc(sizeof(FOO)))) ;
I don't know about bad practice, but it's uncommon. malloc() failures are usually indicative of major system problems that your program is unlikely to be able to recover from. If your system is different, your example may very well be practical.
NB - this answer assumes that sizeof(FOO) is "reasonable" and that your malloc() doesn't just refuse because you're asking for too much memory.
That doesn't "guarantee" a result from malloc(). If malloc returns NULL there's probably a reason for it (out of memory for example). And you might throw yourself into an infinite loop.
Furthermore, if you're running this on a Linux platform:
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available.
That means that even if calling malloc over and over again did eventually return something non-NULL, due to Linux's "optimistic" strategy a non-NULL return still doesn't guarantee you got memory that will actually work.
I think this code is setting you up for a debugging nightmare.
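For comparison, a hedged sketch of a bounded variant (the retry count and the one-second back-off are arbitrary choices here): even this only papers over the underlying problem, but at least it cannot spin forever and still forces the caller to check for NULL.

#include <stdlib.h>
#include <unistd.h>   /* sleep(); POSIX, used only to illustrate a back-off */

/* Hypothetical bounded-retry allocator: tries a few times, then gives up
 * and returns NULL so the caller must still handle the failure. */
void *malloc_retry(size_t size, unsigned attempts)
{
    for (;;) {
        void *p = malloc(size);
        if (p != NULL || attempts-- == 0)
            return p;         /* may be NULL: caller must still check */
        sleep(1);             /* arbitrary back-off between attempts */
    }
}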

Why is malloc really non-deterministic? (Linux/Unix)

malloc is not guaranteed to return 0'ed memory. The conventional wisdom is not only that, but that the contents of the memory malloc returns are actually non-deterministic, e.g. openssl used them for extra randomness.
However, as far as I know, malloc is built on top of brk/sbrk, which do "return" 0'ed memory. I can see why the contents of what malloc returns may be non-0, e.g. from previously free'd memory, but why would they be non-deterministic in "normal" single-threaded software?
Is the conventional wisdom really true (assuming the same binary and libraries)?
If so, why?
Edit: Several people answered explaining why the memory can be non-0, which I already explained in the question above. What I'm asking is why a program using the contents of what malloc returns may be non-deterministic, i.e. why it could behave differently every time it's run (assuming the same binary and libraries). Non-deterministic behavior is not implied by non-0's. To put it differently: why could the contents be different every time the binary is run?
Malloc does not guarantee unpredictability... it just doesn't guarantee predictability.
E.g. consider that
return 0;
(i.e. a malloc that always reports failure) is a valid implementation of malloc.
The initial values of memory returned by malloc are unspecified, which means that the specifications of the C and C++ languages put no restrictions on what values can be handed back. This makes the language easier to implement on a variety of platforms. While it might be true that in Linux malloc is implemented with brk and sbrk and the memory should be zeroed (I'm not even sure that this is necessarily true, by the way), on other platforms, perhaps an embedded platform, there's no reason that this would have to be the case. For example, an embedded device might not want to zero the memory, since doing so costs CPU cycles and thus power and time. Also, in the interest of efficiency, for example, the memory allocator could recycle blocks that had previously been freed without zeroing them out first. This means that even if the memory from the OS is initially zeroed out, the memory from malloc needn't be.
The conventional wisdom that the values are nondeterministic is probably a good one because it forces you to realize that any memory you get back might have garbage data in it that could crash your program. That said, you should not assume that the values are truly random. You should, however, realize that the values handed back are not magically going to be what you want. You are responsible for setting them up correctly. Assuming the values are truly random is a Really Bad Idea, since there is nothing at all to suggest that they would be.
If you want memory that is guaranteed to be zeroed out, use calloc instead.
Hope this helps!
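A tiny hedged illustration of the difference (reading the malloc'd byte is done purely for demonstration; its value is unspecified and may differ between runs and platforms):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned char *a = malloc(16);     /* contents unspecified        */
    unsigned char *b = calloc(16, 1);  /* guaranteed to be all zero   */

    if (a == NULL || b == NULL)
        return 1;

    printf("malloc: %d (may be anything)\n", a[0]);
    printf("calloc: %d (always 0)\n", b[0]);

    free(a);
    free(b);
    return 0;
}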
malloc is defined on many systems that can be programmed in C/C++, including many non-UNIX systems and many systems that lack an operating system altogether. Requiring malloc to zero out the memory goes against C's philosophy of saving CPU as much as possible.
The standard provides a zeroing call, calloc, that can be used if you need the memory zeroed out. But in cases where you are planning to initialize the memory yourself as soon as you get it, the CPU cycles spent making sure the block is zeroed out are a waste; the C standard aims to avoid this waste as much as possible, often at the expense of predictability.
Memory returned by malloc is not zeroed (or rather, is not guaranteed to be zeroed) because it does not need to be. There is no security risk in reusing uninitialized memory pulled from your own process's address space or page pool. You already know it's there, and you already know the contents. There is also no issue with the contents in a practical sense, because you're going to overwrite them anyway.
Incidentally, the memory returned by malloc is zeroed upon first allocation, because an operating system kernel cannot afford the risk of giving one process data that another process owned previously. Therefore, when the OS faults in a new page, it only ever provides one that has been zeroed. However, this is totally unrelated to malloc.
(Slightly off-topic: the Debian security thing you mentioned had a few more implications than using uninitialized memory for randomness. A packager who was not familiar with the inner workings of the code, and did not know the precise implications, patched out a couple of places that Valgrind had reported, presumably with good intent but to disastrous effect. Among these was the "random from uninitialized memory", but it was by far not the most severe one.)
I think that the assumption that it is non-deterministic is plain wrong, particularly as you ask about a non-threaded context. (In a threaded context, scheduling randomness could introduce some non-determinism.)
Just try it out. Create a sequential, deterministic application that
does a whole bunch of allocations,
fills the memory with some pattern, e.g. the value of a counter,
frees every second one of these allocations,
newly allocates the same amount,
runs through these new allocations and registers the value of the first byte in a file (as textual numbers, one per line).
Run this program twice and register the results in two different files. My idea is that these files will be identical.
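A hedged sketch of that experiment (block count and block size are arbitrary choices here; redirect stdout to a file for each run and diff the two files):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N     1000
#define BYTES 64

int main(void)
{
    unsigned char *blocks[N];
    int i;

    /* a whole bunch of allocations, filled with a counter pattern */
    for (i = 0; i < N; i++) {
        blocks[i] = malloc(BYTES);
        if (blocks[i] == NULL)
            return 1;
        memset(blocks[i], i & 0xff, BYTES);
    }

    /* free every second one, then allocate the same amount again */
    for (i = 0; i < N; i += 2) {
        free(blocks[i]);
        blocks[i] = malloc(BYTES);
        if (blocks[i] == NULL)
            return 1;
    }

    /* print the first byte of each new allocation, one number per line */
    for (i = 0; i < N; i += 2)
        printf("%d\n", blocks[i][0]);

    return 0;
}

The answer's prediction is that, in a single-threaded run with the same binary and libraries, the two output files come out identical.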
Even in "normal" single-threaded programs, memory is freed and reallocated many times. Malloc will return to you memory that you had used before.
Even single-threaded code may do malloc then free then malloc and get back previously used, non-zero memory.
There is no guarantee that brk/sbrk return 0ed-out data; this is an implementation detail. It is generally a good idea for an OS to do that to reduce the possibility that sensitive information from one process finds its way into another process, but nothing in the specification says that it will be the case.
Also, the fact that malloc is implemented on top of brk/sbrk is also implementation-dependent, and can even vary based on the size of the allocation; for example, large allocations on Linux have traditionally used mmap on /dev/zero instead.
Basically, you can neither rely on malloc()ed regions containing garbage nor on them being all zeroes, and no program should assume one way or the other about it.
The simplest way I can think of putting the answer is like this:
If I am looking for wall space to paint a mural, I don't care whether it is white or covered with old graffiti, since I'm going to prime it and paint over it. I only care whether I have enough square footage to accommodate the picture, and I care that I'm not painting over an area that belongs to someone else.
That is how malloc thinks. Zeroing memory every time a process ends would be wasted computational effort. It would be like re-priming the wall every time you finish painting.
There is a whole ecosystem of programs living inside a computer's memory, and you cannot control the order in which mallocs and frees happen.
Imagine that the first time you run your application and malloc() something, it gives you an address with some garbage. Then your program shuts down, your OS marks that area as free. Another program takes it with another malloc(), writes a lot of stuff and then leaves. You run your program again, it might happen that malloc() gives you the same address, but now there's different garbage there, that the previous program might have written.
I don't actually know the implementation of malloc() in any system and I don't know if it implements any kind of security measure (like randomizing the returned address), but I don't think so.
It is very deterministic.

What are the chances that realloc should fail?

Does it fail when it runs out of free memory, similarly to malloc, or could there be other reasons?
Any of the allocation functions (malloc, realloc, calloc, and on POSIX, posix_memalign) could fail for any of the following reasons, and possibly others:
You've used up your entire virtual address space, or at least the usable portion of it. On a 32-bit machine, there are only 4GB worth of addresses, and possibly 1GB or so is reserved for use by the OS kernel. Even if your machine has 16GB of physical memory, a single process cannot use more than it has addresses for.
You haven't used up your virtual address space, but you've fragmented it so badly that no contiguous range of addresses of the requested size is available. This could happen (on a 32-bit machine) if you successfully allocate six 512MB blocks, free every other one, then try to allocate a 1GB block. Of course there are plenty of other examples with smaller memory sizes.
Your machine has run out of physical memory, either due to your own program having used it all, or other programs running on the machine having used it all. Some systems (Linux in the default configuration) will overcommit, meaning malloc won't fail in this situation, but instead the OS will later kill one or more programs when it figures out there's not really enough physical memory to go around. But on robust systems (including Linux with overcommit disabled), malloc will fail if there's no physical memory left.
Note that strictly speaking, the allocation functions are allowed to fail at any time for any reason. Minimizing failure is a quality-of-implementation issue. It's also possible that realloc could fail, even when reducing the size of an object; this could happen on implementations that strictly segregate allocations by size. Of course, in this case you could simply continue to use the old (larger) object.
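That last point lends itself to a small hedged pattern (the helper name shrink_to_fit is made up; it assumes a non-zero new size, since zero-size realloc has murkier semantics, as discussed further down): if shrinking fails, just keep the old, larger block.

#include <stdlib.h>

/* Hypothetical shrink helper: try to give memory back; if realloc() fails,
 * the old, larger block is untouched and still perfectly usable. */
void *shrink_to_fit(void *p, size_t new_size)
{
    void *q = realloc(p, new_size);   /* new_size assumed to be non-zero */
    return (q != NULL) ? q : p;       /* on failure, keep the original   */
}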
You should think of realloc as working this way:
void *realloc(void *oldptr, size_t newsize)
{
    size_t oldsize = __extract_size_of_malloc_block(oldptr);
    void *newptr = malloc(newsize);
    if (!newptr)
        return 0;
    if (oldsize > newsize)
        oldsize = newsize;
    memcpy(newptr, oldptr, oldsize);
    free(oldptr);
    return newptr;
}
An implementation may be able to do specific cases more efficiently than that, but an implementation that works exactly as shown is 100% correct. That means realloc(ptr, newsize) can fail anytime malloc(newsize) would have failed; in particular it can fail even if you are shrinking the allocation.
Now, on modern desktop systems there is a strong case for not trying to recover from malloc failures, but instead wrapping malloc in a function (usually called xmalloc) that terminates the program immediately if malloc fails; naturally the same argument applies to realloc. The case is:
Desktop systems often run in "overcommit" mode where the kernel will happily hand out more address space than can be backed by RAM+swap, assuming that the program isn't actually going to use all of it. If the program does try to use all of it, it will get forcibly terminated. On such systems, malloc will only fail if you exhaust the address space, which is unlikely on 32-bit systems and nigh-impossible on 64-bit systems.
Even if you're not in overcommit mode, the odds are that a desktop system has so much RAM and swap available that, long before you cause malloc to fail, the user will get fed up with their thrashing disk and forcibly terminate your program.
There is no practical way to test recovery from an allocation failure; even if you had a shim library that could control exactly which calls to malloc failed (such shims are at best difficult, at worst impossible, to create, depending on the OS), you would have to test on the order of 2^N failure patterns, where N is the number of calls to malloc in your program.
Arguments 1 and 2 do not apply to embedded or mobile systems (yet!) but argument 3 is still valid there.
Argument 3 only applies to programs where allocation failures must be checked and propagated at every call site. If you are so lucky as to be using C++ as it is intended to be used (i.e. with exceptions) you can rely on the compiler to create the error recovery paths for you, so the testing burden is much reduced. And in any higher level language worth using nowadays you have both exceptions and a garbage collector, which means you couldn't worry about allocation failures even if you wanted to.
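On the testing point in the third argument, one partial workaround, only for allocations routed through your own wrapper (so it does not really contradict the claim that full shims are hard), is a failure-injection hook along these lines; the names and the countdown scheme here are invented for illustration:

#include <stdlib.h>

/* Hypothetical test-only allocator: after test_malloc_fail_after(n), the
 * (n+1)-th call through test_malloc() returns NULL, so a unit test can
 * exercise one failure path at a time. Library-internal allocations are
 * unaffected, which is exactly the limitation noted above. */
static long fail_after = -1;    /* -1 means "never inject a failure" */

void test_malloc_fail_after(long n)
{
    fail_after = n;
}

void *test_malloc(size_t size)
{
    if (fail_after == 0)
        return NULL;            /* injected failure */
    if (fail_after > 0)
        fail_after--;
    return malloc(size);
}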
I'd say it mostly implementation specific. Some implementations may be very likely to fail. Some may have other parts of the program fail before realloc will. Always be defensive and check if it does fail.
And remember to free the old pointer that you tried to realloc.
ptr = realloc(ptr, 10);
is ALWAYS a possible memory leak.
Always do it rather like this:
void *tmp = ptr;

if ((ptr = realloc(ptr, 10)) == NULL) {
    free(tmp);
    // handle error...
}
You have two questions.
The chances that malloc or realloc fail are negligible on most modern system. This only occurs when you run out of virtual memory. Your system will fail on accessing the memory and not on reserving it.
With respect to failure, realloc and malloc are almost equal. The only additional reason that realloc may fail is that you give it a bad argument, that is, memory that had not been allocated with malloc or realloc or that had previously been freed.
Edit: In view of R.'s comment. Yes, you may configure your system such that it will fail when you allocate. But first of all, AFAIK, this is not the default. It needs privileges to be configured in that way and as an application programmer this is nothing that you can count on. Second, even if you'd have a system that is configured in that way, this will only error out when your available swap space has been eaten up. Usually your machine will be unusable long before that: it will be doing mechanical computations on your harddisk (AKA swapping).
From zwol's answer:
Now, on modern desktop systems there is a strong case for not trying to recover from malloc failures, but instead wrapping malloc in a function (usually called xmalloc) that terminates the program immediately if malloc fails;
Naturally the same argument applies to realloc.
You can see that principle applied with Git 2.29 (Q4 2020): It was possible for xrealloc() to send a non-NULL pointer that has been freed, which has been fixed.
See commit 6479ea4 (02 Sep 2020) by Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit 56b891e, 03 Sep 2020)
xrealloc: do not reuse pointer freed by zero-length realloc()
Signed-off-by: Jeff King
This patch fixes a bug where xrealloc(ptr, 0) can double-free and corrupt the heap on some platforms (including at least glibc).
The C99 standard says of malloc (section 7.20.3):
If the size of the space requested is zero, the behavior is implementation-defined: either a null pointer is returned, or the behavior is as if the size were some nonzero value, except that the returned pointer shall not be used to access an object.
So we might get NULL back, or we might get an actual pointer (but we're not allowed to look at its contents).
To simplify our code, our xmalloc() handles a NULL return by converting it into a single-byte allocation.
That way callers get consistent behavior. This was done way back in 4e7a2eccc2 ("?alloc: do not return NULL when asked for zero bytes", 2005-12-29, Git v1.1.0 -- merge).
We also gave xcalloc() and xrealloc() the same treatment. And according to C99, that is fine; the text above is in a paragraph that applies to all three.
But what happens to the memory we passed to realloc() in such a case? I.e., if we do:
ret = realloc(ptr, 0);
and "ptr" is non-NULL, but we get NULL back: is "ptr" still valid?
C99 doesn't cover this case specifically, but says (section 7.20.3.4):
The realloc function deallocates the old object pointed to by ptr and returns a pointer to a new object that has the size specified by size.
So "ptr" is now deallocated, and we must only look at "ret".
And since "ret" is NULL, that means we have no allocated object at all. But that's not quite the whole story. It also says:
If memory for the new object cannot be allocated, the old object is not deallocated and its value is unchanged.
[...]
The realloc function returns a pointer to the new object (which may have the same value as a pointer to the old object), or a null pointer if the new object could not be allocated.
So if we see a NULL return with a non-zero size, we can expect that the original object is still valid.
But with a zero size, it's ambiguous. The NULL return might mean a failure (in which case the object is valid), or it might mean that we successfully allocated nothing, and used NULL to represent that.
The glibc manpage for realloc() explicitly says:
[...] if size is equal to zero, and ptr is not NULL, then the call is equivalent to free(ptr).
Likewise, this StackOverflow answer to "What does malloc(0) return?" claims that C89 gave similar guidance (but I don't have a copy to verify it).
A comment on this answer to "What's the point of malloc(0)?" claims that Microsoft's CRT behaves the same.
But our current "retry with 1 byte" code passes the original pointer again.
So on glibc, we effectively free() the pointer and then try to realloc() it again, which is undefined behavior.
The simplest fix here is to just pass "ret" (which we know to be NULL) to the follow-up realloc().
But that means that a system which doesn't free the original pointer would leak it. It's not clear if any such systems exist, and that interpretation of the standard seems unlikely (I'd expect a system that doesn't deallocate to simply return the original pointer in this case).
But it's easy enough to err on the safe side, and just never pass a zero size to realloc() at all.
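A hedged sketch of that approach (this is an illustration of the idea, not Git's actual xrealloc(); the error handling here is a stand-in for whatever fatal-error routine a project uses):

#include <stdio.h>
#include <stdlib.h>

/* Bump a zero size up to one byte before calling realloc(), so the
 * ambiguous "realloc(ptr, 0) returned NULL" case can never arise. */
void *xrealloc_sketch(void *ptr, size_t size)
{
    void *ret;

    if (size == 0)
        size = 1;
    ret = realloc(ptr, size);
    if (ret == NULL) {
        fprintf(stderr, "fatal: out of memory (realloc of %zu bytes)\n", size);
        exit(1);
    }
    return ret;
}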
