In my C program, I have many data structures that are dynamically allocating memory. If the user does any of the following:
Alt-F4
Clicks on the X
Force quits via Task Manager, etc
then my program will have a memory leak. How do I set up an on-close-request handler so I can clean up my global variables before my program is shut down unnaturally?
(Your terminology strongly suggests a Windows-specific application, so I have written this answer accordingly. On other modern operating systems the terminology is different, but the behavior is the same. Some embedded environments are exceptions, but if you were coding for those you would not have asked this question.)
First, the OS will automatically deallocate all memory associated with your process when it is terminated. You never have to worry about RAM remaining allocated.[1]
Second, you can get notified upon Alt-F4 / window close. I don't know the details, never having written a GUI application for Windows, but there should be some way you can arrange to get it treated the same as selecting Quit from the application's menus; your "toolkit" might even do this for you by default. The proper use of this notification is to save files to disk and cleanly shut down network dialogues. (The OS will also automatically close all file handles and disconnect all network sockets, but it won't flush unsaved changes for you, nor will it send application-layer goodbye messages.)
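(For reference, in a bare Win32 program that notification arrives as a WM_CLOSE message in the window procedure. The following is only a minimal sketch, assuming a conventional message loop; save_all_documents() is a hypothetical placeholder for whatever cleanup your application needs:)

#include <windows.h>

void save_all_documents(void);   /* hypothetical application function */

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CLOSE:               /* Alt-F4 or a click on the X */
        save_all_documents();    /* flush unsaved work, say goodbye on the network */
        DestroyWindow(hwnd);     /* then proceed with the normal shutdown */
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);      /* ends the message loop */
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}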
Third, you cannot receive a notification upon "force quit", and this is by design, because it's the software equivalent of the big red emergency stop button on an industrial robot -- it's for situations where making the process go away right now, no matter what, is more important than any potential data loss. As you can't do anything about this, you should not worry about it.
[1] As discussed in the comments: yes, this means it is not necessary to deallocate memory manually before exiting. In fact, it is more efficient if you don't, as discussed here and here. The "common wisdom" that manual deallocation is required dates to the early days of microcomputers; (some iterations of) CP/M and MS-DOS did indeed need each application to deallocate memory manually; nowadays it is an instance of cargo-cult programming.
That said, there are some good reasons to deallocate memory manually before exiting. The most obvious is that some leak-detection tools need you to do it, because they do not know the difference between "memory that the application could have freed if it had bothered" and "memory that the application has lost all valid pointers to, and cannot free anymore". A less obvious case is if you are writing a library, or there's a possibility that your application might get turned into a library in the future. Libraries need to be able to manually free all of their allocations upon request by the application, because the application might need to set up, use, and tear down the library many times in the lifetime of one process. (However, it will be more efficient if the application sets up the library once in its lifetime, uses it repeatedly, and then doesn't bother tearing it down before exiting.)
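(One common way to reconcile the two is to compile the full teardown in only for leak-checking builds. A minimal sketch; FULL_CLEANUP, setup(), do_work(), and teardown() are assumed names for this illustration, not anything standard:)

#include <stdlib.h>

/* Hypothetical application hooks, stubbed so the sketch compiles. */
static void setup(void)    { /* allocate the global structures */ }
static void do_work(void)  { /* the program proper */ }
static void teardown(void) { /* walk and free every allocation */ }

int main(void) {
    setup();
    do_work();
#ifdef FULL_CLEANUP   /* define only for Valgrind/leak-checker builds */
    teardown();
#endif
    return 0;         /* in normal builds, let the OS reclaim everything at once */
}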
You can register a function to be called at exit, with atexit. This should handle normal termination (by returning from main or a call to exit anywhere in the program) and Alt+F4 and clicking the X too. However, there is nothing your program can do when it is killed (force-quit in the Task Manager). Fortunately, most modern operating systems can and do clean up the memory used by your program, even when it dies of unnatural causes.
Here is an example of a program using atexit. The program creates (or truncates) a file called atexit.txt. When the program is terminated normally it writes Exited! to that file and closes the handle.
#include <stdlib.h>
#include <stdio.h>

FILE *quitFp;

/* Registered with atexit(); runs on normal termination. */
void onExit(void) {
    fputs("Exited!", quitFp);
    fclose(quitFp);
}

int main(void) {
    quitFp = fopen("atexit.txt", "w+");
    if (!quitFp) {
        return 1;
    }
    atexit(onExit);
    puts("Press enter to exit");
    getchar();
    return 0;
}
On Windows 8.1, compiled with MinGW/GCC it works for normal return and exit and when I click the X on the console window.
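(If you want explicit control over console-close events rather than relying on atexit, Windows also provides SetConsoleCtrlHandler; a minimal sketch:)

#include <windows.h>
#include <stdio.h>

/* Called by Windows on Ctrl+C, Ctrl+Break, or console close. */
static BOOL WINAPI onCtrl(DWORD ctrlType)
{
    if (ctrlType == CTRL_CLOSE_EVENT) {
        /* Windows grants a few seconds of cleanup time before terminating. */
        fputs("Console closed!\n", stderr);
        return TRUE;   /* handled */
    }
    return FALSE;      /* let the default handler run */
}

int main(void)
{
    SetConsoleCtrlHandler(onCtrl, TRUE);
    puts("Press enter to exit");
    getchar();
    return 0;
}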
Let's say I have the following C code:
#include <stdlib.h>

int main() {
    int *p = malloc(10 * sizeof *p);
    *p = 42;
    return 0; // Exiting without freeing the allocated memory
}
When I compile and execute that C program, i.e. after allocating some space in memory, will the memory I allocated still be allocated (i.e. basically taking up space) after I exit the application and the process terminates?
It depends on the operating system. The majority of modern (and all major) operating systems will free memory not freed by the program when it ends.
Relying on this is bad practice and it is better to free it explicitly. The issue isn't just that your code looks bad. You may decide you want to integrate your small program into a larger, long-running one. Then a while later you have to spend hours tracking down memory leaks.
Relying on a feature of an operating system also makes the code less portable.
In general, modern general-purpose operating systems do clean up after terminated processes. This is necessary because the alternative is for the system to lose resources over time and require rebooting due to programs which are poorly written or simply have rarely-occurring bugs that leak resources.
Having your program explicitly free its resources anyway can be good practice for various reasons, such as:
If you have additional resources that are not cleaned up by the OS on exit, such as temporary files or any kind of change to the state of an external resource, then you will need code to deal with all of those things on exit, and this is often elegantly combined with freeing memory.
If your program starts having a longer lifetime, then you will not want the only way to free memory to be to exit. For example, you might want to convert your program into a server (daemon) which keeps running while handling many requests for individual units of work, or your program might become a small part of a larger program.
However, here is a reason to skip freeing memory: efficient shutdown. For example, suppose your application contains a large cache in memory. If, when it exits, it goes through the entire cache structure and frees it one piece at a time, that serves no useful purpose and wastes resources. In particular, consider the case where the memory pages containing your cache have been swapped to disk by the operating system; by walking the structure and freeing it you're bringing all of those pages back into memory at once, wasting significant time and energy for no actual benefit, and possibly even causing other programs on the system to get swapped out!
As a related example, there are high-performance servers that work by creating a process for each request, then having it exit when done; by this means they don't even have to track memory allocation, and never do any freeing or garbage collection at all, since everything just vanishes back into the operating system's free memory at the end of the process. (The same kind of thing can be done within a process using a custom memory allocator, but requires very careful programming; essentially making one's own notion of “lightweight processes” within the OS process.)
My apologies for posting so long after the last post to this thread.
One additional point: not all programs make it to graceful exits. Crashes, Ctrl-C's, etc., will cause a program to exit in uncontrolled ways. If your OS did not free your heap, clean up your stack, delete static variables, etc., you would eventually crash your system from memory leaks or worse.
As an interesting aside, crashes/breaks in Ubuntu, and I suspect all other modern OSes, do have problems with "handled" resources. Sockets, files, devices, etc. can remain "open" when a program ends/crashes. It is also good practice to close anything with a "handle" or "descriptor" as part of your cleanup prior to a graceful exit.
I am currently developing a program that uses sockets heavily. When I get stuck in a hang I have to Ctrl-C out of it, thus stranding my sockets. I added a std::vector to collect a list of all opened sockets and a sigaction handler that catches SIGINT and SIGTERM. The handler walks the list and closes the sockets. I plan on making a similar cleanup routine for use before throws that will lead to premature termination.
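(A minimal C sketch of that idea, assuming a fixed-size descriptor table instead of a std::vector; note that only async-signal-safe functions such as close() and _exit() should be called from the handler:)

#include <signal.h>
#include <unistd.h>

#define MAX_SOCKETS 64
static volatile sig_atomic_t nsockets;
static int sockets[MAX_SOCKETS];   /* record each fd here as it is opened */

static void onSignal(int sig)
{
    (void)sig;
    for (int i = 0; i < nsockets; i++)
        close(sockets[i]);         /* close() is async-signal-safe */
    _exit(1);                      /* exit immediately, skipping atexit handlers */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = onSignal;
    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGTERM, &sa, NULL);
    /* ... open sockets, storing each fd in sockets[nsockets++] ... */
    return 0;
}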
Anyone care to comment on this design?
What's happening here (in a modern OS), is that your program runs inside its own "process." This is an operating system entity that is endowed with its own address space, file descriptors, etc. Your malloc calls are allocating memory from the "heap", or unallocated memory pages that are assigned to your process.
When your program ends, as in this example, all of the resources assigned to your process are simply recycled/torn down by the operating system. In the case of memory, all of the memory pages that are assigned to you are simply marked as "free" and recycled for the use of other processes. Pages are a lower-level concept than what malloc handles-- as a result, the specifics of malloc/free are all simply washed away as the whole thing gets cleaned up.
It's the moral equivalent of, when you're done using your laptop and want to give it to a friend, you don't bother to individually delete each file. You just format the hard drive.
All this said, as all other answerers are noting, relying on this is not good practice:
You should always be programming to take care of resources, and in C that means memory as well. You might end up embedding your code in a library, or it might end up running much longer than you expect.
Some OSs (older ones and maybe some modern embedded ones) may not maintain such hard process boundaries, and your allocations might affect others' address spaces.
Yes. The OS cleans up resources. Well ... old versions of NetWare didn't.
Edit: As San Jacinto pointed out, there are certainly systems (aside from NetWare) that do not do that. Even in throw-away programs, I try to make a habit of freeing all resources just to keep up the habit.
Yes, the operating system releases all memory when the process ends.
It depends; operating systems will usually clean it up for you, but if you're working on, for instance, embedded software then it might not be released.
Just make sure you free it; it can save you a lot of time later when you might want to integrate it into a large project.
That really depends on the operating system, but for all operating systems you'll ever encounter, the memory allocation will disappear when the process exits.
I think direct freeing is best. Undefined behaviour is the worst thing, so if you have access while it's still defined in your process, do it; there are lots of good reasons people have given for it.
As to where, or whether, I found that in W98, the real question was 'when' (I didn't see a post emphasising this). A small template program (for MIDI SysEx input, using various malloc'd spaces) would free memory in the WM_DESTROY bit of the WndProc, but when I transplanted this to a larger program it crashed on exit. I assumed this meant I was trying to free what the OS had already freed during a larger cleanup. If I did it on WM_CLOSE, then called DestroyWindow(), it all worked fine, instant clean exit.
While this isn't exactly the same as MIDI buffers, there is similarity in that it is best to keep the process intact, clean up fully, then exit. With modest memory chunks this is very fast. I found that many small buffers worked faster in operation and cleanup than fewer large ones.
Exceptions may exist, as someone said when avoiding hauling large memory chunks back out of a swap file on disk, but even that may be minimised by keeping more, and smaller, allocated spaces.
Suppose I have a program that decrypts a file and stores the decrypted contents on the heap. I want to protect this information from other (non-root) processes running on the same system, so before I call free() to release the heap allocation, I'm using memset() to overwrite the data and make it unavailable to the next process that uses the same physical memory. (I understand this isn't a concern on some systems, but would prefer to err on the side of safety.)
However, I'm not sure what to do in cases where the program doesn't terminate normally, either through a forced termination (SIGINT, SIGTERM, etc.) or due to an error condition (SIGSEGV, SIGBUS, etc.). Should I just trap as many signals as possible to clear the heap before exiting, or is there a more orderly way of doing things?
An operating system that leaks contents of memory between processes (especially with different privileges) would be so broken from a security point of view that you doing it yourself won't change anything. Especially since on most operating systems the memory pages that you write to can at any point be taken away from you, swapped out and given to someone else. So I can safely say that you don't need to worry about normal termination unless you're on an operating system so specialized that it doesn't have anyone to leak the memory to. Also, there are certain ways to kill your process without you having any ability to catch the killing signal, so you couldn't handle all the cases anyway.
When it comes to abnormal termination (SIGSEGV, etc.) your best bet is to either disable dumping cores or at least make sure that your core dumps are only readable by you. That should be the main worry, the physical memory won't leak, but your core dumps could be readable by someone else.
That being said, it's still a very good practice to wipe secrets from memory as soon as you don't need them anymore. Not because they can leak to others through normal operation, because they can't, but because they can leak out through bugs. You might have an exploitable bug, maybe you get a stray pointer you'll write to a log, maybe you'll leave your key on the stack and then forget to initialize your data, etc. So your main worry shouldn't be to wipe out secrets from memory before exit, but to actually identify the point in your code where you don't need a secret anymore and wipe it right then and there.
Unfortunately, the memset that you mentioned is not enough. Many compilers today are smart enough to recognize that some of your calls to memset are dead stores and optimize them away (like a memset of a stack buffer just before leaving a function, or just before free). See this issue in LibreSSL for a discussion about it, and this implementation of explicit_bzero for the currently best known attempt to work around it on clang and gcc.
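(Where explicit_bzero (BSDs, glibc 2.25 and later) or C11's memset_s isn't available, the usual workaround is to write through a volatile pointer so the compiler can't prove the stores are dead; a best-effort sketch:)

#include <stddef.h>

/* Best-effort scrub: the volatile qualifier keeps the compiler from
   treating these stores as dead and optimizing them away. Prefer
   explicit_bzero() or memset_s() where available. */
static void secure_zero(void *buf, size_t len)
{
    volatile unsigned char *p = buf;
    while (len--)
        *p++ = 0;
}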
I have a question concerning GLib.
I would like to use GLib in a server context, but I'm not sure how its memory is managed:
https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html
If any call to allocate memory fails, the application is terminated. This also means that there is no need to check if the call succeeded.
If I look at the source code, if g_malloc fails, it will call g_error:
g_error()
#define g_error(...)
A convenience function/macro to log an error message.
Error messages are always fatal, resulting in a call to abort() to terminate the application. [...]
But in my case, as I'm developing a server application, I don't want the application to exit; I would prefer that, like the traditional malloc function, the GLib functions return NULL or something else to indicate an error happened.
So, my question is: is there a way to handle out-of-memory conditions?
Is GLib not recommended for server applications?
If I look at the man page for abort I can see that I can handle the signal, but that would make the management of out-of-memory errors a little bit painful...
The abort() function causes abnormal program termination to occur, unless the signal SIGABRT is being caught and the signal handler does not return.
Thanks for your help!
It's very difficult to recover from lack of memory, because lack of memory can be considered a terminal state: it will persist for some time before it goes away. Even reacting to the lack of memory (like informing the user) might require more memory, for example, to build and send a message. A related problem is that there are operating systems (Linux at least) that may be overly optimistic about allocating memory. When the kernel realizes that memory is missing, it may kill the application, even if your code is handling the failures.
So either you have a much stricter grasp of your whole system than average, or you won't be able to successfully handle out of memory errors, and, in this case, it doesn't matter what the helper library is doing.
If you really want to control memory allocation while still using GLib, you have partial ways to do that. Don't use any GLib allocation functions; use ones from another library instead. GLib provides functions that accept a "free function" where necessary. For example:
https://developer.gnome.org/glib/2.31/glib-Hash-Tables.html#g-hash-table-new-full
The hash table constructor accepts functions for destroying both keys and values. In your case, the data will be allocated using custom allocation functions, while the hash data structures themselves will be allocated with GLib functions.
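(For example, something along these lines; the keys and values come from plain malloc/strdup, so the code that creates them can check for NULL itself:)

#include <glib.h>
#include <stdlib.h>

/* The table only needs to know how to free the externally allocated
   keys and values when entries are removed or the table is destroyed. */
GHashTable *make_table(void)
{
    return g_hash_table_new_full(g_str_hash, g_str_equal,
                                 free,    /* key destroy function */
                                 free);   /* value destroy function */
}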
Alternatively, you could use the g_try_* allocators, so you still use the GLib allocator but it won't abort on error. Again, this only partially solves the problem: internally, GLib will still call allocation functions that may abort, because it assumes they never return on error.
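(A sketch of the g_try_* style; frame_size and the surrounding request handler are hypothetical:)

#include <glib.h>

gboolean handle_request(gsize frame_size)
{
    /* g_try_malloc() returns NULL on failure instead of aborting. */
    gpointer buffer = g_try_malloc(frame_size);
    if (buffer == NULL)
        return FALSE;   /* shed load: reject this request, keep serving */
    /* ... decode into buffer ... */
    g_free(buffer);
    return TRUE;
}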
About the general question: does it make sense for a server to crash when it's out of memory ? The obvious answer is no, but I can't estimate how theoretical this answer is. I can only expect that the server system be properly sized for its operation and reject as invalid any input that could potentially exceed its capacities, and for that, it doesn't matter which libraries it might use.
I'm probably editorializing a bit here, but the modern tendency to use virtual/logical memory (both names have been used, although "logical" is more distinct) does dramatically complicate knowing when memory is exhausted, although I think one can restore the old, real-(RAM + swap) model (I'll call this the physical model) in Linux with the following in /etc/sysctl.d/10-no-overcommit.conf:
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
This restores the ability to have the philosophy that if a program's malloc just failed, that program has a good chance of having been the actual cause of memory exhaustion, and can then back away from the construction of the current object, freeing memory along the way, possibly grumbling at the user for having asked for something crazy that needed too much RAM, and awaiting the next request. In this model, most OOM conditions resolve almost instantly - the program either copes and presumably returns RAM, or gets killed immediately on the following SEGV when it tries to use the 0 returned by malloc.
With the virtual/logical memory models that Linux tends to default to in 2013, this doesn't work, since a program won't find out memory isn't available at malloc, but instead upon attempting to access memory later, at which point the kernel finally realizes there's nowhere in RAM for it. This amounts to disaster, since any program on the system can die, rather than the one that ran the host out of RAM. One can understand why some GLib folks don't even care about trying to fix this problem, because with the logical memory model, it can't be fixed.
The original point of logical memory was to allow huge programs using more than half the memory of the host to still be able to fork and exec supporting programs. It was typically enabled only on hosts with that particular usage pattern. Now in 2013 when a home workstation can have 24+ GiB of RAM, there's really no excuse to have logical memory enabled at all 99% of the time. It should probably be disabled by default on hosts with >4 GiB of RAM at boot.
Anyway. So if you want to take the old-school physical model approach, make sure your computer has it enabled, or there's no point to testing your malloc and realloc calls.
If you are in that model, remember that GLib wasn't really guided by the same philosophy (see http://code.google.com/p/chromium/issues/detail?id=51286#c27 for just how madly astray some of them are). Any library based on GLib might be infected with the same attitude as well. However, there may be some interesting things one can do with GLib in the physical memory model by emplacing your own memory handlers with g_mem_set_vtable(), since you might be able to poke around in program globals and reduce usage in a cache or the like to free up space, then retry the underlying malloc. However, that's limited in its own way by not knowing which object was under construction at the point your special handler is invoked.
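(A sketch of that idea, for GLib versions where g_mem_set_vtable() is still honored -- it was deprecated and became a no-op in GLib 2.46. drop_some_cache() is a hypothetical application hook, stubbed out here so the sketch compiles:)

#include <glib.h>
#include <stdlib.h>

/* Hypothetical hook: evict application caches, return nonzero if
   anything was actually freed. Stubbed out for the sketch. */
static int drop_some_cache(void) { return 0; }

static gpointer retry_malloc(gsize n)
{
    gpointer p = malloc(n);
    if (p == NULL && drop_some_cache())
        p = malloc(n);              /* one retry after shedding cache */
    return p;
}

static gpointer retry_realloc(gpointer mem, gsize n)
{
    gpointer p = realloc(mem, n);
    if (p == NULL && drop_some_cache())
        p = realloc(mem, n);
    return p;
}

static GMemVTable vtable = {
    retry_malloc, retry_realloc, free,
    NULL, NULL, NULL                /* optional calloc/try_* entries */
};

int main(int argc, char **argv)
{
    g_mem_set_vtable(&vtable);      /* must precede any other GLib call */
    /* ... */
    return 0;
}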
Recently, I have been working on a video player program on Windows for a CCTV system. As the program has to decode and play many video streams at the same time, I think it might meet the situation where malloc fails, so I added a check after every malloc.
But generally speaking, in the open source code I've read, I seldom find any checking of the result of malloc. So when malloc fails, most programs will just crash. Isn't that unacceptable?
My colleagues who write server programs on Linux will allocate enough memory for 100 client connections. So although the program might refuse the 101st client, it will never see a malloc failure. Is this approach also suitable for desktop applications?
On Linux, malloc() will essentially never fail with the default settings -- instead, the OOM killer will be triggered and begin killing random processes until the system falls over. Since Linux is the most popular UNIX derivative in use today, many developers have learned to just never check the result of malloc(). That's probably why your colleagues ignore malloc() failures.
On OSes which support failures, I've seen two general patterns:
Write a custom procedure which checks the result of malloc(), and calls abort() if allocation failed. For example, the GLib and GTK+ libraries use this approach.
Store a global list of "purge-able" allocations, such as caches, which can be cleared in the event of allocation failure. Then, try the allocation again, and if it still fails, report it via the standard error reporting mechanisms (which do not perform dynamic allocation).
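(A sketch of the second pattern; purge_caches() is a hypothetical function that would walk the global purge list:)

#include <stdlib.h>

/* Hypothetical hook: evict purge-able allocations (caches etc.),
   stubbed out here so the sketch compiles. */
static void purge_caches(void) { /* walk the global purge list, free entries */ }

/* Try the allocation; on failure, shed caches and try once more. */
static void *alloc_with_purge(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        purge_caches();
        p = malloc(n);      /* may still be NULL; the caller must check */
    }
    return p;
}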
Follow the standardized API
Even on Linux, ulimit can be used to get a prompt malloc error return. It's just that it defaults to unlimited.
There is a definite pressure to conform to published standards. On most systems, in the long run, and eventually even on Linux, malloc(3) will return a correct indication of failure. It is true that desktop systems have virtual memory and demand paging, but even then not checking malloc(3) only works in a debugged program with no memory leaks. If anything goes wrong, someone will want to set a ulimit and track it down. Suddenly, the malloc check makes sense.
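(For testing those failure paths programmatically, setrlimit is the in-process counterpart of ulimit; a Linux sketch, with an arbitrary 512 MiB cap:)

#include <sys/resource.h>

/* Cap the address space so malloc() reports failure promptly instead
   of relying on overcommitted virtual memory. Returns 0 on success. */
static int cap_memory(void)
{
    struct rlimit lim;
    lim.rlim_cur = 512UL * 1024 * 1024;
    lim.rlim_max = 512UL * 1024 * 1024;
    return setrlimit(RLIMIT_AS, &lim);
}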
To use the result of malloc without checking for NULL is unacceptable in code that might be used on platforms where malloc can fail; on those it will tend to result in crashes and unpredictable behaviour. I can't foresee the future and don't know where my code will go, so I would write code that checks for malloc returning NULL -- better to die than behave unpredictably!
Strategies for what to do if malloc fails depend upon the kind of application and how much confidence you have in the libraries you are using. In some situations the only safe thing to do is halt the whole program.
The idea of preallocating a known quota of memory and parcelling it out in chunks, hence steering clear of actually exhausting the memory, is a good one, if your application's memory usage is predictable. You can extend this to writing your own memory-management routines for use by your code.
It depends on the type of application that you are working on. If the application does work that is divided into discrete tasks where an individual task can be allowed to fail, then checking memory allocations can be recovered from gracefully.
But in many cases, the only reasonable way to respond to a malloc failure is by terminating the program. Allowing your code to just crash on the inevitable null dereference will achieve that. It would certainly always be better to dump a log entry or error message explaining the error, but in the real world we work on limited schedules. Sometimes the return on investment of pedantic error handling isn't there.
Always check, and pre-allocate a buffer that can be freed in this case so you can warn the user to save his data and shut down the application.
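(A sketch of that reserve pattern; the 64 KiB size and the rescue message are placeholders:)

#include <stdio.h>
#include <stdlib.h>

static void *rainy_day_fund;     /* pre-allocated at startup */

void init_reserve(void) {
    rainy_day_fund = malloc(64 * 1024);
}

void handle_oom(void) {
    free(rainy_day_fund);        /* release the reserve so the warning/ */
    rainy_day_fund = NULL;       /* save path has some memory to work with */
    fputs("Out of memory: save your work and restart.\n", stderr);
    /* ... prompt the user to save, flush state to disk ... */
    exit(1);
}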
It depends on the app you write. Of course you always need to check the return value of malloc(). However, handling OOM gracefully only makes sense in very few cases, such as low-level crucial system services, or when writing a library that might be used by them. Having a malloc wrapper that aborts on OOM is hence very common in many apps and frameworks. Often those wrappers are named xmalloc() or similar.
GLib's g_malloc() aborts, too.
If you are going to handle huge amounts of memory, and want to make statements to Linux like "now I have memory area ABC and I don't need the B piece, do as you wish", have a look at the mmap()/madvise() family of functions available in the stock GNU C library. Depending on your usage patterns, the code can end up even simpler than using malloc. This API can also be used to help Linux not waste memory by caching files you are going to read/write only once.
They are nicely documented in the GNU libc info documentation.
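(A sketch of the "I don't need the B piece" statement, with made-up sizes and offsets:)

#define _DEFAULT_SOURCE          /* for MAP_ANONYMOUS on glibc */
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)1 << 30;           /* a 1 GiB anonymous mapping */
    char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (area == MAP_FAILED)
        return 1;

    /* ... use the region ... */

    /* Tell the kernel the middle half is no longer needed; the pages
       are dropped and read back as zeros if ever touched again. */
    madvise(area + len / 4, len / 2, MADV_DONTNEED);

    munmap(area, len);
    return 0;
}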
It is usually impossible for a program to handle running out of memory. What are you going to do? Open a file and log something? If you try to allocate a large block and it fails, you may have a fallback and try again with a smaller buffer, but if you fail to allocate 10 bytes, there is not much you can do. And checking for NULL constantly convolutes the code. For that reason I usually add a custom function that does the checking and aborts on failure:
#include <stdlib.h>

/* Allocate-or-die wrapper: abort on failure so every call site can
   assume a valid pointer. */
static void *xmalloc(size_t sz) {
    void *p = malloc(sz);
    if (!p)
        abort();
    return p;
}