Freeing memory in C under Linux

If you do not free memory that you malloc'd in a C program under Linux, when is it released? After the program terminates? Or is the memory still locked up until an unforeseen time (probably at reboot)?

Memory allocated by malloc() is freed when the process ends. However, memory allocated using shmget() is not freed when the process ends; be careful with that.
Edit:
mmap() is not shmget(); read about the difference here: http://www.opengroup.org/onlinepubs/000095399/functions/mmap.html and http://www.opengroup.org/onlinepubs/009695399/functions/shmget.html
They are different system calls that do very different things.
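For concreteness, here is a minimal sketch of the shmget() caveat, assuming a Linux/System V IPC system; the key, size and permission bits are arbitrary illustrative values. The segment created here would outlive the process (visible in ipcs -m) unless it is removed with shmctl(IPC_RMID):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    /* create a 4 KiB System V shared memory segment (example values) */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id == -1) { perror("shmget"); return 1; }

    /* ... attach with shmat(), use the memory, detach with shmdt() ... */

    /* without this call, the segment persists after the process exits,
       until it is removed explicitly or the machine reboots */
    if (shmctl(id, IPC_RMID, NULL) == -1) { perror("shmctl"); return 1; }
    return 0;
}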

Yes, the memory is freed when the process terminates.

malloc()'ed memory is definitely freed when the process ends, but not by calling free(): the whole memory mapping (presented to the process as linear RAM) is simply deleted from the MMU tables.

In Linux (and most other Unixes), when you invoke a program from a command shell, a new process is created to run that program. All resources a process reserves, including heap memory, are released back to the OS when the process terminates.

Memory 'allocation' has two distinct meanings that are commonly conflated in questions like this:
memory is made available to the process by the sbrk system call
memory is assigned to some purpose within the context of the program by malloc()
The sbrk system call tells the kernel to make some more memory available in case the process needs it. The memory is not actually mapped into the process's address space until immediately before it is written to; this is called demand paging. Typically the right to access this memory is never revoked by the operating system until the process exits. If memory becomes scarce, kswapd (part of the kernel) will shuffle the least-used parts off to disk to make room. The kernel can also enforce a hard limit on the amount of memory in a process's working set, if you like :)
The second meaning is what you are talking about when you call malloc/free. malloc keeps a list of available memory and hands chunks out to your functions when requested. If it detects that it doesn't have enough memory on hand to meet a request, it calls sbrk to allow the process to access more.
When you look at the output of top you will see two numbers for a process's memory usage: the 'virtual size' and the 'resident size'. Virtual size is the total amount the process has requested access to; resident size is the amount that is actively being used by the process at the moment.
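To see the difference between those two numbers, here is a minimal sketch (my own illustration, assuming Linux and a tool like top or ps to watch the process): a large block is allocated but only partly written, so the virtual size reflects the whole request while the resident size grows only for the pages actually touched, thanks to demand paging.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t size = 512 * 1024 * 1024;          /* reserve 512 MiB of address space */
    char *p = malloc(size);
    if (p == NULL) { perror("malloc"); return 1; }

    memset(p, 0, 16 * 1024 * 1024);           /* touch only the first 16 MiB */

    printf("pid %d: compare VIRT and RES in top, then press Enter\n", (int)getpid());
    getchar();

    free(p);
    return 0;
}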

The memory is released, from the program's point of view, when it exits. It is not tied up in any way after that and can be reused by other processes.

Modern operating systems will release all memory allocated by a process when that process is terminated. The only situations where that might not be true would probably be very old operating systems (perhaps DOS?) and some embedded systems.

Related

How is memory layout shared with other processes/threads?

I'm currently learning memory layout in C. So far I know a C program's memory contains several sections: text, data, bss, heap and stack. Sources also say the heap is shared with other things beyond the program.
My questions are these.
What exactly is the heap shared with? One source states that the heap "must always be freed in order to make it available for other processes", whereas another says "the heap area is shared by all threads, shared libraries, and dynamically loaded modules in a process". If it is not shared with other processes, do I really have to free it while my program is running (not just at the end of it)?
Some sources also single out the high addresses (a sixth section) for command line arguments and environment variables. Should this be considered another section and a part of the program's memory?
Are the other sections shared with anything else beyond a program?
The heap is per-process memory: each process has its own heap, which is shared only within the same process space (for example between the process's threads, as you said). Why should you free it? Not really to give space to other processes (at least on modern OSes, where the process's memory is reclaimed by the OS when the process dies), but to prevent heap exhaustion within your own process: in C, if you don't deallocate the heap regions you have used, they remain marked as busy even when they are no longer needed. Thus, to prevent undesired errors, it's good practice to free memory in the heap as soon as you don't need it anymore.
In a C program the command line arguments are stored on the stack as function parameters of main. What happens is that the stack is usually allocated in the highest portion of a process's memory, which is mapped to the high addresses (this is probably why some sources point out what you wrote). But, generally speaking, there isn't any sixth memory area.
As said by the others, the text area can be shared by processes. This area usually contains the binary code, which would be the same for different processes running the same binary. For performance reasons, the OS can allow such a memory area to be shared (think, for example, of when you fork a child process).
Heap is shared with other processes in the sense that all processes use RAM: the more of it you use, the less is available to other programs. Heap sharing with other threads in your own program means that all your threads actually see and access the same heap (same virtual address space, same actual RAM, and with some luck also the same cache).
No.
text can be shared with other processes. These days it is marked as read-only, so having several processes share text makes sense. In practice this means that if you are already running top and you start another instance, it makes no sense to load the text part again; that would waste time and physical RAM. If the OS is smart enough, it can map those RAM pages into the virtual address space of both top instances, saving time and space.
On the official aspect:
The terms thread, process, text section, data section, bss, heap and stack are not even defined by the C language standard, and every platform is free to implement these components however it likes.
Threads and processes are typically implemented at the operating-system layer, while all the different memory sections are typically implemented at the compiler layer.
On the practical aspect:
For every given process, all these memory sections (text section, data section, bss, heap and stack) are shared by all the threads of that process.
Hence, it is under the responsibility of the programmer to ensure mutual-exclusion when accessing these memory sections from different threads.
Typically, this is achieved via synchronization utilities such as semaphores, mutexes and message queues.
In between processes, it is under the responsibility of the operating system to ensure mutual-exclusion.
Typically, this is achieved via virtual-memory abstraction, where each process runs inside its own logical address space, and each logical address space is mapped to a different physical address space.
Disclaimer: some would claim that each thread has its own stack, but technically speaking, those stacks are usually allocated consecutively on the stack of the process, and there's usually no one to prevent a thread from accessing the stacks of other threads, whether intentionally or by mistake (aka stack overflow).
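As a practical illustration of the point about threads sharing one heap, here is a minimal sketch assuming POSIX threads (compile with -pthread): both threads dereference the same heap pointer, so access is serialized with a mutex.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static long *counter;                      /* heap memory, visible to every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        (*counter)++;                      /* same heap address in both threads */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    counter = calloc(1, sizeof *counter);
    if (counter == NULL) return 1;
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", *counter);             /* 200000: one heap, shared by both threads */
    free(counter);
    return 0;
}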

Does terminating a program reclaim memory in the same way as free()?

I saw this answer to a Stack Overflow question which says that freeing memory at the very end of a C program is actually harmful, because it forces data that wouldn't be used again to be paged back into memory.
I'm confused about why calling free() in C would do anything different from the operating system reclaiming the heap at the end of the program.
Does anyone know if there is a real difference between free() and termination in terms of memory management, and if so, how the operating system may treat these two differently?
e.g.
would anything different happen between these two short programs?
#include <stdlib.h>

int main(void) {
    int *mem = malloc(1);
    return 0;
}

and

#include <stdlib.h>

int main(void) {
    int *mem = malloc(1);
    free(mem);
    return 0;
}
No, terminating a program, as with exit or abort, does not reclaim memory in the same way as free. Using free causes some activity that ultimately has no effect when the operating system discards the data maintained by malloc and free.
exit has some complications, as it does not immediately terminate the program. For now, let’s just consider the effect of immediately terminating the program and consider the complications later.
In a general-purpose multi-user operating system, when a process is terminated, the operating system releases the memory it was using to make it available for other purposes.1 In large part, this simply means the operating system does some accounting operations.
In contrast, when you call free, software inside the program runs, and it has to look up the size of the memory you are freeing and then insert information about that memory into the pool of memory it is maintaining. There could be thousands or tens of thousands (or more) of such allocations. A program that frees all its data may have to execute many thousands of calls to free. Yet, in the end, when the program exits, all of the changes produced by free will vanish, as the operating system will discard all the data about that pool of memory—all of the data is in memory pages the operating system does not preserve.
So, in this regard, the answer you link to is correct, calling free is a waste. And, as it points out, the necessity of going through all the data structures in the program to fetch the pointers in them so the memory they point to can be freed causes all those data structures to be read into memory if they had been swapped out to disk. For large programs, it can take a considerable amount of time and other resources.
On the other hand, it is not clear it is easy to avoid many calls to free. This is because releasing memory is not the only thing a terminating program has to clean up. A program may want to write final data to files or send final messages to network connections. Furthermore, a program may not have established all of this context directly. Most large programs rely on layers of software, and each software package may have set up its own context, and often no way is provided to tell other software "I want to exit now. Finish the valuable work, but skip all the freeing of memory." So all the desired clean-up tasks may be intertwined with the free-memory tasks, and there may be no good way to untangle them.
Software should generally be written so that nothing terrible happens if a program is suddenly aborted (since this can happen from a loss of power, not just deliberate user action). But even though a program might be able to tolerate an abort, there can still be value in a graceful exit.
Getting back to exit, calling the C exit routine does not exit the program immediately. Exit handlers (registered with atexit) are called, stream buffers are flushed, and streams are closed. Any software libraries you called may have set up their own exit handlers so that they can finish up when the program is exiting. So, if you want to be sure libraries you have used in your program are not calling free when you end the program, you have to call abort, not exit. But it is generally preferred to end a program gracefully, not by aborting. Calling abort will not call exit handlers, flush streams, close streams, or perform other wind-down code that exit does—data can be lost when a program calls abort.
Footnote
1 Releasing memory does not mean it is immediately available for other purposes. The specific result of this depends on each page of memory. For example:
If the memory is shared with other processes, it is still needed for them, so releasing it from use by this process only decrements the number of processes using the memory. It is not immediately available for any other use.
If the memory is not in use by any other processes but contains data mapped from a file on disk, the operating system might mark it as available when needed but leave it alone for the moment. This is because you might run the same program again, and it would be nice if the data were still in memory, so why not just leave it in place just in case? The data might even be used by a different program that uses the same file. (For example, many programs might use the same shared library.)
If the memory is not in use by any other processes and was just used by the program as a work area, not mapped from a file, then the system may mark it as immediately available and not containing anything useful.
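To make the exit/abort distinction above concrete, here is a small sketch (my own example, not tied to any particular program): the atexit handler runs and stdio buffers are flushed on exit(), but neither would happen if the program called abort() instead.

#include <stdio.h>
#include <stdlib.h>

static void handler(void) {
    fprintf(stderr, "atexit handler ran\n");
}

int main(void) {
    atexit(handler);
    printf("buffered output");   /* no newline: sits in the stdio buffer */

    exit(0);                     /* handler runs, buffer is flushed */
    /* abort();                     would skip the handler and the flush */
}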
would anything different happen between these two short programs?
The simple answer is: it makes no difference, the memory is released to the system in both cases. Calling free() is not strictly necessary and does incur an infinitesimal overhead but may prove useful when trying to track memory leaks in more complex programs.
Does terminating a program reclaim memory in the same way as free?
Not exactly:
Terminating a program releases the memory used by the program, be it for the program code, data, stack or heap. It also releases some other resources such as file handles, device handles, network sockets... All this is done efficiently, no matter how many blocks of memory have been allocated with malloc().
Conversely, free() makes the block of memory available for further use by the program, for later calls to malloc() or realloc(). Depending on its size and the implementation of the heap, this freed block may or may not be returned to the OS for use by other programs. Also worth noting is the fragmentation problem, where small blocks of freed memory may not be usable for a larger allocation because they are surrounded by allocated blocks. The C heap does not perform compaction or de-fragmentation; it merely coalesces adjacent free blocks. Freeing all allocated blocks before leaving the program may be useful for debugging purposes, but may be complicated and time consuming, while not necessary for the memory to be reused by the system after the program terminates.
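One way to observe this is the following rough, glibc-specific sketch (malloc_stats() is a glibc extension, not portable C): the allocator's in-use figure drops after the frees, but the memory obtained from the system largely stays with the process for reuse.

#include <malloc.h>
#include <stdlib.h>

int main(void) {
    void *blocks[1000];
    for (int i = 0; i < 1000; i++)
        blocks[i] = malloc(1024);

    malloc_stats();              /* "system bytes" and "in use bytes" both high */

    for (int i = 0; i < 1000; i++)
        free(blocks[i]);

    malloc_stats();              /* "in use bytes" drops; "system bytes" largely unchanged */
    return 0;
}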
free() is a user-level memory management function and depends on the malloc implementation you are currently using. The user-level allocator might maintain a linked list of memory chunks, and malloc/free will take a chunk of the appropriate size or put it back.
exit() destroys an address space and all its regions.
This is related to malloced heap as well as some other regions and in-kernel data structures used for managing address space of the process:
Each address space consists of a number of page-aligned regions of memory that are in use. They never overlap and represent a set of addresses which contain pages that are related to each other in terms of protection and purpose. These regions are represented by a struct vm_area_struct and are roughly analogous to the vm_map_entry struct in BSD. For clarity, a region may represent the process heap for use with malloc(), a memory mapped file such as a shared library or a block of anonymous memory allocated with mmap(). The pages for this region may still have to be allocated, be active and resident or have been paged out.
Reference: https://www.kernel.org/doc/gorman/html/understand/understand007.html
The reason well-designed programs free memory at exit is to check for memory leaks. If your application-level memory allocation does not go to zero after your last deallocation, you know that you have memory that is not being managed properly and probably have a memory leak in your code.
would anything different happen between these two short programs?
YES
I'm confused why the free() method in C would do anything different than the operating system reclaiming the heap at the end of the program.
The operating system allocates memory in pages. Heap managers (such as malloc/free implementations) allocate pages from the operating system and subdivide the pages into smaller allocations. Calls to free() normally return memory to the heap. They do not return the pages to the operating system.
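A rough way to watch this on Linux with glibc (a sketch only; sbrk(0) reports the current program break, and exact behavior varies by allocator and version): after freeing many small blocks, the break usually does not move back down, because the allocator keeps those pages for later requests.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    void *start = sbrk(0);                 /* program break before any allocation */

    void *blocks[10000];
    for (int i = 0; i < 10000; i++)
        blocks[i] = malloc(64);
    void *after_alloc = sbrk(0);

    for (int i = 0; i < 10000; i++)
        free(blocks[i]);
    void *after_free = sbrk(0);            /* typically close to after_alloc, not start */

    printf("break: start %p, after malloc %p, after free %p\n",
           start, after_alloc, after_free);
    return 0;
}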

Heap memory allocation

If I allocate memory dynamically in my program using malloc() but I don't free the memory during program runtime, will the dynamically allocated memory be freed after program terminates?
Or if it is not freed, and I execute the same program over and over again, will it allocate a different block of memory every time? If that is the case, how should I free that memory?
Note: one answer I could think of is rebooting the machine on which I am executing the program. But if I am executing the program on a remote machine and rebooting is not an option?
Short answer: Once your process terminates, any reasonable operating system is going to free all memory allocated by that process. So no, memory allocations will not accumulate when you re-start your process several times.
Process and memory management are typically a responsibility of the operating system, so whether allocated memory is freed or not after a process terminates is actually dependent on the operating system. Different operating systems can handle memory management differently.
That being said, any reasonable operating system (especially a multi-tasking one) is going to free all of the memory that a process allocated once that process terminates.
I assume the reason behind this is that an operating system has to be able to gracefully handle irregular situations:
malicious programs (e.g. those that don't free their memory intentionally, in the hope of affecting the system they run on)
abnormal program terminations (i.e. situations where a program ends unexpectedly and therefore might not get a chance to explicitly free its dynamically allocated memory itself)
Any operating system worth its salt has to be able to deal with such situations. It has to isolate other parts of the system (e.g. itself and other running processes) from a faulty process. If it did not, a process' memory leak would propagate to the system. Meaning that the OS would leak memory (which is usually considered a bug).
One way to protect the system from memory leaks is by ensuring that once a process ends, all the memory (and possibly other resources) that it used get freed.
Any memory a program allocates is freed when the program terminates, regardless of whether it was allocated statically or dynamically. The main exception is memory that is still shared with another process (for example, a forked child), which lives on as long as that other process uses it.
If you do not explicitly free any memory you malloc, it will stay allocated until the process is terminated.
Even if your OS does clean up on exit, the exit syscall is usually wrapped by a libc exit() function. Here is some pseudo code, derived from studying several libc implementations, to demonstrate what happens around main() that could cause a problem.
//unfortunately gcc has no builtin for the stack pointer, so we use assembly
#ifdef __x86_64__
#define STACK_POINTER "rsp"
#elif defined __i386__
#define STACK_POINTER "esp"
#elif defined __aarch64__
#define STACK_POINTER "sp" //the AArch64 stack pointer register is simply named sp
#elif defined __arm__
#define STACK_POINTER "r13"
#else
#define STACK_POINTER "sp" //most commonly used name on other arches
#endif
char **environ;
void exit(int);
int main(int, char **, char **);
_Noreturn void _start(void) {
    register long *sp __asm__( STACK_POINTER );
    //if you don't use argc, argv or envp/environ, just remove them
    long argc = *sp;
    char **argv = (char **)(sp + 1);
    environ = (char **)(sp + argc + 2); //skip argv's terminating NULL pointer
    //init routines for threads, dynamic linker, etc... go here
    exit(main((int)argc, argv, environ));
    __builtin_unreachable(); //or for(;;); to shut up compiler warnings
}
Notice that exit is called using the return value of main. On a static build without a dynamic linker or threads, exit() can be a directly inlined syscall(__NR_exit, main(...)); however, if your libc uses a wrapper for exit() that runs the *_fini() routines (most libc implementations do), there is still one function to call after main() returns.
A malicious program could LD_PRELOAD exit() or any of the routines it calls and turn it into a sort of zombie process that would never have its memory freed.
Even if you do free() before exit(), the process is still going to consume some memory (basically the size of the executable and, to some extent, the shared libraries that aren't used by other processes), but some operating systems can re-use the non-malloc()ed memory for subsequent loads of that same program, such that you could run for months without noticing the zombies.
FWIW, most libc implementations do have some kind of exit() wrapper with the exception of dietlibc (when built as a static library) and my partial, static-only libc.h that I've only posted on the Puppy Linux Forum.
If I allocate memory dynamically in my program using malloc() but I don't free the memory during program runtime, will the dynamically allocated memory be freed after program terminates?
The operating system will release the memory allocated through malloc so that it becomes available to the rest of the system.
This is much more complex than your question makes it sound, as the physical memory used by a process may be written to disk (paged out). But on both Windows and Unix-like systems (Linux, macOS, iOS, Android), the system will free the resources it has committed to the process.
Or if it is not freed, and I execute the same program over and over again, will it allocate a different block of memory every time? If that is the case, how should I free that memory?
Each launch of the program gets a new set of memory. This is taken from the system and provided as virtual addresses. Modern operating systems use address space layout randomization (ASLR) as a security feature, which means that the heap should appear at different addresses each time your program launches. But because the resources from earlier runs have been tidied up, there is no need to free that memory.
As you have noted, if there is no way for a subsequent run to track where it committed resources, how could it be expected to free them?
Also note that you can have multiple launches of your program running at the same time. The memory allocated may appear to overlap - each instance may see the same address allocated - but that is virtual memory: the operating system has set each process up independently, so they appear to use the same addresses while the RAM associated with each process is separate.
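A quick sketch of the ASLR point (my own example): print the address malloc() hands back; on a system with address space layout randomization enabled, the value usually differs from one run of the same binary to the next.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *p = malloc(100);
    printf("heap block at %p\n", p);   /* typically changes from run to run */
    free(p);
    return 0;
}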
Not freeing memory in a program will "work" on Windows and Unix, and probably on any other reasonable operating system.
Benefits of not freeing memory
The operating system keeps a list of large memory chunks allocated to the process, and also the malloc library keeps tables of small chunks of memory allocated to malloc.
By not freeing the memory, you save the work of accounting for these small lists when the process terminates. This is even recommended in some cases (e.g. MSDN: Service Control Handler suggests that SERVICE_CONTROL_SHUTDOWN should be handled by NOT freeing memory).
Disadvantages of not freeing memory
Programs such as valgrind and application verifier check for program correctness by monitoring the memory allocated to a process and reporting on leaks.
When you don't free the memory, these tools will report a lot of noise, making unintentional leaks difficult to find. That matters if you are leaking memory inside a loop, which would limit the size of the task your program can handle (a small example of such a loop leak appears below).
Several times in my career, I have converted a process to a shared object/DLL. These were problematic conversions, because leaks that had been expected to be handled by OS process termination started to survive beyond the life of "main".
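Here is a tiny sketch of the kind of loop leak mentioned above (the function and sizes are made up for illustration): each iteration allocates a buffer that is never freed, so memory use grows with the number of tasks, and a run such as valgrind --leak-check=full ./prog reports the blocks as definitely lost.

#include <stdlib.h>
#include <string.h>

static void process_task(int i) {
    char *buf = malloc(4096);
    if (buf == NULL) return;
    memset(buf, i & 0xff, 4096);
    /* ... use buf ... */
    /* missing free(buf): leaks 4 KiB on every call */
}

int main(void) {
    for (int i = 0; i < 100000; i++)
        process_task(i);
    return 0;
}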
As they say, the kernel is the brain of the operating system, and the operating system has several responsibilities.
Memory management is one function of the kernel.
Kernel has full access to the system's memory and must allow processes to safely access this memory as they require it.
Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address.
This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.
Memory Allocation
malloc
Allocate block of memory from heap
The Heap
The heap is a region of your computer's memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don't need it any more. If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won't be available to other processes).
Memory Leak
For Windows
A memory leak occurs when a process allocates memory from the paged or nonpaged pools, but does not free the memory. As a result, these limited pools of memory are depleted over time, causing Windows to slow down. If memory is completely depleted, failures may result.
Determining Whether a Leak Exists describes a technique you can use if you are not sure whether there is a memory leak on your system.
Finding a Kernel-Mode Memory Leak describes how to find a leak that is caused by a kernel-mode driver or component.
Finding a User-Mode Memory Leak describes how to find a leak that is caused by a user-mode driver or application.
Preventing Memory Leaks in Windows Applications
Memory leaks are a class of bugs where the application fails to release memory when no longer needed. Over time, memory leaks affect the performance of both the particular application as well as the operating system. A large leak might result in unacceptable response times due to excessive paging. Eventually the application as well as other parts of the operating system will experience failures.
Windows will free all memory allocated by the application on process termination, so short-running applications will not affect overall system performance significantly. However, leaks in long-running processes like services or even Explorer plug-ins can greatly impact system reliability and might force the user to reboot Windows in order to make the system usable again.
Applications can allocate memory on their behalf by multiple means. Each type of allocation can result in a leak if not freed after use. Here are some examples of common allocation patterns:
Heap memory via the HeapAlloc function or its C/C++ runtime equivalents malloc or new
Direct allocations from the operating system via the VirtualAlloc function
Kernel handles created via Kernel32 APIs such as CreateFile, CreateEvent, or CreateThread hold kernel memory on behalf of the application
GDI and USER handles created via User32 and Gdi32 APIs (by default, each process has a quota of 10,000 handles)
For Linux
memprof is a tool for profiling memory usage and finding memory leaks. It can generate a profile of how much memory was allocated by each function in your program. It can also scan memory and find blocks that you've allocated but are no longer referenced anywhere.
Memory allocated by malloc needs to be freed by the allocating program. If it is not, and memory keeps being allocated, at some point the program will exhaust the memory it is allowed to allocate, further allocations will fail, and on Linux the out-of-memory killer may even terminate the process. Every allocation made with malloc should be matched by a call to free.
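A minimal sketch of that discipline (illustrative values only): check every malloc() result, and pair each successful allocation with exactly one free().

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double *v = malloc(1000 * sizeof *v);
    if (v == NULL) {                       /* malloc reports failure by returning NULL */
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    for (int i = 0; i < 1000; i++)
        v[i] = i * 0.5;
    free(v);                               /* the matching free for the malloc above */
    return 0;
}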

Why does memory that is not freed cause a leak? (and some other questions)

As I understand it, all processes run within their own virtual address space. If a process calls malloc, the OS will allocate some region from the heap owned by the program and return an address which is a virtual address, not a real physical address. As the heap is owned by the program, why can't the OS reclaim memory that was not freed by the programmer?
Each program has its own virtual address space, so a program has no way to destroy data owned by another program. Am I right?
If a program accesses a random address owned by another program, a segmentation fault occurs. But why does no error occur when the program accesses an address it freed earlier?
The OS can't reclaim the memory not freed by a running program (a process) because it (the OS) doesn't know whether the process is still using that memory. This is exactly what the freeing act is there for - notifying the OS that the memory will not be used anymore. Of course, the OS can reclaim the memory of the program once it's terminated.
Yes, one process can't easily destroy (or even modify) another process's address space. Processes are isolated from one another (which is a good thing) and in order to make some interaction possible, a programmer has to resort to some means of interprocess communication (or IPC), e.g. shared memory, pipes, signals etc.
Actually, accessing a previously freed region of memory can lead to a program crash. But detecting this kind of error is usually not cheap for the OS and is therefore generally not done.
As the heap is owned by the program, why can't the OS reclaim memory that was not freed by the programmer?
Assuming virtual memory, as you do here, the OS will reclaim any memory not freed by the program when it terminates. For short-lived programs this is not a problem. However, not all systems have virtual memory (think embedded programming on unusual CPUs). Also, for programs that run for a long time, leaking memory while the program is running can be bad.
Each program has its own virtual address space, so a program has no way to destroy data owned by another program. Am I right?
Yes.
If a program accesses a random address owned by another program, a segmentation fault occurs. But why does no error occur when the program accesses an address it freed earlier?
This depends. Which OS you're running, which allocator you're using, where the block of memory was located and its size all matter in whether the actual memory is freed or not. In short, the OS allocates fixed-size pages of memory to your application. Your runtime library maps memory requested by malloc onto the pages served by the OS. A page will not be released back to the OS until all the memory blocks located within it are freed. Some allocators also hold on to pages once they've been allocated, on the assumption that you will need them again later. You only get segfaults for trying to access memory in a page that does not belong to your program, either because it was never allocated in the first place or because it has been released back to the OS.
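As a hedged illustration of why no error usually appears (results vary by allocator; this is deliberately broken code): reading through a pointer right after free() often does not fault, because the page is normally still mapped into the process, but it is undefined behavior all the same.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *p = malloc(32);
    if (p == NULL) return 1;
    strcpy(p, "hello");
    free(p);

    /* undefined behavior: p is dangling; it frequently "works" only because
       the page still belongs to the process, which is why such bugs hide */
    printf("%s\n", p);
    return 0;
}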

Can I cause memory leaks during debug of my program?

I'm developing on Ubuntu 9.10
I'm writing a C program, and during my tests and debugging I'm calling malloc but don't always remember to call free() - this is obviously just during debug.
I'm curious: am I eating up the free memory the system has with each debugging session? Or does the kernel clean up the process memory after I shut down my application from the IDE? Reasoning about it, I'm pretty sure the kernel knows about the whole process being killed and thus knows what memory it has allocated, so even though the application did not call free, the memory is still released.
I would appreciate an explanation.
Thank you,
Maxim.
Yes, the OS will reclaim all the memory allocated to your program when it stops running.
You are correct.
This is not a serious concern, because any 'leaked' memory is immediately freed once the program you're debugging is terminated.
Memory leaks are generally only a concern when a process is long running.
The kernel keeps a collection of process records within the kernel's memory and tracks, for each process, the amount of memory consumed and resources such as I/O, file handles and inodes. The process records are usually held in a queue which the kernel's task pointer cycles through endlessly (which explains the perception of 'multi-tasking': it switches in the blink of an eye - so fast that, in the eyes of the kernel, it is really doing single tasking). There is a field in the process record that tells how much memory is used by that process.
Yes, the kernel does take the memory back into its own pool, ready for consumption by another process. Furthermore, you are absolutely correct about memory leaks, as John Weldon pointed out above. I mentioned this in another posting: for every malloc there should be a free; if there isn't, you have a memory leak. So don't worry about your debugging session - the kernel has the responsibility to ensure the memory is reclaimed.
Some applications (especially daemons) must be debugged thoroughly so that no memory leaks occur in them, as a daemon will run for a long time before the next reboot. Incidentally, my favourite book, 'Expert C Programming: Deep C Secrets' by Peter van der Linden, mentions that at one stage at Sun there was a tool called 'printtool' for printing, and every so often the queue jammed up because of a memory leak in the print spooler program; a reboot of the Sun machine cured it. He describes that in detail in relation to memory leaks.
Hope this helps,
Best regards,
Tom.
A lot of old Unix apps leaked memory badly, and counted on the end-of-process cleanup. Even in the limited address space of those days, it generally worked reasonably well. This doesn't work with long-running apps, of course. I wouldn't worry about the effects of memory leaks when debugging (the leaks themselves are bugs, so you'll want to remove them before releasing).
What happens in Unix, and all other current OSs that I'm actually familiar with, is that the OS assigns memory to a process. malloc() grabs memory out of the process's memory, and if a request needs more than the process currently has, it asks the OS for more process memory. If the program has a memory leak, the process memory can grow as large as the system will allow, but it's all process memory, and all the allocations will go away when the process ends.
I understand there have been OSs that didn't recover memory from a terminated process, but I haven't seen one or worked on one. The only OSs for personal computers (as opposed to special purpose or enterprise computers) that significant numbers of people use are Windows and Unix variants, and those will release all memory at the end of the process.
