Memory leak in infinite loop - C

What damage is done to your computer if you run a program that continuously leaks memory?
For example:
while (1)
{
    char *c = malloc(sizeof(char)); /* allocate a single byte */
    c = NULL;                       /* overwrite the only pointer to it: that byte is now leaked */
}
then let that code execute for hours or days?

You probably wouldn't get the chance to run it for a day. Free main memory will quickly fall below a threshold at which the system stops your program. In most cases the operating system will kill the process, and until it does, the system will run slowly. The worst part is that the leaked memory cannot be reused, because the reference to it has been lost.
Note: the leaked memory is not permanently lost. After the program terminates, the system reclaims the physical memory it used; the hard disk drive is not affected.

Related

C potential memory leak caused by abnormally terminating program

Windows and Linux.
When I allocate memory in my C program, good practice says I should free it before the end of my program.
Assume the following:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    char *f_name;
    FILE *i_file;

    f_name = malloc(some_amount);
    /* put something in f_name */
    i_file = fopen(f_name, "r");
    if (i_file == NULL) {
        perror("Error opening file.");
        exit(EXIT_FAILURE);
    }
    free(f_name);
    return EXIT_SUCCESS;
}
If the program terminates before I call "free", will the OS recover any un-freed memory when the program exits? Or will I just nibble away at my 3 GB or so of available memory until a system restart?
Thanks, Mark.
You don't have to worry about it on popular OSes like Windows and Linux.
Virtual memory ceases to exist when the process terminates. So it's not possible for it to leak after a process terminates.
Physical memory always belongs to the OS to allocate as it pleases, whether or not your process is still running. (Unless you lock your allocations, in which case the lock disappears when the corresponding virtual memory mapping is destroyed, which happens on process termination anyway.)
There are a few resources that are not cleaned up (like some types of shared memory) but that's pretty exotic.
When you call malloc, typically just backing store (essentially RAM+swap) is reserved and a virtual memory mapping (which is essentially free) is created. When you first write to that virtual memory mapping, physical pages of memory (RAM) are mapped into it to "back" it. That RAM always belongs to the OS to use as it pleases and the OS will re-use the RAM for other purposes if it deems that wise.
When a process terminates, its address space ceases to exist. That means any virtual memory mappings or reservations go away. Unshared physical pages will have their use count drop to zero when the virtual memory mapping goes away, rendering those pages of physical memory free.
It's worth understanding this in detail because you can easily draw the wrong conclusions about edge cases if you don't understand what's going on under the hood. Also, this will give you a framework to plug concepts such as file mappings, memory overcommit, and shared memory into.
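To see that first-touch behaviour in action, here is a minimal Linux-only sketch (it assumes /proc/self/statm is readable and that the allocation is large enough for the allocator to request fresh pages from the OS): the resident-page count barely moves at malloc time and only grows once the memory is written.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read this process's resident-page count from /proc/self/statm (Linux-specific). */
static long resident_pages(void)
{
    long total = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f != NULL) {
        fscanf(f, "%ld %ld", &total, &resident);
        fclose(f);
    }
    return resident;
}

int main(void)
{
    size_t len = 100u * 1024 * 1024;     /* 100 MB */
    printf("before malloc: %ld resident pages\n", resident_pages());
    char *p = malloc(len);               /* mapping reserved; almost no RAM used yet */
    printf("after malloc:  %ld resident pages\n", resident_pages());
    if (p != NULL)
        memset(p, 1, len);               /* first write: physical pages are mapped in to back it */
    printf("after memset:  %ld resident pages\n", resident_pages());
    free(p);
    return 0;
}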
The memory is reclaimed by the OS.
Some programs (like web servers) are never meant to exit; they just keep running and serving requests, so memory they allocate technically never needs to be returned.
In your example, taking this branch would indeed cause memory to leak:
f_name = malloc(some_amount);
/* put something in f_name */
i_file = fopen(f_name, "r");
if (i_file == NULL) {
    perror("Error opening file.");
    exit(EXIT_FAILURE); // <--- f_name leaks here!
}
It's just a one-off leak which doesn't recur throughout the life of the program, and common OSes will clean up leaked memory upon termination. It'd be unlikely to impact system-wide performance; however, it would be flagged by tools such as Valgrind, so for debugging purposes it'd be wise to free(f_name); before you exit(EXIT_FAILURE); in that branch.
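For instance, the failure branch could release the buffer just before bailing out:
if (i_file == NULL) {
    perror("Error opening file.");
    free(f_name);           /* plug the one-off leak before exiting */
    exit(EXIT_FAILURE);
}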
That termination isn't considered abnormal, as it's caused by a call to exit. Nonetheless, an abnormal termination, caused by a call to abort or by a signal being raised, would only compound this leak.
will the OS recover any un-freed memory when the program exits?
There's no requirement within the C standard that an OS exist, let alone that it recover un-freed memory. It's possible that some minimalist OSes might not clean up after you. Similarly, if your program runs within a scripting environment (e.g. CGI), or it's etched into a chip (in which case you probably wouldn't want your program terminating anyway), then you might have issues later on.
Indeed, a program's memory allocations are freed automatically when the process terminates. However, if an exception is thrown before the free() or delete call inside a function, the memory will not be freed unless it is owned by a smart pointer or similar object. One recommended way (in C++) to have allocated memory freed automatically is std::shared_ptr, as follows:
void BadFunction() {
    char *someMemory = (char *)malloc(1024);
    DoSomethingThatMakesAnException();
    free(someMemory); // This never gets called!
}

void GoodFunction() {
    // The second argument tells shared_ptr to release the buffer with free(),
    // matching the malloc() above.
    std::shared_ptr<char> someMemory((char *)malloc(1024), free);
    DoSomethingThatMakesAnException();
    // someMemory is freed automatically, even on exception!
}
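In plain C, which has no destructors or exceptions, a rough equivalent (a sketch, not taken from the answers above; DoSomethingThatMightFail is a hypothetical stand-in for work that can fail) is to funnel every exit path through one cleanup point:
void GoodCFunction(void)
{
    char *someMemory = malloc(1024);
    if (someMemory == NULL)
        return;
    if (DoSomethingThatMightFail() != 0)   /* hypothetical helper that reports failure */
        goto cleanup;
    /* ... use someMemory ... */
cleanup:
    free(someMemory);                      /* every path releases the buffer exactly once */
}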

C malloc and free

I was taught that if you do malloc(), but you don't free(), the memory will stay taken until a restart happens. Well, I of course tested it. A very simple code:
#include <stdlib.h>

int main(void)
{
    while (1) malloc(1000);
}
And I watched over it in Task Manager (Windows 8.1).
Well, the program took up 2037.4 MB really quickly and just stayed like that. I understand it's probably Windows limiting the program.
But here is the weird part: When I closed the console, the memory use percentage went down, even though I was taught that it isn't supposed to!
Is it redundant to call free, since the operating system frees it up anyway?
(The question over here is related, but doesn't quite answer whether I should free or not.)
On Windows, a 32-bit process can by default only allocate about 2048 megabytes, because that's the size of the user-mode portion of its address space (the rest is reserved for the kernel). Some of that space is already used by the executable, DLLs and the runtime, so the usable figure is lower. malloc returns a null pointer when it fails, which is likely what happens at that point. You could modify your program like this to see that:
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    int counter = 0;
    while (1) {
        counter++;
        if (malloc(1000) == NULL) {
            printf("malloc failed after %d calls\n", counter);
            return 0;
        }
    }
}
Now you should get output like this:
$ ./mem
malloc failed after 3921373 calls
When a process terminates or when it is terminated from the outside (as you do by killing it through the task manager), all memory allocated by the process is released. The operating system manages what memory belongs to what process and can therefore free the memory of a process when it terminates. The operating system does not however know how each process uses the memory it requested from the operating system and depends on the process telling it when it doesn't need a chunk of memory anymore.
Why do you need free() then? Well, that cleanup only happens on program termination and does not discriminate between memory you still need and memory you don't need any more. When your process is doing complicated things, it is often constantly allocating and releasing memory for its own computations. It's important to release memory explicitly with free(), because otherwise your process might at some point no longer be able to allocate new memory and crash. It's also good programming practice to release memory when you can, so your process does not unnecessarily eat up tons of memory; instead, other processes can use it.
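As a sketch of that pattern (handle_request is a hypothetical stand-in for a server's real work), a long-running loop that allocates scratch space per iteration and frees it again stays flat no matter how long it runs:
#include <stdlib.h>
#include <string.h>

static void handle_request(void)
{
    char *buf = malloc(64 * 1024);   /* scratch space for this request only */
    if (buf == NULL)
        return;                      /* allocation failed; skip this request */
    memset(buf, 0, 64 * 1024);       /* ... stand-in for the real work ... */
    free(buf);                       /* hand it back so later requests can allocate */
}

int main(void)
{
    for (;;)
        handle_request();            /* memory use stays bounded indefinitely */
}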
It is advisable to call free once you are done with memory you have allocated, because you may need that space later in your program, and it will be a problem if there is no room left for new allocations.
You should also aim for portability in your code: even if Windows frees this space at exit, other operating systems may not.
Every process in the operating system has a limited amount of addressable memory, called the process address space. If you allocate a huge amount of memory and end up exhausting the address space available to the process, malloc will fail and return NULL, and you will not be able to allocate any more memory for that process.
With all non-trivial OSes, process resources are reclaimed by the OS upon process termination.
Unless there is a specific and overriding reason to explicitly free memory upon termination, you don't need to do it, and you should not try, for at least these reasons:
1) You would need to write code to do it, and test it, and debug it. Why do this, if the OS can do it? It's not just redundant; it reduces quality, because your explicit resource-releasing will never get as much testing as the OS has already had before it was released.
2) With a complex app, with a GUI and many subsystems and threads, cleanly freeing memory on shutdown is nigh-on impossible anyway, which leads to:
3) Many library developers have already given up on the 'you must explicitly release blah... ' mantra because the complexity would result in the libs never being released. Many report unreleased (but not lost) memory to Valgrind and, with opaque libs, you can do nothing at all about it.
4) You must not free any memory that is in use by a running thread. To safely release all such memory in multithreaded apps, you must ensure that all process threads are stopped and cannot run again. User code does not have the tools to do this; the OS does. It is therefore not possible to explicitly free memory from user code in any kind of safe manner in such apps.
5) The OS can free the process memory in big chunks - much more quickly than messing around with dozens of sub-allocations in the C memory manager.
6) If the process is being terminated because it has failed due to memory management issues, calling free() many more times is not going to help at all.
7) Many teachers and profs say that you must explicitly free the memory, so it's obviously a bad plan.

Improve heap memory usage reporting in Windows application

Using the following code, I have observed that increases in memory usage are registered fairly quickly in the members of PROCESS_MEMORY_COUNTERS, but as memory is freed in a process that continues to run, the decrease in memory usage does not seem to register:
#include <windows.h>
#include <psapi.h>

SIZE_T GetMemUsage(void) /* PeakWorkingSetSize is a SIZE_T, not a time_t */
{
    HANDLE hProcess;
    PROCESS_MEMORY_COUNTERS pmc;
    DWORD processID = GetCurrentProcessId();

    hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                           FALSE, processID);
    GetProcessMemoryInfo(hProcess, &pmc, sizeof(pmc));
    CloseHandle(hProcess);
    return pmc.PeakWorkingSetSize;
}
Using Task Manager, I can see almost immediately a physical memory size change (smaller) after terminating a process, but freeing memory as a process runs does not register. (Even after a substantial delay before calling the above function.)
I am not sure if my observations are due to the way free() works. (i.e. maybe it does not notify the OS memory is freed), or if Windows is just slow in registering it to the PROCESS_MEMORY_COUNTERS struct.
In any case, is anyone aware of a better technique to obtain more timely and accurate reports of actual memory usage within (or for) a running process?
I have also tried looking at pmc.WorkingSetSize and pmc.PagefileUsage with same results.
There are several things going on here.
Memory managers tend to keep memory around after you've freed it. The theory is that you're likely to need more memory soon, and giving you back memory you've used recently is cheaper than getting more memory from the operating system. Thus, from the operating system's perspective, your program is still using memory you've freed.
Working set size is an indirect measure of the memory your program is using. The operating system determines the working set size based on memory demand from your program and based on memory pressure from everything else running on the system. As your program starts to use more memory, the operating system will (typically) quickly grow the process's working set to accommodate it. If your program releases a bunch of memory back to the operating system, that doesn't necessarily mean the OS will trim the process's working set size. In fact, if there's plenty of RAM available, it probably won't bother. But if lots of other processes need memory, and you start to reach the capacity of your RAM, then the OS will start trimming the working set of processes that have more than they currently need.
It looks like your code sample is reporting peak working set size rather than the current working set size. The peak tells you the largest working set size the process reached during its lifetime, even if the current working set is much smaller.
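If the goal is the current footprint rather than the lifetime peak, a minimal sketch along the same lines (it assumes the same psapi call; PrintMemUsage is just an illustrative name) can report WorkingSetSize and, via the extended PROCESS_MEMORY_COUNTERS_EX structure, PrivateUsage, which tracks committed private bytes rather than resident pages:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* Print this process's current working set and committed private bytes. */
static void PrintMemUsage(void)
{
    PROCESS_MEMORY_COUNTERS_EX pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc))) {
        printf("working set: %lu KB, private bytes: %lu KB\n",
               (unsigned long)(pmc.WorkingSetSize / 1024),
               (unsigned long)(pmc.PrivateUsage / 1024));
    }
}
Keep in mind that, for the allocator-caching reason described above, neither counter is guaranteed to drop the instant free() is called; memory the CRT heap keeps around for reuse still counts as committed.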

How many times will this loop run?

Question asked in an interview:
while (1)
{
    void *a = malloc(1024 * 1024);
}
How many times will this loop run with 2 GB of RAM, and with 8 GB of RAM?
I said it's an infinite loop, because there is no terminating condition even when memory is full.
He didn't agree. I don't have any idea now. Please help.
It should run indefinitely. On most platforms, when there's no more memory available, malloc() will return 0, so the loop will keep running without changing the amount of memory allocated. Linux, however, allows memory overcommitment, so malloc() calls continue to add to virtual memory. The process might eventually get killed by the OOM killer when the data malloc() uses to administer the memory starts to cause problems (it won't be because you use the allocated memory itself, since the code never touches it), but Linux isn't stipulated as the platform in the question.

malloc() inside an infinite loop

I got an interview question: what happens when we allocate a large chunk of memory using malloc() inside an infinite loop and don't free() it?
I thought that checking for NULL would catch the case where there is not enough memory on the heap, and that it would break the loop. But that didn't happen, and the program terminated abnormally, printing Killed.
Why is this happening, and why doesn't it execute the if part when there is no memory to allocate (I mean, when malloc() fails)? What behaviour is this?
My code is :
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *px;
    while (1)
    {
        px = malloc(sizeof(int) * 1024 * 1024);
        if (px == NULL)
        {
            printf("Heap Full .. Cannot allocate memory \n");
            break;
        }
        else
            printf("Allocated \t");
    }
    return 0;
}
EDIT: gcc 4.5.2 (Linux - Ubuntu 11.04)
If you're running on Linux, keep an eye on the first terminal. It will show something like:
OOM error - killing proc 1100
OOM means out of memory.
I think it's also visible in dmesg and/or /var/log/messages and/or /var/log/syslog, depending on the Linux distro. You can grep with:
grep -i oom /var/log/*
You could make your program grab memory slowly, and keep an eye on:
watch free -m
You'll see the available swap go down and down. When it gets close to nothing Linux will kill your program and the amount of free memory will go up again.
This is a great link for interpreting the output of free -m: http://www.linuxatemyram.com/
This behaviour can be a problem with apps that are started by init or some other supervision mechanism like 'god': you can get into a loop where Linux kills the app and init (or whatever) starts it up again. If the amount of memory needed is much bigger than the available RAM, it can cause slowness through swapping memory pages to disk.
In some cases Linux doesn't kill the program that's causing the trouble but some other process. If it kills init, for example, the machine will reboot.
In the worst cases, a program or group of processes will request a lot of memory (more than is available in RAM) and attempt to access it repeatedly. Linux has nowhere fast to put that memory, so it has to swap some page of RAM out to disk (the swap partition) and load the page being accessed back from disk so the program can see/edit it.
This happens over and over again, every millisecond. As disk is thousands of times slower than RAM, this problem can grind the machine to a practical halt.
The behaviour depends on the ulimits - see http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm
If you have a limit on memory use, you'll see the expected NULL return behaviour; on the other hand, if you are not limited, you might see the OOM-killer behaviour that you saw, etc.
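For example, the same loop with an address-space limit set up front (a Linux-specific sketch using setrlimit with RLIMIT_AS; the 256 MB cap is arbitrary) hits the NULL branch instead of waking the OOM killer:
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap this process's address space at roughly 256 MB so that
       malloc really does fail instead of overcommitting. */
    struct rlimit lim = { 256 * 1024 * 1024, 256 * 1024 * 1024 };
    setrlimit(RLIMIT_AS, &lim);

    while (1) {
        if (malloc(sizeof(int) * 1024 * 1024) == NULL) {
            printf("Heap Full .. Cannot allocate memory \n");
            break;
        }
    }
    return 0;
}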
But it didn't happen, and the program terminated abnormally, printing Killed.
Keep in mind, you are not alone on the system. In this case, you were killed by the Out Of Memory (OOM) killer: it saw your process hogging the system's memory and took steps to stop that.
Why is this happening, and why doesn't it execute the if part when there is no memory to allocate (I mean, when malloc() fails)? What behaviour is this?
Well, there's no reason to believe that the if check wasn't run. Check out the man page for malloc():
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. In case it turns out that the system is out of memory, one or more processes will be killed by the OOM killer.
So you may think you "protected" yourself from an out-of-memory condition with a NULL check; in reality it only means that if you got back a NULL, you wouldn't have dereferenced it. It says nothing about whether you actually got the memory you requested.
