How many times will this loop run? - c

Interview question:
while (1)
{
    void *a = malloc(1024*1024);
}
How many times will this loop run on a machine with 2 GB of RAM and on one with 8 GB of RAM?
I said it is an infinite loop because there is no terminating condition, even once memory is full.
He didn't agree. I don't have any idea now. Please help.

It should run indefinitely. On most platforms, when there is no more memory available, malloc() will return NULL, so the loop will keep on running without increasing the amount of memory allocated. Linux allows memory over-commitment, so malloc() calls there continue to add to virtual memory. The process might eventually get killed by the OOM killer when the bookkeeping data that malloc() uses to administer the memory starts to cause problems (it won't be because of the allocated memory itself, since the code never touches it), but Linux isn't stipulated as the platform in the question.
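To make the distinction concrete, here is a minimal sketch (assuming a Linux-like system with over-commitment; the 1 MiB block size is taken from the question). If the loop only allocates, malloc() may keep succeeding far beyond physical RAM; if each block is actually written to, the pages must be backed by real memory and the process is likely to be killed instead of ever seeing NULL:

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    size_t count = 0;
    while (1) {
        void *a = malloc(1024 * 1024);        /* 1 MiB, as in the question */
        if (a == NULL) {                      /* allocator finally gave up */
            printf("malloc returned NULL after %zu MiB\n", count);
            return 0;
        }
        memset(a, 0xff, 1024 * 1024);         /* touch the pages so they must
                                                 really be committed; on Linux
                                                 this path usually ends with
                                                 the OOM killer, not NULL */
        count++;
    }
}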

Related

C malloc and free

I was taught that if you do malloc(), but you don't free(), the memory will stay taken until a restart happens. Well, I of course tested it. A very simple code:
#include <stdlib.h>
int main(void)
{
    while (1) malloc(1000);
}
And I watched over it in Task Manager (Windows 8.1).
Well, the program took up 2037.4 MB really quickly and just stayed like that. I understand it's probably Windows limiting the program.
But here is the weird part: When I closed the console, the memory use percentage went down, even though I was taught that it isn't supposed to!
Is it redundant to call free, since the operating system frees it up anyway?
(The question over here is related, but doesn't quite answer whether I should free or not.)
On Windows, a 32-bit process can by default use only 2048 megabytes of address space, because Windows reserves the upper half of the 4 GB 32-bit address range for the kernel. Some of that space is already taken by the program's own code, stack and DLLs, so the usable figure is lower. malloc returns a null pointer when it fails, which is likely what happens at that point. You could modify your program like this to see that:
#include <stdlib.h>
#include <stdio.h>
int main(void)
{
    int counter = 0;
    while (1) {
        counter++;
        if (malloc(1000) == NULL) {
            printf("malloc failed after %d calls\n", counter);
            return 0;
        }
    }
}
Now you should get output like this:
$ ./mem
malloc failed after 3921373 calls
When a process terminates or when it is terminated from the outside (as you do by killing it through the task manager), all memory allocated by the process is released. The operating system manages what memory belongs to what process and can therefore free the memory of a process when it terminates. The operating system does not however know how each process uses the memory it requested from the operating system and depends on the process telling it when it doesn't need a chunk of memory anymore.
Why do you need free() then? Well, this only happens on program termination and does not discriminate between memory you still need and memory you don't need any more. When your process is doing complicated things, it is often constantly allocating and releasing memory for its own computations. It's important to release memory explicitly with free(), because otherwise your process might at some point no longer be able to allocate new memory and crash. It's also good programming practice to release memory when you can, so your process does not unnecessarily eat up tons of memory; instead, other processes can use it.
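As a small sketch of that point (the request handler and the 4 KiB buffer size are invented for the illustration), a long-running loop that frees each buffer keeps memory use flat, while the same loop without free() grows until allocation eventually fails:

#include <stdlib.h>

/* Hypothetical per-request work in a long-running program. */
static void handle_request(void)
{
    char *buf = malloc(4096);   /* scratch space for this request */
    if (buf == NULL)
        return;                 /* allocation failed; skip the work */
    /* ... use buf ... */
    free(buf);                  /* memory use stays flat; dropping this
                                   line leaks 4 KiB per request until
                                   malloc() eventually fails */
}

int main(void)
{
    for (;;)
        handle_request();
}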
It is advisable to call free once you are done with memory you have allocated, as you may need that space later in your program and it will be a problem if there is no memory left for new allocations.
You should also always seek portability for your code. Even if Windows frees this space, maybe other operating systems don't.
Every process in the operating system has a limited amount of addressable memory, called the process address space. If you allocate a huge amount of memory and end up exhausting all of the memory available to the process, malloc will fail and return NULL, and you will not be able to allocate any more memory for that process.
With all non-trivial OSes, process resources are reclaimed by the OS upon process termination.
Unless there is a specific and overriding reason to explicitly free memory upon termination, you don't need to do it, and you should not try, for at least these reasons:
1) You would need to write code to do it, and test it, and debug it. Why do this, if the OS can do it? It's not just redundant, it reduces quality, because your explicit resource-releasing will never get as much testing as the OS has already had before it was released.
2) With a complex app, with a GUI and many subsystems and threads, cleanly freeing memory on shutdown is nigh-on impossible anyway, which leads to:
3) Many library developers have already given up on the 'you must explicitly release blah...' mantra because the complexity would result in the libs never being released. Many report unreleased (but not lost) memory to valgrind, and with opaque libs you can do nothing at all about it.
4) You must not free any memory that is in use by a running thread. To safely release all such memory in multithreaded apps, you must ensure that all process threads are stopped and cannot run again. User code does not have the tools to do this; the OS does. It is therefore not possible to explicitly free memory from user code in any kind of safe manner in such apps.
5) The OS can free the process memory in big chunks, much more quickly than messing around with dozens of sub-allocations in the C memory manager.
6) If the process is being terminated because it has failed due to memory-management issues, calling free() many more times is not going to help at all.
7) Many teachers and profs say that you must explicitly free the memory, so it's obviously a bad plan.

Memory leak in infinite loop

What damage is done to your computer if you run a program that continuously leaks memory?
For example:
while (true)
{
    char* c = malloc(sizeof(char));
    c = NULL;
}
then let that code execute for hours or days?
You probably wouldn't get the chance to run it for a day. The amount of unallocated main memory will quickly fall below a threshold at which the system stops your program. In most cases the operating system will kill the process, and around that point the system will run slowly. The worst part is that the allocated memory cannot be reused, because the reference to it has been lost.
Note: this leaked memory is not permanently lost. After the program terminates, the system reclaims the physical memory it had been given; the hard disk drive is not affected.
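For completeness, the leak in the snippet above disappears if the block is released before the pointer is overwritten; a one-line fix along these lines:

while (true)
{
    char* c = malloc(sizeof(char));
    free(c);      /* release the byte before losing the reference */
    c = NULL;     /* now the assignment no longer leaks anything */
}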

Can I loop malloc on failure?

I'm doing some networking C code and I don't want to exit the program if I'm out of memory, and I don't want to ignore a client's data that he's sent me. So I was thinking I would do something like this:
while (1) {
    client_packet = (struct packet_data *)malloc(size);
    if (client_packet != NULL) {
        break;
    }
    if (errno == ENOMEM) {
        sleep(1);
        continue;
    } else {
        return;
    }
}
Keep in mind this runs in a child process that has been forked off, and I'm going to have as many child processes as possible when the program is in full swing. So my logic is that if there was no memory available when I called malloc, sleeping for a second gives other processes a chance to call free before I try again.
I can't really test this myself until I have a situation where I might run out of memory, but would this concept be effective?
It depends on the computer and the operating system that you are using, but usually this will be just pointless.
Lots of software nowadays is 64 bit. In practice that means malloc will never fail unless your size argument is ridiculously wrong, like an uninitialised variable or (size_t)-1, which happens to be about 17 billion gigabytes. What happens instead is that your application starts swapping like mad once you exceed the available RAM, and at some point the user can't take the slowness anymore and kills your application. So even checks for malloc failure are often pointless, as long as you make sure your app crashes if it does fail and doesn't go off doing stupid things, for security reasons.
As an example of code seriously using this approach, take MacOS X / iOS code, where many essential functions have no way to notify you if they run out of memory.
Now if you have a situation where you can't find the memory to allocate a small data packet, what are the chances of the application as a whole surviving this and doing anything meaningful? Nil?
Typically, failure of malloc happens when the memory request cannot be satisfied because there is no usable block of memory. Looping on malloc may solve the problem, although it depends entirely on the specific situation.
For instance, suppose there are two processes running simultaneously on your computer, and the first one requests so much dynamic memory that it causes malloc() in the second process to fail. While the second process loops on malloc(), the first one starts to release its allocated memory via free() (execution is switched frequently between processes by the OS scheduler), and now the malloc() in the second process will return a valid pointer.
However, I don't think this implementation is a good idea, since this loop can block forever. Furthermore, looping like this consumes CPU.
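If you do want to retry, bounding the number of attempts avoids blocking forever; here is a minimal sketch of that idea (the retry count, sleep interval and helper name are arbitrary illustration choices, not part of the question):

#include <stdlib.h>
#include <unistd.h>

/* Try to allocate size bytes, sleeping and retrying a bounded number of
   times instead of looping forever. Returns NULL if every attempt fails. */
static void *malloc_retry(size_t size, int max_attempts)
{
    for (int i = 0; i < max_attempts; i++) {
        void *p = malloc(size);
        if (p != NULL)
            return p;
        sleep(1);               /* give other processes a chance to free() */
    }
    return NULL;                /* caller must still handle total failure */
}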

Program heap size?

Is the maximum heap size of a program in C fixed, or, if I keep malloc-ing, will it at some point start to overflow?
Code:
while (connectionOK) // connectionOK is the connection with the server, which might last forever
{
    if (userlookup_IDNotFound(userID))
    {
        user_struct* newuser = malloc(getsize(user_struct));
        setupUserAccount(newuser);
    }
}
I am using gcc on Ubuntu/Linux, if that matters.
I know about getrlimit, but I am not sure whether it gives the heap size, although it does give the default stack size for one of the options in its input argument.
Also, valgrind is probably a good tool, as suggested in "how to get Heap size of a program", but I want to dynamically print an error message if the heap is exhausted.
My understanding was that the process address space is set up by the OS (and is in principle allowed to span the whole memory) when the process is created, but I am not sure whether the process is dynamically given more physical memory once it requests additional memory.
The heap never overflows; it just runs out of memory at a certain point (usually when malloc() returns NULL). So to detect out-of-memory, just check the return value of the malloc() call:
if (newuser == NULL)
{
    printf("OOM\n");
    exit(1); /* exit if you want or can't handle being OOM */
}
malloc() internally requests more memory from the OS, so the heap expands dynamically; it is not really a fixed size, since the allocator gives back pages it no longer needs to the OS and requests more whenever it requires them.
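One Linux-specific way to watch this (reading the first field of /proc/self/statm, which is the total address-space size in pages) is a sketch like the following; the exact numbers depend on the allocator, but the figure should jump after a large malloc() and can drop again after free():

#include <stdlib.h>
#include <stdio.h>

/* Linux-specific: the first field of /proc/self/statm is the total
   program (address-space) size in pages. */
static long vm_pages(void)
{
    long pages = -1;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &pages) != 1)
            pages = -1;
        fclose(f);
    }
    return pages;
}

int main(void)
{
    printf("before:       %ld pages\n", vm_pages());
    void *p = malloc(64 * 1024 * 1024);   /* 64 MiB: large enough that glibc
                                             typically asks the OS directly */
    printf("after malloc: %ld pages\n", vm_pages());
    free(p);                              /* large blocks are usually handed
                                             straight back to the OS */
    printf("after free:   %ld pages\n", vm_pages());
    return 0;
}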
Technically, what malloc allocates on most systems is not memory but address space. On a modern system you can easily allocate several petabytes of address space with malloc, and malloc will probably always return a non-null pointer. The reason is that most OSes actually perform memory allocation only when a piece of address space is first modified. As long as it sits there untouched, the OS just makes a note that a certain area of the process address space has been validly reserved for future use.
This kind of behaviour is called "memory overcommitment" and is of importance when maintaining Linux systems. It can happen that, for some time, there is more memory allocated than is actually available, and then some program writes to some of the overcommitted memory. What happens then is that the so-called "Out Of Memory killer" (OOM killer) goes on a rampage and kills the processes it deems most appropriate; unfortunately these are usually the processes you don't want to lose under any circumstances. Databases are known to be among the prime targets of the OOM killer.
Because of this, it is strongly recommended to switch off memory overcommitment on high-availability Linux boxes. With memory overcommitment disabled, each request for address space must be backed by memory. In that case malloc will actually return 0 if the request cannot be fulfilled.
At some point malloc() will return NULL, when the system runs out of memory. Then, when you try to dereference that pointer, your program will abort.
See what happens when you do malloc(SIZE_MAX) a few times :-)
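That experiment can be as small as the following sketch (SIZE_MAX comes from <stdint.h>); no allocator can hand out the entire address space, so each call should simply return NULL:

#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 3; i++) {
        void *p = malloc(SIZE_MAX);        /* absurdly large request */
        printf("attempt %d: %p\n", i, p);  /* expect a null pointer each time */
    }
    return 0;
}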

malloc() inside an infinite loop

I got an interview question: what happens when we allocate a large chunk of memory using malloc() inside an infinite loop and don't free() it?
I thought checking the result against NULL would work when there is not enough memory on the heap and would break the loop, but that didn't happen: the program terminated abnormally, printing "Killed".
Why is this happening, and why doesn't it execute the if part when there is no memory to allocate (I mean, when malloc() fails)? What behaviour is this?
My code is:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    int *px;
    while (1)
    {
        px = malloc(sizeof(int) * 1024 * 1024);
        if (px == NULL)
        {
            printf("Heap Full .. Cannot allocate memory \n");
            break;
        }
        else
            printf("Allocated \t");
    }
    return 0;
}
EDIT: gcc 4.5.2 (Linux, Ubuntu 11.04)
If you're running on Linux, keep an eye on the first terminal. It will show something like:
OOM error - killing proc 1100
OOM means out of memory.
I think it's also visible in dmesg and/or /var/log/messages and/or /var/log/system, depending on the Linux distro. You can grep with:
grep -i oom /var/log/*
You could make your program grab memory slowly, and keep an eye on:
watch free -m
You'll see the available swap go down and down. When it gets close to nothing Linux will kill your program and the amount of free memory will go up again.
This is a great link for interpreting the output of free -m: http://www.linuxatemyram.com/
This behaviour can be a problem with apps that are started by init or some other supervision mechanism like 'god': you can get into a loop where Linux kills the app and init (or whatever supervises it) starts it up again. If the amount of memory needed is much bigger than the available RAM, it can cause slowness through swapping memory pages to disk.
In some cases Linux doesn't kill the program that's causing the trouble but some other process. If it kills init, for example, the machine will reboot.
In the worst cases a program or group of processes will request a lot of memory (more than is available in RAM) and attempt to access it repeatedly. Linux has nowhere fast to put that memory, so it has to swap out some page of RAM to disk (the swap partition) and load the page being accessed from disk so the program can see/edit it.
This happens over and over again, every millisecond. As disk is thousands of times slower than RAM, this problem can grind the machine down to a practical halt.
The behaviour depends on the ulimits - see http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm
If you have a limit on memory use, you'll see the expected NULL-return behaviour; on the other hand, if you are not limited, you might see the OOM reaper, as you did.
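To get the NULL-return behaviour deliberately from inside the program, you can cap the process's own address space with setrlimit() before allocating; a minimal sketch (the 256 MiB cap is an arbitrary choice for the demonstration):

#include <stdlib.h>
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the address space at 256 MiB so malloc() fails with NULL
       instead of the process being picked off by the OOM killer. */
    struct rlimit lim = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    size_t count = 0;
    while (malloc(1024 * 1024) != NULL)   /* 1 MiB per iteration */
        count++;
    printf("malloc returned NULL after %zu MiB\n", count);
    return 0;
}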
But it didn't happen, and the program terminated abnormally, printing "Killed".
Keep in mind, you are not alone. In this case you were killed by the Out Of Memory killer; it saw your process hogging the system's memory and took steps to stop that.
Why is this happening, and why doesn't it execute the if part when there is no memory to allocate (I mean, when malloc() fails)? What behaviour is this?
Well, there's no reason to believe that the if check wasn't run. Check out the man page for malloc():
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. In case it turns out that the system is out of memory, one or more processes will be killed by the OOM killer.
So you thought you had "protected" yourself from an out-of-memory condition with a NULL check; in reality it only means that if you had got back NULL you would not have dereferenced it. It says nothing about whether you actually got the memory you requested.

Resources