I am just about finished reading K&R, and that is all the C that I know. All my compilation is done from Windows command line using MinGW, and I have no knowledge of advanced debugging methods (hence the "ghetto debug" comment in my 2nd program below).
I am trying to make a few small test programs to help me better understand how memory allocation works. These first couple of programs do not use malloc or free; I just wanted to see how memory is allocated and de-allocated for standard arrays local to a function. The idea is that I watch my running process's RAM usage to see if it corresponds with what I understand. This first program below does work as I expected. The alloc_one_meg() function allocates and initializes 250,000 4-byte integers, but that MB is de-allocated as soon as the function returns. So if I call that function 1,000,000 times in a row, I should never see my RAM usage go much above 1MB. And it works.
#include <stdio.h>
#include <stdlib.h>

void alloc_one_meg() {
    int megabyte[250000];
    int i;
    for (i = 0; i < 250000; i++) {
        megabyte[i] = rand();
    }
}

int main(void)
{
    int i;
    for (i = 0; i < 1000000; i++) {
        alloc_one_meg();
    }
    return 0;
}
For this second program below, the idea was to not let the function exit, so that 1000 copies of the same function are running at once, which I accomplished with recursion. My theory was that the program would consume 1GB of RAM before de-allocating it all after the recursion finished. However, it doesn't get past the 2nd loop through the recursion (see my ghetto debug comment). The program crashes with a pretty non-informative (to me) message (a Windows pop-up saying ____.exe has encountered a problem). I can usually get to the bottom of things with my ghetto debug method... but it's not working here. I'm stumped. What is the problem with this code? Thanks!
#include <stdio.h>
#include <stdlib.h>

int j = 0;

void alloc_one_meg() {
    int megabyte[250000];
    int i;
    for (i = 0; i < 250000; i++) {
        megabyte[i] = rand();
    }
    j++;
    printf("Loop %d\n", j); // ghetto debug
    if (j < 1000) {
        alloc_one_meg();
    }
}

int main(void)
{
    alloc_one_meg();
    return 0;
}
Followup question posted here.
You're running into a stack overflow.
Local automatic storage variables (such as megabyte) are allocated on the stack, which has a limited amount of space. malloc allocates on the heap, which allows much larger allocations.
You can read more here:
http://en.wikipedia.org/wiki/Stack_overflow
(I should note that the C language does not specify where memory is allocated - stack and heap are implementation details)
The size of the stack in a Windows program is usually around 1 MB, so the second recursive call overflows the stack. You shouldn't be allocating such large arrays on the stack; use malloc and free to allocate and deallocate the memory on the heap (there's no way to get around malloc for arrays of this size):
void alloc_one_meg() {
    int *megabyte = malloc(sizeof(int) * 250000); // allocate space for 250000
                                                  // ints on the heap
    int i;
    for (i = 0; i < 250000; i++) {
        megabyte[i] = rand();
    }
    j++;
    printf("Loop %d\n", j); // ghetto debug
    if (j < 1000) {
        alloc_one_meg();
    }
    free(megabyte); // DO NOT FORGET THIS
}
That said, you can actually change the stack size of a program and make it bigger (though I'd only do so as an educational exercise, not in production code). For Visual Studio you can use the /F compiler option, and on Linux you can use setrlimit(3). I'm not sure what to use with MinGW, though.
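On the Linux side, a minimal sketch of the setrlimit(3) route (the 64 MB figure is arbitrary; the call fails if an unprivileged user asks for more than the hard limit):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    getrlimit(RLIMIT_STACK, &rl);
    printf("current soft stack limit: %ld bytes\n", (long)rl.rlim_cur);

    rl.rlim_cur = 64L * 1024 * 1024;      /* request a 64 MB stack (arbitrary) */
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        perror("setrlimit");

    return 0;
}

On Linux, the main thread's stack grows on demand up to RLIMIT_STACK, so raising the limit at runtime really does permit deeper recursion in the same process.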
The memory you are allocating via the recursive function calls is allocated from the stack. All of the stack memory must be contiguous. When your process starts a thread, Windows reserves a range of virtual address space for that thread's stack. The amount of memory to be reserved is specified in your EXE file's "PE header." PE stands for "Portable Executable."
Using the dumpbin utility included with Visual Studio, with itself (dumpbin.exe) as the input file:
dumpbin /headers dumpbin.exe
... there is some output, and then:
100000 size of stack reserve
2000 size of stack commit
The "100000" is a hexadecimal number equal to 1,048,576, so this represents around 1MB.
In other words, the operating system reserves only a 1 MB address range for the stack. When that range is used up, Windows may or may not be able to allocate a further contiguous address range to grow the stack. That depends on whether adjacent address space is still free, which is very unlikely, due to the other allocations Windows made when the thread began.
To allocate a maximum amount of virtual memory under Windows, use the VirtualAlloc family of functions.
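As a hedged illustration of the reserve/commit distinction the PE header describes, here is a sketch using VirtualAlloc directly (sizes are arbitrary, error handling is minimal):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T reserve_size = (SIZE_T)1 << 30;   /* reserve 1 GB of address space */
    char *base = VirtualAlloc(NULL, reserve_size,
                              MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL) { fprintf(stderr, "reserve failed\n"); return 1; }

    /* Commit only the first 64 KB; no physical storage backs the rest yet. */
    if (VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE) == NULL) {
        fprintf(stderr, "commit failed\n");
        return 1;
    }
    base[0] = 42;                            /* committed pages are usable */

    VirtualFree(base, 0, MEM_RELEASE);       /* release the whole reservation */
    return 0;
}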
StackOverflow. Is this a trick question?
int a = 0;
int *b = malloc (sizeof(int));
b = malloc (sizeof(int));
The above code is bad because it allocates memory on the heap and then doesn't free it, meaning you lose access to it. But you also created a and never used it, so you allocated memory on the stack, too, which isn't freed until the scope ends.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Note: I know that memory on the stack can't be freed; I want to know why it's not considered bad.
The stack memory will get released automatically when the scope ends. The memory allocated on the heap will remain occupied unless you release it explicitly. As an example:
void foo(void) {
    int a = 0;
    void *b = malloc(1000);
}

for (int i = 0; i < 1000; i++) {
    foo();
}
Running this code will decrease the available memory by 1000*1000 bytes required by b, whereas the memory required by a will always get released automatically when you return from the foo call.
Simple: Because you'll leak memory. And memory leaks are bad. Leaks: bad, free: good.
When calling malloc or calloc, or indeed any *alloc function, you're claiming a chunk of memory (the size of which is defined by the arguments passed to the allocating function).
Unlike stack variables, which reside in a portion of memory the program has, sort of, free rein over, the same rules don't apply to heap memory. You may need to allocate heap memory for any number of reasons: the stack isn't big enough; you need an array of pointers, but have no way of knowing how big this array will need to be at compile time; you need to share some chunk of memory (threading nightmares); you have a struct that requires its members to be set at various places (functions) in your program...
Some of these reasons, by their very nature, imply that the memory can't be freed as soon as a pointer to it goes out of scope. Another pointer might still be around, in another scope, that points to the same block of memory.
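A minimal sketch of that situation, with a made-up function name (make_counter) for illustration: the pointer variable dies with its scope, but the heap block it refers to must not, because the caller keeps a reference to it.

#include <stdlib.h>

/* The local p goes out of scope at the return, but the heap block it points
   to must survive: the caller now holds the only reference and must free() it. */
int *make_counter(void)
{
    int *p = malloc(sizeof *p);
    if (p != NULL)
        *p = 0;
    return p;   /* ownership passes to the caller */
}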
There is, though, as mentioned in one of the comments, a slight drawback to this: heap memory requires not just more awareness on the programmer's part; it is also more expensive and slower to work with than the stack.
So some rules of thumb are:
You claimed the memory, so you take care of it... you make sure it's freed when you're done playing around with it.
Don't use heap memory without a valid reason. Avoiding stack overflow, for example, is a valid reason.
Anyway, some examples:
Stack overflow:
#include <stdio.h>

int main()
{
    int foo[2000000000]; // stack overflow, array is too large!
    return 0;
}
So, here we've depleted the stack, we need to allocate the memory on the heap:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *foo = malloc(2000000000 * sizeof(int)); // heap is bigger
    if (foo == NULL)
    {
        fprintf(stderr, "But not big enough\n");
    }
    free(foo); // free claimed memory
    return 0;
}
Or, an example of an array, whose length depends on user input:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *arr = NULL; // null pointer
    int arrLen;
    scanf("%d", &arrLen);
    arr = malloc(arrLen * sizeof(int));
    if (arr == NULL)
    {
        fprintf(stderr, "Not enough heap-mem for %d ints\n", arrLen);
        exit(EXIT_FAILURE);
    }
    // do stuff
    free(arr);
    return 0;
}
And so the list goes on... Another case where malloc or calloc is useful: An array of strings, that all might vary in size. Compare:
char str_array[20][100];
In this case str_array is an array of 20 char arrays (or strings), each 100 chars long. But what if 100 chars is the maximum you'll ever need, and on average, you'll only ever use 25 chars, or less?
You're writing in C, because it's fast and your program won't use any more resources than it actually needs? Then this isn't what you actually want to be doing. More likely, you want:
char *str_array[20];
for (int i = 0; i < 20; ++i) str_array[i] = malloc((someInt + i) * sizeof(char));
Now each element in str_array has exactly the amount of memory I need allocated to it. That's just way more clean. However, in this case calling free(str_array) won't cut it. Another rule of thumb is: each alloc call has to have a free call to match it, so deallocating this memory looks like this:
for (int i = 0; i < 20; ++i) free(str_array[i]);
Note:
Dynamically allocated memory isn't the only cause of mem-leaks. It has to be said. If you open a file pointer using fopen, but fail to close that file (fclose), that will cause a leak, too:
#include <stdio.h>
#include <stdlib.h>

int main()
{   // LEAK!!
    FILE *fp = fopen("some_file.txt", "w");
    if (fp == NULL) exit(EXIT_FAILURE);
    fprintf(fp, "%s\n", "I was written in a buggy program");
    return 0;
}
This will compile and run just fine, but it contains a leak that is easily plugged (and it should be plugged) by adding just one line:
#include <stdio.h>
#include <stdlib.h>

int main()
{   // OK
    FILE *fp = fopen("some_file.txt", "w");
    if (fp == NULL) exit(EXIT_FAILURE);
    fprintf(fp, "%s\n", "I was written in a bug-free(?) program");
    fclose(fp);
    return 0;
}
As an aside: if the scope is really long, chances are you're trying to cram too much into a single function. Even so, if you're not: you can free claimed memory at any point, it needn't be at the end of the current scope:
#include <stdbool.h>
#include <stdlib.h>

_Bool some_long_f()
{
    int *foo = malloc(2000000000 * sizeof(int));
    if (foo == NULL) exit(EXIT_FAILURE);
    // do stuff with foo
    free(foo);
    // do more stuff
    // and some more
    // ...
    // and more
    return true;
}
Because stack and heap, mentioned many times in the other answers, are sometimes misunderstood terms, even amongst C programmers, here is a great conversation discussing that topic...
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Memory on the stack, such as memory allocated to automatic variables, will be automatically freed upon exiting the scope in which it was created, whether that scope is a function or a block ( {...} ) within a function.
But memory on the heap, such as that obtained with malloc() or calloc(), or the resources claimed by fopen(), will not be made available for any other purpose until you explicitly release it using free() or fclose().
To illustrate why it is bad practice to allocate memory without freeing it, consider what would happen if an application were designed to run autonomously for a very long time, say in the PID loop controlling the cruise control on your car. If that application had un-freed memory and, after 3 hours of running, the memory available in the microprocessor were exhausted, the PID could suddenly rail. "Ah!", you say, "This will never happen!" Yes, it does. (look here). (not exactly the same problem, but you get the idea)
If that word picture doesn't do the trick, then observe what happens when you run this application (with memory leaks) on your own PC. (at least view the graphic below to see what it did on mine)
Your computer will exhibit increasingly sluggish behavior until it eventually stops working. Likely, you will be required to re-boot to restore normal behavior.
(I would not recommend running it)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *buf = 0;

int main(void)
{
    long long i;
    char text[] = "a;lskdddddddd;js;'";

    buf = malloc(1000000);
    buf[0] = '\0'; /* malloc'd memory is uninitialized; strcat needs a string */
    strcat(buf, "a;lskdddddddd;js;dlkag;lkjsda;gkl;sdfja;klagj;aglkjaf;d");
    i = 1;
    while (strlen(buf) < i * 1000000)
    {
        strcat(buf, text);
        if (strlen(buf) > (i * 10000) - 10)
        {
            i++;
            buf = realloc(buf, 10000000 * i);
        }
    }
    return 0;
}
Memory usage after just 30 seconds of running this memory pig:
I guess that has to do with scope ending really often (at the end of a function): if you return from a function that creates a and allocates b, you have, in a sense, freed the memory taken by a, but lost the memory used by b for the remainder of the execution.
Try calling that function a handful of times, and you'll soon exhaust all of your memory. This never happens with stack variables (except in the case of defective recursion).
Memory for local variables is automatically reclaimed when the function is left (by resetting the frame pointer).
The problem is that memory you allocate on the heap never gets freed until your program ends, unless you explicitly free it. That means every time you allocate more heap memory, you reduce available memory more and more, until eventually your program runs out (in theory).
Stack memory is different because it's laid out and used in a predictable pattern, as determined by the compiler. It expands as needed for a given block, then contracts when the block ends.
So why is it bad practice to not free memory on the heap but okay for memory on the stack to not be freed (until the scope ends)?
Imagine the following:
while ( some_condition() )
{
    int x;
    char *foo = malloc( sizeof *foo * N );
    // do something interesting with x and foo
}
Both x and foo are auto ("stack") variables. Logically speaking, a new instance of each is created and destroyed in each loop iteration[1]; no matter how many times this loop runs, the program will only allocate enough memory for a single instance of each.
However, each time through the loop, N bytes are allocated from the heap, and the address of those bytes is written to foo. Even though the variable foo ceases to exist at the end of the loop, that heap memory remains allocated, and now you can't free it because you've lost the reference to it. So each time the loop runs, another N bytes of heap memory is allocated. Over time, you run out of heap memory, which may cause your code to crash, or even cause a kernel panic depending on the platform. Even before then, you may see degraded performance in your code or other processes running on the same machine.
For long-running processes like Web servers, this is deadly. You always want to make sure you clean up after yourself. Stack-based variables are cleaned up for you, but you're responsible for cleaning up the heap after you're done.
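The cure is to release each block before the last reference to it disappears, for example at the bottom of the loop body. A sketch, reusing the hypothetical some_condition() and N from the snippet above:

while ( some_condition() )
{
    int x;
    char *foo = malloc( sizeof *foo * N );
    // do something interesting with x and foo
    free( foo );   // release the block while foo still refers to it
}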
[1] In practice, this (usually) isn't the case; if you look at the generated machine code, you'll (usually) see the stack space allocated for x and foo at function entry. Usually, space for all local variables (regardless of their scope within the function) is allocated at once.
The malloc example I'm studying is
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *vec;
    int i, size;
    printf("Give size of vector: ");
    scanf("%d", &size);
    vec = (int *) malloc(size * sizeof(int));
    for (i = 0; i < size; i++) vec[i] = i;
    for (i = 0; i < size; i++)
        printf("vec[%d]: %d\n", i, vec[i]);
    free(vec);
}
But I could make a program that behaves at runtime like this one without using malloc, couldn't I? So what's the use of malloc here?
It is dynamic memory allocation.
The very important point there is that you don't know how much memory you'll need, because the amount of memory you end up needing is dependent on the user input.
Thus the use of malloc, which takes size as part of its argument, and size is unknown at compile-time.
This specific example could have been done using variable length arrays, which have been supported by the standard since C99 (and are optional as of the 2011 standard). Although if size is very large, allocating it on the stack would not work, since the stack is much smaller than the available heap memory that malloc will use. This previous post of mine also has some links on variable length arrays you might find useful.
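For comparison, a hedged sketch of the same example using a C99 VLA instead of malloc; vec now lives on the stack, so this is only reasonable for modest (positive) sizes, and there is no way to detect allocation failure:

#include <stdio.h>

int main(void)
{
    int i, size;
    printf("Give size of vector: ");
    scanf("%d", &size);

    int vec[size];   /* C99 VLA: size fixed at runtime, storage on the stack;
                        assumes a sensible positive size */
    for (i = 0; i < size; i++) vec[i] = i;
    for (i = 0; i < size; i++)
        printf("vec[%d]: %d\n", i, vec[i]);
    return 0;        /* no free() needed; vec disappears with its scope */
}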
Outside of this example, when you have dynamic data structures, using malloc is pretty hard to avoid.
Two issues:
First, you might not know how much memory you need until run time. Using malloc() allows you to allocate exactly the right amount: no more, no less. And your application can "degrade" gracefully if there is not enough memory.
Second, malloc() allocates memory from the heap. This can sometimes be an advantage. Local variables that are allocated on the stack have a very limited amount of total memory. Static variables mean that your app will use all the memory all the time, which could even prevent your app from loading if there isn't enough memory.
This is a followup to the question I just asked here.
I've created a simple program to help myself understand memory allocation, malloc() and free(). Notice the commented out free line. I created an intentional memory leak so I can watch the Windows reported "Mem Usage" bloat slowly to 1GB. But then I found something stranger. If I comment out the loop just above the free line, so that I don't initialize my storage block with random ints, it appears that the space doesn't actually get "claimed" by the program, from the OS. Why is this?
Sure, I haven't initialized the block, but I have claimed it, so shouldn't the OS still see that the program is using 1GB, whether or not that GB is initialized?
#include <stdio.h>
#include <stdlib.h>

void alloc_one_meg() {
    int *pmeg = (int *) malloc(250000 * sizeof(int));
    int *p = pmeg;
    int i;
//  for (i=0; i<250000; i++)   /* removing this loop causes memory to not be used? */
//      *p++ = rand();
//  free((void *)pmeg);        /* removing this line causes memory leak! */
}

int main(void)
{
    int i;
    for (i = 0; i < 1000; i++) {
        alloc_one_meg();
    }
    return 0;
}
Allocated memory can be in two states in Windows: Reserved and Committed (see the documentation of VirtualAlloc about MEM_RESERVE: "Reserves a range of the process's virtual address space without allocating any actual physical storage in memory or in the paging file on disk.").
If you allocate memory but do not use it, it remains in the Reserved state, and the OS doesn't count it as used memory. When you try to use it (whether only on write, or on both read and write, I do not know; you might want to run a test to find out), it turns into Committed memory, and the OS counts it as used.
Also, the memory allocated by malloc will not be full of 0's (actually it may happen to be, but it's not guaranteed), because you have not initialised it.
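If you want to observe this without calling the Windows API directly, a sketch like the following can help: watch the process in Task Manager with and without the touching loop (exactly when pages get committed is an OS detail, not something the C standard specifies):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = 500u * 1024 * 1024;   /* 500 MB */
    char *block = malloc(size);
    if (block == NULL) return 1;

    /* Touch one byte per 4 KB page; commenting this loop out should make the
       reported working set far smaller, since untouched pages stay reserved
       rather than committed. */
    for (size_t i = 0; i < size; i += 4096)
        block[i] = 1;

    getchar();   /* pause so memory usage can be inspected */
    free(block);
    return 0;
}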
It could be a compiler optimization: the memory is not used at all, so a possible optimization is to not allocate it, depending on the compiler and on the optimization options.
I tested your code; the line free((void *)pmeg); does not cause any memory leak for me.
Please help :)
OS: Linux
While the program sits in the sleep(1000) call, top (display Linux tasks) reports 7.7 %MEM in use.
valgrind: found no memory leaks.
As I understand it, the code is written correctly, and every malloc result is checked against NULL.
So why, during this sleep, has my program NOT released its memory? What am I missing?
Sorry for my bad English. Thanks!
~ # tmp_soft
For : Is it free?? no
Is it free?? yes
For 0
For : Is it free?? no
Is it free?? yes
For 1
END : Is it free?? yes
END
~ #top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23060 root 20 0 155m 153m 448 S 0 7.7 0:01.07 tmp_soft
Full source : tmp_soft.c
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct cache_db_s
{
    int table_update;
    struct cache_db_s * p_next;
};

void free_cache_db (struct cache_db_s ** cache_db)
{
    struct cache_db_s * cache_db_t;
    while (*cache_db != NULL)
    {
        cache_db_t = *cache_db;
        *cache_db = (*cache_db)->p_next;
        free(cache_db_t);
        cache_db_t = NULL;
    }
    printf("Is it free?? %s\n", *cache_db==NULL ? "yes" : "no");
}

void make_cache_db (struct cache_db_s ** cache_db)
{
    struct cache_db_s * cache_db_t = NULL;
    int n = 10000000;
    for (int i=0; i = n; i++)
    {
        if ((cache_db_t = malloc(sizeof(struct cache_db_s))) == NULL) {
            printf("Error : malloc 1 -> cache_db_s (no free memory) \n");
            break;
        }
        memset(cache_db_t, 0, sizeof(struct cache_db_s));
        cache_db_t->table_update = 1; // tmp
        cache_db_t->p_next = *cache_db;
        *cache_db = cache_db_t;
        cache_db_t = NULL;
    }
}

int main(int argc, char **argv)
{
    struct cache_db_s * cache_db = NULL;
    for (int ii=0; ii < 2; ii++) {
        make_cache_db(&cache_db);
        printf("For : Is it free?? %s\n", cache_db==NULL ? "yes" : "no");
        free_cache_db(&cache_db);
        printf("For %d \n", ii);
    }
    printf("END : Is it free?? %s\n", cache_db==NULL ? "yes" : "no");
    printf("END \n");
    sleep(1000);
    return 0;
}
For good reasons, virtually no memory allocator returns blocks to the OS
Memory can only be removed from your program in units of pages, and even that is unlikely to be observed.
calloc(3) and malloc(3) do interact with the kernel to get memory, if necessary. But very, very few implementations of free(3) ever return memory to the kernel[1]; they just add it to a free list that calloc() and malloc() will consult later in order to reuse the released blocks. There are good reasons for this design approach.
Even if a free() wanted to return memory to the system, it would need at least one contiguous memory page in order to get the kernel to actually protect the region, so releasing a small block would only lead to a protection change if it was the last small block in a page.
Theory of Operation
So malloc(3) gets memory from the kernel when it needs it, ultimately in units of discrete page multiples. These pages are divided or consolidated as the program requires. Malloc and free cooperate to maintain a directory. They coalesce adjacent free blocks when possible in order to be able to provide large blocks. The directory may or may not involve using the memory in freed blocks to form a linked list. (The alternative is a bit more shared-memory and paging-friendly, and it involves allocating memory specifically for the directory.) Malloc and free have little if any ability to enforce access to individual blocks even when special and optional debugging code is compiled into the program.
[1] The fact that very few implementations of free() attempt to return memory to the system is not at all due to the implementors slacking off. Interacting with the kernel is much slower than simply executing library code, and the benefit would be small. Most programs have a steady-state or increasing memory footprint, so the time spent analyzing the heap looking for returnable memory would be completely wasted. Other reasons include the fact that internal fragmentation makes page-aligned blocks unlikely to exist, and it's likely that returning a block would fragment blocks to either side. Finally, the few programs that do return large amounts of memory are likely to bypass malloc() and simply allocate and free pages anyway.
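You can often watch the free list at work with a few lines of C. Nothing guarantees the two pointers below are equal, but with most allocators they will be, because the second malloc reuses the block the first free released:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(64);
    printf("first  block: %p\n", a);
    free(a);                     /* goes onto the allocator's free list... */

    void *b = malloc(64);        /* ...and is typically handed right back */
    printf("second block: %p\n", b);
    free(b);
    return 0;
}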
If you're trying to establish whether your program has a memory leak, then top isn't the right tool for the job (valgrind is).
top shows memory usage as seen by the OS. Even if you call free, there is no guarantee that the freed memory would get returned to the OS. Typically, it wouldn't. Nonetheless, the memory does become "free" in the sense that your process can use it for subsequent allocations.
Edit: If your libc supports it, you could try experimenting with M_TRIM_THRESHOLD. Even if you do follow this path, it's going to be tricky (a single used block sitting close to the top of the heap would prevent all the free memory below it from being released to the OS).
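For the record, a hedged sketch of that experiment with glibc, combining mallopt() with an explicit malloc_trim() call (both are glibc extensions declared in malloc.h, not portable C):

#include <stdlib.h>
#include <malloc.h>   /* glibc extensions: mallopt(), malloc_trim() */

int main(void)
{
    /* Trim back to the OS whenever at least 128 KB sits free at the top of the heap. */
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);

    char *p = malloc(100 * 1024 * 1024);   /* 100 MB */
    free(p);

    malloc_trim(0);   /* explicitly ask glibc to return free heap pages to the OS */
    return 0;
}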
Generally, free() doesn't give physical memory back to the OS; freed pages remain mapped in your process's virtual memory. If you allocate a big chunk of memory, libc may allocate it with mmap(); then, if you free it, libc may release the memory to the OS with munmap(), in which case top will show your memory usage come down.
So, if you want to release memory to the OS explicitly, you can use mmap()/munmap().
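A minimal sketch of that explicit route (MAP_ANONYMOUS is a widespread extension; some systems spell it MAP_ANON):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 100 * 1024 * 1024;   /* 100 MB */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 1;                          /* use the mapping */

    /* munmap really does return the pages to the OS; top should show the
       drop immediately, unlike free(). */
    munmap(p, size);
    return 0;
}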
When you free() memory, it is returned to the standard C library's pool of memory, not to the operating system. From the operating system's point of view, as you see it through top, the process is still "using" this memory. Within the process, the C library has accounted for the memory and could return the same pointer from malloc() in the future.
I will explain it some more with a different beginning:
During your calls to malloc, the standard library implementation may determine that the process does not have enough allocated memory from the operating system. At that time, the library will make a system call to receive more memory from the operating system to the process (for example, sbrk() or VirtualAlloc() system calls on Unix or Windows, respectively).
After the library requests additional memory from the operating system, it adds this memory to its structure of memory available to return from malloc. Later calls to malloc will use this memory until it runs out. Then, the library asks the operating system for even more memory.
When you free memory, the library usually does not return it to the operating system. There are many reasons for this. One reason is that the library author believed you will call malloc again. If you will not call malloc again, your program will probably end soon. In either case, there is not much advantage to returning the memory to the operating system.
Another reason that the library may not return the memory to the operating system is that the memory from operating system is allocated in large, contiguous ranges. It could only be returned when an entire contiguous range is no longer in use. The pattern of calling malloc and free may not clear the entire range of use.
Two problems:
In make_cache_db(), the line
for (int i=0; i = n; i++)
should probably read
for (int i=0; i<n; i++)
Otherwise, you'll only allocate a single cache_db_s node.
The way you're assigning cache_db in make_cache_db() seems to be buggy. It seems that your intention is to return a pointer to the first element of the linked list; but because you're reassigning cache_db in every iteration of the loop, you'll end up returning a pointer to the last element of the list.
If you later free the list using free_cache_db(), this will cause you to leak memory. At the moment, though, this problem is masked by the bug described in the previous bullet point, which causes you to allocate lists of only length 1.
Independent of these bugs, the point raised by aix is very valid: The runtime library need not return all free()d memory to the operating system.
I was going through one of the threads.
A program crashed because it had declared an array of 10^6 elements locally inside a function. The reason given was a memory allocation failure on the stack, leading to the crash. When the same array was declared globally, it worked well (memory on the heap saved it).
Now, for the moment, let us suppose that the stack grows downward and the heap upward. We have:
---STACK---
-------------------
---HEAP----
Now, I believe that if there is a failure in allocation on the stack, it must fail on the heap too. So my question is: is there any limit on stack size (crossing the limit caused the program to crash)? Or am I missing something?
Yes, the stack is always limited. In several languages/compilers you can set the requested size.
Default values (if not set manually) are about 1 MB for current languages, which is enough unless you do something that generally isn't recommended (like allocating huge arrays on the stack).
Contrary to all the answers so far, on Linux with GCC (and I guess this is true for all modern POSIX operating systems), the maximum stack size is a safety limit enforced by the operating system that can easily be lifted.
I crafted a small program that calls a function recursively until at least 10 GB is allocated on the stack, waits for input on the terminal, and then safely returns from all the recursive calls up to main.
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

void grow(unsigned cur_size)
{
    if (cur_size * sizeof(int) < 10ul*1024ul*1024ul*1024ul) {
        unsigned v[1000];
        v[0] = cur_size;
        for (unsigned i = 1; i < 1000; ++i) {
            v[i] = v[i-1] + 1;
        }
        grow(cur_size + 1000);
        for (unsigned i = 0; i < 1000; ++i) {
            if (v[i] != cur_size + i)
                puts("Error!");
        }
    } else {
        putchar('#');
        getchar();
    }
}

int main()
{
    struct rlimit l;
    l.rlim_max = RLIM_INFINITY;
    l.rlim_cur = RLIM_INFINITY;
    setrlimit(RLIMIT_STACK, &l);
    grow(0);
    putchar('#');
    getchar();
}
This all depends on what language and compiler you use, but programs compiled from, for instance, C or C++ allocate a fixed-size stack at program startup. The size of the stack can usually be specified at compile time (on my particular compiler it defaults to 1 MB).
You don't mention which programming language, but in Delphi the compile options include maximum and minimum stack size, and I believe similar parameters will exist for all compiled languages.
I've certainly had to increase the maximum myself occasionally.
Yes, there is a limit on stack size in most languages. For example, in C/C++, if you have an improperly written recursive function (e.g. incorrect base case), you will overflow the stack. This is because, ignoring tail recursion, each call to a function creates a new stack frame that takes up space on the stack. Do this enough, and you will run out of space.
Running this C program on Windows (VS2008)...
void main()
{
    main();
}
...results in a stack overflow:
Unhandled exception at 0x004113a9 in Stack.exe: 0xC00000FD: Stack overflow.
Maybe not a really good answer, but it gives you a somewhat more in-depth look at how Windows manages memory in general: Pushing the Limits of Windows