How to fix a stack overflow error? - C

So, I was making a program that reports the number of contiguous subarrays whose sum equals a certain value.
I have written the code, but when I try to run it in VC Express 2010, it reports this error:
Unhandled exception at 0x010018e7 in test 9.exe: 0xC00000FD: Stack overflow.
I have tried to search for a solution on this website and others, but I can't seem to find any solution that would help me fix the error in this code (the other questions use recursion, while I'm not).
I would be really grateful if you would kindly explain what causes this error in my code, and how to fix it. Any help would be appreciated. Thank you.
Here is my code:
#include <stdio.h>

int main()
{
    int n, k, a = 0, t = 0;
    unsigned long int i[1000000];   /* 1,000,000 unsigned longs as an automatic variable (several MB) */
    int v1, v2 = 0, v3;

    scanf("%d %d", &n, &k);
    for (v3 = 0; v3 < n; v3++)
    {
        scanf("%lu", &i[v3]);       /* %lu, not %d, for unsigned long */
    }
    do
    {
        for (v1 = v2; v1 < n; v1++)
        {
            t = i[v1] + t;
            if (t == k)
            {
                a++;
                break;
            }
        }
        t = 0;
        v2++;
    } while (v2 != n);
    printf("%d", a);                /* a is an int, so %d, not %lu */
    return 0;
}

Either move
unsigned long int i[1000000];
outside of main, thus making it a global variable (not an automatic one), or better yet, use C dynamic heap allocation:
// inside main, after adding #include <stdlib.h>
unsigned long int *i = calloc(1000000, sizeof(unsigned long int));
if (!i) { perror("calloc"); exit(EXIT_FAILURE); }
BTW, for such a pointer I would, for readability's sake, use some name other than i. And near the end of main you should free(i); to avoid a memory leak.
Also, you could move these two lines to after n has been read, and use calloc(n, sizeof(unsigned long int)) instead of calloc(1000000, sizeof(unsigned long int)); then you can handle arrays bigger than a million elements, if your computer and system provide enough resources for that.
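Putting the pieces together, here is a minimal sketch of the corrected program (it keeps your algorithm unchanged and only moves the array to the heap, sized from n after it is read; arr is just a more readable name than i):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n, k, a = 0;
    int v1, v2 = 0, v3;
    unsigned long t = 0;
    unsigned long *arr;

    if (scanf("%d %d", &n, &k) != 2 || n <= 0)
        return 1;

    /* heap allocation sized from n: the array no longer lives in main's call frame */
    arr = calloc(n, sizeof *arr);
    if (!arr) { perror("calloc"); exit(EXIT_FAILURE); }

    for (v3 = 0; v3 < n; v3++)
        scanf("%lu", &arr[v3]);

    do {
        for (v1 = v2; v1 < n; v1++) {
            t += arr[v1];
            if (t == (unsigned long)k) {
                a++;
                break;
            }
        }
        t = 0;
        v2++;
    } while (v2 != n);

    printf("%d\n", a);
    free(arr);   /* release the heap block to avoid a leak */
    return 0;
}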
Your initial code declares an automatic variable, which goes into the call frame of main on your call stack (and the call stack has a limited size, typically a megabyte or a few). On some operating systems there is a way to increase the size of that call stack (in an OS-specific way). BTW, each thread has its own call stack.
As a rule of thumb, your C functions (including main) should avoid call frames bigger than a few kilobytes. With the GCC compiler, you could invoke it as gcc -Wall -Wextra -Wframe-larger-than=1024 -g to get useful warnings and debug information.
Read the virtual address space wikipage; it has a nice picture worth many words. Later, find the way to query, on your operating system, the virtual address space of your process (on Linux, see proc(5) and try cat /proc/$$/maps, etc.). In practice, your virtual address space is likely to contain many segments (perhaps a dozen, sometimes thousands). Often, the dynamic linker or some other part of your program (or of your C standard library) uses memory-mapped files. The standard C heap (managed by malloc etc.) may be organized in several segments.
If you want to understand more about virtual address space, take the time to read a good book, like Operating Systems: Three Easy Pieces (freely downloadable).
If you want to query the organization of the virtual address space in some process, you need an operating-system specific way to do that (on Linux, for a process of pid 1234, use /proc/1234/maps, or /proc/self/maps from inside the process).
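As a Linux-only illustration of that last point, a process can dump its own memory map simply by reading /proc/self/maps; a minimal sketch:

#include <stdio.h>

/* Linux-specific: print this process's own memory map.
   Each line is one mapped segment (address range, permissions, backing file). */
int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    int c;
    if (!f) { perror("fopen"); return 1; }
    while ((c = fgetc(f)) != EOF)
        putchar(c);
    fclose(f);
    return 0;
}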

Memory is laid out much differently today than the simple four-segment model of long ago. Still, the answer to the question generalizes this way: the system handles global or dynamically allocated memory differently from local variables. The memory for local variables is strictly limited in size, whereas memory for dynamic allocation or global variables has no such low ceiling.
In modern systems there is the concept of a virtual address space. The process created from your program gets a chunk of it, and that portion of memory holds what the program needs.
For dynamic allocation, the process can request more memory, and depending on the other processes and available resources, the request is serviced. For a dynamic or global array there is no process-wide limit of this kind (of course there is a system-wide one: a process can't request all memory). That's why dynamic allocation or a global variable won't make the process run out of memory the way the limited automatic storage for local variables can.

Basically, you can check your stack size - for example, on Linux: ulimit -s (in KB) - and then decide how to shape your code around it.
As a matter of principle I would never allocate a big piece of memory on the stack: unless you know exactly the depth of your function calls and their stack use, it's hard to control precisely how much stack gets consumed at run time.
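On POSIX systems you can also query the limit programmatically with getrlimit; a minimal sketch (POSIX only, not available on Windows):

#include <stdio.h>
#include <sys/resource.h>   /* POSIX; MSVC has no equivalent header */

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) { perror("getrlimit"); return 1; }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("stack soft limit: unlimited\n");
    else
        printf("stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}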

Related

Segmentation fault in C in high length arrays in prime sieve [duplicate]

The following code is generating a stack overflow error for me
int main(int argc, char* argv[])
{
    int sieve[2000000];
    return 0;
}
How do I get around this? I am using Turbo C++ but would like to keep my code in C
EDIT:
Thanks for the advice. The code above was only for example, I actually declare the array in a function and not in sub main. Also, I needed the array to be initialized to zeros, so when I googled malloc, I discovered that calloc was perfect for my purposes.
Malloc/calloc also has the advantage over allocating on the stack of allowing me to declare the size using a variable.
Your array is way too big to fit into the stack, consider using the heap:
int *sieve = malloc(2000000 * sizeof(*sieve));
If you really want to change the stack size, take a look at this document.
Tip: don't forget to free your dynamically allocated memory when it's no longer needed.
There are three ways:
1. Allocate the array on the heap - use malloc(), as other posters suggested. Do not forget to free() it (although for main() it is not that important - the OS will clean up the memory for you on program termination).
2. Declare the array at file scope - it will be allocated in the data segment and be visible to everybody (adding static to the declaration limits its visibility to the translation unit).
3. Declare the array as static inside main() - it will also be allocated in the data segment, but be visible only in main().
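For illustration, a minimal sketch showing all three ways side by side (the names are invented):

#include <stdlib.h>

int sieve_global[2000000];                /* way 2: file scope, data segment */

int main(void)
{
    static int sieve_static[2000000];     /* way 3: static, data segment, visible only in main */
    int *sieve_heap = malloc(2000000 * sizeof *sieve_heap);   /* way 1: heap */
    if (!sieve_heap)
        return 1;
    /* ... use whichever array ... */
    (void)sieve_global; (void)sieve_static;
    free(sieve_heap);                     /* only the heap version needs freeing */
    return 0;
}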
That's about 7.6 MB of stack space. In Visual Studio you would use /STACK:reserve[,commit] to set the size you want. If you truly want a huge stack (there can be a good reason - using LISP or something :) - even the heap is limited to smallish allocations before forcing you to use VirtualAlloc), you may also want to build your PE with /LARGEADDRESSAWARE (Visual Studio's linker again); this configures your PE header to allow your compiled binary to address the full 4 GB of 32-bit address space (when running under WOW64). If building truly massive binaries, you would also typically need /bigobj as an additional compiler option.
And if you still need more space, you can radically violate convention by using something similar to (again, MSVC's linker) /merge:, which will allow you to pack one section into another, so you can use every single byte for a single shared code/data section. Naturally you would also need to configure the SECTIONS permissions in a .def file or with #pragma.
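If it helps, /MERGE happens to be one of the few linker switches MSVC lets you embed from source via #pragma comment; a hedged sketch of that trick:

/* MSVC-only sketch: ask the linker to fold the read-only data section
   into the code section - equivalent to passing /merge:.rdata=.text. */
#ifdef _MSC_VER
#pragma comment(linker, "/merge:.rdata=.text")
#endif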
Use malloc, and always check that the returned pointer is not NULL; if it is, your system simply doesn't have enough memory to fit that many values.
You would be better off allocating it on the heap, not the stack. Something like:
#include <stdlib.h>

int main(int argc, char* argv[])
{
    int *sieve = malloc(2000000 * sizeof *sieve);   /* was malloc(20000): that's bytes, not elements */
    if (!sieve)
        return 1;
    /* ... */
    free(sieve);
    return 0;
}
Your array is huge.
It's possible that your machine or OS doesn't have, or doesn't want to allocate, that much memory.
If you absolutely need an enormous array, you can try to allocate it dynamically (using malloc(...)), but then you're at risk of leaking memory. Don't forget to free the memory.
The advantage of malloc is that it tries to allocate memory on the heap instead of the stack (therefore you won't get a stack overflow).
You can check the value that malloc returns to see if the allocation succeeded or failed.
If it fails, just try to malloc a smaller array.
Another option would be to use a different data structure that can be resized on the fly (like a linked list). Whether this option is good depends on what you are going to do with the data.
Yet another option would be to store things in a file, streaming data on the fly. This approach is the slowest.
If you go for storage on the hard drive, you might as well use an existing library (such as a database).
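The "try a smaller array" suggestion above could look something like this rough sketch, halving the request until an allocation succeeds (alloc_best_effort is a made-up helper; the starting size is up to the caller):

#include <stdlib.h>

/* Try to allocate up to *count ints, halving the request on failure.
   On success, *count holds the element count actually obtained. */
int *alloc_best_effort(size_t *count)
{
    while (*count > 0) {
        int *p = malloc(*count * sizeof *p);
        if (p)
            return p;
        *count /= 2;   /* fall back to a smaller array */
    }
    return NULL;
}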
As Turbo C/C++ is a 16-bit compiler, the int datatype consumes 2 bytes.
2 bytes × 2,000,000 = 4,000,000 bytes ≈ 3.8 MB.
The automatic variables of a function are stored on the stack, and this one overflowed the stack memory. Instead, use the data segment (via a static or global variable) or dynamic heap memory (via malloc/calloc) to create the required memory, according to what the processor's memory mapping makes available.
Is there some reason why you can't use alloca() to allocate the space you need on the stack frame, based on how big the object really needs to be?
If you do that and still bust the stack, put it on the heap. I highly recommend NOT declaring it as static in main() and putting it in the data segment.
If it really has to be that big and your program can't allocate it on the heap, your program really has no business running on that type of machine to begin with.
What (exactly) are you trying to accomplish?
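For completeness, a hedged sketch of alloca; note it is non-standard (declared in <alloca.h> on many Unix systems, _alloca in <malloc.h> on MSVC) and the memory vanishes as soon as the function returns:

#include <stdio.h>
#include <string.h>
#include <alloca.h>   /* non-standard; on MSVC use <malloc.h> and _alloca */

void print_padded(const char *s, size_t width)
{
    /* runtime-sized buffer on the current stack frame; never free() it */
    char *buf = alloca(width + 1);
    size_t len = strlen(s);
    memset(buf, ' ', width);
    memcpy(buf, s, len < width ? len : width);
    buf[width] = '\0';
    puts(buf);
}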

Segmentation fault due to large buffer size when reading a large line in C [duplicate]


Why does a C program crash if a large variable is declared?

I have the following C program compiled in Microsoft Visual Studio Express 2012:
int main() {
    int a[300000];
    return 0;
}
This crashes with a stack overflow in msvcr110d.dll!__crtFlsGetValue().
If I change the array size from 300,000 to 200,000 it works fine (insofar as this simple program can be said to 'work', since it doesn't do anything).
I'm running on Windows 7 and have also tried this with gcc under Cygwin and it produces the same behaviour (in this case a seg fault).
What the heck?
There are platform-specific limits on the size of the space used by automatic objects in C (the "stack size"). Objects that are larger than that size (which may be a few kilobytes on an embedded platform and a few megabytes on a desktop machine) cannot be declared as automatic objects. Make them static or dynamic instead.
In a similar vein, there are limits on the depth of function calls, and in particular on recursion.
Check your compiler and/or platform documentation for details on what the actual size is, and on how you might be able to change it. (E.g. on Linux check out ulimit.)
Because it's being allocated on the stack and the stack has a limited size, obviously not large enough to hold 300000 ints.
Use heap allocation a la malloc:
int* a = malloc(sizeof(int) * 300000);
// ...
free(a);
The heap can hold a lot more than the stack.
The size of thread stacks is traditionally limited by operating systems because of the finite amount of virtual address space available to each process.
Since the virtual address space allocated to a thread stack can't be changed once it is allocated, there is no strategy other than to allocate a fairly large, but limited, chunk to each thread - even though most threads will use very little of it.
Similarly, there is a finite limit on the number of threads a process is allowed to spawn.
At a guess, the limit here is 1 MB, and Windows then limits the number of threads to, say, 256; this would mean that 256 MB of the 3 GB of virtual address space available to a 32-bit process is allocated to thread stacks - or, put another way, one twelfth of it.
On 64-bit systems there is obviously a lot more virtual space to play with, but having a limit is still sensible in order to quickly detect, and terminate, infinite recursion.
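On POSIX systems the per-thread stack size can be chosen at thread creation time; a minimal sketch using pthreads (compile with -pthread; the 16 MB figure is an arbitrary example):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    char big[4 * 1024 * 1024];   /* fits because this thread was given a 16 MB stack */
    (void)arg;
    big[0] = 0;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 16 * 1024 * 1024);   /* request a 16 MB stack */
    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}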
Local variables claim space from the stack. So, if you allocate something large enough, the stack will inevitably overflow.

Why can I write and read memory when I haven't allocated space?

I'm trying to build my own hash table in C from scratch as an exercise, and I'm doing it one little step at a time. But I'm having a little issue...
I'm declaring the hash table structure as a pointer so I can initialize it with the size I want and increase its size whenever the load factor gets high.
The problem is that I'm creating a table with only 2 elements (just for testing purposes) and allocating memory for just those 2 elements, but I'm still able to write to memory locations that I shouldn't. I can also read memory locations that I haven't written to.
Here's my current code:
#include <stdio.h>
#include <stdlib.h>

#define HASHSIZE 2

typedef char *HashKey;
typedef int HashValue;

typedef struct sHashTable {
    HashKey key;
    HashValue value;
} HashEntry;

typedef HashEntry *HashTable;

void hashInsert(HashTable table, HashKey key, HashValue value) {
}

void hashInitialize(HashTable *table, int tabSize) {
    *table = malloc(sizeof(HashEntry) * tabSize);
    if(!*table) {
        perror("malloc");
        exit(1);
    }
    (*table)[0].key = "ABC";
    (*table)[0].value = 45;
    (*table)[1].key = "XYZ";
    (*table)[1].value = 82;
    (*table)[2].key = "JKL";
    (*table)[2].value = 13;
}

int main(void) {
    HashTable t1 = NULL;
    hashInitialize(&t1, HASHSIZE);
    printf("PAIR(%d): %s, %d\n", 0, t1[0].key, t1[0].value);
    printf("PAIR(%d): %s, %d\n", 1, t1[1].key, t1[1].value);
    printf("PAIR(%d): %s, %d\n", 3, t1[2].key, t1[2].value);
    printf("PAIR(%d): %s, %d\n", 3, t1[3].key, t1[3].value);
    return 0;
}
You can easily see that I haven't allocated space for (*table)[2].key = "JKL"; nor for (*table)[2].value = 13;. I also shouldn't be able to read the memory locations in the last 2 printfs in main().
Can someone please explain this to me and if I can/should do anything about it?
EDIT:
Ok, I've realized a few things about my code above, which is a mess... But I have a class right now and can't update my question. I'll update this when I have the time. Sorry about that.
EDIT 2:
I'm sorry, but I shouldn't have posted this question, because I don't want my code to be like what I posted above. I want to do things slightly differently, which makes this question a bit irrelevant. So I'm just going to treat this as a question I needed an answer to and accept one of the correct answers below. I'll then post my proper questions...
Just don't do it - it's undefined behavior.
It might accidentally work because you write/read some memory the program doesn't actually use. Or it can lead to heap corruption, because you overwrite metadata the heap manager uses for its own purposes. Or you can overwrite some other, unrelated variable and then have a hard time debugging the program that goes nuts because of it. Or anything else harmful - obvious, or subtle yet severe - can happen.
Just don't do it - only read/write memory you have legally allocated.
Generally speaking (implementations differ across platforms), when malloc or a similar heap-based allocation call is made, the underlying library translates it into a system call. When it does, it generally allocates space in whole regions, equal to or larger than the amount the program requested.
Such an arrangement prevents frequent system calls into the kernel and satisfies the program's heap requests faster (this is certainly not the only reason - others exist as well).
The fall-through of such an arrangement leads to the behavior you are observing. Once again, your program won't necessarily be able to write to a non-allocated zone without crashing/seg-faulting every time - that depends on the particular binary's memory arrangement. Try writing to an even higher array offset: your program will eventually fault.
As for what you should or should not do: the people who responded above have summarized it fairly well. I have no better answer except that such issues should be prevented, and that can only be done by being careful when allocating memory.
One way to understand it is through this crude example: when you request 1 byte in userspace, the kernel has to allocate at least a whole page (4 KB on many Linux systems, for example - the most granular allocation at kernel level). To improve efficiency by reducing frequent calls, the kernel assigns the whole page to the calling library, which can parcel it out as more requests come in. Thus, reads or writes in such a region may not generate a fault - they just yield garbage.
In C, you can read any address that is mapped, and you can also write to any address that is mapped into a page with read-write permissions.
In practice, the OS gives a process memory in chunks (pages), typically 8 K (but this is OS-dependent). The C library then manages these pages, maintains lists of what is free and what is allocated, and hands out addresses within these blocks when asked via malloc.
So when you get a pointer back from malloc(), you are pointing into an area within a read-writable page. That area may contain garbage, other malloc'd memory, the memory used for stack variables, or even the memory the C library uses to manage its free/allocated lists!
So you can imagine that writing to addresses beyond the range you have malloc'ed can really cause problems:
Corruption of other malloc'ed data
Corruption of stack variables, or of the call stack itself, causing crashes when a function returns
Corruption of the C library's malloc/free management memory, causing crashes when malloc() or free() are called
All of which are a real pain to debug, because the crash usually occurs much later than the corruption.
Only when you read or write from/to an address that does not correspond to a mapped page will you get a crash - e.g. reading from address 0x0 (NULL).
malloc, free and raw pointers are very fragile in C (and to a slightly lesser degree in C++), and it is very easy to shoot yourself in the foot accidentally.
There are many third-party tools for memory checking which wrap each memory allocation/free/access with checking code. They do tend to slow your program down, depending on how much checking is applied.
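One such checker ships with GCC and Clang: AddressSanitizer. A hedged sketch of the kind of off-by-one it catches (compile with gcc -fsanitize=address -g):

#include <stdlib.h>

int main(void)
{
    int *p = malloc(2 * sizeof *p);
    if (!p)
        return 1;
    p[2] = 13;   /* one element past the allocation: ASan reports a heap-buffer-overflow here */
    free(p);
    return 0;
}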
Think of memory as being a great big blackboard divided into little squares. Writing to a memory location is equivalent to erasing a square and writing a new value there. The purpose of malloc generally isn't to bring memory (blackboard squares) into existence; rather, it's to identify an area of memory (a group of squares) that's not being used for anything else, and to take some action ensuring it won't be used for anything else until further notice. Historically, it was pretty common for microprocessors to expose all of the system's memory to an application. A piece of code `Foo` could in theory pick an arbitrary address and store its data there, but with a couple of major caveats:
Some other code `Bar` might have previously stored something there with the expectation that it would remain. If `Bar` reads that location expecting to get back what it wrote, it will erroneously interpret the value written by `Foo` as its own. For example, if `Bar` had stored the number of widgets that were received (23), and `Foo` stored the value 57, the earlier code would then believe it had received 57 widgets.
If `Foo` expects the data it writes to remain for any significant length of time, its data might get overwritten by some other code (basically the flip side of the above).
Newer systems include more monitoring to keep track of which processes own which areas of memory, and they kill off processes that access memory they don't own. In many such systems, each process starts with a small blackboard, and if it attempts to malloc more squares than are available, it can be given new chunks of blackboard area as needed. Nonetheless, there will often be some blackboard area available to a process that hasn't yet been reserved for any particular purpose. Code could in theory use such areas to store information without bothering to allocate them first, and such code would work if nothing happened to use the memory for any other purpose - but there is no guarantee that such memory areas won't be used for something else at some unexpected time.
Usually malloc will allocate more memory than you require, for alignment purposes, and because the process really does have read/write access to the whole heap region. So reading a few bytes outside of the allocated region seldom triggers any errors.
But you still should not do it. Since the memory you're writing to can be regarded as unoccupied, or may in fact be occupied by something else, anything can happen: e.g. the 2nd and 3rd key/value pairs may become garbage later, or some unrelated but vital function may crash due to the invalid data you've stomped onto its malloc'ed memory.
(Also, either use char[≥4] as the type of key or malloc the keys, because if a key is unfortunately stored on the stack it will become invalid later.)
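A minimal sketch of that second suggestion - copying each key into heap storage so it outlives the caller's buffer (copy_key is a made-up helper; POSIX strdup does the same job):

#include <stdlib.h>
#include <string.h>

/* Duplicate a key into heap storage; the caller must free() the result. */
static char *copy_key(const char *src)
{
    size_t len = strlen(src) + 1;   /* include the terminating '\0' */
    char *dst = malloc(len);
    if (dst)
        memcpy(dst, src, len);
    return dst;
}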

Why should I use malloc() when "char bigchar[ 1u << 31 - 1 ];" works just fine?

What's the advantage of using malloc (besides the NULL return on failure) over static arrays? The following program will eat up all my RAM and start filling swap only if the loops are uncommented. It does not crash.
...
#include <stdio.h>

unsigned int bigint[ 1u << 29 - 1 ];
unsigned char bigchar[ 1u << 31 - 1 ];

int main (int argc, char **argv) {
    int i;
    /* for (i = 0; i < 1u << 29 - 1; i++) bigint[i] = i; */
    /* for (i = 0; i < 1u << 31 - 1; i++) bigchar[i] = i & 0xFF; */
    getchar();
    return 0;
}
...
After some trial and error I found the above is the largest static array allowed on my 32-bit Intel machine with GCC 4.3. Is this a standard limit, a compiler limit, or a machine limit? Apparently I can have as many of them as I want. It will segfault, but only if I ask for (and try to use) more than malloc would give me anyway.
Is there a way to determine if a static array was actually allocated and safe to use?
EDIT: I'm interested in why malloc is used to manage the heap instead of letting the virtual memory system handle it. Apparently I can size an array to many times the size I think I'll need and the virtual memory system will only keep in ram what is needed. If I never write to e.g. the end (or beginning) of these huge arrays then the program doesn't use the physical memory. Furthermore, if I can write to every location then what does malloc do besides increment a pointer in the heap or search around previous allocations in the same process?
Editor's note: 1 << 31 causes undefined behaviour if int is 32-bit, so I have modified the question to read 1u. The intent of the question is to ask about allocating large static buffers.
Well, for two reasons, really:
Portability, since some systems won't do the virtual memory management for you.
You'll inevitably need to divide this array into smaller chunks for it to be useful, then keep track of all the chunks, and then, as you eventually start "freeing" chunks of the array you no longer require, you'll hit the problem of memory fragmentation.
All in all, you'll end up implementing a lot of memory-management functionality (in fact, pretty much reimplementing malloc) without the benefit of portability.
Hence the reasons:
Code portability via memory-management encapsulation and standardisation.
Personal productivity enhancement by way of code re-use.
Please see:
malloc() and the C/C++ heap
Should a list of objects be stored on the heap or stack?
C++ Which is faster: Stack allocation or Heap allocation
Proper stack and heap usage in C++?
About C/C++ stack allocation
Stack,Static and Heap in C++
Of Memory Management, Heap Corruption, and C++
new on stack instead of heap (like alloca vs malloc)
With malloc you can grow and shrink your array: it becomes dynamic, so you can allocate exactly what you need.
This is called custom memory management, I guess.
You can do that, but you'll have to manage that chunk of memory yourself: you'd end up writing your own malloc() working over this chunk, as in the sketch below.
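To make that concrete, here is a hedged sketch of the simplest such hand-rolled allocator - a bump allocator over a static arena (the names and the 1 MB size are invented):

#include <stddef.h>

static unsigned char arena[1 << 20];   /* 1 MB arena, arbitrary size */
static size_t arena_used;

/* Hand out the next n bytes of the arena, 16-byte aligned.
   There is no free(): reclaiming means resetting arena_used to 0. */
static void *arena_alloc(size_t n)
{
    void *p;
    n = (n + 15u) & ~(size_t)15u;      /* round the request up for alignment */
    if (n > sizeof arena - arena_used)
        return NULL;                   /* arena exhausted */
    p = arena + arena_used;
    arena_used += n;
    return p;
}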
Regarding:
"After some trial and error I found the above is the largest static array allowed on my 32-bit Intel machine with GCC 4.3. Is this a standard limit, a compiler limit, or a machine limit?"
One upper bound will depend on how the 4GB (32-bit) virtual address space is partitioned between user-space and kernel-space. For Linux, I believe the most common partitioning scheme has a 3 GB range of addresses for user-space and a 1 GB range of addresses for kernel-space. The partitioning is configurable at kernel build-time, 2GB/2GB and 1GB/3GB splits are also in use. When the executable is loaded, virtual address space must be allocated for every object regardless of whether real memory is allocated to back it up.
You may be able to allocate that gigantic array in one context, but not others. For example, if your array is a member of a struct and you wish to pass the struct around. Some environments have a 32K limit on struct size.
As previously mentioned, you can also resize your memory to use exactly what you need. It's important in performance-critical contexts to not be paging out to virtual memory if it can be avoided.
There is no way to free a stack allocation other than going out of scope. So when you use global allocation and the VM has to give you real, hard memory to back it, that memory is allocated and stays there until your program exits. This means any process only grows its virtual memory use (functions' local stack allocations are "freed" when they return).
You cannot keep stack memory once it goes out of the function's scope; it is always freed. So you must know at compile time how much memory you will use.
Which then boils down to how many int foo[1<<29] arrays you can have. The first one takes up most of the memory on 32-bit (say, let's lie, it starts at 0x000000), so the second would resolve to 0xffffffff or thereabouts, and a third would resolve to something that 32-bit pointers cannot express. (Remember that stack reservations are resolved partly at compile time and partly at run time, via offsets - how far the stack offset is pushed when you allocate this or that variable.)
So the answer is pretty much: once you have int foo[1<<29], you can't have any reasonable depth of function calls with other local stack variables any more.
You really should avoid doing this unless you know what you're doing. Try to request only as much memory as you need. Even if the excess isn't being used or getting in the way of other programs, it can mess up the process itself. There are two reasons for this. First, on certain systems, particularly 32-bit ones, it can cause the address space to be exhausted prematurely in rare circumstances. Additionally, many kernels have some kind of per-process limit on reserved/virtual/not-in-use memory, and the kernel can kill the process if it asks to reserve memory beyond this limit at run time. I've seen programs that either crashed or exited due to a failed malloc because they reserved GBs of memory while only using a few MB.
