Arrays of which size require malloc (or global assignment)? - c

While taking my first steps with C, I quickly noticed that int array[big number] causes my programs to crash when called inside a function. Not quite as quickly, I discovered that I can prevent this from happening by defining the array with global scope (outside the functions) or using malloc.
My question is:
Starting at which size is it necessary to use one of the above methods to make sure my programs won't crash?
I mean, is it safe to use just, e.g., int i; for counters and int chars[256]; for small arrays or should I just use malloc for all local variables?

You should understand what the difference is between int chars[256] in a function and using malloc().
In short, the former places the entire array on the stack. The latter allocates the memory you requested from the heap. Generally speaking, the heap is much larger than the stack, but the size of each can be adjusted.
Another key difference is that a variable allocated on the stack is technically gone after you return from the function. (Oh, your program may behave as though it's not gone for a bit if you continue to access that array, but ho ho ho, danger lurks.) A hunk of memory allocated with malloc will remain allocated until you explicitly free it or the program exits.
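A minimal sketch of that lifetime difference (the function names here are just for illustration):
#include <stdlib.h>

/* BROKEN: the array lives on the stack and is gone after return. */
int *bad(void)
{
    int arr[256];
    arr[0] = 42;
    return arr;            /* dangling pointer */
}

/* OK: heap memory stays valid until the caller free()s it. */
int *good(void)
{
    int *arr = malloc(256 * sizeof *arr);
    if (arr != NULL)
        arr[0] = 42;
    return arr;            /* caller must free() this eventually */
}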

You should use malloc for your dynamic memory allocation. For statically sized arrays (or any other object) within functions, if the memory required is too big you will quickly get a segmentation fault. I don't think a 'safe limit' can be defined; it's probably implementation-specific, and other factors come into play too, like the current stack and the objects within it created by callers of the current function. I would be tempted to say that anything below the page size (usually 4 KB) should be safe as long as there is no recursion involved, but I do not think there are such guarantees.

It depends. If you have some guarantee that a line will never be longer than 100 ... 1000 characters, you can get away with a fixed-size buffer. If you don't: you don't. There is a difference between reading in a neat x KB config file and an x GB XML file (with no CR/LF). It depends.
The next choice is: do you want your program to die gracefully when the input exceeds the buffer? That is purely a design choice.
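A minimal sketch of the "die gracefully" choice, assuming line-oriented input and an arbitrary 256-byte cap:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[256];        /* fixed-size buffer: fine if lines are guaranteed short */
    if (fgets(line, sizeof line, stdin) == NULL)
        return EXIT_FAILURE;
    if (strchr(line, '\n') == NULL && !feof(stdin)) {
        /* the line was longer than the buffer: bail out instead of misbehaving */
        fprintf(stderr, "input line exceeds %zu bytes\n", sizeof line - 1);
        return EXIT_FAILURE;
    }
    printf("read: %s", line);
    return 0;
}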

Related

Growing an array on the stack

This is my problem in essence. In the life of a function, I generate some integers, then use the array of integers in an algorithm that is also part of the same function. The array of integers will only be used within the function, so naturally it makes sense to store the array on the stack.
The problem is I don't know the size of the array until I'm finished generating all the integers.
I know how to allocate a fixed-size and a variable-sized array on the stack. However, I do not know how to grow an array on the stack, and that seems like the best way to solve my problem. I'm fairly certain this is possible to do in assembly: you just increment the stack pointer and store an int for each int generated, so the array of ints would be at the end of the stack frame. Is this possible to do in C, though?
I would disagree with your assertion that "naturally it makes sense to store the array on the stack". Stack memory is really designed for cases where you know the size at compile time. I would argue that dynamic memory is the way to go here.
C doesn't define what the "stack" is. It only has static, automatic and dynamic allocations. Static and automatic allocations are handled by the compiler, and only dynamic allocation puts the controls in your hands. Thus, if you want to manually deallocate an object and allocate a bigger one, you must use dynamic allocation.
Don't use dynamic arrays on the stack (compare Why is the use of alloca() not considered good practice?); better to allocate memory from the heap using malloc and resize it using realloc.
Never Use alloca()
IMHO this point hasn't been made well enough in the standard references.
One rule of thumb is:
If you're not prepared to statically allocate the maximum possible size as a fixed-length C array, then you shouldn't do it dynamically with alloca() either.
Why? The reason you're trying to avoid malloc() is performance.
alloca() will be slower and won't work in any circumstance where static allocation would have failed. It's generally less likely to succeed than malloc() too.
One thing is sure. Statically allocating the maximum will outdo both malloc() and alloca().
Static allocation is typically damn near a no-op. Most systems will advance the stack pointer for the function call anyway. There's no appreciable difference for how far.
So what you're telling me is you care about performance but want to hold back on a no-op solution? Think about why you feel like that.
The overwhelming likelihood is you're concerned about the size allocated.
But as explained it's free and it gets taken back. What's the worry?
If the worry is "I don't have a maximum, or I don't know if it will overflow the stack", then you shouldn't be using alloca(), because you don't have a maximum and don't know whether it will overflow the stack.
If you do have a maximum and know it isn't going to blow the stack then statically allocate the maximum and go home. It's a free lunch - remember?
That makes alloca() either wrong or sub-optimal.
Every time you use alloca() you're either wasting your time or coding in one of the difficult-to-test-for arbitrary scaling ceilings that sleep quietly until things really matter then f**k up someone's day.
Don't.
PS: If you need a big 'workspace' but the malloc()/free() overhead is a bottleneck (for example, one called repeatedly in a big loop), then consider allocating the workspace outside the loop and carrying it from iteration to iteration. You may need to reallocate the workspace if you hit a 'big' case, but it's often possible to divide the number of allocations by 100 or even 1000.
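A sketch of that workspace pattern (the sizes and the per-iteration work here are hypothetical):
#include <stdlib.h>

void run(size_t iterations)
{
    size_t cap = 64 * 1024;            /* initial workspace: an assumed-typical 64 KiB */
    char *workspace = malloc(cap);
    if (workspace == NULL)
        return;
    for (size_t i = 0; i < iterations; i++) {
        size_t need = 4096;            /* whatever this iteration actually requires */
        if (need > cap) {              /* only the rare 'big' case reallocates */
            char *tmp = realloc(workspace, need);
            if (tmp == NULL)
                break;
            workspace = tmp;
            cap = need;
        }
        /* ... do the iteration's work in workspace ... */
    }
    free(workspace);                   /* one allocation instead of thousands */
}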
Footnote:
There must be some theoretical algorithm where a() calls b(), and where a() requires a massive environment but b() doesn't, and vice versa.
In that event there could be some kind of freaky play-off where the stack overflow is prevented by alloca(). I have never heard of or seen such an algorithm. Plausible specimens will be gratefully received!
The innards of the C compiler require stack sizes to be fixed or calculable at compile time. It's been a while since I used C (I'm now a C++ convert) and I don't know exactly why this is. http://gribblelab.org/CBootcamp/7_Memory_Stack_vs_Heap.html provides a useful comparison of the pros and cons of the two approaches.
I appreciate your assembly-code analogy, but C is largely managed, if that makes any sense, by the operating system, which imposes/provides the task, process and stack notions.
In order to address your issue, dynamic memory allocation looks ideal.
int *a = malloc(sizeof(int));
and dereference it to store the value.
Each time a new integer needs to be added to the existing list of integers:
int *temp = realloc(a, sizeof(int) * (n + 1)); /* n + 1 = new total number of elements */
if (temp != NULL)
    a = temp;
Once done using this memory, free() it.
Is there an upper limit on the size? If you can impose one, so the size is at most a few tens of KiB, then yes alloca is appropriate (especially if this is a leaf function, not one calling other functions that might also allocate non-tiny arrays this way).
Or since this is C, not C++, use a variable-length array like int foo[n];.
But always sanity-check your size, otherwise it's a stack-clash vulnerability waiting to happen. (Where a huge allocation moves the stack pointer so far that it ends up in the middle of another memory region, where other things get overwritten by local variables and return addresses.) Some distros enable hardening options that make GCC generate code to touch every page in between when moving the stack pointer by more than a page.
It's usually not worth it to check the size and use alloca for small sizes, malloc for large, since you'd also need another check at the end of your function to call free only if the size was large. It might give a speedup, but it makes your code more complicated and more likely to get broken during maintenance if future editors don't notice that the memory is only sometimes malloced. So only consider a dual strategy if profiling shows it actually matters, and you care about performance more than simplicity / human-readability / maintainability for this particular project.
A size check for an upper limit (else log an error and exit) is more reasonable, but then you have to choose an upper limit beyond which your program will intentionally bail out, even though there's plenty of RAM you're choosing not to use. If there is a reasonable limit where you can be pretty sure something's gone wrong, like the input being intentionally malicious from an exploit, then great, if(size>limit) error(); int arr[size];.
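A minimal sketch of that check-then-VLA pattern (the 4096-element cap, about 16 KiB of ints, is an arbitrary example):
#include <stdio.h>
#include <stdlib.h>

void process(size_t n)
{
    if (n == 0 || n > 4096) {          /* sanity-check before touching the stack */
        fprintf(stderr, "unreasonable size %zu\n", n);
        exit(EXIT_FAILURE);
    }
    int arr[n];                        /* C99 VLA: automatic storage, bounded above */
    for (size_t i = 0; i < n; i++)
        arr[i] = (int)i;
    /* ... use arr; it is released automatically on return ... */
}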
If neither of those conditions can be satisfied, your use case is not appropriate for C automatic storage (stack memory), because it might need to be large. Just use dynamic allocation, even if you'd rather avoid malloc.
On Windows x86/x64, the default user-space stack size is 1 MiB, I think. On x86-64 Linux it's 8 MiB (ulimit -s). Thread stacks are allocated with the same size. But remember, your function will be part of a chain of function calls, so if every function used a large fraction of the total size, you'd have a problem if they called each other. And any stack memory you dirty won't get handed back to the OS even after the function returns, unlike malloc/free, where a large allocation can give the memory back instead of leaving it on the free list.
Kernel thread stacks are much smaller, like 16 KiB total for x86-64 Linux, so you never want VLAs or alloca in kernel code, except maybe for a tiny max size, like up to 16 or maybe 32 bytes, not large compared to the size of the pointer that would be needed to store a kmalloc return value.

why is array size limited when declared at compile time?

for example I can do
int *arr;
arr = (int *)malloc(sizeof(int) * 1048575);
but I cannot do this without the program crashing:
int arr[1048575];
why is this so?
Assuming arr is a local variable, declaring it as an array uses memory from the (relatively limited) stack, while malloc() uses memory from the (comparatively limitless) heap.
If you're allocating these as local variables in functions (which is the only place you could have the pointer declaration immediately followed by a malloc call), then the difference is that malloc will allocate a chunk of memory from the heap and give you its address, while directly doing int arr[1048575]; will attempt to allocate the memory on the stack. The stack has much less space available to it.
The stack is limited in size for two main reasons that I'm aware of:
Traditional imperative programming makes very little use of recursion, so deep recursion (and heavy stack growth) is "probably" a sign of infinite recursion, and hence a bug that's going to kill the process. It's therefore best if it is caught before the process consumes the gigabytes of virtual memory (on a 32 bit architecture) that will cause the process to exhaust its address space (at which point the machine is probably using far more virtual memory than it actually has RAM, and is therefore operating extremely slowly).
Multi-threaded programs need multiple stacks. Therefore, the runtime system needs to know that the stack will never grow beyond a certain bound, so it can put another stack after that bound if a new thread is created.
When you declare an array, you are placing it on the stack.
When you call malloc(), the memory is taken from the heap.
The stack is usually more limited compared to the heap, and is usually transient (but that depends on how often you enter and exit the function that this array is declared in).
For such a large chunk of memory (maybe not by today's standards?), it is good practice to malloc it, assuming you want the array to stick around for a while.
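A sketch of the heap version with the failure handled (same element count as the question):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1048575;                    /* ~4 MB of ints: too big for most default stacks */
    int *arr = malloc(n * sizeof *arr);    /* heap: failure is detectable, not a crash */
    if (arr == NULL) {
        perror("malloc");
        return EXIT_FAILURE;
    }
    arr[0] = 1;
    arr[n - 1] = 2;
    free(arr);                             /* heap memory must be freed explicitly */
    return 0;
}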

Is there any problem by accessing memory space without allocation in c language [duplicate]

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p, i;
    p = (int *)malloc(sizeof(int));   /* space for ONE int only */
    printf("Enter:");
    scanf("%d", p);
    for (i = 1; i < 3; i++)
    {
        printf("Enter");
        scanf("%d", p + i);           /* writes beyond the single allocated int */
    }
    for (i = 0; i < 3; i++)
    {
        printf("No:%d\n", *(p + i));
    }
    getchar();                        /* was getch(), which is non-standard */
    return 0;
}
In this C program, memory is accessed without being allocated. The program works. Will any problem arise from accessing memory without allocation? If yes, then what is the solution for storing a collection of integer data whose size is not known in advance?
Yes, it leads to undefined behavior. The program is working here purely because of luck and may crash at any time. The solution is to allocate the memory using malloc. For example, if you want to allocate memory for count elements, you can use int* p = (int*)malloc(sizeof(int)*count);. From here on you can access p as an array of count elements.
It likely works because the memory immediately after *p is both accessible (allocated in the VM system and has the right bits set), and not in use for anything else. This could all change if malloc finds you some bytes immediately before an inaccessible page; or if you move to a malloc implementation that uses the trailing space for bookkeeping.
So it's not really safe.
Accessing unallocated memory leads to undefined behavior. Exactly what happens will depend on a variety of conditions. It may "work" now, but you could see problems when you extend your program.
If you don't know how many items you want to read, there are a couple of strategies to use.
Use realloc to grow the buffer as you need more space (sketched below)
Use a linked list instead of an array
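A sketch of the realloc option, reading whitespace-separated integers until EOF (the doubling growth factor is a common choice, not a requirement):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t cap = 4, count = 0;
    int *data = malloc(cap * sizeof *data);
    if (data == NULL)
        return EXIT_FAILURE;
    int value;
    while (scanf("%d", &value) == 1) {
        if (count == cap) {                          /* grow geometrically when full */
            int *tmp = realloc(data, 2 * cap * sizeof *tmp);
            if (tmp == NULL) {
                free(data);
                return EXIT_FAILURE;
            }
            data = tmp;
            cap *= 2;
        }
        data[count++] = value;
    }
    for (size_t i = 0; i < count; i++)
        printf("No:%d\n", data[i]);
    free(data);
    return 0;
}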
Most definitely yes. It's just pure luck that you can access it without allocating. malloc does not know what memory you are using, and that can result in serious problems.
Hence it's a necessity (I don't want to use the word "better" here) to allocate memory according to your needs and then use it.
Some problems which could result are:
Segmentation fault
Memory corruption
and it may result in giving you headache for hours when the behavior is undefined.
For example, the location of a crash may not be the exact place where the problem originated.
The reason this code works is that the kernel never gives you a fraction of the system page size (which is typically 4K). This means the memory after the first sizeof(int) bytes is actually owned by your process, but has not been allocated to you by the second layer of abstraction, which is malloc.
"Segmentation fault" happens when you try and access memory outside the pages allocated to you by the kernel. You won't see it until you step out of your page.
The problem that may arise here is that if you use malloc again, you may receive a pointer to memory you already used without malloc being aware of it. This will cause hellish bugs, since you will be changing data used in different contexts without knowing it.
As for your second question, the right way is very program dependent.
If the number of elements can be bounded reasonably, it might be OK to always allocate the same size using a constant defined in your program. This is ALWAYS the secure way (you need to make sure you don't let the user give you more than what you allocated).
If you really have a broad range of array sizes here, you might want to use a linked list, which is built for exactly that.
There are two levels of memory allocation that usually take place. At the operating-system level, you map memory pages into your address space. A page is the basic unit of memory management and is usually something like 1K or 4K bytes (but can be much larger, or as small as 512 bytes, depending upon the system). It is possible to do that mapping yourself by making the appropriate system calls. However, applications generally only do that when they need large blocks of memory.
Standard libraries generally maintain a pool of pages. When you call malloc, the library looks to see if there is available memory in the pool. If so, it returns a block of memory from pages already mapped by the operating system. If not, the library makes the system call to map more pages into the process and adds them to the managed pool.
Mapping and unmapping pages is a rather time consuming process. By using pooling, the library can speed things up significantly.
Invariably, the standard library functions allocate a few bytes in front of the memory returned by malloc and the like, so that they know how much memory is in the block when it is free'd. Many will also add memory at the end of the block for error checking.
When you are doing what you are doing, you could be reading this extra data or you could be reading some data that was mapped to the memory pool by the library.
What you are doing is bad.
If you do not know the number of items in advance, you can use a data structure such as a linked list, where a new entry is created for each new number.
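A minimal sketch of that linked-list idea (the names here are invented for illustration):
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Prepend one number; each entry is malloc'ed only when it is needed. */
struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return head;       /* allocation failed: leave the list unchanged */
    n->value = value;
    n->next = head;
    return n;
}

void free_list(struct node *head)
{
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}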

Why would you ever want to allocate memory on the heap rather than the stack? [duplicate]

Possible Duplicate:
When is it best to use a Stack instead of a Heap and vice versa?
I've read a few of the other questions regarding the heap vs stack, but they seem to focus more on what the heap/stack do rather than why you would use them.
It seems to me that stack allocation would almost always be preferred, since it is quicker (just moving the stack pointer vs looking for free space in the heap), and you don't have to manually free allocated memory when you're done using it. The only reason I can see for using heap allocation is if you wanted to create an object in a function and then use it outside that function's scope, since stack-allocated memory is automatically unallocated after returning from the function.
Are there other reasons for using heap allocation instead of stack allocation that I am not aware of?
There are a few reasons:
The main one is that with heap allocation, you have the most flexible control over the object's lifetime (from malloc/calloc to free);
Stack space is typically a more limited resource than heap space, at least in default configurations;
A failure to allocate heap space can be handled gracefully, whereas running out of stack space is often unrecoverable.
Without the flexible object lifetime, useful data structures such as binary trees and linked lists would be virtually impossible to write.
You want an allocation to live beyond a function invocation
You want to conserve stack space (which is typically limited to a few MBs)
You're working with relocatable memory (Win16, databases, etc.), or want to recover from allocation failures.
Variable length anything. You can fake around this, but your code will be really nasty.
The big one is #1. As soon as you get into any sort of concurrency or IPC, #1 is everywhere. Even most non-trivial single-threaded applications are tricky to devise without some heap allocation. That'd practically be faking a functional language in C/C++.
So I want to make a string. I can make it on the heap or on the stack. Let's try both:
char *heap = malloc(14);
if(heap == NULL)
{
// bad things happened!
}
strcpy(heap, "Hello, world!");
And for the stack:
char stack[] = "Hello, world!";
So now I have these two strings in their respective places. Later, I want to make them longer:
char *tmp = realloc(heap, 20);
if(tmp == NULL)
{
// bad things happened!
}
heap = tmp;
memmove(heap + 13, heap + 7, 7); /* shift "world!" (and its terminator) right */
memcpy(heap + 7, "cruel ", 6);
And for the stack:
// umm... What?
This is only one benefit, and others have mentioned other benefits, but this is a rather nice one. With the heap, we can at least try to make our allocated space larger. With the stack, we're stuck with what we have. If we want room to grow, we have to declare it all up front, and we all know how it stinks to see this:
char username[MAX_BUF_SIZE];
The most obvious rationale for using the heap is when you call a function and need something of unknown length returned. Sometimes the caller may pass a memory block and size to the function, but at other times this is just impractical, especially if the returned stuff is complex (e.g. a collection of different objects with pointers flying around, etc.).
Size limits are a huge dealbreaker in a lot of cases. The stack is usually measured in the low megabytes or even kilobytes (that's for everything on the stack), whereas all modern PCs allow you a few gigabytes of heap. So if you're going to be using a large amount of data, you absolutely need the heap.
Just to add:
you can use alloca to allocate memory on the stack, but again, memory on the stack is limited, and the space exists only during the function's execution.
That does not mean everything should be allocated on the heap. Like all design decisions, this one is also somewhat difficult; a "judicious" combination of both should be used.
Besides manual control of object's lifetime (which you mentioned), the other reasons for using heap would include:
Run-time control over the object's size (both its initial size and its "later" size, during the program's execution).
For example, you can allocate an array of certain size, which is only known at run time.
With the introduction of VLA (Variable Length Arrays) in C99, it became possible to allocate arrays of fixed run-time size without using heap (this is basically a language-level implementation of 'alloca' functionality). However, in other cases you'd still need heap even in C99.
Run-time control over the total number of objects.
For example, when you build a binary tree structure, you can't meaningfully allocate the nodes of the tree on the stack in advance. You have to use the heap to allocate them "on demand" (see the sketch after this answer).
Low-level technical considerations, as limited stack space (others already mentioned that).
When you need a large, say, I/O buffer, even for a short time (inside a single function) it makes more sense to request it from the heap instead of declaring a large automatic array.
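A minimal sketch of the binary-tree point above (an unbalanced insert, just to show the on-demand allocation):
#include <stdlib.h>

struct tree {
    int key;
    struct tree *left, *right;
};

/* Nodes are created on demand; their total number is unknown until run time. */
struct tree *insert(struct tree *root, int key)
{
    if (root == NULL) {
        struct tree *n = malloc(sizeof *n);
        if (n == NULL)
            return NULL;               /* heap exhaustion can be handled gracefully */
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = insert(root->left, key);
    else if (key > root->key)
        root->right = insert(root->right, key);
    return root;
}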
Stack variables (often called 'automatic variables') are best used for things you want to always be the same size, and always be small.
int x;
char foo[32];
These are all stack allocations, and they are fixed at compile time, too.
The best reason for heap allocation is that you can't always know how much space you need. You often only know this once the program is running. You might have an idea of the limits, but you would want to use only the exact amount of space required.
If you had to read in a file that could be anything from 1 KB to 50 MB, you would not do this:
int readdata ( FILE * f ) {
char inputdata[50*1024*1024];
...
return x;
}
That would try to allocate 50 MB on the stack, which would usually fail, as the stack is commonly limited to 256 KB anyway.
The stack and the heap share the same "open" memory space and will eventually meet in the middle if you use the entire segment of memory. Keeping a balance between the space each of them uses keeps the amortized cost of allocating and deallocating memory at a smaller asymptotic value.

What is causing a stack overflow?

You may think that this is a coincidence that the topic of my question is similar to the name of the forum but I actually got here by googling the term "stack overflow".
I use the OPNET network simulator in which I program using C. I think I am having a problem with big array sizes. It seems that I am hitting some sort of memory allocation limitation. It may have to do with OPNET, Windows, my laptop memory or most likely C language. The problem is caused when I try to use nested arrays with a total number of elements coming to several thousand integers. I think I am exceeding an overall memory allocation limit and I am wondering if there is a way to increase this cap.
Here's the exact problem description:
I basically have a routing table. Let's call it routing_tbl[n], with n = 30, meaning I am supporting 30 nodes (routers). Now, for each node in this table, I keep info about many (hundreds of) available paths, in an array called paths[p]. Again, for each path in this array, I keep the list of nodes that belong to it in an array called hops[h]. So, I am using at least n*p*h integers' worth of memory, but this table contains other information as well. In the same function, I am also using another nested array that consumes almost 40,000 integers as well.
As soon as I run my simulation, it quits, complaining about a stack overflow. It works when I reduce the total size of the routing table.
What do you think causes the problem and how can it be solved?
Much appreciated
Ali
It may help if you post some code. Edit the question to include the problem function and the error.
Meanwhile, here's a very generic answer:
The two principal causes of a stack overflow are 1) a recursive function, or 2) the allocation of a large number of local variables.
Recursion
If your function calls itself, like this:
int recurse(int number) {
return (recurse(number));
}
Since local variables and function arguments are stored on the stack, this will fill the stack and cause a stack overflow.
Large local variables
If you try to allocate a large array of local variables then you can overflow the stack in one easy go. A function like this may cause the issue:
void hugeStack (void) {
unsigned long long reallyBig[100000000][1000000000];
...
}
There is quite a detailed answer to this similar question.
Somehow you are using a lot of stack. Possible causes include that you're creating the routing table on the stack, you're passing it on the stack, or else you're generating lots of calls (eg by recursively processing the whole thing).
In the first two cases you should create it on the heap and pass around a pointer to it. In the third case you'll need to rewrite your algorithm in an iterative form.
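For the first two cases, a hedged sketch (the struct layout and names are invented from the question's description):
#include <stdlib.h>

#define NODES 30                        /* from the question: 30 routers */

struct route_entry {
    int num_paths;
    /* ... per-path and per-hop data would go here ... */
};

/* Build the table on the heap once and pass a pointer around. */
struct route_entry *make_table(void)
{
    return calloc(NODES, sizeof(struct route_entry));
}

void simulate(struct route_entry *tbl)
{
    /* tbl is a single pointer: passing it costs a few bytes of stack, not megabytes */
    (void)tbl;
}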
Stack overflows can happen in C when the number of embedded recursive calls is too high. Perhaps you are calling a function from itself too many times?
This error may also be due to allocating too much memory in statically sized local declarations. You can switch to dynamic allocation through malloc() to fix this type of problem.
Is there a reason why you cannot use the debugger on this program?
It depends on where you have declared the variable.
A local variable (i.e. one declared on the stack) is limited by the maximum frame size. This is a limit of the compiler you are using (and can usually be adjusted with compiler flags).
A dynamically allocated object (i.e. one that is on the heap) is limited by the amount of available memory. This is a property of the OS (and can technically be larger than physical memory if you have a smart OS).
Many operating systems dynamically expand the stack as you use more of it. When you start writing to a memory address that's just beyond the stack, the OS assumes your stack has just grown a bit more and allocates it an extra page (usually 4096 bytes on x86 - exactly 1024 4-byte ints).
The problem is, on the x86 (and some other architectures) the stack grows downwards but C arrays grow upwards. This means if you access the start of a large array, you'll be accessing memory that's more than a page away from the edge of the stack.
If you initialise your array to 0 starting from the end of the array (that's right, make a for loop to do it), the errors might go away. If they do, this is indeed the problem.
You might be able to find some OS API functions to force stack allocation, or compiler pragmas/flags. I'm not sure about how this can be done portably, except of course for using malloc() and free()!
You are unlikely to run into a stack overflow with unthreaded compiled C unless you do something particularly egregious, like runaway recursion or a cosmic memory leak. However, your simulator probably has a threading package, which will impose stack-size limits. When you start a new thread, it will allocate a chunk of memory for that thread's stack. Likely, there is a parameter you can set somewhere that establishes the default stack size, or there may be a way to grow the stack dynamically. For example, pthreads has a function pthread_attr_setstacksize() which you call prior to starting a new thread to set its size. Your simulator may or may not be using pthreads. Consult your simulator's reference documentation.
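If the threading package does happen to be pthreads, the call looks like this (the 8 MiB figure is only an example):
#include <pthread.h>

static void *worker(void *arg)
{
    /* ... simulation work with big local storage needs ... */
    return arg;
}

int start_worker(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 8 * 1024 * 1024);  /* request an 8 MiB stack */
    int rc = pthread_create(&tid, &attr, worker, NULL);
    pthread_attr_destroy(&attr);
    if (rc == 0)
        pthread_join(tid, NULL);
    return rc;
}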
