I am programming an embedded device that has 64 MB of SDRAM (C is the programming language).
Is it possible to make a guess (even a rough guess) about the size of the stack on this device?
I am referring to the memory that gets used when we make declarations such as
char s[100];
int t[50];
etc.
For example, will it be more than 50 KB? That is what I mean by a rough guess.
Also, when I have variables inside some function f:
f()
{
    int p;
}
When f() exits, this variable dies, right?
So when I call f2():
void f2()
{
    char t[100];
}
Only the size of a 100-element char array will be added to the stack size, right? The size of int p from the previous function is not counted anymore.
All sorts of guesses can be made :) Most (all?) embedded development environments provide mechanisms for allocating the device's memory regions (read-only, stack, heap, etc.). This is commonly done through linker directive files or C #pragmas placed in a setup source file. Without more information on your development environment, no accurate guess can be made.
In function f(), variable p will exist on the stack. When the function exits, that location on the stack will likely be used for something else.
As for function f2(), you can expect that 100 bytes from the stack will be assigned to t while this function is executing. The size of p will not be considered.
Note that the stack is used for other information as well, so you cannot reliably estimate stack usage without considering other factors. For example, do you expect recursion? The stack also stores function call/return information, which reduces the amount of space you have for local (stack) variables.
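For instance, here is a contrived sketch (the function and the 100-byte buffer are made up, not from your question) of how recursion multiplies the per-call cost: every active call keeps its own copy of buf on the stack.
int sum_depth(int depth)
{
    char buf[100];                        /* roughly 100 bytes of stack per active call */
    buf[0] = (char)depth;
    if (depth == 0)
        return buf[0];
    return buf[0] + sum_depth(depth - 1); /* depth + 1 frames are alive at the deepest point */
}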
Lastly, I've worked with devices operating with less than 1KB of stack, so assumptions should be made carefully.
Your question boils down to "guessing the stack size."
Why guess when you can know it exactly? It does not fall from the sky! :)
For an embedded programmer, the stack size is always in your own hands: you control it through the linker command file that you submit to the linker/loader.
It looks something like this:
Linker.cmd
MEMORY
{
    ...
    SARAM2 (RWIX) : origin = 0x039000, length = 0x010000   /* 64 KB */
    ...
}

SECTIONS
{
    ...
    .stack    > SARAM2
    .sysstack > SARAM2
    ...
}
So it is clear that you can set your own stack size, provided "the stack size is limited to the stack pointer's bound."
It depends entirely on your stack pointer's range: if your stack pointer is 16-bit, your stack size is limited to 2^16 bytes, which is 64 KB.
For instance, moving away from firmware programming, on a standard Linux machine, if you type
ulimit -a
you will see your current stack size limit, and it is extendable up to the boundary the stack pointer can address.
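If you prefer to query that limit from C rather than from the shell, a minimal sketch using the POSIX getrlimit() call (Linux/Unix only) could look like this:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {    /* soft limit for the main thread's stack */
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("stack limit: unlimited\n");
        else
            printf("soft stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    }
    return 0;
}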
BTW, these may further help you:
Direction of Stack Growth
When Does Stack Really Over Flow
I also suggest that it is not a bad idea to monitor your stack usage, in other words to track the stack pointer value so you know what your stack status is; for an embedded programmer in particular, a stack overflow can cause severe damage :)
There is a script in the Linux kernel sources that can help you find the functions that heavily allocate on the stack. The script is named checkstack.pl and is located in the scripts directory.
Here is a sample invocation:
$ objdump -d $BUILDDIR/target-isns | perl checkstack.pl x86_64
0x00403799 configfs_handle_events []: 4384
0x004041d6 isns_attr_query []: 4176
0x00404646 isns_target_register []: 4176
The script displays the functions that consume the most space on the stack. It can help you make a guess at the stack size required, and it also helps you determine which functions should be optimized with regard to their stack usage.
You may find that your linker is capable of performing stack requirement analysis, normally enabled by a command-line option that controls the map file content. If you add information about your toolchain, you will get better advice.
What such analysis gives you is the worst case stack usage and the call path involved for any particular function - for the process total you need to look at the stack usage for main() and any other thread entry points (each of which will have a separate stack). You may need also to consider ISRs which on some targets may use the interrupted process or thread stack, on others there may be a separate interrupt stack.
What linker stack analysis cannot determine is stack usage in the case of recursion or call through function pointers, both of which depend on the prevailing runtime conditions that cannot be determined statically.
So, I was writing a program that reports the number of contiguous subarrays whose sum is equal to a certain value.
I have written the code, but when I try to run it in Visual C++ 2010 Express, I get this error:
Unhandled exception at 0x010018e7 in test 9.exe: 0xC00000FD: Stack overflow.
I have searched for a solution on this website and others, but I can't find anything that helps me fix the error in this code (those cases use recursion, while I don't).
I would be really grateful if you would kindly explain what causes this error in my code and how to fix it. Any help would be appreciated. Thank you.
Here is my code:
#include <stdio.h>

int main(void)
{
    int n, k, a = 0, t = 0;
    unsigned long int i[1000000];
    int v1, v2 = 0, v3;

    scanf("%d %d", &n, &k);
    for (v3 = 0; v3 < n; v3++)
    {
        scanf("%lu", &i[v3]);    /* %lu for unsigned long int */
    }
    do
    {
        for (v1 = v2; v1 < n; v1++)
        {
            t = i[v1] + t;
            if (t == k)
            {
                a++;
                break;
            }
        }
        t = 0;
        v2++;
    } while (v2 != n);
    printf("%d", a);             /* a is an int */
    return 0;
}
Either move
unsigned long int i[1000000];
outside of main, thus making it a global variable (not an automatic one), or better yet, use some C dynamic heap allocation:
// inside main (needs <stdlib.h> for calloc, exit, and EXIT_FAILURE)
unsigned long int *i = calloc(1000000, sizeof(unsigned long int));
if (!i) { perror("calloc"); exit(EXIT_FAILURE); }
BTW, for such a pointer I would use (for readability reasons) some name other than i. And near the end of main you had better call free(i); to avoid memory leaks.
Also, you could move these two lines after the read of n and use calloc(n, sizeof(unsigned long int)) instead of calloc(1000000, sizeof(unsigned long int)); then you can handle arrays bigger than a million elements if your computer and system provide enough resources for that.
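For instance, here is a minimal sketch of that reordering (the name values is mine, chosen instead of i):
// at the top of main (needs <stdio.h> and <stdlib.h>)
int n, k;
if (scanf("%d %d", &n, &k) != 2 || n <= 0)
    return 1;                                           /* bail out on bad input */
unsigned long int *values = calloc(n, sizeof *values);  /* sized from the actual input */
if (!values) { perror("calloc"); exit(EXIT_FAILURE); }
/* ... fill and scan values as before ... */
free(values);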
Your initial code declares an automatic variable, which goes into the call frame of main on your call stack (which has a limited size, typically one megabyte or a few of them). On some operating systems there is a way to increase the size of that call stack (in an OS-specific way). BTW, each thread has its own call stack.
As a rule of thumb, your C functions (including main) should avoid having call frames bigger than a few kilobytes. With the GCC compiler, you could invoke it with gcc -Wall -Wextra -Wframe-larger-than=1024 -g to get useful warnings and debug information.
Read the virtual address space wikipage. It has a nice picture worth many words. Later, find the way to query, on your operating system, the virtual address space of your process (on Linux, use proc(5) like cat /proc/$$/maps etc...). In practice, your virtual address space is likely to contain many segments (perhaps a dozen, sometimes thousands). Often, the dynamic linker or some other part of your program (or of your C standard library) uses memory-mapped files. The standard C heap (managed by malloc etc) may be organized in several segments.
If you want to understand more about virtual address space, take time to read a good book, like: Operating systems, three easy pieces (freely downloadable).
If you want to query the organization of the virtual address space in some process, you need to find an operating-system specific way to do that (on Linux, for a process of pid 1234, use /proc/1234/maps or /proc/self/maps from inside the process).
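As a small Linux-only sketch, a process can dump its own memory map (including the [stack] segment) like this:
#include <stdio.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/self/maps", "r");    /* Linux-specific */
    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);                    /* one mapping (segment) per line */
    fclose(f);
    return 0;
}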
Memory is laid out quite differently from the simple four-segment model (which dates from long ago). The answer to the question can be generalized this way: the system handles global or dynamically allocated memory differently from local variables. Whereas the memory for local variables is tightly limited in size, dynamic allocations and global variables are not constrained so tightly.
In modern systems there is the concept of a virtual address space. The process running your program gets a chunk of it, and that portion of memory is responsible for holding what the program needs.
For dynamic allocation, the process can request more memory, and, depending on the other processes and available resources, the new request is serviced. For a dynamic or global array there is no process-wise limit of this kind (of course there is a system-wise limit; it can't request all the memory). That is why dynamic allocation or using a global variable won't make the process run out of memory the way the limited automatic-lifetime (stack) memory it originally had for local variables can.
Basically, you can check your stack size,
for example on Linux: ulimit -s (reported in kilobytes),
and then decide how to structure your code with that in mind.
As a rule, I would never allocate a big piece of memory on the stack, because unless you know exactly the depth of your function calls and their stack use, it is hard to control the precise amount of stack allocated at run time.
I am reading about memory allocation and activation records, and I have some doubts. Can anyone make the following crystal clear?
A) My first doubt: are activation records created on the stack or on the heap in C?
B) These are a few lines from an abstract I am referring to:
Even though memory on stack area is created during run time - the amount of memory (activation record size) is determined at compile time. Static and global memory area is compile time determined and this is part of the binary. At run time, we cannot change this. Only memory area freely available for the process to change during runtime is heap. At compile time compiler only reserves the stack space for activation record. This gets used (allocated on actual memory) only during program run. Only DATA segment part of the program like static variables, string literals etc. are allocated during compile time. For heap area, how much memory to be allocated is also determined at run time.
Can anyone please elaborate on these lines, as I am unable to understand them?
I am sure the explanation would be of great help to me.
As a quick answer, I don't even really know what an activation record is. The rest of the quote has very poor English and is quite misleading.
Honestly, the abstract talks in absolutes when in reality things are not absolute at all. You do define a main stack at compile time, yes (though you can create more stacks at runtime as well).
Yes, when you allocate memory you usually create a pointer to keep track of it, but where you place that pointer is completely up to you. It can be on the stack, in global memory, in heap memory from another allocation, or you can just leak the memory and not store it anywhere at all if you wish. Perhaps this is what is meant by an activation record?
Or perhaps, it means that when dynamic memory is created, somewhere in memory, there has to be some sort of information that keeps track of used and unused memory. For many allocators, this is a list of pointers stored somewhere in the allocated memory, though others store it in a different piece of memory and some could even place that on the stack. It all depends on the needs of the memory system.
Finally, where dynamic memory is allocated from can vary as well. It can come from a call to the OS, though in some cases, it can also just be overlayed onto existing global (or even stack) memory - which is not uncommon in embedded programming.
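As an illustration of that last point, here is a toy sketch (the names are mine) of a "heap" overlaid on ordinary global memory, as is common in embedded code: a bump allocator carved out of a static array, with no free().
#include <stddef.h>

static unsigned char pool[4096];        /* the "heap" is just a global array */
static size_t pool_used;

void *pool_alloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7u;         /* keep 8-byte alignment */
    if (n > sizeof pool - pool_used)
        return NULL;                    /* out of pool space */
    void *p = &pool[pool_used];
    pool_used += n;
    return p;
}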
As you can see, this abstract is not even close to what dynamic memory represents.
Additional info:
Many are jumping all over me stating that 'C' has no stack in the standard. Correct. That said, how many people have truly coded in C without one? I'll leave that alone for now.
Defined memory, as you call it, is anything declared with the 'static' keyword within a function or any variable declared outside of a function without the 'extern' keyword in front of it. This is memory that the compiler knows about and can reserve space for without any additional help.
Allocated memory - is not a good term as defined memory can also be considered allocated. Instead, use the term dynamic memory. This is memory that you allocate from a heap at run-time. An example:
#include <stdlib.h>   /* for malloc() and free() */

char *foo;
int my_value;

int main(void)
{
    foo = malloc(10 * sizeof(char));
    // Do stuff with foo
    free(foo);
    return 0;
}
foo is "defined" as you say as a pointer. If nothing else were done, it would only reserve that much memory, but when the malloc is reached in main(), it now points to at least 10 bytes of dynamic memory as well. Once the free is reached, that memory is now made available to the program for other uses. It's allocated size is 'dynamic'. Compare that to my_value which will always be the size of an int and nothing else.
In C (given how it is almost universally implemented*), an activation record is exactly the same thing as a stack frame, which is the same thing as a call frame. They are always created on the stack.
The stack segment is a memory area the process gets "for free" from the OS when it is created. It does not need to malloc or free it. On x86, a machine register (e.g. RSP) points to the end of the in-use region, and stack frames/activation records/call frames are "allocated" by decrementing the pointer in that register by however many bytes are needed. E.g.:
int my_func() {
    int x = 123;
    int y = 234;
    int z = 345;
    ...
    return 1;
}
A non-optimizing C compiler could generate assembly code that keeps those three variables in the stack frame like this:
my_func:
    ; "allocate" 24 bytes of stack space
    sub rsp, 24
    ; initialize the allocated stack memory
    mov qword [rsp],    345   ; z = 345
    mov qword [rsp+8],  234   ; y = 234
    mov qword [rsp+16], 123   ; x = 123
    ...
    ; "free" the allocated stack space
    add rsp, 24
    ; return 1
    mov rax, 1
    ret
In other contexts and languages activation records can be implemented differently. For example using linked lists. But as the language is C and the context is low-level programming I don't think it is useful to discuss that.
In theory, a C99 (or C11) compatible implementation (e.g. a C compiler & C standard library implementation) does not even need (in all cases) a call stack. For example, one could imagine a whole-program compiler (notably for a freestanding C implementation) which would analyze the entire program and decide that stack frames are unneeded (e.g. each local variable could be allocated statically, or fit in a register). Or one could imagine an implementation allocating the call frames as continuation frames (perhaps after CPS transformation by the compiler) elsewhere (e.g. in some "heap"), using techniques similar to those described in Appel's old book Compiling with Continuations (describing an SML/NJ compiler).
(remember that a programming language is a specification -not some software-, often written in English, perhaps with additional formalization, in some technical report or standard document. AFAIK, the C99 or C11 standards do not even mention any stack or activation record. But in practice, most C implementations are made of a compiler and a standard library implementation.)
In practice, activation records are call frames (for C, they are synonyms; things are more complex with nested functions) and are allocated on a hardware-assisted call stack on all reasonable C implementations I know of. On Z/Architecture there is no hardware stack pointer register, so it is a convention (some register is dedicated to play the role of the stack pointer).
So look first at the call stack wikipage. It has a nice picture worth many words.
Are activation records created on stack or heap
In practice, they (activation records) are call frames on the call stack (allocated following calling conventions and ABIs). Of course, the layout, slot usage, and size of a call frame are computed at compile time by the compiler.
In practice, a local variable may correspond to some slot inside the call frame. But sometimes the compiler will keep it only in a register, or reuse the same slot (which has a fixed offset in the call frame) for various purposes, e.g. for several local variables in different blocks, etc.
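For illustration (this is my own example, and whether any reuse actually happens is entirely up to the compiler): a and b below are live in disjoint blocks, so an optimizing compiler may give them the same call-frame slot, or keep both purely in registers.
int g(int n)
{
    if (n > 0) {
        int a = n * 2;      /* live only in this block */
        return a + 1;
    } else {
        int b = n * 3;      /* may share a slot (or a register) with a */
        return b - 1;
    }
}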
But most C compilers are optimizing compilers. They are able to inline a function, or sometimes make a tail call to it (then the caller's call frame is reused as, or overwritten by, the callee's call frame), so details are more complex.
See also the question How was C ported to architectures that had no hardware stack? on Retrocomputing.
I'm writing firmware for an Atmel XMEGA microcontroller in C, and I think I have filled up the 4 KB of SRAM. As far as I know, I only have static/global data and local stack variables (I don't use malloc in my code).
I use a local variable to buffer some pixel data. If I increase the buffer to 51 bytes, my display shows strange results; a buffer of 6 bytes works fine. This is why I think my RAM is full and the stack is overwriting something.
Freeing up memory is not my problem, because I can just move some static data into flash and only load it when it's needed. What bothers me is that I might never have discovered that the memory was full.
Is it somehow possible to detect (e.g., by resetting the microcontroller) when the memory fills up, instead of letting it overwrite other data?
It can be very difficult to predict exactly how much stack you'll need (some toolchains can have a go at this if you turn on the right options, but it's only ever a rough guide).
A common way of checking on the state of the stack is to fill it completely with a known value at startup, run the code as hard/long as you can, and then see how much had not been overwritten.
The startup code for your toolchain might even have an option to fill the stack for you.
Unfortunately, although the concepts are very simple: fill the stack with a known value, count the number of those values which remain, the reality of implementing it can require quite a deep understanding of the way your specific tools (particularly the startup code and the linker) work.
Crude ways to check if stack overflow is what's causing your problem are to make all your local arrays 'static' and/or to hugely increase the size of the stack and then see if things work better. These can both be difficult to do on small embedded systems.
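For example, here is a crude before/after sketch of the "make it static" experiment (draw_row and pixel_buf are made-up names): the buffer no longer lives on the stack, so if the symptoms disappear, a stack overflow was the likely culprit (at the cost of reentrancy).
void draw_row(void)
{
    static unsigned char pixel_buf[51];   /* was: unsigned char pixel_buf[51]; on the stack */
    pixel_buf[0] = 0;
    /* ... fill and send pixel_buf to the display ... */
}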
"Is it somehow possible to dected (e.g.
by reseting the microcontroller) when
the memory got filled up instead of
letting it overwrite some other data?"
I suppose that currently you have a memory mapping like (1) below.
When the stack and/or variable space grow too much, they collide and overwrite each other (*).
Another possibility is a memory mapping like (2).
When the stack or variable space exceeds the maximum space, it hits unmapped address space (*).
Depending on the controller (I am not sure about the AVR family), this causes a reset/trap or similar (= what you desired).
[not mapped addr space][      RAM mapped addr space      ][not mapped addr space]
(1)                    [variables --->    *    <--- stack]
(2)                   *[<--- stack         variables --->]*
(arrows indicate the direction of growth as more variable/stack space is used)
Of course it is better to make sure beforehand that RAM is big enough.
Typically the linker is responsible for allocating the memory for code, constants, static data, stacks and heaps. Often you must specify required stack sizes (and available memory) to the linker which will then flag an error if it can't fit everything in.
Note also that if you're dealing with a multithreaded application, then each thread has its own stack, and these are frequently allocated off the heap as the thread starts.
Unless your processor has some hardware checking for stack overflow on it (unlikely), there are a couple of tricks you can use to monitor the stack usage.
Fill the stack with a known marker pattern, and examine the stack memory (as allocated by the linker) to determine how much of the marker remains uncorrupted.
In a timer interrupt (or similar) compare the main thread stack pointer with the base of the stack to check for overflow
Both of these approaches are useful in debugging, but they are not guaranteed to catch all problems and will typically only flag a problem AFTER the stack has already corrupted something else...
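A minimal sketch of the first trick, assuming the stack grows downward and that a linker symbol for the lowest stack address exists (here called __stack_start, a made-up name; the real symbol depends on your toolchain):
#include <stddef.h>
#include <stdint.h>

#define STACK_MARKER 0xA5u

extern uint8_t __stack_start[];              /* hypothetical: lowest address of the stack region */

void stack_paint(size_t bytes)               /* call as early as possible, before the stack is deep */
{
    for (size_t i = 0; i < bytes; i++)
        __stack_start[i] = STACK_MARKER;
}

size_t stack_headroom(size_t bytes_painted)  /* bytes of painted stack never touched so far */
{
    size_t i = 0;
    while (i < bytes_painted && __stack_start[i] == STACK_MARKER)
        i++;
    return i;
}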
Usually your programming tool knows the parameters of the controller, so you should be warned if you use more memory than is available (without malloc, usage is largely known at compile time).
But you should be careful with pixel data, because most displays don't have a linear address space.
EDIT: Usually you can specify the stack size manually. Leave just enough memory for static variables, and reserve the rest for the stack.
I declared a struct variable in C of size greater than 1024 bytes. On running Coverity (a static code analysis tool), it reports that this stack variable is greater than 1024 bytes and is therefore a potential cause of error.
I'd like to know whether I need to worry about this warning. Is there really a maximum limit on the size of a single stack variable?
thanks,
che
The maximum size of a variable is limited by the maximum size of the stack (specifically, by how much of the stack is left over after any current use, including variables and parameters from functions higher on the stack as well as frame overhead).
On Windows, the stack size of the first thread is a property of the executable, set during linking, while the stack size of other threads can be specified at thread creation.
On Unix, the stack size of the first thread is usually limited only by how much room there is for it to grow. Depending on how the particular Linux system lays out memory and on your use of shared objects, that can vary. The stack size of a thread can also be specified at thread creation.
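For instance, a brief POSIX sketch (start_worker is my own helper, not a standard API) of specifying the stack size at thread creation:
#include <pthread.h>

int start_worker(void *(*fn)(void *), void *arg, pthread_t *tid)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 256 * 1024);   /* request 256 KiB; must be >= PTHREAD_STACK_MIN */
    int rc = pthread_create(tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return rc;                                      /* 0 on success */
}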
The problem it is trying to protect you from is stack overflow, which, because of different execution paths, is very hard to find in testing. Mostly for this reason, it is considered bad form to allocate a large amount of data on the stack. You are only really likely to run into a real problem on an embedded system, though.
In other words, it sets an arbitrary limit to what it considers too much data on the stack.
Yes. Of course it's limited by the address space of your system. It's also limited by the amount of space allocated to the stack by your OS, which usually can't be changed after your program starts but can be changed beforehand (either by the launching process, or by the properties of the executable). At a quick glance, the maximum stack size on my OS X system is 8 MiB and on Linux it's 10 MiB. On some systems, you can even allocate a different amount of stack to each different thread you start, although this is of limited usefulness. Most compilers also have another limit to how much they'll allow in a single stack frame.
On a modern desktop, I wouldn't worry about a 1k stack allocation unless the function were recursive. If you're writing embedded code or code for use inside an OS kernel, it would be a problem. Code in the Linux kernel is only permitted 64 KiB stacks or less, depending on configuration options.
This article is pretty interesting regarding stack size: http://www.embedded.com/columns/technicalinsights/47101892?_requestid=27362
Yes, it is OS-dependent and also depends on other things. Sorry to be so vague. You may also be able to dig up some code in the GCC collection for testing stack size.
If your function was involved (directly or indirectly) in recursion, then allocating a large amount on the stack would limit the depth of recursion and might well blow the stack. Under Windows this stack reserve defaults to 1MB, though you can increase it statically with linker commands. The stack will grow as it is used, but the operating system sometimes cannot extend it. I discuss this in a little more detail on my website here.
As I have seen, a 16-bit C compiler (Turbo C) limits a single variable to about 64 KB. If we need more than that, it has to be declared as "huge".
It's not a good idea to try to use a massive amount of stack space.
Here is a link to the default gcc stack size: http://www.cs.nyu.edu/exact/core/doc/stackOverflow.txt
Also, the stack size can be customized (e.g., via a linker option such as --stack,xxxxx), so it's best to assume xxxxx may be a small number and stick with heap allocation.
Stack, heap, low, high VM -- nuts. For the first thread, with the stack at the top of a 64-bit virtual address space, there should be no limit, so it seems like a gcc/C compiler bug that for a local automatic "int x[2621440];" I get SIGSEGV. The compiler should let the first thread's stack grow until it hits the heap, which in a 16-billion-billion-byte address space is pretty unlikely for now. The kindest thing is to call it a compiler "limitation". (In testing a while back, probably on a Solaris SPARC, it seemed that local variables were processed faster than global ones. Go figure!)
I know that the stack size is fixed, so we cannot store large objects on the stack and we shift to dynamic allocation (e.g. malloc). Also, the stack gets used up when function calls are nested, so we avoid deep recursion for this reason as well. Is there any way, at runtime, to determine how much stack memory has been used so far and how much is left?
Here I am assuming a Linux environment (GCC compiler) on the x86 architecture.
There is a pthread API to determine where the stack lies:
#define _GNU_SOURCE   /* for pthread_getattr_np(), a GNU extension */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

void PrintStackInfo (void)
{
    pthread_attr_t Attributes;
    void   *StackAddress;
    size_t  StackSize;

    // Get the pthread attributes
    memset (&Attributes, 0, sizeof (Attributes));
    pthread_getattr_np (pthread_self(), &Attributes);

    // From the attributes, get the stack info
    pthread_attr_getstack (&Attributes, &StackAddress, &StackSize);

    // Done with the attributes
    pthread_attr_destroy (&Attributes);

    printf ("Stack lowest address:  %p\n", StackAddress);
    printf ("Stack size:            %zu bytes\n", StackSize);
    printf ("Stack highest address: %p\n", (void *)((char *)StackAddress + StackSize));
}
On i386 (x86), the stack grows downward, from high addresses toward low addresses, and StackAddress as returned above is the lowest address the stack may occupy.
So you know you have ($ESP - StackAddress) bytes still available.
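A rough sketch of that calculation in C (the helper name is mine): approximate the current stack pointer by the address of a local variable and subtract the lowest stack address obtained above.
#include <stddef.h>

size_t ApproxStackRemaining(void *StackAddress)   /* StackAddress from pthread_attr_getstack() */
{
    char probe;                                   /* lives near the current top of the stack */
    return (size_t)(&probe - (char *)StackAddress);
}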
In my system, I have a wrapper around pthread_create(), so each thread starts in my private function. In that function, I find the stack as described above, then find the unused portion, then initialize that memory with a distinctive pattern (or "Patton", as my Somerville, MA-born father-in-law would say).
Then, when I want to know how much of the stack has been used, I start at the lowest address and search upward for the first value that doesn't match my pattern.
Just read %esp and remember that its value goes down. You already know the default maximum size from the environment, as well as your thread's starting point.
gcc has great assembly support, unlike some flakes out there.
If your application needs to be sure it can use X MB of memory, the usual approach is for the process to allocate it at startup time (and fail to start if it cannot allocate the minimum requirement).
This of course, means the application has to employ its own memory management logic.
You can see the state of the stack virtual memory area by looking at /proc/<pid>/smaps. The stack VMA grows down automatically when you use more stack space. You can check how much stack space you are really using by checking how far %esp is from the upper limit of the stack area in smaps (as the stack grows down). Probably the first limit you will hit if you use too much stack space will be the one set by ulimit.
But always remember that these low level details may vary without any notice. Don't expect all Linux kernel versions and all glibc versions to have the same behavior. I would never make my program rely on this information.
That depends very much on your OS and its memory management. On Linux you can use procfs; it's something like /proc/$PID/maps. I'm not on a Linux box right now.
GCC generally adds a few extra bytes per stack frame for saved registers and the return address (used to jump back to the calling context). Normally you can learn more about how the program is actually compiled by disassembling it, or by using -S to get the assembly.
Tcl had a stack check at some point, to avoid crashing due to unlimited recursion or other out-of-stack issues. It wasn't too portable (e.g., it crashed on one of the BSDs...), but you could try to find the code they used.