How are alloca and thread safety related, if at all?

Generic theory question.
I'm trying to build a library from source (FFTW if anyone cares, but it really doesn't matter), and I noticed there is an option to disable the use of alloca.
I'm aware of the dangers of using alloca, but I'm assessing the performance of FFTW with and without alloca.
Does alloca have known issues with thread safety? I'm seeing an extreme performance hit when I use a certain number of threads with FFTW (which is obviously calling alloca in the background). I'm sticking to using a number of threads equal to powers of 2, if that matters.
Is it possible that FFTW is sharing objects on a thread-local stack via alloca? I'm just trying to figure out why I see such extreme performance hits with certain numbers of threads. However, I don't fully understand the theory behind what alloca is really doing w/ threads.

Short answer: It doesn't. alloca() is guaranteed to be MT-safe.
Longer answer: alloca() isn't a complicated function. By specification, it returns a pointer to a location that is freed automatically. Note that using it is no longer considered good practice, though:
The alloca() function returns a pointer to the beginning of the allocated space. If the allocation causes stack overflow, program behaviour is undefined.
As you can see, the allocation can cause stack overflow, which tells you the space is allocated on the stack by bumping the stack pointer. Threads share the heap, but each thread has its own stack, so there is no way you will run into trouble with multi-threaded use of alloca().
A safer approach would be a VLA rather than alloca(), because both do essentially the same thing, and (as I suspect) the VLA is faster and lighter.
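To make the "threads share the heap, but not the stack" point concrete, here is a minimal sketch (POSIX threads assumed; compile with -pthread). Each thread's alloca() block is carved out of that thread's own stack, so the blocks cannot collide and need no locking:

#include <alloca.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    char *buf = alloca(64);            /* lives on THIS thread's stack */
    snprintf(buf, 64, "thread %d", id);
    puts(buf);                         /* each thread prints its own buffer */
    return NULL;                       /* buf is reclaimed here, per thread */
}

int main(void) {
    pthread_t t[4];
    int ids[4];
    for (int i = 0; i < 4; ++i) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; ++i)
        pthread_join(t[i], NULL);
    return 0;
}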

Why is the use of alloca() not considered good practice?

alloca() allocates memory on the stack rather than on the heap, as in the case of malloc(). So, when I return from the routine, the memory is freed. This actually solves my problem of freeing dynamically allocated memory: freeing memory allocated through malloc() is a major headache, and if somehow missed it leads to all sorts of memory problems.
Why is the use of alloca() discouraged in spite of the above features?
The answer is right there in the man page (at least on Linux):
RETURN VALUE
The alloca() function returns a pointer to the beginning of the allocated space. If the allocation causes stack overflow, program behaviour is undefined.
Which isn't to say it should never be used. One of the OSS projects I work on uses it extensively, and as long as you're not abusing it (alloca'ing huge values), it's fine. Once you go past the "few hundred bytes" mark, it's time to use malloc and friends, instead. You may still get allocation failures, but at least you'll have some indication of the failure instead of just blowing out the stack.
One of the most memorable bugs I had was to do with an inline function that used alloca. It manifested itself as a stack overflow (because it allocates on the stack) at random points of the program's execution.
In the header file:
void DoSomething() {
    wchar_t* pStr = (wchar_t*)alloca(100); // alloca needs a cast in C++
    //......
}
In the implementation file:
void Process() {
    for (int i = 0; i < 1000000; i++) {
        DoSomething();
    }
}
So what happened was that the compiler inlined the DoSomething function, and all the stack allocations were happening inside the Process() function, thus blowing the stack up. In my defence (and I wasn't the one who found the issue; I had to go and cry to one of the senior developers when I couldn't fix it), it wasn't straight alloca, it was one of the ATL string conversion macros.
So the lesson is - do not use alloca in functions that you think might be inlined.
Old question, but nobody has mentioned that alloca should be replaced by variable-length arrays:
char arr[size];
instead of
char *arr = alloca(size);
They are standard in C99 and existed as a compiler extension in many compilers before that.
alloca() is very useful if you can't use a standard local variable because its size would need to be determined at runtime, and you can absolutely guarantee that the pointer you get from alloca() will NEVER be used after this function returns.
You can be fairly safe if you
do not return the pointer, or anything that contains it.
do not store the pointer in any structure allocated on the heap
do not let any other thread use the pointer
The real danger comes from the chance that someone else will violate these conditions sometime later. With that in mind it's great for passing buffers to functions that format text into them :)
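For instance, a minimal sketch of that buffer-formatting pattern (the function and sizes are illustrative, and it is only safe if name is known to be reasonably short):

#include <alloca.h>
#include <stdio.h>
#include <string.h>

void log_user(const char *name, int id) {
    size_t n = strlen(name) + 32;      /* room for the fixed text and the id */
    char *msg = alloca(n);             /* freed automatically on return */
    snprintf(msg, n, "user %s (id %d)", name, id);
    puts(msg);                         /* msg never escapes this function */
}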
As noted in this newsgroup posting, there are a few reasons why using alloca can be considered difficult and dangerous:
Not all compilers support alloca.
Some compilers interpret the intended behaviour of alloca differently, so portability is not guaranteed even between compilers that support it.
Some implementations are buggy.
One issue is that it isn't standard, although it's widely supported. Other things being equal, I'd always use a standard function rather than a common compiler extension.
"still alloca use is discouraged, why?"
I don't perceive such a consensus. Lots of strong pros; a few cons:
C99 provides variable length arrays, which would often be used preferentially as the notation's more consistent with fixed-length arrays and intuitive overall
many systems have less overall memory/address-space available for the stack than they do for the heap, which makes the program slightly more susceptible to memory exhaustion (through stack overflow): this may be seen as a good or a bad thing - one of the reasons the stack doesn't automatically grow the way heap does is to prevent out-of-control programs from having as much adverse impact on the entire machine
when used in a more local scope (such as a while or for loop) or in several scopes, the memory accumulates per iteration/scope and is not released until the function exits: this contrasts with normal variables defined in the scope of a control structure (e.g. for (int i = 0; i < 2; ++i) { X } would accumulate alloca-ed memory requested at X, but memory for a fixed-size array would be recycled per iteration).
modern compilers typically do not inline functions that call alloca, but if you force them to then the alloca will happen in the caller's context (i.e. the stack won't be released until the caller returns)
a long time ago alloca transitioned from a non-portable feature/hack to a Standardised extension, but some negative perception may persist
the lifetime is bound to the function scope, which may or may not suit the programmer better than malloc's explicit control
having to use malloc encourages thinking about the deallocation - if that's managed through a wrapper function (e.g. WonderfulObject_DestructorFree(ptr)), then the function provides a point for implementation clean up operations (like closing file descriptors, freeing internal pointers or doing some logging) without explicit changes to client code: sometimes it's a nice model to adopt consistently
in this pseudo-OO style of programming, it's natural to want something like WonderfulObject* p = WonderfulObject_AllocConstructor(); - that's possible when the "constructor" is a function returning malloc-ed memory (as the memory remains allocated after the function returns the value to be stored in p), but not if the "constructor" uses alloca
a macro version of WonderfulObject_AllocConstructor could achieve this, but "macros are evil" in that they can conflict with each other and non-macro code and create unintended substitutions and consequent difficult-to-diagnose problems
missing free operations can be detected by ValGrind, Purify etc. but missing "destructor" calls can't always be detected at all - one very tenuous benefit in terms of enforcement of intended usage
some alloca() implementations (such as GCC's) use an inlined macro for alloca(), so runtime substitution of a memory-usage diagnostic library isn't possible the way it is for malloc/realloc/free (e.g. electric fence)
some implementations have subtle issues: for example, from the Linux manpage:
On many systems alloca() cannot be used inside the list of arguments of a function call, because the stack space reserved by alloca() would appear on the stack in the middle of the space for the function arguments.
I know this question is tagged C, but as a C++ programmer I thought I'd use C++ to illustrate the potential utility of alloca: the code below (and here at ideone) creates a vector tracking differently sized polymorphic types that are stack allocated (with lifetime tied to function return) rather than heap allocated.
#include <alloca.h>
#include <iostream>
#include <new>      // for placement new
#include <vector>

struct Base
{
    virtual ~Base() { }
    virtual int to_int() const = 0;
};

struct Integer : Base
{
    Integer(int n) : n_(n) { }
    int to_int() const { return n_; }
    int n_;
};

struct Double : Base
{
    Double(double n) : n_(n) { }
    int to_int() const { return -n_; }
    double n_;
};

inline Base* factory(double d) __attribute__((always_inline));

inline Base* factory(double d)
{
    if ((double)(int)d != d)
        return new (alloca(sizeof(Double))) Double(d);
    else
        return new (alloca(sizeof(Integer))) Integer(d);
}

int main()
{
    std::vector<Base*> numbers;
    numbers.push_back(factory(29.3));
    numbers.push_back(factory(29));
    numbers.push_back(factory(7.1));
    numbers.push_back(factory(2));
    numbers.push_back(factory(231.0));
    for (std::vector<Base*>::const_iterator i = numbers.begin();
         i != numbers.end(); ++i)
    {
        std::cout << *i << ' ' << (*i)->to_int() << '\n';
        (*i)->~Base(); // optionally / else Undefined Behaviour iff the
                       // program depends on side effects of destructor
    }
}
Lots of interesting answers to this "old" question, even some relatively new answers, but I didn't find any that mention this....
When used properly and with care, consistent use of alloca() (perhaps application-wide) to handle small variable-length allocations (or C99 VLAs, where available) can lead to lower overall stack growth than an otherwise equivalent implementation using oversized local arrays of fixed length. So alloca() may be good for your stack if you use it carefully.
I found that quote in.... OK, I made that quote up. But really, think about it....
@j_random_hacker is very right in his comments under other answers: avoiding the use of alloca() in favor of oversized local arrays does not make your program safer from stack overflows (unless your compiler is old enough to allow inlining of functions that use alloca(), in which case you should upgrade, or unless you use alloca() inside loops, in which case you should... not use alloca() inside loops).
I've worked on desktop/server environments and embedded systems. A lot of embedded systems don't use a heap at all (they don't even link in support for it), for reasons that include the perception that dynamically allocated memory is evil due to the risks of memory leaks in an application that never reboots for years at a time, or the more reasonable justification that dynamic memory is dangerous because it can't be known for certain that an application will never fragment its heap to the point of false memory exhaustion. So embedded programmers are left with few alternatives.
alloca() (or VLAs) may be just the right tool for the job.
I've seen time & time again where a programmer makes a stack-allocated buffer "big enough to handle any possible case". In a deeply nested call tree, repeated use of that (anti-?)pattern leads to exaggerated stack use. (Imagine a call tree 20 levels deep, where at each level for different reasons, the function blindly over-allocates a buffer of 1024 bytes "just to be safe" when generally it will only use 16 or less of them, and only in very rare cases may use more.) An alternative is to use alloca() or VLAs and allocate only as much stack space as your function needs, to avoid unnecessarily burdening the stack. Hopefully when one function in the call tree needs a larger-than-normal allocation, others in the call tree are still using their normal small allocations, and the overall application stack usage is significantly less than if every function blindly over-allocated a local buffer.
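A sketch of the contrast being described (the names and the 1024-byte figure are illustrative):

#include <stdio.h>
#include <string.h>

void handle_fixed(const char *token) {
    char buf[1024];                         /* 1024 bytes of stack, every call,
                                               "just to be safe" */
    snprintf(buf, sizeof buf, "%s", token); /* typically needs ~16 of them */
}

void handle_sized(const char *token) {
    size_t n = strlen(token) + 1;
    char buf[n];                            /* C99 VLA: only what's needed */
    memcpy(buf, token, n);
}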
But if you choose to use alloca()...
Based on other answers on this page, it seems that VLAs should be safe (they don't compound stack allocations if used within a loop); but if you're using alloca(), be careful not to use it inside a loop, and make sure your function can't be inlined if there's any chance it might be called within another function's loop.
All of the other answers are correct. However, if the thing you want to alloc using alloca() is reasonably small, I think that it's a good technique that's faster and more convenient than using malloc() or otherwise.
In other words, alloca( 0x00ffffff ) is dangerous and likely to cause overflow, exactly as much as char hugeArray[ 0x00ffffff ]; is. Be cautious and reasonable and you'll be fine.
I don't think anyone has mentioned this: Use of alloca in a function will hinder or disable some optimizations that could otherwise be applied in the function, since the compiler cannot know the size of the function's stack frame.
For instance, a common optimization by C compilers is to eliminate use of the frame pointer within a function, making frame accesses relative to the stack pointer instead, so that there's one more register for general use. But if alloca is called within the function, the difference between sp and fp will be unknown for part of the function, so this optimization cannot be done.
Given the rarity of its use, and its shady status as a standard function, compiler designers quite possibly disable any optimization that might cause trouble with alloca, if it would take more than a little effort to make it work with alloca.
UPDATE:
Since variable-length local arrays have been added to C, and since these present very similar code-generation issues to the compiler as alloca, I see that 'rarity of use and shady status' does not apply to the underlying mechanism; but I would still suspect that use of either alloca or VLA tends to compromise code generation within a function that uses them. I would welcome any feedback from compiler designers.
Everyone has already pointed out the big thing, which is potential undefined behavior from a stack overflow, but I should mention that the Windows environment has a great mechanism to catch this, using structured exceptions (SEH) and guard pages. Since the stack only grows as needed, these guard pages reside in areas that are unallocated. If you allocate into them (by overflowing the stack) an exception is thrown.
You can catch this SEH exception and call _resetstkoflw to reset the stack and continue on your merry way. It's not ideal, but it's another mechanism to at least know something has gone wrong when the stuff hits the fan. *nix might have something similar that I'm not aware of.
I recommend capping your max allocation size by wrapping alloca and tracking it internally. If you were really hardcore about it you could throw some scope sentries at the top of your function to track any alloca allocations in the function scope and sanity check this against the max amount allowed for your project.
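One hedged way to do that capping (the names and the 4 KiB cap are purely illustrative). Note that the wrapper must be a macro, not a function: a wrapper function's alloca would be reclaimed as soon as the wrapper itself returned.

#include <alloca.h>
#include <stdlib.h>

#define ALLOCA_CAP 4096                /* illustrative project-wide limit */
#define CHECKED_ALLOCA(n) \
    ((size_t)(n) <= ALLOCA_CAP ? alloca(n) : (abort(), (void *)0))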
Also, in addition to not allowing for memory leaks, alloca does not cause memory fragmentation, which is pretty important. I don't think alloca is bad practice if you use it intelligently, which is basically true for everything. :-)
One pitfall with alloca is that longjmp rewinds it.
That is to say, if you save a context with setjmp, then alloca some memory, then longjmp to the context, you may lose the alloca memory. The stack pointer is back where it was and so the memory is no longer reserved; if you call a function or do another alloca, you will clobber the original alloca.
To clarify, what I'm specifically referring to here is a situation whereby longjmp does not return out of the function where the alloca took place! Rather, a function saves context with setjmp; then allocates memory with alloca and finally a longjmp takes place to that context. That function's alloca memory is not all freed; just all the memory that it allocated since the setjmp. Of course, I'm speaking about an observed behavior; no such requirement is documented of any alloca that I know.
The focus in the documentation is usually on the concept that alloca memory is associated with a function activation, not with any block; that multiple invocations of alloca just grab more stack memory which is all released when the function terminates. Not so; the memory is actually associated with the procedure context. When the context is restored with longjmp, so is the prior alloca state. It's a consequence of the stack pointer register itself being used for allocation, and also (necessarily) saved and restored in the jmp_buf.
Incidentally, this, if it works that way, provides a plausible mechanism for deliberately freeing memory that was allocated with alloca.
I have run into this as the root cause of a bug.
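For illustration, here is a sketch of the scenario just described (this reflects observed, implementation-specific behavior, not anything the documentation promises):

#include <alloca.h>
#include <setjmp.h>
#include <string.h>

void demo(void) {
    jmp_buf ctx;
    if (setjmp(ctx) == 0) {        /* saves the stack pointer in ctx */
        char *p = alloca(512);     /* allocated AFTER the context was saved */
        memset(p, 0xAA, 512);
        longjmp(ctx, 1);           /* restores SP: those 512 bytes are gone */
    }
    /* From here on, any call or further alloca may clobber that block. */
    char *q = alloca(512);         /* will likely reuse the same region */
    (void)q;
}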
Here's why:
char x;
char *y = malloc(1);
char *z = alloca(&x - y);
*z = 1;
Not that anyone would write this code, but the size argument you're passing to alloca almost certainly comes from some sort of input, which could maliciously aim to get your program to alloca something huge like that. After all, if the size isn't based on input or doesn't have the possibility to be large, why didn't you just declare a small, fixed-size local buffer?
Virtually all code using alloca and/or C99 VLAs has serious bugs which will lead to crashes (if you're lucky) or privilege compromise (if you're not so lucky).
alloca () is nice and efficient... but it is also deeply broken.
broken scope behavior (function scope instead of block scope)
use inconsistent with malloc (an alloca()-ted pointer shouldn't be freed, hence you have to track where your pointers come from in order to free() only those you got with malloc())
bad behavior when you also use inlining (the scope sometimes extends to the caller function, depending on whether the callee is inlined or not)
no stack boundary check
undefined behavior in case of failure (it does not return NULL like malloc... and what does failure mean anyway, as it does not check stack boundaries...)
not ANSI standard
In most cases you can replace it using local variables with an upper-bound size. If it's used for large objects, putting them on the heap is usually a safer idea.
If you really need it in C you can use a VLA (there is no VLA in C++, too bad). They are much better than alloca() regarding scope behavior and consistency. As I see it, VLAs are a kind of alloca() made right.
Of course a local structure or array using an upper bound of the needed space is still better, and if you don't have such an upper bound, heap allocation using plain malloc() is probably sane.
I see no sane use case where you really, really need either alloca() or VLAs.
Processes only have a limited amount of stack space available - far less than the amount of memory available to malloc().
By using alloca() you dramatically increase your chances of getting a Stack Overflow error (if you're lucky, or an inexplicable crash if you're not).
A place where alloca() is even more dangerous than malloc() is the kernel - the kernel of a typical operating system has a fixed-size stack space hard-coded into one of its headers; it is not as flexible as the stack of an application. Making a call to alloca() with an unwarranted size may cause the kernel to crash.
Certain compilers warn about usage of alloca() (and even VLAs for that matter) under certain options that ought to be turned on while compiling kernel code - here, it is better to allocate memory in the heap, which is not fixed by a hard-coded limit.
alloca is not worse than a variable-length array (VLA), but it's riskier than allocating on the heap.
On x86 (and most often on ARM), the stack grows downwards, and that brings with it a certain amount of risk: if you accidentally write beyond the block allocated with alloca (due to a buffer overflow for example), then you will overwrite the return address of your function, because that one is located "above" on the stack, i.e. after your allocated block.
The consequence of this is two-fold:
The program will crash spectacularly and it will be impossible to tell why or where it crashed (the stack will most likely unwind to a random address due to the overwritten frame pointer).
It makes a buffer overflow many times more dangerous, since a malicious user can craft a special payload which would be put on the stack and can therefore end up being executed.
In contrast, if you write beyond a block on the heap you "just" get heap corruption. The program will probably terminate unexpectedly but will unwind the stack properly, thereby reducing the chance of malicious code execution.
Sadly the truly awesome alloca() is missing from the almost awesome tcc. Gcc does have alloca().
It sows the seed of its own destruction. With return as the destructor.
Like malloc(), on failure it returns an invalid pointer, which will segfault on modern systems with an MMU (and hopefully restart those without one).
Unlike auto variables you can specify the size at run time.
It works well with recursion. You can use static variables to achieve something similar to tail recursion and use just a few other variables to pass info to each iteration.
If you push too deep you are assured of a segfault (if you have an MMU).
Note that malloc() offers no more, as it returns NULL (which will also segfault if dereferenced) when the system is out of memory. I.e. all you can do is bail or just try to use it anyway.
To use malloc() I use globals and assign them NULL. If the pointer is not NULL I free it before I use malloc().
You can also use realloc() as the general case if you want to copy any existing data. You need to check the pointer beforehand to work out whether you are going to copy or concatenate after the realloc().
Actually, alloca is not guaranteed to use the stack.
Indeed, the gcc-2.95 implementation of alloca allocates memory from the heap using malloc itself. That implementation is also buggy: it may lead to a memory leak and to some unexpected behavior if you call it inside a block with a further use of goto. Not to say that you should never use it, but sometimes alloca leads to more overhead than it relieves you from.
In my opinion, alloca(), where available, should be used only in a constrained manner. Very much like the use of "goto", quite a large number of otherwise reasonable people have strong aversion not just to the use of, but also the existence of, alloca().
For embedded use, where the stack size is known and limits can be imposed via convention and analysis on the size of the allocation, and where the compiler cannot be upgraded to support C99+, use of alloca() is fine, and I've been known to use it.
When available, VLAs may have some advantages over alloca(): The compiler can generate stack limit checks that will catch out-of-bounds access when array style access is used (I don't know if any compilers do this, but it can be done), and analysis of the code can determine whether the array access expressions are properly bounded. Note that, in some programming environments, such as automotive, medical equipment, and avionics, this analysis has to be done even for fixed size arrays, both automatic (on the stack) and static allocation (global or local).
On architectures that store both data and return addresses/frame pointers on the stack (from what I know, that's all of them), any stack allocated variable can be dangerous because the address of the variable can be taken, and unchecked input values might permit all sorts of mischief.
Portability is less of a concern in the embedded space, however it is a good argument against use of alloca() outside of carefully controlled circumstances.
Outside of the embedded space, I've used alloca() mostly inside logging and formatting functions for efficiency, and in a non-recursive lexical scanner, where temporary structures (allocated using alloca()) are created during tokenization and classification, then a persistent object (allocated via malloc()) is populated before the function returns. The use of alloca() for the smaller temporary structures greatly reduces fragmentation when the persistent object is allocated.
Why does no one mention this example, introduced by the GNU documentation?
https://www.gnu.org/software/libc/manual/html_node/Advantages-of-Alloca.html
Nonlocal exits done with longjmp (see Non-Local Exits) automatically free the space allocated with alloca when they exit through the function that called alloca. This is the most important reason to use alloca.
Suggest reading order 1->2->3->1:
https://www.gnu.org/software/libc/manual/html_node/Advantages-of-Alloca.html
Intro and Details from Non-Local Exits
Alloca Example
I don't think that anybody has mentioned this, but alloca also has some serious security issues not necessarily present with malloc (though these issues also arise with any stack-based arrays, dynamic or not). Since the memory is allocated on the stack, buffer overflows/underflows have much more serious consequences than with just malloc.
In particular, the return address for a function is stored on the stack. If this value gets corrupted, your code could be made to jump to any executable region of memory. Compilers go to great lengths to make this difficult (in particular by randomizing the address layout). However, this is clearly worse than just a stack overflow, since the best case is a SEGFAULT if the return address is corrupted, but the code could also start executing a random piece of memory, or in the worst case some region of memory which compromises your program's security.
IMO the biggest risk with alloca and variable-length arrays is that they can fail in a very dangerous manner if the allocation size is unexpectedly large.
Allocations on the stack typically have no checking in user code.
Modern operating systems will generally put a guard page in place below* to detect stack overflow. When the stack overflows, the kernel may either expand the stack or kill the process. Linux expanded this guard region in 2017 to be significantly larger than a page, but it's still finite in size.
So as a rule it's best to avoid allocating more than a page on the stack before making use of the previous allocations. With alloca or variable length arrays it's easy to end up allowing an attacker to make arbitrary size allocations on the stack and hence skip over any guard page and access arbitrary memory.
* on most widespread systems today the stack grows downwards.
Most answers here largely miss the point: there's a reason why using _alloca() is potentially worse than merely storing large objects in the stack.
The main difference between automatic storage and _alloca() is that the latter suffers from an additional (serious) problem: the allocated block is not controlled by the compiler, so there's no way for the compiler to optimize or recycle it.
Compare:
while (condition) {
    char buffer[0x100]; // Chill.
    /* ... */
}
with:
while (condition) {
    char* buffer = _alloca(0x100); // Bad!
    /* ... */
}
The problem with the latter should be obvious: every iteration performs a fresh 0x100-byte allocation, and none of those blocks is released until the function returns, so stack consumption grows with the iteration count.

How can I get information about malloc()'s behavior?

I'm wrapping malloc() for some reason. I would like to have some (system-specific, at-run-time) information beyond what I can get by merely calling it. For example:
What's the minimum alignment malloc() is using for allocation?
When allocating a specific stretch of memory, how much did it actually allocate (this could theoretically be more than the amount requested)?
Whether (assuming no other concurrent operations) a realloc() will succeed with the same original address or require a move.
Note: I'd like as portable an answer as possible, but a platform-specific answer is still relevant: Linux, Windows, MacOS, Un*x.
glibc implements malloc_usable_size, which returns the actual allocation size available for application use. Some alternative mallocs implement it as well. Note that glibc can perform a non-moving realloc even if the requested new size is larger than what malloc_usable_size reports, so it is not useful for this purpose.
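For example, a minimal sketch of querying it on glibc (the printed value is allocator-dependent; the 100-byte request is illustrative):

#include <malloc.h>   /* glibc-specific header for malloc_usable_size */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *p = malloc(100);
    if (p == NULL) return 1;
    /* Often prints a value >= 100, because allocation sizes are
       rounded up to the allocator's internal granularity. */
    printf("requested 100, usable %zu\n", malloc_usable_size(p));
    free(p);
    return 0;
}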
For the other things you are asking, there are no clear answers. Theoretically, malloc should provide memory at least aligned to _Alignof (max_align_t), but many implementations do not do this for various reasons:
max_align_t comes from a compiler such as GCC and thus reflects the compiler's view of the world, and not what malloc provides (see glibc malloc is incompatible with GCC 7 for an example). The C standard assumes a uniform implementation, but in practice, the compiler, the C run-time library, and even malloc are separate components, built from different sources, and on different release cycles, so they can fall out of sync, and a compile-time constant such as _Alignof (max_align_t) will rarely accurately reflect what malloc does at run-time.
Providing the ABI-mandated 16 byte alignment on x86-64 for allocations of 8 or 4 bytes is wasteful.
A malloc implementation may have internal constraints which result in larger alignment than what is required by the architecture specification. Applications obviously cannot rely on that, but it is still observable.
Your question about non-moving realloc does not even have a proper answer: for a multi-threaded program, another thread might place an allocation which blocks the enlargement of the current allocation, between the determination of the resize limit and the actual realloc call. A non-moving version of realloc could be a useful addition, but the interface would be quite different (probably something along the lines of: please resize to this maximum value if possible without moving the block, otherwise return the largest possible size).
If you want a portable answer to these questions, your best bet is to implement your own allocation scheme. It would be safer to not use the names malloc(), calloc(), realloc(), free(), strdup(), etc. because you might run into problems with dynamic name resolution, even if your reimplementation of the standard functions is conformant.
Any source code you control could be made to call your allocator by defining a set of macros at the head of every module (via a common header file).
Using system specific tricks to retrieve information from the allocator's metadata is risky because your program will bind to the C library dynamically, so it is possible that such structures change from one system to another, even for the same operating system.
Re-implementing malloc() in terms of lower level systems calls such as mmap() or other system specific stuff seems deceptively simple, but it is a lot of work to make it correct, stable, efficient and reliable. You should look at available proven alternative implementations of malloc and try and tailor them to your needs.

Growing an array on the stack

This is my problem in essence. In the life of a function, I generate some integers, then use the array of integers in an algorithm that is also part of the same function. The array of integers will only be used within the function, so naturally it makes sense to store the array on the stack.
The problem is I don't know the size of the array until I'm finished generating all the integers.
I know how to allocate a fixed-size and a variable-sized array on the stack. However, I do not know how to grow an array on the stack, and that seems like the best way to solve my problem. I'm fairly certain this is possible to do in assembly: you just increment the stack pointer and store an int for each int generated, so the array of ints would be at the end of the stack frame. Is this possible to do in C, though?
I would disagree with your assertion that it "naturally makes sense to store the array on the stack". Stack memory is really designed for cases where you know the size at compile time. I would argue that dynamic memory is the way to go here.
C doesn't define what the "stack" is. It only has static, automatic and dynamic allocations. Static and automatic allocations are handled by the compiler, and only dynamic allocation puts the controls in your hands. Thus, if you want to manually deallocate an object and allocate a bigger one, you must use dynamic allocation.
Don't use dynamic arrays on the stack (compare Why is the use of alloca() not considered good practice?), better allocate memory from the heap using malloc and resize it using realloc.
Never Use alloca()
IMHO this point hasn't been made well enough in the standard references.
One rule of thumb is:
If you're not prepared to statically allocate the maximum possible size as a fixed-length C array, then you shouldn't do it dynamically with alloca() either.
Why? The reason you're trying to avoid malloc() is performance.
alloca() will be slower and won't work in any circumstance where static allocation will fail. It's generally less likely to succeed than malloc() too.
One thing is sure. Statically allocating the maximum will outdo both malloc() and alloca().
Static allocation is typically damn near a no-op. Most systems will advance the stack pointer for the function call anyway. There's no appreciable difference for how far.
So what you're telling me is you care about performance but want to hold back on a no-op solution? Think about why you feel like that.
The overwhelming likelihood is you're concerned about the size allocated.
But as explained it's free and it gets taken back. What's the worry?
If the worry is "I don't have a maximum or don't know if it will overflow the stack" then you shouldn't be using alloca(), because you don't have a maximum and don't know whether it will overflow the stack.
If you do have a maximum and know it isn't going to blow the stack then statically allocate the maximum and go home. It's a free lunch - remember?
That makes alloca() either wrong or sub-optimal.
Every time you use alloca() you're either wasting your time or coding in one of those difficult-to-test-for arbitrary scaling ceilings that sleep quietly until things really matter and then f**k up someone's day.
Don't.
PS: If you need a big 'workspace' but the malloc()/free() overhead is a bottleneck, for example because it's called repeatedly in a big loop, then consider allocating the workspace outside the loop and carrying it from iteration to iteration, as sketched below. You may need to reallocate the workspace if you find a 'big' case, but it's often possible to divide the number of allocations by 100 or even 1000.
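A minimal sketch of that pattern (item_size() and process() are illustrative placeholders; error handling omitted for brevity):

#include <stdlib.h>

void run_all(size_t n_items,
             size_t (*item_size)(size_t),
             void (*process)(size_t, char *)) {
    size_t cap = 4096;                   /* initial guess; illustrative */
    char *workspace = malloc(cap);       /* one allocation, outside the loop */
    for (size_t i = 0; i < n_items; ++i) {
        size_t need = item_size(i);
        if (need > cap) {                /* grow only on the rare "big" case */
            cap = need;
            workspace = realloc(workspace, cap);
        }
        process(i, workspace);           /* reuse the same block each time */
    }
    free(workspace);
}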
Footnote:
There must be some theoretical algorithm where a() calls b(), and where a() requires a massive environment while b() doesn't, and vice versa.
In that event there could be some kind of freaky play-off where the stack overflow is prevented by alloca(). I have never heard of or seen such an algorithm. Plausible specimens will be gratefully received!
The innards of the C compiler require stack sizes to be fixed or calculable at compile time. It's been a while since I used C (I'm now a C++ convert) and I don't know exactly why this is. http://gribblelab.org/CBootcamp/7_Memory_Stack_vs_Heap.html provides a useful comparison of the pros and cons of the two approaches.
I appreciate your assembly-code analogy, but C is largely managed, if that makes any sense, by the operating system, which imposes/provides the task, process and stack notions.
In order to address your issue, dynamic memory allocation looks ideal:
int *a = malloc(sizeof(int));
and dereference it to store the value.
Each time a new integer needs to be added to the existing list of integers:
int *temp = realloc(a, sizeof(int) * (n + 1)); /* n = number of new elements */
if (temp != NULL)
    a = temp;
Once done using this memory, free() it.
Is there an upper limit on the size? If you can impose one, so the size is at most a few tens of KiB, then yes alloca is appropriate (especially if this is a leaf function, not one calling other functions that might also allocate non-tiny arrays this way).
Or since this is C, not C++, use a variable-length array like int foo[n];.
But always sanity-check your size, otherwise it's a stack-clash vulnerability waiting to happen. (Where a huge allocation moves the stack pointer so far that it ends up in the middle of another memory region, where other things get overwritten by local variables and return addresses.) Some distros enable hardening options that make GCC generate code to touch every page in between when moving the stack pointer by more than a page.
It's usually not worth it to check the size and use alloca for small, malloc for large, since you also need another check at the end of your function to call free if the size was large. It might give a speedup, but this makes your code more complicated and more likely to get broken during maintenance if future editors don't notice that the memory is only sometimes malloced. So only consider a dual strategy if profiling shows this is actually important, and you care about performance more than simplicity / human-readability / maintainability for this particular project.
A size check for an upper limit (else log an error and exit) is more reasonable, but then you have to choose an upper limit beyond which your program will intentionally bail out, even though there's plenty of RAM you're choosing not to use. If there is a reasonable limit where you can be pretty sure something's gone wrong, like the input being intentionally malicious from an exploit, then great, if(size>limit) error(); int arr[size];.
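A minimal sketch of that bounded-VLA pattern (the 1024-element limit is illustrative):

#include <stdio.h>
#include <stdlib.h>

void work(size_t size) {
    if (size == 0 || size > 1024) {      /* bail out on suspicious sizes */
        fprintf(stderr, "refusing size %zu\n", size);
        exit(EXIT_FAILURE);
    }
    int arr[size];                       /* VLA, now known to stay small */
    for (size_t i = 0; i < size; ++i)
        arr[i] = (int)i;                 /* ... use arr ... */
}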
If neither of those conditions can be satisfied, your use case is not appropriate for C automatic storage (stack memory) because it might need to be large. Just use dynamic allocation, even though you didn't want malloc.
On Windows x86/x64, the default user-space stack size is 1MiB, I think. On x86-64 Linux it's 8MiB (ulimit -s). Thread stacks are allocated with the same size. But remember, your function will be part of a chain of function calls (so if every function used a large fraction of the total size, you'd have a problem if they called each other). And any stack memory you dirty won't get handed back to the OS even after the function returns, unlike malloc/free, where a large allocation can give the memory back instead of leaving it on the free list.
Kernel thread stacks are much smaller, like 16 KiB total for x86-64 Linux, so you never want VLAs or alloca in kernel code, except maybe for a tiny max size, like up to 16 or maybe 32 bytes, not large compared to the size of the pointer that would be needed to store a kmalloc return value.

Why is malloc really non-deterministic? (Linux/Unix)

malloc is not guaranteed to return 0'ed memory. The conventional wisdom is not only that, but that the contents of the memory malloc returns are actually non-deterministic, e.g. openssl used them for extra randomness.
However, as far as I know, malloc is built on top of brk/sbrk, which do "return" 0'ed memory. I can see why the contents of what malloc returns may be non-0, e.g. from previously free'd memory, but why would they be non-deterministic in "normal" single-threaded software?
Is the conventional wisdom really true (assuming the same binary and libraries)?
If so, why?
Edit: Several people answered explaining why the memory can be non-0, which I already explained in the question above. What I'm asking is why a program using the contents of what malloc returns may be non-deterministic, i.e. why it could have different behavior every time it's run (assuming the same binary and libraries). Non-deterministic behavior is not implied by non-0's. To put it differently: why could it have different contents every time the binary is run?
Malloc does not guarantee unpredictability... it just doesn't guarantee predictability.
E.g. consider that
return 0;
is a valid implementation of malloc.
The initial values of memory returned by malloc are unspecified, which means that the specifications of the C and C++ languages put no restrictions on what values can be handed back. This makes the language easier to implement on a variety of platforms. While it might be true that in Linux malloc is implemented with brk and sbrk and the memory should be zeroed (I'm not even sure that this is necessarily true, by the way), on other platforms, perhaps an embedded platform, there's no reason that this would have to be the case. For example, an embedded device might not want to zero the memory, since doing so costs CPU cycles and thus power and time. Also, in the interest of efficiency, for example, the memory allocator could recycle blocks that had previously been freed without zeroing them out first. This means that even if the memory from the OS is initially zeroed out, the memory from malloc needn't be.
The conventional wisdom that the values are nondeterministic is probably a good one because it forces you to realize that any memory you get back might have garbage data in it that could crash your program. That said, you should not assume that the values are truly random. You should, however, realize that the values handed back are not magically going to be what you want. You are responsible for setting them up correctly. Assuming the values are truly random is a Really Bad Idea, since there is nothing at all to suggest that they would be.
If you want memory that is guaranteed to be zeroed out, use calloc instead.
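A trivial illustration of the difference:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *a = calloc(10, sizeof *a);   /* zero-initialized, unlike malloc */
    if (a == NULL) return 1;
    printf("%d\n", a[0]);             /* guaranteed to print 0 */
    free(a);
    return 0;
}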
Hope this helps!
malloc is defined on many systems that can be programmed in C/C++, including many non-UNIX systems and many systems that lack an operating system altogether. Requiring malloc to zero out the memory goes against C's philosophy of saving CPU as much as possible.
The standard provides a zeroing call, calloc, that can be used if you need to zero out the memory. But in cases when you are planning to initialize the memory yourself as soon as you get it, the CPU cycles spent making sure the block is zeroed out are a waste; the C standard aims to avoid this waste as much as possible, often at the expense of predictability.
Memory returned by malloc is not zeroed (or rather, is not guaranteed to be zeroed) because it does not need to be. There is no security risk in reusing uninitialized memory pulled from your own process's address space or page pool. You already know it's there, and you already know the contents. There is also no issue with the contents in a practical sense, because you're going to overwrite them anyway.
Incidentally, the memory returned by malloc is zeroed upon first allocation, because an operating system kernel cannot afford the risk of giving one process data that another process owned previously. Therefore, when the OS faults in a new page, it only ever provides one that has been zeroed. However, this is totally unrelated to malloc.
(Slightly off-topic: the Debian security thing you mentioned had a few more implications than using uninitialized memory for randomness. A packager who was not familiar with the inner workings of the code, and did not know the precise implications, patched out a couple of places that Valgrind had reported, presumably with good intent but to disastrous effect. Among these was the "random from uninitialized memory", but it was by far not the most severe one.)
I think that the assumption that it is non-deterministic is plainly wrong, particularly as you ask about a non-threaded context. (In a threaded context, due to scheduling randomness, you could have some non-determinism.)
Just try it out. Create a sequential, deterministic application that
does a whole bunch of allocations
fills the memory with some pattern, eg fill it with the value of a counter
free every second one of these allocations
newly allocate the same amount
run through these new allocations and record the value of the first byte in a file (as textual numbers, one per line)
Run this program twice and compare the two result files. My idea is that these files will be identical.
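A sketch of that experiment (block count and size are illustrative; malloc error handling is omitted, and note that reading the first byte of freshly malloc'ed memory is formally indeterminate, which is exactly what the experiment probes):

#include <stdio.h>
#include <stdlib.h>

#define N  1000
#define SZ 64

int main(int argc, char **argv) {
    char *blocks[N];
    for (int i = 0; i < N; ++i) {            /* a whole bunch of allocations */
        blocks[i] = malloc(SZ);
        for (int j = 0; j < SZ; ++j)
            blocks[i][j] = (char)i;          /* fill with a counter pattern */
    }
    for (int i = 0; i < N; i += 2)           /* free every second one */
        free(blocks[i]);
    FILE *out = fopen(argc > 1 ? argv[1] : "run.txt", "w");
    if (out == NULL) return 1;
    for (int i = 0; i < N; i += 2) {         /* allocate the same amount again */
        blocks[i] = malloc(SZ);
        fprintf(out, "%d\n", blocks[i][0]);  /* first byte, one per line */
    }
    fclose(out);
    return 0;
}

(e.g. ./a.out run1.txt && ./a.out run2.txt && diff run1.txt run2.txt)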
Even in "normal" single-threaded programs, memory is freed and reallocated many times. Malloc will return to you memory that you had used before.
Even single-threaded code may do malloc then free then malloc and get back previously used, non-zero memory.
There is no guarantee that brk/sbrk return 0ed-out data; this is an implementation detail. It is generally a good idea for an OS to do that to reduce the possibility that sensitive information from one process finds its way into another process, but nothing in the specification says that it will be the case.
Also, the fact that malloc is implemented on top of brk/sbrk is also implementation-dependent, and can even vary based on the size of the allocation; for example, large allocations on Linux have traditionally used mmap on /dev/zero instead.
Basically, you can neither rely on malloc()ed regions containing garbage nor on it being all-0, and no program should assume one way or the other about it.
The simplest way I can think of putting the answer is like this:
If I am looking for wall space to paint a mural, I don't care whether it is white or covered with old graffiti, since I'm going to prime it and paint over it. I only care whether I have enough square footage to accommodate the picture, and I care that I'm not painting over an area that belongs to someone else.
That is how malloc thinks. Zeroing memory every time a process ends would be wasted computational effort. It would be like re-priming the wall every time you finish painting.
There is a whole ecosystem of programs living inside a computer's memory, and you cannot control the order in which mallocs and frees happen.
Imagine that the first time you run your application and malloc() something, it gives you an address with some garbage. Then your program shuts down, and your OS marks that area as free. Another program takes it with another malloc(), writes a lot of stuff, and then leaves. You run your program again; it might happen that malloc() gives you the same address, but now there's different garbage there, which the previous program might have written.
I don't actually know the implementation of malloc() in any system, and I don't know if it implements any kind of security measure (like randomizing the returned address), but I don't think so.
It is very deterministic.

Is realloc() safe in embedded system?

While developing a piece of software for an embedded system, I used the realloc() function many times. Now I've been told that I "should not use realloc() in embedded" without any explanation.
Is realloc() dangerous for embedded systems, and why?
Yes, all dynamic memory allocation is regarded as dangerous, and it is banned from most "high integrity" embedded systems, such as industrial/automotive/aerospace/med-tech etc. The answer to your question depends on what sort of embedded system you are working on.
The reason it's banned from high-integrity embedded systems is not only the potential memory leaks, but also a lot of dangerous undefined/unspecified/implementation-defined behavior associated with those functions.
EDIT: I also forgot to mention heap fragmentation, which is another danger. In addition, MISRA-C also mentions "data inconsistency, memory exhaustion, non-deterministic behaviour" as reasons why it shouldn't be used. The former two seem rather subjective, but non-deterministic behaviour is definitely something that isn't allowed in these kinds of systems.
References:
MISRA-C:2004 Rule 20.4 "Dynamic heap memory allocation shall not be used."
IEC 61508 Functional safety, 61508-3 Annex B (normative) Table B1, >SIL1: "No dynamic objects", "No dynamic variables".
It depends on the particular embedded system. Dynamic memory management on an small embedded system is tricky to begin with, but realloc is no more complicated than a free and malloc (of course, that's not what it does). On some embedded systems you'd never dream of calling malloc in the first place. On other embedded systems, you almost pretend it's a desktop.
If your embedded system has a poor allocator or not much RAM, then realloc might cause fragmentation problems. That is why you avoid malloc too, because it causes the same problems.
The other reason is that some embedded systems must be high reliability, and malloc / realloc can return NULL. In these situations, all memory is allocated statically.
In many embedded systems, a custom memory manager can provide better semantics than are available with malloc/realloc/free. Some applications, for example, can get by with a simple mark-and-release allocator. Keep a pointer to the start of not-yet-allocated memory, allocate things by moving the pointer upward, and jettison them by moving the pointer below them. That won't work if it's necessary to jettison some things while keeping other things that were allocated after them, but in situations where that isn't necessary the mark-and-release allocator is cheaper than any other allocation method. In some cases where the mark-and-release allocator isn't quite good enough, it may be helpful to allocate some things from the start of the heap and other things from the end of the heap; one may free up the things allocated from one end without affecting those allocated from the other.
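A minimal sketch of the mark-and-release allocator just described (pool size and names are illustrative; alignment handling is omitted):

#include <stddef.h>

static unsigned char pool[4096];       /* pool size is illustrative */
static size_t top = 0;                 /* start of not-yet-allocated memory */

void *mr_alloc(size_t n) {             /* allocate by moving the pointer up */
    if (n > sizeof pool - top)
        return NULL;
    void *p = &pool[top];
    top += n;
    return p;
}

size_t mr_mark(void) { return top; }          /* remember the current level */
void mr_release(size_t mark) { top = mark; }  /* jettison everything above */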
Another approach that can sometimes be useful in non-multitasking or cooperative-multitasking systems is to use memory handles rather than direct pointers. In a typical handle-based system, there's a table of all allocated objects, built at the top of memory working downward, and objects themselves are allocated from the bottom up. Each allocated object in memory holds either a reference to the table slot that references it (if live) or else an indication of its size (if dead). The table entry for each object will hold the object's size as well as a pointer to the object in memory. Objects may be allocated by simply finding a free table slot (easy, since table slots are all fixed size), storing the address of the object's table slot at the start of free memory, storing the object itself just beyond that, and updating the start of free memory to point just past the object. Objects may be freed by replacing the back-reference with a length indication, and freeing the object in the table. If an allocation would fail, relocate all live objects starting at the top of memory, overwriting any dead objects, and updating the object table to point to their new addresses.
The performance of this approach is non-deterministic, but fragmentation is not a problem. Further, it may be possible in some cooperative multitasking systems to perform garbage collection "in the background"; provided that the garbage collector can complete a pass in the time it takes to chug through the slack space, long waits can be avoided. Further, some fairly simple "generational" logic may be used to improve average-case performance at the expense of worst-case performance.
realloc can fail, just like malloc can. This is one reason why you probably should not use either in an embedded system.
realloc is worse than malloc in that you will need to have the old and new pointers valid during the realloc. In other words, you will need 2X the memory space of the original malloc, plus any additional amount (assuming realloc is increasing the buffer size).
Using realloc is going to be very dangerous, because it may return a new pointer to your memory location. This means:
All references to the old pointer must be corrected after realloc.
For a multi-threaded system, the realloc must be atomic. If you are disabling interrupts to achieve this, the realloc time might be long enough to cause a hardware reset by the watchdog.
Update: I just wanted to make it clear. I'm not saying that realloc is worse than implementing realloc using a malloc/free. That would be just as bad. If you can do a single malloc and free, without resizing, it's slightly better, yet still dangerous.
The issues with realloc() in embedded systems are no different than in any other system, but the consequences may be more severe in systems where memory is more constrained, and the consequences of failure less acceptable.
One problem not mentioned so far is that realloc() (and any other dynamic memory operation, for that matter) is non-deterministic; that is, its execution time is variable and unpredictable. Many embedded systems are also real-time systems, and in such systems non-deterministic behaviour is unacceptable.
Another issue is that of thread-safety. Check your library's documentation to see whether it is thread-safe for dynamic memory allocation. Generally, if it is, you will need to implement mutex stubs to integrate it with your particular thread library or RTOS.
Not all embedded systems are alike; if your embedded system is not real-time (or the process/task/thread in question is not real-time, and is independent of the real-time elements), and you have large amounts of unused memory or virtual memory capabilities, then the use of realloc() may be acceptable, if perhaps ill-advised in most cases.
Rather than accept "conventional wisdom" and bar dynamic memory regardless, you should understand your system's requirements and the behaviour of the dynamic memory functions, and make an appropriate decision. That said, if you are building code for reusability and portability to as wide a range of platforms and applications as possible, then reallocation is probably a really bad idea. Don't hide it in a library, for example.
Note too that the same problem exists with C++ STL container classes that dynamically reallocate and copy data when the container capacity is increased.
Well, it's better to avoid using realloc if possible, since this operation is costly, especially when put into a loop: for example, if some allocated memory needs to be extended and there is no gap between the current block and the next allocated block, this operation is almost equal to: malloc + memcpy + free.
