Which functions in the C standard library commonly encourage bad practice?

This is inspired by this question and the comments on one particular answer, from which I learnt that strncpy is not a very safe string-handling function in C, and that it pads the destination with zeros until it reaches n, something I was unaware of.
Specifically, to quote R..
strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null termination, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length).
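For illustration, a minimal sketch of the snprintf idiom R.. describes (the wrapper name is hypothetical):

#include <stdio.h>

/* Sketch of the "safe strcpy" idiom quoted above: snprintf always
   null-terminates and returns the length it wanted to write, so
   truncation is detectable. Returns 0 on success, -1 on failure. */
int safe_copy(char *dest, size_t destsz, const char *src)
{
    int needed = snprintf(dest, destsz, "%s", src);
    if (needed < 0 || (size_t)needed >= destsz)
        return -1; /* encoding error or truncated output */
    return 0;
}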
And from Jonathan Leffler
Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc. - so it is more error-prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove(), because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy().
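A hedged sketch of that approach, with every size computed up front (the function name is illustrative):

#include <string.h>

/* Concatenation using only memmove, per the suggestion above. The
   caller must guarantee dest holds strlen(a) + strlen(b) + 1 bytes. */
void concat_known(char *dest, const char *a, const char *b)
{
    size_t alen = strlen(a);
    size_t blen = strlen(b);
    memmove(dest, a, alen);
    memmove(dest + alen, b, blen + 1); /* +1 carries the terminator */
}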
So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question:
Which C standard library functions are used inappropriately, in ways that may lead to security problems, code defects, or inefficiencies?
In the interests of objectivity, I have a number of criteria for an answer:
Please, if you can, cite design reasons behind the function in question, i.e. its intended purpose.
Please highlight the misuse to which the code is currently put.
Please state why that misuse may lead towards a problem. I know that should be obvious but it prevents soft answers.
Please avoid:
Debates over naming conventions of functions (except where this unequivocally causes confusion).
"I prefer x over y" - preference is OK, we all have them, but I'm interested in actual unexpected side effects and how to guard against them.
As this is likely to be considered subjective and has no definite answer I'm flagging for community wiki straight away.
I am also working as per C99.

Which C standard library functions are used inappropriately, in ways that may lead to security problems, code defects, or inefficiencies?
I'm gonna go with the obvious:
char *gets(char *s);
Its remarkable particularity is that it's simply impossible to use appropriately.
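A minimal sketch of the standard replacement, reading a bounded line with fgets instead:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[128];
    /* Unlike gets(), fgets() takes the buffer size and cannot overrun. */
    if (fgets(line, sizeof line, stdin) != NULL)
        line[strcspn(line, "\n")] = '\0'; /* drop the newline fgets keeps */
    return 0;
}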

A common pitfall with the strtok() function is to assume that the parsed string is left unchanged, while it actually replaces the separator character with '\0'.
Also, strtok() is used by making subsequent calls to it, until the entire string is tokenized. Some library implementations store strtok()'s internal state in a global variable, which may induce some nasty surprises if strtok() is called from multiple threads at the same time.
The CERT C Secure Coding Standard lists many of these pitfalls you asked about.
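A small sketch showing both pitfalls at once - the string must be writable, and it is destroyed as it is tokenized:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[] = "a,b,c"; /* must be writable: strtok edits it in place */
    for (char *tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ","))
        printf("%s\n", tok);
    /* buf now contains "a\0b\0c" - the separators have been overwritten,
       and the hidden internal state makes concurrent use unsafe. */
    return 0;
}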

In almost all cases, atoi() should not be used (this also applies to atof(), atol() and atoll()).
This is because these functions do not detect out-of-range errors at all - the standard simply says "If the value of the result cannot be represented, the behavior is undefined". So the only time they can be safely used is if you can prove that the input will certainly be within range (for example, if you pass a string of length 4 or less to atoi(), it cannot be out of range).
Instead, use one of the strtol() family of functions.
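A sketch of a range-checked conversion built on strtol (the wrapper name is illustrative):

#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Converts s to an int, rejecting junk and out-of-range input -
   the checks atoi() cannot perform. Returns 0 on success. */
int parse_int(const char *s, int *out)
{
    char *end;
    long v;
    errno = 0;
    v = strtol(s, &end, 10);
    if (end == s || *end != '\0')
        return -1; /* no digits, or trailing garbage */
    if (errno == ERANGE || v < INT_MIN || v > INT_MAX)
        return -1; /* out of range for long or for int */
    *out = (int)v;
    return 0;
}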

Let us extend the question to interfaces in a broader sense.
errno:
technically it is not even clear what it is: a variable, a macro, an implicit function call? In practice on modern systems it is mostly a macro that expands into a function call, to give each thread its own error state. It is evil:
because it may cause overhead for the caller to access the value, to check the "error" (which might just be an exceptional event)
because it even imposes at some places that the caller clears this "variable" before making a library call
because it implements a simple error return by setting a global state of the library.
The forthcoming standard gets the definition of errno a bit straighter, but these uglinesses remain.

There is often a strtok_r() available (it is POSIX, not standard C), which avoids strtok()'s reentrancy problem.
For realloc, if you need to use the old pointer, it's not that hard to use another variable. If your program fails with an allocation error, then cleaning up the old pointer is often not really necessary.
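That idiom, sketched - assign realloc's result to a temporary so the old pointer survives a failed call:

#include <stdlib.h>

/* Grows p to newsize. On failure the original block is freed here;
   callers that want to keep using it on failure can skip the free(). */
void *grow(void *p, size_t newsize)
{
    void *tmp = realloc(p, newsize);
    if (tmp == NULL) {
        free(p);
        return NULL;
    }
    return tmp;
}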

I would put printf and scanf pretty high up on this list. The fact that you have to get the format specifiers exactly right makes these functions tricky to use and extremely easy to get wrong. It's also very hard to avoid buffer overruns when reading data in. Moreover, the "printf format string vulnerability" has probably caused countless security holes when well-intentioned programmers pass client-supplied strings as the first argument to printf, only to find the stack smashed and security compromised many years down the line.
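The vulnerability in one line - treat external input as data, never as a format:

#include <stdio.h>

void log_message(const char *user_input)
{
    /* printf(user_input);  BAD: %s, %n etc. in the input are interpreted
       as format specifiers, reading or writing through the stack. */
    printf("%s", user_input); /* the input is data, not a format string */
}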

Any of the functions that manipulate global state, like gmtime() or localtime(). These functions simply can't be used safely in multiple threads.
EDIT: rand() is in the same category it would seem. At least there are no guarantees of thread-safety, and on my Linux system the man page warns that it is non-reentrant and non-threadsafe.
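Where POSIX is available (this is not C99), the _r variants write into caller-supplied storage instead of a shared static buffer - a sketch:

#include <stdio.h>
#include <time.h>

void print_local_date(time_t t)
{
    struct tm tmbuf;
    /* localtime_r avoids localtime's shared static struct tm, so
       concurrent calls do not trample each other's results. */
    if (localtime_r(&t, &tmbuf) != NULL)
        printf("%d-%02d-%02d\n",
               tmbuf.tm_year + 1900, tmbuf.tm_mon + 1, tmbuf.tm_mday);
}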

One of my bêtes noires is strtok(), because it is non-reentrant and because it hacks the string it is processing into pieces, inserting a NUL at the end of each token it isolates. The problems with this are legion; it is distressingly often touted as a solution to a problem, but is as often a problem itself. Not always - it can be used safely. But only if you are careful. The same is true of most functions, with the notable exception of gets(), which cannot be used safely.

There's already one answer about realloc, but I have a different take on it. A lot of the time, I've seen people write realloc when they mean free; malloc - in other words, when they have a buffer full of trash that needs to change size before storing new data. This of course leads to a potentially large, cache-thrashing memcpy of trash that's about to be overwritten.
If used correctly with growing data (in a way that avoids worst-case O(n^2) performance for growing an object to size n, i.e. growing the buffer geometrically instead of linearly when you run out of space), realloc has doubtful benefit over simply doing your own new malloc, memcpy, and free cycle. The only way realloc can ever avoid doing this internally is when you're working with a single object at the top of the heap.
If you like to zero-fill new objects with calloc, it's easy to forget that realloc won't zero-fill the new part.
And finally, one more common use of realloc is to allocate more than you need, then resize the allocated object down to just the required size. But this can actually be harmful (additional allocation and memcpy) on implementations that strictly segregate chunks by size, and in other cases might increase fragmentation (by splitting off part of a large free chunk to store a new small object, instead of using an existing small free chunk).
I'm not sure if I'd say realloc encourages bad practice, but it's a function I'd watch out for.
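For reference, a sketch of the geometric-growth pattern mentioned above (names are illustrative):

#include <stdlib.h>

/* Appends one byte, doubling capacity when full; total copying cost
   stays O(n) over n appends, instead of O(n^2) for +1 growth. */
int push_byte(char **buf, size_t *len, size_t *cap, char c)
{
    if (*len == *cap) {
        size_t newcap = *cap ? *cap * 2 : 16;
        char *tmp = realloc(*buf, newcap);
        if (tmp == NULL)
            return -1; /* *buf is untouched and still owned by the caller */
        *buf = tmp;
        *cap = newcap;
    }
    (*buf)[(*len)++] = c;
    return 0;
}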

How about the malloc family in general? The vast majority of large, long-lived programs I've seen use dynamic memory allocation all over the place as if it were free. Of course real-time developers know this is a myth, and careless use of dynamic allocation can lead to catastrophic blow-up of memory usage and/or fragmentation of address space to the point of memory exhaustion.
In some higher-level languages without machine-level pointers, dynamic allocation is not so bad because the implementation can move objects and defragment memory during the program's lifetime, as long as it can keep references to these objects up-to-date. A non-conventional C implementation could do this too, but working out the details is non-trivial and it would incur a very significant cost in all pointer dereferences and make pointers rather large, so for practical purposes, it's not possible in C.
My suspicion is that the correct solution is usually for long-lived programs to perform their small routine allocations as usual with malloc, but to keep large, long-lived data structures in a form where they can be reconstructed and replaced periodically to fight fragmentation, or as large malloc blocks containing a number of structures that make up a single large unit of data in the application (like a whole web page presentation in a browser), or on-disk with a fixed-size in-memory cache or memory-mapped files.

On a wholly different tack, I've never really understood the benefits of atan() when there is atan2(). The difference is that atan2() takes two arguments, and returns an angle anywhere in the range -π..+π. Further, it avoids divide by zero errors and loss of precision errors (dividing a very small number by a very large number, or vice versa). By contrast, the atan() function only returns a value in the range -π/2..+π/2, and you have to do the division beforehand (I don't recall a scenario where atan() could be used without there being a division, short of simply generating a table of arctangents). Providing 1.0 as the divisor for atan2() when given a simple value is not pushing the limits.
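A small sketch of the difference:

#include <math.h>

/* atan2(y, x) recovers the full-circle angle with no division by the
   caller; atan(y / x) loses the quadrant and can divide by zero. */
double angle_of(double x, double y)
{
    /* For a single ratio v, atan2(v, 1.0) matches atan(v). */
    return atan2(y, x); /* result in -pi..+pi */
}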

Another answer, since these are not really related, rand:
it is of unspecified random quality
it is not re-entrant

Some of these functions modify some global state. (On Windows) this state is maintained per thread - you can get unexpected results. For example, the first call of rand in every thread will give the same result, and it requires some care to make it pseudorandom but deterministic (for debug purposes).

basename() and dirname() aren't threadsafe.

Related

Why do so many standard C functions tamper with parameters instead of returning values?

Many functions like strcat, strcpy and alike don't return the actual value but change one of the parameters (usually a buffer). This of course creates a boatload of side effects.
Wouldn't it be far more elegant to just return a new string? Why isn't this done?
Example:
#include <stdlib.h>

char *copy_string(char *text, size_t length) {
    char *result = malloc(sizeof(char) * length);
    for (size_t i = 0; i < length; ++i) {
        result[i] = text[i];
    }
    return result;
}

int main() {
    char *copy = copy_string("Hello World", 12);
    // *result now lingers in memory and can not be freed?
}
I can only guess it has something to do with memory leaking since there is dynamic memory being allocated inside of the function which you can not free internally (since you need to return a pointer to it).
Edit: From the answers it seems that it is good practice in C to work with parameters rather than creating new variables. So I should aim for building my functions like that?
Edit 2: Will my example code lead to a memory leak? Or can *result be free'd?
To answer your original question: C, at the time it was designed, was tailored to be a language of maximum efficiency. It was, basically, just a nicer way of writing assembly code (the guy who designed it wrote his own compiler for it).
What you say (that parameters are often used rather than return codes) is mainly true for string handling. Most other functions (those that deal with numbers for example) work through return codes as expected. Or they only modify values for parameters if they have to return more than one value.
String handling in C today is considered one of the major (if not THE major) weaknesses in C. But those functions were written with performance in mind, and with the machines available in those days (and the intent of performance), working on the caller's buffers was the way of choice.
Re your edit 1: Today other intents may apply. Performance usually isn't the limiting factor. Equally or more important are readability, robustness, and proneness to error. And generally, as said, string handling in C is today considered a horrible relic of the past. So it's basically your choice, depending on your intent.
Re your edit 2: Yes, the memory will leak. You need to call free(copy);. Which ties into edit 1: proneness to error - it's easy to forget the free and create leaks that way (or to attempt to free it twice, or to access the memory after it was freed). Returning a fresh allocation may be more readable, but it is also more prone to error (perhaps even more than the clunky original C approach of modifying the caller's buffer).
Generally, I'd suggest, whenever you have the choice, working with a newer dialect that supports std::string or something similar.
Why do so many standard C functions tamper with parameters instead of returning values?
Because that's often what the users of the C library want.
Many functions like strcat, strcpy and alike don't return the actual value but change one of the parameters (usually a buffer). This of course creates a boatload of side effects. Wouldn't it be far more elegant to just return a new string? Why isn't this done?
It's not very efficient to allocate memory, and it requires the user to free() it later, which is an unnecessary burden on the user. Efficiency, and letting users do what they want (even if they want to shoot themselves in the foot), is part of C's philosophy.
Besides, there are syntax/implementation issues. For example, how can the following be done if the strcpy() function actually returns a newly allocated string?
char arr[256] = "Hello";
strcpy(arr, "world");
Because C doesn't allow you to assign something to an array (arr).
Basically, you are questioning C is the way it is. For that question, the common answer is "historical reasons".
Two reasons:
Properly designed functions should only concern themselves with their designated purpose, and not unrelated things such as memory allocation.
Making a hard copy of the string would make the function far slower.
So for your example, if there is a need for a hard copy, the caller should malloc the buffer and afterwards call strcpy. That separates memory allocation from the algorithm.
On top of that, good design practice dictates that the module that allocated memory should also be responsible for freeing it. Otherwise the caller might not even realize that the function is allocating memory, and there would be a memory leak. If the caller instead is responsible for the allocation, then it is obvious that the caller is also responsible for clean-up.
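The separation described here, sketched - the caller owns both the allocation and the free; strcpy only copies:

#include <stdlib.h>
#include <string.h>

void demo(const char *src)
{
    char *copy = malloc(strlen(src) + 1); /* caller controls allocation */
    if (copy == NULL)
        return;
    strcpy(copy, src); /* the algorithm, separate from allocation */
    /* ... use copy ... */
    free(copy); /* the module that allocated also frees */
}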
Overall, C standard library functions are designed to be as fast as possible, meaning they will strive to meet the case where the caller has minimal requirements. A typical example of such a function is malloc, which doesn't even set the allocated data to zero, because that would take extra time. Instead they added an additional function calloc for that purpose.
Other languages have different philosophies, where they would for example force a hard copy for all string handling functions ("immutable objects"). This makes the function easier to work with and perhaps also the code easier to read, but it comes at the expense of a slower program, which needs more memory.
This is one of the main reasons why C is still widely used for development. It tends to be much faster and more efficient than any other language (except raw assembler).

Why is strdup considered to be evil

I've seen some posters stating that strdup is evil. Is there a consensus on this? I've used it without any guilty feelings and can see no reason why it is worse than using malloc/memcpy.
The only thing I can think might earn strdup a reputation is that callers might misuse it (eg. not realise they have to free the memory returned; try to strcat to the end of a strdup'ed string). But then malloc'ed strings are not free from the possibility of misuse either.
Thanks for the replies and apologies to those who consider the question unhelpful (votes to close). In summary of the replies, it seems that there is no general feeling that strdup is evil per se, but a general consensus that it can, like many other parts of C, be used improperly or unsafely.
There is no 'correct' answer really, but for the sake of accepting one, I accepted nneoneo's answer - it could equally have been R..'s answer.
Two reasons I can think of:
It's not strictly ANSI C, but rather POSIX. Consequently, some compilers (e.g. MSVC) discourage use (MSVC prefers _strdup), and technically the C standard could define its own strdup with different semantics since str is a reserved prefix. So, there are some potential portability concerns with its use.
It hides its memory allocation. Most other str functions don't allocate memory, so users might be misled (as you say) into believing the returned string doesn't need to be freed.
But, aside from these points, I think that careful use of strdup is justified, as it can reduce code duplication and provides a nice implementation for common idioms (such as strdup("constant string") to get a mutable, returnable copy of a literal string).
My answer rather supports strdup, and it is no worse than any other function in C.
POSIX is a standard and strdup is not too difficult to implement if portability becomes an issue.
Whether to free the memory allocated by strdup shouldn't be an issue if one takes a little time to read the man page and understand how strdup works. If one doesn't understand how a function works, it's very likely that person is going to mess something up; this is applicable to any function, not just strdup.
In C, memory & most other things are managed by the programmer, so strdup is no worse than forgetting to free malloc'ed memory, failing to null terminate a string, using incorrect format string in scanf (and invoking undefined behaviour), accessing dangling pointer etc.
(I really wanted to post this as a comment, but couldn't add in a single comment. Hence, posted it as an answer).
I haven't really heard strdup described as evil, but some possible reasons some people dislike it:
It's not standard C (but is in POSIX). However I find this reason silly because it's nearly a one-line function to add on systems that lack it.
Blindly duplicating strings all over the place rather than using them in-place when possible wastes time and memory and introduces failure cases into code that might otherwise be failure-free.
When you do need a copy of a string, it's likely you actually need more space to modify or build on it, and strdup does not give you that.
I think the majority of the concern about strdup comes from security concerns regarding buffer overruns and improperly formatted strings. If a non-null-terminated string is passed to strdup, it can allocate a string of undefined length. I don't know if this can be specifically leveraged into an attack, but in general it is good secure-coding practice to use only string functions which take a maximum length instead of relying on the null character alone.
Many people obviously don't, but I personally find strdup evil for several reasons, the main one being that it hides the allocation. The other str* functions and most other standard functions require no free afterwards, so strdup looks innocuous enough and you can forget to clean up after it. dmckee suggested just adding it to your mental list of functions that need cleaning up after, but why? I don't see a big advantage in reducing two medium-length lines to one short one.
It always allocates memory on the heap, and with C99's (is it 99?) VLAs, you have yet another reason to just use strcpy (you don't even need malloc). You can't always do this, but when you can, you should.
It's not part of the ISO standard (but it is part of the POSIX standard, thanks Wiz), but that's really a small point, as R.. mentioned that it can be added easily. If you write portable programs, I'm not sure how you'd tell whether it was already defined or not, though...
These are of course a few of my own reasons, no one else's. To answer your question, there is no consensus that I'm aware of.
If you're writing programs just for yourself and you find strdup no problem, then there's much less reason not to use it than if you are writing a program to be read by many people of many skill levels and ages.
My reason for disliking strdup, which hasn't been mentioned, is that it is resource allocation without a natural pair. Let's try a silly game: I say malloc, you say free. I say open you say close. I say create you say destroy. I say strdup you say ....?
Actually, the answer to strdup is free, of course, and the function would have been better named malloc_and_strcpy to make that clear. But many C programmers don't think of it that way and forget that strdup requires its opposite or "ending" free to deallocate.
In my experience, it is very common to find memory leaks in code which calls strdup. It's an odd function which combines strlen, malloc and strcpy.
Why is strdup considered to be evil
Conflicts with Future language directions.
Reliance on errno state.
Easier to make your own strdup() that is not quite like the POSIX one nor the future C2x one.
With C2x on the way with certain inclusion of strdup(), using strdup() before that has these problems.
The C2x proposed strdup() does not mention errno whereas POSIX does. Code that relies on setting errno to ENOMEM or EINVAL can have trouble in the future.
The C2x proposed char *strdup(const char *s1) uses a const char * as the parameter. User-coded versions of strdup() too often use char *s1, incurring a difference that can break select code that counts on the char * signature, e.g. through function pointers.
User code that did roll its own strdup() was not following C's Future language directions with its "Function names that begin with str, mem, or wcs and a lowercase letter may be added to the declarations in the <string.h> header", and so may incur library conflicts between the new strdup() and the user's strdup().
If user code wants strdup() code before C2x, consider naming it something different like my_strdup() and use a const char * parameter. Minimize or avoid any reliance on the state of errno after the call returns NULL.
My my_strdup() effort - warts and all.
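For illustration (separate from the linked effort), a minimal sketch along those lines - const char * parameter, a name outside the reserved str prefix, no reliance on errno:

#include <stdlib.h>
#include <string.h>

char *my_strdup(const char *s1)
{
    size_t len = strlen(s1) + 1; /* include the terminator */
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s1, len);
    return copy; /* NULL on allocation failure; errno state unspecified */
}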

Why isn't there a "memsize" in C which returns the size of a memory block allocated in the heap using malloc?

OK, it could be called something else, as with _msize in Visual Studio.
But why is it not in the standard to return the size of the memory, given the memory block allocated using malloc? Since we cannot tell how much memory the pointer returned by malloc points to, we could use this "memsize" call to return that information should we need it. "memsize" would be implementation-specific, as are malloc/free.
Just asking as I had to write a wrapper sometime back to store some additional bytes for the size.
Because the C library, including malloc, was designed for minimum overhead. A function like the one you want would require the implementation to record the exact size of the allocation, while implementations may now choose to "round" the size up as they please, to prevent actually reallocating in realloc.
Storing the size requires an extra size_t per allocation, which may be heavy for embedded systems. (And for the PDP-11s and 286s that were still abundant when C89 was written.)
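A sketch of the kind of wrapper the question mentions, prepending a header that records the requested size (names are illustrative; a production version must get alignment right for every type):

#include <stdlib.h>

/* Header placed in front of each allocation; the double member is a
   crude way to force alignment suitable for most types. */
typedef union {
    size_t size;
    double align;
} header;

void *sized_malloc(size_t n)
{
    header *h = malloc(sizeof *h + n);
    if (h == NULL)
        return NULL;
    h->size = n;
    return h + 1; /* hand the caller the payload after the header */
}

size_t memsize(void *p)
{
    return ((header *)p - 1)->size; /* step back to the header */
}

void sized_free(void *p)
{
    free((header *)p - 1);
}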
To turn this around, why should there be? There's plenty of stuff in the Standards already, particularly the C++ standard. What are your use cases?
You ask for an adequately-sized chunk of memory, and you get it (or a null pointer or exception). There may or may not be additional bytes allocated, and some of these may be reserved. This is conceptually simple: you ask for what you want, and you get something you can use.
Why complicate it?
I don't think there is any definite answer. The developers of the standard probably considered it, and weighed the pros and cons. Anything that goes into a standard must be implemented by every implementation, so adding things to it places a significant burden on developers. I guess they just didn't find that feature useful enough to warrant this.
In C++, the wrapper that you talk about is provided by the standard. If you allocate a block of memory with std::vector, you can use the member function vector::size() to determine the size of the array and use vector::capacity() to determine the size of the allocation (which might be different).
C, on the other hand, is a low-level language which leaves such concerns to be managed by the developer, since tracking it dynamically (as you suggest) is not strictly necessary and would be redundant in many cases.

simple c malloc

While there are lots of different sophisticated implementations of malloc / free for C/C++, I'm looking for a really simple and (especially) small one that works on a fixed-size buffer and supports realloc. Thread-safety etc. are not needed and my objects are small and do not vary much in size. Is there any implementation that you could recommend?
EDIT:
I'll use that implementation for a communication buffer at the receiver to transport objects with variable size (unknown to the receiver). The allocated objects won't live long, but there are possibly several objects used at the same time.
As everyone seems to recommend the standard malloc, I should perhaps reformulate my question. What I need is the "simplest" implementation of malloc on top of a buffer that I can start to optimize for my own needs. Perhaps the original question was unclear because I'm not looking for an optimized malloc, only for a simple one. I don't want to start with a glibc-malloc and extend it, but with a light-weight one.
Kernighan & Ritchie seem to have provided a small malloc / free in their C book - that's exactly what I was looking for (a reimplementation can be found here). I'll only add a simple realloc.
I'd still be glad about suggestions for other implementations that are as simple and concise as this one (for example, using doubly-linked lists).
I recommend the one that came with standard library bundled with your compiler.
One should also note there is no legal way to redefine malloc/free.
The malloc/free/realloc that come with your compiler are almost certainly better than some functions you're going to plug in.
It is possible to improve things for fixed-size objects, but that usually doesn't involve trying to replace the malloc but rather supplementing it with memory pools. Typically, you would use malloc to get a large chunk of memory that you can divide into discrete blocks of the appropriate size, and manage those blocks.
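A minimal sketch of such a pool for fixed-size objects - one malloc'd slab carved into equal blocks threaded on a free list (assumes the block size is at least sizeof(void *) and suitably aligned):

#include <stdlib.h>

typedef struct pool {
    void *free_list; /* singly linked list threaded through free blocks */
    char *slab;      /* the one big allocation backing the pool */
} pool;

int pool_init(pool *p, size_t block, size_t count)
{
    size_t i;
    p->slab = malloc(block * count);
    if (p->slab == NULL)
        return -1;
    p->free_list = NULL;
    for (i = 0; i < count; i++) { /* thread every block onto the list */
        void **node = (void **)(p->slab + i * block);
        *node = p->free_list;
        p->free_list = node;
    }
    return 0;
}

void *pool_alloc(pool *p)
{
    void **node = p->free_list;
    if (node == NULL)
        return NULL; /* pool exhausted */
    p->free_list = *node;
    return node;
}

void pool_free(pool *p, void *blk)
{
    *(void **)blk = p->free_list; /* push the block back on the list */
    p->free_list = blk;
}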
It sounds to me that you are looking for a memory pool. The Apache Runtime library has a pretty good one, and it is cross-platform too.
It may not be entirely light-weight, but the source is open and you can modify it.
There's a relatively simple memory pool implementation in CCAN:
http://ccodearchive.net/info/antithread/alloc.html
This looks like it fits your bill. Sure, alloc.c is 1230 lines, but a good chunk of that is test code and list manipulation. It's a bit more complex than the code you implemented, but decent memory allocation is complicated.
I would generally not reinvent the wheel with allocation functions unless my memory-usage pattern either is not supported by malloc/etc. or memory can be partitioned into one or more pre-allocated zones, each containing one or two LIFO heaps (freeing any object releases all objects in the same heap that were allocated after it). In a common version of the latter scenario, the only time anything is freed is when everything is freed; in such a case, malloc() may be usefully rewritten as:
char *malloc_ptr;

void *malloc(size_t size)
{
    void *ret = (void *)malloc_ptr;
    malloc_ptr += size; /* bump the pointer; nothing is ever freed */
    return ret;
}
Zero bytes of overhead per allocated object. One example where a custom memory manager was used because malloc() was insufficient was an application in which variable-length test records produced variable-length result records (which could be longer or shorter); the application needed to support fetching results and adding more tests mid-batch. Tests were stored at increasing addresses starting at the bottom of the buffer, while results were stored at decreasing addresses starting at the top. As a background task, tests after the current one would be copied to the start of the buffer (since there was only one pointer used to read tests for processing, the copy logic would update that pointer as required). Had the application used malloc/free, it's possible that the interleaving of allocations for tests and results could have fragmented memory, but with the system used there was no such risk.
Echoing the advice to measure first and only specialize if performance sucks - it should be easy to abstract your malloc/free/reallocs such that replacement is straightforward.
Given the specialized platform I can't comment on effectiveness of the runtimes. If you do investigate your own then object pooling (see other answers) or small object allocation a la Loki or this is worth a look. The second link has some interesting commentary on the issue as well.

Is it bad practice to declare an array mid-function

In an effort to only ask what I'm really looking for here... I'm really only concerned with whether it's considered bad practice or not to declare an array like below, where the size could vary. If it is... I would generally malloc() instead.
void MyFunction()
{
    int size;
    //do a bunch of stuff
    size = 10; //but could have been something else
    int array[size];
    //do more stuff...
}
Generally yes, this is bad practice, although new standards allow you to use this syntax. In my opinion you must allocate (on the heap) the memory you want to use and release it once you're done with it. Since there is no portable way of checking if the stack is enough to hold that array you should use some methods that can really be checked - like malloc/calloc & free. In the embedded world stack size can be an issue.
If you are worried about fragmentation you can create your own memory allocator, but this is a totally different story.
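The checkable alternative recommended here, as a sketch:

#include <stdlib.h>

void my_function(size_t size)
{
    int *array = malloc(size * sizeof *array);
    if (array == NULL)
        return; /* heap exhaustion is detectable; stack overflow is not */
    /* ... do more stuff ... */
    free(array);
}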
That depends. The first clearly isn't what I'd call "proper", and the second is only under rather limited circumstances.
In the first, you shouldn't cast the return from malloc in C -- doing so can cover up the bug of accidentally omitting inclusion of the correct header (<stdlib.h>).
In the second, you're restricting the code to C99 or a gcc extension. As long as you're aware of that, and it works for your purposes, it's all right, but hardly what I'd call an ideal of portability.
As far as what you're really asking: with the minor bug mentioned above fixed, the first is portable, but may be slower than you'd like. If the second is portable enough for your purposes, it'll normally be faster.
For your question, I think each has its advantages and disadvantages.
Dynamic Allocation:
Slow, but you can detect when there is no memory to be given to your program by checking the pointer.
Stack Allocation:
Only in C99, and it is blazingly fast, but in case of stack overflow you are out of luck.
In summary, when you need a small array, reserve it on the stack. Otherwise, use dynamic memory wisely.
The argument against VLAs runs that because of the absolute badness of overflowing the stack, by the time you've done enough thinking/checking to make them safe, you've done enough thinking/checking to use a fixed-size array:
1) In order to safely use VLAs, you must know that there is enough stack available.
2) In the vast majority of cases, the way that you know there's enough stack is that you know an upper bound on the size required, and you know (or at least are willing to guess or require) a lower bound on the stack available, and the one is smaller than the other. So just use a fixed-size array.
3) In the vast majority of the few cases that aren't that simple, you're using multiple VLAs (perhaps one in each call to a recursive function), and you know an upper bound on their total size, which is less than a lower bound on available stack. So you could use a fixed-size array and divide it into pieces as required.
4) If you ever encounter one of the remaining cases, in a situation where the performance of malloc is unacceptable, do let me know...
It may be more convenient, from the POV of the source code, to use VLAs. For instance you can use sizeof (in the defining scope) instead of maintaining the size in a variable, and that business with dividing an array into chunks might require passing an extra parameter around. So there's some small gain in convenience, sometimes.
It's also easier to miss that you're using a humongous amount of stack, yielding undefined behavior, if instead of a rather scary-looking int buf[1920*1024] or int buf[MAX_IMG_SIZE] you have an int buf[img->size]. That works fine right up to the first time you actually handle a big image. That's broadly an issue of proper testing, but if you miss some possible difficult inputs, then it won't be the first or last test suite to do so. I find that a fixed-size array reminds me either to put in fixed-size checks of the input, or to replace it with a dynamic allocation and stop worrying whether it fits on the stack or not. There is no valid option to put it on the stack and not worry whether it fits...
Two points from a UNIX/C perspective:
malloc is only slow when you force it to call brk(), meaning that for reasonable arrays it is the same as allocating stack space for a variable. By the way, method #2 (via alloca, in the libc code I have seen) also invokes brk() for huge objects. So it is a wash. Note: with #2 and #1 you still have to invoke, directly or indirectly, a memset-type call to zero the bytes in the array. This is just a side note to the real issue (IMO):
The real issue is memory leaks. alloca cleans up after itself when the function returns, so #2 is less likely to cause a problem. With malloc/calloc you have to call free() or start a leak.
