I've seen some posters stating that strdup is evil. Is there a consensus on this? I've used it without any guilty feelings and can see no reason why it is worse than using malloc/memcpy.
The only thing I can think might earn strdup a reputation is that callers might misuse it (eg. not realise they have to free the memory returned; try to strcat to the end of a strdup'ed string). But then malloc'ed strings are not free from the possibility of misuse either.
Thanks for the replies and apologies to those who consider the question unhelpful (votes to close). In summary of the replies, it seems that there is no general feeling that strdup is evil per se, but a general consensus that it can, like many other parts of C, be used improperly or unsafely.
There is no 'correct' answer really, but for the sake of accepting one, I accepted @nneoneo's answer - it could equally have been @R..'s answer.
Two reasons I can think of:
It's not strictly ANSI C, but rather POSIX. Consequently, some compilers (e.g. MSVC) discourage use (MSVC prefers _strdup), and technically the C standard could define its own strdup with different semantics since str is a reserved prefix. So, there are some potential portability concerns with its use.
It hides its memory allocation. Most other str functions don't allocate memory, so users might be misled (as you say) into believing the returned string doesn't need to be freed.
But, aside from these points, I think that careful use of strdup is justified, as it can reduce code duplication and provides a nice implementation for common idioms (such as strdup("constant string") to get a mutable, returnable copy of a literal string).
My answer is rather supporting strdup and it is no worse than any other function in C.
POSIX is a standard and strdup is not too difficult to implement if portability becomes an issue.
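For instance, a minimal sketch of such a fallback (the name my_strdup is mine, to avoid colliding with the reserved str prefix; standard C only):

#include <stdlib.h>
#include <string.h>

/* Minimal strdup fallback: returns NULL on allocation failure. */
char *my_strdup(const char *s)
{
    size_t len = strlen(s) + 1;   /* include the terminating NUL */
    char *copy = malloc(len);
    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;
}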
Whether to free the memory allocated by strdup shouldn't be an issue for anyone who has taken a little time to read the man page and understand how strdup works. If one doesn't understand how a function works, it's very likely that person is going to mess something up; this applies to any function, not just strdup.
In C, memory & most other things are managed by the programmer, so strdup is no worse than forgetting to free malloc'ed memory, failing to null terminate a string, using incorrect format string in scanf (and invoking undefined behaviour), accessing dangling pointer etc.
(I really wanted to post this as a comment, but couldn't add in a single comment. Hence, posted it as an answer).
I haven't really heard strdup described as evil, but some possible reasons some people dislike it:
It's not standard C (but is in POSIX). However I find this reason silly because it's nearly a one-line function to add on systems that lack it.
Blindly duplicating strings all over the place rather than using them in-place when possible wastes time and memory and introduces failure cases into code that might otherwise be failure-free.
When you do need a copy of a string, it's likely you actually need more space to modify or build on it, and strdup does not give you that.
I think the majority of the concern about strdup comes from security concerns regarding buffer overruns and improperly formatted strings. If a non-null-terminated string is passed to strdup, it can allocate a string of undefined length. I don't know if this can be specifically leveraged into an attack, but in general it is good secure coding practice to use only string functions that take a maximum length instead of relying on the null character alone.
Many people obviously don't, but I personally find strdup evil for several reasons,
the main one being that it hides the allocation. The other str* functions, and most other standard functions, require no free afterwards, so strdup looks innocuous enough and you can forget to clean up after it. dmckee suggested just adding it to your mental list of functions that need cleaning up after, but why? I don't see a big advantage in reducing two medium-length lines to one short one.
It always allocates memory on the heap, and with C99's VLAs you have yet another reason to just use strcpy (you don't even need malloc). You can't always do this, but when you can, you should (see the sketch after this list).
It's not part of the ISO standard (but it is part of the POSIX standard, thanks Wiz), but that's really a small point as R.. mentioned that it can be added easily. If you write portable programs, I'm not sure how you'd tell if it was already defined or not though...
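Re the VLA point above, a minimal sketch (assuming C99 - VLAs became optional in C11 - and a src string small enough for the stack):

size_t n = strlen(src) + 1;   /* src: some NUL-terminated string */
char copy[n];                 /* C99 VLA - no malloc, and no free needed */
strcpy(copy, src);            /* copy is only valid inside the enclosing block */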
These are of course a few of my own reasons, no one else's. To answer your question, there is no consensus that I'm aware of.
If you're writing programs just for yourself and you find strdup no problem, then there's much less reason not to use it than if you are writing a program to be read by many people of many skill levels and ages.
My reason for disliking strdup, which hasn't been mentioned, is that it is resource allocation without a natural pair. Let's try a silly game: I say malloc, you say free. I say open, you say close. I say create, you say destroy. I say strdup, you say ...?
Actually, the answer to strdup is free of course, and the function would have been better named malloc_and_strcpy to make that clear. But many C programmers don't think of it that way and forget that strdup requires its opposite or "ending" free to deallocate.
In my experience, it is very common to find memory leaks in code which calls strdup. It's an odd function which combines strlen, malloc and strcpy.
Why is strdup considered to be evil
Conflicts with Future language directions.
Reliance on errno state.
It is easy to make your own strdup() that is not quite like the POSIX one nor the future C2x one.
With C2x on the way, and its certain inclusion of strdup(), using strdup() before then has these problems.
The C2x proposed strdup() does not mention errno whereas POSIX does. Code that relies on setting errno to ENOMEM or EINVAL can have trouble in the future.
The C2x proposed char *strdup(const char *s1) uses a const char * as the parameter. User-coded versions of strdup() too often use char *s1, incurring a difference that can break select code that counts on the char * signature, e.g. function pointers.
User code that rolled its own strdup() was not following C's future language directions, with its "Function names that begin with str, mem, or wcs and a lowercase letter may be added to the declarations in the <string.h> header", and so may incur conflicts between the library's new strdup() and the user's strdup().
If user code wants strdup() code before C2x, consider naming it something different like my_strdup() and use a const char * parameter. Minimize or avoid any reliance on the state of errno after the call returns NULL.
My my_strdup() effort - warts and all.
Many functions like strcat, strcpy and alike don't return the actual value but change one of the parameters (usually a buffer). This of course creates a boatload of side effects.
Wouldn't it be far more elegant to just return a new string? Why isn't this done?
Example:
#include <stdlib.h>

char *copy_string(const char *text, size_t length) {
    char *result = malloc(length); /* sizeof(char) is 1 by definition */
    for (size_t i = 0; i < length; ++i) {
        result[i] = text[i];
    }
    return result;
}

int main(void) {
    char *copy = copy_string("Hello World", 12);
    // *result now lingers in memory and can not be freed?
}
I can only guess it has something to do with memory leaking since there is dynamic memory being allocated inside of the function which you can not free internally (since you need to return a pointer to it).
Edit: From the answers it seems that it is good practice in C to work with parameters rather than creating new variables. So I should aim for building my functions like that?
Edit 2: Will my example code lead to a memory leak? Or can *result be free'd?
To answer your original question: C, at the time it was designed, was tailored to be a language of maximum efficiency. It was, basically, just a nicer way of writing assembly code (the guy who designed it wrote his own compiler for it).
What you say (that parameters are often used rather than return values) is mainly true for string handling. Most other functions (those that deal with numbers, for example) work through return values as expected. Or they only modify parameters if they have to return more than one value.
String handling in C is today considered one of the major (if not THE major) weaknesses of C. But those functions were written with performance in mind, and with the machines available in those days (and the intent of performance), working on the caller's buffers was the way of choice.
Re your edit 1: Today other intents may apply. Performance usually isn't the limiting factor. Equally or more important are readability, robustness, and proneness to error. And generally, as said, the string handling in C is today considered a horrible relic of the past. So it's basically your choice, depending on your intent.
Re your edit 2: Yes, the memory will leak. You need to call free(copy); - which ties into edit 1 (proneness to error): it's easy to forget the free and create leaks that way, or to free it twice, or to access the memory after it was freed. So returning a new string may be more readable, but it is more prone to error too (arguably even more than the clunky original C approach of modifying the caller's buffer).
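Concretely, using the copy_string function from the question:

char *copy = copy_string("Hello World", 12);
/* ... use copy ... */
free(copy);   /* the caller owns the allocation and must release it */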
Generally, I'd suggest, whenever you have the choice, working with a newer language that supports std::string or something similar.
Why do so many standard C functions tamper with parameters instead of returning values?
Because that's often what the users of the C library wants.
Many functions like strcat, strcpy and alike don't return the actual value but change one of the parameters (usually a buffer). This of course creates a boatload of side effects. Wouldn't it be far more elegant to just return a new string? Why isn't this done?
It's not very efficient to allocate memory, and it requires the user to free() it later, which is an unnecessary burden on the user. Efficiency and letting users do what they want (even if they want to shoot themselves in the foot) are part of C's philosophy.
Besides, there are syntax/implementation issues. For example, how can the following be done if the strcpy() function actually returns a newly allocated string?
char arr[256] = "Hello";
strcpy(arr, "world");
Because C doesn't allow you to assign something to an array (arr).
Basically, you are questioning C is the way it is. For that question, the common answer is "historical reasons".
Two reasons:
Properly designed functions should only concern themselves with their designated purpose, and not unrelated things such as memory allocation.
Making a hard copy of the string would make the function far slower.
So for your example, if there is a need for a hard copy, the caller should malloc the buffer and afterwards call strcpy. That separates memory allocation from the algorithm.
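A minimal sketch of that separation (src is assumed to be some NUL-terminated string):

size_t len = strlen(src) + 1;   /* the caller decides how much to allocate */
char *copy = malloc(len);
if (copy != NULL)
    strcpy(copy, src);          /* the algorithm, kept separate */
/* ... use copy ... */
free(copy);                     /* and the caller cleans up */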
On top of that, good design practice dictates that the module that allocated memory should also be responsible for freeing it. Otherwise the caller might not even realize that the function is allocating memory, and there would be a memory leak. If the caller instead is responsible for the allocation, then it is obvious that the caller is also responsible for clean-up.
Overall, C standard library functions are designed to be as fast as possible, meaning they will strive to meet the case where the caller has minimal requirements. A typical example of such a function is malloc, which doesn't even set the allocated data to zero, because that would take extra time. Instead they added an additional function calloc for that purpose.
Other languages have different philosophies, where they would for example force a hard copy for all string handling functions ("immutable objects"). This makes the function easier to work with and perhaps also the code easier to read, but it comes at the expense of a slower program, which needs more memory.
This is one of the main reasons why C is still widely used for development. It tends to be much faster and more efficient than almost any other language (except raw assembler).
What should I use when I want to copy src_str to dst_arr, and why?
char dst_arr[10];
char *dst_ptr;
char *src_str = "hello";
PS: my head is spinning faster than the disk of my computer after reading a lot of things about how good or bad strncpy and strlcpy are.
Note: I know strlcpy is not available everywhere. That is not the concern here.
strncpy is never the right answer when your destination string is zero-terminated. strncpy is a function intended to be used with non-terminated fixed-width strings. More precisely, its purpose is to convert a zero-terminated string to a non-terminated fixed-width string (by copying). In other words, strncpy is not meaningfully applicable here.
The real choice you have here is between strlcpy and plain strcpy.
When you want to perform "safe" (i.e. potentially truncated) copying to dst_arr, the proper function to use is strlcpy.
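For example (assuming a platform that provides strlcpy; it returns the length of the source string, so truncation is easy to detect):

if (strlcpy(dst_arr, src_str, sizeof dst_arr) >= sizeof dst_arr) {
    /* src_str did not fit - dst_arr holds a truncated, NUL-terminated copy */
}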
As for dst_ptr... There's no such thing as "copy to dst_ptr". You can copy to memory pointed by dst_ptr, but first you have to make sure it points somewhere and allocate that memory. There are many different ways to do it.
For example, you can just make dst_ptr point to dst_arr, in which case the answer is the same as in the previous case - strlcpy.
Or you can allocate the memory using malloc. If the amount of memory you allocated is guaranteed to be enough for the string (i.e. at least strlen(src_str) + 1 bytes are allocated), then you can use the plain strcpy or even memcpy to copy the string. There's no need and no reason to use strlcpy in this case, although some people might prefer using it, since it somehow gives them the feeling of extra safety.
If you intentionally allocate less memory (i.e. you want your string to get truncated), then strlcpy becomes the right function to use.
strlcpy() is safer than strncpy() so you might as well use it.
Systems that don't have it will often have an s_strncpy() that does the same thing.
Note: you can't copy anything to dst_ptr until it points to something.
I did not know of strlcpy. I just found here that:
The strlcpy() and strlcat() functions copy and concatenate strings respectively. They are designed to be safer, more consistent, and less error prone replacements for strncpy(3) and strncat(3).
So strlcpy seems safer.
Edit: A full discussion is available here.
Edit2:
I realize that what I wrote above does not answer the "in your case" part of your question. If you understand the limitations of strncpy, I guess you can use it and write good code around it to avoid its pitfalls; but if you are not sure about your understanding of its limits, use strlcpy.
My understanding of the limitations of strncpy and strlcpy is that you can do something very bad with strncpy (buffer overflow), and the worst you can do with strlcpy is to lose some characters to truncation.
You should always use the standard function, which in this case is the C11 Annex K strcpy_s() function. Not strncpy(), as it is unsafe, not guaranteeing zero termination. And not the OpenBSD-only strlcpy(), as it is also unsafe, and OpenBSD always comes up with its own inventions, which usually don't make it into any standard.
See
http://en.cppreference.com/w/c/string/byte/strcpy
The function strcpy_s is similar to the BSD function strlcpy, except that
strlcpy truncates the source string to fit in the destination (which is a security risk)
strlcpy does not perform all the runtime checks that strcpy_s does
strlcpy does not make failures obvious by setting the destination to a null string or calling a handler if the call fails.
Although strcpy_s prohibits truncation due to potential security risks, it's possible to truncate a string using bounds-checked strncpy_s instead.
If your C library doesn't have strcpy_s, use the safec lib.
https://rurban.github.io/safeclib/doc/safec-3.1/df/d8e/strcpy__s_8c.html
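A minimal usage sketch (note that Annex K is optional - check for __STDC_LIB_EXT1__; many libcs, including glibc, don't ship it):

#define __STDC_WANT_LIB_EXT1__ 1   /* request the Annex K declarations */
#include <string.h>

char dst[10];
errno_t err = strcpy_s(dst, sizeof dst, "hello");   /* returns nonzero on failure */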
First of all, your dst_ptr has no space allocated and you haven't set it to point at either of the others, so writing through it would probably cause a segmentation fault.
strncpy should work perfectly fine here - just do:
strncpy(dst_arr, src_str, sizeof(dst_arr));
and you know you won't overflow dst_arr. If you use a bigger src_str you might have to put your own null terminator at the end of dst_arr, but in this case your source is smaller than your destination, so it will be padded with nulls anyway.
This works everywhere and it's safe, so I wouldn't look at anything else except out of intellectual curiosity.
Also note that it would be good to use a named constant instead of the magic number 10, so you know the array size matches the size passed to strncpy :)
You should use neither strncpy nor strlcpy for this. Better to use
*dst_arr = 0; strncat(dst_arr, src_str, (sizeof dst_arr) - 1);
or, without an initialization,
sprintf(dst_arr, "%.*s", (int)(sizeof dst_arr - 1), src_str);
dst_arr here must be an array, NOT a pointer.
Are functions like strcpy, gets, etc. always dangerous? What if I write code like this:
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *str1 = "abcdefghijklmnop";
    char *str2 = malloc(100);
    strcpy(str2, str1);
}
This way the function doesn't accept arguments (parameters...) and the str variable will always be the same length...which is here 16 or slightly more depending on the compiler version...but yeah, 100 will suffice as of March 2011 :).
Is there a way for a hacker to take advantage of the code above?
10x!
Absolutely not. Contrary to Microsoft's marketing campaign for their non-standard functions, strcpy is safe when used properly.
The above is redundant, but mostly safe. The only potential issue is that you're not checking the malloc return value, so you may be dereferencing null (as pointed out by kotlinski). In practice, this is likely to cause an immediate SIGSEGV and program termination.
An improper and dangerous use would be:
char array[100];
char uncheckedInput[1024];
// ... read a line into uncheckedInput ...
// extract a substring without checking its length
strcpy(array, uncheckedInput + 10);
This is unsafe because the strcpy may overflow, causing undefined behavior. In practice, this is likely to overwrite other local variables (itself a major security breach). One of these may be the return address. Through a return to lib C attack, the attacker may be able to use C functions like system to execute arbitrary programs. There are other possible consequences to overflows.
However, gets is indeed inherently unsafe, and will be removed from the next version of C (C1X). There is simply no way to ensure the input won't overflow (causing the same consequences given above). Some people would argue it's safe when used with a known input file, but there's really no reason to ever use it. POSIX's getline is a far better alternative.
Also, the length of str1 doesn't vary by compiler. It should always be 17, including the terminating NUL.
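A minimal sketch of the getline alternative mentioned above (POSIX, not standard C):

#include <stdio.h>
#include <stdlib.h>

char *line = NULL;
size_t cap = 0;
ssize_t n = getline(&line, &cap, stdin);   /* allocates/grows the buffer itself */
if (n != -1) {
    /* ... use line (it includes the trailing '\n', if any) ... */
}
free(line);   /* getline's buffer must be freed by the caller */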
You are forcefully stuffing completely different things into one category.
The function gets is indeed always dangerous. There's no way to make a safe call to gets, regardless of what steps you are willing to take and how defensive you are willing to get.
Function strcpy is perfectly safe if you are willing to take the [simple] necessary steps to make sure that your calls to strcpy are safe.
That already puts gets and strcpy in vastly different categories, which have nothing in common with regard to safety.
The popular criticisms directed at safety aspects of strcpy are based entirely on anecdotal social observations as opposed to formal facts, e.g. "programmers are lazy and incompetent, so don't let them use strcpy". Taken in the context of C programming, this is, of course, utter nonsense. Following this logic we should also declare the division operator exactly as unsafe for exactly the same reasons.
In reality, there are no problems with strcpy whatsoever. gets, on the other hand, is a completely different story, as I said above.
Yes, it is dangerous. After 5 years of maintenance, your code will look like this:
int main(void)
{
    char *str1 = "abcdefghijklmnop";
    /* enough lines have been inserted here so as to not have
       str1 and str2 nice and close to each other on the screen */
    char *str2 = malloc(100);
    strcpy(str2, str1);
}
at that point, someone will go and change str1 to
str1 = "THIS IS A REALLY LONG STRING WHICH WILL NOW OVERRUN ANY BUFFER BEING USED TO COPY IT INTO UNLESS PRECAUTIONS ARE TAKEN TO RANGE CHECK THE LIMITS OF THE STRING. AND FEW PEOPLE REMEMBER TO DO THAT WHEN BUGFIXING A PROBLEM IN A 5 YEAR OLD BUGGY PROGRAM"
and forget to look where str1 is used and then random errors will start happening...
Your code is not safe. The return value of malloc is unchecked, if it fails and returns 0 the strcpy will give undefined behavior.
Besides that, I see no problem other than that the example basically does not do anything.
strcpy isn't dangerous as long as you know that the destination buffer is large enough to hold the characters of the source string; otherwise strcpy will happily copy more characters than your target buffer can hold, which can lead to several unfortunate consequences (overwriting the stack or other variables, which can result in crashes, stack smashing attacks & co.).
But: if you have a generic char * as input which hasn't already been checked, the only way to be sure is to apply strlen to that string and check if it's too large for your buffer; however, now you have to walk the entire source string twice, once to check its length, once to perform the copy.
This is suboptimal, since, if strcpy were a little bit more advanced, it could receive as a parameter the size of the buffer and stop copying if the source string were too long; in a perfect world, this is how strncpy would perform (following the pattern of other strn*** functions). However, this is not a perfect world, and strncpy is not designed to do this. Instead, the nonstandard (but popular) alternative is strlcpy, which, instead of going out of the bounds of the target buffer, truncates.
Several CRT implementations do not provide this function (notably glibc), but you can still get one of the BSD implementations and put it in your application. A standard (but slower) alternative can be to use snprintf with "%s" as format string.
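That workaround looks like this (standard C99; truncation can be detected from the return value):

char dst[10];
int n = snprintf(dst, sizeof dst, "%s", src);   /* always NUL-terminates */
if (n >= (int)sizeof dst) {
    /* src was truncated */
}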
That said, since you're programming in C++ (edit: I see now that the C++ tag has been removed), why don't you just avoid all the C-string nonsense (when you can, obviously) and go with std::string? All these potential security problems vanish and string operations become much easier.
The only way malloc may fail is when an out-of-memory error occurs, which is a disaster by itself. You cannot reliably recover from it because virtually anything may trigger it again, and the OS is likely to kill your process anyway.
As you point out, under constrained circumstances strcpy isn't dangerous. It is more typical to take in a string parameter and copy it to a local buffer, which is when things can get dangerous and lead to a buffer overrun. Just remember to check your copy lengths before calling strcpy and null terminate the string afterward.
Aside from potentially dereferencing NULL (as you do not check the result of malloc), which is UB and likely not a security threat, there is no potential security problem with this.
gets() is always unsafe; the other functions can be used safely.
gets() is unsafe even when you have full control on the input -- someday, the program may be run by someone else.
The only safe way to use gets() is to use it for a single-run thing: create the source; compile; run; delete the binary and the source; interpret the results.
OK, it could be called something else, like _msize in Visual Studio.
But why is it not in the standard to return the size of a memory block allocated using malloc? Since we cannot tell how much memory is pointed to by the pointer malloc returns, we could use this "memsize" call to return that information should we need it. "memsize" would be implementation-specific, as are malloc/free.
Just asking, as I had to write a wrapper some time back that stores some additional bytes for the size.
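Such a wrapper might look like this (a sketch, not how any particular allocator works; the names are mine, and the union header assumes C11's max_align_t to keep the user block suitably aligned):

#include <stdlib.h>
#include <stddef.h>

typedef union { size_t size; max_align_t align; } header;

void *my_malloc(size_t n)
{
    header *h = malloc(sizeof *h + n);
    if (h == NULL) return NULL;
    h->size = n;      /* remember the requested size */
    return h + 1;     /* hand out the block after the header */
}

size_t my_memsize(const void *p)
{
    return ((const header *)p - 1)->size;
}

void my_free(void *p)
{
    if (p != NULL) free((header *)p - 1);
}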
Because the C library, including malloc, was designed for minimum overhead. A function like the one you want would require the implementation to record the exact size of the allocation, while implementations may now choose to "round" the size up as they please, to prevent actually reallocating in realloc.
Storing the size requires an extra size_t per allocation, which may be heavy for embedded systems. (And for the PDP-11s and 286s that were still abundant when C89 was written.)
To turn this around, why should there be? There's plenty of stuff in the Standards already, particularly the C++ standard. What are your use cases?
You ask for an adequately-sized chunk of memory, and you get it (or a null pointer or exception). There may or may not be additional bytes allocated, and some of these may be reserved. This is conceptually simple: you ask for what you want, and you get something you can use.
Why complicate it?
I don't think there is any definite answer. The developers of the standard probably considered it, and weighed the pros and cons. Anything that goes into a standard must be implemented by every implementation, so adding things to it places a significant burden on developers. I guess they just didn't find that feature useful enough to warrant this.
In C++, the wrapper that you talk about is provided by the standard. If you allocate a block of memory with std::vector, you can use the member function vector::size() to determine the size of the array and use vector::capacity() to determine the size of the allocation (which might be different).
C, on the other hand, is a low-level language which leaves such concerns to be managed by the developer, since tracking it dynamically (as you suggest) is not strictly necessary and would be redundant in many cases.
This is inspired by this question and the comments on one particular answer in that I learnt that strncpy is not a very safe string handling function in C, and that it pads zeros until it reaches n, something I was unaware of.
Specifically, to quote R..
strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower.
By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length).
And from Jonathan Leffler
Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy().
So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question:
What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies?
In the interests of objectivity, I have a number of criteria for an answer:
Please, if you can, cite design reasons behind the function in question i.e. its intended purpose.
Please highlight the misuse to which the code is currently put.
Please state why that misuse may lead towards a problem. I know that should be obvious but it prevents soft answers.
Please avoid:
Debates over naming conventions of functions (except where this unequivocally causes confusion).
"I prefer x over y" - preference is ok, we all have them but I'm interested in actual unexpected side effects and how to guard against them.
As this is likely to be considered subjective and has no definite answer I'm flagging for community wiki straight away.
I am also working as per C99.
What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies ?
I'm gonna go with the obvious:
char *gets(char *s);
With its remarkable particularity that it's simply impossible to use it appropriately.
A common pitfall with the strtok() function is to assume that the parsed string is left unchanged, while it actually replaces the separator character with '\0'.
Also, strtok() is used by making subsequent calls to it until the entire string is tokenized. Some library implementations store strtok()'s internal status in a global variable, which may induce some nasty surprises if strtok() is called from multiple threads at the same time.
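Both pitfalls in one short example:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char s[] = "a,b,c";   /* writable array copy - strtok on a string literal would crash */
    for (char *tok = strtok(s, ","); tok != NULL; tok = strtok(NULL, ","))
        printf("%s\n", tok);
    /* s now holds "a\0b\0c" - each ',' has been overwritten with '\0' */
    return 0;
}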
The CERT C Secure Coding Standard lists many of these pitfalls you asked about.
In almost all cases, atoi() should not be used (this also applies to atof(), atol() and atoll()).
This is because these functions do not detect out-of-range errors at all - the standard simply says "If the value of the result cannot be represented, the behavior is undefined.". So the only time they can be safely used is if you can prove that the input will certainly be within range (for example, if you pass a string of length 4 or less to atoi(), it cannot be out of range).
Instead, use one of the strtol() family of functions.
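A sketch of the checked replacement (str stands for whatever input string you have):

#include <errno.h>
#include <stdlib.h>

char *end;
errno = 0;
long v = strtol(str, &end, 10);
if (end == str) {
    /* no digits at all */
} else if (errno == ERANGE) {
    /* value out of range for long */
}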
Let us extend the question to interfaces in a broader sense.
errno:
Technically it is not even clear what it is - a variable, a macro, an implicit function call? In practice, on modern systems it is mostly a macro that expands to a function call yielding a thread-specific error state. It is evil:
because it may cause overhead for the caller to access the value, to check the "error" (which might just be an exceptional event)
because it even imposes at some places that the caller clears this "variable" before making a library call
because it implements a simple error return by setting a global state of the library
The forthcoming standard gets the definition of errno a bit more straight, but these uglinesses remain.
There is often a strtok_r, the re-entrant variant.
For realloc, if you need to use the old pointer, it's not that hard to use another variable. If your program fails with an allocation error, then cleaning up the old pointer is often not really necessary.
I would put printf and scanf pretty high up on this list. The fact that you have to get the formatting specifiers exactly correct makes these functions tricky to use and extremely easy to get wrong. It's also very hard to avoid buffer overruns when reading data out. Moreover, the "printf format string vulnerability" has probably caused countless security holes when well-intentioned programmers specify client-specified strings as the first argument to printf, only to find the stack smashed and security compromised many years down the line.
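The classic mistake looks harmless (user_input stands for any attacker-controlled string):

printf(user_input);        /* attacker-controlled format: %x leaks the stack, %n writes to it */
printf("%s", user_input);  /* safe: the format string is a constant */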
Any of the functions that manipulate global state, like gmtime() or localtime(). These functions simply can't be used safely in multiple threads.
EDIT: rand() is in the same category it would seem. At least there are no guarantees of thread-safety, and on my Linux system the man page warns that it is non-reentrant and non-threadsafe.
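POSIX provides re-entrant variants that take the result buffer as a parameter, e.g.:

#include <time.h>

time_t now = time(NULL);
struct tm tmbuf;
localtime_r(&now, &tmbuf);   /* POSIX re-entrant variant: no shared static buffer */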
One of my bêtes noire is strtok(), because it is non-reentrant and because it hacks the string it is processing into pieces, inserting NUL at the end of each token it isolates. The problems with this are legion; it is distressingly often touted as a solution to a problem, but is as often a problem itself. Not always - it can be used safely. But only if you are careful. The same is true of most functions, with the notable exception of gets() which cannot be used safely.
There's already one answer about realloc, but I have a different take on it. A lot of the time, I've seen people write realloc when they mean free; malloc - in other words, when they have a buffer full of trash that needs to change size before storing new data. This of course leads to a potentially large, cache-thrashing memcpy of trash that's about to be overwritten.
If used correctly with growing data (in a way that avoids worst-case O(n^2) performance for growing an object to size n, i.e. growing the buffer geometrically instead of linearly when you run out of space), realloc has doubtful benefit over simply doing your own new malloc, memcpy, and free cycle. The only way realloc can ever avoid doing this internally is when you're working with a single object at the top of the heap.
If you like to zero-fill new objects with calloc, it's easy to forget that realloc won't zero-fill the new part.
And finally, one more common use of realloc is to allocate more than you need, then resize the allocated object down to just the required size. But this can actually be harmful (additional allocation and memcpy) on implementations that strictly segregate chunks by size, and in other cases might increase fragmentation (by splitting off part of a large free chunk to store a new small object, instead of using an existing small free chunk).
I'm not sure if I'd say realloc encourages bad practice, but it's a function I'd watch out for.
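The geometric-growth pattern mentioned above, as a sketch (buf, len, and cap are assumed to be the caller's bookkeeping for a dynamic array):

if (len == cap) {                               /* out of space: grow geometrically */
    size_t newcap = cap ? cap * 2 : 16;
    void *tmp = realloc(buf, newcap * sizeof *buf);
    if (tmp == NULL) {
        /* handle failure - buf is still valid and must still be freed */
    } else {
        buf = tmp;
        cap = newcap;
    }
}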
How about the malloc family in general? The vast majority of large, long-lived programs I've seen use dynamic memory allocation all over the place as if it were free. Of course real-time developers know this is a myth, and careless use of dynamic allocation can lead to catastrophic blow-up of memory usage and/or fragmentation of address space to the point of memory exhaustion.
In some higher-level languages without machine-level pointers, dynamic allocation is not so bad because the implementation can move objects and defragment memory during the program's lifetime, as long as it can keep references to these objects up-to-date. A non-conventional C implementation could do this too, but working out the details is non-trivial and it would incur a very significant cost in all pointer dereferences and make pointers rather large, so for practical purposes, it's not possible in C.
My suspicion is that the correct solution is usually for long-lived programs to perform their small routine allocations as usual with malloc, but to keep large, long-lived data structures in a form where they can be reconstructed and replaced periodically to fight fragmentation, or as large malloc blocks containing a number of structures that make up a single large unit of data in the application (like a whole web page presentation in a browser), or on-disk with a fixed-size in-memory cache or memory-mapped files.
On a wholly different tack, I've never really understood the benefits of atan() when there is atan2(). The difference is that atan2() takes two arguments, and returns an angle anywhere in the range -π..+π. Further, it avoids divide by zero errors and loss of precision errors (dividing a very small number by a very large number, or vice versa). By contrast, the atan() function only returns a value in the range -π/2..+π/2, and you have to do the division beforehand (I don't recall a scenario where atan() could be used without there being a division, short of simply generating a table of arctangents). Providing 1.0 as the divisor for atan2() when given a simple value is not pushing the limits.
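For example (x and y assumed to be doubles):

#include <math.h>

double a1 = atan2(y, x);   /* full -pi..+pi range; well-behaved even when x == 0 */
double a2 = atan(y / x);   /* only -pi/2..+pi/2, and the division must be done first */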
Another answer, since these are not really related, rand:
it is of unspecified random quality
it is not re-entrant
Some of these functions modify global state. (On Windows) this state is shared per thread - you can get unexpected results. For example, the first call of rand in every thread will give the same result, and it requires some care to make it pseudorandom but deterministic (for debugging purposes).
basename() and dirname() aren't threadsafe.