strlen not checking for NULL - c

Why is strlen() not checking for NULL?
if I do strlen(NULL), the program segmentation faults.
Trying to understand the rationale behind it (if any).

The rationale behind it is simple: how can you check the length of something that does not exist?
Also, unlike "managed languages", there is no expectation that the runtime system will handle invalid data or data structures gracefully. (This kind of issue is exactly why more "modern" languages are popular for applications that are not computation-heavy or performance-critical.)
A standard template in C would look like this:
int someStrLen;
if (someStr != NULL)  /* or if (someStr) */
    someStrLen = strlen(someStr);
else
{
    /* handle error */
}

The portion of the language standard that defines the string handling library states that, unless specified otherwise for the specific function, any pointer arguments must have valid values.
The philosophy behind the design of the C standard library is that the programmer is ultimately in the best position to know whether a run-time check really needs to be performed. Back in the days when your total system memory was measured in kilobytes, the overhead of performing an unnecessary runtime check could be pretty painful. So the C standard library doesn't bother doing any of those checks; it assumes that the programmer has already done it if it's really necessary. If you know you will never pass a bad pointer value to strlen (such as, you're passing in a string literal, or a locally allocated array), then there's no need to clutter up the resulting binary with an unnecessary check against NULL.

The standard does not require it, so implementations just avoid a test and potentially an expensive jump.

A little macro to help your grief:
#define strlens(s) (s==NULL?0:strlen(s))

Three significant reasons:
The standard library and the C language are designed assuming that the programmer knows what he is doing, so a null pointer isn't treated as an edge case, but rather as a programmer's mistake that results in undefined behaviour;
It incurs runtime overhead - calling strlen thousands of times with a redundant str != NULL check on every call is wasteful when the caller already knows the pointer is valid;
It adds to the code size - it could only be a few instructions, but if you adopt this principle and do it everywhere it can inflate your code significantly.

size_t strlen ( const char * str );
http://www.cplusplus.com/reference/clibrary/cstring/strlen/
strlen takes a pointer to a character array as a parameter; NULL is not a valid argument to this function.

Related

Assign to a null pointer an area inside a function and preserve the value outside

I have a function that reads from a socket, it returns a char** where packets are stored and my intention is to use a NULL unsigned int pointer where I store the length of single packet.
char** readPackets(int numToRead, unsigned int **lens, int socket) {
    char** packets = (char**)malloc(numToRead);
    int *len = (int*)malloc(sizeof(int) * numToRead);
    *(lens) = len;
    for (int i = 0; i < numToRead; i++) {
        //read
        packets[i] = (char*)malloc(MAX_ETH_LEN);
        register int pack_len = read(socket, packets[i], MAX_ETH_LEN);
        //TODO handle error in case of freezing
        if (pack_len <= 0) {
            i--;
            continue;
        }
        len[i] = pack_len;
    }
    return packets;
}
I use it in this way:
unsigned int *lens_out=NULL;
char **packets=readPackets(N_PACK,&lens,sniff_sock[handler]);
where N_PACK is a constant defined previously.
Now the problem is that when I am inside the function everything works, in fact *(lens) points to the same memory area of len and outside the function lens_out points to the same area too. Inside the function len[i] equals to *(lens[i]) (I checked it with gdb).
The problem is that outside the function even if lens_out points to the same area of len elements with same index are different for example
len[0]=46
lens_out[0]=4026546640
Can anyone explain where I made the mistake?
Your statement char** packets=(char**)malloc(numToRead) for sure does not reserve enough memory. Note that an element of the packets array is of type char*, and that sizeof(char*) is probably 8 (possibly 4), but very unlikely 1. So you should write
char** packets = malloc(sizeof(char*) * numToRead)
Otherwise, you write out of the bounds of reserved memory, thereby yielding undefined behaviour (probably the one you explained).
Note further that with i--; continue;, you get memory leaks since you assign a new memory block to the ith element, but you lose reference to the memory reserved right before. Write free(packets[i]);i--;continue; instead.
Further, len[0] is an integral type, whereas lens[0] refers to a pointer to int. Comparing these two does not make sense.
Firstly, I want to put it out there that you should write clear code for the sake of future maintenance, and not for what you think is optimal (yet). This entire function should merely be replaced with read. That's the crux of my post.
Now the problem is that when I am inside the function everything works
I disagree. On a slightly broader topic, the biggest problem here is that you've posted a question containing code which doesn't compile when copied and pasted unmodified, and the question isn't about the error messages, so we can't answer the question without guessing.
My guess is that you haven't noticed these error messages; you're running a stale binary which we don't have the source code for, we can't reproduce the issue and we can't see the old source code, so we can't help you. It is as valid as any other guess. For example, there's another answer which speculates:
Your statement char** packets=(char**)malloc(numToRead) for sure does not reserve enough memory.
The malloc manual doesn't guarantee that precisely numToRead bytes will be allocated; in fact, allocations to processes tend to be performed in pages, and just as the sleep manual doesn't guarantee a precise number of milliseconds/microseconds, it may allocate more than you asked for. What it will never do is hand you fewer usable bytes: if the full amount cannot be provided, malloc must return NULL, which your code needs to check.
It's extremely common for implementations to seem to behave correctly when a buffer is overflowed anyway. Nonetheless, it'd be best if you fixed that buffer overflow. malloc doesn't know about the type you're allocating; you need to tell it everything about the size, not just the number of elements.
P.S. You probably want select or sleep within your loop, you know, to "handle error in case of freezing" or whatever. Generally, OSes will switch context to another program when you call one of those, and only switch back when there's data ready to process. By calling sleep after sending or receiving, you give the OS a heads up that it needs to perform some I/O. The ability to choose that timing can be beneficial, when you're optimising. Not at the moment, though.
Inside the function len[i] equals to *(lens[i]) (I checked it with gdb).
I'm fairly sure you've misunderstood that. Perhaps gdb is implicitly dereferencing your pointers, for you; that's really irrelevant to C (so don't confuse anything you learn from gdb with anything C-related).
In fact, I strongly recommend learning a little bit less about gdb and a lot more about assert, because the former won't help you document your code for future maintenance from other people, including us, those who you ask questions to, where-as the latter will. If you include assert in your code, you're almost certainly strengthening your question (and code) much more than including gdb into your question would.
The types of lens[i] and *(lens[i]) are different, and their values are affected by the way types are interpreted. These values can only be considered equal when they're converted to the same type. We can see this through C11/3.19p1 (the definition of "value", where the standard establishes it is dependent upon type). lens[i] is a pointer value, whereas *(lens[i]) is an integer value. The two categories of values might have different alignment, representation and... well, they have different semantics entirely. One is a reference to an object or array, and the other is integral data. You shouldn't be comparing them, no matter how equal they may seem; the information you obtain from such a comparison is virtually useless.
You can't use lens[i] in a multiplication expression, for example. They're certainly not equal in that respect. They might compare equal (as a side-effect of comparison introducing implicit conversions), which is useless information for you to have, and that is a different story.
memcmp((int[]){0}, (unsigned char[]){ [sizeof(int)] = 42 }, sizeof(int)) may return 0 indicating that they're equal, but you know that array of characters contains an extra byte, right? Yeah... they're equal...
You must check the return value of malloc (and don't cast the return value), if you're using it, though I really think you should reconsider your options with that regard.
The fact that you use malloc means everyone who uses your function must then use free; it's locking down-stream programmers into an anti-pattern that can tear the architecture of software apart. You should separate categories of allocation logic and user interface logic from processing logic.
For example, you use read which gives you the opportunity to choose whatever storage duration you like. This means you have an immense number of optimisation opportunities. It gives you, the downstream programmer, the opportunity to write flexible code which assigns whatever storage duration you like to the memory used. Imagine if, on the other hand, you had to free every return value from every function... That's the mess you're encouraging.
This is especially a poor, inefficient design when constants are involved (i.e. your use case), because you could just use an automatic array and get rid of the calls to malloc and free altogether... Your downstream programmer's code could be:
char packet[count][size];
int n_read = read(fd, packet, size * count);
Perhaps you think using malloc to allocate (and later read) n spaces for packets is faster than using something else to allocate n spaces. You should test that theory, because from my experience computers tend to be optimised for simpler, shorter, more concise logic.
In anticipation:
But I can't return packet; like that!
True. You can't return packet; to your downstream programmer, so you modify an object pointed at by an argument. That doesn't mean you should use malloc, though.
Unfortunately, too many programs are adopting this "use malloc everywhere" mentality. It's reminiscent of the "don't use goto" crap that we've been fed. Rather than listening to cargo cult propaganda, I recommend thinking critically about what you hear, because your peers are in the same position as you; they don't necessarily know what they're talking about.

Why do so many standard C functions tamper with parameters instead of returning values?

Many functions like strcat, strcpy and alike don't return the actual value but change one of the parameters (usually a buffer). This of course creates a boatload of side effects.
Wouldn't it be far more elegant to just return a new string? Why isn't this done?
Example:
char *copy_string(char *text, size_t length) {
    char *result = malloc(sizeof(char) * length);
    for (int i = 0; i < length; ++i) {
        result[i] = text[i];
    }
    return result;
}

int main() {
    char *copy = copy_string("Hello World", 12);
    // *result now lingers in memory and can not be freed?
}
I can only guess it has something to do with memory leaking since there is dynamic memory being allocated inside of the function which you can not free internally (since you need to return a pointer to it).
Edit: From the answers it seems that it is good practice in C to work with parameters rather than creating new variables. So I should aim for building my functions like that?
Edit 2: Will my example code lead to a memory leak? Or can *result be free'd?
To answer your original question: C, at the time it was designed, was tailored to be a language of maximum efficiency. It was, basically, just a nicer way of writing assembly code (the guy who designed it wrote his own compiler for it).
What you say (that parameters are often used rather than return values) is mainly true for string handling. Most other functions (those that deal with numbers, for example) return their results as expected, or they only modify parameters if they have to return more than one value.
String handling in C today is considered one of the major (if not THE major) weaknesses in C. But those functions were written with performance in mind, and with the machines available in those days (and the intent of performance), working on the caller's buffers was the way of choice.
Re your edit 1: Today other intents may apply. Performance usually isn't the limiting factor; equally or more important are readability, robustness, and proneness to error. And generally, as said, the string handling in C is today considered a horrible relic of the past. So it's basically your choice, depending on your intent.
Re your edit 2: Yes, the memory will leak. You need to call free(copy);. Which ties into edit 1: proneness to error - it's easy to forget the free and create leaks that way (or attempt to free the memory twice, or access it after it was freed). Returning a fresh allocation may be more readable, but it is also more prone to error (arguably even more than the clunky original C approach of modifying the caller's buffer).
Generally, I'd suggest, whenever you have the choice, to work with a newer dialect that supports std::string or something similar.
Why do so many standard C functions tamper with parameters instead of returning values?
Because that's often what the users of the C library wants.
Many functions like strcat, strcpy and alike don't return the actual value but change one of the parameters (usually a buffer). This of course creates a boatload of side effects. Wouldn't it be far more elegant to just return a new string? Why isn't this done?
It's not very efficient to allocate memory, and it requires the user to free() it later, which is an unnecessary burden on the user. Efficiency and letting users do what they want (even if they want to shoot themselves in the foot) are part of C's philosophy.
Besides, there are syntax/implementation issues. For example, how can the following be done if the strcpy() function actually returns a newly allocated string?
char arr[256] = "Hello";
strcpy(arr, "world");
Because C doesn't allow you to assign something to an array (arr).
Basically, you are questioning C is the way it is. For that question, the common answer is "historical reasons".
Two reasons:
Properly designed functions should only concern themselves with their designated purpose, and not unrelated things such as memory allocation.
Making a hard copy of the string would make the function far slower.
So for your example, if there is a need for a hard copy, the caller should malloc the buffer and afterwards call strcpy. That separates memory allocation from the algorithm.
On top of that, good design practice dictates that the module that allocated memory should also be responsible for freeing it. Otherwise the caller might not even realize that the function is allocating memory, and there would be a memory leak. If the caller instead is responsible for the allocation, then it is obvious that the caller is also responsible for clean-up.
Overall, C standard library functions are designed to be as fast as possible, meaning they will strive to meet the case where the caller has minimal requirements. A typical example of such a function is malloc, which doesn't even set the allocated data to zero, because that would take extra time. Instead they added an additional function calloc for that purpose.
Other languages have different philosophies, where they would for example force a hard copy for all string handling functions ("immutable objects"). This makes the function easier to work with and perhaps also the code easier to read, but it comes at the expense of a slower program, which needs more memory.
This is one of the main reasons why C is still widely used for development. It tends to be much faster and more efficient than most other languages (short of raw assembly).

Copy Arbitrary Type in C Without Dynamic Memory Allocation

The Question:
I think I have figured out a way that, near as I can tell, allows you to write completely type-agnostic code that makes a copy of a variable of arbitrary type on the "stack" (in quotes because C standard does not actually require there to be a stack, so what I really mean is that it's copied with the auto storage class in local scope). Here it is:
/* Save/duplicate thingToCopy */
char copyPtr[sizeof(thingToCopy)];
memcpy(copyPtr, &thingToCopy, sizeof(thingToCopy));
/* modify the thingToCopy variable to do some work; do NOT do operations directly on the data in copyPtr, that's just a "storage bin". */
/* Restore old value of thingToCopy */
memcpy(&thingToCopy, copyPtr, sizeof(thingToCopy));
From my limited testing it works and near as I can tell it should work on all standards-compliant C implementations, but just in case I missed something, I'd like to know:
Is this completely in line with the C standard (I believe this should be good all the way from C89 through to the modern stuff), and if not, is it possible to fix and how?
What limitations on usage does this method force upon itself in order to stay standards-compliant?
For example, as I understand it, I am safe from alignment issues so long as I never use the char-array temp-copies directly - just as bins to save to and load from with memcpy. But I couldn't pass those addresses to other functions expecting pointers to the type I'm working with, without risking alignment issues (obviously syntactically I could do it perversely by first getting a void * from the char *, without even specifying the exact type I'm working with, but the point is that I think I would be triggering undefined behavior by doing so).
Is there is a more clean and/or performant* way to achieve the same thing?
*GCC 4.6.1 on my armel v7 test device, with -O3 optimization, produced identical code to regular code using normal assignments to temporary variables, but it could be that my test cases were just simple enough that it was able to figure it out, and that it would get confused if this technique were used more generally.
As a bonus passing interest, I'm curious if this would break in mostly-C-compatible languages (the ones I know of are C++, Objective-C, D, and maybe C#, though mentions of others are welcome too).
Rationale:
This is why I think the above works, in case you find it helpful to know where I'm coming from in order to explain any mistakes I may have made:
The C standard's "byte" (in the traditional sense of "smallest addressable unit of memory", not in the modernized "8 bits" meaning) is the char type - the sizeof operator produces numbers in units of char. So we can get exactly the smallest size of storage (that we can work with in C) needed for an arbitrary variable's type by using the sizeof operator on that variable.
The C standard guarantees that pretty much all object pointer types can be converted implicitly into a void * (but with a change of representation if their representation is different (but incidentally, the C standard guarantees that void * and char * have identical representations)).
The "name" of an array of a given type, and a pointer to that same type, can basically be treated identically as far as the syntax is concerned.
The sizeof operator is figured out at compile-time, so we can do char foo[sizeof(bar)] without depending on the effectively non-portable VLAs.
Therefore, we should be able to declare an array of "chars" that is the minimum size necessary to hold a given type.
Thus we should be able to pass the address of the variable to be copied, and the name of the array, to memcpy (as I understand it, the array name is implicitly used as a char * to the first element of the array). Since any pointer can be implicitly converted to a void * (with a change of representation if necessary), this works.
The memcpy should make a bitwise copy of the variable we are copying to the array. Regardless of what the type is, any padding bits involved, etc, the sizeof guarantees we'll grab all the bits that make up the type, including padding.
Since we can't explicitly use/declare the type of the variable we just copied, and because some architectures might have alignment requirements for various types that this hack might violate some of the time, we can't use this copy directly - we'd have to memcpy it back into the variable we got it from, or one of the same type, in order to make use of it. But once we copy it back, we have an exact copy of what we put there in the first place. Essentially, we are freeing the variable itself to be used as scratch space.
Motivation (or, "Dear God Why!?!"):
I like to write type-independent code when useful, and yet I also enjoy coding in C, and combining the two largely comes down to writing the generic code in function-like macros (you can then re-claim type-checking by making wrapper function definitions which call the function-like macro). Think of it like really crude templates in C.
As I've done this, I've run into situations where I needed an additional variable of scratch space, but, given the lack of a portable typeof() operator, I cannot declare any temporary variables of a matching type in such "generic macro" snippets of code. This is the closest thing to a truly portable solution that I've found.
Since we can do this trick multiple times (large enough char array that we can fit several copies, or several char arrays big enough to fit one), as long as we can keep our memcpy calls and copy pointer names straight, it's functionally like having an arbitrary number of temporary variables of the copied type, while being able to keep the generic code type-agnostic.
P.S. To slightly deflect the likely-inevitable rain of judgement, I'd like to say that I do recognize that this is seriously convoluted, and I would only reserve this in practice for very well-tested library code where it significantly added usefulness, not something I would regularly deploy.
Yes, it works. Yes, it is C89 standard. Yes, it is convoluted.
Minor improvement
A table of bytes char[] can start at any position in memory.
Depending on the content of your thingToCopy, and depending on CPU, this can result in sub-optimal copy performance.
Should speed matter (since it may not if this operation is rare), you may prefer to align your table, using int, long long or size_t units instead.
Major limitation
Your proposition only works if you know the size of thingToCopy.
This is a major issue: it means your compiler needs to know what thingToCopy is at compile time (hence, it cannot be an incomplete type).
Hence, the following sentence is troubling:
Since we can't explicitly use/declare the type of the variable we just copied
No way. In order to compile char copyPtr[sizeof(thingToCopy)];, the compiler must know what thingToCopy is, hence it must have access to its type!
If you know it, you can simply do:
thingToCopy_t save;
save = thingToCopy;
/* do some stuff with thingToCopy */
thingToCopy = save;
which is clearer to read, and even better from an alignment perspective.
It would be bad to use your code on an object containing a pointer (except const pointer to const). Someone might modify the pointed-to data, or the pointer itself (e.g. realloc). This would leave your copy of the object in an unexpected or even invalid state.
Generic programming is one of the main driving forces behind C++. Others have tried to do generic programming in C using macros and casts. It's OK for small examples, but doesn't scale well. The compiler can't catch bugs for you when you use those techniques.

Is strcpy where src == dest strictly defined?

It's not too hard to demonstrate that strcpy on overlapped source and destination addresses fails on some platforms, either producing incorrect results or trapping (the latter with some negative random offsets on Linux/amd64).
I'd instrumented a strcpy wrapper function for our codebase with debug-build assertions that check for such overlapped copies, and have received a number of internal development requests to weaken this assertion checking so that it only aborts for non-zero overlaps.
I've been hesitant to do so, based on my read of the strcpy documentation since I'd assume that equal source and destinations would count as overlapped. Is overlapped defined explicitly in the C++ standard (or C), and does this also include equality?
I suspect many vendor strcpy implementations special case this despite the freedom the standard allows to have this be undefined behavior. Are there any platform/hardware combinations where such an equal copy is known to fail?
Since you are using C++, the question is really, why are you not using std::string. strcpy is notoriously unsafe (as are its cousins strcpy_s and strncpy, although they are mildly safer than strcpy).
If you attempt to copy from a source to a destination that are the same, at best you will get no change.
If you found yourself a better documentation website for C functions, you'd see this signature:
char *strcpy(char *restrict s1, const char *restrict s2);
restrict in this case indicates that the caller promises that the two buffers in question do not overlap.
We can further search for the meaning of restrict as of C99, and find this Wikipedia page:
It says that for the lifetime of the pointer, only it or a value directly derived from it (such as pointer + 1) will be used to access the object to which it points.
which is pretty clear that identical pointers are not allowed. If it happens to work on your system, there is no reason for you to think it will work in the next iteration of the compiler, library, or on new hardware.

Which functions in the C standard library commonly encourage bad practice? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
This is inspired by this question and the comments on one particular answer in that I learnt that strncpy is not a very safe string handling function in C and that it pads zeros until it reaches n, something I was unaware of.
Specifically, to quote R..
strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower.
By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length).
And from Jonathan Leffler
Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy().
So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question:
What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies?
In the interests of objectivity, I have a number of criteria for an answer:
Please, if you can, cite design reasons behind the function in question i.e. its intended purpose.
Please highlight the misuse to which the code is currently put.
Please state why that misuse may lead towards a problem. I know that should be obvious but it prevents soft answers.
Please avoid:
Debates over naming conventions of functions (except where this unequivocally causes confusion).
"I prefer x over y" - preference is ok, we all have them but I'm interested in actual unexpected side effects and how to guard against them.
As this is likely to be considered subjective and has no definite answer I'm flagging for community wiki straight away.
I am also working as per C99.
What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies ?
I'm gonna go with the obvious:
char *gets(char *s);
With its remarkable particularity that it's simply impossible to use it appropriately.
A common pitfall with the strtok() function is to assume that the parsed string is left unchanged, while it actually replaces the separator character with '\0'.
Also, strtok() is used by making subsequent calls to it until the entire string is tokenized. Some library implementations store strtok()'s internal state in a global variable, which may cause some nasty surprises if strtok() is called from multiple threads at the same time.
The CERT C Secure Coding Standard lists many of these pitfalls you asked about.
In almost all cases, atoi() should not be used (this also applies to atof(), atol() and atoll()).
This is because these functions do not detect out-of-range errors at all - the standard simply says "If the value of the result cannot be represented, the behavior is undefined.". So the only time they can be safely used is if you can prove that the input will certainly be within range (for example, if you pass a string of length 4 or less to atoi(), it cannot be out of range).
Instead, use one of the strtol() family of functions.
Let us extend the question to interfaces in a broader sense.
errno:
technically it is not even clear what it is - a variable, a macro, an implicit function call? In practice on modern systems it is mostly a macro that transforms into a function call to have a thread-specific error state. It is evil:
because it may cause overhead for the caller to access the value, to check the "error" (which might just be an exceptional event)
because it even imposes at some places that the caller clears this "variable" before making a library call
because it implements a simple error return by setting a global state of the library.
The forthcoming standard gets the definition of errno a bit more straight, but these uglinesses remain.
There is often a strtok_r, the reentrant variant of strtok, available for exactly this reason.
For realloc, if you need to use the old pointer, it's not that hard to use another variable. If your program fails with an allocation error, then cleaning up the old pointer is often not really necessary.
I would put printf and scanf pretty high up on this list. The fact that you have to get the formatting specifiers exactly correct makes these functions tricky to use and extremely easy to get wrong. It's also very hard to avoid buffer overruns when reading data out. Moreover, the "printf format string vulnerability" has probably caused countless security holes when well-intentioned programmers specify client-specified strings as the first argument to printf, only to find the stack smashed and security compromised many years down the line.
Any of the functions that manipulate global state, like gmtime() or localtime(). These functions simply can't be used safely in multiple threads.
EDIT: rand() is in the same category it would seem. At least there are no guarantees of thread-safety, and on my Linux system the man page warns that it is non-reentrant and non-threadsafe.
One of my bêtes noires is strtok(), because it is non-reentrant and because it hacks the string it is processing into pieces, inserting NUL at the end of each token it isolates. The problems with this are legion; it is distressingly often touted as a solution to a problem, but is as often a problem itself. Not always - it can be used safely. But only if you are careful. The same is true of most functions, with the notable exception of gets() which cannot be used safely.
There's already one answer about realloc, but I have a different take on it. A lot of time, I've seen people write realloc when they mean free; malloc - in other words, when they have a buffer full of trash that needs to change size before storing new data. This of course leads to potentially-large, cache-thrashing memcpy of trash that's about to be overwritten.
If used correctly with growing data (in a way that avoids worst-case O(n^2) performance for growing an object to size n, i.e. growing the buffer geometrically instead of linearly when you run out of space), realloc has doubtful benefit over simply doing your own new malloc, memcpy, and free cycle. The only way realloc can ever avoid doing this internally is when you're working with a single object at the top of the heap.
If you like to zero-fill new objects with calloc, it's easy to forget that realloc won't zero-fill the new part.
And finally, one more common use of realloc is to allocate more than you need, then resize the allocated object down to just the required size. But this can actually be harmful (additional allocation and memcpy) on implementations that strictly segregate chunks by size, and in other cases might increase fragmentation (by splitting off part of a large free chunk to store a new small object, instead of using an existing small free chunk).
I'm not sure if I'd say realloc encourages bad practice, but it's a function I'd watch out for.
How about the malloc family in general? The vast majority of large, long-lived programs I've seen use dynamic memory allocation all over the place as if it were free. Of course real-time developers know this is a myth, and careless use of dynamic allocation can lead to catastrophic blow-up of memory usage and/or fragmentation of address space to the point of memory exhaustion.
In some higher-level languages without machine-level pointers, dynamic allocation is not so bad because the implementation can move objects and defragment memory during the program's lifetime, as long as it can keep references to these objects up-to-date. A non-conventional C implementation could do this too, but working out the details is non-trivial and it would incur a very significant cost in all pointer dereferences and make pointers rather large, so for practical purposes, it's not possible in C.
My suspicion is that the correct solution is usually for long-lived programs to perform their small routine allocations as usual with malloc, but to keep large, long-lived data structures in a form where they can be reconstructed and replaced periodically to fight fragmentation, or as large malloc blocks containing a number of structures that make up a single large unit of data in the application (like a whole web page presentation in a browser), or on-disk with a fixed-size in-memory cache or memory-mapped files.
On a wholly different tack, I've never really understood the benefits of atan() when there is atan2(). The difference is that atan2() takes two arguments, and returns an angle anywhere in the range -π..+π. Further, it avoids divide by zero errors and loss of precision errors (dividing a very small number by a very large number, or vice versa). By contrast, the atan() function only returns a value in the range -π/2..+π/2, and you have to do the division beforehand (I don't recall a scenario where atan() could be used without there being a division, short of simply generating a table of arctangents). Providing 1.0 as the divisor for atan2() when given a simple value is not pushing the limits.
Another answer, since these are not really related, rand:
it is of unspecified random quality
it is not re-entrant
Some of these functions modify global state. On Windows this state is per-thread, so you can get unexpected results: for example, the first call of rand() in every thread will give the same value, and it requires some care to make the output pseudorandom but deterministic (for debugging purposes).
basename() and dirname() aren't thread-safe.