free() not deallocating memory? - c

free(str);
printf("%d\n", str->listeners);
The call to printf succeeds (as do any other accesses to str's members). How is this possible?

Here's an analogy for you: imagine you're renting an apartment (that's the memory) and you terminate your lease but keep a duplicate of the key (that's the pointer). You might be able to get back into the apartment later if it hasn't been torn down, if the locks haven't been changed, etc. and if you do it right away you might find things the way you left them. But it's a pretty bad idea, and in the likely case you're going to get yourself in a heap of trouble...

You're just (un)lucky. That code exhibits undefined behavior - anything can happen, including looking like the memory wasn't freed.
The memory is freed, but there is no point in actively clearing it, so its original content is likely to still be there. But you can't rely on that.

That is called undefined behavior. You are dereferencing a pointer which refers to deallocated memory. Anything can happen, i.e., one cannot assume that the program will crash or anything else; the behavior is undefined.

As long as str is not NULL and the corresponding memory has not been overwritten by some other allocation, it still works, because the memory content is not changed by free (assuming the runtime doesn't overwrite the memory area on free). BUT this is definitely undefined behaviour and you CANNOT rely on it to work this way...

Things to keep in mind...
On virtually all current operating systems, free() never returns memory to the operating system. Even if it's theoretically capable of that, it would almost never actually happen. This is because memory can only be returned in aligned pages of, generally, 4 kB, because that's how the MMU works, and, should such a page be found, ripping it out would likely fragment a block that includes memory above and below it, making the entire exercise counterproductive. (Fragmentation is the enemy of efficient use of dynamic memory.) Also, most programs just generally use more memory over time, so the time spent searching for something to really return to the OS would be wasted. If the memory is not given back to the OS and then protected, it won't make your program dump core when you touch it.
So instead, what happens is that the block you gave to free is just put on a list or some other data structure, then possibly merged with the blocks above or below it. The memory is still there and accessible. Some of the block may be overwritten with pointers and other internals of the library code behind malloc() and free(). But it might not be. Eventually it may be handed back elsewhere in your program. But it might not be.
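To make the "put on a list" idea concrete, here is a minimal sketch of what the free routine of such an allocator might look like. This is not any real allocator's code; the header layout and the names block_header, free_list and toy_free are invented for illustration:

#include <stddef.h>

/* Hypothetical header kept just before each user block. */
struct block_header {
    size_t size;                 /* usable size of the block    */
    struct block_header *next;   /* next block on the free list */
};

static struct block_header *free_list = NULL;

/* A toy free(): the block is linked onto a list for reuse.
 * Nothing scrubs the user's data, so the old contents usually
 * remain readable until the block is handed out again. */
void toy_free(void *ptr)
{
    if (ptr == NULL)
        return;
    struct block_header *hdr = (struct block_header *)ptr - 1;
    hdr->next = free_list;       /* only a few bookkeeping bytes change... */
    free_list = hdr;             /* ...the rest of the block is untouched  */
}

Note that only the bookkeeping area gets overwritten; the rest of your old data typically sits there untouched until the block is reused, which is why the use-after-free in the question appears to work.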

Related

C program help: Insufficient memory allocation but still works...why? [duplicate]

Possible Duplicate: behaviour of malloc(0)
I'm trying to understand memory allocation in C, so I am experimenting with malloc. I allocated 0 bytes for this pointer, yet it can still hold an integer. As a matter of fact, no matter what number I put into the parameter of malloc, it can still hold any number I give it. Why is this?
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *ptr = (int*)malloc(0);
    *ptr = 9;
    printf("%i", *ptr); // 9
    free(ptr);
    return 0;
}
It still prints 9, what's up with that?
If size is 0, then malloc() returns either NULL, or a unique pointer value that can later be successfully passed to free().
I guess you are hitting the 2nd case.
Anyway, that pointer just happens, by luck, to land in an area where you can write without generating a segmentation fault, but you are probably writing into the space of some other variable and messing up its value.
A lot of good answers here. But it is definitely undefined behavior. Some people declare that undefined behavior means that purple dragons may fly out of your computer or something like that... there's probably some history behind that outrageous claim that I'm missing, but I promise you that purple dragons won't appear regardless of what the undefined behavior will be.
First of all, let me mention that in the absence of an MMU, on a system without virtual memory, your program would have direct access to all of the memory on the system, regardless of its address. On a system like that, malloc() is merely the guy who helps you carve out pieces of memory in an ordered manner; the system can't actually force you to use only the addresses that malloc() gave you. On a system with virtual memory, the situation is slightly different... well, ok, a lot different. But within your program, any code in your program can access any part of the virtual address space that's mapped via the MMU to real physical memory. It doesn't matter whether you got an address from malloc() or whether you called rand() and happened to get an address that falls in a mapped region of your program; if it's mapped and not marked execute-only, you can read it. And if it isn't marked read-only, you can write it as well. Yes. Even if you didn't get it from malloc().
Let's consider the possibilities for the malloc(0) undefined behavior:
malloc(0) returns NULL.
OK, this is simple enough. There really is a physical address 0x00000000 in most computers, and even a virtual address 0x00000000 in all processes, but the OS intentionally doesn't map any memory to that address so that it can trap null pointer accesses. There's a whole page (generally 4 kB) there that's just never mapped at all, and maybe even much more than that. Therefore if you try to read or write through a null pointer, even with an offset from it, you'll hit these pages of virtual memory that aren't even mapped, and the MMU will raise an exception (a hardware exception, or interrupt) that the OS catches and turns into a SIGSEGV (on Linux/Unix) or an access violation (on Windows).
malloc(0) returns a valid address to previously unallocated memory of the smallest allocable unit.
With this, you actually get a real piece of memory that you can legally call your own, of some size you don't know. You really shouldn't write anything there (and probably not read either) because you don't know how big it is, and for that matter, you don't know if this is the particular case you're experiencing (see the following cases). If this is the case, the block of memory you were given is almost guaranteed to be at least 4 bytes and probably is 8 bytes or perhaps even larger; it all depends on whatever the size is of your implementation's minimum allocable unit.
malloc(0) intentionally returns the address of an unmapped page of memory other than NULL.
This is probably a good option for an implementation, as it would allow you or the system to track & pair together malloc() calls with their corresponding free() calls, but in essence, it's the same as returning NULL. If you try to access (read/write) via this pointer, you'll crash (SEGV or illegal access).
malloc(0) returns an address in some other mapped page of memory that may be used by "someone else".
I find it highly unlikely that a commercially available system would take this route, as it serves to hide bugs rather than bring them out as soon as possible. But if it did, malloc() would be returning a pointer to somewhere in memory that you do not own. If this is the case, sure, you can write to it all you want, but you'd be corrupting some other code's memory, though it would be memory in your program's process, so you can be assured that you're at least not going to be stomping on another program's memory. (I hear someone getting ready to say, "But it's UB, so technically it could be stomping on some other program's memory." Yes, in some environments, like an embedded system, that is right. No modern commercial OS would let one process have access to another process's memory as easily as simply calling malloc(0), though; in fact, you simply can't get at another process's memory without going through the OS to do it for you.) Anyway, back to reality... This is the one where "undefined behavior" really kicks in: if you're writing to "someone else's memory" (in your own program's process), you'll be changing the behavior of your program in difficult-to-predict ways. Knowing the structure of your program and where everything is laid out in memory, it's fully predictable. But from one system to another, things will be laid out differently in memory (appearing at different locations), so the effect on one system would not necessarily be the same as the effect on another system, or even on the same system at a different time.
And finally.... No, that's it. There really, truly, are only those four possibilities. You could argue for special-case subsets of the last two of the above, but the end result will be the same.
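As an aside on the "smallest allocable unit" possibility above: some allocators let you ask how big the block you were handed really is. A hedged sketch, assuming a glibc system where the non-standard <malloc.h> header provides malloc_usable_size():

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* glibc-specific; declares malloc_usable_size() */

int main(void)
{
    void *p = malloc(0);
    if (p != NULL) {
        /* On glibc this typically prints a small non-zero number
         * (the minimum chunk size), but the value is unspecified. */
        printf("usable size of malloc(0) block: %zu\n",
               malloc_usable_size(p));
    } else {
        puts("malloc(0) returned NULL on this implementation");
    }
    free(p);   /* free(NULL) is also fine */
    return 0;
}

Whatever number it prints, the standard still only permits you to pass that malloc(0) pointer to free(); the query is a diagnostic curiosity, not a license to use the space.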
For one thing, your compiler may be seeing these two lines back to back and optimizing them:
*ptr = 9;
printf("%i", *ptr);
With such a simplistic program, your compiler may actually be optimizing away the entire memory allocate/free cycle and using a constant instead. A compiler-optimized version of your program could end up looking more like simply:
printf("9");
The only way to tell if this is indeed what is happening is to examine the assembly that your compiler emits. If you're trying to learn how C works, I recommend explicitly disabling all compiler optimizations when you build your code.
Regarding your particular malloc usage, remember that you will get a NULL pointer back if allocation fails. Always check the return value of malloc before you use it for anything. Blindly dereferencing it is a good way to crash your program.
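A minimal sketch of that checking habit, for contrast with the original snippet (the error handling style is a matter of taste):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *ptr = malloc(sizeof *ptr);   /* ask for space for one int       */
    if (ptr == NULL) {                /* allocation can fail; check it   */
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }
    *ptr = 9;
    printf("%i\n", *ptr);
    free(ptr);
    return 0;
}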
The link that Nick posted gives a good explanation about why malloc(0) may appear to work (note the significant difference between "works" and "appears to work"). To summarize the information there, malloc(0) is allowed to return either NULL or a pointer. If it returns a pointer, you are expressly forbidden from using it for anything other than passing it to free(). If you do try to use such a pointer, you are invoking undefined behavior and there's no way to tell what will happen as a result. It may appear to work for you, but in doing so you may be overwriting memory that belongs to another program and corrupting their memory space. In short: nothing good can happen, so leave that pointer alone and don't waste your time with malloc(0).
The answer to why the malloc(0)/free() calls don't crash can be found here:
zero size malloc
About the *ptr = 9: it is just like overflowing a buffer (like malloc'ing 10 bytes and accessing the 11th); you are writing to memory you don't own, and doing that is looking for trouble. In this particular implementation malloc(0) happens to return a pointer instead of NULL.
Bottom line, it is wrong even if it seems to work on a simple case.
Some memory allocators have the notion of a "minimum allocatable size". So even if you pass zero, this will return a pointer to, for example, a word-sized piece of memory. You need to check your system allocator's documentation. But even if it does return a pointer to some memory, it would be wrong to rely on that, as the pointer is only supposed to be passed to realloc() or free().

Why is malloc really non-deterministic? (Linux/Unix)

malloc is not guaranteed to return 0'ed memory. The conventional wisdom is not only that, but that the contents of the memory malloc returns are actually non-deterministic, e.g. openssl used them for extra randomness.
However, as far as I know, malloc is built on top of brk/sbrk, which do "return" 0'ed memory. I can see why the contents of what malloc returns may be non-0, e.g. from previously free'd memory, but why would they be non-deterministic in "normal" single-threaded software?
Is the conventional wisdom really true (assuming the same binary and libraries)?
If so, Why?
Edit: Several people answered explaining why the memory can be non-zero, which I already explained in the question above. What I'm asking is why a program using the contents of what malloc returns may be non-deterministic, i.e. why it could have different behavior every time it's run (assuming the same binary and libraries). Non-deterministic behavior is not implied by non-zero values. To put it differently: why could it have different contents every time the binary is run?
Malloc does not guarantee unpredictability... it just doesn't guarantee predictability.
E.g. consider that
return 0;
is a valid implementation of malloc.
The initial values of memory returned by malloc are unspecified, which means that the specifications of the C and C++ languages put no restrictions on what values can be handed back. This makes the language easier to implement on a variety of platforms. While it might be true that in Linux malloc is implemented with brk and sbrk and the memory should be zeroed (I'm not even sure that this is necessarily true, by the way), on other platforms, perhaps an embedded platform, there's no reason that this would have to be the case. For example, an embedded device might not want to zero the memory, since doing so costs CPU cycles and thus power and time. Also, in the interest of efficiency, for example, the memory allocator could recycle blocks that had previously been freed without zeroing them out first. This means that even if the memory from the OS is initially zeroed out, the memory from malloc needn't be.
The conventional wisdom that the values are nondeterministic is probably a good one because it forces you to realize that any memory you get back might have garbage data in it that could crash your program. That said, you should not assume that the values are truly random. You should, however, realize that the values handed back are not magically going to be what you want. You are responsible for setting them up correctly. Assuming the values are truly random is a Really Bad Idea, since there is nothing at all to suggest that they would be.
If you want memory that is guaranteed to be zeroed out, use calloc instead.
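For comparison, a small sketch of the difference in practice; malloc leaves the contents indeterminate while calloc guarantees zeros:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 8;

    int *a = malloc(n * sizeof *a);   /* contents are indeterminate       */
    int *b = calloc(n, sizeof *b);    /* contents are guaranteed all zero */
    if (a == NULL || b == NULL) {
        free(a);
        free(b);
        return EXIT_FAILURE;
    }

    /* Reading a[i] before writing it tells you nothing useful;
     * b[i] is guaranteed to be 0. */
    printf("b[0] = %d\n", b[0]);

    free(a);
    free(b);
    return 0;
}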
Hope this helps!
malloc is defined on many systems that can be programmed in C/C++, including many non-UNIX systems and many systems that lack an operating system altogether. Requiring malloc to zero out the memory goes against C's philosophy of saving CPU cycles as much as possible.
The standard provides a zeroing call, calloc, that can be used if you need the memory zeroed out. But in cases when you are planning to initialize the memory yourself as soon as you get it, the CPU cycles spent making sure the block is zeroed out are a waste; the C standard aims to avoid this waste as much as possible, often at the expense of predictability.
Memory returned by malloc is not zeroed (or rather, is not guaranteed to be zeroed) because it does not need to be. There is no security risk in reusing uninitialized memory pulled from your own process's address space or page pool. You already know it's there, and you already know the contents. There is also no issue with the contents in a practical sense, because you're going to overwrite them anyway.
Incidentally, the memory returned by malloc is zeroed upon first allocation, because an operating system kernel cannot afford the risk of giving one process data that another process owned previously. Therefore, when the OS faults in a new page, it only ever provides one that has been zeroed. However, this is totally unrelated to malloc.
(Slightly off-topic: The Debian security issue you mentioned had a few more implications than using uninitialized memory for randomness. A packager who was not familiar with the inner workings of the code, and did not know the precise implications, patched out a couple of places that Valgrind had reported, presumably with good intent but to disastrous effect. Among these was the "random from uninitialized memory", but it was by far not the most severe one.)
I think that the assumption that it is non-deterministic is plainly wrong, particularly as you ask about a non-threaded context. (In a threaded context, scheduling randomness could introduce some non-determinism.)
Just try it out. Create a sequential, deterministic application that
does a whole bunch of allocations
fills the memory with some pattern, eg fill it with the value of a counter
free every second of these allocations
newly allocate the same amount
run through these new allocations and register the value of the first byte in a file (as textual numbers one per line)
run this program twice and register the result in two different files. My idea is that these files will be identical.
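Here is a hedged sketch of that experiment (the sizes and names are arbitrary); run it twice with different output files and diff them:

#include <stdio.h>
#include <stdlib.h>

#define N  1000
#define SZ 64

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s outfile\n", argv[0]);
        return EXIT_FAILURE;
    }
    FILE *out = fopen(argv[1], "w");
    if (out == NULL)
        return EXIT_FAILURE;

    unsigned char *blocks[N];

    /* 1. a whole bunch of allocations, filled with a counter pattern */
    for (int i = 0; i < N; i++) {
        blocks[i] = malloc(SZ);
        if (blocks[i] == NULL)
            return EXIT_FAILURE;
        for (int j = 0; j < SZ; j++)
            blocks[i][j] = (unsigned char)i;
    }

    /* 2. free every second allocation */
    for (int i = 0; i < N; i += 2) {
        free(blocks[i]);
        blocks[i] = NULL;
    }

    /* 3. allocate the same amount again and record the first byte of each
     *    new block; the contents are indeterminate, which is exactly what
     *    the experiment is probing */
    for (int i = 0; i < N; i += 2) {
        unsigned char *p = malloc(SZ);
        if (p == NULL)
            return EXIT_FAILURE;
        fprintf(out, "%u\n", (unsigned)p[0]);
        blocks[i] = p;
    }

    for (int i = 0; i < N; i++)
        free(blocks[i]);
    fclose(out);
    return 0;
}

On a typical glibc system the two output files usually come out identical, which is the point being made, though nothing in the standard guarantees it.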
Even in "normal" single-threaded programs, memory is freed and reallocated many times. Malloc will return to you memory that you had used before.
Even single-threaded code may do malloc then free then malloc and get back previously used, non-zero memory.
There is no guarantee that brk/sbrk return 0ed-out data; this is an implementation detail. It is generally a good idea for an OS to do that to reduce the possibility that sensitive information from one process finds its way into another process, but nothing in the specification says that it will be the case.
Also, the fact that malloc is implemented on top of brk/sbrk is also implementation-dependent, and can even vary based on the size of the allocation; for example, large allocations on Linux have traditionally used mmap on /dev/zero instead.
Basically, you can neither rely on malloc()ed regions containing garbage nor on it being all-0, and no program should assume one way or the other about it.
The simplest way I can think of putting the answer is like this:
If I am looking for wall space to paint a mural, I don't care whether it is white or covered with old graffiti, since I'm going to prime it and paint over it. I only care whether I have enough square footage to accommodate the picture, and I care that I'm not painting over an area that belongs to someone else.
That is how malloc thinks. Zeroing memory every time a process ends would be wasted computational effort. It would be like re-priming the wall every time you finish painting.
There is a whole ecosystem of programs living inside a computer's memory, and you cannot control the order in which mallocs and frees happen.
Imagine that the first time you run your application and malloc() something, it gives you an address with some garbage. Then your program shuts down, your OS marks that area as free. Another program takes it with another malloc(), writes a lot of stuff and then leaves. You run your program again, it might happen that malloc() gives you the same address, but now there's different garbage there, that the previous program might have written.
I don't actually know the implementation of malloc() in any system and I don't know if it implements any kind of security measure (like randomizing the returned address), but I don't think so.
It is very deterministic.

Does every malloc call have to be freed

From what I understand, because malloc dynamically assigns memory, you need to free that memory so that it can be used again.
What happens if you return a char* that was created using malloc (i.e. how are you supposed to free that)?
If you leave the pointer as it is and exit the application, will it be freed? (I can't find a definite answer on this; some say yes, some say no.)
The caller has to free it (or arrange for it to be freed). This means that functions that create and return resources need to document exactly how it should be freed.
Most OSes will free the memory when the program exits, as part of the definition of a "process". The C standard doesn't care what happens, it's beyond the scope of the program. Not all OSes have a full process abstraction, but desktop-style OSes certainly do.
The main reasons to free it before that are:
If you free memory as soon as possible, often a long time before process exit, your program uses less memory total.
If you don't free it, and you later want to change your program into a routine within another program, that perhaps is called many times, then suddenly you require many times as much memory as before (memory leak).
There are debugging tools that will help you identify memory leaks, by warning you about memory that is still allocated when the program exits. These don't really help much if there's a lot of deliberately-leaked junk to wade through.
If you don't free it and you hit any problems, it's much harder to go back later and find all the memory that needs freeing, than it is to do it right in the first place.
There are so many cases where you do need to free the memory (to prevent huge memory use in long-running programs), that your default strategy must be to clean pretty much everything up anyway.
The vaguely plausible reasons not to free are:
Less code.
If you have squillions of blocks to free individually, immediately before program exit, then it might be much faster to let the OS drop the whole process.
Stuff which is created on demand and stored in globals might be quite difficult to clean up safely, if you don't know exactly where it's used. Think of some kind of cache that's populated as you go along, that might have MRU rules to limit how much memory it occupies, so it's not an unlimited leak. OK, so this is one bad thing (unrestricted globals) causing another bad thing (unfreed memory), but it's worth knowing about as a reason why you might see unfreed blocks in existing code, and you can't necessarily just go in and fix them.
The reasons for freeing almost always outweigh the reasons against.
If you have a pointer to memory created by malloc, freeing that memory through that pointer will do the right thing. Yes, there is some bookkeeping involved; that is taken care of by the allocator behind malloc and free.
Yes, if you skip the freeing and exit the application, the OS will release the memory. However, it's considered bad practice to leave it unfreed. The OS may not do the right thing (especially in embedded settings), or may not do it in a timely fashion. Also, if your program runs continuously, it may consume a growing amount of memory, eventually exhausting it and crashing.
Yes. If you malloc, you need to free. You are guaranteeing memory leaks while your program is running if you don't free.
Free it.
Always.
Period.
Yes, every call to malloc() has to be matched with a call to free().
To answer your specific questions:
You have to explicitly document your API telling the user whether the returned pointer has to be free()'d
The OS will free all memory allocated to the process.
If you write the function yourself: Avoid doing that.
Instead, let the caller pass a buffer, let the caller specify the buffer's size and copy the data into that buffer. That way, you can use your function from other modules that don't use the same heap (other programming languages, different C runtime...)
If you for whatever reason can not use such an interface, specify in the function's documentation that the caller has to free the returned pointer after it is done with it.
If you are using a library function: Have a look at the documentation.
If the documentation states that you have to free, do so.
If the documentation states that you don't have to, it might be some global cleanup function that has to be called to free the module's resources.
Regarding your second question, freeing before exiting is recommended. Technically it won't hurt not to, but when you later want to reuse your code in a bigger project, you will be thankful that you wrote the correct cleanup in the first place.
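A short sketch contrasting the two interface styles from the "if you write the function yourself" advice above; the function names are invented for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Style 1: caller supplies the buffer and its size; no ownership
 * transfer, so there is nothing for the caller to free. */
int get_greeting(char *buf, size_t bufsize)
{
    if (snprintf(buf, bufsize, "hello") >= (int)bufsize)
        return -1;   /* buffer too small */
    return 0;
}

/* Style 2: callee allocates; the documentation must say
 * "the caller must free() the returned pointer". */
char *make_greeting(void)
{
    char *s = malloc(6);
    if (s != NULL)
        strcpy(s, "hello");
    return s;
}

int main(void)
{
    char buf[16];
    if (get_greeting(buf, sizeof buf) == 0)
        puts(buf);

    char *s = make_greeting();
    if (s != NULL) {
        puts(s);
        free(s);   /* ownership was transferred to us */
    }
    return 0;
}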
The C standard has no concept of the system environment outside of a single program's execution, so it cannot specify what happens "after the program exits". At the same time, nowhere does it make any requirement that memory obtained with malloc should or must be released with free before a call to exit or a return from main, and I think it's pretty clear that the intention is that exiting without manually freeing memory will not leave resources tied up - much like how calling exit without closing all files first automatically closes them (including flushing them).
Now, as for whether you should or should not call free, that depends a lot on your particular program.
Any library code should free any memory that it obtained purely for internal use as soon as possible.
A library which returns allocated objects to the calling program should always provide a corresponding call to free those objects.
A library which performs any allocations as part of a global initialization (note: this is a very bad design, but sometimes inevitable) should provide a way for the application to reverse that initialization and free everything that was allocated. This is especially important if the library might ever be loaded dynamically (even as a consequence of satisfying another dynamically-loaded library's dependencies).
So far I've only talked about library code. At this point, all that's left is allocations made by the application itself or on the application's behalf by libraries. My view, and I will admit that it is unorthodox, is that freeing such objects is not just unnecessary but harmful. The main reason I say this is that most long-lived applications will have accumulated quite a bit of allocated memory which they are not making significant use of (think of the undo buffer in a word processor or the history in a browser). On a moderately loaded system, much of this data has been swapped to disk by the time the application terminates. If you want to free it, you're going to end up walking all over swapped-out memory addresses tracking down all the pointers to free,
putting useless wear on the physical components of the hard drive
making the user wait for your application to exit
causing other still-in-use applications' data to get swapped out, making them run slower
All of this in the name of a ridiculous "you must free everything you allocate" rule.
For short-lived applications, it's not so much of a big deal, but you can often simplify the implementation of short-lived applications that perform a single linear task and exit if you don't bother freeing all the memory they allocate. Think of most unix command line utilities. Is there any use to writing the loops for sed to free all its compiled regular expressions before exiting? Couldn't programmers' time be spent on something more productive?
1) The same way you'd free the memory normally, i.e.
p = func();
//...
free(p);
The trick is in making sure that you always do it...
2) Generally speaking, yes. But you should still free any memory you use as good practice. Not spending the time to figure out where to free the memory is just being lazy.
Let's take those one point at a time...
If you return a char * that you know was created with malloc, then yes, it is your responsibility to free that. You can do that with free(myCharPtr).
The OS will claim the memory back, and it won't be lost forever, but there's technically no guarantee that it will be reclaimed right when the application dies. That just depends on the operating system.
I wouldn't go so far as to say every malloc must be freed, but I would say that, no matter how long a program runs, there must be a bounded number of allocations (and total size) that won't be freed. The number need not be a static constant, but it must be specifiable in terms of something else (e.g. this program processes widgets; it will allocate one 64-byte struct for each quizzix in the largest widget). One may not know beforehand the size of the largest widget, but if e.g. one knows that the temporary storage required to process a widget is proportional to the square of its size, one might safely infer that the largest widget will be small enough that the total amount of memory stranded will be pretty slight.
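A common benign version of such a bounded, deliberately unfreed allocation is a lazily created buffer that lives for the whole process. A sketch, with an invented name and no thread safety:

#include <stdlib.h>

/* Allocated once, grown as needed, reused for the life of the process,
 * and never freed. This is a single bounded allocation, not a leak that
 * grows with runtime. */
char *scratch_buffer(size_t size)
{
    static char *buf = NULL;
    static size_t cap = 0;
    if (size > cap) {
        char *p = realloc(buf, size);
        if (p == NULL)
            return NULL;
        buf = p;
        cap = size;
    }
    return buf;
}

The total unfreed memory here is bounded by the largest request ever made, so it is a fixed cost rather than a growing leak; you can still free it at exit if you want a clean report from a leak checker.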

Why freed struct in C still has data?

When I run this code:
#include <stdio.h>
#include <stdlib.h>

typedef struct _Food
{
    char name[128];
} Food;

int
main (int argc, char **argv)
{
    Food *food;

    food = (Food*) malloc (sizeof (Food));
    snprintf (food->name, 128, "%s", "Corn");
    free (food);

    printf ("%zu\n", sizeof *food);
    printf ("%s\n", food->name);
    return 0;
}
I still get
128
Corn
although I have freed food. Why is this? Is memory really freed?
When you free 'food', you are saying you are done with it. However, the pointer food still points to the same address, and that data is still there (it would be too much overhead to zero out every freed block when it isn't necessary).
Basically it's because it's such a small example that this works. If any other malloc calls were in between the free and the print statements, there's a chance that you wouldn't be seeing this, and would most likely crash in some awful way. You shouldn't rely on this behavior.
There is nothing like free food :)
When you "free" something, it means that the same space is again ready to be used by something else. It does NOT mean filling it up by garbage.
Secondly, the pointer value has not changed -- if you are seriously coding you should set a pointer to NULL once you have freed it so that potential junk accesses like this do not happen.
Freeing memory doesn't necessarily overwrite the contents of it.
sizeof is a compile-time operation, so memory allocation won't change how it works.
free does not erase memory, it just marks the block as unused. Even if you allocate a few hundred megabytes of memory, your pointer may still not be overwritten (modern computers have lots of RAM). However, after you free memory, you can no longer depend on its value.
See if your development environment has a memory allocation debugging setting -- some have settings to overwrite blocks with something like 0xDEADBEEF when you free them.
Also, you may wish to adopt the habit of setting your pointer to NULL immediately after calling free (to help encourage your program to crash early and loudly).
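A tiny sketch of that habit; as an aside, on glibc you can also set the MALLOC_PERTURB_ environment variable to have freed blocks scribbled over, which tends to turn silent use-after-free into loud failures:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *name = malloc(32);
    if (name == NULL)
        return EXIT_FAILURE;
    strcpy(name, "Corn");

    free(name);
    name = NULL;          /* any later dereference now crashes predictably */

    /* if (name[0]) ...   would be a guaranteed null-pointer crash,
     * instead of silently reading stale heap contents */
    return 0;
}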
free tells the memory allocator that it can reuse that memory block, nothing else. It doesn't overwrite the block with zeros or anything, luckily, because that could be quite an expensive operation! What it does do is make any further dereferencing of the pointer undefined, but "undefined" behaviour can very well mean "does the same thing as before"; you just can't rely on it. With another compiler, another runtime, or under other conditions it might throw an exception, terminate the program, or corrupt other data, so... just DON'T.
There's no such thing as "struct has data" or "struct doesn't have data" in C. In your program you have a pointer that points somewhere in memory. As long as this memory belongs to your application (i.e. not returned to the system) it will always contain something. That "something" might be total garbage, or it might look more or less meaningful. Moreover, memory might contain garbage that appears as something meaningful (remains of the data previously stored there).
This is exactly what you observe in your experiment. Once you deallocated the struct, the memory formerly occupied by it officially contains garbage. However, that garbage might still resemble bits and pieces of the original data stored in that struct object at the moment it was deallocated. In your case you got lucky, so the data looks intact. Don't count on it though - next time it might get totally destroyed.
As far as C language is concerned, what you are doing constitutes undefined behavior. You are not allowed to check whether a deallocated struct "has data" or not. The "why" question you are asking does not really exist in the realm of C language.
In some systems freeing memory will unmap it from the address space and you will get a core dump or equivalent if you try to access it after unallocating it.
On Win32 systems (at least up through XP) this is specifically not the case. Microsoft made the memory subsystem on 32-bit Windows purposely let freed memory blocks linger, to maintain compatibility with well-known MS-DOS applications that used memory after freeing it.
In the MS-DOS programming model there is no concept of mapping or process space, so these types of bugs didn't show up as program failures until the programs were executed as DOS-mode programs under Windows 95.
That behavior persisted in 32-bit Windows for over a decade. It may change now that legacy compatibility is being withdrawn in systems such as Vista and 7.

Why exactly should I not call free() on variables not allocated by malloc()?

I read somewhere that it is disastrous to use free to get rid of an object not created by calling malloc, is this true? why?
That's undefined behavior - never try it.
Let's see what happens when you try to free() an automatic variable. The heap manager has to deduce how to take ownership of the memory block. To do so it will either use some separate structure that lists all allocated blocks, which is slow and rarely used, or hope that the necessary data is located near the beginning of the block.
The latter is used quite often, and here's how it is supposed to work. When you call malloc() the heap manager allocates a slightly bigger block, stores service data at the beginning and returns an offset pointer. Something like:
void* malloc( size_t size )
{
    void* block = tryAlloc( size + sizeof( size_t ) );
    if( block == 0 ) {
        return 0;
    }
    // the following is for illustration, more service data is usually written
    *((size_t*)block) = size;
    return (size_t*)block + 1;
}
Then free() will try to access that service data by offsetting the passed pointer, but if the pointer is to an automatic variable, whatever happens to be located where it expects the service data will be used instead. Hence undefined behavior. Often the service data is also modified by free() so that the heap manager can take ownership of the block, so if the pointer passed points to an automatic variable, some unrelated memory will be read and modified.
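Continuing that illustration, the matching free() might look something like the following; returnToFreeList() is a made-up placeholder for whatever the heap manager does with reclaimed blocks:

void free( void* ptr )
{
    if( ptr == 0 ) {
        return;
    }
    // step back to where malloc() stored its service data; if ptr never
    // came from this malloc(), this reads (and later modifies) whatever
    // unrelated data happens to sit there
    size_t* block = (size_t*)ptr - 1;
    size_t size = *block;
    returnToFreeList( block, size + sizeof( size_t ) );  // hypothetical helper
}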
Implementations may vary but you should never make any specific assumptions. Only call free() on addresses returned by malloc() family functions.
By the standard, it's "undefined behavior" - i.e. "anything can happen". That's usually bad things, though.
In practice: freeing a pointer means modifying the heap. The C runtime virtually never validates that the pointer passed actually came from the heap; that would be too costly in either time or memory. Combine these two factoids and you get "free(non-malloced-ptr) will write something somewhere": the result may be some of "your" data modified behind your back, an access violation, or trashed vital runtime structures, such as a return address on the stack.
Example: A disastrous scenario:
Your heap is implemented as a simple list of free blocks. malloc means removing a suitable block from the list, free means adding it to the list again. (a typical if trivial implementation)
You free() a pointer to a local variable on the stack. You are "lucky" because the modification goes into irrelevant stack space. However, part of the stack is now on your free list.
Because of the allocator design and your allocation patterns, malloc is unlikely to return this block right away. Later, in a completely unrelated part of the program, you actually do get this block as a malloc result; writing to it trashes some local variables up the stack, and when that function returns, some vital pointer contains garbage and your app crashes. The symptoms, repro and location are completely unrelated to the actual cause.
Debug that.
It is undefined behaviour. And logically, if behaviour is undefined, you cannot be sure what has happened, and if the program is still operating properly.
Some people have pointed out here that this is "undefined behavior". I'm going to go farther and say that on some implementations, this will either crash your program or cause data corruption. It has to do with how "malloc" and "free" are implemented.
One possible way to implement malloc/free is to put a small header before each allocated region. On a malloc'd region, that header would contain the size of the region. When the region is freed, that header is checked and the region is added to the appropriate freelist. If this happens to you, this is bad news. For example, if you free an object allocated on the stack, suddenly part of the stack is in the freelist. Then malloc might return that region in response to a future call, and you'll scribble data all over your stack. Another possibility is that you free a string constant. If that string constant is in read-only memory (it often is), this hypothetical implementation would cause a segfault and crash either after a later malloc or when free adds the object to its freelist.
This is a hypothetical implementation I am talking about, but you can use your imagination to see how it could go very, very wrong. Some implementations are very robust and are not vulnerable to this precise type of user error. Some implementations even allow you to set environment variables to diagnose these types of errors. Valgrind and other tools will also detect these errors.
Strictly speaking, this is not true. calloc() and realloc() are valid object sources for free(), too. ;)
Please have a look at what undefined behavior means. malloc() and free() on a conforming hosted C implementation are built to standards. The standards say the behavior of calling free() on a heap block that was not returned by malloc() (or something wrapping it, e.g. calloc()) is undefined.
This means, it can do whatever you want it to do, provided that you make the necessary modifications to free() on your own. You won't break the standard by making the behavior of free() on blocks not allocated by malloc() consistent and even possibly useful.
In fact, there could be platforms that (themselves) define this behavior. I don't know of any, but there could be some. There are several garbage-collecting / logging malloc() implementations that might let it fail more gracefully while logging the event. But that's implementation behavior, not standards-defined behavior.
Undefined simply means don't count on any kind of consistent behavior unless you implement it yourself without breaking any defined behavior. Finally, implementation defined does not always mean defined by the host system. Many programs link against (and ship) uclibc. In that case, the implementation is self contained, consistent and portable.
It would certainly be possible for an implementation of malloc/free to keep a list of the memory blocks that have been allocated and, in case the user tries to free a block that isn't in this list, do nothing.
However, since the standard doesn't require this, most implementations will treat all pointers coming into free as valid.

Resources