Can looking at freed memory cause an access violation? - c

Can accessing (read-only) freed memory cause an access violation, and, if so, under what circumstances?

Yes, it can. "Access violation" ("segmentation fault", etc.) is the response normally generated by the OS/hardware when the process attempts to access (even just for reading) memory that is known to the OS as "empty", "freed", or inaccessible for some other reason. The key point here is that the OS/hardware must know that the memory is free. The memory management functions of the C standard library don't necessarily return free'd memory to the OS; they might (and usually will) keep it for future allocations. So in some cases accessing free'd memory will not result in an access violation, since from the OS's/hardware's point of view this memory has not really been freed. However, at some point the standard library might decide to return the collected free memory to the OS, after which an attempt to access that memory will normally result in an access violation.
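Whether a read of free'd memory faults therefore depends mostly on whether the pages went back to the OS. As a hedged illustration, assuming a glibc-like allocator where large blocks are typically obtained with mmap and released with munmap on free (typical, but in no way guaranteed), both reads below are undefined behavior, yet the second is far more likely to crash:

    #include <stdio.h>
    #include <stdlib.h>

    /* Both reads after free are undefined behavior; this only illustrates why
     * one is more likely to fault than the other on a glibc-like allocator. */
    int main(void)
    {
        char *small = malloc(16);       /* likely stays inside the heap arena */
        char *big   = malloc(1 << 20);  /* likely a separate mmap'd region    */
        if (!small || !big)
            return 1;

        free(small);
        printf("%d\n", small[0]);       /* UB: often "works", reads stale data */

        free(big);
        printf("%d\n", big[0]);         /* UB: often faults, the pages were
                                           likely returned to the OS via munmap */
        return 0;
    }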

You're asking "can" and not "will", so your answer is yes. It is undefined behavior to use a pointer to memory your program no longer owns, therefore anything could happen.
Will it? That depends; it is very OS-specific. You might be able to get away with it, but obviously you cannot depend on it. Trying to dereference the pointer could cause an exception, because the OS may have reclaimed the memory for its own uses (again, OS-specific).

On Windows: Managing Virtual Memory in Win32
Free, Reserved, and Committed Virtual Memory
Every address in a process can be thought of as either free, reserved, or committed at any given time. A process begins with all addresses free, meaning they are free to be committed to memory or reserved for future use. Before any free address may be used, it must first be allocated as reserved or committed. Attempting to access an address that is either reserved or free generates an access violation exception.

Unlikely
Memory managers can theoretically return space to the OS but rarely if ever do so. And without returning the space all the way to the kernel, the MMU will never be involved and so a fault is not possible.
The problem is fragmentation. Allocation of variably-sized blocks is inefficient, and general purpose allocators cannot move allocated blocks around because they don't know what points to what, so they can't coalesce blocks unless they happen to be adjacent.
Measurements indicate that fragmentation overhead tends to be about 50% in steady-state processes, so with every-other-block untouchable it's impossible to return pages unless they are much smaller than blocks, and they generally are not.
Also, the book-keeping challenge of returning pages embedded within the heap is daunting, so most memory managers don't even have the capability, even in the unlikely case that they would have the opportunity.
Finally, the traditional process model was not a sparse object. This kind of low-level software is conservatively developed and lives for a long time. An allocator developed today might well attempt sparse allocation, but most subsystems just use whatever is already in the C library, and that is not something that's wise to rewrite casually.

It's certainly allowed to; the C standard says straightforwardly that behavior is undefined if "The value of a pointer that refers to space deallocated by a call to the free or realloc function is used". In practice, owing to the way OSs work, you're more likely to get garbage than a crash, but quite simply you're not allowed to assume anything about what happens when you invoke undefined behavior.

Freed memory no longer belongs to you. The corresponding physical page may already have been removed from your process's address space and remapped into another process's address space, and the address you are accessing may no longer have a physical page mapped behind it; in that case an "access violation" or "segfault" will happen even if you access it only for reading. The fault is raised by the processor hardware in general (e.g. a #GP or page fault), not by the OS itself.
However, if the physical page that held your freed memory is still under your task context's control, say because part of the page is still in use by your process, then the "access violation" or "segfault" may not occur.

If you're asking this because you've profiled your code and found that accessing freed memory would provide a significant performance boost, then the answer is that a violation very rarely occurs, provided the freed block is small. If you want to be sure, provide your own alternative implementation of malloc() and free().
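One way to read "provide your own implementation" is a pair of wrappers whose free only quarantines blocks instead of releasing them, so the memory stays owned by the process and remains readable until you flush. This is only a sketch under that assumption; my_malloc, my_free and my_flush are made-up names, not standard functions:

    #include <stdlib.h>

    #define QUARANTINE_SLOTS 1024

    static void *quarantine[QUARANTINE_SLOTS];
    static size_t q_count;

    void *my_malloc(size_t n) { return malloc(n); }

    void my_flush(void)                /* give everything back for real */
    {
        while (q_count)
            free(quarantine[--q_count]);
    }

    void my_free(void *p)              /* defer the real free: block stays valid */
    {
        if (!p)
            return;
        if (q_count == QUARANTINE_SLOTS)
            my_flush();                /* quarantine full: release the backlog */
        quarantine[q_count++] = p;
    }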

We can access it but not encouraged. for example
void main()
{
char *str, *ptr;
str = (char *)malloc(10);
ptr = str;
strcpy(str, "Hello");
printf("%s", str);
free(str);
printf("%s", ptr);
}

Related

Why am I not getting a segmentation error?

I have
x=(int *)malloc(sizeof(int)*(1));
but I am still able to read x[20] or x[4].
How am I able to access those values? Shouldn't I be getting a segmentation error when accessing that memory?
The basic premise is that of Sourav Ghosh's answer: accessing memory returned from malloc beyond the size you asked for is undefined behavior, so a conforming implementation is allowed to do pretty much anything, including happily returning bizarre values.
But given a "normal" implementation on mainstream operating systems on "normal" machines (gcc/MSVC/clang, Linux/Windows/macOS, x86/ARM) why do you sometimes get segmentation faults (or access violations), and sometimes not?
Pretty much every "regular" C implementation doesn't perform any kind of memory check when reading/writing through pointers [1]; these loads/stores generally get translated straight to the corresponding machine code, which accesses the memory at a given location without much regard for the size of the "abstract C machine" objects.
However, on these machines the CPU doesn't access the physical memory (RAM) of the PC directly; a translation layer (the MMU) is introduced [2]. Whenever your program tries to access an address, the MMU checks whether anything has been mapped there, and whether your process has permission to access it. If any of those checks fail [3], you get a segmentation fault and your process gets killed. This is why uninitialized and NULL pointer values generally give nice segfaults: some memory at the beginning of the virtual address space is deliberately kept unmapped just to catch NULL dereferences, and in general if you throw a dart at random into a 32-bit address space (or even better, a 64-bit one) you are most likely to hit zones of memory that have never been mapped to anything.
As good as it is, the MMU cannot catch all your memory errors for several reasons.
First of all, the granularity of memory mappings is quite coarse compared to most "run of the mill" allocations; on PCs memory pages (the smallest unit of memory that can be mapped and have protection attributes) are generally 4 KB in size. There is of course a tradeoff here: very small pages would require a lot of memory themselves (as there's a target physical address plus protection attributes associated with each page, and those have to be stored somewhere) and slow down the MMU operation [4]. So, if you access memory out of "logical" boundaries but still within the same memory page, the MMU cannot help you: as far as the hardware is concerned, you are still accessing valid memory.
Besides, even if you go outside of the last page of your allocation, it may be that the page that follows is "valid" as far as the hardware is concerned; indeed, this is pretty common for memory you get from the so-called heap (malloc & friends).
This comes from the fact that malloc, for smaller allocations, doesn't ask the OS for "new" blocks of memory (which in theory may be allocated keeping a guard page at both ends); instead, the allocator in the C runtime asks the OS for memory in big sequential chunks, and logically partitions them in smaller zones (usually kept in linked lists of some kind), which are handed out on malloc and returned back by free.
Now, when in your program you step outside the boundaries of the requested memory, you probably don't get any error as:
the memory chunk you are using isn't near a page boundary, so your out-of-bounds read doesn't trigger an access violation;
even if it was at the end of a page, the page that follows is still mapped, as it still belongs to the heap; it may either be memory that has been given to some other code of your process (so you are reading data belonging to some unrelated part of your code), or a free memory zone (so you are reading whatever garbage was left behind by the previous owner of the block when it freed it), or a zone used by the allocator to keep its bookkeeping data (so you are reading parts of that data).
In all these cases except for the "free block" one, even if you were to write there you wouldn't get a segmentation fault, but you could corrupt unrelated data or the data structures of the heap (which generally results in crashes later, as the allocator finds inconsistencies in its data).
Notes
[1] Although modern compilers provide special instrumented builds to trap some of these errors; gcc and clang, in particular, provide the so-called "address sanitizer".
[2] This allows the OS to introduce transparent paging (swapping out to disk memory zones that aren't actively used when physical memory runs low) and, most importantly, memory protection and address space separation (when a user-mode process is running, it "sees" a full virtual address space containing only its own stuff, and nothing from the other processes or the kernel).
[3] And it's not a fault put there on purpose by the operating system so that it gets notified that the process is trying to access memory that has been swapped out.
[4] Given that each access to memory needs to go through the MMU, the mapping must be very fast, so the most-used page mappings are kept in a cache; if you make the pages very small and the cache can hold just as many entries, you effectively have a smaller memory range covered by the cache.
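For example, assuming gcc or clang, the instrumented build mentioned in note [1] will catch the out-of-bounds reads from the question (x[4], x[20]) at runtime:

    /* Build with the address sanitizer, e.g.
     *   gcc -g -fsanitize=address oob.c      (clang accepts the same flag)
     * The run then aborts with a heap-buffer-overflow report pointing at the
     * offending read instead of quietly returning whatever is in that memory. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *x = (int *)malloc(sizeof(int) * 1);
        if (!x)
            return 1;
        x[0] = 42;
        printf("%d\n", x[4]);   /* out of bounds: ASan traps this access */
        free(x);
        return 0;
    }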
No, accessing invalid memory is undefined behavior, and a segmentation fault is only one of the many possible outcomes of UB. It is not guaranteed.
That said,
Always check that malloc() succeeded by comparing the returned pointer against NULL before using it.
Please see this: Do I cast the result of malloc?

Is it possible to detect that a block of memory was changed by a stray or wild pointer in C?

For example, I have a block of memory allocated in C:
void* block = malloc(1024*10);
At runtime I never change it deliberately, but it might be changed by memory corruption, for example through a stray or wild pointer:
memset(straypointer, 1, 1);
This will happen only very rarely, but there is still a chance.
So I wonder whether it is possible to know that my memory block has been changed unexpectedly.
I guess some kind of memory pool could do it, but I don't have any further ideas.
If you are on Windows: don't use malloc, use VirtualAlloc. Fill the memory with whatever you want, then use VirtualProtect to protect that memory.
As soon as someone writes to that memory region, your program will crash (or break into the debugger if one is attached). On other systems use a similar method (depending on the system); a sketch of the Windows version follows.
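A minimal sketch of that approach, assuming Windows and the Win32 API (error handling trimmed down):

    #include <windows.h>
    #include <string.h>

    /* Commit a page-aligned region with VirtualAlloc, fill it, then make it
     * read-only with VirtualProtect.  Any later write to the region raises an
     * access violation immediately, which a debugger will stop on. */
    int main(void)
    {
        SIZE_T size = 1024 * 10;
        void *block = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!block)
            return 1;

        memset(block, 0xAB, size);                 /* fill while still writable */

        DWORD oldProtect;
        if (!VirtualProtect(block, size, PAGE_READONLY, &oldProtect))
            return 1;

        /* ((char *)block)[0] = 0;   <- a stray write like this now faults */

        VirtualFree(block, 0, MEM_RELEASE);
        return 0;
    }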

free() not deallocating memory?

free(str);
printf("%d\n", str->listeners);
The call to printf succeeds (as do any other calls to str's members). How is this possible?
Here's an analogy for you: imagine you're renting an apartment (that's the memory) and you terminate your lease but keep a duplicate of the key (that's the pointer). You might be able to get back into the apartment later if it hasn't been torn down, if the locks haven't been changed, etc. and if you do it right away you might find things the way you left them. But it's a pretty bad idea, and in the likely case you're going to get yourself in a heap of trouble...
You're just (un)lucky. That code exhibits undefined behavior - anything can happen, including looking like the memory wasn't freed.
The memory is freed, but there is no point in actively clearing it, so its original content is likely to still be there. But you can't rely on that.
That is called undefined behavior. You are dereferencing a pointer which refers to deallocated memory. Anything can happen, i.e., one cannot assume that the program will crash or anything else; the behavior is undefined.
As long as str is not NULL and the corresponding memory has not been overwritten by some other allocation, it still works, because the memory content is not changed by free (assuming the runtime doesn't overwrite the memory area on free). BUT this is definitely undefined behaviour and you CANNOT rely on it working this way...
Things to keep in mind...
On virtually all current operating systems, free() never returns memory to the operating system. Even if it's theoretically capable of that, it would almost never actually happen. This is because memory can only be returned in aligned pages of, generally, 4 kB, because that's how the MMU works, and, should such a page be found, ripping it out would likely fragment a block that included memory above and below it, making the entire exercise counterproductive. (Fragmentation is the enemy of efficient use of dynamic memory.)

Also, most programs just generally use more memory over time, so the time spent searching for something to really return to the OS would be totally wasted. If the memory is not given back to the OS and then protected, it won't make your program dump core when you touch it.

So instead, what happens is that the block you gave to free is just put on a list or some other data structure, then possibly merged with blocks above or below it. The memory is still there and accessible. Some of the block may be overwritten with pointers and other internals of the library code behind malloc() and free(). But it might not be.

Eventually it may be handed back elsewhere in your program. But it might not be.

I can use more memory than how much I've allocated with malloc(), why?

char *cp = (char *) malloc(1);
strcpy(cp, "123456789");
puts(cp);
output is "123456789" on both gcc (Linux) and Visual C++ Express, does that mean when there is free memory, I can actually use more than what I've allocated with malloc()?
and why malloc(0) doesn't cause runtime error?
Thanks.
You've asked a very good question and maybe this will whet your appetite about operating systems. Already you know you've managed to achieve something with this code that you wouldn't ordinarily expect to do. So you would never do this in code you want to make portable.
To be more specific, and this depends entirely on your operating system and CPU architecture, the operating system allocates "pages" of memory to your program - typically this can be in the order of 4 kilobytes. The operating system is the guardian of pages and will immediately terminate any program that attempts to access a page it has not been assigned.
malloc, on the other hand, is not an operating system function but a C library call. It can be implemented in many ways. It is likely that your call to malloc resulted in a page request from the operating system. Then malloc would have decided to give you a pointer to a single byte inside that page. When you wrote to the memory from the location you were given you were just writing in a "page" that the operating system had granted your program, and thus the operating system will not see any wrong doing.
The real problems, of course, will begin when you continue to call malloc to allocate more memory: it will eventually return pointers to the locations you just wrote over. Writing past the end of your allocation into memory that is legal from the operating system's perspective, but that another part of the program will also be using, is called a "buffer overflow".
If you continue to learn about this subject you'll begin to understand how programs can be exploited using such "buffer overflow" techniques - even to the point where you begin to write assembly language instructions directly into areas of memory that will be executed by another part of your program.
When you get to this stage you'll have gained much wisdom. But please be ethical and do not use it to wreak havoc in the universe!
PS when I say "operating system" above I really mean "operating system in conjunction with privileged CPU access". The CPU and MMU (memory management unit) triggers particular interrupts or callbacks into the operating system if a process attempts to use a page that has not been allocated to that process. The operating system then cleanly shuts down your application and allows the system to continue functioning. In the old days, before memory management units and privileged CPU instructions, you could practically write anywhere in memory at any time - and then your system would be totally at the mercy of the consequences of that memory write!
No. You get undefined behavior. That means anything can happen, from it crashing (yay) to it "working" (boo), to it reformatting your hard drive and filling it with text files that say "UB, UB, UB..." (wat).
There's no point in wondering what happens after that, because it depends on your compiler, platform, environment, time of day, favorite soda, etc., all of which can do whatever they want as (in)consistently as they want.
More specifically, using any memory you have not allocated is undefined behavior. You get one byte from malloc(1), that's it.
When you ask malloc for 1 byte, it will probably get 1 page (typically 4KB) from the operating system. This page will be allocated to the calling process so as long as you don't go out of the page boundary, you won't have any problems.
Note, however, that it is definitely undefined behavior!
Consider the following (hypothetical) example of what might happen when using malloc:
malloc(1)
If malloc is internally out of memory, it will ask the operating system for some more. It will typically receive a page. Say it's 4KB in size with addresses starting at 0x1000.
Your call returns giving you the address 0x1000 to use. Since you asked for 1 byte, it is defined behavior if you only use the address 0x1000.
Since the operating system has just allocated 4KB of memory to your process starting at address 0x1000, it will not complain if you read/write something from/to addresses 0x1000-0x1fff. So you can happily do so but it is undefined behavior.
Let's say you do another malloc(1)
Now malloc still has some memory left so it doesn't need to ask the operating system for more. It will probably return the address 0x1001.
If you had written more than 1 byte using the address given by the first malloc, you will get into trouble when you use the address from the second malloc, because you will overwrite that data.
So the point is that you definitely get 1 byte from malloc, but it might be that malloc internally has more memory allocated to your process.
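A small experiment along those lines (the 0x1000/0x1001 addresses above are purely illustrative; a real allocator aligns blocks and adds bookkeeping, so the gap between two 1-byte allocations is usually 16 or 32 bytes, and the actual numbers vary by platform):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Two 1-byte allocations usually land close together in the same heap
     * chunk; the gap reflects the allocator's alignment and bookkeeping,
     * not the single byte you asked for. */
    int main(void)
    {
        char *a = malloc(1);
        char *b = malloc(1);
        if (!a || !b)
            return 1;
        printf("a = %p\nb = %p\ngap = %llu bytes\n",
               (void *)a, (void *)b,
               (unsigned long long)((uintptr_t)b - (uintptr_t)a));
        free(a);
        free(b);
        return 0;
    }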
No. It means that your program behaves badly. It writes to a memory location that it does not own.
You get undefined behavior - anything can happen. Don't do it and don't speculate about whether it works. Maybe it corrupts memory and you don't see it immediately. Only access memory within the allocated block size.
You may be able to keep using memory past the end of your allocation until you run into protected memory or some other boundary, at which point your application will most likely crash for accessing memory it does not own.
So many responses and only one that gives the right explanation. While the page size, buffer overflow and undefined behaviour stories are true (and important), they do not exactly answer the original question. In fact any sane malloc implementation will allocate at least the alignment requirement of an int or a void *. Why? Because if it allocated only 1 byte, then the next chunk of memory wouldn't be aligned any more. There is always some bookkeeping data around your allocated blocks, and these data structures are nearly always aligned to some multiple of 4. While some architectures can access words at unaligned addresses (x86), they incur some penalty for doing so, so allocator implementers avoid that. Even in slab allocators there is no point in having a 1-byte pool, as such tiny allocations are rare in practice. So it is very likely that there are 4 or 8 bytes of real room behind your malloc'd byte (this doesn't mean you may use that 'feature'; it's still wrong).
EDIT: Besides, most mallocs reserve bigger chunks than asked for to avoid too many copy operations when calling realloc. As a test you can use realloc in a loop with a growing allocation size and compare the returned pointer: you will see that it changes only after a certain threshold, as in the sketch below.
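A possible version of that test (the threshold at which the pointer moves is entirely allocator-specific):

    #include <stdio.h>
    #include <stdlib.h>

    /* Grow a block with realloc and report whenever the returned pointer
     * actually changes.  On many allocators it stays put for a while because
     * the original block was over-sized or has free room behind it. */
    int main(void)
    {
        void *p = malloc(1);
        if (!p)
            return 1;
        for (size_t n = 2; n <= 4096; n *= 2) {
            void *q = realloc(p, n);
            if (!q)
                break;
            if (q != p)
                printf("block moved when grown to %zu bytes\n", n);
            p = q;
        }
        free(p);
        return 0;
    }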
You just got lucky there. You are writing to locations you don't own, which leads to undefined behavior.
On most platforms you cannot allocate just one byte. There is often also a bit of housekeeping done by malloc to remember the amount of allocated memory. This means that you usually "allocate" memory rounded up to the next 4 or 8 bytes. But this is not defined behaviour.
If you use a few bytes more you will very likely get an access violation.
To answer your second question, the standard specifically mandates that malloc(0) be legal. Returned value is implementation-dependent, and can be either NULL or a regular memory address. In either case, you can (and should) legally call free on the return value when done. Even when non-NULL, you must not access data at that address.
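A small illustration of both permitted behaviours (the returned value may be NULL or a unique pointer, and freeing it is legal either way):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        void *p = malloc(0);   /* may be NULL or a unique, non-dereferenceable pointer */
        printf("malloc(0) returned %p\n", p);
        free(p);               /* fine in both cases; free(NULL) is a no-op */
        return 0;
    }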
malloc allocates the amount of memory you ask for on the heap and then returns a pointer to void (void *) that can be cast to whatever you want.
It is the responsibility of the programmer to use only the memory that has been allocated.
Writing (and even reading, in a protected environment) where you are not supposed to can cause all sorts of random problems at execution time. If you are lucky your program crashes immediately with an exception and you can quite easily find the bug and fix it. If you aren't lucky it will crash randomly or produce unexpected behavior.
Per Murphy's Law, "Anything that can go wrong, will go wrong", and as a corollary, "It will go wrong at the worst possible time, producing the largest amount of damage".
It is sadly true. The only way to prevent it is to use a language in which you cannot actually do something like that.
Modern languages do not allow the programmer to write to memory where he/she is not supposed to (at least doing standard programming). That is how Java got a lot of its traction. I prefer C++ to C. You can still do damage using pointers, but it is less likely. That is the reason why smart pointers are so popular.
In order to fix these kinds of problems, a debug version of the malloc library can be handy. You need to call a check function periodically to sense whether the memory was corrupted.
When I used to work intensively on C/C++ at work, we used Rational Purify, which in practice replaces the standard malloc (new in C++) and free (delete in C++) and is able to return quite accurate reports on where the program did something it was not supposed to. However, you will never be 100% sure that you do not have any error in your code. If you have a condition that happens extremely rarely, you may not hit it when you execute the program. It will eventually happen in production, on the busiest day, on the most sensitive data (according to Murphy's Law ;-)
It could be that you're in Debug mode, where a call to malloc will actually call _malloc_dbg. The debug version will allocate more space than you have requested to cope with buffer overflows. I guess that if you ran this in Release mode you might (hopefully) get a crash instead.
You should use the new and delete operators in C++, and a safe (smart or bounds-checked) pointer so that operations cannot run past the end of the allocated array.
There is no "C runtime". C is glorified assembler. It will happily let you walk all over the address space and do whatever you want with it, which is why it's the language of choice for writing OS kernels. Your program is an example of a heap corruption bug, which is a common security vulnerability. If you wrote a long enough string to that address, you'd eventually overrun the end of the heap and get a segmentation fault, but not before you overwrote a lot of other important things first.
When malloc() doesn't have enough free memory in its reserve pool to satisfy an allocation, it grabs pages from the kernel in chunks of at least 4 kB, and often much larger, so you're probably writing into reserved but un-malloc()ed space when you initially exceed the bounds of your allocation, which is why your test case always works. Actually honoring allocation addresses and sizes is completely voluntary: you could assign a random address to a pointer without calling malloc() at all and start using it as a character string, and as long as that random address happened to be in a writable memory segment like the heap or the stack, everything would seem to work, at least until you tried to use whatever memory you were corrupting by doing so.
strcpy() doesn't check whether the memory it's writing to is allocated. It just takes the destination address and writes the source character by character until it reaches the '\0'. So, if the destination buffer is smaller than the source, you simply write over whatever memory follows it. This is a dangerous bug because it is very hard to track down.
puts() writes the string until it reaches '\0'.
My guess is that malloc(0) simply returns NULL and does not cause a run-time error.
My answer is in response to "Why does printf not segfault or produce garbage?"
From
The C Programming Language by Dennis Ritchie & Brian Kernighan:
typedef long Align;           /* for alignment to long boundary */

union header {                /* block header */
    struct {
        union header *ptr;    /* next block if on free list */
        unsigned size;        /* size of this block */
    } s;
    Align x;                  /* force alignment of blocks */
};

typedef union header Header;
The Align field is never used; it just forces each header to be aligned on a worst-case boundary.
In malloc, the requested size in characters is rounded up to the proper number of header-sized units; the block that will be allocated contains one more unit, for the header itself, and this is the value recorded in the size field of the header. The pointer returned by malloc points at the free space, not at the header itself. The user can do anything with the space requested, but if anything is written outside of the allocated space the list is likely to be scrambled.
 -----------------------------------------
 |       |  SIZE  |                       |
 -----------------------------------------
     |                 |
     |                 +--- address returned to user
     +--- points to
          next free
          block

 -> a block returned by malloc
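For reference, a sketch of the rounding step described above (Header is the union quoted earlier; the expression is essentially the one used in the book, where nbytes is the requested size and the extra unit holds the header):

    /* round a request of nbytes up to whole header-sized units,
       plus one more unit for the header itself */
    unsigned nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;

On a typical 64-bit machine sizeof(Header) is 16, so even a 1-byte request sets aside 32 bytes: one unit of payload and one for the header.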
In the statement
char* test = malloc(1);
malloc() will try to find the requested number of consecutive free bytes in the heap section and, if they are available, return their address, as below:
 ----------------------------------------------------------------
 |  free memory  |  memory of the size allocated for the user  |
 ----------------------------------------------------------------
                 ^
                 0x100 (assume this is the address returned by malloc)
                 test
So when malloc(1) is executed it doesn't allocate just 1 byte; it allocates some extra bytes to maintain the heap structure shown above. On this kind of implementation you can find out how much memory was actually set aside for your 1-byte request by printing test[-1], because the slot just before the returned block contains the size:
char* test = malloc(1);
printf("memory allocated in bytes = %d\n", test[-1]);
If the size passed to realloc is zero and ptr is not NULL, the call is traditionally treated as equivalent to free(ptr).

How does the MMU detect a double free of a pointer?

How does the memory management unit (MMU) detect the double free of a pointer?
I know that it's good practice to set the pointer to NULL just after freeing it, but suppose the programmer does not do that. Is there any MMU mechanism to detect it?
The MMU has nothing to do with it. If you free a pointer allocated with malloc twice you will probably corrupt the C runtime heap. The heap (not the MMU) can in principle protect itself against such things, but most don't. Please note that this has nothing to do with the operating system - neither malloc() nor free() are system calls.
How does the memory management unit (MMU) detect the double free of a pointer?
The MMU just maps virtual address space to physical memory; it doesn't know anything about how the heap is organized or how allocation works. That is the operating system's/allocator's job.
How does the OS detect the double free then? What's the mechanism?
It walks the list/bitmap/... of allocated blocks, sees that there's no allocated block with the address you passed to it, so it detects that it's a double free.
However, if that block has already been re-allocated, the allocator finds it and "correctly" frees it => but now the code that was using the re-allocated block will go nuts, since memory it had correctly acquired and never released has become unallocated.
If the allocator protects the unallocated memory marking it as no-read and no-write/removing it from the committed pages of the virtual address space the program will die as soon as that memory is accessed again (but the code that apparently caused the crash will be actually innocent, since it didn't do anything wrong).
Otherwise, the application may still work for some time, until that memory block will be given to some other piece of code that requested some memory. At that point, two pieces of the same application will try to work on the same block of memory, with all the mess that can originate from this.
(Thanks to Pascal Cuoq for pointing out my error.)
No, there is no MMU mechanism to detect it. It is common for calling free on an already-freed address to crash the program, as the implementation of free does something unexpected and causes a segmentation fault.
Running valgrind is a good way of checking for memory management problems, such as double freeing a pointer.
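For instance, valgrind's memcheck flags the second free in a program like this as an invalid free, showing where the block was allocated and where it was first freed:

    #include <stdlib.h>

    /* Minimal double free; run the compiled binary under valgrind, e.g.
     *   valgrind ./a.out
     * and memcheck reports an "Invalid free()" on the second call. */
    int main(void)
    {
        char *p = malloc(16);
        free(p);
        free(p);     /* double free: undefined behavior, flagged by memcheck */
        return 0;
    }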
Setting it to NULL isn't actually a good practice; it hides bugs, particularly double free()s. Check the OS memory map and set the pointer to a value that is never mapped instead; something like 0xfeeefeee or 0xdeadbeef is usually good.
You can diagnose double free()s with a debug allocator. Most any decent CRT has one.
