I need a real C program to exemplify memory safety concepts. The idea is to inject or delete statements in a program that uses malloc in order to create a memory problem. The modified version of the program must lead to a memory fault at runtime. The problem should be detectable by Valgrind, and thus be related to dynamically allocated memory (not stack memory). It should also have a pre-made test case or test input that triggers the problem.
I don't understand how to create a dynamically allocated memory fault.
Can you present an example and explain a modification to a program that causes a memory fault when the program is executed with a given input?
I'll give you a few examples.
#include <stdlib.h>

int main(void)
{
    int *pi1 = malloc(10 * sizeof(int));
    if (pi1[5]) // ERROR: read of uninitialized heap memory, see 4.2.2 in the manual
        ;
    free(pi1);

    int *pi2 = malloc(10 * sizeof(int));
    free(pi2);
    if (pi2[5]) // ERROR: read after free, a variation of 4.2.1 in the manual
        ;

    int *pi3 = (int *)0x500000000000U;
    if (*pi3) // ERROR and probably a crash: wild pointer dereference, see 4.2.1 in the manual
        ;
}
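To actually see the reports, compile with debug information and run the binary under Valgrind; the -g flag makes Valgrind's error and allocation stacks point at source lines (the file name here is just a placeholder):
gcc -g example.c -o example
valgrind ./example
No special input is needed for these snippets; each error fires on every run, which satisfies your "pre-made test case" requirement.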
Clearly these are trivial examples. In more involved real-world problems you should be aware that the 'uninitialized' nature of memory is transitive: Valgrind does not emit an error message until the use of the uninitialized memory actually affects the behaviour of the software.
For example, you could have:
1. Structure s1 is allocated with malloc.
2. All fields of s1 get initialized except f1.
3. s1 gets copied into s2. No error emitted.
4. s2 gets copied into s3. No error emitted.
5. A read is done on s3.f1. Now Valgrind emits an error, giving the stack trace at this point and the allocation stack trace from step 1.
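Here is that chain as a sketch (struct and field names invented for illustration). Valgrind stays silent through both copies and only reports at the final read, with the allocation stack pointing back at the malloc:

#include <stdio.h>
#include <stdlib.h>

struct pair { int f1; int f2; };

int main(void)
{
    struct pair *s1 = malloc(sizeof *s1);   /* step 1: allocate s1 */
    s1->f2 = 7;                             /* step 2: initialize everything except f1 */
    struct pair s2 = *s1;                   /* step 3: copy, no error emitted */
    struct pair s3 = s2;                    /* step 4: copy, no error emitted */
    if (s3.f1 > 0)                          /* step 5: Valgrind reports here */
        printf("positive\n");
    free(s1);
    return 0;
}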
When I try the code below, it works fine. Am I missing something?
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p;
    p = malloc(sizeof(int));               // room for exactly one int
    printf("size of p=%zu\n", sizeof(p));  // size of the pointer, not of the buffer
    p[500] = 999999;                       // write far outside the allocation
    printf("p[500]=%d\n", p[500]);
    return 0;
}
I tried it with malloc(0*sizeof(int)) and other sizes, and it works just fine. The program only crashes when I don't call malloc at all. So even if I allocate zero bytes for the array p, it still stores values properly. Why am I even bothering with malloc, then?
It might appear to work fine, but it isn't safe at all. By writing outside the allocated block of memory you are overwriting data you shouldn't touch. This is one of the greatest causes of segfaults and other memory errors, and the fact that it appears to work in a short program like this is exactly what makes the root cause so difficult to hunt down.
Read this article, in particular the part on memory corruption, to begin understanding the problem.
Valgrind is an excellent tool for analysing memory errors such as the one you describe.
@David made a good comment. Compare the results of running your code with running the following code. Note that the latter results in a runtime error (with pretty much no useful output!) on ideone.com, whereas the former succeeds as you experienced.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p;
    p = malloc(sizeof(int));
    printf("size of p=%zu\n", sizeof(p));
    p[500] = 999999;                  // out of bounds, but close enough to appear to work
    printf("p[500]=%d\n", p[500]);
    p[500000] = 42;                   // far out of bounds: likely off the mapped pages
    printf("p[500000]=%d\n", p[500000]);
    return 0;
}
If you don't allocate memory, p contains garbage, so writing through it will likely fail. Once you have made a valid malloc call, p points to a valid memory location and you can write to it. You are overwriting memory that you shouldn't write to, but nobody is going to hold your hand and tell you about it. If you run your program under a memory debugger such as Valgrind, it will tell you.
Welcome to C.
Writing past the end of your memory is Undefined Behaviour™, which means that anything can happen, including your program operating as if what you just did was perfectly legal. The reason your program runs as if you had done malloc(501*sizeof(int)) is completely implementation-specific, and can indeed depend on anything, including the phase of the moon.
This is because p will be assigned some address no matter what size you pass to malloc(). With a zero size you would be referencing memory that hasn't actually been allocated, but it may fall in a location that doesn't crash the program, even though the behavior is undefined.
Now if you do not call malloc() at all, p points to a garbage location, and trying to access that is likely to crash the program.
I tried it with malloc(0*sizeof(int))
According to C99, if the size passed to malloc is 0, the C runtime can return either a NULL pointer, or a pointer that behaves as if the request was for a non-zero allocation, except that the returned pointer must not be dereferenced. So it is implementation-defined (e.g. some implementations return a zero-length buffer). In your case you do not get a NULL pointer back, but you are using a pointer you should not be using. With a different runtime you could get a NULL pointer back.
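A minimal sketch of that implementation-defined split (which branch you land in depends entirely on your C library):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p = malloc(0);  /* may be NULL or a unique pointer you must not dereference */
    if (p == NULL)
        printf("this runtime returns NULL for malloc(0)\n");
    else
        printf("this runtime returns a non-NULL pointer for malloc(0)\n");
    free(p);  /* safe either way: free(NULL) is a no-op */
    return 0;
}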
When you call malloc() a small chunk of memory is carved out of a larger page for you.
malloc(sizeof(int));
This does not actually allocate just 4 bytes on a 32-bit machine: the allocator pads the request up to a minimum chunk size, plus the heap metadata used to track the chunk through its lifetime (chunks are placed in bins based on their size and marked in-use or free by the allocator). See http://en.wikipedia.org/wiki/Malloc or, more specifically, http://en.wikipedia.org/wiki/Malloc#dlmalloc_and_its_derivatives if you're testing this on Linux.
So writing beyond the bounds of your chunk doesn't necessarily mean you are going to crash. At p[500] you are not writing outside the bounds of the page allocated for that initial chunk, so you are technically writing to a valid mapped address. Welcome to memory corruption.
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=heap+overflows
Our CheckPointer tool can detect this error. It knows that the allocation for p was a chunk of 4 bytes, and thus that when the assignment is made, it lands outside the area for which p was allocated. It will tell you that the p[500] assignment is wrong.
I've written a simple C program with two data structures implemented as ADTs, so I dynamically allocate the memory for them.
Everything was working fine, until I decided to add an int value inside a struct: nothing dynamically allocated, classic plain old simple static memory allocation. But since I added it, I've started getting a segfault in a pretty safe function that shouldn't segfault at all.
I suspected a memory allocation error, so I tried not freeing and reusing a pointer variable I was using, and used another variable instead; with that change the program ran fine.
Pissed off by all the times I've had to deal with this kind of error, I re-enabled the free I was talking about before, recompiled, and made a run with valgrind.
To my surprise, there was absolutely no memory leak, no segmentation fault, no interruption of any kind, just a warning about Conditional jump or move depends on uninitialised value(s). But that looked like wanted behavior (if (pointer == NULL) { }), so I ran the executable directly from the command line and, again, everything went fine. So the situation is this:
Program without the new int value in the struct:
Compile : check
Runs : check
Valgrind analysis: no memory leaks, just the warning
Debug (gdb) : check
Program with the new int value in the struct:
Compile : check
Runs : check
Valgrind analysis: no memory leaks, just the warning
Debug (gdb) : Segfault
So I think this is the opposite of a Heisenbug: a bug that shows itself only, and absolutely only, when debugging. How can I try to fix this?
OK, thanks to @weather-vane and @some-programmer-dude I've noticed that I effectively wasn't initializing the variable valgrind was complaining about. I had misunderstood the valgrind warning: I was reading it as You should not use an if to check whether variables are NULL, when it actually means the value being tested is itself uninitialised.
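For reference, a minimal sketch of the pattern Valgrind was flagging (struct and field names invented for illustration). The warning means the branch condition reads an uninitialized value, not that NULL checks are wrong:

#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

int main(void)
{
    struct node *n = malloc(sizeof *n);
    n->value = 42;           /* value is set, next never is */
    if (n->next == NULL) {   /* Valgrind: conditional jump depends on
                                uninitialised value(s) */
        /* ... */
    }
    free(n);
    return 0;
}

Setting next explicitly (or allocating with calloc) silences the warning.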
I wrote a simple program to test the contents of dynamically allocated memory after free(), as below. (I know we should not access memory after free; I wrote this to check what will be in the memory after free.)
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    int *p = (int *)malloc(sizeof(int));
    *p = 3;
    printf("%d\n", *p);
    free(p);
    printf("%d\n", *p);  // use after free: undefined behavior
}
output:
3
0
I thought it would print either junk values or crash at the second print statement. But it always prints 0.
1) Does this behaviour depend on the compiler?
2) If I try to deallocate the memory twice using free(), a core dump is generated. The man pages say the program behaviour is abnormal, but I always get a core dump. Does this behaviour also depend on the compiler?
Does free() remove the data stored in the dynamically allocated memory?
No. free just frees the allocated space pointed to by its argument. It accepts a pointer to a previously allocated memory chunk and frees it, that is, adds it to the list of free memory chunks that may be re-allocated.
The freed memory is not cleared/erased in any manner.
You should not dereference a freed (dangling) pointer. The standard says:
7.22.3.3 The free function:
[...] Otherwise, if the argument does not match a pointer earlier returned by a memory management
function, or if the space has been deallocated by a call to free or realloc, the behavior is undefined.
The above quote also states that freeing a pointer twice invokes undefined behavior. Once UB is in action, you may get expected or unexpected results; there may be a program crash or a core dump.
As described in gnu website
Freeing a block alters the contents of the block. Do not expect to find any data (such as a pointer to the next block in a chain of blocks) in the block after freeing it.
So, accessing a memory location after freeing it results in undefined behaviour, even though free doesn't change the data at that location. You may be getting 0 in this example; you might as well get garbage in another.
And if you try to deallocate the memory twice, on the second attempt you are trying to free memory which is no longer allocated; that's why you are getting the core dump.
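A minimal sketch of the double-free case; the behavior is undefined, but many modern allocators detect it and abort, which is the core dump you are seeing:

#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof(int));
    free(p);   /* fine */
    free(p);   /* undefined behavior: typically detected, program aborts */
    return 0;
}

Setting p = NULL right after the first free makes an accidental second free(p) harmless, because free(NULL) is a no-op.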
In addition to all the above explanations for use-after-free semantics, you really may want to investigate the life-saver for every C programmer: valgrind. It will automatically detect such bugs in your code and generally save your behind in the real world.
Coverity and all the other static code checkers are also great, but valgrind is awesome.
As far as standard C is concerned, it’s just not specified, because it is not observable. As soon as you free memory, all pointers pointing there are invalid, so there is no way to inspect that memory.*)
Even if you happen to have some standard C library documenting a certain behaviour, your compiler may still assume pointers aren’t reused after being passed to free, so you still cannot expect any particular behaviour.
*) I think, even reading these pointers is UB, not only dereferencing, but this doesn’t matter here anyway.
Earlier I encountered a problem with dynamic memory in C (Visual Studio).
I had a more or less working program that threw a run-time error when freeing one of the buffers. It was clearly memory corruption: the program wrote past the end of the buffer.
My problem is that it was very time-consuming to track down. The error was thrown long after the corruption, and I had to manually debug the entire run to find where the end of the buffer was overwritten.
Is there any tool or technique to assist in tracking down this issue? If the program had crashed immediately, I would have found the problem a lot faster...
An example of the issue:
#include <stdlib.h>

int main(void)
{
    int *pNum = malloc(10 * sizeof(int));

    for (int i = 0; i < 13; i++)  // BUG: writes 13 ints into a 10-int buffer
    {
        pNum[i] = 3;
    }

    free(pNum);  // the runtime error surfaces here, not at the overrun
    return 0;
}
I use "data breakpoints" for that. In your case, when the program crashes, it might first complain like this:
Heap block at 00397848 modified at 0039789C past requested size of 4c
Then start your program again and set a data breakpoint at address 0039789C. When the code writes to that address, execution will stop. It often happens that I find the bug immediately at this point.
If your program allocates and deallocates memory repeatedly, and some unrelated allocation happens to land at this exact address, just disable deallocations so freed blocks are not reused:
_CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_DELAY_FREE_MEM_DF);  // keep freed blocks out of circulation
I use pageheap. This is a tool from Microsoft that changes how the allocator works. With pageheap on, when you call malloc, the allocation is rounded up to the nearest page (a block of memory), and an additional page of virtual memory, set to no-read/no-write, is placed after it. The dynamic memory you allocate is aligned so that the end of your buffer sits right against the guard page. That way, if you go over the edge of your buffer, even by a single byte, the debugger can catch it immediately.
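If you have the Debugging Tools for Windows installed, page heap is usually switched on per executable with the gflags utility; a sketch, with the executable name as a placeholder (double-check the exact flags against the gflags documentation):
gflags /p /enable myprogram.exe /full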
Is there any tool\ way to assist in tracking down this issue?
Yes, that's precisely the type of error which static code analysers try to locate. e.g. splint/PC-Lint
Here's a list of such tools:
http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis
Edit: Trying out splint on your code snippet, I get the following warning:
main.c:9:2: Possible out-of-bounds store: pnum[i]
Presumably this warning would have assisted you.
Our CheckPointer tool can help find memory management errors. It works with GCC 3/4 and Microsoft dialects of C.
Many dynamic checkers only catch accesses outside of an object, and then only if the object is heap-allocated. CheckPointer will also find memory access errors inside a heap-allocated object: it is illegal to access past the end of a field in a struct regardless of the field type, and most dynamic checkers cannot detect such errors. It will also find accesses off the edge of locals.
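To make the distinction concrete, here is a sketch of an overrun that stays inside the malloc'd block (struct and string invented for illustration). A bounds-only heap checker sees nothing wrong, because every byte written belongs to the allocation, while a field-level checker can flag the overrun of name into id:

#include <stdlib.h>
#include <string.h>

struct record {
    char name[8];
    int  id;
};

int main(void)
{
    struct record *r = malloc(sizeof *r);
    r->id = 1;
    /* Writes 11 bytes into an 8-byte field: clobbers id, yet stays
       inside the malloc'd block, so allocation-bounds checkers are silent. */
    strcpy(r->name, "0123456789");
    free(r);
    return 0;
}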
Does the following code segfault at array[10] = 22 or at array[9999] = 22?
I'm just trying to figure out whether the whole loop executes before it segfaults (in the C language).
#include <stdio.h>
int main(){
int array[10];
int i;
for(i=0; i<9999; ++i){
array[i] = 22;
}
return 0;
}
It depends...
If the memory after array[9] happens to be writable, nothing visible may happen until, of course, you reach a region of memory that is protected or unmapped.
Try out the code and add:
printf("%d\n",i);
in the loop and you will see when it crashes and burns.
I get various results, ranging from 596 to 2380.
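For reference, the suggestion spelled out as a complete program (still undefined behaviour, so the last number printed varies between machines and even between runs):

#include <stdio.h>

int main(void)
{
    int array[10];
    int i;
    for (i = 0; i < 9999; ++i) {
        printf("%d\n", i);  /* the last number printed shows how far the overrun got */
        array[i] = 22;
    }
    return 0;
}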
Use a debugger?
$ gcc -g seg.c -o so_segfault
$ gdb so_segfault
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i486-linux-gnu"...
(gdb) run
Starting program: /.../so_segfault
Program received signal SIGSEGV, Segmentation fault.
0x080483b1 in main () at seg.c:7
7 array[i] = 22;
(gdb) print i
$1 = 2406
(gdb)
In fact, if you run this again, you will see that the segfault does not always occur at the same value of i. What is sure is that it happens with i >= 10, but there is no way to predict the value of i at which it will crash, because this is not deterministic: it depends on how the memory is laid out. If the pages beyond the array happen to be mapped and writable up to, say, array[222], it will go on until i = 222, but it might just as well crash at any other value of i >= 10.
The answer is maybe. The C language says nothing about what should happen in this case. It is undefined behavior. The compiler is not required to detect the problem, do anything to handle the problem, terminate the program or anything else. And so it does nothing.
You write to memory that's not yours, and in practice one of three things may happen:
You might be lucky, and just get a segfault. This happens if you hit an address that is not allocated to your process. The OS will detect this, and throw an error at you.
You might hit memory that's genuinely unused, in which case no error will occur right away. But if the memory is allocated and used later, it will overwrite your data, and if you expect it to still be there by then, you'll get some nice delayed-action errors.
You might hit data that's actually used for something else already. You overwrite that, and sometime soon, when the original data is needed, it'll read your data instead, and unpredictable errors will ensue.
Writing out of bounds: Just don't do it. The C language won't do anything to tell you when it happens, so you have to keep an eye on it yourself.
When and if your code crashes is not deterministic. It'll depend on what platform you're running the code on.
array is a stack variable, so your compiler is going to reserve 10 * sizeof(int) bytes on the stack for it. Depending on how the compiler arranges other local variables and which way your stack grows, i may come right after array. If you follow Daniel's suggestion and put the printf statement in, you may notice an interesting effect. On my platform, when i = 10, array[10] = 22 clobbers i and the next assignment is to array[23].
A segmentation violation occurs when user code tries to touch a page that it does not have access to. In this case, you'll get one if your stack is small enough that 9999 iterations run off the end of it.
If you had allocated array on the heap instead (by using malloc()), then you'd get a SIGSEGV when you run off the end of the mapped pages. Even a 10-byte allocation comes out of a whole mapped page, and page sizes vary by platform. Note that some malloc debuggers can flag an array out-of-bounds access earlier, but you won't get the SIGSEGV unless the hardware gets involved when you run off the end of the page.
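Here is a heap variant of the same experiment for comparison (still undefined behaviour; the bound is deliberately huge so the writes eventually march past the mapped heap pages, and how far they get depends on the allocator):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *array = malloc(10 * sizeof(int));
    long i;
    for (i = 0; i < 100000000; ++i) {  /* far beyond the 10-int block */
        array[i] = 22;
        printf("%ld\n", i);
    }
    free(array);  /* never reached if a SIGSEGV fires first */
    return 0;
}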
Where your code will segfault depends on what compiler you're using, luck, and other linking details of the program in question. You will most likely not segfault for i == 10: even though that is outside your array, you will almost certainly still have memory mapped at that location. As you keep going beyond your array bounds, however, you will eventually leave the memory allocated to your process and then take a segfault.
However, if you write beyond an array boundary, you will likely overwrite other automatic variables in your same stack frame. If any of these variables are pointers (or array indexes used later), then when you reference these now-corrupted values, you'll possibly take a segfault. (Depending on the exact value corrupted to and whether or not you're now going to reference memory that is not allocated to your process.)
This is not terribly deterministic.
A segmentation fault happens when a process accesses memory outside what the OS has mapped for it, and this is not easily predicted. When i == 10 the write is outside the array, but it might still be inside the process's memory. That depends on how the process's memory was laid out, which there is normally no way of knowing (it is up to the OS's memory manager). So the segfault might happen at any i from 10 to 9998, or not at all.
I suggest using GDB to investigate such problems: http://www.gnu.org/software/gdb/documentation/
More generally, you can figure out where the segmentation fault occurs in Linux systems by using a debugger. For example, to use gdb, compile your program with debugging symbols by using the -g flag, e.g.:
gcc -g segfault.c -o segfault
Then, invoke gdb with your program and any arguments using the --args flag, e.g.:
gdb --args ./segfault myarg1 myarg2 ...
Then, when the debugger starts up, type run, and your program should run until it receives SIGSEGV, and should tell you where it was in the source code when it received that signal.