I wrote a simple C program to simulate a memory leak, but it crashes when I try to run it.
#include <stdio.h>
#include <stdlib.h>
void memory_leak(void);
int main()
{
    memory_leak();
    return EXIT_SUCCESS;
}

void memory_leak()
{
    int i = 100;
    memory_leak();
}
I use MinGW gcc compiler.
You are producing a stack overflow - by calling your function memory_leak recursively.
Your version of memory_leak allocates a local ("stack") variable that will be released/destroyed/deallocated when the function exits.
To actually create a memory leak, you need to allocate memory from the heap (e.g. using new or malloc).
void* memory_leak()
{
    return malloc(10);
}
[Don't unconditionally call memory_leak within memory_leak.]
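Putting that together, a minimal sketch of a program that really does leak (rather than overflowing the stack) might look like this; the loop count of 1000 is arbitrary:
#include <stdio.h>
#include <stdlib.h>

void* memory_leak(void)
{
    return malloc(10);   // heap allocation that is never freed
}

int main(void)
{
    int i;
    for (i = 0; i < 1000; i++)
    {
        memory_leak();   // the returned pointer is discarded, so the memory leaks
    }
    printf("Leaked %d blocks of 10 bytes each\n", 1000);
    return EXIT_SUCCESS;
}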
I have been trying to intercept calls to malloc and free, following our textbook (CSAPP book).
I have followed their exact code, and nearly the same code that I found online, and I keep getting a segmentation fault. I heard our professor say that printf itself mallocs and frees memory, so I think this happens because I am intercepting malloc and, since I use printf inside the intercepting function, it calls itself recursively.
However, I can't seem to find a solution to this problem. Our professor demonstrated that intercepting works (he didn't show us the code) and prints out information every time a malloc occurs, so I know that it's possible.
Can anyone suggest a working method?
Here is the code that I used, which gets me nothing:
mymalloc.c
#ifdef RUNTIME
// Run-time interposition of malloc and free based on
// dynamic linker's (ld-linux.so) LD_PRELOAD mechanism
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

void *malloc(size_t size) {
    static void *(*mallocp)(size_t size) = NULL;
    char *error;
    void *ptr;

    // get address of libc malloc
    if (!mallocp) {
        mallocp = dlsym(RTLD_NEXT, "malloc");
        if ((error = dlerror()) != NULL) {
            fputs(error, stderr);
            exit(EXIT_FAILURE);
        }
    }
    ptr = mallocp(size);
    printf("malloc(%d) = %p\n", (int)size, ptr);
    return ptr;
}
#endif
test.c
#include <stdio.h>
#include <stdlib.h>
int main(){
    printf("main\n");
    int* a = malloc(sizeof(int)*5);
    a[0] = 1;
    printf("end\n");
}
The result I'm getting:
$ gcc -o test test.c
$ gcc -DRUNTIME -shared -fPIC mymalloc.c -o mymalloc.so
$ LD_PRELOAD=./mymalloc.so ./test
Segmentation Fault
This is the code that I tried and got segmentation fault (from https://gist.github.com/iamben/4124829):
libint.c:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

void* malloc(size_t size)
{
    static void* (*rmalloc)(size_t) = NULL;
    void* p = NULL;

    // resolve next malloc
    if(!rmalloc) rmalloc = dlsym(RTLD_NEXT, "malloc");

    // do actual malloc
    p = rmalloc(size);

    // show statistic
    fprintf(stderr, "[MEM | malloc] Allocated: %lu bytes\n", size);

    return p;
}
str.c:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STR_LEN 128

int main(int argc, const char *argv[])
{
    char *c;
    char *str1 = "Hello ";
    char *str2 = "World";

    //allocate an empty string
    c = malloc(STR_LEN * sizeof(char));
    c[0] = 0x0;

    //and concatenate str{1,2}
    strcat(c, str1);
    strcat(c, str2);

    printf("New str: %s\n", c);
    return 0;
}
The makefile from the git repo didn't work so I manually compiled the files and got:
$ gcc -shared -fPIC libint.c -o libint.so
$ gcc -o str str.c
$ LD_PRELOAD=./libint.so ./str
Segmentation fault
I have been doing this for hours and I still get the same incorrect result, despite the fact that I copied textbook code. I would really appreciate any help!!
One way to deal with this is to turn off the printf when your routine is called recursively:
static char ACallIsInProgress = 0;

if (!ACallIsInProgress)
{
    ACallIsInProgress = 1;
    printf("malloc(%d) = %p\n", (int)size, ptr);
    ACallIsInProgress = 0;
}

return ptr;
With this, if printf calls malloc, your routine will merely call the actual malloc (via mallocp) and return without causing another printf. You will miss printing information about a call to malloc that the printf does, but that is generally tolerable when interposing is being used to study the general program, not the C library.
If you need to support multithreading, some additional work might be needed.
The printf implementation might allocate a buffer only once, the first time it is used. In that case, you can initialize a flag that turns off the printf, similar to the above, call printf once in the main routine (preferably with a format that does some real formatting work, so that printf allocates its buffer, rather than printing a plain string), and then set the flag to turn the printf call back on and leave it set for the rest of the program.
Another option is for your malloc routine not to use printf at all but to cache data in a buffer to be written later by some other routine or to write raw data to a file using write, with that data interpreted and formatted by a separate program later. Or the raw data could be written by a pipe to a program that formats and prints it and that is not using your interposed malloc.
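For reference, here is a minimal sketch of the guard-flag approach folded into the CSAPP-style wrapper from the question; the guard name in_wrapper is just illustrative, and dlerror checking and thread safety are omitted:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

void *malloc(size_t size)
{
    static void *(*mallocp)(size_t) = NULL;
    static int in_wrapper = 0;     // guards against printf's own malloc calls
    void *ptr;

    if (!mallocp)
        mallocp = dlsym(RTLD_NEXT, "malloc");   // resolve the real libc malloc

    ptr = mallocp(size);

    if (!in_wrapper) {
        in_wrapper = 1;
        printf("malloc(%zu) = %p\n", size, ptr);  // may call malloc; the flag stops recursion
        in_wrapper = 0;
    }
    return ptr;
}
Depending on your glibc version you may also need to add -ldl when building the shared object so that dlsym resolves at link time.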
I have a C program which uses malloc (it could also have been C++ with new). I would like to test my program and simulate an "out of memory" scenario.
I would strongly prefer running my program from within a bash or sh shell environment without modifying the core code.
How do I make dynamic memory allocations fail for a program run?
Seems like it could be possible using ulimit but I can't seem to find the right parameters:
$ ulimit -d 50
$ ./program_which_heap_allocates
./program_which_heap_allocates: error while loading shared libraries: libc.so.6: cannot map zero-fill pages
$ ulimit -d 51
bash: ulimit: data seg size: cannot modify limit: Operation not permitted
I'm having trouble limiting the program in such a way that dynamic linking (e.g. loading libc) can still succeed but the allocations made by my program fail.
If you are under Linux and using glibc then there are Hooks for Malloc. The hooks allow you to catch calls to malloc and make them randomly fail.
Your test suite could use an environment variable to tell the code to insert the malloc hook and which call of malloc to fail. E.g. if you set FOOBAR_FAIL_MALLOC=10 then your malloc hook would count down and let the 10th use of malloc return 0.
FOOBAR_FAIL_MALLOC=0 could simply report the numbers of mallocs in a testcase. You would then run the test once with FOOBAR_FAIL_MALLOC=0 and capture the number of mallocs involved. Then repeat for FOOBAR_FAIL_MALLOC=1 to N to test every single malloc.
That works unless, after a failure of malloc, the program performs more mallocs; then you have to think of something more complex to specify which mallocs should fail.
You could also just make the hook fail randomly. Given enough runs every malloc call would fail at some point.
Note: a C++ new should also hit the malloc hook.
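As a sketch of the idea: glibc's classic __malloc_hook interface is deprecated in recent versions, so this uses the same LD_PRELOAD/dlsym interposition shown earlier instead of a hook; the file name failmalloc.c, the FOOBAR_FAIL_MALLOC variable, and the counter logic are only illustrative:
// failmalloc.c - fail the Nth malloc, as selected by FOOBAR_FAIL_MALLOC
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

static void *(*real_malloc)(size_t) = NULL;
static long fail_at = 0;   // which malloc call should fail; 0 = never fail
static long count = 0;     // how many malloc calls have been seen

void *malloc(size_t size)
{
    if (!real_malloc) {
        real_malloc = dlsym(RTLD_NEXT, "malloc");  // resolve the real malloc once
        const char *s = getenv("FOOBAR_FAIL_MALLOC");
        if (s)
            fail_at = atol(s);
    }

    count++;
    if (fail_at > 0 && count == fail_at)
        return NULL;            // simulate out-of-memory on the Nth call

    return real_malloc(size);
}

// With FOOBAR_FAIL_MALLOC=0 (or unset) this just reports the total count at exit.
__attribute__((destructor))
static void report_mallocs(void)
{
    fprintf(stderr, "mallocs seen: %ld\n", count);
}
You might build and run it with something like (add -ldl on older glibc):
$ gcc -shared -fPIC failmalloc.c -o failmalloc.so
$ FOOBAR_FAIL_MALLOC=3 LD_PRELOAD=./failmalloc.so ./program_which_heap_allocates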
You can have your test program include the .c under test and use a #define to override calls to malloc.
For example:
prog.c:
#include <stdio.h>
#include <stdlib.h>
void *foo(int x)
{
    return malloc(x);
}
test.c:
#include <stdio.h>
#include <stdlib.h>
static char buf[100];
static int malloc_fail;
void *test_malloc(size_t n)
{
    if (malloc_fail) {
        return NULL;
    } else {
        return buf;
    }
}

#define malloc(x) test_malloc(x)
#include "prog.c"
#undef malloc

int main()
{
    void *p;

    malloc_fail = 0;
    p = foo(5);
    printf("buf=%p, p=%p\n", (void *)buf, p); // prints same value both times

    malloc_fail = 1;
    p = foo(4);
    if (p) {
        printf("buf=%p, p=%p\n", (void *)buf, p);
    } else {
        printf("p is NULL\n"); // this prints
    }

    return 0;
}
I am trying to compile the following C code on linux:
#include <stdio.h>
/////
void func1();
void func2();
//////
void func1()
{
    func2();
}

void func2()
{
    func1();
}

int main()
{
    func1(); // call to function 1
}
If I am not wrong, the program should execute infinitely, but when I compile and run it on Linux it gives a segmentation fault.
Why is this happening?
Each nested function call consumes some stack space for the arguments and the return address. In your code the nested function calls are unbounded, so they consume an unbounded amount of stack. Once the stack is exhausted, the program goes on to write return addresses outside the memory allocated to the process and crashes.
Depending on the compiler, turning on optimizations might help because of tail call optimization.
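To see roughly how deep the calls get before the crash, you can print a depth counter from the recursive function; the last value printed before the segmentation fault approximates the point at which the stack was exhausted (this is only an illustrative experiment, and the exact depth depends on the stack size limit and the frame size):
#include <stdio.h>

void recurse(unsigned long depth)
{
    // stderr is unbuffered, so the last line printed survives the crash
    fprintf(stderr, "depth = %lu\n", depth);
    recurse(depth + 1);
}

int main(void)
{
    recurse(1);
    return 0;
}
Compile this without optimization; with -O2 the compiler may turn the tail call into a loop (the tail call optimization mentioned above) and the program really will run forever.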
The behaviour you are experiencing is called a stack overflow. This means the call stack contained too many items and overflowed (there was no space left on it to continue execution), and the program crashed with SIGSEGV. There is no exit condition in the recursion, so it was inevitable that this would happen.
I am using clang static analysis under Xcode 6.4 (6E35b), and getting a false positive warning about a potential memory leak. I do explicitly free the memory in question, but the freeing happens in a different compilation unit. Here is my MWE:
file2.c: Performs the actual freeing.
#include <stdlib.h>
void my_free(const void* p) {
free((void*) p);
}
file1.c: Allocates memory and explicitly frees it through an external function.
#include <stdlib.h>
void my_free(const void* p);
int main(int argc, char* argv[]) {
void* data = malloc(1);
if(data) my_free(data);
return 0; /* <-- "Potential leak of memory pointed to by 'data'" */
}
When I define my_free() in the same compilation unit as its invocation, no warning is generated, but of course I need to invoke my_free() from a large number of different source files.
I have read through FAQ and How to Deal with Common False Positives, but it does not address my situation. What can I do to assure clang that I really am freeing the memory in question?
In case the version information is relevant:
% clang --version
Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
One way to fix that would be to add code specific to the analyser in your header file:
#ifdef __clang_analyzer__
#define my_free free
#endif
This will make the static analyser think you're using the classic free function and stop complaining.
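In context, the header that declares my_free might look something like this; the file name my_free.h and include guard are hypothetical:
/* my_free.h - header name is hypothetical */
#ifndef MY_FREE_H_
#define MY_FREE_H_

#include <stdlib.h>

#ifdef __clang_analyzer__
/* Let the analyser treat my_free as the classic free, so it sees the release. */
#define my_free free
#else
void my_free(const void* p);
#endif

#endif /* MY_FREE_H_ */
Guarding the real prototype with #else matters: if the macro were active and the prototype still compiled, the analyser build would see a declaration of free with a conflicting signature.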
I am using a third party library which apparently has a memory leak that we first discovered when upgrading from Visual Studio 2008 (VC9.0) to Visual Studio 2015 (VC14.0). On Windows I load the library at run-time using LoadLibrary and when done using it I unload it using FreeLibrary. When compiling and linking with VC9.0 all memory allocated by the library gets freed on FreeLibrary while using VC14.0 some memory is never freed. The memory profile for my test program below can be seen here: http://imgur.com/a/Hmn1S.
Why is the behavior different for VC9.0 and VC14.0? And can one do anything to avoid the leak without changing the source of the library, like mimic the behavior of VC9.0?
The only thing I could find here on SO is this: Memory leaks on DLL unload, which hasn't really helped me, though one answer hints at a hacky solution.
I have made a minimal working example to show that it is not specific to the library. First I create a small library in C with a function that allocates some memory and never deallocates it:
leaklib.h:
#ifndef LEAKLIB_H_
#define LEAKLIB_H_
__declspec( dllexport ) void leak_memory(int memory_size);
#endif
leaklib.c:
#include "leaklib.h"
#include <stdio.h>
#include <stdlib.h>
void leak_memory(int memory_size)
{
    double * buffer;
    buffer = (double *) malloc(memory_size);
    if (buffer != NULL)
    {
        printf("Allocated %d bytes of memory\n", memory_size);
    }
}
And then a program that loads the library, calls the memory leak function, and then unloads the library again - repeatedly so that we can track the memory over time.
memleak.c:
#include <windows.h>
#include <stdio.h>
int main(void)
{
    int i;
    HINSTANCE handle;
    int load_success;
    void (*leak_memory)(int);
    int dll_unloaded;

    Sleep(30000);

    for (i = 0; i < 100; ++i)
    {
        handle = LoadLibrary(TEXT("leaklib.dll"));
        leak_memory = (void (*)(int)) GetProcAddress(handle, "leak_memory");

        printf("%d: leaking memory...\n", i);
        leak_memory(50*1024*1024);
        printf("ok\n\n");
        Sleep(3000);

        dll_unloaded = FreeLibrary(handle);
        if (!dll_unloaded)
        {
            printf("Could not free dll");
            return 1;
        }
        Sleep(3000);
    }
    return 0;
}
I then build the library with:
cl.exe /MTd /LD leaklib.c
and the program with
cl.exe memleak.c
with cl.exe from either VS9.0 or VS14.0.
The real problem is that VC9 inadvertently deallocated memory that a reasonable program might expect to still be there - after all, free() wasn't called. Of course, this is one of those areas where the CRT can't please everybody - you wanted the automagic free behavior.
Underlying this is the fact that FreeLibrary is a pretty simple Win32 function. It removes a chunk of code from your address space. You might get a few final calls to DllMain, but as DllMain's documentation notes, you can't do much there. One thing that's especially hard is figuring out what memory would need to be freed, other than the DLL's code and data segments.
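If you can change the library after all (which the question hoped to avoid), one hedged illustration of that last point is to have the DLL track its own allocation and release it on DLL_PROCESS_DETACH; this is only a sketch with illustrative names, and DllMain's restrictions mean the cleanup should stay minimal:
// leaklib.c (sketch): the DLL remembers its buffer and frees it when unloaded
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

static double *buffer = NULL;   // illustrative: the DLL tracks what it allocated

__declspec(dllexport) void leak_memory(int memory_size)
{
    buffer = (double *) malloc(memory_size);
    if (buffer != NULL)
        printf("Allocated %d bytes of memory\n", memory_size);
}

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_DETACH)
    {
        free(buffer);           // release the tracked allocation when FreeLibrary unloads the DLL
        buffer = NULL;
    }
    return TRUE;
}
Only the DLL itself can reasonably know what to free here, which is exactly why FreeLibrary cannot do it for you.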