So, I'm trying to write a little bot to automate some boring tasks in a single-player game. I'm a Linux user, so I don't have the game trainers that run on Windows, and besides that I want to learn more about memory allocation xD.
With that in mind, I want to know what can make the process of memory allocation non-deterministic. In my first tests I used this code:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
/**
* main - uses strdup to create a new string, loops forever-ever
*
* Return: EXIT_FAILURE if malloc failed. Otherwise never returns
*/
int main(void)
{
char *s;
unsigned long int i;
s = strdup("Holberton");
if (s == NULL)
{
fprintf(stderr, "Can't allocate mem with malloc\n");
return (EXIT_FAILURE);
}
i = 0;
while (s)
{
printf("[%lu] %s (%p)\n", i, s, (void *)s);
sleep(1);
i++;
}
return (EXIT_SUCCESS);
}
This code was compiled with:
gcc -Wall -Wextra -pedantic -Werror loop.c -o loop
Which prints something like this:
[0] Holberton (0x5619c31ca260)
[1] Holberton (0x5619c31ca260)
[2] Holberton (0x5619c31ca260)
[3] Holberton (0x5619c31ca260)
[4] Holberton (0x5619c31ca260)
Every time I run this code the printed address changes, so I searched a little and discovered that most recent Linux distros have ASLR enabled by default for security, to prevent some overflow exploits. When I deactivated ASLR, this code always printed the same address. OK, so ASLR is one of the reasons. Then I started to test with the game (now using scanmem to search for the address), and every time I restart the game all the addresses are different from the last run.
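For reference, this is roughly how ASLR can be checked and disabled for a single run on a typical Linux system (the exact commands may vary by distro):
cat /proc/sys/kernel/randomize_va_space      # 2 = full ASLR, 0 = disabled
setarch $(uname -m) --addr-no-randomize ./loop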
PS: The addresses found in both cases (my C code and the game) are in the heap.
PS2: You can read more about the C code on this git.
I'd like to know why this happens. Is it because the game uses dynamic allocation (like malloc)? If so, is there any method to force determinism? And if it is because of dynamic allocation, shouldn't the address be on the stack? Why is it always on the heap?
Also, what is a pointer scan? I tried to search for it, and all I found was some guys using game trainers on Windows. I would like to know how it works, and why the pointer it finds is deterministic.
Related
I have a C program which uses malloc (it could also have been C++ with new). I would like to test my program and simulate an "out of memory" scenario.
I would strongly prefer running my program from within a bash or sh shell environment without modifying the core code.
How do I make dynamic memory allocations fail for a program run?
Seems like it could be possible using ulimit but I can't seem to find the right parameters:
$ ulimit -d 50
$ ./program_which_heap_allocates
./program_which_heap_allocates: error while loading shared libraries: libc.so.6: cannot map zero-fill pages
$ ulimit -d 51
bash: ulimit: data seg size: cannot modify limit: Operation not permitted
I'm having trouble setting the limit so that dynamic linking can still occur (for things like the standard library) while the allocations made by my own program fail.
If you are under Linux and using glibc, then there are hooks for malloc. The hooks allow you to catch calls to malloc and make them fail.
Your test suite could use an environment variable to tell the code to install the malloc hook and which call of malloc to fail. E.g. if you set FOOBAR_FAIL_MALLOC=10, then your malloc hook would count down and let the 10th use of malloc return 0.
FOOBAR_FAIL_MALLOC=0 could simply report the number of mallocs in a test case. You would then run the test once with FOOBAR_FAIL_MALLOC=0 and capture the number of mallocs involved, then repeat with FOOBAR_FAIL_MALLOC=1 to N to test every single malloc.
The exception is when, after a failure of malloc, your code performs more mallocs; then you have to think of something more complex to specify which mallocs should fail.
You could also just make the hook fail randomly. Given enough runs, every malloc call would fail at some point.
Note: a C++ new should also hit the malloc hook.
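A minimal sketch of such a hook, assuming the old glibc __malloc_hook interface (deprecated and removed in glibc 2.34); the names failing_hook, install_failing_hook and FOOBAR_FAIL_MALLOC are only illustrative:
#include <malloc.h>
#include <stdlib.h>
static void *(*old_malloc_hook)(size_t, const void *);
static long fail_at = -1;   /* which malloc call should fail; -1 or 0 = never */
static long call_count = 0; /* with FOOBAR_FAIL_MALLOC=0 this can be reported at exit */
static void *failing_hook(size_t size, const void *caller)
{
    void *p;
    (void)caller;
    call_count++;
    if (fail_at > 0 && call_count == fail_at)
        return NULL;                     /* simulate out-of-memory */
    __malloc_hook = old_malloc_hook;     /* restore temporarily to avoid recursing */
    p = malloc(size);
    old_malloc_hook = __malloc_hook;
    __malloc_hook = failing_hook;
    return p;
}
static void install_failing_hook(void)   /* call early in main() of the test build */
{
    const char *env = getenv("FOOBAR_FAIL_MALLOC");
    if (env != NULL)
        fail_at = atol(env);
    old_malloc_hook = __malloc_hook;
    __malloc_hook = failing_hook;
}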
You can have your test program include the .c under test and use a #define to override calls to malloc.
For example:
prog.c:
#include <stdio.h>
#include <stdlib.h>
void *foo(int x)
{
return malloc(x);
}
test.c:
#include <stdio.h>
#include <stdlib.h>
static char buf[100];
static int malloc_fail;
void *test_malloc(size_t n)
{
if (malloc_fail) {
return NULL;
} else {
return buf;
}
}
#define malloc(x) test_malloc(x)
#include "prog.c"
#undef malloc
int main()
{
void *p;
malloc_fail=0;
p = foo(5);
printf("buf=%p, p=%p\n", (void *)buf, p); // prints same value both times
malloc_fail=1;
p = foo(4);
if (p) {
printf("buf=%p, p=%p\n", (void *)buf, p);
} else {
printf("p is NULL\n"); // this prints
}
return 0;
}
I am using a third party library which apparently has a memory leak that we first discovered when upgrading from Visual Studio 2008 (VC9.0) to Visual Studio 2015 (VC14.0). On Windows I load the library at run-time using LoadLibrary and when done using it I unload it using FreeLibrary. When compiling and linking with VC9.0 all memory allocated by the library gets freed on FreeLibrary while using VC14.0 some memory is never freed. The memory profile for my test program below can be seen here: http://imgur.com/a/Hmn1S.
Why is the behavior different for VC9.0 and VC14.0? And can one do anything to avoid the leak without changing the source of the library, like mimic the behavior of VC9.0?
The only thing I could find here on SO is this: Memory leaks on DLL unload which hasn't really helped me, though one answer hints at some hacky solution.
I have made a minimal working example to show that it is not specific to the library. First I create a small library in C with a function that allocates some memory and never deallocates it:
leaklib.h:
#ifndef LEAKLIB_H_
#define LEAKLIB_H_
__declspec( dllexport ) void leak_memory(int memory_size);
#endif
leaklib.c:
#include "leaklib.h"
#include <stdio.h>
#include <stdlib.h>
void leak_memory(int memory_size)
{
double * buffer;
buffer = (double *) malloc(memory_size);
if (buffer != NULL)
{
printf("Allocated %d bytes of memory\n", memory_size);
}
}
And then a program that loads the library, calls the memory leak function, and then unloads the library again - repeatedly so that we can track the memory over time.
memleak.c:
#include <windows.h>
#include <stdio.h>
int main(void)
{
int i;
HINSTANCE handle;
int load_success;
void (*leak_memory)(int);
int dll_unloaded;
Sleep(30000);
for (i = 0; i < 100; ++i)
{
handle = LoadLibrary(TEXT("leaklib.dll"));
leak_memory = (void (*)(int))GetProcAddress(handle, "leak_memory");
printf("%d: leaking memory...\n", i);
leak_memory(50*1024*1024);
printf("ok\n\n");
Sleep(3000);
dll_unloaded = FreeLibrary(handle);
if (!dll_unloaded)
{
printf("Could not free dll'");
return 1;
}
Sleep(3000);
}
return 0;
}
I then build the library with:
cl.exe /MTd /LD leaklib.c
and the program with
cl.exe memleak.c
with cl.exe from either VS9.0 or VS14.0.
The real problem is that VC9 inadvertently deallocated memory that a reasonable program might expect to still be there - after all, free() wasn't called. Of course, this is one of those areas where the CRT can't please everybody - you wanted the automagic free behavior.
Underlying this is the fact that FreeLibrary is a pretty simple Win32 function. It removes a chunk of code from your address space. You might get a few final calls to DllMain, but as DllMain's documentation notes, you can't do much there. One thing that's especially hard is figuring out what memory would need to be freed, other than the DLL's code and data segments.
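To illustrate where such cleanup would have to happen if the library's source could be changed (which the question wants to avoid), here is a sketch of leaklib freeing its own allocation on unload; it assumes leak_memory() stores its allocation in a module-level pointer instead of a local variable:
#include <windows.h>
#include <stdlib.h>
static double *buffer;   /* assumed to be set by leak_memory() */
BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    (void)hinstDLL;
    (void)lpvReserved;
    if (fdwReason == DLL_PROCESS_DETACH)
    {
        free(buffer);    /* free(NULL) is a no-op, so this is always safe */
        buffer = NULL;
    }
    return TRUE;
}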
I have a problem for which I would like to declare a 256x256 array in C. Unfortunately, each time I try to declare an array of that size (of integers) and run my program, it terminates unexpectedly. Any suggestions? I haven't tried dynamic memory allocation, since I cannot seem to understand how it works with multi-dimensional arrays (feel free to guide me through it, though; I am new to C). Another interesting thing to note is that I can declare a 248x248 array in C without any problems, but nothing larger.
dims = 256;
int majormatrix[dims][dims];
Compiled with:
gcc -msse2 -O3 -march=pentium4 -malign-double -funroll-loops -pipe -fomit-frame-pointer -W -Wall -o "SkyFall.exe" "SkyFall.c"
I am using SciTE 323 (not sure how to check GCC version).
There are three places where you can allocate an array in C:
In the automatic memory (commonly referred to as "on the stack")
In the dynamic memory (malloc/free), or
In the static memory (static keyword / global space).
Only the automatic memory has somewhat severe constraints on the amount of allocation (that is, in addition to the limits set by the operating system); dynamic and static allocations could potentially grab nearly as much space as is made available to your process by the operating system.
The simplest way to see if this is the case is to move the declaration outside your function. This would move your array to static memory. If crashes continue, they have nothing to do with the size of your array.
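A minimal sketch of both alternatives, assuming a 256x256 grid of int (the names DIMS and majormatrix here are only illustrative):
#include <stdio.h>
#include <stdlib.h>
#define DIMS 256
/* Option 1: static storage - lives outside any function's stack frame. */
static int majormatrix_static[DIMS][DIMS];
int main(void)
{
    /* Option 2: dynamic storage - one allocation per row. */
    int **majormatrix = malloc(DIMS * sizeof *majormatrix);
    if (majormatrix == NULL)
        return EXIT_FAILURE;
    for (int i = 0; i < DIMS; i++) {
        majormatrix[i] = malloc(DIMS * sizeof **majormatrix);
        if (majormatrix[i] == NULL)
            return EXIT_FAILURE;
    }
    majormatrix_static[255][255] = 1;
    majormatrix[255][255] = 1;
    printf("%d %d\n", majormatrix_static[255][255], majormatrix[255][255]);
    for (int i = 0; i < DIMS; i++)
        free(majormatrix[i]);
    free(majormatrix);
    return EXIT_SUCCESS;
}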
Unless you're running a very old machine/compiler, there's no reason that should be too large. It seems to me the problem is elsewhere. Try the following code and tell me if it works:
#include <stdio.h>
int main()
{
int ints[256][256], i, j;
i = j = 0;
while (i<256) {
while (j<256) {
ints[i][j] = i*j;
j++;
}
i++;
j = 0;
}
printf("Made it :) \n");
return 0;
}
You can't assume that "terminates unexpectedly" is necessarily a direct consequence of declaring a 256x256 array.
SUGGESTION:
1) Boil your code down to a simple, standalone example
2) Run it in the debugger
3) When it "terminates unexpectedly", use the debugger to get a "stack traceback" - you must identify the specific line that's failing
4) You should also look for a specific error message (if possible)
5) Post your code, the error message and your traceback
6) Be sure to tell us what platform (e.g. Centos Linux 5.5) and compiler (e.g. gcc 4.2.1) you're using, too.
For educational purposes I'm trying to accomplish a buffer overflow that directs the program to a different address.
This is the C program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void secret1(void) {
puts("You found the secret function No. 1!\n");
}
int main () {
char string[2];
puts("Input: ");
scanf("%s", string);
printf("You entered %s.\n", string);
return 0;
}
I used gdb to find the address of secret1, as well as the offset from my variable string to the RIP. Using this information I created the following Python exploit:
import struct
rip = 0x0000000100000e40
print("A"*24 + struct.pack("<q", rip))
So far everything works - the program jumps to secret1 and then crashes with "Segmentation fault".
HOWEVER, if I extend my program like this:
...
void secret1(void) {
puts("You found the secret function No. 1!\n");
}
void secret2(void) {
puts("You found the secret function No. 2!\n");
}
void secret3(void) {
puts("You found the secret function No. 3!\n");
}
...
...it segfaults WITHOUT jumping to any of the functions, even though the new fake RIPs are correct (i.e. 0x0000000100000d6c for secret1, 0x0000000100000d7e for secret2). The offsets stay the same as far as gdb told me (or don't they?).
I noticed that none of my attempts work when the program is "big enough" to place the secret functions in the memory area ending with 0x100000d.. - it works like a charm, though, when they are somewhere in 0x100000e..
It also works with more than one secret function when I compile it in 32-bit mode (addresses changed accordingly), but not in 64-bit mode.
Compiling with -fno-stack-protector doesn't make any difference.
Can anybody please explain this odd behaviour to me? Thank you soooo much!
Perhaps creating multiple hidden functions puts them all in a page of memory without execute permission... try explicitly giving RWX permission to that page using mprotect. Could be a number of other things, but this is the first issue I would address.
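A rough sketch of what that page-permission change could look like from inside the target program, assuming the page size comes from sysconf (illustrative only; POSIX only guarantees mprotect on mmap'd memory, though it typically works on the text segment in practice):
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
void secret1(void) { puts("You found the secret function No. 1!\n"); }
int main(void)
{
    /* Round the address of secret1 down to the start of its page and
       mark that whole page readable, writable and executable. */
    long page = sysconf(_SC_PAGESIZE);
    uintptr_t start = (uintptr_t)secret1 & ~((uintptr_t)page - 1);
    if (mprotect((void *)start, (size_t)page, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        perror("mprotect");
    return 0;
}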
As for the -fno-stack-protector gcc option, I was convinced for a while this was obfuscated on gcc 4.2.1. But after playing with it a bit more, I have learned that in order for canary stack protection to be enabled, sizeof(buffer) >= 8 must be true. Additionally, it must be a char buffer, unless you specify the -fstack-protector-all or -fnostack-protector-all options, which enable canaries even for functions that don't contain char buffers. I'm running OS X 10.6.5 64-bit with aforementioned gcc version and on a buffer overflow exploit snippet I'm writing, my stack changes when compiling with -fstack-protector-all versus compiling with no relevant options (probably because the function being exploited doesn't have a char buffer). So if you want to be certain that this feature is either disabled or enabled, make sure to use the -all variants of the options.
I have seen strange behavior with the "strndup" call on AIX 5.3 and 6.1.
If I call strndup with a size greater than the length of the actual source string, then there is stack corruption after that call.
Following is the sample code where this issue can come:
int main ()
{
char *dst_str = NULL;
char src_str[1023] = "sample string";
dst_str = strndup(src_str, sizeof(src_str));
free(dst_str);
return 0;
}
Does anybody have experienced this behavior?
If yes please let me know.
As per my observation, there must be a patch from the OS where this issue got fixed, but I could not find that patch, if there is one at all. Please throw some light on this.
Thanks & Regards,
Thumbeti
You are missing a #include <string.h> in your code. Please try that—I am fairly sure it will work. The reason is that without the #include <string.h>, there is no prototype for strndup() in scope, so the compiler assumes that strndup() returns an int, and takes an unspecified number of parameters. That is obviously wrong. (I am assuming you're compiling in POSIX compliant mode, so strndup() is available to you.)
For this reason, it is always useful to compile code with warnings enabled.
If your problem persists even after the change, there might be a bug.
Edit: Looks like there might be a problem with strndup() on AIX: the problem seems to be in a broken strnlen() function on AIX. If, even after #include <string.h> you see the problem, it is likely you're seeing the bug. A google search shows a long list of results about it.
Edit 2:
Can you please try the following program and post the results?
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
int main(void)
{
char *test1 = "abcdefghijabcdefghijabcdefghijk";
char *test2 = "012345678901234567890123456789";
char *control = "01234567890123456789012345678";
char *verify;
free(strndup(test1, 30));
verify = strndup(test2, 29); /* shorter than the first strndup !!! */
fprintf(stderr,">%s<\n",verify);
if (strcmp(control, verify))
printf("strndup is broken\n");
}
(Taken from https://bugzilla.samba.org/show_bug.cgi?id=1097#c10.)
Edit 3: After seeing your output, which is >01234567890123456789012345678< with no "strndup is broken" message, I don't think your version of AIX has the strndup bug.
Most likely you are corrupting memory somewhere (given the fact that the problem only appears in a large program, under certain conditions). Can you make a small, complete, compilable example that exhibits the stack corruption problem? Otherwise, you will have to debug your memory allocation/deallocation in your program. There are many programs to help you do that, such as valgrind, glibc mcheck, dmalloc, electricfence, etc.
Old topic, but I have experienced this issue as well. A simple test program on AIX 6.1, in conjunction with AIX's MALLOCDEBUG confirms the issue.
#include <string.h>
int main(void)
{
char test[32] = "1234";
char *newbuf = NULL;
newbuf = strndup(test, sizeof(test)-1);
}
Compile and run the program with buffer overflow detection:
~$ gcc -g test_strndup2.c
~$ MALLOCDEBUG=catch_overflow ./a.out
Segmentation fault (core dumped)
Now run dbx to analyze the core:
~$ dbx ./a.out /var/Corefiles/core.6225952.22190412
Type 'help' for help.
[using memory image in /var/Corefiles/core.6225952.22190412]
reading symbolic information ...
Segmentation fault in strncpy at 0xd0139efc
0xd0139efc (strncpy+0xdc) 9cc50001 stbu r6,0x1(r5)
(dbx) where
strncpy() at 0xd0139efc
strndup#AF5_3(??, ??) at 0xd03f3f34
main(), line 8 in "test_strndup2.c"
Tracing through the instructions in strndup, it appears that it mallocs a buffer that is just large enough to handle the string in s plus a NULL terminator. However, it will always copy n characters to the new buffer, padding with zeros if necessary, causing a buffer overflow if strlen(s) < n.
char* strndup(const char*s, size_t n)
{
char* newbuf = (char*)malloc(strnlen(s, n) + 1);
strncpy(newbuf, s, n-1);
return newbuf;
}
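For comparison, a conforming replacement that could be used as a workaround on affected systems; this is only a sketch (my_strndup is an illustrative name), and it deliberately avoids strnlen since that is the function suspected of being broken:
#include <stdlib.h>
#include <string.h>
char *my_strndup(const char *s, size_t n)
{
    size_t len = 0;
    char *p;
    while (len < n && s[len] != '\0')   /* examine at most n bytes of s */
        len++;
    p = malloc(len + 1);
    if (p == NULL)
        return NULL;
    memcpy(p, s, len);                  /* copy exactly len bytes, never pad */
    p[len] = '\0';
    return p;
}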
Alok is right. And with the gcc toolchain under glibc, you would need to define _GNU_SOURCE to get the declaration of strndup, otherwise it is not declared, e.g.:
#include <string.h>
...
Compile with:
gcc -D_GNU_SOURCE a.c
Thanks a lot for your prompt responses.
I have tried the given program.
following is the result:
bash-2.05b# ./mystrndup3
>01234567890123456789012345678<
In my program I have included <string.h>, but the problem is still present.
Following is the strndup declaration in the preprocessed code:
extern char * strndup(const char *, size_t);
I would like to clarify one thing: with the small program I don't see the stack corruption. It appears consistently in my product, which has a huge number of function calls.
Using strndup in the following way solved the problem:
dst_str = strndup(src_str, strlen(src_str));
Please note: I used strlen instead of sizeof, as I need only the valid string.
I am trying to understand why it is happening.
Behavior I am seeing in my product when I use strndup with a large size:
At the "exit" of main, execution cores with "Illegal Instruction".
Intermittent "Illegal Instruction" in the middle of execution (after the strndup call).
Corruption of some allocated memory which is in no way related to strndup.
All these issues are resolved just by modifying the usage of strndup to pass the actual size of the source string.
Thanks & Regards,
Thumbeti