Why isn't the buffer overflowing with this code? - C

This is the C code that I am compiling:
#include <stdio.h>
#include <stdlib.h>

int main(){
    long val = 0x41414141;
    char buf[20];

    printf("Correct val's value from 0x41414141 -> 0xdeadbeef!\n");
    printf("Here is your chance: ");
    scanf("%24s", &buf);
    printf("buf: %s\n", buf);
    printf("val: 0x%08x\n", val);

    if (val == 0xdeadbeef)
        system("/bin/sh");
    else {
        printf("WAY OFF!!!!\n");
        exit(1);
    }
    return 0;
}
Here, I am expecting an overflow into long val if the user inputs a string 24 characters long, changing the value in val. But val just doesn't get overwritten, even if the string is long enough. Can someone please explain this behaviour?
I am on macOS. This is what gcc -v spits out:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.0.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Also, after googling a bit I tried gcc with these flags:
gcc -g -fno-stack-protector -D_FORTIFY_SOURCE=0 -o overflow_example overflow_example.c
Still, the result is the same.
This code is part of a narnia wargame challenge on OverTheWire. I managed to crack this challenge on their remote shell, where it behaved as expected. Now I am trying to reproduce the same challenge on my local system and am facing this issue. Please help.
EDIT: For all the people yelling about UB: like I said, this was one of the challenges to be solved on OverTheWire, so it cannot have UB. There are some blogs (here's one I found) that provide walkthroughs for this challenge with a reasonable logical explanation for why the code behaves the way it does, with which I agree. I also understand that the compiled binary is platform dependent. So, what am I to do to produce this binary with a potential overflow on my local system?

It's undefined behavior because C functions do not check whether an argument is too big for the buffer it is written into.

Apparently the variables get laid out differently on the stack on your Mac.
Wrapping them in a struct will ensure that they are placed in the order you want.
Since there is the possibility of padding, let's turn it off. For gcc, the preprocessor directive #pragma pack controls struct packing.
#include <stdio.h>
#include <stdlib.h>

int main(){
#pragma pack(1)
    struct {
        char buf[20];
        long val;
    } s;
#pragma pack()
    s.val = 0x41414141;

    printf("Correct val's value from 0x41414141 -> 0xdeadbeef!\n");
    printf("Here is your chance: ");
    scanf("%24s", s.buf);
    printf("buf: %s\n", s.buf);
    printf("val: 0x%08lx\n", s.val);

    if (s.val == 0xdeadbeef)
        system("/bin/sh");
    else {
        printf("WAY OFF!!!!\n");
        exit(1);
    }
    return 0;
}

I'm not sure what you mean.
1) Are you concerned that your input string will create a number that is too big to store in a long? That will not happen; the number will simply wrap around.
2) Are you concerned that you'll write to memory beyond the bounds of buf? In C this produces undefined behavior, not necessarily a crash.
buf is on the stack (it could just as well be on the heap), so you can just keep writing to memory from the address where buf starts. The compiler will not generate code that does a bounds check for you. So if you go beyond the 20 bytes, you'll eventually start overwriting other parts of memory that do not belong to the block of memory you've set aside for buf.
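To see why the overwrite misses val on a particular build, you can print where the compiler actually placed the two locals. This is only a diagnostic sketch; the addresses and their relative order are implementation-defined and will differ between compilers and flags:

#include <stdio.h>

int main(void)
{
    long val = 0x41414141;
    char buf[20];

    /* If &buf[0] is above &val, writing past buf can never reach val;
       if it is below, the 21st..24th input bytes would land in val. */
    printf("&val    = %p\n", (void *)&val);
    printf("&buf[0] = %p\n", (void *)&buf[0]);
    printf("distance buf -> val = %td bytes\n", (char *)&val - &buf[0]);
    return 0;
}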

Related

How to force a crash in C, is dereferencing a null pointer a (fairly) portable way?

I'm writing my own test-runner for my current project. One feature (that's probably quite common with test-runners) is that every testcase is executed in a child process, so the test-runner can properly detect and report a crashing testcase.
I want to also test the test-runner itself; therefore, one testcase has to force a crash. I know "crashing" is not covered by the C standard and might just happen as a result of undefined behavior. So this question is more about the behavior of real-world implementations.
My first attempt was to just dereference a null-pointer:
int c = *((int *)0);
This worked in a debug build on GNU/Linux and Windows, but failed to crash in a release build because the unused variable c was optimized out, so I added
printf("%d", c); // to prevent optimizing away the crash
and thought I was settled. However, trying my code with clang instead of gcc revealed a surprise during compilation:
[CC] obj/x86_64-pc-linux-gnu/release/src/test/test/test_s.o
src/test/test/test.c:34:13: warning: indirection of non-volatile null pointer
will be deleted, not trap [-Wnull-dereference]
int c = *((int *)0);
^~~~~~~~~~~
src/test/test/test.c:34:13: note: consider using __builtin_trap() or qualifying
pointer with 'volatile'
1 warning generated.
And indeed, the clang-compiled testcase didn't crash.
So, I followed the advice of the warning and now my testcase looks like this:
PT_TESTMETHOD(test_expected_crash)
{
    PT_Test_expectCrash();

    // crash intentionally
    int *volatile nptr = 0;
    int c = *nptr;
    printf("%d", c); // to prevent optimizing away the crash
}
This solved my immediate problem, the testcase "works" (aka crashes) with both gcc and clang.
I guess because dereferencing the null pointer is undefined behavior, clang is free to compile my first code into something that doesn't crash. The volatile qualifier removes the ability to be sure at compile time that this really will dereference null.
Now my questions are:
Does this final code guarantee the null dereference actually happens at runtime?
Is dereferencing null indeed a fairly portable way for crashing on most platforms?
I wouldn't rely on that method as being robust if I were you.
Can't you use abort(), which is part of the C standard and is guaranteed to cause an abnormal program termination event?
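For the test-runner use case described in the question, a minimal POSIX sketch of how the parent can observe the abnormal termination that abort() produces (fork/waitpid; on Windows you would inspect the child's exit code instead):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        abort();                          /* child: raises SIGABRT */
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))              /* true for SIGABRT, SIGSEGV, ... */
        printf("child terminated by signal %d\n", WTERMSIG(status));
    return 0;
}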
The answer referring to abort() was great; I really didn't think of that, and it's indeed a perfectly portable way of forcing an abnormal program termination.
Trying it with my code, I came across the fact that msvcrt (Microsoft's C runtime) implements abort() in a special chatty way; it outputs the following to stderr:
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
That's not so nice, at least it unnecessarily clutters the output of a complete test run. So I had a look at __builtin_trap() that's also referenced in clang's warning. It turns out this gives me exactly what I was looking for:
LLVM code generator translates __builtin_trap() to a trap instruction if it is supported by the target ISA. Otherwise, the builtin is translated into a call to abort.
It's also available in gcc starting with version 4.2.4:
This function causes the program to exit abnormally. GCC implements this function by using a target-dependent mechanism (such as intentionally executing an illegal instruction) or by calling abort.
As this does something similar to a real crash, I prefer it over a simple abort(). For the fallback, it's still an option to try your own illegal operation like the null pointer dereference, but just add a call to abort() in case the program somehow makes it there without crashing.
So, all in all, the solution looks like this, testing for a minimum GCC version and using the much more handy __has_builtin() macro provided by clang:
#undef HAVE_BUILTIN_TRAP
#ifdef __GNUC__
#  define GCC_VERSION (__GNUC__ * 10000 \
                       + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
#  if GCC_VERSION > 40203
#    define HAVE_BUILTIN_TRAP
#  endif
#else
#  ifdef __has_builtin
#    if __has_builtin(__builtin_trap)
#      define HAVE_BUILTIN_TRAP
#    endif
#  endif
#endif

#ifdef HAVE_BUILTIN_TRAP
#  define crashMe() __builtin_trap()
#else
#  include <stdio.h>
#  include <stdlib.h>   /* for the abort() fallback */
#  define crashMe() do { \
        int *volatile iptr = 0; \
        int i = *iptr; \
        printf("%d", i); \
        abort(); } while (0)
#endif

// [...]

PT_TESTMETHOD(test_expected_crash)
{
    PT_Test_expectCrash();

    // crash intentionally
    crashMe();
}
You can write to memory instead of reading it:
*((int *)0) = 0;
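As with the read, a bare *((int *)0) = 0; may still be deleted by the optimizer since it is undefined behavior, so a hedged variant combines the volatile trick from above with an abort() fallback (a sketch; not guaranteed to trap on every platform):

#include <stdlib.h>

int main(void)
{
    int *volatile nptr = 0;   /* volatile: compiler cannot prove the store is dead */
    *nptr = 0;                /* usually traps on desktop OSes, but still UB */
    abort();                  /* fallback in case the store did not crash */
}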
No, dereferencing a NULL pointer is not a portable way of crashing a program. It is undefined behavior, which means just that: you have no guarantees about what will happen.
As it happens, for the most part under any of the three main OSes used today on desktop computers, those being macOS, Linux and Windows NT (*), dereferencing a NULL pointer will immediately crash your program.
That said: "The worst possible result of undefined behavior is for it to do what you were expecting."
I purposely put a star beside Windows NT, because under Windows 95/98/ME, I can craft a program that has the following source:
int main()
{
    int *pointer = NULL;
    int i = *pointer;
    return 0;
}
that will run without crashing. Compile it as a TINY mode .COM file under 16-bit DOS, and you'll be just fine.
Ditto running the same source with just about any C compiler under CP/M.
Ditto running that on some embedded systems. I've not tested it on an Arduino, but I would not want to bet either way on the outcome. I do know for certain that were a C compiler available for the 8051 systems I cut my teeth on, that program would run fine on those.
The program below should work. It might cause some collateral damage, though.
#include <string.h>

void crashme(char *str)
{
    char *omg;
    for (omg = strtok(str, ""); omg; omg = strtok(NULL, "")) {
        strcat(omg, "wtf");
    }
    *omg = 0; // always NUL-terminate a NULL string !!!
}

int main(void)
{
    char buff[20];
    // crashme( "WTF" ); // works!
    // crashme( NULL ); // works, too
    crashme(buff); // Maybe a bit too slow ...
    return 0;
}

Buffer overflow example working on Windows, but not on Linux

The book I am reading, Software Exorcism, has this example code for a buffer overflow:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUFFER_SIZE 4

void victim(char *str)
{
    char buffer[BUFFER_SIZE];
    strcpy(buffer, str);
    return;
}

void redirected()
{
    printf("\tYou've been redirected!\n");
    exit(0);
    return;
}

void main()
{
    char buffer[] =
    {
        '1','2','3','4',
        '5','6','7','8',
        '\x0','\x0','\x0','\x0','\x0'
    };
    void *fptr;
    unsigned long *lptr;

    printf("buffer = %s\n", buffer);
    fptr = redirected;
    lptr = (unsigned long*)(&buffer[8]);
    *lptr = (unsigned long)fptr;
    printf("main()\n");
    victim(buffer);
    printf("main()\n");
    return;
}
I can get this to work in Windows with Visual Studio 2010 by specifying
Basic Runtime Checks -> Uninitialized variables
Buffer Security Check -> No
With those compile options, I get this behavior when running:
buffer = 12345678
main()
You've been redirected!
My question is about the code not working on Linux. Is there any clear reason why it is so?
Some info on what I've tried:
I've tried to run this with 32-bit Ubuntu 12.04 (downloaded from here), with these options:
[09/01/2014 11:46] root#ubuntu:/home/seed# sysctl -w kernel.randomize_va_space=0
kernel.randomize_va_space = 0
Getting:
[09/01/2014 12:03] seed#ubuntu:~$ gcc -fno-stack-protector -z execstack -o overflow overflow.c
[09/01/2014 12:03] seed#ubuntu:~$ ./overflow
buffer = 12345678
main()
main()
Segmentation fault (core dumped)
And with 64-bit CentOS 6.0, with these options:
[root]# sysctl -w kernel.randomize_va_space=0
kernel.randomize_va_space = 0
[root]# sysctl -w kernel.exec-shield=0
kernel.exec-shield = 0
Getting:
[root]# gcc -fno-stack-protector -z execstack -o overflow overflow.c
[root]# ./overflow
buffer = 12345678
main()
main()
[root]#
Is there something fundamentally different in Linux environment, which would cause the example not working, or am I missing something simple here?
Note: I've been through the related questions such as this one and this one, but haven't been able to find anything that would help on this. I don't think this is a duplicate of previous questions even though there are a lot of them.
Your example overflows the stack, a small and predictable memory region, in an attempt to modify the return address of the function void victim(), which would then point to void redirected() instead of coming back to main().
It works with Visual. But GCC is a different compiler and can use different stack allocation rules, making the exploit fail. C doesn't enforce a strict "stack memory layout", so compilers can make different choices.
A good way to test this hypothesis is to build your code with MinGW (GCC for Windows), proving that the behavior difference is not strictly related to the OS.
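Another way to see what layout your GCC actually chose is to ask the compiler itself. The sketch below uses the GCC/Clang extensions __builtin_frame_address() and __builtin_return_address(); the printed addresses are informational only, and the exact spot of the saved return address relative to buffer depends on the ABI and optimization level:

#include <stdio.h>
#include <string.h>

#define BUFFER_SIZE 4

void victim(char *str)
{
    char buffer[BUFFER_SIZE];
    printf("buffer         = %p\n", (void *)buffer);
    printf("frame address  = %p\n", __builtin_frame_address(0));
    printf("return address = %p (the value, not its location)\n",
           __builtin_return_address(0));
    strcpy(buffer, str);
}

int main(void)
{
    victim("123");   /* fits in the buffer; this run is only for inspection */
    return 0;
}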
#define BUFFER_SIZE 4

void victim(char *str)
{
    char buffer[BUFFER_SIZE];
    strcpy(buffer, str);
    return;
}
There is another potential problem here if optimizations are enabled. buffer in main is 13 bytes, and it is passed as victim(buffer). Then, within victim, strcpy copies it into a 4-byte buffer.
_FORTIFY_SOURCE should cause the program to abort on the call to strcpy. If the compiler can deduce the destination buffer size (which it should in this case), then it replaces strcpy with a "safer" version that includes the destination buffer size. If the number of bytes to copy exceeds the destination buffer size, the "safer" strcpy calls abort().
To turn off _FORTIFY_SOURCE, compile with -U_FORTIFY_SOURCE or -D_FORTIFY_SOURCE=0.
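To illustrate the effect (file and variable names are made up): built with something like gcc -O2 -D_FORTIFY_SOURCE=2, glibc's fortified strcpy aborts at run time when the known destination size is exceeded; built with -D_FORTIFY_SOURCE=0 it is an ordinary overflow.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char small[4];
    const char *big = "definitely longer than four bytes";

    /* With fortification the compiler knows sizeof(small) and routes this
       call through a checked variant that aborts on overflow; without it,
       this is an ordinary out-of-bounds write (undefined behavior). */
    strcpy(small, big);
    printf("%s\n", small);
    return 0;
}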

Why is 248x248 the maximum two-dimensional array size I can declare?

I have a programming problem for which I would like to declare a 256x256 array in C. Unfortunately, each time I try to declare an array of that size (of integers) and run my program, it terminates unexpectedly. Any suggestions? I haven't tried memory allocation, since I cannot seem to understand how it works with multi-dimensional arrays (feel free to guide me through it, though; I am new to C). Another interesting thing to note is that I can declare a 248x248 array in C without any problems, but no larger.
dims = 256;
int majormatrix[dims][dims];
Compiled with:
gcc -msse2 -O3 -march=pentium4 -malign-double -funroll-loops -pipe -fomit-frame-pointer -W -Wall -o "SkyFall.exe" "SkyFall.c"
I am using SciTE 323 (not sure how to check GCC version).
There are three places where you can allocate an array in C:
In the automatic memory (commonly referred to as "on the stack")
In the dynamic memory (malloc/free), or
In the static memory (static keyword / global space).
Only the automatic memory has a somewhat severe constraint on the amount of allocation (that is, limits in addition to those set by the operating system); dynamic and static allocations can potentially grab nearly as much space as the operating system makes available to your process.
The simplest way to see whether this is the case is to move the declaration outside your function. This moves your array to static memory. If the crashes continue, they have nothing to do with the size of your array.
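Since you mention not knowing how dynamic allocation works with multi-dimensional arrays, here is a short sketch of both alternatives (names are illustrative); either one keeps the 256x256 ints off the stack while preserving m[i][j] indexing:

#include <stdio.h>
#include <stdlib.h>

#define DIMS 256

/* Static storage: lives outside the stack, so the stack-size limit is irrelevant. */
static int static_matrix[DIMS][DIMS];

int main(void)
{
    /* Dynamic storage: a pointer to an array of DIMS ints, one malloc for the whole matrix. */
    int (*m)[DIMS] = malloc(DIMS * sizeof *m);
    if (m == NULL) {
        perror("malloc");
        return 1;
    }

    m[10][20] = 42;
    static_matrix[10][20] = 42;
    printf("%d %d\n", m[10][20], static_matrix[10][20]);

    free(m);
    return 0;
}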
Unless you're running a very old machine/compiler, there's no reason that should be too large. It seems to me the problem is elsewhere. Try the following code and tell me if it works:
#include <stdio.h>

int main()
{
    int ints[256][256], i, j;

    i = j = 0;
    while (i < 256) {
        while (j < 256) {
            ints[i][j] = i * j;
            j++;
        }
        i++;
        j = 0;
    }
    printf("Made it :) \n");
    return 0;
}
You can't assume that "terminates unexpectedly" is necessarily a direct result of declaring a 256x256 array.
SUGGESTION:
1) Boil your code down to a simple, standalone example
2) Run it in the debugger
3) When it "terminates unexpectedly", use the debugger to get a "stack traceback" - you must identify the specific line that's failing
4) You should also look for a specific error message (if possible)
5) Post your code, the error message and your traceback
6) Be sure to tell us what platform (e.g. Centos Linux 5.5) and compiler (e.g. gcc 4.2.1) you're using, too.

Simple buffer overflow using scanf (Mac OS X 10.6.5, 64-bit)

For educational purposes I'm trying to accomplish a buffer overflow that directs the program to a different address.
This is the C program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void secret1(void) {
    puts("You found the secret function No. 1!\n");
}

int main () {
    char string[2];

    puts("Input: ");
    scanf("%s", string);
    printf("You entered %s.\n", string);
    return 0;
}
I used gdb to find the address of secret1, as well as the offset of my variable string from the RIP. Using this information I created the following Python exploit:
import struct
rip = 0x0000000100000e40
print("A"*24 + struct.pack("<q", rip))
So far everything works - the program jumps to secret1 and then crashes with "Segmentation fault".
HOWEVER, if I extend my program like this:
...
void secret1(void) {
    puts("You found the secret function No. 1!\n");
}

void secret2(void) {
    puts("You found the secret function No. 2!\n");
}

void secret3(void) {
    puts("You found the secret function No. 3!\n");
}
...
...it segfaults WITHOUT jumping to any of the functions, even though the new fake RIPs are correct (i.e. 0x0000000100000d6c for secret1, 0x0000000100000d7e for secret2). The offsets stay the same, as far as gdb told me (or don't they?).
I noticed that none of my attempts work when the program is "big enough" to place the secret functions in the memory area with addresses of the form 0x100000d..; it works like a charm, though, when they end up somewhere in 0x100000e..
It also works with more than one secret function when I compile it in 32-bit mode (addresses changed accordingly), but not in 64-bit mode.
-fno-stack-protector doesn't make any difference.
Can anybody please explain this odd behaviour to me? Thank you soooo much!
Perhaps creating multiple hidden functions puts them all in a page of memory without execute permission... try explicitly giving RWX permission to that page using mprotect. Could be a number of other things, but this is the first issue I would address.
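A sketch of that mprotect idea (make_page_rwx is a made-up helper, and whether missing execute permission is really the culprit here is unverified): mprotect needs a page-aligned address, so round the function's address down to its page boundary first.

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical helper: give read/write/execute permission to the whole
   page containing addr. Returns 0 on success, -1 on failure (see errno). */
static int make_page_rwx(void *addr)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    uintptr_t page = (uintptr_t)addr & ~((uintptr_t)pagesize - 1);
    return mprotect((void *)page, (size_t)pagesize,
                    PROT_READ | PROT_WRITE | PROT_EXEC);
}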
As for the -fno-stack-protector gcc option, I was convinced for a while that it was being ignored on gcc 4.2.1. But after playing with it a bit more, I have learned that in order for canary stack protection to be enabled, sizeof(buffer) >= 8 must be true. Additionally, it must be a char buffer, unless you specify the -fstack-protector-all or -fno-stack-protector-all options, which enable or disable canaries even for functions that don't contain char buffers. I'm running OS X 10.6.5 64-bit with the aforementioned gcc version, and on a buffer overflow exploit snippet I'm writing, my stack changes when compiling with -fstack-protector-all versus compiling with no relevant options (probably because the function being exploited doesn't have a char buffer). So if you want to be certain that this feature is either disabled or enabled, make sure to use the -all variants of the options.
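To check that heuristic on your own toolchain, a small comparison sketch (function names are made up) can be compiled once with -fstack-protector and once with -fstack-protector-all, then disassembled; a call to __stack_chk_fail in a function's generated code means it got a canary:

#include <string.h>

void with_char_buffer(const char *src)
{
    char buf[16];                       /* char array >= 8 bytes: canary candidate */
    strcpy(buf, src);
}

void with_int_buffer(const int *src)
{
    int buf[16];                        /* no char buffer: typically skipped
                                           unless the -all variant is used */
    memcpy(buf, src, sizeof buf);
}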

strndup call is corrupting stack frames

I have seen strange behavior with the "strndup" call on AIX 5.3 and 6.1.
If I call strndup with a size greater than the length of the actual source string, then there is stack corruption after that call.
Here is sample code where this issue can occur:
int main ()
{
    char *dst_str = NULL;
    char src_str[1023] = "sample string";

    dst_str = strndup(src_str, sizeof(src_str));
    free(dst_str);

    return 0;
}
Has anybody experienced this behavior?
If yes, please let me know.
As per my observation, there must be an OS patch where this issue got fixed, but I could not find that patch, if there is one at all. Please throw some light on this.
Thanks & Regards,
Thumbeti
You are missing a #include <string.h> in your code. Please try that—I am fairly sure it will work. The reason is that without the #include <string.h>, there is no prototype for strndup() in scope, so the compiler assumes that strndup() returns an int, and takes an unspecified number of parameters. That is obviously wrong. (I am assuming you're compiling in POSIX compliant mode, so strndup() is available to you.)
For this reason, it is always useful to compile code with warnings enabled.
If your problem persists even after the change, there might be a bug.
Edit: Looks like there might be a problem with strndup() on AIX: the problem seems to be in a broken strnlen() function on AIX. If, even after #include <string.h> you see the problem, it is likely you're seeing the bug. A google search shows a long list of results about it.
Edit 2:
Can you please try the following program and post the results?
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    char *test1 = "abcdefghijabcdefghijabcdefghijk";
    char *test2 = "012345678901234567890123456789";
    char *control = "01234567890123456789012345678";
    char *verify;

    free(strndup(test1, 30));
    verify = strndup(test2, 29); /* shorter than the first strndup !!! */
    fprintf(stderr, ">%s<\n", verify);
    if (strcmp(control, verify))
        printf("strndup is broken\n");
}
(Taken from https://bugzilla.samba.org/show_bug.cgi?id=1097#c10.)
Edit 3: After seeing your output, which is >01234567890123456789012345678<, and with no strndup is broken, I don't think your version of AIX has the strndup bug.
Most likely you are corrupting memory somewhere (given the fact that the problem only appears in a large program, under certain conditions). Can you make a small, complete, compilable example that exhibits the stack corruption problem? Otherwise, you will have to debug your memory allocation/deallocation in your program. There are many programs to help you do that, such as valgrind, glibc mcheck, dmalloc, electricfence, etc.
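Of the tools mentioned, glibc's mcheck is the quickest to wire in if the code also builds on a Linux/glibc box (it is glibc-specific, so it won't help directly on AIX); a sketch:

#include <mcheck.h>   /* glibc heap-consistency checking */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    mcheck(NULL);                  /* must be called before the first malloc;
                                      NULL installs the default abort handler */
    char *p = malloc(8);
    strcpy(p, "0123456789");       /* overflow: flagged as clobbered when the block is freed */
    free(p);
    return 0;
}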
Old topic, but I have experienced this issue as well. A simple test program on AIX 6.1, in conjunction with AIX's MALLOCDEBUG confirms the issue.
#include <string.h>

int main(void)
{
    char test[32] = "1234";
    char *newbuf = NULL;

    newbuf = strndup(test, sizeof(test) - 1);
}
Compile and run the program with buffer overflow detection:
~$ gcc -g test_strndup2.c
~$ MALLOCDEBUG=catch_overflow ./a.out
Segmentation fault (core dumped)
Now run dbx to analyze the core:
~$ dbx ./a.out /var/Corefiles/core.6225952.22190412
Type 'help' for help.
[using memory image in /var/Corefiles/core.6225952.22190412]
reading symbolic information ...
Segmentation fault in strncpy at 0xd0139efc
0xd0139efc (strncpy+0xdc) 9cc50001 stbu r6,0x1(r5)
(dbx) where
strncpy() at 0xd0139efc
strndup#AF5_3(??, ??) at 0xd03f3f34
main(), line 8 in "test_strndup2.c"
Tracing through the instructions in strndup, it appears that it mallocs a buffer that is just large enough to hold the string in s plus a NUL terminator. However, it always copies n characters to the new buffer, padding with zeros if necessary, causing a buffer overflow if strlen(s) < n.
char* strndup(const char*s, size_t n)
{
    char* newbuf = (char*)malloc(strnlen(s, n) + 1);
    strncpy(newbuf, s, n-1);
    return newbuf;
}
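For contrast, a conforming strndup never writes past what it allocated; a minimal sketch (named my_strndup to avoid clashing with the library function):

#include <stdlib.h>
#include <string.h>

char *my_strndup(const char *s, size_t n)
{
    size_t len = strnlen(s, n);        /* bytes actually needed, at most n */
    char *newbuf = malloc(len + 1);

    if (newbuf == NULL)
        return NULL;
    memcpy(newbuf, s, len);            /* copy only len bytes, never pad out to n */
    newbuf[len] = '\0';
    return newbuf;
}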
Alok is right. And with the gcc toolchain under glibc, you would need to define _GNU_SOURCE to get the declaration of strndup; otherwise it's not declared, e.g.:
#include <string.h>
...
To compile:
gcc -D_GNU_SOURCE a.c
Thanks a lot for your prompt responses.
I have tried the given program.
following is the result:
bash-2.05b# ./mystrndup3
>01234567890123456789012345678<
In my program I have included <string.h>; still, the problem persists.
The following is the strndup declaration in the preprocessed code:
extern char * strndup(const char *, size_t);
I would like to clarify one thing: with a small program I don't get the stack corruption. It appears consistently in my product, which has a huge number of function calls.
Using strndup in the following way solved the problem:
dst_str = strndup(src_str, strlen(src_str));
Please note: I used strlen instead of sizeof, as I need only the valid string.
I am trying to understand why this is happening.
The behavior I am seeing in my product when I use strndup with a large size:
1) At the exit of main, execution cores with "Illegal Instruction".
2) Intermittent "Illegal Instruction" in the middle of execution (after the strndup call).
3) Corruption of some allocated memory that is nowhere related to strndup.
All these issues are resolved by just changing the strndup call to use the actual length of the source string.
Thanks & Regards,
Thumbeti
