I was trying to do a buffer overflow (I'm using Linux) on a simple program that requires a password. Here's the program code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int check_authentication(char *password) {
    int auth_flag = 0;
    char password_buffer[16];

    strcpy(password_buffer, password);

    if(strcmp(password_buffer, "pass1") == 0)
        auth_flag = 1;
    if(strcmp(password_buffer, "pass2") == 0)
        auth_flag = 1;

    return auth_flag;
}

int main(int argc, char **argv)
{
    if(argc < 2) {
        printf("\t[!] Correct usage: %s <password>\n", argv[0]);
        exit(0);
    }
    if(check_authentication(argv[1])) {
        printf("\n-=-=-=-=-=-=-=-=\n");
        printf("   Access granted.\n");
        printf("-=-=-=-=-=-=-=-=\n");
    } else {
        printf("\nAccess Denied.\n");
    }
    return 0;
}
OK, I saved it as overflow.c and compiled it with no errors.
Then I opened the terminal, moved into the file's directory (Desktop) and ran:
./overflow AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
The terminal printed "*** stack smashing detected ***" (or something like that) and aborted the program.
Now, I'm reading a book called "Hacking: The Art of Exploitation" by Jon Erickson. In one chapter he explains this type of exploit (I took the code from the book) and runs the same command I ran. For him the memory overflows and the program prints "Access granted.". So why is my OS detecting that I'm trying to exploit the program? Have I done something wrong?
I also tried the exploit on Mac OS X, and the same thing happened. Can someone please help me? Thanks in advance.
On modern Linux distributions the buffer overflow is detected and the process is killed: GCC inserts stack-smashing protection (a stack canary) by default. To disable it, simply compile your application with this gcc flag:
-fno-stack-protector
If you're compiling with gcc, add the -fno-stack-protector flag. The message you received is there to protect you from your bad code :)
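For example (a hypothetical session; adjust the source and binary names to whatever you used):

gcc -g -fno-stack-protector overflow.c -o overflow
./overflow AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

With the canary disabled, strcpy can silently run past the 16-byte buffer into auth_flag, which is what the book's walkthrough relies on.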
Stack smashing detection is a protection mechanism that some compilers add to catch buffer overflow attacks: you are trying to copy a string of A's far longer than the 16-byte character array it's copied into.
Most modern operating systems also have protective mechanisms built in. A good OS does not give a program direct low-level memory access; it only lets a program touch the address space allocated to it. Linux-based systems kill processes that try to access memory beyond their allocated address space.
Beyond this, operating systems also have mechanisms that stop a program from destabilizing the system by allocating huge amounts of memory in an attempt to severely deplete the resources available to the OS.
I'm trying to reach oopsIGotToTheBadFunction by changing the return address via the user input read in goodFunctionUserInput.
Here is the code:
#include <stdio.h>
#include <stdlib.h>

int oopsIGotToTheBadFunction(void)
{
    printf("Gotcha!\n");
    exit(0);
}

int goodFunctionUserInput(void)
{
    char buf[12];
    gets(buf);
    return 1;
}

int main(void)
{
    goodFunctionUserInput();
    printf("Overflow failed\n");
    return 1;
}
/**
> gcc isThisGood.c
> a.out
hello
Overflow failed
*/
I've tried starting the input with 0123456789012345, but I'm not sure what to append to the rest of it to hit the return address. The target address is 0x1000008fc.
Any insights or comments would be helpful.
I'm going to give this the benefit of the doubt and presume it's an exercise intended to teach about the stack (perhaps homework), not a lesson in doing anything malicious.
Consider where the return address of goodFunctionUserInput sits on the stack and what would happen if you changed it. Check the disassembly to see how much stack space the compiler reserved for goodFunctionUserInput and where exactly buf lives. Once you know how long a string to enter, consider the endianness of the machine and what that means for the byte order of the address you want to write over the saved return address. Worrying about what this does to the rest of the stack isn't important here, since the function you want to reach simply calls exit.
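As a concrete sketch of that last step, here is a small payload generator. Everything in it is an assumption you must verify against your own disassembly: the 24-byte offset from buf to the saved return address, the little-endian byte order, and that 0x1000008fc really is the address of oopsIGotToTheBadFunction.

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned long addr = 0x1000008fcUL;  /* assumed address of oopsIGotToTheBadFunction */
    char payload[32];

    /* filler covering buf plus the padding up to the saved return address;
       the 24-byte offset is a guess -- read it off your disassembly */
    memset(payload, 'A', 24);
    /* copying the integer in host byte order yields the little-endian
       layout the stack expects on such a machine */
    memcpy(payload + 24, &addr, sizeof addr);
    fwrite(payload, 1, 24 + sizeof addr, stdout);
    return 0;
}

Piping its output into the target (./payload | ./a.out) should print "Gotcha!" once the offset is right.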
I am following the code from the book "Hacking: The Art of Exploitation". The source code comes on the CD that the author provides; I simply compiled the pre-written code. According to the book, if I provide the right password it should grant me access, and if I give a long string with a wrong password it should also grant me access, but it denies me. The source is as follows:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int check_authentication(char *password) {
    int auth_flag = 0;
    char password_buffer[16];

    strcpy(password_buffer, password);

    if(strcmp(password_buffer, "brillig") == 0)
        auth_flag = 1;
    if(strcmp(password_buffer, "outgrabe") == 0)
        auth_flag = 1;

    return auth_flag;
}

int main(int argc, char *argv[]) {
    if(argc < 2) {
        printf("Usage: %s <password>\n", argv[0]);
        exit(0);
    }
    if(check_authentication(argv[1])) {
        printf("\n-=-=-=-=-=-=-=-=-=-=-=-=-=-\n");
        printf("      Access Granted.\n");
        printf("-=-=-=-=-=-=-=-=-=-=-=-=-=-\n");
    } else {
        printf("\nAccess Denied.\n");
    }
}
Add:
printf("delta: %td\n", (char *) &auth_flag - password_buffer);
to your check_authentication function.
If delta is negative, your program cannot be exploited this way: auth_flag sits at a lower address than the buffer, so the overflow cannot reach it.
Otherwise, use an argument of delta + 4 characters to exploit it.
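For example, if delta prints as 16 (purely hypothetical; the layout is up to your compiler), an argument of 16 + 4 = 20 characters would do it:

./a.out AAAAAAAAAAAAAAAAAAAA

The last four A's land on top of auth_flag, so strcmp fails but auth_flag ends up non-zero and access is granted.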
As suggested earlier, if there is a particular VM image intended to accompany the book, you probably want to use it. This exploit gives me an error as well, rather than the result you'd hope for from the overflow: if I try to overflow the buffer using your code on my system, I get a *** stack smashing detected *** error. My suspicion is that your compiler's stack protector is guarding against this intended exploit.
I'd also suggest you compare the results using a bounded copy instead of strcpy(dest, src):
strncpy(password_buffer, password, sizeof(password_buffer));
This protects against creating a buffer overflow situation. Read the man pages to compare strcpy and strncpy.
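One caveat: strncpy does not NUL-terminate the destination when the source fills the buffer, so a safer pattern is:

strncpy(password_buffer, password, sizeof(password_buffer) - 1);
password_buffer[sizeof(password_buffer) - 1] = '\0';  /* strncpy won't add this itself */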
It's a typical buffer overflow attack.
In a textbook implementation of a C program on Linux, local variables are stored on the stack. In your case the frame should look like this:

-----------------
| int | char[16] |
-----------------
^
stack bottom

with addresses decreasing from left to right.

When strcpy copies the string into your buffer, it overflows into the int and makes it non-zero, so the check evaluates to true.
On a modern OS, though, this attack may not work; there are protection mechanisms. Have a look at:
http://en.wikipedia.org/wiki/Buffer_overflow
http://en.wikipedia.org/wiki/Buffer_overflow_protection
Most likely your modern OS compiles in a stack canary, so the binary is not as vulnerable as the one the book describes.
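If you want to see the actual layout your compiler chose, a couple of (hypothetical) debug lines inside check_authentication will show it:

printf("buffer    at %p\n", (void *) password_buffer);
printf("auth_flag at %p\n", (void *) &auth_flag);

If auth_flag's address is higher than the buffer's, a long enough argument can reach it.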
Recently I came across the problem of getting an 'Oops, spawn error, cannot allocate memory' message while working on a C application.
To understand file descriptors and memory management better, I tried this sample program, and it gave me a surprising result.
Here is the code.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int ac, char *av[])
{
    FILE *fd = NULL;
    unsigned long counter = 0;

    while (1)
    {
        char *aa = malloc(16384);   /* intentionally leaked */
        usleep(5);
        fprintf(stderr, "Counter is %ld \n", counter);
        fd = fopen("/dev/null", "r");   /* intentionally never closed */
        counter++;
    }

    return 0;
}
In this sample program I try to allocate memory every 5 microseconds and open a file descriptor at the same time.
When I run the program, memory usage starts increasing and the file descriptor count increases too, but memory only climbs to about 82.5% and the file descriptors stop at 1024. I know 'ulimit' sets this parameter, and it is 1024 by default.
I expected this program to crash by eating all the memory, or at least to give a 'Can't spawn child' error, but it keeps running.
So I just wanted to know why it does not crash, and why it gives no error once it reaches the file descriptor limit.
It's not crashing, probably because when malloc() finds no more memory to allocate, it simply returns NULL, and fopen() likewise just returns NULL (the underlying open() returns a negative value) once the descriptor limit is reached. In other words, your OS and the standard library cooperate to report failure through return values rather than letting your program crash.
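A sketch of your loop with the return values actually checked (same names as your program) makes both failures visible instead of silent:

while (1)
{
    char *aa = malloc(16384);
    if (aa == NULL) {                      /* allocation failure shows up here, not as a crash */
        perror("malloc");
        break;
    }
    FILE *fd = fopen("/dev/null", "r");
    if (fd == NULL) {                      /* hit once the fd limit (ulimit -n, often 1024) is reached */
        perror("fopen");
        break;
    }
}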
What's the point in doing that?
Plus, on Linux the system won't even consume real memory if nothing is actually written to aa.
And anyway, if you could actually take all the memory (which will rarely happen on Linux or *BSD; I don't know about Windows), it would just make the system lag terribly or even freeze, not just crash your application.
Here's a small program for a college assignment:
#include <unistd.h>

#ifndef BUFFERSIZE
#define BUFFERSIZE 1
#endif

int main()
{
    char buffer[BUFFERSIZE];
    int i;
    int j = BUFFERSIZE;

    i = read(0, buffer, BUFFERSIZE);
    while (i > 0)
    {
        write(1, buffer, i);
        i = read(0, buffer, BUFFERSIZE);
    }
    return 0;
}
There is an alternative version using the stdio.h fread and fwrite functions instead.
I compiled both versions of the program with 25 different values of BUFFERSIZE: 1, 2, 4, ..., doubling each time.
This is an example of how I compile it:
gcc -DBUFFERSIZE=8388608 prog_sys.c -o bin/psys.8M
On my machine (Ubuntu Precise 64-bit, more details at the end) all the smaller versions of the program work fine:
./psys.1M < data
(data is a small file with 3 lines of ASCII text.)
The problem: when the buffer size is 8 MB or greater, both versions (the one using system calls and the one using C library functions) crash with a segmentation fault.
I tested many things. The first version of the code was like this:
(...)
main()
{
    char buffer[BUFFERSIZE];
    int i;

    i = read(0, buffer, BUFFERSIZE);
(...)
This crashes when I call the read function. But these versions:
main()
{
    char buffer[BUFFERSIZE]; // SEGMENTATION FAULT HERE
    int i;
    int j = BUFFERSIZE;

    i = read(0, buffer, BUFFERSIZE);

main()
{
    int j = BUFFERSIZE; // SEGMENTATION FAULT HERE
    char buffer[BUFFERSIZE];
    int i;

    i = read(0, buffer, BUFFERSIZE);
both crash (SIGSEGV) on the first line of main. However, if I move the buffer out of main to global scope (thus placing it in static storage instead of on the stack), it works fine:
char buffer[BUFFERSIZE]; // NOW GLOBAL AND WORKING FINE

main()
{
    int j = BUFFERSIZE;
    int i;

    i = read(0, buffer, BUFFERSIZE);
I'm on Ubuntu Precise 12.04 64-bit with a first-generation Intel i5 M 480.
# uname -a
Linux hostname 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
I don't know the OS limits on the stack. Is there some way to allocate big data on the stack, even if it's not good practice?
I think it is pretty simple: if you make the buffer too big, it overflows the stack.
If you use malloc() to allocate the buffer, I bet you will have no problem.
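For example, here is a minimal sketch of the same copy loop with a heap buffer; the logic is unchanged, only the allocation moves off the stack:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#ifndef BUFFERSIZE
#define BUFFERSIZE 8388608   /* 8 MB: the size that crashed the stack version */
#endif

int main()
{
    char *buffer = malloc(BUFFERSIZE);   /* heap allocation, not limited by ulimit -s */
    int i;

    if (buffer == NULL) {
        perror("malloc");
        return 1;
    }
    i = read(0, buffer, BUFFERSIZE);
    while (i > 0)
    {
        write(1, buffer, i);
        i = read(0, buffer, BUFFERSIZE);
    }
    free(buffer);
    return 0;
}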
Note that there is a function called alloca() that explicitly allocates stack storage. Using alloca() is pretty much the same thing as declaring a stack variable, except that it happens while your program is running. Reading the man page for alloca() or discussions of it may help you to understand what is going wrong with your program. Here's a good discussion:
Why is the use of alloca() not considered good practice?
EDIT: In a comment, @jim mcnamara told us about ulimit, a command-line tool that can check user limits. On this computer (running Linux Mint 14, so it should have the same limits as Ubuntu 12.10) the ulimit -s command shows a stack size limit of 8192 kB, which tracks nicely with the problem as you describe it.
EDIT: In case it wasn't completely clear, I recommend that you solve the problem by calling malloc(). Another acceptable solution would be to statically allocate the memory, which will work fine as long as your code is single-threaded.
You should only use stack allocation for small buffers, precisely because you don't want to blow up your stack. If your buffers are big, or your code will recursively call a function many times such that the little buffers will be allocated many many times, you will clobber your stack and your code will crash. The worst part is that you won't get any warning: it will either be fine, or your program already crashed.
The only disadvantage of malloc() is that it is relatively slow, so you don't want to call malloc() inside time-critical code. But it's fine for initial setup; once the malloc is done, the address of your allocated buffer is just an address in memory like any other memory address within your program's address space.
I specifically recommend against editing the system defaults to make the stack size bigger, because that makes your program much less portable. If you call standard C library functions like malloc() you could port your code to Windows, Mac, Android, etc. with little trouble; if you start calling system functions to change the default stack size, you would have much more problems porting. (And you may have no plans to port this now, but plans change!)
The stack size is quite often limited on Linux. The command ulimit -s gives the current value in kB. You can change the default in (usually) the file /etc/security/limits.conf. You can also, depending on privileges, change it on a per-process basis via code:

#include <sys/resource.h>
// ...
struct rlimit x;
if (getrlimit(RLIMIT_STACK, &x) < 0)
    perror("getrlimit");
x.rlim_cur = RLIM_INFINITY;
if (setrlimit(RLIMIT_STACK, &x) < 0)
    perror("setrlimit");
I would like to write a program that consumes all available memory, to understand the outcome. I've heard that Linux starts killing processes once it is unable to allocate more memory.
Can anyone help me with such a program?
I have written the following, but the memory doesn't seem to get exhausted:
#include <stdlib.h>

int main()
{
    while (1)
    {
        malloc(1024 * 1024);
    }
    return 0;
}
You should write to the allocated blocks. If you just ask for memory, Linux might just hand out a reservation, and nothing is physically allocated until the memory is accessed.
#include <stdlib.h>
#include <string.h>

int main()
{
    while (1)
    {
        void *m = malloc(1024 * 1024);
        if (m == NULL)   /* stop instead of passing NULL to memset */
            break;
        memset(m, 0, 1024 * 1024);
    }
    return 0;
}
You really only need to write 1 byte on every page (4096 bytes on x86 normally) though.
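For instance, a sketch that touches one byte per page instead of zeroing whole blocks (assuming 4096-byte pages):

#include <stdlib.h>

int main()
{
    while (1)
    {
        char *m = malloc(1024 * 1024);
        if (m == NULL)
            break;
        for (long off = 0; off < 1024 * 1024; off += 4096)
            m[off] = 'w';   /* one write per page forces the kernel to back it with real memory */
    }
    return 0;
}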
Linux "over commits" memory. This means that physical memory is only given to a process when the process first tries to access it, not when the malloc is first executed. To disable this behavior, do the following (as root):
echo 2 > /proc/sys/vm/overcommit_memory
Then try running your program.
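You can check the current mode first; 0 is the default heuristic overcommit, 1 always overcommits, and 2 disables overcommit:

cat /proc/sys/vm/overcommit_memory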
Linux uses, by default, what I like to call "opportunistic allocation". This is based on the observation that many real programs allocate more memory than they actually use. Linux exploits this to fit a bit more into memory: it only backs a memory page with physical storage when the page is first used, not when it's requested with malloc (or mmap or sbrk).
You may have more success if you do something like this inside your loop:
memset(malloc(1024*1024L), 'w', 1024*1024L);
On my machine, with an appropriate gb value, the following code used 100% of the memory and even pushed memory into swap.
You can see that you only need to write one byte in each page: memset(m, 0, 1);.
If you change the page size #define PAGE_SZ (1<<12) to a bigger value such as #define PAGE_SZ (1<<13), you will no longer be writing to all the pages you allocated, and you can watch in top as the program's memory consumption goes down.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ (1<<12)

int main() {
    int i;
    int gb = 2; // memory to consume in GB

    for (i = 0; i < ((unsigned long)gb<<30)/PAGE_SZ; ++i) {
        void *m = malloc(PAGE_SZ);
        if (!m)
            break;
        memset(m, 0, 1);
    }
    printf("allocated %lu MB\n", ((unsigned long)i*PAGE_SZ)>>20);
    getchar();
    return 0;
}
A little-known (though well-documented) fact: you can, as root, prevent the OOM killer from claiming your process (or any other process) as one of its victims. Here is a snippet taken directly out of my editor, where, based on configuration data, I lock all allocated memory to avoid being paged out and (optionally) tell the OOM killer not to bother me:
static int set_priority(nex_payload_t *p)
{
    struct sched_param sched;
    int maxpri, minpri;
    FILE *fp;
    int no_oom = -17;

    if (p->cfg.lock_memory)
        mlockall(MCL_CURRENT | MCL_FUTURE);

    if (p->cfg.prevent_oom) {
        fp = fopen("/proc/self/oom_adj", "w");
        if (fp) {
            /* Don't OOM me, Bro! */
            fprintf(fp, "%d", no_oom);
            fclose(fp);
        }
    }
I'm not showing what I do with the scheduler parameters, as that's not relevant to the question.
This prevents the OOM killer from getting your process before it has a chance to produce the (in this case) desired effect. You will also, in effect, force most other processes out to disk.
So, in short, to see fireworks really quickly...
Tell the OOM killer not to bother you
Lock your memory
Allocate and initialize (zero out) blocks in a never ending loop, or until malloc() fails
Be sure to look at ulimit as well, and run your tests as root. A consolidated sketch of these steps follows at the end of this answer.
The code I showed above is part of a daemon that simply cannot fail. It runs at a very high weight (selectively using the RR or FIFO scheduler) and cannot ever be paged out.
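Here is a consolidated sketch of those steps (my sketch, not the daemon code above; run as root; /proc/self/oom_adj is the older interface, newer kernels use /proc/self/oom_score_adj with a range of -1000..1000):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    FILE *fp = fopen("/proc/self/oom_adj", "w");
    if (fp) {
        fprintf(fp, "%d", -17);   /* step 1: tell the OOM killer not to bother you */
        fclose(fp);
    }
    mlockall(MCL_CURRENT | MCL_FUTURE);   /* step 2: lock your memory */

    for (;;) {                            /* step 3: allocate and initialize until malloc() fails */
        char *m = malloc(1 << 20);
        if (m == NULL)
            break;
        memset(m, 0, 1 << 20);
    }
    return 0;
}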
Have a look at this program.
When there is no longer enough memory, malloc starts returning NULL:
#include <stdlib.h>
#include <stdio.h>

int main()
{
    while (1)
    {
        printf("malloc %p\n", malloc(1024 * 1024));   /* prints (nil) once malloc fails */
    }
    return 0;
}
On a 32-bit Linux system, the most that a single process can allocate in its address space is approximately 3 GB.
This means it is unlikely that you'll exhaust the memory with a single process.
On the other hand, on a 64-bit machine you can allocate as much as you like.
As others have noted, it is also necessary to initialise the memory, otherwise it does not actually consume pages.
malloc will start giving an error if EITHER the OS has no virtual memory left OR the process is out of address space (or has too little left to satisfy the requested allocation).
Linux's VM overcommit also affects exactly when this happens and what happens then, as others have noted.
I just executed @John La Rooy's snippet:
#include <stdlib.h>
#include <stdio.h>

int main()
{
    while (1)
    {
        printf("malloc %p\n", malloc(1024 * 1024));
    }
    return 0;
}
but it exhausted my memory very fast and hung the system, so I had to restart it.
So I recommend changing the code somewhat.
For example:
On my Ubuntu 16.04 LTS, the code below takes about 1.5 GB of RAM: physical memory consumption rose from 1.8 GB to 3.3 GB while it ran and dropped back to 1.8 GB after it finished, even though it looks like I allocate 300 GB of RAM in the code.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    int i = 0;
    while (i < 300000)
    {
        printf("malloc %p\n", malloc(1024 * 1024));
        i += 1;
    }
    return 0;
}
When the index i is less than 100000 (i.e., less than about 100 GB allocated), both physical and virtual memory are only very slightly used (less than 100 MB). I don't know why; maybe it has something to do with virtual memory.
One interesting thing is that when physical memory begins to shrink, the addresses malloc() returns clearly change; see the picture linked below.
I used both malloc() and calloc(); they seem to behave similarly in how they occupy physical memory.
(picture: the memory addresses change from 48-bit to 28-bit values when physical memory begins shrinking)
I was bored once and tried this. It ate up all the memory, and I needed to force a reboot to get the machine working again.
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char** argv)
{
    while (1)
    {
        malloc(1024 * 4);
        fork();   /* every child keeps allocating and forking too */
    }
}
If all you need is to stress the system, there is the stress tool, which does exactly what you want. It's available as a package for most distros.
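For example (a hypothetical invocation; check the man page for the flags your version supports):

stress --vm 2 --vm-bytes 1G --timeout 60s

This spins up two workers that each allocate and touch 1 GB for one minute.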