Why did the heap not get corrupted earlier? - c

I am trying to understand, at a lower level, how C manages memory. I found some code on a webpage whose aim is to teach how bad poor memory management can be, so I copied it, pasted it, and compiled:
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    char *p, *q;

    p = malloc(1024);
    q = malloc(1024);
    if (argc >= 2)
        strcpy(p, argv[1]);
    free(q);
    free(p);
    return 0;
}
The test cases were executed with the generic command
/development/heapbug$ ./heapbug `perl -e 'print "A"x$K'`
For $K < 1023 I did not expect problems, but for $K = 1024 I expected a core dump, which didn't take place. Long story short, I started having segfaults for $K > 1033.
Two questions:
1) Why did this happen?
2) Is there a formula that states the "tolerance" of a system?

When you write past the bounds of allocated memory, you invoke undefined behavior. This means you can't accurately predict the behavior of the program. It may crash, it may output strange results, or it may appear to work properly.
Also, making a seemingly unrelated change such as adding an unused local variable or a printf call for debugging can change how undefined behavior manifests itself, as can compiling with a different compiler or with the same compiler with different optimization settings.
Just because the program could crash doesn't mean it will.
That being said, what probably happened has to do with how malloc is implemented on your system. It probably sets aside a few more bytes than you requested for alignment and bookkeeping purposes. Without aggressive optimization, those extra alignment bytes aren't used for anything else, so you get away with writing into them; the problems start when you write further, into bytes that contain the internal structures used by malloc and free, and corrupt them.
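To see how much slack a particular allocator leaves, here is a minimal sketch. It assumes glibc, whose non-standard malloc_usable_size() reports how many bytes the allocator really set aside for a request; the exact figure is an implementation detail and varies between allocators and versions.

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size(): glibc extension */

int main(void) {
    char *p = malloc(1024);
    if (p == NULL)
        return 1;
    /* Typically prints a value somewhat larger than 1024 (e.g. 1032);
     * that slack is what delays the visible corruption. */
    printf("requested 1024, usable %zu\n", malloc_usable_size(p));
    free(p);
    return 0;
}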
But again, you can't depend on this behavior. C depends on the developer to follow the rules, and if you don't bad things happen.

Undefined behaviour is just that. It might crash. It might not. It might work flawlessly. It might drink all the milk in your fridge. It might steal your favourite pair of shoes and stomp around in the mud with them.
Just because something is undefined behaviour does not mean it will be immediately obvious as such. You've overflowed the buffer here, but the consequences weren't observed: most likely the overflow spilled into memory belonging to the second buffer you allocate, and since you never actually use that buffer, writing over it has no visible impact on any code.
This is why tools like Valgrind exist, to look for mistakes that may not always produce obvious or undesirable results.

From my understanding, if you overflow into memory that your application already controls (code, stack, heap, etc.), it isn't guaranteed to cause a core dump; it can simply overwrite some of that memory, which is exactly the risk posed by unintentional buffer overflows.
Once you start attempting to write outside of those bounds, the OS is more likely to block it.

Writing to unallocated memory is undefined behavior. The outcome isn't specified. It may or may not cause a crash. A heap overflow may corrupt the contents of other memory addresses, but how that will affect the program is unknown.


simple overwriting buffer not causing bufferoverflow C valgrind gcc no error [duplicate]

Why is this not giving an error when I compile?
#include <iostream>
using namespace std;

int main()
{
    int *a = new int[2];
    // int a[2]; // even this is not giving error
    a[0] = 0;
    a[1] = 1;
    a[2] = 2;
    a[3] = 3;
    a[100] = 4;
    int b;
    return 0;
}
Can someone explain why this is happening?
Thanks in advance.
Because undefined behavior == anything can happen. You're unlucky that it doesn't crash; this sort of behavior can hide bugs.
Declaring two variables called a certainly is an error; if your compiler accepts that, then it's broken. I assume you mean that you still don't get an error if you replace one declaration with the other.
Array access is not range-checked. At compile time, the size of an array is often not known, and the language does not require a check even when it is. At run time, a check would degrade performance, which would go against the C++ philosophy of not paying for something you don't need. So access beyond the end of an array gives undefined behaviour, and it's up to the programmer to make sure it doesn't happen.
Sometimes, an invalid access will cause a segmentation fault, but this is not guaranteed. Typically, memory protection is only applied to whole pages of memory, with a typical page size of a few kilobytes. Any access within a page of valid memory will not be caught. There's a good chance that the memory you access contains some other program variable, or part of the call stack, so writing there could affect the program's behaviour in just about any way you can imagine.
If you want to be safe, you could use std::vector, and only access its elements using its at() function. This will check the index, and throw an exception if it's out of range. It will also manage memory allocation for you, fixing the memory leak in your example.
I'm guessing you're coming from Java or a Java-like language where once you step out of the boundary of an array, you get the "array index out of bounds" exception.
Well, C expects more from you: it sets aside the space you ask for, but it doesn't check whether you go outside the boundary of that space. Once you do, as mentioned above, the program has that dreaded undefined behavior.
And remember for the future that if you have a bug in your program and you can't seem to find it, and when you go over the code/debug it, everything seems OK, there is a good chance you're "out of bounds" and accessing an unallocated place.
Compilers with good code analysis would certainly warn about that code referencing beyond your array allocation. Setting aside the duplicate declaration of a: if you ran it, it may or may not fault (undefined behavior, as others have said). If, for example, you got a 4 KB page of heap (in processor address space) and you don't write outside of that page, you won't get a fault from the processor. Upon delete[] of the array, had you done it, and depending on the heap implementation, the heap might detect that it is corrupted.

Why does it work if the size of buffer is fewer than nbyte? [duplicate]

This question already has answers here:
Undefined, unspecified and implementation-defined behavior
The code is like this:
#define BUFSIZ 5
#include <stdio.h>
#include <unistd.h>   /* read() */

int main(void)
{
    char buf[BUFSIZ];
    int n;

    n = read(0, buf, 10);
    printf("%d", n);
    printf("%s", buf);
    return 0;
}
I then input abcdefg and the output is:
8abcdefg
In read(0, buf, 10), the 10 is larger than 5, which is the size of buf, but it doesn't seem to lead to a wrong result. Does anyone have ideas about this? Thanks!
This is a quirk of how allocation in C works. You have a buffer allocated on the stack, which is really just a chunk of contiguous memory that you can read and write. The fact that you're allowed to write off the end of this array means that in this case it just so happens to work. Perhaps on your machine with your particular compiler and stack layout, you don't end up overwriting anything important :-)
Relying on this behavior being the same between compiler versions is not advised.
You can in principle [1] read from and write to any address, but it is only safe and meaningful to access data in an organized, well-defined manner.
The purpose of memory allocation (explicit or implicit) is to bring order into chaos. When you declare your buf array, a small block of memory is reserved on the stack.
Usually, allocations have a certain alignment (and sometimes a certain minimum size), and the operating system can only detect wrong accesses at a very coarse level. So there will often be small gaps between your allocated memory blocks, small areas that you can write to and read from, seemingly without "anything bad" happening -- but you should pretend that this isn't the case, and you should not even think about using these implementation details to your advantage.
Your code example "works" because you were unlucky enough not to hit an unallocated or write-protected memory page, and you didn't overwrite another vital stack value that would have caused the application to crash (such as the function's return address).
I am purposely saying "unlucky", not "lucky", as the fact that it appears to work is not a good thing. It's incorrect code [2], and such code should crash early so you can detect and fix the problem. It may otherwise lead to very hard-to-diagnose problems that appear at an entirely unrelated time or location. Even if it works now, you have no guarantee whatsoever that it will work tomorrow (or on a different computer, or with a different compiler, or with ever so slightly different code).
Memory allocation is generally a three-step process: an allocation request to the operating system made by the C library (which usually does not directly correspond to your individual requests), some bookkeeping done inside the library, and a promise made by you. At the operating-system level, the actual physical allocation happens on demand, at page granularity, as you access memory for the first time, provided that the C library has requested allocation for the accessed location earlier.
In the case of stack allocation, the process is somewhat easier on the library level, since it really only has to decrement one special register, but this is mostly irrelevant for you. The concept remains the same.
The promise you make is that you will only ever read from or write to the agreed area, and this is the primary thing that is important for you.
It can happen that you break your promise (deliberately or by accident) and it still "works", but that is pure coincidence.
On the stack, you will sooner or later overwrite either the storage of some local variables (which may go undetected if they're cached in a register) or, eventually, the return address, which will almost certainly cause a crash (or similar undesired behavior) when the function returns. On the heap, you may overwrite some other program data or access a page that hasn't been communicated to the operating system as being reserved. In that case, the program will be terminated immediately.
[1] Let's not consider virtual memory and page protections for an instant.
[2] Strictly speaking, it's not incorrect code, but code that invokes undefined behavior. However, overwriting unallocated memory is in my opinion serious enough to merit the label "incorrect".

Exceeding array bound in C -- Why does this NOT crash?

I have this piece of code, and it runs perfectly fine, and I don't know why:
int main(){
    int len = 10;
    char arr[len];
    arr[150] = 'x';
}
Seriously, try it! It works (at least on my machine)!
It doesn't, however, work if I try to change elements at indices that are too large, for instance index 20,000. So the compiler apparently isn't smart enough to just ignore that one line.
So how is this possible? I'm really confused here...
Okay, thanks for all the answers!
So I can use this to write into memory consumed by other variables on the stack, like so:
#include <stdio.h>

int main(){
    char b[4] = "man";
    char a[10];
    a[10] = 'c';
    puts(b);
}
Outputs "can". That's a really bad thing to do.
Okay, thanks.
C compilers generally do not generate code to check array bounds, for the sake of efficiency. Out-of-bounds array accesses result in undefined behavior, and one possible outcome is that "it works". It's not guaranteed to cause a crash or other diagnostic, but if you're on an operating system with virtual memory support and your array index points to an address that hasn't been mapped into your process, your program is more likely to crash.
So how is this possible?
Because, on your machine, the stack was large enough that there happened to be a valid memory location at the address to which &arr[150] corresponded, and because your small example program exited before anything else used that location and crashed as a result of your having overwritten it.
The compiler you're using doesn't check for attempts to go past the end of the array (the C99 spec says that the result of arr[150], in your sample program, would be "undefined", so it could fail to compile it, but most C compilers don't).
Most implementations don't check for these kinds of errors. Memory access granularity is often very large (4 KiB boundaries), and the cost of finer-grained access control means that it is not enabled by default. There are two common ways for errors to cause crashes on modern OSs: either you read or write data from an unmapped page (instant segfault), or you overwrite data that leads to a crash somewhere else. If you're unlucky, then a buffer overrun won't crash (that's right, unlucky) and you won't be able to diagnose it easily.
You can turn instrumentation on, however. With the GCC version used here, compile with Mudflap enabled. (Mudflap has since been removed from GCC; on current GCC and Clang releases, AddressSanitizer, enabled with -fsanitize=address, fills the same role.)
$ gcc -fmudflap -Wall -Wextra test999.c -lmudflap
test999.c: In function ‘main’:
test999.c:3:9: warning: variable ‘arr’ set but not used [-Wunused-but-set-variable]
test999.c:5:1: warning: control reaches end of non-void function [-Wreturn-type]
Here's what happens when you run it:
$ ./a.out
*******
mudflap violation 1 (check/write): time=1362621592.763935 ptr=0x91f910 size=151
pc=0x7f43f08ae6a1 location=`test999.c:4:13 (main)'
/usr/lib/x86_64-linux-gnu/libmudflap.so.0(__mf_check+0x41) [0x7f43f08ae6a1]
./a.out(main+0xa6) [0x400a82]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f43f0538ead]
Nearby object 1: checked region begins 0B into and ends 141B after
mudflap object 0x91f960: name=`alloca region'
bounds=[0x91f910,0x91f919] size=10 area=heap check=0r/3w liveness=3
alloc time=1362621592.763807 pc=0x7f43f08adda1
/usr/lib/x86_64-linux-gnu/libmudflap.so.0(__mf_register+0x41) [0x7f43f08adda1]
/usr/lib/x86_64-linux-gnu/libmudflap.so.0(__mf_wrap_alloca_indirect+0x1a4) [0x7f43f08afa54]
./a.out(main+0x45) [0x400a21]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f43f0538ead]
number of nearby objects: 1
Oh look, it crashed.
Note that Mudflap is not perfect; it won't catch all of your errors.
Native C arrays do not get bounds checking. That would require additional instructions and data structures. C is designed for efficiency and leanness, so it doesn't specify features that trade performance for safety.
You can use a tool like valgrind, which runs your program in a kind of emulator and attempts to detect such things as buffer overflows by tracking which bytes are initialized and which aren't. But it's not infallible; for example, it won't complain if the overflowing access happens to be an otherwise-legal access to another variable.
Under the hood, array indexing is just pointer arithmetic. When you say arr[150], you are just taking the address of arr, adding 150 times the size of one element, and using the result as the address of a particular object. That address is just a number, and it might be nonsense, invalid, or itself an arithmetic overflow. Some of these conditions cause the hardware to generate a crash, when it can't find memory to access or detects virus-like activity, but none of them results in a software-generated exception, because there is no room for a software hook. If you want a safe array, you'll need to build your own checked-access functions around that underlying addition.
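A minimal sketch of that equivalence (using an in-bounds index so the example itself stays well defined): arr[i] is by definition *(arr + i), so both expressions below compute the same address, with no bounds information involved.

#include <stdio.h>

int main(void) {
    char arr[10];
    char *p = arr;

    /* Same address either way; the compiler emits the same unchecked
     * address computation for both expressions. */
    printf("%p\n", (void *)&arr[5]);
    printf("%p\n", (void *)(p + 5));
    return 0;
}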
By the way, the array in your example isn't even technically of fixed size.
int len = 10; /* variable of type int */
char arr[len]; /* variable-length array */
Using a non-constant object to set the array size is a feature new in C99 (a variable-length array). You could just as well have len be a function parameter, user input, etc. A true compile-time constant would be better for static analysis:

enum { LEN = 10 };   /* integer constant expression */
char arr[LEN];       /* fixed-size array */

(Note that in C, unlike C++, a const-qualified int is still not an integer constant expression, so const int len = 10; char arr[len]; would formally remain a variable-length array.)
For the sake of completeness: the C standard doesn't require bounds checking, but neither does it prohibit it. Out-of-bounds access falls under undefined behaviour: errors that need not generate error messages and can have any effect. It is possible to implement safe arrays, and various approximations of the feature exist. C does nod in this direction by making it illegal, for example, to take the difference between pointers into two different arrays (say, to compute the out-of-bounds index that would reach an arbitrary object A from array B). But the language is very free-form, and if A and B are parts of the same memory block obtained from malloc, it is legal. In other words, the more C-specific memory tricks you use, the harder automatic verification becomes, even with C-oriented tools.
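A small sketch of that rule, assuming a 16-byte block from malloc for the legal case; the commented-out line, which subtracts pointers into two distinct arrays, is the undefined case mentioned above.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char a[8], b[8];
    (void)a; (void)b;                     /* only here to illustrate the bad case */

    char *block = malloc(16);
    if (block == NULL)
        return 1;

    ptrdiff_t ok = (block + 12) - block;  /* defined: both point into one allocation */
    /* ptrdiff_t bad = b - a; */          /* undefined: pointers into distinct arrays */

    printf("%td\n", ok);                  /* prints 12 */
    free(block);
    return 0;
}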
Under the C spec, accessing an element past the end of an array is undefined behaviour. Undefined behaviour means that the specification does not say what would happen -- therefore, anything could happen, in theory. The program might crash, or it might not, or it might crash hours later in a completely unrelated function, or it might wipe your harddrive (if you got unlucky and poked just the right bits into the right place).
Undefined behaviour is not easily predictable, and it should absolutely never be relied upon. Just because something appears to work does not make it right, if it invokes undefined behaviour.
Because you were lucky. Or rather unlucky, because it means it's harder to find the bug.
The program will only crash if you start using memory that isn't mapped for your process (or, in some cases, memory that is protected). Your application is given a certain amount of memory when it starts, which in this case is enough, and you can mess about in your own memory as much as you like, but you'll give yourself a nightmare of a debugging job.

Array is larger than allocated?

I have an array that's declared as char buff[8]. That should only be 8 bytes, but looking at the assembly and testing the code, I only get a segmentation fault when I input something longer than 32 characters into buff, whereas I would expect it for anything longer than 8 characters. Why is this?
What you're saying is not a contradiction:
You have space for 8 characters.
You get an error when you input more than 32 characters.
So what?
The point is that nobody told you that you would be guaranteed to get an error if you input more than 8 characters. That's simply undefined behaviour, and anything can (and will) happen.
You absolutely mustn't think that the absence of obvious misbehaviour is proof of the correctness of your code. Code correctness can only be verified by checking the code against the rules of the language (though some automated tools such as valgrind are an immense help).
Writing beyond the end of the array is undefined behavior. Undefined behavior means nothing (including a segmentation fault) is guaranteed.
In other words, it might do anything. More practically, it's likely the write didn't touch anything protected, so from the point of view of the OS everything is still OK until you go past 32.
This raises an interesting point. What is "totally wrong" from the point of view of C might be OK with the OS. The OS only cares about what pages you access:
Is the address mapped for your process ?
Does your process have the rights ?
You shouldn't count on the OS slapping you if anything goes wrong. A useful tool for this (slapping) is valgrind, if you are using Unix. It will warn you if your process is doing nasty things, even if those nasty things are technically OK with the OS.
C arrays have no bounds checking.
As others have said, you are hitting undefined behavior; as long as you stay inside the bounds of the array, everything works fine. If you cheat, then as far as the standard is concerned anything can happen, including your program appearing to work right, as well as the explosion of the Sun.
What happens in practice is that with stack-allocated variables you are likely to overwrite other variables on the stack, getting "impossible" bugs, or, if you hit a canary value put by the compiler, it may detect the buffer overflow on return from the function. For variables allocated in the so-called heap, the heap allocator may have given some more room than requested, so the mistake may be less easy to spot, although you may easily mess up the internal structures of the heap.
In both cases you can also hit a protected memory page, which will result in your program being terminated forcibly (for the stack this happens less often because usually you have to overwrite the entire stack to get to a protected page).
Your declaration char buff[8] sounds like a stack-allocated variable, although it could be heap-allocated if it is part of a struct. Accessing out of bounds of an array is undefined behaviour and is known as a buffer overrun. Buffer overruns on stack-allocated memory may corrupt the current stack frame and possibly other stack frames in the call stack. With undefined behaviour, anything could happen, including no apparent error. You would not expect a seg fault immediately, because the stack is typically allocated when the thread starts.
For heap allocated memory, memory managers typically allocate large blocks of memory and then sub-allocate from those larger blocks. That is why you often don't get a seg fault when you access beyond the end of a block of memory.
It is undefined behaviour to access beyond the end of a memory block. And it is perfectly valid, according to the standard, for such out of bounds accesses to result in seg faults or indeed an apparently successful read or write. I say apparently successful because if you are writing then you will quite possibly produce a heap corruption by writing out of bounds.
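As an illustration of that last point, here is a deliberately incorrect sketch. Whether anything visible happens is not guaranteed: on many glibc systems free() notices that the overrun has clobbered the neighbouring chunk's bookkeeping and aborts with a heap-corruption diagnostic, while other allocators may let it pass silently.

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *p = malloc(16);
    if (p == NULL)
        return 1;

    memset(p, 'A', 64);   /* writes 48 bytes past the end of the allocation */
    free(p);              /* may abort with a heap-corruption error, may appear fine */
    return 0;
}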
Unless you are not telling us something, you answered your own question.
Declaring
char buff[8];
means that the compiler grabs 8 bytes of memory. If you try to stuff 32 chars into it, you may well get a seg fault; that's called a buffer overflow.
Each char is one byte (unless you are using a wide-character encoding, in which case it is wider), so you are trying to put 4x the number of chars that will fit in your buffer.
Is this your first time coding in C?

Writing more characters than malloced. Why does it not fail?

Why does the following work and not throw some kind of segmentation fault?
char *path = "/usr/bin/";
char *random = "012";

// path + random + '\0'
// so it's malloc(13), but I get 16 bytes due to memory alignment (I'm on 32-bit)
char *newPath = (char *) malloc(strlen(path) + strlen(random) + 1);
strcat(newPath, path);
strcat(newPath, "random");
// newPath is now "/usr/bin/012\0", which makes 13 characters.
However, if I add
strcat(newPath, "RANDOMBUNNIES");
shouldn't this call fail, because strcat uses more memory than allocated? Consequently, shouldn't
free(newPath)
also fail because it tries to free 16 bytes but I used 26 bytes ("/usr/bin/012RANDOMBUNNIES\0")?
Thank you so much in advance!
Most often this kind of overrun problem doesn't make your program explode in a cloud of smoke and the smell of burnt sulphur. It's more subtle: the variable that is allocated after the overrun variable will be altered, causing unexplainable and seemingly random behavior of the program later on.
The whole program snippet is wrong. You are assuming that malloc() returns something that has at least the first byte set to 0. This is not generally the case, so even your "safe" strcat() is wrong: the first strcat() needs an existing terminated string in the destination to append to, so it should be strcpy(), or the buffer's first byte should be set to '\0' first.
But otherwise, as others have said, undefined behavior doesn't mean your program will crash. It only means it can do anything (including crashing, but also not crashing, if you are unlucky).
(Also, you shouldn't cast the return value of malloc().)
Writing more characters than you malloc'd is undefined behavior.
Undefined behavior means anything can happen and the outcome cannot be predicted.
A segmentation fault generally occurs because of accessing an invalid memory section. Here it won't give an error (segmentation fault) because you can still access the memory; however, you are overwriting other memory locations, which is undefined behavior, so your code appears to run fine.
It will fail or not fail seemingly at random, depending on whether the memory just after the malloc'd block is accessible.
Also, when you want to concatenate random, you shouldn't put it in quotes. That should be
strcat(newPath, random);
Many C library functions do not check whether they overrun. It's up to the programmer to manage the memory allocated. You may just be writing over another variable in memory, with unpredictable effects for the operation of your program. C is designed for efficiency, not for pointing out errors in programming.
You got lucky with this call. You don't get a segfault because your writes presumably stay within an allocated part of the address space. This is undefined behaviour: the last characters that were written are not guaranteed to survive, and the calls may also fail outright.
Buffer overruns aren't guaranteed to cause a segfault. The behavior is simply undefined. You may get away with writing to memory that's not yours one time, cause a crash another time, and silently overwrite something completely unrelated a third time. Which one of these happens depends on the OS (and OS version), the hardware, the compiler (and compiler flags), and pretty much everything else that is running on your system.
This is what makes buffer overruns such nasty sources of bugs: often the symptom shows up in production but not when run under a debugger, and it usually doesn't show up in the part of the program where it originates. And of course, they are a welcome vulnerability for injecting your own code.
Operating systems allocate memory at a certain granularity, which on my system is a 4 KB page size (typical on 32-bit machines). Whether a malloc() always takes a fresh page from the OS depends on your C runtime library.
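For reference, here is a minimal sketch that asks the OS for its page size through the POSIX sysconf() call; the 4 KB figure mentioned above is typical but not universal.

#include <stdio.h>
#include <unistd.h>   /* sysconf(), _SC_PAGESIZE (POSIX) */

int main(void) {
    long page = sysconf(_SC_PAGESIZE);
    /* Commonly prints 4096; this is the granularity at which the OS can
     * map and protect memory, not the granularity of malloc(). */
    printf("page size: %ld bytes\n", page);
    return 0;
}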
