Is there any way to guarantee a segfault? - c

I know that segfault is a common manifestation of undefined behavior. But I have two small questions about it:
Are ALL segfaults undefined behavior?
If no, is there any way to ensure a segfault?
The question What is a segmentation fault? is far more general than mine, and none of its answers address either of my questions.

A segmentation fault simply means that you did an invalid access to memory -- either because the address requested is not mapped (mapping error) or because you don't have permissions to access it (access error).
There are segmentation faults that are intended. One such example can be found here -- a mini application that deliberately plays with permissions of memory pages in order to detect where writes are being made by a given function.
The easiest way is to use the raise function.
Source:
#include <signal.h>

int main() {
    raise(SIGSEGV);
    return 0;
}

Are ALL segfaults undefined behavior?
This question is trickier than it might seem, because "undefined behavior" is a description of either a C source program, or the result of running a C program in the "abstract machine" that describes behavior of C programs in general; but "segmentation fault" is a possible behavior of a particular operating system, often with help from particular CPU features.
The C Standard doesn't say anything at all about segmentation faults. The one nearly relevant thing it does say is that if a program execution does not have undefined behavior, then a real implementation's execution of the program will have the same observable behavior as the abstract machine's execution. And "observable behavior" is defined to include just accesses to volatile objects, data written into files, and input and output of interactive devices.
If we can assume that a "segmentation fault" always prevents further actions by a program, then any segmentation fault without the presence of undefined behavior could only happen after all of the observable behavior has completed as expected. (But note that valid optimizations can sometimes cause things to happen in a different order from the obvious one.)
So a situation where a program causes a segmentation fault (for the OS) although there is no undefined behavior (according to the C Standard) doesn't make much sense for a real compiler and OS, but we can't rule it out completely.
But also, all that is assuming perfect computers. If RAM is bad, an intended address value might end up changed. There are even very infrequent but measurable events where cosmic rays can change a bit within otherwise good RAM. Soft errors like those could cause a segmentation fault (on a system where "segmentation fault" is a thing), for practically any perfectly written C program, with no undefined behavior possible on any implementation or input.
If no, is there any way to ensure a segfault?
That depends on the context, and what you mean by "ensure".
Can you write a C program that will always cause a segfault? No, because some computers might not even have such a concept.
Can you write a C program that always causes a segfault when it is possible on a given computer? No, because some compilers might do things that avoid the actual problem in some cases. And since the program's behavior is undefined, not causing a segfault is just as valid a result as causing one. In particular, one real obstacle you can run into, even with something as simple as deliberately dereferencing a null pointer, is that compiler optimizations sometimes assume the inputs and logic will always turn out so that undefined behavior never happens; the compiler is allowed to not do what the program says for inputs that do lead to undefined behavior, so the offending access may simply be removed.
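For illustration, here is a minimal sketch of that obstacle. The deliberate null dereference below has no observable effect, so an optimizing compiler may simply delete the load, and the program can exit cleanly instead of faulting; whether it crashes depends entirely on the compiler and flags.
int main(void) {
    int *p = 0;
    int x = *p;   /* undefined behavior; the dead load may be optimized away */
    (void)x;      /* the value is never used, so nothing forces the access */
    return 0;
}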
Knowing details about how one specific OS, and possibly the CPU, handle memory and sometimes generate segmentation faults, can you write assembly instructions that will always cause a segfault? Certainly, if the segfault handling is of any value at all. Can you write a C program that will trigger a segfault in roughly the same manner? Most probably.

You can never go wrong with dereferencing a NULL pointer.
int main() {
    int *a = 0;
    *a = 0;
    return 0;
}
As the comments rightfully mention, this will not work 100% of the time and is platform specific. But it should work in most common platforms.

Assuming the platform supports segfaults at all, here are some possibilities:
Use raise. This will produce a noticeably different siginfo_t, but often that doesn't matter.
Dereference a non-volatile pointer. This is likely to be optimized out as "unreachable".
Dereference a volatile pointer. This should prevent the compiler from optimizing out the access.
Use asm volatile and dereference a pointer. Be sure to include a phony output that is later used, otherwise the compiler can still perform unwanted optimizations. This requires special code for each platform.
Build a kernel module that directly produces a SIGSEGV with appropriate siginfo_t. This assumes kernel modules even can be loaded (or even compiled).
If we're dereferencing a pointer, do we:
read through it, or
write through it, or
both?
and, what pointer should it be?
NULL is a popular choice, but it is sometimes possible for that page to be mapped. Security-conscious kernels generally disallow this, these days.
Use a valid, but misaligned, pointer. This is highly platform dependent, and probably produces a different siginfo_t. Maybe even a SIGBUS or something.
Use a misaligned pointer that straddles pages, of which one page is unmapped or protected, as below.
Call mmap with protection flags that forbid the access you're attempting. I don't remember whether the resulting siginfo_t is distinguishable or not. This can race, but only if some other thread in the program is hostile. It is also possible for the mmap to fail.
Call mmap followed immediately by munmap, then dereference the pointer. In single-threaded programs with signals blocked, this can produce an address that is guaranteed not to be mapped. However, it may race in multi-threaded programs, and it is possible for the initial mmap to fail.
Call mmap in a loop until it fails because you've exhausted the kernel's 64k limit on the number of mapped memory regions. Then parse /proc/self/maps and find an address that isn't covered by any of them. This can be racy if other threads are currently munmapping. If other threads are currently mmapping, they may fail in mysterious ways before your segfault kills the process.
In summary, there is no perfectly reliable/portable method, but there are plenty that are good enough.
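As a rough illustration of two of the options above (a volatile dereference and an mmap with forbidding protection), here is a sketch that assumes a POSIX-like system; nothing here is guaranteed by the C standard.
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* Option 1: a volatile null pointer. The volatile qualifier discourages
     * the compiler from optimizing the access away, but nothing is guaranteed. */
    volatile int *p = 0;
    (void)p;            /* uncomment the next line to attempt the fault here */
    /* *p = 1; */

    /* Option 2: map a page with PROT_NONE, then touch it.
     * Any access through q should raise SIGSEGV as an access error. */
    void *page = mmap(NULL, 4096, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    volatile char *q = page;
    *q = 1;             /* write to an unwritable page: expect a segfault */

    return 0;           /* normally never reached */
}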

Related

What happens when you do array[-1]? [duplicate]

How dangerous is accessing an array outside of its bounds (in C)? It can sometimes happen that I read from outside the array (I now understand I then access memory used by some other parts of my program or even beyond that) or I am trying to set a value to an index outside of the array. The program sometimes crashes, but sometimes just runs, only giving unexpected results.
Now what I would like to know is, how dangerous is this really? If it damages my program, it is not so bad. If on the other hand it breaks something outside my program, because I somehow managed to access some totally unrelated memory, then it is very bad, I imagine.
I read a lot of 'anything can happen', 'segmentation might be the least bad problem', 'your hard disk might turn pink and unicorns might be singing under your window', which is all nice, but what is really the danger?
My questions:
Can reading values from way outside the array damage anything apart from my program? I would imagine just looking at things does not change anything, or would it for instance change the 'last time opened' attribute of a file I happened to reach?
Can setting values way outside of the array damage anything apart from my program? From this Stack Overflow question I gather that it is possible to access any memory location, and that there is no safety guarantee.
I now run my small programs from within Xcode. Does that provide some extra protection around my program where it cannot reach outside its own memory? Can it harm Xcode?
Any recommendations on how to run my inherently buggy code safely?
I use OSX 10.7, Xcode 4.6.
As far as the ISO C standard (the official definition of the language) is concerned, accessing an array outside its bounds has "undefined behavior". The literal meaning of this is:
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
A non-normative note expands on this:
Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
So that's the theory. What's the reality?
In the "best" case, you'll access some piece of memory that's either owned by your currently running program (which might cause your program to misbehave), or that's not owned by your currently running program (which will probably cause your program to crash with something like a segmentation fault). Or you might attempt to write to memory that your program owns, but that's marked read-only; this will probably also cause your program to crash.
That's assuming your program is running under an operating system that attempts to protect concurrently running processes from each other. If your code is running on the "bare metal", say if it's part of an OS kernel or an embedded system, then there is no such protection; your misbehaving code is what was supposed to provide that protection. In that case, the possibilities for damage are considerably greater, including, in some cases, physical damage to the hardware (or to things or people nearby).
Even in a protected OS environment, the protections aren't always 100%. There are operating system bugs that permit unprivileged programs to obtain root (administrative) access, for example. Even with ordinary user privileges, a malfunctioning program can consume excessive resources (CPU, memory, disk), possibly bringing down the entire system. A lot of malware (viruses, etc.) exploits buffer overruns to gain unauthorized access to the system.
(One historical example: I've heard that on some old systems with core memory, repeatedly accessing a single memory location in a tight loop could literally cause that chunk of memory to melt. Other possibilities include destroying a CRT display, and moving the read/write head of a disk drive with the harmonic frequency of the drive cabinet, causing it to walk across a table and fall onto the floor.)
And there's always Skynet to worry about.
The bottom line is this: if you could write a program to do something bad deliberately, it's at least theoretically possible that a buggy program could do the same thing accidentally.
In practice, it's very unlikely that your buggy program running on a MacOS X system is going to do anything more serious than crash. But it's not possible to completely prevent buggy code from doing really bad things.
In general, Operating Systems of today (the popular ones anyway) run all applications in protected memory regions using a virtual memory manager. It turns out that it is not terribly EASY (per se) to simply read or write to a location that exists in REAL space outside the region(s) that have been assigned / allocated to your process.
Direct answers:
Reading will almost never directly damage another process; however, it can indirectly damage a process if you happen to read a KEY value used to encrypt, decrypt, or validate a program / process. Reading out of bounds can also have adverse / unexpected effects on your code if you are making decisions based on the data you are reading.
The only way you could really DAMAGE something by writing to a location accessible by a memory address is if that address is actually a hardware register (a location that is not for data storage but for controlling some piece of hardware) rather than a RAM location. In fact, you still won't normally damage anything unless you are writing to some one-time-programmable location that is not re-writable (or something of that nature).
Generally running from within the debugger runs the code in debug mode. Running in debug mode does TEND to (but not always) stop your code faster when you have done something considered out of practice or downright illegal.
Never use macros, use data structures that already have array index bounds checking built in, etc....
ADDITIONAL
I should add that the above information really only applies to systems using an operating system with memory protection windows. If you are writing code for an embedded system, or even a system utilizing an operating system (real-time or other) that does not have memory protection windows (or virtual addressed windows), you should exercise a lot more caution when reading and writing memory. In these cases, SAFE and SECURE coding practices should always be employed to avoid security issues.
Not checking bounds can lead to ugly side effects, including security holes. One of the ugly ones is arbitrary code execution. The classic example: if you have a fixed-size array and use strcpy() to put a user-supplied string there, the user can give you a string that overflows the buffer and overwrites other memory locations, including the code address the CPU should return to when your function finishes.
Which means your user can send you a string that will cause your program to essentially call exec("/bin/sh"), turning it into a shell and executing anything he wants on your system, including harvesting all your data and turning your machine into a botnet node.
See Smashing The Stack For Fun And Profit for details on how this can be done.
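As a hedged sketch of that classic mistake (the buffer size, function names, and input are made up for illustration), compare an unbounded strcpy with a bounded copy:
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy performs no bounds check, so a long user-supplied string
 * overwrites whatever follows buf on the stack, which can include the
 * saved return address. */
void vulnerable(const char *user_input) {
    char buf[16];
    strcpy(buf, user_input);              /* classic stack buffer overflow */
    printf("%s\n", buf);
}

/* Safer: bound the copy to the size of the destination. */
void safer(const char *user_input) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", user_input);
    printf("%s\n", buf);
}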
You write:
I read a lot of 'anything can happen', 'segmentation might be the least bad problem', 'your harddisk might turn pink and unicorns might be singing under your window', which is all nice, but what is really the danger?
Let's put it this way: load a gun. Point it out of the window without any particular aim and fire. What is the danger?
The issue is that you do not know. If your code overwrites something that crashes your program, you are fine, because it stops in a defined state. However, if it does not crash, the issues start to arise. Which resources are under your program's control, and what might it do to them? I know of at least one major issue that was caused by such an overflow. The issue was in a seemingly meaningless statistics function that messed up some unrelated conversion table for a production database. The result was some very expensive cleanup afterwards. Actually, it would have been much cheaper and easier to handle if this issue had formatted the hard disks... in other words: pink unicorns might be your least problem.
The idea that your operating system will protect you is optimistic. If possible try to avoid writing out of bounds.
If you do not run your program as root or any other privileged user, it won't harm the rest of your system, so generally this is a good idea.
By writing data to some random memory location you won't directly "damage" any other program running on your computer, as each process runs in its own memory space.
If you try to access any memory not allocated to your process the operating system will stop your program from executing with a segmentation fault.
So directly (without running as root and directly accessing files like /dev/mem) there is no danger that your program will interfere with any other program running on your operating system.
Nevertheless - and probably this is what you have heard about in terms of danger - by blindly writing random data to random memory locations by accident you sure can damage anything you are able to damage.
For example your program might want to delete a specific file given by a file name stored somewhere in your program. If by accident you just overwrite the location where the file name is stored you might delete a very different file instead.
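A contrived sketch of that filename scenario (the struct, sizes, and paths are invented for illustration): the out-of-bounds copy into the small buffer spills into the adjacent field, so the program ends up pointing at a different file. The overflow is still undefined behavior; the struct layout just makes the effect easy to observe on typical implementations.
#include <stdio.h>
#include <string.h>

struct job {
    char name[8];     /* small buffer that gets overrun */
    char path[32];    /* file the program intends to act on later */
};

int main(void) {
    struct job j;
    strcpy(j.path, "/tmp/scratch.txt");

    /* 8 bytes fill name; the rest ("/etc/passwd" plus the terminator)
     * lands in path, silently replacing the intended file name. */
    memcpy(j.name, "AAAAAAAA/etc/passwd", 20);

    printf("would delete: %s\n", j.path);   /* no longer scratch.txt */
    return 0;
}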
NSArrays in Objective-C are assigned a specific block of memory. Exceeding the bounds of the array means that you would be accessing memory that is not assigned to the array. This means:
This memory can have any value. There's no way of knowing if the data is valid based on your data type.
This memory may contain sensitive information such as private keys or other user credentials.
The memory address may be invalid or protected.
The memory can have a changing value because it's being accessed by another program or thread.
Other things use memory address space, such as memory-mapped ports.
Writing data to an unknown memory address can crash your program, overwrite OS memory space, and generally cause the sun to implode.
From the aspect of your program you always want to know when your code is exceeding the bounds of an array. This can lead to unknown values being returned, causing your application to crash or provide invalid data.
You may want to try using the memcheck tool in Valgrind when you test your code -- it won't catch individual array bounds violations within a stack frame, but it should catch many other sorts of memory problem, including ones that would cause subtle, wider problems outside the scope of a single function.
From the manual:
Memcheck is a memory error detector. It can detect the following problems that are common in C and C++ programs.
Accessing memory you shouldn't, e.g. overrunning and underrunning heap blocks, overrunning the top of the stack, and accessing memory after it has been freed.
Using undefined values, i.e. values that have not been initialised, or that have been derived from other undefined values.
Incorrect freeing of heap memory, such as double-freeing heap blocks, or mismatched use of malloc/new/new[] versus free/delete/delete[]
Overlapping src and dst pointers in memcpy and related functions.
Memory leaks.
ETA: Though, as Kaz's answer says, it's not a panacea, and doesn't always give the most helpful output, especially when you're using exciting access patterns.
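For example, here is a tiny heap overrun that Memcheck flags (the file name and values are arbitrary; the valgrind invocation is the standard one):
/* heap_overrun.c
 * Build and run under Memcheck, e.g.:
 *   gcc -g heap_overrun.c -o heap_overrun
 *   valgrind --tool=memcheck ./heap_overrun
 * Memcheck reports the write one element past the allocated block. */
#include <stdlib.h>

int main(void) {
    int *a = malloc(4 * sizeof *a);
    if (a == NULL)
        return 1;
    a[4] = 42;     /* one element past the end of the allocation */
    free(a);
    return 0;
}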
If you ever do systems level programming or embedded systems programming, very bad things can happen if you write to random memory locations. Older systems and many micro-controllers use memory mapped IO, so writing to a memory location that maps to a peripheral register can wreak havoc, especially if it is done asynchronously.
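To see why a stray write is so much worse with memory-mapped I/O, here is a hedged sketch in the usual embedded style; the register name and address are entirely hypothetical:
#include <stdint.h>

/* Hypothetical peripheral register address, for illustration only. */
#define UART_CTRL_REG ((volatile uint32_t *)0x40001000u)

void uart_disable(void) {
    *UART_CTRL_REG = 0;   /* deliberate write that reconfigures the peripheral */
}

/* A buggy out-of-bounds store that happens to compute the same address
 * has exactly the same effect as the deliberate write above, except
 * that nobody intended it. */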
An example is programming flash memory. Programming mode on the memory chips is enabled by writing a specific sequence of values to specific locations inside the address range of the chip. If another process were to write to any other location in the chip while that was going on, it would cause the programming cycle to fail.
In some cases the hardware will wrap addresses around (most significant bits/bytes of address are ignored) so writing to an address beyond the end of the physical address space will actually result in data being written right in the middle of things.
And finally, older CPUs like the MC68000 could lock up to the point that only a hardware reset could get them going again. I haven't worked on them for a couple of decades, but I believe that when the CPU encountered a bus error (non-existent memory) while trying to handle an exception, it would simply halt until the hardware reset was asserted.
My biggest recommendation is a blatant plug for a product, but I have no personal interest in it and I am not affiliated with them in any way - but based on a couple of decades of C programming and embedded systems where reliability was critical, Gimpel's PC Lint will not only detect those sorts of errors, it will make you a better C/C++ programmer by constantly harping on you about bad habits.
I'd also recommend reading the MISRA C coding standard, if you can snag a copy from someone. I haven't seen any recent ones but in ye olde days they gave a good explanation of why you should/shouldn't do the things they cover.
Dunno about you, but about the 2nd or 3rd time I get a coredump or hangup from any application, my opinion of whatever company produced it goes down by half. The 4th or 5th time and whatever the package is becomes shelfware and I drive a wooden stake through the center of the package/disc it came in just to make sure it never comes back to haunt me.
I'm working with a compiler for a DSP chip which deliberately generates code that accesses one past the end of an array out of C code which does not!
This is because the loops are structured so that the end of an iteration prefetches some data for the next iteration. So the datum prefetched at the end of the last iteration is never actually used.
Writing C code like that invokes undefined behavior, but that is only a formality from a standards document which concerns itself with maximal portability.
More often than not, a program which accesses out of bounds is not cleverly optimized. It is simply buggy. The code fetches some garbage value and, unlike the optimized loops of the aforementioned compiler, then uses that value in subsequent computations, thereby corrupting them.
It is worth catching bugs like that, and so it is worth making the behavior undefined for even just that reason alone: so that the run-time can produce a diagnostic message like "array overrun in line 42 of main.c".
On systems with virtual memory, an array could happen to be allocated such that the address which follows is in an unmapped area of virtual memory. The access will then bomb the program.
As an aside, note that in C we are permitted to create a pointer which is one past the end of an array. And this pointer has to compare greater than any pointer to the interior of an array.
This means that a C implementation cannot place an array right at the end of memory, where the one plus address would wrap around and look smaller than other addresses in the array.
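A small sketch of that rule: forming and comparing the one-past-the-end pointer is well defined, while dereferencing it is not.
#include <stdio.h>

int main(void) {
    int arr[4] = {1, 2, 3, 4};
    int *end = arr + 4;        /* one past the end: a valid pointer value */

    /* Using it as a loop bound relies on end comparing greater than
     * every pointer into the array; dereferencing *end would be UB. */
    for (int *p = arr; p < end; p++)
        printf("%d\n", *p);
    return 0;
}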
Nevertheless, access to uninitialized or out-of-bounds values is sometimes a valid optimization technique, even if not maximally portable. This is, for instance, why the Valgrind tool does not report accesses to uninitialized data when those accesses happen, but only when the value is later used in some way that could affect the outcome of the program. You get a diagnostic like "conditional branch in xxx:nnn depends on uninitialized value" and it can sometimes be hard to track down where it originates. If all such accesses were trapped immediately, there would be a lot of false positives arising from compiler-optimized code as well as correctly hand-optimized code.
Speaking of which, I was working with some codec from a vendor which was giving off these errors when ported to Linux and run under Valgrind. But the vendor convinced me that only several bits of the value being used actually came from uninitialized memory, and those bits were carefully avoided by the logic. Only the good bits of the value were being used, and Valgrind doesn't have the ability to track down to the individual bit. The uninitialized material came from reading a word past the end of a bit stream of encoded data, but the code knows how many bits are in the stream and will not use more bits than there actually are. Since the access beyond the end of the bit stream array does not cause any harm on the DSP architecture (there is no virtual memory after the array, no memory-mapped ports, and the address does not wrap), it is a valid optimization technique.
"Undefined behavior" does not really mean much, because according to ISO C, simply including a header which is not defined in the C standard, or calling a function which is not defined in the program itself or the C standard, are examples of undefined behavior. Undefined behavior doesn't mean "not defined by anyone on the planet" just "not defined by the ISO C standard". But of course, sometimes undefined behavior really is absolutely not defined by anyone.
Besides your own program, I don't think you will break anything. In the worst case you will try to read or write from a memory address that corresponds to a page the kernel didn't assign to your process, generating the proper exception and being killed (I mean, your process will be).
Arrays with two or more dimensions pose a consideration beyond those mentioned in other answers. Consider the following functions:
char arr1[2][8];
char arr2[4];

int test1(int n)
{
    arr1[1][0] = 1;
    for (int i=0; i<n; i++) arr1[0][i] = arr2[i];
    return arr1[1][0];
}

int test2(int ofs, int n)
{
    arr1[1][0] = 1;
    for (int i=0; i<n; i++) *(arr1[0]+i) = arr2[i];
    return arr1[1][0];
}
The way gcc processes the first function will not allow for the possibility that an attempt to write arr1[0][i] might affect the value of arr1[1][0], and the generated code is incapable of returning anything other than a hardcoded value of 1. Although the Standard defines the meaning of array[index] as precisely equivalent to (*((array)+(index))), gcc seems to interpret the notion of array bounds and pointer decay differently in cases which involve using the [] operator on values of array type, versus those which use explicit pointer arithmetic.
I just want to add some practical examples to this question. Imagine the following code:
#include <stdio.h>

int main(void) {
    int n[5];
    n[5] = 1;
    printf("answer %d\n", n[5]);
    return (0);
}
This has undefined behaviour. If you enable, for example, clang optimisations (-Ofast), it can produce something like:
answer 748418584
(If you compile without optimisations, it will probably print the expected answer 1.)
This is because in the first case the assignment to 1 is never actually emitted in the final code (you can look at the asm on godbolt as well).
(However, it must be noted that by that logic main should not even call printf, so the best advice is not to depend on the optimiser to "solve" your UB - but rather be aware that sometimes it may work this way.)
The takeaway here is that modern optimising C compilers assume undefined behaviour (UB) never occurs, which means the above code behaves similarly to (but is not the same as) the following:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n[5];
    if (0)
        n[5] = 1;
    printf("answer %d\n", (exit(-1), n[5]));
    return (0);
}
which, by contrast, is perfectly defined.
That's because the first conditional statement never reaches its true branch (0 is always false).
And in the second argument to printf we have a sequence point, after which we call exit and the program terminates before invoking the UB in the second operand of the comma operator (so it's well defined).
So the second takeaway is that UB is not UB as long as it's never actually evaluated.
Additionally, I don't see it mentioned here, but there is a fairly modern Undefined Behaviour sanitiser (at least in clang) which, with the option -fsanitize=undefined, gives the following output on the first example (but not the second):
/app/example.c:5:5: runtime error: index 5 out of bounds for type 'int[5]'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /app/example.c:5:5 in
/app/example.c:7:27: runtime error: index 5 out of bounds for type 'int[5]'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /app/example.c:7:27 in
Here are all the samples in godbolt:
https://godbolt.org/z/eY9ja4fdh (first example and no flags)
https://godbolt.org/z/cGcY7Ta9M (first example and -Ofast clang)
https://godbolt.org/z/cGcY7Ta9M (second example and UB sanitiser on)
https://godbolt.org/z/vE531EKo4 (first example and UB sanitiser on)

C- Why does my multidimensional array only allow 3 user inputs before terminating [duplicate]

How am I writing on some spot of memory that I didnt allocated? [duplicate]

How dangerous is accessing an array outside of its bounds (in C)? It can sometimes happen that I read from outside the array (I now understand I then access memory used by some other parts of my program or even beyond that) or I am trying to set a value to an index outside of the array. The program sometimes crashes, but sometimes just runs, only giving unexpected results.
Now what I would like to know is, how dangerous is this really? If it damages my program, it is not so bad. If on the other hand it breaks something outside my program, because I somehow managed to access some totally unrelated memory, then it is very bad, I imagine.
I read a lot of 'anything can happen', 'segmentation might be the least bad problem', 'your hard disk might turn pink and unicorns might be singing under your window', which is all nice, but what is really the danger?
My questions:
Can reading values from way outside the array damage anything
apart from my program? I would imagine just looking at things does
not change anything, or would it for instance change the 'last time
opened' attribute of a file I happened to reach?
Can setting values way out outside of the array damage anything apart from my
program? From this
Stack Overflow question I gather that it is possible to access
any memory location, that there is no safety guarantee.
I now run my small programs from within XCode. Does that
provide some extra protection around my program where it cannot
reach outside its own memory? Can it harm XCode?
Any recommendations on how to run my inherently buggy code safely?
I use OSX 10.7, Xcode 4.6.
As far as the ISO C standard (the official definition of the language) is concerned, accessing an array outside its bounds has "undefined behavior". The literal meaning of this is:
behavior, upon use of a nonportable or erroneous program construct or
of erroneous data, for which this International Standard imposes no
requirements
A non-normative note expands on this:
Possible undefined behavior ranges from ignoring the situation
completely with unpredictable results, to behaving during translation
or program execution in a documented manner characteristic of the
environment (with or without the issuance of a diagnostic message), to
terminating a translation or execution (with the issuance of a
diagnostic message).
So that's the theory. What's the reality?
In the "best" case, you'll access some piece of memory that's either owned by your currently running program (which might cause your program to misbehave), or that's not owned by your currently running program (which will probably cause your program to crash with something like a segmentation fault). Or you might attempt to write to memory that your program owns, but that's marked read-only; this will probably also cause your program to crash.
That's assuming your program is running under an operating system that attempts to protect concurrently running processes from each other. If your code is running on the "bare metal", say if it's part of an OS kernel or an embedded system, then there is no such protection; your misbehaving code is what was supposed to provide that protection. In that case, the possibilities for damage are considerably greater, including, in some cases, physical damage to the hardware (or to things or people nearby).
Even in a protected OS environment, the protections aren't always 100%. There are operating system bugs that permit unprivileged programs to obtain root (administrative) access, for example. Even with ordinary user privileges, a malfunctioning program can consume excessive resources (CPU, memory, disk), possibly bringing down the entire system. A lot of malware (viruses, etc.) exploits buffer overruns to gain unauthorized access to the system.
(One historical example: I've heard that on some old systems with core memory, repeatedly accessing a single memory location in a tight loop could literally cause that chunk of memory to melt. Other possibilities include destroying a CRT display, and moving the read/write head of a disk drive with the harmonic frequency of the drive cabinet, causing it to walk across a table and fall onto the floor.)
And there's always Skynet to worry about.
The bottom line is this: if you could write a program to do something bad deliberately, it's at least theoretically possible that a buggy program could do the same thing accidentally.
In practice, it's very unlikely that your buggy program running on a MacOS X system is going to do anything more serious than crash. But it's not possible to completely prevent buggy code from doing really bad things.
In general, Operating Systems of today (the popular ones anyway) run all applications in protected memory regions using a virtual memory manager. It turns out that it is not terribly EASY (per se) to simply read or write to a location that exists in REAL space outside the region(s) that have been assigned / allocated to your process.
Direct answers:
Reading will almost never directly damage another process, however it can indirectly damage a process if you happen to read a KEY value used to encrypt, decrypt, or validate a program / process. Reading out of bounds can have somewhat adverse / unexpected affects on your code if you are making decisions based on the data you are reading
The only way your could really DAMAGE something by writing to a loaction accessible by a memory address is if that memory address that you are writing to is actually a hardware register (a location that actually is not for data storage but for controlling some piece of hardware) not a RAM location. In all fact, you still wont normally damage something unless you are writing some one time programmable location that is not re-writable (or something of that nature).
Generally running from within the debugger runs the code in debug mode. Running in debug mode does TEND to (but not always) stop your code faster when you have done something considered out of practice or downright illegal.
Never use macros, use data structures that already have array index bounds checking built in, etc....
ADDITIONAL
I should add that the above information is really only for systems using an operating system with memory protection windows. If writing code for an embedded system or even a system utilizing an operating system (real-time or other) that does not have memory protection windows (or virtual addressed windows) that one should practice a lot more caution in reading and writing to memory. Also in these cases SAFE and SECURE coding practices should always be employed to avoid security issues.
Not checking bounds can lead to to ugly side effects, including security holes. One of the ugly ones is arbitrary code execution. In classical example: if you have an fixed size array, and use strcpy() to put a user-supplied string there, the user can give you a string that overflows the buffer and overwrites other memory locations, including code address where CPU should return when your function finishes.
Which means your user can send you a string that will cause your program to essentially call exec("/bin/sh"), which will turn it into shell, executing anything he wants on your system, including harvesting all your data and turning your machine into botnet node.
See Smashing The Stack For Fun And Profit for details on how this can be done.
You write:
I read a lot of 'anything can happen', 'segmentation might be the
least bad problem', 'your harddisk might turn pink and unicorns might
be singing under your window', which is all nice, but what is really
the danger?
Lets put it that way: load a gun. Point it outside the window without any particular aim and fire. What is the danger?
The issue is that you do not know. If your code overwrites something that crashes your program you are fine because it will stop it into a defined state. However if it does not crash then the issues start to arise. Which resources are under control of your program and what might it do to them? I know at least one major issue that was caused by such an overflow. The issue was in a seemingly meaningless statistics function that messed up some unrelated conversion table for a production database. The result was some very expensive cleanup afterwards. Actually it would have been much cheaper and easier to handle if this issue would have formatted the hard disks ... with other words: pink unicorns might be your least problem.
The idea that your operating system will protect you is optimistic. If possible try to avoid writing out of bounds.
If you do not run your program as root or any other privileged user, it cannot harm the rest of your system, so doing that is generally a good idea.
By writing data to some random memory location you won't directly "damage" any other program running on your computer, as each process runs in its own memory space.
If you try to access any memory not allocated to your process the operating system will stop your program from executing with a segmentation fault.
So directly (without running as root and directly accessing files like /dev/mem) there is no danger that your program will interfere with any other program running on your operating system.
Nevertheless - and this is probably what you have heard about in terms of danger - by blindly writing random data to random memory locations by accident, you certainly can damage anything your own process is able to damage.
For example your program might want to delete a specific file given by a file name stored somewhere in your program. If by accident you just overwrite the location where the file name is stored you might delete a very different file instead.
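A minimal sketch of how that could happen is below. The struct and names are made up for illustration, whether the stray write actually lands on the file name depends entirely on how the compiler lays out memory, and the program only prints the name it would delete instead of calling remove(), so the demonstration is harmless:
/* Deliberate out-of-bounds write: copying 21 bytes into an 8-byte field
 * spills into the adjacent 'filename' member, silently changing which
 * file the program thinks it should delete. This is undefined behaviour,
 * shown here only to illustrate the danger described above. */
#include <stdio.h>
#include <string.h>

struct state {
    char buf[8];
    char filename[32];   /* laid out right after buf inside the struct */
};

int main(void)
{
    struct state s;
    strcpy(s.filename, "scratch.tmp");
    memcpy(s.buf, "AAAAAAAAimportant.db", 21);   /* overflows s.buf */
    printf("about to delete: %s\n", s.filename); /* likely "important.db" */
    return 0;
}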
NSArrays in Objective-C are assigned a specific block of memory. Exceeding the bounds of the array means that you would be accessing memory that is not assigned to the array. This means:
This memory can have any value. There's no way of knowing if the data is valid based on your data type.
This memory may contain sensitive information such as private keys or other user credentials.
The memory address may be invalid or protected.
The memory can have a changing value because it's being accessed by another program or thread.
Other things use memory address space, such as memory-mapped ports.
Writing data to an unknown memory address can crash your program, overwrite OS memory space, and generally cause the sun to implode.
From the perspective of your program, you always want to know when your code exceeds the bounds of an array. Out-of-bounds access can return unknown values, causing your application to crash or produce invalid data.
You may want to try using the memcheck tool in Valgrind when you test your code -- it won't catch individual array bounds violations within a stack frame, but it should catch many other sorts of memory problem, including ones that would cause subtle, wider problems outside the scope of a single function.
From the manual:
Memcheck is a memory error detector. It can detect the following problems that are common in C and C++ programs.
Accessing memory you shouldn't, e.g. overrunning and underrunning heap blocks, overrunning the top of the stack, and accessing memory after it has been freed.
Using undefined values, i.e. values that have not been initialised, or that have been derived from other undefined values.
Incorrect freeing of heap memory, such as double-freeing heap blocks, or mismatched use of malloc/new/new[] versus free/delete/delete[]
Overlapping src and dst pointers in memcpy and related functions.
Memory leaks.
ETA: Though, as Kaz's answer says, it's not a panacea, and doesn't always give the most helpful output, especially when you're using exciting access patterns.
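As a concrete illustration, here is a tiny program with a heap overrun that memcheck catches; the file name overrun.c is just an assumption, and the build/run commands are included as comments:
/* overrun.c - a tiny heap overrun for Valgrind's memcheck to report.
 * Build and run (assumes gcc and valgrind are installed):
 *     gcc -g -O0 overrun.c -o overrun
 *     valgrind --tool=memcheck ./overrun
 * memcheck should report the store below as an invalid write
 * one element past the end of the allocated block. */
#include <stdlib.h>

int main(void)
{
    int *p = malloc(4 * sizeof *p);   /* room for 4 ints */
    if (p == NULL)
        return 1;
    p[4] = 42;                        /* one past the end of the block */
    free(p);
    return 0;
}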
If you ever do systems level programming or embedded systems programming, very bad things can happen if you write to random memory locations. Older systems and many micro-controllers use memory mapped IO, so writing to a memory location that maps to a peripheral register can wreak havoc, especially if it is done asynchronously.
An example is programming flash memory. Programming mode on the memory chips is enabled by writing a specific sequence of values to specific locations inside the address range of the chip. If another process were to write to any other location in the chip while that was going on, it would cause the programming cycle to fail.
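As a hedged sketch of the kind of sequence meant here, the following is loosely modeled on the common JEDEC/AMD-style NOR flash byte-program command cycle; the base address is a made-up memory-mapped location, and the exact offsets, bus width, and status polling differ per part, so treat it as an illustration rather than a driver:
#include <stdint.h>

#define FLASH_BASE ((volatile uint8_t *)0x08000000u)   /* hypothetical mapping */

void flash_program_byte(uint32_t offset, uint8_t value)
{
    FLASH_BASE[0x555] = 0xAA;     /* unlock cycle 1 */
    FLASH_BASE[0x2AA] = 0x55;     /* unlock cycle 2 */
    FLASH_BASE[0x555] = 0xA0;     /* program command */
    FLASH_BASE[offset] = value;   /* the data write itself */
    /* ...then poll the device's status bits until programming completes.
       A stray write anywhere into this address window while the sequence
       is in progress can abort or corrupt the programming cycle, which is
       exactly the hazard described above. */
}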
In some cases the hardware will wrap addresses around (most significant bits/bytes of address are ignored) so writing to an address beyond the end of the physical address space will actually result in data being written right in the middle of things.
And finally, older CPUs like the MC68000 could lock up to the point that only a hardware reset would get them going again. I haven't worked on them for a couple of decades, but I believe that if one encountered a bus error (non-existent memory) while trying to handle an exception, it would simply halt until a hardware reset was asserted.
My biggest recommendation is a blatant plug for a product, but I have no personal interest in it and I am not affiliated with them in any way: based on a couple of decades of C programming and embedded systems where reliability was critical, Gimpel's PC Lint will not only detect that sort of error, it will make a better C/C++ programmer out of you by constantly harping on you about bad habits.
I'd also recommend reading the MISRA C coding standard, if you can snag a copy from someone. I haven't seen any recent ones but in ye olde days they gave a good explanation of why you should/shouldn't do the things they cover.
Dunno about you, but about the 2nd or 3rd time I get a coredump or hangup from any application, my opinion of whatever company produced it goes down by half. The 4th or 5th time and whatever the package is becomes shelfware and I drive a wooden stake through the center of the package/disc it came in just to make sure it never comes back to haunt me.
I'm working with a compiler for a DSP chip which deliberately generates code that accesses one past the end of an array out of C code which does not!
This is because the loops are structured so that the end of an iteration prefetches some data for the next iteration. So the datum prefetched at the end of the last iteration is never actually used.
Writing C code like that invokes undefined behavior, but that is only a formality from a standards document which concerns itself with maximal portability.
More often than not, a program which accesses out of bounds is not cleverly optimized; it is simply buggy. The code fetches some garbage value and, unlike the optimized loops of the aforementioned compiler, then uses that value in subsequent computations, thereby corrupting them.
It is worth catching bugs like that, and so it is worth making the behavior undefined for even just that reason alone: so that the run-time can produce a diagnostic message like "array overrun in line 42 of main.c".
On systems with virtual memory, an array could happen to be allocated such that the address which follows is in an unmapped area of virtual memory. The access will then bomb the program.
As an aside, note that in C we are permitted to create a pointer which is one past the end of an array. And this pointer has to compare greater than any pointer to the interior of an array.
This means that a C implementation cannot place an array right at the end of memory, where the one plus address would wrap around and look smaller than other addresses in the array.
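As a quick, minimal illustration of that aside (nothing here is specific to the DSP discussion):
/* Forming and comparing a one-past-the-end pointer is valid;
 * dereferencing it is not. */
#include <stdio.h>

int main(void)
{
    int a[4] = {1, 2, 3, 4};
    int *end = a + 4;                  /* valid: one past the last element */

    for (int *p = a; p < end; p++)     /* valid: comparison against 'end' */
        printf("%d\n", *p);

    /* *end = 0;   <- this would be an out-of-bounds access (undefined) */
    return 0;
}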
Nevertheless, access to uninitialized or out-of-bounds values is sometimes a valid optimization technique, even if not maximally portable. This is, for instance, why the Valgrind tool does not report accesses to uninitialized data at the moment they happen, but only when the value is later used in some way that could affect the outcome of the program. You get a diagnostic like "conditional branch in xxx:nnn depends on uninitialized value" and it can sometimes be hard to track down where it originates. If all such accesses were trapped immediately, there would be a lot of false positives arising from compiler-optimized code as well as from correctly hand-optimized code.
Speaking of which, I was working with a codec from a vendor which gave off these errors when ported to Linux and run under Valgrind. But the vendor convinced me that only a few bits of the value being used actually came from uninitialized memory, and those bits were carefully avoided by the logic. Only the good bits of the value were being used, and Valgrind cannot track things down to the individual bit. The uninitialized material came from reading a word past the end of a bit stream of encoded data, but the code knows how many bits are in the stream and will not use more bits than are actually there. Since the access beyond the end of the bit-stream array does not cause any harm on the DSP architecture (there is no virtual memory after the array, no memory-mapped ports, and the address does not wrap), it is a valid optimization technique.
"Undefined behavior" does not really mean much, because according to ISO C, simply including a header which is not defined in the C standard, or calling a function which is not defined in the program itself or the C standard, are examples of undefined behavior. Undefined behavior doesn't mean "not defined by anyone on the planet" just "not defined by the ISO C standard". But of course, sometimes undefined behavior really is absolutely not defined by anyone.
Besides your own program, I don't think you will break anything; in the worst case you will try to read or write from a memory address that corresponds to a page the kernel didn't assign to your process, generating the proper exception and getting killed (your process, I mean).
Arrays with two or more dimensions pose a consideration beyond those mentioned in other answers. Consider the following functions:
char arr1[2][8];
char arr2[4];

int test1(int n)
{
    arr1[1][0] = 1;
    for (int i=0; i<n; i++) arr1[0][i] = arr2[i];
    return arr1[1][0];
}

int test2(int ofs, int n)
{
    arr1[1][0] = 1;
    for (int i=0; i<n; i++) *(arr1[0]+i) = arr2[i];
    return arr1[1][0];
}
The way gcc processes the first function does not allow for the possibility that an attempt to write arr1[0][i] might affect the value of arr1[1][0], and the generated code is incapable of returning anything other than the hardcoded value 1. Although the Standard defines the meaning of array[index] as precisely equivalent to (*((array)+(index))), gcc seems to interpret the notion of array bounds and pointer decay differently in cases which use the [] operator on values of array type versus those which use explicit pointer arithmetic.
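A hypothetical driver to make that observable is sketched below. Calling with n = 9 deliberately steps one element past arr1[0] (and past arr2), i.e. it performs exactly the out-of-bounds access under discussion; the claim above is that, with optimisation, gcc may still return the constant 1 from test1 while test2 reflects the write, but the actual outcome depends on the compiler version, so treat the results as something to inspect rather than as guaranteed:
/* Hypothetical driver for the two functions above. */
#include <stdio.h>

int test1(int n);
int test2(int ofs, int n);

int main(void)
{
    printf("test1(9)    -> %d\n", test1(9));
    printf("test2(0, 9) -> %d\n", test2(0, 9));
    return 0;
}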
I just want to add some practical examples to this question - imagine the following code:
#include <stdio.h>

int main(void) {
    int n[5];
    n[5] = 1;
    printf("answer %d\n", n[5]);
    return (0);
}
This has undefined behaviour. If you enable, for example, clang optimisations (-Ofast), it results in something like:
answer 748418584
(If you compile without optimisations it will probably output the expected answer 1.)
This is because in the optimised case the assignment of 1 is never actually emitted in the final code (you can check the generated asm on godbolt as well).
(Note, however, that by the same logic main should not even call printf, so the best advice is not to depend on the optimiser to deal with your UB - but rather to be aware that it sometimes works out this way.)
The takeaway here is that modern optimising C compilers assume undefined behaviour (UB) never occurs, which means the above code is treated similarly to (but not the same as):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n[5];
    if (0)
        n[5] = 1;
    printf("answer %d\n", (exit(-1), n[5]));
    return (0);
}
Which, on the contrary, is perfectly defined.
That's because the first conditional statement never takes its true branch (0 is always false).
And in the second argument of printf we have a sequence point, after which we call exit, so the program terminates before evaluating the UB in the right-hand operand of the comma operator (hence it is well defined).
So the second takeaway is that UB is not UB as long as it is never actually evaluated.
Additionally, though I haven't seen it mentioned here, there is a fairly modern Undefined Behaviour sanitiser (at least on clang) which (with the option -fsanitize=undefined) gives the following output on the first example (but not the second):
/app/example.c:5:5: runtime error: index 5 out of bounds for type 'int[5]'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /app/example.c:5:5 in
/app/example.c:7:27: runtime error: index 5 out of bounds for type 'int[5]'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /app/example.c:7:27 in
Here are all the samples on godbolt:
https://godbolt.org/z/eY9ja4fdh (first example and no flags)
https://godbolt.org/z/cGcY7Ta9M (first example and -Ofast clang)
https://godbolt.org/z/cGcY7Ta9M (second example and UB sanitiser on)
https://godbolt.org/z/vE531EKo4 (first example and UB sanitiser on)

what does 0xdeadbeef in address 0x0 mean? [duplicate]

I just saw the following line in some code:
#define ERR_FATAL( str, a, b, c ) {while(1) {*(unsigned int *)0 = 0xdeadbeef;} }
I know that 0xdeadbeef indicates an error, but what does putting this value at address 0 mean?
What does address 0 represent?
The address 0x0 is what the compiler treats as a null pointer: an implementation-defined invalid memory address. On some systems it is quite literally the first addressable location in memory, but on others it is just a placeholder in the code for some generic invalid address. From the point of view of the C code we don't know what address that will be, only that it is invalid to access it.
Essentially, this snippet writes the value 0xdeadbeef to an "illegal" memory address. The value itself is hex that spells out "dead beef", indicating that the program is, well, dead beef (i.e. in trouble); if you aren't a native English speaker I can see how that might not be obvious. The idea is that the write will trigger a segmentation fault or similar, immediately terminating the program with no cleanup or other operations performed in the interim (the macro name ERR_FATAL hints at that).
Most operating systems don't give programs direct access to all the memory in the system, and the code presumes the operating system won't let you access memory at address 0x0. Given that you tagged the question linux, that is the behaviour you will see (Linux will not allow an access to that address). Note that on something like an embedded system, where there is no such guarantee, this could cause a bunch of problems, since you might be overwriting something important.
Note that there are better ways to report problems than this, ways that don't depend on a particular kind of undefined behaviour having a particular side effect. Using something like assert is likely a better choice, and if you want to terminate the program, abort() is better still: it is in the standard library and does exactly what you want. See the answer from ComicSansMS for more about why this is preferable.
Putting this value (or any value for that matter) in address 0 is supposed to terminate the program immediately.
A fatal error indicates an error situation so severe that the program cannot safely continue execution without risking further data corruption.
In particular, no pending cleanup operations are to be executed, as they could have an undesired effect. Think of flushing a buffer of corrupted data to the filesystem. Instead, the program is to terminate immediately and allow further examination of the situation via a core dump or an attached debugger.
Note that C already provides a standard library function for this purpose: abort(). Calling abort would be preferable for this purpose for a number of reasons:
Writing to address 0 is not guaranteed to terminate the program. This unnecessarily restricts portability of the code and might have devastating consequences in case the code actually gets recompiled and executed on a platform where writing to address 0 results in an actual memory store operation.
Calling abort() is more understandable to someone reading the code. Your question proves that many developers will not understand what the code in question is supposed to do. While the name of the macro and the value deadbeef give some hints, it is still unnecessarily obscure. Also, note that the name of the macro will not be visible when looking at the disassembled code in a debugger.
Calling abort() signals intent more clearly. This is not only true for the code itself, but also for the observable behavior of the binary. Assuming the operation executes as intended on a Unix machine, you would get a SIGSEGV as a result, which is a signal indicating memory corruption. abort() on the other hand causes SIGABRT which indicates an abnormal program termination. Unless the reason for the fatal error was indeed a memory corruption, throwing SIGSEGV in this case obscures why the program is failing and might be misleading to a developer trying to hunt down the error. This is particularly delicate when you think that the signal might not be caught by a debugger, but by an automated signal handler, which might then invoke unfitting code for error handling.
Therefore, if the sole intent of the macro is to signal a fatal error (as the name suggests), calling abort would be a better implementation.
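For what it's worth, a hedged sketch of an abort()-based alternative might look like the following; the parameter names mirror the original macro, and the way the extra parameters are reported here is just an assumption for illustration (the original macro ignores them entirely):
/* Report the error on stderr, then terminate abnormally via abort(),
 * which raises SIGABRT and performs no cleanup. */
#include <stdio.h>
#include <stdlib.h>

#define ERR_FATAL(str, a, b, c)                                   \
    do {                                                          \
        fprintf(stderr, "FATAL: %s (%ld, %ld, %ld)\n", (str),     \
                (long)(a), (long)(b), (long)(c));                 \
        abort();                                                  \
    } while (0)

int main(void)
{
    ERR_FATAL("configuration table corrupted", 1, 2, 3);
}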
Dereferencing an invalid pointer (which is what the above code is trying to do) results in Undefined Behavior. As such the system could do anything - it could crash, it could do nothing, or demons can fly out of your nose. This page is an excellent page on what every C programmer should know about Undefined Behavior.
Thus the above code is the wrong way to cause a crash. As a commenter pointed out, it would be better to call abort().

NULL automatically appended in C string arrays? [duplicate]

How dangerous is accessing an array outside of its bounds (in C)?
It can sometimes happen that I read from outside the array (I now understand that I then access memory used by some other part of my program, or even beyond that), or that I try to set a value at an index outside the array. The program sometimes crashes, but sometimes just runs, only giving unexpected results.
Now what I would like to know is, how dangerous is this really? If it damages my program, it is not so bad. If on the other hand it breaks something outside my program, because I somehow managed to access some totally unrelated memory, then it is very bad, I imagine.
I read a lot of 'anything can happen', 'segmentation might be the least bad problem', 'your hard disk might turn pink and unicorns might be singing under your window', which is all nice, but what is really the danger?
My questions:
1. Can reading values from way outside the array damage anything apart from my program? I would imagine just looking at things does not change anything, or would it, for instance, change the 'last time opened' attribute of a file I happened to reach?
2. Can setting values way outside the array damage anything apart from my program? From this Stack Overflow question I gather that it is possible to access any memory location, and that there is no safety guarantee.
3. I now run my small programs from within Xcode. Does that provide some extra protection around my program so that it cannot reach outside its own memory? Can it harm Xcode?
4. Any recommendations on how to run my inherently buggy code safely?
I use OSX 10.7, Xcode 4.6.
As far as the ISO C standard (the official definition of the language) is concerned, accessing an array outside its bounds has "undefined behavior". The literal meaning of this is:
behavior, upon use of a nonportable or erroneous program construct or
of erroneous data, for which this International Standard imposes no
requirements
A non-normative note expands on this:
Possible undefined behavior ranges from ignoring the situation
completely with unpredictable results, to behaving during translation
or program execution in a documented manner characteristic of the
environment (with or without the issuance of a diagnostic message), to
terminating a translation or execution (with the issuance of a
diagnostic message).
So that's the theory. What's the reality?
In the "best" case, you'll access some piece of memory that's either owned by your currently running program (which might cause your program to misbehave), or that's not owned by your currently running program (which will probably cause your program to crash with something like a segmentation fault). Or you might attempt to write to memory that your program owns, but that's marked read-only; this will probably also cause your program to crash.
That's assuming your program is running under an operating system that attempts to protect concurrently running processes from each other. If your code is running on the "bare metal", say if it's part of an OS kernel or an embedded system, then there is no such protection; your misbehaving code is what was supposed to provide that protection. In that case, the possibilities for damage are considerably greater, including, in some cases, physical damage to the hardware (or to things or people nearby).
Even in a protected OS environment, the protections aren't always 100%. There are operating system bugs that permit unprivileged programs to obtain root (administrative) access, for example. Even with ordinary user privileges, a malfunctioning program can consume excessive resources (CPU, memory, disk), possibly bringing down the entire system. A lot of malware (viruses, etc.) exploits buffer overruns to gain unauthorized access to the system.
(One historical example: I've heard that on some old systems with core memory, repeatedly accessing a single memory location in a tight loop could literally cause that chunk of memory to melt. Other possibilities include destroying a CRT display, and moving the read/write head of a disk drive with the harmonic frequency of the drive cabinet, causing it to walk across a table and fall onto the floor.)
And there's always Skynet to worry about.
The bottom line is this: if you could write a program to do something bad deliberately, it's at least theoretically possible that a buggy program could do the same thing accidentally.
In practice, it's very unlikely that your buggy program running on a MacOS X system is going to do anything more serious than crash. But it's not possible to completely prevent buggy code from doing really bad things.

Resources