Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Could anyone clarify this?
char str[1];
strcpy(str, "HHHHHHHHHHHH");
Here I declared a char array with size one, but the program doesn't crash until I copy in more than 12 characters. Why does it survive at all, when the array only has size one?
This code has undefined behaviour, since it writes more than one element into str. It could do anything. It is your responsibility to ensure that you only write into memory that you own.
This is undefined behaviour. In practice, you overwrite the memory contents of something else. If str is a local variable, that array lives on the stack. You likely have a CPU architecture where the stack grows down, so you start overwriting things like other local variables, saved register values and return addresses of function calls.
You probably first overwrote something with no immediate effect, or you did not notice the effect: perhaps a local variable which wasn't initialized yet, or a local variable or saved register value which was not actually used after you overwrote it.
Then, when you increased the length of the overflow, you probably corrupted the function's return address, and the crash actually happened when you returned from the function. If the overwritten memory held any pointers, the crash could also be caused by accessing the value pointed to by a corrupted pointer.
Finally, if you increased the overflow size enough, the string copy would eventually write directly outside the allowed area and cause an immediate crash (assuming a CPU and OS with such memory protection, not some ancient or embedded system). But that was probably not the reason here, as you wrote only 14 bytes before the crash.
Note that all of the above is somewhat beside the point from the C language's perspective: it is undefined behaviour, which often changes if you change anything in the program, the compiler options or the input data. This can make memory corruption bugs hard to find, as adding debugging code often makes the problem "disappear" (that is, changes or hides the symptoms).
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
int a=10;
char *b ;
b=(char*)&a;
strcpy(b,"xxxxx");
printf("%s",b);
The code compiles, but the program exits with an error. Why doesn't this work? What is actually happening?
It is likely that, in your C implementation, int is four bytes. The C standard defines char to be one byte. So b = (char *) &a; sets b to point to the first byte of the four that make up a. We do not know what lies after those four bytes.
strcpy(b, "xxxxx"); asks strcpy to copy six bytes (the five “x” characters and a terminating null character) to the memory pointed to by b. In the simplest case, this will overwrite two bytes beyond a. This can disrupt your program in a variety of ways—it can corrupt some other data the compiler stored there, it can make your stack frame unusable, it can corrupt a return address, and other things can go wrong.
Additionally, when the compiler translates and optimizes your program, it relies on guarantees made to it by the C standard, such as that the operation strcpy(b, …) will not write outside of the properly defined object pointed to by b, which is a. When you violate those guarantees, the C standard does not define the resulting behavior, and the translations and optimizations made by the compiler may cause your program to go awry in unexpected ways.
Why doesn't this work?
This doesn't work because strcpy() copies 6 characters (5 'x' characters and a nul terminator) to the address pointed to by b, and there is not enough room for that, at least if the compiler you used stores the int type in 32 bits (4 bytes).
You didn't show the full code, but assuming a is a local variable, it is allocated on the stack. You overflow the space allocated for a, which means you overwrite something else on the stack. That data is essential for the program's continuation, and overwriting it crashes the program.
If code does not behave as you expect, you have a semantic error: code that is syntactically valid (i.e. it compiles) but does not mean what you think it does when executed according to the rules of the language.
Moreover, as a systems-level language, C does not protect you from doing invalid things to the execution environment, i.e. runtime errors, and such errors generally have undefined behaviour.
In this case:
b=(char*)&a;
strcpy(b,"xxxxx");
b points to an object of int size. On Windows, and on most other platforms, that will normally be 4 bytes. You then copy 6 bytes to it, overrunning its space. The effect of this is undefined, but it is likely to corrupt some adjacent variable in memory or the function return address.
If b itself were corrupted by the strcpy() error, trying to print the string at b would cause a run-time error if b were no longer a valid address.
If the return address were corrupted, the program would fail when you return from the calling function.
In either case the precise behaviour is not defined, and may not be trapped; it depends on what gets corrupted, what value the corrupted data takes, and how and when that corrupted data is used.
You will be able to observe the effects on the variables and/or call stack by running and stepping this code in a debugger.
This question already has answers here:
How dangerous is it to access an array out of bounds?
Closed 3 years ago.
My understanding is that if char *my_word is allocated ONE byte of memory with malloc(1), then technically the following code would produce an out-of-bounds error
char *my_word = malloc(1);
my_word[0] = 'y';
my_word[1] = 'e';
my_word[2] = 's';
and yet, the code runs just fine and doesn't produce any error. In fact, printf("%s", my_word) prints the word just fine.
Why is this not producing an out-of-bounds error if I specifically only allocated 1 byte of memory?
C doesn't have explicit bounds checking. That's part of what makes it fast. But when you write past the bounds of allocated memory, you invoke undefined behavior.
Once you invoke undefined behavior, you can't reliably predict what the program will do. It may crash, it may output strange results, or (as in this case) it may appear to work properly. Additionally, a seemingly unrelated change such as adding a printf call for debugging or adding an unused local variable can change how the undefined behavior manifests itself.
Just because the program could crash doesn't mean it will.
This comes down to the system it is running on. Generally, malloc allocates in multiples of a certain block size. E.g. the block size may be 16 bytes on your system, and malloc will allocate 16 even though you only asked for 1. So in this case you are getting away with overflowing the buffer because you are not writing on memory that is used by anything else.
However you should never rely on this. Always assume that when you write outside the amount requested that bad things will happen.
C does not provide any built-in mechanism to protect you from buffer overflowing, it is up to you to know the size of your buffers and ensure that you never read/write outside of them.
For example, if you allocated a buffer whose size is an exact multiple of the block size, then writing to the next byte will probably start overwriting the allocator's memory-control blocks, which may show up as bizarre errors later when you try to free or allocate more memory.
C does not perform bounds checks. Accessing out of bounds is simply undefined behavior, which means it can appear to work normally.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
When we define a variable and do not initialize it, the block of memory allocated to the variable still contains a value from previous programs, known as a garbage value. But suppose, hypothetically, that a block of never-used memory is present in the system, and when I declare and define a variable, that block is allocated to it. If I do not initialize the variable and try to print its value, the system doesn't have any garbage value to print. What will be the result? What will the system do?
When we define a variable, and do not initialize it, the block of memory allocated to the variable still contains a value from previous programs, known as garbage value.
If I do not initialize the variable, and try to print it, it doesn't have any garbage value to print.
C does not specify these behaviors. There is no specified garbage value.
If code attempts to print (or use) the value of an uninitialized object, the result is undefined behavior (UB). Anything may happen: a trap may occur, the value may be 42, the code may die; anything.
There is a special case if the uninitialized object is an unsigned char: a value will be there, of indeterminate value, just something in the range [0...UCHAR_MAX], but with no UB. This is the closest to a garbage value that C specifies.
Firstly, the C standard does not define precisely how an implementation behaves when an uninitialised read is made, merely that the value is not defined. The system may use whatever method it wishes to choose the value. It is even possible that the value is a trap representation, in which case reading it can crash the program.
However, on most real modern OSes, the data is in fact in fresh pages that get mapped into the program's address space. For security reasons, most kernels explicitly zero these pages out, to stop software spying on data left in memory by previously run programs.
However, some OSes, as you say, will just leave this data in place, meaning the page is either fresh (and usually zeroed) or contains arbitrary data from previous programs (or even potentially arbitrary data determined by how the memory powers up; with DRAM, at least, that is generally a zeroed state).
I think you need more of hardware perspective.
What is memory? One example: memory made up of transistors and capacitors. A transistor and a capacitor make a memory bit. A bit has a value of either 0 or 1; a hypothetical scenario where this bit has no value does not exist ;) as it has to hold either 0 or 1 and nothing else. If you think there is "nothing" in a bit, then the hardware (transistor/capacitor) you are imagining is not working.
A bunch of bits makes a byte or word. A bunch of bytes holds an integer, a float, or whatever variable you define. So even without initializing the variable, it contains 0s and 1s in each of its memory cells. When you access this, it's called garbage.
But suppose, in a hypothetical case, a block of unused memory is present in the system, and when I declare and define a variable, that block of memory is allocated to the variable. If I do not initialize the variable, and try to print it, it doesn't have any garbage value to print. What will it do?
Any given memory location has some value, no matter how it got there. The "garbage" value doesn't have to come from a program that ran in that space previously, it could just be the initial state of the memory location when the system starts up. The reason it's "garbage" is that you don't know what it is -- you didn't put it there, you don't have any idea how it got there, and you don't really care.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
I would like to know why this occurs. I am creating a 5-element array of integers, meaning each element takes 4 bytes in memory. Why, when I print the address of myArray[-1], do I also get a valid-looking address?
#include <stdio.h>
#include <stdlib.h>
int main()
{
int myArray[] = {1, 2, 3, 4, 5};
printf("0x%p\n0x%p\n0x%p\n", &myArray[-1], &myArray[0], &myArray[1]);
return 0;
}
Output:
0x0028FEF8
0x0028FEFC
0x0028FF00
Because undefined behavior is undefined: it may work or not, you are not guaranteed to get a segmentation fault.
The address isn't valid; it doesn't correspond to an object in your program. Attempting to access that memory location results in undefined behavior - it may cause a runtime error. Or not.
On almost any implementation, your array will be materialized in a larger region of storage, so naturally there will be memory cells on either side of it (unless it starts at address 0, which it won't on almost any implementation you'll actually work on). Since C doesn't enforce any kind of bounds checking on array accesses, it doesn't immediately throw an exception when you use the -1 subscript. Yes, you get what looks like a reasonable address value, but attempting to use that memory location may or may not result in some kind of mayhem, depending on what's stored there (a frame pointer, for example).
The language definition leaves the behavior undefined; it places no requirement on the compiler to handle the situation in any particular way. The compiler may issue a diagnostic that you're doing something stupid and halt translation. It may compile the code without complaint, and you won't know anything's wrong until you get a runtime error. It may do anything in between.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
What is the difference between a stack overflow and a stack crash? When does a stack crash occur?
What are a heap overflow and a heap crash?
What happens when a stack overflow or heap overflow occurs?
A stack overflow is extensively discussed here and means an overflow condition where there's not enough stack memory and other data gets overwritten, causing undefined behavior.
"Stack crash" is likely a synonym of the first, although I've heard it (or "stack corruption") used to indicate, mostly in a debugging environment, the situation where the stack pointer gets corrupted, causing all the stack-related debugging views to stall (and obviously the debuggee as well).
A heap overflow doesn't usually happen except in some memory-pool-managed circumstances since, assuming the operating system is doing a good job, you will never get to overwrite a used memory chunk by having it marked as writable. If heap memory gets exhausted, your system will likely tell you that and fail.
A heap crash might be defined as an invalid use of heap memory, e.g. access violation or accessing invalid addresses. It should fall in the broader terminology of memory corruption and storage violation (these might be linked to stack overflows).
Not sure where you've heard of these terms, especially "stack crash", but I wouldn't use it to avoid confusion.
I have never heard of a "stack crash".
In general there are two kinds of errors with memory access:
1. you violate some memory protection (trying to write to a read-only part, or to access memory you mustn't)
2. you access memory you have rights to, but in a bad way
Stack overflow is generally used when the program intentionally or not corrupts the stack content by overflowing a structure inside it. This is much like case (2).
It is also used when you overrun the stack, by nesting too many function calls for example. This is much like case (1). Java, for example, throws a StackOverflowError in this case.
You also have both cases with heap. A buffer overflow is an example of accessing memory the bad way and corrupting data in the heap (if the buffer is in the heap). In this case we can say that it is a Heap overflow.
You can also try to access some memory in the heap region of your process that is not currently allocated. This leads to different scenarios depending on the virtual memory layer. Sometimes you are able to use the memory, but as it has not actually been allocated, this will lead to a future memory corruption (not reported at the time it occurs, and difficult to trace back).
Sometimes the virtual memory layer will be able to detect your access violation and will abort your process (Unix can report it as Bus error or Segmentation fault).
You can also consume all the heap space by allocating too much memory. This is heap exhaustion, a kind of heap overrun...