Stack overflow vs stack crash [closed] - c

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
What is the difference between a stack overflow and a stack crash? When does a stack crash occur?
What are a heap overflow and a heap crash?
What happens when a stack overflow or heap overflow occurs?

A stack overflow is extensively discussed here; it means an overflow condition where there is not enough stack memory left, so other data gets overwritten, causing undefined behavior.
"Stack crash" is likely a synonym for the first, although I've also heard it (or "stack corruption") used, mostly in a debugging environment, to indicate that the stack pointer has been corrupted, causing all the stack-related debugging views to stall (and obviously the debuggee as well).
A heap overflow doesn't usually happen, except in some memory-pool-managed circumstances, since, assuming the operating system is doing its job, you will never get to overwrite a used memory chunk by having it marked as writable. If heap memory gets exhausted, your system will likely tell you so and fail.
A heap crash might be defined as an invalid use of heap memory, e.g. an access violation or an access to an invalid address. It falls under the broader terminology of memory corruption and storage violation (which can also be linked to stack overflows).
Not sure where you've heard these terms, especially "stack crash", but I would avoid using it to prevent confusion.

I never heard of "stack crash".
In general there are two kinds of errors with memory access:
(1) you violate some memory protection (trying to write to a read-only part, or to access memory you mustn't);
(2) you access memory you have rights to, but in a bad way.
Stack overflow is generally used when the program, intentionally or not, corrupts the stack content by overflowing a structure stored on it. This is much like case (2).
It is also used when you overrun the stack itself, for example by nesting too many function calls. This is much like case (1). Java, for example, throws a StackOverflowError in this case.
You also have both cases with the heap. A buffer overflow is an example of accessing memory the wrong way and corrupting data in the heap (if the buffer is in the heap). In this case we can say that it is a heap overflow.
You can also try to access some memory in the heap region of your process that is not currently allocated. This leads to different scenarios depending on the virtual memory layer. Sometimes you are able to use the memory, but as it has not been properly allocated, this leads to a future memory corruption (not reported at the time it occurs, and difficult to trace back).
Sometimes the virtual memory layer will be able to detect the access violation and will abort your process (Unix can report it as a bus error or segmentation fault).
You can also consume all the heap space by allocating too much memory. This is heap exhaustion, a kind of heap overrun.

Related

Is putting an array on the heap rather than the stack going to remove the possibility of segmentation faults?

That is my question. Say my array is size 10, and I get a segmentation fault when I loop through and fill the array to 13, because I corrupted some important information on the stack. If I stuck it on the heap instead, am I immune to segmentation faults? This is more of a conceptual question.
No. If you overrun the allocated space, you are using memory that does not belong to the application, or which belongs to some other part of the application.
What happens then is undefined. I would be surprised, in either case, if overrunning by just three bytes directly caused a segmentation fault - the page granularity is not that small. Seg-faults are a function of the processor and operating system, not the C language, and occur when you access memory not allocated to the process.
In the case of a stack buffer overrun, you will most likely corrupt some adjacent data in the current or calling function; if a seg-fault occurs, it will be due to acting upon the corrupted data - such as popping an invalid return address into the program counter - rather than the overrun itself.
Similarly, if you overrun a heap allocation, the result depends on what you corrupt and how it is subsequently used. Heap corruption is particularly insidious because the results of the error may remain undetected (latent), or cause a failure long after the actual error, in some unrelated area of the code - typically when you attempt to free or allocate some other block once the heap structures have been destroyed. The memory you have corrupted may be part of some other existing allocation, and the error may manifest itself only when that corrupted data is used.
The error you observe is entirely non-deterministic - an immediate seg-fault is perhaps unlikely in the scenario you have described, but would in fact be the best outcome you could hope for, since all other possible manifestations of failure are particularly difficult to debug. A failure from a stack data overrun tends to be more localised - typically you will see corrupted data within the function, or the function will fail on return - whereas a heap error is often less immediately obvious, because the data you are corrupting can be associated with any code within your application. If that code does not run, or runs infrequently, you may never observe any failure.
The "solution" to your problem is not to write code that overruns - it is always an error, and using a different type of memory allocation is not going to save you from that. Using a coding practice that merely "hides" bugs, or makes them less apparent or less deterministic, is not a good strategy.

what happens in linux glibc if keep increasingly accessing memory exceeding malloc() allocated size [closed]

Closed 5 years ago.
The comments/answers seem to simply stop at the C standard's description; let's dig a bit deeper into implementation specifics.
I saw below code in other discussion:
struct { size_t x; char a[]; } *p;
p = malloc(sizeof *p + 100);
if (p)
{
    /* You can now access up to p->a[99] safely */
}
Then what if we keep accessing p->a[i] for 99 < i < 0xffff, or an even bigger value?
The malloc implementation should have a virtual-memory-backed block covering "(sizeof *p + 100)", so once "i" exceeds 99, initially the access should just corrupt data within that block, which might be non-harmful.
If "i" later exceeds that virtual memory block's size, while the next block is available but never backed by physical memory (i.e. ready to be allocated), would the kernel's copy-on-write of physical memory happen for that next block on this bad access? And would malloc() later be aware of it?
If the next block is not under heap management, should p->a[i] get a virtual memory access violation error? Because malloc() is not called, brk/sbrk won't be triggered to expand the process's heap region.
Just curious how damaging it is in this case...
Accessing stuff outside of the allocated memory is undefined behavior. Anything can happen. I hear nasal demons are a possibility.
If you are really lucky, you might get an access violation/segfault. If you aren't lucky, then some other variable in the program may be overwritten, or nothing observable may happen. The moon may turn into the 7UP logo, or maybe something nasty squeezes out of your right nostril.

String memory allocation in C [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
Could anyone clarify this?
char str[1];
strcpy(str, "HHHHHHHHHHHH");
Here I declared a char array of size one, but the program doesn't crash until I enter more than 12 characters, and I only have an array of size one. Why?
This code has undefined behaviour, since it writes more than one element into str. It could do anything. It is your responsibility to ensure that you only write into memory that you own.
This is undefined behaviour. In practice, you overwrite the memory contents of something. In this case the array goes on the stack, since it is a local variable. You likely have a CPU architecture where the stack grows down, so you start overwriting things like other local variables, saved register values, and return addresses of function calls.
You probably first overwrote something that had no immediate effect, or whose effect you did not notice. This might be a local variable that wasn't initialized yet, or a local variable or saved register value that was not actually used after you overwrote it.
Then, when you increased the length of the overflow, you probably corrupted the function's return address, and the crash actually happened when you returned from the function. If you had any memory addresses there, that is, pointers, the crash could also be because you tried to access the value pointed to by a corrupted pointer.
Finally, if you increased the overflow size enough, the string copy would eventually write directly outside the allowed area and cause an immediate crash (assuming a CPU and OS with such memory protection, not some ancient or embedded system). But this was probably not the reason here, as you wrote only 14 bytes before the crash.
But note that the above is kind of pointless from the point of view of the C language; it is undefined behaviour, which often changes if you change anything in the program, the compiler options, or the input data. This can make memory corruption bugs hard to find, as adding debug code often makes the problem "disappear" (changes or hides the symptoms).

Why we shouldn't dynamically allocate memory on stack in C?

We have functions to allocate memory on the stack in both Windows and Linux systems, but their use is discouraged, and they are not part of the C standard. This means they provide some non-standard behavior. As I'm not that experienced, I cannot understand what the problem could be with allocating memory from the stack rather than using the heap?
Thanks.
EDIT: My view: As Delan has explained, the amount of stack allotted to a program is decided at compile time, so we cannot ask the OS for more stack if we run out of it. The only way out would be a crash. So it's better to leave the stack for storage of primary things like variables, functions, function calls, arrays, structures, etc., and use the heap as much as the capacity of the OS/machine allows.
Stack memory has the benefit of frequently being faster to allocate than heap memory.
However, the problem with this, at least in the specific case of alloca(3), is that in many implementations, it just decreases the stack pointer, without giving regard or notification as to whether or not there actually is any stack space left.
The stack memory is fixed at compile- or runtime, and does not dynamically expand when more memory is needed. If you run out of stack space, and call alloca, you have a chance of getting a pointer to non-stack memory. You have no way of knowing if you have caused a stack overflow.
Addendum: this does not mean that we should never dynamically allocate stack memory. If you are
- in a heavily controlled and monitored environment, such as an embedded application, where the stack limits are known or can be set,
- keeping track of all memory allocations carefully to avoid a stack overflow, and
- ensuring that you don't recurse deeply enough to cause a stack overflow,
then stack allocations are fine, and can even be beneficial to save time (motion of the stack pointer is all that happens) and memory (you're using the pre-allocated stack, and not eating into the heap).
Memory on the stack (automatic, in the broader sense) is fast, safe and foolproof compared to the heap.
Fast: because the frame layout is fixed at compile time, allocation is just a stack-pointer adjustment, so there is almost no overhead involved.
Safe: it's exception safe. The stack gets automatically unwound when an exception is thrown.
Foolproof: you don't have to worry about virtual-destructor kinds of scenarios. The destructors are called in the proper order.
Still, there are times when you have to allocate memory at runtime; then you can first resort to standard containers like vector, map, list, etc. Allocating memory through raw pointers should always be a judicious decision.

What is the difference between a segmentation fault and a stack overflow?

For example, when we call, say, a recursive function, the successive calls are stored on the stack. However, if due to an error it goes on infinitely, the error is 'Segmentation fault' (as seen with GCC).
Shouldn't it have been 'stack overflow'? What then is the basic difference between the two?
Btw, an explanation would be more helpful than Wikipedia links (I've gone through those, but found no answer to this specific query).
Stack overflow is a cause; segmentation fault is the result.
At least on x86 and ARM, the "stack" is a piece of memory reserved for placing local variables and return addresses of function calls. When the stack is exhausted, the memory outside of the reserved area will be accessed. But the app did not ask the kernel for this memory, thus a SegFault will be generated for memory protection.
Modern processors use memory managers to protect processes from each other. The x86 memory manager has many legacy features, one of which is segmentation. Segmentation is meant to keep programs from manipulating memory in certain ways. For instance, one segment might be marked read-only and the code would be put there, while another segment is read/write and that's where your data goes.
During a stack overflow, you exhaust all of the space allocated to one of your segments, and then your program starts writing into segments that the memory manager does not permit, and then you get a segmentation fault.
A stack overflow can manifest as either an explicit stack overflow exception (depending on the compiler and architecture) or as a segmentation fault, i.e., invalid memory access. Ultimately, a stack overflow is the result of running out of stack space, and one possible result of running out of stack space is reading or writing to memory that you shouldn't access. Hence, on many architectures, the result of a stack overflow is a memory access error.
The call stack is overflowed, but the result of the overflowing is that call-related values are eventually pushed into memory that is not part of the stack - and then: SIGSEGV!
