The difference between "segmentation fault" and "stack smashing detected" - C

I have experienced two kinds of errors: one is a segmentation fault, the other is "stack smashing detected". I want to know what the difference between them is and what different causes produce them.

Both are typically the result of undefined behavior.
A segmentation fault typically occurs when your process accesses a memory location it doesn't have permission to access, or one that does not exist.
"Stack smashing detected" is an alert (generated by gcc's stack protector, for instance) that warns about an out-of-bounds access, for instance on the stack. Typically it is raised when the stack is written to where it shouldn't be, like a local array written at an out-of-bounds index.
Stack overflow is a kind of "stack smashing" that may also trigger this alert. Stack overflow usually happens when the memory allocated for the stack is not large enough to hold the functions' local variables, return addresses, and so on. It typically happens in recursive functions going too deep (too much accumulation of return addresses and local data), or when local variables take too much space on the stack (like huge arrays).
There is a problem in your code that produces undefined behavior. Maybe you could share it with us so that we can help you.
Check in particular:
Out-of-bounds array accesses
NULL pointer dereferences
Local variables that are too large; such variables should instead be allocated dynamically on the heap (malloc() ...).
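A minimal sketch of these failure modes, for illustration. All three are undefined behavior; the symptom named in each comment is only the typical outcome on Linux with gcc, not a guaranteed one:

    #include <string.h>

    void smash_stack(void) {
        char buf[8];
        memset(buf, 'A', 64);   /* writes far past buf: with gcc's stack
                                   protector enabled this usually aborts with
                                   "*** stack smashing detected ***" on return */
    }

    void null_deref(void) {
        int *p = NULL;
        *p = 42;                /* page 0 is never mapped: usually SIGSEGV */
    }

    void huge_local(void) {
        char big[64 * 1024 * 1024];  /* 64 MiB local array: usually exceeds
                                        the stack size limit, raising SIGSEGV */
        big[0] = 0;
    }

    int main(void) {
        smash_stack();          /* call one at a time to observe each failure */
        return 0;
    }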

A segmentation fault is a fault raised by hardware with memory protection, notifying the operating system (OS) about a memory access violation. Stack smashing is reported when data overflows within your program's call stack. Generally, a program's call stack is of fixed length.

Stack overflow and stack smashing are both problems related to faulty code or bad values in variables.
For example, when a loop runs over extra indexes of an array and overwrites the value of another variable, it becomes a problem for the function prologue and epilogue: the current function becomes unable to return to its caller, because the overrunning loop has overwritten the return address pushed by the call instruction, and EIP ends up pointing somewhere it is not allowed to fetch instructions from. All code under the OS runs within memory protection schemes, hence you get a stack overrun or stack smashing.
A segmentation fault is a common problem when dealing with arrays and pointers on a Linux OS.
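A sketch of that overrun (undefined behavior; whether the stray store clobbers a neighboring variable, a saved register, or the return address depends entirely on the compiler's stack layout):

    #include <stdio.h>

    int main(void) {
        int guard = 7;
        int a[4];
        /* Off-by-one: when i == 4 the store lands past the array. Depending
           on stack layout it may overwrite `guard`, a saved register, or the
           return address -- undefined behavior either way. */
        for (int i = 0; i <= 4; i++)
            a[i] = 0;
        printf("guard = %d\n", guard);   /* may print 0 instead of 7 */
        return 0;
    }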
Try this http://www.drdobbs.com/security/anatomy-of-a-stack-smashing-attack-and-h/240001832

Both are memory access violations.
A segmentation fault is more general: it means you are accessing something you are not allowed to access.
Stack smashing is more specific: it means something went wrong in your stack.
In fact, stack smashing can cause a segmentation fault.
You can refer to:
https://en.wikipedia.org/wiki/Segmentation_fault
or
Stack smashing detected

Related

Is putting an array on the heap rather than the stack, going to remove possibilities of segmentation faults?

That is my question. Say my array is size 10, and I get a segmentation fault when I loop through and fill the array to index 13, because I corrupted some important information on the stack. If I put it on the heap instead, am I immune to segmentation faults? This is more of a conceptual question.
No. If you overrun the allocated space, you are using memory that does not belong to the application, or which belongs to some other part of the application.
What then happens is undefined. I would be surprised in either case if overrunning by just a few elements directly caused a segmentation fault - the page granularity is not that small. Seg-faults are a function of the processor and operating system, not the C language, and occur when you access memory not allocated to the process.
In the case of a stack buffer overrun, you will most likely corrupt some adjacent data in the current or calling function; if a seg-fault occurs it will be due to acting upon the corrupted data, such as popping an invalid return address into the program counter, rather than the overrun itself.
Similarly if you overrun the heap allocation, the result depends on what you are corrupting and how that is subsequently used. Heap corruption is particularly insidious, because the results of the error may remain undetected (latent), or result in failure long after the actual error in some unrelated area of the code - typically when you attempt to free or allocate some other allocation where the heap structures have been destroyed. The memory you have corrupted may be part of some other existing allocation, and the error may manifest itself only when that corrupted data is utilised.
The error you observe is entirely non-deterministic - an immediate seg-fault is perhaps unlikely in the scenario you have described, but would in fact be the best you could hope for, since all other possible manifestations of failure are particularly difficult to debug. A failure from a stack data overrun is likely to be more localised - typically you will see corrupted data within the function, or the function will fail on return - whereas a heap error is often less immediately obvious, because the data you are corrupting can be associated with any code within your application. If that code does not run, or runs infrequently, you may never observe any failure.
The "solution" to your problem is not to write code that overruns - it is always an error, and using a different type of memory allocation is not going to save you from that. To use a coding practice that simply "hides" bugs or makes them less apparent or deterministic is not a good strategy.

Is the heartbleed bug a manifestation of the classic buffer overflow exploit in C?

In one of our first CS lectures on security we were walked through C's issue with not checking alleged buffer lengths and some examples of the different ways in which this vulnerability could be exploited.
In this case, it looks like it was a malicious read operation, where the application just read out however many bytes of memory were asked for.
Am I correct in asserting that the Heartbleed bug is a manifestation of the C buffer length checking issue?
Why didn't the malicious use cause a segmentation fault when it tried to read another application's memory?
Would simply zero-ing the memory before writing to it (and then subsequently reading from it) have caused a segmentation fault? Or does this vary between operating systems? Or between some other environmental factor?
Apparently exploitations of the bug cannot be identified. Is that because the heartbeat function does not log when called? Otherwise surely any request for a ~64k string is likely to be malicious?
Am I correct in asserting that the Heartbleed bug is a manifestation of the C buffer length checking issue?
Yes.
Is the heartbleed bug a manifestation of the classic buffer overflow exploit in C?
No. The "classic" buffer overflow is one where you write more data into a stack-allocated buffer than it can hold, where the data written is provided by the hostile agent. The hostile data overflows the buffer and overwrites the return address of the current method. When the method ends it then returns to an address containing code of the attacker's choice and starts executing it.
The heartbleed defect, by contrast, does not overwrite a buffer and does not execute arbitrary code; it just reads out of bounds in code that is highly likely to have sensitive data nearby in memory.
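A hypothetical, much-simplified sketch of such an over-read (the names and buffer layout here are illustrative, not OpenSSL's actual code): the routine trusts a caller-supplied length rather than the payload's real size, so it echoes back whatever happens to sit next to the payload in memory.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical heartbeat-style echo: `claimed` is attacker-controlled
       and never checked against the payload's actual size. */
    static void echo(const char *payload, size_t claimed) {
        char out[256];
        memcpy(out, payload, claimed);   /* over-READS past the payload: UB,
                                            but usually no crash -- it simply
                                            copies neighboring memory */
        fwrite(out, 1, claimed, stdout);
    }

    int main(void) {
        char secret[32] = "private-key-material";  /* may sit near payload */
        char payload[8] = "hi";
        (void)secret;
        echo(payload, 40);   /* claims 40 bytes; only 8 exist */
        putchar('\n');
        return 0;
    }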
Why didn't the malicious use cause a segmentation fault when it tried to read another application's memory?
It did not try to read another application's memory. The exploit reads memory of the current process, not another process.
Why didn't the malicious use cause a segmentation fault when it tried to read memory out of bounds of the buffer?
This is a duplicate of this question:
Why does this not give a segmentation violation fault?
A segmentation fault means that you touched a page that the operating system memory manager has not allocated to you. The bug here is that you touched data on a valid page that the heap manager has not allocated to you. As long as the page is valid, you won't get a segfault. Typically the heap manager asks the OS for a big hunk of memory, and then divides that up amongst different allocations. All those allocations are then on valid pages of memory as far as the operating system is concerned.
Dereferencing null is a segfault simply because the operating system never makes the page that contains the zero pointer a valid page.
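Both points fit in a few lines (undefined behavior in both cases; the contrasting symptoms are merely what mainstream OSes typically produce):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char *p = malloc(16);
        if (!p)
            return 1;
        /* Out-of-bounds read: byte 100 is past the allocation but usually
           still on a page the heap manager obtained from the OS, so typically
           no segfault -- just garbage. Undefined behavior regardless. */
        printf("p[100] = %d\n", p[100]);
        free(p);
        /* NULL dereference: page zero is deliberately left unmapped, so
           this reliably segfaults on mainstream OSes. */
        char *q = NULL;
        printf("%d\n", *q);
        return 0;
    }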
More generally: the compiler and runtime are not required to ensure that undefined behaviour results in a segfault; UB can result in any behaviour whatsoever, and that includes doing nothing. For more thoughts on this matter see:
Can a local variable's memory be accessed outside its scope?
For both my complaint that UB should always be the equivalent of a segfault in security-critical code, and some pointers to a discussion on static analysis of the vulnerability, see today's blog article:
http://ericlippert.com/2014/04/15/heartbleed-and-static-analysis/
Would simply zero-ing the memory before writing to it (and then subsequently reading from it) have caused a segmentation fault?
Unlikely. If reading out of bounds doesn't cause a segfault then writing out of bounds is unlikely to. It is possible that a page of memory is read-only, but in this case it seems unlikely.
Of course, the later consequences of zeroing out all kinds of memory that you should not are seg faults all over the show. If there's a pointer in that zeroed out memory that you later dereference, that's dereferencing null which will produce a segfault.
does this vary between operating systems?
The question is vague. Let me rephrase it.
Do different operating systems and different C/C++ runtime libraries provide differing strategies for allocating virtual memory, allocating heap memory, and identifying when memory access goes out of bounds?
Yes; different things are different.
Or between some other environmental factor?
Such as?
Apparently exploitations of the bug cannot be identified. Is that because the heartbeat function does not log when called?
Correct.
surely any request for a ~64k string is likely to be malicious?
I'm not following your train of thought. What makes the request likely malicious is a mismatch between bytes sent and bytes requested to be echoed, not the size of the data asked to be echoed.
A segmentation fault does not occur because the data accessed is that immediately adjacent to the data requested, and is generally within the memory of the same process. It might cause an exception if the request were sufficiently large I suppose, but doing that is not in the exploiter's interest, since crashing the process would prevent them obtaining the data.
For a clear explanation, this XKCD comic is hard to better: https://xkcd.com/1354/

Can stack buffer overflows cause heap corruption?

Is it possible for a stack buffer overflow to cause heap corruption issues without overflowing the return address? If so, can you think of an example?
Whether it can cause heap corruption depends a lot on the platform.
But say for example that a buffer overflow overwrites a pointer variable so that it gets a new value that happens to be a different, but valid pointer. If the code then goes on to free said pointer (not knowing it is now something else) then the code that references this pointer could crash or behave erratically because the memory has been prematurely freed and possibly reallocated for a different purpose.
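A contrived sketch of that scenario; the struct is used only to pin down an adjacency that a real stack frame merely happens to have, since the layout assumption is the whole point:

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *other = malloc(32);       /* a different, valid heap pointer */
        struct {
            char  buf[8];
            char *victim;               /* adjacent to buf in this layout */
        } frame;
        frame.victim = malloc(16);
        if (!other || !frame.victim)
            return 1;
        /* Overflow buf by exactly one pointer's worth: the excess bytes land
           on `victim` and replace it with the value of `other`. Undefined
           behavior, done deliberately here. */
        memcpy(frame.buf + 8, &other, sizeof other);  /* past the array */
        free(frame.victim);   /* actually frees `other` -- the wrong object */
        free(other);          /* double free: classic heap corruption; glibc
                                 will likely abort here */
        return 0;
    }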

In C, can a segmentation fault occur only for out-of-bounds access in the heap area? Or can it happen even for static arrays on the stack?

I gather from previous answers on SO that a seg fault occurs due to dereferencing a NULL pointer or due to out-of-bounds array access. But does it happen only for dynamically allocated arrays or also for statically declared ones?
You do not always get a segmentation fault when you try to access an array out of bounds.
It all depends on the memory location being referred to. Segmentation is a protection mechanism: when you try to enter another process's area, the MMU or MPU will catch such an access and raise an access violation exception (also called a segmentation fault).
First, C itself doesn't talk about segfaults, just undefined behaviour. But let's be practical and look at a typical Linux platform. If you access memory at a virtual address for which there's no mapping for your process, the kernel will send SIGSEGV to the process. When indexing in an array in static memory, you take the address of the array, add the offset, and dereference that. If the offset is far enough outside the valid range for the array, you can definitely reach an address that isn't mapped, and your process will segfault.
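A sketch contrasting the two outcomes (both are undefined behavior; the difference in symptom is just a matter of which page the stray address happens to land on):

    static int table[10];   /* statically allocated, lives in .bss */

    int main(void) {
        /* Small overshoot: very likely still on a mapped page, so it
           corrupts silently rather than faulting. */
        table[12] = 1;
        /* Huge offset: almost certainly leaves the mapped region of the
           process, so the kernel delivers SIGSEGV. */
        table[100 * 1000 * 1000] = 1;
        return 0;
    }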

What is the difference between a segmentation fault and a stack overflow?

For example, when we call, say, a recursive function, the successive calls are stored on the stack. However, if, due to an error, it recurses infinitely, the error is 'Segmentation fault' (as seen on GCC).
Shouldn't it have been 'stack-overflow'? What then is the basic difference between the two?
Btw, an explanation would be more helpful than wikipedia links (gone through that, but no answer to specific query).
Stack overflow is a cause; segmentation fault is the result.
At least on x86 and ARM, the "stack" is a piece of memory reserved for placing local variables and return addresses of function calls. When the stack is exhausted, the memory outside of the reserved area will be accessed. But the app did not ask the kernel for this memory, thus a SegFault will be generated for memory protection.
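For example (a sketch; build without optimization, since a compiler that eliminates the frames can turn this into an infinite loop instead of a crash):

    #include <stdio.h>

    static unsigned long depth;

    void recurse(void) {
        char frame[1024];            /* make each stack frame visibly large */
        frame[0] = (char)depth++;
        recurse();
        frame[0]++;                  /* used after the call, so the compiler
                                        cannot trivially drop the frame */
    }

    int main(void) {
        recurse();                   /* never returns; each call eats stack
                                        until the guard page is hit: SIGSEGV */
        printf("reached depth %lu\n", depth);
        return 0;
    }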
Modern processors use memory managers to protect processes from each other. The x86 memory manager has many legacy features, one of which is segmentation. Segmentation is meant to keep programs from manipulating memory in certain ways. For instance, one segment might be marked read-only and the code would be put there, while another segment is read/write and that's where your data goes.
During a stack overflow, you exhaust all of the space allocated to one of your segments, and then your program starts writing into segments that the memory manager does not permit, and then you get a segmentation fault.
A stack overflow can manifest as either an explicit stack overflow exception (depending on the compiler and architecture) or as a segmentation fault, i.e., invalid memory access. Ultimately, a stack overflow is the result of running out of stack space, and one possible result of running out of stack space is reading or writing to memory that you shouldn't access. Hence, on many architectures, the result of a stack overflow is a memory access error.
The call stack is being overflowed; however, the result of the overflowing is that eventually call-related values are pushed into memory that is not part of the stack, and then - SIGSEGV!
