After debugging my code I get the following error:
Program received signal SIGSEGV, Segmentation fault.
0xb7d79a67 in fgets () from /lib/i386-linux-gnu/libc.so.6
Can anybody explain to me what this means? It's a project built using CMake and OpenGL.
When a program tries to access memory it has no privileges for, the Linux kernel interrupts the program by sending it a signal called SIGSEGV. Signals are one way the Linux kernel communicates with programs (processes, strictly speaking); a signal is a kind of exception. Since your crash is inside fgets, you may be exceeding the memory you allocated for your buffer by reading in too much text.
Since you are dealing with files, it's worth checking that the file actually exists and that you have permission to read it. If fopen fails and returns NULL, passing that NULL unchecked to fgets will crash in exactly this way.
/lib/i386-linux-gnu/libc.so.6 is a shared library on your Linux system in which the fgets function resides, and 0xb7d79a67 is the address of the instruction inside it that performed the illegal access.
A segmentation fault (SIGSEGV) occurs when you access protected memory areas, or memory areas used by other programs, which your program has no right to touch.
Read these articles for a better grasp:
Segmentation fault why?, Debugging segmentation faults
I was debugging a seg fault in a Linux app, caused by the program trying to modify a static constant array (so the data was in the read-only section of the ELF, and was subsequently loaded into a page that was given read-only permission).
While in GDB I put a breakpoint on the line of assembler that did the bad store, and when it stopped there I manually performed the equivalent write action using GDB. GDB did this without any complaints, and reading the value back proved it had indeed been written. I looked in /proc/thepid/maps and that particular page was still marked as "not writeable".
So my question is: does GDB temporarily set write permissions on a read-only page, perform the write, then reset the permissions? Thanks.
does GDB temporarily set write permissions
No.
On Linux/*86, ptrace() (which is what GDB uses to read and write the memory of the inferior, i.e. the process being debugged) allows reads and writes to pages that are not readable/writable by the inferior, leading to exactly the confusion you've described.
This could be considered a bug in the kernel.
It should be noted that the kernel has to allow ptrace to write to normally non-writable .text section for the debugger to be able to plant breakpoints (which is done by overwriting original instruction with the breakpoint/trap instruction -- int3 via PTRACE_POKETEXT request).
The kernel doesn't have to do the same for PTRACE_POKEDATA, but man ptrace says:
PTRACE_POKETEXT, PTRACE_POKEDATA
Copies the word data to location addr in the child's memory.
As above, the two requests are currently equivalent.
I believe it's that equivalence that causes the current behavior.
I am trying to run the program to test buffer overflow, but when program crashes it shows me SIGSEGV error as follows:
Program received signal SIGSEGV, Segmentation fault.
0x00000000004006c0 in main (argc=2, argv=0x7fffffffde78)
But the tutorial which I am following is getting the below message:
Program received signal SIGSEGV, Segmentation fault. 0x41414141 in ?? ()
Due to this I am not able to get the exact memory location of buffer overflow.
I have already used -fno-stack-protector while compiling my program, because before that I was getting a SIGABRT error.
Does anyone have any clue how I can get in sync with the tutorial?
I was able to figure out the difference between the two.
I was actually running the same code on 64-bit Ubuntu in VirtualBox.
I then installed 32-bit Ubuntu in VirtualBox, and now I get the same message as in the tutorial.
Another difference I noticed between the 64-bit and 32-bit OS: on 32-bit we can examine the stack using $esp, but on a 64-bit machine we have to use $rsp.
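For reference, the register difference looks like this in a GDB session (the command names are standard GDB; the counts are arbitrary examples):

```
(gdb) x/8wx $esp                 # 32-bit: examine 8 4-byte words at the stack pointer
(gdb) x/8gx $rsp                 # 64-bit: examine 8 8-byte "giant" words
(gdb) info registers rsp rbp     # 64-bit register names: rsp, rbp, rip
```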
SIGSEGV is the signal raised when your program attempts to access a memory location where it is not supposed to. Two typical scenarios are:
Dereferencing an uninitialized pointer.
Accessing an array out of bounds.
Note, however, that even in these two cases there is no guarantee that SIGSEGV will happen. So don't expect the SIGSEGV message to always be the same, even with the same code.
How do you make use of segmentation and paging to prevent buffer overflow?
One guess might be: because segmentation gives each process only a portion of memory, and if the process tries to access an address outside its segment, a segfault occurs. Please tell me whether that is correct.
Thank you!
Segmentation / paging will not prevent your code from attempting to access memory outside of its boundaries. That is the definition of a buffer overflow, and no sort of memory protection will prevent broken code from attempting to do things it is not allowed to do.
What segmentation or paging can do is prevent your code from successfully accessing memory it doesn't own. The only option an operating system really has is to kill a process that the hardware has caught attempting to do something "bad".
I need to panic the kernel after some operations are done and verify what the operations did.
Can someone tell me if there is a way? I have searched a lot but had no luck.
I am looking for some generic call.
Thanks in advance!
You can try a sysrq trigger:
echo c > /proc/sysrq-trigger
'c' - Will perform a system crash by a NULL pointer dereference.
A crashdump will be taken if configured.
The higher address range is mapped to the kernel. Thus if you write something there, say to 0xFFFFFF7, the kernel exits your process with a segmentation fault, complaining that an illegal memory location was accessed.
In user land your process is more like a sandbox, and any illegal access of memory outside your process is punished by the kernel killing your process with a segmentation violation.
To panic the kernel itself, you can try setting some wrong hardware registers, typically via an invocation of a sysctl system call.
Consider the following code in C:
int n;
scanf("%d",n);
It gives the error "Segmentation fault (core dumped)" with GCC on Linux Mandriva,
but the following code
int *p=NULL;
*p=8;
gives only "Segmentation fault". Why is that so?
A core dump is a file containing a dump of the state and memory of a program at the time it crashed. Since core dumps can take non-trivial amounts of disk space, there is a configurable limit on how large they can be. You can see it with ulimit -c.
Now, when you get a segmentation fault, the default action is to terminate the process and dump core. Your shell tells you what happened: if a process terminated with a segmentation fault signal, it prints "Segmentation fault", and if that process additionally dumped core (which requires that the ulimit setting and the permissions on the directory where the core dump is to be generated allow it), it tells you so.
Assuming you're running both of these on the same system, with the same ulimit -c settings (which would be my first guess as to the difference you're seeing), then it's possible the optimizer is "noticing" the clearly undefined behavior in the second example and generating its own exit. You could check with objdump -x.
In the first case, 'n' could have any value, so it is interpreted as a pointer to memory you might own (or not), which might be writable (or not), but which probably exists. There is no reason that n is necessarily zero.
Writing to NULL is definitely naughty and something the OS is going to notice!