PTrace Not Recognizing Child Process - c

I'm writing a program that monitors system calls (among other things), but I'm having some trouble getting ptrace to recognize the process ID that I'm passing to it. Upon executing the program, I get this error message:
:No such process
However, I have verified the process ID right before the call by printing it to the console and verifying it with ps -all.
Here's some of the code that may be relevant (I can post more if necessary):
At the start of the child process:
/* Call to be traced */
if (ptrace (PTRACE_TRACEME, 0, 0, 0) < 0){
    perror ("Process couldn't be traced");
    exit (-1);
}
/* Execute process image */
if (execv (ProcessArgs[0], &ProcessArgs[1]) < 0){
    perror ("Couldn't execute process");
    exit (-1);
}
In a thread of the parent process:
DbgdProcess * _Process = ( DbgdProcess * ) _ProcessPass;
int SystemCall = 0,
    Status     = 0;
/* I have tried sleep(1) here to wait for PTRACE_TRACEME to no avail */
while (!_Process->CloseSignal){
    if ( wait (&Status) < 0)       // error handler
    if ( WIFEXITED (Status))       // error handler
    if (!WIFSTOPPED (Status)) continue;
    SystemCall = ptrace (PTRACE_PEEKUSER, _Process->ID, 4 * ORIG_RAX, 0);
    if (SystemCall < 0)            // error handler
    printf ("Process made system call %d\n", SystemCall);
    if (ptrace (PTRACE_CONT, _Process->ID, 0, 0) < 0)   // error handler
}
Can anyone explain this behavior to me?
Some extra notes:
- The process being debugged is a direct child of the parent
- I'm pretty sure this is a 64-bit build, because sys/reg.h only defines RAX
- All error handlers include a perror() message
Update:
I've read this from the man page:
Most ptrace commands (all except PTRACE_ATTACH, PTRACE_SEIZE,
PTRACE_TRACEME, PTRACE_INTERRUPT, and PTRACE_KILL) require the tracee
to be in a ptrace-stop, otherwise they fail with ESRCH.
ESRCH, I believe, gives the message 'No such process'. So maybe the process is not ptrace-stopped when I make the ptrace call?
Update:
I was testing the code in this example. I did get it to work after doing the following:
- updating an outdated header include
- changing (eax_orig * 4) to (rax_orig * 8)
But those changes are already in my program as well, and it's still not working.
Update:
I've got my code working. I'm not entirely sure why, but it started working after I called PTRACE_ATTACH from the same thread that makes the polling ptrace(2) calls. I guess that would mean ptrace must be used from the same thread of the parent process, but I'm not entirely sure. My question now is: does anyone know whether that's true? Or, if not, why does ptrace behave this way?
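Roughly, the pattern that now works for me looks like this (a minimal sketch with the error handling stripped out; I've also switched PTRACE_CONT to PTRACE_SYSCALL so the child actually stops at each system call, and the names other than DbgdProcess are made up):
/* Assumes <stdio.h>, <sys/ptrace.h>, <sys/reg.h> and <sys/wait.h> are included. */
void * DebugThread (void * _ProcessPass)   /* hypothetical thread routine */
{
    DbgdProcess * _Process = (DbgdProcess *) _ProcessPass;
    int Status = 0;
    long SystemCall = 0;

    /* Attach here, in the same thread that will wait() and make further ptrace calls */
    if (ptrace (PTRACE_ATTACH, _Process->ID, 0, 0) < 0)
        return NULL;

    while (!_Process->CloseSignal) {
        if (waitpid (_Process->ID, &Status, 0) < 0)
            break;
        if (WIFEXITED (Status))
            break;
        if (!WIFSTOPPED (Status))
            continue;
        /* 8 * ORIG_RAX on 64-bit, as noted above */
        SystemCall = ptrace (PTRACE_PEEKUSER, _Process->ID, 8 * ORIG_RAX, 0);
        printf ("Process made system call %ld\n", SystemCall);
        /* PTRACE_SYSCALL stops the tracee again at the next syscall entry/exit */
        if (ptrace (PTRACE_SYSCALL, _Process->ID, 0, 0) < 0)
            break;
    }
    return NULL;
}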
Update:
I found this link, which seems to suggest my problem is not unheard of.

sleep(1) is sometimes not enough; try sleep(5).

Why are you doing a PTRACE_SYSCALL before checking whether the process has stopped?
Ideally, in the parent thread, you should wait for the child to stop by using wait.
Only once the child has stopped (WIFSTOPPED) should you use any other ptrace calls.
It appears that ESRCH is being returned by PTRACE_SYSCALL. Can you please confirm?

Related

popen returns -1 unexpectedly

I have a program that needs a file list that is most easily generated by a script (which does a few configurable things besides generating the list).
In essence I do
fp = popen ("thescript", "r");
then
while (fgets (buf, 1024, fp))
and process the lines, and at the end:
rv = pclose (fp);
The (bash) script ends with exit 0. But when run "normally" the pclose call returns -1, ECHILD: No child process.
What I like about Linux is that I can normally find such problems by running strace and seeing what really happens. Not this time: when running strace as a normal user, the mount in the script fails, so the script does exit 1 and rv reflects that. When I change that to exit 0 (even though the mount fails!), the return value rv reflects that (rv == 0, the GUI does not print an error message). When I run the whole thing as root while tracing, it works (rv == 0, no error message displayed).
I've written a short test program, and it all works as expected.
At first I wrote that the code is proprietary, but decided it is simple enough to publish... Here is the actual code that "malfunctions"; get_str_param() returns the name of the script to run.
files = popen(get_str_param("IMPORT_LIST"), "r");
//printf ("calling cmd for file list: %s\n", get_str_param("IMPORT_LIST"));
while (fgets(buf, 1024, files)) {
    if ((p = strstr(buf, ".apl"))) {
        *p = 0;
        if (strstr(buf, " ")) continue;   // ignore files with spaces
        fl_add_browser_line(fd_import->applications, buf);
    }
}
rv = pclose(files);
if (rv) {
    printf ("Can't read file list! rv=%d\n", rv);
}
The "fl_add_browser_line" is from the "xforms" library.
So... what could possibly cause an ECHILD, only when NOT tracing the program with strace?
It's not shown in the code excerpt that was posted, but apparently some other part of the program establishes a SIGCHLD handler. This will interfere with functions like pclose() and system(). The documentation says:
The pclose() function waits for the associated process to terminate and returns the exit status of the command as returned by wait4(2).
If there's another SIGCHLD handler, and it calls wait(), then when pclose() tries to get the exit status itself the child will be gone, and it will get ECHILD.
You need to disable this handler while running the function that uses popen() and pclose(). Or write your own code that runs the child process and works in conjunction with the rest of your code that deals with forking.
You need to be careful about setting signal(SIGCHLD, SIG_DFL) around this code. If some other child process exits during this period, I don't think your regular handler will be notified when you re-establish the handler. I think you can deal with this by calling the regular handler explicitly after adding the handler back; when it calls wait() it will pick up any outstanding terminated processes.
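Something along these lines, for example (a sketch; run_script_with_default_sigchld is a made-up helper, and sigaction() would be more robust than signal()):
#include <signal.h>
#include <stdio.h>

/* Temporarily restore default SIGCHLD handling around popen()/pclose(),
   so pclose() itself gets to reap the child and read its exit status. */
int run_script_with_default_sigchld(const char *cmd)
{
    void (*old_handler)(int) = signal(SIGCHLD, SIG_DFL);
    char buf[1024];
    int rv = -1;

    FILE *fp = popen(cmd, "r");
    if (fp) {
        while (fgets(buf, sizeof buf, fp)) {
            /* process the lines */
        }
        rv = pclose(fp);                 /* now returns the script's exit status */
    }

    signal(SIGCHLD, old_handler);        /* re-establish the application's handler */
    return rv;
}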

waitpid() for non-child processes [duplicate]

For child processes, the wait() and waitpid() functions can be used to suspend execution of the current process until a child has exited, but these functions cannot be used for non-child processes.
Is there another function that can wait for the exit of any process?
There is nothing equivalent to wait(). The usual practice is to poll using kill(pid, 0), looking for a return value of -1 and an errno of ESRCH to indicate that the process is gone.
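For example (a rough sketch; wait_for_exit is a made-up name and the one-second polling interval is arbitrary):
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Poll with kill(pid, 0) until the process no longer exists. */
void wait_for_exit(pid_t pid)
{
    /* kill() returning 0, or failing with EPERM, both mean the process still exists;
       only ESRCH means it is gone. */
    while (kill(pid, 0) == 0 || errno != ESRCH)
        sleep(1);   /* arbitrary polling interval */
}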
Update: Since linux kernel 5.3 there is a pidfd_open syscall, which creates an fd for a given pid, which can be polled to get notification when pid has exited.
On BSDs and OS X, you can use kqueue with EVFILT_PROC+NOTE_EXIT to do exactly that. No polling required. Unfortunately there's no Linux equivalent.
So far I've found three ways to do this on Linux:
- Polling: you check for the existence of the process every so often, either by using kill or by testing for the existence of /proc/$pid, as in most of the other answers
- Use the ptrace system call to attach to the process like a debugger so you get notified when it exits, as in a3nm's answer
- Use the netlink interface to listen for PROC_EVENT_EXIT messages - this way the kernel tells your program every time a process exits and you just wait for the right process ID. I've only seen this described in one place on the internet.
Shameless plug: I'm working on a program (open source of course; GPLv2) that does any of the three.
You could also create a socket or a FIFO and read from them. The FIFO is especially simple: connect the standard output of your child to the FIFO, and read. The read will block until the child exits (for any reason) or until it emits some data, so you'll need a little loop to discard the unwanted text data.
If you have access to the source of the child, open the FIFO for writing when it starts and then simply forget about it. The OS will clean the open file descriptor when the child terminates and your waiting "parent" process will wake up.
Now this might be a process which you didn't start or own. In that case, you can replace the binary executable with a script that starts the real binary but also adds monitoring as explained above.
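A minimal sketch of the read side of the FIFO approach (the path and helper name are made up; the monitored process is assumed to hold the write end open for its lifetime):
#include <fcntl.h>
#include <unistd.h>

/* Block until the process holding the write end of the FIFO exits. */
int wait_on_fifo(const char *fifo_path)   /* e.g. a path created earlier with mkfifo() */
{
    char buf[256];
    int fd = open(fifo_path, O_RDONLY);   /* blocks until a writer has the FIFO open */
    if (fd < 0)
        return -1;
    /* Discard any data; read() returns 0 (EOF) once the last writer closes it,
       which happens automatically when the process exits. */
    while (read(fd, buf, sizeof buf) > 0)
        ;
    close(fd);
    return 0;
}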
Here is a way to wait for any process (not necessarily a child) in Linux to exit (or get killed) without polling:
Using inotify to wait for /proc/[pid] to be deleted would be the perfect solution, but unfortunately inotify does not work with pseudo-filesystems like /proc.
However we can use it with the executable file of the process.
While the process still exists, this file is being held open.
So we can use inotify with IN_CLOSE_NOWRITE to block until the file is closed.
Of course it can be closed for other reasons (e.g. if another process with the same executable exits) so we have to filter those events by other means.
We can use kill(pid, 0), but that can't guarantee if it is still the same process. If we are really paranoid about this, we can do something else.
Here is a way that should be 100% safe against pid-reuse trouble: we open the pseudo directory /proc/[pid] and keep it open until we are done. If a new process is created in the meantime with the same pid, the directory file descriptor that we hold will still refer to the original one (or become invalid, if the old process ceases to exist), but will NEVER refer to the new process with the reused pid. We can then check whether the original process still exists by checking, for example, whether the file "cmdline" exists in the directory with openat(). When a process exits or is killed, those pseudo files cease to exist too, so openat() will fail.
Here is some example code:
#include <fcntl.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

// return -1 on error, or 0 if everything went well
int wait_for_pid(int pid)
{
    char path[32];
    int in_fd = inotify_init();

    /* Watch the executable of the process; it is held open while the process lives. */
    sprintf(path, "/proc/%i/exe", pid);
    if (inotify_add_watch(in_fd, path, IN_CLOSE_NOWRITE) < 0) {
        close(in_fd);
        return -1;
    }

    /* Keep the /proc/[pid] directory open so it can never refer to a reused pid. */
    sprintf(path, "/proc/%i", pid);
    int dir_fd = open(path, 0);
    if (dir_fd < 0) {
        close(in_fd);
        return -1;
    }

    int res = 0;
    while (1) {
        struct inotify_event event;
        if (read(in_fd, &event, sizeof(event)) < 0) {
            res = -1;
            break;
        }
        /* The executable was closed by someone; check whether our process is still there. */
        int f = openat(dir_fd, "fd", 0);
        if (f < 0) break;   /* the pseudo files are gone: the process has exited */
        close(f);
    }

    close(dir_fd);
    close(in_fd);
    return res;
}
You could attach to the process with ptrace(2). From the shell, strace -p PID >/dev/null 2>&1 seems to work. This avoids busy-waiting, though it will slow down the traced process, and will not work on all processes (only yours, which is a bit better than only child processes).
None I am aware of. Apart from the solution from chaos, you can use semaphores if you can change the program you want to wait for.
The library functions are sem_open(3), sem_init(3), sem_wait(3), ...
sem_wait(3) performs a wait, so you don't have to do busy waiting as in chaos's solution. Of course, using semaphores makes your programs more complex, and it may not be worth the trouble.
Maybe it could be possible to wait for /proc/[pid] or /proc/[pid]/[something] to disappear?
There are poll() and other file event waiting functions, maybe that could help?
Since Linux kernel 5.3 there is a pidfd_open syscall, which creates an fd for a given pid that can be polled to get a notification when the pid has exited.
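For example (a sketch; it assumes kernel headers new enough to define SYS_pidfd_open, since older glibc has no wrapper for the syscall):
#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Wait for an arbitrary (non-child) pid to exit using pidfd_open (Linux >= 5.3). */
int wait_for_pid_exit(pid_t pid)
{
    int pidfd = syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0) {
        perror("pidfd_open");
        return -1;
    }

    /* The pidfd becomes readable once the process has terminated. */
    struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
    int rc = poll(&pfd, 1, -1);

    close(pidfd);
    return rc < 0 ? -1 : 0;
}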
Simply poll fields 22 and 2 of /proc/[PID]/stat.
Field 2 contains the name of the executable and field 22 contains the process start time.
If they change, some other process has taken the same (freed) PID, so the method is very reliable.
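For example, reading field 22 looks roughly like this (read_starttime is a made-up helper; field 2 can be handled the same way by keeping the text between the parentheses):
#include <stdio.h>
#include <string.h>

/* Read field 22 (starttime) of /proc/[pid]/stat; returns 0 on success, -1 otherwise. */
int read_starttime(int pid, unsigned long long *starttime)
{
    char path[64], buf[1024];
    snprintf(path, sizeof path, "/proc/%d/stat", pid);

    FILE *f = fopen(path, "r");
    if (!f)
        return -1;                        /* process gone (or unreadable) */
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';

    /* Field 2 (comm) may contain spaces, so scan from just after the closing ')'. */
    char *p = strrchr(buf, ')');
    if (!p)
        return -1;
    /* Skip fields 3..21, then read field 22. */
    return sscanf(p + 1,
                  " %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s"
                  " %*s %*s %*s %*s %*s %*s %*s %*s %*s %llu",
                  starttime) == 1 ? 0 : -1;
}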
You can use eBPF to achieve this.
The bcc toolkit implements many excellent monitoring capabilities based on eBPF. Among them, exitsnoop traces process termination, showing the command name and reason for termination,
either an exit or a fatal signal.
It catches processes of all users, processes in containers, as well as processes that become zombies.
This works by tracing the kernel sched_process_exit() function using dynamic tracing, and
will need updating to match any changes to this function.
Since this uses BPF, only the root user can use this tool.
You can refer to this tool for related implementation.
You can get more information about this tool from the link below:
Github repo: tools/exitsnoop: Trace process termination (exit and fatal signals). Examples.
Linux Extended BPF (eBPF) Tracing Tools
ubuntu manpages: exitsnoop-bpfcc
You can first install this tool and use it to see if it meets your needs, and then refer to its implementation for coding, or use some of the libraries it provides to implement your own functions.
exitsnoop examples:
Trace all process termination
# exitsnoop
Trace all process termination, and include timestamps:
# exitsnoop -t
Exclude successful exits, only include non-zero exit codes and fatal signals:
# exitsnoop -x
Trace PID 181 only:
# exitsnoop -p 181
Label each output line with 'EXIT':
# exitsnoop --label EXIT
Another option
Wait for a (non-child) process' exit using Linux's PROC_EVENTS
Reference project:
https://github.com/stormc/waitforpid
As mentioned in the project:
Wait for a (non-child) process' exit using Linux's PROC_EVENTS. Thanks
to the CAP_NET_ADMIN POSIX capability permitted to the waitforpid
binary, it does not need to be set suid root. You need a Linux kernel
having CONFIG_PROC_EVENTS enabled.
I appreciate Hongli's answer for macOS with kqueue. I implemented it in Swift:
/// Wait any pids, including non-child pid. Block until all pids exit.
/// - Parameters:
/// - timeout: wait until interval, nil means no timeout
/// - Throws: WaitOtherPidError
/// - Returns: isTimeout
func waitOtherPids(_ pids: [Int32], timeout: TimeInterval? = nil) throws -> Bool {
// create a kqueue
let kq = kqueue()
if kq == -1 {
throw WaitOtherPidError.createKqueueFailed(String(cString: strerror(errno)!))
}
// input
// multiple changes is OR relation, kevent will return if any is match
var changes: [Darwin.kevent] = pids.map({ pid in
Darwin.kevent.init(ident: UInt(pid), filter: Int16(EVFILT_PROC), flags: UInt16(EV_ADD | EV_ENABLE), fflags: NOTE_EXIT, data: 0, udata: nil)
})
let timeoutDeadline = timeout.map({ Date(timeIntervalSinceNow: $0)})
let remainTimeout: () -> timespec? = {
if let deadline = timeoutDeadline {
let d = max(deadline.timeIntervalSinceNow, 0)
let fractionalPart = d - TimeInterval(Int(d))
return timespec(tv_sec: Int(d), tv_nsec: Int(fractionalPart * 1000 * 1000 * 1000))
} else {
return nil
}
}
// output
var events = changes.map{ _ in Darwin.kevent.init() }
while !changes.isEmpty {
// watch changes
// sync method
let numOfEvent: Int32
if var timeout = remainTimeout() {
numOfEvent = kevent(kq, changes, Int32(changes.count), &events, Int32(events.count), &timeout);
} else {
numOfEvent = kevent(kq, changes, Int32(changes.count), &events, Int32(events.count), nil);
}
if numOfEvent < 0 {
throw WaitOtherPidError.keventFailed(String(cString: strerror(errno)!))
}
if numOfEvent == 0 {
// timeout. Return directly.
return true
}
// handle the result
let realEvents = events[0..<Int(numOfEvent)]
let handledPids = Set(realEvents.map({ $0.ident }))
changes = changes.filter({ c in
!handledPids.contains(c.ident)
})
for event in realEvents {
if Int32(event.flags) & EV_ERROR > 0 { // #see 'man kevent'
let errorCode = event.data
if errorCode == ESRCH {
// "The specified process to attach to does not exist"
// ignored
} else {
print("[Error] kevent result failed with code \(errorCode), pid \(event.ident)")
}
} else {
// succeeded event, pid exit
}
}
}
return false
}
enum WaitOtherPidError: Error {
case createKqueueFailed(String)
case keventFailed(String)
}
PR_SET_PDEATHSIG can be used to wait for parent process termination
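A minimal sketch of that approach (Linux-specific; the choice of SIGTERM is arbitrary, and the call has to be made by the process that wants to learn about its parent's death):
#include <signal.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

/* Ask the kernel to deliver SIGTERM to this process when its parent terminates. */
void notify_on_parent_exit(void)
{
    if (prctl(PR_SET_PDEATHSIG, SIGTERM) == -1)
        perror("prctl");
    /* Guard against the race where the parent already exited before the call. */
    if (getppid() == 1)
        raise(SIGTERM);
}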

C-program does not return from wait-statement

I have to migrate a C program from OpenVMS to Linux, and am now having difficulties with a program that generates subprocesses. A subprocess is generated (fork works fine), but execve fails (which is correct, as the wrong program name is given).
But to reset the number of active subprocesses, I afterwards call wait(), which does not return. When I look at the processes via ps, I see that there are no more subprocesses, but wait() does not return with ECHILD as I had expected.
while (jobs_to_be_done)
{
    if (running_process_cnt < max_process_cnt)
    {
        if ((pid = vfork()) == 0)
        {
            params[0] = param1 ;
            params[1] = NULL ;
            if ((cstatus = execv(command, params)) == -1)
            {
                perror("Child - Exec failed") ;   // this happens
                exit(EXIT_FAILURE) ;
            }
        }
        else if (pid < 0)
        {
            printf("\nMain - Child process failed") ;
        }
        else
        {
            running_process_cnt++ ;
        }
    }
    else   // no more free process slot, wait
    {
        if ((pid = wait(&cstatus)) == -1)   // does not return from this statement
        {
            if (errno != ECHILD)
            {
                perror("Main: Wait failed") ;
            }
            anz_sub = 0 ;
        }
        else
        {
            ...
        }
    }
}
Is there anything that has to be done to tell wait() that there are no more subprocesses?
With OpenVMS the program works fine.
Thanks a lot in advance for your help
I don't recommend using vfork these days on Linux, since fork(2) is efficient enough, thanks to lazy copy-on-write techniques in the Linux kernel.
You should check the result of fork. Unless it is failing, a process has been created, and wait (or waitpid(2), perhaps with WNOHANG if you don't want to really wait, but just find out about already ended child processes ...) should not fail (even if the exec function in the child has failed, the fork did succeed).
You might also carefully use the SIGCHLD signal, see signal(7). A defensive way of using signals is to set some volatile sigatomic_t flag in signal handlers, and test and clear these flags inside your loop. Recall that only async signal safe functions (and there are quite few of them) can be called -even indirectly- inside a signal handler. Read also about POSIX signals.
Take time to read Advanced Linux Programming to get a wider picture in your mind. Don't try to mimic OpenVMS on POSIX, but think in a POSIX or Linux way!
You probably want to call waitpid on every iteration of your loop, perhaps (sometimes or always) with WNOHANG. So waitpid should not be called only in the else part of your if (running_process_cnt < max_process_cnt), but in every iteration of the loop.
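For instance, something like this at the top of every iteration (a fragment; running_process_cnt is the counter from your question):
/* Reap any children that have already finished, without blocking. */
int cstatus;
pid_t done;
while ((done = waitpid(-1, &cstatus, WNOHANG)) > 0)
    running_process_cnt--;
/* done == 0  : children exist, but none has exited yet
   done == -1 : no children left (errno == ECHILD), or a real error */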
You might want to compile with all warnings & debug info (gcc -Wall -Wextra -g) then use the gdb debugger. You could also strace(1) your program (probably with -f)
You might want to learn about memory overcommitment. I dislike this feature and usually disable it (e.g. by running echo 0 > /proc/sys/vm/overcommit_memory as root). See also proc(5) -which is very useful to know about...
From man vfork:
The child must not return from the current function or call exit(3), but may call _exit(2)
You must not call exit() when the call to execv (after vfork) fails - you must use _exit() instead. It is quite possible that this alone is causing the problem you see with wait not returning.
I suggest you use fork instead of vfork. It's much easier and safer to use.
If that alone doesn't solve the problem, you need to do some debugging or reduce the code down until you find the cause. For example the following should run without hanging:
#include <sys/wait.h>

int main(int argc, char ** argv)
{
    pid_t pid;
    int cstatus;
    pid = wait(&cstatus);
    return 0;
}
If you can verify that this program doesn't hang, then it must be some aspect of your program that is causing a hang. I suggest putting in print statements just before and after the call to wait.

main process -> pthread -> fork + execvp

I am seeing a strange issue.
Sometimes, when I run my program long enough, I see that there are two copies of my program running. The second is a child process of the first, since I see that the parent PID of the second one is that of the first one.
I realized that I have a fork in my code, and it's only because of this that I can have two copies running -- I can otherwise never have two copies of my program running.
This happens very rarely, but it does happen.
The architecture is as follows:
The main program gets an event and spawns a pthread. In that thread I do some processing, and based on some result I do a fork immediately followed by an execvp.
I realize that it's not best to call fork from a pthread, but in my design the main process gets many events, and the only way to work on all those events in parallel was to use pthreads. Each pthread does some processing, and in certain cases it needs to call a different program (for which I use execvp). Since I had to call a different program, I had to use fork.
I am wondering: because I am eventually calling fork from a thread context, is it possible that multiple threads call fork + execvp in parallel and this "somehow" results in two copies being created?
If this is indeed happening, would it help if I protect the code that does fork + execvp with a mutex, since that would result in only one thread calling fork + execvp?
However, if I take a mutex before fork + execvp, then I don't know when to release it.
Any help here would be appreciated.
Thread code that does fork + execvp -- in case you can spot an issue there:
In main.c
status = pthread_create(&worker_thread, tattr,
do_some_useful_work, some_pointer);
[clipped]
void *do_some_useful_work (void * arg)
{
    /* Do some processing and fill pArguments array */
    child_pid = fork();
    if (child_pid == 0)
    {
        char *temp_log_file;
        temp_log_file = (void *) malloc (strlen(FORK_LOG_FILE_LOCATION) +
                                         strlen("/logfile.") + 8);
        sprintf (temp_log_file, "%s/logfile.%d%c", FORK_LOG_FILE_LOCATION, getpid(), '\0');

        /* Open log file */
        int log = creat(temp_log_file, 0777);

        /* Redirect stdout to log file */
        close(1);
        dup(log);

        /* Redirect stderr to log file */
        close(2);
        dup(log);

        syslog(LOG_ERR, "Opening up log file %s\n", temp_log_file);
        free (temp_log_file);
        close (server_sockets_that_parent_is_listening_on);
        execvp ("jazzy_program", pArguments);
    }
    pthread_exit (NULL);
    return NULL;
}
I looked through this code and I see no reason why I would do a fork and not do an execvp -- so the only scenario that comes to my mind is that multiple threads get executed and they all call fork + execvp. This sometimes causes two copies of my main program to run.
In the case where execvp fails for any reason (perhaps too many processes, out of memory, etc.), you fail to handle the error; instead, the forked copy of the thread keeps running. Calling pthread_exit (or any non-async-signal-safe function) in this process has undefined behavior, so it might not exit properly but hang or do something unexpected. You should always check for exec failure and immediately call _exit(1) or similar when this happens. Also, while this probably isn't your problem, it's unsafe to call malloc after forking in a multithreaded process, since malloc is not async-signal-safe.
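Concretely, the end of the child branch should look something like this (same names as in your code):
close (server_sockets_that_parent_is_listening_on);
execvp ("jazzy_program", pArguments);
/* execvp only returns on failure; never fall through into pthread_exit() here */
_exit (127);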

How to cancel an alarm() signal via a child process?

For an assignment, I am working on creating a time-aware shell. The shell forks and executes commands and kills them if they run for more than a set amount of time. For example:
input# /bin/ls
a.out code.c
input# /bin/cat
Error - Expired After 10 Seconds.
input#
Now, my question is: is there a way to prevent the alarm from starting if an error occurs while executing the program, that is, when execve returns -1?
The child process runs separately, and after hours of experimenting and research I have yet to find anything that discusses or even hints at this type of task, so I have a feeling it may be impossible. If it is indeed impossible, how can I prevent something like the following from happening...
input# /bin/fgdsfgs
Error executing program
input# Error - Expired After 10 Seconds.
For context, here is the code I am currently working with, with my attempt at doing this myself removed. Thanks for the help in advance!
while(1){
    write(1, prompt, sizeof(prompt));        //Prompt user
    byteCount = read(0, cmd, 1024);          //Retrieve command from user, and count bytes
    cmd[byteCount-1] = '\0';                 //Prepare command for execution

    //Create child process
    child = fork();
    if(child == -1){
        write(2, error_fork, sizeof(error_fork));
    }
    if(child == 0){                          //Working in child
        if(-1 == execve(cmd,arg,env)){       //Execute program or error
            write(2, error_exe, sizeof(error_exe));
        }
    }else if(child != 0){                    //Working in the parent
        signal(SIGALRM, handler);            //Handle the alarm when it goes off
        alarm(time);
        wait();
        alarm(0);
    }
}
According to the man page:
Description
The alarm() function shall cause the system to generate a SIGALRM signal for the process after the number of realtime seconds specified by seconds have elapsed. Processor scheduling delays may prevent the process from handling the signal as soon as it is generated.
If seconds is 0, a pending alarm request, if any, is canceled.
Alarm requests are not stacked; only one SIGALRM generation can be scheduled in this manner. If the SIGALRM signal has not yet been generated, the call shall result in rescheduling the time at which the SIGALRM signal is generated.
Interactions between alarm() and any of setitimer(), ualarm(), or usleep() are unspecified.
So, to cancel an alarm: alarm(0). It is even present in your sample code.
The main problem
By the way, you're missing an important piece here:
if(child == 0){ //Working in child
    if(-1 == execve(cmd,arg,env)){ //Execute program or error
        write(2, error_exe, sizeof(error_exe));
        _exit(EXIT_FAILURE); // EXIT OR A FORKED SHELL WILL KEEP GOING
    }
}else if(child != 0){ //Working in the parent
The wait() system call takes an argument; why aren't you compiling with the right headers in the source file and with compiler warnings (preferably errors) for undeclared functions? Or, if you are getting such warnings, pay heed to them before submitting code for review on places like StackOverflow.
You don't need to test the return status from execve() (or any of the exec*() functions); if it returns, it failed.
It is good to write an error on failure. It would be better if the child process exited as well, so that it doesn't go back into the while (1) loop, competing with your main shell for input data.
if (child == 0)
{
    execve(cmd, arg, env);
    write(2, error_exe, sizeof(error_exe));
    exit((errno == ENOEXEC) ? 126 : 127);
}
In fact, the non-exiting of your child is the primary cause of your problem; the wait doesn't return until the alarm goes off because the child hasn't exited. The exit statuses shown are intended to match the POSIX shell specification.
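With the child exiting on exec failure, the parent's timeout logic then behaves as intended; roughly (a fragment using the handler, time and child variables from the question):
int status;
signal(SIGALRM, handler);       /* handle the alarm when it goes off, as in the question */
alarm(time);                    /* arm the timeout */
waitpid(child, &status, 0);     /* returns as soon as the child exits,
                                   including immediately when execve fails */
alarm(0);                       /* child finished in time: cancel the pending alarm */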
