C - How to check if a process is a system process? - c

I'm trying to detect a fork bomb, and as part of that I am trying to calculate the descendant count of each process. However, I only want to calculate the descendant count for non-system processes, as the fork bomb will be a non-system process. I'm unsure how to do that. This is what I have so far:
struct task_struct *pTask;
for_each_process(pTask)
{
    struct task_struct *p;

    //trace back to every ancestor of this process
    for (p = pTask; p != &init_task; p = p->parent)
    {
        //increment the descendant count of p's parent
    }
}
This loop goes up to the init task, correct, since it ends at &init_task? Is there any way to instead stop at the first system process? Because, for example, the fork bomb will be the immediate child of a system process. Any help would be greatly appreciated, thank you!
[EDIT]
And by system process, I mean things like bash, for example. I should have explained more clearly, but at the basic level, I don't want to delete any process that runs from boot-up. Any process originating from user space that is run after boot-up is fair game, but other processes are not. And I will not be checking for anything like tomcat or httpd, because I know 100% that those processes will not be running.

A login bash shell is execed by another process (which process depends on whether it is a console login shell, an ssh login shell, a gnome-terminal shell, etc.). The process execing bash is execed by init or by some other process launched by init, not by the kernel.
A user can easily create a bash script that forks itself, so if you exempt /bin/bash from your checking then fork bombs written in bash will not be detected. For example, the following script, put in a file called foo and executed in the current directory, will create a fork bomb.
#!/bin/bash
while [ /bin/true ]
do
./foo &
done
Take a look at ulimit in bash(1) or setrlimit(2) if you want to limit the number of processes a particular user can run.
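If you go the rlimit route, a minimal userspace sketch on Linux might look like the following (256 is an arbitrary example value, not a recommendation):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current RLIMIT_NPROC values for this user. */
    if (getrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }

    /* Cap the number of processes this user may create. 256 is an
     * arbitrary example and must not exceed rl.rlim_max. */
    rl.rlim_cur = 256;
    if (setrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("setrlimit");
        return 1;
    }

    /* Any fork() beyond the limit now fails with EAGAIN. */
    return 0;
}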
Or, you could set a really high threshold for the descendant count that triggers killing a process. If the chain of parents back to init is several hundred deep, then probably something fishy is going on.

You can use the logind / ConsoleKit2 D-Bus APIs to determine the session of a process by its PID. "System" processes (IIRC) will have no session.
Obviously this requires that logind (part of Systemd) or ConsoleKit2 are running.
Without such a service that tracks user sessions, there may be no reliable way to differentiate a "system" process from a user process, except perhaps by user-id (assuming that your fork bomb won't be running as a system user). On many distributions system user IDs are less than 1000, and regular user IDs are >= 1000, except for the "nobody" user (normally 65534).
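If you go the user-id route inside the kernel module from the question, a rough sketch might look like this (assuming a kernel recent enough to have kuid_t, task_uid() and from_kuid(), and assuming the UID >= 1000 convention holds on your distribution):

#include <linux/types.h>
#include <linux/sched.h>
#include <linux/cred.h>
#include <linux/uidgid.h>
#include <linux/user_namespace.h>

static bool is_probably_user_process(struct task_struct *task)
{
    /* task_uid() reads the task's credentials under RCU for us. */
    kuid_t kuid = task_uid(task);
    uid_t uid = from_kuid(&init_user_ns, kuid);

    /* Heuristic only: most distributions give regular users UIDs >= 1000,
     * and 65534 is conventionally "nobody". */
    return uid >= 1000 && uid != 65534;
}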

Related

waitpid() function returns ERROR (-1), why?

I'm writing a Linux shell-like program in C.
Among others, I'm implementing two built-in commands: jobs and history.
In jobs, I print the list of commands currently running in the background.
In history, I print the list of all commands run so far, specifying for each command whether it's RUNNING or DONE.
To implement the two, my idea was to have a list of commands, mapping each command name to its PID. Once the jobs/history command is called, I run through them, check which ones are running or done, and print accordingly.
I read online that the function waitpid(pid, &status, WNOHANG) can detect from the PID whether a process is still running or done, without blocking.
It works well, except for this:
While a program is still alive, the call reports it as running.
When a program is done, the first call reports it as done, and from then on, every further call with the same PID returns -1 (ERROR).
For example, it would look like this (the & indicates a background command):
$ sleep 3 &
$ jobs
sleep ALIVE
$ jobs (within the 3 seconds)
sleep ALIVE
$ jobs (after 3 seconds)
sleep DONE
$ jobs
sleep ERROR
$ jobs
sleep ERROR
....
Also, this is not influenced by other commands I might run before or after; the behavior described above seems to be independent of other commands.
I read online various reasons why waitpid might return -1, but I wasn't able to identify the reason in my case. I also tried to find out how to determine what kind of waitpid error it is, but again without success.
My questions are:
Why do you think this behavior is happening?
If you have a solution, the ideal would be for it to keep returning DONE.
Any better idea of how to implement the jobs/history commands is also welcome.
One solution to this problem is that as soon as I get DONE, I mark the command as DONE and never call waitpid on it again before printing it. This would solve the issue, but I would remain in the dark as to WHY this is happening.
You should familiarize yourself with how child processes are handled in Unix environments. In particular, read about zombie processes.
When a process dies, it enters a 'zombie' state, so that its PID is still reserved and uniquely identifies the now-dead process. A successful wait on a zombie process frees up the process descriptor and its PID. Consequently, subsequent calls to wait on the same PID fail because there is no longer a process with that PID (unless a new process is allocated the same PID, in which case waiting on it would be a logical error).
You should restructure your program so that if a wait is successful and reports that a process is DONE, you record that information in your own data structure and never call wait on that PID again.
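A rough sketch of that bookkeeping (the struct job type and the state names are hypothetical, not taken from your code):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

enum job_state { JOB_RUNNING, JOB_DONE };

struct job {
    pid_t pid;
    char name[64];
    enum job_state state;
};

/* Poll a job once; after it is marked DONE we never wait on it again. */
static void update_job(struct job *j)
{
    int status;
    pid_t ret;

    if (j->state == JOB_DONE)
        return;                   /* already reaped: don't call waitpid again */

    ret = waitpid(j->pid, &status, WNOHANG);
    if (ret == 0) {
        /* child still running */
    } else if (ret == j->pid) {
        j->state = JOB_DONE;      /* reaped; the zombie and its PID are gone */
    } else {
        perror("waitpid");        /* e.g. ECHILD if it was already reaped */
    }
}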
For comparison, once a process is done, the Bourne shell reports it one last time and then removes it from the list of jobs:
$ sleep 10 &
$ jobs
[1] + Running sleep 10
$ jobs
[1] + Running sleep 10
$ jobs
[1] Done sleep 10
$ jobs
$

Implementing shell-like job control in C

I am trying to implement a simple shell in C and I am having a hard time implementing job control. Everything online seems complicated and I think some simplicity is always good. So let me ask this: after fork() is called, can I handle the Ctrl-Z signal with just two functions and just the PID?
I want to call a function, e.g. put_background(pid_t pid), when I hit Ctrl-Z to make the process with pid = pid run in the background, and finally call another function, e.g. put_foreground(pid_t pid), when I type fg and want the process with pid = pid to go to the foreground again.
So, is this possible? Any help is appreciated; code even more so.
I am trying to implement a simple shell in C and I am having a hard time implementing job control. Everything online seems complicated and I think some simplicity is always good. So let me ask this: after fork() is called, can I handle the Ctrl-Z signal with just two functions and just the PID?
Note that Ctrl-Z is meaningful primarily to the terminal driver. It causes a SIGTSTP to be sent to the foreground process group of the terminal in which that character was typed -- that is, the process group that has that terminal as its controlling one, and has permission to read from it. By default, this causes the processes in that group to stop, but that's it. You don't need to do anything to achieve that.*
I want to call a function, e.g. put_background(pid_t pid), when I hit Ctrl-Z to make the process with pid = pid run in the background, and finally call another function, e.g. put_foreground(pid_t pid), when I type fg and want the process with pid = pid to go to the foreground again.
By definition and design, at most one process group has control of a given terminal at any particular time. Thus, to move a foreground job to the background, all you need to do is move a different one to the foreground. That can be the shell itself or some other job under its control. The tcsetpgrp() library function accomplishes this. Unless it's the shell itself, you would also want to send a SIGCONT to that process group in case it was stopped.
You additionally need a mechanism to resume a stopped background job, but that's easy: just send that process group a SIGCONT.
So, is this possible? Any help is appreciated; code even more so.
Well sure, you could write one function for moving a job to the foreground and resuming it, and one for resuming a background job. The only information these functions need about the jobs they operate on is their process group IDs (which is the same as the process IDs of their initial processes).
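A minimal sketch of those two functions, assuming each job was started in its own process group whose ID you track, and that the shell's terminal is on STDIN_FILENO (the function names come from the question; error handling is omitted):

#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Move a job's process group to the foreground and resume it. */
void put_foreground(pid_t pgid)
{
    tcsetpgrp(STDIN_FILENO, pgid);   /* hand the terminal to that group */
    kill(-pgid, SIGCONT);            /* wake it in case Ctrl-Z stopped it */
}

/* Resume a stopped job in the background; the shell keeps the terminal. */
void put_background(pid_t pgid)
{
    kill(-pgid, SIGCONT);
}

/* When the foreground job stops or exits, the shell takes the terminal back. */
void reclaim_terminal(void)
{
    tcsetpgrp(STDIN_FILENO, getpgrp());
}

A real shell would also ignore SIGTTOU so that its own tcsetpgrp() calls are not stopped, and, as the footnote below notes, it must put each launched command into its own process group with setpgid() right after fork().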
But you also need to maintain some bookkeeping of the current active jobs, and you need to take some care about starting new jobs, and you need to monitor current jobs -- especially the foreground job -- so as to be able to orchestrate all of the transitions appropriately.
The GLIBC manual has an entire chapter on job control, including a substantial section specifically on implementing a job-control shell. This would probably be useful to you even if you are not writing for a GLIBC-based system. The actual code needed is not all that complicated, but getting it right requires a good understanding of a fairly wide range of concepts.
*But you do need to ensure that your shell puts commands it launches into process groups different from its own, else a Ctrl-Z will stop it, too.

How to keep track of all descendant processes to cleanup?

I have a program that can fork() and exec() multiple processes in a chain.
E.g.: process A --> fork, exec B --> fork, exec C --> fork, exec D. So A is the grandparent of C and the great-grandparent of D.
Now the problem is that I do not have any control over processes B, C and D. So, several things can happen.
A descendant process might call setsid() to change its process group and session.
Or one of the descendant processes dies (say C), and hence its child (D) is reparented to init.
Therefore, I can't rely on process group id or parent id to track all descendants of A. Is there any reliable way of keeping track of all descendants? More specifically, I would like to kill all the descendants (orphans and otherwise).
It would also be great if it were POSIX-compliant.
The POSIX way to do this is simply to use process groups. Descendant processes that explicitly change their process group / session are making a deliberate decision not to have their lifetime tracked by their original parent - they are specifically emancipating themselves from the parent's control. Such processes are not orphans - they are adults that have "flown the nest" and wish to exert control over their own lifetime.
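As a sketch of that approach (assuming the parent puts the child into a new process group right after fork() and that the descendants never call setsid(); the command name is a placeholder):

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: become the leader of a new process group. Every process
         * it forks or execs stays in this group unless it opts out. */
        setpgid(0, 0);
        execlp("some-command", "some-command", (char *)NULL);  /* placeholder */
        _exit(127);
    } else if (pid > 0) {
        setpgid(pid, pid);       /* also set it in the parent to avoid a race */
        /* ... later, kill the whole subtree that stayed in the group ... */
        sleep(5);
        kill(-pid, SIGTERM);     /* negative PID = signal the whole group */
        waitpid(pid, NULL, 0);
    }
    return 0;
}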
I agree with caf's general sentiment: if a process calls setsid, it's saying it wants to live on its own, no matter what. You need to think carefully about whether you really want to kill such processes.
That being said, sometimes, you will want some form of “super-session” to contain a tree of processes. There is no tool that provides such super-sessions in the POSIX toolbox, but I'm going to propose a few solutions. Each solution has its own limitations, so it's likely that they won't all be applicable to your case, but hopefully one of them will be suitable.
A clean solution is to run the processes in their own virtualized environment. This could be a FreeBSD-style jail, Linux cgroups, or any other kind of virtualization technology. The limitations of this approach are that virtualization technologies are OS-dependent, and the processes will run in a somewhat different context.
If you only have a single instance of these processes on the system and you can get root involved, run the processes as a dedicated user. The super-session is then defined as the set of processes running as the dedicated user. Kill the descendants with kill(-1, signum) (note that this will also kill the calling process itself unless it blocks or handles the signal).
You can make the process open a unique file, making sure that the FD_CLOEXEC flag is not set on the file descriptor. All child processes will then inherit the open file unless they explicitly set the FD_CLOEXEC flag before calling execve, or close the file. Kill the processes with fuser -k or by obtaining the list of process IDs with fuser or lsof (fuser is in POSIX, but not fuser -k). Note that there's a race condition: a process may fork between the time you call fuser and the time you kill it; therefore you need to call fuser in a loop until no more processes appear (don't loop until all processes are dead, as this could be an infinite loop if one of the processes is blocking your signal).
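A sketch of that marker-file idea (the path used here is just an illustrative name; in practice you would generate a unique one):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Plain open() leaves close-on-exec unset, so the descriptor survives
     * both fork() and execve() in every descendant. */
    int fd = open("/tmp/descendant-tracker.XXX", O_CREAT | O_RDWR, 0600);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* ... fork()/exec() the B, C, D chain here ... */

    /* Later, from a controlling process, run something like
     *   fuser -k /tmp/descendant-tracker.XXX
     * in a loop, until no process holds the file open. */
    return 0;
}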
You can generate a unique random string and define an environment variable with that name, or with a well-known name and that unique string as a value. It will be inherited by all descendant processes unless they choose to change their environment. There is no portable way to search for processes based on their environment, or even to obtain the environment of another process. On many unix variants, you can obtain the information with an option to ps (such as ps -e on *BSD or ps e on Linux); the information may not be easy to parse, but the presence of the unique string is a sufficient indicator. As with fuser above, note the need for a loop to avoid a race condition if a descendant calls fork too late for you to notice its child but before you could kill the parent.
You can LD_PRELOAD a small library that forks a thread that listens on a communication channel, and kills its process when notified. This may disrupt the process if it expects to know about all of its own threads; it's only a possibility on architectures where the standard library is always thread-safe, and you'll miss statically linked processes. The communication channel can be anything that allows the master process to broadcast the suicide order; one possibility is a pipe where each descendant process does a blocking read and the ancestor process closes the pipe to notify the descendants. Pass the file descriptor number through an environment variable.

Determine if a process is running?

Is there an easy way to determine if a certain process is running?
I need to know if an instance of my program is running in the background, and if not fork and create the background process.
Normally the race-free way of doing this is:
Open a lock file / pid file for writing (but do not truncate it)
Attempt to take an exclusive lock on it (using fcntl or flock) without blocking
If that fails with EAGAIN, then the other process is already running.
The file descriptor should now be inherited by the daemon and left open for its lifetime
The advantage of doing this over simply storing a PID, is that if somebody reuses the PID, you won't get a false positive.
The biggest problem with storing the pid in the file is that a low-numbered pid used by a system start up daemon can get reused on a subsequent reboot by a different daemon. I have seen this happen.
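A minimal sketch of that lock-file approach using flock() (the path is an example, and fcntl() record locks would work much the same way):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* Open (or create) the pid file without truncating it. */
    int fd = open("/var/run/myprog.pid", O_CREAT | O_RDWR, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Try to take an exclusive lock without blocking. */
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        fprintf(stderr, "another instance is already running\n");
        return 1;
    }

    /* We hold the lock: record our PID (informational only) and keep
     * the descriptor open for the daemon's whole lifetime. */
    char buf[32];
    int len = snprintf(buf, sizeof buf, "%d\n", (int)getpid());
    ftruncate(fd, 0);
    write(fd, buf, len);

    /* ... daemon work here; do not close(fd) ... */
    return 0;
}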
This is usually done using pidfiles: a file in /var/run/[name].pid containing only the process ID returned by fork().
if pidfile exists:
    exit()
else:
    create pidfile
    pid = start_background()
    pidfile.write(pid)
On shutdown: remove pidfile
Linux software, by and large, does not care about the exclusivity of programs, only the resources they use. "Caring" is most often provided by the implementation (e.g. the infrastructure of the distro).
For instance, suppose the running copy has locked up or turned into a zombie that you have no way to kill, or it is running as a different user performing some other function. Why should the program care whether another copy of itself is running? Having it do so only seems like an unnecessary restriction.
If it's a process that opens a socket (like a TCP port), have the program fail if it can't open the socket. If it needs exclusive access to a file, have it fail if it can't get it. Support a PID file, but don't make it mandatory.
You'll see this methodology all over GNU software, which is part of what makes it so versatile.
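As one concrete illustration of the "fail if the resource is taken" idea, a program can treat a TCP port as its exclusivity token (the port number here is an arbitrary example):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == -1) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(48000);    /* arbitrary example port */

    /* If another instance already bound this port, bind() fails with
     * EADDRINUSE and we simply refuse to start a second copy. */
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) == -1) {
        perror("bind");
        return 1;
    }

    /* ... normal program work ... */
    return 0;
}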

Passing the shell to a child before aborting

Current scenario: I launch a process that forks, and after a while the original process calls abort().
The thing is that both the forked child and the original process print to the shell, but after the original one dies, the shell "returns" to the prompt.
I'd like to prevent the shell from returning to the prompt, as if the process hadn't died, and have the child handle the situation from there.
I'm trying to figure out how to do it but have nothing yet; my first guess involves tty handling, but I'm not sure how that works.
I forgot to mention: the child could take over the shell at fork time, if that makes it easier, via fd duplication or some redirection.
I think you'll probably have to go with a third process that handles user interaction, communicating with the "parent" and "child" through pipes.
You can even make it a fairly lightweight wrapper, just passing data back and forth to the parent and terminal until the parent dies, and then switching to passing to/from the child.
To add a little further, I think the fundamental problem you're going to run into is that the execution of a command by the shell just doesn't work that way. The shell is doing the equivalent of calling system(): it waits for the process it just spawned to die, and once that happens, it presents the user with a prompt again. It's not really a tty issue; it's how the shell works.
bash (and, I believe, other shells) has the wait command:
wait: wait [n]
Wait for the specified process and report its termination status. If
N is not given, all currently active child processes are waited for,
and the return code is zero. N may be a process ID or a job
specification; if a job spec is given, all processes in the job's
pipeline are waited for.
Have you considered inverting the parent-child relationship?
If the order in which the new processes will die is predictable, run the code that will abort in the "child" and the code that will continue in the parent.
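A tiny sketch of that inversion (do_risky_work and keep_interacting are hypothetical placeholders for your own code):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: run the part that may abort(). The shell does not care
         * when this process dies; it only waits for the process it
         * launched, which is the parent below. */
        /* do_risky_work();   -- hypothetical */
        abort();
    }

    /* Parent: keeps running, so the shell does not return to the prompt. */
    waitpid(pid, NULL, 0);    /* reap the child when it aborts */
    /* keep_interacting();    -- hypothetical */
    return 0;
}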
