Fork multiple processes simultaneously without blocking the main thread - c

This is more of a general question than a coding question and I would appreciate some directions or a general approach.
My programming task is to implement a simple job scheduler that will execute non-interactive jobs. At any given time only 4 jobs should be executing. If more than 4 jobs are submitted, the additional jobs must wait until one of the 4 executing jobs has completed. A prompt keeps asking the user to enter a command to be executed.
This means that the main thread, or to be precise the main function that runs the infinite loop asking the user to enter a command, should never be blocked or waiting on a process to finish. Using fork(), exec(), and wait() would cause my main process to wait, which is not the desired behavior. Therefore, I thought of omitting the wait() in the parent process and using a signal handler for SIGCHLD to catch the instant when a forked process terminates. I would keep a global variable holding the number of processes running at any given time.
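Roughly, I imagine something like this (a minimal, untested sketch; MAX_JOBS, the prompt, and the skipped queueing are placeholders of mine):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define MAX_JOBS 4

static volatile sig_atomic_t running = 0;

/* reap every child that has finished; several may have terminated */
static void on_child(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        running--;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_child;
    sa.sa_flags = SA_RESTART;      /* don't let fgets() fail with EINTR */
    sigaction(SIGCHLD, &sa, NULL);

    char cmd[256];
    while (printf("> "), fflush(stdout), fgets(cmd, sizeof cmd, stdin)) {
        if (running >= MAX_JOBS) {
            /* queue the command instead of blocking (not shown) */
            continue;
        }
        if (fork() == 0) {
            execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
            _exit(127);
        }
        running++;   /* a real version should block SIGCHLD around this */
    }
    return 0;
}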
Is this the right approach or is there a better/more elegant solution to that?
Thanks a lot in advance!

Related

Are my fork processes running parallel or executing one after another?

I am just going to post pseudocode, but my question is: I have a loop like this
#include <stdlib.h>
#include <unistd.h>

pid_t createfork(void)
{
    pid_t pid = fork();
    /* execute other methods */
    return pid;
}

int main(void)
{
    int n = 6;
    for (int i = 0; i < n; i++) {
        if (createfork() == 0) {
            /*
             * Exit so I can control the exact amount of forks
             * without children creating more children.
             */
            exit(0);
        }
    }
}
Does my fork create a process that does what it is supposed to do and exits, then create another process, and repeat? And if so, what are some ways around this, to get the processes running concurrently?
Your pseudocode is correct as written and does not need to be modified.
The processes are already executing in parallel, all six of them or however many you spawn. As written, the parent process does not wait for the children to finish before spawning more children. It calls fork(), checks whether it is the child (in the parent, that branch is skipped), then immediately proceeds to the next loop iteration and forks again.
Notably, there's no wait() call. If the parent were to call wait() or waitpid() to wait for each child to finish, that would introduce the serialization you're trying to avoid. But there is no such call, so you're good.
When a process successfully performs a POSIX fork(), that process and the new child process are initially both eligible to run. In that sense, they will run concurrently until one or the other blocks. Whether there will be any periods of time when both are executing machine instructions (on different processing units) depends at least on details of hardware capabilities, OS scheduling, the work each process is performing, and what other processes there are in the system and what they are doing.
The parent certainly does not, in general, automatically wait for the child to terminate before it proceeds with its own work (there is a family of functions to make it wait when you want that), nor does the child process automatically wait for any kind of signal from the parent. If the next thing the parent does is fork another child, then that will under many circumstances result in the parent running concurrently with both (all) children, in the sense described above.
I cannot speak to specifics of the behavior of your pseudocode, because it's pseudocode.
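For concreteness, here is a runnable sketch of that loop (n = 6 is arbitrary). All children are forked back-to-back without any wait() in between, so they run concurrently; the parent only reaps them after the loop:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int n = 6;
    for (int i = 0; i < n; i++) {
        pid_t child = fork();
        if (child == 0) {
            printf("child %d (pid %d) working\n", i, (int)getpid());
            /* execute other methods */
            exit(0);   /* exit so children don't fork more children */
        }
    }
    while (wait(NULL) > 0)   /* reap all children, after spawning all */
        ;
    return 0;
}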

How to kill a process synchronously on Linux?

When I call kill() on a process, it returns immediately, because it just sends a signal. I have code that checks some processes (foreign ones, neither written nor modifiable by me) in an infinite loop, and if they exceed some limits (too much RAM eaten, etc.) it kills them (and writes to syslog, etc.).
The problem is that when the processes are heavily swapped, it takes many seconds to kill them, and because of that, my process executes the same check against the same processes multiple times, attempts to send the signal many times to the same process, and writes this to syslog each time. (This is not done on purpose; it's just a side effect I am trying to fix.)
I don't care how many times it sends a signal to a process, but I do care how many times it writes to syslog. I could keep a list of PIDs that were already sent the kill signal, but in theory, even if the probability is low, another process could be spawned with the same PID the previously killed one had, which might also need to be killed, and in that case the log entry would be missing.
I don't know if there is a unique identifier for a process, but I doubt it. How could I either kill a process synchronously, or keep track of processes that already got the signal and don't need to be logged again?
Even if you could do a "synchronous kill", you would still have the race condition where you could kill the wrong process. It can happen whenever the process you want to kill exits of its own volition, or by third-party action, after you see it but before you kill it. During this interval, the PID could be assigned to a new process. There is basically no solution to this problem. PIDs are inherently a local resource that belongs to the parent of the identified process; use of the PID by any other process is a race condition.
If you have more control over the system (for example, controlling the parent of the processes you want to kill) then there may be special-case solutions. There might also be (Linux-specific) solutions based on using some mechanisms in /proc to avoid the race, though I'm not aware of any.
One other workaround may be to use ptrace on the target process, as if you were going to debug it. This lets you partially "steal" the parent role, preventing the PID from being invalidated while you're still using it and allowing you to get notified when the process terminates. You'd do something like the following (a code sketch appears after the steps):
Check the process info (e.g. from /proc) to determine that you want to kill it.
ptrace it, temporarily stopping it.
Re-check the process info to make sure you got the process you wanted to kill.
Resume the traced process.
kill it.
Wait (via waitpid) for notification that the process exited.
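A minimal sketch of those steps (Linux-specific; the /proc checks and error handling are elided, and checked_kill is a name of my choosing):

#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int checked_kill(pid_t pid)
{
    /* you have already inspected /proc/<pid> and decided to kill it */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1)
        return -1;                /* already gone, or no permission */
    waitpid(pid, NULL, 0);        /* wait for the attach to stop it */

    /* re-check /proc/<pid> here: while we are attached the PID cannot
       be reaped and recycled, so whatever we verify now stays true */

    ptrace(PTRACE_CONT, pid, NULL, NULL);   /* resume the traced process */
    kill(pid, SIGKILL);                     /* kill it */
    waitpid(pid, NULL, 0);                  /* notification: it has exited */
    return 0;
}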
This will make the script wait for process termination.
kill $PID
while kill -0 $PID 2>/dev/null
do
sleep 1
done
kill -0 [pid] tests for the existence of a process
The following solution works for most processes that aren't debuggers or processes being debugged in a debugger.
Use ptrace with argument PTRACE_ATTACH to attach to the process. This stops the process you want to kill. At this point, you should probably verify that you've attached to the right process.
Kill the target with SIGKILL. It's now gone.
I can't remember whether the process is now a zombie that you need to reap or whether you need to PTRACE_CONT it first. In either case, you'll eventually have to call waitpid to reap it, at which point you know it's dead.
If you are writing this in C, you are sending the signal with the kill() system call. Rather than repeatedly sending the terminating signal, send it just once and then loop (or somehow periodically check) with kill(pid, 0). A signal value of zero will just tell you whether the process is still alive, and you can act appropriately. When it dies, kill() will fail with errno set to ESRCH.
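A minimal sketch of that polling loop, with a hypothetical kill_and_log() wrapper (note that a zombie whose parent has not reaped it will keep kill(pid, 0) succeeding):

#include <errno.h>
#include <signal.h>
#include <syslog.h>
#include <sys/types.h>
#include <unistd.h>

void kill_and_log(pid_t pid)
{
    if (kill(pid, SIGKILL) == 0)
        syslog(LOG_INFO, "killed pid %d", (int)pid);  /* log exactly once */

    /* signal 0 delivers nothing; it only checks that the pid exists */
    while (kill(pid, 0) == 0)
        sleep(1);

    if (errno != ESRCH) {
        /* kill failed for some other reason, e.g. EPERM */
    }
}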
When you spawn these processes yourself, the classical waitpid(2) family can be used.
When they are not used anywhere else, you can move the processes that are going to be killed into a cgroup of their own; notifiers can be registered on these cgroups which get triggered when a process is exiting.
To find out whether a process has been killed, you can chdir(2) into /proc/<pid> or open(2) that directory. After process termination, the status files there can no longer be accessed. This method is racy (between your check and the action, the process can terminate and a new one with the same PID can be spawned); a small sketch of this check follows.
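A small sketch of the /proc check (pid_alive is a hypothetical name):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* returns 1 while /proc/<pid> exists, 0 once it is gone; racy, as noted */
int pid_alive(pid_t pid)
{
    char path[64];
    struct stat st;
    snprintf(path, sizeof path, "/proc/%d", (int)pid);
    return stat(path, &st) == 0;
}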

c/unix: abort process that runs for too long

I need to kill user processes that run longer than a given expected interval on the UNIX (Solaris) operating system. This needs to be done from inside the process that is currently executing.
How can this be achieved in C on UNIX?
See the alarm() system call. It provides your process with a SIGALRM signal, which it can handle and use to quit.
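A minimal sketch of the alarm() approach (the 60-second budget is arbitrary):

#include <signal.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    (void)sig;
    _exit(1);      /* _exit() is async-signal-safe; exit() is not */
}

int main(void)
{
    signal(SIGALRM, on_alarm);
    alarm(60);     /* deliver SIGALRM after 60 seconds */
    /* ... do the real work here ... */
    return 0;
}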
As long as it is acceptable to kill the over-time process without warning, one alternative is to use ulimit -t <time> at the time of launching the process.
With setrlimit, you can limit the amount of CPU time used by the process. Your process will receive a SIGXCPU once the limit is exceeded.
#include <sys/resource.h>

/* soft limit of 42 seconds of CPU time (raises SIGXCPU), no hard cap */
struct rlimit limits = {42, RLIM_INFINITY};
setrlimit(RLIMIT_CPU, &limits);
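A slightly fuller sketch that also handles the SIGXCPU (the 42-second budget is from the snippet above):

#include <signal.h>
#include <sys/resource.h>
#include <unistd.h>

static void on_xcpu(int sig)
{
    (void)sig;
    _exit(1);      /* CPU budget exhausted: quit */
}

int main(void)
{
    struct rlimit limits = {42, RLIM_INFINITY};
    signal(SIGXCPU, on_xcpu);
    setrlimit(RLIMIT_CPU, &limits);
    /* ... CPU-bound work ... */
    return 0;
}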
At one time I had to solve this exact same problem.
My solution was as follows:
Write a controller program that does the following (a code sketch appears after the steps):
Fork a child process that starts the process you want to control.
Back in the parent, fork a second child process that sleeps for the maximum time allowed and then exits.
In the parent, wait for the children to complete; whichever finishes first causes the parent to kill the other.
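A minimal sketch of that controller (the job path and the 60-second limit are placeholders):

#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t job = fork();
    if (job == 0) {
        execl("/path/to/job", "job", (char *)NULL);  /* hypothetical job */
        _exit(127);
    }

    pid_t timer = fork();
    if (timer == 0) {
        sleep(60);          /* the maximum time allowed */
        _exit(0);
    }

    pid_t first = wait(NULL);                  /* whichever finishes first */
    kill(first == job ? timer : job, SIGKILL); /* kill the other one */
    wait(NULL);                                /* reap the one we killed */
    return 0;
}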
There's an easier way. Launch a worker thread to do the work, then call workerThread.join(timeoutInMS) in the main thread. That will wait at most that long. If the call returns and the worker thread is still running, you can kill it and exit.
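In C, the closest equivalent is the nonstandard glibc extension pthread_timedjoin_np (a sketch under that assumption; it is not portable to Solaris):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdlib.h>
#include <time.h>

static void *worker(void *arg)   /* hypothetical worker */
{
    (void)arg;
    /* ... do the real work ... */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 30;                    /* 30-second budget */

    if (pthread_timedjoin_np(t, NULL, &deadline) != 0)
        exit(1);   /* worker still running: exiting tears it down too */
    return 0;
}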

Passing the shell to a child before aborting

Current scenario: I launch a process that forks, and after a while the original process calls abort().
The thing is that both the forked child and the original process print to the shell, but after the original one dies, the shell "returns" to the prompt.
I'd like to avoid the shell returning to the prompt and keep things as if the process hadn't died, having the child handle the situation from there.
I'm trying to figure out how to do it but haven't gotten anywhere yet; my first guess involves tty handling, but I'm not sure how that works.
I forgot to mention: the shell takeover by the child could be done at fork time, if that makes it easier, via fd duplication or some redirection.
I think you'll probably have to go with a third process that handles user interaction, communicating with the "parent" and "child" through pipes.
You can even make it a fairly lightweight wrapper, just passing data back and forth to the parent and terminal until the parent dies, and then switching to passing to/from the child.
To add a little further: I think the fundamental problem you're going to run into is that the execution of a command by the shell just doesn't work that way. The shell is doing the equivalent of calling system() -- it waits for the process it just spawned to die, and once that happens, it presents the user with a prompt again. It's not really a tty issue; it's how the shell works. A sketch of the wrapper idea follows.
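A minimal sketch of that wrapper, reduced to one direction (output only): the wrapper keeps reading from a pipe shared by the target and everything it forks, so the prompt cannot come back until every writer has exited ("./target" is a hypothetical program):

#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds);

    pid_t pid = fork();
    if (pid == 0) {
        /* target side: route stdout through the pipe; the write end
           is inherited by any children the target forks */
        close(fds[0]);
        dup2(fds[1], STDOUT_FILENO);
        close(fds[1]);
        execl("./target", "target", (char *)NULL);
        _exit(127);
    }

    /* wrapper side: read() hits EOF only when the target *and* all of
       its forked children have closed their copies of the write end */
    close(fds[1]);
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    waitpid(pid, NULL, 0);
    return 0;
}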
bash (and, I believe, other shells) has the wait command:
wait: wait [n]
Wait for the specified process and report its termination status. If
N is not given, all currently active child processes are waited for,
and the return code is zero. N may be a process ID or a job
specification; if a job spec is given, all processes in the job's
pipeline are waited for.
Have you considered inverting the parent child relationship?
If the order in which the new processes will die is predictable, run the code that will abort in the "child" and the code that will continue in the parent.
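A minimal sketch of that inversion (do_risky_work and handle_situation are hypothetical):

#include <sys/wait.h>
#include <unistd.h>

void do_risky_work(void);     /* hypothetical: will eventually abort() */
void handle_situation(void);  /* hypothetical: continues afterwards */

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        do_risky_work();      /* the code that will eventually abort */
        _exit(0);
    }

    /* the shell waits on *this* process, so the prompt does not come
       back when the child aborts */
    waitpid(child, NULL, 0);
    handle_situation();       /* carry on after the child has died */
    return 0;
}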

Suspending the execution of a remote process (C, Windows)

I can suspend a thread of another process by using SuspendThread(). Is there any way to also suspend the execution of that process altogether?
If yes, please post code.
Thanks.
PS:
Since you will ask "Why do you want to do this" I'll post it here.
I am dealing with legacy software that is not maintained anymore. I don't have access to the source code. Right now I need it to pause until a file is filled with data and then resume the execution.
The only way is to suspend all threads of that process.
If you want to see actual code, check the sample here.
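A minimal sketch of the suspend-all-threads idea, using a Toolhelp thread snapshot (error handling and the matching ResumeThread() loop are elided):

#include <windows.h>
#include <tlhelp32.h>

void suspend_process(DWORD pid)
{
    /* enumerate every thread in the system, suspend those owned by pid */
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    THREADENTRY32 te;
    te.dwSize = sizeof te;

    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid) {
                HANDLE h = OpenThread(THREAD_SUSPEND_RESUME, FALSE,
                                      te.th32ThreadID);
                if (h) {
                    SuspendThread(h);
                    CloseHandle(h);
                }
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
}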
> The only way is to suspend all threads of that process.
No.
Use undocumented kernel APIs (exported since NT 3.1) to suspend the process by its PID.
If the process has or spawns many threads rapidly or asynchronously, you're subject to a race condition with SuspendThread().
A way to accomplish the same thing (process-wide) is to attach to the target process as a debugger with DebugActiveProcess() and then simply call DebugBreakProcess(). When a process is at a breakpoint, no new threads will be created and all execution, process-wide, will stop.
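A minimal sketch of the debugger approach (function names are mine; this variant simply holds the initial attach event un-continued instead of injecting a breakpoint with DebugBreakProcess(), which keeps the target frozen just the same):

#include <windows.h>

BOOL freeze_process(DWORD pid, DEBUG_EVENT *ev)
{
    if (!DebugActiveProcess(pid))           /* attach as a debugger */
        return FALSE;
    DebugSetProcessKillOnExit(FALSE);       /* detaching must not kill it */
    /* holding this event un-continued keeps the target stopped */
    return WaitForDebugEvent(ev, INFINITE);
}

void thaw_process(DWORD pid, DEBUG_EVENT *ev)
{
    ContinueDebugEvent(ev->dwProcessId, ev->dwThreadId, DBG_CONTINUE);
    DebugActiveProcessStop(pid);            /* detach; the target resumes */
}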
