Disable SIGPIPE signal on write(2) call in library - c

Question
Is it possible to disable the raising of a signal (SIGPIPE) when writing to a pipe() FD, without installing my own signal handler or disabling/masking the signal globally?
Background
I'm working on a small library that occasionally creates a pipe, and fork()s a temporary child/dummy process that waits for a message from the parent. When the child process receives the message from the parent, it dies (intentionally).
Problem
The child process, for circumstances beyond my control, runs code from another (third party) library that is prone to crashing, so I can't always be certain that the child process is alive before I write() to the pipe.
This results in me sometimes attempting to write() to the pipe when the child process' end is already dead/closed, which raises a SIGPIPE in the parent process. Since my code lives in a library that other customers will be using, it must be as self-contained and transparent to the calling application as possible; installing a custom signal handler could break the customer's code.
Work so far
I've gotten around this issue with sockets by using the MSG_NOSIGNAL flag with send(), but I can't find anything functionally equivalent for pipes. I've looked at temporarily installing a signal handler to catch the SIGPIPE, but I don't see any way to limit its scope to the calling function in my library rather than the entire process (and it's not atomic).
I've also found a similar question here on SO that is asking the same thing, but unfortunately, using poll()/select() won't be atomic, and there's the remote (but possible) chance that the child process dies between my select() and write() calls.
Question (redux)
Is there any way to accomplish what I'm attempting here, or to atomically check-and-write to a pipe without triggering the behavior that will generate the SIGPIPE? Additionally, is it possible to achieve this and know if the child process crashed? Knowing if it crashed lets me build a case for the vendor that supplied the "crashy" library, and lets them know how often it's failing.

Is it possible to disable the raising of a signal (SIGPIPE) when writing to a pipe() FD [...]?
The parent process can keep its copy of the read end of the pipe open. Then there will always be a reader, even if it doesn't actually read, so the condition for a SIGPIPE will never be satisfied.
The problem with that is it's a deadlock risk. If the child dies and the parent afterward performs a blocking write that cannot be accommodated in the pipe's buffer, then you're toast. Nothing will ever read from the pipe to free up any space, and therefore the write can never complete. Avoiding this problem is one of the purposes of SIGPIPE in the first place.
You can also test whether the child is still alive before you try to write, via a waitpid() with option WNOHANG. But that introduces a race condition, because the child could die between waitpid() and the write.
However, if your writes are consistently small, and if you get sufficient feedback from the child to be confident that the pipe buffer isn't backing up, then you could combine those two to form a reasonably workable system.
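For illustration, a minimal sketch of that combination, assuming pipefd is the write end from pipe(), child is the pid returned by fork(), and each message is small enough to fit in the pipe buffer:

#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 0 on success, -1 if the child is already gone or the write failed.
 * 'pipefd' is the write end of the pipe, 'child' is the pid from fork(). */
int notify_child(int pipefd, pid_t child, const char *msg, size_t len)
{
    int status;
    pid_t r = waitpid(child, &status, WNOHANG);
    if (r == child || (r == -1 && errno == ECHILD))
        return -1;                      /* child already exited (or was reaped) */

    /* Race window: the child may still die right here, before the write. */
    ssize_t n = write(pipefd, msg, len);
    return (n == (ssize_t)len) ? 0 : -1;
}

Note that this only narrows the window described above; if the child dies between the waitpid() and the write(), SIGPIPE can still be raised unless the parent also keeps its copy of the read end open.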

After going through all the possible ways to tackle this issue, I found there were only two avenues for attacking the problem:
Use socketpair(PF_LOCAL, SOCK_STREAM, 0, fd), in place of pipes.
Create a "sacrificial" sub-process via fork() which is allowed to crash if SIGPIPE is raised.
I went the socketpair route. I didn't want to, since it involved rewriting a fair bit of pipe logic, but it wasn't too painful.
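For reference, a rough sketch of what the socketpair version looks like (names are illustrative): with send() and MSG_NOSIGNAL, a dead peer yields EPIPE instead of raising SIGPIPE.

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int notify_via_socketpair(void)
{
    int fd[2];
    if (socketpair(PF_LOCAL, SOCK_STREAM, 0, fd) == -1)
        return -1;

    /* ... fork() here; the child uses fd[1], the parent closes fd[1] ... */

    const char msg[] = "go";
    if (send(fd[0], msg, sizeof msg, MSG_NOSIGNAL) == -1 && errno == EPIPE) {
        /* peer end closed: the child is gone, but no SIGPIPE was raised */
        return -1;
    }
    return 0;
}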
Thanks!

Not sure I follow: you are the parent process, i.e. you write to the pipe. You do so to send a message after a certain period. The child process interprets the message in some way, does what it has to do, and exits. You also need the child to be waiting already; you can't prepare the message first and then spawn a child to handle it. Also, just sending a signal would not do the trick, since the child has to act on the content of the message, not just a "do it" notification.
The first hack that comes to mind is not to close the read side of the pipe in the parent. That allows you to write to the pipe freely without hurting the child's ability to read from it.
If this is not fine, please elaborate on the issue.

Related

What is the closest Windows equivalent to the POSIX wait mechanism?

Linux supports the POSIX wait mechanism defined in "sys/wait.h". The functions wait, waitid, and waitpid can be used to exchange status information between parent and child processes that have been created using fork.
Windows provides native support for neither fork nor the POSIX wait mechanism. Instead, there are other means available to spawn child processes, i.e. CreateProcess.
When porting Linux applications written in C or C++ using fork/wait to Windows, what would be the most proper native* way to monitor state changes (namely WEXITED, WSTOPPED, WCONTINUED) of child processes in the parent process?
*native meaning using no additional libraries, frameworks, or programs (like Cygwin or MinGW) that do not ship with Windows or are not provided directly by MS in the form of runtime environments.
Edit: As requested in the comments, here is some more information about the problem to be solved, in the form of pseudocode:
//creates a new child process that is a copy of the parent (compare
//POSIX fork()) and returns some sort of handle to it.
function spawnChild()
// returns TRUE if called from the master process FALSE otherwise
function master()
// return TRUE if called from a child process FALSE otherwise
function child()
// returns TRUE if child process has finished its work entirely,
// FALSE otherwise.
function completelyFinished()
//sends signal/message "sig" to receiver, where receiver is a single
//handle or a set of handles to processes that shall receive sig
function sendSignal(sig, receiver)
// terminates the calling process
function exit()
// returns a handle to the sender of signal "sig"
function senderOf(sig)
function masterprocess()
master //contains handle to the master process
children = {} //this is an empty set of handles to child processes
buf[SIZE] //some memory area of SIZE bytes available to master process and all children
FOR i = 0 TO n - 1
//spawn new child process and add its handle to the list of running
//child processes.
children <- children UNION spawnChild()
IF(master())
<logic here>
sendSignal(STARTWORKING, children) //send notification to children
WHILE(signal = wait()) // wait for any child to respond (wait is blocking)
IF signal == IMDONE
<logic here (involving reads/writes to buf)>
sendSignal(STARTWORKING, senderOf(signal))
ELSEIF signal == EXITED
children <- children \ signal.sender //remove sender from list of children
ELSEIF(child())
WHILE(wait() != STARTWORKING);
<logic here (involving reads/writes to buf)>
IF completelyFinished()
sendSignal(EXITED, master)
exit()
ELSE
sendSignal(IMDONE, master)
Before I answer the actual question, I'm going to recommend a better solution: you should consider simplifying the relationship between the parent and children.
Based on the pseudocode, the signals between parent and children are serving as a crude form of cross-process mutex, i.e., all they do is to prevent the code here:
IF signal == IMDONE
<logic here (involving reads/writes to buf)>
sendSignal(STARTWORKING, senderOf(signal))
from running multiple instances simultaneously. Instead, <logic here> should be moved into the corresponding child process, protected by a mutex so that only one child can run it at a time.
At that point, all the parent needs to do is to launch the children and wait for them all to exit. That is easily done in Windows by waiting on the process handle.
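For illustration, a minimal Win32 sketch of waiting for all children via their process handles, assuming hChildren holds the handles returned by CreateProcess and that there are at most MAXIMUM_WAIT_OBJECTS of them:

#include <windows.h>

void wait_for_all_children(HANDLE *hChildren, DWORD n)
{
    /* Block until every child process has exited, then release the handles. */
    WaitForMultipleObjects(n, hChildren, TRUE /* wait for all */, INFINITE);
    for (DWORD i = 0; i < n; i++)
        CloseHandle(hChildren[i]);
}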
(I would imagine that modern POSIX also supports some sort of cross-process mutex somewhat more sophisticated than signals.)
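On the POSIX side, one way to get such a cross-process mutex is a pthread mutex with the PTHREAD_PROCESS_SHARED attribute placed in shared memory; a minimal sketch, assuming the mutex is created before fork():

#include <pthread.h>
#include <sys/mman.h>

/* Create a mutex in anonymous shared memory before fork(); the parent and
 * all children can then lock it to serialize access to the shared buffer. */
pthread_mutex_t *make_shared_mutex(void)
{
    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (m == MAP_FAILED)
        return NULL;

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return m;
}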
It would also be worth reconsidering whether you really really need multiple processes. Multiple threads would be more efficient, and if the code is properly written, it should not be difficult to adapt it.
Be that as it may, if for some reason you absolutely must retain as much of the original program structure as possible, pipes are probably going to be your best bet.
Sending a signal becomes writing a single byte.
In a child, waiting for a signal from the parent becomes reading a single byte.
Waiting in the parent for a message from any of the children is a little trickier. It is still a single-byte read (for each child) but you'll need to use overlapped I/O and, if you need to support more than 64 children, IOCP.
(Alternatively, you could use multiple threads, but that might involve too much of a structural change.)
If the pipes are implemented correctly, when a child exits or dies the corresponding read operation in the parent will terminate with the ERROR_BROKEN_PIPE error. So there is no need for a separate mechanism to monitor the health of the children.
In this context, I think anonymous pipes would be the most appropriate choice. These are simplex, so you'll need two pipes for each child. You can pass the child's end of the pipe handles as the standard input and output for the child process.
For anonymous pipes, you will need to make sure that you close the parent's copy of the handles once each child has been started, and also that each child only inherits the handles corresponding to its own pipe. If there are any additional handles left open to the child's end of its pipe, the parent will not receive any notification when the child exits.
None of this is particularly complicated, but be aware that named pipe I/O has a bit of a learning curve. Asynchronous I/O even more so, particularly if you are coming from a UNIX background. Note in particular that to use asynchronous I/O, you issue an operation and then wait for it to complete, as opposed to the UNIX model where you wait for the I/O to be ready and then issue the operation.
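A rough sketch of the anonymous-pipe setup described above, with illustrative names and most error handling omitted (restricting exactly which handles each child inherits is not shown here):

#include <windows.h>

/* Create one parent-to-child pipe whose read end the child inherits as stdin.
 * The parent's write end is explicitly made non-inheritable. */
BOOL make_child_with_pipe(const char *cmdline_in, HANDLE *hWriteToChild,
                          PROCESS_INFORMATION *pi)
{
    SECURITY_ATTRIBUTES sa = { sizeof sa, NULL, TRUE };   /* inheritable */
    HANDLE hRead, hWrite;
    if (!CreatePipe(&hRead, &hWrite, &sa, 0))
        return FALSE;
    SetHandleInformation(hWrite, HANDLE_FLAG_INHERIT, 0); /* parent keeps this end */

    STARTUPINFOA si = { 0 };
    si.cb = sizeof si;
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput  = hRead;
    si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);

    char cmdline[MAX_PATH];
    lstrcpynA(cmdline, cmdline_in, MAX_PATH);             /* CreateProcess may modify it */
    BOOL ok = CreateProcessA(NULL, cmdline, NULL, NULL, TRUE /* inherit handles */,
                             0, NULL, NULL, &si, pi);

    CloseHandle(hRead);   /* per the advice above, close the parent's copy of the child's end */
    *hWriteToChild = hWrite;
    return ok;
}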
If you want to signal boolean conditions to other processes you probably should use shared events for that. You can share them by name or by handle duplication. You can have as many of these signals as you like. For example, you could have one for each of WEXITED, WSTOPPED, WCONTINUED.
Seeing your edit: Events are great for that. Create named events in the parent and pass their names on the command line to the children. That way parent and child can signal each other.
You also need to share a memory section, for example through a memory-mapped file. That would correspond to buf in your code.
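A minimal sketch of a named event plus a named shared-memory section; the object names here are illustrative, and in practice they would be passed to the children on the command line:

#include <windows.h>

static HANDLE hStart;
static char *buf;

void parent_setup(void)
{
    /* Parent: create a named auto-reset event and a 4 KB shared section. */
    hStart = CreateEventA(NULL, FALSE /* auto-reset */, FALSE, "Local\\StartWorking");
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, 4096, "Local\\SharedBuf");
    buf = (char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    /* ...spawn children, then SetEvent(hStart) to signal STARTWORKING... */
}

void child_wait_for_start(void)
{
    /* Child: open the same event by name and block until it is signalled. */
    HANDLE h = OpenEventA(SYNCHRONIZE | EVENT_MODIFY_STATE, FALSE, "Local\\StartWorking");
    WaitForSingleObject(h, INFINITE);
}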
What you have there appears to be a work queue arrangement, where you have a producer process and a bunch of worker processes. It's unclear whether you're using the shared memory merely as a work queue, or whether your workers are operating on the shared memory (maybe it's a massive matrix or vector problem).
In Win32, you probably wouldn't implement this as separate processes.
You'd use a collection of producer/consumer threads, which are already sharing memory (same address space), and you'd implement a work queue using semaphores or condition variables.
In fact, you'd probably use a higher-level abstraction, such as QueueUserWorkItem. This uses the default Windows thread pool, but you can create your own thread pool, using CreateThreadpool.
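A bare-bones sketch of the QueueUserWorkItem approach; the work function and its argument are placeholders:

#include <windows.h>

static DWORD WINAPI do_work(LPVOID arg)
{
    /* <logic here> - runs on a thread-pool thread, sharing the process's memory */
    (void)arg;
    return 0;
}

void queue_jobs(int n)
{
    for (int i = 0; i < n; i++)
        QueueUserWorkItem(do_work, NULL, WT_EXECUTEDEFAULT);
    /* ...then wait for completion, e.g. with an event or an interlocked counter... */
}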

Architecture for multi-processing application in C: fork or fork + exec

My question is about more philosophical than technical issues.
The objective is to write a multiprocess (not multithreaded) program with one "master" process and N "worker" processes. The program is a Linux-only, async, event-based web server, like nginx. So the main problem is how to spawn the "worker" processes.
In the Linux world there are two ways:
1). fork()
2). fork() + exec*() family
Here is a short description of each way and what confuses me about each of them.
The first way, plain fork(), feels dirty, because the forked process has a copy (copy-on-write, I know) of the parent's memory: signal handlers, variables, file/socket descriptors, environment, and everything else, including the stack and heap. So after fork() I need to... hmm... "clean up": for example, reset signal handlers, close socket connections, and undo other horrible things inherited from the parent, because the child holds a lot of data it was never intended to have - this breaks encapsulation, and many side effects are possible.
The usual approach in this case is to run an infinite loop in the forked process to handle some data, and to do some magic with a socket pair, pipes, or shared memory to create a communication channel between parent and child before and after fork(), since the descriptors are duplicated in the child and refer to the same underlying sockets as in the parent.
Also, this is the nginx way: it has one executable binary that uses fork() to spawn child processes.
The second way is similar to the first, except that one of the exec*() functions is called in the child after fork() to run an external binary. The important point is that exec*() loads the new binary into the current (forked) process image, automatically resetting the stack and heap and doing all the other cleanup, so the child looks like a completely fresh instance of a program, without a copy of the parent's memory or other leftovers.
There is another problem with establishing communication between parent and child: because the forked process loses the parent's data after exec*(), I somehow need to create a communication channel between them. For example, I could create an additional listening socket (a Unix domain socket, or another port) in the parent and wait for child connections, with each child connecting back to the parent after initialization.
The first way is simple, but it bothers me that the child is not a clean process, just a copy of the parent's memory, with many possible side effects and leftovers, and I need to keep in mind that the forked process has many dependencies on the parent's code. The second way requires maintaining two binaries and is not as elegant as a single-file solution. Maybe the best way would be to use fork() to create the process and something else to clear its memory without an exec*() call, but I can't find any solution for that second step.
In conclusion, I need help deciding which way to use: create a single executable like nginx and use fork(), or create two separate binaries, one for the "server" and one for the "worker", and use fork() + exec*(worker) N times from the "server". I'd like to know the pros and cons of each way, in case I've missed something.
For a multiprocess solution both options, fork and fork+exec, are almost equivalent; the choice depends on the child and parent process context. If the child process executes the parent's text (binary) and needs all or part of the parent's state (descriptors, signals, etc.), that is a sign to use fork. If the child should execute a new binary and needs nothing from the parent's state, fork+exec seems much more suitable.
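As a concrete illustration of the fork+exec variant, here is a sketch of spawning one worker binary connected to the master by a socketpair; the worker path and the convention of handing the channel to the worker on fd 3 are assumptions for the example:

#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Spawn one worker binary, connected to the master by a socketpair.
 * Returns the master's end of the channel, or -1 on failure. */
int spawn_worker(const char *worker_path)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1) {
        close(sv[0]);
        close(sv[1]);
        return -1;
    }
    if (pid == 0) {                      /* child */
        if (sv[1] != 3) {
            dup2(sv[1], 3);              /* convention: worker finds its channel on fd 3 */
            close(sv[1]);
        }
        close(sv[0]);
        execl(worker_path, worker_path, (char *)NULL);
        _exit(127);                      /* exec failed */
    }
    close(sv[1]);                        /* parent keeps sv[0] */
    return sv[0];
}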
There is also a useful function in the pthread library: pthread_atfork().
It lets you register handlers that will be called before and after fork().
These handlers may perform all the necessary work (closing file descriptors, for example).
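A small sketch of the pthread_atfork() idea; the handler names are illustrative:

#include <pthread.h>

static void before_fork(void) { /* e.g. take locks so they are in a known state */ }
static void in_parent(void)   { /* release locks in the parent */ }
static void in_child(void)    { /* release locks, close descriptors the child must not keep */ }

void install_fork_handlers(void)
{
    pthread_atfork(before_fork, in_parent, in_child);
}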
As a Linux programmer, you have a rich library of multithreading capabilities at your disposal. Look at pthreads and friends.
If you need a process per request, then fork and friends have been the most widely used since time immemorial.

How to kill a process synchronously on Linux?

When I call kill() on a process, it returns immediately, because it just sends a signal. I have code where I check some (foreign, neither written nor modifiable by me) processes in an infinite loop, and if they exceed some limits (too much RAM eaten, etc.) it kills them (and writes to syslog, etc.).
The problem is that when processes are heavily swapped, it takes many seconds for them to die, and because of that, my process executes the same check against the same processes multiple times, attempts to send the signal many times to the same process, and writes this to syslog each time. (This is not done on purpose; it's just a side effect I am trying to fix.)
I don't care how many times it sends a signal to a process, but I do care how many times it writes to syslog. I could keep a list of PIDs that were already sent the kill signal, but in theory, even if the probability is low, another process could be spawned with the same PID as a previously killed one and might also need to be killed, and in that case the log entry would be missing.
I don't know if there is a unique identifier for a process, but I doubt it. How could I either kill a process synchronously, or keep track of processes that already got the signal and don't need to be logged again?
Even if you could do a "synchronous kill", you still have the race condition where you could kill the wrong process. It can happen whenever the process you want to kill exits of its own volition, or by third-party action, after you see it but before you kill it. During this interval, the PID could be assigned to a new process. There is basically no solution to this problem. PIDs are inherently a local resource that belongs to the parent of the identified process; use of the PID by any other process is a race condition.
If you have more control over the system (for example, controlling the parent of the processes you want to kill) then there may be special-case solutions. There might also be (Linux-specific) solutions based on using some mechanisms in /proc to avoid the race, though I'm not aware of any.
One other workaround may be to use ptrace on the target process as if you're going to debug it. This allows you to partially "steal" the parent role, avoiding invalidation of the PID while you're still using it and allowing you to get notification when the process terminates. You'd do something like:
Check the process info (e.g. from /proc) to determine that you want to kill it.
ptrace it, temporarily stopping it.
Re-check the process info to make sure you got the process you wanted to kill.
Resume the traced process.
kill it.
Wait (via waitpid) for notification that the process exited.
This will make the script wait for process termination.
kill $PID
while kill -0 $PID 2>/dev/null
do
sleep 1
done
kill -0 [pid] tests the existence of a process
The following solution works for most processes that aren't debuggers or processes being debugged in a debugger.
Use ptrace with argument PTRACE_ATTACH to attach to the process. This stops the process you want to kill. At this point, you should probably verify that you've attached to the right process.
Kill the target with SIGKILL. It's now gone.
I can't remember whether the process is now a zombie that you need to reap or whether you need to PTRACE_CONT it first. In either case, you'll eventually have to call waitpid to reap it, at which point you know it's dead.
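A sketch of that sequence; the PTRACE_CONT is included in case it is needed for the SIGKILL to be delivered, reflecting the uncertainty above, and the pid is whatever your monitor decided to terminate:

#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Attach, verify, kill, and reap. Returns 0 once the target is confirmed dead. */
int kill_synchronously(pid_t pid)
{
    int status;

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1)
        return -1;
    waitpid(pid, &status, 0);             /* wait for the attach-stop */

    /* ...re-check /proc/<pid> here to confirm it is still the right process... */

    kill(pid, SIGKILL);
    ptrace(PTRACE_CONT, pid, NULL, NULL); /* may be unnecessary; see the note above */
    waitpid(pid, &status, 0);             /* returns once the process has terminated */
    return WIFSIGNALED(status) ? 0 : -1;
}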
If you are writing this in C, you are sending the signal with the kill() system call. Rather than repeatedly sending the terminating signal, just send it once and then loop (or somehow periodically check) with kill(pid, 0). A signal value of zero performs error checking only, so it tells you whether the process is still alive and you can act appropriately. When it dies, kill() will fail with errno set to ESRCH.
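A minimal sketch of that polling loop:

#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Send SIGTERM once, then poll until the process is gone. */
void kill_and_wait(pid_t pid)
{
    kill(pid, SIGTERM);
    while (kill(pid, 0) == 0)     /* signal 0: existence check only */
        usleep(100000);           /* 100 ms between checks */
    /* kill() has now failed; errno == ESRCH means the process no longer exists */
}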
When you spawn these processes yourself, the classic waitpid(2) family can be used.
When not used for anything else, you can move the processes to be killed into their own cgroup; notifiers can be set on these cgroups which get triggered when a process exits.
To find out whether a process has been killed, you can chdir(2) into /proc/<pid> or open(2) that directory (see the sketch below). After process termination, the status files there cannot be accessed anymore. This method is racy (between your check and the action, the process can terminate and a new one with the same PID can be spawned).
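For the /proc check, a tiny illustration (racy, as noted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if /proc/<pid> can still be opened, 0 otherwise. Racy: the PID may
 * be reused between this check and whatever you do next. */
int proc_entry_exists(pid_t pid)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/%d", (int)pid);
    int fd = open(path, O_RDONLY | O_DIRECTORY);
    if (fd == -1)
        return 0;
    close(fd);
    return 1;
}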

Wait for part of child process to finish?

I have a fork occurring in a loop, and above the fork I prompt for a user's input. In my forked process, there's also some printing. Because there's no guarantee to the order the processes will run in, I often (or always) get lines from the child process printing between my prompt to the user and the place where they can enter information.
I.e., I get something like this:
Enter info: <OUTPUT FROM CHILD>
_
(where the _ indicates that the user is free to enter an input.)
Since I'm trying to allow my parent process to fork many child processes (each based on a piece of information given by the user) that run simultaneously, I can't wait for the child to end before letting the parent continue. Is there a way to make the parent wait for part of the child to complete before moving on?
A lot depends on what you're really trying to do, but you can't use waitpid() or wait() to wait for part of a process to finish. The wait family of functions wait on moribund processes, or processes that have been stopped due to a signal (SIGSTOP, SIGTTIN, SIGTTOU, etc).
Some questions:
Should the output from the child processes be sent to the screen, which leads to this confusion, or should it be sent to a file?
Or, should the program have a pipe from each child so that it can read the output from the child and display it on an appropriate portion of the screen when it is convenient?
Or, in a windowing environment, should the children's messages be sent to a different window (like the console window)?
Or should the children write to the syslog daemon?
Or should the children be made to hang on a SIGTTOU signal?
A lot depends on the purpose of the messages, and the importance of immediate display of the messages.
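If the pipe option fits, one minimal arrangement is for the child to write a single "ready" byte once its early output is done, and for the parent to block on reading that byte before printing the prompt. A sketch with one child (names and the single iteration are illustrative; the question's loop would repeat this per child):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);

    pid_t pid = fork();
    if (pid == 0) {                    /* child */
        close(fd[0]);
        printf("child: early output\n");
        fflush(stdout);
        write(fd[1], "R", 1);          /* tell the parent the noisy part is done */
        close(fd[1]);
        /* ...rest of the child's work runs concurrently with the parent... */
        _exit(0);
    }

    close(fd[1]);
    char c;
    read(fd[0], &c, 1);                /* parent waits for the child's 'ready' byte */
    close(fd[0]);
    printf("Enter info: ");
    /* ...read the user's input, fork the next child, etc... */
    return 0;
}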
The other answers are definitely more general, and the proper way to solve this problem would involve some kind of pipe, but my case was actually very simple, and just needed the parent to wait for a while, so I added a usleep() line, to make the parent wait a few milliseconds for the child to finish printing. It's definitely not perfect, but it worked.

Is it possible to adopt a process?

Process A fork()s process B.
Process A dies and therefore init adopts B.
A watchdog creates process C.
Is it somehow possible for C to adopt B from init?
Update:
Or would it even be possible to have C adopt B directly (when A dies), if C were created prior to A's death, without init becoming an intermediate parent of B?
Update-1:
Also, I would appreciate any comments on why having the possibility to adopt a process the way I described would be a bad thing, or difficult or impossible to implement.
Update-2 - The use case (parent and children refer to process(es)):
I have an app using a parent to manage a whole bunch of children, which rely on the parent's management facility. To do its job, the parent relies on being notified of a child's termination, which is done by receiving the related SIGCHLD signal.
If the parent itself dies due to some accident (including a segfault), I need to restart the whole "family", as it's now impossible to trigger anything on a child's termination (which might also be due to a segfault).
In such a case I need to bring down all the children and do a full restart of the system.
A possible approach to avoid this situation would be to have a spare process in place that could take over the dead parent's role... if it could receive the stepchildren's SIGCHLD signals!
No, most definitely not possible. It couldn't be implemented either, without some nasty race conditions. The POSIX guys who make these APIs would never create something with an inherent race condition, so even if you're not bothered, your kernel's not getting it anytime soon.
One problem is that pids get reused (they're a scarce resource!), and you can't get a handle or lock on one either; it's just a number. So, say, somewhere in your code, you have a variable where you put the pid of the process you want to reparent. Then you call make_this_a_child_of_me(thepid). What would happen then? In the meantime, the other process might have exited and thepid changed to refer to some other process! Oops. There can't be a way to provide a make_this_a_child_of_me API without large restructuring of the way unix handles processes.
Note that the whole deal with waiting on child pids is precisely to prevent this problem: a zombie process still exists in the process table in order to prevent its pid being reused. The parent can then refer to its child by its pid, confident that the process isn't going to exit and have the child pid reused. If the child does exit, its pid is reserved until the parent catches SIGCHLD, or waits for it. Once the process is reaped, its pid is up for grabs immediately for other programs to start using when they fork, but the parent is guaranteed to already know about it.
Response to update: consider a more complicated scheme, where processes are reparented to their next ancestor. Clearly, this can't be done in every case, because you often want a way of disowning a child, to ensure that you avoid zombies. init fulfills that role very well. So, there has to some way for a process to specify that it intends to either adopt, or not, its grandchildren (or lower). The problem with this design is exactly the same as the first situation: you still get race conditions.
If it's done by pid again, then the grandparent exposes itself to a race condition: only the parent is able to reap a pid, so only the parent really knows which process a pid goes with. Because the grandparent can't reap, it can't be sure that the grandchild process hasn't changed from the one it intended to adopt (or disown, depending on how the hypothetical API would work). Remember, on a heavily-loaded machine, there's nothing stopping a process from being taken off the CPU for minutes, and a whole load could have changed in that time! Not ideal, but POSIX's got to account for it.
Finally, suppose then that this API doesn't work by pid, but just generally says, "send all grandchildren to me" or "send them to init". If it's called after the child processes are spawned, then you get race conditions just as before. If it's called before, then the whole thing's useless: you should be able to restructure your application a little bit to get the same behaviour. That is, if you know before you start spawning child processes who should be the parent of whom, why can't you just go ahead and create them the right way round in the first place? Pipes and IPC really are able to do all the required work.
No, there is no way that you can enforce reparenting in the way you have described.
I don't know of a good way to do this, but one reason for having it is that a running process can either stand on its own or add a capability to a parent process. The adoption would occur as the result of an event, known by the (not-yet) child, but not the parent. The soon-to-be child would send a signal to the parent. The parent would adopt (or not) the child. Once part of the parent, the parent/child process would be able to react to the event, whereas neither could react to the event when standing alone.
This docking behavior could be coded into the apps, but I don't know how to do it in real-time. There are other ways to achieve the same functionality. A parent, who could accept docking children could have its functionality extended in novel ways not previously known to the parent.
While the original question is tagged with unix, there is a way to achieve this on Linux, so it's worth mentioning. It is achievable with a subreaper process. When a process's parent dies, the process gets adopted by its nearest ancestor marked as a subreaper, or by init. So in your case you would have process C mark itself as a subreaper via prctl(PR_SET_CHILD_SUBREAPER) and then spawn process A; when process A dies, process B will be adopted by C.
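A minimal sketch of the subreaper setup in process C (Linux 3.4 or later), assuming C then spawns A, which in turn spawns B:

#include <sys/prctl.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Mark this process (C) as a child subreaper: orphaned descendants are
     * reparented to it instead of to init. Requires Linux 3.4 or later. */
    prctl(PR_SET_CHILD_SUBREAPER, 1);

    if (fork() == 0) {            /* this is A */
        if (fork() == 0) {        /* this is B */
            sleep(2);             /* pretend to do some work */
            _exit(0);
        }
        _exit(0);                 /* A dies; B is reparented to C, not to init */
    }

    /* C can now wait() for B (and receive its SIGCHLD) after A has exited. */
    int status;
    while (wait(&status) > 0)
        ;
    return 0;
}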
An alternative on Linux would be to spawn C in a separate PID namespace, making it the init process of that namespace, so it can adopt the children of A when A dies.

Resources