OS: Windows. Language: C/C++.
The design requires using a mutex across a process and its sub-processes.
If I create the mutex in one process, I have to open the mutex in another process to check the critical section's availability.
To open the mutex, I need to know the name of the mutex created in the parent process. Suppose I name the mutex after my application: then the name is fixed and every process knows it. However, if I launch a second instance of my application in parallel, the two instances would collide.
Would the following be a better idea?
My idea is to name the mutex in the parent process after its process ID. Then I need to fetch the parent's process ID from the child/grandchild process in order to open the mutex.
I guess there is no direct way to fetch the top-level parent's process ID from a grandchild process, so I would have to pass the process ID along in every CreateProcess call (e.g. via the lpEnvironment parameter).
Can anyone suggest a simple method? Mutexes are so commonly used... I am a newbie.
The main idea is fine, but you can make some implementation tweaks.
For one, if your application involves multiple cooperating processes, then the main "controller" process which spawns sub-processes can easily pass its PID via a command-line argument. If the sub-processes spawn children of their own, they can pass the PID on via the same mechanism.
Taking this idea further, you can skip the PID entirely and pass the mutex name itself via a command-line argument to child processes. This has the advantage that the parent and child processes do not both need code that derives the mutex name from the PID; by passing the name itself, you decouple child processes from having to know how it is produced. Many mainstream apps, e.g. Google Chrome, use this approach.
And finally, you can do even better by adding a random string (a GUID, perhaps?) to the mutex name. It is unlikely that anyone will give their own global synchronization object the same name, but some extra precaution won't hurt.
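For illustration, a minimal sketch of that command-line approach (the names "MyApp-Mutex-%lu" and "child.exe" are made up for the example, and error handling is trimmed):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Derive a per-instance mutex name; appending a GUID string
           instead of (or besides) the PID works the same way. */
        char mutexName[64];
        sprintf(mutexName, "MyApp-Mutex-%lu", GetCurrentProcessId());

        HANDLE hMutex = CreateMutexA(NULL, FALSE, mutexName);
        if (hMutex == NULL)
            return 1;

        /* Hand the name to the child on its command line. */
        char cmdLine[MAX_PATH];
        sprintf(cmdLine, "child.exe %s", mutexName);

        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        if (CreateProcessA(NULL, cmdLine, NULL, NULL, FALSE, 0,
                           NULL, NULL, &si, &pi))
        {
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }

        /* ... use the mutex via WaitForSingleObject/ReleaseMutex ... */
        CloseHandle(hMutex);
        return 0;
    }

The child then just calls OpenMutexA(SYNCHRONIZE, FALSE, argv[1]) and forwards argv[1] unchanged to any grandchildren it spawns.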
As I understand it, you propose to use a process ID (PID) as the basis for naming a mutex to be used by your application and its subprocesses. This way, they will have their own mutex name that will not clash with the mutex name used by a second instance of your application.
This appears valid, but handles would be more reliable than PIDs, because PIDs can get recycled. The method of using handles (passing them to child processes, similar to what you suggest) is discussed in this StackOverflow thread.
I think passing the information you need to share to the child processes is the way to go. Windows has the concept of process groups for a console process and its child processes, but that is really designed for signalling all the processes as a group, not for sharing information among the group.
There are also job objects for managing a group of processes that belong to a common job, but again, this is designed for managing a group of processes, not for sharing information between the processes in the group.
If I interpret the wording "a process and its sub-processes" as well as "child/grandchild" correctly, the situation is that you have a single parent process that launches one or several children (or children launching grandchildren), or any combination of these. Either way, every process we are talking about uses the same mutex, created by the parent.
If that assumption is correct, why not just use something embarrassingly simple as:
#define MUTEXNAME "MzdhYTYzYzc3Mzk4ZDk1NDQ3MzI2MmUxYTAwNTdjMWU2MzJlZGE3Nw"
In case you wonder where this one came from, I generated it with this one-liner:
php -r "echo substr(base64_encode(sha1('some text')), 0, -2);"
Replace 'some text' with your name, the current date, or whatever random words come to your mind at this very moment. The chance that any other application on your system will ever use the same mutex name is practically zero.
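Every process in the "family" then uses the same hard-coded name; note that CreateMutex simply opens the existing mutex if another process created it first. A minimal sketch:

    #include <windows.h>

    #define MUTEXNAME "MzdhYTYzYzc3Mzk4ZDk1NDQ3MzI2MmUxYTAwNTdjMWU2MzJlZGE3Nw"

    int main(void)
    {
        /* Creates the mutex, or opens the existing one if some other
           process of the family already created it. */
        HANDLE hMutex = CreateMutexA(NULL, FALSE, MUTEXNAME);
        if (hMutex == NULL)
            return 1;

        WaitForSingleObject(hMutex, INFINITE);  /* enter critical section */
        /* ... protected work ... */
        ReleaseMutex(hMutex);                   /* leave critical section */

        CloseHandle(hMutex);
        return 0;
    }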
I have an application which uses mmap for IPC. Can I run this application multiple times? Will it have any side effects?
My application scenario:
My application forks off a child process whose job is to kill the parent process at random times, but in a controlled manner: for example, a variable is set in the parent process which signals the child process to kill the parent (this is where the mmap comes in). The parent process has a signal handler from which it can resume the application; the child process kills the parent, the application resumes, and so it continues...
Can anyone help me? Thanks in advance.
Whether running your application multiple times will have side effects depends on how you implement it. Please have a look at this answer; it contains a lot of helpful information. For example:
mmap is great if you have multiple processes accessing data in a read only fashion from the same file [...]
This means: if you want to use the same shared memory for multiple parent/child pairs, then you need to synchronize access to that shared memory. Please have a look at this Q&A on how to do that. Of course, you have to make sure that each parent/child pair uses its own variables in the shared memory.
Another option is to use a separate shared-memory segment for each parent/child pair. You could do this, for example, by making the process ID of the parent process part of the shared-memory file name. Then, when you fork the child process, you pass the process ID (or the shared-memory file name) to the child, so that parent and child know which shared memory to use in order to communicate with each other.
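Here is a minimal sketch of that second option using POSIX shm_open/mmap (the segment name "/myapp-shm-<pid>" is made up; on older glibc you may need to link with -lrt):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        /* Make the parent's PID part of the shared-memory name so that a
           second instance of the application gets its own segment. */
        char name[64];
        snprintf(name, sizeof(name), "/myapp-shm-%ld", (long)getpid());

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd == -1) return 1;
        if (ftruncate(fd, sizeof(int)) == -1) return 1;

        int *flag = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (flag == MAP_FAILED) return 1;
        *flag = 0;

        if (fork() == 0) {
            /* The child inherits the mapping across fork(); the name only
               matters if the child exec()s and must re-open the segment. */
            *flag = 1;
            _exit(0);
        }
        wait(NULL);
        printf("flag set by child: %d\n", *flag);  /* prints 1 */
        shm_unlink(name);   /* remove the name once done */
        return 0;
    }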
My question is more philosophical than technical.
The objective is to write a multiprocess (not multithreaded) program with one "master" process and N "worker" processes. The program is a Linux-only, async, event-based web server, like nginx. So the main problem is how to spawn the "worker" processes.
In the Linux world there are two ways:
1) fork()
2) fork() + the exec*() family
A short description of each way, and what confuses me about each.
The first way, plain fork(), is dirty, because the forked process gets a copy (copy-on-write, I know) of the parent's memory: signal handlers, variables, file/socket descriptors, environment and the rest, including stack and heap. As a consequence, after the fork I need to "clean up" the child: disable signal handlers, drop socket connections and all the other horrible things inherited from the parent, because the child carries a lot of data it was never meant to have. This breaks encapsulation, and many side effects are possible.
The usual pattern in this case is to run an infinite loop in the forked process to handle some data, and to do some magic with socket pairs, pipes or shared memory to set up a communication channel between parent and child before and after fork(), because the socket descriptors are duplicated in the child and refer to the same sockets as in the parent.
Also, this is the nginx way: it has a single executable binary that uses fork() to spawn child processes.
The second way is similar to the first, except that the child calls one of the exec*() functions after fork() to run an external binary. One important thing is that exec*() loads the binary into the current (forked) process's memory, automatically resets stack and heap and does all the other nasty work, so the child looks like a genuinely new instance of a program, without a copy of the parent's memory or other leftovers.
There is another problem here with establishing communication between parent and child: after exec*(), the forked process loses the parent's in-memory data (open file descriptors do survive exec*(), but the child no longer knows which descriptor means what unless it is told), so I need to create a socket pair between parent and child somehow. For example, the parent could create an additional listening socket (a Unix-domain socket, or another port) and wait for connections, and each child would connect to the parent after its initialization.
The first way is simple, but it bothers me that the child is not a clean process, just a copy of the parent's memory, with many possible side effects and leftovers, and I have to keep in mind that the forked process depends on the parent's code in many ways. The second way needs more effort to maintain two binaries, and is not as elegant as a single-file solution. Maybe the best way would be to use fork() to create the process and then something to clear its memory without an exec*() call, but I cannot find any solution for that second step.
In conclusion, I need help deciding which way to use: create a single executable like nginx and use fork(), or create two separate binaries, one "server" and one "worker", and have the "server" do fork() + exec*(worker) N times. I would like to know the pros and cons of each way; maybe I have missed something.
For a multiprocess solution the two options, fork and fork+exec, are almost equivalent; the choice depends on the child and parent process context. If the child process executes the parent's text (binary) and needs all or part of the parent's state (descriptors, signals, etc.), that is a sign to use fork. If the child should execute a new binary and needs nothing from the parent's state, fork+exec seems much more suitable.
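For reference, the fork+exec pattern would look roughly like this ("./worker" is an assumed path to the second binary; a sketch, not a finished server):

    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        const int nworkers = 4;

        for (int i = 0; i < nworkers; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Child: replace the forked image with a fresh worker
                   binary; the inherited memory image is discarded. */
                execl("./worker", "worker", (char *)NULL);
                _exit(127);  /* only reached if exec fails */
            } else if (pid < 0) {
                perror("fork");
            }
        }

        /* Master: wait for all workers to exit. */
        while (wait(NULL) > 0)
            ;
        return 0;
    }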
There is also a useful function in the pthread library: pthread_atfork().
It allows you to register handlers that will be called before and after fork().
These handlers may perform all the necessary work (closing file descriptors, for example).
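A minimal sketch of how such handlers are registered; the classic use is keeping a mutex in a sane state across fork():

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* prepare runs in the parent just before fork(); the other two run
       just after fork(), in the parent and the child respectively. */
    static void prepare(void)   { pthread_mutex_lock(&lock); }
    static void in_parent(void) { pthread_mutex_unlock(&lock); }
    static void in_child(void)  { pthread_mutex_unlock(&lock); }

    int main(void)
    {
        pthread_atfork(prepare, in_parent, in_child);

        if (fork() == 0) {
            /* Child starts with the mutex in a known, unlocked state. */
            _exit(0);
        }
        return 0;
    }

Build with -lpthread.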
As a Linux programmer, you have a rich library of multithreading capabilities available. Look at pthreads and friends.
If you need a process per request, then fork() and friends are what has been most widely used since time immemorial.
Process A fork()s process B.
Process A dies and therefore init adopts B.
A watchdog creates process C.
Is it somehow possible for C to adopt B from init?
Update:
Or would it even be possible to have C adopt B directly (when A dies), if C were created prior to A's death, without init becoming an intermediate parent of B?
Update-1:
Also, I would appreciate any comments on why having the possibility to adopt a process the way I described would be a bad thing, or difficult to impossible to implement.
Update-2 - The use case (parent and children refer to process(es)):
I have an app using a parent to manage a whole bunch of children, which rely on the parent's management facility. To do its job, the parent relies on being notified of a child's termination, which it learns of by receiving the related SIGCHLD signal.
If the parent itself dies due to some accident (including a segfault), I need to restart the whole "family", as it is now impossible to trigger anything on a child's termination (which might likewise be due to a segfault).
In such a case I need to bring down all the children and do a full restart of the system.
A possible approach to avoid this situation would be to have a spare process in place which could take over the dead parent's role... if only it could again receive the step-children's SIGCHLD signals!
No, most definitely not possible. It couldn't be implemented either, not without some nasty race conditions. The POSIX folks who make these APIs would never create something with an inherent race condition, so even if you're not bothered, your kernel is not getting it any time soon.
One problem is that PIDs get reused (they're a scarce resource!), and you can't get a handle or a lock on one either; it's just a number. So, say, somewhere in your code you have a variable holding the PID of the process you want to reparent. Then you call make_this_a_child_of_me(thepid). What would happen then? In the meantime, the other process might have exited and thepid changed to refer to some other process! Oops. There can't be a way to provide a make_this_a_child_of_me API without a large restructuring of the way Unix handles processes.
Note that the whole deal with waiting on child pids is precisely to prevent this problem: a zombie process still exists in the process table in order to prevent its pid being reused. The parent can then refer to its child by its pid, confident that the process isn't going to exit and have the child pid reused. If the child does exit, its pid is reserved until the parent catches SIGCHLD, or waits for it. Once the process is reaped, its pid is up for grabs immediately for other programs to start using when they fork, but the parent is guaranteed to already know about it.
Response to update: consider a more complicated scheme, where processes are reparented to their next ancestor. Clearly, this can't be done in every case, because you often want a way of disowning a child, to ensure that you avoid zombies; init fulfils that role very well. So there has to be some way for a process to specify that it intends to adopt, or not, its grandchildren (or lower). The problem with this design is exactly the same as in the first situation: you still get race conditions.
If it's done by PID again, then the grandparent exposes itself to a race condition: only the parent is able to reap a PID, so only the parent really knows which process a PID goes with. Because the grandparent can't reap, it can't be sure that the grandchild process hasn't changed from the one it intended to adopt (or disown, depending on how the hypothetical API would work). Remember, on a heavily loaded machine, there's nothing stopping a process from being taken off the CPU for minutes, and a whole lot could have changed in that time! Not ideal, but POSIX has to account for it.
Finally, suppose then that this API doesn't work by pid, but just generally says, "send all grandchildren to me" or "send them to init". If it's called after the child processes are spawned, then you get race conditions just as before. If it's called before, then the whole thing's useless: you should be able to restructure your application a little bit to get the same behaviour. That is, if you know before you start spawning child processes who should be the parent of whom, why can't you just go ahead and create them the right way round in the first place? Pipes and IPC really are able to do all the required work.
No, there is no way to enforce reparenting in the way you have described.
I don't know of a good way to do this, but one reason for having it is that a running process could either stand on its own or add a capability to a parent process. The adoption would occur as the result of an event known to the (not-yet) child, but not to the parent. The soon-to-be child would send a signal to the parent, and the parent would adopt (or not) the child. Once part of the parent, the parent/child pair would be able to react to the event, whereas neither could react to it standing alone.
This docking behaviour could be coded into the apps, but I don't know how to do it at run time. There are other ways to achieve the same functionality, but a parent that could accept docking children could have its functionality extended in novel ways not previously known to it.
While the original question is tagged unix, there is a way to achieve this on Linux, so it is worth mentioning: the use of a subreaper process. When a process's parent dies, the process gets adopted by the nearest subreaper ancestor, or by init. So in your case you would have process C set itself as a subreaper via prctl(PR_SET_CHILD_SUBREAPER) and spawn process A; when process A dies, process B will be adopted by C.
An alternative on Linux would be to spawn C in a separate PID namespace, making it the init process of that PID namespace, so that it can adopt the children of A when A dies.
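A sketch of the subreaper variant (Linux 3.4 or later only):

    #include <sys/prctl.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Process C: become a subreaper, so orphaned descendants are
           reparented to us instead of to init. */
        if (prctl(PR_SET_CHILD_SUBREAPER, 1) == -1) {
            perror("prctl");
            return 1;
        }

        /* ... fork/exec process A here; when A dies, A's children (B)
           become our children, and we receive their SIGCHLD. */
        pid_t pid;
        while ((pid = wait(NULL)) > 0)
            printf("reaped %ld\n", (long)pid);
        return 0;
    }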
I've tried to look this up, but I'm struggling a bit to understand the relationship between the parent process and the child process immediately after I call fork().
Are they completely separate processes, only associated by the ID/parent ID? Or do they share memory? For example, the 'code' section of each process: is it duplicated so that each process has its own identical copy, or is it 'shared' in some way so that only one copy exists?
I hope that makes sense.
In the name of full disclosure this is 'homework related'; while not a direct question from the book, I have a feeling it's mostly academic and, in practice, I probably don't need to know.
As it appears to the process, the entire memory is duplicated.
In reality, it uses a "copy-on-write" system: the first time either process modifies its memory after fork(), a separate copy is made of the modified page (usually 4 kB).
Usually the code segment of a process is not modified, in which case it remains shared.
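A tiny demonstration that the two copies really are independent from the program's point of view:

    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    int global = 42;

    int main(void)
    {
        if (fork() == 0) {
            global = 99;  /* write triggers copy-on-write of this page */
            printf("child:  %d\n", global);   /* prints 99 */
            _exit(0);
        }
        wait(NULL);
        printf("parent: %d\n", global);       /* still prints 42 */
        return 0;
    }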
Logically, a fork creates an identical copy of the original process that is largely independent of the original. For performance reasons, memory is shared with copy-on-write semantics, which means that unmodified memory (such as code) remains shared.
File descriptors are duplicated, so that the forked process could, in principle, take over a database connection on behalf of the parent (or they could even jointly communicate with the database if the programmer is a bit twisted). More commonly, this is used to set up pipes between processes so you can write find -name '*.c' | xargs grep fork.
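As a concrete sketch of that pipe plumbing, here is a hand-rolled ls | wc -l, which is roughly what the shell does for find ... | xargs ...:

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) return 1;

        if (fork() == 0) {               /* writer: ls */
            dup2(fd[1], STDOUT_FILENO);  /* stdout -> pipe write end */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);
        }
        if (fork() == 0) {               /* reader: wc -l */
            dup2(fd[0], STDIN_FILENO);   /* stdin <- pipe read end */
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }
        close(fd[0]); close(fd[1]);      /* parent closes both ends */
        while (wait(NULL) > 0)
            ;
        return 0;
    }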
A bunch of other stuff is shared. See here for details.
One important omission is threads: the child process only inherits the thread that called fork(). This causes no end of trouble in multithreaded programs, since the state of mutexes, etc., that were locked in the parent is implementation-specific (and don't forget that malloc() and printf() use locks internally). The only safe thing to do in the child after fork() returns is to call execve() as soon as possible, and even then you have to be cautious with file descriptors. See here for the full horror story.
They are separate processes, i.e. the child and the parent will have separate PIDs.
The child inherits all of the open descriptors from the parent.
Internally, the pages that can be modified (the stack and heap regions, unlike the .text region) are shared between the parent and the child until one of them tries to modify the contents. In that case a new page is created, the data of the page being modified is copied to this freshly allocated page, and the page is mapped into the address space of whichever process caused the change, parent or child. This is called COW (copy-on-write, mentioned by other members in their answers above).
The child can finish execution, and until it is reclaimed by the parent using the wait() or waitpid() calls it will be in the ZOMBIE state. Reaping clears the child's entry from the process table and allows the child's PID to be reused. Usually, when a child dies, the SIGCHLD signal is sent to the parent, which would ideally result in a handler being called; the wait() call is then executed in that handler.
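A sketch of that pattern; the WNOHANG loop matters because SIGCHLD is not queued and several children may have died before the handler runs:

    #include <sys/wait.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_sigchld(int sig)
    {
        (void)sig;
        /* Reap every child that has exited so far; none stays a zombie. */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_sigchld;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        sigaction(SIGCHLD, &sa, NULL);

        if (fork() == 0)
            _exit(0);   /* child exits immediately */

        pause();        /* returns once the SIGCHLD handler has run */
        return 0;
    }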
In case the parent exits without cleaning up an already-running or zombie child (via the wait()/waitpid() calls), the init process (PID 1) becomes the parent of these now orphaned children. The init process executes wait()/waitpid() calls at regular intervals.
Yes, they are separate processes, but with some special "properties". One of them is the child-parent relation.
But more important is the sharing of memory pages in a copy-on-write (COW) manner: until one of them performs a write (to a global variable or whatever) on a page, the memory pages are shared. When a write is performed, a copy of that page is created by the kernel and mapped at the right address.
The COW magic is done in the kernel by marking the pages as read-only and using the page-fault mechanism.
I have a program that can fork() and exec() multiple processes in a chain.
E.g.: process A --> fork, exec B --> fork, exec C --> fork, exec D. So A is the grandparent of C and the great-grandparent of D.
Now the problem is that I do not have any control of processes B, C and D. So, several things can happen.
It might so happen that a descendant process calls setsid() to change its process group and session.
Or one of the descendant processes dies (say C), and hence its child (D) is reparented to init.
Therefore, I can't rely on the process group ID or the parent ID to track all descendants of A. Is there any reliable way of keeping track of all descendants? More specifically, I would like to kill all the descendants (orphans and otherwise).
It would also be great if the solution were POSIX-compliant.
The POSIX way to do this is simply to use process groups. Descendant processes that explicitly change their process group / session are making a deliberate decision not to have their lifetime tracked by their original parent - they are specifically emancipating themselves from the parent's control. Such processes are not orphans - they are adults that have "flown the nest" and wish to exert control over their own lifetime.
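For completeness, a sketch of the process-group bookkeeping ("./worker" is an assumed binary): the parent puts the child into its own group and later signals the whole group.

    #include <sys/wait.h>
    #include <signal.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            setpgid(0, 0);   /* child: lead a new process group */
            execl("./worker", "worker", (char *)NULL);
            _exit(127);
        }
        setpgid(pid, pid);   /* parent does it too, to close the race */

        /* ... later: signal the child and every descendant that did
           not emancipate itself from the group. */
        sleep(1);
        kill(-pid, SIGTERM); /* negative pid = whole process group */

        while (waitpid(-1, NULL, 0) > 0)
            ;
        return 0;
    }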
I agree with caf's general sentiment: if a process calls setsid, it's saying it wants to live on its own, no matter what. You need to think carefully about whether you really want to kill such processes.
That being said, sometimes, you will want some form of “super-session” to contain a tree of processes. There is no tool that provides such super-sessions in the POSIX toolbox, but I'm going to propose a few solutions. Each solution has its own limitations, so it's likely that they won't all be applicable to your case, but hopefully one of them will be suitable.
A clean solution is to run the processes in their own virtualized environment. This could be a FreeBSD-style jail, Linux cgroups, or any other kind of virtualization technology. The limitations of this approach are that virtualization technologies are OS-dependent, and that the processes will run in a somewhat different context.
If you only have a single instance of these processes on the system and you can get root involved, run the processes as a dedicated user. The super-session is then defined as the set of processes running as the dedicated user. Kill the descendants with kill(-1, signum) (note that this will kill the killer process itself too, unless the signal is blocked or handled).
You can make the process open a unique file, making sure that the FD_CLOEXEC flag is not set on the file descriptor. All child processes will then inherit the open file unless they explicitly set the FD_CLOEXEC flag before calling execve, or close the file. Kill the processes with fuser -k, or by obtaining the list of process IDs with fuser or lsof (fuser is in POSIX, but fuser -k is not). Note that there's a race condition: a process may fork between the time you call fuser and the time you kill it; therefore you need to call fuser in a loop until no more processes appear (don't loop until all processes are dead, as this could be an infinite loop if one of the processes is blocking your signal).
You can generate a unique random string and define an environment variable with that name, or with a well-known name and that unique string as a value. It will be inherited by all descendant processes unless they choose to change their environment. There is no portable way to search for processes based on their environment, or even to obtain the environment of another process. On many unix variants, you can obtain the information with an option to ps (such as ps -e on *BSD or ps e on Linux); the information may not be easy to parse, but the presence of the unique string is a sufficient indicator. As with fuser above, note the need for a loop to avoid a race condition if a descendant calls fork too late for you to notice its child but before you could kill the parent.
You can LD_PRELOAD a small library that forks a thread that listens on a communication channel, and kills its process when notified. This may disrupt the process if it expects to know about all of its own threads; it's only a possibility on architectures where the standard library is always thread-safe, and you'll miss statically linked processes. The communication channel can be anything that allows the master process to broadcast the suicide order; one possibility is a pipe where each descendant process does a blocking read and the ancestor process closes the pipe to notify the descendants. Pass the file descriptor number through an environment variable.
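A sketch of such a preloaded library; KILL_PIPE_FD is an invented variable name, and you would build it with gcc -shared -fPIC -o libsuicide.so suicide.c -lpthread and launch the tree with LD_PRELOAD pointing at it:

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *watcher(void *arg)
    {
        int fd = (int)(long)arg;
        char c;
        /* Blocks here; when the ancestor closes the write end of the
           pipe, read() returns 0 (EOF) and the process kills itself. */
        (void)read(fd, &c, 1);
        _exit(1);
        return NULL;
    }

    /* Runs automatically in every process that loads the library. */
    __attribute__((constructor))
    static void start_watcher(void)
    {
        const char *s = getenv("KILL_PIPE_FD");  /* invented name */
        if (!s)
            return;
        pthread_t t;
        if (pthread_create(&t, NULL, watcher, (void *)(long)atoi(s)) == 0)
            pthread_detach(t);
    }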