Read STDIN in child process after exiting parent process - c

I'm looking for a way to read from STDIN (the console) in a child process after the parent process has exited.
My program should work like this:
The parent process forks and creates a child process.
After creating the child process, the parent MUST exit. I cannot use functions such as wait().
The thing is, once I exit the parent process, I can no longer read from the console. Is there any way to 'pass control' to the child process instead of passing it back to the shell?
Instructions:
Process 1: reads data (single lines) from the standard input stream and passes it to process 2 via an IPC message queue.
Process 2: receives the data sent by process 1 and prints it to the standard output stream.
Both processes should be started automatically from one initiative process. After launching the child processes, the initiative process should exit immediately.

Related

Control flow of fork system call when wait is present or not present

In this code (run on Linux):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

void child_process()
{
    int count = 0;
    for (; count < 1000; count++)
    {
        printf("Child Process: %04d\n", count);
    }
    printf("Child's process id: %d\n", getpid());
}

void parent_process()
{
    int count = 0;
    for (; count < 1000; count++)
    {
        printf("Parent Process: %04d\n", count);
    }
}

int main()
{
    pid_t pid;
    int status;
    if ((pid = fork()) < 0)
    {
        printf("unable to create child process\n");
        exit(1);
    }
    if (pid == 0)
        child_process();
    if (pid > 0)
    {
        printf("Return value of wait: %d\n", wait(&status));
        parent_process();
    }
    return 0;
}
If the wait() were not present in the code, one of the processes (child or parent) would finish its execution, control would be given to the Linux terminal, and then the remaining process (child or parent) would run. The output of such a case is:
Parent Process: 0998
Parent Process: 0999
guest#debian:~/c$ Child Process: 0645 //Control given to terminal & then child process is again picked for processing
Child Process: 0646
Child Process: 0647
In case wait() is present in the code, what should be the flow of execution?
When fork() is called, a process tree is created containing the parent and child processes. In the above code, when the child process finishes, the parent is informed of the death of the (zombie) child via the wait() system call. But since parent and child are two separate processes, is it mandatory that control passes directly to the parent once the child process is over (with no control given to any other process, such as the terminal, at all)? If yes, then it is as if the child process were part of the parent process, like a function called from another function.
This comment is, at least, misleading:
//Control given to terminal & then child process is again picked for processing
The "terminal" process doesn't really enter into the equation. It's always running, assuming that you are using a terminal emulator to interact with your program. (If you're using the console, then there is no terminal process. But that's unlikely these days.)
The process in control of the user interface is whatever shell you're using. You type some command-line like
$ ./a.out
and the shell arranges for your program to run. (The shell is an ordinary user program without special privileges, by the way. You could write your own.)
Specifically, the shell:
Uses fork to create a child process.
Uses waitpid to wait for that child process to finish.
The child process sets up any necessary redirects and then uses some exec system call, typically execve, to replace itself with the ./a.out program, passing execve (or whatever) the command line arguments you specified.
That's it.
Your program, in ./a.out, uses fork to create a child and then possibly waits for the child to finish before terminating. As soon as your parent process terminates, the shell's waitpid() can return, and as soon as it returns, the shell prints a new command prompt.
So there are at least three relevant processes: the shell, your parent process, and your child process. In the absence of synchronisation functions like waitpid(), there are no guarantees about ordering. So when your parent process calls fork(), the created child could start executing immediately. Or not. If it does start executing immediately, it does not necessarily preempt your parent process, assuming your computer is reasonably modern and has more than one core. They could both be executing at the same time. But that's not going to last very long because your parent process will either immediately call exit or immediately call wait.
When a process calls wait (or waitpid), it is suspended and becomes runnable again when the process it is waiting for terminates. But again there are no guarantees. The mere fact that a process is runnable doesn't mean that it will immediately start running. But generally, in the absence of high load, the operating system will start running it pretty soon. Again, it might be running at the same time as another process, such as your child process (if your parent didn't wait for it to finish).
In short, if you performed your experiment a million times, and your parent waits for your child, then you will see the same result a million times; the child must finish before the parent is unsuspended, and your parent must finish before the shell is unsuspended. (If your parent process printed something before waiting, you would see different results; the parent and child outputs could be in any order, or even overlapped.)
If, on the other hand, your parent does not wait for the child, then you could see any of a number of results, and in a million repetitions you're likely to see more than one of them (but not with the same probability). Since there is no synchronisation between parent and child, the outputs could appear in either order (or be interleaved). And since the child is not synchronised with the shell, its output could appear before or after the shell's prompt, or be interleaved with the shell's prompt. No guarantees, other than that the shell will not resume until your parent is done.
Note that the terminal emulator, which is a completely independent process, is runnable the entire time. It owns a pseudo-terminal ("pty") which is how it emulates a terminal. The pseudo-terminal is a kind of pipe; at one end of the pipe is the process which thinks it's communicating with a console, and at the other end is the terminal emulator which interprets whatever is being written to the pty in order to render it in the GUI, and which sends any keystrokes it receives, suitably modified as a character stream back through the pipe. Since the terminal emulator is never suspended and its execution is therefore interleaved with whatever other processes are active on your computer, it will (more or less) immediately show you any output which is sent by your shell or the processes it starts up. (Again, assuming the machine is not overloaded.)

Can a parent read from a pipe if a child dies?

If a parent and child process communicate via a pipe, what would happen if a parent reads from a pipe after a child process has written to it and died?
When the child dies, its end of the pipe is automatically closed. The parent will read EOF after reading everything that the child wrote before it died, just as if the child had called close() explicitly.
Note that the parent can only read data that has actually been written to the pipe. If the child process is performing buffered output, as is the default when using stdio, not all the data the application has written may be in the pipe when it dies. Stdio buffers are automatically flushed when the process calls exit(), but if the process dies due to a signal, this flush will not happen.

Assign new tasks to forked() processes

I need to create a multi-process program that:
Creates 5 processes using fork();
Sends stuff to do to the child processes using pipes.
When a child process completes its stuff, it should receive new stuff to do from the parent process until all the stuff is completed.
Right now my idea is to wait() for a child that has completed its task (and exited) and then create a new child process, so I always have a maximum of 5 processes.
Is there a way to "re-use" an already existing process? Maybe by "signaling" something? I can't find anything on Google.
Using C.
I solved my problem in this way:
Children write the result of their calculation to pipe A (this pipe is non-blocking). Then they wait for their next input on pipe B (this pipe is blocking).
The parent loops over all children, trying to read something from pipe A (since this pipe is non-blocking, the parent moves on if a child hasn't sent anything). If pipe A contains a result, the parent sends another task to that child on pipe B.
There is a pipe A and a pipe B for each child.

read() hangs on zombie process

I have a while loop that reads data from a child process using blocking I/O by redirecting stdout of the child process to the parent process. Normally, as soon as the child process exits, a blocking read() in this case will return since the pipe that is read from is closed by the child process.
Now I have a case where the read() call does not exit for a child process that finishes. The child process ends up in a zombie state, since the operating system is waiting for my code to reap it, but instead my code is blocking on the read() call.
The child process itself does not have any child processes running at the time of the hang, and I do not see any file descriptors listed when looking in /proc/<child process PID>/fd. The child process did however fork two daemon processes, whose purpose seems to be to monitor the child process (the child process is a proprietary application I do not have any control over, so it is hard to say for sure).
When run from a terminal, the child process I try to read() from exits automatically, and in turn the daemon processes it forked terminate as well.
Linux version is 4.19.2.
What could be the reason for read() not returning in this case?
Follow-up: How to avoid read() from hanging in the following situation?
The child process did however fork two daemon processes ... What could be the reason for read() not returning in this case?
The forked daemon processes still have the file descriptor open when the child terminates. Hence the read call never returns 0.
Those daemon processes should close all inherited file descriptors and open files for logging instead.
A possible reason (the most common) for read(2) blocking on a pipe with a dead child, is that the parent has not closed the writing side of the pipe, so there's still an open (for writing) descriptor for that pipe. Close the writing side of the pipe in the parent process before reading from it. The child is dead (you said zombie) so it cannot be the process with the writing side of the pipe open. And don't forget to wait(2) for the child in the parent, or you'll get a system full of zombies :)
Remember, you have to do two closes in your code:
One in the parent process, to close the writing side of the pipe, leaving the parent process with only a reading descriptor.
One in the child process (just before exec(2)ing) closing the reading side of the pipe, leaving the child process only with a writing descriptor.
In case you want to use the pipe(2) to send information to the child, swap reading and writing in the above two points.

Piping From Child To Parent Process

I am currently writing a program in C on Linux in which a parent process creates a child process, and then that child process creates another child process (so three processes total). I have to pipe a string from the parent to the first child process, and then from the first child process to the child of that process. I am currently piping the string from parent to child using code similar to this (not all code shown):
pipe(pipeArray);

/* in the parent, after fork(): */
write(pipeArray[1], myString, length);
close(pipeArray[1]);

/* in the child: */
read(pipeArray[0], FirstProcessString, length + 1);
close(pipeArray[0]);
My problem is the second part of my program, in which I must now take the string from the second child process and pipe it all the way up to the first parent process. How do you pipe from the second child process to the original parent process (the first process)? I have tried variations of this code in order to pipe upward with no luck, and I have also researched this topic without finding anything helpful.