Let's say we have 3 pipes, 3 children that send data through them, and 1 parent that reads this data (e.g. printing some sent string). All the children are the same: they send data to their pipe, say 4 times, then close their side of the pipe and exit.
How can you know (in the parent) that all the pipes have been closed, in these two cases:
1. when we're simply iterating through the pipes, with some sleep time, checking whether there's any data to be read;
2. when we're using poll() to read data (multiplexing)?
(I'm not sure, but I think in the first case my program was returning with some pipe error after all the write sides of the pipes had been closed?)
I guess we could iterate through the write sides of the pipes and check whether all of them are -1? (I'm not sure what value a file descriptor is set to when we close it, or whether it's set at all; if not, maybe we could set it ourselves in the child.) Then we could exit the parent (data-reading) program?
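For illustration, here is a minimal sketch of the poll() approach (the array name pipefd is invented; it holds the parent's read end for each child). Once every child has closed its write end of a pipe and the buffered data has been consumed, read() on that pipe returns 0 (EOF), so the parent can simply count EOFs and stop once all three pipes have reported one.

/* Sketch only: pipefd[i] is assumed to hold the parent's read end for child i. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define NCHILD 3

void read_all(int pipefd[NCHILD])
{
    struct pollfd pfd[NCHILD];
    int open_pipes = NCHILD;
    char buf[256];

    for (int i = 0; i < NCHILD; i++) {
        pfd[i].fd = pipefd[i];
        pfd[i].events = POLLIN;
    }

    while (open_pipes > 0) {
        if (poll(pfd, NCHILD, -1) < 0)
            break;
        for (int i = 0; i < NCHILD; i++) {
            if (pfd[i].revents & (POLLIN | POLLHUP)) {
                ssize_t n = read(pfd[i].fd, buf, sizeof(buf));
                if (n > 0) {
                    printf("child %d sent: %.*s\n", i, (int)n, buf);
                } else if (n == 0) {           /* EOF: all write ends are closed */
                    close(pfd[i].fd);
                    pfd[i].fd = -1;            /* poll() ignores negative fds */
                    open_pipes--;
                }
            }
        }
    }
}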
Related
I am trying to program the server side of a group chat system using C, whilst my friend is programming the client side. For each client connection the server receives, it forks a child process to handle that client and continues accepting any further clients.
The server is required to send a list of all online users (connected clients) to each of the currently connected clients, and for this reason I have used pipes. Basically, when a child process is created, it receives some information from the client through a socket and sends that information to the parent, which keeps a list of all the clients, through a pipe. This list has to be updated every time a client makes a change, like starting to chat or disconnecting. For example, if a client disconnects, the child sends a message to the parent through the pipe and the parent performs the necessary operations on the list so that it gets updated. Note that a new pipe is created for each and every new connection.
My problem is that if, for example, I receive 3 connections one after another and the 2nd child disconnects, the parent does not read the information from that pipe, since the parent now has a different pipe than the 2nd child's. (Remember that a new pipe has been created because a 3rd connection was made.) How can I go about solving this problem?
I have also tried creating one common pipe, but if I don't close the pipe before reading/writing I get an error, and if I do close it I get a segmentation fault when the second client connects, since the pipe would already be closed.
Any help would be greatly appreciated because I have been searching for hours to no avail.
Thanks.
The parent server process knows when a child is created because it creates the child. It can tell when a child dies by setting a SIGCHLD signal handler, so it is notified when a child dies. The Nth child has N-1 pipes to close: those going to the other children (unless some of those children have already died). The parent process closes the write end of each pipe it creates; the child process closes the read ends of the pipes it inherits (which leaves it with a socket to the client and the write end of the pipe created for it to communicate with the parent).
If you need to know when a child starts communicating with a client, then you need to send a message down the pipe from the child to the parent. It is not so obvious how to tell when the child stops communicating — how long needs to elapse before you declare that the child is idle again?
In the parent, you end up polling in some shape or form (select(), poll(), epoll()) on the listening socket and all the read pipes. When some activity occurs, the parent wakes up and responds appropriately. It's a feasible design as long as it doesn't have to scale to thousands or more clients. It requires some care, notably in closing enough file descriptors.
You say:
My problem is that if, for example, I receive 3 connections one after another and the 2nd child disconnects, the parent does not read the information from that pipe, since the parent now has a different pipe than the 2nd child's. (Remember that a new pipe has been created because a 3rd connection was made.) How can I go about solving this problem?
The parent should have an array of open file descriptors (pipes open for reading to various children), along with an indication of which child (PID) is on the other end of the pipe. The parent will close the pipe when it gets EOF on the pipe, or when it is notified that the child has died (via waitpid() or a relative). The polling mechanism will tell you when a pipe is closed, at least indirectly (you will be told the file descriptor won't block, and then you get the EOF — zero bytes read).
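As a rough sketch of that loop (pipe_fd, pipe_pid and the other names are invented for illustration, and most error handling is omitted), the parent might look something like this:

/* Sketch only: pipe_fd[] and pipe_pid[] are invented names for arrays the parent
 * keeps (one read end and one PID per child); listen_fd is the listening socket. */
#include <poll.h>
#include <sys/wait.h>
#include <unistd.h>

void serve(int listen_fd, int pipe_fd[], pid_t pipe_pid[], int nchildren)
{
    for (;;) {
        struct pollfd pfd[1 + nchildren];          /* C99 VLA, for brevity */
        pfd[0].fd = listen_fd;
        pfd[0].events = POLLIN;
        for (int i = 0; i < nchildren; i++) {
            pfd[1 + i].fd = pipe_fd[i];            /* entries of -1 are ignored by poll() */
            pfd[1 + i].events = POLLIN;
        }

        if (poll(pfd, 1 + nchildren, -1) < 0)
            continue;

        if (pfd[0].revents & POLLIN) {
            /* accept() the new client, pipe(), fork(), close the unused ends,
             * and record the new read end and child PID in the arrays */
        }

        for (int i = 0; i < nchildren; i++) {
            if (pfd[1 + i].revents & (POLLIN | POLLHUP)) {
                char buf[512];
                ssize_t n = read(pipe_fd[i], buf, sizeof(buf));
                if (n > 0) {
                    /* update the client list from the child's message */
                } else if (n == 0) {               /* EOF: the child closed its write end */
                    close(pipe_fd[i]);
                    pipe_fd[i] = -1;
                    waitpid(pipe_pid[i], NULL, WNOHANG);  /* reap the child if it has exited */
                }
            }
        }
    }
}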
In your scenario, the parent has one listening socket open, plus 3 read file descriptors for the pipes to the 3 children (plus standard input, output and error, and maybe syslog).
Although you could use a single pipe from all the children, it is much trickier to handle. You'd have to include in each message an identification of which child wrote it, and ensure that each message is written atomically by the child. The parent has to be able to tell how much to read at any point so as not to get confused. The advantage of a single pipe is that there is less file descriptor manipulation for the polling system call; it also scales indefinitely (no running out of file descriptors).
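If you did want to try the single-pipe route, the usual approach is a small fixed-size record that each child writes with a single write() call; POSIX guarantees that writes of at most PIPE_BUF bytes are atomic, so records from different children cannot interleave. A hypothetical sketch:

/* Sketch only: one fixed-size record per message, identifying the sender.
 * sizeof(struct child_msg) is well under PIPE_BUF, so each write() is atomic. */
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

struct child_msg {
    pid_t sender;                 /* which child wrote this message */
    int   type;                   /* e.g. CONNECTED, CHATTING, DISCONNECTED */
    char  text[64];
};

/* Child side: one write() of the whole struct. */
void send_msg(int wfd, int type, const char *text)
{
    struct child_msg m;
    memset(&m, 0, sizeof(m));
    m.sender = getpid();
    m.type = type;
    strncpy(m.text, text, sizeof(m.text) - 1);
    write(wfd, &m, sizeof(m));
}

/* Parent side: always read exactly one record at a time. */
int recv_msg(int rfd, struct child_msg *m)
{
    ssize_t n = read(rfd, m, sizeof(*m));
    return n == (ssize_t)sizeof(*m);      /* 0 on EOF or a short read */
}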
In neither case should you run into problems with core dumps.
Almost all the pipe examples I've seen advise closing the unused write/read ends. Also, the man page clearly states that pipe() creates "a pipe, a unidirectional data channel". But I've tried reading and writing to both ends of the pipe in both the parent and the child, and everything seems to be OK.
So my doubt is: why do we need 2 pipes if two processes have to both read from and write to each other? Why not do it using a single pipe?
If you use the same pipe, how does the child separate its messages from the parent's messages and vice versa?
For example:
Parent writes to pipe
Parent reads from pipe hoping to get a message from the child, but gets its own message :(
It is much easier to use one pipe for child->parent and another pipe for parent->child.
Even if you have some protocol for reading/writing, it is quite easy to deadlock the parent and child processes.
You may be able to read and write at both ends of the created pipe (depending on the system), but unidirectional means that data only travels in one direction at any given time, from parent to child or vice versa. Two pipes are needed to send and receive data independently: with two pipes you can read and write at the same time, but with one pipe you must finish reading before you can write to it, or finish writing before you can read from it. In layman's terms, with only one pipe you can only read or write at any point in time, not both.
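For reference, the two-pipe arrangement described above looks roughly like this minimal sketch (error checking omitted): one pipe carries parent-to-child data, the other child-to-parent, and each process closes the ends it does not use.

/* Sketch: two pipes for two-way parent/child communication (no error checking). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], to_parent[2];
    pipe(to_child);                       /* parent writes, child reads */
    pipe(to_parent);                      /* child writes, parent reads */

    if (fork() == 0) {                    /* child */
        close(to_child[1]);               /* close the ends this process doesn't use */
        close(to_parent[0]);
        char buf[64];
        ssize_t n = read(to_child[0], buf, sizeof(buf));
        if (n > 0)
            write(to_parent[1], buf, n);  /* echo it back to the parent */
        return 0;
    }

    /* parent */
    close(to_child[0]);
    close(to_parent[1]);
    write(to_child[1], "hello", 5);
    char buf[64];
    ssize_t n = read(to_parent[0], buf, sizeof(buf));
    if (n > 0)
        printf("child replied: %.*s\n", (int)n, buf);
    return 0;
}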
I have a scenario where two pipes are used for IPC between a child and its parent. The child process uses execvp to execute a remote program. The parent process takes care of writing data to the pipe. The remote program's stdin is duplicated onto the read end of one pipe; the parent writes data to the same pipe at the write end. The remote program has a simple getchar() in one of its functions, which is called twice from the remote program's main function.
The parent writes data in the following sequence.
Writes data to the pipe, then closes all the required handles (say it wrote "1").
After some time, writes data to the pipe again, then closes the handle (say it wrote "2").
The getchar() in the remote program reads "1" just fine, but the problem comes while reading "2": getchar() returns garbage values.
I have debugged using GDB and the program exits normally. No "signals" are raised while debugging.
I have used the fork(), dup2() and pipe() functions and need to stick to them.
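For reference, the setup described above usually looks roughly like the sketch below (remote_prog is a made-up name, and error checking is omitted). One thing worth keeping in mind is that once every write end of the pipe has been closed and the buffered data has been consumed, getchar() in the child returns EOF rather than blocking for more input.

/* Sketch of the setup described above: the child's stdin becomes the read end of a
 * pipe that the parent writes to. "remote_prog" is a made-up program name. */
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);

    if (fork() == 0) {                    /* child */
        close(fd[1]);                     /* the child only reads */
        dup2(fd[0], STDIN_FILENO);        /* getchar() in remote_prog now reads the pipe */
        close(fd[0]);
        char *args[] = { "remote_prog", NULL };
        execvp("./remote_prog", args);
        _exit(127);                       /* only reached if exec fails */
    }

    /* parent */
    close(fd[0]);                         /* the parent only writes */
    write(fd[1], "1", 1);
    /* ... later ... */
    write(fd[1], "2", 1);
    close(fd[1]);                         /* after this, reads in the child return EOF */
    return 0;
}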
The program should fork, then the parent should read user input, send it to the child; the child should deal with it, then send a result to the parent, who prints it (it is required to work this way).
I've done a part of it, but the program locks up after reading from the FIFO the first time.
I suspect the problem is somewhere between lines 122–199. Making the pipe nonblocking makes the program jump through the scanf at 185 and loop indefinitely. Closing and reopening the pipe before writing and after reading leads to the same effect.
Here is the source: link.
Later edit (clarification):
The parent blocks before the printf at 184, when it reads the second command (the first time it seems to work just fine).
I haven't implemented the "child sends stuff back to the parent" part. At the moment I just want to make the child output the data it receives through the pipe from the parent and then give control back to the parent to read another command.
The child lives in a paused state (pause()) while the parent reads input and sends it through the pipe; the parent then wakes up the child and goes into a paused state itself. The child reads data from the pipe and outputs it, then wakes up the parent and goes back to sleep.
Did you use some multiplexing system call like select or poll, which can test whether some of a set of file descriptors are ready (for input or for output)?
Learn more about poll or about select and friends.
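For example, a minimal select() sketch looks like this (pipe_fd is just a placeholder for whichever descriptor the parent reads from):

/* Sketch: wait until pipe_fd (a placeholder name) is readable before read()ing it. */
#include <sys/select.h>
#include <unistd.h>

ssize_t read_when_ready(int pipe_fd, char *buf, size_t len)
{
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(pipe_fd, &rset);

    /* Blocks until pipe_fd is readable; a struct timeval could be passed instead of NULL. */
    if (select(pipe_fd + 1, &rset, NULL, NULL, NULL) <= 0)
        return -1;

    return read(pipe_fd, buf, len);       /* 0 here means the writer closed its end */
}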
You should clarify in your question which part (parent or child) it is that locks.
Make sure all output is line feed terminated. It seems (from a quick reading) that you use puts(), which should take care of that.
Try calling fflush() on the output after the child is done, to make sure the output gets written to the pipe, if the child lives on while the parent reads its output. I didn't read your code closely enough to track down the lifetime handling.
I am currently learning sockets programming using C in a Linux environment. As a project I am attempting to write a basic chat server and client.
The intention is to have the server fork a process for each client that connects.
The problem that I am having is reading data in one child and writing it to all of the connected clients.
I have tried to accomplish this by looping on a call to select in the child, waiting for data to arrive either on the socket or on the read end of the pipe. If data arrives on the socket, the idea is that the child writes it to the write end of the pipe, which causes select to report the read end of the pipe as ready for reading.
As this pipe is shared between all the children, each child should then read the data from the pipe. This does not work: it appears the data in the pipe cannot be read by every child process at the same time, and the children that "miss" the data block in the call to read.
Below is the code in the child process that does this:
for ( ; ; )
{
    rset = mset;    /* select() modifies the set, so refresh it from the master copy */
    if (select(maxfd + 1, &rset, NULL, NULL, NULL) > 0)
    {
        if (FD_ISSET(clientfd, &rset))
        {
            /* Data from this child's client: forward it onto the shared pipe */
            ssize_t n = read(clientfd, buf, sizeof(buf));
            if (n > 0)
                write(pipes[1], buf, n);
        }
        if (FD_ISSET(pipes[0], &rset))
        {
            /* Data on the shared pipe: send it on to this child's client */
            ssize_t n = read(pipes[0], buf, sizeof(buf));
            if (n > 0)
                write(clientfd, buf, n);
        }
    }
}
I am presuming the method that I am currently using simply will not work. Is it going to be possible for messages received from a client to be written to all of the other connected clients via IPC?
Thanks
To get around the problem of a child reading more data from the pipe than it should (and in turn making another child get "stuck" trying to read from an empty pipe), you should probably look into using either POSIX message queues or a separate pipe between the parent and each individual child process, rather than a single global pipe for communication between the parent and all the children. As it stands right now, when the server writes to the pipe to communicate with its children, it's not really able to control which child will read from the pipe at any given time, since the scheduling of processes by the OS is non-deterministic. In other words, without some type of synchronization mechanism or read/write barriers, nothing in your code stops one child from "skipping" a read and a second child from doing a double read, leaving another child that should have received the broadcast data from the server starved, and therefore blocked.
A simple way around this could be to have a private pipe between the parent and each individual child. In the server, the child processes can read from the client and send that data back to the parent process; the parent can then, using the list of pipe descriptors it has accumulated for all the children, write the broadcast message back to each individual child process, which in turn sends it on to its client. No child ever gets "starved" of data, since there is no possibility of a double read by another child process. There is only a single reader/writer on each pipe, and the communication is deterministic.
If you don't want to handle multiple pipes for each child in the server's parent process, you could use a global message queue using POSIX message queues (found in mqueue.h). With that approach, if a child grabs a message that it's not supposed to have (i.e., you would need to pass around a struct that contains some type of ID value), it places the message back in the queue and attempts to read another message. That's not quite as efficient speed-wise as the direct pipe approach, but it lets you put back a message that was not intended for the current child, without the interleaving complications you would get with a global pipe or FIFO.
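For what it's worth, a bare-bones POSIX message queue sketch looks like this (the queue name and sizes are made up, and on Linux you may need to link with -lrt):

/* Sketch: create/open a named POSIX message queue and send/receive one message. */
#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

#define QNAME "/chat_broadcast"           /* hypothetical queue name */

int main(void)
{
    struct mq_attr attr = { 0 };
    attr.mq_maxmsg  = 10;
    attr.mq_msgsize = 128;

    mqd_t q = mq_open(QNAME, O_CREAT | O_RDWR, 0600, &attr);

    const char *msg = "user joined";
    mq_send(q, msg, strlen(msg) + 1, 0);           /* priority 0 */

    char buf[128];
    mq_receive(q, buf, sizeof(buf), NULL);         /* buffer must be >= mq_msgsize */

    mq_close(q);
    mq_unlink(QNAME);
    return 0;
}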
Each byte of data written to a pipe will be read exactly once. It isn't duplicated to every process with the read end of the pipe open.
If you want the data duplicated to multiple destination processes, you have to explicitly duplicate the data. For example, you could have one "master" process that has a pipe to and from every "slave" process. When a slave wants to broadcast a message to the other slaves, it sends it to the master process, which loops around and writes it once to each pipe going to the other slaves.
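A sketch of that relay loop in the master process (to_slave[] and from_slave[] are hypothetical arrays of pipe descriptors, one pair per slave):

/* Sketch: the master relays a message from slave `src` to every other slave. */
#include <unistd.h>

void broadcast(int from_slave[], int to_slave[], int nslaves, int src)
{
    char buf[512];
    ssize_t n = read(from_slave[src], buf, sizeof(buf));
    if (n <= 0)
        return;                           /* EOF or error: nothing to relay */

    for (int i = 0; i < nslaves; i++) {
        if (i != src)                     /* everyone except the original sender */
            write(to_slave[i], buf, n);
    }
}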