I am trying to create a web server in C that uses epoll for multiplexing IO, and I want it to be capable of generating PHP pages.
What I did: for each connection I read the path, created an unnamed pipe, called fork(), redirected the output of the child process to the pipe, and called execvp("php", (char *const *) argv);. In the parent process I added the pipe's read end to epoll with EPOLLIN and waited for it in the main loop. When it was signaled I used io_prep_pread from libaio to read asynchronously, and once the read finished I would send the buffer to the client.
The problem is that the correct result is output only about 5-10% of the time. Is the logic I presented correct, or should I wait for the child process to send SIGCHLD and only then start reading the pipe?
To initialize the application, the parent process forks 3 child processes; the child processes then set up their signal handlers and signal back to the parent that they are ready to start activity. The SIGUSR1 signal is used to achieve this.
The parent process meanwhile waits for these signals from the child processes. As soon as a signal is received, the parent matches its pid against the child pids it has stored and increments a counter. Once the parent knows that go-ahead signals have been received from all child processes, it sends each of them a SIGUSR1 signal telling it to start activity.
I have verified that the parent sends a signal to each child; however, most of the time one of the child processes misses its signal. Over multiple trials I have observed that it is the process the parent signals first that misses it, though sometimes all child processes miss their signals. I have also used the strace tool to check the flow of signals, but still cannot identify why the child processes fail to catch the signals sent by the parent.
Any feedback will be appreciated.
SIGUSR1 and the other traditional POSIX signals are not queued. If the process already has one pending, any further signals of the same type are discarded.
You can avoid this by using "realtime signals". You use them just like the standard POSIX signals; the first one is named SIGRTMIN+0, and the last one is named SIGRTMAX-0. If you send them with sigqueue(), you can even attach an int (or a void pointer) as a payload.
POSIX realtime signals are queued (up to a limit), so you are less likely to lose them.
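As a minimal, self-contained sketch of the mechanism (the payload value 42 and the signal-to-self are purely illustrative; normally the pid would be a child's):

#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t payload = 0;

static void on_rt_signal(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    payload = info->si_value.sival_int;   /* the int attached via sigqueue() */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_rt_signal;
    sa.sa_flags = SA_SIGINFO;             /* handler receives a siginfo_t */
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGRTMIN, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    union sigval value;
    value.sival_int = 42;                 /* payload carried by the signal */
    if (sigqueue(getpid(), SIGRTMIN, value) == -1) {
        perror("sigqueue");
        return 1;
    }

    /* A signal sent to ourselves is delivered before sigqueue() returns. */
    printf("received payload: %d\n", (int)payload);
    return 0;
}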
However, I would not use signals for tracking the child processes. I would use pipes, with each child holding the write end and the parent holding the read end, and all descriptors marked close-on-exec using fcntl(descriptor, F_SETFD, FD_CLOEXEC) on each.
The children update the parent on their status via one-byte messages. If a child exits or executes another program, the parent will see it as an end-of-file condition (read() returning zero). If the parent exits, the read end disappears, and any attempt by a child to write to the pipe will fail with an EPIPE error. (It will also raise the SIGPIPE signal, so you might wish to use sigaction() to ignore SIGPIPE.)
The parent can monitor the child process statuses in parallel using select() or poll(). Whenever a child process sends data, or exits, or executes another program (which closes the write end of the pipe), the corresponding parent descriptor (the read end of the pipe) becomes readable. Personally, I also mark the parent descriptors nonblocking using fcntl(rfd, F_SETFL, O_NONBLOCK), so that if there is a glitch, instead of blocking on a spurious wakeup, the read in the parent simply fails with EWOULDBLOCK in errno.
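Here is a minimal sketch of that scheme, assuming three children that each report a one-byte "ready" status and then exit (the constant CHILDREN and the 'R' byte are illustrative):

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define CHILDREN 3   /* illustrative */

int main(void)
{
    struct pollfd pfds[CHILDREN];
    pid_t pids[CHILDREN];

    for (int i = 0; i < CHILDREN; i++) {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(1); }

        if (pid == 0) {                          /* child: keep the write end */
            close(fds[0]);
            fcntl(fds[1], F_SETFD, FD_CLOEXEC);  /* closed automatically on exec */
            char ready = 'R';
            write(fds[1], &ready, 1);            /* one-byte status message */
            /* (a real program would also close read ends inherited
               from earlier loop iterations) */
            close(fds[1]);                       /* parent now sees EOF */
            _exit(0);
        }

        /* parent: keep the read end, mark it close-on-exec and nonblocking */
        close(fds[1]);
        fcntl(fds[0], F_SETFD, FD_CLOEXEC);
        fcntl(fds[0], F_SETFL, O_NONBLOCK);
        pfds[i] = (struct pollfd){ .fd = fds[0], .events = POLLIN };
        pids[i] = pid;
    }

    int open_pipes = CHILDREN;
    while (open_pipes > 0) {
        if (poll(pfds, CHILDREN, -1) == -1) { perror("poll"); exit(1); }
        for (int i = 0; i < CHILDREN; i++) {
            if (!(pfds[i].revents & (POLLIN | POLLHUP)))
                continue;
            char ch;
            ssize_t n = read(pfds[i].fd, &ch, 1);
            if (n == 1) {
                printf("child %d reports '%c'\n", (int)pids[i], ch);
            } else if (n == 0) {                 /* EOF: child exited or exec'd */
                close(pfds[i].fd);
                pfds[i].fd = -1;                 /* poll() ignores negative fds */
                open_pipes--;
            } else if (errno != EAGAIN && errno != EWOULDBLOCK) {
                perror("read");
            }
        }
    }
    return 0;
}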
If you want bidirectional data flow, it is easiest to use an additional pipe for each child, parent writing and child reading from it.
It is also possible to use unnamed Unix domain datagram sockets (created via socketpair(AF_UNIX, SOCK_DGRAM, 0, fds)). (See also man 2 socket and man 7 unix for details on the parameters.) Use fcntl(fds[0], F_SETFD, FD_CLOEXEC) and fcntl(fds[1], F_SETFD, FD_CLOEXEC) to make the descriptors close-on-exec, just as in the pipe case.
The problem with Unix domain socket pairs (of any type -- SOCK_STREAM, SOCK_DGRAM, SOCK_SEQPACKET) is that they can carry ancillary data. This ancillary data can contain additional file descriptors, which are a limited commodity. If there is a possibility of an untrustworthy or nasty child process, it might kill its parent by sending it a few thousand file descriptors. For security, the parent process should inspect what it receives from the child process, and if it contains ancillary data, immediately close the descriptor to that child (as the child process is obviously hostile!), and if any file descriptors were passed, close those too. You can only skip this if you trust your child processes as much as you trust your original process not to do anything nefarious.
Securing Unix domain sockets is not hard, but checking each received datagram for ancillary data is a couple of dozen additional lines of code. Pipes are simpler.
When using a pipe for process-process communication, what is the purpose of closing one end of the pipe?
For example: How to send a simple string between two programs using pipes?
Notice that one side of the pipe is closed in the child and parent processes. Why is this required?
If you connect two processes - parent and child - using a pipe, you create the pipe before the fork.
After the fork, both processes have access to both ends of the pipe. This is not desirable.
The reading side is supposed to learn that the writer has finished when it notices the EOF condition. This can only happen if all writing sides are closed, so the reader should close its writing FD as soon as possible.
The writer should close its reading FD, partly just to avoid holding open more FDs than necessary and hitting a possible limit on open FDs. More importantly, if the then-sole reader dies, the writer gets notified of this by a SIGPIPE signal or at least an EPIPE error (depending on how signal handling is set up). If there are several readers, the writer cannot detect that "the real one" went away; it goes on writing and eventually gets stuck, as the writing FD blocks in the hope that the "unused" reader will read something.
So here, in detail, is what happens:
The parent process calls pipe() and gets 2 file descriptors: call them rd and wr.
The parent process calls fork(). Now both processes have rd and wr.
Suppose the child process is supposed to be the reader.
Then
the parent should close its reading end (so as not to waste FDs, and to properly detect a dying reader), and
the child must close its writing end (so that the EOF condition can be detected), as the sketch below shows.
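Putting the whole sequence together (a minimal sketch; the message text is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                       /* fds[0] = rd, fds[1] = wr */
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                   /* child is the reader */
        close(fds[1]);                /* must close wr, or it never sees EOF */
        char buf[128];
        ssize_t n;
        while ((n = read(fds[0], buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, n);
        close(fds[0]);                /* n == 0 here: writer closed its end */
        _exit(0);
    }

    close(fds[0]);                    /* parent is the writer: close rd */
    const char *msg = "hello through the pipe\n";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);                    /* reader now gets EOF */
    wait(NULL);
    return 0;
}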
The number of file descriptors that can be open at a given time is limited. If you keep opening pipes and never closing them, you will soon run out of FDs and won't be able to open anything anymore: not pipes, not files, not sockets, ...
Closing the pipe can also be important when the closing itself has a meaning to the application. For example, a common use of pipes is to send the errno value from a child process to the parent when using fork and exec to launch an external program:
The parent creates the pipe, calls fork to create a child process, closes its writing end, and tries to read from the pipe.
The child process attempts to use exec to run a different program:
If exec fails, for example because the program does not exist, the child writes errno to the pipe, and the parent reads it and knows what went wrong, and can tell the user.
If exec is successful, the pipe is closed without anything being written, provided the child marked its write end close-on-exec (a successful exec does not otherwise close inherited descriptors). The read function in the parent then returns 0, indicating the pipe was closed, and the parent knows the program was successfully started.
If the parent did not close its writing end of the pipe before trying to read from it, this would not work, because the read function would never return when exec is successful.
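A minimal sketch of this pattern (the bogus program name is deliberate, to exercise the failure path; substitute a real command to see the success path):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                            /* child */
        close(fds[0]);                         /* child never reads */
        fcntl(fds[1], F_SETFD, FD_CLOEXEC);    /* auto-closed if exec succeeds */
        execlp("no-such-program", "no-such-program", (char *)NULL);
        int err = errno;                       /* exec failed: report why */
        write(fds[1], &err, sizeof err);
        _exit(127);
    }

    close(fds[1]);                             /* parent must close its write end */
    int err;
    ssize_t n = read(fds[0], &err, sizeof err);
    close(fds[0]);
    if (n == 0)
        printf("exec succeeded\n");            /* EOF: write end closed on exec */
    else if (n == (ssize_t)sizeof err)
        printf("exec failed: %s\n", strerror(err));
    wait(NULL);
    return 0;
}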
Closing unused pipe file descriptors is more than a matter of ensuring that a process doesn't exhaust its limited set of file descriptors; it is essential to the correct use of pipes. We now consider why the unused file descriptors for both the read and write ends of the pipe must be closed.
The process reading from the pipe closes its write descriptor for the pipe, so that, when the other process completes its output and closes its write descriptor, the reader sees end-of-file (once it has read any outstanding data in the pipe).
If the reading process doesn't close the write end of the pipe, then after the other process closes its write descriptor, the reader won't see end-of-file, even after it has read all data from the pipe. Instead, a read() would block waiting for data, because the kernel knows that there is still at least one write descriptor open for the pipe. That this descriptor is held open by the reading process itself is irrelevant; in theory, that process could still write to the pipe, even if it is blocked trying to read.
For example, the read() might be interrupted by a signal handler that writes data to the pipe.
The writing process closes its read descriptor for the pipe for a different reason.
When a process tries to write to a pipe for which no process has an open read descriptor, the kernel sends the SIGPIPE signal to the writing process. By default, this signal kills a process. A process can instead arrange to catch or ignore this signal, in which case the write() on the pipe fails with the error EPIPE (broken pipe). Receiving the SIGPIPE signal or getting the EPIPE error is a useful indication of the status of the pipe, and this is why unused read descriptors for the pipe should be closed.
If the writing process doesn't close the read end of the pipe, then even after the other process closes its read end, the writing process can still write to the pipe. Eventually it will fill the pipe, and a further attempt to write will block indefinitely.
One final reason for closing unused file descriptors is that it is only after all file descriptors are closed that the pipe is destroyed and its resources released for reuse by other processes. At that point, any unread data in the pipe is lost.
~ Michael Kerrisk, The Linux Programming Interface
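Both this excerpt and the earlier answer recommend dealing with SIGPIPE explicitly; a minimal sketch (my code, not Kerrisk's) of ignoring it so that write() reports EPIPE instead of killing the process:

#include <signal.h>
#include <string.h>

/* Ignore SIGPIPE process-wide, so that a write() to a pipe with no
   readers fails with EPIPE instead of terminating the process. */
static void ignore_sigpipe(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = SIG_IGN;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGPIPE, &sa, NULL);
}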
I am trying to program the server side of a group chat system in C, while my friend is programming the client side. For each client connection the server receives, it forks a child process to handle that client and goes on accepting further clients.
The server is required to send a list of all online users (connected clients) to each of the currently connected clients, and for this reason I have used pipes. Basically, when a child process is created, it receives some information from the client through a socket and sends this information to the parent, which keeps a list of all the clients, through a pipe. This list has to be updated every time a client makes a change, like starting to chat or disconnecting. For example, if a client disconnects, the child sends a message to the parent through the pipe and the parent performs the necessary operations on the list so that it gets updated. Note that a new pipe is created for each and every new connection.
My problem is that if, for example, I receive 3 connections one after another and the 2nd child disconnects, the parent does not read the information from the pipe, since the parent is now holding a different pipe from the 2nd child's. (Remember that a new pipe has been created because a 3rd connection has been made.) How can I go about solving this problem?
I have also tried creating one common pipe, but if I don't close the pipe ends before reading/writing I get an error, and if I do close them I get a segmentation fault when the second client connects, since the pipe is already closed.
Any help would be greatly appreciated because I have been searching for hours to no avail.
Thanks.
The parent server process knows when a child is created because it creates the child. It can tell when a child dies by setting a SIGCHLD signal handler so that it is notified when a child does die. The Nth child has N-1 pipes to close: those going to the other children (unless some of the children have died). The parent process closes the write end of each pipe it creates; the child process closes the read end of the pipes it inherits (which leaves it with a socket to the client and the write end of the pipe created for it to communicate with the parent).
If you need to know when a child starts communicating with a client, then you need to send a message down the pipe from the child to the parent. It is not so obvious how to tell when the child stops communicating — how long needs to elapse before you declare that the child is idle again?
In the parent, you end up polling in some shape or form (select(), poll(), epoll()) on the listening socket and all the read pipes. When some activity occurs, the parent wakes up and responds appropriately. It's a feasible design as long as it doesn't have to scale to thousands or more clients. It requires some care, notably in closing enough file descriptors.
You say:
My problem is that if, for example, I receive 3 connections one after another and the 2nd child disconnects, the parent does not read the information from the pipe, since the parent is now holding a different pipe from the 2nd child's. (Remember that a new pipe has been created because a 3rd connection has been made.) How can I go about solving this problem?
The parent should have an array of open file descriptors (pipes open for reading to various children), along with an indication of which child (PID) is on the other end of the pipe. The parent will close the pipe when it gets EOF on the pipe, or when it is notified that the child has died (via waitpid() or a relative). The polling mechanism will tell you when a pipe is closed, at least indirectly (you will be told the file descriptor won't block, and then you get the EOF — zero bytes read).
In your scenario, the parent has one listening socket open, plus 3 read file descriptors for the pipes to the 3 children (plus standard input, output and error, and maybe syslog).
Although you could use a single pipe from all the children, it is much trickier to handle. You'd have to embed in each message the identity of the child that wrote it, and ensure that each message is written atomically by the child. The parent has to be able to tell how much to read at any point so as not to be confused; the sketch below shows one way. The advantage of a single pipe is that there is less file descriptor manipulation to do for the polling system call; it also scales indefinitely (no running out of file descriptors).
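A hypothetical sketch of such framing: fixed-size, self-describing records, each written with a single write(). The struct layout and the 128-byte text field are assumptions, not anything from the question.

#include <limits.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* One fixed-size record per message. Writes of at most PIPE_BUF bytes
   are atomic, so records from different children never interleave. */
struct record {
    pid_t sender;               /* which child wrote this */
    int   len;                  /* bytes used in text[] */
    char  text[128];
};
_Static_assert(sizeof(struct record) <= PIPE_BUF,
               "record must fit in PIPE_BUF for atomic writes");

static int send_record(int wfd, const char *msg)
{
    struct record r;
    memset(&r, 0, sizeof r);
    r.sender = getpid();
    r.len = (int)strlen(msg);
    if (r.len > (int)sizeof r.text)
        r.len = (int)sizeof r.text;       /* truncate oversized messages */
    memcpy(r.text, msg, r.len);
    /* One write of the whole record: atomic because it is <= PIPE_BUF. */
    return write(wfd, &r, sizeof r) == (ssize_t)sizeof r ? 0 : -1;
}

/* Parent side: one read of exactly sizeof(struct record) pulls one
   record; since each record entered the pipe whole, a complete record
   is available whenever the pipe is readable. */
static int read_record(int rfd, struct record *r)
{
    return read(rfd, r, sizeof *r) == (ssize_t)sizeof *r ? 0 : -1;
}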
In neither case should you run into problems with core dumps.
I am implementing a web server where I need to make the parent process do the following:
fork() new worker processes (a pool) at the beginning.
Looping forever, listening for incoming requests (via socket communication).
Putting the socket descriptor (returned by accept() function) into a queue.
and the worker process will do the following:
Once created, it loops forever watching the queue for any passed socket descriptors.
When it takes a socket descriptor off the queue, it handles the request and serves the client accordingly.
After looking around and searching the internet, I found that I can send a file descriptor between different processes via UNIX Domain Socket or Pipes. But unfortunately, I can do this synchronously only! (I can send one fd at a time, and I cannot put it in a waiting queue)
So, my question is:
How can I make the parent process puts the socket descriptor into a waiting queue, so that, the request is pending until one of the worker processes finishes a previous request?
File descriptors are just integers. They are used to index into a per-process table of file information, maintained by the kernel. You can't expect a file descriptor to be "portable" to other processes.
It works (somewhat) if you create the files before calling fork(), since the file descriptor table is part of the process and thus clone()d when the child is created. For file descriptors allocated after the processes have split, such as when using accept() to get a new socket, you can't do this.
UPDATE: It seems there is a way, using sendmsg() with AF_UNIX sockets; see here for details, as mentioned in this question. I did not know that; it sounds a bit "magical", but apparently it's a well-established mechanism, so why not go ahead and implement it:
put the fd on an internal queue (lock-free if you want, but probably not necessary)
have a thread in the parent process which just reads an fd from the internal queue and sends it over the Unix domain socket (SCM_RIGHTS descriptor passing works over AF_UNIX sockets, not plain pipes)
all child processes inherit the other end of the socket pair, and compete to read the next fd when they finish their current job; a sketch of the send and receive halves follows this list
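A hedged sketch of the two halves, assuming sock is one end of a socketpair(AF_UNIX, SOCK_STREAM, 0, ...) pair; the helper names send_fd/recv_fd are mine, not from any library:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send the descriptor fd over the Unix domain socket sock. The one
   data byte is required: ancillary data cannot ride on an empty payload. */
static int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;
    memset(&ctrl, 0, sizeof ctrl);

    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;          /* we are passing descriptors */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor from sock; returns it, or -1 on error. */
static int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } ctrl;

    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof ctrl.buf;

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;                         /* no descriptor was attached */

    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;                             /* a new descriptor in this process */
}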
I am currently learning sockets programming using C in a Linux environment. As a project I am attempting to write a basic chat server and client.
The intention is to have the server fork a process for each client that connects.
The problem that I am having is reading data in one child and writing it to all of the connected clients.
I have tried to accomplish this by looping on a call to select in the child that waits for data to arrive on the socket or on the read end of the pipe. If data arrives on the socket, the idea is that the child writes it to the write end of the pipe, which causes select to return the read end of the pipe as ready for reading.
As this pipe is shared between all the children, each child should then read the data on the pipe. This does not work: it appears the data in the pipe cannot be read by every child process, and the children that "miss" the data block in the call to read.
Below is the code in the child process that does this:
for( ; ; )
{
    rset = mset;
    if(select(maxfd+1, &rset, NULL, NULL, NULL) > 0)
    {
        if(FD_ISSET(clientfd, &rset))
        {
            /* capture the byte count: buf is not NUL-terminated, so
               calling strlen(buf) on it was undefined behaviour */
            ssize_t n = read(clientfd, buf, sizeof(buf));
            if(n > 0)
                write(pipes[1], buf, n);
        }
        if(FD_ISSET(pipes[0], &rset))
        {
            /* forward only the bytes actually read, not sizeof(buf) */
            ssize_t n = read(pipes[0], buf, sizeof(buf));
            if(n > 0)
                write(clientfd, buf, n);
        }
    }
}
I am presuming the method that I am currently using simply will not work. Is it going to be possible for messages received from a client to be written to all of the other connected clients via IPC?
Thanks
To get around the problem of one child reading more data from the pipe than it should (and in turn making another child get "stuck" trying to read from an empty pipe), you should probably look into using either POSIX message queues or a separate pipe between the parent and each individual child process, rather than a single global pipe shared by all of them. As it stands, when the server writes to the pipe to communicate with its children, it cannot control which child will read from the pipe at any given time, since the scheduling of processes by the OS is non-deterministic. In other words, without some type of synchronizing mechanism or read/write barriers, nothing in your code stops one child from "skipping" a read and a second child from doing a double read, leaving the child that should have received the broadcast data starved, and therefore blocked.
A simple way around this is again to have a private pipe between the parent and each individual child. The child processes read from their clients, send that data back to the parent process, and the parent then, using the list of pipe descriptors it has accumulated for all the children, writes the broadcast message back to each individual child, which in turn sends it on to its client. No child ever gets "starved" of data, since there is no possibility of a double read by another child process: there is a single reader and a single writer on each pipe, and the communication is deterministic.
If you don't want to handle multiple pipes in the server's parent process, you could use a global message queue built on POSIX message queues (found in mqueue.h). With that approach, if a child grabs a message that it's not supposed to have (you would pass around a struct containing some type of ID value), it places the message back in the queue and attempts to read another. That's not quite as efficient speed-wise as the direct pipe approach, but it avoids the interleaving complications of a global pipe or FIFO; a sketch follows.
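A hypothetical sketch of that message-queue approach; the queue name /chat_broadcast, the struct layout, and the pid-based ID are all assumptions (link with -lrt on Linux):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Tag each payload with the intended recipient. */
struct chat_msg {
    pid_t target;                /* which child should consume this */
    char  text[128];
};

int main(void)
{
    struct mq_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.mq_maxmsg = 10;
    attr.mq_msgsize = sizeof(struct chat_msg);

    mqd_t q = mq_open("/chat_broadcast", O_RDWR | O_CREAT, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    struct chat_msg out = { .target = getpid() };   /* addressed to ourselves */
    strcpy(out.text, "hello");
    if (mq_send(q, (const char *)&out, sizeof out, 0) == -1)
        perror("mq_send");

    struct chat_msg in;
    if (mq_receive(q, (char *)&in, sizeof in, NULL) == -1) {
        perror("mq_receive");
    } else if (in.target != getpid()) {
        /* not ours: put it back and try again later */
        mq_send(q, (const char *)&in, sizeof in, 0);
    } else {
        printf("got: %s\n", in.text);
    }

    mq_close(q);
    mq_unlink("/chat_broadcast");
    return 0;
}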
Each byte of data written to a pipe will be read exactly once. It isn't duplicated to every process with the read end of the pipe open.
If you want the data duplicated to multiple destination processes, you have to explicitly duplicate the data. For example, you could have one "master" process that has a pipe to and from every "slave" process. When a slave wants to broadcast a message to the other slaves, it sends it to the master process, which loops around and writes it once to each pipe going to the other slaves.
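A hedged sketch of that master loop, assuming the master keeps per-slave pipe arrays from[] (slave-to-master read ends) and to[] (master-to-slave write ends); SLAVES and the buffer size are illustrative:

#include <poll.h>
#include <sys/types.h>
#include <unistd.h>

#define SLAVES 4   /* illustrative */

/* Forward one message from slave src to every other slave:
   the duplication is explicit, one write() per destination. */
static void broadcast(int to[SLAVES], int src, const char *buf, ssize_t n)
{
    for (int i = 0; i < SLAVES; i++)
        if (i != src)
            write(to[i], buf, n);
}

/* Master loop: watch every slave-to-master read end, re-send on arrival. */
static void master_loop(int from[SLAVES], int to[SLAVES])
{
    struct pollfd pfds[SLAVES];
    for (int i = 0; i < SLAVES; i++)
        pfds[i] = (struct pollfd){ .fd = from[i], .events = POLLIN };

    char buf[512];
    for (;;) {
        if (poll(pfds, SLAVES, -1) == -1)
            break;
        for (int i = 0; i < SLAVES; i++) {
            if (!(pfds[i].revents & (POLLIN | POLLHUP)))
                continue;
            ssize_t n = read(from[i], buf, sizeof buf);
            if (n <= 0) {               /* slave exited: stop watching it */
                close(from[i]);
                pfds[i].fd = -1;        /* poll() ignores negative fds */
                continue;
            }
            broadcast(to, i, buf, n);   /* duplicate to every other slave */
        }
    }
}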