Can't read from multiple FIFOs - C

I have a really annoying problem while trying to read from multiple FIFOs. I have one process that waits for a structure from a FIFO, and a few processes that send it those structures when signaled. After the first read I can't read anything more, no matter which process sends. It looks like the program freezes.
The sending process has this in main:
    myfifo = "/tmp/myfifo{0}"; // {0} is a number unique to each sending process
    mkfifo(myfifo, 0666);
    fd = open(myfifo, O_WRONLY);
    write(fd, &demon1, sizeof(demon1));
    close(fd);
    while (1)
    {
    }
and this in signal_handler
    void signal_handler(int signum)
    {
        if (signum == SIGUSR1)
        {
            // some declarations here
            mkfifo(myfifo, 0666);
            fd = open(myfifo, O_WRONLY | O_NONBLOCK);
            write(fd, &demon1, sizeof(demon1));
        }
    }
While the reading process has:
    myfifo[i] = "/tmp/myfifo{0}"; // {0} is i, the number of the sending process
    while (1)
    {
        for (i = 0; i < n; i++)
        {
            fd = open(myfifo[i], O_RDONLY | O_NONBLOCK);
            r = read(fd, &demon1, sizeof(demon1));
            if (r > 1)
            {
                // printf struct elements
            }
        }
    }

You open the pipe inside the loop. That way, you quickly run out of file descriptors (which you would see if you checked the result of open() for errors).
I suggest opening all FIFOs outside the loop, storing the file descriptors in an array, and then just reading each of them, but ... the read will block. See select(2) to find out which FIFO has data.
Another solution would be a single FIFO, with each writing process sending its ID in the message. That way, the main process only has to listen on a single FIFO; if it wants to know who sent a message, it can look at the ID inside it. The problem here: you need some kind of locking, or several processes will write to the FIFO at the same time and their data can get mixed up (this depends on the FIFO buffers).
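A minimal sketch of the select(2) approach, assuming the FIFOs were each opened once, outside the loop, with their descriptors kept in an array (the helper name wait_readable is illustrative, not from the question):

```c
#include <sys/select.h>
#include <unistd.h>

/* Block until at least one of the n descriptors in fds[] is readable,
 * then return the index of the first ready one; -1 on error. */
int wait_readable(const int *fds, int n)
{
    fd_set set;
    int maxfd = -1;
    FD_ZERO(&set);
    for (int i = 0; i < n; i++) {
        FD_SET(fds[i], &set);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    if (select(maxfd + 1, &set, NULL, NULL, NULL) < 0)
        return -1;
    for (int i = 0; i < n; i++)
        if (FD_ISSET(fds[i], &set))
            return i;
    return -1;
}
```

The reader's loop then becomes: wait for a ready descriptor, read one struct from it, repeat; no descriptor is opened or closed per iteration.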

You do not close the file descriptors after opening and reading:
    while (1)
    {
        for (i = 0; i < n; i++)
        {
            fd = open(myfifo[i], O_RDONLY | O_NONBLOCK);
            r = read(fd, &demon1, sizeof(demon1));
            if (r > 1)
            {
                // printf struct elements
            }
            // here a close(fd) is missing
        }
    }
Since the open is non-blocking, the maximum number of fds per process is reached very soon and subsequent opens will fail.
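As a sketch, the open/read/close cycle can be wrapped in a helper that never leaks the descriptor; poll_fifo is an illustrative name, and error handling is reduced to return values:

```c
#include <fcntl.h>
#include <unistd.h>

/* Open the FIFO at `path` non-blocking, try one read of up to `n` bytes
 * into `buf`, and always close the descriptor before returning.
 * Returns the number of bytes read, 0 if nothing was available, or -1
 * if the FIFO could not be opened. */
ssize_t poll_fifo(const char *path, void *buf, size_t n)
{
    int fd = open(path, O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return -1;           /* checking open() reveals the fd leak early */
    ssize_t r = read(fd, buf, n);
    if (r < 0)
        r = 0;               /* EAGAIN: no data yet */
    close(fd);               /* the close that was missing in the question */
    return r;
}
```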

Related

Why does read() block and wait forever in parent process despite the writing end of pipe being closed?

I'm writing a program with two processes that communicate through a pipe. The child process reads some parameters from the parent, executes a shell script with them and returns the results to the parent process line by line.
My code worked just fine until I wrote the while(read()) part at the end of the parent process. The child would execute the shell script, read its echoes from popen() and print them to standard output.
Now I tried to write the results to the pipe as well and read them in the while() loop at the parent's end, but it blocks, and the child process won't print the result to standard output either. Apparently it doesn't even reach the point after reading the data sent by the parent through the pipe.
If I comment out the while() at the parent process, the child will print the results and return, and the program will end smoothly.
Why does the while(read()) block even if I closed the writing end of the pipe in both parent and child processes?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>

int read_from_file(char **directory, int *octal) {
    FILE *file = fopen("input", "r");
    if (file == NULL) {
        perror("error opening file");
        exit(1);
    }
    fscanf(file, "%s %d", *directory, octal);
}

int main(int argc, char *argv[]) {
    char *directory = malloc(256);
    int *octal = malloc(sizeof *octal);
    pid_t pid;
    int pfd[2];
    char res[256];

    if (pipe(pfd) < 0) {
        perror("Error opening pipe");
        return 1;
    }

    if ((pid = fork()) < 0)
        perror("Error forking");

    if (pid == 0) {
        printf("client here\n");
        if (read(pfd[0], directory, 256) < 0)
            perror("error reading from pipe");
        if (read(pfd[0], octal, sizeof(int)) < 0)
            perror("error reading from pipe");
        // This won't get printed:
        printf("client just read from pipe\n");
        // close(pfd[0]);
        char command[256] = "./asd.sh ";
        strcat(command, directory);
        char octal_c[5];
        sprintf(octal_c, " %d", *octal);
        strcat(command, octal_c);
        FILE *f = popen(command, "r");
        while (fgets(res, 256, f) != NULL) {
            printf("%s", res);
            if (write(pfd[1], res, 256) < 0)
                perror("Error writing res to pipe");
        }
        fclose(f);
        close(pfd[1]);
        close(pfd[0]);
        fflush(stdout);
        return 1;
    }

    read_from_file(&directory, octal);
    if (write(pfd[1], directory, 256) < 0)
        perror("Error writing dir to pipe");
    if (write(pfd[1], octal, sizeof(int)) < 0)
        perror("error writing octal to pipe");

    int r;
    close(pfd[1]);
    while (r = read(pfd[0], res, 256)) {
        if (r > 0) {
            printf("%s", res);
        }
    }
    close(pfd[0]);
    while (wait(NULL) != -1 || errno != ECHILD);
}
Since the child demonstrably reaches ...
printf("client here\n");
... but seems not to reach ...
printf("client just read from pipe\n");
... we can suppose that it blocks indefinitely on one of the two read() calls between. With the right timing, that explains why the parent blocks on its own read() from the pipe. But how and why does that blocking occur?
There are at least three significant semantic errors in your program:
1. Pipes do not work well for bidirectional communication. It is possible, for example, for a process to read back the bytes that it wrote itself and intended for a different process. If you want bidirectional communication then use two pipes. In your case, I think that would have avoided the apparent deadlock, though it would not, by itself, have made the program work correctly.
2. write and read do not necessarily transfer the full number of bytes requested, and short reads and writes are not considered erroneous. On success, these functions return the number of bytes transferred, and if you want to be sure to transfer a specific number of bytes then you need to run the read or write in a loop, using the return values to track progress through the buffer being transferred. Or use fread() and fwrite() instead.
3. Pipes convey undifferentiated streams of bytes. That is, they are not message oriented. It is not safe to assume that reads from a pipe will be paired with writes to the pipe, so that each read receives exactly the bytes written by one write. Yet your code depends on that happening.
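The short-transfer point can be handled with a small loop; a minimal sketch (the helper name write_all is illustrative, and a matching read-side loop works the same way):

```c
#include <unistd.h>

/* Write exactly `len` bytes to fd, looping over short writes.
 * Returns 0 on success, -1 on error. */
int write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n < 0)
            return -1;       /* real error; caller can inspect errno */
        p += n;              /* advance past the bytes already written */
        len -= (size_t)n;
    }
    return 0;
}
```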
Here's a plausible failure scenario that could explain your observations:
The parent:
1. fork()s the child.
2. after some time performs two writes to the pipe, one from variable directory and the other from variable octal. At least the first of those is a short write.
3. closes its copy of the write end of the pipe.
4. blocks attempting to read from the pipe.
The child:
1. reads all the bytes written via its first read (into its copy of directory).
2. blocks on its second read(). It can do this despite the parent closing its copy of the write end, because the write end of the pipe is still open in the child.
You then have a deadlock. Both ends of the pipe are open in at least one process, the pipe is empty, and both processes are blocked trying to read bytes that can never arrive.
There are other possibilities that arrive at substantially the same place, too, some of them not relying on a short write.
The parent process was trying to read from the pipe before the child could have read from it and written the results back. Using two different pipes for the two-way communication solved the problem.
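A minimal sketch of the two-pipe arrangement, with hypothetical names (roundtrip, up, down); the child here simply echoes the message back, and each side closes the pipe ends it does not use:

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* up[] carries parent->child, down[] carries child->parent.
 * Returns the number of bytes echoed back, or -1 on error. */
ssize_t roundtrip(const char *msg, char *out, size_t outn)
{
    int up[2], down[2];
    if (pipe(up) < 0 || pipe(down) < 0)
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {                    /* child: read request, echo reply */
        char buf[256];
        close(up[1]);
        close(down[0]);
        ssize_t n = read(up[0], buf, sizeof buf);
        if (n > 0)
            write(down[1], buf, (size_t)n);
        close(up[0]);
        close(down[1]);
        _exit(0);
    }
    close(up[0]);                      /* parent: write, then read reply */
    close(down[1]);
    write(up[1], msg, strlen(msg));
    close(up[1]);                      /* child sees EOF on its read end */
    ssize_t n = read(down[0], out, outn);
    close(down[0]);
    waitpid(pid, NULL, 0);
    return n;
}
```

Because each pipe flows in one direction only, neither process can read back its own bytes, and closing the unused ends means EOF is delivered as soon as the peer is done writing.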

Named pipe - run child process

I'm trying to use a named pipe in C to run a child process in the background from a path, in non-blocking mode, and read the child's output.
This is my code:
    int fifo_in = open("fifo_1", O_RDONLY | O_NONBLOCK);
    int fifo_out = open("fifo_2", O_WRONLY | O_NONBLOCK);
    dup2(fifo_in, 0);
    dup2(fifo_out, 1);

    char app[] = "/usr/local/bin/probemB";
    char * const argsv[] = { app, "1", NULL };
    if (execv(app, argsv) < 0) {
        printf("execv error\n");
        exit(4);
    }
I will later use the read function to read the child process's output.
But the problem is that execv is blocking while it reads the output from the process, instead of allowing me to read.
Can someone help me correct the above problem, please?
You're wrong that execv is blocking.
If execv succeeds, it never returns: it replaces your program. You need to fork a new process for execv:
    if (fork() == 0)
    {
        // In the child process, first set up the file descriptors
        dup2(fifo_out, STDOUT_FILENO); // Writes to standard output go to the pipe
        close(fifo_out); // These are not needed anymore
        close(fifo_in);
        // Run the program with execv...
    }
    else
    {
        // Unless there was an error, this is the parent process
        close(fifo_out);
        // TODO: Read from fifo_in, which will contain the standard output of the child process
    }
Another thing: you seem to have two different and unconnected named pipes. You should open just one pipe, for reading in the parent process and for writing in the child process:
    int fifo_in = open("fifo_1", O_RDONLY | O_NONBLOCK);
    int fifo_out = open("fifo_1", O_WRONLY | O_NONBLOCK);
But if you only want to communicate internally, you don't need named pipes. Instead use anonymous pipes as created by the pipe function.
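A sketch of the pipe() alternative: the pipe is created before the fork, the child's stdout is redirected into it with dup2, and the parent reads the output. The helper name capture is illustrative:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Run a command with its stdout redirected into an anonymous pipe and
 * read the first chunk of output in the parent.
 * Returns the number of bytes captured, or -1 on error. */
ssize_t capture(char *const argv[], char *out, size_t outn)
{
    int pfd[2];
    if (pipe(pfd) < 0)
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        close(pfd[0]);                 /* child keeps only the write end */
        dup2(pfd[1], STDOUT_FILENO);   /* stdout now feeds the pipe */
        close(pfd[1]);
        execvp(argv[0], argv);
        _exit(127);                    /* only reached if exec fails */
    }
    close(pfd[1]);                     /* parent keeps only the read end */
    ssize_t n = read(pfd[0], out, outn);
    close(pfd[0]);
    waitpid(pid, NULL, 0);
    return n;
}
```

Closing the parent's copy of the write end matters: otherwise the parent's read would never see EOF after the child exits.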

Reading from FIFO after unlink()

I have created a FIFO, wrote to it and unlinked it.
To my surprise I was able to read data from the FIFO after unlinking it. Why is that?
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define MAX_BUF 256

int main()
{
    int fd;
    char * myfifo = "/tmp/myfifo";

    /* create the FIFO (named pipe) */
    mkfifo(myfifo, 0666);

    int pid = fork();
    if (pid != 0)
    {
        /* write "Hi" to the FIFO */
        fd = open(myfifo, O_WRONLY);
        write(fd, "Hi", sizeof("Hi"));
        close(fd);

        /* remove the FIFO */
        unlink(myfifo);
    }
    else
    {
        wait(NULL);

        char buf[MAX_BUF];

        /* open, read, and display the message from the FIFO */
        fd = open(myfifo, O_RDONLY);
        read(fd, buf, MAX_BUF);
        printf("Received: %s\n", buf);
        close(fd);
        return 0;
    }
    return 0;
}
Unless you pass the O_NONBLOCK flag to open(2), opening a FIFO blocks until the other end is opened. From man 7 fifo:
The FIFO must be opened on both ends (reading and writing) before data
can be passed. Normally, opening the FIFO blocks until the other end
is opened also.
A process can open a FIFO in nonblocking mode. In this case, opening
for read only will succeed even if no-one has opened on the write side
yet, opening for write only will fail with ENXIO (no such device or
address) unless the other end has already been opened.
Which is to say, your parent and child processes are implicitly synchronized when opening the FIFO. By the time the parent process calls unlink(2), the child opened the FIFO long ago; the child will therefore always find the FIFO object and open it before the parent unlinks it.
A note about unlink(2): it simply deletes the filename from the filesystem; as long as at least one process still has the file (the FIFO, in this case) open, the underlying object persists. Only after that process terminates or closes its file descriptor will the operating system free the associated resources. FWIW, this is irrelevant in the scope of this question, but it seems worth noting.
A couple of other (unrelated) remarks:
Don't call wait(2) in the child. It will return an error (which you promptly ignore), because the child didn't fork any processes of its own.
mkfifo(3), fork(2), open(2), read(2), write(2), close(2) and unlink(2) can all fail and return -1. You should gracefully handle possible errors instead of ignoring them. A common strategy for these toy programs is to print a descriptive error message with perror(3) and terminate.
If you just want parent to child communication, use a pipe: it's easier to setup, you don't need to unlink it, and it is not exposed in the filesystem (but you need to create it with pipe(2) before forking, so that the child can access it).
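A minimal sketch of that pipe(2) approach for the same "Hi" scenario; the helper name send_to_child is made up, and the child reports the byte count through its exit status purely so the result is observable:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Create the pipe before forking so the child inherits it: no
 * filesystem name, nothing to unlink. Returns the number of bytes the
 * child received, or -1 on error. */
int send_to_child(const char *msg, size_t len)
{
    int pfd[2];
    if (pipe(pfd) < 0)
        return -1;
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        char buf[64];
        close(pfd[1]);                 /* child only reads */
        ssize_t n = read(pfd[0], buf, sizeof buf);
        close(pfd[0]);
        _exit(n < 0 ? 0 : (int)n);     /* report bytes received */
    }
    close(pfd[0]);                     /* parent only writes */
    write(pfd[1], msg, len);
    close(pfd[1]);
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```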

Redirecting stdout to file stops in the middle. Linux

My program launches two binaries using the system command in the following way:
int status = system("binaryOne &");
int status = system("binaryTwo &");
Since all three binaries write to the same stdout, the output is all mixed together and unreadable. So I changed the launch commands to redirect the stdout of the two binaries to different files, on which I run tail -f:
int status = system("binaryOne > oneOut.txt &");
int status = system("binaryTwo > twoOut.txt &");
The problem is that the writing to the files stops at some point. Sometimes it freezes, gets buffered somewhere, and then part of it is flushed out again. Most of the time it just stops.
I verified that binaries continue to run and write to stdout.
Here is how you could do it with fork + exec:
    pid_t child_pid = fork();
    if (!child_pid) {
        // child goes here
        char *args[] = { "binaryOne", NULL };
        int fd = open("oneOut.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            exit(-1);
        }
        // map fd onto stdout
        dup2(fd, 1);
        // keep in mind that the child will retain the parent's stdin and stderr
        // this will fix those too:
        /*
        fd = open("/dev/null", O_RDWR);
        if (fd < 0) {
            perror("open");
            exit(-1);
        }
        // disable stdin
        dup2(fd, 0);
        // disable stderr
        dup2(fd, 2);
        */
        execvp(*args, args);
        // will only return if exec fails for whatever reason,
        // for instance file not found
        perror("exec");
        exit(-1);
    }
    // parent process continues here
    if (child_pid == -1) {
        perror("fork");
    }
Edit Important: forgot to set write flag for open.
You can avoid most of this by using popen(). Each popen call sets up a separate pipe for your main program to read, so the output won't be intertwined as it is when everything is directed to stdout. It is also obviously a lot less clumsy than writing to and tailing files. Whether this is better than fork/exec for your purposes, only you can say.

forkpty - socket

I'm trying to develop a simple "telnet/server" daemon which has to run a program on each new socket connection.
That part is working fine.
But I have to associate my new process with a pty, because the process has some terminal capabilities (like a readline).
The code I've developed is (where socketfd is the new socket file descriptor for the new incoming connection):
    int masterfd, pid;
    const char *prgName = "...";
    char *arguments[10] = ....;

    if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0)
        perror("FORK");
    else if (pid)
        return pid;
    else
    {
        close(STDOUT_FILENO);
        dup2(socketfd, STDOUT_FILENO);
        close(STDIN_FILENO);
        dup2(socketfd, STDIN_FILENO);
        close(STDERR_FILENO);
        dup2(socketfd, STDERR_FILENO);
        if (execvp(prgName, arguments) < 0)
        {
            perror("execvp");
            exit(2);
        }
    }
With that code, the stdin/stdout/stderr file descriptors of my prgName process are associated with the socket (as seen with ls -la /proc/PID/fd), and so the terminal capabilities of this process don't work.
A test with a connection via ssh/sshd to the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of the prgName process are associated with a pty (and so its terminal capabilities work fine).
What am I doing wrong?
How do I associate my socketfd with the pty (created by forkpty)?
Thanks,
Alex
You must write some code to transfer data from the socket to the master pty and vice versa; that's usually the parent process's job. Note that the data transfer must be bidirectional. There are many options: for example, a select()-driven loop that tracks both masterfd and socketfd
(just a hint, very bad code, not for production!!! Error and EOF checks are missing!!!)
    for (;;) {
        FD_ZERO(&set);
        FD_SET(masterfd, &set);
        FD_SET(socketfd, &set);
        select((masterfd > socketfd ? masterfd : socketfd) + 1, &set, NULL, NULL, NULL);
        if (FD_ISSET(masterfd, &set)) {
            read(masterfd, &c, 1);
            write(socketfd, &c, 1);
        }
        if (FD_ISSET(socketfd, &set)) {
            read(socketfd, &c, 1);
            write(masterfd, &c, 1);
        }
    }
or a pair of threads, one for socketfd -> masterfd and one for masterfd -> socketfd transfers.
(just as hint, very bad code, not for production!!!)
    /* thread 1 */
    while (read(masterfd, &c, 1) > 0)
        write(socketfd, &c, 1);

    /* thread 2 */
    while (read(socketfd, &c, 1) > 0)
        write(masterfd, &c, 1);
Anyway, you must add some code on the parent side of the branch.
Regards
---EDIT---
Of course, you must not redirect fd 0,1 and 2 to socketfd in the child process.
