Write some random numbers to the pipe? - c

My question is: can I write an integer to a pipe? And how?
I need to make 3 processes: the first one generates 2 numbers, the second sums them, and the third prints the result (using a pipe).
Thanks all

The complicated part of what you're trying to do is creating the pipeline. You could just have the shell do it for you...
$ ./makenumbers | ./addnumbers | ./printresult
but that's boring, eh? And you have to have three executables. So let's have a look at what those vertical bars are doing at the C level.
You create a pipe with the pipe system call. You reassign standard input/output with dup2. You create new processes with fork, and you wait for them to terminate with waitpid. A program to set the whole thing up would look something like this:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    pid_t children[2];
    int pipe1[2], pipe2[2];
    int status;

    pipe(pipe1);
    pipe(pipe2);

    children[0] = fork();
    if (children[0] == 0)
    {
        /* in child 0 */
        dup2(pipe1[1], 1);
        generate_two_numbers_and_write_them_to_fd_1();
        _exit(0);
    }

    children[1] = fork();
    if (children[1] == 0)
    {
        /* in child 1 */
        dup2(pipe1[0], 0);
        dup2(pipe2[1], 1);
        read_two_numbers_from_fd_0_add_them_and_write_result_to_fd_1();
        _exit(0);
    }

    /* parent process still */
    dup2(pipe2[0], 0);
    read_a_number_from_fd_0_and_print_it();
    waitpid(children[0], &status, 0);
    waitpid(children[1], &status, 0);
    return 0;
}
Please note:
I left out all error handling, because that would make the program about three times longer. Your instructor wants you to include error handling.
Similarly, I left out checking the exit status of the children; your instructor also wants you to check that.
You do not need the dup2 calls; you could just pass the pipe fd numbers to the subroutine calls. But if you were exec-ing a new binary in the child, which is more typical, you would need them. You would then also have to worry about making sure all file descriptors numbered 3 and higher were closed.
There is a reason I am using _exit instead of exit. Try to figure out what it is.
You need to use read and write instead of stdio.h calls in the subroutines called from child processes. The reason is related to the reason I am using _exit.
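To make that last hint concrete, here is a minimal sketch of what the first child's helper could look like, assuming (purely as an illustrative choice) that the numbers travel through the pipes as raw binary ints written with write() rather than stdio:

/* Hypothetical helper for child 0: write two ints to fd 1 with write(),
** not stdio, so no buffered output is left behind when the child _exit()s. */
#include <stdlib.h>
#include <unistd.h>

void generate_two_numbers_and_write_them_to_fd_1(void)
{
    int numbers[2];
    numbers[0] = rand() % 100;            /* two arbitrary "random" numbers */
    numbers[1] = rand() % 100;
    write(1, numbers, sizeof(numbers));   /* raw bytes down the pipe */
}

The adder would then read(0, numbers, sizeof(numbers)) to recover both values. If the child used printf() instead, its output could sit in a stdio buffer that _exit() never flushes, which is exactly the trap the last two notes warn about.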

Since a pipe is just a file, you can use the fprintf() function to convert a random number to text and write that to the pipe. For instance:
FILE *pipe = popen("path/to/your/program", "w");
if (pipe != NULL) {
    fprintf(pipe, "%d\n", rand());
    pclose(pipe);
}
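On the other end of that pipe, the receiving program can read the number back as text from its standard input. A minimal sketch, assuming one number per line:

#include <stdio.h>

int main(void)
{
    int n;
    /* standard input is the read end of the pipe set up by popen() */
    if (scanf("%d", &n) == 1)
        printf("received %d\n", n);
    return 0;
}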

Related

How do I use 2 child processes, one for executing a command and the other for reading its output and passing it to the next?

So my program needs to pipe multiple processes together and read the number of bytes in each process's output.
The way I implemented it, in a for loop, we have two children:
Child 1: dups output and executes the process
Child 2: reads the output and writes it for the next input
Currently, child 1 executes the process and child 2 reads its output, but it doesn't seem to write it to the right place, because in the second loop iteration the output is printed to the screen and the program blocks.
for (int i = 0; i < processes; i++) {
    int result = socketpair(PF_LOCAL, SOCK_STREAM, 0, apipe[i]);
    if (result == -1) {
        error_and_exit();
    }

    int pid;
    int pid2;

    pid = fork_or_die();
    // child one points to STDOUT
    if (pid == FORK_CHILD) {
        if (dup2(apipe[i][1], STDOUT_FILENO) == -1)
            error_and_exit();
        if (close(apipe[i][1]) == -1)
            error_and_exit();
        if (close(apipe[i][0]) == -1)
            error_and_exit();
        if (execlp("/bin/sh", "sh", "-c", tabCommande[i], (char *)NULL) == -1)
            error_and_exit();
    }

    pid2 = fork_or_die();
    // CHILD 2 reads the output and writes it for the next command to use
    if (pid2 == FORK_CHILD) {
        FILE *fp;
        fp = fopen("count", "a");
        close(apipe[i][1]);
        int count = 0;
        char str[4096];
        count = read(apipe[i][0], str, sizeof(str)+1);
        close(apipe[i][0]);
        write(STDIN_FILENO, str, count);
        fprintf(fp, "%d : %d \n ", i, count);
        fclose(fp);
    }
}
Your second child does write(STDIN_FILENO, …); that's not a conventional way of using standard input.
If standard input is a terminal, then the device is usually opened for reading and writing and the three standard I/O channels are created using dup() or dup2(). Thus you can read from the outputs and write to the input — but only if the streams are connected to a login terminal (window). If the input is a pipe, you can't successfully write to it, nor can you read from the output if it is a pipe. (Similarly if the input is redirected from a file or the output is redirected to a file.) This terminal setup is done by the process that creates the terminal window. It is background information explaining why writing to standard input appears on the terminal.
Anyway, that's what you're doing. You are writing to the terminal via standard input. Your minimum necessary change is to replace STDIN_FILENO with STDOUT_FILENO.
You are also going to need a loop around the reading and writing code. In general, processes generate lots of output in small chunks. The close on the input pipe will be outside the loop, of course, not between the read() and write() operations. You should check that the write() operations write all the data to the output.
You should also have the second child exit after it closes the output file. In this code, I'd probably open the file after the counting loop (or what will become the counting loop), but that's mostly a stylistic change, keeping the scope of variables to a minimum.
You will probably eventually need to handle signals like SIGPIPE (or ignore it so that the output functions return errors when the pipe is closed early). However, that's a refinement for when you have the basic code working.
Bug: you have:
count = read(apipe[i][0], str, sizeof(str)+1);
This is a request to the o/s to give you a buffer overflow — you ask it to write more data into str than str can hold. Remove the +1!
Minor note: you don’t need to check the return value from execlp() or any of that family of functions. If the call succeeds, it doesn’t return; if it returns, it failed. Your code is correct to exit after the call to execlp(), though; that's good.
You said:
I replaced STDIN_FILENO with STDOUT_FILENO in the second child, but it doesn't seem to solve the issue. The output is still shown in the terminal and there's a pipe blockage afterwards.
That observation may well be correct, but it isn't something that can be resolved by studying this code alone. The change to write to an output stream is necessary — and in the absence of any alternative information, writing to STDOUT_FILENO is better than writing to STDIN_FILENO.
That is a necessary change, but it is probably not a sufficient change. There are other changes needed too.
Did you set up the inputs and outputs for the pair of children this code creates correctly? It is very hard to know from the code shown — but given that it is not working as you intended, it's a reasonable inference that you did not get all the plumbing correct. You need to draw a diagram of how the processes are intended to operate in the larger context. At a minimum, you need to know where the standard input for each process comes from, and where its standard output goes. Sometimes, you need to worry about standard error too — most likely though, in this case, you can quietly ignore it.
This is what I think your code could look like — though the comments in it describe numerous possible variants.
#include <sys/socket.h>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

/* The code needs these declarations and definition to compile */
extern _Noreturn void error_and_exit(void);
extern pid_t fork_or_die(void);
extern void unknown_function(void);
static ssize_t copy_bytes(int fd1, int fd2);

#define FORK_CHILD 0

int processes;
int apipe[20][2];
char *tabCommande[21];

void unknown_function(void)
{
    for (int i = 0; i < processes; i++)
    {
        int result = socketpair(PF_LOCAL, SOCK_STREAM, 0, apipe[i]);
        if (result == -1)
            error_and_exit();

        int pid1 = fork_or_die();
        // child one points to STDOUT
        if (pid1 == FORK_CHILD)
        {
            if (dup2(apipe[i][1], STDOUT_FILENO) == -1)
                error_and_exit();
            if (close(apipe[i][1]) == -1)
                error_and_exit();
            if (close(apipe[i][0]) == -1)
                error_and_exit();
            execlp("/bin/sh", "sh", "-c", tabCommande[i], (char *)NULL);
            error_and_exit();
        }

        // CHILD 2 reads the output and writes it for the next command to use
        int pid2 = fork_or_die();
        if (pid2 == FORK_CHILD)
        {
            close(apipe[i][1]);
            ssize_t count = copy_bytes(apipe[i][0], STDOUT_FILENO);
            FILE *fp = fopen("count", "a");
            if (fp == NULL)
                error_and_exit();
            /*
            ** Using %zd for ssize_t is a reasonable guess at a format to
            ** print ssize_t - but it is a guess.  Alternatively, change the
            ** type of count to long long and use %lld.  There isn't a
            ** documented, official (fully standardized by POSIX) conversion
            ** specifier for ssize_t AFAIK.
            */
            fprintf(fp, "%d : %zd\n ", i, count);
            fclose(fp);
            exit(EXIT_SUCCESS);
        }

        /*
        ** This is crucial - the parent has all the pipes open, and the
        ** child processes won't get EOF until the parent closes the
        ** write ends of the pipes, and they won't get EOF on the inputs
        ** until the parent closes the read ends of the pipe.
        **
        ** It could be avoided if the first child creates the pipe or
        ** socketpair and then creates the second child as a grandchild
        ** of the main process.  That also alters the process structure
        ** and reduces the number of processes that the original parent
        ** process has to wait for.  If the first child creates the
        ** pipe, then the apipe array of arrays becomes unnecessary;
        ** you can have a simple int apipe[2]; array that's local to the
        ** two processes.  However, you may need the array of arrays so
        ** that you can chain the outputs of one process (pair of
        ** processes) to the input of the next.
        */
        close(apipe[i][0]);
        close(apipe[i][1]);
    }
}

static ssize_t copy_bytes(int fd1, int fd2)
{
    ssize_t tbytes = 0;
    ssize_t rbytes;
    char buffer[4096];

    while ((rbytes = read(fd1, buffer, sizeof(buffer))) > 0)
    {
        ssize_t wbytes = write(fd2, buffer, rbytes);
        if (wbytes != rbytes)
        {
            /*
            ** There are many possible ways to deal with this.  If
            ** wbytes is negative, then the write failed, presumably
            ** irrecoverably.  The code could break the loop, reporting
            ** how many bytes were written successfully to the output.
            ** If wbytes is zero (pretty improbable), it isn't clear
            ** what happened.  If wbytes is positive, then you could add
            ** the current value to tbytes and try to write the rest in
            ** a loop until everything has been written or an error
            ** occurs.  You pays your money and takes your pick.
            */
            error_and_exit();
        }
        tbytes += wbytes;
    }
    if (tbytes == 0 && rbytes < 0)
        tbytes = rbytes;
    return tbytes;
}
You could add #include <signal.h> and signal(SIGPIPE, SIG_IGN); to the code in the second child.

How to use pipe between parent and child process after call to popen?

I want to communicate with a child process like the following:
int main(int argc, char *argv[])
{
    int bak, temp;
    int fd[2];
    if (pipe(fd) < 0)
    {
        // pipe error
        exit(1);
    }
    close(fd[0]);
    dup2(STDOUT_FILENO, fd[1]);
    fflush(stdout);
    bak = dup(1);
    temp = open("/dev/null", O_WRONLY);
    dup2(temp, 1);
    close(temp);

    Mat frame;
    std::vector<uchar> buf;
    namedWindow("Camera", WINDOW_AUTOSIZE);
    VideoCapture cam(0 + CAP_V4L);
    sleep(1);
    if (!cam.isOpened())
    {
        cout << "\nCould not open reference " << 0 << endl;
        return -1;
    }
    for (int i = 0; i < 30; i++)
    {
        cam >> frame;
    }
    //cout << "\nCamera initialized\n";

    /* Set the normal STDOUT back */
    fflush(stdout);
    dup2(bak, 1);
    close(bak);

    imencode(".png", frame, buf);
    cout << buf.size() << endl;
    ssize_t written = 0;
    size_t s = 128;
    while (written < buf.size())
    {
        written += write(fd[1], buf.size() + written, s);
    }
    cout << '\0';
    return 0;
}
The program built from the source code above is called from the parent with popen.
Note that I am writing to the standard output that has been duplicated with a pipe.
The parent will read the data and resend them to UDP socket.
If I do something like this:
#define BUFLEN 128
FILE *fp;
char buf[BUFLEN];
if ((fp = popen("path/to/exec", "r")) != NULL)
{
while((fgets(buf, BUFLEN, fp)!=NULL))
{
sendto(sockfd, buf, strlen(buf),0, addr, alen);
}
}
the program is working i.e. the receiver of sendto will receive the data.
I tried to use a pipe as done in the child process:
int fd[2];
if (pipe(fd) < 0)
{
    // pipe error
    exit(1);
}
close(fd[1]);
dup2(STDIN_FILENO, fd[0]);
if ((fp = popen("path/to/exec", "r")) != NULL)
{
    while (read(fd[0], buf, BUFLEN) > 0)
    {
        sendto(sockfd, buf, strlen(buf), 0, addr, alen);
    }
}
but with this, the data are not sent.
So how do I use a pipe in this case to achieve the same behaviour as in the first case? Should I do dup2(STDIN_FILENO, fd[0]); or dup2(STDOUT_FILENO, fd[0]);?
I am using the standard streams since the file descriptors are inherited by the child process, so it should not require any other effort. That is why I thought I could use a pipe, but is that so?
In the parent:
if (pipe(fd) < 0)
{
    // pipe error
    exit(1);
}
close(fd[0]);
you get a pipe, and then immediately close one end of it. This pipe is now useless, because no-one will ever be able to recover the closed end, and so no data can flow through it. You have converted a pipe into a hollow cylinder sealed at one end.
Then in the child:
if (pipe(fd) < 0)
{
    // pipe error
    exit(1);
}
close(fd[1]);
you create another unrelated pipe, and seal this at the other end. The two pipes are not connected, and now you have two separate hollow cylinders, each sealed at one end. Nothing can flow through either of them.
If putting something in the first cylinder made it appear in the other, that'd be a pretty good magic trick. Without sleight of hand or cleverly arranged mirrors, the solution is to create one pipe, keep both ends open and push data through it.
The usual way to manually set up a pipe from which a parent process can read a child process's standard output has these general steps:
parent creates a pipe by calling pipe()
parent fork()s
parent closes (clarification: its copy of) the write end of the pipe
child dupes the write end of the pipe onto its standard output via dup2()
child closes the original file descriptor for the write end of the pipe
(optional) child closes (clarification: its copy of) the read end of the pipe
child execs the desired command, or else performs the wanted work directly
The parent can then read the child's output from the read end of the pipe.
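Put concretely, a minimal sketch of those steps might look like this (assuming, for illustration only, that the child execs /bin/ls and the parent simply echoes what it reads):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0)                     /* parent creates the pipe      */
        exit(1);

    pid_t pid = fork();                   /* parent forks                 */
    if (pid < 0)
        exit(1);

    if (pid == 0)                         /* --- child ---                */
    {
        dup2(fd[1], STDOUT_FILENO);       /* write end -> standard output */
        close(fd[1]);                     /* close the original fd        */
        close(fd[0]);                     /* close the unused read end    */
        execlp("/bin/ls", "ls", (char *)NULL);
        _exit(127);                       /* only reached if exec fails   */
    }

    /* --- parent --- */
    close(fd[1]);                         /* parent closes its write end  */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, n);     /* echo the child's output      */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}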
The popen() function does all of that for you, plus wraps the parent's pipe end in a FILE. Of course, it can and will set up a pipe going in the opposite direction instead if that's what the caller requests.
You need to understand and appreciate that in the procedural scheme presented above, it is important which actions are performed by which process, and in what order relative to other actions in the same process. In particular, the parent must not close the write end of the pipe before the child is launched, because that renders the pipe useless. The child inherits the one-end-closed pipe, through which no data can be conveyed.
With respect to your latter example, note also that redirecting the standard input to the read end of the pipe is not part of the process for either parent or child. The fact that your pipe is half-closed, so that nothing can ever be read from it anyway, is just icing on the cake. Moreover, the parent clobbers its own standard input this way. That's not necessarily wrong, but the parent does not even rely on it.
Overall, however, there is a bigger picture that you seem not to appreciate. Even if you performed the redirection you seem to want in the parent, so that it could be inherited by the child, popen() performs its own redirection to a pipe of its own creation. The FILE * it returns is the means by which you can read the child's output. No previous output redirection you may have performed is relevant (clarification: of the child's standard output).
In principle, an approach similar to yours could be used to create a second redirection going the other way, but at that point the convenience factor of popen() is totally lost. It would be better to take the direct pipe / fork / dup2 / exec route all the way through if you want to redirect the child's input and output.
Applying all that to your first example, you have to appreciate that although a process can redirect its own standard streams, it cannot establish a pipe to its parent process that way. The parent needs to provide the pipe, else it has no knowledge of it. And when a process dupes one file descriptor onto another, that replaces the original with the new, closing the original if it is open. It does not redefine the original. And of course, in this case, too, a pipe is useless once either end is no longer open anywhere.
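The argument order matters here: dup2(oldfd, newfd) makes newfd refer to the same open file as oldfd, closing whatever newfd referred to before. As a short sketch of the difference, using the fd[] array from the question:

dup2(fd[1], STDOUT_FILENO);   /* standard output now refers to the pipe's
                                 write end - usually what you want       */

dup2(STDOUT_FILENO, fd[1]);   /* the question's version: fd[1] now refers
                                 to the terminal; the pipe's write end is
                                 closed and stdout is left unchanged     */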

fork / pipe / close in a recursive function

In order to implement a shell command interpreter, I am trying to execute pipes.
To do it, I use a recursive function in which I use the pipe function and some redirections with dup2.
Here is my code:
void test_recurs(pid_t pid, char **ae)
{
    char *const arg[2] = {"/bin/ls", NULL};
    char *const arg2[3] = {"/bin/wc", NULL};
    static int limit = 0;
    int check;
    int fd[2];

    if (limit > 5)
        return ;
    if (pipe(fd) == -1)
    {
        printf("pipe failed\n");
        return ;
    }
    pid = fork();
    if (pid != 0)
    {
        printf("père %d\n", getpid());
        close(fd[0]);
        dup2(fd[1], 1);
        close(fd[1]);
        if ((execve("/bin/ls", arg, ae)) == -1)
            exit(125);
        dprintf(2, "execution ls\n");
        wait(&check);
    }
    else
    {
        printf("fils %d\n", getpid());
        close(fd[1]);
        dup2(fd[0], 0);
        close(fd[0]);
        if ((execve("/bin/wc", arg2, ae)) == -1)
            printf("echec execve\n");
        dprintf(2, "limit[%d]\n", limit);
        limit++;
        test_recurs(pid, ae);
    }
}
The problem is that it only executes "ls | wc" once and then waits on standard input. I know that the problem may come from the pipes (and the redirections).
It's a bit unclear how you are trying to use the function you present, but here are some notable points about it:
It's poor form to rely on a static variable to limit recursion depth because it's not thread-safe and because you need to do extra work to manage it (for example, to ensure that any changes are backed out when the function returns). Use a function parameter instead.
As has been observed in comments, the exec-family functions return only on failure. Although you acknowledge that, I'm not sure you appreciate the consequences, for both branches of your fork contain code that will never be executed as a result. The recursive call in particular is dead and will never be executed.
Moreover, the process in which the function is called performs an execve() call itself. The reason that function does not return is that it replaces the process image with that of the new process. That means that function test_recurs() also does not return.
Just as a shell ordinarily must fork / exec to launch a single external command, it ordinarily must fork / exec for each command in a pipeline. If it fails to do so, then afterward it is no longer running -- whatever it exec'ed without forking runs instead.
The problem is that it only executes "ls | wc" once and then waits on standard input.
Certainly it does not recurse, because the recursive call is in a section of dead code. I suspect you are mistaken in your claim that it afterward waits on standard input, because the process that calls that function execs /bin/ls, which does not read from standard input. When the ls exits, however, leaving you with neither shell nor ls, what you then see might seem to be a wait on stdin.
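For contrast, here is a rough sketch (not the full recursive interpreter, just the "ls | wc" case) of the fork-per-command structure described above, so the caller survives to wait for both children. The envp parameter plays the role of the ae argument in the question:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch only: run "ls | wc" while the calling process keeps running. */
static void run_ls_wc(char **envp)
{
    int fd[2];
    if (pipe(fd) == -1)
    {
        perror("pipe");
        return;
    }

    pid_t left = fork();
    if (left == 0)                       /* first child: ls */
    {
        dup2(fd[1], 1);
        close(fd[0]);
        close(fd[1]);
        execve("/bin/ls", (char *[]){"/bin/ls", NULL}, envp);
        _exit(125);                      /* only reached if execve fails */
    }

    pid_t right = fork();
    if (right == 0)                      /* second child: wc */
    {
        dup2(fd[0], 0);
        close(fd[0]);
        close(fd[1]);
        execve("/bin/wc", (char *[]){"/bin/wc", NULL}, envp);
        _exit(125);
    }

    /* parent: close both pipe ends, then wait for both children */
    close(fd[0]);
    close(fd[1]);
    waitpid(left, NULL, 0);
    waitpid(right, NULL, 0);
}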

Communicating with an execv()'ed program via a pipe doesn't work

I am trying to write a socket server which loads programs and redirects the socket I/O to them. It sounds much like inetd, but as far as I know, inetd loads the program when its port is requested; I want to have it loaded permanently.
So far so good. Writing a socket server is not that tricky, but I didn't get the rest working.
I basically want to open a pipe(), dup2() it to stdin and stdout and execv() my program.
The problem is that the called program doesn't get any input. I'll try to show it with a test program. Can someone tell me what's wrong?
int create_program_fork(int *ios, char const *program) {
    // create pipes to program
    if (pipe(ios) != 0) {
        return -1;
    }

    // fork to new process
    int f = fork();
    if (f < 0) {
        // fork didn't work
        close(ios[0]);
        close(ios[1]);
        return(-1);
    }
    if (f > 0) {
        // master hasn't much to do here
        return f;
    }

    // *** Child Process
    // close std** file descriptors
    printf("executing program");
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    // duplicate pipes as std**
    dup2(ios[0], STDIN_FILENO);
    dup2(ios[1], STDOUT_FILENO);
    // close pipes
    close(ios[0]);
    close(ios[1]);
    // call program
    return execvp(program, NULL);
}

int main(int argc, char *argv[]) {
    int ios[2];

    // call program
    int pid = create_program_fork(ios, "/bin/bash");
    if (0 != pid) {
        exit(EXIT_FAILURE);
    }

    char const exit_order[] = "exit\0";
    char const order[] = ">/tmp/test.txt\0";

    // do something
    write(ios[1], order, strlen(order));
    // bash should stop then..
    write(ios[1], exit_order, strlen(exit_order));

    return 0;
}
I see two possible sources of trouble:
1) The write end of the pipe is redirected to the child's stdout, so the new process's output is sent straight back to its input. I suggest duping only the pipe's read end on the child side. If you want to intercept the child's output, you need another channel (i.e. a new pipe, or simply let both parent and child share the same stdout).
2) The strings you send seem to contain line-oriented commands. It's possible that the child process expects newlines at the end of the strings; this is a very common source of problems. I suggest checking the way the child reads its input. A "\n" at the end of the strings could help. (By the way, it's not necessary to explicitly add a "\0" at the end of C string literals, since the compiler does it for you; in any case, strlen() won't count the "\0".)
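As an illustration of point 1, a sketch of the two-pipe layout might look like this (names are hypothetical; "in" carries data to the child's stdin, "out" carries the child's stdout back to the parent):

#include <sys/types.h>
#include <unistd.h>

/* Sketch: one pipe per direction, so the child's output does not
** loop straight back into its own input. */
int spawn_with_pipes(int *to_child, int *from_child, const char *program)
{
    int in[2], out[2];              /* in: parent -> child, out: child -> parent */
    if (pipe(in) != 0 || pipe(out) != 0)
        return -1;

    pid_t f = fork();
    if (f < 0)
        return -1;

    if (f == 0)                     /* child */
    {
        dup2(in[0], STDIN_FILENO);  /* read end of "in" becomes stdin    */
        dup2(out[1], STDOUT_FILENO);/* write end of "out" becomes stdout */
        close(in[0]);  close(in[1]);
        close(out[0]); close(out[1]);
        execlp(program, program, (char *)NULL);
        _exit(127);                 /* only reached if exec fails */
    }

    close(in[0]);                   /* parent keeps the other two ends */
    close(out[1]);
    *to_child = in[1];
    *from_child = out[0];
    return f;
}

The parent would then write newline-terminated commands (point 2) to *to_child, for example write(to_child, "ls >/tmp/test.txt\n", strlen("ls >/tmp/test.txt\n")), and read the child's output from *from_child.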

fork(), pipe() and exec() process creation and communication

I have to write a program that creates processes using pipe().
My first task is to write a parent process that generates four child processes using the fork() function.
Once fork() is successful, each child process is replaced with another process (rover1, rover2, rover3, and rover4), though all of them have the same code.
The function of the processes is as follows.
Each child process is initially given its own number. It receives a new number from the parent. Using the following formula, it computes its own new number and forwards it to the parent:
mynumber = (3 * mynumber + 4 * numberreceived)/7
This process continues until the parent sends the message that the system is stable. The parent also has its initial number. It receives numbers of all the children and computes its new number as follows:
mynumber = (3 * mynumber + numbers sent by all the children)/7
The parent will send this number to all its children. This process will continue until the parent finds that its number is not changing anymore. At that time it will tell the children the system has become stable.
This is what I did, but my professor said I have to use exec() to execute the child and replace the child process with another process. I am not sure how to use exec(); could you please help me with this?
I am attaching only first child generation.
// I included stdio.h, unistd.h stdlib.h and errno.h
int main(void)
{
    // Values returned from the four fork() calls
    pid_t rover1, rover2, rover3, rover4;
    int parentnumber, mynumber1, mynumber2, mynumber3, mynumber4;
    int childownnumber1 = 0, status = 1, childownnumber2 = 0,
        childownnumber3 = 0, childownnumber4 = 0, numberreceived = 0;

    printf("Enter parent number: ");
    printf("%d", parentnumber);
    printf("Enter each children number");
    printf("%d %d %d %d", mynumber1, mynumber2, mynumber3, mynumber4);

    // Create pipes for communication between child and parent
    int p1[2], p2[2];

    // Attempt to open pipe
    if (pipe(p1) == -1) {
        perror("pipe call error");
        exit(1);
    }
    // Attempt to open pipe
    if (pipe(p2) == -1) {
        perror("pipe call error");
        exit(1);
    }

    // Parent process generates 4 child processes
    rover1 = fork();
    // if fork() returns 0, we're in the child process;
    // call exec() for each child to replace itself with another process
    if (rover1 == 0) {
        for (; numberreceived != 1; ) {
            close(p1[1]); // Close write end of pipe
            close(p2[0]); // Close read end of second pipe
            // Read parent's number from pipe
            read(p1[0], &numberreceived, sizeof(int));
            if (numberreceived == 1) {
                // System stable, end child process
                close(p1[0]);
                close(p2[1]);
                _exit(0); // End child process
            }
            mynumber1 = (int)((3*mynumber1 + 4*numberreceived)/7.0);
            printf("\nrover1 number: ");
            printf("%i", mynumber1);
            // Write to pipe
            write(p2[1], &mynumber1, sizeof(int));
        }
    }
    /* Error:
     * If fork() returns a negative number, an error happened;
     * output error message
     */
    if (rover1 < 0) {
        fprintf(stderr,
                "can't fork, child process 1 not created, error %d\n",
                errno);
        exit(EXIT_FAILURE);
    }
}
The exec family of functions is used to replace the current process with a new process. Note the use of the word replace. Once exec is called, the current process is gone and the new process starts. If you want to create a separate process, you must first fork, and then exec the new binary within the child process.
Using the exec functions is similar to executing a program from the command line. The program to execute as well as the arguments passed to the program are provided in the call to the exec function.
For example, the following exec call* is equivalent to the subsequent shell command:
execl("/bin/ls", "/bin/ls", "-r", "-t", "-l", (char *) 0);
/bin/ls -r -t -l
* Note that "arg0" is the command/file name to execute
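Tying the two ideas together, a minimal fork-then-exec sketch (an illustration only, not part of your assignment's code) looks like this:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* create the separate process first */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: replace ourselves with "ls -l" */
        execl("/bin/ls", "/bin/ls", "-l", (char *) 0);
        perror("execl");         /* only reached if execl failed */
        _exit(127);
    }
    /* parent: wait for the child to finish */
    waitpid(pid, NULL, 0);
    return 0;
}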
Since this is homework, it is important to have a good understanding of this process. You could start by reading documentation on pipe, fork, and exec combined with a few tutorials to gain a better understanding of each step.
The following links should help to get you started:
IBM developerWorks: Delve into UNIX process creation
YoLinux Tutorial: Fork, Exec and Process control
Pipe, Fork, Exec and Related Topics
If you are supposed to use exec, then you should split your program into two binaries.
Basically, the code that now gets executed by the child should be in the second binary and should be invoked with exec.
Before calling one of the exec family of functions, you'll also need to redirect the pipe descriptors to the new process' standard input/output using dup2. This way the code in the second binary that gets exec'd won't be aware of the pipe and will just read/write to the standard input/output.
It's also worth noting that some of the data you are using now in the child process is inherited from the parent through the fork. When using exec the child won't share the data nor the code of the parent, so maybe you can consider transmitting the needed data through the pipe as well.
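A sketch of that split, using a hypothetical second binary called rover, might look like the following. The parent dup2()s the pipe ends onto the child's standard input/output before exec, so rover itself only ever touches stdin and stdout; the initial number is passed on the command line instead of being inherited:

/* parent.c - sketch: launch one rover child with its stdin/stdout
** redirected to two pipes (p1: parent -> child, p2: child -> parent). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int p1[2], p2[2];
    if (pipe(p1) == -1 || pipe(p2) == -1) {
        perror("pipe");
        exit(1);
    }

    pid_t rover1 = fork();
    if (rover1 == 0) {
        dup2(p1[0], STDIN_FILENO);   /* child reads the parent's numbers on stdin */
        dup2(p2[1], STDOUT_FILENO);  /* child writes its numbers to stdout        */
        close(p1[0]); close(p1[1]);
        close(p2[0]); close(p2[1]);
        execl("./rover", "rover", "5", (char *) 0);  /* "5" = child's initial number */
        _exit(127);                  /* only reached if execl failed */
    }
    /* ... parent writes to p1[1] and reads from p2[0], as in your current loop ... */
    return 0;
}

/* rover.c - sketch: the second binary knows nothing about pipes;
** it just reads from stdin and writes to stdout. */
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int mynumber = (argc > 1) ? atoi(argv[1]) : 0;   /* initial number via argv */
    int numberreceived;

    while (read(STDIN_FILENO, &numberreceived, sizeof numberreceived) == sizeof numberreceived) {
        if (numberreceived == 1)                     /* parent says: system stable */
            break;
        mynumber = (3 * mynumber + 4 * numberreceived) / 7;
        write(STDOUT_FILENO, &mynumber, sizeof mynumber);
    }
    return 0;
}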
