I'm trying to implement the following simple UNIX command:
cat -n < file.txt
where file.txt contains simply an integer "5".
I'm fine with output redirection, but this input redirection has me stumped. This is my attempt at emulating the above command:
int f_des[2];
char *three[]={"cat", "-n", NULL};
// Open a pipe and report error if it fails
if (pipe(f_des)==-1){
perror("Pipe");
exit(1);
}
int filed=open("file.txt", O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR);
//fork child
if(fork()==0){
dup2(f_des[1], filed);
close(f_des[0]);
}
//fork child
if(fork()==0){
dup2(f_des[0], fileno(stdin));
close(f_des[1]);
execvp(three[0], three);
}
I get the following error:
cat: -: Input/output error
My thinking was that I would send filed (the fd for the file) through the pipe, the other end of the pipe would gather the file's contents as standard input, and then I would execute "cat -n" with the file's contents sitting on standard input.
You don't indicate the context. If all you want to do is implement cat -n < file, you can dispense with the pipe and the fork entirely.
This should suffice:
filed = open("file.txt", O_RDONLY);
dup2(filed, 0); // make file.txt be stdin.
close(filed);
execvp(three[0], three);
If you are implementing this within another program and need to resume after the cat call, fork is necessary but you only need to call it once. You don't need the pipe.
So you would do:
int ret;
if ((ret = fork()) == 0) {
// in child
// open file, dup2, execvp...
}
// in parent
wait(&ret); // wait for child to exit
// do other stuff...
fork clones the process. The copy looks like the one you had before, except for its PID and the return value from fork.
Checking the return value of fork() tells you whether you are in the child or the parent.
If the return value is zero, you are in the child. Do whatever you like in the if(ret == 0) {} section. In your case, you call execvp, which replaces the child's image with cat; when cat exits, the child is gone.
If the return value is not zero, you are in the parent. You will skip over the if(ret == 0) {} section. You should wait on the child to exit before proceeding.
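Putting those pieces together, a minimal sketch of the fork-and-redirect version might look like this (the file name and argv are taken from your question; error handling is kept short):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *three[] = {"cat", "-n", NULL};
    int status;
    pid_t pid = fork();

    if (pid == -1) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* child: make file.txt the standard input, then run cat -n */
        int filed = open("file.txt", O_RDONLY);
        if (filed == -1) {
            perror("open");
            exit(1);
        }
        dup2(filed, 0);    /* stdin now reads from file.txt */
        close(filed);      /* the copy on fd 0 is all we need */
        execvp(three[0], three);
        perror("execvp");  /* only reached if exec fails */
        exit(1);
    }
    /* parent: wait for cat to finish, then carry on */
    waitpid(pid, &status, 0);
    /* ... do other stuff ... */
    return 0;
}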
I'm trying to use a named pipe in C to run a child process in the background from a path in non-blocking mode and read the child's output.
This is my code:
int fifo_in = open("fifo_1", O_RDONLY| O_NONBLOCK);
int fifo_out = open("fifo_2", O_WRONLY| O_NONBLOCK);
dup2(fifo_in, 0);
dup2(fifo_out, 1);
char app[] = "/usr/local/bin/probemB";
char * const argsv[] = { app, "1", NULL };
if (execv(app, argsv) < 0) {
printf("execv error\n");
exit(4);
}
I will later use the read function to read the child process's output.
But the problem is that execv blocks while reading the output from the process, instead of allowing me to read it.
Can someone help me correct the above problem, please?
You're wrong that execv is blocking.
If execv works, it will never return. It replaces your program. You need to fork a new process for execv:
if (fork() == 0)
{
// In child process, first setup the file descriptors
dup2(fifo_out, STDOUT_FILENO); // Writes to standard output will be written to the pipe
close(fifo_out); // These are not needed anymore
close(fifo_in);
// Run the program with execv...
}
else
{
// Unless there was an error, this is in the parent process
close(fifo_out);
// TODO: Read from fifo_in, which will contain the standard output of the child process
}
Another thing: you seem to have two different and unconnected named pipes. You should open only one pipe, for reading in the parent process and for writing in the child process:
int fifo_in = open("fifo_1", O_RDONLY| O_NONBLOCK);
int fifo_out = open("fifo_1", O_WRONLY| O_NONBLOCK);
But if you only want to communicate internally, you don't need named pipes. Instead use anonymous pipes as created by the pipe function.
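For example, here is a minimal sketch using an anonymous pipe, assuming the same /usr/local/bin/probemB program and "1" argument from your question:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pid_t pid;

    if (pipe(fd) == -1) {
        perror("pipe");
        exit(1);
    }
    pid = fork();
    if (pid == 0) {
        /* child: send standard output into the pipe, then exec */
        char app[] = "/usr/local/bin/probemB";
        char *const argsv[] = { app, "1", NULL };

        close(fd[0]);                   /* child only writes */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[1]);
        execv(app, argsv);
        perror("execv");                /* only reached if exec fails */
        exit(4);
    }
    /* parent: read whatever the child writes until it closes the pipe */
    close(fd[1]);                       /* parent only reads */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd[0]);
    wait(NULL);
    return 0;
}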
I am implementing pipes in C. When I try the command 'cat aa | grep "something"' in my program, the grep process just hangs there and seems to be waiting for input. I don't know why. Here is the core code. Just take ExecuteCommand as simply calling the execve function, with all arguments correctly passed.
if ((pid = fork()) < 0)
{
perror("fork failed\n");
exit(1);
}
if (pid)
{ // parent as pipe WRITER
close(pd[0]);
close(1);
// replace stdout with the pipe's write end
dup(pd[1]);
// recursively call next commands
ExecuteCommand(cmds, env);
FreeCommandsArray(&cmds);
exit(0);
}
else
{ // child as pipe READER
close(pd[1]);
close(0); // free fd 0 so dup(pd[0]) lands on stdin
dup(pd[0]);
ExecuteCommand(*(++splitCmds), env);
FreeCommandsArray(&cmds);
exit(0);
}
The full code is open.
Another problem is that I have to use the full path of the command file as the first parameter for execve (e.g. /bin/ls for ls); otherwise, I get an error message saying no such file exists.
It is the quotation marks in the first argument of grep that cause the problem. It works well if I get rid of them in the input, e.g. 'cat aa | grep drw' instead of 'cat aa | grep "something"'.
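In other words, a real shell strips the quotes before it passes the argument to grep, so a shell implementation has to do the same. A minimal sketch of removing one surrounding pair of double quotes from a token (strip_quotes is just an illustrative name, not something from the code above):

#include <string.h>

/* Strip one surrounding pair of double quotes, in place:
   "something" becomes something; unquoted tokens are left untouched. */
static void strip_quotes(char *tok)
{
    size_t len = strlen(tok);

    if (len >= 2 && tok[0] == '"' && tok[len - 1] == '"') {
        memmove(tok, tok + 1, len - 2);
        tok[len - 2] = '\0';
    }
}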
I just finished my shell interpreter, but I think my pipe implementation is wrong.
It's working; basic things like ls | cat -e work, but I am afraid of the possibility of a segmentation fault if the file is over 60 KB.
I also found an infinite loop when I cat a file that is longer than 60 KB. For example, if I do cat foo | cat -e where foo is a long file, an infinite loop happens.
Another example: when I do cat /dev/urandom | cat -e it doesn't show any output, so it executes cat /dev/urandom first and then cat -e.
This is my code:
int son(int *fd_in, int p[2], t_list *cmd, char **env)
{
(void)env;
dup2(*fd_in, 0);
if (cmd->act != ENDACT && cmd->act != LEFT && cmd->act != DLEFT)
dup2(p[1], 1);
close(p[0]);
execve(cmd->av[0], cmd->av, NULL);
return (-1);
}
t_list *execute_pipe(t_list *cmd, int *fd_in)
{
int p[2];
pid_t pid;
*fd_in = 0;
while (cmd->act != -1)
{
pipe(p);
if ((pid = fork()) == -1)
return (NULL);
else if (pid == 0)
son(fd_in, p, cmd, NULL);
else
{
wait(NULL);
close(p[1]);
*fd_in = p[0];
if (cmd->act != PIPE)
return (cmd);
cmd = cmd->next;
}
}
return (cmd);
}
Part of the idea of a shell pipeline is that the processes involved run concurrently (or may do). The code you presented actively prevents that from happening by wait()ing on each child process before launching the next one. Among other things, that runs the risk of filling the (OS-level) pipe's buffer before there's anything ready to drain it. That will deadlock or, if you're lucky, generate an error.
At a high level, the procedure should look like this:
1. [shell] Let C initially be the command for the first segment of the pipe, and set fd0 to be STDIN_FILENO.
2. [shell] Prepare an output file descriptor:
   - if there are any subsequent commands, create a pipe(), and set fd1 to be the write end of that pipe;
   - otherwise, set fd1 to be STDOUT_FILENO.
3. [shell] fork() a child in which to run command C. In it:
   - [child] if fd0 is different from STDIN_FILENO, then dup2() fd0 onto STDIN_FILENO and close fd0;
   - [child] if fd1 is different from STDOUT_FILENO, then dup2() fd1 onto STDOUT_FILENO and close fd1;
   - [child] exec command C.
4. [shell] If fd0 is different from STDIN_FILENO, then close fd0.
5. [shell] If fd1 is different from STDOUT_FILENO, then close fd1.
6. [shell] If there are any more commands in the pipe, then:
   - set C to be the next command;
   - set fd0 to be the read end of the pipe from step 2, above;
   - go to step 2 (Prepare an output file descriptor).
7. [shell] (At this point, all processes in the pipeline have been started.) wait() or waitpid() for all the child processes.
Note that that works equally well for a pipeline containing any positive number of commands, including 1.
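A minimal sketch of that procedure in C follows; the argv-array representation of the commands is an assumption made for illustration, not your t_list:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a pipeline given as an array of argv vectors, e.g.
   { {"cat", "foo", NULL}, {"cat", "-e", NULL} } with ncmds == 2. */
static void run_pipeline(char **cmds[], size_t ncmds)
{
    int fd0 = STDIN_FILENO;            /* input for the next command */
    size_t i;

    for (i = 0; i < ncmds; i++) {
        int p[2] = { -1, -1 };
        int fd1 = STDOUT_FILENO;       /* output for this command */
        pid_t pid;

        if (i + 1 < ncmds) {           /* more commands follow: pipe to them */
            if (pipe(p) == -1) { perror("pipe"); exit(1); }
            fd1 = p[1];
        }
        pid = fork();
        if (pid == -1) { perror("fork"); exit(1); }
        if (pid == 0) {
            /* child: wire up stdin/stdout, close leftovers, exec */
            if (fd0 != STDIN_FILENO)  { dup2(fd0, STDIN_FILENO);  close(fd0); }
            if (fd1 != STDOUT_FILENO) { dup2(fd1, STDOUT_FILENO); close(fd1); }
            if (p[0] != -1)
                close(p[0]);           /* read end belongs to the next stage */
            execvp(cmds[i][0], cmds[i]);
            perror("execvp");
            _exit(127);
        }
        /* shell: close the descriptors it no longer needs */
        if (fd0 != STDIN_FILENO)  close(fd0);
        if (fd1 != STDOUT_FILENO) close(fd1);
        fd0 = p[0];                    /* the next command reads from this pipe */
    }
    /* only now, with every stage started, reap them all */
    while (wait(NULL) > 0)
        ;
}

Because every stage is started before any wait() happens, a large file or even cat /dev/urandom simply streams through the pipe as fast as the downstream command drains it.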
I am trying to implement multiple pipes in C, to run multiple commands like a shell does.
I have made a linked list (called t_launch in my code) which looks like this if you type "ls | grep src | wc":
wc -- PIPE -- grep src -- PIPE -- ls
Every PIPE node contains an int tab[2] filled in by the pipe() function (of course, there has been one pipe() call for each PIPE node).
Now I am trying to execute these commands:
int execute_launch_list(t_shell *shell, t_launch *launchs)
{
pid_t pid;
int status;
int firstpid;
firstpid = 0;
while (launchs != NULL)
{
if ((pid = fork()) == -1)
return (my_error("Unable to fork\n"));
if (pid == 0)
{
if (launchs->prev != NULL)
{
close(1);
dup2(launchs->prev->pipefd[1], 1);
close(launchs->prev->pipefd[0]);
}
if (launchs->next != NULL)
{
close(0);
dup2(launchs->next->pipefd[0], 0);
close(launchs->next->pipefd[1]);
}
execve(launchs->cmdpath, launchs->words, shell->environ);
}
else if (firstpid == 0)
firstpid = pid;
launchs = launchs->next == NULL ? launchs->next : launchs->next->next;
}
waitpid(firstpid, &status, 0);
return (SUCCESS);
}
But that doesn't work: it looks like the commands don't stop reading.
For example, if I type "ls | grep src", "src" will be printed by the grep command, but grep keeps reading and never stops. If I type "ls | grep src | wc", nothing is printed. What's wrong with my code?
Thank you.
If I understand your code correctly, you first call pipe in the shell process for every PIPE. You then proceed to fork each process.
While you do close the unused end of each of the child's pipes in the child process, this procedure suffers from two problems:
1. Every child has every pipe, and doesn't close the ones which don't belong to it.
2. The parent (shell) process has all the pipes open.
Consequently, all the pipes are open, and the children don't get EOFs.
By the way, you need to wait() for all the children, not just the last one. Consider the case where the first child does some long computation after closing stdout. In fact, any computation or side effect that happens after stdout is closed, even a short one, could be sequenced after the sink process terminates, since multiprocessing is essentially non-deterministic.
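As a sketch of the shell-side half of the fix (the helper name and the fd-pair array are illustrative, not your t_launch structures; the fds are the same ones stored in your PIPE nodes): once every stage has been forked, the shell must close both ends of every pipe it created and then reap every child. Each child should likewise close every pipe fd it is not using before calling execve.

#include <sys/wait.h>
#include <unistd.h>

/* Illustrative sketch: 'pipes' holds the fd pairs created for the PIPE nodes. */
static void parent_cleanup(int pipes[][2], int npipes)
{
    int i;

    for (i = 0; i < npipes; i++) {
        close(pipes[i][0]);   /* the shell keeps no end of any pipe */
        close(pipes[i][1]);
    }
    while (wait(NULL) > 0)    /* wait for ALL children, not just the first */
        ;
}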
I need to create two child processes. One child needs to run the command "ls -al" and redirect its output to the input of the next child process, which in turn will run the command "sort -r -n -k 5" on its input data. Finally, the parent process needs to read that (data already sorted) and display it in the terminal. The final result in the terminal (when executing the program) should be the same as if I entered the following command directly in the shell: "ls -al | sort -r -n -k 5". For this I need to use the following methods: pipe(), fork(), execlp().
My program compiles, but I don't get the desired output to the terminal. I don't know what is wrong. Here is the code:
#include <sys/types.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main()
{
int fd[2];
pid_t ls_pid, sort_pid;
char buff[1000];
/* create the pipe */
if (pipe(fd) == -1) {
fprintf(stderr, "Pipe failed");
return 1;
}
/* create child 2 first */
sort_pid = fork();
if (sort_pid < 0) { // error creating Child 2 process
fprintf(stderr, "\nChild 2 Fork failed");
return 1;
}
else if(sort_pid > 0) { // parent process
wait(NULL); // wait for children termination
/* create child 1 */
ls_pid = fork();
if (ls_pid < 0) { // error creating Child 1 process
fprintf(stderr, "\nChild 1 Fork failed");
return 1;
}
else if (ls_pid == 0) { // child 1 process
close(1); // close stdout
dup2(fd[1], 1); // make stdout same as fd[1]
close(fd[0]); // we don't need this end of pipe
execlp("bin/ls", "ls", "-al", NULL);// executes ls command
}
wait(NULL);
read(fd[0], buff, 1000); // parent reads data
printf(buff); // parent prints data to terminal
}
else if (sort_pid == 0) { // child 2 process
close(0); // close stdin
dup2(fd[0], 0); // make stdin same as fd[0]
close(fd[1]); // we don't need this end of pipe
execlp("bin/sort", "sort", "-r", "-n", "-k", "5", NULL); // executes sort operation
}
return 0;
}
Your parent process waits for the sort process to finish before creating the ls process.
The sort process needs to read its input before it can finish. And its input is coming from the ls that won't be started until after the wait. Deadlock.
You need to create both processes, then wait for both of them.
Also, your file descriptor manipulations aren't quite right. In this pair of calls:
close(0);
dup2(fd[0], 0);
the close is redundant, since dup2 will automatically close the existing fd 0 if there is one. You should do a close(fd[0]) after the dup2, so you only have one file descriptor tied to that end of the pipe. And if you want to be really robust, you should test whether fd[0]==0 already, and in that case skip the dup2 and close.
Apply all of that to the other dup2 also.
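For example, a sketch of that pattern for the fd[0]/stdin pair (the same shape applies to fd[1] and stdout):

if (fd[0] != 0) {        /* already stdin? then there is nothing to do */
    dup2(fd[0], 0);      /* dup2 closes the old fd 0 for us */
    close(fd[0]);        /* drop the extra reference to the read end */
}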
Then there's the issue of the parent process holding the pipe open. I'd say you should close both ends of the pipe in the parent after you've passed them on to the children, but you have that weird read from fd[0] after the last wait... I'm not sure why that's there. If the ls|sort pipeline has run correctly, the pipe will be empty afterward, so there will be nothing to read. In any case, you definitely need to close fd[1] in the parent, otherwise the sort process won't finish because the pipe won't indicate EOF until all writers are closed.
After the weird read is a printf that will probably crash, since the read buffer won't be '\0'-terminated.
And the point of using execlp is that it does the $PATH lookup for you so you don't have to specify /bin/. My first test run failed because my sort is in /usr/bin/. Why hardcode paths when you don't have to?
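Putting those fixes together, one possible corrected sketch (still using pipe(), fork(), and execlp()) looks like this. Here sort inherits the terminal as its standard output, which produces the required display without the parent re-reading the data; if the parent really must read the sorted output itself, a second pipe from sort back to the parent would be needed.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pid_t ls_pid, sort_pid;

    if (pipe(fd) == -1) {
        fprintf(stderr, "Pipe failed\n");
        return 1;
    }
    ls_pid = fork();
    if (ls_pid == 0) {                 /* child 1: ls -al writes into the pipe */
        dup2(fd[1], 1);                /* stdout -> write end */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", "-al", NULL);   /* $PATH lookup, no hardcoded path */
        perror("execlp ls");
        _exit(1);
    }
    sort_pid = fork();
    if (sort_pid == 0) {               /* child 2: sort reads from the pipe */
        dup2(fd[0], 0);                /* stdin <- read end */
        close(fd[0]);
        close(fd[1]);
        execlp("sort", "sort", "-r", "-n", "-k", "5", NULL);
        perror("execlp sort");
        _exit(1);
    }
    /* parent: close BOTH ends, or sort never sees EOF; then wait for both */
    close(fd[0]);
    close(fd[1]);
    waitpid(ls_pid, NULL, 0);
    waitpid(sort_pid, NULL, 0);
    return 0;
}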