Process Hangs in Parent Process in C

I have a program that seems to be hanging in the parent process. It's a mock bash program that accepts commands like bash, and then executes them. The code is below. (Note this is simplified code without error checking so it is more easily readable. Assume it's all properly nested inside the main function.)
#define MAX_LINE 80
char *args[MAX_LINE/2 + 1];

while (should_run) {
    char *inputLine = malloc(MAX_LINE);
    runConcurrently = 0; /* Resets the run-in-background flag for this line */
    fprintf(stdout, "osh> "); /* Command prompt aesthetic */
    fflush(stdout);

    /* User input */
    fgets(inputLine, MAX_LINE, stdin);

    /* Reads into args array */
    char *token = strtok(inputLine, " \n");
    int spot = 0;
    while (token) {
        args[spot] = token;
        token = strtok(NULL, " \n");
        spot++;
    }
    args[spot] = NULL;

    /* Checks for & and changes flag */
    if (strcmp(args[spot-1], "&") == 0) {
        runConcurrently = 1;
        args[spot-1] = NULL;
    }

    /* Child-parent fork */
    pid_t pid;
    pid = fork(); /* Creates the fork */
    if (pid == 0) {
        int run = execvp(args[0], args);
        if (run < 0) {
            fprintf(stdout, "Commands Failed, check syntax!\n");
            exit(1);
        }
    }
    else if (pid > 0) {
        if (!runConcurrently) {
            wait(NULL);
        }
    }
    else {
        fprintf(stderr, "Fork Failed\n");
        return 1;
    }
}
The problem has to do with when I use an '&' and activate the run-concurrently flag. This makes it so the parent no longer waits; however, when I do this I lose some functionality.
Expected output:
osh> ls -a &
//Outputs a list of all in current directory
osh>
So I want it to run them concurrently, but give control of the terminal back to me. But instead I get this.
Actual Result:
osh> ls -a &
//Outputs a list of all in current directory
<---- Starts a new line without the osh>. And stays like this indefinitely
And if I type something into this blank area the result is:
osh> ls -a &
//Outputs a list of all in current directory
ls -a
//Outputs a list of all in current directory
osh> osh> //I get two osh>'s this time.
This is my first time working with split processes and fork(). Am I missing something here? When I run it concurrently should I be choosing processes or something like that? Any help is welcome, thanks!

Your code actually works fine. The only problem is that you print the prompt "too quickly", so the new prompt appears before the command output. See a test run here:
osh> ls -al &
osh> total 1944 --- LOOK HERE
drwxrwxrwt 15 root root 4096 Feb 15 14:34 .
drwxr-xr-x 24 root root 4096 Feb 3 02:13 ..
drwx------ 2 test test 4096 Feb 15 09:30 .com.google.Chrome.5raKDW
drwx------ 2 test test 4096 Feb 15 13:35 .com.google.Chrome.ueibHT
drwx------ 2 test test 4096 Feb 14 12:15 .com.google.Chrome.ypZmNA
See the "LOOK HERE" line. The new prompt is there, but ls command output appears later. Your application is responsive to new commands, even though the prompt is displayed before command output. You can verify all this by using a command that does not produce any output, for example
osh> sleep 10 &
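Not part of the original program, but related: since the parent never waits for & commands, finished background children linger as zombies. One common fix (a sketch, with an illustrative [done] message) is a non-blocking reap each time around the prompt loop:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Reap any finished background ("&") children without blocking.
   Calling this once per loop iteration, just before printing the
   prompt, prevents zombie processes from piling up. */
static void reap_background_children(void)
{
    pid_t done;
    /* WNOHANG: waitpid returns immediately (0) if no child has exited yet */
    while ((done = waitpid(-1, NULL, WNOHANG)) > 0)
        fprintf(stderr, "[done] pid %d\n", (int)done);
}
```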
Hannu

Related

How to make 'cd' command work from my own custom shell?

I'm currently doing my homework for UNIX (Linux) programming.
I was assigned to build my own custom shell in which all commonly used Linux commands and custom programs can work.
I also created my_ls, my_cp, my_rm, and my_cd to check that both the Linux commands and my own commands work.
A simple example is below:
./myOwnShell // Run my own shell
home/testFolder>>ls . // Shell prompt
a.out helloWorld.txt myOwnShell.c myOwnShell // Print ls command's result
home/test/Folder>>my_ls . // Run my own ls command program
a.out helloWorld.txt myOwnShell.c myOwnShell
So far, all the Linux commands (which are in /bin/) and my own commands (which are in home//bin/) work.
But things differ when I type cd and my_cd - which change the current working directory:
home/testFolder>>cd ..
Fail to run program // Error message from exec function failure
home/testFolder>>my_cd ..
// No message, but the cwd is also not changed
home/testFolder>> // Prompt from same folder
Somewhat pseudo source code for my shell program is below.
(I cannot copy/paste my actual source because it is on a university server and transfer protocols are blocked.)
int main() {
    char** res; // store command by tokenizing
    while (1) {
        printf("%s>>", cwd);
        gets(in); // get command
        // <Some code that splits `in` by space and stores it into res>
        // <If cmd is "ls ./folder" -> res = ["ls", "./folder", NULL]>
        pid = fork();
        if (pid == 0) { // child
            if (execvp(res[0], res) == -1) { // Run 'ls' command
                printf("Fail to run program");
                exit(0); // Exit child process
            }
        } else { // Parent; I omit the fork-failure case
            wait(0);
            // Omit exit status checking code
        }
    }
    return 0;
}
The command cd is a Linux shell built-in.
The command my_cd is my own program, which changes its cwd.
I do know that changing a child process's cwd cannot affect the parent process, and that's why 'cd' does not change my shell's cwd. I also found that the cd command is not in /bin/, so I guess cd is coded inside the Linux shell.
How can I make it work?
For the Linux cd
For my own my_cd -- I don't have its source code, only the program. It is from my professor.
My guess is that cd cannot be implemented unless it is coded in the shell itself. But my professor gave me this homework, which suggests it is possible.
Any ideas, please?
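Since chdir() in a child cannot affect the parent, the usual approach is to treat cd as a builtin: detect it in the shell process before forking and call chdir() there. A minimal sketch (the helper name is hypothetical, reusing the question's res array convention):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if the command was a builtin and was handled in-process,
   0 if the shell should fall through to fork/execvp. */
static int handle_builtin(char **res)
{
    if (res[0] != NULL && strcmp(res[0], "cd") == 0) {
        /* "cd" with no argument conventionally goes to $HOME */
        const char *target = res[1] ? res[1] : getenv("HOME");
        if (target == NULL || chdir(target) != 0)
            perror("cd");
        return 1; /* handled: do NOT fork/exec for this command */
    }
    return 0;
}
```

In the main loop, call this right after tokenizing and only fork when it returns 0.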

Understanding pipe, fork and exec - C programming

I am trying to understand pipe, fork and exec in C, so I tried to write a little program that takes an input string and prints it out with the help of 2 child processes that run simultaneously.
Since the code is too long I posted it in this link: https://pastebin.com/mNcRWkDg which I will use as a reference. I also posted the short version of my code at the bottom
Example what should it do with input abcd:
> ./echo
abcd
result ->
abcd
I am taking input through getline() and checking whether the input_length is even or can be broken into even parts. If it's just one char it just prints it out.
If it is for example abcd i.e has input_length of 4, it will split it into 2 parts first_part ab and second_part cd with the help of the struct parts like this:
struct parts p1;
split(input, &p1);
Then I set up a pipe for the first child and fork it, and then do the same for the second child. I redirect the first child's output to be input of the parent process, and the same for the second child. Let's assume that part works like it should.
Then I write it to their child processes input:
write(pipeEndsFirstChild2[1], p1.first_half, strlen(p1.first_half));
write(pipeEndsSecondChild2[1], p1.second_half, strlen(p1.second_half));
After that I open their outputs with fdopen() and read them with fgets().
At the end I allocate memory and concat both results with:
char *result = malloc(strlen(readBufFirstChild) + strlen(readBufSecondChild));
strcat(result, readBufFirstChild);
strcat(result, readBufSecondChild);
I used stderr to see the output since stdout is redirected, and what I get is:
>./echo
abcd
result ->
cd
result ->
ab
result ->
����
Question:
How do I get child process 1 to give me ab first and then the second child to give me cd, i.e. how do I ensure the child processes run in the correct order? And since I am only printing, how do I save ab and cd between processes and concat them in the parent process to output them onto stdout?
If I try:
>./echo
ab
result ->
ab
everything works as expected, so I guess something gets messed up when I have to call child processes multiple times, as with the abcd input. Why?
int main(int argc, char *argv[])
{
    int status = 0;
    char *input;
    input = getLine();
    int input_length = strlen(input);
    if ((input_length/2) % 2 == 1 && input_length > 2)
    {
        usage("input must have even length");
    }
    if (input_length == 1)
    {
        fprintf(stdout, "%s", input);
    }
    else
    {
        struct parts p1;
        split(input, &p1);
        int pipeEndsFirstChild1[2];
        int pipeEndsFirstChild2[2];
        .
        .
        .
        pid_t pid1 = fork();
        redirectPipes(pid1, pipeEndsFirstChild1, pipeEndsFirstChild2);
        int pipeEndsSecondChild1[2];
        int pipeEndsSecondChild2[2];
        .
        .
        .
        pid_t pid2 = fork();
        redirectPipes(pid2, pipeEndsSecondChild1, pipeEndsSecondChild2);
        // write to 1st and 2nd child input
        write(pipeEndsFirstChild2[1], p1.first_half, strlen(p1.first_half));
        write(pipeEndsSecondChild2[1], p1.second_half, strlen(p1.second_half));
        .
        .
        .
        // open output fd of 1st child
        FILE *filePointer1 = fdopen(pipeEndsFirstChild1[0], "r");
        // put output into readBufFirstChild
        fgets(readBufFirstChild, sizeof(readBufFirstChild), filePointer1);
        // open output fd of 2nd child
        FILE *filePointer2 = fdopen(pipeEndsSecondChild1[0], "r");
        // put output into readBufSecondChild
        fgets(readBufSecondChild, sizeof(readBufSecondChild), filePointer2);
        // concat results
        char *result = malloc(strlen(readBufFirstChild) +
                              strlen(readBufSecondChild) + 1);
        strcpy(result, readBufFirstChild);
        strcat(result, readBufSecondChild);
        fprintf(stderr, "result ->\n%s\n", result);
        if (wait(&status) == -1) {
            exit(EXIT_FAILURE);
        }
        exit(EXIT_SUCCESS);
    }
}
There's no way to control the order in which child processes run if they both have input available to them.
The way to solve this in your application is that you shouldn't write to the second child until after you've read the response from the first child.
write(pipeEndsFirstChild2[1], p1.first_half, strlen(p1.first_half));
char readBufFirstChild[128];
FILE *filePointer1 = fdopen(pipeEndsFirstChild1[0], "r");
fgets(readBufFirstChild,sizeof(readBufFirstChild),filePointer1);
write(pipeEndsSecondChild2[1], p1.second_half, strlen(p1.second_half));
char readBufSecondChild[128];
FILE *filePointer2 = fdopen(pipeEndsSecondChild1[0], "r");
fgets(readBufSecondChild,sizeof(readBufSecondChild),filePointer2);
I've omitted the error checking and closing of all the unnecessary pipe ends.
You only need to do this because each process prints its portion of the result to stderr, so you care about the order in which they run. Normally you shouldn't care, since each child can contribute its portion of the final result in any order. If only the original parent process displayed the result, your code would be fine.

SSH communication using pipes and read() write()

I'm currently writing a piece of code to access an external server over SSH and then communicate with an interactive shell-like application to which I'm connecting directly through it.
I'm using embedded Linux with only basic libraries available, with no possibility of using any additional software or library. Also, I have to do it from C/C++ code inside the application. So I've decided to use pipes and the read() and write() system calls, and I would rather stick to that.
I've written some code to better understand and test the concept, but it doesn't work as expected. I've used a snippet from here. It seems to work fine, but then the loop in main doesn't behave as expected:
#include <stdbool.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>

static bool
start_subprocess(char *const command[], int *pid, int *infd, int *outfd)
{
    int p1[2], p2[2];

    if (!pid || !infd || !outfd)
        return false;

    if (pipe(p1) == -1)
        goto err_pipe1;
    if (pipe(p2) == -1)
        goto err_pipe2;
    if ((*pid = fork()) == -1)
        goto err_fork;

    if (*pid) {
        /* Parent process. */
        *infd = p1[1];
        *outfd = p2[0];
        close(p1[0]);
        close(p2[1]);
        return true;
    } else {
        /* Child process. */
        dup2(p1[0], STDIN_FILENO);
        dup2(p2[1], STDOUT_FILENO);
        close(p1[0]);
        close(p1[1]);
        close(p2[0]);
        close(p2[1]);
        execvp(*command, command);
        /* Error occurred. */
        fprintf(stderr, "error running %s: %s", *command, strerror(errno));
        abort();
    }

err_fork:
    close(p2[1]);
    close(p2[0]);
err_pipe2:
    close(p1[1]);
    close(p1[0]);
err_pipe1:
    return false;
}

int main() {
    char *cmd[4];
    cmd[0] = "/usr/bin/ssh";
    cmd[1] = "-tt";
    cmd[2] = "user@localhost";
    cmd[3] = NULL;

    char buf[65535];
    char msg[65535];
    int pid, infd, outfd;

    start_subprocess(cmd, &pid, &infd, &outfd);
    printf("Started app %s as %d\n\n", *cmd, pid);

    while (1) {
        read(outfd, buf, 65535);
        printf(">>> %s\n", buf);
        printf("<<< ");
        scanf("%s", msg);
        if (strcmp(msg, "exit") == 0) break;
        write(infd, msg, strlen(msg));
    }
    return 0;
}
I've experimented with various SSH -t settings, and it seems to somewhat work with the -tt option enabled (as I understand it, this forces pseudo-terminal allocation); without it I'm getting
Pseudo-terminal will not be allocated because stdin is not a terminal.
So I assume -tt is correct here. But the behaviour is strange. I wanted to connect through SSH, then issue the ls command and see the output, which should be similar to a normal SSH session:
user@xubuntuLTS ~/dev/cpp/pipes$ ssh localhost
>>>> WELCOME TO SSH SERVER <<<<
Last login: Thu Jan 3 22:34:35 2019 from 127.0.0.1
user@xubuntuLTS ~$ ls
Desktop dev Documents Downloads Music Pictures Public Templates TEST_FILE Videos
user@xubuntuLTS ~$
But instead, my application behaves like this:
user@xubuntuLTS ~/dev/cpp/pipes$ ./a.out
Started app /usr/bin/ssh as 18393
>>>> WELCOME TO SSH SERVER <<<<
>>> Last login: Thu Jan 3 22:35:28 2019 from 127.0.0.1
<<< ls
>>> user@xubuntuLTS ~$
<<<
ls
>>> ls0;user@xubuntuLTS: ~user@xubuntuLTS ~$
<<< ls
>>> ls0;user@xubuntuLTS: ~user@xubuntuLTS ~$
Can you hint at what is wrong in my code? I want to read exactly the same output as I see during a "normal" SSH session from a terminal, ideally getting the appended output on each read() call, so I can easily perform automated tasks over this kind of interactive communication. Please note that using a standard terminal here is just an example; in the real-world solution I'm connecting to some kind of command-line interface program directly by logging in through SSH, without actual access to a shell.
I'm pretty sure there is something wrong with my usage of write() and read() here, but I'm not an expert in this matter.

Realize a console in a GTK3 GUI programming in C

I built a GUI with GTK3 that essentially generates an input text file for an exe program, which uses these inputs to do its computations.
This exe is executed from the GUI by means of a system() call ( system("exe input.dat &") ).
The exe can print informational or error messages on screen.
What I want to do is redirect these messages to a GtkTextView.
The idea I had is to redirect output and error to a file ( system("exe input.dat > output_file.txt 2>&1 &") ), then have the GUI read this file line by line and send the strings to the text view.
I was not sure that 2 processes can write and read the same file, so to test this concept I used these 2 simple programs:
the writer (used like ./writer > out_file.txt):
#include <stdio.h>
#include <unistd.h>

int main()
{
    int a = 0;
    while (1)
    {
        fprintf(stdout, "a=%d\n", a);
        fflush(stdout);
        sleep(1);
        a++;
    }
}
and the reader:
#include <stdio.h>
#include <string.h>

int main()
{
    FILE *fp;
    fp = fopen("out_file.txt", "r");
    char string_new[1024] = "";
    char string_old[1024];
    strcpy(string_old, " ");
    while (1)
    {
        fgets(string_new, 1024, fp);
        if (strlen(string_new) != 0)
        {
            if (strcmp(string_new, string_old) != 0)
            {
                fprintf(stdout, "%s", string_new);
                fflush(stdout);
                strcpy(string_old, string_new);
            }
        }
    }
}
These two programs run correctly, and the second one prints the output of the first.
Putting similar code in the GUI, the GUI reads only the first line of the file.
How can I solve this issue?
Thank you
You should use popen instead of executing system("exe input.dat &"); then it's easy to read from the stdout output of the program.
Like this:
#include <stdio.h>

int main(void)
{
    FILE *fp = popen("ls -lah /tmp", "r");
    if (fp == NULL)
        return 1;
    char buffer[1024];
    int linecnt = 0;
    while (fgets(buffer, sizeof buffer, fp))
        printf("Line: %d: %s", ++linecnt, buffer);
    putchar('\n');
    pclose(fp); /* streams opened with popen must be closed with pclose */
    return 0;
}
which outputs:
$ ./b
Line: 1: total 108K
Line: 2: drwxrwxrwt 8 root root 12K Mar 10 02:30 .
Line: 3: drwxr-xr-x 26 root root 4.0K Feb 15 01:05 ..
Line: 4: -rwxr-xr-x 1 shaoran shaoran 16K Mar 9 22:29 a
Line: 5: -rw-r--r-- 1 shaoran shaoran 3.6K Mar 9 22:29 a.c
Line: 6: -rw------- 1 shaoran shaoran 16K Mar 9 22:29 .a.c.swp
Line: 7: -rwxr-xr-x 1 shaoran shaoran 11K Mar 10 02:30 b
Line: 8: -rw-r--r-- 1 shaoran shaoran 274 Mar 10 02:30 b.c
Line: 9: -rw------- 1 shaoran shaoran 12K Mar 10 02:30 .b.c.swp
Line: 10: drwx------ 2 shaoran shaoran 4.0K Mar 9 20:08 firefox_shaoran
Line: 11: drwxrwxrwt 2 root root 4.0K Mar 9 20:06 .ICE-unix
Line: 12: srwx------ 1 mongodb mongodb 0 Mar 9 20:07 mongodb-27017.sock
Line: 13: prwx------ 1 shaoran shaoran 0 Mar 9 20:08 oaucipc-c2s-1874
Line: 14: prwx------ 1 shaoran shaoran 0 Mar 9 20:08 oaucipc-s2c-1874
Line: 15: drwxrwxr-x 2 root utmp 4.0K Mar 9 20:06 screen
Line: 16: drwx------ 2 shaoran shaoran 4.0K Mar 9 20:07 ssh-XueH0w8zWCSE
Line: 17: drwx------ 2 shaoran shaoran 4.0K Mar 9 20:08 thunderbird_shaoran
Line: 18: -r--r--r-- 1 root root 11 Mar 9 20:07 .X0-lock
Line: 19: drwxrwxrwt 2 root root 4.0K Mar 9 20:07 .X11-unix
If you need more control and also want to read stderr, then you have to create pipes for stdout and stderr, fork, have the child dup2 the pipes onto stderr & stdout, and then call exec (or any other function of that family) to execute the program.
Like this:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int stdout_pipe[2];
    int stderr_pipe[2];
    pipe(stdout_pipe);
    pipe(stderr_pipe);

    pid_t pid = fork();
    if (pid < 0)
        return 1;

    if (pid == 0)
    {
        // closing reading ends and duplicating writing ends
        close(stdout_pipe[0]);
        dup2(stdout_pipe[1], STDOUT_FILENO);
        close(stderr_pipe[0]);
        dup2(stderr_pipe[1], STDERR_FILENO);
        execlp("ls", "ls", "-alh", "a.c", "kslkdl", NULL);
        exit(1);
    }

    // closing writing ends
    close(stdout_pipe[1]);
    close(stderr_pipe[1]);

    int status;
    if (waitpid(pid, &status, 0) < 0)
    {
        fprintf(stderr, "could not wait\n");
        return 1;
    }
    if (WIFEXITED(status) == 0)
    {
        fprintf(stderr, "ls exited abnormally\n");
        close(stdout_pipe[0]);
        close(stderr_pipe[0]);
        return 1;
    }

    puts("STDOUT:");
    char buffer[1024];
    ssize_t len;
    while ((len = read(stdout_pipe[0], buffer, sizeof(buffer) - 1)) > 0)
    {
        buffer[len] = 0;
        printf("%s", buffer);
    }
    putchar('\n');
    close(stdout_pipe[0]);

    puts("STDERR:");
    while ((len = read(stderr_pipe[0], buffer, sizeof(buffer) - 1)) > 0)
    {
        buffer[len] = 0;
        printf("%s", buffer);
    }
    putchar('\n');
    close(stderr_pipe[0]);
    return 0;
}
which outputs:
$ ./b
STDOUT:
-rw-r--r-- 1 shaoran shaoran 3.6K Mar 9 22:29 a.c
STDERR:
ls: cannot access 'kslkdl': No such file or directory
Pablo's answer is correct, you need to use pipe(7)-s.
And you could probably use GTK & GLib's g_spawn_async_with_pipes (which is based on pipe, fork and execve on Linux) for that, instead of fork or popen. In a GTK interactive program it is better than the usual popen, because the spawned program runs concurrently with your event loop.
You could even consider using g_source_add_unix_fd on some of the pipe fds given by pipe(2) or by g_spawn_async_with_pipes (which uses that pipe(2) call). But you might prefer g_io_channel_unix_new and g_io_add_watch.
Be aware that the GTK main loop (and Gtk Input and Event Handling Model), i.e. GtkApplication and the related g_application_run or the older gtk_main are an event loop around some multiplexing system call like poll(2) (or the older select(2)) and you probably need that loop to be aware of your pipes. When some data arrives on the pipe, you probably want to read(2) it (and then call some GtkTextBuffer insert function).
You should make design choices: do you want the GUI interface and the other process to run concurrently? Or is the other exe process always so quick and with a small output (and no input) that you might just use popen?
On current GUI applications, the event loop should run quickly (at least 30 or 50 times per second) if you want a responsive GUI app.
Look also for inspiration inside the source code of some existing free software GTK application (e.g. on github or from your Linux distro).

Implementing pipelining in C. What would be the best way to do that?

I can't think of any way to implement pipelining in C that would actually work. That's why I've decided to write in here. I have to say that I understand how pipe/fork/mkfifo work. I've seen plenty of examples implementing 2-3 pipes. It's easy. My problem starts when I have to implement a shell where the number of pipes is unknown.
What I've got now:
eg.
ls -al | tr a-z A-Z | tr A-Z a-z | tr a-z A-Z
I transform such line into something like that:
array[0] = {"ls", "-al", NULL}
array[1] = {"tr", "a-z", "A-Z", NULL}
array[2] = {"tr", "A-Z", "a-z", NULL}
array[3] = {"tr", "a-z", "A-Z", NULL}
So I can use
execvp(array[0],array)
later on.
Until now, I believe everything is OK. The problem starts when I try to redirect those commands' input/output to each other.
Here's how I'm doing that:
mkfifo("queue", 0777);

for (i = 0; i <= pipelines_count; i++) // e.g. if there are 3 pipelines, there are 4 commands to execvp
{
    int b = fork();
    if (b == 0) // child
    {
        int c = fork();
        if (c == 0)
        // baby (younger than child)
        // I use the c process to unblock desc_read and desc_write for the b process only
        // nothing executes in here
        {
            if (i == 0) // 1st pipeline
            {
                int desc_read = open("queue", O_RDONLY);
                // dup2 here, so after closing there's still something that can read
                // from desc_read
                dup2(desc_read, 0);
                close(desc_read);
            }
            if (i == pipelines_count) // last pipeline
            {
                int desc_write = open("queue", O_WRONLY);
                dup2(desc_write, 0);
                close(desc_write);
            }
            if (i > 0 && i < pipelines_count) // pipeline somewhere inside
            {
                int desc_read = open("queue", O_RDONLY);
                int desc_write = open("queue", O_WRONLY);
                dup2(desc_write, 1);
                dup2(desc_read, 0);
                close(desc_write);
                close(desc_read);
            }
            exit(0); // closing every connection between process c and the pipeline
        }
        else
        // b process here
        // in the b process, I execvp commands
        {
            if (i == 0) // 1st pipeline (changing stdout only)
            {
                int desc_write = open("queue", O_WRONLY);
                dup2(desc_write, 1); // changing stdout -> pdesc[1]
                close(desc_write);
            }
            if (i == pipelines_count) // last pipeline (changing stdin only)
            {
                int desc_read = open("queue", O_RDONLY);
                dup2(desc_read, 0); // changing stdin -> pdesc[0]
                close(desc_read);
            }
            if (i > 0 && i < pipelines_count) // pipeline somewhere inside
            {
                int desc_write = open("queue", O_WRONLY);
                dup2(desc_write, 1); // changing stdout -> pdesc[1]
                int desc_read = open("queue", O_RDONLY);
                dup2(desc_read, 0); // changing stdin -> pdesc[0]
                close(desc_write);
                close(desc_read);
            }
            wait(NULL); // waits until process c is dead
            execvp(array[0], array);
        }
    }
    else // parent (waits for 1 sub command to be finished)
    {
        wait(NULL);
    }
}
Thanks.
Patryk, why are you using a fifo, and moreover the same fifo for each stage of the pipeline?
It seems to me that you need a pipe between each stage. So the flow would be something like:
Shell:  pipe(fds);                 /* pipe 1: ls -> tr */
Shell:  fork();                    /* child 1: ls */
ls:     dup2(fds[1],1);            /* stdout of ls -> write end of pipe 1 */
ls:     close(fds[0]); close(fds[1]);
ls:     exec(...);
Shell:  close(fds[1]);             /* shell keeps fds[0], the read end */
Shell:  pipe(fds);                 /* pipe 2: tr -> tr (saving pipe 1's read end first) */
Shell:  fork();                    /* child 2: tr */
tr:     dup2(<pipe 1 read end>,0); /* stdin of tr <- pipe 1 */
tr:     dup2(fds[1],1);            /* stdout of tr -> pipe 2 */
tr:     close(fds[0]); close(fds[1]);
tr:     exec(...);
Shell:  close(fds[1]); etc.        /* repeat for the last tr */
The sequence that runs in each forked child (close, dup2, pipe etc.) could naturally be factored into a function (taking the name and parameters of the desired process). Note that up until the exec call in each, a forked copy of the shell is running.
Edit:
Patryk:
Also, is my thinking correct? Shall it work like that? (pseudocode):
start_fork(ls) -> end_fork(ls) -> start_fork(tr) -> end_fork(tr) ->
start_fork(tr) -> end_fork(tr)
I'm not sure what you mean by start_fork and end_fork. Are you implying that ls runs to completion before tr starts? This isn't really what is meant by the diagram above. Your shell will not wait for ls to complete before starting tr. It starts all of the processes in the pipe in sequence, setting up stdin and stdout for each one so that the processes are linked together, stdout of ls to stdin of tr; stdout of tr to stdin of the next tr. That is what the dup2 calls are doing.
The order in which the processes run is determined by the operating system (the scheduler), but clearly if tr runs and reads from an empty stdin it has to wait (to block) until the preceding process writes something to the pipe. It is quite possible that ls might run to completion before tr even reads from its stdin, but it is equally possible that it won't. For example, if the first command in the chain ran continually and produced output along the way, the second in the pipeline would get scheduled from time to time to process whatever the first sends along the pipe.
Hope that clarifies things a little :-)
It might be worth using libpipeline. It takes care of all the effort on your part and you can even include functions in your pipeline.
The problem is you're trying to do everything at once. Break it into smaller steps instead.
1) Parse your input to get ls -al | out of it.
1a) From this you know you need to create a pipe, move it to stdout, and start ls -al. Then move the pipe to stdin. There's more coming of course, but you don't worry about it in code yet.
2) Parse the next segment to get tr a-z A-Z |. Go back to step 1a as long as your next-to-spawn command's output is being piped somewhere.
Implementing pipelining in C. What would be the best way to do that?
This question is a bit old, but here's an answer that was never provided. Use libpipeline. libpipeline is a pipeline manipulation library. The use case is one of the man page maintainers who had to frequently use a command like the following (and work around associated OS bugs):
zsoelim < input-file | tbl | nroff -mandoc -Tutf8
Here's the libpipeline way:
pipeline *p;
int status;
p = pipeline_new ();
pipeline_want_infile (p, "input-file");
pipeline_command_args (p, "zsoelim", NULL);
pipeline_command_args (p, "tbl", NULL);
pipeline_command_args (p, "nroff", "-mandoc", "-Tutf8", NULL);
status = pipeline_run (p);
The libpipeline homepage has more examples. The library is also included in many distros, including Arch, Debian, Fedora, Linux from Scratch and Ubuntu.
