Execute commands in background - c

I am implementing a minishell that will emulate a real bash shell. I am stuck on the execution of commands in the background, such as ls &.
My first approach was the following (which does not work)
char *execArgs[] = { "ls", "&", NULL };
execvp("ls", execArgs);
Then, I tried another way: modifying the parent process after the fork() so that it does not wait for the child when the command should run in the background. The problem there is that the shell should also print the list of jobs running in the background, to simulate the background behaviour of a bash shell, but the command jobs is not executed correctly as a parameter of execvp().
My question is, is there any easier way to implement these background calls in C? If there isn't, what is failing in each of the options I have mentioned?

The reasons your example fails are:
The "&" as you have it coded is an argument to the "ls" program. When you enter the command at a shell prompt, the "&" is consumed by the shell and the "ls" program never sees the backgrounding.
The "exec()" call replaces the current program image (i.e., your mini-shell).
What you may want is
system("ls &");
Read a good Unix book. You will need to know fork(), exec(), wait() and more.
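Roughly, the fork()/exec()/wait() route looks like this (a minimal sketch, not full job control; error handling is abbreviated, and finished background children still need to be reaped later, e.g. with waitpid(-1, ..., WNOHANG) or a SIGCHLD handler):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Launch argv[0]; wait for it only if background == 0. */
void launch(char *const argv[], int background)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
    } else if (pid == 0) {
        execvp(argv[0], argv);          /* note: no "&" in argv */
        perror("execvp");
        _exit(127);
    } else if (!background) {
        waitpid(pid, NULL, 0);          /* foreground: wait for the child */
    }
    /* background: do not wait here */
}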

Related

Logic to determine whether a "prompt" should be printed out

Seems like a basic idea: I want to print out a prompt for a mini shell I am making in C. Here is an example of what I mean for the prompt:
$ ls
The $ being the "prompt". This little mini shell I am making supports backgrounding a process via the normal bash notation of putting an & symbol at the end of a line, like $ ls &.
My current logic is that in my main command loop, if the process is not going to be backgrounded, I print out the prompt:
if (isBackground == 0)
    prompt();
And then in my signal handler I print out the prompt using write(), which covers the case of it being a background process.
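A stripped-down sketch of that handler, just to show the shape of it (the handler name and the reaping loop are illustrative):
#include <sys/wait.h>
#include <unistd.h>

static void sigchld_handler(int sig)
{
    (void)sig;
    /* Reap finished background children without blocking.        */
    /* (A real handler should also save and restore errno.)       */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
    write(STDOUT_FILENO, "$ ", 2);    /* write() is async-signal-safe */
}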
This works fine if the background command returns right away, like a quick $ ls &, but with something like $ sleep 10 & the shell looks like it is blocked, because the prompt will not be printed until the signal handler runs.
I can't figure out how to fix this because I don't know when the background process will end, which means the signal handler somehow has to decide when to print the new prompt; if the background process happened to produce output, it would print that output and then there would no longer be a prompt.
How can I resolve this problem? Is there a better way to do this that I'm not thinking of that could resolve my problem?

What does execvp actually do? [duplicate]

Possible Duplicate:
Writing a shell - how to execute commands
I've been tasked with writing a shell in C. So far I understand that execvp will try to run the program in arg1 with arg2 as parameters. Now it seems that doing this
execvp ("ls", args); //assume args is {"ls", ">", "awd.txt"}
is not equivalent to typing this in the console
ls > awd.txt
I realize that I need to use freopen to achieve the same results but I'm curious what execvp is doing.
The exec family of functions is ultimately a system call. System calls go straight into the kernel and typically perform a very specific service that only the kernel can do.
Redirection, on the other hand, is a shell feature.
So, when one types ls > awd.txt at a shell, the shell first does a fork(2), then it closes standard output in the child, then it opens awd.txt on file descriptor one so that it's the new standard output.
Then, and only then, the shell will make an exec-family system call.
In your case you simply passed the strings > and awd.txt to the exec system call, and from there to ls. BTW, be sure you terminate your execvp arg array with a null pointer.
Note: As you can see, the redirection operators are never seen by the executed program. Before Unix, directing output to a file had to be done by every program based on an option. More trivia: most programs never know they were redirected, but ironically, ls does check to see if its output is a tty, and if so, it does the multi-column formatted output thing.
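In code, the sequence described above looks roughly like this (a bare-bones sketch of what the shell does for ls > awd.txt; error checking omitted):
#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                                   /* child */
        close(STDOUT_FILENO);                         /* free descriptor 1 */
        open("awd.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* lowest free fd is now 1 */
        char *const args[] = { "ls", NULL };          /* no ">" or "awd.txt" here */
        execvp("ls", args);
        _exit(127);                                   /* only reached if execvp fails */
    }
    waitpid(pid, NULL, 0);                            /* the shell waits for ls */
    return 0;
}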
It's executing ls with 2 arguments: > and awd.txt. It is equivalent to running:
'ls' '>' 'awd.txt'
You can pass your command directly to the shell:
char * const args[] = { "sh", "-c", "ls > awd.txt", NULL};
execvp("/bin/sh", args);
But it doesn't seem like a good idea.

how to run a command on different terminal using fork() and execlp()

I have written a program using NCURSES in which I display a menu on one terminal, and I want to use fork() and execlp() in the same program, but whatever command I run using fork() and execlp() has to be executed on a different terminal or in the background. How can that be done? I am simply using
if (fork())
    wait(0);
else
    execlp("ls", "ls", (char *)NULL);
within a conditional statement that displays a message on the main terminal and should execute the command inside execlp() in the background.
You probably need to start a new terminal, and hand it the command to run.
If you look at the command line parameters of e.g. gnome-terminal you can figure out how to format the command line.
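For example, with xterm (assuming it is installed; gnome-terminal takes different options, so check its man page), the child could exec the terminal emulator and hand it the command via -e:
if (fork() == 0) {
    /* -hold keeps the window open after ls exits; -e runs the given command */
    execlp("xterm", "xterm", "-hold", "-e", "ls", (char *)NULL);
    _exit(127);    /* only reached if execlp fails */
}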

Fork and wait - how to wait for all grandchildren to finish

I am working on an assignment to build a simple shell, and I'm trying to add a few features that aren't required yet, but I'm running into an issue with pipes.
Once my command is parsed, I fork a process to execute the commands. That process runs a subroutine that executes the command if there is only one left; otherwise it forks again: the parent executes the first command and the child processes the rest. Pipes are set up and work correctly.
My main process then calls wait(), and then outputs the prompt. When I execute a command like ls -la | cat, the prompt is printed before the output from cat.
I tried calling wait() once for each command that should be executed, but the first call works and all successive calls return ECHILD.
How can I force my main thread to wait until all children, including children of children, exit?
You can't. Either make your child process wait for its children, and don't exit until they've all been waited for, or fork all the children from the same process.
See this answer on how to wait() for child processes: How to wait until all child processes called by fork() complete?
There is no way to wait for a grandchild; you need to implement the wait logic in each process. That way, each child will only exit after all its children have exited (and that then includes all grandchildren recursively).
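A sketch of that per-process wait logic (an illustrative helper; every process that forked something calls this before it exits):
#include <sys/wait.h>

/* Block until all of this process's direct children have exited. */
static void reap_all_children(void)
{
    while (wait(NULL) > 0)
        ;    /* wait() returns -1 (ECHILD) once no children are left */
}
If each process in the cascade does this before it exits, the single wait() in the main shell process only returns after the whole tree below it has finished.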
Since you are talking about grandchildren, you are obviously spawning the children in a cascading manner. That's a possible way to implement a pipe.
But keep in mind that the value returned by your pipe (the one you get when doing echo $? in your terminal) is the one returned by the right-most command.
This means that you need to spawn the children from right to left in this cascading implementation. You don't want to lose that return value.
Now, assuming we are only talking about builtin commands for the sake of simplicity (no extra calls to fork() and execve() are made), an interesting fact is that in some shells like "zsh", the right-most command is not even forked. We can see that with a simple piped command like:
export stack=OVERFLOW | export overflow=STACK
Running the command env afterwards, we can see that overflow=STACK persists in the environment variables. This shows that the right-most command was not executed in a subshell, whereas export stack=OVERFLOW was.
Note: This is not the case in a shell like "sh".
Now let's use a basic piped command to illustrate a possible logic for this cascading implementation.
cat /dev/random | head
Note: Even though cat /dev/random is supposedly a never-ending command, it will stop as soon as head is done reading the first line output by cat /dev/random. This is because stdin is closed when head is done, and cat /dev/random aborts because it is writing into a broken pipe.
LOGIC:
The parent process (your shell) sees that there is a pipe to execute. It will then fork a child. The parent stays your shell; it will wait for the child to return and store the returned value.
In the context of the first-generation child (trying to execute the right-most command of the pipe):
It sees that its command is not the only one left in the pipeline, so it will fork() again (what I call the "cascading implementation").
Now that the fork is done, the parent process first executes its own task (head -1), then closes its stdin and stdout, and then wait()s for its child. It is really important to close stdin and stdout first and only then call wait(): closing stdout sends EOF to the parent if it is reading on stdin, and closing stdin makes sure any grandchild trying to write into the pipe aborts with a "broken pipe" error.
In the context of the grandchild:
It sees that it is the last command of the pipe, so it just executes the command and returns its value (closing its stdin and stdout).
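A condensed sketch of that logic, keeping the builtin-only assumption from above (run_builtin() is a hypothetical placeholder for executing one command in-process; this whole function runs in the child the shell forked, and the shell itself only waits for that child):
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical placeholder: run one command of the pipeline in-process   */
/* (in a real shell this would dispatch to the builtin implementation).   */
static int run_builtin(const char *cmd) { (void)cmd; return 0; }

/* Sketch of: cat /dev/random | head -1, spawned right to left. */
int run_pipeline_child(void)
{
    int fd[2];
    pipe(fd);

    pid_t grandchild = fork();
    if (grandchild == 0) {                 /* grandchild: left command, writes into the pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]);
        close(fd[1]);
        _exit(run_builtin("cat /dev/random"));
    }

    /* first-generation child: right-most command, reads from the pipe */
    dup2(fd[0], STDIN_FILENO);
    close(fd[1]);                          /* the grandchild is the only writer left */

    int status = run_builtin("head -1");   /* 1) do our own task first...              */

    close(fd[0]);                          /* 2) ...then close our ends of the pipe,   */
    close(STDIN_FILENO);                   /*    so the writer gets "broken pipe"...   */
    waitpid(grandchild, NULL, 0);          /* 3) ...and only then wait(): no hang.     */

    return status;                         /* the pipeline reports the right-most status */
}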

Safe version of popen()?

I use fork()/exec()/wait() rather than system() when the command has user input as some of its arguments so the user can't put something like...
&rm -rf /home/* && echo HAHA
... as an argument.
I'm assuming popen is as dangerous as system() because it takes a single string and not a list of strings like the exec family of functions do.
I can only get the return value from the exec functions though. Is there a "safe" version of popen that I can run with user input and process stdout / stderr back in the parent process?
The safe way is to set up the necessary pipes yourself, using straight pipe() calls directly.
That's what popen() does under the hood, but it also invokes the shell in order to run the child process. Skipping that step should make it safer.
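A sketch of that approach (argv-style, no shell involved; error handling trimmed, and only stdout is captured):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run argv[0] with the given argument vector and read its stdout.        */
/* The user-supplied arguments are passed as-is, never through a shell,   */
/* so characters like '&' or ';' have no special meaning.                 */
int run_and_capture(char *const argv[])
{
    int fd[2];
    pipe(fd);

    pid_t pid = fork();
    if (pid == 0) {                         /* child */
        dup2(fd[1], STDOUT_FILENO);         /* child's stdout goes into the pipe */
        close(fd[0]);
        close(fd[1]);
        execvp(argv[0], argv);
        _exit(127);
    }

    close(fd[1]);                           /* parent keeps only the read end */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);  /* process the output here */
    close(fd[0]);

    int status;
    waitpid(pid, &status, 0);
    return status;
}
Called with something like { "ls", "-la", userdir, NULL }, the untrusted string userdir can only ever reach ls as a single argument.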

Resources