So I'm building a minishell in C (for Unix). I just figured out how to get pipelines to work, but I'm having a zombie problem. Let's say I have:
echo a | echo b | echo c
This doesn't output anything, when it should be outputting "c". If I tell my shell to execute each sub-command and then wait before moving on to the next one, it works fine. But that isn't a real solution, as I want the natural coordination between pipes that you get when you don't wait.
I'm having trouble devising an efficient way to wait on all the zombies once the last command is executed. I tried doing this after the last execution, but before the shell exits:
while(waitpid(-1, NULL, WNOHANG) > 0);
However, no luck. So far the only thing that works is having my shell execute each sub-command and then wait before starting the next one. Here's the entire main shell file:
http://pastebin.com/YV96mFy7
The main function that processes input (processline()) starts at line 105.
Thanks for the help, if you guys need anything more just ask.
Change this
while(waitpid(-1, NULL, WNOHANG) > 0);
To:
while(wait(NULL) > 0);
/* which is equivalent to */
while(waitpid(-1, NULL, 0) > 0);
This will cause the parent process to wait for all child processes to finish. If you don't wish to block the parent process, catch SIGCHLD and call wait() in the signal handler instead.
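As a minimal sketch of that non-blocking variant (the handler and setup function names are my own, not from the linked code):

#include <signal.h>
#include <sys/wait.h>

/* Hypothetical handler: reap every finished child without blocking. */
static void reap_children(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

/* Call once during shell initialization. */
static void install_sigchld_handler(void)
{
    struct sigaction sa;
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    sigaction(SIGCHLD, &sa, NULL);
}

That way the WNOHANG loop only runs when a child has actually changed state, and the shell itself is never blocked.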
Related
I am creating an application in C in which I have to launch Firefox with the execlp command, but every time I execute it I "lose" my current terminal. After the execlp I still need to use the terminal I was in before, so my question is: is there a way, while in one terminal, to call execlp and have it execute in another one without blocking the one I am on?
Here is a snippet of my code:
pid_t child = fork();
if (child == -1) {
    perror("fork error");
} else if (child == 0) {
    exec_pid = getpid();
    execlp("firefox", "firefox", URL, NULL);
    perror("exec error");
}
// keep with program logic
If I'm understanding you correctly, you're saying that your program launches Firefox and then keeps control of your shell until Firefox terminates. If this is the case, there are a couple of ways around this.
The easiest solution is to run your program in the background. Execute it like ./my_program & and it will be launched in a separate process, with control of your terminal returned to you immediately.
If you want to solve this from your C code, the first step would be to print out the process ID of the child process after the fork. In a separate shell, use ps to monitor both your program and the forked PID. Ensure that your program is actually terminating and that it's not just stuck waiting on something.
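As a rough sketch of that (launch_firefox is a hypothetical helper of mine; URL is taken from your snippet), the parent can report the PID and simply continue without waiting:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: launch firefox and return immediately. */
static pid_t launch_firefox(const char *url)
{
    pid_t child = fork();
    if (child == -1) {
        perror("fork error");
    } else if (child == 0) {
        execlp("firefox", "firefox", url, (char *)NULL);
        perror("exec error");
        _exit(127);                  /* only reached if execlp fails */
    } else {
        /* Parent: report the PID so you can watch it with ps, then move on. */
        printf("launched firefox as pid %d\n", (int)child);
    }
    return child;                    /* note: no waitpid() here */
}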
I am trying to make my own shell for my school homework. After a successful fork() call, I want to put the PID that comes from fork() into the foreground and put my own shell into the background. Then, after the waitpid() call, I need to bring my own shell into the foreground again. I tried it like this:
if (tcsetpgrp(0, getpgid(pid)) != 0)
    perror("Foreground error: ");
waitpid(pid, NULL, 0);
if (tcsetpgrp(0, getpgid(shellpid)) != 0)
    perror("Foreground error: ");
But after the new process finishes, the Linux shell stops my own shell.
For instance, the ls command is the new process in the linked terminal screenshot.
adding "signal(SIGTTOU, SIG_IGN);" before tcsetpgrp solved my problem. – Ali Can Üstünel
I am writing a basic shell program for a university assignment, and I need to test for when the user enters the string "exit". When this happens the program should quit.
I can test for this successfully, but if I have forked new processes that have dealt with a strerror in my program, I have to keep entering exit once for each process that is active at that time.
Is there a way of exiting all processes associated with the program under this condition?
Cheers.
As said in the comments, you should not spawn interactive processes in the background (at the very least, how would your shell and your command share the only stdin?).
Also, as a shell you should keep track of all spawned (background) processes so that you are able to catch their return codes, as done in sh/bash (at least). For example, in bash:
> sleep 1 &
[1] 8215
>
(1 sec later)
[1]+ Terminated sleep 1
So if you have the list of existing children you can send SIGINT/SIGKILL to all of them.
In any case, if you really want to be sure to kill everyone, you should use process group (PG) killing. Calling the kill() function with PID=0 sends the signal to all processes in the same process group as you.
So you can start your shell by setting a new process group (to be sure not to kill something else), and this PG will be inherited by your children (unless a child sets a new PG of its own, of course).
This would look like:
// at the beginning of your main code
// try to get a new process group for me
int x = setpgid(0, 0);
if (x == -1) {
    perror("setpgid");
    exit(1);
}
(…)
// here you're about to exit from main, just kill
// all members of your group
kill(0, SIGINT);  // send an INT signal
kill(0, SIGKILL); // paranoid: if a child catches INT it will still get a KILL
// now you can exit, but you're probably dead, because you
// also receive the SIGINT. If you want to survive you have to
// catch SIGINT, but you cannot catch KILL no matter what
If you need to survive the kill, you can catch the signal using signal() or, better, sigaction(), so that you will not be killed and will be able to perform other before-exit actions.
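A minimal sketch of that (the handler name and flag variable are my own choices):

#include <signal.h>

static volatile sig_atomic_t got_sigint = 0;

/* Hypothetical handler: just note the signal, so the shell survives its own
 * kill(0, SIGINT) and can finish its cleanup before exiting. */
static void on_sigint(int sig)
{
    (void)sig;
    got_sigint = 1;
}

static void install_sigint_handler(void)
{
    struct sigaction sa;
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);
}

This only protects against the SIGINT; as noted above, SIGKILL cannot be caught, so skip the second kill() if the shell itself must keep running.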
I first used mpg123 remotely through FIFOs to implement pause functionality, but now I want to close the mpg123 player once the file has played through on its own.
The code for playing the current file is:
pid_t p = fork();
if (p < 0)
    return;
else if (p == 0)
    execlp("mpg123", "mpg123", "-R", "--fifo", "aFifo", NULL);
else
    system("load test.mp3 > aFifo");
Currently, even after the file has played through, the child branch (else if (p==0)) stays there and the mpg123 player process continues to exist.
You have no "?" in your question, but anyway:
Your code looks wrong, because system() uses fork and exec under the hood. So instead of one fork and one exec, you end up forking three times and exec-ing twice.
Read how to run a process here: how to correctly use fork, exec, wait
Once you run the process the proper way, you have the real PID of mpg123, so you can kill it if you want, or pause it, or whatever else you want.
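A rough sketch of that approach, reusing the mpg123 flags and FIFO name from your snippet (it assumes aFifo already exists, e.g. created with mkfifo beforehand, and that start_player/send_command are hypothetical helpers, not part of your code):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Start mpg123 in remote mode ourselves and return its real PID. */
static pid_t start_player(void)
{
    pid_t player = fork();
    if (player < 0) {
        perror("fork");
        return -1;
    }
    if (player == 0) {
        /* Child: becomes mpg123; nothing below runs if exec succeeds. */
        execlp("mpg123", "mpg123", "-R", "--fifo", "aFifo", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    return player;   /* parent: this is mpg123's PID */
}

/* Write a remote command into the FIFO instead of shelling out. */
static void send_command(const char *cmd)
{
    FILE *ctl = fopen("aFifo", "w");   /* blocks until mpg123 has the FIFO open */
    if (ctl != NULL) {
        fprintf(ctl, "%s\n", cmd);
        fclose(ctl);
    }
}

int main(void)
{
    pid_t player = start_player();
    if (player < 0)
        return 1;
    send_command("load test.mp3");
    /* ...later, when the player should go away: */
    kill(player, SIGTERM);
    return 0;
}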
I want to launch a process from within my C program, but I don't want to wait for that program to finish. I can launch that process OK using system(), but that always waits. Does anyone know of a 'non-blocking' version that will return as soon as the process has been started?
[Edit - Additional Requirement] When the original process has finished executing, the child process needs to keep on running.
One option is, in your system() call, to do this:
system("ls -l &");
The & at the end of the command tells the shell to run the task you've launched in the background, so system() returns as soon as the command has been started.
Why not use fork() and exec(), and simply don't call waitpid()?
For example, you could do the following:
// ... your app code goes here ...
pid = fork();
if( pid < 0 )
    // error out here!
if( !pid && execvp( /* process name, args, etc. */ ) == -1 )
    // error in the child proc here!
// ...parent execution continues here...
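Fleshed out with a concrete command (ls here, purely for illustration), that pattern might look like this:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "ls", "-l", NULL };   /* whatever you want to launch */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: replace ourselves with the new program. */
        execvp(argv[0], argv);
        perror("execvp");                  /* only reached if exec failed */
        _exit(127);
    }

    /* Parent: no waitpid(), so we carry on immediately. The child becomes
     * a zombie when it exits until it is reaped (e.g. by a SIGCHLD handler)
     * or until this process itself exits. */
    printf("spawned pid %d, carrying on...\n", (int)pid);
    return 0;
}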
The normal way to do it (and, in fact, you shouldn't really use system() anymore) is popen().
This also allows you to read from or write to the spawned process's stdout/stdin.
Edit: see popen2() if you need to both read and write - thanks quinmars
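For instance, a minimal popen() sketch that reads the spawned command's output (the command here is just an example):

#include <stdio.h>

int main(void)
{
    /* popen() starts the command and returns right away with a stream
     * connected to its stdout ("r") or its stdin ("w"). */
    FILE *p = popen("ls -l", "r");
    if (p == NULL) {
        perror("popen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, p) != NULL)
        fputs(line, stdout);

    /* pclose() is the part that waits; it returns the command's status. */
    int status = pclose(p);
    printf("command exited with status %d\n", status);
    return 0;
}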
You could use the posix_spawnp() function. It's much closer to system() than the fork and exec* combination, but it's non-blocking.
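A small sketch of that (again, the command is only an example):

#include <spawn.h>
#include <stdio.h>
#include <sys/types.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    /* posix_spawnp() returns as soon as the child has been created; it does
     * not wait for the command to finish the way system() does. */
    int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawnp failed: %d\n", err);
        return 1;
    }

    printf("spawned pid %d, not waiting for it\n", (int)pid);
    /* waitpid(pid, ...) could be called later if the exit status matters. */
    return 0;
}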
In the end, this code appears to work. It's a bit of a mish-mash of the above answers:
pid_t pid = fork();
if (!pid)
{
    system("command here &");
}
exit(0);
Not quite sure why it works, but it does what I'm after. Thanks to everyone for your help.
How about using the timeout command if you want your command to exit after a specific time:
Ex: system("timeout 5 your command here"); // kills the command after 5 seconds if it has not completed