How to know if a command given to execlp() exists?

I've searched quite a lot, but I still don't have an answer for this. I've got a program that creates other processes by asking the user for the desired command and then using execlp to run it. I want to know whether there's an easy way for the parent process to find out if the command was executed, or if the given command doesn't exist.
I have the following code:
if (executarComando(comando) != OK)
    fprintf(stderr, "Could not execute this command.\n");
where executarComando is:
int executarComando(char *cmd)
{
    if (execlp("xterm", "xterm", "-hold", "-e", cmd, (char *)NULL) == ERROR) // execlp only returns on failure
        return ERROR;
    return OK;
}

Your problem is that your execlp always succeeds; it's running xterm, not the command you're passing to the shell xterm runs. You will need to add some kind of communication channel between your program and this shell so that you can communicate back success or failure. I would do something like replacing the command with
( command ) 99>&- ; echo $? >&99
Then, open a pipe before forking to call execlp, and in the child, use dup2 to make file descriptor number 99 a copy of the write end of the pipe. Now you can read the exit status of the command back across the pipe.
Just hope xterm doesn't go closing all file descriptors on you; otherwise you're out of luck and you'll have to make a temporary fifo (via mkfifo) somewhere in the filesystem to achieve the same result.
Note that the number 99 was arbitrary; anything other than 0, 1, or 2 should work.
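For concreteness, here is a minimal sketch of that scheme, assuming the wrapped command is handed to a shell inside xterm via sh -c; the function name and buffer size are invented, and (as warned above) this only works if xterm leaves fd 99 open for the command it spawns:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical wrapper: returns the command's exit status,
   or -1 if no status came back across the pipe. */
int run_in_xterm(const char *cmd)
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1)
        return -1;

    if (pid == 0) {                     /* child: becomes xterm */
        close(pfd[0]);                  /* drop the read end */
        dup2(pfd[1], 99);               /* expose the write end as fd 99 */
        if (pfd[1] != 99)
            close(pfd[1]);

        char wrapped[1024];
        /* run cmd with fd 99 closed, then report $? on fd 99 */
        snprintf(wrapped, sizeof wrapped,
                 "( %s ) 99>&- ; echo $? >&99", cmd);
        execlp("xterm", "xterm", "-hold", "-e", "sh", "-c", wrapped,
               (char *)NULL);
        _exit(127);                     /* exec of xterm itself failed */
    }

    close(pfd[1]);                      /* parent keeps the read end */
    char buf[16];
    ssize_t n = read(pfd[0], buf, sizeof buf - 1);
    close(pfd[0]);
    /* with -hold, xterm lingers until the user closes it; reap the
       child later (e.g. via SIGCHLD) rather than blocking here */
    if (n <= 0)                         /* no status came back */
        return -1;
    buf[n] = '\0';
    return atoi(buf);                   /* the command's exit status */
}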

There's no trivial way; a convention often used is that the fork()ed child will report the error and exit(-1) (or exit(255)) in the specific case where the exec() fails, and most commands avoid using that for their own failure modes.
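A short sketch of that convention, for the sake of concreteness:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        execlp("no-such-command", "no-such-command", (char *)NULL);
        _exit(255);            /* reached only when execlp() fails */
    }
    int status;
    waitpid(pid, &status, 0);
    /* 255 could also be the command's own exit code, hence "convention" */
    if (WIFEXITED(status) && WEXITSTATUS(status) == 255)
        fprintf(stderr, "the command could not be executed\n");
    return 0;
}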

Controlling terminal and GDB

I have a Linux process running in the background. I want to take over its stdin/stdout/stderr over SSH and also become its controlling terminal. The "original" file descriptors are pseudo-terminals, too.
I have tried Reptyr and dupx. Reptyr fails around vfork, but dupx works very well. The GDB script it generated:
attach 123
set $fd=open("/dev/pts/14", 0)
set $xd=dup(0)
call dup2($fd, 0)
call close($fd)
call close($xd)
set $fd=open("/dev/pts/14", 1089)
set $xd=dup(1)
call dup2($fd, 1)
call close($fd)
call write($xd, "Remaining standard output of 123 is redirected to /dev/pts/14\n", 62)
call close($xd)
set $fd=open("/dev/pts/14", 1089)
set $xd=dup(2)
call dup2($fd, 2)
call close($fd)
call write($xd, "Remaining standard error of 123 is redircted to /dev/pts/14\n", 60)
call close($xd)
As soon as the dupx command finishes, the shell does not get the terminal back and the target app immediately receives my input (via pts/14).
Now I want to achieve the same result using my own standalone binary application. I've ported the same syscalls (dup/dup2/close, etc.) that the GDB script generated by dupx executes:
/* requires <fcntl.h>, <string.h>, <unistd.h> */
int fd, xd;
char *s = "Remaining standard output is redirected to new terminal\n";

fd = open(argv[1], O_RDONLY);
xd = dup(STDIN_FILENO);
dup2(fd, STDIN_FILENO);
close(fd);
close(xd);

fd = open(argv[1], O_WRONLY | O_CREAT | O_APPEND, 0644); /* O_CREAT needs a mode argument */
xd = dup(STDOUT_FILENO);
dup2(fd, STDOUT_FILENO);
close(fd);
write(xd, s, strlen(s));
close(xd);

fd = open(argv[1], O_WRONLY | O_CREAT | O_APPEND, 0644);
xd = dup(STDERR_FILENO);
dup2(fd, STDERR_FILENO);
close(fd);
write(xd, s, strlen(s));
close(xd);
Running the snippet above is done by injecting a shared library into the remote process via SIGSTOP/ptrace attach/dlopen/etc. (using a tool similar to hotpatch). Let's consider this part of the problem safe and working reliably: after doing all this, the file descriptors of the target process are changed as I wanted. I can verify it simply by checking /proc/`pidof target`/fd.
However, the shell prompt returns, and the shell still receives all my input, not the target app.
I noticed that if I simply attach/detach with gdb after this point (i.e. after the fds were changed by the injected C code), without actually changing anything, the desired behavior is achieved (meaning: the shell does not get the terminal back, and the target app starts receiving my input). The command is:
gdb --pid=`pidof target` --batch --ex=quit
And now my question is: how? What happens in the background? How can I do the same without gdb? I've tried stracing gdb to get some hints, and I've also tried playing with the tty ioctl APIs, without any luck.
Please note that obtaining controlling-terminal status (if that is the key to this problem at all) via the fork/setsid approach that Reptyr uses is not acceptable for me: I want to avoid forking.
Additionally, I can't control how the target is started, so "why don't you run it in screen" is no answer here.
I have SSH access; that's where pts/14 came from. The shell and the target app might be competing, but I've never experienced such behaviour; dupx always did what I wanted in this scenario.
Well, sitting and wondering why a known problem happened not to show up in the past won't solve it, even if that point were clarified. The way to go is to make it work by design rather than by accident. For this purpose, your standalone binary application must not return to the shell (to avoid concurrent reading of input) while the input is supposed to go to the target app.
See also, e.g., Redirect input from one terminal to another, and Why does tapping a TTY device only capture every other character?
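As a crude sketch of that design point only (it does not transfer controlling-terminal status, which is presumably what the gdb attach/detach side effect touches): after the injection step has redirected the target's descriptors, the helper binary can simply block until the target goes away instead of returning to the shell, so the shell never competes for input on the tty. target_pid is assumed to come from the injection step.
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* kill(pid, 0) sends no signal; it only probes whether the process
   still exists (EPERM also means "it exists"). */
static void wait_for_target(pid_t target_pid)
{
    while (kill(target_pid, 0) == 0 || errno == EPERM)
        sleep(1);
}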

Fork and dup2 - Child process is not terminating - Issues with file descriptors?

I am writing my own shell for a homework assignment, and am running into issues.
My shell program gets the input cat scores | grep 100 from the console and prints the output as expected, but the grep command doesn't terminate, and I can see it running indefinitely in the output of ps.
EDIT: There was an error while closing fds. Now the grep command does not execute, and the console output is:
grep: (standard input): Bad file descriptor
I am reading the number of commands from the console and creating necessary pipes and storing them in a two dimensional int array fd[][] before forking the first process.
fd[0][0] will contain read end of 1st pipe and fd[0][1] will contain write end of 1st pipe. fd[1][0] will contain read end of 2nd pipe and fd[1][1] will contain write end of 2nd pipe and so on.
Each new process duplicates its stdin with the read end of its pipe with the previous process and duplicates its stdout with the write end of its pipe with the next process.
Below is my function:
void run_cmds(char **args, int count, int pos)
{
    int status;
    pid_t pid = fork();
    if (pid == 0)
    {
        if (pos != 0) dup2(fd[pos-1][0], 0); // not changing stdin for 1st process
        if (pos != count) dup2(fd[pos][1], 1); // not changing stdout for last process
        close_fds(pos);
        execvp(*args, args);
    }
    else
    {
        waitpid(pid, &status, 0);
        count--;
        pos++;
        // getting next command and storing it in args
        if (count > 0)
            run_cmds(args, count, pos);
    }
}
args will contain the arguments for the command.
count is the number of commands I need to create.
pos is the position of the command in the input.
I am not able to figure out the problem. I used this same approach for hard coded values before this and it was working.
What am I missing in my understanding/implementation of dup2/fork, and why does the command wait indefinitely?
Any input would be greatly helpful. I've been stuck on this for the past couple of days!
EDIT: The close_fds() function is below.
For any process, I am closing both of the pipes attached to it.
void close_fds(int pos)
{
    if (pos != 0)
    {
        close(fd[pos-1][0]);
        close(fd[pos-1][1]);
    }
    if (pos != count)
    {
        close(fd[pos][0]);
        close(fd[pos][1]);
    }
}
First diagnosis
You say:
Each new process duplicates its stdin with the read end of its pipe with the previous process and duplicates its stdout with the write end of its pipe with the next process.
You don't mention the magic word close().
You need to ensure that you close both the read and the write end of each pipe when you use dup() or dup2() to connect it to standard input. That means with 2 pipes you have 4 calls to close().
If you don't close the pipes correctly, the process that is reading won't get EOF (because there's a process, possibly itself, that could write to the pipe). It is crucial to have enough (not too few, not too many) calls to close().
I am calling close_fds() after dup2 calls. The function will go through the fd[][2] array and do a close() call for each fd in the array.
OK. That is important. It means my primary diagnosis probably wasn't spot on.
Second diagnosis
Several other items:
You should have code after the execvp() that reports an error and exits if the execvp() returns (which means it fails).
You should not immediately call waitpid(). All the processes in a pipeline should be allowed to run concurrently. You need to launch all the processes, then wait for the last one to exit, cleaning up any others as they die (but not necessarily worrying about everything in the pipeline exiting before continuing). See the sketch after this list.
If you do force the first command to execute in its entirety before launching the second, and if the first command generates more output than will fit into the pipe, you will have a deadlock — the first process can't exit because it is blocked writing, and the second process can't be started because the first hasn't exited. Interrupts and reboots and the end of the universe will all solve the problem somewhat crudely.
You decrement count as well as incrementing pos before you recurse. That might be bad. I think you should just increment pos.
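A hedged sketch of that restructuring, assuming count is the number of commands, fd[][] is the pipe array from the question, and argvs[i] is the argument vector of stage i; close_all_pipe_fds() is a hypothetical helper, sketched under "Third diagnosis" below:
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

extern int fd[][2];             /* pipe array from the question */
void close_all_pipe_fds(void);  /* hypothetical; see below */

void run_pipeline(char ***argvs, int count)
{
    int i;
    for (i = 0; i < count; i++) {          /* launch every stage first... */
        pid_t pid = fork();
        if (pid == 0) {
            if (i != 0)
                dup2(fd[i-1][0], 0);       /* stdin from the previous pipe */
            if (i != count - 1)
                dup2(fd[i][1], 1);         /* stdout to the next pipe */
            close_all_pipe_fds();          /* every fd in fd[][] */
            execvp(argvs[i][0], argvs[i]);
            perror(argvs[i][0]);           /* report exec failure */
            _exit(127);
        }
    }
    close_all_pipe_fds();                  /* the parent must close them too */
    while (wait(NULL) > 0)                 /* ...then reap all of them */
        ;
}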
Third diagnosis
After update showing close_fds() function.
I'm back to "there are problems with closing pipes" (though the waiting and error reporting problems are still problems). If you have 6 processes in a pipeline and all 5 connecting pipes are created before any processes are run, each process has to close all 10 pipe file descriptors.
Also, don't forget that if the pipes are created in the parent shell, rather than in a subshell that executes one of the commands in the pipeline, then the parent must close all the pipe descriptors before it waits for the commands to complete.
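For completeness, a sketch of such a close-everything helper, assuming the pipes live in a global fd[][] and npipes is the number of pipes created (one fewer than the number of commands):
#include <unistd.h>

extern int fd[][2];    /* pipe array from the question */
extern int npipes;     /* assumed: number of pipes created */

void close_all_pipe_fds(void)
{
    int i;
    for (i = 0; i < npipes; i++) {
        close(fd[i][0]);   /* read end of pipe i */
        close(fd[i][1]);   /* write end of pipe i */
    }
}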
Please manufacture an MCVE (How to create a Minimal, Complete, and Verifiable Example?) or SSCCE (Short, Self-Contained, Correct Example) — two names and links for the same basic idea.
You should create a program that manufactures the data structures that you're passing to the code that invokes run_cmds(). That is, you should create whatever data structures your parsing code creates, and show the code that creates the pipe or pipes for the 'cat scores | grep 100' command.
I am no longer clear how the recursion works — or whether it is invoked in your example. I think it is, in fact, unused in your example, which is probably just as well, since you would end up with the same command being executed multiple times, AFAICS.
Most probable reasons why grep doesn't terminate:
You don't call waitpid with the proper PID (even though there is such a call in your code, it may not get executed for some reason), so grep becomes a zombie process. Maybe your parent shell process is waiting for another process first (infinitely, because the other one never terminates), and it doesn't call waitpid with the PID of grep. You can find Z in the output of ps if grep is a zombie.
grep doesn't receive an EOF on its stdin (fd 0) because some process is keeping the write end of its pipe open. Have you closed all the file descriptors in the fd array in the parent shell process? If they are not closed everywhere, grep will never receive an EOF, and it will never terminate, because it will be blocked (forever) waiting for more data on its stdin.

execvp/fork -- how to catch unsuccessful executions?

Right now I'm writing a C program that must execute a child process. I'm not doing multiple child processes simultaneously or anything, so this is fairly straightforward. I am definitely executing the built-in shell programs (i.e. things like cat and echo) successfully, but I also need to be able to tell when one of these programs fails to execute successfully. I'm trying this with the following simplified code:
int returnStatus; // The return status of the child process.
pid_t pid = fork();
if (pid == -1) // error with forking.
{
    // Not really important for this question.
}
else if (pid == 0) // We're in the child process.
{
    execvp(programName, programNameAndCommandsArray); // vars declared above fork().
    // If this code executes, the exec has failed.
    exit(127); // This exit code was taken from an exec tutorial -- why 127?
}
else // We're in the parent process.
{
    wait(&returnStatus); // Wait for the child process to exit.
    if (returnStatus == -1) // The child process execution failed.
    {
        // Log an error of execution.
    }
}
So for example, if I try to execute rm fileThatDoesntExist.txt, I would like to consider that a failure since the file didn't exist. How can I accomplish this? Also, while that execvp() call successfully executes built-in shell programs, it doesn't execute programs in the current directory of the executable (i.e. the program that this code is running inside of); is there something else that I have to do in order to get it to run programs in the current directory?
Thanks!
This is a classic problem with a very elegant solution. Before forking, create a pipe in the parent. After fork, the parent should close the writing end of the pipe, and block attempting to read from the reading end. The child should close the reading end and set the close-on-exec flag, using fcntl, for the writing end.
Now, if the child calls execvp successfully, the writing end of the pipe will be closed with no data, and read in the parent will return 0. If execvp fails in the child, write the error code to the pipe, and read in the parent will return nonzero, having read the error code for the parent to handle.
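A sketch of that technique (spawn_checked is a made-up name; it returns 0 if the exec succeeded, and -1 after reporting the child's errno otherwise):
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int spawn_checked(char *const argv[])
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1)
        return -1;

    if (pid == 0) {                          /* child */
        close(pfd[0]);
        fcntl(pfd[1], F_SETFD, FD_CLOEXEC);  /* closed for us on successful exec */
        execvp(argv[0], argv);
        int err = errno;                     /* exec failed: send why */
        write(pfd[1], &err, sizeof err);
        _exit(127);
    }

    close(pfd[1]);                           /* parent */
    int err;
    ssize_t n = read(pfd[0], &err, sizeof err);  /* 0 bytes == exec worked */
    close(pfd[0]);
    waitpid(pid, NULL, 0);

    if (n > 0) {
        fprintf(stderr, "exec failed: %s\n", strerror(err));
        return -1;
    }
    return 0;
}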
wait(2) gives you more than just the exit status of the child process. In order to get the real exit status, you need to use the WIFEXITED() macro to test if the child exited normally (as opposed to abnormally via a signal etc.), and then use the WEXITSTATUS() macro to get the real exit status:
int status;
wait(&status);
if (WIFEXITED(status))
{
    if (WEXITSTATUS(status) == 0)
    {
        // Program succeeded
    }
    else
    {
        // Program failed but exited normally
    }
}
else
{
    // Program exited abnormally
}
In order for execvp(3) to run a program in the current directory, you either need to add the current directory to your $PATH environment (generally not a good idea), or pass it the full path, e.g. use ./myprogram instead of just myprogram.
In terms of failure detection, if an exec() function replaces the current process with a new one, then the current process is gone; it doesn't matter if the executed program decides that what it has been asked to do is a success or failure. However, the parent process from before the fork can discover the child's exit code which would likely have the command success/failure information.
In terms of finding executables, execvp() duplicates the action of the shell, searching the current path. If it is not finding executables in the working directory, it is likely that directory is not in the search path (or the files are not actually executable). You can try specifying them by a full path name.
If you simply want to run a command and wait for the result, you might want to use the system() function which handles this for you, instead of building it yourself with fork/exec.
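For instance, a minimal sketch using system() and the same status macros as above, with the command from the question:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    int status = system("rm fileThatDoesntExist.txt");
    if (status == -1)
        perror("system");       /* could not create the child at all */
    else if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
        fprintf(stderr, "command failed with exit status %d\n",
                WEXITSTATUS(status));
    return 0;
}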

A Linux Daemon and the STDIN/STDOUT

I am working on a Linux daemon and having some issues with stdin/stdout. Normally, because of the nature of a daemon, you do not have any stdin or stdout. However, I do have a function in my daemon that is called when the daemon runs for the first time, to specify different parameters that are required for the daemon to run successfully. When this function is called, the terminal becomes so sluggish that I have to launch a separate shell and kill the daemon with top to get a responsive prompt back. Now I suspect that this has something to do with the forking process closing stdin/stdout, but I am not quite sure how I could work around this. If you guys could shed some light on the situation, that would be most appreciated. Thanks.
Edit:
int main(int argc, char *argv[]) {
    /* set up signal handling */
    /* check command line arguments */
    pid_t pid, sid;

    pid = fork();
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    if (pid > 0) {
        exit(EXIT_SUCCESS);
    }
    sid = setsid();
    if (sid < 0) {
        exit(EXIT_FAILURE);
    }
    umask(027);
    /* set up syslogging */
    /* do some logic to determine whether we are running the daemon for the
       first time, and if we are, call the one-time function which uses
       fgets() to receive some input */
    while (1) {
        /* do required work */
    }
    /* do some clean-up procedures and exit */
    return 0;
}
You guys mention using a config file. This is exactly what I do to store the parameters received via input. However, I still initially need to get these from the user via stdin. The logic for determining whether we are running for the first time is based on the existence of the config file.
Normally, the standard input of a daemon should be connected to /dev/null, so that if anything is read from standard input, you get an EOF immediately. Normally, standard output should be connected to a file - either a log file or /dev/null. The latter means all writes will succeed, but no information will be stored. Similarly, standard error should be connected to /dev/null or to a log file.
All programs, including daemons, are entitled to assume that stdin, stdout and stderr are appropriately opened file streams.
It is usually appropriate for a daemon to control where its input comes from and outputs go to. There is seldom occasion for input to come from other than /dev/null. If the code was written to survive without standard output or standard error (for example, it opens a standard log channel, or perhaps uses syslog(3)) then it may be appropriate to close stdout and stderr. Otherwise, it is probably appropriate to redirect them to /dev/null, while still logging messages to a log file. Alternatively, you can redirect both stdout and stderr to a log file - beware continuously growing log files.
Your sluggish-to-impossible response time might be because your program is not paying attention to EOF in a read loop somewhere. It might be prompting for user input on /dev/null, reading a response from /dev/null, and, not getting a 'y' or 'n' back, trying again, which chews up your system horribly. Of course, the code is flawed in not handling EOF, and in not counting the number of times it gets an invalid response and stopping being silly after a reasonable number of attempts (16, 32, 64). The program should shut up shop sanely and safely if it expects meaningful input and continues not to get it.
Instead of reading stdin, have the user write the config file themselves; check for its existence before forking, and exit with an error if it doesn't. Include a sample config file with the daemon, and document its format in your daemon's manpage. You do have a manpage, yes? Your config file is textual, yes?
Also, your daemonization logic is missing a key step. After forking, but before calling setsid, you need to close fds 0, 1, and 2 and reopen them to /dev/null (do not attempt to do this with fclose and fopen). That should fix your sluggish terminal problem.
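A sketch of that step using dup2(), which closes and reopens the descriptor in one call and avoids stdio entirely:
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static void detach_stdio(void)
{
    int fd = open("/dev/null", O_RDWR);
    if (fd == -1)
        exit(EXIT_FAILURE);
    dup2(fd, STDIN_FILENO);    /* reads now see EOF immediately */
    dup2(fd, STDOUT_FILENO);   /* writes succeed and are discarded */
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);
}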
Your design is wrong. Daemon processes should not take input via stdin or deliver output to stdout/stderr. You'll close those descriptors as part of the daemonizing phase. Daemons should take configuration parameters from the command line, a config file, or both. If runtime-input is required you'll have to read a file, open a socket, etc., but the point of a daemon is that it should be able to run and do its thing without a user being present at the console.
If you want to run your program detached, use the shell: (setsid <command> &). Do not fork() inside your program; that causes sysadmin nightmares.
Don't use syslog() and don't redirect stdout or stderr yourself.
Better yet, use a daemon manager such as daemontools, runit, OpenRC, or systemd to daemonize your program for you.
Use a config file. Do not use STDIN or STDOUT with a daemon. Daemons are meant to run in the background with no user interaction.
If you insist on using stdin/keyboard input to fire up the daemon (e.g. to get some magic passphrase you wouldn't want to store in a file) then handle all I/O before the fork().

capturing commandline output directly in a buffer

I want to execute a command using system() or execl() and capture the output directly in a buffer in C. Is there any way to capture the output in a buffer using the dup() system call or pipe()? I don't want to use any file in between, whether via mkstemp or any other temporary file. Please help me with this. Thanks in advance.
I tried it with fork(), creating two processes and piping the output, and it works. However, I don't want to use the fork system call, since I am going to run the module indefinitely from a separate thread; it invokes a lot of fork() calls, and the system sometimes runs out of resources after a while.
To be clear, what I am doing is capturing the output of a shell script in a buffer, processing the output, and displaying it in a window that I have designed using ncurses. Thank you.
Here is some code for capturing the output of a program; it uses exec() instead of system(), but that is straightforward to accommodate by invoking the shell directly:
How can I implement 'tee' programmatically in C?
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// abort on failed syscalls; the original answer assumes such a helper
static void check(int ret) {
    if (ret < 0) { perror("tee"); exit(EXIT_FAILURE); }
}

void tee(const char* fname) {
    int pipe_fd[2];
    check(pipe(pipe_fd));
    const pid_t pid = fork();
    check(pid);
    if (!pid) { // our log child
        close(pipe_fd[1]); // Close unused write end
        FILE* logFile = fname ? fopen(fname, "a") : NULL;
        if (fname && !logFile)
            fprintf(stderr, "cannot open log file \"%s\": %d (%s)\n",
                    fname, errno, strerror(errno));
        char ch;
        while (read(pipe_fd[0], &ch, 1) > 0) {
            //### any timestamp logic or whatever here
            putchar(ch);
            if (logFile)
                fputc(ch, logFile);
            if ('\n' == ch) {
                fflush(stdout);
                if (logFile)
                    fflush(logFile);
            }
        }
        putchar('\n');
        close(pipe_fd[0]);
        if (logFile)
            fclose(logFile);
        exit(EXIT_SUCCESS);
    } else {
        close(pipe_fd[0]); // Close unused read end
        // redirect stdout and stderr
        dup2(pipe_fd[1], STDOUT_FILENO);
        dup2(pipe_fd[1], STDERR_FILENO);
        close(pipe_fd[1]);
    }
}
A simple way is to use popen ( http://www.opengroup.org/onlinepubs/007908799/xsh/popen.html), which returns a FILE*.
You can try popen(), but your fundamental problem is running too many processes. You have to make sure your commands finish, otherwise you will end up with exactly the problems you're having. popen() internally calls fork() anyway (or the effect is as if it did).
So, in the end, you have to make sure that the program you want to run from your threads exits "soon enough".
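A minimal sketch with popen(); the script name is a placeholder, and the pclose() call is what reaps the child and prevents the resource exhaustion described in the question:
#include <stdio.h>

int main(void)
{
    char buf[4096] = "";
    size_t used = 0;

    /* "./myscript.sh" is a placeholder; 2>&1 captures stderr as well */
    FILE *p = popen("./myscript.sh 2>&1", "r");
    if (!p)
        return 1;

    size_t n;
    while (used < sizeof buf - 1 &&
           (n = fread(buf + used, 1, sizeof buf - 1 - used, p)) > 0)
        used += n;
    buf[used] = '\0';

    int status = pclose(p);     /* reaps the child: no zombies left over */
    printf("captured %zu bytes, exit status %d\n", used, status);
    return 0;
}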
You want to use a sequence like this (a minimal sketch follows the list):
1. Call pipe() once per stream you want to redirect (e.g. stdin, stdout, stderr).
2. Call fork().
3. In the child:
   - close the parent end of the handles, and close any other handles you have open;
   - set up stdin, stdout, stderr to be the appropriate child side of the pipe;
   - exec your desired command; if that fails, die.
4. In the parent:
   - close the child side of the handles;
   - read and write to the pipes as appropriate;
   - when done, call waitpid() (or similar) to clean up the child process.
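Here is a minimal sketch of that sequence, capturing just stdout into a caller-supplied buffer (capture is a made-up name, and error handling is pared down):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

ssize_t capture(char *const argv[], char *buf, size_t bufsize)
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1)
        return -1;

    if (pid == 0) {                    /* child */
        close(pfd[0]);                 /* parent's end */
        dup2(pfd[1], STDOUT_FILENO);   /* stdout into the pipe */
        close(pfd[1]);
        execvp(argv[0], argv);
        _exit(127);                    /* exec failed */
    }

    close(pfd[1]);                     /* child's end */
    size_t used = 0;
    ssize_t n;
    while (used < bufsize - 1 &&
           (n = read(pfd[0], buf + used, bufsize - 1 - used)) > 0)
        used += n;
    buf[used] = '\0';
    close(pfd[0]);
    waitpid(pid, NULL, 0);             /* reap: no zombie pile-up */
    return (ssize_t)used;
}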
If you have implemented a C program and you want to execute a script, you want to use fork(). Unless you are willing to consider embedding the script interpreter in your program, you have to use fork() (system() uses fork() internally).
If you are running out of resources, most likely you are not reaping your children. Until the parent process gets the exit code, the OS needs to keep the child around as a 'zombie' process. You need to issue a wait() call to get the OS to free up the final resources associated with the child.
