I'm trying to create a very basic telnet server to practice memory corruption exploits. When I issue a command, nothing happens on the first iteration. On the second iteration I get multiple "Bad file descriptor" errors printed on my server side. On the client side everything seems OK; I get all the required prompts. Here's my relevant code:
int piper[2];
pipe(piper);
...
while (1) {
n = write(newsockfd,"Enter a command...\n",21);
if (n < 0) error("ERROR writing to socket");
bzero(buffer,4096);
n = read(newsockfd,buffer,4095);
strcpy(command, buffer);
pid_t childpid;
childpid = fork();
if(childpid == -1) {
perror("Failed to fork");
return 1;
}
if(childpid == 0) { //child
printf("I am child %ld\n", (long)getpid());
if(dup2(piper[1], 1) < 0) {
perror("Failed to pipe in child process");
}
else {
close(piper[0]);
close(piper[1]);
char *args[] = {command, NULL};
execve(command, args, NULL);
}
}
else { // parent
if(dup2(piper[0], 0) < 0) {
perror("Failed to pipe in parent process");
}
else {
// read command output from child
while(fgets(command_out, sizeof(command_out), stdin)) {
printf("%s", command_out);
}
}
}
}
If I enter /bin/ls into my client, I get the following output on my server:
I am child 26748
The second time I do it, I get the following output on my server:
Failed to pipe in parent process: Bad file descriptor
0I am child 26749
Failed to pipe in child process: Bad file descriptor
Closing the pipe ends in the child does not close the parent's copies (each process has its own descriptor table after fork), but reusing one pipe for every command will get you into trouble: by the second iteration its descriptors may already have been closed (for example, a child whose execve() fails falls out of the if block and re-enters your while loop with both pipe ends closed). Consider moving your pipe(piper) call to the beginning of the while loop so each command gets a fresh pipe, and, to be safe, close both ends at the end of the loop, not forgetting to test the return value of close().
Also, note that the command you read will normally end with a newline. The newline isn't added by read(); it's part of what the client sent when the user pressed Enter (a telnet client may actually send \r\n). So if the user types testprog, the buffer really contains testprog\n, and you have to strip that trailing newline or execve() will be asked to run a program whose name ends in a newline.
#define STDIN 0
int n = read(STDIN, command, 4095);
if (n > 0 && command[n - 1] == '\n')
    command[n - 1] = '\0'; // get rid of the newline
char *args[] = { command, NULL };
execve(command, args, NULL);
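Putting both suggestions together, here is a minimal sketch of how the server loop could look. It assumes newsockfd is the connected socket and command/command_out are char arrays of at least 4096 bytes, as in the original code; instead of dup2'ing the pipe over the server's own stdin it reads the child's output through fdopen(), and it needs <stdio.h>, <strings.h>, <unistd.h> and <sys/wait.h>:

while (1) {
    int piper[2];
    if (pipe(piper) < 0) { perror("pipe"); break; }    /* fresh pipe per command */

    write(newsockfd, "Enter a command...\n", 19);
    bzero(command, 4096);
    ssize_t n = read(newsockfd, command, 4095);
    if (n <= 0) break;                                  /* client went away */
    while (n > 0 && (command[n - 1] == '\n' || command[n - 1] == '\r'))
        command[--n] = '\0';                            /* strip trailing CR/LF */

    pid_t childpid = fork();
    if (childpid == -1) { perror("fork"); return 1; }

    if (childpid == 0) {                                /* child: stdout -> pipe */
        dup2(piper[1], 1);
        close(piper[0]);
        close(piper[1]);
        char *args[] = { command, NULL };
        execve(command, args, NULL);
        perror("execve");                               /* only reached on failure */
        _exit(1);
    }

    /* parent: close the write end so the reader sees EOF when the child exits */
    close(piper[1]);
    FILE *fp = fdopen(piper[0], "r");
    while (fp && fgets(command_out, sizeof(command_out), fp))
        fputs(command_out, stdout);                     /* or write() it back to the client */
    if (fp)
        fclose(fp);                                     /* also closes piper[0] */
    waitpid(childpid, NULL, 0);                         /* reap the child */
}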
For some unknown reason, when I execute piped commands in my shell program, their output only appears once I exit the program. Can anyone see why?
Code:
int execCmdsPiped(char **cmds, char **pipedCmds){
// 0 is read end, 1 is write end
int pipefd[2];
pid_t pid1, pid2;
if (pipe(pipefd) == -1) {
fprintf(stderr,"Pipe failed");
return 1;
}
pid1 = fork();
if (pid1 < 0) {
fprintf(stderr, "Fork Failure");
}
if (pid1 == 0) {
// Child 1 executing..
// It only needs to write at the write end
close(pipefd[0]);
dup2(pipefd[1], STDOUT_FILENO);
close(pipefd[1]);
if (execvp(pipedCmds[0], pipedCmds) < 0) {
printf("\nCouldn't execute command 1: %s\n", *pipedCmds);
exit(0);
}
} else {
// Parent executing
pid2 = fork();
if (pid2 < 0) {
fprintf(stderr, "Fork Failure");
exit(0);
}
// Child 2 executing..
// It only needs to read at the read end
if (pid2 == 0) {
close(pipefd[1]);
dup2(pipefd[0], STDIN_FILENO);
close(pipefd[0]);
if (execvp(cmds[0], cmds) < 0) {
//printf("\nCouldn't execute command 2...");
printf("\nCouldn't execute command 2: %s\n", *cmds);
exit(0);
}
} else {
// parent executing, waiting for two children
wait(NULL);
}
}
}
Output:
In this example of the output, I used "ls | sort -r". Another important note is that my program is designed to handle only one pipe; I'm not supporting multi-piped commands. With all that in mind, where am I going wrong, and what should I do to fix it so that the output appears within the shell rather than after it exits? Many thanks in advance for any and all advice and help given.
The reason would be that your parent process's pipe file descriptors are not closed yet. When you wait for the second command to terminate, it hangs because the writing end is still open, so the reader waits until either the writing end is closed or new data becomes available to read.
Try closing both pipefd[0] and pipefd[1] in the parent before waiting for the processes to terminate.
Also note that wait(NULL); returns as soon as one process has terminated; you need a second call so that the other child doesn't become a zombie if it is still running after that.
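With those two fixes applied, the tail of execCmdsPiped() could look roughly like this (a sketch of the final else branch only, everything else unchanged):

        if (pid2 == 0) {
            /* ... child 2 exactly as before ... */
        } else {
            // parent: close both pipe ends so child 2 sees EOF on its stdin,
            // then reap both children so neither is left as a zombie
            close(pipefd[0]);
            close(pipefd[1]);
            wait(NULL);
            wait(NULL);
        }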
This is a homework assignment that has me stumped. I make two pipes, then two child processes to handle both sides of the pipe. The first child handles the first command and writes its output to the first pipe; the second child handles the second command and writes its output to the second pipe. When all is said and done, I read the contents of the second pipe into a buffer and simply printf(buffer). It's at this step that my code is failing: I cannot read anything into the buffer. I have tested all my helper calls such as getWordsBeforePipe() and I know they work. Do you guys see anything I am missing?
// Create the first pipe
pipeStatus = pipe(pfd1);
if (pipeStatus == -1) {
perror("pipe");
exit(1);
}
// create the first child
pid = fork();
if (pid == -1) {
printf("Bad first fork()...\n");
exit(1);
}
// Here we will run the first command inside of the first child.
if (pid == 0) {
printf("Im in the first child...\n");
getWordsBeforePipe(pipeLoc); // get the words before the pipe
close(pfd1[0]); // close read end because we arent reading anything
dup2(pfd1[1], 1); // copy to write-end of pfd instead of stdout
close(pfd1[1]); // close the write end
firstCommand = execve(pathFirst, beforePipeWords, environ);
perror("execve"); // we only get here if execve died
_exit(1);
}
// create the second pipe
pipeStatus = pipe(pfd2);
if (pipeStatus == -1) {
perror("pipe");
exit(1);
}
// create the second child
pid = fork();
if (pid == -1) {
printf("Bad second fork()...\n");
exit(1);
}
// Here we will run the second command and put its
// output into the second pipe
// first command business
if (pid == 0) {
printf("Im in the second child...\n");
getWordsAfterPipe(pipeLoc);
close(pfd1[1]); // close first child write end
dup2(pfd1[0], 0); // read from the pfd read end instead of stdin
close(pfd1[0]); // close the read end
// second command business
close(pfd2[0]); // close read end because we arent reading anything
dup2(pfd2[1], 1); // copy to write end of pfd instead of stdout
close(pfd2[1]);
secondCommand = execve(pathSecond, afterPipeWords, environ);
perror("execve"); // we only get here if execve died
_exit(1);
}
close(pfd1[0]);
close(pfd2[0]);
close(pfd2[1]);
// read from the second pipe and output the final value
readSuccess = read(pfd2[0], buffer, 256);
if (readSuccess < 0) {
printf("Failure reading the buffer...\n"); // I keep getting this error
exit(1);
}
if (readSuccess == 0) {
printf("Empty buffer...\n");
exit(1);
}
buffer[readSuccess] = '\0';
printf("%s", buffer);
The parent process is doing this:
close(pfd2[0]);
Followed by this:
readSuccess = read(pfd2[0], buffer, 256);
You can't read from a file descriptor after it's been closed.
You properly closed both ends of the pfd1 pair, since the two children read/write from them. The second child writes to pfd2[1], so the parent should be closing that instead of pfd2[0].
Check that the command specified by pathFirst writes to stdout, and that the command specified by pathSecond both reads from stdin and writes to stdout.
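A minimal sketch of the corrected parent-side cleanup, assuming pfd1, pfd2 and buffer are declared as in the question. The parent closes every pipe end it does not use, including pfd1[1] so the second child sees EOF on its stdin, but it keeps pfd2[0] open because that is what it reads from:

close(pfd1[0]);
close(pfd1[1]);
close(pfd2[1]);                              /* close only the write end of pipe 2 */

readSuccess = read(pfd2[0], buffer, 255);    /* leave room for the terminating NUL */
if (readSuccess < 0) {
    perror("read");
    exit(1);
}
if (readSuccess == 0) {
    printf("Empty buffer...\n");
    exit(1);
}
buffer[readSuccess] = '\0';
printf("%s", buffer);
close(pfd2[0]);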
I am trying to run ls|wc using execvp. So I create a pipe and then fork to create a child. I close the appropriate (read/write) end in the parent/child and then map the other end to stdout/stdin. Then I run ls in the parent using execvp and wc in the child. When I run the program it says
wc:standard input:bad file descriptor.
0 0 0
wc: -:Bad file descriptor
Here is my code:
int main()
{
//int nbBytes = 0; //stream length
int pfd_1[2]; //file descriptor
//char buffer[MAX_FILE_LENGTH];
char* arg[MAX_FILE_LENGTH];
pid_t processPid;
//Create a pipe
if(pipe(pfd_1) == -1)
{
printf("Error in creating pipe");
return 0;
}
//Create a child
processPid = fork();
if(processPid == -1)
{
printf("Erro in fork");
exit(1);
}
else if(processPid == 0) //Child
{
//redirect read end file descriptor to standard input
dup2(pfd_1[0],0);
//Close the write end
if(close(pfd_1[1] == -1))
{
printf("Error in closing the write end file descriptor");
exit(1);
}
arg[0] = "wc";
//arg[1] = "-l";
arg[1] = '\0';
if(execvp(arg[0],arg) == -1)
{
printf("Error in executing ls");
}
}
else //Parent
{
//redirect standard output to the file descriptor
dup2(pfd_1[1],1);
//Close the read end
if(close(pfd_1[0] == -1))
{
printf("Error in closing the read end from parent");
exit(1);
}
//Command
arg[0] = "ls";
arg[1] = "/proc/1/status";
arg[2] = '\0';
if(execvp(arg[0],arg) == -1)
{
printf("Error in executing ls");
}
}
}
Any idea what might be wrong? Why would it consider standard input a bad file descriptor? My understanding was that since stdin and the pipe's read end are aliases after the dup2, wc -l would read whatever output comes from the parent process. Do I need to do a scanf to read from stdin?
The problem is in this line:
if(close(pfd_1[1] == -1))
You are closing the result of pfd_1[1] == -1, which is by necessity equal to 0 (as they will never be equal). The correct line would probably be:
if (close(pfd_1[1]) == -1)
Note that you do this again later in attempting to close the read end in the parent process.
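For completeness, both corrected calls, keeping the error handling from the question:

/* child: close the write end */
if (close(pfd_1[1]) == -1)
{
    printf("Error in closing the write end file descriptor");
    exit(1);
}

/* parent: close the read end */
if (close(pfd_1[0]) == -1)
{
    printf("Error in closing the read end from parent");
    exit(1);
}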
If you're going to fork children, you have to call wait() in the parent process in order to avoid "zombie" child processes. So you don't want to overlay the parent process that did the original process forking with another executable via exec.
One quick way to setup a series of pipes in the way you want would be to fork a child for each executable you want to run, and read that data back into a buffer in the parent. Then feed that data from the first child into a new child process that the parent forks off. So each child is fed data from the parent, processes the data, and writes the data back to the parent process, which stores the transformed data in a buffer. That buffer is then fed to the next child, etc., etc. The final results of the data in the buffer are the final output of the pipe.
Here's a little pseudo-code:
//allocate buffer
unsigned char buffer[SIZE];
for (each executable to run in pipeline)
{
pipes[2];
pipe(pipes);
pid_t pid = fork();
if (pid == 0)
{
//setup the pipe in the child process
//call exec
}
else
{
//setup the pipe in the parent process
if (child executable is not the first in the pipeline)
{
//write contents of buffer to child process
}
//read from the pipe until the child exits
//store the results in buffer
//call wait, and maybe also check the return value to make sure the
//child returned successfully
wait(NULL);
//clean up the pipe
}
}
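For illustration, here is a hedged, self-contained C sketch of that buffer-relay idea. The two-stage pipeline (ls | sort -r), the SIZE constant and the minimal error checking are made up for brevity, and it assumes each stage's output fits within the pipe capacity, since the parent writes the whole buffer before it starts reading the child's output:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define SIZE 65536

int main(void)
{
    /* hypothetical two-stage pipeline: ls | sort -r */
    char *stage0[] = { "ls", NULL };
    char *stage1[] = { "sort", "-r", NULL };
    char **stages[] = { stage0, stage1 };
    int nstages = 2;

    unsigned char buffer[SIZE];
    ssize_t buflen = 0;

    for (int i = 0; i < nstages; i++) {
        int in_pipe[2], out_pipe[2];          /* parent -> child, child -> parent */
        pipe(in_pipe);
        pipe(out_pipe);

        pid_t pid = fork();
        if (pid == 0) {                       /* child */
            dup2(in_pipe[0], STDIN_FILENO);   /* read the previous stage's output */
            dup2(out_pipe[1], STDOUT_FILENO); /* write this stage's output */
            close(in_pipe[0]);  close(in_pipe[1]);
            close(out_pipe[0]); close(out_pipe[1]);
            execvp(stages[i][0], stages[i]);
            perror("execvp");
            _exit(1);
        }

        /* parent */
        close(in_pipe[0]);
        close(out_pipe[1]);
        if (i > 0)                            /* feed the previous stage's output in */
            write(in_pipe[1], buffer, buflen);
        close(in_pipe[1]);                    /* child now sees EOF on its stdin */

        buflen = 0;                           /* collect this stage's output */
        ssize_t n;
        while ((n = read(out_pipe[0], buffer + buflen, SIZE - buflen)) > 0)
            buflen += n;
        close(out_pipe[0]);

        wait(NULL);                           /* reap the child before the next stage */
    }

    fwrite(buffer, 1, buflen, stdout);        /* the final pipeline output */
    return 0;
}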
I created a pipe between two child processes.
First, I run ls, which writes to the proper fd;
then I run grep r, which reads from the proper fd.
I can see in the terminal that the grep command works fine (its output appears).
The problem is that grep doesn't quit; it stays there even though ls isn't running anymore.
For other programs the pipe works fine..
for (i = 0; i < commands_num ; i++) { //exec all the commands instants
if (pcommands[i]._flag_pipe_out == 1) { //creates pipe if necessary
if (pipe(pipe_fd) == -1) {
perror("Error: \"pipe()\" failed");
}
pcommands[i]._fd_out = pipe_fd[1];
pcommands[i+1]._fd_in = pipe_fd[0];
}
pid = fork(); //the child exec the commands
if (pid == -1) {
perror("Error: \"fork()\" failed");
break;
} else if (!pid) { //child process
if (pcommands[i]._flag_pipe_in == 1) { //if there was a pipe to this command
if (dup2(pcommands[i]._fd_in, STDIN) == -1) {
perror("Error: \"dup2()\" failed");
exit(0);
}
close(pcommands[i]._fd_in);
}
if (pcommands[i]._flag_pipe_out == 1) { //if there was a pipe from this command
if (dup2(pcommands[i]._fd_out, STDOUT) == -1) {
perror("Error: \"dup2()\" failed");
exit(0);
}
close(pcommands[i]._fd_out);
}
execvp(pcommands[i]._commands[0] , pcommands[i]._commands); //run the command
perror("Error: \"execvp()\" failed");
exit(0);
} else if (pid > 0) { //father process
waitpid(pid, NULL, WUNTRACED);
}
}
//closing all the open fd's
for (i = 0; i < commands_num ; i++) {
if (pcommands[i]._fd_in != STDIN) { //if there was an other stdin that is not 0
close(pcommands[i]._fd_in);
}
if (pcommands[i]._fd_out != STDOUT) { //if there was an other stdout that is not 1
close(pcommands[i]._fd_out);
}
}
So, I have a "command" instance, pcommands[i].
It has:
a pipein flag and a pipeout flag,
fdin and fdout,
and a char** (for the real command, like "ls -l").
Let's say everything is good;
that means that:
pcommands[0]:
pipein=0
pipeout=1
char** = {"ls","-l",NULL}
pcommands[1]:
pipein=1
pipeout=0
char** = {"grep","r",NULL}
Now, the loop will run twice (because I have two command instances).
The first time, it sees that pcommands[0] has pipeout==1:
create pipe
do fork
pcommands[0] has pipeout==1
child: dup2 to stdout
execvp
The second time:
doesn't create a pipe
do fork
child:
pcommands[1] has pipein==1
then: dup2 to the input
execvp
..
This command works; my output is:
errors.log exer2.pdf multipal_try
(all the things with 'r')
but then it gets stuck and doesn't get out of grep..
In another terminal I can see grep is still running.
I hope I closed all the fd's I need to close...
I don't understand why it doesn't work; it seems like I'm doing it right (well, it works for other commands..).
Can someone please help? Thanks
You aren't closing enough pipe file descriptors.
Rule of Thumb:
If you use dup() or dup2() to duplicate a pipe file descriptor to standard input or standard output, you should close both of the original pipe file descriptors.
You also need to be sure that if the parent shell creates the pipe, it closes both of its copies of the pipe file descriptors.
Also note that the processes in a pipeline should be allowed to run concurrently. In particular, pipes have a limited capacity, and a process blocks when there's no room left in the pipe. The limit can be quite small (POSIX mandates it must be at least 4 KiB, but that's all). If your programs deal with megabytes of data, they must be allowed to run concurrently in the pipeline. Therefore, the waitpid() should occur outside the loop that launches the children. You also need to close the pipes in the parent process before waiting; otherwise, the child reading the pipe will never see EOF (because the parent could, in theory, write to the pipe, even though it won't).
You have structure members whose names start with an underscore. That's dangerous. Names starting with an underscore are reserved for the implementation. The C standard says:
ISO/IEC 9899:2011 §7.1.3 Reserved Identifiers
— All identifiers that begin with an underscore and either an uppercase letter or another
underscore are always reserved for any use.
— All identifiers that begin with an underscore are always reserved for use as identifiers
with file scope in both the ordinary and tag name spaces.
That means that if you run into problems, then the trouble is yours, not the system's. Obviously, your code works, but you should be aware of the problems you could run into and it is wisest to avoid them.
Sample Code
This is a fixed SSCCE based on the code above:
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
typedef struct Command Command;
struct Command
{
int _fd_out;
int _fd_in;
int _flag_pipe_in;
int _flag_pipe_out;
char **_commands;
};
typedef int Pipe[2];
enum { STDIN = STDIN_FILENO, STDOUT = STDOUT_FILENO, STDERR = STDERR_FILENO };
int main(void)
{
char *ls_cmd[] = { "ls", 0 };
char *grep_cmd[] = { "grep", "r", 0 };
Command commands[] =
{
{
._fd_in = 0, ._flag_pipe_in = 0,
._fd_out = 1, ._flag_pipe_out = 1,
._commands = ls_cmd,
},
{
._fd_in = 0, ._flag_pipe_in = 1,
._fd_out = 1, ._flag_pipe_out = 0,
._commands = grep_cmd,
}
};
int commands_num = sizeof(commands) / sizeof(commands[0]);
/* Allow valgrind to check memory */
Command *pcommands = malloc(commands_num * sizeof(Command));
for (int i = 0; i < commands_num; i++)
pcommands[i] = commands[i];
for (int i = 0; i < commands_num; i++) { //exec all the commands instants
if (pcommands[i]._flag_pipe_out == 1) { //creates pipe if necessary
Pipe pipe_fd;
if (pipe(pipe_fd) == -1) {
perror("Error: \"pipe()\" failed");
}
pcommands[i]._fd_out = pipe_fd[1];
pcommands[i+1]._fd_in = pipe_fd[0];
}
pid_t pid = fork(); //the child exec the commands
if (pid == -1) {
perror("Error: \"fork()\" failed");
break;
} else if (!pid) { //child process
if (pcommands[i]._flag_pipe_in == 1) { //if there was a pipe to this command
assert(i > 0);
assert(pcommands[i-1]._flag_pipe_out == 1);
assert(pcommands[i-1]._fd_out > STDERR);
if (dup2(pcommands[i]._fd_in, STDIN) == -1) {
perror("Error: \"dup2()\" failed");
exit(0);
}
close(pcommands[i]._fd_in);
close(pcommands[i-1]._fd_out);
}
if (pcommands[i]._flag_pipe_out == 1) { //if there was a pipe from this command
assert(i < commands_num - 1);
assert(pcommands[i+1]._flag_pipe_in == 1);
assert(pcommands[i+1]._fd_in > STDERR);
if (dup2(pcommands[i]._fd_out, STDOUT) == -1) {
perror("Error: \"dup2()\" failed");
exit(0);
}
close(pcommands[i]._fd_out);
close(pcommands[i+1]._fd_in);
}
execvp(pcommands[i]._commands[0] , pcommands[i]._commands); //run the command
perror("Error: \"execvp()\" failed");
exit(1);
}
else
printf("Child PID %d running\n", (int)pid);
}
//closing all the open pipe fd's
for (int i = 0; i < commands_num; i++) {
if (pcommands[i]._fd_in != STDIN) { //if there was another stdin that is not 0
close(pcommands[i]._fd_in);
}
if (pcommands[i]._fd_out != STDOUT) { //if there was another stdout that is not 1
close(pcommands[i]._fd_out);
}
}
int status;
pid_t corpse;
while ((corpse = waitpid(-1, &status, 0)) > 0)
printf("Child PID %d died with status 0x%.4X\n", (int)corpse, status);
free(pcommands);
return(0);
}
Just for my knowledge, how would you do it, so it won't get "indisputably messy"?
I'd probably keep the pipe information so that the child didn't need to worry about the conditionals contained in the asserts (accessing the information for the child before or after it in the pipeline). If each child only needs to access information in its own data structure, it is cleaner. I'd reorganize the 'struct Command' so it contained two pipes, plus indicators for which pipe contains information that needs closing. In many ways, not radically different from what you've got; just tidier in that child i only needs to look at pcommands[i].
You can see a partial answer in a different context at C Minishell adding pipelines.
I'm trying to develop a shell in Linux as an Operating Systems project. One of the requirements is to support pipelining (where calling something like ls -l|less passes the output of the first command to the second). I'm trying to use the C pipe() and dup2() commands but the redirection doesn't seem to be happening (less complains that it didn't receive a filename). Can you identify where I'm going wrong/how I might go about fixing that?
EDIT: I'm thinking that I need to use either freopen or fdopen somewhere since I'm not using read() or write()... is that correct?
(I've heard from others who've done this project that using freopen() is another way to solve this problem; if you think that would be better, tips for going that direction would also be appreciated.)
Here's my execute_external() function, which executes all commands not built-in to the shell. The various commands in the pipe (e.g. [ls -l] and [less]) are stored in the commands[] array.
void execute_external()
{
int numCommands = 1;
char **commands;
commands = malloc(sizeof(char *));
if(strstr(raw_command, "|") != NULL)
{
numCommands = separate_pipeline_commands(commands);
}
else
{
commands[0] = malloc(strlen(raw_command) * sizeof(char));
commands[0] = raw_command;
}
int i;
int pipefd[2];
for (i = 0; i < numCommands; i++)
{
char **parameters_array = malloc(strlen(commands[i]) * sizeof(char *));
int num_params;
num_params = str_to_str_array(commands[i], parameters_array);
if (numCommands > 1 && i > 0 && i != numCommands - 1)
{
if (pipe(pipefd) == -1)
{
printf("Could not open a pipe.");
}
}
pid_t pid = fork();
pmesg(2, "Process forked. ID = %i. \n", pid);
int status;
if (fork < 0)
{
fprintf(to_write_to, "Could not fork a process to complete the external command.\n");
exit(EXIT_FAILURE);
}
if (pid == 0) // This is the child process
{
if (numCommands > 1) { close(pipefd[1]); } // close the unused write end of the pipe
if (i == 0) // we may be pipelining and this is the first process
{
dup2(1, pipefd[1]); // set the source descriptor (for the next iteration of the loop) to this proc's stdout
}
if (i !=0 && (i != numCommands-1)) // we are pipelining and this is not the first or last process
{
dup2(pipefd[0], 0); // set the stdin of this process to the source of the previous process
}
if (execvp(parameters_array[0], parameters_array) < 0)
{
fprintf(to_write_to, "Could not execute the external command. errno: %i.\n", errno);
exit(EXIT_FAILURE);
}
else { pmesg(2, "Executed the child process.\n");}
}
else
{
if (numCommands > 1) { close(pipefd[0]); } // close the unused read end of the pipe
if (backgrounding == 0) { while(wait(&status) != pid); }// Wait for the child to finish executing
}
free(parameters_array);
}
free(commands);
}
It looks like there are a couple of bugs going on in your code.
First, all your dup2's are only in the child. In order to connect a pipe you will need to dup2 the stdout of the parent to the write end pipefd[1] of the pipe. Then you would hook up the read end to stdin.
Also, it looks like one of your dup2's is backwards: with dup2(fildes, fildes2), fildes is duplicated onto fildes2. So when you reassign stdin you want dup2(in, 0), and for stdout you want dup2(out, 1).
So a stripped down piece of piping code is going to look like:
int pipefd[2];
pipe(pipefd);
pid_t pid = fork();
if (pid == 0) //The child
{
dup2(pipefd[0], 0);
}
else
{
dup2(pipefd[1], 1);
}