Why am I having difficulty making execvp in C work?

I need to implement a basic shell in C.
One of the things I need is a function that takes a command and executes it.
My code:
pID = fork();
if (pID == 0)
    execvp(tmp[0], tmp);
else if (pID > 0)
{
    printf("%d", pID);
    wait(NULL);
}
else
    printf("Failed to create process \n");
The problem is that no matter what command I put in tmp, the program just shows me the prompt again and does nothing else.
For example, if I write gedit (to open gedit, Ubuntu's notepad-like editor), it doesn't open it, and if I write ls -a it doesn't show any output the way the Ubuntu terminal does.

execvp should work. As the others mentioned, you really need to show how you populate tmp. That said, I would guess that that's where the error is: tmp needs to be a null-terminated array.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main( int argc, char * argv[] )
{
    int pid = fork();
    char * tmp[2];

    memset( tmp, 0, sizeof(tmp) );   /* tmp[1] stays NULL, so the list is null-terminated */
    tmp[0] = argv[0];
    if( 0 == pid )
    {
        if( -1 == execvp( tmp[0], tmp ) )
        {
            char errmsg[64];
            snprintf( errmsg, sizeof(errmsg), "exec '%s' failed", tmp[0] );
            perror( errmsg );
        }
    }
    else if( 0 < pid )
    {
        printf("[%d] %s\n", pid, tmp[0]);
        wait(NULL);
    }
    else
    {
        perror("fork failed");
    }
    return 0;
}

Although you've failed to tell us what you're passing through the tmp variable to execvp, my psychic sense tells me that you forgot to null-terminate your argument list. A NULL argument tells execvp where the last argument is, and if you fail to put in a NULL, it will start reading random garbage off the stack.
If that random garbage points to large strings of non-zero data, it will run out of space to store the supposed arguments to the new process, which is typically a few hundred KB (see this page for some system-specific numbers, as well as various ways of getting your system's maximum arguments size).
When there's too much argument data, the system call execve(2) (called internally by execvp) fails with the error E2BIG.
So to see if this is what's happening to you, check the return value from execvp. If it even returns at all, it failed (if it succeeded, it wouldn't have returned since a new process would be executing!), so check the global value of errno to see why it failed:
if (pID == 0)
{
    execvp(tmp[0], tmp);
    printf("exec failed: %s\n", strerror(errno));
    exit(1);
}
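And since the question never shows how tmp is populated, here is a minimal sketch of one way to build it with the terminator in place (the command string, the array size of 16, and the use of strtok are my own assumptions, not taken from the question):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical input; in the real shell this would come from the user. */
    char line[] = "ls -a";
    char *tmp[16];
    int i = 0;

    for (char *tok = strtok(line, " "); tok != NULL && i < 15; tok = strtok(NULL, " "))
        tmp[i++] = tok;
    tmp[i] = NULL;              /* the terminator execvp relies on */

    execvp(tmp[0], tmp);
    perror("execvp");           /* only reached if the exec failed */
    return 1;
}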

execv() requires a full path; it does not search PATH. If tmp[0] is the full path of your executable file you can use execv(); otherwise stick with execvp(), which looks the name up in PATH.
execv(tmp[0], tmp);
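A minimal illustration of the difference between the two calls (the paths and arguments below are just examples):

char *args[] = { "ls", "-a", NULL };

execv("/bin/ls", args);   /* execv: the file must be named by its path, PATH is not searched */
execvp("ls", args);       /* only reached if execv failed; execvp looks "ls" up in PATH */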

Related

C dup2 pipe delay with stdout, how to fix?

I can't figure out how to get rid of a delay. It seems like a buffering delay, but I haven't had any luck with setvbuf or fflush...
I have a C/C++ program that executes a Python script which immediately starts printing to stdout (quite a bit), however there seems to be a huge delay when I try to read that output in my program. I have tried to include a basic version of what I am doing below. In the output I see TEST0 immediately, and then after quite some time I get a huge dump of prints. I tried setvbuf but that didn't seem to make a difference. I think either I am doing something wrong or just not understanding what's happening.
Update: I am running on Linux.
Update 2: fixed a code typo with multiple forks
Update 3: Adding stdout flushes in the Python script fixed the problem, no more delays! Thanks @DavidGrayson
int pipeFd[2];
pid_t pid;
char buff[PATH_MAX];
std::string path = "/usr/bin/python3";
std::string script = "";//use path to python script here
std::string args = ""; //use args for python script here

if( pipe( pipeFd ) == -1 )
{
    printf( "[ERROR] cant create pipe\n" );
}

pid = fork();
if( pid == -1)
{
    printf( "[ERROR] cant fork\n" );
}
else if( pid == 0 )
{
    close( pipeFd[0] );
    dup2( pipeFd[1], STDOUT_FILENO);
    close( pipeFd[1] );
    execl(path.c_str(), "python3", script.c_str(), args.c_str(), (char*)NULL );
    printf( "[ERROR] script execl failed\n" );
    exit(1);
}
else
{
    //setvbuf(stdout, NULL, _IONBF, 0);
    //setvbuf(stdin, NULL, _IONBF, 0);
    printf( "TEST0\n" );
    fflush(stdout);
    //it takes a really long time to see this next print
    read( pipeFd[0], buff, 1 );
    printf( "TEST1:%c\n", buff[0] );
    fflush(stdout);
}
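For what it's worth, an alternative to Update 3 that doesn't require editing the script (my own suggestion, not something from the original post) is to start the interpreter with its standard -u switch, which turns off stdout buffering in the child. A rough sketch of that child branch in plain C, with the interpreter and script paths as placeholders:

#include <stdio.h>
#include <unistd.h>

/* Hypothetical child branch: pipe_write_fd is the write end of the pipe
 * created by the parent.  The -u switch makes python3 flush stdout as it
 * prints, so the parent's read() no longer waits for a full pipe buffer. */
static void run_child(int pipe_write_fd)
{
    dup2(pipe_write_fd, STDOUT_FILENO);
    close(pipe_write_fd);
    execl("/usr/bin/python3", "python3", "-u", "/path/to/script.py", (char *)NULL);
    perror("execl");            /* only reached if the exec failed */
    _exit(1);
}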

I am working on creating my own UNIX shell, and sometimes when I run the 'ls' command it gives a "bad address" error

I am wondering if there is an error with execvp calling ls that can cause it to fail occasionally and then work properly other times.
void lookInsideCurrentDirectory(char **parsed){
    char* line = NULL;
    pid_t pid = fork();
    if(pid == -1){
        return;
    }
    else if(pid == 0){
        if(execvp(parsed[0], parsed) == -1){
            perror("Error: ");
        }
        exit(0);
    }
    else{
        wait(NULL);
        return;
    }
}
According to https://www.gnu.org/software/libc/manual/html_node/Error-Codes.html:
Macro: int EFAULT
“Bad address.” An invalid pointer was detected.
If you receive this error from execvp, that means that some of the pointers in parsed were invalid. You should look at the rest of the program and make sure none of the strings in the parsed array are free()-d before the execvp call completes.
Another very common mistake is that, since you're not passing the length of the argument array to execvp, the argv argument must be a NULL-terminated array so that execvp knows when to stop reading arguments. That means that if you're receiving a command that looks like ls -lah /bin, your argv array should be one element larger and end with a NULL pointer:
char *argv[] = {"ls", "-lah", "/bin", NULL};
If you don't end argv with a NULL pointer, execvp will try to dereference whatever comes next in memory as a pointer, and unless that happens to contain NULL bytes, the dereference will likely fail or point at something unexpected.
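One quick way to check for both problems (a debugging aid of my own, not part of the original answer) is to walk the array the same way execvp will, right before handing it over:

#include <stdio.h>

/* Print each argument up to the NULL terminator.  If this loop runs past the
 * real arguments or prints garbage, the array is missing its terminator or
 * holds dangling pointers, which are the same conditions that make execvp
 * fail with "Bad address". */
static void dump_args(char **argv)
{
    for (int i = 0; argv[i] != NULL; i++)
        fprintf(stderr, "argv[%d] = '%s'\n", i, argv[i]);
}

Calling dump_args(parsed) in the child just before execvp(parsed[0], parsed) makes a bad array visible immediately.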

using dup2 and pipe to redirect stdin

I have a program A that takes two arguments from stdin and exits with a unique code depending on the arguments. I am writing a program B that calls program A using fork and exec and let program B print out the code program A exits with. For some reason, program A doesn't seem to be getting the data I piped through to it in the child process of fork. I'm not sure if I'm piping the correct data to the child process.
Could someone help me please? Thanks!
Here is my code:
int program_B(void) {
    char var_a[256];
    char var_b[256];
    int fd[2];

    // Read from stdin
    char *sendarray[2];
    sendarray[0] = var_a;
    sendarray[1] = var_b;

    if(fgets(var_a, MAXLINE, stdin) == NULL) {
        perror("fgets");
        exit(1);
    }
    if(fgets(var_b, MAXLINE, stdin) == NULL) {
        perror("fgets");
        exit(1);
    }

    if (pipe(fd) == -1) {
        perror("pipe");
        exit(1);
    }

    int pid = fork();
    // Child process -- error seems to be here.
    if (pid == 0) {
        close(fd[1]);
        dup2(fd[0], fileno(stdin));
        close(fd[0]);
        execl("program_A", NULL);
        perror("exec");
        exit(1);
    } else {
        close(fd[0]);
        write(fd[1], sendarray, 2*sizeof(char*));
        close(fd[1]);

        int status;
        if (wait(&status) != -1) {
            if (WIFEXITED(status)) {
                printf("%d\n", WEXITSTATUS(status));
            } else {
                perror("wait");
                exit(1);
            }
        }
    }
    return 0;
}
You are piping the wrong data to the child process.
I am assuming var_a and var_b are the strings you want to send to program A. They are both arrays of char, which in C behave much like pointers to char (there is a small difference between pointers and arrays, but it is irrelevant for this problem). So they are effectively pointers to the first byte of each argument. sendarray, however, is an array of char-pointers, which behaves like a pointer to a char-pointer. Keep this in mind for a second.
When calling write(), the 2nd parameter tells it where the data is in memory. By passing sendarray, write() treats sendarray as pointing to the data you want to write, although it actually points to yet another pointer. So what happens is that the pointer values of var_a and var_b (which is what sendarray points to) are written to the pipe.
So you have to pass var_a and var_b to write(), since those are pointers to the actual data you want to send. You also have to know how long (how many bytes) that data is. If var_a and var_b point to null-terminated strings, you can use strlen() to determine their length.
One last thing: I don't know exactly how your program A obtains 2 arguments from a continuous byte stream like stdin, but assuming it reads line by line, you obviously have to send a newline character from program B as well.
So putting it all together your write statements should look something like this:
write(fd[1], var_a, strlen(var_a));
write(fd[1], "\n", 1);
write(fd[1], var_b, strlen(var_b));
Of course, if any of the assumptions I made is wrong, you have to adapt this code appropriately.
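Putting that together, the parent branch would look roughly like this (a sketch under the same assumptions; note also that fgets keeps the trailing newline when the line fits in the buffer, so var_a and var_b may already end in '\n'):

/* Parent side: send both lines, close the write end so the child sees EOF,
 * then reap the child's exit status. */
close(fd[0]);
write(fd[1], var_a, strlen(var_a));
write(fd[1], var_b, strlen(var_b));
close(fd[1]);

int status;
if (wait(&status) != -1 && WIFEXITED(status))
    printf("%d\n", WEXITSTATUS(status));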

Within C fork, printf() is not executing after while() loop

I am working on writing a custom shell for a school project, and I need to be able to run external commands via the "execv" function. I need my command to either run successfully with the appropriate output, or state that the command was not found. Here is my code (with some printf() output for debugging) at this point:
/* Create a child process */
pid_t pid = fork();

/* Check if the fork failed */
if (pid >= 0)
{
    if (pid == 0)
    {
        /* This is the child process - see if we need to search for the PATH */
        if( strchr( command.args[0], '/' ) == NULL )
        {
            /* Search the PATH for the program to run */
            char fullpath[ sizeof( getenv("PATH") ) ];
            strcpy( fullpath, getenv("PATH") );

            /* Iterate through all the paths to find the appropriate program */
            char* path;
            path = strtok( fullpath, colon );
            while(path != NULL)
            {
                char progpath[COMMAND_SIZE];

                /* Try the next path */
                path = strtok( NULL, colon );
                strcpy(progpath, path);
                strcat(progpath, "/");
                strcat(progpath, command.args[0]);

                /* Determine if the command exists */
                struct stat st;
                if(stat(progpath, &st) == 0)
                {
                    /* File exists. Set the flag and break. */
                    execv( progpath, command.args );
                    exit(0);
                }
                else
                {
                    printf("Not found!\n");
                }
            }
            printf("%s: Command not found!\n", command.args[0]);
        }
        else
        {
            ...
        }

        /* Exit the process */
        exit(EXIT_FAILURE);
    }
    else
    {
        /* This is the parent process - wait for the child command to exit */
        waitpid( pid, NULL, 0 );
        printf("Done with fork!\n");
    }
}
else
{
    /* Could not fork! */
    printf("%s: %s > Failed to fork command!\n", command.args[0], strerror(errno) );
}
And here is the output:
john@myshell:/home/john/project>dir
/usr/local/sbin/dir: Not found!
/usr/local/bin/dir: Not found!
/usr/sbin/dir: Not found!
/usr/bin/dir: Not found!
/sbin/dir: Not found!
/bin/dir: Found!
makefile makefile~ myshell.c myshell.c~ myshell.x
Done with fork!
john@myshell:/home/john/project>foo
/usr/local/sbin/foo: Not found!
/usr/local/bin/foo: Not found!
/usr/sbin/foo: Not found!
/usr/bin/foo: Not found!
/sbin/foo: Not found!
/bin/foo: Not found!
/usr/games/foo: Not found!
Done with fork!
john@myshell:/home/john/project>
The known command "dir" is being found and executed properly. The output is great. However, when I use the fake "foo" command, I expected it to not find the command (which it clearly doesn't), complete the "while" loop, and execute the following "printf" command. This being said, I expected to see the following near the end of the output:
foo: Command not found!
I have tried using a boolean and integer value as a "flag" to determine if the command was found. However, no code seems to run outside the while loop at all. If I remove the "exit(0)", the "printf" command still doesn't run. I am stuck and baffled as to why the code outside the while loop doesn't seem to run at all. I also don't know if this is a problem with the way I am forking or if this has to do with the output buffer.
Am I doing this the wrong way, or how can I ensure that the "Command not found" message always runs exactly one time if the command was not found?
There's an error in your code -- you are using strcpy() and causing a buffer overrun:
// Note the declaration of getenv():
char *getenv(const char *name);
Therefore sizeof(getenv("PATH")) == sizeof(char*), which is probably 4 or 8.
/* Search the PATH for the program to run */
char fullpath[ sizeof( getenv("PATH") ) ]; // allocate fullpath[4] or [8]
strcpy(fullpath, getenv("PATH")); // overrun... copy to 4-8 char stack buffer
// UNDEFINED behavior after this - Bad Things ahead.
You could use malloc() instead to allocate fullpath on the heap dynamically:
char* fullpath = malloc(strlen(getenv("PATH")) + 1); // +1 for terminating NUL
strcpy(fullpath, getenv("PATH")); // OK, buffer is allocated large enough
// ... use fullpath ...
// Then when you are done, free the allocated memory.
free(fullpath);
// And as a general habit you want to clear the pointer after freeing
// the memory to prevent hard-to-debug use-after-free bugs.
fullpath = 0;
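As an aside (my own suggestion, not part of the original answer), POSIX strdup() does the measure, allocate, and copy in a single call; just remember that getenv() can itself return NULL if PATH is unset:

#include <stdlib.h>
#include <string.h>

const char *env_path = getenv("PATH");               /* may be NULL if PATH is unset */
char *fullpath = env_path ? strdup(env_path) : NULL;
if (fullpath == NULL)
{
    /* PATH was unset, or strdup() ran out of memory */
    exit(EXIT_FAILURE);
}
/* ... use fullpath ... */
free(fullpath);
fullpath = NULL;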

Unable to use "execve()" successfully

The aim of the program is to fork a new child process and execute a process which also has command line arguments. If I enter /bin/ls --help, I get the error:
shadyabhi@shadyabhi-desktop:~/lab/200801076_lab3$ ./a.out
Enter the name of the executable(with full path)/bin/ls --help
Starting the executable as a new child process...
Binary file to be executed: /bin/ls
/bin/ls: unrecognized option '--help
'
Try `/bin/ls --help' for more information.
Status returned by Child process: 2
shadyabhi@shadyabhi-desktop:~/lab/200801076_lab3$
What would be the right argument to execve()?
#include<stdio.h>
#include<string.h>    //strcpy() used
#include<malloc.h>    //malloc() used
#include<unistd.h>    //fork() used
#include<stdlib.h>    //exit() function used
#include<sys/wait.h>  //waitpid() used

int main(int argc, char **argv)
{
    char command[256];
    char **args=NULL;
    char *arg;
    int count=0;
    char *binary;
    pid_t pid;

    printf("Enter the name of the executable(with full path)");
    fgets(command,256,stdin);

    binary=strtok(command," ");
    args=malloc(sizeof(char*)*10);
    args[0]=malloc(strlen(binary)+1);
    strcpy(args[0],binary);
    while ((arg=strtok(NULL," "))!=NULL)
    {
        if ( count%10 == 0) args=realloc(args,sizeof(char*)*10);
        count++;
        args[count]=malloc(strlen(arg));
        strcpy(args[count],arg);
    }
    args[++count]=NULL;

    if ((pid = fork()) == -1)
    {
        perror("Error forking...\n");
        exit(1);
    }
    if (pid == 0)
    {
        printf("Starting the executable as a new child process...\n");
        printf("Binary file to be executed: %s\n",binary);
        execve(args[0],args,NULL);
    }
    else
    {
        int status;
        waitpid(-1, &status, 0);
        printf("Status returned by Child process: %d\n",WEXITSTATUS(status));
    }
    return 0;
}
The first entry in the args array should be the program name again. Your code calls /bin/ls with --help as the process name.
Please check to make sure args is not getting clobbered by the realloc call. See here on SO regarding realloc
Edit:
Also the loop looks funny....
You called strtok like this:
binary=strtok(command," ");
Change the loop construct to use binary instead as shown...
char *tmpPtr;
while (binary != NULL){
    if ( count%10 == 0){
        tmpPtr = realloc(args, sizeof(char*)*(count+10));  /* grow by 10 pointer slots */
        if (tmpPtr != NULL) args = tmpPtr;
    }
    count++;
    args[count-1] = malloc(strlen(binary)+1);
    strcpy(args[count-1], binary);
    binary = strtok(NULL, " ");   /* passing NULL continues tokenizing the same string */
}
args[count] = NULL;               /* execve expects a NULL-terminated argument array */
And use the binary for copying the string....
Hope this helps,
Best regards,
Tom.
Your program has some obvious errors. For instance, declaring char **args=NULL; and then args=realloc(args,sizeof(char)*10); (since it's char**, you should be allocating room for char* elements, no?..).
Since sizeof(char*) is usually 4 or 8 while sizeof(char) is always 1, you end up with some serious memory management problems around there (you allocate less than you use and end up writing where you shouldn't). From there on, all hell breaks loose and you can't expect your program's behavior to make any sense.
I'd suggest that you run your program through a tool such as Valgrind to find the memory errors and correct the program appropriately. Your execve problems will probably disappear as soon as the memory problems are corrected.
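For example, a typical build-and-check run looks like this (the file name shell.c is just a placeholder for whatever your source file is called):

gcc -g -Wall shell.c -o shell
valgrind ./shell

Valgrind will report the out-of-bounds heap writes from the undersized allocation right where they happen, long before the symptoms reach execve().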
