The following code successfully lists the contents of the current directory when run from bash on both Ubuntu and macOS.
#include <unistd.h>   /* execvp */
#include <stddef.h>   /* NULL */

int main() {
    char *args[3];
    args[0] = "ls";
    args[1] = NULL;
    args[2] = NULL;
    execvp(args[0], args);
    return 0;
}
The following code doesn't print anything on Ubuntu bash but prints ls is /bin/ls on MacOS bash.
#include <unistd.h>   /* execvp, fork */
#include <stddef.h>   /* NULL */

int main() {
    //pid_t pid = fork();
    char *args[3];
    args[0] = "type";
    args[1] = "ls";
    args[2] = NULL;
    //if (!pid)
    execvp(args[0], args);
    return 0;
}
When I run type ls directly in bash on Ubuntu, it prints ls is hashed (/bin/ls).
The difference is that type is a bash internal command while ls is not. But why does bash on Ubuntu behave differently from that on MacOS?
Ubuntu bash version: GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
MacOS bash version: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin17)
Judging purely by version numbers (which may be an incorrect thing to do), the older version prints the output correctly while the newer one doesn't?
You forgot to test for failure of execvp. Try at least coding something like this (it needs <stdio.h>, <stdlib.h>, <string.h>, and <errno.h>):
if (execvp(args[0], args)) {    /* execvp returns only when it has failed */
    fprintf(stderr, "execvp %s failed: %s\n",
            args[0], strerror(errno));
    exit(EXIT_FAILURE);
}
Probably, on your Ubuntu system, the execvp of type fails. Perhaps macOS has some /usr/bin/type or something similar somewhere in your PATH.
Read the documentation of execvp(3) carefully on both systems. Consider also using strace(1) on Linux to understand what is going on (a similar tool exists for macOS).
Notice that execvp works only on executable files (not on shell builtin commands).
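For completeness, here is how the second program looks with that failure check folded in - a minimal sketch, nothing beyond what this answer already suggests:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char *args[] = { "type", "ls", NULL };
    execvp(args[0], args);                       /* returns only if it failed */
    fprintf(stderr, "execvp %s failed: %s\n", args[0], strerror(errno));
    return EXIT_FAILURE;
}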
There is no bash in your question. That is to say, nothing in the execution of that program has anything to do with bash.
execvp is, effectively, a system call which has the effect (if it succeeds, which should not be taken for granted) of replacing the current execution environment with a new process image, loading the executable from the file indicated as the first argument. The OS neither needs nor seeks assistance from bash to execute the program.
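As a small illustration of that point, here is a sketch (echo stands in for any external program): once execvp succeeds, nothing after it ever runs.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("before exec\n");
    fflush(stdout);                          /* flush before the process image is replaced */
    char *args[] = { "echo", "printed by echo, not by the C program", NULL };
    execvp(args[0], args);                   /* on success, nothing below ever runs */
    printf("only reached if execvp failed\n");
    return 1;
}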
If you want to use bash, you need to ask the OS to run bash. This might be useful if you want to run a bash built-in command:
char* args[] = { "bash", "-c", "type ls", 0};
execvp(args[0], args);
But since you are not invoking bash, you rely on the existence of an external command utility named type. And it is the existence or not of that utility which leads to the different behaviour. It has nothing to do with bash or any other shell.
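For reference, a minimal sketch of that bash -c route, with a fork so the calling program itself is not replaced (the command string is the one from the question):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {                                    /* fork failed */
        perror("fork");
        return 1;
    }
    if (pid == 0) {                                   /* child: becomes bash */
        char *args[] = { "bash", "-c", "type ls", NULL };
        execvp(args[0], args);
        perror("execvp");                             /* reached only if exec failed */
        _exit(127);
    }
    waitpid(pid, NULL, 0);                            /* parent waits for the child */
    return 0;
}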
I have spent the last 2 days trying to understand the execlp() system call, but yet here I am. Let me get straight to the issue.
The man page of execlp declares the system call as int execlp(const char *file, const char *arg, ...); with the description: The const char *arg and subsequent ellipses in the execl(), execlp(), and execle() functions can be thought of as arg0, arg1, ..., argn.
Yet I see the call being made like this in our textbook: execlp("/bin/sh", ..., "ls -l /bin/??", ...); (the "..." are for us to figure out as students). However, this doesn't even resemble the declaration on the man page.
I am super confused. Any help is appreciated.
This prototype:
int execlp(const char *file, const char *arg, ...);
Says that execlp is a variable-argument function. It takes two const char *. The rest of the arguments, if any, are the additional arguments to hand over to the program we want to run - also char *. All of these are C strings, and the last argument must be a NULL pointer.
So, the file argument names the executable file to be executed (execlp searches for it in PATH if it contains no slash). arg is the string we want to appear as argv[0] in the executable. By convention, argv[0] is just the file name of the executable; normally it is set to the same value as file.
The ... are now the additional arguments to give to the executable.
Say you run this from a commandline/shell:
$ ls
That'd be execlp("ls", "ls", (char *)NULL);
Or if you run
$ ls -l /
That'd be execlp("ls", "ls", "-l", "/", (char *)NULL);
So on to execlp("/bin/sh", ..., "ls -l /bin/??", ...);
Here you are going to the shell, /bin/sh, and you're giving the shell a command to execute. That command is "ls -l /bin/??". You can run that manually from a commandline/shell:
$ ls -l /bin/??
Now, how do you run a shell and tell it to execute a command? You open up the documentation/man page for your shell and read it.
What you want to run is:
$ /bin/sh -c "ls -l /bin/??"
This becomes
execlp("/bin/sh","/bin/sh", "-c", "ls -l /bin/??", (char *)NULL);
Side note:
The /bin/?? does pattern matching. That pattern matching is done by the shell, and it expands to all files under /bin/ with two-character names. If you simply did
execlp("ls","ls", "-l", "/bin/??", (char *)NULL);
Most likely ls would just complain that /bin/?? does not exist (unless there is a file literally named /bin/??), because there is no shell there to interpret and expand /bin/??
The limitation of execl is that it does not search PATH, so to execute a command or script you have to pass a path to the file, for example the full path.
Example:
execl("/bin/ls", "ls", "-la", NULL);
The alternative to passing the full path of the executable is to use the function execlp, which searches for the file (the first argument of execlp) in the directories listed in PATH:
execlp("ls", "ls", "-la", NULL);
I've been trying to use execvp to run a C program, but it always seems to fail.
From main.c:
#include <stdio.h>    /* printf */
#include <unistd.h>   /* execvp */

int main() {
    char *args[] = {"2", "1"};
    if (execvp("trial.c", args) == -1) {
        printf("\nfailed connection\n");
    }
}
From trial.c:
#include <stdio.h>

int main(int argc, char **argv) {
    printf("working");
    return 1;
}
I think I tried every possible way to represent that file location in the exec() call, and it always results in "failed connection".
The first parameter to execvp expects the name of an executable file. What you've passed it is the name of a source file. You need to first compile trial.c, then pass the name of the compiled executable to execvp.
Regarding the second parameter to execvp, the last element in the array must be NULL; that's how it knows it has reached the end of the list. Also, by convention, the first element is the name of the program itself.
So first compile trial.c:
gcc -g -Wall -Wextra -o trial trial.c
Then modify how it is called in main.c:
#include <stdio.h>
#include <unistd.h>

int main() {
    char *args[] = { "trial", "2", "1", NULL };
    if (execvp("trial", args) == -1) {   /* "trial" must be findable via PATH; use "./trial" otherwise */
        printf("\nfailed connection\n");
        return 1;
    }
}
The first argument to execvp is the path to the executable.
You need to build the executable from trial.c and pass the path of that executable to execvp.
if(execvp("---path to executable---/ExecTrial", args) == -1) {
printf("\nfailed connection\n");
}
If you don't pass the executable path, execvp will search for the executable in the colon-separated list of directory pathnames specified in the PATH environment variable.
trial.c is not a valid executable file. C is not a scripting or interpreted language; you cannot run C source files directly. A C program must be compiled and linked into an executable.
If you're trying to call execvp() on a source file, that is not what this function does. The first argument is expected to be the path of an executable. If you want to run the program whose source is trial.c, you have to build (compile and link) it first, for example like this:
$ gcc -o trial trial.c
Then call execvp() on your newly created executable instead of the source file:
if(execvp("trial", args) == -1) { //...
Generally, argv[0] is the same as the name used to run the executable. For example:
If I execute program with ./my_program then argv[0] is ./my_program
If I execute program with /home/username/my_program then argv[0] is /home/username/my_program.
My question is: if PATH contains /home/username, why can't I see the argv[0] value?
This is my real situation:
PATH=/home/knight/bin:/home/knight/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/knight
My test program source is:
#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("%s\n", argv[0]);
}
My home directory is /home/knight so I can execute program directly.
knight@knight-desktop:~$ test
knight@knight-desktop:~$ ./test
./test
I can't understand why the test command (run as knight@knight-desktop:~$ test) doesn't print any result.
Because test is a shell builtin command.
There is a big difference between ./test, which runs your executable file, and test, which is a command name handed straight to the shell; the shell resolves that name itself. If you had typed something it doesn't recognise, say tst, the result would be -bash: tst: command not found.
To check whether a word is a builtin command or a reserved keyword of the shell, use the type command. On the terminal:
$ type test
test is a shell builtin
$ type if
if is a shell keyword
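To see the difference yourself, call the file by its path or rename it to something that is not a builtin (this assumes, as in the question, that /home/knight is in PATH):
$ ./test
./test
$ mv test mytest
$ mytest
mytest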
I have two (Ubuntu Linux) bash scripts which take input arguments. They need to be run simultaneously. I tried execve with arguments e.g.
char *argv[10] = { "/mnt/hgfs/F/working/script.sh", "file1", "file2", NULL };
execve(argv[0], argv, NULL);
but the bash script can't seem to find any arguments at e.g. $0, $1, $2.
The script is:
printf "gcc -c ./%s.c -o ./%s.o\n" $1 $1;
gcc -c ./$1.c -o ./$1.o -g
exit 0;
The output is gcc -c ./main.c -o ./main.o
and then a lot of errors like /usr/include/libio.h:53:21: error: stdarg.h: No such file or directory
What's missing?
Does your script start with the hashbang line? I think that's a must, something like:
#!/bin/bash
For example, see the following C program:
#include <stdio.h>
#include <unistd.h>
char *argv[10] = { "./qq.sh", "file1", NULL };
int main (void) {
int rc = execve (argv[0], argv, NULL);
printf ("rc = %d\n", rc);
return 0;
}
When this is compiled and run with the following qq.sh file, it outputs rc = -1:
echo $1
When you change the file to:
#!/bin/bash
echo $1
it outputs:
file1
as expected.
The other thing you need to watch out for is the use of VMware shared folders, evidenced by /mnt/hgfs. If the file was created with a Windows-type editor, it may have the "DOS" line endings of carriage-return/line-feed - that may well be causing problems with the execution of the scripts.
You can check for this by running:
od -xcb /mnt/hgfs/F/working/script.sh
and seeing if any \r characters appear.
For example, if I use the shell script with the hashbang line in it (but append a carriage return to that line), I also get the rc = -1 output, meaning it couldn't find the shell.
And, now, based on your edits, your script has no trouble interpreting the arguments at all. The fact that it outputs:
gcc -c ./main.c -o ./main.o
is proof positive of this since it's seeing $1 as main.
The problem you actually have is that the compiler is working but it cannot find stdarg.h, included from your libio.h file - this has nothing to do with whether bash can see those arguments.
My suggestion is to try and compile it manually with that command and see if you get the same errors. If so, it's a problem with what you're trying to compile rather than a bash or exec issue.
If it does compile okay, it may be because of the destruction of the environment variables in your execve call.
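If that turns out to be the cause, one option is to pass the caller's environ instead of NULL, so the script (and gcc) still sees PATH and the rest of the environment. A minimal sketch under that assumption:

#include <stdio.h>
#include <unistd.h>

extern char **environ;                    /* the caller's environment (POSIX) */

int main(void) {
    char *argv[] = { "/mnt/hgfs/F/working/script.sh", "file1", "file2", NULL };
    /* passing environ keeps PATH and friends visible to the script and to gcc */
    execve(argv[0], argv, environ);
    perror("execve");                     /* only reached if execve failed */
    return 1;
}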