I have a command that executes fine in a normal terminal on Linux:
xterm -e bash -c "some commands"
I want to execute the above command from a C program using the exec* system calls. I tried the following code, but it just gives me a plain xterm window.
execl("/usr/bin/xterm", "/usr/bin/xterm -e bash -c \"some commands\"", NULL);
Is there any way I can execute the above command using the exec* calls? Thank you!
You need to call it like:
execl("/usr/bin/xterm", "/usr/bin/xterm", "-e", "bash", "-c", "some commands", (void*)NULL);
The convention is for the first argument to be the same as the path to the program. Each exec* argument becomes exactly one argv entry, so putting spaces inside a single argument has the same effect as calling xterm 'something with spaces' instead of xterm something with spaces.
A possible tangent: is there any reason why you need these to run specifically within xterm? If you just want to run some shell commands, then running them within /bin/sh or /bin/bash would be more natural, and probably more reliable.
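If that route works for you, a minimal sketch (with "some commands" standing in for whatever you actually want to run) could look like this:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: run the commands through /bin/sh instead of xterm */
        execl("/bin/sh", "sh", "-c", "some commands", (char *)NULL);
        perror("execl");      /* only reached if exec fails */
        _exit(127);
    }
    waitpid(pid, NULL, 0);    /* parent waits for the shell to finish */
    return 0;
}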
Related
I'm studying Apple's implementation of popen() at https://opensource.apple.com/source/Libc/Libc-167/gen.subproj/popen.c.auto.html and noticed that they do execl(_PATH_BSHELL, "sh", "-c", command, NULL) instead of execl(_PATH_BSHELL, command, NULL).
Why would you want to (or should you) exec an executable, e.g. a.out via sh -c instead of just the executable itself?
If you exec sh -c a.out instead of just a.out itself, does the actual a.out process end up being a "grandchild" process and not a child process?
Why would you want to (or should you) exec an executable, e.g. a.out via sh -c instead of just the executable itself?
popen() is designed to run shell commands that include shell syntax like > redirection, | pipes, and && command chaining. It needs to pass the string through sh -c in order to support those constructs. If it didn't, those syntactical features would be passed verbatim as arguments to the program in question.
For example, popen("make clean && make") should trigger two make invocations. Without sh -c it would call make once with three arguments, as if one had typed
$ make clean '&&' make
at the terminal.
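For contrast, a sketch of the non-shell route: exec'ing make directly hands "&&" over as an ordinary argument instead of treating it as command chaining.
#include <unistd.h>

int main(void)
{
    /* No shell involved: "&&" reaches make as a literal argument */
    char *argv[] = { "make", "clean", "&&", "make", (char *)NULL };
    execvp(argv[0], argv);
    return 1;                 /* reached only if exec fails */
}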
If you exec sh -c a.out instead of just a.out itself, does the actual a.out process end up being a "grandchild" process and not a child process?
Yes, that is correct. There will be a sh process in between the current process and a.out.
that they do execl(_PATH_BSHELL, "sh", "-c", command, NULL) instead of execl(_PATH_BSHELL, command, NULL)
The latter would NOT have executed command directly, but _PATH_BSHELL (/bin/sh) with its $0 set to command and no arguments, resulting in a shell expecting commands from its stdin.
Also, that syntax relies on NULL being defined as an explicit pointer (e.g. ((void*)0)) rather than just 0, which is not guaranteed anywhere. While they can do that in their implementation (because they control all the headers), it's not what you should do in application code.
And no, execl(command, command, (void*)NULL) wouldn't have executed command directly either, unless command is a) a full path and b) in an executable format (binary or a script starting with a she-bang #! -- the latter being a non-standard extension). If command was a simple command name to be looked up in PATH (like pwd or a.out) or an executable script not starting with a she-bang, you should've used execlp instead of execl.
The exec[lv]p[e] functions do some of the things a shell does (like looking through the PATH), but not all of them (like running multiple commands or expanding variables): that's why functions like system(3) or popen(3) pass the command to /bin/sh -c. Notice that with both it's /bin/sh, not the user's login shell or the $SHELL from the environment which is used.
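As an illustration of that last point, a small sketch: execlp() is happy with a bare command name because it searches PATH, whereas execl() would need the full path (e.g. /bin/pwd).
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    execlp("pwd", "pwd", (char *)NULL);   /* looked up in PATH */
    perror("execlp");                     /* exec* returns only on failure */
    return 1;
}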
If you exec sh -c a.out instead of just a.out itself, does the actual a.out process end up being a "grandchild" process and not a child process?
Only with some shells like dash. Not with bash, ksh93, mksh, zsh, yash, busybox, etc, which will execute a.out directly instead of forking and waiting for it.
I would like to know if my program is executed from a command line, or executed through system() call, or from a script.
I initially thought about getting the parent id (getppid()) and looking in the /proc/<ppid> directory, checking either the exe link or the contents of the cmdline file. If it is /bin/bash, /bin/csh, or /bin/sh, I would know that it runs from the command line.
The problem is that this is not reliable, because a standalone script would also show /bin/bash.
Even if it worked, it would be a very Linux-specific approach and could stop working in the future.
Is there a better way to do that?
Thank you for any advice or for pointing me in the right direction.
Most shells written since 1980 support job control, which is implemented by assigning a process group to each process in a pipeline of commands. The process group is set by setpgrp(), which sets the process's pgrp to its pid.
The pgrp is inherited across forks.
So, if your shell is a relatively modern one, a program started by an interactive shell will have getpid() == getpgrp(), and any additional processes forked by that process (say, if it's a shell script or if it calls system()) will have getpid() != getpgrp().
Here's a test program, and its behavior under bash (it will behave the same under ksh93 and tcsh as well):
pp.c
#include <unistd.h>
#include <stdio.h>
int main(void)
{
    printf("pid=%d pgrp=%d\n", (int)getpid(), (int)getpgrp());
    return 0;
}
$ ./pp
pid=3164 pgrp=3164
$ ./pp &
[1] 3165
$ pid=3165 pgrp=3165
In a pipeline, the leftmost command is the process group leader. (This isn't documented, but bash, ksh93, and tcsh all do it this way).
$ ls|./pp
pid=3179 pgrp=3178
$ ./pp|cat
pid=3180 pgrp=3180
A program called with system() will have the same pgrp as its parent:
pps.c
#include <stdlib.h>
int main(void)
{
    system("./pp");
    return 0;
}
$ ./pps
pid=4610 pgrp=4608
In a shell script, the shell is the process group leader, and any command invoked by it will inherit the pgrp:
pp.sh
#!/bin/sh
./pp
$ ./pp.sh
pid=4501 pgrp=4500
But if the shell script execs a program, the pid doesn't change, and the execed program will be a process group leader, so you probably don't want to do that.
ppe.sh
#!/bin/sh
exec ./pp
$ ./ppe.sh
pid=4504 pgrp=4504
In the unlikely case that the user turns off job control, every command is going to have the same pgrp as the shell:
$ set +m
$ ./pp
pid=4521 pgrp=2990
$ ./pp
pid=4522 pgrp=2990
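Putting this together for the original question, a minimal sketch of the check (with the caveats about pipelines, exec'ed scripts, and disabled job control noted above):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Heuristic only: compares this process's pid with its process group */
    if (getpid() == getpgrp())
        printf("likely started directly from an interactive command line\n");
    else
        printf("likely started via system(), a script, or a pipeline\n");
    return 0;
}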
You can intercept the signal from the parent PID when the script is done and check for it with kill.
Not sure if that solves your problem, but environment variables can give you a good hint. For example:
$ set | grep "^_="
_=
$ bash -c "set" | grep "^_="
_=/bin/bash
$ sh -c "set" | grep "^_="
_='/bin/sh'
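If you want to read that hint from inside the program itself, a sketch (note that "_" is set by some shells such as bash, but it is only a hint and may be absent or stale in other environments):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *underscore = getenv("_");   /* hint set by some shells */
    printf("_=%s\n", underscore ? underscore : "(unset)");
    return 0;
}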
I am trying to run a biological program called BLASTP which takes in two strings (fasta_GWIDD and fasta_UNIPROT in the code) and compares them. The problem I am encountering is the use of echo/system in the code. Can anyone suggest what I am missing?
for (i = 0; i < index1; i++)
{
    sprintf(fasta_GWIDD, ">%s\\n%s\n", fasta_name1[i], fasta_seq1[i]);
    setenv("GwiddVar", fasta_GWIDD, 1);
    sprintf(fasta_UNIPROT, ">%s\\n%s\n", fasta_name2[i], fasta_seq2[i]);
    setenv("UniprotVar", fasta_UNIPROT, 1);
    system("blastp -query <(echo -e $GwiddVar) -subject<(echo -e $UniprotVar)");
}
The error is:
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `blastp -query <(echo -e $GwiddVar) -subject<(echo -e $UniprotVar)'
It seems that the shell does not understand the
<(echo -e $GwiddVar)
syntax. Mind that the system() call may use a different shell than the one you are used to (csh instead of bash, and so on). It's all somewhere in your OS config files and profile, but I can't guess what you have set up there.
Btw. I think you should be able to check which shell is being used by the system() call with either of these:
system("echo $SHELL");  // should simply print the path to the current shell
system("ps -aux");      // look at the output and find the parent of the ps process
etc.
Considering that this was correct on some shell:
blastp -query <(echo -e $GwiddVar) -subject<(echo -e $UniprotVar)
The syntax cited above is apparently meant only to pass the variable as input. I think you are overdoing it. You are using echo -e $GwiddVar to print and recapture data that you already have at hand in a variable. Have you tried something as simple as:
blastp -query $GwiddVar -subject $UniprotVar
I don't know which shell you are trying to use, but considering that echo got its data, the result should be exactly the same.
If you are worried about spaces, then various shells usually allow you to use quotation marks:
blastp -query "$GwiddVar" -subject "$UniprotVar"
Of course it depends on the shell. If your program uses a shell that does not like quotation marks, well, you have to adapt it. Not to your shell, but to the shell that system() has used.
Another thing is that using system is quite rough. When you have arguments that are difficult to escape correctly, you should use other functions like execve that take an array of raw strings and pass them directly as argv to the process. Using these, you will not need to (and should not) add any quotes or escape any spaces in the strings being passed.
sprintf(fasta_GWIDD,">%s\\n%s\n",fasta_name1[i],fasta_seq1[i]);
sprintf(fasta_UNIPROT,">%s\\n%s\n",fasta_name2[i],fasta_seq2[i]);
char *args[6];                    /* five arguments plus the terminating NULL */
args[0] = "blastp";
args[1] = "-query";
args[2] = fasta_GWIDD;
args[3] = "-subject";
args[4] = fasta_UNIPROT;
args[5] = NULL;                   /* argv must end with a NULL pointer */
execvp(args[0], args);            /* searches PATH; returns only on failure */
perror("execvp");                 /* check the error (if any) and react */
However! Note that execvp, like the rest of the exec family, replaces your current process. This is why I only write a sketch and don't show whole ready-to-run code. You will probably need to fork() before the exec and then wait for the child in each iteration of the outer loop.
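A minimal sketch of that fork/exec/wait pattern, assuming blastp is found via PATH and reusing the fasta_GWIDD/fasta_UNIPROT buffers from the question:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch only: runs one blastp comparison per call; query/subject are the
 * strings built by the sprintf() calls in the question's loop. */
static int run_blastp(char *query, char *subject)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {
        char *args[] = { "blastp", "-query", query,
                         "-subject", subject, (char *)NULL };
        execvp(args[0], args);    /* searches PATH for blastp */
        perror("execvp");         /* reached only if exec fails */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);     /* parent waits before the next iteration */
    return status;
}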
So, I'd first check the shell and syntax ;)
From man 3 system:
DESCRIPTION
system() executes a command specified in command by calling /bin/sh -c
command, and returns after the command has been completed.
On many systems, /bin/sh is not bash, and even when it is, it is a different configuration of bash (bash typically operates differently when invoked as /bin/sh). So you are passing bash syntax to a shell that is either not bash or doesn't allow the full set of bash-isms. Also, there's a space missing after -subject that might be confusing things as well. (Environment variables themselves are expanded by the shell that system() invokes, so the $GwiddVar references are not the problem.)
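If process substitution is really wanted, one possible workaround (a sketch, assuming bash is installed and the variables were exported with setenv() as in the question) is to invoke bash explicitly instead of relying on system()'s /bin/sh:
#include <stdlib.h>

int main(void)
{
    /* Sketch: force bash so that <( ... ) process substitution is understood;
     * GwiddVar and UniprotVar are assumed to already be in the environment. */
    return system("bash -c 'blastp -query <(echo -e \"$GwiddVar\")"
                  " -subject <(echo -e \"$UniprotVar\")'");
}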
What's the difference between "sh -c cmd" and "cmd", when executed from a shell command line, and when executed from a C exec() function, respectively? Thanks.
It depends on what 'cmd' represents. In the basic case where it is a simple command name (say ps or ls), then there is no difference at the shell command line, and precious little difference when executed by execvp(). The 'non-p' exec*() functions do have slightly different semantics; they don't use the PATH variable, so the command must exist and be executable in the current directory or it will fail.
However, if cmd is more complex, then it can make a big difference. For example:
$ echo $$
17429
$ sh -c 'echo $$'
76322
$ sh -c "echo $$"
17429
$
The first reports the process ID of the original shell; the second reports the process ID of the shell run as sh; the third is an expensive way of reporting the process ID of the original shell. Note that the single quotes vs double quotes are significant too. Here, the quotes would not be present in the C invocation (the shell removes the quotes from around the arguments), and the value of $$ would be that of the child shell:
char *argv[] = { "sh", "-c", "echo $$", 0 };
execvp(argv[0], argv);
(I said the quotes are not present in the C invocation; they are needed around the string in the C code, but the value passed to sh doesn't contain any quotes — so I meant what I said, though it might not be quite as blindingly obvious as all that.)
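For completeness, a minimal runnable version of that snippet (only the includes and error handling are additions):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *argv[] = { "sh", "-c", "echo $$", 0 };
    execvp(argv[0], argv);    /* sh replaces this process, so $$ is its PID */
    perror("execvp");         /* reached only if exec fails */
    return 1;
}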
From the man page:
-c string If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
Just cmd will run through bash (or whatever your default shell is). You need sh to explicitly pass the -c argument.
exec shouldn't make a difference here.
I want to get the output of multiple strace calls in one file,
but I do not know how.
At the moment I am using:
strace -o tmpfile, but this just puts the output of one call in and then overwrites the file with the new output.
Has anyone an idea, how to do this?
I hope this is not a dumb question.
Thanks in advance.
Under the bash shell use the following command
strace -o >(cat >>outputfile) command [args] ...
This will pass to the -o flag an argument that looks like a file name, but is actually a path to a file descriptor connected to the standard input of the
cat >>outputfile
process. That process appends its input to the specified output file.
Instead of strace -o somefile command, can you just do strace command >> somefile? Alternatively, assuming a similar version of strace, my manual for strace indicates this should work: strace -o "|tee -a somefile" command (the -o "|command" functionality is implemented by strace itself, not by the shell).
I could not manage to do this via the call itself (in the Android shell).
I just read through all the files and wrote them into one log file.
This slows the whole process down, but it was the only solution I found.
The strace output is on stderr, so strace 2>> outfile did the trick for me. If you invoke strace as a single command, you have to call it like this: adb -e shell "strace -p pid 2>> file"