ZSH alias definition and expansion within eval string

I came across a curiosity when playing around whilst learning ZSH, and I'm having a hard time finding information related to this. I was wondering about the technical explanation of why this doesn't work (defining and then expanding an alias within a single eval call):
eval "alias d='echo hello'; d"
zsh: command not found: d
whereas this does work:
eval "function d = { echo hello; }; d"
hello

eval has nothing to do with the issue. Calling just
alias d='echo hello'; d
will not work either.
The reason for that lies in the way zsh parses a command line. All aliases in a command line are substituted before zsh even tries to execute it. In this example zsh does not know about the alias d when the aliases are substituted and so zsh comes up empty when looking for the command d.
The example with the function on the other hand works, because zsh looks up where a command name points just before it tries to run it. So first the function d is defined and when zsh encounters the command d it looks for a matching function (or built-in or external command) and finds the previously defined function.
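Since the problem is purely one of parse order, a minimal workaround sketch (my addition, not part of the original answer): feed the definition and the use to the parser separately, so the alias already exists when the second string is parsed.
eval "alias d='echo hello'"
eval "d"
hello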

Related

(LLDB on MacOs Catalina) Shell Expansion Failed

When trying to use the r or run commands in lldb I get an error like this: error: shell expansion failed (reason: invalid JSON). consider launching with 'process launch'.
It works when I just use process launch but I really do not feel like doing that.
Is there any way I could make either an alias or make shell expansions not fail?
The way lldb does shell expansion is to run a little tool called lldb-argdumper (it is in Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources on macOS) with the command arguments that you passed. lldb-argdumper wraps the contents of argv as JSON, and writes that to stdout. lldb then parses the JSON back into args and inserts the args one by one into the argc/argv array when it launches the process.
Something in the output is not getting properly wrapped. You can probably see what it is by looking at the output of lldb-argdumper with your arguments. Whatever it is, it's a bug, so if you can reproduce it please file a bug with your example at http://bugs.llvm.org.
(lldb) command alias run-no-shell process launch -X 0 --
will produce an alias that doesn't do shell expansion. You can also put this in your ~/.lldbinit.
I ran into this recently. TL;DR: make sure your shell does not echo anything during initialization. Run <your-shell> -c date to confirm; only the date should be printed.
The problem was that my shell's initialization file was echoing some stuff, which was getting prepended to lldb-argdumper's JSON output. (lldb doesn't run lldb-argdumper directly; it invokes your default shell to run lldb-argdumper.)
Specifically, I use fish as my shell, which does not have separate initialization paths for interactive and non-interactive sessions. (See this issue for discussion of whether this is good.) bash and zsh have separate init files for interactive/non-interactive sessions, which makes avoiding this problem slightly easier.
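For bash or zsh, a minimal sketch of how to keep such output out of non-interactive sessions (assuming the echo lives in a file that non-interactive shells also read, such as ~/.zshenv or a sourced profile):
case $- in
    *i*) echo "welcome" ;;    # $- contains 'i' only in interactive shells
esac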

dlopen() ignoring $LD_LIBRARY_PATH [duplicate]

What is export for?
What is the difference between:
export name=value
and
name=value
export makes the variable available to sub-processes.
That is,
export name=value
means that the variable name is available to any process you run from that shell process. If you want a process to make use of this variable, use export, and run the process from that shell.
name=value
means the variable scope is restricted to the shell, and is not available to any other process. You would use this for (say) loop variables, temporary variables etc.
It's important to note that exporting a variable doesn't make it available to parent processes. That is, specifying and exporting a variable in a spawned process doesn't make it available in the process that launched it.
To illustrate what the other answers are saying:
$ foo="Hello, World"
$ echo $foo
Hello, World
$ bar="Goodbye"
$ export foo
$ bash
bash-3.2$ echo $foo
Hello, World
bash-3.2$ echo $bar
bash-3.2$
It has been said that it's not necessary to export in bash when spawning subshells, while others said the exact opposite. It is important to note the difference between subshells (those that are created by (), ``, $() or loops) and subprocesses (processes that are invoked by name, for example a literal bash appearing in your script).
Subshells will have access to all variables from the parent, regardless of their exported state.
Subprocesses will only see the exported variables.
What is common in these two constructs is that neither can pass variables back to the parent shell.
$ noexport=noexport; export export=export; (echo subshell: $noexport $export; subshell=subshell); bash -c 'echo subprocess: $noexport $export; subprocess=subprocess'; echo parent: $subshell $subprocess
subshell: noexport export
subprocess: export
parent:
There is one more source of confusion: some think that 'forked' subprocesses are the ones that don't see non-exported variables. Usually fork()s are immediately followed by exec()s, and that's why it would seem that the fork() is the thing to look for, while in fact it's the exec(). You can run commands without fork()ing first with the exec command, and processes started by this method will also have no access to unexported variables:
$ noexport=noexport; export export=export; exec bash -c 'echo execd process: $noexport $export; execd=execd'; echo parent: $execd
execd process: export
Note that we don't see the parent: line this time, because we have replaced the parent shell with the exec command, so there's nothing left to execute that command.
This answer is wrong but retained for historical purposes. See 2nd edit below.
Others have answered that export makes the variable available to subshells, and that is correct but merely a side effect. When you export a variable, it puts that variable in the environment of the current shell (ie the shell calls putenv(3) or setenv(3)).
The environment of a process is inherited across exec, making the variable visible in subshells.
Edit (with 5 year's perspective):
This is a silly answer. The purpose of 'export' is to make variables "be in the environment of subsequently executed commands", whether those commands be subshells or subprocesses. A naive implementation would be to simply put the variable in the environment of the shell, but this would make it impossible to implement export -p.
2nd Edit (with another 5 years in passing).
This answer is just bizarre. Perhaps I had some reason at one point to claim that bash puts the exported variable into its own environment, but those reasons were not given here and are now lost to history. See Exporting a function local variable to the environment.
export NAME=value for settings and variables that have meaning to a subprocess.
NAME=value for temporary or loop variables private to the current shell process.
In more detail, export marks the variable name to be copied into the environment of subprocesses (and their subprocesses) upon creation. No name or value is ever copied back from the subprocess.
A common error is to place a space around the equal sign:
$ export FOO = "bar"
bash: export: `=': not a valid identifier
Only the exported variable (B) is seen by the subprocess:
$ A="Alice"; export B="Bob"; echo "echo A is \$A. B is \$B" | bash
A is . B is Bob
Changes in the subprocess do not change the main shell:
$ export B="Bob"; echo 'B="Banana"' | bash; echo $B
Bob
Variables marked for export have values copied when the subprocess is created:
$ export B="Bob"; echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash &
[1] 3306
$ B="Banana"; echo '(sleep 30; echo "Subprocess 2 has B=$B")' | bash
Subprocess 1 has B=Bob
Subprocess 2 has B=Banana
[1]+ Done echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash
Only exported variables become part of the environment (man environ):
$ ALICE="Alice"; export BOB="Bob"; env | grep "ALICE\|BOB"
BOB=Bob
So, now it should be as clear as is the summer's sun! Thanks to Brian Agnew, alexp, and William Pursell.
It should be noted that you can export a variable and later change the value. The variable's changed value will be available to child processes. Once export has been set for a variable you must do export -n <var> to remove the property.
$ K=1
$ export K
$ K=2
$ bash -c 'echo ${K-unset}'
2
$ export -n K
$ bash -c 'echo ${K-unset}'
unset
export will make the variable available to all shells forked from the current shell.
As you might already know, UNIX allows processes to have a set of environment variables, which are key/value pairs, both key and value being strings.
The operating system is responsible for keeping these pairs for each process separately.
A program can access its environment variables through this UNIX API:
char *getenv(const char *name);
int setenv(const char *name, const char *value, int overwrite);
int unsetenv(const char *name);
Processes also inherit environment variables from parent processes. The operating system is responsible for creating a copy of all "envars" at the moment the child process is created.
Bash, among other shells, is capable of setting its environment variables on user request. This is what export exists for.
export is a Bash command that sets an environment variable for Bash. All variables set with this command are inherited by all processes that this Bash creates.
More on Environment in Bash
Another kind of variable in Bash is the internal variable. Since Bash is not just an interactive shell but in fact a script interpreter, like any other interpreter (e.g. Python) it is capable of keeping its own set of variables. It should be mentioned that Bash (unlike Python) supports only string variables.
Notation for defining Bash variables is name=value. These variables stay inside Bash and have nothing to do with environment variables kept by operating system.
More on Shell Parameters (including variables)
Also worth noting that, according to Bash reference manual:
The environment for any simple command or function may be augmented
temporarily by prefixing it with parameter assignments, as described
in Shell Parameters. These assignment statements affect only the
environment seen by that command.
To sum things up:
export is used to set an environment variable in the operating system. This variable will be available to all child processes created by the current Bash process from then on.
Bash variable notation (name=value) is used to set local variables available only to the current Bash process.
Bash variable notation prefixing another command creates an environment variable only for the scope of that command.
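A short transcript contrasting the three notations (a sketch; FOO and BAR are arbitrary names assumed not to be set beforehand):
$ FOO=1; bash -c 'echo "plain: ${FOO-unset}"'
plain: unset
$ export FOO=1; bash -c 'echo "exported: ${FOO-unset}"'
exported: 1
$ BAR=2 bash -c 'echo "prefixed: ${BAR-unset}"'; echo "afterwards: ${BAR-unset}"
prefixed: 2
afterwards: unset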
The accepted answer implies this, but I'd like to make explicit the connection to shell builtins:
As mentioned already, export will make a variable available to both the shell and children. If export is not used, the variable will only be available in the shell, and only shell builtins can access it.
That is,
tango=3
env | grep tango # prints nothing, since env is a child process
set | grep tango # prints tango=3 - "type set" shows `set` is a shell builtin
Two of the creators of UNIX, Brian Kernighan and Rob Pike, explain this in their book "The UNIX Programming Environment". Google for the title and you'll easily find a pdf version.
They address shell variables in section 3.6, and focus on the use of the export command at the end of that section:
When you want to make the value of a variable accessible in sub-shells, the shell's export command should be used. (You might think about why there is no way to export the value of a variable from a sub-shell to its parent).
Here's yet another example:
VARTEST="value of VARTEST"
#export VARTEST="value of VARTEST"
sudo env | grep -i vartest
sudo echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"
sudo bash -c 'echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"'
Only by using export VARTEST does the value of VARTEST become available in sudo bash -c '...'!
For further examples see:
http://mywiki.wooledge.org/SubShell
bash-hackers.org/wiki/doku.php/scripting/processtree
Just to show the difference between an exported variable being in the environment (env) and a non-exported variable not being in the environment:
If I do this:
$ MYNAME=Fred
$ export OURNAME=Jim
then only $OURNAME appears in the env. The variable $MYNAME is not in the env.
$ env | grep NAME
OURNAME=Jim
but the variable $MYNAME does exist in the shell
$ echo $MYNAME
Fred
By default, variables created within a script are only available to the current shell; child processes (sub-shells) will not have access to values that have been set or modified. Allowing child processes to see the values requires use of the export command.
As yet another corollary to the existing answers here, let's rephrase the problem statement.
The answer to "should I export" is identical to the answer to the question "Does your subsequent code run a command which implicitly accesses this variable?"
For a properly documented standard utility, the answer to this can be found in the ENVIRONMENT section of the utility's man page. So, for example, the git manual page mentions that GIT_PAGER controls which utility is used to browse multi-page output from git. Hence,
# XXX FIXME: buggy
branch="main"
GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log "$branch"
will not work correctly, because you did not export GIT_PAGER. (Of course, if your system already declared the variable as exported somewhere else, the bug is not reproducible.)
We are explicitly referring to the variable $branch, and the git program code doesn't refer to a system variable branch anywhere (as also suggested by the fact that its name is written in lower case; but many beginners erroneously use upper case for their private variables, too! See Correct Bash and shell script variable capitalization for a discussion), so there is no reason to export branch.
The correct code would look like
branch="main"
export GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log -p "$branch"
(or equivalently, you can explicitly prefix each invocation of git with the temporary assignment
branch="main"
GIT_PAGER="less" git log -n 25 --oneline "$branch"
GIT_PAGER="less" git log -p "$branch"
In case it's not obvious, the shell script syntax
var=value command arguments
temporarily sets var to value for the duration of the execution of
command arguments
and exports it to the command subprocess, and then afterwards, reverts it back to the previous value, which could be undefined, or defined with a different - possibly empty - value, and unexported if that's what it was before.)
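A quick way to see the revert in action (assuming var was not set beforehand):
$ var=value sh -c 'echo "child sees: $var"'
child sees: value
$ echo "parent afterwards: ${var-unset}"
parent afterwards: unset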
For internal, ad-hoc or otherwise poorly documented tools, you simply have to know whether they silently inspect their environment. This is rarely important in practice, outside of a few specific use cases, such as passing a password or authentication token or other secret information to a process running in some sort of container or isolated environment.
If you really need to know, and have access to the source code, look for code which calls the getenv library function (or on Windows, with my condolences, variations like getenv_s or _wgetenv). For some scripting languages (such as Perl or Ruby), look for ENV. For Python, look for os.environ (but notice also that e.g. from os import environ as foo means that foo is now an alias for os.environ). In Node, look for process.env. For C and related languages, look for envp (but this is just a convention for what to call the optional third argument to main, after argc and argv; the language lets you call them anything you like). For shell scripts (as briefly mentioned above), perhaps look for variables with uppercase or occasionally mixed-case names, or usage of the utility env. Many informal scripts have undocumented but discoverable assignments, usually near the beginning of the script; in particular, look for default-value parameter expansions such as ${VARIABLE-default} or ${VARIABLE:=default}.
For a brief demo, here is a shell script which invokes a Python script which looks for $NICKNAME, and falls back to a default value if it's unset.
#!/bin/sh
NICKNAME="Handsome Guy"
demo () {
python3 <<\____
from os import environ as env
print("Hello, %s" % env.get("NICKNAME", "Anonymous Coward"))
____
}
demo
# prints "Hello, Anonymous Coward"
# Fix: forgot export
export NICKNAME
demo
# prints "Hello, Handsome Guy"
As another tangential remark, let me reiterate that you only ever need to export a variable once. Many beginners cargo-cult code like
# XXX FIXME: redundant exports
export PATH="$HOME/bin:$PATH"
export PATH="/opt/acme/bin:$PATH"
but typically, your operating system has already declared PATH as exported, so this is better written
PATH="$HOME/bin:$PATH"
PATH="/opt/acme/bin:$PATH"
or perhaps refactored to something like
for p in "$HOME/bin" "/opt/acme/bin"
do
    case :$PATH: in
        *:"$p":*) ;;
        *) PATH="$p:$PATH";;
    esac
done
# Avoid polluting the variable namespace of your interactive shell
unset p
which avoids adding duplicate entries to your PATH.
Although not explicitly mentioned in the discussion, it is NOT necessary to use export when spawning a subshell from inside bash since all the variables are copied into the child process.
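A two-line illustration of that point (my sketch; the variable name is arbitrary):
$ unexported=visible
$ (echo "subshell: $unexported"); bash -c 'echo "subprocess: ${unexported-nothing}"'
subshell: visible
subprocess: nothing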

Using exec on each file in a bash script

I'm trying to write a basic find command for an assignment (without using find). Right now I have an array of files I want to exec something on. The syntax would look like this:
-exec /bin/mv {} ~/.TRASH
And I have an array called current that holds all of the files. The words /bin/mv, {}, and ~/.TRASH (since I shift the -exec out) are in a second array called arguments.
I need it so that every file gets passed into {} and exec is called on it.
I'm thinking I should use sed to replace the contents of {} like this (within a for loop):
for i in "${current[#]}"; do
sed "s#$i#{}"
#exec stuff?
done
How do I exec the other arguments though?
You can do something like this:
cmd='-exec /bin/mv {} ~/.TRASH'
current=(test1.txt test2.txt)
for f in "${current[#]}"; do
eval $(sed "s/{}/$f/;s/-exec //" <<< "$cmd")
done
Be very careful with eval command though as it can do nasty things if input comes from untrusted sources.
Here is an attempt to avoid eval (thanks to @gniourf_gniourf for his comments):
current=( test1.txt test2.txt )
arguments=( "/bin/mv" "{}" ~/.TRASH )
for f in "${current[#]}"; do
"${arguments[#]/\{\}/$f}"
done
You are lucky that your design is not too bad: your arguments are in an array.
But you certainly don't want to use eval.
So, if I understand correctly, you have an array of files:
current=( [0]='/path/to/file1' [1]='/path/to/file2' ... )
and an array of arguments:
arguments=( [0]='/bin/mv' [1]='{}' [2]='/home/alex/.TRASH' )
Note that you don't have the tilde here, since Bash already expanded it.
To perform what you want:
for i in "${current[#]}"; do
( "${arguments[#]//'{}'/"$i"}" )
done
Observe the quotes.
This will replace all the occurrences of {} in the fields of arguments by the expansion of $i, i.e., by the filename[1], and execute this expansion. Note that each field of the array will be expanded to one argument (thanks to the quotes), so that all this is really safe regarding spaces, glob characters, etc. This is really the safest and most correct way to proceed. Every solution using eval is potentially dangerous and broken (unless some special quoting is used, e.g., with printf '%q', but this would make the method uselessly awkward). By the way, using sed is also broken in at least two ways.
Note that I enclosed the expansion in a subshell, so that it's impossible for the user to interfere with your script. Without this, and depending on how your full script is written, it's very easy to make your script break by (maliciously) changing some variables or cd-ing somewhere else. Running your argument in a subshell, or in a separate process (e.g., a separate instance of bash or sh, but this would add extra overhead) is really mandatory for obvious security reasons!
Note that with your script, the user has direct access to all the Bash builtins (this is a huge pro), compared to some more standard find versions[2]!
[1] Note that POSIX clearly specifies that this behavior is implementation-defined:
If a utility_name or argument string contains the two characters "{}", but not just the two characters "{}", it is implementation-defined whether find replaces those two characters or uses the string without change.
In our case, we chose to replace all occurrences of {} with the filename. This is the same behavior as, e.g., GNU find. From man find:
The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.
[2] POSIX also specifies that calling special built-ins is not defined:
If the utility_name names any of the special built-in utilities (see Special Built-In Utilities), the results are undefined.
In your case, it's well defined!
I think that trying to implement (in pure Bash) a find command is a wonderful exercise that should teach you a lot… especially if you get relevant feedback. I'd be happy to review your code!

ls piped into the command line

I have been trying to pipe in the results from ls into the command line for a C program I am writing (in Unix). I want to be able to have an index of the files and so I was planning on using argv. This is how I thought it should work:
./foo &(ls ~/path)
It doesn't work — what's the correct way to pass the output of ls as arguments to the command?
Your syntax is a bit off...
./foo $(ls ~/path)
Do note that this will choke on files with certain characters in them. Use an array instead to fix this.
pushd ~/path
files=(*)
popd
./foo "${files[#]}"
The notation you specified does two things:
./foo &
runs the program foo in the background (with no arguments other than its command name). Then:
(ls ~/path)
runs the ls command in a sub-shell (which, in this context, is the same as running it in the main shell). The problem is you intended (or need) to use $ in place of &.
./foo $(ls ~/path)
This runs the command ls ~/path and captures the output, which is split into words (using the separators listed in the $IFS variable). Each word is then supplied as an argument to the command ./foo, as you required.
We can then debate the wisdom of using the output of ls like that, but unless you have file names containing spaces (tabs, newlines, etc.) you will be OK.
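To see the failure mode (a sketch, assuming ~/path contains a file whose name includes a space):
$ touch ~/path/'two words.txt'
$ ./foo $(ls ~/path)
Here foo receives 'two' and 'words.txt' as two separate arguments, neither of which names an existing file.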
You know how Unix tools accept glob patterns, so you can do cat *.txt or rm ~/Pictures/Vacation*.jpg, without having to pipe/expand ls?
That's an ability your shell gives your program for free!
Just use ./foo ~/path/* and argv[1] will contain /home/you/path/fileone, argv[2] will contain /home/you/path/filetwo, and so forth.
These filenames may be relative or absolute, but can always be passed directly to open/fopen/execve or whichever function you want to use.
Using ls as you describe will only give you the last part of the filename with no directory, so you won't know where the files are to do anything with them (though if that's what you want, just use basename(argv[1])).

Building simple shell script which executes program passed as argument

I am building a program which helps in memory debugging of C programs. I call
execlp("gnome-terminal","gnome-terminal","-e",command,(char*)0);
to open a new terminal window where the program to be debugged runs. I do this so as not to have my debugging info intermixed with the user's program output. Because I need to set up an environment variable before running the user's program, the command var is actually the name of the shell script to which I pass the user's program as the first arg.
Here is my script:
#!/bin/bash
export LD_PRELOAD="./mylib.so"
$1
This works fine for programs with no arguments, but what happens if the user also supplies args with his program?
For example I wish to call my script like this:
myScript.sh usersProgram arg1 arg2 etc
How can I correctly run the user's program inside the script and pass all the arguments to it?
Thank you
Use "$#", which will handle all arguments properly.
Assuming that args to program always start from the 2nd arg, I'd suggest doing it like this:
#!/bin/bash
PROG=$1
shift
$PROG "$#"
Practically, just specifying "$#" instead of the three lines above will also work. But this way, you can easily do some manipulation based on $PROG before actually executing it.
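Putting it together with the script from the question, a minimal sketch (the exec is my addition, not part of the answers above; it replaces the wrapper shell with the user's program instead of leaving an extra shell process behind):
#!/bin/bash
export LD_PRELOAD="./mylib.so"
PROG=$1
shift
exec "$PROG" "$@"    # run the user's program with all remaining arguments
Invoked as before: myScript.sh usersProgram arg1 arg2 etc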
