What does eval $(opam env) do, does it activate an opam environment? [duplicate] - opam

This question already has answers here:
What is the use of eval `opam config env` or eval $(opam env) and their difference?
(2 answers)
Closed 9 months ago.
I keep seeing
eval $(opam env)
e.g. when I try to install coq:
# brew install opam # for mac
# for ubuntu
conda install -c conda-forge opam
opam init
# if doing local env?
# eval $(opam env)
# - install coq
# local install
#opam switch create . 4.12.1
#eval $(opam env)
#opam repo add coq-released https://coq.inria.fr/opam/released
#opam install coq
# If you want a single global (wrt conda) coq installation (for say your laptop):
opam switch create 4.12.1
opam switch 4.12.1
opam repo add coq-released https://coq.inria.fr/opam/released
opam install coq
but I never know what it does (nor does anyone I talk to), and when I google it this comes up instead: What is the use of eval `opam config env`?
cross posted: https://discuss.ocaml.org/t/what-does-eval-opam-env-do-does-it-activate-a-opam-environment/9990
Clarification
Note that
eval `opam config env`
might be very similar, in the sense that I think the backticks `...` are also command substitution, like $(...), but I'm not sure. That might make What is the use of eval `opam config env`? very related (though I don't know the difference for sure).
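For a quick check that both forms behave the same way, here is a minimal, opam-free sketch:
$ echo `echo hi`     # old-style backticks
hi
$ echo $(echo hi)    # modern form; can be nested
hi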

tldr: it activates your opam context according to your current switch -- similar to how a Python virtual env is activated, but for opam.
What eval $(opam env) does is command-substitute $(opam env), i.e. run opam env in a subshell (that's what $( ) does), take the string it outputs, and give that string to eval to evaluate as bash code. Command substitution is usually used to nest the output of one command inside another. In Python, eval interprets its input string as literal Python code to parse and run; here it does the same but assumes the string is bash (is my guess).
What does opam env do? It returns the bash environment variables for the current switch (a switch is approximately an opam environment). So, doing:
eval $(opam env)
does this:
first, it runs opam env in a subshell, because that's what $(cmd) does. It's command substitution, usually used to nest commands.
then its output (the output string of $(opam env)) is given to eval. opam env returns a string of environment-variable assignments needed to "activate" the current opam env (similar to how you activate virtual envs in Python).
finally, eval evaluates the string it receives from the command substitution, i.e. it parses the string as a bash command and runs it (see the sketch below).
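Here is a minimal sketch of the same mechanism using echo in place of opam env (FOO is a made-up variable name):
$ echo 'FOO=bar; export FOO'             # a command that prints shell code
FOO=bar; export FOO
$ eval "$(echo 'FOO=bar; export FOO')"   # substitute the output, then run it
$ echo "$FOO"
bar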
For completeness, see the output of opam env:
(iit_synthesis) brandomiranda~ ❯ opam env
OPAM_SWITCH_PREFIX='/Users/brandomiranda/.opam/4.12.1'; export OPAM_SWITCH_PREFIX;
CAML_LD_LIBRARY_PATH='/Users/brandomiranda/.opam/4.12.1/lib/stublibs:/Users/brandomiranda/.opam/4.12.1/lib/ocaml/stublibs:/Users/brandomiranda/.opam/4.12.1/lib/ocaml'; export CAML_LD_LIBRARY_PATH;
OCAML_TOPLEVEL_PATH='/Users/brandomiranda/.opam/4.12.1/lib/toplevel'; export OCAML_TOPLEVEL_PATH;
PATH='/Users/brandomiranda/.opam/4.12.1/bin:/Users/brandomiranda/miniconda/envs/iit_synthesis/bin:/Users/brandomiranda/miniconda/condabin:/usr/local/bin:/Users/brandomiranda/.opam/4.12.1/bin:/opt/homebrew/bin:/usr/bin:/bin:/usr/sbin:/sbin'; export PATH;
Comment on backticks:
Please note: the backticks shown around opam env after eval are essential. They change the order of evaluation, which is very important. The backticks tell the shell to first run opam env (which returns a string of commands), and then eval executes those commands in the string. Executing them doesn't print anything, but it initializes the opam environment behind the scenes.
ref: https://ocaml.org/docs/up-and-running
also see:
What does eval $(opam env) do, does it activate an opam environment?
What is the use of eval `opam config env` or eval $(opam env) and their difference?
https://ocaml.org/docs/up-and-running
https://discuss.ocaml.org/t/what-does-eval-opam-env-do-does-it-activate-a-opam-environment/9990
All the details that led to this answer:
From the man page for opam env:
This is most usefully used as eval $(opam env) to have further shell commands be evaluated in the proper opam context.
see: https://opam.ocaml.org/doc/man/opam-env.html
Long answer with all details:
toolchain = in software, a toolchain is a set of programming tools used to perform a complex software-development task or to create a software product, which is typically another computer program or a set of related programs.
(iit_synthesis) brandomiranda~ ❯ opam env
OPAM_SWITCH_PREFIX='/Users/brandomiranda/.opam/4.12.1'; export OPAM_SWITCH_PREFIX;
CAML_LD_LIBRARY_PATH='/Users/brandomiranda/.opam/4.12.1/lib/stublibs:/Users/brandomiranda/.opam/4.12.1/lib/ocaml/stublibs:/Users/brandomiranda/.opam/4.12.1/lib/ocaml'; export CAML_LD_LIBRARY_PATH;
OCAML_TOPLEVEL_PATH='/Users/brandomiranda/.opam/4.12.1/lib/toplevel'; export OCAML_TOPLEVEL_PATH;
PATH='/Users/brandomiranda/.opam/4.12.1/bin:/Users/brandomiranda/miniconda/envs/iit_synthesis/bin:/Users/brandomiranda/miniconda/condabin:/usr/local/bin:/Users/brandomiranda/.opam/4.12.1/bin:/opt/homebrew/bin:/usr/bin:/bin:/usr/sbin:/sbin'; export PATH;
so opam env outputs a bunch of paths that are needed for the toolchain to work... perhaps here toolchain == opam? Based on What is the use of eval `opam config env` or eval $(opam env) and their difference? (the question for eval `opam config env`), I am inferring that these are the bash environment variables needed to "activate" the ocaml/opam environment. I concluded that because opam env outputs a bunch of env variables.
So running opam env by itself only prints those variable assignments to the terminal; it does not apply them to the current shell.
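A quick way to see this (a hedged sketch; OPAM_SWITCH_PREFIX is one of the variables opam env prints, as shown above):
$ opam env > /dev/null        # only prints; the current shell is unchanged
$ echo "$OPAM_SWITCH_PREFIX"  # still empty if nothing was activated

$ eval "$(opam env)"          # now the assignments take effect
$ echo "$OPAM_SWITCH_PREFIX"
/Users/brandomiranda/.opam/4.12.1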
$ usually indicates that an identifier is a variable, e.g. $x is the variable x, while a bare x in the terminal is literally the string x (or the command x, if it exists).
$(command) or `command` is command substitution. Doing $(command) replaces the expression with the output of command (e.g. the text that would have been printed to the terminal after running command). I assume it's called command substitution because we substitute the command with its output. Wikipedia agrees: "In computing, command substitution is a facility that allows a command to be run and its output to be pasted back on the command line as arguments to another command". https://en.wikipedia.org/wiki/Command_substitution
opam env (opam-env) prints appropriate shell variable assignments to stdout: it returns the bindings for the environment variables set in the current switch, e.g. PATH, in a format intended to be evaluated by a shell. https://opam.ocaml.org/doc/man/opam-env.html Note that the same opam man page says: This is most usefully used as eval $(opam env) to have further shell commands be evaluated in the proper opam context.
Finally, to put all the pieces together: what is the difference between command substitution (e.g. $( ) or `...`) and evaluating an expression (e.g. eval string or eval $(cmd)), in bash mainly? Ref: https://unix.stackexchange.com/a/23116/55620. My understanding is that $( ) runs a command in a subshell and yields its output string, so that the string can be an input to another command, e.g. eval. So you can think of $(cmd) as being replaced by the output of running cmd, pasted back onto the command line for another command, e.g. eval. eval, in turn, takes a string and evaluates the expression in it. So what eval $(cmd) does is: first evaluate $(cmd), replacing it with the output string of running cmd in a subshell, and then feed that string to eval -- which treats the input as code, parses it, and runs it. $(...) is usually used to nest commands inside other commands.
The only puzzling thing to me is why $(cmd) on its own, without a cmd2 (e.g. eval) in front, interprets the output of $(cmd) as a command name to run instead of just evaluating it, e.g.
(iit_synthesis) brandomiranda~ ❯ $(echo v=1)
zsh: command not found: v=1
but
(iit_synthesis) brandomiranda~ ❯ v=1
(iit_synthesis) brandomiranda~ ❯ echo $v
1
does evaluate my string v=1 and then run it. (The reason, as far as I can tell, is that the shell decides whether a word is a variable assignment before expansions happen, so the substituted text v=1 is looked up as a command name; eval forces a full re-parse of the string, which is why eval $(echo v=1) works, as the playgrounds below show.)
Thus, what eval $(opam env) does is command-substitute $(opam env), i.e. run opam env in a subshell, take the string it outputs, and give it to eval to evaluate as bash code. In Python, eval would interpret its input string as literal Python code to parse and run; here it does the same but assumes it's bash (is my guess).
Eval vs $( ) details:
Playground 1
(iit_synthesis) brandomiranda~ ❯ clear
(iit_synthesis) brandomiranda~ ❯ $(echo v=1)
zsh: command not found: v=1
(iit_synthesis) brandomiranda~ ❯ v=1
(iit_synthesis) brandomiranda~ ❯ echo $v
1
Playground 2
(iit_synthesis) brandomiranda~ ❯ echo vv=2
vv=2
(iit_synthesis) brandomiranda~ ❯ $(echo vv=2)
zsh: command not found: vv=2
(iit_synthesis) brandomiranda~ ❯ eval $(echo vv=2)
(iit_synthesis) brandomiranda~ ❯ echo $vv
2

Related

How do you assign an Array inside a Dockerfile?

I have tried a number of different ways to assign an array inside a RUN command within a Dockerfile. None of them seem to work. I am running on Ubuntu-Slim, with bash as my default shell.
I've tried this (second line below)
RUN addgroup --gid 1000 node \
&& NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" ) \
...
But it fails with /bin/sh: 1: Syntax error: "(" unexpected.
I also tried assigning it as an ENV variable, as in:
ENV NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" )
but that fails as well.
Assigning and using arrays in Bash is completely supported. Yet, it appears that I can't use that feature of Bash when running in a Dockerfile. Can someone confirm/deny that you can assign arrays variables inside of shell commands in Dockerfile RUN syntax (or ENV variable syntax)?
The POSIX shell specification does not have arrays. Even if you're using non-standard shells like GNU bash, environment variables are always simple strings and never hold arrays either.
The default shell in Docker is usually /bin/sh, which should conform to the POSIX spec, and not bash. Alpine-based images don't have bash at all unless you go out of your way to install it. I'd generally recommend trying to stick to the POSIX syntax whenever possible.
A typical Dockerfile is fairly straightforward; it doesn't have a lot of parts that get reused multiple times, and most of the things you can specify in a Dockerfile don't need to be user-configurable. So for a list of OS packages, for example, I'd just list them out in a RUN command and not bother trying to package them into a variable.
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
binutils-gold \
g++ \
gcc \
...
Other things I see in Stack Overflow questions that do not need to be parameterized include the container path (set it once as the WORKDIR and refer to . thereafter), the process's port (needs to be a fixed number for the second docker run -p part), and user IDs (can be overridden with docker run -u, and you don't usually want to build an image that can only run on one system).
WORKDIR /app # not an ENV or ARG
COPY . . # into the WORKDIR, do not need to repeat
RUN adduser node # with no specific uid
EXPOSE 3000 # a fixed port number
RUN mkdir /data # also use a fixed path for potential mount points
You can use an array like NODE_BUILD_PACKAGES in RUN if you define SHELL:
SHELL ["/bin/bash", "-c"]
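For example, a minimal sketch continuing from that directive (reusing a few of the question's package names, and assuming a Debian/Ubuntu base image with apt-get):
RUN NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc") \
 && apt-get update \
 && apt-get install --no-install-recommends --assume-yes "${NODE_BUILD_PACKAGES[@]}"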

How to write bash function to print and run command when the command has arguments with spaces or things to be expanded

In Bash scripts, I frequently find this pattern useful, where I first print the command I'm about to execute, then I execute the command:
echo 'Running this cmd: ls -1 "$HOME/temp/some folder with spaces"'
ls -1 "$HOME/temp/some folder with spaces"
echo 'Running this cmd: df -h'
df -h
# etc.
Notice the single quotes in the echo command to prevent variable expansion there! The idea is that I want to print the cmd I'm running, exactly as I will type and run the command, then run it!
How do I wrap this up into a function?
Wrapping the command up into a standard bash array, and then printing and calling it, like this, sort-of works:
# Print and run the passed-in command
# USAGE:
# cmd_array=(ls -a -l -F /)
# print_and_run_cmd cmd_array
# See:
# 1. My answer on how to pass regular "indexed" and associative arrays by reference:
# https://stackoverflow.com/a/71060036/4561887 and
# 1. My answer on how to pass associative arrays: https://stackoverflow.com/a/71060913/4561887
print_and_run_cmd() {
local -n array_reference="$1"
echo "Running cmd: ${cmd_array[#]}"
# run the command by calling all elements of the command array at once
${cmd_array[@]}
}
For simple commands like this it works fine:
Usage:
cmd_array=(ls -a -l -F /)
print_and_run_cmd cmd_array
Output:
Running cmd: ls -a -l -F /
(all output of that cmd is here)
But for more-complicated commands it is broken!:
Usage:
cmd_array=(ls -1 "$HOME/temp/some folder with spaces")
print_and_run_cmd cmd_array
Desired output:
Running cmd: ls -1 "$HOME/temp/some folder with spaces"
(all output of that command should be here)
Actual Output:
Running cmd: ls -1 /home/gabriel/temp/some folder with spaces
ls: cannot access '/home/gabriel/temp/some': No such file or directory
ls: cannot access 'folder': No such file or directory
ls: cannot access 'with': No such file or directory
ls: cannot access 'spaces': No such file or directory
The first problem, as you can see, is that $HOME got expanded in the Running cmd: line, when it shouldn't have, and the double quotes around that path argument were removed, and the 2nd problem is that the command doesn't actually run.
How do I fix these 2 problems?
References:
my bash demo program where I have this print_and_run_cmd function: https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/bash/argument_parsing__3_advanced__gen_prog_template.sh
where I first documented how to pass bash arrays by reference, as I do in that function:
Passing arrays as parameters in bash
How to pass an associative array as argument to a function in Bash?
Follow-up question:
Bash: how to print and run a cmd array which has the pipe operator, |, in it
If you've got Bash version 4.4 or later, this function may do what you want:
function print_and_run_cmd
{
local PS4='Running cmd: '
local -
set -o xtrace
"$#"
}
For example, running
print_and_run_cmd echo 'Hello World!'
outputs
Running cmd: echo 'Hello World!'
Hello World!
local PS4='Running cmd: ' sets a prefix for commands printed by the shell when the xtrace option is on. The default is + . Localizing it means that the previous value of PS4 is automatically restored when the function returns.
local - causes any changes to shell options to be reverted automatically when the function returns. In particular, it causes the set -o xtrace on the next line to be automatically undone when the function returns. Support for local - was added in Bash 4.4.
From man bash, under the local [option] [name[=value] ... | - ] section (emphasis added):
If name is -, the set of shell options is made local to the function in which local is invoked: shell options changed using the set builtin inside the function are restored to their original values when the function returns.
set -o xtrace (which is equivalent to set -x) causes the shell to print commands, preceded by the expanded value of PS4, before running them.
See help set.
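A tiny sketch of the local - behaviour (bash >= 4.4; f is a made-up name):
f() {
    local -          # shell-option changes are undone when f returns
    set -o xtrace    # takes effect only inside f
    echo inside
}
f              # traced: prints '+ echo inside' and then 'inside'
echo outside   # not traced; xtrace was reverted on return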
Check your scripts with shellcheck:
Line 2:
local -n array_reference="$1"
^-- SC2034 (warning): array_reference appears unused. Verify use (or export if used externally).
Line 3:
echo "Running cmd: ${cmd_array[#]}"
^-- SC2145 (error): Argument mixes string and array. Use * or separate argument.
^-- SC2154 (warning): cmd_array is referenced but not assigned.
Line 5:
${cmd_array[@]}
^-- SC2068 (error): Double quote array expansions to avoid re-splitting elements.
You might want to research https://github.com/koalaman/shellcheck/wiki/SC2068 . We fix all errors and we get:
print_and_run_cmd() {
local -n array_reference="$1"
echo "Running cmd: ${array_reference[*]}"
# run the command by calling all elements of the command array at once
"${array_reference[#]}"
}
For me it's odd to pass an array by reference in this case. I would pass the actual values. I often do:
prun() {
# in the style of set -x
# print to stderr, so output can be captured
echo "+ $*" >&2
# or echo "+ ${*#Q}" >&2
# or echo "+$(printf " %q" "$#")" >&2
# or echo "+$(/bin/printf " %q" "$#")" >&2
"$#"
}
prun "${cmd_array[#]}"
How do I fix these 2 problems?
Incorporate into your workflow linters, formatters and static analysis tools, like shellcheck, and check the problems they point out.
And quote variable expansion. It's "${array[@]}".
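A quick illustration of why the quotes matter (arr is a made-up name):
$ arr=("a b" c)
$ printf '<%s>\n' ${arr[@]}    # unquoted: re-split into three words
<a>
<b>
<c>
$ printf '<%s>\n' "${arr[@]}"  # quoted: the two elements are preserved
<a b>
<c>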
You can achieve what you want with a DEBUG trap:
#!/bin/bash
set -T
trap 'test "$FUNCNAME" = print_and_run_cmd || trap_saved_command="${BASH_COMMAND}"' DEBUG
print_and_run_cmd(){
echo "Running this cmd: ${trap_saved_command#* }"
"$#"
}
outer(){
print_and_run_cmd ls -1 "$HOME/temp/some folder with spaces"
}
outer
# output ->
# Running this cmd: ls -1 "$HOME/temp/some folder with spaces"
# ...
I really like @pjh's answer, so I've marked it as correct. It doesn't fully answer my original question though, so if another answer comes along that does, I may have to change that. Anyway, see @pjh's answer for a full explanation of how the below code works, and what all those lines mean. I've helped edit that answer with some of the sources from man bash and help set.
I'd like to change the formatting and provide some more examples, however, to show that variable expansion does take place within the command. I'd also like to provide one version which passes by reference, and one which does not, so you can choose the call style which you like best.
Here are my examples, showing both call styles (print_and_run1 cmd_array and print_and_run2 "${cmd_array[@]}"):
#!/usr/bin/env bash
# Print and run the passed-in command, which is passed in as an
# array **by reference**.
# See here for a full explanation: https://stackoverflow.com/a/71151669/4561887
# USAGE:
# cmd_array=(ls -a -l -F /)
# print_and_run1 cmd_array
print_and_run1() {
local -n array_reference="$1"
local PS4='Running cmd: '
local -
set -o xtrace
# Call the cmd
"${array_reference[#]}"
}
# Print and run the passed-in command, which is passed in as members
# of an array **by value**.
# See here for a full explanation: https://stackoverflow.com/a/71151669/4561887
# USAGE:
# cmd_array=(ls -a -l -F /)
# print_and_run2 "${cmd_array[@]}"
print_and_run2() {
local PS4='Running cmd: '
local -
set -o xtrace
# Call the cmd
"$#"
}
cmd_array=(ls -1 "$HOME/temp/some folder with spaces")
print_and_run1 cmd_array
echo ""
print_and_run2 "${cmd_array[#]}"
echo ""
Sample run and output:
eRCaGuy_hello_world/bash$ ./print_and_run.sh
Running cmd: ls -1 '/home/gabriel/temp/some folder with spaces'
file1.txt
file2.txt
Running cmd: ls -1 '/home/gabriel/temp/some folder with spaces'
file1.txt
file2.txt
This seems to work too:
print_and_run_cmd() {
echo "Running cmd: $1"
eval "$1"
}
cmd='ls -1 "$HOME/temp/some folder with spaces"'
print_and_run_cmd "$cmd"
Output:
Running cmd: ls -1 "$HOME/temp/some folder with spaces"
(result of running the cmd is here)
But now the problem is, if I want to print an expanded version of the cmd too, to verify that part worked properly, I can't, or at least, don't know how.
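One hedged sketch of a workaround (it still relies on eval, so only feed it trusted strings): re-parse the string into positional parameters, print both the literal and the expanded form, then run it.
print_and_run_cmd() {
    echo "Running cmd (literal):  $1"
    eval "set -- $1"    # re-parse the string into words, expanding variables
    echo "Running cmd (expanded): $*"
    "$@"                # run the expanded command
}
cmd='ls -1 "$HOME/temp/some folder with spaces"'
print_and_run_cmd "$cmd"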

Why do environment variables (PATH) work in bash?

I'm trying to build a bash using C,
but I faced this problem when I try this: env -i bash
This should pass an empty env into bash, so all the environment variables should be null.
Example:
➜ ~ env -i bash --norc
bash-3.2$ env
PWD=/Users/mbari
SHLVL=1
_=/usr/bin/env
bash-3.2$ ls
#*mail*#78979jxq# Documents Pictures result.log
Applications Downloads VirtualBox VMs tmux-client-73012.log
Cleaner.sh Library docker_start_up.bash tmux-client-73105.log
Cleaner_42.sh Movies file
Desktop Music goinfre
bash-3.2$
Bash has a default path for the case where it does not inherit PATH and nothing else sets it. It's defined as DEFAULT_PATH_VALUE in the bash sources, and while there are some defaults in the source, distributions usually override this value in their build scripts. As you're building your own shell, you might find the relevant config file (config-top.h in the bash sources) interesting.
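A quick way to observe the fallback (a sketch; the exact value is build-dependent):
$ env -i bash --norc -c 'echo "$PATH"'
/usr/local/bin:/usr/bin:/bin    # whatever default your bash was compiled/packaged with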

CMake: how to avoid escaping spaces in command line? [duplicate]

I'm trying to create a custom command that runs with some environment variables, such as LDFLAGS, whose value needs to be quoted if it contains spaces:
LDFLAGS="-Lmydir -Lmyotherdir"
I cannot find a way to include this argument in a CMake custom command, due to CMake's escaping rules. Here's what I've tried so far:
COMMAND LDFLAGS="-Ldir -Ldir2" echo blah VERBATIM)
yields "LDFLAGS=\"-Ldir -Ldir2\"" echo blah
COMMAND LDFLAGS=\"-Ldir -Ldir2\" echo blah VERBATIM)
yields LDFLAGS=\"-Ldir -Ldir2\" echo blah
It seems I either get the whole string quoted, or the escaped quotes don't resolve when used as part of the command.
I would appreciate either a way to include the literal double-quote or as an alternative a better way to set environment variables for a command. Please note that I'm still on CMake 2.8, so I don't have the new "env" command available in 3.2.
Note that this is not a duplicate of When to quote variables? as none of those quoting methods work for this particular case.
The obvious choice - often recommended when hitting the boundaries of COMMAND especially with older versions of CMake - is to use an external script.
I just wanted to add some simple COMMAND only variations that do work and won't need a shell, but are - I have to admit - still partly platform dependent.
One example would be to put only the quoted part into a variable:
set(vars_as_string "-Ldir -Ldir2")
add_custom_target(
QuotedEnvVar
COMMAND env LD_FLAGS=${vars_as_string} | grep LD_FLAGS
)
Which actually does escape the space and not the quotes.
Another example would be to add it with escaped quotes as a "launcher" rule:
add_custom_target(
LauncherEnvVar
COMMAND env | grep LD_FLAGS
)
set_target_properties(
LauncherEnvVar
PROPERTIES RULE_LAUNCH_CUSTOM "env LD_FLAGS=\"-Ldir -Ldir2\""
)
Edit: Added examples for multiple quoted arguments without the need of escaping quotes
Another example would be to "hide some of the complexity" in a function and - if you want to add this to all your custom command calls - use the global/directory RULE_LAUNCH_CUSTOM property:
function(set_env)
get_property(_env GLOBAL PROPERTY RULE_LAUNCH_CUSTOM)
if (NOT _env)
set_property(GLOBAL PROPERTY RULE_LAUNCH_CUSTOM "env")
endif()
foreach(_arg IN LISTS ARGN)
set_property(GLOBAL APPEND_STRING PROPERTY RULE_LAUNCH_CUSTOM " ${_arg}")
endforeach()
endfunction(set_env)
set_env(LDFLAGS="-Ldir1 -Ldir2" CFLAGS="-Idira -Idirb")
add_custom_target(
MultipleEnvVar
COMMAND env | grep -E 'LDFLAGS|CFLAGS'
)
Alternative (for CMake >= 3.0)
I think what we actually are looking for here (besides the cmake -E env ...) is named Bracket Argument and does allow any character without the need of adding backslashes:
set_property(
GLOBAL PROPERTY
RULE_LAUNCH_CUSTOM [=[env LDFLAGS="-Ldir1 -Ldir2" CFLAGS="-Idira -Idirb"]=]
)
add_custom_target(
MultipleEnvVarNew
COMMAND env | grep -E 'LDFLAGS|CFLAGS'
)
References
0005145: Set environment variables for ADD_CUSTOM_COMMAND/ADD_CUSTOM_TARGET
How to modify environment variables passed to custom CMake target?
[CMake] How to set environment variable for custom command
cmake: when to quote variables?
You need three backslashes. I needed this recently to get a preprocessor define from PkgConfig and apply it to my C++ flags:
pkg_get_variable(SHADERDIR movit shaderdir)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DSHADERDIR=\\\"${SHADERDIR}\\\"")
Florian's answer is wrong on several counts:
Putting the quoted part in a variable makes no difference.
You should definitely use VERBATIM. It fixes platform-specific quoting bugs.
You definitely shouldn't use RULE_LAUNCH_CUSTOM for this. It isn't intended for this and only works with some generators.
You shouldn't use env as the command. It isn't available on Windows.
It turns out the real reason the OP's code doesn't work is that CMake always fully quotes the first word after COMMAND, because it's supposed to be the name of an executable. You simply shouldn't put environment variables first.
For example:
add_custom_command(
OUTPUT q1.txt
COMMAND ENV_VAR="a b" echo "hello" > q1.txt
VERBATIM
)
add_custom_target(q1 ALL DEPENDS q1.txt)
$ VERBOSE=1 make
...
"ENV_VAR=\"a b\"" echo hello > q1.txt
/bin/sh: ENV_VAR="a b": command not found
So how do you pass an environment variable with spaces? Simple.
add_custom_command(
OUTPUT q1.txt
COMMAND ${CMAKE_COMMAND} -E env ENV_VAR="a b" echo "hello" > q1.txt
VERBATIM
)
Ok, I removed my original answer as the one proposed by @Florian is better. There is one additional tweak needed for multiple quoted args. Consider a list of environment variables such as:
set(my_env_vars LDFLAGS="-Ldir1 -Ldir2" CFLAGS="-Idira -Idirb")
In order to produce the desired expansion, convert to string and then replace ; with a space.
set(my_env_string "${my_env_vars}") #produces LDFLAGS="...";CFLAGS="..."
string(REPLACE ";" " " my_env_string "${my_env_string}")
Then you can proceed with @Florian's brilliant answer and add the custom launch rule. If you need semicolons in your string then you'll need to convert them to something else first.
Note that in this case I didn't need to launch with env:
set_target_properties(mytarget PROPERTIES RULE_LAUNCH_CUSTOM "${my_env_string}")
This of course depends on your shell.
On second thought, my original answer is below as I also have a case where I don't have access to the target name.
set(my_env LDFLAGS=\"-Ldir -Ldir2\" CFLAGS=\"-Idira -Idirb\")
add_custom_command(COMMAND sh -c "${my_env} grep LDFLAGS" VERBATIM)
This technique still requires that the semicolons from the list->string conversion be replaced.
Some folks suggest to use ${CMAKE_COMMAND} and pass your executable as an argument, e.g:
COMMAND ${CMAKE_COMMAND} -E env "$(WindowsSdkDir)/bin/x64/makecert.exe" ...
That worked for me.

Exporting an array in bash script

I cannot export an array from one bash script to another bash script like this:
export myArray[0]="Hello"
export myArray[1]="World"
When I write like this there are no problem:
export myArray=("Hello" "World")
For several reasons I need to initialize my array over multiple lines. Do you have any solution?
Array variables may not (yet) be exported.
From the manpage of bash version 4.1.5 under ubuntu 10.04.
The following statement from Chet Ramey (current bash maintainer as of 2011) is probably the most official documentation about this "bug":
There isn't really a good way to encode an array variable into the environment.
http://www.mail-archive.com/bug-bash@gnu.org/msg01774.html
TL;DR: exportable arrays are not directly supported up to and including bash-5.1, but you can (effectively) export arrays in one of two ways:
a simple modification to the way the child scripts are invoked
use an exported function to store the array initialisation, with a simple modification to the child scripts
Or, you can wait until bash-4.3 is released (in development/RC state as of February 2014, see ARRAY_EXPORT in the Changelog). Update: This feature is not enabled in 4.3. If you define ARRAY_EXPORT when building, the build will fail. The author has stated it is not planned to complete this feature.
The first thing to understand is that the bash environment (more properly command execution environment) is different to the POSIX concept of an environment. The POSIX environment is a collection of un-typed name=value pairs, and can be passed from a process to its children in various ways (effectively a limited form of IPC).
The bash execution environment is effectively a superset of this, with typed variables, read-only and exportable flags, arrays, functions and more. This partly explains why the output of set (bash builtin) and env or printenv differ.
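A tiny sketch of that difference (foo is a made-up name):
foo=1          # bash shell variable: visible to `set`, but not in the environment
printenv foo   # prints nothing; not yet a POSIX environment entry
export foo     # mark it exportable
printenv foo   # now prints 1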
When you invoke another bash shell you're starting a new process, and you lose some bash state. However, if you dot-source a script, the script is run in the same environment; or if you run a subshell via ( ) the environment is also preserved (because bash forks, preserving its complete state, rather than reinitialising using the process environment).
The limitation referenced in @lesmana's answer arises because the POSIX environment is simply name=value pairs with no extra meaning, so there's no agreed way to encode or format typed variables; see below for an interesting bash quirk regarding functions, and an upcoming change in bash-4.3 (proposed array feature abandoned).
There are a couple of simple ways to do this using declare -p (built-in) to output some of the bash environment as a set of one or more declare statements which can be used to reconstruct the type and value of a "name". This is basic serialisation, but with rather less of the complexity some of the other answers imply. declare -p preserves array indexes, sparse arrays and quoting of troublesome values. For simple serialisation of an array you could just dump the values line by line, and use read -a myarray to restore it (works with contiguous 0-indexed arrays, since read -a automatically assigns indexes).
These methods do not require any modification of the script(s) you are passing the arrays to.
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
bash -c ". .bash_arrays; . otherscript.sh" # source both in the same environment
Variations on the above bash -c "..." form are sometimes (mis-)used in crontabs to set variables.
Alternatives include:
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
BASH_ENV=.bash_arrays otherscript.sh # non-interactive startup script
Or, as a one-liner:
BASH_ENV=<(declare -p array1 array2) otherscript.sh
The last one uses process substitution to pass the output of the declare command as an rc script. (This method only works in bash-4.0 or later: earlier versions unconditionally fstat() rc files and use the size returned to read() the file in one go; a FIFO returns a size of 0, and so won't work as hoped.)
In a non-interactive shell (i.e. shell script) the file pointed to by the BASH_ENV variable is automatically sourced. You must make sure bash is correctly invoked, possibly using a shebang to invoke "bash" explicitly, and not #!/bin/sh as bash will not honour BASH_ENV when in historical/POSIX mode.
If all your array names happen to have a common prefix you can use declare -p ${!myprefix*} to expand a list of them, instead of enumerating them.
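For instance, a small sketch of the prefix trick (the cfg_ names are invented):
cfg_hosts=(alpha beta)
cfg_ports=(80 443)
declare -p ${!cfg_*} > .bash_arrays   # serialises cfg_hosts and cfg_ports in one go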
You probably should not attempt to export and re-import the entire bash environment using this method, some special bash variables and arrays are read-only, and there can be other side-effects when modifying special variables.
(You could also do something slightly disagreeable by serialising the array definition to an exportable variable, and using eval, but let's not encourage the use of eval ...
$ array=([1]=a [10]="b c")
$ export scalar_array=$(declare -p array)
$ bash # start a new shell
$ eval $scalar_array
$ declare -p array
declare -a array='([1]="a" [10]="b c")'
)
As referenced above, there's an interesting quirk: special support for exporting functions through the environment:
function myfoo() {
echo foo
}
with export -f or set -a to enable this behaviour, will result in this in the (process) environment, visible with printenv:
myfoo=() { echo foo
}
The variable is functionname (or functionname() for backward compatibility) and its value is () { functionbody }.
When a subsequent bash process starts it will recreate a function from each such environment variable. If you peek into the bash-4.2 source file variables.c you'll see variables starting with () { are handled specially. (Though creating a function using this syntax with declare -f is forbidden.) Update: The "shellshock" security issue is related to this feature, contemporary systems may disable automatic function import from the environment as a mitigation.
If you keep reading though, you'll see an #if 0 (or #if ARRAY_EXPORT) guarding code that checks variables starting with ([ and ending with ), and a comment stating "Array variables may not yet be exported". The good news is that in the current development version bash-4.3rc2 the ability to export indexed arrays (not associative) is enabled. This feature is not likely to be enabled, as noted above.
We can use this to create a function which restores any array data required:
% function sharearray() {
array1=(a b c d)
}
% export -f sharearray
% bash -c 'sharearray; echo ${array1[*]}'
So, similar to the previous approach, invoke the child script with:
bash -c "sharearray; . otherscript.sh"
Or, you can conditionally invoke the sharearray function in the child script by adding at some appropriate point:
declare -F sharearray >/dev/null && sharearray
Note there is no declare -a in the sharearray function; if you do that, the array is implicitly local to the function, which is not what is wanted. bash-4.2 supports declare -g, which makes a variable declared in a function global, so declare -ga can then be used. (Since associative arrays require a declare -A, you won't be able to use this method for global associative arrays prior to bash-4.2; from v4.2, declare -Ag will work as hoped.) The GNU parallel documentation has a useful variation on this method; see the discussion of --env in the man page.
Your question as phrased also indicates you may be having problems with export itself. You can export a name after you've created or modified it. "exportable" is a flag or property of a variable, for convenience you can also set and export in a single statement. Up to bash-4.2 export expects only a name, either a simple (scalar) variable or function name are supported.
Even if you could (in future) export arrays, exporting selected indexes (a slice) may not be supported (though since arrays are sparse there's no reason it could not be allowed). Though bash also supports the syntax declare -a name[0], the subscript is ignored, and "name" is simply a normal indexed array.
Jeez. I don't know why the other answers made this so complicated. Bash has nearly built-in support for this.
In the exporting script:
myArray=( ' foo"bar ' $'\n''\nbaz)' ) # an array with two nasty elements
myArray="${myArray[#]#Q}" ./importing_script.sh
(Note, the double quotes are necessary for correct handling of whitespace within array elements.)
Upon entry to importing_script.sh, the value of the myArray environment variable comprises these exact 26 bytes:
' foo"bar ' $'\n\\nbaz)'
Then the following will reconstitute the array:
eval "myArray=( ${myArray} )"
CAUTION! Do not eval like this if you cannot trust the source of the myArray environment variable. This trick exhibits the "Little Bobby Tables" vulnerability. Imagine if someone were to set the value of myArray to ) ; rm -rf / #.
The environment is just a collection of key-value pairs, both of which are character strings. A proper solution that works for any kind of array could either
Save each element in a different variable (e.g. MY_ARRAY_0=${myArray[0]}). Gets complicated because of the dynamic variable names.
Save the array in the file system (declare -p myArray >file).
Serialize all array elements into a single string.
These are covered in the other posts. If you know that your values never contain a certain character (for example |) and your keys are consecutive integers, you can simply save the array as a delimited list:
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
And restore it in the child process:
IFS='|'; myArray=($MY_ARRAY); unset IFS
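A hedged round-trip sketch of that approach (assuming the values never contain |):
myArray=(alpha "beta gamma" delta)
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
bash -c 'IFS="|"; myArray=($MY_ARRAY); unset IFS; printf "<%s>\n" "${myArray[@]}"'
# <alpha>
# <beta gamma>
# <delta>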
Based on @mr.spuratic's use of BASH_ENV, here I tunnel $@ through script -f -c
script -c <command> <logfile> can be used to run a command inside another pty (and process group) but it cannot pass any structured arguments to <command>.
Instead <command> is a simple string to be an argument to the system library call.
I need to tunnel $@ of the outer bash into $@ of the bash invoked by script.
As declare -p cannot take $@, here I use the magic bash variable _ (with a dummy first array value, as that will get overwritten by bash). This saves me trampling on any important variables:
Proof of concept:
BASH_ENV=<( declare -a _=("" "$@") && declare -p _ ) bash -c 'set -- "${_[@]:1}" && echo "$@"'
"But," you say, "you are passing arguments to bash -- and indeed I am, but these are a simple string of known character. Here is use by script
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
which gives me this wrapper function in_pty:
in_pty() {
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
}
or this function-less wrapper as a composable string for Makefiles:
in_pty=bash -c 'SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$$@") && declare -p _ && echo '"'"'set -- "$${_[@]:1}"'"'"') script -qfc '"'"'"$$@"'"'"' /tmp/logfile' --
...
$(in_pty) test --verbose $@ $^
Perhaps this might help?
https://stackoverflow.com/a/11944320/1594168
Note that because the shell's array format is undocumented in bash or any other shell,
it is very difficult to return a shell array in a platform-independent way.
You would have to check the version, and also craft a simple script that concatenates all
shell arrays into a file that other processes can resolve.
However, if you know the name of the array you want to take back home then there is a way, while a bit dirty.
Let's say I have
MyAry[42]="whatever-stuff";
MyAry[55]="foo";
MyAry[99]="bar";
So I want to take it home
name_of_child=MyAry
take_me_home="`declare -p ${name_of_child}`";
export take_me_home="${take_me_home/#declare -a ${name_of_child}=/}"
We can see it being exported, by checking from a sub-process
echo ""|awk '{print "from awk =["ENVIRON["take_me_home"]"]"; }'
Result :
from awk =['([42]="whatever-stuff" [55]="foo" [99]="bar")']
If we absolutely must, use the env var to dump it.
env > some_tmp_file
Then
Before running the other script,
# This is the magic that does it all
source some_tmp_file
As lesmana reported, you cannot export arrays. So you have to serialize them before passing through the environment. This serialization is useful in other places too, where only a string fits (su -c 'string', ssh host 'string'). The shortest way to do this in code is to abuse getopt:
# preserve_array(arguments). return in _RET a string that can be expanded
# later to recreate positional arguments. They can be restored with:
# eval set -- "$_RET"
preserve_array() {
_RET=$(getopt --shell sh --options "" -- -- "$@") && _RET=${_RET# --}
}
# restore_array(name, payload)
restore_array() {
local name="$1" payload="$2"
eval set -- "$payload"
eval "unset $name && $name=("\$#")"
}
Use it like this:
foo=("1: &&& - *" "2: two" "3: %# abc" )
preserve_array "${foo[@]}"
foo_stuffed=${_RET}
restore_array newfoo "$foo_stuffed"
for elem in "${newfoo[@]}"; do echo "$elem"; done
## output:
# 1: &&& - *
# 2: two
# 3: %# abc
This does not address unset/sparse arrays.
You might be able to reduce the 2 'eval' calls in restore_array.
Although this question/answers are pretty old, this post seems to be the top hit when searching for "bash serialize array"
And, although the original question wasn't quite related to serializing/deserializing arrays, it does seem that the answers have devolved in that direction.
So with that ... I offer my solution:
Pros
All Core Bash Concepts
No Evals
No Sub-Commands
Cons
Functions take variable names as arguments (vs actual values)
Serializing requires having at least one character that is not present in the array
serialize_array.bash
# shellcheck shell=bash
##
# serialize_array
# Serializes a bash array to a string, with a configurable separator.
#
# $1 = source varname ( contains array to be serialized )
# $2 = target varname ( will contain the serialized string )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_array=( one "two three" four )
# serialize_array my_array my_string '|'
# declare -p my_string
#
# result:
#
# declare -- my_string="one|two three|four"
#
function serialize_array() {
declare -n _array="${1}" _str="${2}" # _array, _str => local reference vars
local IFS="${3:-$'\x01'}"
# shellcheck disable=SC2034 # Reference vars assumed used by caller
_str="${_array[*]}" # * => join on IFS
}
##
# deserialize_array
# Deserializes a string into a bash array, with a configurable separator.
#
# $1 = source varname ( contains string to be deserialized )
# $2 = target varname ( will contain the deserialized array )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_string="one|two three|four"
# deserialize_array my_string my_array '|'
# declare -p my_array
#
# result:
#
# declare -a my_array=([0]="one" [1]="two three" [2]="four")
#
function deserialize_array() {
IFS="${3:-$'\x01'}" read -r -a "${2}" <<<"${!1}" # -a => split on IFS
}
NOTE: This is hosted as a gist here:
https://gist.github.com/TekWizely/c0259f25e18f2368c4a577495cd566cd
[edits]
Logic simplified after running through shellcheck + shfmt.
Added URL for hosted GIST
You (hi!) can use this; no need to write a file. Works on Ubuntu 12.04, bash 4.2.24.
Also, your multiple-line array can be exported.
cat >>exportArray.sh
function FUNCarrayRestore() {
local l_arrayName=$1
local l_exportedArrayName=${l_arrayName}_exportedArray
# if set, recover its value to array
if eval '[[ -n ${'$l_exportedArrayName'+dummy} ]]'; then
eval $l_arrayName'='`eval 'echo $'$l_exportedArrayName` #do not put export here!
fi
}
export -f FUNCarrayRestore
function FUNCarrayFakeExport() {
local l_arrayName=$1
local l_exportedArrayName=${l_arrayName}_exportedArray
# prepare to be shown with export -p
eval 'export '$l_arrayName
# collect exportable array in string mode
local l_export=`export -p \
|grep "^declare -ax $l_arrayName=" \
|sed 's"^declare -ax '$l_arrayName'"export '$l_exportedArrayName'"'`
# creates exportable non array variable (at child shell)
eval "$l_export"
}
export -f FUNCarrayFakeExport
test this example on terminal bash (works with bash 4.2.24):
source exportArray.sh
list=(a b c)
FUNCarrayFakeExport list
bash
echo ${list[@]} #empty :(
FUNCarrayRestore list
echo ${list[@]} #profit! :D
I may improve it here
PS.: if someone clears/improve/makeItRunFaster I would like to know/see, thx! :D
For arrays with values without spaces, I've been using a simple set of functions to iterate through each array element and concatenate the array:
_arrayToStr(){
array=($@)
arrayString=""
for (( i=0; i<${#array[@]}; i++ )); do
if [[ $i == 0 ]]; then
arrayString="\"${array[i]}\""
else
arrayString="${arrayString} \"${array[i]}\""
fi
done
export arrayString="(${arrayString})"
}
_strToArray(){
str=$1
array=${str//\"/}
array=(${array//[()]/""})
export array=${array[@]}
}
The first function will turn the array into a string by adding the opening and closing parentheses and escaping all of the double quotation marks. The second function will strip the quotation marks and the parentheses and place them into a dummy array.
In order to export the array, you would pass in all the elements of the original array:
array=(foo bar)
_arrayToStr ${array[@]}
At this point, the array has been exported into the value $arrayString. To import the array in the destination file, rename the array and do the opposite conversion:
_strToArray "$arrayName"
newArray=(${array[@]})
Many thanks to @stéphane-chazelas, who pointed out all the problems with my previous attempts; this now seems to work to serialise an array to stdout or into a variable.
This technique does not shell-parse the input (unlike declare -a/declare -p) and so is safe against malicious insertion of metacharacters in the serialised text.
Note: newlines are not escaped, because read deletes the \<newline> character pair, so -d ... must instead be passed to read, and then unescaped newlines are preserved.
All this is managed in the unserialise function.
Two magic characters are used, the field separator and the record separator (so that multiple arrays can be serialized to the same stream).
These characters can be defined as FS and RS, but neither can be defined as a newline character, because an escaped newline is deleted by read.
The escape character must be \ the backslash, as that is what read uses to avoid a character being recognized as an IFS character.
serialise will serialise "$@" to stdout; serialise_to will serialise to the variable named in $1
serialise() {
set -- "${#//\\/\\\\}" # \
set -- "${#//${FS:-;}/\\${FS:-;}}" # ; - our field separator
set -- "${#//${RS:-:}/\\${RS:-:}}" # ; - our record separator
local IFS="${FS:-;}"
printf ${SERIALIZE_TARGET:+-v"$SERIALIZE_TARGET"} "%s" "$*${RS:-:}"
}
serialise_to() {
SERIALIZE_TARGET="$1" serialise "${#:2}"
}
unserialise() {
local IFS="${FS:-;}"
if test -n "$2"
then read -d "${RS:-:}" -a "$1" <<<"${*:2}"
else read -d "${RS:-:}" -a "$1"
fi
}
and unserialise with:
unserialise data # read from stdin
or
unserialise data "$serialised_data" # from args
e.g.
$ serialise "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
Now is the time;For all good men;To drink $drink;At the `party`;Party Party Party:
(without a trailing newline)
read it back:
$ serialise_to s "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
$ unserialise array "$s"
$ echo "${array[#]/#/$'\n'}"
Now is the time
For all good men
To drink $drink
At the `party`
Party Party Party
or
unserialise array # read from stdin
Bash's read respects the escape character \ (unless you pass the -r flag) to remove special meaning of characters such as for input field separation or line delimiting.
If you want to serialise an array instead of a mere argument list then just pass your array as the argument list:
serialise_array "${my_array[@]}"
You can use unserialise in a loop like you would read because it is just a wrapped read - but remember that the stream is not newline separated:
while unserialise array
do ...
done
I've written my own functions for this and improved the method with the IFS:
Features:
Doesn't call $(...) and so doesn't spawn another bash shell process
Serializes ? and | characters into ?00 and ?01 sequences and back, so it can be used on arrays containing these characters
Handles line-return characters between serialization/deserialization like any other characters
Tested in cygwin bash 3.2.48 and Linux bash 4.3.48
function tkl_declare_global()
{
eval "$1=\"\$2\"" # right argument does NOT evaluate
}
function tkl_declare_global_array()
{
local IFS=$' \t\r\n' # just in case, workaround for the bug in the "[@]:i" expression under bash versions lower than 4.1
eval "$1=(\"\${@:2}\")"
}
function tkl_serialize_array()
{
local __array_var="$1"
local __out_var="$2"
[[ -z "$__array_var" ]] && return 1
[[ -z "$__out_var" ]] && return 2
local __array_var_size
eval declare "__array_var_size=\${#$__array_var[#]}"
(( ! __array_var_size )) && { tkl_declare_global $__out_var ''; return 0; }
local __escaped_array_str=''
local __index
local __value
for (( __index=0; __index < __array_var_size; __index++ )); do
eval declare "__value=\"\${$__array_var[__index]}\""
__value="${__value//\?/?00}"
__value="${__value//|/?01}"
__escaped_array_str="$__escaped_array_str${__escaped_array_str:+|}$__value"
done
tkl_declare_global $__out_var "$__escaped_array_str"
return 0
}
function tkl_deserialize_array()
{
local __serialized_array="$1"
local __out_var="$2"
[[ -z "$__out_var" ]] && return 1
(( ! ${#__serialized_array} )) && { tkl_declare_global $__out_var ''; return 0; }
local IFS='|'
local __deserialized_array=($__serialized_array)
tkl_declare_global_array $__out_var
local __index=0
local __value
for __value in "${__deserialized_array[@]}"; do
__value="${__value//\?01/|}"
__value="${__value//\?00/?}"
tkl_declare_global $__out_var[__index] "$__value"
(( __index++ ))
done
return 0
}
Example:
a=($'1 \n 2' "3\"4'" 5 '|' '?')
tkl_serialize_array a b
tkl_deserialize_array "$b" c
I think you can try it this way (by sourcing your script after export):
export myArray=(Hello World)
. yourScript.sh
