Creating an alias for `git commit` accepting N arbitrary `-m` arguments - eval

Using the Fish shell, I would like to create a function (gcom) to use as an alias for the git commit command, accepting an arbitrary number of string arguments and passing them as -m options, so that
git commit -m "a" -m "b" -m "c" could be written, in short, gcom "a" "b" "c".
The gcom function should accept N arguments and run git commit with the respective N -m options.
This could be done with a loop over the $argv array, constructing a command manually using string join and executing that command with eval.
However, that looks clunky, and I would love to find a sleeker alternative.
Even better if additional gcom -? options are passed through unmodified!

git commit -mfoo uses "foo" as the message, so you can simply prefix every argument with -m:
function gcom
git commit -m$argv
end
Since $argv is a list, this will add "-m" to every argument in it.
Unlike other shells, there is no word-splitting here. $argv is not split on whitespace, $argv is the list of all arguments as they have been given. So it works with spaces, newlines, etc. The arguments simply need to be quoted where you give them to gcom:
gcom "first argument with spaces" "second"\n"argument"\n"with"\n"newlines"
will run the equivalent of
git commit -m"first argument with spaces" -m"second
argument
with
newlines"
Try it out with printf:
set -l mylist "first argument with spaces" "second"\n"argument"\n"with"\n"newlines"
printf '<%s>\n' -m$mylist
will print
<-mfirst argument with spaces>
<-msecond
argument
with
newlines>
Even better if additional gcom -? options are passed through unmodified!
If you wanted to do that, you might leave all arguments starting with - in an additional $args list without adding a -m. This would be broken if any of the options itself takes an argument, like --author.
Think about:
gcom --author "mycoolemail@example.com" these are my message words
This should execute like
git commit --author mycoolemail@example.com -m these -m are -m my -m message -m words
But the only way to figure out that the "mycoolemail@example.com" belongs to --author and therefore shouldn't have a corresponding -m is to know that --author takes an argument.
To do that you could use fish's argparse builtin, and you would have to tell it about all of git-commit's options. This would be something like
function gcom
# The part before the `--` are descriptions for the options that git-commit takes
# A "=" means that the option takes a mandatory argument and so it can be written like "--author foo",
# A "=?" means it takes an optional argument and needs to be written like "-ufoo".
argparse author= a interactive patch s v 'u=?' -- $argv
# Argparse leaves all the non-option arguments in $argv.
# For each option it gives you a $_flag_option variable,
# but that only includes the *value* for the options that take arguments
#
# If we wanted to pass all --author= options we could use the `string split -m1` trick from above,
# but here we assume that only the last --author is of use.
#
# This needs to be done for all the options that take arguments,
# simple boolean flags are just stored as the flags
# - `$_flag_patch` will contain "--patch"
set -l flags $_flag_a $_flag_interactive $_flag_patch
if set -q _flag_author
set -a flags --author $_flag_author[-1]
end
if set -q _flag_u
set -a flags -u$_flag_u[-1]
end
# [ now do the -m trick with the leftover $argv ]
# ...
git commit $flags -m$argv
end
Unfortunately, there is no way around listing all the options that take arguments, because otherwise the argument to the option is confused with a normal message argument. There simply isn't anything that fish could do here, that's just how argument parsing works.
You could run argparse --ignore-unknown, which would tell argparse to leave unknown options in $argv, and then still skip everything starting with a -. This would break if you ever passed an option group (think ls -lah, which has three short-options) and the last option takes an argument but you haven't listed one of the options before.
Because fish wouldn't be able to figure out that e.g. -sFfile means -s -F=file if you hadn't told it that "-s" exists - because it can't know that -s takes no options. If it did, this would be the same as -s=Ffile. So it can only leave the entire group intact.

How to set arrays with variables with loop in bash [duplicate]

I am confused about a bash script.
I have the following code:
function grep_search() {
magic_way_to_define_magic_variable_$1=`ls | tail -1`
echo $magic_variable_$1
}
I want to be able to create a variable name containing the first argument of the command and bearing the value of e.g. the last line of ls.
So to illustrate what I want:
$ ls | tail -1
stack-overflow.txt
$ grep_search() open_box
stack-overflow.txt
So, how should I define/declare $magic_way_to_define_magic_variable_$1 and how should I call it within the script?
I have tried eval, ${...}, \$${...}, but I am still confused.
I've been looking for better way of doing it recently. Associative array sounded like overkill for me. Look what I found:
suffix=bzz
declare prefix_$suffix=mystr
...and then...
varname=prefix_$suffix
echo ${!varname}
From the docs:
The ‘$’ character introduces parameter expansion, command substitution, or arithmetic expansion. ...
The basic form of parameter expansion is ${parameter}. The value of parameter is substituted. ...
If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of indirection. Bash uses the value formed by expanding the rest of parameter as the new parameter; this is then expanded and that value is used in the rest of the expansion, rather than the expansion of the original parameter. This is known as indirect expansion. The value is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. ...
Use an associative array, with command names as keys.
# Requires bash 4, though
declare -A magic_variable=()
function grep_search() {
magic_variable[$1]=$( ls | tail -1 )
echo ${magic_variable[$1]}
}
If you can't use associative arrays (e.g., you must support bash 3), you can use declare to create dynamic variable names:
declare "magic_variable_$1=$(ls | tail -1)"
and use indirect parameter expansion to access the value.
var="magic_variable_$1"
echo "${!var}"
See BashFAQ: Indirection - Evaluating indirect/reference variables.
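Put together as a runnable sketch (note that declare inside a function makes the variable local to that function; use declare -g on bash 4.2+ if it needs to survive the call):
function grep_search() {
    declare "magic_variable_$1=$(ls | tail -1)"
    local var="magic_variable_$1"
    echo "${!var}"
}
grep_search open_box   # prints the last entry of ls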
Beyond associative arrays, there are several ways of achieving dynamic variables in Bash. Note that all these techniques present risks, which are discussed at the end of this answer.
In the following examples I will assume that i=37 and that you want to alias the variable named var_37 whose initial value is lolilol.
Method 1. Using a “pointer” variable
You can simply store the name of the variable in an indirection variable, not unlike a C pointer. Bash then has a syntax for reading the aliased variable: ${!name} expands to the value of the variable whose name is the value of the variable name. You can think of it as a two-stage expansion: ${!name} expands to $var_37, which expands to lolilol.
name="var_$i"
echo "$name" # outputs “var_37”
echo "${!name}" # outputs “lolilol”
echo "${!name%lol}" # outputs “loli”
# etc.
Unfortunately, there is no counterpart syntax for modifying the aliased variable. Instead, you can achieve assignment with one of the following tricks.
1a. Assigning with eval
eval is evil, but is also the simplest and most portable way of achieving our goal. You have to carefully escape the right-hand side of the assignment, as it will be evaluated twice. An easy and systematic way of doing this is to evaluate the right-hand side beforehand (or to use printf %q).
And you should check manually that the left-hand side is a valid variable name, or a name with index (what if it was evil_code # ?). By contrast, all other methods below enforce it automatically.
# check that name is a valid variable name:
# note: this code does not support variable_name[index]
shopt -s globasciiranges
[[ "$name" == [a-zA-Z_]*([a-zA-Z_0-9]) ]] || exit
value='babibab'
eval "$name"='$value' # carefully escape the right-hand side!
echo "$var_37" # outputs “babibab”
Downsides:
does not check the validity of the variable name.
eval is evil.
eval is evil.
eval is evil.
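For reference, the printf %q variant mentioned above might look like this (a sketch; %q produces a shell-quoted string that survives the second evaluation by eval):
value='babibab with spaces and $(dangerous) stuff'
printf -v quoted_value '%q' "$value"   # quoted_value is now safely shell-escaped
eval "$name=$quoted_value"
echo "$var_37"                         # outputs the original value verbatim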
1b. Assigning with read
The read builtin lets you assign values to a variable of which you give the name, a fact which can be exploited in conjunction with here-strings:
IFS= read -r -d '' "$name" <<< 'babibab'
echo "$var_37" # outputs “babibab\n”
The IFS part and the option -r make sure that the value is assigned as-is, while the option -d '' allows assigning multi-line values. Because of this last option, the command returns with a non-zero exit code.
Note that, since we are using a here-string, a newline character is appended to the value.
Downsides:
somewhat obscure;
returns with a non-zero exit code;
appends a newline to the value.
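If the trailing newline matters, one workaround (a sketch) is to feed read from a process substitution instead of a here-string:
IFS= read -r -d '' "$name" < <(printf '%s' 'babibab')
echo "$var_37" # outputs “babibab”, with no trailing newline (read still exits non-zero at EOF)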
1c. Assigning with printf
Since Bash 3.1 (released 2005), the printf builtin can also assign its result to a variable whose name is given. By contrast with the previous solutions, it just works, no extra effort is needed to escape things, to prevent splitting and so on.
printf -v "$name" '%s' 'babibab'
echo "$var_37" # outputs “babibab”
Downsides:
Less portable (but, well).
Method 2. Using a “reference” variable
Since Bash 4.3 (released 2014), the declare builtin has an option -n for creating a variable which is a “name reference” to another variable, much like C++ references. Just as in Method 1, the reference stores the name of the aliased variable, but each time the reference is accessed (either for reading or assigning), Bash automatically resolves the indirection.
In addition, Bash has a special and very confusing syntax for getting the value of the reference itself, judge by yourself: ${!ref}.
declare -n ref="var_$i"
echo "${!ref}" # outputs “var_37”
echo "$ref" # outputs “lolilol”
ref='babibab'
echo "$var_37" # outputs “babibab”
This does not avoid the pitfalls explained below, but at least it makes the syntax straightforward.
Downsides:
Not portable.
Risks
All these aliasing techniques present several risks. The first one is executing arbitrary code each time you resolve the indirection (either for reading or for assigning). Indeed, instead of a scalar variable name, like var_37, you may as well alias an array subscript, like arr[42]. But Bash evaluates the contents of the square brackets each time it is needed, so aliasing arr[$(do_evil)] will have unexpected effects… As a consequence, only use these techniques when you control the provenance of the alias.
function guillemots {
declare -n var="$1"
var="«${var}»"
}
arr=( aaa bbb ccc )
guillemots 'arr[1]' # modifies the second cell of the array, as expected
guillemots 'arr[$(date>>date.out)1]' # writes twice into date.out
# (once when expanding var, once when assigning to it)
The second risk is creating a cyclic alias. As Bash variables are identified by their name and not by their scope, you may inadvertently create an alias to itself (while thinking it would alias a variable from an enclosing scope). This may happen in particular when using common variable names (like var). As a consequence, only use these techniques when you control the name of the aliased variable.
function guillemots {
# var is intended to be local to the function,
# aliasing a variable which comes from outside
declare -n var="$1"
var="«${var}»"
}
var='lolilol'
guillemots var # Bash warnings: “var: circular name reference”
echo "$var" # outputs anything!
Source:
BashFaq/006: How can I use variable variables (indirect variables, pointers, references) or associative arrays?
BashFAQ/048: eval command and security issues
The example below returns the value of $name_of_var:
var=name_of_var
echo $(eval echo "\$$var")
Use declare
There is no need to use prefixes like in other answers, nor arrays. Just use declare, double quotes, and parameter expansion.
I often use the following trick to parse argument lists containing one to n arguments formatted as key=value otherkey=othervalue etc=etc, like:
# brace expansion just to exemplify
for variable in {one=foo,two=bar,ninja=tip}
do
declare "${variable%=*}=${variable#*=}"
done
echo $one $two $ninja
# foo bar tip
But expanding the argv list like
for v in "$@"; do declare "${v%=*}=${v#*=}"; done
Extra tips
# parse argv's leading key=value parameters
for v in "$@"; do
case "$v" in ?*=?*) declare "${v%=*}=${v#*=}";; *) break;; esac
done
# consume argv's leading key=value parameters
while test $# -gt 0; do
case "$1" in ?*=?*) declare "${1%=*}=${1#*=}";; *) break;; esac
shift
done
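For illustration, the consuming variant might be wrapped in a (hypothetical) function like this:
deploy() {
    # consume leading key=value arguments, keep the rest as positional parameters
    while test $# -gt 0; do
        case "$1" in ?*=?*) declare "${1%=*}=${1#*=}";; *) break;; esac
        shift
    done
    echo "host=$host port=$port remaining: $*"
}
deploy host=example.com port=8080 build test
# host=example.com port=8080 remaining: build test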
Combining two highly rated answers here into a complete example that is hopefully useful and self-explanatory:
#!/bin/bash
intro="You know what,"
pet1="cat"
pet2="chicken"
pet3="cow"
pet4="dog"
pet5="pig"
# Setting and reading dynamic variables
for i in {1..5}; do
pet="pet$i"
declare "sentence$i=$intro I have a pet ${!pet} at home"
done
# Just reading dynamic variables
for i in {1..5}; do
sentence="sentence$i"
echo "${!sentence}"
done
echo
echo "Again, but reading regular variables:"
echo $sentence1
echo $sentence2
echo $sentence3
echo $sentence4
echo $sentence5
Output:
You know what, I have a pet cat at home
You know what, I have a pet chicken at home
You know what, I have a pet cow at home
You know what, I have a pet dog at home
You know what, I have a pet pig at home
Again, but reading regular variables:
You know what, I have a pet cat at home
You know what, I have a pet chicken at home
You know what, I have a pet cow at home
You know what, I have a pet dog at home
You know what, I have a pet pig at home
This will work too
my_country_code="green"
x="country"
eval z='$'my_"$x"_code
echo $z ## o/p: green
In your case
eval final_val='$'magic_way_to_define_magic_variable_"$1"
echo $final_val
This should work:
function grep_search() {
declare magic_variable_$1="$(ls | tail -1)"
echo "$(tmpvar=magic_variable_$1 && echo ${!tmpvar})"
}
grep_search var # calling grep_search with argument "var"
An extra method that doesn't rely on which shell/bash version you have is by using envsubst. For example:
newvar=$(echo '$magic_variable_'"${dynamic_part}" | envsubst)
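For instance (hypothetical names; note that envsubst only substitutes variables that are actually exported into the environment):
export magic_variable_foo="stack-overflow.txt"
dynamic_part=foo
newvar=$(echo '$magic_variable_'"${dynamic_part}" | envsubst)
echo "$newvar"   # stack-overflow.txt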
For zsh (the default shell on newer macOS versions), you should use
real_var="holaaaa"
aux_var="real_var"
echo ${(P)aux_var}
holaaaa
Instead of "!"
As per BashFAQ/006, you can use read with here string syntax for assigning indirect variables:
function grep_search() {
read "$1" <<<$(ls | tail -1);
}
Usage:
$ grep_search open_box
$ echo $open_box
stack-overflow.txt
Even though it's an old question, I still had a hard time fetching dynamic variable names while avoiding the eval (evil) command.
I solved it with declare -n, which creates a reference to a dynamic value. This is especially useful in CI/CD processes, where the required secret names of the CI/CD service are not known until runtime. Here's how:
# Bash v4.3+
# -----------------------------------------------------------
# Secrets in CI/CD service, injected as environment variables
# AWS_ACCESS_KEY_ID_DEV, AWS_SECRET_ACCESS_KEY_DEV
# AWS_ACCESS_KEY_ID_STG, AWS_SECRET_ACCESS_KEY_STG
# -----------------------------------------------------------
# Environment variables injected by CI/CD service
# BRANCH_NAME="DEV"
# -----------------------------------------------------------
declare -n _AWS_ACCESS_KEY_ID_REF=AWS_ACCESS_KEY_ID_${BRANCH_NAME}
declare -n _AWS_SECRET_ACCESS_KEY_REF=AWS_SECRET_ACCESS_KEY_${BRANCH_NAME}
export AWS_ACCESS_KEY_ID=${_AWS_ACCESS_KEY_ID_REF}
export AWS_SECRET_ACCESS_KEY=${_AWS_SECRET_ACCESS_KEY_REF}
echo $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY
aws s3 ls
Wow, most of the syntax is horrible! Here is one solution with some simpler syntax if you need to indirectly reference arrays:
#!/bin/bash
foo_1=(fff ddd) ;
foo_2=(ggg ccc) ;
for i in 1 2 ;
do
eval mine=( \${foo_$i[@]} ) ;
echo ${mine[@]}" " ;
done ;
For simpler use cases I recommend the syntax described in the Advanced Bash-Scripting Guide.
KISS approach:
a=1
c="bam"
let "$c$a"=4
echo $bam1
results in 4
I want to be able to create a variable name containing the first argument of the command
script.sh file:
#!/usr/bin/env bash
function grep_search() {
eval $1=$(ls | tail -1)
}
Test:
$ source script.sh
$ grep_search open_box
$ echo $open_box
script.sh
As per help eval:
Execute arguments as a shell command.
You may also use Bash ${!var} indirect expansion, as already mentioned, however it doesn't support retrieving of array indices.
For further read or examples, check BashFAQ/006 about Indirection.
We are not aware of any trick that can duplicate that functionality in POSIX or Bourne shells without eval, which can be difficult to do securely. So, consider this a use at your own risk hack.
However, you should re-consider using indirection as per the following notes.
Normally, in bash scripting, you won't need indirect references at all. Generally, people look at this for a solution when they don't understand or know about Bash Arrays or haven't fully considered other Bash features such as functions.
Putting variable names or any other bash syntax inside parameters is frequently done incorrectly and in inappropriate situations to solve problems that have better solutions. It violates the separation between code and data, and as such puts you on a slippery slope toward bugs and security issues. Indirection can make your code less transparent and harder to follow.
For indexed arrays, you can reference them like so:
foo=(a b c)
bar=(d e f)
for arr_var in 'foo' 'bar'; do
declare -a 'arr=("${'"$arr_var"'[@]}")'
# do something with $arr
echo "\$$arr_var contains:"
for char in "${arr[@]}"; do
echo "$char"
done
done
Associative arrays can be referenced similarly but need the -A switch on declare instead of -a.
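For an associative array the keys have to be carried over as well; one possible sketch (not from the original answer) reuses the output of declare -p, which already quotes keys and values safely:
declare -A foo=([alpha]=1 [beta]="two words")
for arr_var in 'foo'; do
    unset arr
    # strip everything up to the first "=" of the declare -p output and
    # hand the remaining compound assignment back to declare
    _def=$(declare -p "$arr_var")
    declare -A "arr=${_def#*=}"
    echo "\$$arr_var contains:"
    for key in "${!arr[@]}"; do
        echo "$key -> ${arr[$key]}"
    done
done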
POSIX compliant answer
For this solution you'll need to have r/w permissions to the /tmp folder.
We create a temporary file holding our variables and leverage the -a flag of the set built-in:
$ man set
...
-a Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands.
Therefore, if we create a file holding our dynamic variables, we can use set to bring them to life inside our script.
The implementation
#!/bin/sh
# Give the temp file a unique name so you don't mess with any other files in there
ENV_FILE="/tmp/$(date +%s)"
MY_KEY=foo
MY_VALUE=bar
echo "$MY_KEY=$MY_VALUE" >> "$ENV_FILE"
# Now that our env file is created and populated, we can use "set"
set -a; . "$ENV_FILE"; set +a
rm "$ENV_FILE"
echo "$foo"
# Output is "bar" (without quotes)
Explaining the steps above:
# Enables the -a behavior
set -a
# Sources the env file
. "$ENV_FILE"
# Disables the -a behavior
set +a
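Tying this back to the original question (a sketch; it assumes the value contains no spaces, newlines or shell metacharacters, since the file is sourced verbatim):
#!/bin/sh
grep_search() {
    ENV_FILE="/tmp/grep_search.$$"
    echo "magic_variable_$1=$(ls | tail -1)" > "$ENV_FILE"
    set -a; . "$ENV_FILE"; set +a
    rm "$ENV_FILE"
}
grep_search open_box
echo "$magic_variable_open_box"   # e.g. stack-overflow.txt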
While I think declare -n is still the best way to do it, there is another way nobody has mentioned, very useful in CI/CD:
function dynamic(){
export a_$1="bla"
}
dynamic 2
echo $a_2
This function will not support spaces so dynamic "2 3" will return an error.
For the varname=$prefix_suffix format, just use:
varname=${prefix}_suffix

CMake: how to avoid escaping spaces in command line? [duplicate]

I'm trying to create a custom command that runs with some environment variables, such as LDFLAGS, whose value needs to be quoted if it contains spaces:
LDFLAGS="-Lmydir -Lmyotherdir"
I cannot find a way to include this argument in a CMake custom command, due to CMake's escaping rules. Here's what I've tried so far:
COMMAND LDFLAGS="-Ldir -Ldir2" echo blah VERBATIM)
yields "LDFLAGS=\"-Ldir -Ldir2\"" echo blah
COMMAND LDFLAGS=\"-Ldir -Ldir2\" echo blah VERBATIM)
yields LDFLAGS=\"-Ldir -Ldir2\" echo blah
It seems I either get the whole string quoted, or the escaped quotes don't resolve when used as part of the command.
I would appreciate either a way to include the literal double-quote or as an alternative a better way to set environment variables for a command. Please note that I'm still on CMake 2.8, so I don't have the new "env" command available in 3.2.
Note that this is not a duplicate of When to quote variables? as none of those quoting methods work for this particular case.
The obvious choice - often recommended when hitting the boundaries of COMMAND especially with older versions of CMake - is to use an external script.
I just wanted to add some simple COMMAND only variations that do work and won't need a shell, but are - I have to admit - still partly platform dependent.
One example would be to put only the quoted part into a variable:
set(vars_as_string "-Ldir -Ldir2")
add_custom_target(
QuotedEnvVar
COMMAND env LD_FLAGS=${vars_as_string} | grep LD_FLAGS
)
Which actually does escape the space and not the quotes.
Another example would be to add it with escaped quotes as a "launcher" rule:
add_custom_target(
LauncherEnvVar
COMMAND env | grep LD_FLAGS
)
set_target_properties(
LauncherEnvVar
PROPERTIES RULE_LAUNCH_CUSTOM "env LD_FLAGS=\"-Ldir -Ldir2\""
)
Edit: Added examples for multiple quoted arguments without the need of escaping quotes
Another example would be to "hide some of the complexity" in a function and - if you want to add this to all your custom command calls - use the global/directory RULE_LAUNCH_CUSTOM property:
function(set_env)
get_property(_env GLOBAL PROPERTY RULE_LAUNCH_CUSTOM)
if (NOT _env)
set_property(GLOBAL PROPERTY RULE_LAUNCH_CUSTOM "env")
endif()
foreach(_arg IN LISTS ARGN)
set_property(GLOBAL APPEND_STRING PROPERTY RULE_LAUNCH_CUSTOM " ${_arg}")
endforeach()
endfunction(set_env)
set_env(LDFLAGS="-Ldir1 -Ldir2" CFLAGS="-Idira -Idirb")
add_custom_target(
MultipleEnvVar
COMMAND env | grep -E 'LDFLAGS|CFLAGS'
)
Alternative (for CMake >= 3.0)
I think what we are actually looking for here (besides cmake -E env ...) is called a bracket argument, which allows any character without the need to add backslashes:
set_property(
GLOBAL PROPERTY
RULE_LAUNCH_CUSTOM [=[env LDFLAGS="-Ldir1 -Ldir2" CFLAGS="-Idira -Idirb"]=]
)
add_custom_target(
MultipleEnvVarNew
COMMAND env | grep -E 'LDFLAGS|CFLAGS'
)
References
0005145: Set environment variables for ADD_CUSTOM_COMMAND/ADD_CUSTOM_TARGET
How to modify environment variables passed to custom CMake target?
[CMake] How to set environment variable for custom command
cmake: when to quote variables?
You need three backslashes. I needed this recently to get a preprocessor define from PkgConfig and apply it to my C++ flags:
pkg_get_variable(SHADERDIR movit shaderdir)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DSHADERDIR=\\\"${SHADERDIR}\\\"")
Florian's answer is wrong on several counts:
Putting the quoted part in a variable makes no difference.
You should definitely use VERBATIM. It fixes platform-specific quoting bugs.
You definitely shouldn't use RULE_LAUNCH_CUSTOM for this. It isn't intended for this and only works with some generators.
You shouldn't use env as the command. It isn't available on Windows.
It turns out the real reason the OP's code doesn't work is that CMake always fully quotes the first word after COMMAND, because it's supposed to be the name of an executable. You simply shouldn't put environment variables first.
For example:
add_custom_command(
OUTPUT q1.txt
COMMAND ENV_VAR="a b" echo "hello" > q1.txt
VERBATIM
)
add_custom_target(q1 ALL DEPENDS q1.txt)
$ VERBOSE=1 make
...
"ENV_VAR=\"a b\"" echo hello > q1.txt
/bin/sh: ENV_VAR="a b": command not found
So how do you pass an environment variable with spaces? Simple.
add_custom_command(
OUTPUT q1.txt
COMMAND ${CMAKE_COMMAND} -E env ENV_VAR="a b" echo "hello" > q1.txt
VERBATIM
)
Ok, I removed my original answer as the one proposed by @Florian is better. There is one additional tweak needed for multiple quoted args. Consider a list of environment variables as such:
set(my_env_vars LDFLAGS="-Ldir1 -Ldir2" CFLAGS="-Idira -Idirb")
In order to produce the desired expansion, convert to string and then replace ; with a space.
set(my_env_string "${my_env_vars}") #produces LDFLAGS="...";CFLAGS="..."
string(REPLACE ";" " " my_env_string "${my_env_string}")
Then you can proceed with @Florian's brilliant answer and add the custom launch rule. If you need semicolons in your string then you'll need to convert them to something else first.
Note that in this case I didn't need to launch with env:
set_target_properties(mytarget PROPERTIES RULE_LAUNCH_CUSTOM "${my_env_string}")
This of course depends on your shell.
On second thought, my original answer is below as I also have a case where I don't have access to the target name.
set(my_env LDFLAGS=\"-Ldir -Ldir2\" CFLAGS=\"-Idira -Idirb\")
add_custom_command(COMMAND sh -c "${my_env} grep LDFLAGS" VERBATIM)
This technique still requires that the semicolons from the list->string conversion be replaced.
Some folks suggest to use ${CMAKE_COMMAND} and pass your executable as an argument, e.g:
COMMAND ${CMAKE_COMMAND} -E env "$(WindowsSdkDir)/bin/x64/makecert.exe" ...
That worked for me.

How to "@"-expand ("${array[@]}") a possibly empty array in Bash when nounset is set?

I have bash script with set -o nounset option (and I want that!).
Now, I want to construct a command invocation, but I don't know the number of arguments beforehand, so I want to use an array for that (example below). However, when ARRAY is an empty array, "${ARRAY[@]}" fails.
Question: how to @-expand an array ("${ARRAY[@]}") so that the expansion does not fail when set -o nounset is on?
Example:
# Clone git repo. Use --reference if ${reference_local_repo} exist.
reference_local_repo=.....
test -d "${reference_local_repo}" \
&& reference=("--reference" "${reference_local_repo}") \
|| reference=()
git clone "${reference[@]}" http://address/of/the/repo
Of course, I could use the following instead:
# bad example
reference=''
test -d "${reference_local_repo}" && reference="--reference ${reference_local_repo}"
... but that wouldn't work if the path to local repo contained a whitespace.
As a workaround, instead of reference=() I use reference=("-c" "dummy.dummy=dummy"). That way I avoid an empty array, and Bash does not complain. Alternatively, I can (rename the array variable and) have "clone" as the first array element. So I got this working, but I'd like to learn The Proper Way.
For the record, I'm using GNU bash, version 4.3.42(1)-release (x86_64-pc-linux-gnu).
To answer your specific question: The very old and simple way to deal with this is:
${reference[@]+"${reference[@]}"}
If reference is unset, nothing is expanded.
If it is set, all its components are expanded.
Read the historical roots for this use:
Once upon 20 or so years ago, some broken minor variants of the Bourne Shell substituted an empty string "" for "$@" if there were no arguments,
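Applied to the snippet from the question, the idiom looks like this (a sketch):
reference=()
test -d "${reference_local_repo}" && reference=(--reference "${reference_local_repo}")
git clone ${reference[@]+"${reference[@]}"} http://address/of/the/repo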
Of course, in this specific case:
test -d "${reference_local_repo}" && abool="" || unset abool
git clone ${abool+--reference "$reference_local_repo"} http://address/of/the/repo
When abool is set to the empty string ("") (or some other value if you so choose to use), it is set, and in the next line it expands to what is after the plus (yes, as exactly two parameters).
When abool is unset, it completely disappears in the next line expansion.
Maybe this is more verbose:
unset abool
if test -d "${reference_local_repo}"; then abool="ValidDir"; fi
git clone ${abool+--reference "$reference_local_repo"} http://address/of/the/repo
I don't understand why you're using an array here. You could just:
test -d "${reference_local_repo}" \
&& reference="${reference_local_repo}" \
|| reference=""
git clone ${reference:+--reference "$reference"} http://address/of/the/repo
Now there are no undefined variables, and no mucking about with arrays for what is actually a single value.
You may use an auxiliary variable (or just redefine the same variable) to check if an array has anything:
foo=${your_array[@]:-}
and then:
git clone ... "${foo}" ...
This is compatible with the nounset flag. The :- expansion at the end of the variable (${your_array[@]:-}) will yield an empty string if $your_array is undefined.

Bash arrays: appending and prepending to each element in array

I'm trying to build a long command involving find. I have an array of directories that I want to ignore, and I want to format this directory into the command.
Basically, I want to transform this array:
declare -a ignore=(archive crl cfg)
into this:
-o -path "$dir/archive" -prune -o -path "$dir/crl" -prune -o -path "$dir/cfg" -prune
This way, I can simply add directories to the array, and the find command will adjust accordingly.
So far, I figured out how to prepend or append using
${ignore[@]/#/-o -path \"\$dir/}
${ignore[@]/%/\" -prune}
But I don't know how to combine these and simultaneously prepend and append to each element of an array.
You cannot do it simultaneously easily. Fortunately, you do not need to:
ignore=( archive crl cfg )
ignore=( "${ignore[@]/%/\" -prune}" )
ignore=( "${ignore[@]/#/-o -path \"\$dir/}" )
echo ${ignore[@]}
Note the parentheses and double quotes - they make sure the array contains three elements after each substitution, even if there are spaces involved.
Have a look at printf, which does the job as well:
printf -- '-o -path "$dir/%s" -prune ' ${ignore[@]}
In general, you should strive to always treat each variable in the quoted form (e.g. "${ignore[@]}") instead of trying to insert quotation marks yourself (just as you should use parameterized statements instead of escaping the input in SQL) because it's hard to be perfect by manual escaping; for example, suppose a variable contains a quotation mark.
In this regard, I would aim at crafting an array where each argument word for find becomes an element: ("-o" "-path" "$dir/archive" "-prune" "-o" "-path" "$dir/crl" "-prune" "-o" "-path" "$dir/cfg" "-prune") (a 12-element array).
Unfortunately, Bash doesn't seem to support a form of parameter expansion where each element expands to multiple words. (p{1,2,3}q expands to p1q p2q p3q, but with a=(1 2 3), p"${a[@]}"q expands to p1 2 3q.) So you need to resort to a loop:
declare -a args=()
for i in "${ignore[@]}"
do
args+=(-o -path "$dir/$i" -prune) # I'm not sure if you want to have
# $dir expanded at this point;
# otherwise, just use "\$dir/$i".
done
find ... "${args[@]}" ...
If I understand right,
declare -a ignore=(archive crl cfg)
a=$(echo ${ignore[@]} | xargs -n1 -I% echo -o -path '"$dir/%"' -prune)
echo $a
prints
-o -path "$dir/archive" -prune -o -path "$dir/crl" -prune -o -path "$dir/cfg" -prune
This works only with an xargs implementation that has the following switches:
-I replstr
Execute utility for each input line, replacing one or more occurrences of replstr in up to replacements
(or 5 if no -R flag is specified) arguments to utility with the entire line of input. The resulting
arguments, after replacement is done, will not be allowed to grow beyond 255 bytes; this is implemented
by concatenating as much of the argument containing replstr as possible, to the constructed arguments to
utility, up to 255 bytes. The 255 byte limit does not apply to arguments to utility which do not contain
replstr, and furthermore, no replacement will be done on utility itself. Implies -x.
-J replstr
If this option is specified, xargs will use the data read from standard input to replace the first occurrence
of replstr instead of appending that data after all other arguments. This option will not affect
how many arguments will be read from input (-n), or the size of the command(s) xargs will generate (-s).
The option just moves where those arguments will be placed in the command(s) that are executed. The
replstr must show up as a distinct argument to xargs. It will not be recognized if, for instance, it is
in the middle of a quoted string. Furthermore, only the first occurrence of the replstr will be
replaced. For example, the following command will copy the list of files and directories which start
with an uppercase letter in the current directory to destdir:
/bin/ls -1d [A-Z]* | xargs -J % cp -rp % destdir

Exporting an array in bash script

I can not export an array from a bash script to another bash script like this:
export myArray[0]="Hello"
export myArray[1]="World"
When I write like this there are no problem:
export myArray=("Hello" "World")
For several reasons I need to initialize my array into multiple lines. Do you have any solution?
Array variables may not (yet) be exported.
From the manpage of bash version 4.1.5 under ubuntu 10.04.
The following statement from Chet Ramey (current bash maintainer as of 2011) is probably the most official documentation about this "bug":
There isn't really a good way to encode an array variable into the environment.
http://www.mail-archive.com/bug-bash@gnu.org/msg01774.html
TL;DR: exportable arrays are not directly supported up to and including bash-5.1, but you can (effectively) export arrays in one of two ways:
a simple modification to the way the child scripts are invoked
use an exported function to store the array initialisation, with a simple modification to the child scripts
Or, you can wait until bash-4.3 is released (in development/RC state as of February 2014, see ARRAY_EXPORT in the Changelog). Update: This feature is not enabled in 4.3. If you define ARRAY_EXPORT when building, the build will fail. The author has stated it is not planned to complete this feature.
The first thing to understand is that the bash environment (more properly command execution environment) is different to the POSIX concept of an environment. The POSIX environment is a collection of un-typed name=value pairs, and can be passed from a process to its children in various ways (effectively a limited form of IPC).
The bash execution environment is effectively a superset of this, with typed variables, read-only and exportable flags, arrays, functions and more. This partly explains why the output of set (bash builtin) and env or printenv differ.
When you invoke another bash shell you're starting a new process, and you lose some bash state. However, if you dot-source a script, the script is run in the same environment; or if you run a subshell via ( ) the environment is also preserved (because bash forks, preserving its complete state, rather than reinitialising using the process environment).
The limitation referenced in @lesmana's answer arises because the POSIX environment is simply name=value pairs with no extra meaning, so there's no agreed way to encode or format typed variables; see below for an interesting bash quirk regarding functions, and an upcoming change in bash-4.3 (proposed array feature, later abandoned).
There are a couple of simple ways to do this using declare -p (built-in) to output some of the bash environment as a set of one or more declare statements which can be used to reconstruct the type and value of a "name". This is basic serialisation, but with rather less of the complexity some of the other answers imply. declare -p preserves array indexes, sparse arrays and quoting of troublesome values. For simple serialisation of an array you could just dump the values line by line, and use read -a myarray to restore it (works with contiguous 0-indexed arrays, since read -a automatically assigns indexes); see the sketch below.
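The line-by-line variant might look like this (a sketch; it assumes a contiguous 0-indexed array and re-assigns indexes on restore):
printf '%s\n' "${array1[@]}" > .array1.lines   # one element per line
read -r -d '' -a array1 < .array1.lines        # splits on IFS whitespace, so this only round-trips
                                               # elements without embedded whitespace; on bash 4+,
                                               # mapfile -t array1 < .array1.lines keeps spaces intact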
These methods do not require any modification of the script(s) you are passing the arrays to.
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
bash -c ". .bash_arrays; . otherscript.sh" # source both in the same environment
Variations on the above bash -c "..." form are sometimes (mis-)used in crontabs to set variables.
Alternatives include:
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
BASH_ENV=.bash_arrays otherscript.sh # non-interactive startup script
Or, as a one-liner:
BASH_ENV=<(declare -p array1 array2) otherscript.sh
The last one uses process substitution to pass the output of the declare command as an rc script. (This method only works in bash-4.0 or later: earlier versions unconditionally fstat() rc files and use the size returned to read() the file in one go; a FIFO returns a size of 0, and so won't work as hoped.)
In a non-interactive shell (i.e. shell script) the file pointed to by the BASH_ENV variable is automatically sourced. You must make sure bash is correctly invoked, possibly using a shebang to invoke "bash" explicitly, and not #!/bin/sh as bash will not honour BASH_ENV when in historical/POSIX mode.
If all your array names happen to have a common prefix you can use declare -p ${!myprefix*} to expand a list of them, instead of enumerating them.
You probably should not attempt to export and re-import the entire bash environment using this method, some special bash variables and arrays are read-only, and there can be other side-effects when modifying special variables.
(You could also do something slightly disagreeable by serialising the array definition to an exportable variable, and using eval, but let's not encourage the use of eval ...
$ array=([1]=a [10]="b c")
$ export scalar_array=$(declare -p array)
$ bash # start a new shell
$ eval $scalar_array
$ declare -p array
declare -a array='([1]="a" [10]="b c")'
)
As referenced above, there's an interesting quirk: special support for exporting functions through the environment:
function myfoo() {
echo foo
}
with export -f or set +a to enable this behaviour, will result in this in the (process) environment, visible with printenv:
myfoo=() { echo foo
}
The variable is functionname (or functionname() for backward compatibility) and its value is () { functionbody }.
When a subsequent bash process starts it will recreate a function from each such environment variable. If you peek into the bash-4.2 source file variables.c you'll see variables starting with () { are handled specially. (Though creating a function using this syntax with declare -f is forbidden.) Update: The "shellshock" security issue is related to this feature, contemporary systems may disable automatic function import from the environment as a mitigation.
If you keep reading though, you'll see an #if 0 (or #if ARRAY_EXPORT) guarding code that checks variables starting with ([ and ending with ), and a comment stating "Array variables may not yet be exported". The good news is that in the current development version bash-4.3rc2 the ability to export indexed arrays (not associative) is enabled. This feature is not likely to be enabled, as noted above.
We can use this to create a function which restores any array data required:
% function sharearray() {
array1=(a b c d)
}
% export -f sharearray
% bash -c 'sharearray; echo ${array1[*]}'
So, similar to the previous approach, invoke the child script with:
bash -c "sharearray; . otherscript.sh"
Or, you can conditionally invoke the sharearray function in the child script by adding at some appropriate point:
declare -F sharearray >/dev/null && sharearray
Note there is no declare -a in the sharearray function, if you do that the array is implicitly local to the function, not what is wanted. bash-4.2 supports declare -g that makes a variable declared in a function into a global, so declare -ga can then be used. (Since associative arrays require a declare -A you won't be able to use this method for global associative arrays prior to bash-4.2, from v4.2 declare -Ag will work as hoped.) The GNU parallel documentation has useful variation on this method, see the discussion of --env in the man page.
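For instance, the sharearray function above could be written with declare -ga (a sketch; needs bash 4.2+):
function sharearray() {
    declare -ga array1=(a b c d)   # -g assigns globally instead of creating a function-local array
}
export -f sharearray
bash -c 'sharearray; echo ${array1[*]}'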
Your question as phrased also indicates you may be having problems with export itself. You can export a name after you've created or modified it. "exportable" is a flag or property of a variable, for convenience you can also set and export in a single statement. Up to bash-4.2 export expects only a name, either a simple (scalar) variable or function name are supported.
Even if you could (in future) export arrays, exporting selected indexes (a slice) may not be supported (though since arrays are sparse there's no reason it could not be allowed). Though bash also supports the syntax declare -a name[0], the subscript is ignored, and "name" is simply a normal indexed array.
Jeez. I don't know why the other answers made this so complicated. Bash has nearly built-in support for this.
In the exporting script:
myArray=( ' foo"bar ' $'\n''\nbaz)' ) # an array with two nasty elements
myArray="${myArray[@]@Q}" ./importing_script.sh
(Note, the double quotes are necessary for correct handling of whitespace within array elements.)
Upon entry to importing_script.sh, the value of the myArray environment variable comprises these exact 26 bytes:
' foo"bar ' $'\n\\nbaz)'
Then the following will reconstitute the array:
eval "myArray=( ${myArray} )"
CAUTION! Do not eval like this if you cannot trust the source of the myArray environment variable. This trick exhibits the "Little Bobby Tables" vulnerability. Imagine if someone were to set the value of myArray to ) ; rm -rf / #.
The environment is just a collection of key-value pairs, both of which are character strings. A proper solution that works for any kind of array could either
Save each element in a different variable (e.g. MY_ARRAY_0=${myArray[0]}). Gets complicated because of the dynamic variable names.
Save the array in the file system (declare -p myArray >file).
Serialize all array elements into a single string.
These are covered in the other posts. If you know that your values never contain a certain character (for example |) and your keys are consecutive integers, you can simply save the array as a delimited list:
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
And restore it in the child process:
IFS='|'; myArray=($MY_ARRAY); unset IFS
Based on @mr.spuratic's use of BASH_ENV, here I tunnel $@ through script -f -c
script -c <command> <logfile> can be used to run a command inside another pty (and process group) but it cannot pass any structured arguments to <command>.
Instead <command> is a simple string to be an argument to the system library call.
I need to tunnel $@ of the outer bash into $@ of the bash invoked by script.
As declare -p cannot take $@, here I use the magic bash variable _ (with a dummy first array value as that will get overwritten by bash). This saves me trampling on any important variables:
Proof of concept:
BASH_ENV=<( declare -a _=("" "$@") && declare -p _ ) bash -c 'set -- "${_[@]:1}" && echo "$@"'
"But," you say, "you are passing arguments to bash" -- and indeed I am, but these are a simple string of known characters. Here is its use with script:
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
which gives me this wrapper function in_pty:
in_pty() {
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
}
or this function-less wrapper as a composable string for Makefiles:
in_pty=bash -c 'SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$$@") && declare -p _ && echo '"'"'set -- "$${_[@]:1}"'"'"') script -qfc '"'"'"$$@"'"'"' /tmp/logfile' --
...
$(in_pty) test --verbose $@ $^
I was editing a different post and made a mistake. Augh. Anyway, perhaps this might help?
https://stackoverflow.com/a/11944320/1594168
Note that because the shell's array format is undocumented in bash or on any other shell's side,
it is very difficult to return a shell array in a platform-independent way.
You would have to check the version, and also craft a simple script that concatenates all
shell arrays into a file that other processes can read back in.
However, if you know the name of the array you want to take back home then there is a way, while a bit dirty.
Let's say I have
MyAry[42]="whatever-stuff";
MyAry[55]="foo";
MyAry[99]="bar";
So I want to take it home
name_of_child=MyAry
take_me_home="`declare -p ${name_of_child}`";
export take_me_home="${take_me_home/#declare -a ${name_of_child}=/}"
We can see it being exported, by checking from a sub-process
echo ""|awk '{print "from awk =["ENVIRON["take_me_home"]"]"; }'
Result :
from awk =['([42]="whatever-stuff" [55]="foo" [99]="bar")']
If we absolutely must, use the env var to dump it.
env > some_tmp_file
Then
Before running the another script,
# This is the magic that does it all
source some_tmp_file
As lesmana reported, you cannot export arrays. So you have to serialize them before passing them through the environment. This serialization is useful in other places too, where only a string fits (su -c 'string', ssh host 'string'). The shortest way to do this in code is to abuse 'getopt':
# preserve_array(arguments). return in _RET a string that can be expanded
# later to recreate positional arguments. They can be restored with:
# eval set -- "$_RET"
preserve_array() {
_RET=$(getopt --shell sh --options "" -- -- "$@") && _RET=${_RET# --}
}
# restore_array(name, payload)
restore_array() {
local name="$1" payload="$2"
eval set -- "$payload"
eval "unset $name && $name=(\"\$@\")"
}
Use it like this:
foo=("1: &&& - *" "2: two" "3: %# abc" )
preserve_array "${foo[@]}"
foo_stuffed=${_RET}
restore_array newfoo "$foo_stuffed"
for elem in "${newfoo[@]}"; do echo "$elem"; done
## output:
# 1: &&& - *
# 2: two
# 3: %# abc
This does not address unset/sparse arrays.
You might be able to reduce the 2 'eval' calls in restore_array.
Although this question/answers are pretty old, this post seems to be the top hit when searching for "bash serialize array"
And, although the original question wasn't quite related to serializing/deserializing arrays, it does seem that the answers have devolved in that direction.
So with that ... I offer my solution:
Pros
All Core Bash Concepts
No Evals
No Sub-Commands
Cons
Functions take variable names as arguments (vs actual values)
Serializing requires having at least one character that is not present in the array
serialize_array.bash
# shellcheck shell=bash
##
# serialize_array
# Serializes a bash array to a string, with a configurable separator.
#
# $1 = source varname ( contains array to be serialized )
# $2 = target varname ( will contain the serialized string )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_array=( one "two three" four )
# serialize_array my_array my_string '|'
# declare -p my_string
#
# result:
#
# declare -- my_string="one|two three|four"
#
function serialize_array() {
declare -n _array="${1}" _str="${2}" # _array, _str => local reference vars
local IFS="${3:-$'\x01'}"
# shellcheck disable=SC2034 # Reference vars assumed used by caller
_str="${_array[*]}" # * => join on IFS
}
##
# deserialize_array
# Deserializes a string into a bash array, with a configurable separator.
#
# $1 = source varname ( contains string to be deserialized )
# $2 = target varname ( will contain the deserialized array )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_string="one|two three|four"
# deserialize_array my_string my_array '|'
# declare -p my_array
#
# result:
#
# declare -a my_array=([0]="one" [1]="two three" [2]="four")
#
function deserialize_array() {
IFS="${3:-$'\x01'}" read -r -a "${2}" <<<"${!1}" # -a => split on IFS
}
NOTE: This is hosted as a gist here:
https://gist.github.com/TekWizely/c0259f25e18f2368c4a577495cd566cd
[edits]
Logic simplified after running through shellcheck + shfmt.
Added URL for hosted GIST
You can use this, no need to write to a file; tested on Ubuntu 12.04, bash 4.2.24.
Also, your multi-line array initialization can be exported.
cat >>exportArray.sh
function FUNCarrayRestore() {
local l_arrayName=$1
local l_exportedArrayName=${l_arrayName}_exportedArray
# if set, recover its value to array
if eval '[[ -n ${'$l_exportedArrayName'+dummy} ]]'; then
eval $l_arrayName'='`eval 'echo $'$l_exportedArrayName` #do not put export here!
fi
}
export -f FUNCarrayRestore
function FUNCarrayFakeExport() {
local l_arrayName=$1
local l_exportedArrayName=${l_arrayName}_exportedArray
# prepare to be shown with export -p
eval 'export '$l_arrayName
# collect exportable array in string mode
local l_export=`export -p \
|grep "^declare -ax $l_arrayName=" \
|sed 's"^declare -ax '$l_arrayName'"export '$l_exportedArrayName'"'`
# creates exportable non array variable (at child shell)
eval "$l_export"
}
export -f FUNCarrayFakeExport
test this example on terminal bash (works with bash 4.2.24):
source exportArray.sh
list=(a b c)
FUNCarrayFakeExport list
bash
echo ${list[@]} #empty :(
FUNCarrayRestore list
echo ${list[@]} #profit! :D
I may improve it here
PS.: if someone clears/improve/makeItRunFaster I would like to know/see, thx! :D
For arrays with values without spaces, I've been using a simple set of functions to iterate through each array element and concatenate the array:
_arrayToStr(){
array=($@)
arrayString=""
for (( i=0; i<${#array[@]}; i++ )); do
if [[ $i == 0 ]]; then
arrayString="\"${array[i]}\""
else
arrayString="${arrayString} \"${array[i]}\""
fi
done
export arrayString="(${arrayString})"
}
_strToArray(){
str=$1
array=${str//\"/}
array=(${array//[()]/""})
export array=${array[@]}
}
The first function will turn the array into a string by adding the opening and closing parentheses and escaping all of the double quotation marks. The second function will strip the quotation marks and the parentheses and place the elements into a dummy array.
In order to export the array, you would pass in all the elements of the original array:
array=(foo bar)
_arrayToStr ${array[@]}
At this point, the array has been exported into the value $arrayString. To import the array in the destination file, rename the array and do the opposite conversion:
_strToArray "$arrayName"
newArray=(${array[@]})
Many thanks to @stéphane-chazelas who pointed out all the problems with my previous attempts; this now seems to work to serialise an array to stdout or into a variable.
This technique does not shell-parse the input (unlike declare -a/declare -p) and so is safe against malicious insertion of metacharacters in the serialised text.
Note: newlines are not escaped, because read deletes the \<newlines> character pair, so -d ... must instead be passed to read, and then unescaped newlines are preserved.
All this is managed in the unserialise function.
Two magic characters are used, the field separator and the record separator (so that multiple arrays can be serialized to the same stream).
These characters can be defined as FS and RS but neither can be defined as newline character because an escaped newline is deleted by read.
The escape character must be \ the backslash, as that is what is used by read to avoid the character being recognized as an IFS character.
serialise will serialise "$@" to stdout, serialise_to will serialise to the variable named in $1
serialise() {
set -- "${@//\\/\\\\}" # \
set -- "${@//${FS:-;}/\\${FS:-;}}" # ; - our field separator
set -- "${@//${RS:-:}/\\${RS:-:}}" # : - our record separator
local IFS="${FS:-;}"
printf ${SERIALIZE_TARGET:+-v"$SERIALIZE_TARGET"} "%s" "$*${RS:-:}"
}
serialise_to() {
SERIALIZE_TARGET="$1" serialise "${@:2}"
}
unserialise() {
local IFS="${FS:-;}"
if test -n "$2"
then read -d "${RS:-:}" -a "$1" <<<"${*:2}"
else read -d "${RS:-:}" -a "$1"
fi
}
and unserialise with:
unserialise data # read from stdin
or
unserialise data "$serialised_data" # from args
e.g.
$ serialise "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
Now is the time;For all good men;To drink $drink;At the `party`;Party Party Party:
(without a trailing newline)
read it back:
$ serialise_to s "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
$ unserialise array "$s"
$ echo "${array[@]/#/$'\n'}"
Now is the time
For all good men
To drink $drink
At the `party`
Party Party Party
or
unserialise array # read from stdin
Bash's read respects the escape character \ (unless you pass the -r flag) to remove special meaning of characters such as for input field separation or line delimiting.
If you want to serialise an array instead of a mere argument list then just pass your array as the argument list:
serialise "${my_array[@]}"
You can use unserialise in a loop like you would read because it is just a wrapped read - but remember that the stream is not newline separated:
while unserialise array
do ...
done
I've written my own functions for this and improved the method with IFS:
Features:
Doesn't call to $(...) and so doesn't spawn another bash shell process
Serializes ? and | characters into ?00 and ?01 sequences and back, so it can be used on arrays containing these characters
Handles the line return characters between serialization/deserialization as other characters
Tested in cygwin bash 3.2.48 and Linux bash 4.3.48
function tkl_declare_global()
{
eval "$1=\"\$2\"" # right argument does NOT evaluate
}
function tkl_declare_global_array()
{
local IFS=$' \t\r\n' # just in case, workaround for the bug in the "[@]:i" expression under bash versions lower than 4.1
eval "$1=(\"\${@:2}\")"
}
function tkl_serialize_array()
{
local __array_var="$1"
local __out_var="$2"
[[ -z "$__array_var" ]] && return 1
[[ -z "$__out_var" ]] && return 2
local __array_var_size
eval declare "__array_var_size=\${#$__array_var[@]}"
(( ! __array_var_size )) && { tkl_declare_global $__out_var ''; return 0; }
local __escaped_array_str=''
local __index
local __value
for (( __index=0; __index < __array_var_size; __index++ )); do
eval declare "__value=\"\${$__array_var[__index]}\""
__value="${__value//\?/?00}"
__value="${__value//|/?01}"
__escaped_array_str="$__escaped_array_str${__escaped_array_str:+|}$__value"
done
tkl_declare_global $__out_var "$__escaped_array_str"
return 0
}
function tkl_deserialize_array()
{
local __serialized_array="$1"
local __out_var="$2"
[[ -z "$__out_var" ]] && return 1
(( ! ${#__serialized_array} )) && { tkl_declare_global $__out_var ''; return 0; }
local IFS='|'
local __deserialized_array=($__serialized_array)
tkl_declare_global_array $__out_var
local __index=0
local __value
for __value in "${__deserialized_array[@]}"; do
__value="${__value//\?01/|}"
__value="${__value//\?00/?}"
tkl_declare_global $__out_var[__index] "$__value"
(( __index++ ))
done
return 0
}
Example:
a=($'1 \n 2' "3\"4'" 5 '|' '?')
tkl_serialize_array a b
tkl_deserialize_array "$b" c
I think you can try it this way (by sourcing your script after export):
export myArray=(Hello World)
. yourScript.sh
