Accessing Associative Arrays in GNU Parallel

Assume the following in Bash:
declare -A ar='([one]="1" [two]="2" )'
declare -a ari='([0]="one" [1]="two")'
for i in "${!ari[@]}"; do
  echo $i ${ari[i]} ${ar[${ari[i]}]}
done
which outputs:
0 one 1
1 two 2
Can the same be done with GNU Parallel, making sure to use the index of the associative array, not the sequence? Does the fact that arrays can't be exported make this difficult, if not impossible?

Yes, it makes it trickier. But not impossible.
You can't export an array directly. However, you can turn an array into a description of that same array using declare -p, and you can store that description in an exportable variable. In fact, you can store that description in a function and export the function, although it's a bit of a hack. You also have to deal with the fact that executing a declare command inside a function makes the declared variables local, so you need to introduce a -g flag into the generated declare commands.
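To see what such a description looks like, here is a minimal illustration (the array name demo is just an example; the quoting is shown as bash 4.x prints it):
$ declare -A demo='([x]="1")'
$ declare -p demo
declare -A demo='([x]="1")'
$ declare -p demo | sed '1s/declare -./&g/'
declare -Ag demo='([x]="1")'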
UPDATE: Since shellshock, the above hack doesn't work. A small variation on the theme does work. So if your bash has been updated, please skip down to the subtitle "ShellShock Version".
So, here's one possible way of generating the exportable function:
make_importer () {
  local func=$1; shift;
  export $func='() {
  '"$(for arr in "$@"; do
        declare -p $arr|sed '1s/declare -./&g/'
      done)"'
  }'
}
Now we can create our arrays and build an exported importer for them:
$ declare -A ar='([one]="1" [two]="2" )'
$ declare -a ari='([0]="one" [1]="two")'
$ make_importer ar_importer ar ari
And see what we've built
$ echo "$ar_importer"
() {
declare -Ag ar='([one]="1" [two]="2" )'
declare -ag ari='([0]="one" [1]="two")'
}
OK, the formatting is a bit ugly, but this isn't about whitespace. Here's the hack, though. All we've got there is an ordinary (albeit exported) variable, but when it gets imported into a subshell, a bit of magic happens [Note 1]:
$ bash -c 'echo "$ar_importer"'
$ bash -c 'type ar_importer'
ar_importer is a function
ar_importer ()
{
declare -Ag ar='([one]="1" [two]="2" )';
declare -ag ari='([0]="one" [1]="two")'
}
And it looks prettier, too.
Now we can run it in the command we give to parallel:
$ printf %s\\n "${!ari[@]}" |
parallel \
'ar_importer; echo "{}" "${ari[{}]}" "${ar[${ari[{}]}]}"'
0 one 1
1 two 2
Or, for execution on a remote machine:
$ printf %s\\n "${!ari[@]}" |
parallel -S localhost --env ar_importer \
'ar_importer; echo "{}" "${ari[{}]}" "${ar[${ari[{}]}]}"'
0 one 1
1 two 2
ShellShock Version
Unfortunately, the flurry of fixes for Shellshock makes it a little harder to accomplish the same task. In particular, it is now necessary to export a function named foo as the environment variable named BASH_FUNC_foo%%, which is an invalid name (because of the percent signs). But we can still define the function (using eval) and export it.
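You can see the new encoding for yourself (a sketch; myfn is just an illustrative name, output from a post-Shellshock bash, and the exact formatting may vary slightly by version):
$ myfn() { echo hi; }
$ export -f myfn
$ env | grep -A1 '^BASH_FUNC_myfn'
BASH_FUNC_myfn%%=() { echo hi
}
With that in mind, here is the adapted make_importer: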
make_importer () {
  local func=$1; shift;
  # An alternative to eval is:
  # . /dev/stdin <<< ...
  # but that is neither less nor more dangerous than eval.
  eval "$func"'() {
  '"$(for arr in "$@"; do
        declare -p $arr|sed '1s/declare -./&g/'
      done)"'
  }'
  export -f "$func"
}
As above, we can then build the arrays and make an exporter:
$ declare -A ar='([one]="1" [two]="2" )'
$ declare -a ari='([0]="one" [1]="two")'
$ make_importer ar_importer ar ari
But now, the function actually exists in our environment as a function:
$ type ar_importer
ar_importer is a function
ar_importer ()
{
declare -Ag ar='([one]="1" [two]="2" )';
declare -ag ari='([0]="one" [1]="two")'
}
Since it has been exported, we can run it in the command we give to parallel:
$ printf %s\\n "${!ari[@]}" |
parallel \
'ar_importer; echo "{}" "${ari[{}]}" "${ar[${ari[{}]}]}"'
0 one 1
1 two 2
Unfortunately, it no longer works on a remote machine (at least with the version of parallel I have available) because parallel doesn't know how to export functions. If this gets fixed, the following should work:
$ printf %s\\n "${!ari[@]}" |
parallel -S localhost --env ar_importer \
'ar_importer; echo "{}" "${ari[{}]}" "${ar[${ari[{}]}]}"'
However, there is one important caveat: you cannot export a function from a bash with the shellshock patch to a bash without the patch, or vice versa. So even if parallel gets fixed, the remote machine(s) must be running the same bash version as the local machine. (Or at least, either both or neither must have the shellshock patches.)
Note 1: The magic is that the way bash marks an exported variable as a function is that the exported value starts exactly with () {. So if you export a variable which starts with those characters and is a syntactically correct function, then bash subshells will treat it as a function. (Don't expect non-bash subshells to understand, though.)
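As a minimal illustration of that magic (this only works on a pre-Shellshock bash; the variable name is arbitrary):
$ export looks_like_fn='() { echo magic; }'
$ bash -c 'looks_like_fn'
magic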

A lot has happened in 4 years. GNU Parallel 20190222 comes with env_parallel. This is a shell function that makes it possible to export most of the environment to the commands run by GNU Parallel.
It is supported in ash, bash, csh, dash, fish, ksh, mksh, pdksh, sh, tcsh, and zsh. The support varies from shell to shell (see details on https://www.gnu.org/software/parallel/env_parallel.html). For bash you would do:
# Load the env_parallel function
. `which env_parallel.bash`
# Ignore variables currently defined
env_parallel --session
[... define your arrays, functions, aliases, and variables here ...]
env_parallel my_command ::: values
# The environment is also exported to remote systems if they use the same shell
(echo value1; echo value2) | env_parallel -Sserver1,server2 my_command
# Optional cleanup
env_parallel --end-session
So in your case something like this:
env_parallel --session
declare -A ar='([one]="1" [two]="2" )'
declare -a ari='([0]="one" [1]="two")'
foo() {
  for i in "${!ari[@]}"; do
    echo $i ${ari[i]} ${ar[${ari[i]}]}
  done;
}
env_parallel foo ::: dummy
env_parallel --end-session
As you might expect env_parallel is a bit slower than pure parallel.

GNU Parallel is a Perl program. If the Perl program cannot access the variables, then I do not see a way that the variables can be passed on by the Perl program.
So if you want to parallelize the loop I see two options:
declare -A ar='([one]="1" [two]="2" )'
declare -a ari='([0]="one" [1]="two")'
for i in "${!ari[@]}"; do
  sem -j+0 echo $i ${ari[i]} ${ar[${ari[i]}]}
done
The sem solution will not guard against mixed output.
declare -A ar='([one]="1" [two]="2" )'
declare -a ari='([0]="one" [1]="two")'
for i in "${!ari[@]}"; do
  echo echo $i ${ari[i]} ${ar[${ari[i]}]}
done | parallel
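Note that the second variant pipes shell code into parallel, so values containing spaces or shell metacharacters would be re-split. A hedged variant that quotes each argument with printf %q (same arrays as above):
declare -A ar='([one]="1" [two]="2" )'
declare -a ari='([0]="one" [1]="two")'
for i in "${!ari[@]}"; do
  printf 'echo %q %q %q\n' "$i" "${ari[i]}" "${ar[${ari[i]}]}"
done | parallel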

Related

How to iterate over array of associated arrays in zsh

Here is an example that runs how I want it to in bash
declare -A price_oracle=(
[name]="price_oracle_name"
[active]=true
)
declare -A market=(
[name]="market_name"
[active]=true
)
declare -a contract_list=(
price_oracle
market
)
for contract in "${contract_list[#]}"; do
declare -n lst="$contract"
if ${lst[is_active]}; then
echo "Importing schema for $contract contract"
wasm-cli import -s --name $contract contracts/$contract/schema
fi
done
How can this same functionality be accomplished in zsh? declare -n is not valid in zsh.
The nested parameter expansion flag (P) can be used instead of a bash nameref:
#!/usr/bin/env zsh
declare -A price_oracle=(
[name]=price_oracle_name
[active]=true
)
declare -A market=(
[name]=market_name
[active]=true
)
declare -a contract_list=(
price_oracle
market
)
for contract in $contract_list; do
  local -A lst=("${(Pkv@)contract}")
  if ${lst[active]}; then
    echo "Importing schema for $contract contract"
    wasm-cli import -s --name ${lst[name]} contracts/$contract/schema
  fi
done
To work with the associative array, this also uses the key (k), value (v), and array separation (@) expansion flags. These are described in the zshexpn man page and the online zsh documentation.

How to pass an array and an associative array between bash scripts?

I want to pass an array and an associative array between bash scripts.
I tried to send the arguments as in the example below, but I get the error message:
./b.sh: line 3: ${1[@]}: bad substitution
How can I do this?
Example:
First script a.sh that call other script b.sh
a.sh
#!/usr/bin/bash -x
declare -a array=("a" "b")
declare -A associative_array
associative_array[10]="Hello world"
./b.sh "${array[#]}" $associative_array
b.sh
#!/usr/bin/bash
declare -a array="${1[@]}"
declare -A associative_array="$2"
echo "${array[@]}"
echo "${associative_array[10]}"
This is going to be flame-bait since it uses the evil eval, but you may find it useful until a better solution comes along. Don't use it without reading Why should eval be avoided in Bash, and what should I use instead? first.
a.sh
declare -a array=("a" "b")
declare -A associative_array
associative_array[10]="Hello world"
./b.sh "$(declare -p array)" "$(declare -p associative_array)"
b.sh
eval "$1"
eval "$2"
echo "${array[#]}"
echo "${associative_array[10]}"

Bash: Is there a way to add UUIDs and their Mount Points into an array?

I'm trying to write a backup script that takes a specific list of disk's UUIDs, mounts them to specified points, rsyncs the data to a specified end point, and then also does a bunch of other conditional checks when that's done. But since there will be a lot of checking after rsync, it would be nice to set each disk's UUID and associated/desired mount point as strings to save hardcoding them throughout the script, and also if say the UUIDs change in future (if a drive is updated or swapped out), this will be easier to maintain the script...
I've been looking at arrays in bash, to make the list of wanted disks, but have questions about how to make this possible, as my experience with arrays is non-existent!
The list of disks---in order of priority wanted to backup---are:
# sdc1 2.7T UUID 8C1CC0C19D012E29 /media/user/Documents/
# sdb1 1.8T UUID 39CD106C6FDA5907 /media/user/Photos/
# sdd1 3.7T UUID 5104D5B708E102C0 /media/user/Video/
... and notice I want to go sdc1, sdb1, sdd1 and so on, (i.e. custom order).
Is it possible to create this list, in order of priority, so it's something like this?
DisksToBackup
└─1
└─UUID => '8C1CC0C19D012E29'
└─MountPoint => '/media/user/Documents/'
└─2
└─UUID => '39CD106C6FDA5907'
└─MountPoint => '/media/user/Photos/'
└─3
└─UUID => '5104D5B708E102C0'
└─MountPoint => '/media/user/Video/'
OR some obviously better idea than this...
And then how to actually use this?
Let's say for example, how to go through our list and mount each disk (I know this is incorrect syntax, but again, I know nothing about arrays):
mount --uuid $DisksToBackup[*][UUID] $DisksToBackup[*][MountPoint]?
Update: Using Linux Mint 19.3
Output of bash --version gives: GNU bash, version 4.4.20(1)
Bash starting with version 4 provides associative arrays, but only with a single dimension. Multiple dimensions have to be simulated using composite keys such as '0-uuid', as shown in the following interactive bash examples (remove the leading $ and > and the bash output when putting this into a script).
$ declare -A disks
$ disks=([0-uuid]=8C1CC0C19D012E29 [0-mount]=/media/user/Documents/
> [1-uuid]=39CD106C6FDA5907 [1-mount]=/media/user/Photos/)
$ echo ${disks[0-uuid]}
8C1CC0C19D012E29
$ echo ${disks[*]}
/media/user/Documents/ 39CD106C6FDA5907 8C1CC0C19D012E29 /media/user/Photos/
$ echo ${!disks[*]}
0-mount 0-uuid 1-uuid 1-mount
However, there is no ordering for the keys (the order of the keys differs from the order in which we defined them). You may want to use a second array as in the following example which allows you to break down the multiple dimensions as well:
$ disks_order=(0 1)
$ for i in ${disks_order[*]}; do
> echo "${disks[$i-uuid]} ${disks[$i-mount]}"
> done
8C1CC0C19D012E29 /media/user/Documents/
39CD106C6FDA5907 /media/user/Photos/
In case you use bash version 3, you need to simulate the associative array using other means. See the question on associative arrays in bash 3, or simply represent your structure in a simple array such as the following, which makes everything more readable anyway:
$ disks=(8C1CC0C19D012E29=/media/user/Documents/
> 39CD106C6FDA5907=/media/user/Photos/)
$ for disk in "${disks[@]}"; do
> uuid="${disk%=*}"
> path="${disk##*=}"
> echo "$uuid $path"
> done
8C1CC0C19D012E29 /media/user/Documents/
39CD106C6FDA5907 /media/user/Photos/
The %=* is a fancy way of saying remove everything after (and including) the = sign. And ##*= to remove everything before (and including) the = sign.
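A quick illustration of those two expansions:
$ disk="8C1CC0C19D012E29=/media/user/Documents/"
$ echo "${disk%=*}"
8C1CC0C19D012E29
$ echo "${disk##*=}"
/media/user/Documents/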
As an example of how one could read this data into a series of arrays:
#!/usr/bin/env bash
i=0
declare -g -A "disk$i"
declare -n currDisk="disk$i"
while IFS= read -r line || (( ${#currDisk[@]} )); do : "line=$line"
  if [[ $line ]]; then
    if [[ $line = *=* ]]; then
      currDisk[${line%%=*}]=${line#*=}
    else
      printf 'WARNING: Ignoring unrecognized line: %q\n' "$line" >&2
    fi
  else
    if (( ${#currDisk[@]} )); then
      declare -p "disk$i" >&2 # for debugging/demo: print out what we created
      (( ++i ))
      unset -n currDisk
      declare -g -A "disk$i=( )"
      declare -n currDisk="disk$i"
    fi
  fi
done < <(blkid -o export)
This gives you something like:
declare -g -A disk0=( [PARTLABEL]="primary" [UUID]="1111-2222-3333" [TYPE]=btrfs ...)
declare -g -A disk1=( [PARTLABEL]="esp" [LABEL]="boot" [TYPE]="vfat" ...)
...so you can write code iterating over them doing whatever search/comparison/etc you want. For example:
for _diskVar in "${!disk@}"; do # iterates over variable names starting with "disk"
  declare -n _currDisk="$_diskVar" # refer to each such variable as _currDisk in turn
  # replace the below with your actual application logic, whatever that is
  if [[ ${_currDisk[LABEL]} = "something" ]] && [[ ${_currDisk[TYPE]} = "something_else" ]]; then
    echo "Found ${_currDisk[DEVNAME]}"
  fi
  unset -n _currDisk # clear the nameref when done with it
done

Exporting an array in bash script

I can not export an array from a bash script to another bash script like this:
export myArray[0]="Hello"
export myArray[1]="World"
When I write like this there are no problem:
export myArray=("Hello" "World")
For several reasons I need to initialize my array into multiple lines. Do you have any solution?
Array variables may not (yet) be exported.
From the manpage of bash version 4.1.5 under ubuntu 10.04.
The following statement from Chet Ramey (current bash maintainer as of 2011) is probably the most official documentation about this "bug":
There isn't really a good way to encode an array variable into the environment.
http://www.mail-archive.com/bug-bash@gnu.org/msg01774.html
TL;DR: exportable arrays are not directly supported up to and including bash-5.1, but you can (effectively) export arrays in one of two ways:
a simple modification to the way the child scripts are invoked
use an exported function to store the array initialisation, with a simple modification to the child scripts
Or, you can wait until bash-4.3 is released (in development/RC state as of February 2014, see ARRAY_EXPORT in the Changelog). Update: This feature is not enabled in 4.3. If you define ARRAY_EXPORT when building, the build will fail. The author has stated it is not planned to complete this feature.
The first thing to understand is that the bash environment (more properly command execution environment) is different to the POSIX concept of an environment. The POSIX environment is a collection of un-typed name=value pairs, and can be passed from a process to its children in various ways (effectively a limited form of IPC).
The bash execution environment is effectively a superset of this, with typed variables, read-only and exportable flags, arrays, functions and more. This partly explains why the output of set (bash builtin) and env or printenv differ.
When you invoke another bash shell you're starting a new process, and you lose some bash state. However, if you dot-source a script, the script is run in the same environment; or if you run a subshell via ( ) the environment is also preserved (because bash forks, preserving its complete state, rather than reinitialising using the process environment).
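A quick way to see the difference (a sketch; the array name a is arbitrary):
$ a=(1 2 3)
$ ( echo "${a[@]}" )         # subshell: the complete state is preserved
1 2 3
$ bash -c 'echo "${a[@]}"'   # new process: the array did not survive, prints an empty line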
The limitation referenced in @lesmana's answer arises because the POSIX environment is simply name=value pairs with no extra meaning, so there's no agreed way to encode or format typed variables. See below for an interesting bash quirk regarding functions, and an upcoming change in bash-4.3 (the proposed array feature was abandoned).
There are a couple of simple ways to do this using declare -p (built-in) to output some of the bash environment as a set of one or more declare statements which can be used to reconstruct the type and value of a "name". This is basic serialisation, but with rather less of the complexity some of the other answers imply. declare -p preserves array indexes, sparse arrays and quoting of troublesome values. For simple serialisation of an array you could just dump the values line by line, and use read -a myarray to restore it (works with contiguous 0-indexed arrays, since read -a automatically assigns indexes).
These methods do not require any modification of the script(s) you are passing the arrays to.
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
bash -c ". .bash_arrays; . otherscript.sh" # source both in the same environment
Variations on the above bash -c "..." form are sometimes (mis-)used in crontabs to set variables.
Alternatives include:
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
BASH_ENV=.bash_arrays otherscript.sh # non-interactive startup script
Or, as a one-liner:
BASH_ENV=<(declare -p array1 array2) otherscript.sh
The last one uses process substitution to pass the output of the declare command as an rc script. (This method only works in bash-4.0 or later: earlier versions unconditionally fstat() rc files and use the size returned to read() the file in one go; a FIFO returns a size of 0, and so won't work as hoped.)
In a non-interactive shell (i.e. shell script) the file pointed to by the BASH_ENV variable is automatically sourced. You must make sure bash is correctly invoked, possibly using a shebang to invoke "bash" explicitly, and not #!/bin/sh as bash will not honour BASH_ENV when in historical/POSIX mode.
If all your array names happen to have a common prefix you can use declare -p ${!myprefix*} to expand a list of them, instead of enumerating them.
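For example (a sketch, assuming no other variable names in the session start with "my"; bash-4.x quoting shown):
$ myfirst=(a b); mysecond=(c)
$ declare -p ${!my*}
declare -a myfirst='([0]="a" [1]="b")'
declare -a mysecond='([0]="c")'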
You probably should not attempt to export and re-import the entire bash environment using this method, some special bash variables and arrays are read-only, and there can be other side-effects when modifying special variables.
(You could also do something slightly disagreeable by serialising the array definition to an exportable variable, and using eval, but let's not encourage the use of eval ...
$ array=([1]=a [10]="b c")
$ export scalar_array=$(declare -p array)
$ bash # start a new shell
$ eval $scalar_array
$ declare -p array
declare -a array='([1]="a" [10]="b c")'
)
As referenced above, there's an interesting quirk: special support for exporting functions through the environment:
function myfoo() {
echo foo
}
with export -f or set -a to enable this behaviour, will result in this in the (process) environment, visible with printenv:
myfoo=() { echo foo
}
The variable name is functionname (or functionname() for backward compatibility) and its value is () { functionbody }.
When a subsequent bash process starts it will recreate a function from each such environment variable. If you peek into the bash-4.2 source file variables.c you'll see variables starting with () { are handled specially. (Though creating a function using this syntax with declare -f is forbidden.) Update: The "shellshock" security issue is related to this feature, contemporary systems may disable automatic function import from the environment as a mitigation.
If you keep reading though, you'll see an #if 0 (or #if ARRAY_EXPORT) guarding code that checks variables starting with ([ and ending with ), and a comment stating "Array variables may not yet be exported". The good news is that in the current development version bash-4.3rc2 the ability to export indexed arrays (not associative) is enabled. This feature is not likely to be enabled, as noted above.
We can use this to create a function which restores any array data required:
% function sharearray() {
array1=(a b c d)
}
% export -f sharearray
% bash -c 'sharearray; echo ${array1[*]}'
So, similar to the previous approach, invoke the child script with:
bash -c "sharearray; . otherscript.sh"
Or, you can conditionally invoke the sharearray function in the child script by adding at some appropriate point:
declare -F sharearray >/dev/null && sharearray
Note there is no declare -a in the sharearray function; if you do that, the array is implicitly local to the function, which is not what is wanted. bash-4.2 supports declare -g, which makes a variable declared in a function global, so declare -ga can then be used. (Since associative arrays require declare -A, you won't be able to use this method for global associative arrays prior to bash-4.2; from v4.2, declare -Ag will work as hoped.) The GNU parallel documentation has a useful variation on this method; see the discussion of --env in the man page.
Your question as phrased also indicates you may be having problems with export itself. You can export a name after you've created or modified it. "Exportable" is a flag or property of a variable; for convenience, you can also set and export in a single statement. Up to bash-4.2, export expects only a name: either a simple (scalar) variable or a function name.
Even if you could (in future) export arrays, exporting selected indexes (a slice) may not be supported (though since arrays are sparse there's no reason it could not be allowed). Though bash also supports the syntax declare -a name[0], the subscript is ignored, and "name" is simply a normal indexed array.
Jeez. I don't know why the other answers made this so complicated. Bash has nearly built-in support for this.
In the exporting script:
myArray=( ' foo"bar ' $'\n''\nbaz)' ) # an array with two nasty elements
myArray="${myArray[#]#Q}" ./importing_script.sh
(Note, the double quotes are necessary for correct handling of whitespace within array elements.)
Upon entry to importing_script.sh, the value of the myArray environment variable comprises these exact 26 bytes:
' foo"bar ' $'\n\\nbaz)'
Then the following will reconstitute the array:
eval "myArray=( ${myArray} )"
CAUTION! Do not eval like this if you cannot trust the source of the myArray environment variable. This trick exhibits the "Little Bobby Tables" vulnerability. Imagine if someone were to set the value of myArray to ) ; rm -rf / #.
The environment is just a collection of key-value pairs, both of which are character strings. A proper solution that works for any kind of array could either
Save each element in a different variable (e.g. MY_ARRAY_0=${myArray[0]}). Gets complicated because of the dynamic variable names.
Save the array in the file system (declare -p myArray >file).
Serialize all array elements into a single string.
These are covered in the other posts. If you know that your values never contain a certain character (for example |) and your keys are consecutive integers, you can simply save the array as a delimited list:
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
And restore it in the child process:
IFS='|'; myArray=($MY_ARRAY); unset IFS
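Put together as two files (the file names are illustrative):
# parent.sh
myArray=(one "two three" four)
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
./child.sh

# child.sh
IFS='|'; myArray=($MY_ARRAY); unset IFS
printf '%s\n' "${myArray[@]}"    # prints: one / two three / four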
Based on @mr.spuratic's use of BASH_ENV, here I tunnel $@ through script -f -c
script -c <command> <logfile> can be used to run a command inside another pty (and process group) but it cannot pass any structured arguments to <command>.
Instead <command> is a simple string to be an argument to the system library call.
I need to tunnel $@ of the outer bash into $@ of the bash invoked by script.
As declare -p cannot take $@, here I use the magic bash variable _ (with a dummy first array value, as that will get overwritten by bash). This saves me trampling on any important variables:
Proof of concept:
BASH_ENV=<( declare -a _=("" "$@") && declare -p _ ) bash -c 'set -- "${_[@]:1}" && echo "$@"'
"But," you say, "you are passing arguments to bash -- and indeed I am, but these are a simple string of known character. Here is use by script
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
which gives me this wrapper function in_pty:
in_pty() {
  SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
}
or this function-less wrapper as a composable string for Makefiles:
in_pty=bash -c 'SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$$@") && declare -p _ && echo '"'"'set -- "$${_[@]:1}"'"'"') script -qfc '"'"'"$$@"'"'"' /tmp/logfile' --
...
$(in_pty) test --verbose $@ $^
I was editing a different post and made a mistake. Augh. Anyway, perhaps this might help?
https://stackoverflow.com/a/11944320/1594168
Note that because the shell's array format is undocumented in bash and other shells, it is very difficult to return a shell array in a platform-independent way. You would have to check the version, and also craft a simple script that concatenates all shell arrays into a file that other processes can parse.
However, if you know the name of the array you want to take back home, then there is a way, albeit a bit dirty.
Let's say I have
MyAry[42]="whatever-stuff";
MyAry[55]="foo";
MyAry[99]="bar";
So I want to take it home
name_of_child=MyAry
take_me_home="`declare -p ${name_of_child}`";
export take_me_home="${take_me_home/#declare -a ${name_of_child}=/}"
We can see it being exported by checking from a sub-process:
echo ""|awk '{print "from awk =["ENVIRON["take_me_home"]"]"; }'
Result :
from awk =['([42]="whatever-stuff" [55]="foo" [99]="bar")']
If we absolutely must, use the env var to dump it.
env > some_tmp_file
Then, before running the other script:
# This is the magic that does it all
source some_tmp_file
As lesmana reported, you cannot export arrays. So you have to serialize them before passing through the environment. This serialization is useful in other places too, where only a string fits (su -c 'string', ssh host 'string'). The shortest way to code this is to abuse getopt:
# preserve_array(arguments). return in _RET a string that can be expanded
# later to recreate positional arguments. They can be restored with:
# eval set -- "$_RET"
preserve_array() {
  _RET=$(getopt --shell sh --options "" -- -- "$@") && _RET=${_RET# --}
}
# restore_array(name, payload)
restore_array() {
  local name="$1" payload="$2"
  eval set -- "$payload"
  eval "unset $name && $name=(\"\$@\")"
}
Use it like this:
foo=("1: &&& - *" "2: two" "3: %# abc" )
preserve_array "${foo[#]}"
foo_stuffed=${_RET}
restore_array newfoo "$foo_stuffed"
for elem in "${newfoo[@]}"; do echo "$elem"; done
## output:
# 1: &&& - *
# 2: two
# 3: %# abc
This does not address unset/sparse arrays.
You might be able to reduce the 2 'eval' calls in restore_array.
Although this question/answers are pretty old, this post seems to be the top hit when searching for "bash serialize array"
And, although the original question wasn't quite related to serializing/deserializing arrays, it does seem that the answers have devolved in that direction.
So with that ... I offer my solution:
Pros
All Core Bash Concepts
No Evals
No Sub-Commands
Cons
Functions take variable names as arguments (vs actual values)
Serializing requires having at least one character that is not present in the array
serialize_array.bash
# shellcheck shell=bash
##
# serialize_array
# Serializes a bash array to a string, with a configurable separator.
#
# $1 = source varname ( contains array to be serialized )
# $2 = target varname ( will contain the serialized string )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_array=( one "two three" four )
# serialize_array my_array my_string '|'
# declare -p my_string
#
# result:
#
# declare -- my_string="one|two three|four"
#
function serialize_array() {
  declare -n _array="${1}" _str="${2}" # _array, _str => local reference vars
  local IFS="${3:-$'\x01'}"
  # shellcheck disable=SC2034 # Reference vars assumed used by caller
  _str="${_array[*]}" # * => join on IFS
}
##
# deserialize_array
# Deserializes a string into a bash array, with a configurable separator.
#
# $1 = source varname ( contains string to be deserialized )
# $2 = target varname ( will contain the deserialized array )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_string="one|two three|four"
# deserialize_array my_string my_array '|'
# declare -p my_array
#
# result:
#
# declare -a my_array=([0]="one" [1]="two three" [2]="four")
#
function deserialize_array() {
  IFS="${3:-$'\x01'}" read -r -a "${2}" <<<"${!1}" # -a => split on IFS
}
NOTE: This is hosted as a gist here:
https://gist.github.com/TekWizely/c0259f25e18f2368c4a577495cd566cd
[edits]
Logic simplified after running through shellcheck + shfmt.
Added URL for hosted GIST
You can use this without needing to write a file; tested on Ubuntu 12.04, bash 4.2.24.
Also, your multi-line array can be exported.
cat >>exportArray.sh
function FUNCarrayRestore() {
  local l_arrayName=$1
  local l_exportedArrayName=${l_arrayName}_exportedArray
  # if set, recover its value to array
  if eval '[[ -n ${'$l_exportedArrayName'+dummy} ]]'; then
    eval $l_arrayName'='`eval 'echo $'$l_exportedArrayName` # do not put export here!
  fi
}
export -f FUNCarrayRestore
function FUNCarrayFakeExport() {
  local l_arrayName=$1
  local l_exportedArrayName=${l_arrayName}_exportedArray
  # prepare to be shown with export -p
  eval 'export '$l_arrayName
  # collect exportable array in string mode
  local l_export=`export -p \
    |grep "^declare -ax $l_arrayName=" \
    |sed 's"^declare -ax '$l_arrayName'"export '$l_exportedArrayName'"'`
  # creates exportable non array variable (at child shell)
  eval "$l_export"
}
export -f FUNCarrayFakeExport
test this example on terminal bash (works with bash 4.2.24):
source exportArray.sh
list=(a b c)
FUNCarrayFakeExport list
bash
echo ${list[@]} #empty :(
FUNCarrayRestore list
echo ${list[@]} #profit! :D
I may improve it here
PS.: if someone clears/improve/makeItRunFaster I would like to know/see, thx! :D
For arrays with values without spaces, I've been using a simple set of functions to iterate through each array element and concatenate the array:
_arrayToStr(){
  array=($@)
  arrayString=""
  for (( i=0; i<${#array[@]}; i++ )); do
    if [[ $i == 0 ]]; then
      arrayString="\"${array[i]}\""
    else
      arrayString="${arrayString} \"${array[i]}\""
    fi
  done
  export arrayString="(${arrayString})"
}
_strToArray(){
  str=$1
  array=${str//\"/}
  array=(${array//[()]/""})
  export array=${array[@]}
}
The first function will turn the array into a string by adding the opening and closing parentheses and escaping all of the double quotation marks. The second function will strip the quotation marks and the parentheses and place them into a dummy array.
In order to export the array, you would pass in all the elements of the original array:
array=(foo bar)
_arrayToStr ${array[@]}
At this point, the array has been exported into the value $arrayString. To import the array in the destination file, rename the array and do the opposite conversion:
_strToArray "$arrayName"
newArray=(${array[@]})
Many thanks to @stéphane-chazelas who pointed out all the problems with my previous attempts; this now seems to work to serialise an array to stdout or into a variable.
This technique does not shell-parse the input (unlike declare -a/declare -p) and so is safe against malicious insertion of metacharacters in the serialised text.
Note: newlines are not escaped, because read deletes the \<newline> character pair, so -d ... must instead be passed to read, and then unescaped newlines are preserved.
All this is managed in the unserialise function.
Two magic characters are used, the field separator and the record separator (so that multiple arrays can be serialized to the same stream).
These characters can be defined as FS and RS, but neither can be defined as the newline character, because an escaped newline is deleted by read.
The escape character must be \ the backslash, as that is what is used by read to avoid the character being recognized as an IFS character.
serialise will serialise "$@" to stdout, serialise_to will serialise to the variable named in $1:
serialise() {
  set -- "${@//\\/\\\\}" # \
  set -- "${@//${FS:-;}/\\${FS:-;}}" # ; - our field separator
  set -- "${@//${RS:-:}/\\${RS:-:}}" # : - our record separator
  local IFS="${FS:-;}"
  printf ${SERIALIZE_TARGET:+-v"$SERIALIZE_TARGET"} "%s" "$*${RS:-:}"
}
serialise_to() {
  SERIALIZE_TARGET="$1" serialise "${@:2}"
}
unserialise() {
  local IFS="${FS:-;}"
  if test -n "$2"
  then read -d "${RS:-:}" -a "$1" <<<"${*:2}"
  else read -d "${RS:-:}" -a "$1"
  fi
}
and unserialise with:
unserialise data # read from stdin
or
unserialise data "$serialised_data" # from args
e.g.
$ serialise "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
Now is the time;For all good men;To drink $drink;At the `party`;Party Party Party:
(without a trailing newline)
read it back:
$ serialise_to s "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
$ unserialise array "$s"
$ echo "${array[#]/#/$'\n'}"
Now is the time
For all good men
To drink $drink
At the `party`
Party Party Party
or
unserialise array # read from stdin
Bash's read respects the escape character \ (unless you pass the -r flag) to remove special meaning of characters such as for input field separation or line delimiting.
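The difference is easy to demonstrate:
$ read a b <<< 'one\ two three'; echo "[$a] [$b]"
[one two] [three]
$ read -r a b <<< 'one\ two three'; echo "[$a] [$b]"
[one\] [two three]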
If you want to serialise an array instead of a mere argument list, then just pass your array as the argument list:
serialise "${my_array[@]}"
You can use unserialise in a loop like you would read because it is just a wrapped read - but remember that the stream is not newline separated:
while unserialise array
do ...
done
I've written my own functions for this and improved the method with IFS:
Features:
Doesn't call $(...) and so doesn't spawn another bash shell process
Serializes ? and | characters into ?00 and ?01 sequences and back, so it can be used on arrays containing these characters
Handles the line return characters between serialization/deserialization like any other characters
Tested in cygwin bash 3.2.48 and Linux bash 4.3.48
function tkl_declare_global()
{
  eval "$1=\"\$2\"" # right argument does NOT evaluate
}
function tkl_declare_global_array()
{
  local IFS=$' \t\r\n' # just in case, workaround for the bug in the "[@]:i" expression under bash versions lower than 4.1
  eval "$1=(\"\${@:2}\")"
}
function tkl_serialize_array()
{
  local __array_var="$1"
  local __out_var="$2"
  [[ -z "$__array_var" ]] && return 1
  [[ -z "$__out_var" ]] && return 2
  local __array_var_size
  eval declare "__array_var_size=\${#$__array_var[@]}"
  (( ! __array_var_size )) && { tkl_declare_global $__out_var ''; return 0; }
  local __escaped_array_str=''
  local __index
  local __value
  for (( __index=0; __index < __array_var_size; __index++ )); do
    eval declare "__value=\"\${$__array_var[__index]}\""
    __value="${__value//\?/?00}"
    __value="${__value//|/?01}"
    __escaped_array_str="$__escaped_array_str${__escaped_array_str:+|}$__value"
  done
  tkl_declare_global $__out_var "$__escaped_array_str"
  return 0
}
function tkl_deserialize_array()
{
  local __serialized_array="$1"
  local __out_var="$2"
  [[ -z "$__out_var" ]] && return 1
  (( ! ${#__serialized_array} )) && { tkl_declare_global $__out_var ''; return 0; }
  local IFS='|'
  local __deserialized_array=($__serialized_array)
  tkl_declare_global_array $__out_var
  local __index=0
  local __value
  for __value in "${__deserialized_array[@]}"; do
    __value="${__value//\?01/|}"
    __value="${__value//\?00/?}"
    tkl_declare_global $__out_var[__index] "$__value"
    (( __index++ ))
  done
  return 0
}
Example:
a=($'1 \n 2' "3\"4'" 5 '|' '?')
tkl_serialize_array a b
tkl_deserialize_array "$b" c
I think you can try it this way (by sourcing your script after export):
export myArray=(Hello World)
. yourScript.sh

save and restore shell variables

I have two shell scripts that I'd like to invoke from a C program. I would like shell variables set in the first script to be visible in the second. Here's what it would look like:
a.sh:
var=blah
<save vars>
b.sh:
<restore vars>
echo $var
The best I've come up with so far is a variant on "set > /tmp/vars" to save the variables and "eval $(cat /tmp/vars)" to restore them. The "eval" chokes when it tries to restore a read-only variable, so I need to grep those out. A list of these variables is available via "declare -r". But there are some vars which don't show up in this list, yet still can't be set in eval, e.g. BASH_ARGC. So I need to grep those out, too.
At this point, my solution feels very brittle and error-prone, and I'm not sure how portable it is. Is there a better way to do this?
One way to avoid setting problematic variables is by storing only those which have changed during the execution of each script. For example,
a.sh:
set > /tmp/pre
foo=bar
set > /tmp/post
grep -v -F -f/tmp/pre /tmp/post > /tmp/vars
b.sh:
eval $(cat /tmp/vars)
echo $foo
/tmp/vars contains this:
PIPESTATUS=([0]="0")
_=
foo=bar
Evidently evaling the first two lines has no adverse effect.
If you can use a common prefix on your variable names, here is one way to do it:
# save the variables
yourprefix_width=1200
yourprefix_height=2150
yourprefix_length=1975
yourprefix_material=gravel
yourprefix_customer_array=("Acme Plumbing" "123 Main" "Anytown")
declare -p $(echo ${!yourprefix@}) > varfile
# load the variables
while read -r line
do
  if [[ $line == declare\ * ]]
  then
    eval "$line"
  fi
done < varfile
Of course, your prefix will be shorter. You could do further validation upon loading the variables to make sure that the variable names conform to your naming scheme.
The advantage of using declare is that it is more secure than just using eval by itself.
If you need to, you can filter out variables that are marked as readonly or select variables that are marked for export.
Other commands of interest (some may vary by Bash version):
export - without arguments, lists all exported variables using a declare format
declare -px - same as the previous command
declare -pr - lists readonly variables
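For example, to save only the non-readonly variables with your prefix (a sketch; the grep pattern assumes declare's usual flag format):
declare -p $(echo ${!yourprefix@}) | grep -Ev '^declare -[^ ]*r' > varfile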
If it's possible for a.sh to call b.sh, exported variables will carry over. Alternatively, have a parent script set all the necessary values and then call both. That's the most secure and reliable method I can think of.
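A minimal sketch of that pattern (file names are illustrative; plain scalars only, since arrays are not exported):
# a.sh
export var=blah
./b.sh

# b.sh
echo "$var"    # prints: blah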
Not sure if it's accepted dogma, but:
bash -c 'export foo=bar; env > xxxx'
env `cat xxxx` otherscript.sh
The otherscript will then see the environment that was printed to xxxx ...
Update:
Also note:
man execle
On how to set environment variables for another system call from within C, if you need to do that. And:
man getenv
and http://www.crasseux.com/books/ctutorial/Environment-variables.html
An alternative to saving and restoring shell state would be to make the C program and the shell program work in parallel: the C program starts the shell program, which runs a.sh, then notifies the C program (perhaps passing some information it's learned from executing a.sh), and when the C program is ready for more it tells the shell program to run b.sh. The shell program would look like this:
. a.sh
echo "information gleaned from a"
read -r arguments_for_b
. b.sh
And the general structure of the C program would be:
set up two pairs of pipes, one for C->shell and one for shell->C
fork, exec the shell wrapper
read information gleaned from a on the shell->C pipe
more processing
write arguments for b on the C->shell pipe
wait for child process to end
I went looking for something similar and couldn't find it either, so I made the two scripts below. To start, just say shellstate, then probably at least set -i and set -o emacs which this reset_shellstate doesn't do for you. I don't know a way to ask bash which variables it thinks are special.
~/bin/reset_shellstate:
#!/bin/bash
__="$PWD/shellstate_${1#_}"
trap '
declare -p >"'"$__"'"
trap >>"'"$__"'"
echo cd \""$PWD"\" >>"'"$__"'" # setting PWD did this already, but...
echo set +abefhikmnptuvxBCEHPT >>"'"$__"'"
echo set -$- >>"'"$__"'" # must be last before sed, see $s/s//2 below
sed -ri '\''
$s/s//2
s,^trap --,trap,
/^declare -[^ ]*r/d
/^declare -[^ ]* [A-Za-z0-9_]*[^A-Za-z0-9_=]/d
/^declare -[^ ]* [^= ]*_SESSION_/d
/^declare -[^ ]* BASH[=_]/d
/^declare -[^ ]* (DISPLAY|GROUPS|SHLVL|XAUTHORITY)=/d
/^declare -[^ ]* WINDOW(ID|PATH)=/d
'\'' "'"$__"'"
shopt -op >>"'"$__"'"
shopt -p >>"'"$__"'"
declare -f >>"'"$__"'"
echo "Shell state saved in '"$__"'"
' 0
unset __
~/bin/shellstate:
#!/bin/bash
shellstate=shellstate_${1#_}
test -s $shellstate || reset_shellstate $1
shift
bash --noprofile --init-file shellstate_${1#_} -is "$@"
exit $?
