Bash RegEx and Storing Commands into a Variable - arrays

In Bash I have an array names that contains the string values
Dr. Praveen Hishnadas
Dr. Vij Pamy
John Smitherson,Dr.,Service Account
John Dinkleberg,Dr.,Service Account
I want to capture only the names
Praveen Hishnadas
Vij Pamy
John Smitherson
John Dinkleberg
and store them back into the original array, overwriting their unsanitized versions.
I have the following snippet of code (note that I'm using Perl-compatible regex, -P):
for i in "${names[@]}"
do
echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1
done
Which yields the output
Dr. Praveen Hishnadas
Dr. Vij Pamy
John Smitherson
John Dinkleberg
Questions:
1) Am I using the look-around command ?: incorrectly? I'm trying to optionally match "Dr." while
not capturing it
2) How would I store the result of that echo back into the array names? I have tried setting it to
i=echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1
i=$(echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1)
i=`echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1`
but to no avail. I only started learning bash 2 days ago and I feel like my syntax is slightly off. Any help is appreciated.

A (?:...) group is non-capturing, but grep -o prints the whole match anyway, so your optional (?:Dr\.)? says "include Dr. if it's there". You probably want a negative lookahead like (?!Dr\.)\w+ \w+. I'll throw in a leading \b anchor as a bonus.
names=('Dr. Praveen Hishnadas' 'Dr. Vij Pamy' 'John Smitherson,Dr.,Service Account' 'John Dinkleberg,Dr.,Service Account')
for i in "${names[@]}"
do
grep -P '\b(?!Dr\.)\w+ \w+' -o <<<"$i" |
head -n 1
done
It doesn't matter for the examples you provided, but you should basically always quote your variables. See When to wrap quotes around a shell variable?
Maybe also google "falsehoods programmers believe about names".
To update your array, loop over the array indices and assign back into the array.
for ((i=0; i<${#names[@]}; ++i)); do
names[$i]=$(grep -P '\b(?!Dr\.)\w+ \w+|$' -o <<<"${names[i]}" | head -n 1)
done
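Alternatively, here's a pure-bash sketch of the same update: bash's [[ =~ ]] has no \w or lookahead, so this matches an optional literal "Dr. " prefix and keeps the two words after it, without spawning a grep per element.
re='^(Dr\. )?([[:alnum:]_]+ [[:alnum:]_]+)'
for ((i=0; i<${#names[@]}; ++i)); do
    [[ ${names[i]} =~ $re ]] && names[i]=${BASH_REMATCH[2]}
done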

How about something like this for the regex?
(?:^|\.\s)(\w+)\s+(\w+)
Regex Demo
(?: # Non-capturing group
^|\.\s # Start match if start of line or following dot+space sequence
)
(\w+) # Group 1 captures the first name
\s+ # Match unlimited number of spaces between first and last name (take + off to match 1 space)
(\w+) # Group 2 captures surname.
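Since grep -o prints the whole match rather than the capture groups, applying this pattern takes something that can print groups 1 and 2 separately; a minimal sketch with perl, using the array literal from the question:
names=('Dr. Praveen Hishnadas' 'Dr. Vij Pamy' 'John Smitherson,Dr.,Service Account' 'John Dinkleberg,Dr.,Service Account')
for i in "${names[@]}"; do
    perl -ne 'print "$1 $2\n" if /(?:^|\.\s)(\w+)\s+(\w+)/' <<<"$i"
done
# Praveen Hishnadas
# Vij Pamy
# John Smitherson
# John Dinkleberg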

Related

Strange behaviour with Bash, Arrays and empty spaces

Problem:
Writing a bash script, I'm trying to import a list of products that are inside a csv file into an array:
#!/bin/bash
PRODUCTS=(`csvprintf -f "/home/test/data/input.csv" -x | grep "col2" | sed 's/<col2>//g' | sed 's/<\/col2>//g' | sed -n '1!p' | sed '$ d' | sed 's/ //g'`)
echo ${PRODUCTS[@]}
In the interactive shell, the result/output looks perfect as following:
burger
special fries
juice - 300ml
When I use exactly the same commands in a bash script, even debugging with bash -x script.sh, in the part of echo ${PRODUCTS[@]}, the result of the array is all the file names located at /home/test/data/ and:
burger
special
fries
juice
-
300ml
The array is picking up the directory listing AND the newlines are messed up. This doesn't happen in the interactive shell (single command line).
Does anyone know how to fix this?
Looking at the docs for csvprintf, you're converting the csv into XML and then parsing it with regular expressions. This is generally a very bad idea.
You might want to install csvkit; then you can do
csvcut -c prod input.csv | sed 1d
Or you could use a language that comes with a CSV parser module. For example, ruby
ruby -rcsv -e 'CSV.read("input.csv", :headers=>true).each {|row| puts row["prod"]}'
Whichever method you use, read the results into a bash array with this construct
mapfile -t products < <(command to extract the product data)
Then, to print the array elements:
for prod in "${products[@]}"; do echo "$prod"; done
# or
printf "%s\n" "${products[@]}"
The quotes around the array expansion are critical. If missing, you'll see one word per line.
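A quick sketch of the difference, using a hypothetical three-element array:
products=("burger" "special fries" "juice - 300ml")
printf "%s\n" "${products[@]}"   # 3 lines, elements intact
printf "%s\n" ${products[@]}     # unquoted: 6 lines, one word per line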
Tip: don't use ALLCAPS variable names in the shell: leave those for the shell. One day you'll write PATH=something and then wonder why your script is broken.
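For example:
PATH="/home/test/data"   # shadows the real command search path
ls                       # bash: ls: command not found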

Bash help tallying/parsing substrings

I have a shell script I wrote a while back, that reads a word list (HITLIST), and recursively searches a directory for all occurrences of those words. Each line containing a "hit" is appended to file (HITOUTPUT).
I have used this script a couple of times over the last year or so, and have noticed that we often get hits from frequent offenders, and that it would be nice if we kept a count of each "super-string" that is triggered and automatically removed repeat offenders.
For instance, if my word list contains "for" I might get a hundred hits or so for "foreign" or "form" or "force". Instead of validating each of these lines, it would be nice to simply wipe them all with one "yes/no" dialog per super-string.
I was thinking the best way to do this would be to start with a word from the hitlist, and record each unique occurrence of the super-string for that word (go until you are book-ended by whitespace) and go from there.
So on to the questions ...
What would be a good and efficient way to do this? My current idea was to read in the file as a string, perform my counts, remove repeat offenders from the file input string, and output, but this is proving to be a little more painful than I first suspected.
Would any specific data type/structure be preferred for this type of
work?
I have also thought about building the super-string count as I
create the HitOutput file, but I could not figure out a clean way of
doing this either. Any thoughts or suggestions?
A sample of the file I am reading in, and my code for reading in and traversing the hitlist and creating the HitOutput file below:
# Loop through hitlist list
while read -re hitlist || [[ -n "$hitlist" ]]
do
# If first character is "#" it's a comment, or line is blank, skip
if [ "$(echo $hitlist | head -c 1)" != "#" ]; then
if [ ! -z "$hitlist" -a "$hitlist" != "" ]; then
# Parse comma delimited hitlist
IFS=',' read -ra categoryWords <<< "$hitlist"
# Search for occurrences/hits for each hit
for categoryWord in "${categoryWords[@]}"; do
# Append results to hit output string
eval 'find "$DIR" -type f -print0 | xargs -0 grep -HniI "$categoryWord"' >> HITOUTPUT
done
fi
fi
done < "$HITLIST"
src/fakescript.sh:1:Never going to win the war you mother!
src/open_source_licenses.txt:6147:May you share freely, never taking more than you give.
src/open_source_licenses.txt:8764:May you share freely, never taking more than you give.
src/open_source_licenses.txt:21711:No Third Party Beneficiaries. You agree that, except as otherwise expressly provided in this TOS, there shall be no third party beneficiaries to this Agreement. Waiver and Severability of Terms. The failure of UBM LLC to exercise or enforce any right or provision of the TOS shall
not constitute a waiver of such right or provision. If any provision of the TOS is found by a court of competent jurisdiction to be invalid, the parties nevertheless agree that the court should endeavor to give effect to the parties' intentions as reflected in the provision, and the other provisions of the TOS remain in full force and effect.
src/fakescript.sh:1:Never going to win the war you mother!
An example of my hitlist file:
# Comment out any category word lines that you do not want processed (the comma delimited lines)
# -----------------
# MEH
never,going,to give,you up
# ----------------
# blah
word to,your,mother
Let's divide this problem into two parts. First, we will update the hitlist interactively as required by your customer. Second, we will find all matches to the updated hitlist.
1. Updating the hitlist
This searches for all words in files under directory dir that contain any word on the hitlist:
#!/bin/bash
grep -Erowhf <(sed -E 's/.*/([[:alpha:]]+&[[:alpha:]]*|[[:alpha:]]*&[[:alpha:]]+)/' hitlist) dir |
sort |
uniq -c |
while read n word
do
read -u 2 -p "$word occurs $n times. Include (y/n)? " a
[ "$a" = y ] && echo "$word" >>hitlist
done
This script runs interactively. As an example, suppose that dir contains these two files:
$ cat dir/file1.txt
for all foreign or catapult also cat.
The catapult hit the catermaran.
The form of a foreign formula
$ cat dir/file2.txt
dog and cat and formula, formula, formula
And hitlist contains two words:
$ cat hitlist
for
cat
If we then run our script, it looks like:
$ bash script.sh
catapult occurs 2 times. Include (y/n)? y
catermaran occurs 1 times. Include (y/n)? n
foreign occurs 2 times. Include (y/n)? y
form occurs 1 times. Include (y/n)? n
formula occurs 4 times. Include (y/n)? n
After the script is run, the file hitlist is updated with all the words that you want to include. We are now ready to proceed to the next step:
2. Finding matches to the updated hitlist
To read each word from a "hitlist" and search recursively for matches while ignoring foreign even if the hitlist contains for, try:
grep -wrFf ../hitlist dir
-w tells grep to look only for full-words. Thus foreign will be ignored.
-r tells grep to search recursively.
-F tells grep to treat the hitlist entries as fixed strings, not regular expressions. (optional)
-f ../hitlist tells grep to read words from the file ../hitlist.
Following on with the example above, we would have:
$ grep -wrFf ./hitlist dir
dir/file2.txt:dog and cat and formula, formula, formula
dir/file1.txt:for all foreign or catapult also cat.
dir/file1.txt:The catapult hit the catermaran.
dir/file1.txt:The form of a foreign formula
If we don't want the file names displayed, use the -h option:
$ grep -hwrFf ./hitlist dir
dog and cat and formula, formula, formula
for all foreign or catapult also cat.
The catapult hit the catermaran.
The form of a foreign formula
Automatic update for counts 10 or less
#!/bin/bash
grep -Erowhf <(sed -E 's/.*/([[:alpha:]]+&[[:alpha:]]*|[[:alpha:]]*&[[:alpha:]]+)/' hitlist) dir |
sort |
uniq -c |
while read n word
do
a=y
[ "$n" -gt 10 ] && read -u 2 -p "$word occurs $n times. Include (y/n)? " a
[ "$a" = y ] && echo "$word" >>hitlist
done
Reformatting the customer's hitlist
I see that your customer's hitlist has extra formatting, including comments, empty lines, and duplicated words. For example:
$ cat hitlist.source
# MEH
never,going,to give,you up
# ----------------
# blah
word to,your,mother
To convert that to format useful here, try:
$ sed -E 's/#.*//; s/[[:space:],]+/\n/g; s/\n\n+/\n/g; /^$/d' hitlist.source | grep . | sort -u >hitlist
$ cat hitlist
give
going
mother
never
to
up
word
you
your

Count ip repeat in log from bash

In bash, how can I tally the repetitions of an IP within a log, filtered by a specific search?
By example:
#!/bin/bash
# Log line: [Sat Jul 04 21:55:35 2015] [error] [client 192.168.1.39] Access denied with status code 403.
grep "status\scode\s403" /var/log/httpd/custom_error_log | while read line ; do
pattern='^\[.*?\]\s\[error\]\s\[client\s(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\].*?403'
[[ $line =~ $pattern ]]
res_remote_addr="${BASH_REMATCH[1]}.${BASH_REMATCH[2]}.${BASH_REMATCH[3]}.${BASH_REMATCH[4]}"
echo "Remote Addr: $res_remote_addr"
done
From the end results I need to know how many times each IP produced the 403 message, if possible sorted highest to lowest.
By example output:
200.200.200.200 50 times.
200.200.200.201 40 times.
200.200.200.202 30 times.
... etc ...
We need this to create an HTML report of events from a monthly Apache log (something like awstats).
There are better ways. The following is my proposal, which should be more readable and easier to maintain:
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}' log_file | sort | uniq -c | sort -k1,1 -r -n
The output will be in the form:
count1 ip1
count2 ip2
update:
filter only 403:
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(?=.*403)' log_file | sort | uniq -c | sort -k1,1 -r -n
Notice that a lookahead suffices here.
If the log file is in the format mentioned in the question, the best approach is to use awk to filter out the needed status code and output only the IP. Then use the uniq command to count each occurrence:
awk '/code 403/ {gsub(/\]/, "", $8); print $8}' error.log | sort | uniq -c | sort -n
In awk, we filter by the regexp /code 403/, and for matching lines we print the 8th value (values are separated by whitespace), stripping the trailing ']' left over from [client 192.168.1.39] so that only the IP remains.
Then we need to sort the output so that the same IPs are one after another - this is a requirement of the uniq program.
uniq -c prints each unique line from the input only once, preceded by the number of occurrences. Finally we sort this list numerically to get the IPs sorted by count.
Sample output (first is no. of occurrences, second is IP):
1 1.1.1.1
10 2.2.2.2
12 3.3.3.3
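To get exactly the "IP N times." report format requested, sorted highest to lowest, here is a sketch building on the same pipeline:
awk '/code 403/ {gsub(/\]/, "", $8); print $8}' error.log | sort | uniq -c | sort -rn |
while read -r count ip; do
    printf '%s %s times.\n' "$ip" "$count"
done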

Having issues using IFS to cut a string into an array. BASH

I have tried everything I can think of to cut this into separate elements for my array, but I am struggling...
Here is what I am trying to do..
(This command just rips out the IP addresses on the first element returned )
$ IFS=$"\n"
$ aaa=( $(netstat -nr | grep -v '^0.0.0.0' | grep -v 'eth' | grep "UGH" | sed 's/ .*//') )
$ echo "${#aaa[#]}"
1
$ echo "${aaa[0]}"
4.4.4.4
5.5.5.5
This shows more than one value when I am looking for the array to separate 4.4.4.4 into ${aaa[0]} and 5.5.5.5 into ${aaa[1]}
I have tried:
IFS="\n"
IFS=$"\n"
IFS=" "
Very confused, as I have been working with arrays a lot recently and have never run into this particular issue.
Can someone tell me what I am doing wrong?
There is a very good example on how to use IFS + read -a to split a string into an array on this other stackoverflow page
How does splitting string to array by 'read' with IFS word separator in bash generated extra space element?
netstat is deprecated, replaced by ss, so I'm not sure how to reproduce your exact problem
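For what it's worth, the likely culprit is the quoting: IFS=$"\n" (and IFS="\n") set IFS to the two characters \ and n, not to a newline, so the command substitution is never split on line boundaries; ANSI-C quoting, IFS=$'\n', is what yields an actual newline. Alternatively, mapfile (bash 4+) splits on newlines without touching IFS at all; a sketch using the pipeline from the question:
mapfile -t aaa < <(netstat -nr | grep -v '^0.0.0.0' | grep -v 'eth' | grep "UGH" | sed 's/ .*//')
echo "${#aaa[@]}"   # 2
echo "${aaa[0]}"    # 4.4.4.4
echo "${aaa[1]}"    # 5.5.5.5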

Multi-dimensional arrays in Bash

I am planning a script to manage some pieces of my Linux systems and am at the point of deciding if I want to use bash or python.
I would prefer to do this as a Bash script simply because the commands are easier, but the real deciding factor is configuration. I need to be able to store a multi-dimensional array in the configuration file to tell the script what to do with itself. Storing simple key=value pairs in config files is easy enough with bash, but the only way I can think of to do a multi-dimensional array is a two layer parsing engine, something like
array=&d1|v1;v2;v3&d2|v1;v2;v3
but the marshall/unmarshall code could get to be a bear and it's far from user friendly for the next poor sap that has to administer this. If I can't do this easily in bash, I will simply write the configs to an xml file and write the script in python.
Is there an easy way to do this in bash?
thanks everyone.
Bash does not support multidimensional arrays, nor hashes, and it seems that you want a hash whose values are arrays. This solution is not very beautiful; a solution with an xml file should be better:
array=('d1=(v1 v2 v3)' 'd2=(v1 v2 v3)')
for elt in "${array[@]}";do eval $elt;done
echo "d1 ${#d1[@]} ${d1[@]}"
echo "d2 ${#d2[@]} ${d2[@]}"
EDIT: this answer is quite old; since bash 4 supports hash tables, see also this answer for a solution without eval.
Bash doesn't have multi-dimensional arrays. But you can simulate a somewhat similar effect with associative arrays. The following is an example of an associative array used as a multi-dimensional array:
declare -A arr
arr[0,0]=0
arr[0,1]=1
arr[1,0]=2
arr[1,1]=3
echo "${arr[0,0]} ${arr[0,1]}" # will print 0 1
If you don't declare the array as associative (with -A), the above won't work. For example, if you omit the declare -A arr line, the echo will print 2 3 instead of 0 1, because 0,0, 1,0 and such will be taken as arithmetic expressions and evaluated to 0 (the value to the right of the comma operator).
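To see that arithmetic evaluation in action, here is a sketch of the pitfall with a plain indexed array:
unset arr                        # plain indexed array this time, no declare -A
arr[0,0]=0                       # comma operator: subscript evaluates to 0
arr[0,1]=1                       # subscript evaluates to 1
arr[1,0]=2                       # subscript 0 again - overwrites arr[0]
arr[1,1]=3                       # subscript 1 again - overwrites arr[1]
echo "${arr[0,0]} ${arr[0,1]}"   # prints 2 3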
This works thanks to 1. "indirect expansion" with !, which adds one layer of indirection, and 2. "substring expansion", which behaves differently with arrays and can be used to "slice" them as described in https://stackoverflow.com/a/1336245/317623
# Define each array and then add it to the main one
SUB_0=("name0" "value 0")
SUB_1=("name1" "value;1")
MAIN_ARRAY=(
SUB_0[@]
SUB_1[@]
)
# Loop and print it. Using offset and length to extract values
COUNT=${#MAIN_ARRAY[@]}
for ((i=0; i<$COUNT; i++))
do
NAME=${!MAIN_ARRAY[i]:0:1}
VALUE=${!MAIN_ARRAY[i]:1:1}
echo "NAME ${NAME}"
echo "VALUE ${VALUE}"
done
It's based on this answer here
If you want to use a bash script and keep it easy to read, I recommend putting the data in structured JSON and then using the lightweight tool jq in your bash commands to iterate through the array. For example, with the following dataset:
[
{"specialId":"123",
"specialName":"First"},
{"specialId":"456",
"specialName":"Second"},
{"specialId":"789",
"specialName":"Third"}
]
You can iterate through this data with a bash script and jq like this:
function loopOverArray(){
jq -c '.[]' testing.json | while read i; do
# Do stuff here
echo "$i"
done
}
loopOverArray
Outputs:
{"specialId":"123","specialName":"First"}
{"specialId":"456","specialName":"Second"}
{"specialId":"789","specialName":"Third"}
Independent of the shell being used (sh, ksh, bash, ...) the following approach works pretty well for n-dimensional arrays (the sample covers a 2-dimensional array).
In the sample the line-separator (1st dimension) is the space character. For introducing a field separator (2nd dimension) the standard unix tool tr is used. Additional separators for additional dimensions can be used in the same way.
Of course the performance of this approach is not very good, but if performance is not a criterion, this approach is quite generic and can solve many problems:
array2d="1.1:1.2:1.3 2.1:2.2 3.1:3.2:3.3:3.4"
function process2ndDimension {
for dimension2 in $*
do
echo -n $dimension2 " "
done
echo
}
function process1stDimension {
for dimension1 in $array2d
do
process2ndDimension `echo $dimension1 | tr : " "`
done
}
process1stDimension
The output of that sample looks like this:
1.1 1.2 1.3
2.1 2.2
3.1 3.2 3.3 3.4
After a lot of trial and error, I find that the best, clearest and easiest multidimensional "array" in bash is a regular var. Yep.
Advantages: You don't have to loop through a big array, you can just echo "$var" and use grep/awk/sed. It's easy and clear and you can have as many columns as you like.
Example:
$ var=$(echo -e 'kris hansen oslo\nthomas johnson peru\nbibi abu johnsonville\njohnny lipp peru')
$ echo "$var"
kris hansen oslo
thomas johnson peru
bibi abu johnsonville
johnny lipp peru
If you want to find everyone in peru
$ echo "$var" | grep peru
thomas johnson peru
johnny lipp peru
Only grep(sed) in the third field
$ echo "$var" | sed -n -E '/(.+) (.+) peru/p'
thomas johnson peru
johnny lipp peru
If you only want x field
$ echo "$var" | awk '{print $2}'
hansen
johnson
abu
lipp
Everyone in peru that's called thomas and just return his lastname
$ echo "$var" |grep peru|grep thomas|awk '{print $2}'
johnson
Any query you can think of... supereasy.
To change an item:
$ var=$(echo "$var"|sed "s/thomas/pete/")
To delete a row that contains "x"
$ var=$(echo "$var"|sed "/thomas/d")
To change another field in the same row based on a value from another item
$ var=$(echo "$var"|sed -E "s/(thomas) (.+) (.+)/\1 test \3/")
$ echo "$var"
kris hansen oslo
thomas test peru
bibi abu johnsonville
johnny lipp peru
Of course looping works too if you want to do that
$ for i in "$var"; do echo "$i"; done
kris hansen oslo
thomas johnson peru
bibi abu johnsonville
johnny lipp peru
The only gotcha I've found with this is that you must always quote the var (in the example, both var and i) or things will look like this
$ for i in "$var"; do echo $i; done
kris hansen oslo thomas johnson peru bibi abu johnsonville johnny lipp peru
and someone will undoubtedly say it won't work if you have spaces in your input; however, that can be fixed by using another delimiter in your input, e.g. (using a utf8 char now to emphasize that you can choose something your input won't contain, but you can choose whatever, of course):
$ var=$(echo -e 'field one☥field two hello☥field three yes moin\nfield 1☥field 2☥field 3 dsdds aq')
$ for i in "$var"; do echo "$i"; done
field one☥field two hello☥field three yes moin
field 1☥field 2☥field 3 dsdds aq
$ echo "$var" | awk -F '☥' '{print $3}'
field three yes moin
field 3 dsdds aq
$ var=$(echo "$var"|sed -E "s/(field one)☥(.+)☥(.+)/\1☥test☥\3/")
$ echo "$var"
field one☥test☥field three yes moin
field 1☥field 2☥field 3 dsdds aq
If you want to store newlines in your input, you could convert the newline to something else before input and convert it back again on output(or don't use bash...). Enjoy!
I am posting the following because it is a very simple and clear way to mimic (at least to some extent) the behavior of a two-dimensional array in Bash. It uses a here-file (see the Bash manual) and read (a Bash builtin command):
## Store the "two-dimensional data" in a file ($$ is just the process ID of the shell, to make sure the filename is unique)
cat > physicists.$$ <<EOF
Wolfgang Pauli 1900
Werner Heisenberg 1901
Albert Einstein 1879
Niels Bohr 1885
EOF
nbPhysicists=$(wc -l physicists.$$ | cut -sf 1 -d ' ') # Number of lines of the here-file specifying the physicists.
## Extract the needed data
declare -a person # Create an indexed array (necessary for the read command).
while read -ra person; do
firstName=${person[0]}
familyName=${person[1]}
birthYear=${person[2]}
echo "Physicist ${firstName} ${familyName} was born in ${birthYear}"
# Do whatever you need with data
done < physicists.$$
## Remove the temporary file
rm physicists.$$
Output:
Physicist Wolfgang Pauli was born in 1900
Physicist Werner Heisenberg was born in 1901
Physicist Albert Einstein was born in 1879
Physicist Niels Bohr was born in 1885
The way it works:
The lines in the temporary file created play the role of one-dimensional vectors, where the blank spaces (or whatever separation character you choose; see the description of the read command in the Bash manual) separate the elements of these vectors.
Then, using the read command with its -a option, we loop over each line of the file (until we reach end of file). For each line, we can assign the desired fields (= words) to an array, which we declared just before the loop. The -r option to the read command prevents backslashes from acting as escape characters, in case we typed backslashes in the here-document physicists.$$.
In conclusion a file is created as a 2D-array, and its elements are extracted using a loop over each line, and using the ability of the read command to assign words to the elements of an (indexed) array.
Slight improvement:
In the above code, the file physicists.$$ is given as input to the while loop, so that it is in fact passed to the read command. However, I found that this causes problems when I have another command asking for input inside the while loop. For example, the select command waits for standard input, and if placed inside the while loop, it will take input from physicists.$$, instead of prompting in the command-line for user input.
To correct this, I use the -u option of read, which allows to read from a file descriptor. We only have to create a file descriptor (with the exec command) corresponding to physicists.$$ and to give it to the -u option of read, as in the following code:
## Store the "two-dimensional data" in a file ($$ is just the process ID of the shell, to make sure the filename is unique)
cat > physicists.$$ <<EOF
Wolfgang Pauli 1900
Werner Heisenberg 1901
Albert Einstein 1879
Niels Bohr 1885
EOF
nbPhysicists=$(wc -l physicists.$$ | cut -sf 1 -d ' ') # Number of lines of the here-file specifying the physicists.
exec {id_file}<./physicists.$$ # Create a file descriptor stored in 'id_file'.
## Extract the needed data
declare -a person # Create an indexed array (necessary for the read command).
while read -ra person -u "${id_file}"; do
firstName=${person[0]}
familyName=${person[1]}
birthYear=${person[2]}
echo "Physicist ${firstName} ${familyName} was born in ${birthYear}"
# Do whatever you need with data
done
## Close the file descriptor
exec {id_file}<&-
## Remove the temporary file
rm physicists.$$
Notice that the file descriptor is closed at the end.
Bash does not support multidimensional arrays, but we can simulate them using an associative array, where the index is the key used to retrieve the value. Associative arrays are available in bash version 4.
#!/bin/bash
declare -A arr2d
rows=3
columns=2
for ((i=0; i<rows; i++)); do
    for ((j=0; j<columns; j++)); do
        arr2d[$i,$j]=$i
    done
done
for ((i=0; i<rows; i++)); do
    for ((j=0; j<columns; j++)); do
        echo ${arr2d[$i,$j]}
    done
done
Expanding on Paul's answer - here's my version of working with associative sub-arrays in bash:
declare -A SUB_1=(["name1key"]="name1val" ["name2key"]="name2val")
declare -A SUB_2=(["name3key"]="name3val" ["name4key"]="name4val")
STRING_1="string1val"
STRING_2="string2val"
MAIN_ARRAY=(
"${SUB_1[*]}"
"${SUB_2[*]}"
"${STRING_1}"
"${STRING_2}"
)
echo "COUNT: " ${#MAIN_ARRAY[#]}
for key in ${!MAIN_ARRAY[#]}; do
IFS=' ' read -a val <<< ${MAIN_ARRAY[$key]}
echo "VALUE: " ${val[#]}
if [[ ${#val[#]} -gt 1 ]]; then
for subkey in ${!val[#]}; do
subval=${val[$subkey]}
echo "SUBVALUE: " ${subval}
done
fi
done
It works with mixed values in the main array - strings/arrays/assoc. arrays
The key here is to wrap the subarrays in quotes and use * instead of @ when storing a subarray inside the main array, so it gets stored as a single, space-separated string: "${SUB_1[*]}"
Then it makes it easy to parse an array out of that when looping through values with IFS=' ' read -a val <<< ${MAIN_ARRAY[$key]}
The code above outputs:
COUNT: 4
VALUE: name1val name2val
SUBVALUE: name1val
SUBVALUE: name2val
VALUE: name4val name3val
SUBVALUE: name4val
SUBVALUE: name3val
VALUE: string1val
VALUE: string2val
Lots of answers found here for creating multidimensional arrays in bash.
And without exception, all are obtuse and difficult to use.
If MD arrays are a hard requirement, it is time to make a decision:
Use a language that supports MD arrays
My preference is Perl. Most would probably choose Python.
Either works.
Store the data elsewhere
JSON and jq have already been suggested. XML has also been suggested, though for your use JSON and jq would likely be simpler.
It would seem though that Bash may not be the best choice for what you need to do.
Sometimes the correct question is not "How do I do X in tool Y?", but rather "Which tool would be best to do X?"
I do this using associative arrays (available since bash 4), setting IFS to a manually chosen value.
The purpose of this approach is to have arrays as values of associative array keys.
In order to set IFS back to default just unset it.
unset IFS
This is an example:
#!/bin/bash
set -euo pipefail
# used as value in associative array
test=(
"x3:x4:x5"
)
# associative array
declare -A wow=(
["1"]=$test
["2"]=$test
)
echo "default IFS"
for w in ${wow[@]}; do
echo " $w"
done
IFS=:
echo "IFS=:"
for w in ${wow[@]}; do
for t in $w; do
echo " $t"
done
done
echo -e "\n or\n"
for w in ${!wow[@]}
do
echo " $w"
for t in ${wow[$w]}
do
echo " $t"
done
done
unset IFS
unset w
unset t
unset wow
unset test
The output of the script above is:
default IFS
x3:x4:x5
x3:x4:x5
IFS=:
x3
x4
x5
x3
x4
x5
or
1
x3
x4
x5
2
x3
x4
x5
I've got a pretty simple yet smart workaround:
Just define the array with variables in its name. For example:
for (( i=0 ; i<$(($maxvalue + 1)) ; i++ ))
do
for (( j=0 ; j<$(($maxargument + 1)) ; j++ ))
do
declare "array$i[$j]=value"   # replace "value" with your rule
done
done
Don't know whether this helps since it's not exactly what you asked for, but it works for me. (The same could be achieved just with variables without the array)
echo "Enter no of terms"
read count
for i in $(seq 1 $count)
do
t=` expr $i - 1 `
for j in $(seq $t -1 0)
do
echo -n " "
done
j=` expr $count + 1 `
x=` expr $j - $i `
for k in $(seq 1 $x)
do
echo -n "* "
done
echo ""
done
