Multi-dimensional arrays in Bash

I am planning a script to manage some pieces of my Linux systems and am at the point of deciding if I want to use bash or python.
I would prefer to do this as a Bash script simply because the commands are easier, but the real deciding factor is configuration. I need to be able to store a multi-dimensional array in the configuration file to tell the script what to do with itself. Storing simple key=value pairs in config files is easy enough with bash, but the only way I can think of to do a multi-dimensional array is a two layer parsing engine, something like
array=&d1|v1;v2;v3&d2|v1;v2;v3
but the marshal/unmarshal code could get to be a bear and it's far from user-friendly for the next poor sap that has to administer this. If I can't do this easily in bash I will simply write the configs to an XML file and write the script in python.
Is there an easy way to do this in bash?
thanks everyone.

Bash does not support multidimensional arrays, nor hashes, and it seems that you want a hash whose values are arrays. This solution is not very beautiful; a solution with an XML file would be better:
array=('d1=(v1 v2 v3)' 'd2=(v1 v2 v3)')
for elt in "${array[@]}"; do eval $elt; done
echo "d1 ${#d1[@]} ${d1[@]}"
echo "d2 ${#d2[@]} ${d2[@]}"
EDIT: this answer is quite old. Since bash 4, Bash supports hash tables; see also this answer for a solution without eval.
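For reference, a minimal bash 4+ sketch of that eval-free idea, using a single associative array whose values are space-separated lists (the key names d1/d2 and the helper loop are only illustrative):
declare -A config=( [d1]="v1 v2 v3" [d2]="v1 v2 v3" )
for key in "${!config[@]}"; do
  read -r -a values <<< "${config[$key]}"   # split one value back into an array
  echo "$key has ${#values[@]} values: ${values[*]}"
done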

Bash doesn't have multi-dimensional arrays. But you can simulate a somewhat similar effect with associative arrays. The following is an example of an associative array pretending to be a multi-dimensional array:
declare -A arr
arr[0,0]=0
arr[0,1]=1
arr[1,0]=2
arr[1,1]=3
echo "${arr[0,0]} ${arr[0,1]}" # will print 0 1
If you don't declare the array as associative (with -A), the above won't work. For example, if you omit the declare -A arr line, the echo will print 2 3 instead of 0 1, because subscripts like 0,0 and 1,0 will be taken as arithmetic expressions and evaluated to the value to the right of the comma operator (so 0,0 and 1,0 both become index 0).
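To make the pitfall concrete, here is a quick sketch of what happens without the declare -A line (the variable name is arbitrary):
unset arr                    # start fresh, without declare -A this time
arr[0,0]=0   # subscript 0,0 is arithmetic and evaluates to 0
arr[0,1]=1   # 0,1 evaluates to 1
arr[1,0]=2   # 1,0 evaluates to 0 and overwrites arr[0]
arr[1,1]=3   # 1,1 evaluates to 1 and overwrites arr[1]
echo "${arr[0]} ${arr[1]}"   # prints: 2 3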

This works thanks to (1) "indirect expansion" with !, which adds one layer of indirection, and (2) "substring expansion", which behaves differently with arrays and can be used to "slice" them, as described at https://stackoverflow.com/a/1336245/317623
# Define each array and then add it to the main one
SUB_0=("name0" "value 0")
SUB_1=("name1" "value;1")
MAIN_ARRAY=(
  SUB_0[@]
  SUB_1[@]
)
# Loop and print it. Using offset and length to extract values
COUNT=${#MAIN_ARRAY[@]}
for ((i=0; i<$COUNT; i++))
do
  NAME=${!MAIN_ARRAY[i]:0:1}
  VALUE=${!MAIN_ARRAY[i]:1:1}
  echo "NAME ${NAME}"
  echo "VALUE ${VALUE}"
done
It's based on this answer here.
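For reference, with the sample data above the loop should print something like:
NAME name0
VALUE value 0
NAME name1
VALUE value;1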

If you want to use a bash script and keep it easy to read, I recommend putting the data in structured JSON and then using the lightweight tool jq in your bash command to iterate through the array. For example, with the following dataset:
[
{"specialId":"123",
"specialName":"First"},
{"specialId":"456",
"specialName":"Second"},
{"specialId":"789",
"specialName":"Third"}
]
You can iterate through this data with a bash script and jq like this:
function loopOverArray(){
  jq -c '.[]' testing.json | while read -r i; do
    # Do stuff here
    echo "$i"
  done
}
loopOverArray
Outputs:
{"specialId":"123","specialName":"First"}
{"specialId":"456","specialName":"Second"}
{"specialId":"789","specialName":"Third"}

Independent of the shell being used (sh, ksh, bash, ...), the following approach works pretty well for n-dimensional arrays (the sample covers a 2-dimensional array).
In the sample, the line separator (1st dimension) is the space character. To introduce a field separator (2nd dimension), the standard Unix tool tr is used. Additional separators for additional dimensions can be used in the same way.
Of course the performance of this approach is not very good, but if performance is not a criterion, this approach is quite generic and can solve many problems:
array2d="1.1:1.2:1.3 2.1:2.2 3.1:3.2:3.3:3.4"
function process2ndDimension {
  for dimension2 in $*
  do
    echo -n $dimension2 " "
  done
  echo
}
function process1stDimension {
  for dimension1 in $array2d
  do
    process2ndDimension `echo $dimension1 | tr : " "`
  done
}
process1stDimension
The output of that sample looks like this:
1.1 1.2 1.3
2.1 2.2
3.1 3.2 3.3 3.4
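If you also need random access to one cell instead of looping over everything, a small helper in the same spirit might look like this (a sketch; get_cell and its 1-based row/column arguments are made up for illustration):
get_cell() {
  # pick row $1, then column $2, from the same space/colon structure
  row=$(echo "$array2d" | tr ' ' '\n' | sed -n "${1}p")
  echo "$row" | tr ':' '\n' | sed -n "${2}p"
}
get_cell 3 2   # should print 3.2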

After a lot of trial and error I actually found that the best, clearest and easiest multidimensional array in bash is to use a regular var. Yep.
Advantages: You don't have to loop through a big array; you can just echo "$var" and use grep/awk/sed. It's easy and clear, and you can have as many columns as you like.
Example:
$ var=$(echo -e 'kris hansen oslo\nthomas johnson peru\nbibi abu johnsonville\njohnny lipp peru')
$ echo "$var"
kris hansen oslo
thomas johnson peru
bibi abu johnsonville
johnny lipp peru
If you want to find everyone in peru
$ echo "$var" | grep peru
thomas johnson peru
johnny lipp peru
Only grep(sed) in the third field
$ echo "$var" | sed -n -E '/(.+) (.+) peru/p'
thomas johnson peru
johnny lipp peru
If you only want field x
$ echo "$var" | awk '{print $2}'
hansen
johnson
abu
lipp
Everyone in peru who's called thomas, returning just the last name
$ echo "$var" |grep peru|grep thomas|awk '{print $2}'
johnson
Any query you can think of... supereasy.
To change an item:
$ var=$(echo "$var"|sed "s/thomas/pete/")
To delete a row that contains a given string (here "thomas")
$ var=$(echo "$var"|sed "/thomas/d")
To change another field in the same row based on a value from another item
$ var=$(echo "$var"|sed -E "s/(thomas) (.+) (.+)/\1 test \3/")
$ echo "$var"
kris hansen oslo
thomas test peru
bibi abu johnsonville
johnny lipp peru
Of course looping works too if you want to do that
$ for i in "$var"; do echo "$i"; done
kris hansen oslo
thomas johnson peru
bibi abu johnsonville
johnny lipp peru
The only gotcha I've found with this is that you must always quote the var (in the example, both var and i) or things will look like this
$ for i in "$var"; do echo $i; done
kris hansen oslo thomas johnson peru bibi abu johnsonville johnny lipp peru
and someone will undoubtedly say it won't work if you have spaces in your input; however, that can be fixed by using another delimiter in your input, e.g. (using a UTF-8 char here to emphasize that you can choose something your input won't contain, but you can choose whatever you like, of course):
$ var=$(echo -e 'field one☥field two hello☥field three yes moin\nfield 1☥field 2☥field 3 dsdds aq')
$ for i in "$var"; do echo "$i"; done
field one☥field two hello☥field three yes moin
field 1☥field 2☥field 3 dsdds aq
$ echo "$var" | awk -F '☥' '{print $3}'
field three yes moin
field 3 dsdds aq
$ var=$(echo "$var"|sed -E "s/(field one)☥(.+)☥(.+)/\1☥test☥\3/")
$ echo "$var"
field one☥test☥field three yes moin
field 1☥field 2☥field 3 dsdds aq
If you want to store newlines in your input, you could convert the newline to something else before input and convert it back again on output (or don't use bash...). Enjoy!
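As a rough sketch of that newline trick (the placeholder character is arbitrary; it just has to be something your data never contains):
row=$'line one\nline two'
encoded=${row//$'\n'/␤}          # encode embedded newlines before storing the row
var="$var"$'\n'"$encoded oslo"
echo "$var" | tail -n 1           # the row is still a single line
decoded=${encoded//␤/$'\n'}       # restore the newlines when reading it back
echo "$decoded"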

I am posting the following because it is a very simple and clear way to mimic (at least to some extent) the behavior of a two-dimensional array in Bash. It uses a here-document (see the Bash manual) and read (a Bash builtin command):
## Store the "two-dimensional data" in a file ($$ is just the process ID of the shell, to make sure the filename is unique)
cat > physicists.$$ <<EOF
Wolfgang Pauli 1900
Werner Heisenberg 1901
Albert Einstein 1879
Niels Bohr 1885
EOF
nbPhysicists=$(wc -l physicists.$$ | cut -sf 1 -d ' ') # Number of lines of the here-file specifying the physicists.
## Extract the needed data
declare -a person # Create an indexed array (necessary for the read command).
while read -ra person; do
firstName=${person[0]}
familyName=${person[1]}
birthYear=${person[2]}
echo "Physicist ${firstName} ${familyName} was born in ${birthYear}"
# Do whatever you need with data
done < physicists.$$
## Remove the temporary file
rm physicists.$$
Output:
Physicist Wolfgang Pauli was born in 1900
Physicist Werner Heisenberg was born in 1901
Physicist Albert Einstein was born in 1879
Physicist Niels Bohr was born in 1885
The way it works:
The lines in the temporary file created play the role of one-dimensional vectors, where the blank spaces (or whatever separation character you choose; see the description of the read command in the Bash manual) separate the elements of these vectors.
Then, using the read command with its -a option, we loop over each line of the file (until we reach end of file). For each line, we can assign the desired fields (= words) to an array, which we declared just before the loop. The -r option to the read command prevents backslashes from acting as escape characters, in case we typed backslashes in the here-document physicists.$$.
In conclusion, a file is created to act as a 2D array, and its elements are extracted by looping over each line, using the ability of the read command to assign words to the elements of an (indexed) array.
Slight improvement:
In the above code, the file physicists.$$ is given as input to the while loop, so that it is in fact passed to the read command. However, I found that this causes problems when I have another command asking for input inside the while loop. For example, the select command waits for standard input, and if placed inside the while loop, it will take input from physicists.$$ instead of prompting on the command line for user input.
To correct this, I use the -u option of read, which allows reading from a file descriptor. We only have to create a file descriptor (with the exec command) corresponding to physicists.$$ and give it to the -u option of read, as in the following code:
## Store the "two-dimensional data" in a file ($$ is just the process ID of the shell, to make sure the filename is unique)
cat > physicists.$$ <<EOF
Wolfgang Pauli 1900
Werner Heisenberg 1901
Albert Einstein 1879
Niels Bohr 1885
EOF
nbPhysicists=$(wc -l physicists.$$ | cut -sf 1 -d ' ') # Number of lines of the here-file specifying the physicists.
exec {id_file}<./physicists.$$ # Create a file descriptor stored in 'id_file'.
## Extract the needed data
declare -a person # Create an indexed array (necessary for the read command).
while read -ra person -u "${id_file}"; do
firstName=${person[0]}
familyName=${person[1]}
birthYear=${person[2]}
echo "Physicist ${firstName} ${familyName} was born in ${birthYear}"
# Do whatever you need with data
done
## Close the file descriptor
exec {id_file}<&-
## Remove the temporary file
rm physicists.$$
Notice that the file descriptor is closed at the end.

Bash does not support multidimensional arrays, but we can emulate one using an associative array, where the index serves as the key to retrieve the value. Associative arrays are available in bash version 4.
#!/bin/bash
declare -A arr2d
rows=3
columns=2
for ((i=0; i<rows; i++)); do
  for ((j=0; j<columns; j++)); do
    arr2d[$i,$j]=$i
  done
done
for ((i=0; i<rows; i++)); do
  for ((j=0; j<columns; j++)); do
    echo ${arr2d[$i,$j]}
  done
done

Expanding on Paul's answer - here's my version of working with associative sub-arrays in bash:
declare -A SUB_1=(["name1key"]="name1val" ["name2key"]="name2val")
declare -A SUB_2=(["name3key"]="name3val" ["name4key"]="name4val")
STRING_1="string1val"
STRING_2="string2val"
MAIN_ARRAY=(
  "${SUB_1[*]}"
  "${SUB_2[*]}"
  "${STRING_1}"
  "${STRING_2}"
)
echo "COUNT: " ${#MAIN_ARRAY[@]}
for key in ${!MAIN_ARRAY[@]}; do
  IFS=' ' read -a val <<< ${MAIN_ARRAY[$key]}
  echo "VALUE: " ${val[@]}
  if [[ ${#val[@]} -gt 1 ]]; then
    for subkey in ${!val[@]}; do
      subval=${val[$subkey]}
      echo "SUBVALUE: " ${subval}
    done
  fi
done
It works with mixed values in the main array - strings/arrays/assoc. arrays
The key here is to wrap the subarrays in double quotes and use * instead of @ when storing a subarray inside the main array, so it gets stored as a single, space-separated string: "${SUB_1[*]}"
Then it is easy to parse an array back out of that when looping through the values with IFS=' ' read -a val <<< ${MAIN_ARRAY[$key]}
The code above outputs:
COUNT: 4
VALUE: name1val name2val
SUBVALUE: name1val
SUBVALUE: name2val
VALUE: name4val name3val
SUBVALUE: name4val
SUBVALUE: name3val
VALUE: string1val
VALUE: string2val

Lots of answers found here for creating multidimensional arrays in bash.
And without exception, all are obtuse and difficult to use.
If MD arrays are a required criteria, it is time to make a decision:
Use a language that supports MD arrays
My preference is Perl. Most would probably choose Python.
Either works.
Store the data elsewhere
JSON and jq have already been suggested. XML has also been suggested, though for your use JSON and jq would likely be simpler.
It would seem though that Bash may not be the best choice for what you need to do.
Sometimes the correct question is not "How do I do X in tool Y?", but rather "Which tool would be best to do X?"

I do this using associative arrays (available since bash 4) and setting IFS to a value that can be defined manually.
The purpose of this approach is to have arrays as values of associative array keys.
In order to set IFS back to default just unset it.
unset IFS
This is an example:
#!/bin/bash
set -euo pipefail
# used as value in associative array
test=(
  "x3:x4:x5"
)
# associative array
declare -A wow=(
  ["1"]=$test
  ["2"]=$test
)
echo "default IFS"
for w in ${wow[@]}; do
  echo " $w"
done
IFS=:
echo "IFS=:"
for w in ${wow[@]}; do
  for t in $w; do
    echo " $t"
  done
done
echo -e "\n or\n"
for w in ${!wow[@]}
do
  echo " $w"
  for t in ${wow[$w]}
  do
    echo " $t"
  done
done
unset IFS
unset w
unset t
unset wow
unset test
The output of the script above is:
default IFS
x3:x4:x5
x3:x4:x5
IFS=:
x3
x4
x5
x3
x4
x5
or
1
x3
x4
x5
2
x3
x4
x5

I've got a pretty simple yet smart workaround:
Just define arrays whose names contain the index variable. For example:
for (( i=0 ; i<$(($maxvalue + 1)) ; i++ ))
do
  for (( j=0 ; j<$(($maxargument + 1)) ; j++ ))
  do
    declare -a "array$i[$j]=<your rule>"   # replace <your rule> with the value you want
  done
done
I don't know whether this helps since it's not exactly what you asked for, but it works for me. (The same could be achieved with plain variables, without the array.)
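A slightly more concrete sketch of that idea, using declare for the assignment and indirect expansion to read a cell back (the names and values are only illustrative):
for ((i=0; i<3; i++)); do
  for ((j=0; j<3; j++)); do
    declare "array$i[$j]=$((i * 10 + j))"   # array0, array1, array2 each hold one "row"
  done
done
ref="array1[2]"
echo "${!ref}"   # should print 12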

echo "Enter no of terms"
read count
for i in $(seq 1 $count)
do
t=` expr $i - 1 `
for j in $(seq $t -1 0)
do
echo -n " "
done
j=` expr $count + 1 `
x=` expr $j - $i `
for k in $(seq 1 $x)
do
echo -n "* "
done
echo ""
done

Related

Bash RegEx and Storing Commands into a Variable

In Bash I have an array names that contains the string values
Dr. Praveen Hishnadas
Dr. Vij Pamy
John Smitherson,Dr.,Service Account
John Dinkleberg,Dr.,Service Account
I want to capture only the names
Praveen Hishnadas
Vij Pamy
John Smitherson
John Dinkleberg
and store them back into the original array, overwriting their unsanitized versions.
I have the following snippet of code (note that I'm executing the regex in Perl mode, -P):
for i in "${names[#]}"
do
echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1
done
Which yields the output
Dr. Praveen Hishnadas
Dr. Vij Pamy
John Smitherson
John Dinkleberg
Questions:
1) Am I using the look-around construct ?: incorrectly? I'm trying to optionally match "Dr." while not capturing it.
2) How would I store the result of that echo back into the array names? I have tried setting it to
i=echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1
i=$(echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1)
i=`echo $i|grep -P '(?:Dr\.)?\w+ \w+|$' -o | head -1`
but to no avail. I only started learning bash 2 days ago and I feel like my syntaxing is slightly off. Any help is appreciated.
Your lookahead says "include Dr. if it's there". You probably want a negative lookahead like (?!Dr\.)\w+ \w+. I'll throw in a leading \b anchor as a bonus.
names=('Dr. Praveen Hishnadas' 'Dr. Vij Pamy' 'John Smitherson,Dr.,Service Account' 'John Dinkleberg,Dr.,Service Account')
for i in "${names[#]}"
do
grep -P '\b(?!Dr\.)\w+ \w+' -o <<<"$i" |
head -n 1
done
It doesn't matter for the examples you provided, but you should basically always quote your variables. See When to wrap quotes around a shell variable?
Maybe also google "falsehoods programmers believe about names".
To update your array, loop over the array indices and assign back into the array.
for((i=0;i<${#names[@]};++i)); do
  names[$i]=$(grep -P '\b(?!Dr\.)\w+ \w+|$' -o <<<"${names[i]}" | head -n 1)
done
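A quick way to check the result afterwards (expected output shown as comments, assuming the sample names above):
printf '%s\n' "${names[@]}"
# Praveen Hishnadas
# Vij Pamy
# John Smitherson
# John Dinkleberg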
How about something like this for the regex?
(?:^|\.\s)(\w+)\s+(\w+)
Regex Demo
(?: # Non-capturing group
^|\.\s # Start match if start of line or following dot+space sequence
)
(\w+) # Group 1 captures the first name
\s+ # Match unlimited number of spaces between first and last name (take + off to match 1 space)
(\w+) # Group 2 captures surname.

How can for i in ${VAR} ever work in bash, instead of for i in "${VAR[@]}"?

If I try on my machine (with bash 3,4 and 5) the following command:
bash-5.0$ VAR=(1 2 3)
bash-5.0$ for i in ${VAR}; do echo $i; done
I get only one line with the 1.
If I do the same on ZSH for example, it nicely writes the three lines with progressive numbers.
However in one of our production servers I found this:
bash -c "for i in ${MY_VAR}; do stuff with $i; done"
And by checking the logs it seems that it is actually iterating correctly!
How is this possible? Is it a particular version of bash I’m not aware of? Or some flag I should set? Or maybe the array was populated in a particular way?
It "works" because the code isn't actually using an array at all.
export MY_VAR='1 2 3'
bash -c 'for i in ${MY_VAR}; do echo "Doing stuff with $i"; done'
...involves no arrays whatsoever; MY_VAR is a string being word-split and then glob-expanded.
Don't do that, ever, even if you really do need to iterate over items from a delimiter-separated string. The reliable alternative is to use read -r -a my_array <<<"$MY_VAR" to read your string into an array, and then for i in "${my_array[@]}"; do echo "Doing stuff with $i"; done to iterate over it.
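A minimal self-contained sketch of that suggestion, with MY_VAR assumed to hold the space-separated string from above:
MY_VAR='1 2 3'
read -r -a my_array <<<"$MY_VAR"   # split the string into a real array
for i in "${my_array[@]}"; do
  echo "Doing stuff with $i"
done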
You should write:
var=(1 2 3)
for i in "${var[#]}"; do
do stuff with "$i"
done
You need [@] as shown. And don't use uppercase variable names. Now as to why it works on your production server: possibly because MY_VAR is defined as MY_VAR="1 2 3" (or something analogous), i.e., MY_VAR isn't an array (which is bad).
Looks like Bash evaluates $arr to ${arr[0]}:
arr=(1 2 3)
echo $arr # yields 1
arr[0]=999
echo $arr # yields 999
With associative arrays:
declare -A h
h=([one]=1 [two]=2)
echo $h # yields nothing
h=([0]=1 [two]=2)
echo $h # yields 1
As others have pointed out, the right way to loop through an array is:
for i in "${arr[#]}"; do ...

Find the position of a string in an array in Bash script

I'm writing a shell script, and I have created an array containing several strings:
array=('string1' 'string2' ... 'stringN')
Now, I have a string saved in a variable, say a:
a='stringM'
And this string is part of the array.
My question is: how do I find the position of the string in the array, without having to check the terms one by one with a for loop?
Thanks in advance
The basic question is: why do you want to avoid a for loop?
Syntactical convenience and expressiveness: you want a more elegant way to conduct your search.
Performance: you're looking for the fastest way to conduct your search.
tl;dr
For performance reasons, prefer external-utility solutions to pure shell approaches; fortunately, external-utility solutions are often also the more expressive solutions:
For large element counts, they will be much faster.
While they will be slower for small element counts, the absolute time spent executing will still be low overall.
The following snippet shows you how these two goals intersect (note that both commands return the 1-based index of the item found; assumes that the array elements have no embedded newlines):
# Sample input array - adjust the number to experiment
array=( {1..300} )
# Look for the last item
itmToFind=${array[@]: -1}
# Bash `for` loop
i=1
time for a in "${array[@]}"; do
  [[ $a == "$itmToFind" ]] && { echo "$i"; break; }
  (( ++i ))
done
# Alternative approach: use external utility `grep`
IFS=$'\n' # make sure that "${array[*]}" expands to \n-separated elements
time grep -m1 -Fxn "$itmToFind" <<<"${array[*]}" | cut -d: -f1
grep's -m1 option means that at most one match is searched for; -Fxn means that the search term should be treated as a literal (-F), match exactly (the full line, -x), and prefix each match with its line number (-n).
With the array size given - 300 on my machine - the above commands perform about the same:
300
real 0m0.005s
user 0m0.004s
sys 0m0.000s
300
real 0m0.004s
user 0m0.002s
sys 0m0.002s
The specific threshold will vary, but:
Generally speaking, the higher the element count, the faster a solution based on an external utility such as grep will be.
For low element counts, the absolute time spent will probably not matter much, even if the external utility solution is comparatively slower.
To show one end of the extreme, here are the timings for a 1,000,000-element array (1 million elements):
1000000
real 0m13.861s
user 0m13.180s
sys 0m0.357s
1000000
real 0m1.520s
user 0m1.411s
sys 0m0.005s
Without any other information on the array there is no solution other than checking each element; if the data is sorted, a search by dichotomy (binary search) can be done (see the sketch after the hash example below).
Otherwise another structure can be used, like a hash.
For example, instead of appending elements to an array, build an associative array (available since bash 4):
declare -A hash
i=0;
for str in string{A..Z}; do
hash[$str]=$((i++))
done
echo "${hash['stringI']}"
Not sure if this will work for you or if this is the best way to do it avoiding a for loop, but you can try:
$ array=('string1' 'string2' 'string3' 'string4')
$ a='string3'
$ printf "%s\n" "${array[#]}" | grep -m1 -Fxn "$a" | cut -d: -f1
3
$ i=$(( $(printf "%s\n" "${array[@]}" | grep -m1 -Fxn "$a" | cut -d: -f1) - 1 ))
$ echo $i
2
Breaking it down:
printf "%s\n" "${array[#]}"
prints every element of the array separated by a new line, then we pipe it to grep to get the matching line number for the $a variable and use cut to get only the line number wihtout the match:
printf "%s\n" "${array[#]}" | grep -m1 -Fxn "$a" | cut -d: -f1
Finally, substract 1 from the matching line number returned using arithmetic expansion and store it in $i:
i=$(( $(printf "%s\n" "${array[#]}" | grep -m1 -Fxn "$a" | cut -d: -f1) - 1 ))
As others have shown ways based on the current array, may I suggest you could also turn the array into an associative one and have your strings as the indexes pointing to numbers.
declare -A array=(['string1']=1
['string2']=2
...
['stringN']=N )
a='stringM'
echo ${array[$a]}

Reading a file into an associative array in Bash

I'm trying to read the information of a structured file into an associative array using a Bash script. The file contains, in each line, the name of a person and their address, separated by a "|". For example:
person1|address of person1
person2|address of person2
...
personN|address of personN
I tried to do this using the script below. Within the WHILE loop, the information is being printed. However, in the FOR loop the information is not being printed. It seems that the information is not being stored in the associative array outside of the WHILE loop.
What am I doing wrong? Why this is not working? Is there more efficient ways to do that?
#!/bin/bash
declare -A address
cat adresses.txt | while read line
do
name=`echo $line | cut -d '|' -f 1`
add=`echo $line | cut -d '|' -f 2`
address[$name]=$add
echo "$name - ${address[$name]}"
done
for name in ${!address[*]}
do
echo "$name - ${address[$name]}"
done
Wrong and useless usage of cut
#!/bin/bash
declare -A address
while IFS=\| read name add
do
address[$name]=$add
done < adresses.txt
for name in ${!address[*]}
do
echo "$name - ${address[$name]}"
done
cat addresses.txt | while read line
do
...
done
Shell commands in a pipeline are executed in subshells. Variables set in subshells aren't visible to the parent shell.
You can fix this by replacing the pipeline with a redirection.
while read line
do
...
done < addresses.txt
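Another way to keep the loop in the current shell, when the input comes from a command rather than a plain file, is process substitution (sort is only an example producer here):
while IFS='|' read -r name add
do
  address[$name]=$add
done < <(sort addresses.txt)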
Extending the accepted answer to resolve the OP's comment:
#!/bin/bash
declare -A address
while IFS='|' read name add
do
address[$name]=$add
echo "$name - ${address[$name]}"
done < adresses.txt
for name in "${!address[#]}"
do
echo "$name - ${address[$name]}"
done

How to store elements with whitespace in an array?

Just wondering, assuming I am storing my data in a file called BookDB.txt in the following format:
C++ for dummies:Jared:10.52:5:6
Java for dummies:David:10.65:4:6
whereby each field is separated by the delimiter ":".
How would I preserve whitespace in the first field and have an array with the following contents: ('C++ for dummies' 'Java for dummies')?
Any help is very much appreciated!
Ploutox's solution is almost correct, but without setting IFS, you will not get the array that you seek, with two elements in this case.
Note: He corrected his solution after this post.
IFS=$'\n'
arr=( $(awk -F':' '{print $1}' Input.txt) )
echo ${#arr[@]}
echo ${arr[0]}
echo ${arr[1]}
Output:
2
C++ for dummies
Java for dummies
Just use a while loop:
#!/bin/bash
# create and populate the array
a=()
while IFS=':' read -r field _
do
a+=("$field")
done < file
# print the array contents
printf "%s\n" "${a[#]}"
I totally misunderstood your question on my 1st attempt to answer. awk seems more suited for your needs, though. You can get what you want with simple scripting:
IFS=$'\n'
MYARRAY=( $(awk -F ":" '{print $1}' myfile) )
The -F flag forces : as the field separator.
echo ${MYARRAY[0]} will print:
C++ for dummies
$ yes sed -i "s/:/\'\'/" BookDB.txt | head -n100 | bash
This command will work. It is a Linux command; run it in a shell in the same path as BookDB.txt.
