I have this type of script, which asks for user input and then stores it in an indexed array:
#!/bin/bash
declare -a ka=()
for i in {1..4};
do
read -a ka > ka();
done
echo ${ka[@]}
I can't manage to append the read statement to the array.
When you run read -r -a arrayname, the entire array is rewritten starting from the very first item; it doesn't retain any of the prior contents.
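For example (a minimal demonstration; the variable name arr is just illustrative):
read -r -a arr <<< 'one two three'
declare -p arr   # declare -a arr=([0]="one" [1]="two" [2]="three")
read -r -a arr <<< 'four'
declare -p arr   # declare -a arr=([0]="four") -- the earlier elements are gone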
Thus, read into a temporary array, and append that temporary array to your "real" / final one:
#!/usr/bin/env bash
case $BASH_VERSION in '') echo "ERROR: must be run with bash" >&2; exit 1;; esac
declare -a ka=()
for i in {1..4}; do
# only do the append if the read reports success
# note that if we're reading from a file with no newline on the last line, that last
# line will be skipped (on UNIX, text must be terminated w/ newlines to be valid).
read -r -a ka_suffix && ka+=( "${ka_suffix[@]}" )
done
# show current array contents unambiguously, one-per-line (echo is _very_ ambiguous)
printf ' - %q\n' "${ka[@]}"
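For example, assuming the script above is saved as append_read.sh (the file name is just for illustration), feeding it four lines on stdin appends every word from every line:
printf '%s\n' 'a b' 'c' 'd e' 'f' | bash append_read.sh
 - a
 - b
 - c
 - d
 - e
 - f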
Related
I'm taking over a bash script from a colleague that reads a file, processes it, and prints another file based on the line currently in the while loop.
I now need to add some features to it. The one I'm having issues with right now is reading a file and putting each line into an array, except the 2nd column of that line can be empty, e.g.:
For a text file with \t as separator:
A\tB\tC
A\t\tC
For a CSV file same but with , as separator:
A,B,C
A,,C
Which should then give
["A","B","C"] or ["A", "", "C"]
The code I took over is as follows:
while IFS=$'\t\r' read -r -a col; do
# Process the array, put that into a file
lp -d $printer $file_to_print
done < $input_file
Which works if B is filled, but B now needs to be empty sometimes, so when the input file keeps it empty, the created array, and thus the output file to print, just skips this empty cell (the array is then ["A","C"]).
I tried writing the whole block in awk but this brought its own set of problems, making it difficult to call the lp command to print.
So my question is: how can I preserve the empty cell from the line in my bash array, so that I can call on it later and use it?
Thank you very much. I know this might be quite confusing, so please ask and I'll specify.
Edit: As requested, here's the awk code I've tried. The issue is that it only prints the last print request, even though I know it loops over the whole file and the lp command is still inside the loop.
awk 'BEGIN {
inputfile="'"${optfile}"'"
outputfile="'"${file_loc}"'"
printer="'"${printer}"'"
while (getline < inputfile){
print "'"${prefix}"'" > outputfile
split($0,ft,"'"${IFSseps}"'");
if (length(ft[2]) == 0){
print "CODEPAGE 1252\nTEXT 465,191,\"ROMAN.TTF\",180,7,7,\""ft[1]"\"" >> outputfile
size_changer = 0
} else {
print "CODEPAGE 1252\nTEXT 465,191,\"ROMAN.TTF\",180,7,7,\""ft[1]"_"ft[2]"\"" >> outputfile
size_changer = 1
}
if ( split($0,ft,"'"${IFSseps}"'") > 6)
maxcounter = 6;
else
maxcounter = split($0,ft,"'"${IFSseps}"'");
for (i = 3; i <= maxcounter; i++){
x=191-(i-2)*33
print "CODEPAGE 1252\nTEXT 465,"x",\"ROMAN.TTF\",180,7,7,\""ft[i]"\"" >> outputfile
}
print "PRINT ""'"${copies}"'"",1" >> outputfile
close(outputfile)
"'"`lp -d ${printer} ${file_loc}`"'"
}
close("'"${file_loc}"'");
}'
EDIT2: Continuing to try to find a solution, I tried the following code without success. This is weird, as just doing printf without putting the result in an array keeps the formatting intact.
$ cat testinput | tr '\t' '>'
A>B>C
A>>C
# Should normally be empty on the second output line
$ while read line; do IFS=$'\t' read -ra col < <(printf "$line"); echo ${col[1]}; done < testinput
B
C
For tab, it's complicated.
From 3.5.7 Word Splitting in the manual:
A sequence of IFS whitespace characters is also treated as a delimiter.
Since tab is an "IFS whitespace character", sequences of tabs are treated as a single delimiter
IFS=$'\t' read -ra ary <<<$'A\t\tC'
declare -p ary
declare -a ary=([0]="A" [1]="C")
What you can do is translate tabs to a non-whitespace character, assuming it does not clash with the actual data in the fields:
line=$'A\t\tC'
IFS=, read -ra ary <<<"${line//$'\t'/,}"
declare -p ary
declare -a ary=([0]="A" [1]="" [2]="C")
To avoid the risk of colliding with commas in the data, we can use an unusual ASCII character: FS, octal 034
line=$'A\t\tC'
printf -v FS '\034'
IFS="$FS" read -ra ary <<<"${line//$'\t'/"$FS"}"
# or, without the placeholder variable
IFS=$'\034' read -ra ary <<<"${line//$'\t'/$'\034'}"
declare -p ary
declare -a ary=([0]="A" [1]="" [2]="C")
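A sketch of how this could be applied to the original while loop (assuming the question's $input_file variable, and that the \034 byte never occurs in the data):
while IFS= read -r line; do
    IFS=$'\034' read -r -a col <<< "${line//$'\t'/$'\034'}"
    declare -p col   # empty fields are preserved, e.g. col[1] may be ""
done < "$input_file"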
One bash example using parameter expansion where we convert the delimiter into a \n and let mapfile read in each line as a new array entry ...
For tab-delimited data:
for line in $'A\tB\tC' $'A\t\tC'
do
mapfile -t array <<< "${line//$'\t'/$'\n'}"
echo "############# ${line}"
typeset -p array
done
############# A B C
declare -a array=([0]="A" [1]="B" [2]="C")
############# A C
declare -a array=([0]="A" [1]="" [2]="C")
NOTE: The $'...' construct ensures the \t is treated as a single <tab> character as opposed to the two literal characters \ + t.
For comma-delimited data:
for line in 'A,B,C' 'A,,C'
do
mapfile -t array <<< "${line//,/$'\n'}"
echo "############# ${line}"
typeset -p array
done
############# A,B,C
declare -a array=([0]="A" [1]="B" [2]="C")
############# A,,C
declare -a array=([0]="A" [1]="" [2]="C")
NOTE: This obviously (?) assumes the desired data does not contain a comma (,).
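A sketch of reading an actual comma-delimited file line by line with this approach (data.csv is a hypothetical file name):
while IFS= read -r line; do
    mapfile -t array <<< "${line//,/$'\n'}"
    typeset -p array
done < data.csv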
It may just be your # Process the array, put that into a file part.
IFS=, read -ra ray <<< "A,,C"
for e in "${ray[#]}"; do o="$o\"$e\","; done
echo "[${o%,}]"
["A","","C"]
See @Glenn's excellent answer regarding tabs.
My simple data file:
$: cat x # tab delimited, empty field 2 of line 2
a b c
d f
My test:
while IFS=$'\001' read -r a b c; do
echo "a:[$a] b:[$b] c:[$c]"
done < <(tr "\t" "\001"<x)
a:[a] b:[b] c:[c]
a:[d] b:[] c:[f]
Note that I used ^A (a 001 byte) but you might be able to use something as simple as a comma or pipe (|) character. Choose based on your data.
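If you want an array instead of named variables, the same trick works with read -a (a sketch, reusing the sample file x from above):
while IFS=$'\001' read -r -a col; do
    declare -p col
done < <(tr '\t' '\001' < x)
declare -a col=([0]="a" [1]="b" [2]="c")
declare -a col=([0]="d" [1]="" [2]="f")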
I need to read a file line by line, split every line on ",", and store the fields in an array.
File source_file.
usl-coop,/root
usl-dev,/bin
Script.
i=1
while read -r line; do
IFS="," read -ra para_$i <<< $line
echo ${para_$i[@]}
((i++))
done < source_file
Expected output.
para_1[0]=usl-coop
para_1[1]=/root
para_2[0]=usl-dev
para_2[1]=/bin
The script throws an error about the echo.
./sofimon.sh: line 21: ${para_$i[@]}: bad substitution
When I echo the array fields one by one, for example
echo ${para_1[0]}
it shows that the variables are stored.
But I need use it with variable within, something like this.
${para_$i[1]}
Is possible to do this?
Thanks.
S.
There is a trick to simulate 2D arrays using associative arrays. It works nicely and I think it is the most flexible and extensible approach:
declare -A para
i=1
while IFS=, read -r -a line; do
for j in ${!line[@]}; do
para[$i,$j]="${line[$j]}"
done
((i++)) ||:
done < source_file
declare -p para
will output:
declare -A para=([1,0]="usl-coop" [1,1]="/root" [2,1]="/bin" [2,0]="usl-dev" )
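Individual cells can then be read back with the same "row,column" key, e.g.:
echo "${para[1,1]}"   # /root
echo "${para[2,0]}"   # usl-dev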
Without modifying your script that much you could use indirect variable expansion. It's sometimes used in simpler scripts:
i=1
while IFS="," read -r -a para_$i; do
n="para_$i[#]"
echo "${!n}"
((i++)) ||:
done < source_file
declare -p ${!para_*}
or basically the same with a nameref, a named reference to another variable (side note: see how [@] needs to be part of the variable in indirect expansion, but not in a named reference):
i=1
while IFS="," read -r -a para_$i; do
declare -n n
n="para_$i"
echo "${n[#]}"
((i++)) ||:
done < source_file
declare -p ${!para_*}
both scripts above will output the same:
usl-coop /root
usl-dev /bin
declare -a para_1=([0]="usl-coop" [1]="/root")
declare -a para_2=([0]="usl-dev" [1]="/bin")
That said, I think you shouldn't read your file into memory at all. It's just a bad design. Shell and bash are built around passing your files with pipes, streams, fifos, redirections, process substitutions, etc., without ever saving/copying/storing the file. If you have a file to parse, you should stream it to another process, parse it and save the result, without ever storing the whole input in memory. If you want to find some data inside a file, use grep or awk.
Here is a short awk script that does the task.
awk 'BEGIN{FS=",";of="para_%d[%d]=%s\n"}{printf(of, NR, 0, $1);printf(of, NR, 1, $2)}' input.txt
It produces the desired output.
Explanation:
BEGIN{
FS=","; # set field seperator to `,`
of="para_%d[%d]=%s\n" # define common printf output format
}
{ # for each input line
printf(of, NR, 0, $1); # output for current line, [0], left field
printf(of, NR, 1, $2) # output for current line, [1], right field
}
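If you really want those assignments to land in the current shell as actual arrays, one option (a sketch, and only safe while the values contain no spaces or shell metacharacters) is to source the awk output:
source <(awk 'BEGIN{FS=",";of="para_%d[%d]=%s\n"}{printf(of, NR, 0, $1);printf(of, NR, 1, $2)}' source_file)
declare -p para_1 para_2   # para_1=([0]="usl-coop" [1]="/root"), para_2=([0]="usl-dev" [1]="/bin")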
Let's say I have a file named tmp.out that contains the following:
c:\My files\testing\more files\stuff\test.exe
c:\testing\files here\less files\less stuff\mytest.exe
I want to put the contents of that file into an array and I do it like so:
ARRAY=( `cat tmp.out` )
I then run this through a for loop like so
for i in ${ARRAY[@]};do echo ${i}; done
But the output ends up like this:
c:\My
files\testing\more
files\stuff\test.exe
c:\testing\files
here\less
files\less
stuff\mytest.exe
and I want the output to be:
c:\My files\testing\more files\stuff\test.exe
c:\testing\files here\less files\less stuff\mytest.exe
How can I resolve this?
In order to iterate over the values in an array, you need to quote the array expansion to avoid word splitting:
for i in "${values[#]}"; do
Of course, you should also quote the use of the value:
echo "${i}"
done
That doesn't answer the question of how to get the lines of a file into an array in the first place. If you have bash 4.0, you can use the mapfile builtin:
mapfile -t values < tmp.out
Otherwise, you'd need to temporarily change the value of IFS to a single newline, or use a loop over the read builtin.
You can use the IFS variable, the Internal Field Separator. Set it to the empty string so the line is not split into words and leading/trailing whitespace is preserved; read already treats each newline as the end of a line:
while IFS= read -r line ; do
ARRAY+=("$line")
done < tmp.out
-r is needed to keep the literal backslashes.
Another simple way to control word-splitting is by controlling the Internal Field Separator (IFS):
#!/bin/bash
oifs="$IFS" ## save original IFS
IFS=$'\n' ## set IFS to break on newline
array=( $( <dat/2lines.txt ) ) ## read lines into array
IFS="$oifs" ## restore original IFS
for ((i = 0; i < ${#array[@]}; i++)); do
printf "array[$i] : '%s'\n" "${array[i]}"
done
Input
$ cat dat/2lines.txt
c:\My files\testing\more files\stuff\test.exe
c:\testing\files here\less files\less stuff\mytest.exe
Output
$ bash arrayss.sh
array[0] : 'c:\My files\testing\more files\stuff\test.exe'
array[1] : 'c:\testing\files here\less files\less stuff\mytest.exe'
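One caveat with the unquoted array=( $( <file ) ) assignment: pathname expansion still applies, so a line containing * or ? could be replaced by matching file names. A hedged variant of the snippet above that also disables globbing while the array is built:
oifs="$IFS"
IFS=$'\n'
set -f                          ## disable pathname expansion
array=( $( <dat/2lines.txt ) )
set +f
IFS="$oifs"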
I'm not sure what's going on here:
#!/bin/bash
STRING_PREFIX="foo"
STRING_IDX="1,2,3,4,5"
declare -a STRING_ARRAY
main() {
assemble_strings
for i in "${STRING_ARRAY[#]}"; do
echo "TEST: $i"
done
}
assemble_strings() {
IFS=,
while IFS= read idx; do
STRING_ARRAY+=("${STRING_PREFIX}${idx}")
done < <(echo $STRING_IDX)
}
main
I expect an array of 5 strings, each prefixed with 'foo'. Instead I get an array containing a single string:
TEST: foo1 2 3 4 5
For bonus points, how can I avoid the loop entirely? I can't figure out how to create an array from an expression in bash.
First: Because you put IFS= at the front of your read, the prior IFS=, does nothing (insofar as that read is concerned).
Second: Because you aren't setting -d , in your read, it's using the default -- newline -- value as record terminator. (IFS determines the field separator, not the record terminator; with an empty IFS value, your records have only one field in them anyhow). Thus, when you call read, it reads the whole record -- up to the newline -- so your loop only runs once.
One approach, using read -a to read directly to an array (in this case, treating the entire input stream as a single record, with fields separated by commas):
string_idx=1,2,3,4,5
string_prefix=foo
# use read to directly populate the array
IFS=, read -r -d '' -a string_array <<<"$string_idx"
# go back through and tack on prefixes
for idx in "${!string_array[#]}"; do
string_array[$idx]="${string_prefix}${string_array[$idx]}"
done
# print values
printf ' entry: %s\n' "${string_array[@]}"
Another, making the smallest change to your existing code -- treating the input stream as a series of single-field comma-separated records:
string_idx=1,2,3,4,5
string_prefix=foo
string_array=( )
while IFS= read -r -d , idx; do
string_array+=( "${string_prefix}${idx}" )
done <<<"$string_idx,"
I read the files in a directory and put each file name into an array (SEARCH).
Then I use a loop to go through each file name in the array (SEARCH), open it with a while read line loop, and read each line into another array (filecount). My problem is it's one huge array with 39 lines (each file has 13 lines) and I need it to be 3 separate arrays, where
filecount1[line1] is the first line from the 1st file, and so on. Here is my code so far...
typeset -A files
for file in ${SEARCH[@]}; do
while read line; do
files["$file"]+="$line"
done < "$file"
done
So, thanks Ivan for this example! However I'm not sure I follow how this puts it into a separate array, because with this example wouldn't all the arrays still be named "files"?
If you're just trying to store the file contents into an array:
declare -A contents
for file in "${!SEARCH[#]}"; do
contents["$file"]=$(< $file)
done
If you want to store the individual lines in a array, you can create a pseudo-multi-dimensional array:
declare -A contents
for file in "${!SEARCH[#]}"; do
NR=1
while read -r line; do
contents["$file,$NR"]=$line
(( NR++ ))
done < "$file"
done
for key in "${!contents[#]}"; do
printf "%s\t%s\n" "$key" "${contents["$key"]}"
done
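A specific line can then be fetched with the "file,line" key, e.g. (notes.txt is a hypothetical file name from SEARCH):
echo "${contents[notes.txt,3]}"   # third line of notes.txt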
line 6 is
$filecount[$linenum]}="$line"
Seems it is missing a {, right after the $.
Should be:
${filecount[$linenum]}="$line"
If the above is true, then it is trying to run the output as a command.
Line 6 is (after "fixing" it above):
${filecount[$linenum]}="$line"
However ${filecount[$linenum]} is a value and you can't have an assignment on a value.
Should be:
filecount[$linenum]="$line"
Now I'm confused, as in whether the { is actually missing, or } is the actual typo :S :P
btw, bash supports this syntax too
(( filecount++ )) # no need for $ inside ((..)); just use the increment operator ++
This should work:
typeset -A files
for file in ${SEARCH[@]}; do # foreach file
while read line; do # read each line
files["$file"]+="$line" # and place it in a new array
done < "$file" # reading each line from the current file
done
a small test shows it works
# set up
mkdir -p /tmp/test && cd $_
echo "abc" > a
echo "foo" > b
echo "bar" > c
# read files into arrays
typeset -A files
for file in *; do
while read line; do
files["$file"]+="$line"
done < "$file"
done
# print arrays
for file in *; do
echo ${files["$file"]}
done
# same as:
echo ${files[a]} # prints: abc
echo ${files[b]} # prints: foo
echo ${files[c]} # prints: bar