Cannot send command output into an array

I want to place each sysctl -a output line into an array:
TAB=($(sysctl -a))
It does not work; the resulting array contains the output split on any whitespace, instead of only on newlines:
[..]
NONE
net.netfilter.nf_log.5
=
NONE
net.netfilter.nf_log.6
=
NONE
net.netfilter.nf_log.7
=
NONE
net.netfilter.nf_log.8
[..]
I try:
while read -r line
do
TAB+=("${line}")
done< <(sysctl -a)
That does not work either (same issue).
I try:
while IFS=$'\n' read -r line
do
TAB+=("${line}")
done< <(sysctl -a)
But still same output, same issue.
What's the correct method to have each line correctly placed in the array?

One way, probably the easiest, is to use readarray (bash 4 needed):
readarray -t TAB < <(sysctl -a)
Test:
$ echo ${TAB[0]}
abi.vsyscall32 = 1
$ echo ${TAB[1]}
crypto.fips_enabled = 0
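Since sysctl -a output differs per machine, here is a reproducible sketch of the same pattern; fake_sysctl is a fabricated stand-in for any command producing multi-line output:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for `sysctl -a`, so the example is reproducible
fake_sysctl() {
    printf '%s\n' 'abi.vsyscall32 = 1' 'crypto.fips_enabled = 0'
}

# -t strips the trailing newline from each stored line
readarray -t TAB < <(fake_sysctl)

echo "${#TAB[@]}"     # number of lines captured
echo "${TAB[0]}"      # first whole line, spaces intact
```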

You were close:
IFS=$'\n' TAB=($(sysctl -a))
# Usage example:
for x in "${TAB[@]}"
do
    echo "$x"
done
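Two caveats of the TAB=($(...)) form are worth knowing: with no command word on the line, the IFS assignment persists in the current shell, and the unquoted $(...) still glob-expands each line. A sketch with fabricated data:

```shell
#!/usr/bin/env bash
old_ifs=$IFS
# Splits on newline only - but note the side effects below
IFS=$'\n' TAB=($(printf 'line one\nline two\n'))
if [ "$IFS" != "$old_ifs" ]; then ifs_changed=yes; else ifs_changed=no; fi
IFS=$old_ifs              # restore, or later word splitting misbehaves

echo "${#TAB[@]}"         # 2: the split on newlines did work

# readarray/mapfile touches neither IFS nor globbing:
mapfile -t TAB2 < <(printf 'line one\nline two\n')
echo "${#TAB2[@]}"        # 2
```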


read csv output into an array and process the variable in a loop using bash [duplicate]

Assuming I have an output/file:
1,a,info
2,b,inf
3,c,in
I want to run a while loop with read
while read r ; do
echo "$r";
# extract line to $arr as array separated by ','
# call some program (e.g. md5sum, echo ...) on one item of arr
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
I would like to use readarray and while, but compelling alternatives are welcome too.
There is a specific way to have readarray (mapfile) behave correctly with process substitution, but I keep forgetting it. This is intended as a Q&A, so an explanation would be nice.
Since compelling alternatives are welcome too and assuming you're just trying to populate arr one line at a time:
$ cat tst.sh
#!/usr/bin/env bash
while IFS=',' read -r -a arr ; do
# extract line to $arr as array separated by ','
# echo the first item of arr
echo "${arr[0]}"
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
$ ./tst.sh
1
2
3
or if you also need each whole input line in a separate variable r:
$ cat tst.sh
#!/usr/bin/env bash
while IFS= read -r r ; do
# extract line to $arr as array separated by ','
# echo the first item of arr
IFS=',' read -r -a arr <<< "$r"
echo "${arr[0]}"
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
$ ./tst.sh
1
2
3
but bear in mind why-is-using-a-shell-loop-to-process-text-considered-bad-practice anyway.
readarray (mapfile) and read -a disambiguation
readarray == mapfile first:
help readarray
readarray: readarray [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] [array]
Read lines from a file into an array variable.
A synonym for `mapfile'.
Then
help mapfile
mapfile: mapfile [-d delim] [-n count] [-O origin] [-s count] [-t] [-u fd] [-C callback] [-c quantum] [array]
Read lines from the standard input into an indexed array variable.
Read lines from the standard input into the indexed array variable ARRAY, or
from file descriptor FD if the -u option is supplied. The variable MAPFILE
is the default ARRAY.
Options:
-d delim Use DELIM to terminate lines, instead of newline
-n count Copy at most COUNT lines. If COUNT is 0, all lines are copied
-O origin Begin assigning to ARRAY at index ORIGIN. The default index is 0
-s count Discard the first COUNT lines read
-t Remove a trailing DELIM from each line read (default newline)
-u fd Read lines from file descriptor FD instead of the standard input
-C callback Evaluate CALLBACK each time QUANTUM lines are read
-c quantum Specify the number of lines read between each call to
CALLBACK
...
While read -a:
help read
read: read [-ers] [-a array] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name ...]
Read a line from the standard input and split it into fields.
Reads a single line from the standard input, or from file descriptor FD
if the -u option is supplied. The line is split into fields as with word
splitting, and the first word is assigned to the first NAME, the second
word to the second NAME, and so on, with any leftover words assigned to
the last NAME. Only the characters found in $IFS are recognized as word
delimiters.
...
Options:
-a array assign the words read to sequential indices of the array
variable ARRAY, starting at zero
...
Note:
Only the characters found in $IFS are recognized as word delimiters.
Useful with -a flag!
Create an array from a split string
For creating an array by splitting a string you could either:
IFS=, read -ra myArray <<<'A,1,spaced string,42'
declare -p myArray
declare -a myArray=([0]="A" [1]="1" [2]="spaced string" [3]="42")
Or use mapfile, but as this command is intended to work on whole files, the syntax is somewhat counter-intuitive:
mapfile -td, myArray < <(printf %s 'A,1,spaced string,42')
declare -p myArray
declare -a myArray=([0]="A" [1]="1" [2]="spaced string" [3]="42")
Or, if you want to avoid the fork ( < <(printf ...) ), you have to:
mapfile -td, myArray <<<'A,1,spaced string,42'
myArray[-1]=${myArray[-1]%$'\n'}
declare -p myArray
declare -a myArray=([0]="A" [1]="1" [2]="spaced string" [3]="42")
This will be a little quicker, but not more readable...
For your sample:
mapfile -t rows <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
for row in "${rows[@]}"; do
    IFS=, read -r -a cols <<<"$row"
    declare -p cols
done
declare -a cols=([0]="1" [1]="a" [2]="info")
declare -a cols=([0]="2" [1]="b" [2]="inf")
declare -a cols=([0]="3" [1]="c" [2]="in")
for row in "${rows[@]}"; do
    IFS=, read -r -a cols <<<"$row"
    printf ' %s | %s\n' "${cols[0]}" "${cols[2]}"
done
1 | info
2 | inf
3 | in
Or even, if really you want to use readarray:
for row in "${rows[@]}"; do
    readarray -td, cols <<<"$row"
    cols[-1]=${cols[-1]%$'\n'}
    declare -p cols
done
declare -a cols=([0]="1" [1]="a" [2]="info")
declare -a cols=([0]="2" [1]="b" [2]="inf")
declare -a cols=([0]="3" [1]="c" [2]="in")
Playing with callback option:
(Added some spaces on last line)
testfunc() {
local IFS array cnt line
read cnt line <<< "$@"
IFS=,
read -a array <<< "$line"
printf ' [%3d]: %3s | %3s :: %s\n' "$cnt" "${array[@]}"
}
mapfile -t -C testfunc -c 1 <<HEREDOC
1,a,info
2,b,inf
3,c d,in fo
HEREDOC
[ 0]: 1 | a :: info
[ 1]: 2 | b :: inf
[ 2]: 3 | c d :: in fo
Same, with -u flag:
Open the file descriptor:
exec {mydoc}<<HEREDOC
1,a,info
2,b,inf
3,c d,in fo
HEREDOC
Then
mapfile -u $mydoc -C testfunc -c 1
[ 0]: 1 | a :: info
[ 1]: 2 | b :: inf
[ 2]: 3 | c d :: in fo
And finally close the file descriptor:
exec {mydoc}<&-
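The same automatic-descriptor pattern works with a regular file; a small sketch (the file name is fabricated for illustration):

```shell
#!/usr/bin/env bash
printf '%s\n' one two three > /tmp/fdtest.$$

exec {fd}< /tmp/fdtest.$$     # bash picks a free descriptor, stores it in fd
mapfile -t -u "$fd" lines     # read from that descriptor instead of stdin
exec {fd}<&-                  # close it again

echo "${#lines[@]}"           # 3
rm -f /tmp/fdtest.$$
```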
About the bash csv module:
For further information about enable -f /path/to/csv csv, RFCs and limitations, have a look at my previous post about How to parse a CSV file in Bash?
If the loadable builtin csv is available/acceptable, something like:
help csv
csv: csv [-a ARRAY] string
Read comma-separated fields from a string.
Parse STRING, a line of comma-separated values, into individual fields,
and store them into the indexed array ARRAYNAME starting at index 0.
If ARRAYNAME is not supplied, "CSV" is the default array name.
The script.
#!/usr/bin/env bash
enable csv || exit
while IFS= read -r line && csv -a arr "$line"; do
printf '%s\n' "${arr[0]}"
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
See help enable
With bash 5.2+ there is a default path for the loadables in config-top.h which should be configurable at compile time.
BASH_LOADABLES_PATH
The solution is readarray -t -d, arr < <(printf "%s," "$r")
The special part is < <(...) because readarray ....
There is no proper reason to be found why it first needs a redirection arrow and then process substitution; neither in the tldp process-substitution page nor on SS64.
My final understanding is that <(...) opens a named pipe and readarray waits for it to close. By putting it in the place of a file behind <, it is handled by bash as file input and (anonymously) piped into stdin.
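The contrast with a plain pipe can be sketched like this (fabricated one-line input; pipeline elements normally run in subshells, as the lastpipe discussion below explains):

```shell
#!/usr/bin/env bash
declare -a lost=()
# readarray runs in a pipeline subshell here, so its assignment is lost:
printf '1,a,info\n' | readarray -t lost
echo "${#lost[@]}"     # 0

# With < <(...), readarray runs in the current shell and reads the
# process substitution like a file, so the array survives:
readarray -t kept < <(printf '1,a,info\n')
echo "${#kept[@]}"     # 1
```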
example:
while read r ; do
echo "$r";
readarray -t -d, arr < <(printf "%s," "$r");
echo "${arr[0]}";
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
Anyway, this is just a reminder for myself, because I keep forgetting, and readarray is the only place where I actually need this.
The question was also answered mostly here, here (why the pipe isn't working) and somewhat here, but those answers are difficult to find and the reasoning hard to comprehend.
For example, the shopt -s lastpipe solution is not clear at first, but it turns out that in bash all pipeline elements are usually executed in subshells rather than in the main shell, so their state changes have no effect on the rest of the program. This option changes the behavior so that the last pipeline element executes in the main shell (except in an interactive shell).
shopt -s lastpipe;
while read r ; do
echo "$r";
printf "%s," "$r" | readarray -t -d, arr;
echo "${arr[0]}";
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC
One alternative to lastpipe would be to do all the work in the subshell:
while read r ; do
echo "$r";
printf "%s," "$r" | {
readarray -t -d, arr ;
echo "${arr[0]}";
}
done <<HEREDOC
1,a,info
2,b,inf
3,c,in
HEREDOC

Put lines of a text file in an array in bash

I'm taking over a bash script from a colleague that reads a file, processes it and prints another file, based on the current line in a while loop.
I now need to add some features to it. The one I'm having issues with right now is reading a file and putting each line into an array, where the 2nd column of a line can be empty, e.g.:
For a text file with \t as separator:
A\tB\tC
A\t\tC
For a CSV file same but with , as separator:
A,B,C
A,,C
Which should then give
["A","B","C"] or ["A", "", "C"]
The code I took over is as follows:
while IFS=$'\t\r' read -r -a col; do
# Process the array, put that into a file
lp -d $printer $file_to_print
done < $input_file
Which works if B is filled, but B now sometimes needs to be empty; when the input file leaves it empty, the created array, and thus the output file to print, just skips this empty cell (the array is then ["A","C"]).
I tried writing the whole block in awk, but this brought its own set of problems, making it difficult to call the lp command to print.
So my question is, how can I preserve the empty cell from the line into my bash array, so that I can call on it later and use it?
Thank you very much. I know this might be quite confusing, so please ask and I'll clarify.
Edit: After request, here's the awk code I've tried. The issue here is that it only prints the last print request, while I know it loops over the whole file, and the lp command is still in the loop.
awk 'BEGIN {
inputfile="'"${optfile}"'"
outputfile="'"${file_loc}"'"
printer="'"${printer}"'"
while (getline < inputfile){
print "'"${prefix}"'" > outputfile
split($0,ft,"'"${IFSseps}"'");
if (length(ft[2]) == 0){
print "CODEPAGE 1252\nTEXT 465,191,\"ROMAN.TTF\",180,7,7,\""ft[1]"\"" >> outputfile
size_changer = 0
} else {
print "CODEPAGE 1252\nTEXT 465,191,\"ROMAN.TTF\",180,7,7,\""ft[1]"_"ft[2]"\"" >> outputfile
size_changer = 1
}
if ( split($0,ft,"'"${IFSseps}"'") > 6)
maxcounter = 6;
else
maxcounter = split($0,ft,"'"${IFSseps}"'");
for (i = 3; i <= maxcounter; i++){
x=191-(i-2)*33
print "CODEPAGE 1252\nTEXT 465,"x",\"ROMAN.TTF\",180,7,7,\""ft[i]"\"" >> outputfile
}
print "PRINT ""'"${copies}"'"",1" >> outputfile
close(outputfile)
"'"`lp -d ${printer} ${file_loc}`"'"
}
close("'"${file_loc}"'");
}'
EDIT2: Continuing to try to find a solution, I tried the following code without success. This is weird, as just doing printf without putting the result in an array keeps the formatting intact.
$ cat testinput | tr '\t' '>'
A>B>C
A>>C
# Should normally be empty on the second output line
$ while read line; do IFS=$'\t' read -ra col < <(printf "$line"); echo ${col[1]}; done < testinput
B
C
For tab, it's complicated.
From 3.5.7 Word Splitting in the manual:
A sequence of IFS whitespace characters is also treated as a delimiter.
Since tab is an "IFS whitespace character", sequences of tabs are treated as a single delimiter
IFS=$'\t' read -ra ary <<<$'A\t\tC'
declare -p ary
declare -a ary=([0]="A" [1]="C")
What you can do is translate tabs to a non-whitespace character, assuming it does not clash with the actual data in the fields:
line=$'A\t\tC'
IFS=, read -ra ary <<<"${line//$'\t'/,}"
declare -p ary
declare -a ary=([0]="A" [1]="" [2]="C")
To avoid the risk of colliding with commas in the data, we can use an unusual ASCII character: FS, octal 034
line=$'A\t\tC'
printf -v FS '\034'
IFS="$FS" read -ra ary <<<"${line//$'\t'/"$FS"}"
# or, without the placeholder variable
IFS=$'\034' read -ra ary <<<"${line//$'\t'/$'\034'}"
declare -p ary
declare -a ary=([0]="A" [1]="" [2]="C")
One bash example using parameter expansion where we convert the delimiter into a \n and let mapfile read in each line as a new array entry ...
For tab-delimited data:
for line in $'A\tB\tC' $'A\t\tC'
do
mapfile -t array <<< "${line//$'\t'/$'\n'}"
echo "############# ${line}"
typeset -p array
done
############# A B C
declare -a array=([0]="A" [1]="B" [2]="C")
############# A C
declare -a array=([0]="A" [1]="" [2]="C")
NOTE: The $'...' construct ensures the \t is treated as a single <tab> character as opposed to the two literal characters \ + t.
For comma-delimited data:
for line in 'A,B,C' 'A,,C'
do
mapfile -t array <<< "${line//,/$'\n'}"
echo "############# ${line}"
typeset -p array
done
############# A,B,C
declare -a array=([0]="A" [1]="B" [2]="C")
############# A,,C
declare -a array=([0]="A" [1]="" [2]="C")
NOTE: This obviously (?) assumes the desired data does not contain a comma (,).
It may just be your # Process the array, put that into a file part.
IFS=, read -ra ray <<< "A,,C"
for e in "${ray[@]}"; do o="$o\"$e\","; done
echo "[${o%,}]"
["A","","C"]
See Glenn's excellent answer regarding tabs.
My simple data file:
$: cat x # tab delimited, empty field 2 of line 2
a b c
d f
My test:
while IFS=$'\001' read -r a b c; do
echo "a:[$a] b:[$b] c:[$c]"
done < <(tr "\t" "\001"<x)
a:[a] b:[b] c:[c]
a:[d] b:[] c:[f]
Note that I used ^A (a 001 byte) but you might be able to use something as simple as a comma or pipe (|) character. Choose based on your data.

Bash, while read line by line, split strings on line divided by ",", store to array

I need to read a file line by line, split every line on ",", and store the result in an array.
File source_file.
usl-coop,/root
usl-dev,/bin
Script.
i=1
while read -r line; do
IFS="," read -ra para_$i <<< $line
echo ${para_$i[@]}
((i++))
done < source_file
Expected output.
para_1[0]=usl-coop
para_1[1]=/root
para_2[0]=usl-dev
para_2[1]=/bin
The script outputs an error for the echo:
./sofimon.sh: line 21: ${para_$i[@]}: bad substitution
When I echo the array one field at a time, for example
echo para_1[0]
it shows that the variables are stored.
But I need to use it with a variable within, something like this:
${para_$i[1]}
Is it possible to do this?
Thanks.
S.
There is a trick to simulate 2D arrays using associative arrays. It works nicely, and I think it is the most flexible and extensible approach:
declare -A para
i=1
while IFS=, read -r -a line; do
for j in "${!line[@]}"; do
para[$i,$j]="${line[$j]}"
done
((i++)) ||:
done < source_file
declare -p para
will output:
declare -A para=([1,0]="usl-coop" [1,1]="/root" [2,1]="/bin" [2,0]="usl-dev" )
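Once populated, a cell is addressed with the composite "row,column" key. A small sketch, feeding the same data via a here-doc instead of source_file:

```shell
#!/usr/bin/env bash
declare -A para
i=1
while IFS=, read -r -a line; do
    for j in "${!line[@]}"; do
        para[$i,$j]="${line[$j]}"   # key is "row,column"
    done
    ((i++)) ||:
done <<'EOF'
usl-coop,/root
usl-dev,/bin
EOF

echo "${para[1,0]}"    # row 1, column 0
echo "${para[2,1]}"    # row 2, column 1
```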
Without modifying your script that much you could use indirect variable expansion. It's sometimes used in simpler scripts:
i=1
while IFS="," read -r -a para_$i; do
n="para_$i[@]"
echo "${!n}"
((i++)) ||:
done < source_file
declare -p ${!para_*}
or basically the same with a nameref, a named reference to another variable (side note: see how [@] needs to be part of the variable in indirect expansion, but not in a named reference):
i=1
while IFS="," read -r -a para_$i; do
declare -n n
n="para_$i"
echo "${n[@]}"
((i++)) ||:
done < source_file
declare -p ${!para_*}
both scripts above will output the same:
usl-coop /root
usl-dev /bin
declare -a para_1=([0]="usl-coop" [1]="/root")
declare -a para_2=([0]="usl-dev" [1]="/bin")
That said, I think you shouldn't read your file into memory at all. It's just bad design. Shell (and bash) is built around passing files between processes with pipes, streams, fifos, redirections, process substitutions, etc., without ever saving/copying/storing the whole file. If you have a file to parse, you should stream it to another process, parse it, and save the result, without ever storing the whole input in memory. If you want to find some data inside a file, use grep or awk.
Here is a short awk script that does the task:
awk 'BEGIN{FS=",";of="para_%d[%d]=%s\n"}{printf(of, NR, 0, $1);printf(of, NR, 1, $2)}' input.txt
It provides the desired output.
Explanation:
BEGIN{
FS=","; # set field separator to `,`
of="para_%d[%d]=%s\n" # define common printf output format
}
{ # for each input line
printf(of, NR, 0, $1); # output for current line, [0], left field
printf(of, NR, 1, $2) # output for current line, [1], right field
}

Using array inside awk in shell script

I am very new to Unix shell scripting and am trying to gain some knowledge. Please check my requirement and my approach.
I have an input file containing the data:
ABC = A:3 E:3 PS:6
PQR = B:5 S:5 AS:2 N:2
I am trying to parse the data and get the result as
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
The values can be added horizontally and vertically so I am trying to use an array. I am trying something like this:
myarr=(main.conf | awk -F"=" 'NR!=1 {print $1}'))
echo ${myarr[1]}
# Or loop through every element in the array
for i in "${myarr[@]}"
do
:
echo $i
done
or
awk -F"=" 'NR!=1 {
print $1"\n"
STR=$2
IFS=':' read -r -a array <<< "$STR"
for i in "${!array[@]}"
do
echo "$i=>${array[i]}"
done
}' main.conf
But when I add this code to a .sh file and try to run it, I get syntax errors:
$ awk -F"=" 'NR!=1 {
> print $1"\n"
> STR=$2
> FS= read -r -a array <<< "$STR"
> for i in "${!array[@]}"
> do
> echo "$i=>${array[i]}"
> done
>
> }' main.conf
awk: cmd. line:4: FS= read -r -a array <<< "$STR"
awk: cmd. line:4: ^ syntax error
awk: cmd. line:5: for i in "${!array[@]}"
awk: cmd. line:5: ^ syntax error
awk: cmd. line:8: done
awk: cmd. line:8: ^ syntax error
How can I achieve the expected output?
This is the awk code to do what you want:
$ cat tst.awk
BEGIN { FS="[ =:]+"; OFS="=" }
{
print $1
for (i=2;i<NF;i+=2) {
print $i, $(i+1)
}
print ""
}
and this is the shell script (yes, all a shell script does to manipulate text is call awk):
$ awk -f tst.awk file
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
A UNIX shell is an environment from which to call UNIX tools (find, sort, sed, grep, awk, tr, cut, etc.). It has its own language for manipulating (e.g. creating/destroying) files and processes and sequencing calls to tools but it is NOT intended to be used to manipulate text. The guys who invented shell also invented awk for shell to call to manipulate text.
Read https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice and the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
First off, a command that does what you want:
$ sed 's/ = /\n/;y/: /=\n/' main.conf
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
This replaces, on each line, the first (and only) occurrence of = with a newline (the s command), then turns all : into = and all spaces into newlines (the y command). Notice that
this works only because there is a space at the end of the first line (otherwise it would be a bit more involved to get the empty line between the blocks) and
this works only with GNU sed because it substitutes newlines; see this fantastic answer for all the details and how to get it to work with BSD sed.
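If GNU sed is not available, the same transformation can be sketched with awk, which handles newlines in replacements portably; like the sed version it relies on the trailing space at the end of the first input line for the blank separator line:

```shell
#!/usr/bin/env bash
# Stand-in for main.conf (note the trailing space on the first line)
out=$(printf '%s\n' 'ABC = A:3 E:3 PS:6 ' 'PQR = B:5 S:5 AS:2 N:2' |
    awk '{sub(/ = /, "\n"); gsub(/:/, "="); gsub(/ /, "\n"); print}')
printf '%s\n' "$out"
```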
As for what you tried, there is almost too much wrong with it to try and fix it piece by piece: from the wild mixing of awk and Bash to syntax errors all over the place. I recommend you read good tutorials for both, for example:
The BashGuide
Effective AWK Programming
A Bash solution
Here is a way to solve the same in Bash; I didn't use any arrays.
#!/bin/bash
# Read line by line into the 'line' variable. Setting 'IFS' to the empty string
# preserves leading and trailing whitespace; '-r' prevents interpretation of
# backslash escapes
while IFS= read -r line; do
# Three parameter expansions:
# Replace ' = ' by newline (escape backslash)
line="${line/ = /\\n}"
# Replace ':' by '='
line="${line//:/=}"
# Replace spaces by newlines (escape backslash)
line="${line// /\\n}"
# Print the modified input line; '%b' expands backslash escapes
printf "%b" "$line"
done < "$1"
Output:
$ ./SO.sh main.conf
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2

Populating array from file

I've searched Google and haven't found the answer anywhere. I'm writing a function to populate an array from a file:
#!/bin/bash
PusherListV2=()
PusherListV3=()
getArray () {
if ! [ -f $2 ]; then return 2
fi
i=0
while read p; do
PusherList$1[$i]="$p"
((i++))
done <$2
}
getArray V2 /tmp/test.txt
echo "${PusherListV2[@]}"
I'm getting this kind of error:
./test.sh: line 11: PusherListV2[0]=p01: command not found
./test.sh: line 11: PusherListV2[1]=p02: command not found
./test.sh: line 11: PusherListV2[2]=p03: command not found
./test.sh: line 11: PusherListV2[3]=p04: command not found
./test.sh: line 11: PusherListV2[4]=p05: command not found
Could someone please help me?
You can't use variable substitution in assignment to construct the variable name. This doesn't work:
PusherList$1[$i]="$p"
Replace with:
eval PusherList$1[$i]=\"\$p\"
or even just this (as shekhar suman says, quotes are not particularly useful here):
eval PusherList$1[$i]=\$p
As long as you control $1 and $i, this should be a safe use of eval.
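An eval-free alternative (a sketch, with a fabricated stand-in for /tmp/test.txt): since bash 4.1, printf -v accepts a computed array-element name, so the dynamic assignment can be done directly:

```shell
#!/usr/bin/env bash
PusherListV2=()

getArray () { # arguments: suffix file
    [ -f "$2" ] || return 2
    local i=0 p
    while IFS= read -r p; do
        # printf -v can target a run-time-constructed array element,
        # avoiding eval entirely (bash >= 4.1)
        printf -v "PusherList$1[$i]" '%s' "$p"
        i=$((i+1))
    done < "$2"
}

printf '%s\n' p01 p02 p03 > /tmp/pushers.$$   # stand-in input file
getArray V2 /tmp/pushers.$$
echo "${PusherListV2[@]}"
rm -f /tmp/pushers.$$
```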
There is a very simple solution using readarray. Here is the test file:
$ cat file.tmp
first line
second line
third line
Now I read the file and store the lines in an array:
$ readarray mytab < file.tmp
Finally I check the array:
$ declare -p mytab
declare -a mytab='([0]="first line
" [1]="second line
" [2]="third line
")'
As you can see the lines are stored with the \n. Remove them with -t.
Now, to solve your problem, you can pass the array by reference into the function with the nameref attribute (bash 4.3+); no need for eval:
PusherListV2=()
PusherListV3=()
getArray () { # arguments: array file
local -n array="$1" # nameref attribute
local file="$2"
test -f "$file" || return 2
readarray -t array < "$file"
}
getArray PusherListV2 /tmp/test.txt
echo "${PusherListV2[@]}" # always put "" around it when using @
If you still want to pass V2 instead of PusherListV2 you simply have to write
local -n array="PusherList$1" # nameref attribute
in the function.
If I understand correctly, you want a function that takes two arguments:
First argument is a string that will be appended to PusherList to obtain an array name
Second argument is a file name
The function should put each line of the file in the array.
Easy, in Bash≥4:
getArray() {
[[ -f $2 ]] || return 2
# TODO: should also check file is readable
mapfile -t "PusherList$1" < "$2"
# TODO: check that mapfile succeeded
# (it may fail if e.g., PusherList$1 is not a valid variable name)
}
The -t option to mapfile is used so that the trailing newlines are trimmed.
Note. This is very likely going to be the most efficient method.
