calculate the average in bash? [duplicate] - arrays

I have a fasta file with sequences (a text file) like:
file.fasta
>seq_1
AGCTAATACTTGTCCACGTTGTACTTCTTCACGAGAAACACCACGTAATAAAGCACCGAT
GTTATCTCCAGCTTCAGCGTAATCTAATAATTTACGGAACATTTCTACACCTGTAACTGT
AGTTTTAGCTGGCTCTTCAGTTAAACCGATGATTTCAACTTCTTCACCAACTTTAACTTG
TCCACGCTCAACACGTCCAGTTGCAACTGTACCACGACCAGTGATTGAGAATACGTCCTC
AACTGGCATCATGAATGGTTTGTCAGAATCACGTTCTGGAGTTGGGATGTACTCATCAAC
TGCGTTCATTAATTCCATGATTTTTTCTTCGTACTCTTCAACGCCTTCTAATGCTTTTAA
AGCAGATCCAGCGATTACAGGTACATCGTCACCAGGGAAGTCATATTCAGATAATAAGTC
ACGAACTTCC
>seq_2
AGCTAATACTTGTCCACGTTGTACTTCTTCACGAGAAACACCACGTAATAAAGCACCGAT
GTTATCTCCAGCTTCAGCGTAATCTAATAATTTACGGAACATTTCTACACCTGTAACTGT
AGTTTTAGATGGCTCTTCAGTTAAACCGATGATTTCAACTTCTTCACCAACTTTAACTTG
TCCACGCTCAACACGTCCAGTTGCAACTGTACCACGACCAGTGATTGAGAATACGTCCTC
AACTGGCATCATGAATGGTTTGTCAGAATCACGTTCTGGAGTTGGGATGTACTCATCAAC
TGCGTTCATTAATTCCATGATTTTATCTTCGTACTCTTCAACGCCTTCTAATGCTTTTAA
AGCAGATCCAGCGATTACAGGTACATCGTCACCAGGGAAGTCATATTCAGATAATAAGTC
ACGAACTTCC
>seq_3
AGCTAATACTTGTCCACGTTGTACTTCTTCACGAGAAACACCACGTAATAAAGCACCGAT
GTTATCTCCAGCTTCAGCGTAATCTAATAATTTACGGAACATTTCTACACCTGTAACTGT
AGTTTTAGATGGCTCTTCAGTTAAACCGATGATTTCAACTTCTTCACCAACTTTAACTTG
TCCACGCTCAACACGTCCAGTTGCAACTGTACCACGACCAGTGATTGAGAATACGTCCTC
AACTGGCATCATGAATGGTTTGTCAGAATCACGTTCTGGAGTTGGGATGTACTCATCAAC
TGCATTCATTAATTCCATGATTTTATCTTCGTACTCTTCAACGCCTTCTAATGCTTTTAA
AGCAGATCCAGCGATTACAGGTACATCGTCACCAGGGAAGTCATATTCAGATAATAAGTC
ACGAACTTCC
............
>seq_n
AGCAGATCCAGCGATTACAGGTACATCGTCACCAGGGAAGTCATATTCAGATAATAAGTC
..............
So I want to calculate the average length of the sequences while skipping the lines that start with >seq_. My code to obtain the length of each sequence is:
array_length=$(awk '/^>/ {print n $0; n="\n"}; !/^>/ {printf "%s", $0} END {print ""}' My_file.fasta | awk '!/^>/ {print length(), $0}' | sort -n| awk '{print $1}')
Up to here everything is OK; I get the first column, which corresponds to the length of each sequence:
echo "$array_length"
203
207
222
231
232
243
255
258
261
268
279
291
307
316
.....
161581
208146
242398
259601
288468
301866
427209
531340
557978
840257
Well, the lengths in the list can vary; in this case I just show part of them.
My problem is that I want to calculate the average of $array_length (the sum of all the numbers divided by the number of elements).
A second question is how to take the first element of the array and the last one; to do that, I just add a tail -1 or a head -n 1 to the end of the pipeline:
awk '/^>/ {print n $0; n="\n"}; !/^>/ {printf "%s", $0} END {print ""}' My_file.fasta | awk '!/^>/ {print length(), $0}' | sort -n| awk '{print $1}' | tail -1
awk '/^>/ {print n $0; n="\n"}; !/^>/ {printf "%s", $0} END {print ""}' My_file.fasta | awk '!/^>/ {print length(), $0}' | sort -n| awk '{print $1}' | head -n 1
I know that with a file I would do it like:
cat file.txt | tail -1
cat file.txt | head -n 1
But I don't want to run the same pipeline twice to obtain $small_one (203) and $big_one (840257); I just want to take the first and the last element of the variable $array_length shown here. How can I do it?
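One way to answer both questions in a single pass (a sketch, not from the original thread; it reuses the joining awk command from above) is to let awk track the sum, count, minimum and maximum itself, so nothing has to be sorted or run twice:
awk '/^>/ {print n $0; n="\n"}; !/^>/ {printf "%s", $0} END {print ""}' My_file.fasta |
awk '!/^>/ {
    len = length($0)                       # length of one joined sequence
    sum += len; cnt++                      # accumulate for the average
    if (min == "" || len < min) min = len  # smallest so far
    if (len > max) max = len               # largest so far
} END {
    if (cnt) printf "min=%d max=%d avg=%.2f\n", min, max, sum / cnt
}'
And if you already have the sorted list in $array_length, bash itself can pick off the ends once it is a real array (negative indices need bash 4.3+):
mapfile -t lens <<< "$array_length"   # one length per line into a bash array
small_one=${lens[0]}                  # first element (smallest, since the list is sorted)
big_one=${lens[-1]}                   # last element (largest)
average=$(printf '%s\n' "${lens[@]}" | awk '{ s += $1 } END { if (NR) print s / NR }')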

Related

Computing sum of specific field from array entries

I have an array trf, and I would like to compute the sum of the second element in each array entry.
Example of array contents
trf=( "2 13 144" "3 21 256" "5 34 389" )
Here is the current implementation, but I don't find it robust enough. For instance, it fails when each array entry contains an arbitrary number of elements (even if that number is the same from one entry to the next).
cnt=0
m=${#trf[@]}
while (( cnt < m )); do
while read -r one two three
do
sum+="$two"+
done <<< $(echo ${array[$count]})
let count=$count+1
done
sum+=0
result=`echo "$sum" | /usr/bin/bc -l`
You're making it way too complicated. Something like
#!/usr/bin/env bash
trf=( "2 13 144" "3 21 256" "5 34 389" )
declare -i sum=0 # Integer attribute; arithmetic evaluation happens when assigned
for (( n = 0; n < ${#trf[@]}; n++)); do
read -r _ val _ <<<"${trf[n]}"
sum+=$val
done
printf "%d\n" "$sum"
in pure bash, or just use awk (this is handy if you have floating-point numbers in your real data):
printf "%s\n" "${trf[#]}" | awk '{ sum += $2 } END { print sum }'
You can use printf to print the entire array, one entry per line. On such an input, one loop (while read) would be sufficient. You can even skip the loop entirely using cut and tr to build the bc command. The echo 0 is there so that bc can handle empty arrays and the trailing + inserted by tr.
{ printf %s\\n "${trf[@]}" | cut -d' ' -f2 | tr \\n +; echo 0; } | bc -l
For your example this prints 68 (= 13+21+34+0).
Try this printf + awk combo:
$ printf '%s\n' "${trf[@]}" | awk '{print $2}{a+=$2}END{print "sum:", a}'
13
21
34
sum: 68
Oh, it's already suggested by Shawn. Then with a loop:
$ for item in "${trf[@]}"; do
echo $item
done | awk '{print $2}{a+=$2}END{print "sum:", a}'
13
21
34
sum: 68
For relatively small arrays a for/while double loop should be ok re: performance; placing the final sum in the $result variable (as in OP's code):
result=0
for element in "${trf[@]}"
do
while read -r a b c
do
((result+=b))
done <<< "${element}"
done
echo "${result}"
This generates:
68
For larger data sets I'd probably opt for one of the awk-only solutions (for performance reasons).

Find items common between two Bash arrays

I have the shell script below, in which I have two arrays, number1 and number2, and a variable range which holds a list of numbers.
Now I need to figure out which numbers from the number1 array are also present in the range variable, and similarly for the number2 array. Below is my shell script, and it is working fine.
number1=(1220 1374 415 1097 1219 557 401 1230 1363 1116 1109 1244 571 1347 1404)
number2=(411 1101 273 1217 547 1370 286 1224 1362 1091 567 561 1348 1247 1106 304 435 317)
range=90,197,521,540,552,554,562,569:570,573,576,579,583,594,597,601,608:609,611,628,637:638,640:641,644:648
range_f=" "$(eval echo $(echo $range | perl -pe 's/(\d+):(\d+)/{$1..$2}/g;s/,/ /g;'))" "
echo "$range_f"
for item in "${number1[#]}"; do
if [[ $range_f =~ " $item " ]] ; then
new_number1+=($item)
fi
done
echo "new list: ${new_number1[#]}"
for item in "${number2[#]}"; do
if [[ $range_f =~ " $item " ]] ; then
new_number2+=($item)
fi
done
echo "new list: ${new_number2[#]}"
Is there any better way to write the above? As of now I have two for loops iterating to build the new_number1 and new_number2 arrays.
Note:
A number like 644:648 means the range that starts with 644 and ends with 648. It is just a short form.
You can use comm with process substitution instead of looping:
mapfile -t new_number1 < <(comm -12 <(printf '%s\n' "${number1[@]}" | sort) <(printf '%s\n' $range_f | sort))
mapfile -t new_number2 < <(comm -12 <(printf '%s\n' "${number2[@]}" | sort) <(printf '%s\n' $range_f | sort))
mapfile -t name reads from the nested process substitution into the named array
printf ... | sort pair provides the sorted input streams for comm
comm -12 emits the items common to the two streams
Aside from codeforester's answer, I can think of two other ways of doing this:
Load the values of $range as keys of an associative array (the values will just be 1). Then loop through each member of ${number1[@]} and ${number2[@]}, testing each one against the keys of the associative array (see the sketch below).
Use codeforester's printf ... | sort trick, but pipe both the list and the range through sort | uniq -c, then grep for the duplicates.
I'm not sure if either one of these is an actual improvement on your code. ... I would create a 'find duplicates' shell function, but otherwise your code looks solid.
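A minimal sketch of the first approach, reusing the $range_f string already built in the question (the array name in_range is just illustrative):
declare -A in_range=()
for r in $range_f; do                 # unquoted on purpose: range_f is a space-separated list
    in_range[$r]=1
done
new_number1=()
for item in "${number1[@]}"; do
    [[ -n ${in_range[$item]-} ]] && new_number1+=("$item")   # hash lookup instead of a regex match
done
echo "new list: ${new_number1[@]}"
Each membership test is a constant-time hash lookup, which starts to matter once the arrays grow.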

Use bash variable as array in awk and filter input file by comparing with array

I have bash variable like this:
val="abc jkl pqr"
And I have a file that looks something like this:
abc 4 5
abc 8 8
def 43 4
def 7 51
jkl 4 0
mno 32 2
mno 9 2
pqr 12 1
I want to throw away the rows of the file whose first field isn't present in val:
abc 4 5
abc 8 8
jkl 4 0
pqr 12 1
My solution in awk doesn't work at all and I don't have any idea why:
awk -v var="${val}" 'BEGIN{split(var, arr)}$1 in arr{print $0}' file
Just slice the variable into array indexes:
awk -v var="${val}" 'BEGIN{split(var, arr)
for (i in arr)
names[arr[i]]
}
$1 in names' file
As commented in the linked question, when you call split() you get values for the array, while what you want to set are indexes. The trick is to generate another array with this content.
As you can see, $1 in names suffices; you don't have to spell out the action {print $0}, since printing the line is the default.
As a one-liner:
$ awk -v var="${val}" 'BEGIN{split(var, arr); for (i in arr) names[arr[i]]} $1 in names' file
abc 4 5
abc 8 8
jkl 4 0
pqr 12 1
grep -E "$( echo "${val}"| sed 's/ /|/g' )" YourFile
# or
awk -v val="${val}" 'BEGIN{gsub(/ /, "|",val)} $1 ~ val' YourFile
Grep:
It uses a regex (the extended version, via option -E) that keeps all the lines containing one of the values. The regex is built on the fly in a subshell, with a sed that replaces each space separator with a | meaning OR.
Awk:
It uses the same principle as the grep, but everything happens inside awk (so no subshell).
It uses the variable val, assigned from the shell variable of the same name.
At the start of the script (before the first line is read), it changes the spaces in val to | with BEGIN{gsub(/ /, "|",val)}.
Then it prints every line whose first field matches (the default field separator in awk is space/blank, so the first field is the letter group); printing is the default action of the filter $1 ~ val.
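Note that $1 ~ val is an unanchored match: a value like abc would also accept a first field like xabcy (and the grep version matches anywhere on the line, not just in the first field). If exact field matches are needed, the alternation can be anchored; a sketch, assuming the values contain no regex metacharacters:
awk -v val="${val}" 'BEGIN{gsub(/ /, "|", val); val = "^(" val ")$"} $1 ~ val' YourFile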

Bash Indented Output for Multiple Variables

I have a script that loops over every text file in a directory and stores the content in variables. The content can be anywhere from 1 to 50 characters long. The number of text files is unknown. I would like to print the content in such a way that each variable falls into a clean column.
for file in $LIBPATH/*.txt; do
name=$( awk 'FNR == 1 {print $0}' $file )
height=$( awk 'FNR == 2 {print $0}' $file )
weight=$( awk 'FNR == 3 {print $0}' $file )
echo $name $height $weight
done
This code produces the output:
Avril Stewart 99 54
Sally Kinghorn 170 60
John Young 195 120
While the desired output is:
Avril Stewart    99  54
Sally Kinghorn  170  60
John Young      195 120
Thanks!
Use printf:
printf '%-20s %3s %3s\n' "$name" "$height" "$weight"
%3s pads each field to at least three characters; %-20s does the same with 20 characters, and the - in front makes the output left-aligned.
If you want to limit the output to e.g. 20 characters, you can use
printf '%-20.20s %3s %3s\n' "$name" "$height" "$weight"
This will give you a left aligned minimum width of 20 characters and a maximum width of 20 characters, in other words it will ensure that you always have exactly 20 characters.
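Applied to the loop from the question, that might look like this (a sketch that keeps the question's one-awk-call-per-line pattern):
for file in "$LIBPATH"/*.txt; do
    name=$( awk 'FNR == 1 {print $0}' "$file" )     # line 1: name
    height=$( awk 'FNR == 2 {print $0}' "$file" )   # line 2: height
    weight=$( awk 'FNR == 3 {print $0}' "$file" )   # line 3: weight
    printf '%-20.20s %3s %3s\n' "$name" "$height" "$weight"
done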

Getting output of shell command in bash array

I have uniq -c output of about 7-10 lines, giving the count of each unique line pattern. I want to store the output of my uniq -c command into a bash array. Right now all I can do is store the output in a variable and print it; however, bash currently thinks the entire output is just one big string.
How does bash recognize delimiters? How do you store UNIX shell command output as Bash arrays?
Here is my current code:
proVar=`awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c`
echo $proVar
And current output I get:
587 chr1 578 chr2 359 chr3 412 chr4 495 chr5 362 chr6 287 chr7 408 chr8 285 chr9 287 chr10 305 chr11 446 chr12 247 chr13 307 chr14 308 chr15 365 chr16 342 chr17 245 chr18 252 chr19 210 chr20 193 chr21 173 chr22 145 chrX 58 chrY
Here is what I want:
proVar[1] = 2051
proVar[2] = 1243
proVar[3] = 1068
...
proVar[22] = 814
proVar[X] = 72
proVar[Y] = 13
In the long run, I'm hoping to make a barplot based on the counts for each index, where every 50 counts equals one "=" sign. It will hopefully look like the below
chr1 ===========
chr2 ===========
chr3 =======
chr4 =========
...
chrX ==
chrY =
Any help, guys?
To build the associative array, try this:
declare -A proVar
while read -r val key; do
proVar[${key#chr}]=$val
done < <(awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c)
Note: This assumes that your command's output is composed of multiple lines, each containing one key-value pair; the single-line output shown in your question comes from passing $proVar to echo without double quotes.
Uses a while loop to read each output line from a process substitution (<(...)).
The value for each assoc. array entry is the input line's first whitespace-separated token (the count), and the key is formed by stripping the prefix chr from the second token.
To then create the bar plot, use:
while IFS= read -r key; do
echo "chr${key} $(printf '=%.s' $(seq $(( ${proVar[$key]} / 50 ))))"
done < <(printf '%s\n' "${!proVar[@]}" | sort -n)
Note: Using sort -n to sort the keys will put non-numeric keys such as X and Y before numeric ones in the output.
$(( ${proVar[$key]} / 50 )) calculates the number of = chars. to display, using integer division in an arithmetic expansion.
The purpose of $(seq ...) is to simply create as many tokens (arguments) as = chars. should be displayed (the tokens created are numbers, but their content doesn't matter).
printf '=%.s' ... is a trick that effectively prints as many = chars. as there are arguments following the format string.
printf '%s\n' "${!proVar[@]}" | sort -n sorts the keys of the assoc. array numerically, and its output is fed via a process substitution to the while loop, which therefore iterates over the keys in sorted order.
You can create an array in an assignment using parentheses:
proVar=(`awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c`)
There's no built-in way to create an associative array directly from input. For that you'll need an additional loop.
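For line-by-line splitting specifically, mapfile (a.k.a. readarray) fills an indexed array with one element per output line instead of one element per whitespace-separated word; a sketch using the pipeline from the question:
mapfile -t proVar < <(awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c)
echo "${proVar[0]}"   # e.g. "    587 chr1" -- each element holds one whole line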
