How to multiply small numbers in Bash - arrays

I want to multiply all entries in an array by numbers like 3.17 * 10^-7, but Bash can't do that. I tried with awk and bc, but it doesn't work. I would be obliged if someone could help me.
Input data example (about 4000 data files overall):
TecN210500-0100.plt
TecN210500-0200.plt
TecN210500-0300.plt
TecN210500-0400.plt
......
Here is my code:
#!/bin/bash
ZS=($(find . -name "*.plt"))
i=1
Variable=$(awk "BEGIN{print 10 ** -7}")
Solutiontime=$(awk "BEGIN{print 3.17 * $Variable}")
for Dataname in ${ZS[@]}
do
Cut=${Dataname:13}
Timesteps=${Cut:0:${#Cut}-4}
Array[i]=$Timesteps
i=$((i++))
p=$((i++))
done
Amount=$p
for ((i=1;i<10;i++))
do
Array[i]=${i}00
done
for (($i=1;i<$Amount+1;i++))
do
Array[i]=$(awk "BEGIN{print ${Array[i]} * $Solutiontime}")
done
Array[0]=Solutiontime
First loop:
extracts e.g. the "0100".
Second loop:
"deletes" the leading zero -> e.g. "100".
Last loop:
multiplies by the time step -> e.g. "100 * 3.17*10^-7".

Do a little parameter expansion trimming on the filename, and then let awk do the math for you.
#!/bin/bash
for f in *.plt; do
num=${f##*-} # remove everything up to and including the final -
num=${num%.*} # remove everything from the last . onward
num=${num#0} # remove a single leading zero
awk "BEGIN {print $num * 3.17 * 10**-7}"
done
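Assuming the four sample filenames are in the current directory, this would print (with awk's default %.6g number formatting):
3.17e-05
6.34e-05
9.51e-05
0.0001268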
Or, done entirely with awk:
#!/bin/bash
for f in *.plt; do
awk -v f="$f" 'BEGIN {gsub(/^TecN[[:digit:]]+-0?|\.plt$/, "", f); print f * 3.17 * 10**-7}'
done

awk to the rescue!
awk 'BEGIN{print 3.17 * 10^-7 }'
3.17e-07
iteration 1
awk -F'[-.]' '{printf "%s %e\n",substr($1,5),$2*3.17*10^-7}' file
210500 3.170000e-05
210500 6.340000e-05
210500 9.510000e-05
210500 1.268000e-04
for the posted file names used as input.
iteration 2
If you need just the computed numbers, simply drop the first field:
awk -F'[-.]' '{printf "%e\n",$2*3.17*10^-7}' file
3.170000e-05
6.340000e-05
9.510000e-05
1.268000e-04
This will be the output of the script. I strongly suggest moving whatever logic you have into the awk script rather than working at the shell level with the array.
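For example, a minimal single-pass sketch, assuming the TecN...-0100.plt names from the question and no extra '-' or '.' in the paths:
find . -name '*.plt' | awk -F'[-.]' '{ printf "%e\n", $(NF-1) * 3.17 * 10^-7 }'
Here $(NF-1) is the "0100" part, which awk converts to the number 100 before multiplying.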

Related

How to implement awk using loop variables for the row?

I have a file with n rows and 4 columns, and I want to read the content of the 2nd and 3rd columns, row by row. I made this
awk 'NR == 2 {print $2" "$3}' coords.txt
which works for the second row, for example. However, I'd like to include that code inside a loop, so I can go row by row of coords.txt, instead of NR == 2 I'd like to use something like NR == i while going over different values of i.
I'll try to be clearer. I don't want to extract the 2nd and 3rd columns of coords.txt. I want to use every element independently. For example, I'd like to be able to implement the following code
for (i=1; i<=20; i+=1)
awk 'NR == i {print $2" "$3}' coords.txt > auxfile
func(auxfile)
end
where func represents anything I want to do with the value of the 2nd and 3rd columns of each row.
I'm using SPP, which is a mix between FORTRAN and C.
How could I do this? Thank you
It is of course inefficient to invoke awk 20 times. You'd want to push the logic into awk so you only need to parse the file once.
However, one method to pass a shell variable to awk is with the -v option:
for ((i=1; i<20; i+=2)) # for example
do
awk -v line="$i" 'NR == line {print $2, $3}' file
done
Here i is the shell variable, and line is the awk variable.
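If the per-row work can be expressed in awk itself, a single-pass sketch (the print stands in for whatever func does with the two values):
awk 'NR <= 20 { print $2, $3 }' coords.txt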
Something like this should work; there is no shell loop needed.
awk 'BEGIN {f="aux.aux"}
NR<21 {close(f); print $2,$3 > f; system("./mycmd2 "f)}' file
This will call the command with the temp filename for the first 20 lines; the file is overwritten at each call. Of course, if your function takes arguments or input from stdin instead of a file name, there are easier solutions.
Here ./mycmd2 is an executable which takes a filename as an argument. Not sure how you call your function but this is generic enough...
Note also that there is no error handling for the external calls.
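For instance, if your command reads the pair from stdin, a sketch that avoids the temp file (reusing the hypothetical ./mycmd2):
awk 'NR < 21 { print $2, $3 | "./mycmd2"; close("./mycmd2") }' file
Closing the pipe after every line makes awk start a fresh ./mycmd2 for the next pair.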
The hideous system()-only way in awk would be something like
system("printf \047%s\\n\047 \047" $2 "\047 \047" $3 "\047 | func \047/dev/stdin\047; ");
If the func() OP mentioned can be called directly by GNU parallel or xargs, and can take the values of $2 + $3 as its $1 $2, then OP can even make it all multi-threaded, like
{mawk/mawk2/gawk} 'BEGIN { OFS=ORS="\0"; } { print $2, $3; } (NR==20) { exit }' file \
| parallel -0 -N 2 -j 3 func   # or: | xargs -0 -n 2 -P 3 func

Load field 1 and print at the END{} - awk equivalent in Perl

I have the following AWK script that counts occurrences of elements in field 1 and, when it finishes reading the entire file, prints each element and its number of repetitions.
awk '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' file
I'm very new to Perl and I don't know what the equivalent would be. What I have so far is below, but it has incorrect syntax. Thanks in advance.
perl -lane '$a{$F[1]}++ END{foreach $a {print $a} }' file
UPDATE
Hi, thanks both for your answers. The real input file has 34 million lines, and the awk version runs 3 or more times faster than the Perl ones. Is awk faster than Perl?
awk '{a[$1]++}END{for(i in a){print i"-->"a[i]}}' file #--> 2:45 approx.
perl -lane '$a{$F[0]}++;END{foreach my $k (keys %a){ print "$k --> $a{$k}" } }' file #--> 7 min approx.
perl -lanE'$a{$F[0]}++; END { say "$_ => $a{$_}" for keys %a }' file # --> 9 min approx.
Okay, Ger, one more time :-)
I upgraded my Perl to the latest version available to me and made a file like what you described (34.5 million lines each having a 16 digit integer in the 1st and only column):
schumack@linux2 52> wc -l listbig
34521909 listbig
schumack@linux2 53> head -3 listbig
1111111111111111
3333333333333333
4444444444444444
I then ran a specialized Perl line (works for this file but is not the same as the awk line). As before I timed the runs using /usr/bin/time:
schumack@linux2 54> /usr/bin/time -f '%E %P' /usr/local/bin/perl -lne 'chomp; $a{$_}++; END{foreach $i (keys %a){print "$i-->$a{$i}"}}' listbig
5555555555555555-->4547796
1111111111111111-->9715747
9999999999999999-->826872
3333333333333333-->9922465
1212121212121212-->826872
4444444444444444-->5374669
2222222222222222-->1653744
8888888888888888-->826872
7777777777777777-->826872
0:12.20 99%
schumack@linux2 55> /usr/bin/time -f '%E %P' awk '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' listbig
1111111111111111-->9715747
2222222222222222-->1653744
3333333333333333-->9922465
4444444444444444-->5374669
5555555555555555-->4547796
1212121212121212-->826872
7777777777777777-->826872
8888888888888888-->826872
9999999999999999-->826872
0:12.61 99%
Both perl and awk ran very fast on the 34.5 million line file and were within a half second of each other.
Curious as to what type of machine / OS / Perl version you are currently using. I tested on an ASUS laptop that is about 4 years old and has an Intel i7. I am using Ubuntu 16.04 and Perl v5.26.1.
Anyways, thanks for the reason to play around with Perl!
Have fun,
Ken
Input file would obviously make a difference, but Perl 5.22.1 was slightly faster than Awk 4.1.3 below on my 33.5 million line test file (12.23 vs 12.52 seconds).
schumack@daddyo2 10-02T18:25:17 54> wc -l listbig
33521910 listbig
schumack@daddyo2 10-02T18:25:58 55> /usr/bin/time -f '%E %P' awk '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' listbig
1-->9434310
2-->1605840
3-->9635040
4-->5218980
5-->4416060
7-->802920
8-->802920
9-->802920
12-->802920
0:12.52 99%
schumack@daddyo2 10-02T18:26:17 56> /usr/bin/time -f '%E %P' perl -lne '$_=~s/^(\S+) .*/$1/; $a{$_}++; END{foreach $i (keys %a){print "$i-->$a{$i}"}}' listbig
1-->9434310
5-->4416060
2-->1605840
3-->9635040
12-->802920
8-->802920
9-->802920
4-->5218980
7-->802920
0:12.23 99%
Equivalent to your awk line
perl -lanE'$a{$F[0]}++; END { say "$_ => $a{$_}" for keys %a }' file
With -a the line is broken into fields in @F, so you want $F[0] as a key in a hash %a, with the value (the counter) handled by ++. The hash is iterated over its keys and printed in the END block.
However, the question of efficiency comes up. One way to improve this is not to fetch all fields on the line (which -a does), since only the first one is needed. Between the two ways that come to mind,
perl -nE'$a{(/(\S+)/)[0]}++; END { ... }'
and
perl -nE'$a{(split " ", $_, 2)[0]}++; END { ... }'
the split is significantly faster, at 3.63s vs 4.41s for the regex, on an 8M-line file.
This is still behind 1.99s for your awk line. So it seems that awk is faster for this task.
Summary of my timings for an 8-million line file (average of a few runs)
awk (question) 1.99s
perl (split) 3.63s
perl (regex) 4.41s
perl (like awk) 5.61s
These timings vary over runs by a few tens of milliseconds (a few 0.01s).
This destructive method is the fastest I came up with:
perl -lne '$_=~s/\s.*//; $a{$_}++; END{foreach $i (keys %a){print "$i-->$a{$i}"}}' file
However, it still is not quite as fast as awk.
You can go through a2p, the awk-to-Perl translator.
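For example, a sketch of feeding the awk program to a2p on stdin (a2p shipped with core Perl up to 5.20; newer Perls have it on CPAN as App::a2p):
echo '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' | a2p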
$ cat file
1
1
2
3
3
3
$ perl -lane '$a{$F[0]}++;END{foreach my $k (keys %a){ print "$k --> $a{$k}" } }' file
1 --> 2
2 --> 1
3 --> 3
$ awk '{a[$1]++} END{ for(i in a){print i" --> "a[i]} }' file
1 --> 2
2 --> 1
3 --> 3

Shell - Looping Array with command and increment command values

var1=$(echo $getDate | awk '{print $1} {print $2}')
var2=$(echo $getDate | awk '{print $3} {print $4}')
var3=$(echo $getDate | awk '{print $5} {print $6}')
Instead of repeating like the code above, I need to:
loop the same command
increment the values ({print $1} {print $2})
store the value in an array
I was doing something like below, but I am stuck; maybe someone can help me, please:
COMMAND=`find $locationA -type f | wc -l`
getDate=$(find $locationA -type f | xargs ls -lrt | awk '{print $6} {print $7}')
a=1
b=2
for i in $COMMAND
do
i=$(echo $getDate | awk '{print $a} {print $b}')
myarray+=('$i')
a=$((a+1))
b=$((b+1))
done
PS - using ksh
Problem: $COMMAND stores the number of files found in $locationA. I need to loop through the number of files found and store their dates in an array.
I don't get the meaning of your example code (what is the 'for' loop supposed to do? What is the content of the variable COMMAND?), but in your question you ask to store something in an array, while in the code you wish to simplify you don't use an array, but simple variables (var1, var2, ...).
If I understand your requirement correctly, your variable getDate contains a string of several words, which are separated by spaces, and you want to assign the first two words to var1, the following two words to var2, and so on. Is this correct?
Now the edited code is at least a bit clearer, though I still don't understand why you use i as a loop variable and overwrite it in the first statement inside the loop.
However, a few comments:
If you push '$i' into your array, you will get a literal '$' sign followed by the letter 'i'. To add a variable i containing two numbers, you need double quotes ("$i"); see the small demo after these comments.
I don't understand why you want to loop over the content of the variable COMMAND. This variable will always hold a single number, which means that the loop will be executed exactly once.
You could use a counting loop, incrementing the loop variable by 2 on each iteration. You would have to precalculate the number of iterations beforehand.
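A tiny demo of the quoting point:
i="12 34"
myarray=()
myarray+=('$i')   # appends the literal two characters: $i
myarray+=("$i")   # appends the value: 12 34
printf '%s\n' "${myarray[@]}"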
Perhaps an easier alternative, which works in bash or in zsh (I did not try other shells), is to first turn your variable into an array,
tmparr=($(echo $getDate|fmt -w 1))
and then use a loop to collect pairs of these elements:
myarray=()
for ((i=0; i<${#tmparr[*]}; i+=2))
do
myarray+=("${tmparr[$i]} ${tmparr[$((i+1))]}")
done
${myarray[0]} will hold a string consisting of the first two words from getDate, etc.
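For instance, to check the result:
for pair in "${myarray[@]}"
do
echo "$pair"
done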
This one should work on zsh, at least with newer versions:
myarray=()
echo $g|fmt -w 1|paste -s -d " \n"|while read s; do myarray+=("$s"); done
This leaves the first pair in ${myarray[1]}, etc.
It doesn't work with bash (and old zsh versions), because these shells would execute the body of the loop in a subshell.
ADDED:
On a second thought, in zsh this one would be simpler:
myarray=("${(f)$(echo $g|fmt -w 1|paste -s -d ' \n')}")

Bash sum of array

I'm writing a script in Bash and I have a problem summing the elements of an array. I add the results of du for two paths to an array. In the end I want the sum of the array's elements.
use=()
i=0
for d in '$PATH1' '$PATH2'
do
usagebck=$(du $d | awk '{print awk $1}')
use[i]=$usagebck
sum=0
for j in $use
do
sum=$($sum + ${use[$i]})
done
i=$((i+1))
done
echo ${use[*]}
If your du has option -s:
use=()
sum=0
for d in "$PATH1" "$PATH2"
do
usagebck="$(du -s "$d" | awk 'END{print $1}')"
use+=($usagebck)
((sum+=$usagebck))
done
echo ${use[*]}
echo $sum
First, take a look at the parameters of du. On BSD-based systems, there's -c, which will give you a grand total. On GNU and BSD, there's the -a parameter, which will report on all files in a directory.
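For example, a minimal sketch combining -c and -s (assuming your du supports both):
sum=$(du -cs "$PATH1" "$PATH2" | awk 'END{print $1}')   # the grand-total line comes last
echo $sum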
Since you're already using awk, why not do everything in awk?
$ du -ms $PATH1 $PATH2 |
awk 'BEGIN {sum = 0}
END {print "Total: " sum }
{
sum+=$1
print $0
}'
du -ms specifies that I want one summary line per path given (-s), with sizes in megabytes (-m)
BEGIN is executed before the main awk program. Here I'm initializing sum. This isn't necessary because variables are assumed to equal zero when created.
END is executed after the main awk program. Here, I'm specifying that I want sum printed.
Between the { ... } is the main Awk program. Two lines. The first line adds Column 1 (the size of the file) to sum. The second line prints out the entire line.

Assigning a command's output to a shell variable and getting the variable's size

I have a file consisting of digits. Usually, each line contains one single number. I would like to count the number of lines in the file that begin with digit '0'. If it's the case, then I would like to do some post-processing.
Although I'm able to retrieve correctly the corresponding line numbers, the total number of retrieved lines is not correct. Below, I'm posting the code that I'm using.
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
# linesToRemove=$(grep -n "^0" ${inputFile} | cut -d":" -f1);
linesNr=${#linesToRemove} # <- here, the error
# linesNr=${#linesToRemove[@]} # <- here, the error
if [ "${linesNr}" -gt "0" ]; then
# do something here, e.g. remove corresponding lines.
awk -v n=$linesToRemove 'NR == n {next} {print}' ${anotherFile} > ${outputFile}
fi
Also, as for the awk-based command, how could I use a shell variable? I tried the command below, but it's not working correctly, since 'myIndex' is interpreted as text and not as a variable.
linesToRemove=$(awk -v myIndex="$myIndex" '/^myIndex/ { print NR;}' ${inputFile});
Given the line numbers starting with 0 found in ${inputFile}, I would like to remove the corresponding lines numbers from ${anotherFile}. An example for both ${inputFile} and ${anotherFile} is given below:
// ${inputFile}
0
1
3
0
// ${anotherFile}
2.617300e+01 5.886700e+01 -1.894697e-01 1.251225e+02
5.707397e+01 2.214040e+02 8.607959e-02 1.229114e+02
1.725900e+01 1.734360e+02 -1.298053e-01 1.250318e+02
2.177940e+01 1.249531e+02 1.538853e-01 1.527150e+02
// ${outputFile}
5.707397e+01 2.214040e+02 8.607959e-02 1.229114e+02
1.725900e+01 1.734360e+02 -1.298053e-01 1.250318e+02
In the example above, I need to delete lines 0 and 3 from ${anotherFile}, given that those lines correspond to the lines starting with 0 in ${inputFile}.
If you want to count the number of lines in the file that begin with 0, then this line is wrong.
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
The above says to print the line number when the line starts with 0, so your linesToRemove variable will contain all the line numbers, not the total number of lines. Use an END{} block to capture the total, e.g.
linesToRemove=$(awk '/^0/ {c++}END{print c}' ${inputFile});
As for your 2nd question on using a variable inside awk, use the regex match operator ~, and set your myIndex variable to include the ^ anchor:
linesToRemove=$(awk -v myIndex="^$myIndex" '$0 ~ myIndex{ print NR;}' ${inputFile});
Finally, if you just want to remove the lines that start with 0, then simply remove them:
awk '/^0/{next}{print $0>FILENAME}' file
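Note that print $0>FILENAME rewrites the file awk is still reading, which only works thanks to input buffering; a safer sketch goes through a temp file:
awk '!/^0/' file > file.tmp && mv file.tmp file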
If you want to remove lines from another file using what is captured in input file, here's one way
paste -d"|" inputfile anotherfile | awk '!/^0/{gsub(/^.*\|/,"");print}'
Or just one awk command
awk 'FNR==NR && /^0/{a[FNR]} NR>FNR && (!(FNR in a))' inputfile anotherfile
Crude explanation: FNR==NR && /^0/ means: while processing the first file, if the line starts with 0, put its line number into array a. NR>FNR means: while processing the next file, if the line number is not in the array, print the line. See the gawk documentation for what FNR, NR etc. mean.
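The same logic spread out for readability (a sketch; behavior unchanged):
awk '
FNR==NR { if (/^0/) a[FNR]; next }   # first file: remember line numbers starting with 0
!(FNR in a)                          # second file: print lines whose number was not remembered
' inputfile anotherfile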
I think you have to do the following to assign an array:
linesToRemove=( $(awk '/^0/ { print NR; }' ${inputFile}) )
And to get the number of elements do (as you have in a commented line):
linesNr=${#linesToRemove[@]}
To remove the lines from from the file you could do something like:
sedCmd=""
for lineNr in ${linesToRemove[@]}; do
sedCmd="$sedCmd;${lineNr}d"
done
sed "$sedCmd" ${anotherFile} > ${outputFile}
In general, instead of this:
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
linesNr=${#linesToRemove}
use this:
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
linesNr=$(echo $linesToRemove|awk '{print NF}')
The first form gives the length of the string; the second counts the whitespace-separated line numbers.
POC :
cat temp.sh
#!/usr/bin/ksh
lines=$(awk '/^d/{print NR}' script.sh)
nooflines=$(echo $lines|awk '{print NF}')
echo $nooflines
torinoco!DBL:/oo_dgfqausr/test/dfqwrk12/vijay> temp.sh
8
torinoco!DBL:/oo_dgfqausr/test/dfqwrk12/vijay>
It greatly depends on the post-processing you are doing, but do you really need the actual count? Why not do something like this:
if grep ^0 $inputfile > /dev/null; then
# There is at least one line with a leading 0
:
fi
grep -v ^0 $inputfile | process-lines-without-leading-zero
grep ^0 $inputfile | process-lines-with-leading-zero
Or, even just:
if grep ^0 $inputfile | process-lines-with-leading-zero; then
# some post processing
:
fi
--EDIT--
Based on what you've said in your comment, I would recommend a different approach. If I understand you correctly, you want to read file a, looking for lines of the form ^0[0-9]*,
and then remove those line numbers from file b. Doing it one line at a time is pretty slow if the files get big. Just do:
cmd=$( grep '^0[0-9]*$' a | sed 's/$/d;/g' )
sed "$cmd" b
The assignment to cmd forms a sed command to delete the lines. Invoking sed on b will omit those lines. You'll need to redirect the sed output appropriately (perhaps to a temp file and then back to b, or just use 'sed -i' if you're using gnu sed.)
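For example, with GNU sed the same command can edit b in place (a sketch):
cmd=$( grep '^0[0-9]*$' a | sed 's/$/d;/g' )
sed -i "$cmd" b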
Given the large number of edits to this question, it seems easiest to start a new answer. Your problem can be solved with a simple one-liner:
$ sed "$( grep -n ^0 $inputFile | sed 's/:.*/d;/g' )" $anotherFile > $outputFile
