How do I collect all the words appearing between curly braces in a text file into an array and replace each braced group with a random word from that array?
cat math.txt
First: {736|172|201|109} {+|-|*|%|/} {21|62|9|1|0}
Second: John had {22|12|15} apples and lost {2|4|3}
I need output like:
First: 172-9
Second: John had 15 apples and lost 4
This is trivial in awk:
$ cat tst.awk
BEGIN { srand() }
{
    for (i=1; i<=NF; i++) {
        if ( match($i,/{[^}]+}/) ) {
            n = split(substr($i,RSTART+1,RLENGTH-2),arr,/\|/)
            idx = int(rand() * n) + 1
            $i = arr[idx]
        }
        printf "%s%s", $i, (i<NF?OFS:ORS)
    }
}
$ awk -f tst.awk file
First: 172 - 9
Second: John had 22 apples and lost 2
$ awk -f tst.awk file
First: 201 - 9
Second: John had 12 apples and lost 2
$ awk -f tst.awk file
First: 201 + 62
Second: John had 12 apples and lost 2
$ awk -f tst.awk file
First: 201 + 1
Second: John had 12 apples and lost 4
Note the math on the line where idx is set: since rand() returns a value in [0,1), int(rand() * n) + 1 yields an integer from 1 to n inclusive, so every alternative can be selected.
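If you want to convince yourself of that, a quick tally of the indices the formula produces (a throwaway check, not part of the solution; n=4 here is arbitrary) should show each index turning up roughly a quarter of the time:
$ awk 'BEGIN{srand(); n=4; for (t=1;t<=100000;t++) c[int(rand()*n)+1]++; for (i=1;i<=n;i++) print i, c[i]}'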
$ awk 'BEGIN{srand()} {for (i=1;i<=NF;i++) {if (substr($i,1,1)=="{") {split(substr($i,2,length($i)-2),a,"|"); j=1+int(rand()*length(a)); $i=a[j]}}; print}' math.txt
First: 172 + 1
Second: John had 12 apples and lost 3
How it works
BEGIN{srand()}
This initializes the random number generator.
for (i=1;i<=NF;i++) {if (substr($i,1,1)=="{") {split(substr($i,2,length($i)-2),a,"|"); j=1+int(rand()*length(a)); $i=a[j]}
This loops through each field. If any field starts with {, then substr is used to remove the first and last characters of the field and the remainder is split with | as the divider into array a. Then, a random index j into array a is chosen. Lastly, the field is replaced with a[j].
print
The line, as revised above, is printed.
The same code as above, but reformatted over multiple lines, is:
awk 'BEGIN{srand()}
{
    for (i=1;i<=NF;i++) {
        if (substr($i,1,1)=="{") {
            split(substr($i,2,length($i)-2),a,"|")
            j=1+int(rand()*length(a))
            $i=a[j]
        }
    }
    print
}' math.txt
Revised Problem with Spaces
Suppose that math.txt now looks like:
$ cat math.txt
First: {736|172|201|109} {+|-|*|%|/} {21|62|9|1|0}
Second: John had {22|12|15} apples and lost {2|4|3}
Third: John had {22 22|12 12|15 15} apples and lost {2 2|4 4|3 3}
The last line has spaces inside the {...}. This changes how awk divides up the fields. For this situation, we can use:
$ awk -F'[{}]' 'BEGIN{srand()} {for (i=2;i<=NF;i+=2) {n=split($i,a,"|"); j=1+int(n*rand()); $i=a[j]}; print}' math.txt
First: 736 + 62
Second: John had 12 apples and lost 3
Third: John had 15 15 apples and lost 2 2
How it works:
-F'[{}]'
This tells awk to use either } or { as field separators.
BEGIN{srand()}
This initializes the random number generator.
{for (i=2;i<=NF;i+=2) {n=split($i,a,"|"); j=1+int(n*rand()); $i=a[j]}
With our new definition for the field separator, the even numbered fields are the ones inside braces. Thus, we split these fields on | and randomly select one piece and assign the field to that piece: $i=a[j].
print
Having modified the line as above, we now print it.
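If it isn't obvious why the even-numbered fields are the brace contents, a quick diagnostic (using the second sample line) prints each field with its number; note the trailing empty field after the final }:
$ echo 'Second: John had {22|12|15} apples and lost {2|4|3}' |
  awk -F'[{}]' '{for (i=1; i<=NF; i++) printf "$%d = [%s]\n", i, $i}'
$1 = [Second: John had ]
$2 = [22|12|15]
$3 = [ apples and lost ]
$4 = [2|4|3]
$5 = []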
You can try this awk:
awk -F'[{}]' 'BEGIN { srand() }
function rand2(n) {
    return 1 + int(rand() * n);
}
{
    for (i=1; i<=NF; i++)
        if (i%2)
            printf "%s", $i;
        else {
            split($i, arr, "|");
            printf "%s", arr[rand2(length(arr))]
        };
    printf ORS
}' math.txt
First: 736 - 62
Second: John had 22 apples and lost 2
More runs of the above may produce:
First: 172 * 9
Second: John had 12 apples and lost 4
First: 109 / 0
Second: John had 15 apples and lost 3
First: 201 % 1
Second: John had 12 apples and lost 3
...
...
Related
I have an array trf and would like to compute the sum of the second element in each array entry.
Example of array contents
trf=( "2 13 144" "3 21 256" "5 34 389" )
Here is my current implementation, but I do not find it robust enough. For instance, it fails when each array entry contains an arbitrary number of elements (though the count is assumed constant from one entry to the next).
cnt=0
m=${#trf[@]}
while (( cnt < m )); do
while read -r one two three
do
sum+="$two"+
done <<< $(echo ${array[$count]})
let count=$count+1
done
sum+=0
result=`echo "$sum" | /usr/bin/bc -l`
You're making it way too complicated. Something like
#!/usr/bin/env bash
trf=( "2 13 144" "3 21 256" "5 34 389" )
declare -i sum=0 # Integer attribute; arithmetic evaluation happens when assigned
for (( n = 0; n < ${#trf[@]}; n++)); do
read -r _ val _ <<<"${trf[n]}"
sum+=$val
done
printf "%d\n" "$sum"
in pure bash, or just use awk (This is handy if you have floating point numbers in your real data):
printf "%s\n" "${trf[#]}" | awk '{ sum += $2 } END { print sum }'
You can use printf to print the entire array, one entry per line. On such an input, one loop (while read) would be sufficient. You can even skip the loop entirely using cut and tr to build the bc command. The echo 0 is there so that bc can handle empty arrays and the trailing + inserted by tr.
{ printf %s\\n "${trf[@]}" | cut -d' ' -f2 | tr \\n +; echo 0; } | bc -l
For your example this prints 68 (= 13+21+34+0).
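If you're curious what bc actually receives, you can run the command group without the final pipe (same trf as above):
$ { printf %s\\n "${trf[@]}" | cut -d' ' -f2 | tr \\n +; echo 0; }
13+21+34+0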
Try this printf + awk combo:
$ printf '%s\n' "${trf[@]}" | awk '{print $2}{a+=$2}END{print "sum:", a}'
13
21
34
sum: 68
Oh, it's already been suggested by Shawn. Then with a loop:
$ for item in "${trf[@]}"; do
echo $item
done | awk '{print $2}{a+=$2}END{print "sum:", a}'
13
21
34
sum: 68
For relatively small arrays a for/while double loop should be fine performance-wise; this places the final sum in the $result variable (as in the OP's code):
result=0
for element in "${trf[@]}"
do
while read -r a b c
do
((result+=b))
done <<< "${element}"
done
echo "${result}"
This generates:
68
For larger data sets I'd probably opt for one of the awk-only solutions (for performance reasons).
Let's say we have a shell variable $x containing a space separated list of numbers from 1 to 30:
$ x=$(for i in {1..30}; do echo -n "$i "; done)
$ echo $x
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
We can print the first three input record fields with AWK like this:
$ echo $x | awk '{print $1 " " $2 " " $3}'
1 2 3
How can we print all the fields starting from the Nth field with AWK? E.g.
4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
EDIT: I can use cut, sed etc. to do the same but in this case I'd like to know how to do this with AWK.
Converting my comment to an answer so that the solution is easy to find for future visitors.
You may use this awk:
awk '{for (i=3; i<=NF; ++i) printf "%s", $i (i<NF?OFS:ORS)}' file
or pass start position as argument:
awk -v n=3 '{for (i=n; i<=NF; ++i) printf "%s", $i (i<NF?OFS:ORS)}' file
Version 4: The shortest is probably to use sub to cut off the first three fields and their separators:
$ echo $x | awk 'sub(/^ *([^ ]+ +){3}/,"")'
Output:
4 5 6 7 8 9 ...
This will, however, preserve all space after $4:
$ echo "1 2 3 4 5" | awk 'sub(/^ *([^ ]+ +){3}/,"")'
4 5
so if you wanted the space squeezed, you'd need to, for example:
$ echo "1 2 3 4 5" | awk 'sub(/^ *([^ ]+ +){3}/,"") && $1=$1'
4 5
with the exception that if there are only 4 fields and the 4th field happens to be a 0:
$ echo "1 2 3 0" | awk 'sub(/^ *([^ ]+ +){3}/,"")&&$1=$1'
$ [no output]
in which case you'd need to:
$ echo "1 2 3 0" | awk 'sub(/^ *([^ ]+ +){3}/,"") && ($1=$1) || 1'
0
Version 1: cut is better suited for the job:
$ cut -d\ -f 4- <<<$x
Version 2: Using awk you could:
$ echo -n $x | awk -v RS=\ -v ORS=\ 'NR>=4;END{printf "\n"}'
Version 3: If you want to preserve those varying amounts of space, using GNU awk you could use split's fourth parameter seps:
$ echo "1 2 3 4 5 6 7" |
gawk '{
n=split($0,a,FS,seps) # actual separators go into seps
for(i=4;i<=n;i++) # loop from 4th
printf "%s%s",a[i],(i==n?RS:seps[i]) # get fields from arrays
}'
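For instance, feeding it input with uneven spacing shows the original separators being kept (same idea as above, shorter input, assuming GNU awk):
$ echo "1 2  3   4  5" | gawk '{n=split($0,a,FS,seps); for(i=4;i<=n;i++) printf "%s%s",a[i],(i==n?RS:seps[i])}'
4  5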
Adding one more approach: collect all the values into a variable and, once all fields have been read, print that variable. Change the value of n= to the field from which onwards you want the data.
echo "$x" |
awk -v n=3 '{val="";for(i=n; i<=NF; i++){val=(val?val OFS:"")$i};print val}'
With GNU awk, you can use the join function which has been a built-in include since gawk 4.1:
x=$(seq 30 | tr '\n' ' ')
echo "$x" | gawk '#include "join"
{split($0, arr)
print join(arr, 4, length(arr), "|")}
'
4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30
(Shown here with a '|' instead of a ' ' for clarity...)
Alternative way of including join:
echo "$x" | gawk -i join '{split($0, arr); print join(arr, 4, length(arr), "|")}'
Using gnu awk and gensub:
echo $x | awk '{ print gensub(/^([[:digit:]]+[[:space:]]){3}(.*$)/,"\\2",1,$0) }'
Using gensub, the string is split into two sections by the regular expression and only the second section (everything after the first three fields) is printed.
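For a quick check on a shorter input (same command, fewer fields):
$ echo "1 2 3 4 5" | awk '{ print gensub(/^([[:digit:]]+[[:space:]]){3}(.*$)/,"\\2",1,$0) }'
4 5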
Input:
1234-A1;1235-A2;2345-B1;5678-C2;2346-D5
Expected Output:
1234
1235
2345
5678
2346
The input shown is user input. I want to store it in an array and do some operations on it to display it as shown in 'Expected Output'. I have done this in Perl, but want to achieve it in a shell script. Please help in achieving this.
To split input text into an array you can follow this technique:
IFS="[;-]" read -r -a arr <<< "1234-A1;1235-A2;2345-B1;5678-C2;2346-D5"
printf '%s\n' "${arr[@]}"
1234
A1
1235
A2
2345
B1
5678
C2
2346
D5
If you want to keep only 1234, 1235, etc. as per your expected output, you can either use the corresponding array elements (0, 2, 4, etc.) or do something like this:
a="1234-A1;1235-A2;2345-B1;5678-C2;2346-D5"
IFS="[;]" read -r -a arr <<< "${a//-[A-Z][0-9]/}" #or more generally <<< "${a//-??/}"
declare -p arr #This asks bash to print the array for us
#Output
declare -a arr='([0]="1234" [1]="1235" [2]="2345" [3]="5678" [4]="2346")'
# Array can now be printed or used elsewhere in your script. Array counting starts from zero
@Yash: try:
echo "1234-A1;1235-A2;2345-B1;5678-C2;2346-D5" | awk '{gsub(/-[[:alnum:]]+/,"");gsub(/;/,RS);print}'
This substitutes each dash and the alphanumeric characters that follow it with the empty string, then substitutes every semicolon with RS (the record separator), which is a newline by default.
Thanks @George and @Vipin.
Based on your inputs, the solution which best suits my environment is as follows:
i=0
a="1234-A1;1235-A2;2345-B1;5678-C2;2346-D5"
IFS="[;]" read -r -a arr <<< "${a//-??/}"
#declare -p arr
for var in "${arr[@]}"
do
echo " var $((i++)) is : $var"
done
Output:
var 0 is : 1234
var 1 is : 1235
var 2 is : 2345
var 3 is : 5678
var 4 is : 2346
Try this -
awk -F'[-;]' '{for(i=1;i<=NF;i++) if(i%2!=0) {print $i}}' f
1234
1235
2345
5678
2346
OR
echo "1234-A1;1235-A2;2345-B1;5678-C2;2346-D5"|tr ';' '\n'|cut -d'-' -f1
OR
As @George Vasiliou suggested -
awk -F'[-;]' '{for(i=1;i<=NF;i+=2) {print $i}}' f
If the data needs to be stored in an array and you are using gawk, try the below -
awk -F'[;-]' -v k=1 '{for(i=1;i<=NF;i++) if($i !~ /[[:alpha:]]/) {a[k++]=$i}} END {
    PROCINFO["sorted_in"] = "@ind_str_asc"
    for(k in a) print k,a[k]}' f
1 1234
2 1235
3 2345
4 5678
5 2346
PROCINFO["sorted_in"] = "#ind_str_asc" used to print the data in
sorted order.
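If your awk is not gawk, PROCINFO["sorted_in"] is not available. Since the indices are assigned sequentially anyway, a plain numeric loop over them produces the same ordered output (a minor variation, not from the original answer):
awk -F'[;-]' '{for(i=1;i<=NF;i++) if($i !~ /[[:alpha:]]/) a[++k]=$i} END{for(j=1;j<=k;j++) print j, a[j]}' f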
I am using a 2D array to count the number of occurrences of certain patterns. For instance:
$4 == "Water" {s[$5]["w"]++}
$4 == "Fire" {s[$5]["f"]++}
$4 == "Air" {s[$5]["a"]++}
where $5 can be attack1, attack2 or attack3. In the END { } block I print out these values. However, some of these combinations never occur, so where a count such as s["attack1"]["a"] was never incremented my code prints whitespace. Hence I would like to know whether there is a way to initialize the array in one line in the BEGIN { } block, instead of initializing each of the elements I need.
awk -f script.awk data
This is the command I am using to run my script. I am not allowed to use any other flags.
EDIT 1:
Here's the current output
Water Air Fire
attack1 554 12
attack2 14 24
attack3 6 3
Here's the output I desire:
Water Air Fire
attack1 554 0 12
attack2 14 24 0
attack3 6 0 3
You don't need to initialise the array in this case. An unset awk array element defaults to the empty string, which evaluates to 0 in numeric context, so you just have to change the way you print the value.
Observe:
awk 'BEGIN {print "Blank:", a[1];
print "Zero: ", a[1] + 0;
printf("Blank: %s\n", a[1]);
printf("Zero: %i\n", a[1])}'
Output:
Blank:
Zero: 0
Blank:
Zero: 0
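Applied to the original problem, the END block can force numeric output with +0 for every cell. This is only a sketch: it assumes the single-letter keys from the question and relies on gawk's true multidimensional arrays; the for-in order is arbitrary, so loop over a fixed list of attack names if you need a stable row order.
END {
    print "Water", "Air", "Fire"
    for (atk in s)
        print atk, s[atk]["w"]+0, s[atk]["a"]+0, s[atk]["f"]+0
}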
I have a file with two columns (1.4 million rows) that looks like:
CLM MXL
0 0
0 1
1 1
1 1
0 0
29 42
0 0
30 15
I would like to count the instances of each possible combination of values; for example, if there are x lines where column CLM equals 0 and column MXL equals 1, I would like to print:
0 1 x
Since the maximum value of column CLM is 188 and the maximum value of column MXL is 128, I am trying to use a nested for loop in awk that looks something like:
awk '{for (i=0; i<=188; i++) {for (j=0; j<=128; j++) {if($9==i && $10==j) {print$0}}}}' 1000Genomes.ALL.new.txt > test
But this only prints out the original file, which makes sense. I just don't know how to write a loop that either prints one file per combination of values (which I could then wc) or prints a single file with the count of each combination. Any solution in awk, a bash script, or a perl script would be great.
1. A Pure awk Solution
$ awk 'NR>1{c[$0]++} END{for (k in c)print k,c[k]}' file | sort -n
0 0 3
0 1 1
1 1 2
29 42 1
30 15 1
How it works
The code uses a single variable c. c is an associative array whose keys are lines in the file and whose values are the number of occurrences.
NR>1{c[$0]++}
For every line except the first (which has the headings), this increments the count for the combination in that line.
END{for (k in c)print k,c[k]}
This prints out the final counts.
sort -n
This is just for aesthetics: it puts the output lines in a predictable order.
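sort -n orders the lines by their leading number and falls back to a plain byte comparison for ties; if you prefer to be explicit about sorting both columns numerically, the same pipeline can use keyed sorting (identical output for this data):
$ awk 'NR>1{c[$0]++} END{for (k in c)print k,c[k]}' file | sort -k1,1n -k2,2n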
2. Alternative using uniq -c
$ tail -n+2 file | sort -n | uniq -c | awk '{print $2,$3,$1}'
0 0 3
0 1 1
1 1 2
29 42 1
30 15 1
How it works
tail -n+2 file
This prints all but the first line of the file. The purpose of this is to remove the column headings.
sort -n | uniq -c
This sorts the lines and then counts the duplicates.
awk '{print $2,$3,$1}'
uniq -c puts the counts first and you wanted the counts to be the last on the line. This just rearranges the columns to the format that you wanted.