nested for loops in awk to count number of fields matching values

I have a file with two columns (1.4 million rows) that looks like:
CLM MXL
0 0
0 1
1 1
1 1
0 0
29 42
0 0
30 15
I would like to count the instances of each possible combination of values; for example, if there are x lines where column CLM equals 0 and column MXL equals 1, I would like to print:
0 1 x
Since the maximum value of column CLM is 188 and the maximum value of column MXL is 128, I am trying to use a nested for loop in awk that looks something like:
awk '{for (i=0; i<=188; i++) {for (j=0; j<=128; j++) {if($9==i && $10==j) {print$0}}}}' 1000Genomes.ALL.new.txt > test
But this only prints out the original file, which makes sense. I just don't know how to correctly write a for loop that either prints one file per combination of values (which I could then wc) or prints a single file with the count of each combination. Any solution in awk, a bash script, or a perl script would be great.

1. A Pure awk Solution
$ awk 'NR>1{c[$0]++} END{for (k in c)print k,c[k]}' file | sort -n
0 0 3
0 1 1
1 1 2
29 42 1
30 15 1
How it works
The code uses a single variable c. c is an associative array whose keys are lines in the file and whose values are the number of occurrences.
NR>1{c[$0]++}
For every line except the first (which has the headings), this increments the count for the combination in that line.
END{for (k in c)print k,c[k]}
This prints out the final counts.
sort -n
This is just for aesthetics: it puts the output lines in a predictable order.
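If you also want to tie this back to your original nested-loop idea (the 188 and 128 bounds are taken from your question), you can keep the counting approach and move the loops into the END block; a sketch:
awk 'NR>1 { c[$1 " " $2]++ }
     END  { for (i=0; i<=188; i++)
                for (j=0; j<=128; j++)
                    if ((i " " j) in c) print i, j, c[i " " j]
          }' file
If you also want the combinations that never occur (with a count of 0), drop the if and print c[i " " j]+0 instead.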
2. Alternative using uniq -c
$ tail -n+2 file | sort -n | uniq -c | awk '{print $2,$3,$1}'
0 0 3
0 1 1
1 1 2
29 42 1
30 15 1
How it works
tail -n+2 file
This prints all but the first line of the file. The purpose of this is to remove the column headings.
sort -n | uniq -c
This sorts the lines and then counts the duplicates.
awk '{print $2,$3,$1}'
uniq -c puts the counts first, but you wanted the counts to be last on the line. This just rearranges the columns into the format that you wanted.
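For reference, the intermediate output of tail -n+2 file | sort -n | uniq -c on the sample looks something like this (the exact width of the count column depends on your uniq):
      3 0 0
      1 0 1
      2 1 1
      1 29 42
      1 30 15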

Related

Complete file2 with data from file1

I have two files with fields separated by tabs:
File1 has 13 columns and 90 million lines (~5 GB). The number of lines in file1 is always smaller than the number of lines in file2.
1 1 27 0 2 0 0 1 0 0 0 1 false
1 2 33 0 3 0 0 0 0 0 0 1 false
1 5 84 3 0 0 0 0 0 0 0 2 false
1 6 41 0 1 0 0 0 0 0 0 1 false
1 7 8 4 0 0 0 0 0 0 0 1 false
File2 has 2 columns and 100 million lines (1.3 GB):
1 1
1 2
1 3
1 4
1 5
What I want to achieve:
When the pair of columns $1/$2 in file2 is identical to the pair of columns $1/$2 in file1, I would like to print $1 and $2 from file2 and $3 from file1 into an output file. In addition, if the pair $1/$2 from file2 has no match in file1, print $1/$2 in the output and leave the 3rd column empty. Thus, the output keeps the same structure (number of lines) as file2.
If relevant: the pairs $1/$2 are unique in both file1 and file2, and both files are sorted by $1 first and then by $2.
Output file:
1 1 27
1 2 33
1 3 45
1 4
1 5 84
What I have done so far:
awk -F"\t" 'NR == FNR {a[$1 "\t" $2] = $3; next } { print $0 "\t" a[$1 "\t" $2] }' file1 file2 > output
The command runs for a few minutes and then stops unexpectedly without any message. When I open the output file, the first 5 to 6 million lines have been processed correctly (I can see the 3rd column was correctly added), but the rest of the output file does not have a 3rd column. I am running this command on a 3.2 GHz Intel Core i5 with 32 GB of 1600 MHz DDR3. Any ideas why the command stops? Thanks for your help.
You are close.
I would do something like this:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]=$3; next}
key in seen {print key, seen[key]}
' file1 file2
Or, since file1 is bigger, reverse which file is held in memory:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]; next}
key in seen {print key, $3}
' file2 file1
You could also use join which will likely handle files much larger than memory. This is BSD join that can use multiple fields for the join:
join -1 1 -1 2 -2 1 -2 2 -t $'\t' -o 1.1,1.2,1.3 file1 file2
join requires the files to be sorted, as your examples are. If not sorted, you could do:
join -1 1 -1 2 -2 1 -2 2 -t $'\t' -o 1.1,1.2,1.3 <(sort -n file1) <(sort -n file2)
Or, if your join can only use a single field, you can temporarily use a space as the separator between fields 2 and 3 of file1 and tell join to use that as the delimiter:
join -1 1 -2 1 -t $' ' -o 1.1,2.2 <(sort -k1n -k2n file2) <(awk '{printf("%s\t%s %s\n",$1,$2,$3)}' file1 | sort -k1n -k2n) | sed 's/[ ]/\t/'
Either awk or join prints:
1 1 27
1 2 33
1 3 45
1 4 7
1 5 84
Your comment:
After additional investigation, the suggested solutions did not work because my question was not properly asked (my mistake). The suggested solutions print lines only when a match between pairs ($1/$2) is found between file1 and file2. Thus, the resulting output file always has the number of lines of file1 (which is always smaller than file2). I want the output file to keep the same structure as file2, i.e. the same number of lines (for further comparison). The question has been refined accordingly.
If your computer can handle the file sizes:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]=$3; next}
{if (key in seen)
     print key, seen[key]
 else
     print key
}
' file1 file2
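On the truncated samples above (where file1 has no "1 3" row), this prints:
1 1 27
1 2 33
1 3
1 4
1 5 84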
Otherwise you can pre-filter file1 so that only the matches are fed to awk, and then file2 dictates the final output structure:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]=$3; next}
{if (key in seen)
     print key, seen[key]
 else
     print key
}
' <(join -1 1 -1 2 -2 1 -2 2 -t $'\t' -o 1.1,1.2,1.3 file1 file2) file2
If you still need something more memory efficient, I would break out ruby for a line-by-line solution:
ruby -e 'f1 = File.open(ARGV[0]); f2 = File.open(ARGV[1])
l1 = f1.gets                                    # current line of file1
f2.each { |l2|
  l1a = l1.chomp.split(/\t/)[0..2].map(&:to_i)  # first three fields of the file1 line
  l2a = l2.chomp.split(/\t/).map(&:to_i)        # the two fields of the file2 line
  # advance file1 until its key is >= the file2 key (or file1 is exhausted)
  while ((tst = l1a[0..1] <=> l2a) < 0 && !f1.eof?)
    l1  = f1.gets
    l1a = l1.chomp.split(/\t/)[0..2].map(&:to_i)
  end
  if tst == 0                                   # keys match: append the 3rd column from file1
    l2a << l1a[2]
  end
  puts l2a.join("\t")
}
' file1 file2
Issues with OP's current awk code:
testing shows loading file1 into memory (a[$1 "\t" $2] = $3) requires ~290 bytes per entry; for 90 million rows this works out to ~26 GBytes; this amount of memory usage should not be an issue in OP's system (max of 32 GBytes) ... assuming all other processes are not consuming 6+ GBytes; having said that ...
in the 2nd half of OP's script (ie, file2 processing) the print/a[$1 "\t" $2] will actually create a new array entry if one doesn't already exist (ie, if file2 key not found in file1 then create a new array entry); since we know this situation can occur we have to take into consideration the amount of memory required to store an entry from file2 in the a[] array ...
testing shows loading file2 into memory (a[$1 "\t" $2] = $3) requires ~190 bytes per entry; for 100 million rows this works out to ~19 GBytes; of course we won't be loading all of file2 into the a[] array, so the total additional memory will be less than 19 GBytes; then again, we only need to load about 26 million rows from file2 into the a[] array (26 million * 190 bytes ≈ 5 GBytes) to run out of memory (a quick way to check these per-entry figures on your own system is sketched after these notes)
OP has mentioned that processing 'stops' after generating 5-6 million rows of output; this symptom (a 'stopped' process) can occur when the system runs out of memory and/or goes into heavy swapping (also a memory issue); with 100 million rows in file2 and only 5-6 million rows in the output, that leaves 94-95 million rows from file2 unaccounted for, which in turn is considerably more than the 26 million rows it would take to use up the remaining ~5 GBytes of memory ...
net result: OP's current awk script is likely hanging/stopped due to running out of memory
net result: we need to look at solutions that keep us from running out of memory; better would be solutions that use considerably less memory than the current awk code; even better would be solutions that use little (effectively 'no') memory at all ...
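To check the per-entry memory figures on your own system, one rough approach (assuming /usr/bin/time is GNU time; macOS uses -l instead of -v) is to load only file1 into an array and look at the reported maximum resident set size:
# load only file1 into an awk array and let GNU time report peak memory usage on stderr
/usr/bin/time -v awk -F'\t' '{a[$1 FS $2] = $3}' file1
# divide the reported maximum resident set size by the number of rows to estimate bytes per entry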
Assumptions/understandings:
both input files have already been sorted by the 1st and 2nd columns
within a given file the combination of the 1st and 2nd columns represents a unique key (ie, there are no duplicate keys in a file)
all lines from file2 are to be written to stdout while an optional 3rd column will be derived from file1 (when the key exists in both files)
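Since everything below relies on the first assumption (both files sorted the same way), it may be worth verifying that up front; a sketch using sort -c, which only checks the ordering and does not re-sort:
sort -c -t $'\t' -k1,1n -k2,2n file1 && echo "file1 is sorted"
sort -c -t $'\t' -k1,1n -k2,2n file2 && echo "file2 is sorted"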
General approach for a 'merge join' operation:
read from both files in parallel
compare the 1st/2nd columns from both files to determine what to print
memory usage should be minimal since we never have more than 2 lines (one from each file) loaded in memory
One awk idea for implementing a 'merge join' operation:
awk -v lookup="file1" ' # assign awk variable "lookup" the name of the file where we will obtain the optional 3rd column from
function get_lookup() { # function to read a new line from the "lookup" file
rc=getline line < lookup # read next line from "lookup" file into variable "line"; rc==1 if successful; rc==0 if reached end of file
if (rc) split(line,f) # if successful (rc==1) then split "line" into array f[]
}
BEGIN { FS=OFS="\t" # define input/output field delimiters
get_lookup() # read first line from "lookup" file
}
{ c3="" # set optional 3rd column to blank
while (rc) { # while we have a valid line from the "lookup" file look for a matching line ...
if ($1< f[1] ) { break }
else if ($1==f[1] && $2< f[2]) { break }
else if ($1==f[1] && $2==f[2]) { c3= OFS f[3]; get_lookup(); break }
# else if ($1==f[1] && $2> f[2]) { get_lookup() }
# else if ($1> f[1] ) { get_lookup() }
else { get_lookup() }
}
print $0 c3 # print current line plus optional 3rd column
}
' file2
This generates:
1 1 27
1 2 33
1 3
1 4
1 5 84

BASH: Modify column containing also numbers to uppercase

I have a big file (~1000x500) with the following (simplified) format:
ABC1 ABC1 0 0 0 0
a123 a123 0 0 0 0
a.b1 a.b1 0 0 0 0
Some strings in column 1 and column 2 are already in uppercase, while some aren't. The first two columns contain not only letters but also special characters and numbers.
How can I modify all letters in column 1 and 2 from lowercase to uppercase?
Note: to simplify the example, I put value 0 for the other columns, but in my real file those can be numbers or letters.
This is what I tried; however, for a big file (with many more columns) the script would be far too long to write, and it is not very "nice":
while read -r col1 col2 col3 col4 col5 col6;
do
printf "%s%s%s%s%s%7s\n" "${col1^^}" "${col2^^}" "$col3" "$col4" "$col5" "$col6"
done < input.txt > output.txt
awk '{$1 = toupper($1); $2 = toupper($2)}1' input.txt > output.txt
This will change the field separators from "whitespace" to a single space.
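On the sample input above this gives:
ABC1 ABC1 0 0 0 0
A123 A123 0 0 0 0
A.B1 A.B1 0 0 0 0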
To maintain the current spacing:
perl -pe 's/(\S+)(\s+)(\S+)/ uc($1) . $2 . uc($3) /e'
With GNU sed:
sed -E 's/(\S+\s+){2}/\U&/' infile

In AWK: Count number of occurrences in a column in a tab separated file and write data into a new tsv file

I have data stored in a large (20Gb) tab separated text file, as the sample below (input.txt):
1234 567 T 0
1267 890 Z 1
1269 908 T 1
3142 789 T 0
7896 678 Z 0
I would like to count the occurrences of each entry in Column 4, and write this automatically into a new tab separated file.
I would like to see the following in output.txt:
0 3
1 2
Can anybody suggest a fast way to do this with AWK?
awk '{ count[$4]++ } END { for (i in count) printf "%s\t%d\n", i, count[i] }' \
big.file.txt
For each value in column 4, increment the counter for that value. At the end, print each value found and its count. This prints the values in an indeterminate order. If you want it in some order, either post-process the output with sort or sort the keys inside awk and print in the sorted key order.
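For example, if the column-4 values are numeric and you want the output in ascending order of value, a sketch with an external sort (drop the n from the sort key if the values are not numeric):
awk '{ count[$4]++ } END { for (i in count) printf "%s\t%d\n", i, count[i] }' input.txt |
    sort -k1,1n > output.txt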

Filter column from file based on header matching a regex

I have the following file
foo_foo bar_blop baz_N toto_N lorem_blop
1 1 0 0 1
1 1 0 0 1
And I'd like to remove the columns whose header has the _N tag (or, equivalently, select all the others).
So the output should be
foo_foo bar_blop lorem_blop
1 1 1
1 1 1
I found some answers but none did exactly this.
I know awk can do this, but I don't understand how to do it myself (I'm not good at awk).
Thanks for the help :)
awk 'NR==1{for(i=1;i<=NF;i++)if(!($i~/_N$/)){a[i]=1;m=i}}
{for(i=1;i<=NF;i++)if(a[i])printf "%s%s",$i,(i==m?RS:FS)}' f|column -t
outputs:
foo_foo bar_blop lorem_blop
1 1 1
1 1 1
$ cat tst.awk
NR==1 {
for (i=1;i<=NF;i++) {
if ( (tgt == "") || ($i !~ tgt) ) {
f[++nf] = i
}
}
}
{
for (i=1; i<=nf; i++) {
printf "%s%s", $(f[i]), (i<nf?OFS:ORS)
}
}
$ awk -v tgt="_N" -f tst.awk file | column -t
foo_foo bar_blop lorem_blop
1 1 1
1 1 1
$ awk -f tst.awk file | column -t
foo_foo bar_blop baz_N toto_N lorem_blop
1 1 0 0 1
1 1 0 0 1
$ awk -v tgt="blop" -f tst.awk file | column -t
foo_foo baz_N toto_N
1 0 0
1 0 0
The main difference between this and #Kent's solution is performance, and the impact will vary based on the percentage of fields you want to print on each line.
The above, when reading the first line of the file, creates an array of the field numbers to print, and then for every line of the input file it just prints those fields in a loop. So if you wanted to print 3 out of 100 fields, this script would loop through only 3 iterations/fields on each input line.
#Kent's solution also creates an array of the field numbers to print, but then for every line of the input file it visits every field to test whether it is in that array before printing it or not. So if you wanted to print 3 out of 100 fields, #Kent's script would loop through all 100 iterations/fields on each input line.
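If you want to measure the difference on your own data, a rough sketch (the file name wide.txt is hypothetical, and it assumes the first answer's program has been saved as kent.awk alongside tst.awk above):
time awk -f kent.awk wide.txt            > /dev/null    # visits every field on every line
time awk -v tgt="_N" -f tst.awk wide.txt > /dev/null    # loops only over the fields being kept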

awk, declare array embracing FNR and field, output

I would like to declare an array covering a certain range of lines, say from line 10 to line 78 (the numbers could be anything; this is just an example).
My attempt gives me that range of lines on stdout but puts a "1" in between those lines. Can anybody tell me how to get rid of that "1"?
The following sample is supposed to send the named range of lines to stdout:
awk '
myarr["range-one"]=NR~/^2$/ , NR~/^8$/;
{print myarr["range-one"]};' /home/$USER/uplog.txt;
That is giving me this output:
0
12:33:49 up 3:57, 2 users, load average: 0,61, 0,37, 0,22 21.06.2014
1
12:42:02 up 4:06, 2 users, load average: 0,14, 0,18, 0,19 21.06.2014
1
12:42:29 up 4:06, 2 users, load average: 0,09, 0,17, 0,19 21.06.2014
1
12:43:09 up 4:07, 2 users, load average: 0,09, 0,16, 0,19 21.06.2014
1
Second question: how do I store a particular field from each line (indexed by FNR) in that array?
When I do it this way, the fields I want do come up:
awk ' NR~/^1$/ , NR~/^7$/ {print $3, $11; next} ; ' /home/$USER/uplog.txt;
But I need an array; that's why I'm asking. Any hints? Thanks in advance.
What the example script does
awk '
myarr["range-one"]=NR~/^2$/ , NR~/^8$/;
{print myarr["range-one"]};'
Your script is one of the more convoluted and decidedly less-than-obvious pieces of awk that I've ever seen. Let's take a simple input file:
Line 1
Line 2
Line 3
Line 4
Line 5
Line 6
Line 7
Line 8
Line 9
Line 10
Line 11
Line 12
The output from that is:
0
Line 2
1
Line 3
1
Line 4
1
Line 5
1
Line 6
1
Line 7
1
Line 8
1
0
0
0
0
Dissecting your script, it appears that the first line:
myarr["range-one"]=NR~/^2$/ , NR~/^8$/;
is equivalent to:
myarr["range-one"] = (NR ~ /^#$/, NR ~ /^8$/) { print }
That is, the value assigned to myarr["range-one"] is 1 inside the range of line numbers where NR is equal to 2 and is equal to 8, and 0 outside that range; further, when the value is 1, the line is printed.
The second line:
{print myarr["range-one"]};
prints the value in myarr["range-one"] for each line of input. Thus, on the first line, the value 0 is printed. For lines 2 to 8, the line itself is printed followed by the value 1; for lines after that, only the value 0 is printed.
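For comparison, a more conventional script that produces exactly that output, with the range written as explicit NR tests (a sketch):
awk 'NR >= 2 && NR <= 8                  # inside the range: the default action prints the line itself
     { print (NR >= 2 && NR <= 8) }      # on every line: prints 1 inside the range, 0 outside
    ' file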
What the question asks for
The question is not clear. It appears that lines 10 to 78 should be printed. In awk, there are essentially no variable declarations (we can debate about function parameters, but functions don't seem to figure in this). Therefore, declaring an array is not an option.
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { print }'
This would print the lines between line 10 and line 78. It would be feasible to save the values in an array (a in the examples below). Said array could be indexed by NR or with a separate index starting at 0 or 1:
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { a[NR] = $0 }' # Indexed by line number
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { a[i++] = $0 }' # Indexed from 0
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { a[++i] = $0 }' # Indexed from 1
Presumably, you'd also have an END block to do something with the data.
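For instance, a minimal sketch of the third variant with an END block that simply prints the saved lines back out:
awk -v lo=10 -v hi=78 '
    NR >= lo && NR <= hi { a[++i] = $0 }           # save each line in the range
    END { for (n = 1; n <= i; n++) print a[n] }    # do whatever is needed with the saved lines here
' /home/$USER/uplog.txt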
The semicolons in the original are both unnecessary. The blank line is ignored, of course.
