shell: insert a line every n lines from another file

I have two files and I am trying to insert a line from file2 into file1 every 4 lines, starting at the beginning of file1. For example:
file1:
line 1
line 2
line 3
line 4
line 5
line 6
line 7
line 8
line 9
line 10
file2:
50
43
21
output I am trying to get:
50
line 1
line 2
line 3
line 4
43
line 5
line 6
line 7
line 8
21
line 9
line 10
The code I have:
while read line
do
sed '0~4 s/$/$line/g' < file1.txt > file2.txt
done < file1.txt
I am getting the following error:
sed: 1: "0~4 s/$/$line/g": invalid command code ~

The 0~4 address form in your sed command is a GNU sed extension, which is why BSD/macOS sed rejects it with "invalid command code ~"; $line inside single quotes is also never expanded by the shell. The following steps through both files without loading either one into an array in memory:
awk '(NR-1)%4==0{getline this<"file2";print this} 1' file1
This might be preferable if your actual file2 is larger than what you want to hold in memory.
This breaks down as follows:
(NR-1)%4==0 - a condition which matches every 4th line, starting with the first (NR-1 = 0, 4, 8, ...)
getline this<"file2" - gets a line from "file2" and stores it in the variable this
print this - prints ... this.
1 - shorthand for "print the current line", which in this case comes from file1 (awk's normal input)
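If you want the interval to be adjustable, the step can be passed in as an awk variable; the -v n=4 spelling here is my own tweak, not part of the answer above:
awk -v n=4 '(NR-1)%n==0{getline this<"file2"; print this} 1' file1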

It is easy to do this using awk:
awk 'FNR==NR{a[i++]=$0; next} !((FNR-1) % 4){print a[j++]} 1' file2 file1
50
line 1
line 2
line 3
line 4
43
line 5
line 6
line 7
line 8
21
line 9
line 10
While processing the first file on input, i.e. file2, we store each line in an array keyed by an incrementing number starting at 0.
While processing the second file, i.e. file1, we check whether the current record number minus 1 is divisible by 4 using modulo arithmetic; if it is, we insert a line from file2 and increment the index counter.
Finally, the action 1 prints every line from file1.
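One caveat: if file1 needs more insertions than file2 has lines, a[j++] would print empty lines. A guarded variant (the bounds check is my tweak, not part of the answer above):
awk 'FNR==NR{a[i++]=$0; next} !((FNR-1) % 4) && j<i {print a[j++]} 1' file2 file1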

This might work for you (GNU sed):
sed -e 'Rfile1' -e 'Rfile1' -e 'Rfile1' -e 'Rfile1' file2
or just use cat and paste:
cat file1 | paste -d\\n file2 - - - -
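If n is not fixed at 4, the repeated -e 'Rfile1' expressions can be generated; a bash sketch, still assuming GNU sed:
n=4
args=()
for ((i = 0; i < n; i++)); do args+=(-e 'Rfile1'); done   # one R per file1 line to pull
sed "${args[@]}" file2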

Another alternative with the Unix toolchain:
$ paste file2 <(pr -4ats file1) | tr '\t' '\n'
50
line 1
line 2
line 3
line 4
43
line 5
line 6
line 7
line 8
21
line 9
line 10

Here's a goofy way to do it with paste and tr
paste file2 <(paste - - - - <file1) | tr '\t' '\n'
Assumes you don't have any actual tabs in your input files.
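If tabs may occur in the data, any character known to be absent from both files works as the intermediate delimiter; here '|' is assumed to be safe:
paste -d'|' file2 <(paste -d'|' - - - - <file1) | tr '|' '\n'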

Related

Complete file2 with data from file1

I have two files with fields separated with tabs:
File1 has 13 columns and 90 million lines (~5 GB). The number of lines of file1 is always smaller than the number of lines of file2.
1 1 27 0 2 0 0 1 0 0 0 1 false
1 2 33 0 3 0 0 0 0 0 0 1 false
1 5 84 3 0 0 0 0 0 0 0 2 false
1 6 41 0 1 0 0 0 0 0 0 1 false
1 7 8 4 0 0 0 0 0 0 0 1 false
File2 has 2 columns and 100 million lines (1.3 GB):
1 1
1 2
1 3
1 4
1 5
What I want to achieve:
When the column pair $1/$2 of file2 is identical to the column pair $1/$2 of file1, I would like to print $1 and $2 from file2 and $3 from file1 into an output file. In addition, if the pair $1/$2 of file2 has no match in file1, print $1/$2 in the output and leave the 3rd column empty. Thus, the output keeps the same structure (number of lines) as file2.
If relevant: the pairs $1/$2 are unique in both file1 and file2, and both files are sorted by $1 first and then by $2.
Output file:
1 1 27
1 2 33
1 3 45
1 4
1 5 84
What I have done so far:
awk -F"\t" 'NR == FNR {a[$1 "\t" $2] = $3; next } { print $0 "\t" a[$1 "\t" $2] }' file1 file2 > output
The command runs for a few minutes and unexpectedly stops without additional information. When I open the output file, the first 5-6 million lines have been processed correctly (I can see that the 3rd column was added), but the rest of the output file does not have a 3rd column. I am running this command on a 3.2 GHz Intel Core i5 with 32 GB 1600 MHz DDR3. Any ideas why the command stops? Thanks for your help.
You are close.
I would do something like this:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]=$3; next}
key in seen {print key, seen[key]}
' file1 file2
Or, since file1 is bigger, reverse which file is held in memory:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]; next}
key in seen {print key, $3}
' file2 file1
You could also use join which will likely handle files much larger than memory. This is BSD join that can use multiple fields for the join:
join -1 1 -1 2 -2 1 -2 2 -t $'\t' -o 1.1,1.2,1.3 file1 file2
join requires the files be sorted, as your example is. If not sorted, you could do:
join -1 1 -1 2 -2 1 -2 2 -t $'\t' -o 1.1,1.2,1.3 <(sort -n file1) <(sort -n file2)
Or, if your join can only use a single field, you can temporarily use ' ' as the field separator between field 2 and 3 and set join to use that as the delimiter:
join -1 1 -2 1 -t $' ' -o 1.1,2.2 <(sort -k1n -k2n file2) <(awk '{printf("%s\t%s %s\n",$1,$2,$3)}' file1 | sort -k1n -k2n) | sed 's/[ ]/\t/'
Either awk or join prints:
1 1 27
1 2 33
1 3 45
1 4 7
1 5 84
Your comment:
After additional investigation, the suggested solutions did not work because my question was not properly asked (my mistake). The suggested solutions printed lines only when matches between pairs ($1/$2) were found between files 1 and 2. Thus, the resulting output file always has the number of lines of file1 (which is always smaller than that of file2). I want the output file to keep the same structure as file2, that is, the same number of lines (for further comparison). The question has been refined accordingly.
If your computer can handle the file sizes:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]=$3; next}
{if (key in seen)
print key, seen[key]
else
print key
}
' file1 file2
Otherwise you can filter file1 so only the matches are fed to the awk from file1, and then file2 dictates the final output structure:
awk 'BEGIN{FS=OFS="\t"}
{key=$1 FS $2}
NR==FNR{seen[key]=$3; next}
{if (key in seen)
print key, seen[key]
else
print key
}
' <(join -1 1 -1 2 -2 1 -2 2 -t $'\t' -o 1.1,1.2,1.3 file1 file2) file2
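A further sketch, not from the original answer: GNU join's -a option can keep the unmatched file2 lines directly. Since GNU join only joins on one field, $1/$2 are glued into a single key (':' is assumed absent from the data); unmatched lines come out with an empty 3rd field, and the output appears in lexical sort order rather than the original numeric order:
join -t $'\t' -a 1 -o 0,2.2 \
    <(awk -F'\t' '{print $1":"$2}' file2 | sort) \
    <(awk -F'\t' '{print $1":"$2"\t"$3}' file1 | sort) |
tr ':' '\t'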
If you still need something more memory efficient, I would break out ruby for a line-by-line solution:
ruby -e 'f1=File.open(ARGV[0]); f2=File.open(ARGV[1])
l1=f1.gets
f2.each { |l2|
l1a=l1.chomp.split(/\t/)[0..2].map(&:to_i)
l2a=l2.chomp.split(/\t/).map(&:to_i)
while((tst=l1a[0..1]<=>l2a)<0 && !f1.eof?)
l1=f1.gets
l1a=l1.chomp.split(/\t/)[0..2].map(&:to_i)
end
if tst==0
l2a << l1a[2]
end
puts l2a.join("\t")
}
' file1 file2
Issues with OP's current awk code:
testing shows loading file1 into memory (a[$1 "\t" $2] = $3) requires ~290 bytes per entry; for 90 million rows this works out to ~26 GBytes; this amount of memory usage should not be an issue in OP's system (max of 32 GBytes) ... assuming all other processes are not consuming 6+ GBytes; having said that ...
in the 2nd half of OP's script (ie, file2 processing) the print/a[$1 "\t" $2] will actually create a new array entry if one doesn't already exist (ie, if file2 key not found in file1 then create a new array entry); since we know this situation can occur we have to take into consideration the amount of memory required to store an entry from file2 in the a[] array ...
testing shows loading file2 into memory (a[$1 "\t" $2] = $3) requires ~190 bytes per entry; for 100 million rows this works out to ~19 GBytes; 'course we won't be loading all of file2 into the a[] array so total additional memory will be less than 19 GBytes; then again, we only need to load about 26 million rows (from file2) into the a[] array to use up another ~5 GBytes (26 million * 190 bytes) to run out of memory
OP has mentioned that processing 'stops' after generating 5-6 million rows of output; this symptom ('stopped' process) can occur when the system runs out of memory and/or goes into heavy swapping (also a memory issue); with 100 million rows in file2 and only 5-6 million rows in the output, that leaves 94-95 million rows from file2 unaccounted for, which in turn is considerably more than the 26 million rows it would take to use up the remaining ~5 GBytes of memory ...
net result: OP's current awk script is likely hanging/stopped due to running out of memory
net result: we need to look at solutions that keep us from running out of memory; better would be solutions that use considerably less memory than the current awk code; even better would be solutions that use little (effectively 'no') memory at all ...
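Per-entry costs like the ~290/~190 byte figures above can be estimated empirically; a Linux-only sketch (it reads /proc/self/status, and the 1-million-key load is an arbitrary choice):
awk 'BEGIN {
    for (i = 0; i < 1000000; i++) a[i "\t" i] = "payload"    # synthetic a[] load
    while ((getline line < "/proc/self/status") > 0)
        if (line ~ /^VmRSS/) print line                      # resident memory after loading
}'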
Assumptions/understandings:
both input files have already been sorted by the 1st and 2nd columns
within a given file the combination of the 1st and 2nd columns represents a unique key (ie, there are no duplicate keys in a file)
all lines from file2 are to be written to stdout while an optional 3rd column will be derived from file1 (when the key exists in both files)
General approach for a 'merge join' operation:
read from both files in parallel
compare the 1st/2nd columns from both files to determine what to print
memory usage should be minimal since we never have more than 2 lines (one from each file) loaded in memory
One awk idea for implementing a 'merge join' operation:
awk -v lookup="file1" ' # assign awk variable "lookup" the name of the file where we will obtain the optional 3rd column from
function get_lookup() { # function to read a new line from the "lookup" file
rc=getline line < lookup # read next line from "lookup" file into variable "line"; rc==1 if successful; rc==0 if reached end of file
if (rc) split(line,f) # if successful (rc==1) then split "line" into array f[]
}
BEGIN { FS=OFS="\t" # define input/output field delimiters
get_lookup() # read first line from "lookup" file
}
{ c3="" # set optional 3rd column to blank
while (rc) { # while we have a valid line from the "lookup" file look for a matching line ...
if ($1< f[1] ) { break }
else if ($1==f[1] && $2< f[2]) { break }
else if ($1==f[1] && $2==f[2]) { c3= OFS f[3]; get_lookup(); break }
# else if ($1==f[1] && $2> f[2]) { get_lookup() }
# else if ($1> f[1] ) { get_lookup() }
else { get_lookup() }
}
print $0 c3 # print current line plus optional 3rd column
}
' file2
This generates:
1 1 27
1 2 33
1 3
1 4
1 5 84

terminal: diff of two files, print out all lines which are in the 2nd file and not in the 1st

In the following samples, every line can be empty or can have some characters. The characters can be other than numbers too, and lines can contain line feeds and tabs.
The following looks partly fine, but doesn't work with more complex content:
file1.txt
1
2
3
5
file2.txt
1
4
5
Working with the simple sample above:
comm -1 -3 file1.txt file2.txt
Output, which is fine
4
A more complex sample, which doesn't work:
file1.txt
0
2
3
4
5
6
7
8
10
file2.txt
1
4
6
7
8
9
10
Wrong output (the 10 should not be in the output for this sample):
1
9
10
If you sort the contents of file1.txt and file2.txt the same way before running your sample code, it works fine: comm requires its inputs to be sorted in lexical order, and in that order 10 sorts before 2, which is why 10 leaked into your output.
You can do it the following way:
sort file1.txt > file1_sorted.txt
sort file2.txt > file2_sorted.txt
After that, use the sorted files with your code:
comm -1 -3 file1_sorted.txt file2_sorted.txt
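With bash process substitution, the sorting and the comparison collapse into one line, with no temporary files:
comm -13 <(sort file1.txt) <(sort file2.txt)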

compare single column of two files line by line

I've asked a similar question to this before but didn't find the exact answer I was looking for, I apologize for redundancy. I decided to repost the question phrased differently. I have two lengthy files, each with two columns separated by a space.
I'd like to eliminate all lines that have a matching column 2 in fileA and fileB (regardless of line number/column 1), and output the entire mismatching-line to a separate file.
File A:
1 AA
2 BB
3 CC
4 DD
5 EE
6 FF
7 GG
8 HH
File B:
1 AA
2 BB
3 XX
4 XX
5 CC
6 DD
7 XX
8 FF
9 GG
10 XX
11 XX
12 HH
Desired output:
3 XX
4 XX
7 XX
5 EE
10 XX
11 XX
fedorqui suggested I use awk to store the second column of fileA in an array, then loop through fileB to output lines with the following criteria:
column 1 is present in fileA
but column 2 in fileB is different
awk 'FNR==NR {a[$1]=$2; next} $1 in a && a[$1] != $2' fileA fileB
This is helpful until the code encounters the first discrepancy in column 2 between fileA and fileB; from then on, the code outputs all following lines.
Instead of this, I would like to compare the array from column 2 of fileA line by line to column 2 of fileB. Once the code encounters a discrepancy, it outputs the entire mismatched line from fileB, then compares the same line of the array to the next line of fileB. It continues comparing the same array line, outputting discrepant lines of fileB, until a match is found. If the code reaches the end of fileB and no match is found, it outputs the line from fileA, moves to the next line of the array, and continues comparing it to each line of fileB. Is this possible, or is there an easier way to do this than creating an array with awk?
You can use this awk:
awk 'NR==FNR {a[$2]=$0;next} $2 in a{del[$2];next} 1;
END{for (i in a) if (!(i in del)) print a[i]}' fileA fileB
3 XX
4 XX
7 XX
10 XX
11 XX
5 EE
Note that the order is not as shown in the question because (fileA - fileB) is printed at the end, while (fileB - fileA) is computed while traversing fileB.
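If an interleaved order matters, one variation (my sketch, not part of the answer above) tags each output line with a source line number, sorts on it, and strips it again; note this interleaves strictly by line number, which may still differ slightly from the order shown in the question:
awk 'NR==FNR {a[$2]=$0; ln[$2]=FNR; next}
     $2 in a {del[$2]; next}
     {print FNR"\t"$0}
     END {for (i in a) if (!(i in del)) print ln[i]"\t"a[i]}' fileA fileB |
sort -k1,1n | cut -f2-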

awk, declare array embracing FNR and field, output

I would like to declare an array covering a certain number of lines, for example from line 10 to line 78. It could be another range; this is just an example.
My sample gives me that range of lines on stdout but prints a "1" in between those lines. Can anybody tell me how to get rid of that "1"?
The sample below goes to stdout and embraces the named lines.
awk '
myarr["range-one"]=NR~/^2$/ , NR~/^8$/;
{print myarr["range-one"]};' /home/$USER/uplog.txt;
That is giving me this output:
0
12:33:49 up 3:57, 2 users, load average: 0,61, 0,37, 0,22 21.06.2014
1
12:42:02 up 4:06, 2 users, load average: 0,14, 0,18, 0,19 21.06.2014
1
12:42:29 up 4:06, 2 users, load average: 0,09, 0,17, 0,19 21.06.2014
1
12:43:09 up 4:07, 2 users, load average: 0,09, 0,16, 0,19 21.06.2014
1
Second question: how do I store one field per line (FNR) in that array?
When I do it this way, the field that I wanted comes up:
awk ' NR~/^1$/ , NR~/^7$/ {print $3, $11; next} ; ' /home/$USER/uplog.txt;
But I need an array, that's why I'm asking. Any hints? Thanks in advance.
What the example script does
awk '
myarr["range-one"]=NR~/^2$/ , NR~/^8$/;
{print myarr["range-one"]};'
Your script is one of the more convoluted and decidedly less-than-obvious pieces of awk that I've ever seen. Let's take a simple input file:
Line 1
Line 2
Line 3
Line 4
Line 5
Line 6
Line 7
Line 8
Line 9
Line 10
Line 11
Line 12
The output from that is:
0
Line 2
1
Line 3
1
Line 4
1
Line 5
1
Line 6
1
Line 7
1
Line 8
1
0
0
0
0
Dissecting your script, it appears that the first line:
myarr["range-one"]=NR~/^2$/ , NR~/^8$/;
is equivalent to:
myarr["range-one"] = (NR ~ /^#$/, NR ~ /^8$/) { print }
That is, the value assigned to myarr["range-one"] is 1 inside the range of lines beginning where NR is equal to 2 and ending where NR is equal to 8, and 0 outside that range; further, when the value is 1, the line is printed.
The second line:
{print myarr["range-one"]};
prints the value in myarr["range-one"] for each line of input. Thus, on the first line, the value 0 is printed. For lines 2 to 8, the line is printed followed by the value 1; for lines after that, the value 0 is printed once more.
What the question asks for
The question is not clear. It appears that lines 10 to 78 should be printed. In awk, there are essentially no variable declarations (we can debate about function parameters, but functions don't seem to figure in this). Therefore, declaring an array is not an option.
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { print }'
This would print the lines between line 10 and line 78. It would be feasible to save the values in an array (a in the examples below). Said array could be indexed by NR or with a separate index starting at 0 or 1:
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { a[NR] = $0 }' # Indexed by line number
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { a[i++] = $0 }' # Indexed from 0
awk -v lo=10 -v hi=78 'NR >= lo && NR <= hi { a[++i] = $0 }' # Indexed from 1
Presumably, you'd also have an END block to do something with the data.
The semicolons in the original are both unnecessary. The blank line is ignored, of course.
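For instance, fleshing out that END block (a sketch that merely reprints the saved range, to show the shape):
awk -v lo=10 -v hi=78 '
    NR >= lo && NR <= hi { a[++n] = $0 }          # save each in-range line
    END { for (i = 1; i <= n; i++) print a[i] }   # do something with the data here
' /home/$USER/uplog.txt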

process specific lines from a text file

I want to insert into a text file like this (the content of the text file is not really like this; this is only an example):
This Is The 1 line
This Is The 2 line
This Is The 3 line
This Is The 4 line
This Is The 5 line
This Is The 6 line
This Is The 7 line
This Is The 8 line
This Is The 9 line
This Is The 10 line
This Is The 11 line
This Is The 12 line
This Is The 13 line
This Is The 14 line
the new file will be like this:
This Is The 1 line
This Is The 2 line
This Is The 3 line
This Is The 4 line
This Is The 5 line
This Is The 6 line
This Is The 7 line
This Is The 8 line
This Is The 9 line
This Is The 10 line
NEW INPUT HERE
This Is The 11 line
This Is The 12 line
This Is The 13 line
This Is The 14 line
My question is: how do I process lines 1-10, insert some text, and afterwards process lines 11-14?
Here is one robust way to do it:
This uses a helper batch file called findrepl.bat (by aacini) - download from: https://www.dropbox.com/s/rfdldmcb6vwi9xc/findrepl.bat
Place findrepl.bat in the same folder as the batch file or on the path.
@echo off
(
type "file.txt"|findrepl /o:1:10
echo NEW INPUT HERE
type "file.txt"|findrepl /o:11
) > "newfile.txt"
