I have some information in two large files.
One of them (file1.txt, ~4 million lines) contains all object names (which are unique) and their types.
The other (file2.txt, ~2 million lines) contains some object names (which can be duplicated) and values assigned to them.
So, I have something like below in file1.txt:
objName1 objType1
objName2 objType2
objName3 objType3
...
And in file2.txt I have:
objName3 val3_1
objName3 val3_2
objName4 val4
...
For all the objects in file2.txt I need to output the object names, their types, and the values assigned to them in a single file, like below:
objType3 val3_1 "objName3"
objType3 val3_2 "objName3"
objType4 val4 "objName4"
...
Previously, object names in file2.txt were supposed to be unique, so I implemented a solution where I read all the data from both files, save it to Tcl arrays, and then iterate over the larger array, checking whether an object with the same name exists in the smaller array; if so, I write the needed information to a separate file. But this runs too long (> 10 hours and it hasn't completed yet).
How can I improve my solution, or is there another way to do this?
EDIT:
Actually, I don't have file1.txt; I obtain that data with a procedure and write it into a Tcl array. I run a procedure to get the object types and save them to a Tcl array, then I read file2.txt and save its data to another Tcl array, then I iterate over the items in the first (object types) array, and if an object name matches some object in the second (object values) array, I write the info to the output file and erase that element from the second array. Here is a piece of the code that I'm running:
set outFileName "output.txt"
if {[catch {open $outFileName "w"} fid]} {
    puts "ERROR: Failed to open file '$outFileName', no write permission"
    exit 1
}

# get object types
set TIME_start [clock clicks -milliseconds]
array set objTypeMap [list]
# here is some proc that fills up objTypeMap
set TIME_taken [expr {[clock clicks -milliseconds] - $TIME_start}]
puts "Info: Object types are found. Elapsed time $TIME_taken"

# read file2.txt
set TIME_start [clock clicks -milliseconds]
set file2 [lindex $argv 5]
if {[catch {set fp [open $file2 r]} errMsg]} {
    puts "ERROR: Failed to open file '$file2' for reading"
    exit 1
}
set objValData [read $fp]
close $fp
# tcl list containing lines of file2.txt
set objValData [split $objValData "\n"]
# remove last empty line
set objValData [lreplace $objValData end end]

array set objValMap [list]
foreach item $objValData {
    set objName  [string range $item 0 [expr {[string first " " $item] - 1}]]
    set objValue [string range $item [expr {[string first " " $item] + 1}] end]
    set objValMap($objName) $objValue
}
# clear objValData
unset objValData
set TIME_taken [expr {[clock clicks -milliseconds] - $TIME_start}]
puts "Info: Object value data is read and processed. Elapsed time $TIME_taken"

# write to file
set TIME_start [clock clicks -milliseconds]
foreach {objName objType} [array get objTypeMap] {
    if {[array size objValMap] == 0} {
        break
    }
    if {[info exists objValMap($objName)]} {
        set objValue $objValMap($objName)
        puts $fid "$objType $objValue \"$objName\""
        unset objValMap($objName)
    }
}
if {[array size objValMap] != 0} {
    foreach {objName objVal} [array get objValMap] {
        puts "WARNING: Can not find object $objName type, skipped..."
    }
}
close $fid
set TIME_taken [expr {[clock clicks -milliseconds] - $TIME_start}]
puts "Info: Output is created. Elapsed time $TIME_taken"
It seems that for the last step (writing to the file) there are ~8 * 10^12 iterations to do, which is not realistic to complete in a reasonable time: I tried running a for loop that just prints the iteration index, and ~850*10^6 iterations took ~30 minutes (so the whole loop would finish in ~11 hours).
So, there should be another solution.
EDIT:
It seems the reason was some unsuccessful hashing of the file2.txt map: after I shuffled the lines in file2.txt, I got results in about 3 minutes.
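For reference, a shuffle like that can also be done in-process on the line list before filling the array. This is only a rough sketch (the shuffleLines helper is a made-up name, and a plain Fisher-Yates shuffle is assumed rather than whatever was actually used):
proc shuffleLines {items} {
    # Fisher-Yates shuffle of a Tcl list
    for {set i [expr {[llength $items] - 1}]} {$i > 0} {incr i -1} {
        set j [expr {int(rand() * ($i + 1))}]
        # swap elements i and j
        set tmp [lindex $items $i]
        lset items $i [lindex $items $j]
        lset items $j $tmp
    }
    return $items
}

# e.g. right after file2.txt has been split into lines:
set objValData [shuffleLines $objValData]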
Write the data to file1, and let an external tool do all the hard work (it's bound to be much more optimized for the task than home-spun Tcl code)
exec bash -c {join -o 0,1.2,2.2 <(sort file1.txt) <(sort file2.txt)} > result.txt
So… file1.txt is describing a mapping and file2.txt is the list of things to process and annotate? The right thing is to load the mapping into an array or dictionary where the key is the part that you will look things up by, and to then go through the other file line-by-line. That keeps the amount of data in memory down, but it's worth holding the whole mapping that way anyway.
# We're doing many iterations, so it's worth getting proper bytecode compilation
apply {{filename1 filename2 filenameOut} {
    # Load the mapping; uses memory proportional to the file size
    set f [open $filename1]
    while {[gets $f line] >= 0} {
        regexp {^(\S+)\s+(.*)} $line -> name type
        set types($name) $type
    }
    close $f

    # Now do the streaming transform; uses a small fixed amount of memory
    set fin [open $filename2]
    set fout [open $filenameOut "w"]
    while {[gets $fin line] >= 0} {
        # Assume that the mapping is probably total; if a line fails to match,
        # we print it as it was before. You might prefer a different strategy here.
        catch {
            regexp {^(\S+)\s+(.*)} $line -> name info
            set line [format "%s %s \"%s\"" $types($name) $info $name]
        }
        puts $fout $line
    }
    close $fin
    close $fout
    # All memory will be collected at this point
}} "file1.txt" "file2.txt" "fileProcessed.txt"
Now, if the mapping is very large, so much that it doesn't fit in memory, then you might be better off doing it via file indices and stuff like that, but frankly at that point you're better off getting familiar with SQLite or some other database.
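To sketch what that SQLite route might look like, here is an untested outline using the sqlite3 Tcl package. The scratch database name (join.db) and the table/column names are made up for this example, and the output file name follows the example above:
package require sqlite3

# On-disk scratch database, so memory use stays small
sqlite3 db join.db
db eval {
    CREATE TABLE IF NOT EXISTS types (name TEXT PRIMARY KEY, type TEXT);
    CREATE TABLE IF NOT EXISTS vals  (name TEXT, value TEXT);
}

# Load the mapping (file1.txt); one transaction keeps the inserts fast
set f [open "file1.txt"]
db transaction {
    while {[gets $f line] >= 0} {
        if {[regexp {^(\S+)\s+(.*)} $line -> name type]} {
            db eval {INSERT OR REPLACE INTO types VALUES ($name, $type)}
        }
    }
}
close $f

# Load the values (file2.txt) the same way
set f [open "file2.txt"]
db transaction {
    while {[gets $f line] >= 0} {
        if {[regexp {^(\S+)\s+(.*)} $line -> name value]} {
            db eval {INSERT INTO vals VALUES ($name, $value)}
        }
    }
}
close $f

# Let SQLite do the join and stream the result straight to the output file
set fout [open "fileProcessed.txt" w]
db eval {
    SELECT t.type AS type, v.value AS value, v.name AS name
    FROM vals AS v JOIN types AS t ON t.name = v.name
} {
    puts $fout [format "%s %s \"%s\"" $type $value $name]
}
close $fout
db close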
A pure-Tcl variant of Glenn Jackman's code would be
package require fileutil
package require struct::list
set data1 [lsort -index 0 [split [string trim [fileutil::cat file1.txt]] \n]]
set data2 [lsort -index 0 [split [string trim [fileutil::cat file2.txt]] \n]]
fileutil::writeFile result.txt [struct::list dbJoin -full 0 $data1 0 $data2]
But in this case each row will have four columns, not three: the two columns from file1.txt and the two columns from file2.txt. If that is a problem, reducing the number of columns to three is trivial.
The file join in the example is also full, i.e. all rows from both files will occur in the result, padded by empty strings if the other file has no corresponding data. To solve the OP's problem, an inner join is probably better (only rows that correspond are retained).
fileutil::cat reads the contents of a file; string trim removes leading and trailing whitespace from the contents, to avoid empty lines at the beginning or end; split ... \n creates a list where every row becomes an item; lsort -index 0 sorts that list based on the first word in every item.
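A sketch of both adjustments (switching to an inner join, then cutting the result down to the three-column output the OP asked for) could look like this; it is untested and assumes the joined rows have the four-column layout described above, i.e. name, type, name, value:
package require fileutil
package require struct::list

set data1 [lsort -index 0 [split [string trim [fileutil::cat file1.txt]] \n]]
set data2 [lsort -index 0 [split [string trim [fileutil::cat file2.txt]] \n]]

# Inner join: only rows whose key (column 0) occurs in both files are kept
set joined [struct::list dbJoin -inner 0 $data1 0 $data2]

# Reduce each four-column row (name type name value) to: type value "name"
set lines {}
foreach row $joined {
    lassign $row name type _ value
    lappend lines [format {%s %s "%s"} $type $value $name]
}
fileutil::writeFile result.txt [join $lines \n]\n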
The code is verified to work with Tcl 8.6 and fileutil 1.14.8. The fileutil package is a part of the Tcllib companion library for Tcl: the package can be individually upgraded to the current version by downloading the Tcl source and copying it to the relevant location in the Tcl installation's lib tree (C:\Tcl\lib\teapot\package\tcl\teapot\tcl8\8.2 in my case).
Quick-and-dirty install: download fileutil.tcl from here (use the Download button) and copy the file to where your other sources are. In your source code, call source fileutil.tcl and then package require fileutil. (There may still be compatibility problems with Tcl or with e.g. the cmdline package. Reading the source may suggest workarounds for such.) Remember to check the license conditions for conflicts.
Documentation: fileutil package, lsort, package, set, split, string, struct::list package
Related
I wrote a bash script that reads a file given as $1 and needs to read that file line by line within a loop; based on a condition tested in each iteration, each line from the file will be added to one of two arrays, say GOOD and BAD. Lastly, I'll display the total number of elements in each array.
#!/bin/bash
for x in $(cat $1); do
    #testing something on x
    if [ $? -eq 0 ]; then
        #add the current value of x into array called GOOD
    else
        #add the current value of x into array called BAD
    fi
done
echo "Total GOOD elements: ${#GOOD[@]}"
echo "Total BAD elements: ${#BAD[@]}"
What changes should I make to accomplish this?
#!/usr/bin/env bash
# here, we're checking the number of lines more than 5 characters long
# replace with your real test
testMyLine() { (( ${#1} > 5 )); }

good=( ); bad=( )
while IFS= read -r line; do
    if testMyLine "$line"; then
        good+=( "$line" )
    else
        bad+=( "$line" )
    fi
done <"$1"
echo "Read ${#good[@]} good and ${#bad[@]} bad lines"
Note:
We're using a while read loop to iterate over file contents. This doesn't need to read more than one line into memory at a time (so it won't run out of RAM even with really big files), and it doesn't have unwanted side effects like changing a line containing * to a list of files in the current directory.
We aren't using $?. if foo; then is a much better way to branch on the exit status of foo than foo; if [ $? = 0 ]; then -- in particular, this avoids depending on the value of $? not being changed between when you assign it and when you need it; and it marks foo as "checked", to avoid exiting via set -e or triggering an ERR trap when your boolean returns false.
The use of lower-case variable names is intentional. All-uppercase names are used for shell-builtin variables and names with special meaning to the operating system -- and since defining a regular shell variable overwrites any environment variable with the same name, this convention applies to both types. See http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html
I have a very large text file from which I have to extract some data. I read the file line by line and look for keywords, and I know that the keywords I am looking for are much closer to the end of the file than to the beginning.
I tried using tac:
set fh [open "|tac filename"]
I am getting the error: couldn't execute "tac": no such file or directory
My file is big, so I am not able to store the lines in a loop and reverse them. Please suggest a solution.
tac is itself a fairly simple program -- you could just implement its algorithm in Tcl, at least if you're determined to literally read each line in reverse order. However, I think that constraint is not really necessary -- you said that the content you're looking for is more likely to be near the end than near the beginning, not that you had to scan the lines in reverse order. That means you can do something a little bit simpler. Roughly speaking:
Seek to an offset near the end of the file.
Read line-by-line as normal, until you hit data you've already processed.
Seek to an offset a bit further back from the end of the file.
Read line-by-line as normal, until you hit data you've already processed.
etc.
This way you don't actually have to keep anything more in memory than the single line you're processing right now, and you'll process the data at the end of the file before data earlier in the file. Maybe you could eke out a tiny bit more performance by strictly processing the lines in reverse order but I doubt it will matter compared to the advantage you gain by not scanning from start to finish.
Here's some sample code that implements this algorithm. Note the bit of care taken to avoid processing a partial line:
set BLOCKSIZE 16384
set offset [file size $filename]
set lastOffset [file size $filename]
set f [open $filename r]
while {1} {
    seek $f $offset
    if {$offset > 0} {
        # We may have accidentally read a partial line, because we don't
        # know where the line boundaries are. Skip to the end of whatever
        # line we're in, and discard the content. We'll get it instead
        # at the end of the _next_ block.
        gets $f
        set offset [tell $f]
    }
    while {[tell $f] < $lastOffset} {
        set line [gets $f]
        ### Do whatever you're going to do with the line here
        puts $line
    }
    set lastOffset $offset
    if {$lastOffset == 0} {
        # All done, we just processed the start of the file.
        break
    }
    set offset [expr {$offset - $BLOCKSIZE}]
    if {$offset < 0} {
        set offset 0
    }
}
close $f
The cost of reversing a file is actually fairly high. The best option I can think of is to construct a list of file offsets of the starts of lines, and then to use a seek;gets pattern to go over that list.
set f [open $filename]
# Construct the list of offsets at which each line starts
set indices {}
set idx [tell $f]
while {[gets $f line] >= 0} {
    lappend indices $idx
    set idx [tell $f]
}
# Iterate backwards
foreach idx [lreverse $indices] {
    seek $f $idx
    set line [gets $f]
    DoStuffWithALine $line
}
close $f
The cost of this approach is non-trivial (even if you happened to have a cache of the indices, you'd still have issues with it) as it doesn't work well with how the OS pre-fetches disk data.
I've written a piece of code using a while loop, but it takes too much time to read the file line by line. Can anyone help me, please?
My code:
set a [open myfile r]
while {[gets $a line] >= 0} {
    # do something using the line variable
}
The code looks fine. It's pretty quick (if you're using a sufficiently new version of Tcl; historically, there were some minor versions of Tcl that had buffer management problems) and is how you read a line at a time.
It's a little faster if you can read in larger amounts at once, but then you need to have enough memory to hold the file. To put that in context, files that are a few million lines are usually no problem; modern computers can handle that sort of thing just fine:
set a [open myfile]
set lines [split [read $a] "\n"]
close $a; # Saves a few bytes :-)
foreach line $lines {
    # do something with each line...
}
If it truly is a large file, you should do the following to read in only a line at a time; reading the whole file at once loads the entire contents into RAM.
https://www.tcl.tk/man/tcl8.5/tutorial/Tcl24.html
#
# Count the number of lines in a text file
#
set infile [open "myfile.txt" r]
set number 0
#
# gets with two arguments returns the length of the line,
# -1 if the end of the file is found
#
while {[gets $infile line] >= 0} {
    incr number
}
close $infile
puts "Number of lines: $number"
#
# Also report it in an external file
#
set outfile [open "report.out" w]
puts $outfile "Number of lines: $number"
close $outfile
I have two files (say A.txt and B.txt), and some of their contents might be common to both.
Both files are sorted.
I need to get the difference of A.txt and B.txt, i.e., a file C.txt which has the contents of A except the content common to both.
I used the typical search-and-print algorithm: take a line from A.txt, search for it in B.txt; if found, print nothing to C.txt, else print that line to C.txt.
But I am dealing with files with a huge number of lines, and thus it throws the error failed to load too many files (though it works fine for smaller files).
Can anybody suggest a more efficient way of getting C.txt?
The script has to be Tcl only!
First off, the too many files error is an indication that you're not closing a channel, probably in the B.txt scanner. Fixing that is probably your first goal. If you've got Tcl 8.6, try this helper procedure:
proc scanForLine {searchLine filename} {
    set f [open $filename]
    try {
        while {[gets $f line] >= 0} {
            if {$line eq $searchLine} {
                return true
            }
        }
        return false
    } finally {
        close $f
    }
}
However, if one of the files is small enough to fit into memory reasonably, you'd be far better off reading it into a hash table (e.g., a dictionary or array):
set f [open B.txt]
while {[gets $f line] >= 0} {
    set B($line) "any dummy value; we'll ignore it"
}
close $f

set in [open A.txt]
set out [open C.txt w]
while {[gets $in line] >= 0} {
    if {![info exists B($line)]} {
        puts $out $line
    }
}
close $in
close $out
This is much more efficient, but depends on B.txt being small enough.
If both A.txt and B.txt are too large for that, you are probably best off doing some sort of processing in stages, writing things out to disk in between. This is getting rather more complex!
set filter [open B.txt]
set fromFile A.txt
for {set tmp 0} {![eof $filter]} {incr tmp} {
    # Filter by a million lines at a time; that'll probably fit OK
    for {set i 0} {$i < 1000000} {incr i} {
        if {[gets $filter line] < 0} break
        set B($line) "dummy"
    }
    # Do the filtering
    if {$tmp} {set fromFile $toFile}
    set from [open $fromFile]
    set to [open [set toFile /tmp/[pid]_$tmp.txt] w]
    while {[gets $from line] >= 0} {
        if {![info exists B($line)]} {
            puts $to $line
        }
    }
    close $from
    close $to
    # Keep control of temporary files and data
    if {$tmp} {file delete $fromFile}
    unset B
}
close $filter
file rename $toFile C.txt
Warning! I've not tested this code…
I'm processing the headers of a .fasta file (a format universally used in genetics/bioinformatics to store DNA/RNA sequence data). Fasta files have headers starting with a > symbol (which gives specific info), followed on the next line by the actual sequence data that the header describes. The sequence data extends until the next \n, after which comes the next header and its respective sequence. For example:
>scaffold1.1_size947603
ACGCTCGATCGTACCAGACTCAGCATGCATGACTGCATGCATGCATGCATCATCTGACTGATG....
>scaffold2.1_size747567.2.603063_605944
AGCTCTGATCGTCGAAATGCGCGCTCGCTAGCTCGATCGATCGATCGATCGACTCAGACCTCA....
and so on...
So, I have a problem with the fasta headers of the genome of the organism I am working with. Unfortunately, the Perl expertise needed to solve this problem seems to be beyond my current skill level, so I was hoping someone on here could show me how it can be done.
My genome consists of about 25,000 fasta headers and their respective sequences. The headers in their current state are giving me a lot of trouble with the sequence aligners I am trying to use, so I have to simplify them significantly. Here is an example of my first few headers:
>scaffold1.1_size947603
>scaffold10.1_size550551
>scaffold100.1_size305125:1-38034
>scaffold100.1_size305125:38147-38987
>scaffold100.1_size305125:38995-44965
>scaffold100.1_size305125:76102-78738
>scaffold100.1_size305125:84171-87568
>scaffold100.1_size305125:87574-89457
>scaffold100.1_size305125:90495-305068
>scaffold1000.1_size94939
Essentially I would like to refine these to look like this:
>scaffold1.1a
>scaffold10.1a
>scaffold100.1a
>scaffold100.1b
>scaffold100.1c
>scaffold100.1d
>scaffold100.1e
>scaffold100.1f
>scaffold100.1g
>scaffold1000.1a
Or perhaps even this (but this seems like it would be more complicated):
>scaffold1.1
>scaffold10.1
>scaffold100.1a
>scaffold100.1b
>scaffold100.1c
>scaffold100.1d
>scaffold100.1e
>scaffold100.1f
>scaffold100.1g
>scaffold1000.1
What I'm doing here is getting rid of all the size data for each scaffold of the genome. For scaffolds that happen to be fragmented, I'd like to denote the fragments with a, b, c, d, etc. There are a few scaffolds with more than 26 fragments, so perhaps I could denote those with x, y, z, A, B, C, D, ... etc.
I was thinking of doing this with a simple replace foreach loop, similar to this:
#!/usr/bin/perl -w
### Open the files
$gen = './Hc_genome/haemonchus_V1.fa';
open(FASTAFILE, $gen);
@lines = <FASTAFILE>;
#print @lines;
### Add an @ symbol to the start of the label
my @refined;
foreach my $lines (@lines){
    chomp $lines;
    $lines =~ s/match everything after .1/replace it with a, b, c.. etc/g;
    push @refined, $lines;
}
#print @refined;
### Push the array on to a new fasta file
open FILE3, "> ./Hc_genome/modded_haemonchus_V1.fa" or die "Cannot open output.txt: $!";
foreach (@refined)
{
    print FILE3 "$_\n"; # Print each entry in our array to the file
}
close FILE3;
But I don't know how to build the added alphabetical labels in between the $1 and the \n in the match-and-replace operator, essentially because I'm not sure how to step sequentially through the alphabet for each fragment of a particular scaffold (all I could manage was to add an "a" to the start of each one...).
Please, if you don't mind, let me know how I might achieve this!
Much appreciated!
Andrew
In Perl, the increment operator ++ has "magical" behaviour with respect to strings. E.g. my $s = "a"; $s++ increments $s to "b". This goes on until "z", where the increment will produce "aa", and so forth.
The headers of your file appear to be properly sorted, so we can just loop through each header. From the header, we extract the starting part (everything up to including the .1). If this starting part is the same as the starting part of the previous header, we increment our sequence identifier. Otherwise, we set it to "a":
use strict; use warnings; # start every script with these

my $index = "a";
my $prev = "";

# iterate over all lines (rather than reading all 25E3 into memory at once)
while (<>) {
    # pass through non-header lines
    unless (/^>/) {
        print; # comment this line to remove non-header lines
        next;
    }
    s/\.1\K.*//s; # remove everything after ".1". Implies chomping
    # reset or increment $index
    if ($_ eq $prev) {
        $index++;
    } else {
        $index = "a";
    }
    # update the previous line
    $prev = $_;
    # output new header
    print "$_$index\n";
}
Usage: $ perl script.pl <./Hc_genome/haemonchus_V1.fa >./Hc_genome/modded_haemonchus_V1.fa.
It is considered good style to write programs that accept input from STDIN and write to STDOUT, as this improves flexibility. Rather than hardcoding paths in your perl script, keep your script general, and use shell redirection operators like < to specify the input. This also saves you the hassle of manually opening the files.
Example Output:
>scaffold1.1a
>scaffold10.1a
>scaffold100.1a
>scaffold100.1b
>scaffold100.1c
>scaffold100.1d
>scaffold100.1e
>scaffold100.1f
>scaffold100.1g
>scaffold1000.1a