I am trying to do this:
I have a text file that has a line starting with this specific pattern:
vvdd vdd
I need to locate this line and insert another line after it with the pattern vvss vss.
All the lines below the original line have to be shifted down accordingly.
Here is my code so far, which inserts at the wrong location:
set filename "path265.spi"
set line_no 0
set count 0
set pattern "vvdd vdd"

set fp [open $filename r+]
while {[gets $fp line] != -1} {
    incr count 1
    if {[regexp $pattern $line]} {
        set line_no $count
    }
}
seek $fp 0
for {set i 0} {$i < $line_no} {incr i} {gets $fp replace}
puts $fp "\nvvnw vnw 0 1.08"
puts $line_no
puts $count
close $fp
You can use ::fileutil::updateInPlace to simplify things.
package require fileutil
proc change {pattern newtext data} {
    set res {}
    foreach line [split $data \n] {
        lappend res $line
        if {[regexp $pattern $line]} {
            lappend res $newtext
        }
    }
    return [join $res \n]
}
::fileutil::updateInPlace path265.spi {change "^vvdd vdd" "vvss vss"}
The updateInPlace command takes a file name and a command prefix. It adds the contents of the file to that command prefix and invokes it, then writes the result back to the file.
In this case, the command called iterates through the lines of the file, adding $newtext after every line that matches $pattern. This is just one way to write the procedure for making the change. If only the first match is relevant, this could be used:
proc change {pattern newtext data} {
    set lines [split $data \n]
    set index [lsearch -regexp $lines $pattern]
    if {$index >= 0} {
        set lines [linsert $lines $index+1 $newtext]
    }
    return [join $lines \n]
}
etc.
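For example, applied to a small in-memory sample (the sample lines here are made up for illustration), the first-match version inserts the new line right after the matching line:

set sample "x1\nvvdd vdd 0 1.8\nx2"
puts [change "^vvdd vdd" "vvss vss" $sample]
# prints:
#   x1
#   vvdd vdd 0 1.8
#   vvss vss
#   x2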
Documentation: fileutil package, foreach, if, lappend, linsert, lsearch, package, proc, regexp, return, set, split
I want to read the values in cd.txt line by line and treat each value as a variable.
# Damper Properties
set Kd 0.000001;
set Cd [open "CD.txt" r];
set ad 0.000001;
if [catch {open $CD.txt r} cd] { ; # Open the input file and check for error
    puts stderr "Cannot open $inFilename for reading"; # output error statement
} else {
    foreach line [[read $cd] \n] { ; # Look at each line in the file
        if {[llength $line] == 0} { ; # Blank line -> do nothing
            continue;
        } else {
            set Xvalues $line; # execute operation on read data
        }
    }
    close $cd; ; # Close the input file
}
# Define ViscousDamper Material
#uniaxialMaterial ViscousDamper $matTag $Kd $Cd $alpha
uniaxialMaterial ViscousDamper 1 $Kd $Cd $ad
What's wrong with it? Each value in cd.txt is a decimal number. The loop is not working. Please help.
This line:
foreach line [[read $cd] \n] { ; # Look at each line in the file
is missing a critical bit. This version looks more correct:
foreach line [split [read $cd] \n] { ; # Look at each line in the file
(I hope you're really doing more than just setting Xvalues to each non-empty line, since that's unlikely to be useful by itself.)
You're opening the wrong file:
set Cd [open "CD.txt" r];
The Cd variable now holds a file handle, and the value is something like "file3421". You then do
if [catch {open $CD.txt r} cd] { ; # Open the input file and check for error
You're now trying to open the file "file3421.txt" -- I'd expect you to get a "file not found" error there.
Also, you should brace the expression:
if {[catch {open "CD.txt" r} cd]} { ...
#..^............................^
The idiomatic way to read lines from a file is:
while {[gets $cd line] != -1} { ...
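Putting those fixes together, a minimal sketch (assuming CD.txt holds one decimal value per line; the values are collected into a list, since a single Xvalues variable would just be overwritten on each iteration):

set cd [open "CD.txt" r]
set Xvalues {}
while {[gets $cd line] != -1} {
    # skip blank lines
    if {[string trim $line] eq ""} continue
    lappend Xvalues $line
}
close $cd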
I am trying to determine how many times a string, Apples, appears in a text file and on which lines it appears.
The script outputs incorrect line numbers: it prints consecutive numbers (1, 2, ...) rather than the actual lines the word appears on.
file.txt
Apples
Grapes
Oranges
Apples
Goal Output
Apples appear 2 times in this file
Apples appear on these lines: 1, 4,
Instead my output as illustrated from the code below is:
Apples appear 2 times in this file
Apples appear on these lines: 1, 2,
Perl
my $filename = "<file.txt";
open( TEXT, $filename );

$initialLine = 10;    ## holds the number of the line
$line = 0;
$counter = 0;

# holder for line numbers
@lineAry = ();

while ( $line = <TEXT> ) {
    chomp( $line );
    if ( $line =~ /Apples/ ) {
        while ( $line =~ /Apples/ig ) {
            $counter++;
        }
        push( @lineAry, $counter );
        $initialLine++;
    }
}
close( TEXT );

# print "\n\n'Apples' occurs $counter times in file.\n";
print "Apples appear $counter times in this file\n";
print "Apples appear on these lines: ";
foreach $a ( @lineAry ) {
    print "$a, ";
}
print "\n\n";
exit;
There are a number of problems with your code, but the reason the line numbers are printed wrongly is that you are incrementing your variable $counter once each time Apples appears on a line and saving that to @lineAry. That is different from the number of the line where the string appears; the easiest fix is to use the built-in variable $., which holds the number of times a read has been performed on the file handle.
In addition, I would encourage you to use lexical file handles and the three-parameter form of open, and to check that every call to open has succeeded.
You never use the value of $initialLine, and I don't understand why you initialised it to 10.
I would write it like this
use strict;
use warnings 'all';

my $filename = 'file.txt';
open my $fh, '<', $filename or die qq{Unable to open "$filename" for input: $!};

my @lines;
my $n;

while ( <$fh> ) {
    push @lines, $. if /apples/i;
    ++$n while /apples/ig;
}

print "Apples appear $n times in this file\n";
print "Apples appear on these lines: ", join( ', ', @lines ), "\n\n";
output
Apples appear 2 times in this file
Apples appear on these lines: 1, 4
Change
push(@lineAry, $counter);
to
push(@lineAry, $.);
$. is a variable that stores the current line number when reading with Perl's while (<>).
The alternative, if you want to use your $counter variable, is to move the increment so that it increments on every line, not on every match.
This is the log file I am dealing with -
|
blah1a
blah1b
blah1c
|
****blahnothing1
|
blah2a
blah2b
blah2c
|
blahnothing2
|
blah3a
blah3b
blah3c
|
blahnothing3
The information that I need is nestled between two pipe characters. There are a lot of lines that start with asterisks, which I skip over. Each line has Windows end-of-line characters. The data between the pipe characters is contiguous, but when read on a Linux host it is chopped up by the Windows newlines.
I wrote the Perl script with a range operator between the two patterns, hoping that everything starting at one pipe delimiter would get pushed into an array element, stop at the next pipe delimiter, and then start again. Each array element would have all the lines between the two pipe characters.
Ideally the array would look like this, sans the Windows control characters.
$lines[0] blah1a blah1b blah1c
$lines[1] blah2a blah2b blah2c
$lines[2] blah3a blah3b blah3c
However, the array does not look like that.
#!/usr/bin/perl
use strict;
use warnings;

my $delimiter = "|";
my $filename = $ARGV[0];
my @lines;

open(my $fh, '<:encoding(UTF-8)', $filename) or die "could not open file $filename $!";

while (my $line = readline $fh) {
    next if ($line =~ /^\*+/);
    if ($line =~ /$delimiter/ ... $line =~ /$delimiter/) {
        push(@lines, $line);
    }
}

print $lines[0];
print $lines[1];
print $lines[2];
This seems to satisfy your requirement
I've left the two lines blahnothing2 and blahnothing3 in place as I couldn't see a rationale for removing them
The \R regex pattern is the generic newline, and matches the newline sequences from any platform, i.e. CR, LF, or CRLF
use strict;
use warnings 'all';

my $data = do {
    open my $fh, '<:raw', 'blah.txt' or die $!;
    local $/;
    <$fh>;
};

$data =~ s/^\s*\*.*\R/ /gm;    # Remove lines starting with *
$data =~ s/\R/ /g;             # Change all line endings to spaces

# Split on pipe and remove blank elements
my @data = grep /\S/, split /\s*\|\s*/, $data;

use Data::Dump;
dd \@data;
output
[
    "blah1a blah1b blah1c",
    "blah2a blah2b blah2c",
    "blahnothing2",
    "blah3a blah3b blah3c",
    "blahnothing3 ",
]
It seems that you want to merge the lines between | characters into a string, which gets placed on an array.
One way is to set | as the input record separator, so that each read returns a chunk between pipes.
{   # localize the change to $/
    local $/ = "|";
    open(my $fh, '<:encoding(UTF-8)', $filename)
        or die "could not open file $filename $!";
    my @records;
    while (my $section = <$fh>)
    {
        next if $section =~ /^\s*\*/;
        chomp $section;          # remove the record separator (| here)
        $section =~ s/\R/ /g;    # clean up newlines
        $section =~ s/^\s*//;    # clean up leading spaces
        push @records, $section if $section;
    }
    print "$_\n" for @records;
}
I skip a "section" if it starts with * (after optional whitespace); there can be more restrictive versions. The $section can end up being an empty string, so we push it onto the array conditionally.
Output, with the example from the question copy-pasted into the input file named by $filename:
blah1a blah1b blah1c
blah2a blah2b blah2c
blahnothing2
blah3a blah3b blah3c
blahnothing3
The approach in the question is fine, but you need to merge the lines within a "section" (between pipes) and place each such string on the array. So you need a flag to track when you enter and leave a section.
I have two files, some of the contents of which might be common to both (say file A.txt and file B.txt).
Both files are sorted.
I need the difference of A.txt and B.txt, i.e., a file C.txt which has the contents of A except the contents common to both.
I used the typical search-and-print algorithm: take a line from A.txt and search for it in B.txt; if found, print nothing to C.txt, else print that line to C.txt.
But I am dealing with files with a huge number of lines, and thus it throws the error "failed to load too many files". (It works fine for smaller files.)
Can anybody suggest a more efficient way of getting C.txt?
Script to be used: Tcl only!
First off, the too many files error is an indication that you're not closing a channel, probably in the B.txt scanner. Fixing that is probably your first goal. If you've got Tcl 8.6, try this helper procedure:
proc scanForLine {searchLine filename} {
    set f [open $filename]
    try {
        while {[gets $f line] >= 0} {
            if {$line eq $searchLine} {
                return true
            }
        }
        return false
    } finally {
        close $f
    }
}
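For reference, a minimal sketch of how that helper could drive the original algorithm (file names taken from the question). Note that it still re-reads B.txt once per line of A.txt, so it only fixes the channel leak, not the speed:

set in [open A.txt]
set out [open C.txt w]
while {[gets $in line] >= 0} {
    # keep only the lines of A.txt that never appear in B.txt
    if {![scanForLine $line B.txt]} {
        puts $out $line
    }
}
close $in
close $out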
However, if one of the files is small enough to fit into memory reasonably, you'd be far better off reading it into a hash table (e.g., a dictionary or array):
set f [open B.txt]
while {[gets $f line] >= 0} {
    set B($line) "any dummy value; we'll ignore it"
}
close $f

set in [open A.txt]
set out [open C.txt w]
while {[gets $in line] >= 0} {
    if {![info exists B($line)]} {
        puts $out $line
    }
}
close $in
close $out
This is much more efficient, but depends on B.txt being small enough.
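If you prefer Tcl's dict to an array variable, loading B.txt looks like this (a sketch; the filter loop stays the same except the lookup becomes ![dict exists $B $line]):

set f [open B.txt]
set B [dict create]
while {[gets $f line] >= 0} {
    dict set B $line 1    ;# dummy value; only the key matters
}
close $f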
If both A.txt and B.txt are too large for that, you are probably best doing some sort of processing by stages, writing things out to disk in-between. This is getting rather more complex!
set filter [open B.txt]
set fromFile A.txt
for {set tmp 0} {![eof $filter]} {incr tmp} {
    # Filter by a million lines at a time; that'll probably fit OK
    for {set i 0} {$i < 1000000} {incr i} {
        if {[gets $filter line] < 0} break
        set B($line) "dummy"
    }

    # Do the filtering
    if {$tmp} {set fromFile $toFile}
    set from [open $fromFile]
    set to [open [set toFile /tmp/[pid]_$tmp.txt] w]
    while {[gets $from line] >= 0} {
        if {![info exists B($line)]} {
            puts $to $line
        }
    }
    close $from
    close $to

    # Keep control of temporary files and data
    if {$tmp} {file delete $fromFile}
    unset B
}
close $filter
file rename $toFile C.txt
Warning! I've not tested this code…
Let's say I opened a file and parsed it into lines. Then I use a loop:
foreach line $lines {}
Inside the loop, for some lines, I want to replace them inside the file with different lines. Is that possible? Or do I have to write to a temporary file, then replace the original file when I'm done?
e.g., if the file contained
AA
BB
and then I replace capital letters with lower case letters, I want the original file to contain
aa
bb
Thanks!
For plain text files, it's safest to move the original file to a "backup" name and then rewrite it using the original filename:
Update: edited based on Donal's feedback
set timestamp [clock format [clock seconds] -format {%Y%m%d%H%M%S}]
set filename "filename.txt"
set temp $filename.new.$timestamp
set backup $filename.bak.$timestamp

set in [open $filename r]
set out [open $temp w]

# line-by-line, read the original file
while {[gets $in line] != -1} {
    # transform $line somehow
    set line [string tolower $line]
    # then write the transformed line
    puts $out $line
}
close $in
close $out

# keep the old data under the backup name, then
# move the new data to the proper filename
file link -hard $backup $filename
file rename -force $temp $filename
In addition to Glenn's answer: if you would like to operate on the whole contents of the file at once, and the file is not too large, then you can use fileutil::updateInPlace. Here is a code sample:
package require fileutil

proc processContents {fileContents} {
    # Search: AA, replace: aa
    return [string map {AA aa} $fileContents]
}

fileutil::updateInPlace data.txt processContents
If this is Linux it'd be easier to exec "sed -i" and let it do the work for you.
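For example (a sketch; assumes GNU sed is on the PATH and the file is named data.txt as in the previous answer):

# replace every AA with aa, in place, by delegating to sed
exec sed -i {s/AA/aa/g} data.txt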
If it's a short file you can just store it in a list:
set temp ""

# saves each line to an element in a temp list
set file [open $loc]
foreach {i} [split [read $file] \n] {
    lappend temp $i
}
close $file

# rewrites your file
set file [open $loc w+]
foreach {i} $temp {
    # do something, for your example:
    puts $file [string tolower $i]
}
close $file
set fileID [open "lineremove.txt" r]
set temp [open "temp.txt" w+]
while {[gets $fileID lineInfo] >= 0} {
    regsub -all "deleted information type here" $lineInfo "" lineInfo
    puts $temp $lineInfo
}
close $fileID
close $temp
file delete -force lineremove.txt
file rename -force temp.txt lineremove.txt
For the next poor soul looking for a SIMPLE Tcl script to change all occurrences of one word to a new word: the script below reads each line of myfile, changes every red to blue, and writes the line to a new file called mynewfile.
set fin "myfile"
set fout "mynewfile"

set win [open $fin r]
set wout [open $fout w]
while {[gets $win line] != -1} {
    set line [regsub -all {red} $line blue]
    puts $wout $line
}
close $win
close $wout