I want to read the values in cd.txt line by line and treat each value as a variable.
# Damper Properties
set Kd 0.000001;
set Cd [open "CD.txt" r];
set ad 0.000001;
if [catch {open $CD.txt r} cd] { ; # Open the input file and check for error
puts stderr "Cannot open $inFilename for reading"; # output error statement
} else {
foreach line [[read $cd] \n] { ; # Look at each line in the file
if {[llength $line] == 0} { ; # Blank line -> do nothing
continue;
} else {
set Xvalues $line; # execute operation on read data
}
}
close $cd; ; # Close the input file
}
# Define ViscousDamper Material
#uniaxialMaterial ViscousDamper $matTag $Kd $Cd $alpha
uniaxialMaterial ViscousDamper 1 $Kd $Cd $ad
What's wrong with it? Each value in cd.txt is a decimal number. The loop is not working. Please help.
This line:
foreach line [[read $cd] \n] { ; # Look at each line in the file
is missing a critical bit. This version looks more correct:
foreach line [split [read $cd] \n] { ; # Look at each line in the file
(I hope you're really doing more than just setting Xvalues to each non-empty line, since that's unlikely to be useful by itself.)
You're opening the wrong file:
set Cd [open "CD.txt" r];
The Cd variable now holds a file handle, and the value is something like "file3421". You then do
if [catch {open $CD.txt r} cd] { ; # Open the input file and check for error
You're now trying to open the file "file3421.txt" -- I'd expect you to get a "file not found" error there.
Also, you should brace the expression:
if {[catch {open "CD.txt" r} cd]} { ...
#..^............................^
The idiomatic way to read lines from a file is:
while {[gets $cd line] != -1} { ...
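Putting those fixes together, the file-reading part of the script might look like this (a minimal sketch keeping the question's variable names; collecting the values into Xvalues is only a guess at the intent):
# Open the input file and check for errors
if {[catch {open "CD.txt" r} cd]} {
    puts stderr "Cannot open CD.txt for reading: $cd"
} else {
    set Xvalues {}
    while {[gets $cd line] != -1} {        ;# read one line at a time
        if {[string trim $line] eq ""} {
            continue                       ;# skip blank lines
        }
        lappend Xvalues $line              ;# collect this value (a guess at the intent)
    }
    close $cd
}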
I got a request to make a Perl script using an array uniq:
my @filtered = uniq(@array);
but I don't know how to code it in Perl. Can anyone help me? It would be appreciated.
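For the uniq part, a minimal sketch, assuming a reasonably recent List::Util (uniq is exported since version 1.45):
use strict;
use warnings;
use List::Util qw(uniq);

my @array    = qw(a b a c b);
my @filtered = uniq(@array);    # keeps the first occurrence of each value
print "@filtered\n";            # prints: a b c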
======================================================
1) Input .csv file:
ATPG TCD tcd/T1, xxx/tcd/T2
ATPG INSTANCE inst1/I1, xxx/inst2/I2, inst3/I3
ATPG PATTERN pat1/P1, pat2/P2
SIM BLACKBOX bb1/B1, bb2/B2
2) Request:
STEP 1: read input .csv file
STEP 2: create output out.txt with expected contents
======================================================
3) output out.txt (if (1st column = ATPG))
// TCD
read_core_description tcd/T1
read_core_description xxx/tcd/T2
// INSTANCE
add_core_instance -instance inst1/I1
add_core_instance -instance xxx/inst2/I2
add_core_instance -instance inst3/I3
//PATTERN
read_patterns pat1/P1
read_patterns pat2/P2
======================================================
My current code is below; please help me continue at the end.
#1) Open input
my $inF="input.csv";
open( my $inF_var, "<", $inF) || die ("Can't open input file\n");
#2) Open output
my $outF_Blk="output/out.txt";
open( my $outF_Blk_var, ">$outF_Blk") || die ("Can't open output file");
#3) Write to files
while( <$inF_var> ){
next if (/^$/);
my $line=$_;
my @value=split(/:/, $line);
# EXTRACT THE EXPECTED OUTPUT HERE #
### NEED EVERYONE CAN HELP ####
I've only implemented one of the types of output. The others should be simple enough to add.
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
my %process = (
TCD => \&process_tcd,
INSTANCE => \&process_instance,
PATTERN => \&process_pattern,
);
while (<DATA>) {
my ($type, $cmd, @row) = split /[\s,]+/;
next unless $type eq 'ATPG';
if (!exists $process{$cmd}) {
warn "Unknown record type: $_";
next;
}
$process{$cmd}->(@row);
}
sub process_tcd {
my @data = @_;
say "//TCD\n";
say "read_core_description $_" for @data;
say '';
}
sub process_instance {}
sub process_pattern {}
__DATA__
ATPG TCD tcd/T1, xxx/tcd/T2
ATPG INSTANCE inst1/I1, xxx/inst2/I2, inst3/I3
ATPG PATTERN pat1/P1, pat2/P2
SIM BLACKBOX bb1/B1, bb2/B2
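The two stubbed-out handlers could be filled in along the same lines as process_tcd; a rough sketch based on the expected output in the question (these would go before the __DATA__ section):
sub process_instance {
    my @data = @_;
    say '// INSTANCE';
    say "add_core_instance -instance $_" for @data;
    say '';
}

sub process_pattern {
    my @data = @_;
    say '// PATTERN';
    say "read_patterns $_" for @data;
    say '';
}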
I am trying to do this:
I have a text file which has a line starting with the specific pattern:
vvdd vdd
I need to locate this line and insert another line with the pattern vvss vss right after it.
All the other lines below the original line have to be shifted down accordingly.
Here is my code so far, which inserts at the wrong location:
set filename "path265.spi"
set line_no 0
set count 0
set pattern "vvdd vdd"
set fp [open $filename r+]
while {[gets $fp line] != -1} {
incr count 1
if {[regexp $pattern $line]} {
set line_no $count
}
}
seek $fp 0
for {set i 0} {$i<$line_no} {incr i} {gets $fp replace}
puts $fp "\nvvnw vnw 0 1.08"
puts $line_no
puts $count
close $fp
You can use ::fileutil::updateInPlace to simplify things.
package require fileutil
proc change {pattern newtext data} {
set res {}
foreach line [split $data \n] {
lappend res $line
if {[regexp $pattern $line]} {
lappend res $newtext
}
}
return [join $res \n]
}
::fileutil::updateInPlace path265.spi {change "^vvdd vdd" "vvss vss"}
The updateInPlace command takes a file name and a command prefix. It adds the contents of the file to that command prefix and invokes it, then writes the result back to the file.
In this case, the command called iterates through the lines of the file, adding $newtext after every line that matches $pattern. This is just one way to write the procedure for making the change. If only the first match is relevant, this could be used:
proc change {pattern newtext data} {
set lines [split $data \n]
set index [lsearch -regexp $lines $pattern]
if {$index >= 0} {
set lines [linsert $lines $index+1 $newtext]
}
return [join $lines \n]
}
etc.
Documentation: fileutil package, foreach, if, lappend, linsert, lsearch, package, proc, regexp, return, set, split
I've written a piece of code using a while loop, but it takes too much time to read the file line by line. Can anyone help me, please?
My code:
set a [open myfile r]
while {[gets $a line]>=0} {
    # do something by using the line variable
}
The code looks fine. It's pretty quick (if you're using a sufficiently new version of Tcl; historically, there were some minor versions of Tcl that had buffer management problems) and is how you read a line at a time.
It's a little faster if you can read in larger amounts at once, but then you need to have enough memory to hold the file. To put that in context, files that are a few million lines are usually no problem; modern computers can handle that sort of thing just fine:
set a [open myfile]
set lines [split [read $a] "\n"]
close $a; # Saves a few bytes :-)
foreach line $lines {
# do something with each line...
}
If it truly is a large file, you should do the following to read in only a line at a time. The whole-file read approach above will pull the entire contents into RAM.
https://www.tcl.tk/man/tcl8.5/tutorial/Tcl24.html
#
# Count the number of lines in a text file
#
set infile [open "myfile.txt" r]
set number 0
#
# gets with two arguments returns the length of the line,
# -1 if the end of the file is found
#
while { [gets $infile line] >= 0 } {
incr number
}
close $infile
puts "Number of lines: $number"
#
# Also report it in an external file
#
set outfile [open "report.out" w]
puts $outfile "Number of lines: $number"
close $outfile
I have 2 files, each having 50 lines.
FILE1
FILE2
Now I need to read the two files line by line in a single while or for loop and push each pair of corresponding lines onto the 2 output arrays. I have tried something like this, but it's not working out. Kindly help.
#!/usr/bin/perl
my @B = ();
my @C = ();
my @D = ();
my $lines = 0;
my $i = 0;
my $sizeL = 0;
my $sizeR = 0;
my $gf = 0;
$inputFile = $ARGV[0];
$outputFile = $ARGV[1];
open(IN1FILE,"<$inputFile") or die "cant open output file ";
open(IN2FILE,"<$outputFile") or die "cant open output file";
while((@B=<IN1FILE>)&&(@C= <IN2FILE>))
{
my $line1 = <IN1FILE>;
my $line2 = <IN2FILE>;
print $line2;
}
Here array 2 is not getting built, but I am getting array 1's values.
In your loop condition, you read the whole files into their arrays. The list assignment is then used as a boolean value. This works only once, because the files have already been read to the end once the condition has been evaluated. As a result, the readlines inside the loop return undef.
Here is code that should work:
my (@lines_1, @lines_2);
# read until one file hits EOF
while (!eof $INFILE_1 and !eof $INFILE_2) {
my $line1 = <$INFILE_1>;
my $line2 = <$INFILE_2>;
say "from the 1st file: $line1";
say "from the 2nd file: $line2";
push @lines_1, $line1;
push @lines_2, $line2;
}
You could also do:
my (@lines_1, @lines_2);
# read while both files return strings
while (defined(my $line1 = <$INFILE_1>) and defined(my $line2 = <$INFILE_2>)) {
say "from the 1st file: $line1";
say "from the 2nd file: $line2";
push @lines_1, $line1;
push @lines_2, $line2;
}
Or:
# read once into arrays
my @lines_1 = <$INFILE_1>;
my @lines_2 = <$INFILE_2>;
my $min_size = $#lines_1 < $#lines_2 ? $#lines_1 : $#lines_2; # $#foo is last index of @foo
# then iterate over data
for my $i ( 0 .. $min_size) {
my ($line1, $line2) = ($lines_1[$i], $lines_2[$i]);
say "from the 1st file: $line1";
say "from the 2nd file: $line2";
}
Of course, I am assuming that you did use strict; use warnings; and use feature 'say', and used the 3-arg form of open with lexical filehandles:
my ($file_1, $file_2) = @ARGV;
open my $INFILE_1, '<', $file_1 or die "Can't open $file_1: $!"; # also, provide the actual error!
open my $INFILE_2, '<', $file_2 or die "Can't open $file_2: $!";
I also urge you to use descriptive variable names instead of single letters, and to declare your variables in the innermost possible scope — declaring vars at the beginning is almost the same as using bad, bad globals.
Let's say I opened a file, then parsed it into lines. Then I use a loop:
foreach line $lines {}
Inside the loop, for some lines, I want to replace them in the file with different lines. Is that possible? Or do I have to write to a temporary file, then swap the files when I'm done?
e.g., if the file contained
AA
BB
and then I replace capital letters with lower case letters, I want the original file to contain
aa
bb
Thanks!
For plain text files, it's safest to move the original file to a "backup" name, then rewrite it using the original filename:
Update: edited based on Donal's feedback
set timestamp [clock format [clock seconds] -format {%Y%m%d%H%M%S}]
set filename "filename.txt"
set temp $filename.new.$timestamp
set backup $filename.bak.$timestamp
set in [open $filename r]
set out [open $temp w]
# line-by-line, read the original file
while {[gets $in line] != -1} {
#transform $line somehow
set line [string tolower $line]
# then write the transformed line
puts $out $line
}
close $in
close $out
# keep the original data under the backup name (hard link to the old contents)
file link -hard $backup $filename
# move the new data to the proper filename
file rename -force $temp $filename
In addition to Glenn's answer: if you would like to operate on the whole contents of the file at once, and the file is not too large, then you can use fileutil::updateInPlace. Here is a code sample:
package require fileutil
proc processContents {fileContents} {
# Search: AA, replace: aa
return [string map {AA aa} $fileContents]
}
fileutil::updateInPlace data.txt processContents
If this is Linux, it'd be easier to exec "sed -i" and let it do the work for you.
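For example, something along these lines (assuming GNU sed, since the -i in-place flag is not portable to every sed, and reusing the AA to aa example above):
exec sed -i {s/AA/aa/g} data.txt
Note that Tcl's exec raises an error if the command exits non-zero or writes to stderr, so failures surface immediately.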
If it's a short file, you can just store it in a list:
set temp ""
# save each line as an element of a temp list
set file [open $loc]
foreach {i} [split [read $file] \n] {
lappend temp $i
}
close $file
#rewrites your file
set file [open $loc w+]
foreach {i} $temp {
#do something, for your example:
puts $file [string tolower $i]
}
close $file
set fileID [open "lineremove.txt" r]
set temp [open "temp.txt" w+]
while {[gets $fileID lineInfo] != -1} {     ;# gets returns -1 at end of file
    regsub -all "deleted information type here" $lineInfo "" lineInfo
    puts $temp $lineInfo
}
close $fileID
close $temp
file delete -force lineremove.txt
file rename -force temp.txt lineremove.txt
For the next poor soul looking for a SIMPLE Tcl script to change all occurrences of one word to a new word: the script below reads each line of myfile, changes every red to blue, and writes the result to a new file called mynewfile.
set fin "myfile"
set fout "mynewfile"
set win [open $fin r]
set wout [open $fout w]
while {[gets $win line] != -1} {
set line [regsub -all {red} $line blue]
puts $wout $line
}
close $win
close $wout