I'm having some trouble working with a file containing floating-point numbers.
These are some rows from my file:
174259 1.264944 -.194235 4.1509e-5
174260 1.264287 -.191802 3.9e-2
174261 1.266468 -.190813 3.9899e-2
174262 1.267116 -.193e-3 4.2452e-2
What I'm trying to do is find the row containing my desired number (e.g. "174260") and extract the following three numbers.
This is my code:
set Output [open "Output3.txt" w]
set FileInput [open "Input.txt" r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
set Desire 174260
set FindElem [lsearch -all -inline $inputList $Desire*]
set Coordinate [ regexp -inline -all {\S+} $FindElem ]
set x1 [lindex $Coordinate 1]
set y1 [lindex $Coordinate 2]
set z1 [lindex $Coordinate 3]
puts $Output "$x1 $y1 $z1"
Using regexp with the pattern {\S+}, I get a curly bracket as the last character:
1.264287 -.191802 3.9e-2}
I don't know how to extract only the number values and not the entire string.
I'd be really tempted in this case to go with the simplest possible option.
set Output [open "Output3.txt" w]
set FileInput [open "Input.txt" r]
set Desire 174260
while {[gets $FileInput line] >= 0} {
    # split the line on whitespace and pick out the four fields
    lassign [regexp -inline -all {\S+} $line] key x1 y1 z1
    if {$key == $Desire} {
        puts $Output "$x1 $y1 $z1"
    }
}
close $FileInput
close $Output
Failing that, your problem is that you're using lsearch -all -inline, which returns a list, and then processing that list as a string with regexp. You should handle that using:
foreach found $FindElem {
    set Coordinate [regexp -inline -all {\S+} $found]
    set x1 [lindex $Coordinate 1]
    set y1 [lindex $Coordinate 2]
    set z1 [lindex $Coordinate 3]
    puts $Output "$x1 $y1 $z1"
}
This isn't really as good as just understanding the lines properly in the first place, and working with the data one line at a time is pretty trivial.
Related
I have a list of files and a separate list of their sizes obtained with "file size <file_name>".
I am required to sort the files in ascending order based on the size and then feed it further for processing.
Can someone provide a step by step process I could follow?
This is what I have done so far:
set direc "<Any directory to look for files in>"
set folderFiles [glob -directory $direc -nocomplain -type f *.xml]
set fileSizes {}
puts "Files to be processed are:"
puts "$folderFiles"
puts "Sizes of files in this order are:"
foreach tempFile $folderFiles {
    lappend fileSizes [file size $tempFile]
}
puts $fileSizes
set fileDict [dict create [lindex $folderFiles 0] [lindex $fileSizes 0]]
for {set i 1} {$i < [llength $folderFiles]} {incr i} {
    dict lappend fileDict [lindex $folderFiles $i] [lindex $fileSizes $i]
}
puts $fileDict
So this gives me a dictionary where the keys are files and the values are file sizes. I just need to sort this dictionary based on the values, i.e. the file sizes.
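One minimal way to sort that key/value list by its values, assuming Tcl 8.6 or later for lsort's -stride option, would be a sketch like this:
# A dict is an even-length list of key/value pairs, so sort it in groups of
# two elements, comparing the value (index 1 within each pair) numerically.
set fileDict [lsort -stride 2 -index 1 -integer $fileDict]
puts $fileDict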
The first thing you need to do is to get the list of filenames and their sizes. You can keep the sizes separately.
set filenames [glob -type f *.foo]; # Or whatever
set sizes [lmap f $filenames {file size $f}]
Then we sort the sizes, but get the indices of the sort back rather than the sorted list.
set indices [lsort -indices -integer $sizes]
Now, we use those indices to construct the sorted filenames:
set filenames [lmap idx $indices {lindex $filenames $idx}]
We can combine some of these things into a helper procedure:
proc SortFilesBySize {filenames} {
    set sizes [lmap f $filenames {file size $f}]
    return [lmap idx [lsort -indices -integer $sizes] {lindex $filenames $idx}]
}
set filenames [glob -type f *.foo]; # Or whatever
puts [join [SortFilesBySize $filenames] "\n"]
One way:
#!/usr/bin/env tclsh
proc zip {list1 list2} {
    lmap a $list1 b $list2 { list $a $b }
}
proc heads {pairs} {
    lmap pair $pairs { lindex $pair 0 }
}
proc sort_by_size {names sizes} {
    heads [lsort -integer -increasing -index 1 [zip $names $sizes]]
}
set names {a.txt b.txt c.txt}
set sizes {3 2 1}
puts [sort_by_size $names $sizes]
Combines the names and sizes into a list of filename/size pairs, sorts based on size, and then returns just the reordered filenames. Essentially a Tcl version of Perl's classic Schwartzian Transform idiom.
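With the sample data above, sort_by_size returns the names in ascending size order, so the script prints: c.txt b.txt a.txt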
I'm an absolute beginner and can't quite wrap my head around Tcl. I need some help with something that I think is very basic. Any help would be appreciated. I have a text file that I want to import into Tcl. I'll give you the syntax of the file and my desired way to store it:
text FILE to import into Tcl:
Singles 'pink granny fuji'
Singles2 'A B C D E'
Couples 'bread butter honey lemon cinnamon sugar'
Couples2 'A1 A2 B1 B2 C1 C2 D1 D2'
My desired format:
For lines 1 & 2:
Singles
[pink granny fuji] ( 3 single elements)
Singles2
[A B C D E] (5 single elements)
For lines 3 & 4:
Couples
[bread butter
honey lemon
cinnamon sugar] (3 x 2 array)
Couples2
[A1 A2
B1 B2
C1 C2
D1 D2] (4 x 2 array)
The import text file can in theory have any number of elements, but lines 3 & 4 will always have an even number of elements so that they form pairs, so I know a foreach loop is needed to capture any number of elements. I know the code will also need to strip the apostrophes from the text file.
I'm just really struggling at the moment, really appreciate any help at all, thank you :)
Here is a solution. It works on your sample input; I hope it is the kind of solution you are looking for.
set handle [open "text_import" "r"]
set content [read $handle]
close $handle
# strip the apostrophes
regsub -all {'} $content "" content
set content [split $content \n]
foreach ele $content {
    puts $ele
    if {[regexp -nocase -- {([^ ]+) +(.*)} $ele - key val]} {
        puts $key
        if {[regexp -nocase -- "single" $key]} {
            # one flat list of elements
            set val1 [split $val " "]
            set arr($key) $val1
        } elseif {[regexp -nocase -- "couple" $key]} {
            # group the elements two at a time
            set biggerList [list]
            set val2 [split $val " "]
            for {set i 0} {$i < [llength $val2]} {incr i 2} {
                set tempList [list [lindex $val2 $i] [lindex $val2 [expr {$i + 1}]]]
                lappend biggerList $tempList
            }
            set arr($key) $biggerList
        }
    }
}
parray arr
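If it helps to see how the stored values can then be used: with the sample input, arr(Singles) holds a flat list and arr(Couples) holds a list of pairs, so individual entries come out with ordinary list commands, e.g.:
puts [lindex $arr(Singles) 0]    ;# pink
puts [lindex $arr(Couples) 1]    ;# honey lemon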
One possible solution: instead of loading the text file as data, we can load it as Tcl source if we make the right definitions.
proc Singles args {
    set str [string trim [join $args] ']
    puts "\[$str]"
}
proc Singles2 args {
    set str [string trim [join $args] ']
    puts "\[$str]"
}
proc Couples args {
    set list [split [string trim [join $args] ']]
    foreach {a b} $list {
        lappend list2 "$a $b"
    }
    set str [join $list2 \n]
    puts "\[$str]"
}
proc Couples2 args {
    set list [split [string trim [join $args] ']]
    foreach {a b} $list {
        lappend list2 "$a $b"
    }
    set str [join $list2 \n]
    puts "\[$str]"
}
source textfile.txt
Documentation:
foreach,
join,
lappend,
list,
proc,
puts,
set,
source,
split,
string
Maybe this is pretty stupid, but I really can't find a solution. I created two variables and want to transform them into lists.
These commands are tool-specific, but they work the way I want:
redirect max_transition {report_constraint -view $pargs(-scenario) -drv_violation_type {max_transition} -all_violators} -variable
redirect max_capacitance {report_constraint -view $pargs(-scenario) -drv_violation_type {max_capacitance} -all_violators} -variable
Now I want to create Tcl lists out of them. I could use a loop, because the data has the same structure.
set reports {$max_transition $max_capacitance}
set report_length [llength $reports]
for {set i 0} {$i < $report_length} {incr i} {
    set tns_value 0
    set max_wns 0
    set vios 0
    set report [lindex $reports $i]
    puts $report
    # remove all unnecessary white spaces
    set no_space [regexp -all -inline {\S+} $report]
    # insert a new line for every path
    set insert_lines [string map {" U_" \nU_} $no_space]
    # create list out of result reports
    set report_list [split $insert_lines "\n"]
    if {[llength $report_list] > 1} {
        for {set i 1} {$i < [llength $report_list]} {incr i} {
            # get value of violation
            set slack [lindex [split [lindex $report_list $i] " "] 3]
            set tns_value [expr $tns_value + $slack]
            if {$vios == 0} {set max_wns $slack}
            incr vios
        }
    }
    # write out values
    puts "$pargs(-scenario), $report, $max_wns, $tns_value, $vios"
}
But this does not work out. The loop just prints the variables' names (because of "puts $report") but not their contents.
If I do it without a loop (running the same code consecutively for each variable), I get the lists I want.
So how can I process these variables as a whole in a loop?
The problem lies in the loop variable i in the inner loop shown below: it overwrites the value of the outer loop's variable. Try changing the inner-loop variable to j.
for {set i 1} {$i < [llength $report_list]} {incr i} {
    # get value of violation
    set slack [lindex [split [lindex $report_list $i] " "] 3]
    set tns_value [expr $tns_value + $slack]
    if {$vios == 0} {set max_wns $slack}
    incr vios
}
It's hard to write an answer for this since so much is unknown. To begin with, the reason the loop only prints the variable names is that braces prevent substitution, so {$max_transition $max_capacitance} is a list of the two literal strings $max_transition and $max_capacitance rather than their values. You should probably change to building the list with the list command and iterating with a foreach loop, like this:
set reports [list $max_transition $max_capacitance]
foreach report $reports {
Since you don't really need to use a for loop here, it makes sense to simplify it. Please comment and I will iteratively improve the answer if I can.
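For what it's worth, a rough sketch of how the question's processing could sit inside that foreach, using a second foreach instead of the index-based inner loop so the two loops no longer share a variable (this keeps the original parsing logic and assumes the same report structure and the pargs array from the question):
set reports [list $max_transition $max_capacitance]
foreach report $reports {
    set tns_value 0
    set max_wns 0
    set vios 0
    # collapse whitespace, then break the report into one element per path
    set no_space [regexp -all -inline {\S+} $report]
    set report_list [split [string map {" U_" \nU_} $no_space] "\n"]
    # skip the header element and accumulate the slack values
    foreach entry [lrange $report_list 1 end] {
        set slack [lindex [split $entry " "] 3]
        set tns_value [expr {$tns_value + $slack}]
        if {$vios == 0} {set max_wns $slack}
        incr vios
    }
    puts "$pargs(-scenario), $report, $max_wns, $tns_value, $vios"
}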
I am trying to find a matching row in a text file having 4 columns of numbers like this:
number coordinates
101138 0.420335 -.238945 .1446484
101139 .4134844 -0.2437 6.7484e-2
101140 .4140046 -.243681 7.3344e-2
I need to read the text file and find a specific number in the first column and plot only its coordinates.
This is my code, in which I try to find the coordinates for number "101138", but something is not working: no match is found.
set Output [open "Output1.txt" w]
set FileInput [open "Input.txt" r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
set Text [lsearch -all -inline $inputList "101138"]
foreach elem $Text {
    puts $Output "[lindex $elem 1] [lindex $elem 2] [lindex $elem 3]"
}
You are searching for a list element that exactly matches your given value "101138". However, your list is constructed from lines which have multiple whitespace-delimited columns. You need to amend your search to match this value in the correct column.
One method would be to split each line again and perform an equals match on the correct column (a sketch of this is shown after the example session below). Another might be to use a glob or regexp pattern that actually matches the whole line, i.e.:
% set lst {"123 abc def" "456 efg ijk" "789 zxc cvb"}
"123 abc def" "456 efg ijk" "789 zxc cvb"
% lsearch -all -inline $lst "456*"
{456 efg ijk}
% lsearch -all -inline -regexp $lst "^456"
{456 efg ijk}
The first lsearch does a standard (glob) match, looking for a list element beginning with 456 followed by anything.
The second lsearch searches for a list element that begins with "456" using regular expression matching.
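For the split-and-compare method mentioned above, a minimal sketch using the question's filenames and target value might be:
set Output [open "Output1.txt" w]
set FileInput [open "Input.txt" r]
while {[gets $FileInput line] >= 0} {
    # split the line into whitespace-separated columns
    set cols [regexp -inline -all {\S+} $line]
    # exact (equals) match against the first column only
    if {[lindex $cols 0] eq "101138"} {
        puts $Output "[lindex $cols 1] [lindex $cols 2] [lindex $cols 3]"
    }
}
close $FileInput
close $Output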
Let's say I opened a file and parsed it into lines. Then I use a loop:
foreach line $lines {}
Inside the loop, for some lines, I want to replace them in the file with different lines. Is that possible? Or do I have to write to a temporary file and then replace the files when I'm done?
e.g., if the file contained
AA
BB
and then I replace capital letters with lower case letters, I want the original file to contain
aa
bb
Thanks!
For plain text files, it's safest to move the original file to a "backup" name and then rewrite it under the original filename:
Update: edited based on Donal's feedback
set timestamp [clock format [clock seconds] -format {%Y%m%d%H%M%S}]
set filename "filename.txt"
set temp $filename.new.$timestamp
set backup $filename.bak.$timestamp
set in [open $filename r]
set out [open $temp w]
# line-by-line, read the original file
while {[gets $in line] != -1} {
    # transform $line somehow
    set line [string tolower $line]
    # then write the transformed line
    puts $out $line
}
close $in
close $out
# keep the original data reachable under the backup name
file link -hard $backup $filename
# move the new data to the proper filename
file rename -force $temp $filename
In addition to Glenn's answer: if you would like to operate on the file on a whole-contents basis and the file is not too large, you can use fileutil::updateInPlace. Here is a code sample:
package require fileutil
proc processContents {fileContents} {
    # Search: AA, replace: aa
    return [string map {AA aa} $fileContents]
}
fileutil::updateInPlace data.txt processContents
If this is on Linux, it'd be easier to exec "sed -i" and let it do the work for you.
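For example, for the AA to aa replacement from the question, something along these lines should do (assuming GNU sed, whose -i option edits the file in place; the filename is just an example):
exec sed -i {s/AA/aa/g} data.txt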
If it's a short file you can just store it in a list:
set temp ""
# save each line as an element of a temp list
set file [open $loc]
foreach i [split [read $file] \n] {
    lappend temp $i
}
close $file
# rewrite your file
set file [open $loc w+]
foreach i $temp {
    # do something; for your example:
    puts $file [string tolower $i]
}
close $file
set fileID [open "lineremove.txt" r]
set temp [open "temp.txt" w+]
# copy the file line by line, deleting the unwanted text as we go
while {[gets $fileID lineInfo] >= 0} {
    regsub -all "deleted information type here" $lineInfo "" lineInfo
    puts $temp $lineInfo
}
# close both channels before replacing the original file
close $fileID
close $temp
file delete -force lineremove.txt
file rename -force temp.txt lineremove.txt
For the next poor soul who is looking for a SIMPLE Tcl script to change all occurrences of one word to a new word: the script below will read each line of myfile, change every red to blue, and then output the line to a new file called mynewfile.
set fin "myfile"
set fout "mynewfile"
set win [open $fin r]
set wout [open $fout w]
while {[gets $win line] != -1} {
    # replace every occurrence of "red" on the line with "blue"
    set line [regsub -all {red} $line blue]
    puts $wout $line
}
close $win
close $wout