Variable substitution in foreach loop - Tcl/Tk programming - loops

I am trying to find the number of elements that failed, and the results are to be printed to a .csv file.
This is my code:
set n_min_len 10
set n_max_len 50
set n_angle 60
foreach check {"min length" "max length" "angle"} \
        fail  {$n_min_len $n_max_len $n_angle} {
    puts $file [format %30s%10s "$check...." "$fail"]
}
I get output like
min length....$n_min_len
max length....$n_max_len
and so on. Instead, I wanted the output to be
min length....10
max length....50
Can someone help me figure out how to get this?
Thank you!!

The problem is with this part:
{$n_min_len $n_max_len $n_angle}
The braces block any substitution. Instead you should write
"$n_min_len $n_max_len $n_angle"
or
[list $n_min_len $n_max_len $n_angle]
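For example, a minimal self-contained sketch of the corrected loop (using [list ...] and writing to stdout rather than the original $file channel) would be:

set n_min_len 10
set n_max_len 50
set n_angle 60

foreach check {"min length" "max length" "angle"} \
        fail  [list $n_min_len $n_max_len $n_angle] {
    # $fail now holds the value (10, 50, 60), not the variable name
    puts stdout [format %30s%10s "$check...." $fail]
}

With the values above this prints min length....10, max length....50 and angle....60, right-aligned in 30- and 10-character fields. [list ...] is slightly safer than double quotes when the substituted values might themselves contain spaces.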

Related

TCL Error: can't set "::streamID(1,10,1)": variable isn't array

I have read the thread "Can't read variable, isn't array", and I think it may be related somehow, but I can't figure out how. In the following Tcl snippet, a three-dimensional array ::streamId is tested and a value is read. The array contains a scalar ID value. The last line in the snippet creates an error that says
can't set "::streamId(1,10,1)": variable isn't array
while executing
"set ::streamId($chas,$card,$port) $nextId"
(procedure "getNextStreamId" line 28)
I interpret this as meaning that $nextId is something other than a scalar and cannot be put into a three-dimensional array of scalars. Is my interpretation of the error incorrect? I was pretty confident that the array holds scalar values, so I started to think maybe there is some data-safety issue here.
# get current streamId
if { [ catch {info exists $::streamId($chas,$card,$port)} ] == 0 } {
    if {$::testEnv(verbose) >= $verbLevel} {
        logInfo [getProcName] "pre-existing streamId found for: \
            \n dutAndPort: $dutAndPort \
            \n ixiaPort: {$chas $card $port}\
            \n streamId: $::streamId($chas,$card,$port) \
            \n incrementing to next one..."
    }
    set nextId [ mpexpr $::streamId($chas,$card,$port) + 1 ]
} else {
    if {$::testEnv(verbose) >= 0} {
        logInfo [getProcName] "No pre-existing streamId found for: \
            \n\t dutAndPort: $dutAndPort \
            \n\t ixiaPort: {$chas $card $port}\
            \n\t setting to 1"
    }
    set nextId 1
}
set curId [ mpexpr $nextId - 1 ]
set ::streamId($chas,$card,$port) $nextId
In your code, I guess you wanted to check whether the array ::streamId has the index $chas,$card,$port:
info exists $::streamId($chas,$card,$port)
which is incorrect. You should use
info exists ::streamId($chas,$card,$port)
i.e. without the dollar sign. Only then can the if block actually verify the existence of the index $chas,$card,$port.
Then, at the end, you try to set the value of the index $chas,$card,$port to $nextId:
set ::streamId($chas,$card,$port) $nextId
which is incorrect, because it is kept outside the if block that checks whether the index $chas,$card,$port exists.
The actual error message, though, is telling you that a scalar variable named streamId already exists:
% set ::streamId 1
1
% set ::streamId(1,10,1) 20
can't set "::streamId(1,10,1)": variable isn't array
%
Make sure you don't use the same name for both a scalar variable and an array.
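Putting the two fixes together, a minimal sketch of the intended check-and-update logic could look like this (using plain expr here; the original code uses mpexpr, and this assumes no scalar ::streamId has been created elsewhere):

if { [info exists ::streamId($chas,$card,$port)] } {
    # the index exists: move on to the next ID
    set nextId [expr {$::streamId($chas,$card,$port) + 1}]
} else {
    # first time this chassis/card/port combination is seen
    set nextId 1
}
set ::streamId($chas,$card,$port) $nextId

If a scalar ::streamId does already exist, unset it (or pick a different name) before using the name as an array.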

Bash formatting text file into columns

I have a text file with data in it which is set up like a table, but separated with commas, eg:
Name, Age, FavColor, Address
Bob, 18, blue, 1 Smith Street
Julie, 17, yellow, 4 John Street
Firstly I have tried using a for loop and placing each 'column', with all its values, into a separate array.
e.g. 'nameArray' would contain bob, julie.
Here is the code from my actual script; there are 12 columns, hence why c should not be greater than 12.
declare -A Array
for ((c = 1; c <= 12; c++))
{
    for ((i = 1; i <= $total_lines; i++))
    {
        record=$(cat $FILE | awk -F "," 'NR=='$i'{print $'$c';exit}' | tr -d ,)
        Array[$c,$i]=$record
    }
}
From here I then use the 'printf' function to format each array and print them as columns. The issue with this is that I have more than 3 arrays; in my actual code they're all on the same 'printf' line, which I don't like, and I know it is a silly way to do it.
for ((i = 1; i <= $total_lines; i++))
{
    printf "%0s %-10s %-10s...etc \n" "${Array[1,$i]}" "${Array[2,$i]}" "${Array[3,$i]}" ...etc
}
This does, however, give me the desired output.
I would like to figure out how to do this another way that doesn't require a massive print statement. Also, the first time through the for loop I get an error from 'awk'.
Any advice would be appreciated; I have looked through multiple threads and posts to try and find a suitable solution but haven't found anything that works.
Try the column command, like
column -t -s','
That is the quick answer; see the man page for details.
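Applied to the sample data (assuming it is saved as people.csv, a made-up name), that would be:

column -t -s',' people.csv

which produces something like (exact widths are chosen by column itself):

Name    Age   FavColor   Address
Bob     18    blue       1 Smith Street
Julie   17    yellow     4 John Street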

Find a list of max values in a text file using awk

I am new to awk and I cannot figure out the correct syntax for the task I am working on.
I have a text file which looks something like this (the content is always sorted but is not always the same, so I cannot hard code the index of the array):
27 abc123
27 abd333
27 dce123
23 adb234
21 abc789
18 bcd213
So apparently the max is 27. However, I want my output to be:
27 abc123
27 abd333
27 dce123
and not the first row only.
The second column is just along for the ride; my code always sorts the text file based on the first column.
My code right now sets the max to the first value (27, for example), and as it reads through the lines it stores only the rows with the max value in an array, then prints them at the end.
awk 'BEGIN {max=$1} {if(($1)==max) a[NR]=($0)} END {for (i in a) print a[i]}' file
You can't read fields in a BEGIN block, since it's executed before the file is read.
To find the first record, use the pattern NR == 1. NR is the number of the current record. To find the other records, just check whether $1 equals the max value.
NR == 1 { max = $1 }
$1 == max { print }
Since your input is always sorted, you can optimise this program by exiting after reading all the records with the max value:
$1 != max { exit }
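Put together as one command (a sketch combining the patterns above) and run against the sample input, this prints exactly the rows carrying the maximum value:

awk 'NR == 1 { max = $1 } $1 != max { exit } { print }' file

which, for the sample above, outputs

27 abc123
27 abd333
27 dce123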

Compare values in an array

I am using Perl to work with database queries that return multiple results, like this:
select name,percent from table order by percent desc
I want to retrieve only the values that match the if conditions, as in this code:
while (@data = $sth->fetchrow_array()) {
    $toto    = $data[0];
    $percent = $data[1];
    foreach $percent (@data) {
        if ($percent > 80) {
            $output .= $toto.'='.$percent.'%,';
            $state = "BAD";
        }
        elsif ($percent > 60 && $percent < 80) {
            $output .= $toto.'='.$percent.'%,';
            $state = "NOTGOOD";
        }
    }
}
my $str = "$state $output";
# Display output
print $str."\n";
undef($str);
exit $ERRORS{$status};
This code only prints the last state (NOTGOOD); I would like it to print BAD for the values that call for it.
here is the result of the query:
test 40
test2 80
test3 75
test4 90
test5 50
test6 45
and here is the printed output:
NOTGOOD test4=90%,test2=80%,test3=75%,
All the values are right, but the state is wrong.
Your logic is very strange here, and because it's not clear what you're trying to do, it's impossible for me to fix it for you. I can, however, hopefully explain what your current code is doing, in the hope that you can work out how to fix it.
Let's assume that your query returns the following data:
Name,Percent
John,100
Paul,75
George,50
Ringo,25
Now let's step through your code a line at a time.
while (@data = $sth->fetchrow_array()) {
At this point, @data contains "John" and "100".
$toto = $data[0];
$percent = $data[1];
This puts "John" into $toto and "100" into $percent.
foreach $percent (@data) {
This is weird. It iterates over @data, putting each element in turn into $percent. So on the first iteration $percent gets set to "John" (overwriting the "100" that you previously put there).
if ($percent > $opt_c) {
I don't know what $opt_c contains. Let's assume it's 50. But $percent contains "John" - which isn't a number. If you have use warnings turned on (and you really should), then Perl will give you a warning at this point as you're trying to do a numerical comparison with something that isn't a number. But Perl will convert your non-number to 0 and do the comparison. 0 isn't greater than 50, so the else branch is executed.
$output .= $toto.'='.$percent.'%,';
$state = "BAD";
}
elsif ($percent > $opt_w && $percent < $opt_c){
Again, I don't know what $opt_w is. Let's assume it's 25. But $percent is still 0 when treated as a number. So this code isn't executed either.
$output .= $toto.'='.$percent.'%,';
$state = "NOTGOOD";
}
}
}
The next time round your inner loop, $percent is set to 100. So your if code is executed and $output gets set to "John=100%". And $state is set to "BAD". But you never do anything with $state inside the loop, so it gets overwritten the next time round your outer (while) loop.
Your foreach $percent (@data) line is extremely questionable; I'm really not sure what you're trying to do there. And the reason that you only ever see one $state is that you're (presumably) printing it outside the loop, so you only see the final value it gets set to.
It's always a good idea to turn on use strict and use warnings. And then to fix the errors that they will show you.
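For what it's worth, here is a minimal sketch of one plausible intent (my guess, not the original poster's code): classify every row by its percent, collect the offending rows, and keep the worst state seen. The 80/60 thresholds come from the question, exactly 80 is assumed to count as BAD, and the statement handle $sth is assumed to already be prepared and executed.

use strict;
use warnings;

my $output = '';
my $state  = 'OK';
while ( my ($name, $percent) = $sth->fetchrow_array() ) {
    if ( $percent >= 80 ) {
        $output .= "$name=$percent%,";
        $state   = 'BAD';
    }
    elsif ( $percent > 60 ) {
        $output .= "$name=$percent%,";
        $state   = 'NOTGOOD' unless $state eq 'BAD';   # don't downgrade BAD
    }
}
print "$state $output\n";

With the sample data this would print BAD test4=90%,test2=80%,test3=75%, since test4 pushes the state to BAD.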

Why does my Perl script die with an "out of memory" exception?

I need to read a 200 MB space-separated file line by line and collect its contents into an array.
Every time I run the script, Perl throws an "out of memory" exception, but I don't understand why!
Some advice please?
#!/usr/bin/perl -w
use strict;
use warnings;

open my $fh, "<", "../cnai_all.csd";
my @parse = ();
while (<$fh>) {
    my @words = split(/\s/, $_);
    push(@parse, \@words);
}
print scalar @parse;
The cnai file looks like this: it contains 11000 rows, with 4200 space-separated values per line.
VALUE_GROUP_A VALUE_GROUP_B VALUE_GROUP_C
VALUE_GROUP_A VALUE_GROUP_B VALUE_GROUP_C
VALUE_GROUP_A VALUE_GROUP_B VALUE_GROUP_C
VALUE_GROUP_A VALUE_GROUP_B VALUE_GROUP_C
The code above is just a stripped-down sample. The final script will store all the values in a hash and write them to a database later.
But first, I have to solve that memory problem!
That would be because... you're running out of memory!
You're not merely storing 200MB of data. You're creating a new list data structure for each line, with all of its associated overhead, and also creating a bunch of separate string objects for each word, with all of their associated overhead.
Edit: As an example of the kind of overhead we're talking about here, each and every value (and that includes strings) has the following overhead:
/* start with 2 sv-head building blocks */
#define _SV_HEAD(ptrtype) \
    ptrtype sv_any;     /* pointer to body */               \
    U32     sv_refcnt;  /* how many references to us */     \
    U32     sv_flags    /* what we are */

#define _SV_HEAD_UNION \
    union {                                                 \
        char*   svu_pv; /* pointer to malloced string */    \
        IV      svu_iv;                                     \
        UV      svu_uv;                                     \
        SV*     svu_rv; /* pointer to another SV */         \
        SV**    svu_array;                                  \
        HE**    svu_hash;                                   \
        GP*     svu_gp;                                     \
    } sv_u

struct STRUCT_SV {      /* struct sv { */
    _SV_HEAD(void*);
    _SV_HEAD_UNION;
};
So that's at least 4 32-bit values per Perl object.
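One way to see this overhead for yourself (an illustrative sketch, assuming the CPAN module Devel::Size is installed) is to compare the size of one raw line with the size of the array of strings built from it:

use strict;
use warnings;
use Devel::Size qw(size total_size);

my $line  = "VALUE_GROUP_A VALUE_GROUP_B VALUE_GROUP_C";
my @words = split ' ', $line;

print "raw string:     ", size($line),         " bytes\n";
print "array of words: ", total_size(\@words), " bytes\n";

The array version typically reports several times more bytes than the text it holds, and that multiplier is what turns a 200 MB file into far more memory than the process is allowed.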
Generally this means you are running out of the memory available to Perl, but it's possible you aren't running out of system memory. First up, there are ways to get more information on Perl's memory usage in the perldebguts documentation, though you may find yourself recompiling Perl to use them. (Also note the warning in that doc about Perl's hunger for memory...)
However, on many operating systems it is possible for memory limits to be set per process or per user. If, for example, you're using Linux (or another POSIX system) you might need to alter your ulimits. Type 'ulimit -a' and look at your memory sizes; it's possible that your 'max memory size' is below the memory in your machine, or that you have a limited data seg size. You can then reset it with the appropriate option, e.g. ulimit -d 1048576 for a 1 GB data seg size limit.
Of course, there is another option: process the file line-by-line, if your situation allows it. (The example code above can be rewritten in such a way.)
Your description of the data in cnai_all.csd as having many rows suggests that, rather than reading all 46,200,000 values into memory at once, each line can be processed independently. If so, use
while (<$fh>) {
    my @words = split /\s/, $_;
    insert_row \@words;
}
where insert_row is a sub you'd define to insert that row into your database.
Note that split /\s/ is often a mistake. The perlfunc documentation on split explains:
As a special case, specifying a PATTERN of space (' ') will split on white space just as split with no arguments does. Thus, split(' ') can be used to emulate awk's default behavior, whereas split(/ /) will give you as many null initial fields as there are leading spaces. A split on /\s+/ is like a split(' ') except that any leading whitespace produces a null first field. A split with no arguments really does a split(' ', $_) internally.
In the nominal case, everything is fine:
DB<1> x split /\s/, "foo bar baz"
0 'foo'
1 'bar'
2 'baz'
But what if there are multiple spaces between fields? Does that mean an empty field or just a "wide" separator?
DB<2> x split /\s/, "foo  bar baz"
0 'foo'
1 ''
2 'bar'
3 'baz'
What about leading whitespace?
DB<3> x split /\s/, " foo bar baz"
0 ''
1 'foo'
2 'bar'
3 'baz'
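For comparison, the default ' ' pattern handles both cases; a debugger-style illustration in the same vein as the examples above:

DB<4> x split ' ', "  foo  bar baz"
0 'foo'
1 'bar'
2 'baz'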
The default behavior of split isn't arbitrary. Let the tool work for you!
while (<$fh>) {
    insert_row [ split ];
}
Your while loop does not read from the file. You should have
<$fh>
or something inside the parentheses.
At last I have found a more suitable solution for my problem:
After some research for other parsers I've had to develop, I learned about the module DBD::CSV.
DBD::CSV allows me to select only the needed columns (out of >4000) of the whitespace-separated fields. This reduces memory usage and performs quite well.
More at DBD-CSV @ CPAN.org.
Thanks to gbacon I've changed my strategy from reading the whole file in one go to reading it part by part. DBD::CSV makes this possible without much coding.
#!/usr/bin/perl -w
use strict;
use warnings;
use DBI;
## -------------------------------------------------------------------------##
## -------------------------------------------------------------------------##
## SET GLOBAL CONFIG #############
my $globalConfig = {
    _DIR   => qq{../Data},
    _FILES => {
        'cnai_all.csd' => '_TEST'
    }
};
## -------------------------------------------------------------------------##
## -------------------------------------------------------------------------##
my $sTime   = time();
my $sepChar = " ";
my $csv_dbh = DBI->connect("DBI:CSV:f_dir=".$globalConfig->{_DIR}.";");
$csv_dbh->{csv_eol} = "\n";
#$csv_dbh->{csv_quote_char} = "'";
#$csv_dbh->{csv_escape_char} = "\\";
$csv_dbh->{csv_null}                = 1;
$csv_dbh->{csv_quote_char}          = '"';
$csv_dbh->{csv_escape_char}         = '"';
$csv_dbh->{csv_sep_char}            = "$sepChar";
$csv_dbh->{csv_always_quote}        = 0;
$csv_dbh->{csv_quote_space}         = 0;
$csv_dbh->{csv_binary}              = 0;
$csv_dbh->{csv_keep_meta_info}      = 0;
$csv_dbh->{csv_allow_loose_quotes}  = 0;
$csv_dbh->{csv_allow_loose_escapes} = 0;
$csv_dbh->{csv_allow_whitespace}    = 0;
$csv_dbh->{csv_blank_is_undef}      = 0;
$csv_dbh->{csv_empty_is_undef}      = 0;
$csv_dbh->{csv_verbatim}            = 0;
$csv_dbh->{csv_auto_diag}           = 0;
my @list = $csv_dbh->func('list_tables');
my $sth  = $csv_dbh->prepare("SELECT CELL,NW,BSC,n_cell_0 FROM cnai_all.tmp");
#print join("\n", @list);
print "\n-------------------\n";
$sth->execute();
while (my $row = $sth->fetchrow_hashref) {
    # just print the hash reference
    print "$row\n";
}
$sth->finish();
print "\n finish after ".(time()-$sTime)." sec ";
On my machine this runs in roughly 20 seconds and uses no more than 10 MB of memory.
The database you are using probably has a bulk import function. I would try that first.
If you need to do something with each row before putting it into the database (assuming the operations do not require referencing other rows), you should insert each row into the database as soon as its processing is complete (with AutoCommit turned off) rather than trying to store all the data in memory.
If the processing of each row depends on information in other rows, then you can use Tie::File to treat the input file as an array of lines. Again, don't try to store the contents of each line in memory. When processing is complete, ship it off to the database.
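As a rough illustration of the Tie::File approach (a sketch only; it reuses the file path from the question and leaves the per-row processing as a stub):

use strict;
use warnings;
use Tie::File;

tie my @lines, 'Tie::File', '../cnai_all.csd'
    or die "Cannot tie file: $!";

for my $line (@lines) {
    my @words = split ' ', $line;
    # ... process @words and ship the row to the database here;
    # Tie::File keeps only a small window of lines in memory.
}

untie @lines;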
