Possible to assign a count to an array? Like @content.$i? - arrays

my csv input file
Chapter,Content
A,1
B,3
C,1
C,2
C,3
D,5
My current Perl script:
open(INFILE, $input) || die "can't open the file";
while (<INFILE>) {
    @line = (split /,/, $_);
    @line_last = (split /\n/, $line[1]);
    if ($_ =~ /A/) {
        push @con1, $line[1];
    }
    elsif ($_ =~ /B/) {
        push @con2, $line[1];
    }
    elsif ($_ =~ /C/) {
        push @con3, $line[1];
    }
    elsif ($_ =~ /D/) {
        push @con4, $line[1];
    }
}
close INFILE;
chomp @con1, @con2, @con3, @con4;
print "content =", (join ", ", @con1), "\n";
print "content =", (join ", ", @con2), "\n";
print "content =", (join ", ", @con3), "\n";
print "content =", (join ", ", @con4), "\n";
My current code works, but I want to make it shorter and more efficient.
How do I make the @con arrays more automatic? Something like using a for loop and $i to pick which @con array to push to, so the overall code is shorter.

Why it's stupid to "use a variable as a variable name".
You should use an AoA (array of arrays) instead.
use Text::CSV_XS qw( );

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 2 });

open(my $fh, "<", $qfn)
    or die("Can't open \"$qfn\": $!\n");

my @foos = qw( A B C D );
my $num_eles = 0;
my %index_by_foo = map { $_ => $num_eles++ } @foos;
my @cons_by_foo = map { [] } 1..$num_eles;

while (my $row = $csv->getline($fh)) {
    if ( defined( my $i = $index_by_foo{ $row->[0] } ) ) {
        push @{ $cons_by_foo[$i] }, $row->[1];
    }
}

for my $i (0..$num_eles-1) {
    print("content =", join(", ", @{ $cons_by_foo[$i] }), "\n");
}
That said, a HoA (hash of arrays) seems a far better fit.
use Text::CSV_XS qw( );

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 2 });

open(my $fh, "<", $qfn)
    or die("Can't open \"$qfn\": $!\n");

my %cons_by_foo;
while (my $row = $csv->getline($fh)) {
    next if $row->[0] eq 'Chapter';    # skip the header row
    push @{ $cons_by_foo{ $row->[0] } }, $row->[1];
}

for my $foo (sort keys %cons_by_foo) {
    print("content $foo = ", join(", ", @{ $cons_by_foo{$foo} }), "\n");
}

This is what a hash is great for, IMO (a hashtable or dictionary in other languages).
Note: I haven't tested this code; it's from memory.
my %con;
while (<INFILE>)
{
    chomp;
    my @line = split /,/, $_;
    $con{$line[0]} = defined $con{$line[0]} ? "$con{$line[0]}, $line[1]" : $line[1];
}
foreach my $str (sort keys %con)
{
    print "content $con{$str}\n";
}
Edit: much better and tested code here:
open(INFILE,"input.txt")||die "can't open the file";
my %con;
while (<INFILE>)
{
next if /^Chapter/;
chomp;
my #line = (split ',' , $_);
push #{$con{$line[0]}}, $line[1];
}
foreach my $str (sort keys %con)
{
my $tmp = join ', ', #{$con{$str}};
print "content $str = $tmp\n";
}
output is:
content A = 1
content B = 3
content C = 1, 2, 3
content D = 5
I know that isn't your output, but it seems more useful than what you had.

my %cc;    # chapter contents
open my $FH, '<', $filename or die "Cannot open $filename";
push @{ $cc{ $$_[0] } }, $$_[1] for map { chomp; [ split /,/ ] } grep { !/^Chapter/ } <$FH>;
close $FH;
print "Chapter = $_ Content = " . join(", ", @{ $cc{$_} }) . "\n" for sort keys %cc;
Output:
Chapter = A Content = 1
Chapter = B Content = 3
Chapter = C Content = 1, 2, 3
Chapter = D Content = 5
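The same idea written out as an explicit loop may be easier to follow (an untested sketch, assuming the same $filename and skipping the header row):
my %cc;    # chapter => array ref of its contents
open my $FH, '<', $filename or die "Cannot open $filename";
while (my $line = <$FH>) {
    chomp $line;
    next if $line =~ /^Chapter/;          # skip the CSV header
    my ($chapter, $content) = split /,/, $line;
    push @{ $cc{$chapter} }, $content;    # autovivification creates the array ref
}
close $FH;
for my $chapter (sort keys %cc) {
    print "Chapter = $chapter Content = " . join(", ", @{ $cc{$chapter} }) . "\n";
}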

Related

filter file by unique and biggest value; combine two arrays into hash

I need to extract rows by unique genus (the first part of the species name) in one column of a CSV file, but when the same genus appears more than once, keep only the row with the biggest number in the last column.
So if I have multiple rows for the same genus (same first name), I take the biggest number in the last column to select which row will represent that genus.
I have extracted the information into arrays, but I am having trouble combining the two in order to select. I was using
https://perlmaven.com/unique-values-in-an-array-in-perl
to help, but I also need to account for the biggest number in the last column when I have the same-genus situation.
use strict;
use warnings;

open taxa_fh, '<', "$ARGV[0]" or die qq{Failed to open "$ARGV[0]" for input: $!\n};
open match_fh, ">$ARGV[0]_genusLongestLEN.csv" or die qq{Failed to open for output: $!\n};

my @unique;
my %seen;
my %hash;

while ( my $line = <taxa_fh> ) {
    chomp( $line );
    my @parts = split( /,/, $line );
    my @name = split( / /, $parts[3]);
    my @A = $name[0];
    my @B = $parts[5];
    @seen{@A} = ();
    my @merged = (@A, grep{!exists $seen{$_}} @B);
    my @merged = (@A, @B);
    @hash{@A} = @B;
    print "$line\n";
}
close taxa_fh;
close match_fh;
Input example:
AB179735.1.1711,AB179735.1.1711,278983,Eucyrtidium hexagonatum,0,1600
AB179736.1.1725,AB179736.1.1725,278986,Pterocorys zancleus,0,1763
AB181888.1.1758,AB181888.1.1758,281609,Protoperidinium crassipes,0,1700
AB181890.1.1709,AB181890.1.1709,281610,Protoperidinium denticulatum,0,1800
AB181892.1.1738,AB181892.1.1738,281611,Protoperidinium divergens,0,1800
AB181894.1.1744,AB181894.1.1744,281612,Protoperidinium leonis,0,1500
AB181899.1.1746,AB181899.1.1746,281613,Protoperidinium pallidum,0,1600
AB181902.1.1741,AB181902.1.1741,261845,Protoperidinium pellucidum,0,1750
AB181904.1.1734,AB181904.1.1734,281614,Protoperidinium punctulatum,0,1599
AB181907.1.1687,AB181907.1.1687,281615,Protoperidinium thorianum,0,1600
AB120001.1.1725,AB120001.1.1725,244960,Gyrodinium spirale,0,1500
AB120002.1.1725,AB120002.1.1725,244961,Gyrodinium fusiforme,0,1800
AB120003.1.1724,AB120003.1.1724,244962,Gyrodinium rubrum,0,1700
AB120004.1.1723,AB120004.1.1723,244963,Gyrodinium helveticum,0,1500
AB120309.1.1800,AB120309.1.1800,4442,Camellia sinensis,0,1700
Wanted output:
AB179735.1.1711,AB179735.1.1711,278983,Eucyrtidium hexagonatum,0,1600
AB179736.1.1725,AB179736.1.1725,278986,Pterocorys zancleus,0,1763
AB181890.1.1709,AB181890.1.1709,281610,Protoperidinium denticulatum,0,1800
AB120002.1.1725,AB120002.1.1725,244961,Gyrodinium fusiforme,0,1800
AB120309.1.1800,AB120309.1.1800,4442,Camellia sinensis,0,1700
use Text::CSV_XS qw( );

my $csv = Text::CSV_XS->new({
    auto_diag   => 2,
    binary      => 1,
    quote_space => 0,
});

my %by_genus;
while ( my $row = $csv->getline(\*ARGV) ) {
    my ($genus) = split(' ', $row->[3]);
    $by_genus{$genus} = $row
        if !$by_genus{$genus}
        || $row->[5] > $by_genus{$genus}[5];
}

$csv->say(select(), $_) for values(%by_genus);
Properly naming the variables makes the code more readable:
#! /usr/bin/perl
use warnings;
use strict;

my %selected;
while (<>) {
    my ($species, $value) = (split /,/)[3, 5];
    my $genus = (split ' ', $species)[0];
    if ($value > ($selected{$genus}{max} || 0)) {
        $selected{$genus}{max} = $value;
        $selected{$genus}{line} = $_;
    }
}
for my $genus (keys %selected) {
    print $selected{$genus}{line};
}
The order of the output lines is random.
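If a deterministic order is wanted, sorting the keys is enough (a small sketch, not part of the code above):
for my $genus (sort keys %selected) {
    print $selected{$genus}{line};
}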
You can use this Perl one-liner as well:
perl -F, -lane ' ($g=$F[3])=~s/(^\S+).*/$1/; if( $mx{$g}<$F[-1])
{ $kv{$g}=$_;$mx{$g}=$F[-1] } END { print $kv{$_} for(keys %kv) } ' file
With the given input in the cara.txt file, the output is:
$ perl -F, -lane ' ($g=$F[3])=~s/(^\S+).*/$1/; if( $mx{$g}<$F[-1])
{ $kv{$g}=$_;$mx{$g}=$F[-1] } END { print $kv{$_} for(keys %kv) } ' cara.txt
AB179736.1.1725,AB179736.1.1725,278986,Pterocorys zancleus,0,1763
AB179735.1.1711,AB179735.1.1711,278983,Eucyrtidium hexagonatum,0,1600
AB120309.1.1800,AB120309.1.1800,4442,Camellia sinensis,0,1700
AB120002.1.1725,AB120002.1.1725,244961,Gyrodinium fusiforme,0,1800
AB181890.1.1709,AB181890.1.1709,281610,Protoperidinium denticulatum,0,1800
$
Not fancy, but it gets the job done:
#!/usr/bin/perl
use strict;

my @data = `cat /var/tmp/test.in`;
my %genuses = ();

foreach my $line ( @data ) {
    chomp($line);
    my @splitline = split(',', $line);
    my $genus = $splitline[3];
    my $num = $splitline[5];
    my ( $name, $extra ) = split(' ', $genus);
    if ( exists $genuses{$name}->{'num'} ) {
        if ( $genuses{$name}->{'num'} < $num ) {
            $genuses{$name}->{'num'} = $num;
            $genuses{$name}->{'line'} = $line;
        }
        else {
            next;
        }
    }
    else {
        $genuses{$name}->{'num'} = $num;
        $genuses{$name}->{'line'} = $line;
    }
}

foreach my $genus ( keys %genuses ) {
    print "$genuses{$genus}->{'line'}";
    print "\n";
}
Output:
[root@localhost tmp]# ./test.pl
AB179736.1.1725,AB179736.1.1725,278986,Pterocorys zancleus,0,1763
AB179735.1.1711,AB179735.1.1711,278983,Eucyrtidium hexagonatum,0,1600
AB120309.1.1800,AB120309.1.1800,4442,Camellia sinensis,0,1700
AB120002.1.1725,AB120002.1.1725,244961,Gyrodinium fusiforme,0,1800
AB181890.1.1709,AB181890.1.1709,281610,Protoperidinium denticulatum,0,1800
I don't see an obvious order that you want your output sorted by.

Converting code using hash to array

I am charged with making a Perl module more efficient. It takes log entries, which are currently uniquely identified by date and server name, and inserts them into our MySQL database for later processing.
I've posted the original file and my attempt below. My code doesn't insert into the database. I'm sure it's a simple array-handling problem, but I can't see it.
Original file
package UlsaSpectrumAnalyser;
use strict;
use warnings;
use Data::Dumper;
use EnmServiceGroup;
use StatsDB;
use DBI;
use StatsTime;
sub new
{
my $klass = shift;
my $self = bless {}, $klass;
return $self;
}
sub init($$$$)
{
my ($self,$r_cliArgs,$r_incr,$dbh) = @_;
$self->{'site'} = $r_cliArgs->{'site'};
$self->{'siteId'} = $r_cliArgs->{'siteId'};
$self->{'date'} = $r_cliArgs->{'date'};
if ( exists $r_incr->{'UlsaSpectrumAnalyser'} )
{
$self->{'r_UlsaSpectrumAnalyser'} = $r_incr->{'UlsaSpectrumAnalyser'}->{'r_UlsaSpectrumAnalyser'};
}
else
{
$self->{'r_UlsaSpectrumAnalyser'} = {};
}
my @subscriptions = ();
$self->{'serverMap'} = {};
foreach my $service( "pmservice", "saservice" ) {
my $r_serverMap = enmGetServiceGroupInstances($self->{'site'}, $self->{'date'},$service);
while ( my ($server,$serverId) = each %{$r_serverMap} ) {
push ( @subscriptions, {'server' => $server, 'prog' => 'JBOSS'} );
$self->{'serverMap'}->{$server} = $serverId;
}
}
return \@subscriptions;
}
sub handle($$$$$$$)
{
my ($self,$timestamp,$host,$program,$severity,$message,$messageSize) = @_;
if ( $::DEBUG > 9 ) { print "UlsaSpectrumAnalyser::handle got message from $host $program : $message\n"; }
# Skip any warnings/errors
if ( $severity ne 'info' ) {
return;
}
my ($time)=$timestamp=~/(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}).*/;
my ($epochtime) = getDateTimeInMilliSeconds($timestamp);
if ( $::DEBUG > 3 ) { print "UlsaSpectrumAnalyser::handle got message from $time $host $program : $message\n"; }
#Sample Log
#2017-09-08 14:55:36,028 INFO [com.X.Y.itpf.EVENT_LOGGER] (ajp-executor-threads - 18) [administrator, Spectrum Analyzer, DETAILED, ULSA_COMPONENT_FFT, MeContext=lienb4003, Samples=65510; FileParsingTime(ms)=50; FastFourierTime(ms)=370; PostProcessingTime(ms)=3; ChartScalingTime(ms)=1; TotalTime(ms)=424]
if( $message =~ /.*ULSA_COMPONENT_FFT,\s+(\S+),\s+Samples=(\d+);\s+FileParsingTime\S+=(\d+);\s+FastFourierTime\S+=(\d+);\s+PostProcessingTime\S+=(\d+);\s+ChartScalingTime\S+=(\d+);\s+TotalTime\S+=(\d+)]/ ) {
my $activity = $epochtime . '##' . $host; #TODO Handle array instead of hash. Working on separately
my $serverid = $self->{'serverMap'}->{$host};
$self->{'r_UlsaSpectrumAnalyser'}->{$activity} = {
'time' => $time,
'epochtime' => $epochtime,
'serverid' => $serverid,
'source' => $1,
'sample' => $2,
'file_parsing_time' => $3,
'fast_fourier_time' => $4,
'post_processing_time' => $5,
'chart_scaling_time' => $6,
'total_time' => $7};
}
}
sub handleExceeded($$$)
{
my ($self, $host, $program) = @_;
}
sub done($$$)
{
my ($self,$dbh,$r_incr) = @_;
my $tmpDir = '/data/tmp';
my $date=$self->{'date'};
if (exists $ENV{'TMP_DIR'})
{
$tmpDir = $ENV{'TMP_DIR'};
}
my $bcpFileUlsaAnalyserLogs = "$tmpDir/ulsa_spectrum_analyser_logs.bcp";
open (BCP, "> $bcpFileUlsaAnalyserLogs") or die "Failed to open $bcpFileUlsaAnalyserLogs";
foreach my $activity (sort keys %{$self->{'r_UlsaSpectrumAnalyser'}}) {
print BCP $self->{'siteId'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'serverid'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'time'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'epochtime'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'source'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'sample'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'file_parsing_time'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'fast_fourier_time'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'post_processing_time'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'chart_scaling_time'} . "\t" .
$self->{'r_UlsaSpectrumAnalyser'}->{$activity}->{'total_time'} . "\n";
}
close BCP;
dbDo( $dbh, "DELETE FROM enm_ulsa_spectrum_analyser_logs WHERE siteid = $self->{'siteId'} AND time BETWEEN '$date 00:00:00' AND '$date 23:59:59'" )
or die "Failed to delete from enm_ulsa_spectrum_analyser_logs" . $dbh->errstr;
dbDo( $dbh, "LOAD DATA INFILE '$bcpFileUlsaAnalyserLogs' INTO TABLE enm_ulsa_spectrum_analyser_logs" )
or die "Failed to load new data from '$bcpFileUlsaAnalyserLogs' file to 'enm_ulsa_spectrum_analyser_logs' table" . $dbh->errstr;
unlink($bcpFileUlsaAnalyserLogs);
$r_incr->{'UlsaSpectrumAnalyser'} = {
'r_UlsaSpectrumAnalyser' => $self->{'r_UlsaSpectrumAnalyser'}
};
}
1;
My attempt
package UlsaSpectrumAnalyser;
use strict;
use warnings;
use Data::Dumper;
use StatsDB;
use DBI;
use StatsTime;
sub new
{
my $klass = shift;
my $self = bless {}, $klass;
return $self;
}
sub init($$$$)
{
my ($self,$r_cliArgs,$r_incr,$dbh) = @_;
$self->{'site'} = $r_cliArgs->{'site'};
$self->{'siteId'} = $r_cliArgs->{'siteId'};
$self->{'date'} = $r_cliArgs->{'date'};
$self->{'r_instrDataEvent'}->{'ulsaStats'} = [];
my @subscriptions = ();
$self->{'serverMap'} = {};
foreach my $service( "pmservice", "saservice" ) {
my $r_serverMap = enmGetServiceGroupInstances($self->{'site'}, $self->{'date'},$service);
while ( my ($server,$serverId) = each %{$r_serverMap} ) {
push ( @subscriptions, {'server' => $server, 'prog' => 'JBOSS'} );
$self->{'serverMap'}->{$server} = $serverId;
}
}
return \@subscriptions;
}
sub handle($$$$$$$)
{
my ($self,$timestamp,$host,$program,$severity,$message,$messageSize) = @_;
if ( $::DEBUG > 9 ) { print "UlsaSpectrumAnalyser::handle got message from $host $program : $message\n"; }
# Skip any warnings/errors
if ( $severity ne 'info' ) {
return;
}
my ($time)=$timestamp=~/(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}).*/;
my ($epochtime) = getDateTimeInMilliSeconds($timestamp);
if ( $::DEBUG > 3 ) { print "UlsaSpectrumAnalyser::handle got message from $time $host $program : $message\n"; }
#Sample Log
#2017-09-08 14:55:36,028 INFO [com.X.y.itpf.EVENT_LOGGER] (ajp-executor-threads - 18) [administrator, Spectrum Analyzer, DETAILED, ULSA_COMPONENT_FFT, MeContext=lienb4003, Samples=65510; FileParsingTime(ms)=50; FastFourierTime(ms)=370; PostProcessingTime(ms)=3; ChartScalingTime(ms)=1; TotalTime(ms)=424]
if( $message =~ /.*ULSA_COMPONENT_FFT,\s+(\S+),\s+Samples=(\d+);\s+FileParsingTime\S+=(\d+);\s+FastFourierTime\S+=(\d+);\s+PostProcessingTime\S+=(\d+);\s+ChartScalingTime\S+=(\d+);\s+TotalTime\S+=(\d+)]/ ) {
my $serverid = $self->{'r_instrDataEvent'}->{'serverid'};
my %event = (
'time' => $time,
'epochtime' => $epochtime,
'serverid' => $serverid,
'source' => $1,
'sample' => $2,
'file_parsing_time' => $3,
'fast_fourier_time' => $4,
'post_processing_time' => $5,
'chart_scaling_time' => $6,
'total_time' => $7
);
push @{$self->{'r_instrDataEvent'}->{'ulsaStats'}}, \%event;
}
}
sub handleExceeded($$$)
{
my ($self, $host, $program) = @_;
}
sub done($$$)
{
my ($self,$dbh,$r_incr) = @_;
my $bcpFileUlsaAnalyserLogs = getBcpFilename("enm_ulsa_spectrum_analyser_logs ");
open (BCP, "> $bcpFileUlsaAnalyserLogs") or die "Failed to open $bcpFileUlsaAnalyserLogs";
foreach my $activity (@{$self->{'r_instrDataEvent'}->{'ulsaStats'}}) {
print BCP $self->{'siteId'} . "\t" .
$activity->{'serverid'} . "\t" .
$activity->{'time'} . "\t" .
$activity->{'epochtime'} . "\t" .
$activity->{'source'} . "\t" .
$activity->{'sample'} . "\t" .
$activity->{'file_parsing_time'} . "\t" .
$activity->{'fast_fourier_time'} . "\t" .
$activity->{'post_processing_time'} . "\t" .
$activity->{'chart_scaling_time'} . "\t" .
$activity->{'total_time'} . "\n";
}
close BCP;
dbDo( $dbh, sprintf("DELETE FROM enm_ulsa_spectrum_analyser_logs WHERE siteid = %d AND time BETWEEN '%s' AND '%s'",
$self->{'siteId'}, $self->{'r_instrDataEvent'}->{'ulsaStats'}->[0]->{'epochtime'},$self->{'r_instrDataEvent'}->{'ulsaStats'}->[$#{$self->{'r_instrDataEvent'}->{'ulsaStats'}}]->{'epochtime'}))
or die "Failed to delete from enm_ulsa_spectrum_analyser_logs" . $dbh->errstr;#'
dbDo( $dbh, "LOAD DATA INFILE '$bcpFileUlsaAnalyserLogs' INTO TABLE enm_ulsa_spectrum_analyser_logs" )
or die "Failed to load new data from '$bcpFileUlsaAnalyserLogs' file to 'enm_ulsa_spectrum_analyser_logs' table" . $dbh->errstr;
}
1;
I would say that the problem is your
my $bcpFileUlsaAnalyserLogs = getBcpFilename( "enm_ulsa_spectrum_analyser_logs " )
I doubt that the file name really has a space at the end!
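If that is the problem, the minimal fix is simply to drop the trailing space in the call to your getBcpFilename helper (a sketch of the corrected call only):
my $bcpFileUlsaAnalyserLogs = getBcpFilename("enm_ulsa_spectrum_analyser_logs");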
I'm surprised that your dbDo or at least your
or die "Failed to load new data ... "
didn't catch this and report it. I don't like wrapping DBI code inside subroutines, especially when I can't see those subroutines.
Note also that you should be using placeholders
dbDo( $dbh, sprintf("DELETE FROM enm_ulsa_spectrum_analyser_logs WHERE siteid = %d AND time BETWEEN '%s' AND '%s'",
$self->{'siteId'}, $self->{'r_instrDataEvent'}->{'ulsaStats'}->[0]->{'epochtime'},$self->{'r_instrDataEvent'}->{'ulsaStats'}->[$#{$self->{'r_instrDataEvent'}->{'ulsaStats'}}]->{'epochtime'}))
or die "Failed to delete from enm_ulsa_spectrum_analyser_logs" . $dbh->errstr;#'
would be much better as
my $sth = $dbh->prepare(<<END_SQL);
DELETE FROM enm_ulsa_spectrum_analyser_logs
WHERE siteid = ? AND time BETWEEN ? AND ?
END_SQL
my $ulsa_stats = $self->{r_instrDataEvent}{ulsaStats};
$sth->execute(
$self->{siteId},
$ulsa_stats->[0]{epochtime},
$ulsa_stats->[-1]{epochtime}
) or die "Failed to delete from enm_ulsa_spectrum_analyser_logs: " . $dbh->errstr;

Perl: best way to pass array of hash to a sub

My class looks like this:
package CSVKeepCols;
use strict;
use warnings;
use Text::CSV;
use Data::Dumper;
my $text;
my $del;
my @cols;
my $output = '';
sub load {
my $class = shift;
my $self = {};
bless $self;
return $self;
}
sub input {
my $class = shift;
$text = shift;
return $class;
}
sub setOpts {
my ($class, $opts) = @_;
$del = $opts->{'delimeter'};
@cols = $opts->{'columns'};
}
sub process {
my @lines = split /\n|\r|\n\r|\r\n/, $text;
my $csv = Text::CSV->new({ sep_char => $del });
foreach (@lines) {
die('Invalid CSV data') if !$csv->parse($_);
$output .= __filterFields($csv->fields()) . "\n";
}
}
sub output {
return $output;
}
sub __filterFields {
my @fields = @_;
my $line = '';
foreach (@cols) {
$line .= ',' if $line;
$line .= $fields[$_];
}
return $line;
}
1;
I am using this class from my code like this:
$parser = load CSVKeepCols();
$parser->input($out);
$parser->setOpts({'delimeter' => ',', 'columns' => [1,2]});
$parser->process();
$out = $parser->output();
I am expecting the setOpts subroutine to take the hash {'delimeter' => ',', 'columns' => [1,2]} and, from it, set the value of $del to ',' and @cols to (1,2), so that I can loop through the @cols array.
However, when I try to loop through @cols in the __filterFields subroutine, I get this error:
Use of reference "ARRAY(0x22e32e0)" as array index at CSVKeepCols.pm line 52.
How do I fix this?
In setOpts, you set @cols = $opts->{columns};
$opts->{columns} contains a reference to an array ( [1,2] ).
So in __filterFields:
for ( @cols ) {
    # $_ is an arrayref [1,2]
    # you are using it as an index to retrieve a value from @fields
    $line .= $fields[$_];
    # thus the error: Use of reference "ARRAY..." as array index
    # you should be using an integer here
}
To fix it:
sub setOpts {
# ...
@cols = @{ $opts->{columns} };
}
Edit: removed unnecessary check
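To see why the dereference matters, here is a small standalone sketch (not from the module above) comparing the two assignments:
use strict;
use warnings;

my $opts = { delimeter => ',', columns => [ 1, 2 ] };

my @wrong = $opts->{columns};        # one element: the array reference itself
my @right = @{ $opts->{columns} };   # two elements: 1 and 2

print scalar(@wrong), ": @wrong\n";  # prints something like "1: ARRAY(0x22e32e0)"
print scalar(@right), ": @right\n";  # prints "2: 1 2"
Using an element of @wrong as an array index is exactly what triggers the "Use of reference ... as array index" error.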

How to check whether an element is in an array in Perl with grep?

I would like to check if an element is in the array.
my %hash = (
Value1 => ['10.0.0.1', '10.0.0.2'],
); #/!\NOT ARRAY
my #table = ( '10.0.0.6', '10.0.0.1');
Pseudo-code
my $i = 0;
if( grep { $table[0] eq $_ } eq $hash{"Value1[]"} ) {
print "Find!!!";
$i++; #true
}
if( grep { $table[1] eq $_ } eq $hash{"Value1[]"} ) {
print "Find!!!";
$i++; #true
}
if ( $i = 2) {
print "It is perfect. 0% difference between table and hash{"Value1"}";
}
if ( $i = 1) {
print "It is middle. 50% difference between table and hash{"Value1"}";
}
if ( $i = 0) {
print "It is bad. 100% difference between table and hash{"Value1"}";
}
How do I convert the hash value to an array? How do I use grep in Perl?
I'm not sure about the grep syntax and "$_".
I'm only a beginner in Perl.
Thanks very much.
for my $ip (@ips_to_find) {
    for my $key (keys(%hash)) {
        print("$ip in $key\n")
            if grep { $_ eq $ip } @{ $hash{$key} };
    }
}
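As a side note, grep in scalar context returns the number of matching elements, so the counting done with $i in the question can be written directly (an untested sketch using the same data):
my %hash  = ( Value1 => [ '10.0.0.1', '10.0.0.2' ] );
my @table = ( '10.0.0.6', '10.0.0.1' );

# for each address in @table, the inner grep checks the hash's array;
# the outer grep, assigned to a scalar, counts how many were found
my $found = grep { my $ip = $_; grep { $_ eq $ip } @{ $hash{Value1} } } @table;

printf "%d of %d addresses found (%.0f%% difference)\n",
    $found, scalar(@table), 100 * (1 - $found / @table);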
You can reduce the code by using a loop to search for each element:
my %hash  = ( 'Value1' => ['10.0.0.1', '10.0.0.2'] );
my @table = ( '10.0.0.6', '10.0.0.1' );

my $i = 0;
for my $val (@table) {
    if (grep $_ eq $val, @{ $hash{'Value1'} })
    {
        print "Find!!!\n";
        $i++;
    }
}

if ( $i == 2 ) {
    print "It is perfect. 0% difference between table and hash{'Value1'}";
}
elsif ( $i == 1 ) {
    print "It is middle. 50% difference between table and hash{'Value1'}";
}
elsif ( $i == 0 ) {
    print "It is bad. 100% difference between table and hash{'Value1'}";
}
Output
G:\Study\Perl Arsenal>perl temp.pl
Find!!!
It is middle. 50% difference between table and hash{'Value1'}
G:\Study\Perl Arsenal>

Creating a hash of hashes in Perl

I have an array that contains values like:
my @tmp = ('db::createParamDef xy', 'data $data1', 'model $model1', 'db::createParamDef wl', 'data $data2', 'model $model2');
I want to create a hash of hashes with values of xy and wl
my %hash;
my @val;
for my $file (@files) {
    for my $mod (@tmp) {
        if ($mod =~ /db::createParamDef\s(\w+)/) {
            $hash{$file} = "$1";
        }
        else {
            my $value = split(/^\w+\s+/, $mod);
            push (@val, $values);
        }
        $hash{$fname}{$1} = "@val";
        @val = ();
    }
}
This returns only the filename and the value of $1, but I'm expecting output like this:
%hash = (
    'filename1' =>
    {
        'xy' => '$data1,$model1',
    },
    'filename2' =>
    {
        'wl' => '$data2,$model2',
    }
);
Where am I going wrong?
This was actually a pretty tricky problem. Try something like this:
#!/bin/perl
use strict;
use warnings;

my @tmp = ('db::createParamDef xy', 'data $data1', 'model $model1', 'db::createParamDef wl', 'data $data2', 'model $model2');
my @files = ('filename1', 'filename2');

my %hash;
my @val;
my $index = 0;
my $current;

for my $mod (@tmp) {
    if ( $mod =~ /db::createParamDef\s+(\w+)/ ) {
        $current = $1;
        $hash{$files[$index]} = { $current => "" };
        $index++;
        @val = ();
    } else {
        my $value = (split(/\s+/, $mod))[1];
        push (@val, $value);
    }
    $hash{$files[$index - 1]}{$current} = join(",", @val);
}

use Data::Dumper;
print Dumper \%hash;
Let me know if you have any questions about how it works!
my @tmp = (
    'db::createParamDef xy', 'data $data1', 'model $model1',
    'db::createParamDef wl', 'data $data2', 'model $model2'
);

my $count = 0;
my %hash = map {
    my %r;
    if (my($m) = $tmp[$_] =~ /db::createParamDef\s(\w+)/) {
        my $i = $_;
        my @vals = map { $tmp[$i+$_] =~ /(\S+)$/ } 1..2;
        $r{"filename". ++$count}{$m} = join ",", @vals;
    }
    %r;
} 0 .. $#tmp;

use Data::Dumper; print Dumper \%hash;
output
$VAR1 = {
'filename1' => {
'xy' => '$data1,$model1'
},
'filename2' => {
'wl' => '$data2,$model2'
}
};
