I am using the command below to retrieve baselines.
cleartool lsbl -fmt "%n\n" -comp comp_name@\vob_name -stream stream_name@\vob_name
I am searching for a way to display the baseline which is numerically equal to or less than a certain given baseline. Is there any way to achieve it?
Case 1 : If output is
abc_6.2168
abc_7.4587
abc_8.2950
abc_9.3032
If I want to display the baseline which is numerically equal to or less than (and closest to) abc_8, the expected result in Case 1 should be: abc_8.2950.
Case 2 : If output is
abc_6.2168
abc_7.4587
abc_9.3032
Expected result should be : abc_7.4587
NOTE: Trying this in Groovy (Jenkins pipeline)
use strict;
use warnings;
use DBI;
my $bsl_find = $ARGV[0]; #baseline build package name
my $bsl;
my $c=0;
my $mat;
my $previous_str = q{};
my $final_baseline;
my $prev_num_count=1;
my $prev_num_len=1;
my $split_strng;
my $baseline_var = q{};
my $baseline_file;
my $all_baseline_file = $ARGV[1]; #file which contains the list of all retrieved baselines as per ARGV[0]
my $app = $ARGV[2]; #the name of the application for which baseline is to be selected
my $filename = 'D:\\baseline_'.$app.'\\'.'new_'.$all_baseline_file.'.txt';
$baseline_file = 'D:\\baseline_'.$app.'\\'.'final_'.$all_baseline_file.'.txt';
$all_baseline_file = 'D:\\baseline_'.$app.'\\'.$all_baseline_file.'.txt';
open(my $fh, '<:encoding(UTF-8)',$filename)
or die "Could not open file '$filename' $!";
while (my $strng = <$fh>) {
chomp $strng;
#print "The line is : $strng \n";
$strng =~ s/^\s+|\s+$//g;
#print " \n strng after trim is $strng.";
my $num_count = (split '_', $strng)[-1];
my $num_count_bsl_param = (split '_', $bsl_find)[-1];
my $num_len = length ($num_count);
my $num_len_bsl_param = length ($num_count_bsl_param);
my $a = substr($bsl_find, -$num_len_bsl_param);
my $b = substr($strng, -$num_len);
$split_strng = '_'.$a;
my ($substrng) = split /$split_strng/, $bsl_find;
if ($substrng =~ m/([^\_]+)$/)
{
$substrng=$1;
}
if ( ($a == $b) && (index($strng, $substrng) != -1) )
{
print "\n Match found";
$mat = $strng;
print "\n baseline found is : $mat";
$final_baseline = $mat;
print "\n final bsl is $bsl_find";
$baseline_var = $strng;
#exit 0;
goto label;
}
elsif ( ($a < $b) && (index($strng, $substrng) != -1) )
{
if ( (grep{/$bsl_find/} $filename) && ($previous_str eq "") ){
print "\n final baseline decided : $bsl_find";
$baseline_var = $bsl_find;
goto label;
}
elsif ( ($previous_str ne "") )
{
print "\n final baseline is ...: $previous_str";
$baseline_var = $previous_str;
goto label;
}
}
elsif ( ($a < $b) && ($previous_str ne "") && (index($strng, $substrng) != -1) )
{
if ( ($a > $c) && (index($previous_str, $substrng) != -1) )
{
print "\n baseline found is : $previous_str";
$final_baseline = $previous_str;
print " \n final is $final_baseline";
$baseline_var = $previous_str;
goto label;
}
}
elsif ( ($a < $b) && (index($bsl_find, $substrng) != -1) && ($previous_str ne "") && (index($previous_str, $substrng) == -1) )
{
print "\n Baseline not found of type $bsl_find.... final baseline is : $previous_str";
$baseline_var = $previous_str;
goto label;
}
}
close($fh);
if ($baseline_var eq "")
{
open my $fh, "<", $filename
or die "Could not open file '$filename' $!";
my $last_line;
$last_line = $_ while <$fh>;
close($fh);
print $last_line;
print " \n Baseline is $last_line";
$baseline_var = $last_line;
}
label:
print " \n\n Writing $baseline_var to $baseline_file...";
#$baseline_var = $baseline_var.'.';
$baseline_var =~ s/^\s+|\s+$//g;
print " \n \n baseline_var is $baseline_var. ";
unlink $baseline_file;
open(my $fh, '<:encoding(UTF-8)',$all_baseline_file)
or die "Could not open file '$all_baseline_file' $!";
while (my $word = <$fh>) {
chomp $word;
#print "\n word is $word.";
if ( $word =~ /\./ )
{
if( $word =~ m/$baseline_var\./ )
{
print "\n found $baseline_var. in $word";
open(FH1, '>', $baseline_file) or die $!;
print FH1 "$word";
}
}
else
{
if( $word eq $baseline_var )
{
print "\n found $baseline_var. in $word";
open(FH1, '>', $baseline_file) or die $!;
print FH1 "$word";
}
}
}
close($fh);
close(FH1);
Is there any way to achieve it?
Not with ClearCase/cleartool alone, which means you need to parse its output, and that depends on your OS/shell.
Something like, in Windows CMD shell, with Git For Windows shell in its path:
cleartool lsbl ... | sort -V | awk 'BEGIN{a=$0;FS="._"} $2 ^< 9 {print $0;}' | tail -1
(the ^< is needed to escape the <, and to prevent CMD from interpreting it as a redirection)
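Since the rest of this page is Perl, here is a minimal sketch of the same selection logic as a standalone script (the script name pick_baseline.pl and the "pipe cleartool output into STDIN" usage are illustrative assumptions; the same approach would port to Groovy). It keeps only baselines whose major number is equal to or below the requested one and remembers the highest of those:
#!/usr/bin/perl
use strict;
use warnings;

# Usage (illustrative): cleartool lsbl -fmt "%n\n" ... | perl pick_baseline.pl abc_8
my $target = shift @ARGV or die "Usage: pick_baseline.pl <prefix, e.g. abc_8>\n";
my ($want_major) = $target =~ /_(\d+)$/
    or die "Prefix must end in _<number>\n";

my ( $best, $best_major );    # best baseline seen so far and its major number
while ( my $bl = <STDIN> ) {
    chomp $bl;
    next unless $bl =~ /_(\d+)\.\d+$/;    # expect names like abc_8.2950
    my $major = $1;
    next if $major > $want_major;         # keep only "equal to or less than"
    if ( !defined $best_major || $major > $best_major ) {
        ( $best, $best_major ) = ( $bl, $major );
    }
}
print defined $best ? "$best\n" : "No matching baseline found\n";
With the Case 2 input this would print abc_7.4587, since there is no abc_8 baseline and 7 is the closest lower major number.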
Related
I'm new to programming much less Perl; I'm having difficulty with searching an array I've made from an external text file. I'm looking for a simple way to check if the user entry is located in the array. I've used the Smart Match function before but never in an "if" statement and can't seem to get it to work. Am I implementing this function wrong, or is there an easier way to check if the user's string is in the array?
#!/usr/bin/perl
use 5.010;
#Inventory editing script - Jason Black
#-------------------------------------------------------------------------------
print "1. Add Items\n";
print "2. Search Items\n";
print "Please enter your choice: ";
chomp ($userChoice = <STDIN>); #Stores user input in $userChoice
if($userChoice == 1){
$message = "Please enter in format 'code|title|price|item-count'\n";
&ChoiceOne;
}
elsif($userChoice == 2){
$message = "Enter search terms\n";
&ChoiceTwo;
}
sub ChoiceOne{
print "$message\n";
chomp($userAddition = <STDIN>); #Stores input in $userAddition
$string1 = "$userAddition";
open (FILE, "FinalProjData.txt") or die ("File not found"); #"FILE" can be named anything
@array = <FILE>;
if ( /$string1/ ~~ @array){
print "This entry already exists. Would you like to replace? Y/N \n";
chomp($userDecision = <STDIN>); #Stores input in $userDecision
if ($userDecision eq "Y"){
$string1 =~ s/$userAddition/$userAddition/ig;
print "Item has been overwritten\n";}
elsif($userDecision eq "N"){
print FILE "$string1\n";
print "Entry has been added to end of file.\n";}
else{
print "Invalid Input";
exit;}
}
else {
print FILE "$string1\n";
print "Item has been added.\n";}
close(FILE);
exit;
}#end sub ChoiceOne
sub ChoiceTwo{
print "$message\n";
}
If you want to avoid using smartmatch altogether:
if ( grep { /$string1/ } @array ) {
To actually match the $string1, however, it needs to be escaped, so that | doesn't mean or:
if ( grep { /\Q$string1\E/ } @array ) {
or just a simple string compare:
if ( grep { $_ eq $string1 } @array ) {
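For context, a minimal self-contained sketch of the grep approach (assuming the inventory lines live in FinalProjData.txt and that a literal substring match is what's wanted; the variable names simply mirror the question):
use strict;
use warnings;

# Read the inventory file into an array of lines
open my $fh, '<', 'FinalProjData.txt' or die "File not found: $!";
chomp( my @array = <$fh> );
close $fh;

print "Please enter in format 'code|title|price|item-count'\n";
chomp( my $userAddition = <STDIN> );

# \Q...\E quotes the '|' separators so they are matched literally
if ( grep { /\Q$userAddition\E/ } @array ) {
    print "This entry already exists.\n";
}
else {
    print "Entry not found.\n";
}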
I am trying to find the common lines between two tab-separated files based on one field.
One line of the first file:
1 52854 s64199.1 A . . . PR GT 0/0
One line of the second file:
chr1 52854 . C T 215.302 . AB=0.692308;ABP=7.18621;AC=1;AF=0.5;AN=2;AO=9;CIGAR=1X;DP=13;DPB=13;DPRA=0;EPP=3.25157;EPPR=3.0103;GTI=0;LEN=1;MEANALT=1;MQM=60;MQMR=60;NS=1;NUMALT=1;ODDS=17.5429;PAIRED=0;PAIREDR=0.25;PAO=0;PQA=0;PQR=0;PRO=0;QA=318;QR=138;RO=4;RPP=3.25157;RPPR=5.18177;RUN=1;SAF=0;SAP=22.5536;SAR=9;SRF=1;SRP=5.18177;SRR=3;TYPE=snp;technology.illumina=1;BVAR GT:DP:RO:QR:AO:QA:GL 0/1:13:4:138:9:318:-5,0,-5
Based on the second field (52854 in this example) I have a match.
Here is my code, which finds the common ones, but my files are quite large and it takes a lot of time.
Is there any way to speed up the process?
Thank you very much in advance.
#!/app/languages/perl/5.14.2/bin/perl
use strict;
use warnings;
my $map_file = $ARGV[0];
my $vcf_file = $ARGV[1];
open my $map_info, $map_file or die "Could not open $map_file: $!";
my @map_array = ();
my @vcf_array = ();
while( my $mline = <$map_info>) {
chomp $mline;
my @data1 = split('\t', $mline);
my $pos1 = $data1[1];
push (@map_array, $pos1);
}
open my $vcf_info, $vcf_file or die "Could not open $vcf_file: $!";
while( my $line = <$vcf_info>) {
if ($line !~ m/^#/) {
push (@vcf_array, $line);
}
}
foreach my $a (@map_array) {
chomp $a;
foreach my $b (@vcf_array) {
chomp $b;
my @data = split('\t', $b);
my $pos2 = $data[1];
my $ref2 = $data[3];
my $allele = $data[4];
my $genotype = $data[9];
if ($a == $pos2) {
print $pos2 . "\t" . $ref2. "\t".$allele."\t".$genotype. "\n";
#print "$b\n";
}
}
}
Here's a version that should run much faster than your own.
It reads the map file and stores each pos field in the hash %wanted. Then it reads through the second file and checks whether the record is in the list of wanted values. If so, it splits the record and prints the fields you require.
Note that I haven't been able to test this beyond making sure that it compiles.
use strict;
use warnings;
use 5.010;
use autodie;
my ( $map_file, $vcf_file ) = @ARGV;
my %wanted;
{
open my $map_fh, '<', $map_file;
while ( <$map_fh> ) {
chomp;
my $pos = ( split /\t/, $_, 3 )[1];
++$wanted{$pos};
}
}
{
open my $vcf_fh, '<', $vcf_file;
while ( <$vcf_fh> ) {
next if /^#/;
chomp;
my $pos = ( split /\t/, $_, 3 )[1];
next unless $wanted{$pos};
my ( $ref, $allele, $genotype ) = ( split /\t/ )[3, 4, 9];
print join("\t", $pos, $ref, $allele, $genotype), "\n";
}
}
Below please find a minimal modification of your script for hash-based searches.
use strict;
use warnings;
my $map_file = $ARGV[0];
my $vcf_file = $ARGV[1];
my %vcf_hash;
open( my $vcf_info, $vcf_file) or die "Could not open $vcf_file: $!";
while( my $line = <$vcf_info>) {
next if $line =~ m/^#/; # Skip comment lines
chomp $line;
my (@data) = split(/\t/, $line);
die unless @data >= 10; # Check number of fields in the input line
my ($pos) = $data[1];
# $. - line number in the file
$vcf_hash{$pos}{$.} = \@data;
}
open( my $map_info, $map_file) or die "Could not open $map_file: $!";
while( my $mline = <$map_info>) {
chomp $mline;
my (@data) = split(/\t/, $mline);
die unless @data >= 2; # Check number of fields in the input line
my ($pos) = $data[1];
if( exists $vcf_hash{$pos}) {
my $hash_ref = $vcf_hash{$pos};
for my $n (sort{$a<=>$b} keys %$hash_ref) {
my $array_ref = $hash_ref->{$n};
my $pos2 = $array_ref->[1];
my $ref2 = $array_ref->[3];
my $allele = $array_ref->[4];
my $genotype = $array_ref->[9];
print $pos2 . "\t" . $ref2. "\t".$allele."\t".$genotype. "\n";
}
}
}
The script may be improved further to reduce memory use if you work with huge data files.
There is no need to keep your map_file in memory, just its keys; it is best to make them keys of a hash that you use only for existence checks. You don't have to keep your vcf_file in memory either; you can simply decide, line by line, whether to output it or not.
#!/app/languages/perl/5.14.2/bin/perl
use strict;
use warnings;
use autodie;
use constant KEY => 1;
use constant FIELDS => ( 1, 3, 4, 9 );
my ( $map_file, $vcf_file ) = @ARGV;
my %map;
{
my $fh;
open $fh, '<', $map_file;
while (<$fh>) {
$map{ ( split /\t/, $_, KEY + 2 )[KEY] } = undef;
}
}
{
my $fh;
open $fh, '<', $vcf_file;
while (<$fh>) {
next if /^#/;
chomp;
my @data = split /\t/;
print join "\t", @data[FIELDS] if exists $map{ $data[KEY] };
}
}
I am looking to extract columns based on header names in a comma- (or tab-) delimited file. I have a scalar variable that matches many header possibilities, which I named '$Account_Name', among other such variables. I want to read the file's column headers, match them against '$Account_Name', and print the matched column along with its data.
Here is my code:
open(FILE, "list_2.txt") or die "Cannot open file: $!";
my $Account_Name = qr/^Acct ID$|^Account No$|^Account$|^ACCOUNT NUMBER$|Account Number|Account.*?Number|^Account$|^Account #$|^Account_ID$|^Account ID$/i;
my $CLIENT = qr/^CLIENT_NAME$|^Account Long Name$|^ACCOUNT NAME$|^Account Name$|^Name$|portfolio.*?description|^Account Description$/i;
while (my $line = <FILE>) {
chomp $line;
my @array = split(/,/, $line);
my %index;
@index{@array} = (0..$#array);
my $Account_Name_ = $index{$Account_Name};
if (my ($matched) = grep $array[$_] =~ /$Account_Name/, 0..$#array) {
$Account_Name_ = $matched;
my $CLIENT_ = $index{$CLIENT};
if (my ($matched) = grep $array[$_] =~ /$CLIENT/, 0..$#array) {
$CLIENT_ = $matched;
print $array[$Account_Name_],",",$array[$CLIENT_],"\n";
}
}
}
close(FILE);
Data, list_2.txt
Account number,order_num,Name
dj870-1234,12334566,josh trust 1992
My Results
Account number,Name
Desired Output
Account number,Name
dj870-1234,josh
For some reason I am only able to print the column names based on the match. How can I grab the data as well?
You need to move your print statement in order to output your data lines - these do not match the header patterns, so in the original code the print statement is never reached for them!
use warnings;
open(FILE, "list_2.txt") or die "Cannot open file: $!";
my $Account_Name = qr/^Acct ID$|^Account No$|^Account$|^ACCOUNT NUMBER$|Account Number|Account.*?Number|^Account$|^Account #$|^Account_ID$|^Account ID$/i;
my $CLIENT = qr/^CLIENT_NAME$|^Account Long Name$|^ACCOUNT NAME$|^Account Name$|^Name$|portfolio.*?description|^Account Description$/i;
my ($Account_Name_, $CLIENT_);
while (my $line = <FILE>) {
chomp $line;
my @array = split(/,/, $line);
if (my ($matched) = grep $array[$_] =~ /$Account_Name/, 0..$#array) {
$Account_Name_ = $matched;
if (my ($matched) = grep $array[$_] =~ /$CLIENT/, 0..$#array) {
$CLIENT_ = $matched;
}
}
print $array[$Account_Name_],",",$array[$CLIENT_],"\n";
}
close(FILE);
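An alternative sketch that captures the column indices from the header row once and then prints just those columns for every data row (a minimal illustration, assuming the first line of list_2.txt is the header; the regexes are simplified versions of the ones in the question):
use strict;
use warnings;

my $Account_Name = qr/^Account Number$|^Account.*?Number$|^Account[ _]?ID$|^Acct ID$/i;
my $CLIENT       = qr/^Name$|^Account (?:Long )?Name$|^CLIENT_NAME$|^Account Description$/i;

open my $fh, '<', 'list_2.txt' or die "Cannot open file: $!";

# First line: locate the wanted columns by header name
chomp( my $header = <$fh> );
my @headers = split /,/, $header;
my ($acct_idx) = grep { $headers[$_] =~ $Account_Name } 0 .. $#headers;
my ($name_idx) = grep { $headers[$_] =~ $CLIENT }       0 .. $#headers;
die "Required columns not found\n" unless defined $acct_idx && defined $name_idx;

print join( ',', @headers[ $acct_idx, $name_idx ] ), "\n";

# Remaining lines: print only the two matched columns
while ( my $line = <$fh> ) {
    chomp $line;
    my @fields = split /,/, $line;
    print join( ',', @fields[ $acct_idx, $name_idx ] ), "\n";
}
close $fh;
With the sample data this prints the header pair followed by "dj870-1234,josh trust 1992", i.e. the full Name field rather than just its first word.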
I am running a crontab as described below :
* 1 * * * /var/fdp/reportingscript/an_outgoing_tps_report.pl
* 1 * * * /var/fdp/reportingscript/an_processed_rule_report.pl
* 1 * * * /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl
* 1 * * * /var/fdp/reportingscript/en_outgoing_tps_report.pl
* 1 * * * /var/fdp/reportingscript/en_processed_rule_report.pl
* 1 * * * /var/fdp/reportingscript/rs_incoming_traffic_report.pl
* 1 * * * /var/fdp/reportingscript/an_summary_report.pl
* 1 * * * /var/fdp/reportingscript/en_summary_report.pl
* 1 * * * /var/fdp/reportingscript/user_report.pl
and am getting an error (for all scripts the error is the same):
DBI connect('dbname=scs;host=192.168.18.23;port=5432','postgres',...) failed: FATAL: sorry, too many clients already at /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl line 38.
Moreover, if I run the scripts manually one at a time, they don't show any error.
For your reference, I am attaching the script for which I have shown the above error:
#!/usr/bin/perl
use strict;
use FindBin;
use lib $FindBin::Bin;
use Time::Local;
use warnings;
use DBI;
use File::Basename;
use CONFIG;
use Getopt::Long;
use Data::Dumper;
my $channel;
my $circle;
my $daysbefore;
my $dbh;
my $processed;
my $discarded;
my $db_name = "scs";
my $db_vip = "192.168.18.23";
my $db_port = "5432";
my $db_user = "postgres";
my $db_password = "postgres";
#### code to redirect all console output in log file
my ( $seco_, $minu_, $hrr_, $moday_, $mont_, $years_ ) = localtime(time);
$years_ += 1900;
$mont_ += 1;
my $timestamp = sprintf( "%d%02d%02d", $years_, $mont_, $moday_ );
$timestamp .= "_" . $hrr_ . "_" . $minu_ . "_" . $seco_;
print "timestamp is $timestamp \n";
my $logfile = "/var/fdp/log/reportlog/sdp_incoming_report_$timestamp";
print "\n output files is " . $logfile . "\n";
open( STDOUT, ">", $logfile ) or die("$0:dup:$!");
open STDERR, ">&STDOUT" or die "$0: dup: $!";
my ( $sec_, $min_, $hr_, $mday_, $mon_, $year_ ) = localtime(time);
$dbh = DBI->connect( "DBI:Pg:dbname=$db_name;host=$db_vip;port=$db_port",
"$db_user", "$db_password", { 'RaiseError' => 1 } );
print "\n Dumper is " . $dbh . "\n";
my $sthcircle = $dbh->prepare("select id,name from circle");
$sthcircle->execute();
while ( my $refcircle = $sthcircle->fetchrow_hashref() ) {
print "\n dumper for circle is " . Dumper($refcircle);
my $namecircle = uc( $refcircle->{'name'} );
my $idcircle = $refcircle->{'id'};
$circle->{$namecircle} = $idcircle;
print "\n circle name : " . $namecircle . "id is " . $idcircle;
}
sub getDate {
my $daysago = shift;
$daysago = 0 unless ($daysago);
my @months = qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec);
my ( $sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst ) = localtime( time - ( 86400 * $daysago ) );
# YYYYMMDD, e.g. 20060126
$year_ = $year + 1900;
$mday_ = $mday;
$mon_ = $mon + 1;
return sprintf( "%d-%02d-%02d", $year + 1900, $mon + 1, $mday );
}
GetOptions( "d=i" => \$daysbefore );
my $filedate = getDate($daysbefore);
print "\n filedate is $filedate \n";
my @basedir = CONFIG::getBASEDIR();
print "\n array has basedir" . Dumper(@basedir);
$mon_ = "0" . $mon_ if ( defined $mon_ && $mon_ <= 9 );
$mday_ = "0" . $mday_ if ( defined $mday_ && $mday_ <= 9 );
foreach (@basedir) {
my $both = $_;
print "\n dir is $both \n";
for ( keys %{$circle} ) {
my $path = $both;
my $circleid = $_;
print "\n circle is $circleid \n";
my $circleidvalue = $circle->{$_};
my $file_csv_path = "/opt/offline/reports/$circleid";
my %sdp_hash = ();
print "\n file is $file_csv_path csv file \n";
if ( -d "$file_csv_path" ) {
} else {
mkdir( "$file_csv_path", 0755 );
}
my $csv_new_file
= $file_csv_path
. "\/FDP_"
. $circleid
. "_SDPINCOMINGTPSREPORT_"
. $mday_ . "_"
. $mon_ . "_"
. $year_ . "\.csv";
print "\n file is $csv_new_file \n";
print "\n date:$year_-$mon_-$mday_ \n";
open( DATA, ">>", $csv_new_file );
$path = $path . $circleid . "/Reporting/EN/Sdp";
print "\n *****path is $path \n";
my @filess = glob("$path/*");
foreach my $file (@filess) {
print "\n Filedate ---------> $filedate file is $file \n";
if ( $file =~ /.*_sdp.log.$filedate-*/ ) {
print "\n found file for $circleid \n";
my $x;
my $log = $file;
my @a = split( "-", $file );
my $starttime = $a[3];
my $endtime = $starttime;
my $sdpid;
my $sdpid_value;
$starttime = "$filedate $starttime:00:00";
$endtime = "$filedate $endtime:59:59";
open( FH, "<", "$log" ) or die "cannot open < $log: $!";
while (<FH>) {
my $line = $_;
print "\n line is $line \n";
chomp($line);
$line =~ s/\s+$//;
my @a = split( ";", $line );
$sdpid = $a[4];
my $stat = $a[3];
$x->{$sdpid}->{$stat}++;
}
close(FH);
print "\n Dumper is x:" . Dumper($x) . "\n";
foreach my $sdpidvalue ( keys %{$x} ) {
print "\n sdpvalue us: $sdpidvalue \n";
if ( exists( $x->{$sdpidvalue}->{processed} ) ) {
$processed = $x->{$sdpidvalue}->{processed};
} else {
$processed = 0;
}
if ( exists( $x->{$sdpidvalue}->{discarded} ) ) {
$discarded = $x->{$sdpidvalue}->{discarded};
} else {
$discarded = 0;
}
my $sth_new1 = $dbh->prepare("select id from sdp_details where sdp_name='$sdpid' ");
print "\n sth new is " . Dumper($sth_new1);
$sth_new1->execute();
while ( my $row1 = $sth_new1->fetchrow_hashref ) {
$sdpid_value = $row1->{'id'};
print "\n in hash rowref from sdp_details table " . Dumper($sdpid_value);
}
my $sth_check
= $dbh->prepare(
"select processed,discarded from sdp_incoming_tps where circle_id='$circleidvalue' and sdp_id='$sdpid_value' and start_time='$starttime' and end_time='$endtime'"
);
print "\n Dumper for bhdatabase statement is " . Dumper($sth_check);
$sth_check->execute();
my $duplicate_row = 0;
my ( $success_, $failure_ );
while ( my $row_dup = $sth_check->fetchrow_hashref ) {
print "\n row_dup is " . Dumper($row_dup);
$duplicate_row = 1;
$success_ += $row_dup->{'processed'};
$failure_ += $row_dup->{'discarded'};
}
if ( $duplicate_row == 0 ) {
my $sth
= $dbh->prepare(
"insert into sdp_incoming_tps (id,circle_id,start_time,end_time,processed,discarded,sdp_id) select nextval('sdp_incoming_tps_id'),'$circleidvalue','$starttime','$endtime','$processed','$discarded','$sdpid_value' "
);
$sth->execute();
} else {
$success_ += $processed;
$failure_ += $discarded;
my $sth
= $dbh->prepare(
"update sdp_incoming_tps set processed=$success_,discarded=$failure_ where circle_id='$circleidvalue' and sdp_id='$sdpid_value' and start_time='$starttime' and end_time='$endtime'"
);
$sth->execute();
}
# my $file_csv_path = "/opt/offline/reports/$circleid";
# my %sdp_hash = ();
# if ( -d "$file_csv_path" ) {
# } else {
# mkdir( "$file_csv_path", 0755 );
# }
# my $csv_new_file = $file_csv_path . "\/FDP_" . $circleid . "_SDPINCOMINGTPSREPORT_". $mday_ . "_" . $mon_ . "_" . $year_ . "\.csv";
print "\n file is $csv_new_file \n";
print "\n date:$year_-$mon_-$mday_ \n";
close(DATA);
open( DATA, ">>", $csv_new_file ) or die("cant open file : $! \n");
print "\n csv new file is $csv_new_file \n";
my $sth_new2 = $dbh->prepare("select * from sdp_details");
$sth_new2->execute();
while ( my $row1 = $sth_new2->fetchrow_hashref ) {
my $sdpid = $row1->{'id'};
$sdp_hash{$sdpid} = $row1->{'sdp_name'};
}
#print "\n resultant sdp hash".Dumper(%sdp_hash);
#$mon_="0".$mon_;
print "\n timestamp being matched is $year_-$mon_-$mday_ \n";
print "\n circle id value is $circleidvalue \n";
my $sth_new
= $dbh->prepare(
"select * from sdp_incoming_tps where date_trunc('day',start_time)='$year_-$mon_-$mday_' and circle_id='$circleidvalue'"
);
$sth_new->execute();
print "\n final db line is " . Dumper($sth_new);
my $str = $sth_new->{NAME};
my @str_arr = @$str;
shift(@str_arr);
shift(@str_arr);
my @upper = map { ucfirst($_) } @str_arr;
$upper[4] = "Sdp-Name";
my $st = join( ",", @upper );
$st = $st . "\n";
$st =~ s/\_/\-/g;
#print $fh "sep=,"; print $fh "\n";
print DATA $st;
while ( my $row = $sth_new->fetchrow_hashref ) {
print "\n found matching row \n";
my $row_line
= $row->{'start_time'} . ","
. $row->{'end_time'} . ","
. $row->{'processed'} . ","
. $row->{'discarded'} . ","
. $sdp_hash{ $row->{'sdp_id'} } . "\n";
print "\n row line matched is " . $row_line . "\n";
print DATA $row_line;
}
close(DATA);
}
} else {
next;
}
}
}
}
$dbh->disconnect;
Please help: how can I avoid this error?
Thanks in advance.
The immediate problem, as indicated by the error message, is that running all of those scripts at once requires more database connections than the server will allow. If they run fine individually, then running them individually will fix that.
The underlying problem is that your crontab is wrong. * 1 * * * will run all the scripts every minute from 0100 to 0159 each day. If they take more than one minute to complete, then a new set will start before the previous set completes, requiring an additional set of database connections, which will run through the pool of available connections rather quickly.
I assume that you only need to run your daily scripts once per day, not sixty times, so change that to 5 1 * * * to run them only once, at 0105.
If there's still an issue, run each one on a different minute (which is probably a good idea anyhow):
5 1 * * * /var/fdp/reportingscript/an_outgoing_tps_report.pl
10 1 * * * /var/fdp/reportingscript/an_processed_rule_report.pl
15 1 * * * /var/fdp/reportingscript/sdp_incoming_traffic_tps_report.pl
20 1 * * * /var/fdp/reportingscript/en_outgoing_tps_report.pl
25 1 * * * /var/fdp/reportingscript/en_processed_rule_report.pl
30 1 * * * /var/fdp/reportingscript/rs_incoming_traffic_report.pl
35 1 * * * /var/fdp/reportingscript/an_summary_report.pl
40 1 * * * /var/fdp/reportingscript/en_summary_report.pl
45 1 * * * /var/fdp/reportingscript/user_report.pl
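Beyond spreading the start times, if a script can occasionally run longer than its slot, a small guard inside each script keeps a second copy from piling up another database connection. Here is a minimal sketch using flock (this is an assumption on my part, not something your scripts already contain, and the lock-file path is illustrative):
use strict;
use warnings;
use Fcntl ':flock';

# Take an exclusive, non-blocking lock; if another copy of this script
# already holds it, exit instead of opening yet another DB connection.
my $lockfile = '/tmp/sdp_incoming_traffic_tps_report.lock';
open my $lock, '>', $lockfile or die "Cannot open lock file '$lockfile': $!";
unless ( flock $lock, LOCK_EX | LOCK_NB ) {
    warn "Previous run still in progress, exiting\n";
    exit 0;
}

# ... DBI->connect and the rest of the report logic go here ...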
I'm relatively new to Perl and I've come across this project that I'm having a bit of a hard time with.
The object of the project is to compare two csv files, one of which would contain:
$name, $model, $version
and the other which would contain:
$name2,$disk,$storage
in the end the RESULT file will contain the matched lines and put the information together like so:
$name, $model, $version, $disk,$storage.
I've managed to do this, but my problem is that when one of the elements is missing the program breaks. When it encounters a line in the file missing an element, it stops at that line. How can I fix this problem? Any suggestions, or a way I can make it skip that line and continue on?
Here's my code:
open( TESTING, '>testing.csv' ); # Names will be printed to this during testing. only .net ending names should appear
open( MISSING, '>Missing.csv' ); # Lines with missing name fields will appear here.
#open (FILE,'C:\Users\hp-laptop\Desktop\file.txt');
#my (#array) =<FILE>;
my #hostname; #stores names
#close FILE;
#***** TESTING TO SEE IF ANY OF THE LISTED ITEMS BEGIN WITH A COMMA AND DO NOT HAVE A NAME.
#***** THESE OBJECTS ARE PLACED INTO THE MISSING ARRAY AND THEN PRINTED OUT IN A SEPARATE
#***** FILE.
#open (FILE,'C:\Users\hp-laptop\Desktop\file.txt');
#test
if ( open( FILE, "file.txt" ) ) {
}
else {
die " Cannot open file 1!\n:$!";
}
$count = 0;
$x = 0;
while (<FILE>) {
( $name, $model, $version ) = split(","); #parsing
#print $name;
chomp( $name, $model, $version );
if ( ( $name =~ /^\s*$/ )
&& ( $model =~ /^\s*$/ )
&& ( $version =~ /^\s*$/ ) ) #if all of the fields are blank ( just a blank space)
{
#do nothing at all
}
elsif ( $name =~ /^\s*$/ ) { #if name is a blank
$name =~ s/^\s*/missing/g;
print MISSING "$name,$model,$version\n";
#$hostname[$count]=$name;
#$count++;
}
elsif ( $model =~ /^\s*$/ ) { #if model is blank
$model =~ s/^\s*/missing/g;
print MISSING"$name,$model,$version\n";
}
elsif ( $version =~ /^\s*$/ ) { #if version is blank
$version =~ s/^\s*/missing/g;
print MISSING "$name,$model,$version\n";
}
# Searches for .net to appear in field "$name" if match, it places it into hostname array.
if ( $name =~ /.net/ ) {
$hostname[$count] = $name;
$count++;
}
#searches for a comma in the name field, puts that into an array and prints the line into the missing file.
#probably won't have to use this, as I've found a better method to test all of the fields ( $name,$model,$version)
#and put those into the missing file. Hopefully it works.
#foreach $line (#array)
#{
#if($line =~ /^\,+/)
#{
#$line =~s/^\,*/missing,/g;
#$missing[$x]=$line;
#$x++;
#}
#}
}
close FILE;
for my $hostname (#hostname) {
print TESTING $hostname . "\n";
}
#for my $missing(#missing)
#{
# print MISSING $missing;
#}
if ( open( FILE2, "file2.txt" ) ) { #Run this if the open succeeds
#open outfile and print starting header
open( RESULT, '>resultfile.csv' );
print RESULT ("name,Model,version,Disk, storage\n");
}
else {
die " Cannot open file 2!\n:$!";
}
$count = 0;
while ( $hostname[$count] ne "" ) {
while (<FILE>) {
( $name, $model, $version ) = split(","); #parsing
#print $name,"\n";
if ( $name eq $hostname[$count] ) # I think this is the problem area.
{
print $name, "\n", $hostname[$count], "\n";
#print RESULT"$name,$model,$version,";
#open (FILE2,'C:\Users\hp-laptop\Desktop\file2.txt');
#test
if ( open( FILE2, "file2.txt" ) ) {
}
else {
die " Cannot open file 2!\n:$!";
}
while (<FILE2>) {
chomp;
( $name2, $Dcount, $vname ) = split(","); #parsing
if ( $name eq $name2 ) {
chomp($version);
print RESULT"$name,$model,$version,$Dcount,$vname\n";
}
}
}
$count++;
}
#open (FILE,'C:\Users\hp-laptop\Desktop\file.txt');
#test
if ( open( FILE, "file.txt" ) ) {
}
else {
die " Cannot open file 1!\n:$!";
}
}
close FILE;
close RESULT;
close FILE2;
I think you want next, which lets you finish the current iteration immediately and start the next one:
while (<FILE>) {
( $name, $model, $version ) = split(",");
next unless( $name && $model && $version );
...;
}
The condition that you use depends on what values you'll accept. In my examples, I'm assuming that all values need to be true. If they need to just not be the empty string, maybe you check the length instead:
while (<FILE>) {
( $name, $model, $version ) = split(",");
next unless( length($name) && length($model) && length($version) );
...;
}
If you know how to validate each field, you might have subroutines for those:
while (<FILE>) {
( $name, $model, $version ) = split(",");
next unless( length($name) && is_valid_model($model) && length($version) );
...;
}
sub is_valid_model { ... }
Now you just need to decide how to integrate that into what you are already doing.
You should start by adding use strict and use warnings to the top of your program, and declaring all variables with my at their point of first use. That will reveal a lot of simple mistakes that are otherwise difficult to spot.
You should also use the three-parameter form of open and lexical filehandles, and the Perl idiom for checking exceptions when opening files is to add or die to the open call. if statements with an empty block for the success path waste space and become unreadable. An open call should look like this
open my $fh, '>', 'myfile' or die "Unable to open file: $!";
Finally, it is much safer to use a Perl module when you are handling CSV files as there are a lot of pitfalls in using a simple split /,/. The Text::CSV module has done all the work for you and is available on CPAN.
Your problem is that, having read to the end of the first file, you don't rewind or reopen it before reading from the same handle again in the second nested loop. That means no more data will be read from that file and the program will behave as if it is empty.
It is a bad strategy to read through the same file hundreds of times just to pair up corresponding records. If the file is of a reasonable size you should build a data structure in memory to hold the information. A Perl hash is ideal as it allows you to look up the data corresponding to a given name instantly.
I have written a revision of your code that demonstrates these points. It would be awkward for me to test the code as I have no sample data, but if you continue to have problems please let us know.
use strict;
use warnings;
use Text::CSV;
my $csv = Text::CSV->new( { binary => 1, eol => "\n" } );
my %data;
# Read the name, model and version from the first file. Write any records
# that don't have the full three fields to the "MISSING" file
#
open my $f1, '<', 'file.txt' or die qq(Cannot open file 1: $!);
open my $missing, '>', 'Missing.csv'
or die qq(Unable to open "MISSING" file for output: $!);
# Lines with missing name fields will appear here.
while ( my $line = $csv->getline($f1) ) {
my $name = $line->[0];
if ( ( grep { length $_ } @$line ) < 3 ) {
$csv->print($missing, $line);
}
else {
$data{$name} = $line if $name =~ /\.net$/i;
}
}
close $missing;
# Put a list of .net names found into the testing file
#
open my $testing, '>', 'testing.csv'
or die qq(Unable to open "TESTING" file for output: $!);
# Names will be printed to this during testing. Only ".net" ending names should appear
print $testing "$_\n" for sort keys %data;
close $testing;
# Read the name, disk and storage from the second file and check that the line
# contains all three fields. Remove the name field from the start and append
# to the data record with the matching name if it exists.
#
open my $f2, '<', 'file2.txt' or die qq(Cannot open file 2: $!);
while ( my $line = $csv->getline($f2) ) {
next unless ( grep { length $_ } @$line ) >= 3;
my $name = shift @$line;
next unless $name =~ /\.net$/i;
my $record = $data{$name};
push @$record, @$line if $record;
}
# Print the completed hash. Send each record to the result output if it
# has the required five fields
#
open my $result, '>', 'resultfile.csv' or die qq(Cannot open results file: $!);
$csv->print($result, [ qw( name Model version Disk storage ) ]);
for my $name (sort keys %data) {
my $line = $data{$name};
if ( ( grep { length $_ } @$line ) >= 5 ) {
$csv->print($result, $data{$name});
}
}