Validate a | delimited txt file while reading it into an array in Perl

I have a text file with | as the delimiter and it has some 10 columns. I have read this file into an array, and before processing it I want to check that the file is valid, meaning non-empty and having a fixed 10 columns. If the number of columns is less than 10, an error message should be displayed.

You can just count the number of pipe characters in each line. Ten columns means exactly nine pipes, so you can say
perl -ne '($n = tr/|// + 1) == 10 or die "Found $n fields on line $.\n"' myfile.txt

my $file = 'path/to/file.txt';
open my $fh, '<', $file or die $!;
my $line = <$fh>;
unless ($line =~ /^(.*?)\|(.*?)\|(.*?)\|(.*?)\|(.*?)\|(.*?)\|(.*?)\|(.*?)\|(.*?)\|(.*)/) {
    print "You have fewer than 10 columns!\n";
}
if ($10 =~ /\|/) { print "More than 10 columns!\n"; }
close $fh;
EDIT:
There are many ways, and this is just a quick, sloppy answer. It will ensure at least 10 columns as stated, although with the < sign you technically asked about 9 or fewer.
EDIT EDIT:
This will ensure at least 10; there could be more. I'd recommend counting with a global regex, but it's been a while for me. Alternatively, here is another method to ensure exactly 10:
$line = <$fh>;
my @check = split(/\|/, $line);
if (scalar @check != 10) { print "bad file delimitation!\n"; }
EDIT x3:
Use if ($10 =~ /\|/) { print "More than 10 columns!\n"; } to ensure no more than 10, or try a global regex to count, or the array method above.
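Putting the pieces together, here is a hedged sketch (the file name is a placeholder) that covers both checks from the question: a non-empty file and exactly 10 pipe-delimited columns per line, counting the pipes with a global regex as suggested above:

#!/usr/bin/env perl
use strict;
use warnings;

my $file = 'path/to/file.txt';   # placeholder path
open my $fh, '<', $file or die "Cannot open $file: $!";

my $lines = 0;
while (my $line = <$fh>) {
    $lines++;
    chomp $line;
    my $pipes = () = $line =~ /\|/g;   # count pipes with a global match
    my $cols  = $pipes + 1;            # N pipes means N+1 columns
    die "Expected 10 columns, found $cols on line $.\n" if $cols != 10;
}
close $fh;

die "$file is empty\n" unless $lines;
print "$file is valid\n";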

Related

Iterate through a file multiple times, each time finding a regex and returning one line (perl)

I have one file with ~90k lines of text in 4 columns.
col1 col2 col3 value1
...
col1 col2 col3 value90000
A second file contains ~200 lines, each one corresponding to a value from column 4 of the larger file.
value1
value2
...
value200
I want to read in each value from the smaller file, find the corresponding line in the larger file, and return that line. I have written a perl script that places all the values from the small file into an array, then iterates through that array using each value as a regex to search through the larger file. After some debugging, I feel like I have it almost working, but my script only returns the line corresponding to the LAST element of the array.
Here is the code I have:
open my $fh1, '<', $file1 or die "Could not open $file1: $!";
my @array = <$fh1>;
close $fh1;

my $count = 0;
while ($count < scalar @array) {
    my $value = $array[$count];
    open my $fh2, '<', $file2 or die "Could not open $file2: $!";
    while (<$fh2>) {
        if ($_ =~ /$value/) {
            my $line = $_;
            print $line;
        }
    }
    close $fh2;
    $count++;
}
This returns only:
col1 col2 col3 value200
I can get it to print each value of the array, so I know it's iterating through properly, but it's not using each value to search the larger file as I intended. I can also plug any of the values from the array into the $value variable and return the appropriate line, so I know the lines are there. I suspect my bug may have to do with either:
newlines in the array elements, since all the elements have a newline except the last one. I've tried chomp but get the same result.
or
something to do with the way I'm handling the second file with opening/closing. I've tried moving or removing the close command and that either breaks the code or doesn't help.
You should read the 90k-line file only once, checking each value from the other file against the fourth column of each line as you go, instead of reading the whole large file once per line of the smaller one:
#!/usr/bin/env perl
use warnings;
use strict;
use feature qw/say/;

my ($file1, $file2) = @ARGV;

# Read the file of strings to match against
open my $fh1, '<', $file1 or die "Could not open $file1: $!";
my %words = map { chomp; $_ => 1 } <$fh1>;
close $fh1;

# Process the data file in one pass
open my $fh2, '<', $file2 or die "Could not open $file2: $!";
while (my $line = <$fh2>) {
    chomp $line;
    # Only look at the fourth column
    my @fields = split /\s+/, $line, 4;
    say $line if exists $words{$fields[3]};
}
close $fh2;
Note this uses a straight-up string comparison (via hash key lookup) against the last column instead of regular expression matching; your sample data looks like that's all that's needed. If you're using actual regular expressions, let me know and I'll update the answer.
Your code does look like it should work, just horribly inefficiently. In fact, after adjusting your sample data so that more than one line matches, it does print out multiple lines for me.
Slightly different approach to the problem
use warnings;
use strict;
use feature 'say';

my $values = shift;

open my $fh1, '<', $values or die "Could not open $values";
my @lookup = <$fh1>;
close $fh1;
chomp @lookup;

my $re = join '|', map { '\b' . $_ . '\b' } @lookup;

((split)[3]) =~ /$re/ && print while <>;
Run as script.pl value_file data_file
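One caveat with building the pattern that way: if the lookup values could ever contain regex metacharacters, it is safer to escape them with quotemeta, for example:

my $re = join '|', map { '\b' . quotemeta($_) . '\b' } @lookup;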

How to read a .txt file and store it into an array

I know this is a fairly simple question, but I cannot figure out how to store all of the values in my array the way I want to.
Here is a small portion what the .txt file looks like:
0 A R N D
A 2 -2 0 0
R -2 6 0 -1
N 0 0 2 2
D 0 -1 2 4
Each value is delimited by either two spaces (if the next value is positive) or a space and a '-' (if the next value is negative).
Here is the code:
use strict;
use warnings;

open my $infile, '<', 'PAM250.txt' or die $!;

my $line;
my @array;

while ($line = <$infile>)
{
    $line =~ /^$/ and die "Blank line detected at $.\n";
    $line =~ /^#/ and next; # skips the commented lines at the beginning
    @array = $line;
    print "@array"; # prints the array after each line is read
};

print "\n\n@array"; # only prints the last line of the array?
I understand that @array only holds the last line that was passed to it. Is there a way I can get @array to hold all of the lines?
You are looking for push.
push @array, $line;
You undoubtedly want to precede this with chomp to snip any newlines first.
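A minimal sketch of that pattern, using the $infile handle and @array from the question:

while (my $line = <$infile>) {
    chomp $line;          # snip the trailing newline
    push @array, $line;   # append the line to the array
}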
If the file is small compared to your machine's available memory, you can simply use the method below to read the file's contents into an array:
open my $infile, '<', 'PAM250.txt' or die $!;
my @array = <$infile>;
close $infile;
If you are going to read a very large file, it is better to read it line by line as you are doing, but use push to add each line at the end of the array:
push(@array, $line);
I suggest you also read about some more of Perl's array-manipulating functions.
It's not quite clear what you want to achieve.
Is every line an element of your array?
Or is every line an array within your array, with your "words" as the elements of that inner array?
Anyhow.
Here is how you can achieve both:
use strict;
use warnings;
use Data::Dumper;

# Read all lines into your array, after removing the \n
my @array = map { chomp; $_ } <>;

# show it
print Dumper \@array;

# Make each line an array so that you have an array of arrays
$_ = [ split ] foreach @array;

# show it
print Dumper \@array;
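With the sample data above, the array-of-arrays form then lets you index individual cells directly:

print $array[1][2], "\n";   # prints -2: row "A", column "R" of the sample matrix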
try this...
sub room
{
    my $result = "";
    open(my $fh, '<', $_[0]) or die "Cannot open $_[0]: $!";
    while (<$fh>) { $result .= $_; }
    close($fh);
    return $result;
}
so you have the basic functionality without many words. The suggestion above risks failing on large files; this is the fastest safe way. Call it as you like...
my @array = &room('/etc/passwd');
print room('/etc/passwd');
You can shorten or rename it as you please.
To the critics nearby: this way, push is replaced by simplicity. A text file contains line breaks; the traditional push approach strips each line break and pushes just the line, while this construction keeps the whole file as one string with its line breaks intact.

Popping keys of an array to calculate a total

I'm trying to simply pop off each numeric value and add them together to gain a total.
Input file:
Samsung 46
RIM 16
Apple 87
Microsoft 30
My code compiles, however, it only returns 0:
open (UNITS, 'units.txt') || die "Can't open it $!";
my @lines = <UNITS>;
my $total = 0;
while (<UNITS>) {
    chomp;
    my $line = pop @lines;
    $line += $total;
}
print $total;
print $total;
No need to slurp all lines into an array if you're just going to loop through them anyway with a while. Also, you need to split each line to get your numbers.
use warnings;
use strict;

open (UNITS, 'units.txt') || die "Can't open it $!";
my $total = 0;
while (<UNITS>) {
    chomp;
    my $num = (split)[1];
    $total += $num;
}
print "$total\n";

__END__

179
There are three problems here:
You are trying to add strings like 'Samsung 46' + 'RIM 16'.
You read the entire file into @lines and then try to read more from the file in the while loop. That loop is never entered because you have already read to end of file.
You are adding $total to the (undeclared) variable $line within the loop, instead of the other way around. So $total remains at zero and $line keeps having zero added to it.
It is best to use while to read files unless you need something other than sequential access to the records, so removing @lines is a start.
It isn't completely clear which part of the records you want to accumulate. This program splits the lines on whitespace and adds together the last field of each line.
You must always use strict and use warnings at the start of every program. It is a measure that will make it far easier to locate bugs in your code. It is also best to use lexical file handles rather than the global one you used, and the three-parameter form of open.
use strict;
use warnings;

open my $units, '<', 'units.txt' or die "Can't open it: $!";

my $total;
while (<$units>) {
    my @fields = split;
    $total += $fields[-1];
}

print $total;
output
179
use strict;
use warnings;

open my $fh, "<", "units.txt" or die "well...";
my $total = 0;
while (<$fh>) {
    chomp;
    my ($string, $num) = split(" ", $_);
    $total += $num;
}
print $total;
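For the sample input this likewise prints 179 (46 + 16 + 87 + 30).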
This problem is a doddle with a one-liner:
$ perl -ane '$sum += $F[1] }{ print $sum' units.txt
Explanation
-a enables autosplit: each line is split and stored in @F
-n loops over the file line by line
-e tells perl that the next argument is to be treated as Perl code
the LHS of the Eskimo-kiss (that funny-looking }{ in the middle) is performed for every line in the input file, RHS performed only once
LHS accumulates the second column of every line in $sum
RHS prints the result of $sum once all lines have been processed
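If the Eskimo-kiss feels too clever, an equivalent spelling does the final print in an explicit END block, which runs once after all input is processed:

$ perl -ane '$sum += $F[1]; END { print "$sum\n" }' units.txt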

Perl - Open large txt file on server and create / save into smaller files of 100 lines each

I am trying to do this:
I FTP a large file of single words (~144,000 words, one per line).
I need to open the uploaded file and create files of at most 100 lines each, one word per line (01.txt, 02.txt, etc.).
I would like the processed 100 to be REMOVED from the original file
AFTER the file of 100 is created.
The server is shared but, I can install modules if needed.
Now, my code below is very crude, as my knowledge is VERY limited. One problem is opening the whole file into an array: I assume the shared server does not sport enough memory to open such a large file and read it into memory all at once? I just want the first 100 lines. Below is just opening a file that is small enough to be loaded and getting 100 lines into an array, nothing else. I typed it quickly, so it probably has several issues, but it shows my limited knowledge and need for help.
use vars qw($Word @Words $IN);

my $PathToFile = '/home/username/public/wordlists/Big-File-Of-Words.txt';
my $cnt = '0';

open $IN, '<', "$PathToFile" or die $!;
while (<$IN>) {
    chomp;
    $Word = $_;
    $Word =~ s/\s//g;
    $Word = lc($Word);
    ######
    if ($cnt <= 99) {
        push(@Words, $Word);
    }
    $cnt++;
}
close $IN;
close $IN;
Thanks so much.
Okay, I am trying to implement the code below:
#!/usr/bin/perl -w
BEGIN {
    my $b__dir = (-d '/home/username/perl'?'/home/username/perl':( getpwuid($>) )[7].'/perl');
    unshift @INC,$b__dir.'5/lib/perl5',$b__dir.'5/lib/perl5/x86_64-linux',map { $b__dir . $_ } @INC;
}
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);
print CGI::header();
my $WORD_LIST='/home/username/public/wordlists/Big-File-Of-Words.txt';
sed 's/ *//g' $WORD_LIST | tr '[A-Z]' '[a-z]' | split -l 100 -a6 - words.
print 'Done';
1;
But I get:
syntax error at split-up-big-file.pl line 12, near "sed 's/ *//g'"
Can't find string terminator "'" anywhere before EOF at split-up-big-file.pl line 12.
FINALLY:
Well I figured out a quick solution that works. Not pretty:
#!/usr/bin/perl -w
BEGIN {
    my $b__dir = (-d '/home/username/perl'?'/home/username/perl':( getpwuid($>) )[7].'/perl');
    unshift @INC,$b__dir.'5/lib/perl5',$b__dir.'5/lib/perl5/x86_64-linux',map { $b__dir . $_ } @INC;
}
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);
use diagnostics;
print CGI::header();

my $sourcefile = '/home/username/public_html/test/bigfile.txt';
my $rowlimit = 100;
my $cnt = '1';

open(IN, $sourcefile) or die "Failed to open $sourcefile";

my $outrecno = 1;
while (<IN>) {
    if ($outrecno == 1) {
        my $filename = $cnt . '.txt';
        open OUT, ">$filename" or die "Failed to create $filename";
        $cnt++;
    }
    print OUT $_;
    if ($outrecno++ == $rowlimit) {
        $outrecno = 1;
        close OUT;
    }
}
close OUT;
I found enough info here to get me going. Thanks...
Here is a solution based on a slight modification of your code that should work approximately the way you want it.
It loops through all the lines of the input file and for every 100th line it will write the word list of the words encountered since the last write (or the beginning). The eof($IN) check is to catch the remaining lines if they are less than 100.
use strict;
use warnings;

my $PathToFile = '/home/username/public/wordlists/Big-File-Of-Words.txt';

open my $IN, '<', "$PathToFile" or die $!;

my $cnt = 0;
my $cnt_file = 0;
my @Words;

while ( my $Word = <$IN> ) {
    chomp $Word;
    $Word =~ s/\s//g;
    $Word = lc($Word);
    ######
    push(@Words, $Word);
    if ( !(++$cnt % 100) || eof($IN) ) {
        $cnt_file++;
        open my $out_100, '>', "file_$cnt_file.txt" or die $!;
        print $out_100 join("\n", @Words), "\n";
        close $out_100;
        @Words = ();
    }
}
There's a non-Perl solution that you might find interesting...
$ split -l 100 -a6 /home/username/public/wordlists/Big-File-Of-Words.txt words.
This will split your big file of words into a bunch of files with no more than 100 lines each. The file name will start with words., and the suffix will range from aaaaaa to zzzzzz. Thus, you'll have words.aaaaaa, words.aaaaab, words.aaaaac, etc. You can then recombine all of these files back into your word list like this:
$ cat words.* > reconstituted_word_list.txt
Of course, you want to eliminate spaces, and lowercase the words all at the same time:
$ WORD_LIST=/home/username/public/wordlists/Big-File-Of-Words.txt
$ sed 's/ *//g' $WORD_LIST | tr '[A-Z]' '[a-z]' | split -l 100 -a6 - words.
The sed removes the spaces, tr is the translation command and changes all uppercase to lowercase, and split splits the stream into the 100-line files.
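If sed or tr happen to be unavailable, a Perl one-liner can stand in for the cleanup stage of that pipeline (a sketch, assuming the same $WORD_LIST variable):

$ perl -pe 's/ +//g; $_ = lc $_' $WORD_LIST | split -l 100 -a6 - words.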
One of Unix's big strengths was its file handling ability. Splitting up big files into smaller pieces and reconstituting them was a common task. Maybe you had a big file, but a bunch of floppy disks that couldn't hold more than 100K per floppy. Maybe you were trying to use UUCP to copy these files over to another computer and there was a 10K limit on file transfer sizes. Maybe you were doing FTP by email, and the system couldn't handle files larger than 5K.
Anyway, I brought it up because it's probably an easier solution in your case than writing a Perl script. I am a big writer of Perl, and many times Perl can handle a task better and faster than shell scripts can. However, in this case, this is an easy task to handle in shell.
Here's a pure Perl solution. The problem is that you want to create files after every 100 lines.
To solve this, I have two loops. One is an infinite loop, and the other loops 100 times. Before I enter the inner loop, I create a file for writing, and write one word per line. When that inner loop ends, I close the file, increment my $output_file_num and then open another file for output.
A few changes:
I use use warnings; and use strict (which is included when you specify that you want Perl version 5.12.0 or greater).
Don't use use vars;. This is obsolete. If you have to use package variables, declare the variable with our instead of my. When should you use package variables? If you have to ask that question, you probably don't need package variables. 99.999% of the time, simply use my to declare a variable.
I use constant to define your word file. This makes it easy to move the file when needed.
My s/../../ not only removes beginning and ending spaces, but also lowercases my word for me. The ^\s*(.*?)\s*$ removes the entire line, but captures the word sans spaces at the beginning and end of the word. The .*? is like .*, but is non-greedy. It will match the minimum possible (which in this case does not include spaces at the end of the word).
Note I define a label INPUT_WORD_LIST. I use this to force my inner last to exit the outer loop.
I take advantage of the fact that $output_word_list_fh is defined only in the loop. Once I leave the loop, the file is automatically closed for me since $output_word_list_fh is out of scope.
And the program:
#!/usr/bin/env perl
use 5.12.0;
use warnings;
use autodie;

use constant WORD_FILE => "/home/username/public/wordlists/Big-File-Of-Words.txt";

open my $input_word_list_fh, "<", WORD_FILE;
my $output_file_num = 0;

INPUT_WORD_LIST:
for (;;) {
    open my $output_word_list_fh, ">", sprintf "%05d.txt", $output_file_num;
    for my $line (1..100) {
        my $word;
        if ( not $word = <$input_word_list_fh> ) {
            last INPUT_WORD_LIST;
        }
        chomp $word;
        $word =~ s/^\s*(.*?)\s*$/\L$1\E/;
        say {$output_word_list_fh} "$word";
    }
    close $output_word_list_fh;
    $output_file_num += 1;
}
close $input_word_list_fh;
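Run with no arguments, this reads WORD_FILE and writes 00000.txt, 00001.txt, and so on into the current directory, one word per line and at most 100 words per file.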

Perl - cutting many strings with a given array of numbers

dear my fellow perl masters in the world~!
I need your help.
I have a string file A and a number file B like this:
File A:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
...and so on till 200.
File B:
3, 6, 2, 5, 6, 1, ... 2
(total 200 numbers in an array)
then, with the numbers in file B, I would like to cut each string from the start position to the number of characters in File B.
E.g. as File B starts with 3, 6, 2 ...
File A will be
AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
like this.
So, this is my code so far...
use strict;

if (@ARGV != 2) {
    print "Invalid usage\n";
    print "Usage: perl program.pl [num_list] [string_file]\n";
    exit(0);
}

my $numbers = $ARGV[0];
my $strings = $ARGV[1];
my $i;

open(LIST, $number);
open(DATA, $strings);

my @list = <LIST>;
my $list_size = scalar @sp_list;

for ($i = 0; $i <= $list_size; $i++) {
    print $i, "\n";
    #while (my $line = <DATA>) {
}

close(LIST);
close(DATA);
As the strings and numbers number 200 each, I changed the array into a scalar value so I can work on every number of every string.
I'm working on this, and I know I'm supposed to use the pos function, but I do not know how to match each number with each string. Should I read the string first with while? Or use for, knowing how many times I have to loop to achieve the result?
Your help will be much appreciated!
Thank you.
I will be working on it, too. Need your feedback.
It is good that you use strict, and you should also use warnings. Further things to note:
You should check the return value of open to make sure it did not fail. You should also use the three-argument form of open and use a lexical file handle, especially when handling command line arguments, which pose a security risk.
open my $listfh, "<", $file or die $!;
You may wish to use a safety precaution
use ARGV::readonly;
You can easily make the list of numbers with a map statement. Assuming the numbers are in a comma separated list:
my @list = map split(/\s*,\s*/), <$listfh>;
This will split the input line(s) on comma and strip excess whitespace.
When reading your input file, you do not need to use a counter variable. You can simply do
open my $inputfh, "<", $file or die $!;
while (<$inputfh>) {
    my $length = shift @list;              # these are your numbers
    chomp;                                 # remove newline
    my $string = substr($_, 0, -$length);  # negative length on substr
    print "$string\n";
}
The negative length on substr makes it leave that many characters off the end of the string.
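A quick demonstration of that negative-length behavior:

my $s = substr("ABCDEFG", 0, -3);   # leave the last 3 characters off
print "$s\n";                        # prints ABCD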
Here is a one-liner in action that demonstrates these principles:
perl -lwe '$f = pop;                   # save file name for later
    @nums = map split(/\s*,\s*/), <>;  # process first file
    push @ARGV, $f;                    # put back file name
    while (<>) {
        my $len = shift @nums;
        chomp;
        print substr($_, 0, -$len);
    }' fileb.txt filea.txt
Output:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
DDDDDDDDDDDDDDDDDDDDDDDDDDD
EEEEEEEEEEEEEEEEEEEEEEEEEE
Note the use of implicit open of file name arguments by manipulating @ARGV. Also, newlines are handled with the -l switch.
Here is my suggestion. It uses autodie so that there is no need to explicitly check the status of open calls, and temporarily undefines $/ (the input record separator) so that all of the num_list file is read in one go. It isn't clear whether this file will always contain just a single line; if it does, you can omit local $/.
The numbers are extracted from the text using a regular expression: /\d+/g returns all the strings of digits in the input as a list.
The second parameter to substr is the start position of the substring you want, and using a negative number counts from the end of the string instead of the beginning. The third parameter is the number of characters in the substring, and the fourth is a string to replace that substring in the target variable. So substr $data, -$n, $n, '' replaces the substring of length $n starting $n characters from the end with an empty string - i.e. it deletes it.
Note that if it is your intention to remove the given number of characters from the beginning of the string, then you would write substr $data, 0, $n, '' instead.
use strict;
use warnings;
use autodie;

unless (@ARGV == 2) {
    print "Usage: perl program.pl [num_list] [string_file]\n";
    exit;
}

my @numbers;
{
    open my $listfh, '<', $ARGV[0];
    local $/;
    my $numbers = <$listfh>;
    @numbers = $numbers =~ /\d+/g;
}

open my $datafh, '<', $ARGV[1];

for my $i (0 .. $#numbers) {
    print "$i\n";
    my $n = $numbers[$i];
    my $data = <$datafh>;
    chomp $data;
    substr $data, -$n, $n, '';
    print "$data\n";
}
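As a quick standalone illustration of that four-argument substr deletion:

my $data = "ABCDEFGHIJ";
my $n = 3;
substr $data, -$n, $n, '';   # delete the last $n characters in place
print "$data\n";             # prints ABCDEFG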
Here is how I would do it. substr is the function to remove a part of a string. From your example, it is not clear whether you want to remove the characters at the beginning or at the end. Both alternatives are shown here:
#!/usr/bin/perl
use warnings;
use strict;

if (@ARGV != 2) {
    die "Invalid usage\n"
      . "Usage: perl program.pl [num_list] [string_file]\n";
}

my ($number_f, $string_f) = @ARGV;

open my $LIST, '<', $number_f or die "Cannot open $number_f: $!";
my @numbers = split /, */, <$LIST>;
close $LIST;

open my $DATA, '<', $string_f or die "Cannot open $string_f: $!";
while (my $string = <$DATA>) {
    substr $string, 0, shift @numbers, q();  # Replace the first n characters with an empty string.
    # To remove the trailing portion, replace the previous line with the following:
    # my $n = shift @numbers;
    # substr $string, -$n-1, $n, q();
    print $string;
}
You were not checking the return value of open. Try to remember to always do that.
Do not declare variables far before you are going to use them ($i here).
Do not use C-style for loops if you do not have to. They are prone to fence post errors.
You can use substr():
use strict;
use warnings;

if (@ARGV != 2) {
    print "Invalid usage\n";
    print "Usage: perl program.pl [num_list] [string_file]\n";
    exit(0);
}

my $numbers = $ARGV[0];
my $strings = $ARGV[1];

open my $list, '<', $numbers or die "Can't open $numbers: $!";
open my $data, '<', $strings or die "Can't open $strings: $!";

chomp(my $numlist = <$list>);
my @numbers = split /\s*,\s*/, $numlist;

for my $chop_length (@numbers)
{
    my $line = <$data> // die "not enough data in $strings";
    chomp $line;
    print substr($line, 0, length($line) - $chop_length) . "\n";
}
Your specs say you want "... to cut each string from the start position to the number of characters in File B." I agree with choroba that it's not perfectly clear whether characters from the start or the end of the string are to be cut. However, I tend to think that you want to remove characters from the beginning when you say, "... from the start position ...", but a string like ABCDEFGHIJKLMNOPQRSTUVWXYZ012345 would help clarify this issue.
This option is not as well self-documenting as the other solutions, but a discussion of it will follow:
use strict;
use warnings;

@ARGV == 2 or die "Usage: perl program.pl [num_list] [string_file]\n";

open my $fh, '<', pop or die "Cannot open string file: $!";
chomp( my @str = <$fh> );

local $/ = ', ';

while (<>) {
    chomp;
    print +( substr $str[ $. - 1 ], $_ ) . "\n";
}
Strings:
ABCDEFGHIJKLMNOPQRSTUVWXYZ012345
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
Numbers:
3, 6, 2, 5, 6
Output:
DEFGHIJKLMNOPQRSTUVWXYZ012345
BBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
DDDDDDDDDDDDDDDDDDDDDDDDDDD
EEEEEEEEEEEEEEEEEEEEEEEEEE
The strings' file name is popped off @ARGV (since no explicit argument to pop is given) and passed to open to read the strings into @str. The record separator is set to ', ' so chomp leaves only the number. The current line number in $. is used as part of the index into the corresponding @str element, and the remaining characters of the string from position n onward are printed.
