I have a school programming assignment, and we are learning hashes. The teacher went over hashes of arrays but not really arrays of hashes, and I feel like an AoH is going to work out better for me in the long run. Right now I get all my data into separate variables, and I want to store them in an AoH because I have the same variables the entire time but the values change.
The program is a log analyzer: it parses through a gigantic log file, and the data is just lines that look like this.
IPADDY x x [DATE:TIME -x] "METHOD URL HTTPVERS" STATUSCODE BYTES "REFERER" "USERAGENT"
An example line being:
27.112.105.20 - - [09/Oct/2011:07:22:51 -0500] "GET / HTTP/1.1" 200 4886 "-" "Python-urllib/2.4"
Now I get all the data fine; I just don't really understand how to populate an array of hashes, if anyone can help me out.
Here is updated code that grabs the data and tries storing it into an AoH. The output in my file used to be perfect, just like the print statements I now have commented out. All that comes out in my output file now is "ARRAY(0x2395df0): HASH(0x23d06e8)". Am I doing something wrong?
#!/usr/bin/perl
use strict;
use warnings;
my $j = 0;
my @arrofhash;
my $ipadd;
my $date;
my $time;
my $method;
my $url;
my $httpvers;
my $statuscode;
my $bytes;
my $referer;
my $useragent;
my $dateANDtime;
my ($dummy1, $dummy2, $dummy3);
open ( MYFILE, '>>dodoherty.report');
if ( @ARGV < 1)
{
printf "\n\tUsage: $0 file word(s)\n\n";
exit 0;
}
for (my $i = 0; $i < @ARGV; ++$i)
{
open( HANDLE, $ARGV[$i]);
while( my $line = <HANDLE> )
{
($ipadd, $dummy1, $dummy2, $dateANDtime, $dummy3, $method, $url, $httpvers, $statuscode, $bytes, $referer, $useragent) = split( /\s/, $line);
$method = substr ($method, 1, length($method));
$httpvers = substr ($httpvers, 0, length($httpvers)-1);
$referer = substr ($referer, 1, length($referer)-2);
$useragent = substr ($useragent, 1, length($useragent)-1);
if ( substr ($useragent, length($useragent)-1, length($useragent)) eq '"')
{
chop $useragent;
}
if ( $dateANDtime =~ /\[(\S*)\:(\d{2}\:\d{2}\:\d{2})/)
{
$date = $1;
$time = $2;
}
$arrofhash[$i] = {ipadd => $ipadd, date => $date, 'time' => $time, method => $method, url => $url, httpvers => $httpvers, statuscode => $statuscode, bytes => $bytes, referer => $referer, useragent => $useragent};
# print MYFILE "IPADDY :$ipadd\n";
# print MYFILE "METHOD :$method\n";
# print MYFILE "URL :$url\n";
# print MYFILE "HTTPOVERS : $httpvers\n";
# print MYFILE "STATUS CODE: $statuscode\n";
# print MYFILE "BYTES : $bytes\n";
# print MYFILE "REFERER : $referer\n";
# print MYFILE "USERAGENT : $useragent\n";
# print MYFILE "DATE : $date\n";
# print MYFILE "TIME : $time\n\n";
}
}
for ( my $j = 0; $j < @arrofhash; ++$j)
{
foreach my $hash (@hashkeys)
{
printf MYFILE "%s: %s\n",$hash, $arrofhash[$j];
}
print MYFILE "\n";
}
close (MYFILE);
A common beginner mistake is to not make use of the lexical scope of variables, and just declare all variables at the top, like you do. Declare them within the scope that you need them, no more, no less.
In your case, it would be beneficial to just store the data directly in a hash, then push that hash reference to an array. I would also advise against using split here, as it works unreliably IMO: you are splitting quoted strings and using dummy variables to get rid of unwanted data. Instead, use a regex.
This regex won't handle escaped quotes inside quotes, but I get the feeling that you will not have to deal with that, since you were using split before to handle this.
You will need to add any further processing to the data, like extracting date and time, etc. If you want some added safety, you can add a warning if the regex seems to have failed, e.g. unless (%f) { warn "Warning: Regex did not match line: '$_'"; next; }
use strict;
use warnings;
use Data::Dumper;
my @all;

while (<DATA>) {
    my %f;    # make a new hash for each line

    # assign the regex captures to a hash slice
    @f{qw(ipadd dateANDtime method statuscode bytes referer useragent)} =
        /^                # at beginning of line...
         (\S+) [\s-]*     # capture non-whitespace and ignore whitespace/dashes
         \[([^]]+)\]\s*   # capture what's inside brackets
         "([^"]+)"\s*     # capture what's inside quotes
         (\d+)\s*         # capture digits
         (\d+)\s*
         "([^"]+)"\s*
         "([^"]+)"\s*
         $/x;             # ..until end of line, /x for regex readability only

    # e.g. further processing: split the timestamp into date and time
    @f{qw(date time)} = split /:/, $f{dateANDtime}, 2;

    push @all, \%f;    # store hash reference in array
}

print Dumper \@all;    # show the structure you've captured
__DATA__
27.112.105.20 - - [09/Oct/2011:07:22:51 -0500] "GET / HTTP/1.1" 200 4886 "-" "Python-urllib/2.4"
Basically you just declare the top level structure, and then use it:
my @AoH;
$AoH[0]{some_key} = 5;
$AoH[1]{some_other_key} = 10;
# ^ ^ second level is a hash
# | first level is an array
Which would create an array with two elements, each a hash with one key. This feature is called autovivification, and it causes container structures to spring into existence when they are used.
All of this is documented in the perldsc tutorial.
In your case, it would be something like:
$arrofhash[$i]{key_name} = value;
$arrofhash[$i]{another_key} = another_value;
...
or
$arrofhash[$i] = {key => value, key2 => value2, ...}
to set the whole hash at once.
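As a quick, self-contained illustration of the same idea (the keys and values here are made up, just to show the structure springing into existence):
use strict;
use warnings;
use Data::Dumper;

my @aoh;                             # starts out completely empty
$aoh[0]{ipadd}  = '27.112.105.20';   # element 0 autovivifies as a hash reference
$aoh[0]{method} = 'GET';
$aoh[1]{ipadd}  = '10.0.0.1';        # and so does element 1
print Dumper \@aoh;                  # an array of two hash references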
I have a subroutine in my module which checks a (regular) user's password age using a regex search on the shadow file:
Module.pm
my $pwdsetts_dump = "tmp/shadow_dump.txt";
system("cat /etc/shadow > $pwdsetts_dump");
open (my $fh1, "<", $pwdsetts_dump) or die "Could not open file '$pwdsetts_dump': $!";
sub CollectPWDSettings {
my @pwdsettings;
while (my $array = <$fh1>) {
if ($array =~ /^(\S+)[:][$]\S+[:](1[0-9]{4})/) {
my $pwdchange = "$2";
if ("$2" eq "0") {
$pwdchange = "Next login";
}
my %hash = (
"Username" => $1,
"Last change" => $pwdchange
);
push (@pwdsettings, \%hash);
}
}
my $current_date = int(time()/86400); # epoch
my $ndate = shift @_; # n-days
my $search_date = int($current_date - $ndate);
my @sorted = grep { $_->{'Last change'} > $search_date } @pwdsettings;
return \@sorted;
}
The script is divided into 2 steps:
1. load all password settings
2. search for passwords which are older than n days
In my main script I use the following code:
my ($user_changed_pwd);
if (grep{$_->{'Username'} eq $users_to_check} @{Module::CollectPWDSettings("100")}) {
$user_changed_pwd = "no";
}
else {
$user_changed_pwd = "yes";
}
Problem occurs in the first step: the AoH never gets populated. I'm also pretty sure that this subroutine always worked for me, and strict and warnings never complained about it, but now, for some reason, it refuses to work.
I've just run your regex against my /etc/shadow and got no matches. If I drop the leading 1 I get a few hits.
E.g.:
$array =~ /^(\S+)[:][$]\S+[:]([0-9]{4})/
But personally - I would suggest not trying to regex, and instead rely on the fact that /etc/shadow is defined as delimited by :.
($1 contains a bunch of stuff, and I suspect what you actually want is the username - but because \S+ is greedy, you might be accidentally ending up with encrypted passwords.)
my @fields = split( /:/, $array );
The username will be $fields[0], and the 'last change' field - from man shadow - is $fields[2].
I think your regex pattern is the main problem. Don't forget that \S matches any non-space character, including colons, and \S+ will try to match as much as possible, so it will happily skip over multiple fields of the file.
I think using split to separate each record into colon-delimited fields is a better approach. I also think that, instead of the array of two-element hashes @pwdsettings, it would be better to store the data as a hash relating usernames to their password history.
Here's how I would write this. It prints a list of all usernames whose passwords were last changed more than 90 days ago:
use strict;
use warnings;
use Time::Seconds 'ONE_DAY';
my @shadow = do {
    open my $fh, '<', '/etc/shadow'
        or die qq{Unable to open "/etc/shadow" for input: $!};
    <$fh>;
};

print "$_\n" for @{ collect_pwd_settings(90) };

sub collect_pwd_settings {
    my ($ndate) = @_;

    my %pwdsettings;
    for ( @shadow ) {
        my ($user, $pwdchange) = (split /:/)[0,2];
        $pwdsettings{$user} = $pwdchange;
    }

    my $current_date = time / ONE_DAY;
    my @filtered = grep { $current_date - $pwdsettings{$_} > $ndate } keys %pwdsettings;
    return \@filtered;
}
I have a file that looks like this:
[options42BuySide]
logged-check-times=06:01:00
logged-check-address=192.168.3.4
logged-check-reply=192.168.2.5
logged-check-vac-days=sat,sun
start-time=06:01:00
stop-time=19:00:00
falldown=logwrite after 10000
failtolog=logwrite after 10000
listento=all
global-search-text=Target Down. This message is stored;
[stock42BuySide]
logged-check-times=06:01:00
logged-check-address=192.168.2.13
logged-check-reply=192.168.2.54
logged-check-vac-days=sat,sun
start-time=06:01:00
stop-time=18:00:00
The script grinds the list down to just the name, start and stop time.
sellSide40, start-time=07:05:00, stop-time=17:59:00
SellSide42, start-time=07:06:00, stop-time=17:29:00
SellSide44, start-time=07:31:00, stop-time=16:55:00
42SellSide, start-time=09:01:00, stop-time=16:59:00
The problem is that I would like to filter out specific names from the file with command-line parameters.
I am trying to use the @ARGV array and grep the command-line values out of the @nametimes array. Something like:
capser#capser$ ./get_start_stop SorosSellSide42 ETFBuySide42
The script works fine for parsing the file - I just need help on the command line array
#!/usr/bin/perl
use strict ;
use warnings ;
my ($name , $start, $stop, $specific);
my @nametimes;
my $inifile = "/var/log/Alert.ini";
open ( my $FILE, '<', "$inifile") or die ("could not open the file -- $!");
while(<$FILE>) {
chomp ;
if (/\[(\w+)\]/) {
$name = $1;
} elsif (/(start-time=\d+:\d+:\d+)/) {
$start = $1;
} elsif (/(stop-time=\d+:\d+:\d+)/) {
$stop = $1;
push (@nametimes, "$name, $start, $stop");
}
}
for ($a = 0; $a >= $#ARGV ; $a++) {
$specific = (grep /$ARGV[$a]/, @nametimes) ;
print "$specific\n";
}
It is probably pretty easy; however, I have worked on it for days, and I am the only guy that uses Perl in this shop. I don't have anyone to ask, and the googling is not panning out. I apologize in advance for angering the Perl deities who are sure to yell at me for asking such an easy question.
Your construct for looping over @ARGV is a bit unwieldy - the more common way of doing that would be:
for my $name (@ARGV) {
#do something
}
But really, you don't even need to loop over it. You can just join them all directly into a single regular expression:
my $names = join("|", @ARGV);
my @matches = grep { /\b($names)\b/ } @nametimes;
I've used \b in the regex here - that indicates a word boundary, so the argument SellSide4 wouldn't match SellSide42. That may or may not be what you want...
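A slightly expanded sketch of the same idea; the quotemeta is my own addition to guard against regex metacharacters in the arguments, not something you strictly need:
# Build one alternation from all command-line names, then filter once.
my $names   = join '|', map { quotemeta } @ARGV;
my @matches = grep { /\b(?:$names)\b/ } @nametimes;
print "$_\n" for @matches;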
Use an array to store the results from the grep(), not a scalar. Push them, not assign. Otherwise the second iteration of the for loop will overwrite results. Something like:
my @specific;
for my $el ( @ARGV ) {
    push @specific, grep { /$el/ } @nametimes;
}
print join "\n", @specific;
The easiest thing to do is to store your INI file as a structure. Then you can go through your structure and pull out what you want. The simplest structure would be a hash of hashes, where your heading is the key to the outer hash and the inner hash is keyed by the parameter.
Here is the code that creates the basic structure:
use warnings;
use strict;
use autodie;
use feature qw(say);
use Data::Dumper;
use constant INI_FILE => "test.file.txt";
open my $ini_fh, "<", INI_FILE;
my %ini_file;
my $heading;
while ( my $line = <$ini_fh> ) {
    chomp $line;
    if ( $line =~ /\[(.*)\]/ ) {    # heading name
        $heading = $1;
    }
    elsif ( $line =~ /(.+?)\s*=\s*(.+)/ ) {
        my $parameter = $1;
        my $value     = $2;
        $ini_file{$heading}->{$parameter} = $value;
    }
    else {
        say "Invalid line $. - $line";
    }
}
After this, the structure will look like this:
$VAR1 = {
    'options42BuySide' => {
        'stop-time' => '19:00:00',
        'listento' => 'all',
        'logged-check-reply' => '192.168.2.5',
        'logged-check-vac-days' => 'sat,sun',
        'falldown' => 'logwrite after 10000',
        'start-time' => '06:01:00',
        'logged-check-address' => '192.168.3.4',
        'logged-check-times' => '06:01:00',
        'failtolog' => 'logwrite after 10000',
        'global-search-text' => 'Target Down. This message is stored;'
    },
    'stock42BuySide' => {
        'stop-time' => '18:00:00',
        'start-time' => '06:01:00',
        'logged-check-reply' => '192.168.2.54',
        'logged-check-address' => '192.168.2.13',
        'logged-check-vac-days' => 'sat,sun',
        'logged-check-times' => '06:01:00'
    }
};
Now, all you have to do is parse your structure and pull the information you want out of it:
for my $heading ( sort keys %ini_file ) {
    say "$heading " . $ini_file{$heading}->{"start-time"} . " " . $ini_file{$heading}->{"stop-time"};
}
You could easily modify this last loop to skip the headings you want, or to print out the exact parameters you want.
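For instance, a hypothetical way to print only the headings named on the command line, reusing the %ini_file structure built above:
my %wanted = map { $_ => 1 } @ARGV;   # heading names passed as arguments
for my $heading ( sort keys %ini_file ) {
    next unless $wanted{$heading};
    say "$heading $ini_file{$heading}{'start-time'} $ini_file{$heading}{'stop-time'}";
}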
I would also recommend using Getopt::Long to parse your command line parameters:
my_file -include SorosSellSide42 -include ETFBuySide42 -param start-time -param stop-time
Getopt::Long could store your parameters in arrays. For example, it would put all the -include parameters in an @includes array and all the -param parameters in a @parameters array:
for my $heading ( @includes ) {
    print "$heading ";
    for my $parameter ( @parameters ) {
        print $ini_file{$heading}->{$parameter} . " ";
    }
    print "\n";
}
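For completeness, a minimal sketch of how @includes and @parameters might be collected; the option names here are assumptions based on the example invocation above:
use Getopt::Long;

my (@includes, @parameters);
GetOptions(
    'include=s' => \@includes,     # repeatable: -include NAME -include NAME ...
    'param=s'   => \@parameters,   # repeatable: -param start-time -param stop-time
) or die "Invalid command line options\n";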
Of course, there needs to be lots of error checking (does the heading exist? What about the requested parameters?). But, this is the basic structure. Unless your file is extremely long, this is probably the easiest way to process it. If your file is extremely long, you could use the @includes and @parameters in the first loop as you read in the parameters and headings.
I am very new to perl and am struggling to get this script to work.
I have taken pieces of Perl and gotten them to work as individual sections, but upon trying to blend them together it fails. Even with the error messages that show up, I cannot find where my mistake is.
The script, when working and completed, will read an output file, go through it section by section, and ultimately generate a new output file with not much more than a heading, some additional text, and a count of the lines in that section.
My issue is that when it does the looping for each keyword in the array, it now fails with the error message 'Argument "" isn't numeric in array element at'. Perl directs me to a section in the script, but I cannot see how I am calling the element incorrectly. All the elements in the array are alpha, yet the error message is referring to a numeric value.
Can anyone see my mistake?
Thank you
Here is the script
#!/usr/bin/perl -w
use strict;
use warnings;
use diagnostics;
# this version reads each variable and loops through 18 times but only displays one per loop.
my $NODE = `uname -n`;
my $a = "/tmp/";
my $b = $NODE ;
my $c = "_deco.txt";
my $d = "_deco_mini.txt";
chomp $b;
my $STRING = "$a$b$c";
my $STRING_out = "$a$b$d";
my @keyword = ( "Report", "Last", "HP", "sulog", "sudo", "eTrust", "proftp", "process", "active clusters", "pdos", "syslog", "BNY", "syslogmon", "errpt", "ports", "crontab", "NFS", "scripts", "messages");
my $i = 0;
my $keyword="";
my $x=0;
my $y=0;
my $jw="";
my $EOS = "########################################################################";
my $qty_lines=0;
my $skip5=0;
my $skipcnt=0;
my $keeplines=0;
my @HPLOG="";
do {
print "Reading File: [$STRING]\n";
if (-e "$STRING" && open (IN, "$STRING")) {
# ++$x; # proving my loop worked
# print "$x interal loop counter\n"; # proving my loop worked
for ( ++$i) { # working
while ( <IN> ) {
chomp ;
#if ($_ =~ /$keyword/) {
#if ($_ =~ / $i /) {
#if ($_ =~ /$keyword[ $i ]/) {
if ($_ =~ /$keyword $i/) {
print " $i \n";
$skip5=1;
next;
# print "$_\n";# $ not initalized error when tring to use it
}
if ($skip5) {
$skipcnt++;
print "SKIP LINE: $_\n";
print "Header LINE: $_\n";
next if $skipcnt <= 5;
$skip5=0;
$keeplines=1;
}
if ($keeplines) {
# ++$qty_lines; # for final output
last if $_ =~ /$EOS/;
print "KEEP LINE: $_\n";
# print "$qty_lines\n"; # for final output
push @HPLOG, "$_\n";
# push @HPLOG, "$qty_lines\n"; # for final output
}
} ## end while ( <IN> )
} ## end for ( ++$i)
} ## end if (-e "$STRING" && open (IN, "$STRING"))
close (IN);
} while ( $i < 19 && ++$y < 18 );
Here is a sample section of the input file.
###############################################################################
Checking for active clusters.
#########
root 11730980 12189848 0 11:24:20 pts/2 0:00 egrep hagsd|harnad|HACMP|haemd
If there are any processes listed you need to remove the server from the cluster.
############################################################################
This is the output from Pdos log
Please review it for anything that looks like a users may be trying to run something.
#########
This server is not on Tamos
############################################################################
This is the output from syslog.conf.
Look for any entries on the right side column that are not the ususal logs or location.
#########
# @(#)34 1.11 src/bos/etc/syslog/syslog.conf, cmdnet, bos610 4/27/04 14:47:53
# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos610 src/bos/etc/syslog/syslog.conf 1.11
I truncated the rest of the file
"Can anyone see my mistake?"
I can see quite a lot of mistakes. But I also see some good stuff like use strict and use warnings.
My suggestion for you is to work on your coding style so that it gets easier for you and others to debug any problems.
Naming variables
my $NODE = `uname -n`;
my $a = "/tmp/";
my $b = $NODE ;
my $c = "_deco.txt";
my $d = "_deco_mini.txt";
chomp $b;
my $STRING = "$a$b$c";
my $STRING_out = "$a$b$d";
Why are some of those names all uppercase and others all lower case? If you are building up a filename, why do you call the variable that holds the filename $STRING?
my @keyword = ( "Report", "Last", "HP", "sulog", "sudo", ....
If you have a list of several keywords, wouldn't it be apt not to choose a singular for the variable name? How about @keywords?
Using temporary variables you don't need
my $NODE = `uname -n`;
my $a = "/tmp/";
my $b = $NODE ;
my $c = "_deco.txt";
chomp $b;
my $STRING = "$a$b$c";
Why do you need $a, $b and $c? The (forgive me) stupid names of those vars are a tell-tale sign that you don't need them. How about this instead?
my $node_name = `uname -n`;
chomp $node_name;
my $file_name = sprintf '/tmp/%s_deco.txt', $node_name;
Your biggest problem: you have no idea how to use arrays
You are making several drastic mistakes when it comes to arrays.
my @HPLOG="";
Do you want an array or another string? The @ says array, the "" says string. I guess you wanted a new, empty array, so my @hplog = () would have been much better. But since there is no need to tell perl that you want an empty array as it will give you an empty one anyway, my @hplog; will do the job just fine.
It took me a while to figure out this next one and I'm still not sure whether I'm guessing your intentions correctly:
my @keyword = ( "Report", "Last", "HP", "sulog", "sudo", "eTrust", "proftp", "process", "active clusters", "pdos", "syslog", "BNY", "syslogmon", "errpt", "ports", "crontab", "NFS", "scripts", "messages");
...
if ($_ =~ /$keyword $i/) {
What I think you are doing here is trying to match your current input line against element number $i in @keywords. If my assumption is correct, you really wanted to say this:
if ( /$keyword[ $i ]/ ) {
Iterating arrays
Perl is not C. It doesn't make you jump through hoops to get a loop.
Just look at all the code you wrote to loop through your keywords:
my $i = 0;
...
for ( ++$i) { # working
...
if ($_ =~ /$keyword $i/) {
...
} while ( $i < 19 && ++$y < 18 );
Apart from the facts that your working comment is just self-deception and that you hard-coded the number of elements in your array, you could have just used a for-each loop:
foreach my $keyword ( @keywords ) {
# more code here
}
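Putting those pieces together, a rough sketch of what the keyword loop might look like, reusing the $file_name and @keywords suggestions from above (this is only an outline of the intent, not a drop-in replacement):
foreach my $keyword (@keywords) {
    open my $in, '<', $file_name
        or die "Cannot open '$file_name': $!";
    while ( my $line = <$in> ) {
        chomp $line;
        next unless $line =~ /\Q$keyword\E/;   # \Q..\E matches the keyword literally
        # ...skip the header lines, collect or count the section's lines here...
    }
    close $in;
}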
I'm sure that when you try to work on the above list, the problem that made you ask here will just go away. Have fun.
What's going on:
I've ssh'd onto my localhost, run ls on the Desktop, and put those items into an array.
I hardcoded a short list of items and I am comparing them with a hash to see if anything is missing from the host (see if something from A is NOT in B, and let me know).
So after figuring that out, when I print out the "missing files" I get a bunch of duplicates (see below). I'm not sure if that has to do with how the files are being checked in the loop, but I figured the best thing to do would be to just sort out the data and eliminate the dupes.
When I do that and print out the fixed data, only one file prints; two are missing.
Any idea why?
#!/usr/bin/perl
my $hostname = $ARGV[0];
my @hostFiles = ("filecheck.pl", "hostscript.pl", "awesomeness.txt");
my @output = `ssh $hostname "cd Desktop; ls -a"`;
my %comparison;

for my $file (@hostFiles) {
    $comparison{$file} += 1;
}

for my $file (@output) {
    $comparison{$file} += 2
}

for my $file (sort keys %comparison) {
    @missing = "$file\n" if $comparison{$file} == 1;
    #print "Extra file: $file\n" if $comparison{$file} == 2;
    print @missing;
}

my @checkedMissingFiles;
foreach my $var ( @missing ) {
    if ( ! grep( /$var/, @checkedMissingFiles) ) {
        push( @checkedMissingFiles, $var );
    }
}

print "\n\nThe missing Files without dups:\n @checkedMissingFiles\n";
Password:
awesomeness.txt ##This is what is printing after comparing the two arrays
awesomeness.txt
filecheck.pl
filecheck.pl
filecheck.pl
hostscript.pl
hostscript.pl
The missing Files without dups: ##what prints after weeding out duplicates
hostscript.pl
The perl way of doing this would be:
#!/usr/bin/perl -w
use strict;
use Data::Dumper;
my %hostFiles = qw( filecheck.pl 1 hostscript.pl 1 awesomeness.txt 1);
# ssh + backticks + ls, not the greatest way to do this, but that's another Q
my @files = `ssh $ARGV[0] "ls -a ~/Desktop"`;
# get rid of the newlines
chomp @files;
# grep returns the matching elements of @files
my %existing = map { $_ => 1 } grep { exists($hostFiles{$_}) } @files;
print Dumper([grep { !exists($existing{$_})} keys %hostFiles]);
Data::Dumper is a utility module; I use it for debugging or demonstrative purposes.
If you want print the list you can do something like this:
{
    use English;
    local $OFS = "\n";
    local $ORS = "\n";
    print grep { !exists($existing{$_}) } keys %hostFiles;
}
$ORS is the output record separator (it's printed after any print) and $OFS is the output field separator which is printed between the print arguments. See perlvar. You can get away with not using "English", but the variable names will look uglier. The block and the local are so you don't have to save and restore the values of the special variables.
If you want to write to a file the result something like this would do:
{
    use English;
    local $OFS = "\n";
    local $ORS = "\n";
    open F, ">host_$ARGV[0].log";
    print F grep { !exists($existing{$_}) } keys %hostFiles;
    close F;
}
Of course, you can also do it the "classical" way: loop through the array and print each element:
open F, ">host_$ARGV[0].log";
for my $missing_file (grep { !exists($existing{$_}) } keys %hostFiles) {
    use English;
    local $ORS = "\n";
    print F "File is missing: $missing_file";
}
close F;
This allows you to do more things with the file name, for example, you can SCP it over to the host.
It seems to me that looping over the 'required' list makes more sense - looping over the list of existing files isn't necessary unless you're looking for files that exist but aren't needed.
#!/usr/bin/perl
use strict;
use warnings;
my @hostFiles = ("filecheck.pl", "hostscript.pl", "awesomeness.txt");
my @output = `ssh $ARGV[0] "cd Desktop; ls -a"`;
chomp @output;

my @missingFiles;
foreach (@hostFiles) {
    push( @missingFiles, $_ ) unless $_ ~~ @output;
}

print join("\n", "Missing files: ", @missingFiles);
@missing = "$file\n" assigns the array @missing to contain a single element, "$file\n". It does this every loop, leaving it with the last missing file.
What you want is push(@missing, "$file\n").
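So the comparison loop from the question could become something like this (a sketch based on the code above; since the keys of %comparison are unique, each missing file is collected once):
my @missing;
for my $file (sort keys %comparison) {
    push @missing, "$file\n" if $comparison{$file} == 1;
}
print @missing;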
I have a text file laid out like this:
1 a, b, c
2 c, b, c
2.5 a, c
I would like to reverse the keys (the number) and values (CSV) (they are separated by a tab character) to produce this:
a 1, 2.5
b 1, 2
c 1, 2, 2.5
(Notice how 2 isn't duplicated for c.)
I do not need this exact output. The numbers in the input are ordered, while the values are not. The output's keys must be ordered, as well as the values.
How can I do this? I have access to standard shell utilities (awk, sed, grep...) and GCC. I can probably grab a compiler/interpreter for other languages if needed.
If you have Python (if you're on Linux you probably already do), I'd use a short Python script to do this. Note that we use sets to filter out duplicate items.
Edited to be closer to the requester's requirements:
import csv
from decimal import *

getcontext().prec = 7

csv_reader = csv.reader(open('test.csv'), delimiter='\t')

maindict = {}
for row in csv_reader:
    value = row[0]
    for key in row[1:]:
        try:
            maindict[key].add(Decimal(value))
        except KeyError:
            maindict[key] = set()
            maindict[key].add(Decimal(value))

csv_writer = csv.writer(open('out.csv', 'w'), delimiter='\t')

sorted_keys = [x[1] for x in sorted([(x.lower(), x) for x in maindict.keys()])]
for key in sorted_keys:
    csv_writer.writerow([key] + sorted(maindict[key]))
I would try Perl if that's available to you. Loop through the input a row at a time. Split the line on the tab, then the right-hand part on the commas. Shove the values into an associative array with letters as the keys and the value being another associative array; the second associative array will be playing the part of a set so as to eliminate duplicates.
Once you read the input file, sort based on the keys of the associative array, loop through and spit out the results.
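A minimal sketch of that approach; the input file name and the exact output formatting are assumptions:
use strict;
use warnings;

my %index;    # letter => { number => 1 }  (the inner hash acts as a set)

open my $in, '<', 'input.txt' or die "Cannot open input.txt: $!";
while ( my $line = <$in> ) {
    chomp $line;
    my ($number, $values) = split /\t/, $line, 2;
    $index{$_}{$number} = 1 for split /\s*,\s*/, $values;
}
close $in;

for my $letter ( sort keys %index ) {
    print "$letter\t", join( ', ', sort { $a <=> $b } keys %{ $index{$letter} } ), "\n";
}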
Here's a small utility in PHP:
// load and parse the input file
$data = file("path/to/file/");
foreach ($data as $line) {
    list($num, $values) = explode("\t", $line);
    $newData["$num"] = explode(", ", trim($values));
}
unset($data);

// reverse the index/value association
foreach ($newData as $index => $values) {
    asort($values);
    foreach ($values as $value) {
        if (!isset($data[$value]))
            $data[$value] = array();
        if (!in_array($index, $data[$value]))
            array_push($data[$value], $index);
    }
}

// printout the result
foreach ($data as $index => $values) {
    echo "$index\t" . implode(", ", $values) . "\n";
}
not really optimized or good looking, but it works...
# use Modern::Perl;
use strict;
use warnings;
use feature qw'say';
our %data;
while (<>) {
    chomp;
    my ($number, $csv) = split /\t/;
    my @csv = split m"\s*,\s*", $csv;
    push @{ $data{$_} }, $number for @csv;
}

for my $number (sort keys %data) {
    my @unique = sort keys %{ { map { ($_, undef) } @{ $data{$number} } } };
    say $number, "\t", join ', ', @unique;
}
Here is an example using CPAN's Text::CSV module rather than manual parsing of CSV fields:
use strict;
use warnings;
use Text::CSV;
my %hash;
my $csv = Text::CSV->new({ allow_whitespace => 1 });

open my $file, "<", "file/to/read.txt";

while (<$file>) {
    my ($first, $rest) = split /\t/, $_, 2;
    my @values;
    if ($csv->parse($rest)) {
        @values = $csv->fields();
    } else {
        warn "Error: invalid CSV: $rest";
        next;
    }
    foreach (@values) {
        push @{ $hash{$_} }, $first;
    }
}

# this can be shortened, but I don't remember whether sort()
# defaults to <=> or cmp, so I was explicit
foreach (sort { $a cmp $b } keys %hash) {
    print "$_\t", join(",", sort { $a <=> $b } @{ $hash{$_} }), "\n";
}
Note that it will print to standard output. I recommend just redirecting standard output, and if you expand this program at all, make sure to use warn() to print any errors, rather than just print()ing them. Also, it won't check for duplicate entries, but I don't want to make my code look like Brad Gilbert's, which looks a bit wack even to a Perlite.
Here's an awk(1) and sort(1) answer:
Your data is basically a many-to-many data set so the first step is to normalise the data with one key and value per line. We'll also swap the keys and values to indicate the new primary field, but this isn't strictly necessary as the parts lower down do not depend on order. We use a tab or [spaces],[spaces] as the field separator so we split on the tab between the key and values, and between the values. This will leave spaces embedded in the values, but trim them from before and after:
awk -F '\t| *, *' '{ for (i=2; i<=NF; ++i) { print $i"\t"$1 } }'
Then we want to apply your sort order and eliminate duplicates. We use a bash feature to specify a tab char as the separator (-t $'\t'). If you are using Bourne/POSIX shell, you will need to use '[tab]', where [tab] is a literal tab:
sort -t $'\t' -u -k 1f,1 -k 2n
Then, put it back in the form you want:
awk -F '\t' '{
    if (key != $1) {
        if (key) printf "\n";
        key = $1;
        printf "%s\t%s", $1, $2
    } else {
        printf ", %s", $2
    }
}
END { printf "\n" }'
Pipe them all together and you should get your desired output. I tested with the GNU tools.