Perl reading file, printing unique value from a column - arrays

I am new to Perl, and I'd like to achieve the following with it.
I have a file which contains the following data:
/dev/hda1 /boot ext3 rw 0 0
/dev/hda1 /boot ext3 rw 0 0
I'd like to extract the filesystem-type field (the third field) from the file and print unique values only. My desired output for this example is that the program should print:
ext3
Also, if I have several different filesystems, they should all be printed on the same line.
I have tried many pieces of code but am stuck.
Thank you

If you prefer awk:
$ cat file
/dev/hda1 /boot ext3 rw 0 0
/dev/hda1 /boot ext3 rw 0 0
$ awk '!seen[$3]++{print $3}' file
ext3
Or, using cut:
$ cut -d" " -f3 file | sort | uniq # or use just sort -u if your version supports it
ext3
Here is a Perl solution:
$ perl -lane 'print $F[2] unless $seen{$F[2]}++' file
ext3
Here is an explanation of the Perl command-line options (from perl -h):
l: enable line ending processing, specifies line terminator
a: autosplit mode with -n or -p (splits $_ into @F)
n: assume "while (<>) { ... }" loop around program
e: one line of program (several -e's allowed, omit programfile)
For a better explanation of these options, please refer to: https://blogs.oracle.com/ksplice/entry/the_top_10_tricks_of
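For a rough picture of how those switches work together, here is a sketch of the -lane one-liner above written out as a script (the real expansion, visible via perl -MO=Deparse, differs in detail):
#!/usr/bin/perl
# Sketch of: perl -lane 'print $F[2] unless $seen{$F[2]}++' file
use strict;
use warnings;

my %seen;
while (defined(my $line = <>)) {          # -n: loop over input lines
    chomp $line;                          # -l: strip the input record separator
    my @F = split ' ', $line;             # -a: autosplit into @F
    print "$F[2]\n" unless $seen{$F[2]}++;   # the -e body; -l also appends "\n" on print
}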

#!/usr/bin/perl
my %hash;
while (<>) {
    if (/\s*[^\s]+\s+[^\s]+\s+([^\s]+)\s+.*/) {
        $hash{$1} = 1;
    }
}
print join("\n", keys(%hash)) . "\n";
Usage:
./<prog-name>.pl file1 file2 ...

perl -anE '$s{$F[2]}++ }{say for keys %s' file
or
perl -anE '$s{$_}++ or say for $F[2]' file
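The }{ in the first one-liner closes the implicit -n loop early and opens a bare block that runs once at the end; roughly, it is a sketch of this (with the feature pragma added so say works outside -E):
#!/usr/bin/perl
# Rough expansion of: perl -anE '$s{$F[2]}++ }{say for keys %s' file
use strict;
use warnings;
use feature 'say';

my %s;
while (<>) {
    my @F = split ' ', $_;   # -a: autosplit
    $s{ $F[2] }++;           # count each filesystem type
}
{
    say for keys %s;         # runs once after the loop, like an END block
}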

Related

Load field 1 and print at END{}: the awk equivalent in Perl

I have the following AWK script that counts occurrences of elements in field 1 and, when it finishes reading the entire file, prints each element and its number of repetitions.
awk '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' file
I'm very new to Perl and I don't know what the equivalent would be. What I have so far is below, but it has incorrect syntax. Thanks in advance.
perl -lane '$a{$F[1]}++ END{foreach $a {print $a} }' file
UPDATE
Hi, thanks to both of you for your answers. The real input file has 34 million lines, and the execution time differs by a factor of 3 or more between awk and Perl. Is awk faster than Perl?
awk '{a[$1]++}END{for(i in a){print i"-->"a[i]}}' file # --> approx. 2:45
perl -lane '$a{$F[0]}++;END{foreach my $k (keys %a){ print "$k --> $a{$k}" } }' file # --> approx. 7 min
perl -lanE'$a{$F[0]}++; END { say "$_ => $a{$_}" for keys %a }' file # --> approx. 9 min
Okay, Ger, one more time :-)
I upgraded my Perl to the latest version available to me and made a file like what you described (34.5 million lines each having a 16 digit integer in the 1st and only column):
schumack@linux2 52> wc -l listbig
34521909 listbig
schumack@linux2 53> head -3 listbig
1111111111111111
3333333333333333
4444444444444444
I then ran a specialized Perl line (works for this file but is not the same as the awk line). As before I timed the runs using /usr/bin/time:
schumack@linux2 54> /usr/bin/time -f '%E %P' /usr/local/bin/perl -lne 'chomp; $a{$_}++; END{foreach $i (keys %a){print "$i-->$a{$i}"}}' listbig
5555555555555555-->4547796
1111111111111111-->9715747
9999999999999999-->826872
3333333333333333-->9922465
1212121212121212-->826872
4444444444444444-->5374669
2222222222222222-->1653744
8888888888888888-->826872
7777777777777777-->826872
0:12.20 99%
schumack@linux2 55> /usr/bin/time -f '%E %P' awk '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' listbig
1111111111111111-->9715747
2222222222222222-->1653744
3333333333333333-->9922465
4444444444444444-->5374669
5555555555555555-->4547796
1212121212121212-->826872
7777777777777777-->826872
8888888888888888-->826872
9999999999999999-->826872
0:12.61 99%
Both perl and awk ran very fast on the 34.5 million line file and were within a half second of each other.
Curious as to what type of machine / OS / Perl version you are currently using. I tested on an ASUS laptop that is about 4 years old with an Intel i7, running Ubuntu 16.04 and Perl v5.26.1.
Anyway, thanks for the excuse to play around with Perl!
Have fun,
Ken
The input file obviously makes a difference, but Perl 5.22.1 was slightly faster than awk 4.1.3 below on my 33.5-million-line test file (12.23 vs 12.52 seconds).
schumack@daddyo2 10-02T18:25:17 54> wc -l listbig
33521910 listbig
schumack@daddyo2 10-02T18:25:58 55> /usr/bin/time -f '%E %P' awk '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' listbig
1-->9434310
2-->1605840
3-->9635040
4-->5218980
5-->4416060
7-->802920
8-->802920
9-->802920
12-->802920
0:12.52 99%
schumack@daddyo2 10-02T18:26:17 56> /usr/bin/time -f '%E %P' perl -lne '$_=~s/^(\S+) .*/$1/; $a{$_}++; END{foreach $i (keys %a){print "$i-->$a{$i}"}}' listbig
1-->9434310
5-->4416060
2-->1605840
3-->9635040
12-->802920
8-->802920
9-->802920
4-->5218980
7-->802920
0:12.23 99%
Equivalent to your awk line
perl -lanE'$a{$F[0]}++; END { say "$_ => $a{$_}" for keys %a }' file
With -a the line is autosplit into fields in @F, so you want $F[0] as the key in a hash %a, with the count handled by ++. In the END block the hash is iterated over its keys and the counts are printed.
However, the question of efficiency comes up. One way to improve this is not to fetch all the fields on the line (which is what -a does), since only the first one is needed. Between the two ways that come to mind,
perl -nE'$a{(/(\S+)/)[0]}++; END { ... }'
and
perl -nE'$a{(split " ", $_, 2)[0]}++; END { ... }'
the split is significantly faster, at 3.63s vs 4.41s for the regex, on an 8M-line file.
This is still behind the 1.99s for your awk line, so it seems that awk is faster for this task.
Summary of my timings for an 8-million line file (average of a few runs)
awk (question) 1.99s
perl (split) 3.63s
perl (regex) 4.41s
perl (like awk) 5.61s
These timings vary between runs by a few tens of milliseconds (a few 0.01s).
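If you want to reproduce such comparisons without timing whole runs, a minimal sketch using the core Benchmark module could look like this (the numbers will differ from the wall-clock timings above, which also include I/O):
#!/usr/bin/perl
# Compare the two field-extraction approaches on a sample line.
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $line = "1111111111111111 some other fields here";
my %a;
cmpthese(-2, {
    regex => sub { $a{ ($line =~ /(\S+)/)[0] }++ },
    split => sub { $a{ (split ' ', $line, 2)[0] }++ },
});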
This destructive method is the fastest I came up with:
perl -lne '$_=~s/\s.*//; $a{$_}++; END{foreach $i (keys %a){print "$i-->$a{$i}"}}' file
However, it still is not quite as fast as awk.
You can also go through a2p, the awk-to-Perl translator (see the sketch after the example below).
$ cat file
1
1
2
3
3
3
$ perl -lane '$a{$F[0]}++;END{foreach my $k (keys %a){ print "$k --> $a{$k}" } }' file
1 --> 2
2 --> 1
3 --> 3
$ awk '{a[$1]++} END{ for(i in a){print i" --> "a[i]} }' file
1 --> 2
2 --> 1
3 --> 3
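As for a2p itself, a possible invocation looks like this (a sketch; a2p was removed from core Perl in 5.22 and now lives in the App::a2p distribution, so it may not be installed):
$ echo '{a[$1]++} END{ for(i in a){print i"-->"a[i]} }' > count.awk
$ a2p count.awk > count.pl
$ perl count.pl file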

Using array inside awk in shell script

I am very new to Unix shell scripting and am trying to gain some knowledge of it. Please check my requirement and my approach.
I have an input file containing the data:
ABC = A:3 E:3 PS:6
PQR = B:5 S:5 AS:2 N:2
I am trying to parse the data and get the result as
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
The values can be added horizontally and vertically so I am trying to use an array. I am trying something like this:
myarr=(main.conf | awk -F"=" 'NR!=1 {print $1}'))
echo ${myarr[1]}
# Or loop through every element in the array
for i in "${myarr[@]}"
do
:
echo $i
done
or
awk -F"=" 'NR!=1 {
print $1"\n"
STR=$2
IFS=':' read -r -a array <<< "$STR"
for i in "${!array[@]}"
do
echo "$i=>${array[i]}"
done
}' main.conf
But when I add this code to a .sh file and try to run it, I get syntax errors:
$ awk -F"=" 'NR!=1 {
> print $1"\n"
> STR=$2
> FS= read -r -a array <<< "$STR"
> for i in "${!array[@]}"
> do
> echo "$i=>${array[i]}"
> done
>
> }' main.conf
awk: cmd. line:4: FS= read -r -a array <<< "$STR"
awk: cmd. line:4: ^ syntax error
awk: cmd. line:5: for i in "${!array[@]}"
awk: cmd. line:5: ^ syntax error
awk: cmd. line:8: done
awk: cmd. line:8: ^ syntax error
How can I achieve the expected output?
This is the awk code to do what you want:
$ cat tst.awk
BEGIN { FS="[ =:]+"; OFS="=" }
{
    print $1
    for (i=2; i<NF; i+=2) {
        print $i, $(i+1)
    }
    print ""
}
and this is the shell script (yes, all a shell script does to manipulate text is call awk):
$ awk -f tst.awk file
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
A UNIX shell is an environment from which to call UNIX tools (find, sort, sed, grep, awk, tr, cut, etc.). It has its own language for manipulating (e.g. creating/destroying) files and processes and for sequencing calls to tools, but it is NOT intended for manipulating text. The people who invented the shell also invented awk for the shell to call when text needs to be manipulated.
Read https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice and the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
First off, a command that does what you want:
$ sed 's/ = /\n/;y/: /=\n/' main.conf
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
This replaces, on each line, the first (and only) occurrence of " = " with a newline (the s command), then turns every : into = and every space into a newline (the y command). Notice that
this works only because there is a space at the end of the first line (otherwise it would be a bit more involved to get the empty line between the blocks) and
this works only with GNU sed because it substitutes newlines; see this fantastic answer for all the details and how to get it to work with BSD sed.
As for what you tried: there is almost too much wrong with it to fix piece by piece, from the wild mixing of awk and Bash to syntax errors all over the place. I recommend reading good tutorials for both, for example:
The BashGuide
Effective AWK Programming
A Bash solution
Here is a way to solve the same problem in Bash; I didn't use any arrays.
#!/bin/bash
# Read line by line into the 'line' variable. Setting 'IFS' to the empty string
# preserves leading and trailing whitespace; '-r' prevents interpretation of
# backslash escapes
while IFS= read -r line; do
    # Three parameter expansions:
    # Replace ' = ' by newline (escape backslash)
    line="${line/ = /\\n}"
    # Replace ':' by '='
    line="${line//:/=}"
    # Replace spaces by newlines (escape backslash)
    line="${line// /\\n}"
    # Print the modified input line; '%b' expands backslash escapes
    printf "%b" "$line"
done < "$1"
Output:
$ ./SO.sh main.conf
ABC
A=3
E=3
PS=6
PQR
B=5
S=5
AS=2
N=2
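If you do want to use Bash arrays, as the original attempt did, here is a minimal sketch assuming every line of main.conf has the form NAME = key:val key:val ...:
#!/bin/bash
# Array-based variant (a sketch): split each line into a name and key:val pairs
while read -r name _ rest; do
    [ -z "$name" ] && continue            # skip empty lines
    printf '%s\n' "$name"
    read -ra pairs <<< "$rest"            # split the pairs into an array
    for pair in "${pairs[@]}"; do
        printf '%s=%s\n' "${pair%%:*}" "${pair#*:}"
    done
done < main.conf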

Store grep output in an array

I need to search for a pattern in a directory and save the names of the files which contain it in an array.
Searching for pattern:
grep -HR "pattern" . | cut -d: -f1
This prints all filenames that contain "pattern".
If I try:
targets=$(grep -HR "pattern" . | cut -d: -f1)
length=${#targets[@]}
for ((i = 0; i != length; i++)); do
echo "target $i: '${targets[i]}'"
done
This prints only one element containing a string with all the filenames.
output: target 0: 'file0 file1 .. fileN'
But I need:
output: target 0: 'file0'
output: target 1: 'file1'
.....
output: target N: 'fileN'
How can I achieve the result without doing a boring split operation on targets?
You can use:
targets=($(grep -HRl "pattern" .))
Note use of (...) for array creation in BASH.
Also you can use grep -l to get only file names in grep's output (as shown in my command).
The answer above (written 7 years ago) assumed that the output filenames would not contain special characters such as whitespace or glob characters. Here is a safe way to read such filenames into an array (it also works with older bash versions):
while IFS= read -rd ''; do
targets+=("$REPLY")
done < <(grep --null -HRl "pattern" .)
# check content of array
declare -p targets
On Bash 4.4+ you can use readarray with -d instead of a loop:
readarray -d '' -t targets < <(grep --null -HRl "pattern" .)
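Either way, once the array is populated, the loop from the question prints one file per line as intended; for example:
# Example usage: iterate over the collected filenames safely
readarray -d '' -t targets < <(grep --null -HRl "pattern" .)
for i in "${!targets[@]}"; do
    printf "target %d: '%s'\n" "$i" "${targets[$i]}"
done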

What is the shell script instruction to divide a file with sorted lines into small files?

I have a large text file in the following format:
1 2327544589
1 3554547564
1 2323444333
2 3235434544
2 3534532222
2 4645644333
3 3424324322
3 5323243333
...
The output should be text files whose names carry a suffix taken from the number in the first column of the original file, each keeping the numbers from the second column in the corresponding output file, as follows:
file1.txt:
2327544589
3554547564
2323444333
file2.txt:
3235434544
3534532222
4645644333
file3.txt:
3424324322
5323243333
...
The script should run on Solaris, but I'm also having trouble with awk and with options of other commands, such as -c with cut; they are very limited, so I am looking for commands commonly available on Solaris. I am not allowed to change or install anything on the system. Using a loop is not very efficient because the script takes too long with large files. So, aside from awk and loops, any suggestions?
Something like this perhaps:
$ awk 'NF>1{print $2 > "file"$1".txt"}' input
$ cat file1.txt
2327544589
3554547564
2323444333
or if you have bash available, try this:
#!/bin/bash
while read -r a b
do
    [ -z "$a" ] && continue
    echo "$b" >> "file$a.txt"
done < input
output:
$ paste file{1..3}.txt
2327544589 3235434544 3424324322
3554547564 3534532222 5323243333
2323444333 4645644333
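One caveat if column 1 has many distinct values: old awks (including Solaris /usr/bin/awk) can limit the number of simultaneously open output files, so a variant that closes each file after writing may be needed (a sketch; note that >> also appends to files left over from a previous run):
awk 'NF>1{ out = "file" $1 ".txt"; print $2 >> out; close(out) }' input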

Assigning a command's output to a shell variable and getting the variable's size

I have a file consisting of digits. Usually, each line contains one single number. I would like to count the number of lines in the file that begin with the digit '0'. If there are any, I would like to do some post-processing.
Although I'm able to retrieve the corresponding line numbers correctly, the total number of retrieved lines is not correct. Below is the code that I'm using.
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
# linesToRemove=$(grep -n "^0" ${inputFile} | cut -d":" -f1);
linesNr=${#linesToRemove} # <- here, the error
# linesNr=${#linesToRemove[@]} # <- here, the error
if [ "${linesNr}" -gt "0" ]; then
# do something here, e.g. remove corresponding lines.
awk -v n=$linesToRemove 'NR == n {next} {print}' ${anotherFile} > ${outputFile}
fi
Also, as for the awk-based command, how could I use a shell variable? I tried the command below, but it doesn't work correctly, since 'myIndex' is interpreted as text and not as a variable.
linesToRemove=$(awk -v myIndex="$myIndex" '/^myIndex/ { print NR;}' ${inputFile});
Given the line numbers of the lines starting with 0 in ${inputFile}, I would like to remove the corresponding lines from ${anotherFile}. An example of both ${inputFile} and ${anotherFile} is given below:
// ${inputFile}
0
1
3
0
// ${anotherFile}
2.617300e+01 5.886700e+01 -1.894697e-01 1.251225e+02
5.707397e+01 2.214040e+02 8.607959e-02 1.229114e+02
1.725900e+01 1.734360e+02 -1.298053e-01 1.250318e+02
2.177940e+01 1.249531e+02 1.538853e-01 1.527150e+02
// ${outputFile}
5.707397e+01 2.214040e+02 8.607959e-02 1.229114e+02
1.725900e+01 1.734360e+02 -1.298053e-01 1.250318e+02
In the example above, I need to delete lines 0 and 3 (counting from 0, i.e. the first and fourth lines) from ${anotherFile}, given that those positions hold the lines starting with 0 in ${inputFile}.
If you want to count the number of lines in the file that begin with 0, then this line is wrong:
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
The above prints the line number whenever a line starts with 0, so your linesToRemove variable will contain all of those line numbers, not the total number of lines. Use an END{} block to capture the total, e.g.
linesToRemove=$(awk '/^0/ {c++}END{print c}' ${inputFile});
As for your second question on using a variable inside awk, use the regex match operator ~, and set your myIndex variable to include the ^ anchor:
linesToRemove=$(awk -v myIndex="^$myIndex" '$0 ~ myIndex{ print NR;}' ${inputFile});
Finally, if you just want to remove the lines that start with 0, simply remove them:
awk '/^0/{next}{print $0>FILENAME}' file
If you want to remove lines from another file using what is captured in the input file, here's one way:
paste -d"|" inputfile anotherfile | awk '!/^0/{gsub(/^.*\|/,"");print}'
Or just one awk command
awk 'FNR==NR && /^0/{a[FNR]} NR>FNR && (!(FNR in a))' inputfile anotherfile
A crude explanation: FNR==NR && /^0/ processes the first file and, whenever a line starts with 0, records its line number in array a. NR>FNR processes the next file and prints a line only if its line number is not in the array. See the gawk documentation for what FNR, NR, etc. mean.
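Written out with comments, that one-liner is roughly the following (a sketch with the same behaviour; save it as remove.awk and run awk -f remove.awk inputfile anotherfile):
FNR == NR && /^0/ {     # first file only: the line starts with 0
    a[FNR]              # remember its line number (the value does not matter)
    next
}
NR > FNR {              # second file
    if (!(FNR in a))    # keep only lines whose number was not recorded
        print
}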
I think you have to do the following to assign an array:
linesToRemove=( $(awk '/^0/ { print NR; }' ${inputFile}) )
And to get the number of elements do (as you have in a commented line):
linesNr=${#linesToRemove[@]}
To remove the lines from from the file you could do something like:
sedCmd=""
for lineNr in "${linesToRemove[@]}"; do
sedCmd="$sedCmd;${lineNr}d"
done
sed "$sedCmd" ${anotherFile} > ${outputFile}
In general, if you keep the assignment as a plain string:
linesToRemove=$(awk '/^0/ { print NR; }' ${inputFile});
then instead of this:
linesNr=${#linesToRemove}
use this:
linesNr=$(echo $linesToRemove | awk '{print NF}')
Proof of concept:
cat temp.sh
#!/usr/bin/ksh
lines=$(awk '/^d/{print NR}' script.sh)
nooflines=$(echo $lines|awk '{print NF}')
echo $nooflines
torinoco!DBL:/oo_dgfqausr/test/dfqwrk12/vijay> temp.sh
8
torinoco!DBL:/oo_dgfqausr/test/dfqwrk12/vijay>
It greatly depends on the post-processing you are doing, but do you really need the actual count? Why not do something like this:
if grep ^0 $inputfile > /dev/null; then
# There is at least one line with a leading 0
:
fi
grep -v ^0 $inputfile | process-lines-without-leading-zero
grep ^0 $inputfile | process-lines-with-leading-zero
Or, even just:
if grep ^0 $inputfile | process-lines-with-leading-zero; then
# some post processing
:
fi
--EDIT--
Based on what you've said in your comment, I would recommend a different approach. If I understand you correctly, you want to read file a, looking for lines of the form ^0[0-9]*,
and then remove those line numbers from file b. Doing it one line at a time is pretty slow if the files get big. Just do:
cmd=$( grep '^0[0-9]*$' a | sed 's/$/d;/g' )
sed "$cmd" b
The assignment to cmd builds a sed command that deletes the lines. Invoking sed on b will omit those lines. You'll need to redirect the sed output appropriately (perhaps to a temp file and then back to b), or just use 'sed -i' if you're using GNU sed.
Given the large number of edits to this question, it seems easiest to start a new answer. Your problem can be solved with a simple one-liner:
$ sed "$( grep -n ^0 $inputFile | sed 's/:.*/d;/g' )" $anotherFile > $outputFile
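Applied to the example files from the question (lines 1 and 4 of $inputFile start with 0), the command substitution expands to a small sed script with one delete command per matching line number:
$ grep -n ^0 $inputFile | sed 's/:.*/d;/g'
1d;
4d;
$ sed "$( grep -n ^0 $inputFile | sed 's/:.*/d;/g' )" $anotherFile
5.707397e+01 2.214040e+02 8.607959e-02 1.229114e+02
1.725900e+01 1.734360e+02 -1.298053e-01 1.250318e+02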
