If Statement With 2 Arrays To Perform Relative Converging Task - arrays

The data is fictional to keep it simple.
Here's the problem.
Content of the processed data:
cat rawdata
10 0-9{3}
4 0-9{3}
7 0-9{3}
noc=$(cat ipConn.txt | awk '{print $1}')
rct=$(cat ipConn.txt | awk '{print $2}')
Intended solution (pseudocode):
for i in ${noc[]}
if $i -ge 50 then
command -options ${rct[]}
done
Is the code comprehensible? The item in ${noc[]} must match the corresponding item in ${rct[]}, so that only items on the same line are affected.

Try a while read loop:
echo '10 0-9{3}
4 0-9{3}
7 0-9{3}' |
while IFS=' ' read -r num item; do
    if (( num >= 50 )); then
        some_action with "$item"
    fi
done
Note that such a loop is typically very slow in bash. A faster solution is to first filter the rows whose first column is greater than or equal to 50, then remove the first column, and then run some_action using xargs (you can even pass -P0 to xargs to run in parallel):
echo '10 0-9{3}
4 0-9{3}
7 0-9{3}' |
awk '$1 >= 50' |
cut -d' ' -f2- |
xargs -n1 some_action with
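To illustrate the parallel variant mentioned above, the same pipeline can hand each remaining row to its own process. A minimal sketch, assuming some_action is safe to run concurrently (-P0 is a GNU xargs extension that runs as many processes as possible at once):
echo '10 0-9{3}
4 0-9{3}
7 0-9{3}' |
awk '$1 >= 50' |
cut -d' ' -f2- |
xargs -n1 -P0 some_action with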

Skipping one student id inside a loop with continue - bash

Inside the loop, if the id ends in 09 (e.g. f132a09), it should skip that student but process all other students.
I know I need to use continue but just don't know where to start. Any help would be appreciated.
I'm not looking for an answer, just a clearer understanding.
Here's my code so far:
Here's my code so far
prefix=$1
for id in $(grep "^$prefix" /etc/passwd | cut -d: -f1 | sort)
do
    echo -e "$(grep "^${id}:" /etc/passwd | cut -d: -f5 | sed "s/^\(.*\), \(.*\)$/\2 \1 /g") \c"
    echo -e " has the cisweb id ${id}"
    if who | grep "^${id} " > /dev/null
    then
        echo -e "$(grep -w "^$prefix" /etc/passwd | cut -d: -f5 | uniq | sed 's/^\(.*\), \(.*\)$/\2 \1 /g')is currently logged on"
    fi
done
I've tried several modifications but no luck.
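As a starting point rather than a full answer: continue belongs at the top of the loop body, before any per-student work. A minimal sketch, assuming the ids to skip are exactly those ending in the two characters 09:
prefix=$1
for id in $(grep "^$prefix" /etc/passwd | cut -d: -f1 | sort)
do
    # Skip any student whose id ends in 09 (e.g. f132a09)
    case $id in
        *09) continue ;;
    esac
    echo "processing $id"    # the real per-student work goes here
done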

Concatenate outputs of the commands

Can someone please help me to concatenate the outputs of the two commands?
finger | awk '{print $2,$3}' | uniq | sed '1d'
system_profiler SPHardwareDataType | awk '/Serial/{print $NF}'
The output should be firstnamelastname.Serialnumber.local
You can assign the result of each command to a variable and then concatenate the variables into one result. Printing $2 $3 without a comma joins the first and last name with no space between them:
first=$(finger | awk '{print $2 $3}' | uniq | sed '1d')
second=$(system_profiler SPHardwareDataType | awk '/Serial/{print $NF}')
result="$first.$second.local"
echo "$result"
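If you'd rather skip the intermediate variables, the same concatenation works inline with two command substitutions:
echo "$(finger | awk '{print $2 $3}' | uniq | sed '1d').$(system_profiler SPHardwareDataType | awk '/Serial/{print $NF}').local"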

Counting strings from array in bash

I am writing the output of awk to an array in bash like so:
ARR=( $(awk '{print $2}' file.txt) )
Imagine the content of file.txt is:
A B
A B
A C
A D
A C
A B
What I want is number of repetition of each string in second column like:
B: 3
C: 2
D: 1
Any other solution rather than arrays and awk is welcome.
Using awk you can do:
awk '{c[$2]++} END{for (i in c) print i ":", c[i]}' file
B: 3
C: 2
D: 1
Another solution I found:
awk '{print $2}' file.txt | sort | uniq -c | sort -nr | while read -r count name
do
    echo "${name}: ${count}"
done
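Since the question mentions bash arrays: here is a pure-bash sketch using an associative array (requires bash 4+; counts is a name chosen for illustration). Note that the output order is unspecified:
declare -A counts
while read -r _ second; do
    # Increment the counter keyed by the second column
    (( counts[$second]++ ))
done < file.txt
for key in "${!counts[@]}"; do
    printf '%s: %d\n' "$key" "${counts[$key]}"
done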

bash scripting, loop not looping

I wrote a while loop to search inside files and append the output to a text file, but it seems to read only the first line of that text file. How do I fix it?
while read line
do
    x=`echo $line`
    y=`grep $x: /etc/group | cut -d ":" -f 3`
    grep $y /etc/passwd | cut -d ":" -f 1 >> users
    grep $y /etc/group | cut -d ":" -f 4 | tr "," "\n" >> users
done < filename
Perhaps you need to wrap $x and $y in quotes, as otherwise grep may interpret anything after the first space as the name of the file to be searched:
#!/bin/bash
while read line
do
    x=`echo $line`
    y=`grep "$x:" /etc/group | cut -d ":" -f 3`
    grep "$y" /etc/passwd | cut -d ":" -f 1 >> users
    grep "$y" /etc/group | cut -d ":" -f 4 | tr "," "\n" >> users
done < filename
This might be a bit safer, since the grep calls above may still match the wrong fields (they do not anchor the pattern to a specific field):
while read -r GROUP
do
    # Numeric group id: field 3 of /etc/group
    GROUP_ID=`grep "^$GROUP:" /etc/group | cut -d ":" -f 3`
    # Users whose primary group (field 4 of /etc/passwd) is this id
    USER_ENT=`grep -e '\(.*:\)\{3\}'"$GROUP_ID"':' /etc/passwd`
    [ $? -eq 0 ] && cut -d ":" -f 1 <<< "$USER_ENT"
    # Users listed as secondary members: field 4 of /etc/group
    GROUP_ENT=`grep -e '\(.*:\)\{2\}'"$GROUP_ID"':' /etc/group`
    [ $? -eq 0 ] && cut -d ":" -f 4 <<< "$GROUP_ENT" | tr "," "\n" | grep -v '^$'
done < "$FILE_NAME" | sort | uniq > users
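On systems that provide getent, the same two lookups can be expressed without matching field positions by hand in grep. A sketch under that assumption, using awk to compare the primary-group field exactly:
while read -r group; do
    # getent group prints name:x:gid:member,member,...
    gid=$(getent group "$group" | cut -d: -f3)
    [ -n "$gid" ] || continue
    # Primary members: passwd entries whose gid field equals this gid
    awk -F: -v gid="$gid" '$4 == gid {print $1}' /etc/passwd
    # Secondary members: the comma-separated member list
    getent group "$group" | cut -d: -f4 | tr ',' '\n' | grep -v '^$'
done < filename | sort -u > users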

Find duplicate lines in a file and count how many time each line was duplicated?

Suppose I have a file similar to the following:
123
123
234
234
123
345
I would like to find how many times '123' was duplicated, how many times '234' was duplicated, etc.
So ideally, the output would be like:
123 3
234 2
345 1
Assuming there is one number per line:
sort <file> | uniq -c
You can use the more verbose --count flag too with the GNU version, e.g., on Linux:
sort <file> | uniq --count
This will print duplicate lines only, with counts:
sort FILE | uniq -cd
or, with GNU long options (on Linux):
sort FILE | uniq --count --repeated
On BSD and OSX you have to use grep to filter out unique lines:
sort FILE | uniq -c | grep -v '^ *1 '
For the given example, the result would be:
3 123
2 234
If you want to print counts for all lines including those that appear only once:
sort FILE | uniq -c
or, with GNU long options (on Linux):
sort FILE | uniq --count
For the given input, the output is:
3 123
2 234
1 345
In order to sort the output with the most frequent lines on top, you can do the following (to get all results):
sort FILE | uniq -c | sort -nr
or, to get only duplicate lines, most frequent first:
sort FILE | uniq -cd | sort -nr
On OSX and BSD the final one becomes:
sort FILE | uniq -c | grep -v '^ *1 ' | sort -nr
To find and count duplicate lines in multiple files, you can try the following command:
sort <files> | uniq -c | sort -nr
or:
cat <files> | sort | uniq -c | sort -nr
Via awk:
awk '{dups[$1]++} END{for (num in dups) {print num,dups[num]}}' data
In the awk command, $1 holds the contents of the first column and the square brackets are array access: for each line of the data file, the element of the dups array keyed by the first column is incremented.
At the end, we loop over the dups array with num as the key and print each saved number followed by its count, dups[num].
Note that your input file has trailing spaces at the end of some lines; if you clean those up, you can use $0 in place of $1 in the command above.
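The trailing whitespace can also be stripped inside awk itself, so the whole line can then serve as the key. A small sketch of that variant:
awk '{sub(/[ \t]+$/, ""); dups[$0]++} END{for (num in dups) print num, dups[num]}' data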
On Windows, using Windows PowerShell, I used the command below to achieve this:
Get-Content .\file.txt | Group-Object | Select Name, Count
We can also use the Where-Object cmdlet to filter the result:
Get-Content .\file.txt | Group-Object | Where-Object { $_.Count -gt 1 } | Select Name, Count
To find duplicate counts, use this command:
sort filename | uniq -c | awk '{print $2, $1}'
Assuming you've got access to a standard Unix shell and/or a Cygwin environment:
tr -s ' ' '\n' < yourfile | sort | uniq -d -c
Basically: convert all space characters (the first argument to tr) to line breaks, then sort the translated output and feed that to uniq to count the duplicate lines.
