Addition of array elements, putting the result in a new array, in bash

I haven't been able to find any convenient code for this:
I want to sum three arrays element-wise into a new one and add this new array's values as a new column of a CSV. My code so far:
sar -f -r > test.txt
sed -i 's/AM/ /g;s/PM/ /g' test.txt
IFS=$'\n'
arr1=($(cat test.txt | awk '{print $2}'| tail -n +3 | egrep -v 'kb'))
unset IFS
echo -e "${arr1[@]/%/$'\n'}"
sar -f -r > test1.txt
sed -i 's/AM/ /g;s/PM/ /g' test1.txt
IFS=$'\n'
arr2=($(cat test1.txt | awk '{print $5}'| tail -n +3 | egrep -v 'kb'))
unset IFS
echo -e "${arr2[@]/%/$'\n'}"
sar -f -r > test2.txt
sed -i 's/AM/ /g;s/PM/ /g' test2.txt
IFS=$'\n'
arr3=($(cat test2.txt | awk '{print $6}'| tail -n +3 | egrep -v 'kb'))
unset IFS
echo -e "${arr3[@]/%/$'\n'}"
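The missing step, summing the three arrays element-wise and attaching the result to the CSV, could be sketched like this; report.csv and the sample values are hypothetical stand-ins for the real data, and the three arrays are assumed to be equal-length and numeric:

```shell
# sample CSV standing in for the real report
printf 'row1\nrow2\nrow3\n' > report.csv
# stand-ins for arr1/arr2/arr3 collected above
arr1=(1 2 3); arr2=(10 20 30); arr3=(100 200 300)
sum=()
for i in "${!arr1[@]}"; do
    sum[i]=$(( arr1[i] + arr2[i] + arr3[i] ))
done
# write the sums out and attach them as a new comma-separated column
printf '%s\n' "${sum[@]}" > sums.txt
paste -d, report.csv sums.txt
```

To overwrite the original CSV, redirect the paste output to a temporary file and move it into place.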


Inside the loop, if the id is 09 (e.g. f132a09) it should skip that student but process all other students

I know I need to use continue but just don't know where to start. Any help would be appreciated.
Not looking for an answer, just a clearer understanding.
Here's my code so far
prefix=$1
for id in $(grep "^$prefix" /etc/passwd | cut -d: -f1 | sort)
do
echo -e "$(grep "^${id}:" /etc/passwd | cut -d: -f5 | sed "s/^\(.*\), \(.*\)$/\2 \1 /g") \c"
echo -e " has the cisweb id ${id}"
if who | grep "^${id} " > /dev/null
then
echo -e "$(grep -w "^$prefix" /etc/passwd | cut -d: -f5 | uniq | sed 's/^\(.*\), \(.*\)$/\2 \1 /g' )is currently logged on"
fi
done
I've tried several mods but no luck.
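Not a full answer, just a sketch of where continue would go; the ids here are hypothetical stand-ins for the passwd lookup in the loop above:

```shell
for id in f132a01 f132a09 f132a22; do
    # continue jumps straight to the next iteration, skipping ids ending in 09
    case "$id" in
        *09) continue ;;
    esac
    echo "processing $id"
done
```

The test goes at the top of the loop body, before any per-student work, so a skipped id never reaches the echo/grep steps.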

Count the occurrence of a pattern from a column of one file in other file

I created a file with one column listing the patterns (2,196 in total) that I want to find in another text file which has approximately 400 million lines.
For example:
file1
abc1
abc2
abc3
abc4
abc5
file2
abc1
abc1
abc1
abc1
abc1
abc2
abc2
abc2
abc2
The desired output:
file3
abc1 5
abc2 4
I can do it one pattern at a time with awk or grep:
awk '/abc1/{++c}END{print c}' file1 | wc -l > file3
or
grep 'abc1' file1 | wc -l > file3
However, when I try:
cat file1 | xargs -L 1 grep file2 | wc -l > file3
I get an error message:
grep: abc1: No such file or directory
grep: abc2: No such file or directory
etc
I tried:
cat file1 | xargs -L 1 grep '' file2 | wc -l > file3
Also does not work! So what am I doing wrong?
Thank you!
Your cat file1 | xargs -L 1 grep file2… is trying to grep the pattern file2 from the non-existent file abcX. You could start with something like
<file1 xargs -I{} grep "{}" file2
and extend this to
$ <file1 xargs -I{} sh -c 'printf "%s\t%s\n" "{}" $(grep -c "{}" file2)'
abc1 5
abc2 4
abc3 0
abc4 0
abc5 0
but that's not very efficient for a large pattern file.
Using grep, sort and uniq:
$ grep -F -x -f file1 file2 | sort | uniq -c > file3
Output file3:
5 abc1
4 abc2
If you need to reverse the number of matches and the pattern:
grep -F -x -f file1 file2 | sort | uniq -c | awk '{ print $2"\t"$1 }' > file3
Output file3:
abc1 5
abc2 4
Using awk:
awk '
NR==FNR{ a[$0] }
NR!=FNR && $0 in a{ a[$0]++ }
END{ for (i in a){ if (a[i])print i"\t"a[i] }}
' file1 file2 > file3
Output file3:
abc1 5
abc2 4
The simplest solution would be as follows, IMHO.
awk 'FNR==NR{a[$0]++;next} ($1 in a){print $1,a[$1]}' Input_file2 Input_file1
Explanation of the above code:
awk ' ##Starting awk program from here.
FNR==NR{ ##Condition FNR==NR is TRUE while the first file, Input_file2, is being read.
a[$0]++ ##Creating an array named a, indexed by $0, and incrementing its value by 1 for each line seen.
next ##next skips all further statements for this line.
}
($1 in a){ ##If $1 is present as an index of array a, then do the following.
print $1,a[$1] ##Printing the first field, then the value of array a at index $1.
}
' Input_file2 Input_file1 ##Mentioning the Input_file names here.
Output will be as follows.
abc1 5
abc2 4

If Statement With 2 Arrays To Perform a Relative Converging Task

The data is fictional to keep it simple.
Here's the problem.
Content of the processed data:
cat rawdata
10 0-9{3}
4 0-9{3}
7 0-9{3}
noc=$(cat ipConn.txt | awk '{print $1}')
rct=$(cat ipConn.txt | awk '{print $2}')
Intended Solution:
for i in ${noc[]}
if $i -ge 50 then
coomand -options ${rct[]}
done
Is the code comprehensible? The item in ${noc[]} must match the item in ${rct[]},
so that only items on the same line are affected.
Try a while read loop:
echo '10 0-9{3}
4 0-9{3}
7 0-9{3}' |
while IFS=' ' read -r num item; do
if (( num >= 50 )); then
some_action with "$item"
fi
done
Note that such a loop is typically very slow in bash. A faster solution would be to first filter the rows whose first column is greater than or equal to 50, then remove the first column, and then run some_action using xargs (or even pass -P0 to xargs to run in parallel):
echo '10 0-9{3}
4 0-9{3}
7 0-9{3}' |
awk '$1 >= 50' |
cut -d' ' -f2- |
xargs -n1 some_action with
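The parallel variant mentioned above would look like the following; echo stands in for the placeholder some_action so the sketch is runnable, and -P (here capped at 2 concurrent processes) assumes GNU xargs:

```shell
# rows with first column >= 50, first column stripped, rest run in parallel
printf '10 alpha\n60 beta\n70 gamma\n' |
awk '$1 >= 50' |
cut -d' ' -f2- |
xargs -P2 -n1 echo with
```

With -P the output order is not guaranteed, so pipe through sort if the results need to be deterministic.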

bash scripting, loop not looping

I wrote a while loop to search inside files and append the output to a text file, but it seems like it's reading only the first line of that text file. How do I fix it?
while read line
do
x=`echo $line`
y=`grep $x: /etc/group | cut -d ":" -f 3`
grep $y /etc/passwd | cut -d ":" -f 1 >> users
grep $y /etc/group | cut -d ":" -f 4 | tr "," "\n" >> users
done < filename
Perhaps you need to wrap $x and $y in quotes, as otherwise grep may interpret anything after the first space as the name of the file to be searched:
#!/bin/bash
while read line
do
x=`echo $line`
y=`grep "$x:" /etc/group | cut -d ":" -f 3`
grep "$y" /etc/passwd | cut -d ":" -f 1 >> users
grep "$y" /etc/group | cut -d ":" -f 4 | tr "," "\n" >> users
done < filename
This might be a bit safer as some of the grep statements may pick up the wrong fields (i.e. it does not check for the correct field):
while read GROUP
do
GROUP_ID=`grep ^$GROUP: /etc/group | cut -d ":" -f 3`
USER_ENT=`grep -e '\(.*:\)\{3\}'$GROUP_ID':' /etc/passwd`
[ $? -eq 0 ] && cut -d ":" -f 1 <<<$USER_ENT
GROUP_ENT=`grep -e '\(.*:\)\{2\}'$GROUP_ID':' /etc/group`
[ $? -eq 0 ] && cut -d ":" -f 4 <<<$GROUP_ENT | tr "," "\n" | grep -v ^$
done < $FILE_NAME | sort | uniq >users
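For comparison, a field-exact sketch using awk instead of grep, so that the group name and GID are only ever matched against whole fields (standard /etc/group and /etc/passwd layouts assumed; the group name here is hypothetical):

```shell
group=users  # hypothetical group name read from the input file
# exact match on field 1 of /etc/group to get the GID
gid=$(awk -F: -v g="$group" '$1 == g { print $3 }' /etc/group)
# primary members: passwd entries whose GID field (4) equals that GID
awk -F: -v gid="$gid" '$4 == gid { print $1 }' /etc/passwd
# supplementary members: field 4 of the matching group line, comma-separated
awk -F: -v g="$group" '$1 == g && $4 != "" { print $4 }' /etc/group | tr ',' '\n'
```

Because awk compares whole fields, a group named "adm" can never accidentally match "badmin", which is the failure mode the regex-anchored greps above are guarding against.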

Define array for walking directories with du

I wrote a basic script that runs a recursive du against a directory or filesystem, selects the largest directory, repeats, and then outputs the results neatly. Is there a way I can combine an array and some if/then statements to make this more elegant, continue recursing until no more directories match, and then print the output from an array?
#!/bin/bash
dir1=$1
du1=$(du -x --max-depth=1 $dir1 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
dir2=$(echo "$du1"|head -1|awk '{print $2}')
du2=$(du -x --max-depth=1 $dir2 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
dir3=$(echo "$du2"|head -1|awk '{print $2}')
du3=$(du -x --max-depth=1 $dir3 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
dir4=$(echo "$du3"|head -1|awk '{print $2}')
du4=$(du -x --max-depth=1 $dir4 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
dir5=$(echo "$du4"|head -1|awk '{print $2}')
du5=$(du -x --max-depth=1 $dir5 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
dir6=$(echo "$du5"|head -1|awk '{print $2}')
du6=$(du -x --max-depth=1 $dir6 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
echo -e "##LEVEL1##"
paste -d ' ' <(echo "$du1") <(echo "$(file $(echo "$du1" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
echo -e "##LEVEL2##"
paste -d ' ' <(echo "$du2") <(echo "$(file $(echo "$du2" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
echo -e "##LEVEL3##"
paste -d ' ' <(echo "$du3") <(echo "$(file $(echo "$du3" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
echo -e "##LEVEL4##"
paste -d ' ' <(echo "$du4") <(echo "$(file $(echo "$du4" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
echo -e "##LEVEL5##"
paste -d ' ' <(echo "$du5") <(echo "$(file $(echo "$du5" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
echo -e "##LEVEL6##"
paste -d ' ' <(echo "$du6") <(echo "$(file $(echo "$du6" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
Here is an example output:
#./rdu.sh / 2>/dev/null
##LEVEL1##
12G /opt [directory]
1.9G /usr [directory]
452M /var [directory]
352M /root [directory]
179M /home [directory]
116M /lib [directory]
46M /tmp [sticky directory]
28M /sbin [directory]
21M /etc [directory]
##LEVEL2##
8.5G /opt/zenoss [directory]
2.9G /opt/zends [directory]
##LEVEL3##
6.6G /opt/zenoss/perf [directory]
510M /opt/zenoss/ZenPacks [directory]
486M /opt/zenoss/var [directory]
461M /opt/zenoss/lib [directory]
250M /opt/zenoss/log [directory]
85M /opt/zenoss/Products [directory]
49M /opt/zenoss/packs [directory]
31M /opt/zenoss/share [directory]
26M /opt/zenoss/webapps [directory]
##LEVEL4##
6.5G /opt/zenoss/perf/Devices [directory]
59M /opt/zenoss/perf/Daemons [directory]
##LEVEL5##
289M /opt/zenoss/perf/Devices/10.0.4.218 [directory]
288M /opt/zenoss/perf/Devices/10.215.68.9 [directory]
287M /opt/zenoss/perf/Devices/10.0.4.18 [directory]
161M /opt/zenoss/perf/Devices/<removed> [directory]
145M /opt/zenoss/perf/Devices/10.219.68.12 [directory]
143M /opt/zenoss/perf/Devices/VMs-- [directory]
143M /opt/zenoss/perf/Devices/10.0.4.219 [directory]
143M /opt/zenoss/perf/Devices/10.0.4.19 [directory]
136M /opt/zenoss/perf/Devices/10.215.68.8 [directory]
##LEVEL6##
279M /opt/zenoss/perf/Devices/10.0.4.218/ltmvirtualservers [directory]
7.1M /opt/zenoss/perf/Devices/10.0.4.218/os [directory]
888K /opt/zenoss/perf/Devices/10.0.4.218/hw [directory]
840K /opt/zenoss/perf/Devices/10.0.4.218/loadbalancerports [directory]
Your code doesn't work on my system, so I cannot test it. But you can do something like this:
function durec {
dir1=$1
level=$2
du1=$(du -x --max-depth=1 $dir1 | sort -nr | awk '{ print $2 }' | \
xargs du -hx --max-depth=0 | egrep -v "sys|proc|boot|lost|media|mnt|selinux" | head -10 | tail -n +2)
echo -e "##LEVEL$level##"
paste -d ' ' <(echo "$du1") <(echo "$(file $(echo "$du1" | \
awk '{print $2}')|cut -d' ' -f2- | sed -e 's/[a-zA-Z0-9]/[&/' -e 's/$/]/')")
let level++
dir2=$(echo "$du1"|head -1|awk '{print $2}')
if [ -n "$dir2" ]; then
durec "$dir2" $level
fi
}
# call the function
durec / 1
Your 1st du counts the whole filesystem. Afterwards you count again and again for each subdirectory.
To me this seems like a lot of unnecessary counting (read: nonsense), because you can save the output of the 1st du and work only on that...
Something like:
root="${1:-.}"
count=${2:-10}
tmp1=/tmp/durec_du.$$
tmp2=/tmp/durec_tmp.$$
trap "rm -f $tmp1 $tmp2;exit" 0 1 2 3 15
#human readable format - need GNU sort
#du -h "$root" | gsort -hr > $tmp1
#KB format
du -k "$root" | sort -nr > $tmp1
cp /dev/null $tmp2
level=0
durec() {
dir=$1
biggest=$(grep " ${dir}/[^/][^/]*$" $tmp1 | tee $tmp2 | head -1 | sed 's/^[0-9BKMGTP][0-9BKMGTP]* //')
# ^^^ ----------------------- one <TAB> character ---------------------------------- ^^^^
# if you have GNU version of sed, and grep replace the <TAB> with \t
[[ -n "$biggest" ]] || return
let level++
echo "##LEVEL$level##"
head -$count $tmp2
durec "$biggest"
}
durec "$root"
The gsort command is GNU sort. If your standard sort is GNU, replace gsort with plain sort. (The -h option is needed for sorting the output of du -h.)
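The -h option matters because a plain numeric sort mis-orders human-readable sizes; a quick check of GNU sort's behavior (GNU coreutils assumed):

```shell
# -h compares human-readable suffixes (K, M, G, ...), so 12G outranks 452M
printf '1.9G\n452M\n12G\n' | sort -hr
```

A plain sort -nr on the same input would put 452M first, since it only compares the leading digits.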
