I am trying to create 2 arrays with bash:
array1, called DNSSERVERS, with all DNS servers; the output should look like this: 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4
and
array2, called DNSSERVERSSEARCH, with all DNS domains; the output should look like this: local.domain.net,domain.net
All of this information comes from systemd-resolve --status.
Then I would like to put these arrays into a bash script called bounding-netplan.sh.
The thing is that today we have 4 DNS servers and 2 DNS domains,
but tomorrow it could be 1 DNS server and 4 DNS domains. The script must be flexible.
I tried to do it with awk, but without success.
Can anyone help me with this? It would be very much appreciated.
Thank you very much in advance.
# systemd-resolve --status
Global
         DNS Servers: 1.1.1.1
                      2.2.2.2
                      3.3.3.3
                      4.4.4.4
          DNS Domain: local.domain.net
                      domain.net
          DNSSEC NTA: 10.in-addr.arpa
                      16.172.in-addr.arpa
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa
                      18.172.in-addr.arpa
                      19.172.in-addr.arpa
                      20.172.in-addr.arpa
                      21.172.in-addr.arpa
                      22.172.in-addr.arpa
                      23.172.in-addr.arpa
                      24.172.in-addr.arpa
                      25.172.in-addr.arpa
                      26.172.in-addr.arpa
                      27.172.in-addr.arpa
                      28.172.in-addr.arpa
                      29.172.in-addr.arpa
                      30.172.in-addr.arpa
                      31.172.in-addr.arpa
                      corp
                      home
                      internal
                      intranet
                      lan
                      local
                      private
                      test
cat bounding-netplan.sh
#!/bin/bash
MAJORRELEASE=$( lsb_release -sr | cut -d\. -f1 )
STROS=$( lsb_release -si )
# Ubuntu 18.04
if [ "$STROS" == "Ubuntu" ] && [ "$MAJORRELEASE" -ge 18 ] ; then
    if [ -d /etc/netplan ]; then
        DNSSERVERS=""        # TODO: comma-separated DNS servers from systemd-resolve --status
        DNSSERVERSSEARCH=""  # TODO: comma-separated search domains from systemd-resolve --status
        cat <<EOF | tee /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    switchports:
      match: {name: "enp*"}
  bonds:
    bond0:
      interfaces: [switchports]
      addresses: [${IP}]
      gateway4: ${ROUTE}
      nameservers:
        search: [${DNSSERVERSSEARCH}]
        addresses: [${DNSSERVERS}]
EOF
    fi
fi
I think I found a way here:
$ systemd-resolve --status | sed -e 's#[\t ]##g' | awk -F\: 'BEGIN{section=""}{if($2!=""){section=$1; print $1" "$2}else {print section" "$1}}' | awk '{if($1=="DNSServers") print $2}' | sort -u | tr '\n' ',' | sed -e 's#,$##'
1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4
The former only works when there are only IPv4 addresses; this one works for both IPv4 and IPv6:
$ systemd-resolve --status | sed -e 's/: /=/' | sed -e 's#[\t ]##g' | awk -F= 'BEGIN{section=""}{if($2!=""){section=$1; print $1" "$2}else {print section" "$1}}' | awk '{if($1=="DNSServers") print $2}' | sort -u | tr '\n' ',' | sed -e 's#,$##'
8.8.4.4,8.8.8.8,fd80::7a98:e8ff:fe46:4328,fe80::1
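Plugging those pipelines back into bounding-netplan.sh, a minimal sketch could look like the following. get_field is a hypothetical helper name; the parsing is the pipeline from above, with paste doing the comma-joining (note that sort -u changes the original ordering of the entries):

get_field() {
    # print comma-joined values of one section of `systemd-resolve --status`
    systemd-resolve --status \
        | sed -e 's/: /=/' -e 's#[\t ]##g' \
        | awk -F= '{ if ($2 != "") { section=$1; print $1" "$2 } else { print section" "$1 } }' \
        | awk -v f="$1" '$1 == f { print $2 }' \
        | sort -u | paste -sd, -
}

DNSSERVERS=$(get_field "DNSServers")        # e.g. 1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4
DNSSERVERSSEARCH=$(get_field "DNSDomain")   # e.g. domain.net,local.domain.net

With that, the heredoc lines search: [${DNSSERVERSSEARCH}] and addresses: [${DNSSERVERS}] expand correctly no matter how many servers or domains systemd-resolve reports.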
Related
I have 2 arrays in bash.
Array number 1 holds the VLAN subnets without the last octet.
Array number 2 is a list of last octets I want to ignore while scanning the subnets with Nmap.
Let's assume every subnet has 254 pingable IPs (a class C subnet).
I want the script to scan each subnet and exclude IPs that end with 1, 2, 3, 252, 253, 254, which are usually routers / firewalls / switches.
I managed to run the 2 iterations, but failed on the if [[ $host == $check."${ignore[@]}" ]] test to identify the relevant IP (subnet + ignore string).
Would really appreciate your help.
#!/bin/bash
# let's assume each subnet has 254 IPs and the ignore IPs look like 10.6.114.1, 10.6.115.1 and 10.5.120.1
declare -a vlans=(
    10.6.114
    10.6.115
    10.5.120
)
declare -a ignore=(
    1
    2
    3
    252
    253
    254
)
for vlan in "${vlans[@]}"; do
    nmap -sn "$vlan" | grep Nmap | awk '{print $5}' | sed -n '1!p' | sed -e '$d' | sort > /tmp/vlan_ips.txt
    readarray -t hosts < /tmp/vlan_ips.txt
    for host in "${hosts[@]}"; do
        check=$(echo "$host" | cut -d"." -f1-3)
        if [ $host == $check."${ignore[@]}" ]; then
            echo 'skipping record'
        fi
    done
done
This might work for you:
for vlan in "${vlans[@]}"; do
    for ign in "${ignore[@]}"; do
        printf '%s.%s\n' "$vlan" "$ign"
    done >/tmp/ignore
    nmap -n -sn "$vlan.0/24" -oG - 2>/dev/null |
        grep -vwFf /tmp/ignore |
        awk '/Host:/{print $2}' |
        while read -r host; do
            echo "$host"
        done
done
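As a design note, nmap itself can skip hosts with its --exclude option, which takes a comma-separated list, so the temporary file can be dropped entirely. A rough sketch along the same lines:

for vlan in "${vlans[@]}"; do
    # build "10.6.114.1,10.6.114.2,..." from the ignore octets, then strip the trailing comma
    exclude=$(printf "${vlan}.%s," "${ignore[@]}")
    nmap -n -sn "$vlan.0/24" --exclude "${exclude%,}" -oG - 2>/dev/null |
        awk '/Host:/{print $2}'
done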
I'm looking for a way to count the login attempts from each IP by parsing a log file (/var/log/auth.log). I already found a way to do this, but it is ugly and takes a lot of time because of all the file operations.
Couldn't this be done with variables only? What I want is a listing like ip=count...
Many thanks
This is what I don't want :)
for ip in $(cat /var/log/auth.log | grep sshd | grep Failed | grep -v invalid | awk '{ print $11; }'); do
    if [ -e "log/$ip" ]; then
        file="log/$ip"
        counter=$(cat "$file")
        counter=$[$counter +1]
        echo $counter > log/$ip
    else
        echo 1 >> log/$ip
    fi
done
A sample from the logfile is
Jul 30 21:07:30 Server sshd[20895]: Failed password for root from 58.242.83.20 port 41448 ssh2
Jul 30 21:07:31 Server sshd[20897]: Failed password for root from 61.177.172.44 port 28603 ssh2
What I want is something like
58.242.83.20=1932
61.177.172.44=3
grep -Po 'sshd.*Failed password for [^i]* \K[0-9.]{7,14}' file | sort | uniq -c
works perfectly for me...
Thanks to Cyrus.
Try one more approach with awk; it will give you the output in the same order in which the IPs appear in your Input_file.
awk '!/invalid/ && /Server sshd.*Failed password/{
    match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/);
    val=substr($0,RSTART,RLENGTH);
    if(!a[val]){
        b[++i]=val
    };
    a[val]++
}
END{
    for(j=1;j<=i;j++){
        print b[j],a[b[j]]
    }
}' Input_file
This is an idea using awk with sort and uniq:
awk '$5 ~ "sshd" && $6 == "Failed" && $9 != "invalid" { print $11 }' auth.log | sort | uniq -c
Or just awk:
awk '$5 ~ "sshd" && $6 == "Failed" && $9 != "invalid" { ips[$11]++ } END { for (ip in ips) print ip"="ips[ip] }' auth.log
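If you specifically want the ip=count format from the sort | uniq -c variants, one extra awk step reshapes the counts; a small sketch based on the grep -P answer above:

grep -Po 'sshd.*Failed password for [^i]* \K[0-9.]{7,14}' /var/log/auth.log |
    sort | uniq -c |              # lines like "   1932 58.242.83.20"
    awk '{ print $2 "=" $1 }'     # becomes "58.242.83.20=1932"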
Is it possible to use array values as variables?
For example, I have this script:
#!/bin/bash
SOURCE=$(curl -k -s $1 | sed 's/{//g;s/}//g;s/,/"\n"/g;s/:/=/g;s/"//g' | awk -F"=" '{ print $1 }')
JSON=$(curl -k -s $1 | sed 's/{//g;s/}//g;s/,/"\n"/g;s/:/=/g;s/"//g' | awk -F"=" '{ print $NF }')
data=$2
readarray -t prot_array <<< "$SOURCE"
readarray -t pos_array <<< "$JSON"
for ((i=0; i<${#prot_array[@]}; i++)); do
    echo "${prot_array[i]}" "${pos_array[i]}" | sed 's/NOK/0/g;s/OK/1/g' | grep $2 | awk -F' ' '{ print $2,$3,$4 }'
done
EDIT:
I just added: grep $2 | awk -F' ' '{ print $2,$3,$4 }'
Usage:
./json.sh URL
Sample (very short) output:
DATABASE 1
STATUS 1
I don't want to echo out all the lines; I would like to use DATABASE STATUS as the variable $DATABASE and echo that out.
I just need the DATABASE (or any other) value from the command line.
Is it somehow possible to use something like this?
./json.sh URL $DATABASE
Happy to explain more if needed.
EDIT:
curl output without any formatting:
{
    "VERSION":"R3.1",
    "STATUS":"OK",
    "DATABASES":{
        "READING":"OK"
    },
    "TIMESTAMP":"2017-03-08-16-20-35"
}
Output using script:
VERSION R3.1
STATUS 1
DATABASES 1
TIMESTAMP 2017-03-08-16-21-54
What I want is described above: for example, use DATABASE as the variable $DATABASE and somehow get the value "1".
EDIT:
Random json from uconn.edu
./json.sh https://github.uconn.edu/raw/nam12023/novaLauncher/master/manifest.json
Another:
./json.sh https://gitlab.uwe.ac.uk/dc2-roskilly/angular-qs/raw/master/.npm/nan/2.4.0/package/package.json
Last output begins with:
name nan
version 2.4.0
From the command line: ./json.sh URL version
At least it works for me.
I think you want to use jq something like this:
$ curl -k -s "$1" | jq --arg d DATABASES -r '
"VERSION \(.VERSION)",
"STATUS \(if .STATUS == "OK" then 1 else 0 end)",
"DATABASES \(if .[$d].READING == "OK" then 1 else 0 end)",
"TIMESTAMP \(.TIMESTAMP)"
'
VERSION R3.1
STATUS 1
DATABASES 1
TIMESTAMP 2017-03-08-16-20-35
(I'm probably missing a simpler way to convert a boolean value to an integer.)
Quick explanation:
The ,-separated strings each become a separate output line.
The -r option outputs a raw string, rather than a JSON string value.
The name of the database field is passed using the --arg option.
\(...) is jq's interpolation operator; the contents are evaluated as a JSON expression and the result is inserted into the string.
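To get just the one value the question asks for from the command line (./json.sh URL DATABASE), a minimal wrapper sketch along the same lines, assuming jq is installed and the key is a top-level field of the JSON shown above (for an object-valued key such as DATABASES you would drill further, e.g. .[$k].READING):

#!/usr/bin/env bash
# json.sh URL KEY -- hypothetical wrapper, not the original script
url=$1 key=$2
curl -k -s "$url" |
    jq -r --arg k "$key" '
        .[$k]                           # look up the requested top-level key
        | if . == "OK" then 1           # map OK/NOK to 1/0 as in the question
          elif . == "NOK" then 0
          else . end'

For the sample JSON, ./json.sh URL STATUS prints 1 and ./json.sh URL VERSION prints R3.1.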
I'm trying to understand what I'm doing wrong here, but can't seem to determine the cause. I would like to create a set of arrays from command output inside a for loop in bash. Below is the code I have so far:
for i in `onedatastore list | grep pure02 | awk '{print $1}'`;
do
    arr${i}=($(onedatastore show ${i} | sed 's/[A-Z]://' | cut -f2 -d\:)) ;
    echo "Output of arr${i}: ${arr${i}[@]}" ;
done
The output of the loop's list command is as follows:
107
108
109
What I want to do, based on these unique IDs, is create arrays:
arr107
arr108
arr109
Each array will contain data like this:
[oneadmin@opennebula/]$ arr107=($(onedatastore show 107 | sed 's/[A-Z]://' | cut -f2 -d\:))
[oneadmin@opennebula/]$ echo ${arr107[@]}
DATASTORE 107 INFORMATION 107 pure02_vm_datastore_1 oneadmin oneadmin 0 IMAGE vcenter vcenter /var/lib/one//datastores/107 FILE READY DATASTORE CAPACITY 60T 21.9T 38.1T - PERMISSIONS um- u-- --- DATASTORE TEMPLATE CLONE_TARGET="NONE" DISK_TYPE="FILE" DS_MAD="vcenter" LN_TARGET="NONE" RESTRICTED_DIRS="/" SAFE_DIRS="/var/tmp" TM_MAD="vcenter" VCENTER_CLUSTER="CLUSTER01" IMAGES
When I try this in the script, though, I get errors like this:
./test.sh: line 6: syntax error near unexpected token `$(onedatastore show ${i} | sed 's/[A-Z]://' | cut -f2 -d\:)'
I can't seem to figure out the right syntax for this scenario.
In the end, what I want to do is compare the different datastores and, based on which one has more free space, deploy VMs to it.
Hope someone can help. Thanks
You can use the eval (potentially unsafe) and declare (safer) commands:
for i in $(onedatastore list | grep pure02 | awk '{print $1}');
do
    declare "arr$i=($(onedatastore show ${i} | sed 's/[A-Z]://' | cut -f2 -d\:))"
    eval echo 'Output of arr$i: ${arr'"$i"'[@]}'
done
readarray or mapfile, added in bash 4.0, will read directly into an array:
while IFS= read -r i <&3; do
    readarray -t "arr$i" < <(onedatastore show "$i" | sed 's/[A-Z]://' | cut -f2 -d:)
done 3< <(onedatastore list | awk '/pure02/ {print $1}')
Better, back through bash 3.x, one can use read -a to read to an array:
set -o pipefail  # cause pipelines to fail if any element does
while IFS= read -r i <&3; do
    IFS=$'\n' read -r -d '' -a "arr$i" \
        < <(onedatastore show "$i" | sed 's/[A-Z]://' | cut -f2 -d: && printf '\0')
done 3< <(onedatastore list | awk '/pure02/ {print $1}')
Alternately, one can use namevars to create an alias for an array with an arbitrarily-named array in bash 4.3:
while IFS= read -r i <&3; do
    declare -a "arr$i"
    declare -n arr="arr$i"
    # this is buggy: expands globs, string-splits on all characters in IFS, etc
    # ...but, well, it's what the OP is asking for...
    arr=( $(onedatastore show "$i" | sed 's/[A-Z]://' | cut -f2 -d:) )
done 3< <(onedatastore list | awk '/pure02/ {print $1}')
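As a design note for the stated end goal (deploying to whichever datastore has the most free space), a single associative array keyed by datastore ID is usually easier to handle than one dynamically named array per datastore. A rough sketch, assuming the free-space figure can be extracted as a plain integer (the sample output shows values like 38.1T, so a real script would have to normalize units first; the FREE field name is an assumption about the `onedatastore show` output):

declare -A free    # datastore id -> free space, assumed to be a plain integer
while IFS= read -r id; do
    # hypothetical extraction; adjust the pattern to the real `onedatastore show` output
    free[$id]=$(onedatastore show "$id" | awk -F: '/FREE/ {gsub(/[[:space:]]/, "", $2); print $2}')
done < <(onedatastore list | awk '/pure02/ {print $1}')

best=
for id in "${!free[@]}"; do
    [[ -z $best || ${free[$id]} -gt ${free[$best]} ]] && best=$id
done
echo "datastore with the most free space: $best"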
First, I am not experienced in scripting, so be gentle with me.
Anyway, I tried writing a script for finding files by MIME type (audio, video, text, etc.), and here is the poor result I came up with.
#!/bin/bash
FINDPATH="$1"
FILETYPE="$2"
locate $FINDPATH* | while read FILEPROCESS
do
    if file -bi "$FILEPROCESS" | grep -q "$FILETYPE"
    then
        echo $FILEPROCESS
    fi
done
It works, but as you can guess, the performance is not so good.
So, can you guys help me make it better? Also, I don't want to rely on file extensions.
Update:
Here's what I am using now
#!/bin/bash
FINDPATH="$1"
find "$FINDPATH" -type f | file -i -F "::" -f - | awk -v FILETYPE="$2" -F"::" '$2 ~ FILETYPE { print $1 }'
Forking (exec) is expensive. This runs the file command only once, so it is fast:
find . -print | file -if - | grep "what you want" | awk -F: '{print $1}'
or
locate what.want | file -if -
Check man file:
-i    # print MIME types
-f -  # read filenames from stdin
#!/bin/bash
find $1 | file -if- | grep $2 | awk -F: '{print $1}'
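A variation of the same single-invocation idea that also survives filenames containing spaces or colons: feed find's output NUL-separated through xargs and reuse the unambiguous "::" separator from the update above (a sketch; the variable names mirror the original script):

#!/bin/bash
FINDPATH=$1
FILETYPE=$2
find "$FINDPATH" -type f -print0 |
    xargs -0 file -i -F '::' |                        # "path:: type/subtype; charset=..."
    awk -v t="$FILETYPE" -F'::' '$2 ~ t { print $1 }'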
#!/usr/bin/env bash
mimetypes=$(sed -E 's/\/.*//g; /^$/d; /^#/d' /etc/mime.types | uniq)
display_help(){
    echo "Usage: ${0##*/} [mimetype]"
    echo "Available mimetypes:"
    echo "$mimetypes"
    exit 2
}
[[ $# -lt 1 ]] && display_help
ext=$(sed -E "/^${1}/!d; s/^[^ \t]+[ \t]*//g; /^$/d; s/ /\n/g" /etc/mime.types | sed -Ez 's/\n$//; s/\n/\\|/g; s/(.*)/\.*\\.\\(\1\\)\n/')
find "$PWD" -type f -regex "$ext"