How can I get the names of all namespaces containing the word "nginx" and store those names in an array

Basically I want to automate a task where I have some namespaces in Kubernetes that I need to delete and others that I want to leave alone. The namespaces I need to delete contain the word "nginx". So I was thinking I could filter the output of kubectl get namespace with some regex, store the matching names in an array, and then iterate through that array deleting them one by one.
array=($(kubectl get ns | jq -r 'keys[]'))
declare -p array
for n in {array};
do
kubectl delete $n
done
I tried something like this, but it's very basic and doesn't even have the regex; I've left it here just as an example of what I'm trying to achieve. Any help is appreciated, and thanks in advance.

kubectl get ns doesn't output JSON unless you add -o json. This:
array=($(kubectl get ns | jq -r 'keys[]'))
Should result in an error like:
parse error: Invalid numeric literal at line 1, column 5
kubectl get ns -o json emits a JSON response that contains a list of Namespace resources in the items key. You need to get the metadata.name attribute from each item, so:
kubectl get ns -o json | jq -r '.items[].metadata.name'
You only want namespaces that contain the word "nginx". We could filter the above list with grep, or we could add that condition to our jq expression:
kubectl get ns -o json | jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name'
This will output your desired namespaces. At this point, there's no reason to store this in an array and use a for loop; you can just pipe the output to xargs:
kubectl get ns -o json |
jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name' |
xargs kubectl delete ns
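That said, if you do want the names in an array first (say, to log or confirm each deletion), mapfile is safer than array=($(...)) because it doesn't word-split or glob-expand the names. A minimal sketch, assuming bash 4+:
# Read one namespace name per line into an array, without word splitting
mapfile -t nginx_namespaces < <(
  kubectl get ns -o json |
    jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name'
)
for ns in "${nginx_namespaces[@]}"; do
  kubectl delete ns "$ns"
done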

kubectl get ns
Output:
NAME                          STATUS   AGE
default                       Active   75d
kube-node-lease               Active   75d
kube-public                   Active   75d
kube-system                   Active   75d
oci-service-operator-system   Active   31d
olm                           Active   31d
Command:
kubectl get ns --no-headers | awk '{if ($1 ~ "de") print $1}'
Output:
default
kube-node-lease
This will give you the list of namespaces as a single space-separated string:
array=$(kubectl get ns --no-headers | awk '{if ($1 ~ "de") print $1}')
Testing
bash-4.2$ array=$(kubectl get ns --no-headers | awk '{if ($1 ~ "de") print $1}')
bash-4.2$ echo $array
default kube-node-lease
bash-4.2$ for n in $array; do echo $n; done
default
kube-node-lease
bash-4.2$
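If you want an actual bash array rather than a space-separated string, mapfile (bash 4+) does the same job without relying on word splitting. A sketch of the same approach, with the "de" test pattern swapped for "nginx" as in the original question:
mapfile -t array < <(kubectl get ns --no-headers | awk '{if ($1 ~ "nginx") print $1}')
for n in "${array[@]}"; do
  kubectl delete ns "$n"
done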

Related

How can I save my response into a list in BASH?

My first bash script.
myvar=$(aws ec2 describe-regions|jq '.Regions[] | {RegionName: .RegionName }')
echo "${myvar}"| jq -c '.| .RegionName'
Using the two commands above, I successfully got the following result in the terminal:
"af-south-1"
"eu-north-1"
"ap-south-1"
"eu-west-3"
"eu-west-2"
"eu-west-1"
"ap-northeast-3"
"ap-northeast-2"
"me-south-1"
"ap-northeast-1"
"sa-east-1"
"ca-central-1"
"ap-east-1"
"ap-southeast-1"
"ap-southeast-2"
"eu-central-1"
"us-east-1"
"us-east-2"
"us-west-1"
"us-west-2"
How do I save the answer into an array and iterate over it?
I thought about
awsRegionList =$(${myvar}| jq -c '.| .RegionName')
echo awsRegionList
But it finishes with an error.
First of all, you can simplify the jq query to process the EC2 output in one go.
#!/usr/bin/env bash

# Map the jq-parsed EC2 response into the $regionList array
mapfile -d '' regionList < <(
  # Get the EC2 response as JSON
  aws ec2 describe-regions |
    # Process the JSON into a null-delimited stream
    jq --join-output '.Regions[] | .RegionName + "\u0000"'
)

# Iterate over the list
for regionName in "${regionList[@]}"; do
  # Do stuff with $regionName
  printf 'Doing stuff with: %s\n' "$regionName"
done
You could create a single jq command to save the output into an array. Afterwards you can iterate over it with a for loop:
#!/bin/bash

awsRegionList=($(aws ec2 describe-regions | jq -r -c '.Regions[] | .RegionName'))
for region in "${awsRegionList[@]}"; do
  echo "$region"
done
Note, using -r will remove the quotes as well, so the output will be something like:
eu-north-1
ap-south-1
eu-west-3
eu-west-2
eu-west-1
ap-northeast-3
ap-northeast-2
ap-northeast-1
sa-east-1
ca-central-1
ap-southeast-1
ap-southeast-2
eu-central-1
us-east-1
us-east-2
us-west-1
us-west-2

Output from kubectl images into an array (bash script)

I'm following the format of this Stack Overflow post to try to get the jsonpath for images (for all pods) into an array, which I can then loop through, running a gcloud command on each item from the array.
The command I'm trying is:
array=( $(kubectl get pods -o jsonpath="{.items[*].spec.containers[*].image}" | jq -r 'keys[]') )
declare -p array
However I receive the error: parse error: Invalid numeric literal at line 1, column 41
When I run the command without the jq part, everything gets put into the first index item as one big long string with spaces, e.g.:
typeset -a array=( 'eu.gcr.io/repo/imagename1 eu.gcr.io/repo/imagename2 eu.gcr.io/repo/imagename3 eu.gcr.io/repo/imagename4' )
Any ideas how I can get this output into separate array items and something I can use to iterate through?
Your -o jsonpath="{.items[*].spec.containers[*].image}" outputs all of the image names on one line (here's mine):
nginx docker.local/node-server:1640867594 docker.local/node-server:1640867594 docker.local/node-server:1640867594 nginx nginx nginx nginx nginx
It's no longer JSON, so piping it into jq is going to do weird things. In your case, jq is (probably) complaining about one of the : items. In my case, I get a slightly different parse error.
Either drop the use of jq, or lean into it fully:
kubectl get pods -o json | jq -r '.items[].spec.containers[].image' | sort | uniq | ...
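From there, if the end goal is running a gcloud command per image, mapfile captures that pipeline into a proper array with one image per element. A sketch; the describe subcommand below is only a placeholder for whatever you actually need to run:
# One unique image name per array element
mapfile -t images < <(
  kubectl get pods -o json | jq -r '.items[].spec.containers[].image' | sort -u
)
for image in "${images[@]}"; do
  # Placeholder: substitute your real gcloud invocation here
  gcloud container images describe "$image"
done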

using command variable in column output not working

I'm using a script that uses curl to obtain specific array values from a configuration. I'd like to place the output into columns separating values (values are unknown to script). Here's my code:
# get overlay networks and their details
get_overlay=$(curl -H "X-Person-Token: $auth_token" -H "X-Person-Email: $auth_email" -k "$api_host/api/v1/networks")
# overlay names and uuids, one per line
overlay_name=$(echo "$get_overlay" | jq '.[] | .name')
overlay_uuid=$(echo "$get_overlay" | jq '.[] | .uuid')
echo ""
echo -e "Overlay UUID\n$overlay_name $overlay_uuid" | column -t
exit 0
Here's the output:
Overlay UUID
"TESTOVERLAY"
"Auto_API_Overlay"
"ANOTHEROVERLAYTEST" "ea178905-6ab0-4154-ab05-412dc4b39151"
"e5be9dbe-b0fc-4e30-aaf5-ac4bdcd863a7"
"850ebf6b-3651-4cf1-aae1-5a6c03fad61b"
What I was expecting was:
Overlay UUID
"TESTOVERLAY" "ea178905-6ab0-4154-ab05-412dc4b39151"
"Auto_API_Overlay" "e5be9dbe-b0fc-4e30-aaf5-ac4bdcd863a7"
"ANOTHEROVERLAYTEST" "850ebf6b-3651-4cf1-aae1-5a6c03fad61b"
I'm an absolute beginner at this, any insight is very much appreciated.
Thanks!
I would suggest using paste to combine your two variables line by line:
paste <(printf 'Overlay\n%s\n' "$overlay_name") <(printf 'UUID\n%s\n' "$overlay_uuid") | column -t
Two process substitutions are used to pass the contents of each variable along with their titles.
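Alternatively, since the curl response is already JSON, jq can pair the fields per object so the two columns can never go out of sync. A sketch, assuming the API returns an array of objects with name and uuid keys:
curl -H "X-Person-Token: $auth_token" -H "X-Person-Email: $auth_email" -k "$api_host/api/v1/networks" |
  jq -r '["Overlay", "UUID"], (.[] | [.name, .uuid]) | @tsv' |
  column -t -s $'\t'
Here @tsv emits one tab-separated row per network, and column -t -s $'\t' aligns the rows under the header, so names containing spaces stay intact.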

Count ip repeat in log from bash

In bash, how can I count how many times an IP is repeated within a log, filtered by a specific search?
For example:
#!/bin/bash
# Log line: [Sat Jul 04 21:55:35 2015] [error] [client 192.168.1.39] Access denied with status code 403.
grep "status\scode\s403" /var/log/httpd/custom_error_log | while read line ; do
pattern='^\[.*?\]\s\[error\]\s\[client\s(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\].*?403'
[[ $line =~ $pattern ]]
res_remote_addr="${BASH_REMATCH[1]}.${BASH_REMATCH[2]}.${BASH_REMATCH[3]}.${BASH_REMATCH[4]}"
echo "Remote Addr: $res_remote_addr"
done
In the end, I need to know how many times each IP produced a 403 message, sorted from highest to lowest if possible.
Example output:
200.200.200.200 50 times.
200.200.200.201 40 times.
200.200.200.202 30 times.
... etc ...
I need this to create an HTML report from a monthly Apache log (something like awstats).
There are better ways. The following is my proposal, which should be more readable and easier to maintain:
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}' log_file | sort | uniq -c | sort -k1,1 -r -n
The output will be in the form:
count1 ip1
count2 ip2
Update:
To filter only 403s:
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(?=.*403)' log_file | sort | uniq -c | sort -k1,1 -r -n
Notice that a lookahead suffices.
If the log file is in the format mentioned in the question, it's best to use awk to filter on the needed status code and output only the IP, then use the uniq command to count each occurrence:
awk '/code 403/ {gsub(/\]/, "", $8); print $8}' error.log | sort | uniq -c | sort -n
In awk, we filter by the regexp /code 403/; for each matching line we take the 8th whitespace-separated field, strip the trailing ] left over from the [client ...] bracket, and print it, which gives us the IP.
Then we need to sort the output so that identical IPs end up on adjacent lines; this is a requirement of the uniq program.
uniq -c prints each unique line from the input only once, preceded by the number of occurrences. Finally, we sort this list numerically to get the IPs ordered by count.
Sample output (first is the number of occurrences, second is the IP):
1 1.1.1.1
10 2.2.2.2
12 3.3.3.3
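To get the exact format requested in the question (IP first, then the count, highest first), one more awk pass can reorder the columns that uniq -c produces. A small sketch building on the same pipeline:
awk '/code 403/ {gsub(/\]/, "", $8); print $8}' error.log | sort | uniq -c | sort -rn |
  awk '{print $2, $1, "times."}'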

Execute bash command stored in associative array over SSH, store result

For a larger project that's not relevant, I need to collect system stats from the local system or a remote system. Since I'm collecting the same stats either way, I'm preventing code duplication by storing the stats-collecting commands in a Bash associative array.
declare -A stats_cmds
# Actually contains many more key:value pairs, similar style
stats_cmds=([total_ram]="$(free -m | awk '/^Mem:/{print $2}')")
I can collect local system stats like this:
get_local_system_stats()
{
# Collect stats about local system
complex_data_structure_that_doesnt_matter=${stats_cmds[total_ram]}
# Many more similar calls here
}
A precondition of my script is that ~/.ssh/config is set up such that ssh $SSH_HOSTNAME works without any user input. I would like something like this:
get_remote_system_stats()
{
# Collect stats about remote system
complex_data_structure_that_doesnt_matter=`ssh $SSH_HOSTNAME ${stats_cmds[total_ram]}`
}
I've tried every combination of single quotes, double quotes, backticks and such that I can imagine. Some combinations result in the stats command getting executed too early (bash: 7986: command not found), others cause syntax errors, others return null (single quotes around the stats command) but none store the proper result in my data structure.
How can I evaluate a command, stored in an associative array, on a remote system via SSH and store the result in a data structure in my local script?
Make sure that the commands you store in your array don't get expanded when you assign your array!
Also note that the complex-looking quoting style is necessary when nesting single quotes. See this SO post for an explanation.
stats_cmds=([total_ram]='free -m | awk '"'"'/^Mem:/{print $2}'"'"'')
And then just launch your ssh as:
sh "$ssh_hostname" "${stats_cmds[total_ram]}"
(yeah, I lowercased your variable name, since all-uppercase names in Bash are conventionally reserved for environment variables). Then:
get_local_system_stats() {
  # Collect stats about local system; eval is needed because
  # the stored command contains a pipeline
  complex_data_structure_that_doesnt_matter=$(eval "${stats_cmds[total_ram]}")
  # Many more similar calls here
}
and
get_remote_system_stats() {
  # Collect stats about remote system
  complex_data_structure_that_doesnt_matter=$(ssh "$ssh_hostname" "${stats_cmds[total_ram]}")
}
First, I'm going to suggest an approach that makes minimal changes to your existing implementation. Then, I'm going to demonstrate something closer to best practices.
Smallest Modification
Given your existing code:
declare -A remote_stats_cmds
remote_stats_cmds=(
  [total_ram]='free -m | awk '"'"'/^Mem:/{print $2}'"'"''
  [used_ram]='free -m | awk '"'"'/^Mem:/{print $3}'"'"''
  [free_ram]='free -m | awk '"'"'/^Mem:/{print $4}'"'"''
  [cpus]='nproc'
  [one_min_load]='uptime | awk -F'"'"'[a-z]:'"'"' '"'"'{print $2}'"'"' | awk -F "," '"'"'{print $1}'"'"' | tr -d " "'
  [five_min_load]='uptime | awk -F'"'"'[a-z]:'"'"' '"'"'{print $2}'"'"' | awk -F "," '"'"'{print $2}'"'"' | tr -d " "'
  [fifteen_min_load]='uptime | awk -F'"'"'[a-z]:'"'"' '"'"'{print $2}'"'"' | awk -F "," '"'"'{print $3}'"'"' | tr -d " "'
  [iowait]='cat /proc/stat | awk '"'"'NR==1 {print $6}'"'"''
  [steal_time]='cat /proc/stat | awk '"'"'NR==1 {print $9}'"'"''
)
...one can evaluate these locally as follows:
result=$(eval "${remote_stats_cmds[iowait]}")
echo "$result" # demonstrate value retrieved
...or remotely as follows:
result=$(ssh "$hostname" bash <<<"${remote_stats_cmds[iowait]}")
echo "$result" # demonstrate value retrieved
No separate form is required.
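If it helps, the local and remote call sites can be folded into one small helper; the function name and calling convention here are just a sketch:
# Usage: get_stat <key> [host]; runs the stored command locally
# when no host is given, otherwise over ssh
get_stat() {
  if (( $# > 1 )); then
    ssh "$2" bash <<<"${remote_stats_cmds[$1]}"
  else
    eval "${remote_stats_cmds[$1]}"
  fi
}

result=$(get_stat iowait)              # local
result=$(get_stat iowait "$hostname")  # remote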
The Right Thing
Now, let's talk about an entirely different way to do this:
# no awful nested quoting by hand!
collect_total_ram() { free -m | awk '/^Mem:/ {print $2}'; }
collect_used_ram() { free -m | awk '/^Mem:/ {print $3}'; }
collect_cpus() { nproc; }
...and then, to evaluate locally:
result=$(collect_cpus)
...or, to evaluate remotely:
result=$(ssh "$hostname" bash <<<"$(declare -f collect_cpus); collect_cpus")
...or, to iterate through defined functions with the collect_ prefix and do both of these things:
declare -A local_results
declare -A remote_results
while IFS= read -r funcname; do
  local_results["${funcname#collect_}"]=$("$funcname")
  remote_results["${funcname#collect_}"]=$(ssh "$hostname" bash <<<"$(declare -f "$funcname"); $funcname")
done < <(compgen -A function collect_)
...or, to collect all the items into a single remote array in one pass, avoiding extra SSH round-trips and not eval'ing or otherwise taking security risks with results received from the remote system:
remote_cmd=""
while IFS= read -r funcname; do
remote_cmd+="$(declare -f "$funcname"); printf '%s\0' \"$funcname\" \"\$(\"$funcname\")\";"
done < <(compgen -A function collect_)
declare -A remote_results=( )
while IFS= read -r -d '' funcname && IFS= read -r -d '' result; do
remote_results["${funcname#collect_}"]=$result
done < <(ssh "$hostname" bash <<<"$remote_cmd")
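Once populated, the array can be consumed like any other bash associative array; for instance:
# Print every stat collected from the remote host
for stat in "${!remote_results[@]}"; do
  printf '%s: %s\n' "$stat" "${remote_results[$stat]}"
done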
