I am going to keep this simple and ask: is there a way to see which pods have an active connection to an endpoint, like a database endpoint?
My cluster contains a few hundred namespaces, and my database provider just told me that the maximum number of connections is almost reached, so I want to pinpoint the pod(s) that hold multiple connections to our database endpoint at the same time.
I can see from my database cluster that the connections come from my cluster nodes' IPs... but it won't say which pods... and I have quite a lot of pods...
Thanks for the help
Each container uses its own network namespace, so to check the network connections inside a container's namespace, you need to run the command inside that namespace.
Luckily, all containers in a Pod share the same network namespace, so you can add a small sidecar container to the Pod that prints open connections to its log.
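A minimal sketch of such a sidecar (the name, image, and interval are assumptions; any image that ships netstat works):
# Add under your Pod's spec.containers: logs TCP connections every 60 seconds
- name: netstat-sidecar
  image: busybox
  command: ["sh", "-c", "while true; do netstat -tn; sleep 60; done"]
You can then read the connections with kubectl logs <pod> -c netstat-sidecar.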
Alternatively, you can run the netstat command inside the pod (if the pod has it on its filesystem):
kubectl get pods | grep Running | awk '{ print $1 }' | xargs -I % sh -c 'echo == Pod %; kubectl exec -ti % -- netstat -tunaple' >netstat.txt
# or
kubectl get pods | grep Running | awk '{ print $1 }' | xargs -I % sh -c 'echo == Pod %; kubectl exec -ti % -- netstat -tunaple | grep ESTABLISHED' >netstat.txt
After that, you'll have a file on your disk (netstat.txt) with all the information about connections in the pods.
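To pinpoint the heavy users from that file, a small post-processing sketch (<db-ip>:<db-port> is a placeholder for your database endpoint):
# Count ESTABLISHED connections per pod to the database endpoint, highest first
awk -v ep='<db-ip>:<db-port>' '/^== Pod /{pod=$3} $0 ~ ep && /ESTABLISHED/{n[pod]++} END{for (p in n) print n[p], p}' netstat.txt | sort -rn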
The third way is the most complex. You need to find the container ID using docker ps and run the following command to get its PID:
$ pid="$(docker inspect -f '{{.State.Pid}}' "container_name_or_uuid")"
Then, you need to create a named namespace (you can use any name you want, or the container name/UUID/Pod name in place of namespace_name):
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net "/var/run/netns/namespace_name"
Now you can run commands in that namespace:
sudo ip netns exec "namespace_name" netstat -tunaple | grep ESTABLISHED
You need to do that for each pod on each node. It can be useful for troubleshooting particular containers, but your task needs some more automation.
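One way to add that automation, using nsenter instead of named namespaces (a rough per-node sketch; it assumes a Docker runtime and root on the node):
# For every running container on this node, enter its network namespace
# and list established connections
for cid in $(docker ps -q); do
    pid=$(docker inspect -f '{{.State.Pid}}' "$cid")
    echo "== $(docker inspect -f '{{.Name}}' "$cid") (pid $pid)"
    nsenter -t "$pid" -n netstat -tunaple | grep ESTABLISHED
done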
It might also be helpful to install Istio in your cluster; it has several interesting observability features for this kind of investigation.
The easiest way is to run netstat on all your Kubernetes nodes:
$ netstat -tunaple | grep ESTABLISHED | grep <ip address of db provider>
The last column is the PID/Program name column, and that's a program that is running in a container (with a different internal container PID) in your pod on that specific node. There are all kinds of different ways to find out which container/pod it is. For example,
# Loop through all containers on the node with
$ docker top <container-id>
Then, after you find the container ID, look through your pods:
$ kubectl get pod <pod-id> -o=yaml
And you can find the status, for example:
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-11-09T23:01:36Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-11-09T23:01:38Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-11-09T23:01:38Z
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-11-09T23:01:36Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://f64425b3cd0da74a323440bcb03d8f2cd95d3d9b834f8ca5c43220eb5306005d
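Instead of looking through pods one by one, a one-liner sketch to find which pod owns a given container ID across all namespaces:
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{.status.containerStatuses[*].containerID}{"\n"}{end}' | grep f64425b3cd0d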
I tried to run my first cluster, as I'm currently trying to learn so I can hopefully work in Cloud Engineering.
What I did:
I have 3 cloud servers (Ubuntu 20.04), all in one network.
I've successfully set up my etcd cluster (cluster-health shows me all 3 network IPs of the servers: 1 leader, 2 not leader).
Now I've installed k3s on my first server:
curl -sfL https://get.k3s.io | sh -s - server --datastore-endpoint="https://10.0.0.2:2380,https://10.0.0.4:2380,https://10.0.0.3:2380"
I've done the same on the 2 other servers; the only difference is I added the token value, which I checked beforehand in:
cat /var/lib/rancher/k3s/server/token
Now everything seems to have worked, but when I run kubectl get nodes, it just shows me one node...
Does anyone have any tips or answers for me?
k3s service file:
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
server \
'--node-external-ip=78.46.241.153'
'--node-name=node-1'
' --flannel-iface=ens10'
' --advertise-address=10.0.0.2'
' --node-ip=10.0.0.2'
' --datastore-endpoint=https://10.0.0.2:2380,https://10.0.0.4:2380,https://10.0.0.3:2380' \
[Updated1] I have a shell script that changes TCP kernel parameters in some functions, but now I need to make this script run in a Docker container. That means the script needs to know that it is running inside a container and skip configuring the kernel.
Now I'm not sure how to achieve that. Here are the contents of /proc/self/cgroup inside the container:
9:hugetlb:/
8:perf_event:/
7:blkio:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct:/
2:cpu:/docker/25ef774c390558ad8c4e9a8590b6a1956231aae404d6a7aba4dde320ff569b8b
1:cpuset:/
Are there any flags above that I can use to figure out whether this process is running inside a container?
[Updated2]: I have also noticed Determining if a process runs inside lxc/Docker, but it doesn't seem to work in this case; the content of /proc/1/cgroup in my container is:
8:perf_event:/
7:blkio:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct:/
2:cpu:/docker/25ef774c390558ad8c4e9a8590b6a1956231aae404d6a7aba4dde320ff569b8b
1:cpuset:/
No /lxc/containerid
Docker creates .dockerenv and .dockerinit (removed in v1.11) files at the top of the container's directory tree, so you might want to check if those exist.
Something like this should work:
#!/bin/bash
if [ -f /.dockerenv ]; then
echo "I'm inside matrix ;(";
else
echo "I'm living in real world!";
fi
Whether you are inside a Docker container or not can be checked via /proc/1/cgroup. As this post suggests, you can do the following:
Outside a Docker container, all entries in /proc/1/cgroup end in /, as you can see here:
vagrant@ubuntu-13:~$ cat /proc/1/cgroup
11:name=systemd:/
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:freezer:/
6:devices:/
5:memory:/
4:cpuacct:/
3:cpu:/
2:cpuset:/
Inside a Docker container some of the control groups will belong to Docker (or LXC):
vagrant@ubuntu-13:~$ docker run busybox cat /proc/1/cgroup
11:name=systemd:/
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:freezer:/
6:devices:/docker/3601745b3bd54d9780436faa5f0e4f72bb46231663bb99a6bb892764917832c2
5:memory:/
4:cpuacct:/
3:cpu:/docker/3601745b3bd54d9780436faa5f0e4f72bb46231663bb99a6bb892764917832c2
2:cpuset:/
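A compact check along those lines (a sketch; it matches the cgroup v1 paths shown above):
# Succeeds when any cgroup of PID 1 points into a docker or lxc path
if grep -qE ':/(docker|lxc)/' /proc/1/cgroup; then
    echo "inside a container"
else
    echo "not in a container"
fi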
We use proc's sched file (/proc/$PID/sched) to extract the PID of the process. The process's PID inside the container will differ from its PID on the host (a non-container system).
For example, the output of /proc/1/sched in a container will return:
root@33044d65037c:~# cat /proc/1/sched | head -n 1
bash (5276, #threads: 1)
While on a non-container host:
$ cat /proc/1/sched | head -n 1
init (1, #threads: 1)
This helps to differentiate whether you are in a container or not. E.g., you can do:
if ! head -n 1 /proc/1/sched | grep -q init; then
    echo in docker
else
    echo not in docker
fi
Using Environment Variables
For my money, I prefer to set an environment variable inside the docker image that can then be detected by the application.
For example, this is the start of a demo Dockerfile config:
FROM node:12.20.1 as base
ENV DOCKER_RUNNING=true
RUN yarn install --production
RUN yarn build
The second line sets an envar called DOCKER_RUNNING that is then easy to detect. The issue with this is that in a multi-stage build, you will have to repeat the ENV line every time you FROM off of an external image. For example, you can see that I FROM off of node:12.20.1, which includes a lot of extra stuff (git, for example). Later on in my Dockerfile I then COPY things over to a new image based on node:12.20.1-slim, which is much smaller:
FROM node:12.20.1-slim as server
ENV DOCKER_RUNNING=true
EXPOSE 3000
COPY --from=base /build /build
CMD ["node", "server.js"]
Even though this image target server is in the same Dockerfile, it requires the ENV var to be defined again because it has a different base image.
If you make use of Docker-Compose, you could instead easily define an envar there. For example, your docker-compose.yml file could look like this:
version: "3.8"
services:
nodeserver:
image: michaeloryl/stackdemo
environment:
- NODE_ENV=production
- DOCKER_RUNNING=true
Thomas' solution as code:
running_in_docker() {
(awk -F/ '$2 == "docker"' /proc/self/cgroup | read non_empty_input)
}
Note
The read with a dummy variable is a simple idiom for "does this produce any output?". It's a compact method for turning a possibly verbose grep or awk into a test of a pattern.
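For example, the same idiom in isolation:
# Exit status is 0 only when the pipeline produced at least one line
echo hello | read -r _ && echo "got output"   # prints: got output
printf '' | read -r _ || echo "no output"     # prints: no output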
What works for me is to check the inode number of '/'.
Inside Docker, it's a very high number.
Outside Docker, it's a very low number like '2'.
I reckon this approach would also depend on the filesystem being used.
Example
Inside the docker:
# ls -ali / | sed '2!d' | awk '{print $1}'
1565265
Outside the docker
$ ls -ali / | sed '2!d' | awk '{print $1}'
2
In a script:
#!/bin/bash
INODE_NUM=$(ls -ali / | sed '2!d' | awk '{print $1}')
if [ "$INODE_NUM" == '2' ];
then
echo "Outside the docker"
else
echo "Inside the docker"
fi
We needed to exclude processes running in containers, but instead of checking just for Docker cgroups, we decided to compare /proc/<pid>/ns/pid to the init system's at /proc/1/ns/pid. Example:
pid=$(ps ax | grep "[r]edis-server \*:6379" | awk '{print $1}')
if [ $(readlink "/proc/$pid/ns/pid") == $(readlink /proc/1/ns/pid) ]; then
echo "pid $pid is the same namespace as init system"
else
echo "pid $pid is in a different namespace as init system"
fi
Or, in our case, we wanted a one-liner that generates an error if the process is NOT in a container:
bash -c "test -h /proc/4129/ns/pid && test $(readlink /proc/4129/ns/pid) != $(readlink /proc/1/ns/pid)"
which we can execute from another process and if the exit code is zero then the specified PID is running in a different namespace.
Go code, via /proc/<pid>/cgroup, to check whether a process is running in Docker, including in a k8s cluster:
package main

import (
	"fmt"
	"io/ioutil"
	"strconv"
	"strings"
)

// containerImage maps a container ID to its image name; it is populated elsewhere in the original code.
var containerImage map[string]string

func GetContainerID(pid int32) string {
cgroupPath := fmt.Sprintf("/proc/%s/cgroup", strconv.Itoa(int(pid)))
return getContainerID(cgroupPath)
}
func GetImage(containerId string) string {
if containerId == "" {
return ""
}
image, ok := containerImage[containerId]
if ok {
return image
} else {
return ""
}
}
func getContainerID(cgroupPath string) string {
containerID := ""
content, err := ioutil.ReadFile(cgroupPath)
if err != nil {
return containerID
}
lines := strings.Split(string(content), "\n")
for _, line := range lines {
field := strings.Split(line, ":")
if len(field) < 3 {
continue
}
cgroup_path := field[2]
if len(cgroup_path) < 64 {
continue
}
// Non-systemd Docker
//5:net_prio,net_cls:/docker/de630f22746b9c06c412858f26ca286c6cdfed086d3b302998aa403d9dcedc42
//3:net_cls:/kubepods/burstable/pod5f399c1a-f9fc-11e8-bf65-246e9659ebfc/9170559b8aadd07d99978d9460cf8d1c71552f3c64fefc7e9906ab3fb7e18f69
pos := strings.LastIndex(cgroup_path, "/")
if pos > 0 {
id_len := len(cgroup_path) - pos - 1
if id_len == 64 {
//p.InDocker = true
// docker id
containerID = cgroup_path[pos+1 : pos+1+64]
// logs.Debug("pid:%v in docker id:%v", pid, id)
return containerID
}
}
// systemd Docker
//5:net_cls:/system.slice/docker-afd862d2ed48ef5dc0ce8f1863e4475894e331098c9a512789233ca9ca06fc62.scope
docker_str := "docker-"
pos = strings.Index(cgroup_path, docker_str)
if pos > 0 {
pos_scope := strings.Index(cgroup_path, ".scope")
id_len := pos_scope - pos - len(docker_str)
if pos_scope > 0 && id_len == 64 {
containerID = cgroup_path[pos+len(docker_str) : pos+len(docker_str)+64]
return containerID
}
}
}
return containerID
}
Based on Dan Walsh's comment about using SELinux (ps -eZ | grep container_t), but without requiring ps to be installed:
$ podman run --rm fedora:31 cat /proc/1/attr/current
system_u:system_r:container_t:s0:c56,c299
$ podman run --rm alpine cat /proc/1/attr/current
system_u:system_r:container_t:s0:c558,c813
$ docker run --rm fedora:31 cat /proc/1/attr/current
system_u:system_r:container_t:s0:c8,c583
$ cat /proc/1/attr/current
system_u:system_r:init_t:s0
This just tells you that you're running in a container, but not under which runtime.
I didn't check other container runtimes, but https://opensource.com/article/18/2/understanding-selinux-labels-container-runtimes provides more info and suggests this is widely used; it might also work for rkt and lxc.
What works for me, as long as I know which system the programs/scripts will be running on, is confirming whether what's running with PID 1 is systemd (or an equivalent init). If not, that's a container.
And this should be true for any Linux container, not only Docker.
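A sketch of that check (assumes a procps-style ps is available):
# Treat the machine as a container when PID 1 is not systemd (or classic init)
case "$(ps -o comm= -p 1)" in
    systemd|init) echo "likely a full host" ;;
    *)            echo "likely a container" ;;
esac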
I needed this capability in 2022 on macOS, and only the answer by @at0S still works out of all the other options.
/proc/1/cgroup only has the root directory in a container, unless configured otherwise.
/proc/1/sched showed the same in-container process number. The name was different (bash), but that's not very portable.
Environment variables work if you configure your container yourself, but none of the default environment variables helped.
I did find an option not listed in the other answers: /proc/1/mounts includes an overlay filesystem with "docker" in its path.
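A sketch based on that last observation (the exact path layout is an assumption and varies by storage driver):
# Succeeds when the root filesystem is an overlay whose backing dirs mention docker
grep -q '^overlay / .*docker' /proc/1/mounts && echo "probably inside a Docker container"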
I'm trying to insert several JSON files into MongoDB collections using a shell script, as follows:
#!/bin/bash
NUM=50000
for ((i=0;i<NUM;i++))
do
mongoimport --host localhost --port 27018 -u 'admin' -p 'password' --authenticationDatabase 'admin' -d random_test -c tri_${i} /home/test/json_files/json_${i}.csv --jsonArray
done
After several successful imports, these errors were shown on the terminal:
Failed: connection(localhost:27017[-3]), incomplete read of message header: EOF
error connecting to host: could not connect to server:
server selection error: server selection timeout,
current topology: { Type: Single, Servers:
[{ Addr: localhost:27017, Type: Unknown,
State: Connected, Average RTT: 0, Last error: connection() :
dial tcp [::1]:27017: connect: connection refused }, ] }
And below are the error messages from mongo.log, which say "too many open files". Can I somehow limit the thread number, or what should I do to fix it? Thanks a lot!
2020-07-21T11:13:33.613+0200 E STORAGE [conn971] WiredTiger error (24) [1595322813:613873][53971:0x7f7c8d228700], WT_SESSION.create: __posix_directory_sync, 151: /home/mongodb/bin/data/db/index-969--7295385362343345274.wt: directory-sync: Too many open files Raw: [1595322813:613873][53971:0x7f7c8d228700], WT_SESSION.create: __posix_directory_sync, 151: /home/mongodb/bin/data/db/index-969--7295385362343345274.wt: directory-sync: Too many open files
2020-07-21T11:13:33.613+0200 E STORAGE [conn971] WiredTiger error (-31804) [1595322813:613892][53971:0x7f7c8d228700], WT_SESSION.create: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic Raw: [1595322813:613892][53971:0x7f7c8d228700], WT_SESSION.create: __wt_panic, 490: the process must exit and restart: WT_PANIC: WiredTiger library panic
2020-07-21T11:13:33.613+0200 F - [conn971] Fatal Assertion 50853 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 414
2020-07-21T11:13:33.613+0200 F - [conn971]
***aborting after fassert() failure
Checking the open file limit with ulimit -n shows 1024. I then tried to raise the limit with ulimit -n 50000, but the account that I use on the remote server doesn't have permission to do that. Can I somehow close each file once its import is done, or is there any other way to raise the open file limit without needing root permission? Thanks a lot!
Env: Redhat, mongoDB
You can't. The reason resource limits exist is to limit how many resources non-privileged users (which yours is) can consume. You need to reconfigure the system to adjust this, which requires root privileges.
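For reference, a sketch of what an administrator with root access would typically add (assuming PAM limits are in effect; the user name and values are illustrative):
# /etc/security/limits.conf -- raise the open-file limit for the user running mongod
mongodb soft nofile 64000
mongodb hard nofile 64000
A re-login (or service restart) is required before the new limit takes effect.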
Following my previous question, which got closed: basically, I have a script that checks the availability of packages on target servers; the target servers and the packages are stored in arrays.
declare -a prog=("gdebi" "firefox" "chromium-browser" "thunar")
declare -a snap=("beer2" "beer3")
# checkvar=$(
for f in "${prog[#]}"; do
for connect in "${snap[#]}"; do
ssh lg#"$connect" /bin/bash <<- EOF
if dpkg --get-selections | grep -qE "(^|\s)"$f"(\$|\s)"; then
status="[INSTALLED] [$connect]"
else
status=""
fi
printf '%s %s\n' "$f" "\$status"
EOF
done
done
With the help of fellow members here, I've made several fixes to the original script, and it ran pretty well, except for one problem: the output contains duplicate entries.
gdebi [INSTALLED] [beer2]
gdebi
firefox [INSTALLED] [beer2]
firefox [INSTALLED] [beer3]
chromium-browser [INSTALLED] [beer2]
chromium-browser [INSTALLED] [beer3]
thunar
thunar
I know this is normal behavior: since I pass multiple servers from the snap array, ssh visits both servers.
Considering that the script checks the same packages on both servers, I want the output to be merged.
If beer2 has the firefox package but beer3 doesn't:
firefox [INSTALLED] [beer2]
If beer3 has the firefox package but beer2 doesn't:
firefox [INSTALLED] [beer3]
If both beer2 and beer3 have the package:
firefox [INSTALLED] [beer2, beer3]
or
firefox [INSTALLED] [beer2] [beer3]
If neither beer2 nor beer3 has the package, it will return without extra parameters:
firefox
Sounds like an easy task, but for the love of god I can't figure out how to achieve this. Here's a list of things I have tried:
Manipulating the for loops.
Putting a return value after one successful loop (exit code).
Nested ifs.
None of the above seems to work. I haven't tried changing/manipulating the returned string, as I'm not really experienced with text processing such as awk, sed, tr, and many others.
Can anyone show how it's done? It would really mean the world to me.
Pure Bash 4+ solution using an associative array to store the hosts each program is installed on:
#!/usr/bin/env bash
declare -A hosts_with_package=(["gdebi"]="" ["firefox"]="" ["chromium-browser"]="" ["thunar"]="")
declare -a hosts=("beer2" "beer3")
# Collect installed status
# Iterate all hosts
for host in "${hosts[#]}"; do
# Read the output of dpkg --get-selections with searched packages
while IFS=$' \t' read -r package status; do
# Test whether package is installed on host
if [ "$status" = "install" ]; then
# If no host listed for package, create first entry
if [ -z "${hosts_with_package[$package]}" ]; then
# Record the first host having the package installed
hosts_with_package["$package"]="$host"
else
# Additional hosts are concatenated as CSV
hosts_with_package["$package"]="${hosts_with_package[$package]}, $host"
fi
fi
# Feed the whole loop with the output of the dpkg --get-selections for packages names
# Packages names are the index of the hosts_with_package array
done < <(ssh "lg@$host" dpkg --get-selections "${!hosts_with_package[@]}")
done
# Output results
# Iterate the package name keys
for package in "${!hosts_with_package[@]}"; do
# Print package name without newline
printf '%s' "$package"
# If package is installed on some hosts
if [ -n "${hosts_with_package[$package]}" ]; then
# Continue the line with installed hosts
printf ' [INSTALLED] [%s]' "${hosts_with_package[$package]}"
fi
# End with a newline
echo
done
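Hypothetical output, assuming gdebi is installed only on beer2 and firefox on both hosts (associative-array iteration order is unspecified, so the line order may differ):
chromium-browser
firefox [INSTALLED] [beer2, beer3]
gdebi [INSTALLED] [beer2]
thunar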
Instead of making several ssh connections in nested loops, consider this change:
prog=( mysql-server apache2 php ufw )
snap=( localhost )
for connect in ${snap[@]}; do
ssh $connect "
progs=( ${prog[@]} )
for prog in \${progs[@]}; do
dpkg -l | grep -q \$prog && echo \"\$prog [INSTALLED]\" || echo \"\$prog\"
done
"
done
Based on @Ivan's answer:
#!/bin/bash
prog=( "gdebi" "firefox" "chromium-browser" "thunar" )
snap=( "beer2" "beer3" )
# First, retrieve the list on installed program for each host
for connect in ${snap[@]}; do
ssh lg@"$connect" /bin/bash >/tmp/installed.${connect} <<- EOF
progs=( "${prog[@]}" )
for prog in \${progs[@]}; do
dpkg --get-selections | awk -v pkg=\$prog '\$1 == pkg && \$NF ~ /install/ {print \$1}'
done
EOF
done
# Filter the previous results to format the output as you need
awk '{
f = FILENAME;
gsub(/.*\./,"",f);
a[$1] = a[$1] "," f
}
END {
for (i in a)
print i ":[" substr(a[i],2) "]"
}' /tmp/installed.*
rm /tmp/installed.*
Example of output :
# With prog=( bash cat sed tail something firefox-esr )
firefox-esr:[localhost]
bash:[localhost,localhost2]
sed:[localhost,localhost2]
I have put together a command for counting established port connections using the Nagios check_by_ssh plugin.
I am able to get the output when I run the command; however, after placing the command in the commands.cfg file, I see "check_by_ssh: skip-stderr argument must be an integer" in the GUI. Any suggestion on this would be of great help.
Command:
/usr/local/nagios/libexec/check_by_ssh -l fuseadmin -H <hostname> -C "netstat -punta | grep -i ESTABLISHED | wc -l | awk '{if (\$0>2500) {print \"CRITICAL: Established Socket Count: \"\$0} else {print \"OK: Established Socket Count: \"\$0}}'" -i ~/.ssh/id_dsa -E
OK: Established Socket Count: 67
Commands.cfg:
define command {
command_name netstat_cnt_estanblished_gt_2500_fuse01
command_line /usr/local/nagios/libexec/check_by_ssh -l fuseadmin -H a0110pcsgesb01 -C "netstat -punta | grep -i ESTABLISHED | wc -l 2>&1 | awk '{if (\$0>2500) {print \"CRITICAL: Established Socket Count: \"\$0} else {print \"OK: Established Socket Count: \"\$0}}'" -i ~/.ssh/id_dsa -E
}
Service Definition
#netstat_cnt_estanblished_gt_2500_csg2.0
define service{
use generic-service ; Name of service template to use
host_name <hostname>
service_description Netstat Established Count
event_handler send-service-trap-fms
event_handler_enabled 1
check_command netstat_cnt_estanblished_gt_2500_fuse01
max_check_attempts 1
notifications_enabled 1 ; Service notifications are enabled
check_period 24x7 ; The service can be checked at any time of the day
max_check_attempts 3 ; Re-check the service up to 3 times in order to determine its final (hard) state
check_interval 2 ; Check the service every 10 minutes under normal conditions
retry_interval 2 ; Re-check the service every two minutes until a hard state can be determined
contact_groups fuse_users ; Notifications get sent out to everyone in the 'admins' group
notification_options w,u,c,r ; Send notifications about warning, unknown, critical, and recovery events
notification_interval 30 ; Re-notify about service problems every hour
notification_period 24x7
}
**I have changed the actual hostnames due to compliance.
Here it says:
check_by_ssh: print command output in verbose mode
right now it is not possible to print the command output of ssh. check_by_ssh
only prints the command itself. This patch adds printing the output too. This
makes it possible to use ssh with verbose logging, which helps debugging any
connection, key, or other ssh problems.
Note: you must use -E,--skip-stderr=<high number>, otherwise check_by_ssh would
always exit with unknown state.
Example:
./check_by_ssh -H localhost -o LogLevel=DEBUG3 -C "sleep 1" -E 999 -v
Meaning: you should just have to specify a number after "-E", like -E 999, in your definition (as the example in the above code block shows).
... even though it's confusing (maybe a bug?), because the command help of check_by_ssh says:
-E, --skip-stderr[=n]
Ignore all or (if specified) first n lines on STDERR [optional]
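Applied to the definition above, the fix is just a number after -E (a sketch; 255 is an arbitrary choice, and the -C argument is unchanged from your original):
command_line /usr/local/nagios/libexec/check_by_ssh -l fuseadmin -H a0110pcsgesb01 -C "netstat -punta | ..." -i ~/.ssh/id_dsa -E 255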