How many static pods exist in this cluster in all namespaces?
-- Run a regular kubectl get pods --all-namespaces and look for pods whose names end with a node name (for example -controlplane or -master); static pods get the node name appended to the pod name.
master $ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-bbm7q 1/1 Running 0 2m1s
kube-system coredns-66bff467f8-hghqk 1/1 Running 0 2m1s
kube-system etcd-controlplane 1/1 Running 0 2m11s
kube-system katacoda-cloud-provider-58f89f7d9-kx9ts 1/1 Running 0 2m
kube-system kube-apiserver-controlplane 1/1 Running 0 2m11s
kube-system kube-controller-manager-controlplane 1/1 Running 0 2m11s
kube-system kube-flannel-ds-amd64-nhhj9 1/1 Running 0 2m2s
kube-system kube-flannel-ds-amd64-xfjzs 1/1 Running 0 109s
kube-system kube-keepalived-vip-fnqhp 1/1 Running 0 68s
kube-system kube-proxy-ml2d6 1/1 Running 0 109s
kube-system kube-proxy-n5zg4 1/1 Running 0 2m2s
kube-system kube-scheduler-controlplane 1/1 Running 0 2m11s
The four pods with -controlplane appended (etcd-controlplane, kube-apiserver-controlplane, kube-controller-manager-controlplane and kube-scheduler-controlplane) carry the node name, so they are the static pods here. The answer is 4.
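If you want to verify rather than rely on the naming convention, the kubelet creates a mirror pod for each static pod, and that mirror pod is owned by its Node object. A rough sketch (not part of the lab solution) that prints each pod's first owner kind next to its name and keeps only the Node-owned ones:
master $ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.ownerReferences[0].kind}{"\t"}{.metadata.name}{"\n"}{end}' | grep '^Node'
Pods created by controllers show ReplicaSet or DaemonSet as the owner instead, and standalone pods show nothing.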
Which of the below components is NOT deployed as a static POD?
- Look for the pods that do not have the node name appended to their name; those (coredns, kube-proxy, kube-flannel and so on) are created by Deployments or DaemonSets, not as static pods.
--kubectl get pods --all-namespaces
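Another quick cross-check: list the Deployments and DaemonSets in kube-system. Any component that shows up there is managed by a controller and therefore is not a static pod.
master $ kubectl -n kube-system get deployments,daemonsets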
On what nodes are the static pods created?
master $ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default static-busybox-master 1/1 Running 0 5m17s 10.244.0.4 master <none> <none>
kube-system coredns-66bff467f8-bjmd8 1/1 Running 0 8m52s 10.244.0.2 master <none> <none>
kube-system coredns-66bff467f8-rjjsf 1/1 Running 0 8m52s 10.244.0.3 master <none> <none>
kube-system etcd-master 1/1 Running 0 9m 172.17.0.8 master <none> <none>
kube-system katacoda-cloud-provider-58f89f7d9-htt9s 1/1 Running 5 8m51s 10.244.1.2 node01 <none> <none>
kube-system kube-apiserver-master 1/1 Running 0 9m 172.17.0.8 master <none> <none>
kube-system kube-controller-manager-master 1/1 Running 0 9m 172.17.0.8 master <none> <none>
kube-system kube-flannel-ds-amd64-5fkpt 1/1 Running 0 8m44s 172.17.0.9 node01 <none> <none>
kube-system kube-flannel-ds-amd64-gj5lc 1/1 Running 0 8m52s 172.17.0.8 master <none> <none>
kube-system kube-keepalived-vip-ckcx6 1/1 Running 0 8m13s 172.17.0.9 node01 <none> <none>
kube-system kube-proxy-wklpl 1/1 Running 0 8m52s 172.17.0.8 master <none> <none>
kube-system kube-proxy-xj8nk 1/1 Running 0 8m44s 172.17.0.9 node01 <none> <none>
kube-system kube-scheduler-master 1/1 Running 0 9m 172.17.0.8 master <none> <none>
From the NODE column above, all the static pods (the ones with -master appended: static-busybox-master, etcd-master, kube-apiserver-master, kube-controller-manager-master and kube-scheduler-master) run on the master node.
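On a larger cluster it can be easier to filter by node than to scan the whole listing. For example, to show only the pods scheduled on the master node (same information, just pre-filtered):
master $ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=master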
How many pod definition files are present in the manifests folder?
One way to find out is to look at the kubelet process, i.e.
ps -ef | grep kubelet, and note the --config flag, which points to /var/lib/kubelet/config.yaml.
If we then search that YAML file for the static pod path, we get the staticPodPath, for example:
grep -i "static" /var/lib/kubelet/config.yaml
- The relevant line will look something like this:
staticPodPath: /etc/kubernetes/manifests
Now go to that folder and count the files:
master $ cd /etc/kubernetes/manifests/
master $ ls -ltr
total 20
-rw------- 1 root root 3366 Sep 16 01:54 kube-apiserver.yaml
-rw------- 1 root root 1120 Sep 16 01:54 kube-scheduler.yaml
-rw------- 1 root root 3231 Sep 16 01:54 kube-controller-manager.yaml
-rw------- 1 root root 1832 Sep 16 01:54 etcd.yaml
-rw-r--r-- 1 root root 298 Sep 16 01:59 static-busybox.yaml
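If you only need the count, piping the listing through wc -l gives it in one step. Note that this particular listing already contains static-busybox.yaml from a later step; the four default control-plane manifests are etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml and kube-scheduler.yaml.
master $ ls /etc/kubernetes/manifests/ | wc -l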
What is the docker image used to deploy the kube-api server as a static pod?
master $ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep image
image: k8s.gcr.io/kube-apiserver:v1.18.0
imagePullPolicy: IfNotPresent
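The same answer can also be read from the running mirror pod instead of the manifest; the pod name below assumes the master-suffixed name from the earlier -o wide listing:
master $ kubectl -n kube-system get pod kube-apiserver-master -o jsonpath='{.spec.containers[0].image}'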
Create a static pod named static-busybox that uses the busybox image and the command sleep 1000
kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
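For reference, the manifest produced by the --dry-run=client step looks roughly like this (empty defaults trimmed; the exact output can vary slightly between kubectl versions):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - command:
    - sleep
    - "1000"
    image: busybox
    name: static-busybox
  restartPolicy: Never
The kubelet watches the manifests folder, so once the file lands there it creates the pod on its own; kubectl get pods will then show it with the node name appended (static-busybox-master in the earlier listing).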
We just created a new static pod named static-greenbox. Find it and delete it.
master $ kubectl get nodes node01 -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node01 Ready <none> 32m v1.18.0 172.17.0.11 <none> Ubuntu 18.04.4 LTS 4.15.0-109-generic docker://19.3.6
master $ ssh 172.17.0.11
node01 $ ps -ef | grep kubelet | grep -i "config"
root 2080 1 2 03:11 ? 00:00:02 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf
node01 $ grep -i static /var/lib/kubelet/config.yaml
staticPodPath: /etc/just-to-mess-with-you
node01 $ cd /etc/just-to-mess-with-you/
node01 $ ls -ltr
total 4
-rw-r--r-- 1 root root 301 Sep 17 03:11 greenbox.yaml
node01 $ rm greenbox.yaml
Now exit out of node01 and run kubectl get pods on the master node; the static-greenbox pod will no longer be listed.
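A quick way to confirm from the master node (an empty result means the kubelet has removed the mirror pod):
master $ kubectl get pods --all-namespaces | grep -i greenbox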