Monday, September 28, 2020

Rolling Updates and Rollbacks

1. We have deployed a simple web application. Inspect the PODs and the Services. Wait for the application to fully deploy, then view the application using the link above your terminal.

master $ kubectl get deployment

NAME       READY   UP-TO-DATE   AVAILABLE   AGE

frontend   4/4     4            4           94s

master $ kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE

frontend-6bb4f9cdc8-hc8ws   1/1     Running   0          107s

frontend-6bb4f9cdc8-hjnmm   1/1     Running   0          107s

frontend-6bb4f9cdc8-nw9nb   1/1     Running   0          107s

frontend-6bb4f9cdc8-zb5v2   1/1     Running   0          107s

master $ kubectl get services

NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE

kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP          16m

webapp-service   NodePort    10.98.32.239   <none>        8080:30080/TCP   111s


2. Run the script named curl-test.sh to send multiple requests to test the web application. Take a note of the output. Execute the script at /root/curl-test.sh.


master $ ls -ltr

total 8

drwxr-xr-x 4 root root 4096 Jul  8 08:30 go

-rwxr-xr-x 1 root root  215 Sep 29 02:45 curl-test.sh

master $ vi curl-test.sh

master $ sh -x curl-test.sh

+ kubectl exec --namespace=kube-public curl -- sh -c test=`wget -qO- -T 2  http://webapp-service.default.svc.cluster.local:8080/info 2>&1` && echo "$test OK" || echo "Failed"

Hello, Application Version: v1 ; Color: blue OK

+ echo


master $
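
Based on the -x trace above, curl-test.sh roughly does the following (a sketch; the loop and its count are assumptions, only the kubectl exec line comes from the trace):

for i in 1 2 3 4 5; do
    kubectl exec --namespace=kube-public curl -- sh -c 'test=`wget -qO- -T 2 http://webapp-service.default.svc.cluster.local:8080/info 2>&1` && echo "$test OK" || echo "Failed"'
    echo
done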


3. Inspect the deployment and identify the number of PODs deployed by it

--> master $ kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE

frontend-6bb4f9cdc8-hc8ws   1/1     Running   0          107s

frontend-6bb4f9cdc8-hjnmm   1/1     Running   0          107s

frontend-6bb4f9cdc8-nw9nb   1/1     Running   0          107s

frontend-6bb4f9cdc8-zb5v2   1/1     Running   0          107s


4. What container image is used to deploy the applications?

master $ kubectl describe deployment frontend | grep -i image

    Image:        kodekloud/webapp-color:v1


5. Inspect the deployment and identify the current strategy

--master $ kubectl describe deployment frontend | grep -i strategy

StrategyType:           RollingUpdate

RollingUpdateStrategy:  25% max unavailable, 25% max surge


6. If you were to upgrade the application now, what would happen?

- Since the deployment uses the RollingUpdate strategy, a few PODs would be taken down and a few new ones brought up at a time (per the 25% max unavailable / 25% max surge settings), so the application stays available during the upgrade.



7. Let us try that. Upgrade the application by setting the image on the deployment to 'kodekloud/webapp-color:v2'. Do not delete and re-create the deployment; only set the new image name for the existing deployment.

Hint: Deployment Name: frontend; Deployment Image: kodekloud/webapp-color:v2


master $ kubectl set image deployment/frontend simple-webapp=kodekloud/webapp-color:v2

deployment.apps/frontend image updated
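
To watch the rolling update as it happens, one could additionally run, for example:

master $ kubectl rollout status deployment/frontend
master $ kubectl get replicasets

The old ReplicaSet scales down as the new one scales up, within the 25% max unavailable / 25% max surge limits.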

8. Run the script curl-test.sh again. Notice the requests now hit both the old and newer versions. However, none of them fail. Execute the script at /root/curl-test.sh.

master $ sh -x curl-test.sh

+ kubectl exec --namespace=kube-public curl -- sh -c test=`wget -qO- -T 2  http://webapp-service.default.svc.cluster.local:8080/info 2>&1` && echo "$test OK" || echo "Failed"

Hello, Application Version: v2 ; Color: green OK

+ echo

9. Up to how many PODs can be down for upgrade at a time? Consider the current strategy settings and the number of PODs (4). Hint: Look at the Max Unavailable value under RollingUpdateStrategy in the deployment details.


master $ kubectl describe deployment frontend | grep -i strategy

StrategyType:           RollingUpdate

RollingUpdateStrategy:  25% max unavailable, 25% max surge


Right now there are 4 pods and 25% of 4 is 1, so the answer is 1.


10. Upgrade the application by setting the image on the deployment to 'kodekloud/webapp-color:v3'. Do not delete and re-create the deployment; only set the new image name for the existing deployment.

master $ kubectl set image deployment/frontend simple-webapp=kodekloud/webapp-color:v3

deployment.apps/frontend image updated


11. Run the script curl-test.sh again. Notice the failures. Wait for the new application to be ready. Notice that the requests now do not hit both the versions. Execute the script at /root/curl-test.sh.
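
Although the transcript does not capture it, this lab is also about rollbacks; reverting the deployment to its previous version would look roughly like this (a sketch, assuming the same deployment name frontend):

master $ kubectl rollout history deployment/frontend
master $ kubectl rollout undo deployment/frontend

kubectl rollout undo scales the previous ReplicaSet back up, returning the deployment to the earlier image.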

Thursday, September 24, 2020

Monitor Application Logs - Kubernetes

 A user - 'USER5' - has expressed concerns accessing the application. Identify the cause of the issue. Inspect the logs of the POD

master $ kubectl get pods

NAME       READY   STATUS              RESTARTS   AGE

webapp-1   0/1     ContainerCreating   0          12s

master $ kubectl get pods

NAME       READY   STATUS    RESTARTS   AGE

webapp-1   1/1     Running   0          93s

master $ kubectl logs -f webapp-1

[2020-09-25 02:59:05,519] INFO in event-simulator: USER4 logged out

[2020-09-25 02:59:06,520] INFO in event-simulator: USER2 is viewing page2

[2020-09-25 02:59:07,522] INFO in event-simulator: USER4 is viewing page1

[2020-09-25 02:59:08,522] INFO in event-simulator: USER4 logged in

[2020-09-25 02:59:09,524] INFO in event-simulator: USER1 logged in

[2020-09-25 02:59:10,525] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILEDATTEMPTS.

The logs clearly indicate that USER5 cannot log in because the account is locked due to many failed login attempts.

A user is reporting issues while trying to purchase an item. Identify the user and the cause of the issue.

Inspect the logs of the webapp in the POD

Let's first try the same thing we tried above, i.e. kubectl logs -f <podName>:

master $ kubectl logs -f webapp-2

error: a container name must be specified for pod webapp-2, choose one of: [simple-webapp db]

Doing so gives an error because the pod has multiple containers, so we need to specify the container name.

First run kubectl describe pod <podName>, look at the containers it runs, and then view the logs of the relevant container:

kubectl logs -f <podName> <containerName>
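
In this case the error message already tells us the containers are simple-webapp and db, so a likely next step (which container holds the relevant logs is an assumption) is:

master $ kubectl logs -f webapp-2 simple-webapp

Then look for WARNING/ERROR entries about a failed purchase to identify the affected user.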


Monitor Cluster Components - Kubernetes

 Let us deploy metrics-server to monitor the PODs and Nodes. Pull the git repository for the deployment files. https://github.com/kodekloudhub/kubernetes-metrics-server.git

-master $ git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git

Cloning into 'kubernetes-metrics-server'...

remote: Enumerating objects: 3, done.

remote: Counting objects: 100% (3/3), done.

remote: Compressing objects: 100% (3/3), done.

remote: Total 15 (delta 0), reused 0 (delta 0), pack-reused 12

Unpacking objects: 100% (15/15), done.

master $

master $

master $ ls -ltr

total 8

drwxr-xr-x 4 root root 4096 Jul  8 08:30 go

drwxr-xr-x 3 root root 4096 Sep 25 02:36 kubernetes-metrics-server


Deploy the metrics-server by creating all the components downloaded. Run the 'kubectl create -f .' command from within the downloaded repository.

master $ cd kubernetes-metrics-server/

master $ ls -ltr

total 32

-rw-r--r-- 1 root root 612 Sep 25 02:36 resource-reader.yaml

-rw-r--r-- 1 root root 219 Sep 25 02:36 README.md

-rw-r--r-- 1 root root 249 Sep 25 02:36 metrics-server-service.yaml

-rw-r--r-- 1 root root 976 Sep 25 02:36 metrics-server-deployment.yaml

-rw-r--r-- 1 root root 298 Sep 25 02:36 metrics-apiservice.yaml

-rw-r--r-- 1 root root 329 Sep 25 02:36 auth-reader.yaml

-rw-r--r-- 1 root root 308 Sep 25 02:36 auth-delegator.yaml

-rw-r--r-- 1 root root 384 Sep 25 02:36 aggregated-metrics-reader.yaml

master $ kubectl create -f resource-reader.yaml

clusterrole.rbac.authorization.k8s.io/system:metrics-server created

clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

master $ kubectl create -f metrics-server-service.yaml

service/metrics-server created

master $ kubectl create -f metrics-server-deployment.yaml

serviceaccount/metrics-server created

deployment.apps/metrics-server created

master $ kubectl create -f metrics-apiservice.yaml

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

master $ kubectl create -f auth-reader.yaml

rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created

master $ kubectl create -f auth-delegator.yaml

clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created

master $ kubectl create -f aggregated-metrics-reader.yaml

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created

Instead of going over each individual file and executing kubectl create -f <filename>, running kubectl create -f . from within the directory works as well!


It takes a few minutes for the metrics server to start gathering data.

Run the 'kubectl top node' command and wait for a valid output.

master $ kubectl top node

NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%

master   109m         5%     938Mi           49%

node01   2000m        100%   563Mi           14%


Identify the node that consumes the most CPU.

--  From the above command itself, the answer is node01.


Identify the node that consumes the most Memory.

-- From the above command (kubectl top node), the answer is master.



Identify the POD that consumes the most Memory.

master $ kubectl top pod

NAME       CPU(cores)   MEMORY(bytes)

elephant   13m          50Mi

lion       956m         1Mi

rabbit     976m         1Mi

--- The answer is elephant


Identify the POD that consumes the most CPU.

--The answer is rabbit
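
A quicker way to answer both POD questions, assuming a kubectl version where kubectl top supports sorting, is to sort the output directly:

master $ kubectl top pod --sort-by=memory
master $ kubectl top pod --sort-by=cpu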



Multiple schedulers - Kubernetes

 1. What is the name of the POD that deploys the default kubernetes scheduler in this environment?

-master $ kubectl get pods --namespace=kube-system

NAME                                      READY   STATUS             RESTARTS   AGE

coredns-66bff467f8-dbnft                  1/1     Running            0          43m

coredns-66bff467f8-gq8nw                  1/1     Running            0          43m

etcd-master                               1/1     Running            0          43m

katacoda-cloud-provider-58f89f7d9-t978s   0/1     CrashLoopBackOff   13         43m

kube-apiserver-master                     1/1     Running            0          43m

kube-controller-manager-master            1/1     Running            0          43m

kube-flannel-ds-amd64-4xjbh               1/1     Running            0          43m

kube-flannel-ds-amd64-xs95v               1/1     Running            1          42m

kube-keepalived-vip-bchtm                 1/1     Running            0          42m

kube-proxy-4qblt                          1/1     Running            0          42m

kube-proxy-dswp9                          1/1     Running            0          43m

kube-scheduler-master                     1/1     Running            0          43m


Based on this output, the answer is kube-scheduler-master.


2. What is the image used to deploy the kubernetes scheduler? Inspect the kubernetes scheduler pod and identify the image.

master $ kubectl describe pod --namespace=kube-system kube-scheduler-master | grep -i image

    Image:         k8s.gcr.io/kube-scheduler:v1.18.0

    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler@sha256:33063bc856e99d12b9cb30aab1c1c755ecd458d5bd130270da7c51c70ca10cf6


3. Deploy an additional scheduler to the cluster following the given specification.

Use the manifest file used by the kubeadm tool. Use a different port than the one used by the current scheduler.


master $ vi /etc/kubernetes/manifests/kube-scheduler.yaml

master $ vi /var/answers/my-scheduler.yaml

master $ cp /etc/kubernetes/manifests/kube-scheduler.yaml my-scheduler.yaml

master $ vi my-scheduler.yaml

Make a few changes, such as the pod name, the scheduler's port, and (if required by the spec) the scheduler name and leader-election flag, then save it.

master $ vi my-scheduler.yaml

master $ vi my-scheduler.yaml

master $ kubectl apply -f my-scheduler.yaml
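
For reference, a minimal sketch of what the edited my-scheduler.yaml might contain (the scheduler name my-custom-scheduler, the port 10282, and the exact set of flags are assumptions; the rest mirrors the default kube-scheduler manifest):

apiVersion: v1
kind: Pod
metadata:
  name: my-custom-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.18.0
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --scheduler-name=my-custom-scheduler
    - --port=10282
    - --secure-port=0
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.conf
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate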



4. A pod definition file is given. Use it to create a POD with the new custom scheduler. The file is located at /root/nginx-pod.yaml.

-- Open the yaml file and, under the spec section, add schedulerName: my-custom-scheduler (this must match the name of the custom scheduler deployed above),

and then run kubectl apply -f /root/nginx-pod.yaml
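
A sketch of what the edited pod definition might look like (the nginx image is inferred from the file name; only the schedulerName line is the required change):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-custom-scheduler
  containers:
  - name: nginx
    image: nginx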







Wednesday, September 16, 2020

Static pods

 How many static pods exist in this cluster in all namespaces?

-- Execute the regular kubectl get pods --all-namespaces and look for pods whose names carry the node-name suffix (here the node is controlplane, so static pod names end in -controlplane).

master $ kubectl get pods --all-namespaces

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE

kube-system   coredns-66bff467f8-bbm7q                  1/1     Running   0          2m1s

kube-system   coredns-66bff467f8-hghqk                  1/1     Running   0          2m1s

kube-system   etcd-controlplane                         1/1     Running   0          2m11s

kube-system   katacoda-cloud-provider-58f89f7d9-kx9ts   1/1     Running   0          2m

kube-system   kube-apiserver-controlplane               1/1     Running   0          2m11s

kube-system   kube-controller-manager-controlplane      1/1     Running   0          2m11s

kube-system   kube-flannel-ds-amd64-nhhj9               1/1     Running   0          2m2s

kube-system   kube-flannel-ds-amd64-xfjzs               1/1     Running   0          109s

kube-system   kube-keepalived-vip-fnqhp                 1/1     Running   0          68s

kube-system   kube-proxy-ml2d6                          1/1     Running   0          109s

kube-system   kube-proxy-n5zg4                          1/1     Running   0          2m2s

kube-system   kube-scheduler-controlplane               1/1     Running   0          2m11s

The static pods here are etcd-controlplane, kube-apiserver-controlplane, kube-controller-manager-controlplane and kube-scheduler-controlplane (each carries the -controlplane node suffix), so the answer is 4.
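
Another way to confirm this (static pods show up in the API as mirror pods owned by the Node object) is to list each pod's owner kind, for example:

master $ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[0].kind}{"\n"}{end}' | grep Node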


Which of the below components is NOT deployed as a static POD?

- Look for the pods that do not have the node-name suffix (e.g. -controlplane) appended to their name:

--kubectl get pods --all-namespaces



On what nodes are the static pods created?

master $ kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE  NOMINATED NODE   READINESS GATES

default       static-busybox-master                     1/1     Running   0          5m17s   10.244.0.4   master  <none>           <none>

kube-system   coredns-66bff467f8-bjmd8                  1/1     Running   0          8m52s   10.244.0.2   master  <none>           <none>

kube-system   coredns-66bff467f8-rjjsf                  1/1     Running   0          8m52s   10.244.0.3   master  <none>           <none>

kube-system   etcd-master                               1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

kube-system   katacoda-cloud-provider-58f89f7d9-htt9s   1/1     Running   5          8m51s   10.244.1.2   node01  <none>           <none>

kube-system   kube-apiserver-master                     1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

kube-system   kube-controller-manager-master            1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

kube-system   kube-flannel-ds-amd64-5fkpt               1/1     Running   0          8m44s   172.17.0.9   node01  <none>           <none>

kube-system   kube-flannel-ds-amd64-gj5lc               1/1     Running   0          8m52s   172.17.0.8   master  <none>           <none>

kube-system   kube-keepalived-vip-ckcx6                 1/1     Running   0          8m13s   172.17.0.9   node01  <none>           <none>

kube-system   kube-proxy-wklpl                          1/1     Running   0          8m52s   172.17.0.8   master  <none>           <none>

kube-system   kube-proxy-xj8nk                          1/1     Running   0          8m44s   172.17.0.9   node01  <none>           <none>

kube-system   kube-scheduler-master                     1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

From above, all static pods are created on master node.



How many pod definition files are present in the manifests folder?

One way to do this is to look at the kubelet process, i.e.

ps -ef | grep kubelet, and look for its --config file, which here is /var/lib/kubelet/config.yaml.

Now if we search for the static pod path in this config file, we get the staticPodPath,

for example:

grep -i "static" /var/lib/kubelet/config.yaml

- The solution would be something like this:

staticPodPath: /etc/kubernetes/manifests

Now go to that folder and see how many pod definition files are there:

master $ cd /etc/kubernetes/manifests/

master $ ls -ltr

total 20

-rw------- 1 root root 3366 Sep 16 01:54 kube-apiserver.yaml

-rw------- 1 root root 1120 Sep 16 01:54 kube-scheduler.yaml

-rw------- 1 root root 3231 Sep 16 01:54 kube-controller-manager.yaml

-rw------- 1 root root 1832 Sep 16 01:54 etcd.yaml

-rw-r--r-- 1 root root  298 Sep 16 01:59 static-busybox.yaml



What is the docker image used to deploy the kube-api server as a static pod?

master $ cat /etc/kubernetes/manifests/kube-apiserver.yaml  | grep image

    image: k8s.gcr.io/kube-apiserver:v1.18.0

    imagePullPolicy: IfNotPresent



Create a static pod named static-busybox that uses the busybox image and the command sleep 1000

kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
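
The generated manifest would look roughly like this (a sketch of typical kubectl run --dry-run output; exact fields can vary by kubectl version):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - name: static-busybox
    image: busybox
    command:
    - sleep
    - "1000"
  restartPolicy: Never

The kubelet picks the file up from the manifests folder automatically; kubectl get pods then shows it as static-busybox-master (node-name suffix added).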




We just created a new static pod named static-greenbox. Find it and delete it.



master $ kubectl get nodes node01 -o wide

NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME

node01   Ready    <none>   32m   v1.18.0   172.17.0.11   <none>        Ubuntu 18.04.4 LTS   4.15.0-109-generic   docker://19.3.6


master $ ssh 172.17.0.11


node01 $ ps -ef | grep kubelet | grep -i "config"

root      2080     1  2 03:11 ?        00:00:02 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf


node01 $ grep -i static /var/lib/kubelet/config.yaml


staticPodPath: /etc/just-to-mess-with-you


node01 $ cd /etc/just-to-mess-with-you/


node01 $ ls -ltr

total 4

-rw-r--r-- 1 root root 301 Sep 17 03:11 greenbox.yaml


node01 $ rm greenbox.yaml


Now exit out of node01 and run kubectl get pods on the master node; the static-greenbox pod will no longer be listed (the kubelet removes it shortly after the manifest is deleted).




Tuesday, September 1, 2020

Daemonsets- Kubernetes

How many DaemonSets are created in the cluster in all namespaces?

Check all namespaces

kubectl get daemonsets --all-namespaces | wc -l
8
The output is 8, which means there are 7 DaemonSets; the first row is just the header line.
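The header row can also be skipped with --no-headers, avoiding the off-by-one:

kubectl get daemonsets --all-namespaces --no-headers | wc -l
7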

Which namespace are the DaemonSets created in?

 master $ kubectl get daemonsets --all-namespaces
NAMESPACE     NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-flannel-ds-amd64     2         2         2       2            2           <none>                   10m
kube-system   kube-flannel-ds-arm       0         0         0       0            0           <none>                   10m
kube-system   kube-flannel-ds-arm64     0         0         0       0            0           <none>                   10m
kube-system   kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   10m
kube-system   kube-flannel-ds-s390x     0         0         0       0            0           <none>                   10m
kube-system   kube-keepalived-vip       1         1         1       1            1           <none>                   10m
kube-system   kube-proxy                2         2         2       2            2           kubernetes.io/os=linux   10m

From above, the namespace is kube-system.

On how many nodes are the pods scheduled by the DaemonSet kube-proxy?
--
master $ kubectl describe daemonset kube-proxy --namespace=kube-proxy
Error from server (NotFound): namespaces "kube-proxy" not found
master $ kubectl describe daemonset kube-proxy --namespace=kube-system
Name:           kube-proxy
Selector:       k8s-app=kube-proxy
Node-Selector:  kubernetes.io/os=linux
Labels:         k8s-app=kube-proxy
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=kube-proxy
  Service Account:  kube-proxy
  Containers:
   kube-proxy:
    Image:      k8s.gcr.io/kube-proxy:v1.18.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
  Volumes:
   kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
   xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
   lib-modules:
    Type:               HostPath (bare host directory volume)
    Path:               /lib/modules
    HostPathType:
  Priority Class Name:  system-node-critical
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  37m   daemonset-controller  Created pod: kube-proxy-smvpc
  Normal  SuccessfulCreate  36m   daemonset-controller  Created pod: kube-proxy-6wwgd


So from above, we can see the answer is 2. 


 What is the image used by the POD deployed by the kube-flannel-ds-amd64 DaemonSet?
-- Similar approach to the one above: describe the DaemonSet and grep for the image:

master $ kubectl describe daemonset kube-flannel-ds-amd64 --namespace=kube-system | grep -i image
    Image:      quay.io/coreos/flannel:v0.12.0-amd64
    Image:      quay.io/coreos/flannel:v0.12.0-amd64