Wednesday, July 22, 2020

Node Affinity - Kubernetes practice

How many Labels exist on node node01?

Run the command 'kubectl describe node node01' and count the entries under the Labels section; there are five.

master $ kubectl describe nodes node01
Name:               node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"7e:aa:95:10:ce:a9"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.17.0.50
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 23 Jul 2020 01:55:31 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node01
  AcquireTime:     <unset>
  RenewTime:       Thu, 23 Jul 2020 02:39:53 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason        Message
  ----                 ------  -----------------                 ------------------                ------        -------
  NetworkUnavailable   False   Thu, 23 Jul 2020 01:56:18 +0000   Thu, 23 Jul 2020 01:56:18 +0000   FlannelIsUp        Flannel is running on this node
  MemoryPressure       False   Thu, 23 Jul 2020 02:36:38 +0000   Thu, 23 Jul 2020 01:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 23 Jul 2020 02:36:38 +0000   Thu, 23 Jul 2020 01:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 23 Jul 2020 02:36:38 +0000   Thu, 23 Jul 2020 01:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 23 Jul 2020 02:36:38 +0000   Thu, 23 Jul 2020 01:55:51 +0000   KubeletReady        kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  172.17.0.50
  Hostname:    node01
Capacity:
  cpu:                2
  ephemeral-storage:  199545168Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4039124Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  183900826525
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3936724Ki
  pods:               110
System Info:
  Machine ID:                 89f3221029a154cdbd1bfd3d5f18edd1
  System UUID:                89f3221029a154cdbd1bfd3d5f18edd1
  Boot ID:                    282ee406-ea78-417e-9fcb-d116dfe08f10
  Kernel Version:             4.15.0-101-generic
  OS Image:                   Ubuntu 18.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.6
  Kubelet Version:            v1.18.0
  Kube-Proxy Version:         v1.18.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (4 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 katacoda-cloud-provider-58f89f7d9-gtnmg    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44m
  kube-system                 kube-flannel-ds-amd64-knggn                100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      44m
  kube-system                 kube-keepalived-vip-f5pwl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44m
  kube-system                 kube-proxy-lz7rv                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         44m
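
If you'd rather see all of a node's labels on a single line instead of scanning the full describe output, the --show-labels flag works as well:

master $ kubectl get node node01 --show-labels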



What is the value set to the label beta.kubernetes.io/arch on node01?

Run the command 'kubectl describe node node01' and check the Labels section; the value is amd64.


Apply a label color=blue to node node01.

master $ kubectl label nodes node01 color=blue
node/node01 labeled
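
To confirm the label took effect, the -L (label-columns) flag adds a COLOR column to the node listing:

master $ kubectl get node node01 -L color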


Create a new deployment named 'blue' with the NGINX image and 6 replicas.

master $ kubectl create deployment --image=nginx blue --dry-run=client -o yaml > blue.yaml

Now open blue.yaml and change replicas to 6.
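
If you prefer a one-liner over opening an editor, a quick sed edit does the same thing (a sketch, assuming the generated manifest contains the default 'replicas: 1'):

master $ sed -i 's/replicas: 1/replicas: 6/' blue.yaml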


Which nodes can the PODs be placed on?

Check both nodes for taints (note the -i: 'Taints' is capitalized in the output, so a case-sensitive grep would miss it):

master $ kubectl describe node master | grep -i taint
master $ kubectl describe node node01 | grep -i taint

The describe output above already shows 'Taints: <none>' for node01; if the master reports none as well, the pods can be scheduled on either node.
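
As an alternative to grepping describe output, jsonpath can print every node's taints in one pass (an empty second column means the node has no taints):

master $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'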


Set Node Affinity to the deployment to place the PODs on node01 only.

Hint: Answer file at /var/answers/blue-deployment.yaml
Name: blue, Replicas: 6, Image: nginx, NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution, Key: color, values: blue

Open blue.yaml in vi and add the lines below under the pod template's spec section (spec.template.spec), not the top-level deployment spec:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: color
          operator: In
          values:
          - blue
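
For orientation, here is a minimal sketch of the finished blue.yaml with the affinity block in place under spec.template.spec; generator-added fields such as creationTimestamp and strategy are trimmed for brevity, so your file may differ slightly:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: blue
  name: blue
spec:
  replicas: 6
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
      containers:
      - image: nginx
        name: nginx

Then create the deployment from the file:

master $ kubectl create -f blue.yaml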


Which nodes are the PODs placed on now?

See the NODE column in the output below: all six pods landed on node01.

master $ kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
blue-597db9bc79-7bj2g   1/1     Running   0          3m31s   10.244.1.11   node01   <none>           <none>
blue-597db9bc79-b7f7h   1/1     Running   0          3m31s   10.244.1.10   node01   <none>           <none>
blue-597db9bc79-jwgcn   1/1     Running   0          3m31s   10.244.1.12   node01   <none>           <none>
blue-597db9bc79-qmpmg   1/1     Running   0          3m31s   10.244.1.15   node01   <none>           <none>
blue-597db9bc79-wfbqk   1/1     Running   0          3m31s   10.244.1.14   node01   <none>           <none>
blue-597db9bc79-wrpcd   1/1     Running   0          3m31s   10.244.1.13   node01   <none>           <none>



Create a new deployment named 'red' with the NGINX image and 3 replicas, and ensure it gets placed on the master node only.
Use the label - node-role.kubernetes.io/master - set on the master node.

Hint: Answer file at /var/answers/red-deployment.yaml
Name: red, Replicas: 3, Image: nginx, NodeAffinity: requiredDuringSchedulingIgnoredDuringExecution, Key: node-role.kubernetes.io/master, Use the right operator

First, generate the deployment manifest with a dry run:

kubectl create deployment --image=nginx red --dry-run=client -o yaml > red-deployment.yaml

Now open red-deployment.yaml in vi, change replicas to 3, and add the following under the pod template's spec section (spec.template.spec). Since the question only requires that the label key exists on the node, the Exists operator is used and no values list is needed:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
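
A sketch of the resulting red-deployment.yaml, matching the hint's answer file in shape (generator defaults trimmed, as with blue.yaml above):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: red
  name: red
spec:
  replicas: 3
  selector:
    matchLabels:
      app: red
  template:
    metadata:
      labels:
        app: red
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      containers:
      - image: nginx
        name: nginx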

Now create the deployment from the updated YAML. (Note: if a deployment with the same name already exists, delete it first with 'kubectl delete deployment red'.)

kubectl create -f red-deployment.yaml
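
As before, confirm placement via the NODE column; if the affinity matched, all three red pods should be scheduled on master:

master $ kubectl get pods -o wide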

    

