Tuesday, November 3, 2020

Commands and Arguments - Kubernetes

1. Create a pod with the ubuntu image to run a container to sleep for 5000 seconds. Modify the file ubuntu-sleeper-2.yaml. Note: Only make the necessary changes. Do not modify the name.

Pod Name: ubuntu-sleeper-2 Command: sleep 5000

---> Edit the file as follows: 
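
A minimal sketch of what the edited spec could look like, assuming the container in the given file is named ubuntu (keep whatever names the file already has; only the command needs to be added):

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-2
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command:
      - "sleep"
      - "5000"

The command field overrides the image's default ENTRYPOINT, and "5000" has to be quoted because the array elements must be strings.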


Then run kubectl create -f ubuntu-sleeper-2.yaml


Monday, September 28, 2020

Rolling Updates and Rollbacks

1. We have deployed a simple web application. Inspect the PODs and the Services. Wait for the application to fully deploy and view the application using the link above your terminal.

master $ kubectl get deployment

NAME       READY   UP-TO-DATE   AVAILABLE   AGE

frontend   4/4     4            4           94s

master $ kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE

frontend-6bb4f9cdc8-hc8ws   1/1     Running   0          107s

frontend-6bb4f9cdc8-hjnmm   1/1     Running   0          107s

frontend-6bb4f9cdc8-nw9nb   1/1     Running   0          107s

frontend-6bb4f9cdc8-zb5v2   1/1     Running   0          107s

master $ kubectl get services

NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE

kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP          16m

webapp-service   NodePort    10.98.32.239   <none>        8080:30080/TCP   111s


2. Run the script named curl-test.sh to send multiple requests to test the web application. Take a note of the output. Execute the script at /root/curl-test.sh.


master $ ls -ltr

total 8

drwxr-xr-x 4 root root 4096 Jul  8 08:30 go

-rwxr-xr-x 1 root root  215 Sep 29 02:45 curl-test.sh

master $ vi curl-test.sh

master $ sh -x curl-test.sh

+ kubectl exec --namespace=kube-public curl -- sh -c test=`wget -qO- -T 2  http://webapp-service.default.svc.cluster.local:8080/info 2>&1` && echo "$test OK" || echo "Failed"

Hello, Application Version: v1 ; Color: blue OK

+ echo


master $


3. Inspect the deployment and identify the number of PODs deployed by it

--> master $ kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE

frontend-6bb4f9cdc8-hc8ws   1/1     Running   0          107s

frontend-6bb4f9cdc8-hjnmm   1/1     Running   0          107s

frontend-6bb4f9cdc8-nw9nb   1/1     Running   0          107s

frontend-6bb4f9cdc8-zb5v2   1/1     Running   0          107s

From this output, the deployment has 4 PODs.


4. What container image is used to deploy the applications?

master $ kubectl describe deployment frontend | grep -i image

    Image:        kodekloud/webapp-color:v1


5. Inspect the deployment and identify the current strategy

--master $ kubectl describe deployment frontend | grep -i strategy

StrategyType:           RollingUpdate

RollingUpdateStrategy:  25% max unavailable, 25% max surge


6. If you were to upgrade the application now, what would happen?

- Since the deployment uses the RollingUpdate strategy, a few PODs would be taken down and a few new ones brought up at a time, so the application stays available during the upgrade.



7. Let us try that. Upgrade the application by setting the image on the deployment to 'kodekloud/webapp-color:v2'. Do not delete and re-create the deployment. Only set the new image name for the existing deployment.

Hint: Deployment Name: frontend, Deployment Image: kodekloud/webapp-color:v2


master $ kubectl set image deployment/frontend simple-webapp=kodekloud/webapp-color:v2

deployment.apps/frontend image updated

8. Run the script curl-test.sh again. Notice the requests now hit both the old and newer versions; however, none of them fail. Execute the script at /root/curl-test.sh.

master $ sh -x curl-test.sh

+ kubectl exec --namespace=kube-public curl -- sh -c test=`wget -qO- -T 2  http://webapp-service.default.svc.cluster.local:8080/info 2>&1` && echo "$test OK" || echo "Failed"

Hello, Application Version: v2 ; Color: green OK

+ echo

9. Up to how many PODs can be down for an upgrade at a time? Consider the current strategy settings and the number of PODs (4). Hint: look at the Max Unavailable value under RollingUpdateStrategy in the deployment details.


master $ kubectl describe deployment frontend | grep -i strategy

StrategyType:           RollingUpdate

RollingUpdateStrategy:  25% max unavailable, 25% max surge


Right now there are 4 pods and 25% of 4 is 1, so the answer is 1.


10. Upgrade the application by setting the image on the deployment to 'kodekloud/webapp-color:v3'. Do not delete and re-create the deployment. Only set the new image name for the existing deployment.

master $ kubectl set image deployment/frontend simple-webapp=kodekloud/webapp-color:v3

deployment.apps/frontend image updated


11. Run the script curl-test.sh again. Notice the failures. Wait for the new application to be ready. Notice that the requests now do not hit both the versions. Execute the script at /root/curl-test.sh.
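
Since this exercise covers rollbacks as well, a few related commands are worth noting here (they are not part of the lab transcript above, just standard kubectl):

kubectl rollout status deployment/frontend

kubectl rollout history deployment/frontend

kubectl rollout undo deployment/frontend

The last command would roll the frontend deployment back to the previous revision, for example after a failed v3 upgrade.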

Thursday, September 24, 2020

Monitor Application Logs - Kubernetes

A user - 'USER5' - has expressed concerns accessing the application. Identify the cause of the issue by inspecting the logs of the POD.

master $ kubectl get pods

NAME       READY   STATUS              RESTARTS   AGE

webapp-1   0/1     ContainerCreating   0          12s

master $ kubectl get pods

NAME       READY   STATUS    RESTARTS   AGE

webapp-1   1/1     Running   0          93s

master $ kubectl logs -f webapp-1

[2020-09-25 02:59:05,519] INFO in event-simulator: USER4 logged out

[2020-09-25 02:59:06,520] INFO in event-simulator: USER2 is viewing page2

[2020-09-25 02:59:07,522] INFO in event-simulator: USER4 is viewing page1

[2020-09-25 02:59:08,522] INFO in event-simulator: USER4 logged in

[2020-09-25 02:59:09,524] INFO in event-simulator: USER1 logged in

[2020-09-25 02:59:10,525] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILEDATTEMPTS.

The logs clearly indicate that USER5's account is locked due to too many failed login attempts.

A user is reporting issues while trying to purchase an item. Identify the user and the cause of the issue.

Inspect the logs of the webapp in the POD

Let's first try the same thing we tried above, i.e. kubectl logs -f <podName>:

master $ kubectl logs -f webapp-2

error: a container name must be specified for pod webapp-2, choose one of: [simple-webapp db]

This gave us an error because the pod has multiple containers, so we need to specify the name of the container.

First run kubectl describe pod <podName> to see which containers it has, then look at the logs of the relevant container:

kubectl logs -f <podName> <containerName>
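
For this particular pod, using the container names reported in the error above, that would look like:

kubectl describe pod webapp-2

kubectl logs -f webapp-2 simple-webapp

(simple-webapp is the web application container here; db is the other container listed in the error.)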


Monitor Cluster Components - Kubernetes

 Let us deploy metrics-server to monitor the PODs and Nodes. Pull the git repository for the deployment files. https://github.com/kodekloudhub/kubernetes-metrics-server.git

-master $ git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git

Cloning into 'kubernetes-metrics-server'...

remote: Enumerating objects: 3, done.

remote: Counting objects: 100% (3/3), done.

remote: Compressing objects: 100% (3/3), done.

remote: Total 15 (delta 0), reused 0 (delta 0), pack-reused 12

Unpacking objects: 100% (15/15), done.

master $

master $

master $ ls -ltr

total 8

drwxr-xr-x 4 root root 4096 Jul  8 08:30 go

drwxr-xr-x 3 root root 4096 Sep 25 02:36 kubernetes-metrics-server


Deploy the metrics-server by creating all the components downloaded. Run the 'kubectl create -f .' command from within the downloaded repository.

master $ cd kubernetes-metrics-server/

master $ ls -ltr

total 32

-rw-r--r-- 1 root root 612 Sep 25 02:36 resource-reader.yaml

-rw-r--r-- 1 root root 219 Sep 25 02:36 README.md

-rw-r--r-- 1 root root 249 Sep 25 02:36 metrics-server-service.yaml

-rw-r--r-- 1 root root 976 Sep 25 02:36 metrics-server-deployment.yaml

-rw-r--r-- 1 root root 298 Sep 25 02:36 metrics-apiservice.yaml

-rw-r--r-- 1 root root 329 Sep 25 02:36 auth-reader.yaml

-rw-r--r-- 1 root root 308 Sep 25 02:36 auth-delegator.yaml

-rw-r--r-- 1 root root 384 Sep 25 02:36 aggregated-metrics-reader.yaml

master $ kubectl create -f resource-reader.yaml

clusterrole.rbac.authorization.k8s.io/system:metrics-server created

clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

master $ kubectl create -f metrics-server-service.yaml

service/metrics-server created

master $ kubectl create -f metrics-server-deployment.yaml

serviceaccount/metrics-server created

deployment.apps/metrics-server created

master $ kubectl create -f metrics-apiservice.yaml

apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

master $ kubectl create -f auth-reader.yaml

rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created

master $ kubectl create -f auth-delegator.yaml

clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created

master $ kubectl create -f aggregated-metrics-reader.yaml

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created

Instead of going over each file individually and executing kubectl create -f <filename>, running kubectl create -f . from within the directory works as well!


It takes a few minutes for the metrics server to start gathering data.

Run the 'kubectl top node' command and wait for a valid output.

master $ kubectl top node

NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%

master   109m         5%     938Mi           49%

node01   2000m        100%   563Mi           14%


Identify the node that consumes the most CPU.

--  From the above command itself, the answer is node01.


Identify the node that consumes the most Memory.

-- From the above command, (kubectl top node), the answer is master.



Identify the POD that consumes the most Memory.

master $ kubectl top pod

NAME       CPU(cores)   MEMORY(bytes)

elephant   13m          50Mi

lion       956m         1Mi

rabbit     976m         1Mi

--- The answer is elephant


Identify the POD that consumes the most CPU.

--The answer is rabbit
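
A small shortcut for these questions (not used in the lab, and assuming a kubectl version where the flag is available): kubectl top can sort its output so the heaviest consumer is listed first.

kubectl top pod --sort-by=cpu

kubectl top pod --sort-by=memory

kubectl top node --sort-by=memory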



Multiple schedulers - Kubernetes

 1. What is the name of the POD that deploys the default kubernetes scheduler in this environment?

-master $ kubectl get pods --namespace=kube-system

NAME                                      READY   STATUS             RESTARTS   AGE

coredns-66bff467f8-dbnft                  1/1     Running            0          43m

coredns-66bff467f8-gq8nw                  1/1     Running            0          43m

etcd-master                               1/1     Running            0          43m

katacoda-cloud-provider-58f89f7d9-t978s   0/1     CrashLoopBackOff   13         43m

kube-apiserver-master                     1/1     Running            0          43m

kube-controller-manager-master            1/1     Running            0          43m

kube-flannel-ds-amd64-4xjbh               1/1     Running            0          43m

kube-flannel-ds-amd64-xs95v               1/1     Running            1          42m

kube-keepalived-vip-bchtm                 1/1     Running            0          42m

kube-proxy-4qblt                          1/1     Running            0          42m

kube-proxy-dswp9                          1/1     Running            0          43m

kube-scheduler-master                     1/1     Running            0          43m


Based on this output, the answer is kube-scheduler-master.


2. What is the image used to deploy the kubernetes scheduler? Inspect the kubernetes scheduler pod and identify the image.

master $ kubectl describe pod --namespace=kube-system kube-scheduler-master | grep -i image

    Image:         k8s.gcr.io/kube-scheduler:v1.18.0

    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler@sha256:33063bc856e99d12b9cb30aab1c1c755ecd458d5bd130270da7c51c70ca10cf6


Deploy an additional scheduler to the cluster following the given specification.

Use the manifest file used by the kubeadm tool. Use a different port than the one used by the current scheduler.

kube-apiserver.yaml           kube-scheduler.yaml

master $ vi /etc/kubernetes/manifests/kube-scheduler.yaml

master $ vi /var/answers/my-scheduler.yaml

master $ cp /etc/kubernetes/manifests/kube-scheduler.yaml my-scheduler.yaml

master $ vi my-scheduler.yaml

Make a few changes such as the pod name, scheduler name and port number, then save the file (a sketch of the typical edits is shown after the kubectl apply below).

master $ vi my-scheduler.yaml

master $ vi my-scheduler.yaml

master $ kubectl apply -f my-scheduler.yaml
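
For reference, the parts of the copied manifest that typically get edited look roughly like the snippet below. This is only a sketch for the v1.18 setup used in this lab; the port value is just an example, and flags such as --scheduler-name and --port are deprecated in newer releases (replaced by a KubeSchedulerConfiguration file), so treat the exact flag names as assumptions:

metadata:
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - name: my-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.18.0
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --scheduler-name=my-custom-scheduler
    - --port=10282

The --scheduler-name value must match the schedulerName referenced in pod specs later, and --leader-elect=false avoids leader-election lock contention with the default scheduler.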



A pod definition file is given. Use it to create a POD with the new custom scheduler. The file is located at /root/nginx-pod.yaml.

-- Go to the yaml file and, under the spec section, add schedulerName: my-custom-scheduler,

and then run kubectl apply -f /root/nginx-pod.yaml
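
A minimal sketch of the relevant part of the pod definition after the edit (the container details are assumed; only the schedulerName line is the required change):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-custom-scheduler
  containers:
  - name: nginx
    image: nginx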







Wednesday, September 16, 2020

Static pods

 How many static pods exist in this cluster in all namespaces?

-- Execute the regular kubectl get pods --all-namespaces and look for pods whose names end with the node name; the kubelet appends the node name to static pods, and here the node is called controlplane, so look for names ending in "-controlplane".

master $ kubectl get pods --all-namespaces

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE

kube-system   coredns-66bff467f8-bbm7q                  1/1     Running   0          2m1s

kube-system   coredns-66bff467f8-hghqk                  1/1     Running   0          2m1s

kube-system   etcd-controlplane                         1/1     Running   0          2m11s

kube-system   katacoda-cloud-provider-58f89f7d9-kx9ts   1/1     Running   0          2m

kube-system   kube-apiserver-controlplane               1/1     Running   0          2m11s

kube-system   kube-controller-manager-controlplane      1/1     Running   0          2m11s

kube-system   kube-flannel-ds-amd64-nhhj9               1/1     Running   0          2m2s

kube-system   kube-flannel-ds-amd64-xfjzs               1/1     Running   0          109s

kube-system   kube-keepalived-vip-fnqhp                 1/1     Running   0          68s

kube-system   kube-proxy-ml2d6                          1/1     Running   0          109s

kube-system   kube-proxy-n5zg4                          1/1     Running   0          2m2s

kube-system   kube-scheduler-controlplane               1/1     Running   0          2m11s

The pods with -controlplane appended to their names (etcd, kube-apiserver, kube-controller-manager and kube-scheduler) are the static pods, so the answer here is 4.
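
If relying on the name suffix feels fragile, another way to confirm (my own addition, not from the lab) is that the kubelet creates a mirror pod for each static pod, and the mirror pod is owned by the Node object rather than by a ReplicaSet or DaemonSet:

kubectl get pod kube-apiserver-controlplane -n kube-system -o yaml | grep -A5 ownerReferences

For a static pod this shows an owner of kind: Node.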


Which of the below components is NOT deployed as a static POD?

- Look for the pods that do NOT have the node name appended to their name; components such as coredns and kube-proxy are created by a Deployment/DaemonSet rather than as static pods.

-- kubectl get pods --all-namespaces



On what nodes are the static pods created?

master $ kubectl get pods --all-namespaces -o wide

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE  NOMINATED NODE   READINESS GATES

default       static-busybox-master                     1/1     Running   0          5m17s   10.244.0.4   master  <none>           <none>

kube-system   coredns-66bff467f8-bjmd8                  1/1     Running   0          8m52s   10.244.0.2   master  <none>           <none>

kube-system   coredns-66bff467f8-rjjsf                  1/1     Running   0          8m52s   10.244.0.3   master  <none>           <none>

kube-system   etcd-master                               1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

kube-system   katacoda-cloud-provider-58f89f7d9-htt9s   1/1     Running   5          8m51s   10.244.1.2   node01  <none>           <none>

kube-system   kube-apiserver-master                     1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

kube-system   kube-controller-manager-master            1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

kube-system   kube-flannel-ds-amd64-5fkpt               1/1     Running   0          8m44s   172.17.0.9   node01  <none>           <none>

kube-system   kube-flannel-ds-amd64-gj5lc               1/1     Running   0          8m52s   172.17.0.8   master  <none>           <none>

kube-system   kube-keepalived-vip-ckcx6                 1/1     Running   0          8m13s   172.17.0.9   node01  <none>           <none>

kube-system   kube-proxy-wklpl                          1/1     Running   0          8m52s   172.17.0.8   master  <none>           <none>

kube-system   kube-proxy-xj8nk                          1/1     Running   0          8m44s   172.17.0.9   node01  <none>           <none>

kube-system   kube-scheduler-master                     1/1     Running   0          9m      172.17.0.8   master  <none>           <none>

From the above, all the static pods are created on the master node.



How many pod definition files are present in the manifests folder?

One way to do this is to look at the kubelet process:

ps -ef | grep kubelet

and note the --config file, which is /var/lib/kubelet/config.yaml.

Searching that yaml file for the static pod setting gives us the staticPodPath, for example:

grep -i "static" /var/lib/kubelet/config.yaml

staticPodPath: /etc/kubernetes/manifests

Now go to that folder and see how many files are there:

master $ cd /etc/kubernetes/manifests/

master $ ls -ltr

total 20

-rw------- 1 root root 3366 Sep 16 01:54 kube-apiserver.yaml

-rw------- 1 root root 1120 Sep 16 01:54 kube-scheduler.yaml

-rw------- 1 root root 3231 Sep 16 01:54 kube-controller-manager.yaml

-rw------- 1 root root 1832 Sep 16 01:54 etcd.yaml

-rw-r--r-- 1 root root  298 Sep 16 01:59 static-busybox.yaml



What is the docker image used to deploy the kube-api server as a static pod?

master $ cat /etc/kubernetes/manifests/kube-apiserver.yaml  | grep image

    image: k8s.gcr.io/kube-apiserver:v1.18.0

    imagePullPolicy: IfNotPresent



Create a static pod named static-busybox that uses the busybox image and the command sleep 1000

kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
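
The generated file looks roughly like this (trimmed; the real output also contains a few defaulted fields such as creationTimestamp, resources, dnsPolicy and status):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - command:
    - sleep
    - "1000"
    image: busybox
    name: static-busybox
  restartPolicy: Never

Once the file is in the staticPodPath, the kubelet on the master creates the pod, and it shows up in kubectl get pods as static-busybox-master.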




We just created a new static pod named static-greenbox. Find it and delete it.



master $ kubectl get nodes node01 -o wide

NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME

node01   Ready    <none>   32m   v1.18.0   172.17.0.11   <none>        Ubuntu 18.04.4 LTS   4.15.0-109-generic   docker://19.3.6


master $ ssh 172.17.0.11


node01 $ ps -ef | grep kubelet | grep -i "config"

root      2080     1  2 03:11 ?        00:00:02 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf


node01 $ grep -i static /var/lib/kubelet/config.yaml


staticPodPath: /etc/just-to-mess-with-you


node01 $ cd /etc/just-to-mess-with-you/


node01 $ ls -ltr

total 4

-rw-r--r-- 1 root root 301 Sep 17 03:11 greenbox.yaml


node01 $ rm greenbox.yaml


Now exit out of node01 and run kubectl get pods on the master node; the static-greenbox pod is no longer listed.




Tuesday, September 1, 2020

Daemonsets- Kubernetes

How many DaemonSets are created in the cluster in all namespaces?

Check all namespaces

kubectl get daemonsets --all-namespaces | wc -l
8
The output gave 8, which means there are 7 DaemonSets; the first row is the heading itself.
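
To avoid subtracting the header row, the --no-headers flag works as well (a small convenience, not part of the lab answer):

kubectl get daemonsets --all-namespaces --no-headers | wc -l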

Which namespace are the DaemonSets created in?

 master $ kubectl get daemonsets --all-namespaces
NAMESPACE     NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   kube-flannel-ds-amd64     2         2         2       2            2           <none>                   10m
kube-system   kube-flannel-ds-arm       0         0         0       0            0           <none>                   10m
kube-system   kube-flannel-ds-arm64     0         0         0       0            0           <none>                   10m
kube-system   kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   10m
kube-system   kube-flannel-ds-s390x     0         0         0       0            0           <none>                   10m
kube-system   kube-keepalived-vip       1         1         1       1            1           <none>                   10m
kube-system   kube-proxy                2         2         2       2            2           kubernetes.io/os=linux   10m

From above, the namespace is kube-system.

On how many nodes are the pods scheduled by the DaemonSet kube-proxy?
--
master $ kubectl describe daemonset kube-proxy --namespace=kube-proxy
Error from server (NotFound): namespaces "kube-proxy" not found
master $ kubectl describe daemonset kube-proxy --namespace=kube-system
Name:           kube-proxy
Selector:       k8s-app=kube-proxy
Node-Selector:  kubernetes.io/os=linux
Labels:         k8s-app=kube-proxy
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=kube-proxy
  Service Account:  kube-proxy
  Containers:
   kube-proxy:
    Image:      k8s.gcr.io/kube-proxy:v1.18.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
  Volumes:
   kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
   xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
   lib-modules:
    Type:               HostPath (bare host directory volume)
    Path:               /lib/modules
    HostPathType:
  Priority Class Name:  system-node-critical
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  37m   daemonset-controller  Created pod: kube-proxy-smvpc
  Normal  SuccessfulCreate  36m   daemonset-controller  Created pod: kube-proxy-6wwgd


So from above, we can see the answer is 2. 


 What is the image used by the POD deployed by the kube-flannel-ds-amd64 DaemonSet?
-- Similar approach as above: get the description and read the image from there.

master $ kubectl describe daemonset kube-flannel-ds-amd64 --namespace=kube-system | grep -i image
    Image:      quay.io/coreos/flannel:v0.12.0-amd64
    Image:      quay.io/coreos/flannel:v0.12.0-amd64








Thursday, July 30, 2020

Terraform - first configuration


1. How to install Terraform on Mac (I am using macOS 10.15 Catalina here; installation on Linux is also a simple process)


Ashoks-MacBook-Pro:~ ashokkafle$ brew install terraform

Updating Homebrew...

==> Auto-updated Homebrew!

Updated 1 tap (homebrew/core).

==> New Formulae

act                   chart-testing         cubejs-cli            git-hooks-go          k9s                   lunchy-go             ory-hydra             scw@1                 thanos

arb                   chrony                dnsprobe              gofish                kona                  mandown               osi                   sdns                  vgrep

argo                  clair                 dosbox-staging        golangci-lint         ksync                 marked                pandoc-include-code   simdjson              wgcf

argocd                cloudformation-cli    duckscript            gradle-profiler       kubie                 naabu                 pipgrip               smlpkg                yj

arrayfire             coconut               eksctl                gulp-cli              ldpl                  never                 promtail              so                    z.lua

awsweeper             colfer                fennel                hy                    litecli               ngs                   python@3.7            standardese

buildozer             copilot               functionalplus        jimtcl                logcli                notmuch-mutt          reg                   subfinder

cadence               cortex                gateway-go            jinx                  loki                  oci-cli               rqlite                termcolor

chalk-cli             croaring              gcc@9                 jsonnet-bundler       lunchy                omake                 saltwater             terraform-ls

==> Updated Formulae

Updated 3947 formulae.

==> Renamed Formulae

elasticsearch@6.8 -> elasticsearch@6                                                                   kibana@6.8 -> kibana@6

==> Deleted Formulae

cargo-completion      elasticsearch@2.4     elasticsearch@5.6     kibana@5.6            lumo                  python                sflowtool             tomee-jax-rs          unravel


==> Downloading https://homebrew.bintray.com/bottles/terraform-0.12.29.catalina.bottle.tar.gz

==> Downloading from https://d29vzk4ow07wi7.cloudfront.net/f7c787a4c42bb1291200f19b112aae5f725fc0fad068ad2422003e73ab74e4f7?response-content-disposition=attachment%3Bfilename%3D%22terraform-0.12.29.catali

######################################################################## 100.0%



2. Create a working directory for practice:

Ashoks-MacBook-Pro:~ ashokkafle$ mkdir terraform-practice


The first thing we need to do is create a working folder (here, a simple-aws-configuration directory inside terraform-practice) and, inside that folder, create a file called main.tf.

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ touch main.tf


The content of the file looks like the one below; it simply creates a private S3 bucket on AWS.

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ cat main.tf 



resource "aws_s3_bucket" "b" {

  bucket_prefix = "my-tf-test-bucket-"

  acl    = "private"


  tags = {

    Name        = "My bucket"

    Environment = "Dev"

  }
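
Note that main.tf has no provider block, which is why Terraform interactively asks for provider.aws.region during plan/apply/destroy further below. A small optional addition (my own, not part of the original file) that avoids the prompt:

provider "aws" {
  region = "us-east-1"
}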



3. After creating the main.tf file, run the terraform init command:

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ terraform init


Initializing the backend...


Initializing provider plugins...

- Checking for available provider plugins...

- Downloading plugin for provider "aws" (hashicorp/aws) 2.70.0...


The following providers do not have any version constraints in configuration,

so the latest version was installed.


To prevent automatic upgrades to new major versions that may contain breaking

changes, it is recommended to add version = "..." constraints to the

corresponding provider blocks in configuration, with the constraint strings

suggested below.


* provider.aws: version = "~> 2.70"


Terraform has been successfully initialized!


You may now begin working with Terraform. Try running "terraform plan" to see

any changes that are required for your infrastructure. All Terraform commands

should now work.


If you ever set or change modules or backend configuration for Terraform,

rerun this command to reinitialize your working directory. If you forget, other

commands will detect it and remind you to do so if necessary




4. Now we need to have aws-cli configured on our local system to connect with AWS. Below are the commands and steps to install aws-cli on macOS using the AWS bundled zip installer; there are various other ways to do it.

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 16.0M  100 16.0M    0     0  4065k      0  0:00:04  0:00:04 --:--:-- 4065k

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ ls -ltr

total 34544

-rw-r--r--  1 ashokkafle  staff       163 Jul 30 19:16 main.tf

-rw-r--r--  1 ashokkafle  staff  16796306 Jul 30 19:24 awscli-bundle.zip

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ unzip awscli-bundle.zip

Archive:  awscli-bundle.zip

  inflating: awscli-bundle/install   

  inflating: awscli-bundle/packages/six-1.15.0.tar.gz  

  inflating: awscli-bundle/packages/docutils-0.15.2.tar.gz  

  inflating: awscli-bundle/packages/urllib3-1.25.10.tar.gz  

  inflating: awscli-bundle/packages/botocore-1.17.32.tar.gz  

  inflating: awscli-bundle/packages/virtualenv-16.7.8.tar.gz  

  inflating: awscli-bundle/packages/PyYAML-5.3.1.tar.gz  

  inflating: awscli-bundle/packages/futures-3.3.0.tar.gz  

  inflating: awscli-bundle/packages/pyasn1-0.4.8.tar.gz  

  inflating: awscli-bundle/packages/rsa-3.4.2.tar.gz  

  inflating: awscli-bundle/packages/awscli-1.18.109.tar.gz  

  inflating: awscli-bundle/packages/python-dateutil-2.8.0.tar.gz  

  inflating: awscli-bundle/packages/jmespath-0.10.0.tar.gz  

  inflating: awscli-bundle/packages/colorama-0.4.1.tar.gz  

  inflating: awscli-bundle/packages/colorama-0.4.3.tar.gz  

  inflating: awscli-bundle/packages/PyYAML-5.2.tar.gz  

  inflating: awscli-bundle/packages/s3transfer-0.3.3.tar.gz  

  inflating: awscli-bundle/packages/urllib3-1.25.7.tar.gz  

  inflating: awscli-bundle/packages/setup/setuptools_scm-3.3.3.tar.gz  

  inflating: awscli-bundle/packages/setup/wheel-0.33.6.tar.gz  

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ ls -ltr

total 34544

-rw-r--r--  1 ashokkafle  staff       163 Jul 30 19:16 main.tf

-rw-r--r--  1 ashokkafle  staff  16796306 Jul 30 19:24 awscli-bundle.zip

drwxr-xr-x  4 ashokkafle  staff       128 Jul 30 19:25 awscli-bundle

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ ./awscli-bundle/install -b ~/bin/aws

Running cmd: /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python virtualenv.py --no-download --python /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python /Users/ashokkafle/.local/lib/aws

Running cmd: /Users/ashokkafle/.local/lib/aws/bin/pip install --no-binary :all: --no-cache-dir --no-index --find-links file://. setuptools_scm-3.3.3.tar.gz

Running cmd: /Users/ashokkafle/.local/lib/aws/bin/pip install --no-binary :all: --no-cache-dir --no-index --find-links file://. wheel-0.33.6.tar.gz

Running cmd: /Users/ashokkafle/.local/lib/aws/bin/pip install --no-binary :all: --no-build-isolation --no-cache-dir --no-index  --find-links file:///Users/ashokkafle/terraform-practice/simple-aws-configuration/awscli-bundle/packages awscli-1.18.109.tar.gz

You can now run: /Users/ashokkafle/bin/aws --version

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ echo $PATH | grep ~/bin     // See if $PATH contains ~/bin (output will be empty if it doesn't)

-bash: syntax error near unexpected token `('

(The error above is just bash tripping over the pasted "// ..." comment, which contains parentheses; the export on the next line is what actually matters.)

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ export PATH=~/bin:$PATH     // Add ~/bin to $PATH if necessary

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ echo $PATH | grep ~/bin 

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ export PATH=~/bin:$PATH 

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ echo $PATH | grep ~/bin 

/Users/ashokkafle/bin:/Library/Frameworks/Python.framework/Versions/3.8/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet:~/.dotnet/tools:/Library/Frameworks/Mono.framework/Versions/Current/Commands


Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ vi ~/.bash_profile

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ source ~/.bash

.bash_history         .bash_profile.pysave  .bashrc               

.bash_profile         .bash_sessions/       

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ source ~/.bash

.bash_history         .bash_profile.pysave  .bashrc               

.bash_profile         .bash_sessions/       

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ source ~/.bash_profile

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ aws --version

aws-cli/1.18.109 Python/2.7.16 Darwin/19.5.0 botocore/1.17.32




5. After installing aws-cli, we also need to have a user created on aws with access-key and secret to configure aws locally as shown below. For detailed explanation of this process, refer: https://ashokkafle.blogspot.com/2020/05/start-and-stop-ec2-instances-using.html

Once the user is created and the correct permissions are assigned, we get the access key and secret that can be used to connect to AWS. These can also be downloaded as a CSV file and kept somewhere safe.



6. Configure aws locally


Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ aws configure

AWS Access Key ID [None]: <access-key>

AWS Secret Access Key [None]: <secret-key>

Default region name [None]: us-east

Default output format [None]: json

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ cat ~/.aws/config 

[default]

output = json

region = us-east

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ cat ~/.aws/credentials 

[default]

aws_access_key_id = <access-key>

aws_secret_access_key = <secret-key>
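
As an aside (not part of the original walkthrough), the same credentials can also be supplied through environment variables, which both the AWS CLI and the Terraform AWS provider read:

export AWS_ACCESS_KEY_ID=<access-key>

export AWS_SECRET_ACCESS_KEY=<secret-key>

export AWS_DEFAULT_REGION=us-east-1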




7. Now execute terraform plan to see whether the configuration will do what you intended (it doesn't make any changes; it only displays what would happen once applied).


Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ terraform plan

provider.aws.region

  The region where AWS operations will take place. Examples

  are us-east-1, us-west-2, etc.


  Enter a value: us-east-1


Refreshing Terraform state in-memory prior to plan...

The refreshed state will be used to calculate this plan, but will not be

persisted to local or remote state storage.



------------------------------------------------------------------------


An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

  + create


Terraform will perform the following actions:


  # aws_s3_bucket.b will be created

  + resource "aws_s3_bucket" "b" {

      + acceleration_status         = (known after apply)

      + acl                         = "private"

      + arn                         = (known after apply)

      + bucket                      = (known after apply)

      + bucket_domain_name          = (known after apply)

      + bucket_prefix               = "my-tf-test-bucket-"

      + bucket_regional_domain_name = (known after apply)

      + force_destroy               = false

      + hosted_zone_id              = (known after apply)

      + id                          = (known after apply)

      + region                      = (known after apply)

      + request_payer               = (known after apply)

      + tags                        = {

          + "Environment" = "Dev"

          + "Name"        = "My bucket"

        }

      + website_domain              = (known after apply)

      + website_endpoint            = (known after apply)


      + versioning {

          + enabled    = (known after apply)

          + mfa_delete = (known after apply)

        }

    }


Plan: 1 to add, 0 to change, 0 to destroy.


------------------------------------------------------------------------


Note: You didn't specify an "-out" parameter to save this plan, so Terraform

can't guarantee that exactly these actions will be performed if

"terraform apply" is subsequently run.



8. Next, execute terraform apply to actually perform the changes.


Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ terraform apply

provider.aws.region

  The region where AWS operations will take place. Examples

  are us-east-1, us-west-2, etc.


  Enter a value: us-east-1



An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

  + create


Terraform will perform the following actions:


  # aws_s3_bucket.b will be created

  + resource "aws_s3_bucket" "b" {

      + acceleration_status         = (known after apply)

      + acl                         = "private"

      + arn                         = (known after apply)

      + bucket                      = (known after apply)

      + bucket_domain_name          = (known after apply)

      + bucket_prefix               = "my-tf-test-bucket-"

      + bucket_regional_domain_name = (known after apply)

      + force_destroy               = false

      + hosted_zone_id              = (known after apply)

      + id                          = (known after apply)

      + region                      = (known after apply)

      + request_payer               = (known after apply)

      + tags                        = {

          + "Environment" = "Dev"

          + "Name"        = "My bucket"

        }

      + website_domain              = (known after apply)

      + website_endpoint            = (known after apply)


      + versioning {

          + enabled    = (known after apply)

          + mfa_delete = (known after apply)

        }

    }


Plan: 1 to add, 0 to change, 0 to destroy.


Do you want to perform these actions?

  Terraform will perform the actions described above.

  Only 'yes' will be accepted to approve.


  Enter a value: yes


aws_s3_bucket.b: Creating...

aws_s3_bucket.b: Creation complete after 3s [id=my-tf-test-bucket-20200731003742999700000001]


Apply complete! Resources: 1 added, 0 changed, 0 destroyed.





9. Run some verifications, e.g. check that the Terraform state file lists the resource and that the S3 bucket was created.



Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ terraform state list

aws_s3_bucket.b



Also verified on the AWS console that the S3 bucket my-tf-test-bucket-20200731003742999700000001 was created.
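
The bucket can also be checked from the command line with the AWS CLI configured earlier:

aws s3 ls | grep my-tf-test-bucket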



10. Destroy the resource using terraform destroy


Now we can destroy the resources that Terraform created, i.e. the resources recorded in the Terraform state file.


Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ terraform destroy

provider.aws.region

  The region where AWS operations will take place. Examples

  are us-east-1, us-west-2, etc.


  Enter a value: us-east-1


aws_s3_bucket.b: Refreshing state... [id=my-tf-test-bucket-20200731003742999700000001]


An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

  - destroy


Terraform will perform the following actions:


  # aws_s3_bucket.b will be destroyed

  - resource "aws_s3_bucket" "b" {

      - acl                         = "private" -> null

      - arn                         = "arn:aws:s3:::my-tf-test-bucket-20200731003742999700000001" -> null

      - bucket                      = "my-tf-test-bucket-20200731003742999700000001" -> null

      - bucket_domain_name          = "my-tf-test-bucket-20200731003742999700000001.s3.amazonaws.com" -> null

      - bucket_prefix               = "my-tf-test-bucket-" -> null

      - bucket_regional_domain_name = "my-tf-test-bucket-20200731003742999700000001.s3.amazonaws.com" -> null

      - force_destroy               = false -> null

      - hosted_zone_id              = "Z3AQBSTGFYJSTF" -> null

      - id                          = "my-tf-test-bucket-20200731003742999700000001" -> null

      - region                      = "us-east-1" -> null

      - request_payer               = "BucketOwner" -> null

      - tags                        = {

          - "Environment" = "Dev"

          - "Name"        = "My bucket"

        } -> null


      - versioning {

          - enabled    = false -> null

          - mfa_delete = false -> null

        }

    }


Plan: 0 to add, 0 to change, 1 to destroy.


Do you really want to destroy all resources?

  Terraform will destroy all your managed infrastructure, as shown above.

  There is no undo. Only 'yes' will be accepted to confirm.


  Enter a value: yes


aws_s3_bucket.b: Destroying... [id=my-tf-test-bucket-20200731003742999700000001]

aws_s3_bucket.b: Destruction complete after 1s


Destroy complete! Resources: 1 destroyed.



11. Confirm it has been destroyed: terraform state list gives no output, and the S3 bucket is no longer present on AWS.


Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$ terraform state list

Ashoks-MacBook-Pro:simple-aws-configuration ashokkafle$