Mock Exam 1
Some of the scenario questions here are based on KodeKloud's CKA course labs.
CKAD and CKA can have similar scenario questions. It is recommended to go through the CKAD practice tests.
Shortcuts
First, run the two commands below to set up shortcuts:

```shell
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"
```
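As a quick sanity check, these variables simply expand into flag strings. The exports are repeated below so the snippet is self-contained; the kubectl commands are shown as comments since they need a live cluster:

```shell
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"

# Typical exam usage (requires a cluster):
#   kubectl run nginx --image=nginx $do > pod.yaml
#   kubectl delete pod nginx $now

# The shell expands the variable into the flags:
echo "kubectl run nginx --image=nginx $do"
# → kubectl run nginx --image=nginx --dry-run=client -o yaml
```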
Questions
-
Upgrade the current version of kubernetes from 1.26.0 to 1.27.0 exactly using the kubeadm utility. Make sure that the upgrade is carried out one node at a time starting with the controlplane node. To minimize downtime, the deployment gold-nginx should be rescheduled on an alternate node before upgrading each node.
Upgrade controlplane node first and drain node node01 before upgrading it. Pods for gold-nginx should run on the controlplane node subsequently.
Answer
Start with the controlplane node. Check the current versions, then drain it:
```shell
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   30m   v1.26.0
node01         Ready    <none>          29m   v1.26.0

controlplane ~ ➜ k drain controlplane --ignore-daemonsets
node/controlplane cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-9q57t, kube-system/weave-net-txlpl
evicting pod kube-system/coredns-787d4945fb-qdt8z
evicting pod kube-system/coredns-787d4945fb-c4qz4
pod/coredns-787d4945fb-c4qz4 evicted
pod/coredns-787d4945fb-qdt8z evicted
node/controlplane drained

controlplane ~ ➜ k get no
NAME           STATUS                     ROLES           AGE   VERSION
controlplane   Ready,SchedulingDisabled   control-plane   30m   v1.26.0
node01         Ready                      <none>          30m   v1.26.0

controlplane ~ ➜ k get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
gold-nginx-6c5b9dd56c-4sgh9   1/1     Running   0          3m10s   10.244.192.1   node01   <none>           <none>
```

Upgrade the packages (note the `&&` before `apt-mark hold`, so the hold is only applied after a successful install):

```shell
apt update
apt-cache madison kubeadm
apt-mark unhold kubeadm kubelet kubectl && \
apt-get update && apt-get install -y \
  kubeadm=1.27.0-00 \
  kubelet=1.27.0-00 \
  kubectl=1.27.0-00 && \
apt-mark hold kubeadm kubelet kubectl
```

```shell
controlplane ~ ➜ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.0", GitCommit:"1b4df30b3cdfeaba6024e81e559a6cd09a089d65", GitTreeState:"clean", BuildDate:"2023-04-11T17:09:06Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}

controlplane ~ ➜ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.0", GitCommit:"1b4df30b3cdfeaba6024e81e559a6cd09a089d65", GitTreeState:"clean", BuildDate:"2023-04-11T17:10:18Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
```

Apply the upgrade and restart the kubelet:

```shell
kubeadm upgrade plan
kubeadm upgrade apply v1.27.0
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

Uncordon the controlplane:

```shell
controlplane ~ ➜ k uncordon controlplane
node/controlplane uncordoned

controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   38m   v1.27.0
node01         Ready    <none>          37m   v1.26.0

controlplane ~ ➜ k get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
gold-nginx-6c5b9dd56c-4sgh9   1/1     Running   0          10m   10.244.192.1   node01   <none>           <none>
```

Before upgrading node01, drain it first.
```shell
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   39m   v1.27.0
node01         Ready    <none>          38m   v1.26.0

controlplane ~ ➜ k drain node01 --ignore-daemonsets
node/node01 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-d5t6j, kube-system/weave-net-kwhpv
evicting pod kube-system/coredns-5d78c9869d-pzbkb
evicting pod admin2406/deploy2-7b6d9445df-tgp74
evicting pod admin2406/deploy3-66785bc8f5-22nv7
evicting pod default/gold-nginx-6c5b9dd56c-4sgh9
evicting pod admin2406/deploy1-5d88679d77-nvbfc
evicting pod admin2406/deploy5-7cbf794564-t66r2
evicting pod kube-system/coredns-5d78c9869d-844r9
evicting pod admin2406/deploy4-55554b4b4c-zkz7p
pod/deploy5-7cbf794564-t66r2 evicted
I0104 05:34:38.455763   17683 request.go:696] Waited for 1.05165341s due to client-side throttling, not priority and fairness, request: GET:https://controlplane:6443/api/v1/namespaces/admin2406/pods/deploy1-5d88679d77-nvbfc
pod/deploy1-5d88679d77-nvbfc evicted
pod/deploy4-55554b4b4c-zkz7p evicted
pod/deploy3-66785bc8f5-22nv7 evicted
pod/gold-nginx-6c5b9dd56c-4sgh9 evicted
pod/deploy2-7b6d9445df-tgp74 evicted
pod/coredns-5d78c9869d-844r9 evicted
pod/coredns-5d78c9869d-pzbkb evicted
node/node01 drained

controlplane ~ ➜ k get no
NAME           STATUS                     ROLES           AGE   VERSION
controlplane   Ready                      control-plane   39m   v1.27.0
node01         Ready,SchedulingDisabled   <none>          39m   v1.26.0
```

Now run the same upgrade commands on node01.
```shell
ssh node01
apt update
apt-cache madison kubeadm
apt-mark unhold kubeadm kubelet kubectl && \
apt-get update && apt-get install -y \
  kubeadm=1.27.0-00 \
  kubelet=1.27.0-00 \
  kubectl=1.27.0-00 && \
apt-mark hold kubeadm kubelet kubectl
kubeadm version
kubectl version
# On worker nodes, use "kubeadm upgrade node" instead of "kubeadm upgrade apply":
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

```shell
root@node01 ~ ✦ ✖ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.0", GitCommit:"1b4df30b3cdfeaba6024e81e559a6cd09a089d65", GitTreeState:"clean", BuildDate:"2023-04-11T17:09:06Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}

root@node01 ~ ✦ ➜ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.0", GitCommit:"1b4df30b3cdfeaba6024e81e559a6cd09a089d65", GitTreeState:"clean", BuildDate:"2023-04-11T17:10:18Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
```

Before uncordoning node01, we must first make sure that the gold-nginx pod is running on the controlplane, as instructed.
```shell
controlplane ~ ✦2 ✖ k get po
NAME                          READY   STATUS    RESTARTS   AGE
gold-nginx-6c5b9dd56c-xjc6c   0/1     Pending   0          3m30s

controlplane ~ ✦2 ➜ k describe po gold-nginx-6c5b9dd56c-xjc6c | grep -i events -A 5
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  4m59s  default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

controlplane ~ ✦2 ➜ k describe nodes controlplane | grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
```

To allow pods to be scheduled on the controlplane, we need to remove the taint from it.
```shell
controlplane ~ ✦2 ✖ k describe nodes controlplane | grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

controlplane ~ ✦2 ➜ k taint no controlplane node-role.kubernetes.io/control-plane:NoSchedule-
node/controlplane untainted

controlplane ~ ✦2 ➜ k describe nodes controlplane | grep -i taint
Taints:             <none>

controlplane ~ ✦2 ➜ k get po
NAME                          READY   STATUS              RESTARTS   AGE
gold-nginx-6c5b9dd56c-xjc6c   0/1     ContainerCreating   0          7m10s

controlplane ~ ✦2 ➜ k get po
NAME                          READY   STATUS    RESTARTS   AGE
gold-nginx-6c5b9dd56c-xjc6c   1/1     Running   0          7m13s

controlplane ~ ✦2 ➜ k get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
gold-nginx-6c5b9dd56c-xjc6c   1/1     Running   0          7m17s   10.244.0.4   controlplane   <none>           <none>
```

We can now uncordon node01.
```shell
controlplane ~ ✦2 ➜ k get no
NAME           STATUS                     ROLES           AGE   VERSION
controlplane   Ready                      control-plane   56m   v1.27.0
node01         Ready,SchedulingDisabled   <none>          55m   v1.27.0

controlplane ~ ✦2 ➜ k uncordon node01
node/node01 uncordoned

controlplane ~ ✦2 ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   56m   v1.27.0
node01         Ready    <none>          55m   v1.27.0

controlplane ~ ✦2 ➜ k get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
gold-nginx-6c5b9dd56c-xjc6c   1/1     Running   0          7m44s   10.244.0.4   controlplane   <none>           <none>
```

-
Print the names of all deployments in the admin2406 namespace in the following format:
```
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
```

The data should be sorted in increasing order of the deployment name.
Answer
```shell
controlplane ~ ✦2 ➜ k get ns
NAME              STATUS   AGE
admin1401         Active   28m
admin2406         Active   28m
alpha             Active   28m
default           Active   57m
kube-node-lease   Active   57m
kube-public       Active   57m
kube-system       Active   57m

controlplane ~ ✦2 ➜ k get deployments.apps -n admin2406
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
deploy1   1/1     1            1           29m
deploy2   1/1     1            1           29m
deploy3   1/1     1            1           29m
deploy4   1/1     1            1           29m
deploy5   1/1     1            1           29m
```

Use custom columns to specify the headers.
```shell
controlplane ~ ✦2 ➜ k get -n admin2406 deployments.apps -o custom-columns="DEPLOYMENT:a,CONTAINER_IMAGE:b,READY_REPLICAS:c,NAMESPACE:d"
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
<none>       <none>            <none>           <none>
<none>       <none>            <none>           <none>
<none>       <none>            <none>           <none>
<none>       <none>            <none>           <none>
<none>       <none>            <none>           <none>
```

Now that we have the format, we just need to supply the values. Let's inspect one sample deployment first.
```shell
controlplane ~ ✦2 ➜ k get deployments.apps -n admin2406 deploy1
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
deploy1   1/1     1            1           35m
```

Determine the values that are needed and use dot notation.
```shell
controlplane ~ ✦2 ➜ k get deployments.apps -n admin2406 deploy1 -o json
{
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "annotations": {
            "deployment.kubernetes.io/revision": "1"
        },
        "creationTimestamp": "2024-01-04T10:23:24Z",
        "generation": 1,
        "labels": {
            "app": "deploy1"
        },
        "name": "deploy1",
        "namespace": "admin2406",
        "resourceVersion": "6133",
        "uid": "0c04a727-afbd-4242-9a9f-abc45879367b"
    },
    "spec": {
        "progressDeadlineSeconds": 600,
        "replicas": 1,
        "revisionHistoryLimit": 10,
        "selector": {
            "matchLabels": {
                "app": "deploy1"
            }
        },
        "strategy": {
            "rollingUpdate": {
                "maxSurge": "25%",
                "maxUnavailable": "25%"
            },
            "type": "RollingUpdate"
        },
        "template": {
            "metadata": {
                "creationTimestamp": null,
                "labels": {
                    "app": "deploy1"
                }
            },
            "spec": {
                "containers": [
                    {
                        "image": "nginx",
                        "imagePullPolicy": "Always",
                        "name": "nginx",
```

Start with the first column:
Deployment name: `{.metadata.name}`

```shell
controlplane ~ ✦2 ➜ k get -n admin2406 deployments.apps -o custom-columns="DEPLOYMENT:{.metadata.name},CONTAINER_IMAGE:b,READY_REPLICAS:c,NAMESPACE:d"
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      <none>            <none>           <none>
deploy2      <none>            <none>           <none>
deploy3      <none>            <none>           <none>
deploy4      <none>            <none>           <none>
deploy5      <none>            <none>           <none>
```

Now the container image.
Container image: `{.spec.template.spec.containers[0].image}`

```shell
controlplane ~ ✦2 ➜ k get -n admin2406 deployments.apps -o custom-columns="DEPLOYMENT:{.metadata.name},CONTAINER_IMAGE:{.spec.template.spec.containers[0].image},READY_REPLICAS:c,NAMESPACE:d"
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             <none>           <none>
deploy2      nginx:alpine      <none>           <none>
deploy3      nginx:1.16        <none>           <none>
deploy4      nginx:1.17        <none>           <none>
deploy5      nginx:latest      <none>           <none>
```

Now the ready replicas.
Ready replicas: `{.status.readyReplicas}`

```shell
controlplane ~ ✦2 ➜ k get -n admin2406 deployments.apps -o custom-columns="DEPLOYMENT:{.metadata.name},CONTAINER_IMAGE:{.spec.template.spec.containers[0].image},READY_REPLICAS:{.status.readyReplicas},NAMESPACE:d"
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             1                <none>
deploy2      nginx:alpine      1                <none>
deploy3      nginx:1.16        1                <none>
deploy4      nginx:1.17        1                <none>
deploy5      nginx:latest      1                <none>
```

Finally, the namespace.
Namespace: `{.metadata.namespace}`

```shell
controlplane ~ ✦2 ➜ k get -n admin2406 deployments.apps -o custom-columns="DEPLOYMENT:{.metadata.name},CONTAINER_IMAGE:{.spec.template.spec.containers[0].image},READY_REPLICAS:{.status.readyReplicas},NAMESPACE:{.metadata.namespace}"
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             1                admin2406
deploy2      nginx:alpine      1                admin2406
deploy3      nginx:1.16        1                admin2406
deploy4      nginx:1.17        1                admin2406
deploy5      nginx:latest      1                admin2406
```

Now sort it by deployment name.
```shell
controlplane ~ ➜ kubectl -n admin2406 get deployment -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace --sort-by=.metadata.name
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             1                admin2406
deploy2      nginx:alpine      1                admin2406
deploy3      nginx:1.16        1                admin2406
deploy4      nginx:1.17        1                admin2406
deploy5      nginx:latest      1                admin2406
```

Finally, redirect the output to the specified file.
```shell
kubectl -n admin2406 get deployment -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace --sort-by=.metadata.name > /opt/admin2406_data

controlplane ~ ➜ ls -la /opt/admin2406_data
-rw-r--r-- 1 root root 348 Jan  4 06:57 /opt/admin2406_data

controlplane ~ ➜ cat /opt/admin2406_data
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             1                admin2406
deploy2      nginx:alpine      1                admin2406
deploy3      nginx:1.16        1                admin2406
deploy4      nginx:1.17        1                admin2406
deploy5      nginx:latest      1                admin2406
```

-
A kubeconfig file called admin.kubeconfig has been created in /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
Answer
Make sure the port for the kube-apiserver is correct. For this lab, change the port from 4380 to the default 6443.
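The edit can also be done non-interactively with sed instead of vi; a minimal sketch on a throwaway file (the /tmp path and its one-line contents are illustrative, not the real lab kubeconfig):

```shell
# Simulate the broken server line from the kubeconfig (illustrative path)
printf 'server: https://controlplane:4380\n' > /tmp/demo.kubeconfig

# Swap the wrong port 4380 for the default kube-apiserver port 6443
sed -i 's/controlplane:4380/controlplane:6443/' /tmp/demo.kubeconfig

cat /tmp/demo.kubeconfig
# → server: https://controlplane:6443
```

On the exam, the same sed line can be pointed at /root/CKA/admin.kubeconfig.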
Run the command below to view the cluster information:
```shell
controlplane ~ ➜ kubectl cluster-info --kubeconfig /root/CKA/admin.kubeconfig
E0104 07:00:03.980973   11082 memcache.go:238] couldn't get current server API group list: Get "https://controlplane:4380/api?timeout=32s": dial tcp 192.26.250.9:4380: connect: connection refused
E0104 07:00:03.981343   11082 memcache.go:238] couldn't get current server API group list: Get "https://controlplane:4380/api?timeout=32s": dial tcp 192.26.250.9:4380: connect: connection refused
E0104 07:00:03.982790   11082 memcache.go:238] couldn't get current server API group list: Get "https://controlplane:4380/api?timeout=32s": dial tcp 192.26.250.9:4380: connect: connection refused
E0104 07:00:03.984160   11082 memcache.go:238] couldn't get current server API group list: Get "https://controlplane:4380/api?timeout=32s": dial tcp 192.26.250.9:4380: connect: connection refused
E0104 07:00:03.985582   11082 memcache.go:238] couldn't get current server API group list: Get "https://controlplane:4380/api?timeout=32s": dial tcp 192.26.250.9:4380: connect: connection refused
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server controlplane:4380 was refused - did you specify the right host or port?
```

Edit the kubeconfig and set the correct server port:

```shell
vi /root/CKA/admin.kubeconfig
```

```yaml
server: https://controlplane:6443
```

```shell
controlplane ~ ➜ kubectl cluster-info --kubeconfig /root/CKA/admin.kubeconfig
Kubernetes control plane is running at https://controlplane:6443
CoreDNS is running at https://controlplane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

-
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update.
Answer
```shell
controlplane ~ ✦2 ➜ k create deployment nginx-deploy --image nginx:1.16 --replicas 1
deployment.apps/nginx-deploy created

controlplane ~ ✦2 ➜ k get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
gold-nginx     1/1     1            1           44m
nginx-deploy   1/1     1            1           5s

controlplane ~ ➜ k set image deploy nginx-deploy nginx=nginx:1.17
deployment.apps/nginx-deploy image updated

controlplane ~ ➜ k rollout status deployment nginx-deploy
deployment "nginx-deploy" successfully rolled out

controlplane ~ ✦2 ➜ k get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
gold-nginx     1/1     1            1           45m
nginx-deploy   1/1     1            1           56s

controlplane ~ ✦2 ➜ k describe deployments.apps nginx-deploy | grep -i image
    Image:        nginx:1.17
```

-
A new deployment called alpha-mysql has been deployed in the alpha namespace. However, the pods are not running. Troubleshoot and fix the issue. The deployment should make use of the persistent volume alpha-pv to be mounted at /var/lib/mysql and should use the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 to make use of an empty root password.
Answer
```shell
controlplane ~ ✖ k get all -n alpha
NAME                               READY   STATUS    RESTARTS   AGE
pod/alpha-mysql-5b7b8988c4-r8ls8   0/1     Pending   0          8m8s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/alpha-mysql   0/1     1            0           8m8s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/alpha-mysql-5b7b8988c4   1         1         0       8m8s

controlplane ~ ➜ k describe deployments.apps -n alpha alpha-mysql
Name:                   alpha-mysql
Namespace:              alpha
CreationTimestamp:      Thu, 04 Jan 2024 06:52:44 -0500
Labels:                 app=alpha-mysql
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=alpha-mysql
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=alpha-mysql
  Containers:
   mysql:
    Image:      mysql:5.6
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ALLOW_EMPTY_PASSWORD:  1
    Mounts:
      /var/lib/mysql from mysql-data (rw)
  Volumes:
   mysql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-alpha-pvc
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   alpha-mysql-5b7b8988c4 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  8m19s  deployment-controller  Scaled up replica set alpha-mysql-5b7b8988c4 to 1
```

Look closely at:
```
Volumes:
 mysql-data:
  Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
  ClaimName:  mysql-alpha-pvc
  ReadOnly:   false
```

However, there is no PVC with that name.
```shell
controlplane ~ ➜ k get pvc -n alpha
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alpha-claim   Pending                                      slow-storage   9m1s
```

Create the PVC.
```yaml
## pvc.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-alpha-pvc
  namespace: alpha
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: slow
```

```shell
controlplane ~ ➜ k apply -f pvc.yml
persistentvolumeclaim/mysql-alpha-pvc created

controlplane ~ ➜ k get -n alpha pvc
NAME              STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alpha-claim       Pending                                        slow-storage   10m
mysql-alpha-pvc   Bound     alpha-pv   1Gi        RWO            slow           11s

controlplane ~ ➜ k get -n alpha po
NAME                           READY   STATUS    RESTARTS   AGE
alpha-mysql-5b7b8988c4-r8ls8   1/1     Running   0          10m

controlplane ~ ➜ k get -n alpha deployments.apps
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
alpha-mysql   1/1     1            1           10m
```

-
Take the backup of ETCD at the location /opt/etcd-backup.db on the controlplane node.
Answer
```shell
controlplane ~ ✦2 ➜ k describe -n kube-system po kube-apiserver-controlplane | grep -i ca
Priority Class Name:  system-node-critical
    Image ID:      registry.k8s.io/kube-apiserver@sha256:89b8d9dbef2b905b7d028ca8b7f79d35ebd9baa66b0a3ee2ddd4f3e0e2804b45
      --client-ca-file=/etc/kubernetes/pki/ca.crt

controlplane ~ ✦2 ➜ k describe -n kube-system po kube-apiserver-controlplane | grep -i server.crt
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt

controlplane ~ ✦2 ➜ k describe -n kube-system po kube-apiserver-controlplane | grep -i .key
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
```

Take the snapshot using the etcd certificates:

```shell
export ETCDCTL_API=3
etcdctl \
  --endpoints=127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/etcd-backup.db

controlplane ~ ➜ ls -la /opt/etcd-backup.db
-rw-r--r-- 1 root root 2134048 Jan  4 07:05 /opt/etcd-backup.db
```

-
Create a pod called secret-1401 in the admin1401 namespace using the busybox image. The container within the pod should be called secret-admin and should sleep for 4800 seconds.
The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.
Answer
```shell
k run secret-1401 -n admin1401 --image=busybox --dry-run=client -o yaml --command -- sleep 4800 > admin.yaml
```

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: secret-1401
  namespace: admin1401
  labels:
    run: secret-1401
spec:
  volumes:
    # secret volume
    - name: secret-volume
      secret:
        secretName: dotfile-secret
  containers:
    - command:
        - sleep
        - "4800"
      image: busybox
      name: secret-admin
      # volume mount path
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/etc/secret-volume"
```

```shell
controlplane ~ ➜ k get po -n admin1401
NAME          READY   STATUS    RESTARTS   AGE
secret-1401   1/1     Running   0          27s
```

-
Expose the hr-web-app deployment as a service named hr-web-app-service on port 30082 on the nodes of the cluster. The web application listens on port 8080.
- Name: hr-web-app-service
- Type: NodePort
- Endpoints: 2
- Port: 8080
- NodePort: 30082
Answer
```shell
controlplane ~ ✦ ➜ k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-57cd7b5799-vzmsz   1/1     Running   0          9m2s
hr-web-app-57cd7b5799-wqshb   1/1     Running   0          9m2s
messaging                     1/1     Running   0          11m
nginx-pod                     1/1     Running   0          11m
orange                        1/1     Running   0          3m23s
static-busybox                1/1     Running   0          5m52s

controlplane ~ ✦ ➜ k get deployments.apps
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hr-web-app   2/2     2            2           9m4s

controlplane ~ ✦ ➜ kubectl expose deployment hr-web-app --type=NodePort --port=8080 --name=hr-web-app-service --dry-run=client -o yaml > hr-web-app-service.yaml
```

Modify the YAML file and add the nodePort field:
```yaml
### hr-web-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hr-web-app
  name: hr-web-app-service
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 30082
  selector:
    app: hr-web-app
  type: NodePort
status:
  loadBalancer: {}
```

Apply the manifest and verify the service:

```shell
controlplane ~ ➜ k apply -f hr-web-app-service.yaml

controlplane ~ ➜ k get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hr-web-app-service   NodePort    10.106.44.228    <none>        8080:30082/TCP   23s
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP          128m
messaging-service    ClusterIP   10.100.177.203   <none>        6379/TCP         30m
```

-
Create a static pod named static-busybox on the controlplane node that uses the busybox image and the command sleep 1000.
Answer
```shell
kubectl run --restart=Never --image=busybox static-busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml

controlplane ~ ➜ ls -la /etc/kubernetes/manifests/static-busybox.yaml
-rw-r--r-- 1 root root 298 Jan  4 08:12 /etc/kubernetes/manifests/static-busybox.yaml

controlplane ~ ➜ k get po
NAME                          READY   STATUS    RESTARTS   AGE
hr-web-app-57cd7b5799-vzmsz   1/1     Running   0          26m
hr-web-app-57cd7b5799-wqshb   1/1     Running   0          26m
messaging                     1/1     Running   0          28m
nginx-pod                     1/1     Running   0          28m
orange                        1/1     Running   0          20m
static-busybox                1/1     Running   0          23m
```

-
Create a Persistent Volume with the given specification:

- Volume name: pv-analytics
- Storage: 100Mi
- Access mode: ReadWriteMany
- Host path: /pv/data-analytics
Answer
```yaml
## pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /pv/data-analytics
```

```shell
controlplane ~ ➜ k apply -f pv.yml
persistentvolume/pv-analytics created

controlplane ~ ➜ k get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-analytics   100Mi      RWX            Retain           Available
```

-
Use a JSONPath query to retrieve the osImages of all the nodes and store it in the file /opt/outputs/nodes_os_x43kj56.txt.
The osImages are under the nodeInfo section under status of each node.
Answer
```shell
controlplane ~ ➜ k get no -o jsonpath='{.items[*].status.nodeInfo}'
{"architecture":"amd64","bootID":"1ededf94-06c6-443e-b30a-58a8637de4ad","containerRuntimeVersion":"containerd://1.6.6","kernelVersion":"5.4.0-1106-gcp","kubeProxyVersion":"v1.27.0","kubeletVersion":"v1.27.0","machineID":"73d7539cb95c4ef09a8ddd274b5251bc","operatingSystem":"linux","osImage":"Ubuntu 20.04.6 LTS","systemUUID":"f27b8c4f-18b7-2007-fc27-ce8e34bfff92"}

controlplane ~ ➜ k get no -o jsonpath='{.items[*].status.nodeInfo.osImage}'
Ubuntu 20.04.6 LTS

controlplane ~ ➜ kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}' > /opt/outputs/nodes_os_x43kj56.txt

controlplane ~ ➜ ls -l /opt/outputs/
total 20
-rw-r--r-- 1 root root    18 Jan  4 08:19 nodes_os_x43kj56.txt
-rw-r--r-- 1 root root 12296 Jan  4 07:45 nodes-z3444kd9.json

controlplane ~ ➜ cat /opt/outputs/nodes_os_x43kj56.txt
Ubuntu 20.04.6 LTS
```

-
Create a new pod called super-user-pod with image busybox:1.28. Allow the pod to be able to set system_time. Pod should sleep for 4800 seconds.
Answer
```yaml
## super.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: super-user-pod
  name: super-user-pod
spec:
  containers:
    - image: busybox:1.28
      name: super-user-pod
      command: ["sh", "-c", "sleep 4800"]
      securityContext:
        capabilities:
          add: ["SYS_TIME"]
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```shell
controlplane ~ ➜ k apply -f super.yml
pod/super-user-pod created

controlplane ~ ➜ k get po
NAME                            READY   STATUS    RESTARTS   AGE
nginx-critical                  1/1     Running   0          6m28s
nginx-deploy-5c95467974-d27mz   1/1     Running   0          12m
redis-storage                   1/1     Running   0          28m
super-user-pod                  1/1     Running   0          3s
```
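One way to sanity-check the capability is to try setting the clock from inside the container; a hedged sketch that needs the lab cluster (the date value is arbitrary, and the exact busybox error text may vary):

```shell
# With SYS_TIME granted this should succeed; without the capability,
# busybox date fails with a permission error.
kubectl exec super-user-pod -- date -s "2024-01-04 12:00:00"
```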