Core Concepts
Some of the scenario questions here are based on KodeKloud's CKA course labs.
CKAD and CKA can have similar scenario questions. It is recommended to go through the CKAD practice tests.
Shortcuts
First, run the two commands below to set up shell shortcuts used throughout.
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"
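These variables expand inline, so `kubectl run nginx --image=nginx --dry-run=client -o yaml` shortens to `k run nginx --image=nginx $do`. A quick sketch of how the expansion works (plain shell, no cluster needed — the echo lines just print the full commands that would run):

```shell
# Define the shortcuts.
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"

# The shell substitutes the variables before the command executes,
# so these print the fully expanded kubectl invocations:
echo "kubectl run nginx --image=nginx $do"
echo "kubectl delete pod nginx $now"
```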
Questions
-
Create a new pod with the nginx image.
Answer
k run nginx --image=nginx
k get po
-
What is the state of the container agentx in the pod webapp?
controlplane ~ ➜ k get po
NAME            READY   STATUS             RESTARTS   AGE
nginx           1/1     Running            0          16m
newpods-vwgw4   1/1     Running            0          4m58s
newpods-tkscd   1/1     Running            0          4m58s
newpods-jc7l5   1/1     Running            0          4m58s
webapp          1/2     ImagePullBackOff   0          3m41s
Answer
controlplane ~ ➜ k describe po webapp | grep agentx -A 5
  agentx:
    Container ID:
    Image:          agentx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
The container agentx is in the Waiting state, with reason ImagePullBackOff.
-
What does the READY column in the output of the kubectl get pods command indicate?
Answer
Ready containers in the pod / Total containers in the pod
-
Delete the webapp Pod.
controlplane ~ ➜ k get po
NAME     READY   STATUS             RESTARTS   AGE
webapp   1/2     ImagePullBackOff   0          3m41s
Answer
k delete po webapp --force --grace-period=0
(equivalently, using the shortcut defined earlier: k delete po webapp $now)
-
Create a new pod with the name redis and the image redis123. Use a pod-definition YAML file.
Answer
controlplane ~ ➜ k run redis --image=redis123 --dry-run=client -o yaml > redis.yml
controlplane ~ ➜ ls
redis.yml
controlplane ~ ➜ k apply -f redis.yml
pod/redis created
controlplane ~ ➜ k get po
NAME    READY   STATUS             RESTARTS   AGE
redis   0/1     ImagePullBackOff   0          40s
-
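For reference, the manifest that `k run redis --image=redis123 $do` writes to redis.yml looks roughly like this (field order and defaults may vary slightly between kubectl versions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis
  name: redis
spec:
  containers:
  - image: redis123
    name: redis
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```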
Deploy a redis pod using the redis:alpine image with the labels set to tier=db
Answer
controlplane ~ ✖ k run redis --image="redis:alpine" --labels="tier=db" --dry-run=client
pod/redis created (dry run)
controlplane ~ ➜ k run redis --image="redis:alpine" --labels="tier=db"
pod/redis created
controlplane ~ ➜ k get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
nginx-pod   1/1     Running   0          2m13s   10.42.0.9    controlplane   <none>           <none>
redis       1/1     Running   0          7s      10.42.0.10   controlplane   <none>           <none>
controlplane ~ ➜ k describe po redis | grep Label
Labels:       tier=db
-
A pod definition file nginx.yaml is given. Create a pod using the file.
## nginx.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
Answer
controlplane ~ ➜ k apply -f nginx.yaml
pod/nginx created
controlplane ~ ➜ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          4s
-
What is the image used to create the pods in the new-replica-set?
controlplane ~ ➜ k get po
NAME                    READY   STATUS             RESTARTS   AGE
new-replica-set-988qh   0/1     ImagePullBackOff   0          84s
new-replica-set-b4blf   0/1     ImagePullBackOff   0          84s
new-replica-set-fqlg2   0/1     ImagePullBackOff   0          84s
new-replica-set-9qn8h   0/1     ImagePullBackOff   0          84s
Answer
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       86s
controlplane ~ ✖ k describe rs new-replica-set | grep Image
    Image:      busybox777
-
Why do you think the PODs are not ready?
controlplane ~ ➜ k get po
NAME                    READY   STATUS             RESTARTS   AGE
new-replica-set-988qh   0/1     ImagePullBackOff   0          84s
new-replica-set-b4blf   0/1     ImagePullBackOff   0          84s
new-replica-set-fqlg2   0/1     ImagePullBackOff   0          84s
new-replica-set-9qn8h   0/1     ImagePullBackOff   0          84s
Answer
The image busybox777 doesn't exist, so the image pull fails and the containers never start.
controlplane ~ ➜ k get po
NAME                    READY   STATUS             RESTARTS   AGE
new-replica-set-fqlg2   0/1     ImagePullBackOff   0          3m39s
new-replica-set-b4blf   0/1     ImagePullBackOff   0          3m39s
new-replica-set-9qn8h   0/1     ImagePullBackOff   0          3m39s
new-replica-set-988qh   0/1     ImagePullBackOff   0          3m39s
controlplane ~ ➜ k describe po new-replica-set-b4blf | grep Events -A 5
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m49s                  default-scheduler  Successfully assigned default/new-replica-set-b4blf to controlplane
  Normal   Pulling    2m16s (x4 over 3m49s)  kubelet            Pulling image "busybox777"
  Warning  Failed     2m15s (x4 over 3m48s)  kubelet            Failed to pull image "busybox777": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox777:latest": failed to resolve reference "docker.io/library/busybox777:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
-
Create a ReplicaSet using the replicaset-definition-1.yaml file located at /root/. There is an issue with the file, so try to fix it.
## replicaset-definition-1.yaml
apiVersion: v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
Answer
controlplane ~ ➜ k apply -f replicaset-definition-1.yaml
error: resource mapping not found for name: "replicaset-1" namespace: "" from "replicaset-definition-1.yaml": no matches for kind "ReplicaSet" in version "v1"
ensure CRDs are installed first
controlplane ~ ✖ k api-resources | grep rep
replicationcontrollers   rc   v1        true   ReplicationController
replicasets              rs   apps/v1   true   ReplicaSet
ReplicaSet belongs to the apps/v1 API group, not v1. Fix the apiVersion, then apply.
## replicaset-definition-1.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-1
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
controlplane ~ ➜ k apply -f replicaset-definition-1.yaml
replicaset.apps/replicaset-1 created
controlplane ~ ➜ k get rs
NAME           DESIRED   CURRENT   READY   AGE
replicaset-1   2         2         0       4s
-
Fix the issue in the replicaset-definition-2.yaml file and create a ReplicaSet using it.
## replicaset-definition-2.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-2
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Answer
controlplane ~ ➜ k apply -f replicaset-definition-2.yaml
The ReplicaSet "replicaset-2" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"tier":"nginx"}: `selector` does not match template `labels`
The template labels must match the selector. Fix the labels, then apply.
## replicaset-definition-2.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-2
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
controlplane ~ ➜ k apply -f replicaset-definition-2.yaml
replicaset.apps/replicaset-2 created
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       11m
replicaset-1      2         2         2       2m22s
replicaset-2      2         2         2       6s
-
Delete the two newly created ReplicaSets - replicaset-1 and replicaset-2.
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       12m
replicaset-1      2         2         2       3m22s
replicaset-2      2         2         2       66s
Answer
controlplane ~ ➜ k delete rs replicaset-1
replicaset.apps "replicaset-1" deleted
controlplane ~ ➜ k delete rs replicaset-2
replicaset.apps "replicaset-2" deleted
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       12m
-
Fix the original replica set new-replica-set to use the correct busybox image.
Answer
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         0       13m
controlplane ~ ➜ k get rs new-replica-set -o yaml > new-rs.yml
controlplane ~ ➜ ls
new-rs.yml
controlplane ~ ➜ cat new-rs.yml
## new-rs.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  creationTimestamp: "2023-12-28T14:59:48Z"
  generation: 1
  name: new-replica-set
  namespace: default
  resourceVersion: "952"
  uid: b23fad99-5515-4e17-abb2-0f9524180759
spec:
  replicas: 4
  selector:
    matchLabels:
      name: busybox-pod
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: busybox-pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo Hello Kubernetes! && sleep 3600
        image: busybox777
        imagePullPolicy: Always
        name: busybox-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  fullyLabeledReplicas: 4
  observedGeneration: 1
  replicas: 4
controlplane ~ ➜ k delete -f new-rs.yml
replicaset.apps "new-replica-set" deleted
controlplane ~ ➜ k get po
No resources found in default namespace.
Fix the image in new-rs.yml, then apply.
        image: busybox
controlplane ~ ➜ k apply -f new-rs.yml
replicaset.apps/new-replica-set created
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         4       4s
controlplane ~ ➜ k get po
NAME                    READY   STATUS    RESTARTS   AGE
new-replica-set-kp7m5   1/1     Running   0          7s
new-replica-set-v6ts5   1/1     Running   0          7s
new-replica-set-j5k5x   1/1     Running   0          7s
new-replica-set-tzbrt   1/1     Running   0          7s
-
Scale the ReplicaSet to 5 PODs.
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   4         4         4       33s
Answer
controlplane ~ ➜ k edit rs new-replica-set
spec:
  replicas: 5
controlplane ~ ➜ k get rs
NAME              DESIRED   CURRENT   READY   AGE
new-replica-set   5         5         5       118s
-
What is the image used to create the pods in the new deployment?
controlplane ~ ➜ k get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
frontend-deployment   0/4     4            0           103s
Answer
controlplane ~ ➜ k get po
NAME                                   READY   STATUS             RESTARTS   AGE
frontend-deployment-577494fd6f-kcjjq   0/1     ImagePullBackOff   0          107s
frontend-deployment-577494fd6f-mth6d   0/1     ImagePullBackOff   0          107s
frontend-deployment-577494fd6f-8ffvh   0/1     ErrImagePull       0          107s
frontend-deployment-577494fd6f-4tgk4   0/1     ErrImagePull       0          107s
controlplane ~ ➜ k describe deploy frontend-deployment | grep Image
    Image:      busybox888
-
Why do you think the deployment is not ready?
controlplane ~ ➜ k get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
frontend-deployment   0/4     4            0           103s
Answer
The image busybox888 doesn't exist, so the image pull fails and the containers never start.
controlplane ~ ➜ k get po
NAME                                   READY   STATUS             RESTARTS   AGE
frontend-deployment-577494fd6f-kcjjq   0/1     ImagePullBackOff   0          107s
frontend-deployment-577494fd6f-mth6d   0/1     ImagePullBackOff   0          107s
frontend-deployment-577494fd6f-8ffvh   0/1     ErrImagePull       0          107s
frontend-deployment-577494fd6f-4tgk4   0/1     ErrImagePull       0          107s
controlplane ~ ➜ k describe po frontend-deployment-577494fd6f-kcjjq | grep Events -A 10
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m56s                default-scheduler  Successfully assigned default/frontend-deployment-577494fd6f-kcjjq to controlplane
  Normal   Pulling    83s (x4 over 2m55s)  kubelet            Pulling image "busybox888"
  Warning  Failed     83s (x4 over 2m54s)  kubelet            Failed to pull image "busybox888": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox888:latest": failed to resolve reference "docker.io/library/busybox888:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
-
Create a new Deployment using the deployment-definition-1.yaml file located at /root/. There is an issue with the file, so try to fix it.
## deployment-definition-1.yaml
---
apiVersion: apps/v1
kind: deployment
metadata:
  name: deployment-1
spec:
  replicas: 2
  selector:
    matchLabels:
      name: busybox-pod
  template:
    metadata:
      labels:
        name: busybox-pod
    spec:
      containers:
      - name: busybox-container
        image: busybox888
        command:
        - sh
        - "-c"
        - echo Hello Kubernetes! && sleep 3600
Answer
controlplane ~ ➜ k apply -f deployment-definition-1.yaml
Error from server (BadRequest): error when creating "deployment-definition-1.yaml": deployment in version "v1" cannot be handled as a Deployment: no kind "deployment" is registered for version "apps/v1" in scheme "k8s.io/apimachinery@v1.27.1-k3s1/pkg/runtime/scheme.go:100"
controlplane ~ ✖ k api-resources | grep deploy
deployments   deploy   apps/v1   true   Deployment
The kind is case-sensitive. Fix it to Deployment, then apply.
## deployment-definition-1.yaml
---
apiVersion: apps/v1
kind: Deployment
controlplane ~ ➜ k apply -f deployment-definition-1.yaml
deployment.apps/deployment-1 created
-
Create a new Deployment with the below attributes using your own deployment definition file.
- Name: httpd-frontend
- Replicas: 3
- Image: httpd:2.4-alpine
Answer
controlplane ~ ➜ k create deploy httpd-frontend --image="httpd:2.4-alpine" --replicas=3 --dry-run=client
deployment.apps/httpd-frontend created (dry run)
controlplane ~ ➜ k create deploy httpd-frontend --image="httpd:2.4-alpine" --replicas=3 --dry-run=client -o yaml > httpd.yml
controlplane ~ ➜ k apply -f httpd.yml
deployment.apps/httpd-frontend created
controlplane ~ ➜ k get deploy
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
frontend-deployment   0/4     4            0           18m
deployment-1          0/2     2            0           7m26s
httpd-frontend        0/3     3            0           5s
-
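The httpd.yml generated above should look roughly like this (kubectl create deploy names the container after the image basename and labels everything app=<name>; exact defaults may vary by kubectl version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: httpd-frontend
  name: httpd-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd-frontend
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: httpd-frontend
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: httpd
        resources: {}
status: {}
```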
Create a new deployment called redis-deploy in the dev-ns namespace with the redis image. It should have 2 replicas.
Answer
controlplane ~ ➜ export dr="--dry-run=client"
controlplane ~ ➜ k create deploy redis-deploy --namespace=dev-ns --image=redis --replicas=2 $dr
deployment.apps/redis-deploy created (dry run)
controlplane ~ ➜ k create deploy redis-deploy --namespace=dev-ns --image=redis --replicas=2
deployment.apps/redis-deploy created
controlplane ~ ➜ k get po -n dev-ns
NAME                           READY   STATUS    RESTARTS   AGE
redis-deploy-8b745d48d-f9m8t   1/1     Running   0          6s
redis-deploy-8b745d48d-vlqg4   1/1     Running   0          6s
controlplane ~ ➜ k get deploy -n dev-ns
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
redis-deploy   2/2     2            2           18s
-
How many pods exist in the research namespace?
Answer
There are 2 pods in the research namespace.
controlplane ~ ➜ k get po -n research
NAME    READY   STATUS             RESTARTS      AGE
dna-1   0/1     CrashLoopBackOff   3 (18s ago)   71s
dna-2   0/1     CrashLoopBackOff   3 (17s ago)   71s
-
Create a POD in the finance namespace.
- Name: redis
- Image name: redis
Answer
controlplane ~ ➜ k run redis --image=redis --namespace=finance --dry-run=client
pod/redis created (dry run)
controlplane ~ ➜ k run redis --image=redis --namespace=finance
pod/redis created
controlplane ~ ➜ k get po -n finance
NAME      READY   STATUS    RESTARTS   AGE
payroll   1/1     Running   0          2m19s
redis     1/1     Running   0          8s
-
Which namespace has the blue pod in it?
Answer
controlplane ~ ➜ k get po -A | grep blue
marketing   blue   1/1   Running   0   3m14s
The blue pod is in the marketing namespace.
-
What DNS name should the Blue application use to access the database db-service in its own namespace - marketing?
controlplane ~ ➜ k get po -n marketing
NAME       READY   STATUS    RESTARTS   AGE
redis-db   1/1     Running   0          5m50s
blue       1/1     Running   0          5m50s
controlplane ~ ➜ k get svc -n marketing
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
blue-service   NodePort   10.43.27.58   <none>        8080:30082/TCP   5m53s
db-service     NodePort   10.43.187.8   <none>        6379:31378/TCP   5m53s
Answer
Since the blue application and the db-service are in the same namespace, we can simply use the service name to access the database.
db-service
-
What DNS name should the Blue application use to access the database db-service in the dev namespace?
controlplane ~ ➜ k get po -n marketing
NAME       READY   STATUS    RESTARTS   AGE
redis-db   1/1     Running   0          7m11s
blue       1/1     Running   0          7m11s
controlplane ~ ➜ k get svc -n dev
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
db-service   ClusterIP   10.43.48.220   <none>        6379/TCP   7m19s
Answer
Since the blue application and the db-service are in different namespaces, we need to use the service name along with the namespace to access the database. The FQDN (Fully Qualified Domain Name) for the db-service in this example is db-service.dev.svc.cluster.local.
You can also access it using the service name and namespace like this: db-service.dev
db-service.dev.svc.cluster.local
-
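Both of the DNS answers above follow the general naming pattern for Services (cluster.local is the default cluster domain; any suffix of the name resolves from inside the cluster):

```
<service-name>.<namespace>.svc.<cluster-domain>
db-service.dev.svc.cluster.local
```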
Create a new namespace called dev-ns.
Answer
controlplane ~ ➜ k create ns dev-ns
namespace/dev-ns created
controlplane ~ ➜ k get ns
NAME              STATUS   AGE
kube-system       Active   28m
default           Active   28m
kube-public       Active   28m
kube-node-lease   Active   28m
dev-ns            Active   49s
-
What is the targetPort configured on the kubernetes service?
controlplane ~ ➜ k get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   6m33s
Answer
controlplane ~ ➜ k describe svc kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.0.1
IPs:               10.43.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.35.187.6:6443
Session Affinity:  None
Events:            <none>
The targetPort is 6443/TCP.
-
Create a new service to access the web application using the service-definition-1.yaml file.
- Name: webapp-service
- Type: NodePort
- targetPort: 8080
- port: 8080
- nodePort: 30080
- selector: name: simple-webapp
Answer
Create the file and apply.
## service-definition-1.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: simple-webapp
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30080
controlplane ~ ➜ k apply -f service-definition-1.yaml
service/webapp-service created
controlplane ~ ➜ k get svc -o wide
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE   SELECTOR
kubernetes       ClusterIP   10.43.0.1     <none>        443/TCP          14m   <none>
webapp-service   NodePort    10.43.22.35   <none>        8080:30080/TCP   73s   name=simple-webapp
-
Create a service redis-service to expose the redis application within the cluster on port 6379.
Answer
controlplane ~ ➜ k get deploy
No resources found in default namespace.
controlplane ~ ➜ k get po
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          9m28s
redis       1/1     Running   0          5m42s
controlplane ~ ➜ k expose po redis --port=6379 --name=redis-service --dry-run=client
service/redis-service exposed (dry run)
controlplane ~ ➜ k expose po redis --port=6379 --name=redis-service
service/redis-service exposed
controlplane ~ ➜ k get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.43.0.1       <none>        443/TCP    18m
redis-service   ClusterIP   10.43.109.102   <none>        6379/TCP   5s
-
Create a new pod called custom-nginx using the nginx image and expose it on container port 8080.
Answer
controlplane ~ ➜ k run custom-nginx --image=nginx --port=8080
pod/custom-nginx created
-
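Note that `--port` here only records a containerPort on the pod spec; it does not create a Service. The relevant part of the pod that `k run` generates looks roughly like this:

```yaml
spec:
  containers:
  - image: nginx
    name: custom-nginx
    ports:
    - containerPort: 8080
```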
Create a pod called httpd using the image httpd:alpine in the default namespace. Next, create a service of type ClusterIP by the same name (httpd). The target port for the service should be 80.
Answer
controlplane ~ ➜ export dr="--dry-run=client"
controlplane ~ ➜ k run httpd --image="httpd:alpine"
pod/httpd created
controlplane ~ ➜ k get po
NAME                      READY   STATUS    RESTARTS   AGE
nginx-pod                 1/1     Running   0          25m
redis                     1/1     Running   0          21m
webapp-7fdf67dd49-2swjm   1/1     Running   0          11m
webapp-7fdf67dd49-rrz5r   1/1     Running   0          11m
webapp-7fdf67dd49-pkpzx   1/1     Running   0          11m
custom-nginx              1/1     Running   0          6m13
controlplane ~ ✖ k expose po httpd --port=80 --type=ClusterIP $dr
service/httpd exposed (dry run)
controlplane ~ ➜ k expose po httpd --port=80 --type=ClusterIP
service/httpd exposed
controlplane ~ ➜ k get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.43.0.1       <none>        443/TCP    35m
redis-service   ClusterIP   10.43.109.102   <none>        6379/TCP   17m
httpd           ClusterIP   10.43.129.55    <none>        80/TCP     6s
-
Get the list of nodes in JSON format and store it in a file at /opt/outputs/nodes.json.
Answer
controlplane ~ ➜ k get no -o json > /opt/outputs/nodes.json