Mock Exams
Some of the scenario questions here are based on KodeKloud's CKAD course labs.
CKAD and CKA can have similar scenario questions, so it is also recommended to go through the CKA practice tests.
Shortcuts
First, run the two commands below to set up shortcuts.
```shell
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"
```
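These are plain environment variables that expand in place, so `kubectl ... $do` appends the dry-run flags and `kubectl delete ... $now` appends the force-delete flags. A quick check of the expansion (plain shell, no cluster needed):

```shell
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"

# $do expands inside the command line before kubectl ever runs:
echo kubectl run nginx --image nginx $do
# prints: kubectl run nginx --image nginx --dry-run=client -o yaml
```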
Questions
- Create a Persistent Volume called log-volume. It should use the storage class name manual, RWX as the access mode, and have a size of 1Gi. The volume should use the hostPath /opt/volume/nginx.
- Next, create a PVC called log-claim requesting a minimum of 200Mi of storage. This PVC should bind to log-volume.
- Mount this claim in a pod called logger at the location /var/www/nginx. This pod should use the image nginx:alpine.
Answer
```
controlplane ~ ➜ k get po
NAME           READY   STATUS    RESTARTS   AGE
webapp-color   1/1     Running   0          75s

controlplane ~ ➜ k get sc
NAME     PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
manual   kubernetes.io/no-provisioner   Delete          Immediate           false                  77s

controlplane ~ ➜ k get pv
No resources found

controlplane ~ ➜ k get pvc
No resources found in default namespace.
```
```yaml
## pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: log-volume
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  storageClassName: manual
  hostPath:
    path: /opt/volume/nginx
```
```yaml
## pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-claim
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 200Mi
  storageClassName: manual
```
The pod must mount the claim, so its volume references log-claim via persistentVolumeClaim:
```yaml
## nginx.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: logger
  name: logger
spec:
  containers:
  - image: nginx:alpine
    name: logger
    resources: {}
    volumeMounts:
    - mountPath: /var/www/nginx
      name: log-volume
  volumes:
  - name: log-volume
    persistentVolumeClaim:
      claimName: log-claim
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```
controlplane ~ ➜ ls -l
total 12
-rw-r--r-- 1 root root 385 Jan  6 03:06 nginx.yml
-rw-r--r-- 1 root root 212 Jan  6 03:01 pvc.yml
-rw-r--r-- 1 root root 231 Jan  6 03:04 pv.yml

controlplane ~ ➜ k apply -f .
pod/logger created
persistentvolume/log-volume created
persistentvolumeclaim/log-claim created

controlplane ~ ➜ k get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
log-volume   1Gi        RWX            Retain           Bound    default/log-claim   manual                  4s

controlplane ~ ➜ k get pvc
NAME        STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
log-claim   Bound    log-volume   1Gi        RWX            manual         7s

controlplane ~ ➜ k get po
NAME           READY   STATUS    RESTARTS   AGE
logger         1/1     Running   0          10s
webapp-color   1/1     Running   0          14m

controlplane ~ ➜ k describe po logger | grep -A 5 Volumes:
Volumes:
  log-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  log-claim
    ReadOnly:   false
```
We have deployed a new pod called secure-pod and a service called secure-service. Incoming and outgoing connections to this pod are not working. Troubleshoot why this is happening.
- Make sure that incoming connections from the pod webapp-color are successful.
- Important: don't delete any currently deployed objects.
- Create the necessary network policy.
```
controlplane ~ ➜ k get po
NAME                           READY   STATUS    RESTARTS   AGE
logger                         1/1     Running   0          27m
nginx-deploy-dcbd487f9-47592   1/1     Running   0          9m22s
nginx-deploy-dcbd487f9-9r8tc   1/1     Running   0          9m25s
nginx-deploy-dcbd487f9-r6tqp   1/1     Running   0          9m25s
nginx-deploy-dcbd487f9-srmtx   1/1     Running   0          9m25s
redis-77c4ffc68c-n4ndn         1/1     Running   0          3m52s
secure-pod                     1/1     Running   0          25m
webapp-color                   1/1     Running   0          32m

controlplane ~ ➜ k get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP   112m
secure-service   ClusterIP   10.101.166.125   <none>        80/TCP    25m
```
Answer
A default-deny NetworkPolicy (visible in `k get netpol` below) is blocking all traffic to secure-pod. Without deleting it, add an Ingress policy that allows webapp-color to reach secure-pod on port 80.
```yaml
## netpol.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: secure-pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: webapp-color
    ports:
    - protocol: TCP
      port: 80
```
```
controlplane ~ ✦ ➜ k apply -f netpol.yml
networkpolicy.networking.k8s.io/allow-ingress created

controlplane ~ ✦ ➜ k get netpol
NAME            POD-SELECTOR     AGE
allow-ingress   run=secure-pod   2m39s
default-deny    <none>           33m

controlplane ~ ➜ k exec -it webapp-color -- nc -v -z secure-service 80
secure-service (10.100.140.194:80) open
```
- Create a pod called time-check in the dvl1987 namespace. This pod should run a container called time-check that uses the busybox image.
- Create a ConfigMap called time-config with the data TIME_FREQ=10 in the same namespace.
- The time-check container should run the command `while true; do date; sleep $TIME_FREQ; done` and write the result to the location /opt/time/time-check.log.
- The path /opt/time on the pod should mount a volume that lasts the lifetime of the pod.
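One detail worth checking locally: the redirection in the container's args has to wrap the entire while loop so that every iteration's output lands in the log file. A bounded stand-in (with a hypothetical /tmp path and 3 iterations instead of the pod's infinite loop):

```shell
# Bounded stand-in for the pod's command; the single redirection after
# `done` captures the output of every iteration in one file.
TIME_FREQ=1
i=0
while [ "$i" -lt 3 ]; do date; sleep "$TIME_FREQ"; i=$((i+1)); done > /tmp/time-check.log
wc -l < /tmp/time-check.log   # expect 3 lines, one per iteration
```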
Answer
```
controlplane ~ ✦ ➜ k create ns dvl1987
namespace/dvl1987 created

controlplane ~ ✦ ➜ k get ns
NAME              STATUS   AGE
default           Active   139m
dvl1987           Active   4s
e-commerce        Active   58m
kube-node-lease   Active   139m
kube-public       Active   139m
kube-system       Active   139m
marketing         Active   58m
```
```yaml
## time-check-cm.yml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: time-config
  namespace: dvl1987
data:
  TIME_FREQ: "10"
```
```yaml
## time-check.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: time-check
  name: time-check
  namespace: dvl1987
spec:
  containers:
  - image: busybox
    name: time-check
    resources: {}
    command: ["sh", "-c"]
    args: ["while true; do date; sleep $TIME_FREQ; done > /opt/time/time-check.log"]
    env:
    - name: TIME_FREQ
      valueFrom:
        configMapKeyRef:
          name: time-config
          key: TIME_FREQ
    volumeMounts:
    - name: vol
      mountPath: /opt/time
  volumes:
  - name: vol
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```
controlplane ~ ✦2 ➜ ls -l
total 20
-rw-r--r-- 1 root root 132 Jan  6 03:20 time-check-cm.yml
-rw-r--r-- 1 root root 630 Jan  6 03:22 time-check.yml

controlplane ~ ✦2 ➜ k apply -f time-check-cm.yml
configmap/time-config created

controlplane ~ ✦3 ➜ k apply -f time-check.yml
pod/time-check created

controlplane ~ ✦3 ➜ k get -n dvl1987 cm
NAME               DATA   AGE
kube-root-ca.crt   1      4m59s
time-config        1      111s

controlplane ~ ➜ k describe -n dvl1987 po time-check | grep -A 10 Environment
    Environment:
      TIME_FREQ:  <set to the key 'TIME_FREQ' of config map 'time-config'>  Optional: false

controlplane ~ ➜ k describe -n dvl1987 po time-check | grep -A 10 Volumes:
Volumes:
  vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
```
- Create a new deployment called nginx-deploy, with a single container called nginx, image nginx:1.16 and 4 replicas.
- The deployment should use the RollingUpdate strategy with maxSurge=1 and maxUnavailable=2.
- Next, upgrade the deployment to version 1.17.
- Finally, once all pods are updated, undo the update and roll back to the previous version.
Answer
```yaml
## nginx-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
        resources: {}
status: {}
```
```
controlplane ~ ➜ k apply -f nginx-deploy.yml
deployment.apps/nginx-deploy created

controlplane ~ ➜ k get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   4/4     4            4           30s
```
Now upgrade the image:
```
controlplane ~ ✦ ➜ k edit deployments.apps nginx-deploy
deployment.apps/nginx-deploy edited
```
```yaml
    spec:
      containers:
      - image: nginx:1.17
```
```
controlplane ~ ✦ ➜ k describe deployments.apps nginx-deploy | grep -i image
    Image:        nginx:1.17

controlplane ~ ✦ ➜ k rollout undo deployment nginx-deploy
deployment.apps/nginx-deploy rolled back

controlplane ~ ✦ ➜ k describe deployments.apps nginx-deploy | grep -i image
    Image:        nginx:1.16
```
Create a redis deployment with the following parameters:
- The deployment should be named redis, use the redis:alpine image, and have exactly 1 replica.
- The container should request 0.2 CPU and use the label app=redis.
- It should mount exactly 2 volumes:
  - an emptyDir volume called data at path /redis-master-data;
  - a configMap volume called redis-config at path /redis-master.
- The container should expose port 6379.

The ConfigMap has already been created:
```
controlplane ~ ➜ k get cm
NAME               DATA   AGE
kube-root-ca.crt   1      128m
redis-config       1      7m10s

controlplane ~ ➜ k get cm redis-config -o yaml
apiVersion: v1
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
kind: ConfigMap
metadata:
  creationTimestamp: "2024-01-06T08:32:05Z"
  name: redis-config
  namespace: default
  resourceVersion: "10206"
  uid: a378978a-d271-46dc-89e4-ea8d22551471
```
Answer
```yaml
## redis.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: "0.2"
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: data
          mountPath: /redis-master-data
        - name: redis-config
          mountPath: /redis-master
      volumes:
      - name: data
        emptyDir: {}
      - name: redis-config
        configMap:
          name: redis-config
status: {}
```
```
controlplane ~ ✦ ➜ k get po
NAME                           READY   STATUS    RESTARTS   AGE
logger                         1/1     Running   0          24m
nginx-deploy-dcbd487f9-47592   1/1     Running   0          6m16s
nginx-deploy-dcbd487f9-9r8tc   1/1     Running   0          6m19s
nginx-deploy-dcbd487f9-r6tqp   1/1     Running   0          6m19s
nginx-deploy-dcbd487f9-srmtx   1/1     Running   0          6m19s
redis-77c4ffc68c-n4ndn         1/1     Running   0          46s
secure-pod                     1/1     Running   0          22m
webapp-color                   1/1     Running   0          29m

controlplane ~ ✦ ➜ k describe po redis-77c4ffc68c-n4ndn | grep -A 10 Mounts:
    Mounts:
      /redis-master from redis-config (rw)
      /redis-master-data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-csrv6 (ro)

controlplane ~ ➜ k describe po redis-77c4ffc68c-n4ndn | grep -A 10 Volumes
Volumes:
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  redis-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-config
    Optional:  false
  kube-api-access-csrv6:
    Type:      Projected (a volume that contains injected data from multiple sources)
```
We have deployed a few pods in this cluster in various namespaces. Inspect them and identify the pod which is not in a Ready state. Troubleshoot and fix the issue.

Next, add a check to restart the container on the same pod if the command `ls /var/www/html/file_check` fails. This check should start after a delay of 10 seconds and run every 60 seconds.
```
controlplane ~ ➜ k get po -n dev1401
NAME        READY   STATUS    RESTARTS   AGE
nginx1401   0/1     Running   0          30m
pod-kab87   1/1     Running   0          30m
```
Answer
The pod is not Ready because its readinessProbe targets the wrong port; the probe must use the port the container actually serves on (9080). Export the manifest, fix the readinessProbe, add the requested livenessProbe, and recreate the pod.
```
controlplane ~ ➜ k get -n dev1401 po nginx1401 -o yaml > nginx1401.yml

controlplane ~ ✖ k delete po -n dev1401 nginx1401 $now
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nginx1401" force deleted

controlplane ~ ➜ k get po -n dev1401
NAME        READY   STATUS    RESTARTS   AGE
pod-kab87   1/1     Running   0          36m
```
```yaml
## nginx1401.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx1401
  namespace: dev1401
spec:
  containers:
  - image: kodekloud/nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 9080
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /
        port: 9080
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    livenessProbe:
      exec:
        command:
        - ls
        - /var/www/html/file_check
      initialDelaySeconds: 10
      periodSeconds: 60
```
```
controlplane ~ ➜ k apply -f nginx1401.yml
pod/nginx1401 created

controlplane ~ ➜ k get po -n dev1401
NAME        READY   STATUS    RESTARTS   AGE
nginx1401   1/1     Running   0          21s
pod-kab87   1/1     Running   0          36m
```
Create a cronjob called dice that runs every one minute. Use the pod template located at /root/throw-a-dice. The image throw-dice randomly returns a value between 1 and 6; a result of 6 is considered success, all others are failure.
- The job should be non-parallel and complete the task once. Use a backoffLimit of 25.
- If the task is not completed within 20 seconds, the job should fail and its pods should be terminated.
Answer
Note that "every one minute" maps to the schedule `*/1 * * * *`; a schedule of `1 * * * *` would fire only at minute 1 of each hour.
```yaml
## cron-dice.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dice
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 25
      activeDeadlineSeconds: 20
      template:
        spec:
          containers:
          - name: dice
            image: throw-dice
            imagePullPolicy: IfNotPresent
          restartPolicy: Never
```
```
controlplane ~ ➜ k apply -f cron-dice.yml
cronjob.batch/dice created

controlplane ~ ➜ k get cj
NAME   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
dice   */1 * * * *   False     0        <none>          13s
```
- Create a pod called my-busybox in the dev2406 namespace using the busybox image. The container should be called secret and should sleep for 3600 seconds.
- The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume.
- The secret being mounted has already been created for you and is called dotfile-secret.
- Make sure that the pod is scheduled on controlplane and no other node in the cluster.
Answer
Check the labels on the controlplane node first; we'll use its kubernetes.io/hostname label as the pod's nodeSelector.
```
controlplane ~ ➜ k get no controlplane --show-labels
NAME           STATUS   ROLES           AGE   VERSION   LABELS
controlplane   Ready    control-plane   29m   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=controlplane,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
```
```yaml
## my-busybox.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-busybox
  name: my-busybox
  namespace: dev2406
spec:
  nodeSelector:
    kubernetes.io/hostname: controlplane
  containers:
  - image: busybox
    name: secret
    resources: {}
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```
controlplane ~ ➜ k apply -f my-busybox.yml
pod/my-busybox created

controlplane ~ ➜ k get -n dev2406 po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
my-busybox    1/1     Running   0          18s   10.244.0.5     controlplane   <none>           <none>
nginx2406     1/1     Running   0          21m   10.244.192.3   node01         <none>           <none>
pod-var2016   1/1     Running   0          21m   10.244.192.5   node01         <none>           <none>
```
Create a single ingress resource called ingress-vh-routing. The resource should route HTTP traffic to multiple hostnames as specified below:
- The service video-service should be accessible on http://watch.ecom-store.com:30093/video
- The service apparels-service should be accessible on http://apparels.ecom-store.com:30093/wear
- Here 30093 is the port used by the Ingress Controller.
```
controlplane ~ ➜ k get po | grep web
webapp-apparels-56b6df9d5f-nrps8   1/1     Running   0          3m32s
webapp-color                       1/1     Running   0          25m
webapp-video-55fcd88897-ljnpg      1/1     Running   0          3m32s

controlplane ~ ➜ k get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
apparels-service   ClusterIP   10.98.253.224   <none>        8080/TCP   3m35s
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP    37m
video-service      ClusterIP   10.104.57.104   <none>        8080/TCP   3m35s
```
Answer
There is a trick here concerning the port. Port 30093 belongs to the Ingress Controller itself and must not be specified in the Ingress resource; instead, we use the port exposed by each backend service, which is 8080.
```
controlplane ~ ➜ k get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
apparels-service   ClusterIP   10.99.206.59   <none>        8080/TCP   9m14s
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP    41m
video-service      ClusterIP   10.107.72.29   <none>        8080/TCP   9m14s
```
The port numbers specified in the YAML file are the ports of the backend services, not the port used by the Ingress Controller itself:
- Since the services (video-service and apparels-service) listen on port 8080 inside the cluster, the port field should stay number: 8080.
- The port field under each backend service does not refer to the external port used by the Ingress Controller.
- The external port (the one accessed from outside the cluster) is determined by the Ingress Controller's own configuration.
Create the YAML file.
```yaml
## ingress-vh-routing.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-vh-routing
spec:
  rules:
  - host: "watch.ecom-store.com"
    http:
      paths:
      - pathType: Prefix
        path: "/video"
        backend:
          service:
            name: video-service
            port:
              number: 8080
  - host: "apparels.ecom-store.com"
    http:
      paths:
      - pathType: Prefix
        path: "/wear"
        backend:
          service:
            name: apparels-service
            port:
              number: 8080
```
```
controlplane ~ ➜ k apply -f ingress-vh-routing.yml
ingress.networking.k8s.io/ingress-vh-routing created

controlplane ~ ➜ k get ing
NAME                 CLASS    HOSTS                                          ADDRESS   PORTS   AGE
ingress-vh-routing   <none>   watch.ecom-store.com,apparels.ecom-store.com             80      6s

controlplane ~ ➜ k describe ingress ingress-vh-routing
Name:             ingress-vh-routing
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                     Path  Backends
  ----                     ----  --------
  watch.ecom-store.com
                           /video   video-service:8080 (10.244.192.13:8080)
  apparels.ecom-store.com
                           /wear    apparels-service:8080 (10.244.192.14:8080)
Annotations:               nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    15s   nginx-ingress-controller  Scheduled for sync
```
A pod called dev-pod-dind-878516 has been deployed in the default namespace. Inspect the logs of the container called log-x and redirect the warnings to /opt/dind-878516_logs.txt on the controlplane node.
```
controlplane ~ ➜ k get po
NAME                  READY   STATUS    RESTARTS   AGE
dev-pod-dind-878516   3/3     Running   0          28m
```
Answer
```
controlplane ~ ➜ k logs dev-pod-dind-878516 -c log-x > /opt/dind-878516_logs.txt

controlplane ~ ➜ ls -l /opt/
total 204
drwxr-xr-x 1 root root   4096 Nov  2 11:33 cni
drwx--x--x 4 root root   4096 Nov  2 11:33 containerd
-rw-r--r-- 1 root root 192493 Jan  6 05:35 dind-878516_logs.txt
drwxr-xr-x 2 root root   4096 Jan  6 05:05 outputs
```
Create a service messaging-service to expose the redis deployment in the marketing namespace within the cluster on port 6379.
Answer
```
controlplane ~ ➜ k expose -n marketing deploy redis --name messaging-service --port 6379 --type ClusterIP --target-port 6379 $do
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: messaging-service
  namespace: marketing
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    name: redis-pod
  type: ClusterIP
status:
  loadBalancer: {}

controlplane ~ ➜ k expose -n marketing deploy redis --name messaging-service --port 6379 --type ClusterIP --target-port 6379
service/messaging-service exposed

controlplane ~ ➜ k get -n marketing po
NAME                     READY   STATUS    RESTARTS   AGE
redis-798b49c867-82n5n   1/1     Running   0          5m28s

controlplane ~ ➜ k get -n marketing svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
messaging-service   ClusterIP   10.111.52.207   <none>        6379/TCP   21s
```
Create a new ConfigMap named cm-3392845 using the spec below:
- Data: DB_NAME=SQL3322
- Data: DB_HOST=sql322.mycompany.com
- Data: DB_PORT=3306
Answer
```
controlplane ~ ➜ k create cm cm-3392845 $do
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cm-3392845

controlplane ~ ➜ k create cm cm-3392845 $do > cm-3392845.yml
```
```yaml
## cm-3392845.yml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cm-3392845
data:
  DB_NAME: "SQL3322"
  DB_HOST: "sql322.mycompany.com"
  DB_PORT: "3306"
```
```
controlplane ~ ➜ k apply -f cm-3392845.yml
configmap/cm-3392845 created

controlplane ~ ➜ k get cm
NAME               DATA   AGE
cm-3392845         3      6s
kube-root-ca.crt   1      43m
```
Create a new Secret named db-secret-xxdf with the data given below:
- Secret Name: db-secret-xxdf
- Secret 1: DB_Host=sql01
- Secret 2: DB_User=root
- Secret 3: DB_Password=password123
Answer
```
controlplane ~ ➜ k create secret generic db-secret-xxdf --from-literal DB_Host=sql01 --from-literal DB_User=root --from-literal DB_Password=password123 $do
apiVersion: v1
data:
  DB_Host: c3FsMDE=
  DB_Password: cGFzc3dvcmQxMjM=
  DB_User: cm9vdA==
kind: Secret
metadata:
  creationTimestamp: null
  name: db-secret-xxdf

controlplane ~ ➜ k create secret generic db-secret-xxdf --from-literal DB_Host=sql01 --from-literal DB_User=root --from-literal DB_Password=password123
secret/db-secret-xxdf created

controlplane ~ ➜ k get secrets
NAME             TYPE     DATA   AGE
db-secret-xxdf   Opaque   3      4s

controlplane ~ ➜ k describe secrets db-secret-xxdf
Name:         db-secret-xxdf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
DB_Host:      5 bytes
DB_Password:  11 bytes
DB_User:      4 bytes
```
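Note that the values in the dry-run output are only base64-encoded, not encrypted; you can decode any of them to confirm what will be stored:

```shell
# Decode one of the base64 values shown in the Secret manifest:
echo 'c3FsMDE=' | base64 -d
# prints: sql01
```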
Update pod app-sec-kff3345 to run as the root user and with the SYS_TIME capability.
```
controlplane ~ ➜ k get po
NAME              READY   STATUS    RESTARTS   AGE
app-sec-kff3345   1/1     Running   0          14m
```
Answer
```
controlplane ~ ➜ k get po app-sec-kff3345 -o yaml > app-sec-kff3345.yml

controlplane ~ ➜ k delete po app-sec-kff3345 $now
```
```yaml
## app-sec-kff3345.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-01-06T11:26:56Z"
  name: app-sec-kff3345
  namespace: default
  resourceVersion: "4432"
  uid: 6eee9699-6c12-4e8f-b579-0d2b19081d34
spec:
  securityContext:
    runAsUser: 0
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    imagePullPolicy: Always
    name: app-sec-kff3345
    securityContext:
      capabilities:
        add: ["SYS_TIME"]
```
```
controlplane ~ ➜ k apply -f app-sec-kff3345.yml
pod/app-sec-kff3345 created

controlplane ~ ➜ k get po
NAME              READY   STATUS    RESTARTS   AGE
app-sec-kff3345   1/1     Running   0          3s

controlplane ~ ➜ k exec -it app-sec-kff3345 -- whoami
root
```
Create a redis deployment using the image redis:alpine with 1 replica and the label app=redis. Expose it via a ClusterIP service called redis on port 6379. Create a new Ingress-type NetworkPolicy called redis-access which allows only pods with the label access=redis to access the deployment.
Answer
```
controlplane ~ ➜ k create deployment redis --image redis:alpine --replicas 1 $do > redis.yml
```
```yaml
## redis.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        resources: {}
status: {}
```
```
controlplane ~ ➜ k apply -f redis.yml
deployment.apps/redis created

controlplane ~ ➜ k get deployments.apps
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
httpd-frontend   3/3     3            3           20m
redis            1/1     1            1           3s

controlplane ~ ➜ k get po | grep redis
redis-78d4b8b77c-8gq9f   1/1     Running   0          9s

controlplane ~ ➜ k expose deployment redis --name redis --type ClusterIP --port 6379 --target-port 6379 $do
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: redis
  name: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
  type: ClusterIP
status:
  loadBalancer: {}

controlplane ~ ➜ k expose deployment redis --name redis --type ClusterIP --port 6379 --target-port 6379
service/redis exposed

controlplane ~ ➜ k get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    58m
redis        ClusterIP   10.108.162.236   <none>        6379/TCP   3s
```
```yaml
## netpol.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: redis
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: redis
    ports:
    - protocol: TCP
      port: 6379
```
```
controlplane ~ ➜ k apply -f netpol.yml
networkpolicy.networking.k8s.io/redis-access created

controlplane ~ ➜ k get netpol
NAME           POD-SELECTOR   AGE
redis-access   app=redis      3s
```
Create a Pod called sega with two containers:
- Container 1: Name tails with image busybox and command: sleep 3600.
- Container 2: Name sonic with image nginx and Environment variable: NGINX_PORT with the value 8080.
Answer
Generate a skeleton with `k run`, then edit it to hold the two containers specified in the question (tails and sonic):
```
controlplane ~ ➜ k run sega --image busybox $do > sega.yml
```
```yaml
## sega.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: sega
  name: sega
spec:
  containers:
  - image: busybox
    name: tails
    command: ["sleep", "3600"]
    resources: {}
  - image: nginx
    name: sonic
    env:
    - name: NGINX_PORT
      value: "8080"
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```
controlplane ~ ➜ k apply -f sega.yml
pod/sega created

controlplane ~ ➜ k get po
NAME                              READY   STATUS    RESTARTS   AGE
app-sec-kff3345                   1/1     Running   0          16m
httpd-frontend-5497fbb8f6-47zbb   1/1     Running   0          32m
httpd-frontend-5497fbb8f6-hpzpn   1/1     Running   0          32m
httpd-frontend-5497fbb8f6-xvc7b   1/1     Running   0          32m
messaging                         1/1     Running   0          31m
nginx-448839                      1/1     Running   0          32m
redis-78d4b8b77c-8gq9f            1/1     Running   0          11m
rs-d33393-2pnq4                   1/1     Running   0          29m
rs-d33393-jnkb8                   1/1     Running   0          29m
rs-d33393-mf62t                   1/1     Running   0          29m
rs-d33393-z2782                   1/1     Running   0          29m
sega                              2/2     Running   0          46s
webapp-color                      1/1     Running   0          26m
```
Add a taint to the node node01 of the cluster. Use the specification below:
- key: app_type
- value: alpha
- effect: NoSchedule
Create a pod called alpha with image redis and a toleration for node01's taint.
Answer
```
controlplane ~ ➜ k taint node node01 app_type=alpha:NoSchedule
node/node01 tainted

controlplane ~ ➜ k describe no node01 | grep -i taint
Taints:             app_type=alpha:NoSchedule
```
```yaml
## alpha.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: alpha
  name: alpha
spec:
  tolerations:
  - key: "app_type"
    operator: "Equal"
    value: "alpha"
    effect: "NoSchedule"
  containers:
  - image: redis
    name: alpha
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```
controlplane ~ ➜ k apply -f alpha.yml

controlplane ~ ➜ k get po -o wide
NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE           NOMINATED NODE   READINESS GATES
alpha                        1/1     Running   0          7s     10.244.192.2   node01         <none>           <none>
my-webapp-54b7444d85-79rl7   1/1     Running   0          7m5s   10.244.0.4     controlplane   <none>           <none>
my-webapp-54b7444d85-dls5v   1/1     Running   0          7m5s   10.244.192.1   node01         <none>           <none>
```
Apply a label app_type=beta to node controlplane. Create a new deployment called beta-apps with image: nginx and replicas: 3. Set Node Affinity to the deployment to place the PODs on controlplane only.
```
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   28m   v1.27.0
node01         Ready    <none>          27m   v1.27.0
```
Answer
```
controlplane ~ ➜ k get no --show-labels
NAME           STATUS   ROLES           AGE   VERSION   LABELS
controlplane   Ready    control-plane   28m   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=controlplane,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=

controlplane ~ ➜ k label nodes controlplane app_type=beta
node/controlplane labeled

controlplane ~ ➜ k get no controlplane --show-labels
NAME           STATUS   ROLES           AGE   VERSION   LABELS
controlplane   Ready    control-plane   29m   v1.27.0   app_type=beta,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=controlplane,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=

controlplane ~ ➜ k create deployment beta-apps --image nginx --replicas 3 $do > nginx.yml
```
Edit the generated file to add the node affinity:
```yaml
## nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: beta-apps
  name: beta-apps
spec:
  replicas: 3
  selector:
    matchLabels:
      app: beta-apps
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: beta-apps
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: app_type
                operator: In
                values:
                - beta
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```
```
controlplane ~ ➜ k get deployments.apps
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
beta-apps   3/3     3            3           17s
my-webapp   2/2     2            2           14m

controlplane ~ ➜ k get po -o wide | grep beta
beta-apps-574fd8858c-2m8zj   1/1   Running   0   48s   10.244.0.7   controlplane   <none>   <none>
beta-apps-574fd8858c-chc5d   1/1   Running   0   48s   10.244.0.6   controlplane   <none>   <none>
beta-apps-574fd8858c-nlbh8   1/1   Running   0   48s   10.244.0.5   controlplane   <none>   <none>
```
Create a new Ingress Resource for the service my-video-service to be made available at the URL: http://ckad-mock-exam-solution.com:30093/video.
Use the following details:
- annotation: nginx.ingress.kubernetes.io/rewrite-target: /
- host: ckad-mock-exam-solution.com
- path: /video

Once set up, a curl test of the URL from the nodes should succeed with HTTP 200.
```
controlplane ~ ➜ k get po | grep video
webapp-video-55fcd88897-h49ft   1/1     Running   0          114s

controlplane ~ ➜ k get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
front-end-service   NodePort    10.99.121.208   <none>        80:30083/TCP   15m
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP        35m
my-video-service    ClusterIP   10.106.189.83   <none>        8080/TCP       116s
```
Answer
```yaml
## ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "ckad-mock-exam-solution.com"
    http:
      paths:
      - pathType: Prefix
        path: "/video"
        backend:
          service:
            name: my-video-service
            port:
              number: 8080
```
```
controlplane ~ ➜ k apply -f ingress.yml
ingress.networking.k8s.io/ingress-wildcard-host created

controlplane ~ ➜ k get ing
NAME                    CLASS    HOSTS                         ADDRESS   PORTS   AGE
ingress-wildcard-host   <none>   ckad-mock-exam-solution.com             80      3s

controlplane ~ ➜ k describe ingress ingress-wildcard-host
Name:             ingress-wildcard-host
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  ckad-mock-exam-solution.com
                               /video   my-video-service:8080 (10.244.0.8:8080)
Annotations:                   nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  Sync    12s   nginx-ingress-controller  Scheduled for sync

controlplane ~ ➜ curl -I http://ckad-mock-exam-solution.com:30093/video
HTTP/1.1 200 OK
Date: Sat, 06 Jan 2024 12:23:10 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 293
Connection: keep-alive
```
We have deployed a new pod called pod-with-rprobe. This pod has an initial delay before it is Ready. Update the newly created pod pod-with-rprobe with a readinessProbe using the given spec:
- httpGet path: /ready
- httpGet port: 8080

```
controlplane ~ ➜ k get po
NAME                            READY   STATUS    RESTARTS   AGE
alpha                           1/1     Running   0          15m
beta-apps-574fd8858c-2m8zj      1/1     Running   0          8m21s
beta-apps-574fd8858c-chc5d      1/1     Running   0          8m21s
beta-apps-574fd8858c-nlbh8      1/1     Running   0          8m21s
my-webapp-54b7444d85-79rl7      1/1     Running   0          22m
my-webapp-54b7444d85-dls5v      1/1     Running   0          22m
pod-with-rprobe                 1/1     Running   0          28s
webapp-video-55fcd88897-h49ft   1/1     Running   0          7m15s
```
Answer
```
controlplane ~ ➜ k get po pod-with-rprobe -o yaml > podprobe.yml

controlplane ~ ➜ k delete po pod-with-rprobe $now
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod-with-rprobe" force deleted
```
```yaml
## podprobe.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: pod-with-rprobe
  name: pod-with-rprobe
  namespace: default
spec:
  containers:
  - env:
    - name: APP_START_DELAY
      value: "180"
    image: kodekloud/webapp-delayed-start
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
    imagePullPolicy: Always
    name: pod-with-rprobe
    ports:
    - containerPort: 8080
      protocol: TCP
```
```
controlplane ~ ➜ k apply -f podprobe.yml
pod/pod-with-rprobe created

controlplane ~ ➜ k get po
NAME                            READY   STATUS    RESTARTS   AGE
alpha                           1/1     Running   0          19m
beta-apps-574fd8858c-2m8zj      1/1     Running   0          12m
beta-apps-574fd8858c-chc5d      1/1     Running   0          12m
beta-apps-574fd8858c-nlbh8      1/1     Running   0          12m
my-webapp-54b7444d85-79rl7      1/1     Running   0          26m
my-webapp-54b7444d85-dls5v      1/1     Running   0          26m
pod-with-rprobe                 1/1     Running   0          33s
webapp-video-55fcd88897-h49ft   1/1     Running   0          11m
```
Create a new pod called nginx1401 in the default namespace with the image nginx. Add a livenessProbe to the container to restart it if the command `ls /var/www/html/probe` fails. This check should start after a delay of 10 seconds and run every 60 seconds.
Answer
```
controlplane ~ ➜ k run nginx1401 --image nginx $do > nginx1401.yml
```
```yaml
## nginx1401.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx1401
  name: nginx1401
spec:
  containers:
  - image: nginx
    name: nginx1401
    livenessProbe:
      exec:
        command:
        - ls
        - /var/www/html/probe
      initialDelaySeconds: 10
      periodSeconds: 60
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```
```
controlplane ~ ➜ k apply -f nginx1401.yml
pod/nginx1401 created
```
Create a job called whalesay with image docker/whalesay and command "cowsay I am going to ace CKAD!".
- completions: 10
- backoffLimit: 6
- restartPolicy: Never
Answer
```yaml
## job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: whalesay
spec:
  completions: 10
  backoffLimit: 6
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - sh
        - -c
        - "cowsay I am going to ace CKAD!"
        image: docker/whalesay
        name: whalesay
      restartPolicy: Never
```
```
k apply -f job.yml
```