Mock Exam 2
Some of the scenario questions here are based on KodeKloud's CKA course labs.
CKAD and CKA can have similar scenario questions, so it is recommended to go through the CKAD practice tests as well.
Shortcuts
First, run the two commands below to set up the shell shortcuts used throughout the answers.
```bash
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"
```
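For example, assuming `k` is aliased to `kubectl` (as in the transcripts below), a quick sketch of how the shortcuts are used; the pod name here is just an illustration:

```bash
# Scaffold a manifest without creating anything on the cluster
k run test-pod --image nginx $do > test-pod.yml

# Delete a pod immediately, skipping the grace period
k delete pod test-pod $now
```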
Questions
- A pod definition file is created at /root/CKA/use-pv.yaml. Make use of this manifest file and mount the persistent volume called pv-1. Ensure the pod is running and the PV is bound.
  - mountPath: /data
  - persistentVolumeClaim name: my-pvc
This is the given pod YAML file.
```yaml
## /root/CKA/use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

Answer
Check the PV.
```
controlplane ~ ➜ k get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-1   10Mi       RWO            Retain           Available                                   18s

controlplane ~ ➜ k get pvc
No resources found in default namespace.

controlplane ~/CKA ➜ k get pv pv-1 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: "2024-01-05T04:30:26Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pv-1
  resourceVersion: "3753"
  uid: ad7d65b3-a7d4-4596-bf05-d12d23f4eeba
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  hostPath:
    path: /opt/data
    type: ""
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
status:
  phase: Available
```

Create the PVC YAML file.
```
controlplane ~ ➜ cd CKA

controlplane ~/CKA ➜ ls -l
total 4
-rw-r--r-- 1 root root 235 Jan  5 00:10 use-pv.yaml
```

```yaml
## pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
```

```
controlplane ~/CKA ➜ k apply -f pvc.yml
persistentvolumeclaim/my-pvc created

controlplane ~/CKA ➜ k get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Bound    pv-1     10Mi       RWO                           8s

controlplane ~/CKA ➜ k get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pv-1   10Mi       RWO            Retain           Bound    default/my-pvc                           2m18s
```

Modify the pod YAML.
```yaml
## use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
    volumeMounts:
    - mountPath: "/data"
      name: vol
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: my-pvc
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```
controlplane ~/CKA ➜ k apply -f use-pv.yaml
pod/use-pv created

controlplane ~/CKA ➜ k get po
NAME             READY   STATUS    RESTARTS   AGE
redis-storage    1/1     Running   0          6m39s
super-user-pod   1/1     Running   0          3m39s
use-pv           1/1     Running   0          25s

controlplane ~/CKA ➜ k describe pod use-pv | grep Volumes: -A 5
Volumes:
  vol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  my-pvc
    ReadOnly:   false
```
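As an optional extra check (not required by the question), the binding can be confirmed with a jsonpath query; both commands should print Bound:

```bash
k get pvc my-pvc -o jsonpath='{.status.phase}{"\n"}'
k get pv pv-1 -o jsonpath='{.status.phase}{"\n"}'
```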
- Create a new user called john and grant him access to the cluster. John should have permission to create, list, get, update and delete pods in the development namespace. The private key exists at /root/CKA/john.key and the CSR at /root/CKA/john.csr.
Important note: as of Kubernetes 1.19, the CertificateSigningRequest object expects a signerName.
  - CSR: john-developer, Status: Approved
  - Role name: developer, namespace: development, resource: Pods
  - Access: user 'john' has appropriate permissions
Answer
Follow the steps documented here: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
```
controlplane ~ ➜ mkdir user-john
controlplane ~ ➜ cd user-john/

controlplane ~/user-john ➜ openssl genrsa -out john.key 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
...+++++
..........+++++
e is 65537 (0x010001)

controlplane ~/user-john ➜ openssl req -new -key john.key -out john.csr -subj "/CN=john"

controlplane ~/user-john ➜ ls -l
total 8
-rw-r--r-- 1 root root  883 Jan  5 00:28 john.csr
-rw------- 1 root root 1675 Jan  5 00:27 john.key

controlplane ~/user-john ➜ cat john.csr | base64 | tr -d "\n"
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VhbTlvYmpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQU9sVUVhMFVoK09lVks3TQ
```

Create the CSR YAML file.
```yaml
## john-csr.yml
---
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VhbTlvYmpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQU9sVUVhMFVoK09lVg
  usages:
  - digital signature
  - key encipherment
  - client auth
```

```
controlplane ~/user-john ➜ k apply -f john-csr.yml
certificatesigningrequest.certificates.k8s.io/john-developer created

controlplane ~/user-john ➜ k get csr
NAME             AGE   SIGNERNAME                                    REQUESTOR                  REQUESTEDDURATION   CONDITION
csr-pd24m        25m   kubernetes.io/kube-apiserver-client-kubelet   system:node:controlplane   <none>              Approved,Issued
csr-qsk2x        24m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:k93sdz    <none>              Approved,Issued
john-developer   6s    kubernetes.io/kube-apiserver-client           kubernetes-admin           <none>              Pending

controlplane ~/user-john ➜ kubectl certificate approve john-developer
certificatesigningrequest.certificates.k8s.io/john-developer approved

controlplane ~/user-john ➜ k get csr
NAME             AGE   SIGNERNAME                                    REQUESTOR                  REQUESTEDDURATION   CONDITION
csr-pd24m        25m   kubernetes.io/kube-apiserver-client-kubelet   system:node:controlplane   <none>              Approved,Issued
csr-qsk2x        25m   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:k93sdz    <none>              Approved,Issued
john-developer   47s   kubernetes.io/kube-apiserver-client           kubernetes-admin           <none>              Approved,Issued
```

Next, create the Role and RoleBinding.
```yaml
## dev-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: developer
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["create", "get", "update", "delete", "list"]
```

```yaml
## dev-rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-role-binding
  namespace: development
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: developer # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```

```
controlplane ~/user-john ➜ k apply -f dev-role.yml
role.rbac.authorization.k8s.io/developer created

controlplane ~/user-john ➜ k apply -f dev-rolebinding.yml
rolebinding.rbac.authorization.k8s.io/developer-role-binding created

controlplane ~/user-john ✖ k get -n development role | grep dev
developer   2024-01-05T05:47:57Z

controlplane ~/user-john ➜ k get -n development rolebindings.rbac.authorization.k8s.io | grep dev
developer-role-binding   Role/developer   21s
```

To verify:
```
controlplane ~/user-john ➜ kubectl auth can-i update pods --as=john -n development
yes
```
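Since the question asks for five verbs, a small loop (a sketch, not required by the lab) can confirm each one; all five should print yes:

```bash
for verb in create list get update delete; do
  kubectl auth can-i "$verb" pods --as=john -n development
done
```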
- Create an nginx pod called nginx-resolver using the image nginx, and expose it internally with a service called nginx-resolver-service. Test that you are able to look up the service and pod names from within the cluster. Use the image busybox:1.28 for the DNS lookups. Record the results in /root/CKA/nginx.svc and /root/CKA/nginx.pod.
Answer
```
controlplane ~/CKA ➜ k run nginx-resolver --image nginx
pod/nginx-resolver created

controlplane ~/CKA ✖ k get po
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-5c95467974-7l68p   1/1     Running   0          102s
nginx-resolver                  1/1     Running   0          11s
redis-storage                   1/1     Running   0          12m
super-user-pod                  1/1     Running   0          9m57s
use-pv                          1/1     Running   0          6m43s
```

Take note of the word "internally": this means the service type should be ClusterIP.
```
controlplane ~/user-john ➜ k expose pod nginx-resolver \
  --name nginx-resolver-service \
  --port 80 \
  --target-port 80 \
  --type=ClusterIP
service/nginx-resolver-service exposed

controlplane ~ ➜ k get svc
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes               ClusterIP   10.96.0.1       <none>        443/TCP   26m
nginx-resolver-service   ClusterIP   10.106.113.53   <none>        80/TCP    4s
```

For testing, we'll create another pod.
```
controlplane ~ ➜ k run pod-tester --image busybox:1.28 --restart Never --rm -it -- nslookup nginx-resolver-service
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/pod-tester, falling back to streaming logs: unable to upgrade connection: container pod-tester not found in pod pod-tester_default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-resolver-service
Address 1: 10.106.113.53 nginx-resolver-service.default.svc.cluster.local
pod "pod-tester" deleted

controlplane ~ ➜ k run pod-tester --image busybox:1.28 --restart Never --rm -it -- nslookup nginx-resolver-service > /root/CKA/nginx.svc

controlplane ~ ➜ ls -l /root/CKA/nginx.svc
-rw-r--r-- 1 root root 217 Jan  5 01:00 /root/CKA/nginx.svc

controlplane ~ ➜ cat /root/CKA/nginx.svc
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-resolver-service
Address 1: 10.106.113.53 nginx-resolver-service.default.svc.cluster.local
pod "pod-tester" deleted
```

Try using the pod IP.
```
controlplane ~ ➜ k get po nginx-resolver -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
nginx-resolver   1/1     Running   0          4m54s   10.244.192.1   node01   <none>           <none>

controlplane ~ ✦ ➜ k run pod-tester --image busybox:1.28 --restart Never --rm -it -- nslookup 10.244.192.1
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      10.244.192.1
Address 1: 10.244.192.1 10-244-192-1.nginx-resolver-service.default.svc.cluster.local
pod "pod-tester" deleted

controlplane ~ ✦ ➜ k run pod-tester --image busybox:1.28 --restart Never --rm -it -- nslookup 10.244.192.1 > /root/CKA/nginx.pod

controlplane ~ ✦ ➜ ls -l /root/CKA/nginx.pod
-rw-r--r-- 1 root root 219 Jan  5 01:02 /root/CKA/nginx.pod

controlplane ~ ✦ ➜ cat /root/CKA/nginx.pod
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      10.244.192.1
Address 1: 10.244.192.1 10-244-192-1.nginx-resolver-service.default.svc.cluster.local
pod "pod-tester" deleted
```
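As an aside, pods also get a dashed-IP DNS record of the form `<ip-with-dashes>.<namespace>.pod.<cluster-domain>`, so a forward lookup like the sketch below should work too, assuming the default cluster.local domain:

```bash
k run pod-tester --image busybox:1.28 --restart Never --rm -it -- \
  nslookup 10-244-192-1.default.pod.cluster.local
```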
- Create a static pod on node01 called nginx-critical with image nginx and make sure it is recreated/restarted automatically in case of a failure. Use /etc/kubernetes/manifests as the static pod path, for example.
Answer
Read the question carefully: the static pod needs to be created on node01, not on the controlplane. Generate the YAML first and copy it over to node01.
```
controlplane ~ ➜ k run nginx-critical --image nginx -o yaml > nginx-critical.yml

controlplane ~ ➜ scp nginx-critical.yml node01:/root
nginx-critical.yml                            100% 1577     2.1MB/s   00:00

controlplane ~ ➜ ssh node01
Last login: Fri Jan  5 01:06:59 2024 from 192.10.251.4

root@node01 ~ ➜ ls -l
total 4
-rw-r--r-- 1 root root 1577 Jan  5 01:08 nginx-critical.yml
```

Then create the required directory.
```
root@node01 ~ ➜ mkdir -p /etc/kubernetes/manifests

root@node01 ~ ➜ cp nginx-critical.yml /etc/kubernetes/manifests/

root@node01 ~ ➜ ls -l /etc/kubernetes/manifests/
total 4
-rw-r--r-- 1 root root 1577 Jan  5 01:09 nginx-critical.yml
```

Check the staticPodPath; it should be set to the manifests directory.
```
root@node01 ~ ➜ grep -i static /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
```

Back at the controlplane:
```
controlplane ~ ➜ k get po -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
nginx-critical   1/1     Running   0          7m21s   10.244.192.1   node01   <none>           <none>
```
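To see the kubelet recreate the static pod, one optional test is to delete its mirror pod through the API; the kubelet rebuilds it from the manifest almost immediately (depending on the cluster, the mirror pod may appear with the node name appended, e.g. nginx-critical-node01):

```bash
# The API object disappears briefly, then the kubelet recreates it
k delete pod nginx-critical $now
k get po    # the pod should reappear within seconds
```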
- Create a new service account with the name pvviewer. Grant this service account access to list all PersistentVolumes in the cluster by creating an appropriate ClusterRole called pvviewer-role and a ClusterRoleBinding called pvviewer-role-binding. Next, create a pod called pvviewer with the image redis and serviceAccount pvviewer in the default namespace.
Answer
```
controlplane ~ ➜ k create sa pvviewer --dry-run=client -o yaml > pvviewer.yml

controlplane ~ ➜ ls -l
total 8
drwxr-xr-x 2 root root 4096 Jan  5 05:11 CKA
-rw-r--r-- 1 root root   89 Jan  5 05:13 pvviewer.yml
-rw-rw-rw- 1 root root    0 Dec 13 05:39 sample.yaml

controlplane ~ ➜ cat pvviewer.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: pvviewer

controlplane ~ ➜ k create clusterrole pvviewer-role --verb list --resource="persistentvolumes" $do > pvviewer-role.yml

controlplane ~ ➜ ls -l
total 12
drwxr-xr-x 2 root root 4096 Jan  5 05:11 CKA
-rw-r--r-- 1 root root  197 Jan  5 05:15 pvviewer-role.yml
-rw-r--r-- 1 root root   89 Jan  5 05:14 pvviewer.yml
-rw-rw-rw- 1 root root    0 Dec 13 05:39 sample.yaml

controlplane ~ ➜ cat pvviewer-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: pvviewer-role
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - list

controlplane ~ ➜ k create clusterrolebinding pvviewer-role-binding --clusterrole pvviewer-role --serviceaccount default:pvviewer $do > pvviewer-role-binding.yml

controlplane ~ ➜ ls -l
total 16
drwxr-xr-x 2 root root 4096 Jan  5 05:11 CKA
-rw-r--r-- 1 root root  292 Jan  5 05:17 pvviewer-role-binding.yml
-rw-r--r-- 1 root root  197 Jan  5 05:15 pvviewer-role.yml
-rw-r--r-- 1 root root   89 Jan  5 05:14 pvviewer.yml
-rw-rw-rw- 1 root root    0 Dec 13 05:39 sample.yaml

controlplane ~ ➜ cat pvviewer-role-binding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: pvviewer-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pvviewer-role
subjects:
- kind: ServiceAccount
  name: pvviewer
  namespace: default

controlplane ~ ➜ k run pvviewer --image redis $do > pvviewer-pod.yml

controlplane ~ ➜ ls -l
total 20
drwxr-xr-x 2 root root 4096 Jan  5 05:11 CKA
-rw-r--r-- 1 root root  241 Jan  5 05:18 pvviewer-pod.yml
-rw-r--r-- 1 root root  292 Jan  5 05:17 pvviewer-role-binding.yml
-rw-r--r-- 1 root root  197 Jan  5 05:15 pvviewer-role.yml
-rw-r--r-- 1 root root   89 Jan  5 05:14 pvviewer.yml
-rw-rw-rw- 1 root root    0 Dec 13 05:39 sample.yaml
```

Modify the pod YAML and add the service account.
```yaml
## pvviewer-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pvviewer
  name: pvviewer
spec:
  serviceAccountName: pvviewer
  containers:
  - image: redis
    name: pvviewer
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```
controlplane ~ ➜ ls -l
total 20
drwxr-xr-x 2 root root 4096 Jan  5 05:11 CKA
-rw-r--r-- 1 root root  272 Jan  5 05:20 pvviewer-pod.yml
-rw-r--r-- 1 root root  292 Jan  5 05:17 pvviewer-role-binding.yml
-rw-r--r-- 1 root root  197 Jan  5 05:15 pvviewer-role.yml
-rw-r--r-- 1 root root   89 Jan  5 05:14 pvviewer.yml
-rw-rw-rw- 1 root root    0 Dec 13 05:39 sample.yaml

controlplane ~ ➜ k apply -f .
clusterrolebinding.rbac.authorization.k8s.io/pvviewer-role-binding created
clusterrole.rbac.authorization.k8s.io/pvviewer-role created
serviceaccount/pvviewer created
pod/pvviewer created

controlplane ~ ➜ k get clusterrole | grep pv
pvviewer-role                                 2024-01-05T10:20:58Z
system:controller:pv-protection-controller    2024-01-05T09:52:44Z
system:controller:pvc-protection-controller   2024-01-05T09:52:44Z

controlplane ~ ➜ k get clusterrole | grep pvv
pvviewer-role   2024-01-05T10:20:58Z

controlplane ~ ➜ k get clusterrolebinding | grep pvv
pvviewer-role-binding   ClusterRole/pvviewer-role   82s

controlplane ~ ➜ k get sa | grep pvv
pvviewer   0   90s

controlplane ~ ➜ k get po
NAME       READY   STATUS    RESTARTS   AGE
pvviewer   1/1     Running   0          79s

controlplane ~ ➜ k describe po pvviewer | grep -i service
Service Account:  pvviewer
```
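To double-check the RBAC wiring (optional), impersonate the service account with kubectl auth can-i; it should print yes for list and no for any other verb:

```bash
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer
```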
- List the InternalIP of all nodes of the cluster. Save the result to the file /root/CKA/node_ips. The answer should be in the format: InternalIP of controlplane<space>InternalIP of node01 (in a single line).

Answer
```
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   32m   v1.27.0
node01         Ready    <none>          32m   v1.27.0

controlplane ~ ➜ k get no -o jsonpath='{.items[*].status.addresses[*]}'
{"address":"192.22.238.9","type":"InternalIP"} {"address":"controlplane","type":"Hostname"} {"address":"192.22.238.12","type":"InternalIP"} {"address":"node01","type":"Hostname"}

controlplane ~ ➜ k get no -o jsonpath='{.items[*].status.addresses[0]}'
{"address":"192.22.238.9","type":"InternalIP"} {"address":"192.22.238.12","type":"InternalIP"}

controlplane ~ ➜ k get no -o jsonpath='{.items[*].status.addresses[0].address}'
192.22.238.9 192.22.238.12

controlplane ~ ➜ k get no -o jsonpath='{.items[*].status.addresses[0].address}' > /root/CKA/node_ips

controlplane ~ ➜ ls -la /root/CKA/node_ips
-rw-r--r-- 1 root root 26 Jan  5 06:11 /root/CKA/node_ips

controlplane ~ ➜ cat /root/CKA/node_ips
192.22.238.9 192.22.238.12
```
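Relying on addresses[0] works here because InternalIP happens to come first; a more robust variant (a sketch) filters on the address type explicitly:

```bash
# Selects the InternalIP entry regardless of ordering
k get no -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
```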
- Create a pod called multi-pod with two containers:
  - Container 1: name: alpha, image: nginx
  - Container 2: name: beta, image: busybox, command: sleep 4800

  Environment variables:
  - Container 1: name: alpha
  - Container 2: name: beta
Answer
```
controlplane ~ ➜ k run multi-pod --image nginx $do > multi-pod.yml

controlplane ~ ➜ ls -l
total 24
drwxr-xr-x 2 root root 4096 Jan  5 06:11 CKA
-rw-r--r-- 1 root root  244 Jan  5 06:14 multi-pod.yml
```

Modify the pod YAML file.
```yaml
## multi-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: busybox
    name: beta
    command: ["sh", "-c", "sleep 4800"]
    env:
    - name: NAME
      value: beta
  - image: nginx
    name: alpha
    env:
    - name: NAME
      value: alpha
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```
controlplane ~ ➜ k apply -f multi-pod.yml
pod/multi-pod created

controlplane ~ ➜ k get po
NAME        READY   STATUS    RESTARTS   AGE
multi-pod   2/2     Running   0          7s
pvviewer    1/1     Running   0          10m

controlplane ~ ➜ k describe po multi-pod | grep Containers -A 40
Containers:
  beta:
    Container ID:  containerd://c2c4de069fcc7ca32732708ea9865e72956fce2c1f25734a2ab3c30a045e064f
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      sleep 4800
    State:          Running
      Started:      Fri, 05 Jan 2024 06:19:33 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      NAME:  beta
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v87xr (ro)
  alpha:
    Container ID:   containerd://5ed2a20d88c470c61e6a0766230c95e430b8847f4fcbdc1a12bb46e9d3d49c26
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 05 Jan 2024 06:19:37 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      NAME:  alpha
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v87xr (ro)
```
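A quick optional check that each container sees its own NAME variable (both images ship a shell, so this should echo alpha and beta respectively):

```bash
k exec multi-pod -c alpha -- sh -c 'echo $NAME'
k exec multi-pod -c beta -- sh -c 'echo $NAME'
```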
- Create a pod called non-root-pod, image: redis:alpine
  - runAsUser: 1000
  - fsGroup: 2000
Answer
```
controlplane ~ ➜ k run non-root-pod --image redis:alpine $do > non-root-pod.yml
```

Add the securityContext to the generated file:

```yaml
## non-root-pod.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: non-root-pod
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

Apply the file, then verify:

```
controlplane ~ ➜ k get po
NAME           READY   STATUS    RESTARTS   AGE
multi-pod      2/2     Running   0          3m16s
non-root-pod   1/1     Running   0          6s
pvviewer       1/1     Running   0          13m
```
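To confirm the securityContext took effect (optional), run id inside the container; it should report uid=1000, with the fsGroup 2000 among the supplementary groups:

```bash
k exec non-root-pod -- id
```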
- We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it. Create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service over port 80.
Answer
```
controlplane ~ ➜ k get po
NAME           READY   STATUS    RESTARTS   AGE
multi-pod      2/2     Running   0          3m56s
non-root-pod   1/1     Running   0          46s
np-test-1      1/1     Running   0          21s
pvviewer       1/1     Running   0          14m

controlplane ~ ➜ k get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP   40m
np-test-service   ClusterIP   10.106.46.125   <none>        80/TCP    27s
```

A default-deny NetworkPolicy is in place (visible in the k get netpol output below), which is what blocks incoming connections. Create a policy that allows ingress to the pod on port 80.

```yaml
## netpol.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
```

```
controlplane ~ ➜ k apply -f netpol.yml
networkpolicy.networking.k8s.io/ingress-to-nptest created

controlplane ~ ➜ k get netpol
NAME                POD-SELECTOR   AGE
default-deny        <none>         4m58s
ingress-to-nptest   <none>         8s
```

Verify that the port is open by running a test pod that telnets to the service on port 80.
```
controlplane ~ ➜ k get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
multi-pod      2/2     Running   0          10m     10.244.192.2   node01   <none>           <none>
non-root-pod   1/1     Running   0          7m15s   10.244.192.3   node01   <none>           <none>
np-test-1      1/1     Running   0          6m50s   10.244.192.4   node01   <none>           <none>
pvviewer       1/1     Running   0          20m     10.244.192.1   node01   <none>           <none>

controlplane ~ ➜ k run test-pod --image busybox --rm -it -- telnet np-test-service 80
If you don't see a command prompt, try pressing enter.
Connected to np-test-service

Session ended, resume using 'kubectl attach test-pod -c test-pod -i -t' command when the pod is running
pod "test-pod" deleted

controlplane ~ ➜ k run test-pod --image busybox --rm -it -- telnet 10.244.192.4 80
If you don't see a command prompt, try pressing enter.
Connected to 10.244.192.4
```
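One detail worth noting: an ingress rule that lists only ports with no from clause, as in netpol.yml above, admits traffic from all sources on those ports. Narrowing the sources would need an explicit from block, e.g. with a hypothetical pod selector:

```yaml
ingress:
- from:
  - podSelector:
      matchLabels:
        access: allowed   # hypothetical label, for illustration only
  ports:
  - protocol: TCP
    port: 80
```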
- Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis, image redis:alpine, with a toleration so that it can be scheduled on node01.
  - key: env_type
  - value: production
  - operator: Equal and effect: NoSchedule
Answer
```
controlplane ~ ➜ k taint node controlplane env_type=production:NoSchedule-
node/controlplane untainted

controlplane ~ ➜ k taint node node01 env_type=production:NoSchedule
node/node01 tainted

controlplane ~ ➜ k describe no node01 | grep -i taint
Taints:  env_type=production:NoSchedule

controlplane ~ ➜ k get po -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
multi-pod      2/2     Running   0          15m   10.244.192.2   node01   <none>           <none>
non-root-pod   1/1     Running   0          11m   10.244.192.3   node01   <none>           <none>
np-test-1      1/1     Running   0          11m   10.244.192.4   node01   <none>           <none>
pvviewer       1/1     Running   0          25m   10.244.192.1   node01   <none>           <none>
```

Create the dev-redis pod. It should not be scheduled on node01.
```
controlplane ~ ➜ k run dev-redis --image redis:alpine
pod/dev-redis created

controlplane ~ ➜ k get po -o wide
NAME           READY   STATUS              RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
dev-redis      0/1     ContainerCreating   0          2s    <none>         controlplane   <none>           <none>
multi-pod      2/2     Running             0          15m   10.244.192.2   node01         <none>           <none>
non-root-pod   1/1     Running             0          12m   10.244.192.3   node01         <none>           <none>
np-test-1      1/1     Running             0          12m   10.244.192.4   node01         <none>           <none>
pvviewer       1/1     Running             0          26m   10.244.192.1   node01         <none>           <none>
```

Next, create the prod-redis pod with the specified toleration. It should be scheduled on node01.
```
controlplane ~ ➜ k run prod-redis --image redis:alpine $do > prod-redis.yml
```

```yaml
## prod-redis.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: prod-redis
  name: prod-redis
spec:
  tolerations:
  - key: "env_type"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"
  containers:
  - image: redis:alpine
    name: prod-redis
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```
controlplane ~ ➜ k apply -f prod-redis.yml
pod/prod-redis created

controlplane ~ ➜ k get po -o wide
NAME           READY   STATUS    RESTARTS   AGE    IP             NODE           NOMINATED NODE   READINESS GATES
dev-redis      1/1     Running   0          5m6s   10.244.0.4     controlplane   <none>           <none>
multi-pod      2/2     Running   0          21m    10.244.192.2   node01         <none>           <none>
non-root-pod   1/1     Running   0          17m    10.244.192.3   node01         <none>           <none>
np-test-1      1/1     Running   0          17m    10.244.192.4   node01         <none>           <none>
prod-redis     1/1     Running   0          6s     10.244.192.5   node01         <none>           <none>
pvviewer       1/1     Running   0          31m    10.244.192.1   node01         <none>           <none>
```
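For cleanup or retesting, the taint can be removed with the trailing-dash form, mirroring the untaint of the controlplane above:

```bash
k taint node node01 env_type=production:NoSchedule-
```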
- Create a pod called hr-pod in the hr namespace belonging to the production environment and frontend tier.
  - image: redis:alpine

  Use appropriate labels and create all the required objects if they do not already exist in the system.
Answer
```
controlplane ~ ➜ k create ns hr
namespace/hr created

controlplane ~ ➜ k run hr-pod --image redis:alpine --namespace hr --labels "environment=production,tier=frontend" $do
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    environment: production
    tier: frontend
  name: hr-pod
  namespace: hr
spec:
  containers:
  - image: redis:alpine
    name: hr-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

controlplane ~ ➜ k run hr-pod --image redis:alpine --namespace hr --labels "environment=production,tier=frontend"
pod/hr-pod created

controlplane ~ ➜ k get po -n hr
NAME     READY   STATUS    RESTARTS   AGE
hr-pod   1/1     Running   0          38s

controlplane ~ ➜ k describe -n hr po hr-pod | grep -i image:
    Image:          redis:alpine

controlplane ~ ➜ k describe -n hr po hr-pod | grep -i label -A 5
Labels:           environment=production
                  tier=frontend
```
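The labels can also be verified with a selector (optional); this should return exactly hr-pod:

```bash
k get po -n hr -l environment=production,tier=frontend
```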
- A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
Answer
```
controlplane ~ ➜ ls -la CKA
total 24
drwxr-xr-x 2 root root 4096 Jan  5 06:45 .
drwx------ 1 root root 4096 Jan  5 06:39 ..
-rw-r--r-- 1 root root   26 Jan  5 06:11 node_ips
-rw------- 1 root root 5636 Jan  5 06:45 super.kubeconfig

controlplane ~ ➜ k cluster-info --kubeconfig CKA/super.kubeconfig
E0105 06:46:28.819247   20286 memcache.go:265] couldn't get current server API group list: Get "https://controlplane:9999/api?timeout=32s": dial tcp 192.22.238.9:9999: connect: connection refused
E0105 06:46:28.819555   20286 memcache.go:265] couldn't get current server API group list: Get "https://controlplane:9999/api?timeout=32s": dial tcp 192.22.238.9:9999: connect: connection refused
E0105 06:46:28.820954   20286 memcache.go:265] couldn't get current server API group list: Get "https://controlplane:9999/api?timeout=32s": dial tcp 192.22.238.9:9999: connect: connection refused
E0105 06:46:28.822299   20286 memcache.go:265] couldn't get current server API group list: Get "https://controlplane:9999/api?timeout=32s": dial tcp 192.22.238.9:9999: connect: connection refused
E0105 06:46:28.823566   20286 memcache.go:265] couldn't get current server API group list: Get "https://controlplane:9999/api?timeout=32s": dial tcp 192.22.238.9:9999: connect: connection refused
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The connection to the server controlplane:9999 was refused - did you specify the right host or port?
```

The kubeconfig points at port 9999, but the kube-apiserver serves on 6443. Fix the server line in /root/CKA/super.kubeconfig:
```yaml
server: https://controlplane:6443
```

```
controlplane ~ ➜ kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
Kubernetes control plane is running at https://controlplane:6443
CoreDNS is running at https://controlplane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
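If the correct port were not known in advance, it could be cross-checked against the admin kubeconfig that kubeadm writes (path assumed to be the standard /etc/kubernetes/admin.conf):

```bash
# The server line in the working kubeconfig shows the real apiserver endpoint
grep server /etc/kubernetes/admin.conf
```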