Scheduling
Some of the scenario questions here are based on KodeKloud's CKA course labs.
CKAD and CKA can have similar scenario questions. It is recommended to go through the CKAD practice tests.
Shortcuts
First, run the two commands below to set up the shell shortcuts used throughout this section.
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"
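With these in place, $do generates a manifest without creating the resource, and $now force-deletes a pod without waiting for graceful termination. For example:

k run nginx --image=nginx $do > nginx.yml
k delete po nginx $now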
Questions
-
Fix the nginx pod. The YAML file is given.
controlplane ~ ➜ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          3m18s

## nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx

Answer
Check the details of all pods, including those in the kube-system namespace. Notice that there is no scheduler pod. Without a scheduler, pods in the default namespace will remain in the Pending state indefinitely.
controlplane ~ ➜ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Pending   0          25s

controlplane ~ ➜ k get po -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
nginx   0/1     Pending   0          31s   <none>   <none>   <none>           <none>

controlplane ~ ➜ k get po -A -o wide
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
default        nginx                                  0/1     Pending   0          38s     <none>         <none>         <none>           <none>
kube-flannel   kube-flannel-ds-hn474                  1/1     Running   0          8m23s   192.38.195.8   node01         <none>           <none>
kube-flannel   kube-flannel-ds-zvkr9                  1/1     Running   0          8m38s   192.38.195.6   controlplane   <none>           <none>
kube-system    coredns-5d78c9869d-5nt4f               1/1     Running   0          8m37s   10.244.0.2     controlplane   <none>           <none>
kube-system    coredns-5d78c9869d-8wwkp               1/1     Running   0          8m37s   10.244.0.3     controlplane   <none>           <none>
kube-system    etcd-controlplane                      1/1     Running   0          8m52s   192.38.195.6   controlplane   <none>           <none>
kube-system    kube-apiserver-controlplane            1/1     Running   0          8m53s   192.38.195.6   controlplane   <none>           <none>
kube-system    kube-controller-manager-controlplane   1/1     Running   0          8m52s   192.38.195.6   controlplane   <none>           <none>
kube-system    kube-proxy-9qxp8                       1/1     Running   0          8m23s   192.38.195.8   node01         <none>           <none>
kube-system    kube-proxy-dptpt                       1/1     Running   0          8m38s   192.38.195.6   controlplane   <none>           <none>

Delete the pod first and manually schedule it on a node.
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE     VERSION
controlplane   Ready    control-plane   8m15s   v1.27.0
node01         Ready    <none>          7m34s   v1.27.0

controlplane ~ ➜ k delete po nginx
pod "nginx" deleted

controlplane ~ ➜ k get po
No resources found in default namespace.

To manually schedule, modify the YAML file and apply.
## nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01
  containers:
  - image: nginx
    name: nginx

controlplane ~ ➜ k apply -f nginx.yaml
pod/nginx created

controlplane ~ ➜ k get po
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          8s
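Since nodeName is immutable on a running pod, the delete-and-recreate step is required. A one-step alternative is kubectl replace, which force-deletes the existing pod and recreates it from the modified file:

k replace --force -f nginx.yaml

-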
We have deployed a number of PODs. They are labelled with tier, env and bu. How many PODs exist in the dev environment (env)?
controlplane ~ ➜ k get po
NAME          READY   STATUS    RESTARTS   AGE
app-1-whhgk   1/1     Running   0          107s
db-1-9qjgg    1/1     Running   0          106s
db-1-jhqfb    1/1     Running   0          107s
app-1-sk7dg   1/1     Running   0          107s
app-1-rtm89   1/1     Running   0          107s
auth          1/1     Running   0          106s
db-1-5z6hx    1/1     Running   0          106s
app-1-zzxdf   1/1     Running   0          106s
app-2-wdhp9   1/1     Running   0          107s
db-1-8nlqw    1/1     Running   0          106s
db-2-ds4b8    1/1     Running   0          106s

Answer
controlplane ~ ➜ k get po --show-labels=true
NAME          READY   STATUS    RESTARTS   AGE    LABELS
app-1-whhgk   1/1     Running   0          111s   bu=finance,env=dev,tier=frontend
db-1-9qjgg    1/1     Running   0          110s   env=dev,tier=db
db-1-jhqfb    1/1     Running   0          111s   env=dev,tier=db
app-1-sk7dg   1/1     Running   0          111s   bu=finance,env=dev,tier=frontend
app-1-rtm89   1/1     Running   0          111s   bu=finance,env=dev,tier=frontend
auth          1/1     Running   0          110s   bu=finance,env=prod
db-1-5z6hx    1/1     Running   0          110s   env=dev,tier=db
app-1-zzxdf   1/1     Running   0          110s   bu=finance,env=prod,tier=frontend
app-2-wdhp9   1/1     Running   0          111s   env=prod,tier=frontend
db-1-8nlqw    1/1     Running   0          110s   env=dev,tier=db
db-2-ds4b8    1/1     Running   0          110s   bu=finance,env=prod,tier=db

controlplane ~ ➜ k get po --show-labels=true | grep "env=dev"
app-1-whhgk   1/1     Running   0          2m     bu=finance,env=dev,tier=frontend
db-1-9qjgg    1/1     Running   0          119s   env=dev,tier=db
db-1-jhqfb    1/1     Running   0          2m     env=dev,tier=db
app-1-sk7dg   1/1     Running   0          2m     bu=finance,env=dev,tier=frontend
app-1-rtm89   1/1     Running   0          2m     bu=finance,env=dev,tier=frontend
db-1-5z6hx    1/1     Running   0          119s   env=dev,tier=db
db-1-8nlqw    1/1     Running   0          119s   env=dev,tier=db
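Note that grep can over-match on substrings; a label selector is exact and can count directly. Here it returns 7, and the same pattern works for the next questions:

k get po -l env=dev --no-headers | wc -l

-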
How many PODs are in the finance business unit (bu)?
Answer
controlplane ~ ➜ k get po --show-labels=true | grep "bu=finance"
app-1-whhgk   1/1     Running   0          2m43s   bu=finance,env=dev,tier=frontend
app-1-sk7dg   1/1     Running   0          2m43s   bu=finance,env=dev,tier=frontend
app-1-rtm89   1/1     Running   0          2m43s   bu=finance,env=dev,tier=frontend
auth          1/1     Running   0          2m42s   bu=finance,env=prod
app-1-zzxdf   1/1     Running   0          2m42s   bu=finance,env=prod,tier=frontend
db-2-ds4b8    1/1     Running   0          2m42s   bu=finance,env=prod,tier=db

-
How many objects are in the prod environment including PODs, ReplicaSets and any other objects?
Answer
controlplane ~ ➜ k get all --show-labels=true | grep "env=prod"
pod/auth          1/1   Running   0   4m7s   bu=finance,env=prod
pod/app-1-zzxdf   1/1   Running   0   4m7s   bu=finance,env=prod,tier=frontend
pod/app-2-wdhp9   1/1   Running   0   4m8s   env=prod,tier=frontend
pod/db-2-ds4b8    1/1   Running   0   4m7s   bu=finance,env=prod,tier=db
service/app-1     ClusterIP   10.43.234.201   <none>   3306/TCP   4m7s   bu=finance,env=prod
replicaset.apps/app-2   1   1   1   4m8s   env=prod
replicaset.apps/db-2    1   1   1   4m7s   env=prod
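The selector-based count works across all object types as well; here it returns 7:

k get all -l env=prod --no-headers | wc -l

-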
Identify the POD which is part of the prod environment, the finance BU, and the frontend tier.
Answer
controlplane ~ ➜ k get po -l bu=finance,env=prod,tier=frontend
NAME          READY   STATUS    RESTARTS   AGE
app-1-zzxdf   1/1     Running   0          6m43s

-
How many labels does node01 have?
Answer
controlplane ~ ➜ k describe no node01 | grep Labels -A 10
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"96:49:b7:34:27:94"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.10.195.3
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
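To see only the labels without paging through the describe output (the answer here is five labels):

k get no node01 --show-labels

-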
Apply a label color=blue to node node01
Answer
controlplane ~ ➜ k label no node01 color=blue
node/node01 labeled

controlplane ~ ➜ k describe no node01 | grep -A 10 Labels
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    color=blue
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01
                    kubernetes.io/os=linux
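If the label ever needs to be removed, a trailing dash after the key deletes it:

k label no node01 color-

-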
Do any taints exist on node01 node?
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   21m   v1.27.0
node01         Ready    <none>          20m   v1.27.0

Answer
controlplane ~ ➜ k describe no node01 | grep Taints
Taints:             <none>

-
Create a taint on node01 with:
- key of spray
- value of mortein
- effect of NoSchedule
Answer
controlplane ~ ➜ k taint no node01 spray=mortein:NoSchedule
node/node01 tainted

controlplane ~ ➜ k describe no node01 | grep Taints
Taints:             spray=mortein:NoSchedule
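The general form is k taint no <node> <key>=<value>:<effect>. As with labels, a trailing dash removes the taint again:

k taint no node01 spray=mortein:NoSchedule-

-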
Create another pod named bee with the nginx image, which has a toleration set to the taint mortein.
Answer
Generate the YAML file first.
controlplane ~ ➜ export dr="--dry-run=client"

controlplane ~ ➜ k run bee --image=nginx $dr
pod/bee created (dry run)

controlplane ~ ➜ k run bee --image=nginx $dr -o yaml > bee.yml

controlplane ~ ➜ ls
bee.yml

Search the K8s docs for the toleration parameters and add them to the YAML file. Apply afterwards.
## bee.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bee
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:
  - key: "spray"
    value: "mortein"
    effect: "NoSchedule"
status: {}

controlplane ~ ➜ k apply -f bee.yml
pod/bee created

controlplane ~ ➜ k get po
NAME       READY   STATUS    RESTARTS   AGE
bee        1/1     Running   0          9s
mosquito   0/1     Pending   0          8m48s

The bee pod tolerates the taint and gets scheduled, while the mosquito pod, which has no toleration, remains Pending.

-
Remove the taint on controlplane.
controlplane ~ ➜ k get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   35m   v1.27.0
node01         Ready    <none>          34m   v1.27.0

Answer
controlplane ~ ➜ k describe no controlplane | grep Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

controlplane ~ ✖ k taint no controlplane node-role.kubernetes.io/control-plane:NoSchedule-
node/controlplane untainted

controlplane ~ ➜ k describe no controlplane | grep Taint
Taints:             <none>

-
Set Node Affinity to the deployment to place the pods on node01 only.
Answer
Export the deployment's YAML and modify it by adding the node affinity parameters. See the K8s docs for the format.
controlplane ~ ➜ k get deployments.apps blue -o yaml > blue.yml

## blue.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: blue
  name: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: blue
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
status: {}

controlplane ~ ➜ k apply -f blue.yml
deployment.apps/blue created

controlplane ~ ➜ k get po -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
blue-f69d4c887-6whqb   1/1     Running   0          10s   10.244.1.6   node01   <none>           <none>
blue-f69d4c887-j8x27   1/1     Running   0          10s   10.244.1.4   node01   <none>           <none>
blue-f69d4c887-z8hbl   1/1     Running   0          10s   10.244.1.5   node01   <none>           <none>
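Since this is a simple equality match on a single label, a plain nodeSelector in the pod template spec would achieve the same placement with less YAML:

      nodeSelector:
        color: blue

-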
Create a new deployment named red with the nginx image and 2 replicas, and ensure it gets placed on the controlplane node only.
Use the label key - node-role.kubernetes.io/control-plane - which is already set on the controlplane node.
Answer
Verify label.
controlplane ~ ➜ k describe nodes controlplane | grep -A 10 Labels
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=controlplane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"ca:c9:cd:c5:51:72"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.12.20.6

Generate the YAML and add the affinity parameter. See K8s docs for format.
controlplane ~ ➜ k create deployment red --image nginx --replicas 2 $do > red.yml

## red.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: red
  name: red
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: red
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists

controlplane ~ ➜ k apply -f red.yml
deployment.apps/red created

controlplane ~ ➜ k get deployments.apps
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
blue   3/3     3            3           10m
red    2/2     2            2           13s
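The Exists operator takes no values; it matches any node that has the key at all. To double-check where the replicas landed:

k get po -l app=red -o wide

-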
A pod called rabbit is deployed. Identify the CPU requirements set on the Pod.
controlplane ~ ➜ k get po
NAME     READY   STATUS             RESTARTS      AGE
rabbit   0/1     CrashLoopBackOff   4 (31s ago)   118s

Answer
controlplane ~ ➜ k describe po rabbit | grep -A 5 -i requests
    Requests:
      cpu:  1
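A jsonpath query returns just the value, which is convenient when the describe output is long:

k get po rabbit -o jsonpath='{.spec.containers[0].resources.requests.cpu}'

-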
Another pod called elephant has been deployed in the default namespace. It fails to get to a running state. Inspect this pod and identify the Reason why it is not running.
controlplane ~ ➜ k get po
NAME       READY   STATUS             RESTARTS      AGE
elephant   0/1     CrashLoopBackOff   3 (33s ago)   86s

Answer
The Reason OOMKilled in the container state indicates that the pod is failing because it ran out of memory. Identify the memory limit set on the POD.
controlplane ~ ➜ k describe pod elephant | grep -A 5 State
    State:          Terminated
      Reason:       OOMKilled
      Exit Code:    1
      Started:      Fri, 29 Dec 2023 07:25:52 +0000
      Finished:     Fri, 29 Dec 2023 07:25:52 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    1
      Started:      Fri, 29 Dec 2023 07:25:01 +0000
      Finished:     Fri, 29 Dec 2023 07:25:01 +0000
    Ready:          False

-
The elephant pod runs a process that consumes 15Mi of memory. Increase the limit of the elephant pod to 20Mi.
controlplane ~ ➜ k get po
NAME       READY   STATUS             RESTARTS      AGE
elephant   0/1     CrashLoopBackOff   3 (33s ago)   86s

Answer
controlplane ~ ✦ ➜ k get po -o yaml > el.yaml

## el.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2023-12-29T07:24:08Z"
    name: elephant
    namespace: default
    resourceVersion: "952"
    uid: fe698e64-ca6b-4990-813a-9b63c7cc2b2b
  spec:
    containers:
    - args:
      - --vm
      - "1"
      - --vm-bytes
      - 15M
      - --vm-hang
      - "1"
      command:
      - stress
      image: polinux/stress
      imagePullPolicy: Always
      name: mem-stress
      resources:
        limits:
          memory: 20Mi
        requests:
          memory: 5Mi

controlplane ~ ✦ ➜ k delete po elephant $now
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "elephant" force deleted

controlplane ~ ✦ ➜ k apply -f el.yaml
pod/elephant created

controlplane ~ ✦ ➜ k get po
NAME       READY   STATUS    RESTARTS   AGE
elephant   1/1     Running   0          39s
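Note that k get po -o yaml dumps a List of every pod (hence the items: wrapper); exporting only the elephant pod gives a cleaner file to edit:

k get po elephant -o yaml > el.yaml

-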
How many DaemonSets in all namespaces?
Answer
controlplane ~ ➜ k get ds -A
NAMESPACE      NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel   kube-flannel-ds   1         1         1       1            1           <none>                   4m31s
kube-system    kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   4m34s

-
What is the image used by the POD deployed by the kube-flannel-ds DaemonSet?
Answer
controlplane ~ ➜ k get ds -A
NAMESPACE      NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel   kube-flannel-ds   1         1         1       1            1           <none>                   6m57s
kube-system    kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   7m

controlplane ~ ➜ k describe daemonsets.apps -n kube-flannel kube-flannel-ds | grep Image
    Image:      docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
    Image:      docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
    Image:      docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2

-
Deploy a DaemonSet for FluentD Logging.
- Name: elasticsearch
- Namespace: kube-system
- Image: registry.k8s.io/fluentd-elasticsearch:1.20
Answer
Copy the FluentD YAML from K8S docs and modify. Apply afterwards.
## fluentd.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: registry.k8s.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
      # preempts running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

controlplane ~ ➜ k apply -f fluentd.yml
daemonset.apps/elasticsearch created

controlplane ~ ➜ k get ds -A
NAMESPACE      NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel   kube-flannel-ds   1         1         1       1            1           <none>                   10m
kube-system    elasticsearch     1         1         1       1            1           <none>                   4s
kube-system    kube-proxy        1         1         1       1            1           kubernetes.io/os=linux   10m
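kubectl cannot generate a DaemonSet manifest directly, but a common shortcut is to generate a Deployment skeleton and edit it: change kind to DaemonSet and delete the replicas, strategy, and status fields:

k create deployment elasticsearch -n kube-system --image registry.k8s.io/fluentd-elasticsearch:1.20 $do > fluentd.yml

-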
How many static pods exist in this cluster in all namespaces?
Answer
controlplane ~ ➜ k get po -A | grep controlplane
kube-system   etcd-controlplane                      1/1   Running   0   6m41s
kube-system   kube-apiserver-controlplane            1/1   Running   0   6m39s
kube-system   kube-controller-manager-controlplane   1/1   Running   0   6m39s
kube-system   kube-scheduler-controlplane            1/1   Running   0   6m41s
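The four pods suffixed with the node name are the static pods. To confirm a pod is truly static rather than just named that way, check its owner; a static (mirror) pod is owned by a Node object instead of a ReplicaSet:

k get po kube-apiserver-controlplane -n kube-system -o jsonpath='{.metadata.ownerReferences[0].kind}'

-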
On which nodes are the static pods created currently?
Answer
controlplane ~ ➜ k get po -o wide -A | grep controlplane
kube-system   etcd-controlplane                      1/1   Running   0   8m9s   192.13.225.9   controlplane   <none>   <none>
kube-system   kube-apiserver-controlplane            1/1   Running   0   8m7s   192.13.225.9   controlplane   <none>   <none>
kube-system   kube-controller-manager-controlplane   1/1   Running   0   8m7s   192.13.225.9   controlplane   <none>   <none>
kube-system   kube-scheduler-controlplane            1/1   Running   0   8m9s   192.13.225.9   controlplane   <none>   <none>

-
What is the path of the directory holding the static pod definition files?
Answer
First, identify the kubelet config file (--config):
controlplane ~ ➜ ps -aux | grep /usr/bin/kubelet
root   4685  0.0  0.0 3775504 101080 ?  Ssl  02:35  0:10 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9
root   9476  0.0  0.0    6748   2540 pts/0  S+  02:46  0:00 grep --color=auto /usr/bin/kubelet

Next, look up the value assigned to staticPodPath:
controlplane ~ ➜ grep static /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests

-
How many pod definition files are present in the manifests directory?
Answer
controlplane ~ ➜ ls -la /etc/kubernetes/manifests/
total 28
drwxr-xr-x 1 root root 4096 Dec 29 02:35 .
drwxr-xr-x 1 root root 4096 Dec 29 02:35 ..
-rw------- 1 root root 2405 Dec 29 02:35 etcd.yaml
-rw------- 1 root root 3882 Dec 29 02:35 kube-apiserver.yaml
-rw------- 1 root root 3393 Dec 29 02:35 kube-controller-manager.yaml
-rw------- 1 root root 1463 Dec 29 02:35 kube-scheduler.yaml

-
What is the docker image used to deploy the kube-api server as a static pod?
Answer
controlplane ~ ➜ ls -la /etc/kubernetes/manifests/
total 28
drwxr-xr-x 1 root root 4096 Dec 29 02:35 .
drwxr-xr-x 1 root root 4096 Dec 29 02:35 ..
-rw------- 1 root root 2405 Dec 29 02:35 etcd.yaml
-rw------- 1 root root 3882 Dec 29 02:35 kube-apiserver.yaml
-rw------- 1 root root 3393 Dec 29 02:35 kube-controller-manager.yaml
-rw------- 1 root root 1463 Dec 29 02:35 kube-scheduler.yaml

controlplane ~ ➜ grep image /etc/kubernetes/manifests/kube-apiserver.yaml
    image: registry.k8s.io/kube-apiserver:v1.27.0
    imagePullPolicy: IfNotPresent

-
Create a static pod named static-busybox that uses the busybox image and the command sleep 1000
Answer
controlplane ~ ➜ k run static-busybox --image busybox $do > bb.yml

Note that since it's a static pod, the manifest needs to be placed in the /etc/kubernetes/manifests directory.
controlplane ~ ➜ cd /etc/kubernetes/manifests/

controlplane /etc/kubernetes/manifests ➜ k run static-busybox --image busybox $do > /etc/kubernetes/manifests/static-busybox.yml

controlplane /etc/kubernetes/manifests ➜ ls -la /etc/kubernetes/manifests/
total 36
drwxr-xr-x 1 root root 4096 Dec 29 02:56 .
drwxr-xr-x 1 root root 4096 Dec 29 02:35 ..
-rw------- 1 root root 2405 Dec 29 02:35 etcd.yaml
-rw------- 1 root root 3882 Dec 29 02:35 kube-apiserver.yaml
-rw------- 1 root root 3393 Dec 29 02:35 kube-controller-manager.yaml
-rw------- 1 root root 1463 Dec 29 02:35 kube-scheduler.yaml
-rw-r--r-- 1 root root  256 Dec 29 02:56 static-busybox.yml

Add the command parameter; the kubelet picks up the manifest from this directory automatically, so no apply is needed.
## /etc/kubernetes/manifests/static-busybox.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: static-busybox
  name: static-busybox
spec:
  containers:
  - command:
    - sleep
    - "1000"
    image: busybox
    name: static-busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

controlplane /etc/kubernetes/manifests ➜ k get po
NAME                          READY   STATUS    RESTARTS   AGE
static-busybox-controlplane   1/1     Running   0          2s
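The sleep command could also have been added at generation time, writing the manifest straight into the static pod directory:

k run static-busybox --image busybox $do --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yml

-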
We just created a new static pod named static-greenbox. Prevent this pod from restarting when it is deleted.
Answer
controlplane /etc/kubernetes/manifests ✦2 ➜ k get po
NAME                          READY   STATUS    RESTARTS   AGE
static-busybox-controlplane   1/1     Running   0          27s
static-greenbox-node01        1/1     Running   0          14s

controlplane /etc/kubernetes/manifests ✦2 ➜ k get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
static-busybox-controlplane   1/1     Running   0          32s   10.244.0.5   controlplane   <none>           <none>
static-greenbox-node01        1/1     Running   0          19s   10.244.1.2   node01         <none>           <none>

controlplane /etc/kubernetes/manifests ✦2 ✖ k delete po static-greenbox-node01 $now
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "static-greenbox-node01" force deleted

controlplane /etc/kubernetes/manifests ✦2 ➜ k get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
static-busybox-controlplane   1/1     Running   0          81s   10.244.0.5   controlplane   <none>           <none>

SSH to node01 and find the config file (--config).
controlplane /etc/kubernetes/manifests ✦2 ➜ ssh node01
Warning: Permanently added the ECDSA host key for IP address '192.14.237.3' to the list of known hosts.

root@node01 ~ ➜ ps -aux | grep /usr/bin/kubelet
root   4435  0.0  0.0 3330296 94296 ?  Ssl  03:19  0:01 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9
root   5398  0.0  0.0    5200   720 pts/0  S+  03:22  0:00 grep /usr/bin/kubelet

root@node01 ~ ➜ grep static /var/lib/kubelet/config.yaml
staticPodPath: /etc/just-to-mess-with-you

root@node01 ~ ➜ ls -la /etc/just-to-mess-with-you/
total 16
drwxr-xr-x 2 root root 4096 Dec 29 03:20 .
drwxr-xr-x 1 root root 4096 Dec 29 03:19 ..
-rw-r--r-- 1 root root  301 Dec 29 03:20 greenbox.yaml

root@node01 ~ ➜ sudo rm /etc/just-to-mess-with-you/greenbox.yaml

Return to the controlplane and verify that the greenbox pod does not restart anymore.
-
What is the image used in the scheduler pod?
Answer
controlplane ~ ➜ k get po -A
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-bcc4q                  1/1     Running   0          4m13s
kube-system    coredns-5d78c9869d-6jbzh               1/1     Running   0          4m12s
kube-system    coredns-5d78c9869d-g5kln               1/1     Running   0          4m12s
kube-system    etcd-controlplane                      1/1     Running   0          4m24s
kube-system    kube-apiserver-controlplane            1/1     Running   0          4m24s
kube-system    kube-controller-manager-controlplane   1/1     Running   0          4m24s
kube-system    kube-proxy-cdzqm                       1/1     Running   0          4m13s
kube-system    kube-scheduler-controlplane            1/1     Running   0          4m24s

controlplane ~ ➜ k describe po -n kube-system kube-scheduler-controlplane | grep image

controlplane ~ ✖ k describe po -n kube-system kube-scheduler-controlplane | grep -i image
    Image:         registry.k8s.io/kube-scheduler:v1.27.0
    Image ID:      registry.k8s.io/kube-scheduler@sha256:939d0c6675c373639f53f05d61b5035172f95afb47ecffee6baf4e3d70543b66

Note that grep is case-sensitive, so the first attempt returns nothing; -i fixes that.

-
Create a ConfigMap that the new scheduler will use, employing the ConfigMap-as-a-volume concept. A ConfigMap definition file called my-scheduler-configmap.yaml is already provided at /root/; it creates a ConfigMap named my-scheduler-config from the content of the file /root/my-scheduler-config.yaml.
Answer
controlplane ~ ➜ ls -l
total 16
-rw-r--r-- 1 root root 341 Dec 29 03:27 my-scheduler-configmap.yaml
-rw-rw-rw- 1 root root 160 Dec 13 05:39 my-scheduler-config.yaml
-rw-rw-rw- 1 root root 893 Dec 13 05:39 my-scheduler.yaml
-rw-rw-rw- 1 root root 105 Dec 13 05:39 nginx-pod.yaml

controlplane ~ ➜ k apply -f my-scheduler-configmap.yaml
configmap/my-scheduler-config created

controlplane ~ ➜ k get cm
NAME               DATA   AGE
kube-root-ca.crt   1      7m58s

controlplane ~ ➜ k get cm -A
NAMESPACE         NAME                                                   DATA   AGE
default           kube-root-ca.crt                                       1      8m1s
kube-flannel      kube-flannel-cfg                                       2      8m11s
kube-flannel      kube-root-ca.crt                                       1      8m1s
kube-node-lease   kube-root-ca.crt                                       1      8m1s
kube-public       cluster-info                                           2      8m15s
kube-public       kube-root-ca.crt                                       1      8m1s
kube-system       coredns                                                1      8m13s
kube-system       extension-apiserver-authentication                     6      8m18s
kube-system       kube-apiserver-legacy-service-account-token-tracking   1      8m18s
kube-system       kube-proxy                                             2      8m13s
kube-system       kube-root-ca.crt                                       1      8m1s
kube-system       kubeadm-config                                         1      8m16s
kube-system       kubelet-config                                         1      8m16s
kube-system       my-scheduler-config                                    1      6s

-
Deploy an additional scheduler to the cluster following the given specification. Use the manifest file provided at /root/my-scheduler.yaml. Use the same image as used by the default kubernetes scheduler.
Answer
controlplane ~ ➜ ls -l
total 16
-rw-r--r-- 1 root root 341 Dec 29 03:27 my-scheduler-configmap.yaml
-rw-rw-rw- 1 root root 160 Dec 13 05:39 my-scheduler-config.yaml
-rw-rw-rw- 1 root root 893 Dec 13 05:39 my-scheduler.yaml
-rw-rw-rw- 1 root root 105 Dec 13 05:39 nginx-pod.yaml

controlplane ~ ➜ k get po -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-6jbzh               1/1     Running   0          9m45s
coredns-5d78c9869d-g5kln               1/1     Running   0          9m45s
etcd-controlplane                      1/1     Running   0          9m57s
kube-apiserver-controlplane            1/1     Running   0          9m57s
kube-controller-manager-controlplane   1/1     Running   0          9m57s
kube-proxy-cdzqm                       1/1     Running   0          9m46s
kube-scheduler-controlplane            1/1     Running   0          9m57s

controlplane ~ ✖ k describe po -n kube-system kube-scheduler-controlplane | grep -i image
    Image:         registry.k8s.io/kube-scheduler:v1.27.0
    Image ID:      registry.k8s.io/kube-scheduler@sha256:939d0c6675c373639f53f05d61b5035172f95afb47ecffee6baf4e3d70543b66

## my-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-scheduler
  name: my-scheduler
  namespace: kube-system
spec:
  serviceAccountName: my-scheduler
  containers:
  - name: my-scheduler
    image: registry.k8s.io/kube-scheduler:v1.27.0
    command:
    - /usr/local/bin/kube-scheduler
    - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml

controlplane ~ ➜ k apply -f my-scheduler.yaml
pod/my-scheduler created

controlplane ~ ➜ k get po -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-6jbzh               1/1     Running   0          12m
coredns-5d78c9869d-g5kln               1/1     Running   0          12m
etcd-controlplane                      1/1     Running   0          12m
kube-apiserver-controlplane            1/1     Running   0          12m
kube-controller-manager-controlplane   1/1     Running   0          12m
kube-proxy-cdzqm                       1/1     Running   0          12m
kube-scheduler-controlplane            1/1     Running   0          12m
my-scheduler                           1/1     Running   0          7s
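The manifest above is condensed. Because --config points at a file path, the provided my-scheduler.yaml also mounts the ConfigMap from the previous step as a volume; a sketch of that wiring (the volume name config-volume is an assumption):

    volumeMounts:
    - mountPath: /etc/kubernetes/my-scheduler
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: my-scheduler-config

-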
Modify the nginx-pod.yaml file to create a POD that uses the new custom scheduler.
Answer
controlplane ~ ✦ ➜ k get po -A
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-bcc4q                  1/1     Running   0          14m
kube-system    coredns-5d78c9869d-6jbzh               1/1     Running   0          14m
kube-system    coredns-5d78c9869d-g5kln               1/1     Running   0          14m
kube-system    etcd-controlplane                      1/1     Running   0          14m
kube-system    kube-apiserver-controlplane            1/1     Running   0          14m
kube-system    kube-controller-manager-controlplane   1/1     Running   0          14m
kube-system    kube-proxy-cdzqm                       1/1     Running   0          14m
kube-system    kube-scheduler-controlplane            1/1     Running   0          14m
kube-system    my-scheduler                           1/1     Running   0          2m43s

Add the new custom scheduler under schedulerName in the pod spec.
## nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - image: nginx
    name: nginx

controlplane ~ ✦ ➜ k apply -f nginx-pod.yaml
pod/nginx created

controlplane ~ ✦ ➜ k get po -A
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
default        nginx                                  1/1     Running   0          3s
kube-flannel   kube-flannel-ds-bcc4q                  1/1     Running   0          17m
kube-system    coredns-5d78c9869d-6jbzh               1/1     Running   0          17m
kube-system    coredns-5d78c9869d-g5kln               1/1     Running   0          17m
kube-system    etcd-controlplane                      1/1     Running   0          17m
kube-system    kube-apiserver-controlplane            1/1     Running   0          17m
kube-system    kube-controller-manager-controlplane   1/1     Running   0          17m
kube-system    kube-proxy-cdzqm                       1/1     Running   0          17m
kube-system    kube-scheduler-controlplane            1/1     Running   0          17m
kube-system    my-scheduler                           1/1     Running   0          5m47s
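To verify that my-scheduler (and not the default scheduler) bound the pod, check the source of the nginx pod's Scheduled event; the SOURCE column should read my-scheduler:

k get events -o wide | grep Scheduled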