Storage

Updated Dec 29, 2023

Some of the scenario questions here are based on KodeKloud's CKA course labs.

NOTE

CKAD and CKA can have similar scenario questions. It is recommended to go through the CKAD practice tests.

Shortcuts

First, run the two commands below to set up shortcuts.

export do="--dry-run=client -o yaml" 
export now="--force --grace-period=0"
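
With these in place, `$do` turns an imperative kubectl command into a manifest generator and `$now` makes deletions immediate. A self-contained check of what the variables expand to (the kubectl lines are comments, shown only for illustration):

```shell
# Re-declare the shortcuts so this snippet stands alone.
export do="--dry-run=client -o yaml"
export now="--force --grace-period=0"

# Typical usage against a cluster:
#   k run nginx --image=nginx $do > nginx.yaml
#   k delete po nginx $now

# Verify the expansions:
echo "do=$do"
echo "now=$now"
```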

Questions

  1. Configure a volume to store the webapp logs (stored at /log/app.log) at /var/log/webapp on the host. Use the spec provided below.

    • Name: webapp

    • Image Name: kodekloud/event-simulator

    • Volume HostPath: /var/log/webapp

    • Volume Mount: /log

    Answer
    controlplane ~ ➜  k get po
    NAME     READY   STATUS    RESTARTS   AGE
    webapp   1/1     Running   0          48s

    controlplane ~ ➜ k exec -it webapp -- cat /log/app.log
    [2023-12-30 11:51:39,293] INFO in event-simulator: USER3 is viewing page3
    [2023-12-30 11:51:40,294] INFO in event-simulator: USER3 is viewing page3
    [2023-12-30 11:51:41,295] INFO in event-simulator: USER3 is viewing page3
    [2023-12-30 11:51:42,296] INFO in event-simulator: USER1 is viewing page1
    [2023-12-30 11:51:43,297] INFO in event-simulator: USER1 is viewing page2
    [2023-12-30 11:51:44,298] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.

    Generate a YAML file first and then delete the pod.

    controlplane ~ ➜  k get po
    NAME     READY   STATUS    RESTARTS   AGE
    webapp   1/1     Running   0          4m54s

    controlplane ~ ➜ k get po webapp -o yaml > webapp.yml

    controlplane ~ ➜ ls -l
    total 4
    -rw-rw-rw- 1 root root 0 Dec 13 05:39 sample.yaml
    -rw-r--r-- 1 root root 2658 Dec 30 06:56 webapp.yml

    controlplane ~ ➜ k delete po webapp $now
    Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    pod "webapp" force deleted

    controlplane ~ ➜ k get po
    No resources found in default namespace.

    Add the volume and volumeMount to the YAML file, following the Kubernetes docs for hostPath volumes.

    ## webapp.yml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: "2023-12-30T11:51:34Z"
      name: webapp
      namespace: default
      resourceVersion: "506"
      uid: 45b2b932-fbe3-4106-8926-55425cc05627
    spec:
      containers:
      - env:
        - name: LOG_HANDLERS
          value: file
        image: kodekloud/event-simulator
        imagePullPolicy: Always
        name: event-simulator
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-77kjm
          readOnly: true
        - mountPath: /log
          name: log-volume
      volumes:
      - name: log-volume
        hostPath:
          path: /var/log/webapp # directory location on host
          type: Directory       # this field is optional
    controlplane ~ ➜  k apply -f webapp.yml 
    pod/webapp created

    controlplane ~ ➜ k get po
    NAME     READY   STATUS    RESTARTS   AGE
    webapp   1/1     Running   0          3s
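
    For reference, here is a trimmed manifest with only the fields the question asks for. The exported YAML above also carries server-populated fields (resourceVersion, uid, the service-account token mount) that can be dropped; the control plane re-adds what it needs on creation:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: webapp
    spec:
      containers:
      - name: event-simulator
        image: kodekloud/event-simulator
        env:
        - name: LOG_HANDLERS
          value: file
        volumeMounts:
        - mountPath: /log          # where the app writes app.log
          name: log-volume
      volumes:
      - name: log-volume
        hostPath:
          path: /var/log/webapp    # directory on the host
          type: Directory
    ```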
  2. Create a Persistent Volume with the given specification.

    • Volume Name: pv-log

    • Storage: 100Mi

    • Access Modes: ReadWriteMany

    • Host Path: /pv/log

    • Reclaim Policy: Retain

    Answer
    ## pv-log.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-log
    spec:
      persistentVolumeReclaimPolicy: Retain
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 100Mi
      storageClassName: ""
      hostPath:
        path: /pv/log
    controlplane ~ ➜  k apply -f pv-log.yaml 
    persistentvolume/pv-log created

    controlplane ~ ➜ k get pv
    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    pv-log   100Mi      RWX            Retain           Available                                   2s
  3. Create a Persistent Volume Claim with the given specification.

    • Claim Name: claim-log-1

    • Storage Request: 50Mi

    • Access Modes: ReadWriteMany

    Answer
    ## pvc-log.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: claim-log-1
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 50Mi
    controlplane ~ ➜  k apply -f pvc-log.yaml 
    persistentvolumeclaim/claim-log-1 created

    controlplane ~ ➜ k get pv
    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
    pv-log   100Mi      RWX            Retain           Bound    default/claim-log-1                           4m9s

    controlplane ~ ➜ k get pvc
    NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    claim-log-1   Bound    pv-log   100Mi      RWX                           11s
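
    Note that the 50Mi claim binds the entire 100Mi volume: a claim matches any available PV whose capacity is at least the request and whose access modes and storage class are compatible, and a volume is never split between claims. A deliberately simplified sketch of that check (the real controller also weighs volumeMode, label selectors, and node affinity):

    ```shell
    # Values taken from pv-log and claim-log-1 above.
    pv_capacity_mi=100
    pv_access="ReadWriteMany"
    pvc_request_mi=50
    pvc_access="ReadWriteMany"

    if [ "$pvc_request_mi" -le "$pv_capacity_mi" ] && [ "$pvc_access" = "$pv_access" ]; then
      result="Bound"       # the claim consumes the whole 100Mi PV
    else
      result="Pending"
    fi
    echo "$result"
    ```

    Here the 50Mi request fits the 100Mi volume and the modes match, which is why `k get pvc` reports the claim Bound with a capacity of 100Mi, not 50Mi.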
  4. Update the webapp pod to use the persistent volume claim as its storage. Replace hostPath configured earlier with the newly created PersistentVolumeClaim.

    controlplane ~ ➜  k get po
    NAME     READY   STATUS    RESTARTS   AGE
    webapp   1/1     Running   0          46m

    controlplane ~ ➜ k get pv
    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
    pv-log   100Mi      RWX            Retain           Bound    default/claim-log-1                           39m

    controlplane ~ ➜ k get pvc
    NAME          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    claim-log-1   Bound    pv-log   100Mi      RWX                           35m
    Answer
    controlplane ~ ➜  k get po webapp -o yaml > webapp.yml 

    Just need to modify the volumes section:

    ## webapp.yml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: "2023-12-30T12:55:20Z"
      name: webapp
      namespace: default
      resourceVersion: "978"
      uid: b6aec6fb-3333-4d77-9b34-0747f5de564c
    spec:
      containers:
      - env:
        - name: LOG_HANDLERS
          value: file
        image: kodekloud/event-simulator
        imagePullPolicy: Always
        name: event-simulator

      volumes:
      - name: log-volume
        persistentVolumeClaim:
          claimName: claim-log-1

    controlplane ~ ✦2 ➜  k apply -f webapp.yml 
    pod/webapp created

    controlplane ~ ✦2 ➜ k get po
    NAME     READY   STATUS    RESTARTS   AGE
    webapp   1/1     Running   0          2s
  5. What is the reclaim policy set on pv-log?

    controlplane ~ ✦2 ➜  k get pv
    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
    pv-log   100Mi      RWX            Retain           Bound    default/claim-log-1                           7m27s
    Answer
    controlplane ~ ✦2 ➜  k describe pv pv-log 
    Name:            pv-log
    Labels:          <none>
    Annotations:     pv.kubernetes.io/bound-by-controller: yes
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:
    Status:          Bound
    Claim:           default/claim-log-1
    Reclaim Policy:  Retain
  6. How many storage classes are there in the cluster?

    Answer
    controlplane ~ ➜  k get sc
    NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  9m2s
  7. What is the Volume Binding Mode used for this storage class local-storage?

    controlplane ~ ➜  k get sc
    NAME                        PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    local-path (default)        rancher.io/local-path           Delete          WaitForFirstConsumer   false                  9m44s
    local-storage               kubernetes.io/no-provisioner    Delete          WaitForFirstConsumer   false                  36s
    portworx-io-priority-high   kubernetes.io/portworx-volume   Delete          Immediate              false                  36s
    portworx-io-priority-high kubernetes.io/portworx-volume Delete Immediate false 36s
    Answer
    controlplane ~ ✖ k describe sc local-storage 
    Name:                  local-storage
    IsDefaultClass:        No
    Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
    Provisioner:           kubernetes.io/no-provisioner
    Parameters:            <none>
    AllowVolumeExpansion:  <unset>
    MountOptions:          <none>
    ReclaimPolicy:         Delete
    VolumeBindingMode:     WaitForFirstConsumer
    Events:                <none>
  8. Create a new PersistentVolumeClaim by the name of local-pvc that should bind to the volume local-pv.

    • PVC: local-pvc

    • Correct Access Mode?

    • Correct StorageClass Used?

    • PVC requests volume size = 500Mi?

    Answer
    controlplane ~ ➜  k get pv
    NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
    local-pv   500Mi      RWO            Retain           Available           local-storage            19m

    controlplane ~ ➜ k describe pv local-pv
    Name:            local-pv
    Labels:          <none>
    Annotations:     <none>
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    local-storage
    Status:          Available
    Claim:
    Reclaim Policy:  Retain
    Access Modes:    RWO
    VolumeMode:      Filesystem
    Capacity:        500Mi
    Node Affinity:
      Required Terms:
        Term 0:      kubernetes.io/hostname in [controlplane]
    Message:
    Source:
        Type:  LocalVolume (a persistent volume backed by local storage on a node)
        Path:  /opt/vol1
    Events:          <none>
    ## local-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Filesystem
      storageClassName: local-storage
      resources:
        requests:
          storage: 500Mi
    controlplane ~ ➜  k apply -f local-pvc.yaml 
    persistentvolumeclaim/local-pvc created
  9. Create a new pod called nginx with the image nginx:alpine. The Pod should make use of the PVC local-pvc and mount the volume at the path /var/www/html. The PV local-pv should be in a bound state.

    controlplane ~ ➜  k get pv
    NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
    local-pv   500Mi      RWO            Retain           Available           local-storage            29m

    controlplane ~ ➜ k get pvc
    NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    local-pvc   Pending                                      local-storage   5m17s

    The claim stays Pending because local-storage uses volumeBindingMode: WaitForFirstConsumer, so binding is delayed until a pod that uses the claim is scheduled.
    Answer
    controlplane ~ ➜  k run nginx --image nginx:alpine $do > nginx.yaml
    ## nginx.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
      name: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: "/var/www/html"
          name: local-pv
      volumes:
      - name: local-pv
        persistentVolumeClaim:
          claimName: local-pvc
    status: {}
    controlplane ~ ➜  k apply -f nginx.yaml 
    pod/nginx created

    controlplane ~ ➜ k get po
    NAME    READY   STATUS    RESTARTS   AGE
    nginx   1/1     Running   0          6s

    controlplane ~ ➜ k get pv
    NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
    local-pv   500Mi      RWO            Retain           Bound    default/local-pvc   local-storage            31m

    controlplane ~ ➜ k get pvc
    NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    local-pvc   Bound    local-pv   500Mi      RWO            local-storage   7m16s
  10. Create a new Storage Class called delayed-volume-sc that makes use of the below specs:

    • provisioner: kubernetes.io/no-provisioner

    • volumeBindingMode: WaitForFirstConsumer

    Answer
    ## delayed-volume-sc.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: delayed-volume-sc
      annotations:
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    controlplane ~ ➜  k apply -f delayed-volume-sc.yaml 
    storageclass.storage.k8s.io/delayed-volume-sc created

    controlplane ~ ➜ k get sc
    NAME                        PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    local-path (default)        rancher.io/local-path           Delete          WaitForFirstConsumer   false                  45m
    local-storage               kubernetes.io/no-provisioner    Delete          WaitForFirstConsumer   false                  35m
    portworx-io-priority-high   kubernetes.io/portworx-volume   Delete          Immediate              false                  35m
    delayed-volume-sc           kubernetes.io/no-provisioner    Delete          WaitForFirstConsumer   false                  3s