2. Create a PVC using the StorageClass:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  storageClassName: dynamic-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

3. Deploy a pod that uses the PVC:

volumes:
  - name: dynamic-volume
    persistentVolumeClaim:
      claimName: dynamic-pvc
containers:
  - name: app-container
    image: nginx
    volumeMounts:
      - mountPath: /mnt
        name: dynamic-volume

4. Validate the dynamic provisioning and binding:

kubectl apply -f pod.yaml
kubectl exec <pod-name> -- ls /mnt

Explanation: Dynamic provisioning through StorageClasses simplifies volume creation and ensures storage resources are allocated on demand.

Create a StatefulSet for a RabbitMQ cluster with persistent storage for each instance. Validate that the message queue retains data after a pod restart.

See the Solution below.

Solution:

1. Create a StatefulSet YAML for RabbitMQ:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: "rabbitmq-service"
  replicas: 3
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:management
          ports:
            - containerPort: 5672
            - containerPort: 15672
          volumeMounts:
            - name: rabbitmq-data
              mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
    - metadata:
        name: rabbitmq-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi

2. Apply the StatefulSet:

kubectl apply -f rabbitmq-statefulset.yaml

3. Publish a message to the queue:

rabbitmqadmin -u admin -p admin publish exchange=amq.default routing_key=test payload="Hello, World!"

4. Restart a pod and validate the message queue:

kubectl delete pod rabbitmq-0
rabbitmqadmin -u admin -p admin get queue=test

Explanation: StatefulSets with persistent storage ensure RabbitMQ retains its state across pod restarts, maintaining message queue integrity.
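The StatefulSet above points at a headless Service (serviceName: "rabbitmq-service") that the solution does not define. A minimal sketch of what it could look like; the name, selector, and ports mirror the StatefulSet, while everything else is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
spec:
  clusterIP: None          # headless: gives each StatefulSet replica a stable DNS name
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      port: 5672
    - name: management
      port: 15672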
Configure an application to use a ConfigMap and Secret together, mounted as files in a single directory. Validate their contents inside the pod.

See the Solution below.

Solution:

1. Create a ConfigMap and Secret:

kubectl create configmap app-config --from-literal=APP_ENV=production
kubectl create secret generic app-secret --from-literal=DB_PASSWORD=securepass

2. Update the pod spec to mount both:

volumes:
  - name: config-volume
    configMap:
      name: app-config
  - name: secret-volume
    secret:
      secretName: app-secret
containers:
  - name: app-container
    image: nginx
    volumeMounts:
      - mountPath: /etc/config
        name: config-volume
      - mountPath: /etc/secret
        name: secret-volume

3. Apply the pod configuration:

kubectl apply -f pod.yaml

4. Validate contents:

kubectl exec <pod-name> -- ls /etc/config /etc/secret
kubectl exec <pod-name> -- cat /etc/config/APP_ENV /etc/secret/DB_PASSWORD

Explanation: Combining ConfigMaps and Secrets as file mounts in a pod simplifies accessing configurations and sensitive data securely.
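Note that the solution above actually mounts the ConfigMap and Secret at two separate paths. If both must appear in one directory, as the exercise states, a projected volume can combine them. A sketch reusing the app-config and app-secret objects, with /etc/app as an assumed mount path:

volumes:
  - name: combined-volume
    projected:
      sources:
        - configMap:
            name: app-config
        - secret:
            name: app-secret
containers:
  - name: app-container
    image: nginx
    volumeMounts:
      - mountPath: /etc/app   # APP_ENV and DB_PASSWORD land side by side here
        name: combined-volume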
Create a PersistentVolumeClaim for a pod that needs shared storage across multiple replicas. Validate data sharing between pods.

See the Solution below.

Solution:

1. Create a PVC for shared storage:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

2. Update the pod spec to use the PVC (volumeMounts belongs inside a container entry):

volumes:
  - name: shared-volume
    persistentVolumeClaim:
      claimName: shared-pvc
containers:
  - name: app-container
    image: nginx
    volumeMounts:
      - mountPath: /data/shared
        name: shared-volume

3. Deploy multiple pods and validate shared access:

kubectl apply -f shared-pods.yaml
kubectl exec <pod-1-name> -- touch /data/shared/test-file
kubectl exec <pod-2-name> -- ls /data/shared

Explanation: Shared PVCs allow multiple pods to access the same storage, enabling collaborative workloads like file-sharing applications.
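A ReadWriteMany claim only binds if the storage backend supports that access mode (NFS, CephFS, and Azure Files do; most single-node block provisioners do not). With such a backend in place, a single Deployment can drive the validation from several replicas at once. A minimal sketch with assumed names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: shared-app
  template:
    metadata:
      labels:
        app: shared-app
    spec:
      containers:
        - name: app-container
          image: nginx
          volumeMounts:
            - mountPath: /data/shared
              name: shared-volume
      volumes:
        - name: shared-volume
          persistentVolumeClaim:
            claimName: shared-pvc   # the ReadWriteMany claim from step 1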
Deploy a StatefulSet with unique storage for each replica and validate the unique volumes.

See the Solution below.

Solution:

1. Create a StatefulSet with volume templates:

volumeClaimTemplates:
  - metadata:
      name: unique-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

2. Deploy the StatefulSet:

kubectl apply -f statefulset.yaml

3. Validate the unique volumes:

kubectl get pvc | grep unique-storage
kubectl exec <pod-name> -- ls /data

Explanation: StatefulSets with volume templates ensure that each replica has its own storage, critical for stateful applications requiring data isolation.
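A volumeClaimTemplate only provisions storage that a container actually mounts, and the fragment above omits that wiring. A minimal sketch of the surrounding StatefulSet, assuming the /data path used in the validation step; every name except unique-storage is a placeholder:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: unique-demo
spec:
  serviceName: "unique-demo"
  replicas: 3
  selector:
    matchLabels:
      app: unique-demo
  template:
    metadata:
      labels:
        app: unique-demo
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: unique-storage   # must match the volumeClaimTemplate name
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: unique-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi

Each replica then gets its own claim, named unique-storage-unique-demo-0, unique-storage-unique-demo-1, and so on, which is what the grep in step 3 surfaces.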
Create a Secret to store Docker credentials for pulling private images. Validate the pod uses the Secret successfully.

See the Solution below.

Solution:

1. Create a Docker registry Secret:

kubectl create secret docker-registry docker-secret --docker-username=<username> --docker-password=<password> --docker-server=<registry>

2. Update the pod spec to use the Secret:

imagePullSecrets:
  - name: docker-secret

3. Apply the pod and validate:

kubectl apply -f pod.yaml
kubectl describe pod <pod-name>

Explanation: Docker registry Secrets enable Kubernetes to authenticate with private registries, ensuring secure access to private images.

Deploy a ConfigMap with custom configuration files and mount it into a pod. Validate the file structure inside the pod.

See the Solution below.

Solution:

1. Create a ConfigMap with a file:

kubectl create configmap custom-config --from-file=custom.conf

2. Update the pod YAML (volumeMounts belongs inside a container entry):

volumes:
  - name: config-volume
    configMap:
      name: custom-config
containers:
  - name: app-container
    image: nginx
    volumeMounts:
      - mountPath: /etc/custom
        name: config-volume

3. Apply the pod and validate:

kubectl apply -f pod.yaml
kubectl exec <pod-name> -- ls /etc/custom

Explanation: ConfigMaps can store entire files, making it easier to manage complex configurations for applications.
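Mounting the ConfigMap at /etc/custom replaces that whole directory with the ConfigMap's keys. When only custom.conf should appear there, or the directory's existing files must survive, a subPath mount places a single key as a single file. A sketch reusing custom-config; note that subPath mounts do not pick up later ConfigMap updates:

volumes:
  - name: config-volume
    configMap:
      name: custom-config
containers:
  - name: app-container
    image: nginx
    volumeMounts:
      - mountPath: /etc/custom/custom.conf   # one file, not the whole directory
        name: config-volume
        subPath: custom.conf                 # key created by --from-file=custom.conf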
Create a StatefulSet for a Redis cluster with non-shared persistent storage for each node. Validate data persistence after node restarts.

See the Solution below.

Solution:

1. Create a StatefulSet YAML for Redis:

volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

2. Deploy the StatefulSet:

kubectl apply -f redis-statefulset.yaml

3. Insert data into Redis and restart a pod:

redis-cli set key value
kubectl delete pod redis-0
redis-cli get key

Explanation: Redis requires non-shared persistent storage to ensure isolated data handling for each instance in the cluster.

Configure a pod with an external NFS mount and validate access to the shared directory.

See the Solution below.

Solution:

1. Create a PV for NFS (only the nfs stanza is shown; a fuller sketch follows below):

nfs:
  path: /nfs/shared
  server: <nfs-server-ip>

2. Create a PVC and attach it to the pod:

volumes:
  - name: nfs-volume
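A complete PersistentVolume wrapped around the nfs stanza from step 1 might look like the following sketch; the name, capacity, and access mode are assumptions, and <nfs-server-ip> stays a placeholder for your NFS server's address:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi            # assumed size
  accessModes:
    - ReadWriteMany         # NFS supports shared read-write access across pods
  nfs:
    path: /nfs/shared
    server: <nfs-server-ip>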