failure. See the Solution below.

Solution:
1. Back up the etcd data:

   ETCDCTL_API=3 etcdctl --endpoints=<etcd-endpoint> \
     --cacert=/path/to/ca.crt \
     --cert=/path/to/etcd-client.crt \
     --key=/path/to/etcd-client.key snapshot save /backup/etcd-backup.db

2. Verify the backup:

   ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-backup.db

3. Restore the etcd data:

   ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-backup.db \
     --data-dir=/path/to/new-data-dir

   After restoring, point etcd at the new data directory (on kubeadm clusters, update --data-dir in the etcd static pod manifest, typically /etc/kubernetes/manifests/etcd.yaml).

Explanation: Backing up etcd ensures that critical cluster state information can be recovered during disasters. Restoring from snapshots minimizes downtime and restores cluster integrity.

Troubleshoot a persistent volume claim (PVC) stuck in the Pending state. Identify and resolve common issues such as storage class misconfiguration or unavailable PVs. See the Solution below.

Solution:
1. Check the PVC details:

   kubectl describe pvc <pvc-name>

2. Verify PV availability and a matching storage class:

   kubectl get pv

3. Fix the storage class on the claim, or provision a new PV if needed (a sketch follows below):

   storageClassName: <correct-storage-class>
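For step 3, if no existing PV can satisfy the claim, a statically provisioned volume can unblock it. A minimal sketch, assuming a test cluster where a hostPath backend is acceptable (the name, capacity, and path are illustrative):

   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: manual-pv             # hypothetical name
   spec:
     capacity:
       storage: 1Gi              # must be at least the PVC's request
     accessModes:
       - ReadWriteOnce           # must match the PVC's access mode
     storageClassName: <correct-storage-class>
     hostPath:
       path: /mnt/data           # test-only backend; use real storage in production

Once the PV's storage class, capacity, and access mode match the claim, the PVC should bind and leave the Pending state.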
Explanation: Startup probes are designed for applications with long initialization times, ensuring the pod is marked healthy only after a successful startup.

Manually configure resource quotas for a namespace to limit total application resource usage. Validate enforcement. See the Solution below.

Solution:
1. Create a ResourceQuota YAML file:

   apiVersion: v1
   kind: ResourceQuota
   metadata:
     name: compute-resources
   spec:
     hard:
       requests.cpu: "2"
       requests.memory: "2Gi"
       limits.cpu: "4"
       limits.memory: "4Gi"

2. Apply the ResourceQuota:

   kubectl apply -f resourcequota.yaml -n my-namespace

3. Deploy a pod exceeding the limits (a sketch follows below) and validate:

   kubectl describe quota compute-resources -n my-namespace

Explanation: Resource quotas restrict namespace resource usage, preventing excessive consumption and maintaining cluster fairness.
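To exercise the quota in step 3, try a pod whose requests exceed what the quota allows. A minimal sketch (the name and sizes are illustrative):

   apiVersion: v1
   kind: Pod
   metadata:
     name: quota-buster          # hypothetical name
   spec:
     containers:
     - name: app
       image: nginx
       resources:
         requests:
           cpu: "3"              # exceeds the 2-CPU requests.cpu quota
           memory: 1Gi
         limits:
           cpu: "3"
           memory: 1Gi

Applying this in my-namespace should be rejected at admission with a Forbidden error citing the exceeded quota.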
Configure a Horizontal Pod Autoscaler (HPA) for an application that scales based on custom metrics using Prometheus. Validate the custom metrics scaling behavior. See the Solution below.

Solution:
1. Install Prometheus and configure metrics collection:

   kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/bundle.yaml

   Note that the HPA consumes custom metrics through the custom metrics API, so an adapter such as prometheus-adapter must also be deployed to expose Prometheus metrics to the API server.

2. Annotate the application deployment to expose custom metrics:

   kubectl annotate deployment my-app prometheus.io/scrape=true prometheus.io/port=8080

3. Create an HPA configuration:

   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: custom-hpa
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: my-app
     minReplicas: 2
     maxReplicas: 10
     metrics:
     - type: Pods
       pods:
         metric:
           name: custom_metric
         target:
           type: AverageValue
           averageValue: 500m

4. Apply the HPA:

   kubectl apply -f custom-hpa.yaml

5. Validate scaling behavior by generating load (a sketch follows below):

   kubectl get hpa

Explanation: Custom metrics scaling enables dynamic scaling based on application-specific metrics, improving responsiveness and resource utilization.
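For the load-generation step, one option is a throwaway busybox pod that polls the application. This assumes my-app is reachable through a Service named my-app on port 8080; adjust the URL to your setup:

   kubectl run load-generator --image=busybox --restart=Never -- \
     /bin/sh -c "while true; do wget -q -O- http://my-app:8080; done"

   kubectl get hpa custom-hpa --watch

As the per-pod average of custom_metric climbs past 500m, the replica count should rise toward maxReplicas.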
Implement a PodDisruptionBudget (PDB) for a deployment to maintain a minimum number of pods during node maintenance. Validate its enforcement. See the Solution below.

Solution:
1. Create a PDB YAML file:

   apiVersion: policy/v1
   kind: PodDisruptionBudget
   metadata:
     name: my-app-pdb
   spec:
     minAvailable: 2
     selector:
       matchLabels:
         app: my-app

2. Apply the PDB:

   kubectl apply -f pdb.yaml

3. Attempt to drain a node:

   kubectl drain <node-name> --ignore-daemonsets

4. Validate that a minimum of 2 pods remain running (the sketch below shows how to inspect the budget directly):

   kubectl get pods

Explanation: PDBs ensure a minimum number of pods remain available during voluntary disruptions, enhancing application availability during maintenance.
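Eviction headroom can also be checked on the budget itself; the ALLOWED DISRUPTIONS column shows how many pods the drain's eviction calls may remove at that moment (output values are illustrative):

   kubectl get pdb my-app-pdb

   NAME         MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
   my-app-pdb   2               N/A               1                     5m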
Configure a Deployment with a memory-based Vertical Pod Autoscaler (VPA). Validate the autoscaler's behavior. See the Solution below.

Solution:
1. Install the Vertical Pod Autoscaler:

   kubectl apply -f https://github.com/kubernetes/autoscaler/releases/latest/download/vertical-pod-autoscaler.yaml

2. Create a VPA for the deployment:

   apiVersion: autoscaling.k8s.io/v1
   kind: VerticalPodAutoscaler
   metadata:
     name: my-app-vpa
   spec:
     targetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: my-app
     updatePolicy:
       updateMode: "Auto"

3. Apply the VPA:

   kubectl apply -f vpa.yaml

4. Validate memory adjustments:

   kubectl describe vpa my-app-vpa

Explanation: VPA adjusts pod resources dynamically, optimizing performance and ensuring sufficient resource allocation based on workload.

Set up an application with both a ResourceQuota and a LimitRange to enforce container-specific limits in a namespace. Validate enforcement. See the Solution below.

Solution:
1. Create a ResourceQuota:

   apiVersion: v1
   kind: ResourceQuota
   metadata:
     name: namespace-quota
   spec:
     hard:
       requests.cpu: "2"
       requests.memory: "2Gi"
       limits.cpu: "4"
       limits.memory: "4Gi"
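The exercise also calls for a LimitRange to enforce container-specific limits. A minimal sketch to pair with the quota above (the name and values are illustrative):

   apiVersion: v1
   kind: LimitRange
   metadata:
     name: container-limits      # hypothetical name
   spec:
     limits:
     - type: Container
       default:                  # applied as limits when a container sets none
         cpu: 500m
         memory: 512Mi
       defaultRequest:           # applied as requests when a container sets none
         cpu: 250m
         memory: 256Mi
       max:                      # hard per-container ceiling
         cpu: "1"
         memory: 1Gi

Together, the quota caps aggregate usage for the namespace while the LimitRange defaults and bounds each individual container.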
Solution:
1. Create a Headless Service YAML file:

   apiVersion: v1
   kind: Service
   metadata:
     name: headless-service
   spec:
     clusterIP: None
     selector:
       app: my-stateful-app
     ports:
     - protocol: TCP
       port: 80

2. Apply the Service and verify DNS resolution:

   kubectl apply -f headless-service.yaml
   kubectl exec <pod-name> -- nslookup headless-service

Explanation: Headless Services provide direct access to individual pod IPs, which is essential for StatefulSet workloads.

Deploy an application using a Kubernetes CronJob that runs every 5 minutes. Validate its execution. See the Solution below.

Solution:
1. Create a CronJob YAML file cronjob.yaml:

   apiVersion: batch/v1
   kind: CronJob
   metadata:
     name: my-cronjob
   spec:
     schedule: "*/5 * * * *"
     jobTemplate:
       spec:
         template:
           spec:
             containers:
             - name: my-container