kubectl get pods
Explanation: Rolling updates allow incremental updates without downtime, ensuring the application remains available during deployment changes.
Implement a canary deployment for an application to test a new version with a subset of users. Validate traffic splitting. See the Solution below.
Solution:
1. Deploy the stable version:
kubectl apply -f stable-deployment.yaml
2. Deploy the canary version with a lower replica count:
kubectl apply -f canary-deployment.yaml
3. Create a Service whose selector matches a label shared by both Deployments, so traffic is split roughly in proportion to their replica counts (for example, 4 stable replicas and 1 canary replica give an approximate 80/20 split):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
4. Validate traffic splitting by sending repeated requests from a pod inside the cluster:
curl http://my-app
Explanation: Canary deployments allow controlled testing of new versions, minimizing risk by routing only a portion of traffic to the new version. A plain Service has no percentage-based traffic policy; the split comes from the replica ratio, or from an Ingress controller or service mesh that supports weighted routing.
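For reference, here is a minimal sketch of what stable-deployment.yaml and canary-deployment.yaml might contain under the approach above. The Deployment names, the track label, and the nginx image tags are illustrative assumptions; the only requirement is that both pod templates carry the app: my-app label that the Service selects.

# stable-deployment.yaml (assumed content): four replicas of the current version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stable-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app      # matched by the Service selector
        track: stable
    spec:
      containers:
      - name: app-container
        image: nginx:1.25
---
# canary-deployment.yaml (assumed content): one replica of the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app      # also matched by the Service selector
        track: canary
    spec:
      containers:
      - name: app-container
        image: nginx:1.26

Shifting more traffic to the canary is then just a matter of scaling the two Deployments, for example kubectl scale deployment/canary-deployment --replicas=2.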
Configure a Deployment with a grace period for pod termination during scaling or updates. Validate the termination behavior. See the Solution below.
Solution:
1. Add a termination grace period:
spec:
  terminationGracePeriodSeconds: 30
2. Apply the Deployment:
kubectl apply -f deployment.yaml
3. Trigger a pod termination and validate:
kubectl delete pod <pod-name>
kubectl get events
Explanation: Grace periods allow applications to complete ongoing requests or cleanup tasks before termination, ensuring a smooth shutdown process.
Simulate and troubleshoot a failed readiness probe. Validate application recovery after resolving the issue. See the Solution below.
Solution:
1. Simulate a readiness failure:
kubectl exec <pod-name> -- mv /usr/share/nginx/html/index.html /tmp/
2. Validate the readiness probe failure:
kubectl get pods
kubectl describe pod <pod-name>
3. Fix the issue and validate recovery:
kubectl exec <pod-name> -- mv /tmp/index.html /usr/share/nginx/html/
Explanation: Readiness probes help identify and isolate failing pods, preventing traffic from being routed to unhealthy instances.
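The simulation above assumes the Deployment's pod template defines an HTTP readiness probe against a file served by the stock nginx image; a minimal sketch of such a probe, where the path, port, and timing values are assumptions rather than values from the original manifest:

containers:
- name: app-container
  image: nginx
  readinessProbe:
    httpGet:
      path: /index.html   # returns 404 once index.html is moved to /tmp
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 3

While the probe fails, the pod stays Running but shows 0/1 in the READY column and is removed from the Service endpoints; moving the file back restores readiness within a few probe periods.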
Deploy an application with node affinity to ensure pods run on specific nodes. Validate scheduling behavior. See the Solution below.
Solution:
1. Add node affinity rules (the label value is quoted so it is parsed as the string "true" rather than a YAML boolean):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: In
          values:
          - "true"
2. Apply the Deployment:
kubectl apply -f deployment.yaml
3. Validate pod placement:
kubectl get pods -o wide
Explanation: Node affinity ensures pods are scheduled on nodes meeting specific criteria, optimizing performance and compliance.
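For context, the affinity block belongs under the pod template's spec; a minimal sketch of a complete deployment.yaml with the rule in place, where the Deployment name, labels, and nginx image are assumptions, followed by a command to confirm which nodes actually carry the matching label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/worker
                operator: In
                values:
                - "true"
      containers:
      - name: app-container
        image: nginx

# List node labels to confirm which nodes satisfy the rule before scheduling
kubectl get nodes --show-labels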
Deploy an application using a DeploymentConfig in OpenShift to support rollback mechanisms. Validate rollback functionality. See the Solution below.
Solution:
1. Create a DeploymentConfig:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app
spec:
  replicas: 2
  strategy:
    type: Rolling
  template:
    spec:
      containers:
      - name: app-container
        image: nginx
2. Trigger a new deployment with an incorrect image:
oc set image dc/my-app app-container=nginx:wrong
3. Roll back to the previous version:
oc rollout undo dc/my-app
4. Validate the rollback:
oc get pods
Explanation: DeploymentConfigs in OpenShift simplify application version management with built-in rollback support.
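If you want to see which revision the undo will restore and confirm that it completes, the DeploymentConfig's rollout history and status can be checked as well; a short sketch using the my-app DeploymentConfig from the solution above (revision numbers in the output will vary):

# Show past revisions; the nginx:wrong rollout appears as the latest one
oc rollout history dc/my-app

# Roll back, then watch until the previous revision is fully redeployed
oc rollout undo dc/my-app
oc rollout status dc/my-app

# Confirm the container image was reverted
oc get dc/my-app -o jsonpath='{.spec.template.spec.containers[0].image}'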
Identify and validate an image using both its tag and digest. See the Solution below.
Solution:
1. Pull an image using a tag:
docker pull nginx:latest
2. Identify the image digest:
docker inspect --format '{{index .RepoDigests 0}}' nginx:latest
3. Pull the image using the digest:
docker pull nginx@sha256:<digest>
4. Verify both images are identical:
docker images --digests | grep nginx
Explanation: Tags are mutable and can change, but digests uniquely identify a specific image version, ensuring consistency across environments.
Perform a rollback for a failed deployment and validate the application’s recovery. See the Solution below.
Solution:
1. Deploy a new version with a broken image:
kubectl set image deployment/my-app app-container=nginx:broken
2. Validate the failed deployment:
kubectl get pods
3. Roll back to the previous version:
kubectl rollout undo deployment/my-app
4. Validate recovery:
kubectl get pods
Explanation: Rollbacks restore the last working deployment, minimizing downtime and recovering from failed updates efficiently.
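Before undoing, it can help to confirm why the nginx:broken rollout is stuck; a short diagnostic sketch, assuming the Deployment and container names used in the solution above (the nonexistent tag normally surfaces as an image pull error in the pod events):

# Pods from the new ReplicaSet stay unready; their events show the pull failure
kubectl describe pod <pod-name>

# After kubectl rollout undo, verify the Deployment is back on the working image
kubectl rollout status deployment/my-app
kubectl get deployment my-app -o jsonpath='{.spec.template.spec.containers[0].image}'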
Create an ImageStream in OpenShift and use it to track a specific image version. Validate the tracking behavior. See the Solution below.
Solution:
1. Create a Headless Service YAML file:
apiVersion: v1
kind: Service
metadata:
  name: headless-service
spec:
  clusterIP: None
  selector:
    app: my-stateful-app
  ports:
  - protocol: TCP
    port: 80
2. Apply the Service and verify DNS resolution:
kubectl apply -f headless-service.yaml
kubectl exec <pod-name> -- nslookup headless-service
Explanation: Headless Services provide direct access to individual pod IPs, which is essential for StatefulSet workloads.
Deploy an application using a Kubernetes CronJob that runs every 5 minutes. Validate its execution. See the Solution below.
Solution:
1. Create a CronJob YAML file cronjob.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-container