Ready to Pass Your Certification Test?

Ready to guarantee a pass on the certification that will elevate your career? Visit this page to explore our catalog and get the questions and answers you need to ace the test.

Exam contains 260 questions

Page 9 of 44
Question 49 🔥

Explanation: Dynamic provisioning automates PV creation, simplifying storage management in OpenShift.

Troubleshoot a Deployment rollout that is stuck because of a failing readiness probe. Resolve the issue and ensure the rollout completes successfully.

See the Solution below.

Solution:

1. Check the rollout status:

   kubectl rollout status deployment <deployment-name>

2. Describe the pod to inspect readiness probe errors:

   kubectl describe pod <pod-name>

3. Update the readiness probe in the Deployment YAML:

   readinessProbe:
     httpGet:
       path: /health
       port: 80
     initialDelaySeconds: 5
     periodSeconds: 10

4. Apply the updated Deployment and verify rollout completion:

   kubectl apply -f deployment.yaml
   kubectl rollout status deployment <deployment-name>

Explanation: Readiness probes ensure that only functional pods receive traffic. Correcting probe configurations resolves rollout issues. (A sketch of this probe in the context of a complete Deployment manifest appears at the end of this block.)

Use OpenShift monitoring to set up alerts for node disk usage exceeding 90%. Include steps to configure Prometheus rules and validate alert functionality.

See the Solution below.
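For readers who want to see the probe in context, here is a minimal sketch of a complete Deployment carrying the readiness probe from the solution above. The Deployment name, labels, and image (web, app: web, nginx:1.25) are illustrative assumptions, not part of the exam question:

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: web                  # hypothetical name; substitute your Deployment
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: web
     template:
       metadata:
         labels:
           app: web
       spec:
         containers:
         - name: web
           image: nginx:1.25    # hypothetical image
           ports:
           - containerPort: 80
           readinessProbe:      # probe settings from the solution above;
             httpGet:           # the application must actually serve GET /health on port 80
               path: /health
               port: 80
             initialDelaySeconds: 5
             periodSeconds: 10

Until the probe succeeds, the pod is excluded from Service endpoints, which is exactly why a broken probe stalls the rollout.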

Question 50 🔥

Solution:

1. Edit the Prometheus configuration:

   kubectl edit cm prometheus-config -n openshift-monitoring

   Add a disk usage alert rule:

   groups:
   - name: node.rules
     rules:
     - alert: NodeDiskUsageHigh
       expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.1
       for: 2m
       labels:
         severity: warning
       annotations:
         summary: "High disk usage on node {{ $labels.instance }}"
         description: "Disk usage exceeds 90% on {{ $labels.instance }}"

2. Restart Prometheus:

   kubectl rollout restart deployment prometheus -n openshift-monitoring

3. Simulate high disk usage to test the alerts (a sketch of one way to do this appears at the end of this block).

Explanation: Disk usage monitoring and alerts enable proactive management of critical resources, ensuring cluster stability.

Implement an OpenShift Job to process a file uploaded to a shared volume. Validate that the Job processes the file and terminates successfully.

See the Solution below.
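Step 3 above leaves the simulation to the reader. Here is a minimal sketch of one way to push node disk usage above the threshold and confirm the alert fires, assuming a debug shell on a worker node; the node name, file path, and file size are placeholders:

   # Open a debug shell on a worker node (node name is a placeholder)
   oc debug node/<node-name>
   chroot /host

   # Temporarily consume space so available bytes drop below 10% of the filesystem
   # (adjust the size to the node; delete the file immediately after the test)
   fallocate -l 50G /var/tmp/alert-test.img

   # After the rule's "for: 2m" window, check that NodeDiskUsageHigh is firing
   # in the web console (Observe -> Alerting) or in the Prometheus UI.

   # Clean up
   rm /var/tmp/alert-test.img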

Question 51 🔥

Solution:

1. Create a shared volume PVC:

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: shared-pvc
   spec:
     accessModes:
     - ReadWriteMany
     resources:
       requests:
         storage: 1Gi

2. Define a Job YAML file:

   apiVersion: batch/v1
   kind: Job
   metadata:
     name: file-processor
   spec:
     template:
       spec:
         containers:
         - name: processor
           image: busybox
           command: ["sh", "-c", "cat /data/input.txt && echo Processed"]
           volumeMounts:
           - name: shared-volume
             mountPath: /data
         restartPolicy: Never
         volumes:
         - name: shared-volume
           persistentVolumeClaim:
             claimName: shared-pvc

3. Apply the Job and validate:

   kubectl apply -f file-processor.yaml
   kubectl logs job/file-processor

Explanation: Jobs are ideal for one-time tasks like file processing. Shared volumes enable data access across workloads. (A sketch of how the input file might be placed on the shared volume appears at the end of this block.)

Configure OpenShift to enforce an image pull policy that always pulls the latest image version when a pod is created or restarted. Validate the configuration by deploying a pod.

See the Solution below.
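The file-processing Job above assumes /data/input.txt already exists on the shared volume. One way to upload it beforehand is a short-lived helper pod that mounts the same PVC; the pod name uploader and the file contents below are illustrative assumptions, not part of the exam question:

   apiVersion: v1
   kind: Pod
   metadata:
     name: uploader             # hypothetical helper pod
   spec:
     restartPolicy: Never
     containers:
     - name: uploader
       image: busybox
       command: ["sleep", "300"]
       volumeMounts:
       - name: shared-volume
         mountPath: /data
     volumes:
     - name: shared-volume
       persistentVolumeClaim:
         claimName: shared-pvc

   # Write the input file onto the shared volume, then remove the helper pod;
   # the file stays on the PVC for the Job to read.
   kubectl apply -f uploader.yaml
   kubectl exec uploader -- sh -c 'echo "sample input" > /data/input.txt'
   kubectl delete pod uploader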

Question 52 🔥

Solution:

1. Update the pod spec with the Always image pull policy:

   apiVersion: v1
   kind: Pod
   metadata:
     name: latest-image-pod
   spec:
     containers:
     - name: app-container
       image: nginx:latest
       imagePullPolicy: Always

2. Apply the pod configuration:

   kubectl apply -f latest-image-pod.yaml

3. Validate the policy by restarting the pod:

   kubectl delete pod latest-image-pod
   kubectl get pods

Explanation: The Always policy ensures that pods use the most recent image version, promoting consistency and reducing the risk of outdated containers.

Perform a rollback to a previous Deployment revision after identifying issues in the latest deployment. Validate that the rollback is successful.

See the Solution below.

Solution:

1. Check the Deployment history:

   kubectl rollout history deployment <deployment-name>

2. Roll back to the previous revision:

   kubectl rollout undo deployment <deployment-name> --to-revision=<revision-number>

3. Verify the rollback:

   kubectl get pods
   kubectl rollout status deployment <deployment-name>

Explanation: Rollbacks quickly revert to a stable state, minimizing downtime and mitigating risks from faulty deployments.
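As a worked illustration of the rollback steps above, here is a minimal sketch assuming a Deployment named web with at least two recorded revisions; the name, label, and revision number are placeholders:

   # List recorded revisions (columns: REVISION, CHANGE-CAUSE)
   kubectl rollout history deployment web

   # Inspect the pod template of a specific revision before rolling back
   kubectl rollout history deployment web --revision=2

   # Roll back to that revision and wait for the rollout to finish
   kubectl rollout undo deployment web --to-revision=2
   kubectl rollout status deployment web

   # Confirm the pods now run the expected image
   kubectl get pods -l app=web -o jsonpath='{.items[*].spec.containers[*].image}'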

Question 53 🔥

Use the OpenShift CLI to scale a StatefulSet horizontally. Verify that each pod maintains its unique identity after scaling.

See the Solution below.

Solution:

1. Scale the StatefulSet:

   kubectl scale statefulset <statefulset-name> --replicas=5

2. Verify the pod identities:

   kubectl get pods -o wide

3. Confirm that each pod retains its identity (e.g., <statefulset-name>-0, <statefulset-name>-1, and so on).

Explanation: StatefulSets ensure stable network identities for pods, which is crucial for stateful applications like databases and caches.

Troubleshoot and resolve an issue where an Ingress rule is not directing traffic to the correct backend service. Include validation steps after fixing the issue.

See the Solution below.

Solution:

1. Check the Ingress configuration:

   kubectl describe ingress <ingress-name>

2. Verify that the service exists and matches the Ingress backend:

   kubectl get svc <service-name>

3. Correct the Ingress rule if necessary:

   rules:
   - http:
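The Ingress rule in step 3 is cut off in the source. As a hedged illustration only, a corrected rule of this shape points each path at the intended Service name and port; web-service, /, and 8080 below are placeholder values, not taken from the exam question:

   rules:
   - http:
       paths:
       - path: /
         pathType: Prefix
         backend:
           service:
             name: web-service    # must match the Service found with "kubectl get svc"
             port:
               number: 8080       # must match a port exposed by that Service

After applying the corrected Ingress, kubectl describe ingress <ingress-name> should show the backend resolving to existing endpoints, and a request to the Ingress host should reach the intended service.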

Question 54 🔥

   kubectl get pods

4. Test the application by exposing the Deployment:

   kubectl expose deployment nginx-deployment --type=NodePort --port=80
   kubectl get svc

5. Use the NodePort and cluster IP to confirm that the application is serving requests.

Explanation: Deployments provide a scalable and declarative way to manage applications. YAML manifests keep the configuration consistent, while NodePort services expose the application for testing. Verifying the replicas confirms that the application is running as expected and is resilient.

Your team requires an application to load specific configuration data dynamically at runtime. Create a ConfigMap to hold key-value pairs for application settings, and update an existing Deployment to use this ConfigMap. Provide a complete YAML definition for both the ConfigMap and the updated Deployment, and demonstrate how to validate that the configuration is applied correctly.

See the Solution below.

Solution:

1. Create a ConfigMap YAML file named app-config.yaml:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: app-config
   data:
     APP_ENV: production
     APP_DEBUG: "false"

2. Apply the ConfigMap:

   kubectl apply -f app-config.yaml

3. Update the Deployment YAML to reference the ConfigMap:

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: app-deployment
   spec:
     replicas: 1
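The Deployment manifest in step 3 is truncated at replicas: 1 in the source. As a hedged sketch only, a Deployment commonly consumes such a ConfigMap through envFrom, and the values can then be checked inside a running pod; the selector labels, container name, and image below are illustrative assumptions:

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: app-deployment
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: app-deployment     # hypothetical label
     template:
       metadata:
         labels:
           app: app-deployment
       spec:
         containers:
         - name: app             # hypothetical container name
           image: nginx:1.25     # hypothetical image
           envFrom:
           - configMapRef:
               name: app-config  # injects APP_ENV and APP_DEBUG as environment variables

To validate, apply the manifest and inspect the environment inside the pod, for example with kubectl exec deploy/app-deployment -- env | grep APP_ ; both APP_ENV=production and APP_DEBUG=false should appear.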


© 2024 Exam Prepare, Inc. All Rights Reserved.