Ready to Pass Your Certification Test

Ready to guarantee a pass on the certification that will elevate your career? Visit this page to explore our catalog and get the questions and answers you need to ace the test.

Exam contains 260 questions

Question 19 🔥

This approach minimizes resource conflicts.

Perform a rolling restart of all pods in a deployment to update configurations without downtime. Provide commands and verify that the restart was successful.

See the Solution below.

Solution:
1. Trigger a rolling restart:
kubectl rollout restart deployment <deployment-name>
2. Monitor the rollout status:
kubectl rollout status deployment <deployment-name>
3. Verify pod restarts:
kubectl get pods

Explanation: Rolling restarts allow dynamic reloading of configurations or updates, ensuring application availability throughout the process.

Configure and validate a cluster-wide limit range to enforce CPU and memory limits for all newly created pods. Include the YAML definition and demonstrate how to test the limit enforcement.

See the Solution below.

Solution:
1. Create a LimitRange YAML file limit-range.yaml:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - default:
      cpu: "500m"

Question 20 🔥

      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
    type: Container
2. Apply the LimitRange:
kubectl apply -f limit-range.yaml
3. Verify the enforcement:
kubectl describe limitrange default-limits

Explanation: Limit ranges enforce default resource usage caps, promoting balanced resource consumption and preventing over-provisioning.

Set up OpenShift to log all administrative actions performed by a specific user. Demonstrate how to analyze the logs for compliance purposes.

See the Solution below.

Solution:
1. Enable audit logging:
kubectl edit cm kube-apiserver -n kube-system
Add: --audit-log-path=/var/log/audit.log
2. Filter logs for the specific user:
grep "user=<username>" /var/log/audit.log

Explanation: Audit logging tracks administrative actions, ensuring accountability and compliance with organizational policies.

Configure an Ingress resource to expose an application using a custom domain name. Include steps to

Question 21 🔥

create the Ingress YAML and validate that the domain resolves to the application.

See the Solution below.

Solution:
1. Create an Ingress YAML file ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: custom-domain.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
2. Apply the Ingress resource:
kubectl apply -f ingress.yaml
3. Update DNS or /etc/hosts to point custom-domain.example.com to the cluster IP.
4. Verify accessibility:
curl http://custom-domain.example.com

Explanation: Ingress provides an HTTP(S) layer to expose services using custom domains, offering centralized traffic management.

Troubleshoot and fix an issue where a pod fails to start due to insufficient resources. Identify the problem and reconfigure the deployment to ensure proper resource allocation.

See the Solution below.
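Diagnosing "insufficient resources" usually comes down to comparing a pod's requests against what a node has left. A minimal Python sketch of that check, assuming illustrative helper names (parse_cpu, parse_memory, fits_on_node) and sample numbers; real scheduling involves more factors (taints, affinity, already-allocated pods):

```python
# Hypothetical offline resource-fit check using Kubernetes-style quantity
# strings; helper names and sample values are illustrative only.

def parse_cpu(q: str) -> float:
    """Parse a Kubernetes CPU quantity: '500m' -> 0.5, '2' -> 2.0."""
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q: str) -> int:
    """Parse a memory quantity into bytes (subset: Ki/Mi/Gi suffixes)."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain bytes

def fits_on_node(requests: dict, allocatable: dict) -> bool:
    """True if the pod's requests fit within the node's allocatable resources."""
    return (parse_cpu(requests["cpu"]) <= parse_cpu(allocatable["cpu"])
            and parse_memory(requests["memory"]) <= parse_memory(allocatable["memory"]))

# Requests similar to the solution below, against a hypothetical 2-CPU/4Gi node
node = {"cpu": "2", "memory": "4Gi"}
print(fits_on_node({"cpu": "200m", "memory": "256Mi"}, node))  # True
print(fits_on_node({"cpu": "4", "memory": "8Gi"}, node))       # False
```

This mirrors what `kubectl describe pod` reports when scheduling fails: a FailedScheduling event naming the resource that did not fit.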

Question 22 🔥

Solution:
1. Describe the pod to identify the issue:
kubectl describe pod <pod-name>
2. Update the Deployment YAML to adjust resource requests and limits:
resources:
  requests:
    cpu: "200m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
3. Apply the updated Deployment:
kubectl apply -f deployment.yaml
4. Verify pod status:
kubectl get pods

Explanation: Insufficient resources prevent pods from starting. Adjusting requests and limits ensures workloads fit within node capacities while adhering to cluster policies.

Perform a node cordon and drain operation to safely remove all pods from a node for maintenance. Include steps to validate that no workloads are running on the node after draining.

See the Solution below.

Solution:
1. Cordon the node to prevent new pods:
kubectl cordon <node-name>
2. Drain the node:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
3. Verify no pods are running:

Question 23 🔥

kubectl get pods --all-namespaces -o wide | grep <node-name>

Explanation: Cordoning and draining nodes ensure minimal disruption by safely migrating workloads before node maintenance.

Configure and use a Kubernetes Job to run a batch process that executes a simple script. Include steps to define the Job YAML and verify its completion.

See the Solution below.

Solution:
1. Create a Job YAML file batch-job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: simple-job
spec:
  template:
    spec:
      containers:
      - name: batch-container
        image: busybox
        command: ["sh", "-c", "echo Hello, Kubernetes! && sleep 30"]
      restartPolicy: Never
2. Apply the Job:
kubectl apply -f batch-job.yaml
3. Monitor Job status:
kubectl get jobs
kubectl logs job/simple-job

Explanation: Jobs run one-time batch processes, and their configuration ensures completion tracking and logging for validation.
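Manifests like the Job above do not have to be hand-written YAML: kubectl also accepts JSON, so you can build the same structure programmatically and serialize it. A minimal sketch, assuming the JSON form is applied with a hypothetical `kubectl apply -f simple-job.json`:

```python
import json

# Illustrative sketch: the Job manifest from the solution above, built as a
# Python dict and serialized to JSON (a format kubectl also accepts).
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "simple-job"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "batch-container",
                    "image": "busybox",
                    "command": ["sh", "-c", "echo Hello, Kubernetes! && sleep 30"],
                }],
                # Jobs require a restartPolicy of Never or OnFailure
                "restartPolicy": "Never",
            }
        }
    },
}

manifest = json.dumps(job, indent=2)
print(manifest)
```

Building manifests in code makes it easy to template names and commands across many similar Jobs while keeping the required fields (apiVersion, kind, restartPolicy) in one place.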

Question 24 🔥

kubectl get pods
4. Test the application by exposing the Deployment:
kubectl expose deployment nginx-deployment --type=NodePort --port=80
kubectl get svc
5. Use the NodePort and cluster IP to confirm that the application is serving requests.

Explanation: Deployments provide a scalable and declarative way to manage applications. YAML manifests ensure the configuration is consistent, while NodePort services expose the application for testing. Verifying replicas ensures that the application is running as expected and resilient.

Your team requires an application to load specific configuration data dynamically during runtime. Create a ConfigMap to hold key-value pairs for application settings, and update an existing Deployment to use this ConfigMap. Provide a complete YAML definition for both the ConfigMap and the updated Deployment, and demonstrate how to validate that the configuration is applied correctly.

See the Solution below.

Solution:
1. Create a ConfigMap YAML file named app-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: production
  APP_DEBUG: "false"
2. Apply the ConfigMap using:
kubectl apply -f app-config.yaml
3. Update the Deployment YAML to reference the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 1
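When a container references the app-config ConfigMap from step 1 (for example via envFrom), each key in its data becomes an environment variable. A minimal Python sketch of that expansion; the helper name env_from_configmap is hypothetical:

```python
# Hypothetical sketch of how ConfigMap data maps to container environment
# variables; the helper name is illustrative, not a real Kubernetes API.
config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "app-config"},
    "data": {"APP_ENV": "production", "APP_DEBUG": "false"},
}

def env_from_configmap(cm: dict) -> list:
    """Expand ConfigMap data into the env entries a container would receive."""
    return [{"name": k, "value": v} for k, v in sorted(cm["data"].items())]

for entry in env_from_configmap(config_map):
    print(f'{entry["name"]}={entry["value"]}')
# APP_DEBUG=false
# APP_ENV=production
```

Inside a running pod the same result can be confirmed with `kubectl exec <pod-name> -- env`, which should list APP_ENV and APP_DEBUG.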


© 2024 Exam Prepare, Inc. All Rights Reserved.