Ready to Pass Your Certification Test

Ready to guarantee a pass on the certification that will elevate your career? Visit this page to explore our catalog and get the questions and answers you need to ace the test.

Exam contains 260 questions

Question 31 🔥

Configure OpenShift to automatically clean up completed Jobs after a specific time. Include steps to modify the TTL for finished Jobs and verify its functionality.

See the Solution below.

Solution:

1. Create a Job with a TTL configuration:

   apiVersion: batch/v1
   kind: Job
   metadata:
     name: cleanup-job
   spec:
     ttlSecondsAfterFinished: 3600
     template:
       spec:
         containers:
         - name: cleanup-container
           image: busybox
           command: ["sh", "-c", "echo Job complete!"]
         restartPolicy: Never

2. Apply the Job:

   kubectl apply -f cleanup-job.yaml

3. Verify that the Job is deleted after completion, once the TTL expires:

   kubectl get jobs --watch

Explanation: Setting a TTL for finished Jobs ensures that completed resources are cleaned up automatically, reducing clutter and conserving cluster resources.

Use taints and tolerations to ensure that specific pods are scheduled only on designated nodes. Include YAML definitions for both the node taint and the pod tolerations.

See the Solution below.

Solution:

1. Add a taint to a node:

   kubectl taint nodes <node-name> key=value:NoSchedule

Question 32 🔥

2. Update the pod definition with tolerations:

   tolerations:
   - key: "key"
     operator: "Equal"
     value: "value"
     effect: "NoSchedule"

3. Apply the updated pod YAML:

   kubectl apply -f pod.yaml

4. Verify the pod placement:

   kubectl get pods -o wide

Explanation: Taints and tolerations allow fine-grained control over pod scheduling, ensuring that critical workloads are assigned to specific nodes.

Set up and test horizontal autoscaling for an application based on custom metrics such as memory usage. Include YAML definitions and steps to simulate load.

See the Solution below.

Solution:

1. Define a custom metric and set up a Prometheus adapter for metrics collection.

2. Create an HPA YAML file:

   apiVersion: autoscaling/v2beta2
   kind: HorizontalPodAutoscaler
   metadata:
     name: memory-scaler
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: app-deployment
     minReplicas: 1
     maxReplicas: 5
     metrics:
     - type: Resource
       resource:

Question 33 🔥

         name: memory
         target:
           type: Utilization
           averageUtilization: 70

3. Apply the HPA:

   kubectl apply -f hpa.yaml

4. Simulate high memory usage:

   kubectl exec <pod-name> -- stress --vm 1 --vm-bytes 100M --timeout 60s

5. Verify scaling:

   kubectl get hpa
   kubectl get pods

Explanation: Custom-metrics-based autoscaling allows more granular control over scaling policies, adapting resources dynamically to workload patterns.

Configure and validate OpenShift cluster monitoring using Grafana dashboards. Include steps to deploy Grafana and connect it to Prometheus.

See the Solution below.

Solution:

1. Deploy Grafana:

   kubectl create deployment grafana --image=grafana/grafana
   kubectl expose deployment grafana --type=NodePort --port=3000

2. Add Prometheus as a data source in Grafana:
   - Access Grafana at <Node-IP>:<NodePort>.
   - Configure the Prometheus URL (http://prometheus.openshift-monitoring.svc:9090).

3. Import a predefined dashboard and verify metrics.

Explanation: Grafana enhances monitoring by providing advanced visualization and analysis capabilities, integrating seamlessly with Prometheus for metrics collection.
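One caveat on the HPA manifest above: the autoscaling/v2beta2 API was removed in Kubernetes 1.26, so on current OpenShift releases the same HorizontalPodAutoscaler should be written against the stable autoscaling/v2 API. The structure for resource metrics is identical; this sketch reuses the names from the solution above:

```yaml
apiVersion: autoscaling/v2        # stable API; replaces autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: memory-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average memory utilization exceeds 70%
```

Apply it the same way (kubectl apply -f hpa.yaml); kubectl api-versions shows which autoscaling versions your cluster serves.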

Question 34 🔥

Configure OpenShift to perform a rolling restart of all pods in a deployment every time the associated ConfigMap changes. Include steps to modify the deployment to achieve this functionality and validate it.

See the Solution below.

Solution:

1. Create or update the ConfigMap:

   kubectl create configmap app-config --from-literal=key=value

2. Update the Deployment to include an annotation referencing the ConfigMap:

   spec:
     template:
       metadata:
         annotations:
           checksum/config: "{{ .Values.configHash }}"
       spec:
         containers:
         - name: app-container
           image: nginx
           volumeMounts:
           - mountPath: /config
             name: config-volume
         volumes:
         - name: config-volume
           configMap:
             name: app-config

3. Generate a hash of the ConfigMap data and patch it into the pod-template annotation to trigger a rollout:

   CONFIG_HASH=$(kubectl get configmap app-config -o json | jq -r '.data' | sha256sum | cut -d' ' -f1)
   kubectl patch deployment <deployment-name> -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$CONFIG_HASH\"}}}}}"

4. Verify the rolling update:

   kubectl rollout status deployment/<deployment-name>

Explanation: Because the annotation lives in the Deployment's pod template, changing its value whenever the ConfigMap changes causes OpenShift to roll out new pods, ensuring the updated configuration is applied.
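The hashing step can be exercised without a cluster. This sketch assumes the ConfigMap holds the single pair key=value from step 1; it computes the checksum locally and prints the patch payload that would be passed to kubectl patch (the kubectl call itself needs a live cluster, so it is only echoed here):

```shell
#!/bin/sh
# Deterministic hash of the ConfigMap data: any change to the data changes the hash,
# which changes the pod-template annotation, which triggers a rollout.
CONFIG_HASH=$(printf 'key=value' | sha256sum | cut -d' ' -f1)

# The strategic-merge-patch payload for the deployment's pod template.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$CONFIG_HASH\"}}}}}"
echo "$PATCH"
```

Running the same pipeline against kubectl get configmap app-config -o json (as in step 3) keeps the hash in sync with whatever the ConfigMap actually contains.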

Question 35 🔥

Secure OpenShift by enforcing the use of Secrets for storing sensitive data, such as database credentials. Create a Secret, mount it in a pod, and validate its functionality.

See the Solution below.

Solution:

1. Create a Secret:

   kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=securepass

2. Update the pod spec to use the Secret:

   spec:
     containers:
     - name: app-container
       image: nginx
       env:
       - name: DB_USER
         valueFrom:
           secretKeyRef:
             name: db-credentials
             key: username
       - name: DB_PASS
         valueFrom:
           secretKeyRef:
             name: db-credentials
             key: password

3. Apply the pod YAML:

   kubectl apply -f pod.yaml

4. Verify the pod environment variables:

   kubectl exec <pod-name> -- env | grep DB

Explanation: Secrets protect sensitive information, such as credentials, by decoupling it from application code and configuration files, improving security and maintainability.

Use OpenShift to deploy a blue-green application update strategy to minimize downtime during a version upgrade. Include steps to configure and validate the deployment.

See the Solution below.

Solution:

1. Deploy version 1 of the application:

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: app-v1
   spec:
     replicas: 2
     selector:
       matchLabels:
         app: blue-green
     template:
       metadata:
         labels:
           app: blue-green
           version: v1
       spec:
         containers:
         - name: app-container
           image: app-image:v1

2. Create a Service to route traffic:

   apiVersion: v1
   kind: Service
   metadata:
     name: app-service
   spec:
     selector:
       app: blue-green
       version: v1
     ports:
     - protocol: TCP
       port: 80

3. Deploy version 2 of the application:

   kubectl apply -f app-v2.yaml

4. Update the Service selector to version: v2 to switch traffic over.

Explanation: Blue-green deployment separates the production (blue) and updated (green) environments, allowing safe and controlled version transitions with minimal downtime.
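The traffic switchover in step 4 of the blue-green solution can be done declaratively by re-applying the Service with the updated selector. A sketch of the patched Service, reusing the names from the solution above (only the version label changes, so the cutover is a single field edit):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: blue-green
    version: v2        # was v1; flipping this label routes all traffic to the green pods
  ports:
  - protocol: TCP
    port: 80
```

The same change can be made in place with a one-line patch, e.g. kubectl patch service app-service -p '{"spec":{"selector":{"app":"blue-green","version":"v2"}}}'; if the new version misbehaves, setting the label back to v1 rolls traffic back just as quickly.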

Question 36 🔥

   kubectl get pods

4. Test the application by exposing the Deployment:

   kubectl expose deployment nginx-deployment --type=NodePort --port=80
   kubectl get svc

5. Use the NodePort and cluster IP to confirm that the application is serving requests.

Explanation: Deployments provide a scalable and declarative way to manage applications. YAML manifests keep the configuration consistent, while NodePort services expose the application for testing. Verifying the replicas ensures that the application is running as expected and is resilient.

Your team requires an application to load specific configuration data dynamically at runtime. Create a ConfigMap to hold key-value pairs for application settings, and update an existing Deployment to use this ConfigMap. Provide a complete YAML definition for both the ConfigMap and the updated Deployment, and demonstrate how to validate that the configuration is applied correctly.

See the Solution below.

Solution:

1. Create a ConfigMap YAML file named app-config.yaml:

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: app-config
   data:
     APP_ENV: production
     APP_DEBUG: "false"

2. Apply the ConfigMap using:

   kubectl apply -f app-config.yaml

3. Update the Deployment YAML to reference the ConfigMap:

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: app-deployment
   spec:
     replicas: 1
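The Deployment manifest in step 3 is cut off above. A minimal sketch of one common way to finish it, injecting the ConfigMap keys as environment variables via envFrom (the selector labels, container name, and image below are illustrative assumptions, not part of the original solution):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo              # illustrative label
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app-container  # illustrative name
        image: nginx         # illustrative image
        envFrom:
        - configMapRef:
            name: app-config # exposes APP_ENV and APP_DEBUG as env vars
```

To validate, check the environment inside a running pod, e.g. kubectl exec <pod-name> -- env | grep APP_, which should show APP_ENV=production and APP_DEBUG=false.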


© 2024 Exam Prepare, Inc. All Rights Reserved.