       containerNames:
       - app-container
       from:
         kind: ImageStreamTag
         name: nginx-stream:stable
2. Apply the DeploymentConfig:
   oc apply -f deploymentconfig.yaml
3. Push a new image to the ImageStream and validate:
   oc tag nginx:1.22 nginx-stream:stable
   oc get pods
Explanation: Image triggers automate deployments when a new image version is available, ensuring the application stays up to date.

Manually update an application by tagging a new image in an ImageStream. Validate the deployment. See the Solution below.
Solution:
1. Tag a new image:
   oc tag nginx:1.22 nginx-stream:stable
2. Monitor the deployment:
   oc get pods
3. Validate the new version:
   curl http://<app-url>
Explanation: Manually tagging images in ImageStreams allows precise control over application updates and versioning.
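For reference, the ImageChange trigger fragment in the first solution above normally sits inside a complete DeploymentConfig. A minimal sketch is shown below; resource names such as my-app and nginx-stream:stable are placeholders, and the empty image field is filled in by the trigger once the ImageStreamTag resolves.

   apiVersion: apps.openshift.io/v1
   kind: DeploymentConfig
   metadata:
     name: my-app
   spec:
     replicas: 1
     selector:
       app: my-app
     template:
       metadata:
         labels:
           app: my-app
       spec:
         containers:
         - name: app-container
           image: ' '            # resolved by the ImageChange trigger
           ports:
           - containerPort: 80
     triggers:
     - type: ConfigChange
     - type: ImageChange
       imageChangeParams:
         automatic: true
         containerNames:
         - app-container
         from:
           kind: ImageStreamTag
           name: nginx-stream:stable

With automatic set to true, a new image landing on nginx-stream:stable rolls out without any manual step; setting it to false records the new image but waits for an explicit rollout.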
Roll back to a specific deployment revision and validate the rollback. See the Solution below.
Solution:
1. List previous revisions:
   kubectl rollout history deployment/my-app
2. Roll back to a specific revision:
   kubectl rollout undo deployment/my-app --to-revision=2
3. Validate the rollback:
   kubectl get pods
Explanation: Rolling back to specific revisions provides flexibility to restore stable versions while debugging failed updates.

Set up an ImageStream to track the latest image from a remote registry automatically. Validate the tracking. See the Solution below.
Solution:
1. Create an ImageStream YAML:
   spec:
     tags:
     - name: latest
       from:
         kind: DockerImage
         name: registry.example.com/app:latest
       importPolicy:
         scheduled: true
2. Apply the ImageStream:
   oc apply -f imagestream.yaml
3. Validate automatic updates:
   oc get is app-stream
Explanation: Automatic tracking via ImageStreams ensures that the latest image from a registry is consistently available for deployments.
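As with the DeploymentConfig earlier, the spec fragment in step 1 belongs inside a full ImageStream manifest. A minimal sketch follows; the name app-stream and the registry URL are placeholders.

   apiVersion: image.openshift.io/v1
   kind: ImageStream
   metadata:
     name: app-stream
   spec:
     tags:
     - name: latest
       from:
         kind: DockerImage
         name: registry.example.com/app:latest
       importPolicy:
         scheduled: true   # re-import the tag periodically instead of only once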
Simulate a failed deployment due to a broken image and troubleshoot using deployment logs. See the Solution below.
Solution:
1. Deploy a broken image:
   kubectl set image deployment/my-app app-container=nginx:broken
2. Check deployment logs:
   kubectl logs deployment/my-app
3. Fix the deployment and roll back:
   kubectl rollout undo deployment/my-app
Explanation: Logs are essential for diagnosing failed deployments, enabling quick fixes and minimizing downtime.

Use a combination of DeploymentConfig and ImageStream to implement a blue-green deployment. Validate the process. See the Solution below.
Solution:
1. Create two DeploymentConfigs (blue and green):
   oc apply -f blue-deploymentconfig.yaml
   oc apply -f green-deploymentconfig.yaml
2. Tag the ImageStream to switch traffic:
   oc tag nginx:1.22 nginx-stream:stable
3. Validate the deployment:
   curl http://<app-url>
Explanation: Blue-green deployments ensure seamless transitions between versions by maintaining separate environments for live and testing versions.
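In many blue-green setups the cutover is also done at the Service or Route layer rather than by retagging alone. A rough sketch using a Service selector patch is shown below; the service name my-app and the version labels are assumptions, not part of the solution above.

   # Point the production Service at the green pods by changing its selector
   oc patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'
   # Confirm which pods now back the Service
   oc get endpoints my-app

Switching back to blue is the same patch with version set to blue, which is what makes rollback in a blue-green model nearly instantaneous.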
4. Reapply the PVC and verify binding:
   kubectl get pvc
Explanation: PVC issues often stem from mismatches between PVC requests and available PV configurations. Resolving these ensures seamless storage provisioning for applications.

Audit all changes made to the resources in a namespace over the past day. Enable audit logging and provide commands to analyze logs for specific events. See the Solution below.
Solution:
1. Enable audit logging by editing the API server configuration:
   kubectl edit cm kube-apiserver -n kube-system
   Add audit logging flags:
   --audit-log-path=/var/log/audit.log
   --audit-log-maxage=10
   --audit-log-maxbackup=5
2. Restart the API server and analyze logs:
   tail -n 100 /var/log/audit.log | grep <namespace>
Explanation: Audit logs provide a comprehensive record of cluster activity, enabling administrators to trace and analyze changes for security and compliance purposes.

Configure OpenShift logging to send application logs to an external Elasticsearch cluster. Include steps for Fluentd configuration and validation. See the Solution below.
Solution:
1. Edit the Fluentd ConfigMap in OpenShift:
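The exact ConfigMap name and layout depend on the logging stack version in use, but the relevant piece of the Fluentd configuration is an output (match) stanza that points at the external cluster. A hedged sketch, with es.example.com and the credentials as placeholders:

   <match kubernetes.**>
     @type elasticsearch
     host es.example.com
     port 9200
     scheme https
     ssl_verify true
     user fluentd
     password <password>
     logstash_format true
   </match>

Validation typically consists of restarting the Fluentd pods and confirming that new indices appear on the external Elasticsearch side.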
Set up a DeploymentConfig with both manual and image-based triggers. Validate both trigger mechanisms. See the Solution below.
Solution:
1. Define a DeploymentConfig with triggers:
   spec:
     triggers:
     - type: ConfigChange
     - type: ImageChange
2. Apply the DeploymentConfig:
   oc apply -f deploymentconfig.yaml
3. Test the manual trigger:
   oc rollout latest dc/my-app
4. Test the image-based trigger:
   oc tag nginx:1.23 nginx-stream:stable
Explanation: Combining manual and automated triggers provides flexibility in managing deployments and reacting to updates.

Configure a DeploymentConfig with an ImageStream that tracks an image digest instead of a tag.
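One way to approach digest tracking is to reference the image by its immutable sha256 digest rather than a mutable tag. A minimal sketch of an ImageStream tag doing this follows; the registry URL and digest value are placeholders.

   spec:
     tags:
     - name: pinned
       from:
         kind: DockerImage
         name: registry.example.com/app@sha256:<digest>
       referencePolicy:
         type: Source

A DeploymentConfig ImageChange trigger can then point its from reference at the pinned ImageStreamTag as usual, guaranteeing that every rollout uses exactly the same image bytes.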
Solution:
1. Create a Headless Service YAML file:
   apiVersion: v1
   kind: Service
   metadata:
     name: headless-service
   spec:
     clusterIP: None
     selector:
       app: my-stateful-app
     ports:
     - protocol: TCP
       port: 80
2. Apply the Service and verify DNS resolution:
   kubectl apply -f headless-service.yaml
   kubectl exec <pod-name> -- nslookup headless-service
Explanation: Headless Services provide direct access to individual pod IPs, which is essential for StatefulSet workloads.

Deploy an application using a Kubernetes CronJob that runs every 5 minutes. Validate its execution. See the Solution below.
Solution:
1. Create a CronJob YAML file cronjob.yaml:
   apiVersion: batch/v1
   kind: CronJob
   metadata:
     name: my-cronjob
   spec:
     schedule: "*/5 * * * *"
     jobTemplate:
       spec:
         template:
           spec:
             containers:
             - name: my-container