See the solution below.
Solution:
1. List mapped RBD images:
rbd showmapped
2. Unmap the images:
rbd unmap /dev/rbd/my-pool/image1
rbd unmap /dev/rbd/my-pool/image2
3. Remove the images from the pool:
rbd rm my-pool/image1
rbd rm my-pool/image2
Explanation:
Properly unmapping and removing RBD images ensures that resources are reclaimed and no residual mappings exist.

Enable and monitor two-way snapshot mirroring between two RBD pools in separate clusters.
See the solution below.
Solution:
1. Enable snapshot mirroring on both pools:
rbd mirror pool enable my-pool snapshot
rbd mirror pool enable remote-pool snapshot
2. Add peer clusters for bidirectional mirroring:
rbd mirror pool peer add my-pool client.admin@remote-cluster
rbd mirror pool peer add remote-pool client.admin@local-cluster
3. Monitor mirroring status:
rbd mirror pool status my-pool
Explanation:
Two-way snapshot mirroring synchronizes snapshots in both directions, ensuring high availability and disaster recovery.
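The image cleanup from the first task above can also be scripted against the librados/librbd Python bindings instead of the rbd CLI. The following is a minimal sketch, assuming python3-rados and python3-rbd are installed, /etc/ceph/ceph.conf and the admin keyring are in their default locations, and the pool and image names match the example; unmapping itself remains a kernel-side step done with rbd unmap on each client host.

# Remove RBD images from a pool through the librbd Python bindings.
# Assumes the images are already unmapped on every client (rbd unmap).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('my-pool')      # pool name from the example above
    try:
        rbd_inst = rbd.RBD()
        print(rbd_inst.list(ioctx))            # confirm which images still exist
        for name in ('image1', 'image2'):
            rbd_inst.remove(ioctx, name)       # same effect as: rbd rm my-pool/<name>
    finally:
        ioctx.close()
finally:
    cluster.shutdown()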
Deploy a basic RADOS gateway and verify its service status.
See the solution below.
Solution:
1. Install the RADOS gateway package:
yum install -y ceph-radosgw
2. Enable and start the RADOS gateway service:
systemctl enable --now ceph-radosgw@rgw.<hostname>
3. Verify the service status:
systemctl status ceph-radosgw@rgw.<hostname>
Explanation:
Deploying a RADOS gateway allows object storage access through REST APIs. Starting and verifying the service ensures its availability.

Deploy a multisite RADOS gateway setup with two zones: zone1 and zone2.
See the solution below.
Solution:
1. Create the zonegroup and zones:
radosgw-admin zonegroup create --rgw-zonegroup=multi-zone-group --master
radosgw-admin zone create --rgw-zone=zone1 --rgw-zonegroup=multi-zone-group --master
radosgw-admin zone create --rgw-zone=zone2 --rgw-zonegroup=multi-zone-group
2. Set the default zonegroup and zone:
radosgw-admin zonegroup default --rgw-zonegroup=multi-zone-group
radosgw-admin zone default --rgw-zone=zone1
3. Verify the zones:
radosgw-admin zonegroup list
radosgw-admin zone list
Explanation:
Multisite RADOS gateways support geo-distributed replication of object storage, ensuring high availability and disaster recovery.
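Beyond systemctl status, a quick functional check is to confirm the gateway answers HTTP requests. Below is a minimal sketch, assuming the requests library is available and the gateway listens on the default Beast/Civetweb port 7480 (adjust the host and port to match your rgw_frontends setting).

# Probe the RADOS gateway endpoint; a successful response proves the service
# is reachable over HTTP, independent of the systemd unit state.
import requests

RGW_ENDPOINT = 'http://<hostname>:7480'   # placeholder; replace with the gateway node

resp = requests.get(RGW_ENDPOINT, timeout=5)
print(resp.status_code)        # typically 200 for an anonymous request
print(resp.text[:200])         # usually an (empty) ListAllMyBucketsResult XML body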
Create a RADOS gateway user for S3 client commands and verify the credentials.
See the solution below.
Solution:
1. Create the S3 user:
radosgw-admin user create --uid=s3user --display-name="S3 User"
2. Retrieve the user credentials:
radosgw-admin user info --uid=s3user
Explanation:
Creating a user and retrieving their credentials ensures access to the S3 interface for uploading and downloading objects.

Upload an object to a RADOS gateway using the S3 API.
See the solution below.
Solution:
1. Export the user’s credentials:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
2. Use the S3 client to upload an object:
aws s3 cp /path/to/file s3://bucket-name/ --endpoint-url http://<rgw-host>:7480
Explanation:
Using the S3 API ensures compatibility with AWS-based tools and workflows for seamless object storage management.
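The same upload can be done programmatically. Below is a minimal sketch using boto3 against the gateway's S3 endpoint; the endpoint URL, bucket name, and object key are placeholders matching the example above, and boto3 is assumed to be installed.

# Upload a file to the RADOS gateway through the S3 API with boto3.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://<rgw-host>:7480',      # point at the gateway, not AWS
    aws_access_key_id='<access_key>',
    aws_secret_access_key='<secret_key>',
)

s3.create_bucket(Bucket='bucket-name')           # create the target bucket first
s3.upload_file('/path/to/file', 'bucket-name', 'file')
print(s3.list_objects_v2(Bucket='bucket-name').get('Contents', []))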
Download an object from a RADOS gateway using the S3 API.
See the solution below.
Solution:
1. Export the user’s credentials:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
2. Use the S3 client to download an object:
aws s3 cp s3://bucket-name/file /path/to/destination --endpoint-url http://<rgw-host>:7480
Explanation:
Downloading objects through the S3 API enables seamless access to stored data for applications and users.

Export S3 objects from a RADOS gateway using NFS.
See the solution below.
Solution:
1. Install the required NFS package:
yum install -y nfs-utils
2. Configure the RADOS gateway NFS export:
radosgw-admin nfs export create --uid=s3user --bucket=bucket-name
3. Verify the NFS export:
radosgw-admin nfs export list
Explanation:
Exporting S3 objects via NFS allows legacy systems to access object storage without requiring S3 API compatibility.
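For the download task above, the equivalent boto3 call is download_file. A minimal sketch, reusing the same hypothetical endpoint and credential placeholders as the upload example:

# Download an object from the RADOS gateway through the S3 API with boto3.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://<rgw-host>:7480',      # gateway endpoint, not AWS
    aws_access_key_id='<access_key>',
    aws_secret_access_key='<secret_key>',
)

print(s3.head_object(Bucket='bucket-name', Key='file')['ContentLength'])   # confirm the object exists
s3.download_file('bucket-name', 'file', '/path/to/destination')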
Provide object storage compatible with the Swift API and verify the configuration.
See the solution below.
Solution:
1. Enable the Swift API in the configuration file:
rgw_enable_swift = true
2. Restart the RADOS gateway:
systemctl restart ceph-radosgw@rgw.<hostname>
3. Verify the Swift endpoint:
radosgw-admin zonegroup get
Explanation:
The Swift API enables OpenStack integration and supports object storage access for Swift-compatible clients.

Create a RADOS gateway user for Swift commands and verify their access.
See the solution below.
Solution:
1. Create the Swift user:
radosgw-admin user create --uid=swiftuser --display-name="Swift User"
2. Enable Swift subuser access:
radosgw-admin subuser create --uid=swiftuser --subuser=swiftuser:swift --access=full
3. Retrieve the Swift user’s secret key:
radosgw-admin user info --uid=swiftuser
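Access for the Swift subuser created above can also be verified programmatically. Below is a minimal sketch using python-swiftclient, assuming the package is installed, the gateway exposes the default /auth/1.0 Swift auth endpoint on port 7480, and the secret key is the value listed under swift_keys in the radosgw-admin user info output; the host, container, and object names are placeholders.

# Verify Swift access for swiftuser:swift against the RADOS gateway.
from swiftclient.client import Connection

conn = Connection(
    authurl='http://<rgw-host>:7480/auth/1.0',   # assumed default Swift auth endpoint
    user='swiftuser:swift',
    key='<swift_secret_key>',
)

conn.put_container('demo-container')             # create a test container
conn.put_object('demo-container', 'hello.txt', contents=b'hello from swift')
headers, body = conn.get_object('demo-container', 'hello.txt')
print(body.decode())                             # prints: hello from swift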
3. Apply the new CRUSH map:
crushtool -c crush.txt -o crush.map
ceph osd setcrushmap -i crush.map
Explanation:
CRUSH maps define how data is distributed across OSDs. Customizing these maps allows fine-grained control over data placement.

Set up Ceph storage with authentication enabled and verify that clients require keys to access the cluster.
See the solution below.
Solution:
1. Enable authentication in the Ceph configuration file:
ceph config set mon auth_allow_insecure_global_id_reclaim false
2. Generate a client key:
ceph auth get-or-create client.admin
3. Verify client access using the key:
ceph -s --keyring /etc/ceph/ceph.client.admin.keyring
Explanation:
Enabling authentication adds a layer of security, ensuring only authorized clients can interact with the cluster.

Deploy Ceph storage and configure BlueStore as the OSD backend for optimal performance.
See the solution below.
Solution:
1. Update the playbook with BlueStore settings:
osd_objectstore: bluestore