ceph mon stat
Explanation: Adding MON nodes enhances cluster resiliency by maintaining quorum and availability during failures.

Add a new OSD node to an existing Red Hat Ceph Storage cluster. See the solution below.
Solution:
1. Install the Ceph OSD package:
yum install ceph-osd -y
2. Initialize the disk:
ceph-volume lvm create --data /dev/sdb
3. Verify the OSD addition:
ceph osd tree
Explanation: Adding OSD nodes increases cluster capacity and improves performance by distributing data across additional devices.

Change the CRUSH location of an existing OSD in the Ceph cluster. See the solution below.
Solution:
1. View the current CRUSH map:
ceph osd tree
2. Change the CRUSH location (the OSD's CRUSH weight must be supplied):
ceph osd crush set <osd-id> <weight> root=default rack=new-rack
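To confirm the relocation took effect, the updated hierarchy can be checked from the cluster map. A minimal verification sketch, assuming a hypothetical osd.3 and that the new-rack bucket already exists in the CRUSH tree:
# Show the hierarchy and confirm the OSD now appears under new-rack (hypothetical osd.3)
ceph osd tree | grep -A 5 new-rack
# Report the CRUSH location currently recorded for the OSD
ceph osd find 3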
Unmap multiple RBD images from a server and ensure they are properly released. See the solution below.
Solution:
1. List mapped RBD images:
rbd showmapped
2. Unmap the images:
rbd unmap /dev/rbd/my-pool/image1
rbd unmap /dev/rbd/my-pool/image2
3. Verify they are unmapped:
lsblk | grep rbd
Explanation: Unmapping RBD images ensures they are detached from the server, making them available for other operations or deletion.

Create an RBD image with a specific data pool for optimized storage performance and verify the configuration. See the solution below.
Solution:
1. Create a data pool:
ceph osd pool create data-pool 128
2. Create an RBD image using the data pool:
rbd create my-rbd --size 10240 --pool my-pool --data-pool data-pool
3. Verify the image configuration:
rbd info my-pool/my-rbd
Explanation: Using a separate data pool optimizes performance by isolating metadata and data for better storage management.
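A common variation backs the image with an erasure-coded data pool to save capacity. The sketch below is only an illustration; the pool names ec-data-pool, my-pool, and my-rbd-ec are assumptions, and EC overwrites require BlueStore-backed OSDs:
# Create an erasure-coded pool and allow RBD to overwrite objects in it
ceph osd pool create ec-data-pool 128 128 erasure
ceph osd pool set ec-data-pool allow_ec_overwrites true
ceph osd pool application enable ec-data-pool rbd
# Keep image metadata in the replicated pool, place data in the EC pool
rbd create my-rbd-ec --size 10240 --pool my-pool --data-pool ec-data-pool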
Configure RBD mirroring in image mode for a single image and ensure synchronization with a remote cluster. See the solution below.
Solution:
1. Set the pool's mirroring mode to image and enable mirroring for the image:
rbd mirror pool enable my-pool image
rbd mirror image enable my-pool/my-image
2. Add the remote cluster as a peer (peers are configured at the pool level):
rbd mirror pool peer add my-pool client.admin@remote-cluster
3. Check the mirroring status:
rbd mirror image status my-pool/my-image
Explanation: Image mode mirroring provides granular control by synchronizing only selected images with remote clusters.

Set up and test automatic snapshot deletion for snapshots older than 5 days. See the solution below.
Solution:
1. Enable snapshot scheduling:
ceph mgr module enable rbd_support
2. Add a snapshot schedule with retention:
rbd snap-schedule add my-pool/my-image --interval 12h --retention 5
3. Verify the snapshot schedule:
rbd snap-schedule status my-pool/my-image
Explanation: Automatic snapshot deletion ensures efficient storage management by removing old snapshots based on retention policies.
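If scheduler-based retention is not available in a given release, it can be approximated with ordinary snapshot commands. A rough fallback sketch, run daily from cron, assuming the image my-pool/my-image and date-stamped snapshot names of the form auto-YYYY-MM-DD (all assumptions):
#!/bin/bash
# Hypothetical fallback: take a date-stamped snapshot and prune ones older than 5 days
IMAGE="my-pool/my-image"
TODAY=$(date +%F)
CUTOFF=$(date -d "5 days ago" +%F)
rbd snap create "${IMAGE}@auto-${TODAY}"
# rbd snap ls prints SNAPID NAME SIZE ...; field 2 is the snapshot name
for SNAP in $(rbd snap ls "${IMAGE}" | awk '$2 ~ /^auto-/ {print $2}'); do
    SNAP_DATE=${SNAP#auto-}
    # ISO dates compare correctly as strings
    if [[ "${SNAP_DATE}" < "${CUTOFF}" ]]; then
        rbd snap rm "${IMAGE}@${SNAP}"
    fi
done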
Create an RBD clone from a snapshot, flatten it for independence, and verify the flattened clone. See the solution below.
Solution:
1. Create a snapshot:
rbd snap create my-pool/my-image@base-snapshot
2. Clone the snapshot:
rbd clone my-pool/my-image@base-snapshot my-pool/my-clone
3. Flatten the clone:
rbd flatten my-pool/my-clone
4. Verify the clone:
rbd info my-pool/my-clone
Explanation: Flattening removes the dependency on the parent image, making the clone fully independent for standalone use.
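Depending on the clone format in use, the base snapshot may first need protecting (rbd snap protect my-pool/my-image@base-snapshot). A small inspection sketch, reusing the names above, that shows the parent link before flattening and its absence afterwards:
# Before flattening, the clone reports its parent image and snapshot
rbd info my-pool/my-clone | grep parent
# List clones still attached to the base snapshot
rbd children my-pool/my-image@base-snapshot
# After rbd flatten, no parent line remains
rbd info my-pool/my-clone | grep parent || echo "no parent: clone is independent"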
Resize an RBD image online and verify the filesystem expansion without unmounting the image. See the solution below.
Solution:
1. Resize the RBD image:
rbd resize --size 20480 my-pool/my-image
2. Expand the filesystem online:
xfs_growfs /mnt/rbd
3. Verify the new size:
df -h | grep /mnt/rbd
Explanation: Online resizing ensures applications continue to access the storage without disruption while accommodating growth.

Export an RBD image, compress it using xz, and verify its integrity after decompression. See the solution below.
Solution:
1. Export the RBD image:
rbd export my-pool/my-image /backup/my-image.raw
2. Compress the image with xz:
xz /backup/my-image.raw
3. Decompress and verify:
xz -d /backup/my-image.raw.xz
rbd import /backup/my-image.raw my-pool/restored-image
Explanation: Using advanced compression methods like xz reduces storage space for backups while preserving data integrity.

Configure and test automatic trimming of unused space in an RBD image. See the solution below.
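Unused space in an RBD-backed filesystem is commonly reclaimed with discard/trim operations. A rough sketch only, assuming the image is mapped with discard support and mounted at /mnt/rbd, and reusing the hypothetical my-pool/my-image name:
# Ask the filesystem to discard unused blocks so RADOS objects can be released
# (systemd's fstrim.timer can run this periodically for automation)
fstrim -v /mnt/rbd
# Optionally reclaim fully zeroed extents inside the image itself
rbd sparsify my-pool/my-image
# Check how much space the image actually consumes
rbd du my-pool/my-image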
3. Apply the new CRUSH map:
crushtool -c crush.txt -o crush.map
ceph osd setcrushmap -i crush.map
Explanation: CRUSH maps define how data is distributed across OSDs. Customizing these maps allows fine-grained control over data placement.

Set up Ceph storage with authentication enabled and verify that clients require keys to access the cluster. See the solution below.
Solution:
1. Require secure authentication by disabling insecure global_id reclaim:
ceph config set mon auth_allow_insecure_global_id_reclaim false
2. Generate a client key:
ceph auth get-or-create client.admin
3. Verify client access using the key:
ceph -s --keyring /etc/ceph/ceph.client.admin.keyring
Explanation: Enabling authentication adds a layer of security, ensuring only authorized clients can interact with the cluster.

Deploy Ceph storage and configure BlueStore as the OSD backend for optimal performance. See the solution below.
Solution:
1. Update the playbook with BlueStore settings:
osd_objectstore: bluestore
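After deployment, the backend actually in use can be confirmed per OSD. A brief check sketch, assuming a hypothetical osd.0:
# Report the object store backend recorded for one OSD (hypothetical osd.0)
ceph osd metadata 0 | grep osd_objectstore
# Summarize the backend in use across all OSDs
ceph osd count-metadata osd_objectstore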