Explanation: Adding disks to OSD nodes increases storage capacity and balances data distribution within the cluster.

Change the CRUSH map rule for a pool to ensure replication across racks. See the solution below.
Solution:
1. Create a replicated CRUSH rule with rack as the failure domain:
   ceph osd crush rule create-replicated <rule_name> default rack
2. Apply the new rule to the pool:
   ceph osd pool set <pool_name> crush_rule <rule_name>
3. Verify the change:
   ceph osd pool get <pool_name> crush_rule
Explanation: CRUSH rules control data placement. Replicating across racks improves fault tolerance, since a full rack can fail without data loss.

Increase the PG count of an existing pool to improve performance. See the solution below.
Solution:
1. Check the current PG count:
   ceph osd pool get <pool_name> pg_num
2. Set the new PG count:
   ceph osd pool set <pool_name> pg_num 256
Explanation: Increasing PGs improves data distribution across OSDs, but should be done with caution to avoid excessive data movement during rebalancing.
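Before raising pg_num, it helps to sanity-check the target. The sketch below applies the commonly cited heuristic of roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two; the 100 PGs/OSD target is an assumption of this sketch, not a value from the steps above.

```python
# Estimate a pg_num for a pool. The ~100 PGs/OSD target is a common
# rule of thumb (an assumption here), and pg_num is rounded up to a
# power of two, which Ceph prefers for even distribution.

def suggest_pg_num(num_osds: int, replica_count: int, pgs_per_osd: int = 100) -> int:
    """Return the next power of two at or above the heuristic estimate."""
    raw = max(1, num_osds * pgs_per_osd // replica_count)
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# e.g. 9 OSDs with 3x replication: 9 * 100 / 3 = 300, rounded up to 512
print(suggest_pg_num(9, 3))
```

A cluster with 9 OSDs and 3x replication would thus get pg_num 512; step 2 above would use that value in place of 256.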
Enable bucket notifications for an SQS queue in RADOS Gateway and test an event. See the solution below.
Solution:
1. Create a notification configuration (notification.json):
   {
     "QueueConfigurations": [
       {
         "QueueArn": "arn:aws:sqs::account-id:queue-name",
         "Events": ["s3:ObjectCreated:*"]
       }
     ]
   }
2. Apply the notification configuration:
   aws s3api put-bucket-notification-configuration --bucket bucket-name --notification-configuration file://notification.json
3. Upload an object and verify that the event is delivered to the queue.
Explanation: Bucket notifications enable automated workflows by sending events to SQS queues when objects are created.

Configure and test bucket policies to restrict access based on User-Agent headers. See the solution below.
Solution:
1. Create a bucket policy (policy.json):
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Deny",
         "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::bucket-name/*",
         "Condition": {
           "StringNotLike": {
             "aws:UserAgent": "Mozilla/*"
           }
         }
       }
     ]
   }
2. Apply the policy:
   aws s3api put-bucket-policy --bucket bucket-name --policy file://policy.json
3. Test access using different User-Agent values.
Explanation: Restricting access based on User-Agent headers provides fine-grained control over resource access.

Configure NFS exports for a bucket in RADOS Gateway and test object access via NFS. See the solution below.
Solution:
1. Create an NFS export:
   radosgw-admin nfs export create --uid=s3user --bucket=bucket-name
2. Mount the export:
   mount -t nfs <rgw-host>:/bucket-name /mnt/nfs
3. Verify access by listing and reading objects in the NFS mount.
Explanation: NFS exports give legacy systems without S3 API support access to bucket data.
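The User-Agent policy above can be illustrated with a small matcher. StringNotLike uses the same * and ? wildcards as Python's fnmatch (fnmatch's [seq] ranges are an extra not present in AWS policy matching), so this is an approximation of the evaluation, not the real policy engine.

```python
# Emulate the Deny statement: s3:GetObject is denied when aws:UserAgent
# does NOT match the "Mozilla/*" wildcard pattern.
from fnmatch import fnmatchcase

def is_denied(user_agent: str, pattern: str = "Mozilla/*") -> bool:
    """True when the request would be denied by the StringNotLike condition."""
    return not fnmatchcase(user_agent, pattern)

print(is_denied("Mozilla/5.0 (X11; Linux x86_64)"))  # False -> request allowed
print(is_denied("curl/8.5.0"))                       # True  -> request denied
```

This mirrors the test in step 3: a browser-style User-Agent passes, while curl or an empty header is rejected.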
Enable encryption for Swift containers and test encrypted object uploads. See the solution below.
Solution:
1. Enable encryption in the RADOS Gateway configuration:
   rgw_enable_encryption = true
2. Upload an object to the container:
   swift upload my-container encrypted-file
3. Verify that the object is stored encrypted in the backend.
Explanation: Encryption secures data at rest, helping meet compliance requirements and protecting sensitive information.

Create a Swift user with temporary URL generation permissions and test a time-limited download link. See the solution below.
Solution:
1. Set a temporary URL key on the account:
   swift post -m "Temp-URL-Key:my-secret-key"
2. Generate a temporary URL valid for one hour:
   swift tempurl GET 3600 /v1/AUTH_account/container/object my-secret-key
3. Download the object through the generated URL and confirm the link stops working after it expires.
Explanation: Temporary URLs allow secure, time-limited access to objects without exposing permanent credentials.
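For reference, the signature that `swift tempurl` produces in step 2 is an HMAC-SHA1 over "<method>\n<expires>\n<path>" keyed with the account's Temp-URL-Key. The sketch below reproduces that construction; the path and key are the placeholders from the steps above, and a real link would be prefixed with the cluster's endpoint.

```python
# Sketch of the Swift TempURL signature: HMAC-SHA1 over
# "<method>\n<expires unix timestamp>\n<object path>" using the
# Temp-URL-Key set on the account.
import hmac
import time
from hashlib import sha1

def temp_url(path: str, key: str, ttl: int, method: str = "GET") -> str:
    expires = int(time.time()) + ttl
    body = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), body, sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

url = temp_url("/v1/AUTH_account/container/object", "my-secret-key", 3600)
print(url)
```

Because the expiry timestamp is part of the signed body, tampering with temp_url_expires invalidates the signature, which is why the link in step 3 dies after the hour is up.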
Enable and configure lifecycle rules to archive objects to a cold storage class after 60 days. See the solution below.
Solution:
1. Define a lifecycle policy (lifecycle.json):
   {
     "Rules": [
       {
         "ID": "ArchiveRule",
         "Status": "Enabled",
         "Transitions": [
           {
             "Days": 60,
             "StorageClass": "GLACIER"
           }
         ]
       }
     ]
   }
2. Apply the lifecycle policy:
   aws s3api put-bucket-lifecycle-configuration --bucket bucket-name --lifecycle-configuration file://lifecycle.json
3. Verify the rule:
   aws s3api get-bucket-lifecycle-configuration --bucket bucket-name
Explanation: Lifecycle rules automate object archival, reducing costs by transitioning objects to a colder storage tier.

Monitor and analyze bucket access logs in RADOS Gateway. See the solution below.
Solution:
1. Enable logging for the bucket:
   aws s3api put-bucket-logging --bucket bucket-name --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "log-bucket", "TargetPrefix": "logs/"}}'
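When lifecycle documents are managed in automation rather than edited by hand, it can help to generate and validate them before applying step 2. A minimal sketch, assuming only the rule shown above:

```python
# Build the lifecycle.json document from step 1 programmatically, so the
# transition age and storage class can be validated before it is applied
# with put-bucket-lifecycle-configuration. Purely illustrative.
import json

def lifecycle_policy(days: int, storage_class: str = "GLACIER") -> str:
    if days < 1:
        raise ValueError("transition age must be at least 1 day")
    policy = {
        "Rules": [
            {
                "ID": "ArchiveRule",
                "Status": "Enabled",
                "Transitions": [{"Days": days, "StorageClass": storage_class}],
            }
        ]
    }
    return json.dumps(policy, indent=2)

print(lifecycle_policy(60))  # contents for lifecycle.json
```

Writing the returned string to lifecycle.json yields exactly the document from step 1.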
3. Apply the new CRUSH map:
   crushtool -c crush.txt -o crush.map
   ceph osd setcrushmap -i crush.map
Explanation: CRUSH maps define how data is distributed across OSDs. Customizing them allows fine-grained control over data placement.

Set up Ceph storage with authentication enabled and verify that clients require keys to access the cluster. See the solution below.
Solution:
1. Enforce secure authentication in the Ceph configuration:
   ceph config set mon auth_allow_insecure_global_id_reclaim false
2. Generate a client key:
   ceph auth get-or-create client.admin
3. Verify client access using the key:
   ceph -s --keyring /etc/ceph/ceph.client.admin.keyring
Explanation: Enabling authentication adds a layer of security, ensuring only authorized clients can interact with the cluster.

Deploy Ceph storage and configure BlueStore as the OSD backend for optimal performance. See the solution below.
Solution:
1. Update the playbook with BlueStore settings:
   osd_objectstore: bluestore
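What a rack-level failure domain buys you can be shown with a toy placement function: it picks one OSD from each of several distinct racks, so no two copies of an object share a rack. This mimics the intent of a rack rule in the CRUSH map; it is not the CRUSH algorithm, and the topology below is hypothetical.

```python
# Toy placement with rack as the failure domain: each replica lands in a
# different rack, so losing any one rack leaves the other copies intact.
# Assumes replicas <= number of racks. Not real CRUSH.
import hashlib

RACKS = {  # hypothetical topology: rack name -> OSD ids
    "rack1": [0, 1, 2],
    "rack2": [3, 4, 5],
    "rack3": [6, 7, 8],
}

def place(obj_name: str, replicas: int = 3) -> list:
    digest = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    racks = sorted(RACKS)
    chosen = []
    for i in range(replicas):
        rack = racks[(digest + i) % len(racks)]  # a distinct rack per replica
        osds = RACKS[rack]
        chosen.append(osds[digest % len(osds)])
    return chosen

print(place("my-object"))  # three OSD ids, one per rack
```

The same deterministic hash-then-select idea is what lets Ceph clients compute placements locally instead of consulting a lookup table.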