Overview

The CRUSH map defines the hierarchical topology of the storage cluster — how nodes, racks, and rooms are structured, and how data is distributed across failure domains. Correct CRUSH configuration ensures data is spread across independent failure domains for maximum availability.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Administrator credentials with the admin role
  • SSH access to a cluster management node
  • Understanding of your physical infrastructure topology (host, rack, room layout)

CRUSH Hierarchy

The CRUSH hierarchy places storage devices into a tree of buckets. Data placement rules traverse this tree to select OSDs from distinct failure domains.

Viewing the CRUSH Map

View OSD tree with device classes
ceph osd tree
This shows the current hierarchy and the device class assigned to each OSD.
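For a complete view of the hierarchy, including bucket weights and rule definitions that `ceph osd tree` does not show, you can export and decompile the full CRUSH map. The file names below are arbitrary examples:

```shell
# Export the compiled (binary) CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# Decompile it into a human-readable text file
crushtool -d crushmap.bin -o crushmap.txt

# Inspect buckets, device classes, and rules
less crushmap.txt
```

The decompiled file is also the starting point if you ever need to hand-edit the map, recompile it with `crushtool -c`, and inject it back into the cluster.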

Device Class Rules

Device class CRUSH rules route data to a specific storage tier. Create one rule per device class to enable multi-tier storage.

Create a device class rule

Create SSD device class rule
ceph osd crush rule create-replicated \
  replicated_rule_ssd default host ssd
Create NVMe device class rule
ceph osd crush rule create-replicated \
  replicated_rule_nvme default host nvme
Create HDD device class rule
ceph osd crush rule create-replicated \
  replicated_rule_hdd default host hdd
Valid device classes: nvme, ssd, hdd.
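After creating the rules, you can confirm they were registered. A quick check, assuming the rule names used above:

```shell
# List all CRUSH rules by name; the output should include
# replicated_rule_ssd, replicated_rule_nvme, and replicated_rule_hdd
# alongside the default replicated_rule
ceph osd crush rule ls
```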

Assign rule to a pool

Assign CRUSH rule to pool
ceph osd pool set <POOL_NAME> crush_rule replicated_rule_ssd
In a single-device-class cluster (all SSD), changing a pool's CRUSH rule from replicated_rule to replicated_rule_ssd causes no data movement: both rules select the same set of OSDs, so data is already on the correct devices.

Verify rule assignment

Dump all CRUSH rules
ceph osd crush rule dump --format json | python3 -m json.tool
Show pool CRUSH rule
ceph osd pool get <POOL_NAME> crush_rule
The pool should report the new CRUSH rule name, and the OSD tree should show data placed on OSDs of the correct device class.
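To spot-check placement, `ceph osd map` shows which placement group and OSDs an object name would map to (the object does not need to exist). The object name below is an arbitrary example:

```shell
# Show the PG and acting OSD set for a hypothetical object in the pool
ceph osd map <POOL_NAME> testobj

# Cross-check the returned OSD IDs against their device class
ceph osd tree
```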

Managing OSD Device Classes

OSDs are automatically classified by device type at deployment. Verify or override classifications as needed.
List all OSD device classes
ceph osd crush class ls
List OSDs in a specific class
ceph osd crush class ls-osd ssd
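If an OSD was auto-classified incorrectly, its existing class must be removed before a new one can be set, since Ceph refuses to overwrite an assigned class. A sketch, using osd.12 as an example OSD ID:

```shell
# Remove the existing (incorrect) device class from the OSD
ceph osd crush rm-device-class osd.12

# Assign the correct class
ceph osd crush set-device-class nvme osd.12
```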

Failure Domain Configuration

Configure CRUSH rules to spread replicas across independent failure domains. The failure domain level determines how many simultaneous failures the cluster can survive without data loss.
Failure Domain | Survives                                        | Recommended For
host           | Any number of OSD failures on different hosts   | Small clusters (< 10 hosts)
rack           | Entire rack failures (power, networking)        | Medium clusters with physical racks
room           | Room-level failures (fire, flood)               | Large data center deployments
Create a rack-level failure domain rule
ceph osd crush rule create-replicated \
  replicated_rule_ssd_rack default rack ssd
For most production deployments, host-level failure domains provide the right balance of protection and OSD count requirements. Use rack-level failure domains only when you have at least 3 physical racks in your deployment.
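A rack-level rule can only place data if rack buckets exist in the hierarchy and contain the hosts. A sketch of building that topology, with example rack and host names that you would replace with your own:

```shell
# Create rack buckets and attach them under the default root
ceph osd crush add-bucket rack1 rack
ceph osd crush add-bucket rack2 rack
ceph osd crush add-bucket rack3 rack
ceph osd crush move rack1 root=default
ceph osd crush move rack2 root=default
ceph osd crush move rack3 root=default

# Move each host into the rack it physically occupies
ceph osd crush move host-a rack=rack1
ceph osd crush move host-b rack=rack2
ceph osd crush move host-c rack=rack3
```

Note that moving hosts between buckets changes data placement and will trigger rebalancing; plan this during a maintenance window on a populated cluster.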

Next Steps

Pool Management

Create pools and assign the CRUSH rules you’ve just configured

Storage Tiers

Wire device class rules to Cinder volume types for multi-tier storage

Cluster Management

Monitor cluster health and manage OSDs in your configured topology

Troubleshooting

Diagnose CRUSH-related issues including imbalanced data distribution