
Overview

Multi-tier storage routes different workload classes to appropriate device media through volume types in the block storage service and pool configurations in XSDS. Tier configuration is managed through XDeploy’s Storage Tiers panel, which auto-generates the required pool, CRUSH rule, and block storage backend configuration.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
XDeploy GUI — Storage tier detection, CRUSH rule creation, and pool configuration can be performed through the XSDS Storage interface under the Storage Tiers tab. Click Detect Tiers to automatically discover device classes. No manual file editing required.
Prerequisites
  • Administrator credentials with the admin role
  • Device class CRUSH rules created for each tier (see CRUSH Maps)
  • Access to XDeploy (https://connect.<your-domain>)

Tier Overview

| Tier | Device Class | Pool | Volume Type | Use Case |
|------|--------------|------|-------------|----------|
| NVMe | nvme | volumes-nvme | ceph-nvme | Databases, OLTP, latency-critical |
| SSD | ssd | volumes (default) | ceph-ssd | General-purpose, web, application |
| HDD | hdd | volumes-hdd | ceph-hdd | Backup, archive, big data |

Configuring Tiers via XDeploy

Open Storage Tiers panel

Navigate to XDeploy → Configuration → Storage Tiers. The panel displays the device classes detected in the cluster and lets you define which tiers to expose as volume types.

Add a new tier

Click Add Tier and configure:
| Field | Description |
|-------|-------------|
| Tier Name | Display name (e.g., NVMe Performance) |
| Device Class | Physical media class: nvme, ssd, or hdd |
| Pool Name | Name of the storage pool (e.g., volumes-nvme) |
| Volume Type | Cinder volume type name exposed to users (e.g., ceph-nvme) |
| Default | Whether this tier is the default for volumes created without an explicit type |
Set the fastest available tier as the default volume type, so that users who don't explicitly specify a type receive the best-performing tier.

Apply configuration

Click Apply. XDeploy automatically:
  1. Creates the pool with the correct CRUSH rule for the device class
  2. Registers the Cinder backend pointing to the new pool
  3. Creates the volume type with appropriate extra specs
  4. Updates the _50_ceph_tiers.yml configuration file
The new volume type appears in the openstack volume type list output and is available to tenants.
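For reference, the generated tiers file might look roughly like the following. This is an illustrative sketch only; the actual schema of _50_ceph_tiers.yml is defined by XDeploy, and every key shown here is an assumption:

```yaml
# Hypothetical sketch of _50_ceph_tiers.yml (keys are assumptions,
# not the documented schema)
storage_tiers:
  - name: "NVMe Performance"
    device_class: nvme
    pool: volumes-nvme
    volume_type: ceph-nvme
    default: true
```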

Manual Tier Configuration

For environments where XDeploy is not managing tier configuration, configure tiers manually.
Create CRUSH rule for NVMe
# Replicated rule constrained to the nvme device class; failure domain = host
ceph osd crush rule create-replicated \
  replicated_rule_nvme default host nvme
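Before creating the pool, you can confirm the rule exists and targets the nvme device class (exact dump output varies by Ceph release):

```shell
# List CRUSH rules; replicated_rule_nvme should appear by name
ceph osd crush rule ls

# Dump the rule and check that its take step references the nvme device class
ceph osd crush rule dump replicated_rule_nvme
```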
Create NVMe-backed pool
ceph osd pool create volumes-nvme 128 128 replicated  # 128 PGs / PGPs
ceph osd pool set volumes-nvme size 3                 # 3 replicas
ceph osd pool set volumes-nvme min_size 2             # serve I/O with 2 replicas
ceph osd pool set volumes-nvme crush_rule replicated_rule_nvme
ceph osd pool application enable volumes-nvme rbd     # tag pool for RBD use
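With the pool in place, the block storage service still needs a backend definition and a matching volume type. A sketch of the remaining manual steps, assuming a Cinder backend named ceph-nvme and an RBD client user named cinder (adapt names, the keyring user, and config paths to your deployment):

```shell
# cinder.conf backend stanza (illustrative; user and paths are assumptions):
#
#   [ceph-nvme]
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   volume_backend_name = ceph-nvme
#   rbd_pool = volumes-nvme
#   rbd_user = cinder
#   rbd_ceph_conf = /etc/ceph/ceph.conf
#
# After restarting cinder-volume, create the volume type and bind it
# to the backend via the volume_backend_name extra spec:
openstack volume type create ceph-nvme
openstack volume type set --property volume_backend_name=ceph-nvme ceph-nvme
```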

Verifying Tier Configuration

List all volume types
openstack volume type list
Show volume type extra specs
openstack volume type show ceph-nvme -c extra_specs
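To exercise a tier end to end, you can create a small test volume with the new type and confirm its backing image lands in the expected pool (volume name and size below are arbitrary):

```shell
# Create a 1 GiB volume using the NVMe-backed type
openstack volume create --type ceph-nvme --size 1 tier-test

# The backing RBD image should appear in the volumes-nvme pool
rbd ls volumes-nvme

# Clean up the test volume
openstack volume delete tier-test
```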
The volume_backend_name extra spec should match the Cinder backend name.

Next Steps

CRUSH Maps

Configure device class rules that route each tier to the correct physical media

Pool Management

Manage the pools backing each storage tier

Capacity Planning

Monitor per-tier utilization and plan expansion

Block Storage — Volume Types

Advanced Cinder volume type configuration for QoS and backend selection