Overview
A storage backend is the combination of a driver and a physical storage system that serves Xloud Block Storage volumes. Backends are configured at deploy time through XDeploy and registered with the scheduler. Each backend must have a unique `volume_backend_name` that maps to at least one volume type, making it accessible to users.
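As an illustration of the mapping between a backend and a volume type, an OpenStack-compatible client can pin a type to a backend name like this (the type name `fast-ssd` and backend name `rbd-ssd` are hypothetical examples):

```shell
# Create a volume type and pin it to the backend named "rbd-ssd"
openstack volume type create fast-ssd
openstack volume type set --property volume_backend_name=rbd-ssd fast-ssd
```

Volumes created with this type are then scheduled only onto the matching backend.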
Prerequisites
- Administrator credentials with the `admin` role
- Access to XDeploy for infrastructure-level configuration
- The storage system (distributed storage cluster, LVM volume group, or NFS server) must be provisioned before configuring the backend
Supported Backends
| Backend | Driver | Protocol | Use Case | HA |
|---|---|---|---|---|
| Distributed Storage (RBD) | cinder.volume.drivers.rbd.RBDDriver | RBD | Production — full replication, zero-copy clones | Yes |
| LVM | cinder.volume.drivers.lvm.LVMVolumeDriver | iSCSI | Development / single-node testing | No |
| NFS | cinder.volume.drivers.nfs.NfsDriver | NFS | Shared NAS appliances, legacy NFS integration | Depends on server |
| NetApp ONTAP | cinder.volume.drivers.netapp.common.NetAppDriver | iSCSI / NFS / FC | Enterprise NAS/SAN — FlexVol, thin provisioning, dedup | Yes |
| Pure Storage FlashArray | cinder.volume.drivers.pure.PureISCSIDriver | iSCSI / FC / NVMe-oF | All-flash arrays, high IOPS, thin by default | Yes |
| Dell PowerStore | cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver | iSCSI / FC / NVMe-oF | Dell enterprise all-flash and hybrid arrays | Yes |
| HPE 3PAR / Primera | cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver | iSCSI / FC | HPE enterprise SAN — CPG-based provisioning | Yes |
| Hitachi VSP | cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver | iSCSI / FC | Hitachi enterprise storage systems | Yes |
| IBM FlashSystem | cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver | iSCSI / FC / NVMe-oF | IBM all-flash and Storwize arrays | Yes |
| iSCSI (generic) | cinder.volume.drivers.iscsi.ISCSIDriver | iSCSI | Generic iSCSI targets not covered by vendor drivers | Depends on array |
| Fibre Channel (generic) | cinder.volume.drivers.fibre_channel.FibreChannelDriver | FC | Generic Fibre Channel targets | Depends on array |
Configure a Distributed Storage (RBD) Backend
The distributed storage driver is the recommended backend for all production deployments. It uses RADOS Block Devices (RBD) backed by a Xloud Distributed Storage cluster.
Select the distributed storage driver
In XDeploy, navigate to Configuration → Block Storage and select Distributed Storage (RBD) from the Backend Driver dropdown.
Configure RBD parameters
| Parameter | Description | Example |
|---|---|---|
| Pool Name | Name of the storage pool for volumes | volumes |
| Auth Username | Storage authentication user | xloud-volume |
| Keyring Path | Path to the keyring file on volume service nodes | /etc/ceph/ceph.client.xloud-volume.keyring |
| Cluster Name | Storage cluster identifier | ceph |
| Backend Name | Unique name for this backend | rbd-ssd |
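XDeploy generates the backend section of `cinder.conf` from these values. A sketch of the result, using the example values from the table above (exact option names may vary by release):

```ini
# Illustrative sketch only — this section is generated by XDeploy
[rbd-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-ssd
rbd_pool = volumes
rbd_user = xloud-volume
rbd_keyring_conf = /etc/ceph/ceph.client.xloud-volume.keyring
rbd_cluster_name = ceph
```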
Configure Enterprise Storage Backends
For NetApp, Pure Storage, Dell, HPE, Hitachi, and IBM backends, configuration is applied through XDeploy globals and Ansible. The `cinder.conf` sections are generated from templates. See External Storage Integration for full configuration examples, driver parameters, and volume type mapping for each enterprise backend.
Enterprise backend drivers require vendor-specific Python packages installed on the volume service nodes. XDeploy manages driver installation for supported backends during deployment.
Configure an LVM Backend
LVM backends are suitable for development and single-node testing environments only.
Select LVM driver
In XDeploy, navigate to Configuration → Block Storage and select LVM from the Backend Driver dropdown.
Configure LVM parameters
| Parameter | Description | Example |
|---|---|---|
| Volume Group | LVM volume group name | xloud-volumes |
| Target Helper | iSCSI helper binary | lioadm |
| Backend Name | Unique name for this backend | lvm-local |
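As with RBD, XDeploy generates the `cinder.conf` section from these values. A sketch using the example values above (option names may vary by release):

```ini
# Illustrative sketch only — this section is generated by XDeploy
[lvm-local]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-local
volume_group = xloud-volumes
target_helper = lioadm
target_protocol = iscsi
```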
Backend Health Verification
After configuration, verify that the backend is registered and reporting capacity.

List backends with capacity details
| Column | Expected Value |
|---|---|
| free_capacity_gb | Greater than 0 — confirms backend is reporting capacity |
| total_capacity_gb | Matches expected pool size |
| driver_version | Populated — confirms driver initialized correctly |
| storage_protocol | rbd, iSCSI, or NFS, depending on backend type |
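With an OpenStack-compatible client, this listing can be produced as follows (the exact client syntax in your deployment may differ):

```shell
# List backend pools with capacity and driver details
openstack volume backend pool list --long

# Show per-service state as a cross-check
openstack volume service list
```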
Enable or Disable a Backend Service
Disable a volume service (maintenance)
Re-enable a volume service
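Assuming an OpenStack-compatible client, the two operations above map to commands like these (the host name `storage-node-1` is a placeholder):

```shell
# Take a volume service out of scheduling before maintenance
openstack volume service set --disable --disable-reason "maintenance" \
  storage-node-1 cinder-volume

# Return it to service afterwards
openstack volume service set --enable storage-node-1 cinder-volume
```

Disabling a service stops the scheduler from placing new volumes on it; existing volumes remain attached and usable.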
Troubleshooting
Backend pool does not appear in list
Cause: The volume service has not registered the backend with the scheduler, or the service is not running.

Resolution:

Check service status

Look for the volume service on the affected storage node. If its state is down, check service logs via XDeploy for driver initialization errors.
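With an OpenStack-compatible client, the service status check looks like this:

```shell
# Check the state of volume services; a "down" state indicates a problem
openstack volume service list --service cinder-volume
```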
Backend reporting zero free capacity
Cause: The storage pool is full, or the backend driver cannot reach the storage cluster to report capacity.

Resolution:
- Verify distributed storage cluster health: check cluster status from the storage administration interface
- Verify the authentication keyring is present on the volume service node
- Verify the pool name matches the configured backend pool
- Increase pool capacity or add OSDs to the storage cluster
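For a Ceph-based distributed storage backend, the first three checks above can be run as follows (the keyring path is the example from the RBD table and may differ in your deployment):

```shell
# Verify cluster health and pool utilization (run where the ceph client is configured)
ceph -s
ceph df

# Confirm the authentication keyring is present on the volume service node
ls -l /etc/ceph/ceph.client.xloud-volume.keyring

# Confirm the configured pool exists
ceph osd pool ls | grep volumes
```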
Next Steps
External Storage Integration
Full configuration guide for NetApp, Pure Storage, Dell, HPE, and IBM backends
Volume Types & QoS
Map volume types to backends and configure QoS policies
Thin Provisioning
Configure thin and thick provisioning with overcommit ratios
Storage Tiers
Configure multi-tier storage with NVMe, SSD, and HDD backends
Backup Backends
Configure Ceph, Swift, S3, or NFS as the volume backup destination
Migration
Migrate volumes between backends for rebalancing or hardware retirement