
Overview

A storage backend is the combination of a driver and a physical storage system that serves Xloud Block Storage volumes. Backends are configured at deploy time through XDeploy and registered with the scheduler. Each backend must have a unique volume_backend_name that maps to at least one volume type, making it accessible to users.
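At the configuration level, each backend corresponds to a named section in the Block Storage service's cinder.conf, which XDeploy renders from templates. A minimal sketch of that structure (section and backend names here are illustrative, not generated output):

```ini
# Illustrative cinder.conf fragment; XDeploy generates the real file.
[DEFAULT]
# Each name listed here must match a backend section below.
enabled_backends = rbd-ssd

[rbd-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# A volume type's volume_backend_name extra spec must match this
# value for the scheduler to place volumes of that type here.
volume_backend_name = rbd-ssd
```

A volume type is then mapped to this backend by setting its volume_backend_name extra spec to the same value, which is what makes the backend accessible to users.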
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Administrator credentials with the admin role
  • Access to XDeploy for infrastructure-level configuration
  • The storage system (distributed storage cluster, LVM volume group, or NFS server) must be provisioned before configuring the backend

Supported Backends

| Backend | Driver | Protocol | Use Case | HA |
| --- | --- | --- | --- | --- |
| Distributed Storage (RBD) | cinder.volume.drivers.rbd.RBDDriver | RBD | Production (full replication, zero-copy clones) | Yes |
| LVM | cinder.volume.drivers.lvm.LVMVolumeDriver | iSCSI | Development / single-node testing | No |
| NFS | cinder.volume.drivers.nfs.NfsDriver | NFS | Shared NAS appliances, legacy NFS integration | Depends on server |
| NetApp ONTAP | cinder.volume.drivers.netapp.common.NetAppDriver | iSCSI / NFS / FC | Enterprise NAS/SAN (FlexVol, thin provisioning, dedup) | Yes |
| Pure Storage FlashArray | cinder.volume.drivers.pure.PureISCSIDriver | iSCSI / FC / NVMe-oF | All-flash arrays, high IOPS, thin by default | Yes |
| Dell PowerStore | cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver | iSCSI / FC / NVMe-oF | Dell enterprise all-flash and hybrid arrays | Yes |
| HPE 3PAR / Primera | cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver | iSCSI / FC | HPE enterprise SAN (CPG-based provisioning) | Yes |
| Hitachi VSP | cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver | iSCSI / FC | Hitachi enterprise storage systems | Yes |
| IBM FlashSystem | cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver | iSCSI / FC / NVMe-oF | IBM all-flash and Storwize arrays | Yes |
| iSCSI (generic) | cinder.volume.drivers.iscsi.ISCSIDriver | iSCSI | Generic iSCSI targets not covered by vendor drivers | Depends on array |
| Fibre Channel (generic) | cinder.volume.drivers.fibre_channel.FibreChannelDriver | FC | Generic Fibre Channel targets | Depends on array |

Configure a Distributed Storage (RBD) Backend

The distributed storage driver is the recommended backend for all production deployments. It uses RADOS Block Devices backed by a Xloud Distributed Storage cluster.

Open Block Storage configuration

Log in to XDeploy and navigate to Configuration → Block Storage.

Select the distributed storage driver

Select Distributed Storage (RBD) from the Backend Driver dropdown.
Ensure the storage pool is created and the authentication keyring is available on all volume service nodes before completing this step.

Configure RBD parameters

| Parameter | Description | Example |
| --- | --- | --- |
| Pool Name | Name of the storage pool for volumes | volumes |
| Auth Username | Storage authentication user | xloud-volume |
| Keyring Path | Path to the keyring file on volume service nodes | /etc/ceph/ceph.client.xloud-volume.keyring |
| Cluster Name | Storage cluster identifier | ceph |
| Backend Name | Unique name for this backend | rbd-ssd |
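For reference, the parameters above correspond to a cinder.conf section along these lines. This is an illustrative sketch, not the exact output XDeploy generates:

```ini
[rbd-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-ssd
rbd_pool = volumes
rbd_user = xloud-volume
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_cluster_name = ceph
# The keyring at /etc/ceph/ceph.client.xloud-volume.keyring must be
# present and readable by the volume service on every volume node.
```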

Apply and deploy

Click Save Configuration, then click Deploy Block Storage. XDeploy restarts the volume service and validates backend connectivity.
Backend configured — volume service restarted and backend registered with scheduler.

Configure Enterprise Storage Backends

For NetApp, Pure Storage, Dell, HPE, Hitachi, and IBM backends, configuration is applied through XDeploy globals and Ansible. The cinder.conf sections are generated from templates. See External Storage Integration for full configuration examples, driver parameters, and volume type mapping for each enterprise backend.
Enterprise backend drivers require vendor-specific Python packages installed on the volume service nodes. XDeploy manages driver installation for supported backends during deployment.
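As one illustration of what the generated templates produce, a NetApp ONTAP iSCSI section might look roughly like the following. All values are placeholders, and the full parameter list lives in External Storage Integration:

```ini
[netapp-ontap]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-ontap
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = ontap.example.com
netapp_login = xloud-admin
netapp_password = <secret>
netapp_vserver = xloud-svm
```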

Configure an LVM Backend

LVM backends are suitable for development and single-node testing environments only.

Select LVM driver

In XDeploy, navigate to Configuration → Block Storage and select LVM from the Backend Driver dropdown.

Configure LVM parameters

| Parameter | Description | Example |
| --- | --- | --- |
| Volume Group | LVM volume group name | xloud-volumes |
| Target Helper | iSCSI helper binary | lioadm |
| Backend Name | Unique name for this backend | lvm-local |
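These parameters map to a cinder.conf section roughly as follows (illustrative sketch; XDeploy generates the actual file):

```ini
[lvm-local]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-local
# The LVM volume group must already exist on the node.
volume_group = xloud-volumes
target_helper = lioadm
target_protocol = iscsi
```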

Apply

Click Save Configuration and Deploy Block Storage.
LVM provides no replication or redundancy. Do not use LVM backends for production workloads where data durability is required.

Backend Health Verification

After configuration, verify that the backend is registered and reporting capacity:
List backends with capacity details
openstack volume backend pool list --long
Key columns to verify:
| Column | Expected Value |
| --- | --- |
| free_capacity_gb | Greater than 0, confirming the backend is reporting capacity |
| total_capacity_gb | Matches the expected pool size |
| driver_version | Populated, confirming the driver initialized correctly |
| storage_protocol | rbd, iSCSI, or NFS depending on backend type |

Enable or Disable a Backend Service

Disabling a volume service prevents the scheduler from sending new volume operations to that backend. Existing volumes on the backend are unaffected. Re-enable the service to restore normal scheduling.
Disable a volume service (maintenance)
openstack volume service set \
  --disable \
  --disable-reason "Scheduled maintenance" \
  <volume-service-host> \
  cinder-volume
Re-enable a volume service
openstack volume service set \
  --enable \
  <volume-service-host> \
  cinder-volume

Troubleshooting

Backend not appearing in the scheduler

Cause: The volume service has not registered the backend with the scheduler, or the service is not running.

Resolution:
Check service status
openstack volume service list
Look for the volume service on the affected storage node. If its state is down, check the service logs via XDeploy for driver initialization errors.
Backend reports zero free capacity

Cause: The storage pool is full, or the backend driver cannot reach the storage cluster to report capacity.

Resolution:
  • Verify distributed storage cluster health: check cluster status from the storage administration interface
  • Verify the authentication keyring is present on the volume service node
  • Verify the pool name matches the configured backend pool
  • Increase pool capacity or add OSDs to the storage cluster

Next Steps

External Storage Integration

Full configuration guide for NetApp, Pure Storage, Dell, HPE, and IBM backends

Volume Types & QoS

Map volume types to backends and configure QoS policies

Thin Provisioning

Configure thin and thick provisioning with overcommit ratios

Storage Tiers

Configure multi-tier storage with NVMe, SSD, and HDD backends

Backup Backends

Configure Ceph, Swift, S3, or NFS as the volume backup destination

Migration

Migrate volumes between backends for rebalancing or hardware retirement