
Overview

Xloud Block Storage connects to a broad range of enterprise storage platforms through a pluggable driver architecture. Each backend is configured as a named section in cinder.conf, registered with the scheduler, and exposed to tenants as one or more volume types. Multiple backends can run simultaneously, enabling tiered storage, vendor-specific pools, and failover configurations within a single environment.
Administrator Access Required: This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Administrator credentials with the admin role
  • Network reachability from all volume service nodes to the storage array
  • Vendor-specific driver packages installed on the volume service nodes
  • Authentication credentials (username, password, API endpoint) for each backend

Supported Backends

| Backend | Protocol | Use Case | HA |
|---|---|---|---|
| Ceph RBD | RBD | Production: native to XAVS, zero-copy clones | Yes |
| NetApp ONTAP | iSCSI / NFS / FC | Enterprise NAS/SAN with ONTAP arrays | Yes |
| Pure Storage FlashArray | iSCSI / FC / NVMe-oF | All-flash arrays, high-IOPS workloads | Yes |
| Dell PowerStore | iSCSI / FC / NVMe-oF | Dell enterprise all-flash and hybrid arrays | Yes |
| HPE 3PAR / Primera | iSCSI / FC | HPE enterprise SAN arrays | Yes |
| Hitachi VSP | iSCSI / FC | Hitachi enterprise storage systems | Yes |
| IBM FlashSystem | iSCSI / FC / NVMe-oF | IBM all-flash enterprise arrays | Yes |
| NFS | NFS | Shared NAS appliances, legacy integrations | Depends on server |
| LVM | iSCSI | Development and single-node testing only | No |

Multi-Backend Configuration

Cinder supports multiple backends in a single cinder.conf. Each backend is an independent section. The [DEFAULT] section lists all enabled backends by name.
cinder.conf: multi-backend example
[DEFAULT]
enabled_backends = ceph-ssd,netapp-nas,pure-flash

[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>

[netapp-nas]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-nas
netapp_storage_family = ontap_cluster
netapp_transport_type = https
netapp_server_hostname = 10.0.10.50
netapp_server_port = 443
netapp_login = admin
netapp_password = <password>
netapp_vserver = svm0

[pure-flash]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
volume_backend_name = pure-flash
san_ip = 10.0.10.60
pure_api_token = <api-token>
In XAVS deployments, backend configuration is managed through XDeploy globals. The cinder.conf file is generated from Ansible templates and should not be edited directly on the host. Use xavs-ansible deploy --tags cinder to apply changes.

Backend-Specific Configuration Examples

Ceph RBD is the native XAVS production backend. It supports thin provisioning by default and enables zero-copy clones between the image pool and volume pool for fast instance launches.
[ceph-ssd] section
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring
rbd_secret_uuid = <libvirt-secret-uuid>
rbd_cluster_name = ceph
rados_connect_timeout = 5
NetApp ONTAP supports iSCSI, NFS, and Fibre Channel protocols. The netapp_vserver parameter specifies the storage virtual machine (SVM) on the array.
[netapp-iscsi] section
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-iscsi
netapp_storage_family = ontap_cluster
netapp_transport_type = https
netapp_server_hostname = 10.0.10.50
netapp_server_port = 443
netapp_login = admin
netapp_password = <password>
netapp_vserver = svm0
netapp_storage_protocol = iscsi
Pure Storage uses a REST API token for authentication. Both iSCSI and Fibre Channel drivers are available.
[pure-flash] section (iSCSI)
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
volume_backend_name = pure-flash
san_ip = 10.0.10.60
pure_api_token = <api-token>
use_multipath_for_image_xfer = true
[pure-flash-fc] section (Fibre Channel)
volume_driver = cinder.volume.drivers.pure.PureFCDriver
volume_backend_name = pure-flash-fc
san_ip = 10.0.10.60
pure_api_token = <api-token>
Dell PowerStore supports iSCSI, NVMe-oF, and Fibre Channel. The management IP and credentials are required.
[dell-powerstore] section
volume_driver = cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver
volume_backend_name = dell-powerstore
san_ip = 10.0.10.70
san_login = admin
san_password = <password>
powerstore_appliances = Appliance-1
storage_protocol = iSCSI
HPE 3PAR and Primera share the same driver. The CPG (Common Provisioning Group) name maps to a storage pool on the array.
[hpe-3par] section
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
volume_backend_name = hpe-3par
san_ip = 10.0.10.80
san_login = 3parrc
san_password = <password>
hpe3par_api_url = https://10.0.10.80:8080/api/v1
hpe3par_username = 3parrc
hpe3par_password = <password>
hpe3par_cpg = SSD_r6
IBM FlashSystem uses the Storwize/SVC driver. The san_clustername must match the system name on the array.
[ibm-flash] section
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
volume_backend_name = ibm-flash
san_ip = 10.0.10.90
san_login = superuser
san_password = <password>
storwize_svc_volpool_name = Pool0
san_clustername = IBM_Flash_1

Register a Backend and Create a Volume Type

After configuring a backend, create a volume type that maps to it. Tenants select the volume type when provisioning volumes to route requests to a specific backend.

Verify backend registration

Navigate to Admin → Volume → Volume Services and confirm the volume service host for the backend shows state Up and status Enabled.
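The same check can be made from the CLI; each configured backend appears as a separate host@backend entry in the service list:

```shell
# List cinder-volume services; each backend shows up as <host>@<backend>.
openstack volume service list --service cinder-volume
# Expect State "up" and Status "enabled" for every configured backend.
```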

Create a volume type

Navigate to Admin → Volume → Volume Types and click Create Volume Type. Enter a name that communicates the backend to users, e.g., SSD-Flash or Enterprise-NAS.

Set the backend name extra spec

After creating the type, click View Extra Specs and add:
| Key | Value |
|---|---|
| volume_backend_name | Must match the volume_backend_name in cinder.conf exactly |
Click Create to save the extra spec.
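The same mapping can be created from the CLI. The type name SSD-Flash and backend name ceph-ssd below follow the examples on this page:

```shell
# Create a volume type and pin it to a backend via the
# volume_backend_name extra spec (must match cinder.conf exactly).
openstack volume type create SSD-Flash
openstack volume type set --property volume_backend_name=ceph-ssd SSD-Flash
# Confirm the extra spec was applied.
openstack volume type show SSD-Flash -c properties
```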

Validate

Create a test volume using the new volume type and confirm it provisions successfully.
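From the CLI, assuming the SSD-Flash volume type created earlier:

```shell
# Provision a 1 GiB test volume against the new type.
openstack volume create --type SSD-Flash --size 1 backend-test
# Status should move from "creating" to "available" within a few seconds.
openstack volume show backend-test -c status -c type
# Clean up once verified.
openstack volume delete backend-test
```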
Volume type mapped: tenants can now select this backend when creating volumes.

Failover Between Backends

When a backend becomes unavailable, volumes can be migrated to an alternative backend using the volume migration API. This is an admin operation and results in data movement.
Migrate a volume to a different backend
openstack volume migrate \
  --host <cinder-volume-host>@<target-backend>#<pool> \
  <volume-id>
Check migration status
openstack volume show <volume-id> -c migration_status
Volume migration copies all data from the source backend to the destination. Migration time depends on volume size and network throughput. The volume remains available (online migration) but may experience reduced I/O performance during the transfer.
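Progress can be polled from the CLI; a minimal sketch (the success and error values of migration_status follow Cinder's convention):

```shell
# Poll migration_status until the migration finishes.
# Replace <volume-id> with the volume being migrated.
while true; do
  status=$(openstack volume show <volume-id> -f value -c migration_status)
  echo "migration_status: ${status:-none}"
  case "${status}" in
    success) break ;;                             # migration complete
    error)   echo "migration failed" >&2; exit 1 ;;
  esac
  sleep 30
done
```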
For backends that support replication (such as Pure Storage and NetApp ONTAP), use the replication failover API instead of data migration:
Failover to replication target
cinder failover-host <cinder-volume-host> \
  --backend_id <replication-target-backend-id>

Troubleshooting

Backend does not appear in the volume service list

Cause: The volume service failed to initialize the driver, or the backend configuration section is missing from cinder.conf.

Resolution:
  • Verify the backend name is listed in enabled_backends in [DEFAULT]
  • Check volume service logs for driver initialization errors
  • Confirm network connectivity from the volume service node to the storage array management IP
Check volume service status
openstack volume service list
Volume creation fails with "No valid backend was found"

Cause: No backend matches the scheduler filters for the requested volume type, or the backend is reporting no free capacity.

Resolution:
  • Verify the volume_backend_name extra spec on the volume type matches the backend name exactly (case-sensitive)
  • Check backend capacity: openstack volume backend pool list --long
  • Confirm the volume service is in state up and status enabled
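The declared backend names can also be extracted straight from cinder.conf for comparison against the extra spec. A minimal sketch against a sample file (on a real node, run the awk line against /etc/cinder/cinder.conf):

```shell
# Build a small cinder.conf-style sample, then pull out every
# volume_backend_name for comparison with the volume type's
# extra spec value.
cat > /tmp/cinder.conf.sample <<'EOF'
[DEFAULT]
enabled_backends = ceph-ssd,netapp-nas

[ceph-ssd]
volume_backend_name = ceph-ssd

[netapp-nas]
volume_backend_name = netapp-nas
EOF

awk -F' = ' '/^volume_backend_name/ {print $2}' /tmp/cinder.conf.sample
# Prints: ceph-ssd
#         netapp-nas
```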
Driver cannot authenticate to the storage array

Cause: Incorrect credentials or the management IP is unreachable.

Resolution:
  • Verify the san_ip, san_login, and san_password (or equivalent) values in cinder.conf
  • Test network connectivity: curl -k https://<san_ip>/api/v1 from the volume service node
  • Check if a firewall or ACL is blocking access from the volume service node

Next Steps

Volume Types Admin

Create and manage volume types that expose backends to tenants

Thin Provisioning

Configure thin and thick provisioning per backend

Storage Tiers

Configure multi-tier storage pools across different hardware classes

Backup Backends

Configure backup destinations for volume data protection