Overview
Xloud Block Storage connects to a broad range of enterprise storage platforms through a pluggable driver architecture. Each backend is configured as a named section in cinder.conf, registered with the scheduler, and exposed to tenants as one or more volume types. Multiple backends can run simultaneously, enabling tiered storage, vendor-specific pools, and failover configurations within a single environment.
Prerequisites
- Administrator credentials with the admin role
- The storage array must be network-accessible from all volume service nodes
- Vendor-specific driver packages installed on the volume service nodes
- Authentication credentials (username, password, API endpoint) for each backend
Supported Backends
| Backend | Protocol | Use Case | HA |
|---|---|---|---|
| Ceph RBD | RBD | Production: native to XAVS, zero-copy clones | Yes |
| NetApp ONTAP | iSCSI / NFS / FC | Enterprise NAS/SAN with ONTAP arrays | Yes |
| Pure Storage FlashArray | iSCSI / FC / NVMe-oF | All-flash arrays, high IOPS workloads | Yes |
| Dell PowerStore | iSCSI / FC / NVMe-oF | Dell enterprise all-flash and hybrid arrays | Yes |
| HPE 3PAR / Primera | iSCSI / FC | HPE enterprise SAN arrays | Yes |
| Hitachi VSP | iSCSI / FC | Hitachi enterprise storage systems | Yes |
| IBM FlashSystem | iSCSI / FC / NVMe-oF | IBM all-flash enterprise arrays | Yes |
| NFS | NFS | Shared NAS appliances, legacy integrations | Depends on server |
| LVM | iSCSI | Development and single-node testing only | No |
Multi-Backend Configuration
Cinder supports multiple backends in a single cinder.conf. Each backend is an independent section. The [DEFAULT] section lists all enabled backends by name.
cinder.conf (multi-backend example)
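A minimal sketch of a multi-backend layout. The backend names (ceph-ssd, netapp-iscsi) and option values are illustrative assumptions; substitute the sections for your own arrays:

```ini
[DEFAULT]
# Comma-separated list of enabled backend section names
enabled_backends = ceph-ssd,netapp-iscsi
# Optional: the type used when a tenant does not specify one
default_volume_type = ssd

[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf

[netapp-iscsi]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-iscsi
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
```

The volume_backend_name value is what volume types match against; it does not have to equal the section name, but keeping them identical avoids confusion.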
Backend-Specific Configuration Examples
Ceph RBD
Ceph RBD is the native XAVS production backend. It supports thin provisioning by default and enables zero-copy clones between the image pool and volume pool for fast instance launches.
[ceph-ssd] section
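A sketch of the backend section, assuming a pool named volumes and a Ceph client user named cinder; adjust to your cluster:

```ini
[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# UUID of the libvirt secret holding the cinder user's key
rbd_secret_uuid = <libvirt secret UUID>
# Thin provisioning is the RBD default; zero-copy clones from the
# image pool additionally require Glance to expose direct image URLs.
```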
NetApp ONTAP
NetApp ONTAP supports iSCSI, NFS, and Fibre Channel protocols. The netapp_vserver parameter specifies the storage virtual machine (SVM) on the array.
[netapp-iscsi] section
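A sketch of an iSCSI backend section; the hostname, credentials, and SVM name are placeholders:

```ini
[netapp-iscsi]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-iscsi
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = <management address>
netapp_server_port = 443
netapp_transport_type = https
netapp_login = <API user>
netapp_password = <password>
# SVM that owns the volumes Cinder will provision
netapp_vserver = <SVM name>
```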
Pure Storage FlashArray
Pure Storage uses a REST API token for authentication. Both iSCSI and Fibre Channel drivers are available.
[pure-flash] section (iSCSI)
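A sketch of the iSCSI section; the management address and token are placeholders:

```ini
[pure-flash]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
volume_backend_name = pure-flash
san_ip = <array management address>
# REST API token generated on the FlashArray
pure_api_token = <API token>
```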
[pure-flash-fc] section (Fibre Channel)
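The Fibre Channel variant differs only in the driver class; a sketch with placeholder values:

```ini
[pure-flash-fc]
volume_driver = cinder.volume.drivers.pure.PureFCDriver
volume_backend_name = pure-flash-fc
san_ip = <array management address>
pure_api_token = <API token>
```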
Dell PowerStore
Dell PowerStore supports iSCSI, NVMe-oF, and Fibre Channel. The management IP and credentials are required.
[dell-powerstore] section
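A sketch of the backend section, assuming the iSCSI protocol; address and credentials are placeholders:

```ini
[dell-powerstore]
volume_driver = cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver
volume_backend_name = dell-powerstore
san_ip = <management address>
san_login = <admin user>
san_password = <password>
# Protocol selection (iSCSI or FC); NVMe-oF has its own option
# in recent driver releases
storage_protocol = iSCSI
```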
HPE 3PAR / Primera
HPE 3PAR and Primera share the same driver. The CPG (Common Provisioning Group) name maps to a storage pool on the array.
[hpe-3par] section
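A sketch of an iSCSI section. The WSAPI URL, credentials, and CPG name are placeholders; the san_* values are the SSH credentials the driver also requires:

```ini
[hpe-3par]
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
volume_backend_name = hpe-3par
hpe3par_api_url = https://<array>:8080/api/v1
hpe3par_username = <WSAPI user>
hpe3par_password = <password>
# CPG maps to a storage pool on the array
hpe3par_cpg = <CPG name>
san_ip = <array address>
san_login = <SSH user>
san_password = <SSH password>
```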
IBM FlashSystem
IBM FlashSystem uses the Storwize/SVC driver. The san_clustername must match the system name on the array.
[ibm-flash] section
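A sketch of an iSCSI section; the address, credentials, and pool name are placeholders:

```ini
[ibm-flash]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
volume_backend_name = ibm-flash
san_ip = <management address>
san_login = <user>
san_password = <password>
# Must match the system name configured on the array
san_clustername = <system name>
# Storage pool the driver provisions volumes from
storwize_svc_volpool_name = <pool name>
```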
Register a Backend and Create a Volume Type
After configuring a backend, create a volume type that maps to it. Tenants select the volume type when provisioning volumes to route requests to a specific backend.
Dashboard
Verify backend registration
Navigate to Admin → Volume → Volume Services and confirm the volume service host for the backend shows state Up and status Enabled.
Create a volume type
Navigate to Admin → Volume → Volume Types and click Create Volume Type. Enter a name that communicates the backend to users, e.g. SSD-Flash or Enterprise-NAS.
Set the backend name extra spec
After creating the type, click View Extra Specs and add:
Click Create to save the extra spec.
| Key | Value |
|---|---|
| volume_backend_name | Must match the volume_backend_name in cinder.conf exactly |
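The same procedure from the CLI; a sketch assuming a backend named ceph-ssd and a type named SSD-Flash:

```shell
# Confirm the backend's volume service is up and enabled
openstack volume service list --service cinder-volume

# Create the volume type and bind it to the backend
openstack volume type create SSD-Flash
openstack volume type set SSD-Flash --property volume_backend_name=ceph-ssd

# Tenants then route to the backend by selecting the type
openstack volume create --type SSD-Flash --size 10 my-volume
```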
Failover Between Backends
When a backend becomes unavailable, volumes can be migrated to an alternative backend using the volume migration API. This is an admin operation and results in data movement.
Migrate a volume to a different backend
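A sketch of the migration call; host strings follow the node@backend#pool convention and the values here are placeholders:

```shell
# Admin-only: copies the volume's data to the target backend
openstack volume migrate --host <node>@<backend>#<pool> <volume-id>
```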
Check migration status
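The volume's migration_status field tracks progress; a sketch with a placeholder volume ID:

```shell
# migration_status progresses from migrating to success (or error)
openstack volume show <volume-id> -c status -c migration_status
```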
Failover to replication target
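For backends with replication configured, the cinder client can fail the whole host over to its replication target; a sketch with placeholder identifiers:

```shell
# Admin-only: switches the backend to serve from its replication target
cinder failover-host <node>@<backend> --backend_id <replication-target-id>
```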
Troubleshooting
Backend does not appear in pool list
Cause: The volume service failed to initialize the driver, or the backend configuration section is missing from cinder.conf.
Resolution:
- Verify the backend name is listed in enabled_backends in [DEFAULT]
- Check volume service logs for driver initialization errors
- Confirm network connectivity from the volume service node to the storage array management IP
Check volume service status
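A quick check that the backend's service has registered with the scheduler:

```shell
# Each enabled backend should appear as node@backend with state "up"
openstack volume service list --service cinder-volume
```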
Volume creation fails with 'No valid host was found'
Cause: No backend matches the scheduler filters for the requested volume type, or the backend is reporting no free capacity.
Resolution:
- Verify the volume_backend_name extra spec on the volume type matches the backend name exactly (case-sensitive)
- Check backend capacity: openstack volume backend pool list --long
- Confirm the volume service is in state up and status enabled
Authentication error connecting to storage array
Cause: Incorrect credentials or the management IP is unreachable.
Resolution:
- Verify the san_ip, san_login, and san_password (or equivalent) values in cinder.conf
- Test network connectivity: curl -k https://<san_ip>/api/v1 from the volume service node
- Check if a firewall or ACL is blocking access from the volume service node
Next Steps
Volume Types Admin
Create and manage volume types that expose backends to tenants
Thin Provisioning
Configure thin and thick provisioning per backend
Storage Tiers
Configure multi-tier storage pools across different hardware classes
Backup Backends
Configure backup destinations for volume data protection