Overview

Xloud Block Storage supports both thin and thick provisioning models. Thin provisioning allocates storage capacity on demand — physical space is consumed only as data is written, not when the volume is created. Thick provisioning reserves the full volume capacity immediately at creation time. The provisioning behavior is controlled by the backend driver and the scheduler overcommit configuration.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.

Thin vs Thick Comparison

| Attribute | Thin Provisioning | Thick Provisioning |
|---|---|---|
| Space allocated at creation | No — space consumed as data is written | Yes — full size reserved immediately |
| Overcommit support | Yes — allocate more than physical capacity | No — bounded by actual available space |
| Performance | Comparable to thick once space is allocated | Predictable from the start |
| Risk | Out-of-space if usage exceeds physical capacity | No overcommit risk |
| Supported backends | Ceph RBD, LVM thin, NetApp, Pure Storage, HPE, Dell | LVM (default), some SAN arrays |
| Recommended for | Multi-tenant clouds, variable-usage workloads | Latency-sensitive databases, predictable workloads |
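Tenants can be steered toward one model or the other through volume types. A minimal sketch, assuming a hypothetical type name `thin-general` and a backend named `lvm-thin` (configured later on this page), using the `provisioning:type` extra spec that the Cinder capacity filter recognizes:

```shell
# Create a volume type that requests thin provisioning from the scheduler
openstack volume type create thin-general

# Pin it to a thin-capable backend and declare the provisioning mode
openstack volume type set thin-general \
  --property volume_backend_name=lvm-thin \
  --property provisioning:type=thin
```

Volumes created with `--type thin-general` will then only land on backends that report thin-provisioning support.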

Backend Provisioning Support

Ceph RBD images are sparse by design. When a volume is created, no physical storage is consumed until data is written. No configuration is required to enable thin provisioning on the Ceph backend.
cinder.conf — Ceph RBD backend
[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
Because Ceph RBD is thin by default, the max_over_subscription_ratio setting in the [DEFAULT] section controls how aggressively the scheduler overcommits capacity. The default value is 20.0 (20x overcommit).
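Because provisioned size and physical consumption diverge on a thin backend, it is worth checking both sides on the Ceph cluster itself. A sketch using standard Ceph tooling, assuming the pool name `volumes` from the `rbd_pool` setting above:

```shell
# Physical usage across pools, including the Cinder 'volumes' pool
ceph df

# Provisioned vs actually-used space per RBD image in the pool
rbd du -p volumes
```

`rbd du` reports a PROVISIONED and a USED column per image; the gap between the two is the overcommit headroom being consumed.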
LVM uses thick provisioning by default. To enable thin provisioning, create a thin pool in the volume group before configuring Cinder, then set lvm_type = thin in the backend section.

Step 1: Create the thin pool

Create LVM thin pool
pvcreate /dev/sdX
vgcreate cinder-volumes /dev/sdX
lvcreate --type thin-pool \
  --size 500G \
  --name thin-pool \
  cinder-volumes
Step 2: Configure Cinder
cinder.conf — LVM thin backend
[lvm-thin]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-thin
volume_group = cinder-volumes
lvm_type = thin
target_helper = lioadm
target_protocol = iscsi
Step 3: Apply the configuration
Deploy block storage configuration
xavs-ansible deploy --tags cinder
Step 4: Verify thin volumes are created
Check LVM thin volume allocation
lvs --options +lv_layout
Volumes created from this backend will show Twi in the Attr column, confirming thin layout.
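The thin pool's fill level should also be watched directly, since an exhausted pool stalls writes on every volume it backs. A sketch using standard LVM reporting, with the `cinder-volumes` group and `thin-pool` name created above:

```shell
# Show the fill level of the thin pool and its metadata volume
lvs -o lv_name,lv_size,data_percent,metadata_percent cinder-volumes
```

The `Data%` and `Meta%` columns show how full the pool is; alert well before either approaches 100%.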
NetApp ONTAP supports thin provisioning through FlexVol volumes. Set netapp_lun_space_reservation = disabled to enable thin provisioning on the LUN.
cinder.conf — NetApp thin provisioning
[netapp-thin]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp-thin
netapp_storage_family = ontap_cluster
netapp_transport_type = https
netapp_server_hostname = 10.0.10.50
netapp_login = admin
netapp_password = <password>
netapp_vserver = svm0
netapp_storage_protocol = iscsi
netapp_lun_space_reservation = disabled
Pure Storage FlashArray volumes are thin-provisioned by default. The array reports the provisioned size to the scheduler, not physical consumption. No additional configuration is needed.
cinder.conf — Pure Storage thin provisioning
[pure-flash]
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
volume_backend_name = pure-flash
san_ip = 10.0.10.60
pure_api_token = <api-token>
use_multipath_for_image_xfer = true

Scheduler Overcommit Configuration

The Cinder scheduler uses max_over_subscription_ratio to determine how much capacity to advertise to tenants above physical availability. This applies to backends that report thin_provisioning_support = True.
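Whether a given backend actually reports thin_provisioning_support = True can be checked from the pool statistics the scheduler receives. One way, using the legacy cinderclient (admin credentials required):

```shell
# List backend pools with full capability details, including
# thin_provisioning_support and max_over_subscription_ratio
cinder get-pools --detail
```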
cinder.conf — scheduler overcommit settings
[DEFAULT]
max_over_subscription_ratio = 20.0
reserved_percentage = 5
| Parameter | Description | Default |
|---|---|---|
| max_over_subscription_ratio | Maximum ratio of provisioned capacity to physical capacity | 20.0 |
| reserved_percentage | Percentage of backend capacity reserved and excluded from scheduling | 0 |
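The capacity the scheduler will advertise follows directly from these two settings. A worked example with hypothetical numbers — a 10 TB physical pool and the values from the snippet above (5% reserved, 20.0 ratio):

```shell
# virtual_capacity = physical * (1 - reserved_percentage/100) * max_over_subscription_ratio
awk -v phys=10 -v reserved=5 -v ratio=20.0 'BEGIN {
    usable = phys * (1 - reserved / 100)   # 9.5 TB left after the 5% reserve
    printf "usable=%.1f TB, advertisable=%.1f TB\n", usable, usable * ratio
}'
# prints: usable=9.5 TB, advertisable=190.0 TB
```

In other words, a 10 TB pool at these settings can hold up to 190 TB of provisioned thin volumes — which is why the monitoring guidance below matters.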
Setting max_over_subscription_ratio too high without monitoring physical usage can result in volumes that cannot be written to once the physical backend is full. Monitor actual consumption against provisioned capacity and set alerts at 70–80% physical utilization.

Monitor Overcommit and Capacity

Navigate to Admin → Volume → Volume Services to view backend capacity statistics. The Free Capacity and Total Capacity columns show the physical pool state reported by each backend driver.

For provisioned-vs-physical overcommit visibility, use the XIMP monitoring dashboard to track Cinder backend pool utilization over time.

Next Steps

External Storage Integration

Connect to enterprise storage backends including NetApp, Pure Storage, and Dell

Storage Backends

Configure and register backend drivers with the scheduler

Volume Types Admin

Expose provisioning mode to tenants through volume type extra specs

Storage Tiers

Configure multi-tier storage with NVMe, SSD, and HDD backends