
Overview

Storage QoS (Quality of Service) specifications define performance boundaries for block storage volumes — capping IOPS, throughput, and burst behavior at the hypervisor-to-storage layer. QoS specs are attached to volume types, so every volume created from that type automatically inherits the performance constraints. This prevents high-throughput workloads from starving adjacent volumes on shared storage.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Admin credentials sourced from admin-openrc.sh
  • Block Storage service deployed and healthy
  • Volume types configured (Volume Types Admin Guide)
  • QoS support enabled on the target storage backend (Ceph, NetApp, or Pure Storage)
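The prerequisites can be verified from the CLI before proceeding; a quick check, assuming the standard openstack client is installed:

```shell
# Load admin credentials into the shell environment
source admin-openrc.sh

# Confirm the Block Storage service is deployed and healthy
openstack volume service list

# List the volume types available for QoS association
openstack volume type list
```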

QoS Specification Parameters

Storage QoS specs define performance limits using key-value pairs. The specific keys supported depend on the storage backend driver.

Standard QoS Keys (all backends)

| Key | Unit | Description |
| --- | --- | --- |
| total_iops_sec | IOPS | Maximum combined read + write IOPS |
| read_iops_sec | IOPS | Maximum read IOPS |
| write_iops_sec | IOPS | Maximum write IOPS |
| total_bytes_sec | bytes/s | Maximum combined read + write throughput |
| read_bytes_sec | bytes/s | Maximum read throughput |
| write_bytes_sec | bytes/s | Maximum write throughput |
| total_iops_sec_max | IOPS | Burst IOPS ceiling |
| total_bytes_sec_max | bytes/s | Burst throughput ceiling |
| size_iops_sec | IOPS/GB | IOPS scaling factor (per GB of volume size) |

size_iops_sec scales limits proportionally to volume size. A spec of size_iops_sec=10 grants a 100 GB volume 1,000 IOPS and a 500 GB volume 5,000 IOPS automatically.
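As an illustration, a size-scaled spec is created and attached like any other QoS spec; the spec name and the "database-ssd" volume type below are hypothetical:

```shell
# Hypothetical size-scaled spec: each volume receives 10 IOPS per GB of capacity
openstack volume qos create db-scaled-qos \
  --consumer front-end \
  --property size_iops_sec=10

# "database-ssd" is an assumed volume type name
openstack volume qos associate db-scaled-qos database-ssd
```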
Ceph RBD Keys

Ceph RBD enforces QoS through its rbd_qos_* parameters, configured in ceph.conf and overridable per volume via extra specs.

| Key | Unit | Description |
| --- | --- | --- |
| rbd_qos_iops_limit | IOPS | Combined read + write IOPS cap |
| rbd_qos_read_iops_limit | IOPS | Read IOPS cap |
| rbd_qos_write_iops_limit | IOPS | Write IOPS cap |
| rbd_qos_bps_limit | bytes/s | Combined throughput cap |
| rbd_qos_read_bps_limit | bytes/s | Read throughput cap |
| rbd_qos_write_bps_limit | bytes/s | Write throughput cap |
| rbd_qos_iops_burst | IOPS | Burst IOPS above the sustained limit |
| rbd_qos_bps_burst | bytes/s | Burst throughput above the sustained limit |

Ceph QoS is enforced at the librbd layer, inside the QEMU process on the hypervisor. It does not require OSD-level configuration changes and applies to each volume independently.
NetApp ONTAP Keys

| Key | Unit | Description |
| --- | --- | --- |
| netapp:qos_policy_group | Policy name | Reference to an ONTAP QoS policy group |
| netapp:qos_adaptive_policy_group | Policy name | Reference to an ONTAP adaptive QoS policy |
| total_iops_sec | IOPS | Enforced via an ONTAP QoS policy |
| total_bytes_sec | bytes/s | Enforced via an ONTAP QoS policy |

NetApp adaptive QoS adjusts limits automatically based on volume size (TB); expected and peak IOPS/TB values are set in the ONTAP adaptive policy group.
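A sketch of a NetApp-backed spec referencing an existing policy group; the policy group name "gold-policy" and volume type "netapp-flash" are assumptions, and the policy group itself must already exist in ONTAP:

```shell
# Reference a pre-created ONTAP QoS policy group; enforcement happens on the array
openstack volume qos create netapp-gold-qos \
  --consumer back-end \
  --property netapp:qos_policy_group=gold-policy

# "netapp-flash" is an assumed volume type backed by the NetApp driver
openstack volume qos associate netapp-gold-qos netapp-flash
```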
Pure Storage Keys

| Key | Unit | Description |
| --- | --- | --- |
| pure_qos_max_bandwidth | bytes/s | Maximum combined read + write throughput |
| pure_qos_max_iops | IOPS | Maximum combined IOPS |

Pure Storage enforces QoS at the array level: limits apply to the volume regardless of which host or hypervisor accesses it.
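A sketch for a Pure Storage backend; the spec name and the "pure-flash" volume type are assumptions:

```shell
# Array-enforced limits: 5,000 combined IOPS and 500 MB/s (524288000 bytes/s)
openstack volume qos create pure-standard-qos \
  --consumer back-end \
  --property pure_qos_max_iops=5000 \
  --property pure_qos_max_bandwidth=524288000

openstack volume qos associate pure-standard-qos pure-flash
```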

Create QoS Specifications

Navigate to QoS Specs

Log in to the Xloud Dashboard (https://connect.<your-domain>) as an administrator and navigate to Admin → Volume → QoS Specs. Click Create QoS Spec.

Define the spec

| Field | Value | Description |
| --- | --- | --- |
| Name | e.g., bronze-1000iops | Descriptive name |
| Consumer | front-end | Where limits are enforced: front-end (hypervisor), back-end (storage array), or both |
Click Create to save the spec.

Add key-value pairs

Open the newly created spec and click Edit Keys. Add the desired performance parameters:
| Key | Example Value |
| --- | --- |
| total_iops_sec | 1000 |
| total_bytes_sec | 104857600 (100 MB/s) |
| total_iops_sec_max | 2000 |
Click Save to apply the key-value pairs to the spec.
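The dashboard steps above have a CLI equivalent, shown here as a sketch using the same example values:

```shell
# Create the spec with front-end (hypervisor) enforcement
openstack volume qos create bronze-1000iops --consumer front-end

# Add the performance keys: 1,000 sustained IOPS, 100 MB/s, 2,000 burst IOPS
openstack volume qos set bronze-1000iops \
  --property total_iops_sec=1000 \
  --property total_bytes_sec=104857600 \
  --property total_iops_sec_max=2000
```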

Associate with a volume type

Navigate to Admin → Volume → Volume Types, select a volume type, click Manage QoS Spec Association, and select the QoS spec from the dropdown.
All new volumes created from this volume type inherit the QoS limits automatically.
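The same association can be made from the CLI; "standard-ssd" is an assumed volume type name:

```shell
# Attach the QoS spec to a volume type; new volumes of this type inherit the limits
openstack volume qos associate bronze-1000iops standard-ssd

# Inspect the spec and its properties afterwards
openstack volume qos show bronze-1000iops
```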

Backend-Specific QoS Examples

For Ceph RBD backends, use the rbd_qos_* keys. These are enforced by the QEMU librbd driver on the hypervisor.
Create a Ceph-specific QoS spec:

```shell
openstack volume qos create ceph-standard-qos \
  --consumer front-end \
  --property rbd_qos_iops_limit=1000 \
  --property rbd_qos_bps_limit=104857600 \
  --property rbd_qos_iops_burst=2000 \
  --property rbd_qos_bps_burst=209715200
```

Associate it with a Ceph volume type:

```shell
openstack volume qos associate ceph-standard-qos ceph-ssd
```
Set rbd_qos_iops_burst to 2× the sustained rbd_qos_iops_limit to absorb short IO spikes (e.g., application startup, database checkpoint) without violating the sustained limit.

QoS Capability Comparison

| Capability | Ceph RBD | NetApp ONTAP | Pure Storage | Standard (libvirt) |
| --- | --- | --- | --- | --- |
| IOPS limit | Yes | Yes | Yes | Yes |
| Throughput limit | Yes | Yes | Yes | Yes |
| Burst IOPS | Yes | Yes (adaptive) | No | Yes |
| Burst throughput | Yes | Yes (adaptive) | No | Yes |
| Per-read/write limits | Yes | No | No | Yes |
| Size-scaled IOPS | No | Yes (adaptive) | No | Yes |
| Enforcement layer | Hypervisor (librbd) | Array | Array | Hypervisor (libvirt) |
| Live update | Yes (libvirt) | Yes | Yes | Yes (libvirt) |

Manage QoS Specs

List all QoS specifications:

```shell
openstack volume qos list
```

Show a QoS spec and its properties:

```shell
openstack volume qos show bronze-1000iops
```

Update a QoS spec property:

```shell
openstack volume qos set bronze-1000iops \
  --property total_iops_sec=1500
```

Remove a property from a QoS spec:

```shell
openstack volume qos unset bronze-1000iops --property total_iops_sec_max
```

Disassociate a QoS spec from a volume type:

```shell
openstack volume qos disassociate --volume-type standard-ssd bronze-1000iops
```

Delete a QoS spec:

```shell
openstack volume qos delete bronze-1000iops
```
Deleting a QoS spec that is still associated with a volume type will fail. Disassociate from all volume types before deleting the spec.
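The note above translates into a two-step deletion sequence; the `--all` flag removes every volume-type association at once:

```shell
# Remove all volume-type associations, then delete the spec
openstack volume qos disassociate --all bronze-1000iops
openstack volume qos delete bronze-1000iops
```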

Monitoring QoS Compliance

Navigate to Monitoring → Storage → Volume Metrics in Xloud XIMP. Per-volume IOPS and throughput graphs show whether volumes are hitting their QoS limits:
  • IOPS at or near the total_iops_sec limit — the volume is QoS-constrained
  • Sustained IOPS below limit — the workload does not require the full allocation
  • Burst IOPS exceeding sustained limit but below burst cap — normal burst behavior
Set XIMP alerts for volumes that sustain QoS-constrained behavior for more than 5 minutes — this signals that the volume type’s QoS limit may be too low for the workload.

Best Practices

Align with Volume Types

Define one QoS spec per volume tier (bronze, silver, gold). Attach specs to volume types so QoS is automatically applied without per-volume manual steps.

Set Burst at 2x Sustained

Configure burst limits at 2× the sustained limit to absorb application startup and checkpoint IO spikes without permanently raising the sustained cap.

Use Size-Scaled for Databases

Apply size_iops_sec to database volume types. Larger database volumes automatically receive proportionally more IOPS without manual reconfiguration.

Monitor Before Limiting

Baseline actual workload IOPS using XIMP before applying QoS limits. Setting limits below actual peak workload demand causes application performance degradation.

Tiered QoS Profiles

Xloud-Developed — This capability is developed by Xloud and ships with XAVS / XPCI.
Xloud provides pre-defined tiered QoS profiles that align with common workload classes. Each profile sets per-VM IOPS limits, bandwidth caps, and burst allowances with separate read and write thresholds.
| Profile | IOPS (sustained) | Throughput | Burst IOPS | Target Workload |
| --- | --- | --- | --- | --- |
| Bronze | 500 | 50 MB/s | 1,000 | Development, test environments |
| Silver | 2,000 | 200 MB/s | 4,000 | General-purpose applications |
| Gold | 10,000 | 1 GB/s | 15,000 | Transactional databases, analytics |
| Platinum | 50,000 | 3 GB/s | 75,000 | High-frequency trading, real-time ingestion |
Key capabilities of tiered QoS profiles:
  • Per-VM enforcement — limits apply at the individual instance level, not shared across a tenant
  • Read/write-differentiated limits — separate IOPS and bandwidth caps for read and write operations allow tuning for read-heavy or write-heavy workloads independently
  • Burst support — short IO spikes (application startup, checkpoints) are absorbed up to the burst ceiling without throttling
Assign profiles to volume types so every volume inherits the correct QoS automatically. See Storage Tiers for aligning QoS profiles with hardware tiers.
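As a sketch, the four profiles in the table above could be approximated as plain QoS specs on a deployment without the pre-defined profiles; all spec names are assumptions, and the throughput values use binary-prefix byte equivalents of 50 MB/s, 200 MB/s, 1 GB/s, and 3 GB/s:

```shell
# Recreate the tiered profiles as QoS specs: name, sustained IOPS, bytes/s, burst IOPS
for tier in "bronze 500 52428800 1000" \
            "silver 2000 209715200 4000" \
            "gold 10000 1073741824 15000" \
            "platinum 50000 3221225472 75000"; do
  set -- $tier
  openstack volume qos create "$1-tier-qos" \
    --consumer front-end \
    --property total_iops_sec="$2" \
    --property total_bytes_sec="$3" \
    --property total_iops_sec_max="$4"
done
```

Note that this sketch only sets combined (total) limits; the shipped profiles also carry separate read and write thresholds, which would use the read_*/write_* keys from the standard key table.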

Next Steps

Volume Types Admin

Create and manage volume types to associate with QoS specs

Storage Tiers

Configure tiered storage pools aligned with QoS tiers

Storage Backends

Review backend-specific capabilities and QoS driver support

Block Storage Architecture

Understand how QoS is enforced across the storage service stack