
Overview

XSDS provides several performance optimization features that can be configured independently or combined to match your workload’s I/O requirements. Understanding how each feature works helps you select the right combination without unnecessary overhead.
Prerequisites
  • An active Xloud account with project member access
  • Storage tiering, deduplication, and caching are configured at the pool level by an administrator through XDeploy — contact your storage administrator to enable these features

Storage Tiering

XSDS supports multiple storage device classes within a single cluster. Administrators configure volume types that map to specific device classes, allowing you to direct each workload to the appropriate media tier.
Tier   Device Class      Volume Type   Typical Use Case
NVMe   NVMe SSD          ceph-nvme     Databases, high-IOPS OLTP, latency-critical applications
SSD    SATA/SAS SSD      ceph-ssd      General-purpose workloads, web servers, application tiers
HDD    Hard disk drive   ceph-hdd      Backups, archives, cold data, large sequential workloads

View available volume types

List volume types
openstack volume type list
Volume types prefixed with ceph- correspond to XSDS-backed tiers.
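To show only the XSDS-backed tiers, the output can be filtered on that prefix. A sketch using the standard OpenStack CLI machine-readable format flags (`-f value -c Name`):

```shell
# List only the ceph- prefixed volume type names
openstack volume type list -f value -c Name | grep '^ceph-'
```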

Create a volume on the target tier

Create an NVMe-tier volume
openstack volume create \
  --size 200 \
  --type ceph-nvme \
  prod-db-nvme
Create an HDD-tier archive volume
openstack volume create \
  --size 10000 \
  --type ceph-hdd \
  archive-cold-data
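After creation, a volume must be attached to an instance before it can be used. A minimal sketch, assuming a running instance named prod-db-01 (a hypothetical name) and the NVMe-tier volume created above:

```shell
# Attach the NVMe-tier volume to a running instance; the device name
# that appears inside the guest (e.g. /dev/vdb) is assigned by the hypervisor
openstack server add volume prod-db-01 prod-db-nvme
```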

Deduplication and Compression

Inline deduplication and compression reduce the effective storage footprint of redundant and compressible data.
Deduplication eliminates redundant data blocks across all objects in a pool. When two objects contain identical blocks, only one physical copy is stored.
  • Transparent to applications — no changes to client code required
  • Most effective for backup workloads (multiple similar backup sets)
  • Effectiveness varies: typical savings range from 1.5× to 4× for backup data
  • CPU-intensive — may reduce throughput on write-heavy workloads
Deduplication is enabled at the pool level by an administrator. Check with your storage administrator whether deduplication is active on your assigned pools.
Compression applies lossless compression to stored data before writing to disk. Common algorithms include LZ4 (fast, lower ratio) and ZSTD (slower, better ratio).
  • Transparent to applications — reads/writes use normal protocols
  • Most effective for text data, logs, and structured data formats (JSON, CSV)
  • Less effective for already-compressed formats (JPEG, MP4, ZIP, encrypted data)
  • Typical savings: 1.2× to 2× depending on data type
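Before relying on pool compression, you can estimate how compressible your data actually is with any local lossless compressor; the ratio you observe is a rough proxy for pool-level savings. A sketch using gzip, with synthesized repetitive JSON-like text standing in for a real sample file from your workload:

```shell
# Estimate how well a sample of your data compresses.
# Substitute a real sample file from your workload for $sample.
sample=$(mktemp)
for i in $(seq 1 1000); do
  printf '{"id": %d, "status": "ok", "region": "eu-west-1"}\n' "$i"
done > "$sample"

orig=$(wc -c < "$sample")            # uncompressed size in bytes
comp=$(gzip -c "$sample" | wc -c)    # gzip-compressed size in bytes
echo "compressed ${orig} -> ${comp} bytes"
rm -f "$sample"
```

Repetitive structured text like this compresses far better than the 1.2× lower bound quoted above; already-compressed media will show a ratio near 1.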
To check whether compression is enabled on your pool:
Check pool compression
openstack volume type show <TYPE_NAME> -c extra_specs
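Administrators can also inspect the backing pool directly. A sketch, assuming a Ceph-backed pool named volumes-ssd (a hypothetical name) and admin access to the cluster:

```shell
# Show whether inline compression is enabled on the pool and which
# algorithm it uses (administrator-only; requires cluster access)
ceph osd pool get volumes-ssd compression_mode
ceph osd pool get volumes-ssd compression_algorithm
```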

Read Caching

A tiered caching layer accelerates read-intensive workloads by promoting hot data to a faster media tier (typically SSD or NVMe) while the bulk of data resides on slower, higher-capacity devices.
When the caching tier is active:
  1. Frequently accessed data blocks are automatically promoted from the capacity tier to the cache tier
  2. Subsequent reads are served directly from the faster cache
  3. Cache eviction moves cold data back to the capacity tier without data loss
  • Effective for workloads with a working set significantly smaller than total dataset size
  • Cache promotion is automatic and policy-driven — no application changes required
  • Latency for cached reads approaches native NVMe/SSD latency
If your workload is primarily write-heavy or exhibits no temporal access locality, caching provides limited benefit. Use a native SSD or NVMe-backed pool instead.
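One way to observe cache promotion from a client instance is to run an identical read test twice and compare latency: on a cache-enabled pool, the second pass over the same region should be served largely from the faster tier. A sketch using fio, assuming the volume appears as /dev/vdb in the guest:

```shell
# Pass 1 reads from the capacity tier and warms the cache;
# pass 2 re-reads the same 1 GiB region and should report lower
# latency if the caching tier is active for this pool.
for pass in 1 2; do
  echo "pass $pass"
  fio --name=cache-check --filename=/dev/vdb --rw=randread \
      --bs=4k --size=1G --direct=1 --ioengine=libaio \
      --runtime=30 --time_based --group_reporting
done
```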

Performance Validation

Measure the effective I/O performance of your storage configuration:
From inside an Xloud Compute instance with a volume attached:
Sequential write throughput (1 GB test)
dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct
Note that /dev/zero produces highly compressible data, so this test will overstate throughput on compression-enabled pools.
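A matching sequential read test reads the device back; iflag=direct bypasses the guest page cache so you measure the storage tier rather than local memory:

```shell
# Sequential read throughput (1 GB test)
dd if=/dev/vdb of=/dev/null bs=1M count=1024 iflag=direct
```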
Random read IOPS (4K blocks)
fio --name=random-read \
  --filename=/dev/vdb \
  --rw=randread \
  --bs=4k \
  --direct=1 \
  --ioengine=libaio \
  --numjobs=4 \
  --iodepth=32 \
  --runtime=60 \
  --time_based \
  --group_reporting
Without --direct=1, reads may be served from the guest page cache and the results will not reflect storage performance.
Run I/O tests on a dedicated test volume, never on a volume containing production data. Direct write tests against the raw device (/dev/vdb) will corrupt any file system on it.
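To follow that guidance, a throwaway scratch volume can be created and attached first, then removed after the run. A sketch, assuming a running instance named bench-vm (a hypothetical name):

```shell
# Create a small scratch volume on the tier under test and attach it
openstack volume create --size 20 --type ceph-nvme scratch-bench
openstack server add volume bench-vm scratch-bench
# ... run the dd/fio tests above against the new device ...
# Detach and delete the scratch volume when finished
openstack server remove volume bench-vm scratch-bench
openstack volume delete scratch-bench
```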

Next Steps

Storage Types

Understand which storage interface best fits your workload’s access pattern

Data Protection

Configure replication and erasure coding for durability

XSDS Admin — Storage Tiers

Configure multi-tier storage pools and device class rules (administrator)

Resource Optimizer

Automate data placement across tiers based on access patterns