
Overview

The cinder-backup service writes volume data to a configurable backup target that is completely independent of the primary volume storage backend. Backups from any primary backend (Ceph, LVM, NetApp) can go to any backup target. For production deployments, a separate object storage target (Ceph, Swift, or S3) provides the isolation needed for backups to survive a primary storage failure.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Administrator credentials with the admin role
  • Access to XDeploy for infrastructure-level configuration changes
  • A configured backup target reachable from all backup service nodes
  • Credentials and endpoint URL for the chosen backup target

Supported Backup Backends

Backend | Driver | Best For | Incremental | HA
Ceph RBD | ceph.CephBackupDriver | XAVS deployments — fast, native, incremental | Yes | Yes
Swift | swift.SwiftBackupDriver | Built-in Xloud Object Storage | Yes | Yes
S3-compatible | s3.S3BackupDriver | AWS S3, Ceph RGW, MinIO, Wasabi, Dell ECS | Yes | Yes
NFS | nfs.NFSBackupDriver | Existing NAS/NFS infrastructure | Yes | Depends
Google Cloud Storage | gcs.GoogleBackupDriver | Off-site cloud backup to GCS | Yes | Yes
GlusterFS | glusterfs.GlusterfsBackupDriver | Existing GlusterFS clusters | Yes | Yes
POSIX (local path) | posix.PosixBackupDriver | Dev/test — single node only | Yes | No
Only one backup backend is active at a time. All settings go in the [DEFAULT] section of cinder.conf. Apply changes through XDeploy — do not edit cinder.conf directly on production nodes.
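For deployments that back up to an S3-compatible target instead of Ceph, the settings follow the same pattern in the [DEFAULT] section. This is a minimal sketch only; the endpoint URL, bucket name, and keys below are placeholders to replace with your object store's actual values.

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.s3.S3BackupDriver
# Placeholder endpoint and bucket — substitute your object store's values
backup_s3_endpoint_url = https://s3.example.com
backup_s3_store_bucket = cinder-backups
backup_s3_store_access_key = <access-key>
backup_s3_store_secret_key = <secret-key>
```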

Backup Configuration

The Ceph backup driver is the recommended backend for XAVS deployments. It stores backups as RBD images in a dedicated Ceph pool and supports differential (incremental) backups — only changed blocks transfer after the first full backup. When both the volume backend and backup backend are Ceph, data is copied server-side without passing through the backup service node, dramatically reducing backup time and network load.
cinder.conf — Ceph RBD backup backend
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_pool = backups
backup_ceph_user = cinder-backup
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_chunk_size = 134217728
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
backup_ceph_max_snapshots = 0
backup_ceph_image_journals = false
restore_discard_excess_bytes = true
Configuration options:
Option | Default | Description
backup_ceph_pool | backups | Ceph pool where backup images are stored
backup_ceph_user | cinder | Ceph user to connect with
backup_ceph_conf | /etc/ceph/ceph.conf | Path to Ceph configuration file
backup_ceph_chunk_size | 134217728 (128 MiB) | Chunk size in bytes for transfers to Ceph
backup_ceph_stripe_unit | 0 | RBD stripe unit (0 = use pool default)
backup_ceph_stripe_count | 0 | RBD stripe count (0 = use pool default)
backup_ceph_max_snapshots | 0 | Maximum snapshots to retain per backup volume (0 = unlimited)
backup_ceph_image_journals | False | Enable JOURNALING and EXCLUSIVE_LOCK features for RBD mirroring
restore_discard_excess_bytes | True | Pad restored volumes with zeroes to the exact size
Set backup_ceph_max_snapshots to a non-zero value (e.g., 5) to automatically prune old incremental snapshots and reclaim Ceph pool space. Manually deleting incremental backups outside this mechanism forces the next backup to be a full copy.
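The incremental chains that this pruning applies to are created through the Cinder API. A sketch of creating them from the CLI; the volume name test-vol is a placeholder:

```shell
# The first backup of a volume is always a full copy
openstack volume backup create --name weekly-full test-vol

# Later backups can be incremental: only blocks changed since the most
# recent backup of this volume are transferred and stored
openstack volume backup create --name daily-incr --incremental test-vol
```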
Create the backup pool and user:
Ceph — create backup pool and cephx user
ceph osd pool create backups 32
ceph auth get-or-create client.cinder-backup \
  mon 'profile rbd' \
  osd 'profile rbd pool=backups' \
  -o /etc/ceph/ceph.client.cinder-backup.keyring
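Before applying the configuration, it can be worth confirming the pool and user from a Ceph admin node. A quick check, assuming the commands above have already been run:

```shell
# Confirm the pool exists
ceph osd pool ls | grep backups

# Confirm the cephx user and its capabilities
ceph auth get client.cinder-backup

# List backup images as the backup user (empty until the first backup runs)
rbd --id cinder-backup --pool backups ls
```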

Apply Backup Backend Configuration

In XAVS deployments, backup backend settings are configured through XDeploy and applied via Ansible. Do not edit cinder.conf directly on production nodes.
Apply backup configuration changes
xavs-ansible deploy --tags cinder
After deployment, verify the backup service is running:
Check backup service status
openstack volume service list --service cinder-backup

Validate the Backup Backend

Create a test backup

Navigate to Project → Volume → Volumes, select a volume, and click Actions → Create Backup. Set a name and leave the snapshot field empty for a full backup.

Monitor backup status

Navigate to Project → Volume → Backups. The backup transitions from creating to available.

Restore the backup

Click Actions → Restore Backup to restore to a new volume. Confirm the restored volume matches the original in size and status.
Backup created and restored successfully — backend is operational.
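The same validation can be scripted from the CLI instead of Horizon. A sketch with placeholder names; note that the exact behaviour of the restore command (restoring into an existing volume versus creating a new one) varies by client version:

```shell
# Create a full backup of a test volume (volume name is a placeholder)
openstack volume backup create --name validation-backup test-vol

# Watch the backup move from creating to available
openstack volume backup show validation-backup -c status

# Restore the backup, then compare the restored volume's size and status
# against the original
openstack volume backup restore validation-backup restored-vol
openstack volume list
```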

Backup Storage Sizing

Workload Profile | Recommended Backend | Storage Sizing
Development / test | POSIX or NFS | 2× total volume capacity
Small production (< 10 TiB) | NFS or Swift | 3× total volume capacity
Large production (> 10 TiB) | Ceph or S3 | 5× total volume capacity (with incremental)
Off-site DR | GCS or S3 | Same as large production
With a weekly full + daily incremental strategy, effective backup storage is typically 1.2–1.5× total volume capacity for a 30-day retention window. Ceph incremental backups are the most space-efficient — only changed blocks are stored after the first full backup.
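That multiplier can be sanity-checked with a back-of-the-envelope calculation, under the simplifying assumptions that a single full backup is retained per volume and that roughly 1% of blocks change per day (both illustrative numbers, not measurements):

```python
def backup_storage_multiplier(retention_days: int, daily_change_rate: float) -> float:
    """Effective backup storage as a multiple of total volume capacity:
    one retained full copy plus changed-block incrementals for each day
    in the retention window (Ceph-style incremental backups)."""
    full_copy = 1.0                                    # one full copy of every volume
    incrementals = retention_days * daily_change_rate  # changed blocks per day, accumulated
    return full_copy + incrementals

# 30-day retention at ~1% daily change -> ~1.3x total volume capacity,
# inside the 1.2-1.5x range quoted above
print(round(backup_storage_multiplier(30, 0.01), 2))
```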

Troubleshooting

Symptom: openstack volume service list | grep backup shows state down.
Common causes and fixes:
  • Authentication failure — verify credentials (Swift/S3/GCS) are correct and the service user has access to the bucket/container
  • NFS/GlusterFS mount failure — verify the share is reachable and exported with write permission: mount -t nfs <server>:<share> /mnt/test
  • Ceph connectivity — verify the keyring and ceph.conf are correct: ceph --user cinder-backup -s
  • Container not started — restart the backup service: xavs-ansible deploy --tags cinder
Symptom: A backup remains in creating status for an extended time.
Cause: The backup target is unreachable, slow, or the service is under load.
Check backup service logs:
Check cinder-backup logs
docker logs cinder_backup | tail -50
For large volumes (> 1 TiB), allow up to 60 minutes. If the backup stays in creating beyond that, check target connectivity and available space.
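To narrow down the cause, it can help to filter the log output for errors and tracebacks (container name as used above):

```shell
# Show only recent error lines and Python tracebacks from the backup service
docker logs cinder_backup 2>&1 | grep -iE 'error|traceback' | tail -20
```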
Symptom: An incremental backup unexpectedly runs as a new full backup.
Cause (Ceph): Manually deleting a Ceph incremental snapshot breaks the backup chain, so the next backup must start a new full backup.
Cause (Swift/S3/NFS): The base backup or a required incremental in the chain was deleted.
Fix: Only delete backups through the Cinder API (openstack volume backup delete), never by directly removing objects from the storage backend. After a chain break, the next backup automatically creates a new full backup.
Symptom: GCS backups fail with authentication or permission errors.
Cause: The service account credential file is missing, incorrect, or the service account lacks required permissions.
Fix: Verify the credential file path in cinder.conf and test authentication:
Test GCS credentials
docker exec cinder_backup python3 -c "
from google.oauth2 import service_account
creds = service_account.Credentials.from_service_account_file('/etc/cinder/gcs-credentials.json')
print('Credentials valid:', creds.valid)
"
Ensure the service account has storage.objects.create, storage.objects.get, and storage.objects.delete on the target bucket.

Next Steps

Backup Backends

Comparison of all backup backends with use-case guidance

Volume Backups (User)

User guide for creating, managing, and restoring volume backups

Storage Backends

Configure the primary volume storage backends

Quotas

Configure backup quota limits per project