Configure the Xloud Block Storage backup service with Ceph, Swift, NFS, S3, POSIX, GlusterFS, or Google Cloud Storage backup targets. Enable and validate the backup service for production deployments.
The cinder-backup service writes volume data to a configurable backup target that is completely independent of the primary volume storage backend. Backups from any primary backend (Ceph, LVM, NetApp) can go to any backup target. For production deployments, a separate object storage target (Ceph, Swift, or S3) provides the isolation needed for backups to survive a primary storage failure.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
Administrator credentials with the admin role
Access to XDeploy for infrastructure-level configuration changes
A configured backup target reachable from all backup service nodes
Credentials and endpoint URL for the chosen backup target
Only one backup backend is active at a time. All settings go in the [DEFAULT] section of cinder.conf. Apply changes through XDeploy — do not edit cinder.conf directly on production nodes.
The Ceph backup driver is the recommended backend for XAVS deployments. It stores backups as RBD images in a dedicated Ceph pool and supports differential (incremental) backups — only changed blocks transfer after the first full backup. When both the volume backend and backup backend are Ceph, data is copied server-side without passing through the backup service node, dramatically reducing backup time and network load.
Option | Default | Description
backup_ceph_max_snapshots | 0 | Max snapshots to retain per backup volume (0 = unlimited)
backup_ceph_image_journals | False | Enable JOURNALING and EXCLUSIVE_LOCK features for RBD mirroring
restore_discard_excess_bytes | True | Pad restored volumes with zeroes to exact size
Set backup_ceph_max_snapshots to a non-zero value (e.g., 5) to automatically prune old incremental snapshots and reclaim Ceph pool space. Manually deleting incremental backups outside this mechanism forces the next backup to be a full copy.
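As a minimal sketch, a Ceph backup backend in cinder.conf might look like the following (assuming a recent Cinder release; the pool and user names are illustrative, and changes should be applied through XDeploy as noted above):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup   # illustrative Ceph client name
backup_ceph_pool = backups         # illustrative dedicated backup pool
backup_ceph_max_snapshots = 5      # prune old incremental snapshots automatically
```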
The Swift backup driver stores volume backup data as objects in an Xloud Object Storage container. Each backup is split into chunks stored as individual objects, and Swift’s built-in replication provides durability. The driver supports auth versions 1, 2, and 3.
backup_swift_object_size must be a multiple of backup_swift_block_size.
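A hedged example of a Swift backup backend in cinder.conf (container name and chunk sizes are illustrative; note the object size is an exact multiple of the block size, as required above):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
backup_swift_auth = per_user            # authenticate with each user's Keystone token
backup_swift_container = volumebackups  # illustrative container name
backup_swift_object_size = 52428800     # 50 MiB chunks
backup_swift_block_size = 32768         # object_size must be a multiple of this
```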
The S3 backup driver works with AWS S3 and any S3-compatible endpoint: Ceph RADOS Gateway, MinIO, Wasabi, Dell ECS, and others. Set backup_s3_endpoint_url for non-AWS endpoints; omit it for native AWS S3.
backup_s3_object_size must be a multiple of backup_s3_block_size.
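A minimal sketch of an S3 backup backend in cinder.conf, assuming an S3-compatible endpoint such as Ceph RADOS Gateway (the endpoint URL and bucket name are illustrative; the key placeholders must be replaced with real credentials):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.s3.S3BackupDriver
backup_s3_endpoint_url = https://rgw.example.com:8080  # omit for native AWS S3
backup_s3_store_bucket = volumebackups                 # illustrative bucket name
backup_s3_store_access_key = <access-key>
backup_s3_store_secret_key = <secret-key>
```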
The NFS backup driver mounts a remote NFS share and writes backup files to the mount point. Suitable for environments with existing NAS infrastructure.
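A hedged sketch of an NFS backup backend in cinder.conf (the server, export path, and mount options are illustrative):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
backup_share = nas.example.com:/export/cinder_backups  # illustrative NFS share
backup_mount_options = vers=4.1                        # optional NFS mount options
```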
backup_gcs_object_size must be a multiple of backup_gcs_block_size.
Required GCS service account permissions:
Permission | Purpose
storage.buckets.get | Read bucket metadata
storage.objects.create | Write backup objects
storage.objects.get | Read backup objects for restore
storage.objects.delete | Delete backup objects
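A minimal sketch of a GCS backup backend in cinder.conf (the bucket name, project ID, and credential file path are illustrative):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.gcs.GoogleBackupDriver
backup_gcs_bucket = xloud-volume-backups           # illustrative bucket name
backup_gcs_project_id = my-project                 # illustrative GCP project ID
backup_gcs_credential_file = /etc/cinder/gcs.json  # service account key file
```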
The GlusterFS backup driver mounts a GlusterFS volume and writes backup files to it. Suitable for environments already running GlusterFS distributed storage.
Option | Default | Description
glusterfs_backup_share | None | GlusterFS volume in hostname:vol_name, ipv4addr:vol_name, or [ipv6addr]:vol_name format. Example: 10.0.10.100:backup_vol
glusterfs_backup_mount_point | $state_path/backup_mount | Base directory for the GlusterFS mount point
The GlusterFS client (glusterfs-fuse) must be installed on all nodes running the backup service. The backup service account must have read/write access to the GlusterFS volume.
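A hedged sketch of a GlusterFS backup backend in cinder.conf (the server address and volume name are illustrative):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.glusterfs.GlusterfsBackupDriver
glusterfs_backup_share = 10.0.10.100:backup_vol  # hostname:vol_name format
```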
The POSIX backup driver writes backups to a local filesystem path on the backup service node. It requires no external services and is straightforward to configure, but provides no redundancy — if the node fails, backups are lost.
Option | Default | Description
backup_file_size | 1999994880 (~1.86 GiB) | Max backup file size; volumes larger than this are split. Must be a multiple of backup_sha_block_size_bytes
backup_sha_block_size_bytes | 32768 (32 KiB) | Block size for incremental change tracking
backup_container | None | Custom subdirectory within backup_posix_path
backup_enable_progress_timer | True | Send progress events to Ceilometer
POSIX local backup provides no redundancy. A node failure loses both primary volumes and their backups simultaneously. Use this only for development or test environments. For any production workload, use Ceph, Swift, S3, or NFS.
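For a development or test environment, a POSIX backup backend in cinder.conf might look like this minimal sketch (the local path and subdirectory name are illustrative):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.posix.PosixBackupDriver
backup_posix_path = /var/lib/cinder/backup  # illustrative local filesystem path
backup_container = nightly                  # optional subdirectory for backups
```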
In XAVS deployments, backup backend settings are configured through XDeploy and applied via Ansible. Do not edit cinder.conf directly on production nodes.
Apply backup configuration changes
xavs-ansible deploy --tags cinder
After deployment, verify the backup service is running:
Check backup service status
openstack volume service list --service cinder-backup
Navigate to Project → Volume → Volumes, select a volume, and click Actions → Create Backup. Set a name and leave the snapshot field empty for a full backup.
Monitor backup status
Navigate to Project → Volume → Backups. The backup transitions through creating → available.
Restore the backup
Click Actions → Restore Backup to restore to a new volume. Confirm the restored volume matches the original in size and status.
Backup created and restored successfully — backend is operational.
Source your credentials file to authenticate with the Xloud platform:
Load credentials
source admin-openrc.sh
Download the OpenRC file from Xloud Dashboard → Project → API Access → Download OpenStack RC File.
With a weekly full + daily incremental strategy, effective backup storage is typically 1.2–1.5× total volume capacity for a 30-day retention window. Ceph incremental backups are the most space-efficient — only changed blocks are stored after the first full backup.
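The guideline above can be turned into a quick capacity estimate. The pool size here is a hypothetical example; the 1.2x and 1.5x factors are the bounds stated above:

```shell
# Rough backup capacity estimate for a weekly full + daily incremental
# strategy with 30-day retention. TOTAL_GIB is a hypothetical pool size.
TOTAL_GIB=10240                  # 10 TiB of primary volumes (example)
LOW=$(( TOTAL_GIB * 12 / 10 ))   # 1.2x lower bound
HIGH=$(( TOTAL_GIB * 15 / 10 ))  # 1.5x upper bound
echo "Plan ${LOW}-${HIGH} GiB of backup capacity"
```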
Symptom: openstack volume service list | grep backup shows state down. Common causes and fixes:
Authentication failure — verify credentials (Swift/S3/GCS) are correct and the service user has access to the bucket/container
NFS/GlusterFS mount failure — verify the share is reachable and exported with write permission: mount -t nfs <server>:<share> /mnt/test
Ceph connectivity — verify the keyring and ceph.conf are correct: ceph --user cinder-backup -s
Container not started — restart the backup service: xavs-ansible deploy --tags cinder
Backup stuck in 'creating' status
Cause: Backup target is unreachable, slow, or the service is under load. Check backup service logs:
Check cinder-backup logs
docker logs cinder_backup | tail -50
For large volumes (> 1 TiB), allow up to 60 minutes. If the backup stays in creating beyond that, check target connectivity and available space.
Incremental backup becomes a full backup unexpectedly
Cause (Ceph): Manually deleting a Ceph incremental snapshot breaks the backup chain; the next backup must start a new full copy.
Cause (Swift/S3/NFS): The base backup or a required incremental in the chain was deleted.
Fix: Only delete backups through the Cinder API (openstack volume backup delete), never by directly removing objects from the storage backend. After a chain break, the next backup automatically creates a new full backup.
GCS authentication errors
Cause: The service account credential file is missing, incorrect, or the service account lacks required permissions.
Fix: Verify the credential file path in cinder.conf and test authentication: