Overview
Storage policies define how object data is placed, replicated, and protected across storage nodes. Each container is assigned to exactly one policy at creation time — the policy cannot be changed after the container is created. Multiple policies enable tiered storage (standard replication, erasure coding, SSD-backed performance tiers, and archival tiers).

Prerequisites
- Admin credentials sourced from `admin-openrc.sh`
- Object Storage service deployed and healthy
- Storage rings built for each policy (see Ring Management)
Policy Types
Xloud Object Storage supports three storage policy types. Each maps to a distinct object ring file and enforces different data placement and durability behavior.

- Replication
- Erasure Coding
- Multi-Tier
Replication policies store N identical copies of every object across distinct storage zones. Each replica is fully readable independently — no reconstruction is needed.
| Parameter | Typical Value | Description |
|---|---|---|
| replica_count | 3 | Number of full copies stored per object |
| Min drives writable | 2 of 3 | Write quorum for consistency |
| Read quorum | 1 of 3 | Any single replica can serve a GET |
| Storage overhead | 3× | Total space = object size × replica count |
| Recovery time | Fast | Replication copies entire objects |
Policy Configuration Reference
Storage policies are defined in the Object Storage configuration managed by XDeploy. The configuration is applied during deployment via `xavs-ansible deploy -t swift`.
swift.conf — storage policy definition
| Field | Description |
|---|---|
| [storage-policy:N] | Section index — must match the ring file suffix (object-N.ring.gz) |
| name | Human-readable identifier used in container creation |
| default | yes for the policy applied without explicit selection |
| aliases | Comma-separated alternate names |
| policy_type | replication (default) or erasure_coding |
| ec_type | EC algorithm — required when policy_type = erasure_coding |
| ec_num_data_fragments | Data shards per EC stripe |
| ec_num_parity_fragments | Parity shards per EC stripe |
| deprecated | yes blocks new containers from using this policy |
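For instance, a swift.conf defining a default 3-replica policy alongside an 8+4 erasure-coding policy could look like the sketch below. Policy names, the alias, and the ec_type value are illustrative — use the values chosen for your deployment.

```ini
# swift.conf — illustrative storage policy definitions
[storage-policy:0]
name = standard
policy_type = replication
default = yes

[storage-policy:1]
name = archive
aliases = cold
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 8
ec_num_parity_fragments = 4
```

Each policy index must have a matching ring file (object.ring.gz for index 0, object-1.ring.gz for index 1) before the policy can serve traffic.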
View and Manage Policies
- Dashboard
- CLI
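From the command line, a container's policy can be set at creation and inspected afterwards with the Swift client. This is a sketch: the container name and the policy name "gold" are illustrative.

```shell
# Create a container with an explicit storage policy.
# The header is honored only when the container is first created,
# matching the one-policy-per-container rule above.
swift post my-container --header "X-Storage-Policy: gold"

# Show container metadata, including the assigned storage policy
swift stat my-container
```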
Navigate to Object Store

Log in to the Xloud Dashboard (https://connect.<your-domain>) and navigate to Project → Object Store → Containers. The storage policy is shown as a column in the container list and is selectable during container creation.

S3 API Compatibility
Xloud Object Storage exposes an S3-compatible API endpoint alongside the native Swift API. All storage policies are accessible via both APIs.

- S3 API Access
- Swift API Access
The S3-compatible endpoint uses the same underlying storage policies. Buckets created via the S3 API map to Swift containers and inherit policy assignment behavior.
Create bucket via S3 API (default policy)
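For example, with the AWS CLI pointed at the S3-compatible endpoint. The endpoint URL is a placeholder — substitute the one published for your deployment.

```shell
# New buckets receive the default storage policy (endpoint URL illustrative)
aws s3api create-bucket \
  --bucket my-bucket \
  --endpoint-url https://s3.<your-domain>
```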
Upload object via S3 API
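Uploading follows the same pattern; bucket, key, and endpoint values below are illustrative.

```shell
# Upload a local file through the S3-compatible endpoint
aws s3api put-object \
  --bucket my-bucket \
  --key docs/report.pdf \
  --body ./report.pdf \
  --endpoint-url https://s3.<your-domain>
```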
S3 bucket-to-policy mapping is configured by your administrator. Contact your Xloud
administrator to assign a specific storage policy to S3 buckets, or use the Swift
API to create containers with explicit policy selection.
Supported S3 API operations:

- Bucket operations: CreateBucket, ListBuckets, DeleteBucket, GetBucketLocation
- Object operations: PutObject, GetObject, DeleteObject, HeadObject, ListObjectsV2
- Multipart: CreateMultipartUpload, UploadPart, CompleteMultipartUpload
- ACLs: GetBucketAcl, PutBucketAcl, GetObjectAcl
- Versioning: GetBucketVersioning, PutBucketVersioning, ListObjectVersions
Multi-Cloud Access Patterns
Xloud Object Storage integrates with external cloud object storage providers. Tenant virtual machines can access multiple object storage systems using standard CLI tools and SDKs.
Rclone — universal multi-cloud sync
Rclone provides a unified interface for Xloud Object Storage, AWS S3, Google Cloud Storage, and Azure Blob Storage.
Configure Rclone for Xloud Object Storage
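One way to wire this up is with rclone's Swift backend, assuming Keystone v3 credentials. The remote name "xloud", the auth URL, and the credential values are illustrative.

```ini
# ~/.config/rclone/rclone.conf
[xloud]
type = swift
auth_version = 3
auth = https://connect.<your-domain>:5000/v3
user = myuser
key = mypassword
tenant = myproject
domain = Default
```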
Sync from AWS S3 to Xloud Object Storage
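Assuming remotes named "aws" and "xloud" already exist in rclone.conf, a one-way sync is a single command; bucket and container names are illustrative.

```shell
# Copy a bucket's contents from AWS S3 into an Xloud container
rclone sync aws:source-bucket xloud:target-container --progress
```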
Mount Xloud container as local filesystem
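A container can also be mounted as a directory (requires FUSE on the host); the remote, container, and mount point below are illustrative.

```shell
# Mount the container at /mnt/xloud and detach into the background
rclone mount xloud:my-container /mnt/xloud --daemon
```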
AWS CLI — S3-compatible access
The AWS CLI connects to the Xloud S3-compatible endpoint using standard credentials.
Configure AWS CLI for Xloud S3 endpoint
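A sketch using a dedicated profile; the profile name "xloud" and the credential placeholders are illustrative. Obtain S3-style access/secret keys from your administrator.

```shell
# Store credentials under a named profile
aws configure set aws_access_key_id <access-key> --profile xloud
aws configure set aws_secret_access_key <secret-key> --profile xloud
aws configure set region us-east-1 --profile xloud
```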
List buckets
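With the profile in place, point each command at the Xloud endpoint (URL illustrative):

```shell
aws s3 ls --profile xloud --endpoint-url https://s3.<your-domain>
```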
Sync local directory to Xloud
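Directory sync works the same way; the local path and bucket name are illustrative.

```shell
aws s3 sync ./local-dir s3://my-bucket/ \
  --profile xloud --endpoint-url https://s3.<your-domain>
```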
Python boto3 — programmatic access
boto3 client for Xloud S3-compatible endpoint
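A minimal sketch of a boto3 client against the S3-compatible endpoint. The endpoint URL, credentials, bucket, and key are all illustrative placeholders.

```python
import boto3

# Endpoint and credentials are placeholders; substitute your deployment's values
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.<your-domain>",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    region_name="us-east-1",
)

# Create a bucket (maps to a Swift container using the default storage policy)
s3.create_bucket(Bucket="my-bucket")

# Upload and read back an object
s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
obj = s3.get_object(Bucket="my-bucket", Key="hello.txt")
print(obj["Body"].read())
```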
Glance integration — image backend
Xloud Image Service can store VM images and snapshots directly in Object Storage, eliminating local disk requirements on the image service nodes.
glance-api.conf — Swift backend
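A sketch of the relevant glance_store options, assuming the Keystone v3 reference style. Section names, the service user, container name, and URLs are illustrative.

```ini
# glance-api.conf
[glance_store]
stores = swift,http
default_store = swift
swift_store_container = glance
swift_store_create_container_on_put = True
swift_store_config_file = /etc/glance/swift-store.conf
default_swift_reference = ref1

# /etc/glance/swift-store.conf
[ref1]
auth_version = 3
auth_address = https://connect.<your-domain>:5000/v3
user = service:glance
key = <service-password>
```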
glance-api.conf — S3-compatible backend (Ceph RGW)
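For an S3-compatible backend such as Ceph RGW, a sketch using the glance_store S3 driver; the RGW endpoint, credentials, and bucket name are illustrative.

```ini
# glance-api.conf
[glance_store]
stores = s3,http
default_store = s3
s3_store_host = https://rgw.<your-domain>
s3_store_access_key = <access-key>
s3_store_secret_key = <secret-key>
s3_store_bucket = glance
s3_store_create_bucket_on_put = True
```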
Deprecated Policies
Mark a policy as deprecated to prevent new containers from using it while maintaining full access for existing containers.

Deprecate a policy via XDeploy configuration
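In the policy definition, set the deprecated flag; the policy index and name below are illustrative.

```ini
# swift.conf — policy index 1 ("gold") is illustrative
[storage-policy:1]
name = gold
deprecated = yes
```

Re-apply the configuration with `xavs-ansible deploy -t swift` for the change to take effect. A deprecated policy cannot also be the default.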
Deprecated policies remain fully functional for existing containers. Only new container creation using the deprecated policy is blocked. Migrate data before removing the policy configuration entirely.
Best Practices
Plan Policies Before Deployment
Policy indexes and names are permanent once containers are created. Design your tier
structure (standard, gold, archive) before the first container is provisioned.
Use EC for Archival Data
Erasure coding at 8+4 reduces storage overhead from 3× to 1.5× with equivalent
or better durability. Apply EC to infrequently accessed data that tolerates
higher read latency.
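The overhead arithmetic above can be checked directly: replication overhead equals the replica count, while EC overhead is (data + parity) / data fragments.

```python
def replication_overhead(replicas: int) -> float:
    """Total stored bytes per byte of object data with full replication."""
    return float(replicas)

def ec_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Total stored bytes per byte of object data with erasure coding."""
    return (data_fragments + parity_fragments) / data_fragments

print(replication_overhead(3))  # 3-replica policy -> 3.0x
print(ec_overhead(8, 4))        # 8+4 EC policy   -> 1.5x
```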
Tag Rings by Device Class
Use ring builder device class labels (SSD, HDD) to enforce hardware affinity.
Never mix device classes in a single ring — it defeats the purpose of tiering.
Set Sensible Defaults
Configure the most common data tier as the default policy. Operators who need
high-performance or archival storage explicitly specify the policy at container
creation time.
Next Steps
Ring Management
Build and distribute ring files for each storage policy
Architecture
Understand how rings, proxy servers, and storage nodes interact
Replication
Monitor replication health and manage quarantined objects
Quotas
Set per-account and per-container storage limits