
Overview

Storage policies define how object data is placed, replicated, and protected across storage nodes. Each container is assigned to exactly one policy at creation time — the policy cannot be changed after the container is created. Multiple policies enable tiered storage (standard replication, erasure coding, SSD-backed performance tiers, and archival tiers).
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Admin credentials sourced from admin-openrc.sh
  • Object Storage service deployed and healthy
  • Storage rings built for each policy (see Ring Management)
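The prerequisites above can be spot-checked from a shell before proceeding; a minimal sketch assuming the openstack CLI is installed and admin-openrc.sh sits in the current directory:

```shell
# Source admin credentials
source admin-openrc.sh

# Confirm the Object Storage service is registered and reachable
openstack endpoint list --service object-store

# A successful listing confirms the proxy answers authenticated requests
openstack container list
```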

Policy Types

Xloud Object Storage supports two storage policy types: replication (the default) and erasure coding. Each policy maps to a distinct object ring file and enforces its own data placement and durability behavior.
Replication policies store N identical copies of every object across distinct storage zones. Each replica is fully readable independently — no reconstruction is needed.
Parameter | Typical Value | Description
replica_count | 3 | Number of full copies stored per object
Min drives writable | 2 of 3 | Write quorum for consistency
Read quorum | 1 of 3 | Any single replica can serve a GET
Storage overhead | 3× | Total space = object size × replica count
Recovery time | Fast | Replication copies entire objects
3-replica replication is the recommended default for general-purpose workloads. It tolerates simultaneous loss of any one zone with no data loss and no performance penalty during reads.

Policy Configuration Reference

Storage policies are defined in the Object Storage configuration managed by XDeploy. The configuration is applied during deployment via xavs-ansible deploy -t swift.
swift.conf — storage policy definition
[storage-policy:0]
name = standard
default = yes
aliases = Policy-0, default

[storage-policy:1]
name = gold
aliases = ssd, performance

[storage-policy:2]
name = archive
aliases = ec, cold
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 8
ec_num_parity_fragments = 4
ec_object_segment_size = 1048576
Field | Description
[storage-policy:N] | Section index; must match the ring file suffix (object-N.ring.gz; policy 0 uses object.ring.gz)
name | Human-readable identifier used in container creation
default | yes for the policy applied without explicit selection
aliases | Comma-separated alternate names
policy_type | replication (default) or erasure_coding
ec_type | EC algorithm; required when policy_type = erasure_coding
ec_num_data_fragments | Data shards per EC stripe
ec_num_parity_fragments | Parity shards per EC stripe
deprecated | yes blocks new containers from using this policy
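After deployment, you can confirm the proxy actually advertises the configured policies; a sketch assuming Swift's discoverability endpoint is enabled and jq is installed:

```shell
# Query the proxy's /info endpoint and extract the policy list
curl -s https://object.<your-domain>/info | jq '.swift.policies'
```

Each entry reports the policy name, its aliases, and whether it is the default, which makes this a quick sanity check after editing swift.conf and redeploying.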

View and Manage Policies

Navigate to Object Store

Log in to the Xloud Dashboard (https://connect.<your-domain>) and navigate to Project → Object Store → Containers. The storage policy is shown as a column in the container list and is selectable during container creation.

Create a container with a specific policy

Click Create Container. In the Storage Policy dropdown, select the desired policy (e.g., gold, archive). Leaving the field blank assigns the default policy.
The storage policy field is only visible if multiple policies are configured in your cluster. If only one policy exists, containers are automatically assigned to it.
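The same selection can be made from the CLI; a sketch using the python-swiftclient CLI with credentials already sourced (container and policy names are illustrative):

```shell
# Create a container pinned to the "gold" policy at creation time
swift post my-fast-container --header "X-Storage-Policy: gold"

# Verify the assignment
swift stat my-fast-container | grep -i policy
```

Because the policy is fixed at creation, omitting the header assigns the default policy, matching the dashboard behavior described above.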

S3 API Compatibility

Xloud Object Storage exposes an S3-compatible API endpoint alongside the native Swift API. All storage policies are accessible via both APIs.
The S3-compatible endpoint uses the same underlying storage policies. Buckets created via S3 API map to Swift containers and inherit policy assignment behavior.
Create bucket via S3 API (default policy)
aws s3 mb s3://my-bucket \
  --endpoint-url https://object.<your-domain>

Upload object via S3 API
aws s3 cp myfile.tar.gz s3://my-bucket/ \
  --endpoint-url https://object.<your-domain>
S3 bucket-to-policy mapping is configured by your administrator. Contact your Xloud administrator to assign a specific storage policy to S3 buckets, or use the Swift API to create containers with explicit policy selection.
Supported S3 API operations:
  • Bucket operations: CreateBucket, ListBuckets, DeleteBucket, GetBucketLocation
  • Object operations: PutObject, GetObject, DeleteObject, HeadObject, ListObjectsV2
  • Multipart: CreateMultipartUpload, UploadPart, CompleteMultipartUpload
  • ACLs: GetBucketAcl, PutBucketAcl, GetObjectAcl
  • Versioning: GetBucketVersioning, PutBucketVersioning, ListObjectVersions
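For operations beyond the high-level aws s3 commands, the lower-level aws s3api interface can be pointed at the same endpoint; a sketch exercising the versioning calls listed above (bucket name is illustrative):

```shell
# Enable versioning on a bucket
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://object.<your-domain>

# Confirm the versioning state
aws s3api get-bucket-versioning --bucket my-bucket \
  --endpoint-url https://object.<your-domain>
```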

Multi-Cloud Access Patterns

Xloud Object Storage integrates with external cloud object storage providers. Tenant virtual machines can access multiple object storage systems using standard CLI tools and SDKs.
Rclone provides a unified interface for Xloud Object Storage, AWS S3, Google Cloud Storage, and Azure Blob Storage.
Configure Rclone for Xloud Object Storage
rclone config create xloud-swift swift \
  auth https://identity.<your-domain>/v3 \
  user myuser \
  key mypassword \
  tenant myproject \
  auth_version 3
Sync from AWS S3 to Xloud Object Storage
rclone sync s3:my-aws-bucket xloud-swift:my-local-container
Mount Xloud container as local filesystem
rclone mount xloud-swift:my-container /mnt/xloud-storage \
  --vfs-cache-mode writes &
The AWS CLI connects to the Xloud S3-compatible endpoint using standard credentials.
Configure AWS CLI for Xloud S3 endpoint
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set default.region us-east-1
List buckets
aws s3 ls --endpoint-url https://object.<your-domain>
Sync local directory to Xloud
aws s3 sync ./backup/ s3://my-backup-bucket \
  --endpoint-url https://object.<your-domain>
boto3 client for Xloud S3-compatible endpoint
import boto3

client = boto3.client(
    's3',
    endpoint_url='https://object.<your-domain>',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    region_name='default'
)

# List buckets
response = client.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])

# Upload object
client.upload_file('myfile.tar.gz', 'my-bucket', 'backups/myfile.tar.gz')
Xloud Image Service can store VM images and snapshots directly in Object Storage, eliminating local disk requirements on the image service nodes.
glance-api.conf — Swift backend
[glance_store]
stores = swift
default_store = swift
swift_store_auth_version = 3
swift_store_auth_address = https://identity.<your-domain>/v3
swift_store_container = glance-images
swift_store_create_container_on_put = True
glance-api.conf — S3-compatible backend (Ceph RGW)
[glance_store]
stores = s3
default_store = s3
s3_store_host = https://object.<your-domain>
s3_store_access_key = YOUR_ACCESS_KEY
s3_store_secret_key = YOUR_SECRET_KEY
s3_store_bucket = glance-images
s3_store_create_bucket_on_put = True
Using Object Storage as the Glance backend allows image sharing across all cluster nodes without NFS or shared filesystem dependencies.
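With the Swift backend configured, an image upload can be verified end to end; a sketch assuming admin credentials are sourced and an image file exists locally (file and image names are illustrative):

```shell
# Upload an image through the Image Service
openstack image create --disk-format qcow2 --container-format bare \
  --file ./cirros.qcow2 demo-image

# The image data should now appear as objects in the configured container
swift list glance-images
```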

Deprecated Policies

Mark a policy as deprecated to prevent new containers from using it while maintaining full access for existing containers.
Deprecate a policy via XDeploy configuration
# In the XDeploy Object Storage configuration:
# Set deprecated = yes on the target policy section
# Redeploy: xavs-ansible deploy -t swift
Deprecated policies remain fully functional for existing containers. Only new container creation using the deprecated policy is blocked. Migrate data before removing the policy configuration entirely.
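Concretely, the swift.conf section for a retired tier would gain a single line (a sketch; deprecating the gold policy here is purely illustrative):

```
[storage-policy:1]
name = gold
aliases = ssd, performance
deprecated = yes
```

Note that the default policy cannot be deprecated; promote another policy to default first if needed.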

Best Practices

Plan Policies Before Deployment

Policy indexes and names are permanent once containers are created. Design your tier structure (standard, gold, archive) before the first container is provisioned.

Use EC for Archival Data

Erasure coding at 8+4 reduces storage overhead from 3× to 1.5× with equivalent or better durability. Apply EC to infrequently accessed data that tolerates higher read latency.
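The overhead figures follow directly from the fragment counts; a minimal sketch of the arithmetic:

```python
def storage_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw bytes stored per byte of user data for an EC policy."""
    return (data_fragments + parity_fragments) / data_fragments

# 3-replica replication stores three full copies: 3.0x overhead
replication_overhead = 3.0

# 8+4 erasure coding stores 12 fragments for every 8 data fragments
ec_overhead = storage_overhead(8, 4)

print(f"replication: {replication_overhead:.1f}x, EC 8+4: {ec_overhead:.1f}x")
# An 8+4 stripe survives the loss of any 4 fragments, comparable to
# losing two of three replicas, at half the raw capacity cost
```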

Tag Rings by Device Class

Use ring builder device class labels (SSD, HDD) to enforce hardware affinity. Never mix device classes in a single ring — it defeats the purpose of tiering.

Set Sensible Defaults

Configure the most common data tier as the default policy. Operators who need high-performance or archival storage explicitly specify the policy at container creation time.

Next Steps

Ring Management

Build and distribute ring files for each storage policy

Architecture

Understand how rings, proxy servers, and storage nodes interact

Replication

Monitor replication health and manage quarantined objects

Quotas

Set per-account and per-container storage limits