
Overview

Pools are the logical containers for stored data. Each pool has a defined data protection policy (replication or erasure coding), a device class mapping, and a PG count. Creating pools correctly at the outset avoids disruptive reconfiguration later.
Administrator Access Required: This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Administrator credentials with the admin role
  • SSH access to a cluster management node
  • CRUSH map configured with appropriate device class rules (see CRUSH Maps)

Creating Pools

Create the pool

Create a replicated pool
ceph osd pool create <POOL_NAME> <PG_COUNT> <PG_COUNT> replicated
For most pools, start with a PG count of 128; the PG autoscaler will adjust it automatically as data volume grows.
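If you prefer to size PGs manually rather than rely on the autoscaler, a common community heuristic targets roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two. The sketch below is illustrative only; the OSD count is an assumed example value, not taken from this document.

```shell
# Heuristic PG-count estimate (sketch; values are illustrative).
osds=12        # assumed number of OSDs backing this pool
replicas=3     # pool "size" (replication factor)
target=$(( osds * 100 / replicas ))   # ~100 PGs per OSD, shared across replicas

# Round up to the next power of two, as PG counts conventionally are.
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"
```

With 12 OSDs and 3 replicas this yields 512; for small clusters the default of 128 is usually a safe starting point.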

Set replication factor

Set replication size (factor)
ceph osd pool set <POOL_NAME> size 3
ceph osd pool set <POOL_NAME> min_size 2
size is the total number of data copies; min_size is the minimum number of copies required to serve I/O. This allows degraded operation (for example, with 2 of 3 copies available) while OSD failure recovery is in progress.
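The rule above can be illustrated with simple arithmetic (this is not a ceph command, just a sketch of the availability condition): I/O continues as long as the number of surviving copies is at least min_size.

```shell
# Illustrative availability check for size=3, min_size=2 with one failed OSD.
size=3
min_size=2
failed=1
surviving=$(( size - failed ))

if [ "$surviving" -ge "$min_size" ]; then
  echo "I/O continues (degraded)"
else
  echo "I/O blocked until recovery"
fi
```

With two simultaneous failures (surviving=1 < min_size=2), the pool blocks writes until recovery restores a second copy.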

Assign a CRUSH rule

Associate the pool with a CRUSH rule that targets the correct device class:
Set CRUSH rule on pool
ceph osd pool set <POOL_NAME> crush_rule replicated_rule_ssd
Use ceph osd crush rule ls to list available rules.

Enable pool application

Tag the pool with its application type so the cluster knows how to manage it:
Enable RBD (block storage) on pool
ceph osd pool application enable <POOL_NAME> rbd
Valid application types: rbd (block), rgw (object), cephfs (file).
Verification: the pool should now appear in ceph osd pool ls detail with the correct application and replication settings.
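Putting the steps above together, a complete run for a hypothetical pool looks like the following. The pool name vms-ssd is an illustrative assumption; substitute your own name and the CRUSH rule that matches your device class.

```shell
# End-to-end sketch: create and configure a hypothetical pool "vms-ssd".
ceph osd pool create vms-ssd 128 128 replicated
ceph osd pool set vms-ssd size 3
ceph osd pool set vms-ssd min_size 2
ceph osd pool set vms-ssd crush_rule replicated_rule_ssd
ceph osd pool application enable vms-ssd rbd

# Confirm the result.
ceph osd pool ls detail | grep vms-ssd
```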

Managing Existing Pools

List all pools with details
ceph osd pool ls detail
Show pool statistics
ceph df detail
Show specific pool configuration
ceph osd pool get <POOL_NAME> all

Pool Configuration Reference

Parameter        | Command                                               | Notes
Replication size | ceph osd pool set <pool> size <n>                     | Number of copies
Minimum size     | ceph osd pool set <pool> min_size <n>                 | Min copies for I/O
CRUSH rule       | ceph osd pool set <pool> crush_rule <rule>            | Device class routing
PG autoscale     | ceph osd pool set <pool> pg_autoscale_mode on         | Automatic PG sizing
Compression      | ceph osd pool set <pool> compression_mode aggressive  | Inline compression
Quotas           | ceph osd pool set-quota <pool> max_bytes <bytes>      | Capacity limit
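Note that max_bytes takes a raw byte count. The snippet below computes the value for an assumed 10 TiB quota; the pool name and quota size are illustrative.

```shell
# Compute the byte value for a 10 TiB quota (illustrative values).
tib=10
bytes=$(( tib * 1024 * 1024 * 1024 * 1024 ))
echo "$bytes"

# Then apply it, e.g.:
#   ceph osd pool set-quota <POOL_NAME> max_bytes $bytes
```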

Next Steps

CRUSH Maps

Configure failure domains and device class rules for pool placement

Storage Tiers

Map pools to Cinder volume types for multi-tier storage

Capacity Planning

Monitor pool utilization and plan expansion

Troubleshooting

Diagnose pool-related issues such as PG warnings and capacity alerts