## Overview
Pools are the logical containers for stored data. Each pool has a defined data protection policy (replication or erasure coding), a device class mapping, and a PG count. Creating pools correctly at the outset avoids disruptive reconfiguration later.

## Prerequisites
- Administrator credentials with the `admin` role
- SSH access to a cluster management node
- CRUSH map configured with appropriate device class rules (see CRUSH Maps)
## Creating Pools
Pools can be created as replicated or erasure-coded. The steps below walk through creating a replicated pool.
### 1. Create the pool

Create a replicated pool. Start with a PG count of 128 initially — the PG autoscaler will adjust automatically as data volume grows.
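A minimal sketch of this step, assuming a pool named `mypool` (the name is illustrative):

```shell
# Create a replicated pool with 128 placement groups (pg_num and pgp_num)
ceph osd pool create mypool 128 128 replicated

# Let the PG autoscaler manage the count from here on
ceph osd pool set mypool pg_autoscale_mode on
```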
### 2. Set the replication factor

`size` is the total number of copies. `min_size` is the minimum needed to serve I/O (allowing degraded operation with 2 copies during OSD failure recovery).
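With a 3-replica layout matching the description above (two surviving copies still serve I/O), and the illustrative pool name `mypool`:

```shell
# Set replication size (factor): total copies of each object
ceph osd pool set mypool size 3

# Minimum copies that must be available to serve I/O
ceph osd pool set mypool min_size 2
```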
### 3. Assign a CRUSH rule

Associate the pool with a CRUSH rule that targets the correct device class.
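A sketch, assuming a rule named `replicated_ssd` for the SSD device class (the rule and pool names are illustrative):

```shell
# List available CRUSH rules
ceph osd crush rule ls

# Set CRUSH rule on pool
ceph osd pool set mypool crush_rule replicated_ssd
```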
Use `ceph osd crush rule ls` to list available rules.

## Managing Existing Pools
Existing pools can be inspected, modified, or deleted.

### Inspect Pools
- List all pools with details
- Show pool statistics
- Show specific pool configuration
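These tasks map to standard Ceph commands; `mypool` is an illustrative name:

```shell
# List all pools with details (size, min_size, crush_rule, autoscale mode)
ceph osd pool ls detail

# Show pool statistics (client and recovery I/O rates)
ceph osd pool stats

# Show per-pool capacity usage
ceph df

# Show specific pool configuration (every setting for one pool)
ceph osd pool get mypool all
```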
## Pool Configuration Reference
| Parameter | Command | Notes |
|---|---|---|
| Replication size | `ceph osd pool set <pool> size <n>` | Number of copies |
| Minimum size | `ceph osd pool set <pool> min_size <n>` | Minimum copies required to serve I/O |
| CRUSH rule | `ceph osd pool set <pool> crush_rule <rule>` | Device class routing |
| PG autoscale | `ceph osd pool set <pool> pg_autoscale_mode on` | Automatic PG sizing |
| Compression | `ceph osd pool set <pool> compression_mode aggressive` | Inline compression |
| Quotas | `ceph osd pool set-quota <pool> max_bytes <bytes>` | Capacity limit |
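Deleting a pool destroys its data, so Ceph disables deletion by default. A sketch of the standard sequence (the pool name `mypool` is illustrative):

```shell
# Allow pool deletion on the monitors (disabled by default)
ceph config set mon mon_allow_pool_delete true

# Delete the pool: the name is given twice, plus an explicit confirmation flag
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it

# Re-disable pool deletion as a safeguard
ceph config set mon mon_allow_pool_delete false
```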
## Next Steps
- **CRUSH Maps**: Configure failure domains and device class rules for pool placement
- **Storage Tiers**: Map pools to Cinder volume types for multi-tier storage
- **Capacity Planning**: Monitor pool utilization and plan expansion
- **Troubleshooting**: Diagnose pool-related issues — PG warnings, capacity alerts