Overview
Sizing XMS comes down to three questions: how many migrations do you want to run concurrently, how much source churn will warm migrations have to keep up with, and how long is your acceptable cutover window. This page walks through those three questions and gives you a planning framework.
Three Capacity Dimensions
Concurrent Migrations
How many migrations XMS can run at the same time. This is the primary
dimension for campaigns with many small VMs.
Sustained Throughput
The aggregate rate, in bytes per second, at which the migration workers can
read from the source and write into the target. This drives how long the
full sync phase takes.
Cutover Burst
How much short-term capacity is available for the final delta sync and
guest conversion during a cutover. This drives how short the cutover
window can be.
Concurrent Migrations
Every running migration consumes the following resources (the sketch after this list turns them into a wave-size bound):
- One disk transport session against the source
- One worker slot in the XMS control plane
- Block storage API throughput to write into the target volume
- Network bandwidth on the path between XMS and the source
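To turn those shared resources into a wave-size bound, here is a minimal Python sketch. The `DeploymentLimits` fields, their values, and the per-VM bandwidth figure are illustrative assumptions, not XMS configuration options; substitute limits measured in your own environment.

```python
from dataclasses import dataclass

@dataclass
class DeploymentLimits:
    # Illustrative capacity limits; none of these names are XMS settings.
    worker_slots: int            # worker slots in the XMS control plane
    transport_sessions: int      # disk transport sessions the source allows
    path_bandwidth_mb_s: float   # usable XMS-to-source bandwidth, MB/s

def max_concurrent_migrations(limits: DeploymentLimits,
                              per_vm_bandwidth_mb_s: float) -> int:
    """A wave can only be as large as the scarcest shared resource allows."""
    by_bandwidth = int(limits.path_bandwidth_mb_s / per_vm_bandwidth_mb_s)
    return min(limits.worker_slots, limits.transport_sessions, by_bandwidth)

limits = DeploymentLimits(worker_slots=20, transport_sessions=16,
                          path_bandwidth_mb_s=1000)
print(max_concurrent_migrations(limits, per_vm_bandwidth_mb_s=100))  # -> 10
```

In this example bandwidth, not worker slots, is the binding constraint, which is the common case for full sync waves.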
Wave Sizing
| Wave Size | When to Use |
|---|---|
| 1-5 VMs | First wave of a campaign, high-risk workloads, or environments with restricted change windows |
| 10-20 VMs | Steady-state campaign waves for typical workloads |
| 20-50 VMs | High-throughput waves for small, similar VMs (for example, Windows workstation-class workloads) |
| 50+ VMs | Only with a dedicated XMS deployment sized for the campaign — talk to Xloud support first |
Full Sync Throughput
The full sync phase reads the entire source disk once and writes it into the target volume. Throughput depends on:
- Source disk read speed
- Network path between XMS and the source
- Target block storage write speed
- Number of concurrent full syncs competing for the same resources
Example Throughput Planning
For a wave of 10 VMs totaling 2 TB:

| Path Throughput | Expected Full Sync Duration |
|---|---|
| 100 MB/s aggregate | ~6 hours |
| 500 MB/s aggregate | ~1 hour 10 minutes |
| 1 GB/s aggregate | ~35 minutes |
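The durations above are straight division: total bytes over aggregate path throughput. Here is a sketch of the arithmetic, using decimal units (1 TB = 1,000,000 MB) to match the table; the helper is ours, not part of XMS:

```python
def full_sync_hours(total_tb: float, aggregate_mb_s: float) -> float:
    """Full sync duration estimate: total bytes / aggregate throughput."""
    seconds = (total_tb * 1_000_000) / aggregate_mb_s
    return seconds / 3600

for mb_s in (100, 500, 1000):
    print(f"{mb_s:>5} MB/s -> {full_sync_hours(2.0, mb_s):.2f} h")
# 100 MB/s -> 5.56 h, 500 MB/s -> 1.11 h, 1000 MB/s -> 0.56 h
```

Keep in mind that the table assumes the aggregate throughput is actually sustained; concurrent full syncs competing for the same disk or network path will land below the nominal number.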
Warm Migration Cadence
Warm migrations trade ongoing sync bandwidth for a short cutover window. The cadence you choose directly controls how much data the final delta sync has to transfer.

| Cadence | Typical Delta Per Sync | Good For |
|---|---|---|
| Every 15 minutes | Hundreds of MB | Small, steady-state workloads |
| Hourly | A few GB | Medium workloads with predictable churn |
| Daily | Tens of GB | Large workloads or low-churn archives |
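A cadence's typical delta is roughly the source churn rate multiplied by the sync interval, and the final delta transfer time follows from the path throughput. Below is a sketch with made-up churn figures; measure real churn on your workloads before committing to a cadence:

```python
def final_delta_minutes(churn_mb_per_min: float, cadence_min: float,
                        path_mb_s: float) -> float:
    """Worst case: one full cadence interval of churn crosses the path."""
    delta_mb = churn_mb_per_min * cadence_min
    return delta_mb / path_mb_s / 60

# 20 MB/min of churn synced every 15 minutes leaves ~300 MB for the
# final delta: ~3 seconds over a 100 MB/s path.
print(f"{final_delta_minutes(20, 15, 100):.2f} min")       # 0.05
# The same churn synced daily leaves ~28.8 GB: ~4.8 minutes.
print(f"{final_delta_minutes(20, 24 * 60, 100):.2f} min")  # 4.80
```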
Cutover Window Planning
Your target cutover window breaks down roughly as:

| Phase | Typical Duration |
|---|---|
| Final delta sync | Seconds to a few minutes |
| Source power off | Seconds to a minute |
| Guest fixes | 30 seconds to a few minutes |
| Finalize and boot | Under a minute |
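To budget a window, add a pessimistic estimate for each phase. A minimal sketch; the per-phase figures below are placeholders, to be replaced with durations measured during your Wave 0 runs:

```python
# Placeholder worst-case estimates in seconds, not XMS defaults.
cutover_phases = {
    "final delta sync": 180,
    "source power off": 60,
    "guest fixes": 180,
    "finalize and boot": 60,
}

window_s = sum(cutover_phases.values())
print(f"Budget at least {window_s / 60:.0f} minutes per cutover")  # 8 minutes
```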
Multi-Wave Campaign Framework
Classify workloads into waves
Group source VMs by risk, churn, and cutover tolerance:
- Wave 0 — cold-migration lab VMs, used to validate the pipeline
- Wave 1 — cold migrations for workloads that tolerate downtime
- Wave 2 — warm migrations for production workloads with hourly sync
- Wave 3 — warm migrations for the highest-churn workloads with 15-minute sync
Size XMS for the largest concurrent wave
Count the largest number of migrations you want to run concurrently
across all waves that overlap in time. Size workers and network path
for that number.
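One way to find that peak is a start/end event sweep over the planned wave schedule, as in the sketch below. The wave dates and concurrency counts are hypothetical:

```python
# Each wave is (start_day, end_day, concurrent_migrations); values only
# illustrate the overlap arithmetic.
waves = [
    (0, 3, 5),     # Wave 0: lab validation
    (3, 10, 15),   # Wave 1: cold migrations
    (7, 20, 15),   # Wave 2: warm, hourly sync (overlaps Wave 1 on days 7-10)
    (18, 30, 10),  # Wave 3: warm, 15-minute sync
]

events = []
for start, end, count in waves:
    events.append((start, count))   # wave begins: concurrency goes up
    events.append((end, -count))    # wave ends: concurrency goes down
events.sort()

peak = current = 0
for _, delta in events:
    current += delta
    peak = max(peak, current)

print(f"Size XMS for {peak} concurrent migrations")  # 30 (Waves 1+2 overlap)
```

Because ties sort the negative delta first, a wave ending on the same day another begins releases its capacity before the new wave is counted; tighten that if your schedule truly overlaps on handover days.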
Validate on Wave 0 before scaling
Run Wave 0 end-to-end and measure real throughput, real cutover window,
and real post-migration success rate. Adjust wave sizing before
Wave 1 begins.
Schedule waves around source and target capacity
Warm migrations occupy target volume footprint for the full sync window
plus the cutover wait. Schedule overlapping waves so the cumulative
target footprint fits the project quota.
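The same event sweep works for storage. Treat each warm migration's target volume as occupied from the start of its full sync until its cutover completes, and compare the peak against the project quota; all figures below are hypothetical:

```python
# (occupy_start_day, release_day, footprint_gb) per wave; a warm migration's
# target volume stays occupied from full sync start until cutover completes.
wave_footprints = [
    (3, 12, 2000),   # Wave 1 volumes, released after cutover
    (7, 22, 4000),   # Wave 2 volumes
    (18, 32, 3000),  # Wave 3 volumes
]
PROJECT_QUOTA_GB = 8000

events = sorted([(s, gb) for s, _, gb in wave_footprints] +
                [(e, -gb) for _, e, gb in wave_footprints])
peak = current = 0
for _, delta in events:
    current += delta
    peak = max(peak, current)

assert peak <= PROJECT_QUOTA_GB, f"peak footprint {peak} GB exceeds quota"
print(f"Peak target footprint: {peak} GB of {PROJECT_QUOTA_GB} GB quota")
```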
Next Steps
Architecture
How XMS components are structured
Storage Back-ends
How XMS writes migrated data into block storage
Troubleshooting
Operator-side diagnostics and recovery