

Overview

Sizing XMS comes down to three questions: how many migrations do you want to run concurrently, how much source churn will warm migrations have to keep up with, and how long is your acceptable cutover window. This page walks through those three questions and gives you a planning framework.

Three Capacity Dimensions

Concurrent Migrations

How many migrations XMS can run at the same time. This is the primary dimension for campaigns with many small VMs.

Sustained Throughput

The aggregate bytes per second that the migration workers can read from the source and write into the target. This drives how long the full sync phase takes.

Cutover Burst

How much short-term capacity is available for the final delta sync and guest conversion during a cutover. This drives how short the cutover window can be.

Concurrent Migrations

Every running migration consumes:
  • One disk transport session against the source
  • One worker slot in the XMS control plane
  • Block storage API throughput to write into the target volume
  • Network bandwidth on the path between XMS and the source
Operators scale concurrent capacity by adding migration workers. The exact ceiling depends on the deployment, but as a rule:
Plan for a ceiling of concurrent migrations, not a ceiling of total migrations. A campaign of 200 VMs can run over several days in waves of 10-20 concurrent migrations and finish comfortably, even on a modest XMS deployment.
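As a quick sketch of that arithmetic, the snippet below estimates how many waves a campaign needs and how long it runs; the VM count, wave size, and per-wave duration are illustrative assumptions, not measured values.

```python
# Hypothetical campaign figures -- substitute your own counts and measured wave times.
total_vms = 200        # VMs in the campaign
wave_size = 15         # concurrent migrations per wave (within the 10-20 guidance)
hours_per_wave = 8     # observed full sync + cutover time for one wave

waves = -(-total_vms // wave_size)      # ceiling division: number of waves
campaign_hours = waves * hours_per_wave

print(f"{waves} waves, roughly {campaign_hours} hours of migration time")
# -> 14 waves, roughly 112 hours spread across several days of change windows
```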

Wave Sizing

| Wave Size | When to Use |
| --- | --- |
| 1-5 VMs | First wave of a campaign, high-risk workloads, or environments with restricted change windows |
| 10-20 VMs | Steady-state campaign waves for typical workloads |
| 20-50 VMs | High-throughput waves for small, similar VMs (for example, Windows workstation-class workloads) |
| 50+ VMs | Only with a dedicated XMS deployment sized for the campaign — talk to Xloud support first |

Full Sync Throughput

The full sync phase reads the entire source disk once and writes it into the target volume. Throughput depends on:
  • Source disk read speed
  • Network path between XMS and the source
  • Target block storage write speed
  • Number of concurrent full syncs competing for the same resources
The bottleneck is almost always the network path between XMS and the source, not the storage on either side. If full syncs are slower than expected, measure the raw path throughput first.
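A minimal way to sanity-check where the ceiling sits is to take the minimum of the three link rates; the figures below are illustrative assumptions, and each should be replaced with a value measured in your own environment (for example, from a raw throughput test over the XMS-to-source path).

```python
# Illustrative per-link rates in MB/s -- replace with measured values.
source_read = 400      # aggregate source disk read speed
network_path = 120     # raw path throughput between XMS and the source
target_write = 600     # aggregate target block storage write speed

bottleneck = min(source_read, network_path, target_write)
print(f"Effective full sync throughput is capped at ~{bottleneck} MB/s")
if bottleneck == network_path:
    print("The network path is the limiting factor, as is typical.")
```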

Example Throughput Planning

For a wave of 10 VMs totaling 2 TB:
| Path Throughput | Expected Full Sync Duration |
| --- | --- |
| 100 MB/s aggregate | ~6 hours |
| 500 MB/s aggregate | ~1 hour 10 minutes |
| 1 GB/s aggregate | ~35 minutes |
Aggregate throughput is the sum across all concurrent migrations, so 10 migrations at 100 MB/s each gives 1 GB/s aggregate.
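The durations in the table follow directly from dividing the wave's total bytes by the aggregate path throughput; the short sketch below reproduces them (decimal units, no protocol overhead, so treat the results as lower bounds).

```python
# Reproduce the table above: a 2 TB wave at several aggregate path throughputs.
total_bytes = 2 * 1000**4          # 2 TB in decimal units

for aggregate_mb_s in (100, 500, 1000):
    seconds = total_bytes / (aggregate_mb_s * 1000**2)
    print(f"{aggregate_mb_s:>4} MB/s aggregate -> ~{seconds / 3600:.1f} hours")
# 100 MB/s -> ~5.6 h, 500 MB/s -> ~1.1 h, 1000 MB/s -> ~0.6 h
```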

Warm Migration Cadence

Warm migrations trade ongoing sync bandwidth for a short cutover window. The cadence you choose directly controls how much data a final delta has to transfer.
| Cadence | Bytes Per Sync (typical) | Good For |
| --- | --- | --- |
| Every 15 minutes | Small, hundreds of MB | Small, steady-state workloads |
| Hourly | Low GB | Medium workloads with predictable churn |
| Daily | Tens of GB | Large workloads or low-churn archives |
Aggressive cadences mean more frequent CBT snapshots on the source. Watch the source host load and the CBT change map size if you see slowdowns on the source side.
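If you know (or can measure) the sustained churn rate on the source disk, the expected delta per sync is simply churn multiplied by cadence; the churn figure below is an illustrative assumption.

```python
# Estimate the delta each warm sync has to move, assuming steady source churn.
churn_mb_per_minute = 20           # illustrative sustained write churn on the source

for label, minutes in [("every 15 min", 15), ("hourly", 60), ("daily", 24 * 60)]:
    delta_gb = churn_mb_per_minute * minutes / 1000
    print(f"{label:>13}: ~{delta_gb:.1f} GB per sync")
# every 15 min: ~0.3 GB, hourly: ~1.2 GB, daily: ~28.8 GB
```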

Cutover Window Planning

Target cutover window breaks down roughly as:
| Phase | Typical Duration |
| --- | --- |
| Final delta sync | Seconds to a few minutes |
| Source power off | Seconds to a minute |
| Guest fixes | 30 seconds to a few minutes |
| Finalize and boot | Under a minute |
For a small, healthy warm migration with low lag, the total cutover window is typically under 5 minutes. For large or churny workloads, plan 10-20 minutes and trigger a manual Sync Now right before cutover to keep the final delta small.
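A simple way to budget the window is to sum per-phase estimates; the durations below are illustrative planning figures matching the typical ranges above, not measured values.

```python
# Rough cutover window budget; every phase duration here is an assumed planning figure.
phases_minutes = {
    "final delta sync": 2.0,
    "source power off": 0.5,
    "guest fixes": 1.5,
    "finalize and boot": 1.0,
}

window = sum(phases_minutes.values())
print(f"Planned cutover window: ~{window:.0f} minutes")   # ~5 minutes
```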

Multi-Wave Campaign Framework

Classify workloads into waves

Group source VMs by risk, churn, and cutover tolerance:
  • Wave 0 — cold-migration lab VMs, used to validate the pipeline
  • Wave 1 — cold migrations for workloads that tolerate downtime
  • Wave 2 — warm migrations for production workloads with hourly sync
  • Wave 3 — warm migrations for the highest-churn workloads with 15-minute sync

Size XMS for the largest concurrent wave

Count the largest number of migrations you want to run concurrently across all waves that overlap in time. Size workers and network path for that number.
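One way to find that number is to lay the waves out on a shared timeline and take the peak overlap; the schedule below is hypothetical and only illustrates the calculation.

```python
# Peak concurrency across overlapping waves: (start hour, end hour, concurrent migrations).
waves = [
    (0, 8, 5),      # Wave 0: lab validation
    (8, 24, 15),    # Wave 1: cold migrations
    (20, 48, 15),   # Wave 2: warm migrations, overlapping the end of Wave 1
]

peak = max(
    sum(size for start, end, size in waves if start <= hour < end)
    for hour in range(max(end for _, end, _ in waves))
)
print(f"Size XMS for {peak} concurrent migrations")   # 30 during the Wave 1/2 overlap
```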

Validate on Wave 0 before scaling

Run Wave 0 end-to-end and measure real throughput, real cutover window, and real post-migration success rate. Adjust wave sizing before Wave 1 begins.

Schedule waves around source and target capacity

Warm migrations occupy their target volume footprint for the full sync window plus the cutover wait. Schedule overlapping waves so the cumulative target footprint fits within the project quota.
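A back-of-the-envelope quota check can be as simple as summing the footprints of every wave that is in flight at the same time; the quota and per-wave footprints below are illustrative assumptions.

```python
# Check that simultaneously in-flight waves fit the target project's block storage quota.
project_quota_gb = 10_000
in_flight_wave_footprints_gb = [2_000, 3_500, 3_000]   # target volumes held at once

needed = sum(in_flight_wave_footprints_gb)
print(f"Peak target footprint: {needed} GB of a {project_quota_gb} GB quota")
if needed > project_quota_gb:
    print("Re-stagger the waves or request a quota increase before starting.")
```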

Monitor and retune between waves

Use the Migration panel and platform monitoring to review each wave's metrics before the next wave begins. Retune cadence, concurrency, and the network path if any dimension is bottlenecked.

Next Steps

Architecture

How XMS components are structured

Storage Back-ends

How XMS writes migrated data into block storage

Troubleshooting

Operator-side diagnostics and recovery