Overview
Xloud Compute uses a native hypervisor as its default virtualization layer, managed through the libvirt driver. The native hypervisor provides hardware-assisted virtualization on x86, ARM, and POWER architectures, delivering near-bare-metal performance for virtual machine workloads. The libvirt driver is responsible for translating Xloud Compute API requests into hypervisor operations on each compute node. Understanding the hypervisor configuration options enables administrators to tune workload placement, CPU topology, storage performance, and security posture.

Prerequisites
- Administrator access to the Xloud platform and XDeploy
- Compute nodes running XOS with `libvirt` and `qemu-kvm` installed
- Verify hardware virtualization support:

```shell
grep -o 'vmx\|svm' /proc/cpuinfo | head -1
```
Supported Image Formats
The following disk image formats are supported by the native hypervisor driver. The format is detected automatically from the image metadata stored in the Xloud Image Service.

| Format | Name | Description | Recommended Use |
|---|---|---|---|
| `raw` | Raw disk image | Flat binary representation of disk contents. No overhead from format features. | Maximum I/O performance, RBD-backed volumes |
| `qcow2` | QEMU Copy-on-Write v2 | Supports snapshots, compression, and copy-on-write. More flexible than raw. | Local storage, development environments |
| `qed` | QEMU Enhanced Disk | Optimized for sparse images with faster lookup tables than qcow2. | Legacy workloads; qcow2 is preferred for new deployments |
| `vmdk` | VMware Disk | VMware-compatible format. Supported for import/migration scenarios. | VM migrations from VMware environments |
Hardware Requirements
- x86 / x86_64
- ARM / AArch64
- POWER / ppc64le
x86 is the primary supported architecture for Xloud Compute.

CPU Requirements
- Intel VT-x (`vmx` flag in `/proc/cpuinfo`) or AMD-V (`svm` flag)
- For optimal performance: Intel VT-d or AMD-Vi (IOMMU) for PCI passthrough
Check virtualization support
Check IOMMU (for PCI passthrough)
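Both checks above can be run from a shell on each compute node. This is a minimal sketch assuming a Linux host; reading the kernel log with `dmesg` may require root:

```shell
# Look for the CPU virtualization flag: vmx (Intel VT-x) or svm (AMD-V)
virt_flag=$(grep -o -m1 'vmx\|svm' /proc/cpuinfo || true)
echo "virtualization flag: ${virt_flag:-none}"

# Count IOMMU-related kernel messages (DMAR = Intel VT-d, AMD-Vi = AMD);
# a nonzero count suggests the IOMMU is enabled in firmware and the kernel
iommu_msgs=$(dmesg 2>/dev/null | grep -i -c -e DMAR -e IOMMU || true)
echo "IOMMU kernel messages: ${iommu_msgs:-0}"
```

An empty virtualization flag or a zero IOMMU count points to the BIOS/UEFI settings below.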
BIOS/UEFI Settings

- Enable Intel VT-x / AMD-V in BIOS
- Enable Intel VT-d / AMD-Vi if PCI passthrough is required
- Enable Hyper-Threading for improved vCPU density (optional)
Both the `vmx`/`svm` flag and IOMMU should be present for full feature support.

Backing Storage Options
The hypervisor driver supports multiple backing storage configurations for instance disks. The backing storage determines how ephemeral instance disks are stored on compute nodes.

| Storage Type | Description | Pros | Cons |
|---|---|---|---|
| QCOW (local) | qcow2 files on compute node local storage | Simple setup, copy-on-write snapshots | No live migration, no fault tolerance |
| Flat (raw local) | Raw image files on compute node local storage | Maximum local I/O performance | No live migration, no fault tolerance |
| LVM | Logical volumes on compute node volume groups | Better I/O than file-backed, thin provisioning | Complex setup, no live migration |
| RBD (XSDS) | Distributed block device via network | Live migration, fault tolerance, snapshots | Requires XSDS distributed storage cluster |
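As an illustration, on XSDS (RBD) deployments the instance disk backend is typically selected through the compute driver's `images_type` option. The snippet below is a sketch that assumes the driver follows the upstream libvirt option names; verify the exact section and key against your deployment's reference before applying it:

```ini
# Hypothetical compute-node configuration fragment (illustrative only).
# images_type accepts raw | qcow2 | lvm | rbd | flat in the upstream driver.
[libvirt]
images_type = rbd
```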
CPU Configuration Modes
The CPU mode controls how CPU features and topology are presented to virtual machines. This affects live migration compatibility and performance.

| CPU Mode | Description | Live Migration | Performance | Use Case |
|---|---|---|---|---|
| `host-passthrough` | Exposes exact host CPU model and all features | Requires identical CPUs | Best | Homogeneous clusters, bare-metal benchmarks |
| `host-model` | Snapshots the host CPU model at VM launch | Restricted to similar CPUs | Near-native | Clusters with similar CPU generations |
| `custom` | Specifies an explicit baseline CPU model | Cross-generation compatible | Reduced | Mixed-CPU clusters, live migration across generations |
| `none` | QEMU default, minimal feature set | Compatible | Lowest | Legacy compatibility only; not recommended |
Configuring CPU Mode
CPU mode is set in the Nova compute configuration, which XDeploy manages via `globals.d` overrides:
/etc/xavs/globals.d/_50_compute.yml
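A sketch of what such an override might contain. The variable names below are illustrative assumptions, not confirmed XDeploy keys; the CPU model must be one that QEMU recognizes (e.g., `Skylake-Server`):

```yaml
# Illustrative override -- confirm the exact variable names in the
# XDeploy reference before use.
nova_libvirt_cpu_mode: "custom"
nova_libvirt_cpu_model: "Skylake-Server"
```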
Apply compute configuration
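Changes take effect after redeploying the compute service; the Troubleshooting section uses the same command:

```shell
# Re-run the compute deployment tasks on the affected nodes
xavs-ansible deploy -t nova
```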
Nested Virtualization
Nested virtualization allows virtual machines to run their own hypervisors (e.g., for CI/CD pipelines, hypervisor testing, or running Kubernetes with virtualization-backed nodes).

Enabling Nested Virtualization
Verify nested virtualization is active
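On Intel hosts the check reads the `kvm_intel` module's `nested` parameter (use `kvm_amd` on AMD hosts):

```shell
# Prints Y (or 1 on older kernels) when nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested
```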
The output should be `Y` or `1`, confirming nested virtualization is enabled.

Performance Tuning
VHostNet
VHostNet offloads virtio-net packet processing from QEMU user space to the kernel, significantly reducing CPU overhead for network-intensive workloads.

Verify VHostNet is loaded
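The module check on a Linux compute node, assuming root access for the fallback load:

```shell
# List the vhost_net module if loaded; load it manually if absent
lsmod | grep vhost_net || sudo modprobe vhost_net
```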
CPU Pinning
For latency-sensitive workloads, pin instance vCPUs to dedicated physical cores to eliminate CPU scheduler jitter. See the Advanced Features guide for CPU pinning configuration.

Huge Pages
Configure huge page memory backing for memory-intensive or NUMA-sensitive workloads. Huge pages reduce TLB pressure and improve memory throughput. See the Advanced Features guide for huge page setup.

Capabilities
Advanced Features
CPU pinning, huge pages, NUMA topology, GPU passthrough, and SR-IOV configuration
Live Migration
Configure and execute live migrations between compute nodes
Compute Architecture
Understand the full Xloud Compute architecture and service components
Security Hardening
Hypervisor-level security configuration and CIS compliance hardening
Troubleshooting
VMs fail to start: hypervisor not available
Cause: Hardware virtualization is disabled in BIOS or the hypervisor kernel module is not loaded.

Resolution: If the module fails to load, enable VT-x/AMD-V in the server BIOS and reboot.
Check hypervisor module status
Load hypervisor modules manually
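On an Intel x86 node these two steps look like the following (use `kvm_amd` on AMD hosts):

```shell
# Check hypervisor module status
lsmod | grep kvm

# Load hypervisor modules manually (requires root)
sudo modprobe kvm
sudo modprobe kvm_intel
```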
Live migration fails: 'guest CPU doesn't match specification'
Cause: The source and destination compute nodes have incompatible CPU models. This commonly occurs in mixed-CPU clusters when `host-model` or `host-passthrough` is used.

Resolution: Switch to `custom` CPU mode with a common baseline:

globals.d fix
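A sketch of such a fix; the variable names are illustrative assumptions, and the model chosen must be a baseline supported by every node in the cluster:

```yaml
# Illustrative override -- confirm the exact variable names in the
# XDeploy reference before use.
nova_libvirt_cpu_mode: "custom"
nova_libvirt_cpu_model: "Broadwell"
```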
Apply with `xavs-ansible deploy -t nova` on all nodes, then retry the migration.
Poor disk I/O performance on RBD-backed instances
Cause: `qcow2` format images are being used with XSDS distributed storage, adding unnecessary copy-on-write overhead.

Resolution: Use `raw` format for all images on XSDS-backed deployments. Convert existing images:

Convert qcow2 to raw
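The conversion uses the standard `qemu-img` tool, which ships with `qemu-kvm`; the filenames here are placeholders:

```shell
# Convert a qcow2 image to raw format
qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
```

Re-upload the converted image to the Xloud Image Service with the `raw` disk format so new instances use it.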
Nested VMs fail to start inside a guest
Cause: The host compute node does not have nested virtualization enabled, or the CPU mode does not expose virtualization feature flags to the guest.

Resolution:

- Verify nested support: `cat /sys/module/kvm_intel/parameters/nested`
- Confirm the instance is using a flavor with `hw:cpu_mode=host-passthrough` or equivalent
- Verify the guest OS can see the virtualization flag: `grep -c vmx /proc/cpuinfo` from inside the VM
libvirtd not running on compute node
Cause: The `libvirtd` service failed to start or crashed. Common causes include AppArmor policy conflicts and missing QEMU binaries.

Resolution: Review the journal output for the specific error and consult the Xloud support portal.

Check libvirtd status
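On systemd-based nodes the status check and journal review look like this:

```shell
# Check libvirtd status and the most recent journal output
systemctl status libvirtd
journalctl -u libvirtd --no-pager -n 50
```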
Heterogeneous Hardware Support
Xloud-Developed — Heterogeneous hardware support is a core capability of XAVS / XPCI.
Mixed Node Types
Run converged (compute+storage), compute-only, and storage-heavy nodes in the same cluster. Each node contributes its resources to the shared pool.
Mixed CPU Generations
Intel and AMD processors of different generations coexist. CPU feature masking ensures live migration compatibility across generations. See CPU Feature Masking.
Mixed Storage Media
NVMe, SSD, and HDD drives in the same cluster. CRUSH device classes auto-detect media type per device and route data to the correct tier. See Storage Tiers.
Varying RAM Capacities
Nodes with different RAM sizes — no configuration needed. The scheduler tracks per-host capacity independently and places instances on hosts with sufficient resources.
Next Steps
Advanced Features
Configure CPU pinning, huge pages, and GPU passthrough for specialized workloads
Live Migration
Set up and execute live migrations across compute nodes
Compute Admin Guide
Full administrator reference for the Xloud Compute service
XAVS Product
Learn about the full XAVS Advanced Virtualization Suite