
Overview

Xloud Compute uses a native hypervisor as its default virtualization layer, managed through the libvirt driver. The native hypervisor provides hardware-assisted virtualization on x86, ARM, and POWER architectures, delivering near-bare-metal performance for virtual machine workloads. The libvirt driver is responsible for translating Xloud Compute API requests into hypervisor operations on each compute node. Understanding the hypervisor configuration options enables administrators to tune workload placement, CPU topology, storage performance, and security posture.
Prerequisites
  • Administrator access to the Xloud platform and XDeploy
  • Compute nodes running XOS with libvirt and qemu-kvm installed
  • Verify hardware virtualization support: grep -o 'vmx\|svm' /proc/cpuinfo | head -1

Supported Image Formats

The following disk image formats are supported by the native hypervisor driver. The format is detected automatically from the image metadata stored in the Xloud Image Service.
| Format | Name | Description | Recommended Use |
| --- | --- | --- | --- |
| raw | Raw disk image | Flat binary representation of disk contents. No overhead from format features. | Maximum I/O performance, RBD-backed volumes |
| qcow2 | QEMU Copy-on-Write v2 | Supports snapshots, compression, and copy-on-write. More flexible than raw. | Local storage, development environments |
| qed | QEMU Enhanced Disk | Optimized for sparse images with faster lookup tables than qcow2. | Legacy workloads; qcow2 is preferred for new deployments |
| vmdk | VMware Disk | VMware-compatible format. Supported for import/migration scenarios. | VM migrations from VMware environments |
For production deployments using XSDS (distributed storage) as the storage backend, use raw format images. The distributed storage layer handles copy-on-write natively, making qcow2 overhead unnecessary and counterproductive.
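Before uploading, it can be useful to confirm whether a local image file is actually raw or qcow2. The sketch below (an illustration, not part of the Xloud tooling) relies on the fact that every qcow2 file begins with the 4-byte magic "QFI\xfb"; where qemu-img is installed, `qemu-img info` gives the authoritative answer.

```shell
# Sketch: distinguish qcow2 from raw by checking the qcow2 magic bytes.
# Uses only coreutils; qemu-img info is the authoritative check where available.
detect_format() {
  if [ "$(head -c 3 "$1")" = "QFI" ]; then
    echo qcow2
  else
    echo raw
  fi
}

# Demonstration with synthetic file headers (\373 is octal for 0xfb):
printf 'QFI\373' > /tmp/demo.qcow2
printf 'RAWDATA' > /tmp/demo.raw
detect_format /tmp/demo.qcow2   # -> qcow2
detect_format /tmp/demo.raw     # -> raw
```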

Hardware Requirements

x86 is the primary supported architecture for Xloud Compute.
CPU Requirements
  • Intel VT-x (vmx flag in /proc/cpuinfo) or AMD-V (svm flag)
  • For optimal performance: Intel VT-d or AMD-Vi (IOMMU) for PCI passthrough
Verification
Check virtualization support
grep -oE 'vmx|svm' /proc/cpuinfo | sort -u
Check IOMMU (for PCI passthrough)
dmesg | grep -i iommu
BIOS/UEFI Settings
  • Enable Intel VT-x / AMD-V in BIOS
  • Enable Intel VT-d / AMD-Vi if PCI passthrough is required
  • Enable Hyper-Threading for improved vCPU density (optional)
Both a virtualization flag (vmx or svm) and IOMMU support should be present for full feature support.
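The verification steps above can be combined into a single preflight check. This is a sketch assuming a Linux host where /proc/cpuinfo is readable and dmesg is accessible (dmesg may require root on hardened kernels):

```shell
# Preflight sketch: report CPU virtualization and IOMMU status on this host.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "CPU virtualization: supported ($(grep -oE 'vmx|svm' /proc/cpuinfo | sort -u))"
else
  echo "CPU virtualization: NOT detected (enable VT-x/AMD-V in BIOS)"
fi

if dmesg 2>/dev/null | grep -qi iommu; then
  echo "IOMMU: kernel messages present (PCI passthrough likely available)"
else
  echo "IOMMU: no kernel messages (enable VT-d/AMD-Vi if passthrough is needed)"
fi
```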

Backing Storage Options

The hypervisor driver supports multiple backing storage configurations for instance disks. The backing storage determines how ephemeral instance disks are stored on compute nodes.
| Storage Type | Description | Pros | Cons |
| --- | --- | --- | --- |
| QCOW (local) | qcow2 files on compute node local storage | Simple setup, copy-on-write snapshots | No live migration, no fault tolerance |
| Flat (raw local) | Raw image files on compute node local storage | Maximum local I/O performance | No live migration, no fault tolerance |
| LVM | Logical volumes on compute node volume groups | Better I/O than file-backed, thin provisioning | Complex setup, no live migration |
| RBD (XSDS) | Distributed block device via network | Live migration, fault tolerance, snapshots | Requires XSDS distributed storage cluster |
Local storage backends (QCOW, Flat, LVM) do not support live migration. Use RBD-backed storage when live migration between compute nodes is required.
The storage backend is configured per compute node in the Nova configuration, managed by XDeploy via the xavs-ansible deployment playbooks.
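As a sketch, such a per-node override might look like the following. The variable name nova_instance_storage_backend is illustrative only; the actual key is defined by the xavs-ansible playbook defaults and should be checked there:

```yaml
# /etc/xavs/globals.d/_50_compute.yml
# Hypothetical variable name -- consult the xavs-ansible playbook defaults for the real key.
nova_instance_storage_backend: "rbd"
```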

CPU Configuration Modes

The CPU mode controls how CPU features and topology are presented to virtual machines. This affects live migration compatibility and performance.
| CPU Mode | Description | Live Migration | Performance | Use Case |
| --- | --- | --- | --- | --- |
| host-passthrough | Exposes exact host CPU model and all features | Requires identical CPUs | Best | Homogeneous clusters, bare-metal benchmarks |
| host-model | Snapshots the host CPU model at VM launch | Restricted to similar CPUs | Near-native | Clusters with similar CPU generations |
| custom | Specifies an explicit baseline CPU model | Cross-generation compatible | Reduced | Mixed-CPU clusters, live migration across generations |
| none | QEMU default; minimal feature set | Compatible | Lowest | Legacy compatibility only; not recommended |
For mixed-CPU clusters (e.g., nodes with Intel Ice Lake and Cascade Lake CPUs), use cpu_mode = custom with a common baseline model such as Cascadelake-Server-noTSX. This ensures live migration succeeds across all nodes in the cluster.

Configuring CPU Mode

CPU mode is set in the Nova compute configuration, which XDeploy manages via globals.d overrides:
/etc/xavs/globals.d/_50_compute.yml
nova_cpu_mode: "custom"
nova_cpu_model: "Cascadelake-Server-noTSX"
Apply the change with:
Apply compute configuration
xavs-ansible deploy -t nova
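With these settings applied, the libvirt driver emits a domain CPU definition along the following lines. This is illustrative only; attributes such as match and fallback depend on driver defaults:

```xml
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Cascadelake-Server-noTSX</model>
</cpu>
```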

Nested Virtualization

Nested virtualization allows virtual machines to run their own hypervisors (e.g., for CI/CD pipelines, hypervisor testing, or running Kubernetes with virtualization-backed nodes).

Enabling Nested Virtualization

Enable on the host kernel

Load the hypervisor module with nested support enabled:
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
On AMD hosts, use kvm_amd in place of kvm_intel (options kvm_amd nested=1).

Verify nested support is active

Verify nested virtualization
cat /sys/module/kvm_intel/parameters/nested   # Should return Y or 1
cat /sys/module/kvm_amd/parameters/nested     # AMD alternative
Output should be Y or 1 confirming nested virtualization is enabled.

Set CPU mode to host-passthrough

Nested VMs require the host CPU feature flags to be visible inside the VM. Set cpu_mode = host-passthrough in the Nova configuration, or use host-model if cross-node migration is needed.
globals.d override for nested virtualization
nova_cpu_mode: "host-passthrough"
Nested Virtualization Limitations
  • Performance is significantly reduced compared to first-level VMs due to the additional virtualization layer
  • Live migration of nested VMs may not be supported depending on the inner hypervisor
  • Not recommended for production workloads — use dedicated bare-metal nodes for performance-sensitive nested environments
  • host-passthrough CPU mode restricts live migration to nodes with identical physical CPUs

Performance Tuning

VHostNet

VHostNet offloads virtio-net packet processing from QEMU user-space to the kernel, significantly reducing CPU overhead for network-intensive workloads.
Verify VHostNet is loaded
lsmod | grep vhost_net
VHostNet is enabled by default on XOS. No additional configuration is required.

CPU Pinning

For latency-sensitive workloads, pin instance vCPUs to dedicated physical cores to eliminate CPU scheduler jitter. See the Advanced Features guide for CPU pinning configuration.
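Before planning a pinning layout, it helps to map logical CPUs to physical cores and sockets. A minimal sketch using only sysfs (Linux-only; logical CPUs sharing a core_id within the same package are hyper-thread siblings of one physical core):

```shell
# Print each logical CPU with its core and package IDs from sysfs.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  printf '%s core=%s pkg=%s\n' "${c##*/}" \
    "$(cat "$c/topology/core_id")" \
    "$(cat "$c/topology/physical_package_id")"
done
```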

Huge Pages

Configure huge page memory backing for memory-intensive or NUMA-sensitive workloads. Huge pages reduce TLB pressure and improve memory throughput. See the Advanced Features guide for huge page setup.
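As a quick sanity check before configuring huge page backing, inspect what the node currently has reserved. These are standard Linux /proc/meminfo counters:

```shell
# Show huge page counters (HugePages_Total of 0 means none are reserved)
# and the default huge page size, all from /proc/meminfo.
grep -i '^hugepages' /proc/meminfo

# Extract just the number of reserved huge pages, e.g. for scripting:
awk '/^HugePages_Total:/ {print $2}' /proc/meminfo
```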

Capabilities

Advanced Features

CPU pinning, huge pages, NUMA topology, GPU passthrough, and SR-IOV configuration

Live Migration

Configure and execute live migrations between compute nodes

Compute Architecture

Understand the full Xloud Compute architecture and service components

Security Hardening

Hypervisor-level security configuration and CIS compliance hardening

Troubleshooting

Instances fail to launch (hardware virtualization unavailable)
Cause: Hardware virtualization is disabled in BIOS or the hypervisor kernel module is not loaded.
Resolution:
Check hypervisor module status
lsmod | grep kvm
Load hypervisor modules manually
sudo modprobe kvm
sudo modprobe kvm_intel   # or kvm_amd for AMD CPUs
If the module fails to load, enable VT-x/AMD-V in the server BIOS and reboot.
Live migration fails between nodes with different CPUs
Cause: The source and destination compute nodes have incompatible CPU models. This commonly occurs in mixed-CPU clusters when host-model or host-passthrough is used.
Resolution: Switch to custom CPU mode with a common baseline:
globals.d fix
nova_cpu_mode: "custom"
nova_cpu_model: "Cascadelake-Server-noTSX"
Apply with xavs-ansible deploy -t nova on all nodes, then retry the migration.
Poor disk performance on XSDS-backed deployments
Cause: qcow2 format images are being used with XSDS distributed storage, adding unnecessary copy-on-write overhead.
Resolution: Use raw format for all images on XSDS-backed deployments. Convert existing images:
Convert qcow2 to raw, then upload
qemu-img convert -f qcow2 -O raw source.qcow2 source.raw
openstack image create \
  --disk-format raw \
  --container-format bare \
  --file source.raw \
  my-raw-image
Nested virtualization does not work inside a guest
Cause: The host compute node does not have nested virtualization enabled, or the CPU mode does not expose virtualization feature flags to the guest.
Resolution:
  1. Verify nested support: cat /sys/module/kvm_intel/parameters/nested
  2. Confirm the instance is using a flavor with hw:cpu_mode=host-passthrough or equivalent
  3. Verify the guest OS can see the virtualization flag: grep -c vmx /proc/cpuinfo from inside the VM
libvirtd service is not running
Cause: The libvirtd service failed to start or crashed.
Resolution:
Check libvirtd status
sudo systemctl status libvirtd
sudo journalctl -u libvirtd -n 50
Common causes include AppArmor policy conflicts and missing QEMU binaries. Review the journal output for the specific error and consult the Xloud support portal.

Heterogeneous Hardware Support

Xloud-Developed — Heterogeneous hardware support is a core capability of XAVS / XPCI.
Xloud supports mixing different hardware configurations within a single cluster — no hardware homogeneity required.

Mixed Node Types

Run converged (compute+storage), compute-only, and storage-heavy nodes in the same cluster. Each node contributes its resources to the shared pool.

Mixed CPU Generations

Intel and AMD processors of different generations coexist. CPU feature masking ensures live migration compatibility across generations. See CPU Feature Masking.

Mixed Storage Media

NVMe, SSD, and HDD drives in the same cluster. CRUSH device classes auto-detect media type per device and route data to the correct tier. See Storage Tiers.

Varying RAM Capacities

Nodes with different RAM sizes — no configuration needed. The scheduler tracks per-host capacity independently and places instances on hosts with sufficient resources.
Use Host Aggregates to group nodes by capability (e.g., GPU hosts, high-memory hosts) and restrict specific flavors to specific hardware groups. See Scheduling.

Next Steps

Advanced Features

Configure CPU pinning, huge pages, and GPU passthrough for specialized workloads

Live Migration

Set up and execute live migrations across compute nodes

Compute Admin Guide

Full administrator reference for the Xloud Compute service

XAVS Product

Learn about the full XAVS Advanced Virtualization Suite