Overview
Xloud Compute exposes advanced hypervisor capabilities for workloads that require dedicated hardware resources, hardware-enforced security, or accelerated computing. These features are activated through flavor extra specs and require corresponding host-level configuration on participating compute nodes.

Prerequisites
- Admin credentials sourced from admin-openrc.sh
- Host-level hardware features configured through XDeploy (IOMMU, huge pages, VFIO)
- Relevant scheduler filters active in the scheduler configuration
Features
NUMA-Aware Scheduling (Enterprise)
NUMA-aware scheduling ensures that an instance's vCPUs and memory are allocated from the same physical NUMA cell on the host, avoiding cross-node memory access penalties. The NUMATopologyFilter in the scheduler evaluates each host's NUMA topology and rejects placements that would split an instance across cells.

Xloud-Developed — NUMA-aware scheduling with per-flavor granularity is developed by Xloud and ships with XAVS / XPCI.

Unlike VMware's cluster-wide EVC setting, Xloud provides per-flavor NUMA granularity — each flavor can define its own NUMA cell count and CPU/memory distribution, allowing mixed workload profiles on the same compute cluster.

Create a NUMA-aware flavor (2 NUMA nodes, 8 vCPUs):
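A sketch of the flavor-creation command using the standard OpenStack-style CLI; the flavor name and RAM/disk sizes are illustrative, not prescribed by Xloud:

```shell
# Create an 8-vCPU, 16 GB flavor whose vCPUs and memory are
# split evenly across 2 guest NUMA nodes
openstack flavor create numa.large \
  --vcpus 8 \
  --ram 16384 \
  --disk 40 \
  --property hw:numa_nodes=2
```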
| Extra Spec | Values | Description |
|---|---|---|
| hw:numa_nodes | Integer | Number of guest NUMA nodes (vCPUs and memory split evenly across nodes) |
| hw:numa_cpus.N | CPU list | Pin specific vCPUs to NUMA node N (e.g., 0,1,2,3) |
| hw:numa_mem.N | Integer (MB) | Memory allocation for NUMA node N |
CPU Pinning (Enterprise)
CPU pinning dedicates physical CPU threads exclusively to a single instance, eliminating scheduling jitter and improving performance predictability for latency-sensitive workloads such as real-time databases, financial applications, and telco VNFs.

Xloud-Developed — CPU pinning with mixed-policy support is developed by Xloud and ships with XAVS / XPCI.

Xloud supports three CPU policy modes:

- dedicated — all vCPUs are pinned to exclusive physical threads
- shared — vCPUs float across all available host CPUs (default behavior)
- mixed — some vCPUs are pinned while others float, enabling a balance between deterministic performance for critical threads and flexible scheduling for background threads

NUMATopologyFilter must be active in the scheduler filter chain. Create a flavor with dedicated CPU policy:
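A minimal sketch; the flavor name and sizes are illustrative, and the thread policy is shown only as an example of combining the extra specs from the table below:

```shell
# Pin all 8 vCPUs to dedicated physical threads; 'require'
# insists on placing vCPUs on sibling hyperthreads of whole cores
openstack flavor create pinned.xlarge \
  --vcpus 8 \
  --ram 32768 \
  --disk 80 \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=require
```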
| Extra Spec | Values | Description |
|---|---|---|
| hw:cpu_policy | dedicated, shared, mixed | Controls whether vCPUs are pinned to exclusive physical threads |
| hw:cpu_dedicated_mask | CPU mask (e.g., 0-3) | When mixed policy is used, specifies which vCPUs are pinned |
| hw:cpu_thread_policy | prefer, isolate, require | Controls whether sibling hyperthreads are used |
| hw:numa_nodes | Integer | Number of NUMA nodes to expose inside the instance |
CPU Feature Masking (EVC Equivalent) (Enterprise)
CPU feature masking normalizes the CPU instruction set exposed to instances, enabling live migration across compute nodes with different CPU generations. This is the Xloud equivalent of VMware's Enhanced vMotion Compatibility (EVC), with a key difference: Xloud applies masking per-flavor or per-image rather than at the cluster level, allowing different workloads to use different CPU baselines on the same cluster.

Xloud-Developed — Per-flavor CPU feature masking is developed by Xloud and ships with XAVS / XPCI.

Set a common CPU model baseline on the flavor for cross-generation migration:
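A sketch using the extra specs documented in the table below; the flavor name and sizes are illustrative:

```shell
# Fix the guest CPU baseline to Cascade Lake so instances can
# live-migrate between hosts of that generation or newer
openstack flavor create migratable.large \
  --vcpus 4 \
  --ram 8192 \
  --disk 40 \
  --property hw:cpu_mode=custom \
  --property hw:cpu_model=Cascadelake-Server-noTSX
```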
| Extra Spec | Values | Description |
|---|---|---|
| hw:cpu_mode | host-model, host-passthrough, custom | custom enables explicit CPU model selection |
| hw:cpu_model | CPU model name | Target CPU generation baseline (e.g., Cascadelake-Server-noTSX, IvyBridge) |
When using host-model or host-passthrough, the instance exposes the host's native CPU features. Live migration to a host with a different CPU generation will fail if the destination lacks required features. Use custom mode with an explicit model to guarantee migration compatibility.
Huge Pages (Enterprise)
Huge pages reduce TLB (Translation Lookaside Buffer) pressure for memory-intensive workloads. The hypervisor pre-allocates huge page pools on the host at boot time. Xloud supports two page sizes:

- 2 MB huge pages — suitable for most workloads including databases, application servers, and general-purpose memory optimization. Lower host memory fragmentation risk.
- 1 GB huge pages — optimal for HPC, in-memory analytics, and latency-sensitive network functions (DPDK, VNFs) where TLB coverage per entry is critical.

Instances using huge pages are scheduled only onto hosts where the required huge page pool is allocated. The scheduler rejects hosts without sufficient huge pages of the requested size.

Xloud-Developed — Huge page management with per-flavor page size selection is developed by Xloud and ships with XAVS / XPCI.
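A sketch of per-flavor page size selection. This section does not name the extra spec, so the example assumes the standard OpenStack hw:mem_page_size property; the flavor name and sizes are illustrative:

```shell
# Back the instance's memory with 1 GB huge pages
# (assumes the standard hw:mem_page_size extra spec)
openstack flavor create hugepage.large \
  --vcpus 8 \
  --ram 16384 \
  --disk 40 \
  --property hw:mem_page_size=1GB
```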
GPU Passthrough (Enterprise)
GPU passthrough exposes a physical GPU device directly to an instance with near-native performance. The GPU is bound to the VFIO driver on the host and assigned exclusively to one instance at a time. Use this for AI/ML training, 3D rendering, and GPU-accelerated simulation workloads.

Host requirements: IOMMU must be enabled in host BIOS and OS. The GPU must be bound to the VFIO driver. Configure this through XDeploy under Compute → Hardware → GPU Configuration.

List available PCI resource providers to find the GPU alias, then request the GPU in a flavor using the configured device alias:
List resource providers
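A sketch using the standard placement CLI (assumes the osc-placement plugin is installed):

```shell
# List placement resource providers (one per compute node)
# to locate hosts exposing GPU resources
openstack resource provider list
```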
Show inventory for a resource provider
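A sketch of inspecting one provider; <provider-uuid> is a placeholder for a UUID taken from the provider listing:

```shell
# Show a provider's inventory to confirm the GPU resource class
# is exposed with available capacity
openstack resource provider inventory list <provider-uuid>
```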
Add GPU alias to flavor
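A sketch of requesting the device; the flavor name is illustrative and <gpu-alias> is a placeholder for the alias configured in XDeploy:

```shell
# Attach one GPU matching the configured alias; ":1" requests
# a single device
openstack flavor set gpu.large \
  --property "pci_passthrough:alias"="<gpu-alias>:1"
```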
All compute nodes exposing the same GPU type must use the same alias name. The PciPassthroughFilter must be active in the scheduler filter chain to route GPU-requesting instances to hosts with available devices.
vTPM (Virtual Trusted Platform Module) (Enterprise)
vTPM provides a software-emulated TPM chip inside the instance, enabling disk encryption (BitLocker, LUKS), measured boot attestation, and secure credential storage. Xloud supports live migration of vTPM instances with automatic secret transfer via Xloud Key Management — the encrypted TPM state is seamlessly transferred to the destination host without manual intervention.

Xloud-Developed — vTPM with live migration support and Dashboard integration is developed by Xloud and ships with XAVS / XPCI.

Host requirements: Xloud Key Management must be enabled for secret storage.

In the Dashboard, vTPM can be enabled through the flavor extra specs panel or the image admin form. The instance detail page displays the vTPM status when attached.

Supported models:
| Model | Use Case |
|---|---|
| tpm-crb (Command Response Buffer) | Recommended for TPM 2.0. Modern interface used by Windows 11, RHEL 9+, Ubuntu 22.04+ |
| tpm-tis (TPM Interface Specification) | Legacy interface for TPM 1.2 compatibility and older operating systems |
The swtpm software TPM emulator must be installed on all compute nodes.

Provisioning methods:

| Extra Spec / Image Property | Values | Description |
|---|---|---|
| hw:tpm_version / hw_tpm_version | 1.2, 2.0 | vTPM specification version |
| hw:tpm_model / hw_tpm_model | tpm-tis, tpm-crb | Virtual TPM hardware model |
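A sketch of flavor-based provisioning using the extra specs from the table above; the flavor name is illustrative and assumes the flavor already exists:

```shell
# Request a TPM 2.0 device with the recommended CRB model
openstack flavor set vtpm.medium \
  --property hw:tpm_version=2.0 \
  --property hw:tpm_model=tpm-crb
```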
UEFI Boot and Secure Boot
UEFI boot is required for Secure Boot, vTPM, and GPT-partitioned disk layouts. Enable UEFI at the image level or as a flavor property. Secure Boot adds an additional layer by verifying the bootloader signature at startup.

Set UEFI firmware type on an image, then enable Secure Boot through a flavor property:
Enable UEFI boot on an image
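A sketch using the standard hw_firmware_type image property; the image name is illustrative:

```shell
# Boot instances from this image with UEFI firmware
# instead of legacy BIOS
openstack image set my-image --property hw_firmware_type=uefi
```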
Require Secure Boot via flavor
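A sketch assuming the standard os:secure_boot extra spec; the flavor name is illustrative:

```shell
# Require signature verification of the bootloader at startup
openstack flavor set secure.medium --property os:secure_boot=required
```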
Secure Boot requires a signed bootloader in the guest OS. Unsigned kernels and
bootloaders will fail to start with Secure Boot enabled. Verify guest OS
compatibility before deploying Secure Boot in production.
PCI Passthrough (Enterprise)
PCI passthrough grants exclusive access to any host PCI device — network adapters, accelerators, storage controllers, or FPGAs — to a single instance. The device is isolated from the host OS using IOMMU groups, providing hardware-level isolation and near-native performance.

Host requirements: IOMMU must be enabled. Device aliases must be configured and consistent across all nodes exposing the same device type. Configure through XDeploy under Compute → Hardware → PCI Passthrough.
Request a PCI device via flavor
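A sketch of the flavor request; the flavor name is illustrative and <device-alias> is the placeholder used throughout this section:

```shell
# Request one PCI device matching the configured alias;
# the count after the colon is the number of devices to attach
openstack flavor set pci.medium \
  --property "pci_passthrough:alias"="<device-alias>:1"
```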
Replace <device-alias> with the alias name configured in XDeploy. The number after the colon specifies how many devices to attach (typically 1).

Next Steps
- Flavor Management: Apply extra specs to flavors to activate advanced hardware features for tenants.
- Security Hardening: Combine vTPM and UEFI Secure Boot with compute control plane hardening.
- Admin Guide: Return to the Compute Administration Guide index.