
Overview

Xloud Compute exposes advanced hypervisor capabilities for workloads that require dedicated hardware resources, hardware-enforced security, or accelerated computing. These features are activated through flavor extra specs and require corresponding host-level configuration on participating compute nodes.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Admin credentials sourced from admin-openrc.sh
  • Host-level hardware features configured through XDeploy (IOMMU, huge pages, VFIO)
  • Relevant scheduler filters active in the scheduler configuration

Features

Enterprise
Xloud-Developed — NUMA-aware scheduling with per-flavor granularity is developed by Xloud and ships with XAVS / XPCI.
NUMA-aware scheduling ensures that an instance’s vCPUs and memory are allocated from the same physical NUMA cell on the host, avoiding cross-node memory access penalties. The NUMATopologyFilter in the scheduler evaluates each host’s NUMA topology and rejects placements that would split an instance across cells. Unlike VMware’s cluster-wide EVC setting, Xloud provides per-flavor NUMA granularity — each flavor can define its own NUMA cell count and CPU/memory distribution, allowing mixed workload profiles on the same compute cluster.
Create a NUMA-aware flavor (2 NUMA nodes, 8 vCPUs)
openstack flavor create \
  --vcpus 8 \
  --ram 16384 \
  --disk 100 \
  numa.8xlarge

openstack flavor set numa.8xlarge \
  --property hw:numa_nodes=2
Extra Spec | Values | Description
hw:numa_nodes | Integer | Number of guest NUMA nodes (vCPUs and memory split evenly across nodes)
hw:numa_cpus.N | CPU list | Assign specific vCPUs to guest NUMA node N (e.g., 0,1,2,3)
hw:numa_mem.N | Integer (MB) | Memory allocation for guest NUMA node N
For most workloads, setting hw:numa_nodes=1 is sufficient to ensure all resources are allocated from a single NUMA cell. Use hw:numa_nodes=2 only for large instances that exceed a single cell’s capacity.
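Where an even split is not appropriate, the hw:numa_cpus.N and hw:numa_mem.N extra specs from the table above can define the distribution explicitly. A sketch, continuing with the numa.8xlarge flavor created earlier (the per-node values must cover all vCPUs and sum to the flavor's RAM):

```shell
# Explicitly assign vCPUs and memory to each guest NUMA node
openstack flavor set numa.8xlarge \
  --property hw:numa_nodes=2 \
  --property hw:numa_cpus.0=0,1,2,3 \
  --property hw:numa_cpus.1=4,5,6,7 \
  --property hw:numa_mem.0=8192 \
  --property hw:numa_mem.1=8192
```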
Enterprise
Xloud-Developed — CPU pinning with mixed-policy support is developed by Xloud and ships with XAVS / XPCI.
CPU pinning dedicates physical CPU threads exclusively to a single instance, eliminating scheduling jitter and improving performance predictability for latency-sensitive workloads such as real-time databases, financial applications, and telco VNFs. Xloud supports three CPU policy modes:
  • dedicated — all vCPUs are pinned to exclusive physical threads
  • shared — vCPUs float across all available host CPUs (default behavior)
  • mixed — some vCPUs are pinned while others float, enabling a balance between deterministic performance for critical threads and flexible scheduling for background threads
Host requirements: NUMA topology support must be enabled on the compute node. The NUMATopologyFilter must be active in the scheduler filter chain.
Create a flavor with dedicated CPU policy:
Create CPU-pinned flavor
openstack flavor create \
  --vcpus 8 \
  --ram 16384 \
  --disk 100 \
  rt.8xlarge
openstack flavor set rt.8xlarge \
  --property hw:cpu_policy=dedicated \
  --property hw:cpu_thread_policy=prefer
Extra Spec | Values | Description
hw:cpu_policy | dedicated, shared, mixed | Controls whether vCPUs are pinned to exclusive physical threads
hw:cpu_dedicated_mask | CPU mask (e.g., 0-3) | When the mixed policy is used, specifies which vCPUs are pinned
hw:cpu_thread_policy | prefer, isolate, require | Controls whether sibling hyperthreads are used
hw:numa_nodes | Integer | Number of NUMA nodes to expose inside the instance
Configure CPU pinning host requirements on participating nodes through XDeploy under Compute → Advanced → NUMA Configuration. Only hosts with NUMA support enabled accept CPU-pinned instances.
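As a sketch of the mixed policy described above (the flavor name is illustrative): vCPUs listed in hw:cpu_dedicated_mask are pinned to exclusive physical threads, while the remaining vCPUs float with the shared pool.

```shell
openstack flavor create \
  --vcpus 8 \
  --ram 16384 \
  --disk 100 \
  mixed.8xlarge

# vCPUs 0-3 are pinned; vCPUs 4-7 float across available host CPUs
openstack flavor set mixed.8xlarge \
  --property hw:cpu_policy=mixed \
  --property hw:cpu_dedicated_mask=0-3
```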
Enterprise
Xloud-Developed — Per-flavor CPU feature masking is developed by Xloud and ships with XAVS / XPCI.
CPU feature masking normalizes the CPU instruction set exposed to instances, enabling live migration across compute nodes with different CPU generations. This is the Xloud equivalent of VMware’s Enhanced vMotion Compatibility (EVC), with a key difference: Xloud applies masking per-flavor or per-image rather than at the cluster level, allowing different workloads to use different CPU baselines on the same cluster.
Set a common CPU model baseline on the flavor:
Set CPU model for cross-generation migration
openstack flavor set <flavor-name> \
  --property hw:cpu_mode=custom \
  --property hw:cpu_model=Cascadelake-Server-noTSX
Extra Spec | Values | Description
hw:cpu_mode | host-model, host-passthrough, custom | custom enables explicit CPU model selection
hw:cpu_model | CPU model name | Target CPU generation baseline (e.g., Cascadelake-Server-noTSX, IvyBridge)
When using host-model or host-passthrough, the instance exposes the host’s native CPU features. Live migration to a host with a different CPU generation will fail if the destination lacks required features. Use custom mode with an explicit model to guarantee migration compatibility.
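One way to validate a chosen baseline is to live-migrate a test instance built from the flavor and confirm it lands healthy on another host (the server name here is illustrative):

```shell
# Trigger a scheduler-selected live migration
openstack server migrate --live-migration test-vm

# Confirm the instance is ACTIVE and note which host it now runs on
openstack server show test-vm -c status -c "OS-EXT-SRV-ATTR:host"
```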
Enterprise
Xloud-Developed — Huge page management with per-flavor page size selection is developed by Xloud and ships with XAVS / XPCI.
Huge pages reduce TLB (Translation Lookaside Buffer) pressure for memory-intensive workloads. The hypervisor pre-allocates huge page pools on the host at boot time. Xloud supports two page sizes:
  • 2 MB huge pages — suitable for most workloads including databases, application servers, and general-purpose memory optimization. Lower host memory fragmentation risk.
  • 1 GB huge pages — optimal for HPC, in-memory analytics, and latency-sensitive network functions (DPDK, VNFs) where TLB coverage per entry is critical.
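The TLB-coverage trade-off above can be quantified with simple arithmetic. A generic sketch (the 16 GiB figure is illustrative, not Xloud-specific): each TLB entry maps one page, so larger pages mean far fewer entries are needed to cover the same guest memory.

```shell
# TLB entries needed to map 16 GiB of guest RAM at each page size
ram=$((16 * 1024 * 1024 * 1024))
echo "4 KB pages: $((ram / 4096)) entries"
echo "2 MB pages: $((ram / (2 * 1024 * 1024))) entries"
echo "1 GB pages: $((ram / (1024 * 1024 * 1024))) entries"
```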
Host requirements: Huge page pools must be pre-allocated on the compute node. Configure pool sizes through XDeploy under Compute → Advanced → Memory Configuration.
openstack flavor set <flavor-name> \
  --property hw:mem_page_size=2MB
Instances using huge pages are scheduled only onto hosts where the required huge page pool is allocated. The scheduler rejects hosts without sufficient huge pages of the requested size.
Use hw:mem_page_size=any to allow the scheduler to place the instance on a host with either 2 MB or 1 GB pages, maximizing scheduling flexibility.
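For example:

```shell
# Accept any pre-allocated huge page size available on the host
openstack flavor set <flavor-name> \
  --property hw:mem_page_size=any
```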
Enterprise
GPU passthrough exposes a physical GPU device directly to an instance with near-native performance. The GPU is bound to the VFIO driver on the host and assigned exclusively to one instance at a time. Use this for AI/ML training, 3D rendering, and GPU-accelerated simulation workloads.
Host requirements: IOMMU must be enabled in host BIOS and OS. The GPU must be bound to the VFIO driver. Configure this through XDeploy under Compute → Hardware → GPU Configuration.
List available PCI resource providers to find the GPU alias:
List resource providers
openstack resource provider list
Show inventory for a resource provider
openstack resource provider inventory list <resource-provider-uuid>
Request the GPU in a flavor using the configured device alias:
Add GPU alias to flavor
openstack flavor set <flavor-name> \
  --property pci_passthrough:alias=nvidia-a100:1
All compute nodes exposing the same GPU type must use the same alias name. The PciPassthroughFilter must be active in the scheduler filter chain to route GPU-requesting instances to hosts with available devices.
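Once a flavor requests the alias, instances boot normally and the scheduler routes them to a host with a free device. A sketch (the flavor, image, and network names are illustrative):

```shell
openstack server create \
  --flavor gpu.a100 \
  --image ubuntu-22.04 \
  --network tenant-net \
  gpu-train-01
```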
Enterprise
Xloud-Developed — vTPM with live migration support and Dashboard integration is developed by Xloud and ships with XAVS / XPCI.
vTPM provides a software-emulated TPM chip inside the instance, enabling disk encryption (BitLocker, LUKS), measured boot attestation, and secure credential storage. Xloud supports live migration of vTPM instances with automatic secret transfer via Xloud Key Management — the encrypted TPM state is seamlessly transferred to the destination host without manual intervention.
Supported models:
Model | Use Case
tpm-crb (Command Response Buffer) | Recommended for TPM 2.0. Modern interface used by Windows 11, RHEL 9+, Ubuntu 22.04+
tpm-tis (TPM Interface Specification) | Legacy interface for TPM 1.2 compatibility and older operating systems
Host requirements: Xloud Key Management must be enabled for secret storage. The swtpm software TPM emulator must be installed on all compute nodes.
Provisioning methods:
openstack flavor set <flavor-name> \
  --property hw:tpm_version=2.0 \
  --property hw:tpm_model=tpm-crb
In the Dashboard, vTPM can be enabled through the flavor extra specs panel or the image admin form. The instance detail page displays the vTPM status when attached.
Extra Spec / Image Property | Values | Description
hw:tpm_version / hw_tpm_version | 1.2, 2.0 | vTPM specification version
hw:tpm_model / hw_tpm_model | tpm-tis, tpm-crb | Virtual TPM hardware model
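The image-based variant of the flavor example above uses the hw_tpm_* image properties from the table:

```shell
openstack image set \
  --property hw_tpm_version=2.0 \
  --property hw_tpm_model=tpm-crb \
  <image-id>
```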
vTPM state is encrypted at rest using a secret stored in Xloud Key Management. Ensure Xloud Key Management is enabled and healthy before provisioning vTPM instances. If the Key Management service is unavailable, vTPM instances cannot start or be live-migrated.
UEFI boot is required for Secure Boot, vTPM, and GPT-partitioned disk layouts. Enable UEFI at the image level or as a flavor property. Secure Boot adds an additional layer by verifying the bootloader signature at startup.
Set UEFI firmware type on an image:
Enable UEFI boot on an image
openstack image set \
  --property hw_firmware_type=uefi \
  <image-id>
Enable Secure Boot through a flavor property:
Require Secure Boot via flavor
openstack flavor set <flavor-name> \
  --property os:secure_boot=required
Secure Boot requires a signed bootloader in the guest OS. Unsigned kernels and bootloaders will fail to start with Secure Boot enabled. Verify guest OS compatibility before deploying Secure Boot in production.
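On many Linux guests, the effective Secure Boot state can be verified from inside the instance with mokutil (assuming the package is installed in the guest; this runs in the instance, not on the host):

```shell
# Reports whether Secure Boot is enabled in the running guest
mokutil --sb-state
```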
Enterprise
PCI passthrough grants exclusive access to any host PCI device — network adapters, accelerators, storage controllers, or FPGAs — to a single instance. The device is isolated from the host OS using IOMMU groups, providing hardware-level isolation and near-native performance.
Host requirements: IOMMU must be enabled. Device aliases must be configured and consistent across all nodes exposing the same device type. Configure through XDeploy under Compute → Hardware → PCI Passthrough.
Request a PCI device via flavor
openstack flavor set <flavor-name> \
  --property pci_passthrough:alias=<device-alias>:1
Replace <device-alias> with the alias name configured in XDeploy. The number after the colon specifies how many devices to attach (typically 1).
PCI-attached devices cannot be live-migrated. Instances with PCI passthrough are limited to cold migration only. Factor this constraint into your maintenance and availability planning.
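After boot, the attached device should appear in the guest's PCI tree. A quick check from inside the instance (the vendor filter is illustrative):

```shell
# List PCI devices in the guest and filter for the passed-through hardware
lspci -nn | grep -i <device-vendor>
```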

Next Steps

Flavor Management

Apply extra specs to flavors to activate advanced hardware features for tenants.

Security Hardening

Combine vTPM and UEFI Secure Boot with compute control plane hardening.

Admin Guide

Return to the Compute Administration Guide index.