Overview
Terraform manages Xloud resources through the community OpenStack provider
(terraform-provider-openstack/openstack), covering compute, networking, storage, and identity.
You store infrastructure definitions in version-controlled .tf files, enabling repeatable
deployments, drift detection, and change review workflows via standard CI/CD tooling.
Prerequisites
Terraform 1.3 or later installed (terraform.io)
Xloud application credentials or an openrc file from the Dashboard
Network and security group resources created in your project (or managed via Terraform)
Provider Configuration
Environment Variables
Source your credentials file before running Terraform commands. The provider reads
standard OS_* environment variables:
terraform {
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "~> 2.0"
}
}
}
provider "openstack" {
# Credentials are read from OS_* environment variables
}
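If you are not using an openrc file, the same variables can be exported by hand. The values below are placeholders for illustration only; a real openrc file from the Dashboard exports the correct values for your project:

```shell
# Placeholder values — substitute the values from your Dashboard credentials.
export OS_AUTH_URL="https://api.example.com:5000/v3"
export OS_PROJECT_NAME="my-project"
export OS_USERNAME="deployer"
export OS_PASSWORD="changeme"
export OS_REGION_NAME="RegionOne"
export OS_IDENTITY_API_VERSION=3
```

Run terraform init and terraform plan in the same shell session so the exported variables are visible to the provider.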
Inline Configuration
For CI/CD pipelines where environment variables are injected as secrets:
terraform {
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "~> 2.0"
}
}
}
provider "openstack" {
auth_url = "https://api.<your-domain>:5000/v3"
region = "RegionOne"
tenant_name = var.project_name
user_name = var.username
password = var.password
}
Store credentials in CI/CD secret variables, not in .tf files. Never commit
credentials to source control.
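The var.* references above assume matching declarations. A minimal variables.tf sketch (the names mirror the provider block; defaults are deliberately absent so values must come from CI/CD secrets, e.g. TF_VAR_password):

```hcl
# variables.tf — declarations backing the provider block above.
variable "project_name" {
  type = string
}

variable "username" {
  type = string
}

variable "password" {
  type      = string
  sensitive = true # redacted from plan/apply output
}
```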
Resource Examples
Compute Instance
data "openstack_compute_flavor_v2" "small" {
name = "m1.small"
}
data "openstack_images_image_v2" "ubuntu" {
name = "Ubuntu-22.04"
most_recent = true
}
data "openstack_networking_network_v2" "private" {
name = "private"
}
resource "openstack_compute_keypair_v2" "deployer" {
name = "deployer-key"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "openstack_compute_instance_v2" "web" {
name = "web-server-01"
image_id = data.openstack_images_image_v2.ubuntu.id
flavor_id = data.openstack_compute_flavor_v2.small.id
key_pair = openstack_compute_keypair_v2.deployer.name
security_groups = ["default", "web-sg"]
network {
name = data.openstack_networking_network_v2.private.name
}
user_data = <<-EOF
#!/bin/bash
apt-get update -y
apt-get install -y nginx
systemctl enable --now nginx
EOF
}
output "instance_ip" {
value = openstack_compute_instance_v2.web.access_ip_v4
}
Network and Router
resource "openstack_networking_network_v2" "app_net" {
name = "app-network"
admin_state_up = true
}
resource "openstack_networking_subnet_v2" "app_subnet" {
name = "app-subnet"
network_id = openstack_networking_network_v2.app_net.id
cidr = "10.100.0.0/24"
ip_version = 4
dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}
resource "openstack_networking_router_v2" "app_router" {
name = "app-router"
admin_state_up = true
external_network_id = var.external_network_id
}
resource "openstack_networking_router_interface_v2" "app_router_iface" {
router_id = openstack_networking_router_v2.app_router.id
subnet_id = openstack_networking_subnet_v2.app_subnet.id
}
Block Storage Volume
resource "openstack_blockstorage_volume_v3" "data_vol" {
name = "data-volume-01"
size = 100
volume_type = "ceph-ssd"
description = "Application data volume"
}
resource "openstack_compute_volume_attach_v2" "data_attach" {
instance_id = openstack_compute_instance_v2.web.id
volume_id = openstack_blockstorage_volume_v3.data_vol.id
}
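The attach resource exports the device name assigned by the hypervisor, which is useful when formatting and mounting the volume from user_data or a configuration management tool:

```hcl
output "data_volume_device" {
  # Typically /dev/vdb on KVM guests, but the name is not guaranteed —
  # treat it as informational rather than relying on it in scripts.
  value = openstack_compute_volume_attach_v2.data_attach.device
}
```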
Floating IP
resource "openstack_networking_floatingip_v2" "web_fip" {
pool = "external"
}
resource "openstack_compute_floatingip_associate_v2" "web_fip_assoc" {
floating_ip = openstack_networking_floatingip_v2.web_fip.address
instance_id = openstack_compute_instance_v2.web.id
}
output "public_ip" {
value = openstack_networking_floatingip_v2.web_fip.address
}
State Management
Store Terraform state remotely to enable collaboration and prevent state file conflicts.
Use an S3-compatible backend pointed at Xloud Object Storage:
terraform {
backend "s3" {
bucket = "terraform-state"
key = "prod/web-tier/terraform.tfstate"
region = "us-east-1"
endpoint = "https://object.<your-domain>"
# Backend blocks cannot reference Terraform variables. Supply credentials
# via partial configuration at init time instead, e.g.:
#   terraform init \
#     -backend-config="access_key=$SWIFT_ACCESS_KEY" \
#     -backend-config="secret_key=$SWIFT_SECRET_KEY"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_region_validation = true
force_path_style = true
}
}
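Note that Terraform 1.6 deprecated the endpoint and force_path_style arguments. On newer Terraform versions the equivalent backend block (same bucket and key as above) looks like this:

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "prod/web-tier/terraform.tfstate"
    region = "us-east-1"
    endpoints = {
      s3 = "https://object.<your-domain>"
    }
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    use_path_style              = true
    # Credentials via terraform init -backend-config="access_key=..." as above.
  }
}
```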
For single-operator or development use, Terraform defaults to local state storage in
terraform.tfstate. State can contain sensitive values, so exclude it from shared
repositories via .gitignore; if you keep it in version control at all, use a private
repository you control. Local state is not suitable for team workflows: multiple operators
running terraform apply concurrently against local state will corrupt it.
Use remote state for any shared environment.
Workflow
Initialize the working directory: terraform init
Downloads the provider plugin and configures the backend.
Review the execution plan: terraform plan -out=tfplan
Review all resources that will be created, modified, or destroyed before applying.
Apply the configuration: terraform apply tfplan
Resources are created and outputs are printed. Verify instances appear in the Dashboard under Project → Compute → Instances.
Destroy when done: terraform destroy
All resources defined in the configuration are removed. Use for development or ephemeral environments.
Next Steps
Ansible Integration: Combine Terraform provisioning with Ansible for post-provision configuration management
Auto-Scaling with Orchestration: Deploy auto-scaling stacks via Terraform and wire Prometheus alerts for dynamic scaling
Block Storage: Create and attach persistent volumes to Terraform-managed instances
Application Credentials: Generate non-expiring credentials for use in Terraform pipelines