Overview

Xloud is managed with Terraform through the OpenStack provider (terraform-provider-openstack/openstack), which covers compute, networking, storage, and identity resources. Infrastructure definitions are stored in version-controlled .tf files, enabling repeatable deployments, drift detection, and change review workflows via standard CI/CD tooling.
Prerequisites
  • Terraform 1.3 or later installed (terraform.io)
  • Xloud application credentials or an openrc file from the Dashboard
  • Network and security group resources created in your project (or managed via Terraform)

Provider Configuration

Source your credentials file before running Terraform commands. The provider reads standard OS_* environment variables:
Load Xloud credentials
source admin-openrc.sh
provider.tf
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 2.0"
    }
  }
}

provider "openstack" {
  # Credentials are read from OS_* environment variables
}

Resource Examples

Compute Instance

compute.tf
data "openstack_compute_flavor_v2" "small" {
  name = "m1.small"
}

data "openstack_images_image_v2" "ubuntu" {
  name        = "Ubuntu-22.04"
  most_recent = true
}

data "openstack_networking_network_v2" "private" {
  name = "private"
}

resource "openstack_compute_keypair_v2" "deployer" {
  name       = "deployer-key"
  public_key = file("~/.ssh/id_rsa.pub")
}

resource "openstack_compute_instance_v2" "web" {
  name            = "web-server-01"
  image_id        = data.openstack_images_image_v2.ubuntu.id
  flavor_id       = data.openstack_compute_flavor_v2.small.id
  key_pair        = openstack_compute_keypair_v2.deployer.name
  security_groups = ["default", "web-sg"]

  network {
    name = data.openstack_networking_network_v2.private.name
  }

  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y nginx
    systemctl enable --now nginx
  EOF
}

output "instance_ip" {
  value = openstack_compute_instance_v2.web.access_ip_v4
}

Network and Router

networking.tf
resource "openstack_networking_network_v2" "app_net" {
  name           = "app-network"
  admin_state_up = true
}

resource "openstack_networking_subnet_v2" "app_subnet" {
  name            = "app-subnet"
  network_id      = openstack_networking_network_v2.app_net.id
  cidr            = "10.100.0.0/24"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

resource "openstack_networking_router_v2" "app_router" {
  name                = "app-router"
  admin_state_up      = true
  external_network_id = var.external_network_id
}

resource "openstack_networking_router_interface_v2" "app_router_iface" {
  router_id = openstack_networking_router_v2.app_router.id
  subnet_id = openstack_networking_subnet_v2.app_subnet.id
}
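
variables.tf
The router above references var.external_network_id, which must be declared somewhere in the module. A minimal declaration (the description is illustrative; look up the external network's UUID in the Dashboard or via the CLI):
variable "external_network_id" {
  description = "UUID of the external (provider) network used for outbound access"
  type        = string
}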

Block Storage Volume

storage.tf
resource "openstack_blockstorage_volume_v3" "data_vol" {
  name        = "data-volume-01"
  size        = 100
  volume_type = "ceph-ssd"
  description = "Application data volume"
}

resource "openstack_compute_volume_attach_v2" "data_attach" {
  instance_id = openstack_compute_instance_v2.web.id
  volume_id   = openstack_blockstorage_volume_v3.data_vol.id
}
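
After apply, the volume appears inside the guest as a new block device. The commands below sketch a hypothetical first-time setup run from within the instance; the /dev/vdb device name is an assumption typical of KVM guests, so confirm it with lsblk first:

```shell
# Run inside the instance; /dev/vdb is an assumption - verify with lsblk
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/data
sudo mount /dev/vdb /mnt/data
# nofail keeps the instance bootable if the volume is ever detached
echo '/dev/vdb /mnt/data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
```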

Floating IP

floating-ip.tf
resource "openstack_networking_floatingip_v2" "web_fip" {
  pool = "external"
}

# openstack_compute_floatingip_associate_v2 was removed in provider 2.x;
# associate the floating IP with the instance's port instead.
data "openstack_networking_port_v2" "web_port" {
  device_id  = openstack_compute_instance_v2.web.id
  network_id = data.openstack_networking_network_v2.private.id
}

resource "openstack_networking_floatingip_associate_v2" "web_fip_assoc" {
  floating_ip = openstack_networking_floatingip_v2.web_fip.address
  port_id     = data.openstack_networking_port_v2.web_port.id
}

output "public_ip" {
  value = openstack_networking_floatingip_v2.web_fip.address
}
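
Once applied, the output can feed directly into ssh. The ubuntu login user below is an assumption based on the Ubuntu cloud image; adjust for your image:

```shell
# Connect using the deployer key and the Terraform output
ssh -i ~/.ssh/id_rsa ubuntu@"$(terraform output -raw public_ip)"
```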

State Management
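
Terraform records the mapping between your configuration and real resources in a state file (terraform.tfstate). For solo use the default local state is fine; for teams, store state in a shared backend so runs do not clobber each other. A sketch using the s3 backend against an S3-compatible object store (the bucket name and endpoint are placeholders, not actual Xloud values):
backend.tf
terraform {
  backend "s3" {
    bucket = "tfstate"                    # placeholder bucket name
    key    = "app/terraform.tfstate"
    region = "us-east-1"                  # required by the backend, ignored by most non-AWS stores
    endpoints = {
      s3 = "https://object-store.example.com"   # placeholder endpoint
    }
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    use_path_style              = true
  }
}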


Workflow

Initialize the working directory

Initialize Terraform
terraform init
Downloads the provider plugin and configures the backend.

Review the execution plan

Show plan
terraform plan -out=tfplan
Review all resources that will be created, modified, or destroyed before applying.

Apply the configuration

Apply configuration
terraform apply tfplan
Resources are created and outputs are printed. Verify instances appear in the Dashboard under Project → Compute → Instances.

Destroy when done

Destroy all resources
terraform destroy
Removes every resource defined in the configuration. Intended for development or ephemeral environments; use with care anywhere else.

Next Steps

Ansible Integration

Combine Terraform provisioning with Ansible for post-provision configuration management

Auto-Scaling with Orchestration

Deploy auto-scaling stacks via Terraform and wire Prometheus alerts for dynamic scaling

Block Storage

Create and attach persistent volumes to Terraform-managed instances

Application Credentials

Generate non-expiring credentials for use in Terraform pipelines