Overview

Ansible integrates with Xloud through two complementary mechanisms: the openstack.cloud collection for infrastructure automation, and SSH/WinRM-based playbooks for instance configuration. Ansible operates agentlessly; managed instances need only a running SSH daemon and a Python interpreter, with no additional software installed. The Xloud dynamic inventory plugin sources instance metadata directly from the compute API, automatically organizing hosts by project, availability zone, image, and custom metadata tags.
Prerequisites
  • Ansible 2.12 or later installed
  • openstack.cloud collection: ansible-galaxy collection install openstack.cloud
  • Xloud application credentials or openrc file sourced in the shell
  • openstacksdk Python library: pip install openstacksdk
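To make the collection dependency reproducible across machines and CI, you can pin it in a requirements file instead of installing it ad hoc (the version constraint below is an assumption; pin to whatever you have validated):

```yaml
# collections/requirements.yml -- hedged sketch for pinning the collection
collections:
  - name: openstack.cloud
    version: ">=2.0.0"   # assumed constraint; adjust to your tested version
```

Install with `ansible-galaxy collection install -r collections/requirements.yml`.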

Dynamic Inventory

The openstack.cloud.openstack inventory plugin generates a live host list from the Xloud compute API. Hosts are grouped by instance metadata, eliminating the need to maintain static inventory files.
inventory/openstack.yml
plugin: openstack.cloud.openstack
auth:
  auth_url: "https://api.<your-domain>:5000/v3"
  username: "{{ lookup('env', 'OS_USERNAME') }}"
  password: "{{ lookup('env', 'OS_PASSWORD') }}"
  project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"
  user_domain_name: Default
  project_domain_name: Default
expand_hostvars: true
fail_on_errors: false
groups:
  web_servers: "'web' in name"
  app_servers: "'app' in name"
  db_servers: "'db' in name"
compose:
  ansible_host: access_ipv4
Test inventory resolution:
List dynamic inventory hosts
ansible-inventory -i inventory/openstack.yml --list
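Beyond the name-based conditions above, the plugin's `keyed_groups` option can derive groups from hostvars automatically, which matches the grouping by availability zone and metadata tags described in the overview. A hedged sketch (the exact hostvar keys, such as `availability_zone` and `metadata.role`, depend on what your deployment exposes; verify them in the `ansible-inventory --list` output first):

```yaml
# Additional options for inventory/openstack.yml -- hostvar keys are assumptions
keyed_groups:
  - key: availability_zone      # yields groups like "az_nova"
    prefix: az
  - key: metadata.role          # yields groups like "role_web" from a custom tag
    prefix: role
```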

Playbook Examples

OS Bootstrap

Bootstrap a newly provisioned instance with required packages, users, and firewall rules:
playbooks/bootstrap.yml
---
- name: Bootstrap new instances
  hosts: all
  become: true
  vars:
    admin_users:
      - name: deploy
        groups: sudo
        ssh_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

  tasks:
    - name: Update package cache and upgrade
      apt:
        update_cache: true
        upgrade: dist
        cache_valid_time: 3600

    - name: Install required packages
      apt:
        name:
          - curl
          - vim
          - htop
          - unattended-upgrades
          - ufw
        state: present

    - name: Create admin users
      user:
        name: "{{ item.name }}"
        groups: "{{ item.groups }}"
        shell: /bin/bash
        create_home: true
      loop: "{{ admin_users }}"

    - name: Add SSH authorized keys
      authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
        state: present
      loop: "{{ admin_users }}"

    - name: Configure UFW default deny
      ufw:
        state: enabled
        direction: incoming
        policy: deny

    - name: Allow SSH
      ufw:
        rule: allow
        port: "22"
        proto: tcp
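The `admin_users` variable above can be overridden per inventory group instead of edited in the playbook, so each tier gets its own accounts. A hedged sketch (the `webops` user and its key path are hypothetical):

```yaml
# group_vars/web_servers.yml -- overrides admin_users for the web_servers group
admin_users:
  - name: deploy
    groups: sudo
    ssh_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
  - name: webops                                        # hypothetical tier-specific user
    groups: sudo
    ssh_key: "{{ lookup('file', 'files/webops.pub') }}" # hypothetical key file
```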

Patch Management

Apply security patches across all instances in a project:
playbooks/patch.yml
---
- name: Apply security patches
  hosts: all
  become: true
  serial: "25%"

  tasks:
    - name: Update package cache
      apt:
        update_cache: true
        cache_valid_time: 0

    - name: Apply upgrades conservatively
      # "upgrade: safe" upgrades all upgradable packages without removing any;
      # for strictly security-only patching, rely on unattended-upgrades instead
      apt:
        upgrade: safe
        update_cache: false

    - name: Check if reboot is required
      stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if kernel was updated
      reboot:
        reboot_timeout: 300
        msg: "Rebooting after kernel patch"
      when: reboot_required.stat.exists

    - name: Verify instance is responsive after reboot
      wait_for_connection:
        delay: 10
        timeout: 120
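`serial: "25%"` patches one quarter of the hosts at a time, so a bad update never takes down the whole fleet at once. To abort the rollout automatically when a batch fails, it can be paired with `max_fail_percentage`; a hedged sketch of the play header (the 10% threshold is an assumption, tune it to your tolerance):

```yaml
- name: Apply security patches
  hosts: all
  become: true
  serial: "25%"
  max_fail_percentage: 10   # stop remaining batches if >10% of a batch fails
```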

CIS Compliance Enforcement

Apply CIS baseline hardening to Linux instances:
playbooks/cis-harden.yml
---
- name: Apply CIS Level 1 baseline
  hosts: all
  become: true

  tasks:
    - name: Disable unused filesystems
      lineinfile:
        path: /etc/modprobe.d/disable-filesystems.conf
        line: "install {{ item }} /bin/true"
        create: true
        mode: "0644"
      loop:
        - cramfs
        - freevxfs
        - jffs2
        - hfs
        - hfsplus
        - udf

    - name: Set password minimum length
      lineinfile:
        path: /etc/security/pwquality.conf
        regexp: "^minlen"
        line: "minlen = 14"

    - name: Set SSH MaxAuthTries
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^MaxAuthTries"
        line: "MaxAuthTries 4"
        validate: /usr/sbin/sshd -t -f %s
      notify: Restart SSH

    - name: Disable root SSH login
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^PermitRootLogin"
        line: "PermitRootLogin no"
        validate: /usr/sbin/sshd -t -f %s
      notify: Restart SSH

    - name: Enable auditd
      service:
        name: auditd
        state: started
        enabled: true

  handlers:
    - name: Restart SSH
      service:
        name: ssh   # the unit is "ssh" on Debian/Ubuntu; use "sshd" on RHEL-family hosts
        state: restarted
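After hardening, the effective SSH configuration can be verified in the same run. A hedged sketch of two tasks you could append to the tasks list above, using `sshd -T` to dump the running configuration (assumes `sshd` is on the remote PATH):

```yaml
    - name: Read effective sshd configuration
      command: sshd -T
      register: sshd_effective
      changed_when: false

    - name: Assert hardening settings took effect
      assert:
        that:
          - "'permitrootlogin no' in sshd_effective.stdout"
          - "'maxauthtries 4' in sshd_effective.stdout"
```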

Infrastructure Management via Xloud Modules

Create and manage Xloud resources directly from playbooks:
playbooks/provision.yml
---
- name: Provision application tier
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Create application network
      openstack.cloud.network:
        state: present
        name: app-network
        external: false

    - name: Create subnet
      openstack.cloud.subnet:
        state: present
        network_name: app-network
        name: app-subnet
        cidr: 10.200.0.0/24
        dns_nameservers:
          - 8.8.8.8

    - name: Launch application instances
      openstack.cloud.server:
        state: present
        name: "app-{{ item }}"
        image: Ubuntu-22.04
        flavor: m1.medium
        key_name: deployer-key
        network: app-network
        security_groups:
          - default
          - app-sg
        wait: true
      loop: "{{ range(1, 4) | list }}"
      register: created_servers

    - name: Add new instances to in-memory inventory
      add_host:
        hostname: "{{ item.server.access_ipv4 }}"
        groups: new_instances
      loop: "{{ created_servers.results }}"
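`add_host` makes the new addresses available to subsequent plays in the same run, so provisioning and configuration can live in one playbook. A minimal follow-up play (the `ubuntu` login user is an assumption based on the Ubuntu image above):

```yaml
- name: Configure the newly launched instances
  hosts: new_instances
  gather_facts: false
  remote_user: ubuntu   # assumed default user for the Ubuntu-22.04 image
  become: true
  tasks:
    - name: Wait for SSH to become reachable
      wait_for_connection:
        delay: 10
        timeout: 300
```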

Credential Management

Store secrets used in playbooks in Xloud Key Manager (Barbican) or HashiCorp Vault. Retrieve them at runtime (for example with the community.hashi_vault.hashi_vault lookup plugin for Vault, or via the Key Manager API for Barbican) rather than hardcoding them in vars files. For smaller setups, ansible-vault-encrypted vars files also work:
group_vars/all/vault.yml
# Encrypted with ansible-vault
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...encrypted blob...
Run playbook with vault password
ansible-playbook -i inventory/openstack.yml playbooks/bootstrap.yml \
  --vault-password-file ~/.vault-pass
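As an alternative to encrypted vars files, secrets can be pulled from Vault at lookup time so they never touch the repository. A hedged sketch with the `community.hashi_vault.hashi_vault` lookup (the Vault URL and secret path are assumptions; authentication is taken from the environment, e.g. `VAULT_TOKEN`):

```yaml
vars:
  db_password: >-
    {{ lookup('community.hashi_vault.hashi_vault',
              'secret=secret/data/app:db_password url=https://vault.example.com:8200') }}
```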

Running Playbooks

Source credentials

Load Xloud credentials
source admin-openrc.sh

Test connectivity

Ping all dynamic inventory hosts
ansible -i inventory/openstack.yml all -m ping
All targeted instances return a pong response.

Run a playbook

Run bootstrap playbook
ansible-playbook -i inventory/openstack.yml playbooks/bootstrap.yml \
  --limit web_servers \
  --diff \
  --check
Remove --check to apply changes. --diff shows what would change on each host.

Verify results

Check instance facts
ansible -i inventory/openstack.yml web_servers -m setup \
  -a "filter=ansible_distribution*"
Playbook completes with no failed tasks. Verify changes on instances via SSH or the Dashboard console.

Next Steps

Terraform Integration

Use Terraform for provisioning and Ansible for post-provision configuration

Wazuh Integration

Deploy Wazuh agents using Ansible playbooks for SIEM and compliance monitoring

Key Manager

Store playbook secrets in Xloud Key Manager for secure credential retrieval

Auto-Scaling

Bootstrap auto-scaled instances using Ansible cloud-init integration