

Overview

Xloud DNS as a Service uses Designate to manage DNS zones and records on behalf of tenants. Designate has a pluggable backend architecture: the Designate control plane handles tenant zones, while a backend driver propagates each change to the external authoritative DNS server. When a virtual machine is provisioned, Designate creates the matching A, PTR, and CNAME records in the tenant’s assigned zone. If an external DNS backend is configured, the records are propagated to the external provider without manual intervention.
Administrator Access Required — This operation requires the admin role. Contact your Xloud administrator if you do not have sufficient permissions.
Prerequisites
  • Administrator credentials with the admin role
  • Network connectivity from the Designate control plane nodes to the external DNS server’s management interface
  • Credentials, TSIG keys, or API tokens for the external DNS provider as appropriate

Supported Backends

Designate ships several DNS server backend drivers. Two carry Integrated status — they run in continuous CI and are the recommended path for production. The rest are present in tree but officially Untested by upstream, which means there is no automated regression net; deploy them only after your own validation against your target version.
| Backend | Status | Integration mechanism |
| --- | --- | --- |
| BIND9 | Integrated (CI-tested) | RFC 2136 dynamic updates |
| PowerDNS 4 | Integrated (CI-tested) | REST API (PowerDNS Authoritative Server 4.x or later) |
| Infoblox | In tree, Untested | XFR (zone transfer) — refreshed to use the official Infoblox client |
| Akamai DNS v2 | In tree, Untested | XFR (zone transfer) |
| NS1 DNS | In tree, Untested | XFR (zone transfer) |
| DynECT | In tree, Untested | XFR (zone transfer) |
| NSD4 | In tree, Untested | XFR (zone transfer) |
Reference: Designate DNS Server Driver Support Matrix.
Untested does not mean broken — it means upstream has no automated coverage. Validate the target backend in a non-production environment before relying on it in customer deployments.

Notes on commonly named providers

Infoblox

In-tree backend, recently refreshed to use the official Infoblox client. Officially Untested in the upstream support matrix. Suitable for enterprises that want Infoblox to remain the authoritative DNS / DDI platform — but treat the integration as customer-validated, not upstream-certified.

Microsoft DNS / Active Directory DNS

There is no in-tree Designate driver for Microsoft DNS. Two practical paths:
  1. Front Designate with BIND9 or PowerDNS as the integrated backend, then configure a one-way zone-transfer or sync from BIND or PowerDNS into Microsoft DNS using Microsoft’s own zone-transfer support.
  2. Build a custom Designate backend that calls the Microsoft DNS PowerShell cmdlets through WinRM — see Custom Backend Drivers below.
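For path 1, the BIND side of the one-way sync is plain named.conf configuration. A sketch, assuming the Microsoft DNS server sits at 10.0.20.10 (a placeholder address) and hosts example.com as a secondary zone:

```
// /etc/bind/named.conf — allow the Microsoft DNS server to pull the zone
zone "example.com" {
    type master;
    file "/var/cache/bind/example.com.db";
    allow-update { key designate-key; };   // dynamic updates from Designate
    allow-transfer { 10.0.20.10; };        // AXFR to Microsoft DNS
    also-notify { 10.0.20.10; };           // push NOTIFY when the zone changes
};
```

On the Microsoft side, the zone is created as a secondary (stub/secondary zone) pointing at the BIND server, so changes made through Designate flow to Microsoft DNS with no manual steps.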

BlueCat

BlueCat does not ship a Designate backend driver. BlueCat maintains a separate set of OpenStack integration components — see BlueCat OpenStack Drivers — that integrate via Neutron IPAM and Nova/Neutron sync agents that route DNS records into BlueCat Address Manager (BAM). In a BlueCat-integrated deployment, the platform’s DNS records flow through BAM and Designate is typically not used.

SolarWinds

SolarWinds is a network and infrastructure monitoring suite, not an authoritative DNS server, so a Designate backend is not applicable. SolarWinds can receive DNS-related metrics and events from the platform via standard Prometheus / SNMP / syslog forwarders documented in the Monitoring guides.

Architecture

Designate’s backend model separates the API and pool management layer from the DNS protocol layer. Each backend driver handles zone synchronization between Designate’s internal state and the external authoritative server. Tenants interact only with Designate APIs — the backend synchronization is transparent.
Tenant → Designate API → Pool Manager → Backend Driver → External DNS Server

                                    (zone create / record sync)
A pool groups one or more DNS backend targets with shared configuration. Each pool can contain multiple nameservers, and tenants can be assigned to specific pools based on project or zone type.

Backend Configuration (CI-tested backends)

All backend configuration lives in designate.conf. In XAVS deployments, this file is managed by Ansible — set parameters via XDeploy globals and run xavs-ansible deploy --tags designate to apply changes.
BIND9 integration uses the standard DNS dynamic update protocol (RFC 2136). A TSIG key authenticates updates from Designate to BIND. On the BIND server, generate a TSIG key:
Generate TSIG key on the BIND server
tsig-keygen -a HMAC-SHA256 designate-key
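tsig-keygen prints a key clause ready to paste into named.conf.keys; the output has the following shape (the secret shown here is a placeholder, yours will be a generated base64 string):

```
key "designate-key" {
    algorithm hmac-sha256;
    secret "<base64-encoded-secret>";
};
```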
Add the output to /etc/bind/named.conf.keys and configure the zone to accept updates:
/etc/bind/named.conf — allow dynamic updates
zone "example.com" {
    type master;
    file "/var/cache/bind/example.com.db";
    allow-update { key designate-key; };
};
Designate configuration:
designate.conf — BIND9 backend
[backend:bind9]
host = 10.0.10.5
port = 53
tsig_key_name = designate-key
tsig_key_secret = <base64-encoded-key>
tsig_key_algorithm = HMAC-SHA256

[pool:default]
backends = bind9
nameservers = ns1.example.com
also_notifies = 10.0.10.5:53
The pdns4 driver targets PowerDNS Authoritative Server 4.x or later. On the PowerDNS server, enable the API in pdns.conf:
pdns.conf — enable REST API
api=yes
api-key=<your-api-key>
webserver=yes
webserver-address=0.0.0.0
webserver-port=8081
Designate configuration:
designate.conf — PowerDNS backend
[backend:pdns4]
host = 10.0.10.6
port = 8081
api_endpoint = http://10.0.10.6:8081
api_token = <your-api-key>

[pool:default]
backends = pdns4
nameservers = ns1.example.com
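The pdns4 driver drives this same REST API, so you can exercise the endpoint directly to confirm the API key and zone state before wiring up Designate. A minimal sketch using Python's standard library; the server address, API key, and zone name are placeholders taken from the examples above:

```python
import urllib.request

# Placeholders matching the example configuration above.
PDNS_API = "http://10.0.10.6:8081/api/v1/servers/localhost"
API_KEY = "<your-api-key>"  # same value as api-key in pdns.conf

def build_zone_request(zone: str) -> urllib.request.Request:
    """Build a GET request for one zone; PowerDNS authenticates
    API calls via the X-API-Key header."""
    return urllib.request.Request(
        f"{PDNS_API}/zones/{zone}",
        headers={"X-API-Key": API_KEY},
    )

req = build_zone_request("example.com.")

# Sending it requires a reachable PowerDNS server:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())  # JSON body with the zone serial and rrsets
```

A 401 response here means the api-key values do not match; a 404 means the zone has not been provisioned on the PowerDNS side.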
In-tree backends marked Untested (Infoblox, Akamai v2, NS1, DynECT, NSD4) follow the same designate.conf pattern — refer to the upstream Designate documentation for per-driver configuration keys. Validate every option in a non-production environment before relying on these in customer deployments.

Custom Backend Drivers

If your enterprise DNS / DDI provider does not have an in-tree Designate backend — or you want closer integration with a vendor’s REST API rather than relying on zone-transfer — you can develop a custom backend driver. Designate exposes a stable Python interface for plug-in drivers, so a custom driver lives outside the Designate codebase and is loaded at runtime.

When to build a custom driver

  • The vendor exposes a modern REST / WAPI / GraphQL management API and you want native record creation rather than XFR-based propagation.
  • You need richer record types (DDI-managed ranges, vendor-specific attributes) that XFR does not carry.
  • You want operational hooks (audit logging, change-control workflows, approval gates) inside the driver itself.
  • The vendor’s product (for example, Microsoft DNS, BlueCat) does not have an in-tree Designate backend today and you want platform tenants to use their Designate APIs unchanged.

What’s involved

A custom driver is a Python class that implements designate.backend.base.Backend, packaged as a setuptools entry point under the designate.backend group. The driver implements zone create / update / delete and record create / update / delete handlers, and uses the vendor’s SDK or REST API to apply each change to the authoritative DNS provider. The driver is registered in designate.conf exactly like an in-tree backend (backends = my-vendor-driver), and Designate loads it at startup. There are no upstream-Designate code changes required.
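The shape of such a driver can be sketched as follows. This is illustrative only: in a real deployment the class subclasses designate.backend.base.Backend, whose handlers receive a request context and a zone object; here a stub base class and a dict-based fake vendor API stand in so the structure is visible without a Designate installation, and all names (MyVendorBackend, my-vendor-driver) are hypothetical:

```python
class Backend:
    """Stand-in for designate.backend.base.Backend (illustration only)."""
    def __init__(self, target):
        self.options = target.get("options", {})

class MyVendorBackend(Backend):
    """Pushes Designate zone changes to a hypothetical vendor API."""
    __plugin_name__ = "my-vendor-driver"  # the name used in backends = ...

    def __init__(self, target, vendor_zones=None):
        super().__init__(target)
        # A real driver would hold an authenticated vendor API client here;
        # a dict stands in for the vendor's zone store.
        self.vendor_zones = vendor_zones if vendor_zones is not None else {}

    def create_zone(self, context, zone):
        # Real driver: POST the new zone to the vendor's management API.
        self.vendor_zones[zone["name"]] = {"serial": zone["serial"]}

    def delete_zone(self, context, zone):
        # Real driver: DELETE the zone via the vendor API.
        self.vendor_zones.pop(zone["name"], None)

# Exercising the fake driver:
backend = MyVendorBackend({"options": {}})
backend.create_zone(None, {"name": "example.com.", "serial": 1})
backend.delete_zone(None, {"name": "example.com.", "serial": 1})
```

The package's setup.cfg would then register the class under the designate.backend entry-point group, e.g. `my-vendor-driver = mypkg.backend:MyVendorBackend` (identifiers hypothetical), making it selectable from designate.conf.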

Maintenance trade-off

A custom driver is customer-owned — upstream Designate releases will not automatically validate against it. The integration must be re-tested at each Designate upgrade. Plan for a small recurring engineering investment in exchange for the closer integration.

Hybrid approach (often the most pragmatic)

When a vendor doesn’t fit cleanly into either an XFR-based in-tree backend or a custom driver, a common middle ground is:
  1. Use BIND9 or PowerDNS 4 as the Designate-integrated backend (full upstream CI coverage).
  2. Configure the enterprise DNS / DDI platform (Infoblox, Microsoft DNS, BlueCat) to pull zones from BIND or PowerDNS via standard zone transfer or via the vendor’s own sync / gateway product.
This keeps the platform on a CI-tested integration path and uses the vendor’s own tooling for the leg of the path the vendor knows best.

Zone Transfer to External Provider

When a zone is created in Designate, the backend driver provisions it on the configured external DNS server. Zone transfers can also be initiated manually for bulk migration of existing zones.

Create a zone

Navigate to Network → DNS Zones and click Create Zone. Enter the zone name (e.g., example.com.), email, TTL, and select the pool if multiple pools are configured.

Verify propagation

After the zone reaches Active status, query the external DNS server to confirm the zone is visible:
Query external DNS server for zone
dig @10.0.10.5 SOA example.com.

Add records

Navigate to the zone and click Create Record Set. Records created here are automatically propagated to the external backend.
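The same record set can be created from the CLI; a sketch, assuming the zone example.com. already exists and 10.0.0.50 is a placeholder address:

```
openstack recordset create \
  --type A \
  --record 10.0.0.50 \
  example.com. www.example.com.
```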

Pool Management

Pools allow routing different zones or tenants to different DNS backends. For example, internal zones can be handled by BIND9 while external zones use PowerDNS or a vendor backend.
designate.conf — multiple pools
[pools]
names = internal-pool,external-pool

[pool:internal-pool]
backends = bind9
nameservers = ns1.internal.example.com,ns2.internal.example.com
also_notifies = 10.0.10.5:53

[pool:external-pool]
backends = pdns4
nameservers = ns1.example.com,ns2.example.com
Create a zone in a specific pool
openstack zone create \
  --email admin@example.com \
  --attributes pool_id:<pool-uuid> \
  external.example.com.

Troubleshooting

Zone creation fails or times out

Cause: Designate cannot reach the external backend, or authentication failed.
Resolution:
  • Verify network connectivity from the Designate control node to the backend IP and port
  • Check Designate worker logs: docker logs designate_worker
  • Confirm credentials (API key, TSIG key, or password) are correct in designate.conf
Records not appearing on the external DNS server

Cause: The zone may be active in Designate but the backend sync failed.
Resolution:
  • Force a zone sync: openstack zone sync <zone-name>
  • Check Designate producer and worker logs for backend errors
  • Verify the external DNS server has the zone configured to accept updates from the Designate source IP
BIND9 rejects dynamic updates (TSIG mismatch)

Cause: The TSIG key in designate.conf does not match the key configured on the BIND9 server.
Resolution:
  • Re-generate and re-sync the TSIG key on both BIND9 and Designate
  • Confirm the key algorithm (HMAC-SHA256) matches on both sides
  • Test manually: nsupdate -k /etc/bind/designate.key
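The manual nsupdate test can be driven non-interactively; a sketch, assuming the BIND server at 10.0.10.5 and a throwaway record name:

```
nsupdate -k /etc/bind/designate.key <<'EOF'
server 10.0.10.5
zone example.com.
update add tsig-test.example.com. 300 A 10.0.0.99
send
EOF
```

If the key or algorithm is wrong, nsupdate reports a TSIG verification failure instead of applying the update.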
Custom backend driver not loading

Cause: The driver is not registered as a setuptools entry point under the designate.backend group, or the Python package is not installed in the Designate virtualenv.
Resolution:
  • Verify the package is installed in the Designate environment (pip show <your-driver-package>) and that its entry point is registered under the designate.backend group
  • Confirm the driver class implements every abstract method of designate.backend.base.Backend
  • Restart designate-worker and designate-producer after installing or updating the driver

Next Steps

Backend Configuration

Full reference for Designate backend driver options and pool configuration

Pool Management

Manage multiple backend pools for routing zones to different DNS providers

DNS Security

Configure TSIG keys, DNSSEC, and zone access policies

Designate Support Matrix (upstream)

Authoritative upstream list of in-tree backends and their CI test status