
Overview

Node groups are named pools of worker nodes within a single Kubernetes cluster. Each group can have a different instance flavor, enabling heterogeneous clusters — e.g., a GPU node pool for machine learning workloads alongside a general-purpose pool for web services. Node groups are scaled independently, giving fine-grained control over capacity without affecting other workloads on the cluster.
Prerequisites
  • A cluster in CREATE_COMPLETE status
  • Sufficient compute quota for the new node group

Default Node Group

Every cluster has a default node group created at provisioning time. Its flavor and node count are set by the cluster template and the initial node count parameter. The default node group is named default-worker.
List node groups for a cluster
openstack coe nodegroup list prod-cluster-01

Create a Node Group

Navigate to the cluster

Log in to the Xloud Dashboard (https://connect.<your-domain>) and navigate to Project → Containers → Clusters. Click your cluster name.

Open Node Groups

Click the Node Groups tab on the cluster detail page.

Create node group

Click Create Node Group and fill in the fields:
| Field      | Description                            | Example     |
|------------|----------------------------------------|-------------|
| Name       | Unique name within the cluster         | gpu-workers |
| Node Count | Initial number of nodes in the group   | 2           |
| Flavor     | Instance size for this group           | g1.xlarge   |
| Min Nodes  | Minimum nodes for autoscaling          | 1           |
| Max Nodes  | Maximum nodes for autoscaling          | 5           |
| Role       | worker or infra                        | worker      |
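
The same node group can be created from the CLI. This is a sketch using the Magnum plugin for the OpenStack client; the exact flags can vary by client version, and the cluster and group names mirror the examples above:

```shell
# Create a node group with an initial count, flavor, and autoscaling bounds
openstack coe nodegroup create \
  --node-count 2 \
  --flavor g1.xlarge \
  --min-nodes 1 \
  --max-nodes 5 \
  --role worker \
  prod-cluster-01 gpu-workers
```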

Create

Click Create Node Group. Nodes are provisioned and join the cluster.
The node group appears in the list, and its nodes report STATUS Ready in kubectl.
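
To confirm provisioning finished, you can check both sides: the node group status in OpenStack and node readiness in Kubernetes. The cluster and group names below mirror the earlier examples:

```shell
# Check the node group's provisioning status
openstack coe nodegroup show prod-cluster-01 gpu-workers

# Verify the new nodes have joined and show STATUS Ready
kubectl get nodes -o wide
```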

Scale a Node Group

On the cluster detail page, click the Node Groups tab. Find the node group and click Actions → Resize. Enter the new node count and confirm.
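
The same resize is available from the CLI; a sketch assuming the Magnum plugin for the OpenStack client, with names carried over from the examples above:

```shell
# Resize the gpu-workers node group to 5 nodes
openstack coe cluster resize --nodegroup gpu-workers prod-cluster-01 5
```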

Schedule Workloads to a Specific Node Group

Use Kubernetes node selectors or taints and tolerations to target workloads to a specific node group.
Select nodes by flavor
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: g1.xlarge
Apply a taint to all nodes in a node group to ensure only workloads with the matching toleration are scheduled there:
Taint all GPU nodes
kubectl taint nodes -l ng=gpu-workers \
  node-type=gpu:NoSchedule
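
Pods that should run on the tainted nodes must carry a matching toleration. A minimal pod spec fragment, where the key, value, and effect mirror the taint applied above (the `ng` node label is the same assumed label used in the taint command):

```yaml
spec:
  tolerations:
    - key: node-type
      operator: Equal
      value: gpu
      effect: NoSchedule
  nodeSelector:
    ng: gpu-workers   # optional: also pin the pod to the labeled nodes
```

Note that a toleration alone only permits scheduling onto the tainted nodes; combining it with a nodeSelector (or node affinity) is what actually pins the workload there.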

Delete a Node Group

Deleting a node group removes all nodes in the group and evicts all pods running on them. Ensure workloads have been migrated to other node groups before deletion.
Delete a node group
openstack coe nodegroup delete prod-cluster-01 gpu-workers
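
To migrate workloads gracefully before deletion, the group's nodes can be drained first. A sketch assuming the nodes carry the `ng=gpu-workers` label used in the taint example above:

```shell
# Cordon and drain every node in the group so pods reschedule elsewhere
for node in $(kubectl get nodes -l ng=gpu-workers -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```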

Next Steps

Scale Cluster

Resize the default node group for overall cluster capacity changes.

Cluster Upgrades

Upgrade Kubernetes version across all node groups.

Access Cluster

Configure kubectl to connect to your cluster and verify node readiness.

Troubleshooting

Resolve node group creation and scaling failures.