Overview

The container runtime is specified in the cluster template and determines how container images are pulled, started, and managed on cluster nodes. Xloud K8SaaS supports containerd as the recommended runtime for all Kubernetes versions 1.24 and above. Support for Docker Engine (via the dockershim adapter) was removed from upstream Kubernetes in version 1.24.

Supported Runtimes

| Runtime | Status | Kubernetes Support | Recommended For |
| --- | --- | --- | --- |
| `containerd` | Recommended | 1.20+ | All production clusters on Kubernetes 1.24+ |
| `docker` | Deprecated | Removed in 1.24 | Legacy clusters only |
Do not create new cluster templates with the docker runtime. Docker as the Kubernetes container runtime was removed in Kubernetes 1.24. All new templates must use containerd.

Configure Runtime in a Template

Set the container runtime via the `container_runtime` label in the cluster template.

Create template with containerd runtime:

```shell
openstack coe cluster template create k8s-1.29-standard \
  --coe kubernetes \
  --image fedora-coreos-39 \
  --labels container_runtime=containerd \
  ...
```
Verify the runtime label on an existing template:

```shell
openstack coe cluster template show k8s-1.29-standard \
  -f value -c labels
```

Expected output includes `container_runtime=containerd`.
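Because the labels column is printed as a single string, a small filter can pull out just the runtime value. The `runtime_from_labels` helper below is a sketch, not part of K8SaaS; it assumes either the dict-style labels formatting shown by recent openstack clients or plain `key=value` pairs:

```shell
# runtime_from_labels: read a labels string on stdin and print the value of
# the container_runtime label (prints nothing if the label is absent).
runtime_from_labels() {
  grep -o "container_runtime[^,}]*" | sed "s/^container_runtime[^a-z]*//; s/['\"]//g"
}

# Usage (assumes the template from the example above exists):
#   openstack coe cluster template show k8s-1.29-standard -f value -c labels \
#     | runtime_from_labels
```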

containerd Configuration

The containerd runtime is pre-configured by the cluster node bootstrap script. The default settings are suitable for most deployments:
| Setting | Default | Description |
| --- | --- | --- |
| CRI socket | `/run/containerd/containerd.sock` | Standard CRI socket path |
| Pause image | Configured by K8SaaS bootstrap | Kubernetes pause container image |
| Sandbox image | `registry.k8s.io/pause:3.9` | Infrastructure sandbox container |
| Image pull policy | `IfNotPresent` | Default pull policy for workload containers |
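For reference, the socket and sandbox-image defaults above correspond to entries like the following in containerd's `config.toml`. This is an illustrative excerpt only; the exact file generated by the K8SaaS bootstrap may differ:

```toml
# /etc/containerd/config.toml (excerpt, illustrative)
version = 2

[grpc]
  # Standard CRI socket path
  address = "/run/containerd/containerd.sock"

[plugins."io.containerd.grpc.v1.cri"]
  # Infrastructure sandbox (pause) container image
  sandbox_image = "registry.k8s.io/pause:3.9"
```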

Private Registry Configuration

If your organization uses an internal container registry, configure it in the cluster template using the `insecure_registry` label.

Template with internal registry:

```shell
openstack coe cluster template create k8s-internal-registry \
  --coe kubernetes \
  --labels container_runtime=containerd \
  --labels insecure_registry=registry.xloud.local:5000 \
  ...
```
For HTTPS-enabled internal registries, configure the CA certificate via a custom bootstrap script or a ConfigMap deployed to the cluster after provisioning.
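On the node side, containerd expresses per-registry trust settings in a `hosts.toml` file under `/etc/containerd/certs.d/`. The fragment below is an illustrative sketch for the example `registry.xloud.local:5000` registry, not a file K8SaaS is documented to generate:

```toml
# /etc/containerd/certs.d/registry.xloud.local:5000/hosts.toml (illustrative)
server = "https://registry.xloud.local:5000"

[host."https://registry.xloud.local:5000"]
  # Trust an internal CA for an HTTPS-enabled registry
  ca = "/etc/containerd/certs.d/registry.xloud.local:5000/ca.crt"
  # Or, for a self-signed certificate without a distributed CA
  # (not recommended for production):
  # skip_verify = true
```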

Verify Runtime on Running Nodes

After cluster deployment, confirm containerd is active on all nodes:
Check the runtime on all nodes:

```shell
kubectl get nodes \
  -o custom-columns='NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion'
```

Expected output for each node: `containerd://1.7.x`
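To make this check scriptable, the kubectl output can be filtered so that only non-containerd nodes are printed; empty output means every node passes. The `filter_non_containerd` helper below is a sketch, not part of K8SaaS:

```shell
# filter_non_containerd: given "NAME RUNTIME" lines on stdin, print the
# names of nodes whose runtime is not containerd.
filter_non_containerd() {
  awk '$2 !~ /^containerd:/ { print $1 }'
}

# Usage against a live cluster (assumes kubectl is configured):
#   kubectl get nodes --no-headers \
#     -o custom-columns='NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion' \
#     | filter_non_containerd
```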

Next Steps

Network Drivers

Configure the CNI plugin for cluster network policy enforcement.

Template Management

Create and publish public templates with the correct runtime configuration.

Security

Harden container runtime configuration for production clusters.

Cluster Drivers

Review the provisioning driver that uses the template runtime configuration.