In our previous blog, we explored a GitOps use case for on-premises infrastructure, managing multiple clusters hosted on the k3s Kubernetes distribution using k0rdent.
But the platform engineering ecosystem is vast, and one blog barely scratches the surface of what it takes to manage multi-cluster environments with ease, or to make the most of different Kubernetes distributions.
Ultimately, success isn’t about running Kubernetes, it’s about running it at scale, efficiently, and consistently.
That’s exactly what hosted control planes are designed to achieve.
The scale problem nobody talks about enough
How do you manage dozens or hundreds of clusters without costs and complexity spiralling out of control?
Open infrastructure is neither small nor shrinking. In fact, most practitioners I encounter day-to-day are running their workloads on OpenStack. And if you’re on OpenStack, the challenge of managing multi-cluster applications doesn’t just exist, it compounds. Every new cluster adds overhead, and that overhead adds up fast.
This blog explores how combining k0s, k0rdent, and Hosted Control Planes (HCP) can give you a scalable, cost-efficient, and production-ready Kubernetes platform on OpenStack.
What are we solving?
In a typical Kubernetes setup, every cluster ships with its own dedicated control plane, meaning at least three dedicated nodes per cluster for a highly available control plane alone. Multiply that across dev, staging, and production environments, and you’re burning through resources before your first workload even lands.
This is the problem Hosted Control Planes were built to solve.
Instead of running the API server, etcd, and controllers on dedicated nodes per cluster, HCP runs all of them inside a central management cluster. The result is fewer VMs, lower costs, simpler upgrades, and a single pane of control across your entire fleet.

The combination of:
- k0s (lightweight Kubernetes)
- k0rdent (multi-cluster orchestration)
- OpenStack (private cloud infrastructure)
creates a powerful platform engineering stack.
1. Prepare your environment
Before you touch Kubernetes, make sure your base environment is correct. Most failures happen here.
Infrastructure requirements
You need:
- One Linux VM (Ubuntu 20.04/22.04 recommended) for the management cluster
- Minimum: 4 CPU, 8 GB RAM (more is better)
- Access to an OpenStack project with:
  - Enough quota (instances, volumes, floating IPs)
  - A working network + subnet + router
  - A usable image (Ubuntu cloud image)
  - At least one flavor (e.g., m1.medium)
Tools to install on your VM:
sudo apt update
sudo apt install -y curl wget jq unzip
Install kubectl:
curl -LO https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
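A quick check that the binary is on your PATH and executable:
kubectl version --client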
Install Helm:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Install OpenStack CLI:
pip install python-openstackclient
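On a fresh Ubuntu VM, pip itself may not be installed yet. A minimal sketch, assuming Python 3 is present:
sudo apt install -y python3-pip
pip3 install python-openstackclient
openstack --version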
2. Create the management cluster using k0s
k0s is used because it is lightweight and simple, which is ideal for a management cluster.
Install k0s:
curl -sSLf https://get.k0s.sh | sudo sh
Initialize and start the controller:
sudo k0s install controller --single
sudo k0s start
Wait for about 30–60 seconds, then export the kubeconfig:
mkdir -p ~/.kube
sudo k0s kubeconfig admin > ~/.kube/config
Verify:
kubectl get nodes
You should see one node in Ready state.
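You can also ask k0s directly for the health of its components:
sudo k0s status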
3. Install k0rdent on the management cluster
k0rdent runs as controllers inside your management cluster and handles cluster lifecycle.
Install k0rdent (the kcm chart is pulled directly from an OCI registry, so no helm repo add is required):
helm install kcm oci://ghcr.io/k0rdent/kcm/charts/kcm --version 1.8.0 -n kcm-system --create-namespace
Verify installation:
kubectl get pods -n kcm-system
Wait until all pods are in Running state. If they are not, check logs before proceeding.
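Instead of polling by hand, you can block until everything is ready (the 10-minute timeout here is an arbitrary choice):
kubectl wait --for=condition=Ready pod --all -n kcm-system --timeout=600s
If a pod stays Pending or crash-looping, describe it; the exact pod names depend on the chart version:
kubectl describe pod <POD_NAME> -n kcm-system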
4. Configure OpenStack access
This step is critical. k0rdent needs credentials to create VMs on your behalf.
Load your OpenStack credentials
You should have an openrc.sh file from your OpenStack environment.
source openrc.sh
Verify access:
openstack server list
openstack network list
openstack image list
If these commands fail, fix OpenStack access before moving forward.
5. Create Kubernetes secret for OpenStack credentials
k0rdent expects a clouds.yaml format.
Create a file named cloud-config.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: openstack-cloud-config
  namespace: kcm-system
type: Opaque
stringData:
  clouds.yaml: |
    clouds:
      openstack:
        auth:
          auth_url: https://<OPENSTACK_AUTH_URL>
          username: <USERNAME>
          password: <PASSWORD>
          project_name: <PROJECT_NAME>
          user_domain_name: Default
          project_domain_name: Default
        region_name: RegionOne
        interface: public
        identity_api_version: 3
Apply it:
kubectl apply -f cloud-config.yaml
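Confirm the secret landed where k0rdent expects it:
kubectl get secret openstack-cloud-config -n kcm-system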
6. Create k0rdent credential object
This tells k0rdent to use the OpenStack secret.
apiVersion: k0rdent.mirantis.com/v1beta1
kind: Credential
metadata:
  name: openstack-credential
  namespace: kcm-system
spec:
  type: openstack
  secretRef:
    name: openstack-cloud-config
Apply:
kubectl apply -f credential.yaml
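To confirm the object was accepted (assuming the CRD registers the plural name credentials, in line with k0rdent's other resources):
kubectl get credentials -n kcm-system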
7. Identify required OpenStack resources
You must use real names from your OpenStack environment.
Run:
openstack network list
openstack subnet list
openstack router list
openstack image list
openstack flavor list
You will need:
- External network name (e.g., public)
- Image name (e.g., ubuntu-20.04)
- Flavor (e.g., m1.medium)
Note: If these values are wrong, cluster creation will fail silently or partially.
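To double-check a specific value before you commit to it, query it directly. These use the example names from above, so substitute your own:
openstack network show public
openstack image show ubuntu-20.04
openstack flavor show m1.medium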
8. Create your ClusterDeployment (core step)
This is where you define your Kubernetes cluster declaratively. Here is how we defined our ClusterDeployment; adapt it to your needs:
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ClusterDeployment
metadata:
  name: openstack-hcp
  namespace: kcm-system
spec:
  template: openstack-hosted-cp
  credential: openstack-credential
  config:
    workersNumber: 2
    flavor: m1.medium
    image:
      filter:
        name: ubuntu-20.04
    externalNetwork:
      filter:
        name: public
    identityRef:
      name: openstack-cloud-config
      cloudName: openstack
Apply:
kubectl apply -f clusterdeployment.yaml
9. Observe cluster creation
Now the system starts working in the background.
Watch resources:
kubectl get clusterdeployments -n kcm-system
kubectl get pods -n kcm-system
Also monitor OpenStack:
openstack server list
You should see worker VMs being created.
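A convenient way to watch both sides at once, assuming a Linux shell with watch available and your openrc.sh still sourced:
watch -n 15 'kubectl get clusterdeployments -n kcm-system; openstack server list'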
Important point:
- Control plane components run inside the management cluster
- Only worker nodes are created in OpenStack
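You can see that hosted control plane running as ordinary pods in the management cluster. The exact names depend on the template, so treat this as a sketch:
kubectl get pods -A | grep openstack-hcp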
10. Retrieve kubeconfig of the new cluster
Once the cluster is ready, k0rdent creates a secret containing kubeconfig.
kubectl get secret openstack-hcp-kubeconfig \
-n kcm-system \
-o jsonpath="{.data.value}" | base64 -d > kubeconfig.yaml
Use it:
kubectl get nodes --kubeconfig=kubeconfig.yaml
You should see your OpenStack worker nodes.
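To avoid passing the flag on every command, export the kubeconfig for your session:
export KUBECONFIG=$PWD/kubeconfig.yaml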
11. Validate with a real workload
Deploy something simple to the new cluster (drop the --kubeconfig flag if you exported KUBECONFIG above):
kubectl run nginx --image=nginx --kubeconfig=kubeconfig.yaml
kubectl get pods --kubeconfig=kubeconfig.yaml
Expose it if needed:
kubectl expose pod nginx --port=80 --type=NodePort --kubeconfig=kubeconfig.yaml
This confirms the cluster is functional.
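To reach the service from outside the cluster, look up the assigned NodePort and a node address. Whether the node is reachable depends on your OpenStack networking and security groups, so treat this as a sketch:
kubectl get svc nginx --kubeconfig=kubeconfig.yaml
curl http://<NODE_IP>:<NODE_PORT>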
12. Test scaling (important for demo and validation)
Edit cluster:
kubectl edit clusterdeployment openstack-hcp -n kcm-system
Change:
workersNumber: 3
Watch OpenStack again:
openstack server list
A new VM should be created.
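If you prefer something scriptable over kubectl edit, a merge patch achieves the same result, assuming workersNumber sits under spec.config as in the manifest above:
kubectl patch clusterdeployment openstack-hcp -n kcm-system --type merge -p '{"spec":{"config":{"workersNumber":3}}}'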
This proves that declarative scaling works: k0rdent reconciles the actual infrastructure toward the desired state.
It’s easy to walk away thinking you’ve just provisioned a cluster. You haven’t. What we built is fundamentally different:
A centralized control plane architecture
In a conventional setup, every cluster is an island: its own API server, its own etcd, its own everything. Multiply that by fifty and you don’t have a platform, you have a sprawl. Teams spend more time keeping clusters alive than using them.
Centralized control plane architecture breaks this pattern. All control planes live inside one management cluster, one place for state, one place for API requests, one place to go when something needs attention. The management cluster becomes the brain. Workload clusters become its extensions.
This isn’t just an infrastructure optimization. It’s an architectural shift in how responsibility is distributed.
A declarative cluster provisioning system
In the old world, provisioning a cluster meant scripts, runbooks, and CLI commands fired in a specific order. It was imperative and it lived in someone’s head.
With k0rdent, you describe what you want. The system figures out how to get there. Provisioning becomes reproducible, auditable, and version-controlled by design — not by accident.
A multi-cluster platform foundation
A single cluster is a tool. A well-architected multi-cluster system is a platform built to serve many teams, many workloads, and many environments consistently, without every consumer needing to understand what’s underneath.
That’s what we’ve laid down here. Onboard new clusters without rethinking the architecture. Enforce policies from a single point. Upgrade and observe your entire fleet without treating each cluster as a unique snowflake.
The core shift: From cluster-centric to platform-centric
Before this architecture, each cluster managed itself, and your attention and tooling fragmented across however many clusters you had. After this architecture, one system manages all clusters. Every new cluster is just another expected, handled event.
You stop thinking of yourself as someone who manages clusters, and start thinking as someone who operates a system that manages clusters. That shift is what lets teams scale infrastructure without scaling their operational toil at the same rate.
You didn’t just build a cluster. You built the system that builds clusters.
Check out more from the k0s and k0rdent communities
The stack we’ve walked through in this blog (k0s, k0rdent, and OpenStack) isn’t just a technical combination. It represents a growing ecosystem of open-source contributors, platform engineers, and infrastructure practitioners who are actively shaping what production Kubernetes looks like at scale.
Both projects are open source, actively maintained, and genuinely community-driven. If anything in this blog sparked a question, a use case idea, or even a disagreement, the communities are the right place to take it.
If you want to explore k0rdent further, dig into the project on GitHub and bring your questions or feedback to the #k0rdent channel on the CNCF Slack. It’s an active space where contributors and users are building in the open.
For k0s, the GitHub repository is the best place to follow development and contribute. On the Kubernetes Slack, the #k0s-users channel is where you’ll find practitioners sharing real-world experience, and #k0s-dev is where the core development conversation happens; both are worth joining if you want to go deeper or contribute upstream.
The best way to learn this stack isn’t just to read about it: it’s to build with it, break it, and ask questions alongside people who are doing the same.