Guest post originally published by Brae Troutman, Red Hat

As the Open Cluster Management (OCM) project and community continue to grow, we felt it was important to start periodically communicating what we are doing with the project, where we are heading, and how you can get involved! As the curtain slowly closes on 2022, let’s take a closer look at the latest developments in the project!

About Open Cluster Management

OCM is an open source project that simplifies the management of multiple Kubernetes clusters in the cloud, on-premises, or at the edge. It provides the basic primitives for understanding a cluster inventory and for orchestrating the distribution and scheduling of workloads across those clusters. OCM also provides a powerful extensibility, or “add-on”, framework that enables integrations with other popular CNCF projects and lets you inject customized behavior into clusters for your specific scenarios.

OCM is a CNCF Sandbox project (accepted in 2021) with maintainers from more than 15 organizations, spanning cloud-native and solution vendors. We consider it a viable alternative to the Kubernetes SIG KubeFed project and are committed to carrying on the multicluster management mission after KubeFed’s archival.

Website: https://open-cluster-management.io/

GitHub: https://github.com/open-cluster-management-io/OCM 

Release Notes

We are excited to announce the availability of OCM 0.9.0. In this release, we’ve improved security on managed clusters and made exposing services within clusters to your hub a more flexible process. You can follow along with our project roadmap here, but keep reading for the highlights of this release and what we have lined up for the future.

Release 0.9.0 Highlights

De-escalate Work Agent Privilege on Managed Clusters [issue]

In previous iterations of OCM, the work agent process ran with admin privileges on managed clusters. In this release, to exercise the principle of least privilege, OCM supports defining a non-root identity within each ManifestWork object, allowing end users to grant the agent only the permissions it needs to apply that workload on the clusters it manages.
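For illustration, here is a minimal sketch of a ManifestWork that applies its manifests under a dedicated service account instead of the agent’s default identity; the executor-sa service account and the ConfigMap payload are hypothetical placeholders:

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1            # the managed cluster's namespace on the hub
  name: example-work
spec:
  # Apply the manifests below as this service account on the managed cluster,
  # rather than with the work agent's default (admin) privileges.
  executor:
    subject:
      type: ServiceAccount
      serviceAccount:
        namespace: default
        name: executor-sa        # hypothetical; must exist and hold the required RBAC
  workload:
    manifests:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: default
          name: example-config
        data:
          key: value
```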

Allow Targeting Specific Services within Managed Clusters [issue]

The cluster-proxy add-on supports exposing services from within managed clusters to hub clusters, even across Virtual Private Clouds. Originally, all traffic was routed through the Kubernetes API server on each managed cluster, increasing load on the node hosting the API server. Now the proxy agent add-on supports specifying a particular target service within a cluster, allowing for better load balancing of requests made by hub clusters and more granular control over which resources/APIs are exposed to hub clusters.
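As a sketch, a target service can be declared with a ManagedProxyServiceResolver resource; the cluster set and service names below are hypothetical, and the exact fields may differ in your deployed version of the add-on:

```yaml
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedProxyServiceResolver
metadata:
  name: example-resolver
spec:
  # Select which managed clusters this resolver applies to.
  managedClusterSelector:
    type: ManagedClusterSet
    managedClusterSet:
      name: example-clusterset       # hypothetical ManagedClusterSet
  # The in-cluster service to expose to the hub, bypassing the kube-apiserver.
  serviceSelector:
    type: ServiceRef
    serviceRef:
      namespace: default
      name: my-target-service        # hypothetical Service
```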

Support referencing the AddOn configuration with AddOn APIs [enhancement]

Some add-ons need configuration to run. This release enhances the add-on APIs to support referencing an add-on configuration, and the add-on framework can now trigger a re-render of the add-on deployment whenever its configuration changes.
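As a minimal sketch, a ManagedClusterAddOn can reference a configuration resource such as an AddOnDeploymentConfig; the add-on and resource names below are placeholders:

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: helloworld                 # hypothetical add-on
  namespace: cluster1              # the managed cluster's namespace on the hub
spec:
  installNamespace: open-cluster-management-agent-addon
  # Reference this add-on's configuration; the add-on framework re-renders
  # the add-on deployment whenever the referenced resource changes.
  configs:
    - group: addon.open-cluster-management.io
      resource: addondeploymentconfigs
      namespace: open-cluster-management
      name: helloworld-config      # hypothetical AddOnDeploymentConfig
```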

Other Updates

Besides the three highlights, general bug fixes, and cleanup mentioned above, this release also makes progress on several multi-release issues and on updates to the ClusterSet and Placement APIs:

What’s Next?

Upcoming updates for OCM aim to better integrate with current Kubernetes community standards, to make contributing to the project more accessible and approachable, and to package current functionality for resource-efficient edge deployment.

OCM Control Plane Standalone Executable [issue]

As part of an ongoing effort to adapt OCM for the edge, we’ve produced a prototype of a standalone binary of the OCM control plane (demoed here by @ycyaoxdu). Over the next release, we plan to develop the concept further by removing unused Kubernetes APIs from the apiserver for interacting with nodes, pods, and other resources that are more granular than the cluster-level objects OCM manages.

ArgoCD Integration – Pull Model [issue]

In current versions of OCM, the ArgoCD controller on a hub cluster is responsible for communicating directly with managed clusters to enact the desired application state from manifests in source repositories. This bypasses the secure communication channel OCM establishes between hub and managed clusters, and puts the workload of checking and updating managed cluster state on the hub cluster. We plan to introduce MulticlusterApplicationSet, a custom resource definition that gathers ArgoCD application templates to be propagated to managed clusters through the OCM secure communication channel. From there, ArgoCD controllers on the managed clusters can evaluate the templates and deploy the applications.
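The API is still under design, but conceptually a hub-side resource might look something like the following sketch; the field layout here is our illustration, not a finalized schema:

```yaml
# Illustrative only: the MulticlusterApplicationSet API is still being
# designed, so every field below is a hypothetical sketch of the pull model.
apiVersion: apps.open-cluster-management.io/v1alpha1
kind: MulticlusterApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  # An OCM Placement that selects which managed clusters receive the template.
  placementRef:
    name: guestbook-placement      # hypothetical Placement
  # ArgoCD Application template, delivered over the OCM channel and
  # reconciled by the ArgoCD controller running on each managed cluster.
  template:
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps
        path: guestbook
        targetRevision: HEAD
      destination:
        server: https://kubernetes.default.svc
```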

Spread Placement Policy + Topology Aware Scheduling [issue|issue]

When managing applications across multiple clouds and cluster types, some use cases require specifying which type of cluster, and which cloud, a given workload should be deployed to. We want to package functionality that supports three particular types of deployment policies when placing applications on clusters: cluster affinity, cluster anti-affinity, and spread. These specify, respectively, which clusters we prefer for application placement, which ones we do not, and how the target clusters should be distributed.
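Here is a sketch of what a spread constraint on the Placement API could look like under this proposal; the field names may change as the enhancement lands, and the “region” topology key is a hypothetical cluster property:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: example-placement
  namespace: default
spec:
  numberOfClusters: 4
  # Proposed spread policy: distribute the selected clusters evenly
  # across the values of a topology key such as "region".
  spreadPolicy:
    spreadConstraints:
      - topologyKey: region              # hypothetical cluster claim/label
        topologyKeyType: Claim
        maxSkew: 1
        whenUnsatisfiable: ScheduleAnyway
```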

Proxy-Aware Communication Between Hub & Managed Clusters [issue]

In multicluster architectures, some managed clusters may only be reachable through a proxy server, which today requires either complex configuration on the proxies or bars those managed clusters from being registered at all through the OCM certificate-based authentication protocol. By allowing token registration as an alternative to a Certificate Signing Request flow between the hub cluster and the managed cluster’s proxy, OCM becomes flexible enough to establish secure connections with a wider range of cluster configurations.

Contributor Shoutout

The beauty of open source software is in the diversity of experience and contributions a community attracts. OCM boasts over 70 contributors spanning more than 15 companies. In a project like ours, every member of the community plays an important and appreciated role in the software’s growth and development, both through code contributions and through the community building and engagement that keep an open source project healthy.

With that in mind, we are pleased to welcome several new contributors to the community over the last ~90 days of the OCM release cycle:

Of course, a community project wouldn’t be much good without people using it! Here are some users and their organizations who are exploring and using OCM in their multicluster application orchestration:

Learn More

Recent Events

KubeCon (10/24/22 – 10/28/22)

Get Involved

Interested in what you see here, or want to contribute to Open Cluster Management? Connect with us! Do you:

Have questions? -> Open Cluster Management in K8s Slack

Have issues or contributions? -> OCM Organization on Github

Want regular project updates? -> Gmail Group

Want to talk live? -> Weekly Community Zoom Meetings

And if you don’t know where to start, check out our project website to get an overview of OCM: its use cases, architecture, and how you can get involved!