We are entering an era where Governance is increasingly important: with AI systems generating code and becoming a critical part of applications’ runtime infrastructure, we can produce outputs at an ever-faster pace. Organizations and individuals need to ensure that there are appropriate guardrails on the inputs and outputs of these systems to keep workloads safe and conformant with our expectations.

Governance ensures that a system or group of systems operates in line with organizational expectations, complies with regulations, and achieves strategic objectives. We created the Automated Governance Maturity Model guidance as part of TAG-Security to help organizations build roadmaps and assess their progress along the journey of not only developing Governance, but also automating the Evaluation, Enforcement, and Audit activities that correspond to it.

The contributors to this effort were Matt Flannery, Pedro Ignácio, Brandt Keller, Eddie Knight, and Jon Zeolla.

How to use the Maturity Model

While we wrote the Maturity Model primarily with Security roles in mind, we also expect roles like Auditors, Control or Product Owners, or Control Implementers (Platform teams, SREs, DevOps, Developers) to benefit from reading or using this document.

The Automated Governance Maturity Model assesses organizational capabilities across four core categories: Policy, Evaluation, Enforcement, and Audit. These categories outline practices for leveraging automation and embedding modern approaches, like rapid feedback loops and data-driven decision-making, into traditionally manual administrative tasks such as writing policies and standards or managing audits.

The Maturity Model is designed for usability and simplicity. It consists of over 50 practices, each of which can be self-assessed independently; no practice depends on another. These items can also be assessed within a specific scope, such as a single product or business unit, or organization-wide. You simply check the box next to a practice to confirm that it is being performed in your organization, then tally all checked boxes and compare your results against our rubric to identify your overall grade.
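
To make the tally-and-grade flow concrete, here is a minimal sketch of a self-assessment in Python. The four category names come from the model itself, but the practice counts, grade labels, and thresholds below are hypothetical placeholders; substitute the actual practices and rubric from the published Maturity Model.

```python
# Minimal self-assessment tally. The category names are from the model;
# the example practices, grade labels, and thresholds are HYPOTHETICAL
# placeholders -- use the real practices and rubric from the
# Automated Governance Maturity Model.

CATEGORIES = ["Policy", "Evaluation", "Enforcement", "Audit"]

def grade(checked: dict[str, list[bool]]) -> str:
    """Tally checked practices across all categories and map the total
    to a grade using placeholder thresholds."""
    total = sum(sum(practices) for practices in checked.values())
    # Hypothetical rubric cutoffs; consult the published model for real ones.
    if total >= 40:
        return "Advanced"
    if total >= 25:
        return "Intermediate"
    return "Beginning"

# Example: an organization that has checked a handful of practices
assessment = {
    "Policy": [True, True, False],
    "Evaluation": [True, False],
    "Enforcement": [False, False],
    "Audit": [True],
}
print(grade(assessment))  # 4 practices checked -> "Beginning"
```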

If you’re interested in this work and would like to provide feedback on what we’ve done so far, please reach out to the team directly, or via the #tag-security channel in the CNCF Slack! Future work may include additional practices, new categories, or supplementary guidance and materials based on feedback.