In part three of this series on Policy as Code, we’ll look at the intersection between PaC and AI coding assistants. AI coding tools have fundamentally changed software development. “Vibe coding”, or chat-based coding, where you ask an LLM to write code, feed it the results, and ask it to make changes in a continuous loop, is now commonplace. But this creates a challenge: as development teams come under pressure to deliver code faster with smaller (human) teams, how can they maintain the expected levels of security and quality?

Additionally, AI tools are just as susceptible as humans to writing vulnerable code: using libraries in an unsafe way, introducing business logic flaws because they don’t fully understand the context, or building overly complex point solutions when the organization already has a defined design pattern or library, with centralized logic and expectations, for that problem.

They can also make changes to both the code and the tests, meaning the bugs they introduce are often subtle and don’t cause the corresponding tests to fail. With small changes this is easily caught during review, but with larger updates (as is typical when vibe coding) these bugs can easily slip through.

Automated Governance

Enter the concept of Automated Governance, which uses code to oversee changes, such as those made by AI coding tools, and which can be implemented with Policy as Code.

In TAG Security’s recent Automated Governance Maturity Model, we broke this down into four components.

There are a number of ways Policy as Code can help govern and oversee AI-generated code:

Code Quality & Maintainability

Security & Compliance

Architectural Standards & Resilience

Feedback & Continuous Improvement

Automated controls let you better govern how your engineers use AI solutions day to day. AI solutions evolve fast (and faster all the time), so having good guardrails in place at the platform level lets you address the security concerns that come with agentic tools without holding your engineers back.
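
As a sketch of what one such guardrail might look like, the hypothetical Rego policy below could run in CI (for example via conftest) against a parsed dependency manifest and flag any package an AI assistant pulled in that isn’t on the organization’s approved list. The input shape and the approved_packages set are illustrative assumptions, not the API of any particular tool.

    package guardrails.dependencies

    import rego.v1

    # Illustrative allow-list; in practice this would come from data
    # maintained by the platform or security team.
    approved_packages := {"requests", "boto3", "pydantic"}

    # Flag any dependency in the parsed manifest that is not approved.
    deny contains msg if {
        some dep in input.dependencies
        not approved_packages[dep.name]
        msg := sprintf("dependency %q is not on the approved list", [dep.name])
    }

Because the policy lives in the same repository workflow as the code, updating the allow-list is itself just another reviewed change.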

How does this apply to Cloud Native?

Platform-level automated controls let engineers move quickly with AI assistants without taking on risks that the responsible engineers aren’t fully aware of. Because policies are code, they evolve at the same speed as the platform itself. This frees developers to stay “in flow” while the platform quietly blocks unsafe patterns and highlights areas for improvement, with no context switching required.
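
To illustrate the “quietly blocks unsafe patterns” part, here is a minimal sketch of an admission-time policy, assuming OPA is wired up as a validating admission webhook with the AdmissionReview request as its input (in a Gatekeeper setup the package and input paths would differ):

    package platform.admission

    import rego.v1

    # Reject workloads that request privileged containers, an unsafe
    # pattern the platform blocks no matter how the manifest was authored.
    deny contains msg if {
        some container in input.request.object.spec.containers
        container.securityContext.privileged == true
        msg := sprintf("container %q must not run privileged", [container.name])
    }

The developer never has to leave their editor to learn this rule exists; the platform simply refuses the change and reports why.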

Deterministic policy engines such as Kyverno and Open Policy Agent (OPA) excel at clear yes/no rules (“No public S3 buckets,” “Image must be signed”). But AI-generated changes frequently blur those lines. While there is no one-click, end-to-end solution for this today, that gray area can be covered with a layered approach combining deterministic PaC, LLM-based reviews, and human-in-the-loop reviews.
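
To make the deterministic layer concrete, a “No public S3 buckets” rule might look roughly like the following Rego, evaluated (for instance with conftest) against a JSON rendering of the Terraform configuration; the input layout shown here is an assumption for illustration.

    package policy.s3

    import rego.v1

    # ACLs that make a bucket publicly readable or writable.
    public_acls := {"public-read", "public-read-write"}

    # Deny any aws_s3_bucket resource configured with a public ACL.
    deny contains msg if {
        some name, bucket in input.resource.aws_s3_bucket
        public_acls[bucket.acl]
        msg := sprintf("S3 bucket %q must not use a public ACL", [name])
    }

Rules like this are cheap to run on every change; the LLM-based and human-in-the-loop layers then only need to cover the judgment calls that a deterministic rule can’t express.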