Guest post originally published on Fairwinds’s blog by Robert Brennan, Director of Open Source Software at Fairwinds

What is Fairwinds’ Polaris? Kubernetes Open Source Configuration Validation

Kubernetes is an incredibly powerful platform for deploying software. The level of flexibility it provides can accommodate nearly any use case, no matter how unique. This is the reason Kubernetes has been adopted by more than half of the Fortune 500. According to the “State of Kubernetes 2020 Report” by Dimensional Research and VMware, K8s adoption skyrocketed from 27% in 2018 to 48% in 2020.

But as with all tools, there’s a natural tradeoff between power and safety. There are millions of ways to configure Kubernetes and the workloads that it runs, and the vast majority of them are dangerous. It’s easy to introduce problems with security, efficiency, or reliability, often simply by forgetting to specify a particular field in the YAML configuration.

To deal with this issue, the community has converged on a set of best practices for configuring Kubernetes workloads. These are guidelines that should always be followed unless you have a very good reason not to. Fairwinds’ Polaris project was born as a way to help define and enforce these best practices.

An Example

Here’s an example Kubernetes Deployment, taken straight from the Polaris documentation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Can you tell what’s wrong with it? Probably not, unless you’re intimately familiar with Kubernetes configuration. But there are several fields left unspecified that could lead to serious issues.

CPU and Memory Settings

First off, it’s important to tell Kubernetes how much memory and CPU your application is expected to use. This allows Kubernetes to efficiently bin-pack your workloads onto the underlying Nodes that will run them, and gives it guidance as to when an application is misbehaving (for example, due to a memory leak).

A better container spec would look like:

      - name: nginx
        image: nginx:1.14.2
        resources:
          requests:
            memory: 512Mi
            cpu: 500m
          limits:
            memory: 1Gi
            cpu: 1000m

Health Probes

The example above is also missing Liveness and Readiness Probes. These settings tell Kubernetes how to check if your application is healthy and ready to serve traffic. Without a Liveness Probe, Kubernetes won’t be able to self-heal if your application freezes up; without a Readiness Probe, it might direct traffic to pods that aren’t fully ready yet.

Liveness and Readiness Probes require some application-specific knowledge, but typically poll a particular HTTP endpoint or run a Unix command to test that the application is responding appropriately. For example:

        livenessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080

Security Tightening

Many Kubernetes workload settings are “insecure by default”: they err on the side of granting your application permissions it may or may not need. For instance, every container is given a writable root filesystem by default, which can give an attacker the ability to replace system binaries or modify configuration.

A more secure container configuration would look like this:

      - name: nginx
        image: nginx:1.14.2
        securityContext:
          allowPrivilegeEscalation: false
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          capabilities:
            drop:
              - ALL

How Polaris Can Help

Polaris checks for all the issues above and many more. It comes with 24 built-in checks (as of May 2021). Checks are constantly being added to our library as users submit feedback and the community learns new and better ways of configuring workloads.

Each of our checks is defined in JSON Schema, the same schema language Kubernetes itself uses to validate the resources you’re adding to the cluster every time you run kubectl.

The simplest checks only take a few lines of configuration:

successMessage: Host network is not configured
failureMessage: Host network should not be configured
category: Security
target: Pod
schema:
  '$schema': http://json-schema.org/draft-07/schema
  type: object
  properties:
    hostNetwork:
      not:
        const: true

But we can leverage the full power of JSON Schema and Go Templates to create some pretty complex checks as well. You can take a look at the Polaris documentation to learn more about how to write your own custom Polaris checks, which can be really helpful if your organization has its own internal policies and best practices it wants to enforce.
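To make that concrete, here’s a sketch of what a custom check might look like. The check name, messages, and regex below are hypothetical examples of ours, not taken from the Polaris docs, so treat this as an illustration of the format rather than a copy-paste recipe:

```yaml
checks:
  # enable our hypothetical custom check at "danger" severity
  imageTagNotLatest: danger

customChecks:
  imageTagNotLatest:
    successMessage: Image is pinned to a specific tag
    failureMessage: Image should not use the latest tag
    category: Reliability
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          not:
            pattern: ^.*:latest$
```

Because the body of the check is plain JSON Schema, anything the schema language can express — patterns, enums, numeric ranges — can become an enforceable policy.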

Once you have your Polaris configuration set up (or you’re happy with the default configuration we’ve provided), Polaris can run in three different modes: as a Dashboard, showing you what resources in your cluster need attention; as an Admission Controller, blocking problematic resources from entering the cluster; or in CI/CD, vetting infrastructure-as-code before it gets checked in.
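For the CI/CD mode, the heart of the job is a single polaris audit invocation. Here’s a sketch of what that might look like in a GitLab-style pipeline; the job name, image tag, and manifest path are placeholder assumptions, and the flags shown should be checked against the CLI help for your Polaris version:

```yaml
# Hypothetical CI job: fail the build if the manifests in ./deploy
# contain any danger-level issue, or if the overall score dips below 90
polaris-audit:
  image: quay.io/fairwinds/polaris:4.0
  script:
    - polaris audit --audit-path ./deploy --set-exit-code-on-danger --set-exit-code-below-score 90
```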

Deploying with Confidence

Kubernetes is an incredibly powerful platform, but with great power comes great responsibility. It’s important to ensure you’re following best practices as you deploy into Kubernetes. If you neglect to validate your configuration, it could lead to security vulnerabilities, production outages, or cloud cost overruns.

Adding Polaris into your workflow — whether it’s in CI/CD, Admission Control, or just a passive Dashboard — can help you navigate these dangerous waters with confidence. And if you want to utilize Polaris across a fleet of clusters, or combine it with some other great Kubernetes auditing tools — such as Trivy for scanning container images and Goldilocks for right-sizing memory and CPU settings — check out Fairwinds Insights, a platform for auditing and enforcing policy in Kubernetes environments.

So whether you’re a seasoned Kubernetes expert or you’re just building your first clusters, make sure you’ve got some guardrails in place! Fairwinds’ Polaris and Insights are a great place to start.