Guest post by Jon Friesen, Nick Tate, and Cody Baker of DigitalOcean

We love Kubernetes and all it can do. We want all developers to benefit from it, so we decided to build a higher-level abstraction on top of it. As a full-featured platform as a service (PaaS), App Platform solves the operational side of taking an application from development to a highly scalable and resilient cloud native deployment powered by Kubernetes, while keeping the user experience as simple as possible.

Providing this kind of service means running many clusters across many data centers around the world. Scalability, redundancy, and performance are key, which led us to utilize several CNCF projects throughout the architecture, described in the sections below.

We rely heavily on the CNCF ecosystem because these tools allow truly scalable infrastructure that can handle the needs of user applications of any size. App Platform brings the power of these tools to developers without the headaches of setup and maintenance.

From Code to a Fully Deployed App

App Platform architecture


In Kubernetes clusters, nodes should be treated like cattle, not pets. Treating them as interchangeable makes systems more resilient, because nothing is tightly tied to the underlying infrastructure. If a node becomes unhealthy, or you want to scale down a pool during off-hours to save on costs, you simply destroy the node.

We took this core principle from Kubernetes and brought it up to the cluster level in App Platform. To manage clusters across regions and hardware topologies, we built a cluster reconciler that operates on clusters much as Kubernetes operates on the nodes of an individual cluster.

With a single command, we can orchestrate an entirely new Kubernetes cluster to be provisioned, taking into account the varying node pool types, setting up Cloudflare ingress, ensuring all of our custom admin workloads such as Istio and Fluent Bit are up and running, and so forth. This process enables us to have great flexibility in the size and scale of our system. 

This is also extremely important for certain types of upgrades. For critical control plane components such as Istio, which handles ingress networking, we prefer the safer alternative of provisioning an entirely new cluster rather than performing a live upgrade on a cluster with active traffic flowing through it. Once the new clusters are ready, we can instruct the App Platform reconciler to start safely migrating apps to them.

Detection & Building

App Platform meets developers where they are. For developers with application source code, we leverage Cloud Native Buildpacks to detect and build an OCI-formatted image. For devs with a Dockerfile, we leverage Kaniko. Devs with an existing CI workflow can also deploy pre-built images. That gives us two build options: Cloud Native Buildpacks and Dockerfiles (built with Kaniko).

Cloud Native Buildpacks

Buildpacks.io

Cloud Native Buildpacks standardize an abstract lifecycle and set of contracts for building an app. At a high level, the lifecycle separates this process into four phases: detection, analysis, build, and export.

App Platform implements Cloud Native Buildpacks functionality for app detection, which occurs when the user is setting up their app for the first time and again during the app build process. Initial detection involves cloning the application code into a pre-warmed environment and running the detection phase of the buildpacks to determine which group of buildpacks applies. We also extend the buildpack results with additional metadata, such as the language or framework detected, the types of App Platform components we think are supported, and recommended build and run commands. This information helps users understand how their application will be built and enables us to provide a simple creation process for their apps.
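As a purely illustrative sketch, the detection result might carry metadata along these lines; every field name and value here is hypothetical and not part of App Platform’s actual API:

```yaml
# Hypothetical detection output: field names and values are illustrative only,
# not App Platform's actual API.
detection:
  buildpacks:
    - heroku/nodejs-engine        # example buildpack IDs; the real group depends on the repo
    - heroku/nodejs-npm
  language: nodejs
  framework: express
  suggested_components:
    - service                     # e.g., a web service listening on an HTTP port
  suggested_build_command: npm run build
  suggested_run_command: npm start
```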

A build utilizes the entire CNB lifecycle, from detection through compilation to bundling everything up into an OCI image that is stored in DigitalOcean Container Registry. All of this occurs within a sandboxed Kubernetes Job, keeping code and configurations secure.
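As a rough sketch of what such a sandboxed build could look like (the image, command, and resource figures below are placeholders, not App Platform’s internal manifests), a Kubernetes Job gives each build a bounded, unprivileged, non-restarting environment:

```yaml
# Hypothetical build Job: image, command, and names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: build-example-app
  namespace: builds
spec:
  backoffLimit: 0                 # a failed build is reported, not retried blindly
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: builder
          image: registry.example.com/cnb-builder:latest   # placeholder builder image
          command: ["/workspace/run-build.sh"]             # placeholder build entrypoint
          securityContext:
            allowPrivilegeEscalation: false                # keep the build unprivileged
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
```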

A great open source community already exists around buildpacks, with buildpacks available today for many of the languages users frequently want. It was a no-brainer to back Cloud Native Buildpacks and build upon them for our app detection and build processes.

Dockerfile

Kaniko

The second approach provides a bit more depth and customizability by letting you define a Dockerfile as the instruction set for creating your container. We take this Dockerfile and use it to build your application. Traditionally this involves interacting with the Docker daemon, but for security reasons that is something we could not readily make available to end users’ build containers. This is where Kaniko comes into play: it runs the instructions of the Dockerfile entirely in an unprivileged container, taking a snapshot of the filesystem and uploading it after every instruction, without needing access to a Docker daemon.
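To make that concrete, here is a minimal sketch of running Kaniko’s executor as a Kubernetes Job. The repository, registry, and secret names are placeholders rather than App Platform’s actual configuration; the --context, --dockerfile, and --destination flags are the executor’s standard options:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-build-example                # placeholder name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: kaniko
          image: gcr.io/kaniko-project/executor:latest
          args:
            - --context=git://github.com/example/app.git             # placeholder repository
            - --dockerfile=Dockerfile
            - --destination=registry.example.com/example/app:latest  # placeholder registry
          volumeMounts:
            - name: registry-credentials
              mountPath: /kaniko/.docker     # Kaniko reads Docker credentials from here
      volumes:
        - name: registry-credentials
          secret:
            secretName: registry-credentials # placeholder push secret
            items:
              - key: .dockerconfigjson
                path: config.json
```

Because no step requires a privileged container or access to a Docker socket, the build can run safely alongside other workloads.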

Deployment

User apps are deployed in a Kubernetes cluster, and an app deployment is composed of standard Kubernetes pieces (e.g., Deployments and Services). Kubernetes makes it easy for us to scale user apps vertically and horizontally, as well as to apply resource restrictions that fit the appropriate plan.
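To make this concrete, the sketch below shows roughly how a single web component could map onto a Deployment and a Service; the names, namespace, image, and sizes are placeholders, not the exact objects App Platform generates:

```yaml
# Illustrative only: names, namespace, image, and sizes are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: app-example           # each app gets its own namespace (see Isolation below)
spec:
  replicas: 3                      # horizontal scaling
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/example/app:latest
          ports:
            - containerPort: 8080
          resources:               # vertical sizing / plan limits
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: app-example
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```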

Another core concept we borrowed from Kubernetes is describing configuration in code. All apps are defined by a declarative YAML configuration that we call an “App Specification.” Users can edit their app’s spec manually and push it using our doctl command line tool, or make changes in the App Platform web control panel. Both of these operations pass the app spec to our app reconciler, which validates, builds, and deploys it. In cases where the app’s source code is unchanged or the app spec changes do not affect the build, we can skip the build entirely, reuse the existing OCI image, and deploy the app with the new configuration in seconds.
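For reference, a minimal app spec might look roughly like the following; the structure follows the public App Spec schema, while the values are placeholders:

```yaml
# Minimal illustrative App Spec; consult the App Spec reference for the full schema.
name: example-app
region: nyc
services:
  - name: web
    github:
      repo: example/app            # placeholder repository
      branch: main
      deploy_on_push: true
    build_command: npm run build
    run_command: npm start
    http_port: 8080
    instance_count: 2
    instance_size_slug: basic-xs   # plan size; exact slugs vary
```

A spec like this can be created or updated with doctl, for example doctl apps update <app-id> --spec app.yaml, and the reconciler takes it from there.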

Traffic

App Platform’s containerized runtime pushes customers towards patterns that are highly scalable and highly available. Horizontal scaling of containers does much of the heavy lifting to achieve that on the runtime side. Our challenge on the networking side was delivering a solution that could scale to meet the needs of our largest customers while also being cost effective for small apps. Put more technically, we needed a load balancer with low cost overhead for distributing traffic to small app instances, but one that could also handle traffic beyond the maximum capacity of a single virtual machine. It was also important that a spike in traffic for one App Platform customer would not impact other customers.

We opted to front all of our services with Cloudflare which provides us with a robust content delivery network (CDN), load balancing, and world class DDoS protection. The CDN is composed of edge servers spread across the globe that cache content, dramatically reducing the load times of assets, such as static sites. The DDoS protection absorbs excessive traffic that matches a malicious pattern. 

Behind it, we have a scalable pool of nodes running Istio and serving as an ingress gateway. Istio receives the traffic from Cloudflare and routes it to the customer’s application pods over the Cilium overlay network. For static sites, Istio makes some minor transformations to the request and then routes it to the DigitalOcean Spaces backend.
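As an illustration of this kind of routing (not App Platform’s actual configuration), an Istio VirtualService attached to a shared ingress gateway can map an app’s hostname to the Service inside its namespace; the hostnames, gateway, and service names below are placeholders:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-app
  namespace: app-example
spec:
  hosts:
    - example-app.example.com             # placeholder app hostname
  gateways:
    - istio-ingress/public-gateway        # placeholder shared ingress gateway
  http:
    - route:
        - destination:
            host: web.app-example.svc.cluster.local   # the app's Service from the Deployment section
            port:
              number: 80
```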

App Platform's traffic architecture

Isolation

In Kubernetes terminology, each application is assigned its own namespace and we use NetworkPolicies to lock down service communication to resources within that namespace. 
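A minimal sketch of such a policy looks like the following (the namespace name is a placeholder); it applies to every pod in the app’s namespace and only admits traffic from pods in that same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: app-example            # placeholder app namespace
spec:
  podSelector: {}                   # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}           # ...but only from pods in this same namespace
```

In practice, additional rules would be needed for traffic that legitimately crosses the namespace boundary, such as requests arriving from the shared ingress gateway.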

App Platform's isolation architecture

That’s only part of the isolation battle, though; we also need to secure the actual runtime environment of the containers. Otherwise, a malicious user could attempt to break out of their container and access or take over other customer workloads or even the host system.

We explored a few technologies in this area and settled on leveraging gVisor for isolation. You can think of gVisor as a “mini” kernel defined in user space. It implements a subset of the system calls, /proc files, and so on that a host kernel supports, and it intercepts the system calls made by an application. It runs them in this sandboxed environment, forwarding only the very limited subset of calls it deems safe to the actual host kernel.

Another runtime solution we considered was Kata Containers, which is probably closer to the mental model you have of a cloud virtual machine. With Kata Containers, each container is wrapped in a lightweight virtual machine with its own kernel. Throughout our benchmarking, we determined that gVisor’s performance was a better fit for App Platform’s needs.

This is an area in which we are constantly re-evaluating and trying out new technologies and ideas. One of the great things about these runtimes is that they are all OCI compliant, and with Kubernetes as the orchestrator, we can register multiple runtimes in a single cluster. We can choose exactly which pods run under which runtime, making it extremely easy to test out new runtime technologies.
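As a concrete sketch of that mechanism, Kubernetes lets you register gVisor’s runsc handler as a RuntimeClass and opt individual pods into it. The manifests below follow the pattern from the upstream gVisor and Kubernetes documentation (they assume runsc is installed and configured on the nodes); the pod and image names are placeholders:

```yaml
# Register the gVisor runtime (assumes runsc is configured in the node's container runtime).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
# Opt a pod into the sandboxed runtime via runtimeClassName.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app               # placeholder name
  namespace: app-example
spec:
  runtimeClassName: gvisor          # this pod's containers run under gVisor
  containers:
    - name: web
      image: registry.example.com/example/app:latest
```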

Conclusion

App Platform pulls all of these technologies together, removing the complexity and operational investment that put them out of reach for most applications, and providing a first-class cloud native platform with minimal user effort. App Platform is built on the shoulders of giants, and we are eternally grateful to all of the individuals and organizations who have invested time and effort to create these amazing tools.
