Guest post by Jef Spaleta, Sensu, originally published on the Sensu blog
The appeal of running workloads in containers is intuitive, and there are numerous reasons to do so. Shipping a process with its dependencies in a package that just runs reduces the friction of organizational communication and operation. Relative to virtual machines, the size, simplicity, and reduced overhead of containers make a compelling case.
In a world where Docker has become a household name in technology circles, running production workloads in containers is an obvious choice, but real-world systems require many containers working together. Managing the army of containers you need for production workloads can become overwhelming. This is why Kubernetes exists.
Kubernetes is a production-grade platform for running workloads in containers. From a high level, the way it works is relatively straightforward.
You decide what your application needs to do. Then you package your software into container images. Following that, you document how your containers need to work together, including networking, redundancy, fault tolerance, and health probing. Ultimately, Kubernetes makes your desired state a reality.
But you need a few more details to be able to put it to use. In this post, I’ll help lay the groundwork with a few Kubernetes basics.
Building systems is hard. In constructing something nontrivial, one must consider many competing priorities and moving pieces. Further, automation and repeatability are prerequisites in today’s cultures that demand rapid turnaround, low defect rates, and immediate response to problems.
We need all the help we can get.
Containers make deployment repeatable and create packages that solve the problem of “works on my machine.” However, while it’s helpful having a process in a container with everything it needs to run, teams need more from their platforms. They need to be able to create multiple containers from multiple images to compose an entire running system.
The public cloud offerings for platform as a service give options for deploying applications without having to worry about the machines on which they run and elastic scaling options that ease the burden. Kubernetes yields a similar option for containerized workloads. Teams spell out the scale, redundancy, reliability, durability, networking, and other requirements, as well as dependencies in manifest files that Kubernetes uses to bring the system to life.
This means technologists have an option that provides the repeatability, replaceability, and reliability of containers, combined with the convenience, automation, and cost-effective solution of platform as a service.
What is Kubernetes?
When people describe Kubernetes, they typically do so by calling it a container orchestration service. This is both a good and incomplete way of describing what it is and what it does.
Kubernetes orchestrates containers, which means it runs multiple containers. Further, it manages where they operate and how to surface what they do — but this is only the beginning. It also actively monitors running containers to make sure they’re still healthy. When it finds containers not to be in good operating condition, it replaces them with new ones. Kubernetes also watches new containers to make sure not only that they’re running, but that they’re ready to start handling work.
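The health and readiness checks described above are declared per container as probes in the pod specification. Here's a minimal sketch of that idea; the endpoint paths, port, and nginx image are placeholders, not details from this post:

```yaml
# Fragment of a pod spec showing the two probe types.
containers:
  - name: webserver          # placeholder container name
    image: nginx:latest      # placeholder image
    livenessProbe:           # Kubernetes restarts the container if this fails
      httpGet:
        path: /healthz       # hypothetical health endpoint
        port: 80
    readinessProbe:          # traffic is withheld until this succeeds
      httpGet:
        path: /ready         # hypothetical readiness endpoint
        port: 80
```

The distinction matters: a failing liveness probe gets a container replaced, while a failing readiness probe only removes it from the pool handling requests.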
Kubernetes is a full-scale, production-grade application execution and monitoring platform. It was born at Google and then later open-sourced. It’s now offered as a service by many cloud providers, in addition to being runnable in your datacenter.
How do you use it?
Setting up a Kubernetes cluster can be complex or very simple, depending on how you decide to do it. At the easy end of the spectrum are the public cloud providers: Amazon's AWS, Microsoft's Azure, and Google Cloud Platform all have managed Kubernetes offerings (EKS, AKS, and GKE, respectively) you can use to get up and running quickly.
With your cluster working, you can think about what to do with it. First, you'll want to get familiar with the vocabulary Kubernetes introduces. This post covers only a subset of the terms you need to know; you can find additional terms defined more completely in our “How Kubernetes Works” post.
The most important concepts to know are pods, deployments, and services. I’ll define them below using monitoring examples from Sensu Go (for more on monitoring Kubernetes with Sensu, check out this post from CTO Sean Porter, as well as examples from the sensu-kube-demo repo).
- Pods: As a starting point, you can think of a pod as a container. In reality, a pod is one or more containers working together to serve a part of your system. There are reasons a pod may have more than one container, like a supporting Sensu Go agent process that monitors logs or application health metrics in a separate container. The pod abstraction takes care of the drudgery of making sure such supporting containers share network and storage resources with the main application container. Even so, thinking of a pod as housing a single container isn't harmful; many pods have exactly one.
- Deployments: A deployment manages a group of identical pods. You declare the desired number of replicas, and the deployment monitors to make certain that many pods remain running and healthy, replacing any that fail. Deployments work great for stateless workloads like web applications, where identical copies of the same application can run side by side to service requests without coordination.
- StatefulSets: Similar to deployments, but used for applications where copies of the same application must coordinate with each other to maintain state. StatefulSets manage the lifecycle of unique copies of pods. A Sensu Go backend cluster is a good candidate for a StatefulSet: each Sensu Go backend holds its own state in a volume mount and must coordinate with its peers over reliable networking links. The StatefulSet manages each requested copy of the Sensu Go backend pod as unique, making sure its networking and storage resources are reused if an unhealthy pod needs to be replaced.
- Services: Services expose the pods behind your deployments, giving them a stable network address and load balancing requests across them. This exposure can be to other workloads in the cluster and/or to the outside world.
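A service finds a deployment's pods through labels: the service's selector matches the labels on the pods. A minimal sketch of a service, with hypothetical names (webserver-svc, the app: webserver label) chosen for illustration:

```yaml
# Hypothetical Service exposing pods labeled app: webserver.
apiVersion: v1
kind: Service
metadata:
  name: webserver-svc
spec:
  selector:
    app: webserver      # matches the labels on the target pods
  ports:
    - port: 80          # port the service listens on
      targetPort: 80    # port the container serves on
  type: LoadBalancer    # expose outside the cluster; ClusterIP keeps it internal
```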
You interact with a cluster via the Kubernetes REST API. Rather than doing this by constructing HTTP requests yourself, you can use a handy command-line tool called kubectl.
Kubectl enables issuing commands against a cluster. These commands take the form below:
kubectl [command] [TYPE] [NAME] [flags]
There is a more complete overview of commands on the Kubernetes site.
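To make the pattern concrete, here are a few standard kubectl invocations broken down by those parts (the name my-pod is only an example):

```shell
kubectl get pods                # command=get, TYPE=pods: list all pods
kubectl get pod my-pod          # add a NAME to target one resource
kubectl get pods -o wide        # a flag changes the output format
kubectl delete pod my-pod       # command=delete: remove the named pod
```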
The kubectl tool can be easily installed with Homebrew on macOS, Chocolatey on Windows, or the appropriate package manager for your distribution on Linux. Better yet, recent versions of Docker Desktop on Mac or Windows (also easily installed with Homebrew or Chocolatey) include setup of a local single-node Kubernetes cluster and kubectl on your workstation.
With kubectl installed on your workstation, you’re almost ready to start issuing commands to a cluster. First you’ll need to configure and authenticate with any cluster with which you want to communicate.
You use the kubectl config command to set up access to your cluster or clusters and switch between the contexts you’ve configured.
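In practice, context management looks like this (the context name docker-desktop is an example; yours depends on how your cluster was set up):

```shell
kubectl config get-contexts       # list the cluster/user/namespace combos kubectl knows
kubectl config use-context docker-desktop   # switch to the named context
kubectl config current-context    # confirm which context is active
```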
With access set up, you can start issuing commands. You’ll probably use the kubectl get and kubectl describe commands the most, as you’ll use them to see the states of your pods, deployments, services, secrets, etc.
The get verb will list resources of the type you specify:
kubectl get pods
The above will list the pods running in your cluster (more precisely, the pods running in a namespace on your cluster, but that adds more complexity than desired here).
This example gets the pod named fun-pod (if such a pod exists):
kubectl get pod fun-pod
Finally, the describe verb gives much more detail about the pod named fun-pod:
kubectl describe pod fun-pod
You can also use kubectl to create resources in your cluster. Outside of learning exercises, though, it's generally preferable to describe resources in manifest files and use kubectl apply to put them into effect. This is an especially good way to deploy applications from continuous deployment pipelines.
Teams write manifests in either JSON or YAML. A manifest can describe pods, services, deployments, and more. A deployment's specification includes how many replicas of a pod must be running for the deployment to be considered healthy.
This is a sample of what a deployment manifest looks like (the nginx image here is only a placeholder):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:latest
          ports:
            - containerPort: 80
You tell Kubernetes to create or update the resources described in a file with the following command:
kubectl apply -f <filename>
You can easily begin your hands-on learning journey with Kubernetes using either a cluster in a public cloud or one on your workstation. As mentioned earlier, Docker Desktop for Windows or Mac includes a Kubernetes installation, which makes it easy to run a cluster for learning, development, and testing purposes on your machine.
If you can’t or don’t want to use Docker Desktop, you can accomplish the same purpose (setting up a local cluster) by installing Minikube.
With either the Docker Desktop Kubernetes installation or Minikube, you have a cluster on your machine with which you can interact. You can now use this setup for getting started and for trying deployments before you push them to a remote cluster.
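A typical local loop ties the earlier commands together (the filename deployment.yaml and the deployment name webserver are examples, not fixed conventions):

```shell
kubectl apply -f deployment.yaml        # create or update the resources in the file
kubectl get deployments                 # confirm the deployment exists
kubectl get pods                        # watch its pods come up
kubectl describe deployment webserver   # dig into details if something looks off
kubectl delete -f deployment.yaml       # tear everything down when done
```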
Dive in and learn more
This is only the beginning. There’s a lot more to know before you become truly comfortable with Kubernetes. Such is the life of a technology professional!
Many courses and resources exist to help you gain confidence in using Kubernetes. The Kubernetes site itself has a wonderful “Learn Kubernetes Basics” section, with plenty of straightforward interactive tutorials. The best way to get up to speed is to get your hands dirty and start gaining experience. Install Docker Desktop or Minikube and start deploying!