Guest post originally published on Snapt’s blog by Craig Risi

More organizations are using containers as a mechanism for driving their cloud-native applications.

Some organizations have hundreds of small containers across many different servers in different development, test, and production environments. This can be tricky to manage, which is why companies have turned to Kubernetes for container orchestration. 

This has made Kubernetes not only a vital part of many development pipelines but also a central system and potential performance bottleneck that needs to be managed and balanced to ensure optimized performance. 

Throughout this article, you will notice many references to security alongside load balancing, because the two go hand in hand. Load balancers are responsible not only for distributing load between VMs or containers but also for managing access to the respective pods. So, in configuring Kubernetes for load balancing, we will follow secure practices as well.

Prerequisites

This article assumes that you have set up a load balancer in your organization. We will focus on the Kubernetes components and configuration to ensure that you set up your Kubernetes systems to take advantage of the power your load balancers offer. 

It is also worth mentioning that to properly take advantage of Kubernetes in your application space, you will want to ensure you have an application that is designed around containerization and a load balancer/ADC that is best suited for container environments and service discovery. If your load balancer does not meet these needs, it might be worthwhile pursuing a new load balancer to better suit the future needs of your organization.

How to set up a load balancer on Kubernetes

Setting up load balancing on Kubernetes is quite a detailed topic on its own, so we will only touch on it lightly in this article before moving on to more complex tips.

To configure a load balancer on Kubernetes, you can create a configuration file (like the one described below) to set up the load balancing system on your individual clusters.  

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  loadBalancerIP: 78.11.24.19
  type: LoadBalancer
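Once you apply this manifest with kubectl apply -f, your platform provisions the external load balancer, and kubectl get service my-service will show the external IP once it has been assigned. Note that the clusterIP and loadBalancerIP values above are illustrative, and a fixed loadBalancerIP is only honored on platforms that support specifying one.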

Enable the readiness probe on a deployment

A readiness probe should be defined in any deployment. A readiness probe is a signal that informs Kubernetes when to put a pod behind the load balancer, and when to put a service behind the proxy, to serve traffic.

If you put an application behind the load balancer before it is ready, users can reach the pod but will not get the expected response of a healthy server. Flagging deployments that lack one alerts the pod developer that the readiness probe should be enabled.
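A minimal readiness probe looks much like the liveness probe shown in the next section. This is a sketch; the /ready path and port 8080 here are placeholders that should match whatever health endpoint your application actually exposes.

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5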

Enable the liveness probe on a deployment

A liveness probe lets Kubernetes know whether a pod is in a healthy state and, if it is not, that Kubernetes should restart it. This is done via a simple check, such as getting an HTTP 200 OK from some endpoint, or a more complex check based on bash commands. It is important, and very handy, to let Kubernetes know when the application isn't working and needs to be restarted.

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3

Enable CPU/Memory requests and limits

The container(s) in a deployment should request the CPU and memory resources they need and define limits for them. This prevents a pod from being starved of resources while also preventing a single container from consuming all of the CPU or memory on a node. Below is an example of how you can configure this.

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

Flag when RBAC rules change

The principle of least privilege is something most security teams should be quite familiar with. It's a compelling reason to get your RBAC (Role-Based Access Control) configuration right and ensure that only specific roles and systems have access to the resources they need. This requires an overall review, starting with subjects that can create resources like Deployments or Pods, or read sensitive resources like Secrets.

The challenging part is catching when Role (or ClusterRole) resources gain privileges over time. Too often, organizations set up rules and forget them, only for different systems and pods to accumulate excessive privileges. Review these privileges and the rules that led to their creation; this process allows you to tighten security and load-balancing requirements accordingly.
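As a reference point, a least-privilege Role grants only the verbs and resources a workload actually needs. This is a minimal sketch; the pod-reader name and default namespace are placeholders.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]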

Control the container images deployed into your cluster

How stringent your company is about where binaries come from depends on its policy on third-party software.

If you are pulling common container images that many organizations use, like the official Nginx, Artifactory, MySQL, or Redis images, your organization might want to build them from source and/or host the images internally instead of pulling from Docker Hub.

The reason is that an image stored in Docker Hub can change if someone pushes a new image under the same tag. That means what you get from pulling the same image and tag might differ from one day to the next, and the difference could be something malicious that compromises your infrastructure and application.

To mitigate these risks, you can either build images from the source and host them in your repository, or push the same images into your repository.

If your organization hosts some or all of your container images, you should apply a rule that flags any image not coming from your organization's registry for someone to approve. The configuration below shows an example of a pod that such a rule would flag.

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
    - name: liveness
      image: some-random-image:v1.0

Apply network policy to your deployments

Applying security group policies to your VMs or your Kubernetes worker nodes is considered essential to security. We should do the same with Kubernetes workloads.

The best practice is to limit inbound and outbound traffic to only what you need, so you don't accidentally expose unwanted services. Kubernetes has NetworkPolicy functionality that is equivalent to security groups, and all resources should have NetworkPolicy rules associated with their deployments.

For every deployment, there should be a network policy file defining a resource like the one indicated below.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy

If you want to be even more rigorous about access control, you can also match the ports listed in the network policy to those exposed by the deployment's pods and/or listed on the service, as shown below.

ports:
  - protocol: TCP
    port: 6379

Ideally, these would all match so that the developers know that everything reconciles and the network policy doesn’t list a port that is not used by the service or pod. It’s also important to ensure that your cluster is provisioned with a network plugin (CNI) that supports network policies.
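Putting it together, a complete policy might look like the following sketch. The backend-policy name, the app: MyApp selector (reusing the label from the Service example above), and the role: frontend label are illustrative; this policy allows ingress on TCP 6379 only from pods labeled as frontend.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: MyApp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379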

Flag any service account changes

Service accounts provide an identity, mapped to a set of Kubernetes API server access permissions, for a pod to use. Changes to these are typically minor and often overlooked, though they can have big ramifications for security and API server access.

Whenever a change is made, notifications should be sent to the appropriate teams so that they can respond accordingly should it pose any security risk.

Any changes in the ServiceAccount resource type should be flagged and reviewed to prevent any misconfigurations.
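A good baseline is to give each workload its own dedicated service account and disable token automounting when the pod does not need API server access. This is a sketch; the app-service-account name is a placeholder.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
automountServiceAccountToken: false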

Adjust pod tolerations

Kubernetes master nodes are the control nodes of the entire cluster. Only certain workloads should be permitted to run on these nodes.

To limit what can run on these nodes, place taints on them so that only pods with matching tolerations can be scheduled there. However, this does not preclude anyone from adding those tolerations to their own pods to run on the master nodes.

If you encounter a toleration like the one below in a Pod specification in one of your deployment resources, and your cluster is self-managed, flag it for review.
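The standard master-node taint toleration typically looks like this; note that on newer clusters the taint key is node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master.

tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule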

Conclusion

Load balancing is essential to keeping your Kubernetes clusters operational and secure at scale. And while many load balancers, like Snapt's Aria and Nova, are incredibly effective at managing many of these risks for you, it is important to configure your Kubernetes environment correctly to take full advantage of the features these load balancers offer.