Guest post originally published on the InfluxData blog by Charles Mahler

In an ideal world, developers would be able to release new products and features from development environments into production quickly without having to stress about breaking prod. Achieving this combination of development speed and software reliability requires having the right toolchain and automation in place.

There are a number of different philosophies and tools that attempt to tackle these problems, but in this article you will learn about GitOps and Argo specifically, and how they can be used to improve how you release software. Companies big and small have faced many of these same problems, and the value of Argo becomes clear simply by looking at some of the companies that have incorporated it into their development workflows:

GitOps and Argo customers – source

What is GitOps?

Let’s start off with what GitOps actually is at a conceptual level so you can understand why Argo was created and the specific types of problems that it is trying to solve.

GitOps is essentially a framework or group of best practices for how to manage cloud-based infrastructure. GitOps takes many lessons from DevOps and modern application development methodologies and applies them to infrastructure management. This includes concepts/tools like Continuous Deployment and Infrastructure as Code to bridge the gap between development and operations teams.

As the name suggests, Git plays a key role in GitOps. Git is used as a single source of truth for what infrastructure and applications should look like based on declarative configuration. The key shift here is the mindset that configuration should be a set of facts about how your infrastructure should look rather than a set of instructions to perform step by step to create an environment.

Automated processes are then used to take this configuration and make sure the production or development environment matches the state described in the repository. Using Git also gives you a versioned history of your application and makes it easy to deploy when new commits come in or roll back to a previous commit if needed.
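As a simple illustration of this declarative mindset, consider a minimal sketch of a manifest stored in Git (the application name, image, and replica count here are hypothetical); an automated agent continuously reconciles the cluster to match these facts rather than executing them as one-off steps:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3                 # a fact about desired state, not a step to run
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.4.2   # hypothetical image tag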

The benefits of GitOps when implemented properly include:

- A single, versioned source of truth for what your infrastructure and applications should look like
- An auditable history of every change, with easy rollbacks to a previous commit
- Faster, more reliable deployments through automation
- A familiar Git-based workflow (commits, pull requests, reviews) shared by development and operations teams

What is Argo?

The Argo project is a collection of tools that can be used together or independently to make implementing and utilizing GitOps best practices easier for Kubernetes-based applications. The project was initially created by Applatix to help manage their cloud native software architecture. Applatix was acquired by Intuit, and in January 2018 the first component of the Argo project was open sourced as Argo Workflows. Argo CD and Argo Events followed later in 2018 and Argo Rollouts in 2019. In 2020 Argo was accepted as an incubator project by CNCF.

Argo is currently the 6th most active CNCF project and has gained over 25,000 GitHub stars across the tools that make up the Argo project.

CNCF repo activity – source

Argo features

Now let’s take a deeper look at each of the different tools that make up the Argo project and see some of the features they provide.

Argo Workflows

Argo Workflows is a workflow engine designed to work natively on Kubernetes. Workflows are implemented as Kubernetes Custom Resource Definitions and allow you to create multi-step workflows where each step runs as an independent container. A workflow can be defined as a simple sequence of tasks or as a directed acyclic graph (DAG) in which tasks declare dependencies on other tasks in the workflow.
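As a minimal sketch (the workflow name, image, and messages here are hypothetical), a two-task DAG in which a test task depends on a build task might look like this:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-then-test-     # hypothetical name prefix
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: build
        template: echo
        arguments:
          parameters:
          - name: message
            value: "building"
      - name: test
        dependencies: [build]        # test runs only after build succeeds
        template: echo
        arguments:
          parameters:
          - name: message
            value: "testing"
  - name: echo                       # each step runs as its own container
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.19
      command: [echo, "{{inputs.parameters.message}}"]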

The main strength of Argo Workflows is that it can run on any Kubernetes cluster and is vendor agnostic. It is lightweight and efficient because it uses containers rather than VMs or dedicated servers, and thanks to Kubernetes, tasks can easily be scaled up or down depending on the workload.

A common use case for Argo Workflows is standard CI/CD pipelines, where workflows are used to glue different tools together. Another rising use case is as a data processing or MLOps pipeline, where the scalability of Kubernetes is a huge advantage and much of the complexity is abstracted away by Argo. Depending on a task's specific hardware requirements, Argo can also be configured to schedule CPU- or memory-optimized pods to be more efficient in terms of hardware cost, as the sketch below shows.
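For example, here is a hedged sketch of a single workflow template (the node label, image, and resource values are hypothetical) that requests dedicated resources and targets a memory-optimized node pool; it would sit in the templates list of a Workflow like the one above:

  - name: train-model                # hypothetical compute-heavy step
    nodeSelector:
      node-pool: memory-optimized    # hypothetical node label
    container:
      image: python:3.12
      command: [python, -c, "print('training')"]
      resources:
        requests:
          cpu: "4"                   # schedule onto a node with 4 CPUs free
          memory: 16Gi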

Some other useful features provided by Argo Workflows include:

- Reusable workflow templates that can be referenced across workflows
- Scheduled workflows via cron syntax (CronWorkflow)
- Artifact support for passing files between steps (S3, GCS, MinIO, and more)
- Automatic retries and timeouts for failed steps
- A web-based UI for visualizing and managing workflows

Argo CD

Argo CD was launched to provide a dedicated tool for continuous deployment pipelines, which was a common way community members were using Argo Workflows. As a tool designed explicitly for continuous deployment rather than as a general workflow engine like Argo Workflows, Argo CD provides more out-of-the-box features for building CD pipelines.

Argo CD is deployed as a Kubernetes controller that constantly observes the state of a Kubernetes cluster and compares it to the desired state defined in the Git repo configuration. If Argo CD detects that the application has drifted out of sync with the desired configuration, it can be set up to automatically make the required changes to fix the Kubernetes cluster, or to create alerts that notify a developer to take manual action.
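In practice this is configured with an Application resource. Here is a minimal sketch (the repo URL, path, and namespaces are hypothetical) in which the automated sync policy lets Argo CD repair drift on its own:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git   # hypothetical repo
    targetRevision: main
    path: k8s                                # directory of manifests in the repo
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true       # delete resources that were removed from Git
      selfHeal: true    # revert manual changes that drift from Git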

Additional features of Argo CD include:

- Support for plain Kubernetes manifests as well as Helm, Kustomize, and Jsonnet
- A web UI and CLI for visualizing application state and sync status
- SSO integration and role-based access control
- Automated or manual sync policies with health status checks
- Rollback to any application configuration committed in Git
- Management of deployments across multiple clusters

Argo Rollouts

Argo Rollouts is a tool for safely moving new deployments into production. It gives developers the ability to implement progressive delivery strategies like blue/green and canary deployments. A canary deployment, for example, routes a small percentage of traffic to the newest version of the application; if monitoring metrics are fine and show no sign of errors, Argo Rollouts gradually increases the traffic until the new version of the application is fully deployed to production.

Argo Rollouts deployment flow

Argo Rollouts integrates with ingress and service mesh tools to manage traffic flow through the Kubernetes cluster. Rollouts also integrates with monitoring tools like Prometheus, Datadog, New Relic, and InfluxDB. You can define a metric to watch, and Argo Rollouts will roll back the deployment if a defined threshold for something like response time or error rate is exceeded.

Here’s a configuration example for how to monitor the percentage of request errors reported by Istio and stored in InfluxDB; if the query result violates the condition defined in the successCondition field, the analysis fails and the rollout is aborted:

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate
spec:
  args:
  - name: application-name
  metrics:
  - name: error-rate
    successCondition: result[0] <= 0.01
    provider:
      influxdb:
        profile: my-influxdb-secret  # optional, defaults to 'influxdb'
        query: |
          from(bucket: "app_istio")
            |> range(start: -15m)
            |> filter(fn: (r) => r["destination_workload"] == "{{ args.application-name }}")
            |> filter(fn: (r) => r["_measurement"] == "istio:istio_requests_errors_percentage:rate1m:5xx")

Argo Events

Argo Events is a framework for handling events and taking automated action on them with Kubernetes. Argo Events works by receiving events from Event Sources and includes support for over 20 event sources out of the box, with some of the most popular being:

- Webhooks
- GitHub and GitLab
- Kafka and NATS
- AWS SNS and SQS
- Google Cloud Pub/Sub
- S3 and MinIO
- Calendar-based schedules (cron)

Custom event sources can also be configured. Once Argo Events receives an event, it transforms it into a CloudEvents-compliant format. These events are handled by Sensors, which can apply custom logic to determine whether to execute the defined Trigger for the event. Triggers are the actual resources or workloads that are executed when the Sensor passes the event through.

Some common Triggers include:

- Submitting an Argo Workflow
- Calling an HTTP endpoint
- Invoking an AWS Lambda function
- Publishing messages to Kafka or NATS
- Creating or patching arbitrary Kubernetes resources
- Sending Slack notifications

Argo Events is by design very flexible, and the goal is to provide a platform that makes it easy to automate processes with your preferred tools.
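To make the Event Source → Sensor → Trigger flow concrete, here is a minimal sketch (the resource names and webhook endpoint are hypothetical, and it assumes the default EventBus is installed): a webhook Event Source paired with a Sensor that submits a Workflow whenever a request comes in:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook                  # hypothetical event source name
spec:
  webhook:
    deploy-request:              # hypothetical event name
      port: "12000"
      endpoint: /deploy
      method: POST
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook-sensor
spec:
  dependencies:
  - name: deploy-dep             # the Sensor waits for this event
    eventSourceName: webhook
    eventName: deploy-request
  triggers:
  - template:
      name: run-deploy-workflow
      argoWorkflow:
        operation: submit        # submit a new Workflow when the event fires
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: deploy-
            spec:
              entrypoint: main
              templates:
              - name: main
                container:
                  image: alpine:3.19
                  command: [echo, "deploying"]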

Wrapping up

Hopefully, this helped you get a solid understanding of the various components of the Argo project. While Argo is well known for CI/CD and GitOps use cases, tools like Workflows and Events can also be used for a variety of other tasks. One of the biggest benefits is that it's fairly easy to get started with Argo, because most developers are already familiar with the tools used to configure and customize it.