For PayIt, cloud native is a ‘competitive advantage’ for getting government services online
Founded in late 2014 to provide a digital platform for government agencies, PayIt was cloud native from the beginning. “We have a microservices-based architecture, and I was looking for a container orchestration framework,” says Richard Garbi, CTO at PayIt.
Garbi heard about Kubernetes at the first DockerCon in 2014, and as soon as a beta version became available in 2015, he switched from some cobbled-together open source tools to Kubernetes.
Kubernetes made high availability, scalability, elasticity, and security possible “out of the box, so we just didn’t have to manage them ourselves,” says Garbi.
“It made building and running microservices a lot easier than it could have been. We can deploy anytime we need to, we can spin up and spin down boxes, and spin up entire new clusters and tear them down. So we can be pretty agile with rolling out new features without any worry that we’re going to impact anyone.”
With Kubernetes, PayIt’s infrastructure costs are less than one percent of revenue.
By the numbers
Infrastructure cost is 1% of revenue
No volume-related outages
Can handle tripling or quadrupling of day-over-day volume
So much of daily life happens online these days, and yet the average adoption rate for a digital government service is less than 20%. PayIt’s founders set out to boost that number by offering a new digital platform that simplifies how citizens interact with governments.
“It’s the vision of a digital DMV,” says PayIt CTO Richard Garbi. “There’s a whole sphere of government-related services that are really only available in person, or the digital experience is really quite poor. We partner with governments to upgrade their experience and then transform their way of doing business with constituents. We provide enabling technology, but also consult on policies that they need to change to provide a digital service.”
The company, founded in late 2014, was cloud native from the beginning. “We have a microservices-based architecture, and I was looking for a container orchestration framework,” says Garbi. “At that point, Kubernetes was very, very nascent technology. At the first DockerCon, there were folks from Google that presented this idea of this orchestration framework called Kubernetes, and that piqued my interest. I was watching and waiting for them to get a version out that I could use.”
Of course, PayIt needed to get code into production before then. So Garbi cobbled together some open source tools to get the company’s infrastructure off the ground. When a beta version of Kubernetes became available in 2015, PayIt switched over and has been running Kubernetes in production ever since.
Kubernetes satisfied the key requirements that Garbi had around availability and elasticity. “I had to be able to deploy a new version or roll back with zero downtime,” says Garbi. “I needed the ability to scale up and scale down based on load, and to determine the healthiness of a particular service and then evict things that are not healthy. Being able to discover where other services are running without having to roll my own service discovery was important.”
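In Kubernetes terms, the requirements Garbi lists map onto a Deployment’s rolling-update strategy and its health probes. The sketch below is illustrative only — the names, image, and port are hypothetical, not PayIt’s actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api          # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # never drop below full capacity: zero-downtime deploys and rollbacks
      maxSurge: 1             # replace one pod at a time
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:v2   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:     # only pods reporting healthy receive traffic
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 5
          livenessProbe:      # pods that stop responding are evicted and restarted
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
```

A Service selecting `app: payments-api` would then give other workloads a stable cluster DNS name for it — the built-in service discovery Garbi refers to.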
Many of PayIt’s customers see huge spikes in volume at the end of the month or the end of the year. When things like registration renewals or property taxes are due, “There can be tripling and quadrupling of day-over-day volume or even more,” says Garbi. “Being able to dynamically expand to handle the volume and then contract as it abates without my team having to do anything is pretty huge. We’ve never had an outage that was volume-related.”
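Dynamic expansion and contraction of this kind is typically expressed with a HorizontalPodAutoscaler. As a hedged sketch — the target name and thresholds below are invented for illustration, not drawn from PayIt’s setup:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments-api        # hypothetical deployment to scale
  minReplicas: 3              # baseline capacity between spikes
  maxReplicas: 12             # headroom for a 3–4x end-of-month spike
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Kubernetes adds replicas as load climbs and removes them as it abates, with no operator intervention — which is what lets a small team absorb end-of-month surges.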
Because PayIt’s customers are government agencies, the security aspect was also important. “There are pretty interesting and complicated security policies you can put on top of each individual deployment or pod, and then restrict the traffic that you would expect to see from a given container or pod,” says Garbi. “You can even get much more fine-grained network control rules via Kubernetes. The audit logging and tracing of who did what and when is also a pretty compelling part of the story. Knowing that a container was created by PayIt and is running in a PayIt cluster, and being able to assert the chain of delivery from end to end, is pretty powerful. Something that isn’t from us can’t run in our cluster.”
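The per-pod traffic restrictions Garbi describes correspond to Kubernetes NetworkPolicy resources. A minimal sketch, with hypothetical labels and ports: only frontend pods may reach the payments API, and only on its service port.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-api-ingress
spec:
  podSelector:
    matchLabels:
      app: payments-api       # policy applies to these pods
  policyTypes:
    - Ingress                 # all ingress not matched below is denied
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

The “nothing that isn’t from us can run here” guarantee is a separate layer, typically enforced with an admission controller that checks image provenance before a pod is scheduled.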
And all of these things were provided by Kubernetes “out of the box, so we just didn’t have to manage them ourselves,” says Garbi. “It made building and running microservices a lot easier than it could have been.”
“Being able to dynamically expand to handle the volume and then contract as it abates without my team having to do anything is pretty huge.”— Richard Garbi, CTO at PayIt
The decision has also paid off in cost savings. “We’re currently at a spend of less than one percent of revenue on infrastructure cost,” he says. “I attribute that directly to utilizing Kubernetes, because we’re able to pack a whole lot more work into the same set of servers and be very, very, very efficient with the infrastructure spend that we currently have. That’s 100% due to Kubernetes.”
Plus, Kubernetes has allowed the teams to be more agile. “The way that we manage our infrastructure and our services is a key performance driver for us,” Garbi says. “We can deploy anytime we need to, we can spin up and spin down boxes, and spin up entire new clusters and tear them down. All of the things that you would expect — and our clients and constituents never know that we’re doing these things. So we can be pretty agile with rolling out new features without any worry that we’re going to impact anyone.”
Case in point: Onboarding PayIt’s biggest client, which accounts for 60% of the company’s revenue, doubled the platform’s volume. “Because we were running in Kubernetes, we really didn’t have any issues with the additional volume. It was very nice to see that the promise of this highly elastic infrastructure is a real thing.”
“We’re currently at a spend of less than one percent of revenue on infrastructure cost. I attribute that directly to utilizing Kubernetes, because we’re able to pack a whole lot more work into the same set of servers and be very, very, very efficient with the infrastructure spend that we currently have. That’s 100% due to Kubernetes.”— Richard Garbi, CTO at PayIt
Kubernetes has also enabled PayIt to integrate with all kinds of systems run by customers. “The flexibility that the technology provides is pretty huge for us in that we end up integrating with you name it under the sun: a lot of mainframes, as you might imagine, legacy systems that haven’t been touched in 30 years,” says Garbi. “The agility and flexibility that we get being cloud native really makes that much easier for us.”
The PayIt team runs its own Kubernetes clusters on Amazon’s GovCloud. The latest challenge is managing multiple clusters with a relatively small team. “They would spend a fair amount of time doing upgrades on the Kubernetes clusters that we have,” says Garbi. “They discovered the Cluster API project, which is currently in beta, and they started contributing features back to bring Cluster API up to parity with the features we have in our own hand-rolled clusters.”
Other CNCF technologies have also been added to the infrastructure: PayIt recently rolled out Envoy to replace Amazon application load balancers, which reduced spend. Prometheus is used with Grafana for performance monitoring for everything running within Kubernetes. And the team uses Fluentd to ship logs to a logging provider. CoreDNS is also on the roadmap.
As PayIt continues on its cloud native journey, “We are building tools to enable internal teams to provision their own Kubernetes clusters at the push of a button, and our intent is to offer the expertise that we have to government clients that want to run some of their workloads in a cloud native context,” says Garbi. “Being cloud native is a pretty big competitive advantage for us.”