Case Study

Zynga

Zynga Scales Gaming Infrastructure by Eliminating Networking Bottlenecks with Cilium

Introduction

Zynga, the mobile game studio behind Words with Friends, Zynga Poker, and more, serves millions of daily players worldwide. The company’s infrastructure must support rapid content releases, unpredictable traffic spikes, and ultra-low latency for real-time multiplayer gaming. The team managing Kubernetes clusters for game developers recently migrated to Cilium to address scalability issues and take advantage of Cilium’s “all-in-one” solution for networking, observability, and security.

Highlights

  • Deep observability: Hubble provides visibility into traffic and bottlenecks.
  • Consolidated networking stack: Replaced AWS VPC CNI, kube-proxy, Istio service mesh, observability, and security tools with unified Cilium platform.
  • Eliminated conntrack bottleneck: From 300,000 entries per node to effectively zero by replacing kube-proxy with eBPF-based load balancing.
Published: March 2, 2026

The Challenge: Scaling Past Conntrack Limits

Zynga’s 12-person Kubernetes platform engineering team, spanning North America, Spain, and India, provides game developers with infrastructure to build games quickly and maintain high availability. The team has been running Kubernetes on AWS for about six years, but in 2023 began facing connection tracking (conntrack) limitations in their AWS VPC CNI and kube-proxy setup. 

Conntrack, the Linux kernel connection tracking system used by kube-proxy, maintains state for every network connection. For games with thousands of concurrent players, nodes were averaging around 300,000 entries each, about 5x the default Linux limit, leading to lag and connection issues for players.
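Conntrack pressure of this kind can be inspected directly on a node. A minimal sketch, using the standard Linux procfs paths (with fallbacks for hosts where the conntrack module is not loaded):

```shell
# Current number of tracked connections on this node
# (falls back to 0 if the conntrack module is not loaded).
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo 0)

# Kernel-enforced ceiling; new connections are dropped once this is reached.
limit=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo 0)

echo "conntrack entries: ${count} / ${limit}"
```

On a node in the state described above, this would report roughly 300,000 entries against a default limit of around 65,536, which is why raising the limit by hand became a recurring operational task.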

“We had been bound by conntrack limits and had to scale around this constraint,” explains Harish Salluri, Principal Software Engineer managing Kubernetes clusters for game developers. “Previously, we had to try to resolve conntrack limits manually, creating a bottleneck with game traffic.” Engineers were designing systems to work around networking limitations rather than optimizing for game workloads.

When a customer approached them with a roadmap for their own Cilium adoption, the team decided to re-evaluate its CNI setup. Cilium emerged as the clear choice: alternative CNIs had unclear roadmaps and lacked an advanced feature set for providing service mesh and observability alongside networking.

The Solution: eBPF-Native Networking with Cilium

In late 2023, Zynga decided to adopt Cilium for comprehensive value. “With Cilium, in one implementation, we would be getting the CNI and replacing kube-proxy, plus Cluster Mesh, service mesh, Gateway API, Hubble for observability, and network policies for restricting traffic,” Salluri explains. 

Cilium consolidated the AWS VPC CNI, Istio service mesh, and separate observability and security tools into one eBPF-based platform, while its kube-proxy replacement eliminated the conntrack limitations.

The team architected a phased approach for a gradual workload migration. They implemented Cilium with native routing and IPAM in ENI mode. When they encountered IPPrefix limitations, they developed a solution and contributed it upstream. “It took a lot of learning to get to the point where the PR could be merged safely, but it was worth the effort,” Salluri notes. “Gaining an understanding of such a critical part of the stack has allowed me to become a better resource for both our team and our customers.”
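A deployment in this mode can be sketched with Helm values along the following lines; the exact value names vary across Cilium chart versions, so treat this as illustrative rather than Zynga's actual configuration:

```shell
# Illustrative Cilium install with kube-proxy replacement, AWS ENI IPAM,
# and native (non-tunneled) routing. Value names follow recent Cilium
# Helm charts and may differ between releases.
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set ipam.mode=eni \
  --set eni.enabled=true \
  --set routingMode=native \
  --set egressMasqueradeInterfaces=eth0
```

With ENI IPAM and native routing, pod IPs come directly from the VPC and traffic is forwarded without an overlay, which is what allows eBPF load balancing to bypass conntrack entirely.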

Hubble became the go-to troubleshooting tool, with Salluri even conducting Hubble training sessions for game developers. “Anytime there is a customer complaining about latency, we can look at Hubble and see exactly what is happening,” he explains. “This level of traffic visibility simply wasn’t available with our previous setup.”

Cilium service mesh is currently deployed in one game cluster, with plans to expand to 29 more. In addition, the game developers behind Zynga Poker have adopted Cilium’s Gateway API implementation to address limitations they were experiencing with ingress.

The Impact: Zero Conntrack Constraints

Since adopting Cilium’s kube-proxy replacement, conntrack entries have dropped to effectively zero. Cilium’s eBPF programs now handle packet forwarding and load balancing directly in the Linux kernel. “Once we switched over to Cilium, conntrack was no longer a concern,” the team reports. “That issue is gone completely.” Game developers can build rich new features without worrying about connection limits. 

By consolidating multiple tools into a single platform with Cilium, the team simplified infrastructure management. “Previously we had to manage multiple tooling stacks individually, but Cilium replaced them all in one go,” Salluri notes.

Hubble revolutionized observability capabilities. “The visibility Hubble provides has been game changing for our developers,” Salluri says. “They can see exactly where traffic is flowing and where bottlenecks occur.”

Looking Forward

Zynga will continue expanding its Cilium adoption. Gateway API implementation is underway across clusters, with Zynga Poker already benefiting. Service mesh expansion is also a major focus: after a successful pilot deployment in one cluster, the team is rolling it out to approximately 29 more, enabling consistent service-to-service communication, encryption, and advanced traffic management.

L7 network policies are on the near-term roadmap for more granular application-layer traffic control. Cluster Mesh is further down the plan and will enable multiple Kubernetes clusters to behave as a single logical cluster, transforming how Zynga handles player traffic routing and global load balancing.
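An L7 policy of the kind described can be sketched as a CiliumNetworkPolicy; the app labels, port, and path below are hypothetical, chosen purely for illustration:

```shell
# Hypothetical L7 policy: only the game frontend may reach the
# leaderboard service, and only via HTTP GETs on /scores paths.
# All names, labels, and ports are illustrative, not Zynga's.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: leaderboard-l7-allow
spec:
  endpointSelector:
    matchLabels:
      app: leaderboard
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: game-frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/scores.*"
EOF
```

Because the HTTP rules are enforced by Cilium's in-kernel datapath and Envoy proxy rather than in application code, developers get application-layer restrictions without changing their services.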

For gaming companies and organizations with high-performance networking requirements, Zynga’s experience demonstrates that eBPF-based networking eliminates real infrastructure constraints. By consolidating multiple tools into a single solution, Cilium let Zynga’s engineers focus on delivering exceptional gaming experiences to millions of players worldwide.