Case Study: CERN

Challenge: Massive amounts of data and extreme peaks in workload

Application: Hybrid infrastructure with improved efficiency

Solution: Cloud native practices built on Kubernetes, Helm, Prometheus, and CoreDNS

Virtualization overhead dropped from 20% to roughly 5%; with Kubernetes running on bare metal, it could reach 0%

Time to deploy a new cluster fell from more than 3 hours to less than 15 minutes

Adding new nodes to a cluster fell from more than 1 hour to less than 2 minutes

Autoscaling replicas for system components now takes less than 2 minutes instead of more than an hour

By the numbers: 320,000 cores; 4,300 projects; 330 petabytes of storage; 3,300 users; 10,000 hypervisors; 300 Kubernetes clusters

“Kubernetes is something we can relate to very much because it’s naturally distributed. What it gives us is a uniform API across heterogeneous resources to define our workloads. This is something we struggled with a lot in the past when we want to expand our resources outside our infrastructure.” —Ricardo Rocha, Software Engineer, CERN

“With Kubernetes, there’s a well-established technology and a big community that we can contribute to. It allows us to do our physics analysis without having to focus so much on the lower level software. This is just exciting. We are looking forward to keep contributing to the community and collaborating with everyone.” —Ricardo Rocha, Software Engineer, CERN
