
containerd Joins the Cloud Native Computing Foundation


Foundation fostering cross-project collaboration and growth of cloud native ecosystem

BERLIN – CloudNativeCon + KubeCon Europe – March 29, 2017 – The Cloud Native Computing Foundation (CNCF), which is sustaining and integrating open source technologies to orchestrate containers as part of a microservices architecture, today announced that containerd – Docker’s core container runtime – has been accepted by the Technical Oversight Committee (TOC) as an incubating project, alongside projects such as Kubernetes and gRPC. containerd’s acceptance into the CNCF comes three months after Docker, with support from the five largest cloud providers, announced its intent to contribute the project to a neutral foundation in the first quarter of this year.

“It’s important for CNCF to host foundational technology for cloud native computing,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “The containerd runtime is incredibly important to the growth of the overall cloud native ecosystem, and uniting it with Kubernetes and CNCF will bring huge benefits to end user solutions. Container orchestrators need community-driven container runtimes, and we are excited to have containerd, which is used today by everyone running Docker. Becoming a part of CNCF unlocks new opportunities for broader collaboration within the ecosystem.”

containerd (Con-tay-ner-D) has been extracted from Docker’s container platform and includes methods for transferring container images, container execution and supervision, and low-level local storage, across both Linux and Windows. containerd is an essential upstream component of the Docker platform used by millions of end users, and it also provides the industry with an open, stable and extensible base for building non-Docker products and container solutions.

“Our decision to contribute containerd to the CNCF closely follows months of collaboration and input from thought leaders in the Docker community,” said Solomon Hykes, founder, CTO and Chief Product Officer at Docker. “Since our announcement in December, we have been progressing the design of the project with the goal of making it easily embedded into higher level systems to provide core container capabilities. Our focus has always been on solving users’ problems. By donating containerd to an open foundation, we can accelerate the rate of innovation through cross-project collaboration – making the end user the ultimate beneficiary of our joint efforts.”

The donation of containerd aligns with Docker’s history of making key open source plumbing projects available to the community. This effort began in 2014 when the company open sourced libcontainer. Over the past two years, Docker has continued along this path by making libnetwork, notary, runC (contributed to the Open Container Initiative, which like CNCF, is part of The Linux Foundation), HyperKit, VPNKit, DataKit, SwarmKit and InfraKit available as open source projects as well.

containerd is already a key foundation for Kubernetes, as Kubernetes 1.5 runs with Docker 1.10.3 to 1.12.3. There is also strong alignment with other CNCF projects: containerd exposes an API using gRPC and exposes metrics in the Prometheus format. containerd also fully leverages the Open Container Initiative’s (OCI) runtime and image format specifications and the OCI reference implementation (runC), and will pursue OCI certification when it is available.
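
For readers who want to poke at those integration points, here is a minimal command-line sketch. It assumes a standalone containerd daemon and uses the ctr debugging client syntax from later 1.x releases (the CLI was still evolving at the time of this announcement); the metrics endpoint is only served if an address is configured in containerd’s config file, and 127.0.0.1:1338 is a commonly used example value rather than a default.

    # Pull and run an image through containerd's gRPC API via the ctr client.
    sudo ctr images pull docker.io/library/redis:alpine
    sudo ctr run --rm docker.io/library/redis:alpine redis-demo

    # Scrape the Prometheus-format metrics (assumes a [metrics] address of
    # 127.0.0.1:1338 has been enabled in containerd's configuration).
    curl -s http://127.0.0.1:1338/v1/metrics | head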

Figure 1: containerd’s role in the Container Ecosystem

Community consensus leads to technical progress

In the past few months, the containerd team has been busy implementing Phase 1 and Phase 2 of the containerd roadmap. Details about the project can be tracked in the containerd weekly development reports posted in the GitHub project.

At the end of February, Docker hosted the containerd Summit with more than 50 members of the community from companies including Alibaba, AWS, Google, IBM, Microsoft, Rancher, Red Hat and VMware. The group gathered to learn more about containerd, get more information on containerd’s progress and discuss its design. To view the presentations, check out the containerd summit recap blog post: Deep Dive Into Containerd by Michael Crosby, Stephen Day, Derek McGowan and Mickael Laventure (Docker), Driving Containerd Operations With GRPC by Phil Estes (IBM) and Containerd and CRI by Tim Hockin (Google).

The target date to finish implementing the containerd 1.0 roadmap is June 2017. To contribute to containerd, or to embed it in a container system, check out the project on GitHub. To learn more about containerd’s progress or to discuss its design, join the team at the containerd Salon at CloudNativeCon + KubeCon Europe in Berlin this week, or at DockerCon Day 4 in Austin on Thursday, April 20, when the Docker Internals Summit morning session will be a containerd summit.


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of those software stacks including Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC, CoreDNS, containerd, and rkt; brings together the industry’s top developers, end users, and vendors; and serves as a neutral home for collaboration. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Natasha Woods

The Linux Foundation

(415) 312-5289

PR@CNCF.io

rkt: The pod-native container engine launches in the CNCF


By: Jonathan Boulle, rkt project co-founder, CNCF TOC representative, and head of containers and Berlin site lead at CoreOS

Earlier this month, we announced that CoreOS had proposed adding rkt, the pod-native container engine, as a new incubated project within the Cloud Native Computing Foundation (CNCF). Today we are happy to celebrate that rkt has been formally accepted into the CNCF.

With rkt now housed in the CNCF, the rkt and container community has a neutral home for collaboration in which to continue to thrive. We are excited to work alongside the CNCF community to push forward the conversation around container execution in a cloud native environment, to further the development of the rkt community, and to develop interoperability among Kubernetes, OCI and containerd.

This is a historic moment: CNCF now has the opportunity to push progress on container execution for the future of the ecosystem, under a neutral and collaborative home. The future of container execution is important for cloud native computing, and rkt joins the CNCF family alongside other critical projects like gRPC, Kubernetes and Prometheus.

Working with the community and next steps

rkt developers already actively collaborate on container specifications in the OCI project, and we are happy to collaborate more on the implementation side with the CNCF. We are actively working to integrate rkt with Kubernetes, the container cluster orchestration system, and together we can work to refine and solidify the shared API for how Kubernetes communicates with container runtimes. Having container engine developers work side-by-side on the testing and iteration of this API ensures a more robust solution beneficial for users in our communities.

The OCI project is hard at work on the standards side, and we expect to share code in support of those image and runtime specifications. rkt closely tracks OCI development and has developers involved in the specification process. rkt features early implementation support for the formats, with the intention of being fully compliant once the critical 1.0 milestone is reached.

What can rkt users expect from this announcement? All of the rkt maintainers will continue working on the project as usual. Better still, with the help of the CNCF we can encourage new users and maintainers to contribute to and rely on rkt.

We encourage the community to continue using rkt or to try it out; you can get involved on the rkt page on GitHub or on the mailing list.

A big thank you to all the supporters of rkt over the years. We would also like to thank Brian Grant of Google for being the official sponsor of the proposal for rkt’s contribution to the CNCF.

FAQ

What is rkt? A pod-native container engine

rkt, an open source project, is an application container engine developed for modern production cloud-native environments. It features a pod-native approach, a pluggable execution environment, and a well-defined surface area that makes it ideal for integration with other systems.

The core execution unit of rkt is the pod, a collection of one or more applications executing in a shared context (rkt’s pods are synonymous with the concept in the Kubernetes orchestration system). rkt allows users to apply different configurations (like isolation parameters) at both the pod level and the more granular per-application level. rkt’s architecture means that each pod executes directly in the classic Unix process model (i.e. there is no central daemon), in a self-contained, isolated environment. rkt implements a modern, open, standard container format, the App Container (appc) spec, but can also execute other container images, like those created with Docker.
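
As a rough illustration of that model, the following shell sketch runs a single-app pod straight from a Docker registry image; the image and options are illustrative, and flags may differ between rkt versions.

    # Run a one-app pod from a Docker image; signature verification is not
    # available for Docker images, so it must be explicitly skipped.
    sudo rkt run --insecure-options=image docker://nginx

    # Each pod is an ordinary process tree; list pods and their apps.
    sudo rkt list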

Since its introduction by CoreOS in December 2014, the rkt project has greatly matured and is widely used. It is available for most major Linux distributions and every rkt release builds self-contained rpm/deb packages that users can install. These packages are also available as part of the Kubernetes repository to enable testing of the rkt + Kubernetes integration. rkt also plays a central role in how Google Container Image and CoreOS Container Linux run Kubernetes.

How were rkt and containerd contributed to the CNCF?

On March 15, 2017, at the CNCF TOC meeting, CoreOS and Docker made proposals to add rkt and containerd as new projects for inclusion in the CNCF. During the meeting, we, as rkt co-founders, proposed rkt, and Michael Crosby, a containerd project lead and co-founder, proposed containerd. These proposals were the first step; the projects then went through formal proposals to the TOC and finally were called to a formal vote last week. Today these projects have been accepted into the organization.

What does this mean for rkt and other projects in the CNCF?

As part of the CNCF, we believe rkt will continue to advance and grow. The donation will ensure ongoing shared ecosystem collaboration around the various projects, where interoperability is key. Finding a well-respected neutral home at the CNCF benefits the entire community by fostering interoperability with OCI, Kubernetes, and containerd. There are also exciting opportunities for cross-collaboration with other projects like gRPC and Prometheus.

Container execution is a core part of cloud native computing. By housing rkt under the CNCF, a neutral, respected home for projects, we see benefits including help with community building and engagement and, overall, the fostering of interoperability with other cloud native projects like Kubernetes, OCI, and containerd.

How should we get involved?

The community is encouraged to keep using, or begin using, rkt; you can get involved on the rkt page on GitHub or on the mailing list. Note that this repo will be moved into a new vendor-neutral GitHub organization over the coming weeks.

Cloud Native Computing Foundation Announces Dell Technologies as Platinum Member


{code} by Dell EMC builds on its commitment to open source, joins CNCF to accelerate adoption of cloud native environments

BERLIN – CloudNativeCon + KubeCon Europe – March 29, 2017 – The Cloud Native Computing Foundation, which is sustaining and integrating environments for applications optimized for cloud operating models, today announced that Dell Technologies is now a Platinum Member. The company joins existing Platinum members Cisco, CoreOS, Docker, Fujitsu, Google, Huawei, IBM, Intel, Joyent, Mesosphere, Red Hat, Samsung SDS and Supernap in the industry effort to advance cloud native technologies.

Containers, software-based infrastructure, and microservices represent a major evolution in the way applications are deployed and managed. In cloud native environments, applications are dynamically managed to work within the constraints of the underlying clouds while benefiting from the features and services those clouds supply. The Foundation ensures organizations have choice and confidence when they commit to these environments, which rely on interoperability among cloud native components and services.

Dell Technologies, the largest privately-owned technology company in the world, is committed to enabling cloud native adoption by contributing open source software and integrating into cloud native environments. In 2013, the company released the software-based block and object storage platforms ScaleIO and Elastic Cloud Storage (ECS). Its open source initiative, {code} by Dell EMC, created critical open source container storage orchestration projects: REX-Ray and its heterogeneous storage library, libStorage. In the container space, Dell Technologies has also contributed to several open source projects to enhance interoperability and storage capabilities for applications managed by Kubernetes, Docker, Mesos and Cloud Foundry.

“Technology is evolving faster than ever before – and it’s critical that organizations are able to build and sustain smarter applications that drive digital transformation. Open source is the key to agility in today’s environment, where environments must be able to handle rapid change and evolution driven by software,” said Josh Bernstein, VP of Technology for Dell EMC, a Dell Technologies company. “By joining CNCF, we are furthering our commitment to enabling transformation while making software open, accessible and consumable as the heart of enterprise IT strategy.”

“Dell EMC has a long and proven track record of making storage technologies available to modern and open source infrastructure,” said Dan Kohn, Executive Director of the CNCF. “The {code} team has been working hard to integrate storage into application platforms to pave the way for increased enterprise adoption. We are pleased to welcome them as our newest Platinum member, and look forward to them taking on a bigger role shaping the cloud native landscape.”

As part of Dell Technologies’ Platinum membership, Bernstein will join CNCF’s Governing Board. This membership underscores Dell Technologies’ belief that supporting applications in new ways relies on transformation across the full range of its product portfolio. Cloud-native computing is a widely-applicable concept that spans the entire portfolio of Dell Technologies, including Dell EMC.

“Dell has been active in a variety of CNCF project communities, including presenting to the CNCF TOC on container storage initiatives including libStorage,” said Chris Aniszczyk, COO of the CNCF. “Furthermore, they are involved in the Kubernetes Storage SIG, working closely with the community to solve many challenges of external storage functionality within Kubernetes. We are looking forward to working with Dell within our Storage and Networking Working Groups, as well as many other initiatives.”


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of those software stacks including Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing and gRPC; brings together the industry’s top developers, end users, and vendors; and serves as a neutral home for collaboration. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Natasha Woods

The Linux Foundation

(415) 312-5289

PR@CNCF.io

 

Cloud Native Computing Foundation Kicks Off Berlin Event with Five New International Members


HarmonyCloud, QAware, Solinea, SUSE and TenxCloud Align with Foundation to Further Cloud Native Ecosystem During Sold Out CloudNativeCon + KubeCon Europe

BERLIN – CloudNativeCon + KubeCon Europe – March 29, 2017 – The Cloud Native Computing Foundation, which is sustaining and integrating open source technologies to orchestrate containers as part of a microservices architecture, today announced that Hangzhou HarmonyCloud Technology LTD, QAware, Solinea, SUSE and TenxCloud have joined the Foundation as its newest members. These new members will join the 1,500 cloud native developers, users and experts in Berlin for CloudNativeCon + KubeCon Europe.

Based in the U.S., China and Europe, the new members represent fast-growing regions for cloud native activity and are committed to investing in, contributing to and sponsoring the development of applications based on microservices, containerization and dynamic orchestration.

“The cloud native movement is increasingly spreading to all parts of the world, which is on display this week at our flagship event in Berlin,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “We’re excited to welcome new members from Europe and Asia and showcase some of the most prolific cloud native developers and users in the world at CloudNativeCon/KubeCon. As we gain more international members, CNCF is able to have a broader, deeper impact on the future of the cloud native ecosystem.”

About the newest gold member:

SUSE, headquartered in Germany, is a pioneer in open source software that supports applications and developers by providing reliable, interoperable Linux, cloud infrastructure and storage solutions that give enterprises greater control and flexibility. Twenty-five years of engineering excellence, exceptional service and an unrivaled partner ecosystem power the products and support that help SUSE’s customers manage complexity, reduce cost, and confidently deliver mission-critical services. SUSE provides customers a holistic approach to orchestration and management through Kubernetes-as-a-Service capabilities in SUSE OpenStack Cloud 7, a Kubernetes-integrated container OS delivered with the SUSE Container-as-a-Service (CaaS) Platform, and the convergence of CaaS and PaaS in the soon-to-be-released SUSE solution based on Cloud Foundry with Kubernetes as a key component.

“Modern business is moving to the software-defined data center to optimize mission-critical availability and quickly deliver new services,” said Thomas Di Giacomo, CTO at SUSE. “SUSE solutions enable innovative open source technologies that are hardened for software-defined infrastructure-based enterprise operations, backed by outstanding support. This move toward software-defined infrastructure includes a growing emphasis on container and cloud technologies, and CNCF has become the natural home for many of the leading open source projects that will enable the software-defined data center of the future. This is the ideal time for SUSE to join the CNCF community as a gold member, as SUSE is focused on providing customers with a holistic approach to orchestration and management.”

About the newest silver members:

HarmonyCloud, based in Hangzhou, is a container-based cloud platform provider focused on building enterprise PaaS platforms on open source projects like Kubernetes. The company provides enterprise clients with enhanced features for runtime security, networking and storage, and enables microservice-based applications, distributed tracing and automated CI/CD.

“The core HarmonyCloud team, which comes from the SEL Lab at Zhejiang University, has made great contributions to Kubernetes and the cloud native space,” said Aoyu Wang, CEO of HarmonyCloud. “We’ve used our knowledge and expertise to automate and optimize the deployment of cloud applications based on container technologies, and this makes CNCF membership a perfect fit for us. We look forward to making continued contributions to the container-based technologies fostered and incubated by CNCF.”

QAware, based in Germany, is an independent software manufacturer and consultancy – analyzing, renovating, developing and implementing software systems and cloud native applications for customers whose success heavily depends on IT. These applications provide enterprises with a decisive advantage, as they make processes and products possible that were previously unimaginable.

“Building cloud native applications is a revolutionary way of making systems possible that were previously unimaginable,” said Josef Adersberger, CTO at QAware. “We love to share our experience, drive discussions with other cloud native experts and contribute to leading open source technology, which is why we are very happy to join forces with the CNCF.”

Solinea, headquartered in San Francisco, is the leading professional services partner that accelerates enterprise cloud adoption. The company works with enterprises and service providers to help them achieve their agile, secure and transformational objectives by developing multi and hybrid cloud adoption strategies, driving cloud native enablement through the integration of containers and microservices, and accelerating application delivery to the cloud through innovative DevOps solutions.

“Partnering with the Cloud Native Computing Foundation is the right decision as we look ahead at how best to architect and deploy open, vendor-agnostic cloud solutions for our current and future clients,” said Francesco Paola, CEO of Solinea. “As we work with leading global enterprises and service providers to architect and deploy cloud, container and microservices solutions at scale, driving agility into the organization, it is important for us to work with an exceptional team that understands our clients’ needs. CNCF is the right choice.”

TenxCloud, based in Beijing, is an innovation-driven cloud computing company founded in 2014. It is the first Kubernetes-based enterprise-class container cloud platform in China. The platform provides application-centric container cloud products and solutions that cover lightweight container virtualization, microservices, DevOps, continuous delivery and more.

“At the end of 2015, TenxCloud released China’s first enterprise container cloud platform based on the open source project Kubernetes, with the goal of helping enterprises achieve rapid delivery of business applications and continuous innovation,” said Jerry Huang, CEO of TenxCloud. “The development of TenxCloud benefits from the open source community, so it’s our pleasure to promote the development of container technology, and we look forward to making contributions to CNCF’s projects.”


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of those software stacks including Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC, CoreDNS, containerd, and rkt; brings together the industry’s top developers, end users, and vendors; and serves as a neutral home for collaboration. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Natasha Woods

The Linux Foundation

(415) 312-5289

PR@CNCF.io

Cloud Native Computing Foundation Becomes Home To Pod-Native Container Engine Project rkt


Container execution solutions to benefit from Foundation fostering community building and interoperability in cloud-native computing technology

BERLIN – CloudNativeCon + KubeCon Europe – March 29, 2017 – The Cloud Native Computing Foundation, which is sustaining and integrating open source technologies to orchestrate containers as part of a microservices architecture, today announced that rkt, a project proposed by CoreOS, has been accepted by the Technical Oversight Committee (TOC) as an incubating project. As an application container engine developed for modern production cloud-native environments, rkt (pronounced “rocket”) is used to run applications packaged as container images on servers in production systems.

“It is important for CNCF to be a good home for container- and orchestrator-friendly platforms, and adding rkt to our project portfolio is another big milestone for the CNCF,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “Kubernetes and other container orchestrators benefit from reliable, community-driven container runtimes like rkt. Having a container runtime engine such as rkt, along with the container cluster management system Kubernetes, under a single foundation umbrella will bring huge benefits for providing solid end-user solutions to the industry.”

A pillar of cloud native computing is packaging applications as container images and distributing those images to servers. On the server, a container engine then downloads the image, verifies the image integrity, and executes the container process. Ideally, the container engine does this in the simplest possible manner while meeting the expectations of the production cloud native user. The rkt tool is laser-focused on solving these problems and is integrated with various orchestration systems including Kubernetes, Mesos, Nomad, and many organizations’ bespoke systems.
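
That download-verify-execute flow looks roughly like the following sketch (the key prefix and version tag are illustrative):

    # Trust the publisher's signing key for an image prefix.
    sudo rkt trust --prefix=coreos.com/etcd

    # Fetch the image; rkt verifies its signature before storing it.
    sudo rkt fetch coreos.com/etcd:v3.1.7

    # Execute the container process directly, with no central daemon.
    sudo rkt run coreos.com/etcd:v3.1.7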

“Container execution is a core part of cloud native and it has been the mission of the rkt team and project to create a simple, composable, and production-ready container engine for the ecosystem,” said Jonathan Boulle, rkt project co-founder, CNCF TOC representative, and head of containers and Berlin site lead at CoreOS. “With CNCF becoming the neutral, respected home for rkt, the project will benefit from community building and engagement, and fostering interoperability with Kubernetes, OCI, containerd and other future projects.”

Since its introduction by CoreOS in December 2014, the rkt project has greatly matured, with 178 contributors, 6,833 GitHub stars and 5,182 commits, and is widely used in the industry by companies like Xoom and BlaBlaCar. Packages of rkt are available for many popular Linux distributions including Arch, CentOS, CoreOS Container Linux, Debian, Fedora, Gentoo, NixOS, openSUSE, Ubuntu, and Void. rkt also plays a central role in how CoreOS Container Linux runs Kubernetes.

Pod-Native Container Engine

The rkt project has contributed indirectly to the creation of several important new APIs, specifications, and discussions in the container ecosystem. appc, the specification rkt is based on, was donated to the Open Container Initiative (OCI) at its founding in order to create the OCI image specification. The Container Network Interface (CNI), the container network plugin system used by Mesos, Kubernetes, rkt, and others, comes directly from the initial rkt plugin system and has become a multi-organization, industry-wide effort. The rkt project was also a catalyst for the Kubernetes Container Runtime Interface (CRI) and is available for use via the CRI.

Notable Milestones:

  • 178 contributors
  • 5,182 commits
  • 59 releases with 2 branches and 667 forks
  • 6,833 GitHub stars


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of those software stacks including Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC, CoreDNS, containerd, and rkt; brings together the industry’s top developers, end users, and vendors; and serves as a neutral home for collaboration. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Natasha Woods

The Linux Foundation

(415) 312-5289

PR@CNCF.io

Cloud Native Computing Foundation Makes Certified Kubernetes Administrator Exam Curriculum Freely Available


Curriculum Blueprint Allows Training Providers To Get a Jump Start on Certified Kubernetes Administrator Exam Preparations

BERLIN – CloudNativeCon + KubeCon Europe – March 29, 2017 – The Cloud Native Computing Foundation, which is sustaining and integrating open source technologies to orchestrate containers as part of a microservices architecture, today announced that the Certified Kubernetes Administrator Exam Curriculum is now freely available.

Available under the Creative Commons Attribution 4.0 International license, the document outlines the knowledge, skills and abilities that a Certified Kubernetes Administrator (CKA) can be expected to demonstrate. By offering insight into subject domains and their percentage weight on the exam, Kubernetes training providers can use the document to shape their own curricula and programs to better prepare candidates for the exam. A Certified Kubernetes Administrator will be expected to work proficiently to design, install, configure, and manage a Kubernetes production-grade cluster.

“The ecosystem of training providers and consultants interested in offering Kubernetes expertise is rapidly growing as cloud native computing adoption soars,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “By making the curriculum outline available now, companies in the Kubernetes training ecosystem are able to get a jump start on their CKA exam preparations. Being able to assess this several months in advance of exam availability will be extremely beneficial to the training ecosystem.”

The Certified Kubernetes Administrator Exam Curriculum v0.9 is available at https://github.com/cncf/curriculum.

In November, CNCF announced a training, certification and Kubernetes Managed Service Provider (KMSP) program. Since then, a team of Kubernetes experts representing nine different companies including Apprenda, Canonical, CoreOS, Google, Huawei, and Samsung SDS, among others, has been working on defining an online, proctored certification program. The CKA exam, which will be run by The Linux Foundation for CNCF, is expected to be available this summer.

While many enterprises have successfully deployed Kubernetes based on the publicly available documentation and support available from the large and growing Kubernetes community, CNCF’s KMSP program and training course enable enterprises that want additional support to be confident that they are working with Kubernetes experts.

This April, the self-paced Kubernetes Fundamentals (LFS258) course – which teaches Linux administrators and software developers who are starting to work with containers the key principles of managing containerized applications in production – will be expanded so that the course content matches the CKA exam scope.

Additionally, a new free “Try Before You Buy: Kubernetes Fundamentals (LFS258)” ebook sample of the course materials is now available. It gives a high-level overview of what Kubernetes is and the challenges it solves, and dives deep into the system architecture.

Volunteers are needed to help beta test the exam this May. Interested developers and Kubernetes experts should subscribe to the Kubernetes Certification Working Group list: https://lists.cncf.io/mailman/listinfo/cncf-kubernetescertwg.


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of those software stacks including Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC, CoreDNS, containerd, and rkt; brings together the industry’s top developers, end users, and vendors; and serves as a neutral home for collaboration. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Natasha Woods

The Linux Foundation

(415) 312-5289

PR@CNCF.io

Deploying 2048 OpenShift nodes on the CNCF Cluster (Part 2)


By Jeremy Eder, Red Hat, Senior Principal Software Engineer

Overview

The Cloud Native community has been incredibly busy since our last set of scaling tests on the CNCF cluster back in August. In particular, the Kubernetes (and by extension, OpenShift) communities have been hard at work pushing scalability to entirely new levels. As significant contributors to Kubernetes, Red Hat engineers are involved in this process both upstream and in our enterprise distribution of Kubernetes, Red Hat OpenShift Container Platform.

It’s time to put the new OpenShift 3.5 release to the test again with more benchmarking on the CNCF community cluster that has been donated and built out by Intel.

For more information about what Red Hat is doing in the Kubernetes community, be sure to attend our talks at CloudNativeCon + KubeCon Europe this week.

Recap

The previous round of benchmarking on CNCF’s cluster provided us with a wealth of information, which greatly aided our work on this new release. That series of scaling tests consisted of using a cluster-loader utility (as demonstrated at CloudNativeCon + KubeCon in Seattle last year) to load the environment with realistic content such as Django/WordPress, along with multi-tier apps that included databases such as PostgreSQL and MariaDB. We did this on Red Hat OpenShift Container Platform running on a 1,000-node cluster provisioned and managed using Red Hat OpenStack Platform. We scaled the number of applications up, analyzed the state of the system while under load, and folded all of the lessons learned into Kubernetes and then downstream into OpenShift.

What we built on the CNCF Cluster

This time we wanted to leverage bare metal as well.  So we built two OpenShift clusters:  one cluster of 100 nodes on bare metal and another cluster of 2,048 VM nodes on Red Hat OpenStack Platform 10.  We chose 2,048 because it’s a power of 2, and that makes engineers happy.

Goals

We kept some of our goals from last time, and added some cool new ones:

  • Build a 2,000+ node OpenShift cluster and research future reference designs
  • Use the overlay2 graph driver for improved density and performance, along with the recent SELinux support added in kernel v4.9
  • Saturation test OpenShift’s HAProxy-based network ingress tier
  • Measure persistent volume scalability and performance using Red Hat’s Container-Native Storage (CNS) product
  • Saturation test OpenShift’s integrated container registry and CI/CD pipeline

Network Ingress/Routing Tier

The routing tier in OpenShift consists of machines running HAProxy as the ingress point into the cluster.  As our tests verified, HAProxy is one of the most performant open source solutions for load balancing.  In fact, we had to re-work our load generators several times in order to push HAProxy as required.  By popular demand from our customers, we also added SNI and TLS variants to our test suite.

Our load generator runs in a pod and its configuration is passed in via configmaps.  It queries the Kubernetes API for a list of routes and builds its list of test targets dynamically.


In our scenario, we found that HAProxy was indeed exceptionally performant.  From field conversations, we identified a trend: there are (on average) a large number of low-throughput cluster ingress connections from clients (e.g. web browsers) to HAProxy versus a small number of high-throughput connections.  Taking this feedback into account, the default connection limit of 2,000 leaves plenty of room on commonly available CPU cores for additional connections.  Thus, we have bumped the default connection limit to 20,000 in OpenShift 3.5 out of the box.

If you have other needs to customize the configuration for HAProxy, our networking folks have made it significantly easier — as of OpenShift 3.4, the router pod now uses a configmap, making tweaks to the config that much simpler.

As we were pushing HAProxy we decided to zoom in on a particularly representative workload mix – a combination of HTTP with keepalive and TLS terminated at the edge.  We chose this because it represents how most OpenShift production deployments are used – serving large numbers of web applications for internal and external use, with a range of security postures.

Let’s take a closer look at this data, noting that since this is a throughput test with a Y-axis of requests per second, higher is better.

nbproc is the number of HAProxy processes spawned.  nbproc=1 is currently the only supported value in OpenShift, but we wanted to see what, if anything, increasing nbproc bought us from a performance and scalability standpoint.

Each bar represents a different potential tweak:

  • 1p-mix-cpu*:  HAProxy nbproc=1, run on any CPU
  • 1p-mix-cpu0: HAProxy nbproc=1, run on core 0
  • 1p-mix-cpu1: HAProxy nbproc=1, run on core 1
  • 1p-mix-cpu2: HAProxy nbproc=1, run on core 2
  • 1p-mix-cpu3: HAProxy nbproc=1, run on core 3
  • 1p-mix-mc10x: HAProxy nbproc=1, run on any core, sched_migration_cost=5000000
  • 2p-mix-cpu*: HAProxy nbproc=2, run on any core
  • 4p-mix-cpu02: HAProxy nbproc=4, run on core 2

We can learn a lot from this single graph:

  • CPU affinity matters.  But why are certain cores nearly 2x faster?  This is because HAProxy is now hitting the CPU cache more often due to NUMA/PCI locality with the network adapter.
  • Increasing nbproc helps throughput.  nbproc=2 is ~2x faster than nbproc=1, but we got no further boost from going to 4 processes; in fact, nbproc=4 was slower than nbproc=2.  This is because there were 4 cores in this guest, and 4 busy HAProxy processes left no room for the OS to do its thing (like process interrupts).

In summary, we know that we can improve performance more than 20 percent from baseline with no changes other than sched_migration_cost.  What is that knob? It is a kernel tunable that weighs processes when deciding if and how the kernel should load-balance them amongst available cores.  By increasing it by a factor of 10, we keep HAProxy on the CPU longer and increase our likelihood of CPU cache hits.

This is a common technique amongst the low-latency networking crowd, and is in fact recommended tuning in our Low Latency Performance Tuning Guide for RHEL7.
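
As a sketch, the change looks like this, assuming the RHEL 7 kernel’s name for the tunable (kernel.sched_migration_cost_ns) and its usual default of 500000 ns:

    # Keep HAProxy on-CPU longer by raising the migration cost 10x.
    sysctl -w kernel.sched_migration_cost_ns=5000000

    # Persist the setting across reboots.
    echo 'kernel.sched_migration_cost_ns = 5000000' > /etc/sysctl.d/99-haproxy.conf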

We’re excited about this one, and will endeavor to bring this optimization to an OpenShift install near you :-).

Look for more of this sort of tuning to be added to the product, as we’re constantly hunting for opportunities.

Network Performance

In addition to providing a routing tier, OpenShift also provides an SDN.  Similar to many other container fabrics, OpenShift-SDN is based on Open vSwitch and VXLAN.  OpenShift-SDN defaults to multitenant security as well, which is a requirement in many environments.

VXLAN is a standard overlay network technology.  Packets of any protocol on the SDN are wrapped in UDP packets, making the SDN capable of running on any public or private cloud (as well as bare metal).

Incidentally, both the ingress/routing and SDN tier of OpenShift are pluggable, so you can swap those out for vendors who have certified compatibility with OpenShift.

When using overlay networks, the encapsulation technology comes at a cost of CPU cycles to wrap/unwrap packets and is mostly visible in throughput tests.  VXLAN processing can be offloaded to many common network adapters, such as the ones in the CNCF Cluster.

Web-based workloads are mostly transactional, so the most valid microbenchmark is a ping-pong test of varying payload sizes.

Below you can see a comparison of various payload sizes and stream count.  We use a mix like this as a slimmed down version of RFC2544.

  • tcp_rr-64B-1i:  tcp, request/response, 64-byte payload, 1 instance (stream)
  • tcp_rr-64B-4i:  tcp, request/response, 64-byte payload, 4 instances (streams)
  • tcp_rr-1024B-1i:  tcp, request/response, 1024-byte payload, 1 instance (stream)
  • tcp_rr-1024B-4i:  tcp, request/response, 1024-byte payload, 4 instances (streams)
  • tcp_rr-16384B-1i:  tcp, request/response, 16384-byte payload, 1 instance (stream)
  • tcp_rr-16384B-4i:  tcp, request/response, 16384-byte payload, 4 instances (streams)

The X-axis is the number of transactions per second.  For example, if the test can do 10,000 transactions per second, the round-trip latency is 100 microseconds (1/10,000 of a second).  Most studies indicate the human eye begins to detect variations in page-load latencies in the range of 100-200ms.  We’re well below that range.

Bonus network tuning:  large clusters with more than 1,000 routes or nodes require increasing the default kernel ARP cache size.  We’ve increased it by a factor of 8, and are including that tuning out of the box in OpenShift 3.5.
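
The knobs involved are the kernel’s neighbor-table thresholds; the sketch below assumes an 8x bump over the common defaults of 128/512/1024, so treat the exact values as illustrative.

    # Enlarge the ARP (neighbor) cache for clusters with 1,000+ nodes/routes.
    sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
    sysctl -w net.ipv4.neigh.default.gc_thresh2=4096
    sysctl -w net.ipv4.neigh.default.gc_thresh3=8192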

Overlay2, SELinux

Since Red Hat began looking into Docker several years ago, our products have defaulted to using Device Mapper as Docker’s storage graph driver.  The reasons for this are maturity, supportability, security, and POSIX compliance.  Since the release of RHEL 7.2 in early 2016, Red Hat has also supported the use of overlay as the graph driver for Docker.

Red Hat engineers have since added SELinux support for overlay to the upstream kernel as of Linux 4.9.  These changes were backported to RHEL7, and will show up in RHEL 7.4.  This set of tests on the CNCF Cluster used a candidate build of the RHEL7.4 kernel so that we could use overlay2 backend with SELinux support, at scale, under load, with a variety of applications.

Red Hat’s posture toward storage drivers has been to ensure that we have the right engineering talent in-house to provide industry-leading quality and support.  After pushing overlay into the upstream kernel, as well as extending support for SELinux, we feel that the correct approach for customers is to keep Device Mapper as the default in RHEL, while moving to change the default graph driver to overlay2 in Fedora 26.  The first Alpha of Fedora 26 will show up sometime next month.

As of RHEL 7.3, we also support the overlay2 backend.  The overlay filesystem has several advantages over Device Mapper (most importantly, page cache sharing among containers).  Support for the overlay filesystem was added to RHEL with important caveats: it is not fully POSIX compliant, and its use was, at the time, incompatible with SELinux (a key security/isolation technology).

That said, the density improvements gained by page cache sharing are very important for certain environments where there is significant overlap in base image content.

We constructed a test that used a single base image for all pods and created 240 pods on a node.  The cluster-loader utility used to drive this test has a feature called a “tuningset,” which we use to control the rate of pod creation.  You can see there are 6 bumps in each line; each represents a batch of 40 pods that cluster-loader created.  Before moving to the next batch, cluster-loader makes sure the previous batch is in the Running state.  In this way, we avoid crushing the API server with requests, and can examine the system’s profiles at each plateau.
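
For readers without cluster-loader, the batching behavior can be approximated with plain kubectl. This is a rough stand-in for a tuningset, not cluster-loader’s actual configuration; the pod image is illustrative, and an otherwise empty namespace is assumed.

    # Create 240 pods in six batches of 40, waiting for each batch to reach
    # Running before starting the next.
    for batch in $(seq 1 6); do
      for i in $(seq 1 40); do
        kubectl run "pause-${batch}-${i}" \
          --image=gcr.io/google_containers/pause-amd64:3.0 --restart=Never
      done
      # Poll until all pods created so far report Running.
      while [ "$(kubectl get pods | grep -c ' Running ')" -lt $((batch * 40)) ]; do
        sleep 5
      done
    done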

Below are the differences between Device Mapper and overlay for memory consumption.  The memory savings are reasonable (again, this is a “perfect world” scenario and your mileage may vary).

The reduction in disk operations below is due to subsequent container starts leveraging the kernel’s page cache rather than having to repeatedly fetch base image content from storage:

We have found overlay2 to be very stable, and it becomes even more interesting with the addition of SELinux support.

Container Native Storage

In the early days of Kubernetes, the community identified the need for stateful containers.  To that end, Red Hat has contributed significantly to the development of persistent volume support in Kubernetes.

Depending on the infrastructure you’re on, Kubernetes and OpenShift support dozens of volume providers: Fibre Channel, iSCSI, NFS, Gluster and Ceph, as well as cloud-specific storage providers such as Amazon EBS, Google persistent disks, Azure blob storage and OpenStack Cinder.  Pretty much anywhere you want to run, OpenShift can bring persistent storage to your pods.

Red Hat Container Native Storage is a Gluster-based persistent volume provider that runs on top of OpenShift in a hyper-converged manner.  That is, it is deployed in pods, scheduled like any other application running on OpenShift.  We used the NVME disks in the CNCF nodes as “bricks” for gluster to use, out of which CNS provided 1GB secure volumes to each pod running on OpenShift using “dynamic provisioning.”
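
A minimal sketch of that dynamic provisioning flow follows; the Heketi URL and object names are placeholders, and the storage.k8s.io/v1 API version reflects Kubernetes 1.6+ (earlier clusters used v1beta1 and an annotation-based claim syntax).

    # Define a Gluster-backed StorageClass and request a 1Gi volume from it.
    cat <<'EOF' | oc create -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cns-gluster
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi-storage.example.com:8080"   # placeholder URL
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-claim
    spec:
      storageClassName: cns-gluster
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    EOF

    # Watch the claim go from Pending to Bound as the volume is provisioned.
    oc get pvc demo-claim -w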

If you look closely at our deployment architecture, while we have deployed CNS on top of OpenShift, we also labeled those nodes as “unschedulable” from the OpenShift standpoint, so that no other pods would run on the same node.  This helps control variability — reliable, reproducible data makes performance engineers happy :-).

We know that cloud providers limit volumes attached to each instance in the 16-128 range (often it is a sliding scale based on CPU core count).  The prevailing notion seems to be that field implementations will see numbers in the range of 5-10 per node, particularly since (based on your workload) you may hit CPU/memory/IOPS limits long before you hit PV limits.

In our scenario we wanted to verify that CNS could allocate and bind persistent volumes at a consistent rate over time, and that Heketi, the API control plane for CNS, could withstand an API load test.  We ran throughput numbers for create/delete operations, as well as API parallelism.

The graph below indicates that CNS can allocate volumes in constant time – roughly 6 seconds from submit to the PVC going into “Bound” state.  This number does not vary when CNS is deployed on bare metal or virtualized.  Not pictured here are our tests verifying that several other persistent volume providers respond in a very similar timeframe.

OpenStack and Ceph

As we had approximately 300 physical machines for this set of tests, and a goal of hitting the “engineering feng shui” value of 2,048 nodes, we first had to deploy Red Hat OpenStack Platform 10 and then build the second OpenShift environment on top.  Unique to this deployment of OpenStack were:

  • We used the new Composable roles feature to deploy OpenStack
    • 3 Controllers
    • 2 Networker nodes
    • A bare metal role for OpenShift
  • Multiple Heat stacks
    • Bare metal Stack
    • OpenStack Stack
  • Ceph was also deployed through Director.  Ceph’s role in this environment is to provide boot-from-volume service for our VMs (via Cinder).

We deployed a 9-node Ceph cluster on the CNCF “Storage” nodes, which each include two SSDs and ten nearline SAS disks.  We know from our counterparts on the Ceph team that Ceph performs significantly better when deployed with write journals on SSDs.  Based on the CNCF storage node hardware, that meant creating two write journals on the SSDs and allocating five of the spinning disks to each SSD.  In all, we had 90 Ceph OSDs, equating to 158TB of available disk space.

From a previous “teachable moment,” we learned that if a KVM image is converted to “raw” format before being imported into glance, creating instances from that image takes a snapshot/boot-from-volume approach.  The net result is that for each VM we create, we consume only approximately 700MB of disk space.  For the 2,048 node environment, the VM pool in Ceph took only approximately 1.5TB of disk space.  Compare this to the last (internal) test, when we had 1,000 VMs taking nearly 22TB.
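
The conversion step looks roughly like this (image names are illustrative):

    # Convert the qcow2 guest image to raw so Ceph-backed Cinder can clone
    # copy-on-write boot volumes from a snapshot instead of full copies.
    qemu-img convert -f qcow2 -O raw rhel7-guest.qcow2 rhel7-guest.raw

    # Upload the raw image to glance.
    openstack image create "rhel7-raw" \
      --disk-format raw --container-format bare --file rhel7-guest.raw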

In addition to reduced I/O to create VMs and reduced disk space utilization, booting from snapshots on Ceph was incredibly fast.  We were able to deploy all 2,048 guests in approximately 15 minutes.  This was really cool to watch!

Bonus deployment optimization:  use image-based deploys!  Whether on OpenStack or any other infrastructure, public or private, image-based deploys cut out much otherwise repetitive work and can reduce the burden on your infrastructure significantly.

Bake in as much as you can.  Review our (unsupported) Ansible-based image provisioner for a head start.

Improved Documentation for Performance and Scale

Phew! That was a lot of work. How do we ensure that the community and customers benefit?

First, we push absolutely everything upstream.  Next, we bake as much of the tuning, best practices and config optimization into the product as possible – and we document everything else.

Along with OpenShift 3.5, the performance and scale team at Red Hat will deliver a dedicated Scaling and Performance Guide within the official product documentation.  This provides a consistently updated section of documentation to replace our previous whitepaper, and a single location for all performance- and scalability-related advice and best practices.

Summary

The CNCF Cluster is an extremely valuable asset for the open source community.  This second round of benchmarking on CNCF’s cluster has once again provided us with a wealth of information to incorporate into upcoming releases. The Red Hat team hopes that the insights gained from our work will benefit the many Cloud Native communities upon which this work was built:

  • Kubernetes
  • Docker
  • OpenStack
  • Ceph
  • Kernel
  • OpenvSwitch
  • Golang
  • Gluster
  • And many more!

Our team also wishes to thank the CNCF and Intel for making this valuable community resource available.  We look forward to tackling the next levels of scalability and performance along with the community!

Want to know what Red Hat’s Performance and Scale Engineering team is working on next? Check out our Trello board. Oh, and we’re hiring Principal-level engineers!

Tell Us Your Opinion About Diversity in Tech at Google Cloud Next 2017


Author: Leah Petersen, Systems Engineer, Samsung CNCT

Contributed blog from CNCF Platinum member Samsung

“Tell me your opinion about diversity in tech.”

…not something you expect to be asked at a technology conference booth. This year at the Samsung Cloud Native Computing Team sponsor booth, we decided to ask Google Cloud Next attendees their opinion about a problem in our industry: the lack of diversity. We could immediately see a trend – some people nervously shied away from us, laughing it off, while others marched straight up to us and began speaking passionately about the subject.

We chose this unique approach to interacting with conference goers for a few reasons. As big supporters of diversity in tech, we wanted to gather ideas. We asked whether companies were proactively taking any measures to increase their diversity or retain diverse individuals. We also wanted to get people thinking about this issue, since the cloud native computing space is relatively young and we see a great benefit in assembling a diverse group of people to move it forward. Finally, as information-bombarded, weary attendees navigated through the booth space, we wanted to offer something beyond yet another sales pitch.

The three-day conference turned up a lot of interesting ideas and new perspectives. A favorite was how one person defined lack of diversity by describing a group of entirely “Western-educated males” making decisions for a company. After defining what a diverse workforce does or doesn’t look like, lots of people talked about what their companies were doing to take action.

Many companies were involved in youth programs, coding camps, university outreach, and mentoring programs. Salesforce hired a Chief Equality Officer to put words into action. One CTO from a Singapore-based company told us how women in Asian countries more commonly choose a STEM education track, and 65% of his engineering team is female. Another company removed names and university names from resumes to address interviewers’ implicit biases. Most of the women we talked to simply thanked us for bringing up this issue and described how isolating it can be to be the lone female on a team.

Another common, less positive story was of companies that had tried a diversity program but gave up. This scenario underscores an unavoidable truth: bringing diversity into tech isn’t easy. Encouraging children to choose STEM careers is a long game that will bring change, but bringing diversity into the workplace right now is another story.

There’s a diverse, willing, and intelligent pool of workers, but they need training. People of different backgrounds, who weren’t able to get the classic Western STEM education, need opportunities to transition into tech. As one man pointed out, adult training programs are the answer, but companies need to do more than just offer training. Finding the time and energy to break out of the demanding lifestyle of a single parent or low-income adult is near impossible.

Apprenticeships with financial support are the answer to getting a mature, diverse workforce.

The week’s undeniable message from everyone was: we NEED diversity and more specifically we need diversity in leadership positions. We need more points of view and we need a better representation of our society in the tech industry.

Diverse teams are more adaptable overall and build better products that serve more people.

Linkerd Celebrates One Year with One Hundred Billion Production Requests


By William Morgan, Linkerd co-creator and Buoyant co-founder

We’re happy to announce that, one year after version 0.1.0 was released, Linkerd has processed over 100 billion production requests in companies around the world. Happy birthday, Linkerd! Let’s take a look at all that we’ve accomplished over the past year.

We released Linkerd into the wild in February 2016, with nothing more than a couple of commits, a few early contributors, and some very big dreams. Fast-forward one year, and Linkerd has already grown to 30+ releases, 800+ commits, 1,500+ stars, 30+ contributors, 600+ people in the Linkerd Slack, and 30-odd companies around the globe using it in production (or on the path to production) – including folks like Monzo, Zooz, NextVR, Houghton Mifflin Harcourt, Olark and Douban.

Not to mention, of course, that Linkerd is now officially a CNCF project, alongside Kubernetes, Prometheus, gRPC, and a couple of other amazing projects that are defining the very landscape of cloud native infrastructure.

To the many contributors, users, and community members—thank you for helping us make Linkerd so successful this past year. (And thank you for privately sharing your production request volumes and deployment dates, which allow us to make big claims like the one above!) We couldn’t have asked for a better community. We’d especially like to thank Oliver Beattie, Jonathan Bennet, Abdel Dridi, Borys Pierov, Fanta Gizaw, Leo Liang, Mark Eijsermans, Don Petersen, and Oleksandr Berezianskyi for their contributions to the project and the community.

You can read the full press release here.

Finally, here’s a fun vanity-metric graph, courtesy of Tim Qian’s excellent GitHub star history plotter:

Linkerd GitHub star history

Here’s to another great year for Linkerd!

* Blog originally posted on https://blog.buoyant.io/2017/03/07/linkerd-one-hundred-billion-production-requests/

Cloud Native Computing Foundation Continues Efforts to Drive Cloud Native Adoption with Application-Focused New Members


Foundation to exhibit during anticipated Google Cloud Next event

SAN FRANCISCO – Google Cloud Next – March 7, 2017 – The Cloud Native Computing Foundation, which is sustaining and integrating open source technologies to orchestrate containers as part of a microservices architecture, today announced that Bitnami and Kinvolk have joined the Foundation as Silver Members to encourage dynamically scalable cloud native application development for the benefit of both enterprise and end-user customers. In addition, Box has joined as an end user supporter. The CNCF Google Cloud Next booth, staffed with the Foundation team and member company technologists, will be located at Moscone Center West, Booth D1.

Join CNCF and the #SFK8s Meetup for a panel on cloud native computing at Google Launchpad (301 Howard, San Francisco) on March 9 from 6:00 to 8:30 PM. Speakers will discuss Kubernetes, Fluentd, Linkerd and Prometheus during a Q&A with leaders from each project. Please RSVP here.

These new members – which have each implemented or contributed to one of today’s most innovative cloud native applications – solidify the growing prominence of the cloud native ecosystem and its impact on modern enterprise infrastructures.

“Today’s cloud native technologies empower developers to create resilient and dynamically scalable applications like never before,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “We’re thrilled to be working with Bitnami, Box and Kinvolk to improve developer engagement with our growing list of technology projects.”

The companies join CNCF’s member network of more than 70 cloud native stewards, many of whom will attend CloudNativeCon + KubeCon Europe in Berlin on March 29-30.

About the newest silver members:

Bitnami is a leading provider of ready-to-run server applications and automation for the software supply chain. With over one million deployments each month, Bitnami-packaged open source applications for the cloud provide a consistent, secure and up-to-date optimized experience – for both developers and end users – on any platform.

“Building upon our investments in the cloud native and open container space – including contributions to Kubernetes-related projects such as Helm and Monocular and creating development environment containers for the Eclipse Che project – Bitnami is pleased to announce the recent acquisition of Skippbox,” said Erica Brescia, COO of Bitnami. “Bitnami’s mission to simplify the deployment of cloud- and container-optimized applications makes CNCF membership a perfect fit for us.”

Kinvolk is a Berlin-based development company focused on building, and building upon, the open-source software projects making up the foundation of modern Linux systems. Kinvolk works with clients to build some of the most challenging and cutting-edge cloud infrastructure projects in the industry. If a project is pushing the boundaries of what Linux can do, that’s when Kinvolk can help most.

“The shift to cloud native and microservice-based applications has been a driver of innovation for projects at the core of modern Linux systems, where Kinvolk focuses its efforts,” said Chris Kühl, CEO and co-founder of Kinvolk. “We’ve used our expertise in user space and systemd to help build rkt, the container runtime from CoreOS, and our knowledge of Linux internals and eBPF to gather system metrics more efficiently and reliably in Weave Scope, the monitoring and visualization tool from Weaveworks. At Kinvolk, we look forward to continuing to help build the innovative Linux technologies that cloud native computing is driving.”

About the newest end user supporters:

Box is a leader in cloud content management. The company enables businesses to revolutionize how they work by securely connecting their people, information and applications. Founded in 2005, Box today powers more than 71,000 businesses globally, including AstraZeneca, General Electric, P&G and The GAP.

“As an early adopter of Kubernetes, we’re happy to share our expertise and learnings with both the CNCF TOC and end users going into production,” said Sam Ghods, co-founder of Box. “Box is using Kubernetes with great results and we are exploring other cloud native technologies to empower developers to run their infrastructure in the cloud and balance request traffic in real-time across applications.”

As an end user supporter, Box has joined other end user companies like Goldman Sachs, eBay, Capital One, Ticketmaster, AT&T and NCSOFT on the End User Technical Advisory Board (TAB). For additional information on end user memberships, end user supporters and the End User TAB, please visit: www.cncf.io/about/end-user-community.


About Cloud Native Computing Foundation

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. The Cloud Native Computing Foundation (CNCF) hosts critical components of those software stacks including Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing and gRPC; brings together the industry’s top developers, end users, and vendors; and serves as a neutral home for collaboration. CNCF is part of The Linux Foundation, a nonprofit organization. For more information about CNCF, please visit: https://cncf.io/.