Now Live: First Observability Practitioners Summit Schedule is Jam-Packed

Are you planning to attend KubeCon + CloudNativeCon Seattle, and want a deep dive into all things monitoring and observability? Arrive early so that you can attend the first-ever Observability Practitioners Summit – sponsored by LightStep – on Monday, December 10!

A part of KubeCon + CloudNativeCon Community Events Day, the Observability Practitioners Summit is focused on pushing the field of monitoring and observability forward with a mix of deep dive sessions and unique opportunities for discussion between the maintainers and users of tracing, metrics, logging, and alerting systems.

Some of the can’t-miss sessions include:

  • Logging What Matters: The Pythia Just-in-Time Instrumentation Framework – Lily Sturmann, Harvard University Extension School / Massachusetts Open Cloud
  • Visualizing Distributed Systems with Statemaps – Bryan Cantrill, Joyent
  • NanoLog: A Nanosecond Scale Logging System – Stephen Yang, Stanford University
  • Finding the needle in a Haystack – Shreya Sharma, Expedia Inc.
  • Who Watches the Watchers – Adrian Cockcroft, AWS
  • System Comprehension and Root Cause Analysis with Distributed Tracing – Yuri Shkuro, Uber

Representatives of CNCF projects OpenTracing, Prometheus, Fluentd and Jaeger will be in attendance + ready to talk all things observability!

Be sure to register* today to secure your place!

* Pre-registration is required. To register for Observability Practitioners Summit, add it on during your KubeCon + CloudNativeCon registration.

CNCF and BVP Host Discussions with Leading Open Source Contributors and End Users

This blog was originally posted on the Bessemer Venture Partners Blog.

Bessemer was proud to partner with the CNCF to host an evening devoted to discussion around exciting new open source infrastructure projects and best practices for enterprises making the cloud native transformation. Given the high concentration of financial services firms in NYC, we were fortunate to have great input from many of these leaders on the added challenges of making these transitions in highly-regulated institutions.

Across all of these discussions, it was clear that everyone is dealing with similar macro challenges. Today’s infrastructure leaders are in a tough spot, having to meet increasingly strict data governance and security policies while also providing the ever increasing scalability that modern enterprises demand.  In addition, these leaders are now faced with a myriad of infrastructure options (public/private/hybrid cloud), compute paradigms (VM/Container/Serverless), open source projects, and startup vendors to choose from. Fortunately, governing bodies such as the CNCF are providing some direction and guidance for enterprises as they begin to navigate the crowded space.

Some of the best actionable tips that we heard regarding helping enterprises succeed with cloud native transitions include:

1)    Empower engineers to explore the open source landscape and experiment with new projects, then establish a point person, such as a Director of Open Source, to help centralize enforcement of best practices before new open source projects go into production.

2)    Create a funnel process to help qualify open source projects during testing and help teams get approval to use production data or build mission-critical apps with open source software. This enables development teams to move quickly and experiment with new technologies while ensuring an enterprise can manage its security and compliance concerns before new deployments go into production.

3)    Assign clear ownership to microservices and internal open source projects. When you have hundreds or thousands of microservices in production, knowing who to contact for what service can become a real problem.

4)    If you need help navigating this rapidly changing ecosystem, check out some of the guidance provided by the CNCF here.

In addition, the evening featured talks from three great speakers, as outlined below.

Ken Owens, VP of Cloud Engineering at Mastercard, shared lessons learned from his experience adopting two of the leading open source technologies, Kubernetes and Istio, in production at Mastercard. Based on our conversations with many large enterprises in the financial services industry, we were really impressed to learn from Ken just how far ahead Mastercard is in this transition. His main advice was that in order to get any large enterprise, particularly a highly-regulated company in the financial services industry, to cross the open source chasm, you need very strong buy-in from senior executives to go cloud native. He advised team leads to lay the necessary communications and relationship-building groundwork, since this infrastructure change will be a multi-month, sometimes multi-year endeavor.

Priyanka Sharma, Director of Alliances at GitLab, shared tips on how to avoid some of the devops horror stories she has seen at enterprises that have tried to keep up with the latest trends: go for quality over quantity on devtools. Latching onto new trends in devops can ultimately lead to more headache and heartbreak than efficiency savings. Instead of having a “toolbox” of ten different devops tools, try to standardize on two or three tools that fit the bulk of your needs.

Andrew Jessup, project maintainer for SPIFFE and cofounder of Scytale, shared the origin story of their open source project (SPIFFE) and the need they saw in the market for scalable identity and access management infrastructure for microservices. He also highlighted how the pain points that SPIFFE solves are increasingly acute in enterprises moving to a microservices-based architecture, distributed across heterogeneous infrastructure that requires high scalability and elasticity. As these environments have workloads that are constantly spinning up and disappearing across various network boundaries, traditional approaches to security and authentication break down, requiring an identity-focused approach.

CNCF is proud to foster and support the next wave of open source communities. We want to give a special thanks to BVP for supporting CNCF and making this event possible, along with its portfolio companies npm, ScyllaDB, and Scytale. We also want to thank the speakers for sharing their first-hand open source knowledge, helping to give back and push the community forward.


Annual “CNCF Community Awards” Nominations Kick Off – Winners to be Recognized at KubeCon + CloudNativeCon Seattle

Nominations open today for the third-annual CNCF Community Awards – sponsored by VMware – honoring those who have made the greatest impact over the last year in the cloud native space. Within our fast-growing project communities, there’s an incredible amount of talent, hard work and commitment worthy of recognition.

So, if you know a deserving ambassador, maintainer and/or advocate working hard to advance cloud native innovation + serve the community, check out the forms below to nominate them through this year’s awards:

    • CNCF Top Ambassador: A champion for the cloud native space, this individual helps spread awareness of the CNCF (and its incubated projects). The CNCF Ambassador leverages multiple platforms, both online as well as speaking engagements, driving interest and excitement around the project.
    • CNCF Top Committer: This award recognizes excellence in technical contributions to CNCF and its projects. The CNCF Top Committer has made key commits to projects and, more importantly, contributes in a vendor-neutral way that benefits the project as a whole.
    • Chop Wood/Carry Water Award: This is given to a community member who helps behind the scenes, dedicating countless hours of their time to open source projects + completing often thankless tasks for the ecosystem’s benefit. The winner of this award will be chosen by the TOC and CNCF Staff.

Previous winners of the Community Awards include Sarah Novotny, Kelsey Hightower, Dawn Chen, Clayton Coleman, Fabian Reinartz & many more!

The award process is simple – nominations (open from October 8-29) will be collected through the above survey forms + shared across the CNCF official mailing lists.

Voting (open from November 1-15) will be performed using the CIVS tool, using emails from the CNCF database for the following groups:

  • CNCF Ambassadors are eligible to vote for the Top Ambassador
  • CNCF Maintainers are eligible to vote for the Top Committer


CNCF to Host Cloud Native Buildpacks in the Sandbox

Today, the Cloud Native Computing Foundation (CNCF) accepted Cloud Native Buildpacks (CNB) into the CNCF Sandbox for early stage and evolving cloud native projects.

The CNCF Sandbox is a home for early stage projects; for further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

Buildpacks are pluggable, modular tools that translate source code into container-ready artifacts by providing a higher-level abstraction compared to Dockerfile. In doing so, they provide a balance of control that minimizes initial time to production, reduces the operational burden on developers, and supports enterprise operators who manage apps at scale.

Based on experience in maintaining production-grade buildpacks from both Pivotal and Salesforce Heroku, CNB was built to provide a platform-to-buildpack API contract that takes source code and outputs Docker images that can run on cloud platforms supporting OCI images.
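As a rough sketch of the developer experience this enables, a buildpack platform can turn a source directory into a runnable image with a single command. The `pack` CLI invocation, image name, and path below are illustrative, not taken from the announcement:

```shell
# Hypothetical workflow: build an OCI image from source with buildpacks,
# no Dockerfile required. Guarded so it is safe to run without the CLI.
if command -v pack >/dev/null 2>&1; then
  # Buildpacks detect the app's language and add the needed
  # dependency layers automatically.
  pack build myapp:latest --path ./myapp && echo "image built"
else
  echo "pack CLI not installed; skipping build"
fi
```

Because the contract is platform-to-buildpack, the same buildpacks can back a local CLI, a CI system, or a full platform without change.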

“The next generation of cloud native buildpacks will aid developers and operators in packaging applications into containers, allowing operators to efficiently manage the infrastructure necessary to keep application dependencies updated,” said Stephen Levine, engineer and product manager at Pivotal. “We hope the inclusion of CNB in the CNCF sandbox will further improve interoperability between platforms and attract a wide community of contributors, including buildpack creators and maintainers.”

Buildpacks were first conceived by Heroku in 2011. Since then, they have been adopted by Cloud Foundry as well as GitLab, Knative, Deis (now Microsoft), Dokku, and Drie.

“Anyone can create a buildpack for any Linux-based technology and share it with the world. Buildpacks’ ease of use and flexibility are why millions of developers rely on them for their mission critical apps,” said Joe Kutner, architect at Heroku. “Cloud Native Buildpacks will bring these attributes inline with modern container standards, allowing developers to focus on their apps instead of their infrastructure.”

The TOC sponsors of the project are Brian Grant and Alexis Richardson.

Schedule is LIVE for First-Ever EnvoyCon 🎉

Gearing up for KubeCon + CloudNativeCon Seattle? The inaugural EnvoyCon, taking place on December 10 in Seattle as part of the KubeCon + CloudNativeCon Community Events Day, just published a jam-packed speaker line-up!

The program features 30-minute sessions and 10-minute lightning talks, with experience levels from beginner to expert + is primarily composed of end user stories along with a great set of technical deep dives. Some of the not-to-miss talks include:

  • Lightning Talk: Discovering the Discovery Services – Brook Shelley, Turbine Labs
  • Envoy at Square – Snow Petteren, Square, Inc.
  • Running Envoy at the Edge – Derek Argueta, Pinterest
  • How to DDOS yourself with Envoy (and other tales of migration horror) – Ben Plotnick, Yelp + John Billings, Yelp
  • Lightning Talk: Debugging microservices applications with Envoy + Squash – Idit Levine,
  • Bridging the gap between on-prem and cloud: a story about Envoy + a hybrid boundary – Tristan J Blease, Groupon + Michael Chang, Groupon

Matt Klein, lead Envoy maintainer and software engineer at Lyft, is “beyond excited for the first EnvoyCon. The Envoy community went above and beyond in submitting an amazing number of outstanding proposals. I think attendees are going to have a fantastic day and I can’t wait until December!”

EnvoyCon will be the first conference focused on the booming #serviceproxy / #servicemesh space + dedicated to bringing together Envoy end users and system integrators. Don’t miss your chance to join the fun – register today!*

* Pre-registration is required. To register for EnvoyCon, add it on during your KubeCon + CloudNativeCon registration.

GSoC 2018: Extending Envoy’s fuzzing coverage

The Google Summer of Code (GSoC) 2018 program has come to an end, and we followed up with CNCF’s seven interns previously featured in this blog post to check in on how their summer projects progressed.

Over the summer, Anirudh Morali of Anna University worked with mentors Matt Klein, Constance Caramanolis, and Harvey Tuch on “Extending Envoy’s Fuzzing Coverage,” a project focused on extending Envoy’s fuzz coverage, including proto, data plane, and H2-level frame fuzzing.

I am Anirudh, a final year computer science undergraduate at Anna University, India. I previously participated in Google Summer of Code 2017 with Haiku OS, and this time, Google Summer of Code 2018 with Cloud Native Computing Foundation.

It was February 12th when Google posted the list of organizations that would be participating in Google Summer of Code 2018. During my internship with Hasura, I came to learn about the Cloud Native Computing Foundation and all its projects. Having heard about CNCF’s success from people who had participated in Google Summer of Code with CNCF the previous year, I was interested in sending a proposal to CNCF, but had no idea which specific project or domain to work on. The CNCF projects that participated in GSoC (Kubernetes, Prometheus, Envoy, CoreDNS, and a few more) had huge codebases, which was overwhelming for me at first. Though most codebases had parts written in different languages, the majority was Go. I wanted to learn Go, find low-hanging issues, fix them, and then work on a proposal. But I realised it was too late to learn a new language with just a few weeks left before the program began, and it would be even more difficult to write code for the projects, since they are not beginner friendly.

I had previously worked with a C++ codebase and was looking for projects built with C++. I came across Envoy and noticed that this was the first time Envoy was participating in GSoC. In short, Envoy is a high-performance service proxy, initially created at Lyft, with a codebase in C++. It’s designed to minimize memory and CPU footprint while offering load balancing, network tracing, and database activity monitoring in microservice architectures.

I spent a few days learning what Envoy was and took part in one of the community meetings. Even though I didn’t understand the technical details of what was discussed, it was a good experience to see how people stayed in touch remotely and worked collaboratively on a project. I had talks with Harvey Tuch, who was already working on fuzzing for Envoy. Matt Klein told me that I’d be working with him and extending Envoy’s fuzzing support.

My last exam and the community bonding period both fell in the week of May 2nd. With no exams left, I was able to spend more time on GSoC. We had video calls to get things started. There was an ongoing issue tracking all the fuzzing support needed for Envoy.

I became part of the Envoy GitHub organization and was assigned the issue to work on. Harvey gave me a quick intro to the fuzzing infrastructure Envoy was using, OSS-Fuzz, and there was a guide on getting started with writing fuzz tests. I started working on fuzzing utility functions in Envoy, such as the string functions, and then opened a pull request to the repository. After a series of code reviews and suggestions, followed by fixes, the pull request got merged, marking my first contribution to Envoy. 😀

For the second evaluation, I picked up the configuration validation fuzzing tasks. The task was supposed to be a small fix to the existing server fuzz test, but it took me some time to figure out. I also started working on the H1 capture fuzzing test. H1 fuzzing covers both the request and response paths as of now, i.e. fuzzing happens both downstream and upstream; with direct response enabled, fuzzing happens only downstream, and a canned response, such as a file, is returned instead of connecting to the upstream.

OSS-Fuzz is a project by Google that helps open-source projects become more secure and stable by fuzzing their codebases with fuzzing engines. I got access to Envoy’s OSS-Fuzz dashboard and started working on the issues present there. I picked up simple tests that were failing because of an assert failure, and fixed them with proper error capture or by adding bazel build constraints. Some PRs needed changes to other repositories, so I raised issues there as well.

Work done during the summer:

Pull requests:


I owe my biggest thanks to:

  • Envoy open-source community (Specifically, my kind mentors for taking time to clarify doubts, and helping me learn).
  • Nikhita, Chris Aniszczyk (CNCF organization admins) and Amit, for introducing me to Envoy.
  • People at my university for the permission and time to pursue GSoC.

If you’re an aspiring GSoCer, go ahead and apply to CNCF – you’ll have an awesome summer. You can get in touch with me for any queries on getting started; you can find my contact details here. I will be happy to help! 🙂


TOC Votes to Move Rook into CNCF Incubator

By Jared Watts

Rook, an open source cloud native storage orchestrator for Kubernetes, was the first storage project accepted into CNCF back in January of this year. Today, roughly 8 months later, we are excited to announce that the TOC has voted to officially move Rook out of the sandbox entry stage and up to the CNCF Incubator, alongside such projects as gRPC, Envoy and Helm.

We want to extend a thank you to the community that is continuing to help develop Rook into what it is today. In that short amount of time, we’ve seen 13x the number of container downloads, doubled the number of GitHub stars and, most importantly, doubled the number of contributors to Rook.

While we invite you to read more on the Rook blog, we wanted to highlight some of the work the community has done together. We’ve worked on two major releases over the past 8 months, the 0.7 and the 0.8 releases, which included significant features and improvements. Some of the highlights of the releases include:

  • Rook Framework for storage providers turns Rook into a general purpose cloud-native storage orchestrator that now supports multiple new storage solutions with reusable specs, logic, policies and testing.
  • CockroachDB and Minio support and orchestration shipped in v0.8. Support for Network File System (NFS) has been merged to master, and work on Cassandra, Nexenta, and Alluxio is ongoing.
  • Ceph support graduated to Beta maturity, taking a major step towards being declared stable.

There are many different ways to get involved in the Rook project, so please join us in helping the project continue to grow on its way to the final stage as a CNCF hosted project: graduation! You can learn more on the Rook website + get involved in the community on GitHub or Slack.

CNCF To Host Cortex in the Sandbox

Today, the Cloud Native Computing Foundation (CNCF) announced acceptance of Cortex, a multitenant Prometheus-as-a-Service, into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

Developed to scale Prometheus in a multitenant fashion, Cortex provides long term storage for Prometheus metrics when used as a remote write destination, and a horizontally scalable, Prometheus-compatible query API.

Cortex was originally developed at Weaveworks in 2016, and is currently used in production by organizations like Grafana Labs, FreshTracks, and EA.

Cortex provides use cases both for service providers, who are managing large numbers of Prometheus instances and want to provide long term storage as a source of value, and for enterprises that want to centralize management of large scale Prometheus deployments and ensure long term durability of Prometheus data while providing a global query view.
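Concretely, pointing an existing Prometheus server at Cortex is a one-stanza configuration change. The endpoint URL below is a placeholder, and the exact push path and tenancy setup depend on how Cortex is deployed:

```shell
# Sketch: write a minimal prometheus.yml fragment that ships samples to a
# Cortex endpoint (placeholder URL) for long-term, multitenant storage.
cat > prometheus.yml <<'EOF'
remote_write:
  - url: http://cortex.example.com/api/prom/push
    # In multitenant setups the tenant is usually conveyed via an
    # X-Scope-OrgID header, typically added by an auth proxy.
EOF
grep -c 'remote_write' prometheus.yml
```

Queries then go to Cortex’s Prometheus-compatible query API, so existing dashboards and alerting rules keep working against the long-term store.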

“By joining CNCF, Cortex will have a neutral home for collaboration between contributor companies, while allowing the Prometheus ecosystem to grow a more robust set of integrations and solutions,” said Alexis Richardson, CEO of Weaveworks. “Cortex already has a strong affinity with several CNCF technologies, including Kubernetes, gRPC, OpenTracing and Jaeger, so it’s a natural fit for us to continue building on these interoperabilities as part of CNCF.”

TOC sponsors of the project are Bryan Cantrill and Ken Owens.

“At FreshTracks we believe in bringing the right data to the right people at the right time. Cortex is at the heart of our offering, providing multi-cluster and multi-cloud visibility to developers and operators. Cortex’s entry into CNCF is exciting, as it will provide the visibility and governance needed to ensure the ecosystem continues its healthy growth,” said Bob Cotton, co-founder of FreshTracks.

The CNCF Sandbox is a home for early stage projects; for further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

Linkerd 2.0 Now In General Availability: From Service Mesh to Service Sidecar

Today, the Cloud Native Computing Foundation (CNCF) and maintainers of Linkerd are excited to announce the general availability of Linkerd 2.0.

The 2.0 release brings dramatic improvements to performance, resource consumption, and ease of use to Linkerd. It also transforms the project from a cluster-wide service mesh to a composable service sidecar, designed to give developers and service owners critical tools they need to be successful in a cloud native environment.

Released by Buoyant founders William Morgan and Oliver Gould in 2016, Linkerd was contributed to CNCF in early 2017. Since then, the project has experienced rapid growth and now powers a diverse ecosystem of applications around the globe, from satellite imaging to payments processing to the Human Genome Project.

Linkerd 2.0’s service sidecar design gives developers and service owners the ability to run Linkerd on just their service, providing automatic observability, reliability, and runtime diagnostics without configuration or code changes. The service sidecar approach also reduces the risk for platform owners and system architects, by providing a lightweight, incremental path to obtaining the traditional service mesh features of platform-wide telemetry, security, and reliability.

Notable release highlights include:

  • A self-contained “service sidecar” design that augments a single service without requiring cluster-wide installation.
  • An incremental path to cluster-wide service mesh, whereby service sidecars across multiple services link to become a service mesh.
  • A zero-config, zero-code-change installation process.
  • Automatic Grafana dashboards and Prometheus monitoring of service “golden metrics.”
  • Automatic TLS between services, including certificate generation and distribution.
  • A complete proxy rewrite in Rust, yielding orders of magnitude improvement in latency, throughput, and resource consumption.
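To make the “service sidecar” idea concrete, the typical Linkerd 2.0 workflow wraps a single service’s deployment without touching its code. The commands below are a sketch: `deployment.yml` is a stand-in for your own manifest, and the block is guarded so it is safe to run without a cluster:

```shell
# Hypothetical per-service workflow (requires the linkerd and kubectl CLIs
# plus a Kubernetes cluster; skipped here if they are absent).
if command -v linkerd >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
  linkerd install | kubectl apply -f -               # install the control plane once
  linkerd inject deployment.yml | kubectl apply -f - # add the sidecar to one service
  linkerd stat deployments                           # success rate, RPS, latency
else
  echo "linkerd/kubectl not available; skipping"
fi
```

Because injection is per-manifest, teams can adopt the sidecar one service at a time and let a full mesh emerge incrementally, which is exactly the migration path described above.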

Service Sidecars, Service Owners, and Service Ops

“With the 2.0 release, the community focused heavily on the idea of ‘service ops,’ whereby service owners are responsible not just for building their service, but also for deploying it, maintaining it, and waking up at 3 am if it breaks,” said Oliver Gould, core maintainer of Linkerd and CTO of Buoyant. “Service owners are the ultimate customers of all this platform technology we’re building, and we wanted to address their needs directly.”

“We’ve seen Linkerd grow at an incredible pace since becoming a part of CNCF, to the point where it is now successfully handling billions of production requests every day,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “The migration path from 1.0 to 2.0 is a huge step forward in how service and platform owners can work together, and we look forward to seeing how it is integrated even deeper into the cloud native user community.”

The project’s contributor and end user community now spans dozens of organizations, including Salesforce, Walmart, Comcast, CreditKarma, PayPal, WePay, and Buoyant.

“Before Linkerd 2.0, for my services, all I had was statistics for my public API. Now I can see on a very granular level how each of my services are behaving,” said Pascal Bourque, CTO and co-founder of Studyo, a task and project manager designed for schools. “For me this is gold. The fact that it is painless to install is even better.”

“We had a problem with instability and latencies after we redeployed a key service and turned to Linkerd 2.0 to diagnose the problem,” said Will King, CTO and co-founder of Hush, a social commerce company focused on cosmetics. “Having the ability to watch real time requests and responses was incredibly useful, much more than we had expected. We use Linkerd 2.0 tap for all our container service debugging now.”

Join the community

Linkerd 2.0 is available for download on GitHub and the community welcomes new users + contributors. The Linkerd core maintainer team is reachable on Slack, Twitter, and mailing lists, and at meetups and other events for the cloud native community.

GSoC 18: Kata Containers support for containerd

The Google Summer of Code (GSoC) 2018 program has come to an end, and we followed up with CNCF’s seven interns previously featured in this blog post to check in on how their summer projects progressed.

Over the summer, Jian Liu, a graduate student at Zhejiang University, worked with mentors Harry Zhang, Fupan Li, and Lantao Liu to “Integrate containerd with Kata Containers,” a project aimed at creating a containerd-kata runtime plugin for containerd to integrate with Kata Containers.

My story started in January 2018, when I noticed that the CNCF community had some container-technology-related ideas for GSoC 2018. After browsing all the topics, I was very interested in “KataContainers support for containerd/cri-containerd.” I already had some knowledge of Kubernetes, so I spent quite some time deeply studying the documentation and code for containerd and KataContainers. Once I had a much better understanding of these two projects, I drafted a design proposal and was fortunately selected as a GSoC candidate! I believe studying these open source projects’ source code helped a lot with my design proposal.

In the Kata/CRI-native approach, the theory was that we could avoid using so many independent shim and proxy processes. Investigations had shown the shims consuming too much memory, causing huge overhead in high-density cases. So our project aimed to remove the independent shim and proxy processes to save memory and start containers more quickly.

In the beginning of the project, we set out to develop a kata-runtime plugin for containerd. When we implemented part of the interfaces that containerd’s runtime plugin needed, the basic container operations worked successfully. That moment was engraved on my mind. It meant that I had gone from a container user to a container developer. So excited!

But soon, something unexpected happened.

After I had already finished part of the task, the containerd upstream community put forward a fresh new proposal, “Shim API v2.” The goal was to establish a new standard to make containerd compatible with various runtimes. This sounded very helpful to our ongoing work, and after discussing with maintainers from the Google, KataContainers, and containerd communities, we decided to make a huge turnaround.

Considering the remaining time in my GSoC program was short, I began to worry about whether I could finish these new tasks. Fortunately, my mentors connected me with a maintainer of the KataContainers project, who set up the skeleton code for me, and the two of us cooperated closely on the new design. Also, maintainers from Google and the containerd community gave me lots of useful ideas on how to follow the upstream progress. With this timely guidance, by the end of GSoC I had successfully implemented many of the functions that the Kubernetes CRI required and passed 95% of the node e2e conformance tests. More details about my work can be found here.

Eventually, we used containerd + shim v2 + kata-runtime to measure the time to start a pause container.

This GSoC project was really challenging for me, and I felt a flood of new knowledge poured into my mind. The practical experience had given me a profound understanding of the open source world. My love for the open source community grew stronger. With a burning passion, I continued to follow the project after GSoC. This past summer was really amazing. Developing in this excellent community, I gained so much joy.

I would like to thank my mentors for their constant support and guidance, especially Harry Zhang, Fupan Li, and Lantao Liu. I would also like to thank the Google Summer of Code team and CNCF for giving me such a golden opportunity to contribute to the open source community. I am eager to continue developing and contributing to the open source world.

Last but not least, the CNCF community is a good starting point for every student. There are many great projects in it, and there is always one that will fit you.