


CRCP – The Curiously Reoccurring Communications Pattern


By Randy Abernethy, Managing Partner at RX-M, LLC

Randy Abernethy is a tech entrepreneur, coder, startup adviser, financial technology pioneer, Apache Thrift committer, author, and highly experienced Destiny guardian.

Curiously Reoccurring Things

This is an article about the Curiously Reoccurring Communications Pattern, CRCP[1] for short. The CRCP name was inspired by the C++ CRTP[2], the “Curiously Recurring Template Pattern”, a C++ coding pattern identified in 1995 by Jim Coplien.

While the CRTP and CRCP operate in different spheres they are both curious and reoccurring. They are curious because they are non-obvious answers to important architectural questions. People keep reinventing these patterns only to discover that they are already known amongst the cognoscenti. Patterns like these can offer a valuable roadmap to help new architects overcome common challenges.

While the CRTP has its own Wikipedia page, the Curiously Reoccurring Communications Pattern is more or less an RX-M in-house invention. Therefore, we’ll leave it to you to Google the CRTP and spend our time here focusing on the merits of the less well-known CRCP.


The CRCP is a large scale communications pattern found in the architectural fabric of many distributed applications. In the CRCP, frontend components communicate with the backend using RESTful service interfaces, synchronous backend activities are performed using RPC, and the core of the backend communicates asynchronously over a messaging fabric.

Each communications technology offers the perfect blend of features and function for the subsystem in which it is found. We’ll examine each in turn, working our way from the outside in.

The RESTful Outside

Like perhaps most modern systems, the “outside” part of a CRCP system operates over the Internet and makes use of the REST[3] architectural pattern. RESTful APIs extract maximum value from the underlying and ubiquitous HTTP protocol. They GET free use of browser and proxy caches all over the world, graciously paid for by the users, not the API developers. HTTP also provides RESTful services with clean separation between platform level directives (headers) and application level communications (verbs, IRIs, status, and bodies). Drop in HTTP/2 and the whole thing goes a lot faster at no extra charge (technical or otherwise). Authentication schemes, powerful HTTP aware gateways, native browser functionality, the list of RESTful benefits goes on. If you want to leverage the global infrastructure of the Web, there’s likely no better choice than REST for your API.
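To make the request/response shape concrete, here is a minimal sketch using only Python's standard library: a tiny HTTP server exposing a single resource, queried with a plain GET. The `/orders/42` route and the payload fields are hypothetical, invented for illustration; note how the platform-level `Cache-Control` header rides alongside the application-level JSON body.

```python
# Minimal REST sketch: a tiny HTTP server exposing one resource, plus a
# client GET. The /orders/42 route and payload are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "symbol": "SMEG", "qty": 100}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            # Platform-level directive: lets browser/proxy caches hold the result
            self.send_header("Cache-Control", "max-age=60")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/orders/42"
with urllib.request.urlopen(url) as resp:
    order = json.loads(resp.read())
    cache = resp.headers["Cache-Control"]
server.shutdown()
print(order, cache)
```

Because the caching directive is an HTTP header rather than application logic, any intermediary on the path can honor it without understanding the payload.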

RESTful APIs are also the most likely thing one would expose to partners and engineered client systems, where the ubiquity and tool-less nature of RESTful interfaces are a significant plus. RESTful interfaces exhibit a Resource-Oriented Architecture (ROA), being decomposed into resources and operations on those resources, typically making these APIs easier for counterparties to navigate and understand.

Clearly, there’s a lot to like about REST, so why consider anything else?

The RPC Inside

The world changes considerably when we enter the realm of the backend service. Whether in the cloud or in a traditional on-premises data center, the nature of application decomposition in the backend tends toward smaller services, fewer bits of Web infrastructure and a single organizational view. Take away the Web and the need for cross-org adoption and you take away much of the RESTful value proposition.

Another consideration in a modern cloud native environment is application migration. If you are moving from a large, monolithic, traditional system to microservices, it is a pretty good bet that your monoliths do not have REST APIs internally. Rather, they have functions and methods. Monolith functions and methods can be repackaged as RPC services in short order; migrating the same interface to a resource-oriented API environment like REST, however, is a significant engineering undertaking that impacts clients and servers alike.

Also worth considering is the heightened need for performance on the backend. Microservice oriented systems, in particular, are likely to require many backend calls to satisfy a single frontend request. For example, Netflix has noted in talks on its open source Zuul gateway that, in one analyzed setting, each Internet call typically triggered 6-7 backend calls. Whether the number is 3 or 20, latency in the call chain can quickly add up.

Backend services in the synchronous call path of internet callers affect the perceived responsiveness of the application in question. Thus the cumulative latency of these inside services could become a user experience problem if not managed. Fortunately, high-performance Remote Procedure Call (RPC) systems are available to address this concern.

CNCF’s gRPC[4] and Apache’s Thrift[5] are both cross-platform RPC systems, and both are regularly clocked at rates an order of magnitude faster than a functionally equivalent service using a REST interface. These “Modern RPC” systems also support interface evolution, allowing you to add methods and parameters without rebuilding old clients. Both also support cross-language calls, covering every programming language in widespread commercial use today.
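Since neither gRPC nor Thrift ships with the Python standard library, the remote-call shape can be sketched with stdlib `xmlrpc` instead. The `enrich_order` method is hypothetical, and real gRPC/Thrift stubs would be generated from IDL, but the pattern of invoking a remote method as if it were local is the same:

```python
# RPC call-shape sketch. gRPC/Thrift generate typed stubs from IDL; here
# stdlib xmlrpc stands in purely to show the remote-call pattern, with an
# integral lightweight server and no application server required.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def enrich_order(order):
    # Hypothetical backend method: tag the order with a routing venue.
    order["venue"] = "NASDAQ"
    return order

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(enrich_order)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.enrich_order({"id": 42, "qty": 100})  # reads like a local call
server.shutdown()
print(result)
```

The value gRPC and Thrift add over this sketch is compact binary serialization, generated type-safe stubs in many languages, and far lower per-call overhead.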

Nearly all of the hyperscale firms have a history of RPC innovation and adoption. For example, Google invented Protocol Buffers[6] (the serialization system under gRPC), Facebook followed with Thrift (now Apache Thrift), and Twitter created the Scala based Finagle[7] system (which can operate over Thrift). Each of these companies uses their respective RPC system across vast swaths of their in-house platforms to reduce latency and increase throughput.

Neither gRPC nor Apache Thrift requires an application server; instead, they offer integral lightweight RPC servers in each of the languages they support. Application servers offer many valuable features, but in a world where services are atomically packaged and deployed, perhaps multiple times on the same node, placing an entire application server in a container to host one small microservice can amount to undesired overhead and additional latency.

So there’s a lot to like about modern RPC in a cloud native system backend. Whether your priority is brownfield monolith migration, low latency, or efficient containerization, modern RPC solutions can fit the bill. However, while RPC solutions can speed up the synchronous application backend, they are not perfect for asynchronous operations or event-driven environments.

The Messaging Core

In many applications, things at some point stop being synchronous. For example, if a mobile user submits an order for 100 shares of SuperMega to a trading system, validating the order and enriching it may occur in the synchronous RPC space. However, sending the order to a stock market and waiting for it to execute clearly needs to take place in the background. Decoupling subsystems with widely varying processing times is a job for messaging.
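That hand-off can be sketched with a stdlib `queue.Queue` standing in for a real broker such as Kafka or NATS; the order fields and fill logic are hypothetical:

```python
# Decoupling sketch: the synchronous path validates the order and hands it
# to a queue; a slow background worker sends it "to market" at its own pace.
# queue.Queue stands in for a real broker such as Kafka or NATS.
import queue
import threading

order_queue = queue.Queue()
fills = []

def market_worker():
    while True:
        order = order_queue.get()
        if order is None:          # sentinel: shut down the worker
            break
        fills.append({**order, "status": "FILLED"})  # slow work happens here
        order_queue.task_done()

t = threading.Thread(target=market_worker)
t.start()

# Fast, synchronous side: validate and enqueue, then return immediately.
order = {"id": 42, "symbol": "SMEG", "qty": 100}
assert order["qty"] > 0            # validation happens in the RPC space
order_queue.put(order)             # hand off; no waiting for execution

order_queue.join()                 # (demo only) wait for the background fill
order_queue.put(None)
t.join()
print(fills)
```

The caller's latency is bounded by the enqueue, not by the market's response time, which is exactly the decoupling the pattern is after.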

Another critical aspect of many environments is the fact that each message represents a new element of system state. Messaging allows us to embrace “event sourcing”[8], freezing and logging this state as it enters the system. These little state deltas can then be distributed to a wide range of services that may want to act independently on them. Auditing services, client notification services, risk analytics services, and more can all operate at their own pace in parallel as the data arrives.
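A minimal event-sourcing sketch, with a plain Python list standing in for a durable log and `current_qty` as a hypothetical replay consumer:

```python
# Event-sourcing sketch: each message is frozen into an append-only log as
# it arrives; independent services rebuild state by replaying the log.
log = []  # append-only event log

def record(event):
    log.append(event)  # freeze the state delta; past events are never mutated

record({"type": "OrderPlaced", "id": 42, "qty": 100})
record({"type": "OrderAmended", "id": 42, "qty": 150})

def current_qty(events, order_id):
    # A replay consumer: fold the event stream into the latest quantity.
    qty = 0
    for e in events:
        if e["id"] == order_id:
            qty = e["qty"]
    return qty

# An auditing service and a risk service can each replay independently.
print(current_qty(log, 42))
```

Because the log, not the derived state, is the source of truth, any number of services can fold the same events into their own views at their own pace.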

Publish and subscribe systems allow message consuming services to be scaled out to process messages at extreme throughput rates. Messages can also be captured and replayed to reproduce bugs, run tests, train ML systems and so on. There is no better way to unlock parallel activities and unencumber innovation at the heart of a system than messaging.

A small group of cluster-based, cloud native messaging platforms have found their way into next-generation applications. Apache’s Kafka[9] and CNCF’s NATS[10], in particular, are high-profile examples of messaging systems that can scale to the level demanded by large microservice systems. Both Kafka and NATS are frequently referred to as “central nervous systems” for applications.

It is easy to assume that the significant difference in processing models makes defining interfaces for messaging systems and RPC systems unique tasks. This, however, need not be the case.

Interface Definition Languages

Valuable synergies can be harvested when using RPC and Messaging together. For example, IDL (Interface Definition Language) based systems, like Protocol Buffers and Apache Thrift, make it easy to describe messages along with service interfaces. This allows one to serialize messages across a wide range of languages for use with RPC and messaging systems alike, lending the fast, efficient serialization of Protocol Buffers or Apache Thrift to the RPC and messaging world in a single IDL solution.
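The evolution property can be sketched with plain JSON dicts standing in for IDL-generated types; `decode_v1` is a hypothetical old consumer that tolerates fields added by a newer producer, much as Protocol Buffers and Thrift readers skip unknown fields:

```python
# Interface-evolution sketch. Protobuf/Thrift readers skip fields they do
# not know about; JSON dicts stand in for IDL-generated types here.
import json

# A v2 producer adds a new optional field, "venue".
wire = json.dumps({"id": 42, "qty": 100, "venue": "NASDAQ"})

V1_FIELDS = {"id", "qty"}  # the fields the old consumer was built against

def decode_v1(payload):
    # Old consumer: keep the fields it knows, ignore the rest, never fail.
    data = json.loads(payload)
    return {k: v for k, v in data.items() if k in V1_FIELDS}

print(decode_v1(wire))
```

The same tolerance is what lets a fleet of services upgrade their message contracts one at a time rather than in a single coordinated release.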

It is a grave mistake to think of the messages passed through a system as anything other than a key manifestation of an API contract. Protocol Buffers and Thrift allow you to evolve these message contracts without breaking existing systems by, for example, using collections and adding/removing attributes of messages. In short, there’s a lot of value to harvest from the IDL tools and serialization engines of RPC systems when messaging.

While IDL provides machine-readable documentation for RPC and messaging contracts, the need to document RESTful service contracts is also critical, particularly given their external facing nature. RESTful API definition requires an ROA aligned solution, like the Swagger based OpenAPI Initiative (OAI)[11].

Edge services may need to expose RESTful interfaces while consuming RPC interfaces. Fortunately, there are tools that make these transitions easier. For example, Apache Thrift can serialize any IDL defined entity to/from JSON, making these types easy to exchange with the world of the Web.


So while the CRCP is conceptually fairly high level and not suitable for all applications, it does occur with curious frequency in large-scale distributed systems. Even if all of the pieces are not a fit for your use case, understanding the CRCP elements and their motivations may bring some value when thinking about the right communications patterns and tools for your next cloud native endeavor.

[1] CRCP – Curiously Reoccurring Communication Pattern, a term invented at RX-M and the subject of this blog
[2] CRTP – Curiously Recurring Template Pattern, an idiom in C++ in which a class X derives from a class template instantiation using X itself as template argument, more generally known as F-bound polymorphism, a form of F-bounded quantification, https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern
[3] REST – Representational State Transfer, an architectural style wherein resources are identified by routes and interacted with via HTTP verbs like GET, POST, PUT, DELETE and PATCH, https://en.wikipedia.org/wiki/Representational_state_transfer
[4] gRPC – a high performance, HTTP/2 based cross platform RPC system, https://grpc.io/
[5] Apache Thrift – a high performance, pluggable, cross platform RPC and serialization system, https://thrift.apache.org/
[6] Protocol Buffers – a compact and efficient cross platform message serialization system, https://developers.google.com/protocol-buffers/
[7] Finagle – Finagle is an extensible RPC system for the JVM used to construct high-concurrency servers, https://twitter.github.io/finagle/
[8] Event Sourcing – a pattern wherein state is managed as an immutable set of events occurring within a system or delivered to a system from an external source, https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
[9] Kafka – a highly scalable distributed transaction log, https://kafka.apache.org/
[10] NATS – a highly scalable message distribution system, https://nats.io/
[11] OpenAPI Initiative – the Swagger-based, open source and open governance foundation focused on developing open standards for documenting RESTful interface contracts, https://www.openapis.org/

CNCF Annual Report


The Cloud Native Computing Foundation (CNCF) annual report for 2017 is now available here. The CNCF annual report includes highlights, activities and community engagement from 2017.

CNCF was formed under the Linux Foundation in 2015 and serves as the vendor-neutral home for many of the fastest-growing projects on GitHub, including Kubernetes, Prometheus, and Envoy, fostering collaboration between the industry’s top developers, end users, and vendors. Kubernetes and other CNCF projects are some of the highest velocity projects in the history of open source. When founded, CNCF had 28 members, and we ended the year with 170 members and growing. We are an open source software foundation dedicated to making cloud-native computing universal and sustainable.

CNCF is catalyzing the cloud-native movement and we are regularly adding new projects to better support a full stack cloud-native environment. To better listen to our community, we conducted a survey to learn more about the state of Kubernetes deployments and other container management platforms, as well as the progress of container deployment in general. More than 550 community members responded. The future of cloud-native is exciting, with more than 93% of respondents recommending CNCF technologies.

Our end user community is growing and we finished 2017 with 32 top companies and start-ups that are committed to accelerating the adoption of cloud-native technologies and improving the deployment experience.

None of the work we do or the impact we have at CNCF would be possible without the dedication of our global community of members, end users, developers, contributors and maintainers. We are incredibly grateful for the support and we will continue to enable software developers to build great products faster.

Adopting new technologies can be challenging, especially when it’s hard to find qualified people. Early training/certification draws talent in early and gives employers confidence to deploy cloud-native technologies faster.

CNCF offers training and certification for key CNCF technologies like Kubernetes to ensure that organizations can train their own employees or hire from a strong body of experienced talent. It is nearly unprecedented to get every cloud company, enterprise software provider and startup in the industry to support a conformance program. It is an extraordinary accomplishment that there are no forks in our industry, which speaks to the commitment that companies of all sizes have made to be good partners in the community. The community response was overwhelming; CNCF had certified offerings from 44 vendors in 2017. Learn more about our software conformance and training programs.

CNCF will continue to build sustainable ecosystems and foster a community around a constellation of high quality projects that orchestrate containers as part of a microservices architecture. We hope you will join us on our mission in 2018. Learn more at www.cncf.io.

Come join us at KubeCon + CloudNativeCon, May 2-4, 2018, at the Bella Center in Copenhagen, Denmark, to foster collaboration and community engagement, and to further the education and advancement of cloud native computing.

Prometheus User Profile: Interview with Datawire

Continuing our series of interviews with users of Prometheus, Richard Li from Datawire talks about how they transitioned to Prometheus.

Can you tell us about yourself and what Datawire does?

At Datawire, we make open source tools that help developers code faster on Kubernetes. Our projects include Telepresence, for local development of Kubernetes services; Ambassador, a Kubernetes-native API Gateway built on the Envoy Proxy; and Forge, a build/deployment system.

We run a number of mission critical cloud services in Kubernetes in AWS to support our open source efforts. These services support use cases such as dynamically provisioning dozens of Kubernetes clusters a day, which are then used by our automated test infrastructure.

What was your pre-Prometheus monitoring experience?

We used AWS CloudWatch. This was easy to set up, but we found that as we adopted a more distributed development model (microservices), we wanted more flexibility and control. For example, we wanted each team to be able to customize their monitoring on an as-needed basis, without requiring operational help.

Why did you decide to look at Prometheus?

We had two main requirements. The first was that we wanted every engineer here to be able to have operational control and visibility into their service(s). Our development model is highly decentralized by design, and we try to avoid situations where an engineer needs to wait on a different engineer in order to get something done. For monitoring, we wanted our engineers to be able to have a lot of flexibility and control over their metrics infrastructure. Our second requirement was a strong ecosystem. A strong ecosystem generally means established (and documented) best practices, continued development, and lots of people who can help if you get stuck.

Prometheus, and in particular, the Prometheus Operator, fit our requirements. With the Prometheus Operator, each developer can create their own Prometheus instance as needed, without help from operations (no bottleneck!). We are also members of the CNCF with a lot of experience with the Kubernetes and Envoy communities, so looking at another CNCF community in Prometheus was a natural fit.


How did you transition?

We knew we wanted to start by integrating Prometheus with our API Gateway. Our API Gateway uses Envoy for proxying, and Envoy automatically emits metrics using the statsd protocol. We installed the Prometheus Operator (some detailed notes here) and configured it to start collecting stats from Envoy. We also set up a Grafana dashboard based on some work from another Ambassador contributor.
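The statsd protocol Envoy emits is just plain-text metric lines over UDP, which is easy to see with a stdlib-only sketch. The metric name below is illustrative, and in a real deployment a Prometheus statsd exporter would play the receiver role:

```python
# Sketch of the statsd wire format: plain-text metric lines over UDP.
# A Prometheus statsd exporter would listen on such a socket; here a
# stdlib socket stands in as the receiver. The metric name is illustrative.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(5)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# "<name>:<value>|c" is a statsd counter increment.
send.sendto(b"envoy.http.downstream_rq_total:1|c", ("127.0.0.1", port))

datagram = recv.recvfrom(1024)[0].decode()
name, rest = datagram.split(":")
value, metric_type = rest.split("|")
print(name, value, metric_type)
send.close()
recv.close()
```

The exporter's job is simply to translate a stream of such lines into Prometheus metrics that the server can then scrape.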

What improvements have you seen since switching?

Our engineers now have visibility into L7 traffic. We also are able to use Prometheus to compare latency and throughput for our canary deployments to give us more confidence that new versions of our services don’t cause performance regressions.

What do you think the future holds for Datawire and Prometheus?

Using the Prometheus Operator is still a bit complicated. We need to figure out operational best practices for our service teams (when do you deploy a Prometheus?). We’ll then need to educate our engineers on these best practices and train them on how to configure the Operator to meet their needs. We expect this will be an area of some experimentation as we figure out what works and what doesn’t work.

To hear from more companies transforming their operations with cloud native technologies, be sure to register for KubeCon + CloudNativeCon, May 2-4, 2018 in Copenhagen.

Training and Certification Programs Enable Wider Kubernetes Expertise by Developers


By Jason McGee, Fellow, IBM

  Jason McGee, IBM Fellow, is VP and CTO of Container and Microservice Tribe. Jason leads the technical strategy and architecture across all of IBM Cloud, with specific focus on core foundational cloud services, including containers, microservices, continuous delivery and operational visibility services. 

Kubernetes is nothing new; we’ve been talking about how it has changed the developer space for a few years now. However, the world of managed containerized workloads and services has hit a new milestone: the Cloud Native Computing Foundation announced the graduation of Kubernetes from its incubation program.

As we look forward to its future, I sat down with Chris Aniszczyk, COO of CNCF, to discuss the growth of Kubernetes, expectations for 2018, developer training and certification, serverless architecture, ML/AI, and how cloud providers can enhance the container space.

We agree: it’s good news for developers that Kubernetes has become mainstream, even though some see the enterprise and large-scale adoption of the technology as boring or losing excitement. Boring isn’t necessarily bad. With K8s certifications like the CKA, training programs, and best practices in place, Kubernetes has grown beyond the bubble of container orchestration enthusiasts into the wider world of developers, especially those who might not have the technical bandwidth to spare to learn the intricacies of a new technology. Since we’ve focused on making the platform as solid as possible, rapid adoption has occurred through the training provided to swiftly gain the right skills and expertise. And we only expect further maturation of this technology, which will radically transform how we build software over the next 5 years.

The new Certified Kubernetes Application Developer program gives a developer everything he or she needs to integrate Kubernetes into a tech stack. Coupled with the increased availability of managed services offered by KCSP partners like IBM and other cloud vendors, this creates a bright future for Kubernetes. We expect to see exciting advances for the next generations of the cutting-edge solution, like container orchestration with Kubernetes unlocking the possibility of enhancing microservice architectures with management solutions like Istio, or finding new use cases leveraging machine learning and artificial intelligence.

Moral of the story — there is still much for developers to be excited about in the containers and microservices space. For our full conversation about Kubernetes and Istio, helping DevOps learn about these new technologies, where serverless architecture fits into the picture, and how ML/AI driven infrastructures could be the next frontier, listen to the full episode of the New Builders Podcast.

For more information, consider going to Jason’s panel on microservices and containers at KubeCon + CloudNativeCon Europe, May 2-4, 2018 in Copenhagen.

Europe’s Leading Online Fashion Platform Gets Radical with Cloud Native


Zalando, Europe’s leading online fashion platform, has experienced exponential growth since it was founded in 2008. Today Zalando has more than 14,000 employees, had 3.6 billion euros in revenue for 2016, and operates across 15 countries.

Read this new case study that talks about Zalando’s need to scale, which ultimately led the company on a cloud-native journey.

A few years ago, Zalando’s technology department began rewriting its applications to be cloud-ready and started moving its infrastructure from on-premises data centers to the cloud.

While orchestration wasn’t immediately considered, as teams migrated to Amazon Web Services (AWS), the Berlin-based company realized their teams were experiencing too much pain with infrastructure and cloud formation. To provide better support, cluster management was brought into play. The company now runs its Docker containers on AWS using Kubernetes orchestration.

In parallel to rewriting their applications, Zalando had set a goal of expanding beyond basic e-commerce to a platform offering multi-tenancy, a dramatic increase in assortments and styles, same-day delivery and even their own personal online stylist.

“We envision all Zalando delivery teams running their containerized applications on a state-of-the-art, reliable and scalable cluster infrastructure provided by Kubernetes. With growth in all dimensions, and constant scaling, it has been a once-in-a-lifetime experience,” says Henning Jacobs, Head of Developer Productivity at Zalando.

To hear from more European companies transforming their business with cloud native technologies, be sure to register for KubeCon + CloudNativeCon, May 2-4, 2018 in Copenhagen.

At the Intersection of Travel & Technology: Amadeus Rethinks IT with Kubernetes and Cloud Native


In the past few years, Amadeus, which provides IT solutions to the travel industry around the world, found itself in need of a new platform for the 5,000 services supported by its service-oriented architecture. The public cloud and its existing systems couldn’t quickly and efficiently deliver new services and features.

Read this in-depth case study to learn how the Spain-based company increased automation in managing its infrastructure, optimized the distribution of workloads, introduced new workloads, and used data center resources more efficiently.

Amadeus turned to Kubernetes and OpenShift Container Platform, Red Hat’s enterprise container platform. In doing so, Amadeus was able to economically enhance everyone’s travel experience, without interrupting workflows for the customers who depend on their technology.

By rethinking IT and making applications as cloud native as possible, Amadeus is reaping major benefits. Its flight search solution is now handling in production several thousand transactions per second, deployed in multiple data centers throughout the world. Be sure to also check out this Amadeus presentation from the March 28, 2017, OpenShift Commons Gathering in Berlin @KubeCon. If you are attending KubeCon + CloudNativeCon EU from May 2-4, 2018, catch Amadeus’ session on Pod Anomaly Detection and Eviction using Prometheus Metrics.

CNCF to Host Open Policy Agent (OPA)


Today, the Cloud Native Computing Foundation (CNCF) announced acceptance of the Open Policy Agent (OPA) into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

The Open Policy Agent (OPA) is an open source, general-purpose policy engine that enables unified, context-aware policy enforcement across the entire stack. OPA provides greater flexibility and expressiveness than hard-coded service logic or ad-hoc domain-specific languages and comes with powerful tooling to help anyone get started.

“Authorization is a problem you can’t wish away. A good authorization system supports diverse resource types and allows flexible policies. OPA gives us that flexibility,” said Manish Mehta, Senior Security Software Engineer at Netflix.

“As cloud native technology matures and enterprise adoption increases, the need for policy-based control has become vital,” said Torin Sandall, Software Engineer at Styra and Technical Lead for OPA. “OPA provides a purpose-built language and runtime that can be used to author and enforce authorization policy. As such, we see OPA as a valid addition to CNCF’s project portfolio and look forward to working with the growing community to foster its adoption.”

TOC sponsors of the project include Brian Grant and Ken Owens.

“As cloud native technology matures and enterprise adoption increases, there is a need for policy-based control technologies like OPA,” said Ken Owens, Vice President of Digital Native Architecture at Mastercard and member of the CNCF’s Technical Oversight Committee (TOC). “OPA provides a solution to control who can do what across microservice deployments because legacy approaches to access control do not satisfy the requirements of modern environments. This complements CNCF’s mission to accelerate adoption of cloud native technology in enterprises.”

Sandbox, a home for early stage projects, now replaces the previous Inception maturity level. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

CNCF to Host the SPIFFE Project


Today, the Cloud Native Computing Foundation accepted SPIFFE into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

Also known as the Secure Production Identity Framework For Everyone, the SPIFFE project is an open-source identity framework designed expressly to support distributed systems deployed into environments that may be deeply heterogeneous, spanning on-premise and public cloud providers, and that may also be elastically scaled and dynamically scheduled through technologies like Kubernetes.

“The SPIFFE community believes that aligning on a common, flexible representation of workload identity, and prescribing best practices for identity issuance and management are critical for widespread adoption of cloud-native architectures,” said Sunil James, CEO of Scytale, a venture-backed company that serves as SPIFFE’s primary maintainer. “Modeled after similar production systems at Google, Netflix, Twitter, and more, SPIFFE delivers this platform capability for the rest of us. Joining the CNCF furthers this foundational technology, helps us build a diverse community, and delivers to the broader cloud-native community an increasingly ubiquitous identity framework that will be well-integrated with CNCF projects like Kubernetes and more.”

Accompanying SPIFFE is SPIRE (aka the “SPIFFE Runtime Environment”), which is an open-source SPIFFE implementation that enables organizations to provision, deploy, and manage SPIFFE identities throughout their heterogeneous production infrastructure. Coupled with CNCF projects like Envoy and gRPC, SPIRE forms a powerful solution for connecting, authenticating, and securing workloads in distributed environments.

TOC sponsors of the project include Brian Grant, Sam Lambert, and Ken Owens.

“SPIFFE provides one of the most important missing capabilities needed to enable cloud-native ecosystems,” said Brian Grant, a principal engineer at Google and member of the CNCF’s Technical Oversight Committee (TOC). “The internal Google system that inspired SPIFFE is ‘dial tone’ for Google’s software and operations engineers; it is ubiquitous and omnipresent. SPIFFE enables development and operations teams to easily and consistently authenticate and authorize microservices, and control (and audit) infrastructure access without needing to individually provision, manage, and rotate credentials per application and service.”

Sandbox replaces the Inception level. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

CNCF Launches Cross-Cloud CI Project & Adds ONAP Networking Project to Dashboard Overview


CNCF Demos Kubernetes Enabling ONAP Running On Any Public, Private, or Hybrid Cloud at Open Networking Summit This Week

To ensure CNCF projects work across all cloud providers, the CNCF CI Working Group has been working on the Cross-cloud CI project to integrate, test and deploy projects within the CNCF ecosystem. The group recently released CI Dashboard v1.3.0, which is licensed under the Apache License 2.0 and publishes results daily.

The Cross-Cloud CI team has been adding CNCF projects to the dashboard at the rate of about one a month. The dashboard displays the status of both the latest release and the latest development version (i.e., head). The newest release includes, for the first time, the Linux Foundation Open Network Automation Platform (ONAP) project. It can be seen at: https://cncf.ci.

Meet the Cross-Cloud CI Team

CNCF contracted with a team from Vulk Coop to design, build, maintain and deploy the cross-cloud project.

The Cross-cloud CI project consists of a cross-cloud testing system, status repository server and a dashboard. The cross-cloud testing system has 3 components (build, cross-cloud, cross-project) that continually validate the interoperability of each CNCF project for any commit on stable and head across all supported cloud providers. The cross-cloud testing system can reuse existing artifacts from a project’s preferred CI system or generate new build artifacts. The status repository server collects the test results and the dashboard displays them. To better understand the genesis of the project and work to date, view this Updated High-Level Overview README.

CNCF & ONAP at Open Networking Summit This Week in Los Angeles

Kubernetes is being used to enable ONAP to run on any public, private, or hybrid cloud. Kubernetes allows ONAP, the platform for real-time, policy-driven orchestration and automation of physical and virtual network functions, to deploy seamlessly into all of these environments.

The opening keynotes at the Open Networking Summit today in Los Angeles will demonstrate and test ONAP 1.1.1 on Kubernetes 1.9.4, deployed across multiple public clouds and bare metal. This will also be demonstrated in the CNCF booth at ONS.

Backed by many of the world’s largest global service providers and technology leaders, including Amdocs, AT&T, Bell, China Mobile, China Telecom, Cisco, Ericsson, Cloudify, Huawei, IBM, Intel, Jio, Nokia, Orange, Tech Mahindra, Verizon, VMware, Vodafone and ZTE, ONAP brings together global carriers and vendors to enable end users to automate, design, orchestrate and manage services and virtual functions. ONAP enables nearly 60 percent of the world’s mobile subscribers.

Companies like Comcast and AT&T are using Kubernetes, and at MWC last month Vodafone said it is seeing around a 40 percent improvement in resource usage from going with containers compared with virtual machines (VMs).

“The promise of containerization is the ability to deploy to any public, private, or hybrid cloud. CNCF continues to see ongoing migration from VMs to containers and our architecture enables that,” said Dan Kohn, CNCF executive director. “CNCF is attending ONS this week in Los Angeles to expand our focus beyond the enterprise market to the networking industry. Our CNCF demo at ONS will illustrate to carriers that Kubernetes and ONAP are key to the future of network virtualization.”

To learn more, be sure to check out “Intro to Cross-cloud CI” and “Deep Dive for Cross-cloud CI” at KubeCon + CloudNativeCon Europe, May 2-4 in Copenhagen.


Kubernetes 1.10: Stabilizing Storage, Security, and Networking

By | Blog

Editor’s note: today’s post is by the 1.10 Release Team

Originally posted on Kubernetes.io

We’re pleased to announce the delivery of Kubernetes 1.10, our first release of 2018!

Today’s release continues to advance the maturity, extensibility, and pluggability of Kubernetes. This newest version stabilizes features in three key areas: storage, security, and networking. Notable additions in this release include the introduction of external kubectl credential providers (alpha), the ability to switch the DNS service to CoreDNS at install time (beta), and the move of the Container Storage Interface (CSI) and persistent local volumes to beta.

Let’s dive into the key features of this release:

Storage – CSI and Local Storage move to beta

This is an impactful release for the Storage Special Interest Group (SIG), marking the culmination of their work on multiple features. The Kubernetes implementation of the Container Storage Interface (CSI) moves to beta in this release: installing new volume plugins is now as easy as deploying a pod. This in turn enables third-party storage providers to develop their solutions independently outside of the core Kubernetes codebase. This continues the thread of extensibility within the Kubernetes ecosystem.
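As an illustration of what the beta CSI support looks like, a PersistentVolume can reference a CSI driver directly in its manifest. This is a minimal sketch; the driver name and volume handle below are hypothetical placeholders, not a real vendor driver:

```yaml
# Sketch of a PersistentVolume using the beta CSI volume source.
# "example.csi.vendor.io" and the volumeHandle are illustrative only.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: example.csi.vendor.io      # name registered by the CSI driver
    volumeHandle: existing-volume-1234 # ID of the volume in the backend
```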

Durable (non-shared) local storage management progressed to beta in this release, making locally attached (non-network attached) storage available as a persistent volume source. This means higher performance and lower cost for distributed file systems and databases.
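To sketch what a local persistent volume looks like, the beta API ties a volume on a node's local disk to that node via node affinity, so the scheduler places consuming pods correctly. The disk path and node name here are assumptions for illustration:

```yaml
# Sketch of a beta local PersistentVolume; /mnt/disks/ssd1 and
# "worker-node-1" are hypothetical values for this example.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1              # locally attached disk on the node
  nodeAffinity:                        # pins the volume to its node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1
```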

This release also includes many updates to Persistent Volumes. Kubernetes can automatically prevent deletion of Persistent Volume Claims that are in use by a pod (beta) and prevent deletion of a Persistent Volume that is bound to a Persistent Volume Claim (beta). This helps ensure that storage API objects are deleted in the correct order.
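Under the hood, this protection is implemented with finalizers: the controller holds up deletion of an in-use object until it is safe. As a sketch (the claim name and size are hypothetical), a protected claim carries a finalizer roughly like this:

```yaml
# Sketch of a PersistentVolumeClaim under storage object protection.
# The finalizer is added by the controller and removed only once no
# pod is using the claim, at which point deletion proceeds.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```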

Security – External credential providers (alpha)

Kubernetes, which is already highly extensible, gains another extension point in 1.10 with external kubectl credential providers (alpha). Cloud providers, vendors, and other platform developers can now release binary plugins that handle authentication for specific cloud-provider IAM services, or that integrate with in-house authentication systems that aren’t supported in-tree, such as Active Directory. This complements the Cloud Controller Manager feature added in 1.9.
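In practice, a credential provider is wired up in the kubeconfig: a user entry points at an executable that kubectl runs to obtain a token. The plugin path, argument, and environment variable below are hypothetical placeholders for whatever a given provider ships:

```yaml
# Sketch of a kubeconfig user entry for the alpha exec credential
# provider; the command, args, and env values are illustrative only.
apiVersion: v1
kind: Config
users:
  - name: example-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        command: /usr/local/bin/example-credential-plugin
        args:
          - get-token
        env:
          - name: EXAMPLE_AUTH_REALM
            value: corp
```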

Networking – CoreDNS as a DNS provider (beta)

The ability to switch the DNS service to CoreDNS at install time is now in beta. CoreDNS has fewer moving parts: it is a single executable and a single process, and it supports additional use cases.
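For context, CoreDNS is driven by a single configuration file, the Corefile. A minimal cluster-DNS configuration along the lines of the defaults in this era might look like the following sketch (zone names and ports shown are the common defaults, not taken from any specific installer):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
```

Everything (service discovery, metrics, upstream forwarding, caching) is a plugin in that one process, which is what "fewer moving parts" means compared with the multi-container kube-dns deployment.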

Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the release notes.


Kubernetes 1.10 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials.

2 Day Features Blog Series

If you’re interested in exploring these features more in depth, check back next week for our 2 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

  • Day 1 – Container Storage Interface (CSI) for Kubernetes going Beta
  • Day 2 – Local Persistent Volumes for Kubernetes going Beta

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Jaice Singer DuMars, Kubernetes Ambassador for Microsoft. The 10 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem.

Project Velocity

The CNCF has continued refining an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. Thanks to increased automation, issue count at the end of the release was only slightly higher than it was at the beginning. This marks a major shift toward issue manageability. With 75,000+ comments, Kubernetes remains one of the most actively discussed projects on GitHub.

User Highlights

According to a recent CNCF survey, more than 49% of Asia-based respondents use Kubernetes in production, with another 49% evaluating it for use in production. Established, global organizations are using Kubernetes in production at massive scale. Recently published user stories from the community include:

  • Huawei, the largest telecommunications equipment manufacturer in the world, moved its internal IT department’s applications to run on Kubernetes. This decreased global deployment cycles from a week to minutes and improved the efficiency of application delivery tenfold.
  • Jinjiang Travel International, one of the top five online travel agency (OTA) and hotel companies, uses Kubernetes to speed up its software release velocity from hours to just minutes. Additionally, it leverages Kubernetes to increase the scalability and availability of its online workloads.
  • Haufe Group, the Germany-based media and software company, utilized Kubernetes to deliver a new release in half an hour instead of days. The company is also able to scale down to around half the capacity at night, saving 30 percent on hardware costs.
  • BlackRock, the world’s largest asset manager, was able to move quickly using Kubernetes and built an investor research web app from inception to delivery in under 100 days.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem Updates

  • The CNCF is expanding its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual’s ability to design, build, configure, and expose cloud native applications for Kubernetes. The CNCF is looking for beta testers for this new program. More information can be found here.
  • Kubernetes documentation now features user journeys: specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.  
  • CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.


The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Copenhagen from May 2-4, 2018 and will feature technical sessions, case studies, developer deep dives, salons and more! Check out the schedule of speakers and register today!


Join members of the Kubernetes 1.10 release team on April 10th at 10am PDT to learn about the major features in this release including Local Persistent Volumes and the Container Storage Interface (CSI). Register here.

Get Involved:

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

  • Post questions (or answer questions) on Stack Overflow
  • Join the community portal for advocates on K8sPort
  • Follow us on Twitter @Kubernetesio for the latest updates
  • Chat with the community on Slack
  • Share your Kubernetes story.