CNCF to Host Open Policy Agent (OPA)


Today, the Cloud Native Computing Foundation (CNCF) announced acceptance of the Open Policy Agent (OPA) into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

The Open Policy Agent (OPA) is an open source, general-purpose policy engine that enables unified, context-aware policy enforcement across the entire stack. OPA provides greater flexibility and expressiveness than hard-coded service logic or ad-hoc domain-specific languages and comes with powerful tooling to help anyone get started.
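As a rough illustration of what "context-aware policy" means here, consider the kind of authorization rule OPA evaluates. The sketch below is plain Python, not OPA's actual Rego language, and the request document shape is a hypothetical example:

```python
# Toy sketch (not OPA itself): a context-aware authorization decision of the
# kind OPA lets you express declaratively in Rego, written here in plain
# Python for illustration. The request fields are hypothetical.

def allow(request):
    """Return True if the request satisfies the policy."""
    # Anyone may read public endpoints.
    if request["method"] == "GET" and request["path"].startswith("/public/"):
        return True
    # Admins may do anything.
    if "admin" in request.get("roles", []):
        return True
    # Owners may modify their own resources.
    if request["method"] in ("PUT", "DELETE") and request.get("owner") == request.get("user"):
        return True
    return False

print(allow({"method": "GET", "path": "/public/docs", "roles": []}))   # True
print(allow({"method": "DELETE", "path": "/apps/1", "user": "alice",
             "owner": "bob", "roles": []}))                            # False
```

In OPA, logic like this lives in a policy file queried by the service at request time, so the rules can change without redeploying service code.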

“Authorization is a problem you can’t wish away. A good authorization system supports diverse resource types and allows flexible policies. OPA gives us that flexibility,” said Manish Mehta, Senior Security Software Engineer at Netflix.

“As cloud native technology matures and enterprise adoption increases, the need for policy-based control has become vital,” said Torin Sandall, Software Engineer at Styra and Technical Lead for OPA. “OPA provides a purpose-built language and runtime that can be used to author and enforce authorization policy. As such, we see OPA as a valid addition to CNCF’s project portfolio and look forward to working with the growing community to foster its adoption.”

TOC sponsors of the project include Brian Grant and Ken Owens.

“As cloud native technology matures and enterprise adoption increases, there is a need for policy-based control technologies like OPA,” said Ken Owens, Vice President of Digital Native Architecture at Mastercard and member of the CNCF’s Technical Oversight Committee (TOC). “OPA provides a solution to control who can do what across microservice deployments because legacy approaches to access control do not satisfy the requirements of modern environments. This complements CNCF’s mission to accelerate adoption of cloud native technology in enterprises.”

The Sandbox, a home for early stage projects, replaces the previous Inception maturity level. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

CNCF to Host the SPIFFE Project


Today, the Cloud Native Computing Foundation accepted SPIFFE into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

Also known as the Secure Production Identity Framework For Everyone, the SPIFFE project is an open-source identity framework designed expressly to support distributed systems deployed into environments that may be deeply heterogeneous, spanning on-premise and public cloud providers, and that may also be elastically scaled and dynamically scheduled through technologies like Kubernetes.

“The SPIFFE community believes that aligning on a common, flexible representation of workload identity, and prescribing best practices for identity issuance and management are critical for widespread adoption of cloud-native architectures,” said Sunil James, CEO of Scytale, a venture-backed company that serves as SPIFFE’s primary maintainer. “Modeled after similar production systems at Google, Netflix, Twitter, and more, SPIFFE delivers this platform capability for the rest of us. Joining the CNCF furthers this foundational technology, helps us build a diverse community, and delivers to the broader cloud-native community an increasingly ubiquitous identity framework that will be well-integrated with CNCF projects like Kubernetes and more.”

Accompanying SPIFFE is SPIRE (aka the “SPIFFE Runtime Environment”), which is an open-source SPIFFE implementation that enables organizations to provision, deploy, and manage SPIFFE identities throughout their heterogeneous production infrastructure. Coupled with CNCF projects like Envoy and gRPC, SPIRE forms a powerful solution for connecting, authenticating, and securing workloads in distributed environments.
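At the heart of the framework is the SPIFFE ID, a structured URI of the form spiffe://&lt;trust domain&gt;/&lt;workload path&gt;. As a minimal sketch, here is how such an ID might be split into its parts in Python (illustrative only; real implementations like SPIRE perform much stricter validation):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id):
    """Split a SPIFFE ID into (trust_domain, workload_path).

    SPIFFE IDs are URIs of the form spiffe://<trust domain>/<workload path>,
    e.g. spiffe://example.org/billing/payments.
    """
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError("not a valid SPIFFE ID: %r" % spiffe_id)
    return parts.netloc, parts.path

print(parse_spiffe_id("spiffe://example.org/billing/payments"))
# ('example.org', '/billing/payments')
```

In practice these IDs are carried in cryptographically verifiable documents (SVIDs) issued by SPIRE, rather than passed around as bare strings.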

TOC sponsors of the project include Brian Grant, Sam Lambert, and Ken Owens.

“SPIFFE provides one of the most important missing capabilities needed to enable cloud-native ecosystems,” said Brian Grant, a principal engineer at Google and member of the CNCF’s Technical Oversight Committee (TOC). “The internal Google system that inspired SPIFFE is ‘dial tone’ for Google’s software and operations engineers; it is ubiquitous and omnipresent. SPIFFE enables development and operations teams to easily and consistently authenticate and authorize microservices, and control (and audit) infrastructure access without needing to individually provision, manage, and rotate credentials per application and service.”

The Sandbox replaces the previous Inception maturity level. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

CNCF Launches Cross-Cloud CI Project & Adds ONAP Networking Project to Dashboard Overview


CNCF Demos Kubernetes Enabling ONAP Running On Any Public, Private, or Hybrid Cloud at Open Networking Summit This Week

To ensure CNCF projects work across all cloud providers, the CNCF CI Working Group has been working on the Cross-cloud CI project to integrate, test and deploy projects within the CNCF ecosystem. The group recently released CI Dashboard v1.3.0, which is licensed under the Apache License 2.0 and publishes results daily.

The Cross-Cloud CI team has been adding CNCF projects to the dashboard at the rate of about one a month. The dashboard displays the status of both the latest release and the latest development version (i.e., head). The newest release includes, for the first time, the Linux Foundation Open Network Automation Platform (ONAP) project. It can be seen at: https://cncf.ci.

Meet the Cross-Cloud CI Team

CNCF contracted with a team from Vulk Coop to design, build, maintain and deploy the cross-cloud project.

The Cross-cloud CI project consists of a cross-cloud testing system, status repository server and a dashboard. The cross-cloud testing system has 3 components (build, cross-cloud, cross-project) that continually validate the interoperability of each CNCF project for any commit on stable and head across all supported cloud providers. The cross-cloud testing system can reuse existing artifacts from a project’s preferred CI system or generate new build artifacts. The status repository server collects the test results and the dashboard displays them. To better understand the genesis of the project and work to date, view this Updated High-Level Overview README.

CNCF & ONAP at Open Networking Summit This Week in Los Angeles

Kubernetes is being used to enable ONAP to run on any public, private, or hybrid cloud. It allows ONAP, the platform for real-time, policy-driven orchestration and automation of physical and virtual network functions, to deploy seamlessly into all of these environments.

The opening keynotes at the Open Networking Summit today in Los Angeles will demonstrate and test ONAP 1.1.1 running on Kubernetes 1.9.4, deployed across multiple public clouds and bare metal. This will also be demonstrated in the CNCF booth at ONS.

Backed by many of the world’s largest global service providers and technology leaders, including Amdocs, AT&T, Bell, China Mobile, China Telecom, Cisco, Ericsson, Cloudify, Huawei, IBM, Intel, Jio, Nokia, Orange, Tech Mahindra, Verizon, VMware, Vodafone and ZTE, ONAP brings together global carriers and vendors to enable end users to automate, design, orchestrate and manage services and virtual functions. ONAP enables nearly 60 percent of the world’s mobile subscribers.

Companies like Comcast and AT&T are using Kubernetes, while at MWC last month Vodafone said it is seeing around a 40 percent improvement in resource usage from moving to containers compared with virtual machines (VMs).

“The promise of containerization is the ability to deploy to any public, private, or hybrid cloud. CNCF continues to see ongoing migration from VMs to containers and our architecture enables that,” said Dan Kohn, CNCF executive director. “CNCF is attending ONS this week in Los Angeles to expand our focus beyond the enterprise market to the networking industry. Our CNCF demo at ONS will illustrate to carriers that Kubernetes and ONAP are key to the future of network virtualization.”

To learn more, be sure to check out “Intro to Cross-cloud CI” and “Deep Dive for Cross-cloud CI” at KubeCon + CloudNativeCon Europe, May 2-4 in Copenhagen.

To get involved:

Kubernetes 1.10: Stabilizing Storage, Security, and Networking


Editor’s note: today’s post is by the 1.10 Release Team

Originally posted on Kubernetes.io

We’re pleased to announce the delivery of Kubernetes 1.10, our first release of 2018!

Today’s release continues to advance the maturity, extensibility, and pluggability of Kubernetes. This newest version stabilizes features in three key areas: storage, security, and networking. Notable additions in this release include the introduction of external kubectl credential providers (alpha), the ability to switch the DNS service to CoreDNS at install time (beta), and the move of the Container Storage Interface (CSI) and persistent local volumes to beta.

Let’s dive into the key features of this release:

Storage – CSI and Local Storage move to beta

This is an impactful release for the Storage Special Interest Group (SIG), marking the culmination of their work on multiple features. The Kubernetes implementation of the Container Storage Interface (CSI) moves to beta in this release: installing new volume plugins is now as easy as deploying a pod. This in turn enables third-party storage providers to develop their solutions independently outside of the core Kubernetes codebase. This continues the thread of extensibility within the Kubernetes ecosystem.

Durable (non-shared) local storage management progressed to beta in this release, making locally attached (non-network attached) storage available as a persistent volume source. This means higher performance and lower cost for distributed file systems and databases.

This release also includes many updates to Persistent Volumes. Kubernetes can automatically prevent deletion of Persistent Volume Claims that are in use by a pod (beta) and prevent deletion of a Persistent Volume that is bound to a Persistent Volume Claim (beta). This helps ensure that storage API objects are deleted in the correct order.

Security – External credential providers (alpha)

Kubernetes, which is already highly extensible, gains another extension point in 1.10 with external kubectl credential providers (alpha). Cloud providers, vendors, and other platform developers can now release binary plugins to handle authentication for specific cloud-provider IAM services, or that integrate with in-house authentication systems that aren’t supported in-tree, such as Active Directory. This complements the Cloud Controller Manager feature added in 1.9.  
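As a sketch of how such a plugin is wired up, a kubeconfig user entry can point the client at an external credential binary via the alpha exec mechanism. The binary name, arguments, and endpoint below are hypothetical examples:

```yaml
# kubeconfig fragment; "example-credential-helper" is a hypothetical plugin binary.
users:
- name: example-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: example-credential-helper
      args: ["get-token"]
      env:
      - name: IDP_URL
        value: https://idp.example.com
```

kubectl invokes the helper before making requests and uses the credentials the helper prints to stdout, so authentication logic never has to live in-tree.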

Networking – CoreDNS as a DNS provider (beta)

The ability to switch the DNS service  to CoreDNS at install time is now in beta. CoreDNS has fewer moving parts: it’s a single executable and a single process, and supports additional use cases.

Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the release notes.


Kubernetes 1.10 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials.

2 Day Features Blog Series

If you’re interested in exploring these features more in depth, check back next week for our 2 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

  • Day 1 – Container Storage Interface (CSI) for Kubernetes going Beta
  • Day 2 – Local Persistent Volumes for Kubernetes going Beta

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Jaice Singer DuMars, Kubernetes Ambassador for Microsoft. The 10 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem.

Project Velocity

The CNCF has continued refining an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. Thanks to increased automation, issue count at the end of the release was only slightly higher than it was at the beginning. This marks a major shift toward issue manageability. With 75,000+ comments, Kubernetes remains one of the most actively discussed projects on GitHub.

User Highlights

According to a recent CNCF survey, more than 49% of Asia-based respondents use Kubernetes in production, with another 49% evaluating it for use in production. Established, global organizations are using Kubernetes in production at massive scale. Recently published user stories from the community include:

  • Huawei, the largest telecommunications equipment manufacturer in the world, moved its internal IT department’s applications to run on Kubernetes. This resulted in global deployment cycles decreasing from a week to minutes, and the efficiency of application delivery improving tenfold.
  • Jinjiang Travel International, one of the top 5 largest OTA and hotel companies, uses Kubernetes to speed up their software release velocity from hours to just minutes. Additionally, they leverage Kubernetes to increase the scalability and availability of their online workloads.
  • Haufe Group, the Germany-based media and software company, utilized Kubernetes to deliver a new release in half an hour instead of days. The company is also able to scale down to around half the capacity at night, saving 30 percent on hardware costs.
  • BlackRock, the world’s largest asset manager, was able to move quickly using Kubernetes and built an investor research web app from inception to delivery in under 100 days.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem Updates

  • The CNCF is expanding its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual’s ability to design, build, configure, and expose cloud native applications for Kubernetes. The CNCF is looking for beta testers for this new program. More information can be found here.
  • Kubernetes documentation now features user journeys: specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.  
  • CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.


The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Copenhagen from May 2-4, 2018 and will feature technical sessions, case studies, developer deep dives, salons and more! Check out the schedule of speakers and register today!


Join members of the Kubernetes 1.10 release team on April 10th at 10am PDT to learn about the major features in this release including Local Persistent Volumes and the Container Storage Interface (CSI). Register here.

Get Involved:

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

  • Post questions (or answer questions) on Stack Overflow
  • Join the community portal for advocates on K8sPort
  • Follow us on Twitter @Kubernetesio for latest updates
  • Chat with the community on Slack
  • Share your Kubernetes story.

CNCF Survey: China is Going Native with Cloud


By Swapnil Bhartiya, Founder & Writer at TFiR: The Fourth Industrial Revolution

Swapnil is a journalist and writer who has been covering Linux & Open Source for 10 years. He is also a science fiction writer whose stories have been broadcast on Indian radio and published in leading Indian magazines.

To better gauge how quickly Asian companies are adopting open source and cloud native technologies like Kubernetes and Prometheus, CNCF recently conducted its cloud native survey in Chinese.  The bi-annual survey takes a pulse of the community to understand the adoption of cloud native technologies. More than 700 people responded to the December 2017 surveys, with 500 responding to the English version and 187 responding to the Chinese version.

For the first time, we have a very comprehensive sample and data to understand the adoption of cloud native technologies in China. Since the survey was conducted in Mandarin, it reflects the Chinese market more than the overall Asian market, excluding major economies like Japan. We expect that future surveys will be conducted in more languages to get an ever clearer picture.

KubeCon + CloudNativeCon Europe is around the corner from May 2-4 in Copenhagen, and the first KubeCon + CloudNativeCon will be held in China in November. It will be interesting to see how the representation and responses change as CNCF reaches new developers, vendors, and user attendees in other parts of the world. I can’t wait for the next KubeCon to learn more about the cloud native journey of Asian players. Read on to learn more about new cloud native developments in China.

P.S. This blog is a follow up to “Cloud Native Technologies Are Scaling Production Applications,” which analyzed trends and data from our global survey conducted in English.  

China, an Open Source Cloud Native Powerhouse

The survey data collected validates the fact that China is embracing cloud native and open source technologies at a phenomenal rate.

We already know the famous BAT (Baidu, Alibaba and Tencent) organizations are using open source technologies to build services that serve more than a billion users. For example, I recently interviewed Bowyer Liu, the chief architect at Tencent, which built its own public cloud, TStack, using OpenStack. TStack runs more than 12,000 virtual machines (VMs) across 17 clusters spread over 7 data centers in 4 regions, and it manages more than 300 services, including QQ and WeChat. TStack also uses open source projects like CoreOS, Docker, Kubernetes, KVM, HAProxy, Keepalived, Clear Containers, RabbitMQ, MariaDB, Nginx, Ansible, Jenkins, Git, ELK, Zabbix, Grafana, InfluxDB, Tempest and Rally.

Of the 18 CNCF platinum members, 4 are located in Asia; of its 8 gold members, 3 are based in Asia. The community runs 18 CNCF Meetups across Asia, with 8 in China alone. Furthermore, in the past quarter, 3 of the top 10 most active companies — Huawei, Fujitsu and Treasure Data — contributing across all CNCF projects are based in Asia.

About the Survey Methodology & Respondents

The respondents represented a variety of company sizes from small startups to large enterprises:

  • Over 5000 employees: 22%
  • 1000-4999 employees: 16%
  • 500-999: 14%
  • 100-499: 27%
  • 50-99: 12%
  • 10-49: 8%

Out of the total respondents, 26% represented the tech industry; 13% represented container/cloud solutions vendors; 12% came from the financial services industry; 11% from the consumer industry; 6% from government; and 6% from manufacturing. In comparison, the North American survey had 44% representation from the tech industry, the second largest representation came from container/cloud vendors (14%), and the financial services industry was third largest with 7%. China is embracing cloud native technologies across the board, and the Chinese financial services industry seems to be more open to cloud native technologies than its Western counterparts.

What Cloud Are They Running?

While public cloud continues to dominate the U.S. market, the Chinese market is more diverse: 24% of respondents were using private cloud, only 16% were using public cloud, and 60% were running on-premise. Compare that to North America, where more than 81% of respondents were using public cloud, 61% were running on-premise, and 44% were using some private cloud.

In China, Alibaba Cloud remains the leader, with more than 55% of respondents saying they use it; 30% were using AWS, 28% were using OpenStack, 12% were using Microsoft Azure, and around 6% were using Google Cloud Platform.

In North America, AWS remains the leader, with 70% of respondents deploying their containers in an AWS environment. Google Cloud Platform was in second place with 40%, and Microsoft Azure took third with 23%. Compared to China, only 22% of respondents from North America acknowledged using OpenStack.

Growth of Containers

Out of the total respondents, only 6% were using more than 5,000 containers. The largest group, around 32%, were using fewer than 50 containers, and around 30% were using 50-249 containers. 36% of respondents were using containers in the development stage, whereas 32% were using them in production; 57% of respondents said they are planning to use containers in production in the future. The largest share of respondents (22%) are using between 6-20 machines in their fleet, with 21-50 machines close behind at 17%; only 6% have more than 5,000 machines, a figure that includes VMs and bare metal.

What about Kubernetes?

No surprises: Kubernetes remains the No. 1 platform for managing containers. 35% of Asian respondents said they were using Kubernetes to manage their containers. Azure Container Service was used by 19%, and Docker Swarm was reported to be used by 16% of respondents. ECS came in at the 4th spot with 13%, while 11% reported using another Linux Foundation project, Cloud Foundry. 6% said they were using OpenShift, and another 6% reported using CoreOS Tectonic. This data shows a much more diverse cloud native ecosystem in China as compared to the United States.

Where Are They Running Kubernetes?

The most interesting finding from the survey was that more than 49% of respondents were using Kubernetes in production, with another 49% evaluating it for use. Some of the most well-known examples include Jinjiang Travel International, one of the top 5 largest OTA and hotel companies that sells hotels, travel packages, and car rentals. They use Kubernetes containers to speed up their software release velocity from hours to just minutes, and they leverage Kubernetes to increase the scalability and availability of their online workloads. China Mobile uses containers to replace VMs to run various applications on their platform in a lightweight fashion, leveraging Kubernetes to increase resource utilization. State Power Grid, the state-owned power supply company in China, uses containers and Kubernetes to provide failure resilience and fast recovery. JD.com, one of China’s largest companies and the first Chinese Internet company to make the Global Fortune 500 list, chronicled their shift to Kubernetes from OpenStack in this blog from last year.

As expected, 52% of respondents were using Kubernetes on the Alibaba public cloud, and 26% were using AWS. China is a bigger consumer of OpenStack than the North American market (16%): around 26% of Chinese respondents were using it. A majority of respondents were running 10 or fewer clusters in production: 14% were running 1 cluster, 40% were running 2-5 clusters, and 26% were running 6-10 clusters. Only 6% of the respondents running Kubernetes have more than 50 clusters in production.

One Ring to Bind Them All…

CNCF is at the bedrock of this cloud native movement and the survey shows adoption of CNCF projects is growing quickly in China. While Kubernetes remains the crown jewel, other CNCF projects are getting into production. 20% of respondents were running OpenTracing in production; 16% were using Prometheus; 13% were using gRPC; 10% were using CoreDNS; and 7% were using Fluentd. China is still warming up to newer projects like Istio where only 3% of respondents were using it in production.

Talking about new CNCF projects, we can’t ignore ‘serverless’ or ‘function as a service.’ The CNCF Serverless Working Group recently came out with a whitepaper to define serverless computing. The survey found that more than 25% of respondents are already using serverless technology and around 23% planned to use it in the next 12-18 months.

China leans heavily toward open source when it comes to serverless technologies. Apache OpenWhisk is the dominant serverless platform in China with more than 30% of respondents using it as their preferred platform. AWS Lambda is in the second spot with 24% of respondents using it. Azure was mentioned by 14% and Google Cloud Functions was mentioned by 9% of respondents.

We will hear more about serverless technologies in 2018, especially at the upcoming KubeCon + CloudNativeCon, May 2-4 in Copenhagen.

Challenges Ahead

As impressive as the findings of the survey are, we are talking about some fairly young technologies. Respondents cited many new and old challenges. Complexity remains the No. 1 concern, with more than 44% of respondents citing it as their biggest challenge, followed by reliability (43%), difficulty in choosing an orchestration solution (40%), and monitoring (38%).

In contrast, for the North American respondents security remains the No. 1 concern, with 43% of respondents citing it as their biggest challenge followed by storage (41%); networking (38%); monitoring (38%); complexity (35%) and logging (32%).

This is a clear indication that the Chinese market is younger and still going through basic challenges that their Western counterparts have already overcome. To help Chinese companies move forward, documentation plays a very critical role in the proliferation of any technology, so it’s promising to see massive work going on to translate Kubernetes documentation into Mandarin.

The good news is that finding a vendor is not one of the biggest challenges for either market. With the release of the Certified Kubernetes Conformance Program, CNCF has instilled more confidence in users to pick and choose a vendor without fear of being locked in.

Getting Ready for Cloud Native World?

There is no playbook to help you embark on your journey to the cloud native world. However, there are some best practices one can follow. A little under a year ago, Dr. Ying Xiong, Chief Architect of Cloud Computing at Huawei, talked about the company’s move toward cloud native architecture at KubeCon + CloudNativeCon Europe. Dr. Xiong offered some tips: start with the right set of applications for the cloud native journey. Some apps are too difficult to redesign with a microservice architecture; don’t start with those. Choose the easy ones; as you succeed, you gain confidence and a model to replicate across your organization. He also advises using one platform to manage both container and non-container applications.

Be sure to join us at an upcoming CNCF Meetup.

For even deeper exposure to the cloud native community, ecosystem and user successes, be sure to attend our first KubeCon + CloudNativeCon China, Nov. 14-15 in Shanghai. 

How to Benefit from the Container Approach


By Jason McGee, Fellow, IBM

 Jason McGee, IBM Fellow, is VP and CTO of the Container and Microservice Tribe. Jason leads the technical strategy and architecture across all of IBM Cloud, with specific focus on core foundational cloud services, including containers, microservices, continuous delivery and operational visibility services. The below blog was originally posted on KNect365.

Nearly all cloud native applications that are being built today use containers to some degree—and they should. However, one thing that so many businesses don’t understand is that nearly all applications that weren’t necessarily built in the cloud could also benefit from a container approach.

These applications go by many names, but we’ll call them traditional apps—and I’m here to tell you they deserve to be modernized. While many organizations are questioning whether they should use containers, the ones that are thriving are asking themselves how containers can supercharge their workloads.

Here’s why: Business executives, IT executives and developers alike all see a different benefit when it comes to containerization. A recent IBM online survey of more than 200 respondents, with the job roles mentioned above, revealed the following thoughts on the benefits that containers present:

  • Business executives see the potential efficiencies in the DevOps pipeline (77%) as the single most valuable potential benefit in using containers
  • IT executives see improved software quality (61%) as the highest business value of containers
  • Developers focus most on innovation (66%) and see the potential to quickly respond to changes in the market (64%)

No matter where you are in the business, you’re going to realize benefits—lots of them—from containers.

We recently worked with a large financial client that had a handful of WebSphere and Java EE apps. You can imagine the complexities and challenges that come from running so many applications in a shared application server. In short, if the client wanted to update one app, they had to update them all. Conversely, if they needed to run a system update, they had to coordinate with every team that had an application on the server. Consider that there were more than a dozen teams running apps and you start to see how this is no way to work.

Enter containers.

The client could separate each application to run in its own container. Operations loved it because the applications became infinitely easier to monitor and manage. Developers loved it because they could push their ideas to market faster and iterate constantly.

There’s simply no other move for a development team to make that can have such a dramatic impact on the application lifecycle.

Still skeptical? We asked developers what they thought in our survey. Here’s what they found to be the biggest benefits after turning to containers:

  • 66 percent: Greater levels of innovation
  • 64 percent: Faster response to changes in our market
  • 61 percent: Improved employee productivity

Innovation, response to changes in the market and improved employee productivity. These are the hallmarks of a successful development team.

Jason McGee will be moderating a panel at KubeCon + CloudNativeCon Europe on microservices and containers. Make sure to catch the panel and other great topics May 2-4 in Copenhagen!

This Week in Kubernetes: March 21st


Each week, the Kubernetes community shares an enormous amount of interesting and informative content including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking machine learning, scalability, service mesh, and contributing to Kubernetes.

Running Apache Spark Jobs on AKS, Microsoft

Apache Spark, a fast engine for large-scale data processing, now supports native integration with Kubernetes clusters as a scheduler for Spark jobs. In this article, Lena Hall and Neil Peterson of Microsoft walk you through how to prepare and run Apache Spark jobs on an Azure Container Service (AKS) cluster. If you want to learn more about using Spark for large scale data processing on Kubernetes, check out this treehouse discussion video.

Introducing Agones: Open-source, Multiplayer, Dedicated Game-server Hosting Built on Kubernetes, Google

In the world of distributed systems, hosting and scaling dedicated game servers for online, multiplayer games presents some unique challenges. Because Kubernetes is an open-source, common standard for building complex workloads and distributed systems, it makes sense to expand this to scale game servers. In this article, Mark Mandel of Google introduces Agones, an open-source, dedicated game server hosting and scaling project built on top of Kubernetes, with the flexibility you need to tailor it to the needs of multiplayer games.

8 Ways to Bolster Kubernetes Security, TechBeacon

Kubernetes can affect many runtime security functions, including authentication, authorization, logging, and resource isolation. Since it also affects the container runtime environment, it’s a crucial part of maintaining a secure container infrastructure. In this article, John P. Mello Jr. of TechBeacon explains 8 ways to help keep Kubernetes secure.

Kubernetes from the Ground Up: Choosing a Configuration Method, OzNetNerd

Kubernetes’ configuration is simply a bunch of Kubernetes objects. In this article, Will Robinson of Contino takes you through a quick look at what these objects are, and what they’re used for. You’ll walk through imperative commands, imperative objects, and declarative objects including what imperative vs. declarative means and what is right for your application.
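The three styles the article compares can be sketched with a generic example (the manifest and commands below are illustrative, not taken from the article):

```yaml
# deployment.yaml -- a minimal, generic Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25

# Imperative command: no manifest at all, you tell the cluster what to do:
#   kubectl create deployment nginx --image=nginx:1.25
# Imperative object: you supply the manifest but choose the operation:
#   kubectl create -f deployment.yaml
# Declarative object: you declare desired state and let Kubernetes reconcile:
#   kubectl apply -f deployment.yaml
```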

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

Trace Your Microservices Application with Zipkin and OpenTracing

By | Blog

By Gianluca Arbezzano, Site Reliability Engineer at InfluxDB, CNCF Ambassador 

Walter Dal Mut is a certified Amazon Web Service consultant. He works at Corley SRL, an Italian company that helps other small and big companies move to the cloud.

During the first CNCF Italy Meetup, Walter shared his experience instrumenting a PHP microservices environment with Zipkin and OpenTracing.

Everybody logs in their applications, and logging is genuinely useful. The problem is that logs carry a lot of detailed information and move very fast across many services; they are almost impossible to read in real time.

This is why we have monitoring. By monitoring I mean events, metrics, and time series. One aspect of monitoring that I love is aggregation.

Aggregation makes it easy to put things together, and it lets us see many things at once. For example, I can gauge the severity of an issue by looking at the number of occurrences, or compare the number of requests with the load time in a single graph. This is very useful, and it’s something I can’t see by tailing logs.
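That kind of aggregation can be sketched in a few lines of Python; the services, error names, and counts below are invented for illustration:

```python
from collections import Counter

# A flat stream of log events, far too fast to tail by hand.
events = [
    {"service": "checkout", "error": "timeout"},
    {"service": "checkout", "error": "timeout"},
    {"service": "payments", "error": "db_conn"},
    {"service": "checkout", "error": "timeout"},
]

# Aggregating by occurrence immediately surfaces the most critical issue.
occurrences = Counter((e["service"], e["error"]) for e in events)
most_critical, count = occurrences.most_common(1)[0]
print(most_critical, count)  # ('checkout', 'timeout') 3
```

This is the essence of what a metrics backend does for you continuously: it collapses an unreadable stream into a ranking you can act on.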

With metrics we can measure change. In my opinion this is one of the most important capabilities of monitoring, because a deployment is usually the point in time when something changes. If we can measure the magnitude of that change, we can take action based on how good or bad it is: we see immediately whether what we changed matters. You will discover that many times a change turns out not to be useful at all, or simply doesn’t work as expected. Monitoring is the only way to understand all of this.

In the image above, I instrumented my application to send events, which I collect in InfluxDB. In the bottom-right graph you can see green and red lines. Red lines tell us that something is wrong, and now that we know the distribution, we can measure whether a new version of our software improves the current state.

One tip to remember when building your dashboard is that a deploy is an important event. Depending on the monitoring software you use, you can mark this special event with a vertical line; Grafana calls this feature an annotation. The annotation is drawn across every graph in the same dashboard, and this line is the key to understanding how a new release performs.

One piece of information still missing at this point is how a request propagates through our system.

In a microservices environment, the log generated by a single service is not really what matters; we want to trace a request as it jumps across services. I want to connect the dots across my services, and tracing is designed to read time series in exactly these terms.

In a traditional web application with a database, I want to understand the queries made to load a specific page: how long they take, so I can keep them optimized, and how many there are, so I can keep the count low.

Tracing is all about spans, inter-process propagation, and active span management.

A span is a period of time with a start and an end. Beyond those two points, we mark when the client sends the request, when the server receives it, when the server sends the response, and when the client receives it.

These four signals are enough to derive the network latency between services.

Beyond that, you can mark custom events inside a span and calculate how long it takes your application to finish a specific task: generating a PDF, decompressing a request, processing a message from a queue, and so on.
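As a sketch of those four signals, here is a toy span model in Python; the timestamps are invented, and real tracers record these as annotations rather than fields:

```python
from dataclasses import dataclass

@dataclass
class Span:
    """A span: a bounded period of time carrying the four RPC timestamps."""
    client_send: float     # cs: client sends the request
    server_receive: float  # sr: server receives the request
    server_send: float     # ss: server sends the response
    client_receive: float  # cr: client receives the response

    def total_time(self) -> float:
        return self.client_receive - self.client_send

    def server_time(self) -> float:
        return self.server_send - self.server_receive

    def network_latency(self) -> float:
        # Time not spent inside the server is time spent on the wire.
        return self.total_time() - self.server_time()

span = Span(client_send=0.0, server_receive=0.020,
            server_send=0.120, client_receive=0.145)
print(round(span.network_latency(), 3))  # 0.045
```

Here the request took 145 ms end to end, the server worked for 100 ms, so 45 ms went to the network, exactly the kind of breakdown a Zipkin trace shows you visually.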

Inter-process propagation describes how we propagate tracing context across networks or processes. In HTTP we use headers to pass trace information between services. The TraceId is the unique identifier for a request: every span that is part of that request is grouped under the same id. Every span also has its own id and a parent span id, which makes it possible to assemble all the spans into the tree for that request.
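A toy Python sketch of this propagation follows; the header names are modeled on Zipkin’s B3 headers, while the helper functions are hypothetical, not a real client API:

```python
import uuid

def new_id() -> str:
    """Generate a short random id for traces and spans."""
    return uuid.uuid4().hex[:16]

def inject(trace_id: str, span_id: str, headers: dict) -> dict:
    """Write trace context into outgoing HTTP headers (B3-style names)."""
    headers["X-B3-TraceId"] = trace_id
    headers["X-B3-SpanId"] = span_id
    return headers

def extract_and_continue(headers: dict) -> dict:
    """A downstream service continues the trace: same trace id,
    a fresh span id, and the caller's span id as the parent."""
    return {
        "trace_id": headers["X-B3-TraceId"],
        "span_id": new_id(),
        "parent_span_id": headers["X-B3-SpanId"],
    }

# Service A starts the trace and calls service B over HTTP.
trace_id, root_span = new_id(), new_id()
outgoing = inject(trace_id, root_span, {})

# Service B extracts the context from the incoming request.
child = extract_and_continue(outgoing)
assert child["trace_id"] == trace_id         # same request, same trace
assert child["parent_span_id"] == root_span  # parent ids build the span tree
```

Grouping every span by trace id and linking them by parent span id is what lets the tracer reconstruct the tree for one request across any number of services.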

Several tracers are available; the most famous open source ones are Jaeger (a project under the CNCF) and Zipkin, started by Twitter.

During the talk Walter used Zipkin, but both are compatible with OpenTracing: if you use the right libraries, you can switch between tracers transparently.

This is how a trace is represented in Zipkin. You can see the list of all your services on the left and every span generated across your distributed system. The length of each span shows how much time it took, and from this visualization alone we already get a good set of information and optimization points:

  • How many services are called to serve the request. Are there too many?
  • How many times the same service is called.
  • How much time each service takes. If authentication takes 80% of your request time, every time, you should fix it.
  • And many more.

Some spans have one or more white dots: those dots are logs. They are useful for understanding when a specific event happened in time. You can use this feature, for instance, to record when you send an email or when a customer clicks a specific button, tracking their UI experience.

The video shows a detailed demo of what Zipkin provides in terms of filtering and searching, including UI tricks to quickly identify traces that ended with an error. Besides showing how Zipkin works, Walter shared his experience instrumenting PHP applications.

The focus of his talk is on tricks and best practices that are well worth hearing in order to avoid some common mistakes.

They are a bit hard to transcribe, so I will leave the video to you.

I will leave you with the quote he shared with us at the end of the presentation (spoiler alert):

“Measure is the key to science (Richard Feynman).”

Slides are available at http://example.walterdalmut.com/opentracing/


Cloud Native Computing Foundation Expands Certification Program to Kubernetes Application Developers – Beta Testers Needed

By | Blog

The Cloud Native Computing Foundation (CNCF), which sustains and integrates open source technologies like Kubernetes and Prometheus, announced today that it is expanding its certification program to include application developers who deploy to Kubernetes. After launching the highly successful Certified Kubernetes Administrator (CKA) exam and corresponding Kubernetes Certified Service Provider (KCSP) program back in September with more than 1,200 CKAs and 28 KCSPs to date, the natural follow-on was an exam focused on certifying application developers to cover the full spectrum of Kubernetes technical expertise.

We are looking for skilled Kubernetes professionals at vendor companies to beta test the exam curriculum.

Are you interested in getting an early peek as a beta tester? If so, please read on!

  • The online exam consists of a set of performance-based items (problems) to be solved in a command line.
  • The exam is expected to take approximately 2 hours to complete.
  • Beta testing is targeted for late April — early May.
  • After taking the exam, each beta tester will be asked to provide exam experience feedback in a questionnaire.

If you would like to be considered as a beta tester for the CKAD exam, please sign up via this short survey.

You will be contacted with additional information when the exam is ready and the Beta team is selected. If you complete the beta exam with a passing grade, you will become one of the first Certified Kubernetes Application Developers.

With the majority of container-related job listings asking for proficiency in Kubernetes as an orchestration platform, the new program will help expand the pool of Kubernetes experts in the market, thereby enabling continued growth across the broad set of organizations using the technology. Certification is a key step in that process, allowing certified application developers to quickly establish their credibility and value in the job market, and also allowing companies to more quickly hire high-quality teams to support their growth.

“The CKAD program was developed as an extension of CNCF’s Kubernetes training offerings, which already include certification for Kubernetes administrators. By introducing this new exam for application developers, anyone working with Kubernetes can now certify their competency in the platform,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation.

To create the Certified Kubernetes Application Developer (CKAD) exam, 19 Kubernetes subject matter experts (SMEs) participated over four days in job analysis and item writing sessions co-located with KubeCon + CloudNativeCon in Austin, Texas in December. During these work sessions, the following exam scope statement was agreed upon:

CKAD Exam Scope Statement

The Certified Kubernetes Application Developer can design, build, configure, and expose cloud native applications for Kubernetes. The Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications & tools in Kubernetes.

The exam assumes knowledge of, but does not test for, container runtimes and microservice architecture.

The successful candidate will be comfortable using:

  • An OCI-Compliant Container Runtime, such as Docker or rkt.
  • Cloud native application concepts and architectures.
  • A programming language, such as Python, Node.js, Go, or Java.

Also during the work sessions, the team defined seven content domains with a % weight in the exam:

  • 13%  Core Concepts
  • 18%  Configuration
  • 10%  Multi-Container Pods
  • 18%  Observability
  • 20%  Pod Design
  • 13%  Services & Networking
  • 8%   State Persistence

The SME group wrote exam problems, assigned complexity and difficulty ratings, and determined a provisional passing score of 66% during the working sessions.

CKAD exam launch is targeted for early May.


By | Blog

Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept NATS as an incubation-level hosted project, alongside Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Rook and Vitess.

Built from the ground up to be cloud native, NATS is a mature, seven-year-old open source messaging technology that implements the publish/subscribe, request/reply and distributed queue patterns to help create a performant and secure method of InterProcess Communication (IPC). Simplicity, performance, scalability and security are the core tenets of NATS.
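Two of those patterns, publish/subscribe fan-out and distributed queue groups, can be illustrated with a toy in-process broker. This is a sketch of the semantics only, not the real NATS client API:

```python
from collections import defaultdict

class ToyBroker:
    """A minimal in-process sketch of NATS-style subjects."""
    def __init__(self):
        self.subs = defaultdict(list)                         # subject -> callbacks (fan-out)
        self.groups = defaultdict(lambda: defaultdict(list))  # subject -> group -> members
        self._rr = defaultdict(int)                           # (subject, group) -> round-robin index

    def subscribe(self, subject, cb):
        self.subs[subject].append(cb)

    def queue_subscribe(self, subject, group, cb):
        self.groups[subject][group].append(cb)

    def publish(self, subject, msg):
        for cb in self.subs[subject]:            # pub/sub: every subscriber gets the message
            cb(msg)
        for group, members in self.groups[subject].items():
            i = self._rr[(subject, group)] % len(members)
            members[i](msg)                      # queue group: exactly one member handles it
            self._rr[(subject, group)] += 1

broker = ToyBroker()
seen, worked = [], []
broker.subscribe("orders", seen.append)
broker.subscribe("orders", seen.append)
broker.queue_subscribe("orders", "workers", lambda m: worked.append(("w1", m)))
broker.queue_subscribe("orders", "workers", lambda m: worked.append(("w2", m)))

broker.publish("orders", "o-1")
broker.publish("orders", "o-2")
print(len(seen))  # 4: both plain subscribers saw both messages
print(worked)     # [('w1', 'o-1'), ('w2', 'o-2')]: load-balanced across the group
```

In real NATS the subscribers live in separate processes connected to a clustered server, and the queue-group semantics (each message handled by exactly one member) are what make it easy to scale workers horizontally.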

The project consists of a family of open source components that are tightly integrated but can be deployed independently. NATS is based on a client-server architecture with servers that can be clustered to operate as a single entity – clients connect to these clusters to exchange data encapsulated in messages.

“While most messaging systems provide a mechanism to persist messages and ensure message delivery, NATS does this through log based streaming – which we’ve found to be an easier way to store and replay messages,” said Derek Collison, CEO of Synadia Communications and creator of NATS. “NATS is a simple yet powerful messaging system written to support modern cloud native architectures. Because complexity does not scale, NATS is designed to be easy to use while acting as a central nervous system for building distributed applications.”

NATS Streaming subscribers can retrieve messages published when they were offline, or replay a series of messages. Streaming inherently provides a buffer in the distributed application ecosystem, increasing stability. This allows applications to offload local message caching and buffering logic into NATS and ensures a message is never lost.
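The idea behind log-based streaming can be sketched with an append-only log and replay from a sequence number. This is a toy model of the concept, not NATS Streaming’s actual API:

```python
class ToyStreamLog:
    """Append-only message log: subscribers that were offline
    can catch up by replaying from any earlier sequence number."""
    def __init__(self):
        self.log = []  # messages persisted in publish order

    def publish(self, msg):
        self.log.append(msg)
        return len(self.log)  # sequence number assigned to the stored message

    def replay(self, start_seq=1):
        # Deliver everything from start_seq onward, in order.
        return self.log[start_seq - 1:]

stream = ToyStreamLog()
for m in ["m1", "m2", "m3"]:
    stream.publish(m)

# A subscriber that was offline for the first two messages catches up:
print(stream.replay(start_seq=1))  # ['m1', 'm2', 'm3']
print(stream.replay(start_seq=3))  # ['m3']
```

Because the log, not the application, is responsible for retention and ordering, applications can drop their local caching and buffering logic, which is the stability benefit described above.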

NATS is being deployed globally by thousands of companies, spanning innovative use cases including microservice architectures, cloud native applications and IoT messaging. It is being adopted by organizations such as Apcera, Apporeto, Baidu, Bridgevine, Capital One, Clarifai, Cloud Foundry, Comcast, Ericsson, Faber, Fission, General Electric (GE), Greta, HTC, Logimethods, Netlify, Pex, Pivotal, Platform9, Rapidloop, Samsung, Sendify, Sensay, StorageOS, VMware, Acadiant, Weaveworks and Workiva.

NATS was created by Derek Collison, founder and CEO at Synadia, in response to the market need for a simple and high performance messaging solution. The Synadia team maintains the NATS Server (written in Go), NATS Streaming and clients written in Go, Python, Ruby, Node.js, Elixir, Java, NGINX, C and C#. The community has contributed a growing list of libraries, including Arduino, Rust, Lua, PHP, Perl and more. NATS is also available as a hosted solution, NATS Cloud. (Update: NATS Cloud has been sunset in favor of NGS)

“Today’s cloud native landscape emphasizes scalability and total flexibility, emphasizing the need to transfer data between processes quickly, reliably and securely,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation and VP of Developer Relations at The Linux Foundation. “Messaging was born to address this. In its contribution to CNCF, NATS fills a gap for a mature, cloud native messaging solution – the broad client coverage and simplicity of the protocol will make support for and integration with future cloud native systems straightforward.”

NATS has very strong existing synergy and inertia with other CNCF projects – used heavily in conjunction with projects like Kubernetes, Prometheus, gRPC, Fluentd, Linkerd and containerd.

Main features:

  • Pure pub-sub
  • Clustered mode server
  • Auto-pruning of subscribers
  • Text-based protocol
  • Multiple qualities of service (QoS)
  • Durable subscriptions
  • Event streaming service
  • Last/Initial value caching

Notable Milestones:

  • 44 contributors
  • 3,820 GitHub stars
  • 20 releases
  • 1,356 commits
  • 415 forks

“Messaging and integration middleware paradigms have changed radically in the cloud native era and NATS represents a view into the future of application-to-application and service-to-service IPC,” said TOC representative and project sponsor, Alexis Richardson. “The performant nature of NATS makes it an ideal base for building modern, reliable, scalable cloud native distributed systems, making the project a great fit for CNCF.”

As a CNCF hosted project, NATS is part of a neutral cloud native foundation aligned with its technical interests, as well as the larger Linux Foundation, which provides the technology with project governance, marketing support and community outreach.

For more on NATS, please visit http://nats.io/documentation/, see this video by founder Derek Collison and review this slideshow on NATS messaging + the problems it can solve in “Simple Solutions for Complex Problems.” An overview of NATS architecture can be found in “Understanding NATS Architecture.” For more on how it integrates with other projects, read “Guest Post: Fission – Serverless Functions and Workflows with Kubernetes and NATS.”
