All Posts by Kaitlyn Barnard

Kubernetes 1.10: Stabilizing Storage, Security, and Networking


Editor’s note: today’s post is by the 1.10 Release Team

Originally posted on Kubernetes.io

We’re pleased to announce the delivery of Kubernetes 1.10, our first release of 2018!

Today’s release continues to advance the maturity, extensibility, and pluggability of Kubernetes. This newest version stabilizes features in three key areas: storage, security, and networking. Notable additions in this release include the introduction of external kubectl credential providers (alpha), the ability to switch the DNS service to CoreDNS at install time (beta), and the move of the Container Storage Interface (CSI) and persistent local volumes to beta.

Let’s dive into the key features of this release:

Storage – CSI and Local Storage move to beta

This is an impactful release for the Storage Special Interest Group (SIG), marking the culmination of their work on multiple features. The Kubernetes implementation of the Container Storage Interface (CSI) moves to beta in this release: installing new volume plugins is now as easy as deploying a pod. This in turn enables third-party storage providers to develop their solutions independently outside of the core Kubernetes codebase. This continues the thread of extensibility within the Kubernetes ecosystem.

Durable (non-shared) local storage management progressed to beta in this release, making locally attached (non-network attached) storage available as a persistent volume source. This means higher performance and lower cost for distributed file systems and databases.
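To make this concrete, here is a minimal sketch of creating a local PersistentVolume with the Python Kubernetes client; the node name, disk path, capacity, and storage class below are placeholders, not values from the release itself.

```python
from kubernetes import client, config

config.load_kube_config()

# A local PersistentVolume must declare which node owns the disk via
# nodeAffinity; the scheduler then places pods that claim the volume on
# that node. All names and paths here are illustrative only.
local_pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="example-local-pv"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-storage",
        local=client.V1LocalVolumeSource(path="/mnt/disks/ssd1"),
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(
                node_selector_terms=[
                    client.V1NodeSelectorTerm(
                        match_expressions=[
                            client.V1NodeSelectorRequirement(
                                key="kubernetes.io/hostname",
                                operator="In",
                                values=["node-1"],
                            )
                        ]
                    )
                ]
            )
        ),
    ),
)

client.CoreV1Api().create_persistent_volume(body=local_pv)
```

A pod then consumes the volume through an ordinary PersistentVolumeClaim that requests the same storage class.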

This release also includes many updates to Persistent Volumes. Kubernetes can automatically prevent deletion of Persistent Volume Claims that are in use by a pod (beta) and prevent deletion of a Persistent Volume that is bound to a Persistent Volume Claim (beta). This helps ensure that storage API objects are deleted in the correct order.
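Under the hood this protection is implemented with finalizers on the storage objects. As a small hedged illustration (the claim name and namespace are hypothetical), you can observe the finalizer with the Python client:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A claim that is still mounted by a pod is not removed immediately when
# deleted; the API server holds it (deletionTimestamp set) until the
# protection finalizer is cleared, i.e. until no pod uses it any more.
pvc = v1.read_namespaced_persistent_volume_claim("data-claim", "default")
print(pvc.metadata.finalizers)          # typically ['kubernetes.io/pvc-protection']
print(pvc.metadata.deletion_timestamp)  # set once deletion has been requested
```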

Security – External credential providers (alpha)

Kubernetes, which is already highly extensible, gains another extension point in 1.10 with external kubectl credential providers (alpha). Cloud providers, vendors, and other platform developers can now release binary plugins that handle authentication for specific cloud-provider IAM services, or that integrate with in-house authentication systems that aren’t supported in-tree, such as Active Directory. This complements the Cloud Controller Manager feature added in 1.9.
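To give a feel for the extension point, here is a hedged sketch of what such a plugin might look like: a standalone executable that kubectl invokes and that prints an ExecCredential object to stdout. The token source is a placeholder, and the apiVersion shown is the alpha client-go credential plugin API group as we understand it at the time of this release.

```python
#!/usr/bin/env python3
"""Toy kubectl credential plugin (illustrative sketch only).

kubectl runs the executable configured for the user in the kubeconfig and
reads an ExecCredential object from its standard output; how the token is
actually obtained (cloud IAM, Active Directory, SSO, ...) is entirely up
to the plugin.
"""
import json
import sys


def fetch_token():
    # Placeholder: call your cloud IAM or in-house auth system here.
    return "example-opaque-bearer-token"


def main():
    credential = {
        "apiVersion": "client.authentication.k8s.io/v1alpha1",  # alpha API group
        "kind": "ExecCredential",
        "status": {"token": fetch_token()},
    }
    json.dump(credential, sys.stdout)


if __name__ == "__main__":
    main()
```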

Networking – CoreDNS as a DNS provider (beta)

The ability to switch the DNS service to CoreDNS at install time is now in beta. CoreDNS has fewer moving parts: it’s a single executable and a single process, and supports additional use cases.

Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the release notes.

Availability

Kubernetes 1.10 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials.

2-Day Features Blog Series

If you’re interested in exploring these features more in depth, check back next week for our 2 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

  • Day 1 – Container Storage Interface (CSI) for Kubernetes going Beta
  • Day 2 – Local Persistent Volumes for Kubernetes going Beta

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Jaice Singer DuMars, Kubernetes Ambassador for Microsoft. The 10 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code, creating a more vibrant ecosystem.

Project Velocity

The CNCF has continued refining an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. Thanks to increased automation, issue count at the end of the release was only slightly higher than it was at the beginning. This marks a major shift toward issue manageability. With 75,000+ comments, Kubernetes remains one of the most actively discussed projects on GitHub.

User Highlights

According to a recent CNCF survey, more than 49% of Asia-based respondents use Kubernetes in production, with another 49% evaluating it for use in production. Established, global organizations are using Kubernetes in production at massive scale. Recently published user stories from the community include:

  • Huawei, the largest telecommunications equipment manufacturer in the world, moved its internal IT department’s applications to run on Kubernetes. This cut global deployment cycles from a week to minutes and improved application delivery efficiency tenfold.
  • Jinjiang Travel International, one of the top 5 largest OTA and hotel companies, uses Kubernetes to speed up its software release velocity from hours to just minutes. It also leverages Kubernetes to increase the scalability and availability of its online workloads.
  • Haufe Group, the Germany-based media and software company, utilized Kubernetes to deliver a new release in half an hour instead of days. The company is also able to scale down to around half the capacity at night, saving 30 percent on hardware costs.
  • BlackRock, the world’s largest asset manager, was able to move quickly using Kubernetes and built an investor research web app from inception to delivery in under 100 days.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem Updates

  • The CNCF is expanding its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual’s ability to design, build, configure, and expose cloud native applications for Kubernetes. The CNCF is looking for beta testers for this new program. More information can be found here.
  • Kubernetes documentation now features user journeys: specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.  
  • CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.

KubeCon

The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Copenhagen from May 2-4, 2018 and will feature technical sessions, case studies, developer deep dives, salons and more! Check out the schedule of speakers and register today!

Webinar

Join members of the Kubernetes 1.10 release team on April 10th at 10am PDT to learn about the major features in this release including Local Persistent Volumes and the Container Storage Interface (CSI). Register here.

Get Involved:

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

  • Post questions (or answer questions) on Stack Overflow
  • Join the community portal for advocates on K8sPort
  • Follow us on Twitter @Kubernetesio for latest updates
  • Chat with the community on Slack
  • Share your Kubernetes story.

CNCF Survey: China is Going Native with Cloud


By Swapnil Bhartiya, Founder & Writer at TFiR: The Fourth Industrial Revolution

Swapnil is a journalist and writer who has been covering Linux & Open Source for 10 years. He is also a science fiction writer whose stories have been broadcast on Indian radio and published in leading Indian magazines.

To better gauge how quickly Asian companies are adopting open source and cloud native technologies like Kubernetes and Prometheus, CNCF recently conducted its cloud native survey in Chinese. The biannual survey takes the pulse of the community to understand the adoption of cloud native technologies. More than 700 people responded to the December 2017 surveys, with 500 responding to the English version and 187 responding to the Chinese version.

For the first time, we have a very comprehensive sample and data to understand the adoption of cloud native technologies in China. Since the survey was conducted in Mandarin, it reflects the Chinese market more than the overall Asian market, excluding major economies like Japan. We expect that future surveys will be conducted in more languages to get an even clearer picture.

KubeCon + CloudNativeCon Europe is around the corner from May 2-4 in Copenhagen, and the first KubeCon + CloudNativeCon will be held in China in November. It will be interesting to see how the representation and responses change as CNCF reaches new developers, vendors, and user attendees in other parts of the world. I can’t wait for the next KubeCon to learn more about the cloud native journey of Asian players. Read on to learn more about new cloud native developments in China.

P.S. This blog is a follow-up to “Cloud Native Technologies Are Scaling Production Applications,” which analyzed trends and data from our global survey conducted in English.

China, an Open Source Cloud Native Powerhouse

The survey data confirm that China is embracing cloud native and open source technologies at a phenomenal rate.

We already know that the famous BAT organizations (Baidu, Alibaba, and Tencent) are using open source technologies to build services that serve more than a billion users. For example, I recently interviewed Bowyer Liu, the chief architect at Tencent, which built its own public cloud, TStack, using OpenStack. TStack runs more than 12,000 virtual machines (VMs) across 17 clusters spread over 7 data centers in 4 regions, and manages more than 300 services, including QQ and WeChat. TStack also uses open source projects such as CoreOS, Docker, Kubernetes, KVM, HAProxy, Keepalived, Clear Containers, RabbitMQ, MariaDB, Nginx, Ansible, Jenkins, Git, ELK, Zabbix, Grafana, InfluxDB, Tempest, and Rally.

Of the 18 CNCF platinum members, 4 are located in Asia; of its 8 gold members, 3 are based in Asia. The community runs 18 CNCF Meetups across Asia, 8 of them in China alone. Furthermore, in the past quarter, 3 of the 10 companies most active in contributing across all CNCF projects — Huawei, Fujitsu, and Treasure Data — were based in Asia.

About the Survey Methodology & Respondents

The respondents represented a variety of company sizes from small startups to large enterprises:

  • Over 5000 employees: 22%
  • 1000-4999 employees: 16%
  • 500-999: 14%
  • 100-499: 27%
  • 50-99: 12%
  • 10-49: 8%

Of the total respondents, 26% represented the tech industry, 13% container/cloud solutions vendors, 12% financial services, 11% the consumer industry, 6% government, and 6% manufacturing. In comparison, in the North American survey the tech industry accounted for 44% of respondents, container/cloud vendors were the second largest group at 14%, and financial services third at 7%. China is embracing cloud native technologies across the board, and the Chinese financial services industry appears more open to cloud native technologies than its Western counterparts.

What Cloud Are They Running?

While public cloud continues to dominate the U.S. market, the Chinese market is more diverse: 60% of respondents were running on-premises infrastructure, 24% were using private cloud, and only 16% were using public cloud. Compare that to North America, where more than 81% of respondents were using public cloud, 61% were running on premises, and 44% were using some form of private cloud.

In China, Alibaba Cloud remains the leader, with more than 55% of respondents using it; 30% were using AWS, 28% were using OpenStack, 12% were using Microsoft Azure, and around 6% were using Google Cloud Platform.

In North America, AWS remains the leader, with 70% of respondents deploying their containers in AWS environments. Google Cloud Platform was second at 40%, and Microsoft Azure took the third spot with 23%. Only 22% of North American respondents acknowledged using OpenStack, compared to 28% in China.

Growth of Containers

Of the total respondents, only 6% were using more than 5,000 containers. Around 32% were using fewer than 50 containers, and around 30% were using 50-249 containers. 36% of respondents were using containers in development, while 32% were using them in production, and 57% said they plan to use containers in production in the future. The most common fleet size was 6-20 machines (22% of respondents), with 21-50 machines close behind at 17%; only 6% had more than 5,000 machines, counting both VMs and bare metal.

What about Kubernetes?

No surprises — Kubernetes remains the No. 1 platform for managing containers. 35% of Asian respondents said they were using Kubernetes to manage their containers. Azure Container Service was used by 19%, and Docker Swarm by 16%. ECS came in fourth at 13%, with 11% reporting use of Cloud Foundry, another Linux Foundation project. 6% said they were using OpenShift, while another 6% reported using CoreOS Tectonic. This data shows a much more diverse cloud native ecosystem in China than in the United States.

Where Are They Running Kubernetes?

The most interesting finding from the survey was that more than 49% of respondents were using Kubernetes in production, with another 49% evaluating it for use. Some of the most well-known examples include Jinjiang Travel International, one of the top 5 largest OTA and hotel companies, which sells hotels, travel packages, and car rentals; it uses Kubernetes to speed its software release velocity from hours to just minutes and leverages it to increase the scalability and availability of its online workloads. China Mobile uses containers in place of VMs to run various applications on its platform in a lightweight fashion, leveraging Kubernetes to increase resource utilization. State Power Grid, the state-owned power supply company in China, uses containers and Kubernetes to provide failure resilience and fast recovery. JD.com, one of China’s largest companies and the first Chinese Internet company to make the Global Fortune 500 list, chronicled its shift from OpenStack to Kubernetes in this blog from last year.

As expected, 52% of respondents were running Kubernetes on Alibaba’s public cloud and 26% on AWS. China is a bigger consumer of OpenStack than the North American market (16%): around 26% of Chinese respondents were running Kubernetes on OpenStack. A majority of respondents were running 10 or fewer clusters in production; 14% were running one cluster, 40% were running 2-5 clusters, and 26% were running 6-10 clusters. Only 6% of the respondents running Kubernetes had more than 50 clusters in production.

One Ring to Bind Them All…

CNCF is the bedrock of this cloud native movement, and the survey shows that adoption of CNCF projects is growing quickly in China. While Kubernetes remains the crown jewel, other CNCF projects are getting into production: 20% of respondents were running OpenTracing in production, 16% were using Prometheus, 13% were using gRPC, 10% were using CoreDNS, and 7% were using Fluentd. China is still warming up to newer projects like Istio, which only 3% of respondents were using in production.

Talking about new CNCF projects, we can’t ignore ‘serverless’ or ‘function as a service.’ The CNCF Serverless Working Group recently came out with a whitepaper to define serverless computing. The survey found that more than 25% of respondents are already using serverless technology and around 23% planned to use it in the next 12-18 months.

China leans heavily toward open source when it comes to serverless technologies. Apache OpenWhisk is the dominant serverless platform in China with more than 30% of respondents using it as their preferred platform. AWS Lambda is in the second spot with 24% of respondents using it. Azure was mentioned by 14% and Google Cloud Functions was mentioned by 9% of respondents.

We will hear more about serverless technologies in 2018, especially at the upcoming KubeCon + CloudNativeCon, May 2-4 in Copenhagen.

Challenges Ahead

As impressive as the findings of the survey are, we are talking about some fairly young technologies, and respondents cited many challenges, both new and old. Complexity remains the No. 1 concern, with more than 44% of respondents citing it as their biggest challenge, followed by reliability (43%), difficulty in choosing an orchestration solution (40%), and monitoring (38%).

In contrast, for North American respondents security remains the No. 1 concern, with 43% of respondents citing it as their biggest challenge, followed by storage (41%), networking (38%), monitoring (38%), complexity (35%), and logging (32%).

This is a clear indication that the Chinese market is younger and still working through basic challenges that its Western counterparts have already overcome. Documentation plays a critical role in the proliferation of any technology, so it’s promising to see substantial work underway to translate the Kubernetes documentation into Mandarin and help Chinese companies move forward.

The good news is that finding a vendor is not among the biggest challenges for either market. With the release of the Certified Kubernetes Conformance Program, CNCF has given users more confidence to pick and choose a vendor without fear of lock-in.

Getting Ready for Cloud Native World?

There is no playbook for the journey to the cloud native world, but there are best practices you can follow. A little under a year ago, Dr. Ying Xiong, Chief Architect of Cloud Computing at Huawei, spoke about the company’s move toward cloud native architecture at KubeCon + CloudNativeCon Europe. His tips: start with the right set of applications for your cloud native journey; some apps are too difficult to redesign with a microservices architecture, so don’t start with those. Choose the easy ones first; as you succeed, you gain confidence and a model to replicate across your organization. He also advises using a single platform to manage both container and non-container applications.

Be sure to join us at an upcoming CNCF Meetup.

For even deeper exposure to the cloud native community, ecosystem and user successes, be sure to attend our first KubeCon + CloudNativeCon China, Nov. 14-15 in Shanghai. 

This Week in Kubernetes: March 21st


Each week, the Kubernetes community shares an enormous amount of interesting and informative content, including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking Apache Spark on AKS, dedicated game servers, Kubernetes security, and configuration methods.

Running Apache Spark Jobs on AKS, Microsoft

Apache Spark, a fast engine for large-scale data processing, now supports native integration with Kubernetes clusters as a scheduler for Spark jobs. In this article, Lena Hall and Neil Peterson of Microsoft walk you through how to prepare and run Apache Spark jobs on an Azure Container Service (AKS) cluster. If you want to learn more about using Spark for large scale data processing on Kubernetes, check out this treehouse discussion video.

Introducing Agones: Open-source, Multiplayer, Dedicated Game-server Hosting Built on Kubernetes, Google

In the world of distributed systems, hosting and scaling dedicated game servers for online, multiplayer games presents some unique challenges. Because Kubernetes is an open-source, common standard for building complex workloads and distributed systems, it makes sense to expand this to scale game servers. In this article, Mark Mandel of Google introduces Agones, an open-source, dedicated game server hosting and scaling project built on top of Kubernetes, with the flexibility you need to tailor it to the needs of multiplayer games.

8 Ways to Bolster Kubernetes Security, TechBeacon

Kubernetes can affect many runtime security functions, including authentication, authorization, logging, and resource isolation. Since it also affects the container runtime environment, it’s a crucial part of maintaining a secure container infrastructure. In this article, John P. Mello Jr. of TechBeacon explains 8 ways to help keep Kubernetes secure.

Kubernetes from the Ground Up: Choosing a Configuration Method, OzNetNerd

Kubernetes configuration is simply a collection of Kubernetes objects. In this article, Will Robinson of Contino takes a quick look at what these objects are and what they’re used for. You’ll walk through imperative commands, imperative object configuration, and declarative object configuration, including what imperative vs. declarative means and which approach is right for your application.

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

Trace Your Microservices Application with Zipkin and OpenTracing


By Gianluca Arbezzano, Site Reliability Engineer at InfluxDB, CNCF Ambassador 

Walter Dal Mut is a certified Amazon Web Services consultant. He works at Corley SRL, an Italian company that helps other companies, small and large, move to the cloud.

During the first CNCF Italy Meetup, Walter shared his experience instrumenting a PHP microservices environment with Zipkin and OpenTracing.

Everybody logs in their applications, and logging is genuinely useful. The problem is that logs contain a lot of detailed information and move very fast across many services; they are almost impossible to read in real time.

This is why we have monitoring. By monitoring I mean events, metrics, and time series. One aspect of monitoring that I love is aggregation.

Aggregation makes it easy to put things together, and we can see so much more: for example, I can gauge the severity of an issue by looking at how often it occurs, or compare the number of requests with the load time in a single graph. That’s very useful, and it’s something I can’t get by tailing logs.

With metrics we can measure change. In my opinion this is one of the most important roles of monitoring, because a deployment is usually the point in time when something changes. If we can detect the magnitude of that change, we can act on how good or bad it is, and we see immediately whether what we changed matters. You will discover that very often a change is not useful at all, or simply doesn’t work as expected. Monitoring is the only way to understand all of this.

In the image above, I instrumented my application to send events, which I collect in InfluxDB. In the bottom-right graph you can see green and red lines. Red lines tell us that something is wrong, and now that we know the distribution, we can measure whether a new version of our software improves the current state or not.

One tip to remember when building your dashboards: a deploy is an important event. Depending on the monitoring software you use, you can mark this special event with a vertical line; Grafana calls this feature an annotation. The annotation is drawn across all graphs in the same dashboard, and that line is the key to understanding how a new release performs.

One piece of information that is still missing is how a request propagates through our system.

In a microservices environment, the log generated by a single service is not what matters most; we want to trace a request and jump across services as we follow it. I want to connect the dots across my services, and tracing is designed to read time series in exactly these terms.

In a traditional web application with a database, I want to understand the queries made to load a specific page, how long they take, and how to keep them optimized and few in number.

Tracing is all about spans, inter-process propagation, and active span management.

A span is a period of time with a start and an end. Within it we also mark when the client sends the request, when the server receives it, when the server sends the response, and when the client receives it.

These four signals are important for understanding the network latency between services.

Beyond that, you can mark custom events inside a span and calculate how long it takes your application to finish a specific task, such as generating a PDF, decompressing a request, or processing a message from a queue.
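As a small illustration of that idea, here is a hedged Python sketch using the OpenTracing API; the operation name, tag, and events are made up, and any OpenTracing-compatible tracer (Zipkin- or Jaeger-backed) could be registered as the global tracer.

```python
import time

import opentracing

# The no-op default tracer keeps this runnable; in a real service you would
# register a Zipkin- or Jaeger-backed tracer as the global tracer instead.
tracer = opentracing.global_tracer()

with tracer.start_active_span("generate-invoice-pdf") as scope:
    span = scope.span
    span.set_tag("customer.id", "12345")        # hypothetical tag
    span.log_kv({"event": "template_loaded"})   # custom event inside the span
    time.sleep(0.05)                            # stand-in for the real work
    span.log_kv({"event": "pdf_rendered"})
# Leaving the block finishes the span, recording its start and end timestamps.
```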

Inter-process propagation describes how we pass tracing information across networks and processes; in HTTP we use headers to carry trace information between services. The trace ID is the unique identifier for a request: it starts at time zero and ends when all of the microservices involved have finished, and every span that is part of that request is grouped under the same ID. Every span also has its own ID and a parent span ID, which is what lets us assemble all the spans into a tree.
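A hedged sketch of that propagation with the OpenTracing Python API is shown below; the operation names are invented, and the actual header names on the wire (for example Zipkin’s B3 headers) depend on the tracer you plug in.

```python
import opentracing
from opentracing.propagation import Format

tracer = opentracing.global_tracer()

# Client side: inject the active span's context into the outgoing HTTP headers.
outgoing_headers = {}
with tracer.start_active_span("checkout") as scope:
    tracer.inject(scope.span.context, Format.HTTP_HEADERS, outgoing_headers)
    # outgoing_headers now carries the trace id / span id; send them along
    # with the HTTP request to the downstream service.

# Server side: extract the caller's context and start a child span in the
# same trace, so the two spans share a trace id and form a parent/child pair.
parent_ctx = tracer.extract(Format.HTTP_HEADERS, outgoing_headers)
with tracer.start_active_span("charge-card", child_of=parent_ctx):
    pass  # handle the request here
```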

There are different tracers available; the most famous open source ones are Jaeger (a CNCF project) and Zipkin, which was started by Twitter.

During the talk Walter used Zipkin, but both are compatible with OpenTracing: if you use the right libraries, you can switch between tracers transparently.

This is how Zipkin represents a trace. You can see the list of all your services on the left and every span generated across your distributed system. The length of each span shows how much time it took, and from this visualization alone we already get a good set of information and optimization points:

  • How many services we call to serve the request. Are there too many?
  • How many times we call the same service.
  • How much time each service takes. If authentication takes 80% of your request time, every time, you should fix it.
  • And many more.

Some spans have one or more dots; those white dots are logs. They are useful for understanding when a specific event happened in time. You can use this feature to identify when you sent an email, or when a customer clicked a specific button, tracking their UI experience.

The video shows a detailed demo of what Zipkin provides in terms of filtering, searching, and even UI tricks to quickly identify traces that ended with an error. Besides showing how Zipkin works, Walter shared his experience instrumenting PHP applications.

The focus of his talk is tricks and best practices that are well worth hearing in order to avoid some common mistakes.

They are a bit hard to transcribe, so I will leave the video to you.

I will leave you with the quote he shared at the end of the presentation (spoiler alert):

“Measure is the key to science.” (Richard Feynman)

The slides are available at http://example.walterdalmut.com/opentracing/

Cloud Native Computing Foundation Expands Certification Program to Kubernetes Application Developers – Beta Testers Needed


The Cloud Native Computing Foundation (CNCF), which sustains and integrates open source technologies like Kubernetes and Prometheus, announced today that it is expanding its certification program to include application developers who deploy to Kubernetes. After launching the highly successful Certified Kubernetes Administrator (CKA) exam and corresponding Kubernetes Certified Service Provider (KCSP) program back in September with more than 1,200 CKAs and 28 KCSPs to date, the natural follow-on was an exam focused on certifying application developers to cover the full spectrum of Kubernetes technical expertise.

We are looking for skilled Kubernetes professionals at vendor companies to beta test the exam curriculum.

Are you interested in getting an early peek as a beta tester? If so, please read on!

  • The online exam consists of a set of performance-based items (problems) to be solved in a command line.
  • The exam is expected to take approximately 2 hours to complete.
  • Beta testing is targeted for late April — early May.
  • After taking the exam, each beta tester will be asked to provide exam experience feedback in a questionnaire.

If you would like to be considered as a beta tester for the CKAD exam, please sign up via this short survey.

You will be contacted with additional information when the exam is ready and the Beta team is selected. If you complete the beta exam with a passing grade, you will become one of the first Certified Kubernetes Application Developers.

With the majority of container-related job listings asking for proficiency in Kubernetes as an orchestration platform, the new program will help expand the pool of Kubernetes experts in the market, thereby enabling continued growth across the broad set of organizations using the technology. Certification is a key step in that process, allowing certified application developers to quickly establish their credibility and value in the job market, and also allowing companies to more quickly hire high-quality teams to support their growth.

“The CKAD program was developed as an extension of CNCF’s Kubernetes training offerings, which already include certification for Kubernetes administrators. By introducing this new exam for application developers, anyone working with Kubernetes can now certify their competency in the platform,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation.

To create the Certified Kubernetes Application Developer (CKAD) exam, 19 Kubernetes subject matter experts (SMEs) participated over four days in job analysis and item writing sessions co-located with KubeCon + CloudNativeCon in Austin, Texas in December. During these work sessions, the following exam scope statement was agreed upon:

CKAD Exam Scope Statement

The Certified Kubernetes Application Developer can design, build, configure, and expose cloud native applications for Kubernetes. The Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications & tools in Kubernetes.

The exam assumes knowledge of, but does not test for, container runtimes and microservice architecture.

The successful candidate will be comfortable using:

  • An OCI-Compliant Container Runtime, such as Docker or rkt.
  • Cloud native application concepts and architectures.
  • A programming language, such as Python, Node.js, Go, or Java.

Also during the work sessions, the team defined seven content domains, each with a percentage weight in the exam:

  • 13%  Core Concepts
  • 18%  Configuration
  • 10%  Multi-Container Pods
  • 18%  Observability
  • 20%  Pod Design
  • 13%  Services & Networking
  • 8%   State Persistence

The SME group wrote exam problems, assigned complexity and difficulty ratings, and determined a provisional passing score of 66% during the working session.

The CKAD exam launch is targeted for early May.

This Week in Kubernetes: March 14th


Each week, the Kubernetes community shares an enormous amount of interesting and informative content, including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking Kubernetes tooling, the project’s graduation from CNCF incubation, scalable Jenkins, and single sign-on.

7 Kubernetes Tools to Expand Your Container Architecture, Stackify

Kubernetes has become a vital resource for Agile and DevOps teams. As an open source tool, Kubernetes is becoming an ecosystem in itself, with other tools being developed to support it. Some of these extensions are coming straight from Kubernetes, while others are open source projects in their own right. In this article, John Julien of WeContent explores a few of these tools in depth to help you better understand when and how you should be using them.

From Open Source to Sustainable Success: the Kubernetes Graduation Story, Google Cloud Platform

Kubernetes has graduated from CNCF incubation! Thanks to the support of the Kubernetes project community, this milestone marks a significant achievement in the project’s maturity. In this article, Sarah Novotny and Aparna Sinha of Google share some of the best practices that were learned along the way and helped Kubernetes get where it is today. From building a community and user-friendly technology to investing in sustainability and enabling an ecosystem, take a look back at the evolution of Kubernetes.

How to Set Up Scalable Jenkins on Top of a Kubernetes Cluster, DZone

Jenkins is an open source continuous integration server used in many production applications and can be used on top of Kubernetes to help scale your application. In this article, Yuri Bushnev of AlphaSense walks you through how to use Jenkins auto-scaling inside a Kubernetes cluster. This allows all nodes to spin up automatically during builds, and be removed right after completion.

Single Sign-On for Kubernetes: An Introduction, The New Stack

Authentication and authorization are important steps in securing your Kubernetes clusters. And because Kubernetes separates these out, you have some flexibility in how to set this up. In this article, Joel Speed of Pusher gives an introductory explanation of authentication within Kubernetes and its approach to single sign-on. You’ll learn what authentication methods Kubernetes supports and take a deep dive into OpenID Connect.

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

This Week in Kubernetes: March 7th


Each week, the Kubernetes community shares an enormous amount of interesting and informative content, including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking the Kubernetes 1.10 beta, securing the Dashboard, Kubernetes for SaaS, and autoscaling with Prometheus.

First Beta Version of Kubernetes 1.10 is Here – Your Chance to Provide Feedback, Kubernetes.io

The Kubernetes community has been hard at work on the first beta version of Kubernetes 1.10. The March release is targeting over a dozen new alpha features and over two dozen mature features, including production-ready versions of Kubelet TLS Bootstrapping, API aggregation, and more detailed storage metrics. Nick Chase of Mirantis put together this sneak peek of what’s included in 1.10 and how you can provide your feedback during beta testing.

On Securing the Kubernetes Dashboard, Heptio

Recently, Tesla’s Kubernetes infrastructure was compromised and used by attackers to mine cryptocurrency. Tesla’s Kubernetes dashboard was exposed to the internet, with visible AWS API keys and secrets. In this post, Joe Beda of Heptio explains how to secure your Kubernetes Dashboard to prevent this from happening to you, including RBAC configurations, per-user credentials, and a full tutorial on screening with oauth2_proxy.

How to know if Kubernetes is right for your SaaS, freeCodeCamp

Kubernetes is a great tool to scale, deploy, and manage SaaS applications. But it’s important to know if and when Kubernetes is a good fit for your current situation before investing the time and resources. If you’re currently deciding whether or not to adopt Kubernetes, check out this overview by Ben Sears of ServiceBot.io. Walk through what you should know about the benefits of containers, if Kubernetes will solve your current problems, and if it fits into your future plans for your application architecture.

Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus, Weaveworks

Autoscaling in Kubernetes allows you to automatically scale workloads up or down based on resource usage. In this post, Stefan Prodan of Weaveworks explains how to use Cluster Autoscaling and the Horizontal Pod Autoscaler (HPA) to optimize for availability and uptime, including how to set up Prometheus to expose the right metrics for autoscaling.

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

This Week in Kubernetes: February 28th


Each week, the Kubernetes community shares an enormous amount of interesting and informative content, including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking legacy application migration, Hyperledger Fabric, serverless with Kubeless, Kubernetes deployments, and Dashboard security.

How Kubernetes Became the Solution for Migrating Legacy Applications, Opensource.com

Kubernetes has become the go-to solution for container orchestration, helping organizations turn monolithic applications into manageable microservices.  In this article, Swapnil Bhartiya explains the history of Kubernetes, why more organizations are choosing open source technologies, and how Kubernetes is being used at companies like Ticketmaster to transition legacy applications to containers.

Set up a Hyperledger Fabric development environment on Kubernetes, Medium

Hyperledger Fabric is a platform for distributed ledgers. If you’re interested in developing chaincode and client applications, Kynan Rilee, creator of koki.io, walks you through how to do this with Hyperledger Fabric on Kubernetes. You’ll learn how to set up Fabric and deploy the right resource configurations to smoothly run your chaincode.

Kubeless Tutorial – Kubernetes Native Serverless Framework, upnxtblog

Kubeless, a functions-as-a-service solution, leverages Kubernetes resources to give you all the benefits of auto-scaling, API routing, and more. This allows you to build applications without worrying about the servers running the code. KarthiKeyan Shanmugam will get you up and running with serverless by sharing how Kubeless works, how to install it, and how to get started.

Dissecting Kubernetes Deployments, Heroku

Check out this great overview of Kubernetes deployments with Damien Mathieu of Heroku. This article dives into some Kubernetes internals, focusing on deployments and gradual rollouts of new containers. From understanding the Kubernetes trigger-based environment to working with ReplicaSets, this post takes the complexity out of Kubernetes deployments.

On Securing the Kubernetes Dashboard

Recently Tesla (the car company) was alerted by security firm RedLock that its Kubernetes infrastructure had been compromised. The attackers were using Tesla’s infrastructure resources to mine cryptocurrency. The vector of attack in this case was a Kubernetes Dashboard exposed to the general internet with no authentication and elevated privileges. In his latest blog, Joe Beda of Heptio attempts to answer the question: how do you prevent this from happening to you?

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

This Week in Kubernetes: February 21st


Each week, the Kubernetes community shares an enormous amount of interesting and informative content, including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking rainbow deployments, Jenkins CI/CD pipelines, lessons from adopting containers, and Kubernetes and PaaS.

Rainbow Deploys with Kubernetes, BrandonDimcheff.com

Deployments aren’t as disruptive when your service is stateless, but sometimes stateful services can’t be made stateless. In this article, Brandon Dimcheff of Olark explains how Olark dealt with that issue when deploying the stateful service that powers chat.olark.com. Every time they deployed the traditional way, Kubernetes would restart all the backends, causing every user to reconnect and creating a poor user experience and major load spikes. Learn what a rainbow deployment is and how it solved the problem.

Set Up a Jenkins CI/CD pipeline with Kubernetes, AKomljen.com

Continuous integration and delivery is a major component of cloud native DevOps approaches. Deploying a Jenkins server can be easy, but creating a pipeline to build, deploy, and test your software gets more difficult. In this introductory article, Alen Komljen, DevOps Engineer and Consultant, explains Jenkins pipelines, how to build one, and how to run it on Kubernetes.

Why I Went All-in with Containers…and the Fails Along the Way, IOD

As an early adopter of containers, Adam Hawkins of Saltside has faced his fair share of challenges building orchestration systems. In this article, Adam shares his story of adopting, developing, and running production containers. From working with new technologies before adequate supporting tools were developed to solving new production problems, check out these lessons learned from adopting bleeding edge technology.

Kubernetes and PaaS: The Force of Developer Experience and Workflow, The New Stack

As platforms-as-a-service (PaaS) become increasingly popular in the world of Kubernetes, it can be difficult to know what falls under that category. In this article, Daniel Bryant, an independent technical consultant, explains the three layers of modern web-based software development, how they relate to PaaS, and how these platforms evolved to where we are today.

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

This Week in Kubernetes: February 14th


Each week, the Kubernetes community shares an enormous amount of interesting and informative content, including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking zero downtime deployments, how cloud computing is changing management, lessons from Netflix’s container journey, and the “two Kubernetes.”

Zero Downtime Deployment with Kubernetes, Rahmonov.me

Finding a time to take your application offline to make an update is never easy. It can be nearly impossible to find a time that doesn’t affect at least some of your users, even when your developers work late nights and weekends to make it happen. In this article, Jahongir Rahmonov of Super Dispatch walks you through how to do a zero downtime deployment with Kubernetes to help avoid those late night and weekend releases.

How Cloud Computing Is Changing Management, Harvard Business Review

Cloud computing is arguably one of the most impactful technologies of our time, with faster deployment times, decreased costs, and the introduction of cloud native software approaches. In this article, Quentin Hardy of Google Cloud explains how organizations are changing across the board, from management to customer experience, to adopt these new systems.

4 critical lessons DevOps admins can learn from Netflix’s container journey, TechRepublic

With technology changing and evolving as quickly as it is, it can be difficult for organizations to re-write their applications as quickly as trends shift. In this article, Keith Townsend, The CTO Advisor, shares 4 important lessons that DevOps admins can learn from Netflix’s move to containers. From governance and choosing an orchestration platform to container networking and infrastructure choices, these are interesting lessons for anyone embarking on their own container journey.

The Tale of Two Kubernetes, World Wide Technology

Kubernetes is used for a wide variety of use cases from infrastructure to applications. While Kubernetes is a valuable tool to solve a variety of needs, the way you approach it will be very different based on your individual use case. In this article by William Caban of World Wide Technology, you’ll learn about the “two Kubernetes” that many of these trends fit into and which category your environment falls into.

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.