
2019 CNCF Annual Report 


The Cloud Native Computing Foundation (CNCF) annual report for 2019 is now available. The report highlights the growth of the community, events, projects, and more, over the past year.

As CNCF celebrated its fourth birthday in 2019, we achieved greater engagement through membership growth, event attendance growth, and increased end user participation. This past year was an exceptional year for CNCF. Below are some of the highlights.

Membership

We started the year with 345 members. Throughout 2019, we added 173 new members, an increase of more than 50%. Our 20 Platinum members include some of the world’s largest public cloud and enterprise software companies, including Apple, ARM, NetApp, and Palo Alto Networks. Ant Financial, Fidelity, Equinix, and Kingsoft joined as new Gold members in 2019. 

Our End User Community grew by 89% in 2019, signaling clear interest in cloud native technologies. We finished 2019 with 131 companies and startups. At present, CNCF enjoys the largest end user community of any open source foundation.

Events

KubeCon + CloudNativeCon North America was the world’s largest open source developer conference in 2019. The event attracted 12,000 attendees, a 2000% increase from the first KubeCon event in 2015. 

For the year, KubeCon + CloudNativeCon events attracted more than 23,000 attendees across 3 locations.

In 2019, CNCF supported more than 217 meetup groups in 53 countries, with more than 140,000 members, a 75% increase in CNCF meetup membership over the previous year.

In response to the community's evolving needs, CNCF also introduced Kubernetes Community Days (KCD), giving participants the opportunity to connect at a local level.

Projects

This year saw CNCF projects Fluentd, CoreDNS, containerd, Jaeger, and Vitess advance to graduated status, bringing the total number of graduated projects to eight. During 2019, CloudEvents, Falco, and OPA joined our 14 incubating projects. 

The TOC also accepted 12 new projects into the Sandbox. 

These are just a few highlights from the report. It also dives into training and certification, community engagement, ecosystem tools, growth in China, security audits, and much more. 

CNCF remains the fastest-growing foundation in the history of open source, and our success is directly attributable to our projects, the contributions of the developer community, our end users, and support from our member companies. 

As we look to 2020 and beyond, we remain committed to fostering and sustaining an ecosystem of open source, vendor-neutral projects, and making technology accessible for everyone. 

Thank you to our community for an incredible year, and we look forward to seeing our community at this year’s KubeCon + CloudNativeCon events in Amsterdam (March 30 – April 2, 2020), Shanghai (July 28-30, 2020), and Boston (November 17-20, 2020).

Using Containers and Kubernetes to Increase the Efficacy of Anomaly Detection 


Guest post from Connor Gorman, Sr. Software Engineer, StackRox

The maturation of the container ecosystem has coincided with the emergence of Kubernetes as the de facto orchestrator for running containerized applications. This new declarative and immutable workload design has paved the way for an entirely new operational model for detection and response. 

Kubernetes’ rich set of workload metadata augments and elevates traditional detection approaches such as anomaly detection. In Kubernetes, anomaly detection consists of observing an application during a learning stage to establish an activity baseline of its typical behavior (file reads and writes, network requests, and process executions), and then measuring future events against that baseline. Anything that falls significantly outside the baseline can be considered anomalous and should be investigated. 
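As an illustrative sketch of this learn-then-detect flow (not any particular vendor's implementation), the baseline logic might look like:

```python
class ActivityBaseline:
    """Records observed events during a learning stage, then flags
    anything outside that baseline as anomalous."""

    def __init__(self):
        self.learning = True
        self.baseline = set()

    def observe(self, event):
        # Record events (process execs, file accesses, network
        # destinations) while still in the learning stage.
        if self.learning:
            self.baseline.add(event)

    def finish_learning(self):
        self.learning = False

    def is_anomalous(self, event):
        # After learning, anything outside the baseline is anomalous.
        return not self.learning and event not in self.baseline


baseline = ActivityBaseline()
for e in ["exec:nginx", "read:/etc/nginx/nginx.conf", "connect:10.0.0.5:5432"]:
    baseline.observe(e)
baseline.finish_learning()

print(baseline.is_anomalous("exec:nginx"))         # False: seen during learning
print(baseline.is_anomalous("exec:/tmp/payload"))  # True: never seen before
```

Real systems weight events and tolerate drift rather than exact-matching a set, but the structure is the same: learn, freeze, compare.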

Anomaly Detection in a Traditional VM Infrastructure

The challenge with anomaly detection in a traditional Virtual Machine (VM) infrastructure is that it requires more expertise to tune and is much more prone to false positives. This is because of how VMs operate: each runs a full operating system, and to amortize the cost of running the OS, several core applications are often running inside each one. In such an infrastructure, with a significantly broader spectrum of possible activity, getting anomaly detection right means creating complex models and algorithms that rely on machine learning. Your job is to find the needle in the haystack, and with VMs the haystack is simply much larger. 

Anomaly Detection in Containers and Kubernetes

In contrast with VMs, containers are lightweight and often run a single application, frequently consisting of a single process. This form factor, combined with the declarative nature of Kubernetes, improves the efficacy of anomaly detection by providing context around each running application. 

The following diagram underscores why creating an activity baseline that leverages declarative information is more effective, as opposed to solely modeling runtime data. Each item below runtime is explicitly set by developers or operators and constitutes constraints for anomaly detection. 

Images

The principle of immutability that images adhere to provides the foundation for creating an activity baseline. By defining the set of binaries and packages installed in a specific version of an application, detection becomes vastly simplified. A Dockerfile is a manifest of the required application dependencies crafted by the application developer. Since containers don’t need to support a full operating system, this architecture relies on a significantly smaller set of packages and binaries than a VM. With a reduced number of known binaries and packages for an application, the development team can more easily verify that only pre-existing binaries are executed. This approach catches attacks in which a malicious actor inserts binaries and executes them.

What you should do:

  • Strip down your image by removing all unneeded dependencies and binaries
  • Regularly scan for vulnerabilities
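As an illustrative sketch (the image names and paths are assumptions, not from the original post), a multi-stage build keeps compilers and package managers out of the final image:

```dockerfile
# Build stage: compile with the full toolchain (illustrative Go app)
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: only the static binary, with no shell or package manager
# for an attacker to abuse
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```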

Pod Spec

PodSpecs allow developers to build guardrails for their Pods by defining their security contexts (assigning privileges, Linux capabilities, and whether the filesystem is read-only). These configurations narrow the Pod’s activity and specify aspects of the baseline that do not need to be inferred at runtime. For example, an attempted payload drop and execution on a Pod with a read-only filesystem would be denied, and the event would trigger an anomaly in your detection system. By contrast, in a VM infrastructure these tight controls are not feasible, because every application on the host would need to be compatible with a change of this type.

What you should do:

  • Run Pods with a read-only root filesystem wherever possible
  • Set a restrictive security context: avoid privileged containers and drop unneeded Linux capabilities
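As a minimal sketch of such guardrails (the Pod name and image are illustrative), the security context in a Pod spec might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments                      # illustrative name
spec:
  containers:
  - name: payments
    image: example.com/payments:1.0   # illustrative image
    securityContext:
      readOnlyRootFilesystem: true    # deny payload drops to disk
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]                 # drop all Linux capabilities
```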

Network Spec

Similar to firewalls but at a much more granular level, Kubernetes Network Policies enable developers to describe required ingress/egress in terms of Pods and IP subnets. This shift is critical. Developers in a microservices environment have a good understanding of their application’s network interactions and can scope access solely to the known dependencies. Kubernetes abstracts away IP addresses in application-to-application communication and provides logical segmentation constructs such as namespaces and labels. Carefully defined L3/L4 segmentation augments anomaly detection by narrowing the network activity to analyze and directly exposing blocked connections.

What you should do:

  • Enable namespace segmentation
  • Consider enabling finer-grained Network Policies
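A minimal sketch of such a policy (namespace and labels illustrative), allowing ingress to a backend only from frontend Pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative name
  namespace: prod                   # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend                  # policy applies to backend Pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend             # only frontend Pods may connect
    ports:
    - protocol: TCP
      port: 8080
```

Any connection attempt outside this declared path is blocked, and those blocked connections become high-signal input for anomaly detection.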

Keeping Bad Actors at Bay

Kubernetes and containers create a unique opportunity for developers and operators to explicitly declare the environment in which their applications should run. In a traditional VM infrastructure it is difficult to effectively define an application’s activity. Alternatively, by using single-application containers, users can define a minimal set of privileges and leverage Kubernetes to provide high-level abstractions around service-to-service interactions. These fine-grained controls can augment anomaly detection by determining what behavior is malicious versus benign and highlighting activity that violates user policies. As a result, the application’s attack surface is much smaller, making it less likely that a bad actor will be able to gain a foothold in the infrastructure.

Additional Resources

9 Kubernetes Security Best Practices >

12 Kubernetes Configuration Best Practices >

Kubernetes Security 101 >

Docker Container Security 101 >

Linkerd 2019 year in review


I think it’s safe to say that 2019 was a huge year for Linkerd. It saw the project emerge from the “seems promising but let’s wait and see” phase and firmly into “okay, I need an excuse to try this out” territory. In this post, I want to highlight what I think made 2019 Linkerd’s breakout year.

Features

Linkerd began 2019 quite feature rich in spite of its youth. Its control plane was easy to navigate and its data plane was blazing fast and extremely secure (three cheers for Rust!). But 2019 saw a dizzying array of improvements:

  • Distributed tracing support
  • Traffic splitting (crucial for use cases like canary deployments and blue/green deployments)
  • The linkerd tap command, which enables you to “listen in” on traffic to/from a Linkerd-enabled service (extremely useful for debugging)
  • The linkerd log command, which tails logs from Linkerd-enabled containers
  • Compliance with the Service Mesh Interface (SMI), an effort to establish a universal interface for all service meshes and to thereby make it dramatically easier to experiment with, migrate between, and combine meshes
  • Automatic request retries and timeouts
  • Service profiles, which are Kubernetes custom resource definitions (CRDs) that enable you to apply per-endpoint configuration to Linkerd-enabled services
  • Automatic proxy injection (aka auto-inject), which installs a Linkerd proxy on any Kubernetes Pod bearing the linkerd.io/inject: enabled annotation, with no user intervention required
  • Support for installing Linkerd with Helm
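As a sketch of the auto-inject item above (the Deployment and image names are illustrative), the annotation lives on the Pod template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
      annotations:
        linkerd.io/inject: enabled   # Linkerd injects its proxy automatically
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # illustrative image
```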

Whew! There are others but I don’t want to belabor the point. For a more granular look at Linkerd development in 2019, check out the changelog.

Linkerd on the big stage

After having just a few talks at KubeCon Asia 2018 in Shanghai, Linkerd had a pretty solid showing at KubeCon North America 2018 in Seattle, with nine talks. But 2019 proved to be a breakout year for Linkerd at KubeCon as well.

  • At KubeCon Europe 2019 in Barcelona, Linkerd was featured in 16 talks, including a keynote by Linkerd co-creator Oliver Gould
  • At the sadly truncated KubeCon Asia 2019 in Shanghai, Linkerd had just one talk. But that’s okay because…
  • …Linkerd came roaring back at KubeCon North America 2019 in San Diego, with eight talks as well as its first Day Zero event: A Linkerd in Production Workshop hosted by the company Buoyant. You can see a nice recap of the conference on the Linkerd blog.

Sterling security audit

Here at the CNCF, security is of paramount concern. As a condition for graduation, we require projects to undergo a security audit. Like Kubernetes, Prometheus, and others, Linkerd was independently audited by the highly respected Cure53, which subjected it to both a penetration test and a general source code audit.

The results were nothing short of thrilling for all of us at the CNCF. This was the highlight for me:

Judging by the lack of discovered relevant vulnerabilities and only a few miscellaneous issues, Cure53 has gained a rarely observed and very good impression of the examined Linkerd software complex and its surroundings…Cure53 is happy to report that no real vulnerabilities could be identified on the Linkerd scope.

I strongly encourage you to download the audit PDF and peruse the results yourself.

Revamped web presence

At the onset of 2019, Linkerd had a nice Bootstrap-y website and very good docs (including its now-famous Getting Started guide). But later in the year, the site got a complete aesthetic makeover, with what I personally find to be marvelous results.

Swag

Not everyone loves swag, perhaps mostly because a lot of it is subpar. But 2019 saw Linkerd become a leader in the swag space, with its now-iconic baseball cap an increasingly common sight at conferences. I look terrible in hats so I will not be partaking, personally, but y’all should get your hands on one.

Podcasts

Linkerd was featured in a variety of podcast episodes over the course of the year.

So if you have a hankering to learn about Linkerd when you’re making dinner or stuck in traffic, I can’t recommend these episodes enough.

Looking ahead

As 2020 gathers steam, I expect that Linkerd will continue garnering adoption and cementing its status as a standard-bearer amongst service meshes, particularly in the domain of usability. It shows no signs of slowing down this upcoming year. When I sit down to write my 2020 year in review for Linkerd, I expect it to have pushed the service mesh space forward in an even more dramatic fashion. Expect not just more features and docs and videos and talks but a slew of successful production deployments and a much larger share of the world’s backend east-west traffic.

CNCF Webinars: Fresh Insights for the Cloud Native Community


As we jump into 2020, we wanted to share a reminder and quick overview of CNCF’s webinar program. These webinars are hosted by members, CNCF incubating and graduated projects, and CNCF SIGs. Anyone can register to attend, and they are then posted on the CNCF YouTube channel.

CNCF webinars are a great, free opportunity for the cloud native community to learn about not only CNCF projects but other open source, cloud native technologies and trends in the community. 

Who hosts the CNCF webinars? 

Platinum, gold, and silver members can host. We also host webinars from graduated and incubating projects, which each have the opportunity to present twice a year on release launch details or updates.

Note that projects from the CNCF Sandbox are not able to hold webinars, but a CNCF member organization may still talk about an open source sandbox project during their company’s webinar. 

CNCF SIGs are able to host one webinar per year.

What are CNCF webinars about? 

Webinar topics are similar to what you could expect to see during a session at KubeCon + CloudNativeCon. Any platforms, tools or technologies discussed are open source and work with CNCF projects. 

The purpose of the CNCF webinars is to educate the cloud native community. Webinar topics vary depending on the target audience — whether developers, architects, CIOs, and/or CTOs — so there is a vast range of topics available! Webinars are not product pitches.

CNCF offers English language webinars every Tuesday 10am-11am PT, Wednesday 10am-11am PT, and Thursday 9am-10am PT. Chinese language webinars are typically scheduled for Wednesday or Thursday, 10am Beijing time.

To register for any upcoming webinars, you can check out our website and click on the webinar to sign up. We also offer recordings of past webinars so if you cannot attend one live, no worries!

Do you have any questions about webinars? Have a look at the in-depth guidelines if you are interested in hosting. Or, reach out to us at webinars@cncf.io!


Kubernetes Message Queue


Guest post originally published in KubeMQ by Lior Nabat

Background

Most enterprises are adopting Kubernetes as a result of the endless benefits it offers: seamless container management, high scalability, and enhanced communication and messaging. It also makes it easy to add new applications over a project’s lifetime when building with microservices, which means enterprises can make lots of changes with little effort as the project expands.

A microservice architecture orchestrated by Kubernetes carries a high volume of message traffic. Managing heavy traffic can pose major challenges, so enterprises have to give a lot of thought to how to manage it effectively before deploying in Kubernetes. As a result, the operational and architecture requirements needed to support the production environment have gained significant focus. In a new microservice project, each data model is decoupled from the rest of the system. But a project can grow to thousands of microservices, which means the messaging traffic can grow to millions of messages every day. To achieve an effective messaging system between microservices, a robust communication mechanism must therefore be adopted.

Some enterprises attempt to close this communication gap in Kubernetes using a point-to-point connectivity system such as REST. However, REST can create restrictions and other complications in the messaging structure of the services. Without a proper messaging solution, maintenance is needed each time requirements change, and carrying out frequent maintenance is expensive, time-consuming, and unreliable. The many restrictions that come with REST mean it cannot solve this problem.

To solve these problems in a microservices architecture on Kubernetes, a message queue system must be deployed. A message queue system re-architects the stack around a single focal point of communication, ensuring that each service communicates with the message queue broker in its own language. The message queue system then delivers the messages to the services waiting for them.

Building a well-managed messaging solution

A messaging system cannot be effective if it is not native to Kubernetes. Enterprises must ensure that when building a message queue system, it is native to Kubernetes to leverage the advantages.

The advantages are:

  • Robust messaging queue system
  • Secured system
  • Low DevOps maintenance
  • Well-connected ecosystem for logging in Kubernetes
  • Rapid deployment

Message Queue Advantages in a Hybrid Cloud Solution

Deploying enterprise solutions on a hybrid cloud service offers flexibility, speed, agility, low cost, and control. It also lets the enterprise use on-premise and public cloud services concurrently, with the flexibility to migrate from one solution to the other as cost and workload requirements change. With a hybrid solution, enterprises can host their sensitive applications and workloads on the private cloud while less critical workloads and applications are hosted on the public cloud. Furthermore, with a public cloud service organizations pay only for the resources they use, and these resources can be scaled up or down whenever needed. For a hybrid cloud to run effectively and transparently, with its parts connecting seamlessly and interacting, a message queue must be deployed in Kubernetes.

Use Cases

Message queues support diversified messaging patterns, which provide flexibility and enable a wide range of use cases. The most common use cases for a message queue in Kubernetes are:

When messages need to be processed in a coordinated approach, a synchronous pattern is implemented using a multi-stage pipeline, which allows messages to be processed in sequence between the different services. The multi-stage pipeline approach also handles messages that cannot be processed, by adopting a dead letter queue mechanism that accepts an unprocessed message and handles it in a predefined way. In a multi-stage pipeline system, each service is considered a separate stage, and messages are passed between all the stages in sequence.
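As a rough in-process sketch (not KubeMQ's actual API), a pipeline stage with a dead letter queue might look like:

```python
import queue


def stage(in_q, out_q, dead_letter_q, handler):
    """One pipeline stage: process each queued message in sequence;
    messages that cannot be processed go to the dead letter queue."""
    while not in_q.empty():
        msg = in_q.get()
        try:
            out_q.put(handler(msg))
        except Exception:
            dead_letter_q.put(msg)  # handle later in a predefined way


in_q, out_q, dlq = queue.Queue(), queue.Queue(), queue.Queue()
for msg in ["10", "20", "oops"]:
    in_q.put(msg)

# This stage doubles numeric messages; "oops" cannot be parsed.
stage(in_q, out_q, dlq, handler=lambda m: int(m) * 2)

print([out_q.get() for _ in range(out_q.qsize())])  # [20, 40]
print(dlq.get())                                    # oops
```

In a real deployment each stage is a separate service and the queues live in the broker, but the contract is the same: ordered hand-off between stages, with failures diverted rather than lost.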

When data needs to be streamed from many sources, such as big data and Internet of Things workloads, an asynchronous pattern is adopted. The data is then processed by dedicated services such as pipelines, databases, storage, and machine learning. This is an effective mechanism that aggregates many producers down to a smaller number of consumers, and with this approach the delivery of each message is guaranteed.

The publish-subscribe (pub/sub) pattern is applied when a smaller number of producers need to send a message to a larger number of consumers. A service acting as a publisher sends a message to a channel, and the subscribers receive the message in real time via the channel. This works much like cable TV sending content to its many subscribers around the world.

Connectivity solutions such as application programming interfaces, databases, and storage devices can act as routers that send messages to the consumers. They connect with each other and distribute information among themselves to deliver unified data to end users.
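To make the pub/sub pattern concrete, here is a minimal in-process sketch (again, not KubeMQ's actual API) in which one publisher fans a message out to every subscriber:

```python
import queue


class Channel:
    """Minimal pub/sub channel: each subscriber gets its own queue,
    and a published message is fanned out to all of them."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, message):
        for q in self.subscribers:
            q.put(message)


channel = Channel()
sub_a = channel.subscribe()
sub_b = channel.subscribe()
channel.publish("new-episode")

print(sub_a.get())  # new-episode
print(sub_b.get())  # new-episode
```

A broker-backed channel works the same way, except the queues are durable and the subscribers are separate services.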

Ease of use

Microservices architecture saves time and money and is super easy to use. It seamlessly unifies operational workflows and development, thereby saving great cost, and it reduces the need for dedicated IT experts. The messaging layer ensures efficient memory usage, low latency, fundamental patterns, and support for high-volume messaging, without compromising real-time pub/sub, request/reply, or queues.

Gradual Migration to Kubernetes

Migration to Kubernetes must be done gradually to keep data flowing and the business operational. To achieve this, the Kubernetes message queue must connect seamlessly with both the old and new systems. Connectivity ensures that migration is carried out as a step-by-step procedure in which new services are created without any downtime.

Introducing the Kubernetes Bug Bounty Program


We are happy to announce that the Cloud Native Computing Foundation (CNCF) is funding a new Kubernetes bug bounty program to reward researchers who find security vulnerabilities in Kubernetes’ codebase, as well as build and release processes. The program is being launched by the Kubernetes Product Security Committee, a group of security-focused maintainers who receive and respond to reports of security issues in Kubernetes, in concert with bug bounty program vendor, HackerOne. After having won the community-led RFP, HackerOne had their team pass the Certified Kubernetes Administrator (CKA) exam as part of the bootstrapping process. 

As a CNCF graduated project, it is imperative that Kubernetes adhere to the highest levels of security best practices. Back in August 2019, CNCF formed the Security Audit Working Group and conducted Kubernetes’ first security audit, which helped the community identify issues from general weaknesses to critical vulnerabilities, enabling them to address these vulnerabilities and add documentation to help users. 

To continue to drive awareness of Kubernetes’ security model and reward ongoing efforts in the community to secure Kubernetes, discussions began at the beginning of 2018 to launch an official bug bounty program. After several months of private testing, the Kubernetes Bug Bounty is now open to all security researchers. 

For information on the scope of the program and how to get involved, check out the Kubernetes.io blog.

Zendesk: ‘Kubernetes Seemed Like It Was Designed to Solve the Problems We Were Having’


Launched in 2007 with a mission of making customer service easy for organizations, Zendesk offers products involving real-time messaging, voice chat, and data analytics. All of this was built as a monolithic Rails app, using MySQL database and running in a co-located data center on hardware the company owned. 

That system worked fine for the first seven years or so. But as Zendesk grew—the company went public in 2014 and now has 145,000 paid customer accounts and 3,000 employees—it became clear that changes were needed at the infrastructure level, and the effort to make those changes would lead the company to microservices, containers, and Kubernetes.

“We realized that just throwing more and more stuff into a Rails monolith slowed down teams,” says Senior Principal Engineer Jon Moter. “Deploys were really painful and really risky. Every team at Zendesk, some of whom were scattered in engineering offices all over the world, were all tied to this one application.”

Moter’s team built some tooling called ZDI (Zendesk Docker Integration), which got developers set up with containers almost instantly. There were just a couple of options for orchestration at the time, in the summer of 2015, and after some research, Moter says, “Kubernetes seemed like it was designed to solve pretty much exactly the problems we were having. Google knows a thing or two about containers, so it felt like, ‘all right, if we’re going to make a bet, let’s go with that one.’”

Today, about 70% of Zendesk applications are running on Kubernetes, and all new applications are built to run on it. There have been time savings as a result: Previously, changing the resource profile of an application could take a couple of days; now, it takes just a minute or two. Outage resolution happens with self-healing in minutes instead of the hours previously spent patching things up. 

Having a common orchestration platform makes it way easier to have common tooling, common monitoring, and more predictable dashboards, Moter adds. “That has helped make it easier to onboard people and follow standard sorts of templates, and to set up monitors and alerting in a fairly consistent manner. And it helps a lot with on-call. We have offices scattered around the world, so for people on-call, it’s daytime at one of our offices all day.”

The benefits have been clear, and Zendesk is happy to share its learnings with the rest of the community. “Having so many companies that either compete with each other, or are in different industries, all collaborating, sharing best practices, working on stuff together,” says Moter, “I think it’s really inspiring in a lot of ways.”

For more about Zendesk’s cloud native journey, read the full case study here.

Keeping Cloud Native Well


While the CNCF makes every effort to ensure the comfort, health, and happiness of KubeCon + CloudNativeCon attendees, there were some attendees at KubeCon + CloudNativeCon Seattle 2018 who felt overwhelmed or unhappy.

Some of those attendees were brave enough to share their experiences, and this led to the creation of the Well-being Working Group (WG). As the largest-ever open source conference, KubeCon + CloudNativeCon is breaking new ground, which provides an excellent opportunity for us to learn how to take care of attendees at scale.

Partnering with OSMI for KubeCon + CloudNativeCon EU 2019

While members of the Well-being WG were compiling a list of suggestions for future CNCF events, Dan Kohn, CNCF’s Executive Director, came across an organization called Open Sourcing Mental Illness (OSMI).

OSMI’s motto is “changing how we talk about mental health in the tech community.” This volunteer-led organization engages in a range of activities, including producing various open source resources to help both employers and employees navigate mental health issues in tech.

The Well-being WG and OSMI then teamed up to create a ‘conference handbook’, which first appeared at KubeCon + CloudNativeCon EU 2019 in Barcelona. The handbook listed helpful tips for how conference attendees could help both themselves and others to remain well during KubeCon + CloudNativeCon. Several hundred copies were distributed during the event at the OSMI booth, which was staffed entirely by volunteers from the Well-Being WG.

In addition to the handbook, Dr. Jennifer Akulian, who works closely with OSMI, gave a talk on mental health in tech, and there was a very well-attended, community-organized panel session.

KubeCon + CloudNativeCon NA 2019 in San Diego

After positive feedback from the conference in Barcelona the WG decided to repeat the program in San Diego, alongside extra activities from CNCF including more accessible quiet rooms, free massages, and the puppy palooza. All of these items were listed on the conference’s ‘Keep Cloud Native Well’ page which will be a standard fixture for future events.

KubeCon + CloudNativeCon Amsterdam 2020 and beyond

In San Diego, a huge number of people expressed interest in joining the Well-Being WG to help shape and deliver future working group activities. The general feeling is that we’ll be going ‘bigger and better’ in 2020. If you would like to be involved, you can either join the WG mailing list directly or contact the CNCF at info@cncf.io.

KubeCon + CloudNativeCon North America 2019 Conference Transparency Report: The Biggest KubeCon + CloudNativeCon to Date


KubeCon + CloudNativeCon North America 2019 was our largest event to date with record-breaking registrations, attendance, sponsorships, and co-located events. With nearly 12,000 attendees, this year’s event in San Diego saw a 49% increase in attendance over last year’s event in Seattle. Sixty-five percent of attendees were first-timers.

We’ve published the KubeCon + CloudNativeCon North America 2019 conference transparency report.

Key Takeaways:

  • The conference had 11,891 registrations, a 49% increase over last year.
  • 65% were first-time KubeCon + CloudNativeCon attendees.
  • Attendees came from 67 countries across 6 continents.
  • More than 55% of attendees participated in one or more of the 34 co-located events.
  • Feedback from attendees was overwhelmingly positive, with an overall average rating of 4.2 out of 5.
  • We received 1,801 submissions – a new record for our North American event – and 2,128 potential speakers submitted to the CFP.
  • The three-day conference offered 366 sessions.
  • Of the keynote speakers, 58% identified as men, and 42% as women or non-binary/other genders.
  • CNCF offered travel support to 115 diversity scholarship applicants, leveraging $177,500 in available funds.
  • 2,631 companies participated in KubeCon + CloudNativeCon, 1,809 of them End User companies.
  • Keynote sessions garnered 3,804 live stream views.
  • The event generated more than 15,000 articles, blog posts, and press releases.

Save the Dates for 2020!

After a massive 2019, we’re looking forward to bigger and better KubeCon + CloudNativeCon events in 2020.

We’ll be in Amsterdam from March 30-April 2, Shanghai from July 28-30, and Boston from November 17-20.

We hope to see you at one, or all, of these upcoming events!

Certified Kubernetes Application Developer (CKAD) Certification is Now Valid for 3 Years


Announced in May 2018, the Certified Kubernetes Application Developer (CKAD) program was designed as an extension of CNCF’s Kubernetes training offerings, which already include certification for Kubernetes administrators. By adding this exam to the CNCF certification line-up, application developers and anyone working with Kubernetes can certify their competency in the platform. This certification has been extremely successful, with over 5,300 individuals registering for the exam and almost 2,400 achieving certification.

To match other CNCF and Linux Foundation certifications, the CKAD is extending the expiration date of the earned certification from 24 months to 36 months! That means that if you meet the Program Certification requirements, your certification will remain valid for 36 months rather than the original 24. 

The requirements to maintain your certification have not changed. Certificants must meet the renewal requirements, outlined in the candidate handbook, prior to the expiration date of their current certification in order to keep it active. If the renewal requirements are not completed before the expiration date, the certification will be revoked. 

If you have already been awarded a CKAD Certification, you should have been contacted. If you have any questions, please reach out to trainingpartners@cncf.io.

The Certified Kubernetes Application Developer exam certifies that users can design, build, configure, and expose cloud native applications for Kubernetes. Interested in taking the exam? Have a look at the Candidate Handbook and the Frequently Asked Questions for more information! 
