
Zendesk: ‘Kubernetes Seemed Like It Was Designed to Solve the Problems We Were Having’


Launched in 2007 with a mission of making customer service easy for organizations, Zendesk offers products involving real-time messaging, voice chat, and data analytics. All of this was built as a monolithic Rails app, using MySQL database and running in a co-located data center on hardware the company owned. 

That system worked fine for the first seven years or so. But as Zendesk grew—the company went public in 2014 and now has 145,000 paid customer accounts and 3,000 employees—it became clear that changes were needed at the infrastructure level, and the effort to make those changes would lead the company to microservices, containers, and Kubernetes.

“We realized that just throwing more and more stuff into a Rails monolith slowed down teams,” says Senior Principal Engineer Jon Moter. “Deploys were really painful and really risky. Every team at Zendesk, some of whom were scattered in engineering offices all over the world, were all tied to this one application.”

Moter’s team built some tooling called ZDI (Zendesk Docker Integration), which got developers set up with containers almost instantly. There were just a couple of options for orchestration at the time, in the summer of 2015, and after some research, Moter says, “Kubernetes seemed like it was designed to solve pretty much exactly the problems we were having. Google knows a thing or two about containers, so it felt like, ‘all right, if we’re going to make a bet, let’s go with that one.’”

Today, about 70% of Zendesk applications are running on Kubernetes, and all new applications are built to run on it. There have been time savings as a result: Previously, changing the resource profile of an application could take a couple of days; now, it takes just a minute or two. Outage resolution happens with self-healing in minutes instead of the hours previously spent patching things up. 

Having a common orchestration platform makes it way easier to have common tooling, common monitoring, and more predictable dashboards, Moter adds. “That has helped make it easier to onboard people and follow standard sorts of templates, and to set up monitors and alerting in a fairly consistent manner. And it helps a lot with on-call. We have offices scattered around the world, so for people on-call, it’s daytime at one of our offices all day.”

The benefits have been clear, and Zendesk is happy to share its learnings with the rest of the community. “Having so many companies that either compete with each other, or are in different industries, all collaborating, sharing best practices, working on stuff together,” says Moter, “I think it’s really inspiring in a lot of ways.”

For more about Zendesk’s cloud native journey, read the full case study here.

Keeping Cloud Native Well


While the CNCF makes every effort to ensure the comfort, health, and happiness of KubeCon + CloudNativeCon attendees, there were some attendees at KubeCon + CloudNativeCon Seattle 2018 who felt overwhelmed or unhappy.

Some of those attendees were brave enough to share their experiences, and this led to the creation of the Well-being Working Group (WG). As the largest open source conference ever, KubeCon + CloudNativeCon is breaking new ground, which provides an excellent opportunity for us to learn how to take care of attendees at scale.

Partnering with OSMI for KubeCon + CloudNativeCon EU 2019

While members of the Well-being WG were compiling a list of suggestions for future CNCF events, Dan Kohn, CNCF’s Executive Director, came across an organization called Open Sourcing Mental Illness (OSMI).

OSMI’s motto is “changing how we talk about mental health in the tech community.” This volunteer-led organization engages in a range of activities, including producing various open source resources to help both employers and employees navigate mental health issues in tech.

The Well-being WG and OSMI then teamed up to create a ‘conference handbook’, which first appeared at KubeCon + CloudNativeCon EU 2019 in Barcelona. The handbook listed helpful tips for how conference attendees could help both themselves and others to remain well during KubeCon + CloudNativeCon. Several hundred copies were distributed during the event at the OSMI booth, which was staffed entirely by volunteers from the Well-being WG.

In addition to the handbook, Dr. Jennifer Akulian, who works closely with OSMI, gave a talk on mental health in tech, and there was a very well-attended, community-organized panel session.

KubeCon + CloudNativeCon NA 2019 in San Diego

After positive feedback from the conference in Barcelona, the WG decided to repeat the program in San Diego, alongside extra activities from CNCF, including more accessible quiet rooms, free massages, and the puppy palooza. All of these items were listed on the conference’s ‘Keep Cloud Native Well’ page, which will be a standard fixture for future events.

KubeCon + CloudNativeCon Amsterdam 2020 and beyond

At San Diego, a huge number of people expressed interest in joining the Well-being WG to both shape and deliver future working group activities. The general feeling is that we’ll be going ‘bigger and better’ in 2020. If you would like to be involved, you can either join the WG mailing list directly or contact the CNCF at info@cncf.io.

KubeCon + CloudNativeCon North America 2019 Conference Transparency Report: The Biggest KubeCon + CloudNativeCon to Date


KubeCon + CloudNativeCon North America 2019 was our largest event to date with record-breaking registrations, attendance, sponsorships, and co-located events. With nearly 12,000 attendees, this year’s event in San Diego saw a 49% increase in attendance over last year’s event in Seattle. Sixty-five percent of attendees were first-timers.

We’ve published the KubeCon + CloudNativeCon North America 2019 conference transparency report.

Key Takeaways:

  • The conference had 11,891 registrations, a 49% increase over last year.
  • 65% were first-time KubeCon + CloudNativeCon attendees.
  • Attendees came from 67 countries across 6 continents.
  • More than 55% of attendees participated in one or more of the 34 co-located events.
  • Feedback from attendees was overwhelmingly positive, with an overall average rating of 4.2 out of 5.
  • We received 1,801 submissions – a new record for our North American event – and 2,128 potential speakers submitted to the CFP.
  • The three-day conference offered 366 sessions.
  • Of the keynote speakers, 58% identified as men, and 42% as women or non-binary/other genders.
  • CNCF offered travel support to 115 diversity scholarship applicants, leveraging $177,500 in available funds.
  • 2,631 companies participated in KubeCon + CloudNativeCon, 1,809 of them end user companies.
  • Keynote sessions garnered 3,804 live stream views.
  • The event generated more than 15,000 articles, blog posts, and press releases.

Save the Dates for 2020!

After a massive 2019, we’re looking forward to bigger and better KubeCon + CloudNativeCon events in 2020.

We’ll be in Amsterdam from March 30-April 2, Shanghai from July 28-30, and Boston from November 17-20.

We hope to see you at one or all of these upcoming events!

Certified Kubernetes Application Developer (CKAD) Certification is Now Valid for 3 Years


Announced in May 2018, the Certified Kubernetes Application Developer (CKAD) program was designed as an extension of CNCF’s Kubernetes training offerings, which already include certification for Kubernetes administrators. By adding this exam to the CNCF certification line-up, application developers and anyone working with Kubernetes can certify their competency in the platform. This certification has been extremely successful, with over 5,300 individuals registering for the exam and almost 2,400 achieving certification.

To match other CNCF and Linux Foundation certifications, the CKAD certification’s validity period is being extended from 24 months to 36 months! That means that if you have met the Program Certification requirements, your certification will remain valid for 36 months rather than the original 24 months.

The requirements to maintain your certification have not changed. Certificants must meet the renewal requirements, outlined in the candidate handbook, prior to the expiration date of their current certification in order to maintain active certification. If the renewal requirements are not completed before the expiration date, the certification will be revoked.

If you have already been awarded a CKAD Certification, you should have been contacted. If you have any questions, please reach out to trainingpartners@cncf.io.

The Certified Kubernetes Application Developer exam certifies that users can design, build, configure, and expose cloud native applications for Kubernetes. Interested in taking the exam? Have a look at the Candidate Handbook and the Frequently Asked Questions for more information! 

TOC Votes to Move Falco into CNCF Incubator


Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept Falco as an incubation-level hosted project.

Falco, which entered the CNCF Sandbox in October 2018, is an open source Kubernetes runtime security project. It provides intrusion and abnormality detection for cloud native platforms such as Kubernetes, Mesosphere, and Cloud Foundry. 

Given the opaque nature of containers, organizations require deeper insight into container activities. The Falco project was created by Sysdig to better understand container behavior, and to share these insights with organizations, allowing them to protect their container platforms from possible malicious activity. 

“Runtime security is a critical piece in a cloud-native security story and essential for anyone taking cloud-native security seriously,” said Kris Nova, Chief Open Source Advocate at Sysdig. “Access control and policy enforcement are important prevention techniques, but runtime security is needed to detect threats that evade preventions.” 

By leveraging open source Linux kernel instrumentation, Falco gains deep insight into system behavior. The rules engine can then detect abnormal activity in applications, containers, the underlying host, and the container platform. In the event of unexpected behavior at runtime, Falco detects and alerts, reducing the risk of a security incident. It can send these alerts via Slack, Fluentd, NATS, and more. 
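
For a sense of how those rules read, here is a simplified sketch of a rule in Falco’s YAML rules format; the rule itself is an illustrative example rather than one of the rules that ships with the project:

# illustrative rule: flag an interactive shell starting inside any container
- rule: Shell spawned in a container (example)
  desc: Detect a shell process starting inside a container
  condition: evt.type = execve and container.id != host and proc.name in (bash, sh, zsh)
  output: "Shell started in a container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING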

Main Falco Features:

  • Strengthen security – Create security rules driven by a context-rich and flexible engine to define unexpected application behavior.
  • Reduce risk – Immediately respond to policy violation alerts by plugging Falco into your current security response workflows and processes.
  • Leverage up-to-date rules – Alert using community-sourced detections of malicious activity and CVE exploits.

Notable Milestones:

  • 257% increase in downloads
  • 8.5 million downloads 
  • 100 percent increase in commits year-over-year
  • 64 committers
  • More than 2000 GitHub stars
  • 55 contributors, including engineers from Frame.io, Shopify, Snap, and Booz Allen Hamilton 

Since joining the CNCF Sandbox, the Falco community has focused on making the project easier to adopt. Project maintainers have implemented a governance model, which sets guidelines and standards for both contributors and maintainers to ensure the project’s compliance and health. Falco was also made available on the Google marketplace, and the community created an operator that is available on OperatorHub.io.

“Runtime container security tools like Falco provide the visibility necessary for development teams to feel safe plugging them into their stack,” said Chris Aniszczyk, CTO/COO of the Cloud Native Computing Foundation. “During its time in the Sandbox, Falco has seen impressive growth and end user adoption, and we look forward to seeing the advancements the community continues to make.” 

While in the Incubator, Falco will focus on moving to an API-first architecture, which enables the community to begin developing integrations with other tools, including Prometheus, Envoy, and Kubernetes.

As a CNCF hosted project, joining incubating technologies like OpenTracing, gRPC, CNI, Notary, NATS, Linkerd, Helm, Rook, Harbor, etcd, OPA, and CRI-O, Falco is part of a neutral foundation aligned with its technical interests, as well as the larger Linux Foundation, which provides governance, marketing support, and community outreach.

Every CNCF project has an associated maturity level: sandbox, incubating, or graduated. For more information on what qualifies a technology for each level, please visit the CNCF Graduation Criteria v1.3.

To get started with Falco, visit the Falco GitHub page. To get involved, attend the weekly office hours calls to discuss feature work, open issues, and repository planning.

To learn more about Falco, visit www.falco.org.

A Look Back at KubeCon + CloudNativeCon San Diego 2019


Following our BIGGEST KubeCon + CloudNativeCon event yet, we wanted to share a snapshot of the notable highlights and news from KubeCon + CloudNativeCon NA 2019 in San Diego! This year we welcomed 12,000 attendees from around the world who attended compelling talks from CNCF project maintainers, end users, and community members.

Our flagship North American event grew by more than 33% with 4,000 more attendees than last year’s event in Seattle. At the conference, CNCF announced a number of news items including:

  • Its ever-growing ecosystem has hit over 500 member companies.
  • A brand new CNCF job board.
  • A slew of new platinum and gold members. 
  • $200,000 in credits from Amazon Web Services. 


This was the second and final event for Bryan Liles and Vicky Cheung as KubeCon + CloudNativeCon co-chairs! They took the stage to announce project updates and introduced a number of project maintainers from Helm, Open Policy Agent, Vitess, and more.

During the opening keynotes, we heard from Director of Ecosystem Cheryl Hung, who discussed CNCF updates, including reaching over 500 members, the new job board, and Kubernetes Community Days. She spoke alongside a number of our new platinum members. Erin Boyd from Red Hat, co-chair of SIG-Storage, talked about what’s next for Rook and cloud-native data storage. Microsoft Azure’s Lachlan Evenson discussed confidential computing for Kubernetes and became the first to show YAML onstage at KubeCon.



Continuing to Embrace Diversity in the Ecosystem

CNCF offered travel support to 115 diversity scholarship applicants, leveraging $177,500 in available funds. This generous scholarship funding was provided by Accenture, AspenMesh, CarGurus, Inc., Cloud Native Computing Foundation, Decipher Technology, Google Cloud, MUFG (Union Bank), Palo Alto Networks, Splunk, and VMware.


We also had a great time at the EmpowerUs lunch, the Speed Networking + Mentoring Sessions, and the Diversity Lunch and Hack!


Keep Cloud Native Well! 

We got to hang out with puppies and dogs during the Puppy Pawlooza! 


We also had a quiet room and chair massages, as well as the Open Sourcing Mental Health booth, staffed by volunteers, all aimed at making sure that everyone felt safe and comfortable.

Community Awards!

We announced the 2019 Community Awards Winners! A lot of well-deserving community members were shown appreciation with the Chop Wood Carry Water, Top Ambassador and Top Committer awards! 



All Attendee Party!

Rain or shine, we had a great all-attendee party in the Gaslamp Quarter. The evening was jam-packed with music, lights, entertainment, and food from dozens of restaurants.


Keynote and Session Highlights

All presentations and videos are now available to watch. Here is how to find all the great content from the show:

  • Keynotes, sessions and lightning talks can be found on the CNCF YouTube.
  • Photos can be found on the CNCF Flickr.
  • Presentations can be found on the Conference Event Schedule: click on a session and scroll to the bottom of the page to download the PDF of the presentation.

Save the Dates!

Register now for KubeCon + CloudNativeCon Europe 2020, scheduled for March 30-April 2, 2020 at the RAI Amsterdam in Amsterdam, The Netherlands.

Register now for KubeCon + CloudNativeCon + Open Source Summit China 2020, scheduled for July 28-30 at the Shanghai Expo Centre, Shanghai, China.

Save the date for KubeCon + CloudNativeCon NA 2020, scheduled for November 17-20, 2020, in Boston, Massachusetts. 

Kubernetes Knights at KubeCon + CloudNativeCon NA ’19


Guest blog by Pankaj Gupta from Citrix 

KubeCon + CloudNativeCon San Diego 2019 was the place to take stock and ponder the progress being made with Kubernetes. It was also the place to learn the art of possibilities with Kubernetes – the new paradigm of building applications for faster release cycles, modularity, and portability. Unsurprisingly, it was being driven by the many Kubernetes Knights – who, clad in the armor plating of their service meshes, are charging forward with great speed and agility, lancing their way through everything we thought possible, scaling the towers and breaking down the walls of the castle as they press forward on their microservices crusade.   

How far Knights have taken Kubernetes can be summed up by the following key data points from KubeCon + CloudNativeCon NA ’19.

160,000 pods in production @eBay

eBay now has a staggering 160,000 pods in production, spread across more than 60 clusters. This is an enormous number of microservice instances. And they are not alone: several enterprises report tens of thousands of nodes in their environments – Nordstrom (60k), Yahoo (36k), and Lyft (25k). This sort of hyperscale deployment was once the sole purview of the largest public cloud providers, and now enterprises are adopting it to deliver consumer services.

What a long way we’ve come in 5 short years of Kubernetes. It is truly inspirational to see that the promise of microservices as a highly scalable architecture is here today and growing strong. Imagine what scale will be possible next.

1 million new containers a day @Uber

One million new containers are launched in batches every day at Uber – sometimes at a rate of 1,000 per second. This is a mind-boggling number of new containers. To manage and run this many containers is amazing, but to start them en masse is breathtaking, and it perfectly illustrates the extremely ephemeral nature of microservices at scale.

Launching this many containers on demand shows that Kubernetes has matured and delivers the agility and velocity that customers require for large production environments. Moreover, while the sheer scale of launching 1 million pods is amazing, doing it repetitively is priceless, and doing it every day without breaking anything is truly a state of nirvana.

450 new pods in the blink of an eye @Uber

30 seconds is the time it takes Uber to launch 40,000 pods across 8,000 nodes with their optimizations. That is ultrafast at scale. In the time it takes the human eye to blink (approximately 300 ms), it is possible to bring up 450 microservice instances.

Once again, this stretches the boundaries of what was thought possible only a short while ago for autoscale and highlights the real-world velocity to respond to change. Of course, when you are dealing with the ephemeral nature of microservices at this ultrascale you have to make sure that the rest of your infrastructure can cope with this sort of change. Can yours?

100,000 sidecars for service mesh @Lyft

Lyft has 100,000 sidecars in its service meshes. This is huge and shows that service mesh has arrived at scale. With great observability, granular traffic management, and enhanced security on offer, it is not surprising that service mesh has emerged as the most sought-after microservices architecture over the last 18 months. This level of deployment shows that service mesh has moved out of the realm of the aspirational into reality. It is not just real, but real at scale, running production-grade services.

11,000 Walmart stores – Kubernetes moving to edge

This is the number of stores that Walmart will take Kubernetes to in the near future. The world’s largest retailer is shifting its microservices and service mesh to the edge, closer to the point of sale, to improve the customer experience. This bold shift to the edge is the biggest indicator yet that Kubernetes is mature enough to leave the data center and become hugely distributed. Next time you visit Walmart, look for containers in aisle K8s.

F-16 fighter jet soaring high with Kubernetes

The US Department of Defense has deployed microservices into the F-16 Fighter, one of the most advanced warplanes in human history. Kubernetes has helped them evolve their legacy, manual software processes and enabled them to deploy software faster and more reliably, bringing new capabilities to the pilots quickly. Perhaps F-16s with Kubernetes should be called K-16s?

These data points highlight the power of Kubernetes and the velocity of innovation it can bring. The outstanding scalability that Kubernetes offers has to be matched by the accompanying infrastructure. Inspired by these scales, Citrix recently tested whether it’s possible to keep pace with the change events associated with the creation of 50,000 pods across 1,000 nodes using a single instance of Citrix ADC VPX proxy to deliver applications. The tests showed that the Citrix ADC comfortably outpaced the speed of deployment and had plenty of spare capacity for growth. In reality, customers will use multiple instances of proxies like Citrix ADC to manage and scale their workloads to the extents discussed above, but Citrix wanted to push the limits to the extreme with a single proxy instance.

The Kubernetes Knights have shown us what’s possible today and, in the spirit of chivalry, are generously giving back to the community as they open source many of their advances. But how far can Kubernetes go? Where will the next Kubernetes deployment be – an interplanetary spacecraft? Nobody really knows, but wait for KubeCon 2020.

Kubernetes 101


Guest post by Jef Spaleta, Sensu, originally published on the Sensu blog

The appeal of running workloads in containers is intuitive and there are numerous reasons to do so. Shipping a process with its dependencies in a package that’s able to just run reduces the friction of organizational communication and operation. Relative to virtual machines, the size, simplicity, and reduced overhead of containers make a compelling case.

In a world where Docker has become a household name in technology circles, using containers to serve production is an obvious need, but real-world systems require many containers working together. Managing the army of containers you need for production workloads can become overwhelming. This is the reason Kubernetes exists.

Kubernetes is a production-grade platform as a service for running workloads in containers. The way it works, from a high level, is relatively straightforward.

You decide what your application needs to do. Then you package your software into container images. Following that, you document how your containers need to work together, including networking, redundancy, fault tolerance, and health probing. Ultimately, Kubernetes makes your desired state a reality.

But you need a few more details to be able to put it to use. In this post, I’ll help lay the groundwork with a few Kubernetes basics.

Why Kubernetes?

Building systems is hard. In constructing something nontrivial, one must consider many competing priorities and moving pieces. Further, automation and repeatability are prerequisites in today’s cultures that demand rapid turnaround, low defect rates, and immediate response to problems.

We need all the help we can get.

Containers make deployment repeatable and create packages that solve the problem of “works on my machine.” However, while it’s helpful having a process in a container with everything it needs to run, teams need more from their platforms. They need to be able to create multiple containers from multiple images to compose an entire running system.

The public cloud offerings for platform as a service provide options for deploying applications without having to worry about the machines on which they run, along with elastic scaling options that ease the burden. Kubernetes yields a similar option for containerized workloads. Teams spell out the scale, redundancy, reliability, durability, networking, and other requirements, as well as dependencies, in manifest files that Kubernetes uses to bring the system to life.

This means technologists have an option that provides the repeatability, replaceability, and reliability of containers, combined with the convenience, automation, and cost-effective solution of platform as a service.

What is Kubernetes?

When people describe Kubernetes, they typically do so by calling it a container orchestration service. This is both a good and incomplete way of describing what it is and what it does.

Kubernetes orchestrates containers, which means it runs multiple containers. Further, it manages where they operate and how to surface what they do — but this is only the beginning. It also actively monitors running containers to make sure they’re still healthy. When it finds containers not to be in good operating condition, it replaces them with new ones. Kubernetes also watches new containers to make sure not only that they’re running, but that they’re ready to start handling work.

Kubernetes is a full-scale, production-grade application execution and monitoring platform. It was born at Google and then later open-sourced. It’s now offered as a service by many cloud providers, in addition to being runnable in your datacenter.

How do you use it?

Setting up a Kubernetes cluster can be complex or very simple, depending on how you decide to do it. At the easy end of the spectrum are the public cloud providers, including Amazon’s AWS, Microsoft’s Azure, and Google’s Google Cloud Platform. They have offerings you can use to get up and running quickly.

With your cluster working, you can think about what to do with it. First, you’ll want to get familiar with the vocabulary introduced by Kubernetes. There are many terms you’ll want to be familiar with. This post contains only a subset of the Kubernetes vocabulary that you need to know; you can find additional terms defined more completely in our “How Kubernetes Works” post.

The most important concepts to know are pods, deployments, and services. I’ll define them below using monitoring examples from Sensu Go (for more on monitoring Kubernetes with Sensu, check out this post from CTO Sean Porter, as well as examples from the sensu-kube-demo repo).

  • Pods: As a starting point, you can think of a pod as a container. In reality, pods are one or more containers working together to service a part of your system. There are reasons a pod may have more than one container, like having a supporting Sensu Go agent process that monitors logs or application health metrics in a separate container; a rough sketch of such a pod follows this list. The pod abstraction takes care of the drudgery of making sure such supporting containers share network and storage resources with the main application container. Despite these cases, thinking of a pod as a housing for a single container isn’t harmful. Many pods have a single container.
  • Deployments: Deployments group pods of the same type together to achieve load balancing. A deployment specifies a desired number of identical pods and monitors them to make certain that the desired number of pods remains running and healthy. Deployments work great for managing stateless workloads like web applications, where identical copies of the same application can run side by side to service requests without coordination.
  • StatefulSets: Similar to deployments, but used for applications where copies of the same application must coordinate with each other to maintain state. StatefulSets manage the lifecycle of unique copies of pods. A Sensu Go backend cluster is a good candidate for a StatefulSet. Each Sensu Go backend holds its own state in a volume mount and must coordinate with its peers via reliable networking links. The StatefulSet manages the lifecycle of each requested copy of the Sensu Go backend pod as unique, making sure the networking and storage resources are reused if unhealthy pods need to be replaced.
  • Services: Services expose your deployments. This exposure can be to other deployments and/or to the outside world.
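
As promised above, here is a rough sketch of a pod manifest with a main application container plus a monitoring sidecar. The image tags and the agent command are placeholders rather than a complete Sensu setup; see the sensu-kube-demo repo mentioned above for a real example.

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-agent
spec:
  containers:
  - name: webapp                # the main application container
    image: nginx:1.17.3
    ports:
    - containerPort: 80
  - name: monitoring-agent      # hypothetical sidecar; shares the pod's network namespace with webapp
    image: sensu/sensu:latest   # placeholder image tag; agent configuration omitted
    command: ["sensu-agent", "start"]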

You interact with a cluster via the Kubernetes REST API. Rather than doing this by constructing HTTP requests yourself, you can use a handy command-line tool called kubectl.
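
To see the REST API that kubectl is wrapping, one simple approach (sketched here, assuming you already have access to a cluster) is to open a local proxy to the API server and query it with curl:

# open an authenticated local proxy to the cluster's API server
kubectl proxy --port=8001

# in a second terminal, list pods in the default namespace via the raw API
curl http://localhost:8001/api/v1/namespaces/default/pods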

Kubectl

Kubectl enables issuing commands against a cluster. These commands take the form below:

kubectl [command] [TYPE] [NAME] [flags]

There is a more complete overview of commands on the Kubernetes site.

Config

The kubectl tool can be easily installed with Homebrew on macOS, Chocolatey on Windows, or the appropriate package manager for your distribution on Linux. Better yet, recent versions of Docker Desktop on Mac or Windows (also easily installed with Homebrew or Chocolatey) include setup of a local single-node Kubernetes cluster and kubectl on your workstation.

With kubectl installed on your workstation, you’re almost ready to start issuing commands to a cluster. First you’ll need to configure and authenticate with any cluster with which you want to communicate.

You use the kubectl config command to set up access to your cluster or clusters and switch between the contexts you’ve configured.
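
For example (a sketch; the context name depends on how your cluster was created – Docker Desktop typically registers one named docker-desktop):

# list the clusters and contexts your kubeconfig already knows about
kubectl config get-contexts

# switch to the context that subsequent commands should target
kubectl config use-context docker-desktop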

Get, describe

With access set up, you can start issuing commands. You’ll probably use the kubectl get and kubectl describe commands the most, as you’ll use them to see the states of your pods, deployments, services, secrets, etc.

The get verb will list resources of the type you specify:

kubectl get pods

The above will list the pods running in your cluster (more precisely, the pods running in a namespace on your cluster, but that adds more complexity than desired here).
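
If you do want to be explicit about the namespace, the same verb takes a flag for it; the kube-system namespace used here exists on essentially every cluster:

# list pods in a specific namespace
kubectl get pods --namespace kube-system

# or list pods across all namespaces
kubectl get pods --all-namespaces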

This example gets the pod named fun-pod (if such a pod exists).

kubectl get pod fun-pod

Finally, the describe verb gives a lot more detail related to the pod named fun-pod.

kubectl describe pod fun-pod

Create, apply

The following command is useful for creating resources in your cluster:

kubectl create

Outside of learning, it’s generally preferable to create manifest files and use kubectl apply to put them into use. This is an especially good way to deploy applications from continuous deployment pipelines.

Teams write manifests in either JSON or YAML. Such a manifest can describe pods, services, deployments, and more. The specification of a deployment includes the number of times a type of pod should replicate to constitute a healthy and running deployment.

This is a sample of what a manifest looks like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment
  labels:
    app: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:1.17.3
        ports:
        - containerPort: 80

Kubernetes creates or updates the resources in a file with the following command:

kubectl apply -f <filename>
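
To round out the example, here is a sketch of a Service manifest that would expose the webserver deployment above; the service name and the NodePort type are illustrative choices, not requirements:

apiVersion: v1
kind: Service
metadata:
  name: webserver-service
spec:
  type: NodePort           # expose the service on a port of each cluster node
  selector:
    app: webserver         # matches the label on the deployment's pods above
  ports:
  - port: 80               # port the service listens on inside the cluster
    targetPort: 80         # containerPort of the webserver pods

It is applied in exactly the same way, for example with kubectl apply -f service.yaml.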

Getting started

You can easily start your active learning journey with Kubernetes with either a cluster in a public cloud or on your workstation. As mentioned earlier, Docker Desktop for Windows or Mac includes a Kubernetes installation. This makes it easy to run a cluster for learning, development, and testing purposes on your machine.

If you can’t or don’t want to use Docker Desktop, you can accomplish the same purpose (setting up a local cluster) by installing Minikube.
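
Either way, once the tooling is installed, bringing up and checking a local cluster is typically just a couple of commands (a sketch; the exact output and driver selection vary by version and platform):

# start a local single-node cluster with Minikube
minikube start

# confirm kubectl can reach it
kubectl get nodes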

With either the Kubernetes installation with Docker Desktop or Minikube, you have a cluster on your machine with which you can interact. You can now use this setup for getting started and for trying deployments before you push them remotely.

Dive in and learn more

This is only the beginning. There’s a lot more to know before you become truly comfortable with Kubernetes. Such is the life of a technology professional!

Courses and resources exist that show more on how to gain confidence in using Kubernetes. The Kubernetes site itself has a wonderful “Learn Kubernetes Basics” section, with plenty of straightforward interactive tutorials. The best way to get up to speed is to get your hands dirty and start getting some experience. Install Docker Desktop or Minikube and start deploying!

FAQ for December 2019 TOC Nominations



How many seats? 

There are 5 seats to nominate: 3 from the Governing Board (GB), 1 from the end user community, and 1 from the maintainers of Graduating + Incubating projects. The GB-elected seats are currently held by Joe Beda, Liz Rice, and Alexis Richardson. The other two seats have been newly added. These are the Selecting Groups, and they will be voting on the nominees after qualification.

How long do we have? 

You have 9 days to nominate candidates: December 12 through December 20.

We’ve extended the nomination deadline to January 4, 2020 at 12pm Pacific.  

How do I get nominated? 

If you are a maintainer of a graduated or incubating project, you can self-nominate, as explained below. Otherwise, you can ask a governing board member or end-user representative to nominate you.

The charter (section 6(e)(i)) says: “Each individual in a Selecting Group may nominate up to two (2) people, at most one (1) of whom may be from the same group of Related Companies. Each nominee must agree to participate prior to being added to the nomination list.”

How do maintainers get nominated?

Maintainers self-nominate by sending an email to cncf-maintainer-nomination@lists.cncf.io. They need to be endorsed (prior to the December 20th deadline) by two other maintainers from other projects and other companies (so, each nomination will, including the two endorsements, cover 3 projects and 3 companies). Details are in the new Maintainer Election Policy.

Why is the maintainers process different? 

The Governing Board approved this new process to encourage broader representation. 

What should go in a nomination?

Section 6(e)(i)(a) of the charter says: “A nomination requires a maximum one (1) page nomination pitch which should include the nominee’s name, contact information and supporting statement identifying the nominee’s experience in CNCF domains.”

Do the GB and End User nominations need endorsements?

Not this election, but if this new endorsements policy works for the maintainer seat, it may be expanded.

What is the evaluation and qualification process?

The Charter (section 6(e)(i)(c)) says: “A minimum of 14 calendar days shall be reserved for an Evaluation Period whereby TOC nominees may be contacted by members of the Governing Board and TOC.” Section 6(e)(ii) says: “After the Evaluation Period, the Governing Board and the TOC members shall vote on each nominee individually to validate that the nominee meets the qualification criteria. A valid vote shall require at least 50% participation. Nominees passing with over 50% shall be Qualified Nominees.”

Is the evaluation and qualification process the same for all nominees? 

Yes. 

Who votes?

The Governing Board (GB) will vote for 3 seats, the end user community for 1 seat, and the maintainers of Graduating + Incubating projects for 1 seat. These are the Selecting Groups and will be voting after the qualification process.

Can you tell me more about the end-user community votes?

Each end user member in CNCF has a primary representative that has voting rights.

Does Testing Kubernetes Conformance Leave You in the Dark? Get Progress Updates as Tests Run


Guest post originally published on Sonobuoy, by John Schnake

In Sonobuoy 0.15.4, we introduced the ability for plugins to report their progress to Sonobuoy by using a customizable webhook. Reporting status is incredibly important for long-running, opaque plugins like the e2e plugin, which runs the Kubernetes conformance tests.

We’re happy to announce that as of Kubernetes 1.17.0, the Kubernetes end-to-end (E2E) test framework will utilize this webhook to provide feedback about how many tests will be run, have been run, and which tests have failed.

This feedback helps you see if tests are failing (and which ones) before waiting for the entire run to finish. It also helps you identify whether tests are hanging or progressing.

How to Use It

There are two requirements to using this feature for the e2e plugin:

  • The conformance image used must correspond to Kubernetes 1.17 or later
  • Sonobuoy 0.16.5 or later must be used; we added this support prior to 0.17.0 to support Kubernetes prereleases.

First, start a run of the e2e plugin by running the following command, which kicks off a long-running set of tests:

$ sonobuoy run

Now, you can poll the status by using this command:

$ sonobuoy status --json | jq

After the tests start running, you will start to see output that includes a section like this:

{
    "plugin": "e2e",
    "node": "global",
    "status": "running",
    "result-status": "",
    "result-counts": null,
    "progress": {
        "name": "e2e",
        "node": "global",
        "timestamp": "2019-11-25T17:21:32.5456932Z",
        "msg": "PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]",
        "total": 278,
        "completed": 2
    }
}

Voila! Anytime during a run, you can now check in and be more informed about how the run is going. As tests fail, the output will also return an array of strings with the test names in the failures field (and the “msg” field just reports the last test finished and its result). For example:

    {
      ...
      "progress": {
        ...
        "msg": "FAILED [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with ipv4 cluster ip [Feature:IPv6DualStackAlphaFeature:Phase2]",
        ...
        "failures": [
          "[sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with ipv4 cluster ip [Feature:IPv6DualStackAlphaFeature:Phase2]"
        ]
      }
    },

Q and A

Q: I’m using a new version of Kubernetes but am using an advanced test configuration I store as a YAML file. Can I still get progress updates?

A: Yes, there are just two environment variables for the e2e plugin that need to be set in order for this to work:

- name: E2E_USE_GO_RUNNER
  value: "true"
- name: E2E_EXTRA_ARGS
  value: --progress-report-url=http://localhost:8099/progress

The E2E_USE_GO_RUNNER value ensures that the conformance test image uses the Golang-based runner, which enables passing extra arguments when the tests are invoked. The E2E_EXTRA_ARGS value sets the flag to inform the framework about where to send the progress updates.

The status updates are just sent to localhost because the test container and the Sonobuoy sidecar are co-located in the same pod.

Q: I want to try out this feature but don’t have a Kubernetes 1.17.0 cluster available; how can I test it?

A: The important thing is that the conformance test image is 1.17 or later so you can manually specify the image version if you just want to tinker. Since the test image version and the API server version do not match, the results might not be reliable (it might, for instance, test features your cluster doesn’t support) and would not be valid for the Certified Kubernetes Conformance Program.

You can specify the version that you want to use when you run Sonobuoy; here’s an example:

sonobuoy run --kube-conformance-image-version=v1.17.0-beta.2

Q: I’d like to implement progress updates in my own custom plugin. How do I do that?

A: To see an example use of this feature, check out the readme file for the progress reporter. The Sonobuoy sidecar will always be listening for progress updates if your plugin wants to send them, so it is just a matter of posting some JSON data to the expected endpoint.
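
As a rough sketch, assuming the --progress-report-url wiring shown earlier, a custom plugin could post an update like the following from inside its pod. The field names mirror the status output shown above, but check the progress reporter readme for the authoritative schema.

# hypothetical progress update from a custom plugin
curl -X POST http://localhost:8099/progress \
  -H "Content-Type: application/json" \
  -d '{"msg": "PASSED check-database-backup", "completed": 3, "total": 10}'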
