
CNCF Hosts Three Student Internships for Kubernetes and CoreDNS Projects Through Linux Foundation’s CommunityBridge


CNCF is pleased to announce its participation in CommunityBridge, sponsoring three students to work on Kubernetes and CoreDNS projects during the program’s pilot stage.

Recently launched by The Linux Foundation, CommunityBridge is a platform that aims to sustain open source projects by providing paid opportunities for new developers to join and learn from open source communities.

“We are thrilled to use CommunityBridge this year side by side with our participation in Google Summer of Code. This is a wonderful platform that empowers us to offer paid internships and mentorships all year round for developers.” – Chris Aniszczyk, CTO, Cloud Native Computing Foundation (CNCF)

Additional details on the CNCF projects, mentors, and students can be found below. Stay tuned for updates! 

Kubernetes 

CSI Driver for Azure Disk

Student: Priyanshu Khandelwal, Indian Institute of Technology, Mandi 

Mentor: Xia Zhang

Kubernetes 

Integrating kube-batch with pytorch-operator/mxnet-operator

Student: Suryavanshi Virendrasingh, Indian Institute of Technology, Mandi

Mentor: Klaus Ma

CoreDNS

Support Google Cloud DNS backend

Student: Palash Nigam, International Institute of Information Technology, Bhubaneswar 

Mentor: Yong Tang, Director of Engineering at MobileIron


Introducing the End User Case Study Redesign ⁠— Now With Filtering!


For the past three years, we’ve been talking to CNCF end users around the globe and sharing their stories about how cloud native technologies are having real-world impact on their businesses. 

These case studies all live on the CNCF site, and as their number continues to grow, we wanted to make this section a useful resource where readers can easily find use cases that are of interest to them. 

We’re happy to announce that the end user case studies can now be filtered by CNCF projects used, geographic location (country and continent), industry, cloud type, challenges, and product type.

Additionally, we’ve redesigned the individual case study pages to be more user-friendly, with all the information on a single page:

  • a summary of the challenge, solution, and impact at the top;
  • an infographic that quickly conveys the CNCF projects used and highlighted stats; 
  • and the full case study at the bottom.

We hope you’ll dive into the section, take a look around, test out the filtering, and let us know what you think.

How Kubernetes Works


Guest Post from Jef Spaleta, originally published on The Sensu Blog

It’s no secret that the popularity of running containerized applications has exploded over the past several years. Being able to iterate and release an application by provisioning its dependencies through code is a big win. According to Gartner, “More than 75% of global organizations will be running containerized applications in production” by 2022.

For organizations that operate at a massive scale, a single Linux container instance isn’t enough to satisfy all of their applications’ needs. It’s not uncommon for sufficiently complex applications, such as ones that communicate through microservices, to require multiple Linux containers that communicate with each other. That architecture introduces a new scaling problem: how do you manage all those individual containers? Developers will still need to take care of scheduling the deployment of containers to specific machines, managing the networking between them, growing the resources allocated under heavy load, and much more.

Enter Kubernetes, a container orchestration system — a way to manage the lifecycle of containerized applications across an entire fleet. It’s a sort of meta-process that grants the ability to automate the deployment and scaling of several containers at once. Several containers running the same application are grouped together. These containers act as replicas, and serve to load balance incoming requests. A container orchestrator, then, supervises these groups, ensuring that they are operating correctly.

A container orchestrator is essentially an administrator in charge of operating a fleet of containerized applications. If a container needs to be restarted or acquire more resources, the orchestrator takes care of it for you.

That’s a fairly broad outline of how most container orchestrators work. Let’s take a deeper look at all the specific components of Kubernetes that make this happen.

Kubernetes terminology and architecture

Kubernetes introduces a lot of vocabulary to describe how your application is organized. We’ll start from the smallest layer and work our way up.

Pods

A Kubernetes pod is a group of containers, and is the smallest unit that Kubernetes administers. Pods have a single IP address that is applied to every container within the pod. Containers in a pod share the same resources such as memory and storage. This allows the individual Linux containers inside a pod to be treated collectively as a single application, as if all the containerized processes were running together on the same host in more traditional workloads. It’s quite common to have a pod with only a single container, when the application or service is a single process that needs to run. But when things get more complicated, and multiple processes need to work together using the same shared data volumes for correct operation, multi-container pods ease deployment configuration compared to setting up shared resources between containers on your own.
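As a sketch of what this looks like in practice, a multi-container pod can be declared in a manifest like the following. The names, images, and volume path are illustrative, not from the original post:

```yaml
# Hypothetical pod with a primary application container and a
# side-car container sharing a scratch volume (all names invented).
apiVersion: v1
kind: Pod
metadata:
  name: gif-maker
spec:
  containers:
    - name: app              # primary microservice taking in requests
      image: example/gif-maker:1.0
      volumeMounts:
        - name: workdir
          mountPath: /data
    - name: cleanup          # side-car pruning data artifacts
      image: example/cleanup:1.0
      volumeMounts:
        - name: workdir
          mountPath: /data
  volumes:
    - name: workdir
      emptyDir: {}           # scratch space shared by both containers
```

Both containers share the pod’s single IP address and the `workdir` volume, so they can cooperate as if they were processes on one host.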

For example, if you were working on an image-processing service that created GIFs, one pod might have several containers working together to resize images. The primary container might be running the non-blocking microservice application taking in requests, and then one or more auxiliary (side-car) containers running batched background processes or cleaning up data artifacts in the storage volume as part of managing overall application performance.

Deployments

Kubernetes deployments define the scale at which you want to run your application by letting you set the details of how you would like pods replicated on your Kubernetes nodes. Deployments describe the number of desired identical pod replicas to run and the preferred update strategy used when updating the deployment. Kubernetes will track pod health, and will remove or add pods as needed to bring your application deployment to the desired state.
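For illustration, a minimal Deployment manifest expressing this might look as follows; the application name and image are hypothetical:

```yaml
# Hypothetical Deployment asking Kubernetes to keep three identical
# pod replicas running, updated with a rolling strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gif-maker
spec:
  replicas: 3                # desired number of identical pods
  strategy:
    type: RollingUpdate      # preferred update strategy
  selector:
    matchLabels:
      app: gif-maker
  template:                  # pod template replicated by Kubernetes
    metadata:
      labels:
        app: gif-maker
    spec:
      containers:
        - name: app
          image: example/gif-maker:1.0
```

Kubernetes continuously reconciles toward the declared state: if a pod dies, a replacement is created so that three replicas are always running.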

Services

The lifetime of an individual pod cannot be relied upon; everything from their IP addresses to their very existence are prone to change. In fact, within the DevOps community, there’s the notion of treating servers as either “pets” or “cattle.” A pet is something you take special care of, whereas cattle are viewed as somewhat more expendable. In the same vein, Kubernetes doesn’t treat its pods as unique, long-running instances; if a pod encounters an issue and dies, it’s Kubernetes’ job to replace it so that the application doesn’t experience any downtime.

A service is an abstraction over the pods, and essentially, the only interface the various application consumers interact with. As pods are replaced, their internal names and IPs might change. A service exposes a single machine name or IP address mapped to pods whose underlying names and numbers are unreliable. A service ensures that, to the outside network, everything appears to be unchanged.
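As a sketch, a Service that gives a set of pods one stable name and virtual IP could be declared like this (the name, label, and port numbers are illustrative):

```yaml
# Hypothetical Service exposing whatever pods currently carry the
# label app=gif-maker under one stable name and virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: gif-maker
spec:
  selector:
    app: gif-maker      # matches pods by label, not by name or IP
  ports:
    - port: 80          # port the service exposes to consumers
      targetPort: 8080  # port the pod's container listens on
```

Consumers connect to `gif-maker:80`; Kubernetes routes the traffic to the matching pods, however often the pods themselves are replaced.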

Nodes

A Kubernetes node manages and runs pods; it’s the machine (whether virtualized or physical) that performs the given work. Just as pods collect individual containers that operate together, a node collects entire pods that function together. When you’re operating at scale, you want to be able to hand work over to a node whose pods are free to take it.

Master server

This is the main entry point for administrators and users to manage the various nodes. Operations are issued to it either through HTTP calls or by connecting to the machine and running command-line scripts.

Cluster

A cluster is all of the above components put together as a single unit.

Kubernetes components

With a general idea of how Kubernetes is assembled, it’s time to take a look at the various software components that make sure everything runs smoothly. Both the master server and individual worker nodes have three main components each.

Master server components

API Server

The API server exposes a REST interface to the Kubernetes cluster. All operations against pods, services, and so forth, are executed programmatically by communicating with the endpoints provided by it.

Scheduler

The scheduler is responsible for assigning work to the various nodes. It keeps watch over the resource capacity and ensures that a worker node’s performance is within an appropriate threshold.

Controller manager

The controller-manager is responsible for making sure that the shared state of the cluster is operating as expected. More accurately, the controller manager oversees various controllers which respond to events (e.g., if a node goes down).

Worker node components

Kubelet

A kubelet tracks the state of a pod to ensure that all the containers are running. It provides a heartbeat message to the master server every few seconds. If the master stops receiving those messages, the node is marked as unhealthy.

Kube proxy

The Kube proxy routes traffic coming into a node from the service. It forwards requests for work to the correct containers.

etcd

etcd is a distributed key-value store that Kubernetes uses to share information about the overall state of a cluster. Additionally, nodes can refer to the global configuration data stored there to set themselves up whenever they are regenerated.

Resources: FAQs + further reading

There’s a lot more to cover when it comes to Kubernetes. For more information on how Kubernetes works, you can read this extensive breakdown from DigitalOcean, as well as posts from the Cloud Native Computing Foundation. And, don’t miss Sensu CEO Caleb Hailey’s webinar on August 13 @ 10am PT on monitoring Kubernetes, hosted by the CNCF.

FAQs

What is Kubernetes used for?

Kubernetes keeps track of your container applications that are deployed into the cloud. It restarts orphaned containers, shuts down containers when they’re not being used, and automatically provisions resources like memory, storage, and CPU when necessary.

How does Kubernetes work with Docker?

Actually, Kubernetes supports several base container engines, and Docker is just one of them. The two technologies work great together, since Docker containers are an efficient way to distribute packaged applications, and Kubernetes is designed to coordinate and schedule those applications.

How do I use Kubernetes?

If you’re interested in trying out Kubernetes, you can install Minikube as a local testing environment. When you’re ready to run Kubernetes for real, you’ll use kubectl to deploy and manage your applications.

CNCF Archives the rkt Project


As part of a new Archiving Process initiated earlier this year, CNCF announced today that the Technical Oversight Committee (TOC) has voted to archive the rkt project.

All open source projects are subject to a lifecycle and can become less active for a number of reasons. In rkt’s case, despite its initial popularity following its creation in December 2014, and its contribution to CNCF in March 2017, end user adoption has severely declined. CNCF is also home to two other container runtime projects, containerd and CRI-O; while rkt played an important part in the early days of cloud native adoption, user adoption has since trended away from rkt toward these other projects. Furthermore, project activity and the number of contributors have steadily declined over time, and CVEs have gone unpatched.

At CNCF, incubation stage projects are expected to show end user growth, maintain a healthy number of committers, and demonstrate a substantial ongoing flow of contributions, among other things. For projects that no longer meet these requirements, a proposal can be submitted to the TOC to archive a project. Once a project is archived:

  • CNCF will no longer provide full services for the project, except transition services such as documentation updates to help transition users
  • Trademarks of archived projects are still hosted neutrally by the Linux Foundation
  • CNCF marketing and event activities will no longer be provided for the project

Any project that has been archived can be reactivated into CNCF through the normal project proposal process. The archived project will be hosted under the Linux Foundation and maintainers are welcome to continue working on the project if they wish to do so.

The CNCF TOC would like to thank the rkt project maintainers and contributors for the important part they have played in the development and evolution of cloud native technology.

Learn more about the CNCF archiving process

How Linkerd is Apester’s ‘Safety Net’ Against Cascading Failure from Forgotten Timeouts


Next time you get sucked into a quiz or poll on a media site like The Telegraph or Time, you can thank Apester⁠’s drag-and-drop interactive content platform⁠—and its usage of cloud native technologies like Linkerd. With a microservice architecture and several programming languages in use, Apester adopted Linkerd’s service mesh for visibility and a common metric system. Linkerd ended up solving a major pain point for the company, which deals with more than 20 billion requests per month: outages caused by developers’ forgetting to set timeouts on service-to-service requests. With Linkerd, there have been no outages for six months (and counting), and MTTR has been shortened by a factor of 2. Read more about Apester’s cloud native journey in the full case study.

2019 CNCF Cloud Native Survey Call to Participate


It’s time for the CNCF Cloud Native Survey!

The goal of this survey, which will be issued in advance of KubeCon + CloudNativeCon North America (November 18-21, 2019), is to understand the state of Kubernetes, container, and serverless adoption and use in the cloud native space.

This is the 7th time we have taken the temperature of the infrastructure software marketplace to better understand the adoption of cloud native technologies. We will collect and share insights on:

  • The production usage of CNCF-hosted projects
  • The changing landscape of application development
  • How companies are managing their software development cycles
  • Cloud native in production and the benefits
  • Challenges in using and deploying containers

With this survey, we’ve added new questions on service mesh, service proxy, and CI/CD to better understand the current state of adoption of specific tools. For those answers, as well as cloud native storage, Kubernetes implementations, and serverless, we matched the options to those listed in the CNCF cloud native landscape. The results will be compiled into a report and both that and the anonymized raw data will be shared to highlight the new and changing trends we’re seeing across industries.

You can see the results of the earlier surveys.

In appreciation for your time, respondents will be entered into a drawing to receive one of three $200 Amazon Gift Card prizes. But, more importantly, your views and insights are needed to provide these valuable results to the community. Please fill out the survey by August 30 for a chance to share your experience with cloud native technology and maybe win a prize!

Open Sourcing the Kubernetes Security Audit


Last year, the Cloud Native Computing Foundation (CNCF) began the process of performing and open sourcing third-party security audits for its projects in order to improve the overall security of our ecosystem. The idea was to start with a handful of projects and gather feedback from the CNCF community as to whether or not this pilot program was useful. The first projects to undergo this process were CoreDNS, Envoy and Prometheus. These first public audits identified security issues from general weaknesses to critical vulnerabilities. With these results, project maintainers for CoreDNS, Envoy and Prometheus have been able to address the identified vulnerabilities and add documentation to help users.

The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project along with its vulnerability management process and more importantly, how resilient the open source project’s security practices are. With CNCF graduated projects especially, which are used widely in production by some of the largest companies in the world, it is imperative that they adhere to the highest levels of security best practices.

Findings

Since the pilot has proven successful, CNCF is excited to start offering this to other projects that are interested, with preference given to graduated projects.

With funds provided by the CNCF community to conduct the Kubernetes security audit, the Security Audit Working Group was formed to lead the process of finding a reputable third party vendor. The group created an open request for proposals, taking responsibility for evaluating the submitted proposals and recommending the vendor best suited to complete a security assessment against Kubernetes, bearing in mind the high complexity and wide scope of the project. The working group selected two firms to perform this work: Trail of Bits and Atredis Partners. The team felt that the combination of these two firms, both composed of very senior and well-known staff in the information security industry, would provide the best possible results. 

The Security Audit Working Group managed the audit over a four-month time span. Throughout the course of this work, a component-focused threat model of the Kubernetes system was conducted. Working with members of the Security Audit Working Group, as well as a number of Kubernetes SIGs, this threat model reviewed Kubernetes’ components across six control families:

  • Networking
  • Cryptography
  • Authentication
  • Authorization
  • Secrets Management
  • Multi-tenancy

Since Kubernetes itself is a large system, with functionality spanning from API gateways to container orchestration to networking and beyond, the Third Party Security Audit Working Group, in concert with Trail of Bits and Atredis Partners, selected eight components within the larger Kubernetes ecosystem for evaluation in the threat model:

  • Kube-apiserver
  • Etcd
  • Kube-scheduler
  • Kube-controller-manager
  • Cloud-controller-manager
  • Kubelet
  • Kube-proxy
  • Container Runtime

The assessment yielded a significant amount of knowledge pertaining to the operation and internals of a Kubernetes cluster. Findings and supporting documentation from the assessment have been made available today, and can be found here.

There were a number of Kubernetes-wide findings, including:

  1. Policies may not be applied, leading to a false sense of security.
  2. Insecure TLS is in use by default.
  3. Credentials are exposed in environment variables and command-line arguments. 
  4. Names of secrets are leaked in logs.
  5. No certificate revocation.
  6. seccomp is not enabled by default.

Guidance was provided to promote further assessments and discussion of Kubernetes from the perspectives of cluster administrators and developers:

Recommendations for cluster administrators included:

  • Attribute Based Access Controls vs Role Based Access Controls
  • RBAC best practices
  • Node-host configurations and permissions
  • Default settings and backwards compatibility
  • Networking
  • Environment considerations
  • Logging and alerting

Recommendations for Kubernetes developers included:

  • Avoid hardcoding paths to dependencies
  • File permissions checking
  • Monitoring processes on Linux
  • Moving processes to a cgroup
  • Future cgroup considerations for Kubernetes
  • Future process handling considerations for Kubernetes

This audit process was partially inspired by the Core Infrastructure Initiative (CII) Best Practices Badge program that all CNCF projects are required to go through. This badge, provided by the Linux Foundation, is a way for open source projects to show that they follow security best practices. Consumers of the badge can quickly assess which open source projects are following best practices and as a result are more likely to produce higher-quality secure software.

Finally, we hope that by open sourcing our security audits and process, we will inspire other projects to pursue them in their respective open source communities.

The CNCF wishes to thank the members of the Security Audit Working Group, as well as the Kubernetes community members who assisted in the threat model and audit work:

  • Aaron Small, Google – Security Audit Working Group member
  • Craig Ingram, Salesforce – Security Audit Working Group member
  • Jay Beale, InGuardians – Security Audit Working Group member
  • Joel Smith, Red Hat – Security Audit Working Group member


Diversity Scholarship Series: Experiencing Kubernetes Day India 2019


Guest post by Atibhi Agrawal, originally published on Medium

I had been hearing the buzzwords Kubernetes and cloud computing for a long time, but I had no idea what they were. One day my senior Rajula Vineet Reddy posted on our college’s Facebook group that Kubernetes Day India was being held at the Infosys Campus, Bengaluru. The Infosys Campus is right opposite our college, IIIT Bangalore, and this seemed like a good opportunity to get to know about Kubernetes. I applied for a diversity ticket and was very happy when I got it!

THE DAY OF THE CONFERENCE, 23 March

I went to the Infosys Campus at 9:00 am. I registered for the conference, got my badge, and picked up my T-shirt. Then I had breakfast and went to attend the keynote by Liz Rice.

She talked about how permissions work in Kubernetes and how we can think of Kubernetes as a distributed operating system. She drew analogies with the Linux operating system, and this helped us understand the topic better. Her talk was beginner friendly and truly one of the best keynotes I have ever attended.

The next few talks were all beginner friendly and helped me get to know Kubernetes. Most of the advanced talks were in the latter half.

Two talks that I found really helpful were Noobernetes 101: Top 10 Questions We Get From New K8s Users by Neependra Khare, CloudYuga Technologies & Karthik Gaekwad from Oracle and How to Contribute to Kubernetes by Nikhita Raghunath from Loodse.

In “Noobernetes 101,” Neependra and Karthik covered frequently asked questions like:

  • What kind of services should we use for our applications?
  • How can we do capacity planning in K8s?
  • Why is there such a high learning curve in K8s? Isn’t K8s too complicated?
  • What is the best way to set up a development environment with K8s?

In Nikhita’s talk, “How to Contribute to Kubernetes,” she talked about getting started with contributing to Kubernetes. She told us about the different parts of Kubernetes and how they work, how the various components are related, the skills we need to get started, and the best ways to get our first pull request accepted. She also talked about her GSoC experience.

Talk by Nikhita. Photo credit: Paavini Nanda

Apart from the talks the sponsor companies had booths in the conference venue where they were sharing information about the services they offer, openings in their companies as well as giving out swag if we answered questions about their APIs. It was a great networking opportunity and I went to almost every booth.

My experience at Kubernetes Day India was memorable. I made lots of new friends, learned so much about something totally new to me, and picked up a lot of swag in the process. If you’re reading this, I highly recommend attending any event by CNCF and Kubernetes. It is an amazing community ❤

How JD.com Uses Vitess to Manage Scaling Databases for Hyperscale


China’s largest retailer, JD.com, defines hyperscale: the e-commerce business serves more than 300 million active customers. And in the past few years, as the company’s data ballooned, its MySQL databases became larger, resulting in declining performance and higher costs. JD Retail Chief Architect Haifeng Liu saw in Vitess the ideal solution to easily and quickly scale MySQL, facilitate operation and maintenance, and reduce hardware and labor costs. Today, JD.com runs tens of thousands of MySQL containers, with millions of tables and trillions of records. Read more about how they got there in the full case study.

Announcing Kubernetes Forum Seoul and Sydney: Expanding Cloud Native Engagement Across the Globe


Today we’re excited to announce the first two Kubernetes Forums, launching this December: Seoul, South Korea, December 9–10, and Sydney, Australia, December 12–13.

Kubernetes Forums in global cities bring together international and local experts with adopters, developers, and practitioners in an accessible and compact format. Much like our three annual KubeCon + CloudNativeCon events, the Forums are designed to promote face-to-face collaboration and deliver rich educational experiences. At the Forums, attendees can engage with the leaders of Kubernetes and other CNCF-hosted projects, and help set direction for the cloud native ecosystem. Kubernetes Forums have both a beginner and an advanced track. About half of the speakers are international experts and half are from the local area.

Kubernetes Forums consist of two events running consecutively in two cities in the same geographical area during a single week. This enables international speakers and sponsor teams to double their cloud native event engagement, and the local community benefits from having access to subject matter experts and representatives from global organizations that they may not otherwise reach.

The Call for Proposals (CFP)-based sessions will occur on the first day of each Forum. On the second day, attendees can select among several co-located events. These may be cloud- or distribution-specific training or any other topics of interest to Kubernetes Forum attendees. On the second night of the first event, the sponsors and international speakers will take a red-eye flight to the second city, where they will have a full day to recover, and then kick off again with day 1 sessions (interspersed with sessions from that area’s local experts) and day 2 co-located events.

The CFP for the Seoul and Sydney Forums is open now. If you are a local expert in Korea or Oceania, or an international expert who has previously presented at KubeCon + CloudNativeCon and wants to present at both Forums, please submit a talk! The CFP deadline is Friday, September 6. If your organization is interested in sponsoring, you can find more information.

We expect 2020 locations to add Mexico City/Sao Paulo, Bengaluru/New Delhi, Tokyo/Singapore, Tel Aviv, and possibly more. Please join us as we spread the word about Kubernetes and cloud native computing around the world.

We can’t wait to see you in a city near you!

Note: we were originally going to use the name Kubernetes Summits instead of Kubernetes Forums. However, that risked confusion with the Kubernetes Contributor Summit, so we’re going forward with the name Kubernetes Forums.
