
CNCF Archives the rkt Project


As part of a new Archiving Process initiated earlier this year, CNCF announced today that the Technical Oversight Committee (TOC) has voted to archive the rkt project.

All open source projects are subject to a lifecycle and can become less active for a number of reasons. In rkt’s case, despite its initial popularity following its creation in December 2014 and its contribution to CNCF in March 2017, end user adoption has severely declined. CNCF is also home to other container runtime projects, containerd and CRI-O, and while the rkt project played an important part in the early days of cloud native adoption, user adoption has in recent times trended away from rkt towards these other projects. Furthermore, project activity and the number of contributors have steadily declined over time, and unpatched CVEs have accumulated.

At CNCF, incubation stage projects are expected to show end user growth, maintain a healthy number of committers, and demonstrate a substantial ongoing flow of contributions, among other things. For projects that no longer meet these requirements, a proposal can be submitted to the TOC to archive a project. Once a project is archived:

  • CNCF will no longer provide full services for the project, except transition services such as documentation updates to help transition users
  • Trademarks of archived projects are still hosted neutrally by the Linux Foundation
  • CNCF marketing and event activities will no longer be provided for the project

Any project that has been archived can be reactivated into CNCF through the normal project proposal process. The archived project will be hosted under the Linux Foundation and maintainers are welcome to continue working on the project if they wish to do so.

The CNCF TOC would like to thank the rkt project maintainers and contributors for the important part they have played in the development and evolution of cloud native technology.

Learn more about the CNCF archiving process

How Linkerd is Apester’s ‘Safety Net’ Against Cascading Failure from Forgotten Timeouts


Next time you get sucked into a quiz or poll on a media site like The Telegraph or Time, you can thank Apester’s drag-and-drop interactive content platform, and its usage of cloud native technologies like Linkerd. With a microservice architecture and several programming languages in use, Apester adopted Linkerd’s service mesh for visibility and a common metric system. Linkerd ended up solving a major pain point for the company, which deals with more than 20 billion requests per month: outages caused by developers forgetting to set timeouts on service-to-service requests. With Linkerd, there have been no outages for six months (and counting), and MTTR has been shortened by a factor of 2. Read more about Apester’s cloud native journey in the full case study.
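Not from the case study itself, but as a sketch of the mechanism: in Linkerd 2.x, a ServiceProfile can declare a per-route timeout that the mesh enforces even when a developer forgets to set one in application code (the service and route names below are hypothetical):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: quiz-api.default.svc.cluster.local  # hypothetical service
  namespace: default
spec:
  routes:
  - name: GET /polls
    condition:
      method: GET
      pathRegex: /polls
    timeout: 300ms   # mesh-enforced upper bound, independent of caller code
```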

2019 CNCF Cloud Native Survey Call to Participate


It’s time for the CNCF Cloud Native Survey!

The goal of this survey, which will be issued in advance of KubeCon + CloudNativeCon North America (November 18-21, 2019), is to understand the state of Kubernetes, container, and serverless adoption and use in the cloud native space.

This is the 7th time we have taken the temperature of the infrastructure software marketplace to better understand the adoption of cloud native technologies. We will collect and share insights on:

  • The production usage of CNCF-hosted projects
  • The changing landscape of application development
  • How companies are managing their software development cycles
  • Cloud native in production and the benefits
  • Challenges in using and deploying containers

With this survey, we’ve added new questions on service mesh, service proxy, and CI/CD to better understand the current state of adoption of specific tools. For those answers, as well as cloud native storage, Kubernetes implementations, and serverless, we matched the options to those listed in the CNCF cloud native landscape. The results will be compiled into a report and both that and the anonymized raw data will be shared to highlight the new and changing trends we’re seeing across industries.

You can see the results of the earlier surveys.

In appreciation for your time, respondents will be entered into a drawing to receive one of three $200 Amazon Gift Card prizes. But, more importantly, your views and insights are needed to provide these valuable results to the community. Please fill out the survey by August 30 for a chance to share your experience with cloud native technology and maybe win a prize!

Open Sourcing the Kubernetes Security Audit


Last year, the Cloud Native Computing Foundation (CNCF) began the process of performing and open sourcing third-party security audits for its projects in order to improve the overall security of our ecosystem. The idea was to start with a handful of projects and gather feedback from the CNCF community as to whether or not this pilot program was useful. The first projects to undergo this process were CoreDNS, Envoy and Prometheus. These first public audits identified security issues from general weaknesses to critical vulnerabilities. With these results, project maintainers for CoreDNS, Envoy and Prometheus have been able to address the identified vulnerabilities and add documentation to help users.

The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project, its vulnerability management process, and, more importantly, how resilient the open source project’s security practices are. With CNCF graduated projects especially, which are used widely in production by some of the largest companies in the world, it is imperative that they adhere to the highest levels of security best practices.

Since the pilot has proven successful, CNCF is excited to start offering this to other projects that are interested, with preference given to graduated projects.

Findings

With funds provided by the CNCF community to conduct the Kubernetes security audit, the Security Audit Working Group was formed to lead the process of finding a reputable third party vendor. The group created an open request for proposals, taking responsibility for evaluating the submitted proposals and recommending the vendor best suited to complete a security assessment against Kubernetes, bearing in mind the high complexity and wide scope of the project. The working group selected two firms to perform this work: Trail of Bits and Atredis Partners. The team felt that the combination of these two firms, both composed of very senior and well-known staff in the information security industry, would provide the best possible results. 

The Security Audit Working Group managed the audit over a four-month time span. Throughout the course of this work, a component-focused threat model of the Kubernetes system was conducted. Working with members of the Security Audit Working Group, as well as a number of Kubernetes SIGs, this threat model reviewed Kubernetes’ components across six control families:

  • Networking
  • Cryptography
  • Authentication
  • Authorization
  • Secrets Management
  • Multi-tenancy

Since Kubernetes itself is a large system, with functionality spanning from API gateways to container orchestration to networking and beyond, the Third Party Security Audit Working Group, in concert with Trail of Bits and Atredis Partners, selected eight components within the larger Kubernetes ecosystem for evaluation in the threat model:

  • Kube-apiserver
  • Etcd
  • Kube-scheduler
  • Kube-controller-manager
  • Cloud-controller-manager
  • Kubelet
  • Kube-proxy
  • Container Runtime

The assessment yielded a significant amount of knowledge pertaining to the operation and internals of a Kubernetes cluster. Findings and supporting documentation from the assessment have been made available today, and can be found here.

There were a number of Kubernetes-wide findings, including:

  1. Policies may not be applied, leading to a false sense of security.
  2. Insecure TLS is in use by default.
  3. Credentials are exposed in environment variables and command-line arguments. 
  4. Names of secrets are leaked in logs.
  5. No certificate revocation.
  6. seccomp is not enabled by default.
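As one illustration of the seccomp finding (a sketch, not taken from the audit report itself): workloads can opt in to the container runtime’s default seccomp profile. On clusters of that era this was expressed with the `seccomp.security.alpha.kubernetes.io/pod: runtime/default` annotation; current Kubernetes versions express it in the pod’s `securityContext`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo        # hypothetical pod name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # use the runtime's default seccomp filter
  containers:
  - name: app
    image: nginx
```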

Guidance was provided to promote further assessments and discussion of Kubernetes from the perspectives of cluster administrators and developers:

Recommendations for cluster administrators included:

  • Attribute-Based Access Control vs. Role-Based Access Control
  • RBAC best practices
  • Node-host configurations and permissions
  • Default settings and backwards compatibility
  • Networking
  • Environment considerations
  • Logging and alerting
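As a concrete illustration of the RBAC guidance (a minimal sketch, not taken from the audit report; the namespace and role names are hypothetical), a least-privilege Role grants a workload only the verbs it needs:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a     # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]       # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create/update/delete
```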

Recommendations for Kubernetes developers included:

  • Avoid hardcoding paths to dependencies
  • File permissions checking
  • Monitoring processes on Linux
  • Moving processes to a cgroup
  • Future cgroup considerations for Kubernetes
  • Future process handling considerations for Kubernetes

This audit process was partially inspired by the Core Infrastructure Initiative (CII) Best Practices Badge program that all CNCF projects are required to go through. This badge, provided by the Linux Foundation, is a way for open source projects to show that they follow security best practices. Consumers of the badge can quickly assess which open source projects are following best practices and as a result are more likely to produce higher-quality secure software.

Finally, we hope that by open sourcing our security audits and process, we will inspire other projects to pursue them in their respective open source communities.

The CNCF wishes to thank the members of the Security Audit Working Group, as well as the Kubernetes community members who assisted in the threat model and audit work:

  • Aaron Small, Google – Security Audit Working Group member
  • Craig Ingram, Salesforce – Security Audit Working Group member
  • Jay Beale, InGuardians – Security Audit Working Group member
  • Joel Smith, Red Hat – Security Audit Working Group member

Diversity Scholarship Series: Experiencing Kubernetes Day India 2019


Guest post by Atibhi Agrawal, originally published on Medium

I had been hearing the buzzwords Kubernetes and cloud computing for a long time, but I had no idea what they were. One day my senior Rajula Vineet Reddy posted on our college’s Facebook group that Kubernetes Day India was being held at the Infosys Campus, Bengaluru. The Infosys Campus is right opposite our college, IIIT Bangalore, and this seemed like a good opportunity to get to know about Kubernetes. I applied for a diversity ticket and was very happy when I got it!

THE DAY OF THE CONFERENCE, 23 March

I went to the Infosys Campus at 9:00 am. I registered for the conference, got my badge, and picked up my T-shirt. Then I had breakfast and went to attend the keynote by Liz Rice.

She talked about how permissions work in Kubernetes and how we can think of Kubernetes as a distributed operating system. She drew analogies with the Linux operating system, which helped us understand the topic better. Her talk was beginner friendly and truly one of the best keynotes I have ever attended.

The next few talks were all beginner friendly and helped me get to know Kubernetes. Most of the advanced talks were during the latter half.

Two talks that I found really helpful were Noobernetes 101: Top 10 Questions We Get From New K8s Users by Neependra Khare, CloudYuga Technologies & Karthik Gaekwad from Oracle and How to Contribute to Kubernetes by Nikhita Raghunath from Loodse.

In “Noobernetes 101,” Neependra and Karthik covered some FAQs, like: What kind of services should we use for our applications? How can we do capacity planning in K8s? Why is there a high learning curve in K8s? Isn’t K8s too complicated? What is the best way to set up a development environment with K8s? And more.

In her talk “How to Contribute to Kubernetes,” Nikhita discussed getting started with contributing to Kubernetes. She told us about the different parts of Kubernetes and how they work, how the various components are related, the skills we need to get started, and the best ways to get our first pull request accepted. She also talked about her GSoC experience.

Talk by Nikhita. Photo credits: Paavini Nanda

Apart from the talks the sponsor companies had booths in the conference venue where they were sharing information about the services they offer, openings in their companies as well as giving out swag if we answered questions about their APIs. It was a great networking opportunity and I went to almost every booth.

My experience at Kubernetes Day India was memorable. I made lots of new friends, learnt so much about something totally new to me, and in the process got a lot of swag. If you’re reading this, I highly recommend attending any event by CNCF and Kubernetes. It is an amazing community ❤

How JD.com Uses Vitess to Manage Scaling Databases for Hyperscale


China’s largest retailer, JD.com, defines hyperscale: the e-commerce business serves more than 300 million active customers. And in the past few years, as the company’s data ballooned, its MySQL databases became larger, resulting in declining performance and higher costs. JD Retail Chief Architect Haifeng Liu saw in Vitess the ideal solution to easily and quickly scale MySQL, facilitate operation and maintenance, and reduce hardware and labor costs. Today, JD.com runs tens of thousands of MySQL containers, with millions of tables and trillions of records. Read more about how they got there in the full case study.

Announcing Kubernetes Forum Seoul and Sydney: Expanding Cloud Native Engagement Across the Globe


Today we’re excited to announce the first two Kubernetes Forums in Seoul and Sydney, which we are launching this December in Seoul, South Korea from December 9-10, and Sydney, Australia from December 12-13.

Kubernetes Forums in global cities bring together international and local experts with adopters, developers, and practitioners in an accessible and compact format. Much like our three annual KubeCon + CloudNativeCon events, the Forums are designed to promote face-to-face collaboration and deliver rich educational experiences. At the Forums, attendees can engage with the leaders of Kubernetes and other CNCF-hosted projects, and help set direction for the cloud native ecosystem. Kubernetes Forums have both a beginner and an advanced track. About half of the speakers are international experts and half are from the local area.

Kubernetes Forums consist of two events running consecutively in two cities in the same geographical area during a single week. This enables international speakers and sponsor teams to double their cloud native event engagement, and the local community benefits from having access to subject matter experts and representatives from global organizations that they may not otherwise reach.

The Call for Proposals (CFP)-based sessions will occur on the first day of each Forum. On the second day, attendees can select among several co-located events. These may be cloud- or distribution-specific training or any other topics of interest to Kubernetes Forum attendees. On the second night of the first event, the sponsors and international speakers will take a red-eye flight to the second city, where they will have a full day to recover, and then kick off again with day 1 sessions (interspersed with sessions from that area’s local experts) and day 2 co-located events.

The CFP for the Seoul and Sydney Forums is open now. If you are a local expert in Korea or Oceania, or an international expert who has previously presented at KubeCon + CloudNativeCon and wants to present at both Forums, please submit a talk! The CFP deadline is Friday, September 6. If your organization is interested in sponsoring, you can find more information.

We’re expecting 2020 locations will add Mexico City/Sao Paulo, Bengaluru/New Delhi, Tokyo/Singapore, Tel Aviv, and possibly more. Please join us as we spread the word about Kubernetes and cloud native computing around the world.

We can’t wait to see you in a city near you!

Note: we were originally going to use the name Kubernetes Summits instead of Kubernetes Forums. However, that risked confusion with the Kubernetes Contributor Summit, so we’re going forward with the name Kubernetes Forums.

How CreditEase, Pinterest, Slamtec, Ant Financial and ING Experienced Faster Iterations and Production Times


Kubernetes enables CreditEase, Pinterest, Ant Financial, Slamtec, and ING to overcome the multitude of challenges they experienced as they looked to scale. By investing in Kubernetes and cloud native technology, these companies experienced reduced build times and massive efficiency wins.

CreditEase had a long list of challenges in their infrastructure and addressed all of them by choosing Kubernetes for orchestration. CreditEase experienced faster product iterations and significantly improved deployment and delivery times. Read the case study.

With 200 million monthly active users and 100 billion objects saved, Pinterest managed more than 1,000 microservices and multiple layers of infrastructure. After moving to Kubernetes, Pinterest built on-demand scaling and new failover policies, while simplifying deployment and management. The company also reclaimed over 80 percent of capacity during non-peak hours. Read the case study.

After an agile transformation, ING was looking to standardize their deployment process while following the company’s strict security guidelines. Using Kubernetes and other cloud native technologies, ING built an internal public cloud to standardize and speed up their deployment process. They now have the ability to go from idea to production within 48 hours. Read the case study.

Ant Financial operates at massive scale, with 900+ million users worldwide and 256,000 transactions per second during the peak of Double 11 Singles Day 2017. In order to provide reliable and consistent services to its customers, the company invested in Kubernetes and has seen at least a tenfold improvement in operations. Read the case study.

Slamtec had multiple needs for their new cloud platform, most importantly stability and reliability. That’s why they chose to deploy Kubernetes along with Prometheus monitoring, Fluentd logging, the Harbor registry, and the Helm package manager.

With this new platform, Slamtec has experienced more than 18 months of 100% stability, and for users there is now zero service downtime and seamless upgrades. Read the case study.

Interested in more content like this? We curate and deliver relevant articles just like this one, directly to you, once a month in our CNCF newsletter. Get on the list.

KubeCons Coming Your Way

Registration is open for KubeCon + CloudNativeCon North America 2019 which takes place in San Diego from November 18-21. 

KubeCon + CloudNativeCon Europe is in Amsterdam from March 30-April 2, 2020. Registration will go live at the end of the year. We’ll soon be announcing the location for our China event in the summer of 2020.

We hope to see you there!

Deploy your machine learning models with Kubernetes


Guest post originally published on cnvrg.io by Itay Ariel, Senior Software Developer at cnvrg.io 

You’re an AI expert. A deep learning Ninja. A master of machine learning. You’ve just completed another iteration of training your awesome model. This new model is the most accurate you have ever created, and it’s guaranteed to bring a lot of value to your company.

But…

You reach a roadblock holding back your model’s potential. You have full control of the model throughout the process: you can train it, tweak it, and even verify it against the test set. But, time and time again, you reach the point where your model is ready for production and your progress must come to a stop. You need to communicate with DevOps, who likely has a long list of tasks that hold priority over your model. You patiently wait your turn, until you become unbearably restless in your spinning chair. You have every right to be restless. You know that your model has the potential to produce record-breaking results for your company. Why waste any more time?

There is another way…

Publish your models on Kubernetes. Kubernetes is quickly becoming the cloud standard. Once you know how to deploy your model on Kubernetes, you can do it anywhere (Google Cloud, AWS, or elsewhere).

How to deploy models to production using Kubernetes

You’ll never believe how simple deploying models can be. All you need is to wrap your code a little bit. Soon you’ll be able to build and control your machine learning models from research to production. Here’s how:

Layer 1- your predict code

Since you have already trained your model, it means you already have predict code. The predict code takes a single sample, feeds it to the model, and returns a prediction.

Below you’ll see sample code that takes a sentence as input and returns a number that represents the sentence’s sentiment as predicted by the model. In this example, the IMDB dataset was used to train a model to predict the sentiment of a sentence.

import keras
import numpy as np

# encode_sentence() is assumed to be defined alongside this file;
# it turns a raw sentence into a sequence of word indices.
model = keras.models.load_model("./sentiment2.model.h5")

def predict(sentence):
    encoded = encode_sentence(sentence)
    pred = np.array([encoded])
    pred = vectorize_sequences(pred)
    a = model.predict(pred)
    return a[0][0]

def vectorize_sequences(sequences, dimension=10000):
    # Multi-hot encode each sequence into a fixed-size 10,000-dim vector.
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

 

*Tip
To make deploying even easier, make sure to track all of your code dependencies in a requirements file.
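For this example, such a requirements file might look like the following (the package list is inferred from the code above; pinning exact versions is recommended):

```
flask
keras
tensorflow
numpy
```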

Layer 2- flask server

After we have a working example of the predict code, we need to start speaking HTTP instead of Python.

The way to achieve this is to spawn a flask server that will accept the input as arguments to its requests, and return the model’s prediction in its responses.

from flask import Flask, request, jsonify
import predict

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def run():
    data = request.get_json(force=True)
    input_params = data['input']
    result = predict.predict(input_params)
    return jsonify({'prediction': result})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

 

In this small snippet we import Flask and define a route it should listen on. Once a request is sent to the server at the route /predict, the server takes the request arguments and sends them to the predict function we wrote in the first layer. The function’s return value is sent back to the client via the HTTP response.
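A quick way to sanity-check the route before deploying is Flask’s built-in test client, with the model call stubbed out (fake_predict below is a hypothetical stand-in, not the real model):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def fake_predict(sentence):
    # Hypothetical stand-in for the real model's predict():
    # returns 1.0 for sentences containing the word "love", else 0.0.
    return 1.0 if "love" in sentence.lower() else 0.0

@app.route('/predict', methods=['POST'])
def run():
    data = request.get_json(force=True)
    return jsonify({'prediction': fake_predict(data['input'])})

# Exercise the route in-process, without starting a server.
client = app.test_client()
resp = client.post('/predict', json={'input': 'I love this movie'})
print(resp.get_json())
```

This checks the request/response shape end to end before any cluster is involved.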

Layer 3 — Kubernetes Deployment

And now, on to the final layer! Using Kubernetes, we can declare our deployment in a YAML file. This methodology is called infrastructure as code, and it enables us to define the resources and commands we want to run in a single text file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: predict-imdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: imdb-server
  template:
    metadata:
      labels:
        app: imdb-server   # must match the Service selector below
    spec:
      containers:
      - name: app
        image: tensorflow/tensorflow:latest-devel-py3
        command: ["/bin/sh", "-c"]
        args:
         - git clone https://github.com/itayariel/imdb_keras;
           cd imdb_keras;
           pip install -r requirements.txt;
           python server.py;
        ports:
        - containerPort: 8080

 

You can see in the file that we declared a Deployment with a single replica. Its image is based on the TensorFlow Docker image, and it runs a short series of shell commands to trigger the server.

This command clones the code from GitHub, installs the requirements, and spins up the Flask server we wrote.

*Note: feel free to change the clone command to suit your needs.

Additionally, it’s important to add a Service that exposes the deployment outside of the Kubernetes cluster. Be sure to check your cluster’s networking settings with your cloud provider.

apiVersion: v1
kind: Service
metadata:
  name: predict-imdb-service
  labels:
    app: imdb-server
spec:
  ports:
    - port: 8080
  selector:
    app: imdb-server
  type: NodePort

 

🚀 Send it to the cloud

Now that we have all files set, it’s time to send the code to the Cloud.

Assuming you have a running Kubernetes cluster (and its kubeconfig file), run the following commands:

kubectl apply -f deployment.yml

This command will create our deployment on the cluster.

kubectl apply -f service.yml

This command creates a Service that exposes the endpoint to the world. In this example, a NodePort Service was used, meaning the Service is attached to a port on each of the cluster’s nodes.

Use the command `kubectl get services` to find the service IP and port. Now the model can be called using HTTP with the following curl command:

curl http://node-ip:node-port/predict \
-H 'Content-Type: application/json' \
-d '{"input": "I loved this video. Like, love, amazing!!"}'

Wrapping it up – It’s Aliiiive!

Easy, huh? Now you know how to publish models to the internet using Kubernetes, and with just a few lines of code. It actually gets easier.

KubeCon + CloudNativeCon + Open Source Summit China 2019 Conference Transparency Report: A Record-Breaking CNCF and LF Event


KubeCon + CloudNativeCon + Open Source Summit China 2019 was a great success with record-breaking registrations, attendance, sponsorships, and co-located events for CNCF’s second annual conference in China. With 3,500 registrations, attendance for this year’s event was up by 1,000. China is the second largest contributor of code to Kubernetes and more than 10% of CNCF members are from China, including 16% of platinum members and 35% of gold members. China also makes up a crucial part of the CNCF and Kubernetes vendor ecosystem, containing 26% of Certified Kubernetes vendors, 19% of Kubernetes Certified Service Providers, and 32% of Kubernetes Training Partners.

We’ve published the KubeCon + CloudNativeCon + Open Source Summit China 2019 conference transparency report in English and Chinese.

Key Takeaways:

  • Of the 3,500 attendees, 60% were first-time KubeCon attendees. 
  • Attendees came from 43 countries across five continents; the majority of attendees (83%) were from China.
  • The conference also welcomed 42 sponsors, 9 community partners, and 14 co-located events.
  • 50% of attendees came from companies with 3,000 employees or more, indicating significant enterprise interest in the event.
  • 54 media and analysts attended the event, including 13 English speakers, generating more than 393 clips of compelling event news.
  • KubeCon + CloudNativeCon + Open Source Summit China 2019 received 937 CFP submissions; the acceptance rate was 20%.
  • The combined three-day conference included 29 keynotes, 177 breakout sessions, 38 Maintainer Track sessions, and 8 lightning talks.
  • More than 160 people who were unable to attend in person registered to view the keynote live stream.