
The 10 Most Viewed Videos from Past KubeCon + CloudNativeCons 

By | Blog

Each KubeCon + CloudNativeCon conference is jam-packed with inspiring, insightful and informative keynotes, sessions and lightning talks. They are published on our YouTube channel after each event so that regardless of attendance, everyone can benefit! 

Today, we wanted to share a “Best Of” of the most viewed talks from KubeCon + CloudNativeCon over the last three years. Of course, the best part of KubeCon + CloudNativeCon is the hallway track where you get to engage face-to-face with other members of the cloud native community. You can do that by registering for KubeCon + CloudNativeCon NA in San Diego from November 18-21. 

1 

First up from KubeCon + CloudNativeCon North America 2017, Carson Anderson breaks down Kubernetes with his talk that hit over 62k views!

 Kubernetes Deconstructed: Understanding Kubernetes by Breaking It Down

2

Next, Amy Chen’s talk on learning the basics of Kubernetes from the perspective of creating a Helm Chart from scratch at KubeCon + CloudNativeCon North America 2017 has been watched over 54k times! 

Building Helm Charts From the Ground Up: An Introduction to Kubernetes

3

While at KubeCon + CloudNativeCon Europe 2017, Peter Bourgon gave a talk introducing and deep-diving into Go kit, an independent open-source toolkit for writing microservices in Go. 

Go + Microservices = Go Kit

4

Kelsey Hightower’s keynote – Kubernetes and the Path to Serverless – from KubeCon + CloudNativeCon North America 2018 came in fourth with a whopping 30,075 views.

5

For his KubeCon + CloudNativeCon Europe 2018 keynote, Oliver Beattie dove into the Kubernetes outage that Monzo experienced in 2018, its causes and effects, and the architectural and operational lessons learned.

Anatomy of a Production Kubernetes Outage

6

During KubeCon + CloudNativeCon Europe 2018, James Strachan’s talk on Jenkins X, an open source CI / CD platform for Kubernetes based on Jenkins, received almost 26k views.  

Jenkins X: Easy CI/CD for Kubernetes

7

In this talk from KubeCon + CloudNativeCon North America 2017, Cheryl Hung compares and contrasts the most popular persistent storage solutions, and lays out the eight principles for cloud native storage. 

Persistent Storage with Kubernetes in Production – Which Solution and Why?

8

Another one from Kelsey! His opening keynote / project update at KubeCon + CloudNativeCon North America 2017 has been seen by 19.7k people! 

Keynote: KubeCon Opening Keynote

9

From KubeCon + CloudNativeCon Europe 2018, Fernando Diaz’s session is great for beginners and community experts alike who would like to get more involved with Ingress-Nginx.

Make Ingress-Nginx Work for you, and the Community

10

In this video from KubeCon + CloudNativeCon Europe 2017, Lachlan Evenson and Adam Reese delve into the depths of Helm, focusing on lifecycle management and continuous delivery of Kubernetes-native applications in different environments. 

Delve into Helm: Advanced DevOps

Diversity Scholarship Series: From Networking in Copenhagen to a Lightning Talk in Shanghai


At every KubeCon + CloudNativeCon since Seattle in 2016, CNCF has offered funding and support to attendees from traditionally underrepresented and/or marginalized groups. More than 300 of these diversity scholarships — which include free registration, travel stipends, and networking events — have been accepted over the years.

Through their experiences at the conferences, diversity scholars have found mentors, been inspired to contribute to the open source community, and even gone on to become CNCF ambassadors.

Yang Li was one of the diversity scholars at KubeCon + CloudNativeCon EU 2018 in Copenhagen. Just over a year later, he was giving a lightning talk in Shanghai. 

“I was always a believer of ‘do what you love,’” says Yang. “However, I was not doing things that made me excited before KubeCon Copenhagen. That experience gave me the courage to change myself to work with things I love again.”

Based in Hangzhou, China, Yang began working with Kubernetes in 2017, initially helping with localizing the dashboard to Chinese. When he found out about the diversity scholarships, he decided to apply. “I thought it would be wonderful if I can attend KubeCon once, especially since it would be my first time going to a conference overseas,” he recalls. When he found out that he had been accepted, “I read the email several times before believing it was true. I was extremely happy and thankful for this opportunity.”

In Copenhagen, Yang met many people in the open source community and especially bonded with the other diversity scholars, with whom he’s still in touch. “I learned many things about Kubernetes and other CNCF projects,” he says, “but the most important thing I learned is how awesome this inclusive community is and how important that is for an open source project.”

When he got home, he decided to become more involved with the Kubernetes project. Though he had used open source software and GitHub before, he had never been active in any community. “KubeCon made me realize that I want to work more with open source projects,” he says.

Yang soon became a contributor, joining the release team for 1.12 and 1.13, and getting involved in SIG-testing, SIG-release, and SIG-contribex. “I have found some mentors during my journey as a contributor in my spare time,” he says. “They are very kind and experienced, and they taught me by example how to work better on an open source project.”

Meanwhile, the Kubernetes-related project he was working on at his job was discontinued. So he promptly found another full-time position so he could keep working on Kubernetes. He’s now an SRE at The Plant K.K., which creates web applications. “The company is an end user of Kubernetes, so I’m connecting my open source work with my day job,” he says.

At the end of 2018, Yang took his involvement to the next level, helping run the Contributor Summit in Shanghai and in Seattle. “I helped the community onboard new faces,” he says. “Once I knew how wonderful it was to work within the community, I started to encourage other people whenever I got the chance.” 

The biggest chance to do that so far has been on the stage in Shanghai this past June. When he submitted a talk proposal for KubeCon China, “I thought it should be a community topic since the community wants to grow more contributors in APAC regions,” he says. “I decided to share my own experiences from the past year.”

On the KubeCon stage that day, “I was both nervous and excited,” he says. “I got the message out, I had some good feedback, and I’m glad it encouraged and inspired more people to join the community.”

Yang had written in his proposal that joining the Kubernetes community “has made me not only a better open source contributor but a better software engineer. It is safe to say that it changed my career path.”

Indeed, he has just been relocated to Tokyo by The Plant K.K. “It has definitely surprised me where this has taken me over the past year and a half,” he says. “I’m very grateful for the CNCF scholarship, the Kubernetes community, and my employer. I think I’m living my goal of doing what I love.”

*Anyone interested in applying for a CNCF diversity scholarship to attend KubeCon + CloudNativeCon North America 2019 in San Diego November 18-21 can find out more here. Applications are due September 9.*

CNCF Meetups Are Now Happening in More than 200 Locations


Following our recent 100,000 member milestone, we are excited to highlight that CNCF Meetups are now active in over 200 locations around the world! Thanks to our rapidly growing community and CNCF Ambassadors, Meetup Groups are becoming increasingly prevalent. With the goal of expanding the cloud native ecosystem, each Meetup is unique but all cover cloud native computing and/or CNCF-hosted projects as topics. 

“After creating the CNCF Meetup program over 3 years ago, we’ve been thrilled to see it span nearly 50 countries with the help of our community and official Cloud Native Ambassadors. The cloud native movement is truly a global endeavor, and we look forward to continuing to support new meetups across the world as more people learn about cloud native technology,” said Chris Aniszczyk, CTO, CNCF.

CNCF Meetup Stats

  • Members: 119,632
  • Groups: 202
  • Countries: 49

Here is a quick overview of the top 10 Meetups in each region by number of members. With over 200 groups, chances are there is a Meetup near you. Find your local CNCF meetup.

North America

  • San Francisco, California: 7,882
  • Palo Alto, California: 7,468
  • San Jose, California: 5,222
  • New York, New York: 5,136
  • Mountain View, California: 4,837
  • Cambridge, Massachusetts: 2,699
  • Seattle, Washington: 2,442
  • Dallas, Texas: 1,757
  • Montreal, Quebec: 1,739
  • Chicago, Illinois: 1,534

Europe

  • London, United Kingdom: 4,873
  • Madrid, Spain: 3,421
  • Berlin, Germany: 3,311
  • Munich, Germany: 3,072
  • Paris, France: 2,311
  • Zurich, Switzerland: 2,139
  • Budapest, Hungary: 2,093
  • Amsterdam, Netherlands: 1,819
  • Warsaw, Poland: 1,547
  • Hamburg, Germany: 1,202

Rest of World

  • Bangalore, India: 8,916
  • Sao Paulo, Brazil: 3,623
  • Pune, India: 1,956
  • Singapore, Singapore: 1,948
  • Jakarta, Indonesia: 1,496
  • Tel Aviv-Yafo, Israel: 1,364
  • Sydney, Australia: 1,138
  • Seoul, South Korea: 1,051
  • Ahmedabad, India: 997
  • Melbourne, Australia: 980

CNCF is always working to expand the cloud native community and is happy to welcome new meetup groups into the CNCF. For more information, check out our Meetup page on GitHub.

Comcast, ricardo.ch, PingCAP, Prowise and the City of Montreal Attain New Heights with Kubernetes


Kubernetes enabled Comcast, ricardo.ch, PingCAP, Prowise and the City of Montreal to scale without growing their operations teams, and improved their productivity by over 15%. They also considerably shortened their deployment times while scaling faster. Kubernetes enables users to deploy multiple containers to multiple hosts, making it ideal for larger deployments and load balancing. Its flexibility allows enterprises to deliver applications consistently and easily, no matter how complex their needs are. 

Comcast looked to Kubernetes in 2014 when launching its X1 Cloud DVR service, allowing millions of customers to download or stream content onto mobile and IP-connected devices. In the following years that one project expanded to a company-wide cloud native journey. “Kubernetes has helped to get our development teams more interested and invested in production environments that they work in. It’s bridged the gap between our lab and production environments and enabled us to get stuff deployed faster.” —David Arbuckle, Director, Infrastructure Software Engineering

Comcast has gone from deploying a new application stack once a quarter to deploying 20+ environments in a week! Read the case study.


The City of Montreal switched from its legacy systems to containerization and Kubernetes, drastically decreasing time to market from many months to a few weeks, and deployments from months to hours. “Getting a project running in Kubernetes is entirely dependent on how long you need to program the actual software. It’s no longer dependent on deployment. Deployment is so fast that it’s negligible.” —Marc Khouzam, Solutions Architect, City of Montréal. In the near future 60% of the city’s workloads should run on a Kubernetes platform—basically everything that they can get to work this way. Read the case study

PingCAP’s current productivity improvement from running on Kubernetes is about 15%, but as they gain more knowledge on debugging and diagnosis, they anticipate that productivity improvement should exceed 20%. “Having Kubernetes be part of the CNCF, as opposed to having only the backing of one individual company, was valuable in having confidence in the longevity of the technology,” says Xu. “Plus, with the governance process being so open, it’s not hard to find out what’s the latest developments in the technology and community, or figure out who to reach out to if we have problems or issues.” PingCAP’s deployment time has gone from hours to minutes. Read the case study.

Ricardo.ch used Kubernetes to increase deployments from fewer than 10 per week to 30-60 per day. “One of the core moments was when a front-end developer asked me how to do a port forward from his laptop to a front-end application to debug, and I told him the command. And he was like, ‘Wow, that’s all I need to do?’ He was super excited and happy about it. That showed me that this power in the right hands can just accelerate development.” — Cedric Meury, Head of Platform Engineering, ricardo.ch. Read the case study

Prowise’s rapid growth brought the need for flexible scaling and faster deployment. To respond, they chose Kubernetes and are now experiencing rapid and smooth deployments. “Kubernetes allows us to really consider the best tools for a problem. Want to have a full-fledged analytics application developed by a third party that is just right for your use case? Run it. Dabbling in machine learning and AI algorithms but getting tired of waiting days for training to complete? It takes only seconds to scale it. Got a stubborn developer that wants to use a programming language no one has heard of? Let him, if it runs in a container, of course. And all of that while your operations team/DevOps get to sleep at night.” —Victor van den Bosch, Senior DevOps Engineer, Prowise. Deployment time went from 30 minutes of preparation plus 30 minutes deployment to a couple of seconds. Read the case study.

Interested in more content like this? We curate and deliver relevant articles just like this one, directly to you, once a month in our CNCF newsletter. Get on the list.

Save the Dates for upcoming KubeCon + CloudNativeCon events!

The call for proposals for KubeCon + CloudNativeCon North America 2019, which takes place in San Diego from November 18-21, closes tomorrow, July 12th. Registration is open.

We’ll be back in Europe for KubeCon + CloudNativeCon Europe in Amsterdam from March 30-April 2, 2020. Registration will go live at the end of the year.

We hope to see you there!

What Image Formats Should You Be Using in 2019?


Here are some succinct guidelines on which image formats to use on the web, in email, and in print, based on CNCF’s experiences:

SVG: Use for logos

SVGs are the preferred image format for logos as they’re resolution-independent (that is, they look good no matter how high-resolution your screen is), lightweight (that is, their file size is smaller than other formats), and can be easily converted into PNGs and into print formats (like PDF and EPS). All of the logos in the interactive landscape are SVGs (following our logo guidelines). SVGs are now natively supported in PowerPoint, but importing them into Google Slides is much more tedious than it should be, so you may want to substitute a PNG there; just ensure it’s high resolution.

JPG: Use for photos

Though lossy and not resolution-independent, JPGs are the preferred format for photos but aren’t much good for anything else. (Lossy means that text and illustrations can look blurry, so it’s not suitable for logos. Not resolution-independent means that photos will look blurry if you blow them up beyond their original resolution.) When you create a JPG you can set the degree of compression. When preparing an image for the web, set the compression as high as possible to minimize the file size, but not so high that the photo starts looking “blocky”.
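To make the trade-off concrete, here is a minimal sketch, assuming ImageMagick’s `convert` is installed; the generated gradient stands in for a real photo. It exports the same source at two quality settings so you can compare the resulting file sizes:

```shell
# Generate a stand-in "photo" (any PNG source works here).
convert -size 400x300 gradient:blue-white photo.png
# Aggressive compression for the web: small file, possible artifacts.
convert photo.png -quality 75 photo-web.jpg
# Near-lossless: noticeably larger file, fewer artifacts.
convert photo.png -quality 95 photo-print.jpg
# Compare the resulting file sizes in bytes.
wc -c photo-web.jpg photo-print.jpg
```

In practice you would eyeball the lower-quality result and only raise the quality setting once compression artifacts become visible.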

PNG: Use for logos and diagrams when you can’t use an SVG

Minimize your use of PNGs. PNG is not a resolution-independent format, so PNGs often look blurry on high-resolution screens like those on Macs and iPhones. They are useful, however, in the following situations:

  • Gmail doesn’t support SVGs, so PNGs are the best choice for logos in email
  • On webpages, use PNGs for the Twitter card preview image, since Twitter doesn’t support SVGs
  • Use when embedding large, complex drawings into webpages such as the trail map or landscape
  • PNGs can include transparency so use on webpages when you need transparency and there’s no SVG option

PDF: Use for print and ready-to-print brochures

PDF, like SVG, is a resolution-independent format that is suitable for printing or displaying on high-resolution screens. It’s intended as an output format, however, and is not easily usable as an input to webpages or other media. Also, a PNG or JPG embedded in a PDF will still look blurry at high resolution. Instead, start with a resolution-independent format (such as SVGs or original designs from Adobe Illustrator) to produce PDFs that look good zoomed in or printed commercially.

EPS & AI: Don’t use

These are often used as the original format in which a design is created and/or sent to print. If you get a logo in this format, use cloudconvert.com to convert it to an SVG.

Demystifying Containers – Part II: Container Runtimes


 

This blog post was first published on suse.com by Sascha Grunert.

This series of blog posts and corresponding talks aims to provide you with a pragmatic view on containers from a historic perspective. Together we will discover modern cloud architectures layer by layer, which means we will start at the Linux Kernel level and end up at writing our own secure cloud native applications.

Simple examples paired with the historic background will guide you from the beginning with a minimal Linux environment up to crafting secure containers, which fit perfectly into today’s and tomorrow’s orchestration world. In the end it should be much easier to understand how features within the Linux kernel, container tools, runtimes, software defined networks and orchestration software like Kubernetes are designed and how they work under the hood.


Part II: Container Runtimes

This second blog post (and talk) is primarily scoped to container runtimes, where we will start with their historic origins before digging deeper into two dedicated projects: runc and CRI-O. We will initially build up a solid foundation about how container runtimes work under the hood by starting with the lower-level runtime runc. Afterwards, we will utilize the more advanced runtime CRI-O to run Kubernetes native workloads, but without even running Kubernetes at all.

Introduction

In the previous part of this series we discussed Linux Kernel Namespaces and everything around them to build up a foundation about containers and their basic isolation techniques. Now we want to dive deeper into answering the question: “How do we actually run containers?”. We will do so without being overwhelmed by the details of Kubernetes’ features or security related topics, which will be part of further blog posts and talks.

What is a Container Runtime?

Applications and their required or not required use cases are contentiously discussed topics in the UNIX world. The main UNIX philosophy propagates minimalism and modular software parts which should fit well together in a complete system. Great examples which follow these philosophical aspects are features like the UNIX pipe or text editors like vim. These tools solve one dedicated task as well as they can and are tremendously successful at it. On the other side, there are projects like systemd or CMake, which do not follow the same approach and implement a richer feature set over time. In the end we have multiple views and opinions about answers to questions like “What should an initialization system look like?” or “What should a build system do?”. If these multi-opinionated views mix with historical events, then answering a simple question might need more explanation than it should.

Now, welcome to the world of containers!

Lots of applications can run containers, whereas every application has a slightly different opinion about what a container runtime should do and support. For example, systemd is able to run containers via systemd-nspawn, and NixOS has integrated container management as well. Not to mention all the other existing container runtimes like CRI-O, Kata Containers, Firecracker, gVisor, containerd, LXC, runc, Nabla Containers and many more. A lot of them are now part of the Cloud Native Computing Foundation (CNCF) and its huge landscape, which leads someone to ask: “Why do so many container runtimes exist?”.

As usual for our series of blog posts, we should start from the historical beginning.

A Brief History

After the invention of cgroups back in 2008, a project called Linux Containers (LXC) popped up in the wild and set out to revolutionize the container world. LXC combined cgroup and namespace technologies to provide an isolated environment for running applications. You may know that we sometimes live in a parallel world: Google started its own containerization project in 2007 called Let Me Contain That For You (LMCTFY), which works mainly at the same level as LXC. With LMCTFY, Google tried to provide a stable and API-driven configuration without users having to understand the details of cgroups and its internals.

If we now look back to 2013, we see that there was a tool called Docker, which was built on top of the already existing LXC stack. One invention of Docker was that users were now able to package containers into images and move them between machines. Docker was the first project that tried to make containers a standard software unit, as stated in its “Standard Container Manifesto”.

Some years later, they began to work on libcontainer, a Go-native way to spawn and manage containers. LMCTFY was abandoned during that time, while the core concepts and major benefits of LMCTFY were ported into libcontainer and Docker.

We are now back in 2015, when projects like Kubernetes hit version 1.0. A lot was going on during that time: the CNCF was founded as part of the Linux Foundation with the goal of promoting containers. The Open Container Initiative (OCI) was founded in 2015 as well, as an open governance structure around the container ecosystem.

Its main target is to create open industry standards around container formats and runtimes. We were now in a state where containers were used, in terms of their popularity, side by side with classic Virtual Machines (VMs). There was a need for a specification of how containers should run, which resulted in the OCI Runtime Specification. Runtime developers now had a well-defined API against which to develop their container runtimes. The libcontainer project was donated to the OCI during that time, and a new tool called runc was born as part of it. With runc it was now possible to directly interact with libcontainer, interpret the OCI Runtime Specification and run containers from it.

As of today, runc is one of the most popular projects in the container ecosystem and is used in a lot of other projects like containerd (used by Docker), CRI-O and podman. Other projects adopted the OCI Runtime Specification as well. For example Kata Containers makes it possible to build and run secure containers including lightweight virtual machines that feel and perform like containers, but provide stronger workload isolation using hardware virtualization technology as a second layer of defense.

Let’s dig more into the OCI Runtime Specification to get a better understanding about how a container runtime works under the hood.

Running Containers

runc

The OCI Runtime Specification provides information about the configuration, execution environment and overall life cycle of a container. A configuration is mainly a JSON file that contains all necessary information to enable the creation of a container on different target platforms like Linux, Windows or Virtual Machines (VMs).

An example specification can be easily generated with runc:

> runc spec
> cat config.json
{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": ["sh"],
    "env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "TERM=xterm"
    ],
    "cwd": "/",
    "capabilities": {
      "bounding": ["CAP_AUDIT_WRITE", "CAP_KILL", "CAP_NET_BIND_SERVICE"],
      [...]
    },
    "rlimits": [ { "type": "RLIMIT_NOFILE", "hard": 1024, "soft": 1024 } ],
    "noNewPrivileges": true
  },
  "root": { "path": "rootfs", "readonly": true },
  "hostname": "runc",
  "mounts": [
    {
      "destination": "/proc",
      "type": "proc",
      "source": "proc"
    },
    [...]
  ],
  "linux": {
    "resources": { "devices": [ { "allow": false, "access": "rwm" } ] },
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "ipc" },
      { "type": "uts" },
      { "type": "mount" }
    ],
    "maskedPaths": [
      "/proc/kcore",
      [...]
    ],
    "readonlyPaths": [
      "/proc/asound",
      [...]
    ]
  }
}

This file mainly contains all necessary information for runc to get started with running containers. For example, we have attributes about the running process, the defined environment variables, the user and group IDs, needed mount points and the Linux namespaces to be set up. One thing is still missing to get started running containers: we need an appropriate root file system (rootfs). We already discovered in the previous blog post how to obtain one from an existing container image:

> skopeo copy docker://opensuse/tumbleweed:latest oci:tumbleweed:latest
[output removed]
> sudo umoci unpack --image tumbleweed:latest bundle
[output removed]

Interestingly, the unpacked container image already includes the Runtime Specification we need to run the bundle:

> sudo chown -R $(id -u) bundle
> cat bundle/config.json
{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": ["/bin/bash"],
    "env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "TERM=xterm",
      "HOME=/root"
    ],
    "cwd": "/",
    "capabilities": { [...] },
    "rlimits": [...]
  },
  "root": { "path": "rootfs" },
  "hostname": "mrsdalloway",
  "mounts": [...],
  "annotations": {
    "org.opencontainers.image.title": "openSUSE Tumbleweed Base Container",
    "org.opencontainers.image.url": "https://www.opensuse.org/",
    "org.opencontainers.image.vendor": "openSUSE Project",
    "org.opencontainers.image.version": "20190517.6.190",
    [...]
  },
  "linux": {
    "resources": { "devices": [ { "allow": false, "access": "rwm" } ] },
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "ipc" },
      { "type": "uts" },
      { "type": "mount" }
    ]
  }
}

There are now some annotations included beside the usual fields we already know from running runc spec. These can be used to add arbitrary metadata to the container, which can be utilized by higher level runtimes to add additional information to the specification.
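For example, the annotations can be read back with a small jq one-liner. The snippet below inlines a trimmed sample of the configuration shown above so it is self-contained; against a real bundle you would point jq at bundle/config.json instead (jq is assumed to be installed):

```shell
# A trimmed sample of the bundle configuration from above.
cat > sample-config.json <<'EOF'
{
  "ociVersion": "1.0.0",
  "annotations": {
    "org.opencontainers.image.title": "openSUSE Tumbleweed Base Container",
    "org.opencontainers.image.vendor": "openSUSE Project",
    "org.opencontainers.image.version": "20190517.6.190"
  }
}
EOF
# Read a single annotation back out of the configuration.
jq -r '.annotations["org.opencontainers.image.title"]' sample-config.json
# prints: openSUSE Tumbleweed Base Container
```

Higher-level runtimes use exactly these keys to surface image metadata without having to understand the rest of the specification.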

Let’s create a new container from the bundle with runc. Before actually calling out to runc, we have to set up a receiver terminal to be able to interact with the container. For this, we can use the recvtty tool included in the runc repository:

> go get github.com/opencontainers/runc/contrib/cmd/recvtty
> recvtty tty.sock

In another terminal, we now call runc create, specifying the bundle and the terminal socket:

> sudo runc create -b bundle --console-socket $(pwd)/tty.sock container

No further output, so what happened? It seems we have created a new container in the created state:

> sudo runc list
ID          PID         STATUS      BUNDLE      CREATED                          OWNER
container   29772       created     /bundle     2019-05-21T08:35:51.382141418Z   root

The container does not seem to be running, but what is running inside it?

> sudo runc ps container
UID        PID  PPID  C STIME TTY          TIME CMD
root     29772     1  0 10:35 ?        00:00:00 runc init

The runc init command sets up a fresh environment with all necessary namespaces and launches a new initial process. The main process /bin/bash does not run yet inside the container, but we are still able to execute further processes within the container:

> sudo runc exec -t container echo "Hello, world!"
> Hello, world!

The created state of a container provides a nice environment to set up networking, for example. To actually do something within the container, we have to bring it into the running state. This can be done via runc start:

> sudo runc start container

In the terminal where the recvtty process is running, a new bash shell session should now pop up:

mrsdalloway:/ $
mrsdalloway:/ $ ps aux
ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   5156  4504 pts/0    Ss   10:28   0:00 /bin/bash
root        29  0.0  0.0   6528  3372 pts/0    R+   10:32   0:00 ps aux

Nice, the container seems to be running. We can now utilize runc to inspect the container’s state:

> sudo runc list
ID          PID         STATUS      BUNDLE      CREATED                          OWNER
container   4985        running     /bundle     2019-05-20T12:14:14.232015447Z   root
> sudo runc ps container
UID        PID  PPID  C STIME TTY          TIME CMD
root      6521  6511  0 14:25 pts/0    00:00:00 /bin/bash

The runc init process has gone and now only the actual /bin/bash process exists within the container. We can also do some basic life cycle management with the container:

> sudo runc pause container

It should now be impossible to get any output from the running container in the recvtty session. To resume the container, simply call:

> sudo runc resume container

Everything we tried to type before should now pop up in the resumed container terminal. If we need more information about the container, like the CPU and memory usage, then we can retrieve them via the runc events API:

> sudo runc events container
{...}

The output is a bit hard to read, so let’s reformat it and strip some fields:

{
  "type": "stats",
  "id": "container",
  "data": {
    "cpu": {
      "usage": {
        "total": 31442016,
        "percpu": [ 5133429, 5848165, 827530, ... ],
        "kernel": 20000000,
        "user": 0
      },
      "throttling": {}
    },
    "memory": {
      "usage": {
        "limit": 9223372036854771712,
        "usage": 1875968,
        "max": 6500352,
        "failcnt": 0
      },
      "swap": { "limit": 0, "failcnt": 0 },
      "kernel": {
        "limit": 9223372036854771712,
        "usage": 311296,
        "max": 901120,
        "failcnt": 0
      },
      "kernelTCP": { "limit": 9223372036854771712, "failcnt": 0 },
      "raw": {
        "active_anon": 1564672,
        [...]
      }
    },
    "pids": { "current": 1 },
    "blkio": {},
    "hugetlb": { "1GB": { "failcnt": 0 }, "2MB": { "failcnt": 0 } },
    "intel_rdt": {}
  }
}

We can see that we are able to retrieve detailed runtime information about the container.
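Since the events stream is line-delimited JSON, it lends itself to ad-hoc processing with jq. As a small sketch (jq is assumed to be installed; the inlined sample is a trimmed version of the stats line above, where you would normally pipe `sudo runc events container` straight into jq):

```shell
# A trimmed stats line as emitted by `runc events`.
cat > events.json <<'EOF'
{"type":"stats","id":"container","data":{"cpu":{"usage":{"total":31442016}},"memory":{"usage":{"usage":1875968,"max":6500352}},"pids":{"current":1}}}
EOF
# Condense the stats into a one-line, human-readable summary.
jq -r '"\(.id): mem=\(.data.memory.usage.usage)B (peak \(.data.memory.usage.max)B), pids=\(.data.pids.current)"' events.json
# prints: container: mem=1875968B (peak 6500352B), pids=1
```

This is handy for quick monitoring scripts when no full metrics stack is in place.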

To stop the container, we simply exit the recvtty session. Afterwards the container can be removed with runc delete:

> sudo runc list
ID          PID         STATUS      BUNDLE      CREATED                         OWNER
container   0           stopped     /bundle     2019-05-21T10:28:32.765888075Z  root
> sudo runc delete container
> sudo runc list
ID          PID         STATUS      BUNDLE      CREATED     OWNER

Containers in the stopped state cannot run again, so they have to be recreated from a fresh state. As already mentioned, the extracted bundle contains the necessary config.json file beside the rootfs, which will be used by runc to set up the container. We could, for example, modify the initial run command of the container by executing:

> cd bundle
> jq '.process.args = ["echo", "Hello, world!"]' config.json | sponge config.json
> sudo runc run container
> Hello, world!

Editing the rootfs or the config.json gives us nearly unlimited freedom. For example, we could tear down the PID namespace isolation between the container and the host:

> jq '.process.args = ["ps", "a"] | del(.linux.namespaces[0])' config.json | sponge config.json
> sudo runc run container
16583 ?        S+     0:00 sudo runc run container
16584 ?        Sl+    0:00 runc run container
16594 pts/0    Rs+    0:00 ps a
[output truncated]

In the end, runc is a pretty low-level runtime, and improper configuration or usage can lead to serious security concerns. Certainly, runc has native support for security enhancements like seccomp, Security-Enhanced Linux (SELinux), and AppArmor, but these features should be used by higher-level runtimes to ensure correct usage in production. It is also worth mentioning that it is possible to run containers in rootless mode via runc to harden the deployment even further. We will cover these topics in future blog posts as well, but for now that should suffice on that level.
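To give an impression of what such an enhancement looks like at the runc level: a seccomp profile is just another section of the config.json. The fragment below is a deliberately strict, purely illustrative policy; real-world profiles, like the defaults shipped by higher-level runtimes, allow several hundred syscalls:

```json
{
  "linux": {
    "seccomp": {
      "defaultAction": "SCMP_ACT_ERRNO",
      "architectures": ["SCMP_ARCH_X86_64"],
      "syscalls": [
        {
          "names": ["read", "write", "exit", "exit_group", "futex"],
          "action": "SCMP_ACT_ALLOW"
        }
      ]
    }
  }
}
```

With defaultAction set to SCMP_ACT_ERRNO, every syscall that is not explicitly listed fails with an error, which is exactly the kind of policy a higher-level runtime can generate and enforce for us.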

Another drawback of running containers with runc alone is that we have to set up the networking manually for the container to reach the internet or other containers. To do that, we could use the Runtime Specification Hooks feature to set up a default bridge before the container actually starts.
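Such a hook is, again, just a part of the config.json. A sketch could look like the fragment below, where /usr/local/bin/setup-bridge.sh is a hypothetical script that creates the bridge and attaches the container to it:

```json
{
  "hooks": {
    "prestart": [
      {
        "path": "/usr/local/bin/setup-bridge.sh",
        "args": ["setup-bridge.sh", "my-bridge"],
        "timeout": 5
      }
    ]
  }
}
```

runc executes prestart hooks on the host after the container's namespaces have been created but before the user process starts, which is exactly the right moment to wire up networking.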

But why don’t we leave this job to a higher level runtime as well? Let’s go for that and move on.

The Kubernetes Container Runtime Interface (CRI)

Back in 2016, the Kubernetes project announced the implementation of the Container Runtime Interface (CRI), which provides a standard API for container runtimes to work with Kubernetes. This interface enables users to exchange the runtime in a cluster with ease. How does the API work? On every node of a Kubernetes cluster runs a piece of software called the kubelet, whose main job is to keep container workloads running and healthy. The kubelet connects to a gRPC server on startup and expects a predefined API there. For example, some service definitions of the API look like this:

// Runtime service defines the public APIs for remote container runtimes
service RuntimeService {
    rpc CreateContainer (...) returns (...) {}
    rpc ListContainers  (...) returns (...) {}
    rpc RemoveContainer (...) returns (...) {}
    rpc StartContainer  (...) returns (...) {}
    rpc StopContainer   (...) returns (...) {}

That seems to be pretty much what we already did with runc, managing the container life cycle. If we look further at the API, we see this:

    rpc ListPodSandbox  (...) returns (...) {}
    rpc RemovePodSandbox(...) returns (...) {}
    rpc RunPodSandbox   (...) returns (...) {}
    rpc StopPodSandbox  (...) returns (...) {}
}

What does “sandbox” mean? Containers should already be some kind of sandbox, right? Yes, but in the Kubernetes world, Pods can consist of multiple containers, and this abstract hierarchy has to be mapped into a simple list of containers. Because of that, the creation of every Kubernetes Pod starts with the setup of a so-called PodSandbox. Every container running inside the Pod is attached to this sandbox, so the containers inside can share common resources, like their network interfaces. runc alone does not provide such features out of the box, so we have to use a higher-level runtime to achieve our goal.

CRI-O

CRI-O is a higher-level container runtime which was written specifically to be used with the Kubernetes CRI. The name originates from the combination of the Container Runtime Interface and the Open Container Initiative. Isn’t that simple? CRI-O’s journey started as a Kubernetes incubator project back in 2016 under the name Open Container Initiative Daemon (OCID). Version 1.0.0 was released one year later, in 2017, and CRI-O has followed the Kubernetes release cycle from that day on. This means, for example, that Kubernetes 1.15 can safely be used together with CRI-O 1.15, and so on.

The implementation of CRI-O follows the UNIX philosophy and aims to be a lightweight alternative to Docker or containerd when it comes to running production-ready workloads inside of Kubernetes. It is not meant to be a developer-facing tool that is used from the command line. CRI-O has only one major task: fulfilling the Kubernetes CRI. To achieve that, it utilizes runc for basic container management in the back end, while its gRPC server provides the API in the front end. Everything in between is done either by CRI-O itself or by core libraries like containers/storage and containers/image. But in the end this doesn’t mean that we cannot play around with it, so let’s give it a try.

I prepared a container image called “crio-playground” to get started with CRI-O in an efficient manner. This image contains all necessary tools, example files and a working CRI-O instance running in the background. To start a privileged container running the crio-playground, simply execute:

> sudo podman run --privileged -h crio-playground -it saschagrunert/crio-playground
crio-playground:~ $

From now on we will use a tool called crictl to interface with CRI-O and its Container Runtime Interface implementation. crictl allows us to write CRI API requests as YAML and send them to CRI-O. For example, we can create a new PodSandbox with the sandbox.yml located in the current working directory of the playground:

metadata:
  name: sandbox
  namespace: default
dns_config:
  servers:
    - 8.8.8.8

To create the sandbox in the running crio-playground container, we now execute:

crio-playground:~ $ crictl runp sandbox.yml
5f2b94f74b28c092021ad8eeae4903ada4b1ef306adf5eaa0e985672363d6336

Let’s store the identifier of the sandbox in the $POD_ID environment variable for later use:

crio-playground:~ $ export POD_ID=5f2b94f74b28c092021ad8eeae4903ada4b1ef306adf5eaa0e985672363d6336

If we now run crictl pods we can see that we finally have one PodSandbox up and running:

crio-playground:~ $ crictl pods
POD ID              CREATED             STATE               NAME                NAMESPACE           ATTEMPT
5f2b94f74b28c       43 seconds ago      Ready               sandbox             default             0

But what’s inside our sandbox? We can examine the sandbox further by using runc:

crio-playground:~ $ runc list
ID                                                                 PID         STATUS      BUNDLE                                                                                                             CREATED                          OWNER
5f2b94f74b28c092021ad8eeae4903ada4b1ef306adf5eaa0e985672363d6336   80          running     /run/containers/storage/vfs-containers/5f2b94f74b28c092021ad8eeae4903ada4b1ef306adf5eaa0e985672363d6336/userdata   2019-05-23T13:43:38.798531426Z   root

The sandbox seems to run in a dedicated bundle under /run/containers. Let’s check which processes are running inside it:

crio-playground:~ $ runc ps $POD_ID
UID        PID  PPID  C STIME TTY          TIME CMD
root        80    68  0 13:43 ?        00:00:00 /pause

Interestingly, there is only one process running inside the sandbox, called pause. As the source code of pause indicates, the main task of this process is to keep the environment running and react to incoming signals. Before we actually create our workload within that sandbox, we have to pre-pull the image we want to run. A trivial example would be to run a web server, so let’s retrieve an nginx image by calling:

crio-playground:~ $ crictl pull nginx:alpine
Image is up to date for docker.io/library/nginx@sha256:0fd68ec4b64b8dbb2bef1f1a5de9d47b658afd3635dc9c45bf0cbeac46e72101

Now let’s create a very simple container definition in YAML, like we did for the sandbox:

metadata:
  name: container
image:
  image: nginx:alpine

And now, let’s kick off the container. For that we have to provide the hash of the sandbox as well as the YAML definitions of the sandbox and container:

crio-playground:~ $ crictl create $POD_ID container.yml sandbox.yml
b205eb2c6abec3e7ade72e0cea09d827968a4c1089483cab06bdf0f4ee82ff0c

Seems to work! Let’s store the container identifier as $CONTAINER_ID for later reuse as well:

crio-playground:~ $ export CONTAINER_ID=b205eb2c6abec3e7ade72e0cea09d827968a4c1089483cab06bdf0f4ee82ff0c

What would you expect if we now check the status of our two containers, keeping the CRI API in mind? Correct, the new container should be in the created state:

crio-playground:~ $ runc list
ID                                                                 PID         STATUS      BUNDLE                                                                                                             CREATED                          OWNER
5f2b94f74b28c092021ad8eeae4903ada4b1ef306adf5eaa0e985672363d6336   80          running     /run/containers/storage/vfs-containers/5f2b94f74b28c092021ad8eeae4903ada4b1ef306adf5eaa0e985672363d6336/userdata   2019-05-23T13:43:38.798531426Z   root
b205eb2c6abec3e7ade72e0cea09d827968a4c1089483cab06bdf0f4ee82ff0c   343         created     /run/containers/storage/vfs-containers/b205eb2c6abec3e7ade72e0cea09d827968a4c1089483cab06bdf0f4ee82ff0c/userdata   2019-05-23T14:08:53.701174406Z   root

And, like in our previous runc example, the container waits in runc init:

crio-playground:~ $ runc ps $CONTAINER_ID
UID        PID  PPID  C STIME TTY          TIME CMD
root       343   331  0 14:08 ?        00:00:00 /usr/sbin/runc init

crictl shows the container in the created state as well:

crio-playground:~ $ crictl ps -a
CONTAINER ID        IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
b205eb2c6abec       nginx:alpine        13 minutes ago      Created             container           0                   5f2b94f74b28c

Now we have to start the workload to get it into the running state:

crio-playground:~ $ crictl start $CONTAINER_ID
b205eb2c6abec3e7ade72e0cea09d827968a4c1089483cab06bdf0f4ee82ff0c

This should be successful, too. Let’s verify if all processes are running correctly:

crio-playground:~ $ crictl ps
CONTAINER ID        IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
b205eb2c6abec       nginx:alpine        15 minutes ago      Running             container           0                   5f2b94f74b28c

An nginx web server should now be running inside the container:

crio-playground:~ $ runc ps $CONTAINER_ID
UID        PID  PPID  C STIME TTY          TIME CMD
root       343   331  0 14:08 ?        00:00:00 nginx: master process nginx -g daemon off;
100        466   343  0 14:24 ?        00:00:00 nginx: worker process

But how do we reach the web server’s content now? We did not expose any ports or do any other advanced configuration for the container, so it should be fairly isolated from the host. The solution lies in the container networking. Because we use a bridged network configuration in the crio-playground, we can simply access the container’s network address. To find it, we can exec into the container and list its network interfaces:

crio-playground:~ $ crictl exec $CONTAINER_ID ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 16:04:8c:44:00:59 brd ff:ff:ff:ff:ff:ff
    inet 172.0.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::1404:8cff:fe44:59/64 scope link
       valid_lft forever preferred_lft forever

Now we just have to query the inet address of eth0:

crio-playground:~ $ curl 172.0.0.2
[output truncated]

Hooray, it works! We successfully ran a Kubernetes workload without running Kubernetes!

The overall Kubernetes story about Network Plugins and the Container Network Interface (CNI) is worth its own blog post, but that’s a different story, and we stop right here with all the magic.

Conclusion

And that’s a wrap for this part of the blog series about the demystification of containers. We discovered the brief history of container runtimes and had the chance to run containers with the low-level runtime runc as well as the higher-level runtime CRI-O. I can really recommend having a closer look at the OCI runtime specification and testing different configurations within the crio-playground environment. We will surely see CRI-O again in the future when we talk about container-related topics like security or networking. Besides that, we will have the chance to explore different tools like podman, buildah, or skopeo, which provide more advanced container management solutions. I really hope you enjoyed the read and will continue following my journey into future parts of this series. Feel free to drop me a line anywhere you can find me on the internet. Stay tuned!

You can find all necessary resources about this series on GitHub.

 

Diversity Scholarship Series: Experiencing KubeCon + CloudNativeCon + Open Source Summit in Shanghai

By | Blog

Guest post by Irvi Aini, originally published on Medium

As someone who is still a newbie in the open source world, I often feel intimidated. At first, I wasn’t even sure whether I should contribute or not, even though I’ve been involved in several communities in my home country, Indonesia. This kind of feeling doesn’t dissipate easily (previously I’ve been active in the Docker and Kubernetes communities).

Whenever I wanted to start contributing, I just felt like, well, they must be really smart and I am still a newbie. Then several things happened, and someone told me that I’d be disappointed if I kept getting stuck inside my own mind, forever undecided. So I gathered my courage and started looking for something I might be able to achieve. I began to look at the design docs and the various SIGs, and I even tried to see what the biweekly meetings look like.

One day my friend asked me to help him initiate localization of the Kubernetes documentation into Bahasa Indonesia. I had been looking at this project as well, and with the help of a fellow contributor from 🇫🇷, I began this project with my friend. I made several mistakes, but the response from the reviewers was actually beyond my expectation; they were really nice, and I got a lot of help as well. To be honest, I was really struggling, because when I started this project my knowledge of Bahasa Indonesia was not as good as I had thought. Through my involvement in this project, I was promoted to a member of the Kubernetes organization, with sponsorship from the code OWNERS and the fellow contributor from 🇫🇷. At that time, this friend also dared me to write a paper and submit it for KubeCon. However, I still don’t have the courage for public speaking in front of a worldwide audience. Even the thought of it is still too much for me.

After this localization project, I began working in the Kubernetes-client organization, which involved coding. It was fun, since I got the chance to learn more about Haskell. At this point I dared myself to apply for a KubeCon contributor discount. Since it was too late for Barcelona, I tried my luck for KubeCon Shanghai. I emailed the Linux Foundation, included my GitHub handle, and voila, I got the discount: the ticket was “free.” However, I still needed accommodations to be able to attend the event. I remembered that the Linux Foundation had already spent about $770,000 USD on scholarships. The diversity scholarship program provides support to those from traditionally underrepresented and/or marginalized groups in technology and/or open source communities. A recipient receives up to $1,500 USD to reimburse actual travel expenses (airfare, hotel, and ground transport). I began to check whether KubeCon Shanghai was also supported, saw the link about the diversity scholarship, and applied. The application process is actually pretty straightforward: you’ll be asked to fill in details about your experiences, your motivations, and what you will gain from attending the event. A few weeks later, I got a reply from the Linux Foundation saying they would fund my accommodation. I felt so happy and blessed to be selected as one of 309 recipients (accumulated across 7 KubeCon events) around the world. I was also excited to meet all the folks I had only known through Slack or the mailing lists, and I had the chance to discuss deeper topics around CNCF-related projects.

My intention in writing this is actually simple. I don’t know how many people feel the same as me, but I hope that what I share can show that sometimes it’s better to try something new and then say “I’m glad I did that,” instead of letting the possibility of doing something new pass you by and then regretting the decision later. Don’t hesitate to contribute just because you feel intimidated, especially if you’re planning on contributing to CNCF projects. I think CNCF and Kubernetes have very nice people who are eager to help during your journey as a fellow contributor. Cheers!

For the localization project

Surely we will be really happy if you want to help as well 😊. We listed all of the things that need to be done for the first milestone on the tracking page in this GitHub issue. Not all contributions to an open source project involve coding. Happy contributing!

A Look Back At KubeCon + CloudNativeCon + Open Source Summit China 2019

By | Blog

We had an amazing time in Shanghai, and wanted to share an overview of the key highlights and news from KubeCon + CloudNativeCon + Open Source Summit China 2019! This year we welcomed more than 3,500 attendees from around the world to talk about exciting cloud native and open source topics and hear from CNCF project maintainers, end users, and community members.

Our second annual China event grew by 1,000 to more than 3,500 attendees this year! At the conference, CNCF announced the creation of Special Interest Groups (SIGs) as well as two new groups: Storage and Security.

CNCF shared the news that the largest mobile payments company in the world, Ant Financial, has joined as a Gold End User member.

CNCF also awarded DiDi, the world’s leading multi-modal transportation platform, the End User Award in recognition of its contributions to the cloud native ecosystem.

We watched an exciting line-up of speakers discuss cloud native technology, open source, and their experiences building and using the technology. We also learned how Chinese developers have been engaging increasingly with CNCF and open source. Chinese developers have now collectively made China the second-largest Kubernetes contributor country in the world! 

Diversity Scholarships in Asia! 

For KubeCon + CloudNativeCon + Open Source Summit China this year, CNCF enabled 15 diversity scholars to attend! We are thrilled that sponsors Aspen Mesh, CNCF, Google Cloud, Red Hat, Twistlock, and VMware came together to offer these recipients the opportunity to attend KubeCon! Don’t forget to keep an eye on our blog for our Diversity Scholarship Series, where recipients share their experiences.    

Community Party! 

The community came together on Day 1 at the Welcome Reception in the Sponsor Showcase. Sponsors, attendees and fellow community members enjoyed an evening of networking, food, drink and entertainment, including local calligraphy, sugar and candy floss artists. 

As always, we had a killer job board at the event showing continued growth in the ecosystem. 

That’s a wrap!


Save the Dates!

The call for proposals for KubeCon + CloudNativeCon North America 2019, which takes place in San Diego from November 18-21, closes July 12th. Registration is open. 

We’ll be back in Europe for KubeCon + CloudNativeCon Europe in Amsterdam from March 30-April 2, 2020. Registration will go live at the end of the year. We’ll soon be announcing the location for our China event in the summer of 2020.

We hope to see you at one or all of these upcoming events!

Diversity Scholarship Series: KubeCon + CloudNativeCon EU 2019

By | Blog

Guest post by Semah Mhamdi originally posted on Medium

At the beginning of May in Barcelona, the sun shines, the birds sing… but the experts, they continue to work! Since the morning, a little more than 8,000 people had gathered at Fira Gran Via for KubeCon + CloudNativeCon EU 2019, hosted by the Cloud Native Computing Foundation.

I’m one of the lucky people who were there in Barcelona thanks to the CNCF Diversity scholarship.

My introduction to the world of containers happened one year ago, when I stumbled upon one of my now great friends at a technical workshop, in Tunisia.

I am very grateful for the introduction to this world that happened on that fateful day. The day I found Kubernetes (amongst many other things).

I have been working with Kubernetes and containers as a user and a developer, building tools around it. I really wanted to dig deeper into the world of cloud-native architecture and apps. The scholarship gave me an opportunity to interact with amazing communities who work on cloud technology. I’ve learned a lot from them. It also helped me get started on one of my biggest goals of contributing back to these amazing communities.

From day one, I found myself talking to experts as well as beginners. Everyone I approached was willing to talk about their ideas and work. They also listened to my ideas and offered me advice. I even got some really cool swag.

I was present for the new contributor workshop, which helped me learn a lot about how to contribute to Kubernetes, and it has now led me to join the Kubernetes community.

The next three days of keynotes were really amazing, especially when we met people we’d been following for a long time, like Kris Nova, Joe Beda, and Paris Pittman. The list of projects under the CNCF is growing, and it was amazing to see the various stages at which the projects were featured.

Overall, KubeCon was a dream for me to attend.

I have also started on the path to contributing more to amazing upstream projects.

Thanks again, to the Linux Foundation and to the Cloud Native Computing Foundation, for giving me this opportunity to expand my horizons, to meet great people, to make cool, amazing friends. To learn and to grow … and to look forward to KubeCon + CloudNativeCon North America 2019.

KubeCon + CloudNativeCon Europe 2019 Conference Transparency Report: Another Record-Breaking CNCF Event

By | Blog

KubeCon + CloudNativeCon Europe 2019 was a great success with record-breaking registrations, attendance, sponsorships, and co-located events for our annual European conference. With 7,700 registrations, attendance for this year’s event in Barcelona grew by 84% from last year’s event in Copenhagen. 74% of these attendees were at KubeCon + CloudNativeCon for the first time.  

The KubeCon + CloudNativeCon Europe 2019 conference transparency report

Key Takeaways

  • Attendance grew by 84% from last year’s KubeCon event in Copenhagen.
  • Feedback from attendees was very positive with all surveyed highly recommending the event to a colleague or friend.
  • 95 media and analysts attended the event, generating more than 5,300 clips of compelling event news.
  • 353 speakers – 23% – were accepted to speak at the event, out of 1,535 CFP submissions – a new record for our European event.
  • Over 50% of attendees participated in one or more of the 27 workshops, mini-summits and training sessions hosted by community members the day prior to the conference.
  • The event drew attendees from 93 countries across 6 continents. 
  • Leveraging the $100,000 in diversity scholarship funds available from Aspen Mesh, CNCF, Google Cloud, Red Hat, Twistlock, and VMware, CNCF provided travel and/or conference registration to 56 applicants.
  • 40% of keynote sessions and 14% of track sessions were led by women.  
  • Over 200 people attended the Diversity Lunch + Hack, sponsored by Google Cloud and over 60 people attended the EmpowerUs reception, sponsored by Red Hat.
  • Kubernetes, Prometheus and Helm were the top three projects in terms of attendee interest.
  • The top two reasons respondents cited for attending KubeCon + CloudNativeCon were to learn (72.4%) and to network (18.6%).

Save the Dates!

Speaker submissions are open and due on July 12 for KubeCon + CloudNativeCon North America 2019 which takes place in San Diego from November 18-21. Registration is open. 

We’ll be back in Europe for KubeCon + CloudNativeCon Europe in Amsterdam from March 30-April 2, 2020. Registration will go live at the end of the year.

We hope to see you at one or all of these upcoming events!