Meet the Ambassador: Chris Gaun

Chris Gaun, Mesosphere, sat down with Kaitlyn Barnard, Cloud Native Computing Foundation (CNCF), to talk about cloud native, trends in the industry, and being an Ambassador. Below is their interview. You can also view the video.

 

Kaitlyn: Thank you for joining me today to talk about the CNCF Ambassador Program and what you’re doing in the community. Can you start by introducing yourself?

Chris: My name is Chris Gaun. I am a CNCF ambassador and product marketing manager at Mesosphere.

Kaitlyn: I know you have a new baby at home. Five weeks old now. How’s the new dad life treating you?

Chris: Lack of sleep. It’s our first kid, and it’s exciting, amazing.

Kaitlyn: In your experience, what is the biggest challenge for adopting and scaling cloud native applications right now?

Chris: I feel that there are really great tools out there: open source tools, a lot of them under the CNCF umbrella and some of them not. Piecing those together, Prometheus, Kubernetes, Jenkins, or a CI/CD pipeline, is still somewhat difficult. There are a lot of great vendors and cloud providers giving users a lot of help in this area. But if you’re coming from an old-school, larger organization where most of your experience might be with WebSphere or something of that nature, approaching this (cloud native) means saying, “Oh yeah, I’m gonna set up Kubernetes or manage Kubernetes.” Then, “Oh, now I need Prometheus, and now I need Linkerd or Istio.” That’s a big project at that point.

Kaitlyn: What cloud native trend are you most excited about?

Chris: Well, first off, this whole thing has been amazing. I have pictures from KubeCon two years ago in London, and it was in a co-working space. There were maybe 500 people coming and going, but probably 150 there all the time, eating pizza. So you could feed them on however many boxes of pizza. To see the trajectory, from 150 people in a basement to over 4,000 people in a room, is just mind-boggling.

All these tools that are coming out, especially Istio, are amazing. The fact that they’re all open source means people can kick the tires without spending a huge amount of money, and get the knowledge of how to do cloud native infrastructure and cloud native applications without diving in head-first. It’s amazing.

Kaitlyn: You’ve actually been part of the CNCF Ambassador Program since the beginning. Can you talk about the early days and why you wanted to be part of the program?

Chris: As background, I’ve been a part of the Kubernetes community for three years, almost since the beginning. I was investigating it very early on before GA. CNCF was created, and then Kubernetes came under its banner.

I thought there should be a safe space where we just talk about Kubernetes and how cool it is, instead of talking about our own vendor politics or selling stuff all the time. I was talking to Chris, the CTO of CNCF at the time, and he said, “I think I’m gonna have this CNCF Ambassador Program. Would you like to be a member?” Then when the program kicked off, I got an email to be a CNCF Ambassador.

We’re here to promote the CNCF tools, the open source tools, that are under the CNCF banner like Kubernetes and Prometheus without any of the vendor politics. Just talking about how cool the technology is.

Kaitlyn: We’ve talked a lot about work. What do you do when you’re not working?

Chris: Well, now I have a five-week-old, so not a lot right now. Before that – I’m actually from New York, but I recently moved to Mississippi. My wife’s a doctor and she’s in a fellowship. So we spend a lot of time outdoors. Mississippi is a beautiful state. We go on a lot of hikes, we have a nice river nearby called the Mississippi River, and a lake. We try to get out as much as possible. I have a dog named Panda who goes out with us and we go for hikes.

Kaitlyn: That’s awesome. For the record, Panda’s one of my favorite dog names up to this point. Thank you so much for taking the time to talk to me today.

Chris: Thanks for having me.

Diversity Scholarship Series: My Experiences at KubeCon EU 2018

CNCF offers diversity scholarships to developers and students to attend KubeCon + CloudNativeCon events. In this post, scholarship recipient Yang Li, Software Engineer at TUTUCLOUD, shares his experience attending sessions and meeting the community. Anyone interested in applying for the CNCF diversity scholarship to attend KubeCon + CloudNativeCon North America 2018 in Seattle, WA, December 11-13, can submit an application here. Applications are due October 5th.

This guest post was written by Yang Li, Software Engineer, TUTUCLOUD

Thanks to the Diversity Scholarship sponsored by CNCF, I attended KubeCon + CloudNativeCon Europe 2018 in Copenhagen, May 2-4.

The Conference

When I was in college, I wrote software with Python on Ubuntu, and read The Cathedral and the Bazaar by Eric S. Raymond. These were my first memories of open source.

Later on, I worked with a variety of different open source projects, and I attended many tech conferences. But none of them were like KubeCon, which gave me the opportunity to take part in the open source community in real life.

Not only was I able to enjoy great speeches and sessions at the event, but I also met and communicated with many different open source developers. I made many new friends and amazing memories during the four days in Copenhagen.

In case anyone missed it, here are the videos, presentations, and photos of the whole conference.

Although I haven’t been to many cities around the world, I can safely say that Copenhagen is one of my favorites.

The Community

“Diversity is essential to happiness.”

This quote by Bertrand Russell is one of my firm beliefs. Even as a male software engineer and a Han Chinese in China, I always try to speak for the minority groups which are still facing discrimination. But to be honest, I haven’t found much meaning in the word diversity for myself.

However, soon after being at KubeCon, I understood that I’m one of the minorities in the world of open source. More importantly, with people from all over the world, I learned how inclusiveness made this community a better place. Both the talk by Dirk Hohndel and the Diversity Luncheon at KubeCon were very inspirational.

Final Thoughts

I started working with Kubernetes back in early 2017, but I only made a few contributions in the past year. Not until recently did I become active in the community and join multiple SIGs. Thanks to the conference, I have a much better understanding of the culture of the Kubernetes community. I think this is open source culture at its best.

  • Distribution is better than centralization
  • Community over product or company
  • Automation over process
  • Inclusive is better than exclusive
  • Evolution is better than stagnation

It is this culture that makes the community an outstanding place, one that deserves our perseverance.

 

 

Supporting Fast Decisioning Applications with Kubernetes

Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team embraced Kubernetes for its provisioning platform.

In this case study, Capital One describes how their use of Kubernetes and other CNCF open source projects has been a “significant productivity multiplier.”

Read how their Kubernetes implementation reduced the attack surface of their applications in the cloud and sped up their response to threats in the marketplace: they can now push new rules, detect new patterns of behavior, and detect anomalies in account and transaction flows.

With Kubernetes, Capital One has been able to provide the tools in the same ecosystem, in a consistent way.

Launching and Scaling Up Experiments, Made Simple

From experiments in robotics to old-school video game play research, OpenAI’s work in artificial intelligence technology is meant to be shared. With a mission to ensure the safety of powerful AI systems, they care deeply about open source—both benefiting from it and contributing to it.

In this case study, OpenAI describes how their use of Kubernetes on Azure made it easy to launch experiments, scale their nodes, reduce costs for idle nodes, and provide low latency and rapid iteration, all with little effort to manage.

Containers were connected and coordinated using Kubernetes for batch scheduling and as a workload manager. Their workloads run both in the cloud and in the data center, a hybrid model that has allowed OpenAI to take advantage of lower costs in the cloud and the specialized hardware available in their data center.

 

Building on the Open Source Model

SUSE has been in business for over 25 years, built on a model that depends on open source technologies driven by communities.

Engaging with the technologies that are at the forefront of the industry, such as cloud native and Kubernetes, directly aligns with SUSE’s strategic business interests. The open source model is what makes it work: bringing together the greatest minds to solve a problem, participating in open discussions, and collaborating. It’s about the ideals of advancing the technology and holding that above the business interests of any one entity.

KubeCon + CloudNativeCon gives SUSE the opportunity to engage with people one-on-one, learn about architectures and processes as part of the whole paradigm, and network professionally.

In this CNCF Member Video, Jennifer Kotzen, Senior Product Marketing Manager at SUSE, goes into detail on these topics, the CNCF projects they are using now and plan to use, and how the company benefits from attending KubeCon + CloudNativeCon events.

 

HTTP/2: Smarter At Scale

This guest post was written by Jean de Klerk, Developer Program Engineer, Google

Much of the web today runs on HTTP/1.1. The spec for HTTP/1.1 was published in June of 1999, just shy of 20 years ago. A lot has changed since then, which makes it all the more remarkable that HTTP/1.1 has persisted and flourished for so long. But in some areas it’s beginning to show its age; for the most part, in that the designers weren’t building for the scale at which HTTP/1.1 would be used and the astonishing amount of traffic that it would come to handle.

HTTP/2, whose specification was published in May of 2015, seeks to address some of the scalability concerns of its predecessor while still providing a similar experience to users. HTTP/2 improves upon HTTP/1.1’s design in a number of ways, perhaps most significantly in providing a semantic mapping over connections. In this post we’ll explore the concept of streams and how they can be of substantial benefit to software engineers.

Semantic Mapping over Connections

There’s significant overhead to creating HTTP connections. You must establish a TCP connection, secure that connection using TLS, exchange headers and settings, and so on. HTTP/1.1 simplified this process by treating connections as long-lived, reusable objects. HTTP/1.1 connections are kept idle so that new requests to the same destination can be sent over an existing, idle connection. Though connection reuse mitigates the problem, a connection can only handle one request at a time; requests and connections are coupled 1:1. If there is one large message being sent, new requests must either wait for its completion (resulting in head-of-line blocking) or, more frequently, pay the price of spinning up another connection.

HTTP/2 takes the concept of persistent connections further by providing a semantic layer above connections: streams. Streams can be thought of as a series of semantically connected messages, called frames. A stream may be short-lived, such as a unary stream that requests the status of a user (in HTTP/1.1, this might equate to `GET /users/1234/status`). Increasingly, though, streams are long-lived. To extend the example, instead of making individual requests to the /users/1234/status endpoint, a receiver might establish a long-lived stream and thereby continuously receive user status messages in real time.
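
To make that concrete, here is a minimal Go sketch of the two styles from a client’s point of view. Both endpoints are hypothetical, and the streaming shape (one status message per line) is an assumption for illustration; Go’s net/http negotiates HTTP/2 transparently over TLS when the server supports it.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Unary style: one request, one response, then the stream is done.
	resp, err := http.Get("https://api.example.com/users/1234/status")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("unary request done over", resp.Proto) // "HTTP/2.0" when negotiated

	// Long-lived style: one stream, many messages, read as they arrive.
	// Assumes a hypothetical endpoint that emits one status per line forever.
	resp, err = http.Get("https://api.example.com/users/1234/statuses")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() { // blocks until the next status message arrives
		fmt.Println("status update:", scanner.Text())
	}
}
```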

Streams Provide Concurrency

The primary advantage of streams is connection concurrency, i.e. the ability to interleave messages on a single connection.

To illustrate this point, consider the case of some service A sending HTTP/1.1 requests to some service B about new users, profile updates, and product orders. Product orders tend to be large, and each product order ends up being broken up and sent as 5 TCP packets (to illustrate its size). Profile updates are very small and fit into one packet; new user requests are also small and fit into two packets.

In some snapshot in time, service A has a single idle connection to service B and wants to use it to send some data. Service A wants to send a product order (request 1), a profile update (request 2), and two “new user” requests (requests 3 and 4). Since the product order arrives first, it dominates the single idle connection. The latter three smaller requests must either wait for the large product order to be sent, or some number of new HTTP/1.1 connection must be spun up for the small requests.

Meanwhile, with HTTP/2, streaming allows messages to be sent concurrently on the same connection. Let’s imagine that service A creates a connection to service B with three streams: a “new users” stream, a “profile updates” stream, and a “product order” stream. Now, the latter requests don’t have to wait for the first-to-arrive large product order request; all requests are sent concurrently.

Concurrency does not mean parallelism, though; we can only send one packet at a time on the connection. So the sender might round-robin packets between streams (see the sketch below). Alternatively, senders might prioritize certain streams over others; perhaps getting new users signed up is more important to the service!
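
Here is a toy Go sketch of that round-robin scheduling, using the four requests above. It illustrates the interleaving idea only; it is not how any particular HTTP/2 implementation schedules frames.

```go
package main

import "fmt"

// frame is a simplified stand-in for an HTTP/2 DATA frame.
type frame struct {
	stream string
	packet int // packet number within the message
}

func main() {
	// One queue of frames per stream; the product order is 5 packets,
	// the profile update 1, and each "new user" request 2, as in the text.
	queues := map[string][]frame{
		"product-order":  {{"product-order", 1}, {"product-order", 2}, {"product-order", 3}, {"product-order", 4}, {"product-order", 5}},
		"profile-update": {{"profile-update", 1}},
		"new-user-1":     {{"new-user-1", 1}, {"new-user-1", 2}},
		"new-user-2":     {{"new-user-2", 1}, {"new-user-2", 2}},
	}
	order := []string{"product-order", "profile-update", "new-user-1", "new-user-2"}

	// Round-robin: one frame per stream per pass, so the small requests
	// finish early instead of waiting behind the large product order.
	for remaining := true; remaining; {
		remaining = false
		for _, s := range order {
			if q := queues[s]; len(q) > 0 {
				fmt.Printf("send %s packet %d\n", q[0].stream, q[0].packet)
				queues[s] = q[1:]
				remaining = true
			}
		}
	}
}
```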

Flow Control

Concurrent streams, however, harbor some subtle gotchas. Consider the following situation: two streams A and B on the same connection. Stream A receives a massive amount of data, far more than it can process in a short amount of time. Eventually the receiver’s buffer fills up and the TCP receive window limits the sender. This is all fairly standard behavior for TCP, but it’s bad for streams, as neither stream would receive any more data. Ideally stream B should be unaffected by stream A’s slow processing.

HTTP/2 solves this problem by providing a flow control mechanism as part of the stream specification. Flow control is used to limit the amount of outstanding data on a per-stream (and per-connection) basis. It operates as a credit system in which the receiver allocates a certain “budget” and the sender “spends” that budget. More specifically, the receiver allocates some buffer size (the “budget”) and the sender fills (“spends”) the buffer by sending data. The receiver advertises additional buffer to the sender as it is made available, using special-purpose WINDOW_UPDATE frames. When the receiver stops advertising additional buffer, the sender must stop sending messages once the buffer (its “budget”) is exhausted.
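
As a rough sketch of the credit accounting, ignoring the connection-level window and all framing, the sender’s side of flow control for a single stream might be modeled like this in Go:

```go
package main

import "fmt"

// sendWindow models the sender's view of one stream's flow-control window.
type sendWindow struct {
	credit int // bytes the receiver has advertised and we have not yet spent
}

// trySend spends credit and reports how many bytes may actually go out now.
func (w *sendWindow) trySend(n int) int {
	if n > w.credit {
		n = w.credit // budget exhausted: send only what the credit allows
	}
	w.credit -= n
	return n
}

// onWindowUpdate handles a WINDOW_UPDATE frame from the receiver,
// which grants more budget as the receiver frees buffer space.
func (w *sendWindow) onWindowUpdate(increment int) {
	w.credit += increment
}

func main() {
	w := sendWindow{credit: 65535}          // HTTP/2's default initial window size
	fmt.Println("sent:", w.trySend(70000))  // only 65535 bytes allowed out
	w.onWindowUpdate(16384)                 // receiver advertises freed buffer
	fmt.Println("sent:", w.trySend(70000))  // another 16384 bytes allowed out
}
```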

Using flow control, concurrent streams are guaranteed independent buffer allocation. Coupled with round-robin request sending, streams of all sizes, processing speeds, and durations may be multiplexed on a single connection without having to care about cross-stream problems.

Smarter Proxies

The concurrency properties of HTTP/2 allow proxies to be more performant. As an example, consider an HTTP/1.1 load balancer that accepts and forwards spiky traffic: when a spike occurs, the proxy spins up more connections to handle the load or queues the requests. The former – new connections – are typically preferred (to a point); the downside to these new connections is paid not just in time waiting for syscalls and sockets, but also in time spent underutilizing the connection whilst TCP slow-start occurs.

In contrast, consider an HTTP/2 proxy that is configured to multiplex 100 streams per connection. A spike of requests will still cause new connections to be spun up, but only 1/100th as many as its HTTP/1.1 counterpart. More generally speaking: if n HTTP/1.1 requests are sent to a proxy, n HTTP/1.1 requests must go out; each request is a single, meaningful request/payload of data, and requests are 1:1 with connections. In contrast, with HTTP/2, n requests sent to a proxy require n streams, but there is no requirement of n connections!
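
The arithmetic behind that claim is just ceiling division. A quick sketch, treating the 100-streams-per-connection figure as a configuration choice rather than a protocol limit:

```go
package main

import "fmt"

// connectionsNeeded returns how many connections a proxy must hold open
// to carry n concurrent requests, given a per-connection stream limit.
func connectionsNeeded(n, streamsPerConn int) int {
	if streamsPerConn <= 1 {
		return n // HTTP/1.1: one in-flight request per connection
	}
	return (n + streamsPerConn - 1) / streamsPerConn // ceiling division
}

func main() {
	spike := 1000 // a spike of 1,000 concurrent requests
	fmt.Println("HTTP/1.1 connections:", connectionsNeeded(spike, 1))   // 1000
	fmt.Println("HTTP/2 connections:  ", connectionsNeeded(spike, 100)) // 10
}
```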

The proxy has room to make a wide variety of smart interventions. It may, for example:

  • Measure the bandwidth delay product (BDP) between itself and the service and then transparently create the minimum number of connections necessary to support the incoming streams.
  • Kill idle streams without affecting the underlying connection.
  • Load balance streams across connections to evenly spread traffic across those connections, ensuring maximum connection utilization.
  • Measure processing speed based on WINDOW_UPDATE frames and use weighted load balancing to prioritize sending messages from streams on which messages are processed faster.

HTTP/2 Is Smarter At Scale

HTTP/2 has many advantages over HTTP/1.1 that dramatically reduce the network cost of large-scale, real-time systems. Streams present one of the biggest flexibility and performance improvements that users will see, but HTTP/2 also provides semantics around graceful close (see: GOAWAY), header compression, server push, pinging, stream priority, and more. Check out the HTTP/2 spec if you’re interested in digging in more – it is long but rather easy reading.

To get going with HTTP/2 right away, check out gRPC, a high-performance, open-source universal RPC framework that uses HTTP/2. In a future post we’ll dive into gRPC and explore how it makes use of the mechanics provided by HTTP/2 to provide incredibly performant communication at scale.
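
If you’d like to watch HTTP/2 get negotiated end to end before reaching for gRPC, Go’s standard library is one easy way to try it: net/http advertises h2 via ALPN automatically when serving TLS. A minimal sketch follows; the certificate file names are placeholders you would supply yourself.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reports "HTTP/2.0" when the client negotiated HTTP/2.
		fmt.Fprintf(w, "hello over %s\n", r.Proto)
	})
	// Serving over TLS is all that's needed: the standard library
	// advertises h2 via ALPN and upgrades capable clients to HTTP/2.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```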

Getting the Most out of Istio with CNCF Projects

By | Blog

This guest post was written by Neeraj Poddar, Platform Lead, Aspen Mesh

Are you considering or using a service mesh to help manage your microservices infrastructure? If so, here are some basics on how a service mesh can help, the different architectural options, and tips and tricks on using some key CNCF tools that are included with Istio to get the most out of it.

The beauty of a service mesh is that it bundles so many capabilities together, freeing engineering teams from having to spend inordinate amounts of time managing microservices architectures. Kubernetes has solved many build and deploy challenges, but it is still time consuming and difficult to ensure reliability at runtime. A service mesh handles the difficult, error-prone parts of cross-service communication such as latency-aware load balancing, connection pooling, service-to-service encryption (TLS), instrumentation, and request-level routing.

Once you have decided a service mesh makes sense to help manage your microservices, the next step is deciding which service mesh to use. There are several architectural options, from the earliest model, the library approach, to the node agent architecture, to the model that seems to be gaining the most traction: the sidecar model. We have also recently seen an evolution from data plane meshes like Envoy to control plane meshes such as Istio. We are active users of Istio and believers that the sidecar architecture strikes the right balance between a robust set of features and a lightweight footprint, so let’s drill down into how to get the most out of Istio.

One of the capabilities Istio provides is distributed tracing. Tracing provides service dependency analysis across microservices and tracks requests as they travel through multiple services. It’s also a great way to identify performance bottlenecks and zoom into a particular request to determine things like which microservice contributed to its latency or which service created an error.

We use and recommend Jaeger for tracing as it has several advantages (see the instrumentation sketch after this list):

  • OpenTracing compatible API
  • Flexible & scalable architecture
  • Multiple storage backends
  • Advanced sampling
  • Accepts Zipkin spans
  • Great UI
  • CNCF project and active OS community
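
Because Jaeger is OpenTracing compatible, application code can stay vendor-neutral. Here is a minimal, illustrative Go sketch against the OpenTracing API; the operation names are made up, and in a real service you would construct a concrete Jaeger tracer and install it with opentracing.SetGlobalTracer at startup (the default no-op tracer lets this sketch run without a backend).

```go
package main

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

func handleCheckout(ctx context.Context) {
	// Start a span for this unit of work, parented to whatever span is
	// already in ctx (e.g. one extracted from incoming request headers).
	span, ctx := opentracing.StartSpanFromContext(ctx, "checkout")
	defer span.Finish()

	span.SetTag("user.id", "1234") // tags make spans searchable in the Jaeger UI
	chargeCard(ctx)
}

func chargeCard(ctx context.Context) {
	// Child span: Jaeger renders this nested under "checkout", which is
	// how latency gets attributed to a specific downstream call.
	span, _ := opentracing.StartSpanFromContext(ctx, "charge-card")
	defer span.Finish()
	// ... call the payment service here ...
}

func main() {
	// In production: build a Jaeger tracer and call
	// opentracing.SetGlobalTracer(tracer) before serving traffic.
	handleCheckout(context.Background())
}
```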

Another powerful thing you gain with Istio is the ability to collect metrics. Metrics are key to understanding historically what has happened in your applications, and when they were healthy compared to when they were not. A service mesh can gather telemetry data from across the mesh and produce consistent metrics for every hop. This makes it easier to quickly solve problems and build more resilient applications in the future.

We use and recommend Prometheus for gathering metrics for several reasons (see the sketch after this list):

  • Pull model
  • Flexible query API
  • Efficient storage
  • Easy integration with Grafana
  • CNCF project and active OS community
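
To see the pull model concretely, here is a minimal Go sketch using the official client_golang library; the metric name and port are illustrative. The app only exposes a /metrics endpoint, and the Prometheus server scrapes it on its own schedule; the app never pushes anything.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A counter vector, partitioned by handler, counting requests served.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myapp_http_requests_total",
		Help: "Total HTTP requests served.",
	},
	[]string{"handler"},
)

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.WithLabelValues("hello").Inc()
		w.Write([]byte("hello\n"))
	})

	// Pull model: Prometheus scrapes this endpoint on its own schedule.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```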

Check out the recent CNCF webinar on this topic for a deeper look into what you can do with these tools and more.

RackN: How cloud native is redefining the industry

RackN co-founded the bare metal provisioning project Digital Rebar. The company has been involved in CNCF since the early days, following how cloud native technology enables developers to deliver product and code in an efficient and productive way.

Looking at the operator side and redefining the operational components of infrastructure, CNCF brings API-driven, immutable thinking into infrastructure, making it cleaner and easier to understand. The benefits are powerful, and Rob Hirschfeld feels we are on the edge of redefining the ROI of the data center.

In this CNCF Member Video, Rob Hirschfeld, CEO and Co-Founder of RackN, goes into detail on the benefits of CNCF and cloud native technologies, building community at a KubeCon event, and why participating in KubeCon is important to RackN.

Meet the Ambassadors: Kris Nova

Kris Nova, Heptio, sat down with Kim McMahon, Cloud Native Computing Foundation, to talk about cloud native, trends in the industry, and being an Ambassador. Below is their interview. You can also view the video.

Kim: Thank you for joining us for this Ambassador Interview. Kris, can you do a quick introduction?

Kris: Hi. I’m Kris Nova. I work for Heptio. I’m a developer advocate and contribute to open source Kubernetes, which I have been doing for two years. Last year, I joined CNCF as an ambassador and here I am.

Kim: Wonderful. Let’s start with a little bit about you. What do you like to do in your free time?

Kris: So when I’m not contributing to open source, which is a lot of my free time, I would say I’m usually climbing mountains. Mountain climbing, mountaineering.

Kim: Have you found much of that here?

Kris: Here, in Denmark, no, which I was kind of bummed. Every time there’s a tech conference, I’m always like, “Hrmm, I wonder if there’s any really great mountains nearby I could go climb.” There’s not much here, but Seattle (KubeCon North America 2018) has some good mountains.

Kim: I read tech news, you probably read tech news. What’s a good source for tech news that you’ve found, learning about the industry, or that you’re sharing?

Kris: My Twitter feed. 🙂

I sit next to Joe (Beda) at the office and he is a great resource. I feel like I constantly get a lot of news delivered to me on a silver platter because he usually goes through and finds all the good stuff and just tells me about it. I spend a lot of time on Reddit. I’m a moderator for the Golang Sub-Reddit.

I’m active in the Kubernetes Reddit as well. Twitter is a great source. A lot of my friends do the same thing where we’ll get hand picked tech articles that come our way on Twitter. I’m pretty selective about who I follow. So I think a lot of it comes through there.

Kim: Let’s get into a little bit of tech and talk about innovations and what do you see as next big things coming up in cloud native, containers, or open source.

Kris: I’m so narrow minded because I work in a very unique small section of open source, which is the Kubernetes infrastructure layer, what little bit of it is there. But I think the next big thing for me is taking this idea that we can write cloud native software, like Kubernetes, to manage and mutate underlying infrastructure, and building out Kubernetes self-deployment and Kubernetes autoscaling primitives with some of the work we’re doing upstream, like cluster lifecycle.

I wrote a book on it called “Cloud Native Infrastructure” and it goes really deep down the rabbit hole on a lot of this stuff. But I’d say over the next year or so we could really start to see the software layers coming in and making infrastructure operators’ jobs much more elegant.

Kim: So is that taking it a step closer to the enterprise or do you think you’re there already?

Kris: I think we’re there philosophically or we have the ideas in place and I think we’ve all kind of done enough on our own to demonstrate that this stuff works. It’s now a matter of coming together as a community and saying, “Okay. How do we as an open source community, under CNCF, want to go through and actually solve these problems? And what is that going to look like? And what is the software tooling going to look like?” We’re at the proposal phase, which is the most exciting phase because that’s when everybody just gets to talk and say whatever is on their mind. I’m really happy right now.

Kim: You also mentioned that a lot of your time is taken up with things in the community.

Kris: I’m fortunate enough to where a lot of my day-to-day job is keeping up with the community. We have a lot of common interests as well. It’s kind of like a match made in heaven. So it works well together.

Kim: You mentioned keeping up with the community. What kind of things do you do besides the SIG that we talked about and contributing code? You’re the developer advocate so are there certain things that you really like doing in that role?

Kris: I’m the only developer advocate right now (at Heptio) and I came over from developer advocacy at Microsoft so the two ends of the spectrum where it’s an entire army of developer advocates versus like just little old me. A lot of what I’m doing is understanding what’s going on in the community and then what is the user experience like for people not only running the tools and using the software, but creating the software.

What that looks like and then advocating that back to the community, back to Heptio, connecting that together, and being that liaison. A lot of that even spans the scope of these open source meetings. Here at KubeCon, at other tech conferences, it’s at Meetups, kind of anywhere in the wild. Twitter is a great resource as well.

Kim: Wow. That’s great. So people are connecting at events, conferences, Meetups, and Twitter.

Kris: And I’m like the garbage collector. I just take all this noise, sleep on it, go climb a mountain, think about it, and, oh, I can make sense of it now and there’s this really important revelation I just had. And then go deliver that back to the community or our company.

Kim: We’re on the topic of cloud native. Tell me, what does cloud native mean to you and how would you explain it to somebody who’s new to this field?

Kris: That’s such a great question. What does cloud native mean to me? Well, I have a really weird definition, but I’m just going to say it because I feel like some people get a kick out of it. When I hear cloud, I think HTTP, or probably HTTPS, which to me represents the REST APIs and the interfaces that cloud providers put in front of whatever infrastructure or services you’re mutating.

So whenever I think cloud, I think because it’s good old HTTP, I can interact with that with software. Instantly it goes from me connecting wires in a server room to `curl -SSL`. It’s now this automatable thing. And native to me sort of implies that your application is reaction driven. We’re looking at our application within the context of the new way of doing things over HTTP and asking what this new application needs to look like.

What form does it come in? How is it different than before? What does the new shape of the application look like? Writing a cloud native application, to me, usually means writing it from scratch. Although there’s this really interesting question of migration: how do we take an application that wasn’t designed to be run in this new way, where everything is exposed over an interface, and actually go and migrate it and turn it into a cloud native app?

So cloud native, it’s like this new way of thinking about apps and it just introduces a lot of new paradigms.

Kim: It does, yeah. Definitely. So you’re an ambassador. Why did you join? What do you like about the Ambassador Program?

Kris: I do a lot of work directly with folks at the Linux Foundation, folks at CNCF for the diversity committee, and I’m involved with the whole diversity scholarship effort in general. I’m constantly advocating for CNCF as part of my day job. I think it was just kind of a natural fit. I was doing everything that an Ambassador would normally have done without the actual Ambassador title.

Kim: Excellent. I have a couple of word associations for you. Ski or ride?

Kris: Technically I snowboard. So I guess it would be ride. In mountaineering terms, I’m very strongly against skiing or riding, because if you walk up, you’re breaking ethics if you ski down. You’re cheating. You’ve got to walk down. That’s part of the game. There’s been a couple times I brought my snowboard up with me. But usually I’m like, “Nah. We gotta walk back down.”

Kim: Mountain climbing – indoors or outdoors?

Kris: Outdoors. Definitely.

Kim: Do you have a favorite place?

Kris: My favorite place to climb? So I just moved from Colorado, and there are 58 mountains there called the 14ers: 58 mountains that are 14,000 feet or taller. I climbed all of those and my favorite one is Little Bear. I free soloed Mount Rainier, which means I climbed it with no ropes or anyone else with me. And that was the most intense climb I’ve ever done in my life. So that’s probably my new favorite place.

Kim: How long did that take?

Kris: I was up there for three days.

Kim: Wow. Good for you. The last question – hiking or literally anything other than hiking?

Kris: I hate hiking. And I have a whole blog article on this.

Kim: I know. I read the blog article. It was a really good blog article. And thank you so much for talking about the Ambassador Program and cloud native.

Kris: Sure. Absolutely.

CNCF Joins Google Summer of Code 2018 With Projects Envoy Proxy, Containerd, CoreDNS, Prometheus, Kubernetes, and Rook

Since 2005, the Google Summer of Code (GSoC) program has accepted thousands of university students from around the world to spend their summer holiday writing code and learning about the open source community. This year GSoC accepted 1,264 students from 62 countries into the program to work with 206 open source organizations. Going on 14 years, the program has accepted over 13,000 students from 108 countries who have collectively written more than 33 million lines of code for over 608 open source projects.

Accepted students have the opportunity to work with a mentor, becoming part of an open source community. The Cloud Native Computing Foundation (CNCF) is proud to be one of these organizations, hosting seven interns this summer. Mentors are paired with interns to help advance the following CNCF projects: Envoy Proxy, CoreDNS, containerd, Prometheus, Rook, and Kubernetes.

“We are really pleased to participate in GSoC again this year with seven interns working on six projects that showcase a range of cloud native technologies. From what we have seen, the impact this program has on students, projects, and the open source community as a whole is immense. We look forward to watching the progress and excitement continue to grow through the summer.” – Chris Aniszczyk, CTO, Cloud Native Computing Foundation (CNCF)

Additional details on the CNCF projects, mentors, and students can be found below. Coding goes through August 16th and we’ll report back on progress in a few months.  

Envoy Proxy

Extending Envoy’s fuzzing coverage

Student: Anirudh Murali, Anna University (India)

Mentors:

  • Matt Klein, Lyft
  • Constance Caramanolis, Lyft
  • Harvey Tuch, Google

Envoy is getting fuzz testing support. This project focuses on extending the fuzz coverage, including proto, data plane, and H2-level frame fuzzing.

CoreDNS

Conditional Name Server Identifier – CoreDNS

Student: Jiacheng Xu, École Polytechnique Fédérale de Lausanne (Switzerland)

Mentors:

  • Miek Gieben, Google
  • Yong Tang

In distributed TensorFlow, identifying the nodes without domain name collision is a big challenge. CoreDNS supports the DNS Name Server Identifier (NSID), which allows a DNS server to identify itself. CoreDNS can be deployed on every node in a distributed TensorFlow cluster to solve this problem. There are two ways to achieve this goal: (1) set up a distributed key-value store like ZooKeeper or etcd, or (2) assign each node an order based on the timestamp. Jiacheng’s GSoC work aims to implement one of the approaches above.

containerd

Integrate containerd with Kata Containers

Student: Jian (Anthony) Liu, Zhejiang University (China)

Mentor:

  • Harry Zhang, Microsoft

The project aims at creating a containerd-kata runtime plugin for containerd to integrate with Kata Containers. The integration enables containerd and its users (Docker & Kubernetes) to enjoy the security and multi-tenancy brought by Kata Containers as well as the native Linux container experience brought by containerd’s existing Linux runtime plugin.

Prometheus

Building a testing and benchmarking environment for Prometheus

Student: Harsh Agarwal, IIT Hyderabad (India)

Mentors:

  • Krasi Georgiev, Red Hat
  • Goutham Veeramachaneni, Prometheus contributor

This project aims to benchmark Prometheus and test Prometheus’s Kubernetes and Consul service discovery in an automated, real-time environment. This will help recognize bugs before new releases are cut and also confirm the robustness of new releases.

Prometheus

Composite Label Indices & Alerts Rule Testing

Student: Ganesh Vernekar, CSE Undergrad at IIT Hyderabad (India)

Mentor: Goutham Veeramachaneni, Prometheus contributor

Alerting is an important feature in monitoring when it comes to maintaining site reliability, and Prometheus is widely used for this. Hence it becomes very important to be able to check the correctness of alerting rules. Prometheus lacks a good and convenient way of visualizing and testing alert rules before they are put to use.

There are many long standing issues and feature requests regarding the above, and this project aims to solve some of them.

Rook

Add Network File System (NFS) as a Rook storage backend

Student: Rohan Gupta, University of Engineering and Management (UEM), Jaipur, Rajasthan (India)

Mentors:

  • Jared Watts, Upbound
  • Travis Nielsen, Red Hat

Rook is an open source orchestrator for distributed storage systems running in Kubernetes. Rook is currently in alpha state and has focused initially on orchestrating Ceph on top of Kubernetes. There is no option for Network File System (NFS) yet. This project aims to add NFS as another storage backend.

Kubernetes

Storage API for Aggregated API Servers

Student: Marko Mudrinić, University of Belgrade (Serbia)

Mentors:

  • David Eads
  • Stefan Schimanski

Kubernetes offers two ways to extend the core API: by using CustomResourceDefinitions or by setting up an aggregated API server. With these, users don’t need to modify the core API in order to add the features needed for their workflow, which in turn keeps the core API more stable and secure.

One missing part is how to efficiently store data used by aggregated API servers. This project implements a Storage API, with the main goal of sharing the cluster’s main etcd server with the aggregated API servers, allowing them to use the cluster’s main etcd just as they would use their own etcd servers.