In its fifth year participating in Google Summer of Code (GSoC), CNCF is excited to announce 16 interns have graduated from the program after working with the Foundation’s projects. Interns this year contributed to Graduated, Incubating and Sandbox projects including Buildpacks, Envoy, Kubernetes, Thanos, and TiKV.
GSoC is one of several programs CNCF is involved in that help students from around the world get involved with open source projects. Since its creation, GSoC has had over 16,000 students from 111 countries help over 715 open source organizations write over 38 million lines of code.
“CNCF is proud to be one of the many organizations that have had the opportunity to host interns through GSoC. So many of the interns that started with us through organizations like GSoC have become important and valued contributors to our community. Congratulations to this round of successful interns.” – Ihor Dvoretskyi, Developer Advocate, Cloud Native Computing Foundation (CNCF)
Buildpacks
Automatic Buildpack Registry Action Updates (code submission)
Mentee: Mritunjay Sharma
Mentor: Joe Kutner
Buildpacks are pluggable tools that transform source code into OCI-compliant images (Docker images) that can run on any cloud. Buildpacks is a Cloud Native Computing Foundation incubating project that provides a higher level of abstraction than Dockerfiles, removing much of the operational overhead and automating the process of making applications production-ready. Currently, there is no mechanism to tell whether the Buildpack Registry Index and Buildpack Registry Namespace repositories are up to date with the latest GitHub Actions version. This project aims to automate the pull request to the staging index whenever a new version of the github-action repo is released, and to automate staging tests of new action versions.
Buildpacks
Buildpacks Lifecycle Prepare Phase (code submission)
Mentee: Halil İbrahim Ceylan
Mentors: narellano, Joe Kutner
Buildpacks are a kind of framework for preparing a container environment for your apps. The lifecycle is a series of phases that must run one by one, in order. A Lifecycle Prepare phase should make it easier for Platform Implementers to achieve parity with the features of Pack. Today, features like project.toml are only supported by Pack, and a new platform would need to write its own parser. My goal is to create a new Lifecycle phase and associated binary that will be available to Platform Implementers and executed by Pack.
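As an illustration of the parsing burden a new platform currently carries, here is a minimal, hypothetical Go sketch of a platform reading a few project.toml fields itself; the struct covers only a small subset of the project descriptor, and the exact field layout should be checked against the Buildpacks spec.

```go
// Minimal sketch of a platform parsing a subset of project.toml on its own.
// Not lifecycle or Pack code; fields shown are a partial, assumed layout.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type projectDescriptor struct {
	Project struct {
		ID      string `toml:"id"`
		Name    string `toml:"name"`
		Version string `toml:"version"`
	} `toml:"project"`
	Build struct {
		Env []struct {
			Name  string `toml:"name"`
			Value string `toml:"value"`
		} `toml:"env"`
		Buildpacks []struct {
			ID      string `toml:"id"`
			Version string `toml:"version"`
			URI     string `toml:"uri"`
		} `toml:"buildpacks"`
	} `toml:"build"`
}

func main() {
	var d projectDescriptor
	// Read and decode the descriptor from the application directory.
	if _, err := toml.DecodeFile("project.toml", &d); err != nil {
		panic(err)
	}
	fmt.Printf("app %q: %d buildpack(s), %d build env var(s)\n",
		d.Project.Name, len(d.Build.Buildpacks), len(d.Build.Env))
}
```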
Buildpacks (code submission)
Mentee: Faith Ates
Mentor: J Romero
Cloud Native Buildpacks’ primary function is to turn source code into a runnable image, so it is natural for them to be used within common CI/CD platform pipelines. This project creates a pipeline plugin that makes it easier for users to use Cloud Native Buildpacks within Jenkins.
Envoy
Adaptive Load Control and Distributed Load Testing of Envoy Data Planes (code submission)
Mentee: Dhruv P
Mentor: Lee Calcote
Users configuring their Envoy-based data planes don’t know how to find the optimal Envoy configuration given their workload’s resiliency and performance requirements. Nighthawk, Envoy’s load generator, supports adaptive load control and horizontally distributed scaling of itself. Using distributed load testing and the creation of a set of adaptive load controllers, Envoy users can be empowered with repeatable tooling to automate identification of an optimal Envoy data plane configuration.
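To make the idea of an adaptive load controller concrete, here is an illustrative sketch (not Nighthawk's actual controller): a simple binary search over request rate that keeps the highest rate still meeting a latency objective. The runLoadTest function is a hypothetical stand-in for driving the load generator and collecting results.

```go
// Illustrative adaptive load controller: binary search for the highest RPS
// whose observed p99 latency stays under a given objective.
package main

import "fmt"

// runLoadTest would drive the target at `rps` and return the observed p99
// latency in milliseconds; here it is a fake response curve for illustration.
func runLoadTest(rps int) (p99Millis float64) {
	return 5 + float64(rps)/100 // hypothetical latency model
}

// findMaxRPS searches for the highest rate that still meets the latency SLO.
func findMaxRPS(lo, hi int, sloMillis float64) int {
	best := lo
	for lo <= hi {
		mid := (lo + hi) / 2
		if runLoadTest(mid) <= sloMillis {
			best = mid
			lo = mid + 1 // latency OK: try a higher rate
		} else {
			hi = mid - 1 // SLO violated: back off
		}
	}
	return best
}

func main() {
	fmt.Println("max sustainable RPS:", findMaxRPS(100, 20000, 50))
}
```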
in-toto
Develop in-toto-rs (Rust) for integration with rebuilderd (code submission)
Mentee: Qijia “Joy” Liu
Mentors: Aditya Sirish and Santiago Torres Arias
rebuilderd is a verification system for binary packages. It repeats the build process of a package in an identical environment and verifies that the resulting package is identical to the published one. It is part of the Reproducible Builds effort and can currently be used to rebuild Arch Linux packages. The rebuild should optionally generate in-toto link attestations that can be used to verify the entire process. To that end, the nascent in-toto-rs library must be developed to enable this integration with rebuilderd.
Kubernetes
Improve the usability of cert-manager on multiple cloud providers (code submission)
Mentee: Arsh Sharma
Mentor: jakexks
cert-manager is a Kubernetes add-on used extensively to automate the management and issuance of certificates from various issuers.
This project aims to capture, with the help of improved e2e testing, the problems users often run into when deploying cert-manager on managed Kubernetes solutions across different clouds.
Kubernetes
Make it easy to install and verify the installation of cert-manager (code submission)
Mentee: Tim Ramlot
Mentor: Richard Wall
cert-manager can be installed using Helm or with regular Kubernetes manifests, but when using regular manifests it is difficult to customize the deployment. A kubectl cert-manager install command will make it easier for non-Helm users to customize the configuration of cert-manager, and it will provide a way to see all the configuration options for a cert-manager deployment from the command line (i.e. by running kubectl cert-manager install --help). It will provide an option to wait for and verify the deployment, perhaps by integrating the code of cert-manager-verifier. And it may become the foundation of a future kubectl cert-manager upgrade command, which will help users safely upgrade and downgrade between cert-manager versions.
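For a rough idea of the shape such a command could take, here is a minimal, hypothetical sketch of a kubectl plugin subcommand built with spf13/cobra; the flag names and the install logic are placeholders, not the plugin's actual interface.

```go
// Hypothetical sketch of an "install" subcommand for a kubectl plugin binary.
// Flag names and behavior are illustrative only.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var wait bool
	var version string

	install := &cobra.Command{
		Use:   "install",
		Short: "Install cert-manager into the current cluster",
		RunE: func(cmd *cobra.Command, args []string) error {
			// A real implementation would render the manifests (or drive a
			// Helm library), apply them, and optionally verify the rollout.
			fmt.Printf("installing cert-manager %s (wait=%v)\n", version, wait)
			return nil
		},
	}
	install.Flags().BoolVar(&wait, "wait", true, "wait for and verify the deployment")
	install.Flags().StringVar(&version, "version", "latest", "cert-manager version to install")

	// kubectl plugin binaries are named kubectl-<name>, with dashes as underscores.
	root := &cobra.Command{Use: "kubectl-cert_manager"}
	root.AddCommand(install)
	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```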
Kubernetes
Web-based simulator for scheduler behaviour (code submission)
Mentee: Kensei Nakada
Mentor: Adhityaa Chandrasekar
In a real Kubernetes cluster, we cannot see the results of scheduling in detail without reading the logs, which usually requires privileged access to the control plane. Therefore, we have developed a simulator for kube-scheduler that lets you see the results of each plugin.
With the kube-scheduler simulator, you can create resources, see where a new pod is placed, and check in detail how the scheduling decision was made.
It can be used to learn about the Kubernetes scheduler or to examine the detailed behavior of its plugins.
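As a rough sketch of that workflow, the snippet below uses client-go to create a pod against a kube-apiserver-compatible endpoint and then reads back where it was placed; the simulator address and the assumption that per-plugin results surface as annotations are hypothetical and should be checked against the simulator's docs.

```go
// Sketch: create a pod against a simulated API endpoint, then inspect the result.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Point client-go at the simulator's endpoint instead of a real cluster.
	cfg := &rest.Config{Host: "http://localhost:3131"} // hypothetical address
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}

	ctx := context.Background()
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// After the simulated scheduler runs, re-read the pod to see the assigned
	// node and any result annotations the simulator attached.
	scheduled, err := client.CoreV1().Pods("default").Get(ctx, "demo-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned node:", scheduled.Spec.NodeName)
	for k, v := range scheduled.Annotations {
		fmt.Printf("%s = %s\n", k, v)
	}
}
```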
KubeVela
Merge Crossplane/OAM-Runtime with KubeVela by specifying the version (code submission)
Mentee: Yeshuai Cui
Mentor: Jianbo Sun
Open Application Model [OAM] is a runtime-agnostic specification for defining cloud native applications and enabling the building of app-centric platforms by default. KubeVela is a modern application engine that adapts to your application’s needs, not the other way around, and is now a standard implementation of OAM. This project merges the OAM Kubernetes runtime into KubeVela and distinguishes versions through a program start option. The final goal is that, once this project is complete, users will be able to select any version they want to install.
LitmusChaos
Authentication Module Refactor and OAuth Implementation (code submission)
Mentee: Hemanth Krishna
Mentor: Raj Babu Das
The authentication server of the Litmus portal (which resides in the litmus-portal folder of the main litmus repository) is written in Go, uses certain outdated dependencies (such as mgo), and currently does not support third-party OAuth authentication modules such as:
- Google Authentication
- GitHub Authentication
- Local Authentication
The current implementation of the authentication server also consumes more resources than the litmus portal’s GraphQL server.
This proposal shall focus on re-writing the Authentication Module of the litmus-portal so that it achieves the following:
- The Authentication Module is Independent (Can be moved to a separate repository)
- The Module is light-weight and makes use of actively maintained dependencies (if any)
- The Authentication Server is robust and flexible to any future addition of features
- The Authentication Server supports OAuth authentication such as Google Auth and GitHub Auth
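For context on what OAuth support like GitHub Auth typically involves, here is a minimal, hypothetical Go sketch using golang.org/x/oauth2; it is not the litmus-portal implementation, and the client ID, secret, and callback path are placeholders.

```go
// Sketch of a GitHub OAuth login flow; illustrative only.
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/oauth2"
	githuboauth "golang.org/x/oauth2/github"
)

var conf = &oauth2.Config{
	ClientID:     "GITHUB_CLIENT_ID",     // hypothetical placeholder
	ClientSecret: "GITHUB_CLIENT_SECRET", // hypothetical placeholder
	RedirectURL:  "http://localhost:8080/callback",
	Scopes:       []string{"read:user"},
	Endpoint:     githuboauth.Endpoint,
}

func main() {
	http.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		// Redirect the user to GitHub's consent page.
		http.Redirect(w, r, conf.AuthCodeURL("state"), http.StatusFound)
	})
	http.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
		// Exchange the authorization code for an access token, which an auth
		// module would then map to a portal session or JWT.
		token, err := conf.Exchange(r.Context(), r.URL.Query().Get("code"))
		if err != nil {
			http.Error(w, err.Error(), http.StatusUnauthorized)
			return
		}
		fmt.Fprintf(w, "authenticated, token type: %s", token.TokenType)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```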
Meshery
SMI Support for Multi-Cluster Operations (code submission)
Mentee: Piyush Singariya
Mentors: Navendu Niranjan P and Lee Calcote
Currently, SMI doesn’t provide connectivity over multiple Kubernetes clusters. It is becoming common for users of a service mesh to have more than one cluster.
What if service meshes could be connected, along with associated features like observability, control, and security, even when they are managed by different organizations?
This project focuses on providing the ability to discover and authenticate traffic across different service meshes, where each mesh is in a different and untrusted administrative domain, can be from the same or different vendors, can have the same or different control and data plane implementations, can be single- or multi-cluster, and can provide the same or different functionality to its customers.
Extending multi-cluster capabilities depends mainly on two core features, which together provide the ability to discover and authenticate traffic across different service meshes:
- Service Catalog Federation
- Identity Federation
OpenEBS
Update mount-points and capacity of Block Devices without restarting NDM (code submission)
Mentee: Z0marlin
Mentor: akhilerm
The Node Disk Manager (NDM) daemonset runs on every node in the Kubernetes cluster and discovers and monitors the various storage devices connected to that node. It exports these devices as BlockDevice (BD) custom resources on the Kubernetes cluster, which are then used by other OpenEBS stack components. NDM currently supports the detection of various storage devices connected to the node. However, it cannot detect certain changes that may happen to block devices while they are connected to the node. Specifically, NDM cannot detect changes in the mount-point(s) and the filesystem associated with a device, nor a change in the capacity of the block device.
The mount-point(s) of a block device can easily be changed on a system, and it is easy to change the disk size in the cloud too. This causes issues in Kubernetes clusters using OpenEBS, as changes in the device properties are not reflected immediately in the BD resource.
This proposal aims to solve the issue above by adding functionality in the NDM to detect changes in the mount-points and capacity of supported block devices and propagate them immediately to the corresponding BD resource.
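As a standalone illustration (not NDM code) of where this information comes from on Linux, the sketch below reads a device's capacity from /sys/class/block/&lt;dev&gt;/size (reported in 512-byte sectors) and its mount points from /proc/self/mounts; the device name is hypothetical.

```go
// Sketch: read block device capacity and mount points from sysfs and procfs.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// capacityBytes returns the device size; sysfs reports it in 512-byte sectors.
func capacityBytes(dev string) (uint64, error) {
	raw, err := os.ReadFile("/sys/class/block/" + dev + "/size")
	if err != nil {
		return 0, err
	}
	sectors, err := strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
	if err != nil {
		return 0, err
	}
	return sectors * 512, nil
}

// mountPoints lists where /dev/<dev> is currently mounted.
func mountPoints(dev string) ([]string, error) {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var mounts []string
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[0] == "/dev/"+dev {
			mounts = append(mounts, fields[1])
		}
	}
	return mounts, s.Err()
}

func main() {
	dev := "sda1" // hypothetical device name
	if size, err := capacityBytes(dev); err == nil {
		fmt.Printf("%s capacity: %d bytes\n", dev, size)
	}
	if mounts, err := mountPoints(dev); err == nil {
		fmt.Printf("%s mounted at: %v\n", dev, mounts)
	}
}
```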
Thanos
Automated, Granular TLS client support in Thanos (code submission)
Mentee: Naman Lakhwani
Mentors: Kemal Akkoyun, Bartlomiej Plotka
The Thanos Querier component supports basic TLS configuration for internal gRPC communication. This works well for simple use cases, but it still requires extra forward proxies for bigger deployments, and it is hard to rotate certificates automatically and to configure safe mTLS. This project aims to remove those limitations, enabling a better TLS story for all Thanos metrics APIs!
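As a general illustration of the Go mechanism that makes automatic rotation possible (not Thanos code), the sketch below reloads the client key pair from disk on every TLS handshake via tls.Config.GetClientCertificate; the file paths and the target address are hypothetical.

```go
// Sketch: gRPC client with mTLS whose client certificate is re-read from disk
// on each handshake, so rotated certs are picked up without a restart.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"os"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func dialWithRotatingMTLS(addr string) (*grpc.ClientConn, error) {
	caPEM, err := os.ReadFile("/etc/certs/ca.crt") // hypothetical path
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	tlsCfg := &tls.Config{
		RootCAs: pool,
		// Reload the certificate from disk for each handshake.
		GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
			cert, err := tls.LoadX509KeyPair("/etc/certs/client.crt", "/etc/certs/client.key")
			if err != nil {
				return nil, err
			}
			return &cert, nil
		},
	}
	return grpc.Dial(addr, grpc.WithTransportCredentials(credentials.NewTLS(tlsCfg)))
}

func main() {
	conn, err := dialWithRotatingMTLS("store-gateway:10901") // hypothetical address
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}
```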
Thanos
Smart automation for project documentation and website (code submission)
Mentee: Saswata Mukherjee
Mentor: Bartlomiej Plotka
This project aims to build and implement mdox, a documentation automation CLI tool that keeps project documentation completely up to date by validating remote and local links, formatting markdown according to GitHub Flavored Markdown guidelines, and generating code. This makes the act of maintaining quality documentation much easier and ensures that the documentation is readable from GitHub as well as on the website.
TiKV
High-performance Data Import Tool (code submission)
Mentee: Bingchang Chen
Mentors: Andy Lok and kennytm
Lightning is a tool used to import large amounts of data into TiDB. Aside from the preparation logic and performance improvements, the main logic of Lightning can be divided into two parts:
- Translate SQL files to KV data using different encoders, according to different backends.
- Import the KV data into the TiKV cluster.
These two steps are separate in the logical layer. In the implementation, however, they are coupled. Therefore, to support a KV database like HBase, Lightning would still need to convert KV-like data to SQL first and then translate that SQL back into KV-like data.
I will try to implement this feature by enhancing the capabilities of the local backend, refactoring the table import logic, and so on.
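To illustrate the decoupling described above, here is a hypothetical sketch that separates "encode source rows into KV pairs" from "write KV pairs to a backend"; none of these type or method names come from Lightning itself.

```go
// Sketch of decoupled encoding and ingestion layers; names are illustrative.
package main

import (
	"context"
	"fmt"
)

// KVPair is a single key/value produced by an encoder.
type KVPair struct {
	Key, Value []byte
}

// Encoder turns one source row (e.g. a parsed SQL row) into KV pairs.
type Encoder interface {
	Encode(row map[string]string) ([]KVPair, error)
}

// Backend ingests KV pairs, e.g. an import API or a local SST writer.
type Backend interface {
	WriteKV(ctx context.Context, pairs []KVPair) error
}

// importRows wires the two layers together without either knowing the other.
func importRows(ctx context.Context, enc Encoder, be Backend, rows []map[string]string) error {
	for _, row := range rows {
		pairs, err := enc.Encode(row)
		if err != nil {
			return err
		}
		if err := be.WriteKV(ctx, pairs); err != nil {
			return err
		}
	}
	return nil
}

// naiveEncoder and printBackend are trivial stand-ins for real implementations.
type naiveEncoder struct{}

func (naiveEncoder) Encode(row map[string]string) ([]KVPair, error) {
	var pairs []KVPair
	for k, v := range row {
		pairs = append(pairs, KVPair{Key: []byte(k), Value: []byte(v)})
	}
	return pairs, nil
}

type printBackend struct{}

func (printBackend) WriteKV(_ context.Context, pairs []KVPair) error {
	for _, p := range pairs {
		fmt.Printf("%s => %s\n", p.Key, p.Value)
	}
	return nil
}

func main() {
	rows := []map[string]string{{"id": "1", "name": "alice"}}
	_ = importRows(context.Background(), naiveEncoder{}, printBackend{}, rows)
}
```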
TiKV
TLA+ Spec for Async Commit (code submission)
Mentee: Zhuoran He
Mentors: Andy Lok and Ziqian Qin
The goal of this project is to provide a formal verification, using TLA+, of TiKV’s new “Async Commit” transaction model, which gives us more confidence in this performance-improving change.