
The Secret to Cloud Native Success for adidas? ‘We Love Competition’


A few years ago, it would take as long as a week just to get a developer VM at adidas. For engineers, says Daniel Eichten, Senior Director of Platform Engineering, “it felt like being an artist with your hands tied behind your back, and you’re supposed to paint something.”

To improve the process, “we started from the developer point of view,” and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago.

They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus. Releases went from every 4-6 weeks to 3-4 times a day, and within a year, the team got 40% of the company’s most impactful systems running on the platform. 

How did they do it? One big factor was taking into consideration the company culture; adidas has sports and competition in its DNA. “Top-down mandates don’t work at adidas, but gamification works,” says Cornago. “So this year we had a DevOps Cup competition. Every team created new technical capabilities and had a hypothesis of how this affected business value. We announced the winner at a big internal tech summit with more than 600 people. It’s been really, really useful for the teams.”

Read more about how adidas won at cloud native in the full case study, and watch Fernando and Daniel chat about what’s worked for them in this video:

5 Kubernetes RBAC Mistakes You Must Avoid


Guest post by Connor Gilbert, originally published on StackRox

If you run workloads in Kubernetes, you know how much important data is accessible through the Kubernetes API—from details of deployments to persistent storage configurations to secrets. The Kubernetes community has delivered a number of impactful security features over the years, including Role-Based Access Control (RBAC) for the Kubernetes API. RBAC is a key security feature that protects your cluster by allowing you to control who can access specific API resources. Because the feature is relatively new, your organization might have configured RBAC in a manner that leaves you unintentionally exposed. To achieve least privilege without leaving unintentional weaknesses, be sure you haven’t made any of the following five configuration mistakes.

The most important advice we can give regarding RBAC is: “Use it!” Different Kubernetes distributions and platforms have enabled RBAC by default at different times, and newly upgraded older clusters may still not enforce RBAC because the legacy Attribute-Based Access Control (ABAC) controller is still active. If you’re using a cloud provider, this setting is typically visible in the cloud console or using the provider’s command-line tool. For instance, on Google Kubernetes Engine, you can check this setting on all of your clusters using gcloud:

$ gcloud container clusters list --format='table[box](name,legacyAbac.enabled)'
┌───────────┬─────────┐
│    NAME   │ ENABLED │
├───────────┼─────────┤
│ with-rbac │         │
│ with-abac │ True    │
└───────────┴─────────┘
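
If your cluster isn’t on GKE, a more generic sanity check (a minimal sketch, assuming kubectl access to the cluster) is to confirm that the API server is serving the RBAC API group at all:

$ kubectl api-versions | grep rbac.authorization.k8s.io
# expect rbac.authorization.k8s.io/v1 (and/or v1beta1) in the output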

Once you know that RBAC is enabled, you’ll want to check that you haven’t made any of the top five configuration mistakes. But first, let’s go over the main concepts in the Kubernetes RBAC system.

Overview of Kubernetes RBAC

Your cluster’s RBAC configuration controls which subjects can execute which verbs on which resource types in which namespaces. For example, a configuration might grant user alice access to view resources of type pod in the namespace external-api. (Resources are also scoped inside of API groups.)

These access privileges are synthesized from definitions of the following (a concrete example follows the list):

  • Roles, which define lists of rules. Each rule is a combination of verbs and resource types, and a Role is scoped to a single namespace. (A related noun, ClusterRole, can be used to refer to resources that aren’t namespace-specific, such as nodes.)
  • RoleBindings, which connect (“bind”) roles to subjects (users, groups, and service accounts). (A related noun, ClusterRoleBinding, grants access across all namespaces.)
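
To make these definitions concrete, here is a minimal, hypothetical example (all names are illustrative) that grants the user alice read-only access to pods in the external-api namespace:

$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: external-api
  name: pod-viewer
rules:
- apiGroups: [""]          # "" is the core API group, where pods live
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: external-api
  name: alice-views-pods
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
EOF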

In Kubernetes 1.9 and later, ClusterRoles can be extended to include new rules using the Aggregated ClusterRoles feature.

This design enables fine-grained access limits, but, as in any powerful system, even knowledgeable and attentive administrators can make mistakes. Our experiences with customers have revealed the following five most common mistakes to look for in your RBAC configuration settings.

Configuration Mistake 1: Cluster Administrator Role Granted Unnecessarily

The built-in cluster-admin role grants effectively unlimited access to the cluster. During the transition from the legacy ABAC controller to RBAC, some administrators and users may have replicated ABAC’s permissive configuration by granting cluster-admin widely, neglecting the warnings in the relevant documentation. If users or groups are routinely granted cluster-admin, account compromises or mistakes can have dangerously broad effects. Service accounts typically do not need this type of access either. In both cases, a more tailored Role or ClusterRole should be created and granted only to the specific users that need it.
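
One way to audit this (a rough sketch, assuming kubectl and jq are installed) is to list every subject bound to cluster-admin and review whether each one really needs it:

$ kubectl get clusterrolebindings -o json \
    | jq -r '.items[]
             | select(.roleRef.name == "cluster-admin")
             | .metadata.name as $binding
             | .subjects[]?
             | "\($binding): \(.kind) \(.name)"'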

Configuration Mistake 2: Improper Use of Role Aggregation

In Kubernetes 1.9 and later, Role Aggregation can be used to simplify privilege grants by allowing new privileges to be combined into existing roles. However, if these aggregations are not carefully reviewed, they can change the intended use of a role; for instance, the system:view role could improperly aggregate rules with verbs other than view, violating the intention that subjects granted system:view can never modify the cluster.
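
For illustration, aggregation works by label selection: any ClusterRole carrying a matching label is automatically folded into the aggregated role, which is exactly why a stray label can silently widen a role. A hypothetical sketch (the label and role names are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: []   # the control plane fills this in from matching ClusterRoles
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-endpoints
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"   # this label pulls the rules below into "monitoring"
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
EOF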

Configuration Mistake 3: Duplicated Role Grant

Role definitions may overlap with each other, giving subjects the same access in more than one way. Administrators sometimes intend for this overlap to happen, but it can make it harder to understand which subjects are granted which access. It can also make revocation more difficult if an administrator does not realize that multiple role bindings grant the same privileges.
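
When overlapping grants make it hard to reason about what a subject can actually do, kubectl can report effective permissions directly (a minimal sketch; alice and the namespace are illustrative, and impersonation requires that you hold the impersonate privilege):

$ kubectl auth can-i --list --as=alice --namespace=external-api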

Configuration Mistake 4: Unused Role

Roles that are created but not granted to any subject can increase the complexity of RBAC management. Similarly, roles that are granted only to subjects that do not exist (such as service accounts in deleted namespaces or users who have left the organization) can make it difficult to see the configurations that do matter. Removing these unused or inactive roles is typically safe and will focus attention on the active roles.
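
A rough way to surface cleanup candidates (a sketch, assuming kubectl and bash; it only compares namespaced Roles against RoleBindings whose roleRef is a Role, so review the output before deleting anything):

$ comm -23 \
    <(kubectl get roles --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' | sort -u) \
    <(kubectl get rolebindings --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.roleRef.name}{"\n"}{end}' | sort -u)
# prints namespace/role pairs that no RoleBinding references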

Configuration Mistake 5: Grant of Missing Roles

Role bindings can reference roles that do not exist. If the same role name is reused for a different purpose in the future, these inactive role bindings can suddenly and unexpectedly grant privileges to subjects other than the ones the new role creator intends.
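
Running the same comparison the other way flags these dangling references (again a sketch with the same caveats; comm -13 prints roleRef targets that no Role defines):

$ comm -13 \
    <(kubectl get roles --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' | sort -u) \
    <(kubectl get rolebindings --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.roleRef.name}{"\n"}{end}' | sort -u)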

Summary

There are many security considerations to be aware of when using Kubernetes—are your images, deployments, nodes, and clusters properly locked down?—and Kubernetes RBAC configuration is one of the more important controls for the security of your clusters. Properly configuring your cluster RBAC roles and bindings helps minimize the impact of application compromises, user account takeovers, application bugs, or simple human mistakes. Check your clusters today—have you made any of these configuration mistakes?

Nominations are open for the annual CNCF community awards


The nomination process is now open for the fourth annual CNCF Community Awards. These awards recognize the community members, developers and advocates working hardest to advance cloud native.  

If you know a deserving ambassador, maintainer and/or advocate working hard to advance cloud native innovation, please check out the categories and forms below and nominate them for this year’s awards:

  • CNCF Top Ambassador: A champion for the cloud native space, this individual helps spread awareness of the CNCF (and its incubated projects). The CNCF Ambassador leverages multiple platforms, both online as well as speaking engagements, driving interest and excitement around the project.
  • CNCF Top Committer: This will recognize excellence in technical contributions to CNCF and its projects. The CNCF Top Committer has made key commits to projects and, more importantly, contributes in a way that benefits the project neutrally as a whole.
  • Chop Wood/Carry Water Award: This is given to a community member who helps behind the scenes, dedicating countless hours of their time to open source projects + completing often thankless tasks for the ecosystem’s benefit. The winner of this award will be chosen by the TOC and CNCF Staff.

Previous winners of the Community Awards include Michael Hausenblas, Jordan Liggitt, Diane Mueller, Jorge Castro, Paris Pittman and many more!

Nominations are now open, and will close on October 2nd. We’ll be announcing the awards at KubeCon + CloudNativeCon North America 2019 in San Diego on November 19.

Voting, which will open on October 2nd, will be performed using the CIVS tool, with emails drawn from the CNCF database for the following groups:

  • CNCF Ambassadors are eligible to vote for the Top Ambassador
  • CNCF Maintainers are eligible to vote for the Top Committer

Nominate now, and be sure to register for KubeCon + CloudNativeCon 2019 to find out the winners!

How Bloomberg Achieves Close to 90-95% Hardware Utilization with Kubernetes


If you work in financial services, the Bloomberg Terminal is probably your best friend. Behind the scenes, every day Bloomberg deals with hundreds of billions of pieces of data coming in from the financial markets and millions of news stories from hundreds of thousands of sources. There are 14,000 different applications on the Terminal alone. At that scale, delivering information across the globe with high reliability and low latency is a big challenge for the company’s Engineering department of more than 5,500 people.

In recent years, the infrastructure team has worked on delivering infrastructure as a service while spinning up a lot of VMs and scaling as needed. “But that did not give app teams enough flexibility for their application development, especially when they needed to scale out faster than the actual requests that are coming in,” says Andrey Rybka, Head of the Compute Architecture Team in Bloomberg’s Office of the CTO. “We needed to deploy things in a uniform way across the entire network, and we wanted our private cloud to be as easy to use as the public cloud.”

In 2016, Bloomberg adopted Kubernetes—when it was still in alpha—and has seen remarkable results ever since using the project’s upstream code. “With Kubernetes, we’re able to very efficiently use our hardware to the point where we can get close to 90 to 95% utilization rates,” says Rybka. Autoscaling in Kubernetes allows the system to meet demands much faster. Furthermore, Kubernetes “offered us the ability to standardize our approach to how we build and manage services, which means that we can spend more time focused on actually working on the open source tools that we support,” says Steven Bower, Data and Analytics Infrastructure Lead. “If we want to stand up a new cluster in another location in the world, it’s really very straightforward to do that. Everything is all just code. Configuration is code.”
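
The case study doesn’t publish Bloomberg’s manifests, but the autoscaling described here is typically wired up with a HorizontalPodAutoscaler; a minimal, generic sketch (the deployment name and thresholds are illustrative):

$ kubectl autoscale deployment my-app --min=3 --max=50 --cpu-percent=70
# the HPA then adds or removes replicas to track average CPU utilization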

Read more about Bloomberg’s cloud native success story in the full case study.

Kubernetes IoT Edge WG: Identifying Security Issues at the Edge


With IoT and Edge computing emerging as viable options for building and deploying applications, more and more developers want to use Kubernetes and other cloud native technologies outside of typical data center deployments. These developers want to be able to use all of the tools and best practices they are used to, even in non-traditional environments.

For this reason, the community formed the Kubernetes IoT Edge Working Group, a cross-SIG effort currently sponsored by sig-networking and sig-multicluster with a focus on improving Kubernetes IoT and Edge deployments. Within the working group, community members are encouraged to share their ideas to push forward cloud native developments in IoT and Edge technologies. 

IoT and Edge applications by design have a lot of distributed components that don’t usually sit together within the same data center infrastructure. For this reason, there are a lot of potential security implications to take into consideration when using these technologies.

The Kubernetes IoT Edge Working Group has developed a new whitepaper that lays out these security challenges in a single document. The purpose of the whitepaper is to identify a comprehensive list of edge security challenges and concerns that the CNCF and Kubernetes communities should recognize.

In publishing the whitepaper, the working group hopes to:

  • Identify a set of universal security challenges at the edge (covering roughly 80% of the total security concerns for all use cases).
  • Describe each security challenge in a manner that allows all professionals with a moderate level of technical skill to understand the issue.

“With the proliferation of IoT and Edge computing, it’s important that we as a community take steps toward ensuring these new technologies are as secure as they can be,” said Dejan Bosanac, Senior Software Engineer at Red Hat and chair of the IoT Edge working group. “As with any emerging technology, there are blind spots, and we want to identify these so that the community can work together to resolve these before they can be used maliciously. We’re excited to work on this with the community, giving us more eyes to identify potential issues, and more brainpower to identify solutions.”

The whitepaper covers potential security implications of IoT and Edge implementations in the following areas:

  • Trusting hardware
  • Trusting connected devices
  • Within the operating system
  • Network concerns
  • Edge microservices

Since the types of security challenges and layers at which they occur are varied, producing secure edge computing stacks will require the effort of many vendors and contributors working in concert. We hope this whitepaper will help encourage immediate effort from the community to resolve the identified issues!

How T-Mobile Is Leveraging Kubernetes to Handle iPhone-Launch Scale


Were you ready for the launch of the new iPhone? Thanks to Kubernetes, T-Mobile was.

In 2015, it took T-Mobile seven months to get new code to production. That certainly wasn’t the speed of delivery that the third-largest wireless carrier in the U.S. needed to keep up with its business goals, so the following year, the company adopted Pivotal’s Platform as a Service offering, Pivotal Cloud Foundry. The migration yielded great results: The time to production shrank to less than a day. 

But not all applications, particularly vendor-delivered ones that were shipped to T-Mobile in Docker containers, ran smoothly on PaaS during updates. The company needed an orchestrator, and the main requirements were high availability at every level, persistent storage, and the ability to patch and upgrade the infrastructure seamlessly without any impact to customers.

“Kubernetes checked a lot of those boxes,” says James Webb, Member of Technical Staff, and by that point, “it had become the dominant force.” The team spent six months working with an outside company to build a completely open source Kubernetes platform for T-Mobile, but when Pivotal rolled out PKS, they decided to switch over. “We deploy Cloud Foundry in a very specific way, and if we could do the same thing with Kubernetes, that gives us a lot of efficiencies in terms of how we operate, the automation we build, the monitoring we do,” says Brendan Aye, Director, Platform Architecture. “It was win-win-win.” 

During the new iPhone launch last September—the beginning of the peak retail season for T-Mobile—a small amount of production traffic was running on Kubernetes. Soon the company was able to do 95% of deployments in daytime with zero impact, and for development teams, getting a new database went from 5 days to 5 seconds.

This September, Aye says, “we’ll have a huge portion of apps, especially in the sales path for iPhone, running on Kubernetes.”

Read more about T-Mobile’s adoption of Kubernetes for iPhone-launch scale in the full case study.

CNCF Technical Principles and Open Governance Success


As CNCF approaches its fourth anniversary, the community has grown to sustain over 30 projects and 450 members. It has become one of the most successful open source organizations in terms of impact, as you can see from the 2018 annual report and the Kubernetes Project Journey Report, published this week.

It’s important to highlight the role that neutral and open governance has played in that impact, especially since research has shown that neutral foundations can promote growth and community better than other approaches.

The CNCF Technical Oversight Committee (TOC) defines a set of principles to steward the technical community of projects. The most important principle is minimum viable governance, which enables projects to be self-governing. TOC members are available to provide guidance to the projects but do not control them.

CNCF does not require its hosted projects to follow any specific governance model by default. Instead, CNCF specifies that graduated projects need to “[e]xplicitly define a project governance and committer process.” This differs from other open source organizations like the ASF with the Apache Way or the Eclipse Foundation with its Eclipse Development Process. This varied and open governance approach has led different projects to define what is best and optimized for their own communities.

These governance documents also aren’t static; they evolve over time to meet the needs of their communities. For example, when containerd joined CNCF, its governance was geared towards a BDFL approach, but over time it evolved to a more neutral approach that spread authority across maintainers.

Neither the CNCF Governing Board (GB) nor the TOC is responsible for managing CNCF-hosted projects. Instead, the maintainers of those projects manage them; this includes defining the governance and operations. CNCF offers a variety of services to our hosted projects, but maintainers decide which they want to accept.

At the end of the day, the CNCF believes in building and sustaining healthy open source communities. One of the most important services we offer is neutrality. Specifically, organizations are less willing to adopt and contribute to a project when the trademark, domain, and/or repository – which provide the ultimate control – are owned by another company rather than a foundation. A neutral home increases the willingness of developers from other organizations to collaborate, contribute, and eventually become leaders in the project.

Announcing the CNCF Kubernetes Project Journey Report


Today we are very excited to release our first Project Journey Report for Kubernetes. This is the first of several such reports we’ll be issuing for CNCF graduated projects. Here’s the backstory.

The largest CNCF-hosted project is Kubernetes. It is the most widely used container orchestration platform today, often described as the “Linux of the cloud”. CNCF’s efforts to nurture the growth of Kubernetes span a wide range of activities, from organizing and running the enormously successful KubeCon + CloudNativeCon events to creating educational MOOCs and end user communities to certifying that different versions of Kubernetes are conformant. We even underwrite security audits. All of this is funded by CNCF’s membership dues and revenues from sponsorship and registration at our conferences.

We wanted to create a series of reports that help explain our nurturing efforts and some of the positive trends and outcomes we see developing around our hosted projects. This report attempts to objectively assess the state of the Kubernetes project and how CNCF has impacted its progress and growth. We recognize that it’s impossible to sort out correlation and causation, but this report attempts to document correlations. For the report, we pulled data from multiple sources, particularly CNCF’s own DevStats tool, which provides detailed development statistics for all CNCF projects.

Some of the highlights of the report include:

  • Actively contributing companies are up over 2,000%, from 15 active contributing companies before Kubernetes joined CNCF to 315 companies contributing to the project today, with several thousand having committed over the life of the project.

  • The number of individual contributors is up over 7x since Kubernetes joined CNCF, from 400 to over 3,000.

  • Code diversity across more and more companies – Google and Red Hat contributed 83% of Kubernetes code prior to the project joining CNCF. Today, Google and Red Hat contribute only 35% of code, even though the number of contributions they make continues to increase. 

Since joining CNCF in 2016, Kubernetes has recorded:

  • 24K contributors
  • 148K code commits
  • 83K pull requests 
  • 1.1M contributions
  • 1,704 contributing companies

What’s more, we think that Kubernetes has a lot more room to run – as do many other CNCF projects that are also growing quickly.

This report is part of CNCF’s commitment to fostering and sustaining an ecosystem of open source, vendor-neutral projects. Please read and enjoy the report, share your feedback with us – and stay tuned for more project journey reports for other projects.

How gRPC Is Enabling Salesforce’s Unified Interoperability Strategy


The leader in customer relationship management software, Salesforce supports more than 150,000 organizations with its customer success platform and other products. 

On the technology side, “the big thing we’re trying to establish is a unified interoperability strategy across the company,” says Ryan Michela, Principal Engineer, Service Mesh Team. “One of the pain points we’ve had with JSON-based integrations in the past is that they require a lot of negotiation on each side and can be brittle to backwards-incompatible changes.”

The company found the solution with gRPC, which it has been using with service mesh for the past two and a half years. “It has been fantastic for distributing service contracts so that teams can have a very well-understood, well-defined interface between each other over the network,” says Michela. Plus, as a binary protocol over HTTP/2, gRPC “has given us more flexibility in designing streaming services and push notification-type services where we wouldn’t be able to do that as easily with HTTP/1.”

Though the impact is hard to quantify, Michela believes that developer velocity has improved as teams have evolved their services while maintaining backwards compatibility. “In a sense, gRPC just works,” he says. “It solves a very specific problem, it solves it well, and it solves it without fuss. We knew we made the right choice because we didn’t have to fight with it.”

Read more about Salesforce’s use of gRPC in the full case study.

CNCF Joins Google Summer of Code 2019 with 17 Interns, Projects for containerd, CoreDNS, Kubernetes, OPA, Prometheus, Rook and More


Since 2005, the Google Summer of Code (GSoC) program has accepted thousands of university students from around the world to spend their summer holiday writing code and learning about the open source community. This year GSoC accepted 1,276 students from 63 countries into the program to work with 201 open source organizations. Now celebrating its 15th year, the program has accepted more than 14,000 students from 109 countries who have collectively written more than 35 million lines of code for 651 open source projects.

Students who are accepted have the opportunity to work with a mentor and become part of an active open source community. CNCF is proud to be one of these communities, hosting 17 interns this summer – our largest class ever. Mentors from our community are paired with interns and work with them to help advance certain aspects of CNCF projects.

“CNCF is a big supporter of GSoC’s mission and we are excited to participate again this year with a record 17 interns showcasing a wide range of cloud native contributions. As open source continues to take over the world, this program has become an important catalyst for students to have an impact on future technologies that we all depend on.” – Chris Aniszczyk, CTO, Cloud Native Computing Foundation (CNCF)

Additional details on the CNCF projects, mentors, and students can be found below. Coding ended August 19th, and we’ll report back on progress soon!

_______

Falco

Project: Falco engine performance analysis and optimization

Student: Mattia Lavacca, Politecnico di Torino (Italy)

Mentors:

This project aims to develop a system to trace and profile Falco engine performance. First, we need to monitor and document Falco’s existing performance constraints; then, using that information, we can improve performance by reducing the impact of the discovered bottlenecks and optimizing the Falco engine. Finally, we’ll provide an analysis of the performance improvements and compare the results to the initial baseline.

containerd

Project: Remote blob store for containerd

Student: Yeshwanth Reddy Karnatakam, Reva University (India)

Mentor:

This project aims to give containerd a remote blob store for image content (layer blobs).

CoreDNS

Project: Support Source IP Based Query Block/Allow in CoreDNS

Student: An Xiao, Zhejiang University (China)

Mentor:

When CoreDNS serves DNS queries publicly or inside Kubernetes clusters, the source IP of the incoming DNS query is an important identity. For security reasons, only certain queries (from specific source IPs or CIDR blocks) should be allowed, to prevent the server from being attacked. The goal of this project is to support a firewall-like source-IP-based block/allow mechanism for CoreDNS. With our plugin (named firewall) enabled, users are able to define ACLs for any DNS queries, i.e., allowing authorized queries to recurse or blocking unauthorized queries towards protected DNS zones.

Kubernetes

Project: Implement volume snapshotting support into the external Manila provisioner

Student: Róbert Vašek, University of Zilina (Slovakia)

Mentor:

OpenStack Manila manages shared file systems across the cloud. Being able to create and access these with ease from the container world is proving quite useful – and that’s what csi-manila is for. One feature in high demand when dealing with shared file systems is taking snapshots, as well as creating new shares from those snapshots, from within container orchestrators like Kubernetes. csi-manila itself is quite a new piece of software and is still missing certain features, snapshots for instance. This GSoC project will try to close this feature gap.

Kubernetes

Project: Add Support for Custom Resource Definitions to the Dashboard

Student: Elijah Oyekunle, Federal University of Technology Akure (Nigeria)

Mentors:

The Kubernetes Dashboard previously supported Third Party Resources (TPR), but these were replaced in Kubernetes by Custom Resource Definitions (CRD). As a result, the original TPR support was removed from the Dashboard, but CRD support has not yet been added. This proposal aims to provide generic support for Custom Resource Definitions in the Dashboard, similar to the previous TPR support.

Kubernetes

Project: Run GPU sharing workloads with Kubernetes + Kubeflow

Student: Jianbo Ma, Zhejiang University (China)

Mentors:

GPUSharing is an open source project that enables GPU sharing by leveraging Kubernetes scheduling and Device Plugin extensibility. This project aims to integrate it with kubeflow/arena.

Kubernetes

Project: Add Plugin Mechanism to the Dashboard

Student: Ajat Prabha, Indian Institute of Technology, Jodhpur

Mentors:

This project aims to introduce a plugin mechanism to the Kubernetes Dashboard. It will deal with defining the plugin framework architecture and its scope, how plugins can enhance the Dashboard UI, and how third-party APIs can be used to extend the Dashboard’s functionality.

Kubernetes

Project: Kubernetes with hardware devices topology awareness at node level

Student: Junjun LI, Zhejiang University (China)

Mentors:

This project aims to improve the current Kubernetes topology manager so that it is aware of generic hardware device topology at the node level. This should significantly improve Deep Learning training, which depends on the data interconnections between NVIDIA GPU devices on a node.

Linkerd and Envoy Proxy

Project: Multi-mesh performance benchmark tool

Student: Shahriyar Mammadov, International Technological University – ITU (USA)

Mentor:

Web services continuously strive to improve performance in order to stay relevant in the market, and performance plays an important role in customer loyalty, SEO ranking and more. Many factors affect performance; having a high-performance proxy in front of web servers is very important, and it can only be achieved through continuous performance measurement and improvement.

Open Policy Agent (OPA)

Project: IPTables Integration with Open Policy Agent (OPA)

Student: Urvil Patel, L.D. College of Engineering (India)

Mentors:

This project involves designing the layout of IPTables rules using OPA’s policy language Rego, implementing the algorithms that generate IPTables rules from that policy, and writing the code that loads the generated IPTables rules into the Linux host.

Prometheus

Project: Extending Prombench and adding rule formatting for Prometheus

Student: Hrishikesh Barman, Girijananda Chowdhury Institute Of Management And Technology (India)

Mentor:

Prombench, the benchmarking tool for Prometheus, will be extended to support even more tests, newer components, and metrics, which will help both developers and users identify bugs and test scalability. Another task we aim to tackle is the longstanding issue of Prometheus rule formatting.

Prometheus

Project: GitHub integrated benchmarking tool for Prometheus TSDB

Student: Vladimir Masarik, Masaryk University (Czech Republic)

Mentor:

This project aims to make it easier to discover database performance problems. Newly introduced performance flaws are hard to notice, and the process of discovering them is cumbersome if done manually. Since Prometheus TSDB does not yet have such a feature, this project is intended to be the solution. The plan is to develop detailed performance tests and automate the process of testing using Prow, the Kubernetes-based CI/CD system with GitHub integration. Moreover, for easy analysis, the results for a benchmarked pull request will be compared against the master branch test results. Fortunately, the foundation for implementing the benchmarks partially exists, and so do some benchmarking tests, which make an excellent start for the project.

Prometheus

Project: Optimize Prometheus queries using regex matchers for set lookups & Postings compression 

Student: Zhiqi Wang, Carnegie Mellon University (USA)

Mentor:

A common use case for regex matchers is to use them to query all series matching a set of label values, e.g. up{instance=~"foo|bar|baz"}. Grafana’s template variables feature is a big user of this pattern. Our goal is to catch this pattern and split it into three different matchers, each selecting one of the three cases, which would make the templated queries produced by Grafana much faster. Postings are lists of numbers that reference the series containing a given label pair; they are used as a reference table to get the requested series. This project aims to research and implement compression for these postings.

Prometheus

Project: Continue the work on low hanging issues in Prombench

Student: Nikita Kokitkar, Pune Institute of Computer Technology (India)

Mentor:

This project aims to help with work that needs to be done to check whether Prow can be replaced by GitHub Actions, to collect metrics without any gaps, and to address other issues labeled as low-hanging fruit.

rkt

Project: Add support for the OCI runtime spec by implementing a runc stage2

Student: Alejandro Germain, University of Hertfordshire (UK)

Mentor:

rkt implements the App Container Executor specification of the appc Container Specification and uses systemd unit properties to implement its features. To implement the OCI runtime spec, systemd unit properties are not suitable, since they differ from what the spec defines. The aim of this project is to replace systemd unit properties with runc in order to implement the OCI runtime spec.

Rook

Project: Enable multiple network interfaces for Rook storage providers

Student: Giovan Isa Musthofa, University of Indonesia

Mentors: 

This project aims to create a new API to enable multiple network interfaces for Rook storage providers. Currently, the only choice Rook providers have is whether or not to use hostNetwork. The new API will be used to define network resources for Rook clusters. Rook operators will be able to consume and manage those definitions, enabling more fine-grained control over storage providers’ network access.

TiKV

Project: Proposal for Auto-tune RocksDB

Student: Yuanli Wang, University of Minnesota (USA)

Mentor:

This project aims to use a machine learning method to tune database configurations automatically.

 
