


Demystifying RBAC in Kubernetes


Today’s post is written by Javier Salmeron, Engineer at Bitnami

Many experienced Kubernetes users may remember the Kubernetes 1.6 release, where the Role-Based Access Control (RBAC) authorizer was promoted to beta. This provided an alternative authorization mechanism to the already existing, but difficult to manage and understand, Attribute-Based Access Control (ABAC) authorizer. While everyone welcomed this feature with excitement, it also created innumerable frustrated users. Stack Overflow and GitHub were rife with issues involving RBAC restrictions, because most docs and examples did not take RBAC into account (although they do now). One paradigmatic case is that of Helm: simply executing “helm init + helm install” no longer worked. Suddenly, we needed to add “strange” elements like ServiceAccounts or RoleBindings before deploying a WordPress or Redis chart (more details in this guide).

Leaving these “unsatisfactory first contacts” aside, no one can deny the enormous step that RBAC represented in making Kubernetes a production-ready platform. Since most of us have played with Kubernetes with full administrator privileges, we understand that in a real environment we need to:

  • Have multiple users with different properties, establishing a proper authentication mechanism.
  • Have full control over which operations each user or group of users can execute.
  • Have full control over which operations each process inside a pod can execute.
  • Limit the visibility of certain resources within namespaces.

In this sense, RBAC is a key element for providing all these essential features. In this post, we will quickly go through the basics (for more details, check the video below) and dive a bit deeper into some of the most confusing topics.

The key to understanding RBAC in Kubernetes

In order to fully grasp the idea of RBAC, we must understand that three elements are involved:

  • Subjects: The set of users and processes that want to access the Kubernetes API.
  • Resources: The set of Kubernetes API Objects available in the cluster. Examples include Pods, Deployments, Services, Nodes, and PersistentVolumes, among others.
  • Verbs: The set of operations that can be executed on the resources above. Different verbs are available (examples: get, watch, create, delete, etc.), but ultimately all of them are Create, Read, Update or Delete (CRUD) operations.

With these three elements in mind, the key idea of RBAC is the following:

We want to connect subjects, API resources, and operations. In other words, we want to specify, given a user, which operations can be executed over a set of resources.

Understanding RBAC API Objects

So, if we think about connecting these three types of entities, we can understand the different RBAC API Objects available in Kubernetes.

  • Roles: Connect API Resources and Verbs. These can be reused for different subjects. They are bound to one namespace (we cannot use wildcards to represent more than one, but we can deploy the same Role object in different namespaces). If we want a role to apply cluster-wide, the equivalent object is called a ClusterRole.
  • RoleBindings: Connect the remaining entity: subjects. Given a role, which already binds API Objects and verbs, we establish which subjects can use it. For the cluster-level, non-namespaced equivalent, there are ClusterRoleBindings.

| TIP: Watch the video for a more detailed explanation.

In the example below, we are granting the user jsalmeron the ability to read, list and create pods inside the namespace test. This means that jsalmeron will be able to execute these commands:
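For instance (a hedged sketch, assuming a kubectl context authenticated as jsalmeron):

```
# Allowed: "get", "list" and "create" on pods in the "test" namespace
kubectl get pods --namespace test
kubectl create -f pod.yaml --namespace test
```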

But not these:
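For example, operations using verbs that were never granted, or targeting other namespaces (again an illustrative sketch):

```
# Denied: "delete" was not granted, and the Role only covers the "test" namespace
kubectl delete pod mypod --namespace test
kubectl get pods --namespace kube-system
```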

Example yaml files:
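A minimal sketch of a Role and RoleBinding matching the description above (object names such as pod-access are illustrative) could look like this:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-access
  namespace: test
rules:
# "" denotes the core API group, where pods live
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-access-binding
  namespace: test
subjects:
# Bind the role to the user jsalmeron
- kind: User
  name: jsalmeron
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-access
  apiGroup: rbac.authorization.k8s.io
```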

Another interesting question: now that the user can create pods, can we limit how many? Other objects, not directly related to the RBAC specification, allow configuring the amount of resources: ResourceQuota and LimitRange. They are worth checking out for configuring such a vital aspect of the cluster.
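As a sketch, a ResourceQuota capping the number of pods in the test namespace (the name and the limit are illustrative) might look like:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: test
spec:
  hard:
    # No more than 10 pods may exist in this namespace at once
    pods: "10"
```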

Subjects: Users and… ServiceAccounts?

One topic that many Kubernetes users struggle with is the concept of subjects, but more specifically the difference between regular users and ServiceAccounts. In theory it looks simple:

  • Users: These are global, and meant for humans or processes living outside the cluster.
  • ServiceAccounts: These are namespaced and meant for intra-cluster processes running inside pods.

Both have in common that they want to authenticate against the API in order to perform a set of operations over a set of resources (remember the previous section), and their domains seem to be clearly defined. They can also belong to what is known as groups, so a RoleBinding can bind more than one subject (but ServiceAccounts can only belong to the “system:serviceaccounts” group). However, the key difference is the cause of several headaches: users do not have an associated Kubernetes API Object. That means that while this operation exists:
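ServiceAccounts can be listed like any other API object:

```
kubectl get serviceaccounts
```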

this one doesn’t:
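Attempting the equivalent for users fails, because no such resource type exists:

```
kubectl get users
# fails: there is no "users" resource type in the Kubernetes API
```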

This has a vital consequence: since the cluster does not store any information about users, the administrator needs to manage identities outside the cluster. There are different ways to do so: TLS certificates, tokens, and OAuth2, among others.

In addition, we would need to create kubectl contexts so we could access the cluster with these new credentials. In order to create the credential files, we could use the kubectl config commands (which do not require any access to the Kubernetes API, so they could be executed by any user). Watch the video above to see a complete example of user creation with TLS certificates.
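A hedged sketch of those kubectl config commands (the file paths, cluster name, and context name are illustrative) could be:

```
# Register the user's TLS client certificate as a credential
kubectl config set-credentials jsalmeron \
  --client-certificate=jsalmeron.crt \
  --client-key=jsalmeron.key
# Create and switch to a context that uses it against the cluster
kubectl config set-context jsalmeron-context \
  --cluster=mycluster --namespace=test --user=jsalmeron
kubectl config use-context jsalmeron-context
```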

RBAC in Deployments: A use case

We have seen an example where we establish what a given user can do inside the cluster. However, what about deployments that require access to the Kubernetes API? We’ll see a use case to better understand this.

Let’s go for a common infrastructure application: RabbitMQ. We will use the Bitnami RabbitMQ Helm chart (in the official helm/charts repository), which uses the bitnami/rabbitmq container. This container bundles a Kubernetes plugin responsible for detecting other members of the RabbitMQ cluster. As a consequence, the process inside the container requires access to the Kubernetes API, and so we need to configure a ServiceAccount with the proper RBAC privileges.

When it comes to ServiceAccounts, follow this essential good practice:

Have ServiceAccounts per deployment with the minimum set of privileges to work.

For the case of applications that require access to the Kubernetes API, you may be tempted to have a type of “privileged ServiceAccount” that could do almost anything in the cluster. While this may seem easier, it could pose a security threat down the line, as unwanted operations could occur. Watch the video above to see the example of Tiller, and the consequences of having ServiceAccounts with too many privileges.

In addition, different deployments will have different needs in terms of API access, so it makes sense to have different ServiceAccounts for each deployment.

With that in mind, let’s check what the proper RBAC configuration for our RabbitMQ deployment should be.

From the plugin documentation page and its source code, we can see that it queries the Kubernetes API for the list of Endpoints. This is used for discovering the rest of the peers of the RabbitMQ cluster. Therefore, the Bitnami RabbitMQ chart creates:

  • A ServiceAccount for the RabbitMQ pods.
  • A Role (we assume that the whole RabbitMQ cluster will be deployed in a single namespace) that allows the “get” verb for the Endpoints resource.
  • A RoleBinding that connects the ServiceAccount and the Role.
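A sketch of those three objects (object names are illustrative; the actual chart templates may differ):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
rules:
# The peer-discovery plugin only needs to read Endpoints objects
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
subjects:
- kind: ServiceAccount
  name: rabbitmq
roleRef:
  kind: Role
  name: rabbitmq-endpoint-reader
  apiGroup: rbac.authorization.k8s.io
```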

The diagram shows how we enabled the processes running in the RabbitMQ pods to perform “get” operations over Endpoint objects. This is the minimum set of operations it requires to work. So, at the same time, we are ensuring that the deployed chart is secure and will not perform unwanted actions inside the Kubernetes cluster.

Final thoughts

In order to work with Kubernetes in production, RBAC policies are not optional. These can’t be seen as only a set of Kubernetes API Objects that administrators must know. Indeed, application developers will need them to deploy secure applications and to fully exploit the potential that the Kubernetes API offers to their cloud-native applications. For more information on RBAC, check the following links:

CNCF to Host Harbor in the Sandbox


Today, the Cloud Native Computing Foundation (CNCF) accepted Harbor, a cloud native registry, into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

Project Harbor is an open source cloud native registry that stores, signs, and scans content. Created at VMware, Harbor extends the open source Docker Distribution by adding the functionalities usually required by users – such as security, identity, and management – and supports replication of images between registries. With more than 4,600 stars on GitHub, the project also offers advanced security features such as vulnerability analysis, role-based access control, activity auditing, and more.

“Container registries are essential to healthy cloud native environments, and enterprises require the security, features, flexibility, and portability that a trusted registry provides,” said Haining Henry Zhang, Technical Director, Innovation and Evangelism, at VMware, and Harbor project founder. “We’re thrilled to have Harbor in a neutral home that fosters open collaboration, which is incredibly important for creating new critical features. The project will benefit greatly from the contributions of CNCF’s thriving community.”

Harbor users include CNCF members Caicloud, China Mobile, JD.com, Pivotal, Rancher, Tencent Cloud, Tenxcloud and Wise2c, along with OnStar Shanghai, Talking Data and TrendMicro, among others.

TOC sponsors of the project include Quinton Hoole and Ken Owens.

The CNCF Sandbox is a home for early stage projects. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.





Meet the Ambassador: Cheryl Hung


Cheryl Hung, StorageOS, sat down with Kaitlyn Barnard, Cloud Native Computing Foundation (CNCF), to talk about cloud native, the cloud native Meetup group in London, and being an Ambassador. Below is their interview. You can also view the video.


Kaitlyn: Thank you so much for joining me today to talk about your community involvement and our Ambassador Program. Can you tell us a little bit about yourself?

Cheryl: I’ve been involved with Cloud Native for a couple of years, I used to work at Google as a software engineer. Now I’m the Product and DevOps Manager at a London start-up called StorageOS, building persistent storage for containers, and I also run the Cloud Native London Meetup group.

Kaitlyn: You’ve been talking about storage recently. Can you talk a little bit about some of the trends and challenges you’re seeing in the cloud native storage space right now?

Cheryl: It’s definitely an evolving space, because containers were designed to be stateless and immutable so storage is not really a concern. Except that at the end of the day, pretty much everybody has data that they need to store somewhere! It’s clearly an unsolved problem about what’s the best way to do databases and other stateful applications with Kubernetes, which is why I work at StorageOS, because it provides that storage abstraction layer and that means you can run databases within Kubernetes and have replication and failover, etc.

In terms of what I see coming next, the Container Storage Interface is one of the big things to be aware of in the cloud native space. Over the course of the next six months to a year, all of the vendors, cloud providers, and cloud orchestrators are going to get behind this interface. So hopefully, by this time next year, storage will be a solved problem as far as end users are concerned.

Kaitlyn: You run one of the largest CNCF meetups that we have. Why and how did you start the Cloud Native London Meetup?

Cheryl: I started it in about June of 2017. At the time there was Cloud Native Paris, Cloud Native Berlin, Cloud Native Barcelona, and there was a Cloud Native London, but it had been quiet and dormant for a couple of years. I thought this was a really good time to revive that Meetup and to bring in all the new knowledge, community, and interest around Kubernetes and Docker and also Prometheus, Linkerd, and all the other projects.  

It was about bringing in people from all different interests, but all in the same infrastructure and DevOps mindset and having a space where people can teach others as well. So I really encourage new speakers to join and share their stories, because there’s always a mindset around, “Oh, am I good enough to do public speaking?”. I see it as part of my role as the organizer to tell people that, “Yes, your stories are interesting and people do want to hear from you.”

Kaitlyn: Why did you want to be a CNCF ambassador?

Cheryl: I’m probably one of the truest cloud natives in that when I joined Google, I was 21. I was using Borg, which was Google’s internal predecessor to Kubernetes. Because I was so young, I really didn’t have a memory of how software was done before. So to me, it’s always been natural to containerize your software into running with an orchestrator, like Kubernetes, and to package it as microservices. It seems like the whole industry is moving that way. So becoming a CNCF ambassador has been about taking what I know and bringing that attitude, culture, tools, and infrastructure out to the entire industry.

Kaitlyn: You travel a lot, I see you at a lot of conferences. What has been your favorite place that you visited so far?

Cheryl: I travel mostly in the Bay Area and then around Europe. Last year we met at Open Source Summit in Los Angeles; that was really cool because I had not been to Los Angeles before, and it was a really interesting place. Some really great stuff and some stuff that was a bit scary, but fun. That was really cool to see.

One thing about being at all these conferences, when I was here last year at the Berlin KubeCon I really didn’t know anybody, I was completely on my own. This time, as I’ve been walking around, it’s been, “Hey, Cheryl” and  “Oh, I know you,” “Oh, you run the meetup, right?”. That’s been awesome, that’s been fantastic to have so many people get involved with the community, know about me, and come up and say hi.

Kaitlyn: It’s fun. Even though it’s 4,300 people now, which is crazy growth in the first place, it’s still a small community and everyone still kind of knows each other.

Cheryl: Exactly. And it’s still a really friendly and open community, which I love.

Kaitlyn: What do you do in your free time?

Cheryl: I’m getting married at the end of August! I’m incredibly excited about it, but it means that any time I’m not planning my work, my engineering team, and what they’re working on, I’m planning my wedding, which is a challenge in its own right! It’s a two day thing, a western wedding and a Chinese wedding, so I spend a lot of time negotiating with people about, “What kind of stationery do I like? What kind of flowers do I like?” It’s crazy!

Kaitlyn: Thank you so much for taking the time to speak with me today.

Cheryl: You are very welcome. Thank you for inviting me.

Announcing EnvoyCon! CFP due August 24


Originally published by Richard Li on the Envoy blog, here.

We are thrilled to announce that the first ever EnvoyCon will be held on December 10 in Seattle, WA as part of the KubeCon + CloudNativeCon Community Events Day. The community growth since we first open sourced Envoy in September 2016 has been staggering, and we feel that it’s time for a dedicated conference that can bring together Envoy end users and system integrators. I hope you are as excited as we are about this!

The Call For Papers is now open, with submissions due by Friday, August 24.

Talk proposals can be either 30 minute talks or 10 minute lightning talks, with experience levels from beginner to expert. The following categories will be considered:

  • Using and Integrating with Envoy (both within modern “cloud native” stacks such as Kubernetes but also in “legacy” on-prem and IaaS infrastructures)
  • Envoy in production case studies
  • Envoy fundamentals and philosophy
  • Envoy internals and core development
  • Security (deployment best practices, extensions, integration with authn/authz)
  • Performance (measurement, optimization, scalability)
  • APIs, discovery services and configuration pipelines
  • Monitoring in practice (logging, tracing, stats)
  • Novel Envoy extensions
  • Porting Envoy to new platforms
  • Migrating to Envoy from other proxy & LB technologies
  • Using machine learning on Envoy observability output to aid in outlier detection and remediation
  • Load balancing (both distributed and global) and health checking strategies

Reminder: This is a community conference — proposals should emphasize real world usage and technology rather than blatant product pitches.

If you’re using or working on Envoy, please submit a talk! If you’re interested in attending, you can register as part of the KubeCon registration process.

Reinventing the World’s Largest Education Company With Kubernetes


Pearson, a global education company serving 75 million learners, set a goal to more than double that number to 200 million by 2025. To serve the digital learning experiences of their users, they needed to scale and adapt to this growing online audience.

In this case study, Pearson describes their process to build a platform that would enable their developers to build, manage, and deploy applications, thus improving their engineers’ productivity.

Read about the benefits they saw from their Kubernetes implementation not only for their team, but in customer experience. As Pearson states: “We’re not worried about 9s. There aren’t any. It’s 100% (uptime)”.

Meet the Ambassador: Chris Gaun


Chris Gaun, Mesosphere, sat down with Kaitlyn Barnard, Cloud Native Computing Foundation (CNCF), to talk about cloud native, trends in the industry, and being an Ambassador. Below is their interview. You can also view the video.


Kaitlyn: Thank you for joining me today to talk about the CNCF Ambassador Program and what you’re doing in the community. Can you start by introducing yourself?

Chris: My name is Chris Gaun. I am a CNCF ambassador and product marketing manager at Mesosphere.

Kaitlyn: I know you have a new baby at home. Five weeks old now. How’s the new dad life treating you?

Chris: Lack of sleep. It’s our first kid, and it’s exciting, amazing.

Kaitlyn: In your experience, what is the biggest challenge for adopting and scaling cloud native applications right now?

Chris: I feel that there are really great tools out there: open source tools, a lot of them under the CNCF umbrella and some of them not. Piecing those together, Prometheus, Kubernetes, Jenkins, or a CI/CD pipeline, is still somewhat difficult. There are a lot of great vendors and cloud providers giving users a lot of help in this area. But if you’re coming from an old-school, larger organization where most of your experience might be with WebSphere or something of that nature, approaching this (cloud native) and saying, “Oh yeah, I’m gonna set up Kubernetes or manage Kubernetes.” Then, “Oh, now I need Prometheus and now I need Linkerd or now I need Istio.” That’s a big project at that point.

Kaitlyn: What cloud native trend are you most excited about?

Chris: Well, first off, this whole thing has been amazing. I have pictures from KubeCon two years ago in London, and it was in a co-working space. There were maybe 500 people coming and going, but probably 150 there all the time, eating pizza. So you could feed them on however many boxes of pizza. To see the trajectory from 150 in a basement to over 4,000 people in a room is just mind-boggling.

All these tools that are coming out, especially Istio, are amazing. The fact that they’re all open source, people can kick the tires without spending a huge amount of money, and get the knowledge of how to do cloud native infrastructure and cloud native applications without diving in head-first is amazing.

Kaitlyn: You’ve actually been part of the CNCF Ambassador Program since the beginning. Can you talk about the early days and why you wanted to be part of the program?

Chris: As background, I’ve been a part of the Kubernetes community for three years, almost since the beginning. I was investigating it very early on before GA. CNCF was created, and then Kubernetes came under its banner.

I thought there should be a safe space where we just talk about Kubernetes and how cool it is, instead of talking about our own vendor politics or selling stuff all the time. I was talking to Chris, the CTO of CNCF at the time, and he said, “I think I’m gonna have this CNCF Ambassador Program. Would you like to be a member?”. Then when the program kicked off, I got an email to be a CNCF Ambassador.

We’re here to promote the CNCF tools, the open source tools, that are under the CNCF banner like Kubernetes and Prometheus without any of the vendor politics. Just talking about how cool the technology is.

Kaitlyn: We’ve talked a lot about work. What do you do when you’re not working?

Chris: Well, now I have a five-week-old, so not a lot right now. Before that – I’m actually from New York, but I recently moved to Mississippi. My wife’s a doctor and she’s in a fellowship. So we spend a lot of time outdoors. Mississippi is a beautiful state. We go on a lot of hikes, we have a nice river nearby called the Mississippi River, and a lake. We try to get out as much as possible. I have a dog named Panda who goes out with us and we go for hikes.

Kaitlyn: That’s awesome. For the record Panda’s one of my favorite dog names up to this point. Thank you so much for taking the time to talk to me today.

Chris: Thanks for having me.

Diversity Scholarship Series: My Experiences at KubeCon EU 2018


CNCF offers diversity scholarships to developers and students to attend KubeCon + CloudNativeCon events. In this post, scholarship recipient, Yang Li, Software Engineer at TUTUCLOUD, shares his experience attending sessions and meeting the community. Anyone interested in applying for the CNCF diversity scholarship to attend KubeCon + CloudNativeCon North America 2018 in Seattle, WA December 11-13, can submit an application here. Applications are due October 5th.

This guest post was written by Yang Li, Software Engineer 

Thanks to the Diversity Scholarship sponsored by CNCF, I attended the KubeCon + CloudNativeCon Europe 2018 in Copenhagen May 2-4.

The Conference

When I was in college, I wrote software with Python on Ubuntu, and read The Cathedral and the Bazaar by Eric S. Raymond. These were my first memories of open source.

Later on, I worked with a variety of different open source projects, and I attended many tech conferences. But none of them were like KubeCon, which gave me the opportunity to take part in the open source community in real life.

Not only was I able to enjoy great speeches and sessions at the event, but I also met and communicated with many different open source developers. I made many new friends and amazing memories during the four days in Copenhagen.

In case anyone missed it, here are the videos, presentations, and photos of the whole conference.

Although I haven’t been to many cities around the world, I can safely say that Copenhagen is one of my favorites.

The Community

“Diversity is essential to happiness.”

This quote by Bertrand Russell is one of my firm beliefs. Even as a male software engineer and a Han Chinese in China, I always try to speak for the minority groups which are still facing discrimination. But to be honest, I haven’t found much meaning in the word diversity for myself.

However, soon after being at KubeCon, I understood that I’m one of the minorities in the world of open source. More importantly, with people from all over the world, I learned how inclusiveness made this community a better place. Both the talk by Dirk Hohndel and the Diversity Luncheon at KubeCon were very inspirational.

Final Thoughts

I started working with Kubernetes back in early 2017, but I only made a few contributions in the past year. Not until recently did I become active in the community and joined multiple SIGs. Thanks to the conference, I have a much better understanding of the culture of the Kubernetes community. I think this is the open source culture at its best.

  • Distribution is better than centralization
  • Community over product or company
  • Automation over process
  • Inclusive is better than exclusive
  • Evolution is better than stagnation

It is this culture that makes the community an outstanding place which deserves our perseverance.



Supporting Fast Decisioning Applications with Kubernetes


Capital One has applications that handle millions of transactions a day. Big-data decisioning—for fraud detection, credit approvals and beyond—is core to the business. To support the teams that build applications with those functions for the bank, the cloud team embraced Kubernetes for its provisioning platform.

In this case study, Capital One describes how their use of Kubernetes and other CNCF open source projects has been a “significant productivity multiplier”.

Read how their Kubernetes implementation reduced their attack vulnerability for applications in the cloud and improved response time to threats in the marketplace by being able to push new rules, detect new patterns of behavior, and detect anomalies in account and transaction flows.

With Kubernetes, Capital One has been able to provide the tools in the same ecosystem, in a consistent way.

Launching and Scaling Up Experiments, Made Simple


From experiments in robotics to old-school video game play research, OpenAI’s work in artificial intelligence technology is meant to be shared. With a mission to ensure the safety of powerful AI systems, they care deeply about open source—both benefiting from it and contributing to it.

In this case study, OpenAI describes how their use of Kubernetes on Azure made it easy to launch experiments, scale their nodes, reduce costs for idle nodes, and provide low latency and rapid iteration, all with little effort to manage.

Containers were connected and coordinated using Kubernetes for batch scheduling and as a workload manager. Their workloads are running both in the cloud and in the data center – a hybrid model that’s allowed OpenAI to take advantage of lower costs and have the availability of specialized hardware in their data center.


Building on the Open Source Model


SUSE has been in business for over 25 years, built on a model that depends on open source technologies driven by communities.

Engaging with the technologies at the forefront of the industry, such as cloud native and Kubernetes, directly aligns with SUSE’s strategic business interests. The open source model is what makes it work: bringing together the greatest minds to solve a problem, participating in open discussions, and collaborating. It’s about the ideals of advancing the technology and holding that above the business interests of any one entity.

KubeCon + CloudNativeCon gives SUSE the opportunity to engage with people 1:1, learn about architecture and processes as part of the whole paradigm, and network professionally.

In this CNCF Member Video, Jennifer Kotzen, Senior Product Marketing Manager at SUSE, goes into detail on these topics, the CNCF projects they are using now and plan to use, and how the company benefits from attending KubeCon + CloudNativeCon events.

