How to rapidly develop apps with microservices


Originally published by Rick Osowski on the IBM Cloud Blog, here.

 

As of late last year, a majority of people worldwide now access the internet primarily through mobile devices.

The business implications of the trend are often brought into focus with cautionary tales about Uber and Airbnb successfully disrupting their respective markets. Incumbents in all markets have been put on notice that their customers will soon be offered increasingly innovative user experiences, shifting expectations. Rapidly innovating the relationship with customers through mobile applications is now a commonplace of business planning.

Mobile first is not enough: Your customers may tolerate a great mobile-friendly version of your existing web site for a while, but you have to ask how long you can keep them waiting for features that augment their mundane lives in context. If implementing your ideas takes months—as is often the case with a monolithic application that requires coordinated work among many different teams—your innovation could easily become a me-too offering. A more nimble competitor will always be seeking to grab the baton.

It’s uncomfortably obvious that development teams need to accelerate how they deliver new benefits to users. Since nobody can fully predict user behavior, even the fastest, most successful DevOps program has to be ready to fail in the field of actual user experience. Quickly redesigning, replacing, and augmenting parts of the user experience, based on analysis of clear usage data, is a top priority. That’s why a microservices model of developing cloud-based applications is so powerful. It allows a different small team to own the entire cycle (concept, development, deployment, monitoring) for each component of an application, providing the flexibility necessary to precisely iterate an underperforming part of the user experience as reflected by data gathered from monitoring what users themselves are doing. The DevOps process becomes a dynamic interaction–almost a conversation–with users in the field.

Starting from where you are

Failing fast and iterating quickly: these are DevOps requirements for competitive app delivery in the mobile services era. They imply application architectures that decouple services from each other in a continuous development and delivery cycle while ensuring well-performing interactions with users.

While a startup has an advantage in building greenfield cloud-native applications—using a microservices approach along with DevOps tools and practices—incumbent companies often must begin by refactoring an existing monolith.

Let’s look at a specific example.

In this case, an online retailer wanted to transform a monolith into microservices in order to learn more about customers, and to quickly update and introduce new features.

Since browsing the online catalog presented pressing business problems to solve, the transformation of the overall app began there:

Pilot Task: Determine and implement a better way to handle the catalog

The current app failed to help customers easily find product data and blocked the business from exposing data to other sites.

As a proof of concept for the microservices approach, the team built a single microservice for the business catalog using the following steps:

  • Establish a new continuous integration/continuous delivery (CI/CD) model to do the work.
  • Import the catalog data into Elasticsearch to gain new ways to search the data and identify new data (see the sketch after this list).
  • Link the existing website to the new search.
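
As an illustration of the second step, catalog items could be indexed and queried through Elasticsearch’s REST API. A minimal sketch, assuming an Elasticsearch 6.x instance on localhost and a hypothetical catalog index (the field names are illustrative, not taken from the case study):

$ curl -X PUT 'http://localhost:9200/catalog/_doc/1' \
    -H 'Content-Type: application/json' \
    -d '{"sku": "AB-1234", "name": "Espresso machine", "category": "kitchen"}'
$ curl 'http://localhost:9200/catalog/_search?q=name:espresso'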

At this point, the catalog was still integrated with the existing ordering components, which run core business logic and were too complex to break up without additional work. However, with a successful pilot, the team was convinced of the value of microservices and decided to expand the scope of the app transformation.

Task 2: Learn more about the customer

To learn more about the customer, the team created an account microservice, working out how to shift the business focus from inventory to customers.

When they determined that the customer experience could be enriched over time based on analytics, marketing, and cognitive data, the choice to use an unstructured database became obvious. So they designed a new customer model and used a NoSQL database (such as MongoDB or Cloudant) to manage the unstructured data.
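
A quick sketch of what such a customer document might look like, stored through Cloudant’s CouchDB-compatible HTTP API (the account, database, and field names are hypothetical):

$ curl -X POST 'https://ACCOUNT.cloudant.com/customers' \
    -H 'Content-Type: application/json' \
    -d '{"customerId": "c-1001", "name": "Jane Doe", "preferences": {"categories": ["kitchen", "outdoor"]}, "recentSearches": ["espresso machine"]}'

Because the document is schemaless, new analytics or cognitive attributes can be added later without a schema migration.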

Task 3: Innovate the user experience

The team built a new native mobile app and created a new front end for mobile and web access. Even though the catalog depends on the legacy ordering code, the overall user experience was noticeably enhanced.

Task 4: Update access to the order microservice

The team created new order APIs for mobile and integrated them into existing transactions. The business decided to create an adapter microservice that called into the existing ordering system of record on premises. They also used the adapter to integrate with new payment methods and systems.

Task Next: Create a new auction feature

Within the newly flexible architecture, the team has planned to add this innovation in upcoming sprints.

Asking the right questions

As you think about the example, consider these questions:

  • What do your customers want–now and next?
  • Are users of mobile devices satisfied with the experiences your apps are providing?
  • In terms of delivering what users want, how are the DevOps processes and practices of your IT organization helping and hindering?
  • Considering what’s hindering your DevOps, and assuming you have an existing app monolith that you need to modernize, do you know exactly what you need to do first?
  • What experiments with cloud platforms should the individual members of your application development team be conducting?
  • What is a good pilot project to use in evaluating a microservices approach and cloud platforms for implementing it?

 

 

Prometheus Graduates Within CNCF


Originally posted August 9, 2018 by Richard Hartmann on Prometheus.io

We are happy to announce that as of today, Prometheus graduates within the CNCF.

Prometheus is the second project ever to make it to this tier. By graduating Prometheus, CNCF shows that it’s confident in our code and feature velocity, our maturity and stability, and our governance and community processes. This also acts as an external verification of quality for anyone in internal discussions around choice of monitoring tool.

Since reaching incubation level, a lot has happened; some highlights stand out:

  • We completely rewrote our storage back-end to support high churn in services
  • We had a large push towards stability, especially with 2.3.2
  • We started a documentation push with a special focus on making Prometheus adoption and joining the community easier

The last point is especially important, as we are currently entering our fourth phase of adoption. These phases have been adoption by:

  1. Monitoring-centric users actively looking for the very best in monitoring
  2. Hyperscale users facing a monitoring landscape which couldn’t keep up with their scale
  3. Companies from small to Fortune 50 redoing their monitoring infrastructure
  4. Users lacking funding and/or resources to focus on monitoring, but hearing about the benefits of Prometheus from various places

Looking into the future, we anticipate even wider adoption and remain committed to handling tomorrow’s scale, today.

Pinning its Past, Present, and Future on Cloud Native


After eight years in existence, Pinterest has grown to 1,000 microservices, multiple layers of infrastructure, and a diverse set of tools and platforms. In order to manage all of this, they needed a compute platform that enabled the fastest path from idea to production, with simplicity for their engineers.

Pinterest turned to Kubernetes on Docker containers. In this case study, they describe their journey, their use of Jenkins clusters, and how the team was able to build on-demand scaling and new failover policies, in addition to simplifying overall deployment and management.

Running on AWS since 2010, Pinterest sees itself as a ‘cloud native pioneer’ and is eager to share its ongoing cloud native journey and contribute its learnings back to the community.

See a summary of Pinterest, with metrics, here.

Watch the video of Pinterest’s journey from VMs to containers, presented at KubeCon + CloudNativeCon North America.

CNCF to Host OpenMetrics in the Sandbox


Today, the Cloud Native Computing Foundation (CNCF) accepted OpenMetrics, an open source specification for metrics exposition, into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

OpenMetrics brings together the maturity and adoption of Prometheus, Google’s background in working with stats at extreme scale, and the experience and needs of a variety of projects, vendors, and end users. It aims to move away from the hierarchical way of monitoring and enable users to transmit metrics at scale.

The open source initiative, focused on creating a neutral metrics exposition format, provides a sound data model for current and future needs of users, and embeds this into a standard that is an evolution of the widely-adopted Prometheus exposition format. While there are numerous monitoring solutions available today, many do not focus on metrics and are based on old technologies with proprietary, hard-to-implement and hierarchical data models.
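
For reference, here is a minimal sample of the Prometheus text exposition format that OpenMetrics evolves (the metric name and labels are illustrative; OpenMetrics additionally standardizes details such as the terminating # EOF marker):

# HELP http_requests_total The total number of HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="post",code="400"} 3
# EOF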

“The key benefit of OpenMetrics is that it opens up the de facto model for cloud native metric monitoring to numerous industry leading implementations and new adopters. Prometheus has changed the way the world does monitoring and OpenMetrics aims to take this organically grown ecosystem and transform it into a basis for a deliberate, industry-wide consensus, thus bridging the gap to other monitoring solutions like InfluxData, Sysdig, Weave Cortex, and OpenCensus. It goes without saying that Prometheus will be at the forefront of implementing OpenMetrics in its server and all client libraries,” said Richard Hartmann, technical architect at SpaceNet, Prometheus team member, and founder of OpenMetrics. “CNCF has been instrumental in bringing together cloud native communities. We look forward to working with this community to further cloud native monitoring and continue building our community of users and upstream contributors.”

OpenMetrics contributors include AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig and Uber, among others.

“Metrics, when combined with traces and logging, are key to measuring the success of any technology initiative. By offering support for open standards such as OpenMetrics, we’ve enabled our customers to quickly gain visibility into the growing ecosystem technologies that are key to their cloud native transitions,” said Ilan Rabinovitch, VP of Product & Community at Datadog. “We look forward to our continued collaboration with the CNCF and observability community on developing standards and best practices for monitoring in cloud native architectures.”

“Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions. In addition to our contributions to the spec, we’ll be enabling OpenMetrics support in OpenCensus,” said Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

“At InfluxData, we’re very excited about the work to create a standard metrics format. Even though metrics is a subset of our larger focus on time series data, we see real value in creating a standard that can work across vendors and open source projects,” said Paul Dix, founder and CTO of InfluxData. “We’re excited to collaborate with members from Prometheus, OpenCensus and others to push the OpenMetrics standard forward. It will be a first class citizen in the InfluxData set of open source tools, including InfluxDB, our open source time series database and Telegraf, our open source data collector.”

“OpenMetrics is a huge improvement over the current mixed bag of formats out there today, and will make systems more interoperable with each other. Due to this problem, Uber faces a lot of complexity in ingesting all of the metrics that different applications and services expose. Complex software and data centers will become a little easier to observe, monitor and run with this standard, which makes tools work together more easily, and small to medium organizations will need to invest less time for first class monitoring,” said Rob Skillington, lead of metrics and systems monitoring at Uber. “We are excited to be a part of OpenMetrics and will natively support the standard in our open source distributed time series database M3DB, which we use to store petabytes of metrics, alongside supporting long-term storage of metrics for Prometheus.”

TOC sponsors of the project include Alexis Richardson and Bryan Cantrill.

The CNCF Sandbox is a home for early stage projects. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.

Helm, the Package Manager for Kubernetes


A few weeks ago, the CNCF family was extended with a new project: Helm, the package manager for Kubernetes.

Kubernetes was developed as a solution to manage and orchestrate containerized workloads. At the same time, managing bare containers is not always enough. In the end, Kubernetes is used to run applications, and a solution that simplifies running and deploying applications on Kubernetes was in high demand. Helm was that solution.

Originally developed by Deis, Helm soon became the de facto open source standard for running and managing applications on Kubernetes.

Helm history – a slide from the Helm project presentation to CNCF TOC, May 2018


Imagine Kubernetes as an operating system (OS); Helm is the apt or yum for it. Any operating system is a great foundation, but the real value is in the applications. Package managers like apt and yum simplify operations: instead of building an application from its source files, you can install it with a single package-manager command.

The same approach works for Kubernetes and Helm. It’s not difficult to write a simple YAML file describing your desired deployment for Kubernetes, but it is much easier to use predefined application definitions that can be installed with a single command (especially for complex applications). Also, Helm makes it easier to customise an application (e.g. setting the WordPress username) compared to raw Kubernetes manifests, where you’d either have to edit the YAML or use a sed command or similar.
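
For example, a chart value such as the WordPress username can be overridden at install time. A quick sketch, assuming the Helm 2 client and the value name used by the stable/wordpress chart:

$ helm install stable/wordpress --set wordpressUsername=admin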

These predefined application definitions in the Helm world are called Charts. The Charts definition from the Helm official documentation: “A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.” In other words, Charts are similar to DEB or RPM packages for the Linux operating system distributions.
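
In practice, a chart is a small directory tree. A representative sketch of the usual layout:

wordpress/
  Chart.yaml      # chart name, version, and description
  values.yaml     # default configuration values, overridable at install time
  charts/         # optional dependent charts
  templates/      # Kubernetes manifest templates rendered with the values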

Helm consists of two main components: the Helm Client and the Tiller Server. The Helm Client is responsible for local chart development, repository management, and interaction with the Tiller Server, which, in turn, handles the interaction with the Kubernetes API (installing, upgrading, and uninstalling charts).

The major prerequisite for installing Helm is a Kubernetes cluster. If you don’t have one, minikube is the easiest way to get one. Helm installation is as easy as installing a binary package (there is also an option to build it from source) – all the possible Helm installation options are covered in the Helm documentation.


The simplest way to install the Helm client is to run a script that fetches and installs the latest version. (This is not recommended for production environments, but it is the fastest way to have Helm installed in a few minutes for development or demo purposes):

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.9.1-linux-amd64.tar.gz
Preparing to install into /usr/local/bin
helm installed into /usr/local/bin/helm
Run 'helm init' to configure helm.

After the Helm client is installed, install Tiller so that you can interact with the Kubernetes cluster:

$ helm init

Now we are ready to install applications. The easiest way to list the available applications from the stable repo is to run the following command:

$ helm search


Here we’ll install WordPress, a well-known open source content management system, as a sample application:

$ helm repo update    # optional: fetch the latest data from the Helm repo
$ helm install stable/wordpress

Done!

Check if the application was installed correctly using the Kubernetes command-line tools:

$ kubectl get deployments
NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
washed-tortoise-wordpress   1         1         1            0           1m

This is the list of Kubernetes deployments, now including the WordPress installation.

Also, listing the pods in the Kubernetes cluster shows that the WordPress chart deployed two pods: wordpress itself, and mariadb as the database for the WordPress application.

idv@ihor:~$ kubectl get pods
NAME                                              READY   STATUS    RESTARTS   AGE
exciting-grasshopper-mariadb-0                    1/1     Running   0          8m
exciting-grasshopper-wordpress-5f666d765c-xgk85   1/1     Running   1          8m

And of course, we can check that the chart has been installed via the helm command:

$ helm list
NAME              REVISION   UPDATED                    STATUS     CHART             NAMESPACE
washed-tortoise   1          Mon Jul 23 19:22:33 2018   DEPLOYED   wordpress-2.1.3   default
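
When the sample application is no longer needed, Helm can also remove everything the chart deployed. A quick sketch, using the generated release name from the listing above (Helm 2 syntax):

$ helm delete washed-tortoise    # add --purge to also free the release name for reuse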

This was the simplest example of what can be achieved with Helm, but it is definitely able to handle much more complex deployments. The best place to get started with Helm is the project website, where you can find all the necessary information about this amazing technology.

Q&A with JD.com: Kubernetes, Cloud Native, and CNCF Projects Driving Big Data and AI


Liu Haifeng, Chief Architect at JD.com, sat down with the Cloud Native Computing Foundation (CNCF), to talk about Cloud Native, JD.com’s Kubernetes implementation, and tips for other companies looking to get started with open source. Below is their interview.

 

CNCF: How do you see your CNCF membership and cloud native technologies helping JD.com realize its “Retail as a Service” vision?

Haifeng: The goal of our Retail as a Service (RaaS) strategy is to open up our capabilities and resources to empower our partners, suppliers, and other industries. This is very much in line with our commitment to open source technologies. We’ve already benefited tremendously from the CNCF projects we have been a part of and our new commitment to CNCF enables us to build even stronger collaborative relationships with the industry’s top developers, end users, and vendors and ultimately enables us to contribute more to the open source community. Joining CNCF is an important step for us as we develop new container-native technologies towards an open platform to realize our RaaS vision.

CNCF: What impact has Kubernetes had on your company and/or development team?

Haifeng: JD.com is one of the earliest adopters of Kubernetes. The company currently manages the world’s largest Kubernetes clusters in production, with more than 20,000 bare metal servers in several clusters across data centers in multiple regions.

CNCF: How has Kubernetes helped JD to conduct AI or big-data analytics to revolutionize e-commerce?

Haifeng: JDOS, our customized and optimized Kubernetes, supports a wide range of workloads and applications, including big data and AI. JDOS provides a unified platform for managing both physical servers and virtual machines, including containerized GPUs, and delivers big data and deep learning frameworks such as Flink, Spark, Storm, and TensorFlow as services. By co-scheduling online services with big data and AI computing tasks, we significantly improve resource utilization and reduce IT costs.

CNCF: How big is the Kubernetes cluster JD runs? Please describe it, your team using Kubernetes.

Haifeng: JD currently manages the world’s largest Kubernetes clusters in production, with more than 20,000 bare metal servers in several clusters across data centers in multiple regions.

CNCF: How is Kubernetes and cloud native empowering JD developers? What can they do now that they couldn’t do before?

Haifeng: The old deployment tools required different processes for different environments, from application packaging, container application, deployment, configuration, and scaling. The overall process was complicated and time-consuming. The introduction of Kubernetes dramatically simplifies the process. Applications are now automatically packaged into images and deployed onto containers in near-real time. Scaling is now a simple one-click operation that can occur within a few seconds.

CNCF: JD runs one of the largest Kubernetes clusters in production in the world, how has the company overcome hurdles to make this possible?

Haifeng: We are constantly monitoring the performance of our systems. To address performance issues in the past, we collected and analyzed several key performance indicators and generated a detailed bottleneck analysis report. We then customized Kubernetes by removing unnecessary functions and optimizing the default scheduler. We also enhanced multiple controllers to avoid cascading failures. In addition, we developed an operational toolkit for inspection, monitoring, alarming, and fault handling, which helps operators troubleshoot and quickly resolve any issues which may come up.

CNCF: JD just celebrated its famous June 18 anniversary sale (“618”), clocking transaction volume of over $24.7 billion during the 18-day period. That is a lot of orders. Can you talk about how your system is able to handle this much volume?

Haifeng: JDOS uses a prediction-based algorithm to proactively allocate resources to meet forecasted demand and improve resource utilization. It also provides millisecond-level elastic scaling to handle extreme workloads. Our June 18 anniversary sales event, which we hold annually, generated $24.7 billion in transaction volume this year. With over 300 million customers on our platform, we experience a significant peak in traffic during this period. We scheduled approximately 460,000 containers (pods) and 3,000,000 CPU cores to support the massive volume of orders.

CNCF: Tell us about your use of Vitess. What impact has this had?

Haifeng: Our elastic database is one of the largest and most complex Vitess deployments in the world. We have successfully scaled Vitess to manage large volumes of complex transactional data on JD’s Kubernetes platform. The salient features include the support of RocksDB and TokuDB as new storage engines, automatic re-sharding, automatic load balancing, and migration. Our system currently manages 2,600 MySQL clusters, 9,000 MySQL instances, 350,000 tables, 160 billion records, and 65 TB of data in support of various business applications and services at JD. The use of Vitess enables us to manage resources much more flexibly and efficiently, which significantly reduces operations and maintenance costs. We are actively collaborating with the CNCF community to add new features such as subquery support and global transactions to Vitess.

CNCF: What’s next for Kubernetes and other cloud native technologies (GitLab, Jenkins, Logstash, Harbor, Elasticsearch, and Prometheus) in your company?

Haifeng: Our containerized platform separates the application and infrastructure layers by deploying a DevOps stack on Kubernetes that includes Vitess, Prometheus, GitLab, Jenkins, Logstash, Harbor, Elasticsearch, etc. We have contributed code to some of these projects. We would like to make more contributions in the future. One example of where we think we can really add value is Vitess, the CNCF project for scalable MySQL cluster management. We are not only the largest end-user of Vitess, but also a very active and significant contributor. We look forward to working together with others in the CNCF community to add new features to Vitess, including sub-query support, global transactions, etc. Separately, we are extending Prometheus to create a real time and high performance monitoring system. We’d like to improve Kubernetes to support multiple, diverse workloads and hopefully contribute code to Kubernetes as well.

We plan to release our internal and homegrown projects too. There are a bunch of them on github.com/tiglabs already. We also plan to propose new CNCF projects. One such project is ContainerFS, a large-scale, container-native cluster file system that has been seamlessly integrated with Kubernetes.

CNCF: What other technologies or practices (DevOps, CI/CD) are you currently evaluating?

Haifeng: We are actively working on our own open source projects centered around cloud native or container-native software and technology, from computing, storage and middleware to applications. One focus is container platforms for diverse workloads, including online services, data analytics, edge computing, and IoT. Another focus is scalable and high-performance data storage for container platforms.  

CNCF: For other Chinese companies just getting started with cloud native, what are the most important things to consider?

Haifeng: With Docker, Kubernetes, and microservices, you can get a lot of value out of cloud native without having to endure high costs. A cloud native solution doesn’t only function in the cloud. It is flexible enough to be deployed across on-premise, private cloud, public cloud, and hybrid environments. It is important to keep a close eye on new technology and on industry trends, leverage open source technologies and actively engage with open source communities.

CNCF: What advice do you have for other companies looking to deploy a cloud native infrastructure?

Haifeng: Think about how to meet your business needs from an ecosystem perspective, including containerized infrastructure, data storage, microservice platform, messaging, monitoring systems, etc. In terms of container orchestration and management, Kubernetes is the de facto standard and a sure thing to bet on. You should also take advantage of emerging serverless architectures to simplify the process of application development, packaging, deployment, and management.

CNCF: Why is cloud native such a business imperative today for JD?

Haifeng: With over 300 million customers, in addition to our merchants, it is imperative that our infrastructure is scalable and extremely efficient. To put it in perspective, five years ago, there were about two billion images in our product images system. Today, there are more than one trillion, and that figure increases by 100 million every day. Furthermore, as not only China’s largest retailer, online or offline, but also the operator of China’s largest e-commerce logistics infrastructure – developed fully in-house – our business is complex and changing rapidly by the day. Accordingly, our infrastructure has to be extremely agile and support a wide range of workloads and application scenarios in a host of areas such as online services, data analytics, AI, supply chain, finance, IoT, or edge computing. Cloud native technologies are well suited to handle our ever-changing requirements.

CNCF: Is this the first open source foundation JD has joined?

Haifeng: Yes.  We are a firm believer in open source and it closely aligns with our own strategy. Through CNCF, we aim to have more and stronger engagement with the open source community and fully see the potential mutual benefit of contributing to the open source community. As the third largest internet company in the world by revenue, JD has already developed many leading technology innovations and we recognize our responsibility to take a leadership role in the open source community.

CNCF: How do you plan to work hand-in-hand with the CNCF?

Haifeng: The areas where we can collaborate are unlimited. Joining CNCF and working with the other members will be extremely helpful as we take some of our new projects forward. In addition, CNCF provides us with a platform with which to raise awareness about some of our projects and recruit leading developers to collaborate and contribute to our efforts.  

CNCF: What are you excited to learn or see at KubeCon China?

Haifeng: We look forward to meeting with the industry’s top developers, end users, and vendors and continuing to learn about the newest technological developments. We also plan to showcase our own work and identify potential collaboration opportunities with companies, end users, and independent developers.

 

Want to learn more about how technology leaders in China are leveraging Cloud Native technologies? Join us for our inaugural KubeCon + CloudNativeCon China in Shanghai from Nov. 14-15. Hope to see you there!

Demystifying RBAC in Kubernetes


Today’s post is written by Javier Salmeron, Engineer at Bitnami

Many experienced Kubernetes users may remember the Kubernetes 1.6 release, where the Role-Based Access Control (RBAC) authorizer was promoted to beta. This provided an alternative authorization mechanism to the already existing, but difficult to manage and understand, Attribute-Based Access Control (ABAC) authorizer. While everyone welcomed this feature with excitement, it also created innumerable frustrated users. Stack Overflow and GitHub were rife with issues involving RBAC restrictions because most of the docs and examples did not take RBAC into account (although now they do). One paradigmatic case is that of Helm: simply executing “helm init + helm install” no longer worked. Suddenly, we needed to add “strange” elements like ServiceAccounts or RoleBindings before deploying a WordPress or Redis chart (more details in this guide).

Leaving these “unsatisfactory first contacts” aside, no one can deny the enormous step that RBAC meant for seeing Kubernetes as a production-ready platform. Since most of us have played with Kubernetes with full administrator privileges, we understand that in a real environment we need to:

  • Have multiple users with different properties, establishing a proper authentication mechanism.
  • Have full control over which operations each user or group of users can execute.
  • Have full control over which operations each process inside a pod can execute.
  • Limit the visibility of certain resources within namespaces.

In this sense, RBAC is a key element for providing all these essential features. In this post, we will quickly go through the basics (for more details, check the video below) and dive a bit deeper into some of the most confusing topics.

The key to understanding RBAC in Kubernetes

In order to fully grasp the idea of RBAC, we must understand that three elements are involved:

  • Subjects: The set of users and processes that want to access the Kubernetes API.
  • Resources: The set of Kubernetes API Objects available in the cluster. Examples include Pods, Deployments, Services, Nodes, and PersistentVolumes, among others.
  • Verbs: The set of operations that can be executed on the resources above. Different verbs are available (examples: get, watch, create, delete, etc.), but ultimately all of them are Create, Read, Update or Delete (CRUD) operations.

With these three elements in mind, the key idea of RBAC is the following:

We want to connect subjects, API resources, and operations. In other words, we want to specify, given a user, which operations can be executed over a set of resources.

Understanding RBAC API Objects

So, if we think about connecting these three types of entities, we can understand the different RBAC API Objects available in Kubernetes.

  • Roles: Will connect API Resources and Verbs. These can be reused for different subjects. They are bound to a single namespace (we cannot use wildcards to represent more than one, but we can deploy the same role object in different namespaces). If we want the role to apply cluster-wide, the equivalent object is called a ClusterRole.
  • RoleBinding: Will connect the remaining entity: subjects. Given a role, which already binds API Objects and verbs, we will establish which subjects can use it. For the cluster-level, non-namespaced equivalent, there are ClusterRoleBindings.

| TIP: Watch the video for a more detailed explanation.

In the example below, we are granting the user jsalmeron the ability to read, list and create pods inside the namespace test. This means that jsalmeron will be able to execute commands such as these:
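
$ kubectl get pods --namespace test
$ kubectl create -f ./mypod.yaml --namespace test    # mypod.yaml is a hypothetical pod manifest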

But not these:
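
$ kubectl delete pod mypod --namespace test     # "delete" was not granted
$ kubectl get pods --namespace default          # wrong namespace
$ kubectl get deployments --namespace test      # deployments are not a granted resource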

Example yaml files:
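
A minimal sketch of what those manifests could look like (the role and binding names are illustrative):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: pod-access
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-access-binding
  namespace: test
subjects:
- kind: User
  name: jsalmeron
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-access
  apiGroup: rbac.authorization.k8s.io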

Another interesting point would be the following: now that the user can create pods, can we limit how many? In order to do so, other objects, not directly related to the RBAC specification, allow configuring the amount of resources: ResourceQuota and LimitRanges. It is worth checking them out for configuring such a vital aspect of the cluster.

Subjects: Users and… ServiceAccounts?

One topic that many Kubernetes users struggle with is the concept of subjects, but more specifically the difference between regular users and ServiceAccounts. In theory it looks simple:

  • Users: These are global, and meant for humans or processes living outside the cluster.
  • ServiceAccounts: These are namespaced and meant for intra-cluster processes running inside pods.

Both have in common that they want to authenticate against the API in order to perform a set of operations over a set of resources (remember the previous section), and their domains seem to be clearly defined. They can also belong to what is known as groups, so a RoleBinding can bind more than one subject (but ServiceAccounts can only belong to the “system:serviceaccounts” group). However, the key difference is a cause of several headaches: users do not have an associated Kubernetes API Object. That means that while this operation exists:
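
$ kubectl get serviceaccounts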

this one doesn’t:
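
$ kubectl get users
error: the server doesn't have a resource type "users"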

This has a vital consequence: since the cluster does not store any information about users, the administrator will need to manage identities outside the cluster. There are different ways to do so: TLS certificates, tokens, and OAuth2, among others.

In addition, we would need to create kubectl contexts so we could access the cluster with these new credentials. In order to create the credential files, we could use the kubectl config commands (which do not require any access to the Kubernetes API, so they could be executed by any user). Watch the video above to see a complete example of user creation with TLS certificates.
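
A sketch of those steps, assuming a TLS client certificate and key have already been issued for jsalmeron (the file paths, cluster name, and context name here are hypothetical):

$ kubectl config set-credentials jsalmeron \
    --client-certificate=./jsalmeron.crt --client-key=./jsalmeron.key
$ kubectl config set-context jsalmeron-test \
    --cluster=mycluster --namespace=test --user=jsalmeron
$ kubectl config use-context jsalmeron-test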

RBAC in Deployments: A use case

We have seen an example where we establish what a given user can do inside the cluster. However, what about deployments that require access to the Kubernetes API? We’ll see a use case to better understand this.

Let’s go for a common infrastructure application: RabbitMQ. We will use the Bitnami RabbitMQ Helm chart (in the official helm/charts repository), which uses the bitnami/rabbitmq container. This container bundles a Kubernetes plugin responsible for detecting other members of the RabbitMQ cluster. As a consequence, the process inside the container requires access to the Kubernetes API, and so we need to configure a ServiceAccount with the proper RBAC privileges.

When it comes to ServiceAccounts, follow this essential good practice:

Have ServiceAccounts per deployment with the minimum set of privileges to work.

For applications that require access to the Kubernetes API, you may be tempted to have a type of “privileged ServiceAccount” that could do almost anything in the cluster. While this may seem easier, it could pose a security threat down the line, as unwanted operations could occur. Watch the video above to see the example of Tiller, and the consequences of having ServiceAccounts with too many privileges.

In addition, different deployments will have different needs in terms of API access, so it makes sense to have different ServiceAccounts for each deployment.

With that in mind, let’s check what the proper RBAC configuration for our RabbitMQ deployment should be.

From the plugin documentation page and its source code, we can see that it queries the Kubernetes API for the list of Endpoints. This is used for discovering the rest of the peers of the RabbitMQ cluster. Therefore, the Bitnami RabbitMQ chart creates:

  • A ServiceAccount for the RabbitMQ pods.
  • A Role (we assume that the whole RabbitMQ cluster will be deployed in a single namespace) that allows the “get” verb for the resource Endpoints.
  • A RoleBinding that connects the ServiceAccount and the Role.
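
A sketch of those three objects (the object names are illustrative; the actual chart templates differ in detail):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
subjects:
- kind: ServiceAccount
  name: rabbitmq
roleRef:
  kind: Role
  name: rabbitmq-endpoint-reader
  apiGroup: rbac.authorization.k8s.io

The RabbitMQ pods then opt into these privileges by referencing the ServiceAccount through the serviceAccountName field of their pod spec.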

The diagram shows how we enabled the processes running in the RabbitMQ pods to perform “get” operations over Endpoint objects. This is the minimum set of operations it requires to work. So, at the same time, we are ensuring that the deployed chart is secure and will not perform unwanted actions inside the Kubernetes cluster.

Final thoughts

In order to work with Kubernetes in production, RBAC policies are not optional. These can’t be seen as only a set of Kubernetes API Objects that administrators must know. Indeed, application developers will need them to deploy secure applications and to fully exploit the potential that the Kubernetes API offers to their cloud-native applications. For more information on RBAC, check the following links:

CNCF to Host Harbor in the Sandbox


Today, the Cloud Native Computing Foundation (CNCF) accepted Harbor, a cloud native registry, into the CNCF Sandbox, a home for early stage and evolving cloud native projects.

Project Harbor is an open source cloud native registry that stores, signs, and scans content. Created at VMware, Harbor extends the open source Docker Distribution by adding the functionalities usually required by users – such as security, identity, and management – and supports replication of images between registries. With more than 4,600 stars on GitHub, the project also offers advanced security features such as vulnerability analysis, role-based access control, activity auditing, and more.

“Container registries are essential to healthy cloud native environments, and enterprises require the security, features, flexibility, and portability that a trusted registry provides,” said Haining Henry Zhang, Technical Director, Innovation and Evangelism, at VMware, and Harbor project founder. “We’re thrilled to have Harbor in a neutral home that fosters open collaboration, which is incredibly important for creating new critical features. The project will benefit greatly from the contributions of CNCF’s thriving community.”

Harbor users include CNCF members Caicloud, China Mobile, JD.com, Pivotal, Rancher, Tencent Cloud, Tenxcloud and Wise2c, along with OnStar Shanghai, Talking Data and TrendMicro, among others.

TOC sponsors of the project include Quinton Hoole and Ken Owens.

The CNCF Sandbox is a home for early stage projects. For further clarification around project maturity levels in CNCF, please visit our outlined Graduation Criteria.



Meet the Ambassador: Cheryl Hung


Cheryl Hung, StorageOS, sat down with Kaitlyn Barnard, Cloud Native Computing Foundation (CNCF), to talk about cloud native, the cloud native Meetup group in London, and being an Ambassador. Below is their interview. You can also view the video.

 

Kaitlyn: Thank you so much for joining me today to talk about your community involvement and our Ambassador Program. Can you tell us a little bit about yourself?

Cheryl: I’ve been involved with Cloud Native for a couple of years, I used to work at Google as a software engineer. Now I’m the Product and DevOps Manager at a London start-up called StorageOS, building persistent storage for containers, and I also run the Cloud Native London Meetup group.

Kaitlyn: You’ve been talking about storage recently. Can you talk a little bit about some of the trends and challenges you’re seeing in the cloud native storage space right now?

Cheryl: It’s definitely an evolving space, because containers were designed to be stateless and immutable so storage is not really a concern. Except that at the end of the day, pretty much everybody has data that they need to store somewhere! It’s clearly an unsolved problem about what’s the best way to do databases and other stateful applications with Kubernetes, which is why I work at StorageOS, because it provides that storage abstraction layer and that means you can run databases within Kubernetes and have replication and failover, etc.

In terms of what I see coming next, the Container Storage Interface is one of the big things to be aware of in the cloud native space. Over the course of the next six months to a year, all of the vendors, cloud providers, and cloud orchestrators are going to get behind this interface. So hopefully, by this time next year, storage will be a solved problem as far as end users are concerned.

Kaitlyn: You run one of the largest CNCF meetups that we have. Why and how did you start the Cloud Native London Meetup?

Cheryl: I started it in about June of 2017. At the time there was Cloud Native Paris, Cloud Native Berlin, Cloud Native Barcelona, and there was a Cloud Native London, but it had been quiet and dormant for a couple of years. I thought this was a really good time to revive that Meetup and to bring in all the new knowledge, community, and interest around Kubernetes and Docker and also Prometheus, Linkerd, and all the other projects.  

It was about bringing in people from all different interests, but all in the same infrastructure and DevOps mindset and having a space where people can teach others as well. So I really encourage new speakers to join and share their stories, because there’s always a mindset around, “Oh, am I good enough to do public speaking?”. I see it as part of my role as the organizer to tell people that, “Yes, your stories are interesting and people do want to hear from you.”

Kaitlyn: Why did you want to be a CNCF ambassador?

Cheryl: I’m probably one of the truest cloud natives in that when I joined Google, I was 21. I was using Borg, which was Google’s internal predecessor to Kubernetes. Because I was so young, I really didn’t have a memory of how software was done before. So to me, it’s always been natural to containerize your software into running with an orchestrator, like Kubernetes, and to package it as microservices. It seems like the whole industry is moving that way. So becoming a CNCF ambassador has been about taking what I know and bringing that attitude, culture, tools, and infrastructure out to the entire industry.

Kaitlyn: You travel a lot, I see you at a lot of conferences. What has been your favorite place that you visited so far?

Cheryl: I travel mostly in the Bay Area and then around Europe. Last year we met at Open Source Summit in Los Angeles; that was really cool because I had not been to Los Angeles before, and that was a really interesting place. Some really great stuff and some stuff that was a bit scary, but fun. That was really cool to see.

One thing about being at all these conferences, when I was here last year at the Berlin KubeCon I really didn’t know anybody, I was completely on my own. This time, as I’ve been walking around, it’s been, “Hey, Cheryl” and  “Oh, I know you,” “Oh, you run the meetup, right?”. That’s been awesome, that’s been fantastic to have so many people get involved with the community, know about me, and come up and say hi.

Kaitlyn: It’s fun. Even though it’s 4,300 people now, which is crazy growth in the first place, it’s still a small community and everyone still kind of knows each other.

Cheryl: Exactly. And it’s still a really friendly and open community, which I love.

Kaitlyn: What do you do in your free time?

Cheryl: I’m getting married at the end of August! I’m incredibly excited about it, but it means that any time I’m not planning my work, my engineering team, and what they’re working on, I’m planning my wedding, which is a challenge in its own right! It’s a two day thing, a western wedding and a Chinese wedding, so I spend a lot of time negotiating with people about, “What kind of stationery do I like? What kind of flowers do I like?” It’s crazy!

Kaitlyn: Thank you so much for taking the time to speak with me today.

Cheryl: You are very welcome. Thank you for inviting me.

Announcing EnvoyCon! CFP due August 24


Originally published by Richard Li on the Envoy blog, here.

We are thrilled to announce that the first ever EnvoyCon will be held on December 10 in Seattle, WA as part of the KubeCon + CloudNativeCon Community Events Day. The community growth since we first open sourced Envoy in September 2016 has been staggering, and we feel that it’s time for a dedicated conference that can bring together Envoy end users and system integrators. I hope you are as excited as we are about this!

The Call For Papers is now open, with submissions due by Friday, August 24.

Talk proposals can be either 30 minute talks or 10 minute lightning talks, with experience levels from beginner to expert. The following categories will be considered:

  • Using and Integrating with Envoy (both within modern “cloud native” stacks such as Kubernetes but also in “legacy” on-prem and IaaS infrastructures)
  • Envoy in production case studies
  • Envoy fundamentals and philosophy
  • Envoy internals and core development
  • Security (deployment best practices, extensions, integration with authn/authz)
  • Performance (measurement, optimization, scalability)
  • APIs, discovery services and configuration pipelines
  • Monitoring in practice (logging, tracing, stats)
  • Novel Envoy extensions
  • Porting Envoy to new platforms
  • Migrating to Envoy from other proxy & LB technologies
  • Using machine learning on Envoy observability output to aid in outlier detection and remediation
  • Load balancing (both distributed and global) and health checking strategies

Reminder: This is a community conference — proposals should emphasize real world usage and technology rather than blatant product pitches.

If you’re using or working on Envoy, please submit a talk! If you’re interested in attending, you can register as part of the KubeCon registration process.