Announcing Envoy Project Journey Report

Today we are very excited to release our Project Journey Report for Envoy. This is the second report we have issued for CNCF graduated projects (the first covered Kubernetes).

Envoy is a widely adopted, open source network proxy developed by engineers at Lyft and released on September 14, 2016. It is frequently used in conjunction with deployments of Kubernetes and other cloud native technologies, though it is also used in many non-cloud environments.

CNCF’s efforts to nurture the growth of Envoy span a wide range of activities, from organizing and running the rapidly growing EnvoyCon to creating webinars and recording case studies, all aimed at making Envoy more accessible and fostering its community. These activities are funded by CNCF’s membership dues and by revenues from sponsorship and registration at our conferences.

This is one of a series of reports that help explain our nurturing efforts and some of the positive trends emerging around CNCF-hosted projects. This report attempts to objectively assess the state of the Envoy project and how CNCF has impacted its progress and growth. Note – we recognize that it’s not feasible to conclusively sort out correlation and causation, but this report tries to document correlations. For the report, we pulled data from multiple sources, particularly CNCF’s own DevStats tool, which provides detailed development statistics for all CNCF projects.

Some of the highlights of the report include:

  • Development Velocity – Envoy continues to show strong growth across all four key development velocity vectors – code commits, pull requests, issues filed, and authors. The number of project contributors has grown by more than 600% since Envoy joined CNCF.

Cumulative growth of Envoy contributors over time

  • Code Diversity – Envoy grew out of the end user community, and that community continues to contribute a large percentage of Envoy’s code. Additionally, while Envoy started primarily as a Lyft/Google project, today over four dozen companies regularly contribute to it, spanning the world’s largest cloud computing companies down to small startups and individual contributors.

Percentage breakdown of contributions by company since Envoy project launch

  • Documentation Expansion – Continuous additions to and improvements of project documentation are essential for the growth of any open source project. Since joining CNCF, the number of authors and companies committing documentation to Envoy has grown by more than 300% and 200%, respectively. 

Since joining CNCF in 2017, Envoy has recorded:

  • >1.7K contributors
  • >10.3K code commits
  • >5.7K pull requests
  • >51K contributions
  • 176 contributing companies

We’re thrilled that Envoy has come so far in three years and that CNCF has been able to positively impact that growth in the past two years of our collaboration with the Envoy community. Even better, we strongly believe Envoy’s growth has only just begun. 

This report is part of CNCF’s commitment to fostering and sustaining an ecosystem of open source, vendor-neutral projects. Please read and enjoy the report, share your feedback with us – and stay tuned for more project journey reports for other projects.

How DENSO Is Fueling Development on the Vehicle Edge with Kubernetes

Cars that update like smartphones, adjusting features based on the driver’s preferences? The future is now for Japan’s DENSO Corporation, one of the biggest automotive components suppliers in the world. 

With the advent of connected cars, DENSO established a Digital Innovation Department to expand its business beyond the critical layer of the engine, braking systems, and other automotive parts into the non-critical analytics and entertainment layer. Comparing connected cars to smartphones, R&D Product Manager Seiichi Koizumi says DENSO wants the ability to quickly and easily develop and install apps for the “blank slate” of the car, and iterate them based on the driver’s preferences. Thus “we need a flexible application platform,” he says. 

But working on vehicle edge and vehicle cloud products meant there were several technical challenges: “the amount of computing resources, the occasional lack of mobile signal, and an enormous number of distributed vehicles,” says Koizumi. “We are tackling these challenges to create an integrated vehicle edge/cloud platform.”

With a vehicle edge computer, a private Kubernetes cloud, and managed Kubernetes on GKE, EKS, and AKS, DENSO now has a 2-month development cycle for non-critical apps, compared to 2-3 years for traditional, critical-layer development. The company releases 10 new applications a year. And cloud native technologies have enabled DENSO to deliver these applications via its new dash cam, which securely collects data and sends it to the cloud.

The Digital Innovation Department is known as “Noah’s Ark,” and it has grown from 2 members to 70—with plans to more than double in the next year. The way they operate is completely different from the traditional Japanese automotive culture. But just as the company embraced change brought by hybrid cars in the past decade, Koizumi says, they’re doing it again now, as technology companies have moved into the connected car space. “Another disruptive innovation is coming,” he says, “so to survive in this situation, we need to change our culture.”

For more about DENSO’s cloud native journey, read the full case study.

Declarative Data Infrastructure Powers the Data-Driven Enterprise

Guest post from Kiran Mova and Chuck Piercey, MayaData

BigData, AI/ML, and modern analytics permeate the business world and have become a critical element of enterprise strategies to serve customers better, innovate faster, and stay ahead of the competition. Data is core to all of this. In this blog, we focus on how Kubernetes and related container native storage technologies enable data engineers (aka DataOps teams) to build scalable, agile data infrastructure that achieves these goals.

Being a Data-Driven Enterprise is Strategic

Enterprises are increasing their spending to enable data driven decisions and foster a data driven culture. A recent survey of enterprise executives’ investments showed the importance of data-driven analytics to the C-suite.

One data point to highlight: people and process pose greater challenges to adoption than technology and tools.

What is DataOps?

Kubernetes has changed the landscape of application development. With emerging data operators in Kubernetes-managed data infrastructures, we have entered a new world of self-managed data infrastructures called DataOps. Inspired by DevOps, DataOps enables collaboration among distributed, autonomous teams of data analysts, data scientists, and data engineers with shared KPIs.

“DataOps stems from the DevOps movement in the software engineering world which bridges the traditional gap between development, QA, and operations so the technical teams can deliver high-quality output at an ever-faster pace. Similarly, DataOps brings together data stakeholders, such as data architects, data engineers, data scientists, data analysts, application developers, and IT operations….DataOps applies rigor to developing, testing, and deploying code that manages data flows and creates analytic solutions.” – Wayne W. Eckerson DataOps white paper

A key aspect of Kubernetes’ success is that DevOps can drive everything through version-managed intent/YAML files (aka GitOps), managing infrastructure the same way IT manages code: reproducibly and scalably. The same approach can be applied to DataOps.
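
To make this concrete, here is a minimal sketch of the kind of version-managed intent file a GitOps workflow keeps in git; the component name and image registry are hypothetical, but the pattern is the point: desired state is declared in a reviewable file, and a controller reconciles the cluster toward it.

# Hypothetical intent file, stored in git: the declared state is the
# single source of truth, and a controller reconciles the cluster to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingest-worker            # illustrative data-pipeline component
spec:
  replicas: 3                    # scaling is a one-line, reviewable change
  selector:
    matchLabels:
      app: ingest-worker
  template:
    metadata:
      labels:
        app: ingest-worker
    spec:
      containers:
        - name: worker
          image: registry.example.com/ingest-worker:1.4.2  # pinned for reproducibility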

Siloed Teams & Processes Fail at DataOps

Data pipelines produced by organizational silos built around specialized teams (still common in many businesses today) suffer from having delivery responsibility split across functional groups. The organizational structure constrains the development process to whatever can bridge the gaps between tools, and this approach will inevitably be prone to failures caused by the bureaucracy itself.

Integrated Teams & Processes Succeed at DataOps

A recent article on the Distributed Data Mesh proposes many organizational and technical changes to how Data Engineering / Science teams can become more effective and agile, like the success seen by product development teams using DevOps/SRE culture. A shift from a tool focus to a CI/CD process focus is core to the paradigm. Both process and development execution necessarily co-evolve and, unsurprisingly, such teams are often themselves distributed, much like the architecture they build on. In her paper, Zhamak Dehghani proposes an approach for managing data as a product that parallels the DevOps techniques applied to commercial software products and teams.

The core of her approach is to redefine the team and its responsibilities by shifting the architectural milieu from a focus on technology and tools (like Data Engineers, ML Engineers, Analytics Engineers) to a more interdisciplinary concept structured around treating Data itself as a Product. In this structure, each Data Product has an independent team that can innovate, select tools, and implement while exposing data results using a standard API contract.

These teams consist of a variety of skill sets: data engineers, data scientists, data analysts, ML engineers, decision makers, data collection specialists, data product managers, and reliability engineers (while the roles for successful data product teams are clear, individuals can simultaneously fulfill multiple roles). Critically, all team members interact with a shared set of DataOps processes.

Declarative Data Infrastructure

DataOps relies on a data infrastructure that can abstract away platform-specific features and allow product teams to focus on the data they own while leveraging shared resources. The key enabling technology for DataOps is Declarative Data Infrastructure (DDI). DDI refers to both the data and the storage infrastructure running on Kubernetes and is the technology stack that converts compute, network, and storage into a scalable, resilient and self-managed global resource that each autonomous team can use without having to wait on approvals from central storage administrators. Kubernetes and related technologies have emerged as a standard that enables the DDI technology stack.

For example, the data infrastructure in the data mesh example above comprises three layers:

  • A Data Pipeline – like the Airflow framework
  • A Data Access Layer – like Apache Kafka or Postgres
  • A Data Storage Layer – like OpenEBS

In the past couple of years, we have seen quite a few technologies under the Data Pipeline and Data Access Layer move towards Kubernetes. For instance, it isn’t uncommon for data engineering teams to move away from Hadoop-based systems to something like Pachyderm, moving their data pipelines into Kubernetes with Airflow to reduce the cost of infrastructure and create reproducible, resilient and extensible data pipelines.

Projects like Airflow have matured in implementing and orchestrating data pipelines on Kubernetes over the past two years and are being adopted at companies like Lyft, Airbnb, and Bloomberg.

Correspondingly, data pipeline adoption has triggered and enabled a new breed of products and tools for the Data Access and Storage Layers it feeds.

An excellent example that demonstrates the move of Data Access Layer into Kubernetes is the Yolean prebuilt Kafka cluster on Kubernetes that delivers a production-quality, customizable Kafka-as-a-service Kubernetes cluster. Recently, Intuit used this template to rehost all their financial applications onto Kubernetes. 

Two approaches have made the data access layer easier to adopt. The first is a shim layer that provides declarative YAMLs for instantiating the Data Access Layer (like Kafka), backed by either an on-prem version or a managed service version. This approach helps users migrate current implementations into Kubernetes, but it locks users into specific implementations. The alternate approach is to have Operators build the data access layer so that it can run on any storage layer, thereby avoiding cloud vendor lock-in.
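
As a sketch of the shim-layer idea – the API group and field names below are invented for illustration, not taken from any particular operator – a single declarative resource can stand in for an entire Kafka installation, with the backing storage selected by a class name rather than hard-coded:

# Hypothetical custom resource illustrating the shim-layer pattern;
# an operator would expand this into brokers, services, and volumes.
apiVersion: dataops.example.com/v1alpha1   # invented API group
kind: KafkaCluster
metadata:
  name: analytics-events
spec:
  replicas: 3
  storage:
    storageClassName: openebs-local   # swap the class, not the app, to change backends
    size: 200Gi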

The data storage layer is seeing a similar shift, responding in part to the rise of new, inherently distributed workloads and the emergence of a new set of users. Several Kubernetes native storage solutions are built using the same declarative philosophy and are managed by Kubernetes itself; we refer to this as the Declarative Data Plane.

Declarative Data Plane

The data plane is composed of two parts:

  1. An Access Layer, which is primarily concerned with access to the storage, and
  2. Data Services like replication, snapshot, migration, compliance and so forth. 

CSI, a standard Declarative Storage Access Layer

Kubernetes and the CNCF vendor and end user community have achieved a vendor neutral standard in the form of CSI, enabling any storage vendor to provide storage to Kubernetes workloads. The workloads can run on any type of container runtime – Docker or hypervisors. In part, the success of some of the self-managed Data Access Layer products can be attributed to CSI as a standard and to constructs like Storage Classes, PVCs, and Custom Resources and Operators.
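
For instance, once a CSI driver is installed (the provisioner name below is a placeholder), a StorageClass and a PersistentVolumeClaim are all a workload team needs to declare; the vendor-specific details stay behind the CSI interface:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.vendor.example.com        # placeholder CSI driver name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer    # bind once the pod is scheduled
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd               # the only storage decision the team makes
  resources:
    requests:
      storage: 50Gi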

However, CSI leaves many core data infrastructure implementation details to vendors and service providers, such as:

  • Data Locality and high availability
  • Compliance for GDPR or HIPAA
  • Multi-Cloud and Hybrid-Cloud deployments
  • Analytics and Visibility into usage for various teams

Thankfully, the extensibility of Kubernetes via custom resources and operators is enabling a new breed of storage technologies that are Kubernetes native, sometimes called Container Attached Storage (CAS). CAS declaratively manages data services down to the storage device.

Declarative (Composable) Data Services

Depending on the application requirements, a data plane can be constructed using one of the many options available. Kubernetes and containers have helped redefine the way Storage or the Data Plane is implemented.

For example, for distributed applications like Kafka that have built-in replication and rebuilding capabilities, a Local PV is just what is needed for the Data Plane. Using Local PVs in production typically involves provisioning, monitoring, and managing backup/migration; data infrastructure engineers just need to deploy Kubernetes native tools like operators (which can perform static or dynamic provisioning), Prometheus, Grafana, and Jaeger for observability, and Velero for backup/restore.
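
A minimal sketch of such a Local PV follows; the node name and disk path are illustrative. The nodeAffinity block pins the volume to the node that physically holds the disk, while Kafka’s own replication covers node loss:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-broker-0-pv        # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage  # a no-provisioner class for local disks
  local:
    path: /mnt/disks/ssd0          # illustrative disk path on the node
  nodeAffinity:                    # a Local PV is only usable on its node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1    # illustrative node name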

For another set of applications, there may be a need for a Data Plane that can perform replication – within a cluster, across clusters, and across zones/clouds – as well as snapshots and cloning. By making use of Custom Resources and Operators, Kubernetes native projects like Rancher’s Longhorn and OpenEBS have emerged to combat cloud vendor lock-in for the Kubernetes ecosystem, not only providing a unified experience and tools for managing storage resources on-prem or in the cloud, but also leveraging and optimizing the investments that enterprises have already made in legacy storage.

The Next Gen: Declarative Data Plane

With a unified experience and a declarative interface to manage storage/data services, data engineers can interact with Data Infrastructure in a standard way. Building on the CSI foundation, projects like OpenEBS and Velero, and standards efforts like KubeMove and the SODA Foundation (aka Open Data Autonomy/OpenSDS), are focusing on implementing easy-to-use Kubernetes storage services for on-prem and cloud, and are pushing forward standardization of the Declarative Data Plane (aka DDP).

The DDP delivers several architecturally important elements for the next generation of distributed applications’ DataOps: 

  • Enabling autonomous teams to manage their own storage
  • Scalable polyglot big data storage
  • Encryption for data at rest and in motion
  • Compute and data locality
  • Compliance for GDPR or HIPAA
  • Multi-Cloud and Hybrid-Cloud deployments
  • Backup and Migration 
  • Analytics and Visibility into usage for various teams 

DDI is backed by infrastructure teams in large enterprises that have already adopted Kubernetes and are using it along with projects like OpenEBS to deliver:

  • Etcd As a Service
  • ElasticSearch As a Service
  • PostgreSQL As a Service
  • ML pipelines of many types (one promising one is Meltano from GitLab)
  • Kafka as a service

These implementations show that enterprise customers are taking full advantage of the capabilities delivered by a Declarative Data Infrastructure. Such enterprises are leveraging the significant architectural enhancements DDI provides at the data layer to deliver faster, better, and competitively differentiating enterprise analytics. DDI also helps with optimal use of infrastructure and with cost optimization.

Declarative Data Infrastructures are Here to Stay

The declarative (GitOps) approach to managing data infrastructure is here to stay. We recently heard a top executive at a large enterprise say that unless the technology stack is declarative, it is not operable. 

ServiceMeshCon 2019 Schedule Announced

We are pleased to announce the schedule for the inaugural ServiceMeshCon, a KubeCon + CloudNativeCon co-located event. Hosted by CNCF, the conference will take place on November 18th on Day Zero of KubeCon + CloudNativeCon San Diego. 

ServiceMeshCon is a vendor neutral conference on service mesh technologies. The line-up will feature maintainers across various service mesh projects and showcase lessons learned from running this technology in production. 

Sessions will range from introductory-level to advanced, featuring speakers from companies that are both innovating and using service mesh technologies. Talks include:

  • An Intro to Network Service Mesh (NSM) and its relationship to Service Mesh – John Joyce & Tim Swanson, Cisco
  • Control Plane for Large Mesh in a Heterogeneous Environment – Fuyuan Bie & Zhimeng Shi, Pinterest
  • How Google manages sidecars for millions of containers without breaking anything (much) – Sven Mawson, Google
  • Service Mesh Interface: Developer friendly APIs for Service Mesh – Michelle Noorali, Microsoft
  • There’s A Bug in My Service Mesh! What Do You Do When the Tool You Rely On is the Cause? – Ana Calin, Paybase

Tickets are available for $199, and pre-registration is required to attend ServiceMeshCon. Attendees can add it to their KubeCon + CloudNativeCon San Diego registration.

Turning the TOC Up to 11

The Technical Oversight Committee (TOC) is, along with the governing board and the end user community, one of the three core governing bodies of CNCF. Its charge includes:

  • defining and maintaining the technical vision for CNCF
  • approving new projects
  • creating a conceptual architecture for the projects
  • removing or archiving projects
  • accepting feedback from the end user community and mapping to projects
  • defining common practices to be implemented across CNCF projects, if any

The TOC is the technical backbone of the CNCF community, and that’s why we have decided to turn it up to 11. Starting in January 2020, the TOC will expand from 9 members to 11, adding a second end user position and one position selected by the project maintainers.

The TOC will consist of:

  • 6 members selected by the Governing Board (GB)
  • 2 selected by the End User Community (rather than 1)
  • 1 selected by the non-sandbox project maintainers (rather than 0)
  • 2 selected by the other 9 members of the TOC

In addition to the two new members, the governing board has also approved revisions to the current charter, which help to clarify the rules around nomination, qualification, and election.

The next TOC Elections will take place in December, with 3 GB-appointed TOC members up for election. The new end user-appointed member and a maintainer-appointed member will also be elected at this time. All TOC terms will still be 2 years.

The TOC is made up of representatives with impressive technical knowledge and backgrounds who provide technical leadership to the cloud native community. For additional information, please review: the TOC election process, TOC principles, and TOC representatives.

The Secret to Cloud Native Success for adidas? ‘We Love Competition’

A few years ago, it would take as long as a week just to get a developer VM at adidas. For engineers, says Daniel Eichten, Senior Director of Platform Engineering, “it felt like being an artist with your hands tied behind your back, and you’re supposed to paint something.”

To improve the process, “we started from the developer point of view,” and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago.

They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus. Releases went from every 4-6 weeks to 3-4 times a day, and within a year, the team got 40% of the company’s most impactful systems running on the platform. 

How did they do it? One big factor was taking into consideration the company culture; adidas has sports and competition in its DNA. “Top-down mandates don’t work at adidas, but gamification works,” says Cornago. “So this year we had a DevOps Cup competition. Every team created new technical capabilities and had a hypothesis of how this affected business value. We announced the winner at a big internal tech summit with more than 600 people. It’s been really, really useful for the teams.”

Read more about how adidas won at cloud native in the full case study, and watch Fernando and Daniel chat about what’s worked for them in this video:

5 Kubernetes RBAC Mistakes You Must Avoid

Guest post by Connor Gilbert, originally published on StackRox

If you run workloads in Kubernetes, you know how much important data is accessible through the Kubernetes API—from details of deployments to persistent storage configurations to secrets. The Kubernetes community has delivered a number of impactful security features over the years, including Role-Based Access Control (RBAC) for the Kubernetes API. RBAC is a key security feature that protects your cluster by allowing you to control who can access specific API resources. Because the feature is relatively new, your organization might have configured RBAC in a manner that leaves you unintentionally exposed. To achieve least privilege without leaving unintentional weaknesses, be sure you haven’t made any of the following five configuration mistakes.

The most important advice we can give regarding RBAC is: “Use it!” Different Kubernetes distributions and platforms have enabled RBAC by default at different times, and newly upgraded older clusters may still not enforce RBAC because the legacy Attribute-Based Access Control (ABAC) controller is still active. If you’re using a cloud provider, this setting is typically visible in the cloud console or using the provider’s command-line tool. For instance, on Google Kubernetes Engine, you can check this setting on all of your clusters using gcloud:

$ gcloud container clusters list --format='table[box](name,legacyAbac.enabled)'
┌───────────┬─────────┐
│    NAME   │ ENABLED │
├───────────┼─────────┤
│ with-rbac │         │
│ with-abac │ True    │
└───────────┴─────────┘

Once you know that RBAC is enabled, you’ll want to check that you haven’t made any of the top five configuration mistakes. But first, let’s go over the main concepts in the Kubernetes RBAC system.

Overview of Kubernetes RBAC

Your cluster’s RBAC configuration controls which subjects can execute which verbs on which resource types in which namespaces. For example, a configuration might grant user alice access to view resources of type pod in the namespace external-api. (Resources are also scoped inside of API groups.) A minimal manifest for this example appears after the list below.

These access privileges are synthesized from definitions of:

  • Roles, which define lists of rules. Each rule is a combination of verbs, resource types, and namespace selectors. (A related noun, ClusterRole, can be used to refer to resources that aren’t namespace-specific, such as nodes.)
  • RoleBindings, which connect (“bind”) roles to subjects (users, groups, and service accounts). (A related noun, ClusterRoleBinding, grants access across all namespaces.)
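
To make the alice example concrete, here is a minimal manifest sketch (the role and binding names are arbitrary):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: external-api
  name: pod-viewer               # arbitrary role name
rules:
  - apiGroups: [""]              # "" is the core API group, which contains pods
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: external-api
  name: alice-views-pods
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:                         # a binding references exactly one role
  kind: Role
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io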

In Kubernetes 1.9 and later, Cluster Roles can be extended to include new rules using the Aggregated ClusterRoles feature.

This design enables fine-grained access limits, but, as in any powerful system, even knowledgeable and attentive administrators can make mistakes. Our experiences with customers have revealed the following five most common mistakes to look for in your RBAC configuration settings.

Configuration Mistake 1: Cluster Administrator Role Granted Unnecessarily

The built-in cluster-admin role grants effectively unlimited access to the cluster. During the transition from the legacy ABAC controller to RBAC, some administrators and users may have replicated ABAC’s permissive configuration by granting cluster-admin widely, neglecting the warnings in the relevant documentation. If users or groups are routinely granted cluster-admin, account compromises or mistakes can have dangerously broad effects. Service accounts typically also do not need this type of access. In both cases, a more tailored Role or Cluster Role should be created and granted only to the specific users that need it.

Configuration Mistake 2: Improper Use of Role Aggregation

In Kubernetes 1.9 and later, Role Aggregation can be used to simplify privilege grants by allowing new privileges to be combined into existing roles. However, if these aggregations are not carefully reviewed, they can change the intended use of a role; for instance, the system:view role could improperly aggregate rules with verbs other than view, violating the intention that subjects granted system:view can never modify the cluster.
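
To see why aggregations deserve review, consider this sketch (the label and role names are illustrative): any ClusterRole carrying the matching label is silently folded into the aggregated role, so a single unreviewed label addition can expand what the role grants.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregated-viewer            # illustrative aggregated role
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        example.com/aggregate-to-viewer: "true"
rules: []                            # filled in automatically by the controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: extra-rules
  labels:
    example.com/aggregate-to-viewer: "true"  # this label alone merges the rules below
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]  # adding "delete" here would quietly let every
                                     # subject bound to aggregated-viewer delete configmaps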

Configuration Mistake 3: Duplicated Role Grant

Role definitions may overlap with each other, giving subjects the same access in more than one way. Administrators sometimes intend for this overlap to happen, but this configuration can make it more difficult to understand which subjects are granted which accesses. And, this situation can make access revocation more difficult if an administrator does not realize that multiple role bindings grant the same privileges.

Configuration Mistake 4: Unused Role

Roles that are created but not granted to any subject can increase the complexity of RBAC management. Similarly, roles that are granted only to subjects that do not exist (such as service accounts in deleted namespaces or users who have left the organization) can make it difficult to see the configurations that do matter. Removing these unused or inactive roles is typically safe and will focus attention on the active roles.

Configuration Mistake 5: Grant of Missing Roles

Role bindings can reference roles that do not exist. If the same role name is reused for a different purpose in the future, these inactive role bindings can suddenly and unexpectedly grant privileges to subjects other than the ones the new role creator intends.

Summary

There are many security considerations to be aware of when using Kubernetes—are your images, deployments, nodes, and clusters properly locked down?—and Kubernetes RBAC configuration is one of the more important controls for the security of your clusters. Properly configuring your cluster RBAC roles and bindings helps minimize the impact of application compromises, user account takeovers, application bugs, or simple human mistakes. Check your clusters today—have you made any of these configuration mistakes?

Nominations are open for the annual CNCF community awards

The nomination process is now open for the fourth annual CNCF Community Awards. These awards recognize the community members, developers and advocates working hardest to advance cloud native.  

If you know a deserving ambassador, maintainer and/or advocate working hard to advance cloud native innovation, please check out the categories and forms below and nominate them for this year’s awards:

  • CNCF Top Ambassador: A champion for the cloud native space, this individual helps spread awareness of the CNCF (and its incubated projects). The CNCF Ambassador leverages multiple platforms, both online as well as speaking engagements, driving interest and excitement around the project.
  • CNCF Top Committer: This will recognize excellence in technical contributions to CNCF and its projects. The CNCF Top Committer has made key commits to projects and, more importantly, contributes in a way that benefits the project neutrally as a whole.
  • Chop Wood/Carry Water Award: This is given to a community member who helps behind the scenes, dedicating countless hours of their time to open source projects + completing often thankless tasks for the ecosystem’s benefit. The winner of this award will be chosen by the TOC and CNCF Staff.

Previous winners of the Community Awards include Michael Hausenblas, Jordan Liggitt, Dianne Mueller, Jorge Castro, Paris Pittman and many more!

Nominations are now open, and will close on October 2nd. We’ll be announcing the awards at KubeCon + CloudNativeCon North America 2019 in San Diego on November 19.

Voting, which will be open starting October 2nd, will be performed using the CIVS tool, using emails from the CNCF database for the following groups:

  • CNCF Ambassadors are eligible to vote for the Top Ambassador
  • CNCF Maintainers are eligible to vote for the Top Committer

Nominate now, and be sure to register for KubeCon + CloudNativeCon 2019 to find out the winners!

How Bloomberg Achieves Close to 90-95% Hardware Utilization with Kubernetes

If you work in financial services, the Bloomberg Terminal is probably your best friend. Behind the scenes, Bloomberg deals every day with hundreds of billions of pieces of data coming in from the financial markets and millions of news stories from hundreds of thousands of sources. There are 14,000 different applications on the Terminal alone. At that scale, delivering information across the globe with high reliability and low latency is a big challenge for the company’s more than 5,500-strong Engineering department.

In recent years, the infrastructure team has worked on delivering infrastructure as a service while spinning up a lot of VMs and scaling as needed. “But that did not give app teams enough flexibility for their application development, especially when they needed to scale out faster than the actual requests that are coming in,” says Andrey Rybka, Head of the Compute Architecture Team in Bloomberg’s Office of the CTO. “We needed to deploy things in a uniform way across the entire network, and we wanted our private cloud to be as easy to use as the public cloud.”

In 2016, Bloomberg adopted Kubernetes—when it was still in alpha—and has seen remarkable results ever since using the project’s upstream code. “With Kubernetes, we’re able to very efficiently use our hardware to the point where we can get close to 90 to 95% utilization rates,” says Rybka. Autoscaling in Kubernetes allows the system to meet demands much faster. Furthermore, Kubernetes “offered us the ability to standardize our approach to how we build and manage services, which means that we can spend more time focused on actually working on the open source tools that we support,” says Steven Bower, Data and Analytics Infrastructure Lead. “If we want to stand up a new cluster in another location in the world, it’s really very straightforward to do that. Everything is all just code. Configuration is code.”

Read more about Bloomberg’s cloud native success story in the full case study.

Kubernetes IoT Edge WG: Identifying Security Issues at the Edge

With IoT and Edge computing emerging as viable options for building and deploying applications, more and more developers want to use Kubernetes and other cloud native technologies outside of typical data center deployments. These developers want to be able to use all of the tools and best practices they are used to, even in non-traditional environments.

For this reason, the community formed the Kubernetes IoT Edge Working Group, a cross-SIG effort currently sponsored by sig-networking and sig-multicluster with a focus on improving Kubernetes IoT and Edge deployments. Within the working group, community members are encouraged to share their ideas to push forward cloud native developments in IoT and Edge technologies. 

IoT and Edge applications by design have a lot of distributed components that don’t usually sit together within the same data center infrastructure. For this reason, there are a lot of potential security implications to take into consideration when using these technologies.

The Kubernetes IoT Edge Working Group has developed a new whitepaper to collect these security challenges in a single document. The purpose of the whitepaper is to identify a comprehensive list of edge security challenges and concerns that the CNCF and Kubernetes communities should recognize.

In publishing the whitepaper, the working group hopes to:

  • Identify a set of universal security challenges at the edge (covering roughly 80% of the total security concerns for all use cases).
  • Describe each security challenge in a manner that allows all professionals with a moderate level of technical skill to understand the issue.

“With the proliferation of IoT and Edge computing, it’s important that we as a community take steps toward ensuring these new technologies are as secure as they can be,” said Dejan Bosanac, Senior Software Engineer at Red Hat and chair of the IoT Edge working group. “As with any emerging technology, there are blind spots, and we want to identify these so that the community can work together to resolve these before they can be used maliciously. We’re excited to work on this with the community, giving us more eyes to identify potential issues, and more brainpower to identify solutions.”

The whitepaper covers potential security implications of IoT and Edge implementations in the following areas:

  • Trusting hardware
  • Trusting connected devices
  • Within the operating system
  • Network concerns
  • Edge microservices

Since the types of security challenges and layers at which they occur are varied, producing secure edge computing stacks will require the effort of many vendors and contributors working in concert. We hope this whitepaper will help encourage immediate effort from the community to resolve the identified issues!
