CNCF and BVP Host Discussions with Leading Open Source Contributors and End Users


This blog was originally posted on the Bessemer Venture Partners Blog.

Bessemer was proud to partner with the CNCF to host an evening devoted to discussing exciting new open source infrastructure projects and best practices for enterprises making the cloud native transformation. Given the high concentration of financial services firms in NYC, we were fortunate to have great input from many of these leaders on the added challenges of making these transitions in highly regulated institutions.

Across all of these discussions, it was clear that everyone is dealing with similar macro challenges. Today’s infrastructure leaders are in a tough spot, having to meet increasingly strict data governance and security policies while also providing the ever-increasing scalability that modern enterprises demand. In addition, these leaders now face a myriad of infrastructure options (public/private/hybrid cloud), compute paradigms (VMs/containers/serverless), open source projects, and startup vendors to choose from. Fortunately, governing bodies such as the CNCF are providing some direction and guidance for enterprises as they begin to navigate the crowded space.

Some of the best actionable tips we heard for helping enterprises succeed with cloud native transitions include:

1) Empower engineers to explore the open source landscape and experiment with new projects, then establish a point person, such as a Director of Open Source, to help centralize enforcement of best practices before new open source projects go into production.

2) Create a funnel process to help qualify open source projects during testing and help teams get approval to use production data or build mission-critical apps with open source software. This enables development teams to move quickly and experiment with new technologies while also ensuring an enterprise can manage its security and compliance concerns before new deployments go into production.

3) Assign clear ownership of microservices and internal open source projects. When you have hundreds or thousands of microservices in production, knowing whom to contact for which service can become a real problem.

4) If you need help navigating this rapidly changing ecosystem, check out some of the guidance provided by the CNCF here.

In addition, the evening featured talks from three great speakers, as outlined below.

Ken Owens, VP of Cloud Engineering at Mastercard, shared lessons learned from his experience adopting two of the leading open source technologies, Kubernetes and Istio, in production at Mastercard. Based on our conversations with many large enterprises in the financial services industry, we were really impressed to learn from Ken just how far ahead Mastercard is in this transition. His main advice was that to get any large enterprise, particularly a highly regulated company in the financial services industry, to cross the open source chasm, you need very strong buy-in from senior executives to go cloud native. He advised team leads to lay the necessary groundwork in communications and relationship building, since this infrastructure change will be a multi-month, sometimes multi-year endeavor.

Priyanka Sharma, Director of Alliances at GitLab, shared tips on how to avoid some of the DevOps horror stories she has seen at enterprises that have tried to keep up with the latest trends: go for quality over quantity in devtools. Latching onto new trends in DevOps can ultimately lead to more headache and heartbreak than efficiency savings. Instead of having a “toolbox” of ten different DevOps tools, try to standardize on two or three tools that fit the bulk of your needs.

Andrew Jessup, project maintainer for SPIFFE and co-founder of Scytale, shared the origin story of the SPIFFE open source project and the need they saw in the market for scalable identity and access management infrastructure for microservices. He also highlighted how the pain points that SPIFFE solves are increasingly acute in enterprises moving to a microservices-based architecture distributed across heterogeneous infrastructure that requires high scalability and elasticity. Because these environments have workloads constantly spinning up and disappearing across various network boundaries, traditional approaches to security and authentication break down, requiring an identity-focused approach.

CNCF is proud to foster and support the next wave of open source communities. We want to give special thanks to BVP for supporting CNCF and making this event possible, with support from their portfolio companies npm, ScyllaDB, and Scytale. We also want to thank the speakers for sharing their firsthand open source knowledge, helping to give back and push the community forward.

GSoC 2018: Extending Envoy’s fuzzing coverage


The Google Summer of Code (GSoC) 2018 program has come to an end, and we followed up with CNCF’s seven interns, previously featured in this blog post, to check in on how their summer projects progressed.

Over the summer, Anirudh Morali of Anna University worked with mentors Matt Klein, Constance Caramanolis, and Harvey Tuch on “Extending Envoy’s Fuzzing Coverage,” a project focused on extending fuzz coverage, including proto, data plane, and H2-level frame fuzzing.

I am Anirudh, a final year computer science undergraduate at Anna University, India. I previously participated in Google Summer of Code 2017 with Haiku OS, and this time, Google Summer of Code 2018 with Cloud Native Computing Foundation.

On February 12th, Google posted the list of organizations that would be participating in Google Summer of Code 2018. During my internship with Hasura, I had learned about the Cloud Native Computing Foundation and all of its projects. Having heard about CNCF’s success from people who had participated in Google Summer of Code with CNCF the previous year, I was interested in sending a proposal to CNCF, but had no idea which specific project or domain to work on. The CNCF projects participating in GSoC (Kubernetes, Prometheus, Envoy, CoreDNS, and a few more) had huge codebases, which was overwhelming for me at first. Though most codebases had parts written in different languages, the majority were in Go. I wanted to learn Go, find low-hanging issues, fix them, and then work on a proposal. But I realized it was too late to learn a new language with just a few weeks left before the program began, and it would be even more difficult to write code for these projects, since they are not beginner friendly.

I had previously worked with a C++ codebase and was looking for projects built with C++. I came across Envoy and noticed that this was the first time Envoy was participating in GSoC. You can read about Envoy here: https://www.envoyproxy.io/ – Envoy is a service proxy originally created at Lyft, with a codebase in C++. It’s designed to minimize memory and CPU footprint while offering load balancing, network tracing, and visibility into database activity in microservice architectures.

I spent a few days learning what Envoy was and took part in one of the community meetings. Even though I didn’t follow all the technical discussion, it was a good experience to see how people kept in touch remotely and worked collaboratively on a project. I had talks with Harvey Tuch, who was already working on fuzzing for Envoy. Matt Klein told me that I’d be working with him to extend fuzzing support for Envoy.

My last exam and the community bonding period both fell in the week of May 2nd. With exams over, I was able to spend more time on GSoC. We had video calls to get things started. There was an ongoing issue tracking all the fuzzing support needed for Envoy: https://github.com/envoyproxy/envoy/issues/508

I became part of the Envoy GitHub organization and was assigned the issue to work on. Harvey gave me a quick intro to OSS-Fuzz, the infrastructure Envoy uses for fuzzing, and there was a guide on getting started with writing fuzz tests. I started working on fuzzing utility functions in Envoy, such as the string functions, and then opened a pull request: https://github.com/envoyproxy/envoy/pull/3493. After a series of code reviews, suggestions, and fixes, the pull request got merged, marking my first contribution to Envoy. 😀

For the second evaluation, I picked up the configuration validation fuzzing tasks. The task was supposed to be a small fix to the existing server fuzz test, but it took me some time to figure out. I also started working on the H1 capture fuzzing test. H1 fuzzing now covers both the request and response paths, i.e. fuzzing happens both downstream and upstream; with direct response enabled, fuzzing happens only downstream, and a canned response (such as a file) is served instead of connecting to the upstream.

OSS-Fuzz is a project by Google that helps open source projects become more secure and stable by fuzzing their codebases with fuzzing engines. I got access to Envoy’s OSS-Fuzz dashboard and started working on the issues there. I picked up simple tests that were failing on asserts and fixed them with proper error capture or by adding Bazel build constraints. Some PRs needed changes to other repositories, so I raised issues there as well.

Work done during the summer:

Pull requests:

Issues:

I owe my biggest thanks to:

  • The Envoy open source community (specifically, my kind mentors, for taking the time to clarify doubts and helping me learn).
  • Nikhita and Chris Aniszczyk (CNCF organization admins), and Amit, for introducing me to Envoy.
  • People at my university for the permission and time to pursue GSoC.

If you’re an aspiring GSoCer, go ahead and apply to CNCF; you’ll have an awesome summer. You can get in touch with me for any queries on getting started. You can find my contact details here. I will be happy to help! 🙂

TOC Votes to Move Rook into CNCF Incubator


By Jared Watts

Rook, an open source cloud native storage orchestrator for Kubernetes, was the first storage project accepted into CNCF back in January of this year. Today, roughly 8 months later, we are excited to announce that the TOC has voted to officially move Rook out of the sandbox entry stage and up to the CNCF Incubator, alongside such projects as gRPC, Envoy and Helm.

We want to extend a thank you to the community that is continuing to help develop Rook into what it is today. In that short amount of time, we’ve seen 13x the number of container downloads, doubled the number of GitHub stars, and, most importantly, doubled the number of contributors to Rook.

While we invite you to read more on the Rook blog, we wanted to highlight some of the work the community has done together. We’ve worked on two major releases over the past 8 months, the 0.7 and the 0.8 releases, which included significant features and improvements. Some of the highlights of the releases include:

  • The Rook Framework for storage providers turns Rook into a general-purpose cloud-native storage orchestrator that now supports multiple new storage solutions with reusable specs, logic, policies, and testing.
  • CockroachDB and Minio support and orchestration shipped in v0.8. Support for Network File System (NFS) has been merged to master, and work on Cassandra, Nexenta, and Alluxio is ongoing.
  • Ceph support graduated to Beta maturity, taking a major step towards being declared stable.

There are many different ways to get involved in the Rook project, so please join us in helping the project continue to grow on its way to the final stage as a CNCF hosted project: graduation! You can learn more on the Rook website and get involved in the community on GitHub or Slack.

GSoC 18: Kata Containers support for containerd


The Google Summer of Code (GSoC) 2018 program has come to an end, and we followed up with CNCF’s seven interns, previously featured in this blog post, to check in on how their summer projects progressed.

Over the summer, Jian Liu, a graduate student at Zhejiang University and currently an intern at HarmonyCloud.cn, worked with mentors Harry Zhang, Fupan Li, and Lantao Liu to “Integrate containerd with Kata Containers,” a project aimed at creating a containerd-kata runtime plugin that lets containerd integrate with Kata Containers.

My story started in January 2018, when I noticed that the CNCF community had some container-technology ideas for GSoC 2018. After browsing all the topics, I was very interested in “KataContainers support for containerd/cri-containerd.” I already had some knowledge of Kubernetes, so I spent quite some time deeply studying the documentation and code for containerd and Kata Containers. Once I had a much better understanding of these two projects, I drafted a design proposal and was luckily selected as a GSoC candidate! I believe that studying these open source projects’ source code helped a lot with my design proposal.

The theory was that, in a Kata/CRI-native design, we could avoid using so many independent shim and proxy processes. Investigations had shown shims consuming too much memory, causing huge overhead in high-density cases. So our project aimed to remove the independent shim and proxy processes to save memory and start containers more quickly.

At the beginning of the project, we set out to develop a kata-runtime plugin for containerd. When we had implemented part of the interfaces that containerd’s runtime plugin needed, the basic container operations worked successfully. That moment was engraved in my mind: it meant that I had gone from a container user to a container developer. So excited!

But soon, something unexpected happened.

After I had already finished part of the task, the containerd upstream community put forward a fresh new proposal, “Shim API v2.” The goal was to establish a new standard to make containerd compatible with various runtimes. This sounded very helpful to our ongoing work, and after discussing it with maintainers from Google, Kata Containers, and the containerd community, we decided to make a huge turnaround.

Considering that the remaining time in my GSoC program was short, I began to worry about whether I could finish these new tasks. Fortunately, my mentors connected me with a maintainer of the Kata Containers project, who set up the skeleton code for me, and the two of us cooperated closely on the new design. Maintainers from Google and the containerd community also gave me lots of useful ideas on how to follow the upstream progress. With this timely guidance, by the end of GSoC I had successfully implemented many of the functions that the Kubernetes CRI required and passed 95% of the node e2e conformance tests. More details about my work can be found here.

Eventually, we used containerd + Shim API v2 + kata-runtime to measure the time needed to start a pause container.

This GSoC project was really challenging for me, and a flood of new knowledge poured into my mind. The practical experience gave me a profound understanding of the open source world, and my love for the open source community grew stronger. With a burning passion, I continued to follow the project after GSoC. This past summer was really amazing; developing in this excellent community, I gained so much joy.

I would like to thank my mentors for their constant support and guidance, especially Harry Zhang, Fupan Li, and Lantao Liu, as well as the Google Summer of Code team and the CNCF organization for giving me such a golden opportunity to contribute to the open source community. I am eager to continue developing and contributing to the open source world.

Last but not least, the CNCF community is a good starting point for every student. There are many great projects in it, and there is always one that can fit you.

Announcing The Final Four Keynotes for KubeCon + CloudNativeCon China


KubeCon + CloudNativeCon China, taking place November 13-15 in Shanghai, has added the final four keynotes to its line-up. With the rapidly growing interest in cloud native technologies in Asia, the event provides a platform for the CNCF ecosystem and community to collaborate around an impressive mix of topics, technical sessions, deep-dives, and case studies.

Experts from companies such as Huawei, Google, Katacoda, Alibaba, Microsoft, and more, will present on service mesh technologies, CI/CD, and container security, as well as discuss how their organizations are adopting these cutting-edge cloud native technologies.

New keynotes include:

Brendan Burns, distinguished engineer at Microsoft and co-founder of Kubernetes, will present on Kubernetes’ Serverless Present and Future.

Chao Wang, CTO of X-Turing, and Anni Lai, Head of Global Business Development and VP of Strategy & Business Development at Huawei, will detail their journey in Accelerating Genome Sequencing via Containers and Kubernetes, along with the challenges the team at X-Turing faced during their genome sequencing projects. The talk will cover how they leveraged modern technologies, including containers and Kubernetes, to overcome them.

Wei Zhang, VP of Technology at Goldwind Smart Energy, and Sheng Liang, CEO of Rancher Labs, will discuss strategies for Delivering Renewable Energy with Kubernetes and share how Goldwind, the third largest wind turbine manufacturer in the world, improved operational efficiency and achieved 10x faster software iteration after adopting Kubernetes.

Vicki Cheung, Engineering Manager at Lyft, will provide insight into the company’s experience using Kubernetes as a Foundational Layer of Infrastructure. The keynote will discuss why Kubernetes is the platform of choice for Lyft, the challenges and surprises the team encountered implementing it, which use cases were most natural, and how they are migrating production traffic.

Join us in Shanghai – register now! Additionally, hotel room rate discounts are available here. Book early, as the discounted rate is based on availability.


gRPC On HTTP/2: Engineering A Robust, High Performance Protocol


This guest post was written by Jean de Klerk, Developer Program Engineer, Google

In a previous article, we explored how HTTP/2 dramatically increases network efficiency and enables real-time communication by providing a framework for long-lived connections. In this article, we’ll look at how gRPC builds on HTTP/2’s long-lived connections to create a performant, robust platform for inter-service communication. gRPC is a high-performance, open-source universal RPC framework. We will explore the relationship between gRPC and HTTP/2, how gRPC manages HTTP/2 connections, and how gRPC uses HTTP/2 to keep connections alive, healthy, and utilized.

gRPC Semantics

To begin, let’s dive into how gRPC concepts relate to HTTP/2 concepts. gRPC introduces three new concepts: channels (1), remote procedure calls (RPCs), and messages. The relationship between the three is simple: each channel may have many RPCs, while each RPC may have many messages.

Let’s take a look at how gRPC semantics relate to HTTP/2:

Channels are a key concept in gRPC. Streams in HTTP/2 enable multiple concurrent conversations on a single connection; channels extend this concept by enabling multiple streams over multiple concurrent connections. On the surface, channels provide an easy interface for users to send messages into; under the hood, though, an incredible amount of engineering goes into keeping these connections alive, healthy, and utilized.

Channels represent virtual connections to an endpoint, which in reality may be backed by many HTTP/2 connections. RPCs are associated with a connection (this association is described further on) and are, in practice, plain HTTP/2 streams. Messages are associated with RPCs and get sent as HTTP/2 data frames. To be more specific, messages are layered on top of data frames. A data frame may carry many gRPC messages, or, if a gRPC message is quite large (2), it might span multiple data frames.
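To make this mapping concrete, here is a minimal client-side sketch in Go using grpc-go and the canonical Greeter service from the gRPC quick start (the generated-package import path below is illustrative, not a real module):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/hello/helloworld" // illustrative path to generated Greeter stubs
)

func main() {
	// The channel: in Go, a ClientConn. It is a virtual connection to an
	// endpoint and may be backed by one or more HTTP/2 connections.
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewGreeterClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// The RPC: one HTTP/2 stream on one of the channel's connections.
	// The request and reply are messages, carried in HTTP/2 data frames.
	reply, err := client.SayHello(ctx, &pb.HelloRequest{Name: "gRPC"})
	if err != nil {
		log.Fatalf("rpc: %v", err)
	}
	log.Println(reply.GetMessage())
}
```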

Resolvers and Load Balancers

In order to keep connections alive, healthy, and utilized, gRPC utilizes a number of components, foremost among them name resolvers and load balancers. The resolver turns names into addresses and then hands these addresses to the load balancer. The load balancer is in charge of creating connections from these addresses and load balancing RPCs between connections.

A DNS resolver, for example, might resolve some host name to 13 IP addresses, and then a RoundRobin balancer might create 13 connections – one to each address – and round robin RPCs across each connection. A simpler balancer might simply create a connection to the first address. Alternatively, a user who wants multiple connections but knows that the host name will only resolve to one address might have their balancer create connections against each address 10 times to ensure that multiple connections are used.
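As a sketch of how a grpc-go user might wire these pieces together (the host name is hypothetical, and `WithDefaultServiceConfig` is the mechanism in more recent grpc-go releases for selecting a balancing policy; older releases used other dial options):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// "dns:///" selects gRPC's DNS name resolver; it resolves the host
	// name and hands every returned address to the load balancer. The
	// service config selects the round_robin policy, so the balancer
	// opens one connection per address and spreads RPCs across them.
	conn, err := grpc.Dial(
		"dns:///my-service.example.com:50051", // hypothetical host name
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
		grpc.WithInsecure(), // plaintext for brevity only
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
	// ... create stubs and issue RPCs on conn as usual ...
}
```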

Resolvers and load balancers solve small but crucial problems in a gRPC system. This design is intentional: reducing the problem space to a few small, discrete problems helps users build custom components. These components can be used to fine-tune gRPC to fit each system’s individual needs.

Connection Management

Once configured, gRPC will keep the pool of connections – as defined by the resolver and balancer – healthy, alive, and utilized.

When a connection fails, the load balancer will begin to reconnect using the last known list of addresses (3). Meanwhile, the resolver will begin attempting to re-resolve the list of addresses. This is useful in a number of scenarios. If the proxy is no longer reachable, for example, we’d want the resolver to update the list of addresses to not include that proxy’s address. To take another example: DNS entries might change over time, and so the list of addresses might need to be periodically updated. In this manner and others, gRPC is designed for long-term resiliency.

Once resolution is finished, the load balancer is informed of the new addresses. If addresses have changed, the load balancer may spin down connections to addresses not present in the new list or create connections to addresses that weren’t previously there.

Identifying Failed Connections

The effectiveness of gRPC’s connection management hinges on its ability to identify failed connections. There are generally two types of connection failures: clean failures, in which the failure is communicated, and less-clean failures, in which it is not.

Let’s consider a clean, easy-to-observe failure. Clean failures can occur when an endpoint intentionally kills the connection. For example, the endpoint may have gracefully shut down, or a timer may have been exceeded, prompting the endpoint to close the connection. When connections close cleanly, TCP semantics suffice: closing a connection causes the FIN handshake to occur. This ends the HTTP/2 connection, which ends the gRPC connection. gRPC will immediately begin reconnecting (as described above). This is quite clean and requires no additional HTTP/2 or gRPC semantics.

The less clean version is where the endpoint dies or hangs without informing the client. In this case, TCP might retry for as long as 10 minutes before the connection is considered failed. Of course, failing to recognize for 10 minutes that the connection is dead is unacceptable. gRPC solves this problem using HTTP/2 semantics: when configured with KeepAlive, gRPC periodically sends HTTP/2 PING frames. These frames bypass flow control and are used to establish whether the connection is alive. If a PING response does not return in a timely fashion, gRPC will consider the connection failed, close it, and begin reconnecting (as described above).
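As a concrete sketch, client-side KeepAlive in grpc-go is configured with a dial option; the interval and timeout values below are illustrative, not recommendations:

```go
package main

import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	// With these options the client sends an HTTP/2 PING after 30 seconds
	// of inactivity and declares the connection dead if no ack arrives
	// within 10 seconds, triggering the reconnect logic described above.
	// Keep Time above the server's enforcement minimum to avoid being
	// treated as an abusive client.
	conn, err := grpc.Dial(
		"localhost:50051",
		grpc.WithInsecure(), // plaintext for brevity only
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                30 * time.Second, // ping after this much idle time
			Timeout:             10 * time.Second, // wait this long for a ping ack
			PermitWithoutStream: true,             // ping even with no active RPCs
		}),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()
}
```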

In this way, gRPC keeps a pool of connections healthy and uses HTTP/2 to ascertain the health of connections periodically. All of this behavior is opaque to the user, and message redirecting happens automatically and on the fly. Users simply send messages on a seemingly always-healthy pool of connections.

Keeping Connections Alive

As mentioned above, KeepAlive provides a valuable benefit: periodically checking the health of the connection by sending an HTTP/2 PING to determine whether it is still alive. However, it has another equally useful benefit: signaling liveness to proxies.

Consider a client sending data to a server through a proxy. The client and server may be happy to keep a connection alive indefinitely, sending data as necessary. Proxies, on the other hand, are often quite resource constrained and may kill idle connections to save resources. Google Cloud Platform (GCP) load balancers disconnect apparently-idle connections after 10 minutes, and Amazon Web Services Elastic Load Balancers (AWS ELBs) disconnect them after 60 seconds.

With gRPC periodically sending HTTP/2 PING frames on its connections, they are perceived as non-idle, so endpoints applying the aforementioned idle rules refrain from killing them.

A Robust, High Performance Protocol

HTTP/2 provides a foundation for long-lived, real-time communication streams. gRPC builds on top of this foundation with connection pooling, health semantics, efficient use of data frames and multiplexing, and KeepAlive.

Developers choosing protocols must choose those that meet today’s demands as well as tomorrow’s. They are well served by choosing gRPC, whether for resiliency, performance, long-lived or short-lived communication, customizability, or simply knowing that their protocol will scale to extraordinarily massive traffic while remaining efficient throughout. To get going with gRPC and HTTP/2 right away, check out gRPC’s Getting Started guides.

_____________________________

(1) In Go, a gRPC channel is called ClientConn because the word “channel” has a language-specific meaning.

(2) gRPC uses the HTTP/2 default maximum data frame size of 16 KB. A message over 16 KB will span multiple data frames, whereas a message below that size will share a data frame with some number of other messages.

(3) This is the behavior of the RoundRobin balancer, but not every load balancer does or must behave this way.

Becoming Director of Ecosystem at CNCF


After twenty months, I’m moving on from StorageOS. I am deeply grateful to my manager Alex for being an incredible role model, and to my reports Ferran, Frank and the rest of the team for being brilliant, patient, understanding and kind. (Although I’ll definitely live on in Jira tickets for some time yet…!)

I’m delighted to announce my next role as the Director of Ecosystem at the Cloud Native Computing Foundation, reporting to Dan Kohn. I’ve collaborated with the Foundation for over a year through the Cloud Native London meetup, so I’m thrilled to officially join CNCF and The Linux Foundation. The commitment to technical excellence, community outreach and inclusion makes it a perfect home for me.

As Director of Ecosystem, my focus will be on making our cloud native end users happy and successful. I will build on my own experience as a Google software engineer and Borg end user, along with my community engagement and public speaking. I also look forward to improving my Mandarin and working with our Chinese members!

In terms of timing, I am getting married in two days (it’s good to remember there are things worth doing outside of tech!), honeymooning in the Caribbean, and taking a few weeks to do life admin. I will officially join the CNCF staff on October 1, and you’ll see me speaking at conferences and running the Cloud Native London meetup. Check Twitter or my blog, oicheryl.com, for my latest movements, and I hope to meet you very soon!

How to rapidly develop apps with microservices


Originally published by Rick Osowski on the IBM Cloud Blog, here.

As of late last year, a majority of the world’s population now primarily accesses the internet through a mobile device.

The business implications of the trend are often brought into focus with cautionary tales about Uber and Airbnb successfully disrupting their respective markets. Incumbents in all markets have been put on notice that their customers will soon be offered increasingly innovative user experiences, shifting expectations. Rapidly innovating the relationship with customers through mobile applications is now a platitude in business planning.

Mobile first is not enough: Your customers may tolerate a great mobile-friendly version of your existing web site for a while, but you have to ask how long you can keep them waiting for features that augment their mundane lives in context. If implementing your ideas takes months—as is often the case with a monolithic application that requires coordinated work among many different teams—your innovation could easily become a me-too offering. A more nimble competitor will always be seeking to grab the baton.

It’s uncomfortably obvious that development teams need to accelerate how they deliver new benefits to users. Since nobody can fully predict user behavior, even the fastest, most successful DevOps program has to be ready to fail in the field of actual user experience. Quickly redesigning, replacing, and augmenting parts of the user experience is a top priority, based on analysis of clear usage data. That’s why a microservices model of developing cloud-based applications is so powerful. It allows a different small team to own the entire cycle (concept, development, deployment, monitoring) for each component of an application, providing the flexibility necessary to precisely iterate an underperforming part of the user experience as reflected by data gathered in monitoring what users themselves are doing. The DevOps process becomes a dynamic interaction–almost a conversation–with users in the field.

Starting from where you are

Failing fast and iterating quickly: these are DevOps requirements for competitive app delivery in the mobile services era. They imply application architectures that decouple services from each other in a continuous development and delivery cycle while ensuring well-performing interactions with users.

While a startup has an advantage in building greenfield cloud-native applications—using a microservices approach along with DevOps tools and practices—incumbent companies often must begin by refactoring an existing monolith.

Let’s look at a specific example.

In this case, an online retailer wanted to transform a monolith into microservices in order to learn more about customers, and to quickly update and introduce new features.

Since browsing the online catalog presented pressing business problems to solve, the transformation of the overall app began there:

Pilot Task: Determine and implement better handling of the catalog

The current app failed to help customers easily find product data and blocked the business from exposing data to other sites.

As a proof of concept for the microservices approach, the team built a single microservice for the business catalog using the following steps:

  • Establish a new continuous integration/continuous delivery model to do the work.
  • Import the data into Elasticsearch to gain new ways to search it and to identify new data (see the sketch after this list).
  • Link the existing website to the new search.
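As a rough illustration of the import step, here is a minimal Go sketch that indexes a single catalog item, assuming the search backend is Elasticsearch and using its official go-elasticsearch client; the index name, document fields, and ID are invented for the example:

```go
package main

import (
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v8"
)

func main() {
	// Connects to a local Elasticsearch node using default settings;
	// a real deployment would configure addresses and credentials.
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatalf("client: %v", err)
	}

	// One catalog item as a JSON document (fields are illustrative).
	doc := `{"sku": "A-1001", "name": "Trail Running Shoe", "category": "footwear", "price": 89.99}`

	// Index the item; repeated for each product, this builds the search
	// index that the new catalog microservice queries.
	res, err := es.Index("catalog", strings.NewReader(doc),
		es.Index.WithDocumentID("A-1001"))
	if err != nil {
		log.Fatalf("index: %v", err)
	}
	defer res.Body.Close()
	log.Println(res.Status())
}
```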

At this point, the catalog was still integrated with the existing ordering components, which run core business logic and were too complex to break up without additional work. However, with a successful pilot, the team was convinced of the value of microservices and decided to expand the scope of the app transformation.

Task 2: Learn more about the customer

To learn more about the customer, the team created an account microservice, figuring out how to shift the business’s focus from inventory to customers.

When they determined that the customer experience could be enriched over time based on analytics, marketing, and cognitive data, the choice of an unstructured database became obvious. So they designed a new customer model and used a NoSQL database (such as MongoDB or Cloudant) to manage the unstructured data.
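For illustration, here is a minimal sketch of what storing such a schemaless customer profile might look like with MongoDB’s official Go driver; the connection string, database, and field names are invented for the example:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Disconnect(ctx)

	customers := client.Database("shop").Collection("customers")

	// Because the document is schemaless, analytics, marketing, and
	// cognitive attributes can be added over time without migrations.
	_, err = customers.InsertOne(ctx, bson.M{
		"customerId": "c-42",
		"name":       "Ada Lovelace",
		"segments":   []string{"early-adopter"},
		"analytics":  bson.M{"lastVisit": time.Now(), "clickDepth": 7},
	})
	if err != nil {
		log.Fatalf("insert: %v", err)
	}
	log.Println("customer profile stored")
}
```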

Task 3: Innovate the user experience

The team built a new native mobile app and created a new front end for mobile and web access. Even though the catalog depends on the legacy ordering code, the overall user experience was noticeably enhanced.

Task 4: Update access to the order microservice

The team created new order APIs for mobile and integrated them into existing transactions. The business decided to create an adapter microservice that called into the existing ordering system of record on premises. They also used the adapter to integrate with new payment methods and systems.

Task Next: Create a new auction feature

Within the newly flexible architecture, the team has planned to add this innovation in upcoming sprints.

Asking the right questions

As you think about the example, consider these questions:

  • What do your customers want–now and next?
  • Are users of mobile devices satisfied with the experiences your apps are providing?
  • In terms of delivering what users want, how are the DevOps processes and practices of your IT organization helping and hindering?
  • Considering what’s hindering your DevOps, and assuming you have an existing app monolith that you need to modernize, do you know exactly what you need to do first?
  • What experiments with cloud platforms should the individual members of your application development team be conducting?
  • What is a good pilot project to use in evaluating a microservices approach and cloud platforms for implementing it?

Announcing EnvoyCon! CFP due August 24


Originally published by Richard Li on the Envoy blog, here.

We are thrilled to announce that the first ever EnvoyCon will be held on December 10 in Seattle, WA as part of the KubeCon + CloudNativeCon Community Events Day. The community growth since we first open sourced Envoy in September 2016 has been staggering, and we feel that it’s time for a dedicated conference that can bring together Envoy end users and system integrators. I hope you are as excited as we are about this!

The Call For Papers is now open, with submissions due by Friday, August 24.

Talk proposals can be for either 30-minute talks or 10-minute lightning talks, with experience levels from beginner to expert. The following categories will be considered:

  • Using and Integrating with Envoy (both within modern “cloud native” stacks such as Kubernetes but also in “legacy” on-prem and IaaS infrastructures)
  • Envoy in production case studies
  • Envoy fundamentals and philosophy
  • Envoy internals and core development
  • Security (deployment best practices, extensions, integration with authn/authz)
  • Performance (measurement, optimization, scalability)
  • APIs, discovery services and configuration pipelines
  • Monitoring in practice (logging, tracing, stats)
  • Novel Envoy extensions
  • Porting Envoy to new platforms
  • Migrating to Envoy from other proxy & LB technologies
  • Using machine learning on Envoy observability output to aid in outlier detection and remediation
  • Load balancing (both distributed and global) and health checking strategies

Reminder: This is a community conference — proposals should emphasize real world usage and technology rather than blatant product pitches.

If you’re using or working on Envoy, please submit a talk! If you’re interested in attending, you can register as part of the KubeCon registration process.

Diversity Scholarship Series: My Experiences at KubeCon EU 2018


CNCF offers diversity scholarships to developers and students to attend KubeCon + CloudNativeCon events. In this post, scholarship recipient Yang Li, Software Engineer at TUTUCLOUD, shares his experience attending sessions and meeting the community. Anyone interested in applying for the CNCF diversity scholarship to attend KubeCon + CloudNativeCon North America 2018 in Seattle, WA, December 11-13, can submit an application here. Applications are due October 5th.

This guest post was written by Yang Li, Software Engineer 

Thanks to the Diversity Scholarship sponsored by CNCF, I attended KubeCon + CloudNativeCon Europe 2018 in Copenhagen, May 2-4.

The Conference

When I was in college, I wrote software in Python on Ubuntu and read The Cathedral and the Bazaar by Eric S. Raymond. These were my first memories of open source.

Later on, I worked with a variety of open source projects and attended many tech conferences. But none of them were like KubeCon, which gave me the opportunity to take part in the open source community in real life.

Not only was I able to enjoy great speeches and sessions at the event, but I also met and communicated with many different open source developers. I made many new friends and amazing memories during the four days in Copenhagen.

In case anyone missed it, here are the videos, presentations, and photos of the whole conference.

Although I haven’t been to many cities around the world, I can safely say that Copenhagen is one of my favorites.

The Community

“Diversity is essential to happiness.”

This quote by Bertrand Russell is one of my firm beliefs. Even as a male software engineer and a Han Chinese in China, I always try to speak for the minority groups which are still facing discrimination. But to be honest, I haven’t found much meaning in the word diversity for myself.

However, soon after being at KubeCon, I understood that I’m one of the minorities in the world of open source. More importantly, with people from all over the world, I learned how inclusiveness made this community a better place. Both the talk by Dirk Hohndel and the Diversity Luncheon at KubeCon were very inspirational.

Final Thoughts

I started working with Kubernetes back in early 2017, but I made only a few contributions in the past year. Not until recently did I become active in the community and join multiple SIGs. Thanks to the conference, I have a much better understanding of the culture of the Kubernetes community. I think this is open source culture at its best:

  • Distribution is better than centralization
  • Community over product or company
  • Automation over process
  • Inclusive is better than exclusive
  • Evolution is better than stagnation

It is this culture that makes the community an outstanding place, one that deserves our perseverance.