Prometheus User Profile: Dynamically Helping Weaveworks Accelerate Cloud Native Application Development


Sometimes two things go so well together you wonder how you ever saw them separately, like peanut butter and chocolate coming together to make Reese’s cups. The combination of Kubernetes and Prometheus invokes the same feeling for Weaveworks, the makers of Weave networking, monitoring and management for containers and microservices.

Tom Wilkie, Director of Engineering at Weaveworks, recently dove into some of the practices the company has developed with these two perfectly paired cloud native technologies over the last six months in his blog post Prometheus and Kubernetes: A Perfect Match, and he recently presented how Weaveworks uses Prometheus to monitor Weave Cloud, which runs on a Kubernetes cluster in AWS.

Digging deeper into Weaveworks’ use of Prometheus, Brian Brazil sat down with Tom to discuss how Weaveworks discovered Prometheus, what improvements they have made to the platform, what is next for it and the moment they knew Prometheus and Kubernetes were a “Perfect Match.” Read on to learn more!

*interview originally posted at

Interview with Weaveworks

Posted at: February 20, 2017 by Brian Brazil

Continuing our series of interviews with users of Prometheus, Tom Wilkie from Weaveworks talks about how they chose Prometheus and are now building on it.

Can you tell us about Weaveworks?

Weaveworks offers Weave Cloud, a service which “operationalizes” microservices through a combination of open source projects and software as a service.

Weave Cloud consists of:

You can try Weave Cloud free for 60 days. For the latest on our products check out our blog, Twitter, or Slack (invite).

What was your pre-Prometheus monitoring experience?

Weave Cloud was a clean-slate implementation, and as such there was no previous monitoring system. In previous lives the team had used the typical tools such as Munin and Nagios. Weave Cloud started life as a multitenant, hosted version of Scope. Scope includes basic monitoring for things like CPU and memory usage, so I guess you could say we used that. But we needed something to monitor Scope itself…

Why did you decide to look at Prometheus?

We’ve got a bunch of ex-Google SREs on staff, so there was plenty of experience with Borgmon, and an ex-SoundClouder with experience of Prometheus. We built the service on Kubernetes and were looking for something that would “fit” with its dynamically scheduled nature, so Prometheus was a no-brainer. We’ve even written a series of blog posts, the first of which covers why Prometheus and Kubernetes work together so well.

How did you transition?

When we started with Prometheus the Kubernetes service discovery was still just a PR and as such there were few docs. We ran a custom build for a while and kinda just muddled along, working it out for ourselves. Eventually we gave a talk at the London Prometheus meetup on our experience and published a series of blog posts.

We’ve tried pretty much every different option for running Prometheus. We started off building our own container images with embedded config, running them all together in a single Pod alongside Grafana and Alert Manager. We used ephemeral, in-Pod storage for time series data. We then broke this up into different Pods so we didn’t have to restart Prometheus (and lose history) whenever we changed our dashboards. More recently we’ve moved to using upstream images and storing the config in a Kubernetes config map – which gets updated by our CI system whenever we change it. We use a small sidecar container in the Prometheus Pod to watch the config file and ping Prometheus when it changes. This means we don’t have to restart Prometheus very often, can get away without doing anything fancy for storage, and don’t lose history.
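The sidecar pattern Tom describes can be sketched in a few lines. This is a minimal illustration, not Weaveworks’ actual code: it assumes the ConfigMap is mounted at /etc/prometheus/prometheus.yml and that Prometheus’s HTTP reload endpoint (/-/reload) is available (newer Prometheus versions require the --web.enable-lifecycle flag for it).

```python
import os
import time
import urllib.request

CONFIG_PATH = "/etc/prometheus/prometheus.yml"  # assumed ConfigMap mount path
RELOAD_URL = "http://localhost:9090/-/reload"   # Prometheus reload endpoint

def config_changed(path, last_mtime):
    """Return (changed, new_mtime) by comparing the file's modification time."""
    mtime = os.stat(path).st_mtime
    return mtime != last_mtime, mtime

def trigger_reload(url=RELOAD_URL):
    """Ask Prometheus to re-read its config file without a restart."""
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def watch(path=CONFIG_PATH, interval=10):
    """Poll the mounted config and ping Prometheus whenever it changes."""
    _, last = config_changed(path, None)
    while True:
        time.sleep(interval)
        changed, last = config_changed(path, last)
        if changed:
            trigger_reload()
```

Because the reload is a no-op restart-wise, the time series data and history in the Pod survive config changes, which is the whole point of the setup described above.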

Still the problem of periodically losing Prometheus history haunted us, and the available solutions such as Kubernetes volumes or periodic S3 backups all had their downsides. Along with our fantastic experience using Prometheus to monitor the Scope service, this motivated us to build a cloud-native, distributed version of Prometheus – one which could be upgraded, shuffled around and survive host failures without losing history. And that’s how Weave Cortex was born.

What improvements have you seen since switching?

Ignoring Cortex for a second, we were particularly excited to see the introduction of the HA Alertmanager, although mainly because it was one of the first non-Weaveworks projects to use Weave Mesh, our gossip and coordination layer.

I was also particularly keen on the version two Kubernetes service discovery changes by Fabian – this solved an acute problem we were having with monitoring our Consul Pods, where we needed to scrape multiple ports on the same Pod.

And I’d be remiss if I didn’t mention the remote write feature (something I worked on myself). With this, Prometheus forms a key component of Weave Cortex itself, scraping targets and sending samples to us.

What do you think the future holds for Weaveworks and Prometheus?

For me the immediate future is Weave Cortex, Weaveworks’ Prometheus as a Service. We use it extensively internally, and are starting to achieve pretty good query performance out of it. It’s running in production with real users right now, and shortly we’ll be introducing support for alerting and achieving feature parity with upstream Prometheus. From there we’ll enter a beta programme of stabilization before general availability in the middle of the year.

As part of Cortex, we’ve developed an intelligent Prometheus expression browser, with autocompletion for PromQL and Jupyter-esque notebooks. We’re looking forward to getting this in front of more people and eventually open sourcing it.
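The expression browser hasn’t been open sourced yet, so its internals are unknown, but the core of any such completer, matching a typed prefix against a sorted vocabulary of PromQL function names, can be sketched in a few lines (the vocabulary below is a small sample, not the full PromQL function set):

```python
import bisect

# A handful of real PromQL functions and aggregations to complete against.
VOCAB = sorted([
    "abs", "avg_over_time", "ceil", "count_over_time", "delta",
    "histogram_quantile", "increase", "irate", "rate", "sum", "topk",
])

def complete(prefix, vocab=VOCAB):
    """Return all vocabulary entries starting with `prefix`, via binary search."""
    lo = bisect.bisect_left(vocab, prefix)
    hi = bisect.bisect_right(vocab, prefix + "\uffff")
    return vocab[lo:hi]
```

For example, `complete("i")` returns `["increase", "irate"]`; a real completer would also rank by context and known metric names.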

I’ve also got a little side project called Loki, which brings Prometheus service discovery and scraping to OpenTracing, and makes distributed tracing easy and robust. I’ll be giving a talk about this at KubeCon/CNCFCon Berlin at the end of March.

Getting to Know Todd Moore, CNCF’s New Governing Board Chair


1)    What does the CNCF Governing Board do and what is your role as chair?  

The CNCF is the result of a shared vision by many of us in the cloud community. It was created to develop a common path and understanding for how the next generation of cloud native applications should be built, and to develop the necessary infrastructure. We realized that collaborating on the plumbing would be the fastest path to building a level playing field and a vibrant ecosystem. We seek out and cooperate with related open ecosystem communities to support projects and advance the state of the art. Many projects are tailored to fit the era of cloud computing, and solve a range of needs from private clouds to hyper-scale capabilities. The three key attributes of the CNCF approach are: 1) container packaged, 2) dynamically managed, and 3) microservices oriented. The role of the CNCF Governing Board is to provide overall stewardship of the many projects that comprise the initiative, foster the growth and evolution of the ecosystem, and ensure the initiative serves to benefit the community by maintaining an open, level playing field for everyone.

Check out the wide list of companies who have joined, and you will quickly see that the CNCF has been able to draw an impressive list of entities big and small. As Governing Board Chair, one of my important responsibilities is to ensure the fair and efficient operation of the Board, so that all members, regardless of their size or scope, have the opportunity to collaborate effectively and gain value from their involvement.

2)    Why did you want to become chair?

Having worked closely with former Governing Board Chair Craig McLuckie in the formation of the CNCF, I had an insider perspective on our membership’s needs and a sense of responsibility to the organization. I have 17 years of experience working with open source communities and have worked to form, grow and support them over many years. As a board member for the Node.js Foundation and OpenStack Foundation, I have been an active member of these organizations and have a record of bringing folks together to work for the common good. Recently, I handed over my board seat with the OpenStack Foundation so I could focus my time on CNCF. I am excited for the chance to serve the CNCF community and the Governing Board as we grow the CNCF to be the premier organization charting the course for cloud native development.

3)    What is your vision for CNCF in 2017?

Simply put: diversity in projects, continued growth of memberships, and a focus on the user community. The Technical Oversight Committee (TOC) is evaluating a number of exciting technologies for inclusion. We now have an incubation process with set Graduation Criteria, and the user base is growing and seeking to comment on the technologies we shepherd. We need to engage even more participants to build out a complete ecosystem, with change and growth guided by our users.

4)    Are you involved in any other Linux Foundation projects or open community Foundations? If so, which ones and why?

As I stated above, I am a board member of the Node.js Foundation. I helped form this organization and played a role in healing the split in the community. I work in the Node.js community because I believe in the technology and am inspired by its phenomenal growth. We have created a blend of stability and experimental edge that keeps the community moving, and it is just plain fun to watch it grow.

There is a long list of projects that I have helped bring to the Linux Foundation, including Hyperledger, Cloud Foundry, ODPi, JanusGraph, Node.js, and the JS Foundation, to name a few. I am sure there are others to come. The LF has shown itself to be a supportive, well-run home for projects with an excellent staff. That is what keeps me coming back.

5)    You are the Vice President of Open Technology at IBM. Please tell us more about IBM’s focus on open communities and open source.

For many years, IBM has been recognized as a leader in open technology, from its early work with Linux, Apache and Eclipse to its current work across all layers of the cloud and application development. You can find old IBM VNET ids used in some of the earliest open source projects.

IBM has a long history of working in open technology. In fact, in many ways, IBM is largely responsible for the open source movement’s success. IBM’s strong early support for Linux brought about a change in posture toward open source for many enterprises.  The commitment of funds and IP showed others that they should take open source seriously.  

We have worked hard over the years to establish a solid and respected reputation in open source circles, and especially in those communities where we invest strategically. One thing IBM has learned through all of this is that those communities that strive for inclusiveness and open governance tend to attract the largest ecosystems and create the most expansive markets.

IBM knows that a “rising tide floats all boats”. It isn’t enough that IBM succeeds—we need to ensure that many can succeed to ensure a vibrant ecosystem. This reduces the risk that comes with embracing open source for ourselves and, more importantly, our users.

Today, IBM’s work in open technology continues in the cloud. We still work with Apache, Eclipse and Linux on multiple projects, and IBM is recognized as a leader in the OpenStack, Cloud Foundry, Node.js, Docker, Hyperledger and OpenWhisk communities, among many others.  With 62,000 engineers certified to participate in open source, we are all in on open source.

6)    CloudNativeCon + KubeCon North America 2017 is coming to Austin, TX in December. As an Austin resident, what are your recommendations for restaurants while attendees are in town?

When you come to Austin for CloudNativeCon + KubeCon North America, two things you can’t miss are tacos and brisket! A couple of my favorites are Torchy’s and Taco Deli. For brisket, Franklin BBQ is the best, but the wait can be more than a couple of hours, so other good options are Coopers on Congress Ave and Lambert’s on 2nd Street. For Mexican food, La Condesa, also in the 2nd Street district, and Guero’s on South Congress (SoCo) are really good. Austin also has some of the best ramen in the country; Ramen Tatsu-Ya on South Lamar, combined with drinks next door at BackBeat, is a great combo. Some of my other favorites are Fixe on West 5th Street for some very tasty fried chicken, and if you’re in the mood for Indian food, G’raj Mahal on Rainey Street is a great option to combine with an evening of drinks and music in that neighborhood.

Vegan-friendly choices include True Food Kitchen, Counter Culture, Blue Dahlia, G’raj Mahal and Bouldin Creek Cafe. For breakfast, Magnolia Cafe (you must get the pancakes), Counter Cafe and 24 Diner are great options.

CNCF Purchases RethinkDB Source Code and Contributes It to The Linux Foundation Under the Apache License


CNCF has purchased the source code to the RethinkDB database, relicensed the code under the Apache License, Version 2.0 (ASLv2) and contributed it to The Linux Foundation.

RethinkDB™ is an open source, NoSQL, distributed document-oriented database that is in production use today by hundreds of technology startups, consulting firms and Fortune 500 companies, including NASA, GM, Jive, Platzi, the U.S. Department of Defense, Distractify and Matters Media. Some of Silicon Valley’s top firms invested $12.2 million over more than eight years in the RethinkDB company to build a state-of-the-art database system, but were unsuccessful in creating a sustainable business, and it shut down in October 2016.

The project was licensed under the GNU Affero General Public License, Version 3 (AGPLv3), a strong copyleft license, which limited the willingness of some companies to use and contribute to the software. CNCF paid $25,000 to purchase the RethinkDB copyright and assets and has re-licensed the software under the ASLv2, one of the most popular permissive software licenses, which enables anyone to use the software for any purpose without complicated requirements. (See related blog post “Why CNCF Recommends ASLv2”.)

“CNCF saw the opportunity to salvage an enormous investment with a small incremental contribution,” said CNCF Executive Director Dan Kohn. “RethinkDB created millions of dollars worth of value and is used by a wide range of projects, companies and startups. Now that the software is available under the Apache license, the RethinkDB community has the opportunity to define the future path for themselves.” 

Praised for its ease-of-use, rich data model and ability to support extremely flexible querying capabilities, RethinkDB’s real-time push architecture is well-suited for collaborative web and mobile apps, streaming analytics apps, multiplayer games, realtime marketplaces and connected devices and services.

“RethinkDB was built by a team of database experts with the help of hundreds of contributors from around the world. With its current feature set, we believe RethinkDB continues to offer distinct advantages over other data stores in its class,” said RethinkDB community member Chris Abrams. “CNCF’s relicensing and our new home within the Linux Foundation allows the RethinkDB community to unite together to push the project forward.”

RethinkDB supports cloud-native clustering out of the box. Additional benefits include its elegant functional query language, change feeds for dynamic queries and user interface simplicity. (See related blog post “The Liberation of RethinkDB”.)

“I love RethinkDB and use it every day in production,” said Bryan Cantrill, Joyent CTO and a member of the CNCF Technical Oversight Committee. “I’m thrilled that the CNCF is liberating the RethinkDB community with an open source license that is amenable to broad adoption. I hope RethinkDB will consider applying to become a CNCF Inception project in the future, and I look forward to advocating for them in my capacity as a member of the TOC.”

The RethinkDB software is available to download, and work has been underway on the 2.4 release, which will be available shortly.


Contact for any questions and media inquiries: Sarah Conway, 978-578-5300,

RethinkDB is a trademark of The Linux Foundation.

Why CNCF Recommends ASLv2


By Dan Kohn, @dankohn1, CNCF Executive Director

February 1, 2017

The Cloud Native Computing Foundation (CNCF) believes that the best software license for open source projects today is the Apache Software License v2 (ASLv2). Our goal is to enable the greatest possible adoption of our projects by developers and users. Our larger goal with CNCF (and with the Linux Foundation, of which we are part) is to create an intellectual property “no-fly zone”, where contributors and users can come together from any company or from no company, collaborate, and build things together better than any of them could do on their own.

We think that permissive software licenses foster the best ecosystem of commercial and noncommercial uses by enabling the widest possible use cases. A report this month from Redmonk shows the increasing popularity of these permissive licenses. Proponents of copyleft licenses have argued that these licenses prevent companies from exploiting open source projects by building proprietary products on top of them. Instead, we have found that successful projects can help companies’ products be successful, and that the resulting profits can be fed back into those projects by having the companies employ many of the key developers, creating a positive feedback loop.

In addition, ASLv2 provides protection against a company intentionally or unintentionally contributing code that might read on their patents, by including a patent license. We believe that this patent protection removes another possible barrier to adoption and collaboration. Our view is that having all CNCF projects under the same license makes it easier for companies to be comfortable using and contributing, as their developers (and those developers’ attorneys) do not need to review a lot of licenses.

Of course, many CNCF projects also rely on libraries released under other open source licenses. For example, Linux underlies the entire cloud native platform and git is the software development technology of choice for all of our projects. Both are licensed under GPLv2 (and both were originally authored by Linux Foundation Fellow Linus Torvalds). The CNCF projects themselves are currently mainly written using the open source programming languages Go (BSD-3), Ruby (BSD-2) and Scala (BSD-3).

Let’s now look at the CNCF policy for projects. For an ASLv2-licensed project to be accepted into the CNCF, it requires a supermajority vote of our Technical Oversight Committee (TOC). For a project under any other license, it would require both a supermajority TOC vote and a majority vote by our Governing Board. While this may occur in the future, our strong preference is to work with prospective projects to relicense under the ASLv2. Let’s look at two example projects to see how this can work.

We’re currently having conversations with gRPC, which is licensed under the BSD-3 license plus a patent grant. When combined with the patent license in Google’s Contributor License Agreement (CLA), this combination of BSD-3 + patent grant + CLA is quite similar to ASLv2, in that it combines a permissive copyright license with patent protections. However, ASLv2 is a better-known and more familiar license, and so it accomplishes similar goals while likely requiring less legal review from new potential gRPC users and contributors.

Separately, we’ve also been talking with GitLab, which uses the same MIT license as its underlying framework, Ruby on Rails. Although it’s natural to go with the same license as Rails and Ruby, we are working with the GitLab team to investigate whether it would be feasible to relicense some or all of their codebase to ASLv2. The main advantage of doing so would be the additional patent protections, so that companies could be confident in their ability to contribute to and use the software without later being accused of violating the patents of other contributors.

In closing, we’d like to acknowledge the debt of gratitude we owe for the work done by the authors of these licenses, especially the Apache Software Foundation, and of course all the developers who write the software that makes these licenses useful.

Linkerd Project Joins the Cloud Native Computing Foundation


Today, the Cloud Native Computing Foundation’s (CNCF) Technical Oversight Committee (TOC) voted to accept Linkerd as the fifth hosted project alongside Kubernetes, Prometheus, OpenTracing and Fluentd. You can find more information on the project on their GitHub page.

Linkerd is an open source, resilient service mesh for cloud-native applications. Created by Buoyant founders William Morgan and Oliver Gould in 2015, Linkerd builds on Finagle, the scalable microservice library that powers companies like Twitter, SoundCloud, Pinterest and ING. Linkerd brings scalable, production-tested reliability to cloud-native applications in the form of a service mesh, a dedicated infrastructure layer for service communication that adds resilience, visibility and control to applications without requiring complex application integration.

“As companies continue the move to cloud native deployment models, they are grappling with a new set of challenges running large scale production environments with complex service interactions,” said Fintan Ryan, Industry Analyst at Redmonk. “The service mesh concept in Linkerd provided a consistent abstraction layer for these challenges, allowing developers to deliver on the promise of microservices and cloud native applications at scale. In bringing linkerd under the auspices of CNCF, Buoyant are providing an important building block for the wider cloud native community to use with confidence.”

Enabling Resilient and Responsive Microservice Architectures

Linkerd enables a consistent, uniform layer of visibility and control across services and adds features critical for reliability at scale, including latency-aware load balancing, connection pooling, automatic retries and circuit breaking. As a service mesh, Linkerd also provides transparent TLS encryption, distributed tracing and request-level routing. These features combine to make applications scalable, performant, and resilient. Linkerd integrates directly with orchestrated environments such as Kubernetes (example) and DC/OS (demo), and supports a variety of service discovery systems such as ZooKeeper, Consul, and etcd. It recently added HTTP/2 and gRPC support and can provide metrics in Prometheus format.
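Linkerd’s actual balancer lives in Finagle and is written in Scala; purely as an illustration of the latency-aware idea, here is a toy “power of two choices” picker that scores each replica by its smoothed latency and in-flight request count. All names, constants and the scoring formula are made up for the sketch and are not Linkerd’s implementation:

```python
import random

class Endpoint:
    """A backend replica tracking an exponentially weighted moving average of latency."""
    def __init__(self, name, alpha=0.3):
        self.name = name
        self.alpha = alpha
        self.ewma_ms = 0.0   # smoothed observed latency
        self.inflight = 0    # requests currently outstanding

    def observe(self, latency_ms):
        # Update the moving average after each completed request.
        self.ewma_ms = self.alpha * latency_ms + (1 - self.alpha) * self.ewma_ms

    def load(self):
        # Combine smoothed latency with current concurrency so that
        # slow or busy replicas score worse.
        return (self.ewma_ms + 1.0) * (self.inflight + 1)

def pick(endpoints):
    """Power-of-two-choices: sample two replicas, route to the less loaded one."""
    a, b = random.sample(endpoints, 2)
    return a if a.load() <= b.load() else b
```

Sampling only two replicas per request keeps the balancer cheap while still steering traffic away from degraded instances, which is the intuition behind latency-aware load balancing in a mesh.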

[Diagram: an individual Linkerd instance]

“The service mesh is becoming a critical part of building scalable, reliable cloud native applications,” said William Morgan, CEO of Buoyant and co-creator of Linkerd. “Our experience at Twitter showed that, in the face of unpredictable traffic, unreliable hardware, and a rapid pace of production iteration, uptime and site reliability for large microservice applications is a function of how the services that comprise that application communicate. Linkerd allows operators to manage that communication at scale, improving application reliability without tying it to a particular set of libraries or implementations.”

Companies around the world use Linkerd in production to power their software infrastructure, including Monzo, Zooz, Foresee, Olark, Houghton Mifflin Harcourt, the National Center for Biotechnology Information, Douban and more, and it’s featured as a default part of cloud-native distributions such as Apprenda’s Kismatic Enterprise Toolkit and StackPointCloud.

Notable Milestones:

  • 29 releases
  • 28 contributors and 400 Slack members
  • 1,370 GitHub stars

“Linkerd was built based on real world developer experiences in solving problems found when building large production systems at web scale companies like Twitter and Google,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “It brings this expertise to the masses, allowing a greater number of companies to benefit from microservices. I’m thrilled to have Linkerd as a CNCF inception project and for them to share their knowledge of building a cloud native service mesh with scalable observability systems to the wider CNCF community.”

As CNCF’s first inception-level project under the CNCF Graduation Criteria v1.0, Linkerd will receive mentoring from the TOC, priority access to the CNCF Community Cluster, and international awareness at CNCF events like CloudNativeCon/KubeCon Europe. The Graduation Criteria, recently voted in by the TOC, assign every CNCF project a maturity level of inception, incubating or graduated, allowing CNCF to support projects at different maturity levels and advance the development of cloud native technology and services.

For more on Linkerd, listen to an interview with Alex Williams of The New Stack and Morgan here, or read Morgan’s upcoming blog post on the project’s roots and why Linkerd joined CNCF.

Container Management Trends: Kubernetes moves out of testing and into production


In conjunction with CloudNativeCon + KubeCon (Nov 8-9, 2016), the Cloud Native Computing Foundation (CNCF) conducted a survey of attendees. More than 170 conference attendees completed the survey, with a majority of respondents (73%) coming from technology companies (vs. enterprise IT), including suppliers of cloud management technology.

The goal of the study was to understand the state of deployment of Kubernetes and other container management platforms, as well the progress of container deployment in general.

While this was the first time CNCF had taken the temperature of the container management marketplace, comparisons to other surveys, such as Google’s own Kubernetes surveys in March 2016 and June 2016, highlight important trends in this space. You can download the raw survey data here.

Cloud Management Platforms


Container Management Platforms Preferences

While both the recent CloudNativeCon + KubeCon survey and the Google surveys targeted audiences with an existing interest in Kubernetes, it’s interesting to observe changes in this segment of the container management marketplace over just the last eight months, including:

  • Growth in commitment to Kubernetes from 48% (Google survey, March) to 83% (CloudNativeCon + KubeCon survey)
  • Ongoing move away from home-grown management (Shell Scripts and CAPS) to commercial off-the-shelf (COTS) solutions (Kubernetes, Docker Swarm, Mesos, etc.)

Locus of Deployment

The CloudNativeCon + KubeCon survey also illustrates other types of maturation in container management, in particular changes in the locus of deployment on premises and cloud platforms:


Container Deployment Platforms

Highlighted trends include:

  • Incrementally growing hosting on Amazon and doubling of deployment on Google clouds in only 8 months
  • Doubling of hosting on Microsoft Azure from March to November 2016
  • A shift from ad hoc workstation container hosting to data center blades in premises-centric applications, representative of a maturing from purely experimental deployment to more serious development and even production settings (see next section).

Kubernetes Supporting Containers in Production

The most interesting trend observable in the CloudNativeCon + KubeCon survey is definitely the maturation of Kubernetes deployment, with both growth in respondents investing in Kubernetes for development/test and a near tripling over the last eight months of Kubernetes in production settings:


The Shift from Development/Test to Production for Kubernetes

Container Usage Volume on the Rise

Not only are more companies using containers in all stages of the product/services life-cycle, they are also using a larger fleet of containers overall, as low volume deployments (<50 units) rise by 27%, with higher volume deployments (>250 units) more than doubling.  In fact, 12% of respondents in the CloudNativeCon + KubeCon survey report deploying more than 1,000 containers, with an additional 19% fielding more than 5,000.


Growth in Container Deployment Volumes

Upcoming Events – Join Us to Learn More About Kubernetes in Production

Interested in learning more about how to use Kubernetes and other cloud native technologies in production? Join us at our upcoming events – Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest 2017, February 7 – February 9 – or CloudNativeCon + KubeCon Europe 2017.

The three-city training tour will hit Portland, Seattle and Vancouver and discuss how cloud users are implementing cloud native computing. Real-world Kubernetes use cases at Amadeus, LeShop, Produban/Santander, and FICO will be presented. These $30 training sessions are ideal for end users, developers and students just beginning to explore how to orchestrate containers as part of a microservices architecture, instead of VMs. Registration details here.

The CNCF’s flagship CloudNativeCon + KubeCon will take place March 29-30 in Berlin. The event gathers leading technologists from multiple open source cloud native communities to further the education and advancement of cloud native computing. Discounted registration of $900 for corporations and $450 for individuals ends February 3.

Knowledge, Abilities & Skills You Will Gain at Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest!


The Cloud Native Computing Foundation is taking to the road February 7-9 in Portland, Seattle and Vancouver to offer end users, developers, students and other community members the ability to learn from experts at Red Hat, Apprenda and CNCF how to use Kubernetes and other cloud native technologies in production. Sponsored by Intel and Tigera, the first-ever Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest will introduce key concepts, resources and opportunities for learning more about cloud native computing.

The CNCF roadshow series focuses on meeting with and catering to those using cloud native technologies in development, but not yet in production. Cities and locations include:

Each roadshow will be held from 2-5pm, with the full agenda including presentations from:

Dan Kohn, Executive Director of the Cloud Native Computing Foundation. Dan will discuss:

  • What is cloud native computing — orchestrated containers as part of a microservices architecture — and why are so many cloud users moving to it instead of virtual machines
  • An overview of the CNCF projects — Kubernetes, Prometheus, OpenTracing and Fluentd — and how we as a community are building maps through previously uncharted territory
  • A discussion of top resources for learning more, including Kubernetes the Hard Way, Kubernetes bootcamp, and CloudNativeCon/KubeCon and training and certification opportunities

Brian Gracely, Director of Product Strategy at Red Hat. Brian will discuss:

  • Real-world use of Kubernetes in production today at Amadeus, LeShop, Produban/Santander & FICO
  • Why contributing to CNCF-hosted projects should matter to you
  • How cross-community collaboration is the key to the success of the future of Cloud Native

Isaac Arias, Technology Executive, Digital Business Builder, and Passionate Entrepreneur at Apprenda. Isaac will discuss:

  • Brief history of machine abstractions: from VMs to Containers
  • Why containers are not enough: the case for container orchestration
  • From Borg to Kubernetes: the power of declarative orchestration
  • Kubernetes concepts and principles and what it takes to be Cloud Native

By the end of this event, attendees will understand how cloud users are implementing cloud native computing — orchestrated containers as part of a microservices architecture — instead of virtual machines. Real-world Kubernetes use cases at Amadeus, LeShop, Produban/Santander, and FICO will be presented. A detailed walk-through of the Prometheus (monitoring system), OpenTracing (tracing standard) and Fluentd (logging) projects and each level of the stack will also be provided.

Each city is limited in space, so sign up now! Use the code MEETUP50 to receive 50% off registration!

Diversity Scholarship Series: One Software Engineer’s Unexpected CloudNativeCon + KubeCon Experience


By: Kris Nova, Platform Engineer at Datapipe

Diversity noun : the condition of having or being composed of differing elements.

As defined by Merriam-Webster, diversity indicates the presence of differing elements. Without going too data science on everyone, I suppose there are a lot of things about me that plot me outside the predicted regression; especially for backend systems engineers who work on Kubernetes. However, there are also a lot of things that I have in common with the larger group as well. Thanks to CNCF for providing me with a fabulous scholarship to CloudNativeCon + KubeCon in Seattle, I was able to have a once-in-a-lifetime experience engaging with this larger group and experiencing our similarities and differences.

Since I am often the only woman when I find myself in a room of software engineers, I have grown quite used to it. Unfortunately, not everyone I find myself working with is as used to it. To be honest, I was a bit nervous about what the trip might have in store.

The scholarship I received gave me an exciting opportunity to not only attend the conference, but to also attend one of the Kubernetes sig-cluster-lifecycle meetings in-person at the Google office in Seattle. I was happy as a clam debating over kops vs. kubeadm scope, and drinking espresso with Joe Beda and the Googlers face-to-face. My gender never once crossed my mind, which was such a unique experience the Kubernetes community gave me that morning. I wasn't a woman in a room full of men; I was a valuable member of the community who is held just as responsible as anyone else for a careless commit to the codebase. So a big thank you to everyone in sig-cluster-lifecycle and Google Seattle who made me feel right at home and as welcomed as any other software engineer.

Open source software has always been an ideology I keep very close to my heart. In fact, open source software is what helped inspire me to come out as a lesbian in my life. To me, open source software has always represented a wonderful world of science, honesty, and learning. A world where mistakes and failure are encouraged, and growing with your peers is a foundational aspect of success. Walking around the conference the first morning before the keynotes, I experienced the same excitement and wanderlust that has always attracted me to the open source community. The hotel lobby was buzzing with activity, and everywhere I looked I could see and hear fascinating conversation around containers and evolving the Kubernetes tooling as a community.

Having gone through the wringer in a few other open source communities, it was so refreshing getting to meet the people who bring Kubernetes to life. How nice it was to not be scrutinized for my lack of neck-beard. To this day, thinking about the fact that I was able to bring home a suitcase stuffed with t-shirts fit for my gender is beyond exciting. Thanks Kubernetes, you guys rock!

The conference was a hit; I don't even know where to begin. The sig-aws meeting that we were able to attend, thanks to CoreOS, was surreal. Sitting with Chris Love and Justin Santa Barbara on the floor of the hotel lobby having a very effective, yet impromptu kops planning meeting still makes me smile. I still have the original plans for running Kubernetes on AWS in a private VPC scribbled on a cocktail napkin. Getting to meet some of my new favorite people at Red Hat, Atlassian, and Google was even better. The conference changed the way I look at (my new favorite) open source community. This feeling stays with me every day when I open up emacs and start writing code for Kubernetes.

Upon coming home I hung my conference badge up in the hallway proudly. It is still there to this day. A symbol of the amazing time I had in Seattle, and a symbol of pride. The badge holds the name “Kris,” which may not mean a lot to anyone else, but to me represents success. Success in my career with Kubernetes, success of my love of learning software, and success of my gender transition from male to female. The badge is hopefully the first of many with my new name on it, and the first of many Cloud Native conferences to come.

So I guess maybe I am diverse after all. I love Kubernetes for so many reasons. After the conference, I think one of the main reasons I love the community is that I am just another committer to the code base. To be honest, I am so grateful that I can fit right in. I just want to write code and be treated like everyone else. Thanks to the Kubernetes community for the gift of being pleasantly accepted as a software engineer despite being a bit of a black sheep. It’s all I could ever ask for.

The Cloud Native Computing Foundation is offering diversity scholarships at both its European and North America shows in 2017. To apply, please visit here for Europe and here for North America.

Fluentd: Cloud Native Logging


By Eduardo Silva, Fluentd Maintainer


When deploying applications – either for development or production purposes – there are several steps one needs to take to have a healthy environment. One such step is making sure you have logging capabilities from the application to its environment. This is mandatory if you want to perform continuous monitoring and have the ability to troubleshoot any anomaly during or after runtime.

Regardless of your environment, logging can be complex. System services and specific application logs need to be consumed in different ways, and the data retrieved likely comes in a variety of different formats, which presents an interesting challenge. In the Cloud Native era, we see this complexity increase when deployment happens at scale. At this point a generic logging tool is not enough to solve the problem. Instead, a custom solution capable of integrating, understanding and connecting the dots between different endpoints is highly recommended; that's why Fluentd was created.

Unified Logging Layer

Fluentd allows you to implement a unified logging layer in any type of environment. It was designed with flexibility in mind, with a pluggable architecture of more than 600 extensions provided by the community, and can collect, parse, filter and deliver logs from any source to most well-known destinations, such as local databases or cloud services.
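As a rough illustration of that collect/parse/filter/deliver pipeline, a minimal Fluentd configuration might look like the sketch below. The file path, tag and Elasticsearch host are hypothetical, and the Elasticsearch output assumes the fluent-plugin-elasticsearch plugin is installed:

```
# Collect: tail an application log file (hypothetical path)
<source>
  @type tail
  path /var/log/app/access.log
  pos_file /var/log/td-agent/access.log.pos
  tag app.access
  format json              # Parse: treat each line as JSON
</source>

# Filter: enrich every record with the host name
<filter app.**>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
  </record>
</filter>

# Deliver: fan out each event to stdout and Elasticsearch
<match app.**>
  @type copy
  <store>
    @type stdout
  </store>
  <store>
    @type elasticsearch    # requires fluent-plugin-elasticsearch
    host elasticsearch.local
    port 9200
  </store>
</match>
```

Swapping a destination is then just a matter of changing the `<store>` plugin, which is where the pluggable architecture pays off.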


Looking to the future

When the project was started in 2011 by Treasure Data, its primary goal was to solve the data collection problem. Due to its open source nature and quick adoption by the industry, it experienced amazing organic growth. Today, we can see Fluentd integrated into the Docker and Kubernetes ecosystems and running in thousands of environments, but there is still plenty of room to grow.

From a project and technical perspective, better integration with different cloud native environments is a future goal; establishing a formal and closer relationship with the core teams of related projects would also help Fluentd maintainers better understand their needs and align development efforts. Since openness and collaboration are part of the Fluentd DNA, the core team decided it was time to take the next big step and join a Foundation.

Fluentd joins CNCF

When the core team decided to join a Foundation, we evaluated many different options and found the Cloud Native Computing Foundation (CNCF) to be a really good fit. The Foundation provides enough flexibility to let the project grow organically, while offering the benefit of attracting resources that would improve Fluentd from both a technical and a community perspective.

In mid-2016, the core team started the application process with the CNCF Technical Oversight Committee (TOC). It took a few months of positive technical discussions to meet the general requirements. Finally, the day before the inaugural CloudNativeCon/KubeCon (November 7th), the TOC approved and welcomed Fluentd as an official CNCF project.


Note: Fluentd is a whole ecosystem of 35 repositories, including the Fluentd service, plugins, language SDKs and complementary projects such as Fluent Bit (a lightweight forwarder), all under our GitHub organization. All of them are part of CNCF now!

What’s next?

We are committed to improving all Logging aspects of Cloud Native projects. Currently, the core Fluentd team is participating in the Kubernetes sig-instrumentation group and looking forward to an integration with Prometheus and other projects of the stack.

We expect to release Fluentd v1.0 near Q1 2017, which will bring exciting features such as an enhanced API for plugins, Windows support, and compression/authentication for network transfers, among others.

The Fluentd community will continue to participate actively at open source events such as CloudNativeCon. We invite everyone to join us and want to hear from you! Feel free to reach us through the usual communication channels:

Thanks again and Happy Logging!

CNCF Webinar Series Launches December 15th!


Cloud native – orchestrating containers as part of a microservices architecture – is a departure from traditional application design. Kubernetes and other cloud native technologies enable more rapid software development at a lower cost than traditional infrastructure. However, the containerization wave can be a little confusing – which applications to lift, which ones to keep as is, which ones can’t be left behind, etc.

The Cloud Native Computing Foundation is offering a map to guide developers and users through this new terrain with the launch of a new webinar series.

The CNCF Webinar Series kicks off December 15th from 10:00 a.m. to 11:00 a.m. PT with a discussion on Cloud Native Strategy with Jamie Dobson of Container Solutions. Register for the Webinar today!

The series, along with our major events like CloudNativeCon/KubeCon and the Pacific Northwest Roadshows, brings the community together and dives into different facets of this formerly uncharted, but increasingly popular territory.


Many companies see the benefits of highly available, scalable and resilient systems. They want to go ‘cloud native,’ but as they reach for containerized microservices they may actually be grabbing the golden egg rather than the goose that laid it.

In this webinar, we’ll look at a model for emerging strategy, classic mistakes and how to avoid them. We’ll also look at how we can iterate through the ‘cloud native’ problem space. Along the way, and before we get to recent history, we’ll visit ancient Greece, post-war Scandinavia, and the Jet Propulsion Lab. We’ll learn about heuristics, including the doughnut principle, and, of course, we’ll confront the key paradox that strategy tries to resolve: what is good for a business is not necessarily good for those who work in it.


Jamie is the CEO of Container Solutions, one of the world’s leading cloud native consultancies. He specializes in strategy and works with companies that have particularly difficult problems to solve.