
Prometheus User Profile: How DigitalOcean Uses Prometheus


DigitalOcean – a CNCF member and devoted Prometheus user – is approaching one million registered users with more than 40,000 active teams. With workloads becoming more complex, it is focused on delivering the tools and performance that are required to seamlessly deploy, scale and manage any sized application.

However, before transitioning to Prometheus, the team’s experience collecting metrics from all of DigitalOcean’s physical servers wasn’t so great. Many of the company’s teams were disappointed with the previous monitoring system for a variety of reasons, with more than one team expressing frustration with its query language and the visualization tools available.

This led to DigitalOcean’s transition to Prometheus.

In the blog below, originally published by Prometheus, Ian Hansen of DigitalOcean’s platform metrics team talks about how they use Prometheus.

Going to CloudNativeCon + KubeCon Berlin? Ian’s DigitalOcean colleague, Joonas Bergius, will be presenting “Kubernetes at DigitalOcean: Building a Platform for the Future [B]” on March 29th. Make sure to catch his session!

 

Interview with DigitalOcean

Posted at: September 14, 2016 by Brian Brazil

Next in our series of interviews with users of Prometheus, DigitalOcean talks about how they use Prometheus. Carlos Amedee also talked about the social aspects of the rollout at PromCon 2016.

Can you tell us about yourself and what DigitalOcean does?

My name is Ian Hansen and I work on the platform metrics team. DigitalOcean provides simple cloud computing. To date, we’ve created 20 million Droplets (SSD cloud servers) across 13 regions. We also recently released a new Block Storage product.

What was your pre-Prometheus monitoring experience?

Before Prometheus, we were running Graphite and OpenTSDB. Graphite was used for smaller-scale applications and OpenTSDB was used for collecting metrics from all of our physical servers via Collectd. Nagios would poll these databases to trigger alerts. We still use Graphite, but we no longer run OpenTSDB.

Why did you decide to look at Prometheus?

I was frustrated with OpenTSDB because I was responsible for keeping the cluster online, but found it difficult to guard against metric storms. Sometimes a team would launch a new (very chatty) service that would impact the total capacity of the cluster and hurt my SLAs.

We were able to blacklist/whitelist new metrics coming in to OpenTSDB, but didn’t have a great way to guard against chatty services except for organizational process (which was hard to change and enforce). Other teams were frustrated with the query language and the visualization tools available at the time. I was chatting with Julius Volz about push vs. pull metric systems and was sold on trying Prometheus when I saw that I would really be in control of my SLA, since I get to determine what I’m pulling and how frequently. Plus, I really, really liked the query language.

How did you transition?

We were gathering metrics via Collectd sending to OpenTSDB. Installing the Node Exporter in parallel with our already running Collectd setup allowed us to start experimenting with Prometheus. We also created a custom exporter to expose Droplet metrics. Soon, we had feature parity with our OpenTSDB service and started turning off Collectd and then turned off the OpenTSDB cluster.
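A parallel rollout like the one described above can be sketched with a minimal Prometheus scrape configuration. The job names, hostnames, and the custom exporter’s port below are hypothetical, not DigitalOcean’s actual setup:

```yaml
# Hypothetical sketch: scrape the Node Exporter (and a custom Droplet
# exporter) running alongside the existing Collectd pipeline.
scrape_configs:
  - job_name: node
    scrape_interval: 15s
    static_configs:
      - targets:
          - hypervisor01.example.com:9100   # Node Exporter default port
          - hypervisor02.example.com:9100
  - job_name: droplet
    static_configs:
      - targets:
          - hypervisor01.example.com:9101   # hypothetical custom exporter
```

Because Prometheus pulls from these endpoints on its own schedule, the existing Collectd-to-OpenTSDB path can keep running untouched until the new system reaches feature parity.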

People really liked Prometheus and the visualization tools that came with it.

Suddenly, my small metrics team had a backlog we couldn’t clear fast enough to keep people happy. So instead of providing and maintaining Prometheus for people’s services, we focused on tooling to make it as easy as possible for other teams to run their own Prometheus servers and to run the common exporters we use at the company.

Some teams have started using Alertmanager, but we still have a concept of pulling Prometheus from our existing monitoring tools.

What improvements have you seen since switching?

We’ve improved our insights on hypervisor machines. The data we could get out of Collectd and Node Exporter is about the same, but it’s much easier for our team of golang developers to create a new custom exporter that exposes data specific to the services we run on each hypervisor.

We’re exposing better application metrics. It’s easier to learn and teach how to create a Prometheus metric that can be aggregated correctly later. With Graphite it’s easy to create a metric that can’t be aggregated in a certain way later because the dot-separated-name wasn’t structured right.
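The aggregation point can be illustrated with a small, self-contained sketch (not DigitalOcean’s code): with label-based metrics, any dimension can be summed after the fact, whereas a Graphite dot-separated name like `web01.api.get.500.count` fixes the hierarchy at write time.

```python
# Prometheus-style samples: one metric name plus free-form label sets.
# The hosts, methods, and counts here are made up for illustration.
samples = [
    ({"host": "web01", "method": "GET",  "status": "200"}, 1200),
    ({"host": "web01", "method": "GET",  "status": "500"},   12),
    ({"host": "web02", "method": "POST", "status": "200"},  340),
    ({"host": "web02", "method": "GET",  "status": "200"},  980),
]

def sum_by(samples, label):
    """Aggregate sample values by any single label -- the rough
    equivalent of PromQL's sum by (label) (http_requests_total)."""
    out = {}
    for labels, value in samples:
        key = labels[label]
        out[key] = out.get(key, 0) + value
    return out

# Any dimension works later, because the dimensions are explicit labels:
print(sum_by(samples, "status"))  # totals per HTTP status
print(sum_by(samples, "host"))    # totals per host
```

With a dot-separated name, summing across a dimension that wasn’t anticipated when the name was structured requires fragile wildcard matching on name segments; with labels it is just a different group-by key.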

Creating alerts is much quicker and simpler than what we had before, plus in a language that is familiar. This has empowered teams to create better alerting for the services they know and understand because they can iterate quickly.

What do you think the future holds for DigitalOcean and Prometheus?

We’re continuing to look at how to make collecting metrics as easy as possible for teams at DigitalOcean. Right now teams are running their own Prometheus servers for the things they care about, which allowed us to gain observability we otherwise wouldn’t have had as quickly. But, not every team should have to know how to run Prometheus. We’re looking at what we can do to make Prometheus as automatic as possible so that teams can just concentrate on what queries and alerts they want on their services and databases.

We also created Vulcan so that we have long-term data storage, while retaining the Prometheus Query Language that we have built tooling around and trained people how to use.

Measuring the Popularity of Kubernetes Using BigQuery


By Dan Kohn, CNCF Executive Director, @dankohn1

Kubernetes Logo

As the executive director of CNCF, I’m proud to host Kubernetes, which is one of the highest development velocity projects in the history of open source. I know this because I can do a web search and see… quite a few people being quoted saying that, but does the data support this claim?

This blog post works through the process of investigating that question. CNCF licenses a dashboard from Bitergia, but it’s more useful for tracking project trends over time than for comparing Kubernetes to other open source projects. Project velocity matters because developers, enterprises and startups are more interested in working with a technology that others are adopting, so that they can leverage the investments of their peers. So, how does Kubernetes compare to the other 53 million GitHub repos?

By way of excellent blog posts from Felipe Hoffa and Jess Frazelle (the latter a Kubernetes contributor and speaker at our upcoming CloudNativeCon/KubeCon Berlin), I got started on using BigQuery to analyze the public GitHub data set. You can re-run any of the gists below by creating a free BigQuery account. All of the data below is for 2016, though you can easily run against different time periods.

My first attempt found that the project with the highest commit rate on GitHub is… KenanSulayman/heartbeat, a repo with 9 stars which appears to be an hourly update from a Tor exit node. Well, that’s kind of a cool use of GitHub, but not really what I’m looking for. I learned from Krihelinator (a thoughtful though arbitrary new metric that currently ranks Kubernetes #4, right in front of Linux), that some people use GitHub as a backup service. So, rerunning with a filter of more than 10 contributors puts Kubernetes at #29 based on its 8,703 commits. For reference, that’s almost exactly one commit an hour, around the clock, for the entire year.

That metric also leaves off torvalds/linux, because the kernel’s git tree is mirrored to GitHub, but that mirroring does not generate GitHub events that are stored in that data set. Instead, there is a separate BigQuery data set that just measures commits. When I run a query to show the projects with the most commits, I unhelpfully get dozens of forks of Linux and also many forks of a git learning tool. Here is a better query that manually checks for committers, authors, and commits of 8 popular projects, and shows Kubernetes as #2, with about 1/5th the authors and commits of Linux.1
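A query along these lines can reproduce the commits-and-authors comparison. This is a sketch, not the post’s actual gist: the field names (`repo_name`, `author.email`, `author.date`) are assumptions based on the public `bigquery-public-data.github_repos.commits` dataset, so verify them against the current schema before running:

```sql
-- Sketch: 2016 commits and distinct authors per repository.
SELECT
  repo,
  COUNT(*) AS commits,
  COUNT(DISTINCT author.email) AS authors
FROM `bigquery-public-data.github_repos.commits`,
  UNNEST(repo_name) AS repo
WHERE author.date >= TIMESTAMP('2016-01-01')
  AND author.date <  TIMESTAMP('2017-01-01')
GROUP BY repo
ORDER BY commits DESC
LIMIT 10
```

Counting `author.email` rather than committers is what avoids the GitHub-robot problem described below.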

To see how many unique committers Kubernetes had in 2016, I used this query, which showed that there were… 59, because Kubernetes uses a GitHub robot to do the vast majority of the actual commits. The correct query requires looking inside the commits at the actual authors, and when ranked by unique authors, Kubernetes comes in at #10 with 868.

Updating Hoffa’s query about issues opened to include data for all of 2016 (while still ignoring robot comments), Kubernetes remains #1 with 42,703, with comments from 3,077 different developers. Frazelle’s analysis of pull requests (updated for all of 2016 and to require more than 10 contributors to avoid backup projects) now shows Kubernetes at #2 with 10,909, just behind a Java intranet portal. (Rather than GitHub issues and pull requests, Linux uses its own email-based workflow described in a talk last year by stable kernel maintainer Greg Kroah-Hartman, so it doesn’t show up in these comparisons.)

Kubernetes 2016 Rankings

Measure          Ranking
Krihelinator           4
Commits               29
Authors               10
Issue Comments         1
Pull Requests          2

In conclusion, I’m not sure that any of these metrics represents the definitive one. You can pick your preferred statistic, such as that Kubernetes is in the top 0.00006% of the projects on GitHub. I prefer to just think of it as one of the fastest moving projects in the history of open source.

What’s your preferred metric? Please let me know at @dankohn1 or in the Hacker News comments, and I’m happy to provide t-shirts in exchange for cool visualizations.


1 OpenHub incorrectly showed more than 3x as many authors and 5x the commits for Linux in 2016 as the BigQuery data set. I confirmed this is an error with Linux stable kernel maintainer Greg Kroah-Hartman (who checked the actual git results) and reported it to OpenHub. They’ve since fixed the bug.

Prometheus User Profile: Dynamically Helping Weaveworks Accelerate Cloud Native Application Development


Sometimes two things go so well together you wonder how you ever saw them separately, like peanut butter and chocolate coming together to make Reese’s cups. The combination of Kubernetes and Prometheus invokes the same feeling for Weaveworks, the makers of Weave networking, monitoring and management for containers and microservices.

Tom Wilkie, Director of Engineering at Weaveworks, recently dove into some of the practices the company has developed around these two perfectly paired cloud native technologies over the last six months in his blog post “Prometheus and Kubernetes: A Perfect Match,” and recently presented how Weaveworks uses Prometheus to monitor Weave Cloud, which runs on a Kubernetes cluster in AWS.

Digging deeper into Weaveworks’ use of Prometheus, Brian Brazil sat down with Tom to discuss how Weaveworks discovered Prometheus, what improvements they made, what’s next for the platform, and the moment they knew Prometheus and Kubernetes were a “perfect match.” Read on to learn more!

Interview originally posted at https://prometheus.io/blog/2017/02/20/interview-with-weaveworks/

Interview with Weaveworks

Posted at: February 20, 2017 by Brian Brazil

Continuing our series of interviews with users of Prometheus, Tom Wilkie from Weaveworks talks about how they chose Prometheus and are now building on it.

Can you tell us about Weaveworks?

Weaveworks offers Weave Cloud, a service which “operationalizes” microservices through a combination of open source projects and software as a service.

Weave Cloud consists of:

You can try Weave Cloud free for 60 days. For the latest on our products check out our blog, Twitter, or Slack (invite).

What was your pre-Prometheus monitoring experience?

Weave Cloud was a clean-slate implementation, and as such there was no previous monitoring system. In previous lives the team had used the typical tools such as Munin and Nagios. Weave Cloud started life as a multitenant, hosted version of Scope. Scope includes basic monitoring for things like CPU and memory usage, so I guess you could say we used that. But we needed something to monitor Scope itself…

Why did you decide to look at Prometheus?

We’ve got a bunch of ex-Google SREs on staff, so there was plenty of experience with Borgmon, and an ex-SoundClouder with experience of Prometheus. We built the service on Kubernetes and were looking for something that would “fit” with its dynamically scheduled nature, so Prometheus was a no-brainer. We’ve even written a series of blog posts, the first of which covers why Prometheus and Kubernetes work together so well.

How did you transition?

When we started with Prometheus the Kubernetes service discovery was still just a PR and as such there were few docs. We ran a custom build for a while and kinda just muddled along, working it out for ourselves. Eventually we gave a talk at the London Prometheus meetup on our experience and published a series of blog posts.

We’ve tried pretty much every different option for running Prometheus. We started off building our own container images with embedded config, running them all together in a single Pod alongside Grafana and Alert Manager. We used ephemeral, in-Pod storage for time series data. We then broke this up into different Pods so we didn’t have to restart Prometheus (and lose history) whenever we changed our dashboards. More recently we’ve moved to using upstream images and storing the config in a Kubernetes config map – which gets updated by our CI system whenever we change it. We use a small sidecar container in the Prometheus Pod to watch the config file and ping Prometheus when it changes. This means we don’t have to restart Prometheus very often, can get away without doing anything fancy for storage, and don’t lose history.
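A rough sketch of that ConfigMap-plus-sidecar pattern follows. The image, names, and reload mechanism are assumptions rather than Weaveworks’ actual manifest; note that current Prometheus versions only expose the HTTP reload endpoint when started with --web.enable-lifecycle:

```yaml
# Hypothetical Pod spec: Prometheus plus a sidecar that reloads it when
# the CI-managed ConfigMap changes.
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  volumes:
    - name: config
      configMap:
        name: prometheus-config        # updated by CI on config changes
  containers:
    - name: prometheus
      image: prom/prometheus
      args:
        - --config.file=/etc/prometheus/prometheus.yml
        - --web.enable-lifecycle       # enables the /-/reload endpoint
      volumeMounts:
        - name: config
          mountPath: /etc/prometheus
    - name: config-watcher             # pings Prometheus on change
      image: busybox
      command: ["sh", "-c"]
      args:
        - |
          last=""
          while true; do
            cur=$(md5sum /etc/prometheus/prometheus.yml)
            if [ -n "$last" ] && [ "$cur" != "$last" ]; then
              wget -q -O- --post-data='' http://localhost:9090/-/reload
            fi
            last="$cur"
            sleep 10
          done
      volumeMounts:
        - name: config
          mountPath: /etc/prometheus
```

Because Prometheus only rereads its config on the reload call, the Pod itself never restarts and in-memory history survives dashboard and config changes.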

Still the problem of periodically losing Prometheus history haunted us, and the available solutions such as Kubernetes volumes or periodic S3 backups all had their downsides. Along with our fantastic experience using Prometheus to monitor the Scope service, this motivated us to build a cloud-native, distributed version of Prometheus – one which could be upgraded, shuffled around and survive host failures without losing history. And that’s how Weave Cortex was born.

What improvements have you seen since switching?

Ignoring Cortex for a second, we were particularly excited to see the introduction of the HA Alert Manager; although mainly because it was one of the first non-Weaveworks projects to use Weave Mesh, our gossip and coordination layer.

I was also particularly keen on the version two Kubernetes service discovery changes by Fabian – this solved an acute problem we were having with monitoring our Consul Pods, where we needed to scrape multiple ports on the same Pod.

And I’d be remiss if I didn’t mention the remote write feature (something I worked on myself). With this, Prometheus forms a key component of Weave Cortex itself, scraping targets and sending samples to us.

What do you think the future holds for Weaveworks and Prometheus?

For me the immediate future is Weave Cortex, Weaveworks’ Prometheus as a Service. We use it extensively internally, and are starting to achieve pretty good query performance out of it. It’s running in production with real users right now, and shortly we’ll be introducing support for alerting and achieving feature parity with upstream Prometheus. From there we’ll enter a beta programme of stabilization before general availability in the middle of the year.

As part of Cortex, we’ve developed an intelligent Prometheus expression browser, with autocompletion for PromQL and Jupyter-esque notebooks. We’re looking forward to getting this in front of more people and eventually open sourcing it.

I’ve also got a little side project called Loki, which brings Prometheus service discovery and scraping to OpenTracing, and makes distributed tracing easy and robust. I’ll be giving a talk about this at KubeCon/CNCFCon Berlin at the end of March.

Getting to Know Todd Moore, CNCF’s New Governing Board Chair


1)    What does the CNCF Governing Board do and what is your role as chair?  

The CNCF is the result of a shared vision by many of us in the cloud community. It was created to chart a common path for how the next generation of cloud native applications should be developed, and to build the necessary infrastructure. We realized that collaborating on the plumbing would be the fastest path to a level playing field and a vibrant ecosystem. We seek out and cooperate with related open ecosystem communities to support projects and advance the state of the art. Many projects are tailored to the era of cloud computing and solve a range of needs, from private clouds to hyper-scale capabilities. The three key attributes of the CNCF approach are: 1) container packaged, 2) dynamically managed, and 3) microservices oriented. The role of the CNCF Governing Board is to provide overall stewardship of the many projects that comprise the initiative, foster the growth and evolution of the ecosystem, and ensure the initiative serves the community by maintaining an open, level playing field for everyone.

Check out the wide list of companies that have joined, and you will quickly see that the CNCF has drawn an impressive range of entities big and small (see the member list here: https://www.cncf.io/about/members). As Governing Board Chair, one of my most important responsibilities is to ensure the fair and efficient operation of the Board, so that all members have the opportunity to collaborate effectively and gain value from their involvement, regardless of their size or scope.

2)    Why did you want to become chair?

Having worked closely with former Governing Board Chair Craig McLuckie on the formation of the CNCF, I had an insider perspective on our membership’s needs and a sense of responsibility to the organization. I have 17 years of experience working with open source communities and have worked to form, grow, and support them over many years. As a board member for the Node.js Foundation and the OpenStack Foundation, I have been an active member of these organizations and have a record of bringing folks together to work for the common good. Recently, I handed over my board seat with the OpenStack Foundation so I could focus my time on CNCF. I am excited for the chance to serve the CNCF community and the Governing Board as we grow the CNCF into the premier organization charting the course for cloud native development.

3)    What is your vision for CNCF in 2017?

Simply put: diversity in projects, continued growth of membership, and a focus on the user community. The Technical Oversight Committee (TOC) is evaluating a number of exciting technologies for inclusion. We now have an incubation process with set graduation criteria, and the user base is growing and seeking to comment on the technologies we shepherd. We need to engage even more participants to build out a complete ecosystem, with change and growth guided by our users.

4)    Are you involved in any other Linux Foundation projects or open community Foundations? If so, which ones and why?

As I stated above, I am a board member of the Node.js Foundation. I helped form this organization and played a role in healing the split in the community. I work in the Node.js community because I believe in the technology and am inspired by its phenomenal growth. We have created a blend of stability and experimental edge that keeps the community moving, and it is just plain fun to watch it grow.

There is a long list of projects that I have helped bring to the Linux Foundation, including Hyperledger, Cloud Foundry, ODPi, JanusGraph, Node.js, and the JS Foundation, to name a few. There are more to come, I am sure. The LF has shown itself to be a supportive, well-run home for projects, with an excellent staff. That is what keeps me coming back.

5)    You are the Vice President of Open Technology at IBM. Please tell us more about IBM’s focus on open communities and open source.

For many years, IBM has been recognized as a leader in open technology, from its early work with Linux, Apache and Eclipse to its current work across all layers of the cloud and application development. You can find old IBM VNET ids in some of the earliest open source projects.

IBM has a long history of working in open technology. In fact, in many ways, IBM is largely responsible for the open source movement’s success. IBM’s strong early support for Linux brought about a change in posture toward open source for many enterprises.  The commitment of funds and IP showed others that they should take open source seriously.  

We have worked hard over the years to establish a solid and respected reputation in open source circles, and especially in those communities where we invest strategically. One thing IBM has learned through all of this is that those communities that strive for inclusiveness and open governance tend to attract the largest ecosystems and create the most expansive markets.

IBM knows that a “rising tide floats all boats”. It isn’t enough that IBM succeeds—we need to ensure that many can succeed to ensure a vibrant ecosystem. This reduces the risk that comes with embracing open source for ourselves and, more importantly, our users.

Today, IBM’s work in open technology continues in the cloud. We still work with Apache, Eclipse and Linux on multiple projects, and IBM is recognized as a leader in the OpenStack, Cloud Foundry, Node.js, Docker, Hyperledger and OpenWhisk communities, among many others.  With 62,000 engineers certified to participate in open source, we are all in on open source.

6)    CloudNativeCon + KubeCon North America 2017 is coming to Austin, TX in December. As an Austin resident, what are your recommendations for restaurants while attendees are in town?

When you come to Austin for CloudNativeCon + KubeCon North America, two things you can’t miss are tacos and brisket! A couple of my favorites are Torchy’s and Taco Deli. For brisket, Franklin BBQ is the best, but the wait can be more than a couple of hours, so other good options are Coopers on Congress Ave and Lambert’s on 2nd Street. For Mexican food, La Condesa, also in the 2nd Street district, and Guero’s on South Congress (SoCo) are really good. Austin also has some of the best ramen in the country: Ramen Tatsu-Ya on South Lamar, combined with drinks next door at BackBeat, is a great combo. Some of my other favorites are Fixe on West 5th Street for some very tasty fried chicken, and if you’re in the mood for Indian food, G’raj Mahal on Rainey Street is a great option to combine with an evening of drinks and music in that neighborhood.

Vegan-friendly restaurants include True Food Kitchen, Counter Culture, Blue Dahlia, G’raj Mahal, and Bouldin Creek Cafe. For breakfast, Magnolia Cafe (you must get the pancakes), Counter Cafe, and 24 Diner are great options.

CNCF Purchases RethinkDB Source Code and Contributes It to The Linux Foundation Under the Apache License


CNCF has purchased the source code to the RethinkDB database, relicensed the code under the Apache License, Version 2.0 (ASLv2) and contributed it to The Linux Foundation.

RethinkDB™ is an open source, NoSQL, distributed document-oriented database that is in production use today by hundreds of technology startups, consulting firms and Fortune 500 companies, including NASA, GM, Jive, Platzi, the U.S. Department of Defense, Distractify and Matters Media. Some of Silicon Valley’s top firms invested $12.2 million over more than eight years in the RethinkDB company to build a state-of-the-art database system, but were unsuccessful in creating a sustainable business, and it shut down in October 2016.

The project was licensed under the GNU Affero General Public License, Version 3 (AGPLv3), a strong copyleft license, which limited the willingness of some companies to use and contribute to the software. CNCF paid $25,000 to purchase the RethinkDB copyright and assets and has re-licensed the software under the ASLv2, one of the most popular permissive software licenses, which enables anyone to use the software for any purpose without complicated requirements. (See related blog post “Why CNCF Recommends ASLv2”.)

“CNCF saw the opportunity to salvage an enormous investment with a small incremental contribution,” said CNCF Executive Director Dan Kohn. “RethinkDB created millions of dollars worth of value and is used by a wide range of projects, companies and startups. Now that the software is available under the Apache license, the RethinkDB community has the opportunity to define the future path for themselves.” 

Praised for its ease-of-use, rich data model and ability to support extremely flexible querying capabilities, RethinkDB’s real-time push architecture is well-suited for collaborative web and mobile apps, streaming analytics apps, multiplayer games, realtime marketplaces and connected devices and services.

“RethinkDB was built by a team of database experts with the help of hundreds of contributors from around the world. With its current feature set, we believe RethinkDB continues to offer distinct advantages over other data stores in its class,” said RethinkDB community member Chris Abrams. “CNCF’s relicensing and our new home within the Linux Foundation allows the RethinkDB community to unite together to push the project forward.”

RethinkDB supports cloud-native clustering out of the box. Additional benefits include its elegant functional query language, change feeds for dynamic queries and user interface simplicity. (See related blog post “The Liberation of RethinkDB”)

“I love RethinkDB and use it every day in production,” said Bryan Cantrill, Joyent CTO and a member of the CNCF Technical Oversight Committee. “I’m thrilled that the CNCF is liberating the RethinkDB community with an open source license that is amenable to broad adoption. I hope RethinkDB will consider applying to become a CNCF Inception project in the future, and I look forward to advocating for them in my capacity as a member of the TOC.”

The RethinkDB software is available to download at https://rethinkdb.com/. Development occurs at https://github.com/rethinkdb/rethinkdb and work has been underway on the 2.4 release, which will be available shortly. Follow the RethinkDB community discussion at http://slack.rethinkdb.com/. Continued community support is available at:

###

Contact for any questions and media inquiries: Sarah Conway, 978-578-5300, PR@CNCF.io.

RethinkDB is a trademark of The Linux Foundation.

Why CNCF Recommends Apache-2.0


By Dan Kohn, @dankohn1, CNCF Executive Director

February 1, 2017

The Cloud Native Computing Foundation (CNCF) believes that the best software license for open source projects today is the Apache-2.0 license (Apache-2.0). Our goal is to enable the greatest possible adoption of our projects by developers and users. Our larger goal with CNCF (and with the Linux Foundation, of which we are part) is to create an intellectual property “no-fly zone”, where contributors and users can come together from any company or from no company, collaborate, and build things together better than any of them could do on their own.

We think that permissive software licenses foster the best ecosystem of commercial and noncommercial uses by enabling the widest possible use cases. A report this month from Redmonk shows the increasing popularity of these permissive licenses. Proponents of copyleft licenses have argued that these licenses prevent companies from exploiting open source projects by building proprietary products on top of them. Instead, we have found that successful projects can help companies’ products be successful and that the resulting profits can be fed back into those projects by having the companies employ many of the key developers, creating a positive feedback loop.

In addition, Apache-2.0 provides protection against a company intentionally or unintentionally contributing code that might read on their patents, by including a patent license. We believe that this patent protection removes another possible barrier to adoption and collaboration. Our view is that having all CNCF projects under the same license makes it easier for companies to be comfortable using and contributing, as their developers (and those developers’ attorneys) do not need to review a lot of licenses.

Of course, many CNCF projects also rely on libraries released under other open source licenses. For example, Linux underlies the entire cloud native platform and git is the software development technology of choice for all of our projects. Both are licensed under GPLv2 (and both were originally authored by Linux Foundation Fellow Linus Torvalds). The CNCF projects themselves are currently mainly written using the open source programming languages Go (BSD-3), Ruby (BSD-2) and Scala (BSD-3).

Let’s now look at the CNCF policy for projects. For an Apache-2.0-licensed project to be accepted into the CNCF, it requires a supermajority vote of our Technical Oversight Committee (TOC). For a project under any other license, it would require both a supermajority TOC vote and a majority vote by our Governing Board. While this may occur in the future, our strong preference is to work with prospective projects to relicense under the Apache-2.0. Let’s look at two example projects to see how this can work.

We’re currently having conversations with gRPC, which is licensed under the BSD-3 license plus a patent grant. When combined with the patent license in Google’s Contributor License Agreement (CLA), this combination of BSD-3 + patent grant + CLA is quite similar to Apache-2.0, in that it combines a permissive copyright license with patent protections. However, Apache-2.0 is a better-known and more familiar license, so it accomplishes similar goals while likely requiring fewer legal reviews from new potential gRPC users and contributors.

Separately, we’ve also been talking with GitLab, which uses the same MIT license as its underlying framework, Ruby on Rails. Although it’s natural to go with the same license as Rails and Ruby, we are working with the GitLab team to investigate whether it would be feasible to relicense some or all of their codebase under Apache-2.0. The main advantage of doing so would be the additional patent protections, so that companies could be confident in their ability to contribute to and use the software without later being accused of violating the patents of other contributors.

In closing, we’d like to acknowledge the debt of gratitude we have for the work done by the authors of these licenses, especially the Apache Software Foundation, and of course all the developers who write the software that makes these licenses useful.

 

Linkerd Project Joins the Cloud Native Computing Foundation


Today, the Cloud Native Computing Foundation’s (CNCF) Technical Oversight Committee (TOC) voted to accept Linkerd as the fifth hosted project alongside Kubernetes, Prometheus, OpenTracing and Fluentd. You can find more information on the project on their GitHub page.

Linkerd is an open source, resilient service mesh for cloud-native applications. Created by Buoyant founders William Morgan and Oliver Gould in 2015, Linkerd builds on Finagle, the scalable microservice library that powers companies like Twitter, Soundcloud, Pinterest and ING. Linkerd brings scalable, production-tested reliability to cloud-native applications in the form of a service mesh, a dedicated infrastructure layer for service communication that adds resilience, visibility and control to applications without requiring complex application integration.

“As companies continue the move to cloud native deployment models, they are grappling with a new set of challenges running large scale production environments with complex service interactions,” said Fintan Ryan, Industry Analyst at Redmonk. “The service mesh concept in Linkerd provides a consistent abstraction layer for these challenges, allowing developers to deliver on the promise of microservices and cloud native applications at scale. In bringing Linkerd under the auspices of CNCF, Buoyant are providing an important building block for the wider cloud native community to use with confidence.”

Enabling Resilient and Responsive Microservice Architectures

Linkerd enables a consistent, uniform layer of visibility and control across services and adds features critical for reliability at scale, including latency-aware load balancing, connection pooling, automatic retries and circuit breaking. As a service mesh, Linkerd also provides transparent TLS encryption, distributed tracing and request-level routing. These features combine to make applications scalable, performant, and resilient. Linkerd integrates directly with orchestrated environments such as Kubernetes (example) and DC/OS (demo), and supports a variety of service discovery systems such as ZooKeeper, Consul, and etcd. It recently added HTTP/2 and gRPC support and can provide metrics in Prometheus format.
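To make those integration points concrete, here is an illustrative Linkerd 1.x configuration sketch. The port numbers, datacenter name, and the choice of Consul are example assumptions for this sketch, not details from the announcement: an HTTP router that resolves service names through Consul and exposes stats that a Prometheus server can scrape.

```yaml
# Illustrative sketch only -- ports, the "dc1" datacenter, and the
# Consul backend are assumptions, not taken from the announcement.
admin:
  port: 9990                  # admin endpoint; also serves metrics

namers:
- kind: io.l5d.consul         # service discovery via Consul
                              # (ZooKeeper and etcd namers also exist)

telemetry:
- kind: io.l5d.prometheus     # expose stats in Prometheus format

routers:
- protocol: http
  dtab: |
    /svc => /#/io.l5d.consul/dc1;   # map /svc/<name> to Consul service <name>
  servers:
  - port: 4140
    ip: 0.0.0.0
```

With a configuration along these lines, applications simply send HTTP traffic to the local port 4140; Linkerd handles load balancing, retries, and routing according to the dtab, while Prometheus scrapes the admin port for metrics.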

[Diagram: Linkerd deployed alongside individual service instances]

“The service mesh is becoming a critical part of building scalable, reliable cloud native applications,” said William Morgan, CEO of Buoyant and co-creator of Linkerd. “Our experience at Twitter showed that, in the face of unpredictable traffic, unreliable hardware, and a rapid pace of production iteration, uptime and site reliability for large microservice applications is a function of how the services that comprise that application communicate. Linkerd allows operators to manage that communication at scale, improving application reliability without tying it to a particular set of libraries or implementations.”

Companies around the world use Linkerd in production to power their software infrastructure, including Monzo, Zooz, Foresee, Olark, Houghton Mifflin Harcourt, the National Center for Biotechnology Information, Douban and more. It is also featured as a default part of cloud-native distributions such as Apprenda’s Kismatic Enterprise Toolkit and StackPointCloud.

Notable Milestones:

  • 29 Releases
  • 28 contributors and 400 Slack members
  • 1,370 GitHub stars

“Linkerd was built based on real world developer experiences in solving problems found when building large production systems at web scale companies like Twitter and Google,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “It brings this expertise to the masses, allowing a greater number of companies to benefit from microservices. I’m thrilled to have Linkerd as a CNCF inception project and for them to share their knowledge of building a cloud native service mesh with scalable observability systems to the wider CNCF community.”

As CNCF’s first inception level project, under the CNCF Graduation Criteria v1.0, Linkerd will receive mentoring from the TOC, priority access to the CNCF Community Cluster, and international awareness at CNCF events like CloudNativeCon/KubeCon Europe. The CNCF Graduation Criteria was recently voted in by the TOC to provide every CNCF project an associated maturity level of either inception, incubating or graduated, which allows CNCF to review projects at different maturity levels to advance the development of cloud native technology and services.

For more on Linkerd, listen to an interview with Alex Williams of The New Stack and Morgan here, or read Morgan’s upcoming blog post on the project’s roots and why Linkerd joined CNCF.

Container Management Trends: Kubernetes moves out of testing and into production

By | Blog

In conjunction with CloudNativeCon + KubeCon (Nov 8-9, 2016), the Cloud Native Computing Foundation (CNCF) conducted a survey of attendees. More than 170 conference attendees completed the survey, with a majority of respondents (73%) coming from technology companies (vs. enterprise IT), including suppliers of cloud management technology.

The goal of the study was to understand the state of deployment of Kubernetes and other container management platforms, as well as the progress of container deployment in general.

While this was the first time CNCF had taken the temperature of the container management marketplace, comparisons to other surveys, such as Google’s own Kubernetes surveys in March 2016 and again in June 2016, highlight important trends in this space. You can download the raw survey data here.

Cloud Management Platforms


Container Management Platforms Preferences

While both the recent CloudNativeCon + KubeCon survey and the Google surveys targeted audiences with an existing interest in Kubernetes, it’s interesting to observe changes in this segment of the container management marketplace over just the last eight months, including:

  • Growth in commitment to Kubernetes from 48% (Google survey, March) to 83% (CloudNativeCon + KubeCon survey)
  • Ongoing move away from home-grown management (Shell Scripts and CAPS) to commercial off-the-shelf (COTS) solutions (Kubernetes, Docker Swarm, Mesos, etc.)

Locus of Deployment

The CloudNativeCon + KubeCon survey also illustrates other types of maturation in container management, in particular changes in the locus of deployment on premises and cloud platforms:


Container Deployment Platforms

Highlighted trends include:

  • Incrementally growing hosting on Amazon and doubling of deployment on Google clouds in only 8 months
  • Doubling of hosting on Microsoft Azure from March to November 2016
  • A shift from ad hoc workstation container hosting to data center blades in premises-centric applications, representative of maturation from purely experimental deployment to more serious development and even production settings (see next section).

Kubernetes Supporting Containers in Production

The most interesting trend observable in the CloudNativeCon + KubeCon survey is the maturation of Kubernetes deployment, with growth in respondents investing in Kubernetes for development and test, and a near tripling over the last eight months of Kubernetes in production settings:


The Shift from Development/Test to Production for Kubernetes

Container Usage Volume on the Rise

Not only are more companies using containers in all stages of the product/services life-cycle, they are also using a larger fleet of containers overall: low volume deployments (<50 units) rose by 27%, while higher volume deployments (>250 units) more than doubled. In fact, 12% of respondents in the CloudNativeCon + KubeCon survey report deploying more than 1,000 containers, with an additional 19% fielding more than 5,000.


Growth in Container Deployment Volumes

Upcoming Events – Join Us to Learn More Kubernetes in Production

Interested in learning more about how to use Kubernetes and other cloud native technologies in production? Join us at our upcoming events – Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest 2017, February 7 – February 9 – or CloudNativeCon + KubeCon Europe 2017.

The three-city training tour will hit Portland, Seattle and Vancouver and discuss how cloud users are implementing cloud native computing. Real-world Kubernetes use cases at Amadeus, LeShop, Produban/Santander, and FICO will be presented. These $30 training sessions are ideal for end users, developers and students just beginning to explore how to orchestrate containers as part of a microservices architecture, instead of VMs. Registration details here.

The CNCF’s flagship CloudNativeCon + KubeCon will take place March 29-30 in Berlin. The event gathers leading technologists from multiple open source cloud native communities to further the education and advancement of cloud native computing. Discounted registration of $900 for corporations and $450 for individuals ends February 3.

Knowledge, Abilities & Skills You Will Gain at Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest!

By | Blog

The Cloud Native Computing Foundation is taking to the road February 7-9 in Portland, Seattle and Vancouver to offer end users, developers, students and other community members the ability to learn from experts at Red Hat, Apprenda and CNCF on how to use Kubernetes and other cloud native technologies in production. Sponsored by Intel and Tigera, the first-ever Cloud Native/Kubernetes 101 Roadshow: Pacific Northwest will introduce key concepts, resources and opportunities for learning more about cloud native computing.

The CNCF roadshow series focuses on meeting with and catering to those using cloud native technologies in development, but not yet in production. Cities and locations include:

Each roadshow will be held from 2-5pm, with the full agenda including presentations from:

Dan Kohn, Executive Director of the Cloud Native Computing Foundation. Dan will discuss:

  • What is cloud native computing — orchestrated containers as part of a microservices architecture — and why are so many cloud users moving to it instead of virtual machines
  • An overview of the CNCF projects — Kubernetes, Prometheus, OpenTracing and Fluentd — and how we as a community are building maps through previously uncharted territory
  • A discussion of top resources for learning more, including Kubernetes the Hard Way, Kubernetes bootcamp, and CloudNativeCon/KubeCon and training and certification opportunities

Brian Gracely, Director of Product Strategy at Red Hat. Brian will discuss:

  • Real-world use of Kubernetes in production today at Amadeus, LeShop, Produban/Santander & FICO
  • Why contributing to CNCF-hosted projects should matter to you
  • How cross-community collaboration is the key to the success of the future of Cloud Native

Isaac Arias, Technology Executive, Digital Business Builder, and Passionate Entrepreneur at Apprenda. Isaac will discuss:

  • Brief history of machine abstractions: from VMs to Containers
  • Why containers are not enough: the case for container orchestration
  • From Borg to Kubernetes: the power of declarative orchestration
  • Kubernetes concepts and principles and what it takes to be Cloud Native

By the end of this event, attendees will understand how cloud users are implementing cloud native computing — orchestrated containers as part of a microservices architecture — instead of virtual machines. Real-world Kubernetes use cases at Amadeus, LeShop, Produban/Santander, and FICO will be presented. A detailed walk-through of the Prometheus (monitoring system), OpenTracing (tracing standard) and Fluentd (logging) projects and each level of the stack will also be provided.

Each city is limited in space, so sign up now! Use the code MEETUP50 to receive 50% off registration!

Diversity Scholarship Series: One Software Engineer’s Unexpected CloudNativeCon + KubeCon Experience

By | Blog

By: Kris Nova, Platform Engineer at Datapipe

Diversity noun : the condition of having or being composed of differing elements.

As defined by Merriam-Webster, diversity indicates the presence of differing elements. Without going too data science on everyone, I suppose there are a lot of things about me that plot me outside of the predicted regression, especially for backend systems engineers who work on Kubernetes. However, there are also a lot of things that I have in common with the larger group. Thanks to a fabulous scholarship from CNCF to attend CloudNativeCon + KubeCon in Seattle, I was able to have a once-in-a-lifetime experience engaging with this larger group and experiencing our similarities and differences.

Since I am often the only woman when I find myself in a room of software engineers, I have grown quite used to it. Unfortunately, not everyone I find myself working with is equally used to it. To be honest, I was a bit nervous about what the trip might have in store.

The scholarship I received gave me an exciting opportunity to not only attend the conference, but to also attend one of the Kubernetes sig-cluster-lifecycle meetings in person at the Google office in Seattle. I was happy as a clam debating kops vs. kubeadm scope, and drinking espresso with Joe Beda and the Googlers face-to-face. My gender never once crossed my mind, and that was the unique experience the Kubernetes community gave me that morning. I wasn’t a woman in a room full of men; I was a valuable member of the community who is held just as responsible as anyone else for a careless commit to the codebase. So a big thank you to everyone in sig-cluster-lifecycle and Google Seattle who made me feel right at home and as welcome as any other software engineer.

Open source software has always been an ideology I keep very close to my heart. In fact, open source software is what helped inspire me to come out as a lesbian in my life. To me, open source software has always represented a wonderful world of science, honesty, and learning. A world where mistakes and failure are encouraged, and growing with your peers is a foundational aspect of success. Walking around the conference the first morning before the keynotes, I experienced the same excitement and wanderlust that has always attracted me to the open source community. The hotel lobby was buzzing with activity, and everywhere I looked I could see and hear fascinating conversation around containers and evolving the Kubernetes tooling as a community.

Having gone through the wringer in a few other open source communities, it was so refreshing getting to meet the people who bring Kubernetes to life. How nice it was to not be scrutinized for my lack of neck-beard. To this day, thinking about the fact that I was able to bring home a suitcase stuffed with t-shirts fit for my gender is beyond exciting. Thanks Kubernetes, you guys rock!

The conference was a hit; I don’t even know where to begin. The sig-aws meeting that we were able to attend, thanks to CoreOS, was surreal. Sitting with Chris Love and Justin Santa Barbara on the floor of the hotel lobby having a very effective, yet impromptu kops planning meeting still makes me smile. I still have the original plans for running Kubernetes on AWS in a private VPC scribbled on a cocktail napkin. Getting to meet some of my new favorite people at Red Hat, Atlassian, and Google was even better. The conference changed the way I look at (my new favorite) open source community. This feeling stays with me every day when I open up emacs and start writing code for Kubernetes.

Upon coming home I hung my conference badge up in the hallway proudly. It is still there to this day. A symbol of the amazing time I had in Seattle, and a symbol of pride. The badge holds the name “Kris,” which may not mean a lot to anyone else, but to me represents success. Success in my career with Kubernetes, success of my love of learning software, and success of my gender transition from male to female. The badge is hopefully the first of many with my new name on it, and the first of many Cloud Native conferences to come.

So I guess maybe I am diverse after all. I love Kubernetes for so many reasons. After the conference, I think one of the main reasons I love the community is that I am just another committer to the code base. To be honest, I am so grateful that I can fit right in. I just want to write code and be treated like everyone else. Thanks to the Kubernetes community for the gift of being pleasantly accepted as a software engineer despite being a bit of a black sheep. It’s all I could ever ask for.

The Cloud Native Computing Foundation is offering diversity scholarships at both its European and North America shows in 2017. To apply, please visit here for Europe and here for North America.
