Dan Kohn

What Image Formats Should You Be Using in 2019?

Here are some succinct guidelines on which image formats to use on the web, in email, and in print, based on CNCF’s experiences:

SVG: Use for logos

SVGs are the preferred image format for logos as they’re resolution-independent (that is, they look good no matter how high-resolution your screen is), lightweight (that is, their file size is smaller than other formats), and can be easily converted into PNGs and into print formats (like PDF and EPS). All of the logos in the interactive landscape are SVGs (following our logo guidelines). SVGs are now natively supported in PowerPoint but importing SVGs into Google Slides is much more tedious than it should be, so you may want to substitute a PNG, but ensure it’s high resolution.
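Part of what makes SVG resolution-independent is that it's just XML describing shapes, which a renderer can rasterize at any size. A minimal, hypothetical logo file (not an actual CNCF asset) looks like this:

```xml
<!-- minimal-logo.svg: a hypothetical 100x100 logo -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="40" fill="#326ce5"/>
  <text x="50" y="58" text-anchor="middle" fill="white" font-size="20">CN</text>
</svg>
```

Because the viewBox defines abstract coordinates rather than pixels, the same file renders crisply at 16 pixels or 1,600 pixels.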

JPG: Use for photos

Though lossy and not resolution-independent, JPGs are the preferred format for photos but aren’t much good for anything else. (Lossy means that text and illustrations can look blurry and so it’s not suitable for logos. Not resolution-independent means that photos will look blurry if you blow them up more than their original resolution.)  When you create a JPG you can set the degree of compression. When preparing an image for the web, make the compression as high as possible to minimize the file size but not so much that the photo starts looking “blocky”.
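The quality/size trade-off is easy to see programmatically. Here's a small sketch using the Pillow library (assuming it's installed; the image below is synthetic, not a real photo):

```python
import io
import random
from PIL import Image

# Build a synthetic "photo": a gradient with random noise sprinkled in.
img = Image.new("RGB", (256, 256))
pixels = img.load()
rnd = random.Random(42)
for y in range(256):
    for x in range(256):
        noise = rnd.randint(-20, 20)
        pixels[x, y] = (max(0, min(255, x + noise)),
                        max(0, min(255, y + noise)), 128)

def jpeg_bytes(image, quality):
    """Encode the image as a JPEG at the given quality and return its size."""
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)
    return len(buf.getvalue())

high_q = jpeg_bytes(img, 85)  # light compression: bigger file, cleaner image
low_q = jpeg_bytes(img, 20)   # heavy compression: smaller file, blockier image
print(high_q, low_q)
```

In practice, you'd step the quality down until artifacts start to appear, then back off a notch.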

PNG: Use for logos and diagrams when you can’t use an SVG

Minimize your use of PNGs. PNG is not a resolution-independent format, so PNGs often look blurry on high-resolution screens like those on Macs and iPhones. They are useful, however, in the following situations:

  • Gmail doesn’t support SVGs, so PNGs are the best choice for logos in email
  • On webpages, use PNGs for the Twitter card preview image, since Twitter doesn’t support SVGs
  • Use when embedding large, complex drawings into webpages such as the trail map or landscape
  • PNGs can include transparency so use on webpages when you need transparency and there’s no SVG option
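The transparency point is easy to verify: PNG round-trips an alpha channel, while JPEG simply has nowhere to put one. A quick sketch using Pillow (assuming it's installed):

```python
import io
from PIL import Image

# A half-transparent red square.
img = Image.new("RGBA", (64, 64), (255, 0, 0, 128))

# PNG preserves the alpha channel through a save/load round trip.
buf = io.BytesIO()
img.save(buf, format="PNG")
buf.seek(0)
reloaded = Image.open(buf)
print(reloaded.mode)               # RGBA
print(reloaded.getpixel((0, 0)))   # (255, 0, 0, 128)

# JPEG has no alpha channel; Pillow refuses to save RGBA directly.
try:
    img.save(io.BytesIO(), format="JPEG")
    jpeg_kept_alpha = True
except OSError:
    jpeg_kept_alpha = False
print("JPEG keeps alpha:", jpeg_kept_alpha)
```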

PDF: Use for print and ready-to-print brochures

PDF, like SVG, is a resolution-independent format that is suitable for printing or displaying on high-resolution screens. It’s intended as an output format, however, and is not easily usable as an input to webpages or other media. Also, embedding a PNG or JPG into a PDF will still look blurry at high-resolution. Instead, start with a resolution-independent format (such as SVGs or original designs from Adobe Illustrator) to produce PDFs that look good zoomed in or printed commercially.

EPS & AI: Don’t use

These are often used as the original format in which a design is created and/or sent to print. If you get a logo in one of these formats, convert it to an SVG.

CNCF Openness Guidelines

CNCF is an open source technical community where technical project collaboration, discussions, and decision-making should be open and transparent. Please see our charter, particularly section 3(b), for more background on CNCF values.

Design, discussions, and decision-making around technical topics of CNCF-hosted projects should occur in public view such as via GitHub issues and pull requests, public Google Docs, public mailing lists, conference calls at which anyone may participate (and which are normally published afterward on YouTube), and in-person meetings at KubeCon + CloudNativeCon and similar events. This includes all SIGs, working groups, and other forums where portions of the community meet.

This is particularly important in light of the Linux Foundation’s (revised) Statement on the Huawei Entity List Ruling. (Note that CNCF is part of the Linux Foundation.) Our technical community operates openly and in public, which affords us exceptions to regulations that closed organizations may have to address differently. This open, public technical collaboration is also critical to our community’s success as we navigate competitive and shifting industry dynamics. Openness is particularly important in any discussions involving encryption, since encryption technologies can be subject to Export Administration Regulations.

If you have questions or concerns about these guidelines, I encourage you to discuss them with your company’s legal counsel and/or to email me and Chris Aniszczyk. Thank you.

CNCF Joins Call for Code

The CNCF, the Linux Foundation, and the entire open source movement would be nothing without community. The ability to come together and foster an ecosystem, develop new, ground-breaking technology, and accelerate innovation is truly inspiring. It’s a testament to our members’ commitment and desire to look beyond their immediate needs and consider what is best for the community at large (all while balancing the needs and responsibilities of their respective companies).

It is with this same spirit of togetherness and community that CNCF lends its support to Call for Code.

There is so much more we can do than develop platforms for running business applications or helping businesses to grow. Developers have the power to save lives. Call for Code challenges developers to create sustainable software solutions that address natural disasters. Developers can use technology to tackle complex global problems that impact societies around the world. Call for Code aims to harness the energy, creativity, and collaborative aspects of our work in open source, proving that we can develop solutions to humanity’s greatest problems.

CNCF joins IBM, David Clark Cause, the Linux Foundation, United Nations Human Rights Office and The American Red Cross in making Call for Code a reality. The contest, which awards $200,000 to the winner, is an opportunity to rally developers around a common cause and have a lasting impact. Last year was one of the worst years on record for natural disasters, and the right technology can mitigate the loss of life and property damage.

I encourage you to register for the challenge, but there are a few more ways that you can get involved immediately:

    1. Commit to the cause: Share the message with your followers, your company, and your fellow developers, and express your support via social media. You can start by retweeting one of our recent posts.

    2. Push for change: Interested in bringing Call for Code to your company in a more formal way? Run a Call for Code Day at your company.
    3. Answer the call: Form a team, join a team, or build a solution solo. Register for Call for Code and start building today.

Since 2000, natural disasters have directly affected 2.5 billion people, with $1.5 trillion in economic impact since 2003. And over the last 30 years, flooding is up over 240%. As developers, we can help people be more prepared, help them during a natural disaster, and help them recover afterward. We can make communities more resilient together.

Call for Code judges include iconic developers like Linus Torvalds, plus leaders from the United Nations Human Rights Office and the National Center for Disaster Preparedness. The winning team and two semifinalists will receive support from The Linux Foundation to host their submission as an open source project and build a community around it, ensuring that it is deployable around the world in the areas of greatest need.

Each one of us wields great power as a developer, but together, we’re even stronger. I encourage you to register today and show the world that developers—and the technology they create—can save lives.

How to Get Your KubeCon + CloudNativeCon Talk Accepted

By Dan Kohn, CNCF executive director

This post is now out-of-date. Please follow the advice from this newer post instead.


Earlier this month, KubeCon + CloudNativeCon held its largest-ever event in Copenhagen, with 4,300 attendees. This week, we opened the call for proposals (CFP) for both Shanghai (Nov. 14-15) and Seattle (Dec. 11-13). Here’s a look at attendance for the six events to date:

For Copenhagen, we had 1,087 talk submissions. We were able to accept 157 regular sessions, 16 lightning talks and 18 keynotes. That’s an acceptance rate of 17.6%, which is unfortunate, because the vast majority of talk submitters spend real effort on their proposals and there is far more high-quality content than we have room to accept.

So, how do we choose the talks and what can you do to increase your chances of being accepted?

First, it’s helpful to understand the different categories of talks and how they’re selected. The key principle is that we want talk selection to be meritocratic and vendor-neutral, while also ensuring that new voices have a chance to be heard.

The sessions at the conference come from two sources. The first category consists of the 157 regular sessions that were submitted via the CFP and are held in 9 parallel tracks.

The other category of session comes from encouraging each project to offer an Intro and/or Deep Dive session. Intros are for helping to bring new people into an activity/community and Deep Dives are to help these efforts move forward. For Copenhagen, we expanded and formalized this practice that had started at earlier events. Specifically, we offer these sessions to each of the (currently) 22 CNCF projects, the 30 Kubernetes SIGs, and the 4 CNCF working groups. For Copenhagen, we had 44 Intro and 48 Deep Dive sessions. If you are a project, SIG or working group leader, this is a great opportunity to reach new potential participants and engage current ones, so please be on the lookout for our email signup request. We’ll be setting aside 5 tracks for this content and working with the submitters to fit in as many as possible.

The regular conference sessions have 9 tracks and are submitted through the CFP process. The conference co-chairs for Shanghai and Seattle are Liz Rice of Aqua Security and Janet Kuo of Google. They are in the process of selecting a program committee of around 60 experts, which includes project maintainers, active community members, and highly-rated presenters from past events. Program committee members register for the topic areas they’re comfortable covering, and CNCF staff randomly assign a subset of relevant talks to each member. We then collate all of the reviews and the conference co-chairs spend a very challenging week assembling a coherent set of topic tracks and keynotes from the highest-rated talks.

We hold lightning talks the night before the event, and keynotes each morning as well as on the evening of the first day. You can propose a session specifically as a lightning talk, and we normally select the keynotes from the most notable of the regular session submissions.

So, how can you improve the odds of getting your talk selected?

  • Avoid the common pitfall of submitting a sales or marketing pitch for your product or service, no matter how compelling it is.
  • Focus on your work with an open source project, whether it is one of the CNCF’s 22 hosted projects or a new project that adds value to the cloud native ecosystem.
  • KubeCon + CloudNativeCon is fundamentally a community conference focusing on the development of cloud native open source projects. So, pick your presenter and target audience accordingly. Our participants range from the top experts to total beginners, so we explicitly ask what level of technical difficulty your talk is targeted for (beginner, intermediate, advanced, or any) and aim to provide a range.
  • We often get many submissions covering almost the same topic, so even if there are several great submissions, we’re probably only going to pick one. Consider choosing a distinctive topic that is relevant but less likely to be submitted by multiple people.
  • Given that talk recordings are available on YouTube, and there is limited space on the agenda, we are unlikely to select a submission that has already been presented at a previous KubeCon + CloudNativeCon. If your submission is very similar to a previous talk, please include information on how this version will be different.

We will be working to only accept a single talk from each speaker. To avoid diluting the votes from program committee members, please limit yourself to submitting your best idea, or at most two. We’re eager to feature end user stories so, if appropriate, consider submitting with a customer as your co-presenter who can share their perspective, and mark the submission as an end-user case study.

Look through the talks that were selected for Copenhagen and notice that most have clear, compelling titles and descriptions. The CFP form has a section for including resources that will help reviewers assess your submission. If you have given a talk before that was recorded, please include a link to it. Blog posts, code repos, and other contributions can also help establish your credentials, especially if this will be your first public talk (and we encourage first-time speakers to apply).

Finally, we are explicitly interested in increasing the voice of those who have been traditionally underrepresented in tech. For example, we don’t accept panel proposals unless they include at least one female speaker. While all submissions will be reviewed on merit, we are dedicated to having a diverse and inclusive conference and will actively take that into account when finalizing the list of speakers and overall schedule.

I hope this overview was useful, and that you will consider submitting a talk. The deadline for Shanghai is Friday, July 6 and for Seattle is Sunday, August 12. Note that if you get your proposal together by July 6, you can submit your proposed talk to both Shanghai and Seattle through a single form with no extra work.

The 30 Highest Velocity Open Source Projects

Open Source projects exhibit natural increasing returns to scale. That’s because most developers are interested in using and participating in the largest projects, and the projects with the most developers are more likely to quickly fix bugs, add features and work reliably across the largest number of platforms. So, tracking the projects with the highest developer velocity can help illuminate promising areas in which to get involved, and what are likely to be the successful platforms over the next several years. (If the embedded version below isn’t clear enough, you can view the chart directly on Google Sheets.)

As a follow-on to my previous look at Measuring the Popularity of Kubernetes Using BigQuery, I’ve been working with developer Łukasz Gryglicki to visualize the 30 highest velocity open source projects. Rather than debate whether to measure them via commits, authors, or comments and pull requests, we use a bubble chart to show all 3 axes of data, and plot on a log-log chart to show the data across large scales. In the graph, the bubbles’ area is proportional to the number of authors, the y-axis (height) is the total number of pull requests & issues, and the x-axis is the number of commits.
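As a rough sketch of how such a bubble chart can be constructed (the numbers below are placeholders rather than the actual dataset, and this assumes matplotlib is installed):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display required
import matplotlib.pyplot as plt

# Placeholder velocity data: (commits, PRs + issues, authors) per project.
projects = {
    "projectA": (9000, 40000, 900),
    "projectB": (7000, 20000, 600),
    "projectC": (3000, 15000, 450),
}

fig, ax = plt.subplots()
for name, (commits, activity, authors) in projects.items():
    # Bubble area is proportional to the number of authors.
    ax.scatter(commits, activity, s=authors, alpha=0.5)
    ax.annotate(name, (commits, activity))

# Log-log axes keep projects of very different scales on one readable chart.
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("commits")
ax.set_ylabel("pull requests + issues")
fig.savefig("velocity.png")
```

The log-log scaling is the key design choice: on linear axes, the largest projects would push everything else into the bottom-left corner.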

There are many stories in the data but these are a few of my takeaways:

  • The highest velocity application frameworks are .NET, Node.js and Ruby on Rails.
  • For front-end software, React, Angular and Vue.js all have a presence.
  • For automation, Ansible, Terraform and Chef are included.
  • Kubernetes is dealing with 2 to 3 times the issues and pull requests of other high velocity projects like React and Homebrew. The project (which is hosted by CNCF) has been using and investing in tools like mungegithub and prow to scale better on GitHub, but it’s not surprising that keeping up is challenging. Of the two higher velocity projects, Chromium uses its own bug tracker and Linux uses the Linux Kernel Mailing List.

All of the scripts used to generate this data are available in an open source repository (under an Apache 2.0 license). If you see any errors, please open an issue there. What’s your biggest takeaway? Please join the discussion on Hacker News and let us know.

Measuring the Popularity of Kubernetes Using BigQuery

By Dan Kohn, CNCF Executive Director, @dankohn1

Kubernetes Logo

As the executive director of CNCF, I’m proud to host Kubernetes, which is one of the highest development velocity projects in the history of open source. I know this because I can do a web search and see… quite a few people being quoted saying that, but does the data support this claim?

This blog post works through the process of investigating that question. CNCF licenses a dashboard from Bitergia, but it’s more useful for project trends over time than comparing to other open source projects. Project velocity matters because developers, enterprises and startups are more interested in working with a technology that others are adopting, so that they can leverage the investments of their peers. So, how does Kubernetes compare to the other 53 million GitHub repos?

By way of excellent blog posts from Felipe Hoffa and Jess Frazelle (the latter a Kubernetes contributor and speaker at our upcoming CloudNativeCon/KubeCon Berlin), I got started on using BigQuery to analyze the public GitHub data set. You can re-run any of the gists below by creating a free BigQuery account. All of the data below is for 2016, though you can easily run against different time periods.

My first attempt found that the project with the highest commit rate on GitHub is… KenanSulayman/heartbeat, a repo with 9 stars which appears to be an hourly update from a Tor exit node. Well, that’s kind of a cool use of GitHub, but not really what I’m looking for. I learned from Krihelinator (a thoughtful though arbitrary new metric that currently ranks Kubernetes #4, right in front of Linux), that some people use GitHub as a backup service. So, rerunning with a filter of more than 10 contributors puts Kubernetes at #29 based on its 8,703 commits. For reference, that’s almost exactly one commit an hour, around the clock, for the entire year.
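The "more than 10 contributors" filter translates naturally into SQL. As a hypothetical sketch of that kind of query (the table name and the push-event proxy for commits are assumptions here, not the exact query from this post):

```python
# A sketch of a BigQuery query over the public GitHub Archive dataset.
# PushEvents are only a rough proxy for commits, and the table name is
# an assumption, not the exact query used in this analysis.
QUERY = """
SELECT repo.name AS repo,
       COUNT(*) AS push_events,
       COUNT(DISTINCT actor.login) AS contributors
FROM `githubarchive.year.2016`
WHERE type = 'PushEvent'
GROUP BY repo
HAVING contributors > 10
ORDER BY push_events DESC
LIMIT 30
"""

# Running it requires a (free) BigQuery account, e.g.:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(QUERY).result()
print(QUERY.strip().splitlines()[0])
```

The HAVING clause is what screens out single-user backup repos like the Tor heartbeat project mentioned above.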

That metric also leaves off torvalds/linux, because the kernel’s git tree is mirrored to GitHub, but that mirroring does not generate GitHub events that are stored in that data set. Instead, there is a separate BigQuery data set that just measures commits. When I run a query to show the projects with the most commits, I unhelpfully get dozens of forks of Linux and also many forks of a git learning tool. Here is a better query that manually checks for committers, authors, and commits of 8 popular projects, and shows Kubernetes as #2, with about 1/5th the authors and commits of Linux.[1]

To see how many unique committers Kubernetes had in 2016, I used this query, which showed that there were… 59, because Kubernetes uses a GitHub robot to do the vast majority of the actual commits. The correct query requires looking inside the commits at the actual authors, and when ranked by unique authors, Kubernetes comes in at #10 with 868.

Updating Hoffa’s query about issues opened to include data for all of 2016 (while still ignoring robot comments), Kubernetes remains #1 with 42,703, with comments from 3,077 different developers. Frazelle’s analysis of pull requests (updated for all of 2016 and to require more than 10 contributors to avoid backup projects) now shows Kubernetes at #2 with 10,909, just behind a Java intranet portal. (Rather than GitHub issues and pull requests, Linux uses its own email-based workflow described in a talk last year by stable kernel maintainer Greg Kroah-Hartman, so it doesn’t show up in these comparisons.)

Kubernetes 2016 Rankings

Measure Ranking
Krihelinator 4
Commits 29
Authors 10
Issue Comments 1
Pull Requests 2

In conclusion, I’m not sure that any of these metrics represents the definitive one. You can pick your preferred statistic, such as that Kubernetes is in the top 0.00006% of the projects on GitHub. I prefer to just think of it as one of the fastest moving projects in the history of open source.

What’s your preferred metric(s)? Please let me know at @dankohn1 or in the Hacker News comments, and I’m happy to provide t-shirts in exchange for cool visualizations.

[1] OpenHub incorrectly showed more than 3x as many authors and 5x the commits for Linux in 2016 as the BigQuery data set. I confirmed this is an error with Linux stable kernel maintainer Greg Kroah-Hartman (who checked the actual git results) and reported it to OpenHub. They’ve since fixed the bug.

Why CNCF Recommends Apache-2.0

By Dan Kohn, @dankohn1, CNCF Executive Director

February 1, 2017

The Cloud Native Computing Foundation (CNCF) believes that the best software license for open source projects today is the Apache-2.0 license (Apache-2.0). Our goal is to enable the greatest possible adoption of our projects by developers and users. Our larger goal with CNCF (and with the Linux Foundation, of which we are part) is to create an intellectual property “no-fly zone”, where contributors and users can come together from any company or from no company, collaborate, and build things together better than any of them could do on their own.

We think that permissive software licenses foster the best ecosystem of commercial and noncommercial uses by enabling the widest possible use cases. A report this month from Redmonk shows the increasing popularity of these permissive licenses. Proponents of copyleft licenses have argued that these licenses prevent companies from exploiting open source projects by building proprietary products on top of them. Instead, we have found that successful projects can help companies’ products be successful and that the resulting profits can be fed back into those projects by having the companies employ many of the key developers, creating a positive feedback loop.

In addition, Apache-2.0 provides protection against a company intentionally or unintentionally contributing code that might read on their patents, by including a patent license. We believe that this patent protection removes another possible barrier to adoption and collaboration. Our view is that having all CNCF projects under the same license makes it easier for companies to be comfortable using and contributing, as their developers (and those developers’ attorneys) do not need to review a lot of licenses.

Of course, many CNCF projects also rely on libraries released under other open source licenses. For example, Linux underlies the entire cloud native platform and git is the software development technology of choice for all of our projects. Both are licensed under GPLv2 (and both were originally authored by Linux Foundation Fellow Linus Torvalds). The CNCF projects themselves are currently mainly written using the open source programming languages Go (BSD-3), Ruby (BSD-2) and Scala (BSD-3).

Let’s now look at the CNCF policy for projects. For an Apache-2.0-licensed project to be accepted into the CNCF, it requires a supermajority vote of our Technical Oversight Committee (TOC). For a project under any other license, it would require both a supermajority TOC vote and a majority vote by our Governing Board. While this may occur in the future, our strong preference is to work with prospective projects to relicense under the Apache-2.0. Let’s look at two example projects to see how this can work.

We’re currently having conversations with gRPC, which is licensed under the BSD-3 license plus a patent grant. When combined with the patent license in Google’s Contributor License Agreement (CLA), this combination of BSD-3 + patent grant + CLA is quite similar to Apache-2.0, in that it combines a permissive copyright license with patent protections. However, Apache-2.0 is a better-known and more familiar license, so it accomplishes similar goals while likely requiring less legal review from new potential gRPC users and contributors.

Separately, we’ve also been talking with GitLab, which uses the same MIT license as its underlying framework, Ruby on Rails. Although it’s natural to go with the same license as Rails and Ruby, we are working with the GitLab team to investigate whether it would be feasible to relicense some or all of their codebase to Apache-2.0. The main advantage of doing so would be the additional patent protections, so that companies could be confident in their ability to contribute to and use the software without later being accused of violating the patents of other contributors.

In closing, we’d like to acknowledge the debt of gratitude we owe for the work done by the authors of these licenses, especially the Apache Software Foundation, and of course all the developers who write the software that makes these licenses useful.