All Posts By Kaitlyn Barnard

Certified Kubernetes Administrator (CKA) Certification is Now Valid for 3 Years


In 2017, CNCF launched the Certified Kubernetes Administrator (CKA) exam which has become one of the most popular Linux Foundation certifications to date. Over 9,000 individuals have registered for the exam and over 1,700 have achieved the certification.

When the exam was originally released, the certification was valid for 2 years in anticipation of a major curriculum update happening at that time. However, given the current development and release cycle of Kubernetes, we are now planning for this larger curriculum update in 2020.   

This means that CKA certifications awarded on or after September 2, 2017 will expire 36 months from the date the Program Certification Requirements were met by the candidate (rather than the 24 months stated previously). The current curriculum still ensures that existing CKAs have the skills, knowledge, and competency relevant to the responsibilities of a Kubernetes administrator. When we revise the exam, we will announce it here, and the changes will be reflected in the open-source curriculum. This extra time will allow CKAs to prepare and practice any new skills before their certification expires.

This also means that Kubernetes Certified Service Providers (KCSPs) will need to revisit their certifications in 2020 to maintain their status. A reminder will be sent to all partners up for renewal as those dates approach.

KCSPs are vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes and a minimum of three Certified Kubernetes Administrators in-house. With 90 service providers in the program to date, these partners have helped a variety of organizations with top-tier professional services to support Kubernetes deployments.

This does not impact the Certified Kubernetes Application Developer (CKAD) exam which launched in May 2018. A decision about that exam will be made at a later date depending on how much the curriculum is anticipated to change over the next two years.

Interested in taking the Certified Kubernetes Administrator (CKA) exam? CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster. The Kubernetes Fundamentals course maps directly to the requirements for the Certified Kubernetes Administrator exam. You can also choose from one of 22 Kubernetes Training Partners (KTP), a tier of vetted training providers with deep experience in cloud native technology training.

CNCF Survey: Cloud Usage in Asia Has Grown 135% Since March 2018


The bi-annual CNCF survey takes a pulse of the community to better understand the adoption of cloud native technologies. This is the second time CNCF has conducted its cloud native survey in Mandarin to better gauge how Asian companies are adopting open source and cloud native technologies. The previous Mandarin survey was published in March 2018. This post also makes comparisons to the most recent North American / European version of this survey from August 2018.

Key Takeaways

  • Usage of public and private clouds in Asia has grown 135% since March 2018, while on-premise has dropped 48%.
  • Usage of nearly all container management tools in Asia has grown, with commercial off-the-shelf solutions up 58% overall, and home-grown solutions up 690%. Kubernetes has grown 11%.
  • The number of Kubernetes clusters in production is increasing. Organizations in Asia running 1-5 production clusters decreased 37%, while respondents running 11-50 clusters increased 154%.
  • Use of serverless technology in Asia has spiked 100% with 29% of respondents using installable software and 21% using a hosted platform.

300 people responded to the Chinese version with 83% being from Asia, compared to 187 respondents from our March 2018 survey.

CNCF has a total of 42 members across China, Japan, and South Korea, including 4 platinum members: Alibaba Cloud, Fujitsu, Huawei, and JD.com. A number of these members are also end users.

Growth of Containers

Container usage is becoming prevalent in all phases of the development cycle. There has been a significant jump in the use of containers for testing, up to 42% from 24% in March 2018 with an additional 27% of respondents citing future plans. There has also been an increase in use of containers for Proof of Concept (14% up from 8%).

As the usage of containers becomes more prevalent across all phases of development, the use of container management tools is growing. Since March 2018, there has been a significant jump in the usage of nearly all container management tools.

Usage of Kubernetes has grown 11% since March 2018. Other tools have also grown:

  • Amazon ECS: up to 22% from 13%
  • CAPS: up to 13% from 1%
  • Cloud Foundry: up to 20% from 1%
  • Docker Swarm: up to 27% from 16%
  • Shell Scripts: up to 14% from 5%

There are also two new tools that were not cited in the March 2018 survey. 16% of respondents are using Mesos and an additional 8% are using Nomad for container management.

Commercial off-the-shelf solutions (Kubernetes, Docker Swarm, Mesos, etc.) have grown 58% overall, while home-grown management (Shell Scripts and CAPS) have grown 690%, showing that home-grown solutions are still widely popular in Asia while North American and European markets moved away from those in favor of COTS solutions.

Cloud vs. On-Premise

While on-premise solutions are widely used in the North American and European markets (64%), that number seems to be declining for the Asian market. Only 31% of respondents reported using on-premise solutions in this survey, compared to 60% in March 2018. Cloud usage is growing with 43% of respondents using private clouds (up from 24%) and 51% using public clouds (up from 16%).


As for where Kubernetes is being run, Alibaba still remains No. 1 with 38% of respondents reporting usage, down from 52% in March 2018. Following Alibaba is Amazon Web Services (AWS), with 24% of respondents citing usage, slightly down from 26%.

New environments that were not previously reported and are taking up additional market share are Huawei Cloud (13%), VMware (6%), Baidu Cloud (21%), Tencent Cloud (7%), IBM Cloud (8%), and Packet (5%).

The decline of on-premise usage is also evident in these responses, with 24% of respondents reporting that they run Kubernetes on-prem compared to 38% in March 2018. OpenStack usage has also declined significantly, down to 9% from 26% in March 2018.

For organizations running Kubernetes, the number of production clusters is also increasing. Respondents running 1-5 production clusters decreased 37%, while respondents running 11-50 clusters increased 154%. Still, respondents are mostly running 6-10 production clusters, with 29% reporting that number.

We also asked respondents about the tools they are using to manage various aspects of their applications:


The most popular method of packaging Kubernetes applications is managed Kubernetes offerings (37%), followed by ksonnet (27%) and Helm (24%).


Respondents are primarily using autoscaling for Task and Queue processing applications (44%) and Java Applications (44%). This is followed by stateless applications (33%) and stateful databases (29%).  

The top reasons respondents aren’t using Kubernetes autoscaling capabilities are because they are using a third party autoscaling solution (32%), were not aware these capabilities existed (30%), or have built their own solution to autoscale (26%).
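For teams that do want Kubernetes' built-in autoscaling, a minimal sketch of a HorizontalPodAutoscaler is shown below. The deployment name and thresholds are illustrative, and the `autoscaling/v1` API shown here only supports CPU-based scaling:

```yaml
# Scales the "web" Deployment between 2 and 10 replicas,
# targeting an average CPU utilization of 70% across pods.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web        # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```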

Ingress Providers

The top Kubernetes ingress providers reported are F5 (36%), nginx (34%), and GCP (22%).

Exposing Cluster External Services

The most popular ways to expose Cluster External Services were Load-Balancer Services (43%), integration with a third party load-balancer (37%), and L7 Ingress (35%).
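As a concrete sketch of the most popular option, a load-balancer Service asks the cloud provider to provision an external load balancer in front of matching pods (the name, selector, and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb        # illustrative name
spec:
  type: LoadBalancer  # provider provisions an external load balancer
  selector:
    app: web          # routes to pods carrying this label
  ports:
  - port: 80          # externally exposed port
    targetPort: 8080  # port the container listens on
```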

Separating Kubernetes in an Organization with Multiple Teams

Respondents are separating multiple teams within their organization using namespaces (49%), separate clusters (42%), and only labels (34%).

Separating Kubernetes Applications

Respondents are primarily separating their Kubernetes applications using namespaces (46%), separate clusters (45%), and only labels (33%).
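To sketch the namespace approach, a namespace per team or application gives each its own scope for names, quotas, and access policy (the team name below is invented for illustration):

```yaml
# One namespace per team isolates resource names and lets
# quotas and RBAC rules be applied per team.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments   # illustrative team name
```

Workloads are then created with `--namespace team-payments`, whereas the labels-only approach keeps everything in shared namespaces and relies on selectors such as `kubectl get pods -l team=payments`.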

Cloud Native Projects

What are the benefits of cloud native projects in production? Respondents cited the top four reasons as:

  • Improved Availability (47%)
  • Improved Scalability (46%)
  • Cloud Portability (45%)
  • Improved Developer Productivity (45%)

Compared to the North American and European markets, improved availability and developer productivity are more important in the Asian market, while faster deployment time is less important (only 38% cited this compared to 50% in the English version of this survey).

As for the cloud native projects that are being used in production and evaluated:

Many cloud native projects have grown in production usage since March 2018. Projects with the largest spike in production usage are: gRPC (22% up from 13%), Fluentd (11% up from 7%), Linkerd (11% up from 7%), and OpenTracing (27% up from 20%).

The number of respondents evaluating cloud native projects also grew with: gRPC (20% up from 11%), OpenTracing (27% up from 18%), and Zipkin (12% up from 9%).

Challenges Ahead

As cloud native technologies continue to be adopted, especially into production, there are still challenges to address. The top challenges respondents are facing are:

  • Lack of training (46%)
  • Difficulty in choosing an orchestration solution (30%)
  • Complexity (28%)
  • Finding vendor support (28%)
  • Monitoring (25%)

One interesting note is that many of the challenges have significantly declined since our previous survey in March 2018 as more resources are added to address these concerns. A new challenge that has come up is lack of training. While CNCF has invested heavily in Kubernetes training over the past year, including courses and certification for Kubernetes Administrators and Application Developers, we are still actively working to make translated versions of the courses and certifications available and more easily accessible in Asia. CNCF is also working with a global network of Kubernetes Training Partners to expand these resources, as well as Kubernetes Certified Service Providers to help support organizations with the complexity of embarking on their cloud native journey.  


Growth of Serverless

The use of serverless technology has spiked, with 50% of organizations using the technology compared to 25% in March 2018. Of that 50%, 29% are using installable software and 21% are using a hosted platform. An additional 17% plan to use the technology within the next 12-18 months.

For installable serverless platforms, Apache OpenWhisk is the most popular with 11% of respondents citing usage. This is followed by Dispatch (6%), Fn (5%), and OpenFaaS, Kubeless, and Fission tied at 4%.

For hosted serverless platforms, AWS Lambda is the most popular with 11% of respondents citing usage. This is followed by Azure Functions (8%), and Alibaba Cloud Compute Functions, Google Cloud Functions, and Cloudflare Functions tied at 7%.

Serverless usage in Asia is higher than what we saw in North American and European markets where 38% of organizations were using serverless technology. Hosted platforms (32%) were also much more popular compared to installable software (6%), whereas in Asia both options are more evenly used. There is also much more variety in the solutions used, whereas AWS Lambda and Kubeless were the clear leaders in North America and Europe.

Relating back to CNCF projects, a small percentage of respondents are now evaluating (3%) or using CloudEvents in production (2%). CloudEvents is an effort organized by CNCF’s Serverless Working Group to create a specification for describing event data in a common way.
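As a sketch of what the specification describes, a CloudEvents message serialized as JSON might look like the following. The attribute names follow the finalized 1.0 version of the specification (earlier drafts named some attributes differently), and the event type, source, and payload are invented for illustration:

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/orders",
  "id": "a2f4b1c8-0001",
  "time": "2018-11-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": { "orderId": 1234 }
}
```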

Cloud Native is Growing in China

As cloud native continues to grow in China, the methods for learning about these technologies become increasingly important. Here are the top ways respondents are learning about cloud native technologies:



Documentation

50% of respondents are learning through documentation. Each CNCF project hosts extensive documentation on its website, which can be found here. Kubernetes, in particular, is currently working on translating its documentation and website into multiple languages, including Japanese, Korean, Norwegian, and Chinese.


Technical Podcasts

48% of respondents are learning through technical podcasts. There are a variety of cloud native podcasts out there to learn from such as the Kubernetes Podcast from Google and PodCTL by Red Hat. Also, check out this list of 10 China tech podcasts you should follow.


Technical Webinars

29% of respondents are learning through technical webinars. CNCF runs a weekly webinar series that takes place every Tuesday from 10am-11am PT. You can see the upcoming schedule and view recordings and slides of previous webinars.


Business Case Studies

27% of respondents are learning through business case studies. CNCF has a collection of end user case studies while Kubernetes also maintains an extensive list of Kubernetes-specific user case studies.


The Cloud Native Community in China

As cloud native continues to grow in Asia, CNCF is excited to be hosting the first annual KubeCon + CloudNativeCon in Shanghai this week. With over 1,500 attendees at the inaugural event, we look forward to seeing the continued growth of cloud native technologies at a global scale.

To keep up with the latest news and projects, join us at one of the 22 cloud native Meetups across Asia. We hope to see you at one of our upcoming Meetups!  

A huge thank you to everyone who participated in our survey!

You can also view the findings from past surveys here:

About the Survey Methodology & Respondents

The pool of respondents represented a variety of company sizes, with the majority being in the 50-499 employee range (48%). As for job function, respondents identified mostly as Developers (22%), Development Managers (15%), and IT Managers (12%).

Respondents represented 31 different industries, the largest being software (13%) and financial services (11%).

This survey was conducted in Mandarin. You can view additional demographics breakdowns below:







CNCF Survey: Use of Cloud Native Technologies in Production Has Grown Over 200%


The bi-annual CNCF survey takes a pulse of the community to better understand the adoption of cloud native technologies. This is the sixth time CNCF has taken the temperature of the container management marketplace.

Key Takeaways

  • Production usage of CNCF projects has grown more than 200% on average since December 2017, and evaluation has jumped 372%.
  • The use of serverless technology continues to grow, up 22% since December 2017 with the majority of respondents using hosted platforms such as AWS Lambda (70%).
  • The top three benefits of cloud native technology are faster deployment time, improved scalability, and cloud portability.
  • 40% of respondents from enterprise companies (5000+) are running Kubernetes in production.

About the Survey Methodology & Respondents

This was the most responses we've received to date, with 2,400 people taking part in the survey, primarily from North America (40%) and Europe (36%) in Developer or IT-related roles:

  • Developer: 49%
  • Operations: 36%
  • IT Manager: 11%
  • Development Manager: 14%

The majority of respondents are from companies with more than 5,000 employees, skewing the results of this survey toward usage of CNCF technologies in the enterprise. The top industries are technology (22%), software (22%), financial services (9%), and telecommunications (8%).

This survey was conducted in English, and we have a Chinese version currently underway, the results of which will be available later in the year. You can view additional demographics breakdowns below: 

The Changing Landscape of Application Development

In this most recent version of the survey, we’ve added additional questions on releases to learn more about how companies are managing their software development cycles. One of the benefits of microservices architectures is the ability for flexible deployments, allowing companies to cut releases as often as they need. Prior to microservices, typical release cycles happened much less often, typically once or twice a year. Responses highlighted this with the breakdown of respondents’ release cycles fairly evenly spread out:

  • Weekly (20%)
  • Monthly (18%)
  • Daily (15%)
  • Adhoc (14%)

What are your release cycles?

The majority of these releases are automated (42%), with 25% of respondents using a hybrid release method and 27% doing manual releases. As automated releases grow, so does the popularity of tools to manage CI/CD pipelines with Jenkins as the leading tool (70%) followed by Terraform (27%) and Custom Scripts (26%).

Are release cycles manual or automated?

In addition, 67% of respondents are checking in code multiple times per day compared to 28% checking in code a few times a week and 6% a few times per month.

As for the number of machines (including VMs, bare metal, etc.) in fleet, we're starting to see a slight increase at the high end: 5,000+ machines at 17% (up from 14% in our December 2017 survey), 6-20 (16%, down from 18%), 21-50 (14%), and 51-100 (11%).

On average, how many machines are in your fleet?

What Cloud?

We’re continuing to see companies use a mix of on premise (64%), private cloud (50%), and public cloud (77%) solutions.

Which of the following data center types does your company/organization use?

As for containers, the majority of companies are deploying to AWS (63% down from 69%), followed by on premise servers (43% down from 51%), Google Cloud Platform (35% down from 39%), Microsoft Azure (29% up from 16%), VMware (24%), and OpenStack (20% down from 22%).

Your company/organization deploys containers to which of the following environments?

These numbers continue to be in line with the trends we've seen over the past year, with two notable changes. On-premise use is down from 51% in December 2017 to 43%, most likely due to growing use of private clouds. Secondly, this is the first time we've seen extensive use of VMware in these survey results, with only 1.2% of respondents citing usage in the December 2017 survey.

Growth of Containers

73% (compared to 75%) of respondents are currently using containers in production today, with the remaining 27% (compared to 25%) planning to use them in the future. 89% of respondents are currently using containers for proof of concepts, as well as testing (85%) and development (86%).

Your company/organization uses containers for:

The number of containers that organizations are typically running is also holding steady with 29% running less than 50, 50-249 (27%), 250-999 (17%), and 15% running more than 5,000 containers. There is a slight increase in organizations who are using less than 50, up to 29% from 23% in December 2017, and a slight decrease in organizations running 250-999 (down to 17% from 22%).

How many containers does your company/organization typically run?

As for container management tools, Kubernetes remains the leader with 83% (up from 77%) of respondents citing use followed by Amazon ECS (24% up from 18%), Docker Swarm (21% up from 17%), and Shell Scripts (20% up from 12%).

Your company/organization manages containers with:


58% of respondents are using Kubernetes in production, while 42% are evaluating it for future use. In comparison, 40% of enterprise companies (5000+) are running Kubernetes in production.

In production, 40% of respondents are running 2-5 clusters, 1 cluster (22%), 6-10 clusters (14%), and more than 50 clusters (13% up from 9%).

As for which environment Kubernetes is being run in, 51% are using AWS (down from 57%), on premise servers (37% down from 51%), Google Cloud Platform (32% down from 39%), Microsoft Azure (20% down from 23%), OpenStack (16% down from 22%), and VMware (15% up from 1%). The graph below illustrates where respondents are running Kubernetes vs. where they’re deploying containers.

Kubernetes Environment vs. Container Environment

For local development, the majority of respondents are targeting environments such as Minikube (45%), Docker Kubernetes (39%), and on prem Kubernetes installations (30%).

We also asked respondents about the tools they are using to manage various aspects of their applications:

The preferred method for packaging is Helm (68%) followed by managed Kubernetes offerings (19%).

The majority of respondents are autoscaling stateless applications (64%), followed by Java applications (45%), and task/queue processing applications (37%). Those who are not using autoscaling were either not aware of the functionality (21%) or do not want to autoscale their workloads at this time (31%).

Ingress Providers
The top Kubernetes ingress providers cited are nginx (64% up from 57%), HAProxy (29%), F5 (15% up from 11%), and Envoy (15% up from 9%).

Exposing Cluster External Services
The number one way respondents are exposing cluster external services, like the internet or other VMs, is through load-balancer services (67%). This is followed by L7 ingress (39%) and integration with a third-party load balancer (33%).
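The L7 option can be sketched as an Ingress resource that routes HTTP traffic by host and path to backing Services. The host, path, and service name are illustrative, and `extensions/v1beta1` was the Ingress API group in Kubernetes releases of this era:

```yaml
# Routes HTTP requests for example.com/ to the "web" Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress      # illustrative name
spec:
  rules:
  - host: example.com    # illustrative hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: web   # Service receiving the traffic
          servicePort: 80
```

An ingress controller such as nginx (the most-cited provider above) must be running in the cluster for this resource to take effect.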

Separating Kubernetes in an Organization with Multiple Teams
Respondents are separating multiple teams within Kubernetes using namespaces (71%), separate clusters (51%), and only labels (15%).

Separating Kubernetes Applications
Respondents are separating Kubernetes applications using namespaces (78%), separate clusters (50%), and only labels (21%).

Cloud Native in Production

What are the benefits of cloud native projects? Respondents cited the top three reasons as:

  • Faster deployment time (50%)
  • Improved scalability (45%)
  • Cloud portability (42%)

As for the cloud native projects that are being used in production and evaluated:

CNCF Projects

Many CNCF projects showed significant jumps in production usage since our last survey, such as Containerd (45% up from 18%), CoreDNS (36% up from 7%), Envoy (24% up from 4%), Fluentd (57% up from 38%), gRPC (45% up from 22%), Jaeger (25% up from 5%), Linkerd (16% up from 3%), and OpenTracing (21% up from 8%). On average, CNCF project usage is up over 200% since our last survey.

The number of respondents evaluating CNCF projects also jumped with Containerd (55% up from 22%), CoreDNS (64% up from 14%), Envoy (74% up from 26%), Fluentd (43% up from 22%), gRPC (55% up from 16%), Jaeger (75% up from 15%), Linkerd (84% up from 15%), and OpenTracing (80% up from 25%). On average, CNCF project evaluation is up 372% since our last survey.

Projects that are new to CNCF also have high rates of consideration, with respondents notably evaluating SPIRE (94%), TUF (93%), Open Policy Agent (92%), Vitess (92%), and SPIFFE (92%).

Challenges in Using & Deploying Containers

As cloud native technologies change the way companies are designing and building applications, challenges are inevitable. The top challenges that respondents face are:

  • Cultural Changes with Development Team (41%)
  • Complexity (40% up from 35%)
  • Lack of Training (40%)
  • Security (38% down from 43%)
  • Monitoring (34% down from 38%)
  • Storage (30% down from 41%)
  • Networking (30% down from 38%)

There are two notable changes to these top challenges. While this is the first time we explicitly asked about cultural changes with the development team, it was cited as the number one challenge in using and deploying containers. Second, lack of training is a new addition to the survey. While CNCF has invested heavily in Kubernetes training over the past year including both free and paid courses and certification for Kubernetes Administrators and Application Developers, we continue to host new projects that need additional training resources as they grow.

The remainder of top challenges have been consistent over our past surveys, but the percentages are continuing to drop as more resources and tools are added to solve these problems.

What are your challenges in using / deploying containers:

Also interesting is the decrease in storage and networking as a challenge alongside the growth of cloud native storage projects such as:

  • Rook: 11% of respondents are using in production while 89% (up from 29%) are evaluating.
  • Minio: 27% of respondents are using in production while 73% (up from 28%) are evaluating.
  • OpenSDS: 16% (up from 7%) of respondents are using in production while 84% (up from 14%) are evaluating.
  • REX-Ray: 18% of respondents are using in production while 82% are evaluating.
  • Openstorage: 19% (down from 31%) of respondents are using in production while 81% (up from 36%) are evaluating.

Which of these cloud native storage projects is your company / organization using:

Growth of Serverless

We also continued to track the growth of serverless technology in this survey. 38% of organizations are currently using serverless technology which is up from 31%, with 32% using a hosted platform and 6% using installable software.

37% are not using serverless technology which is down from 41%, but an additional 26% plan to within the next 12-18 months.

Top installable serverless platforms are:

  • Kubeless (42% up from 2%)
  • Apache OpenWhisk (25% up from 12%)
  • OpenFaaS (20% up from 10%)

Which installable serverless platforms does your organization use?

Top hosted serverless platforms are:

  • AWS Lambda (70%)
  • Google Cloud Functions (25% up from 13%)
  • Azure Functions (20% up from 12%)

Which hosted serverless platforms does your organization use?

As usage of serverless grows, there is also significant interest in the serverless project CloudEvents, with 80% of respondents evaluating the project for use and 21% using it in production. CloudEvents is an effort organized by CNCF's Serverless Working Group to create a specification for describing event data in a common way.

How to Learn More?

Are you just getting started or want to learn more about cloud native projects? Here are the top ways respondents are learning about cloud native technologies:


Documentation

20% of respondents use documentation to learn about cloud native projects, the number one resource cited in this survey. For example, SIG Docs helps maintain an extensive resource of detailed Kubernetes documentation, covering everything from how to get started with a specific feature to the best ways to get involved as a contributor. Each CNCF project hosts extensive documentation on its website, which can be found here.

KubeCon + CloudNativeCon

12% of respondents attend KubeCon + CloudNativeCon to learn more about the technologies they’re using. KubeCon + CloudNativeCon gathers all CNCF projects under one roof and brings together leading technologists from open source cloud native communities to further the advancement of cloud native computing. The event happens three times per year in Europe, China, and North America.

CNCF Website and Webinars

12% of respondents visit the CNCF website and attend webinars to get more information. The CNCF website is the main resource for all cloud native projects, housing information on a variety of subjects including upcoming events, training, certification, blog posts, and more.

The CNCF Webinar Series takes place every Tuesday from 10am-11am PT. You can see the upcoming schedule and view recordings and slides of previous webinars.

Meetups and Local Events

11% of respondents attend meetups and local events to learn about cloud native technologies. CNCF hosts 149 meetups under our umbrella across 33 countries with over 76,000 members. You can find your local meetup here.

CNCF and the local cloud native communities support events all over the world, from conferences to roadshows. You can view upcoming events here.


Twitter

10% of respondents get their information from Twitter. CNCF tweets out project, community, and foundation news from our Twitter handle. You can also follow your favorite cloud native projects; a list of their Twitter handles (and additional social accounts) can be found here.

How do you learn about cloud native technologies?

A huge thank you to everyone who participated in our survey. We hope to see you at KubeCon + CloudNativeCon in Shanghai (November 12-15, 2018) and Seattle (December 11-13, 2018).

Stay tuned for our follow-up to this survey with results from our Chinese survey coming out later this year!

You can also view the findings from past surveys here:

March 2018: China is Going Native with Cloud
December 2017: Cloud Native Technologies Are Scaling Production Applications
June 2017: Survey Shows Kubernetes Leading as Orchestration Platform
January 2017: Kubernetes moves out of testing and into production
June 2016: Container Survey
March 2016: Container survey results

Demystifying RBAC in Kubernetes

By | Blog

Today’s post is written by Javier Salmeron, Engineer at Bitnami

Many experienced Kubernetes users may remember the Kubernetes 1.6 release, where the Role-Based Access Control (RBAC) authorizer was promoted to beta. This provided an alternative authorization mechanism to the already existing, but difficult to manage and understand, Attribute-Based Access Control (ABAC) authorizer. While everyone welcomed this feature with excitement, it also created innumerable frustrated users. StackOverflow and GitHub were rife with issues involving RBAC restrictions because most of the docs or examples did not take RBAC into account (although now they do). One paradigmatic case is that of Helm: now simply executing “helm init + helm install” did not work. Suddenly, we needed to add “strange” elements like ServiceAccounts or RoleBindings prior to deploying a WordPress or Redis chart (more details in this guide).

Leaving these “unsatisfactory first contacts” aside, no one can deny the enormous step forward that RBAC represented in making Kubernetes a production-ready platform. Since most of us have played with Kubernetes with full administrator privileges, we understand that in a real environment we need to:

  • Have multiple users with different properties, establishing a proper authentication mechanism.
  • Have full control over which operations each user or group of users can execute.
  • Have full control over which operations each process inside a pod can execute.
  • Limit the visibility of certain resources within namespaces.

In this sense, RBAC is a key element for providing all these essential features. In this post, we will quickly go through the basics (for more details, check the video below) and dive a bit deeper into some of the most confusing topics.

The key to understanding RBAC in Kubernetes

In order to fully grasp the idea of RBAC, we must understand that three elements are involved:

  • Subjects: The set of users and processes that want to access the Kubernetes API.
  • Resources: The set of Kubernetes API Objects available in the cluster. Examples include Pods, Deployments, Services, Nodes, and PersistentVolumes, among others.
  • Verbs: The set of operations that can be executed on the resources above. Different verbs are available (examples: get, watch, create, delete, etc.), but ultimately all of them are Create, Read, Update or Delete (CRUD) operations.

With these three elements in mind, the key idea of RBAC is the following:

We want to connect subjects, API resources, and operations. In other words, we want to specify, given a user, which operations can be executed over a set of resources.

Understanding RBAC API Objects

So, if we think about connecting these three types of entities, we can understand the different RBAC API Objects available in Kubernetes.

  • Roles: Connect API resources and verbs. These can be reused for different subjects. They are bound to one namespace (we cannot use wildcards to represent more than one, but we can deploy the same Role object in different namespaces). If we want the role to apply cluster-wide, the equivalent object is called a ClusterRole.
  • RoleBinding: Connects the remaining entity: subjects. Given a role, which already binds API objects and verbs, we establish which subjects can use it. For the cluster-level, non-namespaced equivalent, there are ClusterRoleBindings.

| TIP: Watch the video for a more detailed explanation.

In the example below, we are granting the user jsalmeron the ability to read, list and create pods inside the namespace test. This means that jsalmeron will be able to execute these commands:
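(The original post embedded the commands as images; the following is a plausible reconstruction based on the description, with the pod manifest filename being an assumption.)

```shell
# Allowed: the role grants get, list, and create on pods in "test"
kubectl get pods --namespace test
kubectl create -f pod.yaml --namespace test
```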

But not these:
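(Again reconstructed from the description; the pod name is illustrative.)

```shell
# Denied: "delete" is not among the granted verbs,
# and the Role only applies to the "test" namespace
kubectl delete pod mypod --namespace test
kubectl get pods --namespace default
```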

Example yaml files:
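(The original files were embedded as images; the manifests below are a plausible reconstruction from the description, and the object names are illustrative.)

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: pod-reader-creator
rules:
- apiGroups: [""]       # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-creator-binding
  namespace: test
subjects:
- kind: User
  name: jsalmeron
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader-creator
  apiGroup: rbac.authorization.k8s.io
```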

Another interesting point is the following: now that the user can create pods, can we limit how many? To do so, other objects not directly part of the RBAC specification allow configuring resource amounts: ResourceQuota and LimitRange. They are worth checking out for configuring such a vital aspect of the cluster.
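For instance, a ResourceQuota along these lines (the object name is illustrative) would cap the number of pods that can exist in the namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: test
spec:
  hard:
    pods: "10"   # at most 10 pods may exist in this namespace
```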

Subjects: Users and… ServiceAccounts?

One topic that many Kubernetes users struggle with is the concept of subjects, but more specifically the difference between regular users and ServiceAccounts. In theory it looks simple:

  • Users: These are global, and meant for humans or processes living outside the cluster.
  • ServiceAccounts: These are namespaced and meant for intra-cluster processes running inside pods.

Both have in common that they want to authenticate against the API in order to perform a set of operations over a set of resources (remember the previous section), and their domains seem clearly defined. They can also belong to what are known as groups, so a RoleBinding can bind more than one subject (though ServiceAccounts can only belong to the “system:serviceaccounts” group). However, the key difference causes several headaches: users do not have an associated Kubernetes API object. That means that while this operation exists:
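(Reconstructed from the description, since the original command was embedded as an image:)

```shell
# ServiceAccounts are real API objects, so this works:
kubectl get serviceaccounts
```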

this one doesn’t:
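(Likewise reconstructed:)

```shell
# There is no "user" API resource for the server to return:
kubectl get users
```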

This has a vital consequence: since the cluster does not store any information about users, the administrator needs to manage identities outside the cluster. There are different ways to do so: TLS certificates, tokens, and OAuth2, among others.

In addition, we would need to create kubectl contexts so we could access the cluster with these new credentials. In order to create the credential files, we could use the kubectl config commands (which do not require any access to the Kubernetes API, so they could be executed by any user). Watch the video above to see a complete example of user creation with TLS certificates.
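Assuming a client certificate and key have already been generated for jsalmeron, the context setup might look like this (file paths and the cluster name are assumptions):

```shell
# Register the user's credentials in the local kubeconfig
kubectl config set-credentials jsalmeron \
  --client-certificate=jsalmeron.crt \
  --client-key=jsalmeron.key

# Create a context tying the user to a cluster and namespace
kubectl config set-context jsalmeron-context \
  --cluster=my-cluster --namespace=test --user=jsalmeron

# Switch to the new context
kubectl config use-context jsalmeron-context
```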

RBAC in Deployments: A use case

We have seen an example where we establish what a given user can do inside the cluster. However, what about deployments that require access to the Kubernetes API? We’ll see a use case to better understand this.

Let’s go for a common infrastructure application: RabbitMQ. We will use the Bitnami RabbitMQ Helm chart (in the official helm/charts repository), which uses the bitnami/rabbitmq container. This container bundles a Kubernetes plugin responsible for detecting other members of the RabbitMQ cluster. As a consequence, the process inside the container requires access to the Kubernetes API, so we need to configure a ServiceAccount with the proper RBAC privileges.

When it comes to ServiceAccounts, follow this essential good practice:

Have ServiceAccounts per deployment with the minimum set of privileges to work.

For applications that require access to the Kubernetes API, you may be tempted to create a kind of “privileged ServiceAccount” that can do almost anything in the cluster. While this may seem easier, it could pose a security threat down the line, as unwanted operations could occur. Watch the video above to see the example of Tiller, and the consequences of ServiceAccounts with too many privileges.

In addition, different deployments will have different needs in terms of API access, so it makes sense to have different ServiceAccounts for each deployment.

With that in mind, let’s check what the proper RBAC configuration for our RabbitMQ deployment should be.

From the plugin documentation page and its source code, we can see that it queries the Kubernetes API for the list of Endpoints. This is used for discovering the rest of the peers of the RabbitMQ cluster. Therefore, what the Bitnami RabbitMQ chart creates is:

  • A ServiceAccount for the RabbitMQ pods.
  • A Role (we assume that the whole RabbitMQ cluster will be deployed in a single namespace) that allows the “get” verb for the Endpoints resource.
  • A RoleBinding that connects the ServiceAccount and the Role.
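A sketch of those three objects might look like this (the names are illustrative; the actual chart templates may differ):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]      # the minimum the peer-discovery plugin needs
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
subjects:
- kind: ServiceAccount
  name: rabbitmq
roleRef:
  kind: Role
  name: rabbitmq-endpoint-reader
  apiGroup: rbac.authorization.k8s.io
```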

The diagram shows how we enabled the processes running in the RabbitMQ pods to perform “get” operations over Endpoint objects. This is the minimum set of operations it requires to work. So, at the same time, we are ensuring that the deployed chart is secure and will not perform unwanted actions inside the Kubernetes cluster.

Final thoughts

In order to work with Kubernetes in production, RBAC policies are not optional. They can’t be seen as merely a set of Kubernetes API objects that administrators must know. Indeed, application developers will need them to deploy secure applications and to fully exploit the potential that the Kubernetes API offers to their cloud native applications. For more information on RBAC, check the following links:

Getting the Most out of Istio with CNCF Projects

By | Blog

This guest post was written by Neeraj Poddar, Platform Lead, Aspen Mesh

Are you considering or using a service mesh to help manage your microservices infrastructure? If so, here are some basics on how a service mesh can help, the different architectural options, and tips and tricks on using some key CNCF tools that are included with Istio to get the most out of it.

The beauty of a service mesh is that it bundles so many capabilities together, freeing engineering teams from having to spend inordinate amounts of time managing microservices architectures. Kubernetes has solved many build and deploy challenges, but it is still time consuming and difficult to ensure reliability at runtime. A service mesh handles the difficult, error-prone parts of cross-service communication such as latency-aware load balancing, connection pooling, service-to-service encryption, TLS, instrumentation, and request-level routing.

Once you have decided a service mesh makes sense to help manage your microservices, the next step is deciding which service mesh to use. There are several architectural options, from the earliest model of a library approach, to the node agent architecture, to the model that seems to be gaining the most traction – the sidecar model. We have also recently seen an evolution from data plane meshes like Envoy to control plane meshes such as Istio. As active users of Istio, and believers that the sidecar architecture strikes the right balance between a robust feature set and a lightweight footprint, let’s drill down into how to get the most out of Istio.

One of the capabilities Istio provides is distributed tracing. Tracing provides service dependency analysis for different microservices, and it provides tracking for requests as they are traced through multiple microservices. It’s also a great way to identify performance bottlenecks and zoom into a particular request to determine things like which microservice contributed to the latency of a request or which service created an error.

We use and recommend Jaeger for tracing as it has several advantages:

  • OpenTracing compatible API
  • Flexible & scalable architecture
  • Multiple storage backends
  • Advanced sampling
  • Accepts Zipkin spans
  • Great UI
  • CNCF project and active OS community

Another powerful thing you gain with Istio is the ability to collect metrics. Metrics are key to understanding historically what has happened in your applications, and when they were healthy compared to when they were not. A service mesh can gather telemetry data from across the mesh and produce consistent metrics for every hop. This makes it easier to quickly solve problems and build more resilient applications in the future.

We use and recommend Prometheus for gathering metrics for several reasons:

  • Pull model
  • Flexible query API
  • Efficient storage
  • Easy integration with Grafana
  • CNCF project and active OS community
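As an illustration of that flexible query API: once Istio’s telemetry is scraped into Prometheus, a query along these lines surfaces the per-service 5xx error rate (the metric name istio_requests_total and its labels are assumptions that vary by Istio version):

```promql
sum(rate(istio_requests_total{response_code=~"5.."}[5m])) by (destination_service)
  /
sum(rate(istio_requests_total[5m])) by (destination_service)
```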

Check out the recent CNCF webinar on this topic for a deeper look into what you can do with these tools and more.

Kubernetes 1.10: Stabilizing Storage, Security, and Networking

By | Blog

Editor’s note: today’s post is by the 1.10 Release Team

Originally posted on

We’re pleased to announce the delivery of Kubernetes 1.10, our first release of 2018!

Today’s release continues to advance maturity, extensibility, and pluggability of Kubernetes. This newest version stabilizes features in 3 key areas, including storage, security, and networking. Notable additions in this release include the introduction of external kubectl credential providers (alpha), the ability to switch DNS service to CoreDNS at install time (beta), and the move of Container Storage Interface (CSI) and persistent local volumes to beta.

Let’s dive into the key features of this release:

Storage – CSI and Local Storage move to beta

This is an impactful release for the Storage Special Interest Group (SIG), marking the culmination of their work on multiple features. The Kubernetes implementation of the Container Storage Interface (CSI) moves to beta in this release: installing new volume plugins is now as easy as deploying a pod. This in turn enables third-party storage providers to develop their solutions independently outside of the core Kubernetes codebase. This continues the thread of extensibility within the Kubernetes ecosystem.

Durable (non-shared) local storage management progressed to beta in this release, making locally attached (non-network attached) storage available as a persistent volume source. This means higher performance and lower cost for distributed file systems and databases.
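A beta-stage local PersistentVolume looks roughly like this (the path, capacity, and node name are assumptions for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1      # locally attached disk on the node
  nodeAffinity:                # pins the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
```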

This release also includes many updates to Persistent Volumes. Kubernetes can automatically prevent deletion of Persistent Volume Claims that are in use by a pod (beta) and prevent deletion of a Persistent Volume that is bound to a Persistent Volume Claim (beta). This helps ensure that storage API objects are deleted in the correct order.

Security – External credential providers (alpha)

Kubernetes, which is already highly extensible, gains another extension point in 1.10 with external kubectl credential providers (alpha). Cloud providers, vendors, and other platform developers can now release binary plugins to handle authentication for specific cloud-provider IAM services, or that integrate with in-house authentication systems that aren’t supported in-tree, such as Active Directory. This complements the Cloud Controller Manager feature added in 1.9.  

Networking – CoreDNS as a DNS provider (beta)

The ability to switch the DNS service to CoreDNS at install time is now in beta. CoreDNS has fewer moving parts: it’s a single executable and a single process, and it supports additional use cases.

Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the release notes.


Kubernetes 1.10 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials.

2 Day Features Blog Series

If you’re interested in exploring these features more in depth, check back next week for our 2 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

  • Day 1 – Container Storage Interface (CSI) for Kubernetes going Beta
  • Day 2 – Local Persistent Volumes for Kubernetes going Beta

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Jaice Singer DuMars, Kubernetes Ambassador for Microsoft. The 10 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem.

Project Velocity

The CNCF has continued refining an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. Thanks to increased automation, issue count at the end of the release was only slightly higher than it was at the beginning. This marks a major shift toward issue manageability. With 75,000+ comments, Kubernetes remains one of the most actively discussed projects on GitHub.

User Highlights

According to a recent CNCF survey, more than 49% of Asia-based respondents use Kubernetes in production, with another 49% evaluating it for use in production. Established, global organizations are using Kubernetes in production at massive scale. Recently published user stories from the community include:

  • Huawei, the largest telecommunications equipment manufacturer in the world, moved its internal IT department’s applications to run on Kubernetes. This resulted in global deployment cycles decreasing from a week to minutes, and application delivery efficiency improving tenfold.
  • Jinjiang Travel International, one of the top 5 largest OTA and hotel companies, uses Kubernetes to speed up their software release velocity from hours to just minutes. Additionally, they leverage Kubernetes to increase the scalability and availability of their online workloads.
  • Haufe Group, the Germany-based media and software company, utilized Kubernetes to deliver a new release in half an hour instead of days. The company is also able to scale down to around half the capacity at night, saving 30 percent on hardware costs.
  • BlackRock, the world’s largest asset manager, was able to move quickly using Kubernetes and built an investor research web app from inception to delivery in under 100 days.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem Updates

  • The CNCF is expanding its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual’s ability to design, build, configure, and expose cloud native applications for Kubernetes. The CNCF is looking for beta testers for this new program. More information can be found here.
  • Kubernetes documentation now features user journeys: specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.  
  • CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.


The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Copenhagen from May 2-4, 2018 and will feature technical sessions, case studies, developer deep dives, salons and more! Check out the schedule of speakers and register today!


Join members of the Kubernetes 1.10 release team on April 10th at 10am PDT to learn about the major features in this release including Local Persistent Volumes and the Container Storage Interface (CSI). Register here.

Get Involved:

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

  • Post questions (or answer questions) on Stack Overflow
  • Join the community portal for advocates on K8sPort
  • Follow us on Twitter @Kubernetesio for latest updates
  • Chat with the community on Slack
  • Share your Kubernetes story.

CNCF Survey: China is Going Native with Cloud

By | Blog

By Swapnil Bhartiya, Founder & Writer at TFiR: The Fourth Industrial Revolution

Swapnil is a journalist and writer who has been covering Linux & Open Source for 10 years. He is also a science fiction writer whose stories have been broadcast on Indian radio and published in leading Indian magazines.

To better gauge how quickly Asian companies are adopting open source and cloud native technologies like Kubernetes and Prometheus, CNCF recently conducted its cloud native survey in Chinese.  The bi-annual survey takes a pulse of the community to understand the adoption of cloud native technologies. More than 700 people responded to the December 2017 surveys, with 500 responding to the English version and 187 responding to the Chinese version.

For the first time, we have a very comprehensive sample and data to understand the adoption of cloud native technologies in China. Since the survey was conducted in Mandarin, it reflects the Chinese market more than the overall Asian market, excluding major economies like Japan. We expect that future surveys will be conducted in more languages to get an even clearer picture.

KubeCon + CloudNativeCon Europe is around the corner from May 2-4 in Copenhagen, and the first KubeCon + CloudNativeCon will be held in China in November. It will be interesting to see how the representation and responses change as CNCF reaches new developers, vendors, and user attendees in other parts of the world. I can’t wait for the next KubeCon to learn more about the cloud native journey of Asian players. Read on to learn more about new cloud native developments in China.

P.S. This blog is a follow up to “Cloud Native Technologies Are Scaling Production Applications,” which analyzed trends and data from our global survey conducted in English.  

China, an Open Source Cloud Native Powerhouse

The survey data collected validates the fact that China is embracing cloud native and open source technologies at a phenomenal rate.

We already know the famous BAT (Baidu, Alibaba and Tencent) organizations are using open source technologies to build their services that serve more than a billion users.  For example, I recently interviewed Bowyer Liu, the chief architect at Tencent, which built their own public cloud using OpenStack. It’s called TStack, which is running more than 12,000 virtual machines (VMs), covering 17 clusters, spread across 7 data centers in 4 regions. TStack is managing more than 300 services that include QQ and WeChat. TStack is also using open source projects like CoreOS, Docker, Kubernetes, KVM, HAProxy, Keepalived, Clear Container, Rabbitmq, MariaDB, Nginx, Ansible, Jenkins, Git, ELK, Zabbix, Grafana, InfluxDB, Tempest and Rally.

Of the 18 CNCF platinum members, 4 are located in Asia; of its 8 gold members, 3 are based in Asia. The community runs 18 CNCF Meetups across Asia, with 8 in China alone. Furthermore, in the past quarter, 3 of the top 10 most active companies — Huawei, Fujitsu and Treasure Data — contributing across all CNCF projects are based in Asia.

About the Survey Methodology & Respondents

The respondents represented a variety of company sizes from small startups to large enterprises:

  • Over 5000 employees: 22%
  • 1000-4999 employees: 16%
  • 500-999: 14%
  • 100-499: 27%
  • 50-99: 12%
  • 10-49: 8%

Out of the total respondents, 26% represented the tech industry; 13% represented container/cloud solutions vendors; 12% came from the financial services industry; 11% from the consumer industry; 6% from government and 6% from manufacturing. In comparison, the North American survey had 44% representation from the tech industry, the second largest representation was from container/cloud vendors at 14%, and the financial services industry was third with 7%. China is embracing cloud native technologies across the board, and the Chinese financial services industry seems more open to cloud native technologies than its Western counterparts.

What Cloud Are They Running?

While public cloud continues to dominate the U.S. market, the Chinese market is more diverse. 24% of respondents were using private cloud, only 16% were using public cloud, and 60% of respondents were running on-prem. Compare that to North America, where more than 81% of respondents were using public cloud, 61% were running on premise, and 44% were using some private cloud.

In China, Alibaba Cloud remains the leader, with more than 55% of respondents using it; 30% were using AWS, 28% were using OpenStack cloud, 12% were using Microsoft Azure, and around 6% were using Google Cloud Platform.

In North America, AWS remains the leader with 70% of respondents deploying their containers in AWS environment. Google Cloud Platform was at the second spot with 40% and Microsoft Azure took the 3rd spot with 23%. Compared to China, only 22% of respondents from North America acknowledged using OpenStack.

Growth of Containers

Out of the total respondents, only 6% were using more than 5,000 containers. The largest group, around 32%, was using fewer than 50 containers, and around 30% of respondents were using 50-249 containers. 36% of respondents were using containers in the development stage, whereas 32% were using them in production. 57% of respondents said they are planning to use containers in production in the future. The largest group of respondents (22%) was using between 6-20 machines in their fleet, with 21-50 machines close behind at 17%; only 6% had more than 5,000 machines, including VMs and bare metal.

What about Kubernetes?

No surprises — Kubernetes remains the No. 1 platform to manage containers. 35% of Asian respondents said they were using Kubernetes to manage their containers. Azure Container Service was used by 19%, and Docker Swarm was reported to be used by 16% of respondents. ECS came in at the No. 4 spot with 13%, with 11% reporting use of another Linux Foundation project, Cloud Foundry. 6% said they were using OpenShift, while another 6% reported using CoreOS Tectonic. This data shows a much more diverse cloud native ecosystem in China as compared to the United States.

Where Are They Running Kubernetes?

The most interesting finding from the survey was that more than 49% of respondents were using Kubernetes in production, with another 49% evaluating it for use. Some of the most well-known examples include Jinjiang Travel International, one of the top 5 largest OTA and hotel companies, which sells hotels, travel packages, and car rentals. They use Kubernetes to speed up their software release velocity from hours to just minutes, and they leverage Kubernetes to increase the scalability and availability of their online workloads. China Mobile uses containers to replace VMs to run various applications on their platform in a lightweight fashion, leveraging Kubernetes to increase resource utilization. State Power Grid, the state-owned power supply company in China, uses containers and Kubernetes to provide failure resilience and fast recovery. One of China’s largest companies, and the first Chinese Internet company to make the Global Fortune 500 list, chronicled its shift to Kubernetes from OpenStack in a blog post last year.

As expected, 52% of respondents were using Kubernetes on Alibaba’s public cloud and 26% were using AWS. China is a bigger consumer of OpenStack than the North American market (16%), with around 26% of respondents using it. A majority of respondents were running 10 or fewer clusters in production: 14% were running 1 cluster, 40% were running 2-5 clusters, and 26% were running 6-10 clusters. Only 6% of the respondents running Kubernetes had more than 50 clusters in production.

One Ring to Bind Them All…

CNCF is at the bedrock of this cloud native movement and the survey shows adoption of CNCF projects is growing quickly in China. While Kubernetes remains the crown jewel, other CNCF projects are getting into production. 20% of respondents were running OpenTracing in production; 16% were using Prometheus; 13% were using gRPC; 10% were using CoreDNS; and 7% were using Fluentd. China is still warming up to newer projects like Istio where only 3% of respondents were using it in production.

Speaking of new CNCF projects, we can’t ignore ‘serverless’ or ‘function as a service.’ The CNCF Serverless Working Group recently came out with a whitepaper to define serverless computing. The survey found that more than 25% of respondents are already using serverless technology, and around 23% planned to use it in the next 12-18 months.

China leans heavily toward open source when it comes to serverless technologies. Apache OpenWhisk is the dominant serverless platform in China with more than 30% of respondents using it as their preferred platform. AWS Lambda is in the second spot with 24% of respondents using it. Azure was mentioned by 14% and Google Cloud Functions was mentioned by 9% of respondents.

We will hear more about serverless technologies in 2018, especially at the upcoming KubeCon + CloudNativeCon, May 2-4 in Copenhagen.

Challenges Ahead

As impressive as the findings of the survey are, we are talking about some fairly young technologies, and respondents cited many new and old challenges. Complexity remains the No. 1 concern, with more than 44% of respondents citing it as their biggest challenge, followed by reliability (43%), difficulty in choosing an orchestration solution (40%), and monitoring (38%).

In contrast, for the North American respondents, security remains the No. 1 concern, with 43% of respondents citing it as their biggest challenge, followed by storage (41%), networking (38%), monitoring (38%), complexity (35%), and logging (32%).

This is a clear indication that the Chinese market is younger and still working through basic challenges that its Western counterparts have already overcome. Documentation plays a critical role in the proliferation of any technology, so to help Chinese companies move forward, it’s promising to see the massive effort underway to translate Kubernetes documentation into Mandarin.

The good news is that finding a vendor is not one of the biggest challenges for either market. With the release of the Certified Kubernetes Conformance Program, CNCF has instilled more confidence in users to pick and choose a vendor without fear of lock-in.

Getting Ready for Cloud Native World?

There is no playbook to help you embark on your journey to the cloud native world, but there are some best practices you can follow. A little under a year ago, Dr. Ying Xiong, Chief Architect of Cloud Computing at Huawei, talked about the company’s move toward cloud native architecture at KubeCon + CloudNativeCon Europe. Dr. Xiong offered some tips: start with the right set of applications for the cloud native journey; some apps are too difficult to redesign with a microservice architecture, so don’t start with those. Choose the easy ones; as you succeed, you gain confidence and a model to replicate across your organization. He also advises using a single platform to manage both container and non-container applications.

Be sure to join us at an upcoming CNCF Meetup.

For even deeper exposure to the cloud native community, ecosystem and user successes, be sure to attend our first KubeCon + CloudNativeCon China, Nov. 14-15 in Shanghai. 

This Week in Kubernetes: March 21st

By | Blog

Each week, the Kubernetes community shares an enormous amount of interesting and informative content including articles, blog posts, tutorials, videos, and much more. We’re highlighting just a few of our favorites from the week before. This week we’re talking machine learning, scalability, service mesh, and contributing to Kubernetes.

Running Apache Spark Jobs on AKS, Microsoft

Apache Spark, a fast engine for large-scale data processing, now supports native integration with Kubernetes clusters as a scheduler for Spark jobs. In this article, Lena Hall and Neil Peterson of Microsoft walk you through how to prepare and run Apache Spark jobs on an Azure Container Service (AKS) cluster. If you want to learn more about using Spark for large scale data processing on Kubernetes, check out this treehouse discussion video.

Introducing Agones: Open-source, Multiplayer, Dedicated Game-server Hosting Built on Kubernetes, Google

In the world of distributed systems, hosting and scaling dedicated game servers for online, multiplayer games presents some unique challenges. Because Kubernetes is an open-source, common standard for building complex workloads and distributed systems, it makes sense to expand this to scale game servers. In this article, Mark Mandel of Google introduces Agones, an open-source, dedicated game server hosting and scaling project built on top of Kubernetes, with the flexibility you need to tailor it to the needs of multiplayer games.

8 Ways to Bolster Kubernetes Security, TechBeacon

Kubernetes can affect many runtime security functions, including authentication, authorization, logging, and resource isolation. Since it also affects the container runtime environment, it’s a crucial part of maintaining a secure container infrastructure. In this article, John P. Mello Jr. of TechBeacon explains 8 ways to help keep Kubernetes secure.

Kubernetes from the Ground Up: Choosing a Configuration Method, OzNetNerd

Kubernetes’ configuration is simply a bunch of Kubernetes objects. In this article, Will Robinson of Contino takes you through a quick look at what these objects are, and what they’re used for. You’ll walk through imperative commands, imperative objects, and declarative objects including what imperative vs. declarative means and what is right for your application.

Stay tuned for more exciting content from the Kubernetes community next week, and join the KubeWeekly mailing list for the latest updates delivered directly to your inbox.

Is there a piece of content that you think we should highlight? Tweet at us! We’d love to hear what you’re reading, watching, or listening to.

Trace Your Microservices Application with Zipkin and OpenTracing

By | Blog

By Gianluca Arbezzano, Site Reliability Engineer at InfluxDB, CNCF Ambassador 

Walter Dal Mut is a certified Amazon Web Services consultant. He works at Corley SRL, an Italian company that helps other small and big companies move to the cloud.

During the first CNCF Italy Meetup, Walter shared his experience instrumenting a PHP microservices environment with Zipkin and OpenTracing.

Everybody logs in their applications, and logs are genuinely useful. The problem is that they contain a lot of detailed information that moves very fast across multiple services, making them almost impossible to read in real time.

This is why we have monitoring. By monitoring I mean events, metrics, and time series. One aspect of monitoring that I love is aggregation.

Aggregation makes it easy to put things together, and we can see so many things from it. For example, I can gauge the severity of my issues by looking at their occurrences, or compare the number of requests with the load time in a single graph. This is very useful, and it’s something I can’t do by tailing logs.

With metrics we can measure change. In my opinion this is one of the most important aspects of monitoring, because a deployment is usually the point in time when something changes. If we can detect the magnitude of that change, we can act based on how good or bad it is, and we see immediately whether what we changed matters. You will discover that, very often, a change turns out not to be useful at all, or simply doesn’t work as expected. Monitoring is the only way to understand all of this.

In the image above, I instrumented my application to send events and collected them in InfluxDB. In the bottom-right graph you can see green and red lines. Red lines tell us that something is wrong, and now that we know the distribution, we can measure whether a new version of our software improves the current state.

One tip to remember when building your dashboards: a deploy is an important event. Depending on the monitoring software you use, you can mark this special event with a vertical line; Grafana calls this feature an annotation. The annotation is drawn across all the graphs in the same dashboard, and this line is the key to understanding how a new release performs.
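Grafana exposes an HTTP API for creating annotations (`POST /api/annotations`), so a deploy script can mark the release itself. Here is a minimal Python sketch; the Grafana URL, API key, and function names are placeholder assumptions, not part of the talk:

```python
import json
import urllib.request

def deploy_annotation(version, when_ms, dashboard_id=None):
    """Build the JSON payload for a Grafana deploy annotation.

    when_ms is the deploy time in epoch milliseconds; without a
    dashboard_id the annotation is global (visible on any dashboard
    that queries annotations by tag).
    """
    payload = {
        "time": when_ms,
        "tags": ["deploy"],
        "text": "Deployed %s" % version,
    }
    if dashboard_id is not None:
        payload["dashboardId"] = dashboard_id
    return payload

def post_annotation(payload, base_url="http://localhost:3000", api_key="CHANGE-ME"):
    # POST the payload to Grafana's annotation endpoint.
    req = urllib.request.Request(
        base_url + "/api/annotations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )
    return urllib.request.urlopen(req)
```

Calling `post_annotation(deploy_annotation("v1.2.0", ...))` from your CI pipeline is one way to get that vertical line on every graph without manual work.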

One piece of information that is still missing, though, is how a request propagates through our system.

In a microservices architecture, the log generated by a single service is not what matters most; we want to trace a request and jump across services as it travels. I want to connect the dots across my services, and tracing is designed to read time series in exactly these terms.

In a traditional web application with a database, I want to understand the queries made to load a specific page: how long they take, so I can keep them optimized and few in number.

Tracing is all about spans, inter-process propagation, and active span management.

A span is a period of time with a start and an end. Beyond these two points, we also mark when the client sends the request, when the server receives it, when the server sends the response, and when the client receives it.

These four signals are important for understanding the network latency between services.

Beyond that, you can mark custom events inside a span and calculate how long it takes your application to finish a specific task, such as generating a PDF, decompressing a request, or processing a message queue.
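A minimal sketch of such a span makes the idea concrete: record the four classic signals (cs = client send, sr = server receive, ss = server send, cr = client receive) as in-span events and derive the network latency from them. The class and helper names are hypothetical, not a real tracing library:

```python
import time

class Span:
    """A named period of time plus timestamped events inside it."""

    def __init__(self, name):
        self.name = name
        self.start = time.time()
        self.end = None
        self.events = []  # (timestamp, label) pairs

    def log(self, label, at=None):
        # Record an event; pass `at` to use an explicit timestamp.
        self.events.append((at if at is not None else time.time(), label))

    def finish(self):
        self.end = time.time()

    def duration(self):
        return self.end - self.start

def network_latency(span):
    """(sr - cs) + (cr - ss): time spent on the wire rather than in the server."""
    marks = dict((label, ts) for ts, label in span.events)
    return (marks["sr"] - marks["cs"]) + (marks["cr"] - marks["ss"])

# Usage with explicit timestamps (seconds):
span = Span("GET /invoice.pdf")
span.log("cs", at=0.00)
span.log("sr", at=0.02)
span.log("ss", at=0.30)
span.log("cr", at=0.33)
span.finish()
network_latency(span)  # ≈ 0.05 s on the network; the rest was server time
```

The same `log` mechanism covers custom events: a "pdf_generated" or "queue_message_processed" label placed between `sr` and `ss` shows exactly where the server time went.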

Inter-process propagation describes how we propagate this context across networks and processes; in HTTP we use headers to pass trace information between services. The trace ID is the unique identifier of a request: it is assigned when the request starts, and every span that is part of that request is grouped under the same trace ID. Each span also has its own span ID as well as a parent span ID, which is what lets us assemble all the spans into the tree of your request.

There are different tracers available; the most famous open source ones are Jaeger (a project under the CNCF) and Zipkin, started at Twitter.

During the talk Walter used Zipkin, but both are compatible with OpenTracing: if you use the right libraries, you can switch between tracers transparently.

This is how a trace is represented in Zipkin. You can see the list of all your services on the left, and every span generated across your distributed system. The length of each span shows how much time it took, and from this visualisation alone we already get a good set of information and optimisation points:

  • How many services we are calling to serve the request. Are they too many?
  • How many times we are calling the same service.
  • How much time a service is taking. If authentication takes 80% of your request time, you should fix it.
  • And many more.

Some spans have one or more dots; those white dots are logs. They are useful for understanding when a specific event happened in time. You can use this feature to identify when you sent an email, or when a customer clicked a specific button, tracking their UI experience.

The video shows a detailed demo of what Zipkin provides in terms of filtering, searching, and even UI tricks to quickly identify traces that ended with an error. Beyond showing how Zipkin works, Walter shared his experience instrumenting PHP applications.

The focus of his talk is on tricks and best practices that are well worth hearing in order to avoid some common mistakes.

They are a bit hard to transcribe, so I will leave the video to you.

I will leave you with the quote that he shared at the end of the presentation (spoiler alert):

“Measure is the key to science (Richard Feynman).”

Slides available


Cloud Native Computing Foundation Expands Certification Program to Kubernetes Application Developers – Beta Testers Needed

By | Blog

The Cloud Native Computing Foundation (CNCF), which sustains and integrates open source technologies like Kubernetes and Prometheus, announced today that it is expanding its certification program to include application developers who deploy to Kubernetes. After launching the highly successful Certified Kubernetes Administrator (CKA) exam and corresponding Kubernetes Certified Service Provider (KCSP) program back in September with more than 1,200 CKAs and 28 KCSPs to date, the natural follow-on was an exam focused on certifying application developers to cover the full spectrum of Kubernetes technical expertise.

We are looking for skilled Kubernetes professionals at vendor companies to beta test the exam curriculum.

Are you interested in getting an early peek as a beta tester? If so, please read on!

  • The online exam consists of a set of performance-based items (problems) to be solved in a command line.
  • The exam is expected to take approximately 2 hours to complete.
  • Beta testing is targeted for late April to early May.
  • After taking the exam, each beta tester will be asked to provide exam experience feedback in a questionnaire.

If you would like to be considered as a beta tester for the CKAD exam, please sign up via this short survey.

You will be contacted with additional information when the exam is ready and the Beta team is selected. If you complete the beta exam with a passing grade, you will become one of the first Certified Kubernetes Application Developers.

With the majority of container-related job listings asking for proficiency in Kubernetes as an orchestration platform, the new program will help expand the pool of Kubernetes experts in the market, thereby enabling continued growth across the broad set of organizations using the technology. Certification is a key step in that process, allowing certified application developers to quickly establish their credibility and value in the job market, and also allowing companies to more quickly hire high-quality teams to support their growth.

“The CKAD program was developed as an extension of CNCF’s Kubernetes training offerings, which already include certification for Kubernetes administrators. By introducing this new exam for application developers, anyone working with Kubernetes can now certify their competency in the platform,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation.

To create the Certified Kubernetes Application Developer (CKAD) exam, 19 Kubernetes subject matter experts (SMEs) participated over four days in job analysis and item writing sessions co-located with KubeCon + CloudNativeCon in Austin, Texas in December. During these work sessions, the following exam scope statement was agreed upon:

CKAD Exam Scope Statement

The Certified Kubernetes Application Developer can design, build, configure, and expose cloud native applications for Kubernetes. The Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications & tools in Kubernetes.

The exam assumes knowledge of, but does not test for, container runtimes and microservice architecture.

The successful candidate will be comfortable using:

  • An OCI-Compliant Container Runtime, such as Docker or rkt.
  • Cloud native application concepts and architectures.
  • A programming language, such as Python, Node.js, Go, or Java.

Also during the work sessions, the team defined seven content domains with a % weight in the exam:

  • 13%  Core Concepts
  • 18%  Configuration
  • 10%  Multi-Container Pods
  • 18%  Observability
  • 20%  Pod Design
  • 13%  Services & Networking
  • 8%   State Persistence

The SME group wrote exam problems, assigned complexity and difficulty ratings, and determined a provisional passing score of 66% during the working session.

CKAD exam launch is targeted for early May.