I’ve attended many conferences before, but I was happy to get the diversity scholarship to attend CloudNativeCon + KubeCon Europe 2017 in Berlin as there is always so much more to learn. It was my first time attending an event organized by the Linux Foundation, and I hope to attend more in the future.
I loved all the insights and advances that I obtained through all of the highlighted Cloud Native projects, including Kubernetes, gRPC, OpenTracing, Prometheus, Linkerd, Fluentd and CoreDNS, from a variety of industry leaders. The keynotes were quite memorable as well, including the Kubernetes 1.6 updates by Aparna Sinha (Google), Federation from Kelsey Hightower (Google), Kubernetes Security Updates from Clayton Coleman (Red Hat), Helm from Michelle Noorali (Deis), Scaling Kubernetes from Joe Beda (Heptio) and Quay from Brandon Phillips (CoreOS).
There were many overlapping sessions, so everyone could come away having learned something new. Some highlights from the sessions I attended (not in order):
How We Run Kubernetes in Kubernetes, aka Kubeception – Timo Derstappen, Giant Swarm
Learned how Giant Swarm uses Kubernetes to deploy encrypted and isolated clusters of Kubernetes. They treat these clusters as TPRs (Third Party Resources).
Explained how they use a “mother” Kubernetes to deploy and manage fully-isolated and encrypted Kubernetes clusters for different customers or teams.
Life of a Packet in Kubernetes – Michael Rubin, Senior Staff Engineer & TLM, Google
I learned where an IP packet goes in Kubernetes: within one pod, across multiple pods, and across different servers. Also how it all fits together with flannel: https://github.com/coreos/flannel
Programming Kubernetes with the Kubernetes Go SDK – Aaron Schlesinger, Sr. Software Engineer, Deis
I learned how to use the Kubernetes Go client library to build TPRs (Third Party Resources), including a coding example and demo.
Go + Microservices = Go Kit – Peter Bourgon, Go Kit/Weaveworks
This session was a great talk on building Go microservices with Go Kit. What was also interesting is that the kit lets you implement your microservice with standard REST now and switch to gRPC later if you want. This is also the perfect type of microservice to deploy to a Kubernetes cluster.
Loki: An OpenSource Zipkin / Prometheus Mashup, Written in Go – Tom Wilkie, Director of Software Engineering, Weaveworks
I learned that Loki is a Zipkin-compatible distributed tracer written in Go. It uses Prometheus’ service discovery to find services and examine them.
As a Latino leader (originally from Chile), I enjoyed the diversity talk very much as well. It was an interesting conversation led by Sarah Conway from the Linux Foundation with a variety of attendees, including Kris Nova, a previous diversity scholarship recipient. The general consensus is that the industry still needs to do more. I hope these talks continue at future conferences, since this is a topic that is not often openly talked about, whether out of fear or for other reasons. I believe part of the culture of open source is being ‘open’ and talking about all the different issues and challenges the industry faces in becoming more inclusive.
For example, when it comes to hiring, only a few companies are taking steps to become truly more inclusive; others say they are doing things but are not practicing them; and the large majority are not doing anything at all.
The environment and event were great overall, from joining the morning run around Berlin with fellow conference attendees to the closing party with great food and drinks :-). What a great way to start and end each day.
Thanks for the fantastic experience, and I am looking forward to the next event, CloudNativeCon + KubeCon North America 2017 in Austin. Consider applying for a scholarship to the North America event. Applications can be found here and are due October 13th.
The sold out CloudNativeCon + KubeCon Europe 2017 gathered more than 1,500 end users, leading contributors and developers from around the world for three days in Berlin to exchange Cloud Native knowledge, best practices, and experiences.
What started as seven project logos lit up high above grew to nine, as Executive Director Dan Kohn unveiled our two newest projects — containerd and rkt — and added their logos to the ring adorning the conference hall.
Speakers with a variety of titles, backgrounds and companies talked about a range of Kubernetes topics, including the latest 1.6 update released at the conference, container security, application registries, workload colocation and more. Patrick Chanezon of Docker introduced the audience to containerd in his keynote, while Brandon Phillips of CoreOS dove into rkt’s history and goals going forward.
Additional keynote speakers presented on containerd, rkt, Prometheus, and topics like enterprise production and moving to the modern infrastructure. Watch keynotes and all other sessions here.
Bringing the many cloud native communities together to help build professional and personal connections was seen throughout the conference from the job board, to packed sessions to the buzz of the exhibit hall. Sponsor booths filled the exhibit hall, while attendees packed the booths thanks to plenty of demos, cool prizes, T-shirts and fun swag to collect.
Germany’s oldest preserved commercial power plant and a worldwide magnet for technology aficionados, ewerk GmbH was transformed into a local Berlin art scene for our event’s closing party, sponsored by CoreOS. Attendees mingled while sipping local beer from BRLO, listened to tunes from local DJ Zunshine as they ate a variety of fusion dishes, and took home a piece of art from street artist Andreas Preis.
Women Who Brunch – CloudNativeCon + KubeCon Europe kicked off by hosting a women’s brunch with co-sponsor Google. Attendees sipped mimosas and networked with other women in the tech industry from around the globe.
Diversity Coffee Talk – Attendees munched on pastries as they explored ways to encourage more inclusivity and camaraderie across the many open source communities involved in cloud native computing today.
5K Run – Attendees laced up their running shoes and took a professionally guided tour of Berlin, seeing sights like Tiergarten Park, The Victory Column, Bellevue Palace, 17th of June Street and more.
Attendees donated 25 books to the Tux’s Bookworms program, which collects children’s books at our conferences worldwide and donates them to children in each conference’s local area. Remember to bring a book to our North American conference in December!
We welcomed 6 CNCF Diversity Scholarship winners from around the world. Stay tuned for a series of blog posts from the winners at www.cncf.io/newsroom/blog and consider applying for a scholarship to our North America event. Applications can be found here and are due October 13th.
CloudNativeCon + KubeCon Europe 2017 would not have been possible without the help of its sponsors, speakers, attendees, and organizers.
Next Up, Austin in December!
CloudNativeCon + KubeCon North America 2017 is headed to Austin, Texas, and the event dates have been extended to December 6-8, allowing for 3 full days of cloud native education and community bonding. Register to attend here. CFP submissions are due August 19th; submit to speak here.
By: Jonathan Boulle, rkt project co-founder, CNCF TOC representative, and head of containers and Berlin site lead at CoreOS
Earlier this month, we announced that CoreOS made a proposal to add rkt, the pod-native container engine, as a new incubated project within the Cloud Native Computing Foundation (CNCF). Today we are happy to celebrate that rkt has been formally proposed and accepted into the CNCF.
This means that with rkt now housed in the CNCF, we ensure that the rkt and container community will continue to thrive in a neutral home for collaboration. We are excited to work alongside the CNCF community to push forward the conversation around container execution in a cloud native environment. We look forward to working alongside the CNCF to further the development of the rkt community and develop interoperability between Kubernetes, OCI and containerd.
It is a historic moment, where the CNCF has the opportunity to push progress on container execution for the future of the ecosystem, under a neutral and collaborative home. The future of container execution is important for cloud native. rkt is now joining the CNCF family alongside other critical projects like gRPC, Kubernetes and Prometheus.
Working with the community and next steps
rkt developers already actively collaborate on container specifications in the OCI project, and we are happy to collaborate more on the implementation side with the CNCF. We are actively working to integrate rkt with Kubernetes, the container cluster orchestration system, and together we can work to refine and solidify the shared API for how Kubernetes communicates with container runtimes. Having container engine developers work side-by-side on the testing and iteration of this API ensures a more robust solution beneficial for users in our communities.
The OCI project is hard at work on the standards side, and we expect to be able to share code implementing those image and runtime specifications. rkt closely tracks OCI development and has developers involved in the specification process. rkt features early implementation support for the formats, with the intention of being fully compliant once the critical 1.0 milestone is reached.
What can rkt users expect from this announcement? All of the rkt maintainers will continue working on the project as usual. Better still, with the help of the CNCF we can encourage new users and maintainers to contribute to and rely on rkt.
A big thank you to all the supporters of rkt over the years. We would also like to thank Brian Grant of Google for being the official sponsor of the proposal for rkt’s contribution to the CNCF.
What is rkt? A pod-native container engine
rkt, an open source project, is an application container engine developed for modern production cloud-native environments. It features a pod-native approach, a pluggable execution environment, and a well-defined surface area that makes it ideal for integration with other systems.
The core execution unit of rkt is the *pod*, a collection of one or more applications executing in a shared context (rkt’s pods are synonymous with the concept in the Kubernetes orchestration system). rkt allows users to apply different configurations (like isolation parameters) at both pod-level and at the more granular per-application level. rkt’s architecture means that each pod executes directly in the classic Unix process model (i.e. there is no central daemon), in a self-contained, isolated environment. rkt implements a modern, open, standard container format, the App Container (appc) spec, but can also execute other container images, like those created with Docker.
Since its introduction by CoreOS in December 2014, the rkt project has greatly matured and is widely used. It is available for most major Linux distributions and every rkt release builds self-contained rpm/deb packages that users can install. These packages are also available as part of the Kubernetes repository to enable testing of the rkt + Kubernetes integration. rkt also plays a central role in how Google Container Image and CoreOS Container Linux run Kubernetes.
How were rkt and containerd contributed to the CNCF?
On March 15, 2017, at the CNCF TOC meeting, CoreOS and Docker made proposals to add rkt and containerd as new projects for inclusion in the CNCF. During the meeting, we, as rkt co-founders, proposed rkt, and Michael Crosby, a containerd project lead and co-founder, proposed containerd. These proposals were the first step; the projects then went through a formal proposal to the TOC and were finally called to a formal vote last week. Today these projects have been accepted into the organization.
What does this mean for rkt and other projects in the CNCF?
As part of the CNCF, we believe rkt will continue to advance and grow. The donation will ensure that there is ongoing shared ecosystem collaboration around the various projects, where interoperability is key. Finding a well-respected neutral home at the CNCF provides benefits to the entire community around fostering interoperability with OCI, Kubernetes, and containerd. There’s also an exciting number of opportunities for cross-collaboration with other projects like gRPC and Prometheus.
Container execution is a core part of cloud-native. By housing rkt under the CNCF, a neutral, respected home for projects, we see benefits including help with community building and engagement, and overall, fostering of interoperability with other cloud native projects like Kubernetes, OCI, and containerd.
How should we get involved?
The community is encouraged to keep using, or begin using rkt, and you can get involved on the rkt page on GitHub or on the mailing list. Note that this repo will be moved into a new vendor-neutral GitHub organisation over the coming weeks.
The previous round of benchmarking on CNCF’s cluster provided us with a wealth of information, which greatly aided our work on this new release. The last series of scaling tests on the CNCF cluster consisted of using a cluster-loader utility (as demonstrated at CloudNativeCon + KubeCon in Seattle last year) to load the environment with realistic content such as Django/WordPress, along with multi-tier apps that included databases such as PostgreSQL and MariaDB. We did this on Red Hat OpenShift Container Platform running on a 1,000 node cluster provisioned and managed using Red Hat OpenStack Platform. We scaled the number of applications up, analyzed the state of the system while under load and folded all of the lessons learned into Kubernetes and then downstream into OpenShift.
What we built on the CNCF Cluster
This time we wanted to leverage bare metal as well. So we built two OpenShift clusters: one cluster of 100 nodes on bare metal and another cluster of 2,048 VM nodes on Red Hat OpenStack Platform 10. We chose 2,048 because it’s a power of 2, and that makes engineers happy.
We kept some of our goals from last time, and added some cool new ones:
2,000+ node OpenShift Cluster and research future reference designs
Use Overlay2 graph driver for improved density & performance, along with recent SELinux support added in kernel v4.9
Saturation test for OpenShift’s HAProxy-based network ingress tier
Saturation test for OpenShift’s integrated container registry and CI/CD pipeline
Network Ingress/Routing Tier
The routing tier in OpenShift consists of machine(s) running HAProxy as the ingress point into the cluster. As our tests verified, HAProxy is one of the most performant open source solutions for load balancing. In fact, we had to re-work our load generators several times in order to push HAProxy as required. By super popular demand from our customers, we also added SNI and TLS variants to our test suite.
Our load generator runs in a pod and its configuration is passed in via configmaps. It queries the Kubernetes API for a list of routes and builds its list of test targets dynamically.
Here is an example of the configuration that our load generator uses:
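A hypothetical sketch of such a ConfigMap follows; the field names here are purely illustrative, not the actual schema of the load generator:

```yaml
# Sketch only: illustrative field names, not the real tool's schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: http-loadgen-config
data:
  loadgen.yaml: |
    concurrency: 200        # simultaneous client connections
    keepalive: true         # reuse connections, as most browsers do
    tls-termination: edge   # terminate TLS at the HAProxy router
    duration: 120s
    # targets are discovered dynamically from the Kubernetes routes API,
    # so no static URL list is needed here
```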
In our scenario, we found that HAProxy was indeed exceptionally performant. From field conversations, we identified a trend that there are (on average) a large number of low-throughput cluster ingress connections from clients (i.e. web browsers) to HAProxy versus a small number of high-throughput connections. Taking this feedback into account, the default connection limit of 2,000 leaves plenty of room on commonly available CPU cores for additional connections. Thus, we have bumped the default connection limit to 20,000 in OpenShift 3.5 out of the box.
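In plain HAProxy terms, that bump corresponds to a config fragment along these lines (a sketch; in OpenShift the value is managed through the router’s configuration rather than edited by hand):

```
# haproxy.cfg (fragment): raising the connection ceiling to the
# new OpenShift 3.5 default
global
    maxconn 20000

defaults
    maxconn 20000
```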
If you have other needs to customize the configuration for HAProxy, our networking folks have made it significantly easier — as of OpenShift 3.4, the router pod now uses a configmap, making tweaks to the config that much simpler.
As we were pushing HAProxy we decided to zoom in on a particularly representative workload mix – a combination of HTTP with keepalive and TLS terminated at the edge. We chose this because it represents how most OpenShift production deployments are used – serving large numbers of web applications for internal and external use, with a range of security postures.
Let’s take a closer look at this data, noting that since this is a throughput test with a Y-axis of Requests Per Second, higher is better.
nbproc is the number of HAProxy processes spawned. nbproc=1 is currently the only supported value in OpenShift, but we wanted to see what, if anything, increasing nbproc bought us from a performance and scalability standpoint.
Each bar represents a different potential tweak:
1p-mix-cpu*: HAProxy nbproc=1, run on any CPU
1p-mix-cpu0: HAProxy nbproc=1, run on core 0
1p-mix-cpu1: HAProxy nbproc=1, run on core 1
1p-mix-cpu2: HAProxy nbproc=1, run on core 2
1p-mix-cpu3: HAProxy nbproc=1, run on core 3
1p-mix-mc10x: HAProxy nbproc=1, run on any core, sched_migration_cost=5000000
2p-mix-cpu*: HAProxy nbproc=2, run on any core
4p-mix-cpu02: HAProxy nbproc=4, run on core 2
We can learn a lot from this single graph:
CPU affinity matters. But why are certain cores nearly 2x faster? This is because HAProxy is now hitting the CPU cache more often due to NUMA/PCI locality with the network adapter.
Increasing nbproc helps throughput. nbproc=2 is ~2x faster than nbproc=1, BUT we get no further boost from going to 4 processes; in fact, nbproc=4 is slower than nbproc=2. This is because there were 4 cores in this guest, and 4 busy HAProxy processes left no room for the OS to do its thing (like process interrupts).
In summary, we know that we can improve performance more than 20 percent from baseline with no changes other than sched_migration_cost. What is that knob? It is a kernel tunable that weights processes when deciding if/how the kernel should load balance them amongst available cores. By increasing it by a factor of 10, we keep HAProxy on the CPU longer, and increase our likelihood of CPU cache hits by doing so.
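As a sketch, the tuning could be applied with a sysctl drop-in like the following (note that the exact tunable name varies by kernel version; on recent kernels it is exposed as kernel.sched_migration_cost_ns):

```
# /etc/sysctl.d/99-haproxy-tuning.conf (sketch)
# Default is 500000 (0.5 ms); raising it 10x keeps busy processes such
# as HAProxy on-CPU longer, improving CPU cache locality.
kernel.sched_migration_cost_ns = 5000000
```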
We’re excited about this one, and will endeavor to bring this optimization to an OpenShift install near you :-).
Look for more of this sort of tuning to be added to the product as we’re constantly hunting opportunities.
In addition to providing a routing tier, OpenShift also provides an SDN. Similar to many other container fabrics, OpenShift-SDN is based on OpenvSwitch+VXLAN. OpenShift-SDN defaults to multitenant security as well, which is a requirement in many environments.
VXLAN is a standard overlay network technology. Packets of any protocol on the SDN are wrapped in UDP packets, making the SDN capable of running on any public or private cloud (as well as bare metal).
Incidentally, both the ingress/routing and SDN tier of OpenShift are pluggable, so you can swap those out for vendors who have certified compatibility with OpenShift.
When using overlay networks, the encapsulation technology comes at a cost of CPU cycles to wrap/unwrap packets and is mostly visible in throughput tests. VXLAN processing can be offloaded to many common network adapters, such as the ones in the CNCF Cluster.
Web-based workloads are mostly transactional, so the most valid microbenchmark is a ping-pong test of varying payload sizes.
Below you can see a comparison of various payload sizes and stream count. We use a mix like this as a slimmed down version of RFC2544.
The X-axis is number of transactions per second. For example, if the test can do 10,000 transactions per second, that means the round-trip latency is 100 microseconds. Most studies indicate the human eye can begin to detect variations in page load latencies in the range of 100-200ms. We’re well within that range.
Bonus network tuning: large clusters with more than 1,000 routes or nodes require increasing the default kernel arp cache size. We’ve increased it by a factor of 8x, and are including that tuning out of the box in OpenShift 3.5.
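Concretely, the kernel’s ARP cache size is governed by the neighbour-table gc_thresh sysctls. Assuming the stock defaults of 128/512/1024, an 8x bump would look like this sketch (the exact values OpenShift ships may differ):

```
# /etc/sysctl.d/99-arp-cache.conf (sketch: 8x the stock defaults)
net.ipv4.neigh.default.gc_thresh1 = 1024   # below this, no GC at all
net.ipv4.neigh.default.gc_thresh2 = 4096   # soft limit
net.ipv4.neigh.default.gc_thresh3 = 8192   # hard limit on cached entries
```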
Since Red Hat began looking into Docker several years ago, our products have defaulted to using Device Mapper for Docker’s storage graph driver. The reasons for this are maturity, supportability, security, and POSIX compliance. Since the release of RHEL 7.2 in early 2016, Red Hat has also supported the use of overlay as the graph driver for Docker.
Red Hat engineers have since added SELinux support for overlay to the upstream kernel as of Linux 4.9. These changes were backported to RHEL7, and will show up in RHEL 7.4. This set of tests on the CNCF Cluster used a candidate build of the RHEL7.4 kernel so that we could use overlay2 backend with SELinux support, at scale, under load, with a variety of applications.
Red Hat’s posture toward storage drivers has been to ensure that we have the right engineering talent in-house to provide industry-leading quality and support. After pushing overlay into the upstream kernel, as well as extending support for SELinux, we feel that the correct approach for customers is to keep Device Mapper as the default in RHEL, while moving to change the default graph driver to overlay2 in Fedora 26. The first Alpha of Fedora 26 will show up sometime next month.
As of RHEL 7.3, we also have support for the overlay2 backend. The overlay filesystem has several advantages over device mapper (most importantly, page cache sharing among containers). Support for the overlay filesystem was added to RHEL with salient caveats: it is not POSIX compliant, and its use was, at the time, incompatible with SELinux (a key security/isolation technology).
That said, the density improvements gained by page cache sharing are very important for certain environments where there is significant overlap in base image content.
We constructed a test that used a single base image for all pods, and created 240 pods on a node. The cluster-loader utility used to drive this test has a feature called a “tuningset” which we use to control the rate of creation of pods. You can see there are 6 bumps in each line. Each of those represents a batch of 40 pods that cluster-loader created. Before it moves to the next batch, cluster-loader makes sure the previous batch is in running state. In this way, we avoid crushing the API server with requests, and can examine the system’s profiles at each plateau.
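A tuningset for the batching described above might be sketched as follows; the field names are illustrative approximations, so consult the cluster-loader documentation for the real schema:

```yaml
# Sketch of a cluster-loader tuningset (illustrative field names).
tuningsets:
  - name: stepped-pods
    pods:
      stepping:
        stepsize: 40   # create pods in batches of 40
        pause: 60s     # wait for each batch to reach Running
      ratelimit:
        delay: 250ms   # throttle requests to the API server
```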
Below are the differences between device mapper and overlay for memory consumption. The savings in terms of memory is reasonable (again, this is a “perfect world” scenario and your mileage may vary).
The reduction in disk operations below is due to subsequent container starts leveraging the kernel’s page cache rather than having to repeatedly fetch base image content from storage:
We have found overlay2 to be very stable, and it becomes even more interesting with the addition of SELinux support.
Container Native Storage
In the early days of Kubernetes, the community identified the need for stateful containers. To that end, Red Hat has contributed significantly to the development of persistent volume support in Kubernetes.
Depending on the infrastructure you’re on, Kubernetes and OpenShift support dozens of volume providers: Fibre Channel, iSCSI, NFS, Gluster and Ceph, as well as cloud-specific storage providers such as Amazon EBS, Google persistent disks, Azure blob storage and OpenStack Cinder. Pretty much anywhere you want to run, OpenShift can bring persistent storage to your pods.
Red Hat Container Native Storage is a Gluster-based persistent volume provider that runs on top of OpenShift in a hyper-converged manner. That is, it is deployed in pods, scheduled like any other application running on OpenShift. We used the NVME disks in the CNCF nodes as “bricks” for gluster to use, out of which CNS provided 1GB secure volumes to each pod running on OpenShift using “dynamic provisioning.”
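Dynamic provisioning of those 1GB volumes can be sketched with a StorageClass backed by the glusterfs provisioner plus a claim; the Heketi URL below is hypothetical, and the beta API/annotation forms match the Kubernetes releases current at the time:

```yaml
# Sketch: dynamically provision a 1Gi Gluster volume via Heketi.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: cns-gluster
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # hypothetical Heketi endpoint
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
  annotations:
    volume.beta.kubernetes.io/storage-class: cns-gluster
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
```

When the claim is created, the provisioner asks Heketi to carve a volume out of the Gluster bricks and binds it to the claim automatically.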
If you look closely at our deployment architecture, while we have deployed CNS on top of OpenShift, we also labeled those nodes as “unschedulable” from the OpenShift standpoint, so that no other pods would run on the same node. This helps control variability — reliable, reproducible data makes performance engineers happy :-).
We know that cloud providers limit volumes attached to each instance in the 16-128 range (often it is a sliding scale based on CPU core count). The prevailing notion seems to be that field implementations will see numbers in the range of 5-10 per node, particularly since (based on your workload) you may hit CPU/memory/IOPS limits long before you hit PV limits.
In our scenario we wanted to verify that CNS could allocate and bind persistent volumes at a consistent rate over time, and that the API control plane for CNS called Heketi can withstand an API load test. We ran throughput numbers for create/delete operations, as well as API parallelism.
The graph below indicates that CNS can allocate volumes in constant time – roughly 6 seconds from submit to the PVC going into “Bound” state. This number does not vary when CNS is deployed on bare metal or virtualized. Not pictured here are our tests verifying that several other persistent volume providers respond in a very similar timeframe.
OpenStack and Ceph
As we had approximately 300 physical machines for this set of tests, and the goal of hitting the “engineering feng shui” value of 2,048 nodes, we first had to deploy Red Hat OpenStack Platform 10, and then build the second OpenShift environment on top. Unique to this deployment of OpenStack was that Ceph was also deployed through Director. Ceph’s role in this environment is to provide boot-from-volume service for our VMs (via Cinder).
We deployed a 9-node Ceph cluster on the CNCF “Storage” nodes, which include (2) SSDs and (10) nearline SAS disks. We know from our counterparts in the Ceph team that Ceph performs significantly better when deployed with write-journals on SSDs. Based on the CNCF storage node hardware, that meant creating two write-journals on the SSDs and allocating 5 of the spinning disks to each SSD. In all, we had 90 Ceph OSDs, equating to 158TB of available disk space.
From a previous “teachable moment,” we learned that when importing a KVM image into Glance, if it is first converted to “raw” format, creating instances from that image takes a snapshot/boot-from-volume approach. The net result is that for each VM we create, we end up with approximately 700MB of disk space consumed. For the 2,048 node environment, the VM pool in Ceph took only approximately 1.5TB of disk space. Compare this to our last (internal) test, when 1,000 VMs took nearly 22TB.
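The conversion-and-import step can be sketched as follows (image names are hypothetical; these are the standard qemu-img and openstack CLI invocations):

```
# Convert a qcow2 guest image to raw before importing into Glance, so
# Cinder can boot VMs from copy-on-write snapshots on the Ceph backend.
qemu-img convert -f qcow2 -O raw rhel7-guest.qcow2 rhel7-guest.raw
openstack image create --disk-format raw --container-format bare \
    --file rhel7-guest.raw rhel7-guest
```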
In addition to reduced I/O to create VMs and reduced disk space utilization, booting from snapshots on Ceph was incredibly fast. We were able to deploy all 2,048 guests in approximately 15 minutes. This was really cool to watch!
Bonus deployment optimization: use image-based deploys! Whether it’s on OpenStack, or any other infrastructure public or private, image-based deploys reduce much of what would otherwise be repetitive tasks, and can reduce the burden on your infrastructure significantly.
Bake in as much as you can. Review our (unsupported) Ansible-based image provisioner for a head start.
Improved Documentation for Performance and Scale
Phew! That was a lot of work. So how do we ensure that the community and customers benefit?
First, we push absolutely everything upstream. Next, we bake as many of the tunings, best practices and config optimizations into the product as possible. And we document everything else.
Along with OpenShift 3.5, the performance and scale team at Red Hat will deliver a dedicated Scaling and Performance Guide within the official product documentation. This provides a consistently updated section of documentation to replace our previous whitepaper, and a single location for all performance and scalability-related advice and best practices.
The CNCF Cluster is an extremely valuable asset for the open source community. This 2nd round of benchmarking on CNCF’s cluster has once again provided us with a wealth of information to incorporate into upcoming releases. The Red Hat team hopes that the insights gained from our work will provide benefit for the many Cloud Native communities upon which this work was built:
And many more!
Our team also wishes to thank the CNCF and Intel for making this valuable community resource available. We look forward to tackling the next levels of scalability and performance along with the community!
Want to know what Red Hat’s Performance and Scale Engineering team is working on next? Check out our Trello board. Oh, and we’re hiring Principal-level engineers!
Author: Leah Petersen, Systems Engineer Samsung CNCT
Contributed blog from CNCF Platinum member Samsung
“Tell me your opinion about diversity in tech.”
…not something you expect to be asked at a technology conference booth. This year at the Samsung Cloud Native Computing Team sponsor booth we decided to ask Google Cloud Next attendees their opinion about a problem in our industry – the lack of diversity. We could immediately see a trend – some people nervously shied away from us, laughing it off, and others marched straight up to us and began speaking passionately about the subject.
We chose this unique approach to interacting with conference goers for a few reasons. As big supporters of diversity in tech, we wanted to gather ideas. We asked if their companies were proactively taking any measures to increase the diversity at their company or retain diverse individuals. We also wanted to get people thinking about this issue, since the cloud native computing space is relatively young and we see a great benefit in assembling a diverse group of people to move it forward. Finally, as information-bombarded, weary attendees navigated through the booth space, we wanted to offer something beyond yet another sales pitch.
The three day long conference turned up a lot of interesting ideas and new perspectives. A favorite was how one person defined lack of diversity by describing a group of entirely “Western-educated males” making decisions for a company. After defining what a diverse workforce does or doesn’t look like, lots of people talked about what their companies were doing to take action.
Many companies were involved in youth programs, coding camps, university outreach, and mentoring programs. Salesforce hired a Chief Equality Officer to put words into action. One CTO from a Singapore-based company told us that women in Asian countries more commonly choose a STEM education track, and that 65% of his engineering team is female. Another company removed names and university names from resumes to address interviewers’ implicit biases. Most of the women we talked to simply thanked us for bringing up the issue and described how isolating it can be to be the lone female on a team.
Another common, less positive story was how their company had tried a diversity program but gave up. This scenario underscores the unavoidable truth: bringing diversity into tech isn’t easy. Encouraging children to choose STEM careers is a long game that will bring change, but bringing diversity into the workplace right now is another story.
There’s a diverse, willing, and intelligent pool of workers, but they need training. People of different backgrounds, who weren’t able to get the classic Western STEM education, need opportunities to transition into tech. As one man pointed out, adult training programs are the answer, but companies need to do more than just offer training. Finding the time and energy to break out of the demanding lifestyle of a single parent or low-income adult is near impossible.
Apprenticeships with financial support are the answer to getting a mature, diverse workforce.
The week’s undeniable message from everyone was: we NEED diversity and more specifically we need diversity in leadership positions. We need more points of view and we need a better representation of our society in the tech industry.
Diverse teams are more adaptable overall and build better products that serve more people.
Each year, FOSDEM attracts more than 8,000 developers – as Josh Berkus, the Project Atomic community lead at Red Hat, puts it, the event is “a great way to reach a large number of open source geeks, community members and potential contributors.” Richard Hartmann, project director and system architect at SpaceNet AG, even dubbed it “the largest of its kind in Europe and, most likely, the world.”
On Sunday, the Linux Containers and Microservices Devroom was overflowing with 200+ infrastructure hackers, devops admins and more – all gathered to learn more about new containerized application stacks, from the practical to the theoretical. The room was at capacity throughout the day. According to Diane Mueller-Klingspor, director of community development for Red Hat OpenShift, it was “so popular that very few relinquished their seats between talks. When people did leave, the entire row just shifted to fill the seat if it was in the middle, leaving an open seat on the end of the row for newcomers. We called this the ‘Open Shift’ – which the audience got a good kick out of.”
That same day the Monitoring and Cloud Devroom kicked off with an equally eager group of developers. The Devroom, the largest FOSDEM has to offer, was also packed and demonstrated that as the world moves to cloud- and microservice-based architectures, monitoring is essential for observability and for diagnosing problems.
Here are more highlights from our community ambassadors on why these “grassroots” gatherings foster such excitement in the cloud native community:
Mueller-Klingspor: “Attendees of these Devrooms weren’t just hackers or people hacking on Kube; they were downright serious about the topic of microservices and containers. These developers were looking for information to help empower them to get their organizations up and running – using microservices, containers and, of course, orchestrating it all with Kubernetes. Unlike the FOSDEM Devrooms I’ve attended in the past, the Linux Containers and Microservices Devroom was not filled with dilettantes; everyone here had already committed to the cloud native mindset and was looking for deep technical information to help them get the job done.”
Berkus: “The technical content for the day was really good. We ended up with a program where most of the talks effectively built on each other – really rewarding the attendees who sat through the whole day (and there were more than a few). From my perspective, the top takeaways for developers who dropped into the Containers and Microservices Devroom were:
Kubernetes is the main cloud native orchestration platform;
Communications between containers is moving from REST to gRPC in upstream development; and
You can containerize Java applications too, and there are some real benefits to that.”
Chris Aniszczyk, COO of CNCF: “The presentations were especially amazing for those new to cloud native monitoring. We kicked off with a talk about the history of monitoring and then transitioned into the general categories of metrics, logs, profiling and distributed tracing, along with dives into how each of these is important. My main takeaway from the Monitoring and Cloud Devroom was how critical monitoring is becoming as companies scale out their operations in the cloud. We heard from Booking.com and Wikimedia about some of the challenges they had with monitoring at scale. I was also thrilled to hear the Prometheus project woven into almost every monitoring talk in the devroom; it’s becoming a fantastic open source choice for cloud native monitoring.”
Hartmann: “From my perspective, the main theme of the Monitoring and Cloud Devroom was to help people lead their teams to cloud-native technology and how to enact said change in a way that their teams want to play along. One of the biggest takeaways for developers was to make sure, first and foremost, to focus on user-visible services. FOSDEM is the world’s single largest FLOSS developer conference with dozens of satellite mini-conferences, team meetings and more. Sponsoring these Devrooms helps to support these efforts, gives back to the community and ensures that CNCF gets better acquainted with traditional developers.”
Here’s a list of speakers and their topics with links to videos in case you missed FOSDEM:
We’re happy to announce that, one year after version 0.1.0 was released, Linkerd has processed over 100 billion production requests in companies around the world. Happy birthday, Linkerd! Let’s take a look at all that we’ve accomplished over the past year.
The CNCF’s Technical Oversight Committee (TOC) recently voted CoreDNS into the CNCF portfolio of projects. CoreDNS, a fast, flexible and modern DNS server, joins a growing number of projects integral to the adoption of cloud native computing. CoreDNS was voted in as an inception project – see explanation of maturity levels and graduation criteria here.
“Kubernetes and other technology projects use DNS for service discovery, so we are a key component to the implementation of cloud native architectures. Additionally, CoreDNS’ design allows easy extension of DNS functionality to various container stacks,” said John Belamaric, CoreDNS core maintainer and distinguished architect at Infoblox. “As a CNCF project, we are looking forward to the Foundation helping to fuel developer contributions and user growth. Increased integration with other CNCF projects and cloud native technologies and priority usage of its Community Cluster are also important to us.”
“A focused, lightweight DNS server with a microservice philosophy guiding its internal design, CoreDNS is a critical component to the cloud native architecture and an important project for CNCF,” said Jonathan Boulle, CNCF TOC representative and head of containers and Berlin site lead at CoreOS. “The TOC has made it easier to find, submit and follow the progress of project proposals. By making the process more streamlined, we have been able to add several new projects over the past several months to speed the development of open source software stacks that enable cloud native deployments.”
Here’s more about the young, but growing distributed systems-friendly DNS project started by Miek Gieben, a Google Site Reliability Engineer.
Founded in March 2016, CoreDNS is the successor to the popular SkyDNS server. It is built as a server plugin for the widely-used Caddy webserver and uses the same model: it chains middleware.
The SkyDNS architecture did not lend itself to the flexible world of cloud deployments (an organically grown code base, monitoring, caching, etc.). Additionally, other DNS servers (BIND9, NSD, Knot) may be good for serving DNS, but are not flexible and do not support etcd as a backend, for instance.
The CoreDNS creators built on the concept of SkyDNS and took into account its limitations and those of other DNS servers to create a generic DNS server that can talk to multiple backends (etcd, Consul, Kubernetes, etc.). CoreDNS aims to be a fast and flexible DNS server, allowing users to access and use their DNS data however they please.
CoreDNS has been extended to operate directly with Kubernetes to access the service data. This “middleware” implementation for CoreDNS provides the same client-facing behavior as KubeDNS. The pipeline-based design of CoreDNS allows easy extension to use any container orchestrator as a DNS data source.
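As a sketch of what this Kubernetes integration might look like in practice (the cluster domain and upstream resolver below are hypothetical), a Corefile could enable the Kubernetes middleware like so:

```
# Hypothetical Corefile: serve the cluster.local zone from the
# Kubernetes API, and forward everything else upstream.
.:53 {
    kubernetes cluster.local     # answer service/pod queries from the API server
    cache                        # cache responses
    proxy . /etc/resolv.conf     # forward non-cluster queries to the host resolvers
    log                          # log queries
}
```

With a configuration along these lines, queries for in-cluster service names resolve against live Kubernetes data, while external names fall through to the proxy middleware.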
“In creating CoreDNS, our goal is to become the cloud DNS server allowing others like Docker, Hashicorp, and Weaveworks to use this technology as a library,” said Gieben. “Additionally, CoreDNS is useful outside cloud environments as well, so whether you use a private, public, on-premises or multi-cloud environment, you can use CoreDNS.” With the future addition of a pluggable policy engine, CoreDNS will extend its capabilities to sophisticated security and load balancing use cases.
CoreDNS chains DNS middleware; each “feature” is contained in a middleware. For instance:
file based DNS,
etcd and k8s backends.
Only load the middleware(s) you need
Each middleware is self-contained — easy to add new behavior
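The chaining model above can be sketched in a Corefile, CoreDNS’s Caddy-style configuration (the zone name and file path here are hypothetical):

```
# Each directive enables one middleware; for a given zone,
# a query flows through the chain in order.
example.org {
    file /etc/coredns/db.example.org   # file-based DNS (middleware/file)
    prometheus                         # export metrics (middleware/metrics)
    log                                # query logging (middleware/log)
}

. {
    proxy . 8.8.8.8:53                 # forward all other zones upstream
    cache
}
```

Because only the middleware named in the Corefile are loaded, each zone carries exactly the behavior it needs and nothing more.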
Figure 1: CoreDNS Architecture
Currently CoreDNS is able to:
Serve zone data from a file; both DNSSEC (NSEC only) and DNS are supported (middleware/file).
Retrieve zone data from primaries, i.e., act as a secondary server (AXFR only) (middleware/secondary).
Sign zone data on-the-fly (middleware/dnssec).
Load balancing of responses (middleware/loadbalance).
Allow for zone transfers, i.e., act as a primary server (middleware/file).
Health checking (middleware/health).
Use etcd as a backend, i.e., a replacement for SkyDNS (middleware/etcd).
Use k8s (kubernetes) as a backend (middleware/kubernetes).
Serve as a proxy to forward queries to some other (recursive) nameserver, using a variety of protocols like DNS, HTTPS/JSON and gRPC (middleware/proxy).
Rewrite queries (qtype, qclass and qname) (middleware/rewrite).
Provide metrics (by using Prometheus) (middleware/metrics).
Provide Logging (middleware/log).
Support the CH class: version.bind and friends (middleware/chaos).
Profiling support (middleware/pprof).
Integrate with OpenTracing-based distributed tracing solutions (middleware/trace).
For more on CoreDNS, check out the CNCF project proposal here and follow @corednsio to stay in touch.
“CoreDNS provides essential naming services efficiently and integrates effectively with many other cloud native (CNCF) projects,” said Chris Aniszczyk, COO, CNCF. “Furthermore, with a highly modular and extensible framework, CoreDNS is a compelling option for Kubernetes service discovery.”
As a CNCF inception project, every 12 months, CoreDNS will come to a vote with the TOC. A supermajority vote is required to renew a project at inception stage for another 12 months or move it to incubating or graduated stage. If there is not a supermajority for any of these options, the project is not renewed.
Slack is giving back to the Kubernetes and CNCF communities with free access as part of their not-for-profit program. We are also thrilled that they have extended their not-for-profit program to include 501(c)(6) organizations like CNCF:
Like many projects, companies and foundations, CNCF is a very active Slack user, leveraging the tool to communicate with members, ambassadors, and our larger cloud native community. Additionally, we have Slack channels for many of our projects like OpenTracing and Prometheus, while Kubernetes has its own channel with 8,062 registered users.
We’re excited about this move from Slack – and certainly appreciate improved archiving at no extra cost. Even more importantly, free Slack frees up CNCF to invest more in the k8s community for things like better documentation, continuous integration and scholarships and to attend community events.
gRPC is designed to make connecting and operating distributed systems easy and efficient, and Google has long used many of its underlying technologies and concepts. The current implementation is being used in several of Google’s cloud products and externally facing APIs.
Outside of Google, there’s a growing number of public users. According to The New Stack: “Within the first year of its launch, gRPC was adopted by CoreOS, Lyft, Netflix, Square, and Cockroach Labs among others. Etcd by CoreOS, a distributed key/value store, uses gRPC for peer to peer communication and saw huge performance improvements. Telecom companies such as Cisco, Juniper, and Arista are using gRPC for streaming the telemetry data and network configuration from their networking devices.”
Developers often work with multiple languages, frameworks, technologies, as well as multiple first- and third-party services. This can make it difficult to define and enforce service contracts and to have consistency across cross-cutting features such as authentication and authorization, health checking, load balancing, logging and monitoring and tracing — all the while maintaining efficiency of teams and underlying resources. gRPC can provide one uniform horizontal layer where service developers don’t have to think about these issues and can code in their native language. (Read this Container Solutions blog for an introduction to gRPC).
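To illustrate that uniform layer, a service contract can be defined once in Protocol Buffers and then compiled into client and server stubs for each language a team uses (the service and message names below are hypothetical, not part of any official API):

```
syntax = "proto3";

package demo;

// A hypothetical health-checking contract. Every language's generated
// stub enforces the same request/response types, so cross-cutting
// concerns live in the framework rather than in each service.
service HealthCheck {
  rpc Check (HealthRequest) returns (HealthResponse);
}

message HealthRequest {
  string service_name = 1;
}

message HealthResponse {
  enum Status {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
  }
  Status status = 1;
}
```

Running this file through the protoc compiler with a language plugin yields typed stubs, so a Go server and, say, a Java client share one enforced contract instead of hand-maintained REST documentation.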
“For large-scale Internet companies and high throughput storage and streaming systems where performance really matters, gRPC can shine. In addition, having a uniform framework to connect and operate cross-language services where difficult concepts like circuit breaking, flow control, authentication, and tracing are taken care of can be very useful,” said Varun Talwar, product manager at Google in charge of gRPC.
In the same New Stack article, Janakiram MSV wrote, “When compared to REST+JSON combination, gRPC offers better performance. It heavily promotes the use of SSL/TLS to authenticate the server and to encrypt all the data exchanged between the client and the server.”
gRPC aims to be the protocol that becomes a next-generation standard for server-to-server communications in an age of cloud microservices. InfoWorld recently reported on the 1.0 release and its ease of use, API stability, and breadth of support.
“We are excited to have gRPC be the sixth project voted in by CNCF’s Technical Oversight Committee (TOC). Being a part of the CNCF can help bolster the gRPC community and tap into new use cases with microservices, cloud, mobile and IoT,” continued Talwar.
Just a little more than one and a half years old, the project already has 12K GitHub stars (combined), >2,500 forks (combined) and >100 contributors.
“As the neutral home of Kubernetes and four additional projects in the cloud native technology space (Fluentd, Linkerd, Prometheus, and OpenTracing), having gRPC join CNCF will attract more developers to collaborate, contribute and grow into committers. We also look forward to bringing gRPC into the CNCF family by hosting a gRPC project update at our upcoming CloudNativeCon EU event,” said Chris Aniszczyk, COO, CNCF.
The move into CNCF comes with a change of license from BSD-3 license plus a patent grant to Apache v2. Read “Why CNCF Recommends ASLv2” to understand why CNCF believes this is the best software license for open source projects today.
“ASL v2 is a well-known and familiar license with companies and thus is more suitable for our next wave of adoption while likely requiring less legal reviews from new potential gRPC users and contributors,” said Dan Kohn, Executive Director, CNCF.
With a strong focus on working with existing stacks, gRPC’s pluggable architecture allows for integrations with service discovery systems like Consul, ZooKeeper, etcd and the Kubernetes API; tracing and metrics systems like Prometheus, Zipkin and OpenTracing; and proxies like nghttp2, Linkerd and Envoy. gRPC also encapsulates authentication, providing support for TLS mutual auth, or you can plug in your own auth model (like OAuth).
Being a part of the broader CNCF ecosystem will encourage future technical collaborations, according to Talwar.
Interested in hearing more from core gRPC developers? Look for a future post from Talwar on why the project joined CNCF. He’ll also talk about how the project hopes to gain new industry friends and partners to pave the way for becoming the de-facto RPC framework for the industry to adopt and build on top of. This Google Cloud Platform Podcast on gRPC with Talwar from last spring is also worth a listen.