Cloud native – orchestrating containers as part of a microservices architecture – is a departure from traditional application design. Kubernetes and other cloud native technologies enable more rapid software development at a lower cost than traditional infrastructure. However, the containerization wave can be a little confusing – which applications to lift, which ones to keep as is, which ones can’t be left behind, etc.
The Cloud Native Computing Foundation is offering a map to guide developers and users through this new terrain with the launch of a new webinar series.
The CNCF Webinar Series kicks off December 15th from 10:00 a.m. to 11:00 a.m. PT with a discussion on Cloud Native Strategy with Jamie Dobson of Container Solutions. Register for the webinar today!
Many companies see the benefits of highly available, scalable and resilient systems. They want to go ‘cloud native,’ but as they reach for containerized microservices they may actually be grabbing the golden egg rather than the goose that laid it.
In this webinar, we’ll look at a model for emerging strategy, classic mistakes and how to avoid them. We’ll also look at how we can iterate through the ‘cloud native’ problem space. Along the way, and before we get to recent history, we’ll visit ancient Greece, post-war Scandinavia, and the Jet Propulsion Lab. We’ll learn about heuristics, including the doughnut principle, and, of course, we’ll confront the key paradox that strategy tries to resolve: what is good for a business is not necessarily good for those who work in it.
Jamie is the CEO of Container Solutions, one of the world’s leading cloud native consultancies. He specializes in strategy and works with companies that have particularly difficult problems to solve.
During CloudNativeCon + KubeCon, we announced a new curriculum development, training and certification initiative for Kubernetes. To kick things off, CNCF will be hosting two in-person sessions for the CNCF Certification Working Group, to be held at the Linux Foundation’s San Francisco offices from 9am-5pm on December 8-9th and 14-15th.
If you are interested in helping develop the certification exam – especially if you are aiming to be in the initial class of Kubernetes Managed Service Providers (KMSP) – we encourage you to send your resident Kubernetes experts to one or both of the workgroup meetings outlined below.
If you’re unable to attend all four days, feel free to join for only a few days or split the time with another attendee. Technical representatives from Apprenda, Canonical, Cisco, Container Solutions, CoreOS, Deis, Huawei, Google, RX-M, Samsung, and Skippbox will be attending. Please RSVP by emailing Liz Kline at firstname.lastname@example.org and be sure to join our mailing list.
Working Group 1:
Time: 8:30am – 5:30pm PT
Location: LF Board Room at The Linux Foundation HQ; 1 Letterman Drive, San Francisco
The focus of these two days will be to conduct a Job Task Analysis (JTA), determining the skills, knowledge and abilities a certified candidate should be able to demonstrate. The outcome of this JTA will be our exam “blueprint” – this blueprint of topics for the Linux Foundation Certified System Administrator exam is a great example of the kind of material we’re aiming to produce for public consumption. Once this is complete, any interested training provider will then be able to develop secondary material that adequately prepares candidates to succeed on the certification exam.
Working Group 2:
Time: 8:30am – 5:30pm PT
Location: LF Board Room at The Linux Foundation HQ; 1 Letterman Drive, San Francisco
The second two-day session will center on writing the certification exam items, which will test the JTA blueprint elements identified earlier. The entire process will be facilitated by the Linux Foundation’s psychometrician to ensure we leave with the right content, allowing the team to move immediately into programming and testing the exam items.
The goal of our Kubernetes Certification, Training and KMSP program is to ensure enterprises receive the support they’re looking for to get up to speed and roll out new applications more quickly and more efficiently. We hope you’ll join us for these first two meetings, as this group will help define the program’s open source curriculum – available under the Creative Commons Attribution 4.0 International license for anyone to use. To keep the meetings quick and efficient, teleconferencing won’t be provided, but we will post drafts to GitHub and accept feedback there if you’re unable to join us in San Francisco.
If you are interested in additional details and the developments stemming from these meetings, please also join the certification working group mailing list.
I’m Lucas Käldström from Finland. I speak Swedish as my mother tongue in the same manner as 300,000 others in my country do. I just turned 17 and am attending my second year in general upper secondary school. In my spare time, I play soccer, program, go to the gym and read a good book.
I’ve always been interested in Math, and am quite good at it as well. So when I was about 13, I started to become interested in programming. I found it interesting because I could command the computer to do nearly anything. I’ve always loved creating things and it was fascinating to see that every change you make can make a difference in a good way. I started creating small programs with VB.NET and C# and about a year later switched to Node.js and web programming (HTML, CSS and JS). At this point I started to feel the power of open source and what it could do. Also, I made myself a Github account in order to be ready if I found a project to contribute to.
In the beginning of May 2015, I first noticed Kubernetes. I got so excited that I could use something Google has designed free of charge! Unfortunately, I did not have any normal Linux hardware I could use at the time. However, I had two Raspberry Pis that I had been tinkering with a little bit. My Bash skills were practically non-existent and most of the time I was scared of typing something into the command line; however, I realized that Raspberry Pi in fact is the ultimate tool to use when teaching Kubernetes to someone with little cloud computing experience as the cluster becomes really practical. You literally get the “hands-on” experience that is so valuable. This later became the main theme for a 163-page master’s thesis paper Martin Jensen and Kasper Nissen wrote. Likewise, Ray Tsang has been travelling a lot with his Raspberry Pi cluster as well, but now I am getting ahead of myself.
After a lot of hacking in May, I got Kubernetes running on my Raspberry Pi, but it was quite pointless, as Docker on ARM had a bug that made it impossible to run any containers via Kubernetes. I continued to improve my scripts during the summer, when not playing soccer or doing something else, like swimming! In August of 2015, I tried the same thing with the v1.0.1 Kubernetes release, and I got it working! That was a truly amazing feeling. I quickly started to expand the project to make it more generic, reproducible and faster. In mid-September, I had it working well, and noticed that a Glasgow University group had done the same thing; both of us had been working in parallel without knowing about each other.
I knew my work could help others, so I quickly published the source I had to the world; it was the right thing to do. I wanted to help more people run, test and learn from a Kubernetes cluster running on small Raspberry Pis… as well as other devices. This project is known as Kubernetes on ARM. After that, I continued to make lots of improvements to it, suddenly with feedback from others! One of the best moments was when someone reported the first issue and showed interest in helping to improve the project. I was part of the open source community!
I wanted to make my work even more widespread and bring it to the core. And so I did. In November-December I started making myself more familiar with the source code, the contributing process, etc.
On December 14, 2015, I got my first Pull Request merged. What a great feeling! I admit it was really small (a removal of 6 chars from a Makefile), but it was a big step personally to realize that the Kubernetes maintainers wanted my contributions. From January-March, I focused on getting the Kubernetes source code to cross-compile ARM binaries and release them automatically. Kubernetes v1.2.0 was the first release that shipped with “official” Google-built ARM binaries.
I then started to focus on getting an official deployment method multiarch-ready. I chose docker-multinode. The result of that work made Kubernetes v1.3.0 ship ARM and ARM 64-bit binaries and corresponding hyperkube images, which made it possible to run a cluster on different architectures with the same documented commands.
In April 2016, I was added to the Kubernetes organization. I couldn’t believe it! I was one of about 170 members at the time. One week later, I became a Kubernetes Maintainer, with a big M! It was totally crazy! I was 16 and had write permission to several repositories! But with great permissions comes great responsibility, and I have always looked out for the project’s best interests when reviewing something, for instance. In fact, maintainership should be a very important but boring task.
That same month, I noticed that a project called minikube was added to the Kubernetes organization. The repo was empty, just a markdown file, nothing more. I noticed Dan Lorenc started committing to the repo and I thought it would be cool to improve my Go skills by starting a project from scratch, so I started working with him. I became a maintainer for minikube as well, and am still the 6th-highest contributor to that project by commits.
I continued to contribute to Kubernetes during the summer and I worked on minikube until release v0.5.0 was out. Then I switched focus to improving the Kubernetes deployment experience. sig-cluster-lifecycle added me to their group and it turned out to be a great fit for me and my interests. Subsequent Kubernetes work includes:
Writing a proposal (a design goal) for Kubernetes about how multi-platform clusters should be supported, set up and managed. It was merged in August and is one of the guidelines for how Kubernetes should be improved.
Making Kubernetes easier to install in the sig-cluster-lifecycle group. We have made a tool called kubeadm that makes the setup process much easier than before. This is a crucial effort in my opinion, since right now it is hard for new users to know which deployment solution to use and how the solutions relate to each other. kubeadm will be a building block that all higher-level (turn-key) deployment tools can use. This way we hope we can somewhat standardize and simplify the Kubernetes deployment story. I’ve also written a document about what we’d like to do in time for v1.6; please check it out if you’re interested.
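As a rough sketch of the kubeadm flow (flags and output have changed between releases, so treat this as illustrative rather than definitive):

```shell
# On the machine that will become the master:
kubeadm init

# kubeadm init prints a join command containing a generated token; run it
# on each worker node, along the lines of:
kubeadm join --token <token> <master-ip>
```

The point is that the whole bootstrap collapses to two commands, which is what makes it a usable building block for higher-level tools.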
All in all, I’ve learned a lot thanks to being able to contribute to Kubernetes. And it has been a lot of fun to actually make a difference, which has now led to more than 150 commits to the Kubernetes organization in total. My Kubernetes journey started with a Raspberry Pi, and we don’t know how it ends.
One thing to remember is that I’ve never taken any computing or programming classes. This has been my spare-time hobby. I’ve learned practically everything I can just by doing and participating in the community, which really shows us the power of the internet combined with the will to create something and make a difference.
I am not monetarily paid for my work on the Kubernetes project, but I have gained knowledge, experience, better English communication skills, respect, trust and so on. That is worth more than money right now. Also, I got about five full-time job opportunities during the summer from large worldwide companies without even asking for a job or listing myself as job-searching.
What I’ve really enjoyed while coding on Github is my partial anonymity. My full name has always been there, my email, my location, etc., but I haven’t written that I’m just 17 (or 16 at the time), so people on Github haven’t judged me for being a minor or for not having been to University or for the fact that I’ve never taken a computing class or worked for a big tech company. The community has accepted me for who I am, and respected me.
That’s the power of open source, in my opinion. Regardless of who you are when you’re away from the keyboard, you are allowed to join the party, have fun with other like-minded people and make a difference together. Diversity is the true strength of open source, and I think diversity scholarships like the one the Cloud Native Computing Foundation provided me to attend CloudNativeCon/KubeCon 2016 in Seattle are a powerful way to make the community even stronger. Thanks!
I came to the tech industry by way of my earlier career as a motorcycle stunt woman, and leading up to the conference I was wondering if I would feel welcome and able to contribute as a junior developer with an unorthodox background. I was quickly put at ease by Chen’s comments, Dan Kohn’s opening keynote discussing the CNCF’s dedication to diversity, and most of all, by the wonderful people I met.
My goals for the conference were to learn as much as possible about the technologies behind cloud native computing, find a way to start contributing as a junior developer, and meet some inspirational people. I was pleased that all these goals were met, and here are some highlights.
Dan and Piotr pointed out that anything you are doing with Kubectl on the command line, you can also do in the K8s Dashboard. I have been exclusively using Kubectl in the terminal for the past few months and could really see the benefit of additionally using the Dashboard to gain insight into my cluster’s state, performance and activity. As a visual learner, it’s great to have another way to wrap my head around what’s going on inside my Kubernetes cluster.
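As a sketch of that equivalence (the commands are illustrative, my-app and my-app-pod are placeholder names, and the Dashboard URL reflects the setup of the time):

```shell
# Cluster state from the terminal -- roughly what the Dashboard shows visually:
kubectl get pods --all-namespaces     # the Dashboard's workloads overview
kubectl describe deployment my-app    # a Dashboard detail page (placeholder name)
kubectl logs my-app-pod               # the Dashboard's logs view (placeholder name)

# The Dashboard itself is typically reached through the API server proxy:
kubectl proxy                         # then browse http://localhost:8001/ui
```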
Officially, “Helm is a tool that streamlines the creation, deployment and management of Kubernetes-native applications.” My description of Helm? A way to whip up pre-made recipes for components you might need in your Kubernetes cluster. For instance, if you need a WordPress deployment on a Kubernetes cluster, you simply run:
$ helm install stable/wordpress
My favorite idea from this talk was how new users of Kubernetes can use Helm to learn about what components and configurations are needed in a Kubernetes cluster.
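One way to do exactly that kind of learning with the Helm CLI of the time (treat these commands as a sketch, since flags have changed between releases):

```shell
# Download a chart locally and read how it wires together Kubernetes objects
# (using stable/wordpress from the example above):
helm fetch stable/wordpress --untar
ls wordpress/templates/        # Deployment, Service, Secret manifests, etc.

# Or view a chart's description and default configuration without installing:
helm inspect stable/wordpress
```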
Eduardo discussed Fluentd, a product I have been learning about, which “is an open source data collector for unified logging layer.” Treasure Data created a vibrant community around Fluentd with more than 600 user-contributed plugins written in Ruby, and Fluentd was announced as an official CNCF project on the first day of the conference.
Eduardo took the time to personally discuss Fluentbit, a slimmer data forwarder, and how we could use it in our project. He explained that support for creating Golang plugins had just been added and talked about how I could get involved by writing a plugin. Since I am also learning Golang and could use Fluentbit, writing a plugin seems like an excellent contribution that will allow me to continue my deep dive into managing data between containers.
On the second day of the conference the CNCF hosted a diversity luncheon. The discussion around the lunch tables focused on challenges facing diverse individuals entering the industry. I had the opportunity to speak with senior female developers with successful careers and hear their advice on entering and navigating the industry. It was a wonderful chance to focus on how we can continue attracting diverse talent who can build stronger and more relevant technology for everyone.
CloudNativeCon 2016 was a wonderful first conference for me and although the whirlwind of a conference is tiring, I left feeling motivated and inspired. The conference made me feel like I was a part of the community and technology I have been working with daily.
Leah Petersen (Twitter: @eccomi_leah) is currently an intern with the CNCT team at Samsung SDS, the newest Platinum Member of the Cloud Native Computing Foundation. She is finishing up a year long program at Ada Developers Academy, which is a training program located in Seattle for women who want to become software developers.
The sold-out CloudNativeCon/KubeCon NA 2016 has come to an end. More than 1,000 end users, leading contributors and developers from around the world came together for two days in Seattle to exchange knowledge, best practices, and experiences using Fluentd, Kubernetes, Prometheus, OpenTracing and other cloud native technologies. With 108 sessions, keynotes, lightning talks, breakouts, and BoFs and 38 sponsors, CloudNativeCon/KubeCon NA 2016 was a fabulous success. You can enjoy the entire conference in this YouTube playlist! Follow along with the conversation on Twitter too using the hashtags #CloudNativeCon and #KubeCon.
Speakers from a variety of large and small companies talked about how they were using Kubernetes, Prometheus, OpenTracing, building solutions around them and cloud native concerns like security and networking. Key sessions from end users, developers and contributors included:
Bringing the community together to help build professional and personal connections was a main theme of the conference. Attendees got the chance to speak with CNCF members and show sponsors at booths in the exhibit hall while chowing down on mac ’n’ cheese, cupcakes, and popcorn, hanging with a spaceman, and winning flying drones.
Parties hosted at the Seattle Art Museum, Loulay Kitchen & Bar and Hard Rock Cafe provided even more opportunities to make connections with one another, mingle and create an even stronger community bond!
The New Stack’s Judy and Alex Williams treated the community to fresh coffee and virtual 3D-printed pancakes both mornings of the conference. Early risers also caught a panel hosted by Alex featuring:
Host John Furrier from theCUBE was on-site chatting with Apprenda, Canonical, CNCF, CoreOS, Microsoft, Platform9, Red Hat, Samsung and Weaveworks. Watch all the great discussions at http://siliconangle.tv/kubecon-2016/.
We welcomed five CNCF Diversity Scholarship winners from around the world to CloudNativeCon/KubeCon and tripled the number of scholarships offered in 2017. Stay tuned for a series of blog posts from the 2016 winners at www.cncf.io/newsroom/blog and an application to apply for next year’s scholarships!
The diversity luncheon held at Loulay Kitchen & Bar featured discussions with Red Hat’s Diane Mueller, Intel’s Michelle Xu and 75 other attendees around diversity and inclusion.
CloudNativeCon/KubeCon NA 2016 would not have been possible without the help of its sponsors, speakers, attendees, and organizers. Thanks so much to all of you! Enormous gratitude to the Events team for their tireless commitment to planning and executing the Conference! Our Diamond and Platinum sponsors deserve a special mention as they made all the food, drinks, video recordings, and swag possible:
On to 2017
In 2017, CloudNativeCon/KubeCon is coming to Berlin, Germany and Austin, Texas! Both shows will bring together leading contributors to showcase a full range of technologies that support the cloud native ecosystem and help bring cloud native project communities together. Early registration, sponsorship and CFP information is now open for both Berlin and Austin. Note, Berlin’s early bird registration discount ends December 6th and CFP submissions are due December 16th.
To further cloud native education, CNCF will also host a Cloud Native/Kubernetes 101 Roadshow in the Pacific Northwest, January 24-26, 2017, visiting Vancouver, Seattle, and Portland to reach new end users, developers, and other potential community members and share the story of how cloud native technologies—orchestrating containers as part of a microservices architecture—are the best way to deploy modern applications. Visit https://www.cncf.io/events to attend!
Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept Fluentd as the fourth hosted project after Kubernetes, Prometheus and OpenTracing. You can find more information on the project in this proposal presented to the TOC recently.
As CNCF builds out multiple paths for adopting cloud native computing, the TOC is looking to unite high-quality and relevant projects into the Foundation. Fluentd was started by Treasure Data in 2011 and is an open source data collector that allows you to implement a unified logging layer. Logging is a crucial part of cloud native architectures and aligns well with CNCF’s goal to significantly increase the overall flexibility and reliability of modern distributed systems capable of scaling to tens of thousands of self-healing, multi-tenant nodes.
Fluentd was created to solve log/data collection and distribution needs at scale, offering a comprehensive and reliable service to be implemented in conjunction with microservices and generic cloud monitoring tools. With more than 650 plugins connecting it to many data sources and data outputs, it is no wonder Fluentd was the 2016 Bossie Awards winner for best open source datacenter and cloud software.
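To give a flavor of how those plugins fit together, a minimal Fluentd configuration pairs an input plugin with an output plugin. The paths and tag below are placeholders, and directive details vary between Fluentd versions:

```
# Tail an application log file (illustrative path and tag):
<source>
  @type tail
  path /var/log/app.log
  pos_file /var/log/app.log.pos
  tag app.logs
  format none
</source>

# Route matching events to stdout; swapping in another output plugin
# (elasticsearch, s3, kafka, ...) is a one-block change.
<match app.logs>
  @type stdout
</match>
```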
100 contributors, 50% of top contributors are commercially sponsored
4651 stars, 550 forks
More than 650 plugins available
Additionally, Fluentd has a large adopter community consisting of Atlassian, LINE, Microsoft, Nintendo, Google Cloud Platform, Docker, Kubernetes, and GREE, among others. Users include:
Stay tuned for a blog post from Eduardo Silva, Software Engineer at Treasure Data and core Fluentd Contributor, who will dive deep into the project’s roots and technical makeup and why Fluentd joined CNCF.
Today the CNCF is pleased to launch a new training, certification and Kubernetes Managed Service Provider (KMSP) program.
The goal of the program is to ensure enterprises get the support they’re looking for to get up to speed and roll out new applications more quickly and more efficiently. The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification.
Interested in this course? Sign up here to pre-register. The course, expected to be available in early 2017, is open now at the discounted price of $99 (regularly $199) for a limited time, and the certification program is expected to be available in the second quarter of 2017.
The KMSP program is a pre-qualified tier of highly vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes. The KMSP partners offer SLA-backed Kubernetes support, consulting, professional services and training for organizations embarking on their Kubernetes journey. In contrast to the Kubernetes Service Partners program outlined recently in this blog, becoming a Kubernetes Managed Service Provider requires meeting the following additional requirements: three or more certified engineers, active contributions to Kubernetes, and a business model to support enterprise end users.
As part of the program, a new CNCF Certification Working Group is starting up now. The group will help define the program’s open source curriculum, which will be available under the Creative Commons Attribution 4.0 International license for anyone to use. Any Kubernetes expert can join the working group via this link. Google has committed to assist, and many others, including Apprenda, Container Solutions, CoreOS, Deis and Samsung SDS, have expressed interest in participating in the Working Group.
To learn more about the new program and the first round of KMSP partners that we expect to grow weekly, check out today’s announcement here.
By: Alexis Richardson, chairman of the Cloud Native Computing Foundation TOC
Here is the news: CNCF is on track to provide common open source tools for cloud native apps.
We are a young organisation, but we are ambitious. Our goal is to enable customers to deliver Cloud Native applications faster. We identify and promote high-quality cloud native software projects that solve real customer problems, and at the same time, we also find ways to support and grow their communities.
We believe that a well-supported community delivers innovation at a faster pace than any other model and thereby repays customers’ confidence in CNCF projects. Doing this creates interest from the market, which in turn brings in more investment. This funds our supporting infrastructure, education, marketing, and initiatives around interoperability.
Through the CNCF, we intend to create a commons consisting of software that you can trust. It is natural that this software be diverse — Cloud Native is a big area with many independent projects. As market interest grows, we expect to see greater cohesion especially once vendors ship Cloud Native stacks, distributions, applications and services.
Maybe you will meet some of those vendors at CloudNativeCon this week. You will certainly meet many leading technologists from the community at large.
What we have done
Once we formed the CNCF as part of The Linux Foundation, our first task was to create a Technical Oversight Committee that identified and guided the first set of projects into the CNCF. This TOC was finalised in March — less than 8 months ago.
The next step was to build velocity by engaging open source projects that further cloud native computing. Which ones? The obvious leaders are already on Github and progressing at high velocity. How could we partner with them? We were lucky to have two leading candidates in play early on: Kubernetes and Prometheus. These were welcomed with strong consensus votes from the TOC.
Our view for these projects was “first do no harm.” We did not want to burden projects with intrusive governance, believing instead that well-run projects are more than able to manage their own core decision making processes. This has allowed us to work *with* project leads to understand their needs as we go — and a picture is starting to emerge.
At the time of writing, CNCF now has three terrific projects — Kubernetes, Prometheus and OpenTracing. Fluentd is being voted on. There are others that are getting close or in the pipeline. Exciting times! If your project might like to apply, see here.
As I said, we wanted to build velocity. But we started slowly. We wanted to find high quality projects — so we were patient and cautious and we sought consensus in order to establish trust and ways of working together. This created a baseline, which means we can now move faster and involve the community more. Please do get involved.
Here is what we learned in the first 8 months:
Customers want us to start building up the bigger Cloud Native story and ‘brand’. We are close to having enough excellent projects to do that in a compelling way. Customers also want interoperability e.g. common APIs and formats for networking and monitoring — this is happening at the project level but we can make it more obvious and apparent to end users.
We need to be even more community oriented — e.g., open up to younger projects (which is under discussion) and standardise on DCO with CLA as the exception, not the other way round.
Getting real work done takes time and resources. The CNCF Executive Director, Dan Kohn, has created a team and structure for this. You can help us — with initiatives around testing & automation, documentation, example apps and patterns, and best practices.
Overall, I think we’ve learned that our core assumptions are sound. We must back strong projects, but with a light touch. In the era of GitHub, people do not need to be told what to do, they need help, services and common infrastructure that we can provide.
We ask them how we can help — we don’t tell them, we don’t make them join committees. We love open source, which is fast, more than open standards, which are important but emerge slowly over time. We are not a kingmaker organisation — we believe the market & community will select leaders (plural) in time.
Let’s all innovate together
Let’s make sure that with the CNCF, we create an engine for innovation for the next era of applications. The problems that CNCF projects solve are all about translating the lessons learned from the pioneers of cloud native — companies like Netflix, Twitter and Google — so that “anyone can do it.”
The focus of innovation therefore has to come from enablement as well as science. It is all fine to talk about scalability and automation, but you know what’s cool? People are cool. As more people and countries get connected and as web applications become better, we desperately need more developers who can make all of this work.
Innovation comes from people — developers especially — and enterprise end users. This means you! Help us build the common set of tools that *you* need. In the commons, diversity begets quality — ALL are welcome.
Those building microservices at scale understand the role and importance of distributed tracing: after all, it’s the most direct way to understand how and why complex systems misbehave. When we deployed Dapper at Google in 2005, it was like someone finally turned the lights on: everything from ordinary programming errors to broken caches to bad network hardware to unknown dependencies came into plain view.
A screenshot illustrating the multi-process trace of a production workflow
Everyone running a complex distributed system deserves — no, needs — this sort of insight into their own software. So why don’t they already have it?
The problem is that distributed tracing has long harbored a dirty secret: the necessary source code instrumentation has been complex, fragile, and difficult to maintain.
Application developers want the flexibility to choose or swap out a tracing system without touching their instrumentation. They also need the instrumentation in their web framework to be compatible with the instrumentation in their RPC system or database client.
Open-Source package developers need to make their code visible to tracing systems, but they have no way of knowing which tracing system the containing process happens to use. Moreover, for services and RPC frameworks, there’s no way to know specifically how the tracing system needs to serialize data in-band with application requests.
Tracing vendors can’t instrument the world N times over; by using OpenTracing, they can achieve coverage across a wide swath of both open source and proprietary code in one fell swoop.
As OpenTracing gains traction with each constituency above, it then becomes more valuable for the others, and in this way it fosters a virtuous cycle. We have seen this at play with application developers adding instrumentation for their important library dependencies, and community members building adapters from OpenTracing to tracing systems like Zipkin in their favorite language.
Last week, the OpenTracing project joined the Cloud Native Computing Foundation (CNCF). We respect and identify with the CNCF charter, and of course it’s nice for OpenTracing to have a comfortable – and durable – home within The Linux Foundation; however, the most exciting aspect of our CNCF incubation is the possibility for collaboration with other projects that are formally or informally aligned with the CNCF.
To date, OpenTracing has focused on standards for explicit software instrumentation: this is important work and it will continue. That said, as OpenTracing grows, we hope to work with others in the CNCF ecosystem to standardize mechanisms for tracing beyond the realm of explicit instrumentation. With sufficient time and effort, we will be able to trace through container-packaged deployments with little to no source code modification and with vendor neutrality. We couldn’t be more excited about that vision, and by working within the CNCF we believe we’ll get there faster.
During last year’s O’Reilly Velocity conference in Amsterdam, Björn Rabenstein & Julius Volz of Prometheus presented on “Service Instrumentation, Monitoring, and Alerting with Prometheus.” Their comprehensive tutorial kicked off with an introduction into the fundamental concepts of Prometheus and the various components of its ecosystem including:
core collection server with its time-series database;
various exporters to export metrics from third-party systems into the Prometheus ecosystem;
alerting component Alertmanager; and
Pushgateway for metrics of short-lived jobs and much more.
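Tying those components together, a minimal Prometheus configuration might look like the sketch below. The target addresses and ports are placeholder assumptions, not part of the tutorial, and the exact schema depends on the Prometheus version you run:

```yaml
# Illustrative prometheus.yml
global:
  scrape_interval: 15s            # how often to pull metrics

scrape_configs:
  - job_name: 'prometheus'        # the core server scraping itself
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'pushgateway'       # metrics pushed by short-lived jobs
    honor_labels: true
    static_configs:
      - targets: ['pushgateway:9091']

alerting:
  alertmanagers:                  # forward firing alerts to the Alertmanager
    - static_configs:
        - targets: ['alertmanager:9093']
```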