KubeCon + CloudNativeCon San Diego | November 18 – 21 | Learn more

CNCF Openness Guidelines

By | Blog

CNCF is an open source technical community where technical project collaboration, discussions, and decision-making should be open and transparent. Please see our charter, particularly section 3(b), for more background on CNCF values.

Design, discussions, and decision-making around technical topics of CNCF-hosted projects should occur in public view such as via GitHub issues and pull requests, public Google Docs, public mailing lists, conference calls at which anyone may participate (and which are normally published afterward on YouTube), and in-person meetings at KubeCon + CloudNativeCon and similar events. This includes all SIGs, working groups, and other forums where portions of the community meet.

This is particularly important in light of the Linux Foundation’s (revised) Statement on the Huawei Entity List Ruling. (Note that CNCF is part of the Linux Foundation.) Our technical community operates openly and in public which affords us exceptions to regulations other closed organizations may have to address differently. This open, public technical collaboration is also critical to our community’s success as we navigate competitive and shifting industry dynamics. Openness is particularly important in any discussions involving encryption since encryption technologies can be subject to Export Administration Regulations.

If you have questions or concerns about these guidelines, I encourage you to discuss them with your company’s legal counsel and/or to email me and Chris Aniszczyk at openness@cncf.io. Thank you.

Apple Joins Cloud Native Computing Foundation as Platinum End User Member

By | Blog

The Cloud Native Computing Foundation (CNCF), which sustains and integrates open source technologies like Kubernetes® and Prometheus™, today announced that Apple has joined the CNCF as a Platinum End User Member.

Apple has completely revolutionized personal and enterprise technology, has long been a pioneer in cloud native computing, and was one of the earliest adopters of container technology. Apple has also contributed to several CNCF projects, including Kubernetes, gRPC, Prometheus, Envoy Proxy, and Vitess, and hosted the FoundationDB Summit at KubeCon + CloudNativeCon last year.

“Having a company with the experience and scale of Apple as an end user member is a huge testament to the vitality of cloud native computing for the future of infrastructure and application development,” said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation. “We’re thrilled to have the support of Apple, and look forward to the future contributions to the broader cloud native project community.”

As part of Apple’s Platinum membership, Tom Doron, Senior Engineering Manager at Apple, has joined CNCF’s Governing Board.

Apple will join 87 other end user companies, including Adidas, Akatsuki, Amadeus, Atlassian, AuditBoard, Bloomberg, Box, Cambia Health Solutions, Capital One, Concur, Cookpad, Cruise, Curve, DENSO Corporation, DiDi, Die Mobiliar, DoorDash, eBay, Form3, GE Transportation, GitHub, Globo, Goldman Sachs, Granular, i3 Systems, Indeed, Intuit, JD.com, JP Morgan, Kuelap, Mastercard, Mathworks, Mattermost, Morgan Stanley, MUFG Union Bank, NAIC, Nasdaq, NCSOFT, New York Times, Nielsen, NIPR, Pinterest, PostFinance, Pusher, Reddit, Ricardo.ch, Salesforce, Shopify, Showmax, SimpleNexus, Spotify, Spredfast, Squarespace, State Farm, State Street, Steelhouse, Stix Utvikling AS, Testfire Labs, Textkernel, thredUP, TicketMaster, Tradeshift, Twitter, Two Sigma, University of Michigan – ARC, Upsider, Walmart, Werkspot, WeWork, WikiMedia, WooRank, Workday, WPEngine, Yahoo Japan Corporation, Zalando SE, and Zendesk in CNCF’s End User Community. This group meets monthly and advises the CNCF Governing Board and Technical Oversight Committee on key challenges, emerging use cases, and areas of opportunity and new growth for cloud native technologies.

Square: How Vitess Enables ‘Near Unlimited Scale’ for Cash App

By | Blog

Four years ago, Square branched out into peer-to-peer transactions with its Cash App. Users then started increasing by the minute, and the team needed a long-term solution for scalability. Vitess was the answer. With Vitess, Cash App didn’t have to completely change how developers built applications: to respond to increased customer demand, only about 5% of the system had to change rather than 95%. Additionally, Cash App developers can now do multiple shard splits per week with less than a second of downtime. Read the full case study here.

 

Reflections on the Fifth Anniversary of Kubernetes

By | Blog

Guest post from the Kubernetes Project

Five years ago, Kubernetes was released into the world. Like all newborns, it was small, limited in functionality, and had only a few people involved in its creation. Unlike most newborns, it also involved a great deal of code written in Bash. Today, at the five year mark, Kubernetes is full grown, and while a human would be just entering kindergarten, Kubernetes is at the core of production workloads from startups to global financial institutions.

They say that success has a thousand parents and failure is an orphan, but in the case of Kubernetes the truth is that its success is due to its thousands (and thousands) of parents. Kubernetes came from humble beginnings, with just a handful of developers and in record time grew into its current state with literally thousands of contributors – and even more people involved in meetups, docs, education, release management, and supporting the broader community. At many points in the project, when it seemed that it might be moving too fast or becoming too big, the community has responded and stepped up with new ways of organizing and new ways of supporting the project so that it could have continued success. It is an amazing achievement to see a project reach this scale and continue to operate successfully, and it is a tribute to each and every member of our amazing community that we’ve been able to do this while maintaining an open, neutral and respectful community.

Five years in, it’s worth reflecting on the things that Kubernetes has achieved. It is one of the largest, if not the single largest, open source projects on the planet. It has managed to sustain a fast pace of development across a team of thousands of distributed engineers working in a myriad of different companies. It has merged tens of thousands of commits while sustaining a regular release cadence of high-quality software that has become mission-critical for countless organizations and companies. This would be no small achievement within a single company, but to do this while being driven by dozens of different companies and thousands of individuals (many of whom have other jobs or even school!) is truly amazing. It is a credit to the selflessness of all of the folks in the community who chop wood and carry water every single day to ensure that our tests are green(ish), our releases get patched, our security is maintained, and our community conducts itself within the bounds of our code of conduct. To all of the people who do this often tedious, and sometimes emotionally draining, work: you deserve our deepest thanks. We could never have gotten here without you.

Of course, the story of Kubernetes isn’t just a story of community, it is also a story of technology. It is breath-taking to see the speed with which the ideas of cloud-native development have shaped the narrative of how reliable and scalable applications are built. Kubernetes has become a catalyst for the digital transformation of organizations toward cloud-native technologies and techniques. It has become the rallying point and supporting platform for the development of an entire ecosystem of projects and products that add powerful cloud-native capabilities for developers and operators. By providing a ubiquitous and extensible control-plane for application development, Kubernetes has successfully uplifted a whole class of higher-level abstractions and tools.

One of the most important facets of the Kubernetes project was knowing where it should stop. This has been a core tenet of the project since the beginning and though the surface area of Kubernetes continues to grow, it has an asymptotic limit. Because of this, there is a flourishing ecosystem on top of and alongside the core APIs. From package managers to automated operators, from workflow systems to AI and deep learning, the Kubernetes API has become the substrate on which a vibrant cloud-native biome is growing.

As Kubernetes turns five, we naturally look to the future and contemplate how we can ensure that it continues to grow and flourish. In the celebration of everything that has been achieved, it must also be noted that there is always room for improvement. Though our community is broad and amazing, ensuring a diverse and inclusive community is a journey, not a destination, and requires constant attention and energy. Likewise, despite the promise of cloud-native technologies, it is still too hard to build reliable, scalable services. As Kubernetes looks to its future, these are core areas where investment must occur to ensure continued success. It’s been an amazing five years, and with your help the next five will be even more amazing. Thank you!

A Look Back At KubeCon + CloudNativeCon Barcelona 2019

By | Blog

 

Hot off an amazing three days in Barcelona, here is a snapshot into some of the key highlights and news from KubeCon + CloudNativeCon Europe 2019! This year we welcomed more than 7,700 attendees from around the world to hear compelling talks from CNCF project maintainers, end users and community members.

The annual European event grew by more than 3,000 attendees compared to last year’s event in Copenhagen. At the conference, CNCF announced that its ever-growing ecosystem has hit over 400 member companies, more than 88 of which are now end user members. We also learned that Kubernetes has seen more than 2.66 million contributions from 26,214 contributors.

This year we welcomed Bryan Liles as a KubeCon + CloudNativeCon co-chair! He took the stage to announce all the great project news that has come out in the last couple months.

During the opening keynotes we also heard from CNCF Executive Director Dan Kohn, who spoke about the key factors that contributed to the massive growth of the Kubernetes ecosystem, and CNCF Director of Ecosystem Cheryl Hung, who shared CNCF’s growth and plans to continue building a positive community. Lucas Käldström (CNCF Ambassador, Independent) and Nikhita Raghunath (Software Engineer, Loodse) shared insights on the what, why, and how of contributing to Kubernetes.

Kubernetes Boothday Party!

While we celebrated the cloud native community, we also got to celebrate Kubernetes’ fifth birthday with a “Boothday Party” and donut wall!  

Continuing to Embrace Diversity in the Ecosystem

At KubeCon + CloudNativeCon EU, CNCF’s diversity program offered scholarships to 56 recipients, from traditionally underrepresented and/or marginalized groups, to attend the conference! The $100K investment for Barcelona was donated by CNCF, Aspen Mesh, Google Cloud, Red Hat, Twistlock and VMware.

CNCF has offered more than 300 diversity scholarships to attend KubeCons since November 2016.

We also had a wonderful time at the Diversity lunch and EmpowerUs events!

Take Good Care: Open Sourcing Mental Illness

This year at KubeCon + CloudNativeCon EU, we made sure that self-care and mental wellness were top of mind for everyone. As a result, we got to hear inspiring talks on these topics throughout the conference, and we had a booth 100% dedicated to relaxation and mental health. We felt so much community support!

All Attendee Party at Poble Espanyol!

Our events team organized a fantastic party at Poble Espanyol, celebrating the many achievements of the cloud native ecosystem in the beautiful Spanish courtyard!

Keynote and Session Highlights

All presentations and videos are available to watch. Here is how to find all the great content from the show:

  • Keynotes, sessions and lightning talks can be found on the CNCF YouTube
  • Photos can be found on the CNCF Flickr
  • Presentations can be found on the Conference Event Schedule: click on a session and scroll to the bottom of the page to download the PDF of the presentation

“From the people Computer Weekly spoke to at Kubecon-CloudNativeCon, there is a sense that Kubernetes is breaking out of the open source developer space into the enterprise.” Cliff Saran, ComputerWeekly

“Five years after Google released the toolkit for managing workloads to the open source community, Kubernetes became the celebrated boy. Nothing seems to stop its advance, especially because the developers have embraced this system.” Alfred Montie, Computable.nl

“The Kubernetes container-orchestration system is one platform that is both surviving and thriving.” Nick Marinoff, SiliconANGLE

“It’s apparent that whether an application lives on the Google Cloud Platform or in an on-premises data center, it can be done with containers.” Kristen Nicole, SiliconANGLE

“As stated by Dan Kohn, Kubernetes has emerged on the shoulders of giants: on Linux, the Internet and various cluster manager implementations of cloud-native businesses from Spotify to Facebook to Google. The exciting question for the conference is which innovations will come on the shoulders of the giant Kubernetes in the next few days.” Josef Adersberger, Alex Krause, Heise Online

“The obvious conclusion: If you’re interested in enterprise IT infrastructure, Kubernetes should be your technology of choice, and KubeCon is the place to be.” Jason Bloomberg, SiliconANGLE

It’s a Wrap!

Save the Dates!

Register now for KubeCon + CloudNativeCon + Open Source Summit China 2019, scheduled for June 24-26, 2019 at the Shanghai Expo Centre, Shanghai, China.

Register now for KubeCon + CloudNativeCon North America 2019, scheduled for November 18-21, 2019 at the San Diego Convention Center, San Diego, California. The CFP closes July 12.

Save the date for KubeCon + CloudNativeCon Europe 2020, scheduled for March 30-April 2, 2020 in Amsterdam, The Netherlands.

Observability should not slow you down

By | Blog

Originally published on Medium by Travis Jeppson, Sr. Director of Engineering, Nav Inc

In any application, the lack of observability is the same as riding a bike with a blindfold over your eyes. The only inevitable outcome is crashing, and crashing always comes with a cost. The cost of crashing tends to be our only focus when we look at observability, but it isn’t the only cost. The other cost of observability usually isn’t addressed until it becomes more painful than crashing: the cost of maintenance and adaptability.

I’ve listened to, and watched, many conference talks on this subject, and I’ve had my fair share of conversations with vendors as well. Maintenance and adaptability aren’t generally mentioned. These topics have only come up when I’m talking to other companies about their adopted platform and how they actually integrated observability into real-life situations, and from my own experiences doing the same. The reason these topics come up only after some practical application is that we’ve all hit the proverbial wall.

We’ve all run into problems, or incompatibilities, or vendor lock-in that feels almost impossible to get rid of. Our observability begins to dwindle, the blindfold starts falling down over our eyes, and we’re again heading to an inevitable fate. What can be done? Revisit the entire scenario? Wait for a major crash and create an ROI statement to show we have to re-invest in major parts of our applications? This can’t possibly be the only way to deal with this problem. This is an anti-pattern to the way we build software. Observability is supposed to empower speed and agility, not hold it back.

There is another way, and it starts by determining the key elements on which you won’t make concessions. During the last iteration of trying to get this right at Nav, we had a lot of discussions about our previous attempts. The first attempt was a solution we initially thought had unlimited integrations; it turned out it didn’t have the one we needed: Kubernetes. We also couldn’t produce custom metrics from our applications, so that solution had to go. We weren’t about to wait for them to tell us an integration was ready; we were ready to move. We then decided to go with a solution that was end-to-end customizable, where we could spend time developing our telemetry data and deciding how to interpret it. This, unfortunately, forced us into a maintenance nightmare. On the third iteration, however, we decided to settle somewhere in the middle. We sat down, defined our “no compromise” priorities, and started finding solutions that fit. Here’s how we saw the priorities for Nav.

1. Customization! We needed adaptability, no waiting for integrations

First and foremost the solution needed to allow for custom metrics, and handle them like a first-class citizen. This needed to be true for our infrastructure metrics as well as anything coming from our applications. Adaptability was key in our decision: If the solution we chose was adaptable, then we should be free to adjust any component of our infrastructure without having to check if our observability would be affected.

2. No vendor-specific code in our applications, not even libraries

This may seem a little harsh at first, but the fact of the matter is that we didn’t want to have a dependency on a vendor. We use a wide variety of languages at Nav: Ruby, Elixir, Go, Python, JavaScript, Java; the list goes on. It was almost impossible to find a vendor solution that would work with all of those languages. We decided the solution needed to be language-agnostic, which meant we couldn’t have any vendor code or libraries in our applications. The other side of this is that we didn’t want to be locked into the solution, since we had previously run into that problem.

3. HELP! The maintenance cannot be overwhelming

This meant that at some point we would probably need a vendor to help us out. We didn’t want the uptime of our observability platform to be our concern; we wanted to worry about the uptime of our application instead. Likewise, we didn’t want to worry about the infrastructure of the observability platform; we wanted to worry about our own. Catch my drift? We also wanted some guidance about what to pay attention to, a simple way to build dashboards, and the ability to let pretty much every engineer build their own dashboards around their own metrics.

Now the Rest: Our Second Tier of Priorities

Now we get into the “like to have” priorities. The following were more of a wish list, while the top three were dealbreakers for whatever solution we came up with. Fortunately, as will be illustrated later, we didn’t need to compromise on any of our priorities.

4. Alerting needed to be easy to do, and integrate with our on-call solution

With our end-to-end customized solution (attempt #2 at observability), alerting was ridiculously tedious. Each alert was a JSON document with so many defining parts that we never really had any good alerts set up. We also caused a lot of on-call burnout due to a large number of false positives. We didn’t want to repeat this.

5. We didn’t want to pay the same price for our non-production environments as we do for production

It is a giant pet peeve of mine that anyone should be required to pay the same price for observability in non-production environments just because those environments are the same size as production. Why must this be? I don’t care nearly as much if a development environment goes down for 5 minutes, but I definitely care if production is down for 5 minutes.

The Final Decision: Nav’s Tri-Product Solution

With these priorities in hand, we set out to create a solution that worked. To cut a long story short, there was no perfect solution: nothing could give us the top three priorities on its own. It turned out we needed multiple pieces working seamlessly together.

Prometheus

Prometheus is an open source metric aggregation service. The fantastic thing about Prometheus is that it is built around a standard, which they also created. This standard is called the exposition format. You can provide a text-based endpoint and Prometheus will come by and “scrape” the data off of this endpoint and feed it into a time series database. This … is … simply … amazing! Our developers were able to write this endpoint in their own code bases, and publish any kind of custom metric they desired.
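To make the scrape model concrete, here is a minimal sketch of the text that such an endpoint serves. The `render_counter` helper and the metric names are hypothetical, written here only to show the shape of the exposition format (HELP/TYPE comment lines followed by a sample line):

```python
# Hypothetical helper (not part of any Prometheus client library):
# render one counter metric in the Prometheus text exposition format.
def render_counter(name, help_text, value, labels=None):
    """Return a text-format snippet a Prometheus scraper could parse."""
    label_str = ""
    if labels:
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name}{label_str} {value}\n"
    )

print(render_counter("payments_processed_total",
                     "Payments handled by the app.",
                     1027,
                     {"status": "ok"}))
```

In practice an application serves text like this at an HTTP path (commonly `/metrics`), and the Prometheus server scrapes it on a schedule.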

StatsD

StatsD, originally written by Etsy, provided a way for us to push metrics from software that wasn’t associated with a web server, such as short-lived jobs or event-driven computations.
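Because the StatsD wire format is just small UDP text datagrams of the form “name:value|type”, the push model can be sketched with nothing but the standard library. The host, port, and metric names below are illustrative assumptions:

```python
import socket

# StatsD metrics are fire-and-forget UDP datagrams of the form
# "name:value|type" ("c" = counter, "ms" = timer, "g" = gauge).
def statsd_send(name, value, metric_type, host="127.0.0.1", port=8125):
    payload = f"{name}:{value}|{metric_type}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))  # no connection, no blocking
    return payload  # returned so the caller can inspect what was sent

# A short-lived job or event handler can report its work on the way out:
statsd_send("jobs.completed", 1, "c")       # counter increment
statsd_send("jobs.duration_ms", 187, "ms")  # timing sample
```

Because the datagram is fire-and-forget, a crashing metrics backend can never block or slow the application doing the reporting, which is exactly what makes this model attractive for short-lived work.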

Between StatsD and Prometheus, we were able to publish custom metrics from virtually anywhere. The other great thing is that, with both of these solutions being open source, there was already a thriving community building assistive components around these two tools.

The final piece of the puzzle for us was where the vendor came into play. With our priorities set, we found a vendor that seamlessly integrated with Prometheus metrics; they would even scrape the metrics for us, so we didn’t need to run Prometheus ourselves, just use its standards. They also ingested our StatsD metrics without a hitch.

SignalFx

SignalFx was the vendor we ended up selecting; it was what worked for us and our priorities. The key component of vendor selection is that the solution fulfills your needs from a managed, ease-of-use standpoint. That said, I’ll illustrate how SignalFx fulfilled this for us.

The tail end of our third priority was wanting some guidance on what to pay attention to. SignalFx had some very useful dashboards out of the gate that used our Prometheus metrics to highlight key infrastructure components, like Kubernetes and AWS.

They also have a very robust alerting system, which was as simple as identifying the “signal” we wanted to pay attention to and adding a particular constraint to it. These constraints could be anything from a static threshold to outliers to historical anomalies. This was significantly simpler than our second attempt, and it was built around custom metrics! Win, win!
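The “signal plus constraint” model generalizes beyond any one vendor. As an illustration only (not SignalFx’s actual API), the simplest constraint, a static threshold held for some duration, can be sketched like this:

```python
# Illustrative sketch of static-threshold alerting on a metric signal.
# Requiring N consecutive points above the threshold avoids firing on a
# single noisy sample, one common cause of false-positive on-call pages.
def evaluate_static_threshold(samples, threshold, duration_points=3):
    """Return True when the signal stays above threshold for N points."""
    consecutive = 0
    for value in samples:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= duration_points:
            return True
    return False

cpu_percent = [40, 55, 91, 95, 93, 97, 60]
print(evaluate_static_threshold(cpu_percent, threshold=90))  # True
```

Outlier and historical-anomaly constraints replace the fixed `threshold` with one computed from the rest of the fleet or from the signal’s own past, but the evaluate-against-a-constraint shape stays the same.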

Finally, SignalFx charges per metric you send them. The great thing about this is that our non-prod environments are pretty quiet; we dialed their resolution down to a minute or two, so constantly generated metrics, like CPU or memory, didn’t cost an arm and a leg. This fulfilled our final priority and allowed us to save a significant amount of money over other vendor solutions.

The takeaway from all of this is that an observability platform, if built around standardized systems, doesn’t have to be painful. In fact, it can be just the opposite. We have been able to accelerate our development, and the maintainability and adaptability of our observability platform have never caught us off guard.

For more on Nav’s cloud native journey, check out the case study and video.

Certified Kubernetes Administrator (CKA) Certification is Now Valid for 3 Years

By | Announcement, Blog

In 2017, CNCF launched the Certified Kubernetes Administrator (CKA) exam which has become one of the most popular Linux Foundation certifications to date. Over 9,000 individuals have registered for the exam and over 1,700 have achieved the certification.

When the exam was originally released, the certification was valid for 2 years in anticipation of a major curriculum update happening at that time. However, given the current development and release cycle of Kubernetes, we are now planning for this larger curriculum update in 2020.   

This means that CKA Certifications awarded on or after September 2, 2017 will expire 36 months from the date that the Program Certification Requirements were met by the candidate (rather than the 24 months stated previously). The current curriculum still ensures that existing CKAs have the skills, knowledge, and competency that are relevant to perform the responsibilities of a Kubernetes administrator. When we do revise the exam, we will announce it here and the changes will be reflected in the open sourced curriculum. This extra time will allow CKAs to prepare and practice any new skills before their certification expires.

This also means that Kubernetes Certified Service Providers (KCSP) will need to revisit their certifications in 2020 to maintain their status. A reminder will be sent to all partners up for renewal as those dates approach.

KCSPs are vetted service providers who have deep experience helping enterprises successfully adopt Kubernetes and a minimum of three Certified Kubernetes Administrators in-house. With 90 service providers in the program to date, these partners have helped a variety of organizations with top-tier professional services to support Kubernetes deployments.

This does not impact the Certified Kubernetes Application Developer (CKAD) exam which launched in May 2018. A decision about that exam will be made at a later date depending on how much the curriculum is anticipated to change over the next two years.

Interested in taking the Certified Kubernetes Administrator (CKA) exam? CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster. The Kubernetes Fundamentals course maps directly to the requirements for the Certified Kubernetes Administrator exam. You can also choose from one of 22 Kubernetes Training Partners (KTP), a tier of vetted training providers with deep experience in cloud native technology training.

Intuit Wins CNCF End User Award

By | Blog

The Cloud Native Computing Foundation (CNCF) has announced Intuit as the winner of its top end user award. Intuit is being recognized for our contributions to the cloud native community and for how we leverage cloud native technologies in production, including CNCF projects like Kubernetes, Istio, Prometheus, Fluentd, Jaeger, and Open Policy Agent, to build a modern developer platform that provides continuous integration, continuous delivery, and continuous operations to accelerate developer productivity.

As a part of our journey to the cloud and mobile, we continue to modernize all aspects of our technology, including our platform, tools and processes and continue to advance the way we leverage cloud native technologies. This includes how we create, deploy, run, and monitor applications and services, at scale. In January 2018, we acquired Applatix, a talented team with deep system and infrastructure knowledge and expertise in building scalable production systems with containers and Kubernetes in both the public and private cloud environments. As a part of this acquisition, we also inherited their flagship open source tool, Argoproj, a set of Kubernetes native projects.

In the past year alone, Intuit has operationalized more than a hundred Kubernetes clusters that run more than 500 services in production and pre-production across multiple business units. As a part of the deployment, Intuit solved many common issues for teams deploying Kubernetes and related technologies, and shared their solutions to the larger Kubernetes community in order to increase developer productivity.

We continue to be actively involved in the cloud native community, having joined the Cloud Native Computing Foundation in January 2018 as an End User Silver member and served as a founding member of the GraphQL Foundation since March 2019. Intuit is an active member of the CNCF End User Community, which meets regularly to share adoption best practices and to give feedback on project roadmaps and future projects for CNCF technical leaders to consider.

“It is such an honor to receive the CNCF End User Award, recognizing our commitment and contributions to the cloud native community,” said Jeff Brewer, Vice President, Chief Architect of the Small Business and Self Employed Group at Intuit. “We’ve undergone several transformations as a company, from the desktop to the web to the cloud and now to AI and ML, and each of these transformations require us to move faster with increased speed of innovation. Through Kubernetes and other cloud native tools, we are able to deploy code faster than we ever have been able to. I’m excited to see how these tools help us to not only better serve our developer community, but also our customers overall as we work to power their prosperity.”  

To learn more about Argo and other Intuit open source tools, check out our open source page at https://opensource.intuit.com.

A Brief History of OpenTelemetry (So Far)

By | Blog

by Ben Sigelman, co-creator of OpenTracing and member of the OpenTelemetry governing committee, and Morgan McLean, Product Manager for OpenCensus at Google since the project’s inception

After many months of planning, discussion, prototyping, more discussion, and more planning, OpenTracing and OpenCensus are merging to form OpenTelemetry, which is now a CNCF sandbox project. The seed governance and technical committees are composed of representatives from Google, LightStep, Microsoft, and Uber, and more organizations are getting involved every day.

We couldn’t be happier about it – here’s why.

Observability, Outputs, and High-Quality Telemetry

Observability is a fashionable word with some admirably nerdy and academic origins. In control theory, “observability” measures how well we can understand the internals of a given system using only its external outputs. If you’ve ever deployed or operated a modern, microservice-based software application, you have no doubt struggled to understand its performance and behavior, and that’s because those “outputs” are usually meager at best. We can’t understand a complex system if it’s a black box. And the only way to light up those black boxes is with high-quality telemetry: distributed traces, metrics, logs, and more.

So how can we get our hands – and our tools – on precise, low-overhead telemetry from the entirety of a modern software stack? One way would be to carefully instrument every microservice, piece by piece and layer by layer. While this would literally work, it’s also a complete non-starter: we’d spend as much time on the measurement as we would on the software itself! We need telemetry as a built-in feature of our services.

The OpenTelemetry project is designed to make this vision a reality for our industry, but before we describe it in more detail, we should first cover the history and context around OpenTracing and OpenCensus.

OpenTracing and OpenCensus

In practice, there are several flavors (or “verticals”) of telemetry data, and several integration points (or “layers”) available for each. Broadly, the cloud-native telemetry landscape is dominated by distributed traces, timeseries metrics, and logs; end-users typically integrate via a thin instrumentation API or via straightforward structured data formats that describe those traces, metrics, or logs.
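To ground what a “thin instrumentation API” layer means, here is a self-contained sketch of a tracing facade in the spirit of, but not identical to, the OpenTracing and OpenCensus APIs; every name in it is hypothetical:

```python
import time
from contextlib import contextmanager

# Hypothetical sketch of a thin tracing API: application code names an
# operation and attaches tags; the library records timing and hands the
# finished span to whatever backend is configured.
finished_spans = []

@contextmanager
def start_span(operation_name, tags=None):
    span = {"operation": operation_name, "tags": dict(tags or {}),
            "start": time.monotonic()}
    try:
        yield span
    finally:
        span["duration_s"] = time.monotonic() - span["start"]
        finished_spans.append(span)  # stand-in for exporting to a backend

# Application code only ever touches the thin API surface:
with start_span("fetch_user", tags={"user_id": 42}) as span:
    span["tags"]["cache_hit"] = False  # annotate mid-flight
```

The value of keeping this surface thin is that the exporter behind it can be swapped without touching application code, which is precisely the decoupling both projects were built to provide.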

 

For several years now, there has been a well-recognized need for industry-wide collaboration in order to amortize the shared cost of software instrumentation. OpenTracing and OpenCensus have led the way in that effort, and while each project made different architectural choices, the biggest problem with either project has been the fact that there were two of them. And, further, that the two projects weren’t working together and striving for mutual compatibility.

Having two similar-yet-not-identical projects out in the world created confusion and uncertainty for developers, and that made it harder for both efforts to realize their shared mission: built-in, high-quality telemetry for all.

Getting to One Project

If there’s a single thing to understand about OpenTelemetry, it’s that the leadership from OpenTracing and OpenCensus are co-committed to migrating their respective communities to this single and unified initiative. Although all of us have numerous ideas about how we could boil the ocean and start from scratch, we are resisting those impulses and focusing instead on preparing our communities for a successful transition; our priorities for the merger are clear:

  • Straightforward backwards compatibility with both OpenTracing and OpenCensus (via software bridges)
  • Minimizing the time during which OpenTelemetry, OpenTracing, and OpenCensus are co-developed: we plan to put OpenTracing and OpenCensus into “read-only mode” before the end of 2019
  • Simplifying and standardizing the telemetry options available to developers
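The “software bridge” approach in the first bullet can be sketched in miniature. The following is a conceptual illustration only – the class and method names are invented for this sketch and are not the real OpenTracing or OpenTelemetry APIs – but it shows the idea: existing instrumentation keeps calling the legacy surface, and an adapter forwards every call to the new tracer underneath.

```python
# Conceptual sketch of a compatibility bridge. All names here are
# illustrative, NOT the actual OpenTracing/OpenTelemetry APIs.

class NewTelemetrySpan:
    """A span in the 'new' telemetry system."""
    def __init__(self, name):
        self.name = name
        self.tags = {}
        self.finished = False

class NewTelemetryTracer:
    """The 'new' tracer that the bridge delegates to."""
    def __init__(self):
        self.spans = []

    def start_span(self, name):
        span = NewTelemetrySpan(name)
        self.spans.append(span)
        return span

class LegacySpanBridge:
    """Wraps a new-style span behind the legacy span surface."""
    def __init__(self, inner):
        self._inner = inner

    def set_tag(self, key, value):
        self._inner.tags[key] = value
        return self

    def finish(self):
        self._inner.finished = True

class LegacyTracerBridge:
    """Exposes the legacy start_span() surface; delegates to the new tracer,
    so existing instrumentation works unchanged."""
    def __init__(self, new_tracer):
        self._tracer = new_tracer

    def start_span(self, operation_name):
        return LegacySpanBridge(self._tracer.start_span(operation_name))

# Legacy-style instrumentation code, running against the new backend:
new_tracer = NewTelemetryTracer()
legacy = LegacyTracerBridge(new_tracer)
span = legacy.start_span("fetch-user")
span.set_tag("http.status_code", 200)
span.finish()
print(new_tracer.spans[0].name, new_tracer.spans[0].finished)  # fetch-user True
```

The design point is that migration cost lands on the bridge’s maintainers rather than on every instrumented service.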

In many ways, it’s most accurate to think of OpenTelemetry as the next major version of both OpenTracing and OpenCensus. Like any version upgrade, we will try to make it easy for both new and existing end-users, but we recognize that the main benefit to the ecosystem is the consolidation itself – not some specific and shiny new feature – and we are prioritizing our own efforts accordingly.

How you can help

OpenTelemetry’s timeline is an aggressive one. While we have many open-source and vendor-licensed observability solutions providing guidance, we will always want as many end-users involved as possible. The single most valuable thing any end-user can do is also one of the easiest: check out the actual work we’re doing and provide feedback, whether via GitHub, Gitter, email, or whatever feels easiest.

Of course we also welcome code contributions to OpenTelemetry itself, code contributions that add OpenTelemetry support to existing software projects, documentation, blog posts, and the rest of it. If you’re interested, you can sign up to join the integration effort by filling in this form.

Going Big: Harbor 1.8 Takes Security and Replication to New Heights

By | Blog

By Michael Michael, Harbor Core Maintainer, Director of Product Management, VMware (Twitter: @michmike77)

Happy release day everyone! We are very excited to present the latest release of Harbor. The release cycle for version 1.8 was one of our longest cycles, and version 1.8 involved the highest number of contributions from community members of any Harbor release to date. As a result, 1.8 is our best release so far and comes packed with a great number of new features and improvements, including enhanced automation integration, security, monitoring, and cross-registry replication support.

Support for OpenID Connect

In many environments, Harbor is integrated with existing enterprise identity solutions to provide single sign-on (SSO) for developers and users. OpenID Connect (OIDC), which is an authentication layer on top of OAuth 2.0, allows Harbor to verify the identity of users based on authentication performed by an external authorization server or identity provider. Administrators can now enable an OIDC provider as the authentication mode for Harbor users, who can then use their single sign-on credentials to log in to the Harbor portal.

In most situations, tools like the Docker client cannot log in by using SSO and federated identity, because the user must be redirected to an external identity provider. To remedy this issue, Harbor now includes CLI secrets, which provide end users with a token that can be used to access Harbor via the Docker or Helm clients.
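As an illustration of that workflow (the hostname and username below are placeholders, and the exact location of the CLI secret in the portal may vary by version), the CLI secret is simply used wherever a password would normally go:

```shell
# Placeholder values: substitute your Harbor hostname, your SSO username,
# and the CLI secret copied from your Harbor user profile page.
docker login harbor.example.com --username jane.doe --password <cli-secret>
```

The Docker client never touches the identity provider; Harbor validates the secret on its behalf.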

Robot Accounts

In a similar scenario to the Docker client SSO issue mentioned above, Harbor is often integrated with CI/CD tools that are unable to perform SSO with federated enterprise identity providers. With version 1.8, administrators can now create robot accounts, a special type of account that allows Harbor to be used by automated systems such as CI/CD tools. Creating a robot account provides administrators with a token that can be granted appropriate permissions for pulling or pushing images. Harbor users can continue operating Harbor with their enterprise SSO credentials, while robot accounts serve the CI/CD systems that run Docker client commands.

Replication Advancements

Many users need to replicate images and Helm charts across many different environments, from the data center to the edge. In certain situations, users may have deployed applications on a public cloud and rely on that cloud provider’s built-in registry. Those built-in registries don’t offer many of Harbor’s capabilities, notably the static vulnerability analysis of images.

Harbor 1.8 expands the Harbor-to-Harbor replication feature to add the ability to replicate resources between Harbor and Docker Hub, Docker Registry, and the Huawei Cloud registry by using both push- and pull-mode replication. Harbor can act as the central repository for all images, scan them for vulnerabilities, enforce compliance and other policies, and then replicate images to other registries acting as a pure content repository. One use case is creating replicas of your Harbor image repository on different types of repositories spread across data centers in different regions. This new Harbor feature has been created using a provider interface, and we expect our developer community to add support for more registries in the future.

Additional Features

Harbor 1.8 brings numerous other capabilities for both administrators and end users:

  1. Health check API, which shows detailed status and health of all Harbor components.
  2. An upgraded Docker Registry. Harbor extends and builds on top of the open source Docker Registry to facilitate registry operations like pushing and pulling images; in this release, we upgraded the embedded Docker Registry to version 2.7.1.
  3. Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job; scan, garbage collection, and replication jobs are all supported.
  4. API explorer integration. End users can now explore and trigger Harbor’s APIs via the Swagger UI nested inside Harbor’s UI.
  5. Enhancements to the Job Service engine, including internal webhook events, additional APIs for job management, and numerous bug fixes that improve the stability of the service.
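A health check API like the one in item 1 is typically consumed by load balancers or monitoring scripts. The sketch below parses a hypothetical response body – the exact JSON shape and component names are assumptions for illustration, not Harbor’s documented schema – to pull out any components that are not reporting healthy:

```python
import json

# Hypothetical example of a health-check response; the real field names
# and component list may differ by Harbor version and deployment.
sample = """
{
  "status": "unhealthy",
  "components": [
    {"name": "core",     "status": "healthy"},
    {"name": "database", "status": "healthy"},
    {"name": "registry", "status": "unhealthy"}
  ]
}
"""

def unhealthy_components(payload: str) -> list:
    """Return the names of components not reporting 'healthy'."""
    data = json.loads(payload)
    return [c["name"] for c in data.get("components", [])
            if c.get("status") != "healthy"]

print(unhealthy_components(sample))  # ['registry']
```

A monitoring script would fetch the payload over HTTP and alert whenever this list is non-empty.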

Growing End User Support for Harbor

We’re proud of the functionality we’re delivering in Harbor 1.8. We’re also fortunate to have a growing community willing to try Harbor and provide us with feedback. Here are some comments shared by end users on their use of Harbor:

Fanjian Kong, Senior Engineer, 360 Total Security

“Through Harbor’s Web UI, we can conveniently manage the access rights of projects, members and images. We take advantage of Harbor’s remote replication features to create replicas of image repository in data centers across different regions.”

De Chen, Cloud Platform Senior Software Engineer, CaiCloud

“In Caicloud’s product of cloud native platform, we leverage Harbor to implement the capability of image management, including Harbor’s image synchronization and vulnerability scanning function. Delivered as an important component in our product, Harbor has been used by many of our enterprise customers.”

Mingming Pei, Senior development engineer, Netease Cloud

“Harbor provides rich functions in container image management. It solves our challenges of transferring images and Helm charts between container clusters. Harbor does allow us to save a lot of resources in image repository. The community is very active and the features are constantly being improved.”

Since becoming a Cloud Native Computing Foundation (CNCF) Incubating project, there’s been a tremendous increase in participation by our community, evident in the breadth of new features included with this release. We want to extend a huge thank you to the community for making this release possible through all your contributions of code, testing, and feedback. If you are a new or aspiring contributor, there are many ways to get involved as a developer or a user. You can join us on Slack, GitHub, or Twitter to help advance the Harbor vision.

Join the Harbor Community!

Get updates on Twitter (@project_harbor)

Chat with us on Slack (#harbor on the CNCF Slack)

Collaborate with us on GitHub: github.com/goharbor/harbor

Michael Michael

Harbor Core Maintainer

Director of Product Management, VMware

@michmike77
