Guest post originally published on Kong’s blog by Marco Palladino, Co-founder / CTO at Kong

As a developer, your company hired you to build incredible products that focus on your users’ and customers’ needs. Yet, in the age of microservices, producing the best products relies heavily on efficient cloud service connectivity. For example, an eCommerce marketplace is more than a front-end UI that customers access via a browser. It needs to remain connected with multiple other services, such as your inventory management system, your product reviews and a payment processor, to name a few. 

You could solve this by writing more code in each application to create smart clients that can connect to and make requests to other services in your network.

Smart clients can discover, secure, route, observe and monitor all the requests generated across your applications. But building them takes hours of precious time away from building apps, which is your actual job. Instead, you'll end up coding and maintaining the service connectivity for your applications indefinitely. That also means replicating those smart clients in every language and framework your system uses now or adopts later. Doing this for each application you create will inevitably lead to technical debt, security risks and potential revenue loss.

Your second option is to ignore these cloud connectivity issues altogether and let the architect team worry about them.

In theory, this could work. Running a modern infrastructure should be like running a city. The architects should build the underlying infrastructure, roads and bridges. And then, when everything is in place, the app teams can finally focus on building their applications (the buildings) and let someone else worry about how they’ll communicate with other services across the infrastructure.

The problem with this is that your architect team may not be doing that job at all. If you don't do something, no one will. And if your cloud connectivity fails, your applications fail, leading to unhappy users and customers and putting both your job security and the company at risk.

Unfortunately, taking connectivity for cloud native applications into your own hands could do more harm than good. We’ve discovered two significant issues that often come up when developers are left to manage this themselves:

  1. It creates a fragmented architecture.
  2. It causes them to be less productive over the long term.

Luckily, there’s a third option: leveraging an open source service mesh.

We created Kuma to solve these challenges for developers in an easy-to-implement and easy-to-manage way.

In the rest of this article (or in the CNCF webinar recording below, if you prefer to watch), we'll explore how a service mesh can improve your productivity and solve your company's connectivity challenges.

Improving Developer Productivity

A service mesh fundamentally improves the connections among the different services within a company's architecture. The term implies having a mesh of services, and that is usually the case, but you don't need thousands of microservices to benefit. Whether you run thousands of separate applications, a monolith talking to a database or anything in between, you can still gain improved connectivity, observability and security among those services.

A service mesh should not be confused with API management tools such as an API gateway. Service meshes and API gateways work together, but they support separate use cases.

Kuma control plane

As a universal control plane for service mesh, Kuma provides an abstraction layer that works simultaneously across traditional infrastructure, such as virtual machines (VMs) and bare metal, as well as Kubernetes and modern architectures. We built Kuma on top of Envoy, which it relies on for its sidecar proxy functionality.

With Kuma service mesh, a CNCF Sandbox project, you deploy an out-of-process proxy (Envoy) that runs alongside each of your services. The Envoy proxy intercepts every outgoing request the service makes and, after the first deployment, handles those requests going forward. From a developer's standpoint, you can assume that each request will always work and will be secure and observable. Everything else is taken care of by the sidecar proxy.
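On Kubernetes, for example, getting the Envoy sidecar in place is typically just a matter of marking a namespace for injection; Kuma's control plane then adds the proxy to every pod scheduled there. A minimal sketch (the exact annotation or label key may vary across Kuma versions):

```yaml
# Hypothetical namespace for the eCommerce example above.
# With sidecar injection enabled, Kuma automatically adds the
# Envoy proxy to every pod deployed into this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: ecommerce
  annotations:
    kuma.io/sidecar-injection: enabled
```

Once the namespace is annotated, services deployed into it need no code changes at all: the sidecar transparently intercepts their traffic.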

These processes occur at L4-L7, which means that any traffic to any database, system or service using any protocol can benefit from this pattern. But the more sidecar proxies you have, the harder it is to configure them all. You don't want to redeploy, restart or reconfigure the sidecar proxies manually. You want to do that from a centralized location and then push that configuration to the sidecar proxies, or allow the sidecar proxies to retrieve it. That's the role of the Kuma control plane.
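In practice, that centralized configuration takes the form of declarative resources applied to the control plane, which then distributes them to every sidecar. As a sketch, this is roughly what enabling mutual TLS for a whole mesh looks like in Kuma on Kubernetes (field names follow recent Kuma releases and may differ in older ones):

```yaml
# Applied once to the control plane; every sidecar proxy in the
# "default" mesh then enforces mTLS, with certificates issued by
# a built-in certificate authority. No application code changes.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
```

The same pattern applies to traffic permissions, routing, logging and other policies: you declare the intent once, and the control plane propagates it to the proxies.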

This global control plane is the source of truth for all configurations, which each sidecar proxy fetches dynamically (or receives via push) in order to enforce the features you want. Kuma does not require prior expertise with Envoy.

By design, Kuma already knows the address of every data plane proxy and service in the mesh, so it’s unnecessary to use Kuma with a third-party service discovery tool or DNS resolver.

Kuma ships with a RESTful HTTP API, a built-in GUI and an official CLI. You can easily use these to retrieve the state of your configuration and policies in every environment.
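For instance, the same mesh state can be inspected from any of the three interfaces. A sketch of what that looks like against a standard install, where the control plane's API listens on port 5681 (ports and command output depend on your installation):

```shell
# Inspect the mesh with the official CLI (kumactl ships with Kuma)
kumactl get meshes
kumactl inspect dataplanes

# The same state is exposed over the control plane's HTTP API
curl http://localhost:5681/meshes
curl http://localhost:5681/meshes/default/dataplanes

# The built-in GUI is served by the same control plane,
# typically at http://localhost:5681/gui
```

Because all three front the same API, teams can script against the HTTP interface while operators use the CLI or GUI interchangeably.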

While you focus on building the actual product, Kuma service mesh does the work of managing and securing connectivity to help ensure that requests never fail.

Solving Connectivity Challenges at Scale

Unless you work at a small company, yours probably isn't the only team facing these challenges.

The best time to fix this problem is now, before it becomes a connectivity crisis. It only gets harder as your company onboards more teams, creates more products and supports more platforms. And the more decoupled and distributed your services become, the more requests they'll be making.

Even though different teams in your company move at different speeds and use different platforms, they're all likely to face these challenges. Whatever the situation, Kuma's universal service mesh provides that functionality to all of them.

Kuma draws on the lessons we at Kong have learned working with hundreds of enterprise organizations and customers. We built it as another open source project to tackle the connectivity problem, joining the family of our other open source projects like Kong Gateway and Insomnia.

Kuma Multi-Zone Mode

When we spoke with our customers and users, the most critical feedback we received was that existing service meshes are too hard to operate. Organizations don't want to start a new cluster for each team. Instead, they want one advanced control plane, and from that control plane, the ability to serve each group's needs independently.
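Kuma's multi-zone mode is built for exactly this: one global control plane, with a zone control plane per cluster or region syncing back to it. A rough sketch of the bootstrap commands on Kubernetes (flag names follow recent Kuma releases and may differ in older ones; the global control plane address is a placeholder for your own):

```shell
# One global control plane for the whole organization
kumactl install control-plane --mode=global | kubectl apply -f -

# One zone control plane per cluster/region, pointing back
# at the global control plane over KDS
kumactl install control-plane \
  --mode=zone \
  --zone=us-east \
  --kds-global-address grpcs://<global-cp-address>:5685 \
  | kubectl apply -f -
```

Policies applied to the global control plane are then propagated to every zone, so each team keeps its own runtime environment while the organization keeps a single source of truth.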

We encourage you to solve your productivity and connectivity challenges by installing Kuma service mesh for free on your platform of choice.

We believe service mesh implementation doesn’t have to be complicated. See how easy it is to install Kuma and get started.