Guest post originally published on Snapt’s blog by Iwan Price-Evans

In a world that is more reliant than ever on global connectivity, it's increasingly important to understand how data traffic is routed between different nodes and locations, and how to design a network for resilience, performance, and compliance.

When architecting a network for fast, secure, and resilient application delivery (for example, for a multinational e-commerce website or a streaming media service), one of the most important decisions is whether to put all your infrastructure in one location for simplicity or to build a geographically distributed network.

Why Build A Geographically Distributed Network?

A geographically distributed network provides diversified infrastructure and granular control over geographic traffic routing. With multiple network locations available to you, you can stay online when one location fails and prioritize routing traffic to the most appropriate location. 

This can have all sorts of benefits for your users and for your business.

1. Route traffic to an alternative location in the event of a disaster

Disasters happen, no matter how well we might prepare to avoid them. Some location-specific disasters can cause whole data centers to fail: for example, a regional power failure, flood, or cable damage. In other scenarios, servers in one location might come under heavy, sustained load, causing performance to drop; the website could then return 503 errors because the servers cannot accept new requests. Occasionally we also see regional outages in public cloud providers like AWS.

In these situations, a geographically distributed network provides resilient application infrastructure and business continuity. When health checks identify localized performance problems or connectivity failures, routing rules can direct traffic to healthy servers in alternative locations.
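As a minimal sketch of this failover logic (with hypothetical region names, and the health probe stubbed out rather than performing a real HTTP or TCP check), routing can simply walk a priority list of locations and pick the first healthy one:

```python
# Hypothetical sketch: choose a healthy location, failing over when the
# preferred region does not pass its health check.

def check_health(endpoint, healthy_endpoints):
    # Stub: a real GSLB would probe the endpoint over HTTP or TCP.
    return endpoint in healthy_endpoints

def route(preferred_order, healthy_endpoints):
    """Return the first healthy endpoint in priority order, or None."""
    for endpoint in preferred_order:
        if check_health(endpoint, healthy_endpoints):
            return endpoint
    return None

# Region "eu-west" has failed, so traffic fails over to "us-east".
order = ["eu-west", "us-east", "ap-south"]
print(route(order, healthy_endpoints={"us-east", "ap-south"}))  # us-east
```

The key property is that failover is automatic: no operator has to notice the outage and repoint traffic by hand.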

2. Route traffic to the nearest server for lower latency

Latency (the time taken for data to make a round trip) is closely related to the physical distance between a client and the server it connects to: the greater the distance, the higher the latency.

Anyone who has played competitive games online knows the impact of latency firsthand. If the central server hosting the game registers your input a couple of hundred milliseconds late, there is a good chance it will miss your intended action, and you will probably end up dead.

Likewise, browsing an e-commerce website (with all its database requests) will feel especially unresponsive if the servers are a long distance away from the client, because the distance-related latency affects every page load and ultimately turns away shoppers.

Connecting a client to the server that is physically closest to their location reduces the overall impact of latency and ensures a responsive experience.
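A rough sketch of proximity-based selection (real systems usually measure network latency rather than pure geography, and the server names and coordinates here are illustrative) is to compute the great-circle distance from the client to each candidate server and pick the closest:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_server(client, servers):
    """Pick the server whose coordinates are closest to the client."""
    return min(servers, key=lambda name: haversine_km(client, servers[name]))

servers = {
    "london": (51.5, -0.1),
    "virginia": (39.0, -77.5),
    "singapore": (1.35, 103.8),
}
print(nearest_server((48.85, 2.35), servers))  # a client in Paris -> "london"
```

In production, network topology matters as much as geography, so many GSLBs combine a GeoIP estimate like this with live latency measurements.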

3. Route traffic to an in-region server to serve location-specific content

We live in a global world with big differences in language, culture, prices, and preferences. To take account of these differences, businesses might want or need to serve location-specific content to particular geographic audiences. 

A media streaming service might serve different content in different geographic markets; an e-commerce store might sell different brands and have different prices; and a US-based news website might translate stories into Spanish for the South American market and into Canadian French for the Canadian market. 

One simple way to serve the location-specific content to the local geographic audience is by hosting unique content on servers in each region. For example, a host might store UK content on servers in the UK, German content on servers in Germany, and Norwegian content on servers in Norway. The host can then use routing rules to ensure clients in each country connect to the in-region servers to get the relevant content.
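The routing table behind this can be very small. A minimal sketch (the hostnames are hypothetical, and the client's country code is assumed to come from a GeoIP lookup) maps a country code to its in-region server pool, with a global fallback:

```python
# Illustrative routing table mapping a client's country code to the
# in-region server pool that hosts that country's content.
REGION_SERVERS = {
    "GB": "uk-servers.example.com",
    "DE": "de-servers.example.com",
    "NO": "no-servers.example.com",
}
DEFAULT = "global-servers.example.com"

def in_region_server(country_code):
    """Return the server pool serving content for the client's country."""
    return REGION_SERVERS.get(country_code, DEFAULT)

print(in_region_server("DE"))  # de-servers.example.com
print(in_region_server("FR"))  # global-servers.example.com (fallback)
```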

4. Route traffic to an in-region server to comply with data protection regulations

Connecting a client to a geographically local server is not only about prices and preferences. There are also many laws regulating the flow of data and information in and out of different countries. For example, the EU’s GDPR rules and commitment to “digital sovereignty” establish regulations on the storage and transfer of personal data from within the EU to third countries. Some countries, like Germany, require that certain personal and financial information be stored in-region.

To comply with these regulations, businesses can use local servers and geographic routing to ensure that data is collected, routed, and stored in-region. That way, even if the fastest route between two points is via a location in a third country, the routing will avoid third countries and ensure that data stays in-region. 
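The difference from pure latency routing is that compliance acts as a hard filter, not a preference. A sketch under illustrative names and latencies: candidate servers outside the client's data-residency region are excluded before the fastest one is chosen, even when an out-of-region server would be quicker.

```python
# Hypothetical sketch: restrict routing to in-region servers before
# optimizing for latency. Server names, regions, and latencies are made up.

def compliant_route(client_region, servers):
    """servers: list of (name, region, latency_ms).
    Pick the lowest-latency server among those in the client's region."""
    in_region = [s for s in servers if s[1] == client_region]
    if not in_region:
        return None  # fail rather than route data out of region
    return min(in_region, key=lambda s: s[2])[0]

servers = [("frankfurt", "eu", 40), ("virginia", "us", 25), ("paris", "eu", 55)]
# virginia is fastest overall, but an EU client's data must stay in-region:
print(compliant_route("eu", servers))  # frankfurt
```

Returning None when no in-region server is available reflects the compliance stance: it is better to refuse the request than to break residency rules.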

The Challenges of a Geographically Distributed Network

Building a network with servers and other infrastructure in multiple locations is far more complex than building a network in one location.

If you rely on privately managed on-premises infrastructure, you will need to maintain your own multi-site data centers. This means procuring and maintaining server hardware; data center power, connectivity, and HVAC; and local IT staff. 

If you use public clouds for application delivery, this means either configuring your cloud account to deploy applications in particular locations (for example, using AWS Regions and Zones), or running a multi-cloud deployment to ensure the geographic presence you need.

On top of this, you need to manage traffic routing so that traffic to your website or application is routed to the most appropriate geographic location that you have set up.

Then there’s the challenge of maintaining security, compliance, and observability in a deployment where you can’t necessarily walk into a single data center or log into a single dashboard to see everything at once.

By comparison, managing one location is simple: one physical site or one cloud, one set of routing and load balancing rules, one place to manage your security, and so on.

How To Route Traffic In A Geographically Distributed Network

The most common way to configure routing rules in a geographically distributed network, and so gain the benefits to resilience, latency, and localization described above, is to use a global server load balancer (GSLB). A load balancer distributes traffic between multiple servers or nodes in a network; a GSLB does this across geographic locations.

When a client attempts to connect to a domain that uses a GSLB for multi-location routing, the GSLB will typically:

  1. Check the client’s IP address to determine the location
  2. Perform health checks on the nearest servers
  3. Connect the client to the healthy server with the lowest latency, typically by returning that server's address in its DNS response
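The three steps above can be sketched as a toy resolver. The GeoIP lookup and health probes are stubbed (real GSLBs use GeoIP databases and active probes, and answer via DNS), and all names and latencies are illustrative:

```python
# Toy GSLB decision: map client IP -> region, keep only healthy servers,
# prefer in-region, answer with the lowest-latency candidate.

SERVERS = {  # name: (region, healthy, latency_ms)
    "eu-1": ("eu", True, 12),
    "eu-2": ("eu", False, 9),
    "us-1": ("us", True, 95),
}

def client_region(ip):
    # Stub for a GeoIP lookup; pretend EU addresses start with "81.".
    return "eu" if ip.startswith("81.") else "us"

def resolve(ip):
    region = client_region(ip)                              # step 1
    healthy = {n: v for n, v in SERVERS.items() if v[1]}    # step 2
    nearby = {n: v for n, v in healthy.items() if v[0] == region} or healthy
    return min(nearby, key=lambda n: nearby[n][2])          # step 3

print(resolve("81.2.3.4"))  # eu-2 is unhealthy, so eu-1 is returned
```

Note how the health check (step 2) overrides proximity: eu-2 would be the lowest-latency choice, but it is skipped because it is down.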

Big public clouds and CDNs typically offer some level of built-in GSLB. Specialized GSLBs such as Snapt Aria offer more features and flexibility, such as the ability to deploy in any environment (on-premises and cloud, VMs and containers), more routing options, and advanced health checks and reporting.

What About The Complexity Of Running A Geographically Distributed Network?

The benefits of deploying a geographically distributed network are significant, and for multi-national businesses might well be worth the costs and complexity. But can those costs and complexity be mitigated?

A multi-location strategy usually introduces a lot of complexity because the application infrastructure that helps ensure security, compliance, and observability is also localized. Load balancers, web application firewalls (WAFs), and web application and API protection (WAAP) are typically deployed in each location and are managed individually. This introduces the risk of inconsistent configuration, out-of-date policy, and lack of visibility across a fragmented network.

The best way to overcome this is using a centralized application infrastructure, where the deployment, configuration, monitoring, and intelligence are all managed in a central control plane, and the localized load balancers, WAFs, and WAAP instances are lightweight “workers” in the data plane. 

This approach provides a “single pane of glass” in which to see and control an entire network of load balancers and security services deployed across multiple clouds and locations. There’s no need to chase consistent configuration or worry about out-of-date policy, because operators can execute a change centrally and commit it to every node instantly. There’s no fragmentation of data and visibility, because everything is centralized, aggregated, and presented in a single set of reports and alerts.
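The control-plane/data-plane split can be illustrated with a small sketch (class and location names are hypothetical): one policy version is committed centrally and pushed to every worker, so no node can drift behind:

```python
# Hypothetical sketch of centralized management: the control plane holds
# the single source of truth and propagates each policy change to all
# lightweight data-plane workers at once.

class Worker:
    def __init__(self, location):
        self.location = location
        self.policy_version = 0  # what this node is currently enforcing

class ControlPlane:
    def __init__(self, workers):
        self.workers = workers
        self.version = 0  # single source of truth

    def commit(self):
        """Apply a policy change centrally and push it to every worker."""
        self.version += 1
        for w in self.workers:
            w.policy_version = self.version

fleet = [Worker("london"), Worker("virginia"), Worker("singapore")]
cp = ControlPlane(fleet)
cp.commit()
# Every node now enforces the same, current policy:
assert all(w.policy_version == cp.version for w in fleet)
```

Contrast this with per-location management, where each of those three nodes would be updated (and potentially forgotten) independently.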

Location Matters

Despite the inherent simplicity of a single-location deployment, there is a risk in putting all one’s eggs in one basket. A geographically distributed network, while more complex, provides substantially more resilience, as well as lower latency and more localization options, which are increasingly important to multi-national businesses. 

You can enable simple geographic routing in a distributed network using a global server load balancer. You can also address the complexity challenge by choosing an application infrastructure that is centrally managed and intelligent. 

Snapt Nova is Application Delivery Control as a Service, providing load balancing and AI-powered WAF security on-demand to every node from a centralized control plane and UI. Nova is the fastest way to achieve security, compliance, and observability in a geographically distributed environment, and get the benefits without the usual costs and complexity. You can try Nova free today, and community users can use it free for an unlimited time.