From One to Many: The Road to Multicluster

This is a blog post version of my KubeCon NA 2021 keynote of the same name. You can check out the recording here!

From One to Many, the Road to Multicluster – Keynote from KubeCon NA 2021 – By Kaslin Fields

I’m Kaslin Fields, a Developer Advocate at Google Cloud, CNCF Ambassador, and member of Kubernetes SIG-ContribEx (Special Interest Group for Contributor Experience).

My work revolves around advocating for users by understanding the real-world challenges they face. And today, I’m going to tell you about the modern challenges faced by organizations when it comes to meeting the demands of scale – specifically scaling out their Kubernetes clusters – and what Kubernetes’ Multicluster & Networking Special Interest Groups are doing to solve those challenges.

Reasons for Multicluster

A single Kubernetes cluster can scale upwards of 10,000 compute nodes. And Kubernetes has a variety of useful tools for enabling multi-tenant architectures.
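To give a sense of what those multi-tenant tools look like, here’s a minimal sketch that uses a Namespace and a ResourceQuota to carve a shared cluster into tenant boundaries. The team-a name and the quota values are just illustrative:

```yaml
# A Namespace as the basic tenancy boundary for one team.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# A ResourceQuota to cap how much of the shared cluster this tenant can use.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    pods: "50"
```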

So why would an organization ever need more than one cluster?

Let’s take a look at just a few of the common reasons I see when I talk to customers and users about their multi-cluster environments.

First, geography or hybrid environments. Whether for

  • Latency
  • Compliance
  • or Resiliency/High Availability

reasons, you will generally need to create at least one Kubernetes cluster in each region or environment where you want your apps to run.

Multi-tenant Kubernetes clusters are great for using resources efficiently. But when tracking costs is key, many users create separate clusters so that resource costs map cleanly onto their billing model.

While there are some useful tools in Kubernetes for isolating multi-tenant workloads, sometimes it makes more sense to use the cluster boundary itself to isolate a team, application, or service for security and compliance reasons. That means you’ll end up with multiple clusters in order to meet those needs.

This is just a quick look at a few of the reasons I see customers and users cite for their multi-cluster architectures. And most organizations have a combination of these constraints.

So what does running a multi-cluster Kubernetes architecture mean for you?

Multicluster Architectures

Let’s imagine you have applications running in a cluster on-prem and in a cluster in the cloud. Maybe you’re running a website on-prem, and a mobile app backend in the cloud.

Your first challenge in working with this multi-cluster architecture will be networking. And that challenge comes in two dimensions.

First, the vertical dimension. How are you, or your users, going to access the apps running in each of these separate clusters?

Secondly, the horizontal dimension. What if your clusters are running applications that need to communicate with each other?

The Problem

Well, one way we could do this is to use DNS to reach the applications running in your clusters. Which surely won’t introduce any problems, right?

I joke, but really DNS is a fragile tool that causes a lot of real problems.

We’re also going to need some load balancers. In the cloud you can either use the cloud provider’s load balancers or define your own, and on-prem you have a variety of options for load balancers.

Not to mention any automation you want to write to make use of these connections, and I barely touched on anything you’d need to know about Kubernetes Ingress itself.
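To make the complication concrete, here’s roughly what you end up writing in each cluster today: a Service plus a classic Ingress object, on top of DNS records and load balancer configuration managed somewhere else entirely. The website name and example.com hostname are placeholders:

```yaml
# Repeated per cluster and per environment, alongside externally managed
# DNS records and load balancer settings.
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  selector:
    app: website
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: website
                port:
                  number: 80
```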

All this is getting pretty complicated.

The Solution

Multicluster Services

SIG Multicluster has created the new Multicluster Services (MCS) API standard. MCS introduces a concept in your Kubernetes cluster that is very much what it sounds like: it enables you to export and import Services across clusters.

This doesn’t change where your apps are running, but it does make it so that each cluster knows about the services running on your other clusters.

For example, you could log into one cluster and be able to access all the services that cluster knows about. Even if they’re actually running somewhere else.
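As a rough sketch of what that looks like with the MCS API (the mobile-backend name, store namespace, and port are just examples, and the exact workflow depends on your MCS implementation): in the exporting cluster you create a ServiceExport that names an existing Service, and the implementation surfaces a matching ServiceImport in the other clusters of your ClusterSet.

```yaml
# In the exporting cluster: mark an existing Service for export.
# The ServiceExport's name and namespace must match the Service.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: mobile-backend
  namespace: store
---
# In the importing clusters: the MCS implementation creates a ServiceImport
# automatically; you don't write this one by hand. Consumers can then reach
# the service at a ClusterSet-scoped DNS name such as
# mobile-backend.store.svc.clusterset.local.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: mobile-backend
  namespace: store
spec:
  type: ClusterSetIP
  ports:
    - port: 8080
      protocol: TCP
```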

Multicluster Ingress with the Gateway API

Now about that DNS. There has to be a better way to manage incoming traffic for our applications than combining DNS with the Kubernetes Ingress object.

SIG-Network has been hard at work on the Gateway API, which I commonly hear referred to as “Kubernetes Ingress V2.”

The Gateway API is a new implementation of Kubernetes’ capabilities for managing that vertical, or ingress, traffic to your applications. It includes a variety of improvements to make managing ingress easier and aims to provide a consistent way to manage your Kubernetes clusters’ interactions with networking infrastructure.
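As a sketch of the model (the GatewayClass name, namespaces, and hostname are placeholders, and the exact apiVersion depends on the Gateway API release your environment supports): a cluster operator defines a Gateway describing the load balancer and its listeners, and application teams attach HTTPRoutes to it.

```yaml
# Owned by a cluster operator: which load balancer to provision and
# what it listens on.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-http
  namespace: infra
spec:
  gatewayClassName: example-gateway-class  # provided by your environment
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
---
# Owned by an application team: route traffic for a hostname to a Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: website
  namespace: store
spec:
  parentRefs:
    - name: external-http
      namespace: infra
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - name: website
          port: 80
```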

Gateway API can be used to implement a concept of Multicluster Ingress, where a centralized Kubernetes API server is used to deploy Ingress controls across multiple clusters.

Basically, if a single Kubernetes cluster can know about the services in another cluster, and how to make use of the networking infrastructure in between, then we can use the consistent tooling of the Gateway API to manage ingress for all our apps, even across Kubernetes clusters.
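Here’s one hedged sketch of how those pieces can combine, assuming your Gateway implementation supports ServiceImport backends (support, and the exact group and kind, vary by implementation): an HTTPRoute whose backendRef points at a ServiceImport instead of a local Service, so the Gateway can send traffic to endpoints running in other clusters.

```yaml
# Hypothetical example: routing through a Gateway to a ServiceImport,
# i.e. to a service that may be running in another cluster entirely.
# Whether ServiceImport backends are supported depends on your implementation.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: mobile-backend
  namespace: store
spec:
  parentRefs:
    - name: external-http
      namespace: infra
  hostnames:
    - "api.example.com"
  rules:
    - backendRefs:
        - group: multicluster.x-k8s.io
          kind: ServiceImport
          name: mobile-backend
          port: 8080
```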

Both the Gateway and MCS API standards come from the open source Kubernetes project. Implementations of these tools will depend on your environment. Check the documentation for details on tools and environments that enable use of these APIs.

Additional Resources

Check the docs or try a tutorial!

Get involved!