
Container Comics by Kaslin Fields

Container comics by Kaslin Fields give you a fun and approachable introduction to container technology. Containers are a central technology in the DevOps and Cloud Native movement. As adoption grows, knowledge of container technology is in high demand. Meanwhile, workers throughout the tech industry and beyond are finding themselves in the midst of “Cloud Native” transformations.

Moving to the Cloud

Running applications in the cloud provides unique challenges and benefits. The cloud lets businesses offload many of the challenges of running large datacenters. By moving computing work to the cloud, businesses can focus more on what really matters to them. But that doesn’t mean the move itself will be easy. Moving to the cloud also gives businesses an opportunity to re-evaluate which applications they’re running and how they’re running them. That re-evaluation requires considerable effort, and re-tooling of the existing tech workforce.

Moving to the cloud offers new and exciting ways to run applications. Cloud Native technologies allow businesses new opportunities. In the cloud, starting up new projects, scaling workloads on demand, and minimizing downtime can be done like never before. But retooling workloads also means retooling workers.

If your business is going through a DevOps / Cloud Native transformation, you’re sure to have a lot of work on your plate. Challenge number one – figuring out what you need to learn and how you’re going to learn it.

Time to Get Learning!

Kaslin Fields is a Cloud Advocate at Oracle and a Cloud Native Computing Foundation Ambassador. Kaslin brings her experience working with container and cloud native technology with major cloud providers to you via a fun and creative approach – comics! Container Comics by Kaslin Fields will teach you the basics and give you the tools you need to start your journey toward becoming the Cloud Native expert your company needs.

From One to Many: The Road to Multicluster

This was the title of my 5-minute keynote at KubeCon NA 2021! I wanted to share its contents here as a blog post too.

I’m Kaslin Fields, a Developer Advocate at Google Cloud, CNCF Ambassador, and member of Kubernetes SIG-ContribEx (Special Interest Group for Contributor Experience).

My work revolves around advocating for users, by understanding the real-world challenges they face. And today, I’m going to tell you about the modern challenges faced by organizations when it comes to meeting the demands of scale – specifically scaling out their Kubernetes clusters, and what Kubernetes’ Multicluster & Networking Special Interest Groups are doing to solve those challenges.

Reasons for Multicluster

A single Kubernetes cluster can scale upwards of 10,000 compute nodes. And Kubernetes has a variety of useful tools for enabling multi-tenant architectures.
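As a sketch of those multi-tenant tools, one common pattern is to give each tenant a Namespace paired with a ResourceQuota that caps its resource usage (the team name and limits below are made up for illustration):

```yaml
# One namespace per tenant...
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# ...with a quota capping what that tenant can consume in it.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    pods: "100"
```

NetworkPolicies and RBAC rules scoped to the namespace round out the isolation story within a single cluster.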

So why would an organization ever need more than one cluster?

Let’s take a look at just a few of the common reasons I see when I talk to customers and users about their multi-cluster environments.

Firstly, geography or hybrid environments. Whether for

  • Latency
  • Compliance
  • or Resiliency/High Availability

reasons, you will generally need to create at least one Kubernetes cluster in each region or environment where you want your apps to run.

Multi-tenant Kubernetes clusters are great for using resources efficiently. But when tracking costs is key, many users create separate clusters so that cluster boundaries match their billing model.

While there are some useful tools in Kubernetes for isolating multitenant workloads, sometimes it makes more sense to use the cluster boundary to isolate a team, application, or service for security and compliance reasons. This would mean you’ll end up having multiple clusters in order to meet your security/compliance needs.

This is just a quick look at a few of the reasons I see customers and users cite for their multi-cluster architectures. And most organizations have a combination of these constraints.

So what does running a multi-cluster Kubernetes architecture mean for you?

Multicluster Architectures

Let’s imagine you have applications running in a cluster on-prem, and one in the cloud. Maybe you’re running a website on-prem, and in the cloud maybe you have a mobile app.

Your first challenge in working with this multi-cluster architecture will be networking. And that challenge comes in two dimensions.

First, the vertical dimension. How are you, or your users, going to access the apps running in each of these separate clusters?

Secondly, the horizontal dimension. What if your clusters are running applications that need to communicate with each other?

The Problem

Well, one way we could do this is to use DNS to reach the applications running in your clusters. Which surely won’t introduce any problems, right?

I joke, but really, DNS is a fragile tool that causes a lot of real problems.

We’re also going to need some load balancers. In the cloud you can use either the cloud provider’s load balancers or define your own, and on-prem you have a variety of load balancer options.

Not to mention any automation you want to write to make use of these connections, and I barely touched on anything you’d need to know about Kubernetes Ingress itself.
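To give a sense of that per-cluster configuration, here’s a sketch of a classic Ingress object (the hostname and Service name are hypothetical). You’d need to duplicate something like this, and keep it in sync, in every cluster, on top of the DNS records and load balancers in front of each one:

```yaml
# A per-cluster Ingress routing traffic for one hostname
# to a Service running in that cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website
            port:
              number: 80
```

And each Ingress controller has its own annotations and quirks, so the on-prem and cloud versions of this file would likely diverge.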

All this is getting pretty complicated.

The Solution

Multicluster Services

SIG Multicluster has created the new Multicluster Services API standard. Multicluster Services, or MCS, introduces a concept in your Kubernetes cluster that is very much what it sounds like: it enables you to export and import services across clusters.

This doesn’t change where your apps are running, but it does make it so that each cluster knows about the services running on your other clusters.

For example, you could log into one cluster and access all the services that cluster knows about, even if they’re actually running somewhere else.
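Here’s a sketch of what exporting a service looks like under the MCS API (the names are hypothetical, and the exact resources available depend on your MCS implementation). Creating a ServiceExport in the cluster that runs the service makes it importable across the cluster set:

```yaml
# In the cluster that runs the service: mark it for export.
# The ServiceExport's name and namespace must match an
# existing Service in this cluster.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: web-backend
  namespace: demo
```

Consuming clusters then see a corresponding ServiceImport, and workloads can reach the service through a cluster-set-scoped DNS name like `web-backend.demo.svc.clusterset.local`.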

Multicluster Ingress with the Gateway API

Now, about that DNS. There has to be a better way to manage incoming traffic for our applications than combining DNS with the Kubernetes Ingress object.

SIG-Network has been hard at work on the Gateway API, which I commonly hear referred to as “Kubernetes Ingress V2.”

The Gateway API is a new implementation of Kubernetes’ capabilities for managing that vertical, or ingress traffic, to your applications. It includes a variety of improvements to make managing ingress easier and aims to provide a consistent way to manage your Kubernetes clusters’ interactions with networking infrastructure.

Gateway API can be used to implement a concept of Multicluster Ingress, where a centralized Kubernetes API server is used to deploy ingress configuration across multiple clusters.

Basically, if a single Kubernetes cluster can know about the services in another cluster, and how to make use of the networking infrastructure in between, then we can use the consistent tooling of the Gateway API to manage ingress for all our apps, even across Kubernetes clusters.
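As a sketch of that consistent tooling (the resource names and gateway class below are made up, and the supported classes and API version depend on your environment and controller), a Gateway defines the entry point into the cluster, and an HTTPRoute attaches an application’s routing rules to it:

```yaml
# An infrastructure team owns the Gateway: the entry point.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
---
# An application team owns the HTTPRoute: how traffic
# entering that Gateway reaches their Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: website-route
  namespace: demo
spec:
  parentRefs:
  - name: external-gateway
    namespace: infra
  rules:
  - backendRefs:
    - name: website
      port: 80
```

That split of roles (infrastructure teams own Gateways, app teams own Routes) is one of the design improvements over the monolithic Ingress object, and it’s what makes the centralized multi-cluster model workable.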

Both the Gateway and MCS API standards come from the open source Kubernetes project. Implementations of these tools will depend on your environment. Check the documentation for details on tools and environments that enable use of these APIs.

Additional Resources

Check the docs or try a tutorial!

Get involved!