
Intro to Kubernetes – Containers at Scale

Kubernetes is a Container Orchestrator, so this post builds on the cookie analogy in my Containers as Cookies: How to Use Containers post. I’ve been using this analogy for years, mainly to call out that:

  • we care about containers because we care about running applications (chocolate chips are the best part)
  • containers act as a packaging mechanism that bakes our app together with its dependencies into a convenient, portable unit (mix ingredients into dough, then bake!)
  • and containers act as an isolation mechanism – using cgroups and namespaces to isolate applications in a resource-efficient way. (This one is mainly shown by my Doggy Daycare post.)

With these key benefits, containers took the world by storm – with companies big and small around the world quickly adopting the technology. The new packaging and isolation mechanism brought a lot of benefits to businesses, largely around delivery speed and ease & consistency of distribution. All these benefits are great. But for businesses, if it can’t scale, what’s the point?

If you wanna skip the explanation and get right to the hands-on part, try out this tutorial on GitHub made by Christina Webber explicitly for this content!

Container Orchestration – Managing Containers at Scale

Kubernetes is a container orchestrator, meaning its job is to manage containers at scale. To put it a different way, coming back to our cookie analogy, Kubernetes is like a cookie business. NOT because it’s where cookies are made. They could be made elsewhere. But Kubernetes provides the logistics and management to run your applications at scale, just like a cookie business manages logistics of packaging and shipping cookies!

Kubernetes is about running your containerized workloads at scale, kind of like a global cookie business, managing the logistics of baked goods at scale.

How Does Kubernetes Do That? – Intro to Kubernetes

Kubernetes manages containers at scale using a number of Objects. Let’s go over some of the most fundamental objects and how they enable Kubernetes to do what it does! We’ll explore:

  • Basic Kubernetes Architecture
  • Pods
  • Replication Controllers
  • The Kubernetes Scheduler
  • Services

Basic Kubernetes Architecture

Kubernetes is made up of two key pieces:

  • The Control Plane – the brains of Kubernetes itself (or the Headquarters of our cookie company)
  • Worker Nodes – machines where your workloads actually run (like individual warehouses)

The Kubernetes Control Plane manages the cluster as a whole, like the headquarters of a cookie company. Meanwhile, the Worker Nodes run your containerized workloads.

Kubernetes Control Plane Components

Control Plane node(s) act as the brains or headquarters of the Kubernetes operation, and the control plane is made up of several components (departments?).

  • kube-api-server
    • The API-server serves the APIs we use to interact with the Kubernetes cluster
  • etcd
    • etcd is an open source project in its own right: “A distributed, reliable key-value store for the most critical data of a distributed system.” That’s exactly what it does in Kubernetes – it is the source of truth for what the state of the cluster should be.
  • kube-scheduler
    • The Kubernetes Scheduler is the piece of the Control Plane which determines which apps run on which nodes.
  • Controller Manager(s)
    • kube-controller-manager and any cloud controller managers make sure the actual state of the cluster matches the desired state.

Kubernetes Worker Node Components

Kubernetes Worker Nodes run your workloads, like warehouses that store and ship the product in a cookie factory. Worker Nodes also have a few key components:

  • kubelet
    • The kubelet is the main component of a Kubernetes worker node, kind of like a supervisor, it’s the piece that communicates back with HQ to make sure things are going according to plan onsite.
  • Container Runtime
    • Your Kubernetes workloads are going to be running in containers, so the machines running those workloads need to have whatever container runtime you’re using installed. Most folks create their containers using Docker, which uses the container runtime containerd (also open source and part of the CNCF).
  • kube-proxy
    • Kubernetes is a distributed system, meaning there’s a lot of networking magic taking place. kube-proxy is a big part of that magic, handling how the pieces of Kubernetes talk to each other on each node/machine (iptables are often involved) and between nodes.

This is the basic architecture of how Kubernetes itself runs. Most folks today use Kubernetes rather than running Kubernetes itself, so you shouldn’t need to know too much beyond the basics on this unless you’re going to contribute to core components or try to run Kubernetes on some kind of unique and complicated hardware configuration (like my friend Jonathan Rippy who ran a cluster on a set of Android watches).

Packaging Containers in Pods

Since Kubernetes is a container orchestrator, you’d probably expect that it cares a lot about containers. But nope! Kubernetes actually doesn’t care about containers at all! Just like a cookie business packages up cookies in wrappers for sale, Kubernetes wraps containers in its own packaging, Pods.

Some cookies are packaged individually, but many are sold in multi-cookie packs. Likewise, Kubernetes Pods can also contain one or more containers.

By far the most common case is for a pod to have a single primary container. Think of a pod as the unit Kubernetes will work with. A pod should be the smallest unit you would want to manage independently. If you ever want to do anything different to one container vs another, they should NOT go in a pod together.
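As a minimal sketch of that most common case, a single-container pod definition might look like this (the name and image are placeholders, not from the original post):

```yaml
# A minimal single-container Pod - the most common case.
apiVersion: v1
kind: Pod
metadata:
  name: nginx            # hypothetical name for illustration
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # the container image to run
    ports:
    - containerPort: 80  # port the app listens on inside the pod
```

You would apply a file like this with kubectl apply -f pod.yaml, and Kubernetes treats the resulting pod as one unit.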

Sidecar Containers

By far the most common case where you WOULD put multiple containers together, would be what we call “sidecar containers.” This model is very common in service meshes. Kubernetes treats a Pod as a single unit. This means that any networking or storage resources would be assigned to the Pod as a whole. All containers in a Pod would share an IP address, for example. Since all the containers share an IP address and other resources, you can do some useful things by co-locating containers together in a pod. In the case of service meshes, a sidecar is often used to do things like:

  • Gather logs from the primary application container, without modifying the application itself
  • Intercept traffic intended for the primary application container and perform actions to determine if the traffic is permitted
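As an illustration of the log-gathering case, a two-container pod with a hypothetical log-collecting sidecar could be sketched like this (container names, images, and paths are assumptions for illustration):

```yaml
# A Pod with a primary app container plus a log-collecting sidecar.
# Both containers share the Pod's IP address and can share volumes.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: app-logs           # shared scratch space for log files
    emptyDir: {}
  containers:
  - name: app                # the primary application container
    image: my-app:1.0        # hypothetical image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-collector      # sidecar reads the same log directory
    image: fluent/fluent-bit:2.2
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
```

The sidecar ships the app’s logs without the application container being modified at all.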

To put it another way, this diagram shows a pod on a worker node. The green boxes show the pieces of Kubernetes that run on the worker node, while the yellow box represents the pod with containers inside it. Note that all the containers in the pod share storage and an IP address!

This diagram shows a Kubernetes worker node and how you would define a Pod with YAML.

Sidecar containers can be pretty useful! Just one benefit of Kubernetes’ Pod model of working with containers.

Applications at Scale with Replication Controllers

With the pod model of packaging containers into manageable units, Kubernetes sets up an important prerequisite for its key application scaling functionality – replication.

Replication controllers provide the smarts and tools to manage a variety of workload types at scale.

Kubernetes has a number of different Objects for managing the way different types of workloads scale. Objects that perform this important function are called Replication Controllers. A Replication Controller (RC) does just that: it has the smarts to replicate workloads of different types depending on their use case. The Kubernetes replication controllers are:

  • Deployment – primarily for stateless, long-running workloads (the most commonly used RC)
  • StatefulSet – for workloads that have state requirements like needing stable, unique network identifiers or storage
  • DaemonSet – for workloads that need to be run on every node in a cluster
  • Job – for workloads that are meant to do their thing, then go away when they’re done
  • CronJob – for jobs that need to be run on a schedule
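As a quick sketch of the scheduled case, a CronJob definition might look like this (the name, image, and schedule are placeholders):

```yaml
# A CronJob that runs a one-off Job every night at midnight.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 0 * * *"            # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: my-report-job:1.0   # hypothetical image
          restartPolicy: OnFailure     # retry the Job's pod if it fails
```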

Replication controllers are smart. For example, a Deployment can not only scale your application, it can also manage rolling upgrades and rollbacks gracefully.
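As a sketch of that, a Deployment running three replicas and upgrading them gradually might look like this (names, labels, and image are illustrative assumptions):

```yaml
# A Deployment managing 3 replicas with a gradual rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of pod copies
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # take down at most 1 pod at a time
      maxSurge: 1              # allow at most 1 extra pod during upgrade
  template:                    # the pod template to replicate
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # bump this tag to trigger a rolling upgrade
```

Changing the image tag and re-applying this file is all it takes to kick off a rolling upgrade; kubectl rollout undo reverses it.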

Replication controllers, like Deployments, manage your workloads in Kubernetes across the underlying machines.

Replication controllers are key to the magic of how Kubernetes abstracts the underlying hardware, allowing you to focus more on managing the applications and less on managing individual machines.

Matching Workloads to Hardware with Scheduling and Labels

It’s great that Kubernetes abstracts the underlying hardware so we can manage workloads across machines – but what if your workload has specific hardware needs? For example, if it needs SSDs (Solid State Drives – fast local storage) or GPUs (Graphical Processing Units – critical for video processing, games, and all sorts of other applications)?

Labels in Kubernetes

Labels in Kubernetes are freeform key-value pairs that can be added to any Kubernetes object. For example, the fifth and sixth lines of the below pod definition snippet apply the label “env: test” to the pod.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test

We can assume this pod would be part of a testing environment, or maybe runs a testing environment. But workloads aren’t the only things that can be represented as objects in Kubernetes. The worker nodes of a Kubernetes cluster are also represented by objects, and they can have labels too. These labels allow us to give Kubernetes more information about where applications should be run.

We can use Affinity and Anti-Affinity in Kubernetes to provide more information about where workloads should be scheduled.

We can use the nodeSelector field (or the more expressive Node Affinity rules) to specify labels Kubernetes should look for on the node it schedules a workload onto, and we can use Pod Affinity to make sure certain pods are scheduled together. Conversely, Node or Pod Anti-Affinity makes sure workloads are not scheduled onto certain nodes or alongside certain other pods.

Kubernetes will seek to match the workload to other workloads or the underlying nodes using the labels you specify via fields like nodeSelector, affinity, or podAntiAffinity.
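As a minimal sketch of this matching, here is a pod that asks for SSD-backed nodes via nodeSelector (the disktype label is an assumption – it matches whatever label you have put on your nodes):

```yaml
# Schedule this pod only onto nodes labeled disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app
spec:
  nodeSelector:
    disktype: ssd        # hypothetical node label, e.g. applied with:
                         # kubectl label nodes <node-name> disktype=ssd
  containers:
  - name: app
    image: my-app:1.0    # hypothetical image
```

If no node carries the label, the pod simply stays Pending until one does – the scheduler won’t place it anywhere that doesn’t match.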

Reaching Your Apps with Services

We’ve covered some very useful tools for understanding how Kubernetes enables running containerized applications at scale – but once your workload is running on Kubernetes, how do you reach it?

A Service in Kubernetes enables communication to, from, or between pods.

One way is with Kubernetes Services. A Service in Kubernetes is an object which enables communication to, from, or between pods. They essentially work as software-defined load balancers across the pods that make up your workload. There are three types of Services with different purposes:

  • ClusterIP
  • NodePort
  • LoadBalancer

A type ClusterIP service gives your workload an IP that can be used to communicate with other workloads within the cluster. The IP of a ClusterIP type service does not enable traffic to or from endpoints outside the cluster.
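As a sketch, a ClusterIP service load-balancing across pods labeled app: web might look like this (the label and port numbers are placeholders):

```yaml
# A ClusterIP Service - reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # the default Service type
  selector:
    app: web             # routes traffic to pods carrying this label
  ports:
  - port: 80             # port the Service exposes
    targetPort: 8080     # port the pods actually listen on
```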

A type NodePort service, as shown in the image above, will expose your service via a static port on every node in the cluster. To reach a workload with a NodePort service, you would use a node’s IP plus the assigned port (e.g. <node-ip>:<node-port>). This is useful for small, development-style use cases, but in production you may be running hundreds or thousands of copies of a workload and may not want to hunt down a node IP and port every time you want to access it.
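As an illustration, a NodePort service definition might look like this (the label and port numbers are placeholders; nodePort must fall within the cluster’s NodePort range, 30000–32767 by default):

```yaml
# A NodePort Service - reachable at <node-ip>:30080 on every node.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web           # hypothetical pod label
  ports:
  - port: 80           # in-cluster port
    targetPort: 8080   # port the pods listen on
    nodePort: 30080    # port opened on each node (auto-assigned if omitted)
```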

Depending on the environment you’re working in, a type LoadBalancer service may be a more convenient way to reach your workloads from outside the cluster. In a managed service from a cloud provider, for example Google Kubernetes Engine (which I work on), the GKE cluster has the smarts to create a load balancer resource in Google Cloud for your type LoadBalancer services. This means that a load balancer separate from the cluster handles traffic to your workloads running inside Kubernetes.

A Note on Ingress

Services are key for setting up communication for your workloads in Kubernetes, but handling production-level ingress traffic, for example to a large website, requires a bit more. The Kubernetes Ingress object was designed for this purpose, though it is increasingly being superseded by the newer Gateway API, a new set of tooling for managing ingress to Kubernetes workloads. If you’re looking to use Kubernetes in production, you should definitely check it out.

Learn More about Kubernetes!

Get hands-on and try out the concepts in this article yourself with this tutorial on GitHub created specifically for this content by Christina Webber!

This post covered the basics I think anyone should know about what Kubernetes is and how it does what it does, using core concepts like:

  • Pods
  • Replication Controllers
  • Scheduling
  • Services

These may be the basics, but there’s more to learn. If you’re going to dive deeper into Kubernetes, a few of the next topics I would recommend you explore are:

  • Replication Controller types
    • Deployments, StatefulSets, Jobs, etc.
  • Ingress and Gateway API
    • native tools for managing ingress traffic for Kubernetes workloads
  • Persistent Storage
    • managing storage for stateful workloads in Kubernetes
  • Namespaces and Role Based Access Controls
    • Logical partitions and controls to organize resources – for example, if you have 2 teams sharing one cluster

I hope you enjoyed this post as much as I enjoyed making it!