The secret of Stateful vs Stateless workloads in Kubernetes is this: Everything has state. What matters is whether anything cares about it. Or really, how micromanage-y the things are that care about it (there’s a reason it’s “state” as in “status”).
In this post, you’ll discover what “stateful” means in the world of Kubernetes, and what the project is doing to support these critical workloads.
This blog post is essentially a text version of the talk, “The State of Stateful on Kubernetes” that I gave alongside Michelle Au (Kubernetes SIG Storage co-chair) at Data on Kubernetes Day at KubeCon NA 2023. You can find a recording of that talk here, and the slides here.
Everything Has State
At the Kubernetes Contributor Summit at KubeCon NA 2022, contributor Andrea Tosatto led an open spaces session to discuss “running stateful workloads on Kubernetes.” I was excited about the idea of stateful workloads, but was still trying to understand what the term “stateful” meant. I was thinking it probably had something to do with databases and persistent storage. Storage = state, right? Then Andrea threw my thinking on “stateless” out the window with a single slide. He presented the topic of “stateful” without using the word “storage” at all.
Andrea Tosatto describes “stateful” features from Kubernetes’ perspective during his session “Maintaining the quorum – running stateful workloads on Kubernetes” at the Kubernetes Contributor Summit at KubeCon NA 2022.
At the time, I thought that data and stateful were inextricably intertwined. So how could the contributor community describe Kubernetes’ role in it without mentioning data or storage at all? I mulled that over and talked with folks about it for a few weeks. In the end, I found the pattern that I believe makes a workload “stateful” from Kubernetes’ perspective. It’s all about dependencies.
What Matters is Whether Anything Cares About It
In summary, most of the features Andrea mentioned as “stateful” features in Kubernetes were really what I would consider “persistence” or “availability” features: having a minimum number of replicas, managing service lifecycles, and generally treating workloads as “pets” rather than “cattle.” The classic “pets vs cattle” analogy has been used for many years to tout Kubernetes’ ability to tolerate failures by treating workloads as replaceable. A Deployment can just delete a failing pod and create a new one to auto-heal a problem. But availability isn’t always so simple.
A StateLESS Use Case
In this photo processing use case, an image is sent, it’s processed, and the output is spit out. The workload doing the processing only needs to exist long enough to process the image; then it can just push its result.
The classic description of a StateLESS workload is one that can spin up, do what it needs to do, and then die so it’s not using excess resources. One example I’ve often used is a photo processing application that can spin up, process a photo, send the output somewhere, and then die. This “stateless” app does go through states like spinning up, receiving input, pushing output, and spinning down. But importantly, there aren’t other apps or services constantly trying to interact with it.
There are cases where services interact with a stateless app frequently. But in those cases, you can generally use tools like load balancers to abstract the connections and keep the instances of your stateless app from being pinned down with constant requests/connections. StateFUL apps are generally those that need extra tools, rules, and planning to provide the same level of availability.
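As a sketch of that abstraction, here is what a stateless app behind a load-balancing Service might look like. The names and image are hypothetical; the point is that requests hit the Service, not any particular replica:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: photo-processor
spec:
  replicas: 3                    # any replica can serve any request
  selector:
    matchLabels:
      app: photo-processor
  template:
    metadata:
      labels:
        app: photo-processor
    spec:
      containers:
        - name: processor
          image: example.com/photo-processor:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: photo-processor
spec:
  selector:
    app: photo-processor         # load-balances across all matching pods
  ports:
    - port: 80
      targetPort: 8080
```

If a replica dies, the Service simply stops routing to it; no caller ever depended on that particular pod.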
StateFUL Use Cases
Some examples of workloads that I would consider “stateful” include:
- Pre-container style architectures. These workloads tend to be designed with certain assumptions about the availability of resources and other architecture components. In essence, you might think of them as “monolith” style applications that assume everything they need will be right there with them on a single machine or VM. Kubernetes allows us to distribute those components, meaning those assumptions don’t always hold in the same ways. You may need to make more explicit connections (and/or rules for those connections) in order to get these apps to work right.
- Game Servers: Dedicated Game Servers are very sensitive to disruption since there are users actively playing games on them! Having a game go down right when you were about to win is never fun.
- Databases/Data-Intensive Workloads: Workloads that use or create a lot of data naturally have strong connections/dependencies to storage resources in your architecture. These connections can be very sensitive to failure.
- AI/ML: Many Artificial Intelligence and Machine Learning workloads involve accessing and processing a lot of data; in that sense, they could be considered a type of data-intensive workload. AI/ML workloads are often highly distributed, which can increase availability needs across your architecture.
There are a lot of workloads whose interactions with other parts of the system make them less fluid. This is what we mean by “stateful”: a workload with dependencies so sensitive that it’s very hard to abstract it enough to treat it as a replaceable service. You’ll need some special tools to make sure all of the services across your architecture can get what they need when they need it.
Exploring Stateful Features in Kubernetes
Let’s explore a few examples of Kubernetes features that will help you manage the availability of your foundational “stateful” workloads. Kubernetes offers tools like StatefulSets, CRDs, and upgrade disruption mitigation tools, with more always on the roadmap.
StatefulSets
StatefulSets in Kubernetes have special characteristics to help you manage the availability of your workload, such as ordering and uniqueness features.
The key difference between a StatefulSet and the other workload types in Kubernetes is that StatefulSets treat your workloads more like irreplaceable pets than interchangeable cattle. Unlike a workload running in a Deployment, a workload in a StatefulSet gets a stable, unique network identifier for each pod. Normally, Kubernetes does not guarantee that a pod’s network identifier will stay consistent: a pod could restart and its IP could change. In those cases, you use a Kubernetes Service object (essentially a software-defined load balancer) to abstract away the unstable IPs. StatefulSets are good for cases where you have very strict dependencies on individual pods (individual instances of that workload), rather than on the workload as a whole.
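Here’s a minimal sketch of a StatefulSet paired with a headless Service, which is how pods get those stable, per-pod DNS names. The names, image, and storage size are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None        # headless: gives each pod its own stable DNS name
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # pods are addressable as db-0.db, db-1.db, ...
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16           # example image
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:  # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Note that `db-0` keeps its name and its volume across restarts, which is exactly the “pet”-like identity a Deployment doesn’t give you.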
Custom Resource Definitions (CRDs)
Custom Resource Definitions (or CRDs) are a very attractive option for many stateful workloads running in Kubernetes. Rather than relying only on the pre-defined resource types like Deployments and StatefulSets, engineers can tell Kubernetes exactly how they want it to manage their application by creating custom-made resources. In practice, engineers also create operators: the controllers that watch those custom resources and act on them. You might hear “CRDs” and “operators” used interchangeably.
CRDs are very common with databases; Kubegres, an operator for PostgreSQL, is one example. A really fascinating use case for CRDs is game servers. The open source Agones project provides a Kubernetes operator for a specific use case: game servers. The Agones CRD teaches Kubernetes to run a custom resource called a “GameServer,” which takes into account that multiplayer game sessions have in-memory state, and therefore cannot be interrupted while game sessions are in play.
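To give a feel for what that looks like, here is a sketch along the lines of Agones’ GameServer resource. The game image is a hypothetical placeholder, and some Agones fields are omitted for brevity:

```yaml
apiVersion: agones.dev/v1
kind: GameServer
metadata:
  name: my-game-server
spec:
  ports:
    - name: default
      containerPort: 7654        # port the game process listens on
  template:                      # ordinary pod template underneath
    spec:
      containers:
        - name: game
          image: example.com/my-game:1.0   # hypothetical game server image
```

The operator layered on top of this CRD knows things vanilla Kubernetes doesn’t, such as whether a session is in play, and uses that to decide when the pod may safely be shut down.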
Reducing Upgrade Disruption
Stateful workloads are often the scariest ones during upgrades. Due to their many dependencies and connections with other parts of your infrastructure, they’re usually the most sensitive to disruption. Kubernetes has a variety of features to help you address the areas where upgrade disruption might strike.
To improve fault tolerance, you should spread workloads across your cluster, which Pod Topology Spread Constraints can help with. To keep your gregarious stateful workloads from getting disrupted by other workloads or dependencies in your architecture, you can isolate them using tools such as Node Affinity, Pod Priority and Preemption, Quality of Service (QoS) classes, and Pod resource requests and limits. As we’ve mentioned, one of Kubernetes’ claims to fame is its ability to auto-heal, and that auto-healing usually involves evicting or killing pods. You can set rules around how and when Kubernetes evicts pods using Pod Disruption Budgets, Pod liveness and readiness probes, and graceful termination via preStop hooks.
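A sketch of a few of those guardrails together, assuming a stateful app labeled `app: db` (the names, commands, and numbers are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: db-pdb
spec:
  minAvailable: 2                  # voluntary evictions must leave >= 2 pods running
  selector:
    matchLabels:
      app: db
---
# Pod spec fragment (e.g. inside a StatefulSet template): spread replicas
# across zones, and give the process time to drain before shutdown.
spec:
  terminationGracePeriodSeconds: 60
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: db
  containers:
    - name: db
      image: postgres:16                        # example image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]   # placeholder for a real drain step
```

With the budget in place, a node drain during an upgrade will pause rather than evict the pod that would take you below quorum.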
Features in 1.29 and Beyond
The community is working hard on features to make Kubernetes an even safer and smoother place to run your sensitive stateful workloads.
Engineers throughout the tech industry use Kubernetes to run workloads of all kinds. The variety and needs of those workloads will only continue to grow. Thus, the community has a number of projects underway to improve the stability of Stateful workloads on Kubernetes. Kubernetes 1.29, the last release of 2023, introduced the exciting ability to modify persistent volumes in-place (in alpha)! Further Kubernetes Enhancement Proposals (KEPs) to keep an eye on include:
- STS volume expansion
- Group volume snapshots
- Cross-namespace snapshots (and other data sources)
- Declarative node maintenance
- Topology-aware disruptions
Give Your Stateful Workloads Structure
We all sometimes wish we could live leisurely stateless lives. But the state of our work is important, and we may sometimes be overwhelmed by folks asking for our status. In your infrastructure, you’re likely to end up running some “stateful” workloads with similarly important dependencies.
Here are some Best Practices for those sensitive, stateful workloads on Kubernetes:
- Use the aforementioned features!
- Use blue/green strategies for upgrades
- Consider running chaos testing
- Take regular backups
  - Backups of the data
  - Backups of the config
- Actually test your recovery procedures!
- CI/CD best practices apply
- General Kubernetes best practices around security and networking apply
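For the backup point above, one Kubernetes-native option is the CSI VolumeSnapshot API. A minimal sketch, assuming a CSI driver with snapshot support; the snapshot class and claim names are hypothetical:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-db-0   # the PVC to back up
```

Snapshots cover the data half of the equation; your manifests and config still belong in version control, and neither counts as tested until you’ve actually restored from them.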
The Data on Kubernetes Community Day event at KubeCon NA 2023 originally hosted this talk. If you’re an engineer who runs stateful, and especially data-intensive, workloads on Kubernetes, you might find the community helpful! You can find the Data on Kubernetes Community here.