I published my container cookie analogy on this blog for the first time in 2019 as “Explain Like I’m 5: Containers vs VMs.” At the time, I created the analogy to help bring sales professionals up to speed on container technology quickly and in a way that helped them understand why their customers were talking about it. Essentially, it was a tool to communicate the value proposition of containers. In May 2019, I addressed some of the gaps in the container analogy with “Welcome to the Doggy Daycare: Containers to a System Engineer.” The Doggy Daycare analogy provides a much better tool for engineers who want to understand how containers work.
From 2019 to now, I’ve been iterating on these ideas and trying to improve and clarify how I teach anyone and everyone about containers. With this post, I want to revamp and clarify how cookies can help you understand what containers do and how you can use them!
Why Should You Care about Containers?
Applications are at the core of how your business does what it does, and running them is critical to the business's success. They're like the chocolate chips in a cookie – the best part, what you're really there for. And just like chocolate chips make the cookie, your apps are what make containers valuable.
Containers act as a packaging mechanism and an isolation mechanism for the applications that matter to you or your business.
In this post, I’ll be focusing on Docker. Docker is a great tool for containerization! But Docker containers aren’t the only type of containers out there. Just like you could put the same cookie ingredients together in different ways, you can implement containers differently too. Docker uses containerd as its container runtime, but there are many others. Each one has trade-offs in terms of speed, security posture, and resource utilization.
The Cookie Recipe: Containers as a Packaging and Isolation Mechanism
One common talking point on the value of containers is that they address the “works on my machine” problem. The first part of how they address this problem is that they contain all the things an app needs to run, along with the instructions to run it. It’s like a recipe and ingredients all in one.
The second piece of how containers address the “works on my machine” problem is isolation. Say you have a couple of versions of a dependency, like Python, installed on your system. Depending on which one is configured as the default, your apps may fail to run properly – not because you don’t HAVE the dependency, but because the system is using the wrong version of it. By isolating your app and its dependencies in a container, you can make sure that nothing gets between your app and the exact dependencies it needs!
Let’s walk through each step of creating a Docker container to understand how Docker packages applications and their dependencies together.
Container Definitions – The Recipe & Ingredients
Below, you can see a Dockerfile, a type of container definition. Dockerfiles have several keywords that describe how to build a container. You can always learn more about Dockerfiles in the Dockerfile reference from Docker.
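Here’s a minimal sketch of what such a Dockerfile might look like, assuming a small Python app (the file names `app.py` and `requirements.txt` are placeholders for illustration):

```dockerfile
# Start from an existing Python base image (the "flour")
FROM python:3.12-slim

# Set the working directory for the commands that follow
WORKDIR /app

# Copy in the dependency list and install it
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application itself (the "chocolate chips")
COPY app.py .

# Tell the container what to run when it starts
CMD ["python", "app.py"]
```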
The first line is generally the “FROM” line – where you specify an existing container image you want to build on top of, or that you’re building from scratch. The FROM line is the base of our container definition, kind of like flour in a cookie recipe. Then there are some lines with keywords like “RUN,” “WORKDIR,” and “COPY.” These commands are like the butter, sugar, etc. in a cookie recipe. They continue to build the foundation where your main application, your chocolate chips, can shine.
RUN specifies a command to execute while the image is being built, for example to install a dependency of your application or to set up some configuration. Coming back to our point about isolation earlier, you might have 2 versions of Python installed on your system, but your app only needs one. In your container, you would install only the one your app needs.
WORKDIR sets the directory within the filesystem of the container where the following commands should run. It’s useful to set if, for example, you need to install some dependencies in a specific place relative to your main application.
COPY allows you to copy files your app will need into your container.
ENTRYPOINT or CMD are keywords that tell the container what to do when it starts up. In my experience, CMD is a little simpler – it provides a default command that you can override entirely at run time – whereas ENTRYPOINT is more configurable: it fixes the executable, and anything you pass at run time becomes its arguments.
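As a quick sketch of the difference (the `--port` flag here is hypothetical, just for illustration):

```dockerfile
# ENTRYPOINT fixes the executable that always runs
ENTRYPOINT ["python", "app.py"]

# CMD supplies default arguments; "docker run <image> <other args>"
# replaces them without changing the executable
CMD ["--port", "8080"]
```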
Now that you’ve defined how you want your container to look by creating its recipe and specifying its instructions, let’s walk through how you get it running.
Container Images – Cookie Dough
The next step is to build the container. This is like following the recipe on how to mix all the ingredients together before baking. Just like in baking, this is an important step for the value of containers as a packaging mechanism.
To build your container, you would run the “docker build” command and point it at your Dockerfile. I just specified the current directory; if you do that, Docker knows to look for a file called “Dockerfile.” The “docker build” command outputs “Successfully built bd0a9b2b1ff7.” That string of letters and numbers is a unique identifier. It identifies the real output that we can use – a container image. You can now put that container image into a container registry. There are lots of registry options, like DockerHub. Putting your container image in a registry is like rolling dough up and freezing it to save or give away!
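The build step looks something like this (“my-cookie-app” is an assumed tag name; the trailing dot is the current directory, where Docker looks for the Dockerfile):

```shell
# Build an image from the Dockerfile in the current directory
# and give it a human-friendly tag
docker build -t my-cookie-app .
```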
A lot of businesses these days are packaging their applications in containers for their internal teams or external customers to use. They do that by providing their users with this container image – the cookie dough stage in our cookie analogy. Companies could give their users a recipe to follow, but they could mess that up. Instead, companies pre-package their apps with everything they need included. That way, they can hand it off to users with minimal assembly required – just bake to run!
Running a Container – Baking
To run the container you built, you can use “docker run” and specify your container image. Below I use shorthand – just the first few characters of the container image’s ID. As long as they’re unique, Docker knows which image you mean.
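Using the image ID from the build output (“Successfully built bd0a9b2b1ff7”), that shorthand looks like:

```shell
# Run a container from the image, referring to it by a unique
# prefix of its image ID
docker run bd0a
```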
If you wanted to run a container image from a registry, you could configure Docker to know about that registry. Then you could specify the image name and version, and Docker would go download it for you. By default, Docker tends to be configured to know about DockerHub. So you could say “docker run nginx,” and Docker would get the most recent version of the Nginx container image from DockerHub!
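For example, assuming Docker is installed and can reach DockerHub (the port mapping is just so you can reach Nginx from your browser):

```shell
# Pull the latest nginx image from DockerHub if it isn't already local,
# then run it, mapping port 8080 on your machine to port 80 in the container
docker run -p 8080:80 nginx
```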
Go Forth and Containerize!
Now you know the steps to create your own containers. The cookie analogy should help you remember what each stage does and why it’s important. Try containerizing an app you’ve built or that you use a lot. Or try running a container image someone else has built!
Not sure where to start? I mentioned a lot of companies these days are providing their applications as container images. One popular category that you’ve probably encountered in some way is games! There are a lot of games like Minecraft, Valheim, V Rising, and many more, that allow you to set up your own server so you can play the game with your friends. Containerizing these game servers is a very popular way to run and distribute them! Try looking into your favorite games and see if you could run your own server via a container!