Kubernetes has become the de-facto standard platform for deploying applications in a lot of companies.
As software engineers, even if we’re not directly involved in operations, chances are pretty good that we’ll need to interact with Kubernetes in one way or another.
And nothing gives us feedback as quickly as testing changes locally, so when we want to optimize how our application deploys to Kubernetes, we'll want a Kubernetes cluster on our own machine.
Kubernetes has come a long way from being that big behemoth that takes days to setup.
There are a couple of solutions that allow us to install Kubernetes locally.
Of course a local setup isn’t a fully fledged solution with all the bits and pieces, but in most cases that’s perfectly fine.
I don’t want a complete user management or failover solution.
I want a quick and easy way to validate my deployment and check whether the container declarations really work the way I intend them to.
A solution that works very well for me personally is k3d.
k3d is a wrapper around k3s and fires up a fully functional Kubernetes cluster within just a couple of Docker containers.
Dependencies and installation
The only dependency required by k3d is a running Docker installation. For a Mac that’s easy to fulfill by installing Docker Desktop.
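Before installing k3d it’s worth making sure the Docker daemon is actually up and reachable; a quick sanity check might look like this:

```shell
# Print the Docker server version; this fails with a clear error
# if the daemon isn't running or isn't reachable.
docker info --format '{{.ServerVersion}}'
```

If this prints a version number, k3d has everything it needs.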
k3d itself can be installed via Homebrew:
$ brew install k3d
Now everything we need to fire up a local Kubernetes cluster is installed on our machine. Let’s verify that we’ve correctly installed k3d:
$ k3d version
k3d version v5.5.1
k3s version v1.26.4-k3s1 (default)
As we can see k3d is ready to install k3s and the corresponding Kubernetes version 1.26.
Create a Kubernetes cluster
To create a new Kubernetes cluster we can use the k3d cluster create command:
$ k3d cluster create christian
A couple of seconds and a series of logging outputs later, the cluster is up and running. Let’s verify this by checking both the Kubernetes nodes and pods:
$ kubectl get nodes
NAME                     STATUS   ROLES                  AGE   VERSION
k3d-christian-server-0   Ready    control-plane,master   72s   v1.26.4+k3s1
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-76d776f6f9-cr67t   1/1     Running     0          100s
kube-system   coredns-59b4f5bbd5-rpz46                  1/1     Running     0          100s
kube-system   helm-install-traefik-crd-h6l8t            0/1     Completed   0          100s
kube-system   svclb-traefik-de3b8bd9-qfjmb              2/2     Running     0          89s
kube-system   helm-install-traefik-rnrdv                0/1     Completed   1          100s
kube-system   traefik-56b8c5fb5c-5rdhq                  1/1     Running     0          89s
kube-system   metrics-server-7b67f64457-xznlh           1/1     Running     0          100s
Just like that we’re ready to use the Kubernetes cluster locally just as we would use a “real” cluster.
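As a quick smoke test we could deploy a throwaway workload and wait for it to come up. The deployment name `nginx-test` is just an illustrative choice, not anything k3d-specific:

```shell
# Deploy a single nginx pod, wait until it's ready, then clean up again.
kubectl create deployment nginx-test --image=nginx:alpine
kubectl rollout status deployment/nginx-test --timeout=60s
kubectl delete deployment nginx-test
```

If the rollout completes, scheduling, image pulls, and networking in the local cluster all work end to end.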
A look under the hood
Under the hood all that k3d does is to fire up a couple of Docker containers running k3s. Let’s verify this by looking at all the running Docker containers:
$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS                             NAMES
044acab9a754   ghcr.io/k3d-io/k3d-tools:5.5.1   "/app/k3d-tools noop"    5 minutes ago   Up 5 minutes                                     k3d-christian-tools
271dba078869   ghcr.io/k3d-io/k3d-proxy:5.5.1   "/bin/sh -c nginx-pr…"   5 minutes ago   Up 5 minutes   80/tcp, 0.0.0.0:52676->6443/tcp   k3d-christian-serverlb
f69e13dd189b   rancher/k3s:v1.26.4-k3s1         "/bin/k3d-entrypoint…"   5 minutes ago   Up 5 minutes                                     k3d-christian-server-0
We recognize the name of the cluster that we defined originally (christian) in the container names.
k3d allows us to run multiple clusters simultaneously by giving them distinct names.
Let’s verify this by creating yet another cluster:
$ k3d cluster create foo
$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS          PORTS                             NAMES
e5614434c7ce   ghcr.io/k3d-io/k3d-tools:5.5.1   "/app/k3d-tools noop"    18 seconds ago   Up 16 seconds                                     k3d-foo-tools
fcd7b21c66f1   ghcr.io/k3d-io/k3d-proxy:5.5.1   "/bin/sh -c nginx-pr…"   18 seconds ago   Up 12 seconds   80/tcp, 0.0.0.0:52780->6443/tcp   k3d-foo-serverlb
cf0f1cbdadb1   rancher/k3s:v1.26.4-k3s1         "/bin/k3d-entrypoint…"   18 seconds ago   Up 15 seconds                                     k3d-foo-server-0
044acab9a754   ghcr.io/k3d-io/k3d-tools:5.5.1   "/app/k3d-tools noop"    8 minutes ago    Up 8 minutes                                      k3d-christian-tools
271dba078869   ghcr.io/k3d-io/k3d-proxy:5.5.1   "/bin/sh -c nginx-pr…"   8 minutes ago    Up 7 minutes    80/tcp, 0.0.0.0:52676->6443/tcp   k3d-christian-serverlb
f69e13dd189b   rancher/k3s:v1.26.4-k3s1         "/bin/k3d-entrypoint…"   8 minutes ago    Up 8 minutes                                      k3d-christian-server-0
During the installation k3d created the necessary configuration for the kubectl command to access our two clusters:
$ kubectl config get-contexts
CURRENT   NAME            CLUSTER         AUTHINFO              NAMESPACE
          k3d-christian   k3d-christian   admin@k3d-christian
*         k3d-foo         k3d-foo         admin@k3d-foo
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:52780
CoreDNS is running at https://0.0.0.0:52780/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:52780/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
As we can see, the control plane is available to us at https://0.0.0.0:52780.
If we “just” use kubectl, all of this is fully transparent to us, as k3d has entered the necessary information into the
~/.kube/config file. But if we want to connect to the cluster instance manually, that’s the address to use.
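With two clusters registered, we switch between them via their kubectl contexts. The context names below are the ones from the get-contexts output above:

```shell
# Point kubectl at the "christian" cluster and verify which nodes answer.
kubectl config use-context k3d-christian
kubectl get nodes

# Switch back to the "foo" cluster.
kubectl config use-context k3d-foo
```

Every kubectl invocation after a use-context call targets the selected cluster until we switch again.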
Configuring the cluster
For a lot of use cases this initial setup is more than sufficient. However, if we want to tweak the cluster, we’re free to do so.
Let’s add a second node to the christian cluster:
$ k3d node create --cluster christian second-node
$ kubectl get nodes
NAME                     STATUS   ROLES                  AGE     VERSION
k3d-christian-server-0   Ready    control-plane,master   17m     v1.26.4+k3s1
k3d-second-node-0        Ready    <none>                 2m12s   v1.26.4+k3s1
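If we already know the shape we want, we can also pass options at creation time instead of adding nodes afterwards. A sketch, with an illustrative cluster name (`demo`) and port mapping:

```shell
# Create a cluster with two agent (worker) nodes, and map host port 8080
# to port 80 on the cluster's load balancer so local Ingress traffic
# reaches the cluster.
k3d cluster create demo --agents 2 -p "8080:80@loadbalancer"
```

The `--agents` and `-p` flags cover the most common tweaks; `k3d cluster create --help` lists the rest.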
Starting and stopping the cluster
We may not want to run the local cluster all the time and block precious system resources.
To stop a cluster (which also stops the corresponding Docker containers) we can use the k3d cluster stop command:
$ k3d cluster stop foo
$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED          STATUS          PORTS                             NAMES
07f96d29af25   ghcr.io/k3d-io/k3d-tools:5.5.1   "/app/k3d-tools noop"    36 seconds ago   Up 36 seconds                                     k3d-christian-tools
2c3512edecc5   d98e08375058                     "/bin/k3d-entrypoint…"   6 minutes ago    Up 31 seconds                                     k3d-second-node-0
271dba078869   ghcr.io/k3d-io/k3d-proxy:5.5.1   "/bin/sh -c nginx-pr…"   21 minutes ago   Up 26 seconds   80/tcp, 0.0.0.0:52676->6443/tcp   k3d-christian-serverlb
f69e13dd189b   rancher/k3s:v1.26.4-k3s1         "/bin/k3d-entrypoint…"   21 minutes ago   Up 34 seconds                                     k3d-christian-server-0
All the containers for the foo cluster have disappeared from the output; only the containers for the christian cluster are still running.
If we need to restart the foo cluster, we can use k3d to do so:
$ k3d cluster start foo
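When a cluster isn’t needed at all anymore, we can remove it entirely rather than just stopping it:

```shell
# Delete the foo cluster, including its containers and kubectl context.
k3d cluster delete foo

# List the remaining clusters and their state.
k3d cluster list
```

Stop/start preserves the cluster’s state between sessions; delete frees everything and is the right choice for throwaway experiments.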
Creating a Kubernetes cluster locally has never been easier.
I’ve been in a lot of situations where I either didn’t have a “real” cluster to test or where using a “real” cluster wasn’t an option.
Maybe I’m traveling and want to test some changes on the train with a spotty internet connection, or I’m not really sure what my changes will do and don’t want to risk polluting (or even damaging) a “real” cluster. Either way, working locally is an indispensable tool in my toolbox.
k3d allows me to be as productive as I can - directly on my machine.