Introduction
If you’re developing for Kubernetes, your local environment should match production as closely as possible. Kind lets you spin up multi-node Kubernetes clusters using Docker containers — no VMs, no cloud costs, fully repeatable. I use it daily for testing deployments, operators, and networking configurations before they hit real clusters.
Local development only. Kind runs nodes as privileged Docker containers, ships kindnet as the CNI (which does not enforce NetworkPolicy), and is not designed for untrusted workloads or internet exposure. Use it on your laptop, in CI, and nowhere else.
Advantages of using kind
- Run clusters with multiple nodes.
- Run clusters with multiple control-plane nodes.
- Lightweight: nodes run as containers, not VMs.
- Config file for repeatable clusters on any system.
- Run multiple clusters, each with a different Kubernetes version.
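The last point deserves a concrete example. Each cluster can pin its own node image, so different Kubernetes versions run side by side. The image tags and digest placeholders below are illustrative only; take real values from the release notes of your Kind version:

```shell
# Two clusters, two Kubernetes versions, side by side.
# Pin node images by digest from the Kind release notes;
# the tags and digests here are placeholders, not real values.
kind create cluster --name k8s-129 \
  --image "kindest/node:v1.29.4@sha256:<digest-from-release-notes>"
kind create cluster --name k8s-130 \
  --image "kindest/node:v1.30.0@sha256:<digest-from-release-notes>"

# Each cluster gets its own kubeconfig context named kind-<cluster-name>.
kubectl --context kind-k8s-129 get nodes
kind get clusters
```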
Installation and usage
To use Kind you need, at a minimum, two things: Docker and kubectl.
Installation
On macOS and Linux via Homebrew:
brew install kind
Homebrew pulls whatever the formula currently points at — for reproducibility across a team, use the versioned curl path with checksum verification on all platforms.
On Linux (check the releases page for the current version and verify the checksum before installing):
KIND_VERSION=v0.23.0
curl -Lo ./kind "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64"
EXPECTED=$(curl -sL "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64.sha256sum" | awk '{print $1}')
echo "${EXPECTED}  kind" | sha256sum -c -
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
A note on Docker: on Linux, joining the docker group grants root-equivalent access to the host. On a shared or work machine, run rootless Docker or Docker Desktop rather than adding your user to docker. If you go the rootless route, Kind needs cgroup-v2 delegation configured (systemd-run --user --scope --property=Delegate=yes) or the kubelet inside the node container will fail to manage pod cgroups — see the Kind rootless docs before debugging mysterious node-NotReady errors.
On Windows (PowerShell — verify the checksum the same way as on Linux before moving the binary into PATH):
$KIND_VERSION = "v0.23.0"
curl.exe -Lo kind-windows-amd64.exe "https://kind.sigs.k8s.io/dl/$KIND_VERSION/kind-windows-amd64"
$EXPECTED = (curl.exe -sL "https://kind.sigs.k8s.io/dl/$KIND_VERSION/kind-windows-amd64.sha256sum").Split(" ")[0]
$ACTUAL = (Get-FileHash .\kind-windows-amd64.exe -Algorithm SHA256).Hash.ToLower()
if ($ACTUAL -ne $EXPECTED) { throw "checksum mismatch: $ACTUAL != $EXPECTED" }
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe
# OR via Chocolatey (https://chocolatey.org/packages/kind) -- same provenance caveat as Homebrew:
# you're trusting whatever the package maintainer currently points at.
choco install kind
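Whichever install route you choose, confirm that the binary on your PATH is the one you pinned and that the Docker daemon is reachable before creating a cluster:

```shell
# Sanity-check the toolchain before creating any cluster.
kind version
kubectl version --client
docker version --format '{{.Server.Version}}'
```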
Basic Usage
Kind needs Docker up and running on your system. Once it is, you can create a cluster with:
kind create cluster
To delete your cluster use:
kind delete cluster
With the two commands above you can create and delete clusters at will. There are several options you can pass to the create command to adjust the cluster to your needs, from the name to the number of worker nodes and so on. We won't list them all here: our goal is replicable clusters, with the setup shared across the team. That speeds up onboarding and keeps every development environment as close as possible to production.
Configure your cluster with a file
We want a file that can be checked into a version control system like Git and be available to every developer on the project. If you are working in a team, building microservices, or running multi-tier applications, you want to be able to create and configure a cluster in seconds.
So at the base of your project, create a file called kind.yaml and add the following content. The apiVersion kind.x-k8s.io/v1alpha4 is the current config API; it is an alpha surface and Kind may bump it in a future release, so pin your Kind binary version alongside this file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    listenAddress: "127.0.0.1"
    protocol: TCP
- role: worker
- role: worker
This creates a cluster with one control-plane node and two worker nodes, and maps container ports 80 and 443 to the same ports on the host system.
The listenAddress: "127.0.0.1" is deliberate. Without it, extraPortMappings binds to 0.0.0.0, which means every service you expose through ingress is reachable by anyone on your LAN — coffee shop, hotel, office network. Bind to loopback and you keep the cluster local to your machine. Same rule applies to the API server: do not override apiServerAddress from its default (127.0.0.1). An exposed kubeadm-bootstrapped API server is a full cluster takeover in one kubectl command.
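You can confirm the binding once the cluster is up. Assuming the cluster name my-cluster from the config above, the control-plane node runs as a container named my-cluster-control-plane:

```shell
# Show the published ports on the control-plane node container.
# You want 127.0.0.1:80 and 127.0.0.1:443 here, never 0.0.0.0.
docker port my-cluster-control-plane
```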
You can also use kubeadm config patches, for example to add labels to your control-plane and/or worker nodes. In this example we add the label ingress-ready with the value true to the control-plane node and both worker nodes:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "127.0.0.1"
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    listenAddress: "127.0.0.1"
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
You can either specify a name in the file or omit it and name the cluster when running the create command.
Using the name from the file:
kind create cluster --config=kind.yaml
Or override it at create time (the --name flag takes precedence over the name in the file):
kind create cluster --name my-cluster --config=kind.yaml
Next Steps
The empty cluster is the starting line, not the finish. Here’s what I do immediately after kind create cluster returns.
Deploy ingress-nginx first. The port mappings above only matter if something is listening on them. Install the Kind-flavoured ingress-nginx manifests and you can hit services on http://localhost with a proper HTTP path, which is exactly what you want when developing an app that assumes it lives behind an ingress.
Before you run kubectl apply, pin the context. ingress-nginx installs a ServiceAccount with a cluster-scoped admission webhook and a NodePort service that maps to hostPort: 80/443. If your current context has drifted to a production cluster since you last looked, this manifest lands there instead — and you’ve just rewritten the cluster’s ingress path. Always pass --context kind-my-cluster explicitly and confirm with kubectl config current-context first. (The kind- prefix is what Kind writes into your kubeconfig; adjust to whatever name you gave the cluster.)
Second, do not kubectl apply -f https://... a manifest fetched by a mutable git tag. controller-v1.10.1 is a tag, and tags can be retargeted. A retargeted tag installs a ServiceAccount with cluster-wide admission-webhook RBAC onto your machine the next time a reader copies this command — that is a one-step cluster compromise via supply chain. Pulling from main is worse for the same reason. The safe pattern matches the Kind binary install earlier on this page: download, verify a known SHA-256, then apply the local file. Pin to a specific commit SHA (immutable) rather than a tag:
# Pick a current, patched minor from https://github.com/kubernetes/ingress-nginx/releases --
# v1.10.1 is outdated and carries known CVEs. Replace INGRESS_SHA with the 40-char commit SHA
# of the release tag you pick (view the tag on GitHub, copy the commit it points at).
INGRESS_SHA=<40-char-commit-sha>
EXPECTED_MANIFEST_SHA256=<sha256-you-record-on-first-download>
curl -fsSLo ingress-nginx-kind.yaml \
"https://raw.githubusercontent.com/kubernetes/ingress-nginx/${INGRESS_SHA}/deploy/static/provider/kind/deploy.yaml"
echo "${EXPECTED_MANIFEST_SHA256}  ingress-nginx-kind.yaml" | sha256sum -c -
kubectl --context kind-my-cluster apply -f ingress-nginx-kind.yaml
kubectl --context kind-my-cluster wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
The first time you download the manifest, compute its SHA-256 (sha256sum ingress-nginx-kind.yaml) and record it. Every subsequent run verifies against that recorded hash, so a compromised mirror or a retargeted commit can’t silently change what lands in your cluster. Same supply-chain rule applies to any other manifest you install this way, including the Calico manifests mentioned below — pin by commit SHA, verify the file hash, then apply.
Deploy something trivial and test the full path. A single-replica nginx Deployment, a Service, and an Ingress pointing to localhost. If curl http://localhost/ returns HTML, your cluster plus ingress plus port mapping are all wired correctly. Debugging networking on an empty cluster is much easier than debugging it once you’ve layered five things on top.
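A minimal sketch of that smoke test, assuming the cluster and ingress-nginx install from earlier; the names (hello, nginx:1.27) are mine, not anything Kind prescribes. Save as hello.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: nginx
        image: nginx:1.27   # pin the tag; latest defeats repeatability
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
```

Apply it with kubectl --context kind-my-cluster apply -f hello.yaml; once the pod is ready, curl http://localhost/ should return the nginx welcome page.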
Mind your kubeconfig context. kind create cluster merges credentials into ~/.kube/config and switches the current context to the new cluster. That is fine until you tab back to a terminal you were using for a production cluster an hour earlier and fire off a kubectl apply against the wrong context. I keep Kind clusters in a separate kubeconfig (kind create cluster --kubeconfig ~/.kube/kind.config) and source it per-shell, or at minimum double-check kubectl config current-context before any destructive command. kind delete cluster --name my-cluster also removes the context entry, which is the cleanup you want on a shared machine.
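A sketch of the separate-kubeconfig workflow; the file path is my convention, not a Kind requirement:

```shell
# Write the new cluster's credentials to a dedicated kubeconfig,
# leaving ~/.kube/config (and any production contexts) untouched.
kind create cluster --name my-cluster --config kind.yaml \
  --kubeconfig "$HOME/.kube/kind.config"

# Opt in per shell; other terminals keep using the default kubeconfig.
export KUBECONFIG="$HOME/.kube/kind.config"
kubectl config current-context
```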
kindnet does not enforce NetworkPolicy. The default CNI in Kind silently ignores NetworkPolicy manifests. You can write them, apply them, see them with kubectl get netpol, and they do nothing. If you need to test policy enforcement locally before shipping to a cluster that uses Calico or Cilium, swap the CNI: kind create cluster --config kind.yaml with networking.disableDefaultCNI: true, then install Calico. Otherwise your “tested” policies first meet a real enforcer in production.
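A sketch of that swap, assuming Calico as the replacement CNI; podSubnet must match what the CNI manifests expect, and 192.168.0.0/16 is Calico's default pool:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
networking:
  disableDefaultCNI: true      # skip kindnet; install a CNI yourself
  podSubnet: "192.168.0.0/16"  # keep in sync with the CNI's default pool
nodes:
- role: control-plane
- role: worker
- role: worker
```

Create the cluster from this file, then install the Calico manifests (pinned and hash-verified like the ingress-nginx manifest above). The nodes report NotReady until the CNI is installed, which is expected.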
Tear down when you’re done. Kind clusters are cheap to create, so treat them as disposable. kind delete cluster --name my-cluster reclaims the Docker resources. I run with an alias that recreates the cluster from kind.yaml in about 40 seconds — faster than trying to clean up a broken cluster by hand. If you find yourself kubectl delete-ing your way back to a good state, just nuke the cluster and start over.
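That alias can be an ordinary shell function; the name kind_reset and its defaults are my own convention:

```shell
# Delete and recreate the cluster from the checked-in config file.
# Faster and more reliable than untangling a broken cluster by hand.
kind_reset() {
  local name="${1:-my-cluster}"
  kind delete cluster --name "$name"
  kind create cluster --name "$name" --config kind.yaml
}
```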