Bootstrapping with Kubeadm
- Standard Tooling: kubeadm is the official SIG Cluster Lifecycle tool for bootstrapping production-grade clusters following best practices.
- Initialization vs. Joining: kubeadm init sets up the Control Plane (certificates, static Pods, etcd) on the first node, while kubeadm join connects worker nodes using a bootstrap token.
- Certificate Authority: kubeadm generates a full PKI (Public Key Infrastructure) — CA, API Server cert, kubelet certs, etcd certs — securing all cluster communication with mutual TLS.
- Post-Bootstrap Steps: A cluster is non-functional until a CNI (Container Network Interface) plugin is installed. Without CNI, pods cannot communicate across nodes.
- kubeadm vs. Managed Services: kubeadm teaches you what managed services (EKS, GKE, AKS) abstract away. Understanding this flow makes you a better cluster operator.
While managed services like EKS, GKE, and AKS hide the complexity of cluster creation, understanding how a cluster is built from scratch is essential for any serious Kubernetes practitioner. kubeadm is the tool designed by the Kubernetes community (SIG Cluster Lifecycle) to provide a simple, best-practice path for creating Kubernetes clusters.
Why Learn kubeadm?
Even if you never run kubeadm in production (most teams use managed services), understanding the bootstrapping process teaches you:
- What the Control Plane actually is — you see each component start one by one
- How TLS certificates work in Kubernetes — essential for debugging authentication issues
- Why CNI matters — a cluster without networking is just a set of disconnected nodes
- What can go wrong — and how to troubleshoot node failures, certificate expiry, and etcd issues
Prerequisites
Before running kubeadm, each node needs:
| Requirement | Details |
|---|---|
| OS | Ubuntu 22.04+, RHEL 8+, or other supported Linux distribution |
| RAM | Minimum 2 GB per node (control plane needs more under load) |
| CPU | Minimum 2 cores for the control plane node |
| Swap | Must be disabled (swapoff -a). Kubernetes requires this for predictable resource management |
| Container Runtime | containerd or CRI-O installed and running |
| Network | Full connectivity between all nodes, unique hostnames, unique MAC addresses |
| Ports | Control plane: 6443 (API Server), 2379-2380 (etcd), 10250 (kubelet), 10257 (controller-manager), 10259 (scheduler). Workers: 10250 (kubelet) |
# Disable swap (required)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Enable required kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Set required sysctl parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
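Before moving on, it helps to confirm the modules and sysctl values actually took effect. A quick check (the parameter names match the k8s.conf file above):

```shell
# Confirm the kernel modules are loaded
lsmod | grep -e overlay -e br_netfilter

# Confirm the sysctl values are applied (each should print "= 1")
sysctl net.bridge.bridge-nf-call-iptables \
       net.bridge.bridge-nf-call-ip6tables \
       net.ipv4.ip_forward
```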
The Bootstrapping Flow
"Initialize Control Plane (API, etcd, Scheduler)"
Step 1: Install kubeadm, kubelet, and kubectl
These three binaries must be installed on every node:
# Add the Kubernetes apt repository (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Note: The apt-mark hold command prevents these packages from being automatically upgraded, which could cause version skew issues in your cluster.
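A quick sanity check that all three binaries landed at matching versions and that the hold is in place (plain version queries; no cluster needed yet):

```shell
kubeadm version -o short   # e.g. v1.31.x
kubelet --version          # Kubernetes v1.31.x
kubectl version --client   # Client Version: v1.31.x

# Confirm the packages are pinned against upgrades
apt-mark showhold          # should list kubeadm, kubectl, kubelet
```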
Step 2: Initialize the Control Plane (kubeadm init)
Run this on the first node (the one that will become the control plane):
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.31.0 \
--control-plane-endpoint=<LOAD_BALANCER_IP_OR_DNS>
What kubeadm init Does (Step by Step)
- Preflight Checks: Verifies the node meets all requirements (swap off, ports available, container runtime running, kernel modules loaded)
- Certificate Authority Generation: Creates a self-signed CA and uses it to generate TLS certificates for:
- API Server (serving cert)
- API Server → kubelet communication
- API Server → etcd communication
- Front Proxy (for aggregation layer)
- etcd peer and server certificates
- kubeconfig Files: Generates kubeconfig files for the controller-manager, scheduler, and admin user in /etc/kubernetes/
- Static Pod Manifests: Writes YAML manifests for the Control Plane components to /etc/kubernetes/manifests/: kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, and etcd.yaml
- kubelet Starts: The kubelet watches /etc/kubernetes/manifests/ and starts the Control Plane components as static Pods
- Bootstrap Token: Generates a token and CA certificate hash for worker nodes to join securely
- Addon Deployment: Applies CoreDNS and kube-proxy as cluster addons
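After a successful init you can observe the results of these steps directly. A quick inspection sketch (the kubectl commands assume admin access, which is set up in the next step):

```shell
# The static Pod manifests kubeadm wrote
ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# The Control Plane components running as static Pods
# (static Pod names carry the node name as a suffix)
kubectl get pods -n kube-system
```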
After kubeadm init
The output includes critical instructions:
# Set up kubectl access for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Save the join command! You'll need this for worker nodes.
# It looks like:
kubeadm join 192.168.1.100:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:abc123...
Understanding the Certificates
kubeadm creates the following PKI structure in /etc/kubernetes/pki/:
/etc/kubernetes/pki/
├── ca.crt / ca.key # Cluster CA
├── apiserver.crt / apiserver.key # API Server serving cert
├── apiserver-kubelet-client.crt/key # API Server → kubelet
├── front-proxy-ca.crt/key # Front proxy CA
├── front-proxy-client.crt/key # Aggregation layer
├── sa.key / sa.pub # Service Account signing keys
└── etcd/
├── ca.crt / ca.key # etcd CA
├── server.crt / server.key # etcd serving cert
├── peer.crt / peer.key # etcd peer communication
└── healthcheck-client.crt/key # etcd health checks
Certificate rotation: By default, kubeadm certificates expire after 1 year. Renew them before expiry with kubeadm certs renew all. Kubelet client certificates have rotated automatically by default since Kubernetes 1.19.
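Before renewing, you can see exactly how long each certificate has left:

```shell
# Show expiry date and remaining lifetime for every kubeadm-managed certificate
sudo kubeadm certs check-expiration

# Renew all certificates in place (Control Plane components pick the new
# certs up when they restart)
sudo kubeadm certs renew all
```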
Step 3: Install a CNI Plugin
After kubeadm init, the cluster is technically running but nodes show NotReady and pods cannot communicate across nodes. You must install a CNI plugin to provide pod networking.
# Option 1: Cilium (recommended for modern clusters)
cilium install
# Option 2: Calico (widely used, supports network policies)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
# Option 3: Flannel (simple overlay network)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
After installing the CNI, nodes should transition to Ready:
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# control-01 Ready control-plane 5m v1.31.0
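Rather than polling kubectl get nodes by hand, kubectl can block until every node reports Ready:

```shell
# Wait up to five minutes for all nodes to become Ready
kubectl wait --for=condition=Ready nodes --all --timeout=300s

# CoreDNS should move from Pending to Running once the CNI is up
kubectl get pods -n kube-system -l k8s-app=kube-dns
```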
See the CNI Deep Dive page for a detailed comparison of networking plugins.
Step 4: Join Worker Nodes (kubeadm join)
On each worker node (after installing kubeadm, kubelet, and the container runtime), run the join command from the kubeadm init output:
sudo kubeadm join 192.168.1.100:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:abc123...
What kubeadm join Does
- Downloads the cluster CA from the API Server using the provided token and verifies it against the certificate hash (preventing MITM attacks)
- Generates a kubelet certificate signed by the cluster CA
- Writes kubeconfig for the kubelet to communicate with the API Server
- Starts the kubelet, which registers the node with the API Server
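You can watch this sequence from the control plane (these commands assume kubectl access is already configured there):

```shell
# The new node appears, NotReady at first until the CNI DaemonSet reaches it
kubectl get nodes

# The kubelet's TLS bootstrap request, auto-approved via kubeadm's
# bootstrap-token RBAC bindings
kubectl get csr
```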
Regenerating a Join Token
Tokens expire after 24 hours by default. To create a new one:
kubeadm token create --print-join-command
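A few related commands for day-to-day token management:

```shell
# List current bootstrap tokens and their remaining TTL
kubeadm token list

# Create a short-lived token (a TTL of 0 never expires; avoid in production)
kubeadm token create --ttl 2h --print-join-command

# Revoke a leaked or unneeded token
kubeadm token delete abcdef.0123456789abcdef
```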
Step 5: Verify the Cluster
# Check all nodes are Ready
kubectl get nodes -o wide
# Verify system pods are running
kubectl get pods -n kube-system
# Test pod-to-pod networking
kubectl run test-1 --image=busybox --command -- sleep 3600
kubectl run test-2 --image=busybox --command -- sleep 3600
kubectl wait --for=condition=Ready pod/test-1 pod/test-2
kubectl exec test-1 -- ping -c 3 $(kubectl get pod test-2 -o jsonpath='{.status.podIP}')
High Availability with kubeadm
For production, you need multiple control plane nodes. kubeadm supports this with the --control-plane flag on kubeadm join:
# On additional control plane nodes
sudo kubeadm join 192.168.1.100:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:abc123... \
--control-plane \
--certificate-key <key-from-init>
This requires a load balancer in front of the API Servers and either external etcd or stacked etcd (one etcd member per control plane node).
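The certificate key comes from kubeadm itself: --upload-certs encrypts the control plane certificates into a Secret and prints the decryption key, which is valid for 2 hours. A sketch of the two ways to obtain it:

```shell
# At init time: also upload the control plane certificates and print the key
sudo kubeadm init \
  --control-plane-endpoint=<LOAD_BALANCER_IP_OR_DNS> \
  --upload-certs

# On a running cluster: re-upload the certificates and print a fresh key
sudo kubeadm init phase upload-certs --upload-certs
```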
Common Pitfalls
- Swap not disabled: kubeadm init will refuse to run if swap is enabled. This is not optional — Kubernetes needs predictable memory behavior.
- Firewall blocking ports: The API Server (6443), etcd (2379-2380), and kubelet (10250) must be reachable between all nodes.
- Forgetting the CNI: Without a CNI plugin, CoreDNS pods stay in Pending state and nodes remain NotReady. This is the most common "my cluster doesn't work" issue.
- Certificate expiry: Certificates expire after 1 year. Set a calendar reminder to run kubeadm certs renew all and restart control plane components.
- Container runtime not running: If containerd or CRI-O is not running when kubeadm init starts, preflight checks will fail. Verify with systemctl status containerd.
- Version skew: kubeadm, kubelet, and kubectl should be the same version. The kubelet may lag the API Server by up to three minor versions (one minor version before Kubernetes 1.28), but must never be newer.
What's Next?
Now that you understand how clusters are built, proceed to:
- Managed Cloud Providers — See how EKS, GKE, and AKS simplify this process
- Your First Deployment — Deploy an application on your cluster
- Architecture — Deepen your understanding of the components you just bootstrapped