slug: create-manual-kubernetes-cluster-kubeadm
title: Template
draft: true

Intro

In this [previous article]({{< ref "post/7-terraform-create-proxmox-module" >}}), I explained how to deploy 6 VMs on Proxmox using Terraform, 3 master and 3 worker nodes, based on a [cloud-init template]({{< ref "post/1-proxmox-cloud-init-vm-template" >}}).

Now that the infrastructure is ready, let's move on to the next step: manually building a Kubernetes cluster using kubeadm.

In this post, I'll walk through each step of installing a simple Kubernetes cluster, from preparing the nodes to deploying a basic application.

For now, I will not rely on automation tools to configure the nodes, to better understand the steps involved in bootstrapping a Kubernetes cluster. Automation will be covered in future posts.


What is Kubernetes

Kubernetes is an open-source platform for orchestrating containers across a group of machines. It handles the deployment, scaling, and health of containerized applications, allowing you to focus on building your services rather than managing infrastructure details.

A Kubernetes cluster is made up of two main types of nodes: control plane (master) nodes and worker nodes. The control plane is responsible for the overall management of the cluster: it makes decisions about scheduling, monitoring, and responding to changes in the system. The worker nodes are where your applications actually run, inside containers managed by Kubernetes.

In this post, we'll manually set up a Kubernetes cluster with 3 control plane nodes (masters) and 3 workers. This structure reflects a highly available, production-like setup, even though the goal here is mainly to learn and understand how the components fit together.

The official documentation can be found here; I will use version v1.32.


Prepare the Nodes

I will perform the following steps on all 6 VMs (masters and workers).

Hostname

Each VM has a unique hostname, and all nodes must be able to resolve each other.

The hostname is set at VM creation with cloud-init, but for demonstration purposes, here is how to set it manually:

sudo hostnamectl set-hostname <hostname>

On my infrastructure, the nodes resolve each other's hostnames using my DNS server on the lab.vezpi.me domain. If you don't have a DNS server, you can hardcode the node IPs in each node's /etc/hosts file:

192.168.66.168 apex-worker
192.168.66.167 apex-master
192.168.66.166 zenith-master
192.168.66.170 vertex-worker
192.168.66.169 vertex-master
192.168.66.172 zenith-worker
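
To confirm that name resolution works from each node, a quick check (using one of my hostnames as an example):

getent hosts apex-master
ping -c 1 apex-master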

OS Updates

My VMs are running Ubuntu 24.04.2 LTS. Cloud-init handles updates after provisioning in that case; let's make sure everything is up to date and install the packages needed to add the Kubernetes repository:

sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https ca-certificates curl gpg

Swap

The default behavior of a kubelet is to fail to start if swap memory is detected on a node. This means that swap should either be disabled or tolerated by kubelet.

My VMs are not using swap, but here is how to disable it:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
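
To verify, swapon should list no devices and free should report 0B of swap:

swapon --show
free -h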

Firewall

For this lab, I will just disable the local firewall (don't do that in production):

sudo systemctl disable --now ufw

For production, you want to allow the nodes to talk to each other on these ports:

Control plane

| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|---------|---------|
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |

Worker

| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|---------|---------|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10256 | kube-proxy | Self, Load balancers |
| TCP | Inbound | 30000-32767 | NodePort Services | All |
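
If you prefer to keep ufw enabled, the rules could look like the following. This is only a sketch for a control plane node, assuming all nodes live in 192.168.66.0/24:

# Allow cluster traffic from the node network (control plane node example)
sudo ufw allow from 192.168.66.0/24 to any port 6443 proto tcp
sudo ufw allow from 192.168.66.0/24 to any port 2379:2380 proto tcp
sudo ufw allow from 192.168.66.0/24 to any port 10250 proto tcp
sudo ufw allow from 192.168.66.0/24 to any port 10257 proto tcp
sudo ufw allow from 192.168.66.0/24 to any port 10259 proto tcp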

Kernel Modules and Settings

Kubernetes needs 2 kernel modules:

  • overlay: for facilitating the layering of one filesystem on top of another
  • br_netfilter: for enabling bridge network connections

Let's enable them:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Some kernel settings related to network are also needed:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
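
To confirm the modules are loaded and the settings applied:

lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward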

Container Runtime

You need to install a container runtime into each node in the cluster so that Pods can run there. I will use containerd:

sudo apt install -y containerd

Create the default configuration:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

Enable systemd cgroup driver:

sudo sed -i 's/^\(\s*SystemdCgroup\s*=\s*\)false/\1true/' /etc/containerd/config.toml

Restart and enable the containerd service:

sudo systemctl restart containerd
sudo systemctl enable containerd
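
To double-check that the cgroup driver change took effect and that the service is running:

grep SystemdCgroup /etc/containerd/config.toml
systemctl is-active containerd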

Kubernetes Packages

Last step: install the Kubernetes packages. I start by adding the repository and its signing key.

Add the key:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the repository:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Finally I can install the needed packages:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

On the nodes, update the apt package index, install kubelet and kubeadm, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm
sudo apt-mark hold kubelet kubeadm

I will not manage the cluster from my nodes; instead, I install kubectl on my LXC controller:

sudo apt-get update
sudo apt-get install -y kubectl
sudo apt-mark hold kubectl
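
A quick sanity check that the binaries are installed and held at the expected version:

# On the nodes
kubeadm version -o short
kubelet --version
apt-mark showhold
# On the controller
kubectl version --client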

Initialize the Cluster

Once all nodes are prepared, it's time to initialize the Kubernetes control plane on the first master node.

Initialization

Run the following command to bootstrap the cluster:

sudo kubeadm init \
  --control-plane-endpoint "k8s-lab.lab.vezpi.me:6443" \
  --upload-certs \
  --pod-network-cidr=10.10.0.0/16

Explanation:

  • --control-plane-endpoint: DNS name for your control plane.
  • --upload-certs: Upload the certificates that should be shared across all masters of the cluster.
  • --pod-network-cidr: Subnet for the CNI.

The DNS name k8s-lab.lab.vezpi.me is handled in my homelab by Unbound DNS. It resolves to my OPNsense interface, where a HAProxy service listens on port 6443 and load balances between the 3 control plane nodes.
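
Before initializing, a quick way to confirm that the DNS record resolves and that HAProxy is listening on port 6443 (nc is just one option for a simple TCP check):

getent hosts k8s-lab.lab.vezpi.me
nc -zv k8s-lab.lab.vezpi.me 6443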

This step will:

  • Initialize the etcd database and control plane components.
  • Set up RBAC and bootstrap tokens.
  • Output two important kubeadm join commands: one for workers, and one for additional control-plane nodes.

You'll also see a message instructing you to set up your kubectl access.

I0718 07:18:29.306814   14724 version.go:261] remote version is much newer: v1.33.3; falling back to: stable-1.32
[init] Using Kubernetes version: v1.32.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0718 07:18:29.736833   14724 checks.go:846] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [apex-master k8s-lab.lab.vezpi.me kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.167]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [apex-master localhost] and IPs [192.168.66.167 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [apex-master localhost] and IPs [192.168.66.167 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.894876ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 9.030595455s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
70614009469f9fc7a97c392253492c509f1884281f59ccd7725b3200e3271794
[mark-control-plane] Marking the node apex-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node apex-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8etamd.g8whseg60kg09nu1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes running the following command on each as root:

  kubeadm join k8s-lab.lab.vezpi.me:6443 --token 8etamd.g8whseg60kg09nu1 \
        --discovery-token-ca-cert-hash sha256:65c4da3121f57d2e67ea6c1c1349544c9e295d78790b199b5c3be908ffe5ed6c \
        --control-plane --certificate-key 70614009469f9fc7a97c392253492c509f1884281f59ccd7725b3200e3271794

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-lab.lab.vezpi.me:6443 --token 8etamd.g8whseg60kg09nu1 \
        --discovery-token-ca-cert-hash sha256:65c4da3121f57d2e67ea6c1c1349544c9e295d78790b199b5c3be908ffe5ed6c

Configure kubectl

If you want to manage your cluster from your master node, you can simply copy-paste the commands from the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you prefer to control the cluster from elsewhere, in my case from my LXC bastion:

mkdir -p $HOME/.kube
scp <master node>:/etc/kubernetes/admin.conf $HOME/.kube/config
chmod 600 ~/.kube/config

Verify your access:

kubectl get nodes

You should see only the first master listed (in NotReady state until the CNI is deployed).

Install the CNI Plugin Cilium

From the Cilium documentation, there are two common ways to install the CNI: using the Cilium CLI or Helm. For this lab, I will use the CLI tool.

Install the Cilium CLI

The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble):

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
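
A quick check that the CLI is available on the controller:

cilium version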

Install Cilium

Install Cilium into the Kubernetes cluster pointed to by your current kubectl context:

cilium install --version 1.17.6

Validate the Installation

To validate that Cilium has been properly installed:

cilium status --wait

To validate that your cluster has proper network connectivity:

cilium connectivity test

Once installed, the master node should transition to Ready status.


Join Additional Nodes

After initializing the first control plane node, you can now join the remaining nodes to the cluster.

There are two types of join commands:

  • One for joining control-plane (master) nodes
  • One for joining worker nodes

These commands were displayed at the end of the kubeadm init output. If you didn't copy them, you can regenerate them.

⚠️ The certificates and the decryption key expire after two hours.

Additional Masters

Generate Certificates

If you need to re-upload the certificates and generate a new decryption key, use the following command on a control plane node that is already joined to the cluster:

sudo kubeadm init phase upload-certs --upload-certs
kubeadm certs certificate-key

Generate Token

Paired with the certificate key, you'll need a new token. The following prints the whole control plane join command:

sudo kubeadm token create --print-join-command --certificate-key <certificate-key>

Join the Control Plane

You can now join any number of control plane nodes by running the command printed above, or the one given by the kubeadm init command:

sudo kubeadm join <control-plane-endpoint> --token <token> --discovery-token-ca-cert-hash <discovery-token-ca-cert-hash> --control-plane --certificate-key <certificate-key>

Join Workers

Again, if you missed the output of kubeadm init, you can generate a new token along with the full join command:

sudo kubeadm token create --print-join-command

Then you can run that join command on each worker node, as shown below.
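
The worker join command follows the same pattern as the one printed by kubeadm init, without the control plane flags:

sudo kubeadm join <control-plane-endpoint> --token <token> --discovery-token-ca-cert-hash <discovery-token-ca-cert-hash>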

Verify Node Status
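
Back on the controller, every node should now appear, and move to the Ready state once the CNI is running on it:

kubectl get nodes -o wide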

Deploy a Sample Application

  • Creating a simple Deployment and Service
  • Exposing it via NodePort or LoadBalancer
  • Verifying functionality
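
A minimal sketch of what this could look like, assuming a simple nginx Deployment exposed through a NodePort Service (hello-web is just an example name):

kubectl create deployment hello-web --image=nginx
kubectl expose deployment hello-web --port=80 --type=NodePort
kubectl get svc hello-web

The application should then answer on any node IP at the allocated NodePort.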

Conclusion

  • Summary of the steps
  • When to use this manual method