Auto-update blog content from Obsidian: 2025-07-17 12:06:40
All checks were successful
Blog Deployment / Check-Rebuild (push) Successful in 6s
Blog Deployment / Build (push) Has been skipped
Blog Deployment / Deploy-Staging (push) Successful in 9s
Blog Deployment / Test-Staging (push) Successful in 2s
Blog Deployment / Merge (push) Successful in 6s
Blog Deployment / Deploy-Production (push) Successful in 10s
Blog Deployment / Test-Production (push) Successful in 3s
Blog Deployment / Clean (push) Has been skipped
Blog Deployment / Notify (push) Successful in 4s
In this [previous article]({{< ref "post/7-terraform-create-proxmox-module" >}}), I created a Terraform module to provision VMs on Proxmox.

Now that the infrastructure is ready, let’s move on to the next step: **manually building a Kubernetes cluster** using `kubeadm`.

In this post, I’ll walk through each step of the installation of a simple Kubernetes cluster, from preparing the nodes to deploying a basic application.

I will not rely on automation tools to configure the nodes for now, to better understand the steps involved in bootstrapping a Kubernetes cluster. Automation will be covered in future posts.

---

## What is Kubernetes

On my infrastructure, the nodes resolve each other’s hostnames using my DNS server:
```
192.168.66.172 zenith-worker
```

### OS Updates

My VMs are running **Ubuntu 24.04.2 LTS**. Cloud-init already handles updates after provisioning in that case, but let’s make sure everything is up to date and install the packages needed to add the Kubernetes repository.
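The exact commands depend on your setup; a minimal sequence on Ubuntu, following the standard kubeadm prerequisites, could look like this:

```bash
# Refresh the package index and apply pending updates
sudo apt-get update && sudo apt-get upgrade -y

# Packages required to fetch and verify the Kubernetes apt repository
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```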

Kubernetes requires swap to be disabled; comment out any swap entry in `/etc/fstab` so it stays off after a reboot:
```bash
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```

### Firewall

For this lab, I will just disable the local firewall (don't do that in production):
```bash
sudo systemctl disable --now ufw
```

For production, you want to allow the nodes to talk to each other on these ports:

#### Control plane

| Protocol | Direction | Port Range | Purpose                 | Used By |
| -------- | --------- | ---------- | ----------------------- | ------- |
| TCP      | Inbound   | 10257      | kube-controller-manager | Self    |

#### Worker

| Protocol | Direction | Port Range  | Purpose            | Used By              |
| -------- | --------- | ----------- | ------------------ | -------------------- |
| TCP      | Inbound   | 10250       | Kubelet API        | Self, Control plane  |
| TCP      | Inbound   | 10256       | kube-proxy         | Self, Load balancers |
| TCP      | Inbound   | 30000-32767 | NodePort Services† | All                  |
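
If you do keep `ufw` enabled, the worker table above translates into rules like the sketch below (the `192.168.66.0/24` source subnet is an assumption based on my lab addressing; adjust it to your node network):

```bash
# Worker node example: only allow cluster traffic from the node subnet
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 10250        # Kubelet API
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 10256        # kube-proxy
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 30000:32767  # NodePort Services
sudo ufw enable
```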
### Kernel Modules and Settings
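
This section boils down to the standard kubeadm prerequisites: load the `overlay` and `br_netfilter` kernel modules and enable IP forwarding. A sketch of those settings, taken from the upstream kubeadm documentation rather than from my lab:

```bash
# Load the kernel modules now and at every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Required sysctl settings, persisted across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```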

### Container Runtime

Generate the default `containerd` configuration:
```bash
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
```

Enable the `systemd` *cgroup* driver:
```bash
sudo sed -i 's/^\(\s*SystemdCgroup\s*=\s*\)false/\1true/' /etc/containerd/config.toml
```
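
To confirm the change took effect before restarting the service, a quick grep works (assuming the default config layout generated above):

```bash
grep SystemdCgroup /etc/containerd/config.toml
# Expect a line containing: SystemdCgroup = true
```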

Restart and enable the `containerd` service:
```bash
sudo systemctl restart containerd
sudo systemctl enable containerd
```
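
Before moving on, it's worth checking that the runtime is actually running and reachable:

```bash
# Service state should be "active"
systemctl is-active containerd
# Client and server versions confirm the containerd socket responds
sudo ctr version
```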

### Kubernetes Packages

Last step: install the Kubernetes packages. I start with adding the repository and its signing key.

Add the key:
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```

Add the repository:
```bash
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

Finally, I can install the needed packages:
- `kubeadm`: the command to bootstrap the cluster.
- `kubelet`: the component that runs on every machine in the cluster and starts pods and containers.
- `kubectl`: the command-line utility to talk to the cluster.

On the nodes, update the `apt` package index, install `kubelet` and `kubeadm`, and pin their version:
```bash
sudo apt-get update
sudo apt-get install -y kubelet kubeadm
sudo apt-mark hold kubelet kubeadm
```
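
To double-check what got installed and pinned (version numbers will come from the v1.32 repository configured above):

```bash
# Confirm the installed versions
kubeadm version -o short
kubelet --version
# Confirm the packages are held back from upgrades
apt-mark showhold
```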

ℹ️ I will not manage the cluster from my nodes; instead, I install `kubectl` on my LXC controller:
```bash
sudo apt-get update
sudo apt-get install -y kubectl
sudo apt-mark hold kubectl
```

---

## Initialize the Cluster

Once all nodes are prepared, it’s time to initialize the Kubernetes control plane on the **first master node**.

### Initialization

Run the following command to bootstrap the cluster:
```bash
sudo kubeadm init \
  --control-plane-endpoint "apex-master.lab.vezpi.me:6443" \
  --upload-certs \
  --pod-network-cidr=10.10.0.0/16
```

**Explanation**:
- `--control-plane-endpoint`: a DNS name for your control plane.
- `--upload-certs`: upload the certificates that should be shared across all control-plane instances to the cluster.
- `--pod-network-cidr`: the subnet for your CNI.

This step will:
- Initialize the `etcd` database and control plane components.
- Set up RBAC and bootstrap tokens.
- Output two important `kubeadm join` commands: one for **workers** and one for **additional control-plane nodes**.

You’ll also see a message instructing you to set up your `kubectl` access.
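
As a side note, the same options can be captured in a kubeadm configuration file instead of flags. A sketch of that approach (the `kubeadm.k8s.io/v1beta4` API version matches recent kubeadm releases; check `kubeadm config print init-defaults` for yours):

```bash
# Write the equivalent of the flags used above into a config file
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: "apex-master.lab.vezpi.me:6443"
networking:
  podSubnet: "10.10.0.0/16"
EOF

# --upload-certs can still be passed alongside --config
sudo kubeadm init --config kubeadm-config.yaml --upload-certs
```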

### Configure `kubectl`

If you want to manage the cluster from your master node, you can simply copy-paste the commands from the output of `kubeadm init`:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

If you prefer to control the cluster from elsewhere, in my case from my LXC bastion:
```bash
mkdir -p $HOME/.kube
scp <master node>:/etc/kubernetes/admin.conf $HOME/.kube/config
chmod 600 ~/.kube/config
```

Verify your access:
```bash
kubectl get nodes
```

ℹ️ You should see only the first master listed (in "NotReady" state until the CNI is deployed).

### Install the CNI plugin Cilium

From the [Cilium documentation](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/), there are two common ways to install the CNI: the **Cilium CLI** or **Helm**. For this lab, I will use the CLI tool.

#### Install the Cilium CLI

The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. `clustermesh`, `Hubble`):
```bash
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
```
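
A quick sanity check that the CLI is installed and on the `PATH`:

```bash
cilium version
```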

#### Install Cilium

Install Cilium into the Kubernetes cluster pointed to by your current kubectl context:
```bash
cilium install --version 1.17.6
```

#### Validate the installation

To validate that Cilium has been properly installed, you can run:
```bash
cilium status --wait
```

Run the following command to validate that your cluster has proper network connectivity:
```bash
cilium connectivity test
```

Once installed, the master node should transition to **Ready** status.
## Join Additional Nodes
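
The join commands themselves come from the output of `kubeadm init` above; what follows is only their general shape, with placeholders standing in for the token, CA certificate hash, and certificate key printed for your own cluster:

```bash
# On each worker node (values are placeholders from your kubeadm init output)
sudo kubeadm join apex-master.lab.vezpi.me:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# On each additional control-plane node
sudo kubeadm join apex-master.lab.vezpi.me:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <certificate-key>
```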