Now that the infrastructure is ready, let's move on to the next step: **manually setting up a Kubernetes cluster**.

In this post, I’ll walk through each step of installing a simple Kubernetes cluster, from preparing the nodes to deploying a basic application.

I will not rely on automation tools to configure the nodes for now, in order to better understand the steps involved in bootstrapping a Kubernetes cluster.

---

## What is Kubernetes

A Kubernetes cluster is made up of two main types of nodes: control plane nodes (masters) and workers.

In this post, we’ll manually set up a Kubernetes cluster with 3 control plane nodes (masters) and 3 workers. This structure reflects a highly available, production-like setup, even though the goal here is mainly to learn and understand how the components fit together.

The official documentation can be found [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/); I will use version **v1.32**.

---

## Prepare the Nodes

I will perform the following steps on all 6 VMs (masters and workers).

### Hostname

Each VM has a unique **hostname**, and all nodes must be able to **resolve** each other's names.

The hostname is set at VM creation with cloud-init, but for demonstration purposes, I'll set it manually:

```bash
sudo hostnamectl set-hostname <hostname>
```

On my infrastructure, the nodes resolve each other's hostnames through my DNS server for that domain (`lab.vezpi.me`). If you don't have a DNS server, you can hardcode the node IPs in each `/etc/hosts` file:

```bash
192.168.66.168 apex-worker
192.168.66.167 apex-master
192.168.66.166 zenith-master
192.168.66.170 vertex-worker
192.168.66.169 vertex-master
192.168.66.172 zenith-worker
```

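To make sure name resolution works, a quick check from any node (a verification sketch using the hostnames above):

```bash
# Each lookup should return the IP listed above
for node in apex-master zenith-master vertex-master apex-worker vertex-worker zenith-worker; do
  getent hosts "$node" || echo "cannot resolve $node"
done
```
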
### OS Updates

My VMs are running **Ubuntu 24.04.2 LTS**. In that case, cloud-init handles updates after provisioning, but let's make sure everything is up to date and install the packages needed to add the Kubernetes repository:

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https ca-certificates curl gpg
```

### Swap

The default behavior of the `kubelet` is to fail to start if **swap memory** is detected on a node. This means that swap should either be disabled or tolerated by the `kubelet`.

My VMs are not using swap, but here is how to disable it:

```bash
# Turn swap off immediately
sudo swapoff -a
# Comment out swap entries in /etc/fstab so it stays disabled after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```

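To confirm that no swap is left active (an optional check):

```bash
# No output means no swap device is in use
swapon --show
```
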
### Firewall

For a testing environment, I will just disable the local firewall (don't do that in production):

```bash
sudo systemctl disable --now ufw
```

For production, you want to allow the nodes to talk to each other on these ports:

#### Control plane

|Protocol|Direction|Port Range|Purpose|Used By|
|---|---|---|---|---|
|TCP|Inbound|6443|Kubernetes API server|All|
|TCP|Inbound|2379-2380|etcd server client API|kube-apiserver, etcd|
|TCP|Inbound|10250|Kubelet API|Self, Control plane|
|TCP|Inbound|10259|kube-scheduler|Self|
|TCP|Inbound|10257|kube-controller-manager|Self|

#### Worker

|Protocol|Direction|Port Range|Purpose|Used By|
|---|---|---|---|---|
|TCP|Inbound|10250|Kubelet API|Self, Control plane|
|TCP|Inbound|10256|kube-proxy|Self, Load balancers|
|TCP|Inbound|30000-32767|NodePort Services†|All|

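For illustration only (my assumption, not a step I apply here), keeping `ufw` enabled on a control plane node and opening those ports could look like this, with `192.168.66.0/24` as the node subnet:

```bash
# Allow cluster traffic from the node subnet on the control plane ports
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 6443       # Kubernetes API server
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 2379:2380  # etcd server client API
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 10250      # Kubelet API
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 10259      # kube-scheduler
sudo ufw allow proto tcp from 192.168.66.0/24 to any port 10257      # kube-controller-manager
```
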
### Kernel Modules and Settings
|
||||||
|
|
||||||
|
Kubernetes needs 2 kernel modules:
|
||||||
|
- **overlay**: for facilitating the layering of one filesystem on top of another
|
||||||
|
- **br_netfilter**: for enabling bridge network connections
|
||||||
|
|
||||||
|
Let's enable them:
|
||||||
|
```bash
|
||||||
|
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
|
||||||
|
overlay
|
||||||
|
br_netfilter
|
||||||
|
EOF
|
||||||
|
|
||||||
|
sudo modprobe overlay
|
||||||
|
sudo modprobe br_netfilter
|
||||||
|
```
|
||||||
|
|
||||||
Some network-related kernel settings are also needed:

```bash
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
```

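To verify that the modules are loaded and the settings applied (an optional check, in line with the kubeadm documentation):

```bash
# Both modules should appear in the output
lsmod | grep -E 'overlay|br_netfilter'

# All three values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```
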
### Container Runtime

You need to install a **container runtime** on each node in the cluster so that Pods can run there. I will use `containerd`:

```bash
sudo apt install -y containerd
```

Create the default configuration:

```bash
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
```

Enable the `systemd` cgroup driver, so that `containerd` matches the cgroup driver used by the `kubelet`:

```bash
sudo sed -i 's/^\(\s*SystemdCgroup\s*=\s*\)false/\1true/' /etc/containerd/config.toml
```

Restart the `containerd` service:

```bash
sudo systemctl restart containerd
```

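A quick sanity check that the runtime restarted cleanly and the cgroup change took effect (optional):

```bash
# Should print "active"
systemctl is-active containerd

# Should show: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml
```
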
### Installing kubeadm and kubelet

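A minimal sketch of this step, following the official kubeadm documentation for v1.32:

```bash
# Add the Kubernetes v1.32 apt repository
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the Kubernetes tools and pin their version
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
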
### Installing kubeadm on bastion

## Initialize the Cluster

### Running kubeadm init

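A minimal sketch of what this step could look like for an HA setup; the control plane endpoint below is a hypothetical DNS name that should point to a VIP or load balancer in front of the 3 masters:

```bash
# "k8s-api.lab.vezpi.me" is a hypothetical name for the API endpoint
sudo kubeadm init \
  --control-plane-endpoint "k8s-api.lab.vezpi.me:6443" \
  --upload-certs
```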