From 372f7aa8ae39e88a6e1bc3df4680846b39719396 Mon Sep 17 00:00:00 2001
From: Gitea Actions
Date: Wed, 16 Jul 2025 15:05:47 +0000
Subject: [PATCH] Auto-update blog content from Obsidian: 2025-07-16 15:05:47

---
 ...te-manual-kubernetes-cluster-kubeadm.fr.md |   4 +-
 ...reate-manual-kubernetes-cluster-kubeadm.md | 123 +++++++++++++++++-
 2 files changed, 121 insertions(+), 6 deletions(-)

diff --git a/content/post/8-create-manual-kubernetes-cluster-kubeadm.fr.md b/content/post/8-create-manual-kubernetes-cluster-kubeadm.fr.md
index c005648..ac46666 100644
--- a/content/post/8-create-manual-kubernetes-cluster-kubeadm.fr.md
+++ b/content/post/8-create-manual-kubernetes-cluster-kubeadm.fr.md
@@ -16,7 +16,7 @@ Maintenant que l'infrastructure est prête, passons à l'étape suivante : **cr
 
 Dans cet article, je vais détailler chaque étape de l'installation d’un cluster Kubernetes simple, depuis la préparation des nœuds jusqu'au déploiement d'une application simple.
 
-Je n'utiliserai aucun outil d'automatisation pour le moment, afin de mieux comprendre les étapes impliquées dans le bootstrap d’un cluster Kubernetes.
+Je n'utiliserai pas d'outil d'automatisation pour configurer les nœuds pour le moment, afin de mieux comprendre les étapes impliquées dans le bootstrap d’un cluster Kubernetes.
 
 ---
 ## Qu'est ce que Kubernetes
@@ -27,6 +27,8 @@ Un cluster Kubernetes est composé de deux types de nœuds : les nœuds control
 
 Dans cet article, nous allons mettre en place manuellement un cluster Kubernetes avec 3 nœuds control plane et 3 workers. Cette architecture reflète un environnement hautement disponible et proche de la production, même si l’objectif ici est avant tout pédagogique.
 
+La documentation officielle se trouve [ici](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/), je vais utiliser la version **v1.32**.
+
 ---
 ## Prepare the Nodes
diff --git a/content/post/8-create-manual-kubernetes-cluster-kubeadm.md b/content/post/8-create-manual-kubernetes-cluster-kubeadm.md
index 4e3aaad..0f952bb 100644
--- a/content/post/8-create-manual-kubernetes-cluster-kubeadm.md
+++ b/content/post/8-create-manual-kubernetes-cluster-kubeadm.md
@@ -16,7 +16,7 @@ Now that the infrastructure is ready, let’s move on to the next step: **manual
 
 In this post, I’ll walk through each step of the installation process of a simple Kubernetes cluster, from preparing the nodes to deploying a simple application.
 
-I will not rely on automation tools for now, to better understand what are the steps involved in a Kubernetes cluster bootstrapping.
+I will not rely on automation tools to configure the nodes for now, to better understand the steps involved in bootstrapping a Kubernetes cluster.
 
 ---
 ## What is Kubernetes
@@ -27,21 +27,134 @@ A Kubernetes cluster is made up of two main types of nodes: control plane (maste
 
 In this post, we’ll manually set up a Kubernetes cluster with 3 control plane nodes (masters) and 3 workers. This structure reflects a highly available and production-like setup, even though the goal here is mainly to learn and understand how the components fit together.
 
+The official documentation can be found [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/); I will use version **v1.32**.
+
 ---
 ## Prepare the Nodes
- OS-level updates and basic tools
+I will perform the following steps on all 6 VMs (masters and workers).
- Disabling swap and firewall adjustments
+
+### Hostname
- Installing container runtime (e.g., containerd)
+
+Each VM has a unique **hostname** and all nodes must be able to **resolve** each other.
+
+The hostname is set upon VM creation with cloud-init.
+But for demonstration purposes, here's how to set it manually:
+```bash
+sudo hostnamectl set-hostname <node-name>
+```
+
+On my infrastructure, the nodes resolve each other's hostnames through my DNS server for that domain (`lab.vezpi.me`). In case you don't have a DNS server, you can hardcode the node IPs in each `/etc/hosts` file:
+```bash
+192.168.66.168 apex-worker
+192.168.66.167 apex-master
+192.168.66.166 zenith-master
+192.168.66.170 vertex-worker
+192.168.66.169 vertex-master
+192.168.66.172 zenith-worker
+```
+
+### OS Updates
+
+My VMs are running **Ubuntu 24.04.2 LTS**. Cloud-init already handles updates after provisioning in that case, but let's make sure everything is up to date and install the packages needed to add the Kubernetes repository:
+```bash
+sudo apt update && sudo apt upgrade -y
+sudo apt install -y apt-transport-https ca-certificates curl gpg
+```
+
+### Swap
+
+The default behavior of a `kubelet` is to fail to start if **swap memory** is detected on a node. This means that swap should either be disabled or tolerated by `kubelet`.
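+A quick way to verify whether swap is currently active on a node:
+```bash
+# List active swap devices; no output means swap is already off
+swapon --show
+
+# The "Swap:" line should read 0B everywhere once swap is disabled
+free -h
+```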
+
+My VMs are not using swap, but here's how to disable it:
+```bash
+sudo swapoff -a
+sudo sed -i '/ swap / s/^/#/' /etc/fstab
+```
+
+### Firewall
+
+For a testing environment, I will just disable the local firewall (don't do that in production):
+```bash
+sudo systemctl disable --now ufw
+```
+
+For production, you want to allow the nodes to talk to each other on these ports:
+#### Control plane
+
+|Protocol|Direction|Port Range|Purpose|Used By|
+|---|---|---|---|---|
+|TCP|Inbound|6443|Kubernetes API server|All|
+|TCP|Inbound|2379-2380|etcd server client API|kube-apiserver, etcd|
+|TCP|Inbound|10250|Kubelet API|Self, Control plane|
+|TCP|Inbound|10259|kube-scheduler|Self|
+|TCP|Inbound|10257|kube-controller-manager|Self|
+
+#### Worker
+
+|Protocol|Direction|Port Range|Purpose|Used By|
+|---|---|---|---|---|
+|TCP|Inbound|10250|Kubelet API|Self, Control plane|
+|TCP|Inbound|10256|kube-proxy|Self, Load balancers|
+|TCP|Inbound|30000-32767|NodePort Services†|All|
+
+† Default port range for NodePort Services.
+
+### Kernel Modules and Settings
+
+Kubernetes needs 2 kernel modules:
+- **overlay**: for facilitating the layering of one filesystem on top of another
+- **br_netfilter**: for enabling bridge network connections
+
+Let's enable them:
+```bash
+cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
+overlay
+br_netfilter
+EOF
+sudo modprobe overlay
+sudo modprobe br_netfilter
+```
+
+Some kernel settings are also required, to let `iptables` see bridged traffic and to enable IP forwarding:
+```bash
+cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
+net.bridge.bridge-nf-call-iptables  = 1
+net.bridge.bridge-nf-call-ip6tables = 1
+net.ipv4.ip_forward                 = 1
+EOF
+sudo sysctl --system
+```
+
+### Container Runtime
+
+Install `containerd` and generate its default configuration:
+```bash
+sudo apt install -y containerd
+containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
+```
+
+Enable the `systemd` cgroup driver:
+```bash
+sudo sed -i 's/^\(\s*SystemdCgroup\s*=\s*\)false/\1true/' /etc/containerd/config.toml
+```
+
+Restart the `containerd` service:
+```bash
+sudo systemctl restart containerd
+```
+
+
 Installing kubeadm and kubelet
 Installing kubeadm on bastion
- Enabling required kernel modules and sysctl settings
+
 ## Initialize the Cluster
 Running kubeadm init
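The `kubeadm init` step on the first control plane node could look roughly like this — a minimal sketch, where the `k8s-api.lab.vezpi.me` endpoint is a hypothetical DNS name for the API server (typically a VIP or load balancer in front of the 3 masters) and the pod CIDR must match the CNI plugin you plan to install:
```bash
# Hypothetical endpoint and pod CIDR -- adjust both to your own environment
sudo kubeadm init \
  --control-plane-endpoint "k8s-api.lab.vezpi.me:6443" \
  --upload-certs \
  --pod-network-cidr "10.244.0.0/16"
```
The `--upload-certs` flag stores the control plane certificates as a Secret in the cluster, so the other masters can retrieve them when joining instead of having the files copied over manually.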