Auto-update blog content from Obsidian: 2025-07-18 06:24:08

This commit is contained in: Gitea Actions · 2025-07-18 06:24:08 +00:00 · commit d91f1996a0 (parent 7e33d65bf6) · 2 changed files with 121 additions and 27 deletions


@@ -187,33 +187,88 @@ Once all the nodes are prepared, we can initialize the **control plane**
### Initialization
Run the following command to bootstrap the cluster:
```bash
sudo kubeadm init \
--control-plane-endpoint "apex-master.lab.vezpi.me:6443" \
--control-plane-endpoint "k8s_lab.lab.vezpi.me:6443" \
--upload-certs \
--pod-network-cidr=10.10.0.0/16
```
**Explanation**:
- `--control-plane-endpoint`: DNS name for your control plane.
- `--upload-certs`: Upload the certificates that should be shared across all masters of the cluster.
- `--pod-network-cidr`: Subnet for the CNI.
The DNS name `k8s_lab.lab.vezpi.me` is handled in my homelab by **Unbound DNS**; it resolves to my **OPNsense** interface, where a **HAProxy** service listens on port 6443 and load balances across the 3 control plane nodes.
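Before running `kubeadm init`, a quick sanity check of that endpoint can save some debugging. This is my own sketch, not part of the original steps, and it assumes `dig` and `nc` are available on the node:
```bash
# The name should resolve through Unbound DNS to the OPNsense/HAProxy address
dig +short k8s_lab.lab.vezpi.me

# HAProxy should accept TCP connections on the Kubernetes API port
nc -zv k8s_lab.lab.vezpi.me 6443
```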
This step will:
- Initialize the `etcd` database and the control plane components.
- Set up RBAC and bootstrap tokens.
- Print two important `kubeadm join` commands: one for the **workers**, the other for the **additional masters**.
You will also see a message explaining how to configure `kubectl` access.
### Configure `kubectl`
If you prefer to manage your cluster from the master node, you can simply copy-paste these commands from the `kubeadm init` output:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
If you prefer to control the cluster from somewhere else, in my case from my bastion LXC:
```bash
mkdir -p $HOME/.kube
scp <master node>:/etc/kubernetes/admin.conf $HOME/.kube/config
chmod 600 ~/.kube/config
```
Verify your access:
```bash
kubectl get nodes
```
You should see only the first master listed (in `NotReady` state until the CNI is deployed).
### Install the CNI Plugin Cilium
From the [Cilium documentation](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/), there are two main ways to install the CNI: using the **Cilium CLI** or **Helm**. For this lab I will use the CLI tool.
#### Install the Cilium CLI
The Cilium CLI can be used to install Cilium, inspect the state of the Cilium installation, and enable/disable various features (e.g. `clustermesh`, `Hubble`):
```bash
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
```
#### Install Cilium
Install Cilium into the Kubernetes cluster pointed to by your current `kubectl` context:
```bash
cilium install --version 1.17.6
```
#### Validate the Installation
To validate that Cilium has been properly installed:
```bash
cilium status --wait
```
To verify that your cluster has proper network connectivity:
```bash
cilium connectivity test
```
Once installed, the master node should transition to `Ready` status.
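Rather than re-running `kubectl get nodes` by hand, you can block until the node reports `Ready`; a small sketch of my own, with an arbitrary timeout:
```bash
# Wait up to 5 minutes for every registered node (just the first master for now) to become Ready
kubectl wait --for=condition=Ready node --all --timeout=300s
```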
---
## Join Additional Nodes


@@ -186,18 +186,21 @@ sudo apt-mark hold kubectl
Once all nodes are prepared, it's time to initialize the Kubernetes control plane on the **first master node**.
### Initialization
Run the following command to bootstrap the cluster:
```bash
sudo kubeadm init \
--control-plane-endpoint "apex-master.lab.vezpi.me:6443" \
--control-plane-endpoint "k8s_lab.lab.vezpi.me:6443" \
--upload-certs \
--pod-network-cidr=10.10.0.0/16
```
**Explanation**:
- `--control-plane-endpoint`: DNS name for your control plane.
- `--upload-certs`: Upload the certificates that should be shared across all masters of the cluster.
- `--pod-network-cidr`: Subnet for the CNI.
The DNS name `k8s_lab.lab.vezpi.me` is handled in my homelab by **Unbound DNS**; it resolves to my **OPNsense** interface, where a **HAProxy** service listens on port 6443 and load balances across the 3 control plane nodes.
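As a quick sanity check of that chain (Unbound DNS, then HAProxy on OPNsense), you can verify the name resolves and the API port answers before bootstrapping; my own addition, assuming `dig` and `nc` are installed:
```bash
# The name should resolve through Unbound DNS to the OPNsense/HAProxy address
dig +short k8s_lab.lab.vezpi.me

# HAProxy should accept TCP connections on the Kubernetes API port
nc -zv k8s_lab.lab.vezpi.me 6443
```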
This step will:
- Initialize the `etcd` database and control plane components.
@@ -227,9 +230,9 @@ Verify your access:
```bash
kubectl get nodes
```
You should see only the first master listed (in `NotReady` state until the CNI is deployed).
### Install the CNI Plugin Cilium
From the [Cilium documentation](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/), there are two common ways to install the CNI: using the **Cilium CLI** or **Helm**. For this lab I will use the CLI tool.
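For reference, the Helm route would look roughly like the following (not used in this lab; the chart and repository names are the ones published by Cilium, but check their documentation for the values matching your setup):
```bash
# Alternative to the Cilium CLI: install the Cilium chart with Helm
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.17.6 --namespace kube-system
```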
@@ -246,36 +249,72 @@ rm cilium-linux-amd64.tar.gz{,.sha256sum}
#### Install Cilium
Install Cilium into the Kubernetes cluster pointed to by your current `kubectl` context:
```bash
cilium install --version 1.17.6
```
#### Validate the Installation
To validate that Cilium has been properly installed:
```bash
cilium status --wait
```
To validate that your cluster has proper network connectivity:
```bash
cilium connectivity test
```
Once installed, the master node should transition to `Ready` status.
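If you prefer not to poll `kubectl get nodes` manually, something like this works; my own addition, with an arbitrary timeout:
```bash
# Wait up to 5 minutes for every registered node (just the first master for now) to become Ready
kubectl wait --for=condition=Ready node --all --timeout=300s
```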
---
## Join Additional Nodes
### Join Masters
After initializing the first control plane node, you can now join the remaining nodes to the cluster.
There are two types of join commands:
- One for joining **control-plane (master) nodes**
- One for joining **worker nodes**
These commands were displayed at the end of the `kubeadm init` output. If you didn't copy them, you can regenerate them.
⚠️ The certificates and the decryption key expire after two hours.
### Additional Masters
#### Generate Certificates
If you need to re-upload the certificates and generate a new decryption key, use the following command on a control plane node that is already joined to the cluster:
```bash
# Re-upload the control-plane certificates and print a new decryption key
sudo kubeadm init phase upload-certs --upload-certs
# Or generate a certificate key beforehand, to pass via --certificate-key
kubeadm certs certificate-key
```
#### Generate Token
Paired with the certificate key, you'll need a new token. The following command prints the whole control-plane join command:
```bash
sudo kubeadm token create --print-join-command --certificate-key <certificate-key>
```
#### Join the Control Plane
You can now join any number of control-plane nodes by running the command printed above, or the one given by the `kubeadm init` command:
```bash
sudo kubeadm join <control-plane-endpoint> --token <token> --discovery-token-ca-cert-hash <discovery-token-ca-cert-hash> --control-plane --certificate-key <certificate-key>
```
Run this command on the second and third master nodes.
### Join Workers
Here again, if you missed the output of `kubeadm init`, you can generate a new token and the full `join` command:
```bash
sudo kubeadm token create --print-join-command
```
Then you can run this join command on each worker node and check that every node shows up in the cluster.
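As a final check once all masters and workers have joined, something along these lines confirms that every node registered and eventually reaches `Ready`; a sketch of my own:
```bash
# All control-plane and worker nodes should be listed, Ready once Cilium is running on them
kubectl get nodes -o wide
```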