Auto-update blog content from Obsidian: 2025-07-16 10:34:32

Gitea Actions
2025-07-16 10:34:32 +00:00
parent a1725ecfbb
commit dc0c4ecfa6
4 changed files with 182 additions and 14 deletions

View File

@@ -72,7 +72,7 @@ resource "proxmox_virtual_environment_file" "cloud_config" {
   node_name = var.node_name # The Proxmox node where the file will be uploaded
   source_raw {
-    file_name = "vm.cloud-config.yaml" # The name of the snippet file
+    file_name = "${var.vm_name}.cloud-config.yaml" # The name of the snippet file
     data = <<-EOF
       #cloud-config
       hostname: ${var.vm_name}
@@ -737,12 +737,12 @@ Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
 Outputs:
 vm_ip = {
-  "apex-master" = "192.168.66.161"
-  "apex-worker" = "192.168.66.162"
-  "vertex-master" = "192.168.66.160"
-  "vertex-worker" = "192.168.66.164"
-  "zenith-master" = "192.168.66.165"
-  "zenith-worker" = "192.168.66.163"
+  "apex-master" = "192.168.66.167"
+  "apex-worker" = "192.168.66.168"
+  "vertex-master" = "192.168.66.169"
+  "vertex-worker" = "192.168.66.170"
+  "zenith-master" = "192.168.66.166"
+  "zenith-worker" = "192.168.66.172"
 }
```

View File

@@ -71,7 +71,7 @@ resource "proxmox_virtual_environment_file" "cloud_config" {
   node_name = var.node_name # The Proxmox node where the file will be uploaded
   source_raw {
-    file_name = "vm.cloud-config.yaml" # The name of the snippet file
+    file_name = "${var.vm_name}.cloud-config.yaml" # The name of the snippet file
     data = <<-EOF
       #cloud-config
       hostname: ${var.vm_name}
@@ -731,12 +731,12 @@ Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
 Outputs:
 vm_ip = {
-  "apex-master" = "192.168.66.161"
-  "apex-worker" = "192.168.66.162"
-  "vertex-master" = "192.168.66.160"
-  "vertex-worker" = "192.168.66.164"
-  "zenith-master" = "192.168.66.165"
-  "zenith-worker" = "192.168.66.163"
+  "apex-master" = "192.168.66.167"
+  "apex-worker" = "192.168.66.168"
+  "vertex-master" = "192.168.66.169"
+  "vertex-worker" = "192.168.66.170"
+  "zenith-master" = "192.168.66.166"
+  "zenith-worker" = "192.168.66.172"
 }
```

View File

@@ -0,0 +1,84 @@
---
slug: create-manual-kubernetes-cluster-kubeadm
title: Template
description:
date:
draft: true
tags:
categories:
---
## Intro
In this [previous article]({{< ref "post/7-terraform-create-proxmox-module" >}}), I explained how to deploy 6 VMs with **Terraform** on **Proxmox**, 3 master nodes and 3 worker nodes, based on a [cloud-init template]({{< ref "post/1-proxmox-cloud-init-vm-template" >}}).
Now that the infrastructure is ready, let's move on to the next step: **manually building a Kubernetes cluster** with `kubeadm`.
In this post, I'll walk through each step of installing a simple Kubernetes cluster, from preparing the nodes to deploying a simple application.
I won't rely on any automation tools for now, in order to better understand the steps involved in bootstrapping a Kubernetes cluster.
---
## What is Kubernetes
Kubernetes is an open-source platform that orchestrates containers across a set of machines. It manages the deployment, scaling, and health of containerized applications, letting you focus on your services rather than on the underlying infrastructure.
A Kubernetes cluster is made up of two types of nodes: control plane (master) nodes and workers. The control plane handles the overall management of the cluster: it makes scheduling decisions, monitors the state of the system, and reacts to events. The workers are where your applications actually run, inside containers managed by Kubernetes.
In this post, we will manually set up a Kubernetes cluster with 3 control plane nodes and 3 workers. This architecture reflects a highly available, production-like environment, even though the goal here is primarily educational.
---
## Prepare the Nodes
- OS-level updates and basic tools
- Disabling swap and firewall adjustments
- Installing container runtime (e.g., containerd)
- Installing kubeadm and kubelet
- Installing kubeadm on bastion
- Enabling required kernel modules and sysctl settings
## Initialize the Cluster
- Running kubeadm init
- Configuring kubectl on the bastion
- Installing the CNI plugin Cilium
## Join Additional Nodes
### Join Masters
- Creating the control-plane join command
- Syncing PKI and etcd certs
- Running kubeadm join on master 2 and 3
### Join Workers
- Generating and running the worker kubeadm join command
- Verifying node status
## Deploying a Sample Application
- Creating a simple Deployment and Service
- Exposing it via NodePort or LoadBalancer
- Verifying functionality
## Conclusion
- Summary of the steps
- When to use this manual method

View File

@@ -0,0 +1,84 @@
---
slug: create-manual-kubernetes-cluster-kubeadm
title: Template
description:
date:
draft: true
tags:
categories:
---
## Intro
In this [previous article]({{< ref "post/7-terraform-create-proxmox-module" >}}), I explained how to deploy 6 VMs using **Terraform** on **Proxmox**, 3 master and 3 worker nodes, based on a [cloud-init template]({{< ref "post/1-proxmox-cloud-init-vm-template" >}}).
Now that the infrastructure is ready, let's move on to the next step: **manually building a Kubernetes cluster** using `kubeadm`.
In this post, I'll walk through each step of installing a simple Kubernetes cluster, from preparing the nodes to deploying a simple application.
I won't rely on automation tools for now, in order to better understand the steps involved in bootstrapping a Kubernetes cluster.
---
## What is Kubernetes
Kubernetes is an open-source platform for orchestrating containers across a group of machines. It handles the deployment, scaling, and health of containerized applications, allowing you to focus on building your services rather than managing infrastructure details.
A Kubernetes cluster is made up of two main types of nodes: control plane (master) nodes and worker nodes. The control plane is responsible for the overall management of the cluster: it makes scheduling decisions, monitors the system, and responds to changes. The worker nodes are where your applications actually run, inside containers managed by Kubernetes.
In this post, well manually set up a Kubernetes cluster with 3 control plane nodes (masters) and 3 workers. This structure reflects a highly available and production-like setup, even though the goal here is mainly to learn and understand how the components fit together.
---
## Prepare the Nodes
- OS-level updates and basic tools
- Disabling swap and firewall adjustments
- Installing container runtime (e.g., containerd)
- Installing kubeadm and kubelet
- Installing kubeadm on bastion
- Enabling required kernel modules and sysctl settings
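The kernel module and sysctl prerequisites above can be sketched as two small config fragments. These are the standard kubeadm prerequisites (loading `overlay` and `br_netfilter`, letting iptables see bridged traffic, and enabling IP forwarding); the file names under `/etc/modules-load.d/` and `/etc/sysctl.d/` are the conventional ones:

```
# /etc/modules-load.d/k8s.conf — kernel modules to load at boot
overlay
br_netfilter

# /etc/sysctl.d/k8s.conf — let iptables see bridged traffic, allow forwarding
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```

Applying them without a reboot is a matter of `sudo modprobe overlay br_netfilter` followed by `sudo sysctl --system`.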
## Initialize the Cluster
- Running kubeadm init
- Configuring kubectl on the bastion
- Installing the CNI plugin Cilium
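For a multi-master setup, `kubeadm init` is usually driven by a small config file rather than flags alone. A minimal sketch, in which the control-plane endpoint (`k8s-api.lab.local`, a VIP or load balancer in front of the 3 masters), the Kubernetes version, and the pod CIDR are all assumptions to adapt:

```yaml
# kubeadm-config.yaml — minimal sketch, values are assumptions
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.30.0"
controlPlaneEndpoint: "k8s-api.lab.local:6443"  # hypothetical VIP/LB
networking:
  podSubnet: "10.10.0.0/16"  # must line up with the CNI (Cilium) configuration
```

Initialization on the first master would then look like `sudo kubeadm init --config kubeadm-config.yaml --upload-certs`, where `--upload-certs` stores the control-plane certificates in the cluster so the other masters can fetch them when joining.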
## Join Additional Nodes
### Join Masters
- Creating the control-plane join command
- Syncing PKI and etcd certs
- Running kubeadm join on master 2 and 3
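The control-plane join command printed by `kubeadm init` has roughly this shape. Here `k8s-api.lab.local:6443` is a hypothetical control-plane endpoint, and the `<...>` placeholders (token, CA cert hash, certificate key) are taken from the first master's output:

```bash
# Run on masters 2 and 3; placeholders come from the first master's kubeadm output
sudo kubeadm join k8s-api.lab.local:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <cert-key>
```

The `--certificate-key` is only printed when `--upload-certs` was used at init time; without it, the PKI and etcd certificates have to be copied to each additional master by hand.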
### Join Workers
- Generating and running the worker kubeadm join command
- Verifying node status
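The worker join is the same command without the control-plane flags (again, `k8s-api.lab.local:6443` is a hypothetical endpoint and the `<...>` values are placeholders; `kubeadm token create --print-join-command` on a master regenerates it if the original token expired):

```bash
# Run on each worker
sudo kubeadm join k8s-api.lab.local:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# From the bastion: all 6 nodes should eventually report Ready
kubectl get nodes
```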
## Deploying a Sample Application
- Creating a simple Deployment and Service
- Exposing it via NodePort or LoadBalancer
- Verifying functionality
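The sample application can be sketched as a single manifest: a small Deployment plus a NodePort Service in front of it. The app name and the choice of `nginx` as the image are illustrative assumptions:

```yaml
# sample-app.yaml — minimal sketch: an nginx Deployment exposed via NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: web
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: NodePort
  selector:
    app: sample-app
  ports:
    - port: 80
      targetPort: 80
```

After `kubectl apply -f sample-app.yaml`, `kubectl get svc sample-app` shows the NodePort the cluster assigned, and curling any node's IP on that port should return the nginx welcome page.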
## Conclusion
- Summary of the steps
- When to use this manual method