Auto-update blog content from Obsidian: 2025-08-03 18:22:37
## Intro
After building my own Kubernetes cluster in my homelab using `kubeadm` in [that post]({{< ref "post/8-create-manual-kubernetes-cluster-kubeadm" >}}), my next challenge is to expose a simple pod externally, reachable with a URL and secured with a TLS certificate verified by Let's Encrypt.
To achieve this, I needed to configure several components:
- **Service**: Exposes the pod inside the cluster and provides a stable access point.
- **Ingress**: Defines routing rules to expose HTTP(S) services externally.
- **Ingress Controller**: Watches Ingress resources and handles the actual traffic routing.
- **TLS Certificates**: Secure the traffic with HTTPS, using certificates from Let's Encrypt.

This post will guide you through each step, to show how external access works in Kubernetes in a homelab environment.
Let’s dive in.
---
## Helm
To install the external components needed in this setup (like the Ingress controller or cert-manager), I’ll use **Helm**, the de facto package manager for Kubernetes.
### Why Helm
Helm simplifies the deployment and management of Kubernetes applications. Instead of writing and maintaining large YAML manifests, Helm lets you install applications with a single command, using versioned and configurable charts.
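
As an illustration, the repository URL, chart and release names below are hypothetical, but deploying an application with Helm typically boils down to two steps:

```bash
# Add a chart repository and refresh the local chart index
helm repo add example https://charts.example.com
helm repo update

# Install (or upgrade, if it already exists) a release from that chart,
# in its own namespace, overriding default values as needed
helm upgrade --install my-app example/my-app \
  --namespace my-app --create-namespace
```

Upgrades and rollbacks then reuse the same release name, which is a lot easier to maintain than hand-edited YAML manifests.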
### Install Helm
I installed Helm on my LXC bastion host, which already has access to the Kubernetes cluster:
```bash
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt update
sudo apt install helm
```
---
## Kubernetes Services
Before we can expose a pod externally, we need a way to make it reachable inside the cluster. That’s where Kubernetes Services come in.
### What is a Kubernetes Service
A Service provides a stable, abstracted network endpoint for a set of pods. This abstraction ensures that even if the pod’s IP changes (for example, when it gets restarted), the Service IP remains constant.
### Different Services
There are several types of Kubernetes Services, each serving a different purpose:
#### ClusterIP
This is the default type. It exposes the Service on a cluster-internal IP. It is only accessible from within the cluster. Use this when your application does not need to be accessed externally.
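
As a sketch, here is a minimal ClusterIP Service for a hypothetical `whoami` app (the name, labels and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami          # hypothetical app name
spec:
  type: ClusterIP       # the default, so this line can be omitted
  selector:
    app: whoami         # must match the pod's labels
  ports:
    - port: 80          # port exposed by the Service inside the cluster
      targetPort: 80    # port the pod actually listens on
```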
#### NodePort
This type exposes the Service on a static port on each node’s IP. You can access the service from outside the cluster using `http://<NodeIP>:<NodePort>`. It’s simple to set up, great for testing.
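
A NodePort variant for a hypothetical `whoami` app could look like this (the names and the `nodePort` value are placeholder examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # must fall in the default 30000-32767 range
```

With this applied, `http://<NodeIP>:30080` reaches the pod from any machine on the LAN.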
#### LoadBalancer
This type provisions an external IP to access the Service. It usually relies on cloud provider integration, but in a homelab (or bare-metal setup), we can achieve the same effect using BGP.
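
A LoadBalancer Service, again for a hypothetical `whoami` app, is almost identical; only the type changes, and the external IP is assigned by the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: LoadBalancer    # an external IP gets allocated and advertised
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
```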
---
## Expose a `LoadBalancer` Service with BGP
Initially, I considered using **MetalLB** to expose service IPs to my home network. That’s what I used in the past when relying on my ISP box as the main router. But after reading this post, [Use Cilium BGP integration with OPNsense](https://devopstales.github.io/kubernetes/cilium-opnsense-bgp/), I realized I could achieve the same (or even better) using BGP with my **OPNsense** router and **Cilium**, my CNI.
### What Is BGP?
BGP (Border Gateway Protocol) is a routing protocol used to exchange network routes between systems. In the Kubernetes homelab context, BGP allows your Kubernetes nodes to advertise IPs directly to your network router or firewall. Your router then knows how to reach the IPs managed by your cluster.
So instead of MetalLB managing IP allocation and ARP replies, your nodes directly tell your router: “Hey, I own 192.168.1.240”.
### Legacy MetalLB Approach
Without BGP, MetalLB in Layer 2 mode works like this:
- Assigns a LoadBalancer IP (e.g., `192.168.1.240`) from a pool.
- One node responds to ARP for that IP on your LAN.
Yes, MetalLB can also work with BGP, but what if my CNI (Cilium) can handle it out of the box?
### BGP with Cilium
With Cilium + BGP, the CNI itself advertises the LoadBalancer IPs to the router, with no extra component to deploy and maintain.
### BGP Setup
The BGP setup has two sides: OPNsense, which accepts the routes advertised by the cluster, and Cilium on the worker nodes, which advertises the LoadBalancer IPs.
#### On OPNsense
Following the [OPNsense BGP documentation](https://docs.opnsense.org/manual/dynamic_routing.html#bgp-section), to enable BGP, I need to install a plugin. Go to `System` > `Firmware` > `Plugins` and install the `os-frr` plugin:

First, enable the plugin under `Routing` > `General`:

Then go to the `BGP` section and enable it in the `General` tab by ticking the box. Set the BGP AS Number; I use `64512`, the first number of the private AS (Autonomous System) range. You can find the ranges [here](https://en.wikipedia.org/wiki/Autonomous_system_(Internet)#ASN_Table):

Now create the neighbors. I add the 3 workers only, not the masters, as they won't run any workload. I set the node IP in the `Peer-IP` field, and use the same `Remote AS` for all the nodes: `64513`. I set the interface name, `Lab`, in `Update-Source Interface`, and finally tick the `Next-Hop-Self` box:

Finally, my neighbor list looks like this:

#### In Cilium
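
On the Cilium side, the BGP control plane has to be enabled (for instance with the `bgpControlPlane.enabled=true` Helm value), and the peering is then described with a `CiliumBGPPeeringPolicy` resource. Here is a sketch matching the AS numbers above; the node label and the router IP are placeholders for my actual values:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""   # placeholder: peer only from the workers
  virtualRouters:
    - localASN: 64513                      # the "Remote AS" configured on OPNsense
      exportPodCIDR: false                 # only advertise service IPs, not pod CIDRs
      neighbors:
        - peerAddress: "192.168.66.1/32"   # placeholder: the OPNsense router IP
          peerASN: 64512                   # the AS number set on OPNsense
      serviceSelector:                     # match-all trick: advertise every LoadBalancer
        matchExpressions:
          - { key: somekey, operator: NotIn, values: ["never-used-value"] }
```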
### Deploying a LoadBalancer with BGP
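
A minimal sketch of the remaining piece, assuming Cilium's LB-IPAM: a pool tells Cilium which IPs it may allocate to `type: LoadBalancer` Services, and those IPs are then advertised over BGP. The CIDR is a placeholder, and the exact schema depends on the Cilium version (older releases use `spec.cidrs` instead of `spec.blocks`):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:
    - cidr: "192.168.55.0/24"   # placeholder range for LoadBalancer IPs
```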