Auto-update blog content from Obsidian: 2025-08-04 19:23:43
title: Template
description:
date:
draft: true
tags:
- kubernetes
- helm
- bgp
- opnsense
- cilium
- nginx-ingress-controller
- cert-manager
categories:
---
### BGP Setup

By default, BGP is disabled on both my OPNsense router and in Cilium. Let’s enable it on both ends.

#### On OPNsense

According to the [official OPNsense documentation](https://docs.opnsense.org/manual/dynamic_routing.html#bgp-section), enabling BGP requires installing a plugin.

|
|
||||||
|
|
||||||
First, enable the plugin in the `Routing` > `General`:
|
Head to `System` > `Firmware` > `Plugins` and install the `os-frr` plugin:
|
||||||

|

|
||||||
|
Install `os-frr` plugin in OPNsense
|
||||||
|
|

Once installed, enable the plugin under `Routing` > `General`:

Enable routing in OPNsense

Then navigate to the `BGP` section. In the **General** tab:
- Tick the box to enable BGP.
- Set your **BGP ASN**. I used `64512`, the first private ASN from the reserved range (see [ASN table](https://en.wikipedia.org/wiki/Autonomous_system_\(Internet\)#ASN_Table)):

General BGP configuration in OPNsense

Now create your BGP neighbors. I’m only peering with my **worker nodes** (since only they run workloads). For each neighbor:
- Set the node’s IP in `Peer-IP`
- Use `64513` as the **Remote AS** (Cilium’s ASN)
- Set `Update-Source Interface` to `Lab`
- Tick `Next-Hop-Self`:

BGP neighbor configuration in OPNsense

Here’s how my neighbor list looks once complete:

BGP neighbor list

Don’t forget to create a firewall rule allowing BGP (port `179/TCP`) from the **Lab** VLAN to the firewall:

Allow TCP/179 from Lab to OPNsense

#### In Cilium

I already had Cilium installed and couldn’t find a way to enable BGP with the CLI, so I simply reinstalled it with the BGP option:

```bash
cilium uninstall
cilium install --set bgpControlPlane.enabled=true
```

Next, I want only **worker nodes** to establish BGP peering. I add a label to each one for the future `nodeSelector`:

```bash
kubectl label node apex-worker node-role.kubernetes.io/worker=""
kubectl label node vertex-worker node-role.kubernetes.io/worker=""
kubectl label node zenith-worker node-role.kubernetes.io/worker=""
```
```plaintext
NAME            STATUS   ROLES           AGE    VERSION
apex-master     Ready    control-plane   5d4h   v1.32.7
apex-worker     Ready    worker          5d1h   v1.32.7
vertex-master   Ready    control-plane   5d1h   v1.32.7
vertex-worker   Ready    worker          5d1h   v1.32.7
zenith-master   Ready    control-plane   5d1h   v1.32.7
zenith-worker   Ready    worker          5d1h   v1.32.7
```
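The three label commands above can also be written as a loop; here is a small sketch (the `echo` prints each command instead of running it, so you can review before applying; drop the `echo` to actually label the nodes):

```shell
# Sketch: label every worker node in one loop.
# `echo` makes this a dry run; remove it to apply the labels for real.
for node in apex-worker vertex-worker zenith-worker; do
  echo kubectl label node "$node" node-role.kubernetes.io/worker=""
done
```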

For the entire BGP configuration, I need:
- **CiliumBGPClusterConfig**: BGP settings for the Cilium cluster, including its local ASN and its peer.
- **CiliumBGPPeerConfig**: Sets BGP timers, graceful restart, and route advertisement settings.
- **CiliumBGPAdvertisement**: Defines which Kubernetes services should be advertised via BGP.
- **CiliumLoadBalancerIPPool**: Configures the range of IPs assigned to Kubernetes LoadBalancer services.

```yaml
---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: bgp-cluster
spec:
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: "" # Only for worker nodes
  bgpInstances:
  - name: "cilium-bgp-cluster"
    localASN: 64513 # Cilium ASN
    peers:
    - name: "pfSense-peer"
      peerASN: 64512 # OPNsense ASN
      peerAddress: 192.168.66.1 # OPNsense IP
      peerConfigRef:
        name: "bgp-peer"
---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: bgp-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
  - afi: ipv4
    safi: unicast
    advertisements:
      matchLabels:
        advertise: "bgp"
---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisement
  labels:
    advertise: bgp
spec:
  advertisements:
  - advertisementType: "Service"
    service:
      addresses:
      - LoadBalancerIP
    selector:
      matchExpressions:
      - { key: somekey, operator: NotIn, values: [ never-used-value ] }
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "dmz"
spec:
  blocks:
  - start: "192.168.55.20" # LB Range Start IP
    stop: "192.168.55.250" # LB Range End IP
```

Apply it:

```bash
kubectl apply -f bgp.yaml

ciliumbgpclusterconfig.cilium.io/bgp-cluster created
ciliumbgppeerconfig.cilium.io/bgp-peer created
ciliumbgpadvertisement.cilium.io/bgp-advertisement created
ciliumloadbalancerippool.cilium.io/dmz created
```

If everything works, you should see the BGP sessions **established** with your workers:

```bash
cilium bgp peers

Node            Local AS   Peer AS   Peer Address   Session State   Uptime   Family         Received   Advertised
apex-worker     64513      64512     192.168.66.1   established     6m30s    ipv4/unicast   1          2
vertex-worker   64513      64512     192.168.66.1   established     7m9s     ipv4/unicast   1          2
zenith-worker   64513      64512     192.168.66.1   established     6m13s    ipv4/unicast   1          2
```
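If you want to script a health check on top of this, you can count the `established` sessions in that output. A minimal sketch, here fed from a heredoc copy of the table above for illustration; in practice, pipe `cilium bgp peers` into the same `awk` filter:

```shell
# Count sessions whose Session State column (5th field) is "established".
# The heredoc stands in for live `cilium bgp peers` output.
awk 'NR > 1 && $5 == "established"' <<'EOF' | wc -l
Node            Local AS   Peer AS   Peer Address   Session State   Uptime   Family         Received   Advertised
apex-worker     64513      64512     192.168.66.1   established     6m30s    ipv4/unicast   1          2
vertex-worker   64513      64512     192.168.66.1   established     7m9s     ipv4/unicast   1          2
zenith-worker   64513      64512     192.168.66.1   established     6m13s    ipv4/unicast   1          2
EOF
```

With all three workers peered, this prints `3`.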

### Deploying a `LoadBalancer` Service with BGP

Let’s quickly validate that the setup works by deploying a test `Deployment` and `LoadBalancer` `Service`:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    svc: test-lb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      svc: test-lb
  template:
    metadata:
      labels:
        svc: test-lb
    spec:
      containers:
      - name: web
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
```

Check if it gets an external IP:

```bash
kubectl get services test-lb

NAME      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
test-lb   LoadBalancer   10.100.167.198   192.168.55.20   80:31350/TCP   169m
```

The service got the first IP from our defined pool: `192.168.55.20`.

Now from any device on the LAN, try to reach that IP on port 80:



✅ Our pod is reachable through its BGP-routed `LoadBalancer` IP, first step successful!
---
## Kubernetes Ingress

We managed to expose a pod externally using a `LoadBalancer` service and a BGP-assigned IP address. This approach works great for testing, but it doesn’t scale well.

Imagine having 10, 20, or 50 different services: would I really want to allocate 50 IP addresses and clutter my firewall and routing tables with 50 BGP entries? Definitely not.

That’s where **Ingress** kicks in.

### What Is a Kubernetes Ingress?

A **Kubernetes Ingress** is an API object that manages **external access to services** in a cluster, typically HTTP and HTTPS, all through a single entry point.

Instead of assigning one IP per service, you define routing rules based on:
- **Hostnames** (`app1.vezpi.me`, `blog.vezpi.me`, etc.)
- **Paths** (`/grafana`, `/metrics`, etc.)

With Ingress, I can expose multiple services over the same IP and port (usually 443 for HTTPS), and Kubernetes will know how to route the request to the right backend service.

Here is an example of a simple `Ingress`, routing traffic for `test.vezpi.me` to the `test-lb` service on port 80:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: test.vezpi.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-lb
            port:
              number: 80
```
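Note that this minimal example relies on a default ingress class being configured in the cluster. If you run several controllers, or no default is set, you would pin the class explicitly; a hypothetical fragment, assuming a class named `nginx`:

```yaml
spec:
  ingressClassName: nginx   # pin this Ingress to a specific controller
```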

### Ingress Controller

On its own, an Ingress is just a set of routing rules; it doesn’t actually handle traffic. To bring it to life, I need an **Ingress Controller**, which:
- Watches the Kubernetes API for `Ingress` resources.
- Opens HTTP(S) ports on a `LoadBalancer` or `NodePort` service.
- Routes traffic to the correct `Service` based on the `Ingress` rules.

Think of it as a reverse proxy (like NGINX or Traefik), but integrated with Kubernetes.

Since I’m looking for something simple, stable, well-maintained, and with a large community, I went with the **NGINX Ingress Controller**.

### Install NGINX Ingress Controller

I install it using Helm, setting `controller.ingressClassResource.default=true` to make `nginx` the default class for all my future Ingresses:

```bash
helm install ingress-nginx \
  --repo=https://kubernetes.github.io/ingress-nginx \
  --namespace=ingress-nginx \
  --create-namespace ingress-nginx \
  --set controller.ingressClassResource.default=true
```
```plaintext
NAME: ingress-nginx
LAST DEPLOYED: Wed Jul 23 15:44:47 2025
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
```

My NGINX Ingress Controller is now installed, and its service picked the 2nd IP in the load balancer range, `192.168.55.21`:

```bash
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   LoadBalancer   10.106.236.13   192.168.55.21   80:31195/TCP,443:30974/TCP   75s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
```

> 💡 I want to make sure my controller will always pick the same IP.

I will create 2 separate pools: one dedicated to the Ingress Controller with a single IP, and another for everything else.

```yaml
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "ingress-nginx"
spec:
  blocks:
  - cidr: "192.168.55.55/32" # Ingress Controller IP
  serviceSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/component: controller
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "default"
spec:
  blocks:
  - start: "192.168.55.100" # LB Start IP
    stop: "192.168.55.250" # LB Stop IP
  serviceSelector:
    matchExpressions:
    - key: app.kubernetes.io/name
      operator: NotIn
      values:
      - ingress-nginx
```

After replacing the previous pool with these two, my Ingress Controller got the desired IP `192.168.55.55`, and my `test-lb` service picked the first one in the new range, `192.168.55.100`, as expected:

```bash
NAMESPACE       NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
default         test-lb                    LoadBalancer   10.100.167.198   192.168.55.100   80:31350/TCP                 6h34m
ingress-nginx   ingress-nginx-controller   LoadBalancer   10.106.236.13    192.168.55.55    80:31195/TCP,443:30974/TCP   24m
```

### Associate a Service to an Ingress

Now let’s wire up a service to this controller.

We transform our `LoadBalancer` service into a standard `ClusterIP` one and add a minimal Ingress definition to expose my test pod over HTTP:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    svc: test-lb
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: test.vezpi.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-lb
            port:
              number: 80
```


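For this to work from a browser, `test.vezpi.me` has to resolve to the Ingress Controller IP. Until real DNS is in place, a local override is enough for testing; a sketch of an `/etc/hosts` entry, assuming the controller IP `192.168.55.55` from the pool above:

```plaintext
192.168.55.55   test.vezpi.me
```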
---
## Secure Connection with TLS

So far, my test service only answers over plain HTTP. To serve it over HTTPS, the Ingress needs a TLS certificate for its hostname, and I want that handled automatically.

### Cert-Manager