MetalLB is a load balancer for bare-metal Kubernetes. This example explains how to configure MetalLB with the OpenBGPD package in pfSense.
- A bare-metal Kubernetes cluster. Thanks to Dan Manners’ SimpleSK8S project, this is becoming very easy.
- Install OpenBGPD in pfSense
I’m setting up 3 Kubernetes clusters. There will be services within each cluster that need to be accessible from the internet.
On the Settings tab in pfSense for the OpenBGPD service, set the following:
- Autonomous System (AS) Number = 64512
- Holdtime = leave blank
- fib-update = yes
- Listen on IP = 10.10.100.2
- Router = 10.10.100.2
- CARP Status UP = none
- Networks = 10.10.100.0/22
The Listen on IP and Router are set to the VIP configured under Firewall | Virtual IPs. The network is the VLAN these clusters are on.
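Under the hood, these settings correspond to an OpenBGPD `bgpd.conf` roughly like the sketch below; the file pfSense actually generates may differ in detail:

```
# sketch of the bgpd.conf produced by the pfSense settings above
AS 64512
router-id 10.10.100.2
listen on 10.10.100.2
fib-update yes
network 10.10.100.0/22
```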
Further research: Can I move one of these clusters to another VLAN?
There should be one group, in OpenBGPD, for each Kubernetes cluster.
Only worker nodes should be listed: each worker node, within each cluster, is added as a neighbor in its cluster's group.
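As a sketch, the group for the jx cluster would look something like this in `bgpd.conf` terms; the remote AS (64513) and the worker IPs are assumptions for illustration, not values taken from pfSense:

```
group "jx" {
    remote-as 64513            # AS assigned to the jx cluster (assumed)
    neighbor 10.10.100.11 {    # jx worker node (example IP)
        descr "jx-worker-1"
    }
    neighbor 10.10.100.12 {    # second jx worker node (example IP)
        descr "jx-worker-2"
    }
}
```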
Each Kubernetes cluster will have MetalLB installed. MetalLB requires a configmap. For the jx cluster, this configmap is defined as below.
These details are mapped as below:
- peer-address = Listen on IP
- peer-asn = Autonomous System (AS) Number
- my-asn = Remote AS (defined for the group)
- name: default (can be almost anything)
- protocol: bgp
- addresses: 10.10.100.50 - 10.10.100.100
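Putting that mapping together, the jx configmap would look roughly like the sketch below, using MetalLB's legacy configmap format; the `my-asn` value of 64513 is an assumed Remote AS for the jx group:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.10.100.2   # pfSense Listen on IP
      peer-asn: 64512             # pfSense AS number
      my-asn: 64513               # Remote AS defined for the jx group (assumed)
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 10.10.100.50-10.10.100.100
```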
For the 3 Kubernetes clusters, the address pools are as follows:
| Cluster | Address pool |
|---------|------------------------------|
| jx      | 10.10.100.50 - 10.10.100.100 |
| dev     | 10.10.101.50 - 10.10.101.100 |
| prd     | 10.10.102.50 - 10.10.102.100 |
When a service is created in Kubernetes, MetalLB will use the corresponding address pool to issue an external IP for each service, starting with 10.10.10x.50 and incrementing for each service. An HAProxy backend needs to be created for each external IP issued. It would be nice to automate the backend creation, but that would take some doing.
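For example, a LoadBalancer service like the hypothetical one below, created in the jx cluster, would receive the first free address from the jx pool:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical service name
spec:
  type: LoadBalancer    # tells MetalLB to assign an external IP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

`kubectl get svc my-app` would then show an EXTERNAL-IP from the 10.10.100.50 - 10.10.100.100 range, and that address becomes the target for an HAProxy backend.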
Looking at the routes within pfSense, you can see what OpenBGPD is doing.
Each cluster has 2 services defined, and OpenBGPD has created routes between the address pools and the physical nodes in each group.
Test: if the 10.10.100.11 node goes down, does this route change dynamically?