In previous articles we talked about installing a k3s cluster and using Longhorn for resilient storage.
But assuming you install a service on your cluster, how do you get traffic to that service? Let’s think about our options.
Say you’re hosting a website. You start all the needed pods in k8s. Commonly you might have a few: one for the frontend, one for the backend, one for a database like Postgres, maybe even one for Redis to cache data. Assuming communication between the different pods works correctly, how will you access your website from outside the k8s network?
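For reference, here’s a minimal sketch of what one of those in-cluster services might look like; the frontend name, namespace, labels, and ports are hypothetical placeholders:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: my-website
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080

With no type set, this defaults to ClusterIP, meaning it’s only reachable from inside the cluster, which is exactly the problem we need to solve.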
At first you might test it out with a port-forward from within the local network:
$ kubectl port-forward -n <namespace> svc/<service> 8080:80
Here <service> is the name of your service running in <namespace>, 80 is the port the service listens on, and 8080 is the port on your local machine where you want to expose it. Now you can open http://localhost:8080 on your local machine and access the website.
This is obviously only good for testing, as it’s only available while the port-forward is running and only from your local machine.
So now let’s think of real traffic, coming from outside your network, hitting your router, that you want to direct to the service running in k8s.
One of the types of k8s services is NodePort, which exposes a service on a fixed port on every node in the cluster. There are a couple of issues with this approach. One is the fact that you might want to host multiple services or websites. You can, of course, give each one its own node port. But if you want to expose them all to the outside world on the standard HTTP/HTTPS ports, your router can only forward each port to a single destination. That has a solution, though. K3s comes with an ingress controller called Traefik. An ingress controller is a service that directs traffic for a specific URL to a service running in the k8s cluster. So you could theoretically run Traefik as a NodePort service and port forward your traffic from the router to the Traefik ingress.
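For illustration, this is what a NodePort service could look like; the names are placeholders, and nodePort has to fall inside the cluster’s node port range (30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: my-website
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080

You could then forward port 80 on your router to <node-ip>:30080.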
However, there would still be a very important limitation. If you take that specific node down for maintenance, there is nothing left to forward traffic to, so you will have downtime even though other nodes could still serve that traffic. And if the node goes down on its own due to some hardware problem, you will have to manually intervene and, at a minimum, port forward from the router to another node.
Enter MetalLB
This is exactly why MetalLB exists. It’s, as the name suggests, a load-balancer meant for bare-metal clusters. But how does it work? I like simple explanations, so here goes:
- You define a pool of IPs in an IPAddressPool object. These IPs will be available for MetalLB to distribute to services of type LoadBalancer.
- When a service of type LoadBalancer is created, MetalLB assigns one of the unused IPs from the pool to the service.
- MetalLB chooses one node to “own” that IP.
- That node starts responding to ARP requests (from your LAN) for that specific IP saying “I have that IP”.
- From that point on, your router sends traffic for that specific IP to the selected node.
- The node routes traffic to the correct pod backing the service, using normal K8s networking.
- If the chosen node fails, MetalLB reassigns the IP to another node, and that new node starts answering ARP requests.
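To make this concrete, here is a sketch of a LoadBalancer service as MetalLB would see it; again the name, namespace, and selector are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: my-website
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080

Once MetalLB is configured (see below), creating this service is all it takes: the IP assignment and the ARP announcements happen automatically.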
Setup
First, install MetalLB itself. You can check the documentation (ain’t nobody got time for that), but TL;DR just run this command:
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
You can check the status of the pods in the metallb-system namespace, which should have been created for you.
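For example:

$ kubectl get pods -n metallb-system

You should see a controller pod plus one speaker pod per node, all Running.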
Create a file named ip-pool.yml with the following contents:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.68.80-192.168.68.99
Note that the IPs in the range you provide must be valid IPs for your network (so make sure to change them) and should not be assignable by your router’s DHCP server; otherwise they might get assigned to other devices on the network and you can run into conflicts.
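As a side note, the addresses list also accepts CIDR notation and multiple entries, so the following should describe the same pool as the range above; adjust it to your own network:

spec:
  addresses:
    - 192.168.68.80/28
    - 192.168.68.96-192.168.68.99

A /28 covers .80 through .95 here, and the extra range adds .96 through .99.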
Now apply the file to the cluster:
$ kubectl apply -f ip-pool.yml
Now create an advertisement.yml file with the following contents:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: metallb-advert
  namespace: metallb-system
And apply it:
$ kubectl apply -f advertisement.yml
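With an empty spec like this, the advertisement announces IPs from every pool, which is fine while you only have one. If you ever define multiple pools, you can scope it by listing pool names, along these lines:

spec:
  ipAddressPools:
    - metallb-ip-pool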
Check the cluster to see that the objects were properly created using your favorite client. I use Lens, which at the time of this writing is free for personal use.
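If you prefer the command line, this should list both objects:

$ kubectl get ipaddresspools,l2advertisements -n metallb-system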
Configure Traefik
Right now you have a working configuration. If you create a service of type LoadBalancer in your cluster, it will get an IP from the pool and you will be able to access it from within your network using that IP.
You can check the service’s IP by running:
$ kubectl get service <service-name> -n <namespace>
You’ll see an EXTERNAL-IP column; that’s the IP you need.
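As a quick smoke test from another machine on your LAN, assuming the service was assigned 192.168.68.80 from the pool above:

$ curl http://192.168.68.80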
But… There is a but. If you want to use your cluster’s ingress to host multiple services and expose them to the internet by port forwarding from your router, you’ll want your Traefik instance to have a stable IP. So let’s do that now.
SSH into your k3s cluster’s main node. This should be the node on which you first installed k3s, the one that provided the token for the other nodes.
Create (or edit, if it already exists) this file:
$ sudo nano /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
Give it the desired IP by editing the contents to look like:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    service:
      spec:
        loadBalancerIP: 192.168.68.99
Replace the IP with your own. It must be from the range of IPs you defined earlier.
Save the file. K3s continuously watches the manifests folder, so it will automatically redeploy Traefik with this configuration, and MetalLB should assign the requested IP.
You can check by running:
$ kubectl get svc -n kube-system traefik
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.6.237 192.168.68.99 80:30132/TCP,443:32099/TCP 11m
Hope this helps, have fun clickity-clacking.