Running Gloo as a Kubernetes Ingress for bare-metal clusters with MetalLB

When you deploy workloads into Kubernetes, you probably know you cannot just communicate with them directly. Kubernetes assigns each Pod an IP address, but those addresses are routable only within the cluster unless you've explicitly set up your network to route them externally. You have a few different options for allowing traffic into your cluster.
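For instance, listing Pods with wide output shows these cluster-internal addresses (the output below is illustrative; the exact addresses depend on your cluster's network plugin):

$ kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE
my-app-6d4cf56db6-x2jlm   1/1     Running   0          2m    172.17.0.6   minikube

A curl to 172.17.0.6 works from inside the cluster, but not from your laptop.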

On a public cloud like Google Cloud or AWS, exposing your services is a bit more convenient. You can create a Kubernetes Service of type LoadBalancer, which automatically provisions a cloud load balancer with a routable IP address you can use to access your services. If you expose too many services this way, though, you may find the approach difficult (and expensive) to maintain. A better approach is to use an ingress, or an open-source API and function gateway based on Envoy like Gloo, to funnel all traffic through a well-known access point at which you can enforce policy, rate limiting, and security, as well as API or function routing and transformation. When you create the Gloo gateway-proxy as type LoadBalancer on a public cloud, you get a routable IP address through the cloud's external load balancers.
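For reference, a Service of type LoadBalancer is just a few lines of YAML; the app name and ports here are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # ask the environment to provision an external load balancer
  selector:
    app: my-app         # send traffic to Pods carrying this label
  ports:
  - port: 80            # port exposed on the load balancer
    targetPort: 8080    # port the Pods listen on

On a cloud provider, this is all it takes to get a public IP per service, which is exactly what becomes unwieldy (and pricey) at scale.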

What if you’re setting up a bare-metal cluster? Or a cluster that cannot automatically take advantage of the LoadBalancer Kubernetes service type?

An awesome new open-source project called MetalLB from the good folks at Google allows us to do just that. We can plug in to our existing Layer 2 or BGP routing, assign MetalLB a pool of IP addresses, and let it handle the rest. That means we can use LoadBalancer Service types on bare-metal clusters and get the same automatically routable IP addresses for our services, including an ingress service like Gloo.
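To give a flavor of what that configuration looks like, here's a minimal sketch of a Layer 2 address pool using the ConfigMap format from MetalLB's beta docs; the address range is a placeholder you'd replace with spare addresses from your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2        # announce the addresses via ARP/NDP on the local network
      addresses:
      - 198.51.100.0/24       # pool MetalLB hands out to LoadBalancer Services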

We can give it a try on Minikube by following the installation instructions for MetalLB on Minikube.
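At a high level, the install applies MetalLB's manifest and then a ConfigMap like the one sketched above (file names here are placeholders; grab the actual manifest URL from the MetalLB docs):

$ kubectl apply -f metallb.yaml           # MetalLB controller + speaker, from the MetalLB docs
$ kubectl apply -f metallb-config.yaml    # the address-pool ConfigMap from earlier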

Once installed, you should see the following MetalLB components running:

$ kubectl get pod -n metallb-system
NAME                               READY   STATUS    RESTARTS   AGE
controller-7cc9c87cfb-gbwnr        1/1     Running   0          32m
speaker-sxwqc                      1/1     Running   0          32m
test-bgp-router-85b7ccb986-mdsvl   1/1     Running   0          34m

With MetalLB installed, we can install Gloo:

$ glooctl install gateway

Now let's list the Gloo services (we customize the column output to focus on the important bits, but a plain kubectl get svc works too):

$ kubectl get service -n gloo-system  -o=custom-columns=NAME:.metadata.name,TYPE:.spec.type,EXTERNAL-IP:.status.loadBalancer.ingress[*].ip
NAME            TYPE           EXTERNAL-IP
gateway-proxy   LoadBalancer   198.51.100.1
gloo            ClusterIP      <none>

We see that, on Minikube, MetalLB has assigned an external IP address to our Gloo gateway.

Let’s add a sample Petstore application to test the routing:

$ kubectl apply \
  -f https://raw.githubusercontent.com/solo-io/gloo/master/example/petstore/petstore.yaml

And add a Gloo route:

$ glooctl add route \
    --path-exact /sample-route-1 \
    --dest-name default-petstore-8080 \
    --prefix-rewrite /api/pets
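Under the hood, glooctl records this route on a VirtualService custom resource in the gloo-system namespace. As a rough sketch of the declarative equivalent (field names have shifted between Gloo versions, so treat this as illustrative rather than canonical):

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'                               # match any host header
    routes:
    - matchers:
      - exact: /sample-route-1          # the --path-exact match
      routeAction:
        single:
          upstream:
            name: default-petstore-8080 # the --dest-name upstream
            namespace: gloo-system
      options:
        prefixRewrite: /api/pets        # the --prefix-rewrite value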

Now we should be able to hit the Petstore using the Gloo gateway’s external IP address.

$ minikube ssh -- curl 198.51.100.1/sample-route-1
[{"id":1,"name":"Dog","status":"available"},{"id":2,"name":"Cat","status":"pending"}]

Cool!

Note, however, that we had to log in to Minikube for this example, since the MetalLB routers run inside the Minikube VM and the assigned address is routable only from there. Remember, this solution is meant for bare-metal clusters.
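On a real bare-metal cluster there would be no minikube ssh hop: any machine that can reach the announced address could call the gateway directly, for example:

$ curl http://198.51.100.1/sample-route-1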

Although MetalLB is in beta right now, it’s a promising and exciting new technology for you to keep an eye on. Give it a try with Gloo on bare-metal clusters and let us know how you get along!