Kubernetes Ingress Past, Present, and Future

This post was inspired by listening to the February 19, 2019, Kubernetes Podcast episode “Ingress, with Tim Hockin.” The Kubernetes Podcast is turning out to be a very well done podcast overall and is well worth a listen. In the Ingress episode, the hosts interview Tim Hockin, one of the original Kubernetes co-founders, a team lead on the Kubernetes predecessors Borg and Omega, and still very active within the Kubernetes community, for example chairing the Kubernetes Network Special Interest Group that currently owns the Ingress resource specification. In the episode, Tim talks about the history of Kubernetes Ingress, current developments around Ingress, and proposed futures. It inspired me to reflect on both Ingress controllers (which provide the implementation behind an Ingress manifest) and Ingress the concept (allowing clients outside the Kubernetes cluster to access services running inside it).

So what’s a Kubernetes Ingress?

An Ingress manifest can include an annotation that indicates which Ingress controller should manage that Ingress resource. For example, to have Solo.io Gloo manage a specific Ingress resource, you would specify the following. Note the included annotation kubernetes.io/ingress.class: gloo.
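The manifest itself isn’t reproduced in this copy of the post, so here is a minimal sketch of an Ingress annotated for Gloo. The apiVersion reflects what was current at the time of writing, and the host, service name, and port are placeholders rather than values from the original example.

```yaml
# Minimal sketch of an Ingress handed to Gloo via the ingress.class annotation;
# host, service name, and port are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: gloo
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 8080
```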

Ingress Challenges

What’s Next for Ingress?

The Istio community, whose data plane is also built on Envoy like Heptio Contour’s, is likewise defining its own ingress CRDs.

It will be fascinating to see how Ingress evolves in the not-too-distant future.

Related reading: API Gateways are going through an identity crisis.

Demo Time

For this example, I’m going to use a Kubernetes service created from https://jsonplaceholder.typicode.com/, which offers a quick set of REST APIs returning different JSON outputs that are helpful for testing. It’s based on the Node.js json-server package, which is very cool and worth looking at on its own. I forked the original jsonplaceholder GitHub repository, ran draft create on the project, and made a couple of tweaks to the generated Helm chart. Draft is a super fast and easy way to bootstrap existing code into Kubernetes. I’m running this entire example locally using minikube.

The jsonplaceholder service comes with six common resources, each of which returns a set of JSON objects. For this example, we’ll be getting the first user resource at /users/1.

  • /posts: 100 posts
  • /comments: 500 comments
  • /albums: 100 albums
  • /photos: 5000 photos
  • /todos: 200 todos
  • /users: 10 users

Following is a script to try this example yourself, and there’s also an asciinema playback so you can see what it looks like running on my machine. We’ll unpack what’s happening after the playback.
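The script and the asciinema recording aren’t reproduced in this copy of the post, so here is a rough sketch of the steps it walks through, based on the description in the next section. The fork URL is a placeholder, and the exact glooctl, helm, and draft invocations may differ by version.

```bash
# Sketch of the demo steps (not the original script); the fork URL below is a
# placeholder and CLI flags may vary by Gloo/Helm/Draft version.
minikube start                  # local Kubernetes cluster

helm init                       # Helm 2: install Tiller into the cluster
draft init                      # initialize Draft

glooctl install ingress         # install Gloo's Ingress controller

# Clone the example (placeholder URL for my jsonplaceholder fork) and deploy it
git clone https://github.com/<your-fork>/jsonplaceholder.git
cd jsonplaceholder
draft up                        # build the image and deploy the Helm chart

# Call the service through the Gloo ingress proxy
curl --header "Host: gloo.example.com" $(glooctl proxy url --name ingress-proxy)/users/1
```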

What Happened?

We started up a local Kubernetes cluster (minikube) and initialized Helm and Draft. We also installed the Gloo ingress controller into our local cluster.

We then cloned our example with git clone and used draft up to build and deploy it to our cluster. Let’s spend a minute on what happened in this step. I originally forked the jsonplaceholder GitHub repository and ran draft create against its code. Draft auto-detects the source code language, in this case Node.js, and creates both a Dockerfile that builds our example application into a container image and a default Helm chart. I then made a few minor tweaks to the Helm chart to enable its Ingress. Let’s look at that Ingress manifest. The main changes are the addition of the ingress.class: gloo annotation to mark this Ingress for Gloo’s Ingress controller, and setting the host to gloo.example.com, which is why our curl command sets the header with curl --header "Host: gloo.example.com".

charts/template/ingress.yaml
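The manifest isn’t included in this copy of the post; below is a sketch of roughly what the rendered Ingress looks like. The service name and port are placeholders, and the actual file in the chart uses Helm-templated values rather than literals.

```yaml
# Sketch of the rendered chart Ingress; service name and port are placeholders,
# and the real chart file uses Helm-templated values.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jsonplaceholder
  annotations:
    kubernetes.io/ingress.class: gloo
spec:
  rules:
  - host: gloo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jsonplaceholder
          servicePort: 3000
```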

For more examples of using Gloo as a basic Ingress controller, you can check out Kubernetes Ingress Control using Gloo.

You may have also noticed the call to $(glooctl proxy url --name ingress-proxy) in the curl command. This is needed when you’re running in a local environment like minikube and need to get the host IP and port of the Gloo proxy server. When Gloo is deployed to a cloud provider like Google or AWS, those environments associate a static IP with the proxy and allow port 80 (or port 443 for HTTPS) to be used, and that static IP can be registered with a DNS server; i.e., when Gloo is deployed to cloud-managed Kubernetes, you could simply run curl http://gloo.example.com/users/1.
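To make the contrast concrete, here is a sketch of the two invocations; the proxy name ingress-proxy comes from the example above, while the cloud variant assumes gloo.example.com has been registered in DNS against the proxy’s static IP.

```bash
# Local (minikube): resolve the proxy's host:port via glooctl and pass the Host header
curl --header "Host: gloo.example.com" $(glooctl proxy url --name ingress-proxy)/users/1

# Cloud-managed Kubernetes: gloo.example.com registered in DNS against the proxy's static IP
curl http://gloo.example.com/users/1
```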

Ingress Example Challenges

Solo.io Gloo Virtual Services
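The Virtual Service manifest itself isn’t reproduced in this copy of the post; the following is a sketch of what an equivalent Gloo Virtual Service might look like, assuming Gloo’s gateway.solo.io/v1 API of the time and a placeholder upstream name (Gloo auto-discovers upstreams from Kubernetes Services, typically naming them <namespace>-<service>-<port>).

```yaml
# Sketch of a Gloo Virtual Service equivalent to the earlier Ingress; the
# upstream name is a placeholder and field names may differ between Gloo versions.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: jsonplaceholder
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - gloo.example.com
    routes:
    - matcher:
        prefix: /            # use regex: /.* to match the previous Ingress exactly
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-3000   # auto-discovered upstream (placeholder)
            namespace: gloo-system
```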

You’ll notice that it looks very similar to the Ingress we previously created, with a few subtle changes. The path specifier is prefix: /, which is generally what people intend, i.e., if the beginning of the request path matches the route’s path specifier, then apply the route’s actions. If we wanted to exactly match the previous Ingress, we could use regex: /.* instead. Virtual Services allow you to specify paths by prefix, exact match, and regular expression. You can also see that instead of backend: with serviceName and servicePort, a Virtual Service has a routeAction that delegates to a single upstream. Gloo upstreams are auto-discovered and can refer to Kubernetes Services, REST/gRPC functions, cloud functions like AWS Lambda and Google Cloud Functions, and other services external to Kubernetes.

More details on Solo.io Gloo are available in the Gloo documentation.

Let’s go back to our example, and update our Virtual Service to do the path rewrite we wanted, i.e., /people => /users.
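The updated manifest isn’t reproduced in this copy of the post; the sketch below shows the shape of the change using Gloo’s routePlugins syntax from that era. The upstream name is a placeholder and field names may differ in newer Gloo versions.

```yaml
# Sketch of the Virtual Service with the added /people route and prefix rewrite;
# the upstream name is a placeholder and routePlugins field names follow Gloo's
# API of the time, so check the docs for your version.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: jsonplaceholder
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - gloo.example.com
    routes:
    - matcher:
        prefix: /people
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-3000
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /users        # /people/1 is forwarded upstream as /users/1
    - matcher:
        prefix: /
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-3000
            namespace: gloo-system
```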

We’ve added a second route matcher, just like adding a second route path in an Ingress, and specified prefix: /people. This will match all requests that start with /people, while all other calls to the gloo.example.com domain will be handled by the other route matcher. We also added a routePlugins section that rewrites the request path to /users so that our service correctly handles the request. Route plugins allow you to perform many operations on both the request to the upstream service and the response coming back from it. This is best shown with an example, so for our new /people route let’s also transform the response: add a new header x-test-phone with a value taken from the response body, and trim the response body down to a couple of fields: name, plus address/street and address/city.
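Sketched below is just the /people route with those transformations added (the rest of the Virtual Service stays as above). The transformation field names are assumptions based on Gloo’s transformation plugin of the time, so treat this as illustrative and consult the Gloo documentation for the exact schema in your version.

```yaml
# Illustrative fragment: the /people route from the Virtual Service above with
# a response transformation added; field names may differ by Gloo version.
    - matcher:
        prefix: /people
      routeAction:
        single:
          upstream:
            name: default-jsonplaceholder-3000
            namespace: gloo-system
      routePlugins:
        prefixRewrite:
          prefixRewrite: /users
        transformations:
          responseTransformation:
            transformationTemplate:
              headers:
                x-test-phone:
                  text: '{{ phone }}'   # Inja template: pull the phone field from the JSON body
              body:
                text: '{"name": "{{ name }}", "address": {"street": "{{ address.street }}", "city": "{{ address.city }}"}}'
```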

Let’s see what that looks like. My example GitHub repository already included the full Gloo Virtual Service we just examined. We need to configure Gloo for gateway mode, which means adding another proxy that handles Virtual Services in addition to Ingress resources. We’ll use draft up to make sure our example is fully deployed, including the full Virtual Service, and then we’ll call both /users/1 and /people/1 to see the differences.
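A sketch of those steps follows. The glooctl install gateway command and the gateway-proxy name are assumptions based on Gloo’s CLI of the time rather than commands reproduced from the original post.

```bash
# Enable Gloo's gateway proxy so Virtual Services are served alongside Ingress
glooctl install gateway

# Redeploy the example so the Virtual Service in the chart is applied
draft up

# Original route: full user object
curl --header "Host: gloo.example.com" $(glooctl proxy url --name gateway-proxy)/users/1

# Rewritten + transformed route: trimmed body plus the x-test-phone header
curl --include --header "Host: gloo.example.com" $(glooctl proxy url --name gateway-proxy)/people/1
```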

OK, this is not that mind-blowing if you’ve used other L7 networking products or done other integration work, but it’s still pretty cool relative to standard Ingress objects. Gloo uses Inja templates to process the JSON response body; there are more details in the Gloo documentation.

Summary