From Zero to Gloo Edge in 15 Minutes*


*…your mileage may vary

How long does it take to configure your first cloud-native application on an open-source API Gateway?

How about 15 minutes? Give us that much time and we’ll give you a Kubernetes-hosted application accessible via a gateway configured with policies for routing, service discovery, timeouts, debugging, access logging, and observability. We’ll host all of this on a local KinD (Kubernetes in Docker) cluster to keep the setup standalone and as simple as possible. In addition, this gateway will be laid on the foundation of Envoy Proxy, the open-source proxy that comprises the backbone of some of the most influential enterprise cloud projects available today, like Istio.

Would you prefer to perform this exercise on a public cloud rather than a local KinD cluster? Then check out these alternative versions of this post:

If you have questions, please reach out on the Solo #edge-quickstart Slack channel.

Ready? Set? Go!

Prerequisites

For this exercise, we’re going to do all the work on your local workstation. All you’ll need to get started is a Docker-compatible environment such as Docker Desktop, plus the CLI utilities kubectl, kind, curl, and jq. Make sure these are all available before jumping into the next section. I’m building this on macOS, but other platforms should work perfectly well too.
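
If you’d like a quick sanity check before diving in, a small shell loop like this (just a sketch, assuming the docker CLI for your container runtime) confirms that each required tool is on your PATH:

for tool in docker kubectl kind curl jq; do
  command -v "$tool" > /dev/null 2>&1 || echo "Missing: $tool"
done

If nothing is printed, you’re good to go.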

INSTALL

Let’s start by installing the platform and application components we need for this exercise.

Install KinD Cluster

Once you have the kind utility installed along with Docker on your local workstation, creating a cluster to host this exercise is simple and takes only about a minute. Run the command:

kind create cluster

You should see:

Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.18.2)
✓ Preparing nodes
✓ Writing configuration
✓ Starting control-plane
✓ Installing CNI
✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community

Confirm that your kube config is pointing to your new cluster using this command:

kubectl config use-context kind-kind

The response should be:

Switched to context "kind-kind".

Install httpbin Application

HTTPBIN is a great little REST service that can be used to exercise a variety of HTTP operations and echo request details back to the consumer. We’ll use it throughout this exercise. First, we’ll install the httpbin service on our kind cluster. Run:

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/httpbin-svc-dpl.yaml

You should see:

serviceaccount/httpbin created
service/httpbin created
deployment.apps/httpbin created

You can confirm that the httpbin pod is running by searching for pods with an app label of httpbin:

kubectl get pods -l app=httpbin

And you will see:

NAME                       READY   STATUS    RESTARTS   AGE
httpbin-66cdbdb6c5-2cnm7   1/1     Running   0          21m
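
If you’d like to smoke-test httpbin itself before any gateway is involved, you can optionally port-forward straight to the service and hit its /get endpoint. This sketch assumes the httpbin Service listens on port 8000, which is how it will surface later as the default-httpbin-8000 Upstream:

kubectl port-forward svc/httpbin 8000:8000 &
sleep 2   # give the port-forward a moment to establish
curl -s http://localhost:8000/get | jq .url
kill %%   # stop this temporary port-forward

You should see the echoed request URL http://localhost:8000/get in the response.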

 

Install glooctl Utility

GLOOCTL is a command-line utility that allows users to view, manage and debug Gloo Edge deployments, much like a Kubernetes user employs the kubectl utility. Let’s install glooctl on our local workstation:

curl -sL https://run.solo.io/gloo/install | sh
export PATH=$HOME/.gloo/bin:$PATH

We’ll test out the installation using the glooctl version command. It responds with the version of the CLI client that you have installed. However, the server version is undefined since we have not yet installed Gloo Edge. Enter:

glooctl version

Which responds:

Client: {"version":"1.7.10"}
Server: version undefined, could not find any version of gloo running

Install Gloo Edge

Finally, we will complete the INSTALL phase by installing an instance of open-source Gloo Edge on our kind cluster. Run:

glooctl install gateway

And you’ll see:

Creating namespace gloo-system... Done.
Starting Gloo Edge installation...
Gloo Edge was successfully installed!

It should take less than a minute for the full Gloo Edge system to be ready for use. You can use this bash script to notify you when everything is ready to go.

until kubectl get ns gloo-system; do
  sleep 1
done

until [ $(kubectl -n gloo-system get pods -o jsonpath='{range .items[*].status.containerStatuses[*]}{.ready}{"\n"}{end}' | grep false -c) -eq 0 ]; do
  echo "Waiting for all the gloo-system pods to become ready"
  sleep 1
done

echo "Gloo Edge deployment is ready :-)"

The system will respond:

NAME          STATUS   AGE
gloo-system   Active   15s
Waiting for all the gloo-system pods to become ready
Waiting for all the gloo-system pods to become ready
Waiting for all the gloo-system pods to become ready
Gloo Edge deployment is ready :-)
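
If you prefer kubectl’s built-in waiting to a polling loop, rollout status works too. This is just a sketch that assumes the default deployment names the installer creates in gloo-system:

kubectl -n gloo-system rollout status deployment/gloo --timeout=90s
kubectl -n gloo-system rollout status deployment/discovery --timeout=90s
kubectl -n gloo-system rollout status deployment/gateway-proxy --timeout=90s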

That’s all that’s required to install Gloo Edge. Notice that we did not install or configure any kind of external database to manage Gloo artifacts. That’s because the product was architected from Day 1 to be Kubernetes-native. All artifacts are expressed as Kubernetes Custom Resources, and they are all stored in native etcd storage. Consequently, Gloo Edge leads to more resilient and less complex systems than alternatives that are either shoe-horned into Kubernetes or require external moving parts.

Note that everything we do in this getting-started exercise runs on the open-source version of Gloo Edge. There is also an enterprise edition of Gloo Edge that adds features to support advanced authentication and authorization, rate limiting, and observability, to name a few. If you’d like to work through this blog post using Gloo Edge Enterprise instead, then request a free trial here.

DISCOVER

A unique feature of Gloo Edge is its ability to discover Kubernetes services and wrap them into an Upstream abstraction. Upstreams represent targets to which request traffic can be routed.  To learn more about how Upstreams operate in a Gloo Edge environment, see the product documentation here and here, and the API reference here.

Explore Service Discovery

Let’s use the glooctl utility to explore the catalog of Upstreams that Gloo Edge has already compiled within our kind cluster. You can run:

glooctl get upstreams

And you’ll see:

+-------------------------------+------------+----------+------------------------------+
|           UPSTREAM            |    TYPE    |  STATUS  |           DETAILS            |
+-------------------------------+------------+----------+------------------------------+
| default-httpbin-8000          | Kubernetes | Accepted | svc name:      httpbin       |
|                               |            |          | svc namespace: default       |
|                               |            |          | port:          8000          |
|                               |            |          |                              |
| default-kubernetes-443        | Kubernetes | Accepted | svc name:      kubernetes    |
|                               |            |          | svc namespace: default       |
|                               |            |          | port:          443           |
|                               |            |          |                              |
... abridged ...
| kube-system-kube-dns-9153     | Kubernetes | Accepted | svc name:      kube-dns      |
|                               |            |          | svc namespace: kube-system   |
|                               |            |          | port:          9153          |
|                               |            |          |                              |
+-------------------------------+------------+----------+------------------------------+

Notice in particular the default-httpbin-8000 Upstream that corresponds to the httpbin service we installed earlier.
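
Since Upstreams are ordinary Kubernetes Custom Resources living in the gloo-system namespace, you can browse the same catalog with kubectl as well, for example:

kubectl get upstreams -n gloo-system
kubectl get upstream default-httpbin-8000 -n gloo-system -o yaml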

Explore Function Discovery with OpenAPI

We could begin routing to this newly discovered httpbin Upstream right away. Before we do that, let’s explore advanced function discovery features that ship with open-source Gloo Edge. Function discovery is supported for both OpenAPI / REST and gRPC interfaces. In this example, we will associate an OpenAPI document with the httpbin Upstream and then observe the discovery feature at work.

First, we need to enable function discovery on the default namespace where the service is deployed. Since not all users employ OpenAPI or gRPC interfaces, and because function discovery can become resource-intensive, it is disabled by default. We could have enabled it via helm values at installation time; instead, we will do it here with kubectl:

kubectl label namespace default discovery.solo.io/function_discovery=enabled

Which confirms:

namespace/default labeled
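
If you want to double-check that the label took effect, list the namespace with its labels:

kubectl get namespace default --show-labels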

Second, we will modify the httpbin Upstream to associate an OpenAPI document. There’s nothing unique or Gloo-specific in the OpenAPI document itself; it’s just an OpenAPI spec for a standard REST interface. You can see the full spec for httpbin here and interact with the individual operations in the httpbin sandbox here.

Let’s take a look at the modifications to the generated httpbin Upstream. All we’re doing is adding a URL to the Upstream that locates the OpenAPI specification for the httpbin service.

    serviceSpec:
      rest:
        swaggerInfo:
          url: https://raw.githubusercontent.com/jameshbarton/solo-resources/main/zero-to-gateway/httpbin-openapi.json
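
We’ll apply that change from a prepared manifest in a moment. Equivalently, you could patch the discovered Upstream in place; here’s a sketch, assuming the spec.kube layout that Gloo Edge discovery generates for Kubernetes services:

kubectl -n gloo-system patch upstream default-httpbin-8000 --type merge \
  -p '{"spec":{"kube":{"serviceSpec":{"rest":{"swaggerInfo":{"url":"https://raw.githubusercontent.com/jameshbarton/solo-resources/main/zero-to-gateway/httpbin-openapi.json"}}}}}}'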

Now we’ll apply this change to the Upstream:

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/httpbin-openapi-us.yaml

And it confirms:

upstream.gloo.solo.io/default-httpbin-8000 configured

Now when we use glooctl to inspect the Upstream and compare it with what we had before, we can see that Gloo Edge has discovered (with the guidance of the OpenAPI document) a number of individual operations published by the httpbin service. This will allow us to be much more precise and avoid errors as we establish routing rules.

glooctl get upstream default-httpbin-8000

You should see:

+----------------------+------------+----------+------------------------+
|       UPSTREAM       |    TYPE    |  STATUS  |        DETAILS         |
+----------------------+------------+----------+------------------------+
| default-httpbin-8000 | Kubernetes | Accepted | svc name:      httpbin |
|                      |            |          | svc namespace: default |
|                      |            |          | port:          8000    |
|                      |            |          | REST service:          |
|                      |            |          | functions:             |
|                      |            |          | - /anything            |
|                      |            |          | - /base64              |
|                      |            |          | - /brotli              |
|                      |            |          | - /bytes               |
|                      |            |          | - /cache               |
|                      |            |          | - /deflate             |
|                      |            |          | - /delay               |
|                      |            |          | - /delete              |
|                      |            |          | - /get                 |
|                      |            |          | - /gzip                |
|                      |            |          | - /headers             |
|                      |            |          | - /ip                  |
|                      |            |          | - /patch               |
|                      |            |          | - /post                |
|                      |            |          | - /put                 |
|                      |            |          | - /redirect-to         |
|                      |            |          | - /response-headers    |
|                      |            |          | - /status              |
|                      |            |          | - /stream              |
|                      |            |          | - /user-agent          |
|                      |            |          | - /uuid                |
|                      |            |          | - /xml                 |
|                      |            |          |                        |
+----------------------+------------+----------+------------------------+

CONTROL

In this section, we’ll configure external access to Gloo Edge, establish routing rules that are the core of the proxy configuration, and also show how to establish timeout policies from the proxy.

Establish External Access to Proxy

Because we are running Gloo Edge inside a Docker-hosted cluster that’s not linked to our host network, the network endpoints of the Envoy data plane aren’t exposed to our development workstation by default. We will use a simple port-forward to expose the proxy’s HTTP port on our workstation. (Note that gateway-proxy is our deployment of the Envoy data plane.)

kubectl port-forward -n gloo-system deployment/gateway-proxy 8080 &

This returns:

Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080

With this port-forward in place, we’ll be able to access the routes we are about to establish on port 8080 of our workstation.

Configure Simple Routing with CLI

Let’s begin our routing configuration with the simplest possible route to expose the /get operation on httpbin. This endpoint simply reflects back in its response the headers and any other arguments passed into the service.

We’ll use the glooctl utility to get started:

glooctl add route \
  --path-exact /api/httpbin/get \
  --dest-name default-httpbin-8000 \
  --prefix-rewrite /get

And you’ll see output like this:

{"level":"info","ts":"2021-06-08T13:34:28.582-0400","caller":"add/route.go:156","msg":"Created new default virtual service","virtualService":"virtual_host:{domains:\"*\"  routes:{matchers:{exact:\"/api/httpbin/get\"}  route_action:{single:{upstream:{name:\"default-httpbin-8000\"  namespace:\"gloo-system\"}}}  options:{prefix_rewrite:{value:\"/get\"}}}}  status:{}  metadata:{name:\"default\"  namespace:\"gloo-system\"  resource_version:\"437677\"  generation:1}"}
+-----------------+--------------+---------+------+---------+-----------------+----------------------------------+
| VIRTUAL SERVICE | DISPLAY NAME | DOMAINS | SSL  | STATUS  | LISTENERPLUGINS |              ROUTES              |
+-----------------+--------------+---------+------+---------+-----------------+----------------------------------+
| default         |              | *       | none | Pending |                 | /api/httpbin/get ->              |
|                 |              |         |      |         |                 | gloo-system.default-httpbin-8000 |
|                 |              |         |      |         |                 | (upstream)                       |
+-----------------+--------------+---------+------+---------+-----------------+----------------------------------+

This glooctl invocation created a Gloo Edge VirtualService component, which is named default by default. It routes any request to the path /api/httpbin/get to the httpbin /get endpoint. Attempting to reach any other endpoint on the httpbin service will be rejected.

Note that when the route is initially created, the status of the route is Pending. Issue the glooctl get virtualservice default command and observe that the status has now changed from Pending to Accepted.

+-----------------+--------------+---------+------+----------+-----------------+----------------------------------+
| VIRTUAL SERVICE | DISPLAY NAME | DOMAINS | SSL  |  STATUS  | LISTENERPLUGINS |              ROUTES              |
+-----------------+--------------+---------+------+----------+-----------------+----------------------------------+
| default         |              | *       | none | Accepted |                 | /api/httpbin/get ->              |
|                 |              |         |      |          |                 | gloo-system.default-httpbin-8000 |
|                 |              |         |      |          |                 | (upstream)                       |
+-----------------+--------------+---------+------+----------+-----------------+----------------------------------+

Test the Simple Route with Curl

Now that the VirtualService is in place, let’s use curl with the -i option to display the response along with the HTTP response code and headers.

curl http://localhost:8080/api/httpbin/get -i

This command should complete successfully.

HTTP/1.1 200 OK
server: envoy
date: Tue, 08 Jun 2021 17:43:55 GMT
content-type: application/json
content-length: 315
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 7

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.64.1",
    "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
    "X-Envoy-Original-Path": "/api/httpbin/get"
  },
  "origin": "10.244.0.9",
  "url": "http://localhost:8080/get"
}

Note that if we attempt to invoke another valid endpoint /delay on the httpbin service, it will fail with a 404 Not Found error. Why? Because our VirtualService routing policy is only exposing access to /get, one of the many endpoints available on the service. If we enter:

curl http://localhost:8080/api/httpbin/delay/1 -i

You’ll see:

HTTP/1.1 404 Not Found
date: Tue, 08 Jun 2021 17:46:49 GMT
server: envoy
content-length: 0

Explore Complex Routing with Regex Patterns

Let’s assume that now we DO want to expose other httpbin endpoints like /delay. Our initial VirtualService is inadequate, because it is looking for an exact path match with /api/httpbin/get. Here is the core YAML for that VirtualService as constructed by the glooctl add route command we issued earlier.

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - exact: /api/httpbin/get
      options:
        prefixRewrite: /get
      routeAction:
        single:
          upstream:
            name: default-httpbin-8000
            namespace: gloo-system

This time, rather than using the glooctl CLI, let’s manipulate the VirtualService directly. We’ll modify the matchers: stanza to match the path prefix /api/httpbin, and add a regexRewrite option that replaces the /api/httpbin/ portion of the path with /. So a path like /api/httpbin/delay/1 will be sent to httpbin as /delay/1. The modified VirtualService looks like this:

apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - prefix: /api/httpbin
      options:
        regexRewrite:
          pattern:
            regex: '/api/httpbin/'
          substitution: '/'
      routeAction:
        single:
          upstream:
            name: default-httpbin-8000
            namespace: gloo-system

Let’s apply the modified VirtualService and test. Note that throughout this exercise, we are managing Gloo Edge artifacts using standard Kubernetes utilities like kubectl. That’s an important point because it allows developers to use familiar tools when working with Gloo Edge configuration. It also benefits organizations using GitOps strategies to manage deployments, as tools like ArgoCD and Flux can easily handle Gloo artifacts as first-class Kubernetes citizens.

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/httpbin-vs-regex.yaml

Note that you can safely ignore the “kubectl apply” warning below. As long as kubectl responds that the default VirtualService was configured, then your change was applied.

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
virtualservice.gateway.solo.io/default configured

Test Routing with Regex Patterns

When we used only a single route with an exact match pattern, we could only exercise the httpbin /get endpoint. Let’s now use curl to confirm that both /get and /delay work as expected.

% curl http://localhost:8080/api/httpbin/get -i
HTTP/1.1 200 OK
server: envoy
date: Tue, 08 Jun 2021 21:48:54 GMT
content-type: application/json
content-length: 288
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 1

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.64.1",
    "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
    "X-Envoy-Original-Path": "/api/httpbin/get"
  },
  "origin": "10.244.0.10",
  "url": "http://localhost:8080/get"
}
% curl http://localhost:8080/api/httpbin/delay/1 -i
HTTP/1.1 200 OK
server: envoy
date: Tue, 08 Jun 2021 21:48:57 GMT
content-type: application/json
content-length: 342
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 1002

{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.64.1",
    "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
    "X-Envoy-Original-Path": "/api/httpbin/delay/1"
  },
  "origin": "10.244.0.10",
  "url": "http://localhost:8080/delay/1"
}

Perfect! It works just as expected! For extra credit, try out some of the other endpoints published via httpbin as well.
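
For example, the /uuid and /status operations from the discovered function list work through the same prefix route:

curl -s http://localhost:8080/api/httpbin/uuid | jq
curl -si http://localhost:8080/api/httpbin/status/418 | head -1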

Configure Timeouts

Don’t you hate it when you visit a website and the request just gets “lost”? You wait and wait. Maybe you see the network connection established but then you wait some more. And still the request never completes.

Gloo Edge provides an easy-to-configure set of timeouts that you can apply to spare your valuable users this frustration. And like other Gloo features, it can be added to your policy with standard Kubernetes tooling, without touching the source application. All we need to do is add a timeout directive to our VirtualService. In this case, we will apply the timeout in the simplest fashion, at the httpbin route level, by adding timeout to our route options.

    routes:
    - matchers:
      - prefix: /api/httpbin
      options:
        timeout: '5s'  # Adding 5-second timeout HERE
        regexRewrite: 
          pattern:
            regex: '/api/httpbin/'
          substitution: '/'

Let’s apply this VirtualService change using kubectl:

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/httpbin-vs-timeout.yaml

It confirms:

virtualservice.gateway.solo.io/default configured
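
If you’d like to confirm that the option landed on the route, a quick jsonpath query against the VirtualService (assuming the timeout sits on the first and only route, as in the snippet above) should print 5s:

kubectl get virtualservice default -n gloo-system -o jsonpath='{.spec.virtualHost.routes[0].options.timeout}'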

Test Timeouts with httpbin delays

We will confirm that our new timeout policy works by using the httpbin /delay endpoint. First, we’ll specify a 1-second delay, and we expect everything to work just fine. Second, we’ll specify a longer delay, say 8 seconds, and we will expect our timeout policy to be triggered and return a 504 Gateway Timeout error. Run:

curl http://localhost:8080/api/httpbin/delay/1 -i

This returns:

HTTP/1.1 200 OK
server: envoy
date: Wed, 09 Jun 2021 13:58:56 GMT
content-type: application/json
content-length: 341
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 1006

{
  "args": {},
  "data": "",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost:8080",
    "User-Agent": "curl/7.64.1",
    "X-Envoy-Expected-Rq-Timeout-Ms": "5000",
    "X-Envoy-Original-Path": "/api/httpbin/delay/1"
  },
  "origin": "10.244.0.10",
  "url": "http://localhost:8080/delay/1"
}

Note that the operation completed successfully and that the 1-second delay was applied. The response header x-envoy-upstream-service-time: 1006 indicates that the request spent 1,006 milliseconds being processed by Envoy.

Now let’s switch to an 8-second delay and see if our 5-second timeout triggers as expected. Execute:

curl http://localhost:8080/api/httpbin/delay/8 -i

And you’ll get a timeout:

HTTP/1.1 504 Gateway Timeout
content-length: 24
content-type: text/plain
date: Wed, 09 Jun 2021 14:03:24 GMT
server: envoy

upstream request timeout

BOOM! Our simple timeout policy works just as expected, triggering a 504 Gateway Timeout error when the 5-second threshold is exceeded by our 8-second httpbin delay.

DEBUG

Let’s be honest with ourselves: Debugging bad software configuration is a pain. Gloo Edge engineers have done their best to ease the process as much as possible, with documentation like this, for example. However, as we have all experienced, it can be a challenge with any complex system. In this slice of our 15 minutes, we’ll explore how to use the glooctl utility to assist in some simple debugging tasks for a common problem.

Solve a Problem with glooctl CLI

A common source of Gloo Edge configuration errors is mistyping an upstream reference, perhaps by copy/pasting it from another source but “missing a spot” when changing the name of the Upstream target. In this example, we’ll simulate making an error like that, and then demonstrate how glooctl can be used to detect it.

First, let’s apply a change to simulate the mistyping of an upstream config so that it is targeting a non-existent default-httpbin-8080 Upstream, rather than the correct default-httpbin-8000:

kubectl delete virtualservice default -n gloo-system
sleep 5
kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/httpbin-vs-bad-port.yaml

You should see:

virtualservice.gateway.solo.io "default" deleted
virtualservice.gateway.solo.io/default created

Note that we applied a sleep between deleting the VirtualService and re-creating it to ensure that we clear the working route from Envoy’s route cache before we create the new mis-configured route.

Now if we try to access one of our httpbin endpoints:

curl http://localhost:8080/api/httpbin/get -i

The request fails:

curl: (52) Empty reply from server

So we’ll reach for one of the first weapons in the Gloo Edge debugging arsenal, the glooctl check utility. It runs checks against a number of Gloo resources, confirming that each is configured correctly and is properly interconnected with other resources. In this case, glooctl will detect that the VirtualService references an Upstream that does not exist:

glooctl check

You can see the checks respond:

Checking deployments... OK
Checking pods... OK
Checking upstreams... OK
Checking upstream groups... OK
Checking auth configs... OK
Checking rate limit configs... OK
Checking secrets... OK
Checking virtual services... 2 Errors!
Checking gateways... OK
Checking proxies... 1 Errors!
Error: 3 errors occurred:
	* Found virtual service with warnings: gloo-system default (Reason: warning:
  Route Warning: InvalidDestinationWarning. Reason: *v1.Upstream { gloo-system.default-httpbin-8080 } not found)
	* Virtual service references unknown upstream: (Virtual service: gloo-system default | Upstream: gloo-system default-httpbin-8080)
	* Found proxy with warnings: gloo-system gateway-proxy
Reason: warning:
  Route Warning: InvalidDestinationWarning. Reason: *v1.Upstream { gloo-system.default-httpbin-8080 } not found

The detected errors clearly identify that the VirtualService is pointed at an invalid destination.
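
These warnings come from the status that Gloo Edge writes back onto its Custom Resources (recall the Pending-to-Accepted transition we saw earlier), so you can also inspect the same information directly with kubectl:

kubectl get virtualservice default -n gloo-system -o jsonpath='{.status}{"\n"}'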

So let’s reapply the previous configuration, and then we’ll confirm that the configuration is again clean.

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/httpbin-vs-timeout.yaml

Now we get confirmation:

virtualservice.gateway.solo.io/default configured

Re-run glooctl check and observe that there are no problems. Our curl commands to the httpbin endpoint will also work again as expected:

Checking deployments... OK
Checking pods... OK
Checking upstreams... OK
Checking upstream groups... OK
Checking auth configs... OK
Checking rate limit configs... OK
Checking secrets... OK
Checking virtual services... OK
Checking gateways... OK
Checking proxies... OK
No problems detected.

OBSERVE

Finally, let’s tackle one last exercise, where we’ll learn about some simple observability tools that ship with open-source Gloo Edge.

Configure Simple Access Logging

The default Envoy logging configurations are very quiet by design. When working in extremely high-volume environments, verbose logs can potentially impact performance and consume excessive storage.

However, access logs are quite important at development and test time, and potentially in production as well. So let’s first explore how to set up and consume simple access logs.

So far we have discussed Gloo Edge custom resources like Upstreams and VirtualServices. Upstreams represent the target systems to which Gloo routes traffic. VirtualServices represent the policies that determine how external requests are routed to those targets.

With Access Logging, we will consider another Gloo Edge component called a Gateway. A Gateway is a custom resource that configures the protocols and ports on which Gloo Edge listens for traffic. For example, by default Gloo Edge will have a Gateway configured for HTTP and HTTPS traffic. More information on Gateways is available here. Gloo Edge allows you to customize the behavior of your Gateways in multiple ways. One of those ways is by adding access logs.
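
Since a Gateway is itself just another Custom Resource, you can list the ones your installation created (assuming the default gloo-system namespace; the exact names may vary by version) with kubectl:

kubectl get gateways -n gloo-system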

Let’s start by adding the simplest, default access log configuration. We’ll simply add the following options to activate access logging. You can see the full Gateway YAML that we’ll apply here.

  options:
    accessLoggingService:
      accessLog:
      - fileSink:
          path: /dev/stdout
          stringFormat: ""

Now let’s apply this change to enable access logging from our Envoy data plane.

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/gateway-basic-access-logs.yaml

As before, you can safely ignore the benign kubectl warning.

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
gateway.gateway.solo.io/gateway-proxy configured

Now let’s generate some traffic to produce some access logs.

curl http://localhost:8080/api/httpbin/get
curl http://localhost:8080/api/httpbin/delay/1
curl http://localhost:8080/api/httpbin/delay/8

You should be able to view the resulting access logs by running the kubectl logs command against the Envoy data plane pod:

kubectl logs -n gloo-system deploy/gateway-proxy

Here’s what we saw:

[2021-06-09T17:25:41.083Z] "GET /api/httpbin/get HTTP/1.1" 200 - 0 287 1 1 "-" "curl/7.64.1" "0b84d743-8809-4e7a-af72-8454a0ff09ad" "localhost:8080" "10.244.0.5:80"
[2021-06-09T17:25:41.100Z] "GET /api/httpbin/delay/1 HTTP/1.1" 200 - 0 341 1002 1002 "-" "curl/7.64.1" "2a526591-80ab-48b5-87a2-a6cb23b86733" "localhost:8080" "10.244.0.5:80"
[2021-06-09T17:25:42.119Z] "GET /api/httpbin/delay/8 HTTP/1.1" 504 UT 0 24 5000 - "-" "curl/7.64.1" "4a5cf476-84ad-4402-bae8-ec518212d3ff" "localhost:8080" "10.244.0.5:80"

Notice the default string-formatted access log entry for each of the operations we executed. You can see the paths of the operations plus the HTTP response codes: 200 for the first two, and 504 (Gateway Timeout) for the third, along with a host of other information.

While we are viewing these access logs using kubectl, you may want to export them to an enterprise log aggregator like ELK, Splunk, or Datadog. For example, Gloo Edge provides guidance for integrating with popular platforms like Datadog.

Customize Access Logging

Gloo Edge Gateways can also be configured to produce access logs in other formats like JSON, and to customize the actual content that is published to those logs. We will do both of those things by replacing our previous access log configuration in the Gateway component with this:

  options:
    accessLoggingService:
      accessLog:
      - fileSink:
          jsonFormat:
            # HTTP method name
            httpMethod: '%REQ(:METHOD)%'
            # Protocol. Currently either HTTP/1.1 or HTTP/2.
            protocol: '%PROTOCOL%'
            # HTTP response code. Note that a response code of ‘0’ means that the server never sent the
            # beginning of a response. This generally means that the (downstream) client disconnected.
            responseCode: '%RESPONSE_CODE%'
            # Total duration in milliseconds of the request from the start time to the last byte out
            clientDuration: '%DURATION%'
            # Total duration in milliseconds of the request from the start time to the first byte read from the upstream host
            targetDuration: '%RESPONSE_DURATION%'
            # Value of the "x-envoy-original-path" header (falls back to "path" header if not present)
            path: '%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%'
            # Upstream cluster to which the upstream host belongs to
            upstreamName: '%UPSTREAM_CLUSTER%'
            # Request start time including milliseconds.
            systemTime: '%START_TIME%'
            # Unique tracking ID
            requestId: '%REQ(X-REQUEST-ID)%'
          path: /dev/stdout

More information on customizing access log content is provided here.

Now we will apply this Gateway change, generate some additional traffic to our proxy, and view the resulting logs.

kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/zero-to-gateway/gateway-json-access-logs.yaml

This responds:

gateway.gateway.solo.io/gateway-proxy configured

We’ll use the same curl commands as before to generate some traffic.

curl http://localhost:8080/api/httpbin/get
curl http://localhost:8080/api/httpbin/delay/1
curl http://localhost:8080/api/httpbin/delay/8

Note that it may take a few seconds until the access log content is flushed to the logs:

kubectl logs -n gloo-system deploy/gateway-proxy | grep ^{ | jq

In the end, you should be able to see our customized JSON content, looking something like this:

{
"systemTime": "2021-06-09T17:42:02.721Z",
"upstreamName": "default-httpbin-8000_gloo-system",
"clientDuration": 2,
"httpMethod": "GET",
"requestId": "6e6f2bbb-4dd8-498e-934e-1c118153ee03",
"responseCode": 200,
"protocol": "HTTP/1.1",
"path": "/api/httpbin/get",
"targetDuration": 2
}
{
"protocol": "HTTP/1.1",
"targetDuration": 1002,
"clientDuration": 1002,
"path": "/api/httpbin/delay/1",
"requestId": "d965e488-0c12-4650-b41a-bf5f245a7397",
"systemTime": "2021-06-09T17:42:02.739Z",
"responseCode": 200,
"upstreamName": "default-httpbin-8000_gloo-system",
"httpMethod": "GET"
}
{
"systemTime": "2021-06-09T17:42:03.760Z",
"clientDuration": 5001,
"httpMethod": "GET",
"requestId": "15e35e74-6641-4a8d-9755-6cbf2a2a0680",
"protocol": "HTTP/1.1",
"responseCode": 504,
"path": "/api/httpbin/delay/8",
"upstreamName": "default-httpbin-8000_gloo-system",
"targetDuration": null
}

Explore Envoy Metrics

Envoy publishes a host of metrics that may be useful for observing system behavior. In our very modest kind cluster for this exercise, you can count over 3,000 individual metrics! You can learn more about them in the Envoy documentation here.

For this 15-minute exercise, let’s take a quick look at a couple of the useful metrics that Envoy produces for every one of our Upstream targets.

First, we’ll port-forward the Envoy administrative port 19000 to our local workstation:

kubectl port-forward -n gloo-system deploy/gateway-proxy 19000 &

This shows:

Forwarding from 127.0.0.1:19000 -> 19000
Forwarding from [::1]:19000 -> 19000
Handling connection for 19000

Then let’s view two of the metrics that are most relevant to this exercise: one that counts the number of successful (HTTP 200) requests processed by our httpbin Upstream, and another that counts the number of gateway timeout (HTTP 504) requests against that same upstream:

curl -s http://localhost:19000/stats | grep -E 'cluster.default-httpbin-8000_gloo-system.upstream_rq_(200|504)'

Which gives us:

cluster.default-httpbin-8000_gloo-system.upstream_rq_200: 7
cluster.default-httpbin-8000_gloo-system.upstream_rq_504: 2

As you can see, on my instance I’ve processed seven good requests and two bad ones. If we apply the same three curl requests as before, I’d expect the number of 200 requests to increment by two, and the number of 504 requests to increment by one:

curl http://localhost:8080/api/httpbin/get
curl http://localhost:8080/api/httpbin/delay/1
curl http://localhost:8080/api/httpbin/delay/8
curl -s http://localhost:19000/stats | grep -E 'cluster.default-httpbin-8000_gloo-system.upstream_rq_(200|504)'

And that is exactly what we see!

cluster.default-httpbin-8000_gloo-system.upstream_rq_200: 9
cluster.default-httpbin-8000_gloo-system.upstream_rq_504: 3
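
The /stats endpoint is just one part of the Envoy admin interface. While the port-forward to port 19000 is in place, you can also peek at per-cluster health and the full proxy configuration, for example:

curl -s http://localhost:19000/clusters | grep default-httpbin-8000 | head -5
curl -s http://localhost:19000/config_dump | jq '.configs[]."@type"'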

If you’d like to have more tooling and enhanced visibility around system observability, we recommend taking a look at an Enterprise subscription to Gloo Edge. You can sign up for a free trial here.

Gloo Edge Enterprise provides out-of-the-box integration with both Prometheus and Grafana, allowing you to replace curl and grep with per-Upstream generated dashboards like this.

You can learn more about creating your own Grafana dashboards from Gloo Edge metrics in this blog post.

CLEANUP

If you’d like to clean up the work you’ve done, simply delete the kind cluster where you’ve been working.
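
Before doing that, if the two kubectl port-forward processes from earlier are still running in your shell, you can stop them first (this assumes they are the only background jobs in the session):

kill %1 %2 2>/dev/null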

kind delete cluster

Learn More

In this blog post, we explored how you can get started with the open-source edition of Gloo Edge in 15 minutes on your own workstation. We walked step-by-step through the process of standing up a KinD cluster, installing an application, and then managing it with policies for routing, service discovery, timeouts, debugging, access logging, and observability. All of the code used in this guide is available on GitHub.

A Gloo Edge Enterprise subscription offers even more value to users who require:

  • integration with identity management platforms like Auth0 and Okta;
  • configuration-driven rate limiting;
  • securing your application network with WAF, ModSecurity, or Open Policy Agent;
  • an API Portal for publishing and managing OpenAPI and gRPC interfaces; and
  • enhanced observability with batteries-included Prometheus and Grafana instances.
For more information, check out the following resources.