Solving an Information Leakage Problem with the Envoy ExtProc Filter and Kubernetes Gateway API
Multiple Gloo Gateway customers have approached us with questions like this: “Our product security team wants our applications to remove any response headers from our services that indicate to a potential attacker that we’re using Envoy as the foundation of our API Gateway. In particular, we’d like to remove the server header and any header like x-envoy-upstream-service-time. How can Gloo Gateway help us with that?”
Articles like this advocate for scrubbing server responses of any artifacts that might tip a potential bad actor to the details of the server infrastructure that you’re using. They specifically call out the server header as a prime candidate for removal or obfuscation.
We’ll explore these questions using a couple of avenues in this blog post. First, we’ll survey the Envoy landscape for suitable tools to address the problem. Second, we’ll dig more deeply into a couple of these approaches. We’ll consider some built-in configuration options with Gloo Gateway that will solve the problem. We’ll walk you step-by-step through how to solve this problem using those open-source features. Then we’ll leverage one of the newer features of both Gloo Gateway and Envoy, by building a custom external processing filter to accomplish the same objective. Finally, we’ll compare the throughput and performance of these two approaches.
ExtProc vs. Other Approaches
Envoy is an open-source proxy that represents the foundation of modern approaches to API gateways and service meshes. It is maintained by a diverse community of contributors. Not surprisingly, it offers a variety of tools that we could use to solve this problem. Let’s consider a few options.
Option Zero: Rely on Applications
What’s the most common approach to these types of problems? In many organizations, issues like this are still resolved with application-specific techniques. Find a language library that allows header manipulation, integrate it into a variety of applications, then wait for them all to deploy. Years ago, I was an engineer in an organization that operated like this, and one particularly nasty security breach led to dozens of application teams investing years of labor into custom code. Today, that work could easily be avoided with a configuration-driven approach using Envoy and standard APIs like the Kubernetes Gateway API.
So while we’d like to avoid application-specific approaches to our header removal problem, let’s begin by acknowledging that they’re still quite common out there.
Option One: Built-In Gateway Configuration
For the use case we’re tackling in this blog — securing external communication by removing response headers that reveal details of Envoy processing — the base Gateway API is expressive enough to handle it. So we’ll use that as our first approach.
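As a quick illustration of that expressiveness, the standard ResponseHeaderModifier filter on an HTTPRoute can drop a response header with no vendor extensions at all. This sketch assumes the httpbin route and http gateway from the start-up guide we reference later; it is not the exact approach we take below, where we attach policy at the gateway level instead:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin
  namespace: gloo-system
spec:
  parentRefs:
  - name: http
  hostnames:
  - "www.example.com"
  rules:
  - filters:
    - type: ResponseHeaderModifier # Standard Gateway API filter
      responseHeaderModifier:
        remove:
        - x-envoy-upstream-service-time
    backendRefs:
    - name: httpbin
      port: 8000 # Assumed httpbin service port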
However, in general we want to consider approaches that allow us to solve more general problems at the gateway level using custom services. Perhaps there are specific message transformations to perform that require information from other systems in our application network. Or maybe we need to enrich the original request with inferred geo-location data, as in this example. Or perhaps we need some sort of custom, organization-specific analytics recording. In my experience, there are frequently custom data or processing requirements not captured by the baseline API standard.
Option Two: Language-Specific Filters
One option for supplying custom gateway functionality with Envoy is to use a language-specific filter, written in a language such as Lua or Golang.
One important consideration for language-specific filters is separation of concerns. By that, I mean that these filters require a binding to the Envoy proxy itself. That limits flexibility in deployment, especially for larger enterprises that require coordination across multiple teams to manage deployments.
Consider an enterprise where the proxy is owned by one organization, but the requirements for custom filter processing exist for only one application or a small set of them. The operations people don’t want to change their proxy config for this one use case, and the application people require the flexibility to manage this service separately. Neither language-specific filters nor the WebAssembly filters in the next section play nicely with that requirement.
Option Three: WebAssembly Filters
WebAssembly (or WASM) began life as a mechanism to add sandboxed custom logic inside web browsers. More recently, its popularity has grown substantially in reverse proxies like Envoy as well. Envoy now delivers WASM filters as part of its recent distributions, and Solo.io provides enterprise support for building custom filters using subsets of multiple languages, including AssemblyScript (a subset of TypeScript), C++, Rust, and TinyGo (a subset of Golang). For further information, Solo has covered WebAssembly widely in the past, including blogs here and here, product documentation, and the Hoot podcast.
Envoy WebAssembly filters share many similarities with the previous option of language-specific filters. They support custom code, but they require that functionality to be bound to the proxy, leading to the same separation-of-concerns issue we discussed earlier.
They do offer support for multiple languages, as mentioned earlier, though these often represent only subsets of the language’s complete functionality, in order to ensure that the resulting code plays nicely in the WASM “sandbox.”
Option Four: External Processing Filters – ExtProc
ExtProc is important to the Envoy community because it is a filter that solves for both language generality and deployment flexibility. Because it relies on an external gRPC service that implements a well-defined interface, it can be written in any language. In addition, there is no deployment dependency on an Envoy proxy. The ExtProc service is deployed separately either inside or outside a Kubernetes cluster. This flexibility is often critical for large enterprises.
With Envoy external processing, you implement an external gRPC processing server that can read and modify all aspects of an HTTP request or response, and you add that server to the Envoy filter chain by using the Envoy external processing (ExtProc) filter. The external service can manipulate the headers, body, and trailers of a request or response before it is forwarded to an upstream or downstream service. The request or response can also be terminated at any point.
This sequence diagram from the Gloo Gateway docs illustrates how ExtProc works in the context of an inbound request from a downstream service.
As is common with Envoy filters, there is an opportunity to activate ExtProc both on the inbound request side and the outbound response side. Whatever the order of the filter traversal during the request processing, that order will be reversed for response processing. If ExtProc is configured, the response headers, bodies, and trailers can all be sent to the gRPC service for evaluation and even modification. We’ll be using that capability in this example to remove the Envoy-specific headers before returning the response to the downstream service.
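To make that contract concrete, here is a minimal sketch of what a header-scrubbing ExtProc server might look like in Go, using the ext_proc v3 bindings from go-control-plane. This is not the service we deploy later in this post; it assumes a processing mode that sends only headers, and port 4444 is an arbitrary choice:
package main

import (
	"io"
	"log"
	"net"

	extproc "github.com/envoyproxy/go-control-plane/envoy/service/ext_proc/v3"
	"google.golang.org/grpc"
)

type headerScrubber struct {
	extproc.UnimplementedExternalProcessorServer
}

// Process handles the bidirectional stream Envoy opens for each HTTP exchange.
func (s *headerScrubber) Process(stream extproc.ExternalProcessor_ProcessServer) error {
	for {
		req, err := stream.Recv()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		resp := &extproc.ProcessingResponse{}
		switch req.Request.(type) {
		case *extproc.ProcessingRequest_ResponseHeaders:
			// On the response path, ask Envoy to strip the revealing headers.
			resp.Response = &extproc.ProcessingResponse_ResponseHeaders{
				ResponseHeaders: &extproc.HeadersResponse{
					Response: &extproc.CommonResponse{
						HeaderMutation: &extproc.HeaderMutation{
							RemoveHeaders: []string{"server", "x-envoy-upstream-service-time"},
						},
					},
				},
			}
		default:
			// Pass request headers through untouched. Bodies and trailers are
			// not sent to us under a headers-only processing mode.
			resp.Response = &extproc.ProcessingResponse_RequestHeaders{
				RequestHeaders: &extproc.HeadersResponse{},
			}
		}
		if err := stream.Send(resp); err != nil {
			return err
		}
	}
}

func main() {
	lis, err := net.Listen("tcp", ":4444") // Arbitrary port for this sketch
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	extproc.RegisterExternalProcessorServer(srv, &headerScrubber{})
	log.Fatal(srv.Serve(lis))
}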
The fact that ExtProc is deployed as a separate service from the proxy has potential downsides. For example, what happens in the event of service or network failure? Gloo Gateway takes full advantage of the configuration options provided with the ExtProc filter: if the ExtProc service is unavailable, a request can either fail automatically or proceed as if the filter were not present.
This set of values from the Gloo Settings Custom Resource shows typical proxy-wide configurations. Note that this defaults to blocking the request if the ExtProc service is unavailable. But you can “fail open” as well for non-essential tasks like data collection.
extProc:
grpcService: # ExtProc is reachable via standard service reference
extProcServerRef:
name: default-ext-proc-grpc-4444
namespace: gloo-system
filterStage: # Control where ExtProc is invoked within filter chain
stage: AuthZStage
predicate: After
failureModeAllow: false # <<< Default is to fail request if ExtProc fails >>>
allowModeOverride: false
processingMode:
requestHeaderMode: SEND # Defaults to calling ExtProc on request side...
responseHeaderMode: SKIP # ...but not calling ExtProc on response side
Another notable downside to ExtProc is the additional latency associated with each request and response requiring extra hops to the external service. This can be mitigated by using it only when required. In the example above, requests are forwarded to ExtProc (requestHeaderMode: SEND) but not responses (responseHeaderMode: SKIP).
Note also that ExtProc behavior is configurable not only at the proxy level as shown above, but all the way down to individual routes via RouteOption objects. Scoping usage down as tightly as possible may be important for managing overall proxy performance.
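For example, a RouteOption sketch that opts a single route out of ExtProc processing might look roughly like this; treat the extProc field names as an assumption and confirm them against the API reference before relying on them:
apiVersion: gateway.solo.io/v1
kind: RouteOption
metadata:
  name: skip-extproc
  namespace: gloo-system
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: httpbin # Assumed route name
  options:
    extProc:
      disabled: true # Skip the external processor for just this route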
Let’s summarize what we know about possible approaches to solving our information leakage problem.
| Approach | Effort | Solution Scalability | Problem Flexibility | Language Flexibility | Deployment Flexibility | Runtime Overhead |
|---|---|---|---|---|---|---|
| Applications | High | Low | High | High | High | Built-In to Upstream |
| Built-In Gateway | Low | High | Low | N/A | High | Proxy Native |
| Language Filters | Medium | High | High | Low | Linked to Proxy | Linked to Proxy |
| WebAssembly | Medium | High | High | Medium | Linked to Proxy | Linked to Proxy |
| ExtProc | Medium | High | High | High | External Process | External Process |
ExtProc is considered a reliable option within Envoy. The ExtProc filter has been marked as stable since Envoy v1.29, and its security posture is considered robust against untrusted downstream traffic.
Hands-On with Gloo Gateway: Comparing Approaches
We’ll take a hands-on approach for the remainder of this blog. To solve our information leakage problem, we’ll begin by using what we outlined as Option One earlier, the built-in gateway configuration options. (In the real world, this is the option we’d most likely use, given how easy it is to implement and how well it performs inside the Envoy proxy itself.) To demonstrate how we could solve more complex problems, we’ll also walk through Option Four, the ExtProc approach. In addition to comparing ease of use, we’ll also perform some simple side-by-side performance tests.
Prerequisites
You’ll need a Kubernetes cluster and associated tools, plus an instance of Gloo Gateway Enterprise to complete this guide. Note that there is a free and open source version of Gloo Gateway. It will support the first approach we take to this problem, but only the enterprise version supports the ExtProc filter integration required for the second approach.
We used Kind v0.23.0 running Kubernetes v1.30.0 on a MacBook workstation to test this guide. But any recent version of a well-behaved Kubernetes distribution should work fine.
We used Gloo Gateway Enterprise v1.17.0 as our API gateway. Use this start-up guide if you need to install it. If you don’t already have access to Gloo Gateway Enterprise, you can request a free trial here.
Once you’ve completed that start-up guide, you should have an enterprise instance of Gloo Gateway installed on a Kubernetes cluster, with the httpbin service running, plus a single HTTP Gateway and an HTTPRoute object.
You can verify that you are ready to continue by running a curl against the proxy that should route to your httpbin service. Note that this curl assumes that you have port-forwarded the Envoy HTTP listener on port 8080 to your local workstation, as shown in the start-up guide.
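If you skipped that step, a port-forward along these lines should work, assuming the proxy deployment for the http Gateway is named gloo-proxy-http as in the start-up guide:
kubectl port-forward deployment/gloo-proxy-http -n gloo-system 8080:8080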
curl -i http://localhost:8080/get -H "host: www.example.com"
If everything is configured correctly, then you should see a response similar to this:
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-type: application/json; encoding=utf-8
date: Fri, 16 Aug 2024 23:15:25 GMT
content-length: 416
x-envoy-upstream-service-time: 3
server: envoy

{
  "args": {},
  "headers": {
    "Accept": [ "*/*" ],
    "Host": [ "www.example.com" ],
    "User-Agent": [ "curl/8.6.0" ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [ "15000" ],
    "X-Forwarded-Proto": [ "http" ],
    "X-Request-Id": [ "97349761-545a-4436-be05-c696c53cab85" ]
  },
  "origin": "10.244.0.13:59836",
  "url": "http://www.example.com/get"
}
Recall that our overall objective here is to cleanse the data we return of any leaks that identify our proxy technology. As we walk through this example, we’ll explore ways to remove the two response headers, server: envoy and x-envoy-upstream-service-time, that would let a potential bad actor know that we’re using Envoy.
Approach #1: Header Manipulation
Gloo Gateway provides a library of header manipulation operations that we can apply to solve this problem. These include both adding and removing request and response headers. We will attach a VirtualHostOption Custom Resource to our gateway in order to identify the unwanted headers for all routes.
apiVersion: gateway.solo.io/v1
kind: VirtualHostOption
metadata:
name: header-manipulation
namespace: gloo-system
spec:
options:
headerManipulation:
responseHeadersToRemove: ["server", "x-envoy-upstream-service-time"]
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: http
namespace: gloo-system
Let’s apply this policy to our gateway:
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/extproc-info-leak/01-vho-resp-hdr-rm.yaml
virtualhostoption.gateway.solo.io/header-manipulation created
Note that no route-specific configuration was required to activate this policy, although we could have been more fine-grained in applying this if that were needed.
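For the record, a more narrowly scoped version might attach the same options to a single route with a RouteOption instead. This sketch assumes an HTTPRoute named httpbin:
apiVersion: gateway.solo.io/v1
kind: RouteOption
metadata:
  name: header-manipulation-route
  namespace: gloo-system
spec:
  options:
    headerManipulation:
      responseHeadersToRemove: ["server", "x-envoy-upstream-service-time"]
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: httpbin # Assumed route name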
A Bump in the Road
With the headerManipulation configuration in place, note the response we get back from curl. We see that the x-envoy-upstream-service-time header is removed as expected. But the server: envoy response header is still returned, despite the fact that we asked Gloo Gateway to remove it. This happens because Envoy does not honor control-plane configuration that manipulates that header.
curl -i http://localhost:8080/get -H "host: www.example.com"
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-type: application/json; encoding=utf-8
date: Sat, 17 Aug 2024 20:42:24 GMT
content-length: 416
server: envoy

{
  "args": {},
  "headers": {
    "Accept": [ "*/*" ],
    "Host": [ "www.example.com" ],
    "User-Agent": [ "curl/8.6.0" ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [ "15000" ],
    "X-Forwarded-Proto": [ "http" ],
    "X-Request-Id": [ "3a0b1e50-f373-4d03-9532-b3e02f4bbd3b" ]
  },
  "origin": "10.244.0.13:59622",
  "url": "http://www.example.com/get"
}
However, there is a solution to the server header problem that spans both the header manipulation and ExtProc approaches. Attaching an HttpListenerOption to a gateway allows you to specify Envoy HTTP connection manager settings, including removal or obfuscation of the server header.
apiVersion: gateway.solo.io/v1
kind: HttpListenerOption
metadata:
name: server-name
namespace: gloo-system
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: http
options:
httpConnectionManagerSettings:
serverHeaderTransformation: PASS_THROUGH # Return our server header value to client
# serverName: "im-not-telling" # Use this setting to obfuscate the header instead
We’re specifying the PASS_THROUGH value for serverHeaderTransformation, which will simply return whatever value we provide in the server header, or no header at all. In this blog, we’ll remove the header altogether. You can see the full set of alternatives in the Gloo Gateway API reference here.
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/extproc-info-leak/02-http-listener-options.yaml
httplisteneroption.gateway.solo.io "server-name" created
Let’s confirm that our httpbin response no longer contains any server header.
curl -i http://localhost:8080/get -H "host: www.example.com"
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-type: application/json; encoding=utf-8
date: Sat, 17 Aug 2024 21:09:16 GMT
content-length: 416

{
  "args": {},
  "headers": {
    "Accept": [ "*/*" ],
    "Host": [ "www.example.com" ],
    "User-Agent": [ "curl/8.6.0" ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [ "15000" ],
    "X-Forwarded-Proto": [ "http" ],
    "X-Request-Id": [ "2dd1a21b-f589-47c6-b7d9-4c18428179d0" ]
  },
  "origin": "10.244.0.13:34040",
  "url": "http://www.example.com/get"
}
There is an alternative approach that would allow you to obfuscate the server header rather than removing it. Simply specify the serverName property in the HttpListenerOption object instead, as sketched below. Envoy will then use that value in place of its default “envoy” value.
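That variant would keep the same HttpListenerOption and simply swap in the serverName setting; the value here is just an example:
apiVersion: gateway.solo.io/v1
kind: HttpListenerOption
metadata:
  name: server-name
  namespace: gloo-system
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: http
  options:
    httpConnectionManagerSettings:
      serverName: "im-not-telling" # Clients now see server: im-not-telling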
Evaluate Performance
We want to conduct a high-level performance test to compare the native Envoy filter we configured in this section with the custom ExtProc filter that we will configure next.
To evaluate performance, we will use the simple but effective web load tester called hey. It is easy to install (brew install hey on macOS) and just as easy to use. After warming up the cluster, we ran 10,000 requests with the default of 50 client threads against an untuned kind cluster running in Docker on my local MacBook workstation (Apple M2 Max, 12 cores, 64GB RAM). We achieved throughput of 7140.3 requests per second with an average response time of 6.8 milliseconds and a p99 of 13.6 milliseconds, all with no request failures.
hey -n 10000 -host www.example.com http://localhost:8080/get
Summary:
  Total:        1.4005 secs
  Slowest:      0.0694 secs
  Fastest:      0.0014 secs
  Average:      0.0068 secs
  Requests/sec: 7140.3389

  Total data:   4720000 bytes
  Size/request: 472 bytes

Response time histogram:
  0.001 [1]    |
  0.008 [7904] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.015 [2023] |■■■■■■■■■■
  0.022 [22]   |
  0.029 [0]    |
  0.035 [0]    |
  0.042 [0]    |
  0.049 [0]    |
  0.056 [6]    |
  0.063 [23]   |
  0.069 [21]   |

Latency distribution:
  10% in 0.0039 secs
  25% in 0.0049 secs
  50% in 0.0063 secs
  75% in 0.0079 secs
  90% in 0.0093 secs
  95% in 0.0105 secs
  99% in 0.0136 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0001 secs, 0.0014 secs, 0.0694 secs
  DNS-lookup: 0.0001 secs, 0.0000 secs, 0.0176 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0031 secs
  resp wait:  0.0066 secs, 0.0013 secs, 0.0524 secs
  resp read:  0.0000 secs, 0.0000 secs, 0.0016 secs

Status code distribution:
  [200] 10000 responses
Later we’ll compare these results with the same workload using our custom ExtProc filter.
Reset the Configuration
To prepare for part two of this exercise, we’ll reset the virtual host configuration to stop removing the Envoy-generated response headers.
kubectl delete -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/extproc-info-leak/01-vho-resp-hdr-rm.yaml
virtualhostoption.gateway.solo.io "header-manipulation" deleted
kubectl delete -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/extproc-info-leak/02-http-listener-options.yaml
httplisteneroption.gateway.solo.io "server-name" deleted
Re-running our curl command from earlier will confirm that the Envoy response headers are again being returned to the client.
Approach #2: Custom ExtProc Filter
In the remainder of this blog, we’ll explore step-by-step what it takes to solve this same problem for all services behind a gateway using an Envoy external processor.
We will not build our ExtProc service from scratch. Instead, we’ll use a pre-built header manipulation service that implements the gRPC interface that Envoy requires for its ext_proc filter. The source code for this example ExtProc filter is available here. While Solo does not publish a general-purpose ExtProc SDK, you can explore this repo for a more complete ExtProc filter template.
There is already a guide in the Gloo Gateway documentation to lead you through ExtProc setup and a basic test of header manipulation on request headers. If you’d like to follow along live, walk through the first three sections of that guide, which will lead you through the following steps:
- Enable ExtProc in Gloo Gateway settings.
- Deploy the sample ExtProc service.
- Test ExtProc on request headers.
Note that there is a fourth section of the ExtProc guide that shows how to configure ExtProc on a per-route basis. But that’s unnecessary for our purposes, since our goal is to remove Envoy-generated headers for all services behind our gateway.
Verify Initial ExtProc Config
Let’s use a curl to the httpbin /get endpoint in which we add an instructions header that gives our ExtProc service some directives:
- Add a header3 with a value of value3.
- Add a header4 with a value of value4.
- Remove header2 after ExtProc processing so that it isn’t passed to the upstream service.
curl -i http://localhost:8080/get \
-H "host: www.example.com:8080" \
-H "header1: value1" \
-H "header2: value2" \
-H 'instructions: {"addHeaders":{"header3":"value3","header4":"value4"},"removeHeaders":["header2"]}'
If your ExtProc is configured properly, then this curl should succeed with a response like this:
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-type: application/json; encoding=utf-8
date: Mon, 19 Aug 2024 01:04:02 GMT
content-length: 543
x-envoy-upstream-service-time: 1
server: envoy

{
  "args": {},
  "headers": {
    "Accept": [ "*/*" ],
    "Header1": [ "value1" ],
    "Header3": [ "value3" ],
    "Header4": [ "value4" ],
    "Host": [ "www.example.com:8080" ],
    "Instructions": [ "{\"addHeaders\":{\"header3\":\"value3\",\"header4\":\"value4\"},\"removeHeaders\":[\"header2\"]}" ],
    "User-Agent": [ "curl/8.6.0" ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [ "15000" ],
    "X-Forwarded-Proto": [ "http" ],
    "X-Request-Id": [ "d2953275-2193-4cce-8ac6-e5c18885d3bf" ]
  },
  "origin": "10.244.0.15:59572",
  "url": "http://www.example.com:8080/get"
}
Note from this response that the header1 passed in with the original request is echoed by the upstream process, as are header3 and header4, which were added by the ExtProc service. Note also that header2 was removed by the ExtProc service, as expected, before the request was forwarded to the upstream httpbin.
So far, so good.
Reconfigure ExtProc for Response Header Manipulation
In the ExtProc setup from the product documentation, we configured Envoy to NOT send response headers to the ExtProc service. Because nothing was being sent to ExtProc, Envoy would bypass the ExtProc service call on the response side altogether. This is a best practice for performance when response-side processing isn’t required.
Since we only require response-side processing for our use case, we’ll reverse the gateway settings to SKIP request-side processing but SEND on the response side.
We need to make a change to the header mutationRules in the ExtProc settings for our particular use case. By default, ExtProc services aren’t allowed to modify or remove Envoy-generated response headers like server and x-envoy-*. We will change the allowEnvoy setting to true so that our configuration behaves as expected.
You’ll edit just the following gateway settings so that they look like this:
spec:
extProc:
processingMode:
requestHeaderMode: SKIP # Changed from SEND in original config
responseHeaderMode: SEND # Changed from SKIP in original config
mutationRules:
allowEnvoy: true # Allow ExtProc to modify Envoy-generated response headers
Use kubectl to modify these settings:
kubectl edit settings default -n gloo-system
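If you’d rather avoid an interactive edit, a merge patch with the same field paths should work as well; this one-liner is a sketch based on the settings shown above:
kubectl patch settings default -n gloo-system --type merge \
  -p '{"spec":{"extProc":{"processingMode":{"requestHeaderMode":"SKIP","responseHeaderMode":"SEND"},"mutationRules":{"allowEnvoy":true}}}}'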
Configure Virtual Host to Pass Along Instructions Header
In this example, our ExtProc service relies on an instructions header passed in at runtime to tell it which headers to add and remove. Since we need to remove these headers on the response side, we can’t rely on simply introducing them when we issue the request, as we did in our initial ExtProc configuration. The httpbin service we’re routing to upstream does not copy request headers over as response headers, nor would we expect it to.
However, we can lean on a simple Gloo Gateway transformation to help us out. We’ll keep it simple by attaching a VirtualHostOption to our gateway that copies any request header named x-extproc-instructions over to a response header named instructions, which is what our ExtProc service is looking for.
apiVersion: gateway.solo.io/v1
kind: VirtualHostOption
metadata:
name: pass-instructions-header
namespace: gloo-system
spec:
options:
transformations:
responseTransformation:
transformationTemplate:
headers:
instructions:
text: '{{ request_header("x-extproc-instructions") }}'
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: http
namespace: gloo-system
We’ll use kubectl to create this object, and to reinstate the HttpListenerOption that configures Envoy to pass through our (lack of a) server response header to the client.
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/extproc-info-leak/02-http-listener-options.yaml
httplisteneroption.gateway.solo.io "server-name" created
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/extproc-info-leak/03-vho-cp-resp-hdr.yaml
virtualhostoption.gateway.solo.io/pass-instructions-header created
Test the ExtProc Filter
As you can see from the test below, we pass in a value of x-extproc-instructions that removes both the server and x-envoy-upstream-service-time headers from the response, as expected.
curl -i -H "host: www.example.com" \
-H 'x-extproc-instructions: {"removeHeaders":["x-envoy-upstream-service-time","server"]}' \
http://localhost:8080/get
You should see a response like this, without unwanted Envoy-revealing HTTP response headers:
HTTP/1.1 200 OK
access-control-allow-credentials: true
access-control-allow-origin: *
content-type: application/json; encoding=utf-8
date: Mon, 19 Aug 2024 22:13:21 GMT
content-length: 530
instructions: {"removeHeaders":["x-envoy-upstream-service-time","server"]}

{
  "args": {},
  "headers": {
    "Accept": [ "*/*" ],
    "Host": [ "www.example.com" ],
    "User-Agent": [ "curl/8.6.0" ],
    "X-Envoy-Expected-Rq-Timeout-Ms": [ "15000" ],
    "X-Extproc-Instructions": [ "{\"removeHeaders\":[\"x-envoy-upstream-service-time\",\"server\"]}" ],
    "X-Forwarded-Proto": [ "http" ],
    "X-Request-Id": [ "0284809e-b685-4e94-89ad-c3ec931f1cc3" ]
  },
  "origin": "10.244.0.15:52828",
  "url": "http://www.example.com/get"
}
Evaluate Performance
Finally, we will conduct the same performance evaluation with the ExtProc filter as we did with the native Envoy filter. And we will use the same environment with the same settings — 10,000 requests with the default maximum of 50 client threads — against the same warmed-up but untuned kind cluster on a local workstation.
hey -n 10000 -host www.example.com \
-H 'x-extproc-instructions: {"removeHeaders":["x-envoy-upstream-service-time","server"]}' \
http://localhost:8080/get
Summary:
  Total:        3.1121 secs
  Slowest:      0.0785 secs
  Fastest:      0.0025 secs
  Average:      0.0151 secs
  Requests/sec: 3213.3044

  Total data:   5860000 bytes
  Size/request: 586 bytes

Response time histogram:
  0.003 [1]    |
  0.010 [2465] |■■■■■■■■■■■■■■■■■■■■■■
  0.018 [4539] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.025 [2422] |■■■■■■■■■■■■■■■■■■■■■
  0.033 [464]  |■■■■
  0.041 [56]   |
  0.048 [3]    |
  0.056 [11]   |
  0.063 [10]   |
  0.071 [10]   |
  0.078 [19]   |

Latency distribution:
  10% in 0.0077 secs
  25% in 0.0102 secs
  50% in 0.0142 secs
  75% in 0.0188 secs
  90% in 0.0230 secs
  95% in 0.0257 secs
  99% in 0.0335 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0000 secs, 0.0025 secs, 0.0785 secs
  DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0082 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0018 secs
  resp wait:  0.0150 secs, 0.0025 secs, 0.0711 secs
  resp read:  0.0000 secs, 0.0000 secs, 0.0011 secs

Status code distribution:
  [200] 10000 responses
With the ExtProc filter, we achieved the same functional results as before, with throughput of 3213.3 requests per second, an average response time of 15.1 milliseconds, and a p99 of 33.5 milliseconds.
Comparing these results to the internal Envoy filter, the ExtProc filter’s throughput lagged by 55% (3213.3 vs. 7140.3 requests per second), the average response time increased by 122% (15.1 vs. 6.8 milliseconds), and the p99 increased by 146% (33.5 vs. 13.6 milliseconds).
Not surprisingly, adding an ExtProc service to our Envoy request processing can carry a substantial performance penalty. Considering that the ExtProc solution doubles the number of upstream services we’re delegating to, this performance lag actually aligns with expectations. It underscores the fact that simple tasks like this one are often best carried out on board the Envoy proxy. Use the ExtProc filter in cases where the extraordinary deployment flexibility it offers is valuable.
As we said before, this is not a rigorous benchmark, and of course you will see different results. You should always benchmark with your own workloads in your own environment before making deployment decisions.
Frequently Asked Questions
“I’ve heard some people suggest running an ExtProc service as a sidecar to the Envoy process. Is that a good idea?”
Jacob Bohanon is a Senior Software Engineer at Solo, an Envoy expert, and lead engineer on Gloo Gateway’s ExtProc support. He responds: “For most users, that’s not a good idea. The primary power of ExtProc is removing the requirement for administrative access to the gateway config to pivot on these routing decisions. For example, the infrastructure team configures the Envoy instance to have an ExtProc server, and then the application team can use that ExtProc server to manipulate traffic at their discretion. If you have the processing server deployed as a sidecar, then in order to modify it you need as much access to the cluster as you would need to modify a Lua/Golang filter or a WASM extension. So Gloo Gateway doesn’t support sidecar deployments of ExtProc services today.”
Learn More
In this blog post, we explored how to solve an information leakage problem using Gloo Gateway with native Gateway API configuration and with a custom Envoy external processing filter. We walked step-by-step through each scenario and compared performance across approaches.
All of the YAML configuration used in this guide is available on GitHub. The source code for the example ExtProc filter that we used is available here. While Solo does not publish a general-purpose ExtProc SDK, you can explore this repo for a more complete ExtProc filter template.
For more information on ExtProc and a live demonstration, watch this excellent Hoot podcast episode from Solo engineers Jacob Bohanon and Peter Jausovec.
For more information, check out the following resources.
- Explore the documentation for Gloo Gateway, header manipulation, and ExtProc.
- Request a live demo or trial for Gloo Edge Enterprise.
- See video content on the solo.io YouTube channel.
- Questions? Join the Solo.io Slack community and check out the community #gloo-edge and #wasm channels.
Acknowledgments
A big Thank You to Jacob Bohanon for working with me to understand the details of configuring Envoy ExtProc filters using Gloo Gateway.