The State of WebAssembly in Envoy Proxy

We are very excited about WebAssembly as a way to extend an Envoy-based data plane in frameworks like API gateways (Gloo) and service meshes (Istio, AppMesh, etc.). Back in December 2019 we announced tooling to improve the experience of working with WebAssembly (Wasm) called WebAssembly Hub. In March 2020 we announced the evolution of that tooling to support Istio 1.5 Wasm extensions in collaboration with Google. Most recently, we announced an OCI-compatible spec for packaging and distributing Wasm modules.

With WebAssembly Hub and its tooling, we can quickly bootstrap Wasm projects for different programming languages, then build and publish those modules to a community registry. Users can then share, search, pull down, and install a Wasm module into their framework of choice (Gloo, Istio, vanilla Envoy, etc.). However, as Wasm is still evolving upstream, so too are our WebAssembly Hub tooling, developer experience, and support for Gloo and Istio. We have many users and customers kicking the tires on extending Gloo or Istio with WebAssembly, but there are some hurdles to this experience. The purpose of this blog is to help set expectations when trying to develop Wasm modules to extend Envoy in Gloo or Istio (or any Envoy-based framework) and to prepare for the future.

Before we go into the state of Wasm in Envoy, let’s first understand the pieces that contribute to building WebAssembly modules for Envoy.

Understanding the major components in flight 

Envoy Proxy is an open-source proxy written in C++ and used in many popular service-mesh and edge-gateway implementations. Envoy has many extensibility points, including the network pipeline, access logging, stats, and more. To extend the network pipeline, for example, you can write filters that operate on the byte stream between the downstream client and the upstream backend service. The first major component we discuss here is the Envoy Wasm filter.

Envoy Wasm filter

An Envoy Wasm filter is a C++ filter that “translates” Envoy’s internal C++ API to a Wasm engine via the Wasm ABI (more on this in the next section). Envoy supports Wasm filters for both the network pipeline and the HTTP pipeline (HTTP filters). It is essentially a thin plugin that delegates to a Wasm VM and Wasm module, which means you can write the filter logic in Wasm (in theory, any language that compiles to Wasm, not just C++). Because of this model, the semantics of an Envoy Wasm filter are very similar to those of a native Envoy filter.

An important point to note about this translation and delegation to Wasm is that Wasm is a sandboxed technology. From a security standpoint this is highly desirable, but it has implications for the memory model. Any state shared between Envoy and the Wasm VM / your Wasm module (for example, when manipulating headers and/or the body) is copied from Envoy memory to Wasm memory and back. Understanding this, and the tradeoffs made when processing requests, is important (more below).

Application Binary Interface (ABI)

The Application Binary Interface (ABI) defines the contract of functions on both sides of the Wasm extension: those exposed by the host and those implemented by the Wasm module. The functions exposed by the host are “imported” into the Wasm module, while the functions implemented by the module are “exported” by it. A Wasm module implementing the functions in the ABI can be loaded into the Envoy Wasm filter and used as a “Wasm Envoy filter”.

You can think of Envoy as the operating system for the Wasm filter. In general, Wasm can only do pure computational operations. I/O operations are provided by the runtime (i.e. Envoy) in the form of functions that the Wasm module can import.

To summarize, the proxy-Wasm ABI is composed of C-like functions. One set of these functions is expected to exist in the Wasm module (for example, a function that gets called when HTTP headers arrive), and another set is provided to the Wasm module and implemented in Envoy (for example, a function to perform an HTTP callout).
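To make this concrete, here is a rough sketch, in Rust and without any SDK, of what the two sides of that contract look like. The function names and signatures are illustrative approximations of the proxy-Wasm ABI (which, as discussed later, is still changing), and a real module would also need memory-management and context-lifecycle callbacks that are omitted here.

```rust
// Illustrative sketch of the raw proxy-Wasm ABI surface; names and signatures
// are approximate, since the spec is still evolving. No SDK is used here.

// Functions implemented by the HOST (Envoy) and imported by the module.
extern "C" {
    // Log a message through Envoy's logging subsystem.
    fn proxy_log(level: u32, message_data: *const u8, message_size: usize) -> u32;
}

// Functions implemented by the MODULE and exported to the host.
// Envoy calls this when request headers arrive for a given stream context.
#[no_mangle]
pub extern "C" fn proxy_on_request_headers(_context_id: u32, _num_headers: u32) -> u32 {
    let msg = b"headers received";
    unsafe {
        proxy_log(2, msg.as_ptr(), msg.len());
    }
    // The return value tells Envoy whether to continue or pause filter processing.
    0 // "continue"
}
```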

Language SDKs

As we saw in the previous section, the ABI is a low-level set of functions, composed of Wasm primitives that are shared across all Wasm implementations. The SDK is a language-specific implementation of the ABI that makes writing a Wasm filter in that language easier (i.e. it implements the boilerplate code). The SDK is transparent to Envoy (Envoy is unaware of its existence; it just depends on the ABI), but the resulting Wasm module produced with the SDK is runnable as an extension in Envoy.

Each language has its own SDK, written to be idiomatic for that language. For example, the Rust SDK makes it easier to write Rust filters in an idiomatic form (without unsafe blocks, etc.).
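As a rough sketch of what that looks like in practice, here is a minimal HTTP filter written against the proxy-wasm Rust SDK. Treat it as illustrative: the entry point and trait method signatures have shifted between SDK and ABI versions, so the exact shape depends on which versions you target.

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

// Module entry point: register a factory that creates one HttpContext per HTTP stream.
#[no_mangle]
pub fn _start() {
    proxy_wasm::set_log_level(LogLevel::Info);
    proxy_wasm::set_http_context(|_, _| -> Box<dyn HttpContext> { Box::new(HelloFilter) });
}

struct HelloFilter;

impl Context for HelloFilter {}

impl HttpContext for HelloFilter {
    // Called by Envoy (through the ABI) when request headers arrive.
    fn on_http_request_headers(&mut self, _num_headers: usize) -> Action {
        // Setting a header copies the value across the sandbox boundary back into Envoy.
        self.set_http_request_header("x-hello", Some("wasm"));
        Action::Continue
    }
}
```

The SDK hides the raw pointer passing from the earlier ABI sketch; the filter just implements trait methods and returns an Action telling Envoy whether to continue or pause processing.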

Developer experience tools

The wasme CLI tool assists developers in quickly bootstrapping new Wasm projects for languages like C++, AssemblyScript, Rust, and others, takes the hassle out of lining up Envoy proxy versions, ABIs, and SDKs, and automatically sets up the correct toolchain to build and deploy Wasm modules. The wasme CLI tool can be used to deploy Wasm modules to Gloo, Istio, or vanilla Envoy (based on envoyproxy/envoy-wasm).

Wasm VMs

When a Wasm module is loaded into Envoy, it runs in its own sandboxed Wasm VM. There are a couple of different options currently available:

  • Null VM
    • Uses the same ABI but gets compiled natively into Envoy, so no Wasm code is produced. The null VM’s API is similar to the C++ SDK’s API.
  • V8
    • The Wasm VM from Chrome; it loads fast but does not pre-compile to native code. Tests done by Google estimate performance here at around 50% of native. That sounds like a big hit, but it can still be a win for use cases that would otherwise require a call-out, like external auth.
  • WAVM
    • A VM that pre-compiles Wasm to native machine code. It loads more slowly but presumably runs faster.

State of these components

There are three main areas of work in getting Wasm into upstream Envoy Proxy and making it productive for users. The first is getting upstream Envoy to accept the required API, build, and implementation changes. The second is getting some stability around the ABI. The last is stabilizing the language-specific SDKs/ABI implementations and toolchains (wasme, etc.).

Up until now, the work required to add Wasm to Envoy Proxy has happened on a separate fork of Envoy in the envoyproxy/envoy-wasm repo. If you follow the envoyproxy/envoy-wasm repo, you can see that it is actively developed and that there are still a few pending bugs and design decisions being worked out. Bits and pieces required to get Wasm into Envoy have been committed to the upstream Envoy repo over the last few months, but work still continues. You can follow this PR to get the latest. In short, getting Wasm into Envoy is headed in the right direction, but it is still not in upstream Envoy as of this writing.

As for the ABI, if you follow the ABI spec repo, you can see that the next version of the spec is very different from the existing one (though the filter semantics are mostly the same). Work continues to solidify the ABI. This has implications for the SDKs for the various languages: if the ABI is still in flux, especially between versions of Envoy, each SDK is limited to whatever version of the ABI it implements.

Ultimately, Envoy support for Wasm is still a “work in progress,” but in parallel, work on things like the Application Binary Interface, the various language SDKs, specs, and developer experience tooling continues.

What to watch out for currently

As mentioned earlier, at Solo.io we have quite a few folks kicking the tires on Wasm in real enterprise contexts, with the intent to solve challenges in the following areas:

  • Security interchange/backward compatibility
  • Request/Response manipulation
  • Extending capabilities of JWT handling
  • Custom Authorization
  • Data loss prevention

Some of these folks have pushed their learnings to WebAssembly Hub for others to consume. The challenges of creating these Wasm extensions for the POCs they’re running center on the following things to watch out for:

  • Unexpected performance observations
  • Expensive operations for certain use cases
  • Challenge with reusing existing libraries

You must consider that some runtimes require a minimum amount of memory. Since each worker thread in Envoy has its own Wasm VM, this may impact the proxy’s memory usage. Also note that Wasm has a 32-bit memory space, which may put an upper limit on the number of filters that can execute concurrently.

Filters that perform transformations on the request body need to take into account that reading the body requires copying data into the Wasm VM, and writing it requires copying the data back to Envoy, which may have an impact on performance.
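For illustration, here is roughly what a body-transforming filter looks like with the proxy-wasm Rust SDK (again, signatures vary across SDK/ABI versions, so treat this as a sketch). Every read pulls a copy of the buffered body into the Wasm VM’s linear memory, and every write copies the transformed bytes back out to Envoy.

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::Action;

#[no_mangle]
pub fn _start() {
    proxy_wasm::set_http_context(|_, _| -> Box<dyn HttpContext> { Box::new(BodyFilter) });
}

struct BodyFilter;

impl Context for BodyFilter {}

impl HttpContext for BodyFilter {
    fn on_http_request_body(&mut self, body_size: usize, end_of_stream: bool) -> Action {
        if !end_of_stream {
            // Wait until the whole body has been buffered before transforming it.
            return Action::Pause;
        }
        // Copies the buffered body from Envoy's memory into the Wasm VM...
        if let Some(body) = self.get_http_request_body(0, body_size) {
            let upper = body.to_ascii_uppercase();
            // ...and copies the transformed bytes back out to Envoy.
            self.set_http_request_body(0, body_size, &upper);
        }
        Action::Continue
    }
}
```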

Lastly, if you’re trying to build or reuse code that communicates over the network (or performs any other external I/O), you cannot simply compile it to Wasm and run it inside Envoy. These use cases need to express their I/O in terms of the ABI that Envoy presents, such as the proxy_dispatch_http_call function.
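As a hedged sketch of what that looks like through the proxy-wasm Rust SDK (which wraps the underlying ABI call; names and signatures may differ across versions), an HTTP call-out is dispatched asynchronously to an upstream cluster, and the response arrives later through a callback. The "auth-cluster" upstream below is a hypothetical cluster that would have to be defined in Envoy’s configuration.

```rust
use std::time::Duration;

use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::Action;

#[no_mangle]
pub fn _start() {
    proxy_wasm::set_http_context(|_, _| -> Box<dyn HttpContext> { Box::new(AuthFilter) });
}

struct AuthFilter;

impl HttpContext for AuthFilter {
    fn on_http_request_headers(&mut self, _num_headers: usize) -> Action {
        // Ask Envoy to perform the call-out on our behalf; "auth-cluster" is a
        // hypothetical upstream cluster configured in Envoy.
        self.dispatch_http_call(
            "auth-cluster",
            vec![(":method", "GET"), (":path", "/check"), (":authority", "auth")],
            None,     // no request body
            vec![],   // no trailers
            Duration::from_secs(1),
        )
        .unwrap();
        // Pause the request until the call-out response comes back.
        Action::Pause
    }
}

impl Context for AuthFilter {
    // Envoy invokes this when the call-out completes.
    fn on_http_call_response(&mut self, _token: u32, _headers: usize, _body_size: usize, _trailers: usize) {
        // Inspect the response here, then let the paused request continue.
        self.resume_http_request();
    }
}
```

Pausing the stream while the call-out is in flight is what makes this pattern usable for things like custom authorization without blocking Envoy’s worker thread.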

Where do we go from here?

We hope that with this information and context, you can make informed decisions about how to proceed with Wasm and extend Envoy-based frameworks. Our advice is to dig in and invest in learning Wasm technology, but proceed with the right expectations: this is still evolving upstream (from getting into Envoy itself, to the ABI, to the SDKs and developer experience) and is not yet production-ready. In fact, the experience of POCing this technology will likely prove that there are still some gaps. That’s OK!

The best place to get started with the correct versions of Envoy, ABIs, and SDKs is the wasme CLI tool. With wasme, you can very quickly bootstrap Wasm projects targeting the correct ABIs for various programming languages. You can get the latest wasme on the releases page. For more information on other tools, SDKs, the Wasm OCI specification, and more, please check the Solo.io wasm repo and the Proxy-Wasm repos.