What Is Envoy Proxy?
As organizations adopt microservices, a basic part of the architecture is a network layer 7 (application layer) proxy. In large microservices environments, L7 proxies provide observability, resiliency, and routing, making it transparent to external services that they are accessing a large network of microservices.
The Envoy Proxy is an open source, high-performance, small-footprint edge and service proxy. It works similarly to software load balancers like NGINX and HAProxy. It was originally created by Lyft, and is now a large open source project with an active base of contributors. The project has been adopted by the Cloud Native Computing Foundation (CNCF) and is now at Graduated project maturity.
Envoy Proxy: Terminology, Features and Architecture
Glossary of Technical Terms
Before discussing the architecture, here are some crucial concepts and their definitions:
| Concept | Definition |
|---|---|
| Host | An entity capable of network communication, such as a mobile phone or an internet server. It can also refer to a logical network application; a single piece of physical hardware can run multiple hosts, as long as each can be addressed individually. |
| Downstream | A host that connects to an Envoy proxy. It sends requests and receives responses. |
| Upstream | A host that receives an Envoy proxy's requests and connections and returns responses. |
| Listener | A network location that downstream clients can connect to. An Envoy proxy exposes one or more listeners, each with a unique name, through which downstream hosts connect. |
| Cluster | A collection of upstream hosts that an Envoy proxy connects to. Envoy's load balancing policy determines which cluster member receives each request. |
| Mesh | A collection of coordinated hosts that form a network topology. |
| Runtime configuration | Envoy's real-time configuration system, which allows server settings to be modified without restarting Envoy proxies. |
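To make these terms concrete, here is a skeleton Envoy bootstrap that maps each concept to its place in the configuration. All names, addresses, and ports below are illustrative examples, not defaults:

```yaml
# Illustrative bootstrap skeleton; names and addresses are examples only.
admin:                          # admin interface used by the runtime configuration system
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }
static_resources:
  listeners:                    # network locations that downstream hosts connect to
  - name: example_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains: []           # filters elided in this sketch
  clusters:                     # collections of upstream hosts
  - name: example_cluster
    type: STRICT_DNS
    load_assignment:
      cluster_name: example_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: upstream.example.internal, port_value: 8080 }
```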
Envoy Proxy Architecture
Envoy creates a transparent network layer that helps operators troubleshoot and manage cloud-native applications. It is a self-contained executable that runs alongside the application, is easy to deploy, and works with services written in any programming language.
An Envoy proxy is an L3/L4 proxy with a pluggable list of filters that can implement various TCP/UDP proxy tasks. It also supports HTTP L7 filters and TLS termination, since HTTP is crucial for cloud-native applications. It offers advanced load balancing features such as circuit breaking and automatic retries, and can route gRPC requests and responses. Its configuration is manageable through an API that can push updates dynamically, even while the cluster is running.
Envoy runs as a single process with a multi-threaded architecture. The primary thread handles coordination tasks, while worker threads handle listening, filtering, and forwarding. Once a listener accepts an incoming connection, that connection stays bound to a single worker thread for its lifetime.
Hence, Envoy is largely single-threaded from the perspective of any one connection, with a small amount of more complex code handling coordination between the worker threads. It is advisable to set the number of worker threads equal to the number of hardware threads on the system.
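As a sketch of that recommendation, the worker-thread count can be set explicitly with Envoy's `--concurrency` command line option (the configuration path shown here is just an example):

```shell
# Run Envoy with exactly 4 worker threads; by default it starts one per hardware thread.
envoy -c /etc/envoy/envoy.yaml --concurrency 4
```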
Use Cases for Envoy Proxy
There are two main uses for Envoy proxy: as a sidecar in a service mesh (service proxy) and as an API gateway.
Envoy as a Sidecar
Envoy can serve as an L3/L4 sidecar proxy in a service mesh, enabling communication between services. The Envoy instance shares the lifecycle of its parent application, which makes it possible to extend applications across multiple technology stacks, including legacy apps that don't offer extensibility.
All application requests traverse Envoy through the following listeners:
- Ingress listeners—take requests from other services in a service mesh and forward them to the local application related to the Envoy sidecar instance.
- Egress listeners—take requests from the local application related to the Envoy sidecar instance and forward them to other services in the network.
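A minimal sidecar sketch with both listener types might look like the following. The cluster names, ports, and addresses are hypothetical, and TCP proxy filters are used for brevity:

```yaml
# Hypothetical sidecar: one ingress and one egress listener.
static_resources:
  listeners:
  - name: ingress                # mesh traffic -> local application
    address:
      socket_address: { address: 0.0.0.0, port_value: 15006 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress
          cluster: local_app
  - name: egress                 # local application -> other mesh services
    address:
      socket_address: { address: 127.0.0.1, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: egress
          cluster: mesh_services
  clusters:
  - name: local_app              # the application next to this sidecar
    type: STATIC
    load_assignment:
      cluster_name: local_app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
  - name: mesh_services          # other services in the mesh
    type: STRICT_DNS
    load_assignment:
      cluster_name: mesh_services
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: mesh.example.internal, port_value: 80 }
```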
The picture below shows how the Envoy proxy can attach to the application to enable communication using ingress and egress listeners.
Envoy as API Gateway
Envoy proxy can serve as an API gateway, or 'front proxy,' that sits between clients and the services behind it. Envoy accepts inbound traffic, collates the information in each request, and directs it to its destination within the service mesh. The image below demonstrates the use of Envoy as a 'front proxy' or 'edge proxy' that receives requests from other networks. As an API gateway, the Envoy proxy is responsible for functionality such as traffic routing, load balancing, authentication, and monitoring at the edge.
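As an illustration of edge routing, a path-based route configuration fragment (which would sit inside an HTTP connection manager filter; the domain and service names are hypothetical) might look like this:

```yaml
# Hypothetical edge routing: different URL prefixes go to different clusters.
route_config:
  name: edge_routes
  virtual_hosts:
  - name: public_api
    domains: ["api.example.com"]
    routes:
    - match: { prefix: "/users" }
      route: { cluster: users_service }
    - match: { prefix: "/orders" }
      route: { cluster: orders_service }
```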
Learn more in our detailed guide to Envoy API gateway
Related content: Read our guide to:
- Envoy Grafana
- Envoy Wasm
- Envoy proxy examples
- Envoy proxy vs NGINX (coming soon)
Envoy Proxy Tutorial
This tutorial will show you how to get started with Envoy Proxy and customize its configuration.
1. Pull Docker Images
The Envoy project provides multiple pre-built Docker images. They are available for amd64 and arm64 architectures.
To pull the Envoy version of pre-built Docker images:
1. Use the following commands to pull the images:
docker pull envoyproxy/envoy:v1.25.1
docker run --rm envoyproxy/envoy:v1.25.1 --version
2. If required, use the following command to view Envoy's command line options:
docker run --rm envoyproxy/envoy:v1.25.1 --help
2. Use Envoy with Demo Configuration
To run Envoy with the demo configuration:
1. Use the following command to tell Envoy the path to its configuration:
envoy -c demo-envoy-config.yaml
2. Use the following command to send a request to http://localhost:10000 and check that Envoy is proxying:
curl -v localhost:10000
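The steps above assume a demo-envoy-config.yaml file already exists. A minimal example of such a file, listening on port 10000 and proxying everything to one arbitrary public upstream, might look like this:

```yaml
# demo-envoy-config.yaml (example): proxy localhost:10000 to www.envoyproxy.io.
static_resources:
  listeners:
  - name: demo_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: demo
          route_config:
            name: demo_route
            virtual_hosts:
            - name: all
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: demo_upstream
                  host_rewrite_literal: www.envoyproxy.io
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: demo_upstream
    type: LOGICAL_DNS
    load_assignment:
      cluster_name: demo_upstream
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: www.envoyproxy.io, port_value: 443 }
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: www.envoyproxy.io
```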
3. Override Existing Configuration
The default configuration may be overridden using the --config-yaml flag. Its contents are merged with the main configuration, but the flag may only be specified once.
To override the existing configuration:
1. Create a file named demo-config-override.yaml and save the following in it:
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9545
2. Use the following command to start the Envoy server with the override configuration:
docker run --rm -it \
  -p 9545:9545 \
  -p 10000:10000 \
  envoyproxy/envoy:v1.25.1 \
  -c /etc/envoy/envoy.yaml \
  --config-yaml "$(cat demo-config-override.yaml)"
3. Use the following command to access the Envoy admin interface:
curl -v localhost:9545
Learn more in our detailed guide to Advanced Rate Limiting with Envoy Proxy
Envoy Proxy with Gloo Mesh or Gloo Gateway
As a data plane, Envoy can serve multiple important functions:
- Proxy for Istio service mesh
- Kubernetes ingress
- API gateway
- WebAssembly integration
- GraphQL integration
Envoy has proven to be a highly scalable and flexible proxy, especially in Kubernetes and cloud-native environments. This is why Solo.io chose Envoy as the consistent data plane in Gloo Edge, Gloo Gateway, and Gloo Mesh. This consistent use of Envoy enables companies to learn one technology for traffic filtering and security, and apply that knowledge to multiple use cases.
Get started with Envoy Proxy in Gloo Platform today.
