What is Envoy Proxy?
Envoy Proxy is an open source edge and service proxy designed for Kubernetes- and cloud-native applications. Originally developed at Lyft and later open sourced to the Cloud Native Computing Foundation (CNCF), Envoy is a high-performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures. Envoy Proxy is used as the base data plane for technologies like Gloo and Istio.
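To make the data-plane role concrete, here is a minimal, illustrative Envoy bootstrap that listens on one port and routes all HTTP traffic to a single upstream cluster. The cluster name and upstream address are placeholders, not part of any real deployment:

```yaml
# Illustrative Envoy bootstrap (v3 API): one HTTP listener, one upstream cluster.
static_resources:
  listeners:
  - name: ingress_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend            # placeholder cluster name
    type: STRICT_DNS
    connect_timeout: 5s
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.example.local, port_value: 80 }
```

In practice this static configuration is replaced by dynamic xDS configuration delivered by a control plane such as Gloo or Istio, which is what makes Envoy a “universal data plane.”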
What is Gloo Edge?
Gloo Edge is a feature-rich, Kubernetes-native API gateway and ingress controller built on Envoy Proxy to facilitate and secure application traffic at the edge. Gloo Edge is exceptional in its function-level routing; its support for legacy apps, microservices and serverless; its discovery capabilities; its security features; and its tight integration with leading open-source projects like Prometheus and Grafana. Gloo Edge is uniquely designed to support hybrid applications, in which multiple technologies, architectures, protocols, and clouds can coexist.
How does Gloo Edge work?
Gloo Edge aggregates back-end services and provides function-to-function translation for clients, allowing decoupling from back-end APIs. Gloo Edge is built on an Envoy Proxy-powered data plane that draws its policies from a dynamic control plane that allows declarative, Kubernetes-native specification of all traffic policies. See our Gloo Edge architecture description in the docs for more detail.
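As a sketch of that declarative, Kubernetes-native model, a Gloo Edge route can be expressed as a VirtualService custom resource; the upstream name and namespace below are illustrative:

```yaml
# Illustrative Gloo Edge VirtualService: route /petstore traffic to one upstream.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["*"]           # match any host
    routes:
    - matchers:
      - prefix: /petstore    # requests under this path...
      routeAction:
        single:
          upstream:
            name: default-petstore-8080   # ...go to this (example) upstream
            namespace: gloo-system
```

The Gloo Edge control plane watches resources like this one and translates them into Envoy configuration on the data plane.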
What are common use cases for Gloo Edge?
Gloo Edge is an API gateway: it intercepts all your incoming traffic and routes it to the appropriate backend service according to your rules and policies. Gloo Edge is also a Kubernetes ingress controller, which controls access to the services inside a Kubernetes cluster. An ingress is a collection of HTTP and HTTPS routes that can be configured to provide externally reachable URLs, load balancing, SSL termination, and name-based virtual hosting. An ingress controller manages the configuration for each ingress resource. Gloo Edge also works well for directing, securing, and observing application traffic to microservices and other distributed applications. These workloads can run not just on Kubernetes, but also on VMs, bare-metal servers, or even AWS Lambda serverless functions.
Where can I run Gloo Edge?
Gloo Edge runs anywhere you can run Kubernetes, including AWS (Amazon EC2, Amazon EKS), Azure (Compute, AKS), Google Cloud (Compute Engine, GKE, Anthos), Red Hat OpenShift, VMware Tanzu Kubernetes Grid, on VMs (virtual machines), and on bare-metal servers.
What is the difference between Envoy Proxy, Gloo Edge, and Gloo Edge Enterprise?
While open source Envoy Proxy has become very popular as an API gateway, the open source version of Gloo Edge adds more functionality around security, reliability, unified controls, and ease of use. If customers are looking for the most comprehensive solution for enterprise production workloads, Gloo Edge Enterprise goes much further. See our detailed feature comparison on the Gloo Edge product page.
How do I see a demo, request a trial, or get pricing for Gloo Edge Enterprise?
What levels of help and support are available?
See our support page for options including enterprise production support for our products, open source Istio, and community support provided on Slack and GitHub.
How should I choose between Gloo Mesh Gateway and Gloo Edge?
If you are looking for an API gateway and Kubernetes ingress management without having already invested in Istio, Gloo Edge is a great choice. If you would like to build and manage Istio-based service meshes, Gloo Mesh and its Gloo Mesh Gateway are the place to start.
What is Istio?
Istio is an open source, platform-independent service mesh that provides traffic management, policy enforcement, and telemetry collection. Using Envoy Proxy as its sidecar proxy, Istio supports Kubernetes-based deployments today and is being adapted by the community to other environments. Istio is the most popular service mesh, and is used by companies such as Airbnb, Salesforce, T-Mobile, FICO, eBay, and many more.
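As a concrete illustration of the sidecar model, Istio can automatically inject an Envoy sidecar proxy into every pod in a namespace with a single label (the namespace name here is illustrative):

```yaml
# Labeling a namespace opts its pods in to Istio sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: demo                 # example namespace
  labels:
    istio-injection: enabled # Istio injects an Envoy sidecar into new pods here
```

Any pod subsequently created in this namespace gets an Envoy proxy alongside the application container, which is how the mesh intercepts and manages service-to-service traffic.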
What is Gloo Mesh?
Gloo Mesh is a Kubernetes-native management plane that enables configuration and operational management of multiple heterogeneous service meshes across multiple clusters through a unified API. Gloo Mesh ships an upstream distribution of Istio with long-term support (LTS), N-4 version support, FIPS readiness, and enterprise SLAs. It is engineered with a focus on its utility as an operational management tool, providing both graphical and command-line UIs, observability features, and debugging tools.
How does Gloo Mesh work?
Gloo Mesh is a management plane that simplifies operations and workflows of Istio installations across multiple clusters and deployment footprints. With Gloo Mesh, you can install, discover, and operate Istio across your enterprise, on-premises, or in the cloud, even across heterogeneous service mesh implementations. For more info, see Gloo Mesh Concepts in our docs.
What are common use cases for Gloo Mesh?
Gloo Mesh is primarily used to build, secure, manage, and observe Istio across one or more clusters, environments, and even multiple clouds. Istio solves challenges in building and operating microservices applications by managing networking and providing insight into and control over your distributed application’s behavior. Gloo Mesh can also deliver zero-trust networking, a security model that trusts no person or system inside or outside your network, verifies identity before establishing trust, and grants only the minimal access needed to complete a particular function.
Does Gloo Mesh add value in a single-cluster environment?
Yes! Gloo Mesh makes Istio easier to manage by abstracting away complicated Istio configuration. It prevents users from making common Istio mistakes that could potentially cause outages. Even within a single cluster, Gloo Mesh brings security features (like role-based access control, certificate management, and FIPS certification), reliability features (like dynamic scaling to thousands of nodes, published SLAs, long-term N-4 version support, and patch backporting), and extensibility features like WebAssembly (Wasm).
Where can I run Gloo Mesh?
Gloo Mesh runs anywhere you can run Kubernetes, including AWS (Amazon EC2, Amazon EKS), Azure (Compute, AKS), Google Cloud (Compute Engine, GKE, Anthos), Red Hat OpenShift, VMware Tanzu Kubernetes Grid, on VMs (virtual machines), and on bare-metal servers.
Is Gloo Mesh FIPS-ready?
Yes! Gloo Mesh Enterprise has been verified and can be supported as part of your overall Federal Information Processing Standards (FIPS) technology solution, alongside appropriate people and process policies. This covers data plane and control plane certification, as well as a distroless build option.
What is the difference between Istio, Gloo Mesh, and Gloo Mesh Enterprise?
While open source Istio has become very popular as a service mesh, the open source version of Gloo Mesh adds more functionality around security, reliability, unified controls, and ease of use. If customers are looking for the most comprehensive solution for enterprise production workloads, Gloo Mesh Enterprise goes even further. See our detailed feature comparison on the Gloo Mesh product page.
How do I see a demo, request a trial, or get pricing for Gloo Mesh Enterprise?
What levels of help and support are available?
See our support page for options including enterprise production support for our products, open source Istio, and community support provided on Slack and GitHub.
How should I choose between Gloo Mesh and Gloo Edge?
If you are looking for an API gateway and Kubernetes ingress management for north/south traffic, Gloo Edge is a great choice. If you would like to build and manage an Istio service mesh for east/west traffic, Gloo Mesh is the place to start.
What is an API developer portal?
An API developer portal is part of API management; it provides a way to catalog your APIs and publish them to developers, the community, partners, and customers. The UI makes it easy to catalog, manage, find, share, and track usage of APIs, which can support chargeback or monetization.
What is Gloo Portal?
Gloo Portal provides a framework for managing the definitions and documentation of APIs, API client identity, and API policies. Vendors of API products can leverage the Gloo Portal to secure, manage, and publish their APIs independent of the operations used to manage networking infrastructure. Powered by the OpenAPI and gRPC specifications, the Gloo Portal provides policy, traffic control, and a web UI for consuming APIs provided by services deployed in and outside of Kubernetes.
What are microservices?
Microservices is an architectural pattern in which an application is composed of many small, independent services that are loosely coupled and independently deployable. Microservices have also been called distributed applications, based on how they are typically deployed, and containerized applications, a term popularized by the adoption of Docker and Kubernetes for these types of workloads.
What is an API gateway?
An API gateway directs requests from users or applications on the edge to the appropriate applications. The API gateway most often handles “ingress”, as it’s the entry point for inbound connections and responses, also called “North-South traffic.” For example, your operating environment has to manage different types of incoming connection requests, such as those from a mobile app, a web portal, or other internal applications. An API gateway can also handle connections coming from different operating environments, be they on-premises, hybrid, one cloud, or multi-cloud. The open source project Envoy Proxy is the most popular API gateway for Kubernetes and cloud environments, as it was designed to be modern and native, not a retrofit of older, legacy API software.
What is a service mesh?
A service mesh is a cloud-native application networking pattern created to address the challenges that arise as applications evolve from static monolithic workloads to distributed microservices. In a microservices application, potentially hundreds of loosely coupled services are networked together, making service-to-service communication critical to a properly functioning application. In a service mesh, the application network is abstracted out of the business logic and handled by a set of proxies, one paired with each service.
What is North-South traffic?
This direction of traffic is defined as client-to-server traffic: traffic between clients or end users outside the datacenter and the network inside the datacenter. Ingress and egress traffic both fall within the North-South definition. Incoming traffic to a specific cluster is referred to as ingress; traffic leaving a cluster for an external service is referred to as egress.
What is East-West traffic?
This direction of traffic is defined as the service-to-service communication that occurs within the cluster and does not leave your network. In a microservices architecture, this is how the different services are networked together to form a complete application. Technologies like service mesh are being developed to help solve the challenges of enabling, securing, and controlling intra- and cross-cluster communication.
What is Kubernetes?
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
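For example, a minimal Deployment groups several replicas of a container into one logical unit that Kubernetes deploys, scales, and manages for you (the names and image below are illustrative):

```yaml
# Illustrative Deployment: Kubernetes keeps three replicas of this container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                       # desired number of pods; Kubernetes reconciles to this
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
      - name: web
        image: nginx:1.25           # example container image
        ports:
        - containerPort: 80
```

If a pod crashes or a node fails, Kubernetes automatically replaces the pod to restore the declared replica count.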
What is Kubernetes Ingress?
Ingress is a concept for handling incoming traffic to a cluster running your application services. In a Kubernetes context, an Ingress resource declares the routing rules, and an ingress controller is the component that fulfills them. Kubernetes Ingress handles only traffic incoming to a specific Kubernetes cluster. Ingress is an example of North-South traffic in a single cluster, but it can be East-West too in a multi-cluster environment.
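A basic Ingress resource looks like the following sketch; the hostname and service are examples, and the ingress-class annotation assumes Gloo Edge is installed as the cluster's ingress controller:

```yaml
# Illustrative Ingress: route HTTP traffic for one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: gloo   # assumption: Gloo Edge acts as the ingress controller
spec:
  rules:
  - host: app.example.com               # example externally reachable hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web                   # example backend Service
            port:
              number: 80
```

The ingress controller watches resources like this and configures the underlying proxy to route matching requests into the cluster.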
What is WebAssembly (Wasm)?
Wasm is the emerging standard for building custom filters in Envoy. Per the WebAssembly page, [Wasm] “is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.” In practice, Wasm provides an easy way to build custom filters and rules to manage application traffic and control specific behavior.
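As a sketch of how such a filter is wired in, a compiled Wasm module can be loaded into Envoy's HTTP filter chain; the filter name and module path below are hypothetical:

```yaml
# Illustrative fragment of an Envoy HTTP filter chain loading a Wasm module.
http_filters:
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      name: my_header_filter                 # hypothetical filter name
      vm_config:
        runtime: envoy.wasm.runtime.v8       # run the module on Envoy's V8 Wasm VM
        code:
          local:
            filename: /etc/envoy/filter.wasm # hypothetical path to the compiled module
- name: envoy.filters.http.router            # the router filter stays last in the chain
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

Because the module is a portable binary, the same filter can be written in any language that compiles to Wasm and deployed without rebuilding Envoy itself.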