What is Envoy Proxy?
Envoy Proxy is an open source edge and service proxy designed for Kubernetes- and cloud-native applications. Originally developed at Lyft and later donated to the Cloud Native Computing Foundation (CNCF), Envoy is a high-performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” for large microservice “service mesh” architectures. Envoy Proxy is used as the base data plane for technologies like Gloo and Istio.
What is Gloo Edge?
Gloo Edge is a feature-rich, Kubernetes-native API gateway and ingress controller built on Envoy Proxy to facilitate and secure application traffic at the edge. Gloo Edge is exceptional in its function-level routing; its support for legacy apps, microservices, and serverless; its discovery capabilities; its security features; and its tight integration with leading open source projects like Prometheus and Grafana. Gloo Edge is uniquely designed to support hybrid applications, in which multiple technologies, architectures, protocols, and clouds can coexist.
How does Gloo Edge work?
Gloo Edge aggregates back-end services and provides function-to-function translation for clients, allowing decoupling from back-end APIs. Gloo Edge is built on an Envoy Proxy-powered data plane that draws its policies from a dynamic control plane that allows declarative, Kubernetes-native specification of all traffic policies. See our Gloo Edge architecture description in the docs for more detail.
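As a sketch of that declarative model, a Gloo Edge VirtualService resource can route a path prefix to a discovered Upstream. The names, namespace, and path below are illustrative, not a prescribed configuration:

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - '*'                  # accept any host
    routes:
      - matchers:
          - prefix: /petstore   # requests under this path...
        routeAction:
          single:
            upstream:
              name: default-petstore-8080   # ...go to this discovered service
              namespace: gloo-system
```

Because the policy is a Kubernetes custom resource, it can be versioned, reviewed, and applied with the same GitOps workflows as the rest of your cluster configuration.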
What are common use cases for Gloo Edge?
Gloo Edge is an API gateway: it intercepts all your incoming traffic and routes it to the appropriate backend service according to your rules and policies. Gloo Edge is also a Kubernetes ingress controller, which controls access to the services inside a Kubernetes cluster. An ingress is a collection of HTTP and HTTPS routes that can be configured to provide externally reachable URLs, load balancing, SSL termination, and name-based virtual hosting. An ingress controller manages the configurations for each ingress resource. Gloo Edge also works well for directing, securing, and observing application traffic to microservices and other distributed applications. These services don't have to run on Kubernetes; they can also run on VMs, bare-metal servers, or even as AWS Lambda serverless functions.
Where can I run Gloo Edge?
Gloo Edge runs anywhere you can run Kubernetes, including AWS (Amazon EC2, Amazon EKS), Azure (Compute, AKS), Google Cloud (Compute Engine, GKE, Anthos), Red Hat OpenShift, VMware Tanzu Kubernetes Grid, on VMs (virtual machines), and on bare-metal servers.
What is the difference between Envoy Proxy, Gloo Edge, and Gloo Edge Enterprise?
While open source Envoy Proxy has become very popular as an API gateway, the open source version of Gloo Edge adds functionality around security, reliability, unified controls, and ease of use. For customers looking for the most comprehensive solution for enterprise production workloads, Gloo Edge Enterprise goes much further. See our detailed feature comparison on the Gloo Edge product page.
How do I see a demo, request a trial, or get pricing for Gloo Edge Enterprise?
What levels of help and support are available?
See our support page for options including enterprise production support for our products, open source Istio, and community support provided on Slack and GitHub.
How should I choose between Gloo Mesh Gateway and Gloo Edge?
If you are looking for an API gateway and Kubernetes ingress management without having already invested in Istio, Gloo Edge is a great choice. If you would like to build and manage service meshes with Istio, exploring Gloo Mesh and Gloo Mesh Gateway is the place to start.
What is Istio?
Istio is an open source and platform-independent service mesh that provides traffic management, policy enforcement and telemetry collection. Using Envoy Proxy as its sidecar proxy, Istio supports Kubernetes-based deployments today and is being adapted by the community to other environments. Istio is the most popular service mesh, and is used by companies such as Airbnb, Salesforce, T-Mobile, FICO, eBay, and many more.
What is Gloo Mesh?
Gloo Mesh is a Kubernetes-native management plane that enables configuration and operational management of multiple heterogeneous service meshes across multiple clusters through a unified API. Gloo Mesh includes an upstream distribution of Istio with long-term support (LTS), n-4 version support, FIPS readiness, and enterprise SLAs. Engineered with a focus on its utility as an operational management tool, Gloo Mesh provides both graphical and command-line UIs, observability features, and debugging tools.
How does Gloo Mesh help with complex environments?
Gloo Mesh is a management plane that simplifies operations and workflows of Istio installations across multiple clusters and deployment footprints. With Gloo Mesh, you can install, discover, and operate Istio with federated policies enforced across your enterprise, on-premises, or in the cloud, even across heterogeneous service mesh implementations. For more info, see Gloo Mesh Concepts in our docs.
What are common use cases for Gloo Mesh?
Gloo Mesh is primarily used to build, secure, manage, and observe Istio across one or more clusters, environments, and even multiple clouds. Istio solves challenges in building and operating microservices applications by managing networking and providing insight into, and control over, your distributed application’s behavior. Gloo Mesh can also deliver zero-trust networking, a security model that trusts no person or system inside or outside of your network, verifies identity before establishing trust, and grants only the minimal access needed to complete a particular function.
Does Gloo Mesh add value in a single-cluster environment?
Yes! Gloo Mesh makes Istio easier to manage by abstracting away complicated Istio configuration. It prevents users from making common Istio mistakes that could potentially cause outages. Even within a single cluster, Gloo Mesh brings security features (like role-based access control, certificate management, and FIPS certification), reliability features (like dynamic scaling to thousands of nodes, published SLAs, long-term n-4 version support, and patch backporting), and extensibility features like WebAssembly (Wasm).
Where can I deploy and run Gloo Mesh?
Gloo Mesh helps you deploy, upgrade, and manage Istio anywhere you can run Kubernetes, including AWS (Amazon EC2, Amazon EKS), Azure (Compute, AKS), Google Cloud (Compute Engine, GKE, Anthos), Red Hat OpenShift, VMware Tanzu Kubernetes Grid, on VMs (virtual machines), and on bare-metal servers.
Is Gloo Mesh secure and FIPS-ready?
Yes! Gloo Mesh Enterprise has comprehensive security controls built in, including mTLS, RBAC, WAF, and DLP. Gloo Mesh has been verified and can be supported as part of your overall Federal Information Processing Standards (FIPS) technology solution, alongside appropriate people and process policies. This covers data plane and control plane certification, as well as a distroless build option.
What is the difference between Istio, Gloo Mesh, and Gloo Mesh Enterprise?
While open source Istio has become very popular as a service mesh, the open source version of Gloo Mesh adds more functionality around security, reliability, unified controls, and ease of use. For customers looking for the most comprehensive solution for enterprise production workloads, Gloo Mesh Enterprise goes even further. See our detailed feature comparison on the Gloo Mesh product page.
How do I see a demo, request a trial, or get pricing for Gloo Mesh Enterprise?
What levels of help and support are available?
See our support page for options including enterprise production support for our products, open source Istio, and community support provided on Slack and GitHub.
How should I choose between Gloo Mesh and Gloo Edge?
If you are looking for an API gateway and Kubernetes ingress management for North-South traffic, Gloo Edge is a great choice. If you would like to build and manage an Istio service mesh or want an Istio-based API gateway, exploring Gloo Mesh and Gloo Mesh Gateway is the place to start.
What is Gloo Mesh Gateway?
Gloo Mesh Gateway is a full-featured API gateway built on Istio (and Envoy Proxy) that brings the capabilities of Gloo Edge, such as data loss prevention (DLP), North-South rate limiting, WebAssembly (Wasm), and SOAP/XSLT support, to Istio. Gloo Mesh Gateway inherits and incorporates all the strengths of Gloo Edge, making it a mature offering immediately.
What problems does Gloo Mesh Gateway solve?
Many customers had API gateway software that was too centralized and could not deal well with dynamic environments, due to rigid configurations and the need to restart to implement changes. Istio can offer an ingress point to Kubernetes and other microservices and distributed applications, but isn’t always the easiest to configure for this use case. Customers needed features like external authentication, advanced rate limiting, and a multi-cluster developer portal to have a true API gateway solution. Now you can use Gloo Mesh and have Istio act as an API gateway. This gives you a new API gateway option that more efficiently drives the right outcomes and capabilities, with less overhead in resources required and complexity to manage.
What is an API developer portal?
An API developer portal is part of API management, and provides a way to catalog your APIs and publish them to developers, community, partners, and customers. The UI makes it easy to catalog, manage, find, share, and track usage of APIs with others, which could support chargeback or monetization.
What is Gloo Portal?
Gloo Portal provides a framework for managing the definitions and documentation of APIs, API client identity, and API policies. Vendors of API products can leverage the Gloo Portal to secure, manage, and publish their APIs independent of the operations used to manage networking infrastructure. Powered by the OpenAPI and gRPC specifications, the Gloo Portal provides policy, traffic control, and a web UI for consuming APIs provided by services deployed in and outside of Kubernetes.
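For illustration, a minimal OpenAPI definition of the kind a portal catalogs and publishes might look like the following. The API title, path, and response shape are hypothetical:

```yaml
openapi: "3.0.0"
info:
  title: inventory-api        # hypothetical API name
  version: "1.0"
paths:
  /items:
    get:
      summary: List inventory items
      responses:
        "200":
          description: A JSON array of inventory items
```

A portal ingests definitions like this to generate browsable documentation, issue client credentials, and track per-API usage.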
What are microservices?
Microservices is an architecture pattern in which an application is composed of many small, independent services that are loosely coupled and independently deployable. Microservices have also been called distributed applications, based on how they are typically deployed, and containerized applications, as popularized by the adoption of Docker and Kubernetes for these types of applications.
What is an API gateway?
An API gateway directs requests from users or applications on the edge to the appropriate applications. The API gateway most often handles “ingress”, as it’s the entry point for inbound connections and responses, also called “North-South traffic.” For example, your operating environment has to manage different types of incoming connection requests, like from a mobile app, a web portal, or from other internal applications. An API gateway can also handle connections coming from different operating environments, be they on-premises, hybrid, one cloud, or multi-cloud. The open source project Envoy Proxy is the most popular API gateway for Kubernetes and cloud environments, as it was designed to be modern and native, not a retrofit of older, legacy API software.
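The routing decision described above can be sketched in a few lines. This is a toy illustration with hypothetical service names; real gateways like Envoy implement matching far more efficiently and support much richer rules:

```python
# Toy API-gateway route table: longest-prefix match from request path
# to a backend service (service names are illustrative).
ROUTES = {
    "/api/mobile": "mobile-backend:8080",
    "/api/web":    "web-portal:8080",
    "/api":        "default-api:8080",
}

def route(path):
    """Pick the backend whose route prefix matches the most of the path."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    return ROUTES[max(matches, key=len)] if matches else None

print(route("/api/mobile/login"))  # -> mobile-backend:8080
```

A production gateway layers authentication, rate limiting, TLS termination, and observability on top of this basic matching step.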
What is a service mesh for Kubernetes?
A service mesh is a cloud-native application networking pattern created to solve the new challenges that arise as applications evolve from static monolithic workloads to distributed microservices. In microservices, an application is made of potentially hundreds of loosely coupled services networked together, making service-to-service communication critical to a properly functioning application. In a service mesh, the application network is abstracted out of the business logic and handled through a set of proxies, one paired with each service.
What is North-South traffic?
North-South traffic is client-to-server traffic: traffic between clients or end users outside the datacenter and the network inside the datacenter. Both ingress and egress traffic fall within the North-South definition. Incoming traffic is often referred to as ingress, but that term is confined to a specific cluster; network traffic leaving a cluster for an external service is referred to as egress.
What is East-West traffic?
East-West traffic is the service-to-service communication that occurs within the cluster and does not leave your network. In a microservices architecture, this is how the different services are networked together to form a complete application. Technologies like service mesh are being developed to help solve the challenges of enabling, securing, and controlling intra- and cross-cluster communication.
What is Kubernetes?
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
What is Kubernetes Ingress?
Ingress is a concept for handling incoming traffic to a cluster running your application services. In Kubernetes, an Ingress resource defines routing rules for incoming traffic, and an Ingress controller fulfills them. Kubernetes Ingress handles only traffic entering a specific Kubernetes cluster. Ingress is an example of North-South traffic in a single cluster, but it could be East-West traffic in a multi-cluster environment.
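As a concrete sketch, a minimal Ingress resource routes external HTTP traffic for a host to a backend Service. The hostname and service name here are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com        # illustrative external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # illustrative backend Service
                port:
                  number: 80
```

The Ingress resource is only a declaration; an Ingress controller such as Gloo Edge watches for these resources and programs the actual proxy that carries the traffic.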
What is WebAssembly (Wasm)?
Wasm is the emerging standard for building custom filters in Envoy. Per the WebAssembly project page, Wasm “is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.” In practice, this means Wasm provides an easy way to build custom filters and rules to manage application traffic and control specific behavior.
What is GraphQL?
GraphQL is rapidly gaining popularity as a standardized protocol (specification) for querying distributed applications in any language, mapping the microservices APIs that can process and respond to requests. GraphQL is becoming a “no-code” way to handle queries, with declarative YAML configurations to drive the right behavior. Read more on our GraphQL page.
What is the connection between Solo and GraphQL?
Solo has announced the ability to run GraphQL in Envoy Proxy and Istio, embedding the emerging standard protocol in API gateways and service meshes. This new capability eliminates the burden of building a separate GraphQL server and creating the security, scalability, federation, and controls needed to run it safely in a production environment. Now you can run GraphQL as filters in Envoy sidecars which will then efficiently handle the queries, directing them to the appropriate services, and coordinating responses. Queries are secure, with granular access control by role, and the system will scale efficiently and reliably.
Who should explore running GraphQL on Solo?
Developers (API consumers) who wish to efficiently query distributed applications and microservices, developers (API producers) who wish to securely share relevant information from their applications, and operators who need to give them both those capabilities.
What problems does GraphQL on Solo solve?
Developers need to expose and extract relevant data from distributed microservices to support applications and operations. For example, an application may have a user interface (UI) that shows information, as in a payroll or inventory application. In the past, the ability to query (akin to SQL) backend microservices had to be implemented directly in the application code itself, for each individual application component (microservice). This puts the burden on the application developer to code API functionality that can respond to queries, including a schema with “parsers” to interpret the requests and “resolvers” to field them. Developers also have to build all the security, policies, scaling, and controls for each application. APIs were non-standard, insecure, non-scalable, and tied to specific languages. There was no easy way to make this efficient or scalable for the organization.
GraphQL is quickly emerging as a standardized approach to querying microservices, but GraphQL is still immature, and organizations are discovering its limitations as their initiatives to deploy it stumble in production and at scale. The GraphQL libraries of parsers and resolvers need to be implemented on one or more monolithic servers to consolidate the logic and process requests. This becomes unwieldy and can lead to issues if they are not kept fully consistent and up to date.
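The “resolver” idea at the heart of GraphQL can be sketched without any GraphQL library. The field names, services, and data below are entirely hypothetical; the point is that a resolver maps each requested field to the backend that can supply it, so the client gets exactly the fields it asked for in one response:

```python
# Backend "microservices" stubbed as plain functions (illustrative data).
def fetch_user(user_id):
    return {"id": user_id, "name": "Alice"}

def fetch_orders(user_id):
    return [{"id": 1, "total": 42.0}]

# A resolver maps each queryable field to the service that can supply it.
RESOLVERS = {
    "user":   lambda args: fetch_user(args["id"]),
    "orders": lambda args: fetch_orders(args["id"]),
}

def execute(query_fields, args):
    """Resolve only the fields the client asked for, in one response."""
    return {field: RESOLVERS[field](args) for field in query_fields}

result = execute(["user", "orders"], {"id": 7})
print(result)
```

In a traditional GraphQL deployment, maps like this live in a dedicated server tier; Solo's approach (described below) moves that resolution work into Envoy Proxy filters instead.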
How does Solo work with GraphQL?
Solo.io is embedding GraphQL into Envoy Proxy API gateways (and enabling it to be managed within an Istio service mesh). The need for a distinct system of servers and schema libraries of parsers and resolvers is eliminated, as all of this intelligence and work can be managed in Envoy Proxy filters. Solo’s implementation of GraphQL auto-generates the gateways and creates a registry of the schemas (and sub-schemas) needed to fulfill query requests. You don’t need to write resolvers in a specific language, and you don’t need to know anything about the backend microservices or how they interrelate. Gloo Edge and Gloo Mesh can run GraphQL natively, making the protocol much easier for you to adopt and implement efficiently at scale. You won’t need to deploy monolithic GraphQL servers, which would be redundant alongside the Envoy and Istio API management components already deployed in your environment. APIs and policies are federated for consistency in a declarative language, requests are orchestrated, and data responses are automatically processed and aggregated from all the supporting microservices. Advanced logic can be built into queries to pull information from multiple APIs, join it, transform it, or perform other operations on it as part of the response to the calling UI application.
What are the benefits of running GraphQL with Solo?
If you are already adopting GraphQL, this gives you a more secure, scalable, consistent, easy way to deploy it across all environments for your enterprise applications. If you already have Envoy Proxy or Istio, Gloo Edge and Gloo Mesh let you use that existing application infrastructure for additional functions with GraphQL. You won’t need to deploy (now redundant) GraphQL servers. If you don’t yet have either GraphQL, Envoy, or Istio, Gloo Edge and Gloo Mesh will help you leapfrog into fully modernized microservices, speeding adoption and getting to the benefits much faster and more reliably than trying to build out these functions for yourself.