Gloo Mesh vs. other Istio products – what we’ve learned over the past year

At Solo.io, we build application-networking products that enable organizations to expose, secure, and observe their service architectures. Envoy proxy and the Istio service mesh are part of how we build our products. In fact, we’ve built our Gloo Mesh Enterprise product on top of Istio, as it is the most mature, scalable, and widely adopted service mesh. We work with large and small organizations worldwide to deploy service mesh, including the largest deployments of Istio in the world (on the order of tens of thousands of clusters!).

We have achieved significant success here at Solo.io, and in doing so, we’ve grown our technologies and offerings to outcompete other vendors with Istio offerings. In this blog, we discuss how Solo.io solves an organization’s application-networking needs better than the alternatives.

The challenges of running a service mesh

Service-to-service connectivity across a hybrid environment comes with its own challenges. Typical enterprise environments span containers, Kubernetes, VMs, bare metal, home-grown software, and off-the-shelf software across private clouds, public clouds, and even mainframes. To overcome application-level networking challenges across a footprint like this, we must account for its inherent complexity, legacy, and sprawl.

What we’ve observed in the market over the last four years is that organizations demand ease of use, consistency, and support for hybrid environments:

  • Ease of use
  • Improved developer and operations productivity
  • Intent-based declarative configuration and GitOps workflows
  • Consistent, multi-cluster operations with tenancy
  • Hybrid/multi-cloud deployments, irrespective of underlying technology
  • Deep expertise and leadership in the service mesh ecosystem, brought directly to customers

Let’s see how other vendors with Istio-based offerings for service mesh stack up against these needs.

Gloo Mesh vs. other Istio products

Larger vendors like Red Hat, Google, and VMware often work with our customers on related technologies such as Kubernetes, virtualization, and operating systems. They all have some level of service mesh offering alongside their Kubernetes platforms, but usually the offering is a “check the box” feature that assumes you go all in with their platform. Gloo Mesh Enterprise, by contrast, solves application-networking problems head on, without the limitations of all-in-one vendor lock-in.

Here is a high-level comparison of Gloo Mesh Enterprise against Anthos Service Mesh, OpenShift Service Mesh, and Tanzu Service Mesh across the dimensions that matter most to our customers:

  • Simplified API for multi-cluster management
  • Multi-platform support
  • Dedicated service-mesh expertise working directly with customers
  • Global name routing and failover
  • Ease of use

Let’s look at each comparison in more detail.

Gloo Mesh Enterprise vs. OpenShift Service Mesh

Our customers who have deployed OpenShift as their Kubernetes distribution are looking at the next step of their journey around modernizing application architectures. A big part of that is leveraging capabilities of Istio that have been available for nearly two years but are not available in OpenShift’s service mesh. For example, OpenShift’s latest version (4.8 at the time of writing) deploys Service Mesh 2.x, which is based on Istio 1.6.x. Istio 1.6 has been EOL (end of life) for a year (also at the time of writing).

Lagging behind upstream with forked EOL Istio

When evaluating OpenShift Service Mesh, our customers worry about lagging so far behind the community, but more importantly, they worry about the likely cause of this lag: OpenShift Service Mesh is a fork of Istio. The developers who work on OpenShift Service Mesh have themselves described it as a fork.


Our customers worry that adopting a forked service mesh project will lock them into the vendor product and stall the speed at which they can take advantage of upstream changes. 

UPDATE: OpenShift 4.9 has been released, along with the soon-to-be-released OpenShift Service Mesh v2.2. As previously suggested, we can now confirm that OpenShift Service Mesh’s most recent release will be based on Istio 1.9. Unfortunately for OpenShift Service Mesh users, Istio 1.9 is already EOL in the community. Users would be starting on an already-out-of-date version of Istio and are likely to fall even further behind.

Missing critical features

Another aspect of deploying Istio on OpenShift is support for multi-cluster configuration orchestration and for simplifying the burden placed on developers and operators. OpenShift Service Mesh does not support multi-cluster deployments and lacks the multi-cluster management and overall mesh-simplification features that we have in Gloo Mesh Enterprise. Other notable features missing from OpenShift’s Service Mesh implementation are IPv6 support, smart DNS proxying for sidecars, and support for configuring namespace discovery. This last feature is particularly limiting for large clusters that deploy varying types of workloads, and it is something we worked on upstream for some of our larger Istio deployments.
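
For reference, namespace discovery scoping landed upstream as the discoverySelectors field in Istio’s MeshConfig. The snippet below is a minimal sketch of how it can be set through an IstioOperator resource; the label key shown (istio-discovery: enabled) is an example you would choose yourself, not a required value:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control-plane
  namespace: istio-system
spec:
  meshConfig:
    # Only namespaces matching these selectors are watched by istiod,
    # which keeps discovery scoped on large, mixed-workload clusters.
    discoverySelectors:
    - matchLabels:
        istio-discovery: enabled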

Istio expertise in the field

The last thing that our customers worry about when considering OpenShift Service Mesh is where they will get Istio expertise as they architect, deploy, and operationalize their mesh. Red Hat is great at Kubernetes, but service mesh is not their area of expertise (incidentally, it turns out that many of Red Hat’s Istio engineers have left for other companies). At Solo.io, this is our core, dedicated area of expertise, and it’s on full display when working with our customers.

Gloo Mesh Enterprise vs. Anthos Service Mesh

Similar to OpenShift, Google’s service mesh is meant to drive users to their Kubernetes distribution (Anthos/GKE). Google Anthos doesn’t have nearly the adoption for on-premises deployments that Red Hat OpenShift does (and arguably, Anthos could end up on the chopping block at Google), so they’ve decoupled the service mesh piece as Anthos Service Mesh (ASM).

Apples and oranges comparison

Anthos Service Mesh (ASM) isn’t really comparable to Gloo Mesh Enterprise (GME), since it’s simply a re-skinned build of community Istio with fewer features available. ASM compares more directly to the Istio distribution we ship with Gloo Mesh, but that comparison (by itself) misses the point of GME and why our customers adopt it. One of the big reasons our customers pick Solo.io and GME is simplified multi-cluster configuration and operations.

For example, enabling service-to-service global routing, high availability, failover, and priority-based routing across multiple clusters is complicated with Istio alone. Additionally, maintaining the correctness of those configurations over time as the topology changes requires a lot of toil. With Gloo Mesh Enterprise, it’s as simple as creating a VirtualDestination object like this:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: web-api-global-routing
  namespace: demo-config
spec:
  # Global hostname that clients across the virtual mesh use to reach the service
  hostname: web-api.gloo-mesh.istiodemos.io
  port:
    number: 80          # port clients call
    targetNumber: 8080  # port the backing workloads listen on
    protocol: http
  localized:
    # Outlier detection settings that drive failover away from unhealthy local endpoints
    outlierDetection:
      consecutiveErrors: 1
      maxEjectionPercent: 100
      interval: 5s
      baseEjectionTime: 60s
    # Which Kubernetes services (across clusters) back this virtual destination
    destinationSelectors:
    - kubeServiceMatcher:
        namespaces:
        - istioinaction
        labels:
          app: web-api
  # The virtual mesh (group of federated clusters) this destination spans
  virtualMesh:
    name: your-virtual-mesh
    namespace: gloo-mesh

Under the covers, GME translates this resource into the correct Istio ServiceEntry, VirtualService, Gateway, EnvoyFilter, and DestinationRule resources and places them on the correct clusters based on context (i.e., if services on cluster X don’t need to fail over to another cluster, don’t put the configurations there). Additionally, if there are any topology changes, GME’s global service discovery detects them and automatically reconciles the configuration. With ASM, building this is entirely up to you. ASM even recommends setting up the Istio community’s multi-cluster endpoint discovery, which we’ve found with our customers to be insecure. With this model, you essentially grant each Kubernetes cluster permission to communicate directly with every other cluster’s Kubernetes API. If one cluster gets compromised, all clusters are potentially compromised.
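
To make that concrete, the community’s multi-cluster endpoint discovery works by installing “remote secrets”: each cluster’s istiod is handed a kubeconfig (backed by a service-account token) for every other cluster’s API server. The sketch below shows roughly what such a secret looks like; the cluster name is a placeholder, the kubeconfig contents are elided, and exact fields vary by Istio version:

# Sketch of an Istio "remote secret" installed on cluster1 so its istiod can
# watch endpoints in cluster2. Every additional cluster pair needs another one.
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster2
  namespace: istio-system
  labels:
    istio/multiCluster: "true"   # tells istiod to use this kubeconfig for remote endpoint discovery
stringData:
  cluster2: |
    # kubeconfig granting read access to cluster2's Kubernetes API server
    # (server URL, CA data, and a long-lived service-account token) ...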

(Image: Istio security)

GME, on the other hand, follows a federation model and uses a relay-communication architecture instead of direct Kubernetes API access. For the organizations we work with, this provides a much more secure posture.

(Image: Gloo Mesh Management Plane)

Complete, portable solution for any cloud

Another big reason why our customers choose GME over ASM is that it is a more complete and portable solution for application networking across cloud providers. Neither GME nor Gloo Mesh Istio is tied to any Kubernetes distribution or cloud vendor. Supporting components like gateways and portals are likewise not tied to a cloud platform. For example, a multi-cluster service mesh (typically spanning HA, compliance, organizational, and other boundaries) requires gateway capabilities to “gloo” these different boundaries together. With GME, you can use Istio’s ingress gateway, egress gateway, or even the new Gloo Mesh Gateway capabilities. With Gloo Mesh Gateway, you get a complete API gateway (with OIDC, rate limiting, request/response transformation, WAF, DLP, WASM, etc.) built into the Istio ingress gateway and controlled from the mesh. It’s the first and only Istio-native API gateway, and it can run on any Kubernetes distribution without being tied to any specific cloud. With ASM, you’re tied into the Google ecosystem with sub-par, outdated solutions like Apigee.

Istio expertise in the field

The last reason people pick Gloo Mesh Enterprise and choose to work with Solo.io for scaled deployments of service mesh is our dedicated expertise and how closely we work with our prospects and customers. Google’s Customer Engineering teams, while experienced and talented, are spread too thin across Google Apps, AI, data, Kubernetes, and more to be deep experts in service mesh. At Solo.io, this is our dedicated area of expertise. Working with our many large customers across the world, and with our customer-first field-engagement model, we can help you adopt, scale, and operationalize your service-mesh architecture faster than you could yourself or with any other vendor.

Gloo Mesh Enterprise vs. Tanzu Service Mesh

Tanzu Service Mesh (TSM) is a SaaS offering from VMware that uses a distribution of Istio under the covers. We don’t see many folks evaluating TSM, but the few evaluations that have come up follow some of the same themes as the previous examples, and those teams also struggle to get TSM working nicely in their environments. Two big areas where we see people lean toward Gloo Mesh Enterprise and away from TSM are observability and security.

Lacking in observability integration

The observability story with TSM is unclear, or simply does not work, for the customers we have. TSM has some out-of-the-box dashboards for observability, but the real push from VMware is for you to adopt their entire stack, like their Observability platform and Mission Control. TSM also does not integrate nicely with your (potentially) existing tools like DataDog, Prometheus, and Grafana. At a minimum, plugging into trusted, widely deployed metric-collection tools like Prometheus is a must.
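
As a concrete example of what “plugging in” looks like with an Istio-based mesh, the sidecars already expose Prometheus-format metrics, so an existing Prometheus can scrape them with a standard pod-annotation-based job. The snippet below is a minimal sketch of such a scrape config; the job name and relabeling rules are illustrative, and it assumes Istio’s default metrics merging (port 15020, path /stats/prometheus) advertised via the prometheus.io/* pod annotations:

scrape_configs:
- job_name: istio-sidecar-metrics        # illustrative job name
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods that opt in to scraping via the prometheus.io/scrape annotation
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
  # Use the metrics path from the pod annotation (e.g., /stats/prometheus for Istio sidecars)
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: '(.+)'
  # Use the port from the pod annotation (e.g., 15020 for Istio's merged metrics endpoint)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: '([^:]+)(?::\d+)?;(\d+)'
    replacement: '$1:$2'
    target_label: __address__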

Lacking in security integration

Around security, TSM is pretty rigid in terms of its support for integrating with an organization’s existing PKI. From what we can tell, you can only use the Global Namespace certificate authority (CA) and cannot plug in existing CA/RA providers. Additionally, there are hard-coded, out-of-the-box rotation policies (every 90 days) that are non-starters for customers. Customers typically have opinions about how certificate minting and rotation should happen and usually have existing infrastructure for this. For those folks evaluating Gloo Mesh Enterprise, we have nice integrations with Vault and AWS ACM, which are commonly used.

API gateway and security

For gateway capabilities, VMware tries to position their software load balancer (Avi), but also the Istio ingress gateway, Mesh7, and the Pivotal Spring Cloud Gateway. All of them have significant overlap, use differing technologies, and are difficult to configure consistently with the rest of the mesh. With Gloo Mesh, Gloo Mesh Gateway is based on Envoy proxy with key capabilities like rate limiting, transformation, OIDC integration, and WAF, and it is configured consistently with the rest of the mesh (driven by the Istio control plane and Gloo Mesh).

Istio expertise in the field

Lastly, VMware just doesn’t have the same level of expertise that we at Solo.io have around deploying and managing a service mesh based on Istio. Because we work very closely with our customers on this, we save them significant amounts of time and frustration.

Wrapping up on Gloo Mesh vs. other Istio products

At Solo.io, we’ve been working with the largest deployments of service mesh and Istio in the world, we have strong leadership and contributions in the upstream communities, and we have deeper experience deploying this technology than any other company, experience we bring directly to our customers. Not all Istio-based service mesh offerings are the same, especially in terms of what matters: maturity, support, and expertise. We bring this expertise to the forefront when working with our customers, and we have built an amazing team of Field Engineers who have been in the trenches pushing the service mesh ecosystem forward. If you are running on any of the above-mentioned platforms (OpenShift, GKE/Anthos, Tanzu, or any others; we embrace multi-platform) and are looking to be successful on your service mesh and API modernization journey, reach out to our experts!