Is 2023 the year of convergence of API gateway and service mesh?

We have been talking about the differences and similarities between API gateways and service meshes for the past few years. As we continue to work with more and more customers here at Solo.io, it is worth pausing for a moment to reflect on the types of problems we see, how our customers choose to solve them, and what that means for deploying API gateway or service mesh technology.

Do you need an API gateway?

If you are developing APIs, reusable services, or trying to wrangle a sprawl of half-implemented application logic for security, routing, or metrics, then you would probably benefit from an API gateway. If you are deploying APIs and services into containers, Kubernetes, or other cloud infrastructure, you absolutely should use an API gateway that was built for those volatile, dynamic, ephemeral environments.

The old API gateway technologies (Apigee, Kong, etc.) won’t cut it, and trying to shoehorn them into your architecture will create unnecessary constraints and slow you down. API gateways built on Envoy Proxy, and those that natively fit a GitOps operational model, are a much better fit for these cloud environments.
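To make that concrete, here is a minimal, illustrative sketch of an Envoy-based gateway configured declaratively through the Kubernetes Gateway API — the kind of manifest that can live in Git and flow through a GitOps pipeline. The gateway class, certificate, service, and path names below are placeholders, not taken from any specific product:

```yaml
# Declarative, Git-managed gateway and route definitions (Kubernetes Gateway API).
# All names below are hypothetical placeholders.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: api-gateway
spec:
  gatewayClassName: envoy            # provided by an Envoy-based gateway implementation
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: api-example-com-cert   # TLS certificate stored as a Kubernetes Secret
---
# Route external ("North / South") traffic for one API to its backend service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: orders-api
spec:
  parentRefs:
  - name: api-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /orders
    backendRefs:
    - name: orders
      port: 8080
```

Because the gateway is just another set of Kubernetes resources, it is reconciled the same way as the workloads it fronts — the property that appliance-style gateways struggle to offer in these environments.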

What do you do if your APIs are deployed across multiple clusters?

In this case, you have a decision to make: do you go with a centralized deployment of an API gateway or a more decentralized/federated deployment? For example, if you have a small number of APIs and deploy them in a handful of clusters that all live within the same datacenter boundary, a centralized API gateway deployment may be the better choice. In this scenario, the distinction between “North / South” and “East / West” traffic is clearly understood.

API gateway for simple deployment, single cluster

Centralized API gateway across multiple clusters

This works fairly well, but what if services in the “East / West” direction need to communicate with each other? Are API calls forced back through the centralized gateway? That pattern is known as “hairpinning” or “backhauling”; it adds unnecessary network hops, can congest the network, and should be avoided.

Example of hairpinning API call to a centralized gateway

A service mesh avoids the need to hairpin traffic: services can call each other directly in the east-west direction while API and security policies are still enforced.
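With Istio, for example (the mesh referenced later in this post), the policy can be attached to the destination workload itself, so direct service-to-service calls stay on the shortest path while still being authenticated and authorized. A sketch under assumed service, namespace, and path names:

```yaml
# Enforce an API-level policy on direct "East / West" calls without routing them
# back through the centralized gateway. Only the "orders" workload identity may
# call the "inventory" service, and only for GET requests on /stock paths.
# Service, namespace, and path names are hypothetical.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: inventory-east-west
  namespace: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/orders/sa/orders"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/stock/*"]
```

The sidecar in front of the destination service evaluates this rule locally, so the call never has to leave the cluster or hairpin through the gateway.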

But if you have many thousands of APIs, sometimes deployed in islands across different infrastructure (VMs, Kubernetes, Cloud Foundry, AWS ECS/Lambda/Fargate, etc., on premises and/or in public cloud), then you should probably consider a more decentralized architecture. You still need common API gateway functionality for those APIs, but the traffic direction (North / South vs. East / West) becomes blurred. Here, some combination of a decentralized API gateway and a service mesh makes sense.

When traffic arrives at an API gateway, how do you secure the traffic to the backend services?

API gateways provide the “front door” to your APIs and enforce traffic and security policies, usage plans, access, etc. But what happens when a call to an API is validated and passed through to the backend service? 

What security mechanisms are used between the API gateway and the backend services? Or… are any security mechanisms used at all? This is the crux of the zero trust security movement: an API gateway can create an implicit “trust zone” behind it, which can translate into a very large blast radius for an attacker.

That is, what if an attacker is already inside the organizational boundary, behind the firewall or API gateway? These types of attacks are now the norm and can be mitigated by shrinking those trust zones. Here is where a service mesh can serve your architecture well: a service mesh can deny all traffic to the services behind the API gateway unless the caller presents the specific certificates and tokens required to invoke an API.
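In Istio terms, that usually means turning on strict mutual TLS for every workload and requiring a valid token before a backend will accept a request, so that merely being on the network behind the gateway is not enough to call an API. A sketch with placeholder issuer, namespace, and workload names:

```yaml
# Require mTLS for every workload in the mesh. Plaintext calls from inside the
# "trust zone" are rejected, shrinking the blast radius of a compromised host.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system        # applied to the root namespace = mesh-wide
spec:
  mtls:
    mode: STRICT
---
# Describe how to validate JWTs presented to the "orders" backend.
# The issuer and jwksUri are placeholders for your identity provider.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: orders-jwt
  namespace: orders
spec:
  selector:
    matchLabels:
      app: orders
  jwtRules:
  - issuer: "https://idp.example.com"
    jwksUri: "https://idp.example.com/.well-known/jwks.json"
---
# Reject any request that does not carry a valid, verified token.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-require-jwt
  namespace: orders
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```

Together, the certificate requirement (mTLS) and the token requirement (JWT) mean a request must prove both the identity of the calling workload and the identity of the end user or client, regardless of where inside the network it originated.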

Can your API gateway technology be the same as your service mesh technology?

An area of cloud native infrastructure that gets lost among the feature and performance discussions is management: how do you deploy, operate, and scale these types of application networking solutions? Our customers struggle with the idea of having different technologies, management strategies, and configuration formats for their API gateway and service mesh solutions.

That’s why at Solo.io we’ve built a full-blown API gateway on the same technology that underpins our service mesh solution: Envoy Proxy and the Istio service mesh. Our Gloo Platform brings both use cases together, solves them with the same underlying technology, and unifies them under a single, consistent API and management plane. This lets our customers apply powerful API security and policy controls while simplifying how they operate these solutions at scale.

Learn more about unlocking the power of API gateways in our new ebook.