Webinar Recap – Can You Replace API Management with Service Mesh?

This webinar was a long time in the making, as we set out to answer the question we hear most often:

Can I replace my Edge/API Gateway or API management solution with a service mesh?

This is not surprising given the amount of service mesh hype in the cloud-native ecosystem, and because service mesh capabilities like traffic routing, traffic shaping, traffic policy, and security sound a lot like the functionality you’d get from API management and gateway solutions. While service mesh, edge gateways, and API management all operate in the application networking layer, they address different traffic patterns and use cases.

A service mesh solves communication challenges between services within a cluster (east/west traffic), while an API gateway handles traffic entering and exiting a cluster (north/south traffic). The two are complementary, but they address different traffic patterns. Additionally, not all applications can run in Kubernetes, in a service mesh, or even as a container. Depending on the purpose of the application, you may implement one or all of these solutions.
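To make the two patterns concrete, here is a rough, Istio-flavored sketch, not a prescription: a Gateway plus VirtualService exposes a service at the cluster edge (north/south), while a second VirtualService only shapes traffic between sidecars inside the mesh (east/west). The hostname api.example.com and the orders and reviews services are placeholders, and the canary subsets assume a corresponding DestinationRule exists.

    # North/south: expose the hypothetical "orders" service at the cluster edge
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: edge-gateway
    spec:
      selector:
        istio: ingressgateway        # bind to the default Istio ingress gateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - api.example.com
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: orders-edge
    spec:
      hosts:
      - api.example.com
      gateways:
      - edge-gateway                 # north/south: traffic entering the cluster
      http:
      - route:
        - destination:
            host: orders
            port:
              number: 8080
    ---
    # East/west: shape service-to-service traffic between sidecars in the mesh
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: reviews-canary
    spec:
      hosts:
      - reviews
      gateways:
      - mesh                         # east/west: applies only to in-mesh calls
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10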

This talk covers: 

  • Evolving traffic patterns from the edge/ingress to service mesh
  • How the traffic patterns differ and areas for integration 
  • Envoy as the underlying proxy technology for the modern application network (a minimal configuration sketch follows this list)
  • Key capabilities of Envoy-based networks 
  • How to incrementally adopt and operationalize across multiple clusters 
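To give a concrete feel for the Envoy configuration model referenced above, here is a minimal, hypothetical static bootstrap: one HTTP listener that routes every request to a single upstream cluster. In an Envoy-based mesh or gateway, a control plane would generate this kind of configuration dynamically over xDS rather than having it written by hand; the names and addresses below are examples only.

    static_resources:
      listeners:
      - name: ingress_http
        address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: backend
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: orders_service }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: orders_service
        connect_timeout: 1s
        type: STRICT_DNS
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: orders_service
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: orders.default.svc.cluster.local, port_value: 8080 }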

Watch the replay here

Highlights from the Q&A

Do you see enterprises using a single large service mesh cluster or several smaller clusters of service meshes (maybe per Line of Business) with API management layers in between?

The recommended pattern is many smaller clusters, as this makes it easier to manage security boundaries and isolation than running all the workloads on a single large cluster. For example, with Istio we recommend that each cluster be deployed with its own control plane, and by extension that logical boundary should also map to the other properties of the failure domain you’re trying to limit. This does add operational overhead to manage the additional clusters and control planes, but from a failure-domain and high-availability standpoint it is the right approach. Our project Service Mesh Hub was built specifically to address this issue and help federate the management of multi-cluster service mesh environments. 
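As a sketch of the per-cluster control plane piece of this with Istio’s IstioOperator install API, each cluster gets its own install with a distinct cluster name, and Service Mesh Hub then federates the resulting meshes using its own registration tooling. The mesh, cluster, and network names below are placeholders.

    # Hypothetical per-cluster control plane install; repeat per cluster
    # with its own clusterName, then federate with Service Mesh Hub.
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: control-plane
      namespace: istio-system
    spec:
      profile: default
      values:
        global:
          meshID: lob-payments             # e.g. one mesh per line of business
          multiCluster:
            clusterName: payments-us-east  # unique name for this cluster
          network: network-us-east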

Can you also expose the APIs to internal teams? The demo of the developer portal focuses on the external developer experience.  

Yes! The Developer Portal allows you to easily catalog running APIs and expose them to developers both inside and outside your organization. You can configure access and authentication accordingly to support either use case.

Why do you differentiate between security boundaries inside the mesh and at the edge?

There are two different types of boundaries in an organization. One boundary is within a line of business, where the service mesh facilitates service-to-service communication. The edge is the boundary between different lines of business.

You mention “internal gateways” but why would I need “internal gateways” when I already have “sidecar proxies”?

It comes down to how you want to handle communication within and between boundaries. The sidecar proxies handle all the service-to-service communication within a boundary. The internal gateway controls how requests cross line-of-business boundaries, with capabilities like rate limiting or authentication. Applying these traffic and security policies at the edge of each boundary, rather than at every single sidecar proxy across the organization, keeps the configuration manageable. This is why gateways at the edge (internal or external) and sidecar proxies are used together.
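As one hedged illustration of what “policy at the internal gateway” can look like with Istio, the sketch below requires a valid JWT on requests passing through a gateway workload labeled istio: internal-gateway, instead of configuring every sidecar. The gateway label, issuer, and JWKS URL are placeholders.

    # Require a verified JWT at the line-of-business boundary gateway
    apiVersion: security.istio.io/v1beta1
    kind: RequestAuthentication
    metadata:
      name: lob-boundary-jwt
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          istio: internal-gateway      # the gateway deployment for this boundary
      jwtRules:
      - issuer: https://sso.example.com
        jwksUri: https://sso.example.com/.well-known/jwks.json
    ---
    # Deny any request that did not present a verified JWT principal
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: lob-boundary-require-jwt
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          istio: internal-gateway
      action: DENY
      rules:
      - from:
        - source:
            notRequestPrincipals: ["*"]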

Download the presentation

Learn more