What Is a Service Mesh?
A service mesh is an infrastructure layer that abstracts application networking away from the application's business logic. It provides a configurable network layer that handles communication between services through their application programming interfaces (APIs). This architecture is implemented by deploying a proxy as a sidecar alongside each application service. All communication between application services flows through these sidecar proxies (the data plane), which are configured and managed by a control plane. Popular service mesh technologies include Istio, Linkerd, AWS App Mesh, and HashiCorp Consul Connect, each built either on Envoy Proxy or on a custom proxy specific to the service mesh provider.
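The sidecar pattern can be pictured in miniature. The Python sketch below is purely illustrative (the class and method names are invented, not any real mesh's API): each service's traffic passes through its own sidecar, and the control plane pushes configuration to every sidecar without ever touching request traffic itself.

```python
class ControlPlane:
    """Distributes routing/policy configuration to every sidecar (the data plane)."""
    def __init__(self):
        self.sidecars = []

    def register(self, sidecar):
        self.sidecars.append(sidecar)

    def push_config(self, config):
        # The control plane only configures proxies; it never handles requests.
        for sidecar in self.sidecars:
            sidecar.config = config


class SidecarProxy:
    """Intercepts all traffic in and out of the service it sits beside."""
    def __init__(self, service_name, control_plane):
        self.service_name = service_name
        self.config = {}
        control_plane.register(self)

    def call(self, target, request):
        # Outbound: apply mesh policy (here, a stand-in for mTLS) before
        # forwarding to the target's sidecar -- never to the service directly.
        if self.config.get("mtls"):
            request = {"encrypted": True, "payload": request}
        return target.receive(request)

    def receive(self, request):
        # Inbound: terminate "mTLS", then deliver to the local service.
        if isinstance(request, dict) and request.get("encrypted"):
            request = request["payload"]
        return f"{self.service_name} handled: {request}"


cp = ControlPlane()
frontend = SidecarProxy("frontend", cp)
backend = SidecarProxy("backend", cp)
cp.push_config({"mtls": True})
print(frontend.call(backend, "GET /orders"))  # -> backend handled: GET /orders
```

The key property the sketch shows is the separation of planes: enabling encryption required one control-plane push, with no change to either service.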
Why Do We Need Service Mesh?
The rising popularity of microservices-based architectures and container orchestration (Docker and Kubernetes) creates a new challenge: service-to-service communication within a cluster. A microservices application may comprise hundreds of loosely coupled services that are dynamic, ephemeral, and distributed, making the network between them critical to a properly functioning application.
Unlike monolithic applications, which primarily deal with incoming traffic to a single application instance, microservices must handle incoming traffic to many application instances and manage the traffic between services. Incoming traffic to the cluster is often called north-south traffic, while service-to-service communication within the cluster is called east-west traffic. A service mesh is designed to enable and manage east-west communication.
What Can You Do With a Service Mesh?
A service mesh solves a major challenge in building and operating cloud-native applications by providing a foundation and API for L7 networking, giving more insight into and control over distributed application behavior. For application developers, a service mesh provides functionality such as service discovery, client-side load balancing, timeouts, retries, and circuit breaking that works regardless of application framework or language. For operators, it provides a set of L7 controls over traffic routing, policy enforcement, strong identity (authentication and authorization), and security (encryption, mTLS). The service mesh is also an extension point and vehicle for new functionality that can be deployed to applications through it. Examples of this extensibility include progressive delivery, chaos engineering, and operators that automate service mesh behavior.
Organizations looking to adopt microservices and service mesh face a myriad of choices among mesh providers such as AWS App Mesh, HashiCorp Consul, Istio, and Linkerd. Selecting new technology with different APIs in a rapidly evolving ecosystem can be difficult and complex:
- An ever-increasing number of service mesh options is available
- Each service mesh has its own implementation, APIs, and integration points
- Each service mesh presents a different operating model
Teams need the ability to choose the service mesh that best suits the business and technical needs of their applications. They require the flexibility to use any service mesh on any infrastructure, with the controls to manage their diverse environments in a consistent way:
- Flexibility to choose any service mesh at any time for your applications
- Unify service mesh management and improve extensibility through an API translation layer
- Ensure configuration consistency and compliance
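An API translation layer of the kind described above can be pictured as an adapter pattern: one canonical traffic policy is translated into each mesh's native configuration. The sketch below is hypothetical; while `VirtualService` and `ServiceProfile` are real Istio and Linkerd resource kinds, the config shapes here are heavily simplified inventions for illustration, not the actual schemas.

```python
from abc import ABC, abstractmethod

class MeshAdapter(ABC):
    """Translates one canonical policy into a mesh-specific configuration."""
    @abstractmethod
    def apply_traffic_policy(self, policy: dict) -> dict: ...

class IstioAdapter(MeshAdapter):
    def apply_traffic_policy(self, policy):
        # Simplified, invented Istio-flavored shape.
        return {"kind": "VirtualService",
                "retries": {"attempts": policy["retries"]},
                "timeout": f'{policy["timeout_seconds"]}s'}

class LinkerdAdapter(MeshAdapter):
    def apply_traffic_policy(self, policy):
        # Simplified, invented Linkerd-flavored shape.
        return {"kind": "ServiceProfile",
                "retries": policy["retries"],
                "timeout": f'{policy["timeout_seconds"] * 1000}ms'}

def apply_everywhere(adapters, policy):
    # One canonical policy, applied consistently across heterogeneous meshes.
    return [a.apply_traffic_policy(policy) for a in adapters]

configs = apply_everywhere([IstioAdapter(), LinkerdAdapter()],
                           {"retries": 3, "timeout_seconds": 2})
```

Because teams express intent once against the canonical policy, swapping mesh providers means adding an adapter rather than rewriting every configuration, which is the flexibility and consistency the bullets above call for.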