Connecting microservices on Red Hat OpenShift across operating environments
Application modernization journeys usually align to a few overlapping trends, including containers, cloud, and microservices. If you’ve chosen Red Hat OpenShift as your container orchestration platform, congratulations: you’re already halfway home. Indeed, the IDC report “The Business Value of Red Hat OpenShift” declares that “[w]ith a Kubernetes platform offering such as Red Hat’s OpenShift, organizations can obtain additional benefits and gain a foundation for developing and running important business applications.”
The challenge is that once applications are distributed across multiple Kubernetes clusters, hybrid on-premises environments, and even multiple clouds, you still need to securely and consistently manage connectivity. OpenShift can be made even more capable for modern apps when complemented by the right technologies, namely Gloo Edge (an API gateway that enhances the open source Envoy Proxy) and Gloo Mesh (a service mesh that builds more capabilities onto open source Istio).
With Gloo Edge, you can direct requests from users or applications at the edge to the appropriate OpenShift-based applications. The API gateway handles Kubernetes ingress as the entry point for inbound connections and responses. For example, your operating environment has to manage different types of incoming connection requests, whether from a mobile app, a web portal, or other internal applications. Gloo Edge brings advanced features to Envoy. Security is improved with an integrated web application firewall (WAF), data loss prevention (DLP), extensible authentication, federated role-based access control, and more. Reliability is improved with advanced rate limiting, configuration validation, and global failover routing. Gloo Edge also unifies observability in an admin dashboard with multi-cluster views, and API management in a Developer Portal.
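To make the routing concrete, here is a minimal sketch of a Gloo Edge VirtualService that maps an inbound path prefix to an Upstream fronting an OpenShift service. The resource name, domain, and Upstream name below are placeholders, not values from any real deployment:

```yaml
# Hypothetical Gloo Edge VirtualService: sends /api traffic arriving at the
# gateway to a single Upstream that represents a service in the cluster.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: web-api                # placeholder name
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - 'api.example.com'      # placeholder domain for inbound requests
    routes:
      - matchers:
          - prefix: /api       # match requests whose path starts with /api
        routeAction:
          single:
            upstream:
              name: default-echo-service-8080   # placeholder Upstream
              namespace: gloo-system
```

Separate VirtualServices (or additional routes) can then distinguish the mobile, web portal, and internal-application traffic the paragraph above describes, each with its own matchers and policies.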
Meanwhile, a service mesh interconnects your OpenShift-based microservices so they can talk to one another. Your application may have different clusters and pods to handle customer login, user interfaces, and the databases behind them. You probably also have redundant copies of these apps and databases to handle scale and to ensure business continuity with failover in case of problems. With Gloo Mesh, you can install, discover, and operate a service-mesh deployment across your enterprise, deployed on premises or in the cloud, even across heterogeneous service-mesh implementations. Gloo Mesh goes beyond open source and adds more capabilities. For security, you get role-based access control, certificate management with an external authority, and multi-cluster access log aggregation, all FIPS-ready. Istio is made more reliable with priority-based failover routing, dynamic scaling to thousands of nodes, and global failover and routing.
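Because Gloo Mesh builds on Istio, the failover behavior described above ultimately translates into Istio traffic policy. For illustration only, a plain Istio DestinationRule that enables locality-aware failover looks roughly like this (the service host and locality names are placeholders):

```yaml
# Hypothetical Istio DestinationRule: prefer local endpoints, and fail over
# from us-east to us-west when local endpoints become unhealthy.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-failover                       # placeholder name
spec:
  host: reviews.default.svc.cluster.local      # placeholder service host
  trafficPolicy:
    outlierDetection:            # health checking required for failover
      consecutive5xxErrors: 3    # eject an endpoint after 3 consecutive 5xx
      interval: 30s
      baseEjectionTime: 1m
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:
          - from: us-east        # placeholder source region
            to: us-west          # placeholder failover region
```

Gloo Mesh lets you express this kind of policy once and apply it consistently across clusters rather than authoring per-cluster Istio resources by hand.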
Gloo Mesh can even complement OpenShift Service Mesh and give you consistent management into other environments. Say, for example, you want to run Red Hat OpenShift Service on AWS alongside Amazon EKS, Amazon ECS, or even open source Kubernetes on Amazon EC2. Gloo Mesh can coordinate and provide visibility into service mesh behavior across all of these environments.
In his blog post, Denis Jannot explains how to deploy Istio (1.9) on multiple OpenShift (4.6.22) clusters on IBM Cloud and how to leverage Gloo Mesh for mTLS between pods running on different clusters, locality-based failover, global access control, and global observability.
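For reference, strict mTLS between pods in an Istio mesh is turned on with a PeerAuthentication policy; applied to the Istio root namespace, it becomes mesh-wide. This is standard Istio configuration, shown here as a minimal sketch rather than the exact steps from the blog post:

```yaml
# Mesh-wide strict mTLS: a PeerAuthentication in the root namespace
# (istio-system by default) rejects plaintext traffic between workloads.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # only accept mutual-TLS connections
```

In a multi-cluster Gloo Mesh deployment, trust between clusters additionally depends on the clusters sharing a common root of trust for their workload certificates.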
You certainly wouldn’t be the first to choose Solo to enhance your OpenShift deployment. We have customers building with Solo and Red Hat for global credit cards, national insurance, and other financial services, software, and healthcare/life sciences applications.