Accelerate Kubernetes Adoption with a Service Mesh

[This piece was originally published on The New Stack.]

I have spent years guiding large organizations through their adoption of Kubernetes and microservices, and during that time I’ve seen many common patterns. Often, organizations want to move conservatively, but this caution can stretch the time to adopt Kubernetes from months to years. In the end, the full value of Kubernetes still might not be realized, and those containers might as well be virtual machines (VMs).

To fully leverage Kubernetes, you must first understand that it is not a complete solution on its own; additional components must be installed and configured. One is a service mesh such as Istio, which lets you secure, observe and manage traffic between the services deployed across your Kubernetes clusters. Another missing ingredient is an API gateway, which routes traffic into Kubernetes in a safe, secure and flexible manner. Simply exposing your Kubernetes services through an external load balancer is not recommended.

Let’s examine three common challenges organizations face and see how adjacent technologies can enhance Kubernetes and help accelerate your application migration and modernization.

Three Common Challenges in Adopting Kubernetes

1. Unrealistic Application Modernization and Migration Goals

If your organization has large monolithic applications that have been running for many years in a data center, moving them to Kubernetes in a few weeks is rarely realistic. These monolithic applications are like old oak trees, deeply rooted in the existing IT infrastructure and the middleware that connects applications together. Modernizing, replacing or moving these applications can be difficult, especially when there is a forest of intertwined “oak tree” applications.

Simply breaking apart these legacy monolithic applications into microservices and deploying them to Kubernetes is a strategy that often ends in failure. Taking that old “oak tree” and chopping off the limbs doesn’t work, because those limbs cannot grow roots and thrive on their own. The worst-case scenario is that you create a distributed monolith that inherits the old problems and combines them with the challenges of a microservice architecture. The resulting applications will be complex, unreliable and slow, and will probably prove a poor investment.

Applications should be fully redesigned for modern use cases. By rethinking your approach to application modernization with Kubernetes, your organization will get the desired benefits faster with less risk.

Some companies offer tools that help simplify your approach to application modernization. For example, with Solo.io’s Gloo Mesh and Gloo Edge, both built on Istio and Envoy Proxy, you can route traffic between applications on Kubernetes, VMs and servers. One trick is to modernize your APIs before you modernize the applications themselves, such as transforming JSON requests into XML at the gateway so that legacy services can be consumed by modern REST clients. Focus on applications that can be moved easily and with lower risk, and take your time with the more complex ones. You can also adopt canary deployments to verify that applications are behaving as expected before routing full production traffic to them.
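To make the canary idea concrete, here is a minimal Go sketch of the weighted-routing logic behind a canary rollout: a tiny reverse proxy sends a small percentage of requests to a canary backend. The stable and canary addresses are hypothetical, and in practice Gloo Edge or Istio would express this traffic split declaratively rather than in application code.

```go
// Minimal sketch of weighted routing for a canary rollout.
// The backend addresses are hypothetical; a gateway or mesh would
// normally apply this split as configuration, not code.
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	stable, _ := url.Parse("http://app-stable.internal:8080") // assumed stable backend
	canary, _ := url.Parse("http://app-canary.internal:8080") // assumed canary backend

	stableProxy := httputil.NewSingleHostReverseProxy(stable)
	canaryProxy := httputil.NewSingleHostReverseProxy(canary)

	const canaryPercent = 10 // send roughly 10% of traffic to the canary

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if rand.Intn(100) < canaryPercent {
			canaryProxy.ServeHTTP(w, r)
			return
		}
		stableProxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8000", nil))
}
```

Maintaining proxies like this by hand across dozens of services is exactly the kind of toil a gateway or mesh removes.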

2. Inability to Distribute Applications Globally

A web service connecting to a database on another server is a distributed application in its simplest form. More often, a distributed application is a cascade of microservices calling each other across multiple data centers and clouds. Since many large organizations serve global customers, they need to build globally distributed applications, properly partitioning services and data for each region to achieve better performance.

For production systems, having a microservice directly call another microservice is inherently unreliable, especially between data centers and clouds. Some of those microservices will suffer occasional outages, and there will be networking issues, too. An entire data center may go completely offline. The possibilities for failure are endless, so you must think about how to architect these systems and proactively design for the different types of failure, because they will inevitably happen!
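As one concrete example of designing for failure, here is a minimal Go sketch that tries a primary endpoint and fails over to a secondary when the call times out or returns a server error. The regional endpoints are hypothetical; a service mesh can apply this kind of policy without touching application code.

```go
// Minimal sketch of cross-region failover: try endpoints in order and
// return the first healthy response. Endpoints are hypothetical.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithFailover tries each endpoint in turn, treating timeouts and
// 5xx responses as failures and moving on to the next endpoint.
func fetchWithFailover(endpoints []string) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	var lastErr error
	for _, ep := range endpoints {
		resp, err := client.Get(ep)
		if err != nil {
			lastErr = err
			continue
		}
		if resp.StatusCode >= 500 {
			resp.Body.Close()
			lastErr = fmt.Errorf("%s returned %s", ep, resp.Status)
			continue
		}
		return resp, nil
	}
	return nil, fmt.Errorf("all endpoints failed: %w", lastErr)
}

func main() {
	resp, err := fetchWithFailover([]string{
		"http://orders.us-east.internal/healthz", // assumed primary region
		"http://orders.eu-west.internal/healthz", // assumed failover region
	})
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("served by:", resp.Request.URL.Host, resp.Status)
}
```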

Until reliability challenges are solved, highly distributed systems are not ready for production. You may still need your legacy systems until the new globally distributed deployments on Kubernetes are proven reliable for production workloads.

To illustrate again, Gloo Mesh and Gloo Edge bring a simpler design and the confidence to deploy faster through federation: the ability to configure consistent routes and services across clusters anywhere. Services can also be replicated across clusters to load balance and fail over more effectively. You might also combine service meshes into a “mesh of meshes” while still being able to operate each independently should there be localized outages. Gloo Mesh also brings reliable service-to-service communication, with built-in handling of retries, exponential backoff and failover, so you don’t have to write that code yourself.
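For a sense of what that saves you, here is a minimal Go sketch of the retry-with-exponential-backoff logic you would otherwise have to write and maintain in every service. The service URL is hypothetical; with a mesh such as Gloo Mesh or Istio, this behavior is configured as policy rather than application code.

```go
// Minimal sketch of retry with exponential backoff and jitter, the kind
// of resilience logic a service mesh can handle for you. The URL is
// hypothetical and used only for illustration.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// getWithRetry retries a GET up to maxAttempts times, doubling the wait
// between attempts and adding jitter to avoid synchronized retries.
func getWithRetry(url string, maxAttempts int) (*http.Response, error) {
	backoff := 100 * time.Millisecond
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("server error: %s", resp.Status)
			resp.Body.Close()
		}
		jitter := time.Duration(rand.Int63n(int64(backoff) / 2))
		time.Sleep(backoff + jitter)
		backoff *= 2
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	resp, err := getWithRetry("http://inventory.internal/api/stock", 4) // hypothetical service
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```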

3. Lack of Automation Around Security

DevSecOps promises a lot, but full automation of security can be hard to achieve, especially in an environment with strict requirements and limited ability to change policies.

Kubernetes helps by effectively forcing you to take a holistic approach to security and to adopt DevSecOps, unless you want to maintain two unautomated sets of security processes: one for Kubernetes and one for legacy IT infrastructure. In most cases, running two parallel but different approaches is inherently riskier and more complex to manage.

Modernizing your security posture for microservices can take many months. It is not something that can be easily scripted, and it may require rethinking how your organization manages functions like authentication and authorization. Applications should not go into production on Kubernetes until modernized security tools and practices are in place.

The Istio service mesh has DevSecOps capabilities built in. By adopting Istio, you can automate tasks like certificate management and securing service-to-service communication. With Istio, you can even extend your service mesh to VMs and adopt a homogeneous solution for security. Solo.io’s Gloo Mesh and Gloo Edge are deployed and configured through Kubernetes custom resources (CRDs), which are also used to configure security and distribute secrets. You can use GitOps tooling to implement your DevSecOps approach by automatically deploying updated CRDs to modify your security posture.
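For a sense of what the sidecar automates, here is a minimal Go sketch of the mutual-TLS plumbing a service would otherwise manage itself: loading its own certificate and trusting an internal certificate authority. The file paths and peer address are hypothetical; with Istio, certificate issuance, rotation and mTLS enforcement are handled for you.

```go
// Minimal sketch of hand-rolled mutual TLS between services, the kind of
// plumbing an Istio sidecar handles transparently. File paths and the
// peer address are hypothetical.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Load this service's certificate and private key (issued by your CA).
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}

	// Trust only the internal CA that signs workload certificates.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("failed to parse CA certificate")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert}, // present our identity to the peer
				RootCAs:      pool,                    // verify the peer against our CA
				MinVersion:   tls.VersionTLS12,
			},
		},
	}

	resp, err := client.Get("https://payments.internal:8443/healthz") // hypothetical peer service
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("mTLS call status:", resp.Status)
}
```

Multiply this by every service, plus the work of rotating those certificates, and the value of automating it through the mesh becomes clear.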

Conclusion

I love helping people overcome the challenges they face in adopting Kubernetes. If you’d like to find out more about how we can help with your particular needs and use cases, please reach out to us on our Slack channel.