End User Case Study: Zymergen

Today’s post features a Q&A with Gigi Jackson, Staff SRE at Zymergen, about the company’s modernization to Kubernetes. Zymergen is a science and material innovation company that is rethinking biology and reimagining the world. Zymergen brings together life, data, and computer science to create never-before-imagined materials and products across industries – from agriculture to electronics, personal care to pharmaceuticals, and more.

The Zymergen team is made up of domain experts in biology, chemistry, robotics, data science, and software engineering. While they do not produce any public-facing software, the software they develop for their internal teams is part of their belief in driving scale through automation.


Technology Strategy and Challenge

Zymergen’s infrastructure had always been AWS-centric, with workloads largely in Amazon EC2, AWS Lambda functions, and Elastic Load Balancing, to name a few. While fully in the cloud, the application architecture was largely monolithic, and special-purpose business logic was “kitchen-sinked” into monolithic code bases, resulting in overloaded applications.

While most of the applications were highly specialized for specific scientific discovery, there was a lot of opportunity to reuse common components and share common services, so the team began looking toward a microservices architecture, Kubernetes, and the new ecosystem of tools (like API gateways) and processes required to successfully automate the environment. Automating as much of the workflow as possible is critical to providing scale for the infrastructure engineering team: they are a central team of 10 people, and embedding that expertise into each and every development team is not possible. Additionally, they wanted to build a system with a clear separation of concerns, so that the many developers at Zymergen can focus on their specific application and data, not on the inner workings of AWS, Kubernetes, load balancing, routing, and more. The more programmable, the better.


The Solution

The team started by breaking up their monoliths into smaller services, containerizing them, and then orchestrating them with Kubernetes on Amazon EKS, alongside Lambda functions.

As they started to break down their monoliths, the team also recognized the need for an API gateway for observability, traffic shaping and control policies, complex routing, and more. While their applications were only accessible internally, the changing application architecture presented new challenges in preserving the API contract with clients while the backend services changed underneath them. Having clients call the monolith directly made it harder to break that application apart. The team turned to the Gloo Edge API gateway from Solo.io to handle the ingress traffic to their EKS clusters.
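The post doesn’t include Zymergen’s actual configuration, but to make the ingress pattern concrete, a minimal Gloo Edge VirtualService might look like the following sketch: a carved-out path routes to a new microservice, while everything else still falls through to the monolith. All names, domains, and ports here are hypothetical.

```yaml
# Hypothetical Gloo Edge VirtualService: route /orders to a new
# microservice; all other traffic falls through to the monolith.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: internal-apps              # hypothetical name
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - 'apps.internal.example.com'    # internal-only hostname (assumed)
    routes:
      - matchers:
          - prefix: /orders
        routeAction:
          single:
            upstream:
              name: default-orders-svc-8080   # Upstream for the new service
              namespace: gloo-system
      - matchers:
          - prefix: /
        routeAction:
          single:
            upstream:
              name: default-monolith-8080     # the existing monolith
              namespace: gloo-system
```

Routes are evaluated in order, so the more specific `/orders` matcher is listed before the catch-all prefix.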

They added more tools like Flux, kr8, and customized Helm charts to enable their transition to a GitOps-style workflow for infrastructure provisioning and configuration. They have even customized their installation of Gloo Edge, for example by enforcing a specific ordering for the installation of its components. The team describes the solution as an umbrella: while the shared services are few, the infrastructure changes and the automation allow a narrow but very deep set of skills to be shared across a wide range of small, very specific scientific software.
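In a GitOps workflow like the one described, the Gloo Edge installation itself can be declared in Git and reconciled into the cluster by Flux. This sketch uses the Flux v1 Helm Operator’s HelmRelease CRD and Solo.io’s public chart repository; the chart version and values are illustrative, not Zymergen’s.

```yaml
# Illustrative HelmRelease (Flux v1 Helm Operator) for installing Gloo Edge.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: gloo
  namespace: gloo-system
spec:
  releaseName: gloo
  chart:
    repository: https://storage.googleapis.com/solo-public-helm
    name: gloo
    version: 1.8.0        # pin a chart version for reproducible GitOps
  values:
    discovery:
      enabled: true       # let Gloo Edge discover Kubernetes services
```

Once this manifest is committed, Flux applies it and keeps the cluster in sync with the repository, so installation changes go through code review like any other change.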


Why Gloo Edge

Gigi had heard about Gloo Edge years ago at a KubeCon, and the gateway’s hybrid application architecture approach appealed to what she knew is a reality of migrating to microservices: the environment stays hybridized until the migration is complete. Gloo Edge gives the team a central way to manage these requests with capabilities like request transformations and rewrites of headers or bodies. As Gigi put it: “We can keep the API contract between our clients and services while the actual responder can be the monolith one day and then be served by five new microservices the next day.”
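Gloo Edge’s route options are what make this kind of contract preservation possible. As an illustrative fragment (not Zymergen’s actual configuration), a route can rewrite the path and inject a header before the request reaches a new backend, so clients keep calling the legacy URL unchanged:

```yaml
# Hypothetical route fragment: rewrite a legacy path and add a header
# so a new microservice can serve an old client-facing URL.
routes:
  - matchers:
      - prefix: /api/v1/samples      # legacy path the clients still call
    options:
      prefixRewrite: /samples        # path the new service expects
      transformations:
        requestTransformation:
          transformationTemplate:
            headers:
              x-api-contract:        # hypothetical header name
                text: 'v1'
    routeAction:
      single:
        upstream:
          name: default-samples-svc-8080   # hypothetical new microservice
          namespace: gloo-system
```

Swapping the responder later means changing only the `routeAction`; the matcher, and therefore the client-facing contract, stays the same.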

As the software development org grows and writes more software, Gloo Edge helps the infrastructure team scale their skills centrally. This is critical because the developers are skilled in their own languages but are not experienced with Kubernetes, and Gigi’s team wanted a workflow where the developers didn’t have to worry about it. In her words: “Gloo Edge is almost like an operational API; we can ask the user for a few pieces of information about their service and then our team can automate everything else, from provisioning the EKS cluster, configuring Route 53 and other AWS services, deploying the Gloo Edge API gateway, setting up the upstreams, routing, etc.”
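To illustrate the “operational API” idea: the few pieces of information a developer supplies (say, a service name, namespace, and port) map almost directly onto a Gloo Edge Upstream that the automation could generate. This is a hypothetical sketch, not Zymergen’s tooling.

```yaml
# Hypothetical Upstream the automation might generate from a developer's
# answers: which Kubernetes service, in which namespace, on which port.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: default-assay-svc-8080   # conventional name: <namespace>-<service>-<port>
  namespace: gloo-system
spec:
  kube:
    serviceName: assay-svc       # hypothetical service
    serviceNamespace: default
    servicePort: 8080
```

Everything else the developer never sees, such as cluster provisioning, DNS, and routing, can be derived from those same few inputs by the central team’s automation.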


What’s Next

The team is just finishing migrating the last of their clusters to the new Amazon EKS environment and testing some new automation capabilities that they look forward to sharing in a future online meetup.

We hope you enjoyed this post highlighting the Zymergen team’s work with Kubernetes, cloud, and Gloo Edge. To learn more, check out these links: