End User Case Study: uShip

In this post, we feature a Q&A with Richard Simpson, a software engineer working on platform engineering at uShip. uShip is a transportation marketplace company launched in 2004 with the mission to make shipping large or bulky items (like cars, cranes, furniture, and freight) quick, easy, and affordable. The uShip platform helps people, businesses, e-commerce sellers, and multinational logistics companies ship with greater speed and efficiency.


Technology Strategy and Challenge

The uShip platform is a monolithic application that has grown and evolved over the past decade and a half. A core part of the platform is the set of APIs that partners integrate with, which is critical to the business. Today that application runs on AWS EC2 instances, with Mashery by TIBCO managing API access to our monolith. The monolith is a Windows application on the .NET Framework with an IIS front end. The end-to-end experience on Mashery was less than ideal, from configuration to monitoring, with a fully ops-controlled workflow that added complexity to the developer experience.

As we looked toward the future, we wanted to: 

  • Adopt cloud-native architecture patterns, with microservices and serverless for our event-driven processes
  • Provide a better experience for our developers to build and deploy applications
  • Improve the experience for partners integrating with our platform

Our business is a platform that serves as a marketplace and integration point for our various partners and for consumers. The API integration experience for our partners is critical to our business, and we wanted a new gateway layer that would be easier for our developers to work with and for our partners to integrate with.


Solution

We kicked off our API modernization project in April of this year, with the goal of breaking up our monolith and separating all API responsibilities out of the monolith itself.

We believe that you don’t just settle on Kubernetes as your infrastructure and stop there. It’s a foundational decision that forms the base of your internal platform, and to make it truly effective for the organization, you need to build a PaaS on top of it.

Our journey to Kubernetes brought these technologies into our environment:

  • Cloud infrastructure with AWS EC2, Kubernetes on EKS, and Lambda functions
  • Gloo for our API gateway layer, routing to our monolith, new microservices, and serverless functions
  • Policy-based authorization for APIs, with Open Policy Agent integrated into Gloo
  • Flux CD for continuous delivery to Kubernetes targets
  • Platform observability with Prometheus and the ELK stack
  • cert-manager to issue and manage certificates on Kubernetes (a sketch follows this list)
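To give a feel for the last item, a minimal, hypothetical cert-manager sketch might look like the following; the issuer, email address, domain, and resource names are placeholders rather than our actual configuration, and it assumes the current cert-manager.io/v1 API:

```yaml
# Hypothetical cert-manager sketch: request a TLS certificate for the
# gateway's domain and store it in a Kubernetes Secret.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1            # assumes IAM access to Route 53
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-gateway-tls
  namespace: gloo-system
spec:
  secretName: api-gateway-tls            # Secret where the issued cert/key land
  dnsNames:
    - api.example.com                    # placeholder domain
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
```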

What we have today is a mix of what remains of our monolith, new microservices, and Lambda functions, all coexisting during this period of transition. Our new services are Linux based while our monolith is still Windows based, which lets us contain the scope of Kubernetes as we rewrite services over time and expand operationally as we go. A simplified sketch of how the gateway ties these together is shown below.
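To make that coexistence concrete, here is a simplified, hypothetical sketch of how Gloo can model all three kinds of backends: a static Upstream for the EC2-hosted monolith, an AWS Upstream for Lambda, and a VirtualService that routes by path. All names, addresses, and function names are illustrative rather than our actual configuration.

```yaml
# A static Upstream for the EC2-hosted monolith.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: monolith
  namespace: gloo-system
spec:
  static:
    hosts:
      - addr: monolith.internal.example.com   # IIS front end on EC2
        port: 443
---
# An AWS Upstream exposing a Lambda function.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: aws-lambda
  namespace: gloo-system
spec:
  aws:
    region: us-east-1
    secretRef:                      # AWS credentials stored as a Secret
      name: aws-creds
      namespace: gloo-system
    lambdaFunctions:
      - logicalName: process-event
        lambdaFunctionName: process-event
---
# A VirtualService that routes by path: a new microservice, a Lambda
# function, and a catch-all that still goes to the monolith.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: api
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - api.example.com
    routes:
      - matchers:
          - prefix: /quotes               # new Kubernetes microservice
        routeAction:
          single:
            upstream:
              name: quotes-svc
              namespace: gloo-system
      - matchers:
          - prefix: /events               # serverless, event-driven endpoint
        routeAction:
          single:
            upstream:
              name: aws-lambda
              namespace: gloo-system
            destinationSpec:
              aws:
                logicalName: process-event
      - matchers:
          - prefix: /                     # everything else: the monolith
        routeAction:
          single:
            upstream:
              name: monolith
              namespace: gloo-system
```

The catch-all route keeps the monolith serving everything that has not yet been carved out, which is what makes an incremental rewrite workable.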

In addition to all the new elements of our technical stack, we’ve focused on automating our infrastructure provisioning and configuration from centralized infrastructure repositories with the support of our embedded and cross-functional SREs.  
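In practice, driving the cluster from a centralized repository looks roughly like the Flux sketch below: a GitRepository points at the infrastructure repo and a Kustomization reconciles a path within it into the cluster. The repository URL, path, and intervals are hypothetical, and the exact API versions depend on the Flux release you run.

```yaml
# Hypothetical Flux sketch: sync cluster config from a central Git repo.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/infrastructure   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/production     # directory in the repo to reconcile
  prune: true                     # remove resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: infrastructure
```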

[featured_boxes class="quote"]

“Gloo has enabled us to take advantage of the powerful Kubernetes platform and a rich set of features while keeping the complexity our developers face to a minimum.”
Richard Simpson, Software Engineer

[/featured_boxes]


Why Gloo

Gloo is a Kubernetes-native API gateway and ingress controller built on Envoy Proxy to manage and secure service connectivity at the edge. We evaluated a number of solutions and found Gloo and Envoy Proxy to be excellent technologies; being Kubernetes-native also meant Gloo would integrate well into our platform. A key requirement was that the new gateway provide a more developer-friendly experience, so that teams could take on more ownership of the APIs of their services. The ergonomics of the other solutions we evaluated were not as developer friendly as we wanted for our environment.

Specifically, three features of Gloo stood out to us:

  • Delegation: The ability to delegate configuration of routes to the development teams is quite powerful and a no-brainer for us. We want to give our developers more control and freedom to deploy with as little friction as possible, while keeping some centralized control over the environment. With Gloo, the operations team owns the top-level domain and the routing to our monolith, while each development team owns a sub-path that routes to the new services it builds, deploys, and maintains, and controls the routing configuration for that path (see the first sketch after this list).
  • Authorization: We’ve integrated the Envoy Auth API with Open Policy Agent to handle the authorization logic. While our auth logic is fairly coarse-grained today, having this setup as our foundation lets us scale to more complicated logic over time. We really like OPA and the idea of a decentralized policy engine with centralized policy logic, and it integrates very well with Envoy (see the second sketch after this list).
  • Diverse Endpoints: Routing to workloads in EC2 and Lambda is important, as we run these alongside containers on Kubernetes in our environment.
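The delegation model maps directly onto Gloo’s VirtualService and RouteTable resources: operations owns the VirtualService for the domain, and each team owns a RouteTable in its own namespace. This is a hypothetical sketch; the team, paths, and upstream names are placeholders.

```yaml
# Ops owns the VirtualService and hands a path prefix off to a team.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: api
  namespace: gloo-system          # owned by the operations team
spec:
  virtualHost:
    domains:
      - api.example.com
    routes:
      - matchers:
          - prefix: /shipments
        delegateAction:
          ref:
            name: shipments-routes
            namespace: shipments-team   # routing delegated to the team
---
# The development team owns this RouteTable in its own namespace.
apiVersion: gateway.solo.io/v1
kind: RouteTable
metadata:
  name: shipments-routes
  namespace: shipments-team
spec:
  routes:
    - matchers:
        - prefix: /shipments/v1
      routeAction:
        single:
          upstream:
            name: shipments-v1
            namespace: gloo-system
```

Because Kubernetes RBAC can scope each team to its own namespace, teams can change their RouteTables without ever touching the shared VirtualService.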
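On the authorization side, the kind of coarse-grained policy we describe might look like the hypothetical Rego sketch below, packaged in a ConfigMap for OPA to load. The package name, paths, and header are placeholders, and exactly how OPA is wired to the gateway’s external auth API depends on your deployment.

```yaml
# Hypothetical policy sketch: a Rego policy that OPA evaluates for each
# request passed through Envoy's external auth API.
apiVersion: v1
kind: ConfigMap
metadata:
  name: opa-policy
  namespace: gloo-system
data:
  policy.rego: |
    package envoy.authz

    default allow = false

    # Allow anyone to read public endpoints.
    allow {
      input.attributes.request.http.method == "GET"
      startswith(input.attributes.request.http.path, "/public")
    }

    # Require a partner API key header for everything else.
    allow {
      input.attributes.request.http.headers["x-api-key"] != ""
    }
```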


What’s Next

We’ve had massive gains on both the human and the technology side with this transformation. Our goal was to design a better centralized infrastructure with a smooth self-service experience. Technically, we’ve been able to standardize the environment, automate it, monitor it better, and improve the reliability of our workloads, application by application, by running on Kubernetes. Our developers now own their product areas, which has made them happier and more creative and has improved their ability to deliver software.

There are a lot of interesting areas we are looking into, like Windows containers on Kubernetes for our remaining .NET EC2 instances, KEDA to autoscale deployments and jobs based on Prometheus metrics (a sketch follows below), and Argo to improve our CD experience. All the while, we are continually hardening our clusters and fine-tuning our monitoring, our security, and how we handle role-based access control. We look forward to more exciting things from the Kubernetes ecosystem.
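For example, a KEDA ScaledObject can scale a deployment on a Prometheus query, roughly like the hypothetical sketch below; the deployment name, metric, query, and threshold are placeholders.

```yaml
# Hypothetical KEDA sketch: scale a worker deployment on a Prometheus metric.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: event-worker-scaler
  namespace: workers
spec:
  scaleTargetRef:
    name: event-worker            # Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(events_pending_total[2m]))   # placeholder metric
        threshold: "100"          # target value per replica
```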

Learn more