End User Case Study: EquipmentShare

In this post, we feature a Q&A with Hunter Scheib, Cloud Ops Engineer at EquipmentShare, about the company's migration to Kubernetes. EquipmentShare offers contractors a better way to manage mixed construction fleets by leveraging technology solutions to work smarter and more efficiently and to increase productivity. EquipmentShare's smart jobsite solutions and services include: Rent, a new and telematics-equipped machine rental service; Track, construction fleet management technology that monitors asset location and utilization; and Own, EquipmentShare's dealership arm for new and used equipment and parts sales.


Technology Strategy and Challenge

The three solutions at EquipmentShare are powered by a software platform that gathers and updates data about available equipment in real time, tracking its location and status, and uses IoT telemetry to monitor the equipment for preventative maintenance. The platform was built cloud-first on AWS, using many native services such as Elastic Beanstalk in the software development workflow. As the project and team grew over the past few years, the monolithic nature of the application started to cause natural growing pains, including poor test isolation and branch contention.


Solution

The Cloud Ops team started to look at Kubernetes, a microservices architecture, and GitOps as a way to create more separation between the services themselves and, in turn, isolate the developers' workflows from one another. This evolution focused on the web application part of their solution, which includes services written in Python, a database, and some Lambda functions for batch processing.

We started the migration by adopting a GitOps approach first, so that we could deploy automatically whenever developers commit. We wanted to simplify onboarding, and GitOps lets us abstract kubectl and other Kubernetes-specific commands away from our developers. We created a set of Helm charts for easily deploying a containerized app to Kubernetes. We replaced our fragile path-based routing on ALB with the open source Gloo API gateway from Solo.io. Gloo immediately picks up new VirtualService changes, so we can create dynamic test environments within minutes that are replicas of our production setup. Now developers can easily test frontends and backends in their own isolated full-stack environment. Gloo gives us fine-grained traffic control that's easy to use.
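To make the dynamic test environments more concrete, here is a minimal sketch of the kind of Gloo VirtualService a Helm chart might render for a single feature-branch environment. The resource structure follows the open source Gloo (Gloo Edge) API; the names, domain, and upstream below are hypothetical placeholders rather than EquipmentShare's actual configuration.

```yaml
# Hypothetical VirtualService for a per-branch test environment.
# Gloo watches these resources and starts routing as soon as the
# manifest is applied, which is what makes the environments dynamic.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: feature-1234                       # placeholder: one VirtualService per branch environment
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - feature-1234.staging.example.com     # placeholder test domain
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: feature-1234-web         # placeholder Upstream for the branch's pods
            namespace: gloo-system
```

Because Gloo continuously watches VirtualServices, applying a manifest like this from a CI pipeline is enough to start routing traffic to the branch's services.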


“When we wanted to bring in microservices and Kubernetes, we tried Gloo as our API gateway and it just fit. It is Kubernetes native and allows us to dynamically change and control our environment.”



Why Gloo

We'd heard of Gloo before, and when we went to test it, we saw that it is a native Kubernetes solution built on Custom Resource Definitions (CRDs) and VirtualServices, which fit perfectly with our GitOps methodology. Gloo allowed us to jumpstart our Kubernetes deployment and take a phased approach to going live. With Gloo we deployed static upstreams for Beanstalk and Kubernetes and were able to shift small percentages of traffic to the new environment instantly, without having to manage it in DNS. The speed of changes also made it easy to shift traffic back if we observed issues in the application's behavior. Gloo gives us fine-grained traffic control as we ease our production traffic onto a new environment.
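As a rough illustration of that phased cutover, the sketch below defines a static Gloo Upstream pointing at a legacy Beanstalk endpoint and a VirtualService route that splits traffic by weight between Beanstalk and a Kubernetes upstream. The hostnames, upstream names, and 90/10 split are assumptions for illustration, not the actual production values.

```yaml
# Hypothetical static Upstream for the legacy Elastic Beanstalk environment.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: beanstalk-legacy
  namespace: gloo-system
spec:
  static:
    hosts:
    - addr: legacy-app.us-east-1.elasticbeanstalk.com   # placeholder Beanstalk hostname
      port: 80
---
# Hypothetical production route that shifts a small percentage of traffic
# to Kubernetes. Changing the weights and re-applying moves traffic in
# either direction without touching DNS.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: production
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - app.example.com                        # placeholder production domain
    routes:
    - matchers:
      - prefix: /
      routeAction:
        multi:
          destinations:
          - weight: 90
            destination:
              upstream:
                name: beanstalk-legacy       # legacy environment keeps most traffic
                namespace: gloo-system
          - weight: 10
            destination:
              upstream:
                name: default-web-app-8080   # placeholder Kubernetes upstream
                namespace: gloo-system
```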

We've been impressed with the rate of innovation in Gloo, as the Solo.io team is constantly updating the open source code, and the community is very active and helpful in Slack. We use the read-only dashboard to keep track of what's actively running in the environment. In our staging cluster we have a lot of VirtualServices, since developers are always spinning dynamic environments up and down for testing, and we can quickly check the dashboard to see how much is running. To help with our infrastructure hygiene, we've implemented an automatic deletion policy that removes VirtualServices from the staging cluster after a certain amount of time has passed.
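The post doesn't describe how that automatic deletion policy is implemented. One plausible sketch is a scheduled job that removes test VirtualServices once they pass a given age; everything below (the label selector, seven-day TTL, schedule, and the vs-cleanup ServiceAccount, whose RBAC is omitted) is an assumption for illustration.

```yaml
# Hypothetical cleanup CronJob; only one way such a policy could be built.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: stale-virtualservice-cleanup
  namespace: gloo-system
spec:
  schedule: "0 3 * * *"                # nightly run; schedule is an assumption
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: vs-cleanup   # RBAC for this account not shown
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - |
              # Delete VirtualServices labeled as dynamic test environments
              # once they are older than seven days (TTL is an assumption).
              cutoff=$(date -d '7 days ago' +%s)
              kubectl get virtualservices.gateway.solo.io -n gloo-system \
                -l environment=dynamic-test \
                -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.creationTimestamp}{"\n"}{end}' \
              | while read name created; do
                  if [ "$(date -d "$created" +%s)" -lt "$cutoff" ]; then
                    kubectl delete virtualservices.gateway.solo.io "$name" -n gloo-system
                  fi
                done
```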


What’s Next

We currently have our entire pre-production environment on Kubernetes with Gloo, and we run production Kubernetes and production Beanstalk environments side by side. As mentioned earlier, we are constantly shifting traffic back and forth between them as we observe behavior, make changes to the code or environment, and adjust our operations for the new environment. As we continue on our journey, we look forward to trying more cloud-native technologies and potentially integrating them into our tech stack.


Learn more