Customer Case Study: Vonage – API Gateway Migration

In this post, we feature a Q&A with Jonathan Lane, Senior Manager, Software Engineering, API Platform at Vonage. Vonage (Nasdaq: VG) is a global leader in cloud communications helping businesses accelerate their digital transformation. Vonage offers a unique combination of unified communications, contact center, and communications APIs, all on a unified, flexible, integrated platform. The Vonage API Platform offers custom, embedded programmable communications capabilities via APIs that enable applications and businesses to easily connect with customers via the channels they prefer: SMS, voice, chat, messaging, and video. Over the past 20 years, Vonage has grown through new products, services, and acquisitions, and that growth has presented new challenges and opportunities for the team.

Technology Strategy and Challenge

With our API Platform, we at Vonage are very aware of how our products are presented to our end customers. Recently we’ve been looking at the consistency of the developer experience in using our APIs. As we’ve grown and evolved as a company, we have amassed a diverse portfolio of APIs, and we want to present one Vonage experience regardless of an API’s circumstances: whether it is newer or older, developed in-house or acquired through a merger or acquisition. A unified experience covers both how the solution is presented publicly to the customer and how we manage things on the back end, in a consistent, standardized way. We follow a service-oriented architecture, so shared features live in their own components, which helps with consistency. However, there are still areas where inconsistencies can creep in.

Take authentication, for instance. Account validation is performed in a consistent way, but each individual service must decide how customers can send credentials. This creates inconsistencies both in what types of credentials can be supplied and in exactly how to supply them.

Example: a customer uses API 1 by logging in with Basic Authentication. They then want to call API 2, which uses JWT (JSON Web Token) and does not support Basic Authentication. The customer may decide to standardize on one authentication method, such as JWT, only to find that another service they need supports only Basic or OAuth. What is the customer then supposed to do?
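To make the divergence concrete, here is a minimal sketch of what the two credential styles look like on the wire. The credentials and token are made up purely for illustration; real Vonage APIs have their own key formats:

```python
import base64

# Hypothetical credentials, for illustration only.
api_key, api_secret = "my-key", "my-secret"
jwt_token = "eyJhbGciOiJSUzI1NiJ9.example.signature"  # a pre-generated JWT

# API 1 expects HTTP Basic Authentication: base64("key:secret").
basic_value = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
basic_header = {"Authorization": f"Basic {basic_value}"}

# API 2 expects a bearer JWT instead, and rejects Basic credentials.
jwt_header = {"Authorization": f"Bearer {jwt_token}"}
```

Both are `Authorization` headers, but a client coded against one scheme cannot call the other service without extra credential-handling logic.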

We started to look at how we might package some of these common features into a centralized location to remove this inconsistency and present a better experience to our customers. This also frees up the service developers to focus on the value the service brings to the customer. After reviewing the options, we decided to pursue API gateway technology. In general, a gateway provides a single entry point to your API-based system. It’s a good place to plug in all of your common functionality. It can also provide a wealth of other features around packaging and provisioning APIs, and it can help with general connectivity between your services.
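The single-entry-point idea can be sketched in a few lines. This is not Vonage’s implementation, just an illustrative toy: the route table, service names, and credential checks are all hypothetical stand-ins:

```python
import base64

# Hypothetical route table: path prefix -> backend service address.
ROUTES = {"/sms": "http://sms-service", "/video": "http://video-service"}

def authenticate(headers):
    """Centralized credential check: accept Basic or Bearer uniformly."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Basic "):
        user, _, secret = base64.b64decode(auth[6:]).decode().partition(":")
        return bool(user and secret)          # stand-in for real validation
    if auth.startswith("Bearer "):
        return len(auth[7:].split(".")) == 3  # stand-in for JWT verification
    return False

def route(path, headers):
    """Single entry point: authenticate once, then pick the backend."""
    if not authenticate(headers):
        return 401, None
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return 200, backend + path
    return 404, None
```

Because authentication happens once at the front door, each backend receives already-validated traffic and no longer needs its own credential-handling code.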

 

Our Evolving Journey to Envoy Proxy and Kubernetes

Our pursuit of adding an API gateway to our API platform led us to explore Envoy Proxy, Gloo Edge, Kubernetes, and cloud — not all at the same time, but through a series of experiments and adjustments. With each iteration we learned things not just about the technology but also about our internal processes, and we continued to evolve based on what we learned.

Starting with Envoy Proxy

Our first experiments with a gateway used the open source Envoy Proxy. At this stage we were deploying to our own servers, so we wanted something that was easy to configure and could run as a native binary. Envoy was a good fit. The only real issue was that we needed to build a plugin to talk to our authentication service. Envoy can be extended with plugins, but they are built in C++ using the Bazel build framework. Being a primarily Java shop, we don’t have much infrastructure for building C++, but with a dusted-off copy of Effective Modern C++ and some support from Stack Overflow, we got it up and running and into production. Envoy offers very low latency and high throughput. To test this out, we deployed alongside a brand-new service that was expecting very low traffic, and the deployment was very successful.

Going to Gloo Edge and Kubernetes

Our first set of challenges came when it was time to upgrade to a newer version of Envoy Proxy. Some of the APIs we were using to build our bridge to the authentication service had changed; upgrading took days, and we didn’t have a deep bench of C++ expertise just to support Envoy, so we started to look for alternatives.

We devised a testing process for our API gateway selection and arrived at Gloo Edge, the API gateway from Solo.io.

Our priorities for this selection process included:

  • Ability to easily integrate with a variety of custom authentication services
  • Rich flexibility for routing requests based on any attribute of the request
  • Flexible rate limiting functionality
  • Ability to route to services hosted in various locations (on-premises, cloud, AWS Lambda, etc.)
  • Easy to use in an on-premises setup and able to help us in our migration to the cloud

In this phase we self-hosted Kubernetes on-premises in Softlayer, then used Minikube to create an isolated environment to build and test the authentication adapter functionality. Gloo Edge provided a control plane to configure and manage Envoy Proxy without our having to invest in a substantial engineering team specializing in customizing, maintaining, and supporting upstream Envoy; we are able to rely on the Solo.io team for that specialization.
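As a rough illustration of what that control plane looks like in practice, Gloo Edge routing and external authentication are configured declaratively through Kubernetes custom resources. The sketch below is not Vonage’s actual configuration: the resource names, domain, upstream, and `AuthConfig` reference are all hypothetical, and exact field names should be checked against the Gloo Edge documentation for your version:

```yaml
# Hypothetical Gloo Edge VirtualService: one entry point for API traffic.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: vonage-apis            # illustrative name
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - "api.example.com"      # placeholder domain
    options:
      extauth:
        configRef:             # delegate auth to an AuthConfig resource
          name: custom-auth    # hypothetical AuthConfig wrapping the auth service
          namespace: gloo-system
    routes:
      - matchers:
          - prefix: /sms       # route on request attributes (path prefix here)
        routeAction:
          single:
            upstream:
              name: sms-service        # hypothetical upstream
              namespace: gloo-system
```

The appeal of this model is that authentication, routing, and rate limiting become configuration applied at the gateway rather than code rebuilt into Envoy, which is exactly what made the C++ plugin maintenance problem go away.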

 

Collaborating with the team at Solo.io has been great. They are very responsive in helping us with our Gloo environment, brainstorming ideas on how to solve our issues even beyond Gloo Edge, and answering the questions we’ve had on Kubernetes, such as load balancing and how the ecosystem of tools works together. They are truly invested in our success.

 

What’s Next

With each step, we’ve learned more about containers, Kubernetes, Gloo Edge, and API gateways, and not only how we want to build and deploy to production, but also how we want to operationalize these changes for the day after. While the technology implementation has been straightforward, the rollouts have surfaced issues to address around our environments and our dependencies on other teams and services. Luckily, we’ve been able to take an agile approach to the build and deployment process, learning and adjusting as we go.

The most recent evolution has been to migrate off our self-hosted Kubernetes on Softlayer to AWS, using Amazon EKS for the Kubernetes services and Amazon EC2 for the services that are not yet migrated to Kubernetes. We came to the realization that maintaining and supporting Kubernetes on our own hardware was not something we wanted to continue. Instead, we learned we can leverage the expertise built into the Amazon EKS service and focus our efforts on the Vonage business logic and customer experience. We already had a cloud migration effort at the company, and this folded nicely into that initiative to containerize as many services as possible, gain the ability to deploy services anywhere in a consistent manner, and improve redundancy and failover for our services.

In AWS, Gloo Edge will be fronting our Amazon EKS clusters and once the gateway setup is settled, we’ll be looking to expand its use across the company. As part of this initial process, we are gaining the knowledge needed to educate and empower the internal teams at Vonage on how to interact with the gateway and manage their own configuration. As we progress on our path, we’ll come back and share more of our experience.

 

Learn more

Thank you so much to Jonathan and the Vonage team for sharing their experiences in modernizing to Kubernetes, Gloo Edge, and cloud.