What is the NGINX API gateway?

An API gateway secures and orchestrates traffic between backend services and API consumers. NGINX Plus API Gateway receives all API requests from clients, determines which services each request requires, and delivers responses with high performance. NGINX can deliver API responses in under 30 milliseconds and handle thousands of requests per second.

Key features of the NGINX Plus API Gateway include:

  • Authenticates API calls
  • Routes requests to the appropriate backend
  • Applies rate limiting to prevent service overload
  • Mitigates DDoS attacks
  • Offloads SSL/TLS traffic
  • Improves API performance and handles errors and exceptions
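As a sketch of how several of these features map onto configuration, the fragment below combines SSL/TLS offloading with rate limiting. The zone name, certificate paths, and upstream name are illustrative assumptions, not part of any standard setup:

```nginx
# Track clients by IP address, allowing 10 requests/second each
# (zone name and sizes are illustrative)
limit_req_zone $binary_remote_addr zone=api_clients:10m rate=10r/s;

server {
    # Offload SSL/TLS at the gateway (assumed certificate paths)
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/api.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/api.example.com.key;

    location /api/ {
        # Rate limit to prevent service overload, absorbing short bursts
        limit_req zone=api_clients burst=20 nodelay;

        # Route the request to the appropriate backend over plain HTTP
        proxy_pass http://api_backend;
    }
}
```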

How is NGINX currently used as an API gateway?

NGINX can be deployed as an API gateway in three ways:

  • Using native NGINX functionality—some organizations use basic NGINX features to directly manage API traffic. For example, NGINX can identify whether API traffic is HTTP or gRPC, and translate API management requirements into NGINX configurations to receive, route, rate limit, and secure API requests.
  • Extending NGINX with Lua—OpenResty is an open source product based on NGINX, which adds the Lua interpreter to the NGINX core, allowing users to build feature-rich functionality on top of NGINX. Lua modules can also be compiled and loaded into NGINX open source and NGINX Plus builds. There are many NGINX-based open source API gateway implementations, most of which use Lua or OpenResty.
  • NGINX Plus API Gateway—using NGINX’s dedicated API gateway product.
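As a minimal illustration of the second option, an OpenResty-style configuration can attach a Lua handler directly to a route. The port, path, and response below are illustrative assumptions:

```nginx
server {
    listen 8080;

    location /hello {
        # Lua code runs inside the NGINX worker via the embedded interpreter
        content_by_lua_block {
            ngx.header["Content-Type"] = "application/json"
            ngx.say('{"message": "hello from Lua"}')
        }
    }
}
```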

When using the first two options for API management, keep the following in mind:

  • Performance and request latency—Lua is a great way to extend NGINX, but it can degrade NGINX performance. Independent testing of a simple Lua script shows a performance degradation of 50 to 90%. If you choose a solution that relies heavily on Lua, make sure it meets your performance requirements without adding latency to requests.
  • Converged approach—the NGINX server can manage API traffic alongside normal web traffic, and API gateway functionality is a subset of standard NGINX features. Using separate products to manage network or API communications adds complexity to DevOps, CI/CD, monitoring, security, and other application development and delivery capabilities. Using NGINX for both web and API traffic can simplify the architecture.

What are the benefits of the NGINX API Gateway?

The table below compares how a dedicated API gateway and the NGINX reverse proxy handle common use cases for channeling external API calls to internal services.

Use case | API gateway | NGINX reverse proxy
Topology Management | Multiple APIs for accepting configuration changes and facilitating blue-green processes | Uses APIs and service discovery to locate endpoints (NGINX Plus); orchestrates APIs for blue-green deployment
Authorization Offloading | Interrogates authentication tokens in incoming requests | Uses external auth services in addition to internal authentication mechanisms (such as JWTs, API keys, and OpenID Connect)
Vulnerable Application Protection | API- and method-based rate limiting | Limits connections to backend services and imposes rate limits based on criteria such as origin IP address or request parameters
API Lifecycle Management | Rewrites legacy API queries and rejects deprecated API calls | Rewrites requests in various ways and uses a robust decision engine to redirect or respond
Request Routing | Considers the service (Host header), API method (HTTP URL), and parameters to determine where to send a request | Routes requests based on Host header, URL, and other parameters
Additional Protocols | TCP-borne message queues | WebSocket, TCP, UDP

NGINX can control web traffic by translating across protocols like HTTP/2, HTTP, FastCGI, and uwsgi, offering uniform configuration and monitoring platforms. In addition, NGINX is flexible enough to execute in containers with minimal resources required.
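The protocol translation described above can be sketched in configuration. The upstream names and paths are illustrative assumptions:

```nginx
server {
    listen 80;
    listen 8080 http2;  # cleartext HTTP/2 (h2c) listener for gRPC clients

    # Plain HTTP API traffic proxied to an HTTP backend
    location /api/ {
        proxy_pass http://api_backend;
    }

    # gRPC calls forwarded to a gRPC service
    location /helloworld.Greeter/ {
        grpc_pass grpc://grpc_backend;
    }

    # Legacy application served over FastCGI
    location /legacy/ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/legacy-app.sock;
    }
}
```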

Tutorial: Deploying NGINX as an API gateway

Here is a tutorial that explains how to deploy NGINX as an API gateway for a RESTful API that communicates using JSON.

The general process is:

  1. Install NGINX Plus: If you don’t already have NGINX Plus installed, you’ll need to download and install it on your server. You can find detailed installation instructions on the NGINX website.
  2. Define the API endpoints: Next, you’ll need to define the endpoints that your API will expose. An endpoint is a specific location within the API that can be accessed and performs a specific function. For example, you might have an endpoint that returns a list of users or an endpoint that allows users to create a new account. You define your endpoints in an NGINX configuration file, using standard NGINX directives.
  3. Configure the API gateway: Next, you’ll need to configure the API gateway to route requests to the appropriate backend service. You’ll need to specify the location of the backend service and the URL path that the API gateway should use to access it. You can also configure additional functionality, such as rate limiting and data transformation, at this stage.
  4. Test the API gateway: Once you’ve configured the API gateway, you can test it to ensure that it is working correctly. You can use a tool like cURL or Postman to send test requests to the API gateway and verify that it is correctly routing requests to the backend service and returning the correct responses.
  5. Deploy the API gateway: When you’re satisfied that the API gateway is working correctly, you can deploy it to your production environment. You’ll need to make sure that the API gateway is running on a server that is accessible to your clients, and that it is configured to handle the expected load.

Here is what the code looks like:

First, in the main nginx.conf (or a per-site file under sites-enabled, such as <website.conf>), include the file that defines the backend services and open a virtual server:

include api_backends.conf;
server {
  listen 80 default_server;
  listen [::]:80 default_server;

Then, you can define the API gateway routes for a simple RESTful API inside that server block, using standard NGINX directives:

  # get_users and create_user both use the /users URL path
  location /users {
    # These endpoints support GET (list users) and POST (create a user) only
    limit_except GET POST {
      deny all;
    }
    # Forward requests to the backend service at http://backend/api/users
    proxy_pass http://backend/api/users;
  }

  # Route all other requests to the default backend
  location / {
    proxy_pass http://backend;
  }
}

This assumes that api_backends.conf defines an upstream group named backend, for example:

upstream backend {
  server 10.0.0.10:8080;  # replace with the address of your backend service
}

This configuration defines two endpoints, get_users and create_user, in a single location block:

  • The get_users endpoint is a GET request to the /users URL path, routed to the backend service at http://backend/api/users
  • The create_user endpoint is a POST request to the same /users URL path, routed to the same backend service; limit_except rejects any other HTTP method on /users. All other requests are routed to the default backend at http://backend.
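The gateway can then be smoke-tested with cURL, as described in step 4 of the tutorial. The hostname and JSON body below are illustrative, assuming the gateway is listening locally on port 80:

```shell
# List users through the gateway (GET /users)
curl -i http://localhost/users

# Create a user through the gateway (POST /users with a JSON body)
curl -i -X POST http://localhost/users \
     -H "Content-Type: application/json" \
     -d '{"name": "alice"}'
```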

API gateway management with Solo Gloo Gateway

While NGINX can be used for API gateway use cases, many companies are alternatively choosing a more modern, cloud native API gateway architecture based on Envoy Proxy. Solo Gloo Gateway, the leading API gateway based on Envoy Proxy, delivers a more secure, more scalable, and more extensible API gateway than NGINX.

  • Gloo Gateway is Kubernetes native, and able to run on any cloud.
  • Gloo Gateway integrates with GitOps to enable highly automated environments.
  • Gloo Gateway integrates with DevSecOps best practices to ensure compliance with major industry standards.
  • Gloo Gateway integrates with Gloo Mesh (Istio), to help scale as the Kubernetes and microservices environments grow. 

Get started with Gloo Mesh / Gloo Gateway today!