NGINX rate limiting

The basics and 3 code examples

What is NGINX rate limiting?

NGINX is open source software for reverse proxying, caching, web serving, load balancing, and media streaming. NGINX rate limiting lets you restrict the number of HTTP requests a user is allowed to make in a given period.

Rate limiting can help defend against web-based attacks such as:

  • Brute-force password-guessing attacks—rate limiting can slow these attacks down, ensuring bots cannot keep sending login requests without restriction.
  • Distributed Denial of Service (DDoS) attacks—you can limit a website’s incoming request rate to a baseline value that represents normal user behavior, which helps identify targeted URLs and keeps upstream application servers from being flooded by too many simultaneous requests.

NGINX’s rate-limiting feature employs the leaky bucket algorithm, which is widely used in packet-switched computer networks and telecommunications to handle burstiness when bandwidth is limited. When client requests arrive faster than they can be processed, the algorithm queues them and services them on a first-in-first-out (FIFO) basis.
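
In NGINX configuration, this queueing behavior maps to the burst parameter of the limit_req directive when used without nodelay: excess requests wait in the queue and are forwarded to the upstream at the configured rate. The following is a minimal sketch of that behavior; the zone name leaky_demo, the zone size, and the upstream name are illustrative assumptions rather than values from this article.

http {
    # Requests beyond 10 per second are queued (up to 20 of them) and drained
    # first-in-first-out at the configured rate of 10r/s.
    limit_req_zone $binary_remote_addr zone=leaky_demo:10m rate=10r/s;

    server {
        location / {
            limit_req zone=leaky_demo burst=20;
            proxy_pass http://my_backend;
        }
    }
}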


Managing rate limiting with Gloo Mesh and Gloo Gateway

Gloo Gateway (API gateway) and Gloo Mesh (Istio service mesh) both use Envoy proxy for the data plane, an alternative proxy technology to NGINX. Envoy Proxy is a powerful, extensible proxy built in C++ and a graduated project in the Cloud Native Computing Foundation (CNCF). Envoy is not owned by any one vendor, which is a big reason we’ve seen an explosion of projects using it to power Layer 7, including API gateways, service meshes, and even CNIs. Envoy is a more modern proxy, capable of higher scalability and modern extensibility (such as WebAssembly and GraphQL).

If you are considering improving the functionality of your gateway or your service mesh, learn more about how Gloo Gateway and Gloo Mesh can take you beyond where you are today.

3 ways to limit access to proxied HTTP resources in NGINX

You can use NGINX rate limiting to protect upstream web and application servers in the following ways.

1. Limiting the number of connections

To limit the number of concurrent connections to proxied HTTP resources in NGINX, you can use the limit_conn directive together with limit_conn_zone in your NGINX configuration file. limit_conn_zone defines a shared memory zone and the key by which connections are counted (for example, the client IP address), and limit_conn sets the maximum number of simultaneous connections allowed per key.

For example, if you want to allow at most 10 concurrent connections per client IP, tracked in a zone named my_resource, you can use the following configuration:

http {
    # ...
    limit_conn_zone $binary_remote_addr zone=my_resource:10m;
    limit_conn my_resource 10;
    # ...
}

These directives would go in the http block of your NGINX configuration file, along with any other directives that control how NGINX processes incoming requests; placed there, the limit is inherited by every server and location block. The proxied resource itself is defined with the proxy_pass directive in your server block. For example:

server {
    # ...
    location /my_resource {
        proxy_pass http://my_backend;
    }
    # ...
}

In this example, requests to /my_resource on the server are proxied to the http://my_backend upstream, and the inherited limit_conn directive applies, allowing each client IP at most 10 simultaneous connections to that resource.
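
Putting both pieces together, a complete sketch might look like the following; the $binary_remote_addr key and the 10m zone size are illustrative assumptions, and here limit_conn is placed directly in the location block rather than inherited from the http block.

http {
    # Track connections per client IP in a 10 MB shared memory zone.
    limit_conn_zone $binary_remote_addr zone=my_resource:10m;

    server {
        location /my_resource {
            # Each client IP may hold at most 10 simultaneous connections here.
            limit_conn my_resource 10;
            proxy_pass http://my_backend;
        }
    }
}

Scoping the limit to the location keeps other paths on the same server unaffected.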

2. Limiting the request rate

To limit the request rate to proxied HTTP resources in NGINX, you can use the limit_req directive together with limit_req_zone in your NGINX configuration file. limit_req_zone defines a shared memory zone, the key by which requests are counted (for example, the client IP address), and the maximum allowed rate, typically expressed in requests per second. limit_req then enforces that rate wherever it is placed.

For example, if you want to limit each client IP to 10 requests per second, tracked in a zone named my_resource, you can use the following configuration:

http {
    # ...
    limit_req_zone $binary_remote_addr zone=my_resource:10m rate=10r/s;
    limit_req zone=my_resource burst=10 nodelay;
    # ...
}

These directives would go in the http block of your NGINX configuration file, along with any other directives that control how NGINX processes incoming requests. The zone parameter references the shared memory zone defined by limit_req_zone, which also sets the rate (10 requests per second here). The burst parameter allows up to 10 requests above that rate to be accepted during a short spike, and nodelay tells NGINX to forward those burst requests immediately rather than spacing them out at the configured rate; any request beyond the burst is rejected (with a 503 error by default).

The proxied resource itself is defined with the proxy_pass directive in your server block, where the http-level limit_req is inherited. For example:

server {
    # ...
    location /my_resource {
        proxy_pass http://my_backend;
    }
    # ...
}

In this example, requests to /my_resource on the server are proxied to the http://my_backend upstream, and the inherited limit_req directive applies, limiting each client IP to 10 requests per second, with short bursts of up to 10 extra requests allowed.
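
For reference, a combined sketch with the zone definition in the http block and enforcement in the location might look like this; the 10m zone size and the limit_req_status override to 429 are illustrative assumptions.

http {
    # 10 requests per second per client IP, tracked in a 10 MB zone.
    limit_req_zone $binary_remote_addr zone=my_resource:10m rate=10r/s;

    server {
        location /my_resource {
            # Accept short spikes of up to 10 extra requests; reject anything beyond that.
            limit_req zone=my_resource burst=10 nodelay;
            limit_req_status 429;
            proxy_pass http://my_backend;
        }
    }
}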

3. Limiting bandwidth

To limit bandwidth to proxied HTTP resources in NGINX, you can use the limit_rate directive in your NGINX configuration file. The limit_rate directive caps the speed at which NGINX sends a response to a client, per connection, typically expressed in bytes per second. For example, if you want to limit responses to 10 kilobytes per second, you can use the following configuration:

http {
    # ...
    limit_rate 10k;
    # ...
}

This directive would go in the http block of your NGINX configuration file, where it applies to every response NGINX serves. You can also scope the limit to a particular location so that only that resource is throttled. For example:

server {
    # ...
    location /my_resource {
        proxy_pass http://my_backend;
        limit_rate 10k;
    }
    # ...
}

In this example, requests to /my_resource on the server are proxied to the http://my_backend upstream, and the limit_rate directive caps each response from that resource at 10 kilobytes per second per connection.
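
If you only want to slow down large transfers, NGINX also provides the limit_rate_after directive, which lets the first part of each response go at full speed before the cap kicks in; the 500k threshold below is an illustrative assumption.

server {
    # ...
    location /my_resource {
        proxy_pass http://my_backend;
        limit_rate_after 500k;  # the first 500 KB of each response is sent unthrottled
        limit_rate 10k;         # the rest is capped at 10 KB per second per connection
    }
    # ...
}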
