How is NGINX used with Kubernetes?

Kubernetes is an open source platform for automating the deployment, scaling, and management of containerized applications. It lets you deploy your applications as containers, lightweight and portable packages that bundle all the code, libraries, and dependencies an application needs to run. Kubernetes then manages these containers across a fleet of servers, scaling your applications up or down as needed.

NGINX is a web server that can also be used as a reverse proxy, load balancer, and HTTP cache. It is known for its high performance, stability, and low resource usage. As a web server, NGINX is responsible for handling HTTP requests from clients (such as web browsers) and serving them the appropriate responses. As a reverse proxy, NGINX sits in front of one or more web servers and acts as an intermediary, forwarding requests from clients to the appropriate backend server and returning the server’s responses back to the client.

In this article, we will explore the ways in which Kubernetes and NGINX can be used together to deploy and manage applications at scale.

What is NGINX Ingress Controller?

The NGINX Ingress Controller is a production-grade ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer. It provides powerful, application-centric configuration capabilities, including role-based access control (RBAC), a simplified configuration utility, and the ability to reuse existing NGINX configurations.

The NGINX Ingress Controller plays an important role in Kubernetes by managing incoming traffic to the cluster. It acts as a reverse proxy and load balancer, routing incoming requests to the correct service based on the hostname and path of the request. This allows for easy management of the traffic routing rules, and it does not require changes to the application code.

The controller can expose multiple services from within a Kubernetes cluster to the internet. Without an ingress controller, each service would need to be exposed on a separate IP address and port, which can be difficult to manage and scale. Additionally, an ingress controller allows the use of Kubernetes annotations to configure the routing rules and other settings, which makes it easy to manage and update the routing rules without having to make changes to the application code.
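For illustration, here is a minimal sketch of an Ingress that routes two hostnames to two backend services. The service names (my-web, my-api), the example.com hostnames, and the nginx ingress class are placeholder assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    # Requests for www.example.com go to the my-web service
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-web
                port:
                  number: 80
    # Requests for api.example.com go to the my-api service
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 8080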

The controller also provides features such as SSL/TLS termination, support for protocols such as HTTP/2 and gRPC, health checks, rate limiting, and authentication/authorization, which are essential for running production-grade services. The NGINX Plus edition of the controller additionally exposes a live monitoring dashboard for viewing the status of the services behind it, which is useful for troubleshooting and monitoring the incoming traffic to the cluster.
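As an example of one of these features, SSL/TLS termination uses the standard Kubernetes tls block, referencing a Secret that holds the certificate and private key. The hostname, Secret name, and service name below are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  ingressClassName: nginx
  tls:
    # example-tls is a kubernetes.io/tls Secret containing tls.crt and tls.key
    - hosts:
        - secure.example.com
      secretName: example-tls
  rules:
    - host: secure.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80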

Kubernetes NGINX Ingress Controller best practices

Using the NGINX Ingress Controller as a proxy for non-HTTP requests

The NGINX Ingress controller is a popular choice for Kubernetes users because it provides a way to route traffic to Kubernetes services based on the hostname and path of the incoming request. By default, the NGINX Ingress controller only handles HTTP and HTTPS traffic, but it can also be configured to proxy other types of traffic: gRPC backends can be reached by setting the "nginx.ingress.kubernetes.io/backend-protocol" annotation, and raw TCP or UDP services can be exposed through ConfigMaps referenced by the controller's --tcp-services-configmap and --udp-services-configmap flags.

This can be useful if you have applications or services that use non-HTTP protocols, such as gRPC, and you want to use the NGINX Ingress controller as a single point of entry for all incoming traffic. This can help simplify your infrastructure and make it easier to manage.
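A minimal sketch for a gRPC backend, assuming a Service named grpc-service listening on port 50051 and a TLS Secret named grpc-tls (gRPC runs over HTTP/2, so the controller serves it over TLS):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    # Tell NGINX to speak gRPC (HTTP/2) to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - grpc.example.com
      secretName: grpc-tls
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-service
                port:
                  number: 50051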

Enabling cross-origin resource sharing (CORS) 

CORS is a mechanism that allows a web page to make requests to a server in a different domain. By default, web browsers block such requests due to security concerns, but CORS provides a way for servers to opt in to allowing these requests.

If you are using the NGINX Ingress controller to proxy requests to your Kubernetes services, you can enable CORS by setting the "nginx.ingress.kubernetes.io/enable-cors" annotation on the Ingress; the controller then generates the appropriate "add_header" directives for you, and companion annotations such as cors-allow-origin and cors-allow-methods let you fine-tune the policy. This allows your services to accept requests from other domains and lets your applications make cross-origin requests without encountering security errors.
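A minimal sketch, with https://app.example.com standing in for the front-end origin that should be allowed to call the API (all hostnames and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cors-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.example.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 8080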

Rate limiting

Rate limiting is a technique used to control the rate at which clients can access a server or service. It can be used to protect against denial-of-service (DoS) attacks, limit the impact of abusive clients, or simply to ensure that a service is not overwhelmed by too many requests.

If you are using the NGINX Ingress controller, you can enable rate limiting with annotations such as "nginx.ingress.kubernetes.io/limit-rps", which caps the number of requests per second that a single client IP can make (under the hood, the controller translates these annotations into NGINX "limit_req" directives). Related annotations cover requests per minute (limit-rpm), concurrent connections (limit-connections), and exempting trusted clients by source range (limit-whitelist).
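A minimal sketch that allows each client IP ten requests per second while exempting an internal network range (the CIDR, hostnames, and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ratelimited-ingress
  annotations:
    # Each client IP may make at most 10 requests per second
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # Clients in this range are exempt from the limit
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 8080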

Enabling rate limiting can help protect your services from being overwhelmed by too many requests, and it can also help reduce the risk of DoS attacks. However, it is important to carefully consider the appropriate rate limits for your services, as setting them too low may prevent legitimate clients from accessing your services.

Using the default backend

The default backend is a special service that the NGINX Ingress controller uses to handle requests that do not match any of the rules defined in the Ingress resource. This can be useful if you want to send all unmatched requests to a single service, such as a custom 404 page or a maintenance page.

To use the default backend, you need to specify the service in the "spec.defaultBackend" field of the Ingress resource (older API versions called this field "spec.backend"). For example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  defaultBackend:
    service:
      name: my-default-backend
      port:
        number: 80

Enabling access log

Access logs are log entries that are generated when a client makes a request to the NGINX Ingress controller. These logs can contain useful information such as the client’s IP address, the request method (e.g. GET, POST), the request URL, and the response status code.

Access logging is enabled by default in the NGINX Ingress controller, with entries written to /var/log/nginx/access.log. You can change the log location cluster-wide through the controller's ConfigMap using the access-log-path key, and you can turn logging off for an individual Ingress with the nginx.ingress.kubernetes.io/enable-access-log annotation. For example:

apiVersion: v1
kind: ConfigMap
metadata:
  # The controller's ConfigMap; its name and namespace depend on how
  # the controller was installed
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  access-log-path: "/var/log/nginx/access.log"
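Conversely, logging can be switched off for a particularly noisy Ingress (a sketch with placeholder names):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Suppress access log entries for requests handled by this Ingress
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  rules:
    - host: my-service.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80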

Enabling access logs can be useful for troubleshooting and monitoring purposes. You can use tools like Logstash or Fluentd to collect and analyze the logs, or you can use them to trigger alerts or take other actions based on specific patterns or conditions in the logs.

Other Kubernetes NGINX products and services

NGINX Plus

NGINX Plus is a reverse proxy and load balancer that can fill multiple roles. It is the enterprise-ready, fully supported version of NGINX Open Source. Features specific to Kubernetes include:

  • Sidecar – a dedicated container that runs alongside your application container in a Kubernetes pod, offloading functionality required by applications running in a service mesh environment.
  • Ingress Controller – an ingress controller that helps Kubernetes clusters manage ingress and egress traffic.
  • Firewall proxy – a firewall proxy for services and pods.
  • API gateway – an API gateway that manages service-to-service communication between containers and pods.

NGINX Service Mesh

NGINX Service Mesh is a lightweight service mesh with data plane security, scalability, and cluster-wide traffic management. It provides a secure solution for ingress and egress management, designed to support Kubernetes implementations.

NGINX Kubernetes Gateway

NGINX Kubernetes Gateway, currently in beta, is a controller that implements the Kubernetes Gateway API specification, which evolved from the Kubernetes Ingress API specification. The Gateway API is an open source project maintained by the Kubernetes Network Special Interest Group (SIG Network) community to improve and standardize service networking in Kubernetes. NGINX is actively contributing to the project.

NGINX Kubernetes Gateway solves the challenge of enabling multiple teams to manage Kubernetes infrastructure in modern environments. It also simplifies deployment and management by providing many features natively, without requiring custom resource definitions (CRDs). NGINX Kubernetes Gateway leverages proven NGINX technology as the data plane for improved performance, visibility, and security.

NGINX Kubernetes Gateway maps three key Gateway API resources (GatewayClass, Gateway, and Routes) to the relevant roles (infrastructure provider, cluster operator, and application developer) using role-based access control (RBAC). Clarifying responsibilities and separating the various roles simplifies and streamlines management, as sketched in the example after this list:

  • Infrastructure Providers define GatewayClasses for Kubernetes clusters.
  • Cluster Operators deploy and configure gateways, including policies, on clusters.
  • Application Developers are free to attach routes to the gateway and expose their applications to the outside world.
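A minimal sketch of how the three resources fit together. The v1beta1 API version reflects the Gateway API's beta status, and all names, including the controllerName, are illustrative rather than an official NGINX Kubernetes Gateway configuration:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: nginx                  # defined by the infrastructure provider
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway         # deployed and configured by the cluster operator
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-app-route           # attached to the gateway by the application developer
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - "my-app.example.com"
  rules:
    - backendRefs:
        - name: my-app
          port: 80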

NGINX Kubernetes Gateway also simplifies the deployment and management of service meshes in Kubernetes environments, and reduces the need for CRDs, by providing built-in core functionality for most use cases. 

The beta implementation of the NGINX Kubernetes Gateway is focused on providing ingress controller functionality for Layer 7 (HTTP and HTTPS) routing.

Enabling Envoy Proxy with Solo Gloo Mesh

While NGINX is a popular proxy technology, Envoy Proxy has proven to be a more scalable and more modular technology for cloud native application use cases. Solo Gloo Mesh leverages Envoy Proxy as the core engine within the Istio Service Mesh.  

Learn more about Gloo Mesh.
