
NGINX Reverse Proxy: The Complete Guide

An NGINX reverse proxy sits between clients and your backend servers, forwarding client requests and returning responses. This architecture improves security, enables load balancing, and provides SSL termination in a single, high-performance layer.

This guide covers everything you need to configure NGINX as a reverse proxy—from basic proxy_pass directives to advanced load balancing, WebSocket support, and troubleshooting common errors like 502 Bad Gateway and 504 Gateway Timeout.

What is a Reverse Proxy?

A reverse proxy receives requests from clients and forwards them to one or more backend servers. Unlike a forward proxy (which clients use to access external resources), a reverse proxy hides your backend infrastructure from the outside world.

Benefits of using NGINX as a reverse proxy:

  - Backend servers stay hidden from clients, reducing the attack surface
  - A single entry point can load balance traffic across multiple backends
  - SSL/TLS termination is handled in one place instead of on every backend
  - Caching, compression, and rate limiting can all be applied at the edge

NGINX is the most popular choice for reverse proxy deployments due to its event-driven architecture, low memory footprint, and ability to handle thousands of concurrent connections efficiently.

Installing NGINX

On RHEL-based distributions (Rocky Linux, AlmaLinux, CentOS Stream):

dnf install nginx
systemctl enable --now nginx

On Debian/Ubuntu:

apt install nginx
systemctl enable --now nginx
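
Verify that the service is running and responding before adding any proxy configuration:

nginx -v
systemctl status nginx --no-pager
curl -I http://127.0.0.1/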

Basic proxy_pass Configuration

The proxy_pass directive is the foundation of NGINX reverse proxy configuration. It tells NGINX where to forward incoming requests. See the official NGINX proxy_pass documentation for complete syntax details.

Minimal Configuration

Create a configuration file in /etc/nginx/conf.d/:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

This NGINX reverse proxy configuration forwards all requests to a backend running on port 3000.
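
After saving the file, validate the syntax and reload NGINX, then confirm the proxy answers (this assumes a backend is already listening on port 3000):

nginx -t && systemctl reload nginx
curl -i -H "Host: example.com" http://127.0.0.1/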

Understanding URI Handling

The trailing slash in proxy_pass matters significantly:

# Request: /api/users
# Backend receives: /api/users
location /api/ {
    proxy_pass http://backend;
}

# Request: /api/users  
# Backend receives: /users (path stripped)
location /api/ {
    proxy_pass http://backend/;
}

When proxy_pass includes a URI (even just /), NGINX replaces the matched location with the specified URI. Without a URI, the full original path passes through unchanged.
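
The same replacement rule lets you remap a prefix entirely. In this sketch, the /v2/ backend path is illustrative:

# Request: /api/users
# Backend receives: /v2/users (matched /api/ prefix replaced with /v2/)
location /api/ {
    proxy_pass http://backend/v2/;
}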

Essential Proxy Headers

Backend applications often need client information that gets lost in the proxy process. Configure these headers to pass critical data through your NGINX reverse proxy:

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}

Header Explanations

Header              Purpose
------              -------
Host                Original host requested by the client
X-Real-IP           The client's actual IP address
X-Forwarded-For     Chain of proxy IPs, including the client
X-Forwarded-Proto   Original protocol (http or https)
X-Forwarded-Host    Original Host header
X-Forwarded-Port    Original port number

The $proxy_add_x_forwarded_for variable appends the client IP to any existing X-Forwarded-For header, maintaining the full proxy chain.
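
If NGINX itself sits behind another proxy or CDN, $remote_addr holds that proxy's address rather than the client's. The stock realip module can restore the original client IP; a minimal sketch, assuming the front proxy's address is 10.0.0.5:

set_real_ip_from 10.0.0.5;        # trusted address of the proxy in front of NGINX (adjust to yours)
real_ip_header X-Forwarded-For;   # take the client IP from this header
real_ip_recursive on;             # skip trusted addresses when walking the chain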

Upstream Blocks for Load Balancing

For multiple backend servers, define an upstream block. This is where the NGINX reverse proxy truly shines for high-availability architectures:

upstream backend_servers {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Load Balancing Methods

NGINX offers several load balancing algorithms:

Round Robin (default):

upstream backend {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}

Least Connections:

upstream backend {
    least_conn;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}

IP Hash (session persistence):

upstream backend {
    ip_hash;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}

Weighted Distribution:

upstream backend {
    server 192.168.1.10:3000 weight=5;
    server 192.168.1.11:3000 weight=3;
    server 192.168.1.12:3000 weight=1;
}

Server Parameters

Configure server behavior with these parameters:

upstream backend {
    server 192.168.1.10:3000 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 weight=3;
    server 192.168.1.12:3000 backup;
    server 192.168.1.13:3000 down;
}

Parameter        Description
---------        -----------
weight=N         Server weight for load distribution (default: 1)
max_fails=N      Failed attempts before the server is marked unavailable (default: 1)
fail_timeout=Ns  How long the server stays unavailable after max_fails (default: 10s)
backup           Used only when the primary servers are unavailable
down             Marks the server as permanently unavailable

Connection Keepalive

Enable HTTP keepalive connections to backends for better performance:

upstream backend {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;

    keepalive 32;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}

Critical: When using keepalive, you must set proxy_http_version 1.1 and clear the Connection header. HTTP/1.0 closes connections by default, defeating keepalive.

The keepalive 32 directive maintains up to 32 idle connections per worker process. This eliminates connection establishment overhead for subsequent requests.

Buffering Configuration

NGINX buffers responses from backends before sending them to clients. This frees backend servers quickly while your NGINX reverse proxy handles slow client connections.

location / {
    proxy_pass http://backend;

    proxy_buffering on;
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;
    proxy_max_temp_file_size 1024m;
}

Buffer Directive Reference

Directive                 Default   Purpose
---------                 -------   -------
proxy_buffering           on        Enable or disable response buffering
proxy_buffer_size         4k/8k     Buffer for response headers
proxy_buffers             8 4k/8k   Number and size of response body buffers
proxy_busy_buffers_size   8k/16k    Maximum size sent to the client while buffering continues
proxy_max_temp_file_size  1024m     Maximum temp file size when buffers overflow

When to Disable Buffering

Disable buffering for streaming responses or Server-Sent Events:

location /stream/ {
    proxy_pass http://backend;
    proxy_buffering off;
    proxy_cache off;
}

With buffering disabled, NGINX relays data immediately, improving Time To First Byte (TTFB) for streaming content.
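
Backends can also opt out of buffering per response by sending an X-Accel-Buffering: no header, which NGINX honors by default. To force buffering regardless of what the backend sends, ignore that header:

location /api/ {
    proxy_pass http://backend;
    proxy_ignore_headers X-Accel-Buffering;  # always buffer, even if the backend asks not to
}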

Timeout Configuration

Configure timeouts to handle slow backends and prevent hung connections:

location / {
    proxy_pass http://backend;

    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}

Timeout Breakdown

Directive              Default  Purpose
---------              -------  -------
proxy_connect_timeout  60s      Time to establish a connection with the backend
proxy_send_timeout     60s      Time between consecutive writes to the backend
proxy_read_timeout     60s      Time between consecutive reads from the backend

Important: These timeouts measure time between operations, not total request time. A response arriving in chunks resets the read timeout with each chunk.

For long-running requests (file uploads, report generation), increase proxy_read_timeout:

location /api/reports/ {
    proxy_pass http://backend;
    proxy_read_timeout 300s;
}
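
Uploads also involve limits on the client-to-NGINX side. A sketch for a dedicated upload endpoint (the /upload/ path and size values are illustrative):

location /upload/ {
    client_max_body_size 100m;      # allow large request bodies
    client_body_timeout 120s;       # time allowed between reads of the client body
    proxy_request_buffering off;    # stream the body to the backend as it arrives
    proxy_pass http://backend;
    proxy_read_timeout 300s;
}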

SSL/TLS Termination

Handle HTTPS at the proxy layer while communicating with backends over HTTP:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

Proxying to HTTPS Backends

When backends require HTTPS:

location / {
    proxy_pass https://backend:443;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    proxy_ssl_server_name on;
}
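
If the backend additionally requires mutual TLS, NGINX can present a client certificate. The certificate paths below are placeholders:

location / {
    proxy_pass https://backend:443;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    proxy_ssl_server_name on;
    proxy_ssl_certificate /etc/nginx/ssl/client.crt;        # client cert presented to the backend
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
}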

WebSocket Proxying

WebSocket connections require special handling because they upgrade from HTTP. The NGINX reverse proxy must pass the upgrade headers correctly:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      close;
}

server {
    listen 80;
    server_name example.com;

    location /ws/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}

The map directive sets Connection: upgrade when the client sends an Upgrade header, or Connection: close otherwise.

Increase timeouts significantly for WebSockets since connections remain open indefinitely.
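
To verify the upgrade handshake passes through, you can send a raw upgrade request with curl; a working setup should answer with HTTP/1.1 101 Switching Protocols (the key below is the RFC 6455 sample nonce):

curl -i -N \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
    http://example.com/ws/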

Real-World Backend Examples

Node.js Application

upstream nodejs_app {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://nodejs_app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_read_timeout 60s;
        proxy_buffering on;
    }

    location /socket.io/ {
        proxy_pass http://nodejs_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400s;
    }
}

Python with Gunicorn

upstream gunicorn_app {
    server unix:/run/gunicorn/app.sock;
    keepalive 32;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://gunicorn_app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Script-Name "";

        proxy_redirect off;
    }

    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
    }
}

Java Application (Tomcat/Spring Boot)

upstream java_app {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    keepalive 64;
}

server {
    listen 80;
    server_name java.example.com;

    client_max_body_size 50m;

    location / {
        proxy_pass http://java_app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 10s;
        proxy_read_timeout 120s;

        proxy_buffer_size 8k;
        proxy_buffers 16 8k;
    }
}

Failover and High Availability

Configure automatic failover to healthy backends:

upstream backend {
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:3000 backup;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 30s;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The proxy_next_upstream directive controls which conditions trigger failover to the next server. Available options: error, timeout, invalid_header, http_500, http_502, http_503, http_504, http_403, http_404, http_429, non_idempotent, and off. Use non_idempotent with care, since it allows NGINX to retry POST, LOCK, and PATCH requests that may have already executed on the failed server.

Troubleshooting Common Errors

502 Bad Gateway

A 502 error means NGINX received an invalid response from the backend or failed to connect.

Common causes:

  1. Backend not running
    systemctl status your-app
    curl -v http://127.0.0.1:3000/
    
  2. Wrong port or socket path
    ss -tlnp | grep 3000
    ls -la /run/gunicorn/app.sock
    
  3. Firewall blocking connection
    firewall-cmd --list-all
    
  4. Response headers too large

    Check the error log for “upstream sent too big header”:

    tail -f /var/log/nginx/error.log
    

    Increase buffer size:

    proxy_buffer_size 8k;
    proxy_buffers 16 8k;
    
  5. SELinux blocking network connection
    setsebool -P httpd_can_network_connect 1
    

504 Gateway Timeout

A 504 error occurs when the backend takes too long to respond.

Solutions:

  1. Increase read timeout
    proxy_read_timeout 300s;
    
  2. Check backend performance
    time curl http://127.0.0.1:3000/slow-endpoint
    
  3. Optimize long-running operations

    Consider returning immediately with a job ID and providing a status endpoint.

Connection Refused

Check if the backend is listening on the expected address and port:

ss -tlnp | grep LISTEN
curl -v http://127.0.0.1:3000/

If using Unix sockets, verify permissions:

ls -la /run/gunicorn/app.sock
# Should be readable by nginx user
chmod 660 /run/gunicorn/app.sock
chown nginx:nginx /run/gunicorn/app.sock

Debugging Tips

Enable detailed error logging:

error_log /var/log/nginx/error.log debug;

View basic connection statistics with stub_status (it reports worker connection counts, not per-upstream health):

location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

Check configuration syntax:

nginx -t
nginx -T | grep proxy
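
For proxy troubleshooting, an access log that records which upstream served each request is often more useful than debug-level error logging. Define it in the http context:

log_format upstreamlog '$remote_addr "$request" status=$status '
                       'upstream=$upstream_addr upstream_status=$upstream_status '
                       'request_time=$request_time upstream_time=$upstream_response_time';

access_log /var/log/nginx/upstream.log upstreamlog;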

Security Best Practices

Hide Backend Server Information

proxy_hide_header X-Powered-By;
proxy_hide_header Server;
proxy_hide_header X-AspNet-Version;

proxy_pass_header Date;

Rate Limiting

limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://backend;
    }
}
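
By default NGINX rejects rate-limited requests with a 503; returning 429 Too Many Requests signals the condition more accurately to clients:

location /api/ {
    limit_req zone=api burst=20 nodelay;
    limit_req_status 429;
    proxy_pass http://backend;
}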

Request Size Limits

client_max_body_size 10m;
client_body_buffer_size 128k;

Restrict Access

location /admin/ {
    allow 10.0.0.0/8;
    allow 192.168.1.0/24;
    deny all;

    proxy_pass http://backend;
}

Performance Optimization

Compression

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml;

Caching

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_cache cache;
        proxy_cache_valid 200 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
        proxy_cache_lock on;

        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://backend;
    }
}
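
To avoid serving cached pages to logged-in users, bypass the cache when a session cookie is present. The cookie name "session" here is an assumption; use whatever your application sets:

location / {
    proxy_cache cache;
    proxy_cache_bypass $cookie_session;   # fetch fresh content for active sessions
    proxy_no_cache $cookie_session;       # and do not store their responses
    proxy_pass http://backend;
}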

Connection Pooling

upstream backend {
    server 127.0.0.1:3000;
    keepalive 64;
    keepalive_requests 10000;
    keepalive_timeout 60s;
}

Complete Production Configuration

Here is a complete production-ready NGINX reverse proxy configuration:

upstream app_backend {
    least_conn;
    server 192.168.1.10:3000 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 weight=3 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:3000 backup;

    keepalive 64;
    keepalive_requests 10000;
    keepalive_timeout 60s;
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    client_max_body_size 50m;
    client_body_buffer_size 128k;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        proxy_buffering on;
        proxy_buffer_size 8k;
        proxy_buffers 16 8k;
        proxy_busy_buffers_size 16k;

        proxy_connect_timeout 10s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 30s;

        proxy_hide_header X-Powered-By;
    }

    location /health {
        access_log off;
        return 200 "healthyn";
        add_header Content-Type text/plain;
    }
}

Summary

Configuring NGINX as a reverse proxy involves understanding several key areas: the proxy_pass directive and its URI-handling rules, the headers that carry real client information to backends, upstream blocks for load balancing and failover, buffering and timeout tuning, SSL/TLS termination, and WebSocket upgrade handling.

Start with a basic configuration and add complexity as needed. Always test configuration changes with nginx -t before reloading, and monitor your error logs when troubleshooting issues.

For more advanced topics, see our guides on tuning proxy_buffer_size and fixing 504 Gateway Timeout errors.


Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization • Linux system administration • Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
