An NGINX reverse proxy sits between clients and your backend servers, forwarding client requests and returning responses. This architecture improves security, enables load balancing, and provides SSL termination in a single, high-performance layer.
This guide covers everything you need to configure NGINX as a reverse proxy: from basic proxy_pass directives to advanced load balancing, WebSocket support, and troubleshooting common errors like 502 Bad Gateway and 504 Gateway Timeout.
What is a Reverse Proxy?
A reverse proxy receives requests from clients and forwards them to one or more backend servers. Unlike a forward proxy (which clients use to access external resources), a reverse proxy hides your backend infrastructure from the outside world.
Benefits of using NGINX as a reverse proxy:
- Load balancing across multiple backend servers
- SSL/TLS termination to offload encryption from backends
- Caching to reduce backend load
- Compression to minimize bandwidth
- Security by hiding backend server details
- Centralized logging and monitoring
NGINX is one of the most popular choices for reverse proxy deployments due to its event-driven architecture, low memory footprint, and ability to handle thousands of concurrent connections efficiently.
Installing NGINX
On RHEL-based distributions (Rocky Linux, AlmaLinux, CentOS Stream):
```bash
dnf install nginx
systemctl enable --now nginx
```
On Debian/Ubuntu:
```bash
apt install nginx
systemctl enable --now nginx
```
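To confirm the service came up, a quick sanity check (assuming a stock install serving the default welcome page on port 80):

```bash
# Show the installed version
nginx -v
# The default page should answer locally
curl -I http://127.0.0.1/
```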
Basic proxy_pass Configuration
The proxy_pass directive is the foundation of NGINX reverse proxy configuration. It tells NGINX where to forward incoming requests. See the official NGINX proxy_pass documentation for complete syntax details.
Minimal Configuration
Create a configuration file in /etc/nginx/conf.d/:
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```
This NGINX reverse proxy configuration forwards all requests to a backend running on port 3000.
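After saving the file, validate the syntax and reload, then send a test request through the proxy (standard workflow; adjust the Host header to match your server_name):

```bash
nginx -t                 # syntax check
systemctl reload nginx   # apply without dropping connections
curl -H "Host: example.com" http://127.0.0.1/
```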
Understanding URI Handling
The trailing slash in proxy_pass matters significantly:
```nginx
# Request: /api/users
# Backend receives: /api/users
location /api/ {
    proxy_pass http://backend;
}
```

```nginx
# Request: /api/users
# Backend receives: /users (path stripped)
location /api/ {
    proxy_pass http://backend/;
}
```
When proxy_pass includes a URI (even just /), NGINX replaces the matched location with the specified URI. Without a URI, the full original path passes through unchanged.
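One related caveat: inside a location defined by a regular expression, proxy_pass may not include a URI part. If you need to strip a prefix there, rewrite the URI first. A minimal sketch, assuming a hypothetical /old-api/ prefix:

```nginx
location ~ ^/old-api/ {
    # proxy_pass with a URI is not allowed inside regex locations,
    # so strip the prefix with rewrite before proxying
    rewrite ^/old-api/(.*)$ /$1 break;
    proxy_pass http://backend;
}
```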
Essential Proxy Headers
Backend applications often need client information that gets lost in the proxy process. Configure these headers to pass critical data through your NGINX reverse proxy:
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
```
Header Explanations
| Header | Purpose |
|---|---|
| `Host` | Original host requested by client |
| `X-Real-IP` | Client's actual IP address |
| `X-Forwarded-For` | Chain of proxy IPs including client |
| `X-Forwarded-Proto` | Original protocol (http or https) |
| `X-Forwarded-Host` | Original host header |
| `X-Forwarded-Port` | Original port number |
The $proxy_add_x_forwarded_for variable appends the client IP to any existing X-Forwarded-For header, maintaining the full proxy chain.
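For instance, suppose a client at 203.0.113.7 reaches NGINX through an intermediate proxy at 198.51.100.20 that already set the header (both addresses are illustrative). NGINX appends the address it sees, so the backend receives:

```
X-Forwarded-For: 203.0.113.7, 198.51.100.20
```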
Upstream Blocks for Load Balancing
For multiple backend servers, define an upstream block. This is where the NGINX reverse proxy truly shines for high-availability architectures:
```nginx
upstream backend_servers {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Load Balancing Methods
NGINX offers several load balancing algorithms:
Round Robin (default):
```nginx
upstream backend {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
```
Least Connections:
```nginx
upstream backend {
    least_conn;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
```
IP Hash (session persistence):
```nginx
upstream backend {
    ip_hash;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
}
```
Weighted Distribution:
```nginx
upstream backend {
    server 192.168.1.10:3000 weight=5;
    server 192.168.1.11:3000 weight=3;
    server 192.168.1.12:3000 weight=1;
}
```
Server Parameters
Configure server behavior with these parameters:
```nginx
upstream backend {
    server 192.168.1.10:3000 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 weight=3;
    server 192.168.1.12:3000 backup;
    server 192.168.1.13:3000 down;
}
```
| Parameter | Description |
|---|---|
| `weight=N` | Server weight for load distribution (default: 1) |
| `max_fails=N` | Failed attempts before marking server unavailable (default: 1) |
| `fail_timeout=Ns` | Time server stays unavailable after max_fails (default: 10s) |
| `backup` | Only used when primary servers are unavailable |
| `down` | Permanently marks server as unavailable |
Connection Keepalive
Enable HTTP keepalive connections to backends for better performance:
```nginx
upstream backend {
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;

    keepalive 32;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}
```
Critical: When using keepalive, you must set proxy_http_version 1.1 and clear the Connection header. HTTP/1.0 closes connections by default, defeating keepalive.
The keepalive 32 directive maintains up to 32 idle connections per worker process. This eliminates connection establishment overhead for subsequent requests.
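One way to confirm reuse is actually happening is to watch established connections from NGINX to a backend (port 3000 matches the examples above); under steady load the count should stay roughly flat instead of churning:

```bash
# List established connections to the backend port
ss -tn state established '( dport = :3000 )'
```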
Buffering Configuration
NGINX buffers responses from backends before sending them to clients. This frees backend servers quickly while your NGINX reverse proxy handles slow client connections.
```nginx
location / {
    proxy_pass http://backend;
    proxy_buffering on;
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;
    proxy_max_temp_file_size 1024m;
}
```
Buffer Directive Reference
| Directive | Default | Purpose |
|---|---|---|
| `proxy_buffering` | on | Enable/disable response buffering |
| `proxy_buffer_size` | 4k/8k | Buffer for response headers |
| `proxy_buffers` | 8 4k/8k | Number and size of response body buffers |
| `proxy_busy_buffers_size` | 8k/16k | Maximum size sent to client while buffering continues |
| `proxy_max_temp_file_size` | 1024m | Maximum size of temp file when buffers overflow |
When to Disable Buffering
Disable buffering for streaming responses or Server-Sent Events:
```nginx
location /stream/ {
    proxy_pass http://backend;
    proxy_buffering off;
    proxy_cache off;
}
```
With buffering disabled, NGINX relays data immediately, improving Time To First Byte (TTFB) for streaming content.
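Buffering can also be controlled per response: NGINX honors an X-Accel-Buffering header sent by the backend, so only the endpoints that stream need to opt out. For example, a backend can include this header on a streaming response:

```
X-Accel-Buffering: no
```

If you want NGINX to ignore this header instead, add proxy_ignore_headers X-Accel-Buffering; to the location.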
Timeout Configuration
Configure timeouts to handle slow backends and prevent hung connections:
```nginx
location / {
    proxy_pass http://backend;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
}
```
Timeout Breakdown
| Directive | Default | Purpose |
|---|---|---|
| `proxy_connect_timeout` | 60s | Time to establish connection with backend |
| `proxy_send_timeout` | 60s | Time between consecutive write operations to backend |
| `proxy_read_timeout` | 60s | Time between consecutive read operations from backend |
Important: These timeouts measure time between operations, not total request time. A response arriving in chunks resets the read timeout with each chunk.
For long-running requests (file uploads, report generation), increase proxy_read_timeout:
```nginx
location /api/reports/ {
    proxy_pass http://backend;
    proxy_read_timeout 300s;
}
```
SSL/TLS Termination
Handle HTTPS at the proxy layer while communicating with backends over HTTP:
```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```
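The certificate paths above follow Let's Encrypt's standard layout. Assuming certbot and its NGINX plugin are installed, one way to obtain a certificate and let certbot adjust the server block for you:

```bash
certbot --nginx -d example.com
```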
Proxying to HTTPS Backends
When backends require HTTPS:
```nginx
location / {
    proxy_pass https://backend:443;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    proxy_ssl_server_name on;
}
```
WebSocket Proxying
WebSocket connections require special handling because they upgrade from HTTP. The NGINX reverse proxy must pass the upgrade headers correctly:
```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      close;
}

server {
    listen 80;
    server_name example.com;

    location /ws/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
```
The map directive, which must be defined at the http level (files in /etc/nginx/conf.d/ are included there), sets Connection: upgrade when the client sends an Upgrade header, or Connection: close otherwise.
Increase timeouts significantly for WebSockets, since connections remain open indefinitely.
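To spot-check that the upgrade headers survive the proxy, you can hand-roll a handshake with curl (the Sec-WebSocket-Key below is the sample value from RFC 6455); a 101 Switching Protocols response means the upgrade went through:

```bash
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://example.com/ws/
```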
Real-World Backend Examples
Node.js Application
```nginx
upstream nodejs_app {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://nodejs_app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 60s;
        proxy_buffering on;
    }

    location /socket.io/ {
        proxy_pass http://nodejs_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400s;
    }
}
```
Python with Gunicorn
```nginx
upstream gunicorn_app {
    server unix:/run/gunicorn/app.sock;
    keepalive 32;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://gunicorn_app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Script-Name "";
        proxy_redirect off;
    }

    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
    }
}
```
Java Application (Tomcat/Spring Boot)
```nginx
upstream java_app {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    keepalive 64;
}

server {
    listen 80;
    server_name java.example.com;
    client_max_body_size 50m;

    location / {
        proxy_pass http://java_app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 10s;
        proxy_read_timeout 120s;
        proxy_buffer_size 8k;
        proxy_buffers 16 8k;
    }
}
```
Failover and High Availability
Configure automatic failover to healthy backends:
```nginx
upstream backend {
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:3000 backup;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 30s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
The proxy_next_upstream directive controls which errors trigger failover to the next server. Available options:
- `error`: Connection error
- `timeout`: Connection or read timeout
- `http_502`: Backend returned 502
- `http_503`: Backend returned 503
- `http_504`: Backend returned 504
- `http_500`: Backend returned 500
- `http_403`: Backend returned 403
- `http_404`: Backend returned 404
- `non_idempotent`: Retry non-idempotent requests (POST, PATCH)
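Use non_idempotent with care: a POST that reached a failing backend may have been processed before the error occurred, so retrying it can duplicate writes.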
Troubleshooting Common Errors
502 Bad Gateway
A 502 error means NGINX received an invalid response from the backend or failed to connect.
Common causes:
- Backend not running:

  ```bash
  systemctl status your-app
  curl -v http://127.0.0.1:3000/
  ```

- Wrong port or socket path:

  ```bash
  ss -tlnp | grep 3000
  ls -la /run/gunicorn/app.sock
  ```

- Firewall blocking the connection:

  ```bash
  firewall-cmd --list-all
  ```

- Response headers too large. Check the error log for "upstream sent too big header":

  ```bash
  tail -f /var/log/nginx/error.log
  ```

  Increase the buffer size:

  ```nginx
  proxy_buffer_size 8k;
  proxy_buffers 16 8k;
  ```

- SELinux blocking the network connection:

  ```bash
  setsebool -P httpd_can_network_connect 1
  ```
504 Gateway Timeout
A 504 error occurs when the backend takes too long to respond.
Solutions:
- Increase the read timeout:

  ```nginx
  proxy_read_timeout 300s;
  ```

- Check backend performance:

  ```bash
  time curl http://127.0.0.1:3000/slow-endpoint
  ```

- Optimize long-running operations. Consider returning immediately with a job ID and providing a status endpoint the client can poll.
