Every blog seems to copy-paste the same “solution” for 504 Gateway Timeout NGINX errors: just increase all the timeouts. Bad advice. Most of those directives don’t need tuning at all—the real fix for 504 Gateway Timeout NGINX problems involves understanding which layer is timing out and adjusting the one directive that matters.
Understanding the 504 Timeout Problem
A 504 Gateway Timeout occurs when NGINX, acting as a proxy, doesn’t receive a response from the upstream server within the configured timeout period. The default timeout is 60 seconds (or 60000ms internally).
The upstream causing the timeout could be:
- Another HTTP server (Node.js, Python, Ruby, Go)
- A caching proxy like Varnish
- PHP-FPM or any FastCGI process
- Any backend that speaks HTTP
The key insight: you only need to increase the timeout that’s actually triggering.
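To see it from the client side, you can time a request against a known-slow endpoint (the URL below is a placeholder for your own slow path); with default settings the 504 shows up right around the 60-second mark:

# -w prints the status code and total time; expect something like "504 60.0s" with defaults
curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" https://example.com/slow-endpoint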
🔍 Which Directive Actually Matters?
For the ngx_http_proxy_module (proxying to HTTP backends), three timeout directives exist:
| Directive | Default | What It Controls |
|---|---|---|
| proxy_connect_timeout | 60s | Time to establish TCP connection |
| proxy_send_timeout | 60s | Time between successive writes to upstream |
| proxy_read_timeout | 60s | Time between successive reads from upstream |
In 99% of 504 Gateway Timeout NGINX cases, proxy_read_timeout is the culprit. Your backend is slow to respond, not slow to accept connections.
The same logic applies to FastCGI:
| Directive | Default | What It Controls |
|---|---|---|
| fastcgi_connect_timeout | 60s | Time to establish FastCGI connection |
| fastcgi_send_timeout | 60s | Time between successive writes |
| fastcgi_read_timeout | 60s | Time between successive reads |
For PHP applications, fastcgi_read_timeout is almost always the one to increase.
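Before touching anything, it's worth checking whether any of these directives are already overridden somewhere in your configuration. One way to do that (assuming your NGINX supports nginx -T, available since 1.9.2) is:

# dump the full effective config and list every timeout override
nginx -T 2>/dev/null | grep -nE "(proxy|fastcgi)_(connect|send|read)_timeout"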
🛠️ The Right Way to Fix 504 Gateway Timeout NGINX Errors
Don’t blindly increase all timeouts. Follow this diagnostic approach:
Step 1: Identify the Layer Causing the Timeout
Check your NGINX error log:
tail -f /var/log/nginx/error.log | grep -i timeout
You’ll see messages like:
upstream timed out (110: Connection timed out) while reading response header from upstream
This tells you it’s a read timeout issue.
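To see which variant dominates in your log, you can group the messages (assuming the default error log path used above):

# count read vs connect vs send timeouts in the error log
grep "timed out" /var/log/nginx/error.log \
  | grep -oE "while (reading|connecting to|sending)" \
  | sort | uniq -c | sort -rn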
Step 2: Check Your Architecture
If NGINX proxies to an HTTP backend (Varnish, Node.js, another NGINX, etc.):
location / {
proxy_pass http://127.0.0.1:6081;
proxy_read_timeout 120s; # Only directive that usually needs changing
}
If NGINX talks directly to PHP-FPM:
location ~ \.php$ {
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_read_timeout 120s; # The critical one for PHP
}
Step 3: Don’t Forget the Actual Backend
If you’re running PHP, you also need to ensure PHP itself doesn’t time out before NGINX does:
; In php.ini or PHP-FPM pool config
max_execution_time = 120
For PHP-FPM specifically:
; In /etc/php-fpm.d/www.conf
request_terminate_timeout = 120
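To confirm what’s actually configured on the PHP side, a quick grep works; the paths below are the typical RHEL/CentOS locations, so adjust for Debian/Ubuntu (/etc/php/*/fpm/):

# look for existing overrides of the two limits
grep -Rn "max_execution_time" /etc/php.ini
grep -Rn "request_terminate_timeout" /etc/php-fpm.d/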
📐 Multi-Layer Architectures (The “NGINX Sandwich”)
Production setups often look like this:
Client → NGINX (TLS) → Varnish → NGINX (Backend) → PHP-FPM
This isn’t just about Varnish—you might proxy to any HTTP application: Node.js, Python (Gunicorn/uWSGI), Ruby (Puma), Go, Java, or even another NGINX instance.
Each layer has its own timeout. The chain looks like:
- Front NGINX (proxy_read_timeout) → timeout waiting for Varnish/backend
- Varnish (.first_byte_timeout) → timeout waiting for backend NGINX
- Backend NGINX (fastcgi_read_timeout) → timeout waiting for PHP
- PHP-FPM (request_terminate_timeout) → kills the PHP script
The rule: Each outer layer should have a timeout ≥ the inner layer.
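As a sketch of a consistent chain (the exact numbers are illustrative, not a recommendation), the values could look like this, largest at the edge and shrinking inward:

# front (TLS) NGINX: outermost layer, largest timeout
proxy_read_timeout 180s;

# Varnish VCL backend definition
.first_byte_timeout = 150s;

# backend NGINX, talking to PHP-FPM
fastcgi_read_timeout 120s;

; PHP-FPM pool config, innermost layer
request_terminate_timeout = 120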
Varnish Backend Timeouts
If you use Varnish, configure the backend in your VCL:
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "8080";
.first_byte_timeout = 120s; # Time to first response byte
}
The .first_byte_timeout is the critical one—it’s how long Varnish waits for the backend to start responding.
🔧 Complete Working Examples
NGINX Proxying to Node.js/Python/Go
upstream app_backend {
server 127.0.0.1:3000;
}
server {
listen 443 ssl http2;
server_name example.com;
location / {
proxy_pass http://app_backend;
proxy_http_version 1.1;
# Increase only if your app has slow endpoints
proxy_read_timeout 120s;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
NGINX with PHP-FPM (No Varnish)
location ~ \.php$ {
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
# For slow PHP operations
fastcgi_read_timeout 120s;
}
WordPress Admin with Extended Timeouts
WordPress admin operations (imports, plugin updates) often need more time:
location ~ ^/wp-admin/.*\.php$ {
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
# Higher timeout for admin operations
fastcgi_read_timeout 300s;
fastcgi_param PHP_ADMIN_VALUE "memory_limit=512M \n max_execution_time=300";
}
📋 Diagnostic Commands
Find Timeout Errors in NGINX
grep -i "timed out" /var/log/nginx/error.log | tail -20
Check PHP-FPM Slow Log
Enable it in your pool config:
slowlog = /var/log/php-fpm/slow.log
request_slowlog_timeout = 5s
Then monitor:
tail -f /var/log/php-fpm/slow.log
Validate Configuration Before Reloading
nginx -t && systemctl reload nginx
⚠️ Common Mistakes
Mistake 1: Increasing proxy_connect_timeout for slow responses
This only affects connection establishment. If your backend is slow to process (not slow to accept connections), this won’t help.
Mistake 2: Setting timeouts higher than PHP’s max_execution_time
If PHP kills the script at 30s but NGINX waits 120s, you’ll get a 502 (Bad Gateway) or blank response, not a 504.
Mistake 3: Forgetting intermediate layers
In a multi-layer setup, if Varnish has a 60s timeout but NGINX has 120s, Varnish will time out first and the client still gets a 503/504, no matter how generous the NGINX timeout is.
Mistake 4: Using the same high timeout everywhere
Don’t set proxy_read_timeout 600s globally. Use specific location blocks for slow endpoints. See our guide on the NGINX location directive for targeting specific URLs.
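As a sketch, a targeted override could look like this, reusing the app_backend upstream from the earlier example (the /export path and 300s value are placeholders for whatever slow endpoint you actually have):

# everything else keeps the default 60s
location / {
    proxy_pass http://app_backend;
}

# only the known-slow endpoint gets a longer read timeout
location /export {
    proxy_pass http://app_backend;
    proxy_read_timeout 300s;
}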
Understanding Error Messages
| Message | Cause | Fix |
|---|---|---|
| upstream timed out (110: Connection timed out) while reading | proxy_read_timeout or fastcgi_read_timeout too low | Increase read timeout |
| upstream timed out (110: Connection timed out) while connecting | proxy_connect_timeout too low OR backend down | Check backend, increase connect timeout |
| upstream prematurely closed connection | Backend crashed or PHP-FPM killed process | Check request_terminate_timeout |
Related: Buffer Tuning
If you’re seeing 502 errors instead of 504, your issue might be proxy buffer sizing rather than timeouts. Large response headers can trigger buffer-related errors that look similar.
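If that matches your symptoms, a starting point could look like the sketch below (the buffer sizes are examples to tune against your real header sizes, not universal values):

location / {
    proxy_pass http://app_backend;
    # room for large upstream response headers (avoids "upstream sent too big header" 502s)
    proxy_buffer_size 16k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;
}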
Best Practice Summary
- Diagnose first: Check logs to identify which timeout triggers
- Increase only what’s needed: Usually just *_read_timeout
- Match the chain: Outer layers ≥ inner layers
- Don’t mask problems: If something takes 5 minutes, fix the code, don’t just increase timeouts
- Use location-specific overrides: Don’t apply high timeouts globally
For most setups, setting proxy_read_timeout or fastcgi_read_timeout to 120s solves the 504 Gateway Timeout NGINX issue. If you need more, ask yourself why—the answer might be a code optimization rather than a configuration change.
