Understanding NGINX timeout configuration is essential for running stable production servers. Misconfigured timeouts cause either premature connection drops (leading to 504 errors) or resource exhaustion from connections held open too long. This guide covers every NGINX timeout directive, explains when to use each one, and provides production-ready configurations.
The Four Essential NGINX Timeout Directives
NGINX has dozens of timeout-related directives, but four core ones cover most use cases:
| Directive | Default | Purpose |
|---|---|---|
| proxy_read_timeout | 60s | Time to wait for upstream response |
| client_body_timeout | 60s | Time between client sending request body chunks |
| keepalive_timeout | 75s | How long idle connections stay open |
| send_timeout | 60s | Time between sending response chunks to client |
Each NGINX timeout directive operates at a different point in the request lifecycle. Understanding when each applies prevents both premature disconnections and resource waste.
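Here is a minimal sketch showing where each of the four directives sits in that lifecycle (values are the defaults; the backend address is a placeholder):

```nginx
server {
    listen 80;

    # Idle phase: close keep-alive connections after 75s with no new request
    keepalive_timeout 75s;

    # Request phase: maximum gap between reads of the client request body
    client_body_timeout 60s;

    # Response phase: maximum gap between writes of the response to the client
    send_timeout 60s;

    location / {
        proxy_pass http://127.0.0.1:8000;  # placeholder backend

        # Upstream phase: maximum gap between reads from the backend
        proxy_read_timeout 60s;
    }
}
```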
proxy_read_timeout: The 504 Error Culprit
When NGINX acts as a reverse proxy, proxy_read_timeout is the most critical NGINX timeout setting. It defines how long NGINX waits for the upstream server to send response data. See the official NGINX proxy_read_timeout documentation for the authoritative reference.
Default Behavior
The default proxy_read_timeout is 60 seconds. If your backend needs longer to process a request, you’ll see this in your error log:
```
upstream timed out (110: Connection timed out) while reading response header from upstream
```
And clients receive a 504 Gateway Timeout response.
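From the client side, you can confirm the pattern by timing the request along with the status code; a quick check against a hypothetical slow endpoint:

```bash
# A 504 arriving after roughly 60s points at proxy_read_timeout
curl -s -o /dev/null -w "status=%{http_code} time=%{time_total}s\n" \
  https://example.com/slow-endpoint
```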
When to Increase proxy_read_timeout
Increase this NGINX timeout value when:
- Running long-running API operations
- Processing large file uploads or exports
- Executing slow database queries
- Handling report generation or PDF creation
Configuration
```nginx
location / {
    proxy_pass http://backend;

    # Increase for slow operations
    proxy_read_timeout 120s;
}
```
For location-specific tuning (recommended over global changes):
```nginx
# Normal endpoints keep the 60s default
location / {
    proxy_pass http://backend;
}

# Slow admin operations get more time
location /admin/reports {
    proxy_pass http://backend;
    proxy_read_timeout 300s;
}
```
Related Upstream Timeouts
Two companion directives complete the upstream NGINX timeout trio:
```nginx
proxy_connect_timeout 60s;  # Time to establish TCP connection
proxy_send_timeout    60s;  # Time between writes to upstream
proxy_read_timeout    60s;  # Time between reads from upstream
```
In practice, proxy_read_timeout is usually the only one of the three that needs adjustment. Connection establishment (proxy_connect_timeout) rarely takes more than a few seconds, and request bodies are usually sent quickly (proxy_send_timeout).
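One exception worth noting: tightening proxy_connect_timeout can act as a fail-fast measure when the upstream group has several servers. A sketch, assuming backends on a local network:

```nginx
location / {
    proxy_pass http://backend;
    proxy_connect_timeout 5s;           # TCP connect on a LAN should be near-instant
    proxy_next_upstream error timeout;  # move to the next server instead of hanging
}
```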
For comprehensive troubleshooting of upstream timeout issues, see our dedicated guide on 504 Gateway Timeout NGINX errors.
client_body_timeout: File Upload Timeouts
The client_body_timeout directive controls how long NGINX waits between receiving chunks of the request body from the client. This NGINX timeout is critical for file uploads.
How client_body_timeout Works
This NGINX timeout doesn’t measure total upload time. Instead, it measures the gap between two successive read operations. If the client sends nothing for longer than the timeout, NGINX terminates the request with a 408 Request Timeout error. A slow client uploading a large file over a poor connection can trigger it when gaps between packets exceed the limit, even though data is still trickling in overall.
From the NGINX source code, the timeout applies to each read operation on the client connection:
```c
ngx_add_timer(c->read, clcf->client_body_timeout);
```
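You can observe this in isolation by stalling a request body by hand. A rough sketch against a local server (the port and /upload location are assumptions):

```bash
# Send headers plus a partial body, then go silent for longer than
# client_body_timeout; NGINX should answer 408 and log "client timed out"
exec 3<>/dev/tcp/127.0.0.1/80
printf 'POST /upload HTTP/1.1\r\nHost: localhost\r\nContent-Length: 1000\r\n\r\npartial' >&3
sleep 70    # exceed the 60s default
cat <&3     # prints the 408 response (or nothing if the connection was reset)
exec 3>&-
```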
Default Value
The default client_body_timeout is 60 seconds. For most web applications, this is sufficient.
When to Increase client_body_timeout
Increase this NGINX timeout for:
- Large file uploads (video, backups, images)
- Mobile users on poor connections
- API endpoints accepting large payloads
Configuration
```nginx
# For a file upload location
location /upload {
    client_max_body_size 100M;
    client_body_timeout 300s;

    proxy_pass http://upload_backend;
    proxy_read_timeout 300s;
}
```
Note: When handling uploads, you typically need to increase both client_body_timeout (NGINX receiving from client) and proxy_read_timeout (NGINX waiting for backend to process the upload).
Relationship with client_max_body_size
The client_body_timeout works alongside client_max_body_size to control upload behavior. A request that exceeds client_max_body_size is rejected immediately with a 413 error, before client_body_timeout can apply.
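From the client side the two failure modes look quite different; a quick way to tell them apart (file name and host are placeholders):

```bash
# An oversized upload fails fast with 413; a timeout instead shows a long
# elapsed time before the connection drops
curl -s -o /dev/null -w "status=%{http_code} time=%{time_total}s\n" \
  -T backup-200M.bin https://example.com/upload
```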
keepalive_timeout: Connection Persistence
The keepalive_timeout directive determines how long NGINX keeps idle client connections open. This NGINX timeout balances connection reuse efficiency against resource consumption.
HTTP Keep-Alive Explained
HTTP/1.1 introduced persistent connections (keep-alive) to avoid the overhead of establishing new TCP connections for each request. A browser loading a page with 50 resources can reuse one connection instead of opening 50.
Default Value
NGINX’s built-in default for keepalive_timeout is 75 seconds, though distribution packages often ship a config that sets it to 65 seconds:

```nginx
keepalive_timeout 65;
```
Syntax Options
The directive accepts one or two parameters:
```nginx
# Single value: server-side timeout only
keepalive_timeout 65s;

# Two values: server timeout and Keep-Alive header value
keepalive_timeout 65s 60s;
```
The second parameter sets the Keep-Alive: timeout=60 response header, which tells the browser how long it may keep the connection open. Setting it slightly lower than the server-side value prevents race conditions where the client thinks the connection is alive but the server has already closed it.
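You can verify what the server advertises with a verbose request (placeholder host); the Keep-Alive header only appears when the two-parameter form is configured:

```bash
curl -sv -o /dev/null http://example.com/ 2>&1 | grep -i 'keep-alive'
# < Connection: keep-alive
# < Keep-Alive: timeout=60
```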
Production Configuration
For production servers:
```nginx
http {
    # Keep connections open for 65 seconds
    keepalive_timeout 65s 60s;

    # Enable keepalive to upstream servers too
    upstream backend {
        server 127.0.0.1:8000;
        keepalive 32;  # Connection pool size
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
```
For details on upstream connection pooling, see our guide on NGINX upstream keepalive.
When to Adjust keepalive_timeout
Increase this NGINX timeout when:
- Serving applications where users make frequent requests (SPAs, real-time dashboards)
- Operating behind a CDN (CDN connections are valuable to keep open)
Decrease when:
- Running a high-traffic server with connection limits
- Memory is constrained (each connection uses kernel memory)
HTTP/2 and HTTP/3 Considerations
With HTTP/2 and HTTP/3, the role of keepalive_timeout changes. These protocols multiplex many requests over a single connection, which makes connection reuse even more valuable, so longer idle timeouts can pay off. Note that the dedicated http2_idle_timeout directive is obsolete since NGINX 1.19.7; on current versions, keepalive_timeout governs idle HTTP/2 connections as well:

```nginx
http2_idle_timeout 180s;  # NGINX older than 1.19.7 only
keepalive_timeout  180s;  # NGINX 1.19.7+ (also applies to HTTP/2)
```
send_timeout: Slow Client Protection
The send_timeout directive controls how long NGINX waits between write operations when sending the response to the client.
How send_timeout Works
Like client_body_timeout, this NGINX timeout measures gaps between operations, not total transfer time. If a client stops reading response data, NGINX closes the connection after send_timeout seconds of no activity.
Default Value
The default is 60 seconds, which works for most scenarios.
When to Adjust send_timeout
Increase for:
- Very large file downloads
- Streaming responses
- Clients on extremely slow connections
Decrease for:
- API servers where slow clients waste resources
- High-traffic sites needing fast connection turnover
Configuration Example
```nginx
# For large file downloads
location /downloads {
    send_timeout 300s;

    # Also increase proxy timeout if proxying
    proxy_read_timeout 300s;
}
```
FastCGI, uWSGI, and gRPC Timeouts
When NGINX communicates with FastCGI (PHP-FPM), uWSGI, or gRPC backends, equivalent timeout directives exist.
FastCGI Timeouts (PHP-FPM)
```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;

    fastcgi_connect_timeout 60s;
    fastcgi_send_timeout 60s;
    fastcgi_read_timeout 60s;  # Usually the one to increase

    include fastcgi_params;
}
```
For PHP applications, fastcgi_read_timeout is almost always the NGINX timeout that needs adjustment.
For persistent FastCGI connections, see our guide on NGINX FastCGI keepalive.
uWSGI Timeouts (Python)
```nginx
location / {
    uwsgi_pass unix:/tmp/uwsgi.sock;

    uwsgi_connect_timeout 60s;
    uwsgi_send_timeout 60s;
    uwsgi_read_timeout 60s;

    include uwsgi_params;
}
```
gRPC Timeouts
```nginx
location /grpc.ServiceName {
    grpc_pass grpc://backend;

    grpc_connect_timeout 60s;
    grpc_send_timeout 60s;
    grpc_read_timeout 60s;
}
```
Diagnosing NGINX Timeout Issues
When users report timeout errors, use these diagnostic approaches.
Check NGINX Error Log
The error log reveals which timeout triggered:
grep -i "timed out" /var/log/nginx/error.log | tail -20
Common messages:
| Message | Cause |
|---|---|
| upstream timed out ... while reading response header | proxy_read_timeout too low |
| upstream timed out ... while connecting | proxy_connect_timeout too low or backend down |
| client timed out | client_body_timeout or send_timeout |
View Active Timeout Configuration
Dump the effective configuration to see all NGINX timeout values:
```bash
nginx -T 2>/dev/null | grep -E 'timeout|keepalive'
```
Test Backend Response Time
If seeing upstream timeouts, measure actual backend response time:
curl -w "Time: %{time_total}s\n" -o /dev/null -s http://127.0.0.1:8000/slow-endpoint
If the backend takes 90 seconds but proxy_read_timeout is 60 seconds, you’ll get 504 errors.
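A single sample can mislead; a small loop gives a rough worst case (same hypothetical endpoint as above):

```bash
# Run 20 requests and print the three slowest; compare against proxy_read_timeout
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{time_total}\n" http://127.0.0.1:8000/slow-endpoint
done | sort -n | tail -3
```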
Monitor for Timeout Patterns
Watch for timeout errors in real-time:
```bash
tail -f /var/log/nginx/error.log | grep -i timeout
```
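To see which timeout dominates over time, group the messages by the phase in which they occurred; a sketch that assumes the standard error log wording:

```bash
# Count timeout messages by their "while ..." context
grep -i "timed out" /var/log/nginx/error.log \
  | grep -o 'while [a-z ]*' | sort | uniq -c | sort -rn
```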
Production Configuration Recommendations
General Web Application
```nginx
http {
    # Client timeouts
    client_body_timeout 60s;
    client_header_timeout 60s;
    send_timeout 60s;

    # Keep-alive
    keepalive_timeout 65s 60s;
    keepalive_requests 100;

    upstream app {
        server 127.0.0.1:8000;
        keepalive 32;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Upstream timeouts
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }

        # Longer timeout for admin operations
        location /admin {
            proxy_pass http://app;
            proxy_read_timeout 180s;
        }
    }
}
```
High-Traffic API Server
For API servers handling many requests:
```nginx
http {
    # Aggressive client timeouts
    client_body_timeout 30s;
    send_timeout 30s;

    # Shorter keepalive
    keepalive_timeout 30s;
    keepalive_requests 1000;

    upstream api {
        server 127.0.0.1:8000;
        keepalive 64;
    }

    server {
        location /api {
            proxy_pass http://api;
            proxy_read_timeout 30s;
        }
    }
}
```
File Upload Server
For servers handling large uploads:
```nginx
server {
    client_max_body_size 500M;

    location /upload {
        # Extended timeouts for uploads
        client_body_timeout 600s;
        proxy_read_timeout 600s;

        proxy_pass http://upload_backend;
        proxy_request_buffering off;  # Stream directly to backend
    }
}
```
Common Timeout Mistakes
Mistake 1: Increasing All Timeouts Blindly
Don’t set every NGINX timeout to 300 seconds “just in case.” This wastes resources and masks underlying performance issues.
Mistake 2: Forgetting the Backend Timeout Chain
If NGINX has proxy_read_timeout 120s but PHP-FPM has request_terminate_timeout = 30, PHP dies first. Ensure the chain is consistent:
```
NGINX proxy_read_timeout >= Backend process timeout
```
For PHP:
```ini
; In php.ini
max_execution_time = 120

; In the php-fpm pool config
request_terminate_timeout = 120
```
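To check the effective values on a running system (binary name and paths vary by distribution; these are typical locations):

```bash
# Dump the parsed PHP-FPM configuration, including request_terminate_timeout
php-fpm -tt 2>&1 | grep request_terminate_timeout

# Find the max_execution_time that FPM loads (CLI `php -i` may differ)
grep -R max_execution_time /etc/php.ini /etc/php/*/fpm/php.ini 2>/dev/null
```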
Mistake 3: Using Global Timeouts for Specific Endpoints
Instead of:
```nginx
# Bad: global long timeout
proxy_read_timeout 300s;
```
Use location-specific overrides:
```nginx
# Good: targeted long timeout
location /reports/generate {
    proxy_read_timeout 300s;
}
```
Mistake 4: Ignoring Intermediate Proxies
In multi-layer architectures (NGINX to Varnish to Backend), each layer has timeouts. The outer layer should have equal or longer timeouts than inner layers.
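A sketch of a consistent chain (values are illustrative; first_byte_timeout is Varnish’s backend-response timeout):

```
Client -> NGINX (edge)    proxy_read_timeout   120s
       -> Varnish         first_byte_timeout = 110s
       -> Backend app     request timeout    = 100s
```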
See our 504 Gateway Timeout guide for multi-layer architecture examples.
Timeout Quick Reference Table
| Directive | Default | Applies To | Increase For |
|---|---|---|---|
| proxy_read_timeout | 60s | Upstream response | Slow backends, long operations |
| proxy_connect_timeout | 60s | Upstream TCP connect | Rarely needed |
| proxy_send_timeout | 60s | Sending to upstream | Large request bodies |
| client_body_timeout | 60s | Client request body | File uploads |
| client_header_timeout | 60s | Client request headers | Slow clients |
| send_timeout | 60s | Sending to client | Large downloads, slow clients |
| keepalive_timeout | 75s | Idle connections | SPAs, CDN connections |
| fastcgi_read_timeout | 60s | FastCGI response | Slow PHP scripts |
| uwsgi_read_timeout | 60s | uWSGI response | Slow Python apps |
| grpc_read_timeout | 60s | gRPC response | Slow gRPC services |
Summary
Effective NGINX timeout configuration requires understanding what each directive controls:
- proxy_read_timeout handles upstream delays and is the main culprit in 504 errors
- client_body_timeout manages upload timeouts from slow clients
- keepalive_timeout balances connection reuse against resource usage
- send_timeout protects against slow or stalled clients
Start with defaults, monitor your error logs, and adjust specific timeouts based on actual issues rather than preemptively setting everything high. For 504 error troubleshooting specifically, see our companion guide on fixing NGINX 504 Gateway Timeout errors.
