When you look at any NGINX performance guide, you’ll encounter three directives: sendfile, tcp_nopush, and tcp_nodelay. These settings are often recommended together, but their purpose and interactions are rarely explained. Understanding what NGINX sendfile does at the kernel level allows you to make informed decisions about your server configuration.
In this guide, we examine what each directive does, why they work together, and when you should enable them—with insights directly from the NGINX source code.
What is NGINX sendfile?
The sendfile directive enables the use of the sendfile() system call for serving static files. This is a zero-copy optimization that fundamentally changes how files transfer from disk to network.
Traditional File Serving (Without sendfile)
Without sendfile, serving a static file requires multiple data copies:
- NGINX calls read() to copy file data from disk into a kernel buffer
- The kernel copies data from the kernel buffer to NGINX’s user-space buffer
- NGINX calls write() to copy data from user-space back to a kernel socket buffer
- The kernel sends the socket buffer contents over the network
This involves four data copies and four context switches between kernel and user space (two for each system call). For every static file request, the CPU must copy the same data multiple times.
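The steps above can be sketched as a minimal C copy loop (descriptor names and the buffer size are illustrative, not NGINX code):

```c
/* Traditional copy path: read() pulls file data into a user-space buffer,
 * write() pushes it back into the kernel. */
#include <unistd.h>

ssize_t copy_via_userspace(int src_fd, int dst_fd)
{
    char    buf[8192];                       /* user-space bounce buffer */
    ssize_t n, total = 0;

    while ((n = read(src_fd, buf, sizeof(buf))) > 0) {  /* disk -> kernel -> user */
        ssize_t off = 0;

        while (off < n) {                    /* user -> kernel socket buffer */
            ssize_t w = write(dst_fd, buf + off, (size_t) (n - off));
            if (w < 0) {
                return -1;
            }
            off += w;
        }
        total += n;
    }

    return (n < 0) ? -1 : total;
}
```

Every byte served crosses the user/kernel boundary twice, which is exactly the overhead sendfile() removes.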
Zero-Copy Transfer (With sendfile)
When NGINX sendfile is enabled, the kernel handles the entire file transfer:
- NGINX calls sendfile() with the file descriptor and socket descriptor
- The kernel transfers data directly from the file system cache to the network stack
- Data never enters user space
This reduces the operation to two data copies inside the kernel (or one, when the network card supports scatter-gather DMA), and the read()/write() pair collapses into a single system call. For large static files, the performance improvement is substantial.
The Linux sendfile(2) man page explains the system call in detail.
Source Code Insight: The 2GB Limit
Looking at NGINX’s Linux sendfile implementation in src/os/unix/ngx_linux_sendfile_chain.c, we find an important limitation:
#define NGX_SENDFILE_MAXSIZE  2147483647L

/* the maximum limit size is 2G-1 - the page size */

if (limit == 0 || limit > (off_t) (NGX_SENDFILE_MAXSIZE - ngx_pagesize)) {
    limit = NGX_SENDFILE_MAXSIZE - ngx_pagesize;
}
This means a single sendfile() call on Linux is limited to approximately 2GB minus the page size (typically 4KB). For larger files, NGINX makes multiple sendfile() calls. The source code comment explains why:
“On Linux 2.6.16 and later, sendfile() silently limits the count parameter to 2G minus the page size, even on 64-bit platforms.”
This limit is per-call, not per-file. NGINX handles files larger than 2GB by making multiple sendfile() calls in a loop.
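The per-call loop can be sketched in C (the function name is illustrative, and a 4K page size is assumed; this is a simplification of what ngx_linux_sendfile_chain.c does):

```c
/* Send an entire file with repeated sendfile() calls, capping each call
 * the way NGINX caps it with NGX_SENDFILE_MAXSIZE.  Linux-specific. */
#include <sys/sendfile.h>
#include <unistd.h>

#define MAX_PER_CALL ((size_t) (2147483647L - 4096))  /* 2G-1 minus a 4K page */

ssize_t send_whole_file(int out_fd, int in_fd, size_t count)
{
    off_t offset = 0;

    while ((size_t) offset < count) {
        size_t chunk = count - (size_t) offset;

        if (chunk > MAX_PER_CALL) {
            chunk = MAX_PER_CALL;          /* stay under the 2G per-call limit */
        }

        ssize_t n = sendfile(out_fd, in_fd, &offset, chunk);

        if (n < 0) {
            return -1;                     /* real code would retry on EINTR/EAGAIN */
        }
        if (n == 0) {
            break;                         /* file truncated under us */
        }
    }

    return (ssize_t) offset;
}
```

sendfile() updates the offset argument itself, so the loop simply continues from wherever the previous call stopped.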
How to Enable sendfile in NGINX
The sendfile directive can be placed in the http, server, or location context:
http {
    sendfile on;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        # sendfile is inherited from http context
        location /downloads/ {
            # Can also be set per-location
            sendfile on;
        }
    }
}
The default value is off. Most Linux distributions ship NGINX with sendfile already enabled in the default configuration.
To check your current NGINX sendfile setting:
nginx -T 2>/dev/null | grep sendfile
On Rocky Linux and AlmaLinux, the default configuration enables it:
sendfile on;
Understanding tcp_nopush
The tcp_nopush directive enables the TCP_NOPUSH socket option (on FreeBSD/macOS) or TCP_CORK (on Linux). This option tells the kernel to accumulate data until a full TCP packet can be sent.
Why tcp_nopush Matters for sendfile
When NGINX serves a static file, it sends two things: HTTP response headers (typically under 1KB) and file content (potentially megabytes or gigabytes).
Without tcp_nopush, the headers might be sent in their own small TCP packet. File data follows in separate packets. This causes:
- Packet fragmentation: Small initial packets followed by full packets
- Reduced throughput: More packets means more overhead
- Suboptimal network utilization: Partially filled packets waste bandwidth
Source Code Insight: The Real Purpose
The NGINX FreeBSD sendfile implementation in src/os/unix/ngx_freebsd_sendfile_chain.c contains an excellent comment explaining why TCP_NOPUSH exists:
“Although FreeBSD sendfile() allows to pass a header and a trailer, it cannot send a header with a part of the file in one packet until FreeBSD 5.3. Besides, over the fast ethernet connection sendfile() may send the partially filled packets, i.e. the 8 file pages may be sent as the 11 full 1460-bytes packets, then one incomplete 324-bytes packet, and then again the 11 full 1460-bytes packets.”
“Therefore we use the TCP_NOPUSH option (similar to Linux’s TCP_CORK) to postpone the sending – it not only sends a header and the first part of the file in one packet, but also sends the file pages in the full packets.”
This is the key insight: TCP_NOPUSH ensures that HTTP headers and file data are combined into optimally-sized packets (Maximum Segment Size, typically 1460 bytes on Ethernet).
How TCP_CORK Works on Linux
When NGINX sets TCP_CORK before calling sendfile():
- The kernel holds the HTTP headers in the buffer
- It appends file data from the sendfile() call
- Full MSS-sized packets (1460 bytes on Ethernet) are sent
- When TCP_CORK is disabled, any remaining data is flushed
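The four steps above can be sketched in C (illustrative function names, not NGINX code; Linux-specific because of TCP_CORK):

```c
/* Cork the socket, write the headers, append the body with sendfile(),
 * then uncork so any final partial packet is flushed immediately. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <unistd.h>

static int set_cork(int fd, int on)
{
    return setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
}

int send_response(int fd, const char *hdr, size_t hdr_len,
                  int file_fd, size_t file_len)
{
    if (set_cork(fd, 1) < 0) {               /* 1. hold data in the kernel */
        return -1;
    }
    if (write(fd, hdr, hdr_len) < 0) {       /* 2. headers are buffered, not sent */
        return -1;                           /*    (short writes ignored in sketch) */
    }

    off_t off = 0;
    while ((size_t) off < file_len) {        /* 3. body fills out MSS-sized packets */
        if (sendfile(fd, file_fd, &off, file_len - (size_t) off) < 0) {
            return -1;
        }
    }

    return set_cork(fd, 0);                  /* 4. uncork: flush the remainder */
}
```

Because the socket stays corked across the write() and sendfile() calls, the headers and the first file pages travel in the same packet.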
Looking at the NGINX source, we can see the default value for postpone_output is 1460 bytes—exactly the MSS:
ngx_conf_merge_size_value(conf->postpone_output, prev->postpone_output, 1460);
Enabling tcp_nopush
http {
    sendfile on;
    tcp_nopush on;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
    }
}
Critical: The tcp_nopush directive only has effect when sendfile is also enabled. Without sendfile, tcp_nopush does nothing—NGINX handles the buffering in user space when sendfile is off.
Understanding tcp_nodelay
The tcp_nodelay directive controls Nagle’s algorithm via the TCP_NODELAY socket option. Nagle’s algorithm was designed in 1984 to reduce the number of small packets on the network: it buffers small writes until either a full packet’s worth of data accumulates or all previously sent data has been acknowledged. Combined with delayed ACKs on the receiving side, this can stall a small write for up to roughly 200ms.
When Nagle’s Algorithm Hurts Performance
For interactive applications and APIs, Nagle’s algorithm introduces noticeable latency. Consider a scenario where:
- NGINX sends HTTP response headers
- A backend generates content in small chunks
- Each chunk waits for previously sent data to be acknowledged before going out
With Nagle’s algorithm enabled, each small chunk can be delayed by up to roughly 200ms when it collides with the receiver’s delayed-ACK timer. For a response with multiple small writes, latency adds up quickly. This is particularly problematic for:
- API responses: Where milliseconds matter
- WebSocket connections: Where real-time delivery is essential
- Streaming responses: Where latency affects user experience
- Proxied requests: Where response data arrives in chunks
Default Value: On
Looking at the NGINX source code in ngx_http_core_module.c:
ngx_conf_merge_value(conf->tcp_nodelay, prev->tcp_nodelay, 1);
Unlike sendfile and tcp_nopush, tcp_nodelay defaults to ON. NGINX developers recognized that for web traffic, the latency penalty of Nagle’s algorithm outweighs its benefits.
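At the socket level, tcp_nodelay maps to a single setsockopt() call. A minimal sketch (the function name is illustrative):

```c
/* Disable Nagle's algorithm on a TCP socket -- what NGINX does on a
 * connection when tcp_nodelay is on. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int fd)
{
    int on = 1;  /* TCP_NODELAY = 1 turns Nagle's algorithm off */
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}
```

With the option set, small writes go to the wire immediately instead of waiting for outstanding data to be acknowledged.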
Enabling tcp_nodelay
http {
    tcp_nodelay on;  # This is the default

    server {
        listen 80;
        server_name example.com;
    }
}
For upstream keepalive connections, tcp_nodelay is particularly important—it ensures that subsequent requests on persistent connections don’t suffer Nagle delays.
How sendfile, tcp_nopush, and tcp_nodelay Work Together
These three directives might seem contradictory. tcp_nopush tells the kernel to wait and accumulate data. tcp_nodelay tells it to send immediately. How can both be beneficial?
The key is understanding when each is applied.
The NGINX Request Lifecycle
- Request received: NGINX parses the incoming HTTP request
- Response preparation: NGINX generates headers and identifies the response body
- sendfile with tcp_nopush: For static files, NGINX enables TCP_CORK, then calls sendfile(). Headers and file data accumulate into full packets.
- TCP_CORK disabled: After sendfile completes, NGINX disables TCP_CORK, flushing any remaining data.
- tcp_nodelay for keep-alive: For persistent connections, tcp_nodelay ensures subsequent interactions have minimal latency.
Source Code Insight: TCP_CORK and TCP_NODELAY Mutual Exclusivity
On Linux, TCP_CORK and TCP_NODELAY are mutually exclusive socket options. The NGINX source code handles this explicitly in ngx_linux_sendfile_chain.c:
/* the TCP_CORK and TCP_NODELAY are mutually exclusive */

if (c->tcp_nodelay == NGX_TCP_NODELAY_SET) {

    tcp_nodelay = 0;

    if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY,
                   (const void *) &tcp_nodelay, sizeof(int)) == -1)
    {
        /* error handling */

    } else {
        c->tcp_nodelay = NGX_TCP_NODELAY_UNSET;
    }
}

if (c->tcp_nodelay == NGX_TCP_NODELAY_UNSET) {

    if (ngx_tcp_nopush(c->fd) == -1) {
        /* error handling */

    } else {
        c->tcp_nopush = NGX_TCP_NOPUSH_SET;
    }
}
This shows the exact sequence:
- If TCP_NODELAY is currently set, NGINX disables it first
- Only then does NGINX enable TCP_CORK (tcp_nopush)
- After the file transfer, NGINX disables TCP_CORK
- TCP_NODELAY is re-enabled for keep-alive connections
On FreeBSD and macOS, TCP_NOPUSH and TCP_NODELAY can coexist on the same socket, so this dance isn’t necessary.
NGINX handles these platform differences automatically. You simply enable both directives, and NGINX manages the socket options appropriately.
When to Use Each Directive: Decision Matrix
Here’s a practical guide for configuring these directives based on your workload:
| Workload Type | sendfile | tcp_nopush | tcp_nodelay | Notes |
|---|---|---|---|---|
| Static file server | on | on | on | Maximum throughput for files |
| Reverse proxy only | off* | off | on | No local files to sendfile |
| Mixed static + proxy | on | on | on | Optimal for both |
| API gateway | off* | off | on | Latency is critical |
| WebSocket proxy | off* | off | on | Real-time delivery |
| Large file downloads | on | on | on | Consider sendfile_max_chunk |
| NFS/network storage | off | off | on | sendfile unreliable on NFS |
| PHP application | on | on | on | Static assets benefit |
*sendfile doesn’t apply to proxied content—it only affects static files served directly from disk.
When to Disable sendfile
While NGINX sendfile provides excellent performance, there are cases where you should disable it:
Network File Systems (NFS, CIFS)
When serving files from network file systems, sendfile may not work correctly:
location /nfs-share/ {
    sendfile off;
    alias /mnt/nfs/share/;
}
The kernel’s sendfile optimization assumes local storage. Network file systems have different semantics that can cause corrupted downloads or timeouts.
Files Modified During Transfer
If files might be modified while being served (log files, files being uploaded), sendfile could send inconsistent data:
location /live-logs/ {
    sendfile off;
    alias /var/log/app/;
}
The NGINX source code even checks for this:
if (n == 0) {
    /*
     * if sendfile returns zero, then someone has truncated the file,
     * so the offset became beyond the end of the file
     */

    ngx_log_error(NGX_LOG_ALERT, c->log, 0,
                  "sendfile() reported that \"%s\" was truncated at %O",
                  file->file->name.data, file->file_pos);

    return NGX_ERROR;
}
Certain Container Storage Drivers
In some containerized environments with certain storage drivers, sendfile might not work as expected. If you observe corrupted downloads, test with sendfile disabled.
The sendfile_max_chunk Directive
When serving very large files, a single sendfile() call could monopolize a worker process. The sendfile_max_chunk directive limits how much data is sent per call.
Source Code Default Value
From ngx_http_core_module.c:
ngx_conf_merge_size_value(conf->sendfile_max_chunk,
                          prev->sendfile_max_chunk, 2 * 1024 * 1024);
The default is 2 megabytes per sendfile() call (since NGINX 1.21.4; earlier versions defaulted to 0, meaning no limit).
When to Tune sendfile_max_chunk
http {
    sendfile on;
    sendfile_max_chunk 1m;  # Limit to 1 megabyte per call
}
Consider tuning this value when:
- High concurrent connections: Smaller chunks (512k-1m) ensure other connections get processing time
- Dedicated download server: Larger chunks (2m or more) maximize throughput for fewer clients
- Mixed workload: Default 2m is usually appropriate
For servers handling many concurrent connections alongside large file downloads, consider values between 512k and 1m. For more on worker process tuning, see our guide on tuning worker_rlimit_nofile.
Production Configuration Examples
Optimal Static File Server
For a server primarily serving static files (images, CSS, JavaScript, downloads):
http {
    # Zero-copy file transfer
    sendfile on;

    # Limit sendfile chunk to prevent worker starvation
    sendfile_max_chunk 1m;

    # Optimize packet utilization
    tcp_nopush on;

    # Minimize latency for keep-alive connections
    tcp_nodelay on;

    # Keep connections alive
    keepalive_timeout 65;

    server {
        listen 80;
        server_name static.example.com;
        root /var/www/static;

        location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2|svg)$ {
            expires 30d;
            add_header Cache-Control "public, immutable";
            access_log off;
        }
    }
}
For more details on static file caching, see our guide on NGINX browser caching.
Reverse Proxy Configuration
For a server acting as a reverse proxy to application backends:
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Upstream definition with keepalive
    upstream backend {
        server 127.0.0.1:8080;
        keepalive 32;
    }

    server {
        listen 80;
        server_name api.example.com;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # tcp_nodelay is particularly important here
            # for low-latency API responses
        }

        # Static assets still benefit from sendfile
        location /static/ {
            alias /var/www/api/static/;
            sendfile on;
            tcp_nopush on;
        }
    }
}
For upstream connection pooling details, see our NGINX upstream keepalive guide. If you’re using FastCGI with PHP-FPM, also review NGINX FastCGI keepalive.
High-Performance PHP Application
For a typical web application with both dynamic and static content:
http {
    sendfile on;
    sendfile_max_chunk 512k;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 65;
    keepalive_requests 1000;

    server {
        listen 80;
        server_name www.example.com;
        root /var/www/html;
        index index.php index.html;

        # PHP processing - tcp_nodelay helps here
        location ~ \.php$ {
            fastcgi_pass unix:/run/php-fpm/www.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # Static files - sendfile and tcp_nopush help here
        location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
            expires 7d;
            access_log off;
        }

        # Default - try static first, then PHP
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }
    }
}
For comprehensive PHP tuning, see our guide on optimizing NGINX for high-performance PHP websites.
Verifying Your Configuration
After configuring these directives, verify that NGINX accepts the configuration:
nginx -t
Expected output:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Check which of these directives are explicitly configured:
nginx -T 2>/dev/null | grep -E 'sendfile|tcp_nopush|tcp_nodelay'
On Rocky Linux and AlmaLinux with the default configuration, you’ll see:
sendfile on;
tcp_nopush on;
Note that tcp_nodelay may not appear in the output even though it’s active. This is because tcp_nodelay defaults to ON in NGINX—it’s enabled unless you explicitly disable it. The directive only appears in nginx -T output if it’s explicitly set in a configuration file.
To apply configuration changes:
systemctl reload nginx
Performance Impact
The performance impact of these directives varies based on your workload.
Static File Serving
For static file serving, enabling NGINX sendfile with tcp_nopush typically provides:
- 20-50% reduction in CPU usage for file serving
- Increased throughput for concurrent connections
- Reduced memory usage (no user-space buffer copies)
For additional performance gains, consider enabling NGINX gzip compression for text-based files.
Dynamic Content
For dynamic content (PHP, proxied requests), the impact is different:
- sendfile is not used for dynamic responses
- tcp_nodelay is critical for latency-sensitive applications
- tcp_nopush has minimal effect on proxied responses
Latency-Sensitive Applications
For APIs and real-time applications, tcp_nodelay is crucial. Disabling it can add up to roughly 200ms of latency per response segment, caused by Nagle’s algorithm waiting on the receiver’s delayed ACKs.
For proper timeout configuration in latency-sensitive setups, see our NGINX timeout guide.
Default Values Summary
Understanding the defaults helps you know what to configure explicitly. From the NGINX source code:
| Directive | Default | Source Code | Recommendation |
|---|---|---|---|
| sendfile | off (0) | ngx_conf_merge_value(conf->sendfile, prev->sendfile, 0) | Enable for static files |
| sendfile_max_chunk | 2m | 2 * 1024 * 1024 | Adjust for high concurrency |
| tcp_nopush | off (0) | ngx_conf_merge_value(conf->tcp_nopush, prev->tcp_nopush, 0) | Enable with sendfile |
| tcp_nodelay | on (1) | ngx_conf_merge_value(conf->tcp_nodelay, prev->tcp_nodelay, 1) | Keep enabled |
Note that many distribution packages (including Rocky Linux and AlmaLinux) ship with sendfile and tcp_nopush already enabled in /etc/nginx/nginx.conf.
Troubleshooting Common Issues
Files Not Updating After Changes
If you modify files but clients receive old versions, this can be related to sendfile and caching. The kernel serves sendfile data straight from the page cache, and on some file systems (most famously VirtualBox shared folders, vboxsf) that cache is not invalidated when the file changes underneath it. Solutions include:
- Clear the page cache: echo 3 > /proc/sys/vm/drop_caches
- Use the open_file_cache directive to control NGINX’s file handle caching
- For development, consider disabling sendfile temporarily
Corrupted Downloads
If users report corrupted file downloads, especially with large files:
- Check if files are on a network file system (disable sendfile)
- Verify file system integrity
- Check for sufficient memory (sendfile relies on page cache)
- Try reducing sendfile_max_chunk
High Latency for Small Responses
If API responses have unexpectedly high latency:
- Verify tcp_nodelay is enabled (it should be by default)
- Check if tcp_nopush is incorrectly applied to dynamic content
- Review proxy buffer settings if using reverse proxy
For proxy buffer tuning, see our reverse proxy guide.
Related Topics
To fully optimize your NGINX server, consider these related configurations:
- NGINX proxy cache microcaching: Cache dynamic content for massive performance gains
- NGINX load balancing: Distribute traffic across multiple backends
- Tuning worker processes: Match worker processes to your CPU cores
Conclusion
The NGINX sendfile directive, combined with tcp_nopush and tcp_nodelay, provides powerful optimizations at the kernel level. Understanding how these mechanisms work together—and when each applies—allows appropriate configuration for your workload.
For most configurations, the recommended settings are:
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}
These three lines enable:
- Zero-copy file transfers via the sendfile() system call
- Optimized packet utilization through TCP_CORK/TCP_NOPUSH
- Minimal latency for all connections via TCP_NODELAY
NGINX handles the complex interactions between these socket options automatically—switching between TCP_CORK and TCP_NODELAY as appropriate for each phase of the request.
By applying these optimizations, your NGINX server operates efficiently at the kernel level, reducing CPU overhead and improving response times for both static content and dynamic applications.
The official NGINX documentation provides additional reference for these directives.

