Your NGINX server sits in front of an NFS mount, a SAN volume, or a bank of slow SATA drives. Users request static assets — images, PDFs, firmware files — and every single request crawls back to that slow storage. Response times spike. Users leave. You try tuning sendfile, open_file_cache, and worker connections, but nothing helps because the bottleneck is the disk itself, not NGINX. What you need is the NGINX SlowFS Cache module.
The core problem is that NGINX has no native way to cache static file content onto faster storage. The built-in open_file_cache only caches file metadata — descriptors, sizes, and modification times — not the bytes themselves. And proxy_cache only works for upstream responses, not files served via the root directive. The NGINX SlowFS Cache module fills this gap by transparently copying frequently accessed files from slow storage to fast local disks, then serving subsequent requests directly from the cache.
How the NGINX SlowFS Cache Module Works
The SlowFS Cache module intercepts requests normally handled by the NGINX static file handler. On each request, it checks for a cached copy:
- Cache lookup — NGINX checks the shared memory zone for the requested cache key
- MISS — The file is served from the slow origin. If it has been requested enough times (per slowfs_cache_min_uses), a copy is written to cache
- HIT — The file is served from fast cache storage, bypassing the slow origin
- EXPIRED — The cached copy has exceeded its validity. NGINX serves from the origin and updates the cache
For large files (above slowfs_big_file_size, default 128 KB), the module fork()s a child process to copy the file. This prevents the worker process from blocking during the copy, which would otherwise stall all its connections.
What Makes SlowFS Cache Different from proxy_cache
NGINX’s built-in proxy_cache caches responses from upstream servers. It cannot cache files served directly via the root directive. The SlowFS Cache module operates at the static file level, making it the only solution for caching locally served files onto faster storage.
| Feature | proxy_cache | SlowFS Cache |
|---|---|---|
| Content source | Upstream/backend servers | Local filesystem (root) |
| Use case | Reverse proxy caching | Storage tier acceleration |
| Cache key | URL + headers | URI or custom key |
| Purge support | Via third-party module | Built-in |
| Large file handling | Streamed from upstream | Fork to avoid blocking |
When SlowFS Cache Is (and Isn’t) Useful
This module targets environments where the cache storage is faster than the origin:
| Scenario | Origin Storage | Cache Storage |
|---|---|---|
| NAS/NFS served files | Network mount | Local SSD |
| Media library on bulk SATA | 7,200 RPM SATA | 15K SAS RAID0 |
| Shared storage in a cluster | Distributed filesystem | Local NVMe |
| Archive content on HDD | Cold storage HDD | Hot SSD tier |
There is no benefit when cache and origin are on the same disk. You would simply waste disk space without any speed improvement.
Installing the NGINX SlowFS Cache Module
The module is available through the GetPageSpeed repository for both RHEL-based and Debian/Ubuntu systems.
RHEL, CentOS, AlmaLinux, Rocky Linux
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-slowfs
Then load the module in your nginx.conf (add at the very top, before the events block):
load_module modules/ngx_http_slowfs_module.so;
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
sudo apt-get update
sudo apt-get install nginx-module-slowfs
On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.
Verifying Installation
Confirm the module is loaded by listing its directives:
strings /usr/lib64/nginx/modules/ngx_http_slowfs_module.so | grep slowfs_
You should see all available directives: slowfs_cache, slowfs_cache_path, slowfs_cache_key, slowfs_cache_valid, slowfs_cache_min_uses, slowfs_big_file_size, slowfs_temp_path, and slowfs_cache_purge.
Configuration Directives Reference
The NGINX SlowFS Cache module provides eight directives:
| Directive | Default | Context | Description |
|---|---|---|---|
| slowfs_cache_path | — | http | Define cache zone path, size, and structure |
| slowfs_cache | — | http, server, location | Enable caching using a named zone |
| slowfs_cache_key | — | http, server, location | Set the cache lookup key |
| slowfs_cache_valid | — | http, server, location | Set cache validity duration |
| slowfs_cache_min_uses | 1 | http, server, location | Minimum requests before caching |
| slowfs_big_file_size | 128k | http, server, location | Fork threshold for large files |
| slowfs_temp_path | /tmp 1 2 | http, server, location | Temporary storage for cache writes |
| slowfs_cache_purge | — | location | Enable cache purge for a zone |
Understanding the Directives
slowfs_cache_path — Defines the on-disk cache area and shared memory zone. This directive follows the same syntax as proxy_cache_path:
slowfs_cache_path /var/cache/nginx/slowfs levels=1:2
keys_zone=filecache:10m
inactive=60m
max_size=1g;
- levels=1:2 — Creates a two-level directory hierarchy to reduce filesystem pressure
- keys_zone=filecache:10m — Allocates 10 MB of shared memory for cache keys (about 80,000 entries)
- inactive=60m — Removes cached files not accessed within 60 minutes
- max_size=1g — Limits total cache size to 1 GB on disk
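The levels=1:2 layout can be previewed from the shell. This sketch assumes SlowFS reuses the md5-based file layout of NGINX's proxy_cache_path, where the subdirectory names are taken from the end of the key's md5 hex digest; the key /image.jpg and the cache path are illustrative:

```shell
# Sketch: predict where a cached file would land under levels=1:2.
# Assumption: SlowFS follows the proxy_cache_path layout, taking
# directory names from the end of the key's md5 hex digest.
key='/image.jpg'                       # the slowfs_cache_key value
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1=${hash: -1}                         # last hex char   -> first-level dir
l2=${hash: -3:2}                       # two chars before -> second-level dir
echo "/var/cache/nginx/slowfs/$l1/$l2/$hash"
```

Listing the cache directory after a few HIT requests and comparing against this prediction is a quick way to confirm the zone is working.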
slowfs_cache — Activates caching for a location by referencing a zone defined in slowfs_cache_path.
slowfs_cache_key — Sets the key used for cache lookups. Typically $uri, but can include other variables.
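One reason to go beyond $uri: if a single cache zone is shared by several server blocks, identical paths on different sites would collide on the same key. A hedged sketch (the shared zone scenario is an assumption, not from the module's documentation):

```nginx
# Assumption: one zone is shared across several virtual hosts.
# Adding $host keeps /logo.png on site-a and site-b as distinct entries.
slowfs_cache_key $host$uri;
```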
slowfs_cache_valid — Controls how long cached files are considered fresh. Accepts an optional response code:
slowfs_cache_valid 200 1d; # Cache successful responses for 1 day
slowfs_cache_valid 404 1m; # Cache not-found responses for 1 minute
slowfs_cache_valid any 10m; # Cache all other responses for 10 minutes
slowfs_cache_min_uses — Sets the number of requests needed before a file is cached. The default of 1 means files are cached on first access. Increase this to avoid caching rarely accessed files.
slowfs_big_file_size — Files larger than this threshold (default: 128 KB) are cached in a forked child process. This prevents large copies from blocking the worker process.
slowfs_temp_path — Directory for temporary files during cache writes. For best performance, keep it on the same filesystem as slowfs_cache_path; otherwise each file is copied twice instead of being moved into place with a fast rename.
slowfs_cache_purge — Enables cache purge at a specific location. Takes a zone name and cache key.
Basic Configuration Example
The simplest configuration caches all files from a slow root to local SSD:
load_module modules/ngx_http_slowfs_module.so;
events {}
http {
slowfs_cache_path /var/cache/nginx/slowfs levels=1:2
keys_zone=filecache:10m;
server {
listen 80;
location / {
root /mnt/nfs/website;
slowfs_cache filecache;
slowfs_cache_key $uri;
slowfs_cache_valid 200 1d;
}
}
}
With this setup, every static file served from the NFS mount at /mnt/nfs/website is cached locally on the first request.
Production Configuration with Cache Purge
A production-ready setup includes cache purge, a custom log format for monitoring, and tuned parameters:
load_module modules/ngx_http_slowfs_module.so;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
log_format cache_status '$remote_addr [$time_local] "$request" $status '
'cache=$slowfs_cache_status';
access_log /var/log/nginx/access.log cache_status;
slowfs_cache_path /var/cache/nginx/slowfs levels=1:2
keys_zone=filecache:10m
inactive=60m
max_size=1g;
slowfs_temp_path /var/cache/nginx/slowfs_temp 1 2;
server {
listen 80;
server_name files.example.com;
location / {
root /mnt/nfs/static-files;
slowfs_cache filecache;
slowfs_cache_key $uri;
slowfs_cache_valid 200 1d;
slowfs_cache_min_uses 1;
slowfs_big_file_size 128k;
add_header X-SlowFS-Cache $slowfs_cache_status;
}
location ~ /purge(/.*) {
allow 127.0.0.1;
deny all;
slowfs_cache_purge filecache $1;
}
}
}
Monitoring Cache Performance
The $slowfs_cache_status variable reports cache state for each request. Use it in your log format to track effectiveness:
192.168.1.10 [22/Mar/2026:16:30:58 +0000] "GET /image.jpg HTTP/1.1" 200 cache=HIT
192.168.1.11 [22/Mar/2026:16:30:59 +0000] "GET /report.pdf HTTP/1.1" 200 cache=MISS
192.168.1.10 [22/Mar/2026:16:31:07 +0000] "GET /report.pdf HTTP/1.1" 200 cache=HIT
Expose the status as a response header with add_header X-SlowFS-Cache $slowfs_cache_status for debugging without checking server logs.
Purging Cached Files
When source files change on the slow storage, purge the stale cache entry:
curl -X GET http://localhost/purge/path/to/updated-file.jpg
The purge endpoint returns:
- 200 — File successfully purged from cache
- 404 — File was not in cache (already expired or never cached)
- 403 — Request denied (not from an allowed IP)
Performance Considerations
Filesystem Alignment
Ensure slowfs_cache_path and slowfs_temp_path point to the same filesystem. Different filesystems force a full copy instead of a fast rename:
# Good: same filesystem, rename is instant
slowfs_cache_path /ssd/cache levels=1:2 keys_zone=fast:10m;
slowfs_temp_path /ssd/cache_temp 1 2;
# Bad: different filesystems, file copied twice
slowfs_cache_path /ssd/cache levels=1:2 keys_zone=fast:10m;
slowfs_temp_path /tmp 1 2;
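Whether two paths share a filesystem can be checked by comparing device IDs. A small sketch, assuming GNU coreutils stat (the /ssd paths are the hypothetical ones from the example above):

```shell
# Sketch: verify two paths share a filesystem before relying on rename.
# Assumption: GNU coreutils `stat` (-c %d prints the device ID).
same_fs() {
  [ -e "$1" ] && [ -e "$2" ] &&
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

if same_fs /ssd/cache /ssd/cache_temp; then
  echo "same filesystem: cache writes finish with an instant rename"
else
  echo "different filesystems (or missing path): expect a double copy"
fi
```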
Tuning the Big File Threshold
The slowfs_big_file_size directive controls when forking occurs. The 128 KB default works well for most setups:
- Lower it (e.g., 64k) if workers handle many concurrent connections
- Raise it (e.g., 512k) if forking overhead concerns you and files are moderately sized
- Set it very high (e.g., 100m) to effectively disable forking when your files are consistently small
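A per-location override might look like the sketch below; the /firmware/ path and 512k value are illustrative assumptions, with the zone name taken from the earlier examples:

```nginx
location /firmware/ {
    root /mnt/nfs/static-files;
    slowfs_cache filecache;
    slowfs_cache_key $uri;
    slowfs_cache_valid 200 12h;
    # Raise the fork threshold for this mostly mid-sized content
    slowfs_big_file_size 512k;
}
```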
Cache Sizing
The max_size parameter caps total disk usage. The cache manager evicts least recently used entries when this limit is reached:
# For 100,000 files averaging 50 KB each: 50 KB × 100,000 ≈ 5 GB
slowfs_cache_path /ssd/cache levels=1:2
keys_zone=filecache:20m
max_size=5g;
Each key entry uses approximately 128 bytes of shared memory. Therefore, 10 MB accommodates roughly 80,000 entries.
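That sizing rule is plain integer arithmetic, assuming the ~128-byte-per-entry figure holds:

```shell
# Approximate key capacity of a shared memory zone:
# zone size in bytes divided by ~128 bytes per entry.
zone_mb=10
echo $(( zone_mb * 1024 * 1024 / 128 ))   # prints 81920 (~80,000 entries)
```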
AIO Incompatibility
The SlowFS Cache module is not compatible with AIO (asynchronous I/O). Disable aio in locations that use slowfs_cache, because the module relies on synchronous file operations for cache management.
Troubleshooting
Common Issues
All requests show MISS, nothing is cached:
Verify that the cache directory exists and is writable by the NGINX worker:
ls -la /var/cache/nginx/slowfs
# Should be owned by nginx:nginx
On RHEL/CentOS systems with SELinux enabled, set the correct context:
chcon -R -t httpd_cache_t /var/cache/nginx/slowfs
chcon -R -t httpd_cache_t /var/cache/nginx/slowfs_temp
chcon -R -t httpd_sys_content_t /mnt/nfs/static-files
Cache purge returns 403 Forbidden:
The purge location restricts access by IP. Add your management network:
location ~ /purge(/.*) {
allow 127.0.0.1;
allow 10.0.0.0/8;
deny all;
slowfs_cache_purge filecache $1;
}
NGINX crashes with segfault on startup:
Module versions 1.11 and earlier crash on NGINX 1.9.11+ due to a missing main config initializer. Upgrade to version 1.12 or later.
Verifying Cache Contents
Inspect cached files:
find /var/cache/nginx/slowfs -type f | wc -l
# Count of currently cached files
du -sh /var/cache/nginx/slowfs
# Total disk usage
Checking Cache Effectiveness
Calculate the cache hit ratio from your access log:
awk -F'cache=' '{count[$2]++} END {for (s in count) print s, count[s]}' \
/var/log/nginx/access.log
Expected output:
HIT 4523
MISS 187
EXPIRED 42
A healthy cache shows a high HIT-to-MISS ratio. If MISS dominates, lower slowfs_cache_min_uses to 1 or increase slowfs_cache_valid.
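The counts can be reduced to a single percentage with a similar awk one-liner. The printf lines here are hypothetical stand-ins for real entries in /var/log/nginx/access.log:

```shell
# Compute the HIT percentage from lines carrying the cache=... field.
# The printf lines simulate access-log entries in the cache_status format.
printf '%s\n' \
  '192.168.1.10 [..] "GET /a.jpg HTTP/1.1" 200 cache=HIT' \
  '192.168.1.11 [..] "GET /b.pdf HTTP/1.1" 200 cache=MISS' \
  '192.168.1.10 [..] "GET /a.jpg HTTP/1.1" 200 cache=HIT' \
| awk -F'cache=' '$2 == "HIT" {hit++} {total++} \
    END {printf "hit ratio: %.0f%%\n", 100 * hit / total}'
# prints: hit ratio: 67%
```

Against the real log, replace the printf block with `cat /var/log/nginx/access.log`.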
Security Best Practices
Restrict Cache Purge Access
Always limit purge access to trusted IPs. An exposed purge endpoint lets attackers evict your entire cache:
location ~ /purge(/.*) {
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
slowfs_cache_purge filecache $1;
}
Protect Cache Directory Permissions
Set strict permissions to prevent unauthorized access to cached content:
chown -R nginx:nginx /var/cache/nginx/slowfs
chmod 700 /var/cache/nginx/slowfs
Avoid Caching Sensitive Files
Exclude configuration files and credentials from caching:
location ~* \.(conf|env|key|pem)$ {
root /mnt/nfs/static-files;
# No slowfs_cache here — served without caching
}
location / {
root /mnt/nfs/static-files;
slowfs_cache filecache;
slowfs_cache_key $uri;
slowfs_cache_valid 200 1d;
}
Conclusion
The NGINX SlowFS Cache module provides a straightforward way to accelerate static file serving when origin storage is slow. By transparently caching file content from network or mechanical storage to fast local disks, it cuts response times without any application changes.
For installations and updates, visit the SlowFS module RPM page or the APT module page. Source code and issue tracking are available on GitHub.
