
NGINX Proxy Cache & Microcaching: Complete Configuration Guide for Maximum Performance

NGINX proxy caching is one of the most effective ways to dramatically improve your web application’s performance. By caching responses from upstream servers, NGINX can serve subsequent requests directly from memory or disk, reducing backend load and latency by orders of magnitude. In production environments, proper NGINX proxy cache configuration can deliver performance improvements of 100x to 400x compared to uncached requests.

This comprehensive guide covers everything you need to know about NGINX proxy caching: from basic configuration to advanced techniques like microcaching for dynamic content, thundering herd prevention, and stale-while-revalidate patterns. Every configuration in this article has been tested on Rocky Linux 10, AlmaLinux 10, and RHEL 9 with NGINX 1.26.

What is NGINX Proxy Cache?

NGINX proxy cache stores responses from upstream servers (your backend applications) and serves them directly to clients without contacting the backend for subsequent requests. This is fundamentally different from browser caching, which stores resources on the client side.

When NGINX receives a request for a cacheable resource:

  1. NGINX checks if the response is in the cache
  2. If found and valid (cache HIT), NGINX serves the cached response immediately
  3. If not found (cache MISS), NGINX forwards the request to the upstream server
  4. NGINX stores the response in cache for future requests

The proxy cache module (ngx_http_proxy_module) is compiled into NGINX by default. According to the NGINX source code analysis, the caching implementation uses a sophisticated combination of shared memory for metadata (using a red-black tree for O(log n) lookups) and disk storage for the actual cached content.

Why Use NGINX Proxy Cache?

Understanding when and why to implement proxy caching helps you make better architectural decisions:

Reduced Backend Load: Every cache HIT is a request your backend servers never see. For high-traffic sites, this can reduce backend requests by 90% or more.

Lower Latency: Cached responses are served in microseconds, compared to milliseconds (or seconds) for backend-generated responses. Our benchmarks show a typical 3-10x improvement for simple backends and 100x or more for database-backed applications.

Improved Scalability: With caching, your backend can handle traffic spikes without additional infrastructure.

Cost Savings: Fewer backend servers mean lower infrastructure costs.

Better User Experience: Faster page loads directly impact user engagement and conversion rates.

Basic NGINX Proxy Cache Configuration

Let us start with a minimal but production-ready proxy cache configuration. First, install NGINX on your RHEL-based system:

dnf install nginx
systemctl enable --now nginx

Defining the Cache Zone

The proxy_cache_path directive defines where NGINX stores cached content and how the cache is managed:

proxy_cache_path /var/cache/nginx/proxy_cache
    levels=1:2
    keys_zone=my_cache:10m
    max_size=1g
    inactive=60m
    use_temp_path=off;

Let us break down each parameter:

/var/cache/nginx/proxy_cache: The filesystem path where cached content is stored. Create this directory with proper permissions:

mkdir -p /var/cache/nginx/proxy_cache
chown nginx:nginx /var/cache/nginx/proxy_cache

levels=1:2: Creates a two-level directory hierarchy for cached files. This prevents having too many files in a single directory, which improves filesystem performance. A cached file with key hash 12296904807addfd78c2b485e6f0988b would be stored at /var/cache/nginx/proxy_cache/b/88/12296904807addfd78c2b485e6f0988b.

keys_zone=my_cache:10m: Defines a shared memory zone named my_cache with 10 megabytes of storage for cache keys and metadata. According to the NGINX source code, 1 MB stores approximately 8,000 cache entries, so this 10 MB zone can track roughly 80,000 entries. The zone enables fast O(log n) lookups using a red-black tree data structure.

max_size=1g: Limits total cache size to 1 gigabyte. When exceeded, NGINX’s cache manager process removes least-recently-used items.

inactive=60m: Removes cached items that have not been accessed for 60 minutes, regardless of their validity.

use_temp_path=off: Writes cached files directly to the cache directory instead of using a temporary path first. This improves performance by avoiding extra file copies.

Enabling Caching in a Location Block

Once you have defined the cache path, enable caching for specific locations:

upstream backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_cache my_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;

        # Add header to show cache status
        add_header X-Cache-Status $upstream_cache_status always;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

proxy_cache my_cache: Enables caching using the previously defined cache zone.

proxy_cache_valid 200 10m: Caches HTTP 200 responses for 10 minutes.

proxy_cache_valid 404 1m: Caches HTTP 404 responses for 1 minute. This prevents repeated backend hits for missing resources.

add_header X-Cache-Status: Adds a response header showing the cache status. The always parameter ensures the header is added even for error responses.

Testing Your Cache Configuration

Verify your configuration and reload NGINX:

nginx -t
systemctl reload nginx

Test caching with curl:

# First request - should be MISS
curl -I http://localhost/

# Second request - should be HIT
curl -I http://localhost/

You should see:

HTTP/1.1 200 OK
X-Cache-Status: MISS

Followed by:

HTTP/1.1 200 OK
X-Cache-Status: HIT

Understanding Cache Status Values

The $upstream_cache_status variable provides insight into how NGINX handled each request:

MISS: Response not in cache; fetched from upstream
HIT: Response served from cache
EXPIRED: Cache entry expired; a fresh response was fetched
STALE: Stale response served (see stale-while-revalidate)
UPDATING: Stale response served while the cache updates in the background
REVALIDATED: Response validated via a conditional GET (304 Not Modified)
BYPASS: Cache bypassed due to proxy_cache_bypass
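
Beyond the per-response header, you can also record the cache status of every request in the access log, which makes hit-rate monitoring easy. A minimal sketch, assuming you are free to define a custom log format in the http context (the format name cache_log and the log path are illustrative):

log_format cache_log '$remote_addr [$time_local] "$request" $status $upstream_cache_status';
access_log /var/log/nginx/cache.log cache_log;

With that in place, a rough hit-rate breakdown is one command away:

# Count requests per cache status (the status is the last field in the format above)
awk '{print $NF}' /var/log/nginx/cache.log | sort | uniq -c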

Ignoring Backend Cache Headers

By default, NGINX respects Cache-Control and Expires headers from your backend. If your backend sends Cache-Control: no-cache, NGINX will not cache the response. For proxy caching to work reliably, you often need to override these headers:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;

    # Ignore backend cache control headers
    proxy_ignore_headers Cache-Control Expires;

    add_header X-Cache-Status $upstream_cache_status always;
}

This matters for legacy applications that send aggressive no-cache headers, for setups where you want NGINX rather than the backend to control caching policy, and for implementing microcaching for dynamic content.

Microcaching: Caching Dynamic Content

Microcaching is a powerful technique that caches dynamic content for very short periods (typically 1 second). Even a 1-second cache provides enormous benefits under high load, collapsing hundreds of simultaneous requests into a single backend request.

Why Microcaching Works

Consider a page that receives 1,000 requests per second. Without caching, your backend handles 1,000 requests per second. With 1-second microcaching, your backend handles approximately 1 request per second. NGINX serves 999 requests from cache.

The math is compelling: microcaching can reduce backend load by 99.9% during traffic spikes. Content is never more than 1 second stale.

Microcaching Configuration

location /api {
    proxy_pass http://backend;
    proxy_cache my_cache;

    # Cache for just 1 second
    proxy_cache_valid 200 1s;

    # Ignore backend cache headers (critical for microcaching)
    proxy_ignore_headers Cache-Control Expires Set-Cookie;

    # Prevent thundering herd
    proxy_cache_lock on;

    add_header X-Cache-Status $upstream_cache_status always;
}

The key elements for microcaching are a very short TTL, ignoring backend cache headers, and enabling the cache lock to prevent thundering herd.

Testing Microcaching

# First request - MISS
curl -I http://localhost/api

# Immediate second request - HIT (same timestamp cached)
curl -I http://localhost/api

# Wait 1+ seconds
sleep 2

# Third request - EXPIRED (fetches new content)
curl -I http://localhost/api
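
To watch the request-collapsing effect under real concurrency, run a short load test against the microcached endpoint and count cache statuses afterwards. A rough sketch, assuming httpd-tools is installed and the cache_log access log format sketched earlier is in place:

# 1,000 requests, 50 concurrent clients, against the /api microcache location above
ab -n 1000 -c 50 http://localhost/api/

# MISS/EXPIRED should stay in the low single digits per second of test time;
# HIT should account for nearly everything else
grep -cE "MISS|EXPIRED" /var/log/nginx/cache.log
grep -c "HIT" /var/log/nginx/cache.log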

Thundering Herd Prevention with proxy_cache_lock

When a cached item expires, multiple simultaneous requests could all trigger backend fetches. This “thundering herd” problem can overwhelm your backend during traffic spikes. The NGINX load balancing article covers distributing load across servers, but cache locking prevents duplicate requests entirely.

The proxy_cache_lock directive solves this problem. It allows only one request to fetch from the backend while other requests wait for the cache to be populated:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;

    # Only one request fetches from backend
    proxy_cache_lock on;

    # How long to wait for lock before fetching anyway
    proxy_cache_lock_timeout 5s;

    # How long a request can hold the lock
    proxy_cache_lock_age 5s;

    add_header X-Cache-Status $upstream_cache_status always;
}

According to the NGINX source code, the lock mechanism works by setting an updating flag on the cache node. Subsequent requests check this flag and either wait (polling every 500ms) or proceed after the timeout.

proxy_cache_lock_timeout: Maximum time to wait for another request to populate the cache. After this timeout, the waiting request fetches from the backend itself, but its response is not stored in the cache.

proxy_cache_lock_age: Maximum time a request can hold the lock. After this time, another waiting request can take over. This prevents stuck requests from blocking indefinitely.

Stale-While-Revalidate Pattern

The stale-while-revalidate pattern serves slightly stale content immediately while fetching fresh content in the background. This eliminates user-facing latency for cache refreshes:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 5m;

    # Serve stale content while updating
    proxy_cache_use_stale updating error timeout http_500 http_502 http_503 http_504;

    # Update cache in background subrequest
    proxy_cache_background_update on;

    # Prevent thundering herd
    proxy_cache_lock on;

    add_header X-Cache-Status $upstream_cache_status always;
}

proxy_cache_use_stale updating: Serve stale content while the cache is being updated.

proxy_cache_use_stale error timeout http_500...: Also serve stale content when the backend returns errors. This provides graceful degradation when backends fail.

proxy_cache_background_update on: Fetch fresh content in a background subrequest. The client receives the stale response immediately.

The combination of these directives ensures users always get fast responses, the cache is updated transparently, and backend failures do not cause user-visible errors.

Cache Bypass and Conditional Caching

Sometimes you need to bypass the cache for specific requests. Common scenarios include when users are logged in or when debugging:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;

    # Bypass cache for requests with nocache cookie or argument
    proxy_cache_bypass $cookie_nocache $arg_nocache;

    # Don't store response in cache for these requests
    proxy_no_cache $cookie_nocache $arg_nocache;

    add_header X-Cache-Status $upstream_cache_status always;
}

proxy_cache_bypass: If any specified variable is non-empty and non-zero, the response is taken from the upstream server. The response may still be stored in cache.

proxy_no_cache: If any specified variable is non-empty and non-zero, the response is not stored in cache.

Testing cache bypass:

# Normal request - uses cache
curl -I http://localhost/

# Bypass cache with query parameter
curl -I "http://localhost/?nocache=1"

The bypass request shows X-Cache-Status: BYPASS.

Bypassing Cache for Logged-In Users

A common pattern is to bypass cache for authenticated users:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;

    # Bypass cache if session cookie exists
    proxy_cache_bypass $cookie_session $cookie_PHPSESSID;
    proxy_no_cache $cookie_session $cookie_PHPSESSID;

    add_header X-Cache-Status $upstream_cache_status always;
}

Cache Key Configuration

The cache key determines how NGINX identifies cached content. By default, NGINX uses $scheme$proxy_host$request_uri. You can customize this with proxy_cache_key:

location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;

    # Default cache key
    proxy_cache_key $scheme$proxy_host$request_uri;

    add_header X-Cache-Status $upstream_cache_status always;
}

Varying Cache by Headers

To cache different versions based on request headers (for example, mobile vs desktop):

proxy_cache_key $scheme$proxy_host$request_uri$http_accept_encoding;

Or to cache different versions based on a custom header:

proxy_cache_key $scheme$proxy_host$request_uri$http_x_device_type;
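
One caveat with keying on $http_accept_encoding directly: clients send many slightly different Accept-Encoding strings, and each variant becomes a separate cache entry. A common refinement, shown here as a hedged sketch, is to normalize the header with a map block (which must live in the http context; the $cache_encoding variable name is arbitrary):

# Collapse Accept-Encoding variants into a small, predictable set (http context)
map $http_accept_encoding $cache_encoding {
    default  "";
    ~*br     "br";
    ~*gzip   "gzip";
}

# Key the cache on the normalized value instead of the raw header
proxy_cache_key $scheme$proxy_host$request_uri$cache_encoding;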

According to the NGINX source code, the cache key is processed through MD5 hashing to create a 16-byte key. The first 8 bytes are used for red-black tree lookups. The full 16 bytes are compared to prevent collisions.
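
You can verify this mapping from the shell. A hedged sketch, assuming the default cache key format and the cache path used throughout this guide (the URI is an arbitrary example):

# Default key for http://backend/index.html is "httpbackend/index.html"
# ($scheme, $proxy_host and $request_uri concatenated with no separators)
echo -n "httpbackend/index.html" | md5sum

# Each cached file also records its key in plain text; list what is currently cached
grep -ar "KEY:" /var/cache/nginx/proxy_cache/ | head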

Minimum Uses Before Caching

For content that is accessed infrequently, caching on first request may waste cache space. The proxy_cache_min_uses directive requires a minimum number of requests:

location /assets {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 1h;

    # Only cache after 3 requests
    proxy_cache_min_uses 3;

    add_header X-Cache-Status $upstream_cache_status always;
}

This is useful for large file downloads where you only want to cache popular files, API endpoints with high cardinality, and cold cache startup scenarios.

Complete Production Configuration

Here is a complete, production-ready NGINX proxy cache configuration. For more details on reverse proxy setup, see our NGINX reverse proxy guide:

# Cache path definition (place in http context)
proxy_cache_path /var/cache/nginx/proxy_cache
    levels=1:2
    keys_zone=main_cache:100m
    max_size=10g
    inactive=24h
    use_temp_path=off;

# Microcache for API responses
proxy_cache_path /var/cache/nginx/api_cache
    levels=1:2
    keys_zone=api_cache:10m
    max_size=1g
    inactive=10m
    use_temp_path=off;

upstream backend {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;

    # Static assets - long cache
    location /static {
        proxy_pass http://backend;
        proxy_cache main_cache;
        proxy_cache_valid 200 7d;
        proxy_cache_valid 404 1m;

        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;

        add_header X-Cache-Status $upstream_cache_status always;
    }

    # API endpoints - microcaching
    location /api {
        proxy_pass http://backend;
        proxy_cache api_cache;
        proxy_cache_valid 200 1s;

        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 3s;

        # Bypass cache for authenticated requests
        proxy_cache_bypass $http_authorization;
        proxy_no_cache $http_authorization;

        add_header X-Cache-Status $upstream_cache_status always;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Default - standard caching with stale-while-revalidate
    location / {
        proxy_pass http://backend;
        proxy_cache main_cache;
        proxy_cache_valid 200 5m;
        proxy_cache_valid 301 302 1h;
        proxy_cache_valid 404 1m;

        proxy_ignore_headers Cache-Control Expires;

        proxy_cache_use_stale updating error timeout http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;

        # Bypass for logged-in users
        proxy_cache_bypass $cookie_session;
        proxy_no_cache $cookie_session;

        add_header X-Cache-Status $upstream_cache_status always;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Upstream connection keepalive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

Cache Management and Monitoring

Viewing Cache Statistics

Check the cache directory structure and usage:

# Count cached files
find /var/cache/nginx/proxy_cache -type f | wc -l

# Check cache size
du -sh /var/cache/nginx/proxy_cache/

Purging the Cache

NGINX does not include cache purging in the open-source version. For manual purging:

# Purge entire cache
rm -rf /var/cache/nginx/proxy_cache/*

# Note: NGINX will rebuild cache structure automatically

For selective purging, consider NGINX Plus or third-party modules like ngx_cache_purge.
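
If you know the exact cache key, a single entry can also be removed by hand. A hedged sketch, assuming the default cache key format, the levels=1:2 layout, and the cache path used in this guide:

# Hypothetical example: purge http://backend/page.html
KEY="httpbackend/page.html"                      # $scheme$proxy_host$request_uri
HASH=$(echo -n "$KEY" | md5sum | cut -d' ' -f1)

# levels=1:2 stores the file under <last char>/<previous two chars>/<full hash>
rm -f "/var/cache/nginx/proxy_cache/${HASH: -1}/${HASH: -3:2}/${HASH}"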

Cache Manager and Loader Processes

NGINX runs two background processes for cache management:

Cache Manager: Periodically checks cache size and removes least-recently-used items when max_size is exceeded. Configure its behavior with:

proxy_cache_path /var/cache/nginx/proxy_cache
    levels=1:2
    keys_zone=my_cache:10m
    max_size=1g
    manager_files=100
    manager_sleep=50ms
    manager_threshold=200ms;

Cache Loader: On NGINX startup, walks the cache directory and rebuilds the in-memory index. Configure to prevent startup I/O spikes:

proxy_cache_path /var/cache/nginx/proxy_cache
    levels=1:2
    keys_zone=my_cache:10m
    max_size=1g
    loader_files=100
    loader_sleep=50ms
    loader_threshold=200ms;

SELinux Configuration

On RHEL-based systems with SELinux enabled, NGINX needs permission to connect to upstream servers and write to the cache directory:

# Allow NGINX to connect to network
setsebool -P httpd_can_network_connect on

# If using non-standard ports
semanage port -a -t http_port_t -p tcp 8080

The cache directory (/var/cache/nginx) is typically already labeled correctly. Verify with:

ls -laZ /var/cache/nginx/
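
If the labels look wrong, restoring the default contexts usually resolves it (on RHEL-based systems /var/cache/nginx is normally labeled httpd_cache_t):

# Reapply the default SELinux contexts recursively
restorecon -Rv /var/cache/nginx/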

Performance Benchmarks

To measure the impact of proxy caching, compare direct backend access versus cached access:

# Install Apache Bench
dnf install httpd-tools

# Benchmark direct backend (no cache)
ab -n 1000 -c 10 http://127.0.0.1:8080/

# Benchmark through NGINX cache (warm cache first)
curl -s http://localhost/ > /dev/null
ab -n 1000 -c 10 http://localhost/

Typical results show cached responses handling 3-10x more requests per second for simple backends. For database-backed applications with slow queries, improvements of 100-400x are common.

Troubleshooting Common Issues

Cache Not Working (Always MISS)

  1. Check backend Cache-Control headers: Use proxy_ignore_headers Cache-Control Expires (a quick check is shown below)
  2. Verify Set-Cookie: Responses with Set-Cookie are not cached by default
  3. Check cache directory permissions: Must be writable by nginx user
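
A quick way to check the first two items is to inspect the headers the backend sends when contacted directly (the port matches the upstream used throughout this guide):

# Cache-Control: no-cache/private, an Expires date in the past, or Set-Cookie
# all prevent caching unless explicitly ignored
curl -s -D - -o /dev/null http://127.0.0.1:8080/ | grep -iE 'cache-control|expires|set-cookie'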

502 Bad Gateway

  1. SELinux: Enable httpd_can_network_connect
  2. Upstream not running: Verify backend is accessible
  3. Timeout: Increase proxy_connect_timeout and proxy_read_timeout
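
For the third item, a hedged sketch of raised timeouts inside the proxied location (the values are arbitrary starting points, not recommendations):

location / {
    proxy_pass http://backend;

    # Allow more time to establish the upstream connection and wait for its response
    proxy_connect_timeout 10s;
    proxy_read_timeout    60s;
    proxy_send_timeout    60s;
}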

Cache Key Collisions

If different content is being served from cache incorrectly, expand your cache key:

proxy_cache_key $scheme$proxy_host$request_uri$http_accept_encoding$http_accept_language;

For more troubleshooting tips on proxy-related issues, see our guide on tuning proxy_buffer_size.

Conclusion

NGINX proxy caching dramatically improves performance and scalability. The key directives to remember are proxy_cache_path and proxy_cache to define and enable the cache, proxy_cache_valid to set per-status TTLs, proxy_cache_lock to prevent thundering herd, and proxy_cache_use_stale with proxy_cache_background_update for stale-while-revalidate.

For dynamic applications, microcaching with a 1-second TTL provides enormous benefits under load while ensuring content freshness. Combined with stale-while-revalidate, users experience consistently fast responses regardless of cache state.

Start with the production configuration template in this guide, monitor cache hit rates with the X-Cache-Status header, and tune based on your application’s requirements.


Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization • Linux system administration • Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
