
NGINX srcache: Transparent Memcached and Redis Caching Layer

Have you ever wished NGINX could cache your dynamic content transparently without modifying your application code? The NGINX srcache module delivers exactly that capability. This powerful module creates a transparent caching layer that sits between NGINX and your backend application. It stores responses in memcached or Redis for lightning-fast subsequent requests.

What is NGINX srcache?

The NGINX srcache module implements transparent subrequest-based caching for any NGINX location. Unlike NGINX’s built-in proxy_cache, srcache offers remarkable flexibility. It stores cached responses in external key-value stores like memcached or Redis. This enables distributed caching across multiple NGINX servers.

Additionally, srcache respects standard HTTP cache headers (Cache-Control, Expires, Pragma) by default. Therefore, your existing cache invalidation strategies continue working without modification. The module operates through two distinct phases:

  1. Fetch Phase: Before processing a request, srcache issues a subrequest to check the cache backend
  2. Store Phase: After receiving a response from your backend, srcache stores it via another subrequest

This architecture provides several advantages over traditional caching approaches. For example, you can share cached content across multiple NGINX instances. Simply point them to the same memcached or Redis cluster.
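In configuration terms, the two phases map onto a pair of directives that point at internal locations. Here is a conceptual sketch (the /cache-read and /cache-write names are placeholders, and app_backend is defined in the complete example later in this guide):

location / {
    # Fetch phase: consult the cache backend via a subrequest
    srcache_fetch GET /cache-read $uri;
    # Store phase: save the upstream response via another subrequest
    srcache_store PUT /cache-write $uri;
    proxy_pass http://app_backend;
}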

Why Use NGINX srcache Instead of proxy_cache?

NGINX’s built-in proxy_cache stores cached responses on local disk or in shared memory. However, this approach has limitations in distributed environments. Consider the following comparison:

Feature               | proxy_cache       | srcache
----------------------|-------------------|---------------------------
Storage Location      | Local disk/memory | External (memcached/Redis)
Shared Across Servers | No                | Yes
Cache Invalidation    | File-based        | Key-based
Memory Efficiency     | Limited           | Scales with backend
Setup Complexity      | Simple            | Moderate

Consequently, NGINX srcache excels in scenarios where you need:

  - Cached content shared across multiple NGINX instances
  - Key-based invalidation instead of file-based cache purging
  - Cache capacity that scales with the backend store rather than with each web server

If you need a simpler caching solution for single-server deployments, consider using NGINX proxy_cache instead.

Installing the NGINX srcache Module

On Rocky Linux 10, AlmaLinux 10, and RHEL 10, you can install the srcache module from the GetPageSpeed repository. First, enable the repository:

dnf install -y https://extras.getpagespeed.com/release-latest.rpm

Then install the srcache module along with the memcached module:

dnf install -y nginx-module-srcache nginx-module-memc memcached

The installation includes these packages:

  - nginx-module-srcache – the srcache filter module itself
  - nginx-module-memc – the memc module that handles memcached communication
  - memcached – the memcached server
  - The NDK (Nginx Development Kit) module, which provides the ndk_http_module.so referenced in the load_module lines below and is typically installed as a dependency

After installation, enable the modules. Add these lines to the beginning of /etc/nginx/nginx.conf:

load_module modules/ndk_http_module.so;
load_module modules/ngx_http_srcache_filter_module.so;
load_module modules/ngx_http_memc_module.so;

Finally, start and enable both services:

systemctl enable --now nginx memcached
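To confirm memcached is answering, you can query its stats; the memcached-tool helper ships with the memcached package on EL systems:

memcached-tool 127.0.0.1:11211 stats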

Basic Configuration with Memcached

The following configuration demonstrates a basic NGINX srcache setup with memcached. This example caches responses from a backend application server:

upstream memc_backend {
    server 127.0.0.1:11211;
    keepalive 512;
}

upstream app_backend {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;
    server_tokens off;

    # Internal memcached handler
    location = /memc {
        internal;
        memc_connect_timeout 100ms;
        memc_read_timeout 100ms;
        memc_send_timeout 100ms;
        set $memc_key $query_string;
        set $memc_exptime 300;
        memc_pass memc_backend;
    }

    # Cached application location
    location / {
        set $key $uri$is_args$args;

        srcache_fetch GET /memc $key;
        srcache_store PUT /memc $key;
        srcache_store_statuses 200 301 302 307 308;

        proxy_pass http://app_backend;
    }
}

This configuration works as follows:

  1. The /memc location handles all memcached communication internally
  2. For each request, srcache first checks memcached using the URI as the cache key
  3. On a cache hit, NGINX serves the cached response immediately
  4. On a cache miss, the request proceeds to your backend application
  5. Successful responses (status codes 200, 301, 302, 307, 308) get stored

Understanding Cache Status Headers

Debugging cache behavior becomes straightforward when you add status headers. Include these directives to observe how srcache handles each request:

location / {
    set $key $uri$is_args$args;

    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;
    srcache_store_statuses 200 301 302 307 308;

    add_header X-SRCache-Fetch-Status $srcache_fetch_status always;
    add_header X-SRCache-Store-Status $srcache_store_status always;

    proxy_pass http://app_backend;
}

The $srcache_fetch_status variable reports these values:

  - HIT – the response was served from the cache backend
  - MISS – the cache was queried but no entry was found
  - BYPASS – the cache lookup was skipped entirely (for example, via srcache_fetch_skip)

Similarly, $srcache_store_status indicates:

  - STORE – the response was saved to the cache backend
  - BYPASS – storage was skipped (a skip condition matched, the status code was not listed in srcache_store_statuses, the response exceeded the size limit, or the response came from the cache itself)

A typical request flow looks like this:

# First request (cache miss)
X-SRCache-Fetch-Status: MISS
X-SRCache-Store-Status: STORE

# Second request (cache hit)
X-SRCache-Fetch-Status: HIT
X-SRCache-Store-Status: BYPASS
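You can reproduce this flow from the command line (the /some/page path is just an example), using GET requests so the response body is actually cacheable:

# First call populates the cache, second call should hit it
curl -s -D - -o /dev/null http://example.com/some/page | grep X-SRCache
curl -s -D - -o /dev/null http://example.com/some/page | grep X-SRCache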

Configuring Cache Expiration

The srcache module provides multiple ways to control cache expiration. By default, it respects HTTP cache headers from your backend. However, you can override this behavior with explicit settings.

Using Default Expiration

Set a default expiration time in seconds:

srcache_default_expire 300;  # 5 minutes

Setting Maximum Expiration

Prevent excessively long cache times:

srcache_max_expire 3600;  # Maximum 1 hour

Combining Both Settings

location / {
    set $key $uri$is_args$args;

    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;
    srcache_default_expire 300;
    srcache_max_expire 3600;

    proxy_pass http://app_backend;
}

Note that the /memc handler shown earlier hardcodes $memc_exptime to 300 seconds; memcached then expires items automatically without manual intervention. To have the handler honor the TTL that srcache actually computes, forward it through the module's $srcache_expire variable, as shown in the sketch below.
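Here is that variant; $srcache_expire reflects the value derived from srcache_default_expire, srcache_max_expire, and any cache headers sent by the backend:

location = /memc {
    internal;
    memc_connect_timeout 100ms;
    memc_read_timeout 100ms;
    memc_send_timeout 100ms;
    set $memc_key $query_string;
    # Use the TTL computed by srcache instead of a fixed value
    set $memc_exptime $srcache_expire;
    memc_pass memc_backend;
}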

Advanced Cache Key Strategies

Choosing the right cache key determines cache effectiveness. A poorly designed cache key leads to cache pollution or low hit rates. Consider these strategies:

Basic URI-Based Key

set $key $uri$is_args$args;

This approach works well for simple APIs. However, query parameter order matters: /page?a=1&b=2 and /page?b=2&a=1 produce different keys for the same content.

Normalized Cache Key

For consistent caching regardless of parameter order:

set_by_lua_block $key {
    -- Sort query parameters so that /page?a=1&b=2 and /page?b=2&a=1
    -- map to the same cache key
    local args = ngx.var.args
    if args then
        local sorted = {}
        for k, v in string.gmatch(args, "([^&=]+)=([^&]*)") do
            table.insert(sorted, k .. "=" .. v)
        end
        table.sort(sorted)
        return ngx.var.uri .. "?" .. table.concat(sorted, "&")
    end
    return ngx.var.uri
}

This requires the lua-nginx-module. Install it with:

dnf install -y nginx-module-lua

User-Specific Caching

Cache different content per user by including a user identifier:

set $key $uri$is_args$args:$cookie_session_id;

Device-Based Caching

Serve different cached versions for mobile and desktop:

set $device_type "desktop";
if ($http_user_agent ~* "(mobile|android|iphone|ipad)") {
    set $device_type "mobile";
}
set $key $uri$is_args$args:$device_type;

Bypassing the Cache

Sometimes you need to skip caching for specific requests. The srcache module provides two directives for this purpose:

Skip Cache Lookup

srcache_fetch_skip $skip_cache;

Skip Cache Storage

srcache_store_skip $skip_store;

Practical Example

This configuration bypasses caching for authenticated users:

location / {
    set $key $uri$is_args$args;
    set $skip_cache 0;

    # Skip for authenticated users
    if ($http_authorization) {
        set $skip_cache 1;
    }

    # Skip for cookies indicating logged-in state
    if ($cookie_logged_in) {
        set $skip_cache 1;
    }

    srcache_fetch_skip $skip_cache;
    srcache_store_skip $skip_cache;

    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;

    proxy_pass http://app_backend;
}

Additionally, srcache only caches GET and HEAD requests by default. The srcache_methods directive allows caching other methods:

srcache_methods GET HEAD;  # Default
srcache_methods GET HEAD POST;  # Include POST

Limiting Cache Size

Prevent memory exhaustion by limiting the maximum cacheable response size:

srcache_store_max_size 1m;  # Maximum 1 megabyte

This setting is important when caching user-generated content or variable API responses. Responses exceeding this limit bypass the cache without causing errors.

Production Configuration Example

The following configuration represents a production-ready NGINX srcache setup. It incorporates best practices for security, performance, and maintainability:

upstream memc_backend {
    server 127.0.0.1:11211;
    keepalive 512;
}

upstream app_backend {
    server 127.0.0.1:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com;
    server_tokens off;

    # Internal memcached handler
    location = /memc {
        internal;
        memc_connect_timeout 100ms;
        memc_read_timeout 100ms;
        memc_send_timeout 100ms;
        set $memc_key $query_string;
        set $memc_exptime 300;
        memc_pass memc_backend;
    }

    # Static assets - no caching needed
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff2?)$ {
        root /var/www/html;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # API endpoints with srcache
    location /api/ {
        set $key $uri$is_args$args;
        set $skip_cache 0;

        # Skip cache for authenticated requests
        if ($http_authorization) {
            set $skip_cache 1;
        }

        srcache_fetch_skip $skip_cache;
        srcache_store_skip $skip_cache;

        srcache_fetch GET /memc $key;
        srcache_store PUT /memc $key;
        srcache_store_statuses 200;
        srcache_default_expire 60;
        srcache_max_expire 300;
        srcache_store_max_size 512k;

        add_header X-SRCache-Fetch-Status $srcache_fetch_status always;
        add_header X-SRCache-Store-Status $srcache_store_status always;

        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
    }

    # Dynamic pages with longer cache
    location / {
        set $key $uri$is_args$args;
        set $skip_cache 0;

        if ($cookie_logged_in) {
            set $skip_cache 1;
        }

        srcache_fetch_skip $skip_cache;
        srcache_store_skip $skip_cache;

        srcache_fetch GET /memc $key;
        srcache_store PUT /memc $key;
        srcache_store_statuses 200 301 302 307 308;
        srcache_default_expire 300;
        srcache_store_max_size 1m;

        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
    }
}

Before deploying, validate your configuration:

nginx -t
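If the test passes, apply the changes without dropping connections:

systemctl reload nginx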

Consider using gixy to detect potential security issues:

dnf install -y gixy
gixy /etc/nginx/nginx.conf

Performance Tuning Tips

Maximize NGINX srcache performance with these optimization techniques:

Memcached Connection Pooling

The keepalive directive maintains persistent connections to memcached:

upstream memc_backend {
    server 127.0.0.1:11211;
    keepalive 512;  # Adjust based on worker_processes × expected connections
}

Memory Allocation

Configure memcached with sufficient memory for your cache requirements:

# /etc/sysconfig/memcached
CACHESIZE="2048"  # 2GB of cache memory
MAXCONN="4096"    # Maximum connections
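After editing /etc/sysconfig/memcached, restart the service and confirm the new limits took effect:

systemctl restart memcached
memcached-tool 127.0.0.1:11211 stats | grep -E 'limit_maxbytes|max_connections'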

Multiple Memcached Servers

Distribute cache across multiple servers for higher capacity:

upstream memc_backend {
    server 192.168.1.10:11211;
    server 192.168.1.11:11211;
    server 192.168.1.12:11211;
    keepalive 512;
    hash $memc_key consistent;  # Consistent hashing
}

Using Valkey (Redis-Compatible) Backend

While memcached integration works seamlessly with NGINX srcache, Valkey (a Redis fork) requires additional configuration. The redis2 module returns raw Redis wire-protocol replies, while srcache expects a plain response body, so the replies need extra processing before srcache can use them.

Install the required packages:

dnf install -y nginx-module-redis2 nginx-module-echo nginx-module-set-misc valkey

Enable the modules in nginx.conf:

load_module modules/ndk_http_module.so;
load_module modules/ngx_http_set_misc_module.so;
load_module modules/ngx_http_echo_module.so;
load_module modules/ngx_http_redis2_module.so;
load_module modules/ngx_http_srcache_filter_module.so;

For reliable Redis/Valkey integration, use the Lua-based approach with lua-resty-redis, which gives full control over response formatting, as sketched below.
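Here is a sketch of that approach, assuming nginx-module-lua and the lua-resty-redis library are available (the location names, the 127.0.0.1:6379 address, and the 300-second fallback TTL are illustrative):

location = /redis-fetch {
    internal;
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)  -- milliseconds
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(500)
        end
        -- srcache passes the cache key as the subrequest query string
        local res, err = red:get(ngx.var.args)
        if not res or res == ngx.null then
            return ngx.exit(404)  -- any non-200 status counts as a miss
        end
        red:set_keepalive(10000, 100)
        ngx.print(res)
    }
}

location = /redis-store {
    internal;
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(500)
        end
        -- the response to cache arrives as the PUT request body
        ngx.req.read_body()
        local body = ngx.req.get_body_data()
        if body then
            -- honor the TTL computed by srcache, falling back to 300s
            local ttl = tonumber(ngx.var.srcache_expire) or 300
            red:set(ngx.var.args, body)
            red:expire(ngx.var.args, ttl)
        end
        red:set_keepalive(10000, 100)
    }
}

The cached location then points srcache_fetch at /redis-fetch and srcache_store at /redis-store with the same $key variable used in the memcached examples.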

Troubleshooting Common Issues

Cache Not Storing Responses

If X-SRCache-Store-Status shows BYPASS unexpectedly, check:

  1. Response status code – Verify it matches srcache_store_statuses
  2. Response size – Ensure it does not exceed srcache_store_max_size
  3. Skip conditions – Review srcache_store_skip variable
  4. Cache-Control headers – Check for no-store or private directives

Cache Not Fetching

If X-SRCache-Fetch-Status always shows MISS:

  1. Key mismatch – Ensure fetch and store use identical cache keys
  2. Backend connectivity – Test memcached connection directly
  3. Expiration – Verify cache TTL has not expired

SELinux Issues

On systems with SELinux enabled, NGINX might fail to connect to memcached. Apply the necessary booleans:

setsebool -P httpd_can_network_connect 1
setsebool -P httpd_can_network_memcache 1
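Verify that both booleans are now enabled:

getsebool httpd_can_network_connect httpd_can_network_memcache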

Monitoring Cache Effectiveness

Track cache hit rates by analyzing your access logs:

log_format cache_status '$remote_addr - $request - $srcache_fetch_status';
access_log /var/log/nginx/cache.log cache_status;

Then analyze hit rates:

awk '{print $NF}' /var/log/nginx/cache.log | sort | uniq -c
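To express the counts as a hit ratio in one step:

awk '{ total++ } $NF == "HIT" { hits++ } END { if (total) printf "%.1f%% hit ratio\n", 100 * hits / total }' /var/log/nginx/cache.log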

Conclusion

The NGINX srcache module transforms NGINX into a powerful distributed caching platform. By leveraging external key-value stores like memcached, you achieve scalable caching across multiple servers. The module’s transparent operation means your application remains unaware of the caching layer.

Key takeaways from this guide include:

  - srcache adds a transparent, subrequest-based cache layer in front of any NGINX location
  - External backends such as memcached make the cache shareable across multiple servers
  - Standard HTTP cache headers keep working, while srcache_default_expire and srcache_max_expire bound the TTL
  - The X-SRCache-* status headers make cache behavior observable and easy to debug
  - Careful cache key design and skip conditions prevent serving stale or private content

For high-traffic applications requiring distributed caching, NGINX srcache provides a robust alternative to NGINX Plus commercial features. Combined with proper cache key strategies and monitoring, it delivers significant performance improvements.

Start implementing NGINX srcache today and experience the benefits of transparent caching with external backends.


Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization • Linux system administration • Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
