What if NGINX could remember things between requests — without a database, without an application server, and without restarting? The NGINX keyval module does exactly that. It turns NGINX into a lightweight key-value store. You can read and write variables on the fly, using shared memory or Redis as the backend.
This is a free, open-source alternative to the keyval functionality in NGINX Plus. If you have ever wanted dynamic IP blocking, feature flags, maintenance mode toggles, or A/B testing at the edge — the NGINX keyval module is the tool for the job.
How the NGINX Keyval Module Works
The NGINX keyval module creates variables whose values come from a key-value database. You define a zone (the storage area), a key (which can include NGINX variables like $remote_addr), and a variable that holds the looked-up value.
Here is the core concept:
keyval_zone zone=my_store:32k;
keyval $arg_key $my_value zone=my_store;
This creates a 32 KB shared memory zone called my_store. For every request, the module uses the key query parameter as the lookup key. It then populates $my_value with whatever is stored for that key. The variable is both readable and writable — use NGINX’s set directive to store new values.
The module supports two storage backends:
- Shared memory (default): Data lives in a shared memory zone visible to all NGINX workers. Lookups happen in-process with O(log n) performance via a red-black tree, with no network round trip. However, data is lost on NGINX restart.
- Redis: Data persists in an external Redis server. It survives NGINX restarts and can be shared across multiple instances. Requires the hiredis library.
Key Construction
The key parameter is a template combining literal text with NGINX variables:
# Simple: use a single variable as key
keyval $remote_addr $ip_data zone=store;
# Composite: combine multiple variables
keyval $remote_addr:$server_name $session zone=store;
At request time, the module evaluates all variables in the key template and concatenates them into the final lookup key.
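For example, with the composite key above, a request from a hypothetical client 203.0.113.7 to shop.example.com is looked up under the key 203.0.113.7:shop.example.com.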
Installation
The NGINX keyval module is available as a pre-built package from the GetPageSpeed repository. No compilation is required.
RHEL, CentOS, AlmaLinux, Rocky Linux
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-keyval
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
sudo apt-get update
sudo apt-get install nginx-module-keyval
On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.
Load the Module (RHEL-based Only)
On RHEL-based systems, add the following to the top of /etc/nginx/nginx.conf:
load_module modules/ngx_http_keyval_module.so;
For TCP/UDP stream processing, also load the stream module:
load_module modules/ngx_stream_keyval_module.so;
Verify the module loads correctly:
sudo nginx -t
Configuration Reference
keyval_zone
Defines a shared memory zone for key-value storage.
Syntax: keyval_zone zone=name:size [ttl=time];
Context: http, stream
| Parameter | Description | Default |
|---|---|---|
| zone=name:size | Zone name and shared memory size | Required |
| ttl=time | Time-to-live for entries | 0 (permanent) |
The size parameter controls how many pairs the zone can hold. A 32k zone typically holds hundreds of short entries. The ttl parameter accepts standard NGINX time values: 30s, 10m, 1h, or 24h.
Example:
keyval_zone zone=sessions:1m ttl=30m;
This creates a 1 MB zone. Entries expire automatically after 30 minutes.
keyval
Maps a key to a variable using a named zone.
Syntax: keyval key $variable zone=name;
Context: http, stream
The key can be any combination of text and NGINX variables. The $variable is created by this directive. It can be both read and written.
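Example (a hypothetical preferences zone, combining a literal prefix with a cookie variable):
keyval_zone zone=prefs:64k;
keyval user:$cookie_uid $user_prefs zone=prefs;
Each request looks up the key user:<value of the uid cookie> and exposes the stored value as $user_prefs.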
keyval_zone_redis
Configures a Redis-backed zone for persistent storage.
Syntax: keyval_zone_redis zone=name [hostname=addr] [port=number] [database=number] [connect_timeout=time] [ttl=time];
Context: http, stream
| Parameter | Description | Default |
|---|---|---|
| zone=name | Zone name | Required |
| hostname=addr | Redis server address | 127.0.0.1 |
| port=number | Redis server port | 6379 |
| database=number | Redis database index | 0 |
| connect_timeout=time | Connection timeout | 3s |
| ttl=time | Time-to-live for entries | 0 (permanent) |
Keys in Redis use the zone name as a prefix: zone_name:key.
The module uses the standard Redis protocol via the hiredis library. It works with Redis, Valkey, KeyDB, and any Redis-compatible server.
Basic Usage: A Simple Key-Value API
The simplest use case is an HTTP API for reading and writing key-value pairs:
keyval_zone zone=kv_store:64k ttl=10m;
keyval $arg_key $kv_data zone=kv_store;
server {
listen 80;
server_name localhost;
location = /kv/set {
set $kv_data $arg_value;
return 200 "OK\n";
}
location = /kv/get {
return 200 "$kv_data\n";
}
}
Test it:
# Store a value
curl "http://localhost/kv/set?key=greeting&value=hello_world"
# OK
# Retrieve it
curl "http://localhost/kv/get?key=greeting"
# hello_world
# Values expire after 10 minutes (ttl=10m)
This is already useful for lightweight caching or inter-request communication. However, the real power of the NGINX keyval module comes from its creative applications.
Dynamic IP Blocking Without Restarts
Traditional NGINX IP blocking requires editing config files and reloading. With the keyval module, you can block and unblock IPs at runtime:
keyval_zone zone=blocklist:32k ttl=1h;
keyval $remote_addr $is_blocked zone=blocklist;
server {
listen 80;
server_name example.com;
# Admin endpoints (restrict these in production!)
location = /admin/block {
set $is_blocked 1;
return 200 "Blocked: $remote_addr\n";
}
location = /admin/unblock {
set $is_blocked "";
return 200 "Unblocked: $remote_addr\n";
}
# All other requests check the blocklist
location / {
if ($is_blocked = "1") {
return 403;
}
proxy_pass http://backend;
}
}
The ttl=1h means blocked IPs are unblocked automatically after one hour. No manual cleanup is needed. This is useful for temporary bans in response to abuse. Your application calls /admin/block and the ban expires on its own.
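Test it (both endpoints act on the calling IP, because the key is $remote_addr):
# Block the calling IP; the entry expires after one hour (ttl=1h)
curl "http://localhost/admin/block"
# Lift the block early
curl "http://localhost/admin/unblock"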
Important: In production, protect the admin endpoints with authentication. Alternatively, restrict them to trusted networks using allow/deny directives.
Zero-Downtime Maintenance Mode
Putting a site into maintenance mode typically requires editing config and reloading NGINX. The NGINX keyval module lets you toggle it with a single HTTP request instead:
keyval_zone zone=maintenance:32k;
keyval "site_mode" $maintenance_enabled zone=maintenance;
server {
listen 80;
server_name example.com;
location = /admin/maintenance/on {
set $maintenance_enabled "1";
return 200 "Maintenance mode: ON\n";
}
location = /admin/maintenance/off {
set $maintenance_enabled "";
return 200 "Maintenance mode: OFF\n";
}
location / {
if ($maintenance_enabled = "1") {
return 503 "Under maintenance. Please try again later.\n";
}
proxy_pass http://backend;
}
}
Notice the key is a literal string "site_mode" instead of a variable. Every request checks the same key, creating a global toggle. Control it with:
# Enable maintenance
curl http://localhost/admin/maintenance/on
# Disable maintenance
curl http://localhost/admin/maintenance/off
No NGINX reload, no config editing, and no downtime. The switch is instantaneous across all workers because data lives in shared memory.
Feature Flags at the Edge
Implement feature flags directly in NGINX. Enable or disable features without deploying new code:
keyval_zone zone=flags:32k;
keyval $arg_flag $flag_value zone=flags;
server {
listen 80;
server_name example.com;
# Set a feature flag
location = /admin/flag {
set $flag_value $arg_value;
return 200 "Flag '$arg_flag' = '$arg_value'\n";
}
# Check a feature flag
location = /admin/get-flag {
return 200 "$flag_value\n";
}
}
Usage:
# Enable dark mode feature
curl "http://localhost/admin/flag?flag=dark_mode&value=on"
# Check the flag
curl "http://localhost/admin/get-flag?flag=dark_mode"
# on
You can also pass flags as headers to the backend:
location /app {
proxy_set_header X-Feature-Dark-Mode $flag_value;
proxy_pass http://backend;
}
Your application reads the header and adjusts its behavior accordingly.
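Note that $flag_value is keyed on the flag query argument, so requests to /app must include it for the lookup to resolve, for example (hypothetical flag name):
curl "http://localhost/app?flag=dark_mode"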
A/B Testing by Client IP
Assign visitors to test groups persistently by IP address:
keyval_zone zone=ab_test:32k;
keyval $remote_addr $ab_group zone=ab_test;
server {
listen 80;
server_name example.com;
location = /admin/ab/assign {
set $ab_group $arg_group;
return 200 "Assigned $remote_addr to group: $arg_group\n";
}
location / {
proxy_set_header X-AB-Group $ab_group;
proxy_pass http://backend;
}
}
Once assigned, every request from the same IP carries the group in the X-AB-Group header. Your backend reads this header and renders the right experience. The assignment lives in shared memory, so it persists across requests without cookies.
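For example, to assign the calling IP to a hypothetical group B:
curl "http://localhost/admin/ab/assign?group=B"
# Assigned <client IP> to group: B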
Canary Deployments by User Identity
Route specific users to a canary backend based on a custom header:
keyval_zone zone=canary:32k;
keyval $http_x_user_id $canary_group zone=canary;
server {
listen 80;
server_name example.com;
# Enroll a user in the canary group
location = /admin/canary/assign {
set $canary_group $arg_group;
return 200 "User $http_x_user_id enrolled in: $arg_group\n";
}
location / {
if ($canary_group = "canary") {
proxy_pass http://backend_v2;
break;
}
proxy_pass http://backend_v1;
}
}
This works well for API services where clients send an X-User-Id header. The enrollment persists and can be managed without restarting NGINX.
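For example (hypothetical user ID and request path):
# Enroll the user in the canary group
curl -H "X-User-Id: 42" "http://localhost/admin/canary/assign?group=canary"
# Subsequent requests with the same header are routed to backend_v2
curl -H "X-User-Id: 42" "http://localhost/api/orders"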
Redis Backend for Persistent Storage
Shared memory is fast but volatile — data vanishes when NGINX restarts. For persistent storage, use the Redis backend:
keyval_zone_redis zone=persistent_store
hostname=127.0.0.1
port=6379
database=0
connect_timeout=3s
ttl=24h;
keyval $arg_key $persistent_value zone=persistent_store;
server {
listen 80;
server_name example.com;
location = /kv/set {
set $persistent_value $arg_value;
return 200 "Stored in Redis\n";
}
location = /kv/get {
return 200 "$persistent_value\n";
}
}
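Store a value through NGINX first (hypothetical key and value):
curl "http://localhost/kv/set?key=mykey&value=hello"
# Stored in Redis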
Keys use the zone name as a prefix. For example, persistent_store:mykey appears in Redis. You can inspect data with standard Redis tools:
redis-cli GET "persistent_store:mykey"
redis-cli KEYS "persistent_store:*"
Redis is ideal when data must survive restarts. It also helps when multiple NGINX instances share state in a load-balanced cluster.
On RHEL 10 and Rocky Linux 10, the Redis-compatible server in the default repositories is Valkey. Install it with:
sudo dnf install valkey
sudo systemctl enable --now valkey
Valkey uses the same protocol and port (6379) as Redis. No changes to the NGINX keyval configuration are needed.
For more Redis integration patterns, see our NGINX redis2 module guide and the srcache caching layer article.
Stream Module Support
The NGINX keyval module works in both HTTP and TCP/UDP stream contexts. This enables dynamic routing decisions for stream proxying:
load_module modules/ngx_stream_keyval_module.so;
stream {
keyval_zone zone=stream_routes:32k;
keyval $ssl_server_name $backend_addr zone=stream_routes;
server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
proxy_pass $backend_addr;
}
}
This allows dynamic routing of TLS connections based on the SNI hostname. Backend addresses are stored in the key-value store.
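One way to manage the routes, assuming the zone is declared with keyval_zone_redis instead of keyval_zone, is to write them externally with redis-cli using the zone-name prefix described earlier:
# Hypothetical SNI hostname and backend address
redis-cli SET "stream_routes:app.example.com" "10.0.0.5:8443"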
Performance Considerations
The shared memory backend uses a red-black tree data structure. This provides O(log n) lookup performance. In practice, even 10,000 entries need only about 13 comparisons.
Here are guidelines for sizing your zones:
| Zone Size | Approximate Capacity | Use Case |
|---|---|---|
| 32k | Hundreds of short entries | Feature flags, small blocklists |
| 256k | Thousands of entries | Medium IP blocklists |
| 1m | Tens of thousands of entries | Large session stores |
| 10m | Hundreds of thousands of entries | High-traffic A/B testing |
Memory overhead per entry: Each entry uses roughly key_length + value_length + 64 bytes for the tree node. Entries with TTL use extra memory for the timer event.
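As a rough example, 300 entries with 15-byte keys and 20-byte values need about 300 × (15 + 20 + 64) ≈ 30 KB before slab overhead, which is why a 32k zone is rated for hundreds of short entries.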
Redis latency: The Redis backend adds network round-trip time to each variable access. For latency-sensitive paths, prefer shared memory. Reserve Redis for persistence or cross-instance sharing.
Security Best Practices
The NGINX keyval module exposes powerful runtime capabilities. Without protection, admin endpoints become a serious vulnerability.
Restrict Admin Access
Always protect admin endpoints. Use IP restrictions, HTTP authentication, or both:
location /admin/ {
allow 10.0.0.0/8;
allow 192.168.0.0/16;
deny all;
# Additional authentication
auth_basic "Admin";
auth_basic_user_file /etc/nginx/.htpasswd;
}
For stronger security, consider TOTP two-factor authentication or JWT-based authentication.
Validate Input
The set directive stores whatever value it receives. In production, validate inputs before storing:
location = /admin/block-ip {
# Only accept requests with a plausible IPv4 address in the 'ip' parameter
if ($arg_ip !~ "^([0-9]{1,3}\.){3}[0-9]{1,3}$") {
return 400 "Missing or invalid 'ip' parameter\n";
}
set $is_blocked 1;
return 200 "Blocked\n";
}
Size Your Zones Appropriately
An undersized zone fails to store new entries when full. Set sizes with headroom for peak usage. Shared memory zones cannot be resized without restarting NGINX, so plan ahead.
Troubleshooting
“unknown directive keyval_zone”
The module is not loaded. Check that load_module appears at the top of nginx.conf, before the http block. On Debian/Ubuntu with GetPageSpeed packages, loading is automatic.
Values Disappear After NGINX Restart
Shared memory zones are volatile by design. Use keyval_zone_redis if you need persistence across restarts.
“failed to allocate slab”
The zone is full. Increase the zone size in your keyval_zone directive. Also ensure a TTL is set so stale entries get cleaned up.
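For example, growing the zone from the basic-usage example while keeping its TTL:
keyval_zone zone=kv_store:256k ttl=10m;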
“failed to connect redis: Permission denied”
On RHEL-based systems with SELinux enabled, NGINX is blocked from making network connections by default. Allow it with:
sudo setsebool -P httpd_can_network_connect 1
This grants NGINX permission to connect to Redis, Valkey, or any other network service. The -P flag makes the change persist across reboots.
“failed to connect redis”
Verify your Redis-compatible server is running and accessible:
redis-cli ping
# PONG
Check the hostname, port, and firewall rules. Also verify connect_timeout is long enough for your network.
Variable Is Always Empty
Ensure the key matches a value that was previously stored. Debug by returning the key itself:
location /debug {
return 200 "Key: '$arg_key', Value: '$kv_data'\n";
}
NGINX Keyval Module vs NGINX Plus
The open-source NGINX keyval module offers similar features to the commercial NGINX Plus key-value store:
| Feature | NGINX Keyval (open-source) | NGINX Plus |
|---|---|---|
| Shared memory backend | Yes | Yes |
| Redis backend | Yes | No |
| TTL / expiration | Yes | Yes |
| HTTP and Stream contexts | Yes | Yes |
| REST API for management | Via custom locations | Built-in API |
| File-based persistence | No | Yes (state=) |
| Cost | Free (GetPageSpeed repo) | Commercial license |
The open-source module lacks file-based persistence. However, it compensates with Redis backend support. Redis provides both persistence and cross-instance sharing — something NGINX Plus file-based state cannot do.
Conclusion
The NGINX keyval module transforms NGINX from a static configuration server into a dynamic, state-aware edge platform. You get dynamic IP blocking, instant maintenance mode, feature flags, and A/B testing — all without application changes or NGINX reloads.
Add the Redis backend and you gain persistent storage that survives restarts. It also scales across multiple NGINX instances in a cluster. This is NGINX Plus key-value functionality, available for free through the GetPageSpeed repository.
The module source code is available on GitHub.