yum upgrades for production use, this is the repository for you.
Active subscription is required.
The NGINX dynamic limit req module is a powerful extension of NGINX’s built-in rate limiting that adds Redis-backed IP blocking for effective DDoS protection and brute force prevention. While native limit_req only rejects individual excess requests, the NGINX dynamic limit req module goes further — when a client exceeds the rate limit, the module locks the offending IP in Redis with a configurable lockout period. During that lockout, every subsequent request from the blocked IP is immediately rejected.
This two-layer approach makes the NGINX dynamic limit req module significantly more effective against sustained attacks than native rate limiting alone. Moreover, because blocked IPs are stored in Redis, the state is shared across all NGINX worker processes and can even be shared across multiple NGINX instances.
How the Dynamic Limit Req Module Works
The module operates on two distinct layers:
Layer 1 — Leaky Bucket (Shared Memory): Identical to NGINX’s native limit_req, a shared memory zone tracks the request rate per key using the leaky bucket algorithm. When a client’s request rate exceeds the configured rate plus burst, the request is flagged as over-limit.
Layer 2 — Redis IP Blocking: When a request exceeds the rate limit, the module stores the client’s IP address in Redis with a SETEX command, setting a TTL equal to block_second. For the duration of that TTL, every request from the blocked IP is immediately rejected with the configured status code — without even consulting the leaky bucket.
Additionally, the module maintains a history database in Redis DB 1, permanently recording every IP that was ever blocked. This provides a valuable audit trail for security analysis.
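The two layers can be sketched as a simplified model. The following Python sketch is an illustration of the decision flow only, not the module's actual C implementation: an in-memory dict stands in for Redis, and the leaky bucket is reduced to a single drained counter.

```python
import time

class DynamicLimiter:
    """Simplified model of the module's two-layer check (illustration only)."""

    def __init__(self, rate, burst, block_second):
        self.rate = rate                  # allowed requests per second
        self.burst = burst                # extra requests tolerated
        self.block_second = block_second  # lockout duration (Redis TTL)
        self.excess = {}                  # ip -> (excess, last_ts): leaky bucket state
        self.blocked = {}                 # ip -> unblock_time: stands in for Redis SETEX

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        if ip == "127.0.0.1":             # localhost is always exempt
            return True
        unblock = self.blocked.get(ip)
        if unblock is not None:
            if now < unblock:             # Layer 2: still inside the lockout TTL
                return False
            del self.blocked[ip]          # TTL expired, like Redis key expiry
        excess, last = self.excess.get(ip, (0.0, now))
        # Layer 1: leaky bucket -- drain at `rate`, add 1 for this request
        excess = max(0.0, excess - (now - last) * self.rate) + 1.0
        self.excess[ip] = (excess, now)
        if excess > self.burst + 1:       # over rate + burst: lock the IP
            self.blocked[ip] = now + self.block_second
            return False
        return True
```

With rate=1r/s, burst=2 and a 300-second block, a burst of five instant requests from one IP passes three, then locks the IP until the TTL elapses.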
Whitelist and Blacklist via Redis
The module supports dynamic access control through Redis keys:
- Whitelist: Set a key white<IP> (e.g., white192.168.1.100) in Redis. Whitelisted IPs will not be blocked when they exceed the rate limit.
- Blacklist: Set a key <IP> (e.g., 10.0.0.5) in Redis with a TTL. Any IP that exists as a key in Redis is immediately rejected, even if it has not exceeded the rate limit.
Localhost Exemption
The module automatically exempts requests originating from 127.0.0.1. This ensures that health checks, internal monitoring, and local processes are never rate-limited or blocked.
Installation
RHEL, CentOS, AlmaLinux, Rocky Linux
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-dynamic-limit-req
This will also install the hiredis library (Redis client for C) as a dependency.
Next, load the module by adding this line at the top of /etc/nginx/nginx.conf, before the events {} block:
load_module modules/ngx_http_dynamic_limit_req_module.so;
You will also need a running Redis-compatible server. On Rocky Linux 10 and similar, Valkey (the open-source Redis fork) is available:
sudo dnf install valkey
sudo systemctl enable --now valkey
On older RHEL versions, install Redis from EPEL:
sudo dnf install epel-release
sudo dnf install redis
sudo systemctl enable --now redis
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
sudo apt-get update
sudo apt-get install nginx-module-dynamic-limit-req
On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.
Install Redis:
sudo apt-get install redis-server
sudo systemctl enable --now redis-server
SELinux Configuration (RHEL-Based Systems)
On SELinux-enforcing systems, NGINX cannot connect to Redis by default. You must allow network connections:
sudo setsebool -P httpd_can_network_connect 1
Without this, you will see Permission denied errors in the NGINX error log when the module attempts to connect to Redis. This is a common issue on Rocky Linux and AlmaLinux systems.
Configuration Reference
The module provides five directives. None of them export NGINX variables.
dynamic_limit_req_zone
Defines a shared memory zone for rate tracking and configures the Redis backend.
Syntax: dynamic_limit_req_zone key zone=name:size rate=rate redis=address block_second=seconds
Context: http
Parameters:
- key — The variable used to differentiate clients. Typically $binary_remote_addr (4 bytes for an IPv4 address, 16 for IPv6, so it is compact in shared memory).
- zone=name:size — Name and size of the shared memory zone. A 1 MB zone can hold approximately 16,000 IP addresses.
- rate=Xr/s or rate=Xr/m — Maximum request rate in requests per second or per minute.
- redis=address — Redis server address (IP or Unix socket path).
- block_second=N — How many seconds to block an offending IP in Redis after it exceeds the rate limit.
Example:
dynamic_limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s redis=127.0.0.1 block_second=300;
This creates a 10 MB zone named api that allows 10 requests per second per IP. Offenders are blocked in Redis for 300 seconds (5 minutes).
dynamic_limit_req_redis
Configures Redis connection parameters separately. Use this when you need to specify a non-default port or a password.
Syntax: dynamic_limit_req_redis [port=number] [requirepass=password] [unix_socket]
Context: http
Parameters:
- port=N — Redis port (default: 6379).
- requirepass=secret — Redis AUTH password.
- unix_socket — Connect via Unix socket instead of TCP. Cannot be combined with port.
Examples:
# TCP with password
dynamic_limit_req_redis port=6379 requirepass=YourRedisPassword;
# Unix socket with password
dynamic_limit_req_redis unix_socket requirepass=YourRedisPassword;
When using a Unix socket, ensure Redis is configured with unixsocketperm 770 and that the nginx user belongs to the redis group:
sudo usermod -aG redis nginx
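Putting those pieces together, a Unix-socket setup might look like the fragment below. The socket path /var/run/redis/redis.sock is an assumption; use whatever unixsocket path your distribution's Redis or Valkey package configures.

```nginx
# /etc/redis/redis.conf (or /etc/valkey/valkey.conf)
#   unixsocket /var/run/redis/redis.sock
#   unixsocketperm 770

# /etc/nginx/nginx.conf, http context
dynamic_limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s redis=/var/run/redis/redis.sock block_second=300;
dynamic_limit_req_redis unix_socket;
```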
dynamic_limit_req
Enables rate limiting in a location and references a previously defined zone.
Syntax: dynamic_limit_req zone=name [burst=number] [nodelay]
Context: http, server, location, if
Parameters:
- zone=name — References a zone defined by dynamic_limit_req_zone.
- burst=N — Maximum burst size. The module allows up to burst excess requests before blocking. Default: 0.
- nodelay — Accepted for compatibility with native limit_req syntax, but has no practical effect in this module. The module always processes burst requests immediately without throttling. See "The nodelay Parameter Has No Effect" below for details.
Example:
location /api/ {
dynamic_limit_req zone=api burst=20 nodelay;
}
dynamic_limit_req_log_level
Sets the severity level for log messages when requests are rate-limited.
Syntax: dynamic_limit_req_log_level info | notice | warn | error
Default: error
Context: http, server, location
Example:
dynamic_limit_req_log_level warn;
Important: The level you set here must be equal to or higher in severity than your error_log directive's level. For example, if error_log uses its default level of error, setting dynamic_limit_req_log_level warn produces no output because warn messages are filtered out. Either lower your error_log level (e.g., error_log /var/log/nginx/error.log warn;) or leave this directive at its default of error. This is standard NGINX behavior; native limit_req_log_level has the same interaction.
dynamic_limit_req_status
Sets the HTTP status code returned to blocked clients.
Syntax: dynamic_limit_req_status code
Default: 503
Context: http, server, location, if
Valid values: 400–599.
Example:
dynamic_limit_req_status 429;
Using 429 Too Many Requests is the standard HTTP status code for rate limiting and is recommended over the default 503 Service Unavailable.
Practical Configuration Examples
Basic DDoS Protection
This configuration protects a website with a general rate limit of 10 requests per second per IP, with a 5-minute block for offenders:
load_module modules/ngx_http_dynamic_limit_req_module.so;

events {
    worker_connections 1024;
}

http {
    dynamic_limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s redis=127.0.0.1 block_second=300;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        location / {
            dynamic_limit_req zone=general burst=20 nodelay;
            dynamic_limit_req_status 429;
        }
    }
}
Multi-Zone Protection for Different Endpoints
Apply different rate limits to different parts of your application. For example, use strict limits on login pages and more lenient limits on general content:
http {
    dynamic_limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s redis=127.0.0.1 block_second=300;
    dynamic_limit_req_zone $binary_remote_addr zone=login:5m rate=1r/m redis=127.0.0.1 block_second=600;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        location / {
            dynamic_limit_req zone=general burst=20 nodelay;
            dynamic_limit_req_status 429;
        }

        location /login {
            dynamic_limit_req zone=login burst=2 nodelay;
            dynamic_limit_req_status 403;
        }
    }
}
With this setup, login attempts are limited to 1 per minute with a burst of 2. Anyone who exceeds this is blocked for 10 minutes and receives a 403 Forbidden response.
Redis Authentication
When Redis requires a password (which it should in production), configure authentication using the NGINX dynamic limit req Redis directive:
http {
    dynamic_limit_req_zone $binary_remote_addr zone=main:10m rate=5r/s redis=127.0.0.1 block_second=300;
    dynamic_limit_req_redis port=6379 requirepass=YourStrongPassword;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        location / {
            dynamic_limit_req zone=main burst=10 nodelay;
            dynamic_limit_req_status 429;
        }
    }
}
Behind a CDN or Load Balancer
When NGINX sits behind a CDN or reverse proxy, use the real_ip module to extract the true client IP from the X-Forwarded-For or X-Real-IP header:
http {
    set_real_ip_from 192.168.0.0/16;
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    dynamic_limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s redis=127.0.0.1 block_second=300;

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        location / {
            dynamic_limit_req zone=main burst=20 nodelay;
            dynamic_limit_req_status 429;
        }
    }
}
Always use $binary_remote_addr rather than $http_x_forwarded_for as the zone key after configuring real_ip. This ensures the resolved client IP is used and prevents attackers from spoofing the header.
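To make the distinction concrete, the fragment below contrasts the two keys (the unsafe variant is shown commented out, for illustration only):

```nginx
# Unsafe: keys the zone on a client-supplied header, which attackers control
# dynamic_limit_req_zone $http_x_forwarded_for zone=bad:10m rate=10r/s redis=127.0.0.1 block_second=300;

# Safe: after real_ip processing, $binary_remote_addr holds the resolved client IP
dynamic_limit_req_zone $binary_remote_addr zone=main:10m rate=10r/s redis=127.0.0.1 block_second=300;
```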
Managing Blocked IPs via Redis
One of the most powerful features of the NGINX dynamic limit req module is the ability to manage blocked IPs directly through the Redis CLI or any Redis client.
View Currently Blocked IPs
redis-cli keys "*"
Each key is an IP address, with the value being the IP itself. (On a busy instance, prefer redis-cli --scan over keys "*": KEYS blocks the server while it walks the entire keyspace.) Check the remaining block time:
redis-cli ttl 10.0.0.5
Manually Block an IP (Blacklist)
Block an IP for 1 hour without waiting for it to exceed the rate limit:
redis-cli setex 10.0.0.5 3600 10.0.0.5
Unblock an IP
Remove an IP from the block list:
redis-cli del 10.0.0.5
Also clear the history record if desired:
redis-cli -n 1 del 10.0.0.5
Whitelist an IP
Prevent an IP from ever being automatically blocked (e.g., your office IP or monitoring service):
redis-cli set white203.0.113.50 203.0.113.50
The whitelist key has no TTL, so it persists until explicitly removed.
View Block History
Redis DB 1 stores a permanent record of all IPs that were ever blocked:
redis-cli -n 1 keys "*"
Bulk Operations for Incident Response
During an active DDoS attack, you may need to block entire subnets. While the module operates on individual IPs, you can script bulk blocks:
# Block all IPs in a /24 subnet for 1 hour
for i in $(seq 1 254); do
    redis-cli setex "10.0.0.$i" 3600 "10.0.0.$i"
done
How It Differs from Native NGINX Rate Limiting
| Feature | Native limit_req | Dynamic Limit Req |
|---|---|---|
| Rate limiting algorithm | Leaky bucket | Leaky bucket + Redis blocking |
| Persistent IP blocking | No | Yes (configurable TTL) |
| Shared state across instances | No (per-instance shared memory) | Yes (via Redis) |
| Dynamic whitelist/blacklist | No | Yes (via Redis keys) |
| Block history/audit trail | No | Yes (Redis DB 1) |
| Manual IP blocking | No | Yes (redis-cli setex) |
| External dependencies | None | Redis/Valkey |
| Localhost exemption | No | Yes (127.0.0.1 always passes) |
| Gradual request throttling | Yes (delay without nodelay) | No (see known limitation) |
If you only need basic rate limiting on a single NGINX instance and do not need persistent blocking, native limit_req is simpler and has no external dependencies. However, for DDoS protection, brute force prevention, and multi-instance deployments, this module’s Redis-backed approach is significantly more effective.
Performance Considerations
The module adds a Redis round-trip to the request processing path. Here is what to expect:
- Redis on localhost (TCP): Adds approximately 0.1–0.5 ms per request. Negligible for most workloads.
- Redis on Unix socket: Slightly faster than TCP, approximately 0.05–0.3 ms per request.
- Redis on remote host: Adds network latency (typically 1–5 ms). Use a local Redis instance whenever possible.
Redis Connection Handling
The module maintains a single static Redis connection per worker process. If the connection drops (e.g., Redis restarts), the module will reconnect on the next request. During the reconnection window, rate limiting gracefully degrades — requests are allowed through rather than erroring.
Shared Memory Sizing
The zone size determines how many unique IPs can be tracked simultaneously:
- 1 MB ≈ 16,000 IPv4 addresses
- 10 MB ≈ 160,000 IPv4 addresses
- 32 MB ≈ 500,000 IPv4 addresses
For most production servers handling moderate traffic, 10 MB is sufficient. For high-traffic servers facing frequent DDoS attacks, consider 32 MB or more.
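These capacity figures follow from simple arithmetic. Assuming roughly 64 bytes per tracked state, the figure NGINX documents for native limit_req zones (an assumption for this module, since its exact state size is not documented here), capacity is just zone size divided by state size:

```python
STATE_BYTES = 64  # approx. bytes per rate-limiting state (assumption, from native limit_req)

def zone_capacity(zone_mb: int) -> int:
    """Approximate number of unique IPs a shared memory zone can track."""
    return zone_mb * 1024 * 1024 // STATE_BYTES

for mb in (1, 10, 32):
    print(f"{mb} MB holds about {zone_capacity(mb):,} IPs")
```

This yields roughly 16,000, 164,000, and 524,000 entries for 1, 10, and 32 MB, matching the rule-of-thumb figures above.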
Security Best Practices
When deploying the NGINX dynamic limit req module in production, follow these recommendations to maximize protection:
Secure Your Redis Instance
Because the module stores security-relevant data in Redis, ensure that Redis itself is hardened:
- Require authentication: Always set a password with requirepass in redis.conf and configure it in the dynamic_limit_req_redis directive.
- Bind to localhost: Set bind 127.0.0.1 in redis.conf to prevent remote access. Combine with appropriate firewall rules.
- Use Unix sockets when NGINX and Redis are on the same server, for both performance and security.
- Disable dangerous commands like FLUSHALL and CONFIG for non-admin users with Redis ACLs.
Choose Appropriate Block Durations
Set block_second based on the sensitivity of the endpoint:
- General pages: 60–300 seconds (1–5 minutes)
- Login pages: 600–1800 seconds (10–30 minutes)
- API endpoints: 300–600 seconds (5–10 minutes)
- SMS/OTP verification: 1800–3600 seconds (30–60 minutes)
Protect Sensitive Headers
When using the NGINX dynamic limit req module behind a reverse proxy, ensure you validate the X-Forwarded-For header with set_real_ip_from. Without this, attackers can forge headers to bypass rate limits or block legitimate users. Refer to the NGINX security headers guide for additional header hardening.
Important Caveats and Known Limitations
The return Directive Bypasses Rate Limiting
The dynamic limit req module runs in NGINX’s preaccess phase. However, the return directive executes in the earlier rewrite phase, which runs first. Therefore, if you use return in a location block, rate limiting will never trigger:
# WRONG: return runs before rate limiting
location /api {
    dynamic_limit_req zone=api burst=5 nodelay;
    return 200 "OK";  # This short-circuits the preaccess phase!
}

# CORRECT: use root, proxy_pass, or try_files instead
location /api {
    dynamic_limit_req zone=api burst=5 nodelay;
    proxy_pass http://backend;
}
This is a common pitfall. Use proxy_pass, root/try_files, or fastcgi_pass as the content handler instead of return when the NGINX dynamic limit req module is active.
The nodelay Parameter Has No Effect
In native NGINX limit_req, omitting nodelay causes excess requests within the burst to be delayed (throttled) to match the configured rate. For example, at rate=1r/s, burst requests are held for approximately one second each before being forwarded.
The dynamic limit req module accepts the nodelay parameter for syntax compatibility, but the delay mechanism is not functional. Internally, the module’s Redis lookup logic returns a response before the inherited delay code path is reached. As a result, all burst requests pass through immediately regardless of whether nodelay is specified — the behavior is always equivalent to nodelay. This has been reported upstream.
This means the module operates as a binary gate: requests are either allowed (within rate + burst) or the IP is blocked in Redis (exceeds rate + burst). There is no gradual throttling. If you need request throttling, use native limit_req alongside this module — they can coexist on the same location.
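A combined setup could look like the sketch below, where native limit_req supplies the gradual throttling and this module supplies the Redis-backed lockout (zone names and limits are arbitrary examples):

```nginx
http {
    # Native zone: delays excess requests to smooth bursts
    limit_req_zone $binary_remote_addr zone=throttle:10m rate=10r/s;
    # Module zone: locks persistent offenders in Redis
    dynamic_limit_req_zone $binary_remote_addr zone=lockout:10m rate=10r/s redis=127.0.0.1 block_second=300;

    server {
        listen 80;

        location / {
            limit_req zone=throttle burst=20;          # throttles within the burst
            dynamic_limit_req zone=lockout burst=20;   # blocks repeat offenders
        }
    }
}
```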
Log Level Interaction with error_log
The dynamic_limit_req_log_level directive sets the severity of rate-limiting log messages, but these messages are only visible if NGINX’s error_log is configured at a matching or lower severity level. This is standard NGINX behavior that applies to all modules.
For example, setting dynamic_limit_req_log_level warn while error_log uses its default level of error will silently suppress all rate-limiting log messages. To see warn-level messages, you must also lower the error_log level:
error_log /var/log/nginx/error.log warn;
dynamic_limit_req_log_level warn;
Alternatively, leave dynamic_limit_req_log_level at its default of error to ensure messages always appear regardless of your error_log configuration.
Block Renewal on Repeated Attempts
When a blocked IP makes a request during the lockout period, the leaky bucket still tracks the excess. If the leaky bucket state has not recovered (which depends on the rate setting), the Redis block key’s TTL is effectively renewed. This means persistent offenders may remain blocked longer than block_second — which is actually desirable behavior for security.
Troubleshooting
“Permission denied” when connecting to Redis
On SELinux-enforcing systems, NGINX is prevented from making network connections by default:
redis connection error: Permission denied 127.0.0.1
Fix:
sudo setsebool -P httpd_can_network_connect 1
Rate limiting not working
- Check that return is not used in the same location — return short-circuits the request before the rate limiter runs. Use proxy_pass, try_files, or root instead.
- Verify Redis is running: redis-cli ping
- Check the NGINX error log for Redis connection errors: grep "redis" /var/log/nginx/error.log
- Verify the module is loaded: nginx -t. If nginx -t passes with dynamic_limit_req directives in your config, the module is loaded correctly.
No rate-limiting messages in the error log
If you set dynamic_limit_req_log_level to warn, notice, or info but see no rate-limiting messages, check your error_log level. The default error_log level is error, which filters out lower-severity messages. See the Log Level Interaction caveat above.
Requests from localhost are never blocked
This is by design. The module exempts 127.0.0.1 from all rate limiting and Redis blocking. If you need to test rate limiting locally, make requests via the server’s network interface IP rather than localhost.
Redis memory growing unbounded
The module uses SETEX with a TTL for blocked IPs, so block entries expire automatically. However, history records in DB 1 (written with SET, no TTL) persist indefinitely. To clean up old history:
redis-cli -n 1 flushdb
Or schedule periodic cleanup in a cron job.
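For example, a system crontab entry like the following would flush the history database weekly (the schedule, retention policy, and redis-cli path are your choice; cron typically needs an absolute binary path):

```
# /etc/crontab — clear the block-history DB every Sunday at 03:00
0 3 * * 0 root /usr/bin/redis-cli -n 1 flushdb
```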
Conclusion
The NGINX dynamic limit req module is a powerful tool for system administrators who need rate limiting that goes beyond what the built-in limit_req can offer. By integrating Redis for persistent IP blocking, dynamic whitelisting and blacklisting, and cross-instance state sharing, it provides a robust defense against DDoS attacks and brute force abuse.
Keep in mind that the module operates as a binary gate — requests either pass or the IP gets blocked. It does not support gradual request throttling like native limit_req does. If you need both throttling and persistent blocking, you can use both modules together on the same location.
For environments where immediate IP lockout after rate limit violations is essential — login pages, API endpoints, payment forms — this module fills a critical gap that native NGINX rate limiting cannot address.
The module source code is available on GitHub.
