The NGINX Lua module transforms NGINX from a static web server into a fully programmable application platform. By embedding LuaJIT directly into the NGINX worker processes, you can execute custom logic at every stage of request processing — without the overhead of external scripting languages or separate application servers.
Whether you need custom authentication, intelligent rate limiting, dynamic routing, or a lightweight API gateway, the NGINX Lua module lets you build it with a few lines of Lua code that run at near-native speed.
Why Use the NGINX Lua Module?
Traditional NGINX configuration is declarative. You define locations, set directives, and NGINX handles the rest. However, real-world requirements often demand logic that static configuration cannot express:
- Custom authentication: Validate JWT tokens, query external identity providers, or implement multi-factor checks — all within NGINX, before a request ever reaches your backend. For simpler authentication needs, consider NGINX PAM authentication or the encrypted session module.
- Intelligent rate limiting: Go beyond simple request counting. Implement per-user quotas, sliding windows, or token bucket algorithms backed by Redis. While NGINX offers the built-in `limit_req` module and dedicated modules like the dynamic limit req module, Lua gives you unlimited flexibility.
- API gateway logic: Route requests dynamically, transform payloads, aggregate responses from multiple backends, or enforce API versioning.
- Real-time data processing: Inspect, modify, or log request and response bodies with full programmatic control.
- In-memory caching: Use NGINX shared dictionaries for sub-millisecond key-value storage without external dependencies.
The NGINX Lua module handles all of this inside NGINX’s event loop, which means your custom logic benefits from the same non-blocking, high-concurrency architecture that makes NGINX fast.
How the NGINX Lua Module Works
The NGINX Lua module, originally developed by the OpenResty project, embeds LuaJIT 2.1 into the NGINX core. LuaJIT compiles Lua code to machine instructions at runtime, achieving performance comparable to C for many workloads.
Non-Blocking Cosockets
A key innovation is the cosocket API. Lua code running inside NGINX can make network connections (HTTP, TCP, UDP, Redis, MySQL) without blocking the NGINX worker process. Under the hood, cosockets integrate with NGINX’s event loop, so thousands of concurrent Lua coroutines can run efficiently on a single worker.
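As a minimal sketch of the cosocket API (the host, port, and line-based protocol below are purely illustrative), a content handler can open a raw TCP connection; while `connect` and `receive` wait on the network, the worker process keeps serving other requests:

```nginx
location /cosocket-demo {
    content_by_lua_block {
        -- ngx.socket.tcp() returns a non-blocking cosocket;
        -- the coroutine yields while waiting on I/O.
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)  -- 1 second

        -- Placeholder backend; replace with a real host and port.
        local ok, err = sock:connect("127.0.0.1", 8081)
        if not ok then
            ngx.say("connect failed: ", err)
            return
        end

        sock:send("PING\r\n")
        local line, rerr = sock:receive("*l")
        ngx.say("got: ", line or rerr)

        -- Return the connection to the cosocket pool for reuse.
        sock:set_keepalive(10000, 50)
    }
}
```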
Request Processing Phases
NGINX processes each request through a series of phases. The NGINX Lua module lets you hook into every major phase:
| Phase Directive | When It Runs | Common Use |
|---|---|---|
| `init_by_lua_block` | Server startup | Load shared libraries, initialize globals |
| `init_worker_by_lua_block` | Worker process startup | Start background timers, connect to databases |
| `set_by_lua_block` | Variable assignment | Compute dynamic variable values |
| `rewrite_by_lua_block` | Rewrite phase | URL rewriting, redirects |
| `access_by_lua_block` | Access control phase | Authentication, authorization, rate limiting |
| `content_by_lua_block` | Content generation | Generate responses directly in Lua |
| `header_filter_by_lua_block` | Response header phase | Add, remove, or modify response headers |
| `body_filter_by_lua_block` | Response body phase | Transform response body content |
| `log_by_lua_block` | Logging phase | Custom logging, metrics collection |
| `balancer_by_lua_block` | Upstream selection | Dynamic load balancing decisions |
| `ssl_certificate_by_lua_block` | TLS handshake | Dynamic SSL certificate selection |
This phase-based architecture means your Lua code runs at exactly the right moment — you can reject unauthorized requests in the access phase before NGINX ever proxies them upstream, saving backend resources.
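To illustrate how phases cooperate on a single location, the hypothetical sketch below rejects requests in the access phase, tags responses in the header filter phase, and records timing in the log phase (the token check and header name are illustrative only):

```nginx
location /phased {
    # Reject early, before proxying (access phase).
    access_by_lua_block {
        if ngx.var.arg_token ~= "demo" then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }

    # Tag the response (header filter phase).
    header_filter_by_lua_block {
        ngx.header["X-Phase-Demo"] = "1"
    }

    # Record timing after the response is sent (log phase).
    log_by_lua_block {
        ngx.log(ngx.INFO, "served ", ngx.var.uri,
                " in ", ngx.var.request_time, "s")
    }

    proxy_pass http://backend;
}
```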
Installation
RHEL, CentOS, AlmaLinux, Rocky Linux
First, install the GetPageSpeed repository:
```shell
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
```
Then install the NGINX Lua module:
```shell
sudo dnf install nginx-module-lua
```
The module requires the NGINX Development Kit (NDK) as a dependency. The package manager installs it automatically.
Next, load both modules by adding these lines at the top of /etc/nginx/nginx.conf, before the events block:
```nginx
load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;
```
Finally, verify the configuration and start NGINX:
```shell
sudo nginx -t
sudo systemctl enable --now nginx
```
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
```shell
sudo apt-get update
sudo apt-get install nginx-module-lua
```
On Debian/Ubuntu, the package handles module loading automatically; no `load_module` directive is needed.
Module page links:
- RPM packages: nginx-module-lua on extras.getpagespeed.com
- APT packages: nginx-module-lua on apt-nginx-extras.getpagespeed.com
Your First Lua Script
Add the following to a server block in your NGINX configuration:
```nginx
location /hello {
    content_by_lua_block {
        ngx.header["Content-Type"] = "text/plain"
        ngx.say("Hello from Lua!")
    }
}
```
Reload NGINX and test it:
```shell
sudo nginx -t && sudo systemctl reload nginx
curl http://localhost/hello
```
Output:
```
Hello from Lua!
```
The `content_by_lua_block` directive tells NGINX to generate the response entirely from Lua code. The `ngx.say()` function writes a line to the response body, and `ngx.header` sets response headers.
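Building on this, handlers can read request data through the `ngx.req` and `ngx.var` APIs. A hypothetical variant that echoes a query parameter:

```nginx
location /greet {
    content_by_lua_block {
        -- e.g. /greet?name=Ada responds with a personalized greeting
        local args = ngx.req.get_uri_args()
        local name = args.name or "world"
        ngx.header["Content-Type"] = "text/plain"
        ngx.say("Hello, ", name, "!")
    }
}
```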
Practical Use Cases
Generate JSON API Responses
The NGINX Lua module can serve lightweight API endpoints without a backend application server. This is useful for health checks, status pages, or simple data endpoints. For parsing incoming JSON, you might also consider the dedicated NGINX JSON module.
First, install the cjson library for JSON encoding:
```shell
sudo dnf install lua5.1-cjson
```
If LuaJIT cannot find the cjson module, add the system Lua library path. Place this inside the http block of /etc/nginx/nginx.conf:
```nginx
lua_package_cpath "/usr/share/lua/5.1/?.so;;";
```
Important: After changing `lua_package_cpath`, you must restart NGINX (not just reload), because this directive is read at worker process startup.
Now create a JSON endpoint:
```nginx
location /api/status {
    content_by_lua_block {
        local cjson = require "cjson"
        ngx.header["Content-Type"] = "application/json"
        ngx.say(cjson.encode({
            uri = ngx.var.uri,
            method = ngx.req.get_method(),
            remote_addr = ngx.var.remote_addr,
            timestamp = ngx.now()
        }))
    }
}
```
This produces:
```json
{"uri":"/api/status","method":"GET","remote_addr":"127.0.0.1","timestamp":1773550485.083}
```
Custom Response Headers
Use header_filter_by_lua_block to add, remove, or modify headers on every response — without touching upstream application code:
```nginx
location /app {
    proxy_pass http://backend;

    header_filter_by_lua_block {
        ngx.header["X-Powered-By"] = "NGINX + Lua"
        ngx.header["X-Request-ID"] = ngx.var.request_id
        -- Remove headers that leak server info
        ngx.header["Server"] = nil
    }
}
```
This is especially useful for adding security headers, request tracing IDs, or stripping internal headers before responses reach clients.
In-Memory Counters with Shared Dictionaries
NGINX shared dictionaries provide fast, worker-safe key-value storage in shared memory. They persist across requests and are accessible from all worker processes.
Declare a shared dictionary in the http block:
```nginx
lua_shared_dict my_cache 10m;
```
Then use it in a location:
```nginx
location /count {
    content_by_lua_block {
        local shared = ngx.shared.my_cache
        local count = shared:incr("hits", 1, 0)
        ngx.say("Request count: ", count)
    }
}
```
Every request increments the counter atomically. The `incr` method's third argument (`0`) sets the initial value if the key does not exist yet. Making multiple requests demonstrates the counter increasing:
```
Request count: 1
Request count: 2
Request count: 3
```
Shared dictionaries are ideal for rate limiting counters, feature flags, caching small values, or sharing state between workers — all without external services like Redis.
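Shared dictionaries also support per-key expiry, which is the basis of simple response caching. A minimal sketch (the key name and TTL are illustrative):

```nginx
location /cache-demo {
    content_by_lua_block {
        local cache = ngx.shared.my_cache
        local value = cache:get("greeting")
        if not value then
            value = "generated at " .. ngx.now()
            -- The third argument to set() is the TTL in seconds;
            -- the key expires automatically after 30 seconds.
            local ok, err = cache:set("greeting", value, 30)
            if not ok then
                ngx.log(ngx.ERR, "shared dict set failed: ", err)
            end
        end
        ngx.say(value)
    }
}
```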
JWT Authentication
Validate JSON Web Tokens directly in NGINX, rejecting unauthorized requests before they reach your backend.
Install the JWT library:
```shell
sudo dnf install lua5.1-resty-jwt
```
Then configure JWT validation in the access phase:
```nginx
location /api/protected {
    access_by_lua_block {
        local jwt = require "resty.jwt"

        local auth_header = ngx.var.http_authorization
        if not auth_header then
            ngx.status = 401
            ngx.header["WWW-Authenticate"] = "Bearer"
            ngx.say("Missing Authorization header")
            return ngx.exit(401)
        end

        local token = auth_header:match("Bearer%s+(.+)")
        if not token then
            ngx.status = 401
            ngx.say("Invalid Authorization format")
            return ngx.exit(401)
        end

        local jwt_obj = jwt:verify("my-secret-key", token)
        if not jwt_obj.verified then
            ngx.status = 403
            ngx.say("Invalid token: ", jwt_obj.reason)
            return ngx.exit(403)
        end

        -- Pass the authenticated user to the backend
        ngx.req.set_header("X-User", jwt_obj.payload.sub)
    }

    proxy_pass http://backend;
}
```
Because this runs in the access_by_lua_block phase, invalid requests are rejected immediately with a 401 or 403 response. Authenticated requests have the user identity injected as a header that the backend can trust.
Testing without a token:
```shell
curl -s -w "\n%{http_code}" http://localhost/api/protected
```

```
Missing Authorization header
401
```
Testing with a valid token returns 200 and greets the authenticated user.
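For testing, lua-resty-jwt can also mint tokens. The endpoint below is illustrative only (never expose a token-minting endpoint with a hard-coded secret in production); the claims shown are assumptions for the demo:

```nginx
location /make-token {
    content_by_lua_block {
        local jwt = require "resty.jwt"
        -- Secret must match the verification key in the protected location.
        local token = jwt:sign("my-secret-key", {
            header  = { typ = "JWT", alg = "HS256" },
            payload = { sub = "user-123", exp = ngx.time() + 3600 }
        })
        ngx.say(token)
    }
}
```

You can then exercise the protected endpoint with, for example, `curl -H "Authorization: Bearer $(curl -s http://localhost/make-token)" http://localhost/api/protected`.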
Redis Integration
The lua-resty-redis library provides a non-blocking Redis client that integrates with NGINX’s event loop. Unlike external Redis clients, cosocket-based connections do not block the NGINX worker.
Install the Redis library:
```shell
sudo dnf install lua5.1-resty-redis
```
Example — cache a computed value in Redis:
```nginx
location /cached-data {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeouts(1000, 1000, 1000)

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "Redis connect failed: ", err)
            return ngx.exit(500)
        end

        -- Check cache first
        local cached = red:get("mykey")
        if cached and cached ~= ngx.null then
            ngx.say("Cached: ", cached)
        else
            -- Compute and store
            local value = "computed-at-" .. ngx.now()
            red:setex("mykey", 60, value)
            ngx.say("Fresh: ", value)
        end

        -- Return connection to the pool
        red:set_keepalive(10000, 100)
    }
}
```
The set_keepalive call is critical for performance. Instead of closing the TCP connection after each request, it returns the connection to a pool. The parameters specify the maximum idle time (10 seconds) and pool size (100 connections).
SELinux note: On RHEL-based systems, NGINX may be blocked from connecting to Redis by SELinux. Enable network connections with:
```shell
sudo setsebool -P httpd_can_network_connect 1
```
Rate Limiting with lua-resty-limit-traffic
While NGINX provides built-in limit_req and limit_conn modules, and there are dedicated modules like the NGINX Limit Traffic Rate module and the Dynamic Limit Req module for Redis-backed rate limiting, the lua-resty-limit-traffic library offers the most flexibility. You can implement per-user limits, combine multiple limiting strategies, or create custom rate limiting logic that no static module can express.
Install the library:
```shell
sudo dnf install lua5.1-resty-limit-traffic
```
Declare a shared dictionary for rate limit state:
```nginx
lua_shared_dict rate_limit_store 10m;
```
Then implement token bucket rate limiting:
```nginx
location /api/ {
    access_by_lua_block {
        local limit_req = require "resty.limit.req"

        -- Allow 2 requests/second with burst of 1
        local lim, err = limit_req.new("rate_limit_store", 2, 1)
        if not lim then
            ngx.log(ngx.ERR, "failed to create limiter: ", err)
            return ngx.exit(500)
        end

        local key = ngx.var.binary_remote_addr
        local delay, err = lim:incoming(key, true)
        if not delay then
            if err == "rejected" then
                return ngx.exit(429)
            end
            ngx.log(ngx.ERR, "limit error: ", err)
            return ngx.exit(500)
        end

        -- If delay > 0, the request is within burst
        if delay >= 0.001 then
            ngx.sleep(delay)
        end
    }

    proxy_pass http://backend;
}
```
This is more powerful than limit_req because you can vary the rate limit based on the authenticated user, the requested endpoint, or any other request attribute. For example, premium users could receive a higher rate limit by changing the key or adjusting the rate parameter based on a JWT claim.
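As a hedged sketch of that idea, the variant below picks the rate from a hypothetical `X-User-Tier` header (which an earlier authentication step would set) and keys the limiter on the user rather than the client IP:

```nginx
location /api/tiered {
    access_by_lua_block {
        local limit_req = require "resty.limit.req"

        -- Hypothetical tiering: premium users get a higher rate.
        local rate = 2
        if ngx.req.get_headers()["X-User-Tier"] == "premium" then
            rate = 20
        end

        local lim, err = limit_req.new("rate_limit_store", rate, rate)
        if not lim then
            ngx.log(ngx.ERR, "failed to create limiter: ", err)
            return ngx.exit(500)
        end

        -- Key on the user identity when available, so limits
        -- follow accounts instead of IP addresses.
        local key = ngx.var.http_x_user or ngx.var.binary_remote_addr
        local delay, lerr = lim:incoming(key, true)
        if not delay then
            if lerr == "rejected" then
                return ngx.exit(429)
            end
            return ngx.exit(500)
        end
        if delay >= 0.001 then
            ngx.sleep(delay)
        end
    }

    proxy_pass http://backend;
}
```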
Aggregate Multiple Backend Responses
The NGINX Lua module supports subrequests — internal requests that fan out to multiple locations and gather the results. This is a building block for API gateway patterns like response aggregation.
```nginx
location /dashboard {
    content_by_lua_block {
        -- Fetch data from multiple backends in parallel
        local res1, res2 = ngx.location.capture_multi({
            { "/api/user-profile" },
            { "/api/recent-activity" }
        })

        local cjson = require "cjson"
        ngx.header["Content-Type"] = "application/json"
        ngx.say(cjson.encode({
            profile = cjson.decode(res1.body),
            activity = cjson.decode(res2.body)
        }))
    }
}

location /api/user-profile {
    internal;
    proxy_pass http://user-service;
}

location /api/recent-activity {
    internal;
    proxy_pass http://activity-service;
}
```
The ngx.location.capture_multi function issues multiple subrequests concurrently. Each subrequest runs through NGINX’s full request processing pipeline, including proxying, caching, and access control. The internal directive ensures these locations cannot be accessed directly by clients.
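Subrequests can fail independently, so production aggregation should check each status before decoding. A defensive sketch (the location name is illustrative) that degrades gracefully instead of failing the whole page:

```nginx
location /dashboard-safe {
    content_by_lua_block {
        local cjson = require "cjson"
        local res1, res2 = ngx.location.capture_multi({
            { "/api/user-profile" },
            { "/api/recent-activity" }
        })

        -- Decode only successful responses; failed backends
        -- simply appear as missing fields in the output.
        local function body_or_nil(res)
            if res.status == 200 then
                local ok, decoded = pcall(cjson.decode, res.body)
                if ok then return decoded end
            end
            return nil
        end

        ngx.header["Content-Type"] = "application/json"
        ngx.say(cjson.encode({
            profile  = body_or_nil(res1),
            activity = body_or_nil(res2)
        }))
    }
}
```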
Time-Based Access Control
Restrict access to certain endpoints based on server time. For simpler variable-based access control without Lua, see the NGINX Access Control module.
```nginx
location /maintenance {
    access_by_lua_block {
        local hour = tonumber(os.date("%H"))
        if hour < 9 or hour >= 17 then
            ngx.status = 403
            ngx.say("Access denied: available 9:00-17:00 only")
            return ngx.exit(403)
        end
    }

    proxy_pass http://backend;
}
```
The Lua Library Ecosystem
One of the greatest strengths of the NGINX Lua module is its vast ecosystem of ready-to-use libraries. The GetPageSpeed repository provides over 110 pre-built lua-resty library packages that you can install with a single dnf install command — no manual compilation, no dependency hunting, no version conflicts.
This is a major advantage over building OpenResty from source. With GetPageSpeed packages, you get:
- Pre-compiled binaries tested against your specific OS and NGINX version
- Automatic dependency resolution — libraries that depend on other libraries pull them in automatically
- Security updates delivered through standard package manager channels
- Compatibility guarantees — every package is built and tested for your platform
Installing Lua Libraries
Every library follows the same simple installation pattern:
```shell
sudo dnf install lua5.1-resty-<library-name>
```
Important: Always use the `lua5.1-resty-` prefixed packages, not the unprefixed `lua-resty-` ones. The NGINX Lua module uses LuaJIT, which is Lua 5.1 compatible. The unprefixed `lua-resty-*` packages install to the system Lua path (e.g. Lua 5.4 on EL10), which NGINX cannot find.
No NGINX restart is needed after installing a library — a reload is sufficient, because Lua libraries are loaded at runtime when your code calls require.
Popular Libraries by Category
HTTP and Networking:
| Package | Description |
|---|---|
| `lua5.1-resty-http` | Full-featured HTTP client with connection pooling |
| `lua5.1-resty-dns` | Non-blocking DNS resolver |
| `lua5.1-resty-websocket` | WebSocket server and client |
| `lua5.1-resty-requests` | Python-requests-style HTTP client |
For dedicated real-time pub/sub and streaming without writing Lua code, the NGINX push stream module provides long-polling, WebSocket, and EventSource channels as built-in directives.
Data Stores:
| Package | Description |
|---|---|
| `lua5.1-resty-redis` | Redis client with pipelining and Pub/Sub |
| `lua5.1-resty-mysql` | Non-blocking MySQL/MariaDB client |
| `lua5.1-resty-memcached` | Memcached client |
| `lua5.1-resty-kafka` | Apache Kafka producer and consumer |
| `lua5.1-resty-etcd` | etcd v3 client for service discovery |
For direct PostgreSQL queries without Lua code, the dedicated NGINX PostgreSQL module embeds SQL directly in NGINX configuration — useful for simple lookups and authentication checks.
Security and Authentication:
| Package | Description |
|---|---|
| `lua5.1-resty-jwt` | JWT creation and verification |
| `lua5.1-resty-openidc` | OpenID Connect relying party implementation |
| `lua5.1-resty-session` | Secure session management |
| `lua5.1-resty-hmac` | HMAC-based message authentication |
| `lua5.1-resty-openssl` | FFI bindings to OpenSSL |
| `lua5.1-resty-acme` | Automatic Let's Encrypt certificate management |
| `lua5.1-resty-waf` | Full web application firewall |
Performance and Caching:
| Package | Description |
|---|---|
| `lua5.1-resty-lrucache` | In-process LRU cache (faster than shared dict) |
| `lua5.1-resty-mlcache` | Multi-layer cache (L1: lrucache, L2: shared dict, L3: callback) |
| `lua5.1-resty-limit-traffic` | Advanced rate limiting and traffic control |
| `lua5.1-resty-balancer` | Consistent-hash load balancer |
| `lua5.1-resty-healthcheck` | Active and passive upstream health checking |
Utilities:
| Package | Description |
|---|---|
| `lua5.1-resty-string` | String utilities and common hash functions |
| `lua5.1-resty-template` | HTML templating engine |
| `lua5.1-resty-validation` | Input validation and filtering |
| `lua5.1-resty-upload` | Streaming multipart file upload handling |
| `lua5.1-resty-shell` | Execute system commands non-blockingly |
| `lua5.1-resty-jit-uuid` | Fast UUID generation |
| `lua5.1-resty-mail` | Send emails via SMTP |
Browse the full catalog of 110+ Lua libraries available for instant installation.
Key Directives Reference
Global Settings (http block)
| Directive | Default | Description |
|---|---|---|
| `lua_package_path` | LuaJIT default | Semicolon-separated Lua module search paths |
| `lua_package_cpath` | LuaJIT default | Semicolon-separated C module search paths |
| `lua_code_cache` | `on` | Cache compiled Lua code (disable only for development) |
| `lua_shared_dict` | (none) | Declare a named shared memory zone |
| `lua_max_running_timers` | `256` | Maximum concurrent timer callbacks |
| `lua_max_pending_timers` | `1024` | Maximum pending timer registrations |
| `lua_regex_cache_max_entries` | `1024` | Compiled regex cache size |
| `lua_sa_restart` | `on` | Restart system calls on signal interrupts |
Socket Settings (http/server/location)
| Directive | Default | Description |
|---|---|---|
| `lua_socket_connect_timeout` | `60s` | Cosocket connection timeout |
| `lua_socket_send_timeout` | `60s` | Cosocket send timeout |
| `lua_socket_read_timeout` | `60s` | Cosocket read timeout |
| `lua_socket_pool_size` | `30` | Connection pool size per location |
| `lua_socket_keepalive_timeout` | `60s` | Idle connection timeout in pool |
| `lua_socket_buffer_size` | page size | Cosocket receive buffer size |
| `lua_socket_log_errors` | `on` | Log cosocket errors |
Behavioral Settings
| Directive | Default | Description |
|---|---|---|
| `lua_need_request_body` | `off` | Pre-read request body |
| `lua_check_client_abort` | `off` | Monitor client connection drops |
| `lua_use_default_type` | `on` | Use `default_type` when no Content-Type is set |
| `lua_http10_buffering` | `on` | Buffer responses for HTTP/1.0 clients |
| `lua_transform_underscores_in_response_headers` | `on` | Convert underscores to hyphens in header names |
| `lua_load_resty_core` | `on` | Load lua-resty-core for FFI-based API |
Performance Considerations
Keep lua_code_cache On
The `lua_code_cache` directive controls whether compiled Lua bytecode is cached between requests. In production, this must remain `on` (the default). Disabling it forces LuaJIT to recompile your code on every request, which destroys performance.
Only set `lua_code_cache off` during development when you need to see code changes without reloading NGINX.
Use Connection Pooling
Always call set_keepalive() on cosocket connections instead of close(). This returns the connection to NGINX’s connection pool rather than closing and reopening TCP connections:
```lua
-- Good: reuse connections
red:set_keepalive(10000, 100)

-- Bad: creates new TCP connection per request
red:close()
```
Prefer lua-resty-core
The lua-resty-core package (installed automatically as a dependency) reimplements core NGINX Lua APIs using LuaJIT FFI, which is significantly faster than the default C-based API. Keep lua_load_resty_core on (the default) to benefit from these optimizations.
Shared Dictionaries vs. lua-resty-lrucache
Shared dictionaries (lua_shared_dict) are shared across all worker processes but require locking. For read-heavy, per-worker caching, lua-resty-lrucache is faster because it avoids cross-worker locks:
```shell
sudo dnf install lua5.1-resty-lrucache
```
For best results, use both in layers: lua-resty-lrucache as an L1 cache and lua_shared_dict as an L2. The lua-resty-mlcache library automates this pattern.
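A hedged sketch of this layering with lua-resty-mlcache (the shared dict name `mlcache_shm`, the key, and the fetch callback are all illustrative — it assumes `lua_shared_dict mlcache_shm 10m;` is declared in the `http` block):

```nginx
location /user/info {
    content_by_lua_block {
        local mlcache = require "resty.mlcache"

        -- In real deployments, create the cache once (e.g. in a
        -- module loaded at init time) rather than per request;
        -- it is shown inline here for brevity.
        local cache, err = mlcache.new("user_cache", "mlcache_shm", {
            lru_size = 500,  -- L1: per-worker lrucache entries
            ttl      = 60,   -- cache hits for 60 seconds
        })
        if not cache then
            ngx.log(ngx.ERR, "mlcache init failed: ", err)
            return ngx.exit(500)
        end

        -- The L3 callback runs only on a miss in both cache layers.
        local value, gerr = cache:get("user:42", nil, function()
            return "fetched-at-" .. ngx.now()
        end)
        ngx.say(value or gerr)
    }
}
```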
Security Best Practices
Validate All User Input
Never trust data from ngx.var, request headers, or query parameters. Always validate and sanitize before using in Lua code:
```lua
local user_id = tonumber(ngx.var.arg_id)
if not user_id or user_id < 1 then
    return ngx.exit(400)
end
```
Keep lua_code_cache Enabled
Disabling lua_code_cache in production is not just a performance issue — it changes the runtime behavior. Globals and module-level state are reset on every request, which can mask bugs and create inconsistent behavior.
Protect Secrets
Do not hard-code secrets (API keys, JWT signing keys) in NGINX configuration files. Instead, load them from environment variables or files:
```nginx
init_by_lua_block {
    JWT_SECRET = os.getenv("JWT_SECRET")
    if not JWT_SECRET then
        ngx.log(ngx.ERR, "JWT_SECRET environment variable not set")
    end
}
```
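Note that NGINX clears inherited environment variables by default, so `os.getenv` returns nil unless the variable is whitelisted with the `env` directive in the main (top-level) context:

```nginx
# In the main context of /etc/nginx/nginx.conf (outside http {}):
env JWT_SECRET;
```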
SELinux on RHEL-Based Systems
If NGINX cannot connect to Redis, PostgreSQL, or other network services from Lua, SELinux is likely blocking the connection:
```shell
sudo setsebool -P httpd_can_network_connect 1
```
This allows NGINX to make outbound network connections. Without it, all cosocket connections to external services fail with “Permission denied.”
Troubleshooting
“module not found” Errors
If require("resty.something") fails, the Lua search paths may not include the installation directory. Check where the library was installed:
```shell
rpm -ql lua5.1-resty-http | head -5
```
Then verify the paths in your NGINX configuration match. You may need to add to lua_package_path:
```nginx
lua_package_path "/usr/share/lua/5.1/?.lua;;";
lua_package_cpath "/usr/share/lua/5.1/?.so;;";
```
The trailing `;;` appends the default LuaJIT search paths.
“unknown directive” After Module Installation
If nginx -t reports an unknown directive for Lua directives, the module is not loaded. Verify:
- Both `load_module` lines are at the top of `nginx.conf`, before the `events` block:

```nginx
load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;
```
- The NDK module must be loaded before the Lua module. Order matters.
- The module files exist:
```shell
ls /usr/lib64/nginx/modules/ngx_http_lua_module.so
ls /usr/lib64/nginx/modules/ndk_http_module.so
```
Changes Not Taking Effect
After modifying lua_package_path, lua_package_cpath, lua_shared_dict, or init_by_lua_block, you must restart NGINX — a reload is not sufficient:
```shell
sudo systemctl restart nginx
```
These directives are evaluated at worker process startup time, not at reload.
500 Errors from Lua Code
Check the NGINX error log for detailed Lua stack traces:
```shell
sudo tail -f /var/log/nginx/error.log
```
Lua errors include the exact file, line number, and stack trace. Common causes include nil value access, missing require statements, and incorrect API usage.
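To keep unexpected Lua errors from surfacing as raw stack traces, risky sections can be wrapped in `pcall` and converted into a clean 500 (a defensive pattern, not something the module requires; the location and payload handling are illustrative):

```nginx
location /safe {
    content_by_lua_block {
        local ok, err = pcall(function()
            -- Anything that throws inside here is caught below.
            local payload = ngx.var.arg_payload or "{}"
            local data = require("cjson").decode(payload)
            ngx.say("decoded a ", type(data))
        end)
        if not ok then
            -- Full details go to error.log; clients see a generic 500.
            ngx.log(ngx.ERR, "handler failed: ", err)
            return ngx.exit(500)
        end
    }
}
```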
Conclusion
The NGINX Lua module bridges the gap between NGINX’s high-performance event-driven architecture and the flexibility of a full programming language. It lets you implement custom authentication, rate limiting, API gateway logic, and real-time data processing — all running inside NGINX at near-native speed.
Combined with the 110+ pre-built Lua libraries available from the GetPageSpeed repository, you can add sophisticated functionality to NGINX with a dnf install and a few lines of configuration. No compiling from source, no dependency conflicts, no version mismatches.
If your team has existing Perl expertise, the NGINX Perl module offers an alternative scripting interface — though Lua provides better performance and a larger ecosystem of NGINX-specific libraries.
Get started:
