You are proxying a database through NGINX’s stream module when a new requirement lands: rate-limit connections per client IP, log which protocol variant each client speaks, and route traffic to different backends based on the first bytes of the handshake. You open the NGINX docs and realize the stream module can do none of this. It proxies bytes and balances load — nothing more. There is no way to inspect payloads, run conditional logic, or maintain state across connections. Your options are an external sidecar process, a custom C module, or the NGINX stream lua module.
The NGINX stream lua module is the right answer. It embeds a full LuaJIT runtime into the NGINX stream subsystem, letting you write Lua code that runs at every stage of a TCP or UDP connection — from the first peeked byte to the final log entry. You get non-blocking cosockets for upstream communication, shared memory dictionaries for cross-worker state, background timers, and the same battle-tested API that powers OpenResty’s HTTP Lua module. This guide covers everything you need to know about installing, configuring, and using stream Lua in production.
How the Stream Lua Module Works
The module hooks into the NGINX stream processing pipeline at several key phases. When a TCP or UDP connection arrives, Lua code can intercept it at each stage:
- `init_by_lua` — runs once when NGINX starts, for loading shared libraries or data
- `init_worker_by_lua` — runs when each worker process starts, ideal for background timers
- `ssl_client_hello_by_lua` — inspects the TLS ClientHello message before the handshake
- `ssl_certificate_by_lua` — dynamically selects SSL certificates per connection
- `preread_by_lua` — reads initial bytes without consuming them, for protocol detection or access control
- `content_by_lua` — handles the entire connection, replacing the default proxy behavior
- `balancer_by_lua` — selects upstream servers dynamically, enabling custom load balancing
- `log_by_lua` — runs after the connection closes, for metrics and auditing
Unlike the HTTP Lua module that operates on HTTP requests and responses, the stream variant works with raw byte streams. There are no request headers or response codes — just sockets, data, and the logic you write in Lua.
Installation
RHEL, CentOS, AlmaLinux, Rocky Linux
Install the NGINX stream lua module from the GetPageSpeed repository:
```bash
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-stream-lua
```

On CentOS/RHEL 7 and Amazon Linux 2, use `yum` instead of `dnf`.
Then load the module in your /etc/nginx/nginx.conf. The stream Lua module requires both the NDK (NGINX Development Kit) module and the HTTP Lua module to be loaded first: the stream and HTTP Lua modules share the lua-resty-core library, and the HTTP Lua module in turn depends on NDK:
```nginx
load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;
load_module modules/ngx_stream_lua_module.so;
```
The order matters. Load NDK first, then the HTTP Lua module, then the stream lua module.
Your First Stream Lua Server
Here is a minimal working configuration that creates a TCP echo server on port 9001:
```nginx
stream {
    server {
        listen 9001;

        content_by_lua_block {
            local sock = ngx.req.socket(true)
            while true do
                local data, err = sock:receive("*l")
                if not data then
                    break
                end
                ngx.say("echo: " .. data)
                ngx.flush(true)
            end
        }
    }
}
```
Test it with nc (netcat):
```bash
echo "hello world" | nc -w2 localhost 9001
```
Expected output:
```
echo: hello world
```
The `ngx.req.socket(true)` call retrieves the downstream client socket in full-duplex mode. From there, you have complete control over the bidirectional data flow.
Configuration Directives Reference
Lua Code Execution Directives
These directives embed Lua code into different phases of stream processing. Each directive has three forms: `*_block` (inline code), `*_file` (external file), and the deprecated bare form (inline string).
| Directive | Context | Purpose |
|---|---|---|
| `init_by_lua_block` | stream | Master process initialization |
| `init_worker_by_lua_block` | stream | Worker process initialization |
| `preread_by_lua_block` | server | Inspect data before content phase |
| `content_by_lua_block` | server | Handle entire connection |
| `log_by_lua_block` | server | Post-connection logging |
| `balancer_by_lua_block` | upstream | Custom load balancing |
| `ssl_client_hello_by_lua_block` | server | TLS ClientHello inspection |
| `ssl_certificate_by_lua_block` | server | Dynamic SSL certificate selection |
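To make the three forms concrete, here is a minimal sketch using `content_by_lua` as an example; the port and the file path are illustrative:

```nginx
stream {
    server {
        listen 9001;

        # Inline block form (recommended): no string escaping needed
        content_by_lua_block {
            ngx.say("hello")
        }

        # Equivalent file form; keeps long scripts out of nginx.conf
        # and benefits from lua_code_cache:
        # content_by_lua_file /etc/nginx/lua/hello.lua;

        # Deprecated bare string form (avoid in new configs):
        # content_by_lua 'ngx.say("hello")';
    }
}
```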
Runtime Configuration Directives
| Directive | Default | Context | Description |
|---|---|---|---|
| `lua_code_cache` | on | stream, server | Cache compiled Lua code. Turn off only in development. |
| `lua_package_path` | — | stream | Set the Lua module search path for `.lua` files |
| `lua_package_cpath` | — | stream | Set the Lua module search path for C libraries |
| `lua_shared_dict` | — | stream | Declare a shared memory zone for cross-worker data |
| `lua_add_variable` | — | stream | Register a custom variable accessible in Lua |
| `lua_check_client_abort` | off | stream, server | Detect client connection drops |
| `lua_socket_connect_timeout` | 60s | stream, server | Cosocket connection timeout |
| `lua_socket_send_timeout` | 60s | stream, server | Cosocket send timeout |
| `lua_socket_read_timeout` | 60s | stream, server | Cosocket read timeout |
| `lua_socket_buffer_size` | 4k/8k | stream, server | Cosocket receive buffer size |
| `lua_socket_pool_size` | 30 | stream, server | Cosocket connection pool size per worker |
| `lua_socket_keepalive_timeout` | 60s | stream, server | Idle timeout for pooled cosockets |
| `lua_socket_log_errors` | on | stream, server | Log cosocket errors to the error log |
| `lua_max_running_timers` | 256 | stream | Maximum concurrent timer callbacks |
| `lua_max_pending_timers` | 1024 | stream | Maximum pending timer registrations |
| `lua_sa_restart` | on | stream | Auto-restart interrupted system calls |
| `lua_regex_cache_max_entries` | 1024 | stream | Maximum cached compiled regex patterns |
| `lua_malloc_trim` | 1000 | stream | Requests between `malloc_trim()` calls (Linux only) |
| `preread_by_lua_no_postpone` | off | stream | Execute preread Lua immediately without postponing |
SSL Configuration Directives
These directives configure TLS for cosocket connections made from Lua code (outbound connections to upstream services):
| Directive | Default | Description |
|---|---|---|
| `lua_ssl_protocols` | TLSv1 TLSv1.1 TLSv1.2 | Allowed TLS versions for cosocket SSL |
| `lua_ssl_ciphers` | DEFAULT | Cipher list for cosocket SSL |
| `lua_ssl_verify_depth` | 1 | Maximum certificate chain depth |
| `lua_ssl_trusted_certificate` | — | CA bundle for verifying upstream certs |
| `lua_ssl_certificate` | — | Client certificate for mutual TLS |
| `lua_ssl_certificate_key` | — | Private key for the client certificate |
| `lua_ssl_crl` | — | Certificate revocation list |
| `lua_ssl_conf_command` | — | Raw OpenSSL configuration commands |
Practical Examples
Connection Rate Limiting
NGINX’s built-in limit_conn module works at the stream level, but it cannot distinguish between connection types or apply different limits based on payload inspection. With the NGINX stream lua module, you can build sophisticated rate limiters that consider IP addresses, connection frequency, and even protocol-level data.
This example limits each IP address to five connections within a 10-second window:
```nginx
stream {
    lua_shared_dict rate_limit 1m;

    server {
        listen 9004;

        preread_by_lua_block {
            local shared = ngx.shared.rate_limit
            local key = ngx.var.remote_addr
            local count, err = shared:incr(key, 1, 0, 10)
            if count and count > 5 then
                ngx.log(ngx.WARN, "rate limit exceeded for ", key)
                return ngx.exit(ngx.ERROR)
            end
        }

        content_by_lua_block {
            ngx.say("allowed")
        }
    }
}
```
The `shared:incr(key, 1, 0, 10)` call atomically increments the counter by 1, initializes it to 0 if absent, and sets a 10-second TTL. After five connections in that window, new connections from the same IP are dropped in the preread phase — before any content processing occurs.
Protocol Detection with Preread
The preread phase is where the NGINX stream lua module truly shines. You can peek at the first bytes of a connection to identify the protocol, then route or reject accordingly. The peek() method reads data without consuming it, so the proxied upstream still receives the full original stream.
Important: The `peek()` API is only available in `preread_by_lua_block`. It is designed for inspection before proxying — use it with `proxy_pass`, not with `content_by_lua_block`.
```nginx
stream {
    upstream allowed_backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 9005;

        preread_by_lua_block {
            local sock = ngx.req.socket()
            sock:settimeouts(2000, 2000, 2000)
            local data, err = sock:peek(4)
            if data == "QUIT" then
                ngx.log(ngx.WARN, "blocked QUIT command from ",
                        ngx.var.remote_addr)
                return ngx.exit(ngx.ERROR)
            end
        }

        proxy_pass allowed_backend;
    }
}
```
After the preread phase inspects the first bytes, NGINX forwards the complete unmodified stream — including those peeked bytes — to the upstream. This pattern is useful for building protocol-aware firewalls, SSH honeypots, or multiplexing different protocols on a single port.
Shared Memory Across Workers
The lua_shared_dict directive allocates a shared memory zone that all NGINX worker processes can read and write atomically. This is essential for maintaining global state — connection counters, rate limit windows, or cached data — without external dependencies like Redis. The stream module provides the same shared dictionary API as the HTTP Lua module.
```nginx
stream {
    lua_shared_dict conn_stats 1m;

    server {
        listen 9003;

        content_by_lua_block {
            local shared = ngx.shared.conn_stats
            local key = ngx.var.remote_addr
            local count = shared:incr(key, 1, 0)
            ngx.say("connection #" .. tostring(count) .. " from " .. key)
        }
    }
}
```
Shared dictionaries support strings, numbers, and booleans. They also provide list operations (`lpush`, `rpush`, `lpop`, `rpop`) for queue-like patterns. The memory zone size determines how many keys you can store — 1 MB holds roughly 5,000-10,000 small entries.
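The list operations can back a simple cross-worker queue. A minimal sketch, where the dictionary name `job_queue`, the key `"jobs"`, and port 9007 are all illustrative:

```nginx
stream {
    lua_shared_dict job_queue 1m;

    server {
        listen 9007;

        content_by_lua_block {
            local queue = ngx.shared.job_queue

            -- Producer side: append a job to the tail of the list
            local len, err = queue:rpush("jobs", ngx.var.remote_addr)
            if not len then
                ngx.log(ngx.ERR, "rpush failed: ", err)
                return ngx.exit(ngx.ERROR)
            end

            -- Consumer side: pop the oldest job from the head
            local job = queue:lpop("jobs")
            ngx.say("dequeued: ", job or "nothing")
        }
    }
}
```

In a realistic setup the producer and consumer would live in different phases or workers; the shared dictionary makes the hand-off atomic.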
Background Timers
Timer callbacks run asynchronously in the NGINX event loop, decoupled from any connection. Use them for periodic health checks, metrics aggregation, or cache warming. The module supports both one-shot timers and recurring intervals.
```nginx
stream {
    lua_shared_dict conn_stats 1m;

    init_worker_by_lua_block {
        local function report(premature)
            if premature then return end
            local shared = ngx.shared.conn_stats
            local total = shared:get("total") or 0
            ngx.log(ngx.NOTICE, "total connections so far: ", total)
        end

        local ok, err = ngx.timer.every(5, report)
        if not ok then
            ngx.log(ngx.ERR, "failed to create timer: ", err)
        end
    }

    server {
        listen 9006;

        content_by_lua_block {
            local shared = ngx.shared.conn_stats
            shared:incr("total", 1, 0)
            ngx.say("connection #" .. tostring(shared:get("total")))
        }

        log_by_lua_block {
            ngx.log(ngx.INFO, "session from ", ngx.var.remote_addr,
                    " duration: ", ngx.var.session_time, "s")
        }
    }
}
```
The `ngx.timer.every(5, report)` call schedules the `report` function to run every 5 seconds. The `premature` argument is true when NGINX is shutting down — always check it to avoid doing work during graceful shutdown. The `log_by_lua_block` runs after each connection closes, providing a hook for per-connection metrics.
Cosocket TCP Proxy
Cosockets are non-blocking socket objects that integrate with the NGINX event loop. They let your Lua code connect to upstream TCP services, send requests, and read responses without blocking other connections. This is one of the most powerful features available for stream processing.
```nginx
stream {
    server {
        listen 9010;

        content_by_lua_block {
            local sock = ngx.socket.tcp()
            local ok, err = sock:connect("127.0.0.1", 80)
            if not ok then
                ngx.log(ngx.ERR, "upstream connect failed: ", err)
                return ngx.exit(ngx.ERROR)
            end
            sock:send("GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
            local data, err = sock:receive("*a")
            if data then
                ngx.print(data)
            end
            sock:close()
        }
    }
}
```
This creates a TCP-level proxy that connects to an HTTP backend. In production, you would use `sock:setkeepalive()` instead of `sock:close()` to return the connection to the pool for reuse.
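The keepalive pattern looks like the following sketch, assuming a line-oriented upstream such as a Redis server on 127.0.0.1:6379; the timeout and pool-size values are illustrative:

```nginx
content_by_lua_block {
    local sock = ngx.socket.tcp()
    sock:settimeouts(1000, 5000, 5000)  -- connect/send/read timeouts, ms

    -- connect() transparently reuses a pooled connection when one exists
    local ok, err = sock:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "connect failed: ", err)
        return ngx.exit(ngx.ERROR)
    end

    sock:send("PING\r\n")
    local line, err = sock:receive("*l")
    ngx.say(line or err)

    -- Instead of close(): park the connection for reuse,
    -- with a 10-second idle timeout and up to 50 pooled sockets
    local ok, err = sock:setkeepalive(10000, 50)
    if not ok then
        ngx.log(ngx.ERR, "setkeepalive failed: ", err)
    end
}
```

Note that a `receive("*a")` read, as in the HTTP/1.0 example above, only returns once the peer closes the connection, so keepalive pays off most with length- or line-delimited protocols.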
Custom Load Balancing
The balancer_by_lua_block directive gives you full control over upstream server selection. Unlike NGINX’s built-in round-robin or least-connections algorithms, the NGINX stream lua module lets you implement any balancing strategy in Lua — weighted random, consistent hashing, latency-based routing, or even decisions based on shared state.
```nginx
stream {
    upstream backend {
        server 0.0.0.1:80;   # placeholder, overridden by Lua

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local ok, err = balancer.set_current_peer("127.0.0.1", 80)
            if not ok then
                ngx.log(ngx.ERR, "set_current_peer failed: ", err)
                return ngx.exit(ngx.ERROR)
            end
        }
    }

    server {
        listen 9012;
        proxy_pass backend;
    }
}
```
The `server 0.0.0.1:80` placeholder is required because NGINX demands at least one server in an upstream block. The `balancer_by_lua_block` overrides it completely. In a real deployment, you would read the target server from a shared dictionary, an external registry, or a DNS lookup.
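A shared-dictionary-driven variant might look like this sketch; the dictionary name `upstream_conf` and the key `"target"` are conventions you would define yourself, set elsewhere by a timer or a management endpoint:

```nginx
stream {
    lua_shared_dict upstream_conf 64k;

    upstream backend {
        server 0.0.0.1:80;   # placeholder, overridden by Lua

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"

            -- Read the current target from shared memory;
            -- fall back to a static default when unset
            local target = ngx.shared.upstream_conf:get("target")
                or "127.0.0.1:8080"
            local host, port = target:match("^(.+):(%d+)$")

            local ok, err = balancer.set_current_peer(host, tonumber(port))
            if not ok then
                ngx.log(ngx.ERR, "set_current_peer failed: ", err)
                return ngx.exit(ngx.ERROR)
            end
        }
    }

    server {
        listen 9012;
        proxy_pass backend;
    }
}
```

Because every worker reads the same shared dictionary, updating the `"target"` key switches traffic for all workers at once, without a reload.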
Custom Variables
The `lua_add_variable` directive registers variables that you can set in Lua and read in both Lua and NGINX config contexts. This bridges the gap between Lua logic and NGINX’s native variable system.
```nginx
stream {
    lua_add_variable $greeting;

    server {
        listen 9011;

        preread_by_lua_block {
            ngx.var.greeting = "Hello from preread at " .. ngx.now()
        }

        content_by_lua_block {
            ngx.say(ngx.var.greeting)
        }
    }
}
```
Custom variables are useful for passing data between phases or for using Lua-computed values in `access_log` format strings.
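One concrete pattern is recording a Lua routing decision and emitting it in the stream access log. A sketch; the variable name `$backend_choice`, the log format, the port, and the backend address are all illustrative:

```nginx
stream {
    lua_add_variable $backend_choice;

    log_format routing '$remote_addr -> $backend_choice ($session_time s)';

    server {
        listen 9013;
        access_log /var/log/nginx/stream_routing.log routing;

        preread_by_lua_block {
            -- A stand-in for a real routing decision
            ngx.var.backend_choice = "primary"
        }

        proxy_pass 127.0.0.1:8080;
    }
}
```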
Lua API Reference
The module provides a comprehensive Lua API. Here are the most commonly used functions:
Connection I/O
| Function | Description |
|---|---|
| `ngx.req.socket(raw)` | Get the downstream client socket. Pass `true` for full-duplex. |
| `ngx.say(...)` | Send data to the client with a trailing newline |
| `ngx.print(...)` | Send data to the client without a trailing newline |
| `ngx.flush(wait)` | Flush output. Pass `true` to wait until data is sent. |
| `ngx.exit(status)` | Terminate the connection |
Cosocket Methods
| Method | Description |
|---|---|
| `sock:connect(host, port)` | Connect to a remote server |
| `sock:send(data)` | Send data on the socket |
| `sock:receive(pattern)` | Read data: `"*l"` (line), `"*a"` (all), or a byte count |
| `sock:receiveany(max)` | Read whatever bytes are available, up to `max` |
| `sock:receiveuntil(pattern)` | Create an iterator that reads until the pattern is found |
| `sock:peek(size)` | Read bytes without consuming them (preread phase only) |
| `sock:sslhandshake(opts)` | Upgrade to TLS |
| `sock:setkeepalive(timeout, pool)` | Return the socket to the connection pool |
| `sock:settimeouts(conn, send, read)` | Set per-operation timeouts in milliseconds |
| `sock:close()` | Close the socket |
Shared Dictionary Methods
| Method | Description |
|---|---|
| `shared:get(key)` | Retrieve a value |
| `shared:set(key, val, exptime)` | Store a value with an optional TTL |
| `shared:incr(key, delta, init, ttl)` | Atomic increment with optional initialization |
| `shared:delete(key)` | Remove a key |
| `shared:get_keys(max)` | List keys (for debugging only — slow on large dicts) |
| `shared:flush_all()` | Clear all entries |
| `shared:capacity()` | Total allocated bytes |
| `shared:free_space()` | Available bytes |
Timers and Scheduling
| Function | Description |
|---|---|
| `ngx.timer.at(delay, fn, ...)` | Schedule a one-shot callback after `delay` seconds |
| `ngx.timer.every(interval, fn, ...)` | Schedule a recurring callback |
| `ngx.timer.running_count()` | Number of active timer callbacks |
| `ngx.timer.pending_count()` | Number of pending timers |
Utility Functions
| Function | Description |
|---|---|
| `ngx.var.VARIABLE` | Read or write NGINX variables |
| `ngx.log(level, ...)` | Write to the NGINX error log |
| `ngx.now()` | Current time as a float (seconds) |
| `ngx.sleep(seconds)` | Yield for a duration without blocking the worker |
| `ngx.encode_base64(str)` | Base64 encode |
| `ngx.decode_base64(str)` | Base64 decode |
| `ngx.md5(str)` | MD5 hex digest |
| `ngx.sha1_bin(str)` | SHA-1 binary digest |
| `ngx.re.match(str, regex)` | PCRE regex matching |
| `ngx.re.gsub(str, regex, repl)` | PCRE regex global substitution |
| `ngx.worker.id()` | Current worker index (0-based) |
| `ngx.worker.count()` | Total number of worker processes |
| `ngx.config.subsystem` | Always `"stream"` in this module |
Light Threads
| Function | Description |
|---|---|
| `ngx.thread.spawn(fn, ...)` | Start a concurrent light thread |
| `ngx.thread.wait(thread1, ...)` | Wait for one or more threads to finish |
| `ngx.thread.kill(thread)` | Terminate a light thread |
Light threads enable concurrent operations within a single connection. For example, you can spawn one thread to read from the client and another to read from an upstream server, achieving bidirectional proxying.
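A bidirectional relay built on light threads might look like the sketch below, assuming an upstream at 127.0.0.1:8080; the buffer size is arbitrary and error handling is simplified:

```nginx
content_by_lua_block {
    local down = ngx.req.socket(true)  -- raw full-duplex client socket
    local up = ngx.socket.tcp()
    local ok, err = up:connect("127.0.0.1", 8080)
    if not ok then
        ngx.log(ngx.ERR, "upstream connect failed: ", err)
        return ngx.exit(ngx.ERROR)
    end

    -- Copy bytes from src to dst until either side fails or closes
    local function pump(src, dst)
        while true do
            local data, err = src:receiveany(4096)
            if not data then return err end
            local sent, serr = dst:send(data)
            if not sent then return serr end
        end
    end

    -- One light thread per direction; wait() returns when the first exits
    local t1 = ngx.thread.spawn(pump, down, up)
    local t2 = ngx.thread.spawn(pump, up, down)
    ngx.thread.wait(t1, t2)
    ngx.thread.kill(t1)
    ngx.thread.kill(t2)
    up:close()
}
```

Killing both threads after the first one finishes is a blunt but common way to tear down the other direction once one side of the connection goes away.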
Performance Considerations
The module leverages LuaJIT, which compiles Lua bytecode to native machine code. For most workloads, the overhead of Lua processing is negligible compared to network I/O latency.
However, keep these points in mind:
- Keep `lua_code_cache` on in production. When code caching is off, NGINX recompiles every Lua file on each connection. This is useful during development but causes severe performance degradation under load.
- Reuse cosocket connections. Call `sock:setkeepalive()` instead of `sock:close()` to return sockets to the connection pool. Creating a new TCP connection for every request adds significant latency.
- Size shared dictionaries appropriately. Each `lua_shared_dict` zone is pre-allocated from shared memory. Too small and you get evictions; too large and you waste memory. Monitor with `shared:free_space()`.
- Limit timer concurrency. Each timer callback runs in a light thread. If you schedule thousands of timers, you risk exhausting the limits set by `lua_max_running_timers` and `lua_max_pending_timers`.
- Avoid blocking operations in Lua. The NGINX event loop is single-threaded per worker. A blocking `os.execute()` or synchronous file I/O will stall all connections on that worker. Use cosockets for network operations and `ngx.timer.at()` for deferred work.
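As an example of deferring work, slow tasks triggered at the end of a connection can be pushed into a zero-delay timer instead of running inline. A sketch; the collector address 127.0.0.1:8125 and the metric format are hypothetical:

```nginx
log_by_lua_block {
    -- Copy what we need first: request-bound data such as ngx.var
    -- is not accessible inside detached timer callbacks
    local addr = ngx.var.remote_addr

    local ok, err = ngx.timer.at(0, function(premature)
        if premature then return end
        -- Hypothetical TCP metrics collector on 127.0.0.1:8125
        local sock = ngx.socket.tcp()
        if sock:connect("127.0.0.1", 8125) then
            sock:send("stream.conn:1|c|" .. addr .. "\n")
            sock:setkeepalive()
        end
    end)
    if not ok then
        ngx.log(ngx.ERR, "failed to schedule deferred work: ", err)
    end
}
```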
Security Best Practices
When using this module in production, follow these guidelines to keep your deployment secure:
- Validate all input. Data from client sockets is untrusted. Sanitize before using it in log messages, shared dict keys, or upstream requests.
- Set cosocket timeouts. Without timeouts, a slow or malicious upstream can tie up NGINX worker resources indefinitely. Always call `sock:settimeouts()` before I/O operations.
- Restrict access to management ports. If you expose Lua-powered stream servers, bind them to `127.0.0.1` or use firewall rules. The module itself provides no authentication mechanism — you must implement your own.
- Use `lua_ssl_trusted_certificate` for upstream TLS. When cosockets connect to TLS-enabled services, verify the server certificate to prevent man-in-the-middle attacks.
- Protect shared dictionaries from exhaustion. A client that triggers rapid key creation in a shared dict can fill it up, causing evictions of legitimate data. Use rate limiting and key normalization.
Troubleshooting
“failed to load the ‘resty.core’ module”
This error means the lua-resty-core library cannot be found or there is a version mismatch. Ensure the lua-resty-core package is installed:

```bash
dnf install lua-resty-core
```

Also verify that `lua_package_path` includes the resty library path, if you have customized it.
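For instance, if your Lua libraries live outside the default locations, the search path can be extended in the stream block; the directory shown is illustrative, and the trailing `;;` appends the compiled-in default path:

```nginx
stream {
    lua_package_path "/opt/nginx-lua/?.lua;;";
}
```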
“content_by_lua_block directive is not allowed here”
This means you placed a stream Lua directive inside an `http` block, or the stream Lua module is not loaded. The stream Lua directives only work inside `stream { }` blocks. Make sure `load_module modules/ngx_stream_lua_module.so;` appears before the `stream` block.
“module version X instead of Y”
The .so file was compiled against a different NGINX version than the running binary. Reinstall the module package to get a version that matches your NGINX:

```bash
dnf reinstall nginx-module-stream-lua
```
Cosocket “connection refused” or “timeout”
Verify that the upstream service is running and reachable from the NGINX host. Test with nc or curl from the same machine. Check firewall rules and SELinux policies — on RHEL-based systems, SELinux may block NGINX from making outbound connections:
```bash
setsebool -P httpd_can_network_connect 1
```
“lua_code_cache is off”
This warning appears in the error log when code caching is disabled. It is expected during development but should never appear in production. Set `lua_code_cache on;` in your `stream` block.
Stream Lua Module vs. HTTP Lua Module
If you are familiar with the NGINX HTTP Lua module, here are the key differences in the stream variant:
| Feature | HTTP Lua | Stream Lua |
|---|---|---|
| Protocol | HTTP requests/responses | Raw TCP/UDP streams |
| `ngx.req` object | Full HTTP request (headers, body, URI) | Socket only (`ngx.req.socket()`) |
| Phases | access, content, header_filter, body_filter, log | preread, content, log, balancer |
| Variables | HTTP variables (`$uri`, `$host`, etc.) | Stream variables (`$remote_addr`, `$session_time`, etc.) |
| `ngx.say`/`ngx.print` | Send HTTP response body | Send raw bytes to client socket |
| Subrequests | Supported (`ngx.location.capture`) | Not available |
| `content_by_lua` context | location | server |
The Lua API for cosockets, shared dictionaries, timers, regex, and encoding functions is identical between both modules. Additionally, the Lua Upstream module can complement stream Lua for dynamic upstream management.
Conclusion
The NGINX stream lua module transforms NGINX from a basic TCP/UDP proxy into a programmable stream processing engine. Whether you need protocol-aware routing, custom rate limiting, dynamic load balancing, or real-time connection analytics, Lua scripting in the stream subsystem provides the flexibility that static configuration cannot.
Combined with LuaJIT’s near-native performance and NGINX’s event-driven architecture, the module handles high-throughput workloads without the complexity of external processing daemons.
It is maintained by the OpenResty project on GitHub and packaged for RHEL-based distributions via the GetPageSpeed repository.