You run NGINX as a TCP or UDP proxy — forwarding database connections, DNS queries, mail traffic, or custom application protocols. Everything seems fine until a backend goes silent, connections pile up, or bandwidth consumption spikes without explanation. Unlike HTTP traffic, where access logs and status modules provide immediate visibility, stream traffic operates as a black box by default. You have no per-server connection counts, no upstream health metrics, and no way to tell which backend is dropping connections. Proper NGINX stream monitoring requires a dedicated solution.
The NGINX Stream Server Traffic Status (stream-sts) module solves this blind spot. It brings the same real-time NGINX stream monitoring capabilities that the popular VTS module provides for HTTP traffic into the stream subsystem. You get per-listener connection counts, bytes in/out, upstream backend metrics, custom filter groups, traffic limiting, and Prometheus-compatible output — all without touching your application code.
How NGINX Stream Monitoring Works
The stream-sts module operates as a two-module system. The core module (nginx-module-stream-sts) runs inside the stream {} block and collects traffic statistics into a shared memory zone. The display module (nginx-module-sts) runs inside the http {} block and exposes those statistics through an HTTP endpoint in HTML, JSON, or Prometheus format.
Both modules share the same memory zone, which means the collection and display happen independently. Your stream servers continue proxying traffic at full speed while the HTTP endpoint serves real-time dashboards to your monitoring tools.
The module tracks these metrics for every stream server and upstream backend:
- Connection counter — total connections handled since NGINX started
- Bytes in/out — total bandwidth consumed in each direction
- Response status codes — 1xx through 5xx session status counters
- Session duration — average and per-connection timing data
- Upstream response times — connect time, first byte time, and session duration per backend
Installing the Stream STS Module
You need both packages: nginx-module-stream-sts (core stats collection) and nginx-module-sts (HTTP display). Install them from the GetPageSpeed repository.
RHEL, CentOS, AlmaLinux, Rocky Linux
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-stream-sts nginx-module-sts
Then load both modules at the top of your /etc/nginx/nginx.conf:
load_module modules/ngx_stream_server_traffic_status_module.so;
load_module modules/ngx_http_stream_server_traffic_status_module.so;
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
sudo apt-get update
sudo apt-get install nginx-module-stream-sts nginx-module-sts
On Debian/Ubuntu, the packages handle module loading automatically, so no load_module directives are needed.
For detailed package information, see the stream-sts RPM page or the stream-sts APT page.
Basic Configuration
A minimal NGINX stream monitoring setup requires three things: a shared memory zone in the stream block, a matching zone in the http block, and a display endpoint. Here is a complete working configuration:
load_module modules/ngx_stream_server_traffic_status_module.so;
load_module modules/ngx_http_stream_server_traffic_status_module.so;
events {
worker_connections 1024;
}
stream {
server_traffic_status_zone;
upstream backend_tcp {
server 192.168.1.10:3306;
server 192.168.1.11:3306;
}
server {
listen 3306;
proxy_pass backend_tcp;
}
}
http {
stream_server_traffic_status_zone;
server {
listen 80;
location /stream-status {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format html;
}
}
}
After saving the configuration, verify and reload:
nginx -t
nginx -s reload
Access the dashboard at `http://your-server/stream-status` to see your stream traffic metrics in real time.
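To confirm the endpoint responds without opening a browser, here is a quick check (assuming NGINX listens on port 80 locally, as configured above):
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/stream-status
# A 200 response means the display module is serving the dashboard.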
Output Formats
The display module supports four output formats, each useful for a different monitoring workflow.
HTML Dashboard
The built-in HTML dashboard provides an auto-refreshing visual overview of all stream traffic. It shows server zones, upstream backends, and filter groups in a clean, tabular layout:
location /stream-status {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format html;
}
JSON API
Use JSON format for programmatic access. This format is ideal for custom scripts and monitoring integrations:
location /stream-status-json {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format json;
}
Query it with curl:
curl -s http://localhost/stream-status-json | python3 -m json.tool
The JSON response includes complete metrics for every server zone, upstream group, and filter:
{
"streamServerZones": {
"TCP:3306:192.168.1.10": {
"port": 3306,
"protocol": "TCP",
"connectCounter": 1547,
"inBytes": 2458624,
"outBytes": 18734080,
"responses": {
"1xx": 0,
"2xx": 1520,
"3xx": 0,
"4xx": 12,
"5xx": 15
}
}
}
}
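Beyond pretty-printing, you can pull specific fields out of the JSON in a one-liner. This sketch summarizes each server zone using the key names shown above:
curl -s http://localhost/stream-status-json | python3 -c "
import json, sys
zones = json.load(sys.stdin).get('streamServerZones', {})
for name, z in zones.items():
    print(f\"{name}: {z['connectCounter']} connections, \"
          f\"{z['inBytes']} bytes in, {z['outBytes']} bytes out\")
"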
Prometheus Metrics
For integration with Prometheus and Grafana, use the Prometheus format:
location /stream-metrics {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format prometheus;
}
This returns metrics in the standard Prometheus exposition format:
# HELP nginx_sts_server_bytes_total The request/response bytes
# TYPE nginx_sts_server_bytes_total counter
nginx_sts_server_bytes_total{listen="TCP:3306:192.168.1.10",port="3306",protocol="TCP",direction="in"} 2458624
nginx_sts_server_bytes_total{listen="TCP:3306:192.168.1.10",port="3306",protocol="TCP",direction="out"} 18734080
# HELP nginx_sts_server_connects_total The connects counter
# TYPE nginx_sts_server_connects_total counter
nginx_sts_server_connects_total{listen="TCP:3306:192.168.1.10",port="3306",protocol="TCP",code="2xx"} 1520
nginx_sts_server_connects_total{listen="TCP:3306:192.168.1.10",port="3306",protocol="TCP",code="5xx"} 15
nginx_sts_server_connects_total{listen="TCP:3306:192.168.1.10",port="3306",protocol="TCP",code="total"} 1547
Add this scrape target to your prometheus.yml:
scrape_configs:
- job_name: 'nginx-stream-sts'
static_configs:
- targets: ['nginx-server:80']
metrics_path: /stream-metrics
JSONP
For cross-domain JavaScript access, the display module also supports JSONP format with a configurable callback name using stream_server_traffic_status_display_jsonp.
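A sketch of such an endpoint, assuming the display format accepts jsonp the same way the HTTP VTS module does (the callback name here is arbitrary):
location /stream-status-jsonp {
    stream_server_traffic_status_display;
    stream_server_traffic_status_display_format jsonp;
    stream_server_traffic_status_display_jsonp statusCallback;
}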
Custom Traffic Filtering
Filters let you group and categorize stream traffic by any NGINX variable. This is one of the most powerful features for NGINX stream monitoring because it creates custom metric dimensions beyond the default per-listener breakdown.
Filtering by Protocol
Track TCP and UDP traffic separately across all listeners:
stream {
server_traffic_status_zone;
server_traffic_status_filter_by_set_key $protocol protocol;
server {
listen 53 udp;
proxy_pass dns_backends;
}
server {
listen 3306;
proxy_pass mysql_backends;
}
}
This creates a protocol filter group in the JSON/Prometheus output with separate counters for TCP and UDP connections.
Filtering by Server Address
When a single listener binds to multiple interfaces, track traffic per address:
stream {
server_traffic_status_zone;
server {
listen 3306;
server_traffic_status_filter_by_set_key $server_addr server_addr;
proxy_pass mysql_backends;
}
}
Filtering by Remote Address
Track connections per client IP to identify heavy users:
stream {
server_traffic_status_zone;
server {
listen 5432;
server_traffic_status_filter_by_set_key $remote_addr client;
proxy_pass postgres_backends;
}
}
Disabling Monitoring for Specific Servers
Use server_traffic_status off to exclude a server block from monitoring. This is useful for health check listeners or internal management ports that would pollute your metrics:
stream {
server_traffic_status_zone;
server {
listen 3306;
proxy_pass mysql_backends;
}
server {
listen 9999;
server_traffic_status off;
proxy_pass health_check;
}
}
Traffic Limiting
Beyond monitoring, the stream-sts module can enforce traffic limits. When a server’s cumulative traffic exceeds a threshold, new connections are refused. This protects backends from runaway clients or bandwidth abuse.
Limiting by Bytes
Limit total inbound traffic on a per-server basis:
stream {
server_traffic_status_zone;
server {
listen 8080;
server_traffic_status_limit_traffic in:50m;
proxy_pass backend_tcp;
}
}
This refuses new connections once the server has received 50 megabytes of inbound data. The in member refers to cumulative inbound bytes. You can also limit by out (outbound bytes) or connect_counter (total connections).
By default, refused connections return a 503 status. You can specify a custom status code as a third argument:
server_traffic_status_limit_traffic in:50m 502;
Limiting by Filter Key
Apply limits to specific filter groups using server_traffic_status_limit_traffic_by_set_key. This lets you limit traffic for individual clients or protocol types rather than the entire server.
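Here is a sketch of per-client limiting, assuming the key convention mirrors the companion VTS module (FG@<filter group>@<key>); consult the module README for the exact key format:
stream {
    server_traffic_status_zone;
    # Group traffic per client IP under the filter group "client"
    server_traffic_status_filter_by_set_key $remote_addr client;

    server {
        listen 8080;
        # Assumed key format: cap each client at 10 MB of inbound traffic
        server_traffic_status_limit_traffic_by_set_key FG@client@$remote_addr in:10m;
        proxy_pass backend_tcp;
    }
}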
Directive Reference
All directives below are valid in the stream context. Most can be set at both the main and server levels.
server_traffic_status
Enables or disables traffic monitoring for the current server block.
- Syntax: `server_traffic_status on | off`
- Default: `on` (when a zone is defined)
- Context: `stream`, `server`
server_traffic_status_zone
Defines the shared memory zone for storing traffic statistics. Both the stream block and http block must reference the same zone name.
- Syntax: `server_traffic_status_zone [shared:name:size]`
- Default: `shared:stream_server_traffic_status:1m`
- Context: `stream`
The default zone size of approximately 1 MB is sufficient for most deployments. Increase it if you have many server blocks or filter keys:
server_traffic_status_zone shared:stream_server_traffic_status:10m;
The http block must use a matching zone declaration:
http {
stream_server_traffic_status_zone;
# ...
}
server_traffic_status_filter
Enables or disables the filter feature.
- Syntax: `server_traffic_status_filter on | off`
- Default: `on`
- Context: `stream`, `server`
server_traffic_status_filter_by_set_key
Defines a custom filter group using any NGINX stream variable.
- Syntax: `server_traffic_status_filter_by_set_key key [name]`
- Default: none
- Context: `stream`, `server`
The key is a variable like $protocol or $remote_addr. The optional name sets the filter group label in the output.
server_traffic_status_filter_check_duplicate
Removes duplicate filter entries during configuration.
- Syntax: `server_traffic_status_filter_check_duplicate on | off`
- Default: `on`
- Context: `stream`, `server`
server_traffic_status_limit
Enables or disables the traffic limiting feature.
- Syntax: `server_traffic_status_limit on | off`
- Default: `on`
- Context: `stream`, `server`
server_traffic_status_limit_traffic
Sets a cumulative traffic limit for the current server.
- Syntax: `server_traffic_status_limit_traffic member:size [code]`
- Default: none
- Context: `stream`, `server`
The member is one of the traffic counters (e.g., in, out, connect_counter). The size uses standard NGINX size notation (e.g., 1024, 50m, 1g). The optional code sets the response status when the limit is exceeded (default: 503).
server_traffic_status_limit_traffic_by_set_key
Sets a traffic limit for a specific filter key.
- Syntax: `server_traffic_status_limit_traffic_by_set_key key member:size [code]`
- Default: none
- Context: `stream`, `server`
server_traffic_status_limit_check_duplicate
Removes duplicate limit entries during configuration.
- Syntax: `server_traffic_status_limit_check_duplicate on | off`
- Default: `on`
- Context: `stream`, `server`
server_traffic_status_average_method
Sets the method for calculating average session times.
- Syntax: `server_traffic_status_average_method AMM | WMA [period]`
- Default: `AMM 60s`
- Context: `stream`, `server`
Two methods are available:
- AMM (Arithmetic Mean Method) — simple average of all session times within the period
- WMA (Weighted Moving Average) — gives more weight to recent sessions, better for detecting trends
The period sets the time window for the calculation. For example, WMA 30s computes a 30-second weighted moving average.
server_traffic_status_histogram_buckets
Defines histogram bucket boundaries for session duration tracking. This is especially useful for Prometheus integration because it generates proper histogram metrics.
- Syntax: `server_traffic_status_histogram_buckets bucket1 bucket2 ...`
- Default: none
- Context: `stream`, `server`
Values are in seconds with millisecond precision:
server_traffic_status_histogram_buckets 0.005 0.01 0.05 0.1 0.5 1 5 10;
This produces Prometheus histogram output like nginx_sts_server_session_duration_seconds_bucket{le="0.005"}, which enables percentile calculations in Grafana.
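With buckets defined, a p95 session-duration query in Grafana could look like this (the metric name follows the pattern shown above):
histogram_quantile(0.95,
  sum by (le) (rate(nginx_sts_server_session_duration_seconds_bucket[5m]))
)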
Embedded Variables
The module provides nine variables for use in stream log formats or conditional logic. All variables reflect the current server block’s cumulative statistics.
| Variable | Description |
|---|---|
| `$sts_connect_counter` | Total connections handled |
| `$sts_in_bytes` | Total bytes received |
| `$sts_out_bytes` | Total bytes sent |
| `$sts_1xx_counter` | Sessions with 1xx status |
| `$sts_2xx_counter` | Sessions with 2xx status |
| `$sts_3xx_counter` | Sessions with 3xx status |
| `$sts_4xx_counter` | Sessions with 4xx status |
| `$sts_5xx_counter` | Sessions with 5xx status |
| `$sts_session_time` | Total session time in milliseconds |
Use these variables in stream log formats to enrich your log files:
stream {
server_traffic_status_zone;
log_format stream_stats '$remote_addr [$time_local] '
'$protocol $status $bytes_sent $bytes_received '
'total_connects=$sts_connect_counter '
'total_in=$sts_in_bytes total_out=$sts_out_bytes';
server {
listen 3306;
proxy_pass mysql_backends;
access_log /var/log/nginx/stream-access.log stream_stats;
}
}
Integrating with Prometheus and Grafana
The Prometheus output format makes the stream-sts module a natural fit for modern observability stacks. Here is a production-ready integration workflow.
Prometheus Configuration
Add the NGINX stream monitoring endpoint to your Prometheus scrape configuration:
scrape_configs:
- job_name: 'nginx-stream-sts'
scrape_interval: 15s
static_configs:
- targets: ['nginx-server:80']
metrics_path: /stream-metrics
Key Metrics to Monitor
Set up Grafana dashboards and alerts for these critical stream metrics:
- `nginx_sts_server_connects_total{code="5xx"}` — backend failures, alert on sudden spikes
- `nginx_sts_server_bytes_total{direction="in"}` — inbound bandwidth, useful for capacity planning (example query below)
- `nginx_sts_upstream_connects_total{code="total"}` — per-backend connection distribution for load balancing verification
- `nginx_sts_upstream_session_seconds` — average upstream session duration, detects slow backends
- `nginx_sts_main_shm_usage_bytes{shared="used_size"}` — shared memory usage, increase zone size if approaching max
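As a starting point for dashboard panels, plain rate() queries over these counters work well. For example, inbound bandwidth per listener in bytes per second (metric and label names taken from the exposition sample above):
rate(nginx_sts_server_bytes_total{direction="in"}[5m])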
Example Alert Rule
Alert when 5xx errors exceed a threshold, using a Prometheus alerting rule (its results pair naturally with a Grafana dashboard):
- alert: NginxStreamBackendErrors
expr: rate(nginx_sts_server_connects_total{code="5xx"}[5m]) > 0.1
for: 2m
labels:
severity: warning
annotations:
summary: "NGINX stream backend errors on {{ $labels.listen }}"
Security Best Practices
The status endpoint reveals your internal infrastructure: backend server addresses, traffic volumes, and connection patterns. Restrict access carefully.
Restrict by IP Address
Allow access only from trusted monitoring networks:
http {
stream_server_traffic_status_zone;
server {
listen 80;
location /stream-status {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format html;
allow 10.0.0.0/8;
allow 127.0.0.1;
deny all;
}
}
}
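If your monitoring hosts do not have fixed addresses, HTTP basic authentication is an alternative (the credentials file path below is an example):
location /stream-status {
    stream_server_traffic_status_display;
    stream_server_traffic_status_display_format html;
    auth_basic           "NGINX stream status";
    auth_basic_user_file /etc/nginx/.htpasswd;
}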
Use a Separate Monitoring Port
Isolate the status endpoint on a dedicated port that is not exposed publicly:
server {
listen 127.0.0.1:8090;
location /stream-status {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format html;
}
location /stream-metrics {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format prometheus;
}
}
This keeps monitoring endpoints accessible only from localhost or through an SSH tunnel.
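For ad-hoc access from a workstation, forward the port over SSH (hostname is a placeholder):
ssh -L 8090:127.0.0.1:8090 user@nginx-server
# Then open http://localhost:8090/stream-status in a local browser.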
Performance Considerations
The stream-sts module uses shared memory and atomic operations for counter updates, which means minimal impact on proxying throughput. However, keep these factors in mind:
- Shared memory sizing: The default zone of approximately 1 MB stores around 50-100 server/upstream/filter nodes. If you see “shared memory is too small” errors in your error log, increase the zone size.
- Filter cardinality: Filtering by high-cardinality variables like `$remote_addr` creates a node for every unique client IP. This can exhaust shared memory quickly on busy servers. Use filters judiciously.
- Display endpoint load: The HTML dashboard and JSON endpoint traverse the entire shared memory tree on every request. Avoid scraping more frequently than every 5-10 seconds in high-traffic environments.
Troubleshooting
“display_handler::shm_init() failed”
This error means the http block cannot find the shared memory zone created by the stream block. The zone names must match. If you use the default server_traffic_status_zone (no arguments) in the stream block, use the default stream_server_traffic_status_zone in the http block as well:
stream {
server_traffic_status_zone; # Creates zone "stream_server_traffic_status"
}
http {
stream_server_traffic_status_zone; # Must reference the same zone name
}
Status Page Shows No Data
If the HTML dashboard loads but all counters are zero, verify that:
- Traffic is actually flowing through your stream servers
- The
server_traffic_statusdirective is not set toofffor the server blocks you want to monitor - You restarted (not just reloaded) NGINX after adding the module for the first time
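On systemd-based distributions, a full restart is:
sudo systemctl restart nginx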
Shared Memory Exhaustion
If you see “ngx_slab_alloc() failed” errors, your zone is full. Check the current usage via the JSON endpoint:
curl -s http://localhost/stream-status-json | python3 -c "
import json, sys
data = json.load(sys.stdin)
shm = data['sharedZones']
used_pct = (shm['usedSize'] / shm['maxSize']) * 100
print(f\"Used: {shm['usedSize']} / {shm['maxSize']} ({used_pct:.1f}%)\")
print(f\"Nodes: {shm['usedNode']}\")
"
If usage exceeds 80%, increase the zone size and restart NGINX.
SELinux Blocks Non-Standard Ports
On RHEL-based systems with SELinux enabled, NGINX cannot bind to non-standard stream ports by default. Add the ports to the SELinux policy. Note that ports such as 3306 and 53 are already assigned to other types (mysqld_port_t and dns_port_t) in the default policy, so use -m (modify) rather than -a (add) for them:
sudo semanage port -m -t http_port_t -p tcp 3306
sudo semanage port -m -t http_port_t -p udp 53
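To confirm the assignment took effect:
sudo semanage port -l | grep -w http_port_t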
NGINX Stream Monitoring vs HTTP Monitoring
If you already use the NGINX VTS module for HTTP traffic monitoring, the stream-sts module is its natural counterpart for TCP/UDP workloads. Both modules were created by the same author and share a consistent architecture. The key difference is the context: VTS operates in the http {} block while stream-sts operates in the stream {} block.
For environments that proxy both HTTP and TCP/UDP traffic, you can run both modules simultaneously. Each uses its own shared memory zone and status endpoint.
Complete Production Configuration
Here is a comprehensive, production-ready NGINX stream monitoring configuration that combines all best practices:
load_module modules/ngx_stream_server_traffic_status_module.so;
load_module modules/ngx_http_stream_server_traffic_status_module.so;
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
stream {
server_traffic_status_zone;
server_traffic_status_average_method WMA 30s;
server_traffic_status_histogram_buckets 0.005 0.01 0.05 0.1 0.5 1 5 10;
server_traffic_status_filter_by_set_key $protocol protocol;
upstream mysql_backends {
server 192.168.1.10:3306;
server 192.168.1.11:3306;
}
upstream redis_backends {
server 192.168.1.20:6379;
server 192.168.1.21:6379;
}
server {
listen 3306;
proxy_pass mysql_backends;
proxy_connect_timeout 5s;
}
server {
listen 6379;
proxy_pass redis_backends;
proxy_connect_timeout 3s;
}
}
http {
stream_server_traffic_status_zone;
server {
listen 127.0.0.1:8090;
location /stream-status {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format html;
}
location /stream-metrics {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format prometheus;
}
location /stream-status-json {
stream_server_traffic_status_display;
stream_server_traffic_status_display_format json;
}
}
}
This configuration provides real-time NGINX stream monitoring for MySQL and Redis TCP proxies with weighted moving averages, histogram buckets for Prometheus percentile queries, and protocol-level filtering. The status endpoints are restricted to localhost access only.
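Because the status server binds to 127.0.0.1:8090, Prometheus must run on the same host or reach the port through a tunnel or internal interface. A matching local scrape job would be:
scrape_configs:
  - job_name: 'nginx-stream-sts'
    scrape_interval: 15s
    static_configs:
      - targets: ['127.0.0.1:8090']
    metrics_path: /stream-metrics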
Conclusion
The NGINX stream-sts module transforms your TCP/UDP proxies from opaque forwarding pipes into fully observable infrastructure components. By installing two packages and adding a few configuration directives, you gain per-listener connection metrics, upstream backend health monitoring, custom filter dimensions, and native Prometheus integration.
Start with the basic configuration to see your stream traffic patterns. Then add filters for the dimensions that matter most to your operations, configure Prometheus scraping, and build Grafana dashboards that give you instant visibility into your stream infrastructure.
For more information, see the stream-sts module documentation and the source code on GitHub.
