

How to Send NGINX Metrics to StatsD, Graphite, and Datadog

You have NGINX handling thousands of requests per second, yet your monitoring dashboard shows nothing about what is happening inside it. The built-in stub_status module offers only a handful of global counters — active connections, total accepts, total requests — with no breakdown by endpoint, no response time tracking, and no way to push metrics to your existing monitoring stack. Syslog-based access logs can technically capture timing data, but extracting and aggregating metrics from raw log lines requires a separate parsing pipeline. You need an NGINX StatsD module to bridge this gap — something that pushes pre-aggregated counters and timings directly from NGINX to your collector.

That is exactly what nginx-module-statsd provides. The NGINX StatsD module sends counters and timing metrics over UDP to any StatsD-compatible collector — during the log phase of every request. You can feed real-time, per-location metrics into Graphite, Datadog, InfluxDB, Telegraf, or any backend that speaks the StatsD protocol, with zero changes to your application code.

In this article, you will learn how to install the NGINX StatsD module, configure it for production use, and build meaningful dashboards from the metrics it produces.

How the NGINX StatsD Module Works

The module hooks into NGINX’s log phase — the final phase of request processing, after the response has been sent to the client. For every request that matches a configured location, it constructs a StatsD-formatted UDP datagram and sends it to the collector you specify.

Two metric types are supported:

  1. Counters (statsd_count): incremented by a configurable value each time a matching request is logged.
  2. Timings (statsd_timing): durations in milliseconds, taken from variables such as $request_time or $upstream_response_time.

Because the module uses fire-and-forget UDP, it adds virtually no latency to request processing. If the StatsD server is unreachable, the UDP send simply fails silently — NGINX continues serving requests without interruption.
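The datagrams themselves follow the simple StatsD text protocol. A counter, a timing, and a sampled counter look like this (metric names here are illustrative):

```text
# Counter: one increment of myapp.requests
myapp.requests:1|c

# Timing: 123 ms recorded for myapp.request_time
myapp.request_time:123|ms

# Sampled counter: sent for 10% of requests, annotated so the
# collector scales it back up during aggregation
myapp.requests:1|c|@0.10
```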

Why Not Just Use stub_status or Syslog?

NGINX ships with two native monitoring options, but neither provides what the NGINX StatsD module offers. For a more comprehensive built-in monitoring approach, consider the NGINX VTS module, which exposes detailed virtual host traffic metrics via an HTTP endpoint. However, VTS still requires a scraper to pull data — it does not push metrics to your collector.

Feature                              stub_status   access_log (syslog)    NGINX StatsD module
Custom per-location counters         No            No                     Yes
Response time tracking               No            Requires log parsing   Yes (native)
Push to Graphite/Datadog             No            No                     Yes
Dynamic metric keys with variables   No            No                     Yes
Zero application code changes        Yes           Yes                    Yes
Sampling to reduce overhead          No            N/A                    Yes

If your monitoring stack uses Graphite natively rather than StatsD, you may also want to look at the NGINX Graphite module, which writes metrics in the Graphite plaintext protocol directly. The NGINX StatsD module is the better choice when your infrastructure already runs a StatsD aggregator or when you use Datadog, which accepts StatsD natively.

Installation

RHEL, CentOS, AlmaLinux, Rocky Linux, and Fedora

sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-statsd

Enable the module by adding the following at the top of /etc/nginx/nginx.conf:

load_module modules/ngx_http_statsd_module.so;

For more details and version history, see the nginx-module-statsd RPM page.

Debian and Ubuntu

First, set up the GetPageSpeed APT repository, then install:

sudo apt-get update
sudo apt-get install nginx-module-statsd

On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.

For more details, see the nginx-module-statsd APT page.

Configuration

The NGINX StatsD module provides four directives. All of them have been verified against the module source code and tested on Rocky Linux 10.

statsd_server

Sets the address of your StatsD collector.

Syntax: statsd_server <address[:port]> | off;
Default: none (module is inactive until a server is set)
Context: http, server, location

# Send to localhost on default StatsD port (8125)
statsd_server 127.0.0.1;

# Send to a remote collector on a custom port
statsd_server metrics.example.com:9125;

# Disable stats for a specific location
location /health {
    statsd_server off;
    return 200 "OK\n";
}

When set to off, the module skips all metric sending for that context. This is useful for excluding health check endpoints or static asset locations from your metrics.

statsd_count

Sends a counter metric to StatsD.

Syntax: statsd_count <key> <value> [<condition>];
Default: none
Context: server, location, if

The key is the metric name in dotted notation (e.g., nginx.api.requests). Both the key and the value accept NGINX variables, which means you can build dynamic metric names at request time.

The optional third argument is a condition: the counter is only sent if this value evaluates to a non-empty string. This is useful for conditional counting based on request outcome.

# Count every request to this server block
statsd_count "myapp.requests" 1;

# Count only completed requests
statsd_count "myapp.completed_requests" 1 "$request_completion";

# Use a dynamic key based on HTTP status code
statsd_count "myapp.status.$status" 1;

# Count by upstream server address (for load balancing visibility)
statsd_count "myapp.upstream.$upstream_addr" 1 "$upstream_addr";

If the value evaluates to 0 or an empty string, the metric is not sent.

statsd_timing

Sends a timing metric (in milliseconds) to StatsD.

Syntax: statsd_timing <key> <value> [<condition>];
Default: none
Context: server, location, if

The value is typically an NGINX variable that contains a time measurement. The module automatically converts NGINX’s second-based timing format (e.g., 0.123) to milliseconds (e.g., 123).

# Track upstream response time
statsd_timing "myapp.upstream_time" "$upstream_response_time";

# Track total request processing time
statsd_timing "myapp.request_time" "$request_time";

Important: If the timing value evaluates to 0.000 (which happens for requests served directly by NGINX via return or static files with negligible processing time), the metric is not sent. This is by design — zero-value timings would skew your percentile calculations. Therefore, statsd_timing is most useful for proxied requests where $upstream_response_time is non-zero.
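The second-to-millisecond conversion and the zero-skip behavior can be sketched in a few lines of Python (a simplified model for illustration, not the module's actual C code):

```python
def timing_to_millis(nginx_time: str):
    """Model of statsd_timing value handling: convert NGINX's
    second-based timing string (e.g. "0.123") to whole milliseconds,
    returning None when the metric would be skipped."""
    if not nginx_time:
        return None  # variable was empty (e.g. no upstream involved)
    millis = round(float(nginx_time) * 1000)
    return millis if millis > 0 else None  # 0.000 timings are not sent

print(timing_to_millis("0.123"))  # 123
print(timing_to_millis("0.000"))  # None
```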

statsd_sample_rate

Controls what percentage of requests actually send metrics.

Syntax: statsd_sample_rate <percentage>;
Default: 100 (all requests)
Context: http, server, location

# Only send metrics for 10% of requests
statsd_sample_rate 10;

# Send all metrics (default)
statsd_sample_rate 100;

When a sample rate below 100 is configured, two things happen:

  1. NGINX randomly decides whether to send metrics for each request based on the configured percentage.
  2. The StatsD message includes a sample rate annotation (e.g., myapp.requests:1|c|@0.10), which tells the StatsD server to multiply the value accordingly when aggregating.

Use sampling on high-traffic servers to reduce the volume of UDP packets without losing statistical accuracy. For example, a server handling 10,000 requests per second with statsd_sample_rate 10 will send approximately 1,000 UDP packets per second — more than enough for accurate aggregation.
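The two-step behavior above can be illustrated with a small sketch (a hypothetical helper that mimics the client side, not the module's actual implementation):

```python
import random

def counter_packet(key: str, value: int, sample_rate_pct: int):
    """Build a StatsD counter datagram with client-side sampling.
    Returns None when the request is sampled out; otherwise returns
    the packet string, annotated with @rate when sampling is active."""
    if sample_rate_pct < 100:
        if random.uniform(0, 100) >= sample_rate_pct:
            return None  # this request does not report metrics
        return f"{key}:{value}|c|@{sample_rate_pct / 100:.2f}"
    return f"{key}:{value}|c"

print(counter_packet("myapp.requests", 1, 100))  # myapp.requests:1|c
```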

Production Configuration Examples

Basic Request Counting and Timing

This configuration tracks total requests per virtual host and upstream response times for proxied locations:

http {
    statsd_server 127.0.0.1;

    server {
        listen 80;
        server_name api.example.com;

        statsd_count "api.requests" 1;

        location / {
            statsd_timing "api.response_time" "$upstream_response_time";
            proxy_pass http://backend;
        }
    }
}

Per-Endpoint Monitoring

For REST APIs, you often want to track each endpoint separately:

server {
    listen 80;
    server_name api.example.com;

    statsd_server 127.0.0.1;
    statsd_count "api.total" 1;

    location /v1/users {
        statsd_count "api.users.requests" 1;
        statsd_timing "api.users.latency" "$upstream_response_time";
        proxy_pass http://backend;
    }

    location /v1/orders {
        statsd_count "api.orders.requests" 1;
        statsd_timing "api.orders.latency" "$upstream_response_time";
        proxy_pass http://backend;
    }

    location /health {
        statsd_server off;
        return 200 "OK\n";
    }
}

Notice how the /health endpoint explicitly disables StatsD to avoid inflating request counts with synthetic monitoring probes.

High-Traffic Servers with Sampling

On servers receiving tens of thousands of requests per second, sending a UDP packet per request may generate unnecessary network overhead. Use sampling to keep the metric volume manageable:

http {
    statsd_server metrics.internal:8125;
    statsd_sample_rate 10;

    server {
        listen 80;
        server_name cdn.example.com;

        statsd_count "cdn.requests" 1;

        location ~* \.(jpg|png|gif|css|js)$ {
            statsd_count "cdn.static_assets" 1;
            root /var/www/static;
        }
    }
}

At a 10% sample rate, the StatsD server automatically compensates by multiplying each received value by 10 during aggregation.
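The server-side compensation can be modeled briefly (a simplified aggregator handling counter packets only, written for illustration):

```python
def flush_total(packets):
    """Sum counter packets the way a StatsD server does at flush time:
    each value is divided by its sample rate (default 1.0)."""
    total = 0.0
    for p in packets:
        fields = p.split(":", 1)[1].split("|")  # e.g. ["1", "c", "@0.10"]
        value = float(fields[0])
        rate = float(fields[2][1:]) if len(fields) > 2 else 1.0  # strip "@"
        total += value / rate
    return total

# ~950 packets observed at a 10% sample rate represent ~9,500 requests
print(flush_total(["cdn.requests:1|c|@0.10"] * 950))  # 9500.0
```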

Testing Your Configuration

After configuring the NGINX StatsD module, verify that metrics are actually reaching your StatsD server. The simplest way is to listen on the StatsD UDP port with a Python one-liner:

# Listen for StatsD packets on port 8125
python3 -c "
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 8125))
print('Listening for StatsD packets on port 8125...')
while True:
    data, addr = sock.recvfrom(2048)
    print(f'{addr[0]}: {data.decode()}')
"

Then, in another terminal, send a request to NGINX:

curl http://localhost/

You should see output similar to:

127.0.0.1: myapp.requests:1|c

For timing metrics with a proxied backend:

127.0.0.1: myapp.response_time:45|ms

If you are using sampling, the output includes a rate annotation:

127.0.0.1: myapp.requests:1|c|@0.50

Verifying the Module Is Loaded

On RHEL-based systems, confirm the module file is present and NGINX accepts your configuration:

ls /usr/lib64/nginx/modules/ngx_http_statsd_module.so
nginx -t

If nginx -t passes with statsd_server directives in your configuration, the module is correctly loaded and working.

Performance Considerations

The NGINX StatsD module is designed to have minimal impact on request processing:

For extremely high-traffic scenarios (100K+ requests per second), consider:

  1. Using statsd_sample_rate to reduce UDP volume. A 10% sample rate is statistically accurate for traffic above 1,000 RPS.
  2. Running StatsD locally: Send metrics to 127.0.0.1 to eliminate network latency entirely.
  3. Separating metric-heavy locations: Apply statsd_server off to locations that do not need monitoring (health checks, static assets, favicons).

Security Best Practices

StatsD uses unencrypted, unauthenticated UDP. Run the collector on localhost or a trusted internal network, never expose the StatsD port to the public internet, and restrict it with firewall rules. For example, to accept traffic on port 8125 only from localhost with firewalld:

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="127.0.0.1" port port="8125" protocol="udp" accept'
sudo firewall-cmd --reload

Integrating with Monitoring Backends

Graphite + Grafana

StatsD was originally created as a front-end for Graphite. The metrics the NGINX StatsD module sends are natively compatible:

  1. Install StatsD and configure it to flush to Graphite.
  2. Configure NGINX with statsd_server 127.0.0.1.
  3. In Grafana, add Graphite as a data source and query metrics like stats.counters.myapp.requests.rate or stats.timers.myapp.response_time.mean.

Datadog

The Datadog Agent includes a built-in DogStatsD server that accepts StatsD-formatted metrics on port 8125 by default. Point your NGINX StatsD module to the agent:

statsd_server 127.0.0.1:8125;

Metrics appear in Datadog under the myapp.* namespace with no additional configuration.

Telegraf + InfluxDB

Telegraf’s StatsD input plugin listens for StatsD packets and writes them to InfluxDB, Prometheus, or any supported output. Enable the plugin in telegraf.conf:

[[inputs.statsd]]
  service_address = ":8125"
  percentiles = [50, 90, 99]

Then configure NGINX to send to the Telegraf host.

Complementary Observability

For full observability beyond metrics, consider pairing the NGINX StatsD module with the NGINX OpenTelemetry module, which provides distributed tracing across your microservices. Together, StatsD handles aggregate metrics (request rates, latency percentiles) while OpenTelemetry traces individual requests through your service graph.

Troubleshooting

No Metrics Arriving at StatsD

  1. Verify the module is loaded: Run nginx -t with a statsd_server directive. If it fails with “unknown directive”, the module is not loaded. Check that load_module modules/ngx_http_statsd_module.so; appears at the top of nginx.conf (RHEL-based only).
  2. Check UDP connectivity: Use socat to confirm packets reach the StatsD port:
socat -u UDP-LISTEN:8125,reuseaddr STDOUT
  3. Verify the StatsD address is resolvable: The statsd_server directive resolves hostnames at configuration load time. If DNS fails, NGINX will not start. Use an IP address for reliability.

Timing Metrics Not Appearing

The statsd_timing directive skips metrics whose value is zero. This commonly occurs when NGINX serves the request directly (via return, redirects, or fast static file delivery) and when $upstream_response_time is empty because no upstream was involved.

To verify, test with a proxied backend that adds measurable latency.

Metrics Appear Duplicated

If you define statsd_count at both the server and location level, both counters fire — this is intentional, as the NGINX StatsD module accumulates directives from parent contexts. Place your counters at the most specific level to avoid double-counting.
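As a sketch (hypothetical metric names), the following server block emits two counter packets for every request to /v1/users, one from each context:

```nginx
server {
    statsd_server 127.0.0.1;

    # Fires for every request handled by this server block ...
    statsd_count "api.requests" 1;

    location /v1/users {
        # ... and this one also fires for the same request.
        statsd_count "api.users.requests" 1;
        proxy_pass http://backend;
    }
}
```

If you want exactly one counter per request, define statsd_count only in the location blocks.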

Conclusion

The NGINX StatsD module transforms NGINX from a black box into a first-class metrics source. By sending counters and timings directly to your monitoring stack over UDP, you gain real-time visibility into per-endpoint request rates, upstream latency, and error distribution — without modifying your application or parsing access logs.

The module is available as a pre-built dynamic module for RHEL-based distributions and Debian/Ubuntu from the GetPageSpeed repository. The source code is maintained at github.com/dvershinin/nginx-statsd.


Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization • Linux system administration • Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
