Every system administrator running NGINX in production faces the same question: how much traffic does each part of the server actually handle? Which locations get the most requests? How much bandwidth do they consume? What response codes are clients seeing? How fast are upstream backends responding? These are exactly the questions the NGINX traffic accounting module was built to answer — but first, consider why the traditional approaches fall short.
Parsing access logs is the usual solution, and it is surprisingly expensive. Heavyweight log aggregation stacks like Logstash, Fluentd, or Loki need their own servers just to store and process the data. Cron-based scripts chew through gigabytes of text files and always lag behind reality. A single busy server can generate millions of log lines per day, each one requiring disk I/O to write and CPU to parse. By the time your dashboard updates, the spike you needed to catch is already over.
The NGINX traffic accounting module takes a fundamentally different approach. Instead of writing one log line per request and parsing them all externally, it aggregates traffic metrics directly inside the NGINX worker process — counting requests, bytes, latency, and status codes in memory. Every 60 seconds (or at whatever interval you choose), it exports a single compact summary line per traffic group. The result is real-time traffic visibility with negligible CPU and memory overhead, no external tools required.
How the Traffic Accounting Module Works
The module operates in the log phase of NGINX request processing. For every request, it performs these steps:
- Looks up the `accounting_id`, a string you assign per location, server, or dynamically via variables
- Aggregates metrics (request count, bytes in/out, latency, status codes) into an in-memory red-black tree grouped by that `accounting_id`
- Every N seconds (set via `accounting_interval`), rotates the metrics and exports the aggregated summary to a log file, syslog, or stderr
Because the module aggregates in memory rather than writing per-request data, it uses far less I/O and CPU than access log analysis. A period that sees 100,000 requests produces just one summary line per accounting_id — not 100,000 log lines.
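To make that compaction concrete, here is a toy shell sketch (an illustration of the idea only, not the module itself): it folds simulated per-request records into one summary per group, the same way the module folds requests into per-`accounting_id` counters before each export.

```shell
# Toy illustration only. Input lines simulate per-request records: "<group> <bytes_sent>".
# The awk script collapses them into one summary per group; sort makes the order stable.
printf 'static 120\nstatic 310\napi 90\n' | \
  awk '{ req[$1]++; bytes[$1] += $2 }
       END { for (g in req) printf "id:%s|requests:%d|bytes_out:%d\n", g, req[g], bytes[g] }' | \
  sort
```

Three input records collapse into two summary lines, one per group, which is the shape of the module's output as well.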
Metrics Collected
For each accounting_id, the module tracks these values:
| Metric | Description |
|---|---|
| `requests` | Total number of requests processed |
| `bytes_in` | Total bytes received from clients |
| `bytes_out` | Total bytes sent to clients |
| `latency_ms` | Sum of all request processing times (milliseconds) |
| `upstream_latency_ms` | Sum of all upstream response times (milliseconds) |
| Status code counts | Individual counts for each HTTP status code (200, 301, 404, 500, etc.) |
Installation
RHEL, CentOS, AlmaLinux, Rocky Linux, and Fedora
```sh
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-traffic-accounting
```
After installation, load the module by adding this line to the top of /etc/nginx/nginx.conf, before any http block:
```nginx
load_module modules/ngx_http_accounting_module.so;
```
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
```sh
sudo apt-get update
sudo apt-get install nginx-module-traffic-accounting
```
On Debian/Ubuntu, the package handles module loading automatically; no `load_module` directive is needed.
Configuration
Enabling Traffic Accounting
Add the accounting directives inside your http block:
```nginx
http {
    accounting on;
    accounting_log /var/log/nginx/accounting.log;

    server {
        listen 80;
        server_name example.com;

        location / {
            accounting_id static_content;
            root /var/www/html;
        }

        location /api {
            accounting_id api_requests;
            proxy_pass http://backend;
        }
    }
}
```
With this configuration, the NGINX traffic accounting module writes aggregated metrics every 60 seconds (the default interval) to /var/log/nginx/accounting.log. Each location gets its own metrics bucket identified by its accounting_id.
Directive Reference
accounting
Syntax: accounting on | off;
Default: accounting off;
Context: http
Enables or disables traffic accounting for the HTTP subsystem. Place this directive in the http block.
accounting_id
Syntax: accounting_id <string>;
Default: accounting_id default;
Context: http, server, location, if in location
Sets the identifier used to group metrics. Requests with the same accounting_id are aggregated together. You can use a static string or an NGINX variable (prefixed with $).
Static string example:
```nginx
location /images {
    accounting_id image_serving;
}
```
Variable-based example using the request method:
```nginx
location /api {
    accounting_id $request_method;
}
```
This produces separate metrics for GET, POST, PUT, and DELETE requests.
accounting_interval
Syntax: accounting_interval <seconds>;
Default: accounting_interval 60;
Context: http
Controls how often aggregated metrics are exported to the log. The default is 60 seconds. Lower values give more granular data but produce more log output.
accounting_log
Syntax: accounting_log <file | stderr | syslog:server=<address>>;
Default: syslog via /dev/log
Context: http
Configures where the aggregated metrics are written. Supported targets:
- File path: `accounting_log /var/log/nginx/accounting.log;`
- Standard error: `accounting_log stderr;`
- Syslog: `accounting_log syslog:server=unix:/dev/log;`
- Remote syslog: `accounting_log syslog:server=192.168.1.10:514;`
This directive follows the same syntax as NGINX’s error_log directive. If accounting_log is not specified, the module writes to /dev/log (the system syslog socket).
accounting_perturb
Syntax: accounting_perturb on | off;
Default: accounting_perturb off;
Context: http
When enabled, each worker process randomly staggers its reporting interval by up to 20%. This prevents all workers from flushing their metrics at the same moment. It reduces I/O spikes on busy servers with many worker processes.
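As a sketch, enabling it is a one-line addition next to the other accounting directives:

```nginx
http {
    accounting on;
    accounting_perturb on;   # each worker staggers its flush by up to 20%
    accounting_log /var/log/nginx/accounting.log;
}
```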
Using Variables for Dynamic Grouping
One of the most powerful features is using NGINX variables as the accounting_id. This lets you segment traffic by any attribute that NGINX can resolve.
By virtual host:
```nginx
http {
    accounting on;
    accounting_log /var/log/nginx/accounting.log;

    server {
        server_name shop.example.com;
        accounting_id $server_name;

        location / {
            proxy_pass http://shop_backend;
        }
    }

    server {
        server_name blog.example.com;
        accounting_id $server_name;

        location / {
            proxy_pass http://blog_backend;
        }
    }
}
```
By client’s HTTP Host header:

```nginx
accounting_id $http_host;
```
By geographic region (with the GeoIP2 module):

```nginx
accounting_id $geoip2_data_country_code;
```
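Derived variables work too. As a sketch (the variable name `$traffic_tier` and the URI patterns are arbitrary choices for illustration, not part of the module), a `map` block in the `http` context can bucket URIs into coarse tiers before they reach `accounting_id`:

```nginx
# Hypothetical grouping: classify request URIs into coarse tiers.
map $uri $traffic_tier {
    default                 other;
    ~^/api/                 api;
    ~^/static/              static;
    ~\.(jpg|png|css|js)$    assets;
}

server {
    listen 80;
    server_name example.com;
    accounting_id $traffic_tier;   # one metrics bucket per tier
}
```

This keeps the number of distinct accounting IDs small and stable even when the URI space is large.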
Understanding the Log Output
The module produces one log line per accounting_id per worker per interval. Here is a real example from a test server:
```
2026/03/28 23:56:39 [notice] 16446#16446: pid:16446|from:1774713389|to:1774713399|accounting_id:static_content|requests:18|bytes_in:1325|bytes_out:14809|latency_ms:0|upstream_latency_ms:0|200:17|404:1
```
The pipe-delimited fields are:
The pipe-delimited fields are:

| Field | Meaning |
|---|---|
| `pid:16446` | NGINX worker process ID |
| `from:1774713389` | Start of the measurement period (Unix timestamp) |
| `to:1774713399` | End of the measurement period (Unix timestamp) |
| `accounting_id:static_content` | The configured grouping identifier |
| `requests:18` | Total requests in this period |
| `bytes_in:1325` | Total bytes received from clients |
| `bytes_out:14809` | Total bytes sent to clients |
| `latency_ms:0` | Sum of request processing times |
| `upstream_latency_ms:0` | Sum of upstream response times |
| `200:17` | 17 requests returned HTTP 200 |
| `404:1` | 1 request returned HTTP 404 |
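Given that format, a single field can be pulled out with standard shell tools. A quick sketch using the sample line above (only the pipe-delimited portion, without the timestamp prefix):

```shell
# Extract the requests count from a sample accounting line.
line='pid:16446|from:1774713389|to:1774713399|accounting_id:static_content|requests:18|bytes_in:1325|bytes_out:14809|latency_ms:0|upstream_latency_ms:0|200:17|404:1'
requests=$(echo "$line" | tr '|' '\n' | awk -F: '/^requests:/ { print $2 }')
echo "$requests"   # prints 18
```

The same pattern extracts any other field by changing the awk regex.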
Aggregating Across Worker Processes
Each worker process maintains its own metrics independently. You will see separate log lines from each worker. To get the total, sum the values for the same accounting_id and time period:
```sh
# Total requests per accounting_id, summed across all workers and periods in the file
grep "accounting_id" /var/log/nginx/accounting.log | \
awk -F'|' '{
    id = ""; req = 0
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^accounting_id:/) { split($i, a, ":"); id = a[2] }
        if ($i ~ /^requests:/)      { split($i, b, ":"); req = b[2] }
    }
    if (id != "") total[id] += req
} END { for (k in total) print k, total[k] }'
```
When to Use This Module vs. Native NGINX Features
NGINX provides several built-in monitoring capabilities, so it helps to know when the traffic accounting module is the better choice.
vs. stub_status
The built-in stub_status module provides server-wide totals. It shows active connections, accepted/handled connections, and total requests. However, it cannot break these down by location, virtual host, or custom grouping. The traffic accounting module fills this gap with per-accounting_id breakdowns that include bandwidth and latency.
vs. Access Log Analysis
NGINX’s native access_log records every individual request. Tools like GoAccess, AWStats, or Grafana/Loki can parse these logs. However, this approach has drawbacks:
- It generates large log files that require significant disk I/O
- It requires external parsing tools that consume CPU and memory
- It introduces latency between the request and available metrics
The traffic accounting module aggregates in memory and exports compact summaries. For real-time dashboards and alerting, this is far more efficient.
vs. Prometheus Exporters
Tools like the NGINX VTS module (Virtual Host Traffic Status) expose a JSON or Prometheus endpoint with similar per-location metrics. If you already run Prometheus, a VTS-style exporter may be a better fit. The NGINX traffic accounting module is lighter and works through standard log infrastructure (files, syslog). This makes it simpler to deploy and integrate with existing log systems.
Sending Metrics to a Central Log Server
For production environments, sending accounting metrics to a central syslog server lets you aggregate data from multiple NGINX instances:
```nginx
http {
    accounting on;
    accounting_log syslog:server=logserver.example.com:514,tag=nginx_accounting;
    accounting_interval 30;
    # ...
}
```
This works with any syslog-compatible receiver:
- rsyslog or syslog-ng on a central log server
- Logstash with a syslog input plugin
- Fluentd with a syslog input
- Grafana Loki with a syslog target in Promtail
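On the receiving side, a short filter keeps the accounting stream in its own file. Here is a sketch for rsyslog, assuming the `nginx_accounting` tag from the configuration above; the drop-in filename and output path are arbitrary choices for illustration:

```
# /etc/rsyslog.d/30-nginx-accounting.conf (hypothetical path)
if $programname == 'nginx_accounting' then {
    action(type="omfile" file="/var/log/central/nginx_accounting.log")
    stop
}
```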
Visualizing with Grafana
The module’s pipe-delimited output format is easy to parse. A Logstash filter for the accounting metrics looks like this:
```
filter {
  if [program] == "nginx_accounting" {
    kv {
      field_split => "|"
      value_split => ":"
    }
  }
}
```
Once ingested into Elasticsearch or a time-series database, you can build Grafana dashboards. These show request rates, bandwidth, error rates, and latency — all broken down by accounting_id.
Testing Your Configuration
After setting up the module, verify it works:
1. Validate the configuration:

   ```sh
   nginx -t
   ```

2. Reload NGINX and send test requests:

   ```sh
   sudo systemctl reload nginx
   curl -s http://localhost/
   ```

3. Wait for the accounting interval (default: 60 seconds), then check the log:

   ```sh
   cat /var/log/nginx/accounting.log
   ```

You should see output like:

```
2026/03/28 12:00:00 [notice] 1234#1234: pid:1234|from:1774670400|to:1774670460|accounting_id:static_content|requests:42|bytes_in:3150|bytes_out:52500|latency_ms:8|upstream_latency_ms:0|200:40|304:2
```

If you use syslog (the default when `accounting_log` is not set), check your system journal:

```sh
journalctl -t NgxAccounting --since "5 minutes ago"
```
Performance Considerations
The NGINX traffic accounting module is designed for minimal overhead:
- Memory usage: Metrics are stored in a red-black tree with one node per unique `accounting_id`. Each node uses roughly 200 bytes, so even with hundreds of distinct IDs, memory stays under 1 MB per worker.
- CPU usage: The module runs in the log phase and performs a tree lookup plus a few integer additions per request, adding negligible overhead.
- I/O usage: Instead of one access log line per request, it writes one line per `accounting_id` per interval. A server handling 100,000 requests/minute with 10 accounting IDs produces 10 lines/minute per worker, not 100,000.
Tuning the Interval
- Shorter intervals (10–30 seconds) give faster feedback but more log output
- Longer intervals (120–300 seconds) reduce volume but delay visibility
- The default of 60 seconds works well for most production systems
Using accounting_perturb
On servers with many worker processes, enable accounting_perturb on; to stagger metric flushes. Without perturbation, all workers write at nearly the same instant. This can cause brief I/O spikes. With perturbation, each worker’s interval varies by up to 20%.
Security Best Practices
The accounting log may reveal internal infrastructure details. Follow these practices to keep it safe:
- Restrict log file permissions: only the NGINX user and administrators should read the log.

  ```sh
  touch /var/log/nginx/accounting.log
  chown nginx:nginx /var/log/nginx/accounting.log
  chmod 640 /var/log/nginx/accounting.log
  ```

- Avoid sensitive data in `accounting_id`: do not use variables containing personally identifiable information (such as `$remote_addr` or `$http_authorization`). Use functional groupings instead.
- Secure syslog transport: when sending metrics to a remote server, use TLS-encrypted syslog (RFC 5425) or a VPN tunnel.
Troubleshooting
No Output in the Accounting Log
- Verify the module is loaded: check that `load_module modules/ngx_http_accounting_module.so;` appears before any `http` block.
- Confirm `accounting on;` is in the `http` block, not inside `server` or `location`.
- Wait for the full `accounting_interval` (default: 60 seconds).
- If using syslog (no `accounting_log`), check `/var/log/messages` or `journalctl -t NgxAccounting`.
Metrics Show Only accounting_id:default
The accounting_id defaults to default if not set. Add accounting_id directives to your server or location blocks.
Separate Lines Per Worker Process
Each worker reports independently. This is expected and allows lock-free operation. Aggregate values for the same accounting_id and time period to get totals.
Log File Not Created
NGINX does not create the log directory. Ensure it exists and is writable:
```sh
sudo mkdir -p /var/log/nginx
sudo chown nginx:nginx /var/log/nginx
```
Conclusion
The NGINX traffic accounting module provides an efficient, lightweight approach to real-time traffic monitoring. It aggregates metrics in memory and exports compact summaries. This avoids the overhead of traditional log analysis while delivering per-location, per-virtual-host visibility.
The module is available from the GetPageSpeed RPM repository and the GetPageSpeed APT repository. Source code is on GitHub.
