
NGINX Tuning Module: Data-Driven Buffer Optimization



We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you.
Active subscription is required.

The NGINX tuning module eliminates guesswork from reverse proxy configuration. Every NGINX configuration guide tells you to tune proxy_buffer_size, proxy_buffers, and proxy_read_timeout—but what values should you use for your workload?

The traditional approach involves running curl commands against your backend servers:

curl -s -w '%{size_header}' -o /dev/null https://backend.example.com

This technique gives you a single data point. Your real traffic consists of thousands of requests per minute, each with different header sizes, body sizes, and response times. A single curl command tells you nothing about the 95th percentile header size that causes "upstream sent too big header" errors in production.
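Even brute-force sampling only approximates the distribution, and only for traffic you generate yourself. A rough sketch, assuming GNU coreutils and a reachable backend:

# Sample 1000 responses and approximate the 95th-percentile header size
for _ in $(seq 1 1000); do
  curl -s -o /dev/null -w '%{size_header}\n' https://backend.example.com
done | sort -n | awk '{ a[NR] = $1 } END { print "p95 header bytes:", a[int(NR * 0.95)] }'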

The NGINX tuning module solves this problem by collecting runtime metrics from actual traffic and providing data-driven configuration recommendations.

How the NGINX Tuning Module Works

The NGINX tuning module operates as a LOG_PHASE handler, meaning it executes after each proxied request completes. For every request that passes through proxy_pass, the module collects:

  • Upstream response header sizes — for tuning proxy_buffer_size
  • Upstream response body sizes — for tuning proxy_buffers
  • Upstream response times — for tuning proxy_read_timeout
  • Client request body sizes — for tuning client_body_buffer_size
  • Connection reuse ratios — for optimizing keepalive settings
  • Cache hit/miss rates — for evaluating proxy cache effectiveness

All metrics are stored in shared memory using lock-free atomic counters. This design ensures minimal performance impact—approximately 10 atomic increments per request—while allowing all NGINX worker processes to contribute to the same counters.

The module builds histograms for percentile approximation rather than storing every individual value. When you query the status endpoint, you receive p95 and p99 estimates along with actionable recommendations.

Installation on RHEL, CentOS, AlmaLinux, and Rocky Linux

Install the GetPageSpeed repository and the NGINX tuning module package:

sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-tuning

After installation, load the module by adding this line to your nginx.conf before the events block:

load_module modules/ngx_http_tuning_module.so;

Verify the installation:

nginx -t

For detailed package information, visit the module page.

Installation on Debian and Ubuntu

First, set up the GetPageSpeed APT repository, then install:

sudo apt-get update
sudo apt-get install nginx-module-tuning

On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.

For detailed package information, visit the APT module page.

Basic Configuration

Here is a minimal configuration that enables the NGINX tuning module for all proxied requests:

load_module modules/ngx_http_tuning_module.so;

http {
    # Enable metrics collection for all proxied requests
    tuning_advisor on;

    server {
        listen 80;

        # Expose the tuning advisor status endpoint
        location = /tuning-advisor {
            tuning_advisor_status;
            allow 127.0.0.1;
            allow 10.0.0.0/8;
            deny all;
        }

        location / {
            proxy_pass http://backend;
        }
    }
}
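Test the configuration and reload to activate it (assuming a systemd-managed NGINX service):

sudo nginx -t && sudo systemctl reload nginx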

After reloading NGINX, query the endpoint to view collected metrics:

curl http://localhost/tuning-advisor | jq .

Configuration Directives Reference

tuning_advisor

Syntax:  tuning_advisor on | off;
Default: off
Context: http, server, location

Enables or disables metrics collection for proxied requests in the specified context. When enabled at the http level, all proxied requests are tracked. You can override this setting at the server or location level to exclude specific paths from metrics collection.

tuning_advisor_shm_size

Syntax:  tuning_advisor_shm_size size;
Default: 1m
Context: http

Sets the size of the shared memory zone used for cross-worker metric aggregation. The default 1MB is sufficient for the fixed-size counters and histograms used by the module. Increase this value only if you observe shared memory allocation errors in the NGINX error log.
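For example, to double the zone from its default:

# Increase only if the error log reports shared zone allocation failures
tuning_advisor_shm_size 2m;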

tuning_advisor_status

Syntax:  tuning_advisor_status;
Context: location

Enables the status handler for the specified location. This endpoint responds to:

  • GET — Returns JSON with metrics and recommendations
  • GET ?prometheus — Returns Prometheus exposition format
  • GET ?reset — Resets all counters to zero
  • POST — Resets all counters to zero

JSON API Response Structure

When you query the status endpoint, you receive a comprehensive JSON response:

{
  "sample_size": 847293,
  "uptime_seconds": 86400,
  "requests_per_second": 9.81,

  "proxy_buffer_size": {
    "observed": {
      "avg": "1.8k",
      "max": "23.4k",
      "p95_approx": "4.0k"
    },
    "recommendation": "OK",
    "suggested_value": "4k",
    "reason": "95% of headers fit in 4k"
  },

  "proxy_buffers": {
    "observed": {
      "avg": "12.3k",
      "max": "2.1m",
      "p95_approx": "32.0k"
    },
    "recommendation": "OK",
    "suggested_value": "8 4k",
    "reason": "Default 32k (8x4k) sufficient for 95% of responses"
  },

  "proxy_read_timeout": {
    "observed": {
      "avg_ms": 127,
      "max_ms": 4832,
      "p99_approx_ms": 500
    },
    "recommendation": "CONSIDER_REDUCING",
    "suggested_value": "10s",
    "reason": "p99 under 5s, 10s timeout provides headroom"
  },

  "nginx_config": {
    "snippet": "proxy_buffer_size 4k;\nproxy_buffers 8 4k;\nproxy_read_timeout 10s;\nclient_body_buffer_size 8k;",
    "apply_to": "http, server, or location block",
    "upstream_note": "For upstream keepalive, add 'keepalive 32;' to upstream block"
  }
}

The response includes ready-to-use NGINX configuration snippets that you can copy directly into your configuration file.
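For example, you can pull the snippet straight out of the response with jq (the field name comes from the JSON shown above):

curl -s http://localhost/tuning-advisor | jq -r '.nginx_config.snippet'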

Prometheus Metrics Integration

For integration with Prometheus, Grafana, and Alertmanager, request the Prometheus exposition format:

curl "http://localhost/tuning-advisor?prometheus"

Sample output:

# HELP nginx_tuning_requests_total Total proxied requests observed
# TYPE nginx_tuning_requests_total counter
nginx_tuning_requests_total 847293

# HELP nginx_tuning_header_size_bucket Header size distribution
# TYPE nginx_tuning_header_size_bucket histogram
nginx_tuning_header_size_bucket{le="1024"} 423841
nginx_tuning_header_size_bucket{le="2048"} 712453
nginx_tuning_header_size_bucket{le="4096"} 831029
nginx_tuning_header_size_bucket{le="+Inf"} 847293

Configure your Prometheus scrape job:

scrape_configs:
  - job_name: nginx-tuning
    metrics_path: /tuning-advisor
    params:
      prometheus: ['1']
    static_configs:
      - targets: ['nginx:80']

The module exports metrics for header sizes, body sizes, response times, connection reuse, and cache status—all in standard Prometheus histogram and counter formats.
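As a sketch, the exported histogram can back a Prometheus alerting rule; the alert name, threshold, and window below are illustrative, not part of the module:

groups:
  - name: nginx-tuning
    rules:
      - alert: UpstreamHeadersExceedBuffer
        # p95 header size over the last 5 minutes exceeds the 4k default buffer
        expr: histogram_quantile(0.95, rate(nginx_tuning_header_size_bucket[5m])) > 4096
        for: 15m
        annotations:
          summary: "p95 upstream header size above 4k; consider raising proxy_buffer_size"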

Understanding the Recommendations

The NGINX tuning module analyzes percentiles and provides actionable recommendations based on established best practices:

proxy_buffer_size Recommendations

Recommendation   Condition                Action
OK               p95 header size ≤ 4KB    Current 4k default is sufficient
INCREASE         p95 > 4KB                Increase to suggested value (8k, 16k, or 32k)
WARNING          p95 > 16KB               Investigate upstream for unusually large headers

proxy_buffers Recommendations

Recommendation   Condition               Action
OK               p95 body size ≤ 32KB    Default 8×4k buffers are sufficient
INCREASE         p95 > 32KB              Increase buffer count or size

proxy_read_timeout Recommendations

Recommendation      Condition                 Action
CONSIDER_REDUCING   p99 response time < 5s    You may safely reduce the timeout
OK                  p99 between 5s and 30s    Default 60s is appropriate
WARNING             p99 > 30s                 Backend is too slow; investigate performance

Connection Reuse Recommendations

Recommendation   Condition                             Action
EXCELLENT        Client reuse ≥ 80%, upstream ≥ 70%    Keepalive settings are optimal
OK               Moderate reuse rates                  No action needed
WARNING          Low client reuse                      Increase keepalive_timeout
WARNING          Low upstream reuse                    Configure keepalive in the upstream block
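When upstream reuse is low, the fix referenced in the table is an upstream keepalive pool, as also noted in the JSON upstream_note field. A minimal sketch (the upstream name and server address are illustrative):

upstream backend {
    server 10.0.0.5:8080;
    keepalive 32;                        # idle connections kept open per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # drop the client's "close" header
    }
}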

Performance Impact

The NGINX tuning module adds minimal overhead to request processing:

  • Lock-free atomics — No mutex contention between worker processes
  • Shared memory — Single allocation at startup, no per-request allocations
  • Histogram-based percentiles — Fixed 8-bucket histograms instead of storing individual values
  • LOG_PHASE execution — Runs after the response is sent to the client

In benchmarks, the overhead is approximately 10 atomic increment operations per proxied request—negligible compared to the cost of the proxy operation itself.

Security Considerations

The status endpoint reveals information about your traffic patterns, including request rates, response sizes, and backend performance. Always restrict access:

location = /tuning-advisor {
    tuning_advisor_status;

    # Allow only from localhost and internal networks
    allow 127.0.0.1;
    allow ::1;
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;

    # Or require authentication
    # auth_basic "Tuning Advisor";
    # auth_basic_user_file /etc/nginx/.htpasswd;
}
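If you opt for basic authentication instead, create the password file with htpasswd (provided by httpd-tools on RHEL-based systems or apache2-utils on Debian/Ubuntu):

sudo htpasswd -c /etc/nginx/.htpasswd admin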

Resetting Metrics

To start a fresh observation window (for example, after applying configuration changes), reset the counters:

# Using POST request
curl -X POST http://localhost/tuning-advisor

# Or using query parameter
curl "http://localhost/tuning-advisor?reset"

Both methods return a confirmation:

{"status":"reset","message":"All metrics cleared"}

Selective Metrics Collection

You can enable or disable metrics collection on a per-location basis:

http {
    server {
        listen 80;

        # Enable for most locations
        tuning_advisor on;

        location = /tuning-advisor {
            tuning_advisor_status;
            allow 127.0.0.1;
            deny all;
        }

        location /api/ {
            proxy_pass http://api-backend;
            # Metrics collection enabled (inherited)
        }

        location /health {
            # Disable for health checks to avoid skewing metrics
            tuning_advisor off;
            proxy_pass http://backend;
        }
    }
}

Workflow for Using the NGINX Tuning Module

Follow this workflow to optimize your NGINX reverse proxy configuration:

  1. Enable the module with tuning_advisor on; in your http or server block
  2. Wait for representative traffic — at least 10,000 requests for statistically meaningful percentiles
  3. Query the status endpoint to review recommendations
  4. Apply the suggested configuration from the nginx_config.snippet field
  5. Reset metrics and observe for regressions (a sketch of steps 3-5 follows this list)
  6. Iterate as traffic patterns change
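A minimal sketch of steps 3 through 5 (the report path is illustrative):

# 3: review recommendations
curl -s http://localhost/tuning-advisor | jq . > /tmp/tuning-report.json
# 4: inspect the suggested config before applying it
jq -r '.nginx_config.snippet' /tmp/tuning-report.json
# 5: reset counters for a fresh observation window
curl -s -X POST http://localhost/tuning-advisor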

Histogram Buckets

The module uses exponential bucket boundaries optimized for typical web traffic:

Size buckets: <1k, 1-2k, 2-4k, 4-8k, 8-16k, 16-32k, 32-64k, >64k

Time buckets: <10ms, 10-50ms, 50-100ms, 100-500ms, 500ms-1s, 1-5s, 5-10s, >10s

Raw bucket counts are available in the histograms section of the JSON response for custom analysis or alerting thresholds.
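If you need a percentile the module does not report directly, a rough jq sketch over those counts follows; the histograms field name and its array layout are assumptions here, so adjust them to the actual response. The output is the 0-based index of the bucket containing the 95th percentile:

curl -s http://localhost/tuning-advisor | jq '
  .histograms.header_size as $b
  | (($b | add) * 0.95) as $target
  | reduce range(0; $b | length) as $i (
      {cum: 0, idx: null};
      .cum += $b[$i]
      | if .idx == null and .cum >= $target then .idx = $i else . end
    )
  | .idx'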

Conclusion

The NGINX tuning module eliminates guesswork from proxy configuration by providing data-driven recommendations based on your actual traffic. Instead of copying buffer sizes from generic blog posts, you can make informed decisions backed by p95 and p99 percentile analysis of real requests.

The module is available from the GetPageSpeed NGINX Extras repository for RHEL-based distributions and from the APT repository for Debian and Ubuntu.

Install the NGINX tuning module, let it observe your traffic, and apply the generated configuration snippet. Your NGINX reverse proxy will be optimized for your specific workload—not someone else’s assumptions.


Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization · Linux system administration · Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
