NGINX dynamic upstream management lets you add, remove, and modify backend servers at runtime through a REST API — without reloading NGINX. This eliminates the connection disruptions that come with every nginx -s reload in high-traffic environments.
Every nginx -s reload is a gamble. In high-traffic environments, that “graceful” reload isn’t graceful — old worker processes drain while new ones spin up, and long-lived connections such as WebSockets and streaming responses are eventually cut off. The result is a latency spike your users measure in failed API calls and broken WebSocket connections. For organizations running auto-scaling infrastructure, blue-green deployments, or canary releases, this reload tax accumulates with every infrastructure change.
NGINX Plus solves this problem with its Dynamic Configuration API — a REST API that lets you manage upstream servers at runtime. However, NGINX Plus starts at $3,675/year per instance. For teams running dozens of NGINX instances, that cost becomes prohibitive.
The math is brutal: 10 NGINX Plus instances cost $36,750/year. A GetPageSpeed subscription covers all your instances for under $500/year — and includes every nginx-mod enhancement, not just the NGINX dynamic upstream API.
What if you could get the same NGINX dynamic upstream management capability outside of NGINX Plus?
With nginx-mod, you can. The nginx-mod package includes a built-in REST API module that provides NGINX Plus-compatible NGINX dynamic upstream management. No load_module directives, no module versions to match against your NGINX binary, no recompiling when you upgrade — it works out of the box, and dnf upgrade nginx-mod keeps it working.
nginx-mod also ships with dozens of production-ready modules — from Brotli compression to GeoIP2 — all pre-tested against the same NGINX build, so you never resolve ABI mismatches at 2 AM.
What Is NGINX Dynamic Upstream Management?
NGINX dynamic upstream management allows you to modify your upstream server pool at runtime through a REST API. Instead of editing configuration files and triggering a reload, you send HTTP requests to add servers, remove servers, adjust weights, or mark servers as down.
This capability is essential for modern infrastructure patterns:
- Auto-scaling: When your cloud provider adds or removes application instances, the NGINX dynamic upstream API immediately routes traffic to new servers without a reload.
- Blue-green deployments: Gradually shift traffic from the old version to the new version by adjusting server weights through the API.
- Canary releases: Add a new server with low weight, monitor its performance, then increase its share of traffic — all without touching configuration files.
- Maintenance windows: Mark a server as down to drain connections gracefully, perform maintenance, then bring it back online.
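To give a taste of the pattern before the full API reference below, here is a sketch of a single canary step. The address 10.0.0.9:8080 and the upstream name backend are placeholders; the endpoint assumes the API location configured later in this guide.

```shell
# Canary sketch: add the new build with a low weight so it receives
# only a small share of traffic while you watch its metrics
curl -X POST -H "Content-Type: application/json" \
  -d '{"server":"10.0.0.9:8080","weight":1}' \
  http://127.0.0.1/api/1/http/upstreams/backend/servers/
```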
How nginx-mod Bridges the Gap
The nginx-mod package is a drop-in replacement for standard NGINX. It includes several enhancements over the stock NGINX binary, and one of the most significant is the built-in API module for NGINX dynamic upstream management.
Here is what the nginx-mod API supports:
| Operation | HTTP Method | Description |
|---|---|---|
| List upstreams | GET | View all upstream groups and their servers |
| View server | GET | Get configuration and statistics for a specific server |
| Add server | POST | Add a new server to an upstream group |
| Modify server | PATCH | Change weight, max_conns, mark down, etc. |
| Remove server | DELETE | Remove a server from an upstream group |
| Reset statistics | DELETE | Reset per-upstream connection counters |
| View NGINX info | GET | Get version, PID, uptime, and build info |
| View connections | GET | Get accepted, dropped, active, and idle counts |
Because the API module is statically compiled into nginx-mod, there is no load_module directive needed. It is available immediately after installation.
Installation
RHEL, CentOS, AlmaLinux, Rocky Linux
First, add the GetPageSpeed repository:
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
Then enable the nginx-mod repository and install:
sudo dnf config-manager --enable getpagespeed-extras-nginx-mod
sudo dnf install nginx-mod
If you already have standard NGINX installed, swap it:
sudo dnf swap nginx nginx-mod
Start NGINX:
sudo systemctl enable --now nginx
Verify the build includes the API module:
nginx -V 2>&1 | grep -o 'add-module=ngx_http_api_module'
You should see add-module=ngx_http_api_module in the output. No load_module directive is needed because the module is compiled directly into the nginx-mod binary.
Debian and Ubuntu
First, set up the GetPageSpeed APT repository, then install:
sudo apt-get update
sudo apt-get install nginx-mod
On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.
Configuration
Enabling the NGINX Dynamic Upstream API
The NGINX dynamic upstream API requires two configuration elements:
- The api directive in a location block — exposes the REST API endpoint.
- The zone directive in each upstream block — allocates shared memory for dynamic server management.
Here is a minimal working configuration:
upstream backend {
    zone backend_zone 256k;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location /api/ {
        api write=on;
        allow 127.0.0.1;
        deny all;
    }

    location / {
        proxy_pass http://backend;
    }
}
The api Directive
Syntax: api [write=on|off];
Context: location
Default: write is off (read-only)
The api directive enables the REST API at the specified location. By default, the API is read-only — you can view upstream information and connection statistics, but you cannot add, modify, or remove servers.
To enable write operations (POST, PATCH, DELETE), use api write=on;. Always restrict access to the API endpoint using allow and deny directives, since write access lets anyone modify your NGINX dynamic upstream configuration.
# Read-only API (safe for monitoring)
location /api/ {
    api;
    allow 10.0.0.0/8;
    deny all;
}

# Read-write API (for automation scripts)
location /api/ {
    api write=on;
    allow 127.0.0.1;
    deny all;
}
The zone Directive
Syntax: zone name size;
Context: upstream
The zone directive allocates a shared memory zone for the upstream group. This shared memory is what enables dynamic modifications — all worker processes share the same upstream state, and API changes are immediately visible to every worker.
Upstreams without a zone directive are not accessible through the API. The minimum recommended zone size is 256k.
upstream backend {
    zone backend_zone 256k;
    server 10.0.0.1:8080;
}
The state Directive
Syntax: state path;
Context: upstream
The state directive persists NGINX dynamic upstream changes to disk. Without it, any servers you add or modify through the API are lost when NGINX restarts. With state, NGINX atomically writes the current upstream configuration to a file after every change, and restores it on startup.
upstream backend {
    zone backend_zone 256k;
    server 10.0.0.1:8080;
    state /var/lib/nginx/state/backend.conf;
}
When using the state directive together with static server directives, the static servers act as an initial seed. Once NGINX writes to the state file (after the first API change), the state file becomes the sole source of truth. On subsequent restarts, the state file replaces the static server list — matching the NGINX Plus behavior.
You can also omit static server directives entirely and manage the full server list through the API. In that case, seed the state file with your initial server before the first start:
sudo mkdir -p /var/lib/nginx/state
echo "server 10.0.0.1:8080;" | sudo tee /var/lib/nginx/state/backend.conf
sudo chown -R nginx:nginx /var/lib/nginx/state
The state file uses standard NGINX server directive syntax:
server 10.0.0.1:8080;
server 10.0.0.2:8080 weight=5 max_conns=100;
server 10.0.0.3:8080 backup;
SELinux note for RHEL-based systems: NGINX workers need write access to the state directory. Set the correct SELinux context:
sudo semanage fcontext -a -t httpd_var_lib_t "/var/lib/nginx/state(/.*)?"
sudo restorecon -Rv /var/lib/nginx/state
REST API Reference
The API follows a hierarchical URL structure:
/api/{version}/{section}/{subsection}/...
Currently, the API version is 1.
Read Endpoints
Get API versions:
curl http://127.0.0.1/api/
[1]
Get NGINX information:
curl http://127.0.0.1/api/1/nginx
{
  "version": "1.28.2",
  "build": "nginx-mod",
  "address": "myserver.example.com",
  "generation": 1,
  "load_timestamp": "2026-02-27T15:28:45Z",
  "pid": 2908,
  "ppid": 2907
}
Get connection statistics:
curl http://127.0.0.1/api/1/connections
{
  "accepted": 15420,
  "dropped": 0,
  "active": 12,
  "idle": 38
}
List all upstream groups:
curl http://127.0.0.1/api/1/http/upstreams/
This returns a JSON object with every upstream group that has a zone directive, including per-server statistics like request counts, fail counts, and health state.
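If you only need the group names rather than the full statistics payload, a short python3 filter (python3 is assumed to be installed, as in the auto-scaling example later in this guide) keeps the output readable:

```shell
# Print just the upstream group names, one per line
curl -s http://127.0.0.1/api/1/http/upstreams/ \
  | python3 -c "import sys, json; print('\n'.join(json.load(sys.stdin)))"
```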
List servers in an upstream group:
curl http://127.0.0.1/api/1/http/upstreams/backend/servers/
[
  {
    "id": 0,
    "server": "10.0.0.1:8080",
    "weight": 1,
    "max_conns": 0,
    "max_fails": 1,
    "fail_timeout": "10s",
    "slow_start": "0s",
    "backup": false,
    "down": false
  }
]
Write Endpoints
All write operations require api write=on; in the location block.
Add a server to your NGINX dynamic upstream group:
curl -X POST -H "Content-Type: application/json" \
-d '{"server":"10.0.0.2:8080","weight":5}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/
The response includes the assigned server ID and all default parameters:
{
  "id": 1,
  "server": "10.0.0.2:8080",
  "weight": 5,
  "max_conns": 0,
  "max_fails": 1,
  "fail_timeout": "10s",
  "slow_start": "0s",
  "backup": false,
  "down": false
}
Supported parameters for POST:
| Parameter | Type | Default | Description |
|---|---|---|---|
| server | string | required | Server address (IP:port or hostname:port) |
| weight | integer | 1 | Load balancing weight |
| max_conns | integer | 0 | Max concurrent connections (0 = unlimited) |
| max_fails | integer | 1 | Failures before marking unavailable |
| fail_timeout | string | "10s" | Time window for max_fails and unavailability period |
| slow_start | string | "0s" | Gradual traffic ramp-up duration |
| backup | boolean | false | Mark as backup server |
| down | boolean | false | Mark as permanently down |
Modify a server:
curl -X PATCH -H "Content-Type: application/json" \
-d '{"weight":10,"max_conns":50}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/1
You can modify weight, max_conns, max_fails, fail_timeout, slow_start, and down. You cannot change the server address or backup flag — these are immutable after creation.
Mark a server as down (drain connections):
curl -X PATCH -H "Content-Type: application/json" \
-d '{"down":true}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/1
Remove a server from the NGINX dynamic upstream group:
curl -X DELETE http://127.0.0.1/api/1/http/upstreams/backend/servers/1
The response returns the remaining servers in the group. You cannot remove the last primary server — NGINX needs at least one active upstream server.
Reset upstream statistics:
curl -X DELETE http://127.0.0.1/api/1/http/upstreams/backend/
This resets per-server counters (fails, checks, access timestamps) without removing any servers.
Error Handling
The API returns structured JSON errors with descriptive codes:
{
  "error": {
    "status": 404,
    "code": "UpstreamNotFound",
    "text": "upstream not found"
  }
}
Common error codes include:
| Code | HTTP Status | Meaning |
|---|---|---|
| UpstreamNotFound | 404 | Upstream group does not exist or lacks a zone |
| UpstreamServerNotFound | 404 | Server ID does not exist |
| UpstreamNotEnoughPeers | 400 | Cannot remove the last server |
| UpstreamServerImmutable | 400 | Attempted to change server or backup |
| MethodDisabled | 405 | Write operation without write=on |
Practical Use Cases
Auto-Scaling Integration
Integrate the NGINX dynamic upstream API with your cloud provider’s auto-scaling events. When a new instance launches, add it to the upstream group immediately:
# Called by auto-scaling hook when instance 10.0.0.5 comes online
curl -X POST -H "Content-Type: application/json" \
-d '{"server":"10.0.0.5:8080","slow_start":"30s"}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/
The slow_start parameter gradually ramps up traffic to the new server over 30 seconds. This prevents a cold instance from being overwhelmed.
When an instance terminates, remove it:
# Find the server ID first
SERVER_ID=$(curl -s http://127.0.0.1/api/1/http/upstreams/backend/servers/ \
| python3 -c "import sys,json; servers=json.load(sys.stdin); \
print(next(s['id'] for s in servers if s['server']=='10.0.0.5:8080'))")
# Remove it
curl -X DELETE \
"http://127.0.0.1/api/1/http/upstreams/backend/servers/${SERVER_ID}"
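The lookup-then-delete dance can be wrapped into a small helper so your auto-scaling hooks deal in addresses rather than numeric IDs. This is a sketch under the same assumptions as above (write API on localhost, python3 available); remove_by_address is a hypothetical name, not part of the API:

```shell
API=http://127.0.0.1/api/1/http/upstreams/backend/servers/

# Remove a server by its address instead of its numeric ID
remove_by_address() {
    addr=$1
    # Look up the ID for this address in the server list
    id=$(curl -s "$API" | python3 -c "
import sys, json
print(next(s['id'] for s in json.load(sys.stdin) if s['server'] == '$addr'))
")
    curl -s -X DELETE "${API}${id}"
}

# remove_by_address 10.0.0.5:8080
```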
Blue-Green Deployments
Shift traffic from the blue environment to the green environment by adjusting weights in your NGINX dynamic upstream configuration:
# Start: Blue has all traffic (weight 10), Green is down
# Step 1: Bring green up with low weight
curl -X PATCH -d '{"down":false,"weight":1}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/1
# Step 2: Increase green weight gradually
curl -X PATCH -d '{"weight":5}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/1
# Step 3: Equal traffic
curl -X PATCH -d '{"weight":10}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/1
# Step 4: Mark blue as down
curl -X PATCH -d '{"down":true}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/0
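The four steps above can be collapsed into a loop that ramps the green server's weight on a timer. A sketch, assuming server ID 1 is green and that you watch error rates and latency between steps:

```shell
GREEN=http://127.0.0.1/api/1/http/upstreams/backend/servers/1

# Ramp green from a trickle of traffic up to parity, in stages
for w in 1 3 5 10; do
    curl -s -X PATCH -H "Content-Type: application/json" \
        -d "{\"weight\":$w}" "$GREEN" > /dev/null
    sleep 120  # pause long enough to spot regressions before the next step
done
```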
Graceful Maintenance
Before performing maintenance on a backend server, drain its connections and remove it from the NGINX dynamic upstream rotation:
# Mark server as down (stops receiving new connections)
curl -X PATCH -d '{"down":true}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/2
# Wait for active connections to drain
sleep 30
# Perform maintenance...
# Bring it back with slow start
curl -X PATCH -d '{"down":false,"slow_start":"60s"}' \
http://127.0.0.1/api/1/http/upstreams/backend/servers/2
Comparison with NGINX Plus
The nginx-mod API is compatible with the NGINX Plus dynamic upstream API. Most scripts and tools written for NGINX Plus work with nginx-mod with minimal changes. However, there are some differences:
| Feature | NGINX Plus | nginx-mod |
|---|---|---|
| NGINX dynamic upstream management | Yes | Yes |
| Add/remove/modify servers | Yes | Yes |
| State persistence | Yes | Yes |
| State replaces static servers | Yes | Yes |
| Server parameters (weight, max_conns, etc.) | Yes | Yes |
| Backup server support | Yes | Yes |
| slow_start parameter | Yes | Yes |
| NGINX info and connection stats | Yes | Yes |
| API version | 9 | 1 |
| Stream (TCP/UDP) upstream management | Yes | Not yet |
| SSL certificate management | Yes | No |
| Key-value stores | Yes | No |
| Active health checks | Yes | No |
| Price per year (10 instances) | $36,750 | Under $60 |
For the vast majority of use cases — adding and removing upstream servers at runtime without a reload — nginx-mod provides the same functionality as NGINX Plus at a fraction of the cost. The features that nginx-mod does not include (stream API, SSL management, key-value stores) are rarely needed for NGINX dynamic upstream management.
API Compatibility
The URL structure is compatible. The main difference is the API version number (1 vs 9). If you have existing NGINX Plus scripts, update the version number in your API calls:
# NGINX Plus
curl http://127.0.0.1/api/9/http/upstreams/backend/servers/
# nginx-mod
curl http://127.0.0.1/api/1/http/upstreams/backend/servers/
The JSON request and response formats are identical.
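Scripts that must run against both products can detect the version at startup instead of hard-coding it, since /api/ returns the list of supported versions. A sketch, assuming python3 is available:

```shell
# Pick the newest API version the server advertises
VER=$(curl -s http://127.0.0.1/api/ \
    | python3 -c "import sys, json; print(json.load(sys.stdin)[-1])")

curl -s "http://127.0.0.1/api/${VER}/http/upstreams/backend/servers/"
```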
Security Best Practices
The API endpoint gives direct control over your NGINX dynamic upstream routing. Therefore, follow these practices to secure it.
Restrict Access by IP
Always use allow and deny directives to limit who can reach the API:
location /api/ {
    api write=on;
    allow 127.0.0.1;
    allow 10.0.0.0/8;
    deny all;
}
Separate Read and Write Access
Create two locations — one read-only for monitoring tools, one read-write for automation:
# Monitoring (read-only, accessible from monitoring network)
location /api/ {
    api;
    allow 10.0.0.0/8;
    deny all;
}

# Automation (read-write, localhost only)
location /api-admin/ {
    api write=on;
    allow 127.0.0.1;
    deny all;
}
Use HTTPS for Remote Access
If the API is accessible over a network (not just localhost), serve it over TLS:
server {
    listen 8443 ssl;

    ssl_certificate     /etc/nginx/ssl/api.crt;
    ssl_certificate_key /etc/nginx/ssl/api.key;

    location /api/ {
        api write=on;
        allow 10.0.0.0/8;
        deny all;
    }
}
State File Permissions
Ensure the state directory and files are owned by the NGINX user and not world-readable:
sudo chown -R nginx:nginx /var/lib/nginx/state
sudo chmod 750 /var/lib/nginx/state
sudo chmod 640 /var/lib/nginx/state/*.conf
Troubleshooting
“Unknown directive ‘api’”
You are running standard NGINX instead of nginx-mod. Install nginx-mod:
sudo dnf config-manager --enable getpagespeed-extras-nginx-mod
sudo dnf swap nginx nginx-mod
“upstream not found” (404) for an Existing Upstream
The upstream block is missing the zone directive. The NGINX dynamic upstream API only manages upstreams that use shared memory zones. Add zone name size; to your upstream block:
upstream backend {
    zone backend_zone 256k;  # Required for API access
    server 10.0.0.1:8080;
}
State File Not Being Created
Check three things in order:
- Directory exists: ls -la /var/lib/nginx/state/
- Ownership is correct: the directory must be owned by the nginx user (or whatever user the worker processes run as).
- SELinux context: on RHEL-based systems, set the httpd_var_lib_t context:
sudo chcon -R -t httpd_var_lib_t /var/lib/nginx/state
Check the NGINX error log for “Permission denied” messages related to the state file.
“MethodDisabled” Error on POST/PATCH/DELETE
The API is running in read-only mode. Change api; to api write=on; in your location block.
“cannot remove the last server” Error
You cannot remove the last primary (non-backup) server from an upstream group. NGINX requires at least one server to proxy traffic to. If you need to drain all servers, mark them as down instead of removing them.
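Draining everything can itself be scripted. Here is a sketch that marks every server in the group as down without deleting any of them, under the same localhost and python3 assumptions as the earlier examples:

```shell
API=http://127.0.0.1/api/1/http/upstreams/backend/servers/

# Mark every server down; none are removed, so the group stays valid
for id in $(curl -s "$API" \
        | python3 -c "import sys, json; [print(s['id']) for s in json.load(sys.stdin)]"); do
    curl -s -X PATCH -H "Content-Type: application/json" \
        -d '{"down":true}' "${API}${id}" > /dev/null
done
```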
Complete Production Configuration Example
Here is a production-ready NGINX dynamic upstream configuration that demonstrates all the features discussed above:
# Dynamic upstreams with state persistence
upstream app_backend {
    zone app_backend_zone 256k;
    state /var/lib/nginx/state/app_backend.conf;
}

upstream api_backend {
    zone api_backend_zone 256k;
    state /var/lib/nginx/state/api_backend.conf;
}

# API endpoint (restricted to localhost)
server {
    listen 127.0.0.1:8080;

    location /api/ {
        api write=on;
    }
}

# Main application server
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /api/v1/ {
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
After starting NGINX, seed your upstreams through the API:
# Add application servers
curl -X POST -d '{"server":"10.0.0.1:3000"}' \
http://127.0.0.1:8080/api/1/http/upstreams/app_backend/servers/
curl -X POST -d '{"server":"10.0.0.2:3000"}' \
http://127.0.0.1:8080/api/1/http/upstreams/app_backend/servers/
# Add API servers with higher max_conns
curl -X POST -d '{"server":"10.0.0.3:5000","max_conns":100}' \
http://127.0.0.1:8080/api/1/http/upstreams/api_backend/servers/
These servers persist across restarts thanks to the state directive.
Conclusion
Every deployment that still leans on nginx -s reload pays a latency tax it does not need to. NGINX dynamic upstream management eliminates configuration reloads when your backend infrastructure changes, and the nginx-mod package makes this powerful capability available to everyone.
Whether you are managing auto-scaling groups, running blue-green deployments, or simply need zero-downtime maintenance windows, the built-in API in nginx-mod gives you the tools to manage your upstreams programmatically — with the same API that NGINX Plus uses, at a fraction of the cost.
Install nginx-mod from the GetPageSpeed repository — your subscription covers every module, every NGINX version, and every server you run. The nginx-mod page and Debian/Ubuntu APT repository provide platform-specific installation instructions.

