You deploy NGINX as a reverse proxy. Your upstream servers live behind domain names — maybe they are cloud instances with elastic IPs, containers behind a service mesh, or CDN origins managed by a third party. Everything works until the IP address behind that domain name changes. NGINX keeps routing traffic to the old IP. Requests fail. You SSH in and run nginx -s reload at 3 AM. The fix for this problem is NGINX upstream resolve — a parameter that tells NGINX to re-resolve DNS in the background, keeping your upstream peer list current without manual intervention.
This is not a bug. It is how NGINX has always worked: upstream server addresses are resolved by the operating system’s getaddrinfo() during configuration loading, and the resulting IPs are baked into the worker process memory for the lifetime of that configuration. If a domain name fails to resolve at startup, NGINX refuses to start entirely with a blunt host not found in upstream error. For static infrastructure this is fine. For anything dynamic — cloud auto-scaling, container orchestration, DNS-based failover, or blue-green deployments — it is a serious limitation.
The resolve parameter for the server directive inside upstream blocks solves this problem. It tells NGINX to re-resolve domain names in the background according to their DNS TTL, updating the upstream peer list without any reload. Moreover, it allows NGINX to start even when a domain name is temporarily unresolvable, marking it as down until DNS succeeds. The NGINX upstream resolve feature has become essential for anyone running dynamic infrastructure behind a reverse proxy.
NGINX Open Source Is Getting Plus Features
For over a decade, the resolve parameter was exclusive to the commercial NGINX Plus product. The code existed in the NGINX source tree since 2014, but it was deliberately held back from the open-source build. Third-party modules like nginx-upstream-dynamic-servers emerged to fill this gap.
That changed in November 2024 with NGINX 1.27.3 (mainline), and subsequently NGINX 1.28.0 (stable, April 2025). The resolve parameter and per-upstream resolver directive are now available in open-source NGINX — no commercial license required. This is part of a broader trend: features that once justified an NGINX Plus subscription are steadily migrating to the open-source build.
With resolve now free, the remaining Plus-exclusive upstream features are the runtime REST API, the state directive for persistence, and active health checks. As we will see below, NGINX-MOD already covers all of these — effectively bringing open-source NGINX to Plus-level capability.
How It Works
When you add resolve to a server line inside an upstream block, NGINX switches from one-time startup resolution to a background timer-based resolution cycle:
- At startup, the domain is resolved using the configured `resolver` (not `getaddrinfo()`). If resolution fails, the server is marked as down — but NGINX still starts.
- A timer fires in each worker process based on the DNS TTL (or the `valid` parameter of the `resolver` directive).
- When the timer fires, NGINX re-resolves the domain asynchronously. If the returned IP addresses differ from the current set, the upstream peer list is updated in shared memory.
- All workers see the updated addresses immediately because the peer list lives in a shared memory `zone`.
This means DNS changes propagate to NGINX automatically, without any reload, and without any request-path latency penalty.
Configuration
Prerequisites
The native NGINX upstream resolve parameter requires two things:
- NGINX 1.28.0 or later (stable) or 1.27.3+ (mainline)
- A `zone` directive in the upstream block — peers must be stored in shared memory so all workers see updates
Without a `zone`, NGINX will reject the configuration with a clear error:

```
resolving names at run time requires upstream "backend" to be in shared memory
```
Basic Example
```nginx
http {
    upstream backend {
        zone backend_zone 64k;
        resolver 8.8.8.8 valid=30s;
        resolver_timeout 5s;

        server backend.example.com resolve;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
The resolver directive can be placed either at the http level or directly inside the upstream block. Placing it inside the upstream block (as shown above) is cleaner because different upstreams can use different DNS servers.
The valid parameter overrides the DNS TTL for re-resolution timing. Set it to 30s or 60s for upstreams whose IPs change frequently, or leave it unset to honor the TTL from DNS responses.
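The two modes look like this side by side — a minimal sketch of the `resolver` directive (the DNS server address and interval are illustrative):

```nginx
# Override the DNS TTL: re-resolve every 30 seconds, whatever the record says
resolver 8.8.8.8 valid=30s;

# No valid parameter: honor the TTL returned in each DNS response
resolver 8.8.8.8;
```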
Multiple Servers with Weights and Failover
The resolve parameter works with all standard upstream parameters — weight, max_fails, fail_timeout, and backup:
```nginx
upstream api_pool {
    zone api_pool_zone 64k;
    resolver 8.8.8.8 valid=60s;

    server primary-api.example.com resolve weight=5 max_fails=3 fail_timeout=30s;
    server fallback-api.example.com resolve backup;
}
```
In this configuration, `primary-api.example.com` handles all regular traffic; the `weight=5` only becomes meaningful relative to other non-backup servers, so it takes effect once more primaries are added to the pool. If three requests to the primary fail within a 30-second window, NGINX marks it unavailable for 30 seconds and shifts traffic to the backup server. Both domain names are re-resolved every 60 seconds regardless of traffic volume.
Production-Ready Configuration
Here is a complete configuration suitable for production use with NGINX upstream resolve and a dynamically resolved backend:
```nginx
upstream resilient_backend {
    zone resilient_zone 64k;
    resolver 8.8.8.8 valid=30s;
    resolver_timeout 5s;

    server backend.example.com resolve max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://resilient_backend;
        proxy_set_header Host $host;
        proxy_connect_timeout 5s;
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```
For guidance on tuning proxy buffer sizes alongside this configuration, see our guide on tuning proxy_buffer_size in NGINX.
Installation
If you are running NGINX 1.28 or later, the NGINX upstream resolve parameter is built in — no installation is needed. Verify your version:
```shell
nginx -v
```
If you see nginx/1.28.0 or higher, you already have native support.
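If you want to automate that gate in a provisioning script, a hypothetical helper (the function name and logic are ours) can compare the detected version against 1.27.3 using `sort -V`:

```shell
# Hypothetical helper: succeed (exit 0) when the given NGINX version
# is 1.27.3 or newer, i.e. has native `resolve` support.
supports_native_resolve() {
    lowest=$(printf '%s\n%s\n' "$1" "1.27.3" | sort -V | head -n1)
    [ "$lowest" = "1.27.3" ]
}

supports_native_resolve "1.28.0" && echo "native resolve available"
supports_native_resolve "1.26.2" || echo "need the upstream-dynamic module"
```

In practice you would feed it the version parsed from the `nginx -v` output, e.g. `nginx -v 2>&1 | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+'`.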
For Older NGINX Versions (Pre-1.27.3)
If you are running an older NGINX version and cannot upgrade, the upstream-dynamic module provides the same resolve parameter as a third-party dynamic module.
RHEL, CentOS, AlmaLinux, Rocky Linux
```shell
sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-upstream-dynamic
```
Then load the module by adding the following at the top of /etc/nginx/nginx.conf, before the http {} block:
```nginx
load_module modules/ngx_http_upstream_dynamic_servers_module.so;
```
Note: This module is available for older NGINX builds. On current NGINX 1.28+ packages from the GetPageSpeed repository, the native feature supersedes this module and the package is not needed.
The upstream-dynamic module does not require a `zone` directive, unlike the native implementation. However, it requires the `resolver` directive to be defined at the `http` level rather than inside the `upstream` block.
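Put together, a configuration for the third-party module might look like this — a sketch with illustrative addresses; note the `resolver` at the `http` level and the absence of `zone`:

```nginx
load_module modules/ngx_http_upstream_dynamic_servers_module.so;

http {
    # The module reads the resolver configured at the http level
    resolver 8.8.8.8 valid=30s;

    upstream backend {
        # No zone directive needed by the third-party module
        server backend.example.com resolve;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```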
Testing Your Configuration
Verify Syntax
```shell
nginx -t
```
If you see unknown directive "resolve", your NGINX version is too old for native support and you need the upstream-dynamic module.
Verify Runtime Behavior
Reload NGINX and confirm the upstream resolves and proxies successfully:
```shell
systemctl reload nginx
curl -s -o /dev/null -w "%{http_code}" http://localhost/
```
A 200 response confirms the upstream domain resolved and the proxy is working.
Verify DNS Re-Resolution
To confirm that NGINX re-resolves domain names without a reload, enable debug logging temporarily:
```nginx
# In nginx.conf, set error_log level to debug
error_log /var/log/nginx/error.log debug;
```
After reloading, watch the error log for resolution messages:
```shell
grep "resolve" /var/log/nginx/error.log | tail -5
```
You should see periodic entries showing the resolver activity for your upstream domains.
The Variable Trick: A Simpler Alternative
If you do not need upstream features like load balancing, health checks, or weighted distribution, there is a simpler approach that works on any NGINX version. When proxy_pass contains a variable, NGINX resolves the domain name per-request using the location-level resolver:
```nginx
server {
    listen 80;

    resolver 8.8.8.8 valid=30s;
    set $backend_host "backend.example.com";

    location / {
        proxy_pass http://$backend_host;
        proxy_set_header Host backend.example.com;
    }
}
```
This works because NGINX detects the variable in proxy_pass and switches to per-request DNS resolution instead of binding addresses once at configuration load. For more on programmatic NGINX control, see our guide on the NGINX Lua module.
Limitations of the Variable Trick
However, the variable trick comes with significant trade-offs:
- No load balancing — you cannot distribute traffic across multiple servers
- No `max_fails` or `fail_timeout` — no passive health checking
- No `backup` servers — no automatic failover
- No `keepalive` connections — connection pooling to the upstream is lost
- Per-request DNS overhead — each request triggers a resolver lookup (though NGINX caches results according to the `valid` parameter)
For a single upstream server with simple proxying needs, the variable trick is sufficient. For anything more sophisticated, use the NGINX upstream resolve parameter with a proper upstream block.
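For contrast, here is a sketch of how an upstream block keeps connection pooling alongside dynamic resolution (the `keepalive 16` pool size is an arbitrary example):

```nginx
upstream backend {
    zone backend_zone 64k;
    resolver 8.8.8.8 valid=30s;

    server backend.example.com resolve;
    keepalive 16;  # pooled upstream connections, lost with the variable trick
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # and a cleared Connection header
    }
}
```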
Comparison with Angie
Angie is a popular fork of NGINX maintained by former NGINX core developers. Angie’s open-source edition has included the resolve parameter since its first release — the developers made a deliberate choice to ship features that NGINX had withheld for Plus.
```nginx
# Works in Angie open-source edition
upstream backend {
    zone backend_zone 64k;
    server backend.example.com resolve;
}
```
Angie additionally supports the service parameter for SRV record resolution, which enables DNS-based service discovery:
```nginx
upstream backend {
    zone backend_zone 64k;
    server backend.example.com service=http resolve;
}
```
Note: Native NGINX 1.28+ also supports the service parameter for SRV records, matching Angie’s capability.
Features that remain exclusive to Angie PRO (commercial) include the state directive for persisting dynamically added servers, the runtime configuration API with write access, and active health checks. For dynamic upstream management via external service registries like Consul or etcd, see our guide on NGINX Stream Upsync.
Comparison with NGINX Plus — and Closing the Gap
NGINX Plus has offered the resolve parameter for over a decade. Now that NGINX upstream resolve has landed in open-source NGINX 1.28+, the feature gap between Plus and open source has narrowed significantly. Here is what remains Plus-exclusive — and how to get each capability without a Plus subscription:
| Feature | NGINX OSS 1.28+ | NGINX Plus | NGINX-MOD |
|---|---|---|---|
| `resolve` parameter | Yes | Yes | Yes |
| Per-upstream `resolver` | Yes | Yes | Yes |
| `zone` shared memory | Yes | Yes | Yes |
| SRV record discovery (`service=`) | Yes | Yes | Yes |
| Upstream REST API (runtime add/remove) | No | Yes | Yes |
| `state` directive (persist across reloads) | No | Yes | Yes |
| Active health checks | No | Yes | Yes |
| Slow start (`slow_start`) | No | Yes | Yes |
With the resolve parameter now native in NGINX 1.28+, the only remaining Plus-exclusive upstream features are the runtime API, state persistence, and active health checks. NGINX-MOD — GetPageSpeed’s enhanced NGINX build — includes all three:
- **Dynamic upstream API**: A Plus-compatible REST API that lets you add, remove, and modify upstream servers at runtime — no reload required. Includes the `state` directive for persisting changes across restarts and the `slow_start` parameter for gradual traffic ramp-up.
- **Active health checks**: Proactive server monitoring with six check types (HTTP, TCP, SSL, MySQL, AJP, FastCGI) — more than NGINX Plus’s three. Detect failures before users are affected.
The result: NGINX-MOD with native resolve provides complete Plus-level upstream management at a fraction of the cost. NGINX Plus starts at $3,675/year per instance. A GetPageSpeed Pro subscription — which includes NGINX-MOD — costs $200/year per server, and Fleet pricing drops to $5/server/month for teams managing multiple servers.
Performance Considerations
Shared Memory Sizing
The zone directive allocates shared memory for the upstream peer list. For a typical upstream with one to ten servers, 64k is more than adequate. Each peer entry consumes roughly 256 bytes. A 64 KB zone can hold several hundred peers, which covers even upstreams where a single domain resolves to dozens of IPs (common with cloud load balancers).
If you have many upstreams, each one needs its own named zone. Monitor for zone "..." is too small errors in the error log if you suspect the zone is undersized.
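As a back-of-the-envelope check of the sizing above (assuming the rough 256-bytes-per-peer figure):

```shell
# Approximate peer capacity of a 64 KB zone at ~256 bytes per peer;
# real capacity is somewhat lower due to zone bookkeeping overhead
echo $((64 * 1024 / 256))
```

The result, 256 peers, matches the "several hundred" ballpark before accounting for overhead.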
Resolver Overhead
DNS resolution happens asynchronously in the background on a timer — it does not add latency to individual requests. The resolver uses UDP by default, falling back to TCP for responses larger than 512 bytes. For high-volume environments, consider pointing the resolver at a local caching DNS server (like systemd-resolved on 127.0.0.53 or dnsmasq) rather than an external DNS provider to minimize resolution latency.
Connection Draining
When DNS changes and the upstream peer list updates, existing connections to old IPs are not immediately terminated. They drain naturally as requests complete. New requests go to the updated IPs. This is smoother than nginx -s reload, which can briefly disrupt long-lived connections (WebSockets, streaming responses).
Security Best Practices
Use a Trusted Resolver
The resolver directive controls which DNS server NGINX trusts for upstream IP resolution. A compromised or spoofed DNS response could redirect your traffic to a malicious server. Best practices:
- Prefer local resolvers (`127.0.0.53`, `127.0.0.1`) over public ones (`8.8.8.8`) in production
- Use DNS over TCP or DNS-over-TLS if your resolver supports it, to prevent UDP spoofing
- Never use a resolver controlled by an untrusted party
```nginx
upstream backend {
    zone backend_zone 64k;

    # Use the local systemd-resolved stub
    resolver 127.0.0.53 valid=30s;

    server backend.internal resolve;
}
```
For hostname-based access control that complements dynamic upstreams, see our NGINX Reverse DNS module guide.
Restrict the valid Parameter
A very short valid duration (e.g., 1s) increases DNS query volume and widens the window for cache-poisoning attacks. A very long duration delays legitimate failover. A valid value between 10s and 60s balances freshness and security for most deployments.
Combine with Passive Health Checks
Use max_fails and fail_timeout alongside resolve to detect backend failures even between DNS updates:
```nginx
server backend.example.com resolve max_fails=3 fail_timeout=30s;
```
For proactive failure detection that does not rely on real user traffic, consider active health checks available in NGINX-MOD.
Troubleshooting
“host not found in upstream”
This error at startup means the domain name could not be resolved and you are not using the resolve parameter. Add resolve to the server line, ensure a resolver is configured, and add a zone:
```nginx
upstream backend {
    zone backend_zone 64k;
    resolver 8.8.8.8;

    # resolve lets NGINX start even if DNS fails
    server backend.example.com resolve;
}
```
“resolving names at run time requires upstream to be in shared memory”
You used resolve without a zone directive. Add a zone:
```nginx
upstream backend {
    zone backend_zone 64k;  # Required for resolve
    resolver 8.8.8.8;

    server backend.example.com resolve;
}
```
Upstream Stuck on Old IPs After DNS Change
Check whether the valid parameter is set too high, or if your DNS server returns a very long TTL. Lower the valid duration:
```nginx
resolver 8.8.8.8 valid=10s;
```
Also verify the resolver is reachable from the NGINX host:
```shell
dig @8.8.8.8 backend.example.com
```
Upstream Shows “(no live upstreams)” in Error Log
If the domain name is temporarily unresolvable, the server is marked as down. NGINX logs no live upstreams when all servers in the upstream block are down. This resolves automatically when DNS succeeds again. To prevent complete upstream failure, add a static fallback server:
```nginx
upstream backend {
    zone backend_zone 64k;
    resolver 8.8.8.8 valid=30s;

    server backend.example.com resolve;
    server 10.0.0.100 backup;  # Static fallback, always available
}
```
Which Approach Should You Use?
| Scenario | Recommended Approach |
|---|---|
| NGINX 1.28+ with multiple upstream servers | Native resolve parameter |
| NGINX 1.28+ with a single upstream server, no load balancing needed | Variable trick (set $backend ...) or native resolve |
| Older NGINX (pre-1.27.3) with upstream features | upstream-dynamic module |
| Older NGINX with a single upstream server | Variable trick |
| Need runtime API to add/remove servers | NGINX-MOD |
| Need active health checks | NGINX-MOD |
| Full Plus-level upstream management | NGINX-MOD with native resolve |
For the vast majority of modern deployments running NGINX 1.28+, the native NGINX upstream resolve parameter is the right choice. It is built in, well-tested, and integrates with all upstream features. For teams that also need the runtime API or active health checks, NGINX-MOD delivers the complete package.
Conclusion
Dynamic DNS resolution for NGINX upstreams is no longer a commercial-only feature. With NGINX 1.28+, the resolve parameter is available to everyone — bringing automatic DNS re-resolution, graceful handling of unresolvable domains, and seamless IP address updates without reloads. This is part of a broader convergence between NGINX open source and NGINX Plus, and the remaining gaps — the dynamic upstream API, active health checks, and state persistence — are already available through NGINX-MOD from the GetPageSpeed repository.
Whether you are managing cloud-based microservices, containerized workloads, or traditional multi-server deployments, NGINX upstream resolve eliminates manual intervention when backend IPs change — keeping your reverse proxy infrastructure resilient and self-healing.

