
NGINX Upstream Jdomain: Dynamic DNS for Any NGINX Version

You configure NGINX as a reverse proxy with upstream servers defined by domain names. The backend runs on Kubernetes, auto-scaling groups, or a cloud provider that rotates IPs. Everything works - until the DNS record changes. NGINX keeps sending traffic to the old IP address because it resolved that hostname once at startup and cached the result permanently. Requests fail. Users complain. You SSH in and run nginx -s reload to force a fresh DNS lookup. The NGINX upstream jdomain module eliminates this manual intervention by re-resolving domain names on the fly.

This is not a bug - it is how NGINX has always worked. Upstream server addresses are resolved during configuration loading via the system's getaddrinfo(), and the resulting IPs are baked into worker process memory for the lifetime of that configuration. If a domain name fails to resolve at startup, NGINX refuses to start entirely.

The NGINX upstream jdomain module solves this problem by re-resolving domain names asynchronously during normal operation. Instead of the standard server directive, you use the jdomain directive inside an upstream block, and the module handles DNS lookups in the background - updating the upstream peer list without any reload.

Important context: Starting with NGINX 1.27.3 (November 2024) and the stable NGINX 1.28.0 (April 2025), NGINX open source includes a native resolve parameter for the server directive that provides the same capability - and more. The jdomain module remains the right choice for deployments running older NGINX versions (pre-1.27.3) where upgrading is not an option. For NGINX 1.28+, use the native resolve parameter instead.

How NGINX Upstream Jdomain Works

Unlike the native resolve parameter - which uses a background timer to re-resolve DNS on a periodic schedule - the NGINX upstream jdomain module uses a request-driven approach:

  1. At startup, the module performs a blocking DNS lookup for each jdomain domain name. The resolved IPs are cached.
  2. When a request arrives for the upstream, the module checks whether the configured interval has elapsed since the last DNS lookup.
  3. If the interval has elapsed, the module initiates an asynchronous DNS resolution using the configured resolver.
  4. The triggering request still uses the previously cached IPs. Only subsequent requests see the updated addresses.
  5. Once resolution completes, the module updates the peer list in place.

This request-driven design means DNS re-resolution only happens when traffic is flowing. If an upstream receives no requests for an extended period, no unnecessary DNS lookups occur. However, the trade-off is that the first request after a DNS change may still route to a stale IP.
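The request-driven flow above can be sketched in Python. This is a simplified model of the behavior, not the module's C implementation; all names are illustrative, and the real module performs the lookup asynchronously:

```python
import time

class JdomainPeer:
    """Toy model of jdomain's request-driven re-resolution."""

    def __init__(self, domain, interval=1, resolve_fn=None):
        self.domain = domain
        self.interval = interval                # seconds between lookups
        self.resolve_fn = resolve_fn            # injected DNS lookup function
        self.cached_ips = resolve_fn(domain)    # blocking lookup at "startup"
        self.last_resolved = time.monotonic()

    def pick_peers(self):
        """Called per request: may trigger re-resolution, returns cached IPs."""
        ips = list(self.cached_ips)  # this request uses the cached result
        now = time.monotonic()
        if now - self.last_resolved >= self.interval:
            # Async in the real module; here we update the cache *after*
            # selecting, so the triggering request still sees stale IPs.
            self.cached_ips = self.resolve_fn(self.domain)
            self.last_resolved = now
        return ips
```

Note the one-request lag: the request that triggers re-resolution is served from the old cache, and only subsequent requests see the new addresses.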

The module does not require a shared memory zone directive. It stores state in each worker process independently. This simplifies configuration but means that different workers may see different DNS results briefly during transitions.

Installation

RHEL, CentOS, AlmaLinux, Rocky Linux

sudo dnf install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf install nginx-module-upstream-jdomain

Then load the module by adding the following at the top of /etc/nginx/nginx.conf, before the http {} block:

load_module modules/ngx_http_upstream_jdomain_module.so;

Debian and Ubuntu

First, set up the GetPageSpeed APT repository, then install:

sudo apt-get update
sudo apt-get install nginx-module-upstream-jdomain

On Debian/Ubuntu, the package handles module loading automatically. No load_module directive is needed.

Configuration

The NGINX upstream jdomain module introduces a single directive - jdomain - used inside upstream blocks in place of (or alongside) standard server directives.

Prerequisites

A resolver directive must be configured at the http level in your NGINX configuration. Without it, the module cannot perform runtime DNS lookups:

http {
    resolver 8.8.8.8;
    # ...
}

Basic Example

The simplest configuration resolves a domain name and proxies traffic to it on port 80:

resolver 8.8.8.8;

upstream backend {
    jdomain api.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host api.example.com;
    }
}

The module resolves api.example.com at startup, then re-resolves it every second (the default interval) when requests arrive.

Custom Port and Tuning

For backends listening on a non-standard port, with a longer re-resolution interval and a larger IP buffer:

resolver 8.8.8.8;

upstream backend {
    jdomain api.example.com port=8080 max_ips=10 interval=10;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host api.example.com;
    }
}

This configuration caches up to 10 IP addresses and re-resolves the domain every 10 seconds.

Strict Mode with Backup Failover

The strict parameter marks the jdomain server as down whenever DNS resolution fails - including timeouts, server errors, and empty responses. Combined with a backup server, this provides automatic failover:

resolver 8.8.8.8;

upstream backend {
    server 10.0.0.100:8080 backup;
    jdomain api.example.com strict;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host api.example.com;
    }
}

If api.example.com fails to resolve, NGINX routes traffic to the static backup server at 10.0.0.100. When DNS succeeds again, traffic returns to the dynamically resolved addresses. Without strict, only specific DNS error types (NXDOMAIN and FORMERR) trigger the failover.
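The down-marking decision described above can be modeled as a small predicate (an illustrative sketch of the documented behavior, not the module's code):

```python
def peer_marked_down(outcome, strict=False):
    """Model of when jdomain marks its server 'down' after a DNS lookup.

    outcome: one of 'ok', 'nxdomain', 'formerr', 'timeout', 'servfail', 'empty'.
    With strict, ANY failure marks the peer down; by default, only the
    NXDOMAIN and FORMERR error types do.
    """
    if outcome == "ok":
        return False
    if strict:
        return True
    return outcome in ("nxdomain", "formerr")
```

In other words, a resolver timeout with the default settings leaves the peer up (serving stale IPs), while strict fails it over to the backup server.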

IPv4 or IPv6 Only

Filter resolved addresses to a specific address family:

upstream backend {
    jdomain api.example.com ipver=4;
}

Use ipver=4 for IPv4 only, ipver=6 for IPv6 only, or omit the parameter (default) to accept both families.
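The effect of the ipver filter is equivalent to keeping only addresses of the requested family, as this small sketch using Python's ipaddress module shows (function name is illustrative):

```python
import ipaddress

def filter_by_ipver(addresses, ipver=0):
    """Model of the ipver parameter: 0 keeps all resolved addresses,
    4 keeps only IPv4, 6 keeps only IPv6."""
    if ipver == 0:
        return list(addresses)
    return [a for a in addresses
            if ipaddress.ip_address(a).version == ipver]
```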

Connection Limiting

The max_conns parameter limits simultaneous connections to the upstream, preventing a single backend from being overwhelmed:

upstream backend {
    jdomain api.example.com max_conns=100;
}

Combining with Load Balancing Algorithms

The NGINX upstream jdomain module works with other load balancing directives. You can use multiple jdomain entries, mix them with standard server directives, and add keepalive:

resolver 8.8.8.8;

upstream backend {
    least_conn;
    jdomain primary.example.com;
    jdomain secondary.example.com;
    server 10.0.0.50:8080 backup;
    keepalive 16;
}

Critical ordering rule: If you use an alternate load balancing algorithm (least_conn, hash, etc.), it must appear before any jdomain directives. Placing jdomain before the load balancer directive will cause NGINX to crash at runtime. This is because many load balancing modules override internal handlers, and the jdomain module must initialize after them.

Directive Reference

Syntax:  jdomain <domain-name> [port=N] [max_ips=N] [interval=N] [ipver=N] [max_conns=N] [strict]
Context: upstream
| Parameter | Default | Description |
|-----------|---------|-------------|
| port | 80 | Backend server port |
| max_ips | 4 | Maximum number of resolved IP addresses to cache. If a domain resolves to more IPs than this value, excess addresses are silently dropped |
| interval | 1 | Seconds between DNS re-resolution attempts. Supports time values (e.g., 10, 30s). Resolution only occurs when requests trigger it |
| ipver | 0 | Address family filter: 0 = both IPv4 and IPv6, 4 = IPv4 only, 6 = IPv6 only |
| max_conns | 0 (unlimited) | Maximum simultaneous connections to the upstream |
| strict | off | When enabled, marks the server as down on any DNS resolution failure. Without strict, only NXDOMAIN and FORMERR errors trigger down status |

Implicit server defaults: Each jdomain directive creates an underlying NGINX upstream server with weight=1, max_fails=1, and fail_timeout=10s.

Comparison: Jdomain vs. Native Resolve vs. Angie

With native DNS resolution now available in open-source NGINX and Angie (an NGINX fork), the landscape for dynamic upstream DNS has changed significantly. Here is how the three approaches compare:

| Feature | jdomain module | NGINX 1.28+ native resolve | Angie OSS resolve |
|---------|----------------|----------------------------|-------------------|
| DNS re-resolution | Request-driven | Timer-based (TTL/valid) | Timer-based (TTL/valid) |
| Requires zone directive | No | Yes | Yes |
| Requires resolver directive | Yes (http level) | Yes (http or upstream level) | Yes |
| Stale request on DNS change | Yes (one request lag) | No | No |
| IP address limit | Yes (max_ips, default 4) | No limit | No limit |
| SRV record support | No | Yes (service=) | Yes (service=) |
| Stream (TCP/UDP) support | No (HTTP only) | Yes | Yes |
| State persistence across reloads | No | Yes (state directive) | Yes |
| Shared memory for cross-worker state | No | Yes | Yes |
| NGINX starts when DNS fails | Only with backup server | Yes (marks server as down) | Yes (marks server as down) |
| Available since | ~2014 | November 2024 (1.27.3) | January 2023 (1.1.0) |
| Maintenance status | Archived (October 2025) | Active | Active |

The native resolve parameter is the clear winner for modern deployments. It resolves DNS on a background timer (no stale-request problem), supports unlimited IPs, works with both HTTP and TCP/UDP upstreams, and integrates with the state directive for persistence.

Angie - maintained by former NGINX core developers - shipped the resolve parameter in its open-source edition nearly two years before NGINX followed in November 2024. If you are already using Angie, you have had this feature since version 1.1.0.

For a detailed guide on the native resolve parameter, see NGINX Upstream Resolve: Dynamic DNS for Load Balancing.

When to Use Jdomain

Despite the native alternative, the NGINX upstream jdomain module remains relevant in specific scenarios:

  1. Deployments pinned to NGINX versions older than 1.27.3, where upgrading is not an option
  2. Low-traffic upstreams where the request-driven design avoids continuous background DNS queries
  3. Configurations where adding the shared memory zone directive required by native resolve is undesirable

For all other situations - especially new deployments on NGINX 1.28+ - use the native resolve parameter or consider NGINX-MOD, which bundles the native feature with active health checks, a dynamic upstream API, and other Plus-level features.

Migration from Jdomain to Native Resolve

If you are upgrading to NGINX 1.28+ and currently use the jdomain module, migration is straightforward. Here is a before-and-after comparison:

Before (jdomain):

load_module modules/ngx_http_upstream_jdomain_module.so;

http {
    resolver 8.8.8.8;

    upstream backend {
        jdomain api.example.com port=8080 max_ips=10 interval=30;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}

After (native resolve):

http {
    upstream backend {
        zone backend_zone 64k;
        resolver 8.8.8.8 valid=30s;
        server api.example.com:8080 resolve;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}

Key differences in the migration:

  1. Remove load_module - no third-party module needed
  2. Replace jdomain with server ... resolve - the port moves to the server address
  3. Add a zone directive - required for native resolve (shared memory for cross-worker state)
  4. Move resolver into the upstream block - optional but cleaner
  5. Map interval to valid - the valid parameter on the resolver directive controls re-resolution timing
  6. No max_ips equivalent - native resolve handles all returned addresses automatically

Testing Your Configuration

Verify Syntax

After configuring the NGINX upstream jdomain module, test the configuration:

nginx -t

If you see unknown directive "jdomain", ensure the load_module directive is present at the top of nginx.conf.

Verify Runtime Behavior

Reload NGINX and confirm the upstream resolves correctly:

systemctl reload nginx
curl -s -o /dev/null -w "%{http_code}" http://localhost/

A 200 response confirms the domain resolved and the proxy is working.

Verify DNS Re-Resolution

To confirm that the NGINX upstream jdomain module re-resolves DNS at runtime, add the $upstream_addr variable to your access log format. This logs the actual IP address NGINX connects to for each request:

log_format upstream_debug '$remote_addr - $upstream_addr - $status';

server {
    access_log /var/log/nginx/upstream-debug.log upstream_debug;
    # ...
}

After reloading, send several requests spaced apart by more than the interval value and inspect the log:

curl -s -o /dev/null http://localhost/
sleep 2
curl -s -o /dev/null http://localhost/
cat /var/log/nginx/upstream-debug.log

You should see the resolved IP addresses in the $upstream_addr field. If the domain resolves to multiple IPs, you will see different addresses across requests - confirming that the jdomain module is actively re-resolving DNS.
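For more than a handful of requests, a short script can summarize the distinct upstream addresses seen in the debug log. This sketch assumes the exact upstream_debug format configured above ('$remote_addr - $upstream_addr - $status'); the function name is illustrative:

```python
from collections import Counter

def upstream_ip_counts(log_lines):
    """Count how often each $upstream_addr appears in the debug log.

    Assumes the three-field format '$remote_addr - $upstream_addr - $status'
    separated by ' - '; malformed lines are skipped.
    """
    counts = Counter()
    for line in log_lines:
        parts = [p.strip() for p in line.split(" - ")]
        if len(parts) == 3:
            counts[parts[1]] += 1   # field 2 is $upstream_addr
    return counts
```

Seeing more than one key in the result over time is the signal that re-resolution is happening.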

Performance Considerations

Resolution Overhead

The NGINX upstream jdomain module resolves DNS asynchronously using NGINX's internal resolver, so lookups do not block request processing. However, because resolution is triggered by incoming requests, the very first request after the interval elapses incurs a DNS query - though this request still uses the cached (possibly stale) result, so there is no added latency from the user's perspective.

The max_ips Limit

The max_ips parameter defaults to 4, which may be too low for domains that resolve to many addresses (common with cloud load balancers or CDNs). If your domain resolves to more IPs than max_ips, excess addresses are silently dropped. Set max_ips to at least the expected number of IP addresses:

jdomain cdn.example.com max_ips=20 interval=30;
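The truncation behavior is simple but easy to miss; a one-line model makes it concrete (illustrative sketch, default of 4 taken from the directive reference):

```python
def apply_max_ips(resolved, max_ips=4):
    """Model of the max_ips cap: the first max_ips resolved addresses
    are kept, and the rest are silently dropped - no warning is logged."""
    return resolved[:max_ips], resolved[max_ips:]
```

If your DNS answer routinely exceeds the cap, the dropped addresses never receive traffic, which skews load distribution toward the kept subset.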

Worker Process Isolation

Because the jdomain module does not use shared memory zones, each NGINX worker process maintains its own DNS cache independently. In practice, this means:

  1. Each worker issues its own DNS queries, so total lookup volume scales with the worker count
  2. After a DNS change, different workers may briefly route to different IP addresses until each has re-resolved
  3. No resolved state survives a configuration reload; every worker starts over with a fresh blocking lookup

Interval Tuning

The default interval=1 (one second) is aggressive. For most production deployments, a longer interval reduces DNS query volume without sacrificing responsiveness:

jdomain api.example.com interval=30;

A 30-second interval means DNS changes take at most 30 seconds to propagate - acceptable for most cloud environments.
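As a rough back-of-the-envelope check when choosing an interval (an illustrative calculation; the resolver-cache term is an assumption about caching in your DNS chain, not something jdomain controls):

```python
def max_staleness(interval_s, resolver_cache_s=0):
    """Rough upper bound (seconds) on serving a stale IP after a DNS change:
    up to one full interval can pass before a request triggers re-resolution,
    plus however long the configured resolver may serve a cached answer.
    Ignores the additional one-request lag, which depends on traffic rate."""
    return interval_s + resolver_cache_s
```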

Security Best Practices

Use a Trusted DNS Resolver

The resolver directive controls which DNS server the NGINX upstream jdomain module trusts for IP resolution. A compromised or spoofed DNS response could redirect traffic to a malicious server:

resolver 127.0.0.53;

Always Configure Backup Servers

Without a backup server, a DNS resolution failure at startup prevents NGINX from starting entirely. In production, always pair jdomain with a static backup:

upstream backend {
    server 10.0.0.100:8080 backup;
    jdomain api.example.com strict;
}

This ensures NGINX starts and continues serving even when DNS is temporarily unavailable.

Combine with Passive Health Checks

The implicit server defaults include max_fails=1 and fail_timeout=10s. For more resilient behavior, consider increasing these values by combining jdomain with standard server parameters on a separate backup server:

upstream backend {
    server 10.0.0.100:8080 backup max_fails=3 fail_timeout=30s;
    jdomain api.example.com strict;
}

For proactive failure detection, see NGINX active health checks.

Troubleshooting

"host not found in upstream" at Startup

The module performs a blocking DNS lookup at startup. If the domain cannot be resolved and no backup server exists, NGINX refuses to start:

nginx: [emerg] ngx_http_upstream_jdomain_module: host not found in upstream "api.example.com"

Fix: Add a backup server so NGINX can start even when DNS fails:

upstream backend {
    server 10.0.0.100:8080 backup;
    jdomain api.example.com strict;
}

Alternatively, verify that the resolver is reachable and the domain name is correct:

dig @8.8.8.8 api.example.com

"no resolver" Error

If you see no resolver in the error log, the resolver directive is missing from the http block:

http {
    resolver 8.8.8.8;
    # ... rest of configuration
}

NGINX Crashes with Load Balancer Directive

If NGINX crashes at runtime when using least_conn, hash, or another load balancing directive with jdomain, ensure the load balancer directive appears before jdomain in the upstream block:

# CORRECT - load balancer before jdomain
upstream backend {
    least_conn;
    jdomain api.example.com;
}

# WRONG - will crash at runtime
upstream backend {
    jdomain api.example.com;
    least_conn;
}

Upstream Stuck on Stale IPs

If the upstream seems to use old IP addresses despite DNS changes:

  1. Check the interval parameter - the default is 1 second, but if you set a large value, re-resolution takes longer
  2. Remember that the request triggering the lookup still uses the old IP - the next request gets the new one
  3. Verify the resolver directive is present and the DNS server is reachable
  4. Check max_ips - if the new DNS response has more IPs than max_ips, some addresses may be dropped

Conclusion

The NGINX upstream jdomain module has served as a vital solution for dynamic DNS resolution in NGINX upstream blocks since 2014 - filling a gap that NGINX left open for over a decade by restricting the resolve parameter to the commercial NGINX Plus product. For deployments running NGINX versions before 1.27.3, it remains the most straightforward way to handle changing backend IPs without manual reloads.

For modern NGINX 1.28+ deployments, migrate to the native resolve parameter, which eliminates the stale-request problem, supports unlimited IPs, works with TCP/UDP streams, and integrates with shared memory zones for cross-worker consistency. Combined with NGINX-MOD - which adds active health checks, a dynamic upstream API, and state persistence - open-source NGINX now matches NGINX Plus for upstream management at a fraction of the cost.

The jdomain module source is available on GitHub.

Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization • Linux system administration • Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules
