When you deploy multiple backend servers behind NGINX, maintaining session persistence becomes crucial. Without NGINX sticky sessions, users may lose their shopping carts, authentication states, or form data when load balancing redirects them to a different server. This comprehensive guide explains how to implement NGINX sticky sessions using cookie-based load balancing.
What Are NGINX Sticky Sessions?
Sticky sessions, also known as session persistence or session affinity, ensure that all requests from a single user go to the same backend server. This solves a fundamental problem in load-balanced deployments: keeping each request on the server that holds that user's state. Additionally, it eliminates the need for shared session storage across servers.
Consider this scenario: a user logs into your application on Server A, which stores their session data locally. Without session persistence, the next request might go to Server B, which has no knowledge of the user's session, so the user appears logged out. Sticky sessions prevent this frustrating experience.
Why Choose Cookie-Based Session Persistence?
NGINX offers several methods for session persistence. However, cookie-based sticky sessions provide the most reliable solution. Here’s why they outperform alternatives:
IP Hash Limitations
The built-in ip_hash directive routes clients based on their IP address. This method has significant drawbacks. First, users behind NAT or corporate proxies share IP addresses. Therefore, all these users route to the same server. This creates uneven load distribution.
Moreover, mobile users frequently change IP addresses. They switch between WiFi and cellular networks. Each change potentially routes them to a different server. As a result, their sessions break unexpectedly.
Cookie-Based Advantages
Cookie-based session affinity overcomes these limitations. The server assigns each client a unique tracking cookie. NGINX reads this cookie and routes accordingly. This method offers several benefits:
- Accurate client identification – Each browser gets a unique identifier
- Network-independent persistence – IP changes don’t affect routing
- Configurable expiration – You control session duration
- Secure options – Support for HttpOnly and Secure flags
- Consistent hashing – Minimal disruption when servers change
Installing the NGINX Sticky Module
The standard NGINX distribution doesn’t include sticky session support. You need the nginx-sticky-module-ng third-party module. On Rocky Linux, AlmaLinux, or RHEL systems, install it from the GetPageSpeed repository:
dnf install nginx-module-sticky
After installation, load the module in your NGINX configuration. Add this line at the top of /etc/nginx/nginx.conf:
load_module modules/ngx_http_sticky_module.so;
Verify the configuration syntax:
nginx -t
You should see a successful validation message. Reload NGINX to activate the module:
systemctl reload nginx
Basic NGINX Sticky Sessions Configuration
Let’s start with a simple configuration. This example creates an upstream group with cookie-based session persistence:
upstream backend {
    sticky name=srv_id expires=1h path=/;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This configuration creates a cookie named srv_id. The cookie expires after one hour. NGINX sets the cookie path to the root directory. When a client first connects, NGINX assigns them to a backend server. Subsequently, NGINX creates the tracking cookie. All future requests from that client route to the same server.
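The flow just described can be sketched in a few lines of Python. This is illustrative logic only, not the module's implementation: the real cookie value is a hash of the server (MD5 by default), while this sketch stores the address directly for readability, and the names `route` and `SERVERS` are invented for the example.

```python
import random

SERVERS = ["192.168.1.10:8080", "192.168.1.11:8080", "192.168.1.12:8080"]

def route(cookies):
    """Route one request: honor an existing srv_id cookie, otherwise pick a
    server and return a Set-Cookie value pinning the client to it."""
    srv_id = cookies.get("srv_id")
    if srv_id in SERVERS:
        return srv_id, None                      # sticky: reuse pinned server
    chosen = random.choice(SERVERS)              # first visit: balance freely
    return chosen, f"srv_id={chosen}; Path=/; Max-Age=3600"

# The first request carries no cookie, so a Set-Cookie header is issued...
server, set_cookie = route({})
assert set_cookie is not None
# ...and every later request presenting that cookie lands on the same server.
for _ in range(5):
    again, header = route({"srv_id": server})
    assert again == server and header is None
```

The key property is that the load-balancing decision is made exactly once per client; every subsequent request is a cheap cookie lookup.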
Understanding Sticky Directive Options
The sticky directive accepts numerous parameters. Understanding each option helps you optimize your NGINX sticky sessions configuration.
Cookie Name and Expiration
The name parameter sets the cookie identifier. Choose a descriptive name for debugging purposes. For example, use backend_route or session_server.
The expires parameter controls cookie lifetime. Use time units like 1h, 30m, or 1d. Setting appropriate expiration balances session persistence with server flexibility. Shorter durations allow faster rebalancing. Longer durations provide better user experience.
upstream backend {
    sticky name=backend_route expires=2h;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
Domain and Path Settings
The domain parameter specifies cookie scope. Use this for applications spanning multiple subdomains:
upstream backend {
    sticky name=srv_id expires=1h domain=.example.com path=/;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
The leading dot in .example.com allows the cookie to work across all subdomains. For instance, both www.example.com and api.example.com share the same session routing.
The path parameter limits cookie scope to specific URL paths. This proves useful when different applications run on the same domain.
Hash Algorithms
The module supports multiple hash algorithms for generating server identifiers. Choose based on your security and performance requirements:
MD5 (default):
upstream backend {
    sticky hash=md5 name=srv_id expires=1h;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
SHA-1 (more secure):
upstream backend {
    sticky hash=sha1 name=srv_id expires=1h;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
Index (simplest, fastest):
upstream backend {
    sticky hash=index name=srv_id expires=1h;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
The index option stores the server’s position number directly. This approach is fastest but reveals your infrastructure. Attackers could determine how many backend servers exist.
Secure NGINX Sticky Sessions Configuration
Production environments require secure cookie handling. Add the secure and httponly flags:
upstream backend {
    sticky name=srv_id expires=1h domain=.example.com path=/ secure httponly;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
The secure flag ensures browsers only send the cookie over HTTPS connections. This prevents session hijacking through unencrypted traffic.
The httponly flag blocks JavaScript access to the cookie. This mitigates cross-site scripting (XSS) attacks. Malicious scripts cannot steal session routing information.
HMAC-Based Cookie Security
For enhanced security, use HMAC-signed cookies. This prevents cookie tampering:
upstream backend {
    sticky hmac=sha1 hmac_key=your_secret_key_here
           name=srv_id expires=1h secure httponly;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
Replace your_secret_key_here with a strong, random secret. NGINX signs each cookie with this key. Any modification invalidates the cookie. Consequently, attackers cannot forge routing cookies.
Generate a secure key using this command:
openssl rand -base64 32
Store the key securely. If you change it, all existing sessions will reset.
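Conceptually, this is standard HMAC sign-and-verify. The Python sketch below shows the idea using the standard `hmac` module; the module's actual cookie wire format is internal, so the `value|digest` layout here is only an assumption for illustration.

```python
import hashlib
import hmac

# Placeholder secret; in practice generate one with `openssl rand -base64 32`.
SECRET = b"your_secret_key_here"

def sign(value: str) -> str:
    """Append an HMAC-SHA1 signature so the cookie can be verified later."""
    digest = hmac.new(SECRET, value.encode(), hashlib.sha1).hexdigest()
    return f"{value}|{digest}"

def verify(cookie: str) -> bool:
    """Recompute the signature and reject any cookie that was modified."""
    value, _, digest = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(digest, expected)

cookie = sign("192.168.1.10:8080")
assert verify(cookie)                              # untouched cookie passes
tampered = cookie.replace("10:8080", "99:8080", 1)
assert not verify(tampered)                        # any modification fails
```

Because only the holder of the key can produce a valid signature, a client cannot forge a cookie that routes them to a server of their choosing.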
Handling Server Failures
What happens when a sticky server goes down? By default, NGINX gracefully falls back to round-robin selection. The client gets a new cookie pointing to a healthy server.
Strict Mode (No Fallback)
Some applications require absolute session consistency. They prefer errors over session loss. Enable strict mode with no_fallback:
upstream backend {
    sticky name=srv_id expires=1h no_fallback;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
With this setting, NGINX returns a 502 Bad Gateway error if the assigned server is unavailable. Use this option carefully. It prioritizes session integrity over availability.
Configuring Server Health Checks
Combine NGINX sticky sessions with proper health checking. This ensures NGINX quickly detects failed servers:
upstream backend {
    sticky name=srv_id expires=1h;
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
}
The max_fails parameter sets how many failed attempts within the fail_timeout window mark a server as unavailable. The same fail_timeout value also controls how long the server stays out of rotation: after 30 seconds, NGINX attempts to route traffic to it again.
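The interplay of the two parameters can be modeled in a short Python sketch. This mirrors the behavior described above under simplified assumptions; NGINX's real bookkeeping is internal and more nuanced.

```python
class Upstream:
    """Toy model of max_fails/fail_timeout passive health checking."""

    def __init__(self, max_fails=3, fail_timeout=30.0):
        self.max_fails, self.fail_timeout = max_fails, fail_timeout
        self.fails, self.window_start, self.down_until = 0, 0.0, 0.0

    def record_failure(self, now):
        if now - self.window_start > self.fail_timeout:
            self.fails, self.window_start = 0, now     # start a new window
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # take server down

    def available(self, now):
        return now >= self.down_until

srv = Upstream()
for t in (0, 1, 2):           # three failures inside one 30s window...
    srv.record_failure(t)
assert not srv.available(10)  # ...remove the server from rotation
assert srv.available(40)      # after fail_timeout, NGINX retries it
```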
Built-in Alternatives: Hash Directive
If you cannot install third-party modules, NGINX's built-in hash directive provides an alternative. This approach requires your application to set a session cookie (our NGINX map directive guide covers the supporting map technique in more detail):
upstream backend {
    hash $cookie_PHPSESSID consistent;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
    }
}
This configuration hashes the PHPSESSID cookie value. The consistent keyword enables consistent hashing. When you add or remove servers, only a fraction of sessions redistribute.
Consistent Hashing Explained
Consistent hashing minimizes session disruption during scaling events. Traditional hashing redistributes all sessions when server count changes. Consistent hashing only affects sessions that mapped to added or removed servers.
The algorithm creates a virtual ring with 160 points per server weight unit. When routing, NGINX finds the nearest ring point. This mathematical approach ensures minimal redistribution.
Fallback for Missing Cookies
New visitors don’t have session cookies yet. Handle this gracefully with a map:
map $cookie_PHPSESSID $backend_route {
    default $cookie_PHPSESSID;
    ""      $remote_addr;
}

upstream backend {
    hash $backend_route consistent;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}
This configuration uses the session cookie when available. For new visitors, it falls back to IP-based routing. Once the application sets the session cookie, subsequent requests use it.
Performance Considerations
NGINX sticky sessions impact both performance and scalability. Consider these factors when designing your architecture.
Memory Usage
Cookie-based persistence keeps no session table in NGINX itself; the routing state travels in the client's cookie. The memory cost instead falls on the backends, since each server holds its pinned users' session data locally. Monitor your servers' resource consumption and scale horizontally when approaching limits.
Uneven Load Distribution
Sticky sessions can cause load imbalance. One server might receive more long-running sessions. Use weighted server definitions to compensate:
upstream backend {
    sticky name=srv_id expires=1h;
    server 192.168.1.10:8080 weight=3;
    server 192.168.1.11:8080 weight=2;
    server 192.168.1.12:8080 weight=1;
}
Higher weights direct more new sessions to that server. Adjust weights based on observed traffic patterns.
Session Storage Alternatives
Consider centralized session storage as an alternative. Redis or Memcached can store sessions. All servers access the shared store. This eliminates session affinity requirements.
However, shared session storage adds complexity. It introduces another potential failure point. Additionally, it increases network latency. NGINX sticky sessions often provide simpler, faster solutions.
Combining with Proxy Cache
You can combine sticky sessions with proxy caching for optimal performance. The cache handles static content while session persistence manages dynamic requests:
upstream backend {
    sticky name=srv_id expires=1h secure httponly;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m;

server {
    listen 443 ssl http2;
    server_name example.com;

    location /static/ {
        proxy_pass http://backend;
        proxy_cache app_cache;
        proxy_cache_valid 200 1h;
    }

    location / {
        proxy_pass http://backend;
    }
}
Verifying Your Configuration
Always validate your NGINX configuration before deployment. Use the built-in syntax checker:
nginx -t
Additionally, use gixy to analyze your configuration for security issues:
gixy /etc/nginx/nginx.conf
Gixy detects common misconfigurations and security vulnerabilities. It validates regular expressions, header handling, and access controls.
Testing NGINX Sticky Sessions
After deployment, verify session persistence works correctly. Use curl to test:
curl -c cookies.txt -b cookies.txt http://example.com/
curl -c cookies.txt -b cookies.txt http://example.com/
curl -c cookies.txt -b cookies.txt http://example.com/
The -c flag saves cookies to a file. The -b flag sends cookies from that file. Each request should reach the same backend server.
Check the cookie contents:
cat cookies.txt
You should see your sticky session cookie with the configured name.
Monitoring and Troubleshooting
Enable detailed logging to troubleshoot NGINX sticky sessions issues. For comprehensive monitoring, consider the NGINX VTS module:
log_format upstream_log '$remote_addr - $upstream_addr - $cookie_srv_id';

server {
    access_log /var/log/nginx/upstream.log upstream_log;
    # ... rest of configuration
}
This log format shows client IP, selected upstream server, and cookie value. Analyze logs to verify consistent routing.
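A few lines of Python can scan such a log and flag any cookie that ever switched upstreams. The sample log lines below are hypothetical; the parsing assumes the ' - ' separated format defined above.

```python
from collections import defaultdict

# Sample lines in the upstream_log format above (hypothetical traffic).
log = """\
203.0.113.5 - 192.168.1.10:8080 - a3f1c2
203.0.113.5 - 192.168.1.10:8080 - a3f1c2
198.51.100.7 - 192.168.1.11:8080 - b9e044
203.0.113.5 - 192.168.1.10:8080 - a3f1c2
"""

# Map each sticky cookie to the set of upstreams it was routed to;
# with working persistence, every cookie maps to exactly one upstream.
routes = defaultdict(set)
for line in log.strip().splitlines():
    client, upstream, cookie = (f.strip() for f in line.split(" - "))
    routes[cookie].add(upstream)

broken = {c: u for c, u in routes.items() if len(u) > 1}
assert not broken, f"inconsistent routing: {broken}"
print("all", len(routes), "cookies routed consistently")
```

Running the same check against your real access log (reading the file instead of the inline sample) quickly confirms whether stickiness is holding in production.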
Common Issues
Sessions not persisting: Verify the cookie domain matches your site. Check browser developer tools for cookie presence.
Uneven distribution: New sessions naturally distribute evenly. Existing sessions stay pinned. Wait for cookie expiration to rebalance.
502 errors with no_fallback: A sticky server is down. Remove no_fallback or restore the server.
Complete Production Configuration
Here’s a production-ready configuration combining all best practices:
load_module modules/ngx_http_sticky_module.so;

upstream backend {
    sticky hmac=sha1 hmac_key=a8f5f167f44f4964e6c998dee827110c
           name=SERVERID expires=2h domain=.example.com path=/
           secure httponly;
    server 192.168.1.10:8080 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 weight=1 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 10s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
    }
}
This configuration includes HMAC-signed cookies, secure flags, health checks, and proper proxy headers. It serves as a solid foundation for production deployments.
Conclusion
NGINX sticky sessions solve critical session persistence challenges in load-balanced environments. Cookie-based session affinity provides the most reliable approach. It correctly identifies clients regardless of network changes.
For Rocky Linux, AlmaLinux, and other Enterprise Linux systems, the sticky module is available from GetPageSpeed repositories. Installation takes just one command.
Remember to secure your cookies with HMAC signing, secure flags, and httponly attributes. Monitor your deployment and adjust weights for optimal distribution.
Whether you use the dedicated sticky module or built-in hash directive, proper configuration ensures your users enjoy seamless experiences across your server infrastructure.