Modern web applications demand real-time features. Users expect instant chat messages, live notifications, and streaming updates. Traditionally, developers reach for Node.js or similar event-driven platforms. However, there is a powerful alternative: NGINX Nchan.
This module transforms NGINX into a complete pub/sub server. It handles WebSocket connections, Server-Sent Events (SSE), and long-polling natively. You get real-time capabilities without adding Node.js to your stack. Moreover, Nchan scales to hundreds of thousands of concurrent connections with minimal CPU overhead.
In this comprehensive guide, you will learn how to install the Nchan module, configure pub/sub channels, and build a complete chat and notification system. All examples are tested and production-ready.
What is NGINX Nchan?
NGINX Nchan is a scalable pub/sub module for NGINX. It enables real-time communication between your backend and web clients. The module supports multiple transport protocols simultaneously:
- WebSocket for bidirectional, full-duplex communication
- Server-Sent Events (SSE) for efficient server-to-client streaming
- Long-polling for maximum browser compatibility
- HTTP chunked transfer for streaming responses
Additionally, Nchan provides message buffering, channel multiplexing, and Redis integration for horizontal scaling. These features make it suitable for production deployments at any scale.
Why Choose Nchan Over Node.js?
Node.js excels at real-time applications. However, it introduces complexity to your infrastructure. You need to manage another runtime, handle process supervision, and configure reverse proxying. Furthermore, Node.js requires careful memory management under high load.
NGINX Nchan offers several advantages:
- Simplified architecture: No separate application server required
- Native NGINX integration: Uses NGINX’s battle-tested event loop
- Lower resource consumption: Handles 300K+ subscribers per second on commodity hardware
- Familiar configuration: Standard NGINX directives and patterns
- Built-in message persistence: Memory, disk, or Redis storage options
Therefore, if your real-time needs are primarily pub/sub (chat, notifications, live feeds), the Nchan module provides a simpler path to production.
Installing Nchan on RHEL-Based Systems
NGINX Nchan is available as a pre-built package from the GetPageSpeed repository. This approach ensures compatibility with your NGINX version and simplifies updates.
Step 1: Enable the GetPageSpeed Repository
First, install the repository configuration package. This works on Rocky Linux, AlmaLinux, CentOS Stream, and RHEL:
sudo dnf install -y https://extras.getpagespeed.com/release-latest.rpm
Step 2: Install the Nchan Module
Next, install the nginx-module-nchan package:
sudo dnf install -y nginx-module-nchan
The package automatically installs NGINX if not already present. It also handles all dependencies for you.
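To double-check what landed on the system, you can query the RPM database; both packages should be listed:
rpm -q nginx nginx-module-nchan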
Step 3: Load the Module
After installation, you must load the module in your NGINX configuration. Add the following line at the top of /etc/nginx/nginx.conf, before any other directives:
load_module modules/ngx_nchan_module.so;
Step 4: Verify the Installation
Finally, test your configuration and restart NGINX:
sudo nginx -t
sudo systemctl restart nginx
If the test passes, Nchan is ready for use. You can verify the module is loaded by checking the NGINX error log for the version message.
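As an extra sanity check, you can dump the effective configuration and scan the error log (assuming the default log path of /var/log/nginx/error.log):
# Confirm the load_module directive is part of the effective configuration
sudo nginx -T 2>/dev/null | grep load_module
# Look for Nchan's startup message in the error log
sudo grep -i nchan /var/log/nginx/error.log | tail -n 5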
Basic Pub/Sub Configuration
Nchan uses a publisher/subscriber model. Publishers send messages to channels. Subscribers receive messages from channels. Let’s start with a minimal configuration.
Creating Publisher and Subscriber Endpoints
Add the following to your server block:
# Publisher endpoint - POST messages here
location = /pub {
    nchan_publisher;
    nchan_channel_id $arg_id;
}

# Subscriber endpoint - clients connect here
location = /sub {
    nchan_subscriber;
    nchan_channel_id $arg_id;
}
This configuration creates two endpoints. The publisher accepts POST requests to send messages. The subscriber delivers messages to connected clients.
Testing the Basic Setup
Open two terminal windows. In the first, start a subscriber:
curl -s -N "http://localhost/sub?id=test"
The -N flag disables buffering, so messages appear immediately. In the second terminal, publish a message:
curl -s -X POST "http://localhost/pub?id=test" -d "Hello, Nchan!"
You should see the message appear instantly in the subscriber terminal. This demonstrates real-time message delivery with NGINX Nchan.
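Nchan's publisher endpoint is also useful for inspecting channels while you experiment: a GET request returns basic channel information (message and subscriber counts), and a DELETE request removes the channel and disconnects its subscribers:
# Show channel information for the test channel
curl -s "http://localhost/pub?id=test"
# Delete the channel and kick out its subscribers
curl -s -X DELETE "http://localhost/pub?id=test"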
Understanding Channel IDs
The nchan_channel_id directive determines which channel receives the message. In our example, $arg_id extracts the channel ID from the URL query parameter. You can use any NGINX variable here:
# Channel from URL path
location ~ ^/channel/(?<chan>[a-zA-Z0-9_-]+)$ {
    nchan_pubsub;
    nchan_channel_id $chan;
}

# Channel from header
location /events {
    nchan_subscriber;
    nchan_channel_id $http_x_channel_id;
}

# Channel from cookie
location /notifications {
    nchan_subscriber;
    nchan_channel_id $cookie_user_id;
}
This flexibility allows you to integrate Nchan with your existing authentication and routing logic.
Building a Complete Chat System
Now let’s build a production-ready chat system with NGINX Nchan. This example includes chat rooms, system messages, and message history.
Chat System Configuration
Create a new configuration file at /etc/nginx/conf.d/chat.conf:
# Shared memory for Nchan channels
# Adjust based on your expected message volume
nchan_max_reserved_memory 128M;

server {
    listen 80;
    server_name chat.example.com;

    # Serve static frontend files
    root /var/www/chat;
    index index.html;

    # Enable open_file_cache for static files
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Subscribe to chat room messages
    location ~ ^/api/chat/(?<room_id>[a-zA-Z0-9_-]+)/subscribe$ {
        nchan_subscriber;
        nchan_channel_id "chat:$room_id" "system:$room_id";

        # Return last 10 messages on connect
        nchan_subscriber_first_message -10;

        # Keep messages for 24 hours
        nchan_message_timeout 24h;
        nchan_message_buffer_length 1000;
    }

    # Publish message to chat room
    location ~ ^/api/chat/(?<room_id>[a-zA-Z0-9_-]+)/publish$ {
        nchan_publisher;
        nchan_channel_id "chat:$room_id";
    }

    # System messages (join/leave notifications)
    location ~ ^/api/chat/(?<room_id>[a-zA-Z0-9_-]+)/system$ {
        allow 127.0.0.1;
        deny all;
        nchan_publisher;
        nchan_channel_id "system:$room_id";
    }

    # Static files
    location / {
        try_files $uri $uri/ =404;
    }
}
This configuration demonstrates several important Nchan features:
- Channel multiplexing: Subscribers receive messages from both the chat: and system: channels
- Message history: New subscribers get the last 10 messages immediately
- Message persistence: Messages remain available for 24 hours
- Access control: System messages can only be published from localhost, as shown in the example below
For more on NGINX allow and deny directives, see our dedicated guide.
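For example, a backend job running on the same host could announce a user joining the general room like this (the JSON payload shape is only an illustration; Nchan does not impose any message format):
# Publish a join notice to the room's system channel (only localhost is allowed)
curl -s -X POST "http://localhost/api/chat/general/system" \
  -d '{"event":"join","user":"alice"}'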
Chat Frontend Example
Create a simple HTML frontend at /var/www/chat/index.html:
<!DOCTYPE html>
<html>
<head>
  <title>Nchan Chat Demo</title>
  <style>
    #messages { height: 400px; overflow-y: auto; border: 1px solid #ccc; padding: 10px; }
    .system { color: #666; font-style: italic; }
  </style>
</head>
<body>
  <h1>Chat Room: General</h1>
  <div id="messages"></div>
  <input type="text" id="input" placeholder="Type a message...">
  <button onclick="send()">Send</button>

  <script>
    const messages = document.getElementById('messages');
    const input = document.getElementById('input');
    const roomId = 'general';

    // Connect using EventSource (SSE)
    const eventSource = new EventSource(`/api/chat/${roomId}/subscribe`);

    eventSource.onmessage = function(event) {
      const div = document.createElement('div');
      div.textContent = event.data;
      messages.appendChild(div);
      messages.scrollTop = messages.scrollHeight;
    };

    eventSource.onerror = function() {
      console.log('Connection lost, reconnecting...');
    };

    function send() {
      const message = input.value.trim();
      if (!message) return;
      fetch(`/api/chat/${roomId}/publish`, {
        method: 'POST',
        body: message
      });
      input.value = '';
    }

    input.addEventListener('keypress', function(e) {
      if (e.key === 'Enter') send();
    });
  </script>
</body>
</html>
This frontend uses Server-Sent Events for subscribing. SSE automatically reconnects if the connection drops. The browser handles all the complexity for you.
Testing the Chat System
Verify the configuration passes validation:
sudo nginx -t
sudo systemctl reload nginx
Then test message delivery:
# Start a subscriber in one terminal
curl -s -N -H "Accept: text/event-stream" \
"http://localhost/api/chat/general/subscribe"
# Send messages from another terminal
curl -s -X POST "http://localhost/api/chat/general/publish" \
-d "Hello from user 1"
curl -s -X POST "http://localhost/api/chat/general/publish" \
-d "Hello from user 2"
Messages should appear instantly in the subscriber terminal.
Implementing Push Notifications
Push notifications require a different pattern than chat. Each user has their own channel. Your backend publishes notifications to user-specific channels.
Notification Configuration
Add these locations to your server block:
# User subscribes to their notification channel
location ~ ^/api/notifications/(?<user_id>[a-zA-Z0-9_-]+)$ {
    nchan_subscriber;
    nchan_channel_id "notify:$user_id";
    nchan_subscriber_first_message newest;
    nchan_message_timeout 7d;
    nchan_message_buffer_length 100;
}

# Internal endpoint for sending notifications
location ~ ^/internal/notify/(?<user_id>[a-zA-Z0-9_-]+)$ {
    # Only allow internal requests
    allow 127.0.0.1;
    deny all;
    nchan_publisher;
    nchan_channel_id "notify:$user_id";
}
Sending Notifications from Your Backend
Your application backend can send notifications using simple HTTP requests:
# Send notification to user123
curl -X POST "http://localhost/internal/notify/user123" \
-H "Content-Type: application/json" \
-d '{"type":"message","from":"alice","text":"Hey!"}'
In Python, this becomes:
import requests

def send_notification(user_id, data):
    # POST the payload as JSON to the user's internal Nchan channel
    url = f"http://localhost/internal/notify/{user_id}"
    response = requests.post(url, json=data)
    # Treat any 2xx status from Nchan as success
    return response.ok
In PHP:
function sendNotification($userId, $data) {
    $url = "http://localhost/internal/notify/{$userId}";
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($ch);
    // Treat any 2xx status from Nchan as success
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $result !== false && $code >= 200 && $code < 300;
}
Frontend Notification Handler
const userId = 'user123'; // Get from authentication
const eventSource = new EventSource(`/api/notifications/${userId}`);

eventSource.onmessage = function(event) {
  const notification = JSON.parse(event.data);

  // Show browser notification
  if (Notification.permission === 'granted') {
    new Notification(notification.type, {
      body: notification.text
    });
  }

  // Update UI badge (application-specific function)
  updateNotificationBadge();
};
Monitoring Performance
Nchan provides a built-in status endpoint for monitoring. Enable it in your configuration:
location = /nchan_status {
    nchan_stub_status;
    allow 127.0.0.1;
    deny all;
}
Query the endpoint to see current statistics:
curl -s http://localhost/nchan_status
The output includes valuable metrics:
total published messages: 15234
stored messages: 847
shared memory used: 2048K
shared memory limit: 131072K
channels: 156
subscribers: 2341
redis pending commands: 0
redis connected servers: 0
nchan version: 1.3.7
Monitor these values to understand your system’s behavior. The stored messages and shared memory used metrics help you tune buffer sizes. The subscribers count shows concurrent connections. For more detailed monitoring, consider the NGINX VTS Module which provides comprehensive traffic statistics.
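During load tests, a simple way to keep an eye on these numbers is to poll the status endpoint in a loop, for example:
# Refresh Nchan statistics every 5 seconds
watch -n 5 curl -s http://localhost/nchan_status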
Scaling with Redis
For multi-server deployments, NGINX Nchan supports Redis as a message broker. This enables horizontal scaling across multiple NGINX instances.
Redis Configuration
First, install Redis on your server:
sudo dnf install -y redis
sudo systemctl enable --now redis
Then configure Nchan to use Redis:
# Define Redis upstream
upstream redis_server {
    nchan_redis_server redis://127.0.0.1:6379;
}

server {
    listen 80;
    server_name realtime.example.com;

    # Use Redis for all channels
    nchan_redis_pass redis_server;

    location = /pub {
        nchan_publisher;
        nchan_channel_id $arg_id;
    }

    location = /sub {
        nchan_subscriber;
        nchan_channel_id $arg_id;
    }
}
With Redis enabled, messages are shared across all NGINX servers. A subscriber on server A receives messages published on server B.
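Once two NGINX instances share the same Redis backend, this is easy to verify end to end. In the sketch below, server-a and server-b are placeholder hostnames for two NGINX nodes running the configuration above:
# Subscribe through the first NGINX instance
curl -s -N "http://server-a/sub?id=shared-test"
# From another terminal, publish through the second instance
curl -s -X POST "http://server-b/pub?id=shared-test" -d "Hello across servers"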
Redis Cluster Support
For high availability, Nchan supports Redis Cluster:
upstream redis_cluster {
    nchan_redis_server redis://node1:6379;
    nchan_redis_server redis://node2:6379;
    nchan_redis_server redis://node3:6379;
}
The module automatically handles failover and data distribution across cluster nodes.
Security Best Practices
Real-time endpoints require careful security consideration. Here are essential practices:
Restrict Publisher Access
Never expose publisher endpoints to the public internet:
location ~ ^/api/(?<channel_id>.+)/publish$ {
    # Only allow from application servers
    allow 10.0.0.0/8;
    allow 127.0.0.1;
    deny all;
    nchan_publisher;
    nchan_channel_id $channel_id;
}
Use Authorization Callbacks
Nchan supports authorization callbacks to validate subscribers:
location /subscribe {
    nchan_authorize_request /auth;
    nchan_subscriber;
    nchan_channel_id $arg_channel;
}

location = /auth {
    internal;
    proxy_pass http://your-backend/verify-subscription;
    proxy_pass_request_body off;
    proxy_set_header X-Channel-Id $arg_channel;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header Cookie $http_cookie;
}
Your backend /verify-subscription endpoint returns 200 to allow or 403 to deny. For basic authentication needs, see our guide on NGINX Basic Auth.
Validate Channel Names
Use regex to restrict channel ID formats:
location ~ ^/channel/(?<chan>[a-z0-9_-]{1,64})$ {
    nchan_pubsub;
    nchan_channel_id $chan;
}
This prevents directory traversal attacks and limits channel name length.
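To confirm the restriction behaves as expected, compare a conforming channel name with one that falls outside the pattern; the latter never matches this location, so NGINX answers with 404 (assuming no other location catches the request):
# Accepted: lowercase letters, digits, underscores, and hyphens
curl -i -X POST "http://localhost/channel/room-42" -d "ok"
# Rejected with 404: uppercase characters are outside the allowed pattern
curl -i -X POST "http://localhost/channel/Room-42" -d "blocked"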
Configuration Validation with Gixy
Before deploying NGINX configurations to production, validate them with Gixy. This tool detects common security misconfigurations:
gixy /etc/nginx/conf.d/chat.conf
Gixy checks for issues such as alias path traversal, SSRF through proxy_pass with variables, HTTP response splitting, host header spoofing, and overly permissive regular expressions. Always run it before deploying configuration changes.
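Gixy is a Python tool, so if it is not already on the machine, installing it from PyPI is usually the quickest route:
pip install gixy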
Troubleshooting Common Issues
Messages Not Delivered
Check these common causes:
- Channel ID mismatch: Ensure publisher and subscriber use identical channel IDs
- Firewall blocking: Verify ports are open for long-lived connections
- Proxy buffering: Disable buffering in upstream proxies
Connection Drops
Long-lived connections may drop due to:
- Proxy timeouts: Increase proxy_read_timeout in upstream configurations. See our NGINX timeout guide for details.
- Load balancer limits: Configure sticky sessions or increase idle timeout
- Client-side issues: Implement reconnection logic in your frontend
Memory Usage Growing
If shared memory grows unbounded:
- Reduce nchan_message_timeout to expire old messages faster
- Lower nchan_message_buffer_length per channel
- Increase nchan_max_reserved_memory if needed
Performance Tuning
For high-traffic deployments, optimize these settings:
# Increase worker connections (main context)
events {
    worker_connections 65535;
}

# Nchan-specific tuning (place these inside the http {} block)
nchan_max_reserved_memory 256M;
nchan_message_buffer_length 50;
nchan_message_timeout 1h;
Also tune your operating system:
# Increase file descriptor limit
echo "* soft nofile 65535" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65535" | sudo tee -a /etc/security/limits.conf

# Tune kernel network parameters
sudo sysctl -w net.core.somaxconn=65535
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=65535
For optimal static file delivery alongside your real-time endpoints, review our guide on NGINX sendfile, tcp_nopush, and tcp_nodelay.
Conclusion
NGINX Nchan provides a powerful foundation for real-time web applications. It eliminates the need for separate WebSocket servers while offering excellent performance and scalability. The module integrates seamlessly with existing NGINX deployments and supports multiple transport protocols.
Start with the basic pub/sub examples in this guide. Then expand to the complete chat system as your needs grow. For production deployments, enable Redis for horizontal scaling and implement the security best practices described above.
The nginx-module-nchan package from GetPageSpeed repositories makes installation straightforward on any RHEL-based system. Combined with NGINX’s proven reliability, Nchan offers a compelling solution for modern real-time applications.
