
Faster Web Server Stack Powered by Unix Sockets and PROXY protocol



We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you. An active subscription is required.

tl;dr: With Varnish and Hitch gaining UNIX domain socket support, there are fewer reasons not to use them in a single-server scenario.

UNIX domain socket (UDS) benefits include:

  • Bypassing the network stack’s bottleneck, which can make local IPC roughly twice as fast under heavy workloads
  • Security: UNIX domain sockets are subject to file system permissions, while TCP sockets are not. As a result, it is much easier to regulate which users have access to a UNIX domain socket than it is for a TCP socket

Another benefit follows directly from the definition of UDS:

Unix sockets allow inter-process communication (IPC) between processes on the same machine.

So UDS is designed for exactly what we’re after: running Varnish, NGINX, and Hitch on the same machine and having them talk to each other!

The other major modern goodie, aside from UDS, is the PROXY protocol. When you stack HTTP-capable software like NGINX and Varnish together, you inevitably have to deal with propagating the client IP address to your application. The PROXY protocol lets you seamlessly pass client IP addresses between pieces of software that know how to speak it.
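For illustration, version 1 of the PROXY protocol is just a single human-readable line prepended to the connection before the usual HTTP request (we will use the binary version 2 below, but the idea is identical; the addresses here are example values):

PROXY TCP4 203.0.113.7 192.0.2.10 51234 443
GET / HTTP/1.1
Host: example.com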

As we’re all fans of NGINX here (myself included), you might want to keep NGINX for TLS termination and not introduce Hitch. But there are some issues in that regard:

NGINX Issue Number 1

NGINX cannot forward the PROXY protocol via its http proxy module. This means that terminating TLS in a server { proxy_pass ...; } block and passing requests to a Varnish listener that expects the PROXY protocol will not work.

NGINX Issue Number 2

NGINX can forward the PROXY protocol via its stream module. That is:

nginx 443 (stream TLS), proxy protocol forward
-> varnish (socket, PROXY)
-> nginx, socket

… will work!

However, the nginx SSL stream + Varnish-listening-on-PROXY combination won’t support HTTP/2, because the nginx SSL stream module does not know how to negotiate the ALPN protocol.
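For reference, a minimal sketch of that stream-based variant might look like this (certificate paths and the socket path are assumptions matching the rest of this post):

stream {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        # Prepend a PROXY protocol header for the Varnish listener:
        proxy_protocol on;
        proxy_pass unix:/var/run/varnish/varnish.sock;
    }
}

The client IP arrives intact, but browsers will only speak HTTP/1.1 to it, since the stream module never offers h2 via ALPN.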

Pathetic NGINX! (sorry NGINX, I still like you) 🙂

The better setup

So now you know why we have to use Hitch to leverage both UDS and the PROXY protocol: simply because it can do both.

In fact, you can think of Hitch as an NGINX “stream module” that doesn’t have the second issue: HTTP/2 works fine with it.

Let’s formulate what we want and what we’re dealing with:

  • Our single-server setup should not use TCP internally! That would be silly when we can use UDS 🙂
  • All the software for a UDS-only server stack has been available since 2018!
  • Most systems are single-server and never outgrow their traffic requirements. A single Varnish instance is sufficient for them, without having to scale to many servers
  • Thanks to UDS, we don’t consume network ports unnecessarily. There is no port mapping required

We will also add Cloudflare to the equation 🙂 For fun and future-proofing. Before we start, a note: I had success with this setup, but did not have enough time to polish the rough corners of the post. I thought I’d publish it anyway for interested folks.

Typical Request Flow

It’s always good to understand how an HTTP request will travel through our whole stack:

Browser -> Cloudflare 443 SSL -> Hitch 443 SSL -> (PROXY protocol) -> Varnish socket -> (regular HTTP, sorry) -> NGINX socket

As you can see, things are not without a tiny imperfection. Why? Because Varnish can listen for the PROXY protocol, but it cannot talk the PROXY protocol to its backends.
This is fine, and we’ll address it below.

Prerequisites:

  • Cloudflare account with a domain you own
  • CentOS 7 server
  • A few minutes of your time

Make sure to register and set up your domain with Cloudflare first. You will need at least two DNS records set up there:

  • ‘example.com’ pointing to your server IP
  • ‘*.example.com’ pointing to your server IP

At this point, you should disable orange cloud icons (CDN off) in Cloudflare.

Install Software Stack

# Varnish LTS and latest Hitch will be installed from this repo: 
yum -y install https://extras.getpagespeed.com/release-latest.rpm
# PHP will be installed from this repo:
yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
# for "yum-config-manager" command
yum -y install yum-utils
yum-config-manager --enable getpagespeed-extras-varnish60 remi-php73
yum -y install nginx varnish hitch php-fpm php-cli php-opcache certbot python2-certbot-dns-cloudflare 

The newer Hitch >= 1.5 (with socket support and TLS 1.3, bundled with the hitch-deploy-hook script) comes from the GetPageSpeed Extras repository, among other things.

SELinux matters

We are not going to disable SELinux; instead, we will temporarily silence it until we apply our special policy:

semanage permissive -a varnishd_t
semanage permissive -a httpd_t

It’s also helpful to yum install selinux-policy-doc so you can run man varnishd_selinux.
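If you later want to replace the permissive domains with a real policy, one possible approach (a sketch, assuming the policycoreutils-python tools are installed) is to build a local module from the denials SELinux logged while we were permissive:

# Review AVC denials collected while the domains were permissive
ausearch -m AVC -ts recent
# Build and load a local policy module from them ("varnish_local" is an arbitrary name)
ausearch -m AVC -ts boot | audit2allow -M varnish_local
semodule -i varnish_local.pp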

Overall, I recommend the Red Hat guide to SELinux as bedtime reading material.

Configure FirewallD

firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --zone=public --add-service=https --permanent
firewall-cmd --reload
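
You can confirm the services are now allowed with:

firewall-cmd --zone=public --list-services

The output should include http and https, alongside any other services you have enabled.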

Configure Hitch

First, we prepare a few things for Hitch, namely the dhparams.pem file and the TLS certificate.

Generate dhparams.pem

openssl dhparam -rand - 2048 | sudo tee /etc/hitch/dhparams.pem

Generate TLS certificate

We will use Certbot, the official Let’s Encrypt client, to generate the certificate. It is best to couple Certbot with DNS-based validation, because it allows you to fetch certificates even when your website has not yet been launched and the domain is not yet pointed at the server’s IP address.

With your domain already registered in Cloudflare, you need to fetch your API key and create a ~/.cloudflare.ini file (/root/.cloudflare.ini, to be precise):

# Cloudflare API credentials used by Certbot
dns_cloudflare_email = john@example.com
dns_cloudflare_api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxx

Secure the file with:

chmod 0600 ~/.cloudflare.ini

Our setup is going to be valid for one domain and all of its subdomains. We can generate a wildcard certificate like so:

certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.cloudflare.ini \
  -d example.com -d '*.example.com' \
  --email john@example.com --non-interactive --agree-tos \
  --deploy-hook="/usr/bin/hitch-deploy-hook" \
  --post-hook="/usr/bin/systemctl reload hitch"

Don’t worry if you got:

Hook command “/usr/bin/systemctl reload hitch” returned error code 1

The important part is:

 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/example.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/example.com/privkey.pem

The file usable by Hitch will be auto-generated by hitch-deploy-hook at /etc/letsencrypt/live/example.com/hitch-bundle.pem.
(You may want to vote up the relevant upstream feature request so that Certbot certificates become usable without deployment hooks.)
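
To confirm that unattended renewals (including the deploy hook) will keep working, you can do a dry run:

certbot renew --dry-run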

Now onto editing our Hitch configuration at /etc/hitch/hitch.conf:

# Run 'man hitch.conf' for a description of all options.

# Our Linux kernel is recent enough, so let's benefit from the TFO speedup:
tcp-fastopen = on

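# If your OpenSSL build supports it, you can append TLSv1.3 to this list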
tls-protos = TLSv1.0 TLSv1.1 TLSv1.2

pem-file = "/etc/letsencrypt/live/example.com/hitch-bundle.pem"

frontend = {
    host = "*"
    port = "443"
}

backend = "/var/run/varnish/varnish.sock"  
workers = 4 # number of CPU cores

daemon = on

# We strongly recommend you create a separate non-privileged hitch
# user and group
user = "hitch"
group = "hitch"

# Enable to let clients negotiate HTTP/2 with ALPN:
alpn-protos = "h2, http/1.1"

# Varnish is our backend and it listens over PROXY
write-proxy-v2 = on             # Write PROXY header

The hitch user should be able to read and write the Varnish socket file, so we make it a member of the varnish group:

usermod -a -G varnish hitch
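
Hitch can also verify the configuration without actually starting (the certificate bundle must already exist at this point):

hitch --test --config=/etc/hitch/hitch.conf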

Configure Varnish

We need to tell Varnish to listen on a socket (Hitch will use it to talk to Varnish). That is the only place Varnish will listen in our setup, being the caching layer of our app. However, we also want to keep it on a private HTTP port (6081): this will be used only by external cache-purging apps, which are easy to configure with TCP details.

Run systemctl edit varnish and paste in:

[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -f /etc/varnish/default.vcl -s malloc,256m \
    -a /var/run/varnish/varnish.sock,PROXY,user=varnish,group=hitch,mode=660 \
    -a 127.0.0.1:6081 \
    -p feature=+http2

Since we’re going to store socket files in a dedicated directory, /var/run/varnish, we will “tell” SELinux what we’re going to use the directory for:

semanage fcontext -a -t varnishd_var_run_t "/var/run/varnish(/.*)?"

And we also make sure that our sockets directory is created after reboot:

cat << _EOF_ >> /etc/tmpfiles.d/varnish.conf
  d /run/varnish 755 varnish varnish
_EOF_

And we also create it right away via:

systemd-tmpfiles --create varnish.conf

For SELinux’s happiness you could run restorecon -v /var/run/varnish, but this is not required because systemd-tmpfiles takes care of that.

Configure Varnish VCL at /etc/varnish/default.vcl

vcl 4.1;

import std;

acl purgers { "127.0.0.1"; }

backend default {
    .path = "/var/run/nginx/nginx.sock";
}
sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed."));
        }
        return (purge);
    }
}
...

If you attempt to start Varnish now (before NGINX is up), you may get an error because of this bug: Varnish wants the backend’s socket file to exist. So don’t start it yet.
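For reference, once the whole stack is running, the purgers ACL above allows cache invalidation from the local machine; a hypothetical invalidation of /some/page could look like:

curl -X PURGE -H 'Host: example.com' http://127.0.0.1:6081/some/page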

Configure NGINX

Our NGINX instance will be configured to listen primarily on a UNIX socket. It will also listen on port 80, but only for http:// to https:// redirects.

Unfortunately, NGINX has no option to set permissions on its socket files and defaults to a world-readable and world-writable file. So it is better to have a dedicated directory, /var/run/nginx, to hold its socket file(s). This makes it easier to govern access to the socket.

Make sure to set up the correct SELinux label for our directory:

semanage fcontext -a -t httpd_var_run_t "/var/run/nginx(/.*)?"

And ensure that the directory is created at boot time:

cat << _EOF_ >> /etc/tmpfiles.d/nginx.conf
  d /run/nginx 750 nginx varnish
_EOF_

This way, only Varnish can access NGINX’s socket files.

And we also create it right away via:

systemd-tmpfiles --create nginx.conf

Now, onto actual NGINX configuration.

First, we empty the default example configuration file. We do not delete it, so that subsequent upgrades will not restore it.
We also create the de facto standard directory layout for virtual hosts:

echo > /etc/nginx/conf.d/default.conf
mkdir /etc/nginx/{sites-available,sites-enabled}

/etc/nginx/nginx.conf

We will auto-include our available websites by putting this in the http {} context:

include /etc/nginx/sites-enabled/*.conf;

/etc/nginx/conf.d/varnish.conf

This file ensures that, for requests traveling Varnish -> NGINX, we trust the client IP address in the X-Forwarded-For header as the real thing:

real_ip_header X-Forwarded-For;
set_real_ip_from unix:;
real_ip_recursive on;

Configure a website in NGINX

mkdir -p /srv/www/example.com/{public,sessions,logs}

Create user:

useradd example
chown -R example:example /srv/www/example.com

/etc/nginx/sites-available/example.com.conf

server {
    # Our NGINX listens at port 80 ONLY for https redirects
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
server {
    # Canonicalize www to the bare domain ($server_name here would redirect to itself)
    listen unix:/var/run/nginx/nginx.sock;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
server {
    listen unix:/var/run/nginx/nginx.sock default_server;
    server_name example.com;

    root /srv/www/example.com/public;
    index index.php;

    # pass the PHP scripts to PHP-FPM listening on socket
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm/example.com.sock;
        fastcgi_index  index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Enable the site:

ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/example.com.conf 
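
It’s a good habit to validate the configuration before (re)starting NGINX:

nginx -t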

Configure PHP-FPM

Similar to NGINX, let’s trim the default configuration file so that upgrades won’t restore it:

cp -p /etc/php-fpm.d/www.conf /etc/php-fpm.d/www.template
echo > /etc/php-fpm.d/www.conf

The actual config goes into our own file. Ensure at least these settings in the new file /etc/php-fpm.d/example.com.conf:

[example.com]
user = example
group = example
listen = /var/run/php-fpm/example.com.sock
listen.owner = example
listen.group = example
listen.mode = 0660

pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35

php_admin_value[error_log] = /srv/www/example.com/logs/php.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path]    = /srv/www/example.com/sessions

For NGINX to be able to read and write the socket file and website files, we make the nginx user a member of the example user group:

usermod -a -G example nginx
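
PHP-FPM can likewise test its pool configuration before we start it:

php-fpm -t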

We need to start NGINX now to create the socket that Varnish relies on, and only then start Varnish.

systemctl start nginx

At this point, NGINX listens on port 80 and redirects to TLS port 443. It also listens on the UNIX socket.

Now we can start PHP-FPM and Varnish:

systemctl start php-fpm varnish

Now Varnish listens on port 6081 (primarily for cache invalidation) and on the UNIX socket /var/run/varnish/varnish.sock, which is used as the backend for Hitch.

Under the website user (example), create the file /srv/www/example.com/public/index.php with the contents:

<?php phpinfo();

Check your example.com in a browser, and hopefully, it works :). Search for $_SERVER['REMOTE_ADDR'] on the page: you should see your real IP address, thanks to the PROXY protocol!
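
You can also verify HTTP/2 negotiation from the command line. Run this from a machine with an HTTP/2-capable curl (the stock CentOS 7 curl is too old):

curl -sI https://example.com/

The first response line should read HTTP/2 200 if ALPN negotiation with Hitch succeeded.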

Potential issues

Make sure to check the NGINX error log at /var/log/nginx/error.log.

Verify that both UNIX sockets are listening:

netstat -lx
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     11601    /run/dbus/system_bus_socket
unix  2      [ ACC ]     STREAM     LISTENING     9813     /run/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     168812   /var/run/nginx/nginx.sock
unix  2      [ ACC ]     SEQPACKET  LISTENING     9851     /run/udev/control
unix  2      [ ACC ]     STREAM     LISTENING     7117     /run/systemd/journal/stdout
unix  2      [ ACC ]     STREAM     LISTENING     168952   /var/run/varnish/varnish.sock
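
If the sockets are listed but the site still misbehaves, you can talk to NGINX’s socket directly, bypassing Hitch and Varnish (requires curl >= 7.40 for --unix-socket):

curl -I --unix-socket /var/run/nginx/nginx.sock http://example.com/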

Now it’s OK to enable our stack’s services at boot time via:

systemctl enable --now hitch varnish nginx php-fpm

Bonus tip:

As we’re talking primarily about UDS, be sure to know the simple difference between localhost and 127.0.0.1 when setting up your app’s MySQL connection details.

Make sure your website uses MySQL sockets on a single-server setup.
You typically achieve this by putting “localhost” as the database host in the configuration file, as opposed to “127.0.0.1”, which would use the TCP/IP stack.
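
You can see the difference with the mysql client itself: with “localhost” it connects over the socket, while “127.0.0.1” forces TCP (the user name here is an example):

# Connects via the UNIX socket and shows the socket path in use
mysql -h localhost -u example -p -e "SHOW VARIABLES LIKE 'socket';"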

That’s about it.

Naturally, this guide cannot cover everything, such as the actual setup of your app (a PHP CMS, or whatnot). We merely showed the main concepts of a single-server setup that leverages UDS and the PROXY protocol for efficiency.

It’s the year 2019, and the software is all there to make sure your single server uses UNIX sockets, without the slower, loopback TCP-based IPC!
