

Magento 2: Distributing cache purge requests over multiple Varnish instances




Magento 2 is one of the shiny open-source e-commerce frameworks that support Varnish out of the box.
Not only does it support ESI block-based caching, it also supports purging multiple Varnish instances.

Configure multiple Varnish instances in Magento CLI

Magento makes it easy to configure much of the framework configuration, including cache, via CLI. You can set multiple Varnish instances for sending PURGE requests using a comma-separated list, for example:

php bin/magento setup:config:set --no-interaction --http-cache-hosts=127.0.0.1:6081,1.2.3.4:6081

We just told Magento that there is a Varnish instance on local port 6081, i.e. on the same machine,
and another, remote Varnish instance at 1.2.3.4, also listening on port 6081.

Now Magento knows to send cache purge requests to all configured Varnish instances whenever content changes.
There is no limit on the number of Varnish instances you can configure this way.
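For reference, the purges Magento sends are plain HTTP requests using the PURGE method with an X-Magento-Tags-Pattern header, which the stock Magento VCL turns into a ban. A minimal sketch of sending one by hand (the host, port, and tag pattern below are illustrative):

```python
# Send a Magento-style purge to a single Varnish instance by hand.
# The host, port, and tag pattern used here are illustrative assumptions.
import http.client

def send_purge(host, port, tags_pattern=".*", path="/"):
    """Issue a PURGE request the way Magento does; return the HTTP status."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("PURGE", path, headers={"X-Magento-Tags-Pattern": tags_pattern})
    status = conn.getresponse().status
    conn.close()
    return status
```

For example, `send_purge("127.0.0.1", 6081, "cat_p_42")` would ask the local Varnish to invalidate pages tagged with that product tag.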

However, this support is a bit rudimentary: when you save a product, Magento sends a cache purge request to each configured Varnish instance sequentially, one after another.

That means your backend becomes increasingly slow when updating data, and the problem only grows as you expand your Varnish-based CDN to more edge servers.
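The cost is easy to model: with N instances and, say, 50 ms of round trip each, a product save waits roughly N × 50 ms, whereas purging all instances in parallel waits only about as long as the slowest single one. A toy sketch with simulated latency (no real Varnish involved):

```python
# Toy model of sequential vs. broadcast purging (simulated latency only).
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.05  # simulated round-trip time per Varnish instance, in seconds

def fake_purge(host):
    time.sleep(LATENCY)  # stand-in for one PURGE round trip
    return host

hosts = ["ca", "ga", "london", "sydney", "toronto", "tx"]

start = time.monotonic()
for h in hosts:  # what Magento does: one instance after another
    fake_purge(h)
sequential = time.monotonic() - start

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:  # broadcasting: all at once
    list(pool.map(fake_purge, hosts))
parallel = time.monotonic() - start

print(f"sequential: {sequential:.2f}s, parallel: {parallel:.2f}s")
```

With six simulated instances, the sequential pass takes about six times as long as the parallel one.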

Broadcasting HTTP messages

Since Varnish is essentially an HTTP server, and Magento only needs to send purge requests to each configured Varnish instance, broadcasting makes much sense here.

What’s that? HTTP broadcasting simply means repeating exactly the same request to other machines/IP addresses. In our case, we need to broadcast all PURGE and BAN requests.
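As a sketch of the idea, a broadcaster simply replays one incoming request verbatim against every configured backend in parallel (the host list below is hypothetical):

```python
# Minimal HTTP broadcaster sketch: replay one request verbatim to every
# configured Varnish instance in parallel. The hosts below are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import http.client

VARNISH_HOSTS = [("127.0.0.1", 6081), ("1.2.3.4", 6081)]  # hypothetical instances

def broadcast(method, path, headers, hosts=VARNISH_HOSTS):
    """Send the same request to every instance; return (host, status) pairs."""
    def send(hostport):
        host, port = hostport
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.request(method, path, headers=dict(headers))
        status = conn.getresponse().status
        conn.close()
        return (host, status)
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        return list(pool.map(send, hosts))
```

A call like `broadcast("PURGE", "/", {"X-Magento-Tags-Pattern": ".*"})` would then invalidate everything on every instance at once.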

There are open-source tools that perform HTTP broadcasting, which is exactly what distributing PURGE requests to many Varnish instances calls for.

However, they are few and often outdated.

Varnish Enterprise includes Varnish Broadcaster, which essentially does HTTP broadcasting. However, not everyone can afford Enterprise.

Using NGINX

NGINX and its ngx_http_mirror_module can broadcast a single HTTP request to multiple servers quite easily, and without making the client wait for the mirrored requests to complete.

Here’s a configuration that does just that with one local Varnish instance and six more remote Varnish instances strategically positioned for a US-based website.
We only need to set this up on the main Magento 2 instance, which runs the standard LEMP + Varnish stack.
We add a new server block in NGINX that listens on port 3000, e.g. /etc/nginx/sites-enabled/broadcaster.conf:

upstream local {
   ip_hash;
   server localhost:80;
   keepalive 5;
}

upstream ca {
   ip_hash;
   server ca.example.com:80;
   keepalive 5;
}

upstream ga {
   ip_hash;
   server ga.example.com:80;
   keepalive 5;
}

upstream london {
   ip_hash;
   server london.example.com:80;
   keepalive 5;
}

upstream sydney {
   ip_hash;
   server sydney.example.com:80;
   keepalive 5;
}

upstream toronto {
   ip_hash;
   server toronto.example.com:80;
   keepalive 5;
}

upstream tx {
   ip_hash;
   server tx.example.com:80;
   keepalive 5;
}


server {
    listen 3000;
    server_name localhost;

    # purges happen over fast connections, so we can disable buffering for better speed
    proxy_buffering off;

    # required for the "keepalive" directive in the upstreams to take effect:
    # upstream keepalive needs HTTP/1.1 and a cleared Connection header
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    proxy_set_header Host $host;

    location / {
        mirror /ca;
        mirror /ga;
        mirror /sydney;
        mirror /toronto;
        mirror /tx;
        mirror /london;
        proxy_pass http://local;
    }

    location = /ca {
        internal;
        proxy_pass http://ca$request_uri;
    }

    location = /ga {
        internal;
        proxy_pass http://ga$request_uri;
    }

    location = /london {
        internal;
        proxy_pass http://london$request_uri;
    }

    location = /sydney {
        internal;
        proxy_pass http://sydney$request_uri;
    }

    location = /toronto {
        internal;
        proxy_pass http://toronto$request_uri;
    }

    location = /tx {
        internal;
        proxy_pass http://tx$request_uri;
    }
}

In the first part of the above configuration, we define a number of upstreams, one for each of our Varnish instances running in various locations. For example, tx.example.com stands for Texas,
whereas ca is an instance in California. All Varnish instances listen on port 80.

Next, we use as many mirror directives as we have upstreams, each pointing to an internal location that defines how the mirrored request is proxied to that upstream.

Now we can tell Magento that we have only one Varnish instance because our NGINX broadcaster config will take care of sending PURGE requests over our entire “Varnish network”:

php bin/magento setup:config:set --no-interaction --http-cache-hosts=127.0.0.1:3000

That’s it. You’ll notice that the Magento 2 backend becomes much faster: it only waits for the purge request to the local/main Varnish instance to complete, while NGINX mirrors the rest in the background.
