
Setting up Varnish as Full Page Cache for Magento 2

Magento powered by Varnish Cache

Once you have set up your new Magento 2 instance, chances are, it’s running with the “Built-in” cache.

The “Built-in” cache is actually backed by either file or Redis storage.

It is important to understand that the “Built-in” cache will always be slower than Varnish. That is because it always needs to execute some portion of PHP code in order to serve cached pages.

Varnish, on the other hand, allows serving cached data while completely bypassing the execution of PHP. This makes it very fast!

And that is why Varnish cache is the standard, recommended solution for running Magento 2 in production.

So, what changes need to be made on your server for Magento 2 to work with Varnish? This post is about just that: transitioning your Magento 2 installation from the simple built-in cache to the fully featured Varnish Full Page Cache.

The stack

Open-source Varnish does not support HTTPS, but this is not a game changer. The de-facto standard setup is an NGINX sandwich in which a request goes through an NGINX server block that terminates TLS, then Varnish, and finally another NGINX server block that actually runs the Magento code via PHP-FPM.

To remove some confusion, in this kind of setup we actually run one instance of NGINX that provides all those server blocks mentioned.
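Concretely, with the ports used throughout this post, a request travels like this:

client (HTTPS, port 443)
  → NGINX TLS termination server block (listens on 443)
  → Varnish (listens on 6081)
  → NGINX “Magento” server block (listens on 8080)
  → PHP-FPM running the Magento code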

Set up Varnish

Choose the Varnish version

Magento 2 supports multiple Varnish versions and can generate VCL (Varnish Configuration Language) files for each supported version.

However, when you run a critical application, which Magento 2 is, you should aim for LTS versions.

At the moment of writing, there is one LTS version of Varnish, which is the 6.0.x series. To install that, a few commands are enough on a CentOS/RHEL server.

You can find installation instructions for Varnish 6.0 LTS here, but if you plan to use extra Varnish modules (VMODS), we recommend relying on our commercial repository (requires subscription).

CentOS/RHEL 7

sudo yum -y install https://extras.getpagespeed.com/release-latest.rpm
sudo yum -y install yum-utils
sudo yum-config-manager --enable getpagespeed-extras-varnish60
sudo yum -y install varnish

CentOS/RHEL 8

sudo dnf -y install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf -y install varnish

CentOS/RHEL 9+

sudo dnf -y install https://extras.getpagespeed.com/release-latest.rpm dnf-plugins-core
sudo dnf config-manager --enable getpagespeed-extras-varnish60
sudo dnf -y install varnish
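Regardless of the OS version, you can confirm that the 6.0 LTS series got installed:

# Print the installed Varnish version; expect something in the 6.0.x series
varnishd -V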

Choose the Varnish port

With the prevalence of SSL-only websites, we recommend keeping the default port that Varnish runs on, which is 6081.
So there is no need to do anything on that side. Keeping the default port allows easy HTTP to HTTPS redirects configuration in NGINX.
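If you want to double-check which address and port the packaged service listens on, look at the service definition; depending on the package, the -a listen address is set either directly in the systemd unit or in /etc/varnish/varnish.params:

# Show the ExecStart line of the Varnish service; the -a flag holds the listen address (:6081 by default)
systemctl cat varnish | grep ExecStart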

Configure NGINX

As we mentioned earlier, a single NGINX instance is more than enough for supporting a Varnish-based setup. We need the following server { ... } blocks: a TLS termination block that proxies secure requests to Varnish, the “Magento” block that Varnish talks to on the backend, and the blocks performing HTTP to HTTPS and canonical domain redirects.

Let’s start with the latter.

HTTPS and other redirects in NGINX

Remember, since in our setup Varnish does not listen on the insecure HTTP web port 80, NGINX should do it instead, in order to redirect plain HTTP requests to HTTPS and to send visitors of the non-canonical domain to the canonical one.

In fact, if you’re transitioning from the “built-in” cache, you probably already have those redirect server blocks set up anyway. But it makes sense to double-check that they follow the HSTS requirements for the most secure redirect flow.

Provided that the canonical name is www.example.com (meaning that’s what we want visitors to actually see), 3 redirect server blocks are required to satisfy proper HSTS implementation.

server {
    listen 80; 
    server_name example.com;
    return 301 https://example.com$request_uri;
}
server {
    listen 443 ssl http2;
    more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
    ssl_certificate ...;
    ssl_certificate_key ...;
    server_name  example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 80; 
    server_name www.example.com;
    return 301 https://www.example.com$request_uri;
}
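Once these blocks are in place (and NGINX reloaded), the redirect chain can be sanity-checked with curl; example.com is, as everywhere in this post, a placeholder for your own domain:

curl -sI http://example.com/      | grep -i '^location:'   # expect https://example.com/...
curl -sI https://example.com/     | grep -i '^location:'   # expect https://www.example.com/...
curl -sI http://www.example.com/  | grep -i '^location:'   # expect https://www.example.com/...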

For more details about proper HSTS redirect flow in NGINX configuration, check out the related post:

Dissecting HTTPS redirect requirements of HSTS

Configure NGINX to trust client IP addresses sent by Varnish

Considering our stack consists of three main elements, it is important that information about the client IP address is not lost along the proxying chain.
In the steps further below, the NGINX that terminates TLS will proxy requests to Varnish for caching, and Varnish in turn will talk to the NGINX that is actually bound to the execution of Magento code.

The client’s IP address must be carried via the X-Forwarded-For header all the way until it reaches the Magento code.

Since both NGINX and Varnish do proxying, we need to ensure the “Magento NGINX” trusts the client IP headers forwarded by them, so it can determine the correct end-user IP:

Create file /etc/nginx/conf.d/varnish.conf with contents:

# Trust client IP information from proxies connecting from localhost
# (both the TLS-terminating NGINX and Varnish connect from 127.0.0.1).
# With the default real_ip_header (X-Real-IP), NGINX takes the end-user IP
# from the X-Real-IP header that the TLS termination block sets.
set_real_ip_from 127.0.0.1;
port_in_redirect off;
absolute_redirect off;

TLS termination server in NGINX

When a request comes from a client, we want it to be secure. This dictates the necessity of a dedicated NGINX TLS termination server block.
Aside from handling the encryption of the connection for us, the job of this block is to have NGINX forward requests to Varnish.
We describe our Varnish instance details in an upstream configuration of NGINX. Specifying keepalive ensures faster communication between the “TLS NGINX” and Varnish.


upstream varnish {
    ip_hash;
    server 127.0.0.1:6081;
    keepalive 32;
}

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://varnish;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host $host;
        access_log off;
        log_not_found off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # If we do not put it here, NGINX will error out with "413 Request Entity Too Large",
        # because it does not know what value we have in the actual HTTP terminated server
        client_max_body_size 32M;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Ssl-Offloaded "1";
        proxy_read_timeout 600s;
    }

    # For the Let's Encrypt webroot plugin, make sure that stuff goes through.
    location ^~ /.well-known/acme-challenge/ {
        allow all; # Avoids having this request go through Varnish.
        root /srv/www/example.com/pub;
    }

    location ^~ /.well-known/apple-developer-merchantid-domain-association {
        allow all; # Avoids having this request go through Varnish.
        root /srv/www/example.com/pub;
    }
}
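Note that $connection_upgrade, used in the Connection header above, is not a built-in NGINX variable. If your nginx.conf does not already define it, a map along these lines is needed in the http { } context (this snippet is our addition; adjust it to your existing configuration):

map $http_upgrade $connection_upgrade {
    # For WebSocket upgrade requests, send "Connection: upgrade" upstream;
    # for regular requests send an empty Connection header, so the
    # keepalive connections to Varnish stay reusable.
    default upgrade;
    ''      '';
}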

As you see, we don’t do any logging there because, for the most part, we are only interested in logging uncached requests.
Aside from that, Varnish does logging of its own, so the log of cached requests can be examined using CLI utilities:

Varnish Command-Line Utilities. Tips and Tricks

The “Magento NGINX” server block

This is where you actually process uncached requests, and it is the main piece of NGINX configuration for Magento 2.

The contents of the stock nginx.conf.sample configuration should be used for good security from the start.

To use it, we must create our “Magento NGINX” server block like so:

server {
    listen 8080;
    server_name example.com;
    set $MAGE_ROOT /srv/www/example.com;
    set $MAGE_DEBUG_SHOW_ARGS 0;
    # copy contents of nginx.conf.sample here
}

We run “Magento NGINX” on port 8080 which is not visible to the public.
Varnish will be making the requests to that port.

One modification is required after this: uncomment the upstream fastcgi_backend { ... } block and edit it to point at your PHP-FPM pool’s UNIX socket path.
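For illustration, the uncommented upstream might look like the following; it lives in the http context (e.g. above the server block shown earlier), and the socket path here is just an example, so use whatever your PHP-FPM pool actually listens on:

upstream fastcgi_backend {
    # Path to the PHP-FPM pool's UNIX socket; adjust to your actual pool configuration
    server unix:/run/php-fpm/magento.sock;
}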

Tuning NGINX buffers

Lastly, because the latest Magento 2 produces a huge header value for the Content Security Policy, we need to tell NGINX to accept responses from Varnish that carry such large headers:

Create /etc/nginx/conf.d/custom.conf with contents:

# maximum HTTP headers size, nginx will always buffer this no matter if proxy buffering is off or not
proxy_buffer_size 18k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
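If you are curious how large the headers actually are on your store, you can roughly measure them; the Content-Security-Policy header usually dominates the total (www.example.com is a placeholder for your own domain):

# Dump only the response headers and count their total size in bytes
curl -s -D - -o /dev/null https://www.example.com/ | wc -c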

Configure Magento 2 for Varnish

After all the server software is set up in place, let’s configure Magento 2 for Varnish. These steps actually change two things about Magento’s behavior: which application Magento uses for the full page cache (Varnish instead of the built-in cache), and where Magento sends cache purge requests when content changes.

As always, the best way to configure things is using the CLI.

To tell Magento to use Varnish FPC instead of the built-in cache, run:

php bin/magento config:set system/full_page_cache/caching_application 2
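To verify the change took effect, you can read the value back; it should print 2, which corresponds to Varnish (assuming bin/magento config:show is available in your Magento version):

php bin/magento config:show system/full_page_cache/caching_application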

Aside from that, we want Magento to instruct Varnish to cache pages for longer, namely two weeks:

# Make cache stay longer, two weeks
bin/magento config:set --scope=default --scope-code=0 system/full_page_cache/ttl 1209600

And to tell Magento where Varnish is listening (that is, where to send the PURGE requests), run:

php bin/magento setup:config:set --no-interaction --http-cache-hosts=127.0.0.1:6081
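This writes the cache hosts into app/etc/env.php; a quick way to confirm is to look for the http_cache_hosts entry there (the exact array layout may differ between Magento versions):

# Show the cache hosts entry written by setup:config:set
grep -A 4 "http_cache_hosts" app/etc/env.php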

Next, we must export the Varnish configuration file with the appropriate values:

bin/magento varnish:vcl:generate --export-version=6 --access-list localhost --backend-host localhost --backend-port 8080 --output-file var/default.vcl

Now, as a sudo user, copy the file in place of the default Varnish configuration:

sudo /bin/cp /srv/www/example.com/var/default.vcl /etc/varnish/default.vcl

We also have to work around the notorious Magento 2 bug where the generated VCL references the health check under /pub/, which is wrong when the document root is already pub, by patching the VCL:

sed -i 's@/pub/health_check.php@/health_check.php@g' /etc/varnish/default.vcl
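After copying and patching the VCL, you can sanity-check that the health check endpoint Varnish will probe is reachable on the backend port; a 200 response is expected if the application is healthy (assuming the “Magento NGINX” block from earlier listens on 8080):

# Request the health check through the Magento NGINX backend and print only the status code
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/health_check.php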

Also, in /etc/varnish/default.vcl, we highly recommend extending the purge ACL to include the server’s own public IP address, as well as the IPv6 loopback, like so:

acl purge {
   "127.0.0.1";
   "::1";
   "localhost";
   "x.x.x.x"; # replace with server public IP
}

This is important because some servers resolve the hosted website’s domain to 127.0.0.1, and some to the external server IP.
As a result, depending on the case, Varnish will receive PURGE requests via different network interfaces.
We allow all of them to cover every scenario.
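Once Varnish is running with this VCL, you can test the purge ACL from the server itself. In the VCL that Magento generates, a PURGE request must carry an X-Magento-Tags-Pattern (or X-Pool) header, and an allowed client should get a “200 Purged” response back:

# Send a test PURGE (matching all cache tags) to Varnish from localhost
curl -si -X PURGE -H 'X-Magento-Tags-Pattern: .*' http://127.0.0.1:6081/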

Checking and starting without downtime

We believe you followed everything precisely, but to minimize downtime, check at the very least the NGINX configuration: running nginx -t is a must to ensure a proper configuration is in place.

And now we’re actually ready to make NGINX run with the new configuration.

nginx -t && systemctl reload nginx

After this, we can finally set Varnish to start now and at boot time:

systemctl enable --now varnish

That’s it. Requests now go through Varnish for caching, and you can start checking your cache hit ratio with the varnishstat command.
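For example, to print just the hit and miss counters once (field names as in Varnish 6):

varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss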
