
Setting up Varnish as Full Page Cache for Magento 2



Once you have set up your new Magento 2 instance, chances are, it’s running with the “Built-in” cache.

The “Built-in” cache is actually backed with either file or Redis storage.

It is important to understand that the “Built-in” cache will always be slower than Varnish. That is because it always needs to execute some portion of PHP code in order to serve cached pages.

Varnish, on the other hand, can serve cached data while completely bypassing PHP execution. This makes it very fast!

And that is why Varnish cache is the standard, recommended solution for running Magento 2 in production.

So what changes need to be made on your server for Magento 2 to work with Varnish? This post is about just that: transitioning your Magento 2 installation from the simple built-in cache to a fully featured Varnish full page cache.

The stack

Open source Varnish does not support HTTPS, but this is not a deal breaker. The de facto standard setup is an NGINX sandwich, where a request goes through:

  • NGINX TLS server block, which does exactly what Varnish can’t – provide a secure connection for clients
  • Varnish cache
  • NGINX Magento server block, which is bound to PHP-FPM engine for running Magento

To avoid confusion: in this kind of setup we actually run a single instance of NGINX that provides all of the server blocks mentioned.

Set up Varnish

Choose the Varnish version

Magento 2 actually supports several Varnish versions and is capable of generating a VCL (Varnish configuration) file for each supported one.

However, when you run a critical application, which Magento 2 is, you should aim for LTS versions.

At the time of writing, there is one LTS version of Varnish: the 6.0.x series. Installing it takes only a few commands on a CentOS/RHEL server.

You can find installation instructions for Varnish 6.0 LTS here, but if you plan to use extra Varnish modules (VMODS), we recommend relying on our commercial repository (requires subscription).


On CentOS/RHEL 7:

sudo yum -y install https://extras.getpagespeed.com/release-latest.rpm
sudo yum -y install yum-utils
sudo yum-config-manager --enable getpagespeed-extras-varnish60
sudo yum -y install varnish

On CentOS/RHEL 8:

sudo dnf -y install https://extras.getpagespeed.com/release-latest.rpm
sudo dnf -y install varnish

Choose the Varnish port

With the prevalence of SSL-only websites, we recommend keeping the default port that Varnish runs on, which is 6081.
So there is no need to do anything on that side. Keeping the default port allows easy HTTP-to-HTTPS redirect configuration in NGINX.
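For reference, on RPM-based systems the listen port is part of the varnish systemd unit. A typical ExecStart line looks like the following (the VCL path and cache size shown are illustrative defaults; check your own unit file):

```
# excerpt from the varnish systemd unit (values illustrative)
ExecStart=/usr/sbin/varnishd -a :6081 -f /etc/varnish/default.vcl -s malloc,256m
```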

Configure NGINX

As we mentioned earlier, a single NGINX instance is more than enough to support a Varnish-based setup. We need at least the following server blocks:

  • The “TLS NGINX” server block for terminating TLS and proxying requests to Varnish
  • The “Magento NGINX” server block for connecting to PHP-FPM (the actual Magento application server block, if you will). If you have an existing installation with the “Built-in” cache, this is a block you already have
  • Supplementary server blocks whose sole purpose is redirecting to the canonical protocol and domain name

Let’s start with the latter.

HTTPS and other redirects in NGINX

Remember: since in our setup Varnish does not listen on the insecure HTTP port 80, NGINX should, in order to:

  • make redirects from plain HTTP to the secure HTTPS protocol
  • redirect from non-canonical domain names of the website to the canonical one, e.g. from www. to the bare domain or vice versa, depending on which you prefer to be canonical

In fact, if you’re transitioning from the “built-in” cache, you have probably already set up those redirect server blocks anyway. But it makes sense to double-check that they follow the HSTS requirements for the most secure redirect flow.

Provided that the canonical name is the bare domain (we use example.com as a placeholder below for what we want visitors to actually see), 3 redirect server blocks are required to satisfy a proper HSTS implementation.

server {
    listen 80;
    server_name example.com; # placeholder: your canonical domain
    return 301 https://example.com$request_uri;
}
server {
    listen 443 ssl http2;
    server_name www.example.com;
    more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
    ssl_certificate ...;
    ssl_certificate_key ...;
    return 301 https://example.com$request_uri;
}
server {
    listen 80;
    server_name www.example.com;
    return 301 https://www.example.com$request_uri;
}

For more details about proper HSTS redirect flow in NGINX configuration, check out the related post:

Dissecting HTTPS redirect requirements of HSTS

Configure NGINX to trust client IP addresses sent by Varnish

Since our stack consists of 3 main elements, it is important that information about the client IP address is not lost along the proxy chain.
In the steps below, the NGINX that terminates TLS proxies requests to Varnish for caching, and Varnish in turn talks to the NGINX that is actually bound to the execution of Magento code.

The remote IP address must be passed along via the X-Forwarded-For header all the way until it reaches the Magento code.
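To illustrate how the header grows along the chain (all addresses here are hypothetical), each proxy hop appends the address of its own peer:

```shell
# Hypothetical illustration of how X-Forwarded-For accumulates per hop.
client_ip="203.0.113.7"     # the real visitor address (hypothetical)
xff="$client_ip"            # "TLS NGINX" sets X-Forwarded-For to the client IP
xff="$xff, 127.0.0.1"       # Varnish appends its peer (the TLS NGINX) address
echo "$xff"                 # prints: 203.0.113.7, 127.0.0.1
```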

Since both NGINX and Varnish do proxying, we need to ensure the “Magento NGINX” trusts the X-Forwarded-For header coming from both, for finding the correct end-user IP:

Create file /etc/nginx/conf.d/varnish.conf with contents:

set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
port_in_redirect off;
absolute_redirect off;

TLS termination server in NGINX

When a request comes from a client, we want it to be secure. This dictates the need for a special NGINX TLS termination server block.
Aside from encrypting the connection for us, the job of this block is to have NGINX forward requests to Varnish.
We describe our Varnish instance in an upstream configuration block in NGINX. Specifying keepalive ensures faster communication between the “TLS NGINX” and Varnish.

upstream varnish {
    ip_hash;
    server 127.0.0.1:6081;
    keepalive 32;
}
server {
    listen 443 ssl;
    server_name example.com; # placeholder: your canonical domain
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://varnish;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host $host;
        access_log off;
        log_not_found off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # Without this, NGINX errors out with "413 Request Entity Too Large",
        # because it does not know the value set in the actual backend server block
        client_max_body_size 32M;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Ssl-Offloaded "1";
        proxy_read_timeout 600s;
    }
    # For the Let's Encrypt webroot plugin, make sure validation requests go
    # through without hitting Varnish.
    location ^~ /.well-known/acme-challenge/ {
        allow all;
        root /srv/www/; # adjust to your web root
    }
    location ^~ /.well-known/apple-developer-merchantid-domain-association {
        allow all;
        root /srv/www/; # adjust to your web root
    }
}
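Note that the Connection header above uses the $connection_upgrade variable, which is not built into NGINX. It is conventionally defined with a map block at the http level; if your configuration does not have one yet, a common definition (e.g. in a file under /etc/nginx/conf.d/) is:

```
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```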

As you can see, we don’t do any logging there because, for the most part, we are interested in logging only the uncached requests.
Aside from that, Varnish does logging of its own, so the log of cached requests can be examined using CLI utilities:

Varnish Command-Line Utilities. Tips and Tricks

The “Magento NGINX” server block

This is where you actually process uncached requests, and it is the main piece of NGINX configuration for Magento 2.

Use the contents of the stock nginx.conf.sample configuration for good security from the start.

To use it, we must create our “Magento NGINX” server block like so:

server {
    listen 8080;
    set $MAGE_ROOT /srv/www/; # adjust to your Magento root
    # copy contents of nginx.conf.sample here
}

We run “Magento NGINX” on port 8080 which is not visible to the public.
Varnish will be making the requests to that port.

One modification is required after this: uncomment the upstream fastcgi_backend { ... } block and edit it with your relevant PHP-FPM pool’s UNIX socket path.
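A minimal sketch of that upstream, assuming a PHP-FPM pool listening on a UNIX socket (the socket path below is an assumption; use your pool’s actual listen value):

```
upstream fastcgi_backend {
    # Socket path is an assumption; match the "listen" value of your PHP-FPM pool.
    server unix:/var/run/php-fpm/magento.sock;
}
```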

Tuning NGINX buffers

Lastly, because the latest Magento 2 produces a huge Content-Security-Policy header value, we need to tell NGINX to accept the large headers Varnish responds with:

Create /etc/nginx/conf.d/custom.conf with contents:

# maximum response header size; NGINX always buffers this much, regardless of the proxy_buffering setting
proxy_buffer_size 18k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;

Configure Magento 2 for Varnish

After all the server software is set up in place, let’s configure Magento 2 for Varnish. These steps change two things about Magento’s behavior:

  • Magento will send PURGE requests to Varnish when you have updated content
  • Magento will emit ESI tags referencing various blocks with different cache lifetimes, which Varnish will process, leveraging its full potential

As always, the best way to configure things is using the CLI.

To tell Magento to use Varnish FPC instead of the built-in cache, run:

php bin/magento config:set system/full_page_cache/caching_application 2

Aside from that, we want Magento to instruct Varnish to cache for longer: two weeks.

# Make cache stay longer, two weeks
bin/magento config:set --scope=default --scope-code=0 system/full_page_cache/ttl 1209600
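The 1209600 value is simply two weeks expressed in seconds:

```shell
# two weeks in seconds: 14 days * 24 hours * 3600 seconds
ttl=$((14 * 24 * 3600))
echo "$ttl"   # prints 1209600
```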

And to tell Magento where Varnish is listening for requests (i.e. where we send the PURGE requests to), run:

php bin/magento setup:config:set --no-interaction --http-cache-hosts=localhost:6081

Next, we must export the Varnish configuration file with appropriate values:

bin/magento varnish:vcl:generate --export-version=6 --access-list localhost --backend-host localhost --backend-port 8080 --output-file var/default.vcl

Now, as a sudo user, copy the generated file from under your Magento root (example.com is a placeholder) in place of the default Varnish configuration:

sudo /bin/cp /srv/www/example.com/var/default.vcl /etc/varnish/default.vcl

We also have to fix a notorious Magento 2 bug with the usage of pub in the generated VCL: the health check is probed at /pub/health_check.php, but since the web root is the pub/ directory itself, the correct path is /health_check.php.

sed -i 's@/pub/health_check.php@/health_check.php@g' /etc/varnish/default.vcl
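As a quick sanity check, the same substitution can be tried on a sample line first (the temporary file path is arbitrary):

```shell
# Write a sample VCL line, apply the substitution, and show the result.
printf 'if (req.url ~ "^/pub/health_check.php$") {\n' > /tmp/sample.vcl
sed -i 's@/pub/health_check.php@/health_check.php@g' /tmp/sample.vcl
cat /tmp/sample.vcl   # prints: if (req.url ~ "^/health_check.php$") {
```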

Also, in /etc/varnish/default.vcl we highly recommend the purge ACL list to include the server’s own IP address, as well as IPv6, like so:

acl purge {
   "localhost";
   "127.0.0.1";
   "::1";
   "x.x.x.x"; # replace with the server's public IPv4 address
   "xxxx::x"; # replace with the server's public IPv6 address, if any
}
This is important because some servers resolve the hosted website’s domain to 127.0.0.1 (e.g. via /etc/hosts), while others resolve it to the external server IP.
As a result, in these different cases Magento’s PURGE requests will reach Varnish via different network interfaces.
We allow all of them to cover every scenario.

Checking and starting without downtime

We trust you followed everything precisely and, to minimize downtime, checked at least the NGINX configuration: running nginx -t is a must to ensure a valid configuration is in place.

And now we’re actually ready to make NGINX run with the new configuration.

nginx -t && systemctl reload nginx

After this, we can finally set Varnish to start now and at boot time:

systemctl enable --now varnish

That’s it. Requests now go through Varnish for caching, and you can start checking your cache hit ratio with the varnishstat command.
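varnishstat’s MAIN.cache_hit and MAIN.cache_miss counters are the inputs for that ratio: hits divided by total lookups. With hypothetical counter values, the calculation looks like this:

```shell
# Hypothetical counter values, as would be reported by `varnishstat -1`
hits=980
misses=20
echo "hit ratio: $(( 100 * hits / (hits + misses) ))%"   # prints: hit ratio: 98%
```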
