Accelerate WordPress with Varnish Cache


We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you.
Active subscription is required.

WordPress is a widely used content management system, but it can often be slow to load pages, especially when dealing with high traffic. Fortunately, there is a solution to this problem – Varnish.

Varnish is a powerful caching software that can significantly speed up your WordPress site by providing a full page cache of its pages and delivering them quickly to your users.

In this article, we will show you how to accelerate WordPress with Varnish on a Red Hat-based system. We will assume that your WordPress site is hosted on NGINX.

How Varnish excels in comparison to WordPress caching plugins

It is important to understand the many benefits of Varnish that make WordPress caching plugins fade in comparison:

  • Varnish caches complete pages and serves them without invoking PHP at all. WordPress cache plugins rely on running PHP to deliver cached pages, which is slow. Many WordPress cache plugins can be configured to serve cached pages via NGINX directly, but this is still slower because it relies on checking files on disk.
  • Varnish uses RAM as its cache storage by default. A cached page is delivered without touching the disk at all!
  • Varnish has an amazing VCL configuration language that allows you to be very flexible in your cache configuration rules.
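As a small taste of that flexibility, here is an illustrative VCL fragment (the file extensions and the 7-day TTL are arbitrary example values, not part of this article's setup):

```vcl
sub vcl_backend_response {
    # Illustrative only: cache static assets longer than HTML pages
    if (bereq.url ~ "\.(css|js|png|jpe?g|svg|woff2?)$") {
        set beresp.ttl = 7d;
    }
}
```

A rule like this would be awkward to express with a file-based WordPress cache plugin, but it is a one-liner in VCL.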

Step 1: Remove an existing caching plugin

If you have an existing full-page cache plugin installed on your WordPress site, you will need to remove it. Varnish is far more effective than any WordPress caching plugin, and using both can cause conflicts and unexpected behavior.

Note that this does not apply to object cache plugins. Object cache plugins will nicely complement your Varnish-based full-page cache.

Step 2: Install Varnish

The next step is to install Varnish on your server.
At this time, we recommend fetching Varnish 6.0.x LTS using our repository. For example, on a CentOS/RHEL 7 system, you can run:

sudo yum -y install yum-utils
sudo yum-config-manager --enable getpagespeed-extras-varnish60
sudo yum -y install varnish

In case you want to rely on whichever Varnish version is shipped with your operating system, use the following command:

sudo yum install varnish

Once Varnish is installed, you can enable its automatic startup and launch immediately using the following command:

sudo systemctl enable --now varnish

By default, Varnish listens on port 6081. You don’t need to change its port. We are going to use NGINX as a TLS terminator. It will forward requests to Varnish, and we can keep using NGINX for HTTP to HTTPS redirects.

Step 3: Configure Varnish

Varnish is configured by editing /etc/varnish/default.vcl. There we will define which server Varnish has to talk to in order to fetch content when it receives a request (that is our “main” NGINX server block), and we will also configure how Varnish handles cache purge requests.

Here’s a sample Varnish configuration that will suffice for a WordPress installation that doesn’t have cache-unfriendly plugins (more on those below):

vcl 4.0;

# Where Varnish should forward requests
backend default {
    .host = "";  # our "main" NGINX server block (see the NGINX step below)
    .port = "8080";
}

# Which IP addresses can send cache purge requests
acl purge {
    "localhost";
    "";  # the WordPress server itself; adjust as needed
}

sub vcl_recv {
    # Remove the proxy header to mitigate the httpoxy vulnerability
    # See
    unset req.http.proxy;

    # Purge logic
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "PURGE not allowed for this IP address"));
        }
        if (req.http.X-Purge-Method == "regex") {
            ban("obj.http.x-url ~ " + req.url + " && obj.http.x-host == " +;
            return (synth(200, "Purged"));
        }
        ban("obj.http.x-url == " + req.url + " && obj.http.x-host == " +;
        return (synth(200, "Purged"));
    }
}

sub vcl_backend_response {
    # Inject URL & Host header into the object for asynchronous banning purposes
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host =;
}

sub vcl_deliver {
    # Cleanup of headers
    unset resp.http.x-url;
    unset resp.http.x-host;
}

Now apply the new configuration by restarting Varnish:

sudo systemctl restart varnish

Step 4: Install the Proxy Cache Purge plugin

Varnish caches the latest versions of your WordPress pages, and you will need to install a plugin that can automatically invalidate cached pages when changes are made. We recommend using the Proxy Cache Purge plugin.

Install and activate the plugin using WordPress CLI:

wp plugin install varnish-http-purge --activate

Once the plugin is installed, configure it with the Varnish IP and port. Here we assume Varnish runs locally on its default port 6081:

wp option add vhp_varnish_ip

Step 5: Configure NGINX

The next step is to configure NGINX to work with Varnish. We will set up NGINX as a “sandwich” with the first NGINX server acting as a TLS terminator and the second NGINX server forwarding requests to Varnish.

To do this, we will need to make some changes to the existing NGINX configuration file. For simplicity of explanation, we assume your current NGINX setup consists of 2 server blocks. One block does HTTP to HTTPS redirection and the other block actually handles PHP requests, aka the main server block:

# the redirection server block:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

# the main server block:
server {
    listen 443 ssl http2;
    # SSL configuration goes here
    # ...
    location ~ \.php$ {
        fastcgi_pass ...
    }
}

When we introduce Varnish as a caching layer, we have to make these changes:

  • adjust the main server block to listen on an arbitrary port, e.g. 8080
  • introduce the TLS termination server block

The first change is rather trivial. In your main server block, remove the SSL configuration and change listen 443 ssl http2; to listen 8080;:

# the main server block:
server {
    listen 8080;
    # ...
    location ~ \.php$ {
        fastcgi_pass ...
    }
}

Next up, add a new server {} TLS terminating block like this:

upstream varnish {
    server;  # Varnish listening on its default port
    keepalive 64;
}

# map required by the $connection_upgrade variable used below
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    # SSL configuration goes here
    # ...
    location / {
        proxy_pass http://varnish;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header Ssl-Offloaded "1";
    }
}
Next, we will use the NGINX real IP module to ensure that NGINX trusts the end visitor IP address forwarded by Varnish, and does not include the Varnish port while performing redirects.
Create /etc/nginx/conf.d/varnish.conf with the following contents (the set_real_ip_from value assumes Varnish connects from localhost):

set_real_ip_from;
real_ip_header X-Real-IP;
port_in_redirect off;
absolute_redirect off;

You can now reload the NGINX configuration by running sudo systemctl reload nginx.
From now on, any request coming to the site will be redirected to HTTPS and routed through Varnish as the caching layer.

Step 6: Test and monitor

With Varnish and NGINX configured, you should notice a significant improvement in the speed of your WordPress site. However, it’s important to monitor your site and test it regularly to ensure that everything is working as expected.

You can use tools like Pingdom or GTmetrix to test the load time of your site and identify any areas that may still be slow. Additionally, you can use the Varnish log files to monitor traffic and identify any issues.

To easily check whether your website is being cached properly by Varnish, examine its response headers and look for the X-Varnish or Age header:

  • The Age header tells you for how long (in seconds) Varnish has had the page in the cache. A positive value indicates a cache hit, while 0 means the page was just fetched from the backend, i.e. a cache miss.
  • The X-Varnish header includes one or two numbers. The first number is the ID of the current request; on a cache hit, a second number is present, which is the ID of the request that originally stored the object in the cache. So two numbers mean a cache hit, and a single number indicates a cache miss.
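You can script this check. The snippet below parses a captured set of response headers; in practice you would feed it live output, e.g. from `curl -sI https://your-site.example/` (a placeholder URL):

```shell
# Simulated response headers; in practice pipe in: curl -sI https://your-site.example/
headers='HTTP/2 200
Age: 120
X-Varnish: 32774 32770'

# A positive Age header means Varnish served the page from cache
echo "$headers" | awk -F': ' '/^Age:/ { print ($2 > 0 ? "cache HIT" : "cache MISS") }'

# Two X-Varnish IDs also indicate a hit; a single ID is a miss
echo "$headers" | awk '/^X-Varnish:/ { print (NF == 3 ? "two IDs: cache HIT" : "one ID: cache MISS") }'
```

With the sample headers above, both checks report a hit.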

Improving your Varnish cache hit ratio

Here are a couple of reasons why you may not be getting cache hits in Varnish, and how to correct them.

1. Cache-unfriendly plugins

What about cache-unfriendly plugins? If your cache doesn’t work, you probably have one. A myriad of plugins have no regard for external caches like Varnish. Most of the time they are simply badly coded, and issues have to be filed with the plugins’ authors.

The number one offense of such plugins is starting a PHP session without any good reason.
A well-written plugin should start a PHP session only when there is user-specific data to persist for the browser, and never otherwise.

To check whether you have such a plugin, examine the response headers of your website and look for Set-Cookie.
If Varnish sees such a response from WordPress, that page can’t be cached: Varnish simply ensures that user-specific data (which a cookie is) isn’t shared between users.
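A quick command-line way to spot this (the headers below are a simulated example; in practice you would pipe in `curl -sI` output from your own site):

```shell
# Simulated response headers from a cache-unfriendly setup
headers='HTTP/2 200
Set-Cookie: PHPSESSID=abc123; path=/
Content-Type: text/html'

# Any Set-Cookie on a regular page makes it uncacheable for Varnish
if echo "$headers" | grep -qi '^set-cookie:'; then
  echo "uncacheable: backend sets a cookie"
fi
```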

2. Tracking cookies

If you use Google Analytics or any other kind of tracking for your website, chances are that they rely on cookies to persist their data.
When a cookie is set for your website, the browser will send it with every subsequent request.

It is important to understand that, by default, Varnish bypasses its cache whenever it sees any cookie in the request. This isn’t a Varnish limitation; it is default behavior that prevents user-specific information from being shared between users.

How to deal with this and improve cacheability?

Certainly, you can configure complex logic to cache user sessions separately, but here we will mention a simpler solution that is sufficient for many websites.

The solution is whitelisting WordPress cookies in your VCL config and stripping any other cookies. This can be done by adding the following in your vcl_recv { ... } block:

    if (req.http.Cookie !~ "comment_author_|wordpress_(?!test_cookie)|wp-postpass_|woocommerce") {
        # no essential cookies, nuke the rest!
        unset req.http.Cookie;
    }

Don’t forget to apply the updated config by running systemctl reload varnish.
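To see which cookies that whitelist pattern treats as essential, you can test sample cookie names against the same regex. This per-cookie check is just an illustration of the pattern; GNU grep with -P is assumed for the PCRE negative lookahead:

```shell
# Same pattern as in the VCL snippet above
pattern='comment_author_|wordpress_(?!test_cookie)|wp-postpass_|woocommerce'

for cookie in 'wordpress_logged_in_x=1' 'wordpress_test_cookie=WP+Cookie+check' '_ga=GA1.2'; do
  if echo "$cookie" | grep -qP "$pattern"; then
    echo "$cookie -> essential, kept"
  else
    echo "$cookie -> not essential, stripped"
  fi
done
```

Note how wordpress_test_cookie is deliberately excluded: it is set for every visitor, so treating it as essential would defeat caching entirely.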

3. Warm your cache

A page that has not yet been visited is not cached. To improve user experience, you may want to warm the cache to ensure that visitors only ever get cached content.

The Proxy Cache Purge plugin works by purging pages from cache whenever you update your page. This ensures that the visitors get to see the fresh data.
Our complementary Proxy Cache Warmer plugin automatically warms the purged pages, so even if you’re actively updating your content, visitors are experiencing faster cached browsing.


Varnish is an incredibly powerful caching system that can significantly speed up your WordPress site. By configuring NGINX as a TLS terminator and setting up Varnish to cache frequently accessed pages, you can ensure that your site is fast and responsive for your users.

The article by no means covers all the specifics of configuring Varnish with WordPress. We intentionally made a few assumptions in order to illustrate how Varnish and NGINX can be coupled together. Hopefully, this lets you get started on your journey to a well-behaving, well-cached WordPress installation. For more details, you may want to check out our blog posts.

  1. Tim

    Thank you for this wonderful, interesting article! I have a question regarding caching: at the moment I use nginx as a reverse proxy with WordPress and PHP-FPM. I use the nginx FastCGI cache and would like to ask if Varnish provides a benefit over the nginx built-in caching capabilities?
    Thanks and greetings

    • Danila Vershinin

      Hi Tim,

      If you configure the NGINX FastCGI cache to be stored in memory, set a decently large cache lifetime value, and set up cache purging from within your app (e.g. by using it together with a module like cache-purge), it will perform just as well as Varnish and you’ll benefit from reduced software overhead.

      However, the main benefit of Varnish still prevails: its VCL configuration language allows you to configure caching in virtually unlimited ways, while with NGINX’s declarative configuration, complicated caching rules require numerous maps and are simply a pain to maintain. And of course, Varnish supports ESI for block-based caching, something that NGINX unfortunately doesn’t support.

