Linux Huge Pages and Web Performance

One way to optimize your server for performance is to make its memory management more efficient.

By default, programs operate on memory in small chunks (typically 4 KB pages). When large blocks of memory have to be accessed and written in such small chunks, things get slower.

The Huge Pages mechanism lets programs work with memory in much larger chunks (2 MB or more), which reduces bookkeeping overhead and is thus faster.

There are basically two types of Huge Pages available.

1. Explicit Huge Pages (Hugetlbfs)

First is explicit huge pages (Hugetlbfs). These must be reserved by the administrator, and programs have to be built with explicit support to use them.

2. Transparent Huge Pages (THP)

Second is transparent huge pages. These are enabled through kernel settings, and even programs that are not aware of huge pages can leverage them when THP is enabled.

THP is enabled by default starting with CentOS / Red Hat 6.
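You can check whether THP is active on your machine. The sysfs path below is the standard one on RHEL-family kernels; the value in brackets in the output is the current mode:

```shell
# Show the current THP mode; the value in [brackets] is active,
# e.g. "[always] madvise never". Falls back gracefully on kernels
# without the THP interface.
THP=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$THP" ]; then
    cat "$THP"
else
    echo "THP interface not available"
fi
```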

A nice explanation of explicit (old) vs. transparent huge pages is here.

However, I have seen some performance enthusiasts skip over these details, going as far as saying:

… automatically enables Hugepages support if you have a Linux kernel that supports it + CentOS 7 and are not using redis server. If you use redis server, hugepages support is disabled for best redis server performance and memory usage.

Obviously, this makes no distinction for end users between transparent and explicit huge pages. The two can be enabled and disabled independently of each other, and a careful approach needs to be taken when adjusting the configuration values for each.

So what should you really do: disable or keep them? It depends on the situation, but in most cases you would not want to disable explicit huge pages!

Redis and Huge Pages

Redis only has a problem with transparent huge pages, which cause latency spikes during its fork-based persistence. So if you must use Redis, disable THP.
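As a sketch, THP can be switched off at runtime like this (run as root; the sysfs paths are the usual ones, and the change does not survive a reboot, so you would also persist it in an init script or on the kernel command line):

```shell
# Turn off transparent huge pages until the next reboot (run as root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```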

PHP and Huge Pages

The PHP OPcache extension can actually benefit from huge pages. PHP 7, by default, is compiled with support for explicit huge pages. In this case you may want to configure a few things for smooth operation.

Your specific PHP build might have OPcache compiled without support for huge pages. Run this to confirm:

php -i | grep huge_code_pages

Empty output means that you’re out of luck: PHP was compiled without the --enable-huge-code-pages switch, is too old to support it, or OPcache is not loaded at all.

A result like this:

opcache.huge_code_pages => On => On

…means that huge pages are supported and enabled in OPcache.

If you want to enable huge pages support for OPcache (provided it was compiled with it), add the following to your php.ini:

opcache.huge_code_pages=1

The next thing to know is that the number of explicit huge pages is controlled via the kernel setting /proc/sys/vm/nr_hugepages. What is the proper value?

Suppose that you want to allocate 256 MB of RAM to PHP OPcache (YMMV). Pages that are used as huge pages are reserved inside the kernel and cannot be used for other purposes, so we don’t want to allocate too many explicit huge pages.

Let’s find out the size of a huge page on your system:

grep "Hugepagesize:" /proc/meminfo
Hugepagesize: 2048 kB
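If you prefer not to do the division by hand, a small shell snippet can derive the page count from /proc/meminfo. The 256 MB target is the assumption from above, and the 2048 fallback covers systems where the field is missing:

```shell
# Compute how many huge pages cover the desired reservation
want_mb=256                                    # target OPcache size in MB (assumption)
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
page_kb=${page_kb:-2048}                       # fall back to the common 2 MB page size
echo "vm.nr_hugepages = $(( want_mb * 1024 / page_kb ))"
```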

So each huge page equals 2 MB, and we need a total of 256 / 2 = 128 huge pages allocated by the kernel. Edit /etc/sysctl.conf and add:

# Reserve 128 huge pages (128 * 2 MiB = 256 MiB)
vm.nr_hugepages = 128

Apply changes with:

sysctl -p

Or better yet, reboot the server so that the kernel can reallocate things properly. (Without a reboot, memory fragmentation may leave you with fewer huge pages reserved than requested.)

Verify that your huge pages have been allocated:

grep HugePages /proc/meminfo

Sample output:

AnonHugePages: 0 kB
HugePages_Total: 128

In this sample output the kernel successfully allocated all huge pages (HugePages_Total). Depending on the programs starting up during boot, you may not be so lucky. So the kernel manual gives a helpful hint:

The administrator can allocate persistent huge pages on the kernel boot command line by specifying the “hugepages=N” parameter, where ‘N’ = the number of huge pages requested. This is the most reliable method of allocating huge pages as memory has not yet become fragmented.
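On CentOS / RHEL 7, one way to add that boot parameter is via grubby (a sketch; 128 matches the example above, and grubby ships with the distribution):

```shell
# Persist the huge page reservation on the kernel command line (run as root)
grubby --update-kernel=ALL --args="hugepages=128"
```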


Also published on Medium.
