CentOS/RHEL 8: how to build the kernel RPM with native CPU optimizations

We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you.
Active subscription is required.

Native CPU optimizations and RPM

Not every server and workstation has the same CPU. That is why RPM packages are commonly distributed for a specific architecture, an umbrella term for a wide set of hardware sharing a common set of CPU instructions.

Because RPM packages seek to satisfy a wide range of CPUs, they typically don’t carry all the possible CPU optimizations for the machine they end up installed on. In other words, they don’t benefit from the native optimizations of any specific CPU.

Compiling software with the native CPU optimizations will produce highly optimized binaries for the specific hardware.
Such binaries may yield anywhere from a minor gain up to a 15-30% performance improvement in specific workloads.
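To get a feel for what a generic build leaves on the table, you can list the instruction set extensions your CPU actually advertises via /proc/cpuinfo (the exact flag names will differ per CPU; this is just an inspection one-liner, not part of the build script):

```shell
# Print the instruction set extensions this CPU advertises, one per line.
# Generic x86_64 packages are built assuming only a baseline subset of these.
grep -m1 -E '^(flags|Features)' /proc/cpuinfo | sed 's/^[^:]*: //' | tr ' ' '\n' | sort | head
```

On a modern x86_64 machine you will typically spot extensions such as avx2 or sse4_2 that a generic -mtune=generic build does not fully exploit.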

Building software with native optimizations is great, but it often comes at the cost of consistency and reproducibility.

You don’t have to give up on RPM packaging if you decide to super optimize your server/workstation with native CPU optimizations.
Moreover, you can greatly reduce the time it takes to reinstall “native software” if it was already built in the form of an RPM package.
And of course, the benefit of being able to distribute your package to a fleet of same-CPU machines is great.

How to build “native” kernel RPM

So here I’ll introduce a method to compile an RPM package with native CPU optimizations for the very heart of your CentOS/RHEL system: the Linux kernel.

It boils down to a script that you may save and launch at any time.
The script is to be run by a regular user with sudo privileges. It also goes without saying that you must build the native kernel packages on a machine with the same CPU as the one where you will install them.

sudo dnf -y install dnf-plugins-core mock replace
sudo usermod -a -G mock $USER
mkdir -p ~/kernel-native
cd ~/kernel-native
dnf download --source kernel
rpm2cpio kernel-*.src.rpm | cpio -idmv

RPM_OPT_FLAGS=$(rpm -E %optflags | sed -e 's@-O2@-O3@' -e 's@-m64@-march=native@' -e 's@-mtune=generic@-mtune=native@')
replace '${RPM_OPT_FLAGS}' "${RPM_OPT_FLAGS}" -- kernel.spec
replace '# define buildid .local' '%define buildid .native' -- kernel.spec
mock -r epel-8-x86_64 --no-clean --no-cleanup-after --spec=kernel.spec --sources=. --resultdir=. --buildsrpm
mock -r epel-8-x86_64 --no-clean --no-cleanup-after --rebuild --resultdir=. *.src.rpm
sudo dnf install kernel-4*.native.*x86_64.rpm kernel-core-4*.native.*x86_64.rpm kernel-modules-4*.native.*x86_64.rpm

What the script does is the following.

First, it fetches the latest available kernel source RPM and extracts it. The source RPM contains the actual kernel sources and any patches by Red Hat.
This ensures that our resulting package is closely equivalent to, and compatible with, the actual CentOS/RHEL packaging.

Next, we patch up the SPEC file, replacing the compiler options to use the higher optimization level -O3, and setting the optimizations to target the CPU we run on: -march=native and -mtune=native.
Essentially, we change the ${RPM_OPT_FLAGS} to:

-O3 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -march=native -mtune=native -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
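To see the substitution in isolation, here is the same sed pipeline applied to a shortened, hypothetical sample of the default flags (the real %optflags string is much longer):

```shell
# Shortened, hypothetical sample of the RHEL 8 default optflags (illustration only)
flags='-O2 -g -pipe -Wall -m64 -mtune=generic'

# Same three substitutions the script performs on the real flags
native_flags=$(echo "$flags" | sed -e 's@-O2@-O3@' -e 's@-m64@-march=native@' -e 's@-mtune=generic@-mtune=native@')

echo "$native_flags"
# → -O3 -g -pipe -Wall -march=native -mtune=native
```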

We use mock to consistently build the kernel packages.

The last line ensures that those packages are installed afterward.
The resulting RPM packages are saved in ~/kernel-native for further distribution to other same-CPU machines, or reinstallation later.
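Before copying the packages to another machine, it is worth double-checking that it really carries the same CPU; comparing the model name from /proc/cpuinfo on both machines is a quick sanity check (the remote hostname below is illustrative):

```shell
# Print the CPU model of the local build machine...
grep -m1 'model name' /proc/cpuinfo

# ...and compare with the target machine before installing the native RPMs there:
# ssh other-host "grep -m1 'model name' /proc/cpuinfo"
```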


Simply reboot to have your newly installed kernel applied.

Run uname -a and look out for native in the output:

Linux hostname 4.18.0-193.6.3.el8.native.x86_64 #1 SMP Sat Jun 20 16:25:47 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux

Congrats. Your kernel is using your CPU to its full potential.


Building kernel packages is not a fast thing. Have patience, drink some coffee 🙂

Sure enough, since our RPM packages are bound to a specific CPU, they may or may not work on other machines that come with a different CPU.

You may want to use dnf versionlock on the kernel packages to prevent an upgrade to a non-native kernel. It is best coupled with a workflow for auto-rebuilding the native package.
This way you get the best of both worlds: native packaging and continuous updates.
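A sketch of the lock, assuming the standard EL8 plugin package name, python3-dnf-plugin-versionlock:

```shell
# Install the versionlock plugin and pin the currently installed kernel packages
sudo dnf -y install python3-dnf-plugin-versionlock
sudo dnf versionlock add 'kernel*'

# Later, before installing a freshly rebuilt native kernel, release the lock:
# sudo dnf versionlock delete 'kernel*'
```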

Absent such a workflow, a manual update is merely a matter of cd ~/kernel-native, moving out the old kernel packages, and re-running the script.

Did you know?

Kernel packages are unique in that you usually have multiple versions of them installed at once, despite them having the same base name.
This is great, because a faulty kernel can be avoided at boot time by selecting an older one.

To control how many of the latest kernels are retained during kernel installation, you can use dnf:

sudo dnf config-manager --setopt installonly_limit=4 --save

With the above, the system will have up to 4 latest kernels installed simultaneously. The default is 3.
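A quick way to see which kernel versions currently coexist on a machine is to query the RPM database; the versions listed will of course differ per system:

```shell
# List every installed kernel package version, one per line
rpm -q kernel
```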
