
Server Setup

Clear Disk Space on CentOS, RHEL, Rocky Linux & Fedora



We have by far the largest RPM repository with NGINX module packages and VMODs for Varnish. If you want to install NGINX, Varnish, and lots of useful performance/security software with smooth yum upgrades for production use, this is the repository for you.
Active subscription is required.

📅 Updated: February 18, 2026 (Originally published: September 22, 2016)

Running low on disk space? Here are quick commands to clear disk space on CentOS, RHEL (6 through 9), Rocky Linux, AlmaLinux, Fedora, and other RPM-based Linux distributions. Each tip below helps you reclaim wasted disk space and keep your server running smoothly.


TL;DR

curl -Ls http://bit.ly/clean-centos-disk-space | sudo bash

Prefer to know what you’re running? Read on for individual commands to clear disk space on your server.

Before anything, install the yum-utils package (RHEL/CentOS 7 and earlier):

yum -y install yum-utils

Find what is eating your disk space

Before blindly deleting files, find out where your disk space is going. Start with a high-level overview of your mounted filesystems:

df -h

Then drill down into the root filesystem to find the largest top-level directories:

du -xh --max-depth=1 / 2>/dev/null | sort -hr | head -20

Typical disk space hogs on RPM-based servers include /var/log, /var/cache, /tmp, /home, and /var/lib/docker. Once you know which directories are largest, use the matching tips below to clear disk space in each area.
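Complementing the directory-level view, it often pays to list the largest individual files. A small helper of the kind sketched below wraps find and du (the function name biggest_files and the 100M default are my own, not a standard tool):

```shell
#!/bin/sh
# biggest_files DIR SIZE: list the ten largest files under DIR that are
# at least SIZE (find -size syntax, e.g. 100M), staying on one filesystem.
biggest_files() {
  find "$1" -xdev -type f -size +"$2" -exec du -h {} + 2>/dev/null | sort -hr | head -10
}

biggest_files /var 100M
```

The -xdev flag keeps find from crossing into other mounts, so a bloated /var doesn't get confused with, say, an NFS-mounted /var/www.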

For interactive exploration, install ncdu — a fast, ncurses-based disk usage analyzer:

yum install ncdu       # RHEL/CentOS 7
dnf install ncdu       # RHEL 8+, Rocky Linux, Fedora
ncdu /

ncdu lets you browse directories by size and delete files interactively. This makes it easy to spot unexpected large files.

Find deleted files still using disk space

Sometimes df shows a partition is full, but du reports less usage than expected. This happens when files are deleted but still held open by running processes. The kernel keeps the disk space allocated until the process releases the file handle.

Find these phantom files with lsof:

lsof +L1 | grep deleted

To reclaim the space immediately, restart the processes holding those files. Alternatively, truncate them in place:

# Find the PID and file descriptor, then truncate
: > /proc/<PID>/fd/<FD>

This technique often reveals gigabytes of “hidden” disk usage from log files that were rotated but not released by long-running daemons.
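The mechanism is easy to demonstrate safely in a scratch directory. The sketch below (illustrative only; it simulates the "daemon" with a held file descriptor in the shell itself) deletes an open file and then reclaims the space through /proc exactly as described above:

```shell
#!/bin/sh
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/app.log" bs=1M count=8 status=none

exec 9<"$tmp/app.log"     # simulate a daemon keeping the log open
rm "$tmp/app.log"         # gone from the directory, space still allocated

# lsof +L1 would now list this file as "(deleted)". Reclaim the space by
# truncating through the magic /proc symlink -- no restart needed:
: > "/proc/$$/fd/9"

stat -Lc '%s' "/proc/$$/fd/9"   # prints 0: the 8 MB are back

exec 9<&-                 # close the descriptor
rm -rf "$tmp"
```

The redirection opens the file behind the /proc symlink with O_TRUNC, which zeroes the inode even though the original descriptor was read-only.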

1. Trim log files

Log files are often the biggest disk space hogs on a Linux server. A single verbose application can fill /var/log with gigabytes of data in days.

find /var -name "*.log" \( \( -size +50M -mtime +7 \) -o -mtime +30 \) -exec truncate -s 0 {} \;

This truncates any *.log file under /var that is either larger than 50 MB and older than 7 days, or older than 30 days. We use truncate instead of rm because deleting a log file that a process still holds open won’t free the space until that process restarts. Truncating keeps the file handle valid and reclaims the space immediately.

For more aggressive cleaning, trim all files regardless of extension. Often log files in /var/log don’t have a .log extension:

find /var/log -type f -exec truncate -s 0 {} \;

After trimming, consider setting up logrotate rules for any application that generates large logs. This prevents the problem from recurring.
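As an example, a minimal logrotate rule dropped into /etc/logrotate.d/myapp might look like this (the path /var/log/myapp/*.log is a placeholder for your application’s logs):

```
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

The copytruncate directive uses the same truncate-in-place idea as above: the log is copied and then truncated, so daemons that never reopen their log file keep writing to a valid handle.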

2. Clean up systemd journal logs

On systems running systemd (RHEL/CentOS 7+, Rocky Linux, AlmaLinux, Fedora), the journal can consume several gigabytes over time. Check how much space it uses:

journalctl --disk-usage

You may be surprised to find it using 2–4 GB or more, especially on servers running for months without maintenance.

Vacuum old journal entries

Remove journal entries older than 7 days:

sudo journalctl --vacuum-time=7d

Or limit total journal size to 200 MB, removing oldest entries first:

sudo journalctl --vacuum-size=200M

You can combine both options. For example, keep at most 500 MB and nothing older than 30 days:

sudo journalctl --vacuum-size=500M --vacuum-time=30d

To verify cleanup worked, run journalctl --disk-usage again. Confirm the reported size has dropped.
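If you want to script this check, you can parse the size out of journalctl --disk-usage. A sketch (the helper name to_mb and the 500 MB budget are arbitrary choices of mine):

```shell
#!/bin/sh
# to_mb SIZE: convert a journalctl size token such as "1.2G" or "856.0M"
# to whole megabytes.
to_mb() {
  awk -v s="$1" 'BEGIN {
    n = s + 0                       # numeric prefix
    u = substr(s, length(s), 1)     # unit suffix
    if (u == "G") n *= 1024
    if (u == "K") n /= 1024
    printf "%d\n", n
  }'
}

# Vacuum only when the journal exceeds the budget (guarded so the
# snippet is a no-op on systems without systemd):
if command -v journalctl >/dev/null 2>&1; then
  used=$(journalctl --disk-usage | grep -oE '[0-9.]+[KMG]' | head -1)
  if [ "$(to_mb "$used")" -gt 500 ]; then
    journalctl --vacuum-size=500M
  fi
fi
```

Run from cron, this keeps the journal near its budget between the permanent caps described next.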

Set a permanent journal size limit

By default, systemd-journald caps itself at 10% of the filesystem it lives on, up to a maximum of 4 GB — still several gigabytes of journal logs on most servers. To set a tighter hard cap, edit /etc/systemd/journald.conf:

[Journal]
SystemMaxUse=200M

You can also limit the runtime journal (stored in /run/log/journal on volatile tmpfs):

RuntimeMaxUse=50M

Then restart the journal service:

sudo systemctl restart systemd-journald

This ensures the journal never exceeds 200 MB on disk, even after reboots. Adjust the value as needed — 100M works for a small VPS, while busy production servers may need 500M or more.

3. Clean up the YUM/DNF cache

The simple command to clean up package manager caches:

yum clean all      # RHEL/CentOS 7 and earlier
dnf clean all      # RHEL 8+, Rocky Linux, AlmaLinux, Fedora

Note that this won’t remove everything. For instance, metadata for disabled repositories remains untouched. On systems with many third-party repositories, stale metadata can waste hundreds of megabytes.

To free space taken by orphaned data from disabled or removed repositories:

rm -rf /var/cache/yum /var/cache/dnf

Also, when yum or dnf is accidentally run without sudo, a per-user cache is created under /var/tmp. Delete that too:

rm -rf /var/tmp/yum-* /var/tmp/dnf-*

4. Remove orphan packages

Orphaned packages that no installed software needs waste disk space silently. Over time, as you install and remove software, these leaf packages accumulate.

Check existing orphan packages

package-cleanup --quiet --leaves          # RHEL/CentOS 7
dnf repoquery --unneeded                  # RHEL 8+

On RHEL 8+ you can also use dnf autoremove to remove packages installed as dependencies but no longer required:

dnf autoremove

Confirm removing orphan packages

If happy with the suggestions, run:

package-cleanup --quiet --leaves | xargs -r yum remove -y

Always review the package list before confirming. You don’t want to accidentally remove a library that a custom application depends on.

5. Remove old kernels

Before removing old kernels, reboot first to boot into the latest kernel. You cannot remove a kernel currently in use.

The following commands keep just 2 latest kernels installed:

(( $(rpm -E %{rhel}) >= 8 )) && dnf remove $(dnf repoquery --installonly --latest-limit=-2 -q)
(( $(rpm -E %{rhel}) <= 7 )) && package-cleanup --oldkernels --count=2

Each kernel package with modules typically takes 200–300 MB. On servers with many kernel updates, removing old kernels can free a gigabyte or more.

Some VPS providers (Linode, for example) boot servers with the provider’s own kernel rather than the installed one. In that case, keeping old kernels makes little sense — keep only the latest:

(( $(rpm -E %{rhel}) >= 8 )) && dnf remove $(dnf repoquery --installonly --latest-limit=-1 -q)
(( $(rpm -E %{rhel}) <= 7 )) && package-cleanup --oldkernels --count=1

6. Remove WP-CLI cached WordPress downloads

WP-CLI saves WordPress archives every time you set up a new site. On servers hosting many WordPress sites, these cached archives can add up to several hundred megabytes:

rm -rf /root/.wp-cli/cache/*
rm -rf /home/*/.wp-cli/cache/*

7. Remove Composer cache

PHP’s Composer package manager keeps a local cache of every downloaded package:

rm -rf /root/.composer/cache
rm -rf /home/*/.composer/cache

8. Remove core dumps

If you had severe PHP failures causing segfaults with core dumps enabled, you likely have quite a few of those. They’re not needed after debugging:

find / -regex ".*/core\.[0-9]+$" -delete

Clean up systemd coredumps

On systemd-based systems, coredumps may also be stored by systemd-coredump in /var/lib/systemd/coredump/. Check how much space they use:

coredumpctl list
du -sh /var/lib/systemd/coredump/

Remove all stored coredumps:

rm -rf /var/lib/systemd/coredump/*

9. Remove error_log files (cPanel)

If you use cPanel, you likely have dozens of error_log files scattered across web directories:

find /home/*/public_html/ -name error_log -delete

10. Remove Node.js caches

npm and node-gyp create local caches that can grow to hundreds of megabytes per user:

rm -rf /root/.npm /home/*/.npm /root/.node-gyp /home/*/.node-gyp /tmp/npm-*

11. Remove Mock caches

Been building RPM packages with mock? Those root caches can be quite large. If you no longer need to build RPM packages:

rm -rf /var/cache/mock/* /var/lib/mock/*

12. Clean up Docker

If Docker is installed, old images, containers, and volumes can consume significant disk space. Check usage before starting:

docker system df

Then remove all stopped containers, unused networks, dangling images, and unused volumes:

docker system prune --all --volumes

Use with caution in production — only prune what you don’t need. For less aggressive cleanup that only removes dangling resources:

docker system prune

To see which individual volumes are largest:

docker volume ls
docker system df --verbose

13. Clean up /tmp

Old temporary files can accumulate over time. Remove files not accessed in the past 7 days:

find /tmp -type f -atime +7 -delete

Note: systemd-tmpfiles-clean.timer handles this automatically on most modern systems. However, if disk space is critically low, manual cleanup provides immediate relief.
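Before pointing an age-based find at /tmp, you can verify the filter in a scratch directory. An illustrative check (touch -d backdates both the access and modification timestamps):

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch -d '10 days ago' "$tmp/stale.tmp"   # backdated: matches -atime +7
touch "$tmp/fresh.tmp"                    # just created: kept

find "$tmp" -type f -atime +7 -delete

ls "$tmp"        # only fresh.tmp remains
rm -rf "$tmp"
```

Note that -atime compares access time; on filesystems mounted with noatime, consider -mtime instead so recently written files survive.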

14. Clear generic program caches

Multiple programs store caches under users’ .cache subdirectory. Examples: /home/username/.cache/progname. Common offenders include pip, yarn, and mesa_shader_cache.

Clear disk space taken by those caches:

rm -rf /home/*/.cache/*/* /root/.cache/*/*

15. Clear Flatpak and Snap caches

On Fedora workstations and newer RHEL/Rocky installations, Flatpak and Snap packages can accumulate unused runtimes and old versions.

Remove unused Flatpak runtimes:

flatpak uninstall --unused

For Snap, remove old revisions that are no longer active:

snap list --all | awk '/disabled/{print $1, $3}' | while read pkg rev; do snap remove "$pkg" --revision="$rev"; done

Other tips to clear disk space

When your disk is full, you might see this message:

Cannot create temporary file – mkstemp: Read-only file system

This usually means the filesystem was remounted read-only after filesystem errors — often triggered by running out of space. A simple reboot typically fixes it. If you cannot reboot, try remounting:

mount -o remount,rw /

Preventing disk space issues

After reclaiming space, take steps to prevent recurrence:

  • Set up logrotate for all applications producing log output.
  • Configure SystemMaxUse in journald.conf as shown in section 2.
  • Schedule a weekly cron job to clean /tmp and package caches.
  • Use monitoring (Zabbix, Prometheus node_exporter, or simple cron + mail alerts) to warn when disk usage exceeds 80%.
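The simplest alert of that last kind is a filter over df -P output. A sketch you could call from cron and pipe to mail (the function name usage_over and the 80% threshold are my own choices):

```shell
#!/bin/sh
# usage_over THRESHOLD: read `df -P` output on stdin and print every
# mount point at or above THRESHOLD percent usage.
usage_over() {
  awk -v t="$1" 'NR > 1 {
    u = $5; sub(/%/, "", u)
    if (u + 0 >= t) print $6 " is at " u "%"
  }'
}

df -P | usage_over 80
```

The POSIX -P flag keeps each filesystem on a single line, so the awk field positions are stable across systems.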

Want an automated solution? Check out the diskspace tool on GitHub.


Danila Vershinin

Founder & Lead Engineer

NGINX configuration and optimization · Linux system administration · Web performance engineering

10+ years NGINX experience • Maintainer of GetPageSpeed RPM repository • Contributor to open-source NGINX modules

  1. Olubodun Agbalaya

    Quite handy

  2. Raunak Sarkar

    bookmark this people

  3. Web Hosting

    Keep coming back to this, very handy.

  4. Simon

    Very useful

  5. Jose Peña

    Good tips

  6. John Clarke

    Thanks the first 4 won me back 6Gb!

  7. Fedora Gold-image – Work-Pie

    […] remove /var/log/* 11. dmesg -c 12. modify /etc/fstab for add-on disks 13. clean other files – https://www.getpagespeed.com/server-setup/clear-disk-space-centos 14. […]

  8. Haris Durrani

    I freed up 15 GB

  9. RameshK

    Thank you

  10. Jonathan G

    Hi Danila

    All these tips are great (thank you) but the biggest advance I found was to delete old mysql bin logs. (Centos 7 and 5.5.64-MariaDB)

    I ran the command below to show where I had files of more than 100M

    find / -type f -size +100M

    And the answer was loads of old Mysql bin (log) files in /var/lib/mysql. I followed the guide here to delete them and set up my.cnf to delete them automatically in future, after 10 days:

    https://dba.stackexchange.com/questions/30930/how-soon-after-updating-expire-logs-days-param-and-restarting-sql-will-old-binlo/30938#30938

    And disk usage dropped from 94% to 26%!

    This page is really useful and I think it would be great to add this step too – with all the usual caveats on backups and caution – since I think it’s a very common problem.

    Jonathan

  11. Eric J Thornton

    Seems to be the same content as *****

    • Danila Vershinin

      You can check how it’s not the same using web.archive.org. The contents here are original and pre-date your referenced article by 2 years in time.

      If Ryan, the “IT Project Manager, Web Interface Architect and Lead Developer for many high-traffic web sites” were to include reference to the original source, it would be welcome.
      He at least did add some images and explanatory text, kudos to him for that. But I don’t think that’s crucial to the point of creating a rewritten article and not referencing me.

      Well, the Internet is an open network. I can’t force anyone to do anything 🙂

  12. pcgreengr

    My old ssd vps has a 60GB limit and it reached its limit while I was at the dentist (of course). Cpanel went down, email services went down, clients went nuts, could only ssh into it from my phone. Found this amazing one command script after Googling for a minute if there was such a script, executed it, took like 5 seconds, did a reboot, everything was back up, working better than ever and it saved me 15GB of space without breaking anything (I could only go down to 2-3GB by deleting things manually).
    Simply amazing, you saved me. Thank you!

  13. Gastón

    Are these commands compatible and safe for AlmaLinux 8.6?

    Thank you!!

  14. Neofrek

    Fail2Ban
    root /usr/bin/sqlite3 /var/lib/fail2ban/fail2ban.sqlite3 "delete from bans where timeofban <= strftime('\%s', date('now', '-40 days'));vacuum;"

    OR

    sudo /etc/init.d/fail2ban stop
    sudo rm -rf /var/lib/fail2ban
    sudo /etc/init.d/fail2ban start
    sudo reboot

