Quickly Find Large Files in Linux

Posted on June 11, 2025

Disk space has a way of mysteriously disappearing, especially on development machines filled with node_modules directories, Docker images, and forgotten database dumps. Linux provides powerful command-line tools for hunting down space hogs, and mastering these commands can save you from emergency cleanup sessions when your disk fills up at the worst possible moment.

The du (disk usage) command is your primary weapon. Running du -h --max-depth=1 /home gives a human-readable summary of space usage one level deep under /home. For a more surgical approach, du -h /var/log | sort -hr | head -20 lists the 20 largest items under your log directory, largest first. The -h flag makes sizes human-readable (K, M, G suffixes), while sort -hr sorts those human-readable sizes in reverse (descending) order.

For finding individual large files rather than directories, the find command excels. find / -type f -size +100M 2>/dev/null locates all files larger than 100MB, discarding permission-denied errors so they don't clutter the output. You can combine this with other commands: find . -type f -size +50M -exec ls -lh {} \; | sort -k5 -hr finds large files and displays them with full details, sorted by size (column 5 of ls -l output).
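To see this pattern in action without waiting on a full filesystem scan, the sketch below creates two files of known size in a scratch directory and applies the same find | sort pipeline; the 1MB threshold is scaled down from the article's 100MB so the demo runs instantly, and all file names are invented for illustration.

```shell
#!/bin/sh
# Sketch: locate files above a size threshold with find, then list
# them largest first. File names and sizes are demo placeholders.
set -eu

demo=$(mktemp -d)
dd if=/dev/zero of="$demo/huge.bin" bs=1M count=3 2>/dev/null
dd if=/dev/zero of="$demo/tiny.txt" bs=1K count=1 2>/dev/null

# Files over 1MB, with full details, sorted by size
# (-k5 is the size column of ls -l; -h understands the 3.0M style sizes)
find "$demo" -type f -size +1M -exec ls -lh {} \; | sort -k5 -hr

rm -rf "$demo"
```

Only huge.bin appears in the output: -size +1M matches files strictly larger than 1MB, so the 1KB file is filtered out before ls ever runs.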

Many distributions package ncdu, an ncurses-based disk usage analyzer that provides an interactive interface for navigating and understanding disk usage. For a graphical approach, tools like baobab (GNOME's Disk Usage Analyzer) provide visual representations. But knowing the command-line tools ensures you can diagnose disk issues even on remote servers or minimal systems. Regular disk cleanup is preventive maintenance that keeps your system running smoothly.