Running out of disk space is a common problem on Linux systems, especially on servers, VPS hosting environments, and development machines.
Large log files, backups, cache directories, and forgotten downloads can quickly consume storage without notice. Fortunately, Linux provides powerful command-line tools that make it easy to locate large files and directories.

In this guide, you will learn several practical ways to find large files in Linux using commands like find, du, and sort.
Why finding large files matters:
Large files can slow down backups, consume SSD space, trigger server alerts, and even crash applications when storage becomes full. Regularly checking disk usage helps you:
- Free up storage space.
- Detect abnormal log growth.
- Improve server performance.
- Prevent downtime caused by full disks.
- Clean unnecessary backups and cache files.
Before deleting anything, always verify that the file is not required by the operating system or by running applications. Delete only files and directories that you recognize and know to be unnecessary.
Check disk usage first:
Before searching for specific files, it is helpful to check overall disk usage.
Run:
df -h
Example output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   42G  5.5G  89% /
The -h option displays sizes in a human-readable format such as MB and GB.
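If you only care about a single mount point, you can pass any path to df and it will report just the filesystem that contains that path. Here / is used as the example:

```shell
# report only the filesystem holding the root directory
df -h /
```

This is handy on servers with many mounts, where the full df listing is noisy.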
Find large files using find command:
The find command is one of the best tools for locating large files.
To search for files larger than 500 MB:
find / -type f -size +500M
Explanation:
- / searches the entire filesystem.
- -type f limits results to regular files.
- -size +500M finds files larger than 500 MB.
To search for files larger than 1 GB:
find / -type f -size +1G
This command may show permission errors for restricted directories unless it is run with sudo. To suppress the errors instead:
find / -type f -size +1G 2>/dev/null
The 2>/dev/null part redirects error messages (standard error) to /dev/null, hiding them.
Find large files in a specific directory:
Searching the whole system may take time. You can target a specific directory instead.
Example:
find /var -type f -size +200M
This is useful for checking:
- /var/log for huge logs.
- /home for user downloads.
- /backup for old backup archives.
Find the largest files in Linux:
To display the largest files sorted by size:
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -20
This command:
- Finds files.
- Calculates their size.
- Sorts them from largest to smallest.
- Shows the top 20 largest files.
This is one of the most practical ways to quickly identify storage issues.
Find large directories in Linux:
Sometimes directories consume more space than individual files.
Use:
du -h --max-depth=1 / 2>/dev/null | sort -rh
Explanation:
- du checks disk usage.
- --max-depth=1 limits results to one directory level.
- sort -rh sorts sizes from largest to smallest.
This helps identify which directories are using the most space.
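You can then repeat the same command one level down to drill into whichever directory tops the list. The sketch below builds a throwaway tree with known sizes purely for illustration; in practice you would point du at a real path such as /var:

```shell
# demo tree (real use: du -h --max-depth=1 /var 2>/dev/null | sort -rh)
root=$(mktemp -d)
mkdir -p "$root/logs" "$root/cache"
dd if=/dev/zero of="$root/logs/app.log" bs=1M count=10 2>/dev/null
dd if=/dev/zero of="$root/cache/data.bin" bs=1M count=2 2>/dev/null

# largest subdirectories appear first
du -h --max-depth=1 "$root" | sort -rh
```

Repeating this a few levels deep usually pinpoints the exact directory eating the space.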
Find large log files:
Log files are a common cause of storage problems on Linux servers.
To locate large log files:
find /var/log -type f -size +100M
If logs become too large, consider enabling or checking log rotation using logrotate.
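When a log that is still being written grows too large, emptying it in place is usually safer than rm: deleting an open file does not free its space until the writing process closes it. A minimal sketch, using a temporary file as a stand-in for a real log (on a server you would typically run truncate with sudo against a path under /var/log):

```shell
log=$(mktemp)                   # stand-in for an active log file
head -c 1M /dev/zero > "$log"   # pretend it has grown large
truncate -s 0 "$log"            # empty it in place, keeping the same inode
stat -c %s "$log"               # prints 0
```

The writing process keeps its file handle and simply continues appending to the now-empty file.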
Find recently modified large files:
You can also search for large files modified recently.
Example:
find / -type f -size +500M -mtime -7 2>/dev/null
This finds files:
- Larger than 500 MB.
- Modified within the last 7 days.
This is useful for detecting sudden storage spikes.
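GNU find's -printf action can add the size and modification date to each result, which makes recent spikes easier to read. The sketch below uses a temporary directory and a 1 MB threshold so it runs anywhere; the same format string works with +500M against /:

```shell
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/new.log" bs=1M count=5 2>/dev/null
dd if=/dev/zero of="$dir/old.log" bs=1M count=5 2>/dev/null
touch -d '10 days ago' "$dir/old.log"   # make one file look old

# files over 1 MB modified within the last 7 days, with size and date
find "$dir" -type f -size +1M -mtime -7 \
    -printf '%s bytes\t%TY-%Tm-%Td\t%p\n'
```

Only new.log matches here, since old.log falls outside the 7-day window.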
Delete large files carefully:
Once you identify unnecessary files, delete them cautiously.
Example:
rm filename
Avoid deleting:
- Database files.
- System libraries.
- Application data directories.
- Active log files without understanding their purpose.
If unsure, move files to a backup location before permanent removal.
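That move-first pattern can be sketched as below; the paths are placeholders, with mktemp standing in for a real large file and a real holding directory:

```shell
hold=$(mktemp -d)        # holding area (e.g. /tmp/pending-delete)
victim=$(mktemp)         # stand-in for the large file you want gone

mv "$victim" "$hold/"    # stage it instead of deleting immediately
ls -l "$hold"            # confirm it arrived

# once you are sure nothing needs it:
# rm -rf "$hold"
```

If an application breaks after the move, you can put the file back; after rm you cannot.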
Useful size units in the find command:
The find command supports multiple size units:
| Unit | Meaning |
|---|---|
| k | Kilobytes (KB) |
| M | Megabytes (MB) |
| G | Gigabytes (GB) |
Examples:
-size +500k
-size +100M
-size +2G
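Two -size tests can also be combined to match a size range. One caveat worth knowing: GNU find rounds sizes up to whole units before comparing, so -size -1G excludes almost every non-empty file (anything over 0 bytes rounds up to one 1 GB unit); express upper bounds in a smaller unit such as -1024M instead. A self-contained sketch using sparse test files:

```shell
dir=$(mktemp -d)
truncate -s 50M  "$dir/small.bin"   # sparse files are fine here: -size
truncate -s 300M "$dir/medium.bin"  # compares the apparent file size,
truncate -s 2G   "$dir/huge.bin"    # not the disk blocks actually used

# files between 100 MB and 1 GB (note -1024M rather than -1G)
find "$dir" -type f -size +100M -size -1024M
```

Only medium.bin is printed: small.bin fails the lower bound and huge.bin fails the upper one.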
Conclusion:
Linux provides several powerful tools for identifying large files and directories. The find, du, and sort commands make it easy to diagnose storage problems, clean unnecessary data, and maintain healthy disk usage.
For most administrators, the fastest method is:
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head
Regular disk cleanup and monitoring can prevent performance issues and unexpected downtime on Linux systems.