One of my Linux boxes ran out of disk space, which surprised me because it definitely didn’t have that much stuff on it. When I check with `df`, it says I have used 212 GB on my `/` path:
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 227G 212G 5.2G 98% /
So I tried to use `du` to see if maybe a runaway log file was the cause, but it says I have only used 101 GB on my `/` path (this is also more in line with how much space I expected to be used):
$ du -h | sort -h
...
101G /
Using those commands with sudo outputs the same sizes.
My filesystem is Btrfs. I’ve tried the suggestion to use `btrfs balance start ...`, but this actually INCREASED my disk usage to 99% lol
So my question is… what on earth is using the remaining 111GB?? Why can I not see it in du?
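One classic reason for `df` and `du` to disagree like this is files that were deleted while a process still held them open: `du` walks the directory tree and can’t see them, but `df` still counts the blocks. A quick check, in case it applies here (lsof’s `+L1` option lists open files with a link count below 1, i.e. unlinked):

```shell
# List deleted-but-still-open files; these keep consuming space until
# the owning process closes the file descriptor or exits.
sudo lsof +L1
```

If anything large shows up, restarting the owning process releases the space.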
I typically investigate with `ncdu`, which gives a very useful visualization like:

```
--- /home/fabien/Prototypes/esphome/.esphome ----------------------------------
                                     /..
    3.1 GiB [######################] /platformio
  218.1 MiB [#                     ] /build
   28.0 KiB [                      ] /idedata
    8.0 KiB [                      ] /storage
```

and lets you iterate. Here, for example, you’d go into `platformio` and get another view, press `d` to delete files or directories that aren’t needed anymore if it’s a stale project, e.g. `node_modules`. Go back, etc. So yes, warmly recommended, both on desktop and on remote servers. It’s way easier IMHO than `du -sh ./directory`, then `cd`, rinse and repeat. It’s also way, WAY faster than GUI equivalents… because you navigate and take action, e.g. delete, with your keyboard. All that being said, if it’s about your filesystem rather than your files, it probably won’t help much. I don’t know enough about btrfs to help, unfortunately.
ncdu
Oh this one is very cool! Unfortunately it also only shows the same 101GB being used:
```
ncdu 1.22 ~ Use the arrow keys to navigate, press ? for help
--- / --------------------------------------------------------------
   93.1 GiB [###########################] /home
    6.5 GiB [#                          ] /usr
  790.4 MiB [                           ] /var
  173.0 MiB [                           ] /boot
   12.8 MiB [                           ] /etc
    1.7 MiB [                           ] /root
    1.3 MiB [                           ] /run
   44.0 KiB [                           ] /tmp
@   4.0 KiB [                           ]  initrd.img.old
@   4.0 KiB [                           ]  initrd.img
@   4.0 KiB [                           ]  vmlinuz.old
@   4.0 KiB [                           ]  vmlinuz
@   4.0 KiB [                           ]  lib64
@   4.0 KiB [                           ]  sbin
@   4.0 KiB [                           ]  lib
@   4.0 KiB [                           ]  bin
.   0.0   B [                           ] /proc
    0.0   B [                           ] /sys
    0.0   B [                           ] /dev
    0.0   B [                           ] /media
e   0.0   B [                           ] /srv
e   0.0   B [                           ] /opt
e   0.0   B [                           ] /mnt
```
Btrfs snapshots would be my first guess: `sudo btrfs subvolume list /`
There is one listed:
```
ID 256 gen 137604 top level 5 path @rootfs
```

Looks like it is just my filesystem though?
`btdu` is an excellent tool for finding out what’s taking up space in btrfs

Legend! It found a second filesystem named “UNREACHABLE”:

It looks like an exact duplicate of my main filesystem “/@rootfs”, I’m guessing this is why my disk space filled up. Do you know how I’d go about removing the duplicate? (If it’s safe to do so)
I’m not a btrfs expert but AFAIK high unreachable space usage is usually a result of fragmentation. You might want to defragment the filesystem and see if that helps.
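A minimal sketch of that suggestion, assuming you want to defragment the whole root subvolume. Note that on a filesystem with snapshots, defragmentation can un-share extents and temporarily *increase* usage, so check free space first:

```shell
# Recursively defragment everything under / (requires root).
sudo btrfs filesystem defragment -r /
```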
I will note that btrfs makes estimations of used/available space very difficult by design, and you especially cannot trust what standard UNIX tools like `df` and `du` tell you about btrfs volumes. Scripting around `du` or using `ncdu` will not help here in any way. You might want to read this kernel.org wiki article as well as the man pages for the btrfs tools (`btrfs(8)` and particularly `btrfs-filesystem(8)`), which among other things provide versions of `df` and `du` that actually work, or at least work most of the time instead of never.
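For the record, the btrfs-aware equivalents from `btrfs-filesystem(8)` look roughly like this (the paths are just examples; root is usually required):

```shell
sudo btrfs filesystem usage /        # overall allocation: data, metadata, unallocated
sudo btrfs filesystem df /           # per-block-group summary (the df analogue)
sudo btrfs filesystem du -s /home    # extent/snapshot-aware usage (the du analogue)
```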
You could try `sudo dua i /`:

sudo: without it, it might miss some files
dua: helps a lot with browsing directories and checking their contents
I agree with the other suggestions, though if you want a quick and easy GUI tool, I use Filelight
The bit of information you’re missing is that `du` aggregates the size of all subfolders, so when you say `du /`, you’re saying: “how much stuff is in / and everything under it?”

If you’re sticking with `du`, then you’ll need to traverse your folders, working downward until you find the culprit:

du /*        (note which folder looks the biggest)
du /home/*   (if /home looks the biggest)

… and so on.
The trouble with this method, however, is that `*` won’t include folders with a `.` in front, which are often the culprit: `.cache`, `.local/share`, etc. For that, you can do:

du /home/.*

which should do the job, I think.
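One small caveat with `.*`: it also matches `.` and `..`, which makes `du` climb back up the tree. A glob like `.[!.]*` avoids that. Here’s a runnable sketch against a throwaway directory (the paths are made up for the demo; in practice you’d point the globs at /home/yourname or wherever you suspect the space went):

```shell
# Demo: hidden directories are missed by * but caught by .[!.]*,
# which also skips the special "." and ".." entries.
target=$(mktemp -d)
mkdir -p "$target/projects" "$target/.cache"
dd if=/dev/zero of="$target/.cache/blob" bs=1024 count=64 status=none

du -sh "$target"/* "$target"/.[!.]* | sort -h
```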
If you’ve got a GUI though, things get a lot easier 'cause you have access to GNOME Disk Usage Analyzer which will draw you a fancy tree graph of your filesystem state all the way down to the smallest folder. It’s pretty handy.
Bookmarked! Thanks!
GUI disk space analyzers are absolutely amazing.
For those who prefer KDE and/or donut graphs, Filelight has you covered.
I had the exact same problem on one of my virtual boxes. The problem baffled me for two years and I just added more space to the box a few times to fight it as I couldn’t solve the issue. It wasn’t the inodes, deleted but open files or anything common like that.
The problem was my mounts. I had occasionally failing mounts combined with crontabs that accessed and wrote data to those mounts. Do you know what happens when you accidentally write, let’s say, 200 GB of data to /mnt/a and then later mount a drive over that mount point? The data magically ‘disappears’: it still occupies space on the root filesystem (so `df` counts it), but `du` excludes it because the new mount shadows the directory.
It might be that you don’t have anything mounted and none of the above is useful to you. But this solved my issue, and it’s quite curious and silly. I just set my mount points to not be writable and the problem went away.
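If anyone wants to check for this: you can peek underneath an active mount point without unmounting anything by bind-mounting `/` somewhere else; a bind mount shows the root filesystem itself, ignoring everything mounted on top of it. The paths below are just examples:

```shell
mkdir -p /tmp/rootview
sudo mount --bind / /tmp/rootview   # view of the root fs with no other mounts
du -sh /tmp/rootview/mnt/a          # files hidden underneath the real /mnt/a
sudo umount /tmp/rootview
```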