One of our development servers went down today. The problems started with a deployment script that claimed “No space left on device”, even though the partition was nowhere near full. If you ever run into this kind of trouble, the most likely cause is too many small or zero-sized files on your disk: while you have enough disk space, you have exhausted all available inodes. Below is the solution to this problem.

1. check available disk space to ensure that you still have some

$ df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda             33030016  10407780  22622236  32% /
tmpfs                   368748         0    368748   0% /lib/init/rw
varrun                  368748        56    368692   1% /var/run
varlock                 368748         0    368748   0% /var/lock
udev                    368748       108    368640   1% /dev
tmpfs                   368748         0    368748   0% /dev/shm

2. check available Inodes

$ df -i

Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/xvda            2080768 2080768       0  100% /
tmpfs                  92187       3   92184    1% /lib/init/rw
varrun                 92187      38   92149    1% /var/run
varlock                92187       4   92183    1% /var/lock
udev                   92187    4404   87783    5% /dev
tmpfs                  92187       1   92186    1% /dev/shm

If IUse% is at or near 100%, then a huge number of small files is the reason for the “No space left on device” errors.
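The same check can be scripted so a cron job or monitoring hook warns you before inodes run out completely. A minimal sketch, assuming a POSIX-ish df; the / mount point and the 90% threshold are arbitrary examples:

```shell
#!/bin/sh
# Extract the IUse% column for / from `df -Pi` (-P keeps each
# filesystem entry on a single line) and warn past an example
# threshold of 90%.
ipcent=$(df -Pi / | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
if [ "$ipcent" -ge 90 ]; then
    echo "WARNING: inode usage on / is at ${ipcent}%"
fi
```

Dropped into cron, this gives you a heads-up long before a deployment fails.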

3. find those little bastards

$ for i in /*; do echo $i; find $i |wc -l; done

This command will list each directory and the number of files in it. Once you see a directory with an unusually high number of files (or the command simply hangs over the calculation for a long time), repeat the command for that directory to see exactly where the small files are.

$ for i in /home/*; do echo $i; find $i |wc -l; done
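To make the drill-down less manual, the same per-directory counts can be sorted so the worst offender floats to the top. A sketch of that idea; /home is just an example starting point:

```shell
#!/bin/sh
# Count every entry under each immediate subdirectory of /home (an
# example path) and print the counts in descending order, worst first.
for d in /home/*/; do
    printf '%8d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn | head
```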

4. once you’ve found the suspect – just delete the files (double-check the path first; rm -rf is unforgiving)

$ sudo rm -rf /home/bad_user/directory_with_lots_of_empty_files
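If you would rather not wipe the whole tree, a more surgical variant removes only the zero-byte files and leaves everything else in place. A sketch, reusing the example path from above:

```shell
# Delete only empty regular files under the suspect directory;
# non-empty files and the directory structure itself are left intact.
sudo find /home/bad_user/directory_with_lots_of_empty_files -type f -empty -delete
```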

You’re done. Check the results with df -i again. You should see something like this:

Filesystem            Inodes   IUsed   IFree IUse% Mounted on

/dev/xvda            2080768  284431 1796337   14% /
tmpfs                  92187       3   92184    1% /lib/init/rw
varrun                 92187      38   92149    1% /var/run
varlock                92187       4   92183    1% /var/lock
udev                   92187    4404   87783    5% /dev
tmpfs                  92187       1   92186    1% /dev/shm

115 thoughts on “No space left on device – running out of Inodes”

  1. Many thanks for this; I ran into the mysterious no-space-left problem and was baffled, but these instructions worked like a charm.

  2. Great information – this may have resolved several months of scratching our heads trying to figure out why /var showed only 52% usage and yet the disk kept filling up.

    Great info.



  3. Ivan, you’re my hero of the day!

    I thought I would have to do a fresh installation of my Ubuntu Lucid server, but you helped me out.
    My Twonky MediaServer program creates billions of useless files… 🙁


  4. Very helpful. We had run out of ideas for solving a no-disk-space problem when the disk actually had enough free space 🙂

  5. Just the cure!

    My /tmp ran out of inodes… mainly in /tmp/orbit-gdm.

    I nuked that folder and went from 100% to just 1% inode usage.


  6. Hi, does it cover RH Linux as well? I tried this syntax on RH Linux but it doesn’t work…

  7. Sometimes it is difficult to erase the files with the command rm -f -r /tmp/*

    In such a situation you could use the following command:

    sudo find /tmp -type f -print0 | xargs -0 sudo rm -f

    This tip is for all the people that stay in the fire line, haha lol ; )

  8. Thanks a lot for sharing!
    This helped us, and will save us from needing help so often. You’re so kind!!

  9. Great help! My webserver hung the same way. find /opt showed that a Zend application had kept millions of sessions, so I ran rm -R zendapp/var/sessions, then mkdir and chmod.

  10. Ivan, is there a simple way to increase the number of available inodes? Because due to the nature of my backups I use them all up after about 3 months. I’d happily sacrifice a few gigs to quadruple the available inodes… if it’s possible, and if it doesn’t cause other issues.

  11. Thanks, this put me on the right track.
    For all to know: /var/log/* {daily} is a bad, bad thing to have in your logrotate.conf… 🙂

  12. I never write comments, but this inode article is probably the best on the web! Thanks, this helped me diagnose the inode-filled folders and clear them out! Other solutions were not helpful, because once the system reached 100% inode usage, any complex command line was impossible: we were out of inodes!

  13. I was very happy to discover this site. I wanted to thank you for your time on this fantastic read!!
    I definitely enjoyed every little bit of it, and I have you saved as a favorite to
    see new stuff on your site.

  14. Much obliged for this; I kept running into the mysterious no-space-left issue and was stumped, but these directions had exactly the intended effect.

  15. Why not use the find command with the size option to find all the empty files and then delete them?

    find . -type f -size 0 | xargs rm

  16. Thank you for sharing, but searching for the directory with the most files will not always be the best solution; sometimes you have a directory holding one file, and that file is the biggest file on your server.
    In my case a single file took up 77% of my device space on its own.
    I recommend searching for the biggest directories instead of the directories with the most files.
    This command will list the top 10 largest files/directories under /var, for example:
    du -a /var | sort -n -r | head -n 10

  17. Saved my day, thanks a lot man.

    I just want to confirm one thing: does that command delete only bad empty files, not important files? I have just deleted one folder with that command and freed up 5% of the inodes, but I have a couple more folders with large numbers of files, and I’m a bit scared to apply this to those.

    Because I used this command

    sudo rm -rf /usr/src

    and now I can see the whole src folder has been deleted.

  18. I can’t thank you enough. You just saved my weekend. This was the exact solution I needed today to resolve an emergency on a failing server. 1k Kudos to you!

  19. Physicist turned software engineer turned entrepreneur turned product manager TURNED saviour! you’re my hero 🙂 thanks heaps for saving me!

  20. Thank you so much for this guide!

    I’ve been woken up at 04:00 in the morning to fix a critical Solaris server, this helped me to solve this issue in less than 30 minutes.

    Way to go!

  21. It may be somewhat faster to attack the suspected folder:

    for i in /*; do count=`find $i | wc -l`; if [ $count -gt 1000 ]; then echo $i $count; fi; done

  22. Here is another bump.

    You helped me find the little bastards and cheered me up at the same time.

  23. I had never heard the word “inodes” prior to this day. My server crashed and was down for over half the day; I spent 4 hours trying to find out how to fix it until I found this tutorial. Deleting old backup files plus all the postfix files, using these scripts, resolved the problem with 100% inode usage. From the time I read this article to the time my server was fixed, only about 40 minutes passed. Thanks man, it really saved my day (or night, since it is 1 am now).

  24. Thank you for the nice troubleshooting. If this method does not help, is there any other method to troubleshoot?

  25. Thank you so much! I have been using Linux for 4 years and, except for the first half year which was tough (coming from Windows I guess it is understandable), my work is much faster since then and I cannot be happier. But today I had this problem with the “inodes” (I had never heard of them until today) and started to sweat. I was thinking of formatting my laptop but you saved me a lot of time.
    Thank you very much again!

  26. Hi,
    Could you please provide a script for disk-space/old-log deletion?
    The script should show the difference in disk space (in %) before and after clearing the logs.

  27. Thank you so much for this wonderful article. This saved my day. I followed the steps and found out that /usr/src was taking up a huge amount of space. I ran the for loop in there and found that it was all Linux packages. It struck me that it might be the apt-get cache and unused apt-get packages.

    I ran the following commands to clear the system.

    apt-get autoclean
    apt-get autoremove

    This brought down the inodes usage from 100% to 26%.

  28. Be careful: this method doesn’t check hidden directories, the ones with a leading dot (/root/.cache, for example).

  29. Thanks for a good and simple guide. I went to the folder (/var/lib/oracle/grid/rdbms/audit), which contained 3762154 files of the type +ASM_ora_…; removing all these files has no impact on the database. I use Oracle Linux.


  30. Hi,
    this article was good, but I didn’t get the result. When I check df -i:

    tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
    /dev/loop2 82M 82M 0 100% /snap/core/4206
    /dev/loop3 82M 82M 0 100% /snap/core/4110
    cgmfs 100K 0 100K 0% /run/cgmanager/fs
    tmpfs 362M 96K 362M 1% /run/user/1000

    I am not able to remove the /snap/core entries.

    Please, can anyone help me?

  31. This solution worked like a charm for me. I faced this issue on an AWS EC2 Linux instance. Using the technique above by Ivor, I realized that the culprit was the /usr/src directory, where kernel header files are copied during instance updates.
    Deleting all files from the /usr/src directory solved the inode count issue. Just too good, Ivor. Thanks a ton.

  32. Thank you so much! There’s so much bullshit everywhere on this subject, and you just solved it in a few lines with some nice explanations.

  33. I had deleted the /usr/src/ content where the Linux headers were present. This ended up breaking key-based authentication with my AWS EC2 instance.
    Be careful while performing this activity. Instead, I deleted some user sessions logged on the server, and that brought the site back.

  34. Thank you!

    As an ex-Windows user this one was really baffling until I worked through your post. Cheers!

  35. Hi,
    I have a question: is it safe to delete all files & folders from these directories?
    I found that most of the files with high inode counts are in:
    – /usr/share/perl/5.18.2 (mostly .pm files)
    – /usr/share/doc
    – /usr/lib/python2.7
    – /usr/lib/python3
    – /usr/lib/python3.4
    – /usr/lib/perl
    – /sys/module
    – /sys/kernel/debug

    Just want to know if it is safe to delete all of these files.
    Thank you in advance!

  36. `for i in /home/*; do echo $i; find $i |wc -l; done`
    You need to quote the $i.

  37. My man, OH MY MAN! This little girl owes you days of her life! Thank you so much for this comprehensive and clear guide.

    If you happen to visit Panama, contact me and let’s chat about tech and drink some coffee (tea for me) or beer! Thank you!


  39. Can someone please clarify this part with a real example? /home/bad_user/directory_with_lots_of_empty_files

  40. I tried this way.
    I went to
    then ran this command
    I found that at one folder it was taking too much time, so I pressed Ctrl+C (to break the process), then changed the command to point at the directory where it had stopped and ran it again.
    I tried the same thing 3-4 times, and I found one folder containing 4096 folders, each with more than 3000 files inside.
    So I removed that, and everything went well… inode usage now shows 50% where it showed 100% earlier 🙂

    Thanks again
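Folding in two caveats raised in the comments above (quote "$i" so paths containing spaces survive word-splitting, and include hidden dot-directories in the glob), the step-3 loop might look like this; the path is again just an example:

```shell
#!/bin/sh
# Step-3 loop with the commenters' caveats applied: a quoted "$i" and a
# second glob pattern that picks up hidden entries such as .cache.
# /home/bad_user is an example path.
for i in /home/bad_user/* /home/bad_user/.[!.]*; do
    [ -e "$i" ] || continue    # skip glob patterns that matched nothing
    printf '%8d %s\n' "$(find "$i" | wc -l)" "$i"
done | sort -rn
```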
