No space left on device – running out of Inodes

One of our development servers went down today. The problems started with a deployment script that claimed “No space left on device”, although the partition was nowhere near full. If you ever run into this kind of trouble, most likely you have too many small or 0-sized files on your disk: while you still have plenty of disk space, you have exhausted all available inodes. Below is the solution for this problem.

1. check available disk space to ensure that you still have some

$ df

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda             33030016  10407780  22622236  32% /
tmpfs                   368748         0    368748   0% /lib/init/rw
varrun                  368748        56    368692   1% /var/run
varlock                 368748         0    368748   0% /var/lock
udev                    368748       108    368640   1% /dev
tmpfs                   368748         0    368748   0% /dev/shm
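
If you prefer human-readable units, the same disk-space check can be done with the -h flag (just a convenience variant; the exact output format depends on your df version):

$ df -h /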

2. check available Inodes

$ df -i

Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/xvda            2080768 2080768       0  100% /
tmpfs                  92187       3   92184    1% /lib/init/rw
varrun                 92187      38   92149    1% /var/run
varlock                92187       4   92183    1% /var/lock
udev                   92187    4404   87783    5% /dev
tmpfs                  92187       1   92186    1% /dev/shm

If IUse% is at or near 100%, then a huge number of small files is the reason for the “No space left on device” errors.
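
If your df supports the --output option (GNU coreutils), you can also print just the inode columns for the root filesystem; a minimal sketch, assuming a GNU userland:

$ df --output=target,itotal,iused,iavail,ipcent /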

3. find those little bastards

$ for i in /*; do echo $i; find $i |wc -l; done

This command will list the top-level directories and the number of files under each. Once you see a directory with an unusually high number of files (or the command simply hangs on it for a long time), repeat it for that directory to see where exactly the small files are.

$ for i in /home/*; do echo $i; find $i |wc -l; done
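
The simple loop above skips hidden directories (those whose names start with a dot) and will happily descend into other mounted filesystems. A variant sketch, assuming GNU find and coreutils, that also covers dot-directories, stays on one filesystem with -xdev, and sorts the worst offenders to the top:

$ for i in /* /.[!.]*; do [ -d "$i" ] && printf '%s %s\n' "$(find "$i" -xdev 2>/dev/null | wc -l)" "$i"; done | sort -rn | head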

4. once you’ve found the suspect – just delete the files

$ sudo rm -rf /home/bad_user/directory_with_lots_of_empty_files
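
If the directory holds millions of entries, a plain rm (especially with a wildcard) can be slow or hit the shell’s argument-list limit. A minimal alternative sketch using the same example path, assuming GNU find: preview a few of the files first, then delete them one by one without building a huge argument list.

$ sudo find /home/bad_user/directory_with_lots_of_empty_files -type f | head
$ sudo find /home/bad_user/directory_with_lots_of_empty_files -type f -delete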

You’re done. Check the results with the df -i command again. You should see something like this:

Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/xvda            2080768  284431 1796337   14% /
tmpfs                  92187       3   92184    1% /lib/init/rw
varrun                 92187      38   92149    1% /var/run
varlock                92187       4   92183    1% /var/lock
udev                   92187    4404   87783    5% /dev
tmpfs                  92187       1   92186    1% /dev/shm

97 thoughts on “No space left on device – running out of Inodes”

  1. I can’t thank you enough. You just saved my weekend. This was the exact solution I needed today to resolve an emergency on a failing server. 1k Kudos to you!

  2. Physicist turned software engineer turned entrepreneur turned product manager TURNED saviour! you’re my hero 🙂 thanks heaps for saving me!

  3. Thank you guys so much for this guide!

    I was woken up at 04:00 in the morning to fix a critical Solaris server; this helped me solve the issue in less than 30 minutes.

    Way to go!

  4. It may be somewhat faster to attack the suspected folder:

    for i in /*; do count=$(find "$i" | wc -l); if [ "$count" -gt 1000 ]; then echo "$i $count"; fi; done

  5. Here is another bump.

    You helped me find the little bastards and cheer me up at the same time

  6. I had never heard the word “inode” prior to this day. My server crashed and was down for over half the day; I spent 4 hours trying to find out how to fix it until I found this tutorial. It seems that deleting old backup files plus all the postfix files after using these scripts resolved the problem with 100% usage of inodes. From the time I read this article till my server was fixed there was only about a 40 min difference. Thanks man, it really saved my day (or night, since it is 1 am now).

  7. Thank you for the nice troubleshooting guide. If this method does not help, is there any other method to troubleshoot?

  8. Thank you so much! I have been using Linux for 4 years and, except for the first half year which was tough (coming from Windows I guess it is understandable), my work is much faster since then and I cannot be happier. But today I had this problem with the “inodes” (I had never heard of them until today) and started to sweat. I was thinking of formatting my laptop but you saved me a lot of time.
    Thank you very much again!

  9. Hi,
    Could you please provide the script for disk space/old logs delete.
    The script should show the difference between disk space (in %) before and after cleared the logs.
    Thanks,
    Eswar

  10. Thank you so much for this wonderful article. This saved my day. I followed the steps and found out that /usr/src was taking up a huge amount of space. I ran the for loop in there and found that it was all Linux packages. It struck me that it might be the apt-get cache and unused apt-get packages.

    I ran the following commands to clear the system.

    apt-get autoclean
    apt-get autoremove

    This brought down the inodes usage from 100% to 26%.

  11. Be careful: this method doesn’t check hidden directories whose names start with a dot (/root/.cache, for example).

  12. Thanks for the good and simple guide. I went to the folder (/var/lib/oracle/grid/rdbms/audit), which contained 3762154 files of type +ASM_ora_…; removing all these files has no impact on the database. I use Oracle Linux.

    Cheers!

  13. Hi,
    this article was good,
    but I didn’t get the result.
    When I check df -i:

    tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
    /dev/loop2 82M 82M 0 100% /snap/core/4206
    /dev/loop3 82M 82M 0 100% /snap/core/4110
    cgmfs 100K 0 100K 0% /run/cgmanager/fs
    tmpfs 362M 96K 362M 1% /run/user/1000

    I am not able to remove the /snap/core mounts.

    Please, can anyone help me?

  14. This solution worked like a charm for me. I faced this issue on an AWS EC2 Linux instance. Using the technique above by Ivor, I realized that the culprit was the /usr/src directory, where kernel header files are copied during instance updates.
    Deleting all files from the /usr/src directory solved the inode count issue. Just too good, Ivor. Thanks a ton.

  15. I’ve been exploring for a while for any high-quality articles or weblog posts in this area.
    Exploring on Yahoo, I ultimately stumbled upon this web site. Studying this information, I’m happy to say
    that I’ve found exactly what I needed. I will certainly not forget this web site and will give it a look regularly.

  16. Thank you so much ! There’s so much bullshit everywhere on the subject and you just solve it in a few lines with some nice explanations.

  17. I had deleted the /usr/src/ content where the Linux headers were present. This broke key-based authentication with AWS EC2.
    Be careful while performing this activity. Instead, I went for deleting some user sessions logged on the server, and that brought the site back.
