No space left on device
- The Problem
- Permanent Solution
In this post, the well-known “No space left on device” error, when it is caused by Docker, will be solved with different approaches.
Note: the “No space left on device” error can have causes other than Docker itself. Hence, it is worth confirming first that the error is caused by Docker volumes.
You can check whether Docker volumes are responsible by following the steps below.
Note that the explanations and examples may differ depending on your environment; rather than treating this information as a set of rules, it is better to apply it as a reference.
The system information for the scenario described in this post is given below:
- PRETTY_NAME: Debian GNU/Linux 9 (stretch)
- NAME: Debian GNU/Linux
- VERSION_ID: 9
- VERSION: 9 (stretch)
- KERNEL_RELEASE: 4.9.0-6-amd64
Docker daemon version
- Version: 19.03.8
- API version: 1.40 (minimum version 1.12)
- Go version: go1.12.17
- Git commit: afacb8b7f0
- Built: Wed Mar 11 01:24:36 2020
- OS/Arch: linux/amd64
- Experimental: false
Although it is NOT strictly required to have the same (or even a close) OS and Docker daemon version, it is good to know under exactly which conditions the following fix is known to work.
In most cases this information should be valid for all Debian-based systems, but that is not guaranteed.
Docker on its own still covers most use cases, although Kubernetes and other orchestration systems are now often preferred. Those orchestration systems rarely hit this problem, since they mostly provide auto-scaling and run in the cloud. However, when you run a server with full responsibility for it yourself, you may face this exact problem.
The problem is that installing Docker on a system with default settings can create headaches later, especially if you are planning to use Docker intensively. The main reason is that Docker, by default, stores its volumes under the root filesystem, in /var/lib/docker. In most scenarios, servers do not use the root path for storing data; instead, the appropriate approach is to store data on dedicated data volumes which have large capacity and the ability to extend or shrink over time. The problem starts to show itself once Docker containers have been used for a long period of time.
Verifying the problem
It is quite handy to check whether the “No space left on device” error is caused by Docker volumes. The following simple bash command can be used.
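The exact command was not preserved in this post; a common way to check, which matches the description below, is df:

```shell
# Show free and used disk space for the filesystem that holds /var/lib
df -h /var/lib
```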
This will display free and used disk space for the /var/lib path. Afterwards, you can compare the free and used space along with the usage percentage. On the system described in the system information section at the beginning, the output showed plenty of free space on the /dev/mapper/data-data volume, yet I was still getting the “No space left on device” error, because the root path had become full due to Docker volumes.
How much space Docker is using under the root path can be quickly checked with the command below.
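The original command was not preserved; du is one common way to measure this (it may require root privileges to read the directory):

```shell
# Show the total size of Docker's default data directory
du -sh /var/lib/docker
```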
A temporary solution would be pruning Docker volumes by running the command below.
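This can be done with Docker's built-in prune command:

```shell
# Remove all local volumes not used by at least one container
# (add -f to skip the confirmation prompt)
docker volume prune
```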
It will prune all volumes which are NOT in use; if the volumes producing all this data are in use, the prune command will have no effect on the error. Even when it works, it is a temporary solution, and the error may be re-triggered in the future. To resolve this issue permanently, the following approach can be used.
Permanent Solution
The temporary solution can be extended with cronjobs; however, that is definitely not a nice way of handling the issue.
That said, it can still be useful to have cron jobs which prune the Docker system from time to time, because old Docker images, containers and other leftovers can accumulate a lot of reclaimable storage. Docker prune commands have no effect on resources (containers, volumes and networks) which are in use.
You can set up a cronjob which prunes the system weekly, as provided below.
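The original crontab entry was not preserved; a sketch of a weekly prune job (here assumed to run Sundays at 03:00, with -f to skip the confirmation prompt) could look like this:

```shell
# Add via `crontab -e`:
# m h dom mon dow  command
0 3 * * 0 docker system prune -f
```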
You can also write a script according to your needs and place it in a cron job. Before adding cron jobs, make sure $USER has valid permissions, i.e. is a member of the docker group.
Now that we are sure the error is caused by Docker volumes, we can proceed with a permanent solution.
If the error is not caused by Docker volumes, it could be caused by running short of inodes, or simply by not having enough storage.
The permanent solution is to use one of the storage volumes for storing Docker volumes, instead of the root path. These are the steps to take:
Stop the Docker service!
Rsync all data under /var/lib/docker to a directory on the storage volume
Rename the default Docker volumes path; this serves as a backup until we confirm success at the end.
Create a symbolic link from the default path to the actual place, /data/mnt/docker
Start Docker daemon
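The exact commands were not preserved in this post; a sketch of the steps above, assuming the storage volume is mounted at /data/mnt (adjust the paths to your environment), could look like this:

```shell
# Hypothetical migration sketch -- run as root on the affected server.
migrate_docker_root() {
  set -e
  new_dir=/data/mnt/docker                # target directory on the storage volume

  systemctl stop docker                   # 1. stop the Docker service
  mkdir -p "$new_dir"
  rsync -a /var/lib/docker/ "$new_dir"/   # 2. copy all data to the storage volume
  mv /var/lib/docker /var/lib/dockerbckp  # 3. keep the old path as a backup
  ln -s "$new_dir" /var/lib/docker        # 4. symlink to the actual place
  systemctl start docker                  # 5. start the Docker daemon
}

# Only run when executed as root with Docker managed by systemd:
if [ "$(id -u)" -eq 0 ] && systemctl is-active docker >/dev/null 2>&1; then
  migrate_docker_root
fi
```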
If there was no error while carrying out the steps, you can now test your setup.
Note that there should NOT be a space between the curly brackets when listing the mount point only.
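The original command was not preserved; my assumption is that the post used docker info with a Go-template format string, which prints only Docker's root directory:

```shell
# Print only Docker's root directory (no spaces inside {{...}}, per the tip above)
docker info --format '{{.DockerRootDir}}'
```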
As you can observe from the output, Docker is now using /data/mnt/docker for Docker volumes, so we can safely remove the old backup data.
rm -rf /var/lib/dockerbckp
I would like to mention that there is documented information about changing the Docker data directory without implementing all the steps above: just specifying the path in the /etc/docker/daemon.json file should work. However, when I tried that approach, it did not work for me.
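For reference, the documented daemon.json approach uses the data-root key (the path below assumes the same storage volume as in the steps above), even though it did not work in my case:

```json
{
  "data-root": "/data/mnt/docker"
}
```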
Check this documentation: https://docs.docker.com/config/daemon/systemd/
I hope this post helps others find the required information quickly and implement the necessary steps to get things working.