My EE install was filling up 92% of my VPS’s 25 GB drive. I wanted to post this because I spent some time chasing down the issue and hadn’t found an explicit fix on the forum.
Problem 1: Clean up old docker images
Originally I had ~2.5 GB of reclaimable space. This post gave me a starting point. You can run the command below to see how much space Docker is able to reclaim.
$ sudo docker system df
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          7       7       1.92GB    55.27MB (2%)
Containers      7       7       8.54kB    0B (0%)
Local Volumes   22      22      332.8MB   0B (0%)
Build Cache     0       0       0B        0B
You can then run
$ sudo docker system prune -af
to remove the extraneous images. Make sure to append the “-a” flag to remove all unused images, not just dangling ones (see docker system prune --help). Note that system prune also removes stopped containers and unused networks, so check what you have on the machine first.
Problem 2: Gigantic log file in /var/lib/docker
That cleared up a couple of gigs, but I still had a lot more disk space being used than I would expect:
$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G   19G  4.9G  80% /
After some digging I found a post on Red Hat’s site that corresponded to my situation: /var/lib/docker was taking up 17 GB.
$ sudo find /var/lib/docker/containers/ -name "*.log" -exec ls -sha {} \; | sort -hk1 -r | head -1
12G /var/lib/docker/containers/CONTAINER-ID/CONTAINER-ID-json.log
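If you want to see the top few offenders instead of just the largest one, swap head -1 for head -5. The sort -hk1 -r step is what orders the human-readable sizes. Here is a docker-free sketch of the same pipeline against some made-up log files in /tmp (the filenames are hypothetical; dd is used instead of truncate so the blocks are actually allocated, since ls -s reports allocated size and sparse files would show as 0):

```shell
# Build three dummy "logs" of different sizes.
mkdir -p /tmp/logdemo
dd if=/dev/zero of=/tmp/logdemo/aaa-json.log bs=1M count=5  2>/dev/null
dd if=/dev/zero of=/tmp/logdemo/bbb-json.log bs=1M count=1  2>/dev/null
dd if=/dev/zero of=/tmp/logdemo/ccc-json.log bs=1M count=12 2>/dev/null

# Same shape as the command above: list sizes, sort human-readable descending, take the top hit.
find /tmp/logdemo -name "*-json.log" -exec ls -sha {} \; | sort -hk1 -r | head -1
# the 12M file (ccc-json.log) comes out on top
```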
Then I truncated the log file above down to 10 MB (replacing CONTAINER-ID with the correct text):
$ sudo truncate -s 10M /var/lib/docker/containers/CONTAINER-ID/CONTAINER-ID-json.log
$ sudo df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  6.7G   17G  29% /
Back down to a reasonable size at last!
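One caveat worth knowing before relying on this: truncate -s keeps the beginning of the file and chops off the end, so shrinking a log this way discards the most recent entries and keeps the oldest 10 MB. A quick docker-free sketch of the behavior, using a throwaway file in /tmp:

```shell
# Create a 1 MiB dummy "log" (written with dd so it is non-sparse), then shrink it in place.
dd if=/dev/zero of=/tmp/dummy-json.log bs=1024 count=1024 2>/dev/null
truncate -s 10K /tmp/dummy-json.log
stat -c %s /tmp/dummy-json.log    # file is now exactly 10240 bytes
```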
This probably isn’t ideal, but until I find a cleaner solution I just added the following as a root cron job:
0 4 * * * find /var/lib/docker/containers/ -iname "*-json.log" -exec truncate -s 10M {} \;
It simply truncates all *-json.log files in /var/lib/docker/containers/ down to 10M every day at 4 a.m. If you want to do the same, just run $ sudo crontab -e
and modify the above to your liking.
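A cleaner long-term fix than the cron job is to let Docker rotate the logs itself. The default json-file logging driver supports max-size and max-file options in /etc/docker/daemon.json (the values here are just a suggestion):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

After saving, restart the daemon with sudo systemctl restart docker. Note that these options only apply to containers created after the change, so existing containers need to be recreated to pick it up.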
Hopefully this helps someone else who had a similar issue. Cheers.