I’m hosting a few services using docker. For something like an openstreetmap tileserver, I’d like it to stay on my SSD, since disk speed matters for its performance and the directory is unlikely to grow and fill the drive.
For other services like NextCloud, speed isn’t as important as storage size, so I might want those on a larger HDD RAID.
I know it’s trivial to move the volumes directory to wherever, but can I move some volumes to one directory and some volumes to another?
No idea. I personally use PVs and PVCs with k3s, and it’s trivial there, with some downtime.
If you use a volume, you can mount that anywhere.
volumes:
  lemmy_pgsql:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: '/mnt/data/lemmy/pgsql'
Then in your service add a volume
volumes:
  - lemmy_pgsql:/var/lib/postgresql/data:Z
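To answer the original question with this approach: you can define one named volume per drive and point each wherever you like. A sketch with made-up mount points:

volumes:
  tileserver_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: '/mnt/ssd/tileserver'   # fast SSD for the tileserver
  nextcloud_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: '/mnt/raid/nextcloud'   # big HDD RAID for NextCloud

Each service then mounts only its own volume, so the data lands on the right drive.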
Is there any advantage to bind mounting that way? I’ve only ever done it by specifying the path directly in the container, usually
./data:/data
or some such. Never had a problem with it.
With the way I do it, you can also use NFS as a backend.
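For reference, an NFS-backed named volume looks something like this with the local driver (the server address and export path here are made up):

volumes:
  lemmy_pgsql:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,rw,nfsvers=4   # hypothetical NFS server
      device: ':/export/lemmy/pgsql'      # hypothetical export path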
Well, I know you can define volumes for other filesystem drivers, but for plain bind mounts you don’t need to define the volume like that; you can just specify the path directly in the container’s volumes and it will bind mount it. I was just wondering if there was any actual benefit to defining the volume manually over the simple way.
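The “simple way” here being compose’s short-form syntax, e.g. with the same paths as the earlier example and an illustrative postgres image:

services:
  db:
    image: postgres:16   # illustrative image
    volumes:
      # host path : container path; no top-level volumes: definition needed
      - /mnt/data/lemmy/pgsql:/var/lib/postgresql/data:Z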
I don’t know if this is naughty but I use bind mounts for everything, and docker compose to keep it all together.
You can map directories or even individual files to directories/files on the host computer.
Normally I make a directory for the service then map all volumes inside a ./data directory or something like that. But you could easily bind to different directories. For example for photoprism I mount my photos from a data drive for it to access, mount the main data/database to a directory that gets backed up, and mount the cache to a directory that doesn’t get backed up.
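In compose that looks roughly like this (host paths made up; the container paths are PhotoPrism’s usual mount points, so double-check against its docs):

services:
  photoprism:
    image: photoprism/photoprism
    volumes:
      - /mnt/photos:/photoprism/originals              # photo library on the data drive
      - ./data:/photoprism/storage                     # database and sidecar files, backed up
      - /mnt/scratch/ppcache:/photoprism/storage/cache # cache, not backed up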
Same, I don’t let Docker manage volumes for anything. If I need it to be persistent I bind mount it to a subdirectory next to the container’s compose file. It makes backups so much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you put all of your compose files and volumes, and then restart them all.
It also means you can go hog wild with
docker system prune -af --volumes
and there’s no risk of losing any of your data.
Yes that’s what I do too!
Overnight cron to stop containers, run borgmatic, then start the containers again.
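A sketch of that nightly job, assuming one compose project per subdirectory under ~/docker and borgmatic already configured:

#!/bin/sh
# Stop every compose project, run the backup, then bring everything back up.
for d in "$HOME"/docker/*/; do
  (cd "$d" && docker compose stop)
done
borgmatic
for d in "$HOME"/docker/*/; do
  (cd "$d" && docker compose start)
done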
I’ve never not used bind mounts for data that needs to persist. Nonpersistent data is fine on docker volumes.
Docker wants you to use volumes. That data is persistent too. They say volumes are much easier to backup. I disagree, I much prefer the bind mounts, especially when it comes to selective backups.
Volumes are horrible. How would I easily edit a config file of the program running inside if the container doesn’t even start?
Bind mounts + ZFS datasets are the way to go.
I do that, until some container has permissions issues.
I tinker, try and fix it, give up and use a volume. Or I fix it, but it never seems to be the same fix.
I occasionally have had permissions issues, but I tend to be able to fix them. Normally it’s just a matter of deleting the files on the host and letting the container create them; it doesn’t always work, but it usually does.
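One common fix (not mentioned above, and it only works if the image tolerates running as a non-default user): run the container as your host UID/GID so files created in the bind mount stay editable. A sketch with a hypothetical image:

services:
  app:
    image: example/app    # hypothetical image
    user: "1000:1000"     # your host UID:GID, check with `id`
    volumes:
      - ./data:/data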
You can run docker containers with multiple volumes, e.g. pass something like
-v src1:dst1 -v src2:dst2
as arguments to docker run.
So – if I understood your question correctly – yes, you can do that.
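Applied to the original question, that could look like this (image name and paths are made up):

# tileserver data on the fast SSD, bulk archive on the HDD RAID
docker run -d \
  -v /mnt/ssd/tiles:/data/tiles \
  -v /mnt/raid/archive:/data/archive \
  example/tileserver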
I have several NFS shares that host multiple docker volumes. So yes.
This is mostly an IOPS-dependent answer. Do you have multiple hot services constantly hitting the disk? If so, it can be advantageous to split the heavy hitters across different disk controllers, so in high-redundancy situations that means separate dedicated pools. If it’s a bunch of services just reading, filesystems like ZFS use aggressive caching (the ARC) to almost completely eliminate disk thrashing.