I’m hosting a few services using Docker. For something like an OpenStreetMap tile server, I’d like it to remain on my SSD, since fast disk access really matters for tile serving and the directory is unlikely to grow and fill the drive.

For other services like Nextcloud, speed isn’t as important as storage size, so I might want them on a larger HDD RAID array.

I know it’s trivial to move Docker’s volumes directory somewhere else, but can I put some volumes in one directory and some volumes in another?

  • Matt The Horwood@lemmy.horwood.cloud

    If you use a named volume, you can bind it to any path on the host.

    volumes:
      lemmy_pgsql:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: '/mnt/data/lemmy/pgsql'
    

    Then in your service, reference that volume:

        volumes:
          - lemmy_pgsql:/var/lib/postgresql/data:Z
    
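    You can also declare several named volumes this way and point each one at a different disk, which is exactly the SSD/HDD split being asked about. A sketch with made-up names and paths (the target directories must already exist on the host):

        volumes:
          tiles_fast:
            driver: local
            driver_opts:
              type: none
              o: bind
              device: '/mnt/ssd/tiles'            # fast disk
          nextcloud_big:
            driver: local
            driver_opts:
              type: none
              o: bind
              device: '/mnt/hdd-raid/nextcloud'   # big disk
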
    • ikidd@lemmy.world

      Is there any advantage to bind mounting that way? I’ve only ever done it by specifying the path directly in the container, usually ./data:/data or some such. Never had a problem with it.

  • Dave@lemmy.nz

    I don’t know if this is naughty, but I use bind mounts for everything, and docker compose to keep it all together.

    You can map directories or even individual files to directories/files on the host computer.

    Normally I make a directory for the service, then map all volumes inside a ./data directory or something like that. But you could easily bind to different directories. For example, for PhotoPrism I mount my photos from a data drive for it to access, mount the main data/database to a directory that gets backed up, and mount the cache to a directory that doesn’t get backed up.
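
    A minimal sketch of that layout (the host paths are made up, and the container paths are PhotoPrism’s usual ones, so double-check them against your setup):

        services:
          photoprism:
            image: photoprism/photoprism
            volumes:
              # photos on the big data drive, read-only
              - /mnt/data/photos:/photoprism/originals:ro
              # index/database in a directory that gets backed up
              - ./data/storage:/photoprism/storage
              # cache in a directory that is excluded from backups
              - /mnt/scratch/photoprism-cache:/photoprism/storage/cache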

    • suicidaleggroll@lemm.ee

      Same, I don’t let Docker manage volumes for anything. If I need something to be persistent, I bind mount it to a subdirectory of the container’s own folder. It makes backups much easier too, since you can just stop all containers, back up everything in ~/docker or wherever you keep all of your compose files and volumes, and then restart them all.

      It also means you can go hog wild with docker system prune -af --volumes and there’s no risk of losing any of your data.

      • Dave@lemmy.nz

        Yes, that’s what I do too!

        Overnight cron to stop containers, run borgmatic, then start the containers again.
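
        Something along these lines (a sketch; it assumes borgmatic is already configured to back up the compose directory):

            #!/bin/sh
            # Stop whatever is running, back up, then start the same set again.
            running=$(docker ps -q)
            [ -n "$running" ] && docker stop $running
            borgmatic       # runs the backup as configured in /etc/borgmatic/
            [ -n "$running" ] && docker start $running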

    • catloaf@lemm.ee

      I’ve never not used bind mounts for data that needs to persist. Nonpersistent data is fine on docker volumes.

      • Dave@lemmy.nz

        Docker wants you to use volumes, and that data is persistent too. They say volumes are much easier to back up. I disagree; I much prefer bind mounts, especially when it comes to selective backups.

        • KaninchenSpeed@sh.itjust.works

          Volumes are horrible. How would I easily edit a config file of the program running inside if the container doesn’t even start?

          Bind mounts + ZFS datasets are the way to go.
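
          Sketched below with a hypothetical pool and dataset name: one dataset per service, bind mounted into the container.

              # one ZFS dataset per service, so each gets its own snapshots and quotas
              zfs create -p -o mountpoint=/mnt/appdata/nextcloud tank/appdata/nextcloud
              # then bind mount it in the compose file, e.g.
              #   - /mnt/appdata/nextcloud:/var/www/html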

    • towerful@programming.dev

      I do that, until some container has permissions issues.
      I tinker, try to fix it, give up, and use a volume. Or I fix it, but it never seems to be the same fix twice.

      • Dave@lemmy.nz

        I have occasionally had permissions issues, but I tend to be able to fix them. Normally it’s just a matter of deleting the files on the host and letting the container recreate them. It doesn’t always work, but it usually does.
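
        Another fix that often works is running the container as your host user, so files created in the bind mount get your ownership. A sketch (the service name, image, and UID/GID are placeholders, and not every image supports this):

            services:
              myservice:
                image: example/app       # placeholder image
                user: "1000:1000"        # host UID:GID so bind-mounted files match
                volumes:
                  - ./data:/data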

  • e0qdk@reddthat.com

    You can run docker containers with multiple volumes, e.g. by passing something like -v src1:dst1 -v src2:dst2 as arguments to docker run.

    So – if I understood your question correctly – yes, you can do that.
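
    Concretely, something like this, with a made-up image and host paths:

        # config on the fast SSD, bulk data on the HDD RAID
        docker run -d \
          -v /mnt/ssd/myapp/config:/etc/myapp \
          -v /mnt/hdd-raid/myapp/data:/var/lib/myapp \
          example/myapp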

  • astrsk@fedia.io

    This is mostly an IOPS-dependent answer. Do you have multiple hot services constantly hitting the disk? If so, it can be advantageous to split the heavy hitters across different disk controllers, which in high-redundancy setups means separate dedicated pools. If it’s a bunch of services just reading, filesystems like ZFS use RAM caching (the ARC) to almost completely eliminate disk thrashing.