I used to simply use the ‘latest’ version tag, but that occasionally caused problems with breaking changes in major updates.

I’m currently using podman-compose and I manually update the release tags periodically, but the number of containers keeps increasing, so I’m not very happy with this solution. I do have a simple script which queries the Docker Hub API for tags, which makes it slightly easier to find out whether there are updates.
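
Roughly the kind of query I mean, as a simplified sketch (the repository name is just an example, and jq is assumed to be installed):

#!/bin/sh
# list the most recent tags published for an image on Docker Hub (v2 API)
REPO="linuxserver/jellyfin"   # example repository, not necessarily one I run
curl -s "https://hub.docker.com/v2/repositories/${REPO}/tags?page_size=20" \
  | jq -r '.results[].name'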

I imagine a solution with a nice UI for seeing if updates are available and possibly applying them to the relevant compose files. Does anything like this exist or is there a better solution?

  • Krafting@lemmy.world · 1 year ago

    WatchTower can auto-update your containers or notify you when an update is available. I use it with a Matrix account for notifications.

    • mersh@lemm.ee · 1 year ago

      +1 for watchtower. I’ve been using it for about a year now without any issues to keep anywhere from 5 to 10 Docker containers updated.

    • Dusty@l.dusty-radio.com · 1 year ago

      Sorry if it’s obvious, but I don’t see a way to use Matrix for notifications in their documentation, and my searching is coming up blank. Do you by chance have a tutorial for this?

      • Krafting@lemmy.world · 1 year ago

        Here is how I did it:

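        # WATCHTOWER_NOTIFICATION_URL takes a shoutrrr-style URL; matrix:// posts to the given room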
        docker run -d \
          --name watchtower \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -e WATCHTOWER_NOTIFICATION_URL=matrix://username:password@domain.org/?rooms=!ROOMID:domain.org \
          -e WATCHTOWER_NOTIFICATION_TEMPLATE="{{range .}}[WatchTower] ({{.Level}}): {{.Message}}{{println}}{{end}}" \
          containrrr/watchtower
        

        Edit: I created a pull request to the WatchTower documentation, here: https://github.com/containrrr/watchtower/pull/1690

    • Carol2852@discuss.tchncs.de · 1 year ago

      This looks great. I was looking at Watchtower again a few days ago, but I don’t want to auto update my containers, just get notified for updates. I usually just keep the RSS feed of the project in my feed reader, but diun looks like a proper solution. Thanks!

    • g5pw@feddit.it · 1 year ago

      Huh, that’s actually way better than my current setup of spamming me on Telegram every time there’s an update

  • bookworm@feddit.nl · 1 year ago

    Since my “homelab” is just that, a homelab, I’m comfortable with using the :latest tag on all my containers and just running docker-compose pull and docker-compose up -d once per week.
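
    For example, that weekly run can be a single crontab entry (the path and schedule here are just placeholders):

    # every Sunday at 04:00: pull newer images and recreate any changed containers
    0 4 * * 0  cd /srv/compose && docker-compose pull && docker-compose up -d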

    • Toribor@corndog.uk · 1 year ago

      This is mostly my strategy too. Most of the time I don’t have any issues, but occasionally I’ll jump straight to a version with breaking changes. If I have time to fix it, I go find the patch notes and update my config; otherwise I just pin the older version’s tag and come back later.

      I’ve recently been moving my containers from docker compose into pure Ansible, though, since I can write roles/playbooks to push config files and cycle containers, which previously required multiple manual steps with docker compose. It’s also helped me turn what used to be notes into actual code.

    • easeKItMAn@lemmy.world · 1 year ago

      Just put all the commands into a bash file: start with ‘docker tag’ to re-tag the current image as a backup in case I need to revert, then pull and compose up. It all runs weekly via crontab. If something breaks, the last working image is still there.
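
      Roughly like this, as a sketch (the image name, backup tag, and compose path are placeholders, not my actual setup):

      #!/bin/bash
      # keep a fallback tag pointing at the currently working image before pulling a new one
      docker tag nextcloud:latest nextcloud:previous
      docker-compose -f /srv/stack/docker-compose.yml pull
      docker-compose -f /srv/stack/docker-compose.yml up -d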

  • LachlanUnchained@lemmyunchained.net · 1 year ago

    The best way I’ve found is to wait till something breaks. Then I message around on forums asking why I’m getting errors till someone recommends updating and restarting.

    Blindly remove the container. Recreate it.

    And hope none of the configs break. ✌️💛

      • deleted@lemmy.world · 1 year ago

        I use watchtower and hope nothing will break. I never read breaking changes.

        When an issue happens, I just search the internet or change the tag to a known working version until the issue is resolved.

        I can afford to have my server down for a few days. It’s not critical to me.

      • mea_rah@lemmy.world · 1 year ago

        It kind of depends on what your priorities are. In my experience it’s usually much easier to upgrade from the previous version to the latest one than to jump a couple of versions ahead because you didn’t have time for upgrades recently…

        When you think about it from the development point of view, the upgrade from the previous version to the latest is the most tested path. The developers of the service probably did exactly this upgrade themselves. Many users probably did the same and reported bugs. When you’re upgrading from a version released many months ago to the current stable, you might be the only one with such a combination of versions. The devs are also much more likely to have considered all the changes introduced between the latest versions.

        If you encounter an issue upgrading, how many people will have hit the same problem with your specific combination of versions? How likely are you to find it already reported on GitHub, compared to the bunch of people who always upgrade to latest?

        Also, moving between the latest versions, there’s only a limited set of changes to consider if you encounter issues. If you jumped 30 versions ahead, you might end up spending quite some time figuring out which version introduced the breaking change.

        Also, no matter how carefully you look at it, there’s always a chance that the upgrade fails and you’ll have to roll back. So if you don’t mind a little downtime, you can just let the automation do the job and at worst you’ll do the rollback from backup.

        It’s also a pretty good litmus test. If a service regularly breaks when upgrading to latest without any good reason, perhaps it isn’t mature enough yet.

        We’re obviously talking about a home lab, where time is sometimes limited but some downtime is usually not a problem.

      • roofuskit@kbin.social · 1 year ago

        It depends on the project. If the project doesn’t make an effort to highlight breaking changes, I would consider using a different one.

        But any decent OSS project will keep a good changelog for its updates that you can read.

        • psykal@lemmy.fmhy.ml · 1 year ago

          I’ve just been updating my containers every week or so and if something breaks I’ll try and fix it. It would definitely be preferable to “fix” in advance, but with enough containers getting updated, checking/reading every change becomes a fair amount of work. Most of the time nothing breaks.

          Downvotes are cool but if this is a bad way of doing things just tell me.

              • roofuskit@kbin.social · 1 year ago

                Well, there’s always the “if it ain’t broke don’t fix it” mantra. There are a few reasons I tend to update: because there’s a feature I want or need, to fix a bug that affects me, or because a piece of software frequently ships breaking changes and keeping up with reading the changelogs is the best way to deal with that. The last reason is mostly because if I keep up with it, I don’t have to read and fix multiple months of breaking changes at once.

  • dan@upvote.au · 1 year ago

    I read the changelogs for the apps, and manually update the containers. Too many apps have breaking changes between releases.

  • Protegee9850@lemmy.world · 1 year ago

    I just use docker compose files. I bundle my arr stack in a single compose file and can run docker compose pull to update them all in one swoop.

    • DigitalPortkey@lemmy.world · 1 year ago

      Just so I understand, you’re using your compose file to handle updating images? How does that work? I’m using some hacked together recursive shell function I found to update all my images at once.

      • Protegee9850@lemmy.world · 1 year ago

        There’s plenty of tutorials out there for it. A quick DuckDuckGo search turned up this as one of the first results, but the theory is the same if you wanted to bundle ‘arr containers instead of nginx/whatever. https://www.digitalocean.com/community/tutorials/workflow-multiple-containers-docker-compose

        Essentially you create a docker compose file for your services, within which you have as many containers as you want, set up like you would in any other compose file. You ‘docker compose pull’ and ‘docker compose up -d’ to update/install just like you would for an individual container, but it does them all together. It sounds like others in the thread have automated this further with services dedicated to watching for updates and applying them automatically, but I just look for a flag in the app saying there’s an update available and pull / up -d whenever it’s convenient / I realize there’s an update.
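
        A minimal sketch of the update step, with a hypothetical directory holding the bundled compose file:

        cd /srv/arr-stack        # directory containing the docker-compose.yml with all the services
        docker compose pull      # pull newer images for every service defined in the file
        docker compose up -d     # recreate only the containers whose image or config changed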

    • grizzlywan@kbin.social · 1 year ago

      Compose is the best. Way more granular control. And makes migration entirely pain free. Just ran into the case for it. Set it and forget it, use the same compose for updates.

  • MangoPenguin@lemmy.blahaj.zone · 1 year ago

    Watchtower auto updates for me.

    Sometimes stuff breaks; if it does and I can’t fix it, I’ll just roll back to a backup for that stack and figure it out from there.

  • poVoq@slrpnk.net · 1 year ago

    Ideally containers are provided with a major release version tag, so not just :latest but e.g. :0.18 for all 0.18.x releases, which in theory shouldn’t break compatibility.

    Then you can set your Podman systemd configuration file (I use Quadlet .container files) to automatically check for new versions and update them.
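
    For the non-Quadlet route, a rough sketch looks like this (the container name and image tag are just examples; podman auto-update only handles containers that run under systemd units):

    # run the container pinned to a minor-version tag and opt it into registry auto-updates
    podman run -d --name myapp \
      --label io.containers.autoupdate=registry \
      docker.io/library/nginx:1.25

    # preview what would be updated, then let the shipped systemd timer handle it regularly
    podman auto-update --dry-run
    systemctl enable --now podman-auto-update.timer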

      • exu@feditown.com · 1 year ago

        Well, most projects publish their Dockerfiles, so you could take and rebuild them with the tags you want. And the builds can be wired into a CI/CD pipeline so you just have to make a new push with the latest versions.

        I should make something like that.

    • Leafimo@feddit.de · 1 year ago

      this is the way to do it.

      and periodically keep tabs on major releases to swap from 0.18 to 0.19

    • dr_robot@kbin.social · 1 year ago

      I originally used this too, but in the end had to write my own python script that basically does the same thing and is also triggered by systemd. The problem I had was that for some reason podman sometimes thinks there is a new image, but when it pulls it just gets the old image. This would then trigger restarts of the containers because auto-update doesn’t check if it actually downloaded anything new. I didn’t want those restarts so had to write my own script.

      Edit: I lock the version manually though, e.g. Nextcloud 27, and check once a month whether I need to bump it. I do it manually in case the upgrade needs an intervention.
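
      The core of that check, sketched in shell rather than Python (the image and unit names are placeholders):

      IMAGE="docker.io/nextcloud:27"
      OLD=$(podman image inspect --format '{{.Id}}' "$IMAGE")
      podman pull "$IMAGE"
      NEW=$(podman image inspect --format '{{.Id}}' "$IMAGE")
      # only restart the container's systemd unit if the pull actually brought a new image
      if [ "$OLD" != "$NEW" ]; then
          systemctl restart container-nextcloud.service   # placeholder unit name
      fi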

  • CodingSquirrel@kbin.social · 1 year ago

    I use something called What’s Up Docker to check for docker updates. It integrates nicely with Home Assistant, so I made a card on my server state dashboard that shows which containers have updates available. I’ll check every so often and update my docker-compose files.

  • chandz05@lemmy.world · 1 year ago

    Auto-update with the “latest” version tag, and re-pull a specific previous version if there are problems. I’ve got too many containers to keep up with individual versions.

    • Lem453@lemmy.ca · 1 year ago

      If you pull ‘latest’ and then want to roll back, how do you know what version you were on before? Is there a way to see what version/tag actually got pulled when you pulled latest?

      • chandz05@lemmy.world · 1 year ago

        The last time it happened was with one of the newer Nextcloud updates. It was a bit of trial and error, but I eventually went back to a version that worked and could fix the underlying issue. There should be a list of version tags either on Docker Hub or GitHub that lists all the versions that have been pushed live and are available to pull.
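
        A couple of commands can also help pin down what “latest” actually resolved to (the container/image names here are just examples, and not every image sets a version label):

        # image ID the running container was created from
        docker inspect --format '{{.Image}}' nextcloud
        # registry digest(s) currently associated with the local :latest tag
        docker image inspect --format '{{.RepoDigests}}' nextcloud:latest

        # many images also carry an OCI version label
        docker image inspect --format '{{index .Config.Labels "org.opencontainers.image.version"}}' nextcloud:latest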

  • JoeKrogan@lemmy.world · 1 year ago

    I pin versions and stick to stable releases as I want stability. Everything is behind a VPN so I’m not too worried. I check them and update once a week or so.

  • Millwiller@lemm.ee · 1 year ago

    Kubernetes with ArgoCD declarative config, plus Renovate. It automatically makes PRs against my config repo for container/chart versions, with the changelog in the description.

    • deafboy@lemmy.world · 1 year ago

      You obviously know a thing or two about Kubernetes. I’m trying to learn. I’ve been at the cloud native conference, I attended the vmware tanzu course, even played with microk8s on my laptop. I still look for the “aha!” moment, when I understand the point of it all, and everything clicks into place.

      However, whenever I see somebody describe their setup, I just cringe. It all just feels like we’re doing simple things in an obscure and difficult way.

      The technology has been here for almost a decade, and it’s obviously not going away. How can I escape the misery, and start loving k8s?

      Picture somehow related…

      • witten@lemmy.world · 1 year ago

        You weren’t asking me, but I’ve used K8s professionally and my take is that K8s is only suited for business environments, ones with a good number of devs and users and complex deployment/runtime needs. You’re not finding that “aha!” with K8s for self-hosting at home because, simply put, you are not the target market. It’s way overkill for your needs. The one exception is if you’re trying to learn it at home so you can use it in a corporate environment. In that case, go wild. But just don’t expect it to make sense for most modest home lab or self-hosting needs.

      • Millwiller@lemm.ee · 1 year ago

        For sure, just stacking turtles all the way down… 🐢 It’s definitely overkill for a home lab, but I’m an infra engineer and it’s what I use daily, so setting it up was worth it because I’m already really familiar with the stack. That said, I do absolutely love having a declarative setup at home, because I’ll sometimes go months without touching things. Before I spent the time to make it declarative, I’d frequently forget how I set certain things up and waste time redoing things or figuring out where I left off. Now I just check the commit history and I’m always moving forward.

    • mariom@lemmy.world · 1 year ago

      +1 for renovate.

      A little bit different setup - helmfile in git repository + pipelines in woodpecker-ci.