People resoundingly suggested using containers, so I’ve been reading up. I know some things about containers and Docker and whatnot, but there are a few decision points in the Jellyfin container install instructions where I don’t know the “why”.

Data: They mount the media from disk, which is good because it’s on a NAS. But for the cache and config they use docker volumes. Why would I want a docker volume for the config? Wouldn’t I want to be able to see it from outside the container more easily? What am I gaining by having Docker manage the volume?
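
For concreteness, here’s my understanding of the two styles, with made-up host paths and the official jellyfin/jellyfin image (treat this as a sketch, not the install docs):

```bash
# Named volumes: Docker manages the storage under /var/lib/docker/volumes,
# handling creation and permissions, but the host path isn't mine to choose.
docker run -d --name jellyfin \
  -v jellyfin-config:/config \
  -v jellyfin-cache:/cache \
  -v /mnt/nas/media:/media:ro \
  jellyfin/jellyfin

# Bind mount for config instead: same container path, but /config now lives
# at a host path I picked, so I can inspect, edit, and back it up directly.
docker run -d --name jellyfin \
  -v /srv/jellyfin/config:/config \
  -v jellyfin-cache:/cache \
  -v /mnt/nas/media:/media:ro \
  jellyfin/jellyfin
```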

Cache: I saw a very old post where someone mentioned telling Docker to use RAM for the cache. That “seems”, in theory, like a good idea for speed, and I do have 16 GB on the mini PC that I am running this all on. But I don’t see any recent mentions of it. Any pros/cons?
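
From what I’ve read, the mechanism would be a tmpfs mount, something like the sketch below (the 2g size cap is just an example, and anything in the cache is lost on container restart):

```bash
# Mount /cache as tmpfs (RAM-backed); size= caps how much RAM it may use.
docker run -d --name jellyfin \
  --tmpfs /cache:rw,size=2g \
  -v jellyfin-config:/config \
  -v /mnt/nas/media:/media:ro \
  jellyfin/jellyfin
```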

User: I know from work experience that generally you don’t want things running as root in the container. But… do you want a dedicated user for each service (jellyfin, arr*)? Or one user for all services that isn’t your personal user? Or just your personal user?
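
Whichever account I land on, I assume the mechanics look roughly like this (the account name is a placeholder):

```bash
# Create a system account with no login shell, then run the container as it.
# The bind-mounted config dir would need to be owned by this user.
sudo useradd -r -s /usr/sbin/nologin jellyfin
docker run -d --name jellyfin \
  --user "$(id -u jellyfin):$(id -g jellyfin)" \
  -v /srv/jellyfin/config:/config \
  -v /mnt/nas/media:/media:ro \
  jellyfin/jellyfin
```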

DLNA: I had to look that up, but I don’t know how it is relevant. The whole point seems to be that Jellyfin would be the interface, and DLNA seems like it would allow certified devices to discover media files?

  • LainTrain@lemmy.dbzer0.com · 9 days ago

    Can’t speak to the RAM thing. My cache is a 320 GB Toshiba hard drive I dug out of an old laptop in 2014, and I haven’t really had issues, but I don’t do a lot of high-fidelity transcodes: my local devices tend to support the codecs natively, and remotely I’m limited by upload speed anyway (residential fiber, asymmetric speeds).

    > They mount the media from disk, which is good because it’s on a NAS. But for the cache and config they use docker volumes. Why would I want a docker volume for the config?

    Better performance, which is mainly useful for the cache.

    > User: I know from work experience that generally you don’t want things running as root in the container.

    Doesn’t matter if you don’t expose it to the internet.

    I run the Docker daemon as root, have only one user on the server with sudo, and removed all firewall packages. I don’t care about any of it, because NAT means nothing can reach the box from outside without a VPN; everything that does need to be public goes through Cloudflare tunnels, and I have a separate device exposing only a VPN server (key + password auth) for reaching services that are LAN-only.

    A good NAT fixes all problems: just don’t use that demonic IPv6 crap, don’t use UPnP, don’t expose random ports (SSH etc.), and you’re good. Speaking as an MSc and employed cybersec engineer of several years and aspiring pentester (Hacker rank on HtB btw, I use Arch btw, etc. etc.).

    If it needs to be public, that’s a very different story.

    If you want actual security/defense in depth, then yes: you want a separate user per service with no path to root, plus ACLs on the least-privilege principle, where each service can only run the executables needed for the absolute barest essentials, so no interactive shells and not most of your bins (use facl for this), and any scripts should have hard-coded paths, etc. Be especially careful with what you actually expose via any mounts. Also run something like LinPEAS to look for misconfigs, and if you do any SMB/Windows/AD stuff, run enum4linux-ng and the like. And of course use unattended upgrades and refresh containers regularly.
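
    A rough illustration of the user/ACL part (the account name and paths are placeholders; adjust to your layout):

    ```bash
    # Per-service system account: no home dir, no login shell, no path to root.
    sudo useradd -r -M -s /usr/sbin/nologin svc-jellyfin

    # Least privilege via ACLs: read-only on the media share,
    # read-write only on the service's own state directory.
    sudo setfacl -R -m u:svc-jellyfin:rX /mnt/nas/media
    sudo setfacl -R -m u:svc-jellyfin:rwX /srv/jellyfin
    ```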

    > The whole point seems to be that Jellyfin would be the interface. And DLNA seems like it would allow certified devices to discover media files?

    Basically, if you want other devices to browse or control Jellyfin without a native client, enable it. Ever use casting? In practice it works like that; in principle it uses broadcast discovery, like the Bonjour/Rendezvous protocols.

    • SailorsLife@lemmy.world (OP) · 9 days ago

      Thanks.

      Interesting. I hadn’t thought about performance; I can see how a docker volume could be better optimized, and for a cache that makes sense. I was considering a bind mount for the config for easier visibility when debugging things, but keeping the volume for the cache now makes sense… thanks for that.

      I technically work for a company that is in the security space, but I myself just can’t really get into it. It seems like there are always so many things that could be done to improve security, but never the resources to do most of them, and that would really eat at me. We hire companies to do pen testing, and they seem like home inspectors: they have to find a few things to help the customer (us) justify the expense, but once they do, they don’t need to look much deeper. And half the things they find will be lows/mediums that will never get fixed. In the end, the only reason companies seem to hire them is so they can advertise that they did, or to meet their customers’ security requirements. All in all, it just feels so sad. :(

      Anyway, if I am following you… you run a custom NAT for your home network? I know my router has one, but it sounds like you don’t trust the routers? Is that right? And then you run a VPN server on the inside to handle any external access. That seems smart. Is that common practice, or something you do because of your background?

      • LainTrain@lemmy.dbzer0.com · 8 days ago

        I don’t run a custom NAT; I just don’t port-forward much.

        I have my ISP’s router as the gateway/fiber endpoint, hooked up to a TP-Link 1-gig switch and an Archer C7 running OpenWrt as a semi-dumb AP/switch, and I handle DHCP myself so clients get my recursive DNS.