• 4 Posts
  • 76 Comments
Joined 1 month ago
Cake day: February 11th, 2026

  • Hahaha baby steps! But I’ll look at it; if nothing else I think it would be very funny to have the dev equivalent of Jar Jar Binks end the format war by accident (which I say as just a joke; I of course have no idea how complex the issue actually is).

    Edit: Damn dude, you weren’t kidding about the challenge. You’re right that HISTV won’t work for DoVi 5 in its current state - “Fundamental to FFMPEG” does kinda mean “fundamental to HISTV”, being that it’s basically just a clever wrapper around FFMPEG. That said, if your tooling is ready, I’ve got a plan to integrate it. The philosophy with HISTV is to preserve whatever we can and fall back gracefully as far as we have to if we can’t preserve the current profile, which your tools slot into like they were made for it (Rust Gang represent (👉゚ヮ゚)👉).


  • Ah, then the real slowness is going to come from having them on a spinning-disk HDD. For friends, a 3Mbps or 4Mbps target bitrate should be plenty, and the 2x multiplier should be enough to preserve detail. No need to touch anything else - you don’t need precision mode for this. Maybe start with 3Mbps on just one episode and see how you go; if you find yourself noticing it looks blocky, bump up to 4Mbps and you’ll be golden.



  • Fun fact - HISTV actually has two-pass encoding! Though, with enough system RAM you can actually look ahead far enough that you can get the benefits of two-pass in just a single pass. I have a bit about this in the README.md:

    Precision mode

    One checkbox for the best quality the app can produce. It picks the smartest encoding strategy based on how much RAM your system has:

    | Your RAM | What it does |
    | --- | --- |
    | 16GB or more | Looks 250 frames ahead to plan bitrate (single pass) |
    | 8-16GB | Looks 120 frames ahead to plan bitrate (single pass) |
    | Under 8GB | Scans the whole file first, then encodes (two passes) |

    Two-pass only happens when precision mode is on AND the system has less than 8GB RAM AND the file would be CRF-encoded. The reason is those lookaheads above: lookahead buffers live in memory. On a low-RAM system that buffer would be too large, so the app falls back to two-pass instead and stores the analysis run in a tempfile on disk. To break down each pass:

    • Pass 1: Runs ffmpeg with -pass 1 writing to a null output. ffmpeg analyses the entire file and writes a statistics log (the passlog file) describing the complexity of every scene. No actual video is produced - this pass is pure analysis.
    • Pass 2: Runs ffmpeg with -pass 2 using the statistics from pass 1. The encoder now knows what’s coming and can distribute bits intelligently across the whole file - spending more on complex scenes, less on simple ones - without needing a large lookahead buffer in RAM. After both passes complete, the passlog temp files are cleaned up.
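
    To make the two passes concrete, here’s a rough Python sketch of the two ffmpeg invocations described above, built as argument lists. The `-pass`, `-passlogfile`, and `-f null` flags are real ffmpeg options; the function name, paths, and settings are made up for the example and aren’t HISTV’s actual code.

```python
# Illustrative sketch of a classic ffmpeg two-pass encode.
# Pass 1 writes a statistics log only; pass 2 encodes using it.

def two_pass_commands(src: str, dst: str, bitrate: str, logfile: str):
    """Build the argv lists for both passes (hypothetical example)."""
    base = ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-b:v", bitrate,
            "-passlogfile", logfile]
    # Pass 1: pure analysis - no audio, null muxer, no real output file.
    pass1 = base + ["-pass", "1", "-an", "-f", "null", "/dev/null"]
    # Pass 2: actual encode, distributing bits using the pass-1 stats.
    pass2 = base + ["-pass", "2", dst]
    return pass1, pass2
```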

    The biggest problem with two-pass encoding is the speed: it has to make two passes over the whole file, one analysing and one actually encoding. With a 250-frame lookahead, you’re basically writing your passlog into RAM - and reading from it - as you go. With a 120-frame lookahead your CPU will sometimes catch up to the passlog, but you can still write to and read from it as you go, so you get similar speed - and quality close enough that it doesn’t really make a difference - in a single pass.
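
    In pseudocode, the precision-mode decision above looks roughly like this. Names and structure are illustrative only, not HISTV’s actual code:

```python
# Hypothetical sketch of the precision-mode strategy:
# big in-RAM lookahead when memory allows, two-pass otherwise.

def pick_strategy(precision: bool, ram_gb: float, would_crf: bool) -> dict:
    """Mirror the RAM table: lookahead size scales with system RAM."""
    if not precision:
        return {"passes": 1, "lookahead_frames": 0}  # normal single pass
    if ram_gb >= 16:
        return {"passes": 1, "lookahead_frames": 250}
    if ram_gb >= 8:
        return {"passes": 1, "lookahead_frames": 120}
    # Under 8GB: only fall back to two-pass if the file would be CRF-encoded.
    if would_crf:
        return {"passes": 2, "lookahead_frames": 0}
    return {"passes": 1, "lookahead_frames": 0}
```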


  • You know what? I think I can figure out a way to estimate final file size and display it to the user. It’ll only work if “Precision Mode” is off though - that uses “CRF” or “Constant Rate Factor” which basically tells the encoder “be efficient, but make the file as big as it needs to be to look good”. As a result there’s no way to tell how big the file will end up being - the encoder makes the decision on the fly.

    With “Precision Mode” off, HISTV has two gears:

    1. If your file is already small enough (at or below the target bitrate), it uses “CQP” or “Constant Quantisation Parameter” (the QP I/P numbers), which tells the encoder “Use this quality level for every frame, I don’t care how big the file ends up”. It’s fast and consistent - every frame gets the same treatment.
    2. If your file is too big, it switches to VBR (Variable Bit Rate), which tells the encoder “Stay around this target bitrate, spike up to the peak ceiling on complex scenes, but don’t go over”. It’s how the app actually shrinks files. You can estimate the output size with `target_mbps * seconds / 8` - so a 60-second clip at 4Mbps lands around 30MB. <- This is the maths I’m thinking of using to display the estimate to the user.
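
    The estimate itself is one line - megabits per second times seconds gives megabits, and dividing by 8 gives megabytes. A minimal sketch (function name is mine, not HISTV’s):

```python
# Back-of-envelope size estimate for the VBR gear.
def estimate_size_mb(target_mbps: float, duration_s: float) -> float:
    # Mbit/s * seconds = megabits; divide by 8 to get megabytes.
    return target_mbps * duration_s / 8
```

    So `estimate_size_mb(4, 60)` gives 30.0, matching the 60-second / 4Mbps example; a 45-minute episode at 3Mbps comes out around 1GB.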




  • Haha yes, I use it regularly! And yes, it’ll be plenty fast on your system. I have very deliberately gone over the code base looking for inefficiencies six times now, so it runs nice and lean - I do an efficiency/hygiene pass every couple of releases to make sure bloat doesn’t creep in.

    As ark3@lemmy.dbzer0.com said, CPU encoding is slow but it preserves the most quality. No kidding, it really is night and day compared to GPU encoding - for this, just tick “Precision mode” in HISTV. It’s about 1x speed on most videos, so a 45 minute file will take about 45 minutes.

    GPU does go a lot faster - my 7900 XTX rips through 1080p at about 28x speed, so a 45-minute file takes about 2 minutes. This is good enough for most content; just untick “precision mode” and set the multiplier to 2x or 3x with a bitrate of 4Mbps if you want better GPU quality in HISTV. The multiplier says how high the peak bitrate can go, so you keep more data for fast-moving scenes without forcing the encoder to keep useless data for slow scenes.
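
    The arithmetic behind those numbers is tiny - peak ceiling is just target times multiplier, and encode time is runtime divided by encode speed. A throwaway sketch (names are mine, for illustration only):

```python
# Peak bitrate ceiling: the multiplier scales the target bitrate.
def peak_ceiling_mbps(target_mbps: float, multiplier: float) -> float:
    return target_mbps * multiplier

# Rough wall-clock estimate: runtime divided by encode speed.
def encode_minutes(runtime_min: float, speed_x: float) -> float:
    return runtime_min / speed_x
```

    So a 4Mbps target with a 2x multiplier can spike to 8Mbps on busy scenes, and a 45-minute file at 28x finishes in roughly 1.6 minutes.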


  • This is super helpful (and I see that “btw”, you got a smile with that one (⁠☞゚⁠ヮ゚⁠)⁠☞). Thank you for the heads up and all this detailed information! I’m excited to check out Certum.

    there seem to be very few people who simultaneously
    • pathologically dislike using Windows regularly
    • still want to make it easier for people on Windows to minimize Windows Defender complaints when running software that they build

    Describes me to a T 😅 My career is rooted in support, so my pathologies include trying to make things end-user easy.


  • There really aren’t many to be honest - Tdarr is super powerful! But the setup is a lot, at least on first run. The main point of HISTV is for the times when you can’t be bothered to set up Tdarr, like if you only have to do spot conversions, or for people who don’t want to learn how to use Tdarr. But there are a few features unique to HISTV!

    I’ve built in disk space monitoring so your drive doesn’t fill up during encoding, which Tdarr doesn’t do; I don’t know if Tdarr supports turning gif/webp/mjpeg/apng into MP4/MKV; and Tdarr doesn’t auto-detect your hardware, whereas HISTV does a few test transcodes on startup to determine not only what hardware is available but whether the encoder for that hardware is actually working.


    1. Thank you! I am obnoxiously proud of myself for that one 😅

    2. That’s actually how this all started for me haha! I was sitting there tweaking the same command as I hit videos with different formats or quality levels, and I thought to myself that it’d be a lot easier with just a few smarts like detecting if a video is already at the target quality level. The first version was actually just a winforms GUI wrapping that very PowerShell command - it’s come a long way in just a few weeks.

    3. Subler is interesting! Hmm. Currently HISTV just copies all the subtitle tracks over directly. I think I could add a subtitle function like Subler’s to bake subtitle tracks in, if the user has the subtitle file they want to use. Would that be useful? Metadata download from the internet is messy, though - I might want to leave that to the existing *arr solutions.



  • Thank you very much, that’s very kind! 😊

    I didn’t know I could get the cert from elsewhere, I appreciate the heads up; my experience lies mainly in networking, rather than software. I’ll open that possibility back up, then. It’d be a first for me, and I’m getting a real taste for firsts with this project!

    I’m not taking anyone’s money, though, even Windows users (right now, that includes me… I’m working on it lol). I’m doing this purely for the love of the game!






  • Can’t believe nobody has said why The Warp exists - or rather why it overlaps with and breaks into our reality.

    Warhammer has Space Elves - the Eldar/Aeldari. Powerful psykers. Millennia ago they got so kinky and on so many drugs they fucked a hole in reality itself, a hole that promptly engulfed their homeworld and sent them on an endless exodus in their Worldships. They were such powerful psychics their sexcapades also manifested demons and the Chaos Gods from the formless energies of The Warp.

    The ones that renounced their horny ways became the Eldar, the ones who didn’t became the Dark Eldar who are still really into leather and chains.

    This isn’t super relevant to the story of the game, I just really like this bit of the lore. It reads like a punchline:

    Why’s the universe so jacked up? Fuckin’ space elves…



  • Yes, and it’s actually pretty good at it. The code won’t be the most efficient, it won’t be elegant or beautiful… but it will mostly work, and someone with technical experience can get it over the line. Case in point: I can “sort of” code, but my career has been spent writing simple scripts. Nothing more complicated than workstation provisioning, find and replace with some regex, PowerShell with a WinForms GUI, etc. Despite being relatively low level in terms of actually building applications, I’ve been able to “project manage” and hand-edit Claude output into a working application. It’s basically just a frontend for FFMPEG, with some smarts and automation built in. Not particularly impressive in absolute terms, but it’s a lot snappier and prettier than anything else I’ve ever put together and I’m proud of it. I got it from concept to working in a few days, and added major features plus a few efficiency passes and bug fixes in two weeks - an absolutely incredible pace.

    This comment is going to get absolutely nuked with downvotes, I guarantee it - but that won’t change the fact that I’m successfully building stuff with AI.


  • I spent quite a lot of effort getting Stoat up and running because they aren’t working on the self-hosted version, only to get a nice email from the German government that my server was running an outdated version of React with RCE vulnerabilities. Nuked that stack at 3am.

    Also, I fixed their Tenor integration to be provider-agnostic so self-hosters could choose a different gif provider like klipy (Tenor turned off their API, so gif search in Stoat is broken). I tried to contribute that one small change back to the main project, and it was immediately rejected because “we have no plans for klipy support”.

    Not worth the effort, IMO.