Been using this in nursing school - a lot of our content is done on horribly designed websites, and it’s pretty common to hit submit and… Naw it didn’t take: all your shit just disappeared.
So: save with SingleFile first, then submit, and if it fucks up I've got a backup to copy and paste a replacement out of.
Clearing the whole form on a failed submit should be classified as a crime, punishable by fines.
Whoa. If anyone else is wondering like I was how the hell this works: it’s basically a single file that is simultaneously valid HTML and a valid ZIP archive: https://github.com/gildas-lormeau/Polyglot-HTML-ZIP-PNG
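Conceptually, the trick works because the two formats are parsed from opposite ends: ZIP readers locate the archive’s central directory by scanning backwards from the end of the file, while browsers parse HTML from the front. A minimal sketch (not SingleFile’s actual implementation, just the idea, with made-up file names):

```python
import io
import zipfile

# Ordinary HTML that a browser would render.
html = b"<!doctype html><html><body><p>Saved page</p></body></html>"

# Build a ZIP in memory containing one hypothetical resource.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("page/data.txt", "archived resource")

# The polyglot is simply the HTML followed by the ZIP bytes.
polyglot = html + buf.getvalue()

# A ZIP reader scans from the END of the file for the central
# directory, so the HTML prefix does not break extraction.
with zipfile.ZipFile(io.BytesIO(polyglot)) as zf:
    recovered = zf.read("page/data.txt")

# A browser, meanwhile, parses from the START and renders the HTML.
print(recovered)                              # b'archived resource'
print(polyglot.startswith(b"<!doctype"))      # True
```

The real polyglot files linked above are more careful (they also stay valid as PNG, and hide the ZIP tail inside an HTML comment), but the scan-from-the-end property is what makes the whole thing possible.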
This is probably the best solution I’ve found so far.
Unfortunately, even this is no match for the user-hostile design of, say, Microsoft Copilot, which hides content that is scrolled off screen, so it’s invisible in the output. That’s no fault of this extension: it actually DOES capture the data; the website just intentionally obscures itself. Funnily enough, if I open the resulting HTML file in Lynx, I can read the hidden text no problem. LOL.
I was on a site that did that and was confused why my text search wasn’t finding much. Thanks devs for breaking basic browser features.
Actually that might not have been done to deliberately disrupt your flow. Culling elements that are outside of the viewport is a technique used to reduce the amount of memory the browser consumes.
…which should be used only when the browser is running out of memory.
Well… that would make sense. But it’s much, much easier to just do it preemptively. The browser APIs for checking how much memory is available are quite limited, afaik. Also, if there are too many elements, the browser has to do more work when interacting with the page (i.e. on every rendered frame), wasting slightly more power and, in extreme cases, even lagging.
For what it’s worth, I, as a web developer, have done it too on a couple of occasions (in my case it was absolutely necessary when working with a 10K × 10K table, way beyond what a browser is designed to handle).
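The culling being described boils down to a windowing calculation: given the scroll position, only materialize the rows that intersect the viewport, plus a small overscan buffer, and detach everything else. A minimal sketch of that math (assuming fixed row heights for simplicity; real virtualizers also handle variable heights and scroll anchoring):

```python
def visible_rows(scroll_top, viewport_h, row_h, total_rows, overscan=5):
    """Return the (first, last) row indices worth keeping in the DOM.

    Rows outside this window can be detached to save memory; the
    overscan rows above and below avoid blank flashes while scrolling.
    """
    first = max(0, scroll_top // row_h - overscan)
    last = min(total_rows - 1, (scroll_top + viewport_h) // row_h + overscan)
    return first, last

# With a 10,000-row table of 24px rows and an 800px viewport
# scrolled to 48,000px, only ~44 rows need to exist at once.
window = visible_rows(48_000, 800, 24, 10_000)
print(window)  # (1995, 2038)
```

The downside, as the thread notes, is that anything culled this way is also invisible to Ctrl+F and to tools like SingleFile unless the page is scrolled through first.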
I’m a big fan given that MHTML & MAFF were abandoned and WebArchive is Safari-only.
Very sad to learn these were abandoned.
Not a single self-hosted read-it-later app uses SingleFile in its backend. I wonder why… SingleFile works flawlessly on every site!
The only one that works similarly is linking.
Hey, my nick! I’m not 100% sure, but Readeck saves into a single file.
Haha, kinda scary!! XD
Yeah, I tried Readeck, and while it does better than the others, it still doesn’t use SingleFile (or did I miss some configuration?).
I do like it, because it even transcribes YouTube videos, and that’s very neat! However, I work a lot on superuser, stack*, and ask* pages, which aren’t scraped properly and don’t render their comments.
The only perfectly working solution I’ve found was SingleFile + Zotero.
OMG, are you me? I’m also a heavy stack* and ask* user, that’s so funny!
I’m not sure if it really uses SingleFile. I’m a relatively new user, and while looking around the system I saw that it saves its stuff in a single… what was it… gz or zip or something. But I’m not sure whether it just scrapes everything and puts it into an archive file.
If you use the browser extension, it saves the page as rendered (at least it should). If not, open a bug report; the maintainer seems quite helpful.
You can also script custom extractors, I believe, so it should be possible to fix saving of ask* and stack* pages.