Smb should be fine. I used it for years on my primary systems (I moved to sshfs when I migrated to linux finally), and it wasn’t ever noticeably less performant than local disks.
Compared to local ntfs partitions anyway, ntfs itself isn’t all that fast in file operations either.
If you are looking at snapshots or media, that is all highly sequential with few file operations anyway. Something like gaming off of a nas via smb does also work, but I think you notice the lag smb has. It might also be iops limitations there.
Large filesizes combined with highly random, fast, low-latency reads are a very rare combination to see. I’d think swap files, game assets, browser cache (usually not that large to be fair).
For anything with fewer files and larger changes it always ran at over 100MiB/s for me until I exhausted the disk caches, so essentially the theoretical max accounting for protocol losses.
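As a quick sanity check of that number, a minimal sketch assuming a gigabit link (the usual case for a home nas) and a ballpark overhead figure:

```python
# Why ~100+ MiB/s over smb is effectively line rate for gigabit
# Ethernet. The ~6% overhead for Ethernet/IP/TCP/SMB framing is an
# assumed ballpark, not a measured value.
line_rate = 1_000_000_000 / 8         # 125 MB/s raw gigabit
usable = line_rate * (1 - 0.06)       # minus assumed protocol overhead
print(f"{usable / 2**20:.0f} MiB/s")  # ≈ 112 MiB/s
```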
for music what I use is AIMP. I only hope it can work with wine because I don’t want to run a VM for it
I use that on android. Never knew there were desktop versions, odd that it supports android but not other linux.
Wine is very reliable now, it will almost certainly work out of the box.
Otherwise there are also projects to run android apps on linux, though no doubt with much more effort and a lower chance of success than wine.
because I prefer a local player over jellyfin
I used vlc then mpv for years before setting up jellyfin. I could still use them if I wanted to.
Over the internet, the largest files (~30Mbit/s) ran up against my upload limit, but locally they still played snappily.
Scrubbing through files was as snappy as playing off of my ssd.
I do understand wanting music locally. I sync my music to my phone and portable devices too, so I’m not dependent on internet connectivity. None of those devices even support hdds, however; for my pc I see no reason not to play off of my nas using whatever software I prefer.
I didn’t want to buy him an SSD unnecessarily big […] for the lower lifespan
Larger ssds almost always have higher maximum writes. If you look at very old drives (128 or 256GB from 2010-2015 ish) or very expensive ones, you get into higher quality nand cells. But if you are on a budget you can’t afford the larger ones, and the older ones may have 2-3 times the cycles per cell at like a tenth the capacity, so still only around a third of the total writes.
The current price optimum to my knowledge is 2TB SSDs for ~85USD with TLC rated up to 1.2PBW, so about 600 cycles. If you plan on a lifetime of 10 years, that is 330GB per day, or 4GB/day/USD. I can’t even find SLC on the market anymore (outside of 150USD 128GB standalone chips), but I have never seen it close to that price per byte written. (If you try looking for slc ssds, you will find incorrectly tagged tlc ssds, with tlc prices and lifetime. That is because “slc cache” is a common ssd buzzword.)
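A worked version of that endurance math, using the quoted figures (prices and ratings drift, so treat them as assumptions):

```python
# Endurance math for the quoted 2TB TLC drive: rated total writes
# divided by capacity gives write cycles; spread over a 10-year life
# it gives a sustainable daily write volume.
capacity_tb = 2
price_usd = 85
endurance_pbw = 1.2                            # rated petabytes written

cycles = endurance_pbw * 1000 / capacity_tb    # ≈ 600 full-drive writes
per_day_gb = endurance_pbw * 1e6 / (10 * 365)  # ≈ 329 GB/day for 10 years
value = per_day_gb / price_usd                 # ≈ 3.9 GB/day/USD
print(f"{cycles:.0f} cycles, {per_day_gb:.0f} GB/day, {value:.1f} GB/day/USD")
```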
I didn’t want to buy him an SSD unnecessarily big […] for the cost
Another fun thing about HDDs is that they have a minimum price, since they are large bulky chunks of metal that are inherently hard to manufacture and worth their weight in materials.
That lower cutoff seems to be around 50USD, for which you can get 500GB or 2TB at about the same price. 4TB is sold for about 90USD.
In terms of price, ignoring value and just going for the cheapest possible storage, there is never a reason to buy an HDD below the 2TB-for-~60USD point. A 1TB SSD has the same price as a 1TB HDD, and below that SSDs are cheaper than HDDs.
So unless your usecase requires 2TB+, SSDs are a better choice. Or if it needs 1TB+ and also has immensely high rewrite rates.
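Putting the quoted street prices side by side (the 1TB figure is my assumption, the rest are from above):

```python
# Price per TB at the rough price points discussed above. These are
# assumed ballpark street prices, not current quotes.
drives = {
    "HDD 500GB": (0.5, 50),
    "HDD 2TB":   (2.0, 60),
    "HDD 4TB":   (4.0, 90),
    "SSD 1TB":   (1.0, 50),   # assumption: roughly HDD parity at 1TB
    "SSD 2TB":   (2.0, 85),
}
for name, (tb, usd) in drives.items():
    print(f"{name}: {usd / tb:.0f} USD/TB")
```

Below 1TB the HDD price floor keeps the $/TB climbing, which is where SSDs win outright.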
a few VMs, a couple of snapshots
I have multiple complete disk images of various defunct installs, archived on my nas. That is a prime example of stuff to put into network storage. Even if you use them, loading them up would be comparable in speed to doing it off of an HDD.
Oh yeah absolutely. As mentioned above I myself use spinning rust in my nas.
The difference is decreasing over time, but it’ll be ages before ssds trump hdds in price per TB.
The difference now compared to the past is that you are looking at 4TB SSDs and 16TB HDDs, not 512GB SSDs and 4TB HDDs. In my observation the vast majority has no use for that amount of storage currently, while the remainder is willing or even happy to offload it onto a separate machine with network access: the speed doesn’t matter, and it’s the type of data you might want to access rarely but from anywhere, on any kind of device.
Compare for example phones that are trying to sell you 0.25 or 0.5 TB as a premium feature for hundreds of usd in markup.
If people actually had use for 2TB of storage, manufacturers would instead start at 0.5TB and upsell you to 2 and 4 TB.
I myself have 32TB of storage and am constantly asking around friends and family if anyone has large amounts of data they might wanna put somewhere. And there isn’t really anyone.
Even the worst games only use up so many TB, and you don’t really wanna game off of HDD speeds after tasting the light. And if you’d have to copy your game over from your HDD, the time it’d take to redownload from steam is comparable unless your internet is horrifically bad.
My extensive collection of linux ISOs is independent and stable, and I do actually share it with a few via jellyfin, but in all its greatness both in amount and quality it still packs in below 4TB. And if you wanna replicate such a setup you’d wanna do it on a dedicated machine anyway.
If I had to slim down I could fit my entire nas into less than 4TB, if I’m being honest with myself; in my defense, I built it prior to cost-effective 4TB SSDs. The main benefit for me is not caring about storage. I have auto backups of my main apps on my phone, which copy the entire apk and data directories daily and move them to the server. That generates about 10GB per day.
I still haven’t bothered deleting any of those, they have just been accumulating for years. If I ever get close to my storage capacity, before buying another drive I’d first go in and delete the 6TB of duplicate backups of random phone apps dated 2020-2026.
I wrote a paper grouping together info from tons of simulations. And instead of extracting the measurement files containing the relevant values every 10 simulation steps (2.5GB), or the data of all system positions and all measured quantities every 2 steps (~200GB), I copied the entire runtime directory. For 431 simulations, 8.5GB per, totaling 1.8TB.
And then later my entire main folder for that project, plus the program data and config dirs of the simulation software, for another half a TB. I could probably have saved most of that by looking into which files contain what info and doing some basic sorting. But why bother? Time is cheap but storage is cheaper.
But to want simply the feeling of swimming in storage capacity, you first need to have experienced it. Which is why I think no one wants it. And those that do already have a nas or similar setup.
Maybe you see a usecase where someone without the knowledge or equipment needs tons of cheap storage in a single desktop pc?
M.2 nvme uses PCIe lanes. In the last few generations both AMD and intel have been quite skimpy with their PCIe lane offering: their consumer CPUs generally have only around 20-40 lanes, while servers get over 100.
In the default configuration, nvme gets 4 lanes, so usually your average CPU will support 5-10 M.2 nvme SSDs.
However, especially with PCIe 5.0 now common, you can get the speed of 4 PCIe 3.0 lanes in a single 5.0 lane, so you can easily split all your lanes dedicating only a single lane per SSD. In that configuration your average CPU will support 20-40 drives, with only passive adapters and splitters.
Further you can for example actively split out PCIe 5.0 lanes into 4x as many 3.0 lanes, though I have not seen that done much in practice outside of the motherboard, and certainly not cheaply. Your motherboard will however usually split out the lanes into more lower-speed lanes, especially on the lower end with only 20 lanes coming out of the CPU. In practice on even entry-level boards you should count on having over 40 lanes.
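The lane math from the last few paragraphs, worked through (per-lane throughput figures are approximate, and the 24-lane CPU is an assumed example):

```python
# Approximate usable throughput per PCIe lane in GB/s; it roughly
# doubles each generation.
per_lane = {"3.0": 0.985, "4.0": 1.97, "5.0": 3.94}

cpu_lanes = 24                            # assumed typical consumer CPU
print(cpu_lanes // 4, "drives at x4")     # 6 drives in the default config
print(per_lane["5.0"] / per_lane["3.0"])  # ≈ 4: one 5.0 lane ≈ four 3.0 lanes
print(cpu_lanes // 1, "drives at x1")     # 24 drives, each still at 3.0-x4 speed
```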
As for price, you pay about 30USD for a pcie x16 to 4 M.2 slot passive card, which brings you to 6 M.2 slots on your average motherboard.
If you run up against the slot limit, you will likely be using 4TB drives and paying at the absolute lowest a grand for the bunch. I think 30USD is an acceptable tradeoff for a 20x speedup that almost everyone in this situation will be taking.
If you need more than 6 drives, where you would previously have been looking at a pcie sata or sas card, you can now get x16 pcie cards that passively split out to 8 M.2 slots, though the price will likely be higher. At these scales you almost certainly go for 8TB SSDs too, bringing you to 6 grand. Looking at pricing, I see a raid card for 700usd which supports passthrough, i.e. can act as just a pcie to M.2 adapter. There are probably cheaper options, but I can’t be bothered to find any.
Past that there is an announced PCIe x16 to 16 slot M.2 card, for a tad over 1000usd. That is definitely not a consumer product, hence the price for what is essentially still a glorified PCIe riser.
So if for some reason you want to add tons of drives to your (non-server) system, nvme won’t stop you.
The inputs of the model are full copies of copyrighted data, so the “amount used” is the entirety of the copyrighted work.
If you want to apply current copyright law to the inner workings of artificial networks, you run into the problem that it doesn’t work on humans either.
A human remembering copyrighted works, be it memorization or regular memory, is similarly creating a copy of that copyrighted work somewhere in their brain.
There is no law criminalizing the knowledge or inspiration a human obtains from consuming media they did not have the rights to consume. (In many places it isn’t even illegal to acquire and consume media you don’t have rights to, only to provide it to others without those rights.)
Criminalizing knowledge, or brains containing knowledge, can’t possibly be a good idea, and I think neural nets are too close to the function of the brain to apply current regulation to one but not the other. You would at minimum need laws explicitly specified to apply only to digital neural nets or something similar, and it appears this page is trying to work within existing regulation. (If we do create law applying only to digital neural nets, and we ever create an ai intelligent enough that it could deservedly be called a person, then I’m sure that ai wouldn’t be greatly happy about weird discriminatory regulation applying to only its brain but not that of all the other people on this planet.)
A neural net is working too similarly to the human brain to call the neural net a copy but the human brain “learning, memorization, inspiration”. If you wanna avoid criminalizing thoughts, I don’t see a way to make the arguments this website makes.
I have a nas with 32TB. My main pc has 2TB and my laptop 512GB. I expected to need to upgrade especially the laptop at some point, but haven’t gotten anywhere near using up that local storage without even trying.
I don’t have anything huge I couldn’t put on the nas.
At this point I could easily go 4TB in the laptop and 8TB in the desktop if I needed to.
Spinning rust is comparable in speed to networking anyway, so as long as no one invents a 20TB 2.5'' hdd that fits my laptop for otg storage, there is no reason anything would benefit from an hdd in my systems over one in my nas.
Edit:
Anything affordable in ssd storage has similar prices in M.2-nvme and 2.5''-sata format. So unless you have old hardware, I see the remaining use for sata as hdd-only.
At least sata is well on its way towards dying, so the problem will solve itself in some more years.
My machines all have nvme exclusively now, only some servers are left using sata. And I would say the type of user at risk of fucking up a dd command (which 95% of the time should be a cp command) doesn’t deal with servers. Those are also not machines you plug thumb drives into commonly.
In 5-10 years we will think of sda as the usb drive, and it’ll be a fun-fact that sda used to be the boot drive.
Busy raising the deductible?
VM with one dedicated usb hub passed thru?
South Korea’s President Yoon reverses martial law after lawmakers defy him
Only because lawmakers literally broke into parliament to hold the vote, since the military was complicit with Yoon in not letting them in.
If those 190 hadn’t had the balls to break into parliament, there would have been a quick downward spiral into military-led authoritarianism.
The military was in on this. All other lawmakers said they were not aware of it, and the tanks rolled in minutes after the order, which means the military was prepared for the announcement in secret. …and he will try again… My guess is when he is impeached for this coup attempt.
However, next time they won’t let the lawmakers into the building as easily, since getting in was the key to stopping it.
It kinda does. On android and windows you can have some apps/windows appear fully black in all screenshots and recordings. On windows this is drm functionality which programs can call up; it also allows locking audio devices etc.
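On windows the call behind this is SetWindowDisplayAffinity (on android the equivalent is the FLAG_SECURE window flag). A minimal ctypes sketch, which only works on a window your own process owns:

```python
# Windows-only sketch: mark a window as excluded from screen capture,
# which makes it render black in screenshots and recordings.
import ctypes

user32 = ctypes.WinDLL("user32", use_last_error=True)
WDA_EXCLUDEFROMCAPTURE = 0x11  # Windows 10 2004+; older builds only have WDA_MONITOR (0x01)

# For illustration only: the window must belong to the calling process.
hwnd = user32.GetForegroundWindow()
if not user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE):
    raise ctypes.WinError(ctypes.get_last_error())
```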
Yes, seems you are right. Not sure where I got the impression.
Unrelated, when I researched this I saw that acme.sh, zerossl, and a bunch of other acme clients are owned by the same entity, “Stack Holdings”/“apilayer.com”. According to this, zerossl also has some limitations over letsencrypt in account requirements and limits on free certificates.
By using ZeroSSL’s ACME feature, you will be able to generate an unlimited amount of 90-day SSL certificates at no charge, also supporting multi-domain certificates and wildcards. Each certificate you create will be stored in your ZeroSSL account.
It is suspicious that they impose so many restrictions, then waive most of them on the acme api, where they presumably could not compete otherwise. On their gui they allow only 3 certificates and don’t allow multi-domain at all. Then even in the acme client they somehow push an account into the process:
[…] for using our ACME service you have to create and use EAB (External Account Binding) credentials within your ZeroSSL dashboard.
EAB credentials are limited to a maximum per user/per day. [This might be for creating them, not uses per credential, unsure how to interpret this.]
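For context, the EAB credentials are nothing exotic: External Account Binding (RFC 8555 §7.3.4) is just an extra HMAC-signed JWS over your ACME account key that gets attached to the newAccount request. A rough sketch, with placeholder values standing in for what the ZeroSSL dashboard hands out:

```python
# Sketch of ACME External Account Binding (RFC 8555 §7.3.4): a JWS
# over the ACME account's public JWK, HMAC-SHA256-signed with the key
# from the CA dashboard. All concrete values here are placeholders.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

eab_kid = "kid-from-dashboard"
eab_key = b"hmac-key-from-dashboard"  # really: base64url-decode the dashboard value
account_jwk = {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}  # your ACME key
new_account_url = "https://acme.zerossl.com/v2/DV90/newAccount"      # from the CA directory

protected = b64url(json.dumps(
    {"alg": "HS256", "kid": eab_kid, "url": new_account_url}).encode())
payload = b64url(json.dumps(account_jwk).encode())
sig = b64url(hmac.new(eab_key, f"{protected}.{payload}".encode(),
                      hashlib.sha256).digest())

# This object goes into the "externalAccountBinding" field of newAccount:
eab = {"protected": protected, "payload": payload, "signature": sig}
```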
This all does make me slightly worry that this bloc around apilayer.com will fall before letsencrypt does.
Other than letsencrypt and zerossl, this page lists no other full equivalents for what letsencrypt does.
They don’t offer wildcard certs, but otherwise I think they are.
I wanna say acme.sh defaults to them.
What for? What did they do?
Unrealistic
per minute watched
Per started minute
Ofc, no problem.
Since this thread was initially about beginner friendly distros, I wanted to ensure I wasn’t going around recommending an inferior or problematic distro to new users as their first experience.
Wayland and GPU stuff should be very good in endeavor, better than most systems I have seen, better than openSUSE leap and mint certainly. I don’t know fedora however.
Endeavor has its own base repo, but also the regular arch stuff like aur. The AUR is probably the best source for all those programs that are usually missing in your repo, and since the base stuff is stable in endeavor there is no problem if some random program needs a special version or a manual install sometimes, it won’t affect anything else.
The AUR is not the main package source for endeavor.
I don’t know your hardware, but the combination of up-to-date system components, endeavor’s focus on just working, and all the shit in the aur (to my understanding flatpak is currently quite useless for drivers) sounds like it should accept any hardware at least as well as other linux distros.
On a sidenote about flatpaks: there is a long-running conflict between stability, portability, and security. The old-school package systems are designed to allow updating libraries systemwide, switching in abi-compatible replacements containing fixes. On the other hand, you have appimage, flatpak, …, which bring their own everything and will therefore keep running on old unsafe libraries, sometimes for years, before the developers of all those specific projects update their projects’ versions of all those libraries.
Kill your darlings