Linux, thanks to Steam, is better at gaming than Windows, especially for older games. Proton FTW
I don’t understand why linux doesn’t get all the love patches like Windows.
ROCm? Is that even supported now? Last time I checked it was still a dumpster fire. What are the RAM and VRAM reqs for Mixtral 8x7B?
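For a rough answer, the weight memory is easy to ballpark. Here's a back-of-the-envelope sketch, assuming ~46.7B total parameters for Mixtral 8x7B (all experts must be resident even though only a fraction are active per token) and ignoring KV cache and activation overhead:

```python
# Rough memory estimate for Mixtral 8x7B weights at common quantizations.
# Assumption: ~46.7B total parameters; KV cache/activations not counted.
TOTAL_PARAMS = 46.7e9

def weight_memory_gib(bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given bytes-per-parameter."""
    return TOTAL_PARAMS * bytes_per_param / 1024**3

for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("~4-bit", 0.5)]:
    print(f"{name:>7}: ~{weight_memory_gib(bpp):.0f} GiB")
```

So fp16 is out of reach for consumer cards (~87 GiB), and even a 4-bit quant (~22 GiB) wants a 24 GB GPU or CPU offloading.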
I stumbled upon a website through DDG, and after a long intro, the main section, supposedly where the thing I was searching for would be, said “Sorry, I can’t fulfill your request right now”. Basically a fully generated page built to match my search with some parasitic SEO tactics. The web be changing. Front page of DDG.
But basing a recommendation on ballpark anecdotal evidence is ridiculous.
Doing the maths, that’s ~$200/chip. Even an Nvidia 4060 is more expensive.
Holy shit, that’s the rookiest mistake.
Actually, there is a Russian guy who made that happen. YouTube it.
But it’s not deterministic.
Interesting read; basically they demonstrated, using random graphs, that GPT-4 can understand causality. An interesting take-away though is this excerpt:
“And indeed, as the math predicts, GPT-4’s performance far outshines that of its smaller predecessor, GPT-3.5 — to an extent that spooked Arora. “It’s probably not just me,” he said. “Many people found it a little bit eerie how much GPT-4 was better than GPT-3.5, and that happened within a year.””
But idle would still draw much more than 15 W. There’s a very good compilation Google Sheet of the most efficient x86 CPUs, but once you start factoring in HDDs and SSDs, it’s only natural to go higher (20-30 W) at least. That’s at least double that of RPis.
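The idle gap adds up over a year. A quick sketch, assuming ~25 W idle for a mini PC with drives vs ~5 W for an RPi, and an assumed electricity price of €0.30/kWh (plug in your local rate):

```python
# Yearly energy cost of idle power draw.
# Assumptions: 24/7 uptime, €0.30/kWh electricity price.
PRICE_PER_KWH = 0.30

def yearly_cost(idle_watts: float) -> float:
    """Euros per year for a constant idle draw."""
    kwh_per_year = idle_watts * 24 * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH

for label, watts in [("RPi", 5), ("mini PC", 25)]:
    print(f"{label:>8}: {watts} W idle ≈ €{yearly_cost(watts):.0f}/year")
```

At these assumed numbers the difference is on the order of €50/year, which can eat the mini PC’s price advantage over a few years.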
The main issue with mini/used PCs is power efficiency. It’s just a waste of wattage, and performance/watt is very bad, especially at idle.
Link for the lurkers https://github.com/evilsocket/pwnagotchi
Pwnagotchi is an A2C-based “AI” leveraging bettercap that learns from its surrounding WiFi environment to maximize the crackable WPA key material it captures (either passively, or by performing authentication and association attacks). This material is collected as PCAP files containing any form of handshake supported by hashcat, including PMKIDs, full and half WPA handshakes.
The landscape is changing so fast thanks to LLMs, everything is becoming gated behind logins. Thanks ChatGPT.
I swear, youtube sponsorships are like anti-ads. 9 times out of 10 they’re doing something sketchy.
We’re the minority though.
I am saying that from an employee perspective: what is my reason to support Masimo? Unless I am a suck-up for corporations, why would I even support Masimo? The way I see it, the more restrictions a company has on its employees (i.e. you are forbidden from working at a competitor with your expertise), the less power the employees have.
How is this even an argument for capitalism? Just shouting capitalism does not earn you free points. Think it through, step by step, human-gpt.
I think the case is still developing, but I hate these laws that forbid employees from working at other companies. I hate to take Apple’s side, but I don’t think hiring the engineers was wrong.
Like, you accumulate knowledge at your current company, and you’re not supposed to ever use it in any job? Bullshit. Masimo could have offered their knowledge employees better salaries and stock options so they’d stay; at the end of this case, if Masimo wins, it’s the employees that will lose.
Anyone working in a specialized field will find it hard to get hired, as new companies will be afraid of the same thing happening here.
As an OSS user and developer, opt-out is a shitty practice. It should be opt-in for users who face crash issues, if they want to share that data (they care enough to provide their info to the dev to fix it). I know this makes users sound entitled, but otherwise the “opt-out” permission will be exploited by someone, which will make users even more paranoid about OSS apps.
Temporary solution, but works for now.
I have a Ryzen APU, so I was curious. I tried yesterday to fiddle with it, and managed to up the “VRAM” to 16 GB. But installing xformers and flash-attention for LLM support on iGPUs is not officially supported, and it was not possible to install anything past PyTorch. It’s a step forward for sure, but still needs lots of work.