The deficit is about the same as what we pay on interest for all that debt.
$1T
…This does not seem like an issue that should be made worse.
What is the rationale? The US debt is already insane.
…Tax breaks for Musk?
Does it really matter?
Musk just killed the US budget bill with a bunch of tweets that were explicit lies, and people lapped them up. Bizarre, obvious AI slop gets tons of engagement on Facebook.
And there are zero consequences.
Trump and Musk embraced a post truth world, so maybe it’s time for their opposition to stop pretending like America can (as a whole) think critically and fight fire with fire. Maybe it will accelerate the fall of the platforms that enable them.
I’m not sure how much I trust that poll.
Data was collected by contacting cell phones via MMS-to-web text, landlines via interactive voice response and email (phone list provided by Aristotle, email lists provided by Commonwealth Opinions), and an online panel of voters pre-matched to the L2 voter file provided by Rep Data. The survey was offered in English.
If someone just called or texted me out of the blue for a survey like that, I would be tempted to lie about my opinion of Luigi out of fear. Honestly I find it shocking so many people ‘confessed’ to that… it has to be an underestimate.
The defense gets to weed them out too, and I feel like less sympathetic jurors would be quite “obvious”
Many young, healthy people haven’t had to deal with it much, but this is also the demographic highly engaged on social media and probably very sympathetic to him.
He’s already thrown away his life.
Honestly it would be demoralizing for him to get sentenced to death (or life) in this information environment. People would just move on, back to the status quo. But if he gets off, it’s a vindication of what he did.
The defense has to agree to the jury.
There’s no way the prosecution can stack the jury with Musk fans or whatever, not a chance.
To be fair, BG3 is like bottled lightning, and I think it’s unreasonable to expect many (if any) other studios to produce something like that.
Even the Divinity games were way above par, with a much more lukewarm (but not unsuccessful, I guess?) reception.
Almost certainly not. The A770 is built like an “upper midrange” GPU while the B580 is a smaller die.
If there’s ever a B770 or whatever, maybe consider it.
If you’re using them for running coder LLMs, though, that’s a different story.
And then Congress will actually do something about it…
It uses embedded LPDDR5X, so it will not be upgradeable unless the mobo/laptop maker uses LPCAMMs.
And… that’s kinda how it has to be. Laptop SO-DIMMs are super slow due to the design of the DIMMs, and they need crazy voltages to even hit the speeds/timings they run at now.
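To put rough numbers on that bandwidth gap, here’s a back-of-the-envelope sketch. The transfer rates below are just common illustrative speeds (DDR5-5600 SO-DIMMs vs. LPDDR5X-8533), not a claim about any specific laptop:

```python
# Peak theoretical memory bandwidth: transfer rate (MT/s) x bus width (bits) / 8
# gives MB/s; divide by 1000 for GB/s.
def peak_bandwidth_gbps(transfer_mts: int, bus_bits: int) -> float:
    return transfer_mts * (bus_bits / 8) / 1000

# Dual-channel DDR5-5600 SO-DIMMs (128-bit total bus):
print(peak_bandwidth_gbps(5600, 128))   # 89.6 GB/s
# Soldered LPDDR5X-8533 on the same 128-bit bus:
print(peak_bandwidth_gbps(8533, 128))   # 136.528 GB/s
```

Same bus width, ~50% more bandwidth just from the higher transfer rate the soldered memory can sustain.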
We effectively can if we threaten to pull all support and harass Ukraine instead…
Not that I want that, or have any say in that as a US citizen…
You could list it locally depending on where you are, through FB marketplace or Craigslist.
Otherwise, yeah, eBay.
They’re kinda already there :(. Maybe even worse than Raspberry Pis.
Intel has all but said they’re exiting the training/research market.
AMD has great hardware, but the MI300X is not gaining traction due to a lack of “grassroots” software support, and they were too stupid to undercut Nvidia and sell high-VRAM 7900s to devs, or to even prioritize their support in ROCm. Same with their APUs. For all the marketing, they just didn’t prioritize getting them runnable with LLM software.
Your OS uses it efficiently, but fundamentally it also limits what app developers can do. They have to make apps with 2-6GB in mind.
Not everything needs a lot of RAM, but LLMs are absolutely an edge case where “more is better, and there’s no way around it,” and they aren’t the only one.
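As a rough sketch of why LLMs are that edge case: the weights alone scale with parameter count times bits per weight, before you even count KV cache and runtime overhead. The numbers here are illustrative:

```python
def weights_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Memory for the model weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * (bits_per_weight / 8) / 1e9

# A 32B-parameter model at 4-bit quantization still needs ~16 GB just for weights:
print(weights_memory_gb(32, 4))   # 16.0
# The same model unquantized at fp16:
print(weights_memory_gb(32, 16))  # 64.0
```

Even aggressively quantized, that’s more than most phones and many laptops have in total, and no amount of clever app design makes it smaller.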
It’s just smarter with the same number of parameters. Try Qwen QwQ or Qwen coder 32B, see for yourself… it stacks up well against huge models like the 123B Mistral Large, or even GPT-4.
Why? Alibaba trained it well, presumably with better data than OpenAI or whomever else, though specifics are up for debate. Some suggest that bilingual training on English/Chinese (aka the two largest text corpuses in existence) significantly helps the model over mostly-English training. Some say the government just gave them better data. There’s also the suggestion that having so few GPUs compared to American AI companies made the Chinese “thrifty,” and gave them far more incentive to be innovative rather than brute-forcing models (which has diminishing returns).
My old Razer Phone 2 (circa 2019) shipped with 8GB RAM, and that (and the 120hz display) made it feel lightning fast until I replaced it last week, and only because the microphone got gunked up with dust.
Your iPhone 14 Pro has 6GB of RAM. It’s a great phone (I just got a 16 Plus on a deal), but that will significantly shorten its longevity.
B580 24GB and B770 32GB
They would be incredible, as long as they’re cheap. Intel would be utterly stupid not to do this.
With OpenAPI being backed by so many big names, do you think they will be able to upset CUDA in the future or has Nvidia just become too entrenched?
OpenAI does not make hardware. Also, their model progress has stagnated, already matched or surpassed by Google, Claude, and even some open source Chinese models trained on far fewer GPUs… OpenAI is circling the drain, forget them.
The only “real” competitor to Nvidia, IMO, is Cerebras, which has a decent shot due to a silicon strategy Nvidia simply does not have.
The AMD MI300X is actually already “better” than Nvidia’s best… but they just can’t stop shooting themselves in the foot, as AMD does. Google TPUs are good, but Google-only, like Amazon’s hardware. I am not impressed with Groq or Tenstorrent.
https://en.wikipedia.org/wiki/Horseshoe_theory