

Diffusion models are trained to iteratively convert noise across a space into structured form. That's in contrast to, say, a GPT, which performs recursive next-token prediction in sequence. They're totally different models, both in structure and in mode of operation. Diffusion models are actually pretty incredible imo, and I think we're just beginning to scratch the surface of their power. A very fundamental part of most modes of cognition is converting the noise of unstructured multimodal signal data into something with form and intention, so being able to do this with a model, even if only in very narrow domains right now, is a pretty massive leap forward.
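To make the contrast concrete, here's a rough toy sketch of the two sampling modes. It's not any particular library's API: `toy_denoiser` and `toy_next_token` are hypothetical stand-ins for trained networks, and the noise schedule and shapes are purely illustrative. The point is just the shape of the loops, a whole sample refined from noise step by step vs. a sequence grown one token at a time.

```python
import numpy as np

def toy_denoiser(x, t):
    # Hypothetical stand-in for a trained noise-prediction network eps_theta(x_t, t).
    return 0.1 * x

def diffusion_sample(shape=(8,), steps=50, seed=0):
    """Iteratively denoise pure Gaussian noise into a sample (DDPM-style sketch)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # illustrative noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)              # start from pure noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, t)                # predicted noise at this step
        # Subtract the predicted noise component (mean of the reverse step).
        x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                               # re-inject a little noise until the last step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

def toy_next_token(tokens):
    # Hypothetical stand-in for an autoregressive LM: pick some "next token".
    return (sum(tokens) + 1) % 100

def autoregressive_sample(prompt=(1, 2, 3), length=10):
    """Generate left to right, one token at a time, each conditioned on the prefix."""
    tokens = list(prompt)
    for _ in range(length):
        tokens.append(toy_next_token(tokens))
    return tokens

print(diffusion_sample())        # whole sample refined in parallel over many denoising steps
print(autoregressive_sample())   # sequence extended one token at a time
```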
It's kind of mind-blowing how dismissive people are of the ACA, even those who were aware of how things stood before it went into effect. It wasn't by any means what it should have been, but medical access unequivocally improved vastly as a result of it.