- cross-posted to:
- technology@lemmy.world

I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they’re running through an ultra-realistic beauty filter.
The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace’s face look “sexier” because apparently that’s what realism looks like now.
I wouldn’t be so baffled if this were some experimental setting they were testing, but they’re advertising this as the next-gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.
At the end of the day, it’s all a pile of transistors, and the only thing that matters is the intent behind its usage.
In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
The other is rendering something and radically changing the artistic or visual style.
Upsampling can be replicated, within some margin, by lowering the framerate and letting the GPU work longer on each frame. It strives to restore, by guessing, detail that was left out for speed.
This new filter cannot be turned off and matched by lowering the frame rate. It aims to add, by guessing, detail that was never present.
Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard-pressed to tell which is which. The neural network is an implementation detail.
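To make that concrete, here is a minimal sketch of one such non-neural upscaler: classic bilinear interpolation on a grayscale image, written in plain Python for illustration (real implementations work on GPU textures, and DLSS itself is far more sophisticated). There is no learned model here, only a fixed mathematical rule, which is the point: the neural network is one way to upsample, not the definition of upsampling.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D list of grayscale floats by an integer factor
    using bilinear interpolation -- a fixed rule, no neural network."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output coordinate back into source space.
        sy = y / factor
        y0 = min(int(sy), h - 1)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(w * factor):
            sx = x / factor
            x0 = min(int(sx), w - 1)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Weighted average of the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0.0, 1.0],
         [1.0, 0.0]]
big = bilinear_upscale(small, 2)  # 2x2 -> 4x4
```

A neural upscaler plays the same role as this function (low-res in, high-res out) but guesses missing detail from training data instead of averaging neighbors, which is why the two can be hard to tell apart in practice.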
In the other case, the changes are broader than non-AI techniques can easily capture. The generative capabilities are central to the feature.
Process matters, but zooming out too far makes everything look identical, and intent matters too: “I want to see your art better” as opposed to “I want to make your art better.”
What…? It’s more like a chemical vs. a nuclear rocket. You’re not even comparing the same kind of thing, while these are the same thing with different views. You don’t like this one, so suddenly it doesn’t meet your arbitrary conditions to be acceptable, and now you’re coming up with incorrect analogies to try to make a point. Great job!
And you didn’t even read past the first sentence I see.
Saying they’re the same because they both use a neural network is roughly equivalent to saying things are the same because they both manipulate kinetic energy.