[Image: Nvidia’s promotional screenshot from the official DLSS 5 blog post]

I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they’re being run through an ultra-realistic beauty filter.

The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace’s face look “sexier” because apparently that’s what realism looks like now.

I wouldn’t be so baffled if this was some experimental setting they were testing, but they’re advertising this as the next gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

  • plantfanatic@sh.itjust.works · +24 / −58 · 23 hours ago

But it literally follows the same process. Why is one slop, but not the other? You’re being hypocritical.

    • popcar2@piefed.ca (OP) · +80 / −8 · 23 hours ago

One is upscaling the image while preserving it as much as possible; the other is applying a filter that tries to “enhance” it by drastically changing the image and ignoring the artist’s intent. What’s hard to get?

      • kieron115@startrek.website · +9 / −13 · edited · 18 hours ago

This isn’t applying a filter, it’s running the image through a transformer network trained on advanced lighting methods like subsurface scattering to make materials more lifelike. It does seem to change artistic intent quite a lot in these existing games, but frankly I’m excited to see what creators do with a game designed from the ground up to utilize AI-enhanced lighting. The DF video also states that this is an early preview (hence the dual 5090s) and is expected to change over time.

        • Gathorall@lemmy.world · +7 · edited · 7 hours ago

If it were made for that, the slopifier would be able to identify the light sources. Until it can, it’s irrelevant, art- and environment-destroying bullshit. Every one of the slop examples, the best Nvidia can deliver, shows it ignoring the lighting of the scene.

        • grue@lemmy.world · +15 / −1 · 19 hours ago

          it’s applying advanced lighting methods like subsurface scattering to make materials more lifelike.

          It is not. It is approximating the results of training data consisting of output images that have been rendered with subsurface scattering. It isn’t actually running the subsurface scattering algorithm.
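grue’s distinction can be sketched as a toy example (everything here, including the falloff formula and the nearest-sample “model”, is made up for illustration; it is not Nvidia’s actual pipeline). The analytic routine computes its result from the formula every time, while the “learned” version only memorises sampled outputs, so it mimics them inside its training range and drifts outside it:

```python
import math

# The "real algorithm": computes the falloff from the formula every time.
def scattering_falloff(depth):
    return math.exp(-depth)

# The "learned approximation": memorise sampled outputs from depths 0.0..2.0,
# then answer any query with the nearest remembered sample.
# No formula is ever executed at inference time.
samples = {d / 10: scattering_falloff(d / 10) for d in range(0, 21)}

def approx(depth):
    nearest = min(samples, key=lambda d: abs(d - depth))
    return samples[nearest]

# Close to the real thing inside the training range...
in_range_error = abs(approx(1.03) - scattering_falloff(1.03))

# ...but far off outside it, because there is no formula underneath.
out_of_range_error = abs(approx(5.0) - scattering_falloff(5.0))
print(in_range_error, out_of_range_error)
```

The point being: the second function reproduces results that *look like* subsurface scattering without ever running it.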

      • Ledivin@lemmy.world · +12 / −48 · 23 hours ago

How is “upscaling while preserving it” not the exact same philosophy as “enhancing by applying a filter”?

You just don’t like the specific filter; it’s very literally the same process.

        • Nibodhika@lemmy.world · +31 / −4 · 22 hours ago

Because a pixelated circle being upscaled is a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle. That’s especially problematic if the circle was just a crosshair or some other random circle-like thing the AI thought was meant to be a pie.

Yes, both are the same, but that’s like saying that because you were okay with a tiny spider in your house that killed mosquitoes, you should be okay with a colony of bats, since they’re also animals that eat mosquitoes. The scale and the amount of intrusion are completely different.

          • grue@lemmy.world · +8 / −1 · 19 hours ago

            If your training data has a pixelated circle as an input and a circle as output, your neural network will “upscale” your pixelated circle to a circle. If your training data has a pixelated circle as input and a high definition pie as output, your neural network will “upscale” your pixelated circle to a high definition pie. Even if it’s the same algorithm in both cases.
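This can be reduced to a toy sketch (the memorising “learner” and the string labels are invented for illustration): the training procedure is identical in both cases, and only the targets decide whether the model behaves as an upscaler or a reimaginer.

```python
# Toy "training": the learner just memorises input -> target pairs.
# The algorithm is identical in both cases; only the targets differ.
def train(pairs):
    table = dict(pairs)
    return lambda x: table[x]

# Dataset A pairs a pixelated circle with a clean circle...
upscaler = train([("pixelated circle", "circle")])

# ...dataset B pairs the same input with a high-definition pie.
reimaginer = train([("pixelated circle", "pie")])

print(upscaler("pixelated circle"))    # -> circle
print(reimaginer("pixelated circle"))  # -> pie
```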

            • Nibodhika@lemmy.world · +1 · 3 hours ago

Yes, that’s precisely my point. The difference is in what the algorithm is trained to do: traditional DLSS uses the image rendered at resolution X as the output and the same image scaled down to X/2 as the input (for example), so it’s trained to upscale images. This new thing uses who knows what as either, and clearly outputs something that is not an upscaled version of the frame.
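That pair-generation step can be sketched in a few lines (the factor of 2 and the block-averaging are assumptions for illustration, not Nvidia’s documented method): render at full resolution, downscale it, and you have an (input, target) pair where the target really is an upscaled version of the input.

```python
# A "rendered frame" at native resolution stands in as the training target;
# the training input is the same frame downscaled (factor of 2 assumed here).
def downscale(frame, factor=2):
    # Average each factor x factor block of pixels into one.
    return [
        [
            sum(frame[r * factor + i][c * factor + j]
                for i in range(factor) for j in range(factor)) / factor**2
            for c in range(len(frame[0]) // factor)
        ]
        for r in range(len(frame) // factor)
    ]

target = [[(r * 4 + c) * 1.0 for c in range(4)] for r in range(4)]  # 4x4 "render"
model_input = downscale(target)                                     # 2x2 input

print(len(model_input), len(model_input[0]))  # 2 2
```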

        • heavyboots@lemmy.ml · +15 / −1 · 22 hours ago

          Current DLSS intent: We can only render this at like 720p with enough frames, so let’s do that and use AI anti-aliasing tricks so that when we present it at 4k, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.

DLSS5 intent: Using our neural net built on a pile of stolen artwork, which we can now run at 60fps+, let’s “reimagine” the entire look of the game as we present it on-screen, even if it was already running at 4k just fine.

TL;DR: How big the neural net is and what you train it for matters.

          • zaphod@sopuli.xyz · +6 · 22 hours ago

Ideally you’d have a DLSS-like system trained specifically for one game instead of a general system. Then you could train it on 4k with the highest settings and you should get something that doesn’t mess with the style of the game.

        • ricecake@sh.itjust.works · +3 / −1 · 22 hours ago

… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.

          At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.

          In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
          The other is rendering something and radically changing the artistic or visual style.

Upsampling can be replicated within some margin by lowering the framerate and letting the GPU work longer on each frame; it strives to restore, by guessing, detail that was left out to work quicker.
You cannot turn this new feature off and get similar results by lowering the frame rate; it aims to add, by guessing, detail that was never present.

          Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail.
          In the other case, the changes are more broad than can be captured by non AI techniques easily. The generative capabilities are central to the feature.

          Process matters, but zooming out too far makes everything identical, and the intent matters too. “I want to see your art better” as opposed to “I want to make your art better”.

          • plantfanatic@sh.itjust.works · +1 / −2 · 7 hours ago

What…? It’s more like a chemical vs. a nuclear rocket. You’re not even comparing the same things, while these are both the same thing seen from different views. You don’t like this one, so suddenly it doesn’t meet your arbitrary conditions to be acceptable, and now you’re coming up with incorrect analogies to try and make a point. Great job!

            • ricecake@sh.itjust.works · +2 / −1 · 4 hours ago

And you didn’t even read past the first sentence, I see.

Saying they’re the same because they both use a neural network is roughly equivalent to saying two things are the same because they both manipulate kinetic energy.

    • half_built_pyramids@lemmy.world · +13 / −2 · 23 hours ago

Not all answers are easy. This new DLSS looks like it was trained on stolen work. Old DLSS had a neural network that was tuned before the plagiarism machine became popular.

    • Catoblepas@piefed.blahaj.zone · +5 / −3 · 18 hours ago

      Are you really asking why compressing and uncompressing art made by a human being is different from slop produced by the slop machine?

One exists to reconstruct an image as closely to the original as possible while saving space; the other is meant to introduce arbitrary changes into the initial image and produce something else.

    • bdonvr@thelemmy.club · +5 / −4 · 18 hours ago

Oh yeah? Well, vegetables are both in pig troughs and on dinner plates. Why’s one slop and not the other? They were grown with the same process!

Because one is shitty and the other isn’t.

      • plantfanatic@sh.itjust.works · +3 · 7 hours ago

If the vegetables are the same, they aren’t slop. Pigs aren’t fed fresh vegetables; they’re fed rotten ones. Your analogy doesn’t work, if you actually comprehend the basics of it…

If the vegetables weren’t rotten, then yeah, most people would eat the “slop”, since it’s just vegetables. Would you let good food go to waste just because of the “name” you’re arbitrarily and incorrectly applying to all pig feed?