• sonofearth@lemmy.world · 20 points · 2 days ago

      Wait, DLSS is about upscaling, right? The “features” mentioned in OP’s post are about motion interpolation, which makes video seem to play at a higher fps than the standard 24fps used in movies and shows.

      • vithigar@lemmy.ca · 14 points · 2 days ago

        Because names mean nothing, Nvidia has also labeled their frame generation as “DLSS”.

      • lemming741@lemmy.world · 12 points · 2 days ago

        It allows more resolution by cutting the fps. Fake frames are inserted into the gaps to get the fps back.
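
The “fake frames in the gaps” idea can be sketched with the crudest possible interpolator: a plain linear cross-fade between two real frames. This is a toy (the `interpolate_frames` helper is hypothetical, and real frame generation estimates motion rather than blending), but it shows where the inserted frames sit in the sequence:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_inserted=1):
    """Generate n_inserted intermediate frames between two real frames
    by simple linear blending (cross-fade). Real interpolators estimate
    motion instead of blending, but this shows where the inserted
    frames land in the output sequence."""
    frames = [frame_a]
    for i in range(1, n_inserted + 1):
        t = i / (n_inserted + 1)  # fractional position between the two real frames
        frames.append(((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype))
    frames.append(frame_b)
    return frames

# 24 fps source -> 48 fps output by inserting one blended frame per gap
a = np.zeros((2, 2), dtype=np.float32)
b = np.ones((2, 2), dtype=np.float32)
out = interpolate_frames(a, b, n_inserted=1)
```

With one inserted frame the blend weight is 0.5, i.e. a straight average of the two real frames; this is exactly the kind of smeary in-between that motion-aware interpolators try to improve on.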

          • NekuSoul@lemmy.nekusoul.de · 12 points · edited · 2 days ago

            It’s both. Nvidia just started calling everything DLSS, no matter how accurately it matches the actual term.

            Image upscaling? DLSS. Frame generation? DLSS. Ray reconstruction? DLSS. Image downscaling? Surprisingly, also DLSS.

            • AdrianTheFrog@lemmy.world · 1 point · 2 days ago

              Frame generation is the only real odd one out here; the rest use basically the same technique under the hood. I guess we don’t really know exactly what ray reconstruction is doing, since they’ve never released a paper or anything, but I think it basically combines DLSS upscaling with denoising in the same pass.

          • dual_sport_dork 🐧🗡️@lemmy.world · 3 points · 2 days ago

            What you’re thinking of is “DLSS Super Resolution.” The other commenters are right, nVidia insists on calling all of their various upscaling schemes “DLSS” regardless of whether they’re image resolution interpolation or frame interpolation. Apparently just to be annoying.

            There is a marginally handy chart on their website:

            All of it is annoying and terrible regardless of what it’s called, though.

        • Yggstyle@lemmy.world · 2 points · edited · 2 days ago

          It’s simply “visual noise” that tricks viewers into thinking they’re getting more of something than they are. It’s a cheap, inconsistent filler. It’s Nvidia not admitting they hit a technical wall and needing a way to force new, inferior products onto the market to satisfy sales.

    • AdrianTheFrog@lemmy.world · 3 points · 2 days ago

      DLSS Frame Generation actually uses the game’s analytic motion vectors instead of trying to estimate them (well, really it does both), so it is a whole lot more accurate. It’s also using a fairly large AI model for the estimation, compared to TVs probably just doing basic optical flow or something.

      Whether it’s actually good, though, depends on whether you care about latency and whether you can notice the visual artifacts in the game you’re using it for.
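
The motion-vector advantage can be sketched roughly (the `warp_half_step` helper and the gather-style warp are illustrative assumptions, not Nvidia’s actual method): given exact per-pixel vectors from the engine, placing an in-between frame is essentially a warp halfway along the motion, whereas a TV must first estimate those vectors from pixels alone.

```python
import numpy as np

def warp_half_step(frame, motion_vectors):
    """Warp a frame halfway along per-pixel motion vectors, roughly where
    a generated in-between frame would place its content. Here the vectors
    are the game's analytic ones (exact, straight from the engine); a TV
    would have to *estimate* them from pixels via optical flow, which is
    where estimation errors and artifacts creep in.
    Gather-style warp: each output pixel samples backwards along half the vector."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion_vectors[..., 0] = dx, motion_vectors[..., 1] = dy, in pixels/frame
    src_x = np.clip(xs - motion_vectors[..., 0] * 0.5, 0, w - 1).astype(int)
    src_y = np.clip(ys - motion_vectors[..., 1] * 0.5, 0, h - 1).astype(int)
    return frame[src_y, src_x]

# A bright pixel moving 2 px/frame to the right appears shifted by 1 px
# in the halfway frame.
frame = np.zeros((4, 4))
frame[1, 1] = 1.0
mv = np.zeros((4, 4, 2))
mv[..., 0] = 2.0
mid = warp_half_step(frame, mv)
```

The hard part a real interpolator faces (disocclusion, conflicting vectors, transparency) is exactly what this toy ignores, which is why exact engine vectors plus a large model beat pixel-only optical flow.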

      • Yggstyle@lemmy.world · 2 points · 2 days ago

        Motion blur is consistent and reproducible using math. The other isn’t. Something that cannot produce consistent results yet is sold as a solution does have a name, though: snake oil.

        • AdrianTheFrog@lemmy.world · 1 point · 2 days ago

          Motion blur in video games is usually a whole lot less accurate at what it’s trying to approximate than averaging four frame-generation frames would be, although those four generated frames would also be a lot slower to compute than the approximations people normally make for motion blur.

          Yes, motion blur in video games is just an approximation and usually has a lot of visible failure cases (disocclusion, blurred shadows, sometimes rotary blur). It obviously can’t recreate the effect of a fast-blinking light moving across the screen during a frame. It can be a pretty good approximation in the better implementations, but the only real way to ‘do it properly’ is by rendering multiple frames per shown frame or rendering stochastically (not really possible with rasterization, and it obviously introduces noise). Perfect motion blur would be the average of an infinite number of frames over the period of time between the current frame and the last one.

          With path tracing you can do the rendering stochastically, and you need a denoiser anyway, so you can actually get very accurate motion blur. As the number of samples approaches infinity, the image approaches the correct one.

          Some academics and Nvidia researchers recently coauthored a paper on optimizing path tracing to apply ReSTIR (a technique for reusing samples across neighboring pixels and across time) to scenes with motion blur, and the results look very good (obviously still very noisy; I guess Nvidia would want to train another ray reconstruction model for it). Apparently it’s also better than normal ReSTIR or Area ReSTIR when there isn’t motion blur. It relies on a lot of approximations too, so probably not quite unbiased path-tracing quality if allowed to converge, but I don’t really know.

          https://research.nvidia.com/labs/rtr/publication/liu2025splatting/

          But that probably won’t be coming to games for a while, so we’re stuck with either increasing framerates to produce blur naturally (through real or ‘fake’ frames), or approximating blur in a more fake way.
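
The “average of many frames” definition of perfect motion blur from the comments above can be sketched with a toy renderer (everything here is hypothetical: a 1-D “scanline” stands in for an image, and stratified jitter stands in for stochastic sampling):

```python
import numpy as np

def render(t):
    """Toy 'renderer': a 1-pixel-wide bright dot sweeping across an
    8-pixel scanline during the frame interval t in [0, 1]."""
    img = np.zeros(8)
    img[int(t * 7.999)] = 1.0  # dot position depends on the sample time
    return img

def motion_blur(n_samples):
    """Average n_samples renders at stratified times within the frame
    interval. This is the 'average of many frames' definition of perfect
    blur; as n_samples grows the dot converges to an even streak."""
    ts = (np.arange(n_samples) + 0.5) / n_samples  # stratified sample times
    return np.mean([render(t) for t in ts], axis=0)

blur = motion_blur(800)  # many samples -> roughly 1/8 of the energy per pixel
```

One render per shown frame is the `n_samples = 1` case (a frozen dot, no blur); real-time approximations try to fake the `n_samples → ∞` result from that single sample plus motion vectors, which is where the failure cases come from.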