• potterman28wxcv@beehaw.org · 1 year ago

      They provide special hardware for neural network inference (most likely convolutional). That is, they offer a set of matrix multiplication capabilities and the other operations required to execute a neural network.

      Look at this page for more info: https://www.nvidia.com/en-us/data-center/tensor-cores/

      They can be leveraged for generative AI needs. And I bet that’s how Nvidia provides its automatic upscaling feature - it’s not the game that does it, it’s literally the graphics card that does it. Leveraging AI in video games (like using the cores to generate text the way ChatGPT does) is another matter - you want a game that works on all platforms, even those that do not have such cores. Having code that says “if it has such cores, execute that code on them; otherwise execute it on the CPU” is possible, but imo that is more the domain of the computational libraries or the game engine - not the game developer (unless that developer writes their own engine).
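      A sketch of that “if it has such cores, use them; otherwise use the CPU” dispatch might look like the following. Every name here is hypothetical - a real engine would query the driver or a compute library (e.g. CUDA compute capability) rather than this stand-in probe:

```python
def has_tensor_cores() -> bool:
    """Hypothetical capability probe. A real engine would ask the
    graphics driver or a compute library instead of hardcoding."""
    return False  # assume no accelerator for this sketch


def infer_on_accelerator(weights, inputs):
    # Placeholder for the tensor-core path.
    raise NotImplementedError("would dispatch to tensor cores")


def infer_on_cpu(weights, inputs):
    # Plain matrix-vector product as the CPU fallback path.
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]


def run_inference(weights, inputs):
    # "If it has such cores, execute there; otherwise on the CPU."
    if has_tensor_cores():
        return infer_on_accelerator(weights, inputs)
    return infer_on_cpu(weights, inputs)
```

      In practice this branching lives inside the library or engine, which is exactly why the game developer normally never writes it.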

      But my point is that it’s not as simple as “just have each core implement an AI for my game”. These cores are just accelerators of matrix multiplication operations - which are themselves used in generative AI. They need to be leveraged within the game-dev software ecosystem before a game developer can use those features.
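      To make “accelerators of matrix multiplication” concrete: here is what a single dense neural-network layer reduces to, in plain Python for illustration. The matmul is the primitive the tensor cores speed up; nothing here is real accelerator code:

```python
def matmul(a, b):
    # Naive matrix multiply: the primitive that tensor cores accelerate.
    cols_b = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols_b]
            for row in a]


def dense_layer(inputs, weights, bias):
    # One fully connected layer: a matmul plus a bias add.
    out = matmul(inputs, weights)
    return [[v + bi for v, bi in zip(row, bias)] for row in out]
```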

      • dillekant@slrpnk.net · 1 year ago

        it’s not the game that does it, it’s literally the graphics card that does it

        The game is just software. It will execute on the GPU and CPU. DLSS (proprietary) and XeSS (open source) are both libraries that run the AI bits of the cards for upscaling, because those bits weren’t really being used for anything. Game devs have the skills to use them just like regular AI devs do.

        By AI here I mean what is traditionally meant by “game AI”: pathfinding, decision-making, coordination, etc. There is a Counter-Strike bot that uses neural nets (on the CPU), and it’s been around for decades now. It is trained the way normal bots are trained. You can train an AI in a game and then use it as NPCs, enemies, etc.
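        A minimal sketch of a neural-net-shaped game AI in this sense - a tiny linear policy mapping game state to an NPC action. The features, weights, and action names are all invented for illustration; a trained net would have the same shape, just with learned weights:

```python
ACTIONS = ["patrol", "chase", "flee"]


def npc_policy(state, weights):
    """Score each action as a weighted sum of state features and pick
    the best one -- the same shape a trained policy net would have."""
    scores = [sum(w * s for w, s in zip(row, state)) for row in weights]
    return ACTIONS[scores.index(max(scores))]


# Hypothetical state = [distance_to_player, own_health, 1.0 (bias)]
TOY_WEIGHTS = [
    [ 1.0,  0.0, 0.0],  # patrol: favoured when the player is far away
    [-1.0,  2.0, 0.0],  # chase: favoured when close and healthy
    [-1.0, -2.0, 2.0],  # flee: favoured when close and hurt
]
```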

        We should use the AI cores to do AI.

          • potterman28wxcv@beehaw.org · 1 year ago

           You could imagine training one AI for each game-AI problem, like pathfinding, but what is the benefit over just using classical algorithms?

           Can DLSS and XeSS be used for anything other than upscaling?

            • dillekant@slrpnk.net · 1 year ago

            what is the benefit over just using classical algorithms

            Utilisation. A CPU isn’t really built for deep neural-network code, so it can’t run realistic AI within the frame budget while doing everything else. This is famously why games have bad AI. Running NPC behaviour on neural networks could make the NPCs more realistic or smarter, and you could do this within a reasonable frame budget.

              • potterman28wxcv@beehaw.org · 1 year ago

              I see. You want to offload AI-specific computations to the Nvidia AI cores. Not a bad idea, although it does mean that hardware that does not have them will carry more CPU load - so perhaps the AI will have to be tuned down based on the hardware it runs on…

                • dillekant@slrpnk.net · 1 year ago

                so perhaps the AI will have to be tuned down based on the hardware it runs on…

                Yes. Similar to ray tracing, which still needs a traditional pipeline alongside it, with AI you will have an “enhanced” tier (neural nets) and a “basic” tier (if statements).
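                The two-tier idea (“enhanced” neural nets with a “basic” if-statement fallback) might look like this sketch - both tiers and all names are illustrative:

```python
def basic_ai(distance, health):
    # "Basic" tier: plain if statements, runs anywhere.
    if health < 0.3:
        return "flee"
    if distance < 5:
        return "chase"
    return "patrol"


def choose_action(distance, health, neural_policy=None):
    # Use the "enhanced" neural tier when the hardware and model are
    # available; otherwise fall back to the rule-based tier.
    if neural_policy is not None:
        return neural_policy(distance, health)
    return basic_ai(distance, health)
```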