
I think the inevitable near future is that games are not just upscaled by AI, but they are entirely AI generated in realtime. I’m not technical enough to know what this means for future console requirements, but I imagine if they just have to run the generative model, it’s… less intense than how current games are rendered for equivalent results.


I don't think you grasp how many GPUs are used to run world simulation models. It is vastly more compute-intensive than the current dominant realtime-rendering paradigm of rasterized triangles.


I don't think you grasp what I'm saying? I'm talking about next token prediction to generate video frames.


Yeah, which is pretty slow due to the need to autoregressively generate each image frame token in sequence. And leading diffusion models need to progressively denoise each frame. These are very expensive computationally. Generating the entire world using current techniques is incredibly expensive compared to rendering and rasterizing triangles, which is almost completely parallelized by comparison.
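
To put rough numbers on it, here's a back-of-envelope sketch in Python; every constant below is a made-up assumption for illustration, not a benchmark:

    # Per-second cost at 60 fps, with purely illustrative numbers.
    TOKENS_PER_FRAME = 32 * 32      # assume a 32x32 grid of image tokens
    FLOPS_PER_TOKEN = 2e9           # assumed forward-pass cost per token
    FPS = 60

    # Autoregressive: tokens come out one at a time, each conditioned on
    # everything generated so far, so the work is inherently sequential.
    ar_flops = TOKENS_PER_FRAME * FLOPS_PER_TOKEN * FPS
    print(f"autoregressive: {ar_flops:.1e} FLOP/s")      # ~1.2e14

    # Rasterization: triangles are processed independently, so the same
    # frame budget spreads across thousands of GPU cores in parallel.
    TRIANGLES = 5e6                 # assumed scene complexity
    FLOPS_PER_TRIANGLE = 1e3        # assumed setup + shading cost
    raster_flops = TRIANGLES * FLOPS_PER_TRIANGLE * FPS
    print(f"rasterized:     {raster_flops:.1e} FLOP/s")  # ~3.0e11

And the raw FLOP count isn't even the worst part: the autoregressive work can't be parallelized across tokens within a frame, so latency scales with token count.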


In a few years it's possible that this will run locally in real time.


Okay you clearly know 20x more than me about this, so I cannot logically argue. But the vague hunch remains that this is the future of video games. Within 3 to 4 years.


I don't think that will ever happen due to extreme hardware requirements. What I do see happening is that only an extremely low-fidelity scene is rendered, with only basic shapes and little or no texturing, which is then filled in by AI. DLSS taken to the extreme: not just resolution but the whole stack.
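
Roughly this shape of per-frame pipeline, where every function and class is a hypothetical stand-in rather than a real engine or model API:

    def rasterize_low_fidelity(scene, camera):
        # Stand-in for a cheap traditional pass: basic shapes, flat
        # shading, no textures; emits a G-buffer (depth, normals, IDs).
        return {"depth": ..., "normals": ..., "ids": ...}

    class Enhancer:
        def generate(self, condition):
            # Stand-in for a generative model conditioned on the G-buffer:
            # geometry stays authoritative, the model only fills in
            # surface detail -- DLSS-style, but for the whole stack.
            return "final frame"

    def render_frame(scene, camera, enhancer):
        gbuffer = rasterize_low_fidelity(scene, camera)  # cheap pass
        return enhancer.generate(condition=gbuffer)      # neural fill-in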


I’m thinking more procedural generation of assets. If done efficiently enough, a game could generate its assets on the fly, and plan for future areas of exploration. It doesn’t have to be rerendered every time the player moves around. Just once, then it’s cached until it’s not needed anymore.
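
In Python terms, something like this; generate_region here is a made-up stand-in for whatever model actually produces the meshes and textures:

    from functools import lru_cache

    def generate_region(seed: int) -> dict:
        # Pretend this is the expensive generative call.
        return {"mesh": f"mesh-{seed}", "texture": f"tex-{seed}"}

    @lru_cache(maxsize=256)  # cached until evicted; revisits cost nothing
    def get_region(region_id: str) -> dict:
        return generate_region(seed=hash(region_id))

    def on_player_move(current: str, likely_next: list[str]) -> None:
        get_region(current)         # needed right now
        for region in likely_next:  # plan ahead for future exploration
            get_region(region)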


Even if you could generate real-time 4K 120 Hz gameplay that reacts to a player's input and the hardware doesn't cost a fortune, you would still need to deal with all the shortcomings of LLMs: hallucinations, limited context/history, prompt injection, no real grasp of logic / space / whatever the game is about.

Maybe if there's a fundamental leap in AI. It's still undecided if larger datasets and larger models will make these problems go away.


I actually think many of these are non-issues if devs take the most likely path, which is a hybrid approach.

You only need to apply generative AI to the game assets that do not do well with the traditional triangle rasterization approach. Static objects are already at a practically photorealistic level in Unreal Engine 5. You just need to apply enhancement techniques to things like faces. Using the traditionally rendered face as a prior for the generation would prevent hallucinations.
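
You can already prototype that offline with a low-strength img2img pass, e.g. via the diffusers library; keeping strength low pins the output to the render. A minimal sketch, and to be clear: the file names are placeholders and this runs at seconds per frame, nowhere near realtime:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    rendered = Image.open("ue5_face_render.png").convert("RGB")  # traditional render
    enhanced = pipe(
        prompt="photorealistic human face, detailed skin",
        image=rendered,
        strength=0.3,        # low strength = stay close to the rendered prior
        guidance_scale=7.5,
    ).images[0]
    enhanced.save("enhanced_face.png")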


Realtime AI-generated video games do exist, and they're as... "interesting" as you might think. Search YouTube for "AI Minecraft".


Good luck trying to tell a "cinematic story" with that approach, or trying to prevent the player from getting stuck and unable to finish the game, or even just reproducing and fixing problems, or getting consistent results when the player turns their head and then turns it back, etc. ;)

There's a reason why "build your own story" games like Dwarf Fortress are fairly niche.



