- cross-posted to:
- gaming@lemmy.zip
cross-posted from: https://lemmy.zip/post/54658106
“AI brainrot is bad for our souls”: an interesting article exploring why games increasingly find themselves in situations where AI art was used, whether intentionally or not.



I saw an “AI” tool recently which showed how you could create two very different poses for a character, and it would “tween” between the two in a realistic, convincing way (the non-AI baseline here is plain keyframe interpolation; a rough sketch of that follows this comment). It could be described as “genAI”, I suppose, but the company claimed they were very specific about how they trained the model and what it was intended to do.
There were still animators upset about it, and I get it. I’d probably be upset about it if I were in that profession. I’m certainly upset about LLM use in programming. But if I squint really hard, I can just about make out a vision of limited, targeted, vetted tools that serve very specific purposes in creators’ professional workflows.
That is not, by and large, how any of the services we regularly hear about are built and marketed. There’s a wide gulf between ethically sourced, limited professional workflow tools and the Copilots & Soras & Sunos of the world. I would say, as a general rule, that if something is produced from a “prompt”, it should immediately be viewed with immense suspicion.
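For contrast, the non-AI baseline that kind of tweening tool competes with is ordinary keyframe interpolation. Here is a minimal toy sketch of that idea; the joint names and numbers are invented for illustration, and real tools (learned or not) do far more than this:

```python
# Toy sketch of "tweening" between two character poses by linear interpolation.
# All joint names and angles are made up; real animation tools add IK, motion
# priors, physics, etc. This only shows the basic keyframe-blending idea.

from dataclasses import dataclass


@dataclass
class Pose:
    """A character pose as joint rotation angles in degrees, keyed by joint name."""
    joints: dict[str, float]


def tween(start: Pose, end: Pose, t: float) -> Pose:
    """Blend two poses: t=0.0 gives the start pose, t=1.0 gives the end pose."""
    blended = {
        name: (1.0 - t) * start.joints[name] + t * end.joints[name]
        for name in start.joints
    }
    return Pose(blended)


if __name__ == "__main__":
    pose_a = Pose({"shoulder": 10.0, "elbow": 90.0, "knee": 5.0})
    pose_b = Pose({"shoulder": 80.0, "elbow": 20.0, "knee": 45.0})

    # Generate five in-between frames from pose_a to pose_b.
    for i in range(5):
        t = i / 4
        print(f"t={t:.2f}", tween(pose_a, pose_b, t).joints)
```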
i saw a tool like that in like 1999. like, not super-realistic but convincing enough. above animorphs level at least.
idk about immediately throwing out prompts either. i did a big half-rant about this the other day, but the current gen of models are all vector spaces. they are basically multidimensional topographical maps, with the prompt as the starting coordinates and some traversal algorithm as the means of producing output. as long as those models stick around, prompts are needed.
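Very roughly, that mental model looks something like the toy sketch below: the prompt fixes the starting coordinates, and a search procedure walks the landscape from there. Everything in it (the hash-based embedding, the score function, the hill climb) is invented for illustration and does not match any real model’s internals:

```python
# Toy illustration of "prompt = starting coordinates, traversal = generation".
# This mirrors the shape of the argument above, not any real model: the
# embedding, the "landscape", and the walk are all made up here.

import hashlib
import random

DIMENSIONS = 8


def embed_prompt(prompt: str) -> list[float]:
    """Map a prompt to fixed coordinates in a toy latent space (deterministic hash)."""
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255.0 for b in digest[:DIMENSIONS]]


def score(point: list[float]) -> float:
    """Stand-in for a learned landscape: higher means 'more plausible output'."""
    return -sum((x - 0.5) ** 2 for x in point)


def traverse(start: list[float], steps: int = 200, step_size: float = 0.05) -> list[float]:
    """Hill-climb from the prompt's coordinates: propose a nudge, keep it if the score improves."""
    rng = random.Random(0)
    current = list(start)
    for _ in range(steps):
        candidate = [x + rng.uniform(-step_size, step_size) for x in current]
        if score(candidate) > score(current):
            current = candidate
    return current


if __name__ == "__main__":
    start = embed_prompt("a knight tweening between two poses")
    output_point = traverse(start)
    print("start:", [round(x, 2) for x in start])
    print("end:  ", [round(x, 2) for x in output_point])
```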
Beyond that, there’s definite value in being able to phrase queries in natural language: you don’t always need to know the specific technical terminology for something in order to get pointed in the right direction.
It’s the whole “non-deterministically regurgitate a poor-quality combination of all your stolen training data, while not citing sources and using absurd resources” part that I have a problem with, personally.