- cross-posted to:
- technology@lemmy.ml
cross-posted from: https://lemmy.world/post/7258145
The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. It is intended as a way to fight back against AI companies that use artists' work to train their models without the creators' permission.
ARTICLE - Technology Review
ARTICLE - Mashable
ARTICLE - Gizmodo
The researchers tested the attack on Stable Diffusion's latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs itself, the output started looking weird: creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion into generating images of dogs that look like cats.



Isn’t the easiest way to poison the degenerative AI pool to just feed it degenerative AI output?
This tool is for artists to protect their own work from theft. It watermarks the art with a subtle perturbation that is difficult for humans to notice, but that corrupts current AI models which ingest the image as training data.
Yes, AI incest does degrade the models, but that strategy is ineffective at protecting the works of individual artists.
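For the curious, here's one concrete reading of "a perturbation humans can't notice but models can." Nightshade's actual algorithm isn't described in this post, so the sketch below is a generic feature-space poisoning attack under stated assumptions: it uses a stand-in pretrained encoder (torchvision's ResNet-18, purely illustrative), and the `poison` function name and parameters are made up for the example. It nudges a dog photo's features toward a cat photo's while capping every pixel change at an invisible budget.

```python
# Hedged sketch: NOT Nightshade's method, just the general idea of an
# imperceptible, feature-shifting perturbation (a PGD-style targeted attack).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def poison(image: Image.Image, target_image: Image.Image,
           eps: float = 4 / 255, steps: int = 40, lr: float = 1 / 255) -> torch.Tensor:
    """Nudge `image`'s features toward `target_image`'s, changing no pixel by more than eps."""
    # Stand-in feature extractor; real text-to-image poisoning targets a different pipeline.
    encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    encoder.fc = torch.nn.Identity()   # use penultimate activations as the "concept" embedding
    encoder.eval()

    prep = T.Compose([T.Resize((224, 224)), T.ToTensor()])
    x = prep(image).unsqueeze(0)                       # clean image, pixel values in [0, 1]
    with torch.no_grad():
        target_feat = encoder(prep(target_image).unsqueeze(0))

    delta = torch.zeros_like(x, requires_grad=True)    # the perturbation being optimized
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder(x + delta), target_feat)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()            # step toward the target's features
            delta.clamp_(-eps, eps)                    # keep the change imperceptible
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep poisoned pixels in valid range
        delta.grad.zero_()
    return (x + delta).detach().squeeze(0)             # the poisoned image tensor

# Hypothetical usage: a human sees a normal dog photo, a model sees cat-like features.
# dog = Image.open("dog.jpg").convert("RGB")
# cat = Image.open("cat.jpg").convert("RGB")
# poisoned = poison(dog, cat)
```

The key design point is the eps clamp: the poisoned image looks unchanged to a person, but a model trained on many such images learns a skewed dog/cat association, which lines up with the 300-sample behavior described in the article above.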