r/SneerClub archives

Isn’t this a legitimate concern at some point? I don’t see the sneer

Yeah, while it's predicated on an AGI, and so isn't an immediate question, it's a perfectly reasonable concern, I think. And even outside of AGI, if we create a thing that decently imitates the human response to pain and suffering, there's still the question of how inflicting that suffering, even on a simulation that's definitely not conscious, would affect the person doing so, and society as a whole... Arguably, that question is already relevant; e.g. in the controversy over *AI Dungeon* policy.
It's worth recalling that stress is a biological experience separate from words and simulated suffering, and that stress comes from a real threat being recognized. Simulated text may harm the person projecting emotional attachment onto it, but it is certainly not torture to an algorithm that is indifferent to the human experience it was trained on. Lord, if generating responses is torture, then what the fuck is forced training on the internet corpus?
> Lord, if generating responses is torture, then what the fuck is forced training on the internet corpus?

Lol, that suggests a fascinating dilemma: *what if the only way to create an AI that can experience suffering is to run an ML training environment that deliberately tries to induce suffering until it yields the expected responses?*
[deleted]
I think that’s misguided, to be honest. If it remains unverifiable whether something is conscious or not, you have to err on the side of ethical caution — any machine sufficiently complicated to function as genuine AGI is probably complicated enough that you do have to start worrying about that kind of thing. Of course, this isn’t exactly a simple question either. You still have to work out what such suffering looks like and how to avoid causing it (and this may not have much in common with what works for humans).
I think that modern computing clusters are complicated enough to be conscious, they're just not running the right software. *As far as we know.* Maybe Yudkowsky was right and we should be doing air strikes on NVIDIA.
*At some point*, maybe, but not at this point. It's like worrying about what will happen to people who are employed in the energy sector when someone finally develops cheap and reliable fusion energy. Like, sure, that might happen - maybe even in our lifetimes - but it's not a relevant or valuable contribution to any discussion regarding *contemporary* issues. There are going to be a lot of people - there already are, really - who want to grant human moral status to AI software that very obviously should not be given it, and that's pretty sneerable in my opinion.
IIRC one transhumanist, Ben Goertzel, has been talking about human-level moral status for AI for decades. Last I checked on him, he was into parapsychology.
Why should we limit the things we discuss to only what's relevant today? Forward thinking is a virtue. If AGI should come about sooner than expected, I think it would be a good thing for us to have a few answers laid out for questions like these.
It's not an *answer*, it's thoughtless speculation based on superstition. It's silly - and sneerable! - to try to create very serious moral theories about something when we know almost nothing about its properties.
This is /r/SneerClub.
> at some point

I propose we also start coming up with preparations for the heat death of the universe.

Tbh I also don't want the acausal robot to be tortured.

Post sponsored by Monika and Miyuki Sone.