Yudkowsky has accepted that we’re all going to die at the hands of Terminator robots, but he will not accept being misunderstood, and so he has come to LessWrong to elaborate upon his recent Time article.
Yudkowsky’s Time article addendum 1 (kind of boring)
Yudkowsky’s Time article addendum 2 (less boring, talks about nukes and death)
For the last time: he’s not saying we should use nukes, he’s saying that we should be okay with getting nuked. Get it right.
It’s true that his logic technically follows. By virtue of basic arithmetic, half of humanity getting nuked is preferable to all of humanity getting pulped by Terminator robots. But nobody is going to get pulped by Terminator robots, so this is still the ranting of a madman and it is appropriate to mock it without belaboring the details of the matter.
However, pointing out in the comments that Yudkowsky is wrong is verboten: the mods have chimed in to reiterate that dissent on basic matters regarding the robot apocalypse is not allowed except in designated areas.
The rationale that they give for this is that they expect people to be very familiar with the prior work on a topic before trying to engage with it.
Oops, I forgot to include this quote from Yud’s second addendum in which he explains what he thinks about how realistic his policy proposals are:
I wonder how long it’s going to be before someone takes him at his word and koolaids their way out of the problem for good.
So he’s thoroughly a failure in his fantasy world struggle. The future robot god, not Yud, has won their acausal single combat for the future of humanity. You’d otherwise reasonably expect that he’d finally just shut the fuck up about AI safety. It’s too late, close up and turn out the lights. But this is Yudkowsky. I’m sure he’s just getting started in his campaign of clout-chasing.
Citing Robert Heinlein as the source for your political philosophy – never a good sign.
“We’re all going to die”
Promise?
Okay so all this shit… it’s basically sci-fi. Right??
Like the reasoning is on the level of the coarsest and poorest sci-fi??
Even better-done sci-fi kind of makes a mockery of e/acc people. Consider Accelerando by Stross. The conclusion is that superintelligent AI civilizations become inward-focused and die a stagnant death. Basically no entity wants to be far from the core lest it lose the latency advantage, and there’s basically no advantage or purpose to exploring the universe.
Ok, back to Big Yud’s dilemma… he’s worried that we can’t get AI to prioritize human life. Okay. Well guess what, buddy, we can’t even get capitalism and humans to prioritize healthy human life. We are doing a number on the planet without needing AI.
So aren’t his criticisms really a veiled critique of capitalism? Or is that getting a bit too radical for these people??
Deviating from the approved scriptures and not demonstrating proper obeisance to the patron saints will get you kicked out of most cults.