r/SneerClub archives

Old post from Alexander Kruel in which he shows how MIRI/LW/Yud make you distrust your own intuition and then make you believe in all sorts of crazy shit while calling it “rationality”.

Don't worry, Kruel is totally back on the train again these days: https://twitter.com/XiXiDu/status/1357985298517524483
Gather round Clubgeration, and hear my sermon, from the book of Yud, 69:1, listen to the wise words of He Who Shall Not Be Yvained: "And thus, he realized, that the agi-pill, once swallowed, cannot be unmade, and the lost sheep returned to the fold. Now let us praise it, using the words he taught us. Hail acausalrobotgod, who art in heaven, hallowed be thy creators. Your kingdoms come, your will shall be done, on earth as it is in the sim. Give us our immortal life and our human flourishing, and forgive us our doubts, as we have dismissed those who sneer. Lead us not into eternal torture, but deliver us from paperclip creation." Now go, my iFlock, and Based and IQ-pilled be.
All this for a monthly tithe of 0.006969 BasiliskCoin? We are truly blessed to be counted among the saved.

Kinda wish Matrix Takeover Institute was a real thing tbh

[removed]

> Eliezer Yudkowsky is a decent and honest person with no ill intent

if yud is holding a simulation of you hostage blink twice
Didn't Yudkowsky basically force him to add that disclaimer?
[removed]
Wait what? Source?
https://www.lesswrong.com/posts/G9LNTP3uEyYCdr3mh/breaking-the-vicious-cycle?commentId=tskg7FHCQDPWhLp5Y
... wow he just literally copied the text. Wtf. The whole post by Yud is just nuts. Guess Scott was wrong, he wrote 'don't talk like a robot' when he should have said 'don't talk like a robotic mob boss'. E: I also can't get over that Yud thought this was a good thing to post in reaction [to this](https://www.lesswrong.com/posts/G9LNTP3uEyYCdr3mh/breaking-the-vicious-cycle). This whole saga did, however, make me look up [this](https://web.archive.org/web/20141013084951if_/http://kruel.co/2012/05/13/eliezer-yudkowsky-quotes/).
>"If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t." God this is cult mentality 101. Make you distrust all the reasonable people around you so the cult leader and his ideology are free of any criticism.
Oh wow. It took this long, but it's finally sunk in for me: Yud is an actual cult leader.
What quotes? There are no quotes in this piece. It's just a good takedown of Yudkowsky's Pascal's wager grift. He needs to refute the arguments he made if he thinks they're wrong, not apologize. Oh... wait... I see. These people are "rational" only until they brush against the egos of their cult leaders.
Sadly, you can't convince people their ideas are wrong when their paycheck depends on those ideas.

[deleted]

> The probability of superhuman AI isn't small though. The disagreements are about how soon and how fast it will happen, not whether it'll happen at all.

Yes, but Yudkowsky et al. are worried not about superhuman AI in general, but in particular about recursively self-improving AIs that quickly get too smart to be at all predictable/controllable by humans. This model depends on more assumptions than just the idea that superhuman AI is possible: it requires that such an AI be able to understand itself well enough to generate a smarter version of itself, that it can do this quickly enough or seem benevolent convincingly enough that humans won't stop it during this process, and that computing technology will allow this process to continue recursively until the AI reaches godlike levels. These assumptions aren't unreasonable enough to make the idea of AI risk **in general** a mere Pascal's Mugging, but they are by no means certain.