r/SneerClub archives
Question; What the hell happened to Yud while I wasn't paying attention? (https://www.reddit.com/r/SneerClub/comments/13uij1k/question_what_the_hell_happened_to_yud_while_i/)

15 years ago, he was a Singularitarian, and not only that but actually working in some halfway decent AI dev research sometimes (albeit one who still encouraged Roko’s general blithering). Now he is the face of an AIpocalypse cult.

Is there a… specific precipitating event for his collapse into despair? Or did he just become so saturated in his belief in the absolute primacy of the Rational Human Mind that he assumed that any superintelligence would have to pass through a stage where it thought exactly like he did and got scared of what he would do if he could make his brain superhuge?

[deleted]

It really hurts to see it laid out like this. I also had a bad case of "it is my responsibility to save the world with my Amazing Brain" when I was 17. Got smacked out of that shit real quick by meeting some actually smart people. But I still remember being paralyzed out of accomplishing things by terror that I would fail at them and thus turn out to not be a genius, and therefore useless. A whole life led like that makes my heart hurt to think about. edit: have now read further. what the fuck do you *mean* they promoted roko's stupid fucking basilisk so much it caused multiple psychotic breaks. goddamn roko's piece of shit basilisk was notorious as one of the most up its own ass bits of nonsense doomerism ever put to text from the day it was published. I have been thinking trite shit about "never attribute to malice what can be chalked up to stupidity" but there is no way at *all* that someone at the top thought "this is a good argument that people should genuinely worry about" about **roko's fucking basilisk.** there is no attribution for that but a cult leader choosing to maintain control through fear. man when I made this thread I was like "wow, Yud's turned into kind of a nut apparently. how wacky." what a turnaround.
>what the fuck do you mean they promoted roko's stupid fucking basilisk so much it caused multiple psychotic breaks. goddamn roko's piece of shit basilisk was notorious as one of the most up its own ass bits of nonsense doomerism ever put to text from the day it was published.
Sadly it has the *exact* same format and reasoning and payoff matrix as Pascal's Wager, which has been haunting people for hundreds of years and which, *somehow*, is still kind of taken seriously even though it's absurd. People are just like that with this kind of anxiety based mind trap.
The dumbest part of Roko's Basilisk and everyone freaking out about it is that ... it's just describing consequences we already deal with for basically everything, and framing it as some novel existential threat.
A lot of their anxieties about Skynet-type AI are just sublimated anxieties about real threats brought about by inequality, the military-industrial complex, capitalism, class, their own IQ eugenics and value system, etc. It's the right-wing nerd version of finding a scapegoat to blame the problems of capitalism on; Skynet is indirect enough that it won't hurt their tech entrepreneur friends' feelings when you talk about how bad it can be. Their earlier belief about the singularity was similar, except it adopted techno-optimist and free market assumptions from that era of neoliberalism instead of today's doom. The Singularity had the potential to transform society and the human species into something unknown, but new and better, and perhaps eternal. Now, AI will literally kill us all and if there is any continuity of life it will be endless virtual torture. They both lead to some kind of end-times. It's an inability to understand and interact with the realities of capitalism directly, IMO, especially with its inevitable end game.
While the usual suspects of existential questions are implicit in the realities of capitalism, I think it's a little putting the cart before the horse to lay specific blame on capitalism here. Much as inordinate concern about the apocalypse is a logical consequence of thinking that certain Christian beliefs are specifically applicable to the modern day, Roko's Basilisk flows directly from taking the Rationalists' beliefs about probability and AI seriously. You don't need to go further than that.
Say what you will about Pascal's Wager but at least it has the excuse that he died before he could complete it, so of course there are problems. What the fuck is Roko's excuse?
My favorite response is "Oh, you don't have to worry about Roko's Basilisk. Why? /u/Groudon466's Basilisk, of course! *My* basilisk eternally simulates and punishes everyone who selfishly attempted to contribute to the creation of *Roko's* Basilisk. Have fun with that one."
Yeah, there's actually a standard objection to Pascal's Wager along those lines, the many/infinite gods objection iirc. It's a pretty good one. Then of course there's Pascal's Mugging, and just pointing out that decision theory breaks down into incoherency when you treat unsubstantiated claims as serious possibilities (since it confuses metaphysical possibility and epistemic possibility, the latter of which merely stems from uncertainty) and insert infinite payoffs.
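(To spell out the structure being described: here's a minimal sketch of the payoff math, with illustrative symbols of my own -- p for the credence in the punishing god/basilisk and c for the finite cost of complying -- none of it taken from Pascal, Roko, or the comment above.)

```latex
% Pascal's-Wager-style expected utility once an infinite payoff is allowed in:
E[\text{comply}] = p \cdot (+\infty) + (1 - p) \cdot (-c) = +\infty
E[\text{refuse}] = p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
% "Comply" dominates for every p > 0, however tiny, so evidence stops mattering.
% The many-gods objection adds an incompatible deity that punishes compliance
% with the first one:
E[\text{comply}] = p_1 \cdot (+\infty) + p_2 \cdot (-\infty) + \dots
% which is undefined (infinity minus infinity), so the decision procedure
% simply stops returning an answer at all.
```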
> I have been thinking trite shit about "never attribute to malice what can be chalked up to stupidity" but there is no way at all that someone at the top thought "this is a good argument that people should genuinely worry about" about roko's fucking basilisk. there is no attribution for that but a cult leader choosing to maintain control through fear.
per the basilisk post, it was an idea that was already going around at SIAI (as was) when roko was interning there in mid-2010. and yudkowsky did try very hard to suppress it. this worked extremely badly.
> roko's piece of shit basilisk uses EY's Timeless Decision Theory, therefore of course he's going to anti-promote it because it confirms he's The Most Important Person. That's why he loves it, it centers him in the AI doom narrative. That's it. He [apparently frequently](https://twitter.com/jessi_cata/status/1651364353139146752) thinks hard about AI and then tells no one on purpose
Hanlon's Razor is a good heuristic ("never attribute to malice what can be adequately explained by stupidity") but you also really need Grey's Corollary too: "any sufficiently advanced incompetence is indistinguishable from malice."
>but there is no way at all that someone at the top thought "this is a good argument that people should genuinely worry about" about roko's fucking basilisk.
[Correct](https://www.reddit.com/r/Futurology/comments/2cm2eg/comment/cjjbqqo/?utm_source=share&utm_medium=web2x&context=3). Yud did not, in fact, think it was a good argument. Found with 1 minute of googling. A particularly (incredibly) fitting quote given the sub and the fact-distortion at play:
>If I had to state the basic quality of this situation which I overlooked, it wouldn't so much be the Streisand Effect as the existence of a large fraction of humanity---thankfully not the whole species---that **really really wants to sneer at people**, and **which will distort the facts as they please if it gives them a chance for a really good sneer**.
[deleted]
>This idea -- that you can speak or think into non-existence an entire civilization based on an internet post-- is pure sci-fi gobbledygook.
How the hell did you get "speaking or thinking a civilization into non-existence" from the bit you quoted?
[deleted]
I'll repeat: How the hell did you get "speaking or thinking a civilization into non-existence" from the bit you quoted? You still have yet to actually quote anything about speaking a civilization into non-existence. Are you trying to claim that "a real hazard" is referring to "the end of civilization" and not "harm to the mental health of some people"? Edit: Banned for disagreeing, so I won't be responding. Users should keep in mind that this sub does not allow dissent. In case you were wondering why it's such an echo chamber.
Actually, Yud found it to be such a good argument that he printed it out, framed it and put it on his wall next to his degrees and awards. It's so impressive that people come over to take pictures with it and leave. Sometimes he gets up at night just to give it a high five. One time Yud called his mom and asked her if she loved him or Roko's Basilisk more, and his mom said the latter. She loved the good argument more than her own son. He cried tears of joy because she passed the test that meant that she is Smart™/saved. Thank God.
You might note here that "someone at the top" does not, in fact, name Yud specifically in any way. If I had meant him, I would have said him. Jessica Taylor reports that she suffered a psychotic break which was significantly shaped by the idea that she had failed to prevent an outcome in which humanity was tortured by a powerful AI, which was a topic of concern at MIRI. Being embarrassed by one of the original expressions of that idea, and disagreeing with and disavowing it, does not change the fact that an organization shaped by his ideas from its foundation, and in which he had a continual guiding role, promoted that idea until it became the focus of someone's breakdown. He does not need to have personally explained the dangers of the Basilisk to her to bear some culpability in what happened. He is not solely responsible by any means, was probably unaware of the specific conditions in which this idea was repackaged and distributed in MIRI, and was, as a genuinely compassionate person, probably horrified by the results. But there is still a direct line of action between his Sequences and that breakdown, at which I was expressing horror. Which seems reasonable, after asking "oh man what happened to Yud" and receiving an account of an emotional collapse partially incited by a pervasively stupid idea that took root in ground he helped till. Inferences are easy and natural to make, because no conversation ever has perfect clarity, and generally there is no harm from these miscommunications. But perhaps, in the future, consider making sure someone said what you are saying they said before getting in a really good sneer, huh?
> there is no way at all that someone at the top thought "this is a good argument that people should genuinely worry about" about roko's fucking basilisk. there is no attribution for that but a cult leader choosing to maintain control through fear.
Yud actually tried to suppress discussion of it on the site at first, since it was stupid and yet making a number of users worry. This, unfortunately, backfired hard, as the Streisand Effect kicked in and it became popularly discussed way outside of the site, to a much greater extent than if he had just let discussion of it run its course. Eventually, he acknowledged the backfire and unbanned discussion of it.
I'm not sure how much I trust this. There's a lot of AI hype that I find dubious - it's the new trendy tech thing, sure, but I'm pretty doubtful that it'll turn out to be revolutionary rather than just another useful, but specialized, tool. It stinks of ad copy. It also pretty uncritically accepts the idea that Yudkowsky is a one-in-a-billion genius and The Smartest Guy On Earth Ever when as far as I can tell he did exceptionally well at tests in *elementary school* and then spent the subsequent 30 years convincing people that he's the greatest genius of our generation without ever actually achieving anything with his supposed intelligence.
[deleted]
>Siskind
Scott Alexander, HBD, and neoreaction come up in chapter 6 of *Extropia's Children*, which is now published. I was never all that impressed by Yudkowsky as anything but a fanfic writer, but since *Extropia's Children* goes on to describe very bad things, I can see why it begins by presenting the most sympathetic possible view of Yudkowsky (and like, if he were just a guy on the Internet with more confidence than achievements, is that so bad? It's the things his disciples do in meatspace that are the problem)
[deleted]
I agree that Evans should have summarized that email in a sentence or two, since it raises questions about the original purpose of SlateStarCodex! Likewise, he trusts the Anti-Neoreactionary FAQ as evidence that Alexander was not a fellow traveller, without noting that Alexander's hostility to feminists and sympathy to biological race thinking are warning signs sitting right beside the angry rants about "the Cathedral" and the email saying that Alexander wanted to learn the insights of the NeoReactionaries. But Evans did at least bring up that a gang of racists and reactionaries hung out on SlateStar and were accepted as participants. Alexander, Bostrom, and Pinker are three famous people in the social network he defines in chapter 1 who at least toy with biological race thinking in private, and seem to have done so before, say, 2012. It's not just Michael Anissimov.
It's more that Yud has perfected a Gifted Kid Grift possibly more ambitious than any other, all without realizing it.
Okay I just wasted way too much time reading that, but the best part was the people who seem like they read Stross’s Laundry Files novels and thought they were nonfiction.
Important note: since this was written, stuff came out showing that the whole .info site is very dubious and is likely to be misinformation. We have talked about this more here somewhere. (E: I meant Zizians.info, see our initial discussion here: https://www.reddit.com/r/SneerClub/comments/qct67e/more_cultishness_zizianism_masked_attackers_in/ And the reveal it was fake here: https://www.reddit.com/r/SneerClub/comments/12dxwf4/nsfl_those_warnings_against_ziz_were_fake_and/) E2: now that I read further, another big nitpick is the author simply not mentioning the leaked emails of Scott at all (nor Scott's long history of anti-feminism). Not mentioning sneerclub while mocking us indirectly (and dgerard and Sandifer directly) (yes, I took the inferior writer bit to mean me as well ;) (I'm not serious btw, even if it fits)) is also a bit iffy, esp if he then uses what we said here often enough for his modified risk equation.
>the reveal it was fake here:
I didn't manage to follow that one. How do we know any of what's written there is true? Knowing none of the people involved, I can at most now say zizians.info could as much be fake as true.
I don't really know any of these people, but that .info site already had a few red flags, and spreading it as true just because you dislike Ziz or think she is nuts, while there is a counterargument to it being true, is iffy. Esp as there is a pattern in Rationalism of demonizing (trans) women who speak out against it. (I'm wary of us turning radical epistemologist when it suddenly comes out we might have been duped, and then going 'well, we cannot know either way which of these stories is true', after first spreading the sneerworthy information as true.)
A really interesting read, thank you!

No, he was never working in halfway decent AI research.

What happened is that he has not been successful at his silly “AI alignment” nonsense, so he’s worried that the rapid progress of AI means the acausal robot god will arrive before he’s made it safe and therefore we will all die.

iirc, Yud's progression went like this:
- try to make a stock trading bot -- the bot failed to work
- try to make a [new programming language](https://flarelang.sourceforge.net/prog-overview.html) -- the language failed to work
- try to [build a superintelligent AI in his basement that would re-write Earth's future light cone](http://web.archive.org/web/20020123022751/http://singinst.org/GISAI.html) -- the AI failed to work
- build [a cult echo chamber](https://www.lesswrong.com/highlights) and conduct ["AI alignment research"](http://intelligence.org), and get the cult to pay his salary while he does this -- the "AI alignment research" failed to work. The cult is unfortunately still chugging along today
- argue that since he failed at it, making AI in any way safe is impossible, go on a podcast tour lamenting this, while promoting stochastic terrorism against machine learning data centers -- we are here
I wonder what he'll do when AI fails to kill everyone
Somehow find a way to get Peter Thiel to give him more money?
Thiel was making fun of Eliezer for his despair and neo-Luddism, so that money source probably won’t give him any more. See here: https://www.reddit.com/r/SneerClub/comments/11vgtpr/peter_thiel_isnt_happy_about_miris_new_death_with/
> Thiel
Well, he is a bit of a VC, but for (non-democratic) social movements. Even if he makes fun of them, early donations by Thiel prob gave him a bit of influence at a steal, esp if you look at the people he influenced via Yud, esp as the Yuddites think themselves above politics.
yeah, Thiel basically moved in and bought the extropian movement in the late 2000s
Finally some real innovation from Thiel, not doing SV style incubator/VC stuff for companies but for social organizations. (a bit like how Putin does it for misinformation).
OMG his language was supposed to be written in XML? The only language I know of that does that is the downloaded version of Scratch programs.
You don't understand, XPath is the future!
Or when the current bubble bursts, likely when the AI outfits die screaming under a horde of copyright and other suits.
>I wonder what he'll do when AI fails to kill everyone I assume he'll fail to work
You know, apparently 15 years ago, citing enough better-researched and respected works was enough to sell me. Levels Of Organization In General Intelligence introduced me to a lot of the papers that encouraged me to go on and get my degree in compneuro. And wow, am I *wincing* on rereading it. There is nothing actually here that is not citation from better works. Ah, well. We grow, we learn, we do better. Or... some people do, anyway.
What should I read instead?
Well, 15 years ago I was 15, and reconstructing the path I took to arrive at some actually decent literature is not gonna apply now, or to an adult. So, what would you *like* to read, in terms of what kind of information are you attempting to intake from the reading? 'cuz I seriously doubt that I can make any better recommendations on the general philosophy of the topic than people who are here more regularly, but I *can* recommend some of the texts that have shaped my concepts of the details of its execution. Which are necessarily a niche subcomponent of a much vaster field, as is the way with all ivory-tower wonks like me.
I mean, a lot of the stuff we read and admired as teenagers looks superficial or misguided 15 years later! The only tragedy is getting stuck and never realizing "Feynman has some great stories but is not a model of how to treat women" or "my bold reporting is probably not going to break the corrupt local power structure in a few months while I also make new friends and acquire a love interest"

Currently there are more grift opportunities in doomerism than in hopium.

see, the thing about Yud is that I'm pretty sure he's completely sincere. he believes every word he says. he's a crank, not a charlatan.
I'm pretty sure that's worse somehow. 😅
Doomsday cultists and preachers, a tale as old as time... Curious...

[deleted]

Completely self-obsessed in a way that is also completely lacking in self awareness, with a savior complex? Totally, yeah. It just seemed like a complete 180 was unusual for someone like that. As has been pointed out elsewhere, it's actually not even a single degree of change. It's just "if I didn't make the AI, no one else could possibly do it right." Also a little bit of Nerd Armageddon to sit opposite the original Nerd Rapture. "I will not have to live with my failed ambitions if the world ends, so I really hope it does."
[deleted]
Yud's only real prior is that he's the main character who is going to save the world. That kind of tends to point him away from scenarios where that might not be the case. Another person might have grown out of it by his 40s, but he's built different.
Oh, yeah, I kind of miscommunicated there; I mean that it appears at *first glance* to be a complete reversal, but under a thin layer of paint it is the exact same course. Also I would strongly caution against calling him stupid; whatever else he is, he's not that, at all. He is an example of a very common problem that has beset all of human history, which is the assumption that being intelligent in a particular way is broadly applicable to *every* way, or that intelligence is a linear scale between *stupid* and *smart*. The sort of thing that ends up with the people at the Manhattan project designing nuclear weapons, on the understanding that they were smart enough to oversee their subsequent deployment. Or, more benignly, how Niels Bohr could get lost in a flat empty field because he was *really* dumb about directional extrapolation sometimes.
There is iirc also some personal tragedy involved, the early death of somebody close. So everybody involved deserves a bit of therapy (and a bit of money, I'm still looking for the nega-Thiel to fund us).
For thousands of years, over and over again, people have been confronted with the spectre of death, and come to the conclusion that they have figured out how to deal with the pain it inflicts on those left behind. He was born in a time in which he was able to turn to practical-seeming physical solutions, instead of philosophy. But it's the same thing. It hurts, and we want it to stop hurting, and we will tie ourselves into knots to make it do so.
This post provides some useful context about why certain posters act strange https://twitter.com/visakanv/status/1661218895435534336
Good lord. Sounds like Yud met a bunch of salespeople and mistook charisma for intelligence.
Or met a bunch of ~~marks~~ potential allies and investors, and realized that maybe it would be wise to flatter them.
as i said: my dude have you never heard of cocaine

No he was always like this.

The difference is that the AI he was planning for and researching now exists and his research didn’t help with any of it, and so he’s convinced that, since the whole purpose of his AI research was to steer future AI development toward safety, he’s failed and the end times have come.

I think the thing that really rankles him is that what we have now *isn't* the AI he was planning for and researching. He was envisioning a program carefully laid out by the greatest geniuses of our age, not quite in their own image but close enough; the ultimate triumph of the rational mind, perfected and finally purged of all its flaws. What we have now instead is an AI that works much more like real human brains do; by enormous amounts of stochastic action, and even more enormous amounts of being wrong a *whole freakin' bunch*. Current AI is doing what it's doing in a way that is almost a direct refutation of his rationalist ideas. Conscious self-reflection and decision making is not the culmination and greatest value of the human brain, it's a neat and completely incidental bonus feature that has grown on a system that works by shuffling signals until patterns shake out.
Yeah, his idea was a Bayesian superintelligence that could figure out general relativity by looking at three frames of an apple falling. And once the superintelligence existed, created and spearheaded by him, he would be relieved of the burden of being the most brilliant and important person on earth, because he would happily concede that to a superintelligence (created by him) if not to a human. He could finally be a normal guy and live a normal life letting his Friendly AI handle everything.

Leader of a doomsday cult found his 2012, comet or senator’s visit.

> but actually working in some halfway decent AI dev research sometimes

Yudkowsky did some actual math, but at a paralysingly slow pace. Mostly he recruited better mathematicians, who also worked at a paralysingly slow pace.

Honestly, in my experience with genuine calculative heavyweights, of the sort where you can smell their neurons sizzling before they enter a room: having someone around who picks them out, recruits them to specific tasks, and keeps them focused on those tasks is usually the only reason anything significant gets done. I'd call it just as important a job as the calculation itself. Granted, I am picking through the endless chaff for tiny germs of worth from his projects, but at least it was nonzero for a *little bit*. I guess.
oh yeah, nothing wrong with math, that's fine. but even then MIRI's output is paralysingly slow. What the hell are they *doing* in there all decade.

He’s extremely annoying and not that smart