r/SneerClub archives

I’m just trying to understand Rationalism & counter-arguments. I stumbled across it listening to a snippet of Lex Fridman interviewing Aella and then went down a weird rabbit hole… found out that some sort of Rationalist commune is in my hometown of Berkeley (*how* did I never hear of this?)… got my own weird instinctive “whoa seems like a cult” feeling… googled Jaron Lanier/ Yudkowsky and, well, here I am.

I’m trying to wrap my head around what may just be very wordy gobbledegook (is that all it is?), and I’m getting a sort of Objectivist flavor mixed with some AI worship kind of thing. Is that right?

Help me out here: I’d like to sneer with you, but I don’t quite get what’s going on.

Sort of. It’s part philosophy, part cult, part loose social group, centered on techies and academics in the SF Bay Area but with affiliated people around the world.

Rationalism is generally objectivist/libertarian, but has a bunch of specific, weird shibboleths, like hyper-focusing on the existential risk of AI, certain kinds of long-term social harm, polygamy, nootropics, “self-hacking,” etc.

There’s a dedicated subreddit for sneering at them for a couple of reasons: (1) they fly too far under the radar for most media outlets to care, and (2) they are that special mix of condescending and incorrect that no thinking person’s hackles can fail to be raised.

For further reading I recommend this

Hey as an ex-Christian I have liked FundieSnark for quite a while, so I'm all good with the sneering concept. Their shibboleths are really something: kind of Scientology-esque honestly. Thanks for the link, I'll check it out.
> Scientology-esque

Rationalists even have their own "very dangerous thing which should not be discussed publicly" *à la* Xenu. It is called "Roko's Basilisk" and is a benevolent AI so powerful it has godlike abilities. Knowledge of it is an info hazard because if you know about it and don't do everything you can to bring it into reality it will torture a simulation of you for eternity once it exists. I get the sense that it is taken a lot less seriously than Xenu though, for whatever that's worth.
They also have weird obsessions, like the word "shibboleth". They pretend to be non-religious but also wrap themselves in tons of religious stuff; a top rationalist went full Catholic, for example. They do tend to be IQ maximalists.
Don't forget about the unnecessarily large words and run-on sentences.
This seems an odd take. Most rationalists explicitly reject both objectivism and libertarianism. The first because it’s an intellectually dishonest non-philosophy, and the second because it’s a race to the bottom with horrendous coordination problems. I’m pretty sure Yudkowsky has pieces that disavow both.
Dear rationalists, if you reject libertarianism why does Aella talk or post a poll about fucking children every week 😎
I don't see how anyone can miss the parallels between objectivism and rationalism. Even their names have an uncanny similarity: it's like they're both trying to achieve an otherwise impossible state of intellectual purity by summoning it into existence with nominative determinism. For their own sake the rationalists would *have* to explicitly reject objectivism, because otherwise it's too obvious that what they're doing is just a rebranded version of it for the digital age.
While I'm not sure about the whole objectivist/libertarian thing (Californian Ideology might be a better fit, though that also includes parts of the previous two ideologies), I did want to remark on this:

> I’m pretty sure Yudkowsky has pieces that disavow both.

Scott also has an anti-NRx article. How did that work out? (And the anti-PUA article Scott posted didn't get rid of those ideas in the community either, and might have introduced a lot of people to that neonazi.) So this isn't a great defence. (Of course I'm comparing Yud to Scott here, and at least Yud has tried to get rid of a few bad groups from LW; Scott never has, and instead banned marxbro for pointing out that people misquoted Marx.)
> Scott also has an anti-NRx article. How did that work out?

When I read that article years ago, the neoreactionary ideas he was 'objectively discussing' seemed so obviously wrong, even when interpreted in the most generous light possible, that it never crossed my mind he secretly *agreed* with them.
I think at the time he didn't agree with them, but he has since found several he does agree with (like the whole race/IQ thing), and he might have changed his mind on a few others. See how he now sort of disavows the article.
So it turns out he sent That Email around the time of the anti-NRx article.
Ah right so it is worse than I thought. Thanks for the correction.
They explicitly reject it, then implicitly do it anyway. These fuckers self-label as "left-liberal" while embracing scientific racism via neoreaction. Come the fuck on.
Scott Alexander's frequently imagined utopia is called “The Archipelago”. The idea that you can somehow easily and handily disaggregate yourself from anyone you do not wish to interact with, that you can simply and freely enter the community that matches your wants, and that somehow these borders are frictionless, is quite libertarian in concept. Of course, when others wish to disaggregate from him, and explain why, he screams bloody murder.

> Lex Fridman interviewing Aella

oh my fucking god

She's a little like Paris Hilton 2.0.
She's also a pretty big transphobe, and like a lot of these people dresses up her bigotry in "scientific" sounding language.
I had never heard of her: kinda like Lex, but his voice puts me to sleep.
You're better off. Her takes are of such temperature that for a while SneerClub had to have an Aella moratorium, cos she was just coming out with banger after banger.
Truly one of the most severe cases of brainworms I've seen.

Alright, I’ll just whip this out in a totally unhinged and unstructured manner because I don’t know how to begin to organize my thoughts on this topic.

It starts with Overcoming Bias, a blog co-written by Eliezer Yudkowsky, Robin Hanson, and Nick Bostrom. That’s sort of the seed of this whole thing—cognitive biases lead us astray, right? Why not work on counteracting them with reason and science? This is, at a high level, what Rationalism is all about. Let’s look at this trio for a moment.

“If you’ve ever heard of George Mason University economist Robin Hanson, there’s a good chance it was because he wrote something creepy,” wrote Slate reporter Jordan Weissmann in 2018. People in the Rationalist-sphere often dabble in red pill rhetoric, scientific racism, and general neoreactionary hot takes. It’s a seedy underbelly.

Nick Bostrom wrote the book Superintelligence, all about how humanity might get wiped out by AI. He’s an Oxford professor of philosophy, so he’s done quite well for himself. This idea about our future AI overlords annihilating us is the BIG idea of Rationalism.

And then there’s … Yudkowsky. He’s the author of a piece of Harry Potter fanfiction longer than War and Peace. He’s also the de facto leader of the Rationalist movement: he started the website LessWrong as a gathering place for newcomers and wrote the holy scriptures of Rationalism, known simply as The Sequences. Oh, and that work of fanfiction? Also a tool of recruitment. It’s all about Harry Potter being really rational. Whatever floats your goat, I guess. He’s a co-founder of MIRI (the Machine Intelligence Research Institute), which launched with funding from none other than conservative libertarian Peter Thiel. Yudkowsky has assumed the role, more or less, of humanity’s “last hope” in the fight against those future AI overlords I mentioned earlier. That’s his deal. Does he know a lot about how AI works? Not really. At least not on a technical level. But this doesn’t seem to bother him. He wrote the recent TIME op-ed about how we should be prepared to literally nuke server farms; a Fox News reporter even brought it up in a White House press conference.

There’s a term that’s been tossed around a bit: AI safety. This is the term used by the doomsday cultists (because come on, if your ideology is based on an actual doomsday scenario, that’s just an honest description of what it’s about); AI ethics is the term used by people who are more concerned about the misuse of AI by humans. These two camps hate each other’s guts, basically. The famous letter by the Future of Life Institute about a six-month moratorium on the development of large language models? That’s from the AI safety bunch.

Elon Musk met musician Grimes after he thought of a pun based on LessWrong culture (Roko’s basilisk/rococo’s basilisk), and learned Grimes had already thought of it. Oh, and Roko’s basilisk is literally the Rationalist version of Satan. Basically, it’s like this: given some (insane) assumptions, you should conclude that the world is a simulation run by an AI that will torture you forever if you don’t speed up the coming of superintelligence. So, yeah. Hell and damnation and all that. There’s a lot of this shit—it comes with the culty territory. The simulation hypothesis itself is a contemporary version of gnosticism, for instance, and the eschatological nature of the Rationalist movement speaks for itself.

Alrighty, still with me? Good. Let’s digress, for a moment.

Ever heard of effective altruism? It’s basically the same culture. They overlap. Effective altruism is all about making the world a better place by using your big smart brains instead of your dumb heart. And then there’s longtermism—the idea (heavily promoted by Musk, and others) that the long-term progress of humanity is the only thing that matters. Climate change is unlikely to wipe out all of us, they say, while overlord AI demi-gods will probably do it because Yudkowsky said so and he’s real smart. Therefore: ignore climate change! It’s fine. No need to respond to it. Who cares? Oh, this sounds really convenient for conservatives? Oh, and conservatives are funding these movements from behind the scenes? Uhh!

William MacAskill, also stationed at Oxford, has been the biggest promoter of effective altruism and longtermism. Remember that whole FTX debacle? Sam Bankman-Fried committing an insane amount of fraud, literally historic shit? Yeah, MacAskill was the dude who sent Bankman-Fried down that path. “Go make a shit ton of money, however you can! It’s all ethical because this is effective altruism!” That’s another aspect of the EA/longtermist movement: fucking people over to make lots of money is morally defensible because you are smarter than other people so you know better than them how to make the world a better place with your money! Oh, and his business partner Caroline Ellison? She formerly ran a Tumblr blog dedicated to … Yudkowsky’s ideas. Especially the Harry Potter stuff. Yeah.

There’s also Scott Siskind, of the (former) blog Slate Star Codex and the (current) Astral Codex Ten. He’s connected to the overall Rationalist movement, but he’s not as, well, batshit crazy as the rest of them. He’s the gateway drug, I guess.

So, let’s sum up. The Rationalists started with a couple of guys thinking, “Hey, let’s overcome our cognitive biases!” Then Yudkowsky became obsessed with the idea of superintelligence, leading Bostrom to write the book on the topic, and Yudkowsky started LessWrong as a cultish hub of activity. There’s a seedy underbelly of neoreactionaries in the community, and Rationalist projects tend to get funded by conservatives who appreciate their non-SJW takes on various subjects.

That’s a brief overview, I guess, though the rabbit hole is deep indeed.

I think the term "AI safety" is actually used among some serious researchers; the one they don't use is "alignment".
The term 'alignment' is common enough among researchers; at least those working for private companies. It's true that 'AI safety' is used outside the Rationalist community, but it also serves as an ideological dividing line. If you believe in the AI doomsday scenario (death by paperclips, etc), you probably lean closer to the culture of the Rationalist movement than the Social Justice movement (or whatever you'd like to call it). There's a [Vox piece](https://www.vox.com/future-perfect/2022/8/10/23298108/ai-dangers-ethics-alignment-present-future-risk) on the schism between the AI Safety and the AI Ethics 'factions'.
I've never seen "alignment" in AI discourse used as anything other than a dog whistle to indicate that the speaker is also one of our very good friends.
Very good! Ellison and Siskind are great hooks to mention the scientific racism and neoreaction; that's about all that's missing.
Great writeup! I just got done catching up to speed on all this stuff since like, early 2021 and this helped cement it in my brain again.

> There's also Scott Siskind . . . He's connected to the overall Rationalist movement, but he's not as, well, batshit crazy as the rest of them. He's the gateway drug, I guess.

That's... depressing. I thought he was way worse than the typical Rationalist.
> I just got done catching up to speed on all this stuff since like, early 2021 and this helped cement it in my brain again.

For that I can only apologize.

> That's... depressing. I thought he was way worse than the typical Rationalist.

He's fairly tame as far as the "thought leaders" of this movement go. At least that's been my experience.
Siskind's less concerned with the AI doomsday scenarios, and instead thinks the pressing issues of our time are "rationally" investigating whether the Nazis had any good beliefs, scientific racism, whether there are too many women in STEM these days, and eugenics.

What we sneer at here as Rationalism doesn't have that much to do with rationalism in many ways. Looking at the RationalWiki page on LessWrong might help. (RationalWiki is neither really rational nor Rationalist, btw; it's more of a skeptics' wiki.)

I could be remembering this completely wrong, but I thought rationalism started off as, like, a branch of the skeptic movement, and wasn't originally about the whole Bayesian reasoning schtick until Big Yud and LessWrong came on the scene. But then again I barely followed it back in the day, so I could be completely misremembering.
Think that is a different group. By big-R Rationalism we just mean the LessWrong people and the Overcoming Bias people. Not sure how skeptic they ever were.
The Cowpox of Doubt pretty much sums up rationalist views of skeptics.

I think it’s helpful to view the rationalist stuff within a broader social, political, historical, and ideological context. It’s far too crazy to understand in a vacuum.

Don't forget the, uh, Russian Orthodox mysticism?
Yud is the katechon.

Wait, what does Lanier have to do with any of that?

Nothing really: that's just how I google stuff. I like what Lanier has to say, and apparently he used to live in Berkeley. And apparently he had a conversation with Yudkowsky in 2008 which was linked to on Reddit, and someone in that thread linked to this subreddit. Sorry, shoulda been more specific about the link there.
Please don’t lump Jaron with EY and the other rats!
Not at all: he seems more *rational* lol. Here's the conversation; I've been listening to it. https://m.youtube.com/watch?v=Ff15lbI1V9M
That's a good one. Jaron is anti-EY in the sense that he is "anti-AI circle jerk." He sees computing as something that should help humans, not as an endeavor to replace or transcend humans. In Dawn of The New Everything, he gives several definitions of VR and one is that VR is the opposite of AI. Page 278:

> Forty-sixth VR Definition: VR = −AI (VR is the inverse of AI).
He's pretty much the anti-pope... his personal philosophy is almost the exact opposite of EY's... also, he went to college early, instead of not at all.

You can’t spell gobbledygook without EY

I've always spelled it gobbledegook. Always isn't very often though.
I wanted the joke, mostly.

Rationalism is about rationalisation, OP, not rationality.

How does Lex even fit into the mushy rotting Rationalism jigsaw puzzle? I haven’t really watched him so I’m wondering if I should stay far away or if he’s not that bad.

He interviews almost every one of them. Is he a rationalist too? Or like an EA? Or something else, or neither? I never hear about him

I'm curious about this too. I don't think he is a rationalist, but he seems to be interviewing them a bit.
To me Lex is like a soft-talking smiley face masking the shoggoth. He doesn't just platform the rationalists, but also the broader [Californian Ideology](https://en.wikipedia.org/wiki/The_Californian_Ideology)/[TESCREAL](https://twitter.com/xriskology/status/1635313838508883968)/[Agents of Doom](https://www.bbc.com/future/article/20211014-agents-of-doom-who-is-hastening-the-apocalypse-and-why) constellation of actors/ideologies.
Yeah. Tbf his [interview with Coffeezilla](https://www.youtube.com/watch?v=hi9Rf0oLdHk) was pretty good, but in retrospect it seems that had more to do with Coffeezilla than with Lex. His interview style is basically "let the guest talk about stuff" which depends heavily on the guest's personality and basic beliefs to work.
Coffeezilla's interviews and guest appearances are quite good; it turns out he can do this stuff in real time.

Rationalism on its own isn’t a problem, but when you mix it into the tech/business/startup world, these people like to play god, using philosophy and intellectualism as its gateway: kind of a “we are all contrarians, no one understands us, so it’s up to us to save the world, and it’s cool to break things in the name of ‘innovation’” mindset, a self-perpetuating echo chamber that doesn’t get a lot of pushback. Ideas like longtermism and effective altruism are a good place to start looking, e.g. https://youtu.be/B_M64BSzcRY

> Rationalism on its own isn’t a problem

I think it actually is the original sin from which all the other dysfunctions of rationalism and EA are born. The idea that you can rigidly systematize methods of drawing conclusions about the world and abstract them away from emotional considerations is counterproductive and contrary to scientific evidence. If I had to choose one unifying feature of rationalists' dysfunctions, I would say that it's their universal failure to understand (or even be aware of) their own emotions.
Also, rationalism in its original form was opposed to empiricism, which held that knowledge came from sensory experience (observable facts about the world). The rationalists we sneer at will make some nods to empiricism, but they still believe you can derive most things through reason alone (with perhaps a small amount of information as a starting point). Descartes showed where that takes you.
The Rationalist movement in philosophy and the modern Rationalists have almost nothing to do with each other. The fact that they have the same name has way more to do with our Rationalists' lack of education than any similarities in philosophy or outlook. Also, I’m not sure I like your Descartes sneer; it’s not like Berkeley and Locke didn’t have bad hidden premises either.
> The Rationalist movement in philosophy and the modern Rationalists have almost nothing to do with each other. The fact that they have the same name has way more to do with our Rationalists' lack of education than any similarities in philosophy or outlook.

While that's largely true, I see similarities in the belief rationalists have that AI can flawlessly simulate past humans through historical records and could derive the theory of relativity from a few frames of an apple falling in front of some grass. They're admitting the need for a little bit of starting data, but still believe that if you're smart enough you can derive a complete model of how the universe works from almost nothing.

> Also, I’m not sure I like your Descartes sneer; it’s not like Berkeley and Locke didn’t have bad hidden premises either.

My point is just that the history of attempts to derive truth through reason, without actually going out and looking at the world, doesn't fill one with confidence.
I had no idea Sabine made a video on this too. Fantastic
Yeah, love her educated, well-researched snark. Her deadpan congratulation of Bostrom for making a career of multiplying powers of ten was gold. Her other videos are fantastic and so well researched; we need more scientific influencers like her.

Damn, Lex interviewed Aella. Wish I didn't know that.

> googled Jaron Lanier/ Yudkowsky

what prompted you to google that combination?

Just like my brain lol I liked what Lanier had to say in the Social Dilemma & have checked him out a bit here and there in relation to his views on tech & stuff. Idk why but his point of view really resonates with me. Oh and ALSO I knew that he had lived in Berkeley for quite a while, so I figured he might be aware of the huge community that is apparently there which I had never heard of.
I first encountered his work like 2016ish and very much felt the same way. I think you'd love his books. My favorite is the one I mentioned earlier, Dawn of The New Everything. That being said, I also enjoyed You Are Not a Gadget and Who Owns The Future. The one about deleting your social media accounts is good for what it is, but not amazing. There's r/JaronLanier that I wish more people would post in, but given that I'd imagine most of his fans try to avoid social media, I'm not surprised that they don't.

Some call it the Dark Arts