I’m just trying to understand Rationalism & counter-arguments. I stumbled across it listening to a snippet of Lex Fridman interviewing Aella and then went down a weird rabbit hole… found out that some sort of Rationalist commune is in my hometown of Berkeley (*how* did I never hear of this?)… got my own weird instinctive “whoa seems like a cult” feeling… googled Jaron Lanier/Yudkowsky and, well, here I am.
I’m trying to wrap my head around what may just be very wordy gobbledegook (is that all it is?), and I’m getting a sort of Objectivist flavor mixed with some AI worship kind of thing. Is that right?
Help me out here: I’d like to sneer with you, but I don’t quite get what’s going on.
Sort of. It’s part philosophy, part cult, part loose social group centered on techies and academics in the SF Bay Area, with affiliated people around the world.
Rationalism is generally objectivist/libertarian, but has a bunch of specific, weird shibboleths like hyper-focusing on the existential risk of AI, certain kinds of long-term social harm, polyamory, nootropics, “self-hacking,” etc.
There’s a subreddit dedicated to sneering at them for a couple of reasons: (1) they fly too far under the radar for most media outlets to care, and (2) they are that special mix of condescending and incorrect that raises any thinking person’s hackles.
For further reading I recommend this
oh my fucking god
Alright, I’ll just whip this out in a totally unhinged and unstructured manner because I don’t know how to begin to organize my thoughts on this topic.
It starts with Overcoming Bias, a blog co-written by Eliezer Yudkowsky, Robin Hanson, and Nick Bostrom. That’s sort of the seed of this whole thing—cognitive biases lead us astray, right? Why not work on counteracting them with reason and science? This is, at a high level, what Rationalism is all about. Let’s look at this trio for a moment.
“If you’ve ever heard of George Mason University economist Robin Hanson, there’s a good chance it was because he wrote something creepy,” wrote Slate reporter Jordan Weissmann in 2018. People in the Rationalist-sphere often dabble in red pill rhetoric, scientific racism, and general neoreactionary hot takes. It’s a seedy underbelly.
Nick Bostrom wrote the book Superintelligence, all about how humanity might get wiped out by AI. He’s an Oxford professor of philosophy, so he’s done quite well for himself. This idea about our future AI overlords annihilating us is the BIG idea of Rationalism.
And then there’s … Yudkowsky. He’s the author of a piece of Harry Potter fanfiction longer than War and Peace. He’s also the de facto leader of the Rationalist movement—he started the website LessWrong as a gathering place for newcomers and wrote the holy scriptures of Rationalism, known simply as The Sequences. Oh, and that work of fanfiction? Also a tool of recruitment. It’s all about Harry Potter being really rational. Whatever floats your goat, I guess. He’s a co-founder of MIRI (the Machine Intelligence Research Institute), which has received major funding from none other than conservative libertarian Peter Thiel. Yudkowsky has assumed the role, more or less, of humanity’s “last hope” in the fight against those future AI overlords I mentioned earlier. That’s his deal. Does he know a lot about how AI works? Not really. At least not on a technical level. But this doesn’t seem to bother him. He wrote the recent TIME op-ed about how we should be prepared to literally destroy rogue data centers by airstrike; a Fox News reporter even brought it up in a White House press conference.
There’s a term that’s been tossed around a bit: AI safety. This is the term used by the doomsday cultists (because come on, if your ideology is based on an actual doomsday scenario, that’s just an honest description of what it’s about); AI ethics is the term used by people who are more concerned about the misuse of AI by humans. These two camps hate each other’s guts, basically. The famous letter by the Future of Life Institute calling for a six-month pause on training AI systems more powerful than GPT-4? That’s from the AI safety bunch.
Elon Musk met musician Grimes after he thought of a pun based on LessWrong culture (Roko’s basilisk/rococo’s basilisk), and learned Grimes had already thought of it. Oh, and Roko’s basilisk is literally the Rationalist version of Satan. Basically, it’s like this: given some (insane) assumptions about decision theory, a future superintelligent AI would have an incentive to torture anyone (or simulated copies of anyone) who knew it could exist but didn’t help bring it about, so you’d better speed up the coming of superintelligence. So, yeah. Hell and damnation and all that. There’s a lot of this shit—it comes with the culty territory. The simulation hypothesis itself is a contemporary version of gnosticism, for instance, and the eschatological nature of the Rationalist movement speaks for itself.
Alrighty, still with me? Good. Let’s digress, for a moment.
Ever heard of effective altruism? It’s basically the same culture. They overlap. Effective altruism is all about making the world a better place by using your big smart brains instead of your dumb heart. And then there’s longtermism—the idea (heavily promoted by Musk and others) that the long-term progress of humanity is the only thing that matters. Climate change is unlikely to wipe out all of us, they say, while overlord AI demi-gods will probably do it because Yudkowsky said so and he’s real smart. Therefore: ignore climate change! It’s fine. No need to respond to it. Who cares? Oh, this sounds really convenient for conservatives? Oh, and conservatives are funding these movements from behind the scenes? Uhh!
William MacAskill, also stationed at Oxford, has been the biggest promoter of effective altruism and longtermism. Remember that whole FTX debacle? Sam Bankman-Fried committing an insane amount of fraud, literally historic shit? Yeah, MacAskill was the dude who sent Bankman-Fried down that path. “Go make a shit ton of money, however you can! It’s all ethical because this is effective altruism!” That’s another aspect of the EA/longtermist movement: fucking people over to make lots of money is morally defensible because you are smarter than other people, so you know better than they do how to make the world a better place with your money! Oh, and Bankman-Fried’s business partner Caroline Ellison? She formerly ran a Tumblr blog dedicated to … Yudkowsky’s ideas. Especially the Harry Potter stuff. Yeah.
There’s also Scott Siskind, who wrote the (now-retired) blog Slate Star Codex and currently writes Astral Codex Ten. He’s connected to the overall Rationalist movement, but he’s not as, well, batshit crazy as the rest of them. He’s the gateway drug, I guess.
So, let’s sum up. The Rationalists started with a couple of guys thinking, “Hey, let’s overcome our cognitive biases!” Then Yudkowsky became obsessed with the idea of superintelligence, leading Bostrom to write the book on the topic, and started LessWrong as a cultish hub of activity. There’s a seedy underbelly of neoreactionaries in the community, and Rationalist projects tend to get funded by conservatives who appreciate their non-SJW takes on various subjects.
That’s a brief overview, I guess, though the rabbit hole is deep indeed.
What we sneer at here as Rationalism doesn’t have much to do with philosophical rationalism. Looking at the RationalWiki page on LessWrong might help. (RationalWiki is neither really rational nor Rationalist, btw; it’s more of a skeptical wiki.)
I think it’s helpful to view the rationalist stuff within a broader social, political, historical, and ideological context. It’s far too crazy to understand in a vacuum.
Wait, what does Lanier have to do with any of that?
You can’t spell gobbledygook without EY
Rationalism is about rationalisation, OP, not rationality.
How does Lex even fit into the mushy rotting Rationalism jigsaw puzzle? I haven’t really watched him so I’m wondering if I should stay far away or if he’s not that bad.
He interviews almost every one of them. Is he a rationalist too? Or like an EA? Or something else, or neither? I never hear about him
Rationalism on its own isn’t a problem, but when you mix it into the tech/business/startup world, these people like to play god, using philosophy and intellectualism as a gateway. It’s a kind of “we are all different contrarians, no one understands us, so it’s up to us to save the world, and it’s cool to break things in the name of ‘innovation’” attitude: a self-perpetuating echo chamber that doesn’t get a lot of pushback. Longtermism and effective altruism are ideas to look into to start, e.g. https://youtu.be/B_M64BSzcRY
Damn, Lex interviewed Aella. Wish I didn’t know that.
what prompted you to google that combination?
Some call it the Dark Arts