Hello. I've been checking the sub for quite a while now, and decided that I could use some help from you. Made this account just to post this.
I'm a former rationalist dealing with the leftovers of this ideology that still occupy my mind in a very damaging way. I became attracted to this whole thing some years ago, and for a long time it captivated me. That was until I started to notice the abusive, cultish (and of course bigoted) tendencies of the whole sphere, tendencies that have been described in detail by other ex-rationalists in this same sub.
I have some mental health issues that (among other things) cause me to experience an extremely tiresome amount of rumination and intrusive thoughts. And as you can guess, rationalist ideas and rumination are not a good combination.
Even though I left rationalism for good, the ideas I learned during my time in those groups still cause me anxiety and general mental discomfort. The possibility of being a simulation, the infohazards, the implications of the MWI and a multiverse, etc. all still manage to show up in my head and prevent me from having a healthy relationship with my psychological issues.
But perhaps the most damaging thing is the fact that the feelings of inferiority fostered by the ideology are hard to get over. I think it's the most damaging part because it's the reason why the other ideas are taken seriously at all. As other people have pointed out, the Sequences build a case for disregarding scientists and experts in different fields and instead putting your trust in the power of Bayes as an entire epistemology. After all, it's hard to argue with math, right?
I think an aspect that has been a bit ignored is not the things the rationalist leaders say, but the things they don't say. Yudkowsky claims to know several "secrets of the universe", and if you mix this with the devotion to Bayes he builds in you, you get a strange aura around him and other rationalists, a feeling that they know things you don't. And they used math to get that knowledge.
I know this post is a bit long and lacks specific questions, but I feel like I needed to vent these feelings a bit. But like the title says, what advice would you give in order to "break the spell" of rationalist ideas and leaders? I'm already seeing a psychologist for my general mental health issues, but I think it's helpful to discuss this with people familiar with the whole sphere.
EDIT: Thanks to everyone who took the time to reply and offer some help, it means a lot to me. I will check out the stuff you recommended, it can be a meaningful contribution to my overall recovery from these experiences.
Funny, because when I calculate P(H|E) = P(E|H) * P(H) / P(E) with a hypothesis like "all the experts are wrong", I end up coming out of such calculations with more trust in experts and less trust in myself. The amount of evidence I need to accumulate in order to justify belief in that hypothesis is immense, and rightly so.
The thing rationalists miss out on is that Bayes’s theorem is only as good as the numbers you put into it. Garbage in, garbage out. And so the confidence that rationalists have in their own rationality, by doing very precise calculations using very unexamined assumptions, is the source of all comedic value that fuels this sub.
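To make that garbage-in-garbage-out point concrete, here's a toy Python sketch with invented numbers (the hypothesis and all the probabilities are mine, purely for illustration): even when every new piece of evidence genuinely favors "the experts are all wrong", a sensibly low prior means the posterior climbs slowly.

```python
# A toy Bayes update with invented numbers, purely for illustration.
# H = "all the experts in some field are wrong".

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H) P(H) / P(E), with P(E) expanded by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

p = 0.01  # experts do all get things wrong sometimes, but rarely
for i in range(5):
    # each piece of evidence is assumed 3x as likely if H is true (0.6 vs 0.2)
    p = posterior(p, 0.6, 0.2)
    print(f"after update {i + 1}: P(H|E) ~= {p:.3f}")
# Posterior climbs roughly 0.03, 0.08, 0.21, 0.45, 0.71: even five pieces of
# genuinely diagnostic evidence only just tip the balance.
```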
Something that helps me when I get too obsessed with a philosopher or a philosophical idea is to normalize them. The point of the below thoughts is to embrace the idea that no one has the secrets to the universe. The main rationalist ideas occupy interesting (or not…) places in ongoing and totally normal academic debates.
When rationalists present their ideas they are imbued with a heavy amount of creepy-ass rhetoric. Yud presents the Sequences as the key to rationality and the universe. Not only is this way overselling, he makes totally undefended and questionable assumptions at practically every step (although I understand he deleted a lot of the original sequences? I haven't looked at them in years). SSC can't write an argument to save his life that isn't obscured by a million random-ass thoughts.
Normalizing rationalist ideas means grappling with them as just normal positions in a complicated academic dialectic outside of any bombastic rhetoric and writing. And to me this means four things.
First: Despite what many rationalists will say, there are lots of smart people who defend opposing ideas very well. So part of normalizing the rationalist sphere is to seriously explore serious thinkers who totally reject core rationalist ideas. Maybe you'll come away realizing that the rationalist ideas you found convincing were just built off of empty rhetoric. For example, Carlo Rovelli rejects the MWI and defends his own account. And you can even watch Rovelli and David Wallace (a defender of the MWI) debate the different theories on YouTube, very reasonably and without any "secrets of the universe" vibe. If you're really bothered by their utilitarianism, reading some Christine Korsgaard might help. If the AGI stuff bothers you, lots of philosophers reject the possibility of true artificial intelligence of any sort. Some think intelligence is irreducibly biological (Searle) while others think the whole concept of AGI is based on two illicit moves: anthropomorphizing computers and computerizing the mind (PMS Hacker, Evan Thompson, Hubert Dreyfus, me).
Second: One of the creepy aspects of the rationalist sphere is the way a collection of different ideas are billed as a complete package and as the only "rational" package to buy. Part of normalizing the rationalist package is to recognize that the "glue" that the rationalists claim holds their ideas together can be pried apart. There are people whose work I like who defend some versions of Bayesian epistemology and who have nothing to do with the rationalist sphere. Liam Kofi Bright is basically an anti-rationalist and a defender of Bayesianism (I think he has also done work on trusting experts).
Third: A direct corollary is that part of the rationalist bullshit is the creepy vibes and rhetoric. Reading philosophers who agree with the rationalists on some idea but reject the rationalist cult may take away some of the culty psychological attraction those ideas have. Bayesian epistemology is a hotly contested idea in philosophy that needs lots of defending!
Fourth: The rationalists have massive holes in their worldview when it comes to really important parts of human experience, specifically race and gender. So I would recommend reading feminist and race theorists. Currently I'm reading two feminist books, Think Like a Feminist by Carol Hay and We Are Not Born Submissive by Manon Garcia. With regard to the philosophy of race, I recommend The Racial Contract by Charles Mills.
The goals: I think ultimately my goal here is for you to really figure out what arguments you find genuinely convincing in the rationalist discourse and what arguments you were just duped into finding convincing because of all the bullshit rhetoric. It’s possible that you’ll come away from all this still thinking that some rationalist ideas are correct. Well that’s fine because the other goal is to show you that you can believe some rationalist ideas without being a kook. Although I think only kooks actually believe in the simulationist hypothesis.
I mean, let’s be real, my understanding is that Yud’s only real claim to originality in the academic community is the evil AI stuff. It’s an interesting idea, but in the grand scheme of interesting philosophical ideas, it strikes me as a little on the paltry side.
Ultimately, I think the biggest problem with rationalism is that it's an insular community masquerading as an open one. So seriously reading anything that challenges their core commitments or addresses their big blind spots might help get them out of your head.
You might consider reading “Cultish” by Amanda Montell:
https://www.amazon.com/Cultish-Language-Fanaticism-Amanda-Montell/dp/0062993151
This book documents the type of abuse of language that you’ve experienced by way of Rationalism. From my layman’s perspective, Rationalism is well on the road to developing the sort of thought-terminating cliches and overloaded language that allows cults to close off members from any external influence.
I doubt this alone will help, but there’s a world where you can still “trust in the power of Bayes” without being a rationalist.
First, note that Bayes' theorem is (roughly speaking) a mathematical way to quantify how one should learn from experience. You have some prior preconceived belief, see some new evidence, and want to adjust your preconceived belief. No issues here.
The issue is in the word quantify. For the record, I'm not saying Bayes' theorem shouldn't use numbers! Just that in most "casual" appeals to it, the numbers are highly suspect, and they should be the thing that is interrogated, not the use of a mathematical theorem.
This is perhaps best seen as a broader issue with utilitarianism, one at the core of most truly nonsensical utilitarian arguments (although other things go wrong as well). If I say that enslaving Ben Shapiro would give me 30 expected utilons, and (since I would make him do incredibly funny things) it would also give each person on Twitter 0.1 expected utilons, and would cost Ben only 1,000,000 utilons, then by linearity of expectation the gains from a few hundred million Twitter users swamp the cost, and we should bring slavery back.
Of course, I made up those numbers, and came to some conclusion as a result. If I made up different numbers, I could have come to a different conclusion via the same process. The step of making up numbers allowed me to smuggle in my opinion into the “simple mathematical calculation”, and gives the opinion a veneer of objectivity when it is nonsensical at face value.
You get precisely the same issue with Bayes' theorem. In casual settings it is used to justify quantifying opinions that are made up, and one can change the final number by changing the initial ballpark figures one throws into it. So it really becomes a way (rhetorically at least) to "beef up" a qualitative opinion (which one smuggles in via the ballpark numbers) into a quantitative one, which people tend to view as more compelling.
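Here's a tiny sketch of what I mean, with deliberately made-up inputs: the formula is identical both times, and the only thing that changes the conclusion is which ballpark numbers I felt like feeding it.

```python
# The "ballpark numbers" problem: the same Bayes formula, fed two equally
# hand-wavy sets of inputs, supports opposite conclusions.
# All numbers below are invented, which is exactly the point.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# "Feels like a 30% prior, and the evidence fits H pretty well."
print(posterior(0.30, 0.8, 0.3))   # ~0.53 -> "H is more likely than not!"

# "Feels like a 10% prior, and the evidence is only weakly diagnostic."
print(posterior(0.10, 0.5, 0.4))   # ~0.12 -> "H is still quite unlikely."
```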
For the record there are (formally) fucked up things with Bayes' theorem as well (anyone who supports the idea of improper priors is highly suspect), but I would personally mostly view it as a convenient trick to do the "qualitative -> quantitative" conversion people love to do with it.
“The” MWI is one of those ideas that you can afford to not take seriously, even if you get into quantum physics. I put “the” in quotes because there isn’t just one version, but at least half a dozen mutually contradictory variants. Even the people who do take it seriously can’t agree with each other about how to make a well-formulated, actually scientific theory out of it. And, of course, none of them have persuaded the people who prefer any of the other interpretations. Once you peel back the technical and historical inaccuracies, Yudkowsky’s proclamations on the subject turn out to be empty smugness more than anything else.
Lots of good answers here but I'll try to jam one into a little gap that others haven't touched on as much: find better substitutes for the positive things that Rationalism did seem to offer you. You're curious and you want to improve your own knowledge and reasoning - that's great, so don't give it up! Just find more fulfilling sources of input. For example, do you want to be more rational? Read Kahneman, whose work will show you the limits of your own intuition (but also the limits of our shared knowledge of how to overcome them). Want to understand the interpretations of quantum mechanics? I recently enjoyed Philip Ball's Beyond Weird, which actually saves Everett's interpretation for last: it's a weird outlier compared with the other ones, and you first have to understand which quantum-mechanical problems it actually resolves, instead of just enjoying the implications for philosophy (and fiction).
Basically I’m saying you fundamentally have to give up on their “get smart quick” scheme of developing an internet-argument-ready level of knowledge about abstruse issues by reading rambling blog posts written by laypeople in their free time, and read actual books (or at least reputable and professionally edited and fact-checked magazines). For a lot of people that would involve a significant rearrangement of their lifestyle, but nowadays there are a lot of different ways you can fit it into yours: you can read e-books on a mobile device, even while listening to music, or listen to audiobooks and -articles (Audm) while doing other things that don’t keep your brain occupied.
Also, as the best comments say, absolutely don’t try to self-medicate your real diagnosable clinical disorders with internet posts. There is no substitute for trained help, even if you have to Rationalize it as finding someone who’s spent years of actual daytime work hours updating their priors instead of someone winging it with a good slogan they overheard from a friend of a friend.
I suffer from a number of mental health issues, but by no means am I qualified to give professional advice on how to approach them. However, I will say this: instead of attempting to completely divorce yourself from the ideas you were exposed to, perhaps there is a way to make peace with them. Many people enjoy thinking about MWI or simulation theory or AGI (me, for one, and probably many people here) without buying into the implications the rationalists offer up about how these ideas influence our lives or will in the future. I've also suffered from an inferiority complex in my life, and one thing I've found that helps is to remember the things that I do or the experiences I have that those "superior" others may not - things they are missing out on. Rationalists haven't really accomplished anything as a group beyond getting rich Silicon Valley types to buy into their nonsense. But they drain a lot of time into these activities. I don't imagine they "smell the roses" too often.
Others have said great things about the false numeracy that Rationalist-style Bayes calculations deal in. But maybe I can help to defang your fears about Many Worlds.
What is my long-term goal, in the abstract? It’s to reshape the world to be the best possible world that I could live in. I want to fight bigotry and injustice because they make the world a worse place for me to live in, because even when they don’t hurt me personally, they hurt the people I care about. And “the people I care about” casts a very wide net. Seeing people suffer makes me sad.
Now, assuming MWI, what do my counterparts in other quantum branches want? Well, quantum branches occur when there are multiple ways that an individual particle could go. On rare occasions, those quantum effects bubble up to the macroscopic world… but the vast majority of them are utterly and entirely irrelevant to me and to the things I care about. So my quantum counterparts for the most part also have the same goals as I do. It’s just that each of us can only affect our own quantum branch.
Each of us, me and my counterparts, are working together to repaint the future from the bleak place it could have been to the bright place it actually will be. Our shared, strongly aligned desires are acting as a process by which the deterministic many-worlds future of the multiverse is decided, and our contribution to that calculation ensures that our values are reflected in that process.
But is this really so different from my relationship with my fellow human beings in this quantum branch? Many of them also have a vision of the future that aligns with mine. And unlike my counterparts, I can support them and they can support me. That lets us accomplish even more than me and my counterparts could have done alone. There's no sense putting too much thought into what my counterparts are doing, since we can't affect each other; I should be focused on what I can change and the power that I do have.
Oh yikers, that’s terrifying. I just casually browse this sub sometimes because it’s funny and find the rationalists sus, but I always thought their love of Bayes was probably fine or maybe even a rare point in their favor. As a social scientist I think we often need to be using Bayesian rather than frequentist statistics for reasons relating to the assumptions of the methods and the claims we’re generally trying to make…
but math as an epistemology is definitely a hazard. Others have left a lot of good notes on a few of the important topics here, but what I haven’t seen people talking about is that any type of math or logic is a socially constructed tool for representing the world, and while these tools can be useful you can’t put all your faith in them as ground truth. A classic story problem begins “you have two apples…”, but what does that actually mean? Those apples certainly aren’t identical in the real world, so can you really justify adding them like they’re multiples of the same object? What if you take a bite out of one?
Silly, I know, but that is just to point out that math is one way of simplifying the rich texture of reality so that we can make inferences about it, and can cover up details that other forms of representation or description might capture. No mode of representation or analysis like that is going to be perfect, and using a tool of representation like a certain branch of math or logic as your ground truth is bound to get you into trouble.
Additionally, any mode of analysis like that is going to allow room for subjectivity in how it's actually applied in the real world. That is fine; we cannot be perfectly rational no matter how hard we try, but failing to acknowledge subjectivity is something rationalists are really good at. In the case of Bayes, a lot of subjectivity is introduced in selecting priors. The apples example shows that subjectivity is introduced when you decide on a unit of analysis. We also introduce subjectivity in how we select the questions we are going to use Bayes to ask. I could ask "what race commits the most crimes?" and the answer would likely support a racist worldview, or I could ask "what factors lead individuals to be more prone to crime?" and I would likely get an answer that does not support a racist worldview.
Idk if I worded this the best, but hope it helps a little.
Maybe look into existentialism to deal with the whole simulation problem. Camus is pretty good.
Also the entire “rational” sphere has big “too smart for school” energy.
Sounds very cultish
I’ll be honest, I know little about the rationalist sphere, I just stumbled across this sub and joined because I liked the vibe.
I do know a bit about Shapiro and j peterman and to my understanding part of their schtick is that whole “removal of emotion” in their argument as some kind of stoic bravado that gives a false impression of being above it all. But the reality is they’re not relying on facts so much as they’re leaning into anger, contempt, and fear, which are the only acceptable emotions to show if you buy into toxic tropes of masculinity.
I have nothing to add other than the fact that I've noticed rationalism seems to jibe with people who have OCD-like tendencies; I say that as someone with the condition. Maybe the worry that rationalist-inspired thoughts produce will lessen as you work with your psychologist more.
Read Justin E. H. Smith’s book Irrationality. It might not be a cure, but it might help.
Sorry, can't really help you here. I always found the whole appeal to secret knowledge and their 'math' a bit meh, and I was never convinced by the basilisk or simulation theory. The idea of infohazards has some merit imho, but probably only under my personal definitions, so it isn't that generalizable, and what counts as an infohazard often varies from person to person (and the concept of infohazards is one itself for some). For an example of something I think is a general infohazard: I think people should be careful not to link to far-right extremist literature without any comment (which sadly a lot of media does tend to do, and which is discouraged by anti-extremism researchers; as a sidenote, I recall reading research long ago that to understand something you read, you have to believe it at some level, which can be dangerous). But this isn't something worried about in the Rationalist sphere, where apart from the two anti-FAQs, linking to bad shit is done without a blink of an eye.
Do hope you manage to figure it out, however. I did once hear that psychologists at universities have more experience with these kinds of intrusive thoughts triggered by logical or mathematical ideas; that might help.
What also might help is to realize that apart from funding from various billionaires (who support Rationalism and NRx for obvious reasons), fawning respect from contrarians (who love to have their 'ha, my contrarianism is right' idea confirmed), a subset of tech workers looking to have their biases confirmed(**), and some science fiction fans, these people aren't that important (and also often just plain wrong (see how Yud just accepts MWI as true because it is good for the other ideas he has) or misguided), and in general they waste a lot of their time (see gwern's site). Also, their shit simply doesn't work(*), but that is more about the whole art of Rationality, which they seem to have mostly given up (when is the last time you saw 'epistemic status', for example? Fuck, I talked about that more here than they do), and not about the mindworms (if you don't mind me calling them that) you are suffering from.
Hope this helps a little bit.
*: This quote jumped out at me:
As you can see, it doesn’t give clarity of mind at all (makes me think of various self help books which actually don’t help). The rest of the article is a massive, as the kids say, cope (we aren’t winning but we could be if we did this one small trick because we are already so good at things).
**: On the note of biases being confirmed/contrarianism: a while ago Scott Alexander was going all 'payday loans are good actually', which turned out to be based on flawed data from an ideological source. It isn't relevant to your situation, but I just wanted to share this somewhere in sneerclub at least.
Not sure what you need, but you might get some use from David Chapman’s (@meaningness) writings:
https://metarationality.com/rationalism
https://meaningness.com/collapse-of-rational-certainty
I’ve also assembled some of my own critical material: http://www.hyperphor.com/ammdi/pages/Rationalism.html
but honestly Chapman’s is much more systematic and probably better suited to a recovering rationalist trying to deprogram themselves.
The best way to do it is to read reputable history books and great literature and poetry. When you do that you'll feel something. After that point it's smooth sailing. These people are not actually that smart, they're just bigoted in a way that's rewarded by a bigoted society.
Hey. Not sure how helpful it is, but I had a similar personal transition when leaving religious fundamentalism (before the "rationalist" movement picked up steam), and experienced many of the same difficulties in shifting away from the modes and habits of thinking that came with it.
Fundamentalism and internet rationalism both rely on “ideology laundering.” For the rationalists it’s Bayesian probability and for the fundamentalist christians it’s “the words of the Bible”: both use rhetorical card tricks to equate their assumptions and views with “math” or “the text of Scripture,” and frame others as arguing against that “neutral authority.” It’s critical to understand that you do not have to out-math someone to disagree with them or reject their premises because they conflict with your values or bring pain to your life. You might be wrong! But that’s not the end of the world. Trusting yourself is important; you may not be an authority on statistics but you’re an authority on what you value and believe and experience.
Also, if there are friends or trusted folks you can connect with in other communities (online or off) who can support you without being on any ‘side’ of these philosophical debates, it can be a huge help.
Jungian psychoanalysis. Jung dealt with exactly this.
This is a bit out of my wheelhouse, and I don't want to offer you any bad advice, but I hope you feel better. The only thing I can suggest is keeping an open mind, trying to be humble, and discussing things as much as you can.
Check out subreddit overlap with that community:
https://subredditstats.com/subreddit-user-overlaps/slatestarcodex
They’re heavily biased and in their own little internet echo chamber, don’t put too much stock in their irrationality.
Bayes' theorem, and probability theory in general, is meaningful only if you have a correct probabilistic model of the reality (or system) you want to describe.
Watch this video about Bertrand's paradox: https://www.youtube.com/watch?v=mZBwsm6B280. Depending on small details of the probabilistic model, one can get very different results. And that's in a pure math context, where things are deterministic and perfectly measurable.
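If you'd rather run it than watch it, here's a rough simulation of the classic version of the paradox (my own toy code, not from the video): the probability that a "random" chord of the unit circle is longer than the side of the inscribed equilateral triangle comes out near 1/3, 1/2, or 1/4, depending only on how you model "random".

```python
# A rough simulation of Bertrand's paradox.
# Question: how likely is a "random" chord of the unit circle to be longer than
# sqrt(3), the side of the inscribed equilateral triangle?
import math
import random

N = 100_000
threshold = math.sqrt(3)

# Model 1: two endpoints uniform on the circle. The half-angle between them is
# then uniform on (0, pi), and the chord length is 2*sin(half-angle).  -> ~1/3
hits = sum(2 * math.sin(random.uniform(0, math.pi)) > threshold for _ in range(N))
print("uniform endpoints:", hits / N)

# Model 2: chord midpoint uniform along a random radius; chord = 2*sqrt(1-d^2). -> ~1/2
hits = sum(2 * math.sqrt(1 - random.random() ** 2) > threshold for _ in range(N))
print("uniform on radius:", hits / N)

# Model 3: chord midpoint uniform over the whole disc (rejection sampling). -> ~1/4
def distance_of_uniform_point_in_disc():
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return math.hypot(x, y)

hits = sum(2 * math.sqrt(1 - distance_of_uniform_point_in_disc() ** 2) > threshold
           for _ in range(N))
print("uniform over disc:", hits / N)
# Three defensible models of "a random chord", three different probabilities.
```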
It is nearly impossible to make a reasonably correct probabilistic model when talking about anything where humans or human decisions are involved. So if someone claims to use Bayes’s theorem in their decision making, they lie ¯\_(ツ)_/¯