In the footnotes they actually came out and said:
> We should remember that EA is sometimes worryingly close to racist, misogynistic, and even fascist ideas. For instance, Scott Alexander, a blogger that is very popular within EA, and Caroline Ellison, a close associate of Sam Bankman-Fried, speak favourably about “human biodiversity”, which is the latest euphemism for “scientific” racism. [Editor’s note: we left this in a footnote out of fear that a full section would cause enough uproar to distract from all our other points/suggestions. A full-length post exploring EA’s historical links to reactionary thought will be published soon.]
I’m looking forward to the full-length post but I’m not optimistic it will actually get all the way through to the community.
> we left this in a footnote out of fear that a full section would cause enough uproar to distract from all our other points/suggestions
If you feel like you can't openly critique aspects of the movement you're part of for fear of backlash, isn't it time to consider whether the movement can actually be reformed?
I guess we’ll see how the full-length post goes… not super optimistic, but ideally the movement will fragment and the more racist fragment won’t be able to borrow respectability and good faith from the better-intentioned fragment?
Yeah, I predict that the authors of this piece (and the alleged forthcoming follow-up), and anyone who even tacitly supports anti-racism, will get informally "rebutted" in a 5000-word Siskind turd and maybe a Yudkowsky twitter thread, and then they will have no option left but to leave.
On the other hand, Scott has all that Substack money now, so maybe he doesn't care enough to bother?
You don't need anyone to smear them for that to happen - you just need nobody with power to listen. Which, judging from the comments already on their post, looks like what's going to happen.
If someone does smear them, I expect it to be someone that's much more active on the EA Forum than either.
Meh. People are sensitive if you tell them directly they're really wrong instead of doing it incrementally. Or if you're impolite about things you know they'll never agree with you about and you have to tolerate (e.g. I think Yudkowsky is dumb and most of them don't).
I'd argue there's a pretty big difference between "we have to frame this sensitively" and "we have to hide this in a footnote in the hopes that most of our audience won't notice it".
I’m glad someone is, uh, working on it. But there are still nuggets of stuff like this:
> And thus much more likely to be read as hostile and/or received with hostility
May I be so bold as to suggest you take every piece of social or conversational advice you have ever received from Scott Siskind and burn it in a fire?
That same thought kept coming back to me when I was looking at the EA forum discussion around Bostrom. They kept using ideas from Scott’s blog to defend Bostrom’s “apology” or even the original emails, to tone-police his critics, or they would go meta in a way that obfuscates the issue and ultimately defends Bostrom (albeit indirectly). And it kept occurring to me: no shit Scott’s blog provides good material for defending Bostrom and/or obfuscating the issue, because Scott himself is crypto-racist.
>They keep using ideas from Scott's blog to defend Bostrom's "apology"
Can you give an example? I didn't notice this (though I might have been too angry at everyone supporting Bostrom to notice).
I think a lot of the general norms that Scott promotes (even if he didn't invent them per se) are implicitly optimized to avoid anyone ever calling Scott or Scott's friends racist. These norms include:
* Civility as a value unto itself ("attacking rationalists is bad behavior!")
* Principle of charity as a value unto itself ("if he says he's not racist, he can't be racist!")
* That whole dumb-ass essay where Scott throws out all of human knowledge and declares that it's only racism when *he* says it's racism, which he then defines in the stupidest way possible
A *non-charitable person* might say that one takes up these norms when one knows one is very likely to otherwise be called a racist.
The most immediate example that comes to mind is Eliezer “joking” about Bostrom’s apology being a scissor statement. (Not a direct defense but a deflection serving as an indirect defense).
Forcing myself to look back through the EA forums comments…
* there is a highly upvoted comment asking for EA to be inclusive of neurodivergent people (i.e. neurodivergent people have fewer filters, so we should be accepting when they spew racism). It’s a very Scott-esque tactic to appropriate inclusivity language to defend racism…
* “High-decouplers” and “low-decouplers”
* going meta in a way that loses track of the issue
* treating HBD as remotely legitimate and worth considering
I think only Yudkowsky's tweet and maybe the openness to HBD are connected to SSC.
The *-decouplers thing comes from some blogger through LW, and the person writing about inclusiveness to neurodivergent people has been sneered at here independently, because he's some creepy professor.
My mistake then. I guess I’ve lost track of which ideas/trends originated with Scott, which were popularized by him, and which are merely popular with his readers.
The post seems pretty good to me. And I appreciate their goal of fixing EA. I think it’s unlikely to succeed because many (if not all) of the flaws they describe pretty much make EA what it is.
I do respect though that there are people in EA who actually seem committed to altruism as opposed to the cult.
Indeed, in their [critical comment](https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1?commentId=fFhWWSaTctki3pGdm) Larks claims:
>I think this is in fact a common feature of many of the proposals: they generally seek to reduce what is differentiated about EA. If we adopted all these proposals, I am not sure there would be anything very distinctive remaining. We would simply be a tiny and interchangeable part of the amorphous blob of left wing organizations.
They have this sneering attitude toward literally everyone working in nonprofits, charity, organizing, mutual aid, policy, etc. who isn’t affiliated with them or with millionaires/billionaires, and then they wonder why no one likes them! Though they do have an answer for that: we all just resent how much more effective and smart they are.
To be fair, if we include the greater movement surrounding them, I *am* somewhat resentful that they are much more effective at, like, making the world more racist or something than I am at the opposite.
One of the commenters worrying that the article's suggestions are primarily left wing works at an "EA-aligned quantitative crypto hedge fund."
And if THAT isn't worth sneering at, then I don't know what is.
Good piece, though I do wonder if it’s missing the point.
I have been thinking a lot on this idea that rationalism’s original sin is an excessive preoccupation with individual intelligence.
This is what drives them to ignore established fields, experts, books, etc. in favor of Harry Potter nerds trying to derive everything from first principles. It’s why they’re trying to reinvent the wheel and rarely take any part in the mainstream academic conversation. And why they just can’t get past these “just asking questions” arguments about HBD.
And I don’t know that there’s a way to get past that without throwing everything out. It’s not that rationalism/EA was co-opted by racist grifters, it’s that maybe it was just the wrong approach to begin with.
Genuine question -- which came first? I always assumed EA was a particular offshoot of rationalism/lesswrong. But was this actually a case of EA getting "invaded" by a worse group?
I think EA is sort of an amalgamation of ideas, coming mostly from:
1. Singer
2. MacAskill, Ord and other analytic philosophers
3. GiveWell (Holden Karnofsky and Elie Hassenfeld)
4. Rationality
I feel that out of those, the contribution of Rationality is the smallest, although it has had much influence over the format of discourse (and, through that, also over the content).
> I think it’s important to characterise Singer as a secondary influence
I always wonder what the "you must save the drowning child, and expand your moral circle until every drowning child matters as much as yours" dude thinks of becoming the guy who'll be remembered instead as the "math proves that no amount of experienceable suffering matters, so eat the poor and bow to our new feudal lords" dude.
I guess I should've included Bostrom as a direct influence - is he the father of Longtermism? I'm not exactly sure.
Although - I never got around to finishing "The Precipice", but the half I did read didn't sound anything like Bostrom or Yudkowsky to me.
I don't know about the father of it, but Bostrom's discussion of existential risk, transhumanism, and the like goes back before MacAskill and Ord were really active and lines up pretty well with the longtermist stuff I've seen.
Bostrom essentially developed his whole viewpoint in collaboration with Yudkowsky, idk whether you’d call that rationalist influence on him or vice versa but they’ve been entangled since the very beginning
I've been involved in EA since 2014. There's always been some incidental overlap, but it certainly feels like the latter. Back then, in terms of focus, it was about 60% poverty/health, 30% animal welfare, and 10% other stuff like existential risk and AI.
I am only like 10% into it, but it’s suddenly striking how some of the areas of EA thinking that are criticized here – eg excessively quantitative, bogus bayesian reasoning – parallel some of the reasons for predictions about an AI apocalypse.
> and then literally every response is shot down or *rationalised* away
The one thing I will give rationalists credit for is unintentionally naming their intellectual position correctly
Some of us in EA already know this. Probably about the same group as what the post calls "heterodox EAs". The rest just think outsiders are stupid or acting in bad faith.
I don't think you understand, Gerard
That the blog post
Is the single most powerful instrument ever conceived by the human mind.
Worlds will quake and crumble
wonder if they will consider that left wing politics is about helping people as much as possible, emancipating the working class. it’s more coherent than giving money to some random charities.
why “altruism”, why not “solidarity”? is it because “altruism” as they see it is an individualist thing, an act of martyrdom, personal sacrifice that can be done without social interaction, a “lone beautiful soul” smartly scheming to “help” people without their consent, without consulting them?
> wonder if they will consider that left wing politics is about helping people as much as possible, emancipating the working class. it's more coherent than giving money to some random charities.
>
> why "altruism", why not "solidarity"? is it because "altruism" as they see it is an individualist thing, an act of martyrdom, personal sacrifice that can be done without social interaction, a " lone beautiful soul" smartly scheming to "help" people without their consent, without consulting them?
Judging by the comments, they are immediately accepting only the shallowest criticisms and deflecting the rest with citations to bullshit blogposts from Siskind.
So I'm gonna go with *no, they will not consider adopting better ideas*.
That's a really interesting question and I think your answer is pretty good.
Altruism ≡ Measurability ≡ Money. Solidarity involves actual relationships between people; altruism can be accomplished by sending a check. You can just set up a monthly payment and forget about it, without changing anything very fundamental in the world. Solidarity is political; altruism requires no fundamental changes to the way things are managed.
I once made a Reddit poll asking people which of the principles of the French Revolution they thought was most important. Liberty and Equality both got many votes, but Solidarity (which I personally value most) lost by a large margin.
LessWrong runs one of these approximately annually, as a rationalist realises that rationalism is really bad at everything and is a way to make yourself into a dumbass who makes awful decisions. They post, nothing changes, everyone continues.
EA has the added problem that there's a shit-ton of money involved, and the subculture has become finely attuned to making its rich donors happy. Nothing will change, everyone will continue.
But people do leave the movement. Movements, like any other sufficiently large organization, exist for and by themselves. But a movement that can't attract or sustain its membership withers.
So basically, the good news is there's a steady trickle of people exiting the cult -- the bad news is this doesn't change the state of the cult at all, or affect their onboarding rate of new cult members (or are they mainly keeping their existing "whales" happy nowadays instead of going for quantity?)
ConcernedEAs is so sad to me because they’re so close to figuring out that the philanthropy stuff is actually cryptoneoliberalism and that the cryptoneoliberalism is actually neoconservatism.
The people around you are into natalism and race science. They hate you.
>ConcernedEAs is so sad to me because they're so close to figuring out that the philanthropy stuff is actually cryptoneoliberalism and that the cryptoneoliberalism is actually neoconservatism.
>
>The people around you are into natalism and race science. They hate you.
They know. They just can't say it or their critiques won't get taken seriously by the community.
“we need to reconsider the influence that rich EAs have within the movement” the whole purpose of the movement is to make rich people feel better by talking about hypothetical misanthropic murder AIs who are worse than anything they could possibly do by a factor of 30 billion
It’s hard to read the full-length article as anything other than an argument that EA, as it exists as a concept and social reality, should be taken out back and shot. Like, it is good, but mostly serves to damn the entire movement. It is, in that boring EA way, utterly brutal.
“The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male[9] in his twenties or thirties from an upper-middle class family in North America or Western Europe … Let us name him “Sam”, if only because there’s a solid chance he already is.”
I find the omission of SneerClub personally insulting
Ah, finally. Eventually they’ll schism and there will be a section of EA that actually does what the label says.
hey they figured some stuff out!
I’m actually somewhat impressed, and did not expect to be.
Just read Anand Giridharadas and get out.
Y’all actually read this??
“We reached the point where we would feel collectively irresponsible if we did not voice our concerns some time ago,”
Great example of how poor sentence structure can fuck up a basic sentiment
Sarah Taber has thoughts:
https://twitter.com/SarahTaber_bww/status/1617194799261487108
(a long thread with some good replies)
They’re becoming self-aware guys!
Also, https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1#We_are_incredibly_homogenous
Rekd.