posted on September 08, 2022 09:36 PM by u/contextfree (51 points)
u/EnckesMethod · 52 points
It wouldn’t be surprising if someone grouped EA with what people call “wokeness,” since both have argued in support of making concrete material sacrifices on behalf of non-white people. This author equates all EA with longtermism and then ropes in “wokeness” by analogizing people today harmed by systemic racism or sexism with the hypothetical matrioshka brains of the year 1 trillion, as if caring about the former is similarly silly to caring about the latter. It makes more sense to see longtermism as a sort of judo move by more right-wing nerds against the material critique of EA, to remove any time horizon or discount factor such that judging the relative effectiveness of charitable acts becomes impossible and they can argue that doing exactly what they want is as effectively altruistic as giving all their money to bed net charities.
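To make that discount-factor point concrete, here's a minimal toy sketch (all numbers are invented for illustration; none come from the article or from any actual EA cost-effectiveness estimate) of how an expected-value comparison behaves with and without a time discount:

```python
# Toy expected-value comparison; every figure here is a made-up placeholder.

def discounted_lives(lives_saved, years_from_now, annual_discount=0.03):
    """Present value of future lives saved under simple exponential discounting."""
    return lives_saved * (1 - annual_discount) ** years_from_now

# Near-term intervention: bed nets save ~1,000 lives within a year (hypothetical figure).
bed_nets = discounted_lives(1_000, years_from_now=1)

# Longtermist pitch: a one-in-a-billion shot at 10^15 future lives, 10,000 years out.
speculative = 1e-9 * discounted_lives(1e15, years_from_now=10_000)

# The same pitch with the discount factor removed entirely.
undiscounted = 1e-9 * discounted_lives(1e15, years_from_now=10_000, annual_discount=0.0)

print(f"bed nets (discounted):    {bed_nets:,.0f}")      # ~970
print(f"far future (discounted):  {speculative:.2e}")    # effectively zero
print(f"far future (no discount): {undiscounted:,.0f}")  # 1,000,000 -- now dominates
```

With any positive discount rate the far-future payoff rounds to zero next to the bed nets; set the rate to zero and the same one-in-a-billion claim outweighs everything regardless of the evidence behind it, which is the sense in which "relative effectiveness" stops being decidable.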
> It makes more sense to see longtermism as a sort of judo move by more right-wing nerds against the material critique of EA,
I believe that they're just all the same people, though.
If you look at [cause prioritisation](https://forum.effectivealtruism.org/posts/83tEL2sHDTiWR6nwo/ea-survey-2020-cause-prioritization) over time, EA has been steadily dropping support for global poverty in favour of AI risk, to the point where AI risk is nearly in the top spot (and may have overtaken global poverty in the two years since the survey). Even more stark is the graph of engagement vs. cause support: the more engaged in EA someone is, the less they care about near-term causes (aka the things that help people right now).
Essentially, while EA used to mainly be about helping real people with evidence-based interventions, and giving a lot of money to the third world, the AI-risk mindworm has slowly taken over the movement and its leaders, so now it's becoming mainly about making flimsy guesses about future robots based on vibes, and giving money to STEM research groups in Silicon Valley.
It's interesting to look back on [this Givewell exec's evaluation](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si#Objection_3__SI_s_envisioned_scenario_is_far_more_specific_and_conjunctive_than_it_appears_at_first_glance__and_I_believe_this_scenario_to_be_highly_unlikely_) of MIRI (then called the Singularity Institute) from 2012. I think every point of Holden Karnofsky's critique still holds up, more or less:
* The AI doomsday scenario is massively contingent and assumes advances that don't match up with the way machine learning research is actually headed.
* If sentient AI is feasible, it's not obvious that defining and hard-coding a "friendly" utility function for it wouldn't itself lead to Bad Computer Times.
* The If-There's-Even-A-Tiny-Chance-AI-Doomsday-Could-Happen-You-Need-To-Give-Us-All-Your-Money argument is really dumb in a way that smacks of the worst sophistries of theism (see the toy arithmetic after this list).
* "SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself."
The post is preceded by a brief dissenting comment from Luke Muehlhauser, executive director of SI. Four years later, Muehlhauser left the Institute and joined Givewell.
Karnofsky himself is now a true believer. He reactivated his LessWrong account last year, and now he posts every week or two on topics like "Digital People" and "AI Could Defeat All Of Us Combined."
Yeah, it's a damn shame. EA has a lot of great ideas and research, but it's losing out to the bad ideas of the "shut up and multiply" camp. I think this is causing a quiet exodus of the empiricists until only the rationalists are left.
They are very open to criticism (if it's in a tone they deem acceptable), so it's possible that if you plant enough seeds of doubt in their flimsy chain of logic they'll come back to reality. Realistically, though, I don't think it's likely.
I think EA as a distinct movement that identifies itself as such is pretty closely tied to longtermists. But the concept of examining how productive different charitable organizations or different interventions or generally trying to optimize the use of limited resources to do the most good has spread well beyond them, if indeed it ever *was* their original thing.
Looking into it, it seems EA developed as kind of a combo of the rationalists/LessWrongers and the GiveWell charity, which is more about bed nets and antimalarials (at least originally).
The history and the demographics are kind of murky to me. The mods and other people who have been following them for a longer time can say more about that.
But I'm looking at EA Twitter accounts and they're either old LessWrongers, or also into X-risk, AI, longtermism, Bryan Caplan, Yudkowsky and Scott Alexander.
I don't see any meaningful differences at the moment. They might have a different focus, but they seem to be the same people to me.
And most of them are following a combination of LessWrongers like Julia Wise, Julia Galef, Gwern, Alyssa Vance and Sarah Constantin. So I don't see a lot of difference with 2012 LessWrong, either.
The EA Wikipedia article's history section says that EA came together out of several communities, some of which were the rationalist and LessWrong groups, but also the group around the charity evaluator GiveWell. There's certainly a lot of overlap between the bed nets discussions and the AI/longtermist discussions, but I think there are lots of self-identified EA people who subscribe to the former and not the latter.
Regardless of the history, I don't think it would change my opinion that to the extent EA is like "wokeness," it's in its drive for the privileged to transfer their personal resources (wealth for EA, political power for progressivism) to the less privileged, and to the extent longtermism/AI risk takes over EA, it dilutes that drive and makes EA less like progressivism (contra the piece). I don't think anyone pushes AI risk/longtermism *consciously* thinking "I'm going to subvert EA so I can keep my money," but I think the prominence of these preoccupations in EA stems from a combo of the self-interest of some (i.e. they'd rather pay themselves to write whitepapers about sci-fi than buy bed nets, or they just find the sci-fi stuff more fun) and the paranoid tendencies (verging on mental illness in some cases) of others.
GiveWell is two people: Holden Karnofsky and Elie Hassenfeld, who started GiveWell with the insights they got from working at the Bridgewater Associates hedge fund.
The Oxford group looks like a group of maybe 20-something people: the Future of Humanity Institute. They founded CEA, GWWC and 80,000 Hours. They voted on naming Effective Altruism "Effective Altruism". It was started by Nick Bostrom, and the people working there are Toby Ord, Anders Sandberg, Nick Beckstead, William MacAskill and Benjamin Todd.
I think it's safe to say they were all futurists before they were Effective Altruists.
(lol Beckstead is now working for Sam Bankman-Fried's crypto company)
The internal discussion often seems to be strategic: 'People would think we're "weird" if we don't donate towards other charities.' And a lot of the comments are very much in favor of doing more futurist stuff and less foreign-aid stuff.
I wonder if EA has changed, if there's an Atlantic divide, or if the techbro side is just more visible online. When I attended a few EA meetings at uni 6 years ago the focus of the discussions in decreasing order of importance were tropical and airborne disease research, getting into the UK civil service's Fast Stream to gain power quickly and thereby influence government policy, getting a high paying finance or consulting job so one could donate to charity, and animal welfare. AI stuff was only mentioned as a hypothetical to help explain the concept of future discounting, and even then was presented as something an "eccentric" would be concerned about.
> When I attended a few EA meetings at uni 6 years ago
That's very interesting!
But like, how high were the chances they were Roko's Basilisk people giving you milk before meat?
The EA forums seem to have a mix of support and heavy criticism for longtermism. There is plenty of worry about the obsession with AI risk. It just seems pretty divisive. BUT, I think the people in control of most of the money are heavily into tech and AI, so that stuff gets a huge focus.
More centrist drivel. “Sure, hinting about sending undesirables into death camps is bad but if you try to make a nazi lose his job you’re just as bad.” Wut?
UnHerd is very far right. Mary Harrington has praised Bronze Age Pervert, and Douglas Murray is quite vocal about skin color and its relation to culture and civilization.
Not to mention that Kathleen Stock is probably into Jennifer Bilek.
I just had a look at this website, and it's very strange. A lot of the writers are far-right, but they also have articles by Terry Eagleton, the literary critic, who's a Marxist.
Tim Montgomerie aside, even if it hadn’t started out that way, you’d have been able to work out that it would at least wind up at “far right with a smattering of Eagleton” from the first, given its editorial aim of reflexive, unreflective contrarianism.
I can understand why UnHerd would want a left-wing writer, to basically point to him and say, "look, Eagleton writes for us, so obviously we can't be far right, can we?" But I'm still not sure what Eagleton gets out of writing for them. If nothing else, he's always been very anti-Islamophobia, and even criticised Martin Amis for it, while UnHerd seems to regularly publish stuff about Islam that's as bad as or worse than what Amis said.
Well it’s Terry, isn’t it? Love it or hate it, his approach has always been to work away at junctures of extreme tension (the Marxist and Catholic literary critic fighting a war of attrition with “Theory” from within!) and he’s made no bones about being a bit of a gadfly. But in another sense he’s a bit of a relic from an age when either you published your work somewhat agnostically - with the publisher implicitly taking a similar attitude to who it published - or only published in your tiny journal with a circulation of three or four others on your particular strand of Trotskyist schismatism.
> I'm not still not sure what Eagleton gets out of writing for them
A paycheck perhaps? The right apparently pays well.
E: otoh, considering they don't seem to have editors (check the links: 90% of them point to the longtermist Amazon book page, and not to anything else), I doubt people are getting paid for this. Amateur-hour stuff. And none of the people going 'well done, excellent article' etc. in the comments seem to have noticed.
It makes sense for someone to want their views to possibly reach people they wouldn't normally reach. Sticking to only writing in spaces where everyone agrees is probably less influential than maybe convincing a few people with very different views than you.
I suppose we’re lucky Stock’s engagement with the EA scene seems to be surface level—she probably skimmed this book, browsed the celebrity blurbs, and stopped exploring as soon as she found a hitching post for her hobby horses. If she’d opened a few more browser tabs, she would have found EA attracts plenty of the anti-woke (“NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists…”) as well as the “skinny, specky” Oxbridge types she was so keen to dismiss. She missed a connection with some fellow travelers because the second someone asked her to widen the scope of her empathy, she turned her brain off.
I absolutely love it when people talk about “woke” things and “woke” people as if they are actually real, existing phenomena, rather than them simply applying a hot new (and deeply stupid) term to an incredibly vague, amorphous mass of things and people.

The confidence with which people like this particular writer use it as if it were an established concept is breathtaking. It tells us nothing about progressives or progressivism, but it tells us so goddamn much about the writer and their social circle and media consumption habits – all of it fucking awful.
It’s actually, what, ironic? Given that the site is called “UnHerd” - Wake up sheeple!
Or, maybe that’s not irony. Maybe it’s more that she thinks this kind of meme driven truthy BS is actually a cogent analysis of the real world.
E: weird how they added Musk to the header as an image, since, while he might be ideologically close (at least on the AGI fears and the Rationalist parts), he has very little to do with EA and hates ‘wokeness’ himself. (Also note how the intro of the article already sets up a frame; this is just propaganda. Before talking about EA, they are put down as weird people whom no sound person should listen to, biasing you against them, instead of the other way around, where you show they are nuts and that is why you shouldn't listen to them.) Also, nobody seems to have noticed that most of the links are broken and only point to the Amazon buy-the-book page, not to the actual thing she is referring to.
She is. She was the professor who, if I remember rightly, quit her job because some students criticized her for being head transphobe of transphobe mountain. She also wrote some awful TERF screed as an academic philosophy of gender piece and got skewered for that. Otherwise, she was just generally around TERF Twitter being a huge TERF.
“[…] assuming we can give them happy lives, we have a duty to have more children; and we should also explore the possibility of “space settlement” in order to house them all.”
This is correct only in the sense that both are buzz terms, when you could be much more specific about the actual thing you are for or against.
Isn’t she like a big transphobe?
And about UnHerd: oh look, it is Moldbug! (And some bad news for this article.)
“Heartbreaking: The Worst Person You Know Just Made A Great Point”