r/SneerClub archives
EA looks outside the bubble: "Samaritans in particular is a spectacular non-profit, despite(?) having basically anti-EA philosophies" (https://www.reddit.com/r/SneerClub/comments/140qkoc/ea_looks_outside_the_bubble_samaritans_in/)

LessWrong: Things I Learned by Spending Five Thousand Hours In Non-EA Charities

An EA worked for some real nonprofits over the past few years and has written some notes comparing them with EA nonprofits. Among her observations are:

  • “Institutional trust unlocks a stupid amount of value, and you can’t buy it with money […] Money can buy many goods and services, but not all of them. […] I know, I know, the EA thing is about how money beats other interventions in like 99.9% of cases, but I do think that there could be some exception”
  • “I now think that organizations that are interfacing directly with the public can increase uptake pretty significantly by just strongly signalling that they care about the people that they are helping, to the people that they are helping”
  • “reputation, relationships and culture, while seemingly intangible, can become viable vehicles for realizing impact”

Make no mistake, though. She was not converted by the do-gooders; she just thinks they might have some good ideas:

[Lack of warm feelings in EA] is definitely a serious problem because it gates a lot of resources that could otherwise come to EA, but I think this might be a case where the cure could be worse than the disease if we’re not careful

During her time at real nonprofits she attempted some cultural exchanges in the other direction too, but the reception was not positive:

they were immediately turned off by the general vibes of EA upon visiting some of its websites. I think the term “borg-like” was used.

At least one commenter got the message:

But others, despite being otherwise receptive, seem stuck in the EA mindset:

Inspired by this post, another EA goes over to the EA forum to propose that folks donate a little money to real nonprofits, but the reaction there is not enthusiastic:

“general vibes…borg-like”

I can’t breathe.

OP is so close to making a breakthrough. In the spirit of being generous I won’t sneer too hard. They make some great observations about how addressing issues requires structural and institutional action, which is usually difficult to quantify for the purposes of a CBA.

Of course it would suck if they came away from this further entrenched in EA-ness, but from the looks of it this is the first step in digging themselves out of that hole.

honestly it’s pretty hard for me to sneer at EA people. the roboapocalypse / atomic suffering ones, sure. Easy. but I can’t sneer at the givedirectly people, you know? they’re making the world less hellish for the people who need it most. the fuck have I done except whine about it?

yeah, this is a good example of someone who's trying really hard to do well for the world getting involved in an exciting new paradigm that just doesn't work in practice - and being surprised that the traditional charities have evolved to fit the existing real world quite well. look at this:

> I know it sucks for nerds to hear that reputation (popularity) is important but I think it’s unfortunately a real thing, and not just on the margin.

One Weird Trick keeps never being one.
I'll offer two notes on that:

1. There is substantial overlap between the robot apocalypse crowd and the EA crowd. Indeed, many EAs consider so-called "X-risk" to be the "charitable" cause that is most worthy of their time and money.
2. You say that they're making the world less hellish, but *are they?* How do you know? The EAs are sneerable precisely because, if we judge them by their own standards of success (quantitatively verifiable metrics regarding resource efficiency in altruistic goals), they fare very poorly.

I personally like the idea of data-driven metrics for charitable work, but that is only one component of improving the world. They are so narrowly focused on data and "reason" that they fail to be effective in doing altruism, including going so far as to spend enormous amounts of time and money on problems that don't even exist (see above).
I think the thread also showcases the limitations of the Californian/TESCREAL/EA worldview when it comes to operating at scale. Like, as an individual I have no problem with someone deciding to look at where they're donating to see how well that money is being spent, and in as much as givewell and the like have enabled that it's probably been good.

At the same time, they're utterly incurious as to *why* these problems exist, and it leads to some bizarre conclusions like saying "it's okay to accept stolen money from grifters and thieves as long as nobody finds out" because the only reason to not do that is "reputational damage" and not the fact that wealthy thieves and con artists got that way by *making the problems you're trying to solve worse*. How much of SBF's money came from people who couldn't afford to lose it, who bought the lies he told about crypto? How much of it came from scammers and cons who relied on crypto to get paid from their traditional marks? To what extent is poverty, both in the US and globally, the result of the same system that allowed Peter Thiel and Sam Altman and other billionaire EA donors to make their unfathomable fortunes?

The answer is of course to not ask the question in the first place. And if you're an individual that's probably fine. You probably don't have enough systemic power to meaningfully impact these broader systems with your individual donating dollars, and so (unless you want to talk about the moral trainwreck of "earn to give") EA ideas won't do any harm. But if you're talking about establishing organizations, advocating politically, and trying to actually turn the levers of power, then you *need* to be asking those questions or else you're missing the entire goddamn point of moving those levers in the first place.

And all of that is without dealing with the AI doomers and associated general griftiness of the EA field specifically. If it's not some undefinable X-risk it's advancing EA itself (or advancing X-risk as a broader concept, because why not do both terrible things together), which are both very hard-to-quantify areas of impact for a movement that is ostensibly about using data to optimize charitable giving. How exactly are we supposed to measure how effective MIRI has been in advocating against AI development? Like, what would those numbers *even look like*? What would be the *units* of such a measurement?
So here's a personal story: last year I got a grant from SBF's fund to work on an academic AI research project, to the tune of a few thousand dollars. At the time, we didn't know about the colossal fraud, but I did assume that since it was crypto money, much of it probably came from poor people addicted to gambling or from shady sources. So I went to my most progressive friends, and asked them whether they thought I should accept the grant. The answer was unanimous: whatever else SBF would realistically do with the money would be worse than contributing to actual research, and me accepting this relatively small amount won't legitimize FTX, so I should take it. At worst, I would decide some time later to donate it all to GiveDirectly or some other org helping the global poor (though it didn't necessarily come from them vs. Western poor people). By now I might have actually donated it already, if it weren't for the threat of clawbacks from bankrupt FTX.
Getting a one-time 1k grant is a bit different than institutionally courting the worst billionaires. But in the end it remains hard to do ethical things under capitalism. (Another post for your hypocrisy compilation, sneer clubbers)
To what extent do EA attitudes about efficiency basically radicalize some existing bad tendencies in nonprofits, like their tendency to minimize ‘overhead’ and so look good on charity ranking sites by underpaying staff?
It seems like EA charities tend to pay pretty well judging by the career pages of GiveWell/Open Philanthropy. Unclear how generalizable that is.
Yeah they tend to argue that they need to pay well to attract talented people who could make a lot of money outside the nonprofit world. But they also pretty much all hire from within the very insular EA community soooo…
Chaps, you know, chaps!
To be fair to them, an EA forum is where I first heard someone really clearly articulate how bullshit the "overhead" critique of charities is -- charities should be judged based on how much impact they have on the world rather than how exactly they do it. A charity that has "no overhead" but also does nothing useful is obviously worse than a charity that has "lots of overhead" but by so doing succeeds at helping people.

The "overhead" thing is very much an outcome of the way Charity Navigator evaluates nonprofits, and GiveWell in its early days was almost defined by its feud with CN and trying to fight this idea that the definition of a "good" charity was simply one with "low overhead".
That's one of those things that sounds eminently reasonable in the abstract, but which isn't so compelling once it makes contact with reality. In trying to pose a dilemma between "overhead" and "impact", what the EAs are doing is replacing an imperfect metric (overhead) with one that can be difficult or impossible to measure (impact). This is what allows them to do truly insane things like spending charitable donations on buying castles and vacations for themselves, or paying for "research" into the robot apocalypse. They can rationalize these things as having immense future "impact", because you can rationalize pretty much anything if your success metrics can't actually be measured.

One of the purposes of looking at overhead is to try to answer the following question: "is this *really* a charity, or is it a tax-advantaged grift that people are using to enrich themselves and their friends?" The EAs come out looking really bad when you ask that question, so maybe it's not surprising that they're opposed to it.
Sure, but it cuts both ways -- the "overhead" thing is a way you can attack pretty much any traditional charity simply for having a large number of employees with the level of salary and benefits typical to their field, and it's what exerts the constant downward pressure on headcount and compensation that can make working in this field so hellish.
EA forums: posts about global health and safety, 1200; animal welfare, 800; AI risk, 1700. Yes, their internal priorities are really weird. And then there is the whole cryonics thing, which a lot of the higher-ups believe in, and which, combined with the singularity and AGI, makes it not altruism, just wanting to live forever.
Frozen billionaire skulls are the mummies/sarcophagi of our time.
I have read that short story!
I’d go further and suggest anyone looking at the world through the stupid cost-benefit lens is producing “negative value” by existing.
Is GiveWell a good indicator of EA priorities?
GiveWell is a good-ish indicator of EA *global health* priorities, although there are some other orgs in the area such as those incubated by Charity Entrepreneurship*. This is where roughly half of EA money goes. The other half goes to existential risk reduction (including AI but also pandemics** and nuclear). They're not actually halves, since some smaller fraction also goes towards animal welfare***.

*Some examples include the Lead Exposure Elimination Project, Family Empowerment Media, and Suvita.

**The Nucleic Acid Observatory is an example of an EA org aimed at pandemic prevention.

***This includes orgs like the Humane League, Animal Charity Evaluators, the Fish Welfare Project, the Shrimp Welfare Initiative, and also maybe orgs working on wild animal welfare, which I'm not really on board with.
I am an EA so people on this sub may disagree with this, but IMO basically nobody on this sub would dislike the work Charity Entrepreneurship do, which is wholly focused on making the lives of people and/or animals better, and is basically robustly good. They have no focus on longtermism, and do so much wonderful stuff.
this is the problem that "EA" is several tendencies in a trenchcoat, and the other problem that the AI doom crank tendency sucks up all the oxygen
Likely more people in EA agree with you than you realise.
I know they do, they email me! I'm sympathetic to the basic pitch, and they're good people trying to make the world better. But there's no good idea you can't make into a bad one by just doing it *hard enough*.
Then these "more people than [we] realize" should speak out against longtermism and AI doomerism more.
Yeah, then OP’s point #2 seems completely wrong, no?
At least half wrong. For AI spending, that's up for interpretation. I personally think AI x-risk is real but EA spending on it isn't effective. Others here undoubtedly think even worse.
> the fuck have I done except whine about it?

There's a simple solution for this.
Nuclear drone strike?
The Give Directly folks make some pretty good arguments about the problem of professionalized charity to be honest (I say, working for a non profit married to someone working for a non profit). There’s a very good interview/argument between them and the International Rescue Committee out there about the limits of pure cash and the fact of motivated reasoning in hiring as a non profit on the “The Rest is Politics: Leading” imprint.
Michael Hobbes, who is very far from a rationalist and who in many ways fits the rat community's negative stereotype of a "woke activist", is nonetheless an EA apologist to the extent that his many years working in nonprofit development convinced him the EA "conversation" was a set of questions that needed public airing, even if the answers they came up with were dumb -- it is legitimately frustrating how getting actual solid data about results in that world is like pulling teeth, and how hard it is to defend a lot of the nonprofit industrial complex from the average dude on the street's suspicion that the money they give just disappears.
Exactly. I personally know an EA and rationalist who donates 10% of income. Hard to really frame that as a “bad” thing, even if they have certain views I oppose.
Are we sneering at just donating to charities? One can donate as much as one would like without buying into EA/rationalist stuff. In fact, it's very normal to do that! You can also volunteer to help local nonprofits!
For me it kinda depends on the charity, but personally I don't blame the low-level donators; like, all charities have problems. The heads of charities/EA otoh, or people who go full libertarian and say that charities should replace gov action, are another matter. EA is prob just as effective as normal charities if you factor in the castles, them actually paying their employees well, and the 50% acausalrobotgod tax.
Are they giving 10% to an organization working to ensure the proper care and feeding of far-future hypothetical tamagotchis?

It’s funny that this one is basically corporate PR 101 (especially for bad industries with a physical presence in a community): “We can donate a little money locally just to project warmth and connection to the people around us.”

For people who supposedly prize efficiency, they spend a lot of time reinventing the wheel on things like social change theory, international development, and corporate social responsibility.