r/SneerClub archives
Add Your Own Egg: Great article about moral philosophy that includes a critique of the effective altruist movement (https://thepointmag.com/2016/examined-life/add-your-own-egg)

As an EA, I agree with everything they said (and thought it was a great article). I’ve never understood what sneerclub has against EA other than “hurr rationalists bad”, but this article doesn’t exactly make the case.

This passage sums it up: EA is justifiably appealing to a certain type of person, but I have nothing but the utmost respect for people who spend their lives helping their local communities and the causes/people they care about.

> I can see what “doing the most good” offers as an ideal to those who haven’t got one already; roughly what the Church or the military once offered to young men who didn’t have any firm ideas about what they wanted to do with their lives. They could certainly do worse. But what if one has firm ideas on this question already? How could an attempt from the outside to overrule these thoughts be anything other than alienating?

The usual objection is that a significant fraction of the population involved is just putting out the malaria nets and otherwise maximizing good done per dollar as an explicit smokescreen to on-ramp people into the singularitarian crackpottery, and that a number of the higher-ups are rather horrible human beings.
I've heard this a lot but I think it's silly -- the whole point of EA is to collectively work out how we can do the most good. When I'm trying to recruit students to EA, I use the malaria net example because it's a compelling example of the sorts of questions we consider, and it immediately illustrates why we care so much about, well, effectiveness. However, it's unreasonable to think that the most effective causes will also be the most intuitive and immediately appealing, so there's nothing inherently weird/bad about EAs deciding that odd-sounding causes are actually the most effective.

I do think it's an extremely bad idea to decide that AI risk (or similar "weird" causes) is The Most Important cause area and completely forget about actually helping people/being a good person. That definitely happens, but a) it's hardly an indictment of the whole movement and b) I doubt even hardcore AI risk people would endorse this "smokescreen" strategy.
> b) I doubt even hardcore AI risk people would endorse this "smokescreen" strategy.

My dude, we've literally cited them on this subreddit.
Link? I haven't seen that but admittedly I haven't read most of the posts
I was thinking of [this discussion](https://www.reddit.com/r/SneerClub/comments/8lx8k8/a_startlingly_frank_discussion_between_two/). cptsdcarlosdevil is Ozy, who has long been active in the Berkeley bit of the rationalist subculture. Ozy says "hmm, it's a bad idea," but that's in response to it being a popular local strategy. Bonus beats: a cultist shows up in the comments to argue why this is all sensible.
It looks like Ozy notices local groups focusing on global development when their organizers are more concerned about AI risk, and concludes that they're pursuing a smokescreen strategy. Maybe some organizers are consciously using a smokescreen strategy, but I do exactly what Ozy criticizes for the reasons I mentioned in my earlier comment. EA is meant to be a question, not an answer: if I get someone interested in EA by using the malaria nets example, and they later convince the EA movement to shift resources away from AI risk towards AMF (or something else), that's a *success story*.

Currently, I think AI risk is an extremely important issue (whatever, I've been suckered by Big Yud's incredible charisma), but my goal as an organizer isn't to get people interested in fighting AI risk, it's to get people interested in thinking about how to make the world a better place. If organizers followed Ozy's (implicit?) advice, it would be nearly impossible to stop EA from becoming a circlejerk: if we only talk about some weird cause, we only recruit people who are already biased towards that weird cause, and eventually the weird cause totally takes over the movement.

Instead, my goal is to get people interested in the *question* "how can we do good as effectively as possible?" through examples that are compelling to a broad audience, so that EA becomes as diverse as possible. Again, if everyone recruited this way decides AI risk is really dumb, that's a *good thing*, because it helps shift us away from bad causes (in this case, AI) and towards good causes.
I'm not a SC regular and only came here after the Kathy Forth thing, but personally: I like the ideas behind effective altruism, but not Effective Altruism. In particular, the tendency of at least a pretty loud part of the movement to reify "good", and to dismiss any kind of action other than giving resources to the most efficient charity possible (whether it's malaria nets, animal rights, preventing an AI from torturing everyone, etc.) as a waste of time and resources.

A lot of the community also focuses on a relatively narrow view of things that could help, because their material efficiency is easier to evaluate - I think I've seen someone, possibly around here, give the example of "donating to a homeless shelter vs donating to a campaign that would, if successful, change local policies to lower the barriers to housing and permanently, significantly lower homelessness rates". (That's not necessarily inherent to the base principles of effective altruism, which I honestly mostly agree with, but because of things like the opinions of high-status people, and demographics similar to the LW rationalists with the same tendency to miss the broader picture, the EA movement itself does seem, at least to a kinda-outside observer like me, to have these issues.)

Also, a more concrete problem: some things about this ideal of doing the most good you possibly can, when taken to their logical extremes, can lead to personal issues. For example, consider someone who's prone to scrupulosity: things might end up like "How many people could I save if I just donated most of my food budget to [charity of your choice] and survived on the bare minimum?"

Not explicitly, but it does address the problem.