r/SneerClub archives
[NSFW] Vox starts their own series inspired by the EA movement. A genuine attempt at exploring the best parts of the movement, or an excuse for galaxy-brained neoliberal shills to wildly speculate and push their own hobby horses? Discuss. (https://www.vox.com/future-perfect/2018/10/15/17924288/future-perfect-explained)

EA is not so bad when they stick to malaria nets and the like. It’s good to have people who care about weeding out the effective charitable causes from the bogus ones, even if there are other groups doing it better.

It’s when they start thinking Eliezer’s bullshit research foundation is “effective” “altruism” (yeah, altruism for him…), or denouncing any attempts to reduce sexual assault, that the problems begin. Fortunately this Vox group seems to be made up mostly of people like Kelsey who are mostly in the good category, so I’m not too upset about this.

Yeah, I think an unappreciated positive of EA is that most of these people wouldn't be hitting the streets anyway. This way a bunch of liberals (and even some right-wingers!) toss a shitload of money at some genuinely good third world causes that they otherwise wouldn't have touched. This is good, even if a lot of it gets wasted on kooky AI style nonsense.
I agree that malaria nets and direct charity are good, but at this point I feel like the people interested in that would be better off separating themselves from effective altruism entirely and starting a new movement, since I fear "effective altruism" is by now hopelessly compromised.
And not to mention, when people give to givewell charities, Open Phil just gives less to compensate, since they were going to fund all of givewell's stuff anyways. So you're basically giving money to a billionaire by funding the standard givewell charities. The thing about this that irks me is how much show and pomp givewell/open phil have put on to make it seem like this isn't what's going on.
Woah, can you give more detail on this? Do they actually have all their funding guaranteed?
OpenPhil has a policy not to fund more than 50% of any organization, so no, this is incorrect. Source: https://www.openphilanthropy.org/blog/technical-and-philosophical-questions-might-affect-our-grantmaking#Philanthropic_coordination_theory Full disclosure: I get funded by them.
In contrast to David Manheim's response, the truth is that no, Open Phil's policy of not funding more than 50% of any organization's funding gap is just another tactic they institute because they know that the EAs will cover most of the rest of these orgs' funding gaps, or because they care more about the potential payoff of EAs spending money instead of them than they care about being 100% certain these organizations will get totally funded. So, no, they very much don't have their funding guaranteed, because Open Phil has invested quite a lot of energy into making it seem like EAs need to give instead of them.

Their most loudly stated argument as to why EAs need to be funding more of these organizations is that the organizations will be too much under the sway of Open Phil if they get all their funding from Open Phil. Sure, but when I checked last year, Open Phil was barely giving away 5% of its endowment per year, the minimum amount legally required of a charitable foundation. Wouldn't it be better to ease up on stressing the moral imperative of "you have to give now! Save the world!" that EAs peddle, and have Open Phil do more?

It's somewhat tiring for me to go over this, but I'm willing to. Ben Hoffman already wrote lots of this up on his blog, and got ignored pretty hard, if you're looking for original sources or more details. Anyways, you can always go *engage them in good faith* about what their intentions really are, but I prefer to call bullshit when I see it and spend my time elsewhere.
Right, so currently donating to AMF will marginally result in more malaria nets, but only because Open Phil could easily fill them all up but has decided not to? I can see the issue they run into with the givers' [dilemma](https://blog.givewell.org/2014/12/02/donor-coordination-and-the-givers-dilemma/):

> Imagine that two donors, Alice and Bob, are both considering supporting a charity whose [**room for more funding**](https://www.givewell.org/international/technical/criteria/scalability) is $X, and each is willing to give the full $X to close that gap. If Alice finds out about Bob’s plans, her incentive is to give nothing to the charity, since she knows Bob will fill its funding gap. Conversely, if Bob finds out about Alice’s funding plans, his incentive is to give nothing to the charity and perhaps support another instead. This creates a problematic situation in which neither Alice nor Bob has the incentive to be honest with the other about his/her giving plans and preferences – and each has the incentive to try to wait out the other’s decision.

But in this scenario, "Alice" is hundreds of individual donors making real sacrifices, while "Bob" is a single billionaire doling out pocket change. Seems like a 50/50 split isn't actually fair here; the billionaire should just fund it all and tell the donors to find another place for their money...
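The quoted givers' dilemma is just a best-response game, and the point about incentives can be sketched in a few lines. This is my own toy illustration (the `best_response` helper, the dollar figures, and the framing are all made up for the example; none of it comes from the GiveWell post):

```python
# Toy model of the "giver's dilemma": a charity has a funding gap, and a
# donor who learns the other's pledge only wants to cover what's left.

def best_response(gap, other_pledge):
    """Incentive-driven pledge once the other donor's plan is known:
    cover only whatever the other donor leaves unfilled."""
    return max(0, gap - other_pledge)

gap = 100_000  # room for more funding, the $X in the quote

# If Alice learns Bob plans to fill the whole gap, her incentive is to give nothing.
assert best_response(gap, other_pledge=100_000) == 0

# If Bob learns Alice will give nothing, he is left covering the full gap.
assert best_response(gap, other_pledge=0) == 100_000

# A 50/50 split leaves each covering half, but nothing in the incentives
# forces that outcome -- hence the waiting game GiveWell describes.
print(best_response(gap, other_pledge=50_000))  # prints 50000
```

Nothing here settles who *should* pay, which is the complaint above: when "Bob" is a billionaire foundation and "Alice" is a crowd of small donors, the symmetric 50/50 framing is doing a lot of work.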
Yeah, that's a good summary. I agree with your conclusion too.
> And not to mention, when people give to givewell charities, Open Phil just gives less to compensate, since they were going to fund all of givewell's stuff anyways

w h a t t h e f u c k
Yeah. Amusingly, SneerClub is the largest group of ex-ea/lwers that I know of, as far as "starting a new movement" goes. Everyone I see who's interested in "starting a new movement" is all apologetic to EA, though, so fuck it. Academia is the closest existing thing to what lesswrong should have been, though I would probably go to a "Sneerclub meetup" if such a thing ever happened.

The problem with the “bednets are great, ai is wonky” view on EA is that, in EA orgs themselves, the folks in charge typically secretly feel that AI research and the like is actually what’s important, and the other stuff is just for pulling people in. Why do you think CEA moved to the Bay a while back?

Hell, this is a widely discussed problem on the ea forums and in the dank ea memes group. “Is it ethical that we always talk about bednets when that’s not what we care about”. Been debated dozens of times.

Seconded. I'm all for charity and nets and all that, but at this point I wouldn't touch any group which associated itself with the term "effective altruism" with a 10-foot pole.

A couple points of concern:

  • Even the nice and rational EAs hold some batty and occasionally downright manipulative beliefs. Ozy, for instance, isn’t skeptical about AI risk. Kelsey donates to AI risk charities. Their friends do that creepy-ass ’meat before milk’ thing to con people into their community. I doubt the folks I’ve named are deliberately bad actors, but it’s not reassuring that they’re the good guys of the movement either.

  • The bednets are less effective than advertised. Understandably, people don’t like using them, the materials degrade, and the insecticide loses its potency [citation]. And people use nets for other purposes, because alleviating crushing poverty makes malaria worth the gamble. That doesn’t mean bednets are bad, or that they should no longer be distributed. But the efficacy of bednets is transparently a cudgel and a carrot among EAs. (If you donate, you certainly saved a child overwhelming suffering. If you don’t, well congratulations! That child sure is suffering terribly.) In short, a legitimately effective charity initiative (although perhaps in a more limited capacity than initially believed) is being misrepresented. I wouldn’t be surprised if other ““hyper-effective”” charities had similar issues.

> The bednets are [less effective than advertised](https://www.thelancet.com/journals/langlo/article/PIIS2214-109X%2816%2930238-8/fulltext#seccestitle140)

The study:

> Bednet and entomological data indicated that nets were performing as intended, with good physical integrity, insecticide availability, and bioefficacy after 12 and 18 months of use...Reasons for the lack of association between ITN usage and malaria transmission in Haiti are not entirely clear and a single case-control study may not be definitive. But one likely explanation for a lack of protective effect may be vector behaviour. ***A albimanus*** **in Haiti tends to bite outdoors and, at least in some locations, at times when people are not likely to be under nets**.[8](https://www.thelancet.com/journals/langlo/article/PIIS2214-109X%2816%2930238-8/fulltext#), [9](https://www.thelancet.com/journals/langlo/article/PIIS2214-109X%2816%2930238-8/fulltext#), [33](https://www.thelancet.com/journals/langlo/article/PIIS2214-109X%2816%2930238-8/fulltext#), [34](https://www.thelancet.com/journals/langlo/article/PIIS2214-109X%2816%2930238-8/fulltext#)
>
> More broadly, the effectiveness of ITNs against malaria in areas where *A albimanus* is the primary vector might depend on this vector's predominant biting and resting locations (indoors *vs* outdoors) and preferred biting times. **A study from areas of Nicaragua where** ***A albimanus*** **is dominant and primarily bites indoors and late at night, found that ITNs were effective in reducing malaria in study clusters with rates of ITN use above 16%**; however, a similar study in areas of Peru where *A albimanus* had greater outdoor and earlier biting rates found that ITNs did not significantly reduce malaria

Seems like they're effective in some contexts. This study was done in Haiti; if it relates to the behavior of mosquitoes in Haiti, then the results have limited generalization.
> And people use nets for [other purposes](https://www.theguardian.com/commentisfree/2010/jul/08/mosquito-nets-cant-cure-malaria)

This is largely a myth; indeed, in the study above people were using bednets for their intended purpose, otherwise the low efficacy wouldn't be much of a mystery.
> This is largely a myth, indeed in the study above people were using bednets for their intended purpose otherwise the low efficacy wouldn't be much of a mystery

From this study in Tanzania, [87% of households](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4198669/) used bednets for fishing. In this study from Kenya, anywhere between [5.9% and 43.3%](https://malariajournal.biomedcentral.com/articles/10.1186/1475-2875-7-165) of distributed bednets were used for fishing. The effect of mosquito nets on fish populations [isn't great](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0191519), obviously.

I'm not claiming that bednets suck, for the record. I called them a 'legitimately effective charity initiative' and I stand by that characterization. Rather, my comments are meant to debunk some of the more messianic rhetoric surrounding them.

Adam Johnson has some recent tweets, e.g. here, where he points out some of the issues with Vox’s coverage of rich westerners doing stuff in Africa.

I'm revising my answer to the OP to "a genuine attempt at exploring the best parts of a galaxy-brained neoliberal shill's hobby horse."
Ooh I like it when he does those.

Dylan Matthews is pretty good in that he cares about altruism but has a healthy skepticism of AI risk.

neoliberal shills

In what universe is this the primary problem with EA?

The neolibs are the ones pushing poverty reduction, malaria nets, animal welfare, actually effective climate reform, increased immigration… By far the best parts of the movement. What is left after you get rid of those issues? AI risk?

That's the primary problem with Vox, not EA. Edit-- it depends on who you ask and what issue is on the table, though. They're consistently in the progressive camp on social issues, but vacillate between actually progressive economic policies and positioning themselves as sober incrementalist wonks. Which I find hard to actually criticize, because that description could easily be leveled at me as well.
Jeez, that Vox explains stuff episode on Netflix about the gender pay gap was... so bad.
I know it sounds nerdy or whatever but I'm still smarting from that weird bit Klein did comparing how cash transfers work between EU members versus how they work between US states. Like there wasn't anything *fundamentally wrong* with it but it just...wasn't right. Why does my phone buzz me now when I get facebook notifications instead of just messages? I didn't ask for this
Probably the same reason there are now flies buzzing around this cesspool I'm swimming in. I should probably get out.
>actually effective climate reform
Well, yeah. Many environmentalists who profess to care about climate change detest effective solutions with broad support among economists and climate scientists, such as carbon taxation or nuclear power.
The word "effective" springs out there
effective

It just felt so low-stakes, so unimportant in the grand scheme of things.

A lot of people thought that about climate change and Electoral College too, dumbass. You’re a journo, make a better fucking argument than your feeeeeliiiings.

To answer the original post…yes to the former, though I’ve never been a fan of EA. EA didn’t get women’s suffrage, it didn’t free slaves, it didn’t free Haiti--only the wresting of political power from the hands of the unwilling makes long-term material improvements.

I also tripped over that line. It put me off, to be honest. The United States Senate is important. The only reason Matthews feels it’s unimportant is that about 1,000 people cover the Senate. If he really cared about doing important reporting he’d pick a regulatory body and write about all the rulemaking and admin law rulings that relate to that body. Or he would write about local politics. These are deeply important and significantly under-covered areas. But both of those types of reporting are boring and, worse yet, they both require hard work and knowledge. So instead he decided that he’d make a podcast about EA. Talk about “low stakes” and “unimportant”. Ugh. If he were honest he would just admit that Vox has a partnership with the Gates foundation and he is whoring himself out for some Gates money.
The partnership is between Vox and the Rockefeller Foundation--it's in the article. I'm sure they wish they had that Gates money tho. They did publish an interview with Gates today. Gates seemed a little defensive at Klein's line of questioning; I think he was reading a bit more into the implications of the questions than Ezra had intended. Guess you don't get to be one of the world's richest men without being able to shrewdly defend what you spend your charity's money on...
My bad. I had heard there were murky connections between vox and Gates in the past (see citations needed pod). I guess he admits to whoring himself out for big philanthropy. Still doesn’t make me feel any more generous to him.
murky connections in this case is that [gates owns a chunk of comcast who partially own vox](https://twitter.com/adamjohnsonNYC/status/1052085416499793921). idk if there's anything more direct than that
Good catch! Probably safe to assume Gates has a finger in every pie.
Counterpoint: women’s suffrage and freed slaves in America don’t make any difference to hundreds of thousands of kids killed by malaria, blinded by nutrient deficiencies, or rendered cognitively disabled by parasites each year.

Countercounterpoint: there’s no reason you can’t support charitable efforts to eradicate the biggest problems facing underdeveloped nations while also trying to redistribute wealth and restructure the political economy; the latter goal just doesn’t count as “altruism” for obvious reasons. The hope that wresting power from the ruling class automatically solves all problems of developing nations is just as utopian as the hope that AI alignment solves [insert your favorite problem with the world here].
> Counterpoint: women’s suffrage and freed slaves in America don’t make any difference to hundreds of thousands of kids killed by malaria, blinded by nutrient deficiencies, or rendered cognitively disabled by parasites each year.

In what universe is this anything even approaching a counterpoint?
You’re right, we’re in different conceptual universes. Why does the fact that EA never achieved women’s suffrage have anything to do with whether its goals are worthwhile? One of its goals is have fewer preventable child deaths. Why did you bring up women’s suffrage?
I didn't bring up women's suffrage, /u/zhezhijian did, in order to make the point that the Effective Altruist movement can be accused of having a limited focus on throwing money at problems. Of course, to point out that that limited focus has its own independent value is not a counterpoint; this second sentence you've just read is making that point. And this third one is making the point that I have no idea why you narrowly introduce a focus on America when your interlocutor did nothing of the sort.
Oh my b didn’t read the username. Right I agree that my counterpoint is not a real counterpoint, which is why it comes with the countercounterpoint. Both goals are valid but obtaining women’s suffrage is not going to have a first-order effect on EA causes and vice versa, so it doesn’t make any sense to say “has EA ever won any women the right to vote?”
But of course it *does* make sense if you read what the comment asserts: that long-term material change is founded on realigning the distribution of power in a society. Falling malaria levels in unstable or autocratic governments are obviously extremely welcome, but if political re-alignments and democratic recognition don't come with them, then it makes sense to worry that they are not the kind of nail in the coffin that some EAers (and neoliberal poptimists) want to present them as. Obtaining women's suffrage indeed *could* easily have a first-order or second-order effect on EA-related causes such as reducing HIV infection by improving democratic recognition and therefore policy; in fact I have no idea (a) why you flatly assert that it wouldn't and (b) how you motivate this putting of first-order effects first.
I think we’re basically in agreement; the only qualm I have is that the best should not be the enemy of the good. EA isn’t stupid just because the difference in quality of life brought about by major societal and political changes in autocratic regimes can be large. I am more optimistic about $10 going toward bed nets to bring about positive change than $10 going toward the establishment of democratic governments. Obviously success in such a case brings about huge effects, but one of the guiding principles of EA that I think is worthwhile is putting money down where it is very clear that you have a known effect on the problem. There are many problems that throwing money at doesn’t solve, or where it’s unclear at whom you ought to be throwing money.

I’m not asserting that fixing Big Problems won’t have a big effect; I’m asserting that it’s utopian thinking to believe “It’s stupid to focus on effects of global poverty when we can just solve global poverty and be done with all its effects.” It’s a similar issue to AI alignment: sure, maybe if we have a billion-IQ benevolent robot god it would solve all our problems. But we should still probably be focusing on solving the problems at hand in case that billion-IQ benevolent robot god doesn’t work out, or it turns out to be such a hard thing to do that there is no end in sight. If we don’t know that we can even solve the Big Problem, then expecting work on that problem to have first-order effects on the little problems is putting the cart before the horse.
> “It’s stupid to focus on effects of global poverty when we can just solve global poverty and be done with all its effects.”

And as I've stressed, this is a complete strawman; indeed, that is what this disagreement is about: you appear to be reacting to a caricature of EA-sceptics which pro-EA or EA-optimistic people often waste their time reacting to. This is a discussion about the potential for trade-offs between charity and promoting good governance, the very *opposite* of utopian thinking. Indeed, in this conversation the EA position comes out with a whiff of the utopian about it if they're unwilling to entertain that there might be such a trade-off.
Was the original commenter not saying “EA is stupid because we should just be wresting power from the ruling elite, which EA doesn’t do?” I wasn’t responding to a straw man, I was responding to that guy. It’s not clear to me that I misread him.

I do not understand how the EA position is utopian. It does not make promises of a perfect future, it makes promises of the form “if you put $x toward y charity then on expectation you save z lives. The following is a list of charities for which z is the largest number.” This is not utopian; frankly it is boring: it’s just a dressed-up spreadsheet. (Maybe the utopian part is where they argue why you should use their spreadsheet to decide how to do your charitable giving? But this applies to all such analyses, not just EA.)

The difficulty with putting money toward good governance is that it is very hard to quantify the trade-off, because no one can tell you precisely what change that money is making in the world, so the biggest optimists can say “it brings about stable governance” and no one can prove them right or wrong. This invites utopian thinking like “just wrest power from the ruling elite and the rest will work itself out” from utopian thinkers like the first guy I was replying to, but not from all people who think that capital is distributed poorly at present. EA is nice because you are restricting yourself to problems for which the effect of money is a known quantity, but I totally admit that there are other things one can do besides EA that bring about positive change in the world. Again, I think we’re basically in agreement unless I’m very confused about your position.
> Was the original commenter not saying “EA is stupid because we should just be wresting power from the ruling elite, which EA doesn’t do?”

No, obviously /u/zhezhijian wasn't saying that. I'm surprised I need to guide you through this, given how much you were complaining a short time ago in [this exchange](https://www.reddit.com/r/SneerClub/comments/9o0wdi/scott_runs_anxiety_supplement_human_clinical/e7rl21b/?context=3) that somebody on /r/sneerclub wasn't reading the article they were talking about. It's also somewhat infuriating because you're evidently not being careful to parse my explanatory comments either, which I'll deal with in a minute.

Here is what was originally said:

> To answer the original post...yes to the former, though I've never been a fan of EA.

That's yes to this proposition, "A genuine attempt at exploring the best parts of the movement", and the qualification "though I've never been a fan of EA". Obviously, this nowhere implies that EA is, to quote your paraphrase, "stupid". They then go on to say this:

> EA didn't get women suffrage, it didn't free slaves, it didn't free Haiti--only the wresting of political power from the hands of the unwilling makes long-term material improvements.

There's a couple of ways you seem to have mischaracterised this. First, the suggestion that it's utopian to argue on the basis of real changes that actually happened in politics. Specifically, three counts: (1) women's suffrage; (2) freedom from slavery; (3) Haiti's political freedom, which is both a sub- and super-set of (2), because Haiti's revolution freed slaves, and because Haiti became a free state of its own.
This doesn't strike me as utopian at all, and you simply plug in this objection (correct me if it isn't an objection, but you seem to have rolled with that):

> The hope that wresting power from the ruling class automatically solves all problems of developing nations is just as utopian as the hope that AI alignment solves [insert your favorite problem with the world here].

Now since they don't actually say that EA is bad, and indeed imply that EA has a good side, it's hard to get EA is "stupid" out of it. What they do is point to examples of real material change which they think aren't the bread and butter of EA, hence their ambivalent attitude towards EA. This is one way in which I'm getting annoyed.

But worse, you're making up the utopian aspect of this *out of whole cloth*. There simply is no place in which our friend up there is expressing those utopian ideas about solving all problems by wresting power from the ruling class. At no point do they say that solving all problems by wresting power from the ruling class is the be-all and end-all of making the world a better place.

----

I have a further objection to this part too, let's see it, because I mentioned it before but you seem to have ignored it:

> women’s suffrage and freed slaves **in America** don’t make any difference to hundreds of thousands of kids killed by malaria, blinded by nutrient deficiencies, or rendered cognitively disabled by parasites each year.

Who brought up America? Our friend up there didn't use the word, and referred to two general concepts (women's suffrage and the freedom of slaves) which surely are universal issues. You plugged in America for some reason which is not clear, but certainly trivialised the universality of those political principles by localising them to America, in contrast to your own implied internationalism. They even brought up Haiti, which you ignored, in spite of its being not just a local event but a symbol of national liberation in a more global context.
This is quite frustrating, because I don't feel like I'm talking past you in quite the same way, which brings me to the rest.

-----

Now it comes to where you seem to have misunderstood, or misinterpreted, what I said. I am not saying that the EA position is utopian *per se*, I'm saying that it has, quote,

> a whiff of the utopian about it **if they're unwilling to entertain that there might be such a trade-off**.

Fresh emphasis. Just because there's a balance sheet involved doesn't mean that it doesn't have a utopian aspect. Indeed, the targets of the phrase "utopian socialism" included utilitarians who thought they could achieve a more ideal society by totting up numbers on a balance sheet (be it cash or utilons). This isn't about that though: the point I was making is that it is in the character of utopians to ignore crucial trade-offs in favour of a rosier ideological picture (whether or not that includes balance sheets), which, if you will remember, is roughly what you were accusing our friend of doing.

> This invites utopian thinking like “just wrest power from the ruling elite and the rest will work itself out” from utopian thinkers like the first guy I was replying to, but not from all people who think that capital is distributed poorly at present.

As we have seen, this is a misrepresentation - a straw man - of what they actually said, on several levels.

> Again, I think we’re basically in agreement unless I’m very confused about your position.

I'm not really in agreement, in the sense that I object to the moral framework implied by the idea that private cash transfers are the best way for most people in the West to effectively and justly improve the world, but we are at least in broad agreement that there's room for political change and...cash transfers. Where we're in broad disagreement is that I checked what the person you replied to had said, which you didn't.
I think where we are miscommunicating is here, in our reading of the original comment. Since we are reading it in different ways, it seems to each of us as if we are coming from different places: I infer, when someone says “I’ve never been a fan of EA because it doesn’t solve [problem that it doesn’t set out to solve]”, that the implication is that the problems it does set out to solve are basically irrelevant in comparison to the problems they’d prefer to see solved.

Imagine the following substitution: “I’ve never been a fan of Women’s Suffrage because it doesn’t solve the problem of children dying from malaria.” Perhaps you understand now my reading of the original comment. Do you still think it is unfair? I am still unconvinced that I have misread it; if you still think it is unfair we can agree to disagree on the meaning of this reddit comment.

I don’t believe I ever suggested that I advocate for the moral framework implied by the idea that private cash transfers are the best way for people in the West to improve the world (or any moral framework for that matter), but I do believe that there are many problems which yield basically immediately to cash transfers, and that the goal of solving such problems should not be in competition with loftier goals that probably do not yield immediately to cash transfers. This is where you ascribe to me something that I do not believe, but that we probably agree on. I thought in my countercounterpoint that I made it clear that my point is principally that altruism and justice are not incompatible, so it is unclear to me what it is we are arguing about, if not our reading of the original comment.
I am unaware of any point at which I ascribe to you any belief about cash transfers as part of any arguments I make. And I certainly never suggest that you advocate for, quote, >the moral framework implied by the idea that private cash transfers are the best way for people in the West to improve the world Clearly you have misread my point.
In light of me saying

> I think we are in agreement

to which you respond

> I’m not really in agreement, in the sense that I object to

one at this point expects a state of affairs or structure to which you think I would not object. The structure in question is

> the moral framework implied by the idea that private cash transfers are the best way for most people in the West to effectively and justly improve the world

And it would appear to follow as a corollary that you ascribe to me

> a belief about cash transfers

namely, the belief that

> private cash transfers are the best way for most people in the West to effectively and justly improve the world.

Anyway, I responded to you in another comment to /u/zhezhijian saying that you are completely right in your response to my strawmanning.

Another place where my understanding departs from that of /u/zhezhijian is that he believes the “proper comparison” is not between the different goals that EA and societal-level reformation often seek to achieve, but between the same goals. I.e. it would never have occurred to me to compare a hypothetical EA charity helping slaves to escape the South with fighting the American civil war. Indeed, EA might not even make sense for such a case. It is not so much a method of making current charities more effective as a method of finding which problems yield easily to raw money and which charities spend it most efficiently to solve those problems. You don’t have any power over which problems yield readily to raw finances, so in a way EA is inextricably linked to its small class of problems until the funding saturates the good it can do. (Of course, if you make up existential risks and divide infinity by zero, then you can pretend that AI safety is an EA cause. To me this is a big problem.) It is understandable to me that if someone believes that EA is just some modifier on existing forms of charity (nerds making sure charities have good accounts?), they wouldn’t be a big fan of it.
This is a very different view from the one I take of EA, and from the one I think is generally acknowledged.
> I.e. it would have never occurred to me to compare a hypothetical EA charity helping slaves to escape the South with fighting the American civil war.

Of course this is the proper comparison. Given that some group of people is living shitty lives, the question that EA is supposed to answer is: what is most likely to give them better lives? So what would be better, a charity to help slaves, or freeing them? What raises the net utility of the world?

> the different goals that EA and societal-level reformation often seek to achieve

You need to read what William MacAskill and Peter Singer have actually said about EA. They are utilitarians who genuinely believe that making a ton of money and donating it to the best charity is the best thing you can do with your life. If this doesn't represent your view of EA, then you don't believe in EA. You especially don't believe in EA if you are not a hardcore utilitarian.

> It is not so much a method of making current charities more effective, but a method of finding which problems yield easily to raw money and which charities spend it most efficiently to solve those problems. You don’t have any power over which problems yield readily to raw finances, so in a way EA is inextricably linked to its small class of problems until the funding saturates the good it can do.

You vastly overestimate the thought and care done by EA advocates. A great deal of the science that EA purports to base itself on is shoddy. One of the pet causes of EA advocates is deworming, but the evidence that deworming leads to good long-term outcomes is extremely shaky: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4523932/

And yet, deworming is one of the top things you can do, according to GiveWell. I mean, how the fuck would you calculate whether a cause was "saturated" by money anyway? Have you ever worked in the nonprofit sector? Literally nobody knows how to say no to more money.
It's not like anyone's intentionally dishonest, but when people get truly attached to a cause, they can't help but think it's the most important thing in the room. Of course they won't turn down money.

Also, what the hell, "raw money"? "Raw money" is just a pile of money. It can't turn into utilons on its own. That money is ultimately spent by _people,_ who are fallible. There is never any such thing as "raw money." This money has to go to people who are overseas, who may or may not speak the local language; it has to not get seized by local leaders or brigands; and then it has to actually get spent wisely. A lot of shit can happen between someone in America hitting "send" and a bed net finally getting delivered. I can't believe you said "raw money." God, I'm tempted to start a deworming charity and run away with all the cash. Oops, I mean, "raw money."

I'm not saying that looking for the most effective charity is bad, but if you really want to pick a good charity, using GiveWell's formula of "science says you'll save X lives for Y dollars" is a poor approach. What I recommend doing instead is looking for charities run by people in the communities they intend to serve. So, for instance, if you intend to donate to a Tanzanian charity, pick one that's run by locals. That way, the charity will be guided by knowledge of local needs, and the money is less likely to be wasted, even if the impact is much more constrained.
> Of course this is the proper comparison. Given that some group of people is living shitty lives, the question that EA is supposed to answer is, what is most likely to give them better lives?

Some people suffer from horrible diseases which cost enormous amounts of money to treat. Unfortunately EA will not be of any help to these suffering people, since it will never suggest that people who want to do the most good with their donations should fund such treatment, because it is expensive.

> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4523932/

The chief finding appears to be that the short-term effects of deworming are small, but my understanding of the deworming initiative is that it is focused on long-term improvement in earnings and development through a mechanism that is not yet well understood. However, if they mention this and I can't find it, I'll hide behind the fact that it's hard to read on my phone.

> I mean, how the fuck would you calculate whether a cause was "saturated" by money anyway?

Very easily in some cases. If no one is contracting malaria at night, then the bed net cause has been saturated. If there are no more parasitic worms, or everyone has easy access to antiparasitic drugs, then the deworming cause has been saturated. In the United States, polio treatment is no longer something many people donate money to because the "war on polio" is basically over.

> A lot of shit can happen between someone in America hitting "send" and a bed net finally getting delivered. I can't believe you said "raw money"

Hey /u/noactuallyitspoptart, come defend me against the straw-man accusation that I believe a pile of money can spontaneously transform into bednets. It is only fair 😢.
>It is only fair

Life's not fair, and I have jobs to apply for.
after i complete my phd I’m going straight into the [swine business](https://www.linkedin.com/jobs/view/swine-technician-warrior-ridge-farm-at-clemens-food-group-880890622?trkInfo=searchKeywordString%3AArtificial%2BInsemination%2CsearchLocationString%3A%252C%2B%2Cvertical%3Ajobs%2CpageNum%3A0%2Cposition%3A1%2CMSRPsearchId%3Ad248391b-3fcb-4d51-a3d4-ef97009d212e&refId=d248391b-3fcb-4d51-a3d4-ef97009d212e&trk=jobs_jserp_job_listing_text) .
The proper comparison is not "women's suffrage" versus "malaria," it's "some EA charity that helps runaway slaves" versus "fighting the American Civil War."

Of course altruism is great. I've volunteered in Tanzania, I've volunteered with the homeless, I've never gone a single year of my life without donating _thousands of dollars_ or volunteering my time to some charity. Not that you knew that about me, but that's just to drive home how lame your reading that I'm opposed to charity is. I'd bet a good amount of money I'm more active and giving than most people on this sub. I don't know why it is that people take "X is inadequate" to mean "X is stupid," but I suppose criticizing charitable giving activates people's impulse to characterize you as a Grinch.

What I am sick and tired of is liberals acting like charity is the solution, and neglecting the importance of structural issues. The day Vox starts writing about neo-colonialism is the day I quit griping about people paying too much attention to EA. What I object to is Dylan Matthews framing malaria as somehow representing some of the most important issues in the world. Yeah, no. Malaria's fatality rate is comparable to the flu's. I remember being super scared of malaria before I went to Tanzania, thinking I was definitely going to die if I caught it because the hospitals there are so bad...then I saw this:

"In 2015, there were roughly 212 million malaria cases and an estimated 429 000 malaria deaths." (http://www.who.int/features/factfiles/malaria/en/)

Flu fatality rates: "Worldwide, these annual epidemics are estimated to result in about 3 to 5 million cases of severe illness, and about 290 000 to 650 000 respiratory deaths. In industrialized countries most deaths associated with influenza occur among people age 65 or older (1)." (http://www.who.int/news-room/fact-sheets/detail/influenza-(seasonal))

The flu is, in fact, far more dangerous.
Imagine a bunch of people from Africa coming to the West, and telling us our most pressing issue was wearing face masks during flu season. That's a bit like what the malaria net initiative is.

Now, if you're sincerely interested in the economics of developing countries, I suggest reading this link, and you'll see hard numbers that show why all the aid in the world is a pittance compared to structural issues, and perhaps you'll see why I find people who focus solely on charity, while ignoring the predatory behavior of their own first world governments, so very distasteful: https://www.theguardian.com/global-development-professionals-network/2017/jan/14/aid-in-reverse-how-poor-countries-develop-rich-countries

I've had multiple conversations about overseas charity now, and not once does this kind of thing ever come up. If you really want to help countries overseas, you can't just look at GiveWell and think you've done your job.

I'm going to say this again because I expect to be misread. I like charity. I like justice. I do not like people who only talk about charity and not about justice. The Vox article, which opens by saying, in essence, "malaria is one of the most important things in the world," is guilty of being heavy on charity and light on justice. I don't like that. By all means, let us all give our time and money to charity, but if your involvement with the world stops there, shame on you.
Okay, my mistake. I don't understand why, in the proper comparison of "some EA charity that helps runaway slaves"^1 with "fighting the American Civil War," you would bill yourself as "not a big fan" of the former.^2

I am glad you commit so much of yourself to charitable efforts; that's really commendable. I think the least fair bit of my original comment was probably the final paragraph; I'll admit that was totally off-base. ( /u/noactuallyitspoptart, if it satisfies you, this is where I say "yes, your criticism that I am strawmanning his position is salient insofar as it concerns whether he has utopian views.")

Since we both seem to agree with my second paragraph that charity and justice are not in competition, but that the latter is preferable to the former, I think we're on the same page.

1. I guess the Underground Railroad with, like, accountants?

2. Or to be very pedantic and faithful to your first comment, you'd be "not a big fan of EA," where here we are considering a hypothetical efficient charity that does Underground Railroad type work as a branch of EA.
While I agree broadly with your analysis of neocolonialism being a larger problem than those that charity is equipped to handle, the research on which the Guardian article you link is based is not clearly of very high caliber. If the "trillion dollar" or "24 times" numbers are depressing you as to the relative impotence of aid, you can at least rest assured that they overestimate the true scale of illicit financial transactions, since the methodology used was extremely coarse: https://www.econstor.eu/bitstream/10419/141281/1/85890862X.pdf
Yes, of course I think your reading is unfair, because I wrote out quite a long bit explaining the specific ways in which I think it is unfair in some detail.

I don't like "imagine the following substitution" replies because they aren't relevant. Swapping one thing out for another doesn't help me unless I've misunderstood what you're saying. However, there's no reason for me to think I've misunderstood, and I've explained a number of ways in which I understood *exactly* how this all fits together, *none of which you are engaging with here*.

Let me be totally clear: I *already* understood your misreading of that comment, and *explained in detail* where you had departed from its content. I included points about how you had erroneously introduced "America" without prompting from the person you were responding to. I pointed out in several places where you had ignored relevant information in the original comment. So you both introduced irrelevant things such as "America" and also failed to account for things actually present in the text even after they'd been pointed out to you, and are carrying that on now.

>if you still think it is unfair we can agree to disagree on the meaning of this reddit comment.

No. I agree to disagree with people when we're on the same sort of level, i.e. we've reached an impasse where both of us are aware of where the other's coming from but are caught in a spiral of other disagreements. This isn't a case of that. You're just ignoring specific corrections I've made to your interpretation and shooting straight for "well I guess that's just your opinion." That's not what's happening here: I've given specific and deliberate reasons why you're **wrong** in your interpretation, and you're replying to me without engaging with any of those points.
>I thought in my countercounterpoint that I made it clear that **my point is principally that altruism and justice are not incompatible**, so it is unclear to me what it is we are arguing about **if not our reading of the original comment.**

Let the first bolded part be (1), and the second bolded part be (2).

1. A significant part of my point has been that /u/zhezhijian did not say that these things are incompatible, and that you have strawmanned their position. I have laid out a number of reasons why this is a strawman, and you have not responded to any of them with any specificity. You have preferred to reply in generalities, so please do otherwise and respond to what I've said, because I don't feel like I'm talking to a person who is talking to me right now either.

2. I have concentrated quite heavily on you being wrong about the message of the original comment, and you haven't responded to those things. Instead, you've continued to defend EA (as if it were under attack). I recommend you go over this discussion and try to be more parsimonious about what I've said: I may not have made myself ideally clear, but I have certainly made it clear that I disagree with you on certain points of interpretation. You haven't really dealt with any of those points about interpretation at all.
Thanks poptart! You know how to read.
/u/noactuallyitspoptart knows how to read, and you don't.
>/u/noactuallyitspoptart knows how to read Well let's not go nuts
> The hope that wresting power from the ruling class automatically solves all problems of developing nations is just as utopian as the hope that AI alignment solves [insert your favorite problem with the world here]. What an interesting reading. Are you very good at seeing shapes in clouds, too? Haiti is poor as fuck, of course freeing itself from France didn't solve all its problems. I never said that gaining power solves all problems. My only claim is that there is no amount of charity in the world that would have done as much good for Haitians as freedom from slavery.
Right, I accept that claim. I was responding in fact to the more optimistic claim that

> Obtaining women's suffrage indeed could easily have a first-order or second-order effect on EA-related causes such as reducing HIV infection by improving democratic recognition and therefore policy

This is what I was (okay, unfairly) characterizing as "wresting power" (getting women the right to vote) "automatically solving" (having a first-order effect on) "all problems" (one of which being HIV infection). As you note, this doesn't always work: Haiti is independent but still very poor. It is also conceivable that a place could be made to enact women's suffrage and not see a reduction in its incidence of HIV. I'm also pretty positive on the "wresting power from the ruling class" idea, just not on its capacity to solve the *same* problems as EA, which also can constitute long-term material improvement.
> only the wresting of political power from the hands of the unwilling makes long-term material improvements Wow, you are such a brave revolutionary!

[deleted]

Not a direct sneer. Just staying on the safe side of the rules.