r/SneerClub archives
Is effective altruism as bad as the stuff it’s associated with? (https://www.reddit.com/r/SneerClub/comments/jezh22/is_effective_altruism_as_bad_as_the_stuff_its/)
57

I’m a high school senior about to decide what direction I want to take my career, and I recently discovered Effective Altruism and 80,000 hours. It’s actually been pretty helpful, and I like the idea of going into a field where I could maximize the positive impact I could have. The EA movement seemed great at calling out the hypocrisy of spending huge amounts of effort on certain issues that are salient but not as important, while ignoring issues that are important but far away.

I had no idea that EA was related to LessWrong and the rationalist stuff. I’m not too terminally online, but I know a bit about the rationalist community (Roko’s Basilisk, racism, the like). I’m honestly kinda disappointed.

I’m a lefty, so I always thought that the billionaire-worshipping and lack of criticism of capitalism from people in the EA movement was odd. It’s also really confusing to me that a movement based on weighing impacts impartially and helping the developing world could have such an overlap with race-realists and tradcons.

Basically, what the fuck?

Edit: also there’s this lol

[deleted]

Agreed. I still think the fundamental principles of EA are worth keeping. That being said, I wonder if it’s worth it to keep engaging with the community as a whole. I don’t want to throw the whole thing away, but I’m not sure if I should still try to aim for EA meetups, fellowships, 80,000 hours advising, etc.
Go for the 80,000 hours advising! They're not going to try to indoctrinate you, and they can help you think meaningfully about your future, skills, and career goals. You can do whatever you want with it, there are literally no strings attached. I basically loathe the rationalist 'community' and am highly skeptical of EA from a philosophical standpoint, and I still found it really valuable.
Thank you for the advice! 80,000 hours does seem much better than all the other rationalist stuff. I found their series on choosing a career really interesting. If I may ask, how did their advising help? I’m a high school senior right now, but I’ll know where I’m going to college in a few months and I was wondering if I could/should use their help to start planning out what I want to do in college in terms of majors, activities, etc.
> 80,000 hours advising Sorry, what? Take an outcomes-focused approach maybe, and don't count intermediate goals that are easy to fool yourself with as a substitute. Things like "spreading awareness" or "having meetups" are worthless to their stated purposes if they don't translate into *actually effective altruism.* In the end this would probably result in working with a nonprofit charity with specific real-world goals. This is one of the major things that make various "rationalist" organizations akin to scams. Give money to MIRI or you're dooming the future! What have they actually done? Mostly spread awareness about why you should give them more money.
I definitely wouldn’t pay for, well, anything related to EA, since there’s no way it wouldn’t be a scam. I’m also aware that raising awareness is mostly bullshit. What I mean is engaging with groups that practice EA in a tangible way. A lot of the colleges I’m looking at have EA groups or fellowships that you can join which provide career guidance and other opportunities. I was wondering if they were worth joining, or if EA has also been taken over by the same right-wing/race-realist kinds of people that make up other rationalist communities.
My (limited) contact with the EA community suggests that it's not even close to that bad wrt racism. It's anecdotal but none of the EA people I have met have been any worse than the average American in that respect, and many are far better. The major worry I would have is dealing with too many capitalists. Is the only difference between an EA career advisor and a normal one that, after the EA person tells you to become an investment banker, they say, "But remember to donate some when you're rich!"? On the other hand, if the career advisor is suggesting the charities that you should work at to do the most good, that sounds amazing.
> The major worry I would have is dealing with too many capitalists. I’m with you 100% on this. You cannot make an effort to do the most good while also being uncritically supportive of capitalism. There is something to the idea of making a lot of money using the resources of a wealthy western nation and funneling it to poorer countries. But at the same time, none of the problems you’re trying to solve can go away without addressing the fact that they’re largely a result of the failures of capitalism. Earning to give certainly helps, but if it’s blind to the root cause of the problems it’s trying to solve, I don’t know how effective it can be. If it can help me get a job at a nonprofit that is effective at making an impact though, it might be worth it. I might try it out just to see how it goes.
>There is something to the idea of making a lot of money using the resources of a wealthy western nation and funneling it to poorer countries. [There isn’t as much to that idea as one might think, actually.](https://www.theguardian.com/global-development-professionals-network/2017/jan/14/aid-in-reverse-how-poor-countries-develop-rich-countries) Collectively, wealthier countries take out far more than they put in. Even on an individual level, odds are that, over the course of making the giant bucket of money you plan to deploy, you’ll have facilitated the extraction of more than you could possibly put back in.
Fair point. Though an EA would respond by saying that you should compare the two situations of 1. You taking the job and donating half your salary vs 2. Someone else taking the job and donating nothing. Since the other person probably won’t donate, by taking the job you’re still —according to EA— improving the world. I wish EA would focus more on economic imperialism though. For a community focused on solving problems, they don’t do a very good job of analyzing why those problems exist. [Case in point. ](https://www.reddit.com/r/EffectiveAltruism/comments/jc99k8/why_i_stan_elon_musk/?utm_source=share&utm_medium=mweb)
Of course they’re not. The entire premise of “effective altruism,” as (supposedly) distinguished from other charitable work, is that it’s *effective*; altruism and, by extension, the continuing need for it, is taken as a given. On a fundamental level, *EA is system-agnostic*, so it isn’t going to produce systemic critiques *by design*. Or think of it this way: if you build your brand of charity (and let’s be clear here: EA is, in the neutral, descriptive sense, a brand) on the perception that your charity is more “effective” than other kinds of charity, what incentive does that actually give you to end the need for charity altogether? If you did that, your brand would cease to be meaningful at all.
> so it isn’t going to produce systemic critiques by design. This is the important point here. I'm relatively convinced that the most important step to reduce suffering and increase wellbeing would be widespread adoption of market socialism. EA has zero resources to even evaluate this claim. I mean, to most EA folks, my claim is laughable, because it's either not implementable, unlikely to be implemented, or whatever. They also, as u/paperspectre says, are rather bad at looking at the roots of problems. Do we really think that if the areas of Africa suffering from the kinds of worm infections EA collects money against had a much better, fairer economy, and were not suffering from the aftermath and continuing effects of colonialism, our charity would be as needed for plenty of their projects? At least an effective altruist should consider this. This is kinda made worse for me when they invest so much brainpower into long-term stuff, but... somehow don't obviously think about long-term development of poor regions, and what, systemically, needs to be done.
Charity is only one form of altruism. I'm not "in" the EA community, but I'm aware of it, so I was surprised when you asserted that EA is a "brand" of charity (as opposed to e.g. a social movement or ethical stance). When I Google "effective altruism" the top hit says (at the top of the page): > In short, effective altruism is commonly viewed as being about the moral obligation to donate as much money as possible to evidence-backed global poverty charities, or other measurable ways of making a short-term impact. > In fact, effective altruism is not about *any* specific way of doing good. > Rather, the core idea is that some ways of contributing to the common good are far more effective than typical. In other words, ‘best’ is far better than ‘pretty good’, and seeking out the best will let you have far more impact. And Misconception #4 on that page is that EA ignores systemic change. So insofar as they are a "brand", they are in conscious opposition to their brand being what you say it is. Which answers the question of what happens in the unlikely event that the world changes within our lifetimes to no longer need charity: EA would turn their focus to other altruistic endeavours.
This is an important point, but one thing to keep in mind is who is meant by "you" here, *i.e.*, the group(s) of people responsible for perpetuating the systems that keep charity necessary. EA in a lot of ways is a product of economic and political systems that fail to solve problems. To be fair, much of the charity out of EA takes this into consideration. One heuristic William MacAskill has given for assessing whether a charity will be effective at solving a problem is to figure out if it's a problem that either the government or the private sector either can't or won't solve. That's already recognizing that the current system fails. EA as a brand has also been built mostly on charity, but its scope has dramatically expanded in the last several years. Much of the non-profit work done under the EA umbrella is that of research organizations very different from typical charity. More of the EA community has also embraced startups or social entrepreneurship as a means of doing good, as opposed to operating in the non-profit sector. These are parts of EA that could continue even if the need for conventional charity, *e.g.*, to end endemic poverty in the poorest parts of the world, went away. There is also the possibility EA as a brand could at some point highlight these alternatives to charity for whatever reason(s). None of that undoes the systems causing the problems that make all this altruism necessary in the first place. Some of that stuff may even play a contributing role in causing some of those problems. EA could do much more to acknowledge, consider, or do something about those issues. If they took those issues seriously enough to recognize what's necessary to resolve systemic problems, and pursued those ends, EA could be very different. One reason EA doesn't pursue those ends is its closeness to capital.
There is already one big multi-billion-dollar philanthropic foundation behind EA, and there are relationships to other multi-billion-dollar institutions, like Silicon Valley corporations. Much of EA also has connections to high society in the places where it's concentrated, and so will be more bourgeois because of that. It's awkward for EA to criticize the system that produces the wealth used to pay for most of what EA does. To be fair, the EA community has also come to focus more on direct systemic change. The huge foundation behind EA is called the Open Philanthropy Project. The philanthropists behind it are liberals. Being immersed in a sea of liberal capitalism contributes to some problems in EA, but the foundation's philanthropy focused on systemic change is generally progressive. None of that work could be called very 'anti-capitalist', at least by anyone who really understands anti-capitalism. It's often based on progressive or liberal economics and research in other social sciences. Work in EA, even for systemic change, typically doesn't focus on capitalism as a whole. EA's typical approaches to systemic change could best be described as reformist (in relation to capitalism). While it's worth criticizing EA in relation to problems caused by one or more broader systems it is part of, much of the EA community doesn't wield the power or influence to make progress on solving those problems. A big part of that is that EA barely does anything to build that influence on its own as a movement or as part of a broader coalition, which is largely caused by the problems in how they approach anything the least bit political. Yet ultimately, for a number of reasons, most of EA is focused on remedying harms caused by systemic problems as opposed to directly solving those problems. If that's still wrong compared to something else far better to do, the EA community is apparently at a loss for what that would be.
This is an intriguing point to me that I don't completely believe but I'm willing to entertain: "the EA community doesn't wield the power or influence to make progress." What should an individual do to make progress and what does it look like on a long time-scale? Also, how about doing that and funding EA causes at the same time?
Well, the point isn't as strong when it's taken out of context. "Much of the EA community", the operative word being "much", can't make progress on some problems. Again, when 90% of funding for everything in EA comes from a single, centralized source, it's that source that has most of the control over where money in EA goes. Much of the EA community is subject to the whims of the Open Philanthropy Project. They've made marginal progress on issues where theirs are some of the best efforts on Earth for those causes, like how clean-meat development and international animal welfare reforms sparked by EA are gradually but dramatically transforming animal agriculture in several countries. Yet because EA as a movement had enough money behind it, they thought they wouldn't need to shore up other ways to pursue solutions to the problems they prioritize. $12 billion spent in the most effective ways on the margin in sub-Saharan Africa to relieve diseases keeps communities healthy and alive long enough that they can focus on developing. Yet none of that will end endemic poverty there the way international trade or political reform will, and EA doesn't have as much influence in those arenas with regard to alleviating endemic global poverty. Assuming one takes AI alignment as an existential risk seriously, the EA community has enough roots in the Bay Area and Silicon Valley to play a significant role there. Yet Russia, China, Japan, and South Korea are pursuing advanced AI as well, and EA doesn't have any influence on international relations to slow down an AI arms race driven from outside the Western world. A dramatic increase in meat consumption in the growing economies of populous regions of the world could counteract all of the progress EA has ever made for alleviating the suffering of farm animals, and then some.
EA also tends to look at the world in a way *just short* of systemic enough to see threats to solving all these problems, like how a global pandemic or climate crisis exacerbating political tensions, both within and between countries, will make it far more difficult to solve any of those problems. Most in EA still think about each of these causes by sorting them into a vertical rank as opposed to thinking about them interdependently. Progress is being made on each of the issues EA is trying to solve, and it's ambiguous which ones will be solved when, why, and how. Yet EA doesn't have enough control to tell how its marginal impact relates to changes driven by bigger world systems. All the progress could be reversed, and the problems exacerbated, and EA exerts barely any influence to prevent that. EA seeks to become a globally transformative movement like communism or liberalism once were, and which are world systems today. Yet EA is so subject to the whims of those world systems that it could be subverted at any time. The centralized organization of EA is a single choke point. EA needs to get upstream instead of being downstream of the parts of systems creating or preventing solutions to the problems EA is trying to solve. Regarding both pursuing changes in other ways and funding EA causes, there are lots of individuals in EA who do that. Most of what others do in EA is work for an organization associated directly with effective altruism, or go work for an outside one according to individual specialization and advice from 80,000 Hours or something. If someone in EA has a strong preference or inclination towards one cause, I'd suggest they probably focus on that one. Otherwise, what the individual should do depends a lot on the individual. In what ways do you want to do good--not the cause, but the activities you do in the name of that cause? How does that fit in with your broader worldview?
What are your skills, interests, specialties, and connections that make it so you can do good in ways most others couldn't?
Thank you so much. This is such a high-effort response. I think it's fair to say that you've given me far more than any other stranger on the internet recently. Is there a forum where I could ask the crowd here very occasional serious questions, one that is not this subreddit, which I believe is for fun? There are many interesting criticisms of EA here but no obvious place to discuss them. I wonder, are you giving genuine but measured praise for EA in the first two paragraphs, for example on progress on animal welfare? The following is wishful thinking, but could one person not have an immense effect downstream of systems if they are a leader within their community on all sorts of things like sustainability, community involvement, and philanthropy? To be clear, I think I have pretty damn low impact on my friends' actions, but I think of myself as using low resources, working in a questionable sector, and giving a lot away. For now, I remain in the for-profit tech sector. I have the rest of my life to think, and it doesn't seem guaranteed to me that I can make a big impact on non-profits were I to move in the near future.
> This is such a high-effort response. I think it's fair to say that you've given me far more than any other stranger on the internet recently. You're welcome :) I'm glad you appreciate it. > Is there a forum where I could ask the crowd here very occasional serious questions, one that is not this subreddit, which I believe is for fun? There are many interesting criticisms of EA here but no obvious place to discuss them. This forum is for fun or casual discussion, though it's occasionally serious, such as criticisms of the rationality and similarly-minded communities when they're egregiously misleading. I'm not aware of other fora on Reddit for discussing interesting criticisms of EA, but I'm happy to chat with you about them through a different medium. Feel free to send me a private message to communicate by some other means. > I wonder, are you giving genuine but measured praise for EA in the first two paragraphs, for example on progress on animal welfare? Yeah, I'm part of the EA movement and I broadly support its efforts. I just think there is some cognitive dissonance in EA about how they focus on the marginal impacts of individuals or single organizations without formulating enough contingent strategies for a variety of global-systemic shifts. I should clarify that that's my opinion, and the typical model within the EA community of world affairs outside the EA community could be more accurate than I'm giving it credit for. > The following is wishful thinking, but could one person not have an immense effect downstream of systems if they are a leader within their community on all sorts of things like sustainability, community involvement, and philanthropy? Yes, I think another major issue in EA is that many individuals in the movement don't take enough initiative to try things that defy typical actions in EA. Efforts outside EA, related or similar to the methods used in EA or the causes EA focuses on, could be as or more effective than those in EA.
Some people earning to give in EA have advocated within the companies they work for to support evidence-based and/or high-impact charity, by encouraging others to give effectively or by shifting corporate CSR or employee donation-matching policy. Some have done it successfully. Examples of novel and successful attempts at interventions not prescribed by the status quo in EA can be found by searching the Effective Altruism Forum. > To be clear, I think I have pretty damn low impact on my friends' actions, but I think of myself as using low resources, working in a questionable sector, and giving a lot away. This is a challenge I face as well. I have a lower income than many people, including in EA, and less opportunity to increase my income. I've worked in jobs I've had ethical qualms about before, because it was the job I needed to make ends meet and it was difficult to find the breathing room to look for another job for a while. It seems like you may be doing the best you could have done so far, and the paths forward you're considering might be the best marginal options. EA existing as an ecosystem created a new space and opportunities for some individuals to do good in ways they otherwise would have no precedent for accomplishing. Again, though, EA exaggerates its impact, like how much money as a single kind of resource can accomplish absent a greater focus on, and investment in, other resources. Unfortunately, until there is another paradigm that poses clearly better alternatives for careers or whatnot for those in EA, I don't know what option would be better for them than some of the kinds of careers advised by 80,000 Hours. One thing is that my criticisms of EA are systemic. *I.e.*, they're meant to encourage either major shifts in EA at the highest level, or for those seeking to do good in the abstract not to think of EA as it currently exists as sufficient or the be-all-end-all for 'doing the most good.'
I'm not sure what alternative approaches would be better yet, but I'm trying to figure out a new perspective. I expect EA is conditionally necessary but probably insufficient as a coalition to achieve its own goals, and additional efforts from outside EA will need to arise to complement it. There are elements of the tech/IT sector that are questionable, but as a whole I don't perceive a problem with, *e.g.*, working in the tech department of a company not primarily focused on tech development. Corporations like Google, Facebook, Twitter and Amazon generate value, and more than just economic value, but they also pose serious threats, some dire, to various social goods (*e.g.*, democracy, social stability, human rights and civil liberties, public transparency and accountability regarding negative externalities), along with accelerated development of potentially dangerous AI systems and collusion/oligopoly/cartels/monopoly, both within and between countries. These are problems I've seen receive some public concern out of EA, but not enough. I think there are several factors behind that, some more sympathetic and credible than others, but altogether they are ultimately insufficient to justify equivocation on said issues:
1. That so many in EA earn a relatively high income for earning to give from these corporations.
2. The closeness of the EA community, and organizations therein, to these corporations, *e.g.*, through their collaborations on AI alignment.
3. Already having an elitist mindset that reinforces a culture of fear, re: publicly criticizing powerful individuals, groups or institutions, that already exists in Big Tech.
4. Being immersed in a subculture of Big Tech that encourages that culture of fear, even if EA didn't have as much of an elitist mindset normalizing it within EA.
5. A flawed and motivated cost-benefit analysis concluding that, as a whole, these corporations can be excused for unethical actions because of the net benefit they provide to society or the world.
> For now, I remain in the for-profit tech sector. I have the rest of my life to think, and it doesn't seem guaranteed to me that I can make a big impact on non-profits were I to move in the near future. That seems likely. There are job boards on 80,000 Hours and in other spaces in EA that highlight job opportunities, in both the for-profit and non-profit sectors, where tech workers can have a direct and high positive impact through their work without as many of the ethical downsides that come with some bigger corporations.
I read that post and it's basically saying Musk is doing a bunch of cool stuff that could also save the world. It's just that almost everything has to go right for his companies, and everything happening in the world around him needs to line up to give him an opportunity. That post doesn't state those as assumptions, but it's making them. It's also describing the value of everything Musk is doing as "insanely high" or something like that. That goes against the principle of EA trying to take a more scientific and effective approach to doing good. The most upvoted comment pointed out a lot of unacknowledged assumptions and problems with lots of what Musk has done. Yet neither that comment nor the original post acknowledged the problems of imperialism, like how Musk's companies acquire the cobalt and lithium for technology like batteries, or what those companies and Musk will do to get them. So there is no evidence there against the idea that lots of folks in EA in general, or at least on the EA subreddit, have a blind spot for imperialism.

I would say that despite the prevalence of bad ideas, effective altruism is by far the least bad thing we cover here. I would say it does quite a lot of good by directing more funds to third world poverty causes in ways that tend to avoid the counter-productive elements of other charities. Unfortunately, thanks to the rationalists, it also results in a ton of money being thrown into glorified tech think tanks due to overblown AI fears powered by groupthink and a chain of dodgy assumptions. I feel like these people would have blown their money on something else anyway, though.

Do you have a recommendation for how an individual should allocate their charity budget? EA global poverty only, for example, or something much different?
Personally I find GiveWell's recommendations to be pretty good. They're rigorously backed by evidence, or at least more so than almost anything else you can send your money to. There's tons of material on their site so you can decide for yourself.
On my own I would have considered that a strong candidate. However the community here seems to not favor following the EA community at every turn, only when sensible. I am trying to get a fuller picture for myself. Concretely, what might people here be donating to this year?

I don’t believe there’s anyone out there who disagrees with the notion that we should be effective in how we use our available resources to benefit others. The worst of it is how much effort is put into galaxy-brain takes which lead so-called effective altruists to conclusions which are anything but the stated goal of the movement, such that mosquito nets to fight malaria lose out to quasi-science-fiction doomsaying about the ever-imminent rise of a malevolent AI god. I can’t say if the worst of it is a ‘loud fringe’ sort of situation, but what I do come across in my limited slice of the internet is more often reflections on hypotheticals of existential risk and future persons than development of actionable solutions to concrete harms for living persons.

If you're a bad boy reddit gangster and you want to find out for yourself what Rationalist communities are about, go to their actual websites and read the posts ordered by new. I dare you! :P Observe and sample Internet communities for yourself, in date order, and discover them as they really are, and ignore the malicious storytelling, cherry-picking and disinfo peddled by the BPD English Literature drop-outs who get their rocks off on this subreddit.
'kay. Thanks for replying to my comment from 18 days ago with a sneer at /r/sneerclub.
To be clear I was sneering at /r/sneerclub , not you. Seriously, once I started sampling communities and looking at people I realized people aren't so bad as people say. It helped me spiritually.
NECROPOST: yeah, I did, and they’re fucking insane.

I have no particular view on EA, but tangentially, this is how I would have gotten the most out of college if I could do it over again:

  1. People fucking love to give college students advice, and most of it is either received wisdom or has more to do with the advice-giver themselves. You’re matriculating at a particularly strange moment in history, and the received wisdom will be even less relevant than usual. Take all advice that people give you about Your College Career (including mine) with a massive grain of salt. It’s your life and you’re the one who has to live in it.

  2. Continue to avoid becoming Terminally Online.

  3. College might be the last time (unless you go to graduate school) that you have easy access to a bunch of professors who kind of have to talk to you. Develop relationships with the ones who are working in areas that you might want to go into yourself.

  4. Leave yourself open to discovering new things about yourself: you might run into unexpected challenges in areas that previously came “naturally” to you, or find a knack for things you thought you weren’t good at or had no prior interest in.

  5. Most of the skills that have helped me in my working life (basic logical reasoning, public speaking, writing, editing, harder-to-summarize social skills) were developed as a byproduct of my high school and undergraduate academic work (and job experience, and just generally Being Alive); in retrospect, the specific content of that work almost didn’t matter. Hone your self-presentation and the way you communicate your ideas, both in writing and in speech.

  6. This is the one I’m having the hardest time articulating concisely, but I’ll try to sum it up as “find a weird niche”. Look at the people around you who seem very certain of their future paths in life (particularly if those are pretty traditional/accepted paths: med school, law school, management consulting, etc.) and try to distinguish yourself from your own “herd” in some way. Groups like “pre-law” will tend to have certain gaps in their knowledge and skill set, on average. You can get a lot of mileage out of being “the pre-law person who also knows a lot about data visualization” or “the poli sci major with an intimate knowledge of agricultural tech” or whatever. Knowing another language can be extremely helpful and is more difficult to pick up later.

  7. Other social spheres are essential: e.g. “Model UN nerd who only knows other Model UN nerds” is not a life I would wish upon anyone. Spend time getting to know people outside your major and your main extracurriculars. If your ultimate goal is to work at a nonprofit, it’ll help to have plenty of experience talking to different types of people, understanding their values and needs, and getting them to like and trust you.

  8. I absolutely did not follow this advice at your age but don’t drink too much, don’t do too many drugs, and get a good night’s sleep as often as you can.

I really appreciate you taking the time to write all this out. I’ll probably make a load of mistakes in college, and I probably won’t be able to maximize my utility as much as possible or whatever, but I’m gonna try to heed this advice as much as possible. Thank you.
lol as I was typing it I thought "this person seems to have a decent value system generally; maybe all this is too basic" make lots of mistakes! learn shit about yourself! be curious about other people! it is an extremely weird time to be alive and I truly wish you all the best

This is a problem I’ve been struggling with as well. I think there’s enough good stuff in EA that it’s worth participating in, while being open about your reservations with some of the associated fringe weirdness and trying to shift the center of gravity towards global poverty/animal rights and away from AI Woo etc. Also, try to bring in squishier factors when people start doing utilitarian calculations: “Will that course of action change you as a person¹? Would it work if we all did that? Where do you think the money ultimately comes from?” You don’t actually have to argue for virtue ethics or deontology or anything, because those factors can all be considered in utilitarian terms as long as you’re not afraid of considering ideas that don’t come with numbers attached.

  1. Also, consider that question about following the path I’m recommending, and be self-aware of it as you go forward.
I think this makes a lot of sense. EA has the potential to be really great, if it weren’t for the lack of focus on systemic issues and the overfocus on stuff like AI. If it can bring together people motivated by empathy, rather than the pseudo-intellectual bullshit that rationalism tends to attract, I can see it being really fulfilling to work in.

The kids are gonna be alright, imho.

Is effective altruism that shit Peter Singer (?) was on about back in the day? Not to malign the guy because it may not have been him. Where it’s like:

“Oh, people are dying and you want to help? Become a stockbroker and donate 60% of your earnings to the Gates foundation. That’s the most effective thing you can do.”

If that’s the one, I don’t find it as disgusting as certain rationalist undercurrents. Think neoliberalism vs. fascism.

yes, it's Singer
Ugh. Disappointing.
basically they reinvented a lot of EA from first principles, then found Singer and adopted him as their intellectual heft. He's right into EA and gives talks and so on. I mean, the basic idea is ok? But EA as it stands is a worked example of how there's no good idea that can't be turned into a fucking terrible idea if you just do it hard enough.
>there's no good idea that can't be turned into a fucking terrible idea if you just do it hard enough. To Motte an idea (verb). I didn't even particularly like the basic premise, but even so it's disappointing to hear it's been Motted into some bizarre anti-Basilisk fund/grift.
Effective Altruism as we know it was named by Yudkowsky. For him and his lot, EA has *always* meant funding AI grift. The stuff that isn't pants-on-head stupid was grafted on later.
Only stockbrokers dont actually create any value imho, otoh if you think ecological disaster is cause number one, speeding up wealth creation and circulation does a lot of damage, prob more than you can charity away. But yeah im silly nitpicking.
Yeah, I think underlying the idea is a complex of liberal beliefs that would be almost impossible to pick through. E.g. that it is an individual response (of the would-be stockbroker) that will change outcomes *without* changing the system that gives rise to them. Like how you're responsible for the environment when you reuse your bags at the supermarket. Don't look at BHP over there, sweetie, they're too big to fail. I would ask, what would a victim of this system in the periphery -- trapped in the same belief complex -- do? Presumably emigrate at any and all cost, then follow the stockbroker-charity example, or simply send money back home. What would that same victim do without the brain poisoning? Join a Maoist organisation in all likelihood and be rather more effective. The "effective" altruist would probably shake his head at this. Because another part of that complex is a belief that economic weapons/actions are acceptable, whereas political ones are reprehensible. Believing so makes for quite an ineffective altruist indeed.
> Join a Maoist organisation in all likelihood and be rather more effective. Why Maoist?
Seems to be the predominant tendency that threatens the state of things in peripheral countries in the past few decades. Not by accident either in my opinion, but it can simply be read as "socialist" in that post if you like.
Oh, sorry, I misread your parent comment. For some reason I thought it was people in the developed world joining Maoist organizations but if they're in the developing world that's different. I'm not convinced a Maoist or some other kind of socialist organization is the best way to go by default but under a lot of circumstances, it could be one of the better options.
EA as a paradigm has this massive blindspot. Too much individualism, and refusing to acknowledge the broken systems/ideologies that are causing the issues or preventing their resolution.
Fwiw 80000, at least, does not generally endorse "earning to give" for most people https://80000hours.org/2020/08/misconceptions-effective-altruism/#misconception-2-effective-altruism-is-entirely-about-donations-or-earning-to-give

How Effective Altruism works in practice:

  • Some charities are more effective than others, and you should donate to the more effective ones.

yeah, sounds obvious and sensible

  • As a first-worlder, you are basically rich, even if you don’t feel like it, and you therefore have an ethical obligation to contribute to those who aren’t - almost certainly more than you do now.

this is pretty sound reasoning actually, I can get behind this

  • Therefore, we can and should stack-rank every charitable initiative in the world according to an objective numerical scoring system,

wait what

  • and clearly the most cost-effective initiative possible for all of humanity is donating to fight the prospect of unfriendly artificial intelligence,,

what the

  • and oh look, there just happens to be a charity for that precise purpose right here! WHAT ARE THE ODDS,,,,

fuckin

“Effective Altruism is mostly about giving money to Yudkowsky” “Yes but we can’t actually say that out loud”

Which charity evaluator says fighting against future AI is the number one cause/charity to donate to?
There isn't a single organization in effective altruism that evaluates the AI stuff that way but a bunch of people in there just kind of try doing it occasionally.

If you’ve already decided you want to be altruistic, effective altruism seems like an effective way of going about it, as long as you steer clear of the kind of people who are naturally attracted to an ethical system that values the lack of normal human empathy.

But planning your entire career around it, i.e. making as much money as possible so you can donate more, seems like a bridge too far. I think that whole branch of EA exists mainly to make overpaid brogrammers retrospectively feel like they’re saving the world by becoming filthy rich, the latter part of which they were already trying to do anyway, now that their day jobs no longer give them that feeling. Frankly I think the higher you try to raise your income, the less possible it is to do so ethically, and you wouldn’t e.g. rob an orphanage in Switzerland to give 10% of the take to orphanages in Burundi.

Follow your passions and skills, not just lucre, and donate whatever you can spare plus a kidney.

> I had no idea that EA was related to LessWrong and the rationalist stuff.

FWIW I think it’s fairer to say EA came from Peter Singer, who is at least an actual philosopher, and the ratsphere merely rediscovered and colonized it. I suspect he doesn’t know them and wouldn’t like them. Of course there are things about him we can sneer at too.

Yeah I definitely don’t like the idea of earning to give. Maybe it’s selfishness and the EAers are right that it would be better at increasing utility, but I would definitely hate that life. > I think that whole branch of EA exists to make techbros feel like they’re saving the world by being filthy rich. That’s almost definitely it tbh.
Btw, what are the criticisms of Peter Singer? He seems better than most involved with EA (at least from what I’ve seen online).
singer is a hardcore utilitarian, so most criticisms of hardcore utilitarianism apply to singer. he’s said his fair share of inflammatory shit ([disabled infants should be euthanized](https://www.nj.com/mercer/2015/06/protesters_call_for_princeton_university_professor.html), [Hitler happened because we let stupid people vote](https://www.reddit.com/r/EffectiveAltruism/comments/jcsxb6/singer_would_hitler_have_got_in_power_in_the/?utm_source=share&utm_medium=ios_app&utm_name=iossmf), [sometimes animals deserve life more than people](https://www.google.com/amp/s/amp.theguardian.com/lifeandstyle/1999/nov/06/weekend.kevintoolis)) but it’s mostly just because he’s a prominent util proselytizer. not sure it’s specifically singer. important to note that utilitarianism is a wide field w many subdivisions and not all utilitarians would agree w singer, but singer has strong influence in the EA community
Wow, immensely bad takes. Hitler never even won a majority, don't people know this? Idiots on Twitter keep telling me to "vote out fascism" and "Hitler wouldn't have won if people voted for his opponent." How far has this ignorance spread? In any case, education doesn't change people's class interests, except in the sense that it sometimes lets them become more petty-bourgeois - but even that only applies to individuals and not to whole societies.
People seem terminally not in the know about how German and Dutch elections (and the resulting coalitions from those elections) work. I think the French and Belgians also have similar systems btw.
If you are able to spare the time: I don't fully understand the objection to a programmer such as myself. Let's say that my life plan significantly features, but is not dominated by, developing a career in the for-profit sector, possibly being overpaid by companies, looking for effective global poverty charities, and even learning about global economic unfairness from time to time. As I understand your objection (and I acknowledge I may not understand it), I should consider paths outside the for-profit sector, but that means giving up the generally higher earnings available there.

ascended braingod answer: utilitarianism is the infernal logic of capitalism and anything that preserves utilitarianism justifies capitalism so yes

meme answer: yes yud sux lol

actual non-nuanced answer: just don’t be racist and ur fine

nuanced answer: ur fine, but it’s worth considering why/how an ostensibly selfless, charitable, altruistic movement breeds so many repugnant ideas. is it just how the movement’s managed? are you just seeing a gross loud very online contingent? or does EA somehow justify/lead to those ideas? possibly, possibly not. it’s hard to argue against the immediate benefits of unconditional cash transfers. but again… it’s genuinely really hard to say and you should think about it before you get in too deep.

You make some very good points. I’m surprised that EA doesn’t attract a completely different group of people. You’d think a movement that spends so much effort helping the developing world would be more critical of capitalism, imperialism, and neoliberalism...but it just isn’t. I guess the kinds of people who are obsessed with being hyper-rational are also the same kinds of people that are likely to fall down those 4chan rabbit holes that claim to hold some secret about the world that only the most intelligent people can figure out. Maybe too much focus on the *effective* and not enough focus on the *altruism*. I hope it turns out to be one of those things where the online portion of it makes it seem worse than it is. I guess I’ll have to go out and interact with it in real life to see if that’s the case.
Because its purpose is to help capitalists spend some of their money and feel good about the "impact" that they make. The idea of "effective" altruism makes most sense when it's talking about a narrowly-targeted charity, but in that scenario more money almost always makes you more effective, so if someone shows up with a lot of money it's ineffective to turn him away for reasons of principle. That leads to the erosion of principle and the domination of guys with a lot of money to give away. Those guys all got their money from exploiting people so that gives the charity a massive blindspot when it comes to that kind of exploitation. This criticism applies to capitalist charity generally but I think EA is a particularly pure case. Plus, are they actually *helping* the developing world? Or are they setting up channels that control how the developing world gets resources? How come the developing world doesn't have enough resources to develop on its own? Because there just aren't resources in Africa? Or because they all got shipped out? Charity doesn't solve political problems. In fact charities exist within political and economic frameworks which they usually have to stay congruent to and reproduce in order to survive as organizations. Take a look at /r/nonprofitcritical before you make any career moves.
>I’m surprised that EA doesn’t attract a completely different group of people. To be honest, I think it does. I only dunk my toes in the EA community but I get the impression there's a big range of people involved with it, including many who are very critical of capitalism and some who actually want to overthrow capitalism. In the long term, if we want to save the world, capitalism's got to go (or change until it's almost unrecognisable from its present and all past forms) but if I'm asking the question "where can I donate money to help people as much as possible", that's more likely to be to the AMF or Cool Earth than the Socialist Workers' Party.
Aside from the point about the organization of EA most of the questions you ask might be better asked about utilitarianism in general.
yeah very true

Here’s an idea, if you haven’t heard this one already. Once you graduate from college, give this a try for a few months: https://labornotes.org/2014/02/organizers-worth-their-salt

I think one of the most valuable skills a lefty can develop is the ability to talk to strangers across class and race lines. And it’s important to get out of your bubble as much as possible. Plus, you’ll get the satisfaction of directly attacking a structural problem.

I’ve heard of SALTing, always thought it sounded really cool. If I ever get the chance to do it, I might try it out.

By and large, it is a way for bourgeois individuals to soothe their egos concerning their decision to prioritize the accumulation of wealth over personal sacrifice in the cause of social change.

The real problem is there is no valid metric of morality or good. This is problematic, as Singer wants to base Effective Altruism on a quantitative ethical calculus; Kant, in contrast, asserts ethical decisions are purely binary.

The fact Effective Altruism lacks any structural or systematic criticism should be a big red flag. I’m trying to say that donating 10-25% of your income to charitable causes doesn’t make you a saint. Simultaneously, there are more ways to do good than organizing a protracted people’s war to foment revolution.

You don’t need a rational framework to be a decent moral person.

Here’s an exercise for the would-be effective altruist.

  1. Read this essay by William MacAskill, one of the founders and big names of EA, purporting to debunk criticisms of Mark Zuckerberg’s philanthropy LLC.

  2. Read the links MacAskill purports to debunk, and compare them to what he says in his essay. Ask yourself: What are the main points of each of these links? How do they compare to MacAskill’s summaries? If you are looking to engage with quality arguments, why would you look at a random 4 sentence post on the subreddit r/self? What reason might MacAskill have to use shoddy argumentation to talk up what a good person Zuckerberg is?

  3. Read some news articles about what Zuckerberg and Facebook have been up to in the half-decade since, and compare to what MacAskill says about Zuckerberg in this essay.

I think this encapsulates the problem with a lot of EA really well. Looking at donations purely in terms of their immediate outcomes ignores every systemic cause involved in creating the situation. Zuckerberg’s donations may do good, but that doesn’t excuse all the harm that was caused in the process of him acquiring so much wealth, nor does it excuse the existence of a system that allows that sort of wealth accumulation, especially since that same system is responsible for so many of the problems that EA is trying to solve in the first place.

[deleted]

>One is the notion that it is possible to choose a field to enter, a ‘cause area’ if you will, on the basis that you can do ‘more good’. Isn’t that reasonable though? Someone with great quantitative skills and terrible social skills and someone with great writing skills but terrible quantitative skills would have two different ‘optimum’ career paths. Though I guess that’s different than choosing a cause area, but I don’t think it’s too unreasonable to assume that what someone is good at and what someone likes doing are fairly similar. Regardless, it’s much easier to get people to donate time and money if it’s something they care about, which is better than nothing. You’re right that effects are hard to quantify, but I think it is possible to compare action ‘a’ and action ‘b’ on the basis of similar metrics, i.e. lives saved, incomes boosted, vaccines distributed, etc. My biggest problem with it, really, is that it usually fails to question why problems exist. It’s easier for capitalists to justify their actions by saying it allows them to donate more, but the system that allows them to make so much money is the same system that is often responsible for the conditions of developing countries. It’s hard to imagine EA ‘global priorities research’ discussing the best ways for developing countries to combat neoliberalism.
>It’s easier for capitalists to justify their actions by saying it allows them to donate more, but the system that allows them to make so much money is the same system that is often responsible for the conditions of developing countries. This is the crux of it for me too. I'm sure rationalist nonsense can make this "EA" stuff much worse, but it has always been a dog's breakfast.
Amusingly enough, I agree (at least in principle) with this "join the field you can do more on" because it runs counter to the endemic Engineer's Disease that plagues the rationalist community. Of course they often Dunning-Kruger their way to areas far outside their domain anyway, but the idea itself is solid, especially for a self-aware mediocre STEMlord such as myself.

I have nothing constructive to add or advice to give, but I want to ask what your opinion is of this

Specifically the section about the hedonic rat farm

Lol, I’ve never seen the rats on heroin thing proposed seriously before. I think most EAs aren’t inclined to agree with that line of thinking though, either from a negative utilitarian point of view where the priority should be to decrease current suffering, or from a kind of anthropocentrism that values human/intelligent life over animals like rats.
>a kind of anthropocentrism that values human/intelligent life over animals like rats You will get a lot less of that in the EA community than you might expect. >a negative utilitarian point of view where the priority should be to decrease current suffering There are of course EAs proposing that. But the problem is, that if you seriously follow through the logic of that you end up in anti-environmentalism: https://reducing-suffering.org/habitat-loss-not-preservation-generally-reduces-wild-animal-suffering/

The EA stuff and the stuff talked about here is pretty far apart imho, sure there are some cray folks (every org has them) and the miri (or was that renamed recently) is a bit weird, but there are no EA people advocating for the 14 words so we can save the planet as far as I know.

Extreme EA is silly or weird. Extreme ‘rationalism’ (im assuming rationalists would deny this is part of it) leads to neo-nazis, HBD, NRx, PUAs etc.

The ideas behind EA arent that bad, even if they arent that effective.

E: and dont forget sneerclub is basically about the online presence of all these things; offline it is prob all different, and offline it is a lot easier to determine if somebody is actually racist/sexist or just made a stupid remark, and to see if people actually understand what you mean (via voice tone/body reactions etc). Esp as people tend to not use a lot of emotions in serious text :shrug emoticon: (interactions between non-neurotypicals and neurotypicals make this a little bit harder of course; it also takes up more time, and is more one on one).

EA is a chimera, just one more fruitless attempt to quantify the unquantifiable. It would be harmlessly stupid (albeit annoyingly presumptuous—as if they were the first people to ever think of applying reason and evidence to altruism!) if it were confined to the clique of navel-gazing philosophers who launched it in what one suspects was a half-baked effort to morally justify their own lives of cushy irrelevance (“What if the most important work of all were puzzling out what the most important work is?!”).

Instead, it’s become downright pernicious thanks to the efforts of cretins like Yudkowsky and Bostrom, who’ve recalibrated that self-justifying navel-gazing to their puerile interests, come up with “The most important way for humanity to spend its resources is a preemptive strike against Skynet,” and actually managed to persuade a bunch of gormless zillionaires to cough up millions for that notional purpose. (And meanwhile those selfsame zillionaires move heaven and earth to shelter the far greater remainders of their fortunes from taxes, and Yudkowsky et al. just go on whistling Dixie, and the tide rises, and the rainforests burn…)

[deleted]

[deleted]
[deleted]
[deleted]

EAs tend, most often, to be moderately left-wing, except for the fact that they love billionaires.

You have an overlap between EA and LessWrong, and an overlap between LessWrong and SSC, and an overlap between SSC and race realists, but in the end it really doesn’t add up to an overlap between EA and race realists.

If you engage with the EA community you will run into people willing to entertain Rokos Basilisk. You will most likely not find any endorsement of racism. You will find silicon valley libertarians telling you that the most effective thing to do is to develop new cryptocurrencies. You will most likely not find any tradcons.

ea is a career? how does that work?